For those who don’t want to read the new ‘Robert Galbraith’ serial killer tale
For those who don’t want to read the new ‘Robert Galbraith’ serial killer tale Why are people complaining about J. K. Rowling’s new book? J. K. Rowling, the author of the Harry Potter series, has recently made a series of anti-transgender statements. I won’t repeat them here, but you can look them up if you like. They are on her website and her Twitter account. As a result, a large number of transgender people and their allies have expressed a collective desire to stop consuming her work (or at least to avoid paying for it or otherwise to refrain from amplifying or encouraging her as an author). Many people have fond memories of Harry Potter, and they do not wish to give up this fantasy world that meant a great deal to them. I won’t start a fight about that; I was never a Harry Potter fan, so I cannot “give up” something that I never had, and I don’t feel I can tell others exactly how to part with something they care about. That process might be personal, and it might play out differently for different people. Nonetheless, I can make a general statement that readers should feel warmly and gently invited to avoid supporting a billionaire author who has recently chosen to use her enormous platform to denigrate transgender people. While I never had much interest in J. K. Rowling’s work and have even less now, I had a reason to pick up her latest book, and I have a reason to tell you about it. Troubled Blood was released yesterday (September 15, 2020) under J. K. Rowling’s pen name, Robert Galbraith. It’s the fifth novel in a series featuring the fictional detective Cormoran Strike. This instalment is 944 pages. The LGBT+ publication Pink News announced it as “a cis male serial killer who dresses as a woman to kill his cis female victims.” (“Cis” is an abbreviation for “cisgender,” meaning “not transgender.”) Here’s why I care. In 2018, I published Painting Dragons, an examination of the “eunuch villain” trope. A eunuch villain is not the same as a cross-dressing villain, but there may be some overlap in the Venn diagram; to put it another way, at least in the fantasy-land of metaphors, the concepts are adjacent. There is a broader problem of “queer villains,” and, here, I’m talking about the sort of queerness that has to do more with gender than with sexuality. Since I’ve positioned myself as a person who is knowledgable about this literary trope, I feel it is my responsibility to weigh in on Troubled Blood. I read the 944 pages on the day it came out. Because, as a transgender person myself, I have no motivation to preserve or heighten the suspense of this particular author’s book, this essay includes some detail from the end of the book. I don’t think of it as a spoiler. How can I spoil something that the transgender community has made a collective decision not to enjoy? It is, rather, simply information. I am informing. If you don’t want to read it, but you’re curious what’s in it, let me tell you about it. What happens in ‘Troubled Blood’ A serial killer, Dennis Creed, began his murder career in England in 1968 when he was in his early thirties. He rented a permanent room in a boarding house on Liverpool Road near Paradise Park. This is where he kept the women he abducted. Now 77 — the novel is set in 2014 — he has been in jails and psychiatric facilities for decades. Some unsolved crimes seem to point to him. 
The detective finds photographs of Creed “at various ages, from pretty, curly-haired blond toddler all the way through to the police mugshot of a slender man with a weak, sensual mouth and large, square glasses.” In Chapter 53, we are told: “Dennis Creed had been a meticulous planner, a genius of misdirection in his neat little white van, dressed in the pink coat he’d stolen from Vi Cooper, and sometimes wearing a wig that, from a distance, to a drunk victim, gave his hazy form a feminine appearance just long enough for his large hands to close over a gasping mouth.” Having confused his victims — by drugging them or by appearing from a distance to be a trustworthy female — Creed drove them to the boarding house, chained them to the radiator in the basement, physically tortured them for months in especially sadistic ways, and eventually killed and dismembered them. He did this multiple times. Police knew there was an “Essex Butcher” but didn’t identify Dennis Creed until 1976. Relying heavily on information in a 1985 true-crime book The Demon of Paradise Park devoted to Creed’s crimes (which of course exists only within this novel), detectives reopen the 1974 disappearance of Dr. Margot Bamborough. No one had ever found her body nor convincingly tied her murder to Creed. The detective goes to the psychiatric facility to interview Creed, who has a “working-class, East London accent” and, by then, a “triple chin.” He provides information about one of his prior victims. (At the end of the nearly thousand-page book: Yes, they find Bamborough’s body. No, her killer wasn’t Creed.) Within the Dr. Bamborough case, there’s another theme of gender confusion: A mysterious patient at the clinic the day the doctor disappeared. The patient’s name had been written in the receptionist’s log simply as “Theo question mark.” One of the doctors remembered seeing this person and assumed Theo was a man. The receptionist insisted otherwise: “She was broad-shouldered, I noticed that when she came to the desk, but she was definitely a woman.” The detectives are interested in finding and questioning Theo, whoever he or she is, so they repeatedly bring up the mystery of Theo’s gender. Oddly — remember, this case is reopened forty years later, in 2014 in the novel’s fictional world, and the novel is being published in 2020— no character ever floats the idea that Theo might have been a transgender woman. In fact, the prefix “trans-”, applied to gender (as in “transgender,” “transsexual,” “transvestite,” or simply “trans”), appears nowhere in this book. Theo’s possible gender is never discussed in this light. Nor is Creed’s crossdressing as part of his misogynistic violence ever analyzed psychologically. So those are the relevant parts of the plot in J. K. Rowling’s new book. Is it original? No. In 1937 in New York City, amid public panic about murderous rapists, the magazine Inside Detective warned that one killer on the run “MAY BE MASQUERADING IN FEMALE ATTIRE!” Place the emphasis on the “may be,” please, because he was not. (You can read more about this true case in Harold Schechter’s book The Mad Sculptor.) Within fiction, you can find the “crossdressing killer” motif in Psycho (a 1959 novel, then a film), Dressed to Kill (a 1980 film), and Silence of the Lambs (a 1988 novel, then a film). If you’d like more information on these cinematic tropes, I highly recommend the Netflix documentary Disclosure, released earlier this year. Is it weird? 
…that a woman author who claims to worry that transgender women threaten cisgender women’s safety should devote her career to (a) writing explicit scenes in which a man tortures women, not to inform readers but to entertain them, and that she should do so (b) under a masculine pen name? Yes. That is very, very weird. Is it transphobic? Yes. Considering the book’s sins in isolation — that is, if I had read the text without knowing who wrote it — I’d say its sins are relatively mild. Regarding the serial killer, Dennis Creed, the crossdressing element could have been explained better to make it more than just a replay of old horror movies. His crossdressing was just a deliberate ruse to lure his victims; nothing more is ever done with that information. Regarding the mysterious visitor to the doctor’s office, Theo, the detectives make themselves look foolish in failing to consider the possibility that Theo is transgender. There’s no reason to exclude the word “transgender” from the novel. The author was not struggling to fit a word limit. The topic could have been better addressed: distinguish concepts of clothing and identity, acknowledge that transgender people generally aren’t violent, and exculpate queer people at the end. Add another page or ten and deal with it. If I didn’t know who the author was, I’d say these were sins of omission and of ignorance. But the book can’t be considered in isolation from its author. This is a billionaire author, famous for writing Harry Potter, who has, just within the past year, assumed the mantle of anti-transgender rants. She absolutely knew what she was doing in this novel. Her framing is intentional. She wants to scare people about transgender women, not only in fiction, but also in real life. We know from life context that this book is serving a larger agenda. Nick Cohen, a columnist for the Spectator who read an early review copy of Troubled Blood, wrote on the book’s release day that “transvestism barely features. When it does, nothing is made of the fact that the killer wears a wig and a woman’s coat…” But that’s exactly the problem. Why does Rowling mention it at all, if she intends to make “nothing” of it? Especially when she’s been criticized all year for expressing her anti-transgender viewpoints? If she cared how this was received and interpreted, she could have made a bigger effort. If anything, I imagine she is happy to leverage this novel to deliberately capitalize on the publicity she gets from repeatedly offending transgender people. Readers are taught to consider a work as a whole and not complain excessively about a relatively tiny detail. Teachers want us demonstrate that we’ve actually read the entire book; meanwhile, living authors and marketers generally plead with readers to “be fair” (and, ideally, generous) to their personality and product. But I am not in school anymore, nor do I have a motivation to be generous to a billionaire whose new brand is slandering my community. I wonder if J. K. Rowling wrote 944 pages with the intention of minimizing the passage about crossdressing so that her defenders can object that her book, as a whole, isn’t about that. They would be correct; the book, as a whole, is not about the villain putting on a wig. But part of the book is about that. The transphobic part.
https://medium.com/books-are-our-superpower/robert-galbraith-serial-killer-jk-rowling-transphobic-5b79031cae6e
['Tucker Lieberman']
2020-09-16 18:43:01.762000+00:00
['Reading', 'Books', 'LGBTQ', 'Robert Galbraith', 'Jk Rowling']
UX … it’s more than just graphic design
UX … it’s more than just graphic design Every specialist involved in designing UX is a UX designer. A content strategist sits at a table with a stack of user personas, drawing bubbles on a page, mapping the information a web user is going to need, how they’re going to use it, and in what order. A graphic designer stands at their desk, drafting content blocks on a wireframe, anticipating the needs of the user who will be visiting that page. An interaction designer sits in traffic on their way home, thinking about what a button should do when the user clicks it, and what type of user action should make the email signup form unfold before them on the page. All these specialists are helping to design a user experience. And while it’s easy to suppose that “design” is simply a shorthand for “graphic design,” in the case of UX, it’s so much more. What’s user experience? Well — in addition to being a buzzword — UX is also “an important, strategically relevant, multidisciplinary field for solving very difficult problems at the intersection of technology, spaces, the Internet and people.” (So says Trip O’Dell, product design manager at Amazon.) Literally defined, UX is a person’s perceptions and responses from the use of a product, system, or service. That’s how the International Organization for Standardization puts it. In less stuffy speech, user experience is “how you feel about every interaction you have with what’s in front of you in the moment you’re using it.” User Testing Blog followed that latter definition with several worthy questions: Can you use it? Can you find it? Does it serve a need that you have? Do you want to use it? Do you find it valuable? Do you trust it? Is it accessible to you? These questions comprise a good litmus test for UX on the Internet. When creating a website, you’re aiming for “yes” all the way down the line. What’s UX design? According to Wikipedia, user experience design is “the process of enhancing user satisfaction with a product by improving the usability, accessibility, and pleasure provided in the interaction with the product.” Make it fantastic, in other words. Do everything you can to wow your user on all those litmus questions above. Which brings us to the point UX design is not a synonym for graphic design for the web. While it’s easy to assume that hey, design is design, these animals are pretty different. And in this case, that difference is pretty crucial. A graphic designer plays an important role in UX design — but there are other roles, no less important. Truth: you can’t have peerless UX without a disarmingly attractive, elegantly simple, self-explanatory visual design. Equally true, you can’t have great UX without an architecture that’s sensitive to a user’s needs, structuring information in a logical, comprehensible way. Or without page layouts (wireframes) that offer the right content, in the right place, so intuitively that a user doesn’t even have to think about what they came for, because they’re already doing it. Or without on-point messaging that appeals to the user’s immediate practical priorities and underlying emotional needs in a deeply compelling way. Or without on-page elements — breadcrumbs, for example — that support the experience by making the website effortlessly navigable. Or user testing to catch hangups and refine the design. Or best-practice web development to put the site on its feet and get it rolling. 
In short, there’s a difference between designing a visual user interface (UI design) and designing every aspect of a multi-dimensional experience (UX design). Here’s Kyla Tom, lead graphic designer at Madison Ave. Collective, on the big picture: “Web design … requires content development from individuals with editorial expertise, a graphic designer to really dig into the final UI design and create iconography, an interaction designer who knows exactly how smooth actions and transitions need to be, and a back-end as well as front-end developer to maintain the site and bring everything to life on screen.” In short, UX is teamwork. UX designers come in many shapes Because UX design isn’t the sole purview of any one individual — some mythical being who’s able to handle it all solo — it’s worth thinking about the various specialists involved in designing an excellent user experience, and acknowledging their role as UX designers. The magic happens at the intersection of several very different, very vital skill sets: Information architecture Content strategy Wireframing Graphic design Copywriting User interaction Web development And it’s more than the sum of the parts. UX is strategic. It’s iterative. It’s multidisciplinary. UX design is what happens when content, graphic design, and development click. It’s the satisfaction you feel when, as a user, you land on a website and your needs are answered before you even have to ask. That’s no buzzword. That’s an ideal worth striving for. So, when asked what UX design is, don’t fill in the bubble next to “graphic design for the web.” Remember that on the multiple choice test, the correct answer for UX is: all of the above.
https://medium.com/madison-ave-collective/ux-its-more-than-just-graphic-design-45e894517fbc
['Elisabeth Mccumber']
2018-02-20 20:52:43.573000+00:00
['UX', 'Design', 'UI Design', 'UX Design', 'Ui Ux Design']
Use Entities For Watson Assistant Node Conditions
When evaluating “non-intent” user responses in Watson Assistant (WA), try to use entities instead of evaluating the contents of “input.text”. Entities are both reusable and not case sensitive, meaning you will get cleaner code. Using “input.text” in WA is a great way to capture and save the input into a context variable for later use or for determining the length of what was said, but for dialog node conditions, it can bypass some of WA’s capabilities and become a maintenance nightmare. For example, let’s say the user is asked “Would you like to receive your statement by mail or fax?”. If you use “input.text” to test the user response for the value of “mail”, you will miss common variations. The condition (input.text == "Mail") || (input.text == "mail") doesn’t capture all case variations. The condition (input.text.toLowerCase() == "mail") handles case sensitivity, but would not handle an utterance like “send it by mail”. In speech applications, mis-transcriptions of homophones are also possible: neither of the conditions above would work if the utterance came to WA as “male”. Avoid these issues by setting up entities to capture key items in the utterance and configuring the node conditions to look for those entities. Create an entity (screenshot: WA Entities — My entities). Configure the dialog (screenshot: WA Dialog). Using @deliveryPreference:mail as the condition tests for the occurrence of “mail” in the utterance, captures all synonyms of “mail” configured for the entity (e.g., “male”), is case insensitive, and ignores any additional words in the utterance. There are many uses for “input.text”, and powerful string methods are available for evaluating the object. However, when configuring node conditions, it’s good practice to use entities to simplify and organize your WA design. We have built a workspace analyzer that detects “input.text” conditions at https://github.com/cognitive-catalyst/WA-Testing-Tool/. Download the tool and navigate to the ‘validate_workspace’ section. This will help you quickly discover these conditions and others that you may wish to improve. Find more Watson Assistant Best Practices at https://medium.com/ibm-watson/best-practices-for-building-and-maintaining-a-chatbot-a8b78f0b1b72. For help implementing these practices, reach out to IBM Data and AI Expert Labs and Learning.
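To illustrate the brittleness of exact-text matching in a language-agnostic way (this is not Watson Assistant code; the entity name and synonym list below are hypothetical), here is a minimal Python sketch contrasting an input.text-style comparison with entity-style synonym matching:

# Minimal sketch (not Watson Assistant code): contrasts exact string matching
# with entity-style matching. The entity name and synonyms are hypothetical.
from typing import Optional

DELIVERY_PREFERENCE = {
    "mail": {"mail", "post", "male"},  # "male" catches a speech mis-transcription
    "fax": {"fax", "facsimile"},
}

def exact_match(utterance: str) -> bool:
    # Mirrors a condition like (input.text == "mail"): brittle.
    return utterance == "mail"

def entity_match(utterance: str) -> Optional[str]:
    # Mirrors a condition like @deliveryPreference:mail: case-insensitive,
    # synonym-aware, and tolerant of extra words in the utterance.
    tokens = {t.strip(".,!?").lower() for t in utterance.split()}
    for value, synonyms in DELIVERY_PREFERENCE.items():
        if tokens & synonyms:
            return value
    return None

if __name__ == "__main__":
    for text in ["Mail", "send it by mail", "male please"]:
        print(f"{text!r}: exact={exact_match(text)}, entity={entity_match(text)}")

The synonym-aware lookup is essentially what an entity condition like @deliveryPreference:mail gives you for free inside WA.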
https://medium.com/ibm-data-ai/use-entities-for-watson-assistant-node-conditions-4cc33b2f25ba
['Leo Mazzoli']
2020-02-05 18:58:43.253000+00:00
['Watson Assistant', 'Tutorial', 'NLP', 'Artificial Intelligence', 'Chatbots']
Handling asynchronous errors in Scala at Hootsuite
Introduction Every day Hootsuite makes hundreds of thousands of API calls and processes millions of events from various social networks. Our microservice architecture, and a handful of asynchronous servers with efficient error handling, make this possible. Let’s take a look at how the Scala servers deal with errors. Different types of error handling in Scala First, let’s see what kinds of error handling mechanisms exist in Scala. Exceptions Unlike Java, all exceptions in Scala are unchecked. We need to write a partial function in order to catch one explicitly. It is important to make sure that we are catching only what we want to catch. For example, use scala.util.control.NonFatal to catch the normal errors only. // Example code try { dbQuery() } catch { case NonFatal(e) => handleErrors(e) } If we replace NonFatal(e) with _, the block will catch every single exception, including JVM errors such as java.lang.OutOfMemoryError. Options Programming in Java often leads to abuse of null to represent an absent optional value, which causes many nasty NullPointerExceptions. Scala offers a container type named Option to get rid of the usage of null. An Option[T] instance may or may not contain an instance of T. If an Option[T] object contains a present value of T, then it is a Some[T] instance. If it contains an absent value of T, then it is the None object. // Example code val maybeProfileId: Option[String] = request.body.profileId maybeProfileId match { case None => MissingArgumentsError("ProfileId is required") case Some(profileId) => updateProfileId(profileId) } Note that Some(null) is still possible in Scala, and it is potentially a very nasty bug. When we have code that returns null, it is best to wrap it in Option(). Try Unlike Option, Try can be used to handle specific exceptions more explicitly. Try[T] represents a computation that results in either a wrapped value of type T (Success[T]) when it is successful, or a wrapped Throwable (Failure[T]) when it is unsuccessful. If you know that a computation may result in an error, you can simply use Try[T] as the return type of the function. This allows the clients of the function to explicitly deal with the possibility of an error. // Example code Try(portString.toInt) match { case Success(port) => new ServerAddress(hostName, port) case Failure(_) => throw new ParseException(portString) } Either Either is a more involved but more expressive way to handle errors; we can create a custom algebraic data type to structure and maintain the errors. Either takes two type parameters; an Either[L, R] instance can contain either an instance of L or an instance of R. The Either type has two sub-types, Left and Right. If an Either[L, R] object contains an instance of L, then it is a Left[L] instance, and vice versa. For error handling, by convention Left represents failure and Right represents success. It’s perfect for dealing with expected external failures such as parsing or validation. // Example code trait ApiError { val message: String } object ApiError { case object MissingProfileError extends ApiError { override val message: String = "Missing profile" } } def getProfileResult( response: Either[ApiError, ProfileResponse] ): Result = response match { case Right(profileResponse) => Ok(Json.toJson(profileResponse)) case Left(ApiError.MissingProfileError) => NotFound(ApiError.MissingProfileError.message) } Asynchronous usage We have looked at various methods for handling errors, but how will they be used in multi-threaded environments?
Future with failure Scala has another container type called Future[T], representing a computation that is supposed to complete and return a value of type T eventually. If the process fails or times out, the Future will contain an error instead. // Example code val hasPermission: Future[Boolean] = permission match { case "canManageGroup" => memberId match { case Some(memberId) => canManageGroup(memberId) case _ => Future.failed(BadRequestException(MissingParams)) } } Future without failure If we review the example code above, one improvement we can make is to not raise an exception for a missing argument. To handle the error in a more controlled, self-contained way, we can combine the usage of Future and Either. // Example code val hasPermission: Future[Either[PermissionError, Boolean]] = permission match { case "canManageGroup" => memberId match { case Some(memberId) => canManageGroup(memberId).map(_.asRight) case _ => Future.successful(PermissionError(MissingParams).asLeft) } } Simplify Future[Either[L, R]] with Cats EitherT While it is a good practice to handle errors or perform validation asynchronously using Future and Either, adding chains of operations such as (flat)mapping and pattern matching on the containers can require a lot of boilerplate. EitherT can be used to remove the hassle. EitherT[F[_], A, B] is a lightweight wrapper for F[Either[A, B]]. In our case, Future[Either[L, R]] can be transformed into EitherT[Future, L, R], which gets rid of the extra layer between Future and Either. // Example code def updateFirstName(name: String): Future[Either[DataError, UpdateResult]] = ??? def updateLastName(name: String): Future[Either[DataError, UpdateResult]] = ??? def updateFirstAndLastName(firstName: String, lastName: String): Future[Either[DataError, UpdateResult]] = updateFirstName(firstName) flatMap { firstEither => updateLastName(lastName) flatMap { secondEither => (firstEither, secondEither) match { case (Right(res1), Right(res2)) => sumResult(res1, res2) case (Left(err), _) => Future.successful(Left(err)) case (_, Left(err)) => Future.successful(Left(err)) } } } The function can be re-written using EitherT as: // Example code def updateFirstAndLastName(firstName: String, lastName: String): EitherT[Future, DataError, UpdateResult] = for { a <- EitherT(updateFirstName(firstName)) b <- EitherT(updateLastName(lastName)) result <- EitherT(sumResult(a, b)) } yield result Conclusion Most Scala services at Hootsuite use all of the error handling patterns mentioned above in appropriate situations. Either is widely used to gracefully control business errors, Try filters expected failures more explicitly, and Option is seen in a lot of places where the value can be absent. The combination of Future and Either is definitely the most prominent, but it can make the code quite noisy due to the double wrapping of objects. This problem is solved by adopting EitherT, the monad transformer from the Cats library. It allows us to create clean and readable but powerful asynchronous code.
https://medium.com/hootsuite-engineering/handling-asynchronous-errors-in-scala-at-hootsuite-935f3d0461af
['Brian Pak']
2018-08-09 23:20:27.078000+00:00
['Microservices', 'Programming', 'Error Handling', 'Co Op', 'Scala']
Easy Text Annotation in a Jupyter Notebook
Easy Text Annotation in a Jupyter Notebook How to use the tortus annotation tool Image by author At the heart of any sentiment analysis project is a good set of labeled data. Pre-labeled datasets can be found on various sites all over the internet. But… What if you have come up with a custom dataset that has no labels? What if you have to provide those labels before proceeding with your project? What if you are not willing to pay to outsource the task of labeling? I was recently faced with this very issue while retrieving text data from the Twitter Streaming API for a sentiment analysis project. I quickly discovered that annotating the data myself would be a painful task without a good tool. This was the inspiration behind building tortus, a tool that makes it easy to label your text data within a Jupyter Notebook!
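For context on what such a tool looks like in practice, here is a rough sketch of a tortus-style labeling session. The constructor arguments and attribute names below are assumptions about the package's interface rather than details taken from this article, so check the tortus documentation before relying on them:

# Rough sketch of a tortus-style labeling session in a notebook.
# NOTE: the Tortus(...) arguments and the .annotations attribute are assumptions,
# not taken from the article; consult the tortus README for the actual interface.
import pandas as pd
from tortus import Tortus  # pip install tortus

# A small DataFrame of unlabeled tweets (hypothetical data).
df = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "text": ["I love this!", "This is awful.", "Not sure how I feel."],
})

# Launch the annotation widget over the 'text' column (argument names assumed).
tortus = Tortus(df, "text", num_records=3)
tortus.annotate()

# After labeling in the widget, pull the annotations back out (attribute name assumed)
# and merge them with the original data for the sentiment analysis project.
labeled = tortus.annotations
print(labeled.head())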
https://towardsdatascience.com/tortus-e4002d95134b
['Siphu Langeni']
2020-10-10 11:37:56.253000+00:00
['Sentiment Analysis', 'Jupyter Notebook', 'Annotation Tools', 'NLP', 'Data Science']
Investigate and solve Compute Engine cold starts like a detective🕵🏽‍♀️
Investigate and solve Compute Engine cold starts like a detective🕵🏽‍♀️ Season of Scale “Season of Scale” is a blog and video series to help enterprises and developers build scale and resilience into their design patterns. In this series we plan on walking you through some patterns and practices for creating apps that are resilient and scalable, two essential goals of many modern architecture exercises. In Season 2, we’re covering how to optimize your applications to improve instance startup time! If you haven’t seen Season 1, check it out here. How to improve Compute Engine startup times (this article) How to improve App Engine startup times How to improve Cloud Run startup times Shaving seconds off compute startup times might take a bit of detective work. How do you know whether the issue lies in the request, provision, or boot phase? In this article, we home in on profiling Compute Engine instances. I’ll explain how to pinpoint whether provisioning, scripts, or images contribute to slower instance startup times. Check out the video Review So far we have followed Critter Junction, a multiplayer online game about living life as a critter. They’ve successfully launched and globally scaled their gaming app on Compute Engine. With their growing daily active users, we helped them set up autoscaling, global load balancing, and autohealing to handle globally distributed and constantly rising traffic. Cold start time woes But Critter Junction has been seeing longer-than-wanted startup times for their Compute Engine instances, even though they set everything up according to our autoscaling recommendations. They knew they were running some logic on their game servers on Compute Engine, like taking user inputs to spawn them onto a new critter’s island. After profiling their startup times, they were seeing cold start times of more than 380 seconds, while the response latency for a request was in the 300 millisecond range. They also ran a performance test, right from Cloud Shell, to see how long Compute Engine was taking to create their instances versus how much time their code was taking to run. It showed three phases: Request, Provision, Boot. Request is the time between asking for a VM and getting a response back from the Create Instance API acknowledging that you’ve asked for it. You can profile this by timing how long it takes Google Cloud to respond to the Insert Instance REST command. Provision is the time Compute Engine takes to find space for the VM on its architecture. Use the Get Instance API on a regular basis and wait for the status flag to change from provisioning to running. Boot time is when startup scripts and other custom code execute, up to the point when the instance is available. Just repeatedly poll a health check that is served by the same runtime as your app, then time the change between receiving 500, 400, and 200 status codes. After doing this, Critter Junction noticed that the majority of instance startup time happened during the boot phase, when the instance executes startup scripts. This is not uncommon, so you should profile your boot scripts to see which phases are creating performance bottlenecks. Introducing the SECONDS variable To get a sense of which stages of your script are taking the most boot time, one trick is to wrap each section of your startup script with a command that utilizes the SECONDS variable, then append the time elapsed for each stage to a file, and set up a new endpoint to serve that file when requested.
SECONDS=0
# do some work
duration=$SECONDS
echo "$(($duration / 60)) minutes and $(($duration % 60)) seconds elapsed."
This let Critter Junction dig even deeper, polling the endpoint and getting data back without too much heavy lifting or modification to their service. And there it was! An example graph generated by timing the startup phases of the instance. Notice that the graph on the right is in sub-second scale. The performance bottleneck seemed to be public images — preconfigured combinations of the OS and bootloaders. These images are great when you want to get up and running, but as you start building production-level systems, the largest portion of bootup time is no longer booting the OS, but the user-executed startup sequence that grabs packages and binaries and initializes them. Use custom images Critter Junction was able to address this by creating custom images for their instances, which you can create from source disks, images, snapshots, or images stored in Cloud Storage, and then use to create VM instances. Custom images list When the target instance is booted, the image information is copied right to the hard drive. This is great when you’ve created and modified a root persistent disk to a certain state and want to save that state to reuse with new instances, and when your setup includes installing (and compiling) big libraries or pieces of software. Armed and ready When you’re trying to scale to millions of requests per second, serviced by thousands of instances, a small change in boot time can make a big difference in costs, response time, and, most importantly, the perception of performance by your users. Stay tuned for what’s next for Critter Junction. And remember, always be architecting. Next steps and references:
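To make the boot-phase measurement described in the Review section concrete, here is a minimal Python sketch that polls a health-check endpoint and logs when the HTTP status code changes; the endpoint URL, timeout, and poll interval are placeholder assumptions, not values from the article:

# Minimal sketch: time the boot phase by polling a health check served by the
# same runtime as your app, and record when the HTTP status code changes.
# The URL, timeout, and polling interval are placeholders (assumptions).
import time
import requests

HEALTH_URL = "http://EXTERNAL_IP/healthz"  # placeholder endpoint
POLL_INTERVAL_SECONDS = 1

def time_boot_phase(url, timeout=600):
    start = time.monotonic()
    last_status = None
    while time.monotonic() - start < timeout:
        try:
            status = requests.get(url, timeout=2).status_code
        except requests.RequestException:
            status = None  # instance not reachable yet
        if status != last_status:
            print(f"{time.monotonic() - start:7.1f}s  status={status}")
            last_status = status
        if status == 200:  # app is serving; boot phase is over
            return time.monotonic() - start
        time.sleep(POLL_INTERVAL_SECONDS)
    return None

if __name__ == "__main__":
    boot_seconds = time_boot_phase(HEALTH_URL)
    if boot_seconds is None:
        print("Timed out waiting for a 200 response.")
    else:
        print(f"Boot phase took {boot_seconds:.1f} seconds.")

The same polling pattern, pointed at the Get Instance API instead of a health check, can be used to time the provision phase described above.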
https://medium.com/google-cloud/investigate-and-solve-compute-engine-cold-starts-like-a-detective-%EF%B8%8F-66a03736cb03
['Stephanie Wong']
2020-09-14 18:47:29.968000+00:00
['Software Development', 'Google', 'Google Cloud Platform', 'Computer Science', 'Cloud']
Design interval training
Sprint In 2016 the tech world seemed abuzz about a “Design Sprint” process detailed in the book Sprint by Jake Knapp and other Google Ventures employees. The premise was that a new area of opportunity for product could be tackled by a dedicated small team in just five working days, at least conceptually. The authors had honed the process during real company visits. Plenty of readers latched on to the idea and tried it at their own companies. On the flipside, there was a backlash, with some feeling like a thoughtful ideation phase could not possibly be completed in a week, or that blocking out several busy calendars for a week was just not feasible. Somewhere in-between, I feel there are emphases from the Sprint process that are important to cover, even if they are not all possible within the same week. By tackling these key areas intermittently over a longer period of time, you can cover some vital stages of the ideation process with more room to think and fit in other demands. Tackling these steps in a more protracted way can mean you get all the bases covered at a more manageable pace. Focus on the Problem The start of the Sprint week is spent learning all you can about the problem space being tackled, something which is all too commonly skipped over in the rush to get things done. “Ask the experts” the Sprint book advises; get a thorough understanding of the subject matter at hand. Getting deep into the problem is an important fundamental to tackle before working out any solutions. Talk to those affected, particularly those suffering from the problem, to learn more. You do not necessarily need to schedule all your interviews in one afternoon though. Take your time to dwell on the problem; who is affected and why does it matter to them? When does it impact their lives? How widespread is the problem? Who is not affected? Your understanding of the problem will inform all your later work, so you need to make sure the problem is as complete as possible before you move on to solutions… More is more Being set up with a thorough understanding of the problem at hand, your team is then in a great position to work on solving it. There is usually more than one solution to a problem, and you do not always get things right the first time; continuous improvement and failing fast are popular tech concepts. Sprint advocates aiming for multiple solutions right away; iterating on sketches until you come up with the most compelling variant. Try spending more time in-between to allow your ideas to marinate, and one day whilst tackling something else you may get that brain-wave that adds the missing piece you’ve been looking for. Prototype Waiting until engineering resources can be scheduled to undertake your new idea can be costly in terms of both time and money. Prototyping allows you to validate the way ideas fit together, both in your own mind and others, without the heavy upfront costs. Linking together a few mock-ups without code or a back-end allows viewers to visualise your product before you set about building it. Coming up with a believable prototype in one day when sprinting can be a challenge though, and is certainly rushed. Spending a bit longer allows you to cover up the seams and ensure that your prototype does a convincing job. Then you can use the prototype for validating the idea in question, rather than answering questions about whether the finished version would be full colour. Try it out The earlier you can get feedback, the earlier you can learn. 
The earlier you can learn, the earlier you can shape your solution to be more in-line with what’s required, or learn from your mistakes and start over. It’s like the virtuous feedback loop detailed in the well-known Lean Startup book by Eric Ries, except starting it before you build puts you even further upstream in terms of learning. That’s why Sprint crams Friday feedback into that week as the final key requirement; without the knowledge gained from the feedback sessions, you just have nicely presented assumptions, and risk proceeding towards outputs over outcomes. Commencing the feedback gathering promptly but spreading it over a slightly longer period, say one session every day or two, can be more practical though. That way you do not have to try and slot in five people to help you out in one day; you can accommodate their schedules. You can spend more time discussing the area at hand with your participants both before and after the prototype test to get deeper into the issues without hurrying them through your script and revolving door. If something goes wrong, as it always can, and you discover a gaping prototype flaw during the first session that threatens to compromise all your feedback, you can realistically correct it for the later tests. That isn’t to say you need more than five tests to see the trends though; the Sprint book very helpfully references a study showing the diminishing returns you get after the fifth session, and other similar studies reinforce the finding. It just means you can spend more time gathering, organising and analysing your feedback before taking action, soaking up all the value your in-person tests will give. More haste, less speed I would encourage you to read the Sprint book in detail for great insights in the areas mentioned. By all means try a Design Sprint at your company if you feel you can be accommodating enough to make it work. Life can get in the way though, so rather than taking an all-or-nothing approach, why not deconstruct the Design Sprint and tackle the key stages along a timeline that works for your situation? That may be a more realistic way for you to cover the important analysis steps that can put your product on the right track.
https://medium.com/the-daily-standup/design-interval-training-50b09bf8bd25
['Mark Jones']
2017-10-06 02:57:07.589000+00:00
['Product Design', 'Design Sprint', 'Design', 'User Experience', 'Product Management']
6 Tips For Junior Devs, After 10,000h Of Engineering Software
Since the beginning of my career as a software dev in early 2018, I have constantly sought growth and challenge in my day-to-day work. I have attended a multitude of tech conferences, networked with dozens of world-class engineers, and consumed hundreds of hours of tech-related content in my spare time. On top of that, I have graduated with a Software Engineering degree and written a couple of successful Medium articles. I’ve built many side projects, contributed to open source libs, and even tried building a bunch of tech startups. None of that would have taught me anything about how to be a developer had I not learned it the hard way on the job, working with living software and real people. By now, I’ve worked at multiple companies (and been fired from one, too), working on different parts of the technology stack. I’ve done QA automation, back-end, front-end, Ops, DevOps, and have now ended up as a Junior Site Reliability Engineer. Some say that my experience is way above the Junior position. Yet I still proudly hold on to this title, believing that a title is worth as much as one’s professional maturity.
https://vyrwu.medium.com/5-tips-for-junior-devs-from-over-10-000h-of-software-engineering-9aad682f6468
['Aleksander', 'Vyrwu']
2020-12-21 00:54:22.572000+00:00
['Advice', 'Programming', 'Software Development', 'Engineering', 'Junior']
Ocean Waves (Sinusoidal) Regression
Definition : A Sine wave or sinusoidal wave is a mathematical curve that describes a smooth periodic oscillation. A Sine wave is a continuous wave, it goes from 0 to 360 degrees. Table Representing Sine Values Generation of Sine Wave Sinusoidal function is given by, Sine Function Formula The period of the sine curve is the length of one cycle of the curve. The natural period of the sine curve is 2π. So, a coefficient of b=1 is equivalent to a period of 2π. To get the period of the sine curve for any coefficient B, just divide 2π by the coefficient b to get the new period of the curve. Real Life Application Of Sine Function : (1) Generation of music waves. (2) Sound travels in waves. (3) Trigonometric functions in constructions. (4) Used in space flights. (5) GPS location calculations. (6) Architecture. (7) Electrical current. (8) Radio broadcasting. (9) Low and high tides of the ocean. (10) Buildings. Now, I’m going to show you different kinds of sine waves that can be generated by modifying its parameters. My ultimate goal is to show you how modification of parameters affects the shape of the graph. After that, I’m going to take an example that will show how we can implement sinusoidal regression in python. First of all, we are going to have a look at different graphs of sine waves by modifying the parameter values. Why are we going to do this? As we know data visualization has a major role in data science. While working with data (regression) we need to find the best fit curve for it. For that, we’ll have a lot of parameters in our function. Now if we don’t know what happens when we change these parameters then it’s going to be a cumbersome journey to go through it, right? So here we’ll take examples to understand what happens when we change the parameter values. How we should understand it? We will take our main sine function and then we’ll modify the parameter values then there will be a graph for that to visualize it. What I want you to do is take a pen and paper and try to plot the sine graph while going through examples. I think that’ll help you understand better. Let’s have a look at different sine graphs! ☀️ Example : 1 Y = 1*Sin(1(X+0))+0 Y = SinX A = 1 B = 1 C = 0 D = 0 Period = 2*pi/1 = 2*pi Y = SinX Here we can see that the sine wave has the amplitude of 1 and the length of cycle for the sine wave goes from 0 to 2pi. Example 2 : Y = 2*Sin(1(X+0))+0 Y = 2SinX A = 2 B = 1 C = 0 D = 0 Period = 2*pi/1 = 2*pi Y = 2SinX Here we can see that the sine wave has an amplitude of 2. As we can see that it increases the height of our sine wave. The length of cycle for the sine wave goes from 0 to 2pi. Example 3 : Y = 1*Sin(2(X+0))+0 Y = Sin2X A = 1 B = 2 C = 0 D = 0 Period = 2*pi/2 = pi Y = Sin2X Here we can see that the sine wave has an amplitude of 1. The length of cycle for the sine wave goes from 0 to pi. Example 4: Y = 2*Sin(2(X+0))+0 Y = 2Sin2X A = 2 B = 2 C = 0 D = 0 Period = 2*pi/2 = pi Y = 2Sin2X Here we can see that the sine wave has an amplitude of 2. The length of cycle for the sine wave goes from 0 to pi. As we can see from the graph it has increased the height of our wave and one cycle completer at pi. Example 5: Y = 2*Sin(1(X+1))+0 Y = 2Sin(X+1) A = 2 B = 1 C = 1 (Shift to Left) D =0 Period = 2*pi/1 = 2*pi Y = 2Sin(X+1) Here we have shifted our curve to the left by 1. We took the amplitude value as 1. Notice that here we have the period of 2*pi. That means one cycle has a length of 2*pi. 
Since we have shifted it to the left by one unit, the first cycle will be shifted 1 unit to the left from 2pi. Example 6 : Y = 2*Sin(1(X-1))+0 Y = 2Sin(X-1) A = 2 B = 1 C = -1 (Shift to Right) Period = 2*pi/1 = 2*pi Y = 2Sin(X-1) Here we have shifted our curve to the right by 1. We took the amplitude value as 1. Notice that here we have the period of 2*pi. That means one cycle has a length of 2*pi. Since we have shifted it to the right by one unit, the first cycle will be shifted 1 unit to the right from 2pi. Example 7: Y = 1*Sin(1(X+0))+2 Y = SinX +2 A = 1 B = 1 C = 0 D =2 Period = 2*pi/1 = 2*pi Y = SinX +2 Here notice that we have shifted our curve 2 points on the positive y-axis. The amplitude of the curve is 1. The period as you can see is also 2*pi. Example 8 : Y = 1*Sin(1(X+0)) — 2 Y = SinX — 2 A = 1 B = 1 C =0 D =-2 Period = 2*pi/1 = 2*pi Y = SinX — 2 Here notice that we have shifted our curve 2 points on the negative y-axis. The amplitude of the curve is 1. The period as you can see is also 2*pi. Example 9: Y = -1*Sin(1(X+0))+0 Y = -SinX A = -1 B = 1 C = 0 D =0 Period = 2*pi/1 = 2*pi Y = -SinX Here we have changed the amplitude value to -1. From the illustration above, we can see that our graph is inverted from the previous version which has amplitude of 1. It means the positive y-axis is replaced by the negative y-axis. Example 10 : Y = -2*Sin(1(X+0))+0 Y = -2SinX A = -2 B = 1 C = 0 D = 0 Period = 2*pi/1 = 2*pi Y = -2SinX Here we are going to set the value of amplitude to -2. So it’s just like our previous graph but the height of sine curve is increased. Also notice that the period of sine curve is 2*pi. Example 11 : Y = -2*Sin(1(X-1))+0 Y = -2Sin(X-1) A = -2 B = 1 C = -1 D = 0 Period = 2*pi/1 = 2*pi Y = -2Sin(X-1) Here we have shifted the curve to the right by 1 point and also changed the amplitude value to -1. The period of the sine curve is 2*pi. Example 12 : Y = -2*Sin(1(X+1))+0 Y = -2Sin(X+1) A = -2 B = 1 C = 1 D = 0 Period = 2*pi/1 = 2*pi Y = -2Sin(X+1) Here we have shifted the curve to the left by 1 point and also changed the amplitude value to -1. So it’s going to negative y-axis first. The period of the sine curve is 2*pi. Example 13 : Y = 2*Sin(-1(X+1))+0 Y = 2Sin(-1(X+1)) A = 2 B = -1 C = 1 D = 0 Period = 2*pi/-1 = -2*pi Y = 2Sin(-1(X+1)) Here we have shifted the curve to the left by 1 unit. One thing to notice is that, since we have period of -2*pi, our graph is going to the left side or we can say on the negative x-axis. When we have a positive value of the period it goes to the positive x-axis. Example :14 Y = -2*Sin(-1(X-1))+0 Y = -2Sin(-1(X-1)) A = -2 B = -1 C = -1 D = 0 Period = 2*pi/-1 = -2*pi Y = -2Sin(-1(X-1)) Here we have shifted the curve to the right by 1 unit. One thing to notice is that, since we have period of -2*pi, our graph is going to the left side or we can say on the negative x-axis. Example 15 : Y = 1*Sin(1(X+1))+1 Y = 1*Sin(X+1) + 1 A =1 B = 1 C =1 D=1 Period = 2*pi/1 = 2*pi Y = 1*Sin(X+1) + 1 Here we have the amplitude value of 1 and we have also shifted the curve to the left by 1 unit. Here notice that the period of our curve is 2*pi. One more thing to notice is that we have shifted our curve by 1 on the positive y-axis. Example 16 : Y = -1*Sin(-1(X-1))-1 A = -1 B = -1 C = -1 D =-1 Period = 2*pi/-1 = -2*pi Y = -1*Sin(-1(X-1))-1 Here we have the amplitude value of -1 and we have also shifted the curve to the right by 1 unit. Here notice that the period of our curve is -2*pi. So it’s going to go to the left first. 
One more thing to notice is that we have shifted our curve by 1 on the negative y-axis. Credits : Unsplash Let’s code : (1) Import required libraries : Here we are going to import four libraries. numpy : for calculations. matplotlib : to plot our dataset and curves. curve_fit : to find the optimal parameter values for our sine curve. r2_score : to calculate the accuracy of our model.
# Import required libraries :
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit # For curve fitting
from sklearn.metrics import r2_score # To check accuracy
(2) Generate Dataset : Since we don’t have an actual dataset that represents a sine wave pattern, we are going to make our own dataset. Here we are going to use the linspace function to get the values of X. For the Y values we’ll use sin(2*pi*X). Now, a real-life dataset isn’t going to follow the exact sine pattern, right? There will be some noise in the dataset. So we’re also going to add some noise to our dataset to make it look more realistic! After that, we are just going to scatter the X, Y points on the plane. That way we can visualize the dataset we have created.
# Generating dataset :
# Y = A*sin(B(X + C)) + D
# A = Amplitude
# Period = 2*pi/B
# Period = Length of One Cycle
# C = Phase Shift (In Radians)
# D = Vertical Shift
X = np.linspace(0,1,100) # (Start, End, Points)
# Here…
# A = 1
# B = 2*pi (B = 2*pi/Period, Period = 1)
# C = 0
# D = 0
Y = 1*np.sin(2*np.pi*X)
# Adding some Noise :
Noise = 0.4*np.random.normal(size=100)
Y_data = Y + Noise
plt.scatter(X,Y_data,c="r")
(3) Finding the best fit line for our dataset : Here I’m going to show you how we can fit a “regression line” to our dataset. We’ll calculate the error here, and in the next part we’ll plot the sine curve that best fits our dataset. From the accuracy of both models, we will see why we should use sinusoidal regression in this case.
# Function to calculate the value :
def calc_line(X,m,b):
    return b + X*m
# curve_fit returns optimized parameters for our function :
# popt stores the optimal parameters
# pcov stores the covariance of the parameters
popt,pcov = curve_fit(calc_line,X,Y_data)
# Plot the main data :
plt.scatter(X,Y_data)
# Plot the best fit line :
plt.plot(X,calc_line(X,*popt),c="r")
# Check the accuracy of the model :
Accuracy = r2_score(Y_data,calc_line(X,*popt))
print("Accuracy of Linear Model : ",Accuracy)
Notice that for our dataset, which follows a sine wave pattern, we have found the best fit line, and the accuracy of the model is only around 40%. So we can conclude that for datasets that follow a sine wave pattern, simple linear regression may not achieve high accuracy. That’s the reason to use sine wave regression.
# Calculate the value :
def calc_sine(x,a,b,c,d):
    return a * np.sin(b*(x + np.radians(c))) + d
# Finding optimal parameters :
popt,pcov = curve_fit(calc_sine,X,Y_data)
# Plot the main data :
plt.scatter(X,Y_data)
# Plot the best fit curve :
plt.plot(X,calc_sine(X,*popt),c="r")
# Check the accuracy :
Accuracy = r2_score(Y_data,calc_sine(X,*popt))
print(Accuracy)
Notice that our best fit curve is in the shape of a sine wave. Also notice that the accuracy of our model has increased to around 79%. So we can conclude that sine regression helped us achieve higher accuracy. Putting it all together : Okay. So I think that covers almost everything we are going to need in machine learning from sinusoidal waves. If you enjoyed reading this, then please hit the clap icon; that’ll motivate me to write such comprehensive articles on various machine learning algorithms. Thank you for reading this article. I hope it helped! I regularly post my articles on : patrickstar0110.blogspot.com All my articles are available on: medium.com/@shuklapratik22 If you have any doubts then feel free to contact me: [email protected]
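As a small supplement to the parameter walk-through earlier in the article (my own illustration, not code from the original author), this sketch plots a few of the example curves so the effect of A, B, C, and D on Y = A*sin(B(X + C)) + D can be seen side by side:

# Illustration (not from the original article): plot a few of the example
# curves from the walk-through to see how A, B, C, and D change the shape.
import numpy as np
import matplotlib.pyplot as plt

def sine_curve(x, a, b, c, d):
    # Y = A*sin(B(X + C)) + D
    return a * np.sin(b * (x + c)) + d

x = np.linspace(0, 2 * np.pi, 500)
examples = {
    "Y = sin(X)":     (1, 1, 0, 0),  # Example 1
    "Y = 2sin(X)":    (2, 1, 0, 0),  # Example 2: doubled amplitude
    "Y = sin(2X)":    (1, 2, 0, 0),  # Example 3: period halved to pi
    "Y = sin(X) + 2": (1, 1, 0, 2),  # Example 7: shifted up by 2
}
for label, (a, b, c, d) in examples.items():
    plt.plot(x, sine_curve(x, a, b, c, d), label=label)
plt.legend()
plt.xlabel("X")
plt.ylabel("Y")
plt.show()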
https://medium.com/nightingale/ocean-waves-sinusoidal-regression-5c46c8bd4e58
['Pratik Shukla']
2020-06-18 13:01:01.197000+00:00
['Machine Learning', 'Mathematics', 'Artificial Intelligence', 'Data Science', 'Data Visualization']
Thank You for Resisting the Cheeto-in-Chief
Thank You for Resisting the Cheeto-in-Chief How Americans cheered one rogue government tweeter by DAVID AXE In the days following Donald Trump’s Jan. 20, 2017 inauguration as the 45th president of the United States, his administration moved quickly to remove all mentions of climate change from U.S. government websites and social media. Everyday Americans were … not fans, if notes of encouragement that citizens sent to one rogue climate-change tweeter are any indication. “Thank you to the bad-ass,” one American wrote to the tweeter. “You fine people may be our nation’s last line of defense,” another commented. Trump’s government-wide act of science-denial included a gag order targeting the Environmental Protection Agency. The White House barred the EPA and its employees from speaking to the press or posting on social media. Famously, a former seasonal employee at Badlands National Park in South Dakota fought back. Taking advantage of their access to the park’s official Twitter account, on Jan. 24, 2017 the former employee tweeted several statements about climate change. “Today, the amount of carbon dioxide in the atmosphere is higher than at any time in the last 650,000 years,” one tweet read. The National Park Service quickly deleted the tweets — illegally, as official tweets are public records that the federal government is required to archive and make available to the public. DEFIANT requested, under the Freedom of Information Act, copies of records regarding the tweet controversy. The trove of documents, which the park service made public on April 13, 2017, includes copies of emails that members of the public sent to the park service. They’re pure gold. To quote a couple — “If I could visit the Badlands right now, I would do it just to shake the hand of whoever updates your Twitter account. Roll on, NPS employees who believe in climate change.” “Thank you to the bad-ass — I mean, Badlands social-media rep — who stood up to the Cheeto-in-chief regarding climate change. Every act of resistance is so important right now.” And most powerfully — Who would’ve thought it? National Park employees waging a digital guerilla war against an OCD moron who still insists climate change doesn’t exist. You fine people may be our nation’s last line of defense against destruction of not only our own national treasures but the natural world as a whole. Please rest assured the nation appreciates your courageous determination to protect our natural wonders. Don’t be cowered. Don’t be bullied. Don’t be silenced. The dark cloud that hangs over America at the moment will pass and a new day will most certainly dawn. Until then, stay strong and resolute. And thanks to all of you for your peerless service. Stay defiant. Follow DEFIANT on Facebook and Twitter.
https://medium.com/defiant/thank-you-for-resisting-the-cheeto-in-chief-e43dfdb8a07
['David Axe']
2017-04-25 06:55:42.608000+00:00
['National Parks', 'Defiant Science', 'Environment']
The Knowledge Triangle
The Knowledge Triangle — a graph technologies metaphor where raw data is converted into information about people, places and things and connected into a query-ready graph. Although we use the term “knowledge” broadly in normal conversation, it has a specific meaning in the AI and graph database community. Even within computer science, it has many different meanings based on the context of a discussion. This article gives a suggested definition of the term “knowledge” and uses the Knowledge Triangle metaphor to explain our definition. We will then show some variations of the Knowledge Triangle and see how the word knowledge is used in information management and learning management systems. I have found that having a clear image of the knowledge triangle in your mind is essential to understanding the processes around modern database architectures. Here is our definition of Knowledge in the context of AI and graph databases: Knowledge is connected-information that is query ready. This definition is a much shorter than the Wikipedia Knowledge page which is: …a familiarity, awareness, or understanding of someone or something, such as facts, information, descriptions, or skills, which is acquired through experience or education by perceiving, discovering, or learning. The Wikipedia definition is longer, more general and applicable to many domains like philosophy, learning, and cognitive science. Our definition is shorter and only intended for the context of computing. Our definition is also dependant on how we define “information”, “connected”, and “query ready”. To understand these terms, let’s reference the Knowledge Triangle figure above. In the knowledge triangle diagram, let’s start at the bottom Data Layer. The data layer contains unprocessed raw information in the forms of binary codes, numeric codes, dates, strings, and full-text descriptions that we find in documents. The data layer can also include images (just as a jpeg file), speech (in the form of a sound file), and video data. You can imagine raw data as a stream of ones and zeros. It is a raw dump of data from your hard drive. Some types of raw data, such as an image — can be directly understood by a person just by viewing it. Usually, raw data is not typically useful without additional processing. We call this processing of raw data enrichment. Enrichment Enrichment takes raw data and extracts the things we care about and converts data into Information. This Information consistest of items we call business entities: people, places, events, and concepts. Information is the second layer of the Knowledge Triangle. Information is more useful than raw data, but Information itself consists of islands of disconnected items. When we start to link information together, it becomes part of the Knowledge layer. Knowledge is the top layer of the Knowledge Triangle. The knowledge layer puts information into context with the rest of the information in our system. It is this context that gives information structure. Structure gives us hints about how relevant information is for a given task. Structure Informs Relevancy How does structure inform relevance? Let’s take a search example. Let’s say we have a book on the topic of NoSQL. The word “NoSQL” should appear in the title of that book. There also might be other books on related topics, but they only mention NoSQL in a footnote of the book. If the counts of the term NoSQL are the same in both books then the book on NoSQL might be buried far down in the search results. 
A search engine that uses structural search knows that titles are essential to findability. Structural search engines boost hits of a keyword within the title of a document by a factor of 10 or 100. Many search engines (notably SharePoint) ignore the structure of a document when doing document retrieval, so they have a reputation for their inability to find documents. The structured search example above is an excellent example of where query readiness is enhanced in the knowledge layer. The fact that a keyword appears somewhere in a document reflects very little structure. The fact that a keyword appears in a title has much more value. The fact that the keyword appeared in a chapter title gives us some knowledge that the entire chapter is about that keyword.

Enrichment and Machine Learning

Today most enrichment is done by using simple rule-based systems. The most basic rules are called data ingestion rules, where data transformation maps are created and executed when new data is loaded into our system. A typical map rule says: take the data from the fourth column of the CSV file and assign it to the field PersonFamilyName. These rules are manually created and maintained. About 70% of the cost of building enterprise knowledge graphs is related to building and maintaining these mapping processes. These mapping steps are often the most tedious parts of building AI systems since they require attention to detail and validation. Source systems frequently change, and the meaning of codes may drift over time. Continuous testing and data quality staff are essential for these processes to be robust. The phrase garbage-in, garbage-out (GIGO) applies.

What is revolutionary about the mapping process is that we are just starting to see machine learning play a role in it. These processes are often called automated schema mapping or algorithm-assisted mapping. To be automated, these processes involve keeping a careful log of prior mappings as a training set. New maps can then be predicted with up to 95% accuracy for new data sources. These algorithms leverage lexical names, field definitions, data profiles, and semantic links to predict matches. Automatic schema mapping is an active field of research for many organizations building knowledge graphs. Automated mapping will lower the cost of building enterprise knowledge graphs dramatically. Graph algorithms such as cosine similarity can be ideal for finding the right matches.

Structure and Abstraction

We should also note that many things in the real world reflect the Knowledge Triangle architecture of raw data at the bottom and connected concepts at the top. One of my favorite examples is the multi-level architecture of the neural networks in animal brains, as depicted below.

Brains have multiple layers of neural networks. Data arrives at the bottom layers and travels upward, with each layer representing more abstract concepts. The human neocortex has up to six layers of processing. This figure is derived from Jeff Hawkins’s book On Intelligence.

Just like the Knowledge Triangle, raw data arrives at the bottom layer and travels upwards. But unlike our three-layer model, brains have up to six layers of complex non-linear data transformations. At the top of the stack, concepts such as the detection of a specific object in an image or the recognition of a person’s face are turned to the “on” state.
There are also feedback layers downward, so that if the output of one layer has quality problems, new signals are sent back down to gain more insight into which objects are recognized. Many people like to use brain metaphors when they explain knowledge graphs. Although some of these metaphors are useful, I urge you to use them cautiously. Brains typically have 10,000 connections per vertex, and each connection does complex signal processing. So the architectures are very different in practice. The last term we need to define is query readiness.

Query Readiness

Of the many ways we can store data, which forms are the most useful for general analysis? Which forms need the minimum of processing before we can look for insights? What are the queries, searches, and algorithms we can plug in to quickly find meaning in the data? The larger the number of these things you can use without modification, the more query ready your data is. What the industry is finding is that the number of algorithms available to graph developers today is large and growing. The rise of distributed native labeled property graphs is making these algorithms available even for tens of billions of vertices. In summary, graphs are winning the algorithms race. The performance and scale-out abilities of modern graph databases are pushing them to the forefront of innovation.

Variations on the Knowledge Triangle

There are also many variations on the basic knowledge triangle metaphor that are useful in some situations. One of the most common is to add a Wisdom layer on top of the Knowledge layer. This four-layer triangle is known as the DIKW pyramid and is used frequently in information architecture discussions. I tend to downplay the role of wisdom in my early knowledge graph courses since the wisdom layer seems to be associated with touchy-feely topics or stories about visiting the guru on the mountain top for advice. That being said, there are some useful things to consider about the term wisdom. For example, when you go to an experienced person for advice, you share with them your problem and the context of that problem. You expect them to use their knowledge to give you advice about your problem. You are expecting them to transfer a bit of their knowledge to you. We imagine the wisdom layer as a feedback layer to the structure of the knowledge layer. Wisdom can inform us how to structure our knowledge in a way that it can be transferred from one context to another and still be valuable. Stated another way, can we take a small sub-graph out of an enterprise knowledge graph and port it to another graph and still have it be useful? For example, let’s suppose we have a sub-graph that stores information about geography. It might have postal codes, cities, counties, states, regions, countries, islands and continents in the graph. Can we lift the geospatial subgraph out of one graph and drop it into another graph? In neural networks and deep learning, taking a few layers of one neural network and dropping them into another network is called transfer learning. Transfer learning is frequently used in image and language models where training times are extensive. How you reconnect these networks in a new setting is a non-trivial problem. These are the questions about the knowledge layer that you should be asking when you design your enterprise knowledge graph. If calling these reuse issues the “Wisdom” layer helps you in your discussions, we encourage you to adopt this layer.
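As a small, hedged illustration of what “query ready” means in practice, the sketch below connects a few business entities with the networkx Python library and then asks a question of the connections rather than of the raw records. The entities and relationship names are invented purely for this example.

import networkx as nx

# Connected information: entities linked into a graph rather than stored as isolated rows.
g = nx.Graph()
g.add_edge("Alice", "Acme Corp", relation="works_for")
g.add_edge("Bob", "Acme Corp", relation="works_for")
g.add_edge("Acme Corp", "London", relation="located_in")

# A query over structure: which entities are within two hops of London?
hops = nx.single_source_shortest_path_length(g, "London", cutoff=2)
print([node for node, distance in hops.items() if 0 < distance <= 2])

Because the relationships are already materialized, the question needs no joins or preprocessing, which is the sense in which connected information is query ready.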
Data Science and Knowledge Science

In some of my prior articles, I discussed the trend of moving from data science to knowledge science. We can also use the Knowledge Triangle metaphor to explain this process. This process is fundamentally about allowing staff direct access to a connected graph of your enterprise knowledge, thus saving them all the hassle of making meaning out of your raw data in the data lake. Data science staff can get faster time to insight using direct access to a knowledge graph.

To wrap up the post, I also want to suggest one other version of the knowledge triangle that has been mapped to an actual set of tools in a production knowledge graph. Instead of the abstract concept of raw data, we replace it with a diagram of a Data Lake or an object store such as Amazon S3. At the Information layer, we list the concepts we are looking for in the Data Lake, the definitions of these concepts, and the rules to validate each of these atomic data elements to make them valid. We also allow users to associate each business entity with a URI so they can be linked together in the next higher step. At the Knowledge Graph layer, we talk about making connections between the entities found in the information layer and the tools we use to connect data and find missing relationships automatically. These processes include entity resolution, master data management, deduplication and shape validation.

From Data Lakes to transfer learning: the Knowledge Triangle in practice.

This diagram also mentions that there is often a feedback layer that automatically sends alerts to the data enrichers that there might be missing data, along with clues on how this data can be found.

Knowledge Spaces in Learning Management Systems

Lastly, we want to mention that modern AI-powered learning management systems (LMS) also use the term Knowledge Space. In the context of an LMS, a knowledge space is the set of concepts that must be mastered to achieve proficiency in a field. Each student has a Knowledge State that shows where they are in learning a topic. AI-powered LMS systems use recommendation engines to recommend learning content associated with the edges of known concepts in each student's Knowledge Space. I will be discussing the topic of AI in education and Knowledge Spaces in a future blog post.

Summary

In summary, the Knowledge Triangle is one of the most useful metaphors in our graph architecture toolkit. Along with The Neighborhood Walk, the Open World, and the Jenga Tower, it forms the basis for our introductory chapter on Knowledge Graph concepts. I want to thank my friend Arun Batchu for introducing me to the Knowledge Triangle and for his willingness to transfer his wisdom to me.
https://dmccreary.medium.com/the-knowledge-triangle-c5124637d54c
['Dan Mccreary']
2019-09-01 19:03:55.851000+00:00
['Dikw', 'Knowledge Triangle', 'Information Architecture', 'Artificial Intelligence', 'Graph Databases']
Attitude Is Everything
How do you currently see Life? After you wake each morning and push through the grogginess and grumpiness, how do you honestly feel about Life? Don’t pay much attention to the people and situations influencing you; focus rather on how you feel inside. Don’t try to impress me or some other person with your answer. Ask yourself from your heart, from your soul: do you like your current perspective of life?

Your Perspective is important because the one you hold has a hold over you

It affects the way you see the world and how you carry yourself through each moment. It chooses the words you speak and the actions you take. And as the world sees you expressing yourself, they are, in some way, influenced by you — some may even end up following you. So, you have to be responsible for yourself because everything you do could change someone’s life for better or worse.

Remember, you alone see your Perspective; the world feels only your Attitude

You are responsible for your Perspective. You need to be able to take everything weighing down on you and still carry yourself well. It doesn’t matter whether the day feels good or bad, you must never succumb to the people or situations influencing you the wrong way. You need to be your own and hold firm to the belief that nothing is stronger than your belief in yourself.

And everything out there exists only to serve you in some way

You may not know how at first, but you know that at some point in the future, its meaning will reveal itself to you. Don’t consider yourself a victim and succumb to the things facing you; see yourself as being the one who benefits and overcome the things facing you. Don’t sit idle, hold strong, and keep moving forward. This Attitude can change the world in some way every day. Keep with you the words of Arthur Gordon:

Be bold and mighty forces will come to your aid. In the past, whenever I had fallen short in almost any undertaking, it was seldom because I had tried and failed. It was because I had let fear of failure stop me from trying at all.

I would like for you to keep these with you:

Your Perspective Powers Your Attitude

Inky Johnson once said something quite interesting: perspective drives performance. And I am paraphrasing here, but the adversity that finds you is not as important as your perspective of it. You may not be able to control the adversity that will find you, but you choose how you see it, and this will decide what you do with it.

Your Perspective will influence your Attitude

A destructive perspective can lead you towards a destructive attitude in the same way a constructive perspective can lead you towards a constructive attitude. The latter asks for more effort, but do not let this discourage you, because it brings with it a great deal of value. Embrace it. If you were to be consumed by anything, let it be this. You will not regret the value you find in the end.

“I am still determined to be cheerful and happy, in whatever situation I may be; for I have also learned from experience that the greater part of our happiness or misery depends upon our dispositions, and not upon our circumstances.” — Martha Washington

A Comprehensive Understanding Leads To A Better Perspective

Make room for Doubt and ask the questions you need to ask. Understanding is a journey, and the questions you ask help you explore this journey. Look at the good and the bad, and try to draw value from them both. Be not afraid to explore, for the more ground you cover, the broader you will be able to think.
And along the way, the understanding will reveal itself to you.

This helps you find and follow what makes sense to you

But be careful not to mislead yourself. The understanding you find is meant to help you choose a Perspective that will bring you value today, tomorrow, and each day after. Don’t choose one without considering the long-term value. But once you find the one that aligns with your soul, keep it. Let it be expressed through your Attitude and be the reason why the world remembers you.

“Carve your name on hearts, not tombstones. A legacy is etched into the minds of others and the stories they share about you.” ― Shannon Alder

This Is How You Brave The World

It is no secret that the world can feel harsh at times. Things often do not work in your favor, and they may compound into an enormous and powerful weight resting itself on your shoulders. I can say, if you leave this weight alone, it will grow and hold a great influence over you and the life you live. It will do its best to overwhelm you.

But whatever you do, do not succumb

Don’t be influenced by everything living outside your control. They may feel strong, they may feel impossible, but know that you are capable, know that you are stronger. Stay true to your belief in yourself. You may feel confused, but you will understand it better at some point in the future. Don’t let go of your hope, and the terrors of the world will hold no power over you.

“It takes courage to grow up and become who you really are.” ― E.E. Cummings

You are going to live your life each day and go through the time given to you. You may do it willingly or unwillingly, but this is an important decision you must make each day. I do say, choose the former, because Life is far too interesting to live it unwillingly. Some days may be good and some days may be bad, but neither should be enough to sway your Perspective of life as a whole.

Your Perspective may be nourished by the outside world but it stems from within

Your Perspective belongs to you; it is for you to maintain and express through each thing that you do. Let it not be influenced but rather nourished by the people and situations living outside your control. Let everything serve you on your way. A well-rooted Perspective and an indomitable Will leave you with an empowering Attitude, one that can brave the world and change it at the same time.

“Life is not easy for any of us. But what of that? We must have perseverance and, above all, confidence in ourselves. We must believe that we are gifted for something, and that this thing, at whatever cost, must be attained” — Marie Curie

Invest In Your Existence, Kind Reader.
https://medium.com/live-your-life-on-purpose/attitude-is-everything-5d46cc4046c5
['René Chunilall']
2020-12-25 17:03:12.792000+00:00
['Life Lessons', 'Life', 'Self Improvement', 'Self-awareness', 'Self Mastery']
Flask’s Latest Rival in Data Science
On the comparison between Flask and Streamlit: a reader noted that Flask has capabilities in excess of Streamlit. I appreciate this point and would encourage users to look at their use cases and pick the right technology. For users who require a tool to deploy models for their team or clients, Streamlit is very efficient; however, for users who require more advanced solutions, Flask is probably better. Competitors of Streamlit would include Bokeh and Dash.

Streamlit

This is where Streamlit comes into its own, and why they just raised $6m to get the job done. They created a library off the back of an existing Python framework that allows users to deploy functional code. It is kind of similar to how TensorFlow works: Streamlit adds a new feature to its UI corresponding to each new function called in the Python script. Take, for example, the following six lines of code, in which I append a "title" method, a "write" method, a "selectbox" method, and another "write" method (from Streamlit):

import streamlit as st
st.title('Hello World')
st.write('Pick an option')
keys = ['Normal', 'Uniform']
dist_key = st.selectbox('Which Distribution do you want?', keys)
st.write('You have chosen {}'.format(dist_key))

Save that into a file called "test.py", then run "streamlit run test.py" and it produces the following in your browser at http://localhost:8501/:

Code above produced this. Fantastic how efficient Streamlit's library makes UI programming.

Now this is awesome. It's both clean to look at and clearly efficient to create. Jupyter Notebooks are another successful "alternative," but they're a bit different. Notebooks are better as a framework for research or report writing; however, there's little you can do in the way of actually letting someone else use your code, as it's impractical to give someone else a notebook of code. Colab kind of bridges that gap, but it's still not as clean. Streamlit fills this void by giving the user the ability to deploy code in an easy manner so the client can use the product. For those of us who like making small things, this has always been an issue.

Ease of Use

OK, so let's create something that we may actually want someone else to use. Let's say I want to teach my nephew about distributions. I want to make an app that he can use where he selects a distribution, and then it draws a line chart of it. Something as simple as the following:

The code to create this is provided below.

In this example, you can see that the user has a choice between two items in a drop-down menu, and when he selects either, the line chart updates. Taking a step back, I'm providing the user with:

Some information about a problem
The ability to make a choice
The corresponding chart, which is then returned to the user

Now in Flask, something like the above would easily require hundreds of lines of code (before even getting to the aesthetics); however, Streamlit have achieved the above in a negligible amount of code.
Note that the above required the following ~11 lines of code:

import streamlit as st
import numpy as np

# Write a title and a bit of a blurb
st.title('Distribution Tester')
st.write('Pick a distribution from the list and we shall draw a line chart from a random sample of the distribution')

# Make some choices for a user to select
keys = ['Normal', 'Uniform']
dist_key = st.selectbox('Which Distribution do you want to plot?', keys)

# Logic of our program
if dist_key == 'Normal':
    nums = np.random.randn(1000)
elif dist_key == 'Uniform':
    nums = np.array([np.random.randint(100) for i in range(1000)])

# Display to the user
st.line_chart(nums)

I find it amazing because the amount of code required is so small to produce something that actually looks and works pretty well. For anyone who's played around with UI before, you'll know how difficult it is to achieve something of this quality. By producing an open-source framework for researchers and teams alike, Streamlit has immensely reduced development time. I cannot emphasize this point enough. Given this, no Data Scientist or Machine Learning Researcher can ever complain about not being able to deploy work. Nor can they complain about getting an MVP running. Streamlit have done all the hard work. Amazing job guys!
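For readers curious what the other side of that Flask comparison looks like, here is a rough, hedged sketch of the boilerplate a chart-less Flask version of the same page might need. The route, template, and field names are invented for illustration and are not taken from the article; to keep the sketch short, it reports summary statistics instead of rendering a chart.

from flask import Flask, request, render_template_string
import numpy as np

app = Flask(__name__)

# Even a minimal page needs an HTML template and a form.
PAGE = """
<h1>Distribution Tester</h1>
<form method="post">
  <select name="dist">
    <option>Normal</option>
    <option>Uniform</option>
  </select>
  <input type="submit" value="Sample">
</form>
{% if stats %}<p>{{ stats }}</p>{% endif %}
"""

@app.route("/", methods=["GET", "POST"])
def index():
    stats = None
    if request.method == "POST":
        dist = request.form["dist"]
        nums = np.random.randn(1000) if dist == "Normal" else np.random.randint(0, 100, 1000)
        stats = "{}: mean={:.2f}, std={:.2f}".format(dist, nums.mean(), nums.std())
    return render_template_string(PAGE, stats=stats)

if __name__ == "__main__":
    app.run(debug=True)

Even without the chart, the Flask version already needs routing, a template, and form handling, which is the contrast being drawn above.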
https://towardsdatascience.com/the-end-of-flask-in-data-science-738920090ed9
['Mohammad Ahmad']
2020-06-27 09:37:45.212000+00:00
['Programming', 'Artificial Intelligence', 'Data Science', 'UX', 'Machine Learning']
Designing your Company Architecture on Google Cloud Platform
Introduction

In this blog, I am going to cover the basic aspects of setting up your company architecture on Google Cloud. It is essential that the infrastructure you develop has high cohesion and low coupling; setting up such an architecture helps you to scale your target services and apps at an incredible speed without worrying about it affecting your entire workflow. Also, a well-defined and structured architecture enables faster bug tracking and fixing and prevents the problem of single-point failure.

Google Cloud Platform Resource Hierarchy

Let's first understand how the resource hierarchy needs to be set up on Google Cloud Platform. While designing your workflow you can take a top-down or a bottom-up approach, whichever you prefer. Understanding these concepts is easier if you take a bottom-up approach; however, I strongly recommend that you also keep the top-down approach in mind when you actually begin setting up your architecture.

GCP Resource Hierarchy

If you have worked on any cloud platform before, such as AWS, Azure, GCP, or DigitalOcean, you must be familiar with Virtual Machines or EC2 instances; these comprise your CPU/GPU instances. Your Virtual Machines are organized in projects, and your projects are in turn organized in folders. Folders can be organized inside parent/child folders, and in the end, your folders come under the Organization Node. The Organization Node is the root of your company architecture. Creating folders is optional in GCP; however, I strongly recommend that you use folders inside an organization, as a well-defined folder structure will make your life a hell of a lot easier further down the line when you want to create teams and give IAM permissions to your team members or virtual instances.

Technically you can add permissions at any of the three levels — Organization Node, Folder, or Project. I personally prefer the allotment of IAM permissions at the folder level, as it makes the management of teams and resources much easier and more consistent in a team environment. Only the project owners and admins should have organization-level permissions. IAM permissions have a downward inheritance, which is why folders are the best place to assign permissions.

Often in a startup environment, companies end up setting up their entire tech stack inside a single project. Naturally, having everything in a single place makes your life easier, as you won't have to create VPCs or subnets to make shared resources available inside your other projects; however, when you start growing from a startup toward an enterprise, you will realize that doing everything inside a single project was not a good idea at all. Creating separate projects and a folder structure might seem daunting and complex at first, but trust me, it's all worth it.

The levels of the hierarchy provide trust boundaries and resource isolation in your organization. Here's an example of how you might organize your resources: three projects, each of which uses resources from several GCP services. Resources inherit the policies of their parent resource. For instance, if you set a policy at the organization level, it is automatically inherited by all of its child projects. And this inheritance is transitive, which means that all the resources in those projects inherit the policy too.

Google Cloud Platform Console

This is the place where you manage everything.
The Cloud Console is the place where you can switch between your various projects. All Google Cloud Platform resources belong to a Google Cloud Platform Console project. Thus a project is the place where all your services and apps live. Key features of a project are:

● Track resource and quota usage.
● Enable billing.
● Manage permissions and credentials.
● Enable services and APIs.

Some people think that creating numerous projects would result in higher bills, but this is a false assumption, as projects are billed and managed separately. Your bill depends on the resources you use, so it doesn't matter which project they are in; the billing amount would be exactly the same. Moreover, it might actually help you understand how much each project costs you.

Cloud Security

Security requires a collaborative effort from both the consumer and the provider. One of the major advantages of using any cloud platform is that you don't have to worry about the physical security of your resources, i.e., VM instances, as setting up on-premise security for your data centers and servers is often not feasible, especially for startups and new companies. Moreover, on-premise you would also have to worry about power outages due to some calamity or other reason. Cloud servers are backed up across various regions and numerous continents, so a backup is available even if one region goes down due to unforeseen circumstances. That being said, consumers need to be careful while handling customer-managed security responsibilities, especially the setup of IAM roles across the different members/engineers of your company and limiting access to the specific tasks that a particular engineer needs to perform. Also, network settings should be carefully configured when exposing your apps to external DNS or publicly accessible IPs. Negligence on this front might become catastrophic if some hacker gets wind of your network vulnerabilities.

IAM Roles

There are three types of IAM roles:

1. Primitive Roles: IAM primitive roles apply across all GCP services in a project. Primitive roles are broad. You apply them to a GCP project, and they affect all resources in that project. These are the Owner, Editor, and Viewer roles. If you're a viewer on a given resource, you can examine it but not change its state. If you're an editor, you can do everything a viewer can do plus change its state. And if you're an owner, you can do everything an editor can do plus manage roles and permissions on the resource. The owner role on a project lets you do one more thing too: you can set up billing. Often companies want someone to be able to control the billing for a project without the right to change the resources in the project, and that's why you can grant someone the billing administrator role. Primitive roles provide segregation at a higher level, but personally I do not prefer primitive roles for all tasks because things in real-life production scenarios are not as ideal as primitive roles make them seem. We often need fine-grained roles in order to create appropriate segregation of resources and accesses across all members of our organization.

2. Predefined Roles: These roles apply to a particular GCP service in a project. GCP services offer their own sets of predefined roles, and they define where those roles can be applied. This is the role management that any early-stage startup should use.
However, these roles are often better assigned at the folder level rather than at the user level in an organization, as these are very fine-grained privileges, and maintaining them at the user level would become a tedious task as the organization grows and the number of employees increases. Maintaining this at the user level is not a viable or feasible option at all.

3. Custom Roles: These roles let you define a precise set of permissions. What if you need something even finer-grained? That's what custom roles permit. A lot of companies use a "least-privilege" model, in which each person in your organization is granted the minimal amount of privilege needed to do his or her job. So, for example, maybe I want to define an "instanceOperator" role, to allow some users to stop and start Compute Engine virtual machines but not reconfigure them. Custom roles allow me to do that. Using custom roles is a more advanced level of IAM role management, and early-phase startups should avoid them, as setting them up and maintaining them is a herculean task that needs a good, solid team. If you decide to use custom roles, you'll need to manage the permissions that make them up. Some companies decide they'd rather stick with the predefined roles. Also, custom roles can only be used at the project or organization levels. They can't be used at the folder level.

Service Accounts

Service accounts are used when you want to give access to a resource rather than to a person. For instance, maybe you have an application running in a virtual machine that needs to store data in Google Cloud Storage. But you don't want to let just anyone on the Internet have access to that data; only that virtual machine. So you'd create a service account to authenticate your VM to Cloud Storage. Service accounts are named with an email address, but instead of passwords, they use cryptographic keys to access resources. In this simple example, a service account has been granted Compute Engine's Instance Admin role. This would allow an application running in a VM with that service account to create, modify, and delete other VMs. Here's a more complex example. Say you have an application that's implemented across a group of Compute Engine virtual machines. One component of your application needs to have an editor role on another project, but another component doesn't. So you would create two different service accounts, one for each subgroup of virtual machines. Only the first service account has privilege on the other project. That reduces the potential impact of a miscoded application or a compromised virtual machine.

Summary

Company Hierarchy: Setting up the company hierarchy is a very important task and should never be done in a rushed manner, as it is the base for your entire tech stack and your product; spend as much time as needed in order to set up the best infrastructure for your organization. An Organization Node > Folders > Projects > Resources architecture, customized for your organization, is a good option to begin with (a small conceptual sketch of how policies flow down this hierarchy follows below).

Security: Good collaboration needs to be maintained between the customer and the provider in order to make your apps/services highly secure. Customers mainly need to work at the IAM role level and the network level to secure their organization.

Role Management: Startups should prefer using predefined roles allocated at the folder level, or use service accounts, for role management.
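To make the downward inheritance described above concrete, here is a small, purely conceptual Python sketch. It does not call any Google Cloud API; the node names, member, and role string are invented only to illustrate how a binding granted on a folder becomes visible on every project and resource beneath it.

# Conceptual sketch only; no GCP APIs are used here.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.bindings = []  # (member, role) pairs granted directly on this node

    def grant(self, member, role):
        self.bindings.append((member, role))

    def effective_bindings(self):
        """Policies inherit downward: a node sees its ancestors' bindings plus its own."""
        inherited = self.parent.effective_bindings() if self.parent else []
        return inherited + self.bindings

org = Node("example-org")                              # Organization Node
eng_folder = Node("engineering", parent=org)           # Folder
api_project = Node("api-project", parent=eng_folder)   # Project
vm = Node("vm-instance-1", parent=api_project)         # Resource

# Grant a role once, at the folder level ...
eng_folder.grant("group:backend-team@example.com", "roles/compute.instanceAdmin")

# ... and it is visible on every project and resource below that folder.
print(vm.effective_bindings())

Granting at the folder level, as recommended above, means a single binding covers every current and future project inside that folder.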
https://medium.com/swlh/designing-your-company-architecture-on-google-cloud-platform-be705de7eb64
['Arneesh Aima']
2020-05-19 14:48:17.482000+00:00
['Permission', 'Startup', 'Infrastructure', 'Google Cloud Platform', 'Cloud']
Three Writers to Revisit (or Discover) as Black History Month Ends
Three Writers to Revisit (or Discover) as Black History Month Ends Only one is famous, but all three broke new ground Three very different writers . . . Poet, playwright, activist, and educator Amiri Baraka (1934–2014) was famously controversial, under more than one name. Teacher, mentor, poet, and novelist Margaret Walker (1915–1998) broke new ground for women of color and changed the way black families were depicted in fiction. Attorney, poet, and storyteller Samuel Alfred Beadle (1857–1932) captured vignettes of black life in the south at the turn of the twentieth century. Very different. But all three made significant contributions to modern American literature — and together, they represent a span of time that runs from the Civil War to the election of Barack Obama. I have the privilege of knowing something about both Walker and Beadle because for a long time I wrote biographies of poets for Chadwyck-Healey’s reference series Literature Online. And LION strives to include in its database not only the most famous figures from every era, but also those who may be almost invisible to history. One project involved profiling writers chosen for the Yale Younger Poets Series, which began in 1918. On that list was Margaret Walker—the first black woman ever to win a national literary prize in America. In fact, she was the first person of color to be included in the Yale Younger Poets Series, which selects for publication only one outstanding poet under the age of thirty each year. In 1942, Walker’s impassioned collection For My People was chosen by the Series editor Stephen Vincent Benét (then one of America’s most popular poets). Walker went on to become a respected educator and a successful writer, whose epic novel Jubilee was among the first works of fiction to present a realistic picture of black life in the time of slavery. Written over a period of nearly three decades, Jubilee was finally published in 1966, just after Walker earned a doctoral degree from the University of Iowa. And over the next twenty years, her pathbreaking novel was translated into seven languages, and sold more than a million copies. Jubilee tells the story of Vyry, a character closely based on Walker’s maternal great-grandmother. The first part focuses on Vyry’s life as a slave and her complicated marriage to a free black man. The second depicts the destruction and violence of the Civil War, while the third follows Vyry’s struggles to establish a home for her family after their emancipation. The book’s fifty-eight chapters are rich with details of daily life, stories drawn from folklore, and scenes from history. Through all her trials, Vyry emerges as a resilient, even heroic woman who manages to maintain a strong spirit, but refuses to limit her freedom of mind with the burden of hate. Throughout her life, Margaret Walker was an advocate for women of color and an outspoken commentator on issues of race and gender equality. After graduating from Northwestern University when she was just twenty, Walker worked for the Federal Writers Program (a project of the Depression-era Works Progress Administration), and became part of a politically engaged Chicago writing group led by controversial novelist Richard Wright. These experiences — in conjunction with a deep Christian faith — shaped not only her poetry, but her career as a teacher and mentor. 
In 1949 Walker moved to Mississippi, where she taught for more than twenty years at Jackson State University, raised a family, and founded the Institute for the Study of the History, Life, and Culture of Black People. By the time Walker arrived, Mississippi was very different from the state where Samuel Alfred Beadle lived for most of his life. He was among a number of young black men who studied law and became attorneys in the late nineteenth century, hoping to improve a justice system still severely biased by racism. He was also one of eight black writers in Mississippi whose works were published in the early twentieth century, and gained some recognition outside the state. Despite the fact that he lived in such a politically charged time, Beadle focused most of his first volume — Sketches from Life in Dixie — on traditional themes such as love and courtship, spiritual reflection, and the follies of youth. It contained seven short stories and more than fifty poems, including a long heroic fantasy reminiscent of the Pre-Raphaelites. But there were also commentaries on the problems faced by black citizens, portrayed most notably in the poem “LINES. Suggested by the Assaults made on the Negro Soldiers as they passed through the south on their way to and from our war with Spain.” “LINES” describes the experience of black soldiers who endured sometimes violent racial discrimination from their own countrymen. But with the poem’s refrain, Beadle returns always to patriotism and love of country. For three decades, Beadle maintained a successful legal practice, but only by confronting many difficult challenges — and in the preface to his second poetry collection, Lyrics of the Underworld (1912), he expressed frustration with social conditions in the south. Featured in the book were sixteen striking photographs by Beadle’s son, who went on to become one of Mississippi’s best known black photographers. In 1930, Beadle moved to Chicago — reversing the path taken by Margaret Walker — but lived there for only a short time before his death. By contrast to those two writers, and their journeys between the deep south and the midwest, Amiri Baraka lived for most of eighty years in or near New York City. Born in 1934 as Leroy Jones, he grew up in Newark, New Jersey in a middle-class family, and earned a scholarship to Rutgers University. But within a year, as he began what would be a long-running quest for self-realization, Jones transferred to historically black Howard University. By the time he graduated, Jones was disillusioned with what he saw as an emphasis on upward mobility at Howard, and decided to enlist in the Air Force rather than immediately pursuing a career. Along the way, he had changed the spelling of his name from Leroy to LeRoi. While stationed in Puerto Rico, Jones began an ambitious self-directed reading program, focusing on literature (especially poetry), politics, and economics. These studies were conspicuously outside the Air Force mainstream, however, and raised suspicions that Jones might be a communist sympathizer — leading eventually to his dishonorable discharge. But by then, LeRoi Jones had found his direction. In the years between 1958 and 1965, he became the only black writer to carve out a place in Manhattan’s frenetic, Beat-inspired literary scene. In addition to co-founding two small but influential publications, he attracted increasing attention as both a poet and a playwright. 
By 1964, when his controversial, racially charged play Dutchman won a prestigious Obie Award, Jones was one of the most talked-about writers in New York. But a year later, outraged by the assassination of Malcolm X, he rejected the mostly-white Manhattan milieu, moved to Harlem, and started a short-lived black arts program. After that he returned to his home town of Newark, converted to Islam, and changed his name several times — ending up as Amiri Baraka. For a while he was involved with the Black Nationalist movement, but following a trip to revolutionary Cuba, he became an outspoken proponent of Third World Marxism. These years of political activism and realignment were reflected in perhaps his most important poetry collection, Hard Facts, 1973–1975. And by the end of the 1970s, Baraka had established himself as both an important writer and an impassioned advocate for black identity. In 1980, Baraka’s complex life took yet another turn. He joined the faculty of African Studies at SUNY-Stony Brook, where he would teach for the next two decades, and soon began a period of sustained literary accomplishment. His work garnered a Poetry Award from the National Endowment for the Arts (1981); a New Jersey Council for the Arts award (1982); an American Book Award from the Before Columbus Foundation (1984); a PEN-Faulkner Award (1989); the Langston Hughes Medal for outstanding contributions to literature (1989); a Foreign Poet Award from the Ferroni Foundation (1993); and the Playwright’s Award, Winston-Salem Black Drama Festival (1997). After retiring from Stony Brook at the end of the century, Amiri Baraka once again became a controversial figure —expressing and later recanting anti-Semitic views about the World Trade Center bombing, then refusing public pressure to relinquish his appointment as poet laureate of New Jersey. But eventually the uproar died down, and although his literary work received much less attention in later years, Baraka was revered as a public figure. He continued writing until his death in 2014, and among his late works was an impassioned appreciation of Margaret Walker, who had passed away in 1998. Looking back at these three figures, we see a continuing pattern of courage and determination. Samuel Alfred Beadle had the courage not only to pursue justice but to write poetry in a time and place still grappling with the very issues that had led to civil war. Margaret Walker had the courage to break through racial barriers in the American literary establishment, and the determination to transform her family’s experiences into a unique work of historical fiction. Amiri Baraka had the courage to express controversial ideas, and the creative persistence to reinvent himself several times over. Through their work, each of these writers shed light on the experience of being black in America, and for that they deserve to be not only remembered but greatly appreciated. Especially now.
https://medium.com/literally-literary/three-writers-to-revisit-or-discover-as-black-history-month-ends-33a77817fbfe
['Cynthia Giles']
2020-02-27 02:26:51.676000+00:00
['Writing', 'Literature', 'History', 'Women', 'Essay']
Learn NLP the Stanford Way — Lesson 2
In the previous post, we introduced NLP. To find out word meanings with the Python programming language, we used the NLTK package and worked our way into word embeddings using the gensim package and Word2vec. Since we only touched the Word2vec technique from a 10,000-foot overview, we are now going to dive deeper into the training methods used to create a Word2vec model.

Word2vec family

Word2vec (Mikolov et al., 2013)[1][2] is not a single technique or algorithm. It's actually a family of neural network architectures and optimization techniques that can produce good results learning embeddings for large datasets. The network architectures are shallow, composed of two layers, and are trained to produce vector representations of words given their context. The two model variations that can be used are: Continuous Bag of Words (CBOW) and Skip-gram.

Training algorithms

Continuous Bag of Words

The CBOW model is based on trying to predict a central word from the context words around it. We select a few words from a fixed-size window — the authors recommend a window size of around 5 for this technique — create a dictionary containing the words and their frequencies, and train the model by predicting the central word from the bag of words. The CBOW model doesn't take into consideration the order of the words inside the "bag."

Skip-gram

With the Skip-gram model, we predict the outside words given a central context word. It works in the opposite way of the CBOW model. With this method, the authors recommend using a window of size 10. On performance and accuracy: the CBOW model is faster than Skip-gram, but the Skip-gram architecture works better with infrequent words.

Training Techniques

While the word embeddings created by the network can express the relationships between words, the network itself presents scalability issues. Depending on the vocabulary size, the number of operations needed to calculate the network's output layer is huge. Here are some techniques that are frequently used with Word2vec networks:

Hierarchical Softmax

The hierarchical softmax technique, proposed by Morin and Bengio[1], is applied due to the sheer size of regular vocabularies. In a regular neural network output layer using the softmax function, the computing power needed to produce the probability distribution over a full-sized vocabulary in any given language would be extremely large. We can formalize this: for a vocabulary of size V, we can denote the complexity as O(V). With hierarchical softmax, the complexity is O(log2(V)) instead of O(V). That is achieved through the use of a multi-layer binary tree to calculate the probability of each word.

Simple exercise

Imagine that we are working with the English vocabulary, which in some libraries is represented by roughly 2 million word embeddings — that is our V. That implies a computational cost of O(V) => O(2,000,000). Using hierarchical softmax, we would instead work with O(log2(V)) => O(log2(2,000,000)) => O(~21). If you are searching for a more technical, in-depth explanation, I recommend this blog post.

Negative Sampling

The intuition behind negative sampling, presented by Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean[3], is to update only a subset of weights in the training process, affecting only the target — positive — word and a few of the non-related — negative — words, chosen using a "unigram distribution," in which more frequent words are more likely to be selected as negative examples.
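As a small, hedged illustration of how these choices (CBOW versus Skip-gram, hierarchical softmax versus negative sampling) are exposed as constructor parameters, here is a minimal sketch using gensim's Word2Vec class and its downloader API. The corpus choice and parameter values are my own assumptions for the example and are not the notebook code referenced later in this article.

import gensim.downloader as api
from gensim.models import Word2Vec

# Assumed corpus for illustration: the small "text8" dataset from gensim's downloader.
corpus = api.load("text8")

# Skip-gram (sg=1) with negative sampling (hs=0, negative=5), window of 10.
skipgram_ns = Word2Vec(corpus, sg=1, hs=0, negative=5, window=10)

# CBOW (sg=0) with hierarchical softmax (hs=1, negative=0), window of 5.
cbow_hs = Word2Vec(corpus, sg=0, hs=1, negative=0, window=5)

print(skipgram_ns.wv.most_similar("king")[:3])

Training on text8 takes a few minutes on a laptop; the point of the sketch is only to show where the sg, hs, and negative parameters fit.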
An illustration of the computations needed in a regular skip-gram model and one with negative sampling — Source

Using negative sampling, the computational cost is dramatically lower than with the regular softmax version, since it transforms one multi-class classification task into a few binary classification tasks.

Implementing different Word2vec models using gensim

You can duplicate my Deepnote notebook here and follow me as I walk through this project for the best experience. We will use gensim, a Python library, to create different Word2vec models from the same corpus, just passing different parameters to the Word2Vec class constructor. First, we import the necessary packages and download the corpus. Then we can create different Word2Vec models using the downloaded corpus and different parameters. The following parameters are passed to the constructor to define the training algorithm and optimization technique used (source):

sg ({0, 1}, optional) — Training algorithm: 1 for skip-gram; otherwise CBOW.
hs ({0, 1}, optional) — If 1, hierarchical softmax will be used for model training. If 0, and negative is non-zero, negative sampling will be used.
negative (int, optional) — If > 0, negative sampling will be used; the int for negative specifies how many "noise words" should be drawn (usually between 5–20). If set to 0, no negative sampling is used.

GloVe

GloVe: Global Vectors for Word Representation, presented by Jeffrey Pennington, Richard Socher, and Christopher D. Manning, is another widely used word embedding model.

GloVe is an unsupervised learning algorithm for obtaining vector representations for words. Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and the resulting representations showcase interesting linear substructures of the word vector space. — Stanford GloVe

The main intuition is to scan through the whole corpus and compute the co-occurrence statistics for each word given a context. You can picture a matrix, with rows being the words and columns being the different contexts. Then you would reduce the dimensionality of each row to obtain a word vector by factoring the matrix. GloVe combines two model families: the local context window method and the global matrix factorization method. The main difference between the two approaches is that while the Word2vec model uses local contexts and a shallow neural network, the GloVe model is based on local and global word co-occurrence and uses the matrix factorization method.

Using GloVe with gensim

Using GloVe with gensim is really easy. You can use the api package to download a trained GloVe model. You can also convert a GloVe model to a Word2vec model in gensim using the glove2word2vec script.

Word Senses

Now we know how to create and use the word embeddings produced by the Word2vec and GloVe models. But still — are those vectors enough to represent words accurately in different contexts? Common or long-lived words can have several meanings. How can we create embeddings that capture all the meanings of a word?
In Linear Algebraic Structure of Word Senses, with Applications to Polysemy, Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma, and Andrej Risteski [4] propose a solution: represent the different senses of the same word using a linear superposition — that is, a word embedding built as a weighted average of each sense embedding and its frequency.

The different senses of the word tie — Taken from the second class slides.

Considering that the vector embedding space is high dimensional and sparse, we can reconstruct the different sense vectors from just the weighted average — or the linear superposition — of the senses.

Conclusion

Next, we will discuss word window classification, neural networks, and PyTorch, topics of the Stanford course's second lecture. I hope you enjoyed reading this post. If you have any questions, feel free to leave a comment. Thank you for your time. Take care, and keep coding!

References

Software and Libraries
https://towardsdatascience.com/learn-nlp-the-stanford-way-lesson-2-7447f2c12b36
['Thiago Candido']
2020-12-07 17:25:43.244000+00:00
['Programming', 'NLP', 'Python', 'Data Science', 'Machine Learning']
Build Your First iOS App Using Xcode Storyboard (A COVID-19 App)
Journal about apps development for business and eCommerce from GITS Indonesia, a Google Certified Agency and Google Cloud Partner. | Website: gits.id
https://medium.com/gits-apps-insight/bangun-aplikasi-ios-pertamamu-menggunakan-xcode-storyboard-aplikasi-covid-19-866cbdd5ed14
['Muhammad Rahman']
2020-05-12 06:28:11.620000+00:00
['Mobile App Development', 'iOS App Development', 'Storyboard', 'iOS', 'Xcode']
The Quality of Information You Consume Can Determine the Quality of Your Life
The Quality of Information You Consume Can Determine the Quality of Your Life

6 websites that can make you a smarter person.

The internet is teeming with websites. The majority of them are not worth your time. Think of websites like Buzzfeed, Bored Panda or any other site that is chock full of clickbait and low-quality posts. Despite this, there are corners of the internet where you can find high-quality websites full of meaningful and insightful posts. The quality of the information you consume determines the quality of your life. If you only frequented sites that are full of conspiracy theories, you'd view the world very differently from someone who didn't. The information age has made knowledge more accessible than ever, but it also means we have to sift through the rubbish before we find something valuable.

Learning is a lifelong pursuit and one that doesn't finish when you leave school or university. With information available at our fingertips, you're doing yourself a disservice if you're not seeking to learn more. Lifelong learning is the best way to improve your career chances and life. The internet is arguably the best place to learn due to the resources it contains. Twenty years ago, if we wanted to learn about a specific topic, we had to turn to an encyclopedia or buy a book on the topic. Now, we can hop on our phones and learn about the intricacies of the biases which populate the human mind or chaos theory in a matter of seconds.

When doing so, there are a few places you should turn to first before you go down the rabbit hole of a Google search. These websites are among the best on the internet for providing insightful posts on a wide range of topics. By frequenting these sites, not only will you learn a lot about many different things, but you'll also become smarter as you fill in the gaps in your knowledge and seek to plug the new gaps that arise.
https://medium.com/mind-cafe/the-quality-of-information-you-consume-can-determine-the-quality-of-your-life-2845c0f2b44c
['Tom Stevenson']
2020-12-25 14:27:54.084000+00:00
['Life', 'Self Improvement', 'Education', 'Learning', 'Productivity']
Jobs for Your Personality: How to Own Your INFJ Career
If you're in the middle of a job hunt, you're probably weighing up the usual considerations. Money, travel, responsibility. The usual. But there's one thing we often overlook. Something that shouldn't just affect our job choices, but shape our entire career. I'm talking about our personality type. According to the Myers Briggs Type Indicator, the most popular personality test of its kind, there are 16 personality types. Not sure what type you are? Take the test for yourself. The latest in our Jobs for Your Personality series, this article focuses on INFJ personalities, or INFJs for short. But what makes an INFJ? Well, it's defined by four key character traits:

Introversion
Intuition
Feeling
Judgement

Creative, ambitious and idealistic, INFJs recognise the need for change and take the necessary steps to make it happen, fighting tirelessly for their cause. As such, they are often called 'Advocates'. But Advocates are extremely rare. In fact, INFJ is the rarest personality type in the world. So if you're an INFJ, what skills can you offer? And what career path should you pursue? Let's take a look.

INFJ Careers advice

While INFJs have an innate ability to perceive other people's feelings, they are often misunderstood by those around them. So what makes them tick? And do any of these strengths and weaknesses resonate with you?

INFJ Strengths

Passionate — INFJs are fiercely determined. They will stop at nothing to support their cause, even if it means ruffling a few feathers along the way.
Decisive — Unlike some personality types like INFP, INFJs don't let their inspiration go to waste. Blessed with great willpower, they make excellent decision makers.
Altruistic — INFJs fight for what's right. They want positive change for everyone, not just themselves.
Creative — Compassionate and wildly imaginative, INFJs are naturally creative. They also tend to make excellent writers and orators.

INFJ Weaknesses

Perfectionist — As INFJs are so committed to their cause, work opportunities and relationships can suffer in their pursuit of perfection.
Private — Driven by a need to present their best possible selves, INFJs sometimes find it hard to let their guard down around friends, family and colleagues.
Exhaustion — Because INFJs give it their all, they can succumb to exhaustion if they don't find a way to let off steam.

Best Jobs for INFJ

So what's the bottom line? Well, their intuitive and empathetic temperament makes INFJs a natural fit for careers in healthcare, education and public service. It's no surprise then that famous INFJs include Martin Luther King and Mother Teresa. And while many INFJs explore careers in charity work and advocacy, there is a range of other paths to consider. Here are just a few.

Psychologist

Wonderful listeners and deeply empathetic, INFJs can study and evaluate human behaviour like few others.

Counsellor

Driven by a desire to connect with others, INFJs make wonderful counsellors. Whether that's in schools, hospitals or private practices.

Scientist

The solitary surroundings of the lab are perfect for INFJs. Here they can align their desire for change with their strong work ethic.

Teacher

Inspirational, motivational and compelling, INFJs have all the traits of a perfect teacher.

Writer

INFJs are wonderful communicators. That's why careers in writing, both creative and professional, tend to appeal.

INFJ Careers in Business

You might be wondering. Can INFJs thrive in a business environment? Of course they can!
But to really succeed, INFJs need to find the moral objectives in their work. That's why high wages and seniority may not necessarily appeal. And while the collaborative structure of a corporate environment may hamper their strong personal goals, there is a great selection of INFJ careers in business worth considering.

Entrepreneur

Advocates are more likely than other personality types to go it alone. Entrepreneurship allows INFJs to steer their business to their own moral compass.

HR

Aside from being good judges of character, INFJs have the organisational ability to manage the many facets of human resources.

Corporate trainer

Just like teaching, corporate training allows INFJs to exercise their inspirational qualities to bring about positive change.

INFJ Careers to Avoid

While INFJs are capable enough to succeed in any field, there are some careers that may jar with their personality.

Accounting

Routine work like accounting or data analysis may leave INFJs feeling unfulfilled.

Politics

The public scrutiny and regular conflict of politics may dilute their will for change.

Sales

High pressure and tight deadlines often feel unimportant to INFJs.

Putting INFJ to Good Use

Here's the thing. Any personality type can thrive in any job. But finding a profession that aligns with your personality type may help you achieve long-term job satisfaction. So how can you make the most of your INFJ personality?

Find your cause

To really thrive, INFJs need a cause to get behind. Whether that's environmental change or life coaching, look for ways you can make a difference.

Find a great team

It's important for INFJs to be able to grow and learn alongside those they're working with. So find a team that will help you to help them.

Seek independence

Alternatively, you may prefer to work alone. If so, find a role where you can be productive without being swamped by others.

Focus on your skills

Remember, INFJs are intuitive, empathetic and altruistic. So let these traits guide your career choices.

A Final Word

More than anything else, INFJs need to be able to flex their creativity and insightfulness. However, they also need to know that what they're doing is in line with their principles and helping other people. With all that to consider, finding the perfect job is easier said than done. But here's the good news. INFJs are incredibly intelligent. And while some INFJs struggle to pick a career path for fear of missing out on other opportunities, their creativity and imagination are invaluable in modern business. Not only that, their ability to turn concepts into concrete plans is a skill cherished in every industry. Why not take the test for yourself? Or for more careers advice, visit our insights page. This article was originally published on Advance
https://medium.com/heyadvance/jobs-for-your-personality-how-to-own-your-infj-career-2de478d0103c
[]
2018-11-03 19:06:01.587000+00:00
['Careers', 'Teaching', 'Writer', 'Psychology', 'HR']
Book Review: “Twilight of Democracy” (by Anne Applebaum)
Book Review: "Twilight of Democracy" (by Anne Applebaum) Anne Applebaum's new book is a haunting reminder of just how fragile democracies are and how easily they can be dismantled from within. I've been meaning to read this book for a while, ever since I saw that Anne Applebaum was going to expand the essay that she wrote for The Atlantic into a full-length book. I'm glad that I've read it, but it's a haunting book, that's for sure. It's a potent, and somewhat apocalyptic, reminder that democracies are only as strong as the people who live in them, that there is nothing about them that is ontologically strong and secure. In order to continue to function as they should, they require buy-in and maintenance and support and, above all, faith. Without those things they are subject to corrosion and destruction from within. Applebaum focuses on three different countries that have seen a rising anti-democratic tide: Poland, the UK, and the US. Of the three, it's Poland that has suffered the most dramatic reversals, as the Law and Justice Party has slowly risen to power on a tide of falsehoods, conspiracy theories, and reactionary conservatism. In the UK, meanwhile, a strange sort of nostalgia took hold in the years leading up to the Brexit vote, a yearning for a time when Britain actually did things in the world. As foolish and mendacious as he could be (and still is), Boris Johnson and others like him were able to seize control of the narrative and create enough of a groundswell to achieve the Brexit vote they desired. However, it's important to point out that what's happened in Poland and the UK isn't confined to those nations, and Twilight also includes a discussion of various other nations in Europe that have begun to struggle against the rising tide of nationalism. She discusses Spain and Hungary in particular detail, and as someone who isn't particularly plugged in to European politics — except in the most general sense — it was rather distressing to see that this reactionary sort of ideology has taken root all over the continent, aided and abetted by technology, which makes it significantly easier to spread disinformation. In the hands of such menacing figures as Viktor Orbán of Hungary, such power is very dangerous indeed, especially since he seems committed to nothing less than the rewriting of history itself (a common tactic among authoritarians everywhere). Her discussion of the US by necessity includes a substantial analysis of Fox News and its role in the decline of faith in democracy. Applebaum focuses in particular on Laura Ingraham, who has gone from a relative unknown to one of the most powerful, and most Trumpian, voices at Fox News. Like so many others on the right, she has given in to a certain sort of apocalyptic despair, which means that she is both willing to carry water for an authoritarian figure (part of a whole class of such people that Applebaum refers to as clercs) and smother the contradictions that such an action necessitates. Throughout the book, Applebaum is as concerned with people as she is with processes, in that she often focuses on the individuals whose choices and political actions have led to the current state of affairs. There's a potent truth here, for it's a fact that no authoritarian is able to rise to power without at least some of those in positions of cultural authority buying in, the people whom Applebaum, following in the footsteps of Julien Benda, refers to as clercs.
These are the people who make authoritarianism palatable to the masses, whether through their positions in powerful media or by distorting museum exhibits to support a dominant agenda (which has happened in Hungary). Now, it has to be said that Applebaum does sometimes play a bit of both-sidesism, particularly when it comes to highlighting the supposed excesses of the left. She doesn’t dwell on it too much, but she does call out what she sees as the problems with cancel culture, which she sees as smothering rational political debate. Given that she contributes to The Atlantic (which has made the alleged power of cancel culture one of its most frequently reported-on phenomena), and that she has worked with and for a number of conservative groups and individuals, I’m not terribly surprised at this line of attack on her part, nor do I think it undercuts the validity of the book’s more substantial arguments. However, the very fact that she puts them into the conversation at all shows the extent to which many of the most prominent voices in conservatism still cling to their old pieties, refusing to take accountability for their own culpability for the state in which we find ourselves. Indeed, one of the frustrating things about this book is Applebaum’s lack of self-reflection, particularly when it comes to her friendships with some of the very people that she criticizes (one has to wonder what type of person would be friends with Boris Johnson in the first place). Twilight of Democracy is part of a growing body of work devoted to the study and analysis of what it is that makes democracies work and why they are so fragile, and for that reason it’s necessary reading. Even though Trump has been defeated at the ballot box and is due to leave the White House on January 20, it’s important to remember that, as of this writing, his assault on the electoral system itself is ongoing, with a “final showdown” set to take place on January 6, when Congress is supposed to certify the results from the Electoral College. Given this context, this book is thus something of an intellectual call to arms. It reaches out to each of us, asking us to do our part to ensure that democracy doesn’t go the way of so many other failed political systems. As she reminds us near the end of the book: “no political victory is ever permanent, no definition of ‘the nation’ is guaranteed to last, and no elite of any kind […] rules forever.” There’s something more than a little terrifying about the idea that history is one long cycle, that every political victory must be re-fought again and again and again. But such, alas, is the nature of modernity.
https://medium.com/reluctant-moderation/book-review-twilight-of-democracy-by-anne-applebaum-878ed0bd3677
['Dr. Thomas J. West Iii']
2020-12-28 18:07:39.716000+00:00
['Politics', 'Democracy', 'History', 'Authoritarianism', 'Books']
Why “Data Looks Better Naked”
Let's explore from a historical standpoint. This will allow us to better understand how what was once seen as "well designed" now looks "overly detailed", and how the teachings of 'why data looks better naked' came about. The knowledge behind why data looks better naked comes from the teachings of Edward Tufte, an artist and statistician. As a statistics professor at Yale University, Tufte has written, designed and published four books dedicated to the knowledge of data visualization. In 1983, Tufte published his first book, The Visual Display of Quantitative Information, which focused on the theories and practices behind designing data graphics (statistical graphs, charts and tables). It was in this book that Tufte introduced the concept of "data-ink". Data-ink is "the non-erasable core of the graphic, the non-redundant ink arranged in response to variation in the numbers represented". He goes on to explain that we should "remove all non-data-ink and redundant data-ink, within reason." Doing so creates a more cohesive graphical design when it comes to data visualization. The GIFs below, created by designer Joey Cherdarchuk, illustrate the step-by-step process of "stripping away the excess" in order to make a graph visually "naked". Column Chart Table Chart Let's follow the teachings of Edward Tufte, with the support of Joey Cherdarchuk's visuals, when it comes to data representation. Though we are accustomed to the old-style way of presenting data, let's push forward in the direction of minimalism. To ensure that your data comes off as clearly as possible, strip down the data (rather than dress it up). This will make the data more "effective, attractive and impactive" when the method of "less is more" is put to use.
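To make the idea concrete in code, here is a minimal matplotlib sketch of the same "stripping away the excess" exercise on a simple column chart. The data, colors and labels are illustrative assumptions rather than anything taken from Tufte's or Cherdarchuk's examples.

import matplotlib.pyplot as plt

# Illustrative data, not taken from the article
labels = ['A', 'B', 'C', 'D', 'E']
values = [23, 17, 35, 29, 12]

fig, ax = plt.subplots()
ax.bar(labels, values, color='lightgray')

# Remove non-data-ink: the surrounding box, tick marks and the y-axis
for spine in ax.spines.values():
    spine.set_visible(False)
ax.tick_params(left=False, bottom=False)
ax.set_yticks([])

# Label the bars directly so no gridlines or value axis are needed
for i, v in enumerate(values):
    ax.text(i, v + 0.5, str(v), ha='center')

plt.show()

Each removal keeps the data-ink (the bars and their values) while dropping ink that encodes nothing, which is exactly the "less is more" effect the GIFs demonstrate.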
https://medium.com/comms-planning/why-data-looks-better-naked-ac2adb872378
['Naja Bomani']
2016-08-15 16:06:17.734000+00:00
['Simplicity', 'Minimalism', 'Design', 'Data Visualization', 'Data']
Ionic & Felgo: App Development Framework Comparison
Cross-platform development is making a lot of noise in today's dev world, and there is a reason why. A shared codebase can save a lot of time if you want to target multiple platforms. There are several approaches for creating cross-platform applications. But which one is better? This time you will see a comparison of Ionic and Felgo. Differences between Cross-Platform Frameworks Before we start, let's take a peek at the history of cross-platform development. In the early days of cross-platform mobile app development, apps were displayed in a WebView. A WebView is nothing more than a native browser window without any extra interface. The HTML engine of the browser took care of rendering all app elements. The idea was to create and run a web application with a native look and feel. This way developers could deploy to many platforms. The platform just had to provide the browser technology. This approach is still used by many frameworks, including Ionic. On the other hand, a standard web app running inside a browser cannot access all the functionalities of a target device that a modern app needs. That is why tools like Cordova became popular. It provided a web-to-native bridge. The bridge granted access to functionalities like localization in a WebView. Ionic also provides such a bridge with Capacitor. But in reality, it is nothing more than the good old Cordova with some upgrades. In summary, if you want to create an application using the Ionic framework, you will need to use a web technology stack: HTML, CSS, and JavaScript. Other frameworks, such as AngularJS or React, would also be useful to give the app the desired modern feel. Hybrid Frameworks and Rendering with a WebView Hybrid frameworks, like Ionic, render their content within a WebView. This WebView is wrapped with APIs to access native device features. However, this approach has some disadvantages: The performance of your app depends on the internal version of the WebView used in the targeted OS. This dependency can cause different behaviors and performance characteristics on different OS versions (e.g. Android 6.0 vs 9.0). You will depend on Apple and Google to add features and improve the performance of the WebView. There are features that depend on web engines like WebKit and Chromium for both iOS and Android. Some of the CSS fields supported by the JavaScript standard are an example of such a feature. It makes maintainability harder as you need to support multiple WebView browser versions and types. Web renderers were designed to display websites or multimedia content in a browser. They do not render user interfaces & animations very efficiently. Because of that, performance is significantly slower compared to native apps. The Felgo Approach Let's focus now on how Felgo handles cross-platform rendering. Qt with Felgo compiles real native applications without the need for a WebView. Felgo renders its UI elements with the Qt rendering engine built on C++ & OpenGL ES / Vulkan / Metal. This so-called "scene graph renderer" is optimized for performance. It also guarantees that the UI will look the same on any device & platform. Furthermore, it is also possible to keep your existing native iOS, Android, or C++ code. You can simply reuse your own native code with Felgo thanks to its architecture. The core language behind Qt & Felgo is C++, which is famous for its performance and stability. However, it is not ideal for creating a modern UI and cutting-edge applications. So Qt introduced a new language called QML.
QML is a declarative language that lets you compose your UI as a tree of visual items, very similar to HTML. For adding application logic, QML relies on JavaScript. Developers can easily get started if they are familiar with these web technologies. Felgo comes with everything you need to build stunning applications in record time. To achieve native performance, all QML items actually translate to performant C++ components in the backend. Your QML and JavaScript get executed and visualized by a highly optimized C++ renderer. Qt also compiles all components Just in Time (JIT) or, if configured, Ahead of Time (AOT). This way, QML can achieve native performance. Qt & Felgo not only allow you to develop cross-platform for iOS and Android. You can also run your applications on desktop, web and embedded systems. Inside the Frameworks The devil is in the details, and that is why it's crucial to take a look inside the architecture of both frameworks. Let's start with Ionic. The browser renders your code, and Ionic needs a bridge to access OS functionalities like the camera: You have to rely on this bridge to access native features. It is not possible to build an application that directly uses these platform APIs. But what about Felgo? You won't need any additional bridge to access the OS functionalities. You have direct access to all platform features with the native code in your application. This also includes the highly performant QML Engine, which is part of your Qt application: This architecture ensures consistent performance on all target platforms and devices. Framework Business Potential When considering business potential, there are some things to keep in mind. First is, of course, your current staff's experience. When developing with Ionic, you need a team with quite a lot of knowledge about web app development. If they are lacking some of these skills, training will take some time. When considering Felgo, the main skill your team should have is knowledge of JavaScript, because QML is derived from it. As JS is one of the most popular programming languages, the probability that your fellow programmers have this skill is quite high. If you already work with programmers who have JavaScript knowledge, then it's easy to reuse their skills in a new Felgo project. Another aspect to consider is the supported platforms. Apart from the web, Ionic supports only iOS and Android. With Felgo, you can also deploy to Windows, Mac, Linux, and embedded devices. The variety of platforms is much bigger when using Felgo. Framework Documentation Many developers consider documentation one of the most important factors, not only for learning a new technology but also for reducing development time. When creating an app, you will sooner or later bump into issues that require additional knowledge. Documentation is the best place to look for it. If it is high quality, you will solve the problem in no time. Otherwise, you will struggle, scrolling through many pages and hoping to find a detailed answer. Both Felgo and Ionic offer great documentation for browsing APIs, examples and demos. Learning Curve Comparison When taking the first steps with Ionic, you need to learn quite a lot of technologies, like HTML, Sass (SCSS), and JavaScript. On top of that, you should also know a front-end framework like Angular. It uses the TypeScript language, which you will also need to be familiar with. You might also use React to give the app the desired modern look and feel.
There's a lot to learn if you aren't an expert in web development but would like to create mobile apps with a cross-platform framework. Besides, Angular and React are not known for being easy to learn. To learn Felgo, you need some QML skills and enough JavaScript to write functions in QML. QML, due to its JSON-like notation, is very friendly for new users. The gap between Ionic's and Felgo's necessary technology stacks is rather big, especially if you are not specialized in any kind of web app technology. To summarize, the learning curve of Ionic can be much steeper than Felgo's, especially when learning the chosen front-end JS framework at the same time. Framework Pricing and Licensing For personal usage or "low-budget" developers, both of the frameworks are free. If you'd like to include additional services and tools in your app, you can get professional plans to ensure that you get the most out of the solution. Felgo offers advanced features like analytics and push notifications, whereas Ionic gives you more than 100 live updates per month in its paid licenses. Hello World Mobile App Comparison Architecture and functionalities are one thing. But when learning a certain technology, simplicity and clarity are a completely different matter. How do we compare these factors? It's quite simple: let's write a simple app! Proceeding with Ionic, you can see at the beginning that creating the logic and design will need two separate files for every page. You'll also need to write the code in two different notations: HTML and TypeScript. Now, let's look at the Hello World app written with Felgo: Run this code on your iOS or Android device now, with Live Code Reloading Here you can see how you can create the logic and design in the same QML file. This lowers the entry barrier to the technology. QML is also easier to read than HTML, with less syntax overhead. This especially matters when dealing with large projects where a single page can contain many objects. At the same time, the application logic with TypeScript and QML is quite similar because both are based on JavaScript syntax. Comparing Integrated Development Environments When comparing frameworks, it is also worth taking a look at integrated development environments (IDEs) and what they can offer you to make development more efficient. Felgo isn't just a framework for cross-platform development. It also offers a whole set of tools that you can use throughout the entire lifespan of the application. Felgo comes with the full-featured Qt Creator IDE. You also have access to QML Hot Reload, which lets you view edits of QML code in real time. This feature comes with a tool called Felgo Live Server. It lets you deploy apps to multiple real devices via a network. In the IDE, you have access to built-in documentation. Here you can find info about Felgo types as well as all Qt classes. Once you write some code, you can use an integrated debugger and profiler to analyze your app's execution flow. In this matter, Ionic falls behind, as it has no dedicated IDE. Thus, you need to rely on tools that are not fully adjusted to this framework. With Felgo you also get access to Cloud Builds. This service allows you to build and release cross-platform applications to app stores like the Apple App Store and Google Play. You can integrate it with your code repository and CI/CD system, so you don't need to build manually on every platform. With Cloud Builds you don't even need a MacBook to release iOS applications.
Cross-Platform Framework Comparison Overview: What is the best cross-platform framework? The answer to this question does not really exist — there is no silver bullet. Instead, you should ask “What framework is best for me and my project?”. Several factors can help you decide on a particular technology. To ease the decision-making process, you should ask yourself a few questions: What programming language do you or your team have experience in? What are the requirements of your app? What tooling helps you to work more efficiently? What platforms do you want to support, now and also in the future? Do you have an existing code you want to reuse? Who can help me if I run into problems? Every technology has its pros and cons and your use-case matters. If you are looking for a reliable, efficient, and easy-to-learn framework, you should definitely consider having a look at Felgo & Qt. Related Articles: QML Tutorial for Beginners 3 Practical App Development Video Tutorials Best Practices of Cross-Platform App Development on Mobile More Posts Like This Flutter, React Native & Felgo: The App Framework Comparison Continuous Integration and Delivery (CI/CD) for Qt and Felgo QML Hot Reload for Qt — Felgo
https://medium.com/the-innovation/ionic-felgo-app-development-framework-comparison-ba84de105a20
['Christian Feldbacher']
2020-07-08 10:16:51.360000+00:00
['Mobile App Development', 'Programming', 'Technology', 'Apps', 'Framework']
Watson Text to Speech Releases 5 New Neural Voices!
We are pleased to announce that IBM Watson Text to Speech, a cloud service that enables users to convert text into natural-sounding audio, has introduced five new neural voices (four US English voices and a German voice). These new voices are now generally available in our public cloud offering. Take A Listen! Click on the names to listen to the new voice samples: US English — Emily "If you know your party's extension number, you can enter it at any time. For Sales and Customer Service, press 1." US English — Kevin "For all other inquiries, please stay on the line, and a representative will be happy to assist you." US English — Henry "Our business hours are Monday through Friday from 8 am to 7 pm except on major holidays. Please leave a message with your name, contact information, and the nature of your call, and someone from the appropriate department will contact you on the next business day." US English — Olivia "All of our agents are currently busy. Please hold, and we will answer your call as soon as possible." German — Erika "Alle unsere Mitarbeiter sind derzeit im Gespräch. Bitte bleiben Sie dran, wir werden Ihren Anruf so schnell wie möglich weiterleiten." (In English: "All of our agents are currently on a call. Please stay on the line, and we will forward your call as soon as possible.") Learn More Interested in discovering our TTS capabilities, languages and voice technologies? Click here to learn more. Try out our TTS languages and voice technologies for yourself with this demo. Or read the science behind the technology of our new neural voices in our whitepaper: "High quality, lightweight and adaptable TTS using LPCNet".
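If you would like to try one of the new voices programmatically, a minimal sketch with the ibm-watson Python SDK could look like the one below. The API key, the regional service URL and the exact voice identifier (written here as 'en-US_EmilyV3Voice') are placeholders and assumptions on my part; check the service documentation for the official identifiers of the new voices.

from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholders: use your own credentials and regional service URL from IBM Cloud
authenticator = IAMAuthenticator('YOUR_API_KEY')
text_to_speech = TextToSpeechV1(authenticator=authenticator)
text_to_speech.set_service_url('https://api.us-south.text-to-speech.watson.cloud.ibm.com')

# The voice name below is an assumption; look up the exact identifier in the docs
response = text_to_speech.synthesize(
    'All of our agents are currently busy. Please hold, and we will answer your call as soon as possible.',
    voice='en-US_EmilyV3Voice',
    accept='audio/wav'
).get_result()

with open('greeting.wav', 'wb') as audio_file:
    audio_file.write(response.content)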
https://medium.com/ibm-watson/watson-text-to-speech-releases-5-new-neural-voices-2476863c5e23
['Vijay Ilankamban']
2020-03-14 14:01:00.991000+00:00
['Artificial Intelligence', 'Speech Recognition', 'Machine Learning', 'Announcements', 'Watson Text To Speech']
How to Compare 200+ Cryptocurrencies with Open-Source CoinScraper Module
A bit dramatic, I know, but it's a pretty big deal if you are an overbought/oversold kind of trader. Most traders use technical analysis and their favorite indicators to make smart decisions in the market. These could range from simple moving averages to exponential moving averages, depending on the trader. Some may prefer Moving Average Convergence Divergence (MACD) and others on-balance volume (OBV). Everyone I know uses different tools, and we all use some of the same tools as well, but this post is not about Bollinger Bands, Fibonacci Retracements, or Ichimoku Clouds. This post is about the coinscraper client, and while those tools are nice, you need the data before you can use any tool! The coinscraper client was designed to compare the top 200 supported assets on KuCoin's exchange. Each crypto asset's historical and fundamental data is sourced from coinmarketcap.com. After all the data is collected, it is preprocessed before calculating the relative strength index (RSI). We end up with a summary table of the top 200 assets along with their relative strength. The client has a few requirements/dependencies; please see the requirements.txt file, or install the following:

import requests
import pandas as pd
import time
import random
import math
import numpy as np
from math import pi
import matplotlib.pyplot as plt
# %matplotlib inline
from os import mkdir
from os.path import exists

To install the client module, download the .py file and the demo notebook for Google Colab. You can download the files from the repo here.

Connecting to Client
Now that you have the client module installed, open the Demo notebook and run this cell. The demo will walk through some errors and show you how to fix them if they happen to you while running this client.

from coinscraper import coinscrapper
today = 'YYYYMMDD'
client = coinscrapper(today)

The client module will require Google authentication, and will also require a Selenium web driver. I would suggest running the client in Google Colab to test it out. If you are experiencing any errors, please make sure you have uploaded the files to your Colab Notebooks folder on Google Drive. Feel free to change the file path or change any functionality.

Pulling Summary
The coinscraper client is filled with various methods, but the all-purpose one is the .summary() method. This function was designed to process all the actions: getting the list of assets that are traded on KuCoin; creating links for historic data; converting HTML tables to dataframes; munging the data; and generating a .csv file and an HTML table with all the results.

client.summary()

Access Saved Datasets
Using the client we can also access the datasets saved during the summary process. The datasets are saved in a Python list and will contain historic data for each asset. The fundamental data and the RSI datasets are separate Python lists of datasets.
client.technical_data
client.fundamental_data
client.rsi_data

Below is an example of how to access the saved datasets using the client.

# Historic Price Data
list_of_historic_data = client.technical_data
print('Historic Price Data: ')
display(list_of_historic_data[0].head())

# Fundamental Data
list_of_fundamental_data = client.fundamental_data
print(' Fundamental Data: ')
display(list_of_fundamental_data[0].set_index(0).stack())

# RSI Data
list_of_RSI_data = client.rsi_data
print(' RSI Data: ')
display(list_of_RSI_data[0].tail())

The historic price output is a dataframe with Date, Open, High, Low, Close, Volume and Market Cap columns (in this example, five daily rows for Monero from August 16-20, 2020). The fundamental data output is a stacked series of coinmarketcap.com fields such as Monero Price ($93.81 USD), Monero ROI (3,693.27%), Market Rank (#16), Market Cap ($1,657,837,591 USD), 24 Hour Volume, Circulating/Total/Max Supply, All Time High/Low, and the high/low ranges over the past 52 weeks, 90 days, 30 days, 7 days and 24 hours. The RSI output is the same daily price dataframe with additional columns holding the date key and the computed RSI value for each day (for example, 69.21 on 2020-08-20).

Access RSI Charts
Using the client we can access the RSI charts saved during the summary process. The charts are saved in a Python list, and also saved to your authenticated Google Drive. The plots are matplotlib objects that just require the .show() method.

client.plots
client.candle_sticks (coming soon)

Below is an example of how to access the saved charts using the client.

import os
os.listdir('drive/My Drive/CoinScraper/charts/monero/')
# ['RSI-20200820.png']

img = plt.imread("/content/drive/My Drive/CoinScraper/charts/monero/RSI-20200820.png")
plt.figure(figsize=(32,18))
plt.axis('off')
plt.imshow(img);

Access the Log File
Using the client we can also access the log file saved during the summary process. The log file is a text file that shows which processes are running, or errors that occur.

client.log

Below is an example of how to access the log.

client.log

Here is the table html code generated for web:
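The post relies on the client to produce the RSI column, so the calculation itself is not shown. As a generic illustration (not the module's internal code), a 14-period RSI can be derived from a close-price series with pandas roughly like this; the column name in the usage example is an assumption based on the scraped coinmarketcap.com table layout.

import pandas as pd

def compute_rsi(close: pd.Series, period: int = 14) -> pd.Series:
    # Day-over-day price changes
    delta = close.diff()
    # Split the changes into gains and losses
    gains = delta.clip(lower=0)
    losses = -delta.clip(upper=0)
    # Simple rolling averages; some implementations use Wilder smoothing instead
    avg_gain = gains.rolling(window=period).mean()
    avg_loss = losses.rolling(window=period).mean()
    rs = avg_gain / avg_loss
    return 100 - (100 / (1 + rs))

# Example usage with one of the client's historic dataframes (column name assumed)
# df = client.technical_data[0]
# df['RSI'] = compute_rsi(df['Close**'].astype(float))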
https://medium.com/the-innovation/how-to-compare-200-cryptocurrencies-with-open-source-coinscraper-module-269d5d2e1f15
['Jacob Tadesse']
2020-08-24 18:18:34.263000+00:00
['Web Scraping', 'Python', 'Pandas', 'Cryptocurrency', 'Data Science']
The festival of families
This week in East Asia — when the moon is its roundest and brightest on the 15th day of the 8th month of the lunar calendar — we celebrate the Mid-Autumn Festival. Traditionally, the festival gives thanks for the harvest, but it is also a time to appreciate harmonious unions and families coming together. A timely coincidence, because my in-laws are visiting from the UK. We’ve been experiencing Hong Kong tourist hotspots including the bright lights of Victoria Harbour, the Big Buddha that commands a view over Lantau Island, and a boat ride to the remote island of Po Toi. During every adventure we have witnessed other families — smiling, arguing, laughing, but nonetheless spending time together. This same week, I attended the funeral of a talented friend who left us too soon. Her estranged family arrived from opposite sides of the world to mourn her loss, each in their own way but united in grief. The tensions between the family members stretch vertically and horizontally through the ages Families. They unite us and they tear us apart. My first novel is one of a trilogy covering four generations of familial shenanigans. The tensions between the family members stretch vertically and horizontally through the ages, like a delicate web that masks its strength. Families make for complicated dynamics, and I am grateful to Beth Miller — author of When We Were Sisters — for her thoughts on how to deal with families when writing fiction: “The key thing I do when writing is to focus on the dynamics between each of the various members. If you have four people in a family, you have at least eleven possible configurations of relationship, all with their different complexities, secrets and tensions. How does what A say or do impact on B? How do things change if C comes on the scene? The writer here is like a family therapist. Both writer and therapist have to tease out the dynamics, work out how each pairing, each triad, each quartet, changes depending on who’s there, what new stuff they’re bringing, their shared and separate histories.” Gotham Writers provides helpful guidance on determining which family member should be the main protagonist and how to write different POVs to tell the broader family story. At the same time, it warns of the risk of more than one character taking centre stage and diluting the focus and cohesiveness. Over the centuries, brightly lit lanterns have become symbolic of the Mid Autumn Festival. Just like family members, lanterns come in all shapes, sizes and colours. As my family sat on the rooftop, and marvelled at the glorious full moon and the array of colourful lanterns bobbing in unison in the warm sea breeze, we chatted about everything and nothing, and were grateful for our differences and for our unity. Originally published at www.rjverity.com on October 6, 2017.
https://medium.com/words-on-writing/the-festival-of-families-be2cde72b317
['Rj Verity']
2018-05-01 00:37:05.471000+00:00
['Rj Verity', 'Writing', 'Writing Tips', 'Writer', 'Words On Writing']
Why Dropping Out of School Will Make Your Life Better
I always hated school, just like a lot of you I suppose. So I quit two years ago, and I’m now attending a professional course. Something very far from the traditional school system. But why? Why is that a good idea that you should consider? Well… there are a couple of reasons. Some of them are more related to the system itself, and some about your mental health and time. Let’s admit that school is useless, come on.
https://medium.com/illumination/why-dropping-out-of-school-will-make-your-life-better-77558eef68b6
['Alyssa Di Grazia']
2020-12-28 10:42:12.372000+00:00
['Life Lessons', 'Self Improvement', 'Writing', 'Life', 'Change']
assimilated agony
assimilated agony how often have we turned a blind eye while others begged for us to see?
https://medium.com/a-cornered-gurl/assimilated-agony-774c6f254167
['Tre L. Loadholt']
2017-10-05 23:33:50.674000+00:00
['Micropoetry', 'Love', 'Writing', 'Compassion', 'A Cornered Gurl']
Why Sidewalk News is bringing local news amongst the people
One of my core journalistic beliefs is that, for a community to thrive, all of its members must have access to high quality local news. And that often isn’t the case — as a 2018 report by Fiona Morgan and James Hamilton determined, “Poor people get poor information, because income inequality generates information inequality.” But I believe there’s a way to use public infrastructure that already exists almost everywhere in the country to bring the news amongst the people — outdoor advertising. There are 3.3 million out-of-home (OOH) advertising spaces in the United States, and the format already supports more than just ads — the FBI says that the use of digital billboards has played a part in arresting 50 of the country’s most wanted criminals in the last decade. What Sidewalk News will do is help local news outlets use OOH advertising spaces like bus shelters and street furniture to engage with their community directly by putting their news onto these platforms. Doing so serves three purposes. One is providing news to all members of a community without concern for their technological prowess or ability to pay. As media becomes more digitally-focused, lower-income and less-educated Americans are less likely to have access to high quality news than their wealthier, more-educated peers. Using OOH spaces levels the playing field by making all people equally able to consume this news. Another purpose is to give community members a personal connection to a news story that may otherwise seem esoteric. Because each “news ad” will be tailored to its specific display point, the news can be “ultra hyper localized” to that particular spot. For instance, someone sitting in a bus shelter will learn from Sidewalk News about how a city-wide issue will affect the street she’s standing on, or the bus line she’s about to take. This will drive civic engagement as people become more aware about the issues surrounding them. The third purpose is to advertise the media outlet by showing off what they do best — local news. By posting local news on outdoor displays, a reader on the street will see how the media outlet is covering news that is relevant to them. This also builds credibility and brand awareness of the outlet in the community, particularly with potential readers who may not be as familiar with their work. With news outlets overstretched and under resourced, I don’t imagine that it would be realistic for my partner news outlets to fund this project. I had originally viewed this project as one that could only be funded through donations from journalism organizations or benevolent individuals interested in fostering community engagement. Increasingly, I think the model has the potential for multiple revenue streams. One avenue for revenue will still be philanthropy from groups interested in local news, civic engagement, and public spaces. I believe these investments will be necessary to get projects started and build the infrastructure required to make them sustainable. But ultimately, there will be an option for sponsorship. Local companies and community organizations will be able to sponsor these “news ads” to show their commitment to supporting local news. Out-of-home advertising exists almost everywhere; it’s already a part of our lives. I believe we have an opportunity to make it part of the way we consume the news.
https://medium.com/journalism-innovation/why-sidewalk-news-is-bringing-local-news-amongst-the-people-29bd0277c6a9
['Elise Czajkowski']
2019-04-04 16:08:59.940000+00:00
['Journalism', 'Sidewalk', 'Advertising']
Artificial Neural Network From Scratch Using Python Numpy
Finally, let's build the ANN

ANN

So here we have: an input node with some inputs (real numbers; x1, x2, ..., xn), their weights (real numbers; w1, w2, ..., wn) and a bias (a real number). These parameters (weights and bias) connect to our hidden nodes, where we compute the weighted sum (sigma, or z) over all inputs and their weights, then apply a non-linear activation function (like sigmoid, tanh, etc.), and this generates our final output (y). Now, in our model we have a 28 x 28 pixel image (784 pixels in total), and these pixels are the inputs that go to our input node, then to the hidden node (a single hidden layer), and then generate an output (a single digit between 0 and 9).

Sigmoid Activation Function

Here our y-hat (the output of a node) is sigmoid(dot product of weights and input 'x' + bias).

Implementing the sigmoid activation function:

# activation sigmoid
def sigmoid(x):
    return 1. / (1. + np.exp(-x))

Cross-entropy Loss (a.k.a. Cost, Error) Function

For 'n' classes and a single sample (a single digit image), we have the formula:

L = -sum_i( y_i * log(y_hat_i) )

But for 'n' classes and multiple (m) samples (multiple digit images), we have the formula:

L = -(1/m) * sum_j sum_i( y_ij * log(y_hat_ij) )

Implementing the cross-entropy loss function:

# cross-entropy for our cost function
def compute_multiclass_loss(Y, Y_hat):
    L_sum = np.sum(np.multiply(Y, np.log(Y_hat)))
    m = Y.shape[1]
    L = -(1/m) * L_sum
    return L

Back-propagation Using the Gradient Descent Algorithm

Back-propagation

Back-propagation is just a way of propagating the total loss back into the neural network to know how much of the loss every node is responsible for, and subsequently updating the weights in such a way that minimizes the loss by giving the nodes with higher error rates lower weights and vice versa.

Gradient Descent

Gradient descent is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. To find a local minimum of a function using gradient descent, we take steps proportional to the negative of the gradient of the function at the current point.

Formula: new weight = previous weight - learning_rate * gradient

Gradient Descent Algorithm

Computing Gradient

Finally, let's implement it and train our model:

n_x = X_train.shape[0]
n_h = 64
digits = 10
learning_rate = 1
epochs = 2000

Initializing weights and bias:

W1 = np.random.randn(n_h, n_x)
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(digits, n_h)
b2 = np.zeros((digits, 1))

X = X_train
Y = Y_train

Now, training starts (m, the number of training samples, is assumed to be defined earlier in the post):

for i in range(epochs):
    # forward pass
    Z1 = np.matmul(W1, X) + b1
    A1 = sigmoid(Z1)
    Z2 = np.matmul(W2, A1) + b2
    A2 = np.exp(Z2) / np.sum(np.exp(Z2), axis=0)  # softmax output

    cost = compute_multiclass_loss(Y, A2)

    # backward pass
    dZ2 = A2 - Y
    dW2 = (1./m) * np.matmul(dZ2, A1.T)
    db2 = (1./m) * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.matmul(W2.T, dZ2)
    dZ1 = dA1 * sigmoid(Z1) * (1 - sigmoid(Z1))
    dW1 = (1./m) * np.matmul(dZ1, X.T)
    db1 = (1./m) * np.sum(dZ1, axis=1, keepdims=True)

    # gradient descent update
    W2 = W2 - learning_rate * dW2
    b2 = b2 - learning_rate * db2
    W1 = W1 - learning_rate * dW1
    b1 = b1 - learning_rate * db1

    if (i % 100 == 0):
        print("Epoch", i, "cost: ", cost)

print("Final cost:", cost)

(Plot: model loss over the training epochs.)

Generating our predictions and checking accuracy:

Z1 = np.matmul(W1, X_test) + b1
A1 = sigmoid(Z1)
Z2 = np.matmul(W2, A1) + b2
A2 = np.exp(Z2) / np.sum(np.exp(Z2), axis=0)

predictions = np.argmax(A2, axis=0)
labels = np.argmax(Y_test, axis=0)

# confusion_matrix and classification_report are from sklearn.metrics
print(confusion_matrix(predictions, labels))
print(classification_report(predictions, labels))

Okay, we got 92% accuracy, which is pretty good.
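As a small addition that is not part of the original post, the forward pass can be wrapped into a reusable helper so the trained parameters are easy to apply later; it assumes the sigmoid function and the trained W1, b1, W2, b2 from above.

import numpy as np

def predict(X, W1, b1, W2, b2):
    # Forward pass: sigmoid hidden layer, softmax output layer
    Z1 = np.matmul(W1, X) + b1
    A1 = sigmoid(Z1)
    Z2 = np.matmul(W2, A1) + b2
    A2 = np.exp(Z2) / np.sum(np.exp(Z2), axis=0)
    # Each column is one sample; the row index with the highest probability is the predicted digit
    return np.argmax(A2, axis=0)

# Plain accuracy without scikit-learn
predictions = predict(X_test, W1, b1, W2, b2)
labels = np.argmax(Y_test, axis=0)
print("Test accuracy:", np.mean(predictions == labels))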
https://medium.com/analytics-vidhya/artificial-neural-network-from-scratch-using-python-numpy-580e9bacd67c
['Madhav Mishra']
2020-09-08 01:15:42.030000+00:00
['Programming', 'Deep Learning', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
Boost your marketing strategy with RFM
Okay, now that we have combined these features into one dataframe, let's visualize it! (code link) This is nothing but a huge, messy pile of customers, which begs the question: How are we going to segment them? Most data scientists try to answer this question by implementing K-Means clustering, which is basically a machine learning algorithm used for separating data into clusters based on the data points' distance to each other: (https://stanford.edu/~cpiech/cs221/handouts/kmeans.html) However, I am an advocate of a manual approach, since ML algorithms do not consider any industry-specific dynamics. Accordingly, I will follow two essential steps to ace this challenge: 1- Quantile Transformation and Ranking In simple terms, a quantile is where a sample is divided into equal-sized subgroups. Quartiles are also quantiles; they divide the distribution into four equal parts. In this case, I separated the RFM values into 4 quartiles and simply labeled them based on their values (very bad, bad, good, very good). This is what the dataframe transformed into: 2- Segmentation by RFM rankings This is the part where business understanding comes into play. I wrote a Python function with a bunch of 'if-else statements' to define segments based on their RFM rankings. For instance, I extracted a customer segment with 'very good' levels of F-M, yet a 'very bad' level of R. This means that the customer was once very active and valuable, but it's been a long time since the latest transaction. Hence, the company needs to win this group back urgently! The function below reflects my personal approach, and one can fine-tune these classes based on different market dynamics. And here we go! We successfully created our customer segments to efficiently design upcoming marketing strategies! An elegant treemap comes in handy in terms of reflecting the big picture: (code link) Customer Treemap by Segments Below is an analysis of the recently created customer segments through R-F-M. Notice how each group has its own behavioral pattern and differentiates itself meaningfully.
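The full code lives in the linked notebook rather than inline; as a rough sketch (not the author's exact code), the quartile ranking and a simplified labelling function could look like the following, where the dataframe and column names ('Recency', 'Frequency', 'Monetary') and the segment rules are illustrative assumptions.

import pandas as pd

labels = ['very bad', 'bad', 'good', 'very good']

# rfm is assumed to be a dataframe with one row per customer
# Lower recency is better, so the label order is reversed for R
rfm['R_rank'] = pd.qcut(rfm['Recency'], q=4, labels=labels[::-1])
# rank(method='first') avoids duplicate bin edges when many customers share a frequency
rfm['F_rank'] = pd.qcut(rfm['Frequency'].rank(method='first'), q=4, labels=labels)
rfm['M_rank'] = pd.qcut(rfm['Monetary'], q=4, labels=labels)

def segment(row):
    # Simplified if-else rules in the spirit of the article
    if row['F_rank'] == 'very good' and row['M_rank'] == 'very good':
        if row['R_rank'] == 'very bad':
            return 'Win back urgently'
        return 'Champions'
    if row['R_rank'] in ('good', 'very good'):
        return 'Active'
    return 'At risk'

rfm['Segment'] = rfm.apply(segment, axis=1)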
https://kerim-birgun.medium.com/boost-your-marketing-strategy-with-rfm-c737926fe621
['Kerim Birgun']
2020-11-05 22:51:04.981000+00:00
['Python', 'Data Science', 'Segmentation', 'Analytics', 'Marketing Strategies']
How to Run PostgreSQL Using Docker
Setup First, we need to install Docker. We will use a Docker Compose file, a SQL dump file containing bootstrap data, and macOS in this setup. You can download these two files separately. Just make sure to put both docker-compose.yml and infile in the same folder. Alternatively, you can get the repository from here. Now, let's discuss the docker-compose and SQL dump files briefly.

Docker Compose: It's a YAML file, and we can define containers and their properties inside. These containers are called services. For example, if your application has multiple stacks, such as a web server and a database server, we can use a docker-compose file.

SQL dump: A SQL dump contains SQL queries in plain text. PostgreSQL provides the command-line utility program pg_dump to create and read dump files.

Let's break down the individual ingredients of the docker-compose.yml file.

version: '3.8'

services:
  db:
    container_name: pg_container
    image: postgres
    restart: always
    environment:
      POSTGRES_USER: root
      POSTGRES_PASSWORD: root
      POSTGRES_DB: test_db
    ports:
      - "5432:5432"
    volumes:
      - $HOME/Desktop/PostgreSql-Snippets/infile:/infile
      - pg_data:/var/lib/postgresql/data/

volumes:
  pg_data:

The first line defines the version of the Compose file, which is 3.8. There are other file formats — 1, 2, 2.x, and 3.x. Get more information on Compose file formats from Docker's documentation here. After that, we have the services hash, and it contains the services for an application. For our application, we only have one service called db.

Inside the db service, the first tag container_name is used to change the default container name to pg_container for our convenience. The second tag image is used to define the Docker image for the db service, and we are using the pre-built official image of PostgreSQL. For the third tag restart, we have set the value always. It automatically restarts the container, which saves time: the container is restarted when either the Docker daemon restarts or the container itself is manually restarted. For example, every time you reboot your machine, you don't have to manually start the container.

The fourth tag environment defines a set of environment variables. Later we will use these for database authentication purposes. Here we have POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB. Among these three variables, the only required one is POSTGRES_PASSWORD. The default value of POSTGRES_USER is postgres, and for POSTGRES_DB it's the value of POSTGRES_USER. You can read more about these variables from here.

The fifth tag is the ports tag and is used to define both host and container ports. It maps port 5432 on the host to port 5432 on the container.

Finally, the volumes tag is used to mount a folder from the host machine to the container. It comprises two fields separated by a colon: the first part is the path on the host machine and the second part is the path in the container. Remove this portion if you don't want to mount the sql-dump into the container. The second line of the volumes tag is used to store the database data; the first part is the name of the volume, and the second part is the path in the container where the database data is stored. But how do we know what that path is exactly?
We can determine the path by running the following command using the psql . We will discuss how to use psql later in this post. show data_directory; Remove the second line if you don’t want to back up your container’s database data. If you choose to remove both lines under the volumes tag, remove the volumes tag. At the end of the docker-compose file, you can see that we have defined the volume pg_data under the volumes tag. It allows us to reuse the volume across multiple services. Read more about volumes tag here. The moment of truth. Let’s run the following command from the same directory where the docker-compose.yml file is located. We can see that it starts and runs our entire app. docker-compose up Inspection We can check if the container is running or not using the docker ps command on the host machine. As we can see, we have a running container called pg_container . docker ps Moreover, we can see the image by running the docker images command. docker images Finally, we can see that a volume has been created by running the docker volume ls command. docker volume ls Connect to psql What is psql? It’s a terminal-based interface to PostgreSQL, which allows us to run SQL queries interactively. First, let’s access our running container pg_container . docker exec -it pg_container bash I talked about how the above line works in the following article. Have a look. Now we can connect to psql server using the hostname, database name, username, and password. psql --host=pg_container --dbname=test_db --username=root If you want to type less, use the following command. Find more options for PostgreSQL interactive terminal from here. psql -h pg_container -d test_db -U root Here, the password’s value is the root , which has been defined inside the docker-compose file earlier. Load data from a file Now we can load the dump file into our test_db database. In this case, infile . It is accessible inside the container because we have mounted it from the host machine. psql -h pg_container -d test_db -U root -f infile If we run the PostgreSQL command \dt , we can see two tables called marks and students inside our database test_db . Did we miss something? Not really, but yes! Since our data is backed up in the volume called postgresql-snippets_pg_data, we can remove the container without losing the database data. Let’s try that now. First, delete the container and then create it again. docker rm pg_container docker-compose up Now after accessing the container and psql we can still see our data! docker exec -it pg_container bash psql -h pg_container -d test_db -U root \dt In case you want to delete the backup volume, use the docker volume rm command. Read the documentation here. docker volume rm postgresql-snippets_pg_data Or you can use the docker-compose command. Read the documentation here.
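As a small extra that the article does not cover, you can also sanity-check the running container from Python with psycopg2. The credentials below simply mirror the environment variables from the docker-compose.yml above, and the query assumes infile has already been loaded.

import psycopg2

# Port 5432 in the container is mapped to 5432 on the host
conn = psycopg2.connect(
    host='localhost',
    port=5432,
    dbname='test_db',
    user='root',
    password='root',
)

with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM students;")
    print("students rows:", cur.fetchone()[0])

conn.close()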
https://towardsdatascience.com/how-to-run-postgresql-using-docker-15bf87b452d4
['Mahbub Zaman']
2020-12-30 13:21:45.957000+00:00
['Programming', 'Software Engineering', 'Postgresql', 'Data Science', 'Docker']
4 Useful JavaScript Books for Aspiring Developers
4 Useful JavaScript Books for Aspiring Developers Amazing books for JavaScript knowledge. Photo by Thought Catalog on Unsplash Introduction Not all of us prefer learning online or with video tutorials, there are people that prefer books. Reading these books can benefit both your physical and mental health, and those benefits can last a lifetime. Coding books are very useful if you love reading them because they give you all the details and knowledge you need. Reading books is one of the best ways to learn JavaScript. In this article, we will give you a list of some useful JavaScript books for developers. Let’s get right into it. 1. JavaScript and jQuery This book was written for anyone who wants to make his websites a little more interesting, engaging, interactive, or usable. It was written by Jon Duckett in order to help beginners understand the basics of JavaScript and jQuery very well. All you need is just a basic understanding of HTML and CSS. I recommend starting with this book if you are a beginner, but don’t rely too heavily on jQuery as it’s a bit outdated and most employers find this to be a deterrent. You can check the book here if you are interested. 2. You Don’t Know JS This is an awesome book series by Kyle Simpson exploring the parts of JavaScript that we all think we understand but don’t really know. All these books are free, which is incredible. Here is the Github repository for the series if you are interested. 3. JavaScript Design Patterns Design patterns are reusable solutions to commonly occurring problems in software design. They are both exciting and fascinating topics to explore in any programming language. This book (“Learning JavaScript Design Patterns”) was written by Addy Osmani. By reading it, you will explore applying both classical and modern design patterns to the JavaScript programming language. You can check the book here if you are interested. 4. JavaScript Allongé JavaScript Allongé is a book about programming with functions. It’s written in JavaScript of course. This book starts at the beginning, with values and expressions, and builds from there to discuss types, identity, functions, closures, scopes, collections, iterators, and many more subjects up to working with classes and instances. It also teaches you how to handle complex code, and how to simplify code without dumbing it down. You can check it out here if you are interested. Conclusion As you can see, all these books are full of knowledge and value. They helped a lot of developers to improve their JavaScript skills. You can choose any one that fits you and start gaining useful JavaScript knowledge. Thank you for reading this article, I hope you found it useful. More Reading
https://medium.com/javascript-in-plain-english/4-useful-javascript-books-for-aspiring-developers-67d9de904ea9
['Mehdi Aoussiad']
2020-12-27 21:28:33.443000+00:00
['JavaScript', 'Web Development', 'Coding', 'Books', 'Programming']
Start a React Project Truly from Scratch Using Webpack and Babel
What is Webpack and why is it used? Webpack is a module bundler; as the name implies, it bundles every module that a project needs into one or more bundles that can be referenced in the primary html file. For example, when building a JavaScript application that has JavaScript code separated into multiple files, each file must be loaded into the primary html file using the <script> tag. <body> ... <script src="libs/react.min.js"></script> <script src='src/header.js'></script> <script src='src/dashboard.js'></script> <script src='src/api.js'></script> <script src='src/something.js'></script> </body> By implementing the use of Webpack, these separate JavaScript files can be intelligently bundled into one file that can then be loaded into the primary html file. <body> ... <script src='dist/bundle.js'></script> </body> In this instance, using Webpack not only dramatically reduces the number of imports but also eliminates any issues that may arise if the scripts are not loaded in order. Besides module bundling, Webpack also offers Loaders and Plugins which can be used to transform files before, during, or after the bundling process. Loaders and Plugins are explored in further detail later on in this article.
https://joshiaawaj.medium.com/start-a-react-project-truly-from-scratch-using-webpack-and-babel-dbaaeea3f8da
['Aawaj Joshi']
2020-12-07 11:19:59.609000+00:00
['React', 'ES6', 'Webpack', 'Jsx', 'Babel']
Please, Can Science and Faith Live in Unison?
Please, Can Science and Faith Live in Unison? Gaining Power by Uniting the World Photo by David Vázquez on Unsplash I called to ask How are you? Instead your words Accused No, not me Directly My beliefs My lifestyle Everything that makes me Me Maybe Maybe I am wrong You are right COVID is a hoax Then why did I watch Dad die from outside The hospital window? Why does my granddaughter Wear a tiny Paw Patrol mask And ask, Mommy do I still have germs? Would a hoax Require refrigerated Boxes to house the Bodies of dead? I shared a friend’s tears As she told me the names Of two loved ones Who died this week You complain, Your grandchild can’t Attend his school I cry because my daughter Who teaches must attend With substitutes in many classes You believe Satan Owns the virus Or hoax as you call it His goal is to isolate Separate mankind I refuse to grant him That much power God provides opportunities To choose wisely Human’s with faith Grab onto the lifelines Believing that God is good Reach out and accept his gifts Masks, hand sanitizer, vaccines Science and faith can Survive in unison Unless you believe The world is flat
https://medium.com/the-pom/can-science-and-faith-live-in-unison-2ea682bb74aa
['Brenda Mahler']
2020-12-14 17:33:07.559000+00:00
['Faith and Life', 'Poetry', 'Faith', 'Science Fiction', 'Coronavirus']
Lessons in growth engineering: How we doubled sign ups from Pin landing pages
Jeff Chang | Pinterest engineer, Growth A popular topic within growth hacking circles is improving conversion rates on landing pages. Everyone has seen those "10 tips to triple your conversion rate" articles that are littered with general tips (e.g. increase CTA size) and promise gains, which are usually small at best. Instead of trying those general tactics, we doubled page conversions and increased SEO traffic by doing one thing: leveraging data to better understand potential Pinners. Improving Pin landing pages The first step to improve landing page conversions was selecting the right page to work on. While the "Pin page" (a landing page for clicks to a Pin from another site) is one of our highest-trafficked pages, it converted worse than other landing pages, so we invested more resources into it. At first, we didn't have much data about which parts of the page were effective at convincing a new user to sign up, so we tried a simpler, more visual page layout. After testing this new design in an A/B experiment, we learned it didn't increase signups compared to the previous version (i.e. the control). It was also hard to extract learnings from this design because it was so different from any previous version. Was it because we replaced the board on the right with Related Pins? Was it because we didn't show as much content after scrolling? In this case, we learned that by taking smaller steps, we could learn more from each new version. So, we tried a new version more similar to the control, where we allowed the Pinner to swipe through a carousel of Related Pins at the top of the page. This version also underperformed, but only slightly. The data showed few people clicked on Related Pins, possibly because they were small and difficult to distinguish. Next, we tried making Related Pins bigger and added attribution so they looked more like regular Pins. This was a success! We saw a lot of engagement with the Related Pins, which led to more signups. Our hypothesis was that this version performed better because it illustrated the related content on Pinterest and, in turn, showed the value of signing up. We shipped this version, and it became the control in future experiments. However, we wanted to see if we could do even better at converting Pinners on Pin pages. Because Related Pins seemed enticing to new users, we wanted to further highlight them by adding them to the normally blank spaces on the left and right sides of a Pin. We were surprised to find this version performed the same as the control. For our next iteration, we tried something simpler, where Related Pins were only on the right of the Pin. We were excited to learn this version beat the new control. But, we wanted to do even better. We looked into the user action event-tracking funnels and found those who clicked through on the main Pin (and thus went to the external site) barely converted, but those who clicked on a Related Pin (and landed on the closeup for that Pin) converted at a much higher rate. So, we reduced the size of the main Pin to be the same as the Related Pins and gave the Related Pins grid more real estate on the page. This iteration was a huge success and beat the previous control by over 25 percent (and that's compounded on top of the gains of the previous versions!). Compared to our first Pin page, this iteration converted at twice the rate. Our first instinct was to ship this immediately, but instead we looked into the SEO experiment we ran alongside it and noticed that it dropped traffic by 10 percent.
(Related post: SEO experiment framework.) If we shipped this Pin page, we’d get a net win (increased signups outweighed traffic losses), but we wanted to do better.

Conversions and SEO

When working on conversions for any page that gets a significant amount of traffic from search engines, you must consider SEO effects. For example, if an experiment increased signups by 20 percent but dropped traffic by 50 percent, the result is a net signup loss. For this experiment, we segmented the traffic by various verticals, such as web traffic, image traffic and traffic by subdomain, and saw the biggest traffic drop in image search. We compared the images in the two designs and found the big difference was that we had shrunk the size of the image. From previous experiments, we knew that when we change the images on the page, even just their size, image search traffic is impacted, since search engines have to recrawl billions of our pages.

We ran another SEO experiment where we used the same large-size image file as before, but sized it down to fit inside the smaller Pin. This change improved the traffic difference from -10 percent to +10 percent, even though the design looks the same visually. Not only did this new layout increase conversions, it also increased traffic to the page. These effects multiply with each other to create a larger net signup gain.

Key lessons

By iterating quickly and thoughtfully, we were able to double Pin page conversions and increase SEO traffic. Here are the key lessons we learned along the way:

Learn more about users by analyzing the event-tracking funnel data from experiments.

Use past experiment learnings to drive new iterations instead of trying “random” ideas. It’s best to have a hypothesis backed by data for why each new design will perform better.

The faster you iterate, the faster you learn and see gains.

If you’re working on converting a page that gets a significant amount of traffic from search engines, running an SEO experiment in conjunction with a conversion experiment is a must. Even if you increase conversions, you might also see a traffic loss resulting in an overall net signup loss.

If you’re interested in growth engineering and love experimenting, join our team!

Acknowledgements: These projects were a joint effort between engineering, PM and design on the User Acquisition team.
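To make the conversion-versus-SEO tradeoff in the section above concrete, here is a minimal TypeScript sketch (not Pinterest’s actual tooling, just an illustration using the post’s rough numbers) of how a variant’s net signup effect combines its conversion lift with its traffic change:

// Signups = traffic * conversion rate, so the relative change from a variant
// is (1 + conversion lift) * (1 + traffic lift) - 1.
function netSignupLift(conversionLift: number, trafficLift: number): number {
  return (1 + conversionLift) * (1 + trafficLift) - 1;
}

// The cautionary example from the post: +20% conversions but -50% traffic is a net loss.
console.log(netSignupLift(0.20, -0.50)); // -0.4, i.e. 40% fewer signups
// The shipped iteration, roughly: +25% conversions with +10% traffic.
console.log(netSignupLift(0.25, 0.10)); // 0.375, i.e. ~37.5% more signups

The second call is only a back-of-the-envelope figure, since the +10 percent applied to the image search segment rather than all traffic, but it shows why the team treated the conversion experiment and the SEO experiment as one combined readout.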
https://medium.com/pinterest-engineering/lessons-in-growth-engineering-how-we-doubled-sign-ups-from-pin-landing-pages-1c0bc400cdb9
['Pinterest Engineering']
2017-02-21 19:52:34.121000+00:00
['SEO', 'Growth', 'Klp', 'Engineering', 'Data']
Fat Acceptance Is Self-Acceptance
I’ve wasted a lot of time waiting until I was thin to go after the things I wanted. I didn’t have the self-confidence to put myself out there because I was fat. Was it an excuse? In some ways, yes, but I did experience roadblocks because of my weight. I’m getting older, and I no longer have the luxury to wait until I’m a perfect weight to go after my goals.

We have to love ourselves all the time, which is what acceptance is all about. If you’re fat, then fat-acceptance is self-acceptance, the same way every other kind of acceptance is. Fat-acceptance doesn’t mean not growing, improving, or challenging one’s self. Acceptance gives you a foundation that allows you to move past your emotional obstacles with less fear.

Think of it this way: if you’re cutting an apple on an unstable cutting board, you run the risk of hurting yourself with the knife. We need a stable starting point to take chances, put ourselves out there, and take actionable steps. How can you change if you loathe who you are at the start, or seek help if you don’t feel you are worth it? Without acceptance, there’s nothing to keep you going.

The opposite of fat-acceptance is internalizing cruelty or mistreatment because you feel it’s justified. You can’t defend yourself from fat-shaming, discrimination, and abuse if you’re convinced you deserve it because of your body size. Being fat isn’t a crime — though I’m sure some see it that way. Fat people should be allowed to be happy and accept themselves for their successes and failures.

Fat-acceptance isn’t the same thing as body-positivity. I like to think the body-positive movement began with good intentions. They wanted people to feel good about their bodies, even when those bodies weren’t perfect. However, somewhere along the line, as the movement grew, the idea of body-positivity began to apply only to those whose bodies were acceptable fat, and not unruly fat. You could be positive about your body if it were curvy, thick in the right places, or voluptuous, but if it was obviously fat, then body-positive was something for you to aspire to.

In her article, Leaving Body Positivity Behind for Fat Acceptance, writer Rachael Hope writes:

I disagree with the idea that loving your body is a goal that sets us up for failure. Loving your body doesn’t have to mean that you don’t think you have flaws or that you don’t have bad days. The same way that when you love another human being you don’t like them every moment of every day. I have days where I feel down on myself or dislike the way my body looks or feels. There are specific parts of my body I like less than others. But I still love my body. I love that it is my home. I love that it lets me physically connect with people. I love that it lets me feel touch and pleasure. Accepting your body is something to be proud of. For me, accepting it was part of falling in love. I don’t love my body because it’s the BEST body or because it’s a BETTER body than someone else’s. I love it because it is my body, and I love myself.

As fat people, we need to accept, care for, and love our bodies — it’s vital to our feelings of worth, self-esteem, and the quality of our lives. We need it as armor to fight our battles and help protect us against shame and humiliation. When you accept your body, you can start to heal. Fat-acceptance allows us to be honest with ourselves and helps us to see both our limitations and talents. Without fat-acceptance, you may shut down something like working out or applying for a job because you’re starting from a shaky spot.
When you accept your body, you can start to heal. No matter what size it is, how healthy it’s perceived to be, or how it serves you — you’re alive and that’s thanks to your body. The next time someone tries to shame you for having both self-acceptance and fat-acceptance let them know that you’re not dependent on their approval, and that you don’t need their opinions about your relationship to your own body.
https://medium.com/fattitude/fat-acceptance-is-self-acceptance-b05f19edaaaf
['Christine Schoenwald']
2020-12-04 08:38:01.287000+00:00
['Self Acceptance', 'Fat Acceptance', 'Mental Health', 'Culture', 'Feminism']
The Curation of Our Little Library is Abysmal and I’d Like to Complain to a Manager
The Curation of Our Little Library is Abysmal and I’d Like to Complain to a Manager Three copies of The DaVinci Code? This is ridiculous. Imagine my excitement when a little library appeared just blocks away from my house. Painted green with a glass door that swung open and shut, at last something Pinterest-worthy was happening in my part of the country. I was sure the mere proximity of the thing would make me feel smarter and more well-read. However, the little library has been sitting smugly on the corner for a year now and, I hate to complain, but the outcome of this experiment is downright embarrassing for all involved. So, please direct me to the person managing this whole situation, because I have a few complaints. Mostly I’m annoyed by the contents of the little library. I assumed everyone would chip in to contribute only the most enlightening and thought-provoking reading materials from their personal libraries. As for myself, I added my copy of The Secret and the person who took it is sure to be hella actualized by now. So, I’ve done my part. But, as with their lawn care, others in this neighborhood have chosen to do the bare minimum. I saw one woman slip in a pamphlet for her laser hair removal MLM and then walk away as if she’d just saved the world. This is unacceptable. What if someone important comes to our neighborhood and sees a dogeared copy of 50 Shades of Gray sitting right next to an abridged (abridged!) version of The Three Musketeers? What would Alexandre Dumas think to find his masterful adventure novel cut into pieces and shelved next to Horny Twilight? He would condemn us all. I’ve also been meaning to address the San Diego travel book from 2003. It’s been in the library for almost four months and no one has taken it. We all know what 2003 was like and none of us are interested in revisiting it. (Two words. Embellished. Camo.) Keep your dated travel books where they belong, in the background of your Instagram photos where they’ll impress your 23 followers. Of Mice and Men would have been a decent choice for inclusion, if someone hadn’t ripped out the final ten pages. Sure it’s not the happiest of endings and we can all relate to the impulse, but books with parts missing look tacky. And Sarah Palin’s autobiography? I can’t get too mad about it, because someone was probably trying to get a cursed item out of their house. But, by attempting to pass it on to another neighbor, they’re reenacting one of the most overdone horror tropes. Everybody please burn your evil items in your backyards beneath the light of a full moon instead of adding them to the little library. Also, if your novel is not in English, what are you even doing? This is America and I live here mostly so I don’t have to be subjected to the German language. German people make up words for everything and it’s exhausting. We don’t need a word for the sad feeling you get after cutting your toenails and we certainly don’t need a word for how men are doomed to turn into their fathers, because none of us want to think about that. I could go on and on and on about the selection of books available, but that wouldn’t leave me time to complain about the sketchy characters who have started hanging out around the little library, with their wire rim spectacles and their bookshop totes. If you can believe it, I walked over there the other day to donate my copy of Eat Pray Love, a revolutionary work that transformed my relationship with myself, and they sneered at me. 
They told me it was a reductive novel and I needed to expand my horizons. Then they criticized the contents of the little library, which, honestly, was way too confusing for me and I really need to complain to a manager if I’m going to sort out the complicated array of feelings I’m experiencing right now.
https://sarah-lofgren.medium.com/the-curation-of-our-little-library-is-abysmal-and-id-like-to-complain-to-a-manager-c3c60ec5a468
['Sarah Lofgren']
2020-06-29 22:43:35.848000+00:00
['Satire', 'Humor', 'Reading', 'Funny', 'Books']
Get cracking with NativeScript
So you are a pro in JavaScript, or a really good Angular or Vue developer, and now want to explore building native apps on mobile. However, you are getting a migraine seeing the number of options! React Native, Dart, Kotlin: which one should I choose? Well, take a sip of your favourite coffee and sit back. We’ve got you: NativeScript!

NativeScript allows you to build native apps using Angular, TypeScript or modern JavaScript and still gives you truly native UI and performance. It allows you to embed a web framework to generate a native app. Sounds cool, doesn’t it? So, let’s get cracking with it!

Architecture

NativeScript prominently uses the MVVM model, which enables two-way data binding, so data changes are instantly reflected in the view. Another important advantage of this approach is that Models and View Models are reusable. This makes it possible to use NativeScript with the Vue and Angular frameworks, where most of the business logic can be shared with web components. It also provides a rich set of JavaScript modules, categorized as UI Modules, Application Modules and Core Modules, which can be accessed at any time to write any complex application. Native plugins are written in the platform languages (Swift and Java); they generally act as wrappers and can be used from a JavaScript plugin.

Write once run everywhere

NativeScript helps in building native applications in JavaScript; you can build mobile apps with JavaScript/TypeScript or Angular. Most of the code written in JS will remain the same for both platforms. It allows code sharing for business logic and some UI styles for Android and iOS.

Performance

NativeScript is able to run animations at 60 frames per second, with virtualized scrolling and caching similar to native apps. Moreover, it can offload long-running processes to maintain frontend speed. In the latest release, NS v6.7.8, the newly composed Webpack module has improved performance on Android considerably. From NS v6.7.8 onwards we can see the following improvements:

The build process for Android improves by 30%, while for iOS it improves by 10%.

A streamlined store approval process enables a faster path for new versions to reach users.

Native device features

NativeScript provides the ability to write native expressions directly in JavaScript or TypeScript. This avoids unwanted JavaScript wrappers around the native APIs, so the developer can focus only on business logic. It allows us to call native APIs from JavaScript directly, because both deal with the same native APIs. For example, if you want to integrate the camera feature in an app, you can initialize it through JS as well. In addition to this, NativeScript readily provides support for newly available iOS and Android APIs, so we can easily adopt new features rather than depending on a specific version.

Pre-styled UI components

There is a rich set of pre-styled components available with NativeScript. You can simply plug and play these components. There is also a good separation between layout and components, and you can customize the components quite easily. For example: Date Picker, Bottom Navigation, Slider, Tabs, etc.

Community support

Earlier, there was less community support available for NativeScript, but with time we are seeing a good number of developers digging into the framework. Also, many organizations are adopting the framework for app development, which automatically helps in building the community.
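As a small, hedged illustration of the “call native APIs from JavaScript directly” point above, here is a sketch of showing an Android toast from TypeScript without a wrapper plugin. The function name is mine, it assumes a recent @nativescript/core, and a real project would pull the android.* typings from @nativescript/types instead of the loose declaration used here:

import { isAndroid, Utils } from "@nativescript/core";

// Loose declaration so the sketch stands alone; real projects get full typings
// for the android.* namespace from @nativescript/types.
declare const android: any;

export function showNativeToast(message: string): void {
  if (isAndroid) {
    // The Android SDK is reachable directly from JavaScript/TypeScript,
    // so no extra JavaScript wrapper around the native API is needed.
    const context = Utils.android.getApplicationContext();
    android.widget.Toast
      .makeText(context, message, android.widget.Toast.LENGTH_SHORT)
      .show();
  }
}

On iOS the same pattern applies to the exposed native classes; the point is simply that the platform namespaces are injected into the JavaScript virtual machine.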
Ready to use plugins

NativeScript plugins are building blocks that encapsulate some functionality and help developers build apps faster (just like the NativeScript Core Modules, which are themselves a plugin). Most are community-built and written in TypeScript/JavaScript. Some include native libraries, which are called from the TS/JS code thanks to the Runtimes. NativeScript maintains an official marketplace of plugins for most of the native modules. In addition to this, NS provides support for npm, CocoaPods (iOS), and Gradle (Android) directly, along with hundreds of verified NativeScript plugins.

AR/VR capabilities

NativeScript lets you access iOS and Android APIs to build mobile apps using JavaScript, and ARKit is no exception. The releases of AR SDKs from Apple (ARKit) and Google (ARCore) have presented an opportunity for NativeScript to enable developers to create immersive cross-platform AR experiences. There is a plugin called nativescript-ar available on the marketplace for this.

Web support

As NativeScript supports web frameworks like Angular and Vue, it allows you to build web and mobile apps out of a single codebase. It doesn’t stop at sharing services; you can easily share:

Component class definitions (that is, the xyz.component.ts files)

Pipes

Router configuration

SCSS variables

With NativeScript 6.0, the amount of code reuse between web and mobile has increased. NativeScript can achieve 70% code reuse across web and mobile, including support for PWAs. This shortens development and testing cycles for both web and mobile apps in production while ensuring consistency across digital channels. It also lowers the cost of development and maintenance for deployed applications.

Learning curve

As NativeScript is based on JavaScript, you can use TypeScript, Angular or Vue to develop apps. It also supports a declarative coding style. So, being a web developer, you don’t need to learn new languages or syntax. NativeScript bypasses the need to learn Objective-C (iOS) and Java/Kotlin (Android) for bridging concepts by injecting all iOS and Android APIs into the JavaScript virtual machines.

Language used

As mentioned earlier, NativeScript uses JavaScript to build native mobile apps. It comes in different flavors — pure JavaScript/TypeScript, with Angular and with Vue.js. So you can choose any of these to start your app.

PWA support

You can create a PWA with NativeScript. Through the NativeScript and Angular integration, it’s quite easy to build a PWA (Progressive Web App). From v6.0 onwards NativeScript provides support for PWAs, which also enhances code reusability between mobile and web applications. The new HMR (Hot Module Replacement) support lets developers see changes to JavaScript and CSS resources without reloading the application, which enables a better user experience for PWA development.

Current limitations

NativeScript does have some limitations, mostly related to app size, which tends to be large, but you can work around this by running Webpack and Uglify. Android performance was not up to market standard in initial versions, but the latest releases (v6.7.8) are claimed to have better performance along with support for the AndroidX library.

Closing Thoughts

As a web developer, when you start thinking about building mobile apps with cross-compiled platforms, it is definitely worth exploring NativeScript as one of the reliable options.
As mentioned above, it comes with a lot of the capabilities, features and plugins we need for mobile app development. Cheers!!
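To close with a flavour of what that looks like in practice, here is a minimal, self-contained TypeScript sketch of a NativeScript app built entirely in code. It assumes a recent @nativescript/core, the page contents are illustrative, and a real project would more commonly declare the UI in XML or in an Angular/Vue template:

import { Application, Page, StackLayout, Label, Button } from "@nativescript/core";

// Build a tiny page in code: a label plus a button that updates it.
// Label, Button and StackLayout render as real native widgets on iOS and Android.
function createHomePage(): Page {
  const page = new Page();
  const layout = new StackLayout();

  const label = new Label();
  label.text = "Hello from NativeScript";

  const button = new Button();
  button.text = "Tap me";
  button.on(Button.tapEvent, () => {
    label.text = "Same TypeScript, native UI on both platforms";
  });

  layout.addChild(label);
  layout.addChild(button);
  page.content = layout;
  return page;
}

// Boot the app with the page above instead of pointing at an XML module.
Application.run({ create: createHomePage });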
https://medium.com/globant/get-cracking-with-nativescript-421b45e0d1b3
['Shreyas Upadhye']
2020-08-13 10:54:46.769000+00:00
['Mobile App Development', 'iOS App Development', 'Nativescript', 'Cross Compile', 'Android App Development']
Skiing During the Pandemic
Skiing During the Pandemic

What’s changed, what’s working, and what needs improvement

Opening day at Big Sky Ski Resort, Big Sky, Montana. Photo by Tom Johnson.

As the holidays approach and the coronavirus pandemic enters a dangerous new phase, many skiers wonder whether it’s safe to return to the slopes. Last week, my wife and I got a firsthand look. If our experience is any indication, the industry could be in for a challenging season.

Thanksgiving Day marked the beginning of ski season at many resorts across the U.S. We spent the day skiing at Big Sky, Montana’s largest and best-known ski resort. Since September, we’ve lived at Big Sky Mountain Village, located at the base of the ski area. During our time here, we’ve been impressed by the absence of crowds. All of that changed on Thanksgiving morning, when thousands of people showed up to ring in the new season. Lifts opened at 9 am, but even before then, skiers gathered at the base area. Lines quickly formed at the lifts, and skiers packed the few runs with sufficient snow to open.

From a COVID-19 perspective, skiing has the reputation of being a relatively safe activity. Skiing is an outdoor sport. Skiers are spread out over vast areas and breathe unlimited quantities of fresh mountain air. But it’s not the skiing that poses the greatest risk; it’s the congregation at the base, in lift lines, at mountain dining facilities, and in bars and restaurants at night. As in many places, enforcement of mitigation measures is key. The best laid plans can be derailed by lack of compliance.

Lift line at Big Sky on opening day. Photo by Tom Johnson.

Destination resorts like Big Sky attract guests from across the country and around the world. Those guests bring with them illnesses present in their home regions and then commingle in resort facilities and in surrounding communities. The resulting stew has the potential to feed disease outbreaks. The risks posed by ski areas are well documented. Last winter, a coronavirus outbreak at the Austrian ski resort of Ischgl was linked to more than 6,000 infections in nearly 50 countries, an event that contributed to Europe’s initial coronavirus surge. Europe is now wrestling with how to avoid a repeat of last year. Austria and Switzerland recently decided to open for the season, while other countries, such as Italy, Germany and France, vow to remain shut or operate under significant restrictions.

Safety first

Last spring, surging COVID cases caused the U.S. ski season to grind to a halt. Ski areas abruptly closed in March, shortening the season by as much as two months and closing many businesses dependent on winter tourism. Big Sky was no exception.

In an effort to get ahead of potential coronavirus-related setbacks, the National Ski Areas Association this fall published their “Ski Well, Be Well” guide to best practices for skiers and resorts alike. The association represents more than 300 alpine resorts that account for more than 90 percent of the skier/snowboarder visits nationwide. “Ski industry leaders from across the country established these foundational best practices according to scientific guidelines put forth by infectious disease experts, including the CDC and WHO,” the organization says on its website. “Ski areas will comply with additional federal, state and local regulations as they are implemented.” Both Boyne Mountain Resorts and Vail Resorts advised the creation of the safety document and endorsed its contents.
Boyne owns Big Sky, as well as Washington’s Snoqualmie, Maine’s Sugarloaf and several others. Vail Resorts, the nation’s largest ski corporation, owns and operates 37 mountain resorts in three countries, including Vail, Beaver Creek, Breckenridge, Park City, and Whistler Blackcomb.

In an attempt to assert some control over resort capacity, Vail Resorts in August announced plans for its first reservation system, requiring skiers to make reservations to ski ahead of time. Other resorts are slowly adopting reservation systems, especially for skiers using partner passes. Skiers using the Ikon Pass will need to make reservations at many resorts, including Big Sky.

Big Sky is now actively rolling out its slate of best management practices intended to curtail risky behaviors known to facilitate transmission of the virus. “Each of our teams have worked tirelessly to develop new operational practices with the goal of providing the safest experience possible for our guests and our teams,” Big Sky Public Relations Manager Stacie Mesuda told me in an email exchange. “Many things will be different this season — directional traffic in our F&B (food and beverage) locations, new lift-line configurations, social distancing guidelines, and most important, the requirement for all team members and guests to mask up while at the resort as it is mandatory in all public space.”

The resort’s publicized face covering requirements include wearing masks while at the base area, in lift lines, while riding and unloading chairlifts, and while indoors. “Our efforts to wear masks and facial coverings consistently are a crucial factor in staying open all season,” Mesuda said. In addition, Mesuda said, the resort has invested in weekly surveillance testing for both symptomatic and asymptomatic employees, beginning in early December. Separately, Big Sky is participating in a community-wide testing partnership.

Good intentions

Our experience on Thanksgiving suggests that despite good intentions, operating the resort safely remains a challenge. Throughout the day, we observed behaviors that were inconsistent with Big Sky’s published regulations and raised questions about whether the resort and the industry have the ability to operate in accordance with their own safety requirements. In many ways, skiers and the resort behaved as if this season is no different from previous ones. Lift and ticket lines were long and social distancing was scant. Signage encouraging safe practices was present but not always visible in crowded areas, leaving skiers unsure where and how to queue safely at lifts. Absent were any resort employees roaming the lines to assist with directions or enforce social distancing and face covering requirements.

Opening day at Big Sky. Photo by Tom Johnson.

Mask usage was far from universal. The face coverings we observed among guests were likely to be worn below the nose, where they provided no protection, either for the individual or those around them. Even when worn properly, the masks we observed often consisted of coarse-woven gaiters or bandanas. Few masks we saw were constructed of materials thought to provide maximum protection.

Mesuda told me in a follow-up exchange that “we believe our guests can do much better and have empowered all our teammates to remind guests of our resort policies, while also educating them on proper wearing, acceptable forms of coverings, and ensuring everyone does their part. Most of our guests want to do the right thing — and this is all new to them.
We are finding that once we provide some education to our guests about our expectations, they are happy to comply. However, several non-compliant guests were asked to leave the resort and we will maintain that approach in every similar instance.” Uncomfortable moments While actively skiing, my wife and I felt as safe as we would have during any other ski season. But throughout the course of the day, we found ourselves in situations—ticket lines, lift lines, riding on lifts—that caused us to accept risks that we have scrupulously avoided throughout the pandemic. This made us uncomfortable. In the line to obtain our passes, a printer malfunction delayed processing of orders. We stood in line for more than 30 minutes. Most patrons around us wore masks, but social distancing was spotty. We saw no resort employees in the area enforcing compliance. There were no mazes to guide patrons, no marks on walkways suggesting safe distancing. The group behind us continually encroached on our personal space. At one point, a man in the group stood only inches behind my wife. “Can you please scoot back?” she asked. The man puffed up and glared at her as he took a step back. “Is this far enough?” “Six feet,” my wife said, pointing to a resort sign posted next to her. In the lift line, because of crowd density, we saw few posted signs encouraging compliance with the resort’s safety rules until we made it all the way to the front and were about to board the lift. By then, we had spent 20 minutes jammed together with other skiers awaiting a trip up the mountain. “Are you having trouble enforcing social distancing and mask requirements?” I asked a lift operator as we approached the front of the line. “It’s not that I’m having trouble,” she said. “They pull their masks up when I tell them to. But until then, they just do what they want.” She asked whether we had seen many people without masks. We’d observed around seventy percent compliance, I told her, but that even those in compliance wore face coverings that offered little protection, or they wore them below the nose. She agreed. “Did you receive much training on COVID?” my wife asked. “It was very brief,” she said. “Very brief.” Mesuda later told me that “every new and returning employee went through a mandatory orientation session which featured COVID-19 education, review of resort policies and expectations (for guests and employees), and complemented with department-specific training from department managers.” I asked the lift operator whether the resort had people patrolling lift lines to ensure compliance with Big Sky’s face covering and social distancing requirements. “We’re supposed to,” she said, looking futilely out at the line snaking into the distance. Looking in the same direction, we saw hundreds of people packed together on a windless day, sharing space and air, breathing hard from runs just completed. Compliance tended to fall among groups: if one member of a group wore a mask, everyone wore a mask. In other groups, no one wore face coverings. Skiers who went through the line unmasked weren’t asked by lift operators to mask up until just before loading onto the lift — at which point, a mask was arguably less helpful. “Thanks for caring about this,” the lift operator said as we boarded. “Because nobody else seems to.” Opening day at Big Sky. Photo by Tom Johnson. Who’s in charge? At least initially, Big Sky appears to have chosen to rely on guests to independently monitor their behavior and adhere to the resort’s safety requirements. 
Mesuda as much as confirmed this. “While we do our part, we are also asking our community to do their part and use good judgement to be socially distant whenever possible,” Mesuda said.

And that’s part of the problem. Perhaps due to coronavirus fatigue or politicization of COVID mitigation measures, a segment of the population continues to abstain from mask wearing and social distancing requirements. Without enforcement, the resort and its guests are at the mercy of those who elect not to comply. And those who exhibit reckless behavior on the slopes are likely to conduct themselves recklessly in other aspects of their lives, making them more likely to contract the virus and put everyone around them at risk.

Regardless of guest behavior, the resort appeared to have some trouble doing its own part. For instance, we were told to expect separate lines for those wanting to ride only with members of their parties and those willing to share lift rides with others. “Our lift riding plan is a hybrid intended to maximize uphill capacity while respecting personal choice and space,” Mesuda said. “Guests can choose one of two lines — a “Friends & Family” line if they want to ride with the group they are traveling with (drive together, ride together) or a “normal” line, which would load lifts with unrelated parties as we have typically done in the past.”

On opening day, we saw no such options and no signage directing us. At the front of the line, we were loaded onto the lift with another party, and no one asked us whether that was okay. Mesuda later explained that the option to ride only with members of one’s own party is limited. “On select lifts, we are offering guests the option to ride with their party only; and if guests are comfortable, the ability to ride with other parties as well,” Mesuda said. “On lower volume days, we will accommodate our guest’s desire to ride alone or only with members of their party across all lifts that are running.” On Big Sky’s website, the resort states it will not enforce maximum capacity on chairlifts but will allow groups traveling together to ride on their own chair when “operationally feasible.” In contrast, in nearby Colorado, lifts will be loaded only with people in the same group.

Table for two

Perhaps the greatest danger on the mountain lies within the resort’s indoor dining facilities. States around the country are once again issuing stay-at-home orders and forbidding or limiting public gatherings indoors, including indoor dining at restaurants. Despite ranking in the top 10 per capita for test positivity rates, conservative-leaning Montana has resisted implementing aggressive restrictions.

“We are operating restaurants at 50 percent occupancy in compliance with state and county guidelines,” Mesuda wrote. “In addition to managing to a reduced capacity, we have implemented additional measures to minimize the risk of COVID-19 exposure which includes: requiring facial coverings in all indoor facilities unless seated and eating, directional specific entry/exits, increased frequency of sanitizing common area surfaces, as well as online ordering, dedicated pickup areas and even the introduction of a delivery service to lodging units by way of Swifty Delivery.”

Operating dining facilities at 50 percent capacity falls short of restrictions imposed by many cities and states with lower per capita infection rates, and in places where diners are more likely to come from nearby neighborhoods.
At Big Sky and other resorts, dining facilities could be veritable melting pots — and 50 percent capacity is little different from an average day in mid-winter. The CDC rates on-site dining with indoor seating as “Highest Risk” if seating capacity is not reduced and tables not spaced at least 6 feet apart. Former FDA chief Scott Gottlieb said Monday on CNBC that he avoids indoor dining altogether. “I will not eat indoors in a restaurant,” Gottlieb said on “Squawk Box.” “I’ve been eating outdoors since the summertime and wouldn’t eat indoors in a restaurant. I think the risk is too high to be in a confined space without a mask on with other people eating in that same location right now.” While admittedly cautious, my wife and I haven’t set foot in an indoor restaurant in eight months. The prospect of sitting down with strangers from distant locales, regardless of how much surface cleaning is done, is unimaginable. For skiers with concerns about airborne transmission of COVID-19, that leaves few choices but to eat outside. Al fresco dining is fine when the weather is pleasant, but it’s a chilly prospect during a January blizzard. The author and his wife on opening day at Big Sky. Photo by Tom Johnson. Looking forward Opening day at Big Sky reminded me of an episode of truTV’s educational comedy series Adam Ruins Everything. In an episode entitled “Adam Ruins Security,” Adam explains the concept of “security theater.” Security theater is the practice of enacting security measures that are intended to provide the feeling of improved security while doing little or nothing to achieve it. Examples include tightened airport security after a terrorist attack. On opening day, I felt that Big Sky was to a degree practicing “safety theater.” Through their published guidelines, the resort talked a good game, and this made me feel safe. But on opening day, the resort failed to take some of the actions that would have actually kept me safe. Since opening day, I’ve noticed some improvements. Mesuda acknowledged as much: “The lift lane configuration has been extended for Ramcharger 8 and the same since the opening of Swift Current, with a recurring placement of messaging noting facial coverings are required and to socially distance at least 6 feet apart. While lift lanes may have seemed crowded, with most guests maintaining a “tip to tail” distance between skiers and riders, they are generally spaced 6 feet apart and adhering to the recommended spacing between parties. It’s also important to note, by providing our guests the opportunity to only ride with their party on select lifts, and encouraging additional spacing between groups — lift lines will appear longer this season. However, with reduced guest volume and our high-speed chairlift network, we believe guest wait times will be less impacted.” Readers of this story can view the photos included and ascertain for themselves whether Mesuda’s “tip to tail” comment is accurate. In the resort’s defense, Thursday was the first day of the season. The resort will no doubt iron out kinks and address shortcomings in their safety policies. The situation on opening day was made more difficult by virtue of the fact that snowfall has been light, meaning that only a couple of lifts and runs were open. With few choices, visitors are confined to a smaller footprint of skiable terrain. As the season progresses, guests will spread out. This may reduce congestion at the base and in lift lines. 
But the resort only has a few weeks to get it right before the holiday crush, when many times the number of visitors — hailing from broad geographic areas — will descend on the mountain. By then, the pandemic’s grip on the nation is expected to tighten.

Big stakes

There’s a lot at stake, not just for Big Sky, but for the country’s nearly 500 ski resorts and the communities that depend on them. Snow sports tourism contributes around $20 billion to the U.S. economy each year, according to researchers at the University of New Hampshire and Colorado State University.

There’s a lot at stake for skiers, too. Between travel, accommodations, lift tickets and gear, skiing is an expensive sport. Having a ski vacation cut short by a COVID-19 infection is a difficult pill to swallow, even without considering the risk of ending up in the hospital. And if guests become infected, they may be forced to quarantine onsite. “Ski areas have also been asked to message to their guests that they will be required to extend their stay and quarantine should they test positive for COVID-19 during their stay,” a spokeswoman from the Colorado Department of Public Health and Environment recently said. If that were to come about, guests themselves may be on the hook financially. “If you have to isolate, you are going to have to pay,” Aspen Chamber president Debbie Braun said. “People need to be very aware when they come to town, and we need to make sure they understand our public health orders.”

Mesuda said that in the event that guests become infected during their stay at Big Sky, the resort will “assist them on a case by case basis to ensure they are isolating safely and in compliance with the local health department’s best practices.” She declined to say who would pay for an extended stay.

Bottom line

In the weeks leading up to opening day, we were comforted by Big Sky’s aggressive messaging that suggested the organization is respectful of the coronavirus and is taking steps to mitigate the threat it poses. Big Sky’s website contains pages of information detailing the steps they have taken to ensure guest safety. Our actual experience underscores the fact that regulations without enforcement are of little use. There will always be a subset of the population that ignores or even ridicules restrictions as onerous, overbearing, or an infringement on personal liberty.

Mesuda says that with the policies it has in place, the resort is confident it can make it through the season safely. If things go poorly, however, they are not afraid to adjust. “We intend to use common sense and good practices to open safely and efficiently for the full duration of our ski season. Like last winter, we are not afraid to pivot or make hard choices once the season is underway; but remain confident that our current plan will allow us to have a full season of skiing.”

With little snow in the forecast, skiable terrain across the West will likely remain limited into the holiday season. Congestion at the base, in lift lines, and in restaurants will persist. Guests will continue to flock in from all points. With that in mind, it is incumbent on Big Sky and other resorts to do more to enforce their regulations and weed out noncompliant visitors who would put others at risk.

My wife and I intend to ski throughout the season, but like all things this year, we will modify our behavior. We will restrict our skiing to days when crowds are thinner. We’ll bring our own meals and eat them outside.
We’ll avoid situations where crowds form and social distancing is inadequate. We’ll request to ride lifts without other parties. We may ski shorter days, since we won’t use the resort’s lodges and restaurants for meals or rest breaks. By limiting our exposure to others, we believe we’ll feel safe enough to ski. And as long as we are skiing—actually gliding over snow in fresh mountain air—there is probably not a safer place to recreate.
https://medium.com/illumination/thinking-of-skiing-this-season-33a2fcb30ae
['Tom Johnson']
2020-12-09 15:03:52.571000+00:00
['Sports', 'Health', 'Business', 'Skiing', 'Covid 19']
I always wanted to write but I never felt good enough.
During my senior year of my undergraduate degree, I really started to notice how much school and work had taken over my life. I was living life by reacting to every fire that appeared, instead of conquering my goals. Realizing that I was headed down the wrong path mentally forced me to re-examine what I wanted out of life. I started to read again, looking for meaning where I had lost it. I started to listen to great minds again, through YouTube and podcasts.

I can’t stress enough how much listening to podcasts during my daily commute has improved my life. If you aren’t listening to podcasts or audiobooks instead of the same old songs and boring radio stations, then you need to get Castbox, Spotify, SoundCloud or whatever is easiest. Taking those 10–30 minutes every morning and evening to think critically about new topics or listen to something more engaging than the newest hit song will help everyone in the long run. There are podcasts about anything you can think of, just like there are books about almost every topic imaginable. Find what suits you and upgrade your drive time.

Another flaw I am working on overcoming is procrastination. If there’s one thing I have learned in MBA school, it’s that “what gets measured tends to get done.” That is to say, if you set a simple goal, you are much more likely to complete the task than if you keep that task locked up inside your head. Make a list on your phone of tasks you want or need to get done (I use Google Keep for lists and Google Tasks for specific goals). Since I have started writing out my goals I have gotten a lot more done, and I feel more accomplished checking off the box on my phone app.

One of the newer techniques I am trying out to boost my fight against laziness is setting goals on my Google Calendar (which I had never thought of before). I have mountains of books I want to read, but I fall into the habit of getting lost in the middle of books and just watching TV instead. To fix this, I set a new goal on my calendar to read 30 minutes on at least 3 days a week. So far I like it, because it notifies me at the time of day I set aside for reading and nudges me to get some reading done. The first week I only read on 1 of the 3 days, but this week I have all 3 already. People don’t like failure, and setting goals that remind us of when we don’t live up to our ideals nudges us to do better.

It seems trivial, but recognize your goals by defining them. Set some type of measurement for your definition of “success” towards that goal, and you will see a difference. Setting a realistic goal is half the battle. I could have easily put my goal as reading every night of the week, but that is too much to jump right into for me at the moment. That also doesn’t mean that I don’t want to read every night, because I do, but I am working slowly towards it. The rub is that there will be ups and downs along the path, but keep striving towards the top and you will see improvements.

I reset my life by realizing what my goals were. I wanted to improve my relationships, read more, learn to write better, and countless other goals, but I never would have made as much progress if I had left those goals floating around in my mind. Part of my procrastination is the feeling of not making significant progress, but setting measurements has helped me to see the changes I am making in my life. I am determined to keep climbing.
https://medium.com/the-ascent/how-i-upgraded-my-life-and-started-writing-again-bb5983ed229d
['Michael Wentz']
2018-07-11 21:01:01.194000+00:00
['Life Lessons', 'Procrastination', 'Goals', 'Writing', 'Personal Development']
iPad Air 4 vs. iPad Pro (2020)
iPad Air 4 vs. iPad Pro (2020)

…Did Apple create its own iPad Pro killer?

iPad Pro (2020) vs. iPad Air

Apple’s “Time Flies” event just wrapped up, and though there wasn’t an abundance of hardware showcased, leaving many disappointed, there is something big to take notice of. Out of all the products we got a look at during the event, the one that generated the most hype is the brand new iPad Air, which now shares a lot of similarities with the 2020 iPad Pro. This raises the question that many will need answered when buying these iPads: which iPad should you buy? The iPad Air or the iPad Pro? That is why we’re going to discuss the major differences between both as a buyer’s guide.

Design

The iPad Air has existed since 2013 as a middle-level iPad that shares a lot of the iPad Pro’s features but doesn’t have that higher-end price tag. These two were quite similar for a while until 2018, when Apple modernized the iPad Pro with a squared-off design and edges, giving it a much more industrial look, smaller bezels, and Face ID. That also meant cutting the headphone jack. Those who wanted this stunning design had to look to the iPad Pro. However, that is not the case anymore. The 2020 iPad Air has received an all-new design with the same design language as the iPad Pro.

The Air was not just granted a new design, but new colors as well! Both high-end tablets come in the “Space Gray” and Silver colors, default yet clean. But the new colors of the Air are the glowing Rose Gold, a tint of Green, and Sky Blue, which resembles 2020’s Color of the Year! My, the amazing colors of the iPad Air.

iPad Air 4 colors.

The iPad Air gets a wider variety of colors than the Pro. But to be honest, more than half the people who own any iPad use a protective case, as these iPads come with a hefty price. So, if you’re one of those who use a case for your iPad, don’t be too surprised if you can’t see your beautiful new color!

Display

With the iPad Pro, you have the option to pick between an 11-inch display or a 12.9" one. With the iPad Air, there is just a 10.9-inch option. It’s the 11" Pro but with bigger bezels, decreasing the screen size. For those looking for the biggest screen a high-end iPad can provide, the 12.9-inch iPad Pro is the way to go. Many people want to turn their iPads into laptop replacements, so for those people, this relatively big size is beneficial to get that laptop feel and size.

Both the iPad Air and the iPad Pro are True Tone capable. The iPad Pro is a little brighter at 600 nits compared to 500 nits on the new Air. If you want a little more screen brightness, the iPad Pro is going to be the way to go.

The last major thing regarding the display is ProMotion. Having that 120Hz fluid display is one of the key features that stand out on the iPad Pro. Not only does it make the iPad feel super smooth, but it also allows the refresh rate to drop down to match your content, so if you’re watching a movie, it runs at the 24fps it was meant to, saving battery life at the same time. ProMotion is also what makes the iPad Pro by far the best tablet to write on, with latency so low you won’t even feel it, whether you’re gaming, writing, or just browsing the web. On the writing note, both iPads do support the second-generation Apple Pencil, but the experience you’ll get on the iPad Pro is superior to what the iPad Air can offer, thanks to 120Hz. The iPad Air’s 60Hz screen will have more than double the latency, but keep in mind Apple’s latency is down to 9 milliseconds!
Not much of a downside to me…

Authentication

iPad Air 4 built-in Touch ID

Although both screens look very similar, there is a very important distinction to make, and that is authentication. On the iPad Pro, you have all the benefits of Face ID — unlocking your iPad, making purchases, Apple Pay: all of that is handled by Face ID, the facial recognition system tucked into the bezel at the top of the iPad. With the iPad Air, you get Touch ID technology built into the Air’s power button. We’ve seen Touch ID in the home button, but not integrated into the power button itself. This can be seen as a benefit if you’re in a public space and you’re required to wear a mask: you unlock in an instant, all by using your finger. Touch ID will come out victorious here and Face ID — not so much. It may not be “easier” than just glancing at your iPad, but it does get the job done. As for the speed of both, we’ll see in October…

Processor

This is where the tables may turn in the iPad Air’s favor. That is because the new iPad Air is equipped with Apple’s all-new A14 Bionic processor, Apple’s latest and greatest silicon, the successor to its already fast A13 chip. This is where it becomes tricky. This is the chip that will go into the iPhone 12s. This must be massive.

Apple’s A12Z (the iPad Pro’s chip) is still the mightiest Apple Silicon chip in intensive tasks, as seen in Geekbench 5’s multi-core test. The A13 beats the A12X in a single-core test, so the A14 will certainly be even faster, exceeding the A13 by a notable amount. But Apple may hold back the intensive strength of the A14 so that it doesn’t beat out the iPad Pro’s chip. Chips are the Pro’s specialty. The A14 has 6 CPU cores and 4 GPU cores compared to 8 CPU cores and 8 GPU cores on the A12Z. That said, the A14 won’t outperform the A12Z. Again, we’ll have to wait till October for the real fun to begin. Reviewers will help us find the definite answer then. But know this: single-core-wise, in light tasks like surfing the web and YouTube, the A14 will triumph. Maybe not in intensive work, though…

Etc.

Like the iPad Pro, the Air utilizes a USB-C port, which allows for connecting external storage, better power, and faster data transfer speeds. The iPad Pro has a 12-megapixel main camera plus an ultra-wide camera. The back of the iPad Air has just one single 12-megapixel camera and no ultra-wide. I honestly use my iPad camera for occasional selfies and to scan the odd document, so it doesn’t matter much to me, but if it does for you… well, now you know! The Pro also utilizes a LiDAR scanner.

Speaker-wise, the iPad Pro has a 4-speaker setup that’s going to give a very immersive stereo audio experience, while the iPad Air has 2 speakers for a landscape stereo audio experience. (It has 4 cutouts for 4 speakers like the Pro, but this iPad only has two, one on each side.)

RAM could stand in the way of many buying this iPad Air, because it only has 4 gigabytes compared to 6 on the Pros. Now, there are many who complain about 4 gigabytes on an iPhone. So on an iPad, would the problem be worse? And as Apple has positioned iPads like the Air to become laptop replacements, would the 4 gigabytes of RAM suffice for computer-level multitasking? Ask yourself that, because if you are going to do heavy tasks, even just multitasking, the Pro is the best choice… for an iPad. Macs are just around the corner, mates!

Accessories won’t be a problem. The iPad Air is also compatible with all of the accessories of the iPad Pro, such as the Magic Keyboard and the 2nd generation Apple Pencil.

Storage Capacity
The iPad Air is limited to an “okay” storage range of 64GB up to a max of 256GB, whereas with the iPad Pro you have more flexibility, from 128GB to 1TB.

Pricing

You can buy a base model 11-inch iPad Pro right now for $799. The newest iPad Air base model goes for $599. The base 11" Pro comes with 128GB, while the $599 Air gets only 64GB. For the extra $200, you get 128 gigabytes of storage, 120Hz, Face ID, etc. if you go with the Pro instead. BUT, to be honest, the Air is already getting Apple’s next best chip, has THE same modern shape and design, and access to the Pro accessories and more. The only “big” thing that you could miss out on from the Pro is ProMotion, but if you don’t care about that, that’s fine! Plus, if storage is a problem, you can up the Air’s storage to 256GB and still come in about $50 under the base 11" Pro.

Closing

All in all, the iPad Air has in many ways become a better deal than the Pro. Personally, my dream iPad is one that offers the following:

A powerful chip

A top-notch display

A modern design

“Durable” in the long run

Access to the iOS/iPadOS App Store, coming from an Intel Mac

Worth it all the way financially

The iPad Air is the definition of my dream iPad, and that title used to belong to the 11" Pro. However, there are some great things that will make some people gravitate to the Pro, and that’s fine. That will never change, and for those people, go for the Pro… but the Air is worth a chance!
https://medium.com/swlh/ipad-air-4-vs-ipad-pro-2020-97a76d396e40
['Doctor Marvel']
2020-10-22 06:38:42.232000+00:00
['iPad', 'Apple', 'Technology', 'Tech', 'Gadgets']
by Jean de La Rochebrochard
Funnel, Model, Growth & Retention

Whatever your business is, you have to master those four sets of metrics.

Funnel of conversion

The destination does not matter if you fail at taking the right path towards it. The funnel of conversion is the path from business origination toward closing and post-closing. If you run an e-commerce business, for instance, it goes as follows:

Points of contact (SEO, SEM, Content, Social Sharing, Word of mouth…)

Converted into visitors

Converted into users

Converted into buyers

Converted into customers & ambassadors

This is a simple representation of what a conversion funnel looks like. What it also shows is that every single step matters and is connected to the next one. Visitors don’t become buyers, they become users at first: people who navigate within your website/application, interested in what you have to offer. They become buyers when they purchase something, and they become real customers when you can build a relationship with them so they buy from you again and talk about your service/products around them!

If you develop a mobile consumer app, it goes as follows: Points of contact (App Store featuring, media…) converted into Downloads, converted into Signups, converted into Active Users (DAU, MAU…), converted into Purchasing Active Users.

If you run a SaaS business: Points of contact converted into Visitors > Trial Users / Demo Request > Converted Users > Up-selling rate / Churn rate.

Define your funnel of conversion, and observe where the bottlenecks and points of friction are. See where you fail to lead more people toward the next step and focus on improving each step, one after the other. Don’t overthink, keep it simple.

Business Model & Model Equilibrium

Who are your customers, what do you sell to them (product, service, ads…), and in which form (subscription, one shot)? Your business model equilibrium is like your funnel of conversion, business-wise. You go from the revenues all the way down to the operational result. You must distinguish between the aggregated funnel of your business model and the detailed version of it. Let me explain it for an e-commerce business. The aggregated business model equilibrium, monthly, looks as follows:

+ Average Basket per order

- Average Cost of Goods Sold (per order)

= Gross Margin (per order)

- Average Logistic Costs (…)

- Average Transportation Costs (…)

- Other Variable Costs Associated (on average of course, per order…)

= Contribution Margin (Pre Marketing Costs)

- Average Marketing Costs (Marketing Costs for the month / # of Orders)

= Net Contribution Margin

Net Contribution Margin * Number of orders = Available resources to cover your fixed costs.

You see, it’s pretty simple. The only problem here is that we only cover aggregated data. We have no details whatsoever about the gross margin per product, the marketing costs, the rate of returning customers, the costs of logistics and transportation… You must take each of those metrics separately and observe their specificities, their min/max & standard deviation, and how you can improve them individually in order to improve your overall business model equilibrium. Focus on the ones with the highest impact (usually at the top, like the gross margin).

Growth

Growth is not gross! Let’s see it that way: if your startup does not grow, another one does. There are too many things behind which entrepreneurs hide in order not to focus on growth: Product development, Branding, Team, Technical debt… And many others!
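Before moving on, here is a small TypeScript sketch of the aggregated equilibrium described in the model section above. The field names and numbers are mine and purely illustrative; the point is just to show how an average order walks down to the net contribution margin:

// Per-order unit economics, mirroring the aggregated monthly view above.
interface OrderEconomics {
  averageBasket: number;        // average revenue per order
  costOfGoodsSold: number;      // per order
  logisticsCost: number;        // per order
  transportationCost: number;   // per order
  otherVariableCosts: number;   // per order
  monthlyMarketingSpend: number;
  monthlyOrders: number;
}

function netContributionMargin(o: OrderEconomics): number {
  const grossMargin = o.averageBasket - o.costOfGoodsSold;
  const contributionMargin =
    grossMargin - o.logisticsCost - o.transportationCost - o.otherVariableCosts;
  const marketingCostPerOrder = o.monthlyMarketingSpend / o.monthlyOrders;
  return contributionMargin - marketingCostPerOrder;
}

// Illustrative numbers only: 45 - 20 - 5 - 4 - 2 - (30000 / 5000) = 8 per order,
// i.e. 8 * 5000 = 40000 available each month to cover fixed costs.
const example: OrderEconomics = {
  averageBasket: 45, costOfGoodsSold: 20, logisticsCost: 5,
  transportationCost: 4, otherVariableCosts: 2,
  monthlyMarketingSpend: 30000, monthlyOrders: 5000,
};
console.log(netContributionMargin(example) * example.monthlyOrders); // 40000

With real data you would run the same calculation per product or per segment, which is exactly the detailed view recommended above over the aggregated one.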
Growth is something you always run after, like almost everything else in a startup. Growth is a full-time, scary, challenging everyday mission to achieve. What matters is the Compound Growth Rate. You should focus on the most downstream metric of your startup and make sure it grows, week after week, month after month. For instance, to calculate the growth rate of your monthly active users, the formula is the following:

((Ending value / Beginning value) ^ (1 / # of periods)) - 1

Month 1: 100 000 users

Month 12: 200 000 users

Oh great! 2 times more users during the period?!… Except that ((200 000 / 100 000) ^ (1/12)) - 1 = 5.95% compound monthly growth, and let’s get this straight: this is not great! Look at the real, brutal impact of the compound growth rate over a 1 year period only:

5% growth weekly = Ending value 12x the beginning value

10% growth weekly = Ending value 129x the beginning value

30% growth monthly = Ending value 18x the beginning value

As you can see, if your compound weekly growth rate is not 5% but 10%, the impact is not 2 times but 10 times more important over a 1 year period only! Now you understand how empires like Uber, Airbnb or Snapchat can emerge only a few years after inception.

Retention

Retention is the Holy Grail of all metrics. It is useless without growth, just as growth is stupid without retention. Retention has many forms:

If you sell subscriptions, how many of your customers remain after 1, 3, 6, 12, 24 months? It allows you to calculate both your churn rate (how many of them leave) and your Customer Lifetime Value (how much one customer generates on average over a 12, 24 or 36 month period).

If you develop a mobile consumer app, what is the ratio between your monthly active users and your daily active users (DAU/MAU)? How many of your active users remain active after 1, 3, 6 months? How many friends do they invite, and what is the virality effect of your app?

If you run an e-commerce business, what is the percentage of customers who buy 2 times or more every year? How many of your customers are returning ones? What is the average number of orders per customer per year?

Learn how to calculate those retention metrics, and for those who struggle with the understanding of a cohort, a quick example is sketched at the end of this post.

Now that you’re getting more familiar with the Fantastic 4, apply them to your business. Again, Google, Quora or any valuable person in your industry can help you. Do not hesitate to ask. If you liked this post, please share it with the rest of the community :)
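Here is the promised sketch, again in TypeScript and again with illustrative numbers only: the growth part reuses the 100 000 to 200 000 example above, and the cohort figures are made up to show how period-over-period retention is read off a single acquisition cohort:

// Compound growth rate: ((ending / beginning) ^ (1 / # of periods)) - 1
function compoundGrowthRate(beginning: number, ending: number, periods: number): number {
  return Math.pow(ending / beginning, 1 / periods) - 1;
}

// The example above: 100 000 -> 200 000 monthly active users over 12 periods.
console.log(compoundGrowthRate(100_000, 200_000, 12)); // ~0.0595, i.e. ~5.95% per month

// A cohort is the group of users acquired in a given period; retention is the
// share of that cohort still active N periods later (numbers below are made up).
const januaryCohortActive = [1000, 620, 480, 410, 380]; // active users in months 0..4
const retention = januaryCohortActive.map(active => active / januaryCohortActive[0]);
console.log(retention); // [1, 0.62, 0.48, 0.41, 0.38]

Tracking one such row per acquisition month gives the classic cohort table this section refers to.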
https://medium.com/kima-ventures/the-fantastic-4-funnel-model-growth-retention-dc47f1c761cd
['Jean De La Rochebrochard']
2015-09-05 11:52:33.456000+00:00
['Metrics', 'Startup', 'Analytics']
Voice of an Angel
Voice of an Angel A novel Image courtesy of Conny Manero Synopsis Talent agent, Jack Garrett, hears the voice of an angel drifting down from a balcony in Greenwich Village. Frustrated, he spends nights walking the streets trying to find his angel. Jessie Green is in a dead-end job until she loses it, and quickly grabs an opportunity for a better life. With her best friend, Betty McGill, they both stumble into new but different careers with the help of serendipitous good luck. Through a web of unexpected circumstances, Jack and Jessie’s lives are about to collide with more than a few surprises. Will love get in the way of making their dreams come true? Jessie and Jack both have a lot to learn, but can they really trust each other? Voice of an Angel…where more than one dream can come true. Chapter 1 June 1998 Jessie Green glanced at the red digital clock on the wall … 4:30 p.m. Another half hour and they could all go home. With a sigh, she reached for another shirt from a pile of freshly laundered linen and placed it on the press. In the five years, she had worked for Muller’s Laundry & Dry Cleaning she had pressed hundreds, maybe even thousands of garments: shirts, blouses, slacks, table cloths, and bedsheets. It was not a bad job. She knew there were better jobs, but with no qualifications, working in a laundry was all she could do. When Jenny Sullivan came to collect the work orders of the day for invoicing tomorrow, Jessie watched the girl with a mixture of admiration and envy. Jenny was Harry Muller’s assistant and always looked picture perfect. She never had a hair out of place, a smudge in her make-up, a wrinkle or stain on her clothes, a ladder in her stockings or dirt on her shoes. Jessie wondered how she did it, how she managed to always look so cucumber fresh. Looking at Jenny made Jessie wish she had finished high school, and then she too could have gone to secretarial school and looked smart in cute little outfits, with cute little shoes. Instead, she wore jeans, T-shirts, and sneakers to work because being comfortable was important when you were on your feet eight hours a day. She often regretted dropping out of school. If only she had stuck it out those last three months. But no, back then she was far too anxious to make her debut into the working world. She felt she was wasting her time in a classroom. She could not wait to get out into the real world and start earning money. When Jessie heard that Muller’s Laundry & Dry Cleaning was looking for help she applied for a job and was hired on the spot. The following Monday, instead of going to school, she proudly went to work. At the time she was certain she was making the right decision, but now she was not so sure. If she had graduated she could have her choice of careers. Instead, she worked in this laundry, this hot, steamy laundry, and was probably stuck here forever. Sure she was earning money, but Jenny probably made double if not triple of what she was making. At the sound of her name, Jessie looked up from her work and saw Betty McGill frantically tapping her wristwatch. She cast another glance at the wall clock and nodded at her friend. It was just after 5:00 p.m. “Are you okay?” Betty asked as they walked home, noticing that her friend was not her usual talkative self. Jessie gave a listless shrug. “Just thinking, you know.” “About what?” “The past. The future.” Betty frowned. 
“That’s heavy thinking my friend.” “Don’t you ever think about things?” “Like what?” “Like what the future holds for you.” Betty shrugged her shoulders. “I suppose I’ll meet a nice guy, get married and have kids someday. What else is there?” “A career.” “A career!” Betty burst out laughing. “Jessie, you and I work in a laundry, I would hardly call that a career.” “Don’t you ever wish you could do something else? Something a little more challenging, a little more sophisticated.” Betty looked at her friend and smiled. “Sure I do. I would like to be a doctor or a lawyer or something else that earns me tons of money, but I’m not exactly qualified.” Jessie hesitated before making the suggestion. “We could go back to school.” Betty laughed again. “Jess it takes years to qualify as a doctor or a lawyer and we didn’t even finish high school.” Jessie waved an impatient hand. “I don’t mean that. I mean, we could take a course, a secretarial course.” “You mean to learn to type and stuff?” “That is exactly what I mean.” Betty looked doubtful. “I don’t know Jess, I’m sure there’s more to being an assistant than just typing. I think you have to be smart for that sort of work.” “We’ are smart Betty.” Jessie retorted with a small edge in her voice. Betty continued. “And there is the small problem with a decent wardrobe. You’ve seen the kind of outfits Jenny wears to work. I don’t know about you, but I don’t have those kinds of clothes.” Jessie had to admit that Betty had a point. Their wardrobe was a potential problem. Both of them wore mainly jeans and T-shirts. Hardly appropriate office wear. “Any plans for tonight?” Betty asked in an attempt to change the subject. “Nothing special,” Jessie answered with a hint of boredom in her voice. Same thing I do every Monday, Tuesday, and Thursday night … ironing.” “You still iron for your neighbors?” Jessie nodded. “Elizabeth and Clara are old, they can’t do their own ironing anymore and they are very grateful that I help them. I do Elizabeth’s laundry on Mondays, Clara’s on Tuesdays, and my own on Thursdays.” Betty shook her head in wonder. “I don’t know how you do it girl. You iron all day long and then you go home to more ironing. Haven’t you ever suggested to them that they could send out their stuff to a laundry?” “No,” Jessie said vehemently, “and I’m not about to, it’s extra money for me.” That night after she finished dinner and washed the dishes, Jessie set up her ironing board and iron and collected the ironing from the storage room. She switched on the stereo, selected a CD, plugged in headphones and turned up the volume. She liked nothing better than to sing along with a CD. Singing along with a CD was something Jessie loved to do while ironing. She sometimes worried that the neighbors might hear her, but thought this unlikely. She never heard a sound from them, so she figured they couldn’t hear her either. If her voice drifted down to the street through the wide-open balcony doors that was different. People on the street below couldn’t see her. They didn’t know where she was, didn’t know who she was. When the last piece of clothing was ironed and folded, Jessie packed away the iron and the board, put the kettle on for a cup of coffee, and decided she would curl up with a book on the couch. She would slip between the pages and let herself be transported to a sleepy Irish village with some wide-awake citizens. She loved the little village in which the story was set, and she loved the people in it. They seemed so real. 
They were not the pretentious high society types with tons of money. They were not professionals with glamorous careers. They were ordinary people, with ordinary lives, who loved and cried, worked and struggled, and somehow made a success of what they were doing. Considering herself ordinary too, Jessie liked reading success stories. They gave her hope and courage for the future. When the clock struck eleven she reluctantly closed her book and carried it with her to bed. She stopped to close the balcony door and switch off the lights. In bed, she would read another couple of pages and before falling asleep and dreaming of a wonderful future. Chapter 2 But Jessie couldn’t sleep. She tossed and turned and imagined herself sitting behind a desk. She would be dressed in a stunning outfit, answering ringing phones with a smile on her face. A million thoughts scurried through her mind. She knew that completing the course would present many obstacles. She worried she might not qualify as an applicant due to her lack of education. If she was accepted it would have to be an evening class. Would she be able to manage to work all day and attending school at night? She wanted this so badly she would just have to do it. She also wondered where such classes were held, how long each class was, how long a course was, and how much it would cost. When a nearby church bell struck two o’clock, Jessie sat up and slipped out of bed. She would have some hot chocolate. Maybe that would help her sleep. Sipping the hot drink at the kitchen table, she reached for yesterday’s newspaper and turned to the classifieds. She was surprised at the number of ads for secretaries, administrative assistants, and executive assistants. She wondered what the difference was between an executive assistant and an administrative assistant. She studied the requirements for each job listed: tying correspondence, typing financial statements, organizing meetings, scheduling appointments, booking flight and hotel accommodations, filing, and answering calls. When she turned the page she saw a number of ads for private colleges. Some offered courses in drawing and painting, some in car mechanics, hairdressing, foot care, and massage. There were also some that offered secretarial courses. Jessie’s eyes widened when she saw the price … a thousand dollars for a three-month course, not exactly cheap. Somewhat disheartened she closed the paper, finished her hot chocolate, and went back to bed. The next day at work she made some mental calculations. Half of her wages went to rent, a portion went to bills, another portion to groceries, and toiletries. That left precious little to spend on personal items or necessities for the apartment. How could she possibly save up a thousand dollars for a course? At three o’clock, Betty indicated with a drinking gesture that it was time for a break. “You look tired,” she commented as soon as she and Jessie sat down at one of the cafeteria tables. “Are you feeling okay?” “Fine,” Jessie shrugged. “Just a little tired. I didn’t get much sleep last night.” “Oh?” “I kept thinking about taking that course, the secretarial course, and…” “What is it suddenly with you wanting to be an assistant?” Betty demanded in an annoyed tone. “You’re a press operator. You have been for five years. You’ve always been happy with your work. At least I’ve never heard you complain. But now suddenly you got it in your head that you want to be an assistant. 
What’s wrong with being a press operator?” At first, Jessie said nothing, she just stared at her coffee, but then slowly she started formulating her thoughts. “I’m tired of being in a steamy room all day Betty. I’m tired of being hot and sweaty, doing the same thing day after day after day. I’m tired of watching my life go by. I’ve been here five years and I’m doing today what I was doing on my first day. I’m tired of people looking down on me and they do you know. I met a guy the other day and we hit it off, right up to the point where he asked me what I did for a living, and then suddenly he changed. You know why he changed? I do, I wasn’t good enough for him. And this isn’t the first time it’s happened. There have been others I’ve gone out with, but who dumped me as soon as they found out I work in a laundry.” “That’s stupid,” Betty spat. “Anyone who rates you by what you do, or how much money you have, isn’t worthy of you.” “Well, that may be true, but that’s not even why I want to take the course. I want to do it for me because I want something better for myself.” “And a secretarial course is the answer? You think you can be an assistant?” Jessie stayed silent for a moment. If Betty didn’t believe in her, what chance did she have with strangers? But she wanted to try. She had to try. If it didn’t work out, it didn’t work out, but she had to try. “Jessie.” When Jessie looked up Jenny Sullivan was standing next to her. “Yes.” “Mr. Muller would like to see you in his office.” A sense of panic flooded through Jessie. In all the years she had worked for the laundry service she had never been asked to go to the boss’ office. Whatever Mr. Muller had to say was relayed to the staff through memos Jenny pinned on the notice board in the cafeteria. There was only one occasion when Mr. Muller wanted to see an employee in person … to fire that employee. But why he would want to fire her? Jessie had no idea. She was never late, she was dependable and she was good at her job. She cast a worried glance at Betty, who looked just as worried. Trembling Jessie got off her chair and followed Jenny up the stairs to the first floor where the offices were located. “Wait here,” Jenny instructed when they arrived at her office. “Have a seat please.” Jenny stepped into the adjoining office and closed the door. Jessie sat down and looked around her. So this was Jenny’s office. Somehow she had pictured it a little bit more glamorous. It had cream-colored walls, dark brown furniture, and a threadbare brown carpet. The only things that livened up the place a bit were two green potted plants on the windowsill, a pink teddy bear next to Jenny’s computer, and a red picture frame on the desk. But the office was bright with sunshine and Jessie thought how wonderful it must be to have natural light all day; to see the sun and the sky, the rain and the snow. In the laundry in the basement, they worked with harsh white tube lights and had no idea what the weather was like. “Jessie, Mr. Muller will see you now.” The door of the adjoining office had opened and Jenny motioned Jessie to step inside. Jessie didn’t want to go in. She had the feeling that no good would come of this meeting. Keeping her eyes downcast, Jessie couldn’t help but notice the changes as she entered Mr. Muller’s office. The dull brown carpet changed to a plush cream one, and when she looked up she found herself surrounded by luxury. She knew enough about wood to recognize that the numerous bookcases, credenza, and huge desk were oak. 
She didn’t have to touch the three-piece lounge suite to know that it was made from the softest leather, and she didn’t need to examine the decanter and glasses on the credenza to know they were crystal. There was a big difference between this office and Jenny’s but in comparison to the laundry area downstairs, this place was a palace. “Jessie,” Harry Muller said rising from the high backed chair behind his desk, “please come in and have a seat.” Wringing her hands Jessie perched on the indicated chair and waited for what was coming. She didn’t have to wait long. “I’m afraid I have some bad news for you Jessie,” Harry Muller came straight to the point. Yep, I’m fired, Jessie thought. She only half heard how her boss praised her work, thanked her for five years of loyal service, but explained that machines were taking over manual labor. Her mind was in such turmoil she only heard the end of his speech, “So I’m afraid I’m gonna have to let you go. I’m really sorry Jessie. It speaks for itself that I will give you an excellent reference and a month’s salary in advance.” Jessie nodded, thanked her boss, and left the office. As she descended the stairs reality slowly settled in. She was unemployed. She didn’t have a job anymore. She wouldn’t be coming back here on Monday. What was she going to do? What was going to happen to her? She wouldn’t have an income anymore. How was she going to pay the rent? How was she going to pay for groceries? Hang on, don’t panic, she told herself, Mr. Muller had stated that she would get a month’s wages in advance. Surely she could find another job within a month. Yes, she could do that. Things would be all right. She might even find a better job. Who wanted to work in a steamy laundry anyway? Chapter 3 At the bottom of the stairs, Betty anxiously awaited Jessie. “And?” she said, inclining her head a little. “What did he want to see you for?” “I just got fired,” Jessie said flatly. “Fired!” Betty cried, not able to hide the outrage in her voice. “Why? What did you do? What did he fire you for?” “Apparently a machine is going to do my job,” Jessie shrugged. Betty was momentarily speechless. “I … I can’t believe it,” she eventually stammered. “How could he? And what do you mean a machine is going to do your job? How can a machine press shirts and blouses? It probably can do sheets and tablecloths and other flat things, but how can it do delicate things?” Jessie merely shrugged. “So where does that leave me?” Betty added as an afterthought. “Am I gonna be fired too?” Jessie took a deep breath, shrugged again, and shook her head. She had no idea. She also had no idea as to what she was supposed to do now. Was she supposed to finish her day, or should she say goodbye to everyone and just leave? “Jessie,” both Jessie and Betty looked up at the sound of Jenny Sullivan’s voice as she came hurrying down the stairs. “Can I talk to you for a moment?” “I’ll talk to you later,” Betty said, sensing the two women needed some privacy. “Wanna grab a cup of coffee?” Jenny led the way to the cafeteria, poured two cups of coffee and took them over to a table by the window. “What will you do now?” “I don’t know,” Jessie said, cupping the coffee between her hands. “I was actually just thinking about that. Do I leave now, or do I finish the day?” “You don’t have to finish the day,” Jenny shook her head. “You may leave right away if you like. But before you go I wanted to have a bit of a chat with you. What will you do now? What are your plans? 
I realize you haven’t had much time to consider your future and you’re probably still in shock, but…” When Jenny stopped speaking, Jessie looked up. “But what?” “Well, I wanted to make a suggestion.” Jessie waited for what was to come. “I’ve been watching you and listening to you for some time now,” Jenny started tentatively, “and you seem like a very intelligent person. Every morning I see you come in with The New York Times and you don’t just skim the pages, you read the articles. And you talk differently than the other workers around here. You seem to know a lot about politics and the economy in general, and you use words like exemplify, governance, and misconstrue. One would expect such language from a college graduate, not from a … laborer. Now, don’t get me wrong, I think it’s great that you’re so well-spoken, but you do seem a little out of place here. Behind a hot press, I mean.” Jessie was temporarily at a loss for words. On the one hand, she felt slightly put off that Jenny was surprised she read the newspaper, took an interest in politics and the economy, and knew a few intellectual words. Just because she worked with her hands didn’t mean she didn’t have a mind. But on the other hand, she was flattered that Jenny was taking an interest in her, and she couldn’t wait to hear what she had to suggest. “I think you can do better than working in a laundry,” Jenny went on. “I think by terminating your employment here, Mr. Muller might have done you the biggest favor.” “So what do you suggest?” Jessie said, pinching her eyebrows together. “Are you saying that I should apply to work in a store?” Jenny inclined her head. “Set your sights a little higher Jessie. Have you thought about going back to school? Perhaps take a course of some sort?” “As a matter of fact, I have,” Jessie admitted hesitantly. “But…” “But what?” “Courses are expensive. It would have been difficult enough to pay for a course while I was earning a monthly paycheque, but now, now that I’ve lost my job…” “On the contrary,” Jenny interrupted. “Now is the perfect time. While you were working it would have been hard to go to night school, but now that you’re not working you have the time to pursue a new career.” “And what do you suggest I do for money?” Jenny waved a dismissive hand. “Since it’s only a matter of money, take any job, any job at all. Be a waitress in a bar or a restaurant. It doesn’t pay much, but the tips can add up. Then once you’re finished with your course you can just walk out. Do something with your life, Jessie.” Jessie was about to mention that she didn’t know anything about waitressing when Jenny handed her two envelopes. Jessie recognized her pay packet, but she wondered about the second envelope. “What is this?” “This one is your paycheque,” Jenny explained. “This week’s pay plus another four weeks as Mr. Muller promised. And this,” she tapped the second envelope, “is a gift from Mr. Muller himself. Invest it wisely.” After Jenny had left her, Jessie reflected on the five years she had worked for Muller Laundry & Dry Cleaning Services. At age seventeen she had arrived at this building full of enthusiasm. She was going to be a working girl. No more classrooms and homework for her, she was a grown-up and she was joining the working force. She had quickly become friends with all the other workers, especially Betty, who had started working for the laundry a little over a year ago and had shown her the ropes. They had sought out each other’s company outside work too. 
They often went shopping together, went for walks in the park, or just visited each other at home. The years passed and when Jessie lost her parents in a car accident she suggested to Betty they become roommates, but as an only child, Betty wouldn’t leave her widowed mother. In time Jessie considered herself happy. She had her own apartment, the furnishings — although mainly second-hand stuff — were tasteful, and she loved her job. It wasn’t until she started dating and was repeatedly dumped after mentioning she was a press operator in a laundry service that she became unhappy with her job. Now her job had come to an unexpected end. According to Jenny, that was a blessing. Jessie finished her coffee, went to the locker room to collect her handbag before heading for the exit. She knew she should say goodbye to everyone, but she couldn’t face them. She hated good-byes. She would see Betty tomorrow, and the others — when they heard the news — well, they would understand. Outside the gates, she turned around for one last look. For everyone else the weekend was about to begin, followed by another work week. She had no idea what she would be doing next week. That night in her apartment Jessie opened the gift envelope. To her utter amazement inside was a cheque in the amount of three thousand dollars and a note that read: “Please accept this as a token of my appreciation for the last five years of excellent service. Have fun with it. Harry Muller”. Jessie knew right away what she would do with the windfall. Jenny had advised her to invest it wisely, Mr. Muller wrote to have fun with it. Well, she was going to do both. She was going to invest part of the money in herself and enroll in a secretarial course, and with the rest, she was going to go shopping, invest in a whole new wardrobe. Smiling she reached for the phone. “Betty,” she said when the call was answered, “want to go to the mall with me tomorrow?”
https://medium.com/illumination/voice-of-an-angel-2231073d84d3
['Conny Manero']
2020-06-15 01:10:36.964000+00:00
['Books', 'Reading', 'Novel', 'Book Recommendations', 'Novel Excerpt']
‘The Unexamined Life Is Not Worth Living’
“The unexamined life is not worth living” Socrates said as he declared the essence of a good life. “The only good is knowledge”. With knowledge, a person could shape their own destiny and find true happiness. During a time when the world was looking to the cosmos for understanding, Socrates looked inward to the human mind for the questions and answers to life. The way to wisdom was to be found through human dialogue. Something in Socrates’s words and search for knowledge resonated with me as I thought about writing and the struggles we all face as writers. I realized that the process itself is the true gift. Writing is the process of examining life. Some days our words come out beautifully, other days not so much. But one thing remains no matter what happens — we are personally changed because we are actively learning, actively striving, and our minds are engaged and lit up as we struggle with our ideas and words through the writing process. As writers, we’re exploring ideas that we’re passionate about, and experiences that will make an impact on other people and bring some sort of value into the world. Sometimes the writing process seems overwhelming and frustrating. Self-motivation and confidence, even sense of purpose, go up and down in alternating waves of frustration and euphoria. Sometimes it’s just so difficult to get started. But when this happens, we should remind ourselves of the gift that writing gives us.
https://medium.com/the-brave-writer/the-unexamined-life-is-not-worth-living-e90573573e8f
['Tania Miller']
2020-12-17 13:47:51.405000+00:00
['Writing', 'Self Improvement', 'Writing Tips', 'Philosophy', 'Personal Development']
Building a scalable and available home feed
Dan Feng | Pinterest engineer, Discovery We pride ourselves on being a company focused first and foremost on the user experience. In order to deliver a great experience, including showing related content in the home feed, we’re building a service that’s fast and highly available. From a Pinner’s point of view, availability means how often they’ll get errors. For service owners, availability means how many minutes the service can be down without violating SLA (service level agreement). We use number of nines to measure the availability of our site and each service. The Pinterest home feed is a personalized collection of Pins for each person. One third of Pinterest’s traffic lands on home feed, which makes it one of our most critical pages. When building our new home feed, achieving four nines or higher was one of the metrics used for measuring the success of the project. The full discussion for the new home feed architecture can be found at ‘Building a smarter home feed’. Here, I’ll focus on the design decisions from behind the scenes. Isolating challenges The home feed system can be simplified to support three use cases: writing Pinners’ feed to a storage, serving feed from the storage and removing feed when it’s required. Writing feed can have a huge QPS (query per second). Fortunately it’s not user-facing and certain delay (e.g. seconds or even minutes) is tolerable. Serving has relatively small QPS when comparing the writing operation, but it’s user-facing and has a tight performance requirement. A simple design can include writing all feed to a storage and serving and deleting from it. At our current scale, we keep hundreds of terabyte data and support millions of operations per second. We’ve had success with HBase in our past iterations of the home feed system. After evaluating all the options, we chose HBase as our backend storage. The problem with the design is it’s very challenging to tune the same storage to meet the requirements for both a high volume of writing and a high performance of reading and updating. For example, when a person creates a new Pin, we’ll fan out the Pin to all his or her followers. Followers are sharded across all HBase regions. When we fan out the same Pin to hundreds of Pinners, the write operation will hit multiple regions, lock the WAL (write ahead log) on each region server, update it and unlock it after use. Locking the WAL for each write/update/delete operation isn’t efficient and quickly becomes a bottleneck. A better approach is to batch operations and push the changes to HBase once in a while, which increases the throughput of the HBase cluster dramatically. But the latency of each operation can be as high as the flush interval. For user-facing operations, our latency requirement is at millisecond level and the approach will fail us miserably. To satisfy the different requirements, we designed a system with two HBase clusters and save data to different HBase clusters at different stages (see the component diagram below). Zen is a service that provides a graph data model on top of HBase and abstracts the details of HBase operations from data producer and consumer. SmartFeed worker is pushing feed from all sources (we also reference sources as pools) to HBase through Zen, and called by PinLater, an asynchronous job execution system that can tolerate certain delays and failures. HBase for materialized content saves the Pins that have potentially been shown in the home feed before, and its content is accessed through Zen. 
SmartFeed content generator is in charge of selecting new Pins from the pools, scoring them and ordering them. SmartFeed service indirectly retrieves feed (content) from both of the HBase clusters, and only talks to the pools cluster through SmartFeed content generator. When a Pinner hits their home feed:
1. SmartFeed service calls the content generator to get new Pins.
2. The content generator decides how many Pins should be returned and how they should be ordered in the returned result.
3. Simultaneously, SmartFeed service retrieves saved Pins from the HBase cluster for materialized content.
4. SmartFeed service waits for the results of the above steps, then mixes and returns them. (If the call to the content generator fails or times out, the saved Pins retrieved from the materialized content cluster will still be returned.)
5. Offline, SmartFeed service saves the new result to the HBase cluster for materialized content and deletes it from the HBase cluster for pools.
With this design, we separate user-facing components from non-user-facing components. Since the different HBase clusters have different volumes of data and usage patterns, we can scale and configure them individually to meet their needs. In reality, we have far fewer Pins in the materialized content cluster than in the pools cluster. We can make it more reliable and faster without too much cost. Speculative execution With the design above, the availability is as good as that of the HBase cluster for materialized content, since we’re serving content only when it’s available. From time to time, an HBase cluster can experience JVM (Java virtual machine) garbage collection, node failures, region movements, etc. With a single HBase cluster, the availability can occasionally drop below four nines. To improve the availability beyond four nines, we implement something called speculative execution. We always keep a hot standby HBase cluster in a different EC2 availability zone to avoid losing Pinners’ data. Any changes made to the primary HBase cluster are synced to the standby cluster within a few hundred milliseconds. In the event of a partial failure of the primary cluster, we serve the data from the standby cluster. This technique helps give the whole system four nines of read availability (not write) and provides a much better Pinner experience than failing the requests. The way speculative execution works is:
1. Make a call to the primary cluster to retrieve data.
2. If the call fails or doesn’t return within a certain time, make another call to the standby cluster.
3. Return the data from the cluster which returns first.
With this approach, SmartFeed service will be able to return data if either of the clusters is available, and the overall availability is close to the combined availability of the two clusters. The tricky part is picking a proper waiting time. Since syncing data from the primary cluster to the standby cluster has some delay, the data returned from the standby cluster can be stale. If the waiting time is too small, Pinners will have a higher chance of getting stale data. If the waiting time is too long, Pinners have to wait unnecessarily long even when we could return results from the standby cluster much earlier. For us, we found that if a call doesn’t return within time x, it will eventually time out in most cases. The time x is also larger than the 99.9th percentile of the call’s latency. We decided to use this as the cutoff time, which means results may be returned from the standby cluster for about one out of 1,000 calls.
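To make the pattern concrete, here is a minimal sketch of the speculative-execution read path in Python. It is not Pinterest's implementation: the fetch functions, the thread-pool approach and the 100 ms cutoff are illustrative assumptions standing in for reads against the primary and standby HBase clusters.

```python
# A minimal sketch of the speculative-execution read path described above.
# fetch_primary / fetch_standby and the cutoff value are illustrative
# assumptions standing in for reads against the two HBase clusters.
import concurrent.futures
import time

def fetch_primary(user_id):
    # Stand-in for a read against the primary cluster.
    time.sleep(0.05)
    return {"source": "primary", "pins": ["a", "b", "c"]}

def fetch_standby(user_id):
    # Stand-in for a read against the hot standby cluster
    # (may be slightly stale because of replication lag).
    time.sleep(0.02)
    return {"source": "standby", "pins": ["a", "b", "c"]}

def speculative_read(user_id, cutoff_seconds=0.1):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    primary = pool.submit(fetch_primary, user_id)
    try:
        # Happy path: the primary answers within the cutoff time x.
        return primary.result(timeout=cutoff_seconds)
    except Exception:
        # Primary is slow or failing: race it against the standby and
        # return whichever cluster answers first without an error.
        standby = pool.submit(fetch_standby, user_id)
        pending = {primary, standby}
        while pending:
            done, pending = concurrent.futures.wait(
                pending, return_when=concurrent.futures.FIRST_COMPLETED)
            for future in done:
                if future.exception() is None:
                    return future.result()
        raise RuntimeError("both clusters failed")
    finally:
        pool.shutdown(wait=False)

print(speculative_read(user_id=42))
```

In a real service the cutoff would be tuned against observed latency, for example the 99.9th percentile described above.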
Another interesting finding is that the latency of the standby cluster is higher than the primary cluster because so few calls fall back to the standby cluster, and it’s in a ‘cold’ state for most of the time. To warm up the pipeline and get it ready for use, we randomly forward x percent of calls to the standby cluster and drop the result. One time the primary HBase was down for almost one hour because of some hardware issue. Thanks to speculative execution, all home feed requests automatically failover to the standby cluster. The performance and success rate of home feed was not impacted at all during the whole HBase incident. Outcomes Since the launch of SmartFeed project, we’ve been handling hundreds of millions of calls per day and haven’t had a major incident with the availability dropping below 95 percent for more than five minutes. Overall, our availability is better than four nines. If you’re interested in tackling challenges and making improvements like this, join our team! Dan Feng is a software engineer at Pinterest. Acknowledgements: This technology was built in collaboration with Chris Pinchak, Xun Liu, Raghavendra Prabhu, Jeremy Carroll, Dmitry Chechik, Varun Sharma and Tian-Ying Chang. This team, as well as people from across the company, helped make this project a reality with their technical insights and invaluable feedback. For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter. Interested in joining the team? Check out our Careers site.
https://medium.com/pinterest-engineering/building-a-scalable-and-available-home-feed-6a343766bb6
['Pinterest Engineering']
2017-02-17 21:59:32.137000+00:00
['Engineering', 'Pinterest', 'Hbase', 'Data', 'Qps']
How to Choose the RIGHT Influencers for Your Brand!
One of the most common questions I get asked about, and one of my strengths when it comes to creating an Influencer Marketing Strategy, is choosing the right influencer. Here’s how to choose the right influencers for your brand in 6 simple steps! 1. Set an objective One of the most common problems I see when executing campaigns for brands is that they want to achieve everything at once, but that doesn’t work. You need to set a clear objective, as it then defines what approach you’ll take to choosing influencers, so ask yourself: what am I trying to achieve from collaborating with influencers? Some examples could be product awareness, brand awareness, traffic to your website, or conversion/sales. Each tier of influencers serves a different purpose — for example, it’s recommended to use tier 1 influencers (influencers with a very high follower base) during the first phase of a new product launch and then move on to using tier 2 influencers afterwards. 2. Keep your opinion out of it as long as they fit into your brand Ready for the ugly truth? Your opinion doesn’t matter; your customers won’t care about what you think of a certain influencer. Look at the end results — just because you don’t like the influencer personally doesn’t mean that they aren’t worthwhile, and it certainly doesn’t mean that they won’t achieve results. As long as they fit into your brand and they have a similar target audience, work with them. 3. Analyze their follower base Look at their followers. The best way to know if an influencer is going to get you the results you want is by ensuring that their followers are your target audience. If they aren’t, then you shouldn’t be working with them just for the heck of it. Make sure you have a certain TA (target audience) set prior to even talking or engaging with any influencer. 4. Analyze their numbers Take a look at their engagement rates. If an influencer has 100,000 followers with only a few hundred likes — something is fishy. At the same time, keep in mind the nature of the platform; for example, Instagram rolled out an algorithm a while ago and that has affected engagement rates. Take a look at the number of likes vs. the number of comments, etc. The best thing to do is to look at historical data and see the month-on-month follower growth or engagement to ensure an influencer has real followers and isn’t buying likes or comments. iconosquare.com is a good tool to use for such data. 5. Previous collaborations with other brands What brands did they work with? How did that go? What results did they achieve from working with those brands? I know you’re not able to see all the results, but you can get a sense of how their followers reacted to those collaborations. They’ll give you key insights, especially if those brands are similar to yours. Look at the amount of engagement, quality of comments, interactions, etc. 6. What Do Others Say About Them? This is the most underrated point in our industry. Brand Managers & Agency People, I BEG YOU — ask around before working with an influencer to avoid disappointments and nightmares. I’ve created my own database and algorithm for rating influencers based on various criteria such as their follower base, how easy they are to work with, their price vs. ROI ratio, etc. Working with influencers who are considered celebrities in our modern day as well as influencers who are barely known, I’ve learned that some of them have been amazing while others have been a nightmare. They’ve allowed their egos to get to them and become quite a challenge to work with (to say the least!).
Reality check — there are so many influencers out there that I, as a marketer, constantly tell influencers this industry has become so competitive that they need to rely on word-of-mouth marketing themselves, because no brand manager or agency person wants to work with someone who adds stress to their lives. Do you have any other tips on how to choose influencers, or do you need help running your next influencer marketing campaign? Tweet me @mikealnaji and let’s discuss!
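As a small illustration of the "Analyze their numbers" step above, here is a rough sketch of the kind of engagement-rate sanity check described; the sample figures and the 1% threshold are made-up assumptions, not industry benchmarks.

```python
# A rough sketch of the engagement-rate check described in step 4.
# The profile numbers and the 1% threshold are illustrative assumptions.

def engagement_rate(avg_likes, avg_comments, followers):
    """Average (likes + comments) per post as a share of followers."""
    return (avg_likes + avg_comments) / followers

profile = {"followers": 100_000, "avg_likes": 400, "avg_comments": 12}
rate = engagement_rate(profile["avg_likes"], profile["avg_comments"],
                       profile["followers"])
print(f"Engagement rate: {rate:.2%}")

if rate < 0.01:  # e.g. 100,000 followers but only a few hundred likes
    print("Something is fishy: check month-on-month follower growth too.")
```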
https://medium.com/astrolabs/how-to-choose-the-right-influencers-for-your-brand-f610e9a22bf3
['Mike Alnaji']
2017-08-27 14:30:33.979000+00:00
['Marketing', 'Influencer Marketing', 'Digital', 'Digital Marketing', 'Social Media']
Meaning Eventually Finds Its Place In FKA Twigs’ Grandiose Artistic Vision
Meaning Eventually Finds Its Place In FKA Twigs’ Grandiose Artistic Vision Her latest LP “MAGDALENE” is an interstellar voyage through the dust of the broken heart to the planet of self During her long-lasting hiatus, the avant-garde pop princess FKA Twigs got crushed by the brutal hands of her troubled relationships — falling in love with the ex-blood sucker Robert Pattinson and splitting with him in a seemingly heart-wrecking manner. Two years following the end of this romance, twigs comes back feeling quite herself and ready to baptise her loyal fanbase in the career-defining shrift. In the pre-“MAGDALENE” era, British-born singer and songwriter Taliah Barnett took on a cunning challenge to manufacture the new RnB sound — blending her fragile-sounding soprano and the spooky art-house opera inspired imagery with the precision and inanimation of electronic music. Her efforts were met with universal critical acclaim, but looking in retrospect, the alien world twigs built around herself felt overwhelmed by her pompous visionary yet lacking raw human experience. In this sense, “MAGDALENE” comes through as a true revelation, being an album that runs on the fuel of imaginative lyricism and storytelling. Here, Twigs tries on the image of Mary Magdalene and tells us the love story with her very own Jesus, drawing inspiration from the chants of classical religious music and futuristic electronic sounds. The album starts with “thousand eyes” and, in an instance, enswathes you with a death-bearing sound of acapella choral signing and thud but heavy background instrumental, serving as a reminder of memorial liturgy music stripped off to its most naked condition. Alongside this soul-shivering melody, Twigs begins her Skaespherean story of spiritual death and thorny resurrection. If I walk out the door, it starts our last goodbye If you don’t pull me back, it wakes a thousand eyes Going forward, the narrative unfurls in a surprisingly cohesive manner, letting Twigs’ vocal arrangements and poetic talent shine the way they’ve never had before. No matter what it is — the antithesis of distorted chest voice and paper-light head voice on “home with you”, or the intimate insight into the secrets of womanhood on “mary magdalene” — Twigs serves us a bowl full of ripe-fruit she religiously gathered at the garden of Eden. There are many great moments on “MAGDALEN” — either lyrically, sonically or vocally — but the times it ascends to the sky-high levels lie at the intersection of the experimental search Twigs underwent in her previous work and the lucidity of pop-sound she tamed with the help of her fellow co-writers and co-producers. The result of this artistic confluence materialized in “sad day”, which might be remembered as one of the best pop tracks of this decade. Channelling Kate Bush ethereal timbre, Twigs starts the track by whispering simple yet beautiful lyrics into the isolation of her listener's auricles, and then the melody expands into this epic dance/electronic ballad where Skrillex’s production touch suddenly falls in place and electrifies the record. Bearing in mind how great the rest of the album is, it’s at best puzzling how “holy terrain” featuring Future made it to the final cut. It’s a commercially-baked, mediocre RnB song made to, perhaps, please the wider audience. A sad but not criminal oversight which gets forgotten as soon as the record ends. Despite being listed as the sixth track, “fallen alien” comes through as the culminating phase of the narrative. 
It’s the most experimental track on the album exposing Twigs in her most desperate state. She basically goes on a full-scale jihad against her former lover. Her vocals here are nothing less than transcending: incisive, hysterical, in a good sense of this word, and on the edge of breaking loose into a wild scream. The last three songs reveal Twigs meditating on the aftermath of relationships and reconciling with the damage left after it. The album ends with the lead single “cellophane”, a beautiful piano ballad which wraps all the memories and feelings into a thin, transparent sheet of regenerated cellulose. All wrapped in cellophane, the feelings that we had. It might sound trivial but the beauty of “MAGDALENE” truly lies in the eyes of its beholder. You can try and disentangle it into separate pieces to only be left with the overwhelming layers of artificial noises and digitally-produced sounds. But looking at “MAGDALENE” in its entirety reveals a timeless piece of pop art which was born in a happy marriage of a self-aware artist and her multifaceted talent. P.S. You can find me on Twitter and Instagram.
https://tonysolovjov.medium.com/meaning-eventually-finds-its-place-in-fka-twigs-grandiose-artistic-vision-bdfc8627fe53
['Tony Solovjov']
2020-01-30 09:32:44.502000+00:00
['Review', 'Music', 'Art', 'Pop', 'Culture']
How to Set the Mood for Maximum Productivity
How to Set the Mood for Maximum Productivity Get more done in less time by implementing these little rituals Many of us struggle with being consistently productive. We plan so much for the day and then we end up procrastinating instead. And often, even when we do start working, it doesn’t go the way we expected. We just can’t seem to get in the zone. I used to struggle with this a lot. I would sit down, open a document and start working on the task at hand. Yet my mind would wander and wouldn’t stay focused on the work I was supposed to be doing. Often I would give up, saying: “I’m just not in the mood. It’s not a productive day and there’s nothing I can do about it.” But then I learned that this wasn’t true. There is always something you can do. All I needed to regain my focus was a consistent setting that I associated with work.
https://medium.com/live-your-life-on-purpose/how-to-set-the-mood-for-maximum-productivity-57d735fcc787
['Veronika Jel']
2020-06-16 13:01:01.324000+00:00
['Advice', 'Work From Home', 'Life Hacking', 'Self Improvement', 'Productivity']
THE OCEAN
THE OCEAN Beautiful and unexplored 71% of the earth’s surface consists of water. Large bodies of water are called oceans. The ocean provides so many things for us; an article from ecology.com, “10 Things to Know About the Ocean”, lists benefits ranging from the oxygen we need to breathe to the jobs we need to survive in the world. The majority of people think that rain forests are the main producer of oxygen, but this is a misconception. The Ocean Preneur posted an article that showed the ratio of oxygen production: rain forests produce only 28% of our oxygen, while the ocean produces 70% of it. The ocean also acts as a regulator of the earth’s climate, keeping our planet warm when the temperature sinks. Image from earth.com How does the ocean play such an important role in controlling our weather? Ocean Exploration and Research posted an article that shows how the ocean plays a big role in our climate system: the majority of radiation from the sun is absorbed by the ocean, particularly in tropical waters around the equator, where the ocean acts like a massive, heat-retaining solar panel. The ocean doesn’t just store solar radiation, it also helps to distribute heat around the globe. The earth’s water cycle is also influenced by the ocean. In the evaporation process, the ocean plays an important role by taking up the sun’s heat and passing it on to the condensation process (the process that forms clouds). After this long process, rain finally falls from the skies in the precipitation stage. That’s how the ocean plays a role in controlling the weather. The ocean also provides food for humans. More than a billion people depend on the sea for protein, not only from fish and the other animals that live in the ocean but also from plants that live there, such as algae and seaweed. Image from sciencenewsforstudents.org These days people like to spend their holidays on the ocean, from simply sightseeing the wonderful views of the ocean floor to doing water sports. There are so many water sports we can do in the ocean, such as scuba diving, snorkeling, surfing, parasailing, wakeboarding, sea kayaking, free diving, sea walking, cage diving, etc. In 2016, based on FAO (Food and Agriculture Organization) data, 59.6 million people in the world were engaged in fisheries and aquaculture. At the European Union level alone, the blue sector represents 3,362,510 jobs across 9 subsectors. In the United States, almost three million jobs are directly dependent on the resources of the oceans and Great Lakes. Image from boraborapearlbeachresort.com There is even an Austronesian tribe that lives on the ocean, called the Sama-Bajau. They live nomadically on the water. Today, it is not only the Sama-Bajau who live above the ocean: the tourism sector has spread its wings to develop a new holiday trend, “living above the ocean”. Destinations for these luxury holidays include the Maldives, Bora-Bora, Derawan Island, and many more. It is not only humans who depend on the ocean; many other creatures depend on it too. The ocean is their home and the place they live. Wallace wrote in his book that “being near, in, on, or underwater can make you happier, healthier, more connected, and better at what you do.” The ocean can affect our psychology in good ways. That’s what the ocean provides for us, but what about us? Have we done anything good for the ocean? Today, there are many issues regarding pollution and ocean damage.
Humans have lived on earth for about 200,000 years, and in that period we have depended on the ocean for our lives. If ocean pollution and damage continue, how long can we stay on earth and enjoy the ocean? Image from dailymail.co.uk Those issues are: oil spills, seas of plastic garbage, sewage disposal, and toxic chemicals. Seas of plastic garbage have become the hottest issue today. National Geographic posted an article about the many threats that the ocean faces today, and one of those threats is plastic. 12.7 million tons of plastic garbage is found in the ocean every year. Seas of plastic garbage not only impact human life but also the ocean ecosystem and all the creatures that live there. If the ocean could talk, it would shout out loud and tell humans to stop destroying everything. The ocean gave us everything that we need, but we destroy it. As human beings living on the earth, we should realize our mistakes and try to fix them. Start from small things and work toward big goals. Let’s start now! Let’s give our contribution to a better life and a better future.
https://medium.com/tfi-student-community/the-ocean-2cefca773ce9
['Laurent Angelica Santoso']
2019-11-29 08:01:02.085000+00:00
['Environment', 'Sea', 'Beautiful', 'Life', 'Oceans']
Auto-Encoder: What Is It? And What Is It Used For? (Part 1)
Auto-Encoder: What Is It? And What Is It Used For? (Part 1) A Gentle Introduction to Auto-Encoder and Some Of Its Common Use Cases With Python Code Background: An autoencoder is an unsupervised artificial neural network that learns how to efficiently compress and encode data, then learns how to reconstruct the data back from the reduced encoded representation to a representation that is as close to the original input as possible. An autoencoder, by design, reduces data dimensions by learning how to ignore the noise in the data. Here is an example of the input/output image from the MNIST dataset passed through an autoencoder. Autoencoder for MNIST Autoencoder Components: Autoencoders consist of 4 main parts: 1- Encoder: in which the model learns how to reduce the input dimensions and compress the input data into an encoded representation. 2- Bottleneck: the layer that contains the compressed representation of the input data. This is the lowest possible dimensionality of the input data. 3- Decoder: in which the model learns how to reconstruct the data from the encoded representation to be as close to the original input as possible. 4- Reconstruction Loss: the method that measures how well the decoder is performing and how close the output is to the original input. The training then involves using backpropagation in order to minimize the network’s reconstruction loss. You must be wondering why you would train a neural network just to output an image or data that is exactly the same as the input! This article will cover the most common use cases for autoencoders. Let’s get started: Autoencoder Architecture: The network architecture for autoencoders can vary between a simple FeedForward network, an LSTM network or a Convolutional Neural Network, depending on the use case. We will explore some of those architectures in the next few lines. 1- Autoencoder for Anomaly Detection: There are many ways and techniques to detect anomalies and outliers. I have covered this topic in a different post below: However, if you have correlated input data, the autoencoder method will work very well because the encoding operation relies on the correlated features to compress the data. Let’s say that we have trained an autoencoder on the MNIST dataset.
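The article's code listings were not preserved in this copy, so the following is a rough reconstruction of the kind of six-layer feed-forward autoencoder and anomaly-scoring helper the next passage refers to, assuming the Keras API. Layer sizes, the optimizer, and the score scale are assumptions; it will not reproduce the article's exact numbers.

```python
# A rough reconstruction of the six-layer feed-forward autoencoder the
# text describes (the original listing is missing from this copy).
# Layer sizes, optimizer and score scale are assumptions.
import numpy as np
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

inputs = Input(shape=(784,))
x = Dense(128, activation="relu")(inputs)      # encoder
x = Dense(64, activation="relu")(x)
x = Dense(32, activation="relu")(x)            # bottleneck
x = Dense(64, activation="relu")(x)            # decoder
x = Dense(128, activation="relu")(x)
outputs = Dense(784, activation="sigmoid")(x)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                validation_data=(x_test, x_test))

def anomaly_score(flat_image):
    """Reconstruction error (sum of squared differences) for one image."""
    flat_image = flat_image.reshape(1, 784)
    reconstruction = autoencoder.predict(flat_image, verbose=0)
    return float(np.sum((flat_image - reconstruction) ** 2))

print(anomaly_score(x_test[0]))                               # normal digit: low score
print(anomaly_score(np.random.rand(784).astype("float32")))   # random input: high score
```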
Using a simple FeedForward neural network, we can achieve this by building a simple six-layer network (the original listing is not included in this excerpt; see the sketch above). The output of the original training code is:
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 6s 103us/step - loss: 0.0757 - val_loss: 0.0505
Epoch 2/10
60000/60000 [==============================] - 6s 96us/step - loss: 0.0420 - val_loss: 0.0355
Epoch 3/10
60000/60000 [==============================] - 6s 95us/step - loss: 0.0331 - val_loss: 0.0301
Epoch 4/10
60000/60000 [==============================] - 6s 96us/step - loss: 0.0287 - val_loss: 0.0266
Epoch 5/10
60000/60000 [==============================] - 6s 95us/step - loss: 0.0259 - val_loss: 0.0244
Epoch 6/10
60000/60000 [==============================] - 6s 96us/step - loss: 0.0240 - val_loss: 0.0228
Epoch 7/10
60000/60000 [==============================] - 6s 95us/step - loss: 0.0226 - val_loss: 0.0216
Epoch 8/10
60000/60000 [==============================] - 6s 97us/step - loss: 0.0215 - val_loss: 0.0207
Epoch 9/10
60000/60000 [==============================] - 6s 96us/step - loss: 0.0207 - val_loss: 0.0199
Epoch 10/10
60000/60000 [==============================] - 6s 96us/step - loss: 0.0200 - val_loss: 0.0193
As you can see in the output, the final reconstruction loss/error for the validation set is 0.0193, which is great. Now, if I pass any normal image from the MNIST dataset, the reconstruction loss will be very low (< 0.02), BUT if I try to pass any other, different image (an outlier or anomaly), we will get a high reconstruction loss value because the network fails to reconstruct an input that it considers an anomaly. Notice that you can use only the encoder part to compress some data or images, and you can also use only the decoder part to decompress the data by loading the decoder layers. Now, let’s do some anomaly detection. The scoring code (also omitted from this excerpt; the anomaly_score helper in the sketch above plays the same role) uses two different images to predict the anomaly score (reconstruction error) with the autoencoder network we trained above. The first image is from MNIST, and the result is 5.43209; this means that the image is not an anomaly. The second image is a completely random image that doesn’t belong to the training dataset, and the result was 6789.4907. This high error means that the image is an anomaly. The same concept applies to any type of dataset. 2- Image Denoising:
https://towardsdatascience.com/auto-encoder-what-is-it-and-what-is-it-used-for-part-1-3e5c6f017726
['Will Badr']
2019-07-01 07:09:48.367000+00:00
['Artificial Intelligence', 'Machine Learning', 'Neural Networks', 'Data Science', 'Deep Learning']
A Warm Fuzzy Hug
#DecemberSelfCare A Warm Fuzzy Hug Adorable fuzzy PJs, this poem’s for you Photo by Anastasia Zhenina on Unsplash as the temperature dips into the icy stages the phase where the air feels spicy against your skin i celebrate the winter indoors by breaking out the festive fuzzy PJs the ones that make you feel like you’re a walking live teddy bear the PJs that envelope you like a bear hug.
https://medium.com/the-brain-is-a-noodle/a-warm-fuzzy-hug-ad180ba399d1
['Lucy The Eggcademic', 'She Her']
2020-12-23 10:05:17.353000+00:00
['Poetry Prompt', 'Mental Health', 'Self Care', 'Poetry']
Data Science for Everyone: Getting To Know Your Data — Part 1
Data: Formulating the Concepts Definitions The word data is the plural form of the word datum, which has the meaning of a “single piece of information, as a fact, statistic, or code” [5]. Another definition is “something given or admitted especially as a basis for reasoning or inference” [6]. In simple terms, data can be defined as numbers, characters, words, sounds, or symbols that can be used to describe, quantify, or recognize physical or virtual entities. For example, you can sufficiently describe a person with some data points (datums) such as name, date of birth, gender, appearance (colors and build), height, weight, etc. The same information can also be used to differentiate one person from another for recognition purposes. Figure 3: Data field with a value assigned to it. (Image by author) Let’s look at this conversation between two people: “That [tall] [boy] with [brown hair] working as a [barista] at [ABC Coffee Shop] helped me when my car broke down in front of his shop. I think his name is [James]”. The words within [] are the data points you may use to recognize the specific person in a normal conversation as well as in a systematic data application. Data points are sometimes referred to as features, data fields, characteristics, facts, or attributes, which at a high level should be taken as the same concept. A collection of data fields we can use to describe a person can be called a data model of a person. That becomes a record when values are assigned to those fields. Similarly, we can represent other physical objects like vehicles, buildings, and books using data points that describe their characteristics. Figure 4: Data fields used to represent a person. (Image by author) Several related records can be arranged into a structure such as a table or a list. Imagine a table containing data about 100 different people, with one row representing each person and each column used to store one data point. Figure 5: Table When many related data structures are combined into one larger structure, that becomes a database. Depending on the application, there are multiple types of databases and database management systems to choose from. Data and Information We looked at the basic concepts of data that everyone should know. Let’s quickly look at another related concept that is always mentioned alongside data: information. Let’s try to understand the difference between information and data. As we discussed above, data comes with two main components, structure and context. Without them, data has no meaning or value. When data is taken with structure, context, and meaning, we call it information. Here is an example: you get some data, a list of values with different color names. It is certainly data, but can it alone give you any context? Is that data meaningful or useful? The answer could be no to both questions. Figure 6: Information (Photo by William Iven on Unsplash) What if the same list is given with another data point for each color value: a car model and brand? The data now has some context and meaning. How about adding another data point: price? We have now added context to the data, and it can be used to derive useful information. In the person data example illustrated in Figure 4, all the attributes “name”, “date of birth”, “height”, “weight”, etc. have no use unless they are connected to the person entity (arrows in the figure). Data Organization Based on how they are arranged, data collections can be categorized as structured, unstructured, and semi-structured.
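Before moving on to how collections are organized, here is a tiny sketch of the "data model versus record" idea from Figure 4, showing the same person fields as a structured row and as a semi-structured JSON document. The field names and values are illustrative assumptions, not the exact ones from the figure.

```python
# A person data model as a structured record (one row of a table) and as
# a semi-structured, self-describing JSON document. Values are made up.
import json

columns = ["name", "date_of_birth", "gender", "height_cm", "weight_kg"]
row = ["James", "1990-04-12", "male", 183, 78]   # one record of the table

record = dict(zip(columns, row))                  # field -> value pairs
print(record["name"], "is", record["height_cm"], "cm tall")

# The same record in semi-structured JSON form (compare Figure 8).
print(json.dumps(record, indent=2))
```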
Figure 7: Illustrating structured, unstructured, and semi-structured data. (Image by author) The most traditional form of data collection is structured, where the data is organized into tables that are easy to handle for both humans and machines. Structured data is easier to search and manage. Unstructured data, such as images, videos, sound, and large text content like books, letters, and paragraphs, has been used by humans for many centuries, even before the computer era. Managing and searching through unstructured data is quite cumbersome. A Forbes Technology Council post, referring to Gartner, estimates "that upward of 80% of enterprise data today is unstructured" [7]. Therefore, research and development efforts in data science are heavily focused on this area. With the advent of the internet and the evolution of computer technology, semi-structured data forms such as mark-up languages and hierarchical data structures have become popular for storing and transmitting data. Semi-structured data is considered self-describing, and it helps to store complex data that cannot easily be organized into tables. Figure 8: Semi-structured data formats (XML and JSON). (Image by author) In the process of analyzing data, unstructured data is converted into a structured or semi-structured form using suitable data science methodologies. Data types: Let's now take a different perspective on understanding data. Data can be represented in many different ways, such as numbers, characters, symbols, pictograms, colors, signs, object arrangements, etc. Figure 9: Different data representations and conversion into digital form. (Image by author) In a digital computer, every representation eventually melts down into numbers and ends up in binary form when stored, transmitted, and computed on. Computing uses a system of data types, with boolean, integer, float, and character as primary types. The types derived from these (strings, arrays, lists, sets, vectors, matrices, tensors, complex numbers, structures, enumerators, dictionaries, tables, and objects) make possible the complex computing applications we all benefit from in this era. A special data type, known as null, none, or void depending on the programming language, is used to represent "nothing". An in-depth discussion of the different data types and their uses is planned for a future article. Figure 10: Data Types used in computing. (Image by author) Data File Types: Traditionally, you access data from printed materials, videos, display boards, etc. In a computer system, your data comes as files or streams. The common file types are text, binary, images/photos, audio, video, archive, compressed, and database. A detailed discussion of these file formats and their uses is planned for a future article. Data Encoding: The data you access does not always stay in the same format in which it is presented to you. We learned that the ultimate form of data in digital systems is binary (1/0). However, there are intermediate representations used when storing and transmitting data, and when data moves from one location to another, its representation can also change. We call that encoding and decoding with reference to one representation. One of the common encoding schemes is the American Standard Code for Information Interchange (ASCII) [8], which is used to represent characters (letters, digits, and special signs). Figure 11 illustrates the encoding of the text "DATA@8" into ASCII.
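As a plain-Python illustration of that encoding step (a sketch of the idea, not the author's figure), each character maps to an ASCII code point and a 7-bit pattern, and decoding reverses the mapping:
# Encode "DATA@8" into ASCII code points and 7-bit binary, then decode it back.
text = "DATA@8"
codes = [ord(ch) for ch in text]          # [68, 65, 84, 65, 64, 56]
bits = [format(c, "07b") for c in codes]  # ['1000100', '1000001', '1010100', '1000001', '1000000', '0111000']
decoded = "".join(chr(c) for c in codes)  # 'DATA@8'
print(codes, bits, decoded)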
The universally accepted encoding scheme for the same purpose is known as UNICODE [9]. Figure 11: Character Encoding Example (ASCII) (Image by author) Encoding should not be confused with encryption, which hides data content from unintended parties. A detailed discussion of encoding-decoding, encryption-decryption, and their uses is planned for a future article. Analog vs. Digital Data: In nature, data exists in analog form, and we need to convert it into a machine-recognizable binary form to be used with digital computing machines. The term "digitization" is used to name this conversion process [10]; in electronics, it is sometimes called analog-to-digital conversion. Some examples of digitization are scanning a paper document to create a digital copy, recording a sound with your mobile phone's microphone, and recording your walking track using GPS data. When digitally stored data needs to serve as an analog output, digital-to-analog conversion is used. You use analog-to-digital conversion and vice versa in your personal devices such as mobile phones, video displays, sound recorders, cameras, music players, etc. Figure 12: Analogue and Digital conversion: digitization of a sound signal. (Image by author) Qualitative and Quantitative nature of data: Each data measurement can also be classified as qualitative or quantitative, and into their subclasses, by the nature of the values it can take. Figure 13: Classifying data measurements by their Qualitative and Quantitative nature. (Image by author) Data is a broad concept that can be examined from a variety of perspectives, and the more you combine those perspectives, the better grip you will get on the data you are dealing with. Therefore, the concepts we discussed above are crucial at every stage of the data science workflow. They are also important any time you engage with data or data-driven applications. Measuring Data: Data is quantifiable. The smallest unit of digital data is called a bit, which is also used as a scale to measure data. A single bit can store a value of either 0 or 1. In the data encoding example, we showed that ASCII uses 7 bits, which makes 2⁷ = 128 possible combinations; in other words, 128 different characters can be represented using an ASCII value. A group of 8 bits (an octet) is one byte, which is the fundamental unit for measuring data; the symbol defined by the International System of Units (SI) is B. To measure large quantities of data, SI prefixes such as Kilo (K), Mega (M), Giga (G), etc. are used [11]. These prefixes must not be confused with the binary prefixes used in many applications, such as Ki, Mi, Gi, etc. [12,13]. Figure 14: System of units for measuring digital Information (Image by author, information source: [12]) Data visualization: You have seen charts, plots, and various infographics condensing data and information into graphical representations. Visualization is a very efficient way of communicating data. It is also important in the early stages of the data science workflow, for understanding the data and for various quality-control measures, before moving into the later stages. Some argue data visualization is both an art and a science. An in-depth discussion of data visualization methods and their uses is planned for a future article.
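Returning briefly to the Measuring Data idea above, here is a small sketch of the 7-bit ASCII count and of the SI-versus-binary prefix distinction (the file size is an arbitrary example value, not from the article):
print(2 ** 7)                 # 128 characters representable with 7-bit ASCII
size_bytes = 3_500_000_000    # an arbitrary example file size
print(size_bytes / 1000**3)   # 3.5 GB   (SI prefix, powers of 1000)
print(size_bytes / 1024**3)   # ~3.26 GiB (binary prefix, powers of 1024)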
https://medium.com/towards-artificial-intelligence/data-science-for-everyone-getting-to-know-your-data-part-1-bb8b6d7782b1
['Sumudu Tennakoon']
2020-12-24 01:03:36.911000+00:00
['Data Science', 'Machine Learning', 'Data Scientist', 'Artificial Intelligence', 'Education']
Ten Deep Learning Concepts You Should Know for Data Science Interviews
Deep learning and neural networks can get really complicated. When it comes to data science interviews, however, there are only so many concepts that interviewers test. After going through hundreds and hundreds of data science interview questions, I compiled 10 deep learning concepts that came up the most often. In this article, I’m going to go over these 10 concepts, what they’re all about, and why they’re so important. With that said, here we go!
https://towardsdatascience.com/ten-deep-learning-concepts-you-should-know-for-data-science-interviews-a77f10bb9662
['Terence Shin']
2020-12-10 04:04:50.070000+00:00
['Deep Learning', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'Work']
Getting Started with Python
Python is an amazing language that is used in a wide variety of applications. Did you know that Python is used in applications involving automation, data science, and web apps? For example, Facebook uses Python to process images. Before we get started, let us first break down the concepts that we need to learn in order to become a Python ninja: Python syntax, data structures, and algorithms. The first thing we must be acquainted with is the syntax of the Python programming language. We must also learn which data structures are appropriate for the particular problem we are solving. Lastly, we must know which algorithm we want to use to reach the solution to the problem. Now you must be thinking: this is fine and all, but what do I need to install on my machine in order to run Python? If you are already using a Mac or Linux operating system, a version of Python typically comes pre-installed. If you are using a Windows 10 machine, Microsoft released an update that adds a python command which points you to the Microsoft Store to install it. However, if you do not have Python installed on your OS, or are unsure whether you have Python installed, here is a breakdown between a Mac OS and Windows OS for checking:
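(The platform-by-platform breakdown is not reproduced in this excerpt, but a minimal, platform-agnostic check, offered here as a sketch rather than the original article's steps, is to save the lines below as check_python.py and run python check_python.py, or python3 / py depending on your OS:)
# If this script runs at all, Python is installed; it prints which interpreter and version you have.
import sys
print("Interpreter:", sys.executable)
print("Version:", sys.version)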
https://medium.com/quick-code/getting-started-with-python-313eb74915c8
['Rafay Syed']
2019-08-28 01:21:52.907000+00:00
['Programming', 'Computer Science', 'Python']
Spores on the cooling off corpse of data science
A few years ago, data scientism was a fresh, shiny hot air balloon that took off thanks to the aristocratic arrogance of prestigious predictive analytics vendors, who simply milked their cows and watched with glassy eyes as the era of big data and cloud computation began. The burner in the balloon's basket was fueled by the dropping cost of storing and manipulating massive amounts of data, thanks to the quality big data tools and sophisticated cloud infrastructure that had become widely available. These were hacked together by resident data scientists using mainly R/Python or other open source tools. The marketing storm of "data driven business", "BIG data" and "cloud computation", and the decades-long hypnotic education about the extraordinary value of predictive analytics, established a craving appetite in companies all around the world to have their own burner developed. This craving appetite delivered headlines declaring that the sexiest job of the century would be data science. This belief is still quite fashionable, and data scientists need not worry about their jobs yet. However, there are voices and debates questioning the long-term future of data science. I am also one of those who believe the bubble is leaking; however, I reached this conclusion in a different way, which I will now unfold… Once upon a time… The story goes back about fifteen or twenty years, when the classical vendors of predictive analytics software started spreading their products outside the classical application areas such as banking and insurance. They did this first in the form of slideware. Later, the slideware evolved into expensive, yet exceptionally unstable, betas. Nevertheless, the evangelization of data mining led to extensive usage of predictive models in various business areas such as predictive marketing and CRM, financial services, telecommunications, retail, travel, healthcare, and pharmaceuticals. The application areas of predictive models and the market for predictive analytics are still growing today, and are predicted by all major research companies to keep growing for the remainder of the decade. It is also expected that new application areas, such as predictive maintenance, will strongly emerge. However, I strongly believe that the landscape of predictive analytics is being revolutionized, and that its future will be marked by an alternative both to the era of data scientists and to new releases of data mining software that still copy the workflows invented twenty years ago for the classical applications mentioned above. Today, the most widely used applications of predictive analytics are different from the classical applications for which the classical workflows were invented. In the early ages of predictive analytics, when it was used strictly for banking and insurance purposes, each prediction carried a very high financial impact: a single prediction could earn or lose the company hundreds of thousands of dollars. This fact, unsurprisingly, shaped the workflow of the classical data mining applications, with their focus on the development and fine-tuning of a single model by a horde of highly trained mathematicians and statisticians, preferably with PhDs. This culture of "rocket science" still shapes decisions about predictive analytics today. Yet while the financial impact of predictions in mass application areas such as predictive marketing, CRM, retail, or travel is tiny, most of these companies are still purchasing expensive predictive analytics tools that were built for the classical applications.
Most of these companies still hire highly trained data miners to use these products, or struggle to recruit versatile data scientists who can build in-house tools to generate predictions whose financial impact is measured in fractions of pennies. I am fairly sure that most business cases for predictive analytics would fail if they were properly tested. Back to the spores on the corpse: The key to the future of predictive analytics is ease of application and speed of deployability. The interpretability of a model, or the performance of any individual model, becomes less important, and this poses new challenges for the traditional data mining workflows and software packages. The majority of predictive analytics applications would benefit more from using Machine Learning as a Service (MLaaS) than from owning and licensing a standalone product. MLaaS products will be accessed through well-defined and fairly standardized APIs that open the door to continuous innovation. The most successful future MLaaS providers will feature the management of large numbers of models through advanced model monitoring capabilities. They will support automatic model development and, by closing the fact-feedback loop, they will also provide online learning, with models automatically re-trained when there is a drop in predictive performance. In the very near future, these features of MLaaS products will revolutionize the world of predictive analytics as we know it today. Hundreds of classical applications, products, services, and novel IoT applications will benefit from the adaptation capability, or "plug and play intelligence", they provide.
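To make that closed fact-feedback loop concrete, here is a minimal sketch of drift-triggered re-training. It is my own illustration of the idea, not any particular MLaaS product's API, and the threshold, helper names, and model choice are all hypothetical:
# Hypothetical sketch: monitor live accuracy and re-train when it drops below a threshold.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCURACY_THRESHOLD = 0.80

def monitor_and_retrain(model, feedback_batches, training_store):
    """feedback_batches yields (X, y_true) pairs observed after predictions were served."""
    for X, y_true in feedback_batches:
        accuracy = accuracy_score(y_true, model.predict(X))
        training_store.append((X, y_true))      # close the fact-feedback loop
        if accuracy < ACCURACY_THRESHOLD:       # performance drop detected
            X_all = [row for batch_X, _ in training_store for row in batch_X]
            y_all = [label for _, batch_y in training_store for label in batch_y]
            model = LogisticRegression().fit(X_all, y_all)   # automatic re-training
    return model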
https://medium.com/data-science-without-marketing-mystery/spores-on-the-cooling-off-corpse-of-data-science-fb7aef0fd715
[]
2017-09-08 10:19:34.307000+00:00
['Machine Learning', 'Predictive Analytics', 'Data Science', 'Big Data', 'CRM']
Build An Android App To Monitor and Convert “bitcoin and etherum” in 20 Local Currencies
In the era of the digital world, the monetary system is constantly changing, and things that have been popular are being replaced by improved technologies. The payment industry is particularly affected by this digital era of cryptocurrencies, given the public acceptance they have received from many countries and payment platforms. Countries like Japan have already made them a legal means of payment, alongside many others. A friend once wrote, "Our ecosystem is no longer just about the code but about people who build and use products. Recently, I've realized that 50% of my time (including weekends) is distributed to VS code, the terminal, and Slack. This whole thing is becoming a lifestyle and of course, I'm embracing it — it's what I love". I believe he's not alone. A lot of us spend up to 50 hours a week on productivity tools. Why should we limit it to just code? Why not extend it to cover daily-life utility tasks for us? With that in mind, I've put together a developer tool to show the possibilities of monitoring these cryptocurrencies in real time on your Android devices. Not just that: you will be able to convert them across 20 different local currencies. So in this tutorial, we'll walk through how you can build this application for yourself, leveraging the API we'll be providing for this purpose. DEMO It's always good practice to have a visual and practical idea of what you're building and how it works, so you can take a look at this short clip to see how the app works; you can also access the source code on GitHub. Without further ado, let's head on over to Android Studio and start building. Side Knowledge By virtue of this article, you will learn a few other Android development skills, like: making API calls, processing nested JSON objects with iterators, making network requests with Volley, working with RecyclerViews and CardViews, and mathematical conversions with formats. Technologies Before we go ahead and start building, it is wise to talk about the technologies we'll be using, so they won't look confusing when you come across them as we progress. Volley — a third-party library that allows us to make network requests seamlessly. RecyclerView/CardView — special Android layouts for better organizing content on screen. Now create a new Android Studio project called "CryptoCompare". By now this should be a fairly basic step; however, if you're just starting off, refer to any of my previous posts on how to set up a new AS project. Once you're done creating a new project, install the dependencies for the technologies we talked about. Open your app-level build.gradle file, add the dependencies, and click sync to install them. MainActivity Layout Then open activity_main.xml and set up the layout like so: This is quite a simple layout with a toolbar and three TextView objects for local currency, BTC, and ETH respectively; these primarily serve as headers for the values to be loaded remotely into the RecyclerView we defined below the TextView objects. This layout should look like this in your XML visualizer: Hey, yours might not look exactly like this, but then it shouldn't, because I used a custom background image which you probably don't have. The important thing to look out for is the three TextView objects showing up as expected and the blue lines denoting the area covered by your RecyclerView and probably the toolbar.
When we make an API call that returns the values for these TextView objects, we'll simply pass the data into our CardView layout and then use that layout to populate the RecyclerView accordingly. Make sense? Okay, let's continue. CardView Layout Talking about CardView, let's create a new layout resource file called "card_items.xml". This will be the CardView layout where we define the contents we'd like to display on the RecyclerView, i.e., Currency, BTC, and ETH. So create the new resource file and set it up like so: This is a simple CardView layout with three TextView objects that we predefined with dummy values to serve as placeholders for the actual data we'll be getting from our API. Just for the sake of clarity, your XML visualizer for this layout should look like this: Now let's head over to our MainActivity.java file and get interactive. Open MainActivity.java and initialize the RecyclerView object. Then we start making the API call. First we store the API URL inside a variable we defined as "private static final String URL_DATA" and then use it to build our JSONObject request like so: What we have done in the onCreate() method here is quite simple: we defined our API URL in a string variable and initialized our toolbar, texts, and RecyclerView. We also created an ArrayList from a CardItems class that we are yet to create, but will do so soon. Notice we also called a method loadURLData(). This is the method where we make the request to the API to return bitcoin and Ethereum along with their respective values in 20 currencies. If you copied this snippet into your studio and got errors, don't fret; you're not lost. We called a method and two classes we are yet to create: the loadURLData() method, the MyAdapter class, and the CardItems class. So go back inside the MainActivity class and create the loadURLData() method and set it up like so: loadURLData() Here we are simply making an API call with Volley, passing in the variable that stores the API URL. The method returns a response from which we then extract our BTC and ETH values into a JSONObject. Then we use an Iterator<?> to iterate through the nested object and match the individual BTC and ETH values to their respective currency keys, keysBTC and keysETH. Next we create the MyAdapter class. So create a new Java class called MyAdapter and set it up like so: MyAdapter class The MyAdapter class is associated with our RecyclerView object. We use it to organize the contents of the RecyclerView. In this context, we simply inflated the card_items.xml layout and then used the implemented methods to create view holders and bind their contents to the inflated layout. Okay, let's step through this for a bit. By the way, if you see any red lines at this point, don't worry, you are not alone; I'll explain why you got the red lines and how to overcome them. From the top: when we created the MyAdapter class, we extended RecyclerView.Adapter<MyAdapter.ViewHolder> and passed into it the cardItemsList and the Context, which prompted us to implement its associated methods (onCreateViewHolder() and onBindViewHolder()). Inside the onCreateViewHolder() method we simply inflated the card_items.xml layout file and returned a new ViewHolder(v). Then in the onBindViewHolder() method, we created an instance of the CardItems class and stored the values of the cardItem objects in their respective variables (curr, btcVal, and ethVal).
Then, to finally bind these variables to their respective positions in the view holder, we set them on the holder with the help of our CardItems instance, where we defined the setters and getters. Finally, notice that when we extended the adapter class we passed in <MyAdapter.ViewHolder>; hence we created the ViewHolder class inside the MyAdapter class, where we simply initialized all the view objects in the card_items.xml file, including the LinearLayout. CardItems class Finally, to finish setting up our MainActivity, we create the CardItems class. The CardItems class will simply hold the setters and getters for the contents of our card_items.xml file, which we initialized earlier in the onCreate() method of the MainActivity class. So create a new Java class called CardItems and set it up like so: At this point everything is correctly set up. If you run the app, you should see that the data we passed from the JSON response into our card_items layout shows up on the layout, which in turn gets laid out on the RecyclerView like so:
https://medium.com/quick-code/build-an-android-app-to-monitor-and-convert-bitcoin-and-etherum-in-20-local-currencies-6628a9058a29
['Ekene Eze']
2018-02-01 17:05:03.845000+00:00
['Mobile App Development', 'Cryptocurrency', 'Android', 'Android App Development', 'Bitcoin']
Taking Data Visualization to Another Level
When you use one library for a certain period of time, you get used to it. But you need to evolve and learn something new every day. If you are still stuck with Matplotlib (which is amazing), Seaborn (also amazing), Pandas (basic, yet easy visualization), and Bokeh, you need to move on and try something new. Many amazing visualization libraries are available in Python, and they turn out to be very versatile. Here, I'm going to discuss these libraries: Plotly, Cufflinks, Folium, Altair + Vega, and D3.js (my best pick). If you are aware of and use the libraries mentioned above, then you are on the right track of evolution. They can help in generating some amazing visualizations, and the syntax isn't difficult either. Generally, I prefer plotly+cufflinks and D3.js. Alright, let's get back to the basics: Plotly Plotly is an open-source, interactive, browser-based graphing library for Python. It allows you to create interactive plots that you can use in dashboards or websites (you can save them as HTML files or static images). Plotly is built on top of plotly.js, which in turn is built on D3.js, and it is a high-level charting library. Plotly comes with over 30 chart types, including scientific charts, statistical charts, 3D graphs, financial charts, and more. One of the best things about plotly is that you can use it in Jupyter Notebooks as well as in standalone HTML pages. You can also use it online on their site, but I prefer to use it offline, and you can also save a visualization as an image. It's pretty simple to use and get working. — Method to use it in Jupyter Notebook (Offline) First, install the plotly library: pip install plotly Then open a Jupyter notebook and type this: from plotly import __version__ from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot init_notebook_mode(connected=True) The syntax is quite simple, or the simplest, I'd say. In Pandas, you use dataframe.plot(), and here you use dataframe.iplot(). This "i" changes the whole definition of the visualization. With just one line, I generated this scatter plot. You can customize it as you want. Remember to specify mode='markers', or you'll just get a cluster of lines. Scatter plot generated using plotly Please note that as the data grows, plotly begins to choke, so I would only use plotly when I have fewer than 500K data points. Try it all in your Jupyter Notebook. Cufflinks Cufflinks binds Plotly directly to pandas DataFrames. The combination is just amazing: the power of plotly combined with the flexibility of Pandas. It is more convenient than plotly on its own, and the syntax is even simpler. With Plotly's Python library, you describe figures with a DataFrame's series and index, but with cufflinks you can plot a DataFrame directly. Here is an example: df = cf.datagen.lines() py.iplot([{ 'x': df.index, 'y': df[col], 'name': col } for col in df.columns]) With Plotly df.iplot(kind='scatter') With Cufflinks Cufflinks makes it much easier to plot things. You can also generate amazing 3D charts with cufflinks. I generated this 3D chart with just a couple of lines of code. 3D chart with Cufflinks You can always try it out in your Jupyter Notebook.
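The author's exact couple of lines are not shown in this excerpt, but a 3D surface with cufflinks can be sketched roughly like this (the data grid below is made up for illustration):
# A rough sketch: plot a DataFrame of z = sin(sqrt(x^2 + y^2)) values as a 3D surface.
import numpy as np
import pandas as pd
import cufflinks as cf

cf.go_offline()   # render inside the notebook

x = np.linspace(-5, 5, 50)
y = np.linspace(-5, 5, 50)
xx, yy = np.meshgrid(x, y)
df = pd.DataFrame(np.sin(np.sqrt(xx**2 + yy**2)))

df.iplot(kind='surface')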
— Quick Hack: Set this in the config: c.NotebookApp.iopub_data_rate_limit = 1.0e10 Import it the following way: import plotly.graph_objs as go import plotly.plotly as py import cufflinks as cf from plotly.offline import iplot, init_notebook_mode cf.go_offline() # Set global theme cf.set_config_file(world_readable=True, theme='pearl', offline=True) init_notebook_mode() And it works inline. Next, I'm going to talk about yet another amazing viz library. Folium Folium is built on the data wrangling strengths of the Python ecosystem and the mapping strengths of the Leaflet.js library. You can manipulate your data in Python, then visualize it on a Leaflet map via folium. Folium is turning out to be an amazing library for plotting spatial data. You can also generate heat maps and choropleth maps using folium. Let's learn something about folium: Maps are defined as a folium.Map object, and other folium objects can be added on top of the folium.Map to improve the rendered map. You can use different map tiles for the map rendered by Folium, such as MapBox, OpenStreetMap, and several others; for that you can visit this GitHub repo folder or this documentation page. You can also select different map projections; many projections are available out there. Let's generate a choropleth map with GeoJSON of US unemployment. Here is the snippet (note the map object is named m throughout; us_states and state_data come from the folium example data): import os import folium m = folium.Map([43, -100], zoom_start=4) choropleth = folium.Choropleth( geo_data=us_states, data=state_data, columns=['State', 'Unemployment'], key_on='feature.id', fill_color='YlGn', name='Unemployment', show=False, ).add_to(m) # The underlying GeoJson and StepColormap objects are reachable print(type(choropleth.geojson)) print(type(choropleth.color_scale)) folium.LayerControl(collapsed=False).add_to(m) m.save(os.path.join('results', 'GeoChoro.html')) m This is just a basic one; you can add markers, pop-ups, and a lot more to it. Here is how it would look. Map with leaflet and folium Altair + Vega Altair is a declarative statistical visualization library based on Vega and Vega-Lite. Altair enables you to build a wide range of statistical visualizations quickly with a powerful and concise visualization grammar. You need to install it the following way if you are using a Jupyter Notebook; it also includes some example vega datasets. pip install -U altair vega_datasets notebook vega Altair's main dependency is Vega; in order to make the plots visible on the screen, you need to install it, and you also need to run this command for every new session: alt.renderers.enable('notebook') Data in Altair is built around the Pandas DataFrame. One of the defining characteristics of statistical visualization is that it begins with tidy DataFrames. You can also save a plot as an image or open it in the Vega editor for more options. It's definitely not the best one out there, but it is definitely worth a try for the sake of the creators' hard work. Here is an example; I'm using the cars dataset for this: import altair as alt from vega_datasets import data source = data.cars() brush = alt.selection(type='interval') points = alt.Chart().mark_point().encode( x='Horsepower:Q', y='Miles_per_Gallon:Q', color=alt.condition(brush, 'Origin:N', alt.value('lightgray')) ).add_selection( brush ) bars = alt.Chart().mark_bar().encode( y='Origin:N', color='Origin:N', x='count(Origin):Q' ).transform_filter( brush ) alt.vconcat(points, bars, data=source) Scatter plot and histogram with Altair and Vega You can try it out in your own Notebook and let me know if you get stuck anywhere!
D3.js (Data Driven Documents) D3.js is a JavaScript library for manipulating documents based on data. You can bring data to life using HTML, SVG, and CSS. D3 does not require you to tie yourself to any proprietary framework, because modern browsers have everything D3 needs; it is used for combining powerful visualization components and for a data-driven approach to DOM manipulation. D3.js is the best data visualization library on the market, and I prefer to use it almost every time. You can use it with Python as well as with R. Originally, it works with JavaScript, and that can be quite tough because JS has a wide range of functions and requires a lot of learning and experience; but if you are a JS pro, you don't need to give it a second thought. Python and R have made it a bit simpler, just a bit! But you get the best stuff out there with this library. D3py has three main dependencies: NumPy, Pandas, and NetworkX. I would suggest you use it with JavaScript or R rather than Python, because the Python version is out of date and was last updated in 2016; it was only ever a thin Python wrapper for D3.js. R has an interface for D3 visualizations. With r2d3, you can bind data from R to D3 visualizations. D3 visualizations created with r2d3 work just like R plots within RStudio, R Markdown documents, and Shiny applications. You can install the r2d3 package from CRAN as follows: install.packages("r2d3") You can make some amazing visualizations with this one; let me show you a couple of them here. Sequences Sunburst — Kerry Rodden's Block (Source) Activity Status of a Year — Kunal Dhariwal (Me, lol) From basics to high end, you can build anything with D3.js, so don't forget to try it out. If you encounter any error or need any help, you can always leave a comment or ping me on LinkedIn. LinkedIn: https://bit.ly/2u4YPoF Github: https://bit.ly/2SQV7ss P.S. Special thanks to the creators and contributors of those amazing libraries.
https://medium.com/hackernoon/taking-data-visualization-to-another-level-4d1c47bb01a2
['Kunal Dhariwal']
2019-05-10 13:31:07.864000+00:00
['Python', 'Data Science', 'Data Analysis', 'Data Visualization', 'Hackernoon Top Story']
​Physicists create prototype superefficient memory for future computers
Illustration. Energy efficient memory. Credit: @tsarcyanide/MIPT Press Office Researchers from the Moscow Institute of Physics and Technology and their colleagues from Germany and the Netherlands have achieved material magnetization switching on the shortest timescales, at a minimal energy cost. They have thus developed a prototype of energy-efficient data storage devices. The paper was published in the journal Nature. The rapid development of information technology calls for data storage devices controlled by quantum mechanisms without energy losses. Maintaining data centers consumes over 3% of the power generated worldwide, and this figure is growing. While writing and reading information is a bottleneck for IT development, the fundamental laws of nature actually do not prohibit the existence of fast and energy-efficient data storage. The most reliable way of storing data is to encode it as binary zeros and ones, which correspond to the orientations of the microscopic magnets, known as spins, in magnetic materials. This is how a computer hard drive stores information. To switch a bit between its two basic states, it is remagnetized via a magnetic field pulse. However, this operation requires much time and energy. Back in 2016, Sebastian Baierl from the University of Regensburg in Germany, Anatoly Zvezdin from MIPT in Russia, Alexey Kimel from Radboud University Nijmegen in the Netherlands and Russian Technological University MIREA, along with other colleagues, proposed a way for rapid spin switching in thulium orthoferrite via T-rays. Their technique for remagnetizing memory bits proved faster and more efficient than using magnetic field pulses. This effect stems from a special connection between spin states and the electrical component of a T-ray pulse. "The idea was to use the previously discovered spin switching mechanism as an instrument for efficiently driving spins out of equilibrium and studying the fundamental limitations on the speed and energy cost of writing information. Our research focused on the so-called fingerprints of the mechanism with the maximum possible speed and minimum energy dissipation," commented study co-author Professor Alexey Kimel of Radboud University Nijmegen and MIREA. In this study, the researchers exposed spin states to specially tuned T-ray pulses, whose characteristic photon energies are on the order of the energy barrier between the spin states. The pulses last picoseconds, which corresponds to one light oscillation cycle. The team used a specially developed structure comprising micrometer-sized gold antennas deposited on a thulium orthoferrite sample. As a result, the researchers spotted the characteristic spectral signatures indicating successful spin switching with only the minimal energy losses imposed by the fundamental laws of thermodynamics. For the first time, a spin switch was completed in a mere 3 picoseconds and with almost no energy dissipation. This shows the enormous potential of magnetism for addressing the crucial problems in information technology. According to the researchers, their experimental findings agree with theoretical model predictions. "The rare earth materials, which provided the basis for this discovery, are currently experiencing a sort of a renaissance," said Professor Anatoly Zvezdin, who heads the Magnetic Heterostructures and Spintronics Lab at MIPT. "Their fundamental properties were studied half a century ago, with major contributions by Russian physicists, MSU and MIPT alumni.
This is an excellent example of how fundamental research finds its way into practice decades after it was completed.” The joint work of several research teams has led to the creation of a structure that is a promising prototype of future data storage devices. Such devices would be compact and capable of transferring data within picoseconds. Fitting this storage with antennas will make it compatible with on-chip T-ray sources.
https://mipt.medium.com/physicists-create-prototype-superefficient-memory-for-future-computers-moscow-institute-of-489c6c4f181c
['Moscow Institute Of Physics']
2019-05-17 08:57:42.032000+00:00
['Science', 'Computers', 'Computer Memories', 'Efficiency']
3 Ways To Know If You’re On Track To Success
SELF When you look back, what propelled you there will be obvious… Photo by Razvan Chisu on Unsplash Many of us live life day to day without recognizing how each day is bringing us closer to that which we want to achieve. I have been using triggering and reflective questions at the end of each day for years now. I found the need for such a checklist even before it was cool. Before influencers began raving about bullet journals and before ‘The Secret’ came out encouraging positive affirmation trends. I’ve always chased the high of inspiration and believe there isn’t a greater feeling. Many people lose faith after not seeing results soon enough and feel the need for their efforts and patience to be proven by visible achievements. Putting my progress into perspective by categorizing it has been a helpful way for me to tell whether each day is leading me to my desired success. It ensures I feel positive about the small tasks and insignificant little things that I do daily. I believe that the constant use of these questions is what has led me to be the ever-confident and clear-visioned individual that I am today. The answers more often than not take me to a content and humble place of appreciation of my constant efforts. At times I apply them in reverse as I know the looming checklist that is ahead. It, therefore, encourages me to provide a reason for at least one of the categories to be ticked off that day. Of course, success doesn't come overnight. So, why do so many of us feel demotivated and fail to recognize just how much we help our future selves with our daily progress? It isn’t due to one big action or sacrifice but due to thousands of decisions and efforts made every day that then coincide with luck and timing. It’s an initial decision to change our mindset, ultimately leading to a change in attitude until it has seeped into our bones as a habit of nature. The little things eventually make up the big picture and this takes time. Hopefully, these three questions can calm your spirit and ensure that you see each day as a small success towards a greater goal. Life is a juggling act. Cavemen never had this many errands to run! There’s no question about it — humans are overworking themselves to death. So let’s not forget to consider just how well we are doing. We are thrust into a world where our human traits long to be happy each day. Meanwhile, we are expected to make money just to survive, and the way in which we do isn’t usually fulfilling. According to research, an astronomical 85% of people are unhappy with their jobs. Simply being human also means nurturing yourself as well as the relationships and friendships in your life. To fulfill all of these each day is nearly impossible. This can feel entirely overwhelming when we are chasing a dream or working towards success and being content really does come when we find that perfect balance.
https://medium.com/age-of-awareness/3-ways-to-know-if-youre-on-track-to-success-1469c2b68550
['Sandra Michelle']
2020-11-26 00:06:31.524000+00:00
['Productivity', 'Inspiration', 'Self', 'Self Improvement', 'Life Lessons']
The Story That’s Not Being Told: Mimi Lok & Last of Her Name
Last of her Name by Mimi Lok. Kaya Press, 2019. 200 pp, prose. Cheyenne Heckermann: Can you tell me about the journey toward publishing Last of Her Name? Mimi Lok: It was long! I’d written earlier drafts of the stories over roughly a ten-year span, and spent about three years working on the collection in earnest — rewriting, discarding, organizing. I wanted to send it directly to presses I’d long admired, but every writer I knew told me to find an agent first. After sending the manuscript to various agencies, I learned that I could not get an agent without also having a novel in the works, and I had no novel at the time. So I went back to my original plan and sent it directly to editors. I was thrilled to sell the book to Kaya Press, who I’ve loved for years. Just over a year later, the book was released. CH: What went into your decision to write “Wedding Night” with such distinct breaks and vignettes between sections? ML: “Wedding Night” is a messed up love story between two very different people. It was written in a fragmentary way, with perspective shifts between the protagonists Wai Lan and Sing, and this sort of disembodied, omnipotent perspective. Since the nature of memory is key to this story, telling the story in fragments with a greater emphasis on mood and sensory details made more sense than a linear, smoothly coherent narrative. CH: One of the pieces in Last of Her Name is a novella. Was there anything different in your process with “The Woman in the Closet?” ML: With the novella I had a slightly clearer sense of the story than with the others, possibly because it was partly inspired by a real-life incident. The story follows Granny Ng, an elderly homeless woman who breaks into a young man’s home, and I was interested in following her closely over a substantial period of time and seeing how things unfold for her. I also knew how it would end on a surface level, but what it had in common with the other stories is that I still had to relinquish control of the story to the characters’ desires, needs, and impulses, and let things go where they needed to go in between. I knew the what but not the how. CH: You make excellent use of perspective shifts in your short stories. What do you enjoy about having these shifts, and how do they influence your stories? ML: Challenging what’s accepted as the default perspective, I hope, shakes up our idea of whose experiences and perspectives we privilege over others, whose we don’t consider but should, all of that. I’m always curious about the story that’s not being told, and even if we only get a glimpse of that, it reminds us of complexities and nuances beyond our immediate perception. CH: What’s next for Mimi Lok? Is there anything that you’re working on that you can talk about? ML: I am writing more stories, and also working on a novel. Mimi Lok is the author of the story collection Last Of Her Name, published October 2019 by Kaya Press. Last of Her Name was recently shortlisted for the 2020 PEN/Robert W. Bingham prize for debut short story collection, and a 2020 Northern California Book Award. A story from the collection, “The Woman in the Closet,” was nominated for a 2020 National Magazine Award in Fiction with McSweeney’s Quarterly. Mimi is the recipient of a Smithsonian Ingenuity Award and an Ylvisaker Award for Fiction. Her work can be found in McSweeney’s, Electric Literature, LitHub, Nimrod, Lucky Peach, Hyphen, the South China Morning Post, and elsewhere. She is currently working on a novel. 
Mimi is also the founding director and executive editor of Voice of Witness, an award-winning human rights/oral history nonprofit that amplifies marginalized voices through a book series and a national education program.
https://medium.com/anomalyblog/an-interview-with-mimi-lok-on-last-of-her-name-41fef835d9a2
['Cheyenne Heckermann']
2020-02-18 15:50:51.943000+00:00
['Publishing', 'Fiction', 'Interview', 'Featured', 'Books']
Coin-o-graphy
For a child growing up in a middle-class family in Bangladesh, one of the first financial lessons he or she learns is how to save. We were taught how important saving is, and how one should save. Considering our age, our parents often thought it was not the proper time to introduce saving in banking organizations. Instead, we were taught to save inside 'banks' made of clay. Each is an enclosed clay pot with only a thin slit as an opening — the slit just big enough that we could shove coins into it. The only way to get the coins back is to break the pot. We used to drop coins in whenever we could. Then, when a pot was full, we used to break it. In my lifetime I have filled and broken around six of them. The latest demolition was performed today, 30th September 2017. The newest coins have a tendency to get rusty and we needed to clean them up. So we broke the bank. This is what the clay banks look like. Image is collected from Google Image Search. It felt like I had opened up a Pandora's box. Each coin was carrying a bit and piece of my past, going back to childhood. Each coin can carve a story around it. And that was the exact moment when I thought of writing this piece. It is a collection of stories around these coins, and the way we used to save them. Stories from my parents, and their parents. Stories from my own life.
https://medium.com/the-moonwreckers-diary/for-a-child-growing-up-in-a-middle-class-family-of-bangladesh-one-of-the-first-financial-lesson-he-195dbfc1e13e
['Akm Nivrito']
2017-09-30 09:58:38.566000+00:00
['Money', 'Stories', 'Saving', 'Memories', 'Storytelling']
Text Generation With Python And GPT-2
Results We can get some great results with very little code. Here are a few examples that should give you a better understanding of the impact of each argument in the .generate method. outputs = model.generate( inputs, max_length=200, do_sample=True ) tokenizer.decode(outputs[0], skip_special_tokens=True) [Out]: "He began his premiership by forming a five-man war cabinet which included Chamerlain as Lord President of the Council, Labour leader Clement Attlee as Lord Privy Seal (later as Deputy Prime Minister), Halifax as Foreign Secretary and Labour's Arthur Greenwood as a minister without portfolio. In practice, these cabinet officers were part of the National Security Council, then headed by Lord Chamberlain's secretary Arthur Hargreaves. A number of them became key cabinet secretaries, including Lord Hargreaves in 1948, Lord Butler as Justice Minister in 1949 and Lord Codds as justice minister until his death in 1975. After being replaced by Lord Hargreaves in 1955, there was speculation that the next general election would see Chamberlain and Howe try and avoid a hard line policy..." We can add more randomness with temperature — the default value is 1 , a high value like 5 will produce a pretty nonsensical output: outputs = model.generate( inputs, max_length=200, do_sample=True, temperature=5 ) tokenizer.decode(outputs[0], skip_special_tokens=True) [Out]: "He began his premiership by forming a five-man war cabinet which included Chamerlain as Lord President of the Council, Labour leader Clement Attlee as Lord Privy Seal (later as Deputy Prime Minister), Halifax as Foreign Secretary and Labour's Arthur Greenwood as a minister without portfolio. In practice, his foreign secretaries generally assumed other duties during cabinet so his job fell less and smaller - sometimes twice his overall stature so long a day seemed manageable after he became Chief Arctic Advisor: Mr Wilson led one reshover where we've also done another three, despite taking responsibility over appointments including Prime (for both) his time here since 1901)[31],[38-4]. (These last had fewer staff as many than he is responsible..." Turning the temperature down below 1 will produce more linear but less creative outputs. We can also add the top_k parameter — which limits the sample tokens to a given number of the most probable tokens. This results in text that tends to stick to the same topic (or set of words) for a longer period of time.
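This excerpt assumes a model, a tokenizer, and inputs were created earlier in the article; a minimal setup plus a top_k variant looks roughly like this (the model name and prompt are assumptions, not the article's exact code):
# Sketch: load GPT-2, then sample with a lower temperature and top_k restricted to the 50 most probable tokens.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer.encode("He began his premiership by", return_tensors="pt")

outputs = model.generate(
    inputs,
    max_length=200,
    do_sample=True,
    temperature=0.8,  # below 1: less random, more "linear" text
    top_k=50          # sample only from the 50 most probable tokens at each step
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))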
https://towardsdatascience.com/text-generation-with-python-and-gpt-2-1fecbff1635b
['James Briggs']
2020-12-28 14:47:13.167000+00:00
['Machine Learning', 'Data Science', 'Technology', 'Artificial Intelligence', 'Programming']
How to Create an Animated Bar Chart With React and d3
Photo by Markus Winkler on Unsplash Have you ever looked at data visualizations and been wowed by all the effects and animations? Have you ever wondered how to integrate visualizations with React? In this article, we will talk about how to make an animated bar chart using d3 in React. To understand how to create the bar chart, let's understand what d3 is and how it works. D3 is an open-source JavaScript library that is used to create custom interactive data visualizations. It is data-driven and generates visualizations from data that can come from arrays, objects, JSON, or a CSV or XML file. It allows direct selection of elements/nodes in the DOM and attaches styles and attributes to them to generate visualizations. Here is an example of a d3 bar chart: I know this is a bit long, so let me break it down. Above, we set the margins for the graph, and on lines 28/29 you will see there is an xscale and a yscale. The xscale determines our range on the x-axis, which in our case is the range of years (1993, 1994, etc.). The yscale, on the other hand, determines the scale based on the height of the values. Afterward, we select the current ref and initialize a bar this way: we select the "g" element of the current SVG, which is the bar chart itself. Here, we start joining the data we get from another file. Normally, this will be data from a CSV or JSON file. Afterward, we initialize the chart. Here is where it gets interesting. After I set the attr of width, a call to duration and delay controls how fast the bars show up. Let's look at how the rest of the chart is set up: Here, we set up the bar labels first. Afterward, we determine the location of the x-axis and y-axis labels, which we attach to the element "g". "g" is our master node for the whole barChart. We also select x-axis-title and y-axis-title and bind their data attributes to the respective fields of year and yAxisTitle. We also dictate the other attributes that come along with them, such as x and y position, transform, and font size. Pretty straightforward, right? Let's take a look at how it's being used inside App.js: Here, we have a bar chart where we set the width and the height as well as the y-axis title. We also give radio options for users to select between US and Japan data, which maps to a different set of values from the data JSON under './utils/constant'. It's hard to show the graph with the animation here, but here is a brief overview of how it would actually look: That's it! I know I talked a lot about the visualization, but I will also provide the steps to set this up from scratch.
Step 1: install node on your machine by running the following command: curl "https://nodejs.org/dist/latest/node-${VERSION:-$(wget -qO- https://nodejs.org/dist/latest/ | sed -nE 's|.*>node-(.*)\.pkg</a>.*|\1|p')}.pkg" > "$HOME/Downloads/node-latest.pkg" && sudo installer -store -pkg "$HOME/Downloads/node-latest.pkg" -target "/" Step 2: run the following command: npx create-react-app economic-growth-chart Step 3: go to App.js and replace it with the following content (already shown once in this article). Step 4: run the following command: npm install --save d3 @material-ui/core Step 5: create a utils folder under the src folder and create constant.js with the following content: Step 6: under the src folder, create a folder called components and create a class called BarChart.js (this is also mentioned in this article already): Now go into your terminal and run npm start! Your project is all set up.
https://medium.com/weekly-webtips/how-to-create-an-animated-barchart-with-react-and-d3-b4fd3662633f
['Michael Tong']
2020-09-23 06:10:31.808000+00:00
['D3js', 'Web Development', 'React', 'JavaScript', 'Data Visualization']
NaNoWriMo Week 1: Engaging My Inner Trickster
Morning raven. Day 2 of NaNoWriMo began when my alarm sang a merry tune at 5:30am. I hit snooze. Inside my imagination, my characters glared at me, rolled over and went back to bed. When I rose, two snooze cycles later and sat in front of my open laptop, I expected my characters to flow through a scene just following a major confrontation. I wanted my main character to spill a tiny bit of her secrets but not all of them. Nothing happened. Fear raged while my creativity still slept. This is stupid. Pointless. You should have stayed in bed. Your characters are boring and pointless and you’re not going to have any pages for your writing group to critique in three weeks. Why do you even bother, you should quit your writing group. They’re all better writers than you anyway. Then, for no reason except maybe my Creativity woke up, I remembered The Trickster¹. Instead of forcing my characters to do things on the page, I wrote all those fears, sucking them out of my brain and putting them right in the middle of the page where my characters were supposed to be talking. After three paragraphs of dumping random thoughts and fears, my characters jumped in. They interrupted my boring monologue and took over. They skipped that conversation I was trying to force them to have and talked about something else. When it was time to get ready for work I had 1500 words. And in case you were wondering, I do include my rambling thoughts as part of my word count. Because that’s what a Trickster would do. Because they are part of my novel writing process, especially in November. What do you do, when fear whispers its vile poison into your ear? How do you engage with your Trickster during NaNoWriMo? Respond, please, and let me know. My inner Trickster loves new tricks.
https://medium.com/nanowrimo/nanowrimo-week-1-engaging-my-inner-trickster-7b2f48d88081
['Julie Russell']
2018-11-02 16:50:15.578000+00:00
['NaNoWriMo', 'Writing']
Understanding Big O Space Complexity
Most Common Types of Big O O(n) If n = some integer, the number of operations within the algorithm (the way of solving a given problem) increases roughly in proportion with n. This type of algorithm is not ideal! An example of an algorithm with O(n) could be a function which takes an integer, 'n', as an argument and uses a 'for' loop to calculate the sum of all numbers up to and including 'n'. In this case, the number of operations increases in proportion to n because the larger n is, the more summations need to be done within the 'for' loop to solve the problem. O(1) In the case of an algorithm with O(1), the number of operations required to complete the problem stays constant no matter the value of 'n'. This is the most preferable type of Big O and results in the best performance. Example: a function which takes an integer, 'n', and simply performs some operations on n without any sort of 'for' loop, searching, etc. The reason this is O(1) is that no matter the size of n, the number of operations within the function stays exactly the same. O(n²) In this case, the number of operations within the function increases quadratically with 'n', i.e., in proportion to n². An example of this could be a function involving a nested loop where both loops run n times. This results in a quadratic increase in the number of operations required to complete the problem. O(n²) is the slowest and least preferable of the types listed here! O(log n) With an algorithm that is O(log n), the number of operations grows as 'n' grows, but the growth quickly levels off. This is a high-performing algorithm and is the next best thing to O(1)! An example of this could involve a search algorithm where the answer space keeps getting split in half, over and over again, until the answer is found. Below is a helpful graph of common types of Big O Notation in relation to time complexity:
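(The graph itself is an image in the original post.) As a small code illustration of those categories, my own sketch rather than the article's, here are functions whose operation counts grow as O(n), O(1), O(n²), and O(log n) respectively:
def sum_to_n(n):
    """O(n): one addition per value from 1 to n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_to_n_constant(n):
    """O(1): the same handful of operations regardless of n (closed-form formula)."""
    return n * (n + 1) // 2

def count_pairs(n):
    """O(n^2): the nested loop body runs n * n times."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

def halving_steps(n):
    """O(log n): the remaining range is halved on every step, as in binary search."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps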
https://medium.com/datadriveninvestor/understanding-big-o-space-complexity-6826478e5a9f
['Colton Kaiser']
2020-06-08 17:22:10.004000+00:00
['Software Engineering', 'Coding', 'Software Development', 'Big O Notation', 'Programming']
The Movies Are Getting Better
I’m writing this in response to Rebecca Stevens A.’s article about how there is no such thing as Black Privilege. Rebecca is right. From India, as a person who watches America through the movies, I’m going to talk about a few movies. I am not a movie buff, but I do have a subscription to an English Movie Pack and I watch part of an English movie every day during my lunch hour. 1. Pretty Woman I recently re-watched this. Back in 1990 when I was 14, I watched this movie with a friend and both our moms. Our moms would clap their hands over our eyes in many scenes, so naturally being able to watch all of it was good fun in itself. Screenshot from Apple TV trailer of 1990 movie Pretty Woman Pretty Woman has one black person in it: Darryl, the limousine driver. There isn’t a single black person in the upper-class party here. Screenshot from trailer of 1990 movie Pretty Woman So as a movie promoting human equality, I’d say it was a #fail. Don’t worry, Hollywood will fix everything! Here come movies no. 2 and 3. 2. The Mighty Ducks Screenshot from trailer of 1992 movie The Mighty Ducks. The coach is white, while the rag-tag team he teaches is almost all white. Like in Quidditch, however, this game is gender-neutral, with one girl player. Screenshot from trailer of 1992 movie The Mighty Ducks There is one black player on the team, but his dad is always on the white coach’s case, along with a white mom of another player. Screenshot from trailer of 1992 movie The Mighty Ducks In this movie, I’d give Disney credit for trying, but not so hard that it looked artificial. The focus is on the Coach’s drinking and driving, and his desire to win even if he’s cheating, unlike the kids, who are honest. 3. Hardball Screenshot from 2001 movie Hard Ball Hard Ball too deals with a white coach down on his luck who’s forced to coach a kids’ baseball team. I’m amazed they liked the format enough to make the movie twice. Screenshot from 2001 movie Hard Ball The thing is: in the older movie, The Mighty Ducks, 1992, most of the ice hockey team is white; there is one black kid on the team, that is all. In the newer one, 2001, the coach is white, while the kids are all black. This movie isn’t as great as The Mighty Ducks, because the Ducks’ coach works harder than the Kekambas’ and the game play is better thought out. I liked the way the Ducks’ ice hockey coach has them use eggs for pucks without breaking them. Keanu Reeves’s character doesn’t do much in the “this way and that’s how” of baseball. 4. A Time To Kill Screenshot from trailer of 1996 movie A Time To Kill Next up are two courtroom dramas. One is a dramatization of the novel A Time to Kill, by John Grisham. Here, the lawyer is white, while the defendant is black. The defendant, played by Samuel L. Jackson, deliberately picks a white lawyer to offset his all-white jury. I wouldn’t call this a racist movie, but it is a bit like a low-calorie, fat-free health drink: it reminds you of all the things you’re trying to avoid. 5. Marshall Screenshot from trailer of 2017 movie Marshall. The other courtroom drama is Marshall. This movie also has a black defendant, but here the lawyer is black, too. He’s from the N-double-A-C-P. I deliberately didn’t write NAACP, because they never said it that way in the movie; it was always N-double-A-C-P. This movie, Marshall, is so great it makes your skin prickle. I wish such movies were released in the theaters in India, but all we get in the movie halls are superhero movies.
https://medium.com/illumination-curated/the-movies-are-getting-better-b969b92c7909
['Tooth Truth Roopa Vikesh']
2020-12-15 19:06:43.221000+00:00
['Nonfiction', 'Parenting', 'Movies', 'Diversity', 'Perspective']
NumPy: Stacking, Splitting, Array attributes
Stacking
Arrays can be stacked horizontally, depth-wise, or vertically. We can use, for that purpose, the vstack, dstack, hstack, column_stack, row_stack, and concatenate functions.
Time for action — stacking arrays
First, let’s set up some arrays:
In: a = arange(9).reshape(3,3)
In: a
Out:
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
In: b = 2 * a
In: b
Out:
array([[ 0,  2,  4],
       [ 6,  8, 10],
       [12, 14, 16]])
1. Horizontal stacking: Starting with horizontal stacking, we will form a tuple of ndarrays and give it to the hstack function, which stacks arrays in sequence horizontally (column-wise). This is shown as follows:
In: hstack((a, b))
Out:
array([[ 0,  1,  2,  0,  2,  4],
       [ 3,  4,  5,  6,  8, 10],
       [ 6,  7,  8, 12, 14, 16]])
We can achieve the same with the concatenate function, concatenating along the second axis (for 1-D arrays, stacking happens along the first axis), which is shown as follows:
In: concatenate((a, b), axis=1)
Out:
array([[ 0,  1,  2,  0,  2,  4],
       [ 3,  4,  5,  6,  8, 10],
       [ 6,  7,  8, 12, 14, 16]])
2. Vertical stacking: The vstack function is used to stack the sequence of input arrays vertically to make a single array. With vertical stacking, again, a tuple is formed. This time, it is given to the vstack function. This can be seen as follows:
In: vstack((a, b))
Out:
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 0,  2,  4],
       [ 6,  8, 10],
       [12, 14, 16]])
The concatenate function produces the same result with the axis set to 0, which is the default value for the axis argument:
In: concatenate((a, b), axis=0)
Out:
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 0,  2,  4],
       [ 6,  8, 10],
       [12, 14, 16]])
3. Depth stacking: dstack stacks arrays in sequence depth-wise (along the third axis). This is equivalent to concatenation along the third axis after 2-D arrays of shape (M,N) have been reshaped to (M,N,1) and 1-D arrays of shape (N,) have been reshaped to (1,N,1). In other words, depth-wise stacking with dstack (and a tuple, of course) means stacking a list of arrays along the third axis (depth). For instance, we could stack 2-D arrays of image data on top of each other.
In: dstack((a, b))
Out:
array([[[ 0,  0],
        [ 1,  2],
        [ 2,  4]],
       [[ 3,  6],
        [ 4,  8],
        [ 5, 10]],
       [[ 6, 12],
        [ 7, 14],
        [ 8, 16]]])
4. Column stacking: The column_stack function stacks 1-D arrays column-wise. It’s shown as follows:
In: oned = arange(2)
In: oned
Out: array([0, 1])
In: twiceoned = 2 * oned
In: twiceoned
Out: array([0, 2])
In: column_stack((oned, twiceoned))
Out:
array([[0, 0],
       [1, 2]])
2-D arrays are stacked the way hstack stacks them:
In: column_stack((a, b))
Out:
array([[ 0,  1,  2,  0,  2,  4],
       [ 3,  4,  5,  6,  8, 10],
       [ 6,  7,  8, 12, 14, 16]])
In: column_stack((a, b)) == hstack((a, b))
Out:
array([[ True,  True,  True,  True,  True,  True],
       [ True,  True,  True,  True,  True,  True],
       [ True,  True,  True,  True,  True,  True]], dtype=bool)
Yes, you guessed it right! We compared two arrays with the == operator. Isn’t it beautiful?
5. Row stacking: NumPy, of course, also has a function that does row-wise stacking. It is called row_stack and, for 1-D arrays, it just stacks the arrays in rows into a 2-D array:
In: row_stack((oned, twiceoned))
Out:
array([[0, 1],
       [0, 2]])
The row_stack function results for 2-D arrays are equal to the vstack function results:
In: row_stack((a, b))
Out:
array([[ 0,  1,  2],
       [ 3,  4,  5],
       [ 6,  7,  8],
       [ 0,  2,  4],
       [ 6,  8, 10],
       [12, 14, 16]])
In: row_stack((a, b)) == vstack((a, b))
Out:
array([[ True,  True,  True],
       [ True,  True,  True],
       [ True,  True,  True],
       [ True,  True,  True],
       [ True,  True,  True],
       [ True,  True,  True]], dtype=bool)
What just happened?
We stacked arrays horizontally, depth-wise, and vertically, using the vstack, dstack, hstack, column_stack, row_stack, and concatenate functions.
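As a quick self-check, here is a minimal, self-contained version of the same stacking calls. It is only a sketch: it assumes a plain "import numpy as np" rather than the star-import or IPython pylab session implied by the listings above, and the arrays a and b match the ones defined earlier.
import numpy as np

a = np.arange(9).reshape(3, 3)   # the 3x3 array 0..8 used above
b = 2 * a                        # its element-wise double

# hstack and concatenate along axis=1 produce the same 3x6 array.
assert np.array_equal(np.hstack((a, b)), np.concatenate((a, b), axis=1))

# vstack and concatenate along axis=0 (the default) produce the same 6x3 array.
assert np.array_equal(np.vstack((a, b)), np.concatenate((a, b), axis=0))

# dstack pairs corresponding elements along a new third axis.
print(np.dstack((a, b)).shape)   # (3, 3, 2)

# For 2-D inputs, column_stack behaves like hstack and row_stack (an alias for vstack) like vstack.
assert np.array_equal(np.column_stack((a, b)), np.hstack((a, b)))
assert np.array_equal(np.row_stack((a, b)), np.vstack((a, b)))
print("all stacking checks passed")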
Splitting
Arrays can be split vertically, horizontally, or depth-wise. The functions involved are hsplit, vsplit, dsplit, and split. We can either split into arrays of the same shape or indicate the position after which the split should occur.
Time for action — splitting arrays
1. Horizontal splitting: The ensuing code splits an array along its horizontal axis into three pieces of the same size and shape. This is shown as follows:
In: a
Out:
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
In: hsplit(a, 3)
Out:
[array([[0],
       [3],
       [6]]),
 array([[1],
       [4],
       [7]]),
 array([[2],
       [5],
       [8]])]
Compare it with a call of the split function, with the extra parameter axis=1:
In: split(a, 3, axis=1)
Out:
[array([[0],
       [3],
       [6]]),
 array([[1],
       [4],
       [7]]),
 array([[2],
       [5],
       [8]])]
2. Vertical splitting: vsplit splits along the vertical axis:
In: vsplit(a, 3)
Out: [array([[0, 1, 2]]), array([[3, 4, 5]]), array([[6, 7, 8]])]
The split function, with axis=0, also splits along the vertical axis:
In: split(a, 3, axis=0)
Out: [array([[0, 1, 2]]), array([[3, 4, 5]]), array([[6, 7, 8]])]
3. Depth-wise splitting: The dsplit function, unsurprisingly, splits depth-wise. We will need an array of rank 3 first:
In: c = arange(27).reshape(3, 3, 3)
In: c
Out:
array([[[ 0,  1,  2],
        [ 3,  4,  5],
        [ 6,  7,  8]],
       [[ 9, 10, 11],
        [12, 13, 14],
        [15, 16, 17]],
       [[18, 19, 20],
        [21, 22, 23],
        [24, 25, 26]]])
In: dsplit(c, 3)
Out:
[array([[[ 0],
        [ 3],
        [ 6]],
       [[ 9],
        [12],
        [15]],
       [[18],
        [21],
        [24]]]),
 array([[[ 1],
        [ 4],
        [ 7]],
       [[10],
        [13],
        [16]],
       [[19],
        [22],
        [25]]]),
 array([[[ 2],
        [ 5],
        [ 8]],
       [[11],
        [14],
        [17]],
       [[20],
        [23],
        [26]]])]
What just happened?
We split arrays using the hsplit, vsplit, dsplit, and split functions.
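Again as a self-contained sketch with an explicit import (the np. prefix is my addition, not part of the book-style listings above), the splitting functions can be checked like this:
import numpy as np

a = np.arange(9).reshape(3, 3)

# hsplit cuts along columns; split with axis=1 is the general form of the same call.
left, middle, right = np.hsplit(a, 3)
assert np.array_equal(middle, a[:, 1:2])   # the middle column, kept 2-D

# vsplit cuts along rows; split with axis=0 is the equivalent general form.
top, centre, bottom = np.vsplit(a, 3)
assert np.array_equal(top, a[0:1, :])      # the first row, kept 2-D

# dsplit needs a rank-3 array and cuts along the third (depth) axis.
c = np.arange(27).reshape(3, 3, 3)
front, mid, back = np.dsplit(c, 3)
print(front.shape)                         # (3, 3, 1)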
Array attributes
Besides the shape and dtype attributes, ndarray has a number of other attributes, as shown in the following list:
1. ndim gives the number of dimensions:
In: b
Out:
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
       [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]])
In: b.ndim
Out: 2
2. size contains the number of elements. This is shown as follows:
In: b.size
Out: 24
3. itemsize gives the number of bytes for each element in the array:
In: b.itemsize
Out: 8
4. If you want the total number of bytes the array requires, you can have a look at nbytes. This is just the product of the itemsize and size attributes. The related T attribute gives us the transposed array:
In: b.resize(6,4)
In: b
Out:
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15],
       [16, 17, 18, 19],
       [20, 21, 22, 23]])
In: b.T
Out:
array([[ 0,  4,  8, 12, 16, 20],
       [ 1,  5,  9, 13, 17, 21],
       [ 2,  6, 10, 14, 18, 22],
       [ 3,  7, 11, 15, 19, 23]])
5. If the array has a rank lower than 2, transposing it will just give us a view of the array:
In: b = arange(5)
In: b.ndim
Out: 1
In: b.T
Out: array([0, 1, 2, 3, 4])
Complex numbers in NumPy are written with a j suffix. For example, we can create an array with complex numbers:
In: b = array([1.j + 1, 2.j + 3])
In: b
Out: array([ 1.+1.j,  3.+2.j])
6. The real attribute gives us the real part of the array, or the array itself if it only contains real numbers:
In: b.real
Out: array([ 1.,  3.])
7. The imag attribute contains the imaginary part of the array:
In: b.imag
Out: array([ 1.,  2.])
8. If the array contains complex numbers, then the data type is automatically also complex:
In: b.dtype
Out: dtype('complex128')
In: b.dtype.str
Out: '<c16'
9. The flat attribute returns a numpy.flatiter object. This is the only way to acquire a flatiter — we do not have access to a flatiter constructor. The flat iterator enables us to loop through an array as if it is a flat array, as shown next:
In: b = arange(4).reshape(2,2)
In: b
Out:
array([[0, 1],
       [2, 3]])
In: f = b.flat
In: f
Out: <numpy.flatiter object at 0x103013e00>
In: for item in f: print(item)
0
1
2
3
It is possible to directly get an element with the flatiter object:
In: b.flat[2]
Out: 2
or multiple elements:
In: b.flat[[1,3]]
Out: array([1, 3])
The flat attribute is settable. Setting the value of the flat attribute leads to overwriting the values of the whole array:
In: b.flat = 7
In: b
Out:
array([[7, 7],
       [7, 7]])
or of selected elements only:
In: b.flat[[1,3]] = 1
In: b
Out:
array([[7, 1],
       [7, 1]])
Time for action — converting arrays
1. Convert to a list: We can convert a NumPy array to a Python list with the tolist function. This is shown as follows:
In: b
Out: array([ 1.+1.j,  3.+2.j])
In: b.tolist()
Out: [(1+1j), (3+2j)]
2. The astype function: The astype function converts the array to an array of the specified type:
In: b
Out: array([ 1.+1.j,  3.+2.j])
In: b.astype(int)
ComplexWarning: Casting complex values to real discards the imaginary part
Out: array([1, 3])
We are losing the imaginary part when casting from the complex type to int. The astype function also accepts the name of a type as a string:
In: b.astype('complex')
Out: array([ 1.+1.j,  3.+2.j])
It won’t show any warning this time, because we used the proper data type.
What just happened?
We converted NumPy arrays to a list and to arrays of different data types.
Summary
We learned that the shape of an array can be manipulated in many ways, including stacking, resizing, reshaping, and splitting.
https://medium.com/python-in-plain-english/numpy-stacking-splitting-array-attributes-b3ad04b47646
['Bhanu Soni']
2020-12-23 08:48:14.495000+00:00
['Numpy', 'Python', 'Machine Learning', 'Data Science', 'Programming']
The B2B Marketplace Stack
When people think about marketplaces, they usually assume it’s all about matching the demand and supply side. The reality is that it involves so much more than that, particularly when it comes to B2B. In the following post, which we’ve written (in collaboration with our friends at Hokodo) off the back of working with several B2B marketplaces and interviewing many more, we’ll try to unpack the building blocks of these businesses. Hopefully, it will be useful to entrepreneurs who are in the process of building a B2B marketplace.
The B2B marketplace stack usually consists of the following 4 functions:
1. Curating the suppliers
2. Facilitating the transaction
3. Supporting the fulfilment of the orders
4. Providing value-added services
Before I dive into it, it’s worth noting that not every B2B marketplace offers every component included in this diagram. A service marketplace, for instance, will most likely not need to offer logistics or leveraged purchasing. Many of these functions are also key to B2C marketplaces and not exclusive to B2B.
1. Curating the suppliers
1.1 Credentialing
Ever ordered something online only to realise it’s some cheap knock-off and you’ve been massively ripped off? Well, that’s where credentialing comes into play, by ensuring the trustworthiness of suppliers on the platform.
Why it matters: Whilst it might sound basic, credentialing is paramount in B2B transactions, where buyers might in some cases be taking a significant business risk in trying out a new supplier and need to be certain that all parties on the platform can deliver to a certain standard.
Who does it well: Metalshub*, a marketplace for trading metals, only allows suppliers onto its platform that meet certain compliance requirements and continuously checks that they have the relevant and up-to-date quality certificates. This not only builds trust but also saves purchasing departments from having to run their usual Total Quality Management (TQM) procedures, which in turn encourages them to keep using the platform.
1.2 Cataloguing and Searchability
This is all about making it as easy as possible for a buyer to find exactly what he or she is looking for in as few clicks as possible.
Why it matters: Unlike B2C customers, who might enjoy scrolling endlessly to find their dream purchase, most B2B customers are strapped for time, and speed of transaction is vital.
Who does it well: Rekki*, a marketplace which connects restaurants to their suppliers, has developed a translation engine that allows chefs to search for inventory using different abbreviations and kitchen slang, significantly speeding up the ordering process. ManoMano, a marketplace for construction materials, allows its busy construction workers to order using voice, enabling them to purchase on the go.
1.3 Leveraged Purchasing
Once marketplaces reach a certain scale they can use their market power to secure better prices for their buyers, because who doesn’t love a good discount? This is especially true for marketplaces that are able to pool multiple small orders from buyers into a single large order.
Why it matters: When going up against the status quo, you ideally want to build something which is both 10x better and 10x cheaper than what’s out there already, particularly when going after business buyers, who tend to be price sensitive. Guaranteeing customers competitive prices solves one part of the 10x equation.
Who does it well: Shippo, a logistics marketplace, pools demand for shipping services amongst small businesses and, in turn, is able to get up to 60% discounts with carriers such as UPS and FedEx, amongst others. Similarly, Famitoo, a marketplace for agricultural supplies, enables small farmers, who were traditionally ripped off by large suppliers, to purchase at similar rates to much larger farmers.
2. Organising the transaction
2.1 Matchmaking and price discovery
Connecting demand and supply and helping them transact at the right price is at the core of any marketplace. As I’ve written about in a previous blog, the matching between the demand and supply sides can be done in three different ways depending on marketplace dynamics: double commit (both buyers and sellers opt in), buyer-picks (sellers input their availability and buyers select a supplier) and marketplace-picks (a buyer is automatically matched with a seller).
Why it matters: Matchmaking and price discovery can be particularly hard to crack in B2B marketplaces, where in some cases you might have complex RFP- or bidding-based transactions and in others you might have established buyer-supplier relationships leading to a reluctance to try out new suppliers.
Who does it well: This is table stakes for marketplaces and there are many ways of doing it well depending on which matching style you opt for. Laserhub, a marketplace for custom metal sheets, takes a marketplace-picks approach. They abstract away the identity of the supplier and standardise pricing so that the buyer always feels like he or she is transacting with a single party (Laserhub), removing the pressure of having to choose a supplier and of figuring out what the best price should be. Others, like Rekki, accept the fact that established relationships are a key part of the restaurant/supplier industry and focus on facilitating the connection between these parties before pushing them to match with new ones.
2.2 Payment
Sounds simple, but it’s far easier said than done.
Why it matters: Enabling payments on your marketplace is one of the key ways to reduce the risk of leakage. That being said, when it comes to B2B transactions it is a real challenge. Large transaction sizes often mean that credit card payments online are not an option. On top of that, buyers expect to be offered payment on credit terms (e.g. net 30 days). As a result, many B2B marketplaces need to give their customers the option to pay via invoice and manage the related collection process, which tends to be complex. Due to the longer payment times, they often need to find ways of helping manage the credit risk and liquidity strain for suppliers that are not paid out immediately.
Who does it well: Marketplaces such as Rigup, Faire and Ankorstore make it part of their core proposition to grant 30 to 90 days of credit to (eligible) buyers whilst also allowing their suppliers to get paid right after the order. In doing so, they take on the credit risk of a buyer not paying in time in exchange for greater supplier loyalty. They also provide buyers with the flexibility to pay via invoice. Hokodo, who we collaborated with on this blog, is one of the key providers of these solutions — if payments are a pain, check them out.
2.3 Transaction Admin
This refers to any tasks which need to be done once an order is placed, from confirming the availability of the goods in stock and sending an invoice to the buyer, to organising the last compliance checks (if applicable) and orchestrating the various ancillary services (logistics, cargo and credit insurance, financing etc.).
Why it matters: Let’s face it, nobody likes admin… Done well, this can become a unique selling point for the platform and can even drive suppliers to bring their whole portfolio of buyers onto the platform.
Who does it well: Many good marketplaces integrate with their suppliers’ ERP systems, which reduces the need to constantly update stocks. Privateaser, a marketplace for event providers, consolidates the invoices from multiple different suppliers into a single invoice — as if the buyers only had one supplier — significantly reducing the admin work which comes with managing multiple invoices.
3. Supporting the fulfilment
3.1 Shipping & logistics
This includes warehousing, packaging, customs handling, inspection services, delivery and returns processing. Many B2B goods marketplaces add this as a feature on top of their platform.
Why it matters: Taking on these additional functions allows marketplaces to entrench themselves in the supply chain of their users, reducing the risk of disintermediation and justifying higher take-rates. Several marketplaces are also well positioned to negotiate better shipping rates than a small buyer would be able to.
Who does it well: Amazon’s fulfilment platform is a prime example of this in B2C, with many sellers outsourcing their entire post-sales operation to Amazon, even for goods which are not necessarily sold through Amazon. Another example is Ankorstore, which offers free shipping for orders over €300, even if the transaction is actually made up of several orders sourced from various suppliers. This saves costs for buyers and acts as an incentive for them to move their existing suppliers onto the Ankorstore marketplace.
3.2 After-sales
This refers to all of the support offered by a marketplace following the provision of a good or service.
Why it matters: Offering robust after-sales is critical to increasing customers’ satisfaction and stickiness. On top of this, it generates a virtuous cycle of positive customer reviews (and reduces the risk of negative reviews), which in turn builds trust and reputation for the platform and attracts future customers.
Who does it well: ManoMano, a construction marketplace, offers a “Garantie Béton” (Concrete Guarantee) which goes above and beyond the industry standard by compensating customers (on their own books) for failed or late deliveries, damaged items or returns that have not yet been refunded. Faire, the B2B wholesale marketplace, is another great example. On top of net 60-day payment terms and bulk shipping, Faire offers free returns on unsold inventory, encouraging retailers to order more and test new products without having to take on inventory risk; you can read more about this here.
3.3 Dispute resolution
Once marketplaces reach a certain scale, disputes inevitably arise. In some cases, this might be because a buyer goes insolvent and can’t pay, goods were damaged or there was an operational mistake. In other cases, it could be due to fraudulent actors, e.g. buyers who pretend the goods haven’t arrived or never had the intention of paying for the goods.
Why it matters: The economics of a marketplace can be upset by a very small percentage of dysfunctional or fraudulent participants. For marketplaces with low margins, a single loss caused by a chargeback could require several additional transactions to recoup. Unresolved disputes also have a big negative impact on NPS and, in many cases, result in churn.
Who does it well: Hectare, a livestock and agricultural marketplace, introduced an escrow payment facility whereby buyers pay funds into an escrow account prior to delivery, reducing the likelihood of a dispute. They also introduced credit insurance, using Hokodo, to soften the blow in cases of non-payment.
4. Providing value-added services
4.1 Data & Analytics
As they scale, B2B marketplaces accumulate huge amounts of data, which can be repackaged to enable better transparency across marketplace participants or sold to drive additional value.
Why it matters: By opening up access to data on prices, best-selling SKUs and industry dynamics, marketplaces can help their participants make better business decisions and provide them with an additional incentive to keep using the platform. In certain cases, data can even act as an additional revenue stream.
Who does it well: AdQuick, a marketplace that allows buyers to book out-of-home (OOH) advertising, views data as a key part of their value proposition. By gathering individual data points across their marketplace and integrating with various data sources, e.g. mobile phones, AdQuick can provide advertisers with accurate attribution analytics, enabling them to measure the effectiveness of an outdoor campaign similarly to how the ROI of online campaigns is measured. This is something which was previously not possible. Metalshub*, a trading platform for metals and ferroalloys, has leveraged the data they have accumulated on their marketplace to launch the first price indices for certain types of metals. This will not only be a huge differentiator given the opaque market they are operating in, but moving forward will also be a key revenue driver for them. JOOR is a B2B marketplace in the fashion sector that connects more than 8,000 brands (sellers) with retailers. One of their main value propositions, aside from connecting the demand and supply side, is its data exchange, which provides brands with a real-time view of the latest transactions, allowing them to spot emerging market trends and identify the best-selling styles so as to adjust their offering accordingly.
4.2 Industry-specific tools
Most B2B marketplaces these days offer some form of embedded software that goes beyond the pure matching of demand and supply. They are SaaS-enabled.
Why it matters: As I’ve written about in my Primer on B2B marketplaces, due to a combination of complex workflows, large AOVs and established buyer-supplier relationships in B2B transactions, it tends to be much harder for B2B marketplaces to capture the transaction on their platform compared to B2C marketplaces. As a result, they often need to build workflow tools to either streamline the complexity or get users more comfortable transacting large volumes online.
Who does it well: Faire, the wholesale marketplace, offers a whole suite of tools, from invoice management and advance payments to a chat solution, which helps suppliers streamline their ordering processes. Privateaser, a marketplace that brings together event organisers with a community of vetted suppliers, built a booking system for its suppliers, similar to what OpenTable offers restaurants.
Lantum, a marketplace connecting healthcare organisations (clinics, GP practices) with temporary healthcare staff, built a platform which allows healthcare organisations not only to find and book external staff but also to manage their internal staff. In parallel, Lantum provides software to freelance doctors to manage their admin and taxes and to find new work opportunities.
That’s all folks! Hopefully the above gives you a good view of some of the key components which make up B2B marketplaces. As mentioned previously, not all of these components will be relevant for all B2B marketplaces. The importance of each of the building blocks depends very much on the industry you are operating in and the market dynamics. Certain elements, such as supporting the fulfilment of an order, might be more relevant for goods marketplaces, while others, like vetting and credentialing, might be even more crucial for services marketplaces (e.g. healthcare staff) where suppliers might be relatively unknown. If you have any feedback on the above stack, we would love to hear from you.
This post was written in collaboration with Hokodo, one of the leading providers of credit management solutions for B2B marketplaces; make sure to check them out once you get the chance :)
*P9 portfolio companies
Don’t miss out on any future P9 content by signing up to our ICYMI: newsletter!
https://medium.com/point-nine-news/the-b2b-marketplace-stack-fa5b650f09b0
['Julia Morrongiello']
2020-12-08 13:29:54.074000+00:00
['Startup', 'B2B', 'Marketplace', 'VC']
This Book Made Me Feel Hopeful in a Way the Quran Never Did
This book is relevant to my experiences as a woman.
When I first held Feminist Theory: From Margin to Center by bell hooks in my hands, I did what I always do. I cracked open the copy and smelled it. I love the scent of crisp pages. Then, I sat down and began reading. Frankly, I couldn’t wait to start consuming the content. I have a habit of skipping the acknowledgments and the preface of most books I read, but with this one, I wanted to learn everything the author had to say. The first few pages were so poignant and meaningful that I stopped to reflect. It was already an emotional experience, and I hadn’t even made much headway. I could already see myself re-reading passages, filling the margins with annotations, and highlighting ideas that stood out to me. I would take ample notes in the process of gaining more knowledge. This was a book I could interact with. It held promise and hope.
I saw my religious education as a tedious process that I would one day escape.
I had an active Muslim upbringing. For most of my life, religious teachers and family members tried to instill the same excitement in me for the Quran. As a child, I went to an Islamic school, or madrasa, in addition to regular school. I was also home-schooled in the Quran. I was sent to Sunday school to study with a scholar. I even attended a Muslim summer camp. I was deeply entrenched in the religion.
None of it worked. Instead of developing a passion for Islamic principles, I saw my religious education as a tedious process that I would one day escape. The readings were not relevant to me. Frankly, as I get older, I am angered by how much of my valuable time was wasted by people trying to indoctrinate me. One of the first indicators to me that Islam (and organized religion generally, for that matter) was incompatible with my life was the way in which it brazenly supported the oppression of women. This oppression is implemented not just by the extremists toting weapons on news channels but also by people who would be called moderate Muslims. The Quran does not speak to women. It speaks to men about what to do with women.
Unlike the Quran, Feminist Theory would give me tools to think about the world and my place in it. This book would not invalidate my experiences of oppression. Instead, it would help me better understand them. I knew that after I finished reading it, I would be better equipped to challenge the thoughts that work against me and other women. I felt my heart fill to the brim. I have been starving for the autonomy of thought, but this too requires learning.
Of course, I am afraid to write this article. Speaking out against organized religion, and specifically doing that as a woman who disagrees with the messages of Islam, takes a lot of courage. I have been hesitant to write about my fraught relationship with Islam for four main reasons. I was concerned about being ostracized by my family for my beliefs; I have begun to find my peace with that. I am concerned about my actual physical safety, because being a vocal religious dissenter is dangerous. I do not want my work to be co-opted and misused by right-wing extremists to justify their hatred and bigotry. And I know that self-declared liberal White people become deeply uncomfortable when someone tells them that their “progressive” beliefs are uninformed. It is fascinating to me that my anxiety arises in part because of the individuals who perform a self-gratifying form of acceptance without concerning themselves with the details of what it is they are accepting.
https://medium.com/an-amygdala/this-book-made-me-feel-hopeful-in-a-way-the-quran-never-did-9af3ef1e668d
['Rebeca Ansar']
2020-06-29 00:37:41.308000+00:00
['Personal Growth', 'Women', 'Feminism', 'Self', 'Books']
5 Odd Jobs People Were Once Paid to Do
5 Odd Jobs People Were Once Paid to Do #4 Dog Whipping Unemployed men in line at the soup kitchen (1931), from the US National Archives and Records Administration, Public Domain via Wikimedia Commons Technology and artificial intelligence have been rendering more and more jobs obsolete. A volatile mix of automation and lockdowns have led unemployment rates to double digits in many parts of the world. Last April alone, there were 23.1 million jobless Americans. All of these factor into the many anxieties a regular laborer has to deal with day-to-day. Apart from a stagnant wage, they also wake up to uncertainty. Still, the extinction of jobs isn’t something new. Sometimes, the innovations and circumstances that lead to obsolescence are welcomed by both clients and workers alike. That’s because some of these jobs were downright awful. Here are five jobs people once did that now have been made obsolete through the passage of time. 1. Human Garden Ornaments Human Garden Ornament (1795), By Johann Baptist Theobald Schmitt, Public Domain via Wikimedia Commons Quarantine has made a lot of people discover the joys of landscaping and backyard gardening. Among the many garden decor items that have been sold due to this trend is the classic garden gnome. The precursor to the garden gnome was a garden hermit, and being paid to be an ornament in a rich person’s garden was a real job in the 18th century. The worker was hired to look like a hermit — long nails, untidy hair, and a disheveled beard were all required by his employer. Some of them were even prevented from cleaning themselves to give a more “authentic hermit” appearance. When visitors arrived in the garden, these hermits read them poetry or lines from popular books as a form of entertainment. These hermits-for-hire were not allowed to leave the garden until the end of their contract period. This often lasted for months and sometimes years, with failure resulting in forfeiture of pay. With all these strict rules, it was common for garden hermits to quit their job midway, forcing nobles to replace them with a variation of the garden gnome we have today. 2. Human Alarm Clocks Knocker Upper at work (1947), Public Domain via Wikimedia Commons One thing we take for granted is the system of “date and time.” We don’t think about calendars, clocks, and schedules as innovative because most of us were born into the system — we can organize our days through our phones accordingly. But a unified system of time and date is a fairly recent invention in history, and so is the annoying alarm clock that comes with it. The job of an alarm clock is to wake people up, and in 19th century Britain, that was done by a person called the “Knocker Upper.” The human alarm clocks would use a long pole made of bamboo to reach the window of their clients. With a small wire tied on its end, they would then tap on the window until their customers arose for work. People hired knocker uppers on a subscription basis because they couldn’t afford expensive alarm clocks, which we have on our phones today. The practice was so ingrained in factory-work customs that the role persisted even up to the 1970s in some parts of Britain. 3. Poop Farmers Toilet in Rosenborg Castle Copenhagen, by Zymurgy, CC BY-SA 4.0 via Wikimedia Commons Another thing we rarely think about, and for understandable reasons, is how exactly our poop gets disposed of. There is a complex system that makes sure most of the modern world is free from foul odors and the diseases that come with them. 
But what is now largely automatic used to be a manual job. Known commonly as gong farmers (“gong” comes from an Old English word meaning “to go”), these men worked in groups to save the town from a build-up of human waste. The man tasked with scooping poop into a bucket down a pit was known as the “hole man.” Waste would then be passed on to the “rope men,” tasked with pulling the heavy pile out of the pit. Lastly, the “tub man” brought the waste and disposed of it outside of the town. When more intricate systems of plumbing were developed, the need for gong farmers slowly waned. I’m on the fence about whether or not they were happy about that.
4. Dog Whippers
Dog Whipper Statue in the Netherlands, Public Domain via Wikimedia Commons
“Dog whipper” was an official title given by a church to men who controlled the behavior of bothersome animals during holy ceremonies. As the name implies, they literally whipped misbehaving dogs during a service. At that time, churches had no effective way of regulating the entry of animals, so they did things manually. These early iterations of animal control officers were equipped with three-foot-long whips and a pair of tongs to catch and get rid of noisy, and often fornicating, cats and dogs. They had a second role too. Dog whippers also sometimes forcefully poked dozing and sleeping massgoers awake!
5. Court Dwarfs
Portrait of the court dwarf Sebastián de Morra (1645), by Diego Velázquez, Public Domain via Wikimedia Commons
Court dwarfs have been recorded in the histories of Egypt, Rome, and China. They were often given as gifts to different ruling families and sometimes traded as property. In early modern Europe, court dwarfs gained a little more esteem, earning a wage and sometimes doubling as diplomats. Most court dwarfs were given a ceremonial job in royal courts. They were placed beside the king or queen during public gatherings in order to make the royals look more powerful in comparison to their short stature. Other dwarfs also played the “natural fool,” often complementing a jester. As the influence of royal families decreased, so did the employment of court dwarfs. Historians point to the reign of Charles XII of Sweden as the point at which the practice completely stopped.
https://medium.com/history-of-yesterday/5-odd-jobs-people-were-once-paid-to-do-aa47194194fe
['Ben Kageyama']
2020-12-25 09:01:09.792000+00:00
['History', 'Work', 'Nonfiction', 'Labor', 'Jobs']
For many universities, the landscape is changing.
For many universities, the landscape is changing. Gone are the days when the throughput of students was the major measure of success. Now universities are evaluated on the outcomes for their students, and the extent to which those students have the right skills to join the workforce. Perhaps the most challenging shift is that of expectations among students themselves. As digital natives, they live with information and opportunities at their fingertips. When they want to research something, book something or share an insight, they jump online looking for high quality and frictionless experience from their university. Meeting these expectations is more complex than simply digitising course content. Instead, it must involve shaping all the factors, both online and offline, that influence whether students achieve their goals. Many factors are within the university’s control, such as campus culture, spatial design and course content, while other factors are harder to influence, such as students feeling lonely or trying to succeed at university with a physical or mental health condition. Human-centred design can be a game-changer for universities One factor determines the success of these efforts to shape the student experience: design, specifically human-centred design (HCD). HCD involves gaining deep insight into the needs of all users of a system and using those insights as a base for ideas or solutions. These solutions are tested, and users provide feedback for further refinement before solutions are implemented. Ongoing user feedback and data drive continuous improvement; the design process never ends. What we mean by design, inspired by Margaret Hagan, who specialises in HCD in legal services. A traditional approach to rethinking student support services might have focussed on reorganising the student support team, recruiting to new roles or changing the performance metrics of the team. These interventions may not be helpful if they are not implemented with students’ experience of the service front and centre. By starting with the student experience, a university can consider all the opportunities that might exist to support students. The university might even anticipate some risk factors and offer the right support mechanisms before crisis hits. Putting HCD into practice Nous Group has worked with a range of universities and other organisations to drive outcomes using HCD. From our experience, when thinking about system (re)design, the hardest part can be getting started. But breaking the process down to stepping stones can help create a clear path forward. Universities are rich with data, which can be a blessing and a curse. As the results of research prompt further research questions, it is easy to fall into a research infinity loop and never actually take any action. This is understandable, given the human instinct to want to know more, but is not helpful. Instead, customer experience designers need to pursue progress rather than perfection. This involves doing enough research to be confident in the insights to take a first step. There is no right way to start, so be brave and start somewhere. Customer experience designers should apply the same thoughtfulness to designing a project as to designing solutions. Every phase of the task — from research, to workshop, to a governance group conversation — is a chance to engage people in developing a shared language and understanding of the problem and can unite people in taking collective action later. 
Genuine involvement from people across the university — including students — helps to break down barriers, develop understanding and bust assumptions, all of which makes embedding change simpler. Involving students in design requires a different dynamic to that that traditionally exists in university-student relationships. It needs to be more equitable, so designers and students tap into the skills and experiences of the other to come up with answers that will work for everyone. This may mean accommodating differences in suitable hours, spaces, communications technology and timeframes. To build engagement with people from across the university, find a small group willing to start using a new service, system or way of working. Then monitor their progress to create a buzz and build from there. For many universities, this will be daunting, so having the right governance and mandate is important. Be creative about how engagement is done and always be sure to close the feedback loop. Let those you have engaged with know how their contributions have influenced the work. To make a change sustainable it needs to go beyond changing the behaviour of individual users and instead needs to alter the organisation, culture or system. Start with short-term fixes to generate evidence of progress, celebrating successes and learning as you go, but always keep the longer-term goal in mind. Finding the right blend of technical and design expertise can be difficult. Some universities have built strong linkages between services based on goodwill and communication, but now face the challenge of sustaining widespread commitment amid competing priorities. Other universities have developed effective IT systems to connect functions, but lack commitment from users due to a lack of understanding of students’ experience. Part of the solution is to be deliberate about who you have in the project team so that you have the right expertise and all members are active contributors. Many customer experience designers find great success in creating a dedicated space where colleagues and customers can be immersed in the experience of using design-led approaches. This needs to be visual and show rather than tell. Teams need to explore problems from multiple perspectives Like a stone dropped in placid waters, the ripples caused by bad design can extend to the furthest reaches of a university’s performance. It is the role of customer experience designers to locate the pain points in a process, and then do something about them. This requires using robust research techniques to diagnose the issues well and strong evaluation techniques to measure the impact of progress to inform continuous improvement and to demonstrate the value created by the work. Good design requires a vast array of skills, from strategy to qualitative and quantitative research, advanced data analytics, knowledge of existing and emerging technology, and quality evaluation techniques. So more than ever we need to collaborate and create teams that can explore problems from multiple perspectives. User expectations are not staying still, so design solutions cannot either.
https://medium.com/swlh/how-human-centred-design-can-help-universities-better-serve-students-e1d561a9f89c
['Kirsty Elderton']
2019-06-08 11:33:06.307000+00:00
['Higher Education', 'Human Centred Design', 'University', 'Design', 'Student Experience']
The Future of Podcasting is Subscription — Lessons from the History of Media
“History doesn’t repeat itself, but it often rhymes” On April 24, 2019, Luminary Media officially launched its subscription podcasting platform to the public, and was quickly dismissed and even attacked by some in the media and on Twitter. Luminary, which raised nearly $100 million from investors prior to launch, offers a free podcast player, along with a catalog of 40+ exclusive ad-free original podcasts locked behind a $7.99 monthly subscription. To some, introducing a new business model was an affront to the relatively new medium, as demonstrated in the Fast Company article titled “Why podcast fans will always reject a “Netflix for podcasts,” in which the author deftly states “First, it’s annoying.” What these critics fail to understand is that this story has been told before — and in almost every case, the quality of content has increased, the consumer experience has improved, and creators have been more appropriately compensated for their talent. Advertising, which to date has been the primary revenue channel for podcasts (bringing in a paltry $314 million in 2017), also initially supported nearly every new media format in their formative years. This was true for newspapers, radio, television, and early digital video platforms. Historically, it was unclear (and unlikely) that consumers would pay directly for new types of content enabled by new technologies and means of distribution, leaving sponsorships & advertising as the only potential for monetizing mediums. In order to drive the value of an advertisement up, media companies needed wide distribution to drive circulation (which is even why you may still get The Yellow Pages delivered to your home every year…) Wide distribution = more consumers = more advertising revenue. In this model, in order to invest more capital into quality content, content producers must either (A) reach a wider audience, or (B) insert more ads into the content. However as more consumers adopt the new distribution channels (radios, television sets, smart phones) and incorporate the content into their daily lives, things evolve. Often, new consumer propositions emerge, promising a better experience or higher quality content for a premium price. Radio had been primarily free and ad-supported since its inception in the early 20th century, until Satellite radio (a whole new and expensive distribution system) launched in 2001. Subscription satellite radio still had a slow start without any standout content, until it secured the exclusive distribution rights for The Howard Stern Show in 2006, putting the previously free program behind its subscription paywall, and landing over 180,000 subscribers overnight. Today, Sirius XM has about 33 million subscribers. Television programming was also born as sponsored content broadcast for free over the airwaves (ABC, CBS, and NBC), before pay-cable channels like Home Box Office (HBO) began to emerge in the 1970’s. HBO launched by transmitting popular films straight into subscribers homes, before evolving into producing HBO Original Films and eventually, HBO Original Series such as The Sopranos, The Wire, and Game of Thrones. This iconic content, which draws 140 million+ global subscribers to HBO, would not be possible on a purely advertising-supported channel. While the podcast industry is still in its infancy, it has seen tremendous growth in both consumption and production volume over the past decade. In 2008, less than 10% of the U.S. 
population listened to podcasts on a monthly basis, while nearly 1/3rd of the country does today — about 90 million monthly listeners. Further, there are an estimated 700,000 podcasts and 29 million podcast episodes as of April 2019. Surely, some portion of those 90 million monthly U.S. listeners would be willing to pay for a premium ad-free podcast experience in which creators have the resources to innovate and create high-quality content. The objection that some are raising in regards to Luminary’s paywall is that just because you can charge for content doesn’t mean that you should. The ignorance in this protest is that the pressure to create content that people are willing to pay for often drives up the quality of said content and the resources that producers can devote to it. The expense and investment in subscription based HBO’s Game of Thrones dwarfs the budget of any show on (primarily) ad-supported CBS. HBO viewers are willing to pay a monthly fee for access to this premium content, which in turn allows the network and talent to invest more heavily in quality content. Further, talent is the backbone of any creative industry, and deserves to be well compensated for the value that they create. While some podcast creators are able to reach a sizeable enough audience to court advertisers, others must turn to platforms like Patreon to ask listeners to donate on a recurring basis in order to fund their favorite shows. Even A-List talent and seasoned podcast professionals with hundreds of thousands of fans must rely on inserting advertisements for Casper into their content, which still leaves little room for investment into high production value, longterm projects, or dabbling with innovative formats. Subscription business models allow the upfront investment directly in talent and content — such as Netflix’s astonishing overall deals for television creators Ryan Murphy ($300M) and Shonda Rhimes ($100M), who left Twentieth Century Fox TV and ABC Studios respectively, studios which were primarily focused on advertising-supported broadcast television, and couldn’t afford to compete with Netflix’s offers. We also know from other media platforms that consumers are generally pretty amenable to pay higher fees for ad-free experiences. Spotify and Hulu, for instance, both have ~50% of users paying for ad-free tiers of the same content. Spotify has made its podcasting ambitions clear, spending $400 million to acquire Gimlet Media (a podcast studio with a significant catalog), Parcast (another podcast studio) and Anchor (tools for podcast creators). Spotify’s interest in podcasts is less about creating better content, a better listener experience, or rewarding creators — but instead about making the company’s basic economics work. Despite having 100 million paying subscribers and 217 million total monthly active users, the company’s deals with music publishers means it is still unprofitable due to the share of subscription and advertising revenue that Spotify must send to publishers based on users’ listening habits. If Spotify can get users to listen to more podcasts and less music, it can shift some of that revenue to its own pocket. For its part, Luminary has the advantage of having a maniacal focus on a very specific type of content — just as Netflix has had and maintained for the better part of a decade. 
Despite its success with on-demand video, Netflix has resisted the urge to move into sports, news, live TV, gaming / eSports, or ad-supported content — which has provided the clarity and focus to build a dominant media company and beloved consumer brand in record time. Luminary has the opportunity to execute a similar playbook for the podcasting community. To be clear — there will always be free, ad-supported podcasts in the world, just as there is terrestrial radio and broadcast television. Although some are upset at the potential disruption (and disaggregation) that Luminary will likely ignite in the podcast community, both creators and listeners are likely to be beneficiaries: creators will have the opportunity to experiment and invest in high-quality content they want to create and be more fairly compensated for their talent, while consumers will benefit from a better podcasting experience, with ad-free, high-quality content. This is a story that has been told before — and the podcast community should be excited.
https://medium.com/the-raabithole/the-future-of-podcasting-is-subscription-lessons-from-the-history-of-media-d486bd693141
['Mike Raab']
2019-06-08 04:57:45.250000+00:00
['Business', 'Podcast', 'Media', 'Future', 'Culture']
How the Data Stole Christmas
by Anonymous
The door sprung open and our three little ones joyfully leaped onto the bed, waking my wife and me from peaceful dreams of slumbering sugarplum fairies. Oh gosh, it was 6:03 am, but who could blame them? The kids wait for Christmas day all year. I sat up as giggles thrashed with reckless excitement on the bed. My feet found slippers and I lumbered down the hall in the direction of the coffee maker. I could hear the tearing of wrapping paper and whoops of joy coming from the living-room tree. This scene was playing out in countless ways in households all across America. Christmas is a time for family, friends and a relaxing respite from the busy work calendar. Unfortunately, it was all about to come to an abrupt end as my cell phone interrupted our family time with an impatient buzz. “Who could be calling me today?” I should have powered it off and tossed the damn thing into a snowbank, but as the manager of a data team in a global e-commerce company, I was used to taking calls at odd times. I pressed talk and reluctantly held the phone to my ear. It was my co-worker. All hell was breaking loose. The back-end for the e-commerce site had crashed. It was all hands on deck to get it back online. Our VP wanted hourly status reports — the next one in 47 minutes. I jumped into some clothes, grabbed a coat and mumbled to my disbelieving wife that I would be back in a couple of hours. To make a long story short, I actually didn’t make it home until 2 am the next morning. Yes, we got the site back up, but I missed Christmas that year.
Lessons Learned
The most tragic part of this story is that it didn’t have to be this way. We had talented people on our team, but our approach to data operations in those days was based on a flawed methodology:
· We had one instance of the production environment. When the data team needed to make a change, for example, updating a schema, we did so directly on the live operational system.
· Making changes was so fraught with risk that we instituted heavy-weight procedures to control it. This slowed innovation to a crawl and, even with all the triple checking, outages still occurred.
· We tested new changes to the best of our abilities, but since our development systems used a simplified tools environment, we would encounter unexpected errors when moving code into the more complex production environment.
· Our test data was perfect, whereas production data is notoriously messy. Production data always threw unexpected anomalies at our code.
Managing a continuing succession of outages while trying to keep development projects on schedule and under budget is like trying to play whack-a-mole while simultaneously reading a book. Yet our approach in those days was mainly based on hope and heroism. We release code and “hope” it doesn’t break anything. When there’s an outage, we call in the technical experts (the “heroes”) to work around the clock to fix the problem. Looking back, this was no way to run a major operation. It’s not really a surprise that the head of our department got replaced about every two years.
DataOps — Lessons Applied
It is tempting but simple-minded to blame outages on people. A robust business process eliminates errors and improves efficiency despite the fact that error-prone humans are involved.
When software companies (Netflix, Facebook, …) started executing millions of high-quality code releases per year, it offered an opportunity for data organizations to renew their approach to development and operations. The methodologies used by these software engineering organizations — Agile development, DevOps and lean manufacturing — apply equally well to analytics creation and operations. The data industry refers to this initiative as DataOps. The fastest way to institute DataOps methods, even assuming the preexistence of a legacy tools and technical environment, is by aligning and automating workflows using a DataOps Platform. A DataOps Platform offers these capabilities: · Minimizes cycle time — A DataOps Platform aligns production and development environments using virtualization and orchestrates the DataOps testing, qualification and release of new analytics code with a button push. Continuous deployment of new code is how software companies produce such a high volume of releases per year. · Eliminates errors — The functional, unit and regression testing of code enables new analytics to be deployed with confidence that it will work as promised in operations. In addition, the data that flow through operations is tested and subject to controls at every step of the operations pipeline. Data errors are trapped and remediated before they corrupt charts and graphs. · Fosters collaboration — DataOps integrates version control and workflow tools like Jira. The DataOps Platform enables team members to share analytics components, encouraging reuse and improving productivity. Geographically dispersed teams can use their own choice of toolchains while fitting into higher-level orchestrations. The code and data quality supported by DataOps minimizes the unplanned work that can disrupt a data engineer’s weekend or holiday. With a DataOps Platform, enterprises can move away from relying on hope and heroism. Continuous deployment fully tests and deploys new analytics eliminating time-consuming and error-prone manual steps. Tests and statistical controls ensure that data is error-free before it flows into models and analytics. Have a DataOps Holiday Season With DataOps in place, tests monitor the data flowing through operational systems 24x7x365. While data scientists are home sipping eggnog, DataOps works overtime to keep operational systems up and running. I may have missed a Christmas celebration with my family that one time, but with DataOps, never again. Happy holidays. For more information about how a DataOps Platform can compress your new analytics cycle time and eliminate data errors, please give us a shout at datakitchen.io.
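To make the idea of automated data tests and statistical controls less abstract, here is a minimal, generic sketch of the kind of check that could run at each step of an operations pipeline. It is an illustration only, not DataKitchen's API; the table name, columns, and thresholds are hypothetical.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list:
    """Return a list of failed checks; an empty list means the data may flow onward."""
    failures = []
    # Row-count control: an empty extract usually signals an upstream outage.
    if df.empty:
        failures.append("orders extract is empty")
        return failures
    # Completeness control: key business columns must not contain nulls.
    for col in ("order_id", "customer_id", "order_total"):
        if df[col].isnull().any():
            failures.append(f"null values found in {col}")
    # Sanity control: order totals should never be negative.
    if (df["order_total"] < 0).any():
        failures.append("negative order totals detected")
    return failures

# In a DataOps pipeline, a non-empty failure list would halt the run and alert the
# team before bad data ever reaches a chart, model, or executive dashboard.
sample = pd.DataFrame({"order_id": [1, 2], "customer_id": [10, 11], "order_total": [99.0, 42.5]})
assert validate_orders(sample) == []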
https://medium.com/data-ops/how-the-data-stole-christmas-78454531d0a8
[]
2019-12-24 13:22:33.143000+00:00
['Data Science', 'Big Data', 'Dataops', 'Analytics', 'DevOps']
Revisiting Imperial College’s COVID-19 Spread Models
How to run open-source Tensorflow models on Kubernetes, and a review of how effective the COVID-19 spread model was in measuring the effect of interventions.
Photo by Brian McGowan on Unsplash
Earlier this month, the United Kingdom became the first European country to approve and administer the first doses of Pfizer/BioNTech’s COVID-19 vaccine. The United States quickly followed suit, with the FDA and CDC recently recommending Moderna’s vaccine as well as Pfizer’s, to give the world a glimmer of hope. Other international players, notably China and Russia, are also pushing to approve and produce their own vaccines. Even as COVID-19 continues to rage on, this news of vaccines signals a hopeful end in sight. To that end, I wanted to revisit a study from the Imperial College COVID-19 Response Team, “Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries”, published in March. The study used a semi-mechanistic Bayesian hierarchical model to estimate the impact of non-pharmaceutical interventions such as isolation, the closing of public spaces (e.g. schools, churches, sports arenas), as well as widescale social distancing measures. The Tensorflow implementation used in the paper is open-sourced under the MIT License and available at Tensorflow.org and Github.
Code Setup
While Google provides a free, hosted Jupyter notebook service through Google Colab, I wanted to run the analysis on Kubernetes to practice running data science and machine learning projects on Kubernetes as well as to compare the developer experience for both. To replicate the managed notebook experience of Google Colab, I looked for a similar Kubernetes experience without needing to stand up a cluster myself. At the same time, I wanted some control over my Kubernetes environment and not a fully managed data science platform like the Google AI Platform. I eventually found puzl.ee, a Kubernetes service provider with GPU support that charges per pod usage. Puzl.ee creates a unique namespace for my workloads and charges for resource usage similar to serverless Kubernetes offerings such as Google’s Cloud Run or AWS Fargate. The number of packaged applications is currently limited (Gitlab CI Runner, SSH Server, and Jupyter Notebook), but support for H2O.ai, PostgreSQL, Determined AI, Redis, Jupyter Hub, Drone CI, MongoDB, and Airflow is on the roadmap. Fortunately, puzl.ee had already published a quick start guide for setting up a Jupyter Notebook with GPU, so I provisioned my Jupyter Notebook after signing up for a free account. I was given various options for predefined Docker images (as well as an option to use my own). I could also adjust my resource requests, including an NVIDIA RTX 2080 Ti GPU, before installing my Jupyter Notebook. Within a minute or so, my Jupyter Notebook with Tensorflow was installed. Afterward, I realized that the code required a Tensorflow 2+ image. Even with the reinstallation, the provisioning process was smooth and painless. Unfortunately, I ran into several issues trying to run Tensorflow out of the box. I was surprised to see Pandas not included in the standard distribution of the provided Docker image and encountered several errors such as undefined symbol: _ZN10tensorflow8OpKernel11TraceStringEPNS_15OpKernelContextEb. After a few unsuccessful solutions from StackOverflow, I decided to compile my own Docker image with the necessary Tensorflow components installed, which allowed me to move on. However, this highlighted a huge challenge in AIOps, where version control and software compatibility are still hard, making these semi-hosted platforms less effective, as it requires some DevOps work to untangle and fix dependencies.
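In hindsight, a short environment check like the one below (a generic snippet of my own, not part of the Imperial College code) run as the first notebook cell would have surfaced the TensorFlow 2+ requirement, the missing pandas package, and the GPU visibility all at once:
import tensorflow as tf
import pandas as pd

# The open-sourced model needs a TensorFlow 2.x build; 1.x images will fail.
print("TensorFlow:", tf.__version__)
# pandas is needed for data handling but is not bundled in every TF image.
print("pandas:", pd.__version__)

# Confirm that the notebook pod actually sees the GPU that was requested.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible:", gpus if gpus else "none (falling back to CPU)")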
However, this highlighted a huge challenge in AIOps, where version control and software compatibility are still hard, making these semi-hosted platforms less effective since they still require some DevOps work to untangle and fix dependencies. Model Setup & Data Preprocessing After correctly installing all the necessary Python modules, I was able to follow the code to set up the model and load the data. The entire setup is posted on Github, but I’ll summarize the important sections below: Mechanistic model for infections and deaths The infection model takes in the timing and the type of interventions, population size, and initial cases per country, along with the effectiveness of interventions and the rate of disease transmission as parameters, to simulate the number of infections in each European country over time. The model produces two key probabilities: conv_serial_interval (convolution of previous daily infections and the distribution over the time of becoming infected and infecting others) and conv_fatality_rate (convolution of previous daily infections and the distribution over the time between infection and death). Parameters & Probabilities Parameter values, which are assumed to be independent, include an exponential distribution of initial cases per country, a negative binomial distribution for the number of deaths, the infection rate for each infected person, the effectiveness of each intervention type, and the noise in the infection fatality rate. Given these parameters, a likelihood of observed deaths is calculated along with the probability of death given infection. Finally, infection transmission is assumed to be Gamma distributed, which is used to turn conv_serial_interval into predict_infections. Key Assumptions The study aims to model the effectiveness of intervention measures by looking at the infection and fatality rates. Here the model assumes that the decline in the number of COVID cases is a direct response to interventions rather than to gradual changes in behavior. Also, the study assumes that interventions will have the same effect across the European countries selected, not accounting for each country's size, population density, and perhaps the average age of its citizens, which we now know have a huge impact on fatality rates. Replicating the Results The dataset includes the interventions enforced and infection/fatality rates from 11 European countries (Austria, Belgium, Denmark, France, Germany, Italy, Norway, Spain, Sweden, Switzerland, and the UK). After applying the Tensorflow model as described in the Github post, I was able to replicate the effectiveness of the interventions graph as well as infections/deaths by country: Effectiveness of interventions (same as Figure 4 in the original paper): shows that no effects are significantly different, since most measures went into effect around the same time Infections, deaths, and R_t by country (same as Figure 2 in the original paper) Comparing the Model with Actual Data The paper estimated that various intervention measures were successful in curbing the rate of infections in Europe, with the caveat that given the long incubation period of COVID-19 and the time between transmission and mortality, the data collected at the time may have been too premature to conclude effectiveness in certain countries where the pandemic was in its nascent phases. Looking back, we now know that the initial measures were somewhat successful in slowing down the rate of infection in Europe.
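To make the renewal-equation idea behind conv_serial_interval and predict_infections concrete, here is a toy NumPy sketch. It is a heavily simplified, deterministic illustration of the convolution step only; the function name, the made-up serial-interval weights, and the step change in R_t are my own, and this is not the paper's TensorFlow implementation:

```python
import numpy as np

def predict_infections(initial_cases, r_t, serial_interval, n_days):
    """Toy discrete renewal model (my own sketch, not the paper's code):
    new infections on day t = R_t * sum_s infections[s] * serial_interval[t - s]."""
    infections = np.zeros(n_days)
    infections[0] = initial_cases
    for t in range(1, n_days):
        lags = np.arange(t, 0, -1)                    # gap t - s for s = 0..t-1
        infections[t] = r_t[t] * (infections[:t] @ serial_interval[lags])
    return infections

# Illustrative inputs: a discretised serial-interval distribution and an R_t
# series that drops from 3.0 to 0.8 when an intervention kicks in on day 30.
n_days = 100
serial_interval = np.zeros(n_days + 1)
serial_interval[1:15] = np.exp(-0.35 * np.arange(1, 15))
serial_interval /= serial_interval.sum()              # normalise to a probability mass
r_t = np.where(np.arange(n_days) < 30, 3.0, 0.8)
curve = predict_infections(initial_cases=10, r_t=r_t, serial_interval=serial_interval, n_days=n_days)
print(int(curve.max()), int(curve[-1]))               # peak vs. end-of-period daily infections
```

Even this toy version shows the mechanism the study relies on: once R_t drops below 1 at the intervention date, the convolution forces daily infections to decay.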
It took drastic measures such as country-wide lockdowns and enforcing large-scale social distancing guidelines, but the data shows a downward trend in infection rates until a recent uptick in cases. Perhaps a more telling graph is comparing the results of Europe as a whole vs. the United States, where intervention measures were rolled out in a less coordinated manner: Now that we have more data, it would be interesting to see a follow-up study that includes other parameters to reflect cultural or political factors that made intervention measures more or less successful across the globe. You can create your own visualization using the link below: Colab vs. Puzl.ee Coming back to the data science side of things, this exercise of running the Tensorflow model reminded me of the challenges we still face before making Kubernetes the go-to platform for data science and machine learning tasks. As a free tool, Google Colab makes it easy to clone open-source notebooks and run experiments without any infrastructure setup. With Google Colab Pro, which is priced at $9.99/month, most workloads can be run in a managed manner without too many restrictions. However, the initial provisioning process on puzl.ee was surprisingly smooth. As the team works on putting together more predefined Docker images, I expect some of the installation and configuration challenges I faced to diminish. I also liked the option of running my own Kubernetes pod to potentially extend the experiment by adding other microservices to either fetch/post data or integrate with other databases within the same Kubernetes cluster. Once puzl.ee adds native support for popular Helm charts via the dashboard, similar to how it provides pre-made Docker images, I plan to take another look for some of my side projects.
https://medium.com/dev-genius/revisiting-imperial-colleges-covid-19-spread-models-daa7ac1a7862
['Yitaek Hwang']
2020-12-24 16:27:29.356000+00:00
['Data Science', 'Kubernetes', 'Machine Learning', 'Programming']
Unlock honest feedback from your employees with this one word
Unlock honest feedback from your employees with this one word Consider using this one word in your next one on one meeting… A few years ago, a CEO told me how she was struggling to get honest feedback from her board. No one seemed willing to be critical or give her pointers on things she could improve. After every board meeting, she would turn to them and ask directly: “What feedback does anyone have for me?” She’d hear crickets. Every single time. No one would speak up. Even though they were board members — people who are supposed to hold her accountable as the CEO of the company — they shied away from offering their honest input. This was so perplexing to the CEO. She felt like she was being very clear with what she wanted… Why weren’t they just giving her the feedback she was asking for? One day, she decided to try something different. Instead of asking, “What feedback does anyone have for me?”… she asked this: “What advice does anyone have for me?” All of sudden, everyone started weighing in. “Well I might try this…” and “The way you brought up this point could’ve been better…” and “You could try structuring the meeting like this…” The word “advice” unlocked all the honest feedback that CEO needed. Why? The word “feedback’” carries a lot of baggage. To some, they automatically associate it with a “critique” or something negative. It can seem scary and formal. But “advice” is a much more welcoming word. Advice is about lending someone a hand. When someone gives you advice, they’re just looking out for you. And when you ask for advice, it’s an invitation. You’re signaling that another person has expertise or knowledge that you find interesting and valuable. That person is often flattered you even asked for advice in the first place. Who doesn’t love to give advice? :-) The next time you’d like to get honest feedback, try asking for advice instead. Notice how much more people open up to you. See how swapping that one word makes a difference.
https://medium.com/signal-v-noise/unlock-honest-feedback-with-this-one-word-dcaf3839e7ee
['Claire Lew']
2018-12-04 15:15:54.492000+00:00
['Leadership', 'Startup', 'Employee Engagement', 'Employee Feedback', 'Management']
How we initiate point of sale transactions globally
The challenge Most POS setups include a cash register, controlled by store staff, a payment terminal, where the shopper enters their card, and a serial connection between the two. A library is embedded on the cash register facilitating communication between the cash register and the payment terminal. These libraries are typically created and maintained by the company that facilitates the terminals (such as Adyen). Using libraries creates a number of challenges: A tight integration between the cash register and the library means a significant amount of setup and development work is required, because the library will be part of the cash register software. The cash register software — which is third party — is often updated as infrequently as once a year, meaning retailers are not able to immediately benefit from the latest library updates. Cash registers differ significantly between vendors and platforms, creating a large maintenance burden on the development of the library for Adyen. Data centers Furthermore, many larger retailers prefer centrally-hosted solutions for their cash register software. This means the software needs to be configured to initiate a transaction on a payment terminal in the store, by routing requests from the data center into the store network. To do this, merchants need to use port forwarding to manage payments across multiple locations, a fixed IP for each terminal, or possibly a VPN setup for security. All these possibilities involve a complex network setup that drains operational resources. Solving the library challenge with the Nexo protocol Ideally, we needed a solution that would be independent of any specific platform, able to be used for serial connections, local network, and internet transports, and support a message format with advanced features such as asynchronous notifications. To meet these criteria, we removed our need for libraries and created the Terminal API, adopting the Nexo protocol — a card payment standard that facilitates communication between the cash register and terminal. Nexo’s basic interaction model is request/response JSON messaging. This means that making a payment with the Terminal API is a simple request-response, and all informational events, such as notifying where the terminal is in the payment process, are communicated via JSON webhooks that are optionally implemented. Using this approach is advantageous because: Supporting new programming languages is simpler, as the library required all potential events to be implemented as a callback and passed as part of the initial payment request. Maintaining a JSON messaging format, rather than custom libraries, callbacks, and SDKs, makes it far easier for merchants to roll out and update the software. Internally, this had the added benefit of us not needing to support multiple programming languages for the API. Solving network setup complexity by routing through the cloud Using the Terminal API over the store network was a great first step. However, it did not solve the challenge of initiating payments from a centralized place such as a data center. To simplify the setup investment and remove the cost of all this complexity, we also adapted our Terminal API for the cloud. The in-store architecture relied on the merchant’s cash register and backend to communicate to the terminal, as below: In the cloud version of the API, we added the ability for the merchant to initiate a terminal payment directly with Adyen’s backend. 
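As an illustration of the request/response style described above, a cash register integration reduces to building a JSON document and POSTing it over HTTPS. The field names below follow the general shape of a nexo SaleToPOIRequest, but they are illustrative only, as are the endpoint and API key; the exact schema lives in the Terminal API documentation:

```python
import requests

# Illustrative payment request; field names approximate the nexo message shape.
payment_request = {
    "SaleToPOIRequest": {
        "MessageHeader": {
            "MessageClass": "Service",
            "MessageCategory": "Payment",
            "SaleID": "CashRegister-01",      # identifies the cash register
            "POIID": "Terminal-123456789",    # identifies the payment terminal
            "ServiceID": "0001",
        },
        "PaymentRequest": {
            "SaleData": {"SaleTransactionID": {"TransactionID": "order-42"}},
            "PaymentTransaction": {
                "AmountsReq": {"Currency": "EUR", "RequestedAmount": 10.99}
            },
        },
    }
}

# Hypothetical endpoint and key; in the cloud variant the cash register talks to
# the payment provider's backend instead of the terminal's local address.
response = requests.post(
    "https://terminal-api.example.com/sync",
    json=payment_request,
    headers={"x-api-key": "YOUR_API_KEY"},
    timeout=130,  # in-person payments can take a while (shopper interaction)
)
print(response.json())  # SaleToPOIResponse with the payment result
```

Because the contract is just JSON over HTTPS, the same request can be pointed at a terminal on the local network or at the cloud endpoint without changing the cash register code.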
Incorporating WebSockets One advantage of serial connections is that they provide bidirectional communication, so both cash register and payment terminals can initiate communication and exchange data related to the status of the transaction. With our Terminal API over the network, transactions are https request-response. The cash register initiates a payment request by sending an https request to the terminal. However, on the internet, having a communication channel where both parties can initiate communication is cumbersome, as the NATed terminals cannot be reached without opening the firewall and setting up port forwarding. We needed a solution to easily enable bidirectional communication. We found this solution in WebSockets. This technology is used by a number of platforms for push notifications, such as in newsfeeds, and we leveraged it for communication between a terminal and the Adyen backend. To enable bidirectional communication, we create a single https request from the payment terminal, and added headers to request an upgrade the connection to a WebSocket, as displayed below. After that, a bidirectional communication channel is established between our backend and the payment terminal. A standard flow is as follows: As the cash register initiates a transaction, it sends an https request to the Adyen backend. The Adyen backend looks up which WebSocket the terminal is using and routes the request to the terminal over it (more on this below under load balancing). The terminal delivers its response to the Adyen backend over the WebSocket and the backend subsequently delivers it as a https response to the cash register. Load balancing and redundancy Redundancy is a key consideration in our system architecture. During application updates, or when carrying out maintenance, transactions cannot be affected. Our payment acceptance layer is made up of multiple servers over multiple data centers around the world. This helps reduce latency and ensure redundancy. (Note: you can read more about our approach to redundancy and database setup here: Updating a 50 terabyte PostgreSQL database). This infrastructure ensures redundancy and the possibility to balance loads if we need to carry out maintenance. However, it does raise a new challenge — when a terminal opens a connection with Server A, and a cash register with server B, what happens? We configured our setup so that if a terminal connects to Server A, a message is triggered from that server to other servers that says “I now have a connection with this terminal.” If a cash register then connects to Server B, Server B can look up which server owns the WebSocket connection, and route the message via that server. If we need to carry out application updates or maintenance on one server, it sends a message to the terminal to reconnect with another server. Once all connections are closed we can begin. Conclusion Our Terminal API simplifies rollout and ensures merchants are able to stay abreast of the latest software updates. However, there are more innovative ways in which it may be used. For example, since in-store payments can be initiated remotely, merchants would be able to create an experience where a shopper initiates an order in-app, walks into a store, scans a QR code with their phone to initiate the payment on the terminal, and picks up their item. These kinds of possibilities make it very exciting for us to see how merchants use this technology. 
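The terminal side of the WebSocket flow described above can be sketched in a few lines. This is my own simplified illustration using Python's websockets library; the URL and message fields are placeholders, not the actual wire format:

```python
import asyncio
import json
import websockets  # pip install websockets

async def terminal_connection():
    """Sketch of the terminal side: open one outbound connection, let the backend
    upgrade it to a WebSocket, then serve payment requests routed over it."""
    async with websockets.connect("wss://backend.example.com/terminal/T-123") as ws:
        async for raw in ws:                      # backend pushes payment requests
            request = json.loads(raw)
            # ... drive the card reader / shopper interaction here ...
            result = {"requestId": request["requestId"], "status": "Approved"}
            await ws.send(json.dumps(result))     # response travels back over the same socket

asyncio.run(terminal_connection())
```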
For more information on our Terminal API, you can see our documentation: Terminal API Integration and a blog post on the commercial benefits: Introducing the Terminal API.
https://medium.com/adyen/how-we-initiate-point-of-sale-transactions-globally-7fad4786db16
[]
2020-07-08 05:16:38.864000+00:00
['Java', 'Retail Technology', 'Cloud', 'Payments', 'Point Of Sale Software']
Django REST API
Last week we talked about creating basic applications with Django. Today let’s try to design a RESTful API with Django. Prerequisites Before we start, let’s install the additional libraries which will help us design the API: pip install djangorestframework pip install markdown pip install django-filter Project Setup Now let’s make sure that all moving parts of the REST framework libraries are in their places. In the project folder we have the file settings.py. In it there is an array INSTALLED_APPS. Add one more element, 'rest_framework', to this array. We also need to add this dictionary to the file: REST_FRAMEWORK = { 'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination', 'PAGE_SIZE': 10 } In the previous post we didn’t sync with the database. We can do it by running migrations: python manage.py migrate Let’s also create a superuser: python manage.py createsuperuser --email [email protected] --username admin Now that our initial database and admin user are ready, we can start developing the API. Serializers Serialization is the process of turning data into a readable format. In our case we will use the serializer from the Django REST framework library. Let’s cd to the app directory and create the file serializers.py. In this file we will import the User model, import the serializer, and filter the data we are going to send out. This is what the serializer looks like: Views Next we will need to create a view where the user serializer will be used. Here we will query the database and sort the records: You can see that we are not declaring individual views. Instead we use viewsets. We group common behavior into classes. Urls And finally we need to write some routes for our API. Let’s cd to the project (not app) directory and open urls.py. Using viewsets allows the router class to automatically generate the URL config for each registered viewset: And our brand new API is ready to use. API CRUD Let’s run the server and see what we got: python manage.py runserver If we open localhost we will see all the declared routes, in our case only one, because we have only registered the users route. And if we click on the users route we will see serialized data from the User table. That is the Read operation; what about the rest of them? Easy! At the bottom of the page we can find an interface for sending POST requests, which will create a new record in the user table. After creating a new user we can follow the RESTful convention and go to the URL users/:id (in our case that id will be 2) to see all the details about this specific user, where we also have interfaces for the Update and Delete actions. CRUD is complete! Conclusion Django is a very powerful framework, and with additional libraries it allows us to create fully functioning APIs very fast. Keep learning, keep growing! Let’s connect on LinkedIn!
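The serializer, viewset, and router snippets referenced in the walkthrough are not reproduced in this text, so here is a minimal version in the spirit of the Django REST framework quickstart; the exact code in the original post may differ slightly:

```python
# serializers.py
from django.contrib.auth.models import User
from rest_framework import serializers

class UserSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = User
        fields = ['url', 'username', 'email', 'is_staff']  # only these fields are sent out

# views.py
from rest_framework import viewsets

class UserViewSet(viewsets.ModelViewSet):
    """A single viewset provides list/retrieve/create/update/destroy."""
    queryset = User.objects.all().order_by('-date_joined')
    serializer_class = UserSerializer

# urls.py (project level)
from django.urls import include, path
from rest_framework import routers

router = routers.DefaultRouter()
router.register(r'users', UserViewSet)  # the router generates users/ and users/:id routes

urlpatterns = [
    path('', include(router.urls)),
]
```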
https://medium.com/datadriveninvestor/django-rest-api-1ab821e40733
['Pavel Ilin']
2020-11-10 09:14:50.636000+00:00
['Python', 'Django', 'Rest Api', 'Rest', 'Framework']
6 Steps to Up the Sustainability Game for Your Business
Lessons from the most sustainably managed company in the world Photo by Alexander Abero on Unsplash While some business owners are still hanging onto a feast-or-famine mindset, others, as if tipped off by a prophet, have embraced a marathon-running attitude. Gone are the old days when Darwinism reigned in business — sustainability is the new law of the jungle. However, unlike financial success, the ‘sustainability’ of a business is daunting to measure, to say the least. It’s like trying to predict human longevity — use all the health metrics you want, but only time will tell. Yet The Wall Street Journal cracked the hard nut head-on. They recently published a ranking of The 100 Most Sustainably Managed Companies in the World. Who claims the top spot? Let me spare you the laborious effort of a click. It’s one of the largest Japanese conglomerates, with businesses in electronic products, video gaming, music and media — Sony. Sony’s headquarters in Tokyo. Photo: Sony.net To combat the pandemic alone, Sony set up a $100 million Sony Global Relief Fund for COVID-19 for efforts in medicine — donating $10 million to the World Health Organization’s COVID-19 Solidarity Response Fund; education — joining UNICEF to roll out the digital learning platform “Learning Passport” in Latin America; and the creative industry — Play At Home for gamers, 500 ARTISTS WANTED for musicians, and free Sony cameras for visual artists. All these efforts find their roots in Sony’s purpose statement revealed last year, “Fill the world with emotion, through the power of creativity and technology”, which to Sony is not a goal but a reason for existence. Before the word “Covid-19” existed in any dictionary, Sony had been funding startups focusing on environmental technologies, with plans to invest 1 billion yen ($9.46 million) over the course of three to five years and recoup in about a decade. It publicly announced its goal to achieve a “zero environmental footprint” by 2050 and use 100% renewable energy by 2040 — both are now very well on track. If you are wondering how Sony became the most sustainably managed company in the world, here are six easily applicable steps we learned from Sony’s sustainability success.
https://medium.com/datadriveninvestor/6-steps-to-up-the-sustainability-game-for-your-business-ddd41b8f596b
['Eunice X.']
2020-12-28 15:49:42.209000+00:00
['Finance', 'Investing', 'Sustainability', 'Venture Capital', 'Economics']
Fixing our Bug Problem
by Thomas Gomersall Insects may not be what many would consider endangered species, but according to a devastating 2019 study in Biological Conservation, 41 per cent are in decline. An additional third are threatened and without immediate, radical action, most could be extinct within decades in what many have dubbed the ‘Insect Apocalypse’ (Sánchez-Bayo & Wyckhuys, 2019). This wouldn’t just be apocalyptic for insects, but for the countless animals and plants that they feed and pollinate, including our food crops. Yet in places such as Hong Kong, people continue to destroy their habitats for development and use pesticides that kill far more insects than the mosquitos they are intended for (Williams, 2019). But if Swedish teen environmental activist Greta Thunberg proves anything, it’s that ordinary citizens can make a difference. Here are some ways Hongkongers can help protect local insects. Citizen scientist programmes, such as City Nature Challenge, help familiarise people about insects. Photo credit: WWF-Hong Kong Love the Bugs Public support is integral for successful conservation. But widespread public ignorance of the importance of insects and limited exposure to them make insect conservation a tough sell (Tsang, 2019). Better education programmes about insects for schools and the wider public can help address this, while citizen scientist programmes, such as City Nature Challenge, help familiarise people about insects. “If you want to ask people to conserve insects, the most direct way is through [using] photography to make them realise that they can look very beautiful” says Toby Tsang, a post-doctoral urban ecology researcher at the University of Hong Kong. “If you can manage to somehow first promote [insect conservation] through photography, I think more people will start paying attention.” Tropical and subtropical insects can only survive within a narrow temperature range. Photo credit: Thomas Gomersall Tackle Climate Crisis This is a particularly important measure for protecting Hong Kong insects, as tropical and subtropical insects can only survive within a narrow range of temperatures, making it especially hard for them to cope with rapid warming (Bonebrake & Mastrandrea, 2010), as shown by the mass bee die-offs in last summer’s heatwave (Williams, 2019) and the expected significant declines in butterfly diversity in country parks from current warming projections (Cheng & Bonebrake, 2017). Measures to reduce Hong Kong’s carbon footprint include buying more locally sourced foods*, eating less meat and using low-emission public transport** and most importantly, continuing to pressure the government to cut emissions. Roofs of residential high rises provide plenty of space for communal rooftop gardens. Photo credit: Mathew Pryor Bug Cities While natural areas certainly provide better habitat overall, urban green spaces in Hong Kong (e.g. parks) are nonetheless surprisingly valuable for insect conservation. They support considerable insect diversity (including 58 of Hong Kong’s 250 butterfly species) and can help insects to move between the fragmented country parks, maintaining vital inter-population gene flow. However, habitat quality in parks is limited by frequent pesticide spraying and vegetation trimming (Tam & Bonebrake, 2016; Bonebrake, 2019). Luckily, ordinary citizens can easily create more insect-friendly green spaces themselves. Those with conventional gardens can do this by leaving small sections of them untended. 
But roofs of residential high rises also provide plenty of space for communal rooftop gardens and getting building management permission to start one is usually fairly easy (Pryor, 2019). If individuals each add their own plants to the garden, this will lead to a greater plant abundance that can support more insect species (Tsang & Bonebrake, 2017), particularly if these include native species such as the Chinese ixora and rhododendrons (Burghardt et al, 2009; Tam & Bonebrake, 2016; Ng & Corlett, 2000). Seasonally blooming plants, such as Chinese ixora are best for pollinators. Photo credit: Thomas Gomersall In return, rooftop gardens and the insects they attract (including butterflies, bees and beetles) can bring unexpected benefits for gardeners (Pryor, 2019). For instance, a high abundance and diversity of pollinators have been linked to higher crop yields (Garibaldi et al, 2014), good news for those who like growing fruit and vegetables on the roof. Even the process of creating and tending to a rooftop garden has been found to have great psychological benefits for people. “I do a lot of research into why people farm on the roof. [Roof gardens] produce huge amounts of happiness.” says Mathew Pryor, Head of the Division of Landscape Architecture at the University of Hong Kong. “Everybody who participates in a rooftop farm is blissfully happy. […] It’s a personal project.” As for natural insect habitats, Hongkongers should lobby their legislators to do more to protect these areas and vote for those whose environmental policies go towards meeting such goals. Growing flowers of varying lengths will help butterflies that specialise in feeding from long-bodied flowers. Photo credit: Thomas Gomersall Food, Glorious Food When creating insect-friendly green spaces, it’s also important to consider food sources, particularly to encourage butterfly breeding, as the caterpillars of many species will only feed from specific plants (Lo & Hui, 2004, p.64) ***. Seasonally blooming plants, such as Chinese ixora are best for pollinators as they grow quickly and produce lots of flowers and nectar. Growing several species that bloom at different times of the year ensures that pollinators have abundant food year-round (Pryor, 2019) while growing flowers of varying lengths will help butterflies that specialise in feeding from long-bodied flowers along with more generalist feeders (Kunte, 2007). Some butterflies will also feed from fluids produced by rotting fruit such as papaya and banana skins. (Lo & Hui, 2004, p.65; Bunker, 2019). Pesticides are the second-biggest driver of global insect declines. Photo credit: Thomas Gomersall Put down the Pesticide Of course, not everyone who keeps plants may want to attract insects that could potentially strip them to the stem. But don’t reach for that bug-spray. Pesticides are the second-biggest driver of global insect declines (Sánchez-Bayo & Wyckhuys, 2019) and there are plenty of other means to keep insects away from plants without killing them. Planting marigolds next to other plants is one environmentally friendly way to guard against insect infestation. Photo credit: Thomas Gomersall One way is the plant-by-plant method, in which bad-smelling plants like marigolds are placed close to other plants to ‘guard’ them from insects. Another is to use non-toxic, homemade sprays like water-diluted vinegar or water boiled with garlic. 
If the odour generated by this method puts you off as much as the insects, a less smelly though no-less effective or insect-friendly option would be to use Neem oil, which is available in most gardening shops. (Bunker, 2019). External Links * Locally sourced food in Hong Kong: https://medium.com/wwfhk-e/eating-sustainably-in-hong-kong-65b7f0dff961 ** Advice on cutting personal carbon emissions: https://medium.com/wwfhk-e/when-the-lights-go-back-on-thoughts-on-earth-hour-96e70c5307fc *** Specific caterpillar food plant preferences of the world’s butterflies and moths: https://www.nhm.ac.uk/our-science/data/hostplants/search/index.dsml References · Bonebrake, TC (PhD), interviewed by Thomas Gomersall, 2019, The University of Hong Kong. · Bonebrake, T.C. and Mastrandrea, M.D. 2010. Tolerance adaptation and precipitation changes complicate latitudinal patterns of climate change impacts. PNAS. 091184107. · Bunker, S, interviewed by Thomas Gomersall, 2019, World Wide Fund for Nature — Hong Kong. · Burghardt, K.T., Tallamy, D.W. and W.G. Shriver. 2009. Impacts of native plants on bird and butterfly biodiversity in suburban landscapes. Conservation Biology, vol. 23 (1): 219pp–234pp. · Cheng, W. and Bonebrake, T.C. 2017. Conservation effectiveness of protected areas for Hong Kong butterflies declines under climate change. Journal of Insect Conservation, vol. 21: 599pp-606pp. · Garibaldi, L.A., Carvalheiro, L.G., Leonhardt, S.D., Aizen, M.A., Blaauw, B.R., Isaacs, R., Kuhlmann, M., Kleijn, D., Klein, A.M., Kremen, C., Morandin, L., Scheper, J. and R. Winfree. 2014. From research to action: enhancing crop yield through wild pollinators. Frontiers in Ecology and the Environment, vol. 12 (8): 439pp–447pp. · Lo, P.Y.F. and Hui, W.L. 2004. Hong Kong Butterflies, 1st edn., Friends of the Country Parks, Hong Kong. 64pp–65pp. · Ng, S.C. and Corlett, R.T. 2000. Comparative reproductive biology of the six species of Rhododendron (Ericaceae) in Hong Kong, South China. Canadian Journal of Botany, vol. 78 (2): 221pp–229pp. · Pryor, M (PhD), interviewed by Thomas Gomersall, 2019, The University of Hong Kong. · Sánchez-Bayo, F. and Wyckhuys, K.A.G. 2019. Worldwide declines of the entomofauna: A review of its drivers. Biological Conservation, vol. 232: 8pp–27pp. · Tam, K.C. and Bonebrake, T.C. 2016. Butterfly diversity, habitat and vegetation usage in Hong Kong urban parks. Urban Ecosystems, vol. 19: 721pp–733pp. · Tsang, TPN. (PhD), interviewed by Thomas Gomersall, 2019, The University of Hong Kong. · Tsang, T.P.N. and Bonebrake, T.C. 2017. Contrasting roles of environmental and spatial processes for common and rare urban butterfly species compositions. Landscape Ecology, vol. 32 (1): 47pp–57pp. · Williams, M., ‘The insect apocalypse is coming: Hong Kong moth study shows the threats and complexities’. South China Morning Post, 31 March 2019, https://www.scmp.com/magazines/post-magazine/long-reads/article/3003821/insect-apocalypse-coming-study-hong-kong-moth (Accessed: 1 April 2019)
https://medium.com/wwfhk-e/fixing-our-bug-problem-4a8041c7ebe0
['Wwf Hk']
2019-07-15 02:43:11.484000+00:00
['Biodiversity', 'Nature', 'Extinction', 'Insects', 'Environment']
Netflix Is Now Worth More Than Disney — What’s Their Next Move?
BYTE/SIZE Netflix Is Now Worth More Than Disney — What’s Their Next Move? Four 3D chess moves that could make Netflix top dog Created by Murto Hilali My friends, who are 17–18-year-old males, have devoted their eyeballs to Netflix’s latest reality show: Too Hot To Handle, where attractive singles spend a month on a desert island trying not to advance the human species together (no sex): These are guys who read self-help books and publish on Medium, guys who I admire. My point? Netflix has successfully stolen my friend group’s souls, AND I WANT THEM BACK. Slow clap, Netflix, slow clap. Quarantine has people flocking to their screens and Joe Exotic’s mullet, and the streaming giant’s market cap is almost $187 billion now — just above Disney’s. With all this momentum, don’t be surprised if the firm starts making baller moves to grow its reach. I got Netflix two months ago, so I’m already an expert — here are my unrequested (and probably unqualified) ideas for where Netflix could be heading next.
https://medium.com/swlh/netflix-next-move-9ea44b42150f
['Murto Hilali']
2020-05-14 03:38:19.247000+00:00
['Finance', 'Technology', 'Marketing', 'Business', 'Data']
15+ Binary Tree Coding Problems from FAANG Interview
Image by Omni Matryx from Pixabay A Binary Tree is a hierarchical Data Structure. Depending on how you store the nodes in a tree, the terminology differs. Out of Free Stories. Here is my Friend Link Hey guys, I have been sharing a lot about Tech Interview Questions asked in FAANG, and I am currently working on the Tech Interview Questions asked at LinkedIn, Yahoo, and Oracle. I have been researching a lot about these “interview problems”. When it comes to Binary Tree Problems, most of them can be solved if you have a strong foundation in certain types of problems. This post is all about making you strong in the fundamental logic that is used to solve Binary Tree Problems. So that when you are in your interview and you come across a Binary Tree problem, you will know which logic to use and how you could approach that problem! Free For Kindle Readers If you are Preparing for your Interview. Even if you are settled down in your job, keeping yourself up-to-date with the latest Interview Problems is essential for your career growth. Start your prep from Here! 15+ Binary Tree Coding Problems from Programming Interviews What is the Lowest Common Ancestor? How to find the Lowest Common Ancestor of Two Given Nodes? Solution How to find out if a given tree is a subtree of another tree? Solution How to Traverse the Binary Tree Iteratively? Solution What is Breadth First Traversal? How to implement it? Solution How to find out the Diameter of a Tree? Solution How to Traverse the Tree in Zig-Zag fashion? Solution What is Depth First Traversal? How to implement it? Solution How to print the Right Side View of A Binary Tree? Solution How to Construct BST from Preorder Traversal? Solution How to find out if two given trees are mirror images of each other? Solution How to find out the sum of the Deepest Leaves in a Binary Tree? Solution How to Capture a Binary Tree into a 2D Array? Solution How to Merge Two Binary Trees? Solution How to find if a pair of Nodes in BST is equal to a target? Solution How to Find the Minimum Distance Between Two Nodes in a given BST? Solution These are some of the most popular binary tree-based questions asked in programming job interviews. You can solve them to become comfortable with tree-based problems (a worked sketch of the lowest common ancestor problem appears at the end of this post). Go Even Further These are some of the most common questions about the binary tree data structure from coding interviews that help you to do really well in your interview. I have also shared a lot of Coding Interview Questions asked in FAANG on my blog, so if you are really interested, you can always go there and read through them. These Challenges will improve you in Dynamic Programming, Back Tracking, Greedy Approaches, and Sorting and Searching Techniques to help you do well in the Technical Interviews. Good knowledge of these different algorithms and the time and space complexities behind them is a must-know for every interview. Focus on this the most. Further Reading 4 Incredibly Useful Linked List Tips for Interview Top 25 Amazon SDE Interview Questions Do you think you really know about Fibonacci Numbers? 9 Best String Problems Solved using C Programming One Does not Simply Solve 50 Hacker Rank Challenges End of the Line You have now reached the end of this article. Thank you for reading it. Good luck with your Programming Interview! If you come across any of these questions in your interview, kindly share it in the comments section below. I will be thrilled to read them. Before you go: Want to become outstanding in java programming? Free for Kindle Readers. 
A compilation of 100 Java(Interview) Programming problems which have been solved. (HackerRank) 🐱‍💻 This is completely free 🆓 if you have an amazon kindle subscription. If you like this article, then please share it with your friends and colleagues, and don’t forget to follow the house of codes on Twitter!
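As a taste of the kind of solution these problems call for, here is a short Python sketch of the first one on the list, finding the lowest common ancestor of two nodes in a plain binary tree (the linked solutions may use a different language or approach):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def lowest_common_ancestor(root, p, q):
    """Classic recursive LCA: the answer is the first node under which
    p and q end up in different subtrees (or the node that is p or q itself)."""
    if root is None or root is p or root is q:
        return root
    left = lowest_common_ancestor(root.left, p, q)
    right = lowest_common_ancestor(root.right, p, q)
    if left and right:        # p and q found on different sides, so root is the LCA
        return root
    return left or right      # otherwise both are in the non-empty side

# Tiny usage example:
#        3
#       / \
#      5   1
#     / \
#    6   2
n6, n2 = TreeNode(6), TreeNode(2)
n5, n1 = TreeNode(5, n6, n2), TreeNode(1)
root = TreeNode(3, n5, n1)
print(lowest_common_ancestor(root, n6, n2).val)  # 5
```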
https://medium.com/dev-genius/15-binary-tree-coding-problems-from-faang-interview-2ba1ec67d077
['House Of Codes']
2020-06-22 12:54:02.762000+00:00
['Coding', 'Java', 'Software Development', 'Interview', 'Programming']
Karma Enters Global Market: KYC Now Available For The Foreign Citizens
Hello, dear friends! Foreign Investors Allowed Finally, a citizen of any country can pass the KYC on our platform and become an accredited investor. We’ve connected a popular KYC and AML provider — the Sumsub company. They provide a standard KYC procedure: one just needs to add a passport photo/ID/driver’s license photo and a selfie with this document. There are also important rules for our Korean friends After passing the KYC, an investor can join any existing loan offer on the Market. For the moment we are starting to work with foreign investors who have accounts in Russian banks or accounts in Rubles in local banks. Plus, we’re now integrating with an international payment system to allow any investor to join our platform from any bank account or credit card. Automatic accreditation for Russian investors We’ve developed a scoring system which automatically checks investors’ data. It helps to reduce the average moderation time to 1 minute. Notifications about virtual account updates You can always keep track of any incoming funds on your virtual account, such as payout amounts, new deposits, etc. Email and SMS notifications will make you aware of any operations with your balance. “Zero commission” offer for investors extended till the end of June The number of new Karma investors is increasing on a daily basis. We are really happy that the majority of investors are staying with us and reinvesting more after receiving their first income. We’re also working on attracting new investors. That’s why we decided not to charge the platform’s commission to any investors till the end of June. Withdrawal fees We’ve paid the banking fees for all operations ourselves for a long time, but as the number of users is growing, we have started charging a small fee of 50 RUB for every virtual account withdrawal. Cheers^_^
https://medium.com/karmared/karma-enters-global-market-kyc-now-available-for-the-foreign-citizens-72373dcf2953
['Karma Project']
2019-04-09 07:13:51.356000+00:00
['Investing', 'P2p', 'Global', 'Development', 'Banking']
The Feynman Technique Can Help You Remember Everything You Read
The 4 Steps You Need To Take In essence, the Feynman technique consists of four steps: identify the subject, explain the content, identify your knowledge gaps, simplify your explanation. Here’s how it works for any book you read: #1 Choose the book you want to remember After you’ve finished a book worth remembering, take out a blank sheet. Title it with the book’s name. Then, mentally recall all the principles and main points you want to keep in mind. Here, many people make the mistake of simply copying the table of contents or their highlights. By not recalling the information, they skip the learning part. What you want to do instead is to retrieve the concepts and ideas from your own memory. Yes, this requires your brainpower. But by thinking about the concepts, you’re creating an effective learning experience. While writing your key points, try to use the simplest language you can. Often, we use complicated jargon to mask our unknowingness. Big words and fluffy “expert words” stop us from getting to the point. “If you can’t explain it simply, you don’t understand it well enough.” — Albert Einstein #2 Pretend you are explaining the content to a 12-year-old This sounds simpler than it is. In fact, explaining a concept as plainly as possible requires deep understanding. Because when you explain an idea from start to finish to a 12-year-old, you force yourself to simplify relationships and connections between concepts. If you don’t have a 12-year-old around, find an interested friend, record a voice message for a mastermind group, or write down your explanation as a review on Amazon, Goodreads, or Quora. #3 Identify your knowledge gaps and reread Explaining a book’s key points helps you find out what you didn’t understand. There will be passages you’re crystal clear about. At other points, you will struggle. These are the valuable hints to dig deeper. Only when you find knowledge gaps — where you omit an important aspect, search for words, or have trouble linking ideas to each other — can you really start learning. When you know where you’re stuck, go back to your book and re-read the passage until you can explain it in your own simple language. Filling your knowledge gaps is the extra step required to really remember what you read, and skipping it leads to an illusion of knowledge. #4 Simplify Your Explanation (optional) Depending on a book’s complexity, you might be able to explain and remember the ideas after the previous steps alone. If you feel unsure, however, you can add an additional simplification layer. Read your notes out loud and organize them into the simplest narrative possible. Once the explanation sounds simple, it’s a great indicator that you’ve done the proper work. It’s only when you can explain in plain language what you read that you’ll know you truly understood the content.
https://medium.com/age-of-awareness/the-feynman-technique-will-make-you-remember-what-you-read-f0bce8cc4c43
['Eva Keiffenheim']
2020-10-21 15:03:23.461000+00:00
['Reading', 'Books', 'Education', 'Learning', 'Personal Development']
Don’t Use Quarantine as an Excuse To Stop Having Boundaries
Don’t Use Quarantine as an Excuse To Stop Having Boundaries You need to protect your mental health now more than ever Adobe Stock Photo There’s nothing like a global crisis to give people in your lives the excuse they have been looking for to touch base. We are scared, anxious, and nervous and because we are emotionally vulnerable, it can cause us to allow outreach from people that we wouldn’t normally tolerate. In the past few weeks, I have seen and heard examples of the following types of what I would call “quarantine inspired outreach”. “I’m so bored, let’s shake things up.” A coworker told me that she had become so bored while trapped indoors that she thought about texting her ex-boyfriend over the weekend, just to create a little excitement. Although this may seem like a good idea after too much time and a little too much wine, don’t do it. You need to respect people’s boundaries and remember that just because you think it’s funny to text your ex, his new girlfriend is probably not going to think the same thing. No one needs to be instigating fights while people are trapped together in a 500-square-foot apartment.
https://medium.com/fearless-she-wrote/dont-use-quarantine-as-an-excuse-to-stop-having-boundaries-32176664d263
['Carrie Wynn']
2020-04-09 17:10:00.311000+00:00
['Mindfulness', 'Relationships', 'Mental Health', 'Advice', 'Self']
What Are the Rules for Lending Your Books to Friends
Photo by Євгенія Височина on Unsplash When you start to collect books at home, your friends and guests will begin to envy you. Then every time you have visitors, you will live in dread of the moment when they want to borrow one of your books. Besides, you will find yourself in an uncomfortable conversation. Letting a visitor borrow a book is a complicated situation. Every reader like me wants to share the stories that make us happy; however, on the other hand, we do not want anyone to put their greasy fingers on our beautiful little books. The reason is that the statistics tell us that if we lend a book, the probability of seeing it again is almost 0. Even if we are lucky enough to see our books back, we would find that most of their pages have turned yellow and fall apart as we turn them. I am utterly sure about that because it is like the law of thermodynamics. Once one of my books leaves the house door, it becomes Schrödinger’s cat, and it will be both located and lost. I call that situation “the uncertainty principle of borrowed books.” I also want to add that yellow pages or missing pages are not our friends’ fault, and it is not because they are careless. If we want our friends to return the books that they borrow, we have to set some strict rules. Moreover, we should print a booklet and put it inside the books. For the rules, the first thing we need to make clear is that they have to return the books. That’s why we should make a list of which books we give and to whom, in a simple spreadsheet or in a booklet, to keep them recorded. If we also want to put an approximate return date, we can do it. For instance, we could even write it down in the agenda to send them a reminder email. Yes, we need to be that heavy! You should also clarify that if they return the books stained with chocolate, wine, coffee, or other fluids and moisture — not to mention loose pages or damaged covers — they have to buy a new one. That is the best way to make sure that they will be careful. Finally, and the most vital point to keep in mind! The books that we love shouldn’t be lent. Thus, we shouldn’t give them away either, because keeping them is the best way to see them again. If our favorite books pass to others, then they will give the books to strangers. Unfortunately, it is almost impossible for those books to find their way back home. If you are still willing to lend your favorite books, there is nothing to say. You are the kind of person who wants everyone to enjoy such special readings as much as you do. I sometimes buy a couple of second-hand copies and have them available to everyone who asks to read them. If the books are returned to me, that’s fine. If they get lost in hyperspace, then nothing happens, and I don’t need to be sad.
https://mathladyhazel.medium.com/what-are-the-rules-for-lending-your-books-to-friends-d77ea84433f6
['Hazel Clementine']
2020-01-18 08:26:57.300000+00:00
['Books', 'Reading', 'Friendship', 'Relationships', 'Education']
Why I Am Choosing to Stop Writing on Medium at This Point
I published my first post on Medium on October 25, 2019, and my 400th on October 24, 2020. In that one year, I earned a total of $359.94, just shy of 90 cents per story. While that is more than I’ve ever earned from my writing, it is less than what I expected to earn per month by the year’s end. When things don’t go as planned, it is only prudent to analyze and evaluate your efforts. “Regardless of how far you’ve traveled down the wrong path, turn around.” ~ Turkish Proverb Looking Back When I decided to publish on Medium, I planned to evaluate the ROI after the first six months of 2020. However, by the end of June, I was having fun making friends and picking up followers, so instead of doing a thorough analysis, I decided to continue. I have learned that the journey of life is not linear. There are ups and downs and a lot of curves along the way. Sometimes, you even have to take a detour to get back on the road. Medium is no different. I learned from others that sometimes it takes one story to catch fire for your income to skyrocket. I have always believed in synchronicity. When you are committed to an idea or a goal, the Universe conspires for you. And, when that happens, it doesn’t always look like what you may have expected. In October, as I approached my first anniversary of writing on Medium, the Universe gifted me with a sizable reward that had nothing to do with writing. It made me pause. I decided to take a look at where I was and where I was headed. Freedom Lifestyle Simply put, Freedom Lifestyle is organizing your affairs in such a way that you can spend your time doing what you love and enjoy. When I started writing on Medium, I had been enjoying Freedom Lifestyle because I was able to twist balloons and give them away. I learned that when I did that, others gave away money. It worked for me. I approached writing on Medium in a similar vein. My goal was to build it up to the point where it would take care of my ongoing expenses and free up the time I spent on twisting balloons. I didn’t expect that I would be forced to stop twisting balloons because of the way 2020 turned out. However, as I have always believed, the Universe had arranged to take care of me before I even knew that there might be a need for it. Moving Forward I have been receiving signs that while I am on the right path, a fork lies in the road ahead, where I may be able to change direction and find a more fulfilling opportunity. I enjoy writing, and it enables me to pursue my purpose in life. However, I know that it is not something I feel passionate about. It is not the thing I do best. My preferred communication modality is verbal, as opposed to written. It is time for me to explore that. “If you can’t figure out your purpose, figure out your passion. For your passion will lead you right into your purpose.” ~ T. D. Jakes As always, thank you for reading and responding. Here are a couple of related stories: Graphic created by Rasheed Hooda using Canva Will you buy me some chai?
https://medium.com/narrative/why-i-am-choosing-to-stop-writing-on-medium-at-this-point-7f03398f3863
['Rasheed Hooda']
2020-11-08 13:45:55.755000+00:00
['Writing', 'Purpose', 'Passion', 'Options', 'Life Lessons']
Large Scale Satellite Data Processing
Large Scale Satellite Data Processing With more spatial data coming in than ever, we look to improve satellite data processing methods and efficiency. Source: Techspot Spatial Data — What is it? Spatial data, or geospatial data, can be thought of as any data containing location information on the Earth’s surface. Spatial data is present in any field or industry, especially in today’s “real-time” data-driven world. There are a few common terminologies that better explain the language of spatial data. Vector data is any data that contains points, lines, and/or polygons. It can be thought of as our “man-made” map view of the world, consisting of road networks, administrative boundaries, etc. Raster data, referred to as imagery, is any pixelated data, such as satellite images. Often, raster data are photos taken from an aerial device like a satellite or drone. The resolutions can greatly vary depending on device precision and other technological and aerial factors. Geographic Coordinate System (GCS) is a projection of the Earth’s 3D surface to define locations on Earth. It uses latitude, longitude, degrees (angles), and axes. High-Resolution Earth Imagery is Now Made Publicly Available Source: EOSDIS Imagery There have been huge advancements in remote sensing technologies within the last decade alone. This has paved the way for petabytes of high-resolution satellite imagery being made publicly available. The use-case of this data applies to any industry — especially fields like atmospheric science, agriculture, ecology, soil science, etc. NASA EOSDIS provides public access to over 17 petabytes (and growing) of satellite imagery data. The European Space Agency (ESA) launched the Sentinel-1A satellite, which collected over 5 petabytes of data within the first two years of its launch alone. These advancements, however, exposed vulnerabilities in processing vast amounts of data and consequently paved the way for us to build new systems of spatial data processing — including SpatialHadoop, Simba, GeoSpark, RasDaMan, and more. These systems, however, focus solely on processing raster or image data. They perform poorly on queries processing both vector and raster data simultaneously. Additionally, these methods require the conversion of raster-to-vector or vector-to-raster — which proves to be extremely inefficient when processing large amounts of data. So what’s a solution that can handle both vector and raster data? One answer is zonal statistics. Zonal Statistics Zonal statistics is the operation that processes a combination of vector and raster data. It computes statistics over a raster layer, for each polygon in a vector layer. All values are aggregated from the raster layer which overlaps with a set of polygons from the vector layer. For example, let’s say we are interested in finding the average temperature of U.S. states. We have polygons of U.S. states from the vector layer and temperature data from the raster layer. Zonal statistics can compute the average temperature (or other statistics) for each state in the U.S. This computation of processing vector and raster layers simultaneously is called the scanline method. The Scanline Method Figure 1 The scanline method works directly for the zonal statistics problem by finding intersections between horizontal scanlines and respective polygon boundaries, as seen in Figure 2(c). In the latest version, the scanline method can now process multiple adjacent polygons at a time, further reducing processing complexities. 
This means, for example, if we wanted to find the average temperature of all the states in the west, we can now compute this query in one computation, as opposed to computing the average separately per state. The Algorithm Figure 2 Input: a set of polygons (vector layer), raster layer Output: desired aggregation (e.g. sum, minimum, maximum, count, average of pixels inside the polygon) Step 1: As shown in Figure 2(b), first calculate the Minimum Bounding Rectangle (MBR) of the input polygon and map its two corners to the raster layer to locate the range of rows/scanlines to process in the raster layer. To include multiple polygons, we extend this step to all queried polygons. Step 2: As shown in Figure 2(c), compute the intersections of each scanline with its polygon boundaries. Each scanline is converted to the vector space and stores its y-coordinates in a sorted list. Then, each polygon is scanned for its corresponding range of scanlines — which are then used to compute intersections with the polygon. Step 3: As shown in Figure 2(d), the pixels that lie inside the polygons are processed. Rather than processing one polygon at a time, this step processes one scanline at a time, speeding up computation significantly. Pros of the Scanline Method This algorithm overcomes the limitations of raster- and vector-based methods. First, it only requires minimal intermediate storage for the intersection points. Second, it only accesses the pixels that are inside the polygon, which improves disk IO for very large raster layers. Third, it does not require any complicated point-in-polygon tests, which makes it faster than the vectorization methods. Finally, the scanline method is IO-bound, which makes it optimal from the processing perspective, since it requires one scan over the raster data to process all polygons. Use-Cases Source: ESA — Average Temperature in Arctic 1997–2008 As I mentioned above, the use-cases for processing spatial data are endless. As we are slowly but surely moving towards a more “green” and environmentally friendly global community, it is important to be able to process and extract insights from Earth imagery. Processing both vector data and satellite data simultaneously allows us to not only apply spatial data insights to our societies, but also to foresee trends on the Earth’s surface. We can better monitor vegetation, temperature, ocean level changes, etc. Visit my GitHub repository to view a full-stack application using the scanline method in the backend to process multiple polygons through a user interface.
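To make the three steps above concrete, here is a compact Python sketch of the scanline idea for a single polygon whose vertices are already in pixel coordinates. It is my own simplified illustration of the approach in Figure 2, not the optimized multi-polygon implementation:

```python
import numpy as np

def scanline_zonal_sum(raster, polygon):
    """Simplified single-polygon scanline aggregation: sum and count of the pixels
    whose centres fall inside `polygon` (a list of (x, y) vertices in pixel coords)."""
    ys = [y for _, y in polygon]
    total, count = 0.0, 0
    row_min = max(int(np.floor(min(ys))), 0)
    row_max = min(int(np.ceil(max(ys))), raster.shape[0] - 1)
    for row in range(row_min, row_max + 1):              # Step 1: rows inside the MBR
        yc = row + 0.5                                   # scan through pixel centres
        xs = []
        for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
            if (y1 <= yc < y2) or (y2 <= yc < y1):       # Step 2: edge crosses this scanline
                xs.append(x1 + (yc - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for x_enter, x_exit in zip(xs[0::2], xs[1::2]):  # Step 3: pixels between crossings
            col_start = max(int(np.ceil(x_enter - 0.5)), 0)
            col_stop = min(int(np.floor(x_exit - 0.5)), raster.shape[1] - 1)
            if col_stop >= col_start:
                segment = raster[row, col_start:col_stop + 1]
                total += segment.sum()
                count += segment.size
    return total, count

# Toy example: average of a 10x10 raster inside a square polygon
raster = np.arange(100, dtype=float).reshape(10, 10)
poly = [(2, 2), (8, 2), (8, 8), (2, 8)]
total, count = scanline_zonal_sum(raster, poly)
print(total / count)   # zonal mean over the square
```

Note how the per-row work is just an edge-crossing test and a slice sum, which is why the method needs only one pass over the raster and no per-pixel point-in-polygon checks.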
https://medium.com/towards-artificial-intelligence/large-scale-satellite-data-processing-e963692380b8
[]
2020-12-25 01:02:58.806000+00:00
['Satellite', 'Satellite Technology', 'Spatial Analysis', 'Big Data', 'Remote Sensing']
From Pandemic to Panopticon: Are We Habituating Aggressive Surveillance?
In Shoshana Zuboff’s 2019 book The Age of Surveillance Capitalism, she recalls the response to the launch of Google Glass in 2012. Zuboff describes public horror, as well as loud protestations from privacy advocates who were deeply concerned that the product’s undetectable recording of people and places threatened to eliminate “a person’s reasonable expectation of privacy and/or anonymity.” Zuboff describes the product: Google Glass combined computation, communication, photography, GPS tracking, data retrieval, and audio and video recording capabilities in a wearable format patterned on eyeglasses. The data it gathered — location, audio, video, photos, and other personal information — moved from the device to Google’s servers. At the time, campaigners warned of a potential chilling effect on the population if Google Glass were to be married with new facial recognition technology, and in 2013 a congressional privacy caucus asked then Google CEO Larry Page for assurances on privacy safeguards for the product. Eventually, after visceral public rejection, Google parked Glass in 2015 with a short blog announcing that they would be working on future versions. And although we never saw the relaunch of a follow-up consumer Glass, the product didn’t disappear into the sunset as some had predicted. Instead, Google took the opportunity to regroup and redirect, unwilling to turn its back on the chance of harvesting valuable swathes of what Zuboff terms “behavioral surplus data”, or cede this wearables turf to a rival. Instead, as a next move, in 2017 Google publicly announced the Glass Enterprise Edition in what Zuboff calls a “tactical retreat into the workplace.” The workplace being the gold standard of environments in which invasive technologies are habituated and normalized. In workplaces, wearable technologies can be authentically useful points of reference (rather than luxury items), and are therefore treated with less scrutiny than the same technologies in the public space. As Zuboff quips: “Glass at work was most certainly the backdoor to Glass on our streets”, adding: The lesson of Glass is that when one route to a supply source [of behavioral data] encounters obstacles, others are constructed to take up the slack and drive expansion. This kind of expansionism should certainly be on our minds right now as we survey the ways in which government and the tech industry have responded to the COVID-19 pandemic. Most notably in asking if the current situation — one in which the public are prepared to forgo deep scrutiny in the hopes of some solution — presents a real opportunity for tech companies to habituate surveillance technologies at scale? Technologies that have been previously met with widespread repugnance. Syndromic Surveillance Over the last few days and weeks, the media have reported offers from tech companies looking to help governments stymy the spread of the coronavirus. Suggestions vary in content, but many (or most) could reasonably be classified as efforts to track and/or monitor the population in order to understand how the virus moves — known as “syndromic surveillance.” On Monday, Facebook’s Data for Good team announced new tools for tracking how well we’re all social distancing by using our location data. Facebook were following hot on the heels of Google, who promised to do something very similar just last week. 
According to reports, the readouts from Google’s data stash will reveal phenomenal levels of detail, including “changes in visits to people’s homes, as determined by signals such as where users spend their time during the day and at night.” This granular data is intended to inform government policy decisions, and ultimately influence public behavior to curtail the spread of the virus. This end purpose is, of course, an extremely noble one: saving human lives. This is a cause that legitimizes most methods. Nevertheless, we should not let our sheer desperation to stop this abominable disease blind us to some of the burgeoning concerns surrounding tech’s enthusiastic rollout of unprecedented intrusion. Control Concerns It’s almost reflexive now to look to China when discussing the excessive deployment of technological surveillance tools. Not unexpectedly, the Chinese government has turned the COVID-19 outbreak into an opportunity to flex their surveillance tech muscles, while baking ever more controls into the daily lives of citizens. Authorities have been monitoring smartphones, using facial recognition technology to detect elevated temperatures in a crowd or those not wearing face masks, and obliging the public to consistently check and self report their medical condition for tracking purposes. The Guardian, further reported: Getting into one’s apartment compound or workplace requires scanning a QR code, writing down one’s name and ID number, temperature and recent travel history. Telecom operators track people’s movements while social media platforms like WeChat and Weibo have hotlines for people to report others who may be sick. Some cities are offering people rewards for informing on sick neighbors. But this is what we’ve come to expect from China. Perhaps more surprising is that similar pervasive tracking techniques have been adopted in so many other COVID-19 hotspots around the globe. This silent, yet penetrative policing is still unfamiliar to the public in most areas stricken by the coronavirus. The New York Times reported that in Lombardy, Italy, local authorities are using mobile phone location data to determine whether citizens are obeying lockdown, and in Israel, Prime Minister Benjamin Netanyahu has authorized surveillance technology normally reserved for terrorists to be used on the broader population. In countries like the UK and the US, the announcement of each new tracking technology has been accompanied by an avalanche of privacy assurances. Yet, we’ve already seen a number of worrying instances where the vigilant monitoring of the pandemic has tipped over into boundary-crossing privacy lapses — like this tweet from New York’s Mayor Bill de Blasio. And in Mexico, when public health officials notified Uber about a passenger infected with the virus, the company suspended the accounts of two drivers who had given him rides, then tracked down and suspended the accounts of a further 200 passengers who had also ridden with those drivers (NY Times). The pandemic has unleashed a fresh government enthusiasm for using tech to monitor, identify, and neutralize threats. And although this behavior might seem like a natural response to a crisis, authorities should be alive to the dehumanizing aspects of surveillance, as well as the point at which they start to view the rest of us as mere scientific subjects, rather than active participants in societal efforts. A False Choice? Of course, there are those who would willingly relinquish personal privacy in order to save lives. 
They believe that an end to this suffering justifies any action taken by governments and tech companies, even if it involves a rummage in our personal data cupboards. But what isn’t clear is the extent to which we can trust this as a straight transaction. After all, these are largely unproven technologies. In the New York Times, Natasha Singer and Chloe Sang-Hun write: The fast pace of the pandemic…is prompting governments to put in place a patchwork of digital surveillance measures in the name of their own interests, with little international coordination on how appropriate or effective they are. And writing for NBC News’ THINK, Albert Fox Cahn and John Veiszlemlein similarly point out that the effectiveness of tech tracking pandemic outbreaks is “decidedly unclear”. They recount previous efforts, like Google Flu Trends, that were abandoned as failures. In short, we could be giving up our most personal data for the sake of a largely ineffective mapping experiment. Yuval Noah Harari argues that the choice between health and privacy is, in fact, a false one. He emphasizes the critical role of trust in achieving compliance and co-operation, and says that public faith is not built through the deployment of authoritarian surveillance technologies, but by encouraging the populace to use personal tech to evaluate their own health in a way that informs responsible personal choices. Harari writes: When people are told the scientific facts, and when people trust public authorities to tell them these facts, citizens can do the right thing even without a Big Brother watching over their shoulders. A self-motivated and well-informed population is usually far more powerful and effective than a policed, ignorant population. He ends with a caution that we could be signing away personal freedoms, thinking it is the only choice. The New (Ab)Normal So, to return to our original question: has this dreadful pandemic provided legitimacy to an aggressive, pervasive surveillance that will carry on into the future? Are we witnessing the beginning of a new normal? Nearly two decades after the 9/11 attacks, law enforcement agencies still have access to the high-powered surveillance systems that were instituted in response to imminent terror threats. Indeed, as Yuval Harari asserts, the nature of emergencies tends to be that the short-term measures they give rise to become fixtures of life on the premise that the next disaster is always lurking. He adds that, “immature and even dangerous technologies are pressed into service, because the risks of doing nothing are bigger.” Whenever we eventually emerge from this difficult time, there is every chance that our collective tolerance for deep surveillance will be higher, and the barriers that previously prevented intrusive technologies taking hold will be lower. If we doubt this, it’s important to know that some tech companies are already openly talking about the pandemic in terms of an expansion opportunity. Perhaps if our skins are thicker, and privacy becomes a sort of quaint, 20th century concern, we could worry less and enjoy greater security and convenience in a post-pandemic era? If this seems appealing, then it’s worth remembering that the benefits of constant and penetrating surveillance, like disease tracking or crime detection, are offset in a range of different and troubling ways. 
By allowing a permanent tech surveillance land grab, we simultaneously accept and embed a loss of anonymity, as well as a new onslaught of commercial and governmental profiling, cognitive exploitation, behavioral manipulation, and data-driven discrimination. To let this mission creep go on unchallenged would be to assent to a new status quo where we willingly play complacent lab rats for our information masters. So, as we urgently will an end to this global devastation, let’s be attentive when it comes to the aftermath and clean-up, lest we immediately exchange one temporary nightmare scenario for another, more lasting one.
https://medium.com/swlh/from-pandemic-to-panopticon-are-we-habituating-aggressive-surveillance-f880ef754bc0
['Fiona J Mcevoy']
2020-04-10 00:09:49.499000+00:00
['Covid 19', 'Coronavirus', 'Technology', 'Surveillance', 'Government']
Why Donald Trump Could Win Even Though He’s Losing
Since he was diagnosed with COVID-19, the president’s poll numbers definitely haven’t shifted for the better. According to recent polls, Trump is down by double digits nationally and trailing in every single battleground state except Georgia. With no further debates likely, and given the debacle that was the first one, the question arises: can Donald Trump win this election? To be honest, chances are that he will lose the popular vote by a wide margin. But just like last election, it’s still possible to have an Electoral College/popular vote split. To shift the election, Trump will likely have to gain voters in states like Florida, Michigan, Ohio, Iowa, and Pennsylvania if he is to have a shot at winning the White House again. That’s going to take more work from him and his campaign, and he’s going to have to change his campaign’s rhetoric. But it’s still possible. As we well know by now, polls don’t give us the true picture of how the election is going to turn out, and things can change fast in the weeks preceding election day. This election is different, so here are a few scenarios. Donald Trump wins the Electoral College. Although this is the most unlikely of these three scenarios, it is still quite possible. Even though pundits put his chances at about one in seven, Trump still has a chance to pull off a win. Here’s the most recent polling data for some battleground states: Michigan: Biden 54, Trump 43; Ohio: Biden 44, Trump 43; Arizona: Biden 48, Trump 43; Wisconsin: Biden 47, Trump 42; Florida: Biden 53, Trump 43. If Trump managed to win all of those states, he’d still have to win some combination of North Carolina, Georgia, Pennsylvania, and Iowa. Long story short, this won’t be the easiest run for his campaign. But assuming that statewide polls underrepresent Trump’s supporters, which they certainly did last election, Trump is within striking distance in Ohio, Arizona, and Wisconsin; if he carries those states, it wouldn’t be hard to conceive of a Trump win on November 3rd (or later). The Electoral College Comes Out As a Tie. Assuming that Nevada, New Hampshire, and Michigan go blue, all of which are solid leads for Biden, and that Texas and Georgia go red, there are, in my view, two plausible scenarios that could end in a tie: Biden wins Arizona and Wisconsin; Trump wins Florida, Pennsylvania, Ohio, North Carolina, Iowa, Maine’s Second District, and Nebraska’s Second District. Or: Biden wins Pennsylvania and Maine’s Second District; Trump wins Florida, Arizona, Wisconsin, Ohio, North Carolina, Iowa, and Nebraska’s Second District. In this case, the election would go to the House of Representatives, where each state delegation would receive one vote; the candidate with the majority of the delegation votes would become president. If the current Congress voted, it would likely be a Trump victory: a majority of state delegations are Republican. But here’s another weird situation: if the Democrats take back the Senate and Republicans retain a majority of the state delegations in the House, we could end up with a President Trump and a Vice President Harris. Needless to say, this probably isn’t going to happen. Trump Challenges the Legitimacy of the Election. Finally, there’s the situation that many fear: Trump refuses to accept the results of the election. He won’t guarantee the peaceful transition of power that’s taken place in our country for centuries. The problem with this election, however, is that there could be a misleading appearance of voter fraud.
As results come in on election day, it’s likely that the in-person vote will indicate a landslide victory for Trump: Republican voters are much more likely to vote in person. Here’s an interesting cartoon by David Horsey in the Seattle Times on that point. However, as mail-in ballots are counted, more and more Democratic votes will be added to the totals, causing a massive shift in the Electoral College picture almost a week after election day. Because Trump has been building a narrative of voter fraud since the onset of this pandemic, I don’t think any of us would be surprised if this did end up happening. However, it’s likely that nothing would come of a legal challenge by the president. As seen in 2000, the Constitution ensures that the presidential election goes through in the most efficient, if not fair, process possible, allowing for a continual transition of power: that’s probably not going to change just because one man decides to delegitimize a system that he once beat. I know that each of the situations I’ve listed is unlikely, but that’s just the nature of the race for our president. There’s no denying it: Donald Trump is losing this election. However angry it might make him or his supporters, electoral shift is a real thing. But I have to emphasize one thing: this election is like no other. Mail-in ballots could make up a larger share of the vote than at any point in American history, and election results may not be known even a week after the election. That just means that faith in our electoral system will undoubtedly be low. Realistically, my point is this: Donald Trump isn’t out of this election. When many pundits thought last election was an almost guaranteed Clinton win, Trump beat the odds. From my perspective, that’s not likely to happen again, but it still could. For the Democrats reading this, it’s okay to think that Biden will win; if the election were tomorrow, I think he would. But there’s a reason that the “October surprise” is a thing. Don’t be surprised if it doesn’t turn out the way you might hope it will.
https://medium.com/discourse/why-donald-trump-could-win-even-though-hes-losing-bb7f763251ab
['Yash Rajpal']
2020-10-12 17:01:41.315000+00:00
['Politics', 'Trump', 'Election 2020', 'Biden', 'Coronavirus']
On The Art of Facing Things
It turns out facing things is not as hard, not nearly as hard, as resisting them. But to face things, especially forces that oppose us, we must go against every instinct we have to continue to believe and do what we believed and did before. Facing things requires we undo and unlearn the well-worn emotional habits that we have repeated so often we forget we can do something else, and mistake them for cause and effect, the way the world is and will always be. Salmon have much to teach us about the art of facing things. In swimming up waterfalls, these remarkable creatures seem to defy gravity. It is an amazing thing to behold. A closer look reveals a wisdom for all beings who want to thrive. What the salmon somehow know is how to turn their underside — from center to tail — into the powerful current coming at them, which hits them squarely and the impact then launches them out and further up the waterfall; to which their reaction is, again, to turn their underside back into the powerful current that, of course, again hits them squarely; and this successive impact launches them further out and up the waterfall. Their leaning into what they face bounces them further and further along their unlikely journey. — Mark Nepo, The Book of Awakening The salmon shows the raging waterfall its tender side, the part that is most defenseless, the part that a fisherman would gut and all the loose pulsing life of its inner organs spills out. To have guts spill out like that makes me think about how strong our skins must be to hold in all the life that we keep integrating: It looks haphazard, a mess, when our guts are spilled. But inside, there’s an unseen order that keeps salmon, and humans living, that keeps us moving forward through the most unlikely of circumstances. What does it mean to expose one’s tender belly to the elements, to face the strongest forces which are intent upon repelling us? One would think such power, such force would be impossible to resist, that the salmon, or the person, has no other choice but to go with the flow, in the direction, with the momentum of the water, its power, what appears to be the source of power. Water has the ability to adapt itself to any kind of container and is strong enough to bore through stone. One must have a strong container to direct the flow of water, and this perhaps is where people, not salmon, think they must have a certain kind of power to be able to control the forces at work in their own lives. But the salmon does not try to direct the water or the direction it is flowing. Yet salmon defy the dams that direct the powerful flow of water. So what is this that allows them to go against the strongest and powerful currents, to defy the strength of a waterfall and gravity itself? Could it be that tenderness is disarming because those who are in a race to shore up bigger defenses cannot anticipate those who refuse to fight? Even the raw power of the water cannot overcome the salmon who go their own way, who follow the call to live and to cultivate all that is yet to be born, what only they can cultivate. I have found in my own life that when I stopped trying to get the love and attention and recognition I so desperately wanted from people who couldn’t give it to me, the people who could see me then appeared. They say that love is the most powerful force in the universe. When faced with actual forces almost no one believes it, except maybe the salmon who have learned something about the power of love to create a different way. 
A note about reference points: Fred Rogers famously said we should “look for the helpers” in any situation where we are uncertain. Chances are good we’ll find them. In my search through 331 images of salmon, not one photo pictured a living salmon. Many were photos of sushi. A few were salmon-colored rooms. A few of the photos of waterfalls featured women, some nude, or fishermen. Photos of dams showed neither people nor the wildlife affected by them. I wonder if perhaps our difficulties in facing things, in knowing how to face the strange, changing circumstances of our lives, have something to do with our reference points — what we look to when we are looking around for clues as to how we might handle a situation. When we surround ourselves with a world that places human beings at the center, and other living things dead or absent, is it any wonder we find only the solutions we’ve already thought of? What other ways might we develop to solve problems, or even to understand the nature of our problems, were we to expand the scope of whom and what we include as reference points?
https://medium.com/the-philosophers-stone/on-the-art-of-facing-things-865ca66f1651
['Suzanne Lagrande']
2020-09-11 21:36:02.679000+00:00
['Self-awareness', 'Love', 'Life Lessons', 'Philosophy', 'Life']
Database “Magic”: Running Huge High Throughput-Low Latency KV Database with No Data In Memory
A couple of weeks ago I was talking to one of my oldest database colleagues (and a very dear friend of mine). We were chatting about how key/value stores and databases are evolving, and how they always seem to revolve around in-memory solutions and caching. The main rant was how this kind of thing doesn’t scale well, while being expensive and complicated to maintain. My friend’s background story was that they are running an application backed by a user-profile store with almost 700 million profiles (their total size was around 2TB, with a replication factor of 2). Since access to the user profiles is very random (meaning users are fetched and updated without the application being able to “guess” which user it will need next), they could not pre-warm the data into memory. Their main issue was that they sometimes get peaks of over 500k operations per second on this mixed workload, and that doesn’t scale very well. User Profile use case summary. In my friend’s mind, the only thing they could do was use some kind of memory-based solution. They could either use an in-memory store (which, as we said before, doesn’t scale well and is hard to maintain) or use a traditional cache-first solution and lose some of the required low latency, because most of the records are not cached. I explained that Aerospike is different. In Aerospike we can store 700 million profiles, 2TB of data, and deliver the required 500k TPS (400k reads and 100k writes, concurrently) with sub-1ms latency, but without storing any of the data in memory. The memory usage would then be very minimal — under 5 percent of the data size for this use case. My friend was suspicious: “What kind of wizardry are you pulling here?!” So, since I am not a wizard (yet; I am still convinced my Hogwarts acceptance letter is on its way — I’m almost sure the owl is just delayed), I went ahead and created a modest demo cluster for them, just to show my “magic”. Aerospike Cluster: Hybrid Memory Architecture. In this next screenshot, we can see the result: a 6-node cluster running 500k TPS (400k reads + 100k writes), storing 1.73TB of data but utilizing only 83.45GB of RAM. Running a 6 node cluster: 1.73TB of data, 84 GB of RAM. This cluster doesn’t have specialized hardware of any kind. It’s using 6 nodes of AWS’s c5ad.4xl (a standard option for a node), which means a total of 192GB of RAM and 3.5TB of ephemeral devices cluster-wide. From a pricing perspective it’s only about $1,900 a month, way less than what they pay now (and that was priced before any discounts). Obviously, if the cluster has a total of 192GB of DRAM, the data is not being stored fully in memory. In this case 0 percent of the data was fetched from any sort of cache — so, for 1.73TB of data, the memory usage was under 84GB (even though the Linux kernel would allow for some caching if needed! This makes things even better when using other access patterns, like common records or read-after-write). The cool thing is the performance. Predictable performance is something every application needs — and for the peaks described earlier, we can see in the next screenshots a latency of under 1ms for both reads and writes!
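To make the access pattern concrete, here is a minimal sketch of the kind of random key-value reads and writes described above, using the open-source Aerospike Python client. This is not the article’s benchmark code: the host address, namespace, set, bin names, and user key are illustrative assumptions.

```python
# A minimal sketch, not the article's benchmark harness. It assumes a local
# Aerospike node and a hypothetical "profiles" namespace with a "users" set.
import aerospike

config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

# Records are addressed by (namespace, set, user key); bins hold profile fields.
key = ("profiles", "users", "user:1234567")

# Upsert a profile record (one of the ~100k writes/sec in the described workload).
client.put(key, {"country": "US", "segments": ["sports", "travel"], "last_seen": 1607990400})

# Random read by key (one of the ~400k reads/sec): the in-memory primary index
# locates the record, and the record data itself is read from the SSD.
_, meta, bins = client.get(key)
print(meta["gen"], bins["country"])

client.close()
```

As a rough sanity check on the memory figure: Aerospike keeps a 64-byte primary-index entry in DRAM for each record copy, so on the order of 700 million profiles kept with two copies works out to roughly 90GB of index. That is in the same ballpark as the ~84GB of RAM shown in the screenshot, while the 1.73TB of record data stays on the local SSDs.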
https://medium.com/aerospike-developer-blog/database-magic-running-huge-high-throughput-low-latency-kv-database-with-no-data-in-memory-eb67ecdac851
['Zohar Elkayam']
2020-12-21 10:48:19.017000+00:00
['NoSQL', 'Redis', 'Database', 'Big Data', 'Aerospike']
Welcome to the Bazaar of the Bizarre
Here are some quick notes pertaining to the tabs of this publication. Since tabs are tag-loaded, it is important to properly tag posts so that they fall under the correct tabs. For posts to be loaded in the Poetry From The Soul tab, make sure that your main tag is “Poetry.” For posts to fall under the Musings tab, make sure to use “Writing” as your main tag. Although I have no problem with writers cross-tagging their work as “Poetry,” “Musings,” and/or “Writing,” it is my aim to have the bulk of “Poetry” posts fall under the Poetry From The Soul tab, whereas the Musings tab will be the general section solely for various kinds of prose such as contemplations, flash fiction, meditations, memoirs, reflections, short stories, and vignettes. For posts to fall under the Fibonacci & Other Weird Forms tab, please use the tag “Forms,” and anything in this section can be automatically cross-tagged as “Poetry.” Although all forms are welcome, I say the more experimental (and less traditional), the better, only because experimentation with various forms has always been instrumental in honing my skills as a poet. Photo by Jr Korpa on Unsplash
https://medium.com/the-bazaar-of-the-bizarre/welcome-to-the-bazaar-of-the-bizarre-74d9aee0e1cf
[]
2020-12-11 07:13:15.505000+00:00
['Musings', 'Writing', 'Bazaar Of The Bizarre', 'Mdshall', '21stenturygrio']
Engineer Q&A: Jessica Chong, Frontend Engineer
I’m taking part in this Q&A as part of an effort to introduce the world to the Engineering team at Optimizely. If you’re interested in joining, we’re hiring Frontend Engineers in San Francisco and Austin! Tell us about yourself! What’s your name, what do you do at Optimizely, and how long have you been here? Tell us a bit about your trajectory to get here. My name is Jess, and I’m a Senior Software Engineer working on Product at Optimizely. I’ve been here for 2.5 years. I got here via the I/Own It scholarship program, which was originally conceived to grow our WomEng population. I cannot overstate how much this scholarship changed my life. Last year everything came full circle when I ran the program for the second time. You can read more about it here. How did you figure out that this was what you wanted to do? I started making websites in 1999, when I was 13 years old and still using dial-up. I would spend hours poring through tutorials on htmlgoodies.com, painstakingly positioning tables and frames, exchanging design ideas and HTML tips with my Internet friends, and uploading my sites to domains that were owned by other teenagers. It was empowering, and I was part of a supportive community. C:\Windows\Desktop\jess\yes\index.htm I’ve actually blogged about my “Internetolescence”, a core part of my teenage identity, but it never crossed my mind that I could make a career out of making stuff on the Internet until I was well into my adulthood. In high school I was drawn to the arts and social sciences, and I studied Geography as an undergrad at Vassar because it addressed the core questions I had about the world, namely: “How/why does where you are born impact how you live and how you die?” I didn’t know of anyone who was pursuing software engineering in school, or, in fact, as a career. In truth, the only formal computer science education I’ve had was my seventh-grade computer class, where I made a calculator with Visual Basic. I was definitely most excited about styling the calculator (It was purple and yellow, and labeled ~*JeSsIcA’s fUnkEe caLcuLatOr!!*~). Somehow I turned my middle-school hobby of making websites into a career. I freelanced for several years making websites before I landed at the job where I used Optimizely. What’s it like to be a Frontend Engineer at Optimizely? My day-to-day consists of doing code reviews, reviewing engineering design docs from my peers, scoping work, 1:1s, and of course, writing code. Frontend work here ranges from technical infrastructure to product/feature work. Frontend engineers mostly write Javascript, but if we want, we can go into the Python app backend and update or write APIs there as well. I’ve also written code in the Optimizely client codebase (the stuff that gets loaded on our customers’ websites). Most of my recent work correlates to specific features in our product — I recently drove two features: mutually exclusive experiments and multivariate testing. I work closely with Product Managers like Jon and Whelan; Designers; and other Engineers. As a Frontend Engineer, I see myself as the final gatekeeper before a feature reaches an audience. I have to ask myself many questions as I’m developing: Is the code I’m writing performant? Is the UI I’m building or reviewing for a coworker intuitive to use? How can I make this easy for the next person to build on? 
Frontend engineers are distributed across different “squads” or teams, but we convene every two weeks at the Frontend Community of Practice (or Frontend COP), which is led by individual contributors. Anyone can put whatever they want on the meeting agenda. We’ve talked about things like ES6 conversion, security, data validations, tests, code coverage, code organization, and interview questions for incoming candidates. We’re in an interesting moment because we’re in the midst of shepherding a migration from VueJS to ReactJS. We handle application state using NuclearJS, an open-source project first developed here at Optimizely by Jordan Garcia. What I’m learning is that the engineering challenges are not exclusively technical; many of them are interpersonal. For example, how do you sell people on an idea? How do you convince people with competing (and often conflicting) agendas that refactoring is a good thing? What have you been working on lately? The last few quarters, I’ve been midwifing Multivariate Testing to completion. One of my squad’s main goals is to get Optimizely X to feature parity with Optimizely Classic, and Multivariate Testing is one of the remaining pieces. Multivariate Tests allow customers to test multiple different factors on their sites to see what combination of factors has the best outcome. We made full-factorial Multivariate Testing generally available in March, and are about to release partial-factorial testing. I hope that customers love it. I’ve been monitoring the usage dashboard for our product (we use a Chartio built by our inhouse Chartio wiz Jon) and watching more and more customers use it. It’s been really cool to drive Multivariate Testing, especially because I used it in Optimizely’s Classic product, when I was a customer! It’s also been a cross-functional effort, requiring work on the backend, frontend, event infrastructure, QA, and client. By far the best thing about working on this project has been my team. I’m also fresh off of a two-week rotation as a Support Engineer for my squad. As Support Eng, we’re required to basically drop everything that we’re doing and focus exclusively on resolving bugs. I love the adrenaline rush I get when I can reliably reproduce a bug and solve it… but I’m always relieved to come off the rotation because working with a constant sense of urgency (or panic) is exhausting and not sustainable. Outside of the Eng org, I also co-chair the Diversity and Inclusion group here and ran the second iteration of the I/Own It scholarship. Last year I was the Ambassador for the ADEPT organization for Optimizely.org, our company’s social impact arm. I think John Leonard, who manages Optimizely.org, does very impactful work driving volunteer activities and giving us ownership to run programming ourselves. Last year I ran a day where we hosted high school CS students and ran a couple of clothing drives for St. Anthony’s. I also volunteered as a mentor with BUILD; we worked in small groups to help high schoolers build skills in marketing, technology, and entrepreneurship. The programming that we do in partnership with BUILD is run completely voluntarily by my colleagues — it’s really special. What’s unique about engineering at Optimizely? I am surrounded by smart, passionate, collaborative, and wonderful people who genuinely want their peers to succeed. I feel like my peers 110% have my back. We also have a tremendous amount of ownership over our own work. 
I feel supported by my manager, Asa, and am constantly pushed to do things that I’m afraid to do. I was lucky to have the chance to be a tech lead/epic owner on two features and to work on the same team as some of the most generous, fun people here. I hosted a Girl Geek dinner panel. L-R: Kelly, Neha, Elizabeth, Heather, Yours Truly. This isn’t so much related to engineering, but I suppose it’s illustrative of how I’ve been encouraged to grow: I have a terrible fear of public speaking. In seventh grade, everyone had to give speeches in class; as soon as I opened my mouth to deliver my speech on dreams (wherein I kept pronouncing ‘Freud’ Frowd’), the piece of gum that had been marinating in my saliva for an hour fell onto the floor. Knowing about my fear of public speaking, my managers found opportunities for me to lead onboarding sessions for technical new hires, and I’ve had the chance to speak on several diversity-related panels here. Another thing I love is that our leadership is incredibly open — we all have a direct line to Bill, our VP of Engineering; every time I meet with him, I come away with the sense that my opinions matter, and that my feedback will turn into action. The Engineering organization is very democratic. I love the energy that fills the room any time we run an ADEPT-wide retrospective. It’s like a weird family reunion, but with lots of Post-Its and Sharpie fumes, and the knowledge that our feedback will be heard, considered, and acted upon! I like that we’re not dogmatic about our approaches to work, and that we are flexible. Lastly, I love that engineers also have a lot of input into product. We’re encouraged to come up with test ideas and to dogfood our own product. (I hate that term. I prefer “drink our own champagne” even though that sounds very Marie Antoinette-ish). 🌸🌺🌼 The Inaugural Floral Jumpsuit Friday 🌸🌺🌼 Also, I’m part of a group called WomEng. We meet at least once a month for lunch and other activities. A few weeks ago we took a self defense class! I love that we have such varied interests — yoga, running, improv, skiing, sewing, art, pool. What advice do you have for other engineers? Debugging is your best friend. Have compassion! Realize that everyone comes from a different background and perspective, but that ultimately, everyone wants a good outcome at the end of the day. Pay it forward! I had a great experience onboarding an intern from a completely different background. I went to Hack Reactor, so my programming knowledge is almost 100% in JavaScript. I had to onboard an intern who had a CS degree but had never done web development and had never written JavaScript before. I had to figure out how to teach concepts that were still relatively new to me to someone who was coming from a different perspective. I learned a lot from teaching, especially in the process of figuring out how to take a complex idea and synthesize it down to the most important bits so other people can understand it. What do you do outside of work? Outside of work, I like to sew, knit, read, make Kombucha, bike, watch TV shows whose target audience is teenagers (cough Riverdale cough), and write incredibly lame limericks about work. Here are a few: TODAY I LEARNED I TALK TO MYSELF. “Excuse me,” says Matt, from my right, “But I’ve overheard much of your plight.” “I’m talking?” I ask. “Yeah! Throughout your whole task!” (I’d assumed my thoughts were out of ear sight!) POOR YOYO Johanna spent hours on the spreadsheet. She filled it in all nice and neat. 
But no autosave Meant the outcome was grave… Next time she’ll hit Save on repeat! SUPPORT ENG ROTATION The line changes were meant to be few for the bug at the top of the queue. It was very unpleasant to find my assessment was so poor, it made James stew!
https://medium.com/engineers-optimizely/engineer-q-a-jessica-chong-frontend-engineer-a62fa7994ecb
['Jessica Chong']
2019-04-02 21:43:25.703000+00:00
['Interview', 'Front End Development', 'Women In Engineering', 'Software Engineering', 'Engineering Team']
Can You Keep Google Out of Your Gmail?
Gmail is a great service, but not everyone is comfortable giving Google access to their email. Security expert Max Eddy explains what steps will (and won’t) help keep your messages private. By Max Eddy This week, I’m following up on a message from a reader who previously wrote in about how not to get locked out of your accounts when you’re using two-factor authentication, or 2FA. Jeremy from Capetown also asked whether it’s possible to use 2FA to keep Google out of Gmail. What Is Two-Factor Authentication? Two-factor authentication is when you use two authentication factors from a list of a possible three: Something you know, something you have, or something you are. A password, for example, is something you know and a fingerprint is something you are. When you use the two together, you’re using 2FA. In practical terms, 2FA involves an extra step you take after entering your password to absolutely prove you are who you say you are. This often involves using a one-time code generated from an app or sent via SMS, but there are many other options, including tap-to-login apps like Duo or hardware security keys like those from Yubico and other manufacturers. 2FA is good. You should use it. It’s a great way to keep the bad guys out of your accounts, but it doesn’t appear that it will do much to keep out Google. Who Sees What? In general, Google does appear to have access to the content of your emails. Christopher Cuong Nguyen, who lists himself as a former Google employee, wrote on Quora in 2010 that a very small number of employees can access email content, and that a highly regulated path exists for information to be retrieved. Now, this information is almost a decade old, but it does demonstrate that at one point, there were people who could reach into your Gmail account. Google says that as a law-abiding company, it is required to comply with legal requests for information from governments and law enforcement. This can include the contents of your email messages, although Google points out that it strives to narrow the scope of requests it receives and requires a search warrant before handing over your photos, documents, email messages and more. There are other ways Google uses your Gmail information. While the company no longer scans messages to generate custom ad content, it famously did so for years. Even now, Gmail parses your messages enough to pull out and highlight travel information and generate type-ahead suggestions when you write messages. Depending on your level of comfort, this might be totally fine or wildly invasive. Google does appear to encrypt your emails, but primarily while those messages are in transit. Even if those messages are encrypted while at rest on Google’s servers, if Google is managing the encryption keys—and what I have seen implies it does—Google could still conceivably access your messages. 2FA Isn’t the Answer I can see where Jeremy is coming from with his question. Since I control my Yubikey, and Google doesn’t, if I enable 2FA, Google shouldn’t be able to access my Gmail account. Google can, however, effect changes to accounts that are secured with 2FA. Firing up one of my non-work Gmail accounts, I clicked the Forgot My Password option. 
It immediately offered alternate options for sign-in: sending a text to my phone, using my Yubikey, tapping an alert on a verified phone, sending an email to my recovery email address, answering a security question, entering the date I created my Gmail account, and then finally leaving an email address where I could be reached by Google to address my problem directly. If Google can grant me access to my own account without necessarily having my password or second factor, that implies that Google can get into that account itself. Even Google’s Advanced Protection Program for Gmail has a kind of recovery option. When enabled, Advanced Protection requires that you enroll two different hardware security keys: one for login and another as a backup. If you lose both keys, Google says this about regaining control of your Advanced Protection Program account: If you still have access to a logged-in session, you can visit account.google.com and register replacement keys in place of the lost keys. If you have lost both keys and do not have access to a logged-in session, you will need to submit a request to recover your account. It will take a few days for Google to verify it’s you and grant you access to your account. On balance, it seems that 2FA, even the extreme version of it used in Advanced Protection, is not enough to keep Google itself out of your email. For most people, that’s probably a good thing. Email accounts are an incredibly important part of an individual’s security infrastructure. If you lose a password or have to change a password, an email sent to a verified account is usually part of the process. If an attacker gains access to your email account, they could go on to use the account recovery option on websites to gain access to even more accounts. It’s important that users have the means to regain control of their accounts. Truly Private Messages. When we talk about what can and cannot be seen in messaging systems, we’re talking about encryption, not authentication. Most services use encryption at different points in the process of sending and storing a message. Gmail, for example, uses TLS when sending a message to ensure it’s not intercepted. When a messaging service of any kind retains the keys used to encrypt your messages while they’re resting on the server, it’s a safe assumption that the company can access those messages itself. If you want to keep your Gmail account but want to make your messages unreadable, you can encrypt those messages yourself. There are numerous encryption plug-ins for Chrome, or you can configure Thunderbird to encrypt your messages with PGP, a commonly used encryption scheme for email. The more expensive Yubico models can also be configured to spit out your PGP key when needed. I am going to be honest and say that while I am sure some of these work, I have never been able to understand them adequately. The creator of PGP famously said that even he finds the process too convoluted to understand. What might be easier is using encryption tools to encrypt messages and then attaching or pasting the encrypted output into Gmail (a minimal sketch of this approach appears at the end of this article). You’d have to coordinate the decryption process on the other end, but the content of the email would not be readable to Google, or anyone else for that matter. Keybase.io is another service that can encrypt, decrypt, or sign text that can be used in an email. If you absolutely must be sure that no one but you has access to your email, there are a few options. First and foremost would be to ditch Gmail.
ProtonMail, from the same company behind ProtonVPN, is a service intended to respect your privacy, and it does so by encrypting all your email messages, including those you send to and receive from people using other email providers. Here’s how ProtonMail describes its operation: All messages in your ProtonMail inbox are stored end-to-end encrypted. This means we cannot read any of your messages or hand them over to third parties. This includes messages sent to you by non-ProtonMail users, although keep in mind that if an email is sent to you from Gmail, Gmail likely retains a copy of that message as well. Another option is to look beyond email. The late 2010s brought about a glut of over-the-top messaging services, which use your data connection instead of your SMS plan to send messages between devices. In recent years, many of those services have adopted end-to-end encryption, meaning that only you and your intended recipient can read your messages. Signal is the best known, and an excellent app in its own right. WhatsApp adopted the Signal protocol, and now encrypts its messages end to end. Facebook Messenger, somewhat ironically, also uses the Signal protocol for its Secret Conversations mode. Apple’s Messages platform is probably best known for its stickers and Animoji karaoke, but it’s also a remarkably secure messaging system. It’s also notable because, unlike other messaging services, you can send and receive messages on either your phone or your computer without granting Apple access to the content of your messages. When it comes to using Gmail, I recommend people listen to their guts. If you’re deeply worried about your messages being read by humans or bots, try an alternative. If Gmail is really convenient for you, and you like the features it offers, stick with it. Trying to bend Gmail toward being totally secure is definitely possible, but there are so many easier alternatives. Lastly, 2FA is a great solution for keeping the bad guys out of your accounts, and that’s about it. I wouldn’t rely on it to lock out the owner of a service.
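For readers who want to try the encrypt-it-yourself route mentioned above (encrypting a message locally and pasting the ASCII-armored output into Gmail), here is a minimal sketch using GnuPG through the python-gnupg wrapper. It assumes GnuPG is installed and that the recipient’s public key is already in your local keyring; the address shown is hypothetical.

```python
# A minimal sketch of do-it-yourself email encryption with python-gnupg.
# Assumes GnuPG is installed and the recipient's public key has already been
# imported; the recipient address below is hypothetical.
import gnupg

gpg = gnupg.GPG()  # uses your local GnuPG keyring

body = "Meet at 10:00 tomorrow. Gmail only ever sees ciphertext."
encrypted = gpg.encrypt(body, "friend@example.com", always_trust=True)

if encrypted.ok:
    # Paste this ASCII-armored block into the Gmail compose window; only the
    # holder of the matching private key can turn it back into plaintext.
    print(str(encrypted))
else:
    print("Encryption failed:", encrypted.status)
```

The trade-off is the one described above: the recipient has to run the matching decryption step on their end, and Gmail’s own conveniences (search, type-ahead suggestions, highlighted travel details) no longer work on the message body.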
https://medium.com/pcmag-access/can-you-keep-google-out-of-your-gmail-3ec59d0d5e90
[]
2019-05-20 21:24:24.996000+00:00
['Privacy', 'Cybersecurity', 'Security', 'Google', 'Technology']
How to Embrace Middle Age. When I got my first letter from AARP…
Photo by Aron Visuals @ Unsplash.com I stood at my mailbox with the grocery bags hanging on my arm as time seemed to slow. This surreal moment, I presume, was to give my mind a few extra seconds to process what was happening. At first, seeing the unmistakable logo didn’t really affect me until I realized the letter from AARP was addressed to me. “This is it. I’m here. I’m officially old.” There are fewer years in front of me than behind me and the last 10 or so could have me eating from a spoon and making macaroni art. But before allowing myself to spiral down a rabbit hole of depression while contemplating my mortality I thought, “I wonder if there are any cool discounts” and tore open the envelope. There weren’t. In the weeks that followed, my thoughts swirled like creamer in coffee. I wondered whether my life has made any kind of impact on the world. I reflected on the few life accomplishments in my 53 years of existence. I remembered how I much really want to travel. Is the Red Hat Society still around? I figured it was time to get that pink vintage camper I always dreamed about and take those road trips I put off for the last 30 years. Almost instantly visiting my family became a priority. It was also after receiving the letter it became clear the entire world has been noticing my age even though I haven’t. Have I been in denial the entire time? It feels like I woke up from a coma and am now on the other side of middle age. I notice things that I haven’t before. Fine lines aren’t fine lines anymore. They’re wrinkles. People call me ma’am. I’m not as physically strong as I was and now use tools to do things like open pickle jars and remove bottle caps. I threw my shoulder out trying to start the lawnmower last summer. I could go on….and on…and on. In order to make myself feel better, it only made sense to call my sister and invite her to my pity party. She’s 15 years older than me and always knows what to say to help me feel better. After telling her of my life-altering-ah-ha moment, she said, “You think that’s bad? I get mail from crematoriums and funeral homes!” What the heck??!! Her tone turned serious, yet warm and she simply said, “Sissy, make your memories NOW. You’re wondering where 30 years went? You have 10–15 good years left before you REALLY start slowing down. Enjoy the time you have left. Make these years count.” She’s right. This IS the best time of my life. The more I think about it, the more I realize there are a LOT of reasons why being over 50 is fabulous. It’s so freeing. All those ideas and beliefs I thought were so important and that I struggled with just aren’t that important. It didn’t take long to find the first 12 reasons why being over 50 is awesome. 1) Grandchildren! I should’ve had them first. Oh my heart! When you have kids you don’t think it’s possible for your heart to love anymore. Then grandkids happen. 2) I’m at that age where I can date a 38-year-old or his father. Options baby, options! Just not both. That would be weird. 3) Some places will give me the senior discount anyway. Who knew? 4) My minivan days are over. Geez, I hated that thing. Yep, that’s me in the roadster. BTW I’ve also noticed that “oh yeah” nod from other middle-agers in their sports or luxury cars. 5) I’m the kooky/eccentric/hippie mom and I’m okay with that. 6) Mumus are being considered as a wardrobe option. 7) I’m starting to wonder if you can win enough at bingo to make a living. 8) No more hosting giant family holiday dinners. 
That baton has thankfully been passed to my children. As much as I enjoy entertaining, it simply got to be too much. Now I just show up with a dish and help clean up. 9) Vintage camper or converted school bus living is a very real possibility. Tiny home? RV? 10) I can enroll in college for whatever I WANT, without regard to whether or not my degree will help me in my career. Yarn dying, anthropology, hip hop dance, sociology, astrophysics, yak breeding. 11) No more periods! That alone makes aging awesome. 12) Errands consist of booze runs, craft stores, and yard sales not soccer practice, PTA meetings, and dry cleaners. The more I think about it, the more I’m falling in love with my later years. It’s time to start checking off the items on my bucket list. Oh yeah….I AM joining the Red Hat Society.
https://medium.com/crows-feet/embracing-middle-age-celebrating-and-enjoying-the-rest-of-my-life-554c020be1ec
['Angelica Mordant']
2020-03-07 22:30:18.368000+00:00
['Life', 'Life Lessons', 'Positive Thinking', 'Psychology', 'Women']
Robert Service — The Poet of the Yukon
A Short Biography of the Preston born Balladeer, Poet and Novelist Robert with Marlene Dietrich during the filming of The Spoiler, 1942. Image: wikipedia Having written about Jack London recently, and the new film version of his novel, The Call of the Wild, I thought it was time to take another look at Robert Service, the poet, balladeer and novelist who, like London, made a fortune out of the Yukon and Klondike gold rushes by their chosen forms of literature, and not gold prospecting, which they realised early on was a fools errand. Both men were born within two years of each other (1874 and 1876), with Service the elder. Both writers grew up during a great flowering of American literature, with writers such as Frank Norris, Stephen Crane, Theodore Dreiser, Bret Harte, Ambrose Bierce, and Henry James. Although Service read Burns at a young age, each was influenced by Rudyard Kipling early: London discovering him in prison as a disruptive youth, Service by way of boredom working in a Scottish bank, with lunch breaks spent reading. Both men sought adventure and found it. Sadly, London died aged only fifty during WWI, with Service dying aged eighty-four at the height of Rock ’n’ Roll. Jack London’s legacy lives on. Robert Service is almost forgotten. Surely, it must be time for a film about him: it’s a good story. Service at his desk in the early 1930s. Image: Yukon Info Robert Service’s father, also called Robert, was born in Glasgow in 1837 — the year of Queen Victoria’s succession to the throne — where, at the age of fourteen, he became a clerk at the Commercial Bank of Scotland. Eighteen years later, in 1869, with no prospect of ever becoming anything other than a clerk, he decided to move to Lancashire and the prosperous town of Preston, which had, with the end of the American Civil War, regained once more its place at the centre of the cotton spinning and weaving industries. It was also a centre for banking, having created its own bank, The Preston Bank, at 39 Fishergate in 1844. Robert was sure his prospects of promotion within the banking industry would be assured in Preston. He applied to The Preston Bank for the position of clerk and was accepted. The site is today occupied by the NatWest and the Abbey National. Sadly, things were to be no different in Preston than they had been in Glasgow, and by 1873 Robert had given up any idea of promotion, settling down to the daily task of recording figures and serving customers. Although something of a loner Robert senior nevertheless enjoyed the hustle and bustle of Preston and took every chance he could to make his way out into the countryside where he’d walk for miles and perhaps read from a small volume of poetry he often kept in his coat pocket. Emily Parker — Robert Service’s mother, whose family were originally from Liverpool — was born in 1854. Her father, James Parker, had been born in the Lancashire town of Clitheroe soon after the Battle of Waterloo, in 1815. Emily’s mother Ann, and her father were both staunch Wesleyan Methodists, who had met and fallen in love after a service at Clitheroe’s Methodist Chapel. They married in 1835, and moved to Preston in 1838 where James started a wholesale grocery and tea importing business, which had, by 1872, become hugely successful with impressive business premises on Church Street, and a large Georgian mansion in the prestigious Winckley Square. The Parkers’ moved in the very best Preston social circles, with James becoming a Conservative councillor in the 1850s. 
With the sudden death of Ann Parker, in 1872, Emily was at last free to look for a husband, and found him in The Preston Bank. Emily was a pretty girl who had often been to The Preston Bank with her father, and it was probably on these visits that she noticed the rather portly, but rather distinguished looking Robert Service working behind the bank’s counter. Emily set her sights on him and eventually won his heart, resulting in the couple eloping to Gretna Green to be married. Robert Service was born in the Christian Road house, Preston, on January 16th 1874, but it would take his father six weeks to register the boy’s birth. Robert Service would later write of his father that there was an “…other-worldliness and irresponsibility about him…” that brought out both irritation and admiration in Robert Service the poet who would always have a soft spot for his old man. On the 24th of November 1875, Robert’s maternal grandfather, James Parker, died of cancer, leaving, it was estimated by the Preston Guardian, anywhere between £50,000 — £100,000. In fact, according to James Parker’s will, he only left £18,000 of which £4,000 went to his housekeeper, with £2,300 going to Emily, Robert’s mother. On the strength of this legacy the Service family moved from Christian Road to 27 Latham Street, a slightly more gentile address just a tad closer to the prestigious Winckley Square. Robert’s father then gave up his position at the bank setting himself up as an independent insurance agent. As one might imagine things didn’t work out and in the spring of 1878 the family packed their belongings and caught a train to Glasgow, Robert senior’s home town, settling in the select, and elegant, Lansdowne Crescent, where Robert’s father had another go at selling insurance. Elegant it may have been, but 29 Lansdowne Crescent was a very small apartment indeed, with the consequence that, with Emily again pregnant (the Services already had five children) it was decided to off-load the two older boys, Robert and John, onto John Service (their paternal grandfather)and his family, which included three maiden aunts, in the small town of Kilwinning where the boys would remain for the next few years. As Service biographer, James Mackay, writes: “…Kilwinning, a small burgh and market town of some five thousand souls, situated on the right bank of the River Garnock in north Ayrshire, about twenty-four miles south-west of Glasgow. “ An account of Kilwinning in 1851 dismisses it as comprising ‘one street, a few lanes and a square called the Green…” It would seem, even with a good deal of house building in the years from 1851 to 1878, the place still felt “…like a village…” and was full of Service family off-shoots, not least grandfather John Service — who was also postmaster of Kilwinning — and his wife Agnes, who raised the two brothers in the Post Office, as well as looking after the aunts whose company Robert seemed to enjoy. The postmaster often talked of his own grandfather and how he’d been a friend of the poet Robert Burns, which, because of the age difference, seems unlikely, but nevertheless, as these things often do, the stories stuck, and was something of a fortuitous myth for an up and coming poet. Maybe it also encouraged Service to read Burns, and write his first poem — a grace — at the age of six? The postmaster was an easy going sort of chap, until Sunday came along, when he metamorphosed into a very strict adherent to the Sabbath. 
The Post Office was closed, with silence demanded about the house, especially at breakfast. There must be no reading of newspapers or books, and no singing of hymns in the house. The family then waited for the church bells to ring and, as the black coated and frocked worshipers made their way down the long street others would join them in silence from their homes. And although Robert Service, as a child, found the whole thing tiresome, he was already observing people, and their ways. It would all go into his work in later years. When Robert was nine he, with younger brother John, moved back to Glasgow and their parents. The boys attended Hillhead School, leaving aged fifteen. Robert, like his father, found work in a bank. In the late1890s Robert realised banking was not for him and left to find adventure (he was a great reader of adventure stories which must have included those of Jack London in Famous Fantastic Mysteries) in US and Canada. He tried his a hand at various jobs, even working in a bank again, but he couldn’t settle. And like London before him, realised he also wanted to be a writer. But what sort of writer? Like Jack London, Robert Service took to the west coast roads, living rough, roads that eventually took him to the Yukon (a mighty long walk), the furthest point in the north-west Canada. It was to be the making of him, realising quickly during this vagabond period that he could write a much looser, rhyming ballad-style Kiplingesque long form poetry that, during the deep frozen Yukon winters, could keep a bar room full of hard living, hard drinking, gold prospectors entertained. Robert Service was a hit, with his first book of ballads, Songs of a Sourdough, published in 1907 to huge success. By 1908, still in the Yukon surrounded by miners, Service became rather chained to his desk as he got stuck in to his second volume, as his biographer, James Mackay writes: “ With the onset of the winter of 1908 Robert got down to serious writing, producing his second book in four months, working from midnight till three in the morning. Any other hours were impossible because of the rumpus about him. Robert’s colleagues whooped it up every evening, but he would retire to bed at nine and sleep till twelve, then make a pot of strong, black tea and begin to write.” When his publishers received his manuscript they were rather perturbed about certain poems, their violence and vulgarity, and couldn’t promise to publish unless they were removed, at which point Service threatened to take the MSS to another publisher. Eventually a compromise was reached with Briggs the publisher agreeing to pay Service an extra 5% in royalties for the removal of just one offending poem. When Ballads of a Cheechako was published later in 1908 it was another huge success, with Robert receiving a cheque for $3,000 within days of its publication. Robert Service was the best agent he ever had. Thirteen more volumes of ballads and poetry followed, along with six novels, three volumes of non-fiction, several popular songs, numerous articles, plus fifteen collections of his verse. Several of his novels were made into movies, all of which earned him a great deal of money, allowing him to work just four months of the year, with the rest of time spent relaxing, ice skating and bob-sleighing, and travelling. He had achieved the goal he’d set himself after the publication of Sourdough. Service moved to Paris in 1913, living in the Latin Quarter and posing as an artist. 
Then, in June 1913, he married the Parisienne Germaine Bourgoin, daughter of a distillery owner in France. She was thirteen years younger than Robert. With the onset of WWI, Service worked briefly as a war correspondent for the Toronto Star (later Hemingway's paper), then as an ambulance driver with the American Red Cross, as would Hemingway.

During the winter of 1917 Service moved his family to the south of France, and it was in Menton that Doris, one of his twin daughters, caught scarlet fever and died. Fearing that their other twin daughter, Iris, might catch the disease, Robert moved his wife and daughter to their summer home in Lancieux. Then, after hearing about a devastating Zeppelin raid on Canadian troops, Robert offered to help the war effort in any way he could (he was forty-one), resulting in an attachment to the Canadian Expeditionary Force "…with a commission to tour France, reporting back on the activities of the troops." As a result of that attachment, and the brutality of the fighting he witnessed, Service wrote a series of war poems that are amongst the very best of his work.

After the war the Service family lived in the south of France before returning to Paris. And although he lived in Paris at the same time as Hemingway, Scott Fitzgerald, Ezra Pound and James Joyce, he never met them, and because of the generational gap had probably never heard of them. It is certain they had heard of him, and may even have read his work.

In 1920, Robert Service was worth some $600,000 (around $90m today), with a good deal of it invested in stocks and shares. But that same year saw a sudden drop (50%) in share prices that hugely reduced the value of his investments. The poet didn't hesitate and re-invested his money in life annuities with some of the biggest insurance companies in the US. It was a wise move that kept him in comfort for the rest of his life. Had he not done so he would have been wiped out in the crash of 1929.

Of the six novels he wrote, three were thrillers, written in Paris in the 1920s, all of which were turned into silent movies. When wintering in Nice during the twenties, Robert would often be seen dining with Somerset Maugham and H. G. Wells, who were of the same generation and equally rich. Throughout the interwar years Service and his family travelled widely in Europe, often, as with Somerset Maugham, spending many months in the Far East and usually ending up in California, where they mixed with the Hollywood crowd. During WWII, the family settled in the US, with Robert working for the government raising War Bonds. After the war they returned to France, eventually settling back at their home in Lancieux, a home that had been turned into a German gun emplacement during WWII, with Robert's precious library utterly destroyed.

Robert Service died there, from a "…wonky heart…", on September 11th, 1958. Germaine Service survived him by thirty-one years, dying aged one hundred and two on December 26th, 1989, in Monte Carlo. Interestingly, Iris Service married the manager of Lloyd's Bank in Monte Carlo in 1952.

Robert Service is perhaps best known now for "The Shooting of Dan McGrew", which would have been okay with him, as he thought of himself as a writer of verse and not as a poet. He was much more than that. James Mackay's brilliant 1996 biography of Service, Vagabond of Verse, sets the record straight.
https://stevenewmanwriter.medium.com/robert-service-the-poet-of-the-yukon-e48a44113251
['Steve Newman Writer']
2020-02-20 17:39:10.016000+00:00
['Poetry', 'Books', 'Biography', 'Literature', 'History']
In search of better agriculture and food sector outcomes in Punjab Province
* A five-year program seeks to empower small-scale farmers and strengthen markets in Punjab province in Pakistan
* The transformation process is essential to boost sustainable growth and tackle persistent malnutrition in a province where about 40% of workers are employed in agriculture and about 40% of children under age 5 are stunted
* A recent visit to Punjab provides snapshots of the opportunities and challenges involved

What if public expenditure and regulations could be designed to deliver more results-per-rupee in the agriculture and food sector of Punjab province in Pakistan? What if government spending resulted in more poverty reduction, higher resilience, more business opportunities, and better nutrition? What would a smarter food economy look like? Who would benefit and who would stand to lose?

A year and a half after the start of the five-year Punjab Agriculture and Rural Transformation Program, the answers to these questions are still being formulated, as reforms and modernization attempts are made in the fields, market lanes and offices of Punjab, Pakistan's largest province. But one thing is clear: there is appetite for change. Although public support for agriculture totaled about US$1.3 billion in 2017, growth has been low and erratic in the last few years, holding back a sector that provides about 40 percent of employment and contributes more than 20 percent of provincial GDP. Nor is the sector providing adequate nutrition: a survey found 39.2 percent of children under 5 to be stunted in Punjab.

The program, known by its acronym SMART and supported by a World Bank program-for-results loan, seeks to remove some of the obstacles to growth by introducing policy and regulatory changes, and technological innovations. A visit to the province in July 2019 provided multiple snapshots of the opportunities and challenges involved in the transformation process.
https://medium.com/world-of-opportunity/in-search-of-better-agriculture-and-food-sector-outcomes-in-punjab-province-428ffeb261c3
['World Bank']
2019-09-06 15:02:45.177000+00:00
['Health', 'Poverty', 'Agriculture', 'Data', 'Food']
What’s on Mind ?
What's on your mind? They asked
Different worlds
Or happy thoughts.
There is a dark tunnel
Which leads to hell
And this is all
Left to imagine.
A slideshow flash before my eyes
Of people I love and hate
The happy thoughts of future
Are far behind
Fear takes it all away
Does it happen to you too?
The trembles pass down my spine
And the hair stands erect at their end
Negative thoughts have filled me up
And the tunnel will take me to hell.
On the other side, I saw a shrub blooming
Among the dark clouds
It gave courage and strength
Maybe that is all what I want.
https://medium.com/poets-unlimited/whats-on-mind-54aa222641eb
['Nalini Gupta']
2017-09-21 21:17:04.661000+00:00
['Deep Thoughts', 'Poetry', 'Fiction', 'Writing', 'Poetry On Medium']
Why I Stopped Forgiving People… and Maybe You Should, Too
Why I Stopped Forgiving People… and Maybe You Should, Too

Forgiveness was turning me into a chump

Image by Timisu

Forgive and forget, right? I used to think that. I don't anymore.

Now look, I know I'm up against some real heavy hitters here when I say maybe you should stop forgiving people. After all, isn't forgiveness a cornerstone of the major religions? The Tanakh says that one who forgives an insult keeps a friend (Proverbs 17:9). The Christian New Testament says to forgive, if you have anything against anyone (Mark 11:25). The Qur'an says that one who forgives shall have reward with God (42.40). The Vedas say that forgiveness is the greatest strength (Mahābhārata 5.33.48).

Well, I tried that. For decades, I forgave people who did me wrong. And you know what? It made me feel good about myself. But then one day, I woke up. And I realized that forgiving people was turning me into a chump.

No Good Deed Goes Unpunished

What changed my mind was Frankie. We met in high school, and despite our differences became friends. But at some point, he started going down roads I didn't particularly want to be on. It wasn't just all the New Age stuff he and his girlfriend Hailey were into. It was their physically abusive relationship, which he made no apologies for, along with his white suburban Marxism and what I considered an abuse of psychedelics.

I reached my breaking point one night shortly after his marriage (not to Hailey), when he invited me up to his new place, saying we'd go out and shoot some pool. Turns out, he hadn't bothered to tell his new wife about these plans, and I found myself cooling my heels out in the hallway while overhearing a knock-down-drag-out argument. Eventually he walked out, cool as a cucumber, and drove us to a pool hall somewhere in Atlanta. Almost as soon as we arrived he excused himself and slipped into the back to score some acid. I ended up playing 9-ball with total strangers for over an hour. I'd just had the bartender call me a cab, hoping the driver could find Frankie's place from my memory of the directions from my house to his, which were on a scrap of paper in my car (this was before cell phones and GPS), when Frankie reappeared, all smiles. His guy didn't have the stuff, so they'd had to go get it.

After that night, I didn't see Frankie for two or three years. Until the day he showed up on my front porch. His wife was divorcing him. All his stuff was in his car. It wasn't much stuff. He'd sold most of it, including his guitars. "What'd you do," I asked, "have a yard sale?" Frankie let out a quick "Ha!" Apparently, he had other venues for selling things.

So he stayed a couple of weeks. Back then, I was using the "envelope method" of budgeting, and a couple of times I could swear I'd had more cash in this or that envelope than was now there. But since I didn't write down the running amounts, I couldn't be sure. I told myself I was being paranoid, misjudging my friend, letting old scores make me suspicious. Then again, why was it that I, of all people, was seemingly the only option he had to turn to?

A few days after he split, I went to get my checkbook out of my desk. And didn't see it. Uh-oh. Rummaging around, I found it, and felt ashamed of myself. Until I thought, "Wait, where's the watch?" My grandfather's silver pocket watch. (Yes, like in Pulp Fiction — except my grandfather carried his in his vest and never took it to war.) I knew the watch was in that drawer. Except now, it wasn't. I searched the house. No dice. It was gone.
I had no idea where Frankie had lit out to. And I've never seen him since.

The Trap of Forgiveness

It took a while, but I forgave Frankie. For the night at the pool hall, the cash in the envelopes, even the watch, which was irreplaceable. Being angry wasn't doing me any good. And the odds of getting restitution were pretty much zero. It felt like the right thing to do, like it made me a better person.

Then came the day, many years later, when I decided to finally go through all my old photographs, put dates and names on the backs, organize them, and toss out the ones I didn't want to keep. I pulled out all the photo albums and Fotomat envelopes. And there, in the back of the drawer, was the watch.

That's when it hit me. Forgiveness had made me a chump. I realized then that forgiving Frankie hadn't really made me a better person. It had only made me feel like one. But not just better than who I'd been before. Better than Frankie. It was a way of permanently casting Frankie as the offender and myself as the victim. I got to be blameless, and he got to be the villain.

Truth was, if I was honest with myself, I'd had my own problems in relationships. I wasn't physically abusive, but I knew how to turn the emotional screws when I wanted to. And come to think of it, I'd pulled a vanishing act on very close friends a couple of times myself, and for no more noble reasons than he had. And, too, I'd taken my own flights of fantasy into strange philosophies and mystical nonsense. And I could be a downright arrogant sumbich. I was no better than him. After all, we became friends for a reason, didn't we? But by "forgiving" him, I got to pretend to myself that I was. That he was down there and I was up here. My "forgiveness" had never been about him. It had been about me the whole time. It was a shaming moment. And it changed my attitude.

If Not Forgiveness, What?

There are lots of stories told about the Buddha to illustrate his teachings. In one of them, a man decides to test the Buddha by insulting him. If the Buddha reacted with anger, he would show himself to be a fraud. If he did nothing, he would reveal himself as a coward. So the man found the Buddha sitting with his disciples, walked up to him, and spat right in his face. The Buddha wiped off the spittle with the hem of his garment, looked up at the man, and said, "What now? What else do you have to say?" The man was not prepared for this question. He turned and left in silence and went home. And that night, he could not sleep for shame at what he had done. The next day, he again found the Buddha sitting with his disciples, and he bowed to him and said, "Sir, please forgive me for what I did to you yesterday." "I'm afraid that's not possible," responded the Buddha. "I cannot forgive you. Because I have no grudge against you. Please, sit down, and let us talk of something else."

Returning to Here and Now

There is a technique in Buddhist counseling of asking the person seeking help to focus on what is going on at the moment. If a person is angry about an argument they have had with their spouse, they might be asked, "So where is your spouse right now?" And then they might be asked, "Where is your argument?" The argument no longer exists. What exists is simply who we are at this moment. We do not need to "let go of the argument" because there is no argument to let go of. There is only who we are now, where we are now. Once we see this, we can get to the truly important question: What next? What do I choose to do at this time?
What karma, what result, do I intend to create?

Doing this, we can escape the trap of forgiveness, the self-serving urge to cast ourselves as the victim and the other as the offender, ourselves as the good guy and the other as the bad guy. We can recognize our responsibility to decide how we are going to act, and let go of our desire to protect our own ego. And believe it or not, we can do this for offenses a lot more heinous than petty larceny (real or imagined). Just ask the Vietnamese monk Thích Nhất Hạnh.

For me, it has turned out to be the key to getting past things I am not yet ready to write about, and maybe never will be. I don't have to carry them anymore.

I have no idea where Frankie is today. I'll probably never see him again. But wherever you are, Frankie, I owe you one.
https://medium.com/illumination-curated/why-i-stopped-forgiving-people-and-maybe-you-should-too-37fc61e00a22
['Paul Thomas Zenki']
2020-11-22 02:20:56.949000+00:00
['Psychology', 'Personal Growth', 'Zen', 'Buddhism', 'Forgiveness']
What if you could draw your thoughts in technical interviews?
A lot of people like drawing out their approach as they work through coding challenges, and that usually involves using pen and paper or your iPad. Even some of the best tech interview products like AlgoExpert (which I highly recommend using) end up showing you how to work through problems by drawing on a (digital) whiteboard. This got me thinking: why don't we have a single place to code, draw, and video chat with another person? I decided to build that. Just share your sandbox link!

Interview Sandbox is an app that came out of the desire for a place to practice, pencil, and perform without having to have a split screen, or to spend time drawing on a piece of paper and showing your interviewer your thoughts. The app is pretty simple: you create your sandbox (no login needed!) and share your link with anyone else to get them on the same page. They can see your code in real time, see your drawings in real time, and you can chat with them too. After you're done, just save the link for future reference. That's it.

The app is currently in v1, so there may be some quirks and bugs, but I will be ironing those out and it should get better with each successive release! Oh, and if you have any feedback on how it can get better, I would love to hear it. Just leave a comment.
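The post doesn't describe how the real-time syncing is actually built, but the core idea (everyone who opens the same sandbox link sees the same code, drawings, and chat) maps naturally onto a small WebSocket relay. Here is a minimal sketch in TypeScript using the ws library; the room IDs taken from the link, the message shape, and the port are my own assumptions for illustration, not Interview Sandbox's real API.

// Minimal sketch of a real-time sandbox relay (assumed architecture, not the
// app's actual implementation). Every client that opens the same sandbox link
// joins a "room"; any code edit, drawing stroke, or chat line is relayed to
// the other clients in that room.
import { WebSocketServer, WebSocket } from "ws";

type SandboxMessage = {
  kind: "code" | "draw" | "chat"; // which pane the update belongs to
  payload: string;                // code text, serialized stroke, or chat line
};

const rooms = new Map<string, Set<WebSocket>>();
const server = new WebSocketServer({ port: 8080 }); // port is an assumption

server.on("connection", (socket, request) => {
  // Assume the sandbox ID is the last path segment of the shared link,
  // e.g. /sandbox/abc123 (no login required; the link itself is the identity).
  const room = request.url?.split("/").filter(Boolean).pop() ?? "lobby";
  if (!rooms.has(room)) rooms.set(room, new Set());
  rooms.get(room)!.add(socket);

  socket.on("message", (raw) => {
    const msg: SandboxMessage = JSON.parse(raw.toString());
    // Fan the update out to everyone else viewing the same sandbox,
    // so code, drawings, and chat stay in sync in real time.
    for (const peer of rooms.get(room)!) {
      if (peer !== socket && peer.readyState === WebSocket.OPEN) {
        peer.send(JSON.stringify(msg));
      }
    }
  });

  socket.on("close", () => rooms.get(room)?.delete(socket));
});

On the client side, each pane (editor, canvas, chat box) would send its updates as one of these message kinds and apply whatever it receives from peers; "saving the link for future reference" would additionally require persisting the room's latest state on the server, which this sketch leaves out.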
https://medium.com/hackernoon/what-if-you-could-draw-your-thoughts-in-technical-interviews-47e1ff87bf33
['Sagar Desai']
2020-05-27 21:08:54.178000+00:00
['Technical Interview', 'Software Engineering', 'Software Development', 'Interview', 'Coding']