Getting Into Games Writing
If I’ve pointed you at this post instead of writing you a long response to an email, I apologise, but these enquiries come up so often that I’ve decided to write my answer here. What’s the career path to become a games writer? If jobs are advertised, they’re normally for experienced writers — so how do you get that experience if there aren’t any jobs? The first and most obvious answer is that there is no set path into games writing. Pretty much everybody I know in the industry came into it in a different way. There’s no real point in recounting my own experience, because you’ll never recreate the same bizarre collection of circumstances. However, I have some pointers for you. If you follow the suggestions below you’ll be on the right wavelength for us at Talespinners, and I’d be very surprised if the same skills didn’t open doors at other games companies.

1. Read

It sounds obvious, but to be a writer you need a good command of writing. You need to know sentence structure, mood, flow, turn of phrase, metaphor… and yes, all these things can be learned. But they really need to be internalised. By far the best way I know of doing this is to read, and read avidly, and read material which is well written. Whether you know it or not, you absorb the styles that you read until they become second nature. It becomes obvious to you when a sentence simply ‘feels wrong’ — clunky, rough, or forced. Take a look at this piece from Gary Provost for a good example of what I mean. Then, of course, as a professional writer it’s a good idea to learn the underlying reasons why those sentences sound wrong to you. But without that initial handle on how a sentence should feel, you’ll struggle. You’re going to have to master many different writing disciplines, so don’t just stick to prose. Absorb dialogue — read screenplays, watch movies, read plays.
Absorb rhyme, song, comic books, Twitter conversations, news articles, radio broadcasts, fact and fiction — whatever you can get your hands on. The second reason for reading is to absorb information. Information about people, things, times and places. Get to grips with genres, moods and feelings. Seek out good examples of drama. The more you know, the more you have to bring to any project.

2. Play

Again, it sounds obvious, but there are many good writers out there who have absolutely no feel for computer games; in particular, they have a hard time grasping the non-linearity of story. Consider player agency, the fact that AI might trigger dialogue, that there are levels of success, that the player may be customisable, and that in many cases you can’t rely on one thing happening before another. If you’re already a gamer, you’ve got a leg up, because you’ll have absorbed examples of how story is delivered in a game and where it works versus where it doesn’t. Once you’ve got that grasp, dive deeper, if you can, into game analysis — there are plenty of articles by games writers on sites like Gamasutra; there are books, there are online forums, and there are conferences. Oh, and make sure you play a variety of games — indie, triple-A, action, adventure, strategy, puzzlers, anything and everything. Don’t limit yourself to just the genre of game you normally enjoy — you might be surprised! Take notes on what works, what you think could be done better, and how that might be achieved. Don’t just play; learn the medium. And don’t just stick to computer games. Some of the best-known games writers and designers came from a background in tabletop roleplaying. Acting as a GM for a gaming group gives you a solid grounding in how to adapt to a shifting narrative that’s dependent on the actions of your players and the give-and-take between you.

3. Write

Well, yes, of course. Anyone trying to get anywhere knows you need to write, and write, and keep writing. Practice makes perfect.
But, for games, perhaps you need to think about what you’re practising. Prose is the first thing. You will be writing text that will appear on screen. But prose is only one of a range of skills that a games writer needs to know how to use. Let’s go through a quick, non-exhaustive list of other skills:

Dialogue: characters will be speaking to each other; your main character may have lines; there will be narration, cutscenes, drama, conflict, emotion. Screenplay format is a useful thing to get under your belt for starters, but learning to write dialogue is about getting a handle on character voices and the dramatic ebb and flow of a scene.

Character Creation: you’ll be writing character briefs for the game team; these words may never be seen by the audience of the game itself, but you need to be able to create a compelling, deep character based on input from the game designers, and to convey it to every other department working on the game. You may also have to write briefs for voice casting and voice actors.

World Building: as with character briefs, the players may never see these words, but you’re likely to be highly involved in creating the history and culture of the world the game is set in. The whole game will be built on this base; the story will arise from it. You’ll need to care about factions, conflict, drama, integrating game design elements into the game’s world, naming things, mood, and atmosphere. Again, this will need to be conveyed to the rest of the team, and there’ll be a lot of give and take between you and them. Often it’s up to you to ensure it all makes sense.

Story Building: plotting out the game’s storyline, its highs and lows, its branches, twists and turns. Much of this will come from the design team, but it’s up to you to make it coherent and communicate it to the rest of your team.
Narrative Design: some writing jobs require the writer’s involvement here, some don’t, but it’s critical to have a grasp of how your narrative will be delivered to the player, even if it’s the design department working that out.

Many of these skills are completely separate from your ability to produce a well-turned phrase. They require a deep knowledge of story, and a good understanding of everything from drama to politics — or at the very least a willingness to learn and a handle on how to research. Often similar skills are drawn on for creating novels, but in games the critical thing to remember is that you’re part of a team. Not only do you need to accommodate their input, and to collaborate with them throughout the project, but you’re also likely to be the person in charge of communicating all these story elements to the rest of the team while keeping everything coherent. Not all these skills will be required by all games companies. Not all writers have these skills. However, having a good grasp of them will be helpful, whatever writing role you end up in.

4. Write Games

The tools are out there for you to make your own games. To create them, and publish them, without any input from anyone else. Take a look at Twine, ChoiceScript, and Ink — all are free and have minimal system requirements. They’ll allow you to create scenes in prose and link them through choices in the manner of Choose Your Own Adventure books. Some even let you publish commercially. Even if they never see publication, these projects make excellent portfolio pieces and can show you have an understanding of cause, effect, branching and choices. Then collaborate. Get out there and find teams to work with. Sure, you may not find a paid position — but there are many people in other disciplines who are in the same boat as you, teams of amateur games creators working on mods and full games. My general rule is that you shouldn’t work for free if anyone else is getting paid; other than that, dive in.
Collaboration is utterly critical for most games writing roles, and this is the best way to expand your portfolio and show that you can handle working with others. If you’re struggling to find people, get yourself down to a local game jam. I’ve mentioned portfolios. Create one, put it online, include examples of games you’ve created and collaborated on, and make sure you show off some of the skills I discussed in part 3. For some guidance on where to start with assembling a games writing portfolio, The Game Narrative Toolbox is a great resource.

5. Get Out and Meet People

If you’re anything like me — and many writers and game developers are — then this is really tough. Going out, socialising with strangers, finding others in the industry. It is tough. But don’t treat it as the dreaded ‘networking’ — treat it as going out to chat with other people who broadly have the same hobbies and same cultural references as yourself. You love games, they love games, and you’ve already got something in common. And don’t use this just to sell yourself as a writer. Talk to people, get to know them, get to like them. When it comes to putting together a team to work on a game, someone you know, like, and trust will always be further up your list than a cold call and a CV, so long as you have the skills to back it up. If an opportunity arises, great, but if one doesn’t — you’ve made some new friends and you haven’t lost anything. A small note of caution — don’t harshly criticise people in the industry, especially if you’re new. You won’t impress anyone. It’s a small industry, and you’ll be burning your bridges. If you really have to be critical, be polite and constructive. You have no idea what was happening in the background while certain choices were being made on a particular game — you’ll fairly quickly learn that game development is about compromise. Most cities now have games social events. Check Meetup if you’re not sure.
It’s also worth attending BAFTA, TIGA or UKIE talks and events if you’re in the UK, or IGDA events and mixers in other countries. Check out the IGDA Writers’ SIG. And go along to conferences, listen to people in the industry talk, and get to know the people you may one day be working with. Events and talks are great ice-breakers — that stranger in the bar has just seen the same thing as you, so you have immediate common ground.

6. Carry On Writing Games

Congratulations, you are now a games writer. :-) Now all you need to do is get paid for it…
https://wildwinter.medium.com/getting-into-games-writing-64b23a41f51b
['Ian Thomas']
2019-09-24 14:32:01.365000+00:00
['Videogames', 'Game Development', 'Games Writing', 'Recruitment', 'Writing']
Playground Driven Development
How to prototype your app faster than light speed.

Swift Playgrounds are a wonderful thing. They are fast. They let you quickly try out your code. They let you check your UI in a matter of seconds. They can really speed up your development cycle. There is a brief video of how they can help while prototyping a cell for a collection view, for example. The video runs at 10x; the normal-speed duration is around 3 minutes. The goal of the task is to add some dark shadows behind the image. The main gain, in this case, is that you can render your view immediately in the playground. You can change small bits and check how they look. Once you are satisfied with the changes, you can take the playground file and move it to the right place. However, how can we start using playgrounds in our development cycle now? Chances are that you are working on a complex application. The application can be composed of different frameworks, and potentially it also uses some Pods, leveraging CocoaPods’ dependency management. This article guides you through adding a playground to your app so that you can prototype your UI and your algorithms faster than ever! Note: if you are already an expert on Xcode and dependency management, you can skip directly to the “Adding the playground” section of this article.

Creating the project

Let’s start with the basics. We have to create an iOS application and, to keep it simple, we can:

Open Xcode
Click on File > New > Project
Select App from the iOS templates
Give it a name (I chose PDD)
Choose a location on disk and click Create

Nothing fancy here. Just the usual stuff to create a new project. Given that we don’t want to actually use storyboards, let’s clean up the template a bit:

Delete the Main.storyboard file
Add the following snippet to SceneDelegate.swift
Open the Info.plist file and remove the two highlighted entries

Hit the Play button and make sure that the app builds and runs!
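The SceneDelegate snippet referenced above didn’t survive the transfer into this page. As a sketch of what such a snippet typically looks like (assuming the standard template names), the usual approach is to create the window programmatically now that the storyboard is gone:

```swift
import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        // Create the window in code, since Main.storyboard has been deleted.
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = ViewController()
        window.makeKeyAndVisible()
        self.window = window
    }
}
```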
Adding a framework

We all like modular applications. They have several pros:

They build faster: unchanged modules are not rebuilt.
They encapsulate the logic better: the module exposes only what it wants and requires only what it needs.
They foster collaboration: many people can work on different modules, and the final integration is usually easier.
They push code reuse: if there is a piece of code that is generic enough and that is used in several places, it’s a good candidate for a module.

However, iOS apps in Xcode do not define a target that can be imported by other modules. This is true also for playgrounds: if we want to prototype something in our app, we need it to live in a framework. And that’s another reason why we need to create a framework! So, let’s create a framework where we can implement our UI:

In Xcode, click on File > New > Target
Scroll down until you find the Framework template
Select the Framework template and click Next
Give it a name (I called it UI)
Press Finish

Hint: if you have to create a big app, consider creating a separate module for each UX flow or feature… or even for each screen! It really depends on how much you want to push the modularization. For the sake of this example, let’s take the ViewController.swift file and move it to the new framework. If we try to build the project now, we will run into a bunch of failures. To fix them:

Open the SceneDelegate.swift file and add import UI
Open the ViewController.swift file and update the code with the following snippet:

In this snippet, we just make the ViewController public so that we can create it from the app, and we add the required initializers. Now, we can hit the Play button again and see that the app runs as before.
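The ViewController snippet itself is missing here; a minimal sketch of what the description above implies (a public class with public, required initializers) might look like this:

```swift
import UIKit

public class ViewController: UIViewController {
    // Public initializers so the app target can instantiate
    // this class from outside the UI framework.
    public init() {
        super.init(nibName: nil, bundle: nil)
    }

    public required init?(coder: NSCoder) {
        super.init(coder: coder)
    }

    public override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .white
    }
}
```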
Adding some pods

We don’t want to reinvent the wheel. There are many very good libraries out there that can solve common problems in our daily routine. CocoaPods is one of the most famous dependency managers, and also the one we use at Bending Spoons every day. Let’s try to integrate a library into our project:

In the root folder of our project, create a file called Podfile
Copy and paste the following snippet
In the terminal, navigate to your project root folder, run the command pod install, and wait for the process to finish

Snippet of the Podfile to install the PinLayout dependency.

This process will create an Xcode workspace, which has the .xcworkspace extension. This environment is slightly different from the standard project: it basically collects different .xcodeproj files so that they can work together. Open PDD.xcworkspace and you will see the new structure in the project navigator. Now we can access the PinLayout pod and use it to lay out elements of our views!

Adding the playground

Given that we want to be really fast when iterating on the UI, we want to add a playground to the project. To start with, let’s create it in the standard way:

In Xcode, click on File > New > Playground
Choose the Single View template and press Next
Navigate to the same location where you created your project
Give a name to the playground (I decided to call it PDD)
At the bottom of the dialog, select your workspace for both the Add to: and the Group: options
Press Create

Now Xcode has created a new playground we can play with. The playground also supports the liveView feature, which helps us prototype everything fast. However, we have now modified our workspace. What happens if we have to add a new Pod to the project? The pod install command recreates the .xcworkspace and, once it is reopened, the playground is gone!
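The Podfile snippet from the “Adding some pods” steps above isn’t shown in this version of the article. A minimal Podfile that installs PinLayout might look like this (target names assumed from the example project; it’s a sketch, not the author’s exact file):

```ruby
platform :ios, '13.0'
use_frameworks!

# Make PinLayout available both to the app and to the UI framework.
target 'PDD' do
  pod 'PinLayout'
end

target 'UI' do
  pod 'PinLayout'
end
```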
Adding the playground back automatically

If we explore what a .xcworkspace file is, we discover that it’s basically a folder which, among other things, contains a file called contents.xcworkspacedata. This file defines the structure of the workspace. A workspace newly created by pod install only contains references to the projects, while a workspace modified by adding a playground has a contents.xcworkspacedata with an extra entry for the playground. Thus, to automatically add our playground to the workspace, we can cook up a little script (in Swift, of course) that we run just after the pod install command. To run it, you just have to type swift integrate-playground.swift in the terminal and let the magic happen. The script basically uses the file manager to retrieve the playground and the .xcworkspace, then adds the line that includes the playground in the workspace. And that’s it: from now on, we will always have our playground available to prototype our views and algorithms!

Prototyping a view

At this point, we can use the playground to prototype whatever we want. For example, we can add a snippet that:

Imports PinLayout and UI
Adds the code for a view which uses PinLayout to lay out a label
Creates the ViewController from the UI module
Creates the MyView and replaces the ViewController’s view
Sets it as the new liveView

Finally, we can run the playground. It builds and runs much faster than the full project. We won’t even have to wait for the simulator to boot, install the app, and run it. We can iterate on the UI until it’s final. Once we are happy, we can just copy the code we wrote into a proper file of the proper module. And that’s it! Fast prototyping in a nutshell!

Final result of the playground prototype.

Conclusion

In this article, we walked through the creation of a project with several dependencies: custom frameworks, pods, and playgrounds.
The ideas explained here also work with the Swift Package Manager and other dependency management systems. We also automated the inclusion of the playground in the workspace: this way, we won’t have to add it back manually every time the workspace is re-created. At Bending Spoons we do not create our projects manually; we leverage different tools that generate the xcodeproj from templates, and Xcodegen is one of those tools. The ideas expressed in this article apply to projects created with these tools as well. Playgrounds are truly powerful. I hope this article can speed up the development of your next app.
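The integrate-playground.swift script described in the “Adding the playground back automatically” section didn’t survive into this version of the article. As a rough sketch under assumed file names (PDD.xcworkspace, PDD.playground), such a script only needs to read contents.xcworkspacedata and insert a FileRef for the playground when it’s missing:

```swift
import Foundation

// Sketch: re-add the playground reference after `pod install`
// has regenerated the workspace file.
let workspacePath = "PDD.xcworkspace/contents.xcworkspacedata"
let playgroundRef = "   <FileRef location=\"group:PDD.playground\"></FileRef>"

var contents = try String(contentsOfFile: workspacePath, encoding: .utf8)

// Only insert the reference if it is not already there.
if !contents.contains("PDD.playground") {
    contents = contents.replacingOccurrences(
        of: "<Workspace version=\"1.0\">",
        with: "<Workspace version=\"1.0\">\n" + playgroundRef
    )
    try contents.write(toFile: workspacePath, atomically: true, encoding: .utf8)
}
```

The workspace file is plain XML (a `<Workspace>` element containing one `<FileRef>` per project), which is why simple string manipulation is enough here.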
https://riccardocipolleschi.medium.com/playground-driven-development-d71204f926eb
['Riccardo Cipolleschi']
2020-09-08 11:08:53.776000+00:00
['Mobile App Development', 'Swift Programming', 'App Development', 'Agile', 'Programming']
Putting AI in the Hands of Truckers
by Michael Watson

“Getting AI into the hands of those on the frontlines can have a big impact.” As companies build out their AI (Artificial Intelligence) capabilities, one of the exciting trends is to bring the algorithms close to the front lines and decision makers. One great example of this is from Landstar System Inc. Landstar is a unique transportation firm. They are asset-light and work with a network of independent truck drivers. They call the drivers BCOs — Business Capacity Owners. The BCOs are completely independent businesses and decide for themselves what to haul, when to haul, and where to haul. A recent press release discusses their new app for the BCOs that puts the power of AI into their hands. The app allows the BCOs to quickly find multi-leg, multi-week runs. The app works by having the driver simply enter their current origin, the final spot they want to end up, and how long they want to be hauling loads. The AI behind the scenes* (exclusive Landstar technology called Landstar Maximizer®) then taps into the load board and suggests the best possible multi-leg loads to the BCO. The best possible loads can include options that bring the most possible revenue to the BCOs, or the best combination of revenue and desired destinations. Landstar has 9,600 BCOs on the road. The app, which rolled out just this year, has already seen 70% of Landstar’s BCOs download and use it. The press release notes that the BCOs now spend less time searching through the load boards for good loads — this means more time for productive work. Also, because of the tool’s embedded AI, the loads that are returned will meet the needs of the BCOs much better. When organizations use AI to solve the right problems and put that AI into easy-to-use apps for those making decisions on the front line, the impact can be big. *Disclosure: My company, Opex Analytics, worked with Landstar on this technology.
This article was originally published on July 24, 2018 in Supply Chain Digest.
https://medium.com/opex-analytics/putting-ai-in-the-hands-of-truckers-4c57f3b804be
['Opex Analytics']
2019-07-25 14:28:55.262000+00:00
['Business Strategy', 'Analytics', 'Apps', 'Artificial Intelligence', 'Transportation']
Is the Future Controllable? If We Could, Should We?
Human progress often takes place when an individual genius offers an innovation that transforms everything. We benefit from an ancient inventor who decided that carrying things would be easier with wheels. The same goes for the domesticators of fire. Ask anyone what inventions are society-changing and they will rattle off electricity, the light bulb, vaccines, automobiles, radio, TV, and the Internet. We adopt these new technologies because they are demonstrably superior to what we did before. The problem is that there is an unacknowledged period where the innovation begins to show its flaws or, more worrisome, to have negative effects. Let’s look at some quick examples. Automobiles allow us to easily travel long distances in physical comfort. But they also produce suburban sprawl and spew carbon into the atmosphere, contributing to global warming. They also support sedentary lifestyles that can lead to maladies like diabetes and heart disease. Taking another example, the Internet gave us long-distance, instantaneous communications, but also (in ascending order of awfulness) cute cat videos, email spam, online trolls, cyberstalking, and manipulated elections. Are these just unanticipated consequences, or could they have been mitigated or managed in some way? As a species, we need to find ways to think through the future of innovation without having to fix the problems that the future has thrust upon us. Sadly, I do not have an answer, but perhaps a way to talk about it. As we look at the rise of monopolies and oligopolies in technology — in search, in online retail, in social media — what we see are products that began as genuine innovations. But over time their adoption and business growth tends to crowd out the next generation of innovations.
Granted, there are always potential disruptive innovators who come at the problem from a radically different angle (thank you, Clayton Christensen), but quite often they are adopted, bought out, or destroyed by the existing paradigm. Here’s a thought experiment: when was the last time you produced a business document using word-processing software not created by a multi-billion-dollar tech firm? I use Microsoft Word — an application first created in 1981. (Does anyone remember WordStar?) We have let technology companies grow to become monopolies partly because they are beneficial, like Amazon, and partly because they are predatory, like Amazon. Let us not just accept these realities without asking ourselves whether these business concentrations have costs to society, justice, and even freedom. Amazon may offer incredible ease and convenience (our neighbors thought we ran an online business because of all the delivery trucks that came to our house in the run-up to Christmas — they were all Amazon orders; yes, we have a problem), but the company also pushes its workers hard to keep the line moving, sometimes to the detriment of comfort and even safety. Jeff Bezos may be the richest man in the world, but his wealth depends on a lot of other people, and he does not share very well. (He needs to read the wisdom of Robert Fulghum — he may be able to find it on Amazon.) There is a remedy for monopoly power: the Sherman Antitrust Act, a 19th-century law that has been used to break up companies from Standard Oil to AT&T. The problem is that it is applied after the fact — after the damage to competition and the free market has been done. Granted, no one knew that John D.
Rockefeller was going to become a monopolist when he started Standard Oil in 1870, but by 1880 the New York World observed that it was “the most cruel, impudent, pitiless, and grasping monopoly that ever fastened upon a country.” (Wikipedia) It remained so until 1911, when it was broken up. So, legally, we are running behind the curve. At least until now, maybe. One of the more recent and profoundly powerful innovations, CRISPR (short for Clustered Regularly Interspaced Short Palindromic Repeats) technology, offers a rapid and relatively easy method to change our genes — genetic engineering, in other words. Is this a good innovation? Or a harbinger of a dystopian future? It could be used to address the genes that cause sickle cell disease or cystic fibrosis or Duchenne muscular dystrophy, forever. Not only may the recipient of the gene therapy be spared these maladies, but so may their children and their children’s children. It sounds great. But there are both costs and risks. Editing genes is still experimental, meaning it is very expensive even now and not assured of success. There’s still much we do not know. Its expense puts it in the class of luxury goods, only available to those with the money to pay for the procedure. Moreover, so far, it’s not legal. That’s actually where it becomes interesting for the future of the procedure, but also for the future of innovation. The newness of this innovation buys us some time to consider the costs and benefits of the approach. Granted, a Chinese researcher claimed to use the technique to provide immunity to HIV in 2018, but he was sentenced to prison for doing so. The rest of society still has time to take a breath and consider the ramifications of genetic engineering. If we proceed, there are methods for research that allow us to test safety and efficacy before broad application. We also have time to wrestle with the thornier issues of the ethics of such manipulation and the implications of its cost.
Yet, this is a biotechnology issue before it becomes a monopoly issue. What can we do to address the problems of the next tech innovation that breeds a new generation of monopoly? Perhaps we could start building safeguards into patents to allow a pause in implementation before widespread application and growth, or subject such patent applications to a societal impact assessment, like the environmental impact studies we require before making irreversible changes. Yes, it would slow down the pace of innovation, but what if we had had such a consideration before the Internet was loosed on the world? The Internet had no security layer and was wide open to hacks that compromise systems. Would we be better off if there had been a short pause to reflect on the future? If we had brought in a technology Devil’s Advocate? There’s no perfect answer, but the answers we have been getting over the last generation or so don’t look so good from where we stand now and as we turn our gaze forward.
https://medium.com/the-innovation/is-the-future-controllable-if-we-could-should-we-3d331f644d42
['David Potenziani']
2020-07-27 06:16:14.610000+00:00
['Monopoly', 'Social Impact', 'Technology', 'Innovation', 'Biotechnology']
Kaggle CareerCon 2019 competition report (#121 of 1449, top 9%)
This was my first Kaggle competition and I’m very happy with the result I got. I devoted many hours to analyzing data, writing code and building my data science template, which helped me achieve a great result. During the last days of the competition I was around position 500; however, when the test set was switched from the public to the private one, I jumped a lot of positions to #121, which meant finishing in the top 9% of the competition ranking. I started writing some reports during the competition and posting them on Medium (1), (2). Those described a very primitive solution which didn’t get a good score. After the second post, I started working on my data science template and applied it to this competition. That alone gave me a jump in accuracy from 0.39 to 0.65. It was my most important jump of the whole competition, and it took just a few days’ work. From then on I kept working on improving my solution, but never got more than a 1% jump in accuracy at each step. The most important steps that allowed me to increase my accuracy were adding a lot of feature engineering and doing feature selection.

The competition

The objective of the competition was to predict which kind of surface a robot was moving on (wood, concrete, etc.). To predict this, they gave data about the movement of the robots, including orientation given in quaternions (x, y, z, w), angular velocity (x, y, z) and linear acceleration (x, y, z). The data was given as time series divided into sets of 128 measurements.

Visualisation of the time series data

It is very interesting to follow the kernels published on Kaggle by other competitors, and right after the competition ended, the top performers showed how they got their results.
I’m a little bit disappointed with the final results, because one of the main factors that gave people the victory was realizing that the dataset was divided into chunks of 128 measurements, and that those chunks were part of bigger recording sessions also available inside the dataset. Basically, you just had to compare the different chunks with each other and see which ones belonged to the same big chunk. I decided not to implement this feature in my code, since this was my first competition and my objective was to learn data science, not to win a competition. This kind of feature wouldn’t give any relevant result in a real-life project, so it wouldn’t really teach me anything useful for my own projects.

The code

After this introduction, let’s dive into the code and the techniques used. I implemented it using Google Colab. I will only show the important parts of the project; I won’t go into detail about how I loaded the dataset or how I pre-processed the data to make sure I had a clean dataset. I will only focus on the most important parts, which made the difference in the result. In a first data exploration, the most insightful thing was the correlation matrix:

# Correlation analysis
sns.heatmap(df_train.corr(), annot=True, fmt='.2f')

Correlation matrix for the dataset

As you can see, some variables were highly correlated, like orientation_W and orientation_X. During the final steps of the competition this led to deleting some of those highly correlated features, boosting the model accuracy. Another important step was feature engineering. I added many features based on the original time series data; while I lost the time series information, I got a set of new features that were even more informative. The features included the mean of the time series, the maximum value, the minimum value and many, many more. One important set of features was the Fourier transform, adding the frequency-domain transformation of the time series data.
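To make the kind of aggregate features described above concrete, here is a minimal, stdlib-only sketch (illustrative, not the competition code itself) that collapses one 128-sample channel into summary statistics, using a naive DFT as a stand-in for the Fourier-transform features:

```python
import cmath

def aggregate_features(series):
    """Collapse one time-series channel into a few summary features."""
    n = len(series)
    # Naive DFT magnitudes as frequency-domain features (O(n^2), fine for n=128).
    dft_mag = [
        abs(sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(series)))
        for k in range(n // 2)
    ]
    return {
        "mean": sum(series) / n,
        "max": max(series),
        "min": min(series),
        "fft_peak": max(dft_mag[1:]),  # skip the DC component
    }

# A pure periodic signal of period 4, sampled 128 times.
feats = aggregate_features([0.0, 1.0, 0.0, -1.0] * 32)
```

In the actual solution each of the ten sensor channels would produce its own set of such features, replacing the raw 128-step series.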
For cross-validation I used a very simple strategy: dividing the training data given by the competition into a training and a validation set. I used 70% of the data for training and 30% for validation, stratifying both sets (keeping the same percentage of each category in each set).
The algorithm
Then came the model selection phase. I tested many models, including linear regression, Random Forest, XGBoost, and LightGBM. The one that gave me the best results on the first run was Random Forest, so I chose to work with it. Looking back, I regret not also working with XGBoost, since I believe that with some hyper-parameter tuning it could have given great results. Using grid search, I found the best hyper-parameter values for a random forest on this dataset:
n_estimators=5000
min_samples_leaf=1
max_features=0.3
They were found by testing different values with grid search. This is the list of values that were tested:
n_estimators: [100, 500, 1000, 2500, 5000, 10000]
min_samples_leaf: [1, 3, 5, 10, 15, 20]
max_features: ['auto', 0.1, 0.3, 0.5]
Feature selection
After training the Random Forest model, I took the least important features out of the model. This gave me an important jump in accuracy.
important_features = feature_importances[feature_importances['importance'] > 0.001].index.values
X_train = X_train[important_features]
Retraining the Random Forest on the DataFrame with only the important features gave an accuracy boost.
Submission
Having a working model, I wrote a function to generate the submission file.
testpreds = rfc.predict(df_test)
testpreds = le.inverse_transform(testpreds)
dfres = pd.DataFrame()
dfres['series_id'] = df_test.index
dfres['surface'] = testpreds
dfresf = dfres.groupby('series_id').agg(lambda x: x.value_counts().index[0])
dfresf.to_csv('out.csv')
All that was left was to upload the submission to Kaggle and enjoy the excitement and nervousness of waiting for the evaluation.
Conclusion
The solution outlined in this post was not the first one I got for the competition. It involved a lot of iteration and testing of different ideas, many of them good and many of them not. Overall it was a very interesting challenge, a great opportunity to participate in a data science project with a defined objective, and a great learning experience. I totally recommend following a Kaggle competition from the beginning and fighting to reach the top positions of the ranking.
https://medium.com/saturdays-ai/kaggle-careercon-2019-competition-report-121-of-1449-top-9-21a1b7901af7
['Albert Sanchez Lafuente']
2019-04-17 20:34:34.813000+00:00
['Random Forest', 'Machine Learning', 'Data Science', 'Kaggle', 'Sklearn']
Count Tolstoy is Missing
Tolstoy’s disappearance fit well with his growing eccentricity and his ascetic inclinations. Although a wealthy man, supported by his international literary success, Tolstoy spent his final years simplifying his life. He lived in a peasant’s hut on his estate, refusing the comforts of his mansion. He embraced rustic simplicity, wearing the rough clothing of Russian peasants while he farmed his lands with antiquated equipment. In every way, he tried to emulate the downtrodden and dispossessed of his country. Tolstoy’s concern for the peasantry may have been responsible for his desire to abandon family and comfortable surroundings in his final days. Some newspapers reported that he had fallen out with his second oldest son, Ilya, who had been selected to administer the count’s property. Ilya had immediately attempted to increase the profitability of the estate, pressuring the peasants to pay higher rents for the land they farmed. This distressed the author, and, after falling out with his son and wife, he chose to spend his final moments far from home and hearth. Tolstoy at home, 1897. Library of Congress, Public Domain. It was later learned that he and his personal physician, Doctor Makovetsky, had departed on October 10, taking seats in the third class compartment of a train bound for the Tula District, south of Moscow. Tolstoy’s daughter, Alexandra, conspired in the pair’s escape. Tolstoy took the equivalent of $17 in rubles with him; Alexandra ensured that the doctor carried a much more substantial bankroll. Tolstoy hoped to spend a week visiting his sister, Maria, who was a nun in the ancient convent at Shamardino. On his way to her convent, the author stopped at the monastery of Optina. In 1901, after the publication of his final book, Resurrection, the Russian Orthodox church had expelled Tolstoy from the ranks of the faithful. 
Uncertain of his reception, the elderly author announced his arrival at Optina by saying, “I am the excommunicated and anathematized Leo Tolstoy. Is there any objection to my stay here?” The monastery’s abbot replied, “It is both a duty and a pleasure to offer you shelter.” Tolstoy and an elderly monk spent a day in spiritual conversation. A day later, the author and his physician continued to Shamardino. Tolstoy’s intention of an extended stay was foiled when the author was recognized and news of his presence leaked to the world. Joined by his daughter, Alexandra, Tolstoy and Doctor Makovetsky attempted again to throw the world off their tracks. Tolstoy announced that he was heading for the house he kept in Moscow. He then surreptitiously inserted himself into the third class compartment of a train heading south toward the Caucasus. Jammed among the peasants in an overheated carriage, Tolstoy spiked a fever. Makovetsky, monitoring the author’s condition, decided that it would be suicidal to continue. He forced the author and Alexandra to disembark at Astapova, an obscure station on the train line. It would be stretching the facts to call Astapova a town: it only possessed a handful of peasant huts. There was no hospital, so Tolstoy was lodged in the railroad station master’s home. The media immediately descended on the tiny, backwater town. Newspaper reporters, magazine writers, and a handful of cameramen intent on filming the author’s dying moments, poured off arriving trains and ringed the tiny house where the author lay. Alexandra, acting as official spokeswoman for her father, assured the media that Tolstoy was in no immediate danger: he was suffering from a relapse of bronchitis, a high fever, and an inflammation of the lower lobe of his left lung. Doctor Makovetsky was convinced that he would quickly recover. There was no need for the world’s press to keep a vigil like expectant jackals. Please leave, she begged the paparazzi. 
They paid no attention to her request. Tolstoy’s wife, Sofia, arrived in Astapova. It was not a pleasant reunion. An argument between the couple, reported the newspapers, had prompted Tolstoy’s desire to flee. Sofia, long-tried by her husband’s fascination with poverty and the simple life of the peasantry, had snapped when he had announced his intention to renounce the copyrights on his literary works. The royalties produced by his copyrights represented a major source of the family’s income. Sofia despaired when Tolstoy suggested that he no longer wished to profit from his writing. They had argued, and, in response, Tolstoy had abandoned his wife and family. In Astapova, Sofia begged her husband to return to their estate, where he could receive the finest care and convalesce in comfort. She offered to charter a special train to carry him back to Yasnaya Polyana. Tolstoy refused. His condition wavered between hopeless and hopeful. On November 17, the Russian newspapers reported that he had died; hours later they printed retractions: Tolstoy was alive and his temperature had returned to normal. Unfortunately, by the time the premature death notice was corrected, newspapers around the world had already printed Tolstoy’s obituary. The newsmen continued their uncomfortable vigil in Astapova. According to the New York Times, the correspondents “have not been able to obtain indoor sleeping quarters or a place to eat. The only telegraph facilities are furnished by the operator of the flag station, and they are completely inadequate.” Several railroad carriages were parked on a siding next to the station, offering the only shelter from the cold winter snow. The newsmen packed inside the cold train cars, waiting for something to happen. The reports issuing from this harried band of correspondents were not optimistic. Tolstoy’s fever had declined, but his heart appeared to be failing. The doctors, while unready to concede defeat, grew pessimistic.
https://medium.com/lessons-from-history/count-tolstoy-is-missing-3b12bbbf619e
['Richard J. Goodrich']
2020-12-18 17:31:36.776000+00:00
['Russia', 'History', 'Books', 'Literature', 'Tolstoy']
In Theory, we all Need sex
All humans are sexual creatures. If we weren’t, our species would have gone extinct long ago. And yet, many of us remain reluctant to accept sex as part of our shared humanness, a key component of our relationships and interactions. Some of us have been conditioned to view sex as dirty and reprehensible, something we should endeavor not to want, think about, or even discuss. As a result, we either go without, have bad sex, or do not trust ourselves to state our needs, preferences, or fantasies. We all have them but we pretend we don’t so as not to appear wanton or lewd, so as not to jeopardize the carefully curated exterior many of us present to the world. This unwillingness to embrace our sexual selves creates myriad issues that can destroy lives. Gay folks end up trapped in heterosexual relationships; trans folks end up trapped in a body that doesn’t fit; hetero folks end up trapped in lies. With courage and communication, almost any situation is reversible. But not everyone will find it in themselves to accept their sexuality in all its complexity. Further, not everyone has the good fortune of living in an open-minded and supportive society. Or of having a partner with whom we’ve created a safe space conducive to the joint exploration of sexuality. Although sex is never a solo pursuit, how many of us go it alone regardless of partnership status? How many of us hoard our desires for lack of a willing interlocutor? How many of us outsource our fantasies to strangers via specialized websites or by hiring a sex worker? How many of us have resigned ourselves to a sexless existence in which the only relief we get comes in the form of erotica or porn, when we’re able to react to it at all?
https://asingularstory.medium.com/in-theory-we-all-need-sex-2f533095e51f
['A Singular Story']
2020-09-09 19:12:41.340000+00:00
['Sexuality', 'Mental Health', 'Self', 'Relationships', 'Culture']
How I Live (Well) On $20K A Year
I was talking to some friends from New York recently, and they were complaining about how crazy expensive it is to live there. They are three single guys sharing a two-bedroom apartment on the outskirts of the city, and between rent and going out for lunch and clubs and drinks and this and that, they were each spending around $40K a year. I nodded along as they said this — they made every dollar they spent sound like an inevitable expense — but alarms were going off in my head. 40,000 dollars?! So when I got off the call, I decided to check how much I had spent in 2019. This was fairly easy to do as I write down all my expenses in a spreadsheet at the end of each month — it was just a matter of adding up some numbers. NOTE If you don’t track your monthly expenses you can still quickly see how much you spent in 2019 by downloading all your credit/debit card transactions onto a spreadsheet. Search “how to export transactions from <insert your bank’s name>” to see how to do it for your bank. The result? $20,700.
https://medium.com/makingofamillionaire/how-i-live-well-on-20k-a-year-4a60af8fd0c0
['Dan Ymas']
2020-12-22 18:02:01.272000+00:00
['Life Lessons', 'Personal Development', 'Life', 'Money', 'Writing']
Crisis Communications for COVID-19
Effective communication between leaders and stakeholders is crucial in the face of trying conditions. The COVID-19 crisis confronts leaders in the United States, and in the global community at large, with unprecedented challenges for which no existing playbook, plan, or set of tactics can reliably provide a best-practice roadmap — or even the promise of a good outcome. The situation is extremely uncertain, ambiguous, even chaotic. And there is no reason to think that this uncertainty will abate any time soon. COVID-19 as a medical phenomenon may continue to evolve to produce additional surprises — while the unprecedented actions that many jurisdictions are now taking will almost surely produce unforeseeable follow-on effects and new challenges to surmount. Leaders and their organizations will need to operate in an agile, problem-solving mode for an indefinite time to come.[3] Communication with employees, customers, investors, constituents, and other stakeholders can contribute decisively to the successful navigation of these stressful circumstances. But how should leaders think about what they are trying to say — and how to say it? This short note lays out simple frameworks that can be used to formulate the messages that leaders can and should — indeed, must — convey to help their communities and organizations make their way forward as effectively as they reasonably can. The Stockdale Paradox, Slightly Modified [4] In difficult circumstances, leaders must help their stakeholders understand and face up to their ultimately unavoidable reality. Admiral James Stockdale, the senior American officer incarcerated in the POW camps in North Vietnam during the Vietnam War, was responsible for trying to help his fellow inmates. He has been credited with saving many of them. Asked how, he said that in dire circumstances leaders must do two things: First, they must be brutally honest about the reality; Second, they must offer a rational basis for hope. 
To the original version of the Stockdale Paradox (or, perhaps better, the Stockdale Principles), we suggest adding a third element: Third, they must show empathy for the losses and suffering of their followers. First, Stockdale explained that people who did not grasp the reality but instead harbored false hopes were the first to die — because they were inevitably devastated as their hopes proved false. By contrast, those who were helped to understand reality but had a rational basis for belief that things might eventually be better were able to withstand extremes of deprivation, mistreatment, torture, and disease — and still survive.[5] Second, in crafting messages about COVID-19, you need to combine honest description with rational hopes about how to navigate a more positive path forward. For example, leaders might say, “Rigorous social distancing will certainly create very difficult hardships for us all — separation from loved ones, personal economic woes, difficulties securing groceries and medications, among other problems. But, hopefully, by slowing the spread of COVID-19, we will keep the load on hospitals manageable and give ourselves time to develop a vaccine, as well as medicines to alleviate the symptoms of those infected.” Third, in our view it is important to acknowledge the challenges, difficulties, and suffering created by the difficult circumstances you are leading people through. This must be authentic, and appropriate to the way in which you express yourself — but it is important not to seem aloof from the situation or from the grave realities faced by your constituents. The Four Canonical Questions True crisis events — large scale, highly uncertain circumstances, like the COVID-19 pandemic and its myriad of follow-on consequences — are “whole of community” events: literally, all of us are in the event together (though different groups may be affected in different ways). 
Generally speaking, however, we all are implicitly or explicitly seeking answers to four central questions:
(1) Situation: What is happening? What is this event? What are the key facts and defining circumstances?
(2) Identity: To whom is this happening? Who is included in “we” when people say “We are in this together”? Am I part of the group that is affected? Do leaders notice, care about, and pay attention to me and what matters to me — and others like me?
(3) Values and Interests at Risk: Why should we care? What are the things that “we” especially value that are threatened by this event?
(4) Action: What should people like us, with values like ours, do in a situation like this?
People constantly seek — and will inevitably find — answers to these questions, whether or not leaders directly address them. They will find the answers in what you and others say, and in what you don’t say, and, critically, in how you behave. As a leader, you will do better when you speak explicitly to these questions, and when you then behave in ways consistent with what you said. (It doesn’t help much if you say people should keep personal distance from one another and then you are seen on television shaking hands at press conferences.) A major priority for a leader in crafting crisis communications, therefore, is to decide what answers to give to the four canonical questions — and then to ensure that what you say is consistent with what you want to convey. The following provide some guidelines for addressing the questions:
(1) Frame and describe the event in the terms you want your stakeholders to internalize and respond to. How serious is it? How long is it likely to last? Who is particularly likely to be affected?
(2) Clearly indicate how you are defining the community involved. Who is included? Whose interests are you taking into account? Are there significant differences among subgroups within that larger community?
(3) Describe what values and interests are at risk.
What is likely to be affected that matters to us? What are we most interested in preserving? What is essential to us? What may we be forced to leave behind as we move forward?
(4) Describe how you want people to behave. What are they supposed to do? What should they not do? What sacrifices are they going to have to make to preserve what they really care about?
Speak intentionally, repeatedly, and authentically to this set of questions. Always remember that your stakeholders are looking for — and will surely find — answers whether you provide them or not. Better to articulate thoughtful answers to these questions clearly and authentically. Because people will be stressed and distracted during crises, it is crucial that you speak not only clearly but also concisely, so that stakeholders can comprehend and retain the critical information you are trying to convey.
A Template
To frame public communications in high-stress events, there is a simple “template,” derived from analysis of both effective and problematic messaging. These rules are designed to be very general, so as to cover a wide range of circumstances:
(1) Say what you know (and the basis of your knowledge), speaking especially to the first three canonical questions.[6]
Situation: Frame and describe the event in the terms you want your stakeholders to internalize and respond to. How serious is it? How long is it likely to last? Who is particularly likely to be affected?
Identity: Clearly indicate how you are defining the community involved. Who is included? Whose interests are you taking into account? Are there significant differences among subgroups within that larger community?
Values and Interests: What is likely to be affected that matters to us? What are we most interested in preserving? What is essential to us? What may we be forced to leave behind as we move forward?
You will generally have a better grasp of what is happening (better “situational awareness”) than your stakeholders.
Share what you know, but be careful to describe how you know what you know. In crisis events, early information is often wrong.[7] It is fine to say “on the basis of what we have learned so far from ____, we currently believe ____” (so long as there is reason to believe that the stated source may be reliable). If your sources later change their view, you are in a better position to announce the changed information than if you put it in your own voice. Note that an important rule embedded here is that you should not say things that you do not know:
Don’t speak from hope. You can speak to hope — that is, you can say why you think there is reason for hope. But don’t confuse your hopes about the future with the facts as they are now on the ground.
Don’t speculate about exactly what has happened (which you may not know yet) or about why it happened (which you almost surely don’t know yet).
Don’t make predictions. You don’t know what will happen next, so forecasting is risky; any prediction you make is most likely to draw attention if it turns out to be wrong. (But an exception: you can say when you will be back to say more — because that you can control.)
Don’t suggest that you can control the future — this power is not given to you. “We will (characterization of a hoped-for outcome)” is a very risky statement to make.
(2) Say what you are doing.[8] Let people know what actions are being taken and how they relate to your understanding of the situation. Importantly, this should include how you are seeking further information and what perspective you can offer on what may be a rapidly evolving event.
(3) Say what others should do, thus answering the fourth canonical question. Provide guidance and direction to your stakeholders. What should employees do? What should your customers or your suppliers or the public do?
In stressful situations, people want to be proactive rather than passive, so providing ideas about what they can usefully do may be helpful to their sense of well-being. It is not therapeutic to be told to sit quietly and do nothing — try to help people find something useful that they can do, because most will want to actively contribute. In some situations, this may be nothing more than telling them where, when, and how they can access further information. But avoid suggesting actions that are make-work or likely to prove inconsequential, which can damage your credibility and authenticity.
(4) Offer perspective. In high-stress situations, people may find it hard to acknowledge danger or, in contrast, appropriately bound their concerns. Some may be in denial and may not be paying attention to the seriousness of the situation. For others, shock or fear leads them to overstate the severity of the event. To the extent possible, help people get perspective by explaining how to regard the new reality and by providing realistic comparisons that help ground their perceptions.[9]
Considerations in Crafting the Statement
A framework that many have found useful in thinking about how to craft the message you are formulating is the “Four M’s”:[10]
Messagee: Who is the message addressed to? What is her or his frame of mind or reference / state of knowledge / degree of understanding / world view / means of absorbing and processing information / preferred channels of communication?
Message: What, exactly, are you trying to say to the audience you have chosen?
Messager: Who is the right source of the message / person to say the message is coming from? Whose authority / legitimacy / standing / political capital / expertise is best invoked to make the message persuasive?
Messenger: Who is the best person to carry / deliver this message? In whose voice will it be best heard, received, and positively responded to?
From whom will the message be most likely to seem legitimate and acceptable? In crafting these four elements of the message, it is useful to bear in mind two key determinants of the power of the message as it will be encountered by the audience: Empathy It is important to start with an understanding of the messagee’s approaches to the world and the state of their information. An empathic analysis is crucial to crafting the message in a form that will be persuasive. A common mistake is to imagine that the argument that convinced you will convince others as well. The point, you need to remember, is not to persuade you. (You are already persuaded, presumably.) Since others do not necessarily share your prior information, priorities, world view, or means and methods of absorbing and processing information, they will not necessarily be impressed by the same arguments that persuaded you. Instead, you need to work the problem from their perspective. Given their premises and approaches to reasoning (or narrative), what formulation of the relevant facts and arguments would most likely persuade them? Whose Voice? We often think of an argument, by and of itself, as being persuasive. But people can almost always resist being persuaded if they do not wish to be. It is useful to remember instead that being persuaded is a gift from the persuadee to the persuader. What would make the intended persuadee willing to make this gift of allowing her or his mind to be changed by this message? Who has standing with this audience? By whom will its members be willing to allow themselves to be persuaded? Will it be an expert or scientist, a political leader, a prominent business person? If you can figure out the answer to this question, you can more effectively solve the problem of voice: from whom should this be coming, and by whom should it be carried? 
Crisis Communications as a Sequence
Each of your messages will be part of a sequence of communications conveying important information to your followers and constituents. Thus, you need to make sure that the messages are both individually well-crafted and also that they make sense when viewed across time. Seeing each message as part of a larger fabric or pattern makes it important to focus for each message in turn on “the Five C’s”:
Clear: make the ideas in each message as precise and crystalline as possible;
Concise: within the bounds required by accuracy and clarity, make each message as succinct as possible — your audience has a lot going on, so your message needs to focus on the essentials (only!);
Coherent: make sure that if your message has different parts, they are aligned with one another and that your overall message is thus internally logical;
Consistent: make sure that your messages at different times are aligned with one another and make sense as a group across time … and that if your message is changing over time that you explain what is driving the changes; and
Credible: make sure that the facts you convey are accurate; stay within what you know, to preserve your long-term believability.
Pacing the Unwelcome News
Finally, leaders must be conscious of unfolding bad news in a way and at a rate that their stakeholders can manage and adjust to. In his work on “adaptive leadership,” Ronald Heifetz formulated the problem of leadership as helping stakeholders adapt to new realities.[11] In our teaching about leadership in crisis situations, we have found it useful to formulate this idea in this way: Leadership is the process of bringing a new and generally unwelcome reality to an individual, group, organization, or society, and helping her, him, it, or them successfully adapt to it. Pacing therefore is extremely important.
Undertaking this form of leadership — dosing the organization with an unwelcome reality — tends to raise the level of stress in the organization. Some people respond with “avoidance mechanisms” — things they can say or do that would allow them not to have to undertake the “adaptive work” that would be implied by facing up to the real circumstances you are describing. Some people go into denial, some make excuses (we are too busy, we need more resources, …), some attack the leader (that is, you). Knowing that there will generally be resistance, you need to calibrate the process of educating the group, organization, or society at a sustainable rate. Too much, too fast can lead to panic or rejection; too little, too late will leave the problem festering and unsolved. A major challenge for you, thus, is to find the right rate — fast enough to deal with the problem without causing a wave of resistance that will undercut the effort. Leaders can benefit from the ways of thinking about public communication outlined in this brief, crafting their messages for stakeholders with honesty about the facts but with a rational element of hope for the future. To be effective, this basic purpose requires thoughtful analysis of the audience for the message, the way in which it is framed to be informative and persuasive, and how it can best be presented as authoritative and legitimate.
https://medium.com/covid-19-public-sector-resources/crisis-communications-for-covid-19-8994e85ac71b
['Harvard Ash Center']
2020-04-29 16:32:16.479000+00:00
['Crisis Communications', 'Leadership', 'Coronavirus', 'Crisis Management', 'Covid 19']
How to kickstart your career as a developer
Being a developer is a fantastic opportunity. Not only do you get to know how the digital world operates and how stuff is made (how apps get deployed, how your Instagram feed is generated, how Amazon gets you your orders), but you also gain the skills to work from absolutely anywhere and at any time, on your own terms, as you please. It’s quite an exciting time to be a developer. With the myriad of tools available to us devs, the real-world applications that we can make on our own, and a volley of new technologies waiting to be explored and worked upon, there has never been an era in which a school student could ship paradigm-shifting solutions to millions of people all around the globe. What’s more, we have so much to work on. Not excited about websites? How about making first-person shooters? Don’t feel like working on cloud mainframes? Try machine learning; make an in-game self-driving car system or two. There’s a lot you can do with the development skillset, since most of what you learn in one discipline is easily transferable to any other domain within the development world. While all this is true, most people don’t really seem to have an idea of how to go about learning to become a developer, or of the process of ‘getting into the industry’ that follows. Most people don’t have much idea of where they should start. That’s because this field evolves much faster than all other fields. Developers don’t learn skills and then sit on those skills for years; part of a developer’s job is to keep learning continually. This is what many people miss. If you’re just starting out with development, I suggest you check out some of these articles, then come back here when you’re ready and use this article to get going in the industry. If you already know how to code properly or have learnt most of the skills required for being a developer, carry on with this article.
https://medium.com/flutter-community/how-to-kickstart-your-career-as-a-developer-e602da6d0c5c
['Aamish Ahmad Beg']
2020-12-22 17:51:45.498000+00:00
['Development', 'Learning', 'Software Development', 'Coding', 'Job Hunting']
Understanding Variational Autoencoders (VAEs)
Mathematical details of VAEs
In the previous section we gave the following intuitive overview: VAEs are autoencoders that encode inputs as distributions instead of points, and whose latent space “organisation” is regularised by constraining the distributions returned by the encoder to be close to a standard Gaussian. In this section we will give a more mathematical view of VAEs that will allow us to justify the regularisation term more rigorously. To do so, we will set up a clear probabilistic framework and use, in particular, the variational inference technique.
Probabilistic framework and assumptions
Let’s begin by defining a probabilistic graphical model to describe our data. We denote by x the variable that represents our data and assume that x is generated from a latent variable z (the encoded representation) that is not directly observed. Thus, for each data point, the following two-step generative process is assumed:
first, a latent representation z is sampled from the prior distribution p(z)
second, the data x is sampled from the conditional likelihood distribution p(x|z)
Graphical model of the data generation process.
With such a probabilistic model in mind, we can redefine our notions of encoder and decoder. Indeed, contrary to a simple autoencoder, which considers a deterministic encoder and decoder, we are now going to consider probabilistic versions of these two objects. The “probabilistic decoder” is naturally defined by p(x|z), which describes the distribution of the decoded variable given the encoded one, whereas the “probabilistic encoder” is defined by p(z|x), which describes the distribution of the encoded variable given the decoded one. At this point, we can already notice that the regularisation of the latent space that we lacked in simple autoencoders appears naturally here in the definition of the data generation process: encoded representations z in the latent space are indeed assumed to follow the prior distribution p(z).
We can also recall the well-known Bayes theorem, which makes the link between the prior p(z), the likelihood p(x|z), and the posterior p(z|x):

p(z|x) = p(x|z) p(z) / p(x) = p(x|z) p(z) / ∫ p(x|u) p(u) du

Let's now make the assumption that p(z) is a standard Gaussian distribution and that p(x|z) is a Gaussian distribution whose mean is defined by a deterministic function f of the variable z and whose covariance matrix is a positive constant c times the identity matrix I. The function f is assumed to belong to a family of functions denoted F that is left unspecified for the moment and will be chosen later. Thus, we have

p(z) ≡ N(0, I)        p(x|z) ≡ N(f(z), cI)        f ∈ F,  c > 0

Let's consider, for now, that f is well defined and fixed. In theory, as we know p(z) and p(x|z), we can use the Bayes theorem to compute p(z|x): this is a classical Bayesian inference problem. However, as we discussed in our previous article, this kind of computation is often intractable (because of the integral in the denominator) and requires the use of approximation techniques such as variational inference.

Note. Here we can mention that p(z) and p(x|z) are both Gaussian distributions. So, if we had E(x|z) = f(z) = z, it would imply that p(z|x) also follows a Gaussian distribution and, in theory, we could "only" try to express the mean and the covariance matrix of p(z|x) with respect to the means and the covariance matrices of p(z) and p(x|z). However, in practice this condition is not met, and we need to use an approximation technique like variational inference, which makes the approach pretty general and more robust to some changes in the hypotheses of the model.

Variational inference formulation

In statistics, variational inference (VI) is a technique to approximate complex distributions. The idea is to set a parametrised family of distributions (for example the family of Gaussians, whose parameters are the mean and the covariance) and to look for the best approximation of our target distribution within this family.
The best element in the family is the one that minimises a given approximation error measurement (most of the time the Kullback-Leibler divergence between the approximation and the target) and is found by gradient descent over the parameters that describe the family. For more details, we refer to our post on variational inference and the references therein.

Here we are going to approximate p(z|x) by a Gaussian distribution q_x(z) whose mean and covariance are defined by two functions, g and h, of the parameter x. These two functions are supposed to belong, respectively, to the families of functions G and H that will be specified later but that are supposed to be parametrised. Thus we can denote

q_x(z) ≡ N(g(x), h(x))        g ∈ G,  h ∈ H

So we have defined this way a family of candidates for variational inference, and we now need to find the best approximation within this family by optimising the functions g and h (in fact, their parameters) to minimise the Kullback-Leibler divergence between the approximation and the target p(z|x). In other words, we are looking for the optimal g* and h* such that

(g*, h*) = argmin_{(g,h) ∈ G×H} KL( q_x(z) , p(z|x) )
         = argmax_{(g,h) ∈ G×H} ( E_{z∼q_x} [ log p(x|z) ] − KL( q_x(z) , p(z) ) )

In the last equation, we can observe the tradeoff that exists, when approximating the posterior p(z|x), between maximising the likelihood of the "observations" (maximisation of the expected log-likelihood, the first term) and staying close to the prior distribution (minimisation of the KL divergence between q_x(z) and p(z), the second term). This tradeoff is natural for Bayesian inference problems and expresses the balance that needs to be found between the confidence we have in the data and the confidence we have in the prior.

Up to now, we have assumed the function f known and fixed, and we have shown that, under such assumptions, we can approximate the posterior p(z|x) using the variational inference technique. However, in practice this function f, which defines the decoder, is not known and also needs to be chosen.
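The tradeoff between the two terms of this objective can be made concrete with a small numerical sketch. This is my own NumPy illustration, not the article's code: the vectors below are hypothetical stand-ins for g(x), h(x) and f(z), and the KL term uses the closed-form expression for the divergence between a diagonal Gaussian and the standard Gaussian.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def log_likelihood(x, x_hat, c):
    """log p(x|z) for p(x|z) = N(f(z), cI), up to an additive constant."""
    return -np.sum((x - x_hat) ** 2) / (2.0 * c)

# Hypothetical values for one data point (stand-ins for the real g(x), h(x), f(z))
x = np.array([1.0, 2.0])
x_hat = np.array([0.9, 2.1])        # decoded reconstruction f(z)
mu = np.array([0.5, -0.2])          # mean g(x) of q_x(z)
log_var = np.array([-0.1, 0.3])     # log of the diagonal covariance h(x)

# Maximising this balances reconstruction quality against staying close to the prior.
objective = log_likelihood(x, x_hat, c=1.0) - kl_to_standard_normal(mu, log_var)
```

Note that the KL term vanishes exactly when q_x(z) is itself the standard Gaussian (mu = 0, log_var = 0), which is the sense in which it acts as a regulariser pulling the encoder towards the prior.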
To do so, let's recall that our initial goal is to find an efficient encoding-decoding scheme whose latent space is regular enough to be used for generative purposes. If the regularity is mostly ruled by the prior distribution assumed over the latent space, the performance of the overall encoding-decoding scheme highly depends on the choice of the function f. Indeed, as p(z|x) can be approximated (by variational inference) from p(z) and p(x|z), and as p(z) is a simple standard Gaussian, the only two levers we have at our disposal in our model to make optimisations are the parameter c (which defines the variance of the likelihood) and the function f (which defines the mean of the likelihood).

So, let's consider that, as we discussed earlier, we can get for any function f in F (each defining a different probabilistic decoder p(x|z)) the best approximation of p(z|x), denoted q*_x(z). Despite its probabilistic nature, we are looking for an encoding-decoding scheme that is as efficient as possible, so we want to choose the function f that maximises the expected log-likelihood of x given z when z is sampled from q*_x(z). In other words, for a given input x, we want to maximise the probability of having x̂ = x when we sample z from the distribution q*_x(z) and then sample x̂ from the distribution p(x|z). Thus, we are looking for the optimal f* such that

f* = argmax_{f ∈ F} E_{z∼q*_x} [ log p(x|z) ] = argmax_{f ∈ F} E_{z∼q*_x} [ −||x − f(z)||² / (2c) ]

where q*_x(z) depends on the function f and is obtained as described before. Gathering all the pieces together, we are looking for the optimal f*, g* and h* such that

(f*, g*, h*) = argmax_{(f,g,h) ∈ F×G×H} ( E_{z∼q_x} [ −||x − f(z)||² / (2c) ] − KL( q_x(z) , p(z) ) )

We can identify in this objective function the elements introduced in the intuitive description of VAEs given in the previous section: the reconstruction error between x and f(z), and the regularisation term given by the KL divergence between q_x(z) and p(z) (which is a standard Gaussian). We can also notice the constant c that rules the balance between the two previous terms.
The higher c is, the more we assume a high variance around f(z) for the probabilistic decoder in our model, and so the more we favour the regularisation term over the reconstruction term (and the opposite holds if c is low).

Bringing neural networks into the model

Up to now, we have set up a probabilistic model that depends on three functions, f, g and h, and expressed, using variational inference, the optimisation problem to solve in order to get the f*, g* and h* that give the optimal encoding-decoding scheme with this model. As we can't easily optimise over the entire space of functions, we constrain the optimisation domain and decide to express f, g and h as neural networks. Thus, F, G and H correspond respectively to the families of functions defined by the network architectures, and the optimisation is done over the parameters of these networks.

In practice, g and h are not defined by two completely independent networks but share a part of their architecture and their weights, so that we have

g(x) = g₂(g₁(x))        h(x) = h₂(h₁(x))        with  g₁ = h₁

As it defines the covariance matrix of q_x(z), h(x) is supposed to be a square matrix. However, in order to simplify the computation and reduce the number of parameters, we make the additional assumption that our approximation of p(z|x), q_x(z), is a multidimensional Gaussian distribution with a diagonal covariance matrix (a variable-independence assumption). With this assumption, h(x) is simply the vector of the diagonal elements of the covariance matrix and thus has the same size as g(x). However, we reduce this way the family of distributions we consider for variational inference, so the approximation of p(z|x) obtained can be less accurate.

Encoder part of the VAE.

Contrary to the encoder part, which models p(z|x) and for which we considered a Gaussian with both mean and covariance that are functions of x (g and h), our model assumes for p(x|z) a Gaussian with fixed covariance.
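The weight sharing between g and h described above can be sketched as two small heads on top of a common first layer. This is my own illustrative NumPy code with arbitrary layer sizes and random weights, not the article's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, hidden_dim, z_dim = 4, 8, 2

# Randomly initialised weights, purely for illustration.
W_shared = rng.normal(size=(hidden_dim, x_dim))    # the shared part: g1 = h1
W_mean = rng.normal(size=(z_dim, hidden_dim))      # g2: head producing the mean
W_log_var = rng.normal(size=(z_dim, hidden_dim))   # h2: head producing the log-variance

def encode(x):
    """Return the mean g(x) and diagonal log-variance h(x) of q_x(z)."""
    shared = np.tanh(W_shared @ x)   # common first layer, computed once
    mu = W_mean @ shared             # mean of q_x(z)
    log_var = W_log_var @ shared     # log of the diagonal covariance of q_x(z)
    return mu, log_var

mu, log_var = encode(rng.normal(size=x_dim))
# With the diagonal-covariance assumption, h(x) has the same size as g(x):
assert mu.shape == log_var.shape == (z_dim,)
```

Outputting the log of the variance rather than the variance itself is a common practical choice: it lets the head take any real value while keeping the implied variance positive.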
The function f of the variable z defining the mean of that Gaussian is modelled by a neural network and can be represented as follows.

Decoder part of the VAE.

The overall architecture is then obtained by concatenating the encoder and the decoder parts. However, we still need to be very careful about the way we sample from the distribution returned by the encoder during training. The sampling process has to be expressed in a way that allows the error to be backpropagated through the network. A simple trick, called the reparametrisation trick, is used to make gradient descent possible despite the random sampling that occurs halfway through the architecture. It consists in using the fact that if z is a random variable following a Gaussian distribution with mean g(x) and with covariance H(x) = h(x).h^t(x), then it can be expressed as

z = h(x) ζ + g(x)        with  ζ ∼ N(0, I)
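A minimal sketch of the reparametrisation trick in the diagonal-covariance case, where it reduces to an elementwise scale-and-shift of standard Gaussian noise (my own NumPy illustration, not the article's code):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_latent(mu, log_var, rng):
    """Reparametrisation trick: z = mu + sigma * eps, with eps ~ N(0, I).

    All the randomness is isolated in eps, so z is a deterministic
    (hence differentiable) function of mu and log_var, and gradients
    can flow through them during backpropagation.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.array([1.0, -1.0])
log_var = np.zeros(2)  # sigma = 1 in both dimensions

# Empirically, the sample mean should be close to mu.
samples = np.stack([sample_latent(mu, log_var, rng) for _ in range(20000)])
print(samples.mean(axis=0))
```

Sampling z directly from N(g(x), h(x)) would produce the same distribution, but the draw would be an opaque random node through which no gradient can pass; rewriting it this way is what makes the encoder trainable end to end.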
https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
['Joseph Rocca']
2020-07-07 20:31:29.780000+00:00
['Machine Learning', 'Artificial Intelligence', 'Deep Learning', 'Towards Data Science', 'Data Science']
Don’t Set Goals, Set Experiments
Everyone knows by now that there is a high statistical chance of failing their New Year’s resolutions. There are three ways we can respond to this stat: Convince ourselves that we’re different and aim to be part of the 20% that do succeed. Take it to heart and omit New Year’s resolutions from our vocabulary. Take the time to figure out why, and find a better way to approach our own success. If we follow route #3, a better way to put ourselves in the best position to succeed might be to set experiments instead of goals. Matt D’Avella, a highly accomplished filmmaker with over 3 million YouTube subscribers, did exactly that in 2019: he ran a 30-day habit experiment each month to see what worked and what didn’t. He assigned one habit to each month of the year and worked hard to do that one habit every day for 30 days. After each month he completely dropped that habit to focus on the next one. One of the biggest reasons that people fail their resolutions is that they set goals that aren’t actually for themselves but are based on other people’s lives. There’s so much advice out there and so many people telling us about the habits that have changed their lives, but these habits aren’t one-size-fits-all.
https://medium.com/age-of-awareness/dont-set-goals-set-experiments-31bfca768c8e
['Christian Pow']
2020-12-22 02:32:36.075000+00:00
['Self-awareness', 'Self', 'Education', 'Habit Building', 'Life Lessons']
Creating Python Docker Image For Windows Nano Server
My company, DEVSISTERS, is an early adopter of Windows containers and Windows Kubernetes in Korea and its gaming industry. Recently, our team operated a closed beta of a Windows stack-based game for a limited time. (Related Link, Korean) During the closed beta period, we often collected application dump files to debug and improve the server applications. Initially, we wrote a script with PowerShell and FileSystemWatcher. But as you know, FileSystemWatcher and PowerShell sometimes do not work correctly. Also, in the Windows environment PowerShell would be the natural choice, but most of our team members are not familiar with it. So a simple automation script was initially written in Python.

Currently, official Python images are only packaged with the Windows Server Core image, not the Nano Server image. This makes containerized Python applications consume more memory and resources, which is quite an overhead. In most cases, people accept this limitation willingly, because the Nano Server has far more limited features than the traditional Windows Server SKU. If you try to run a Python application in Nano Server, you will soon face a very tough problem. These differences can mean extra work and can waste your time. But I decided to take on the hard work because I wanted to optimize Python workloads in the Windows container environment. It took about two business days, and the work came out charmingly. :-)

The Dockerfile — Build Stage

I used a multi-staged build to minimize the output image size. I defined some environment variables and changed the default shell to PowerShell.
FROM mcr.microsoft.com/windows/servercore:1809
ENV PYTHON_VERSION 3.7.4
ENV PYTHON_RELEASE 3.7.4
# if this is called "PIP_VERSION", pip explodes with "ValueError: invalid truth value '<VERSION>'"
ENV PYTHON_PIP_VERSION 20.0.2
# https://github.com/pypa/get-pip
ENV PYTHON_GET_PIP_URL https://github.com/pypa/get-pip/raw/d59197a3c169cef378a22428a3fa99d33e080a5d/get-pip.py
WORKDIR C:\\Temp
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'Continue'; $verbosePreference='Continue';"]

Then, I downloaded the embeddable Windows release of Python and extracted the ZIP file. I also downloaded the get-pip.py script. Before doing that, I modified the SecurityProtocol property to allow communication with the GitHub URL.

RUN [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12; \
    Invoke-WebRequest -UseBasicParsing -Uri "https://www.python.org/ftp/python/$env:PYTHON_RELEASE/python-$env:PYTHON_VERSION-embed-amd64.zip" -Out 'Python.zip'; \
    Expand-Archive -Path "Python.zip"; \
    Invoke-WebRequest -UseBasicParsing -Uri "$env:PYTHON_GET_PIP_URL" -OutFile 'Python\get-pip.py';

I used the embedded version of Python because the official Win32 installer will not work in Nano Server. And this is the hard part: some complicated configurations are applied in this stage. I'll explain what's going on here.
RUN [String]::Format('@set PYTHON_PIP_VERSION={0}', $env:PYTHON_PIP_VERSION) | Out-File -FilePath 'Python\pipver.cmd' -Encoding ASCII; \
    $FileVer = [System.Version]::Parse([System.Diagnostics.FileVersionInfo]::GetVersionInfo('Python\python.exe').ProductVersion); \
    $Postfix = $FileVer.Major.ToString() + $FileVer.Minor.ToString(); \
    Remove-Item -Path "Python\python$Postfix._pth"; \
    Expand-Archive -Path "Python\python$Postfix.zip" -Destination "Python\Lib"; \
    Remove-Item -Path "Python\python$Postfix.zip"; \
    New-Item -Type Directory -Path "Python\DLLs";

I create the PIPVER.CMD file to pass the PYTHON_PIP_VERSION environment variable to Nano Server. To reduce the hard-coded parts, I look up the Win32 resource table in the Python executable and build a postfix string. This postfix string is then used to extract the compiled Python library archive file and to remove the _PTH file. Since version 3.7.x, embedded Python does not honor the system path variable (PYTHONPATH), due to the _PTH file. Removing this file makes embedded Python work like traditional Python. I extracted the archived pre-compiled Python library to the Lib directory. Finally, for later use, I created the DLLs directory separately. This directory is used by pip and virtualenv. Phew! The hard part is over. Until now, in this build stage, I created a temporary directory and composed a Python installation directory manually.

The Dockerfile — Nano Server

Let's dive into the Nano Server.

FROM mcr.microsoft.com/windows/nanoserver:1809
COPY --from=0 C:\\Temp\\Python C:\\Python
USER ContainerAdministrator

By default, a Windows container uses the ContainerUser account. For security reasons, even in a container, this user does not have all permissions. If you want to modify the registry and system settings in a Windows container, you should change your user account to ContainerAdministrator.
ENV PYTHONPATH C:\\Python;C:\\Python\\Scripts;C:\\Python\\DLLs;C:\\Python\\Lib;C:\\Python\\Lib\\plat-win;C:\\Python\\Lib\\site-packages
RUN setx.exe /m PATH %PATH%;%PYTHONPATH% && \
    setx.exe /m PYTHONPATH %PYTHONPATH% && \
    setx.exe /m PIP_CACHE_DIR C:\Users\ContainerUser\AppData\Local\pip\Cache && \
    reg.exe ADD HKLM\SYSTEM\CurrentControlSet\Control\FileSystem /v LongPathsEnabled /t REG_DWORD /d 1 /f

I defined the PYTHONPATH environment variable locally, then configured the PATH and PYTHONPATH environment variables system-wide. I also set the PIP_CACHE_DIR environment variable to keep the PIP cache directory out of the container root path. The last line enables long-path support for the Windows operating system.

RUN assoc .py=Python.File && \
    assoc .pyc=Python.CompiledFile && \
    assoc .pyd=Python.Extension && \
    assoc .pyo=Python.CompiledFile && \
    assoc .pyw=Python.NoConFile && \
    assoc .pyz=Python.ArchiveFile && \
    assoc .pyzw=Python.NoConArchiveFile && \
    ftype Python.ArchiveFile="C:\Python\python.exe" "%1" %* && \
    ftype Python.CompiledFile="C:\Python\python.exe" "%1" %* && \
    ftype Python.File="C:\Python\python.exe" "%1" %* && \
    ftype Python.NoConArchiveFile="C:\Python\pythonw.exe" "%1" %* && \
    ftype Python.NoConFile="C:\Python\pythonw.exe" "%1" %*

The AWS CLI, for example, requires that the ".PY" file extension be associated with a Python interpreter directly. These commands map the major Python file extensions to the respective Python interpreters.

RUN call C:\Python\pipver.cmd && \
    %COMSPEC% /s /c "echo Installing pip==%PYTHON_PIP_VERSION% ..." && \
    %COMSPEC% /s /c "C:\Python\python.exe C:\Python\get-pip.py --disable-pip-version-check --no-cache-dir pip==%PYTHON_PIP_VERSION%" && \
    echo Removing ... && \
    del /f /q C:\Python\get-pip.py C:\Python\pipver.cmd && \
    echo Verifying install ... && \
    echo python --version && \
    python --version && \
    echo Verifying pip install ... && \
    echo pip --version && \
    pip --version && \
    echo Complete.
The remaining parts are relatively simple. Simply call the get-pip script with --disable-pip-version-check and --no-cache-dir, and specify the PIP version. After the PIP installation completes, remove the temporary files and verify that Python and PIP work correctly.

RUN pip install virtualenv
USER ContainerUser
CMD ["python"]

The official Python Windows Server Core image adds the virtualenv package for convenience, so I simply added it to provide the virtualenv tool in the Nano Server container as well. Then I change the user back to ContainerUser; this configuration makes the container more secure. Finally, I specify the default command of this image as the Python interpreter.

Test Flight — AWS CLI & Virtual Environment

First, I tested the installation of the AWS CLI. Then, I tested the installation of Django in a virtual environment. It looks like everything works correctly.

Test Flight — Django Web Application

Lastly, I created a simple Django sample web site with the Nano Server. I wrote a simple Dockerfile to achieve this. First, I create a new Django project.

django-admin startproject helloworld

Then, I modified the settings.py file to allow all hosts. In this case, I'm not using any reverse proxy server, so I need to adjust the security setting to enable incoming connections to the Windows container.

...
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ["*"]
...

Lastly, I create a Dockerfile to build a docker image.

FROM rkttu/python-nanoserver:3.7.4_1809
EXPOSE 8000
RUN pip install django
WORKDIR C:\\website
ADD . .
ENV DJANGO_DEBUG=1
CMD python -Wall manage.py runserver --insecure 0.0.0.0:8000

Let's start the Nano Server-based Django application!

docker build -t helloworld:latest .
docker run --rm -d -p 8000:8000 helloworld:latest

Voila! After launching the container, I can browse the Django app. From now on, you can run your ordinary Python application in the Nano Server container.
It makes your Windows-based Python application much slimmer, and it runs fast. Do you want to use the image? I published a public Git repository and a Docker Hub repository. You can clone the code or pull the image immediately. GitHub repository URL: https://github.com/rkttu/python-nanoserver Docker Hub: docker pull rkttu/python-nanoserver:3.7.4_1809 or docker pull rkttu/python-nanoserver:3.8.2_1809 And as always, all kinds of contributions are welcome!
https://medium.com/dev-genius/creating-python-docker-image-for-windows-nano-server-151e1ab7188a
['Jung-Hyun Nam']
2020-12-21 18:01:50.591000+00:00
['Windows Server', 'DevOps', 'Docker', 'Programming', 'Python']
The emergence of Modern Conv Nets
Photo by Alina Grubnyak on Unsplash

For humans, vision feels easy: we do it all day long without thinking about it. But consider just how hard the problem is, and how amazing it is that we can see at all. To see the world, we have to deal with all sorts of "nuisance" factors, such as changes in pose or lighting. Amazingly, the human visual system does all this so seamlessly that we don't even have to think about it. Computer vision is a very active field of research that tries to help machines see the world as humans do. The field made tremendous progress in the last decade thanks to modern deep learning techniques and the availability of large sets of images online. In this article, we are going to talk about object recognition, the task of classifying an image into a set of object categories, and how modern conv nets played a huge part in it.

ImageNet Dataset

Researchers built ImageNet, a massive object recognition dataset consisting of 1.2 million images across roughly 1,000 object categories. Based on this dataset, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a competition that evaluates algorithms for object detection and image classification at massive scale. Because of the diverse object categories, one usually reports top-5 accuracy: the algorithm makes five predictions for every image, and if one of those five is correct, the image is counted as correctly classified. Now let's talk about some of the architectures that revolutionized the image classification task.

LeNet

Let's look at a particular conv net architecture, LeNet, which predates modern deep learning algorithms and the ImageNet dataset.

LeNet Architecture

The LeNet architecture is summarized in the following table, from which we can draw useful conclusions:

LeNet Summary

Most of the units are in the C1 (first convolution) layer. Most of the connections are in the C3 (second convolution) layer.
Most of the weights are in the F5 (fully connected) layer. Convolution layers are, in general, the most expensive part of the network in terms of running time. Memory is another scarce resource: backprop requires storing all of the activations in memory during training. (The activations don't need to be stored at test time.) The weights constitute the vast majority of the trainable parameters of the model. LeNet was carefully designed to push the limits of all of these resource constraints using the computing power of 1998. If you try increasing the sizes of various layers, you'll find that you quickly run up against one or more of these resources. As we'll see, conv nets have since grown significantly larger to exploit new computing resources.

The Modern Conv Nets: AlexNet Architecture

AlexNet was the conv net architecture that started a revolution in computer vision by smashing the ILSVRC benchmark. Like LeNet, it consists mostly of convolution, pooling, and fully connected layers. AlexNet is 100 to 1,000 times bigger than LeNet, but both have almost the same structure. Moreover, as in LeNet, most of the weights are in the fully connected layers, and most of the connections are in the convolutional layers.

Comparison of the early Conv Nets

Much of the credit goes to sudden, dramatic advances in hardware, especially GPUs (Graphics Processing Units). GPUs are geared towards highly parallel processing; one of the things they do well is matrix multiplication. Since most neural network computation boils down to matrix multiplication, GPUs gave roughly a 30-fold speedup in practice for training neural nets. AlexNet achieved a top-5 error of 16.4%, which was substantially better than the competitors. The results prompted some of the world's largest software companies to start up research labs focused on deep learning. In 2013, the ILSVRC winner was based on tweaks to AlexNet. In 2014, it was VGGNet, another conv net based on more or less similar principles.
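The recurring observation that fully connected layers dominate the parameter count while convolutions dominate the connections can be checked with a quick back-of-the-envelope calculation. This is a sketch of my own; the layer sizes are loosely LeNet-like (5x5 kernels), not an exact reproduction of any published architecture:

```python
def conv_params(in_channels, out_channels, kernel_size):
    """Trainable weights in a conv layer, plus one bias per output channel.

    Convolution weights are shared across all spatial positions,
    which is why the count is independent of the image size.
    """
    return out_channels * (in_channels * kernel_size * kernel_size + 1)

def fc_params(in_features, out_features):
    """Trainable weights in a fully connected layer, plus one bias per output."""
    return out_features * (in_features + 1)

c1 = conv_params(1, 6, 5)          # first convolution
c3 = conv_params(6, 16, 5)         # second convolution
f5 = fc_params(16 * 5 * 5, 120)    # first fully connected layer (on a 5x5 feature map)

# The single fully connected layer dwarfs both convolution layers combined.
print(c1, c3, f5)
```

The asymmetry comes from weight sharing: each conv filter is reused at every spatial location, so its cost in *connections* (and hence compute) is large even though its cost in *weights* is tiny, while a fully connected layer pays one weight per connection.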
The winning entry for 2014, GoogLeNet, or Inception, deserves mention. As we can see, architectures have gotten more complicated since the days of LeNet. But the interesting point is that a lot of work went into reducing the number of trainable parameters (weights) from AlexNet's 60 million to about 2 million. The reason has to do with saving memory at test time.

Inception Architecture

Traditionally, there was no need to distinguish between training and test time because both were done on a single machine. But at Google, training could be distributed over lots of computers in a data center, while the network was also supposed to be runnable on an Android cell phone, so that images wouldn't have to be sent to Google's servers for classification. On a cell phone, it would have been extravagant to spend 240 MB to store AlexNet's 60 million parameters, so it was crucial to cut down on parameters to make the network fit in memory. They achieved this in two ways. First, they eliminated the fully connected layers, which, as we already saw, contain most of the parameters in LeNet and AlexNet: GoogLeNet is convolutions all the way. Second, they used 1x1 convolutions to reduce the number of feature maps before the more expensive larger convolutions; this is analogous to how linear bottleneck layers can reduce the number of parameters. They call this layer-within-a-layer architecture "Inception," after the movie about dreams within dreams.

Performance on ImageNet improved astonishingly fast during the years the competition was run. Here are the figures: It's unusual for error rates to drop by a factor of 6 over a period of 5 years, especially on a task like object recognition that hundreds of researchers had already worked hard on and where performance had seemed to plateau. Human performance is around 5.1%. They stopped running the object recognition competition because the performance is already so good.

Thanks to Prof. Roger Grosse and Prof. Jimmy Ba at the University of Toronto for their excellent class on Neural Networks and Deep Learning.
Some of the content in this article is taken from their teaching notes. Reference: Roger Grosse and Jimmy Ba, Lecture Slides: http://www.cs.toronto.edu/~rgrosse/courses/csc421_2019/slides/lec10.pdf
https://karthikbhaskar.medium.com/the-emergence-of-modern-conv-nets-adefe5b56de5
['Karthik Bhaskar']
2019-07-05 20:33:30.978000+00:00
['Machine Learning', 'Deep Learning', 'Convolutional Network', 'Data Science', 'Artificial Intelligence']
“Not So Much A Library, More A Way of Life”
“Not So Much A Library, More A Way of Life” The Lit & Phil is my favourite place on the planet. What’s yours? Photo by Jaredd Craig on Unsplash I was warned, when I first moved to Newcastle. “Chaz, beware the Lit and Phil,” they said. “It’s not so much a library, it’s more a way of life,” they said. “Once you’re in its clutches,” they said, ominously… Properly, it’s called The Literary and Philosophical Society of Newcastle upon Tyne. Despite the name, it’s really a private library. It occupies a beautifully proportioned Georgian building close by the railway station at the heart of the city. The interior’s Victorian, though, thanks to a total refurbishment after a devastating fire in 1850. It’s still beautiful: library porn at its best. Crazy-high domed ceilings, wrought-iron spiral staircases, galleries and Greek statuary (there used to be a genuine Egyptian mummy, but that was lost in the fire, alas) and of course books everywhere, more books than you can possibly imagine. I heeded the warnings and kept my distance — hell, it cost money to join, and I didn’t have money — until at last I couldn’t any longer. I’d been invited to write for an anthology of Shakespearean historical mysteries, each story treating with a different play; I desperately wanted to subvert the brief by writing about the Sonnets, or more specifically the man who first printed them (George Eld, since you ask). I’d done a little letterpress printing, and loved it as a craft — but of course I knew nothing about Jacobean methods or processes. The Lit & Phil, on the other hand, knew everything, or at least everything I needed and more besides. It’s probably impractical to spend two weeks researching for one short story, but oh, it was such fun. I was hooked. Besides which, I had a Crusader-based fantasy series to write next, and what I most needed to read were the outdated narrative histories of a previous generation. 
They weren’t in the university library, but of course they were in the Lit & Phil. (When they buy a book for their permanent collection, then “permanent” is the word that applies. I once borrowed an Esperanto dictionary, which turned out to be the Esperanto dictionary, the original; it had been bought the year of publication, 1908, and no one had ever borrowed it until I did, more than a century later…) So there I was, half in love with the place already — and then my passion for new smart tech overtook me, and I bought a laptop. Thing was, I was old-fashioned in my writerly habits. I had a study at home, and a desk, with a big desktop computer on it (which in fact I still do: it’s a different home and a different desk and in a different country, but nevertheless), and that was where writing happened. I really didn’t need a laptop, I had no conceivable use for it. But oh, it was a thing of beauty, made of carbon fibre and weighing less than a kilogram, despite having a full-size keyboard and a disk drive and so on and so forth. I lusted after that thing, and when the chance came — when the price dropped, to be frank — I snatched it up. And then, of course, I had to justify it. People with laptops work out of the house, right? They work on the move, they work in coffee-shops, they work in pubs… So I changed the habits of a lifetime, and tried this out; and found that I loved it, to the point where almost the only work I was doing was when I was travelling or otherwise away from home. Which was unsustainable, clearly. What I needed was a place to be that was not my house, but was accessible say six days of the week, and quiet, and could guarantee me a table and a chair, and… The Lit & Phil has a Silence Room, down in the basement. Essentially, I moved in. How many novels did I write down there? I’m not sure, but the most of my output over ten or twelve years, most like. We embraced each other, that room and I.
I had my own table there, and heaven preserve any interloper (there were honestly not that many over the years, but a few, oh yes) from the ferocity of my glare. One infuriating day an elderly man and a teenage boy dared to play chess there, and chess is not a game that can be played in silence, especially when one player is mentoring the other; I made a story out of that (but that, as it happens, is another story, and shall be told elsewhen). When I had visitors, showing them around the Lit & Phil was my prime delight. One time the teenage daughter of my oldest friends had come to stay, and I took her up into the gallery, where the older books were shelved, and said that one of the joys thereof was that you could fling your hand out at random and practically be assured of finding something interesting. And I suited the action to the word, plucked a book off the shelf without looking to see what I had — and now I have my own copy of that selfsame book, because it was “Beards” by Reginald Reynolds, which is just irresistible. It’s a social history of the beard, by one of those classic eccentric British academics of days of yore, and it’s a delight from first to last, and I never would have found it else. There is much, much more to be said about the Lit & Phil, and the various ways it changed and saved my life; but those must wait for another day. I will leave you, though, with this one last touching truth: You can no longer smoke anywhere in the building (when I joined, back in the ’90s, there was still a Smoking Table; they closed it down very shortly after I quit smoking) but this is not the sort of library that fusses about food and drink. You can bring your own lunch in, and eat as you work; failing that, you can buy tea and biscuits at the hatch. You’re not actually supposed to consume alcohol on the premises except at organised events, but, y’know. No one’s going to examine the contents of your thermos flask.
https://chazbrenchley.medium.com/not-so-much-a-library-more-a-way-of-life-e31428363b9b
['Chaz Brenchley']
2019-08-10 19:14:06.634000+00:00
['Books', 'Lit And Phil', 'Esperanto', 'Libraries', 'Newcastle']
Improving Reading Focus by Dimming Online Distractions
The Information Superhighway is Lined with Billboards — We’re Here to Help Reading on the web is distracting. Not only are you bombarded by email dings, calendar reminders, and Slack notifications, but the very website you’re reading is preventing you from focusing. How? By surrounding the one thing you care about — the body text on the page — with a rotating cast of colorful icons, calling out to you: SHARE THIS ARTICLE ON FACEBOOK, LINKEDIN, AND GOOGLE PLUS! Got a minute? PLAY SPIDER SOLITAIRE OR MAHJONGG! And don’t forget to SUBSCRIBE TO OUR NEWS ALERT EMAIL LIST! Focus Mode: Blinders for the Internet We’re helping people focus while reading by literally dimming these online distractions. Here’s how it works: Focus Mode dims the distractions that surround the article. You can customize the level of dimming from 1% to 100%. Whenever you move the mouse, the dimming disappears — so you can click on links or navigate the site (this is also why it’s better than the “reader mode” offered in some browsers, which has to be deactivated before you can resume navigation). You can deactivate Focus Mode for any websites that don’t work well with it. BeeLine Reader + Focus Mode: flipping the script to make websites more readable Focus Mode is the latest addition to the BeeLine Reader Chrome extension, which people around the world use millions of times a week. The main function of BeeLine Reader is to enhance reading ease, speed, and focus by applying a line-wrapping color gradient to text (see demo below, if this is the first time you’ve heard of BeeLine). With the addition of Focus Mode, we’re taking things one step further to improve website readability. Instead of making you focus on boring black text — which is the least visually interesting thing on the page — we format the web in a reader-centric way. We dim the distractions and apply our focus-enhancing color gradient to the body text, which makes it supremely easy to focus while reading.
https://medium.com/hackernoon/improving-reading-focus-by-dimming-online-distractions-ae52726abe1
['Beeline Reader']
2017-07-17 17:11:38.217000+00:00
['Education', 'Focus', 'Accessibility', 'Productivity', 'Lifehacks']
Building Pinalytics: Pinterest’s data analytics engine
Stephanie Rogers | Pinterest engineer, Discovery Pinterest is a data-driven company. On the Data Engineering team, we’re always considering how we can deliver data in a meaningful way to the rest of the company. To help employees analyze information quickly and understand metrics more efficiently, we built Pinalytics, our own customizable platform for big data analytics. We built Pinalytics with the following goals in mind: Simple interface to create custom dashboards to analyze and share data Scalable backend that supports low latency queries Support for persisting and automatically updating report data Ability to segment and aggregate high dimensional data Pinalytics architecture Pinalytics has three main components: Web app, which has the rich UI components to help users dig in and discover insights Reporter, which generates the data in a report format Backend, which consists of the Thrift service and HBase databases that power the tool User interface The Pinalytics web application stack consists of MySQL, SQLAlchemy and the Flask framework. The frontend is built using the React.js library to create user interface components such as simple, interactive visualizations. Customizability Visualization is the main form of analysis within Pinalytics, with a specific focus on time-series plots that update daily. Everything can be customized, including the data displayed in a chart (by choosing specific built-in or user-defined metrics) as well as the chart itself (by selecting various features of the visualization). We have more than 100 custom dashboards that offer access to charts and metrics for daily tracking. We offer operations that are applied globally to all charts within a particular dashboard, including segmentation, rollup aggregation and setting of the time window or axes origins.
Metric composer The metric composer is a tool on top of Pinalytics that produces customized time series by combining metrics via composite functions on the frontend. Pinterest teams can call and embed various functions to create a formula that will be evaluated and displayed dynamically. For example, DIVIDE(SUM(M1, M2), M3) would be a valid composition. We support basic arithmetic functions as well as more complex calculations that we commonly use, including a simple moving average with a seven-day window, three-day lag difference and anomaly detection. Custom reporting and metric computation We wanted to enable employees to customize both the data being visualized and the segmentation of that data, a combination we call a report. Creating a new customized segment report for Pinalytics that’s updated daily takes as little as a few lines of code, or simply writing a Hive query in a separate user interface built for non-technical users. After running the query, the report will automatically appear on Pinalytics for further analysis and update daily. With the ability to write custom jobs comes the possibility of redundancy, where the same data is processed over and over again. To avoid this problem, we consolidated the core ETLs that compute common metrics. These core metrics are also adjusted for the most recent spam definitions. Finally, we offer higher dimensionality by segmenting data on important features such as gender, country and application type, when applicable. The core metrics consist of data that tracks user activity, events, retention and signups, as well as unique events and events per application type. Scalable analytics engine One big challenge in building a system like Pinalytics at scale was allowing flexible and efficient aggregation over multidimensional data at run time.
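To make the metric composer described above concrete, here is a tiny evaluator for formulas in the DIVIDE(SUM(M1, M2), M3) style. The function names mirror the article's example, but the parsing code is purely illustrative — Pinterest's actual frontend implementation is not public:

```python
import re

# Each function maps a list of time series (lists of floats) to one series.
FUNCS = {
    "SUM": lambda series: [sum(vals) for vals in zip(*series)],
    "DIVIDE": lambda series: [a / b for a, b in zip(series[0], series[1])],
}

def evaluate(formula, metrics):
    """Evaluate a formula like 'DIVIDE(SUM(M1, M2), M3)' pointwise over
    named time series in `metrics` (metric name -> list of floats)."""
    tokens = re.findall(r"[A-Za-z_]\w*|[(),]", formula)
    pos = 0

    def parse():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if pos < len(tokens) and tokens[pos] == "(":  # function call
            pos += 1                                  # consume '('
            args = [parse()]
            while tokens[pos] == ",":
                pos += 1
                args.append(parse())
            pos += 1                                  # consume ')'
            return FUNCS[tok](args)
        return metrics[tok]                           # plain metric name

    return parse()
```

For instance, evaluate("DIVIDE(SUM(M1, M2), M3)", {"M1": [1, 2], "M2": [3, 4], "M3": [2, 2]}) returns [2.0, 3.0], evaluating the composition pointwise across the two days of data.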
For example, a single chart segmented by country, app type and event type across different days on fully denormalized data could end up rolling up over 1 billion rows in one load. To avoid pre-processing of data, while still providing low latency post-aggregation, we needed a scalable backend engine. To address this problem, we designed our own backend made up of several components, including a Thrift server, HBase with coprocessors deployed and a secondary index table. The secondary index table contains the metadata for all reports, such as the metrics, segment keys and encodings of each report. When a report is created, the Thrift Handler automatically invokes the creation of an HBase table for the report, manages the splits of this report table and constructs segment key encodings, which are persisted into the secondary index table. The purpose of segment value encoding is to keep the row key as short as possible. Each report can have multiple metrics and dimensions (segments). The row key is constructed from the metric and the encoded segment values. To support flexible roll-up of arbitrary dimensions/segments for each request, the Thrift server takes advantage of the HBase FuzzyRowFilter. The Thrift server constructs a FuzzyRowFilter mask to filter eligible row keys based on the segment specification. The Thrift Handler then initiates a parallel request on each table region with this mask. Each region has a coprocessor deployed to roll up metric values for qualified rows in parallel. The Thrift Handler performs the final aggregation on the results returned from each region to obtain the final value. Metrics computation As mentioned earlier, we faced a variety of challenges while building these metrics. Let’s take activity metrics as an example. Activity metrics depend on multiple days of user activity data. For instance, we have a dataset called xd28_users that tracks users’ different actions during the last 28 days.
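The FuzzyRowFilter-style rollup described above can be modeled in miniature: a row key is a tuple of segment values, a query pins some dimensions and wildcards the rest, and qualifying rows are summed. This is an illustrative sketch only — the real system does this inside HBase region coprocessors, in Java, over encoded byte keys:

```python
# Miniature model of a fuzzy-masked rollup. Row keys are tuples of
# segment values; the data below is made up for illustration.
ROWS = {
    # (country, app_type, event_type) -> metric value
    ("US", "iOS",     "click"): 120,
    ("US", "Android", "click"):  80,
    ("FR", "iOS",     "click"):  30,
    ("US", "iOS",     "view"):  500,
}

def rollup(rows, spec):
    """spec maps dimension index -> required value; any dimension not in
    spec is a wildcard (the 'fuzzy' part of the row-key mask)."""
    total = 0
    for key, value in rows.items():
        if all(key[i] == v for i, v in spec.items()):
            total += value
    return total

# Clicks in the US across all app types: matches the first two rows.
clicks_us = rollup(ROWS, {0: "US", 2: "click"})  # 120 + 80 = 200
```

In the real backend the same matching happens region-by-region in parallel, with the Thrift Handler summing the per-region partial totals.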
As it takes some time to accurately catch spammy users, once we learn about a user’s malicious activities we have to recompute past metrics. Recomputing these metrics is expensive both in terms of I/O and computation, since a large amount of user activity data over the past X months needs to be processed. Additionally, our data is partitioned by date, which prevents us from efficiently aggregating the activity metrics. A natural way to aggregate user actions for consecutive days or months is to keep a rolling window of metrics for each user over 28 days and partially add or subtract events from these metrics as we move forward day by day. However, keeping the intermediate metrics and event data in memory was prohibitively expensive due to the scale. We first transformed the data so that each user’s event data and spam definition over months are stored in contiguous file blocks. This way, we only have to keep the intermediate metrics and event data of one user in memory at a given time. The resulting core metrics are then computed by a Cascading job. Cascading’s data streaming abstraction on Hadoop MapReduce improves developer productivity and provides more flexibility than Hive. The data transformation, along with the rolling-window algorithm, gave us a speedup of 20x to 50x. Outcomes More than half of the teams at Pinterest use Pinalytics to track relevant metrics daily and make fast, data-backed decisions to improve the user experience. If you’re interested in working on large scale data processing and analytics challenges like this one, join our team! Acknowledgements: Pinalytics was built by Chunyan Wang, Stephanie Rogers, Jooseong Kim and Joshua Inkenbrandt, along with the rest of the data engineering team. We got a lot of useful feedback from the Business Analytics team during the development phase and from other engineers across the company. For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter.
Interested in joining the team? Check out our Careers site.
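The rolling-window aggregation described under “Metrics computation” above — adding the newest day and subtracting the day that leaves the window instead of re-summing 28 days of events — can be sketched for a single user. This is illustrative only; the production version is a Cascading job on Hadoop:

```python
from collections import deque

def rolling_active_days(daily_counts, window=28):
    """For one user, return the windowed event total for each day by
    adding the newest day and subtracting the day leaving the window,
    instead of re-summing `window` days every time."""
    buf, total, out = deque(), 0, []
    for count in daily_counts:
        buf.append(count)
        total += count
        if len(buf) > window:          # oldest day falls out of the window
            total -= buf.popleft()
        out.append(total)
    return out
```

Each step is O(1) regardless of window size, which is the source of the speedup once each user's events sit in contiguous blocks and can be streamed one user at a time.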
https://medium.com/pinterest-engineering/building-pinalytics-pinterests-data-analytics-engine-e14c651780ca
['Pinterest Engineering']
2017-02-17 22:03:02.363000+00:00
['Big Data', 'Pinterest', 'Analytics', 'Dataengineering']
AoGProTips: Synchronize animations with the Text-To-Speech
When building smart display games for the Google Assistant using Interactive Canvas, you can add fun animations to create a fully immersive experience. Did you know you can synchronize your animations with audio, such as making a dinosaur open its mouth at the exact moment you play the roaring sound? In this blog post, you will learn how to use the SSML mark tag with the onTtsMark callback function to synchronize your animations with audio. SSML stands for Speech Synthesis Markup Language. By using SSML, you can make your conversation’s responses sound more natural by adding breaks between words, and adjusting the speed, pitch and rate of a word. Look at the SSML example below. <speak>The dinosaur is about to roar <mark name='START_ROAR'/><audio src='roar.mp3'/><mark name='STOP_ROAR'/></speak> In the SSML, the <mark> tag allows you to indicate at which points in the generated TTS audio the dinosaur should start and stop animating. It generates events during TTS; your code has a callback that gets triggered by each mark tag. Each mark event has a name. In this example, we have two events named “START_ROAR” and “STOP_ROAR”. You need to write code that can be triggered by each mark event. The code for the “START_ROAR” event can open the dinosaur’s mouth after the spoken prompt “The dinosaur is about to roar” is complete. Similarly, the code for the “STOP_ROAR” event can close the dinosaur’s mouth. Now that the mark tags are in place within the SSML, you can write the logic for each animation for when the mark tag is hit. Keep in mind, the name of each mark tag must be unique within the SSML so that the onTtsMark callback responds to the correct cue. Let’s look at the code below. You register a callback for onTtsMark. The onTtsMark() method receives the markName in the SSML and, depending on the name of the mark tag, triggers the corresponding logic, which plays the animation of the dinosaur moving its mouth.
For the ‘START_ROAR’ markName, it will call the beginRoaring function, whereas ‘STOP_ROAR’ will call the stopRoaring function to stop the animation. Now that you have learned how to synchronize animations using the SSML mark tag and the onTtsMark callback, we hope you apply this tip in your next Action to create a fully immersive gaming experience for your users. If you have a tip that you think other developers should know about, share your thoughts with us on Twitter using #AoGProTips. Lastly, check out our collection of other pro tips here.
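Because the mark tags are plain SSML text, the prompt can be assembled in whatever language your webhook uses. Here is a minimal Python sketch — the function name and audio URL are hypothetical, and the official Interactive Canvas samples use Node.js:

```python
def roar_prompt(audio_url):
    """Build an SSML prompt that fires START_ROAR just before the audio
    plays and STOP_ROAR right after it finishes; the names are the cues
    the onTtsMark callback will receive on the Canvas side."""
    return (
        "<speak>"
        "The dinosaur is about to roar "
        "<mark name='START_ROAR'/>"
        f"<audio src='{audio_url}'/>"
        "<mark name='STOP_ROAR'/>"
        "</speak>"
    )
```

Keeping the mark names in one place like this makes it easier to keep them unique and in sync with the dispatch logic in the onTtsMark callback.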
https://medium.com/google-developers/aogprotips-synchronize-animations-with-the-text-to-speech-e9bb64860b44
['Mandy Chan']
2020-04-02 16:05:00.688000+00:00
['Voice Assistant', 'Developer', 'Actions On Google', 'Development', 'Google Home']
Making audience research count: six lessons from Norway’s leading business paper
Making audience research count: six lessons from Norway’s leading business paper Ingeborg Volan’s insights from her year-long audience research at Dagens Næringsliv Ingeborg Volan during the News Impact Academy, Paris 2018 Finding out what metrics to use and how to use them to measure engagement remains a challenge for many news organisations. And once you’ve gained new insights about your readers, how do you use them to add value to your journalism? To find out more about the specifics of audience research in a news organisation, we talked with our former News Impact Academy participant and News Impact Summit speaker Ingeborg Volan, Director of Audience Engagement at the Norwegian business paper Dagens Næringsliv (DN). Over the last year, DN carried out extensive audience research to get an in-depth understanding of their readers, and to pinpoint what makes great journalistic storytelling. They’ll use the results to shape a new strategy ahead of DN’s reorganisation this fall. And they’ve already implemented some of it in their day-to-day work. “We’re starting to build a new culture for trying to understand what makes subscribers become loyal and active readers,” Volan said. “We’ve created a close collaboration between the newsroom and the user revenue department, which deals with subscriptions.” Here are some important lessons they’ve learned about audience research during this process. 1. Listen like ethnographers, not like journalists Volan set up ethnographic interviews with 30 potential readers after she had practised this audience research method at the News Impact Academy, led by Marie Gilot from CUNY. This method is used to better understand human behaviour. “Rather than asking people how they feel about the newspaper, our conversations focused on the people themselves… to find out what their lives are like,” Volan said. “What kind of person are you? What’s your day like?
Do you prefer using your mobile device, desktop or print?” Volan and her team now have a clearer image of their readers and they’re using these insights to make their journalism more relevant and interesting. DN redesigned their mobile platform in autumn 2018 2. It takes a large amount of data to know your readers The team also did a data analysis to find out what topics and articles their users are interested in. They took four months of data from 150,000 devices from users that had more than 50 page views across all DN’s digital platforms. The data enabled them to identify five interest-based segments. Combined with the results from the ethnographic interviews, they managed to identify standardised users, or so-called ‘personas’. For example, one group of readers is more interested in markets and finance, while another distinct group of readers is more into politics and society. “Now we know how big these groups actually are,” Volan said. 3. Think about your readers’ needs Additionally, the team analysed when and how users accessed DN’s platforms. They tried to understand the emotional reasons users may have for interacting with certain types of stories, using a BBC World Service study on reader needs as a source of inspiration. “The politics segment prefers using the app, whereas the finance segment more often uses desktop and preferably during the daytime,” said Volan. “There is also an evening and weekend segment of readers, which are more into leisure, free time and personal finance.” The team used these new insights to introduce a policy for front page publishers, indicating what stories should go out in the morning, night, and weekend. “On a weekday evening, they’ll have one or two good stories to relax or learn something, aiming to respond to the emotional background around news consumption,” said Volan. 4. Uncover underrepresented reader groups The research also revealed some underrepresented segments, such as female readers and young people. 
“About 37% of the workforce in Norwegian businesses is female, and only one in four of our readers is a woman, so we should be able to have at least 37% female readership,” said Volan. When it comes to young readers, DN is particularly interested in reaching the ones who will end up in Norwegian business and administration. ”We’re working really hard on making our journalism accessible to people entering the workforce or even during their college and university years.” DN recently did a redesign of their print edition 5. Downgrade the importance of metrics around virality In terms of analytics, Volan’s team uses standard measurements, KPIs and dashboards to make sure they’re aligned with the company’s strategy. However, they’ve eliminated pageviews around viral stories from their internal reports. “Pageviews of stories that go viral on Facebook only show random, flyby users who never visit our website anyway,” said Volan. “We want to make sure that it’s our regular users and subscribers who are accessing the content.” 6. Produce more habit-forming content The team also put effort into determining which stories are kept subscription-only and what content will be shared for free to recruit new types of readers. “Some content doesn’t necessarily convert into new subscriptions, but it has high value to our readers and it builds loyalty and habits,” said Volan. “We’ve been working really hard on having enough of that sort of habit-forming content.” Overall, Volan sees data as “really good feedback” from readers, and a helpful tool to understand the audience, which helps to improve their journalism. “We want to make sure that we don’t focus all our journalism based on what our numbers show,” Volan said. “We don’t want to fall into a clickbaity sort of approach. It’s not what we do. And being a subscription business we’re not as tempted to do that.”
https://medium.com/we-are-the-european-journalism-centre/making-audience-research-count-6-lessons-from-norways-leading-business-paper-b2d59c67b4df
['Ingrid Cobben']
2019-12-19 15:30:14.500000+00:00
['Ethnographic Research', 'Journalism', 'Audience Research', 'Insights', 'Analytics']
Influencers in Your Backyard: Our Top 25 Based in Atlanta
Julius is an influencer marketing platform leveraged by brand and agency marketers to connect with creators and manage campaigns. Find more influencer marketing tips and best practices on our blog.
https://medium.com/juliusworks/influencers-in-your-backyard-our-top-25-based-in-atlanta-8f7d26840c7b
[]
2017-06-07 21:48:12.602000+00:00
['Marketing', 'Digital Influencers', 'Influencers', 'Influencer Marketing', 'Atlanta']
The case for designing for accessibility, and how to actually practice it
Why is it important? Inaccessible and unethical design can harm people 😣 Digital experiences that overlook the needs across different demographics, including disabled consumers, can unintentionally exclude and even harm users. User harm comes in various forms. In the context of inaccessible design, demographics can feel societal harm in the form of exclusion, reinforced stereotypes, unequal opportunities, and discrimination. Imagine a scenario where Marc, a colorblind man, tries to book a doctor's appointment through an app. The app isn’t optimized for colorblind users, so Marc has a hard time reading specific call-to-action buttons and crucial information sections – leaving Marc feeling frustrated and stuck. Colorblind individuals like Marc may have a difficult time differentiating between colors like red and green within a digital scheduling calendar. You’re probably thinking, “I would never want to harm any of my users!” Of course, you wouldn’t intentionally seek to exclude or harm users with your work. Though exclusive design is hardly ever intentional, design decisions will always impose consequences on all types of users – those that are intended to use your product, and those who aren’t in your targeted scope. Lu Han from Spotify.Design puts it beautifully: “These unethical decisions aren’t usually the result of bad intentions. Rather, it’s a systematic issue that stems from our focus on short-term business goals, like engagement and revenue, often at the expense of user trust and wellbeing.” Accessible design breaks into huge untapped markets 🔨 It’s a common misconception that spending time to build websites and apps for users with specific needs is too expensive or time-costly since these underrepresented demographics don’t constitute a majority of mainstream technology users. 
As the Head of Product Inclusion at Google, Annie Jean-Baptiste, says, “Their voices may not be those traditionally heard and listened to in the product design process, but theirs are the voices that will define the future of your products, making them richer and better overall.” Your organization will prosper if diversity, equity, and inclusion are pursued and properly implemented across your products, teams, and business decisions. Though everyone has different needs, people generally want to feel seen and heard through technology. People are naturally more likely to consume products that resonate with their unique backgrounds and fulfill their individual needs. Representation within products may be more elusive than representation in entities like media, yet the most successful products are those that people can personally resonate with and interact with intuitively. Product development has a ton of untapped markets — demographics including those who are 65 and older, LGBTQ+, and/or disabled. One of the reasons why brands like Nike and Coca-Cola are globally recognized is that these companies focus on rolling out inclusive marketing campaigns. Just take a look at Fenty Beauty. Rihanna’s cosmetic brand launched in 2017 with 40 different foundation colors to accommodate individuals of different complexions – as a result, consumers eagerly purchased her products, deeming her foundation launch a commercial success. As a marketer, designer, or manager, you have the ability to include and unify your users and customers in a way that makes everyone feel like they belong. Practicing accessible, inclusive design makes people feel heard and accounted for – in return, you miss fewer opportunities as you expand your target audience. Building accessible products benefits more people than you’d expect 🙇🏻‍♀️ Have you ever taken a moment to appreciate the dips in an elevated curbside?
These curb cuts were originally designed to make streets more accessible to wheelchair users. Yet, the people who use and benefit from curb cuts extend beyond wheelchair users – stroller-pushers, skateboarders, luggage-carriers can all appreciate the cut in the sidewalk. It’s the same thing for closed captioning, wheelchair ramps, and text-to-speech technology. Behold, the curb cut — a ramp cut into a street curb to provide access between a sidewalk and the street. This phenomenon is called the curb-cut effect. Technology that is designed for disabled people can help everybody – not just the intended user group. In the larger picture, universal design can be utilized to target systemic issues and inequalities. The curb-cut effect is based on the notion of equity, not to be mixed up with equality. “Equality gives everyone the right to ride on the bus. Equity ensures that there are curb cuts so people in wheelchairs can get to the bus stop and lifts so they can get on the bus, and ensures that there are bus lines where people need them so they can get to wherever they need to go.” – Angela Glover Blackwell To integrate design accessibility practices in the very products we use every day is to push for equitable opportunities for users of all needs to prosper and reach their full potential. As the technology industry knocks down the walls of exclusion in their products and services, society as a whole can build accessible pathways to success. In the end, everybody gains.
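One concrete way to actually practice this is to check color contrast programmatically instead of trusting your own eyes — the scenario that left Marc stuck above. The WCAG 2.x contrast-ratio formula is short enough to implement directly (the thresholds come from the WCAG spec; the example colors are hypothetical):

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an (r, g, b) tuple in 0-255."""
    def channel(c):
        c /= 255.0
        # Linearize the sRGB channel value per the WCAG definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG AA requires at least
    4.5:1 for normal body text (3:1 for large text)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1) == 21.0
```

Pure red on pure green — the kind of pairing that trips up users like Marc — comes out well below the 4.5:1 AA threshold, so a check like this can flag the problem long before a colorblind user hits it.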
https://rachellejin.medium.com/the-case-for-designing-for-accessibility-and-how-to-actually-practice-it-43018cd5af80
['Rachelle Jin']
2020-12-10 02:59:25.387000+00:00
['Accessibility', 'Design', 'UX', 'UX Design', 'Product Design']
DNN training for LibSVM-formatted data — From Keras to Estimator
Background Suppose you have a LibSVM-formatted dataset that has already been run through LR and GBDT, and you want to know quickly how a DNN performs on it. Then this article is for you. Although deep learning research and application have been growing in popularity for years, and TensorFlow is well known amongst users who specialize in algorithms, not everyone is familiar with this tool. In addition, building even a simple DNN model on a personal dataset is not instant, especially when the dataset is in LibSVM format. LibSVM is a common format for machine learning, and is supported by many tools, including Liblinear, XGBoost, LightGBM, ytk-learn and xlearn. However, TensorFlow has not provided an elegant solution for it, either officially or from the community, which has caused a lot of inconvenience for new users and is unfortunate for such a widely used tool. To this end, this article provides a fully verified solution (with code), which I believe can save new users some time. Introduction The code in this article can be used: To quickly verify the effect of a LibSVM-formatted dataset on a DNN model, to compare it with other linear models or tree models, and to explore the limits of the model. To reduce the dimension of high-dimensional features. The output of the first hidden layer can be used as an Embedding and added to other training processes. To get beginners started with TensorFlow Keras, Estimator, and Dataset. The coding follows these principles: Instead of developing your own code, use official code or other code recognized as having the best performance, unless you have no choice. The code should be as streamlined as possible. Optimal time and space complexity is pursued. This article only introduces the most basic DNN multi-class classification training and evaluation code. For more advanced and complex models, see outstanding open-source projects such as DeepCTR.
Subsequent articles will share the application of these complex models in practical research. Below are four progressively advanced approaches to training a DNN on LibSVM-formatted data with TensorFlow. The latter two are recommended. Keras Generator Three points to note: TensorFlow API: Building a DNN model with TensorFlow is easy for skilled users. A DNN model can be built right away with the low-level API, but the code is slightly messy. By contrast, the high-level API Keras is more “considerate”, and the code is extremely streamlined and clear at a glance. LibSVM-formatted data reading: It is easy to write code that reads data in LibSVM format: you simply convert the sparse coding into dense coding. However, since sklearn already provides load_svmlight_file, why not use it? This function reads the entire file into memory, which is feasible for smaller data sizes. fit and fit_generator: Keras model training only accepts dense coding, while LibSVM uses sparse coding. If the dataset is not too large, it can be read into memory through load_svmlight_file. However, if you convert all the data into dense coding and then feed it to fit, the memory may blow up. The ideal solution is to read the data into memory on demand and convert it then. Here, for convenience, all the data is read into memory using load_svmlight_file and kept in sparse coding, then fed to fit_generator in batches during training.
The code is as follows:

import numpy as np
from sklearn.datasets import load_svmlight_file
from tensorflow import keras
import tensorflow as tf

feature_len = 100000  # feature dimension; replace with X_train.shape[1] below
n_epochs = 1
batch_size = 256
train_file_path = './data/train_libsvm.txt'
test_file_path = './data/test_libsvm.txt'

def batch_generator(X_data, y_data, batch_size):
    number_of_batches = X_data.shape[0] / batch_size
    counter = 0
    index = np.arange(np.shape(y_data)[0])
    while True:
        index_batch = index[batch_size * counter:batch_size * (counter + 1)]
        X_batch = X_data[index_batch, :].todense()
        y_batch = y_data[index_batch]
        counter += 1
        yield np.array(X_batch), y_batch
        if counter > number_of_batches:
            counter = 0

def create_keras_model(feature_len):
    model = keras.Sequential([
        # additional hidden layers can be added here
        keras.layers.Dense(64, input_shape=[feature_len], activation=tf.nn.tanh),
        keras.layers.Dense(6, activation=tf.nn.softmax)
    ])
    model.compile(optimizer=tf.train.AdamOptimizer(),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

if __name__ == "__main__":
    X_train, y_train = load_svmlight_file(train_file_path)
    X_test, y_test = load_svmlight_file(test_file_path)
    keras_model = create_keras_model(X_train.shape[1])
    keras_model.fit_generator(
        generator=batch_generator(X_train, y_train, batch_size=batch_size),
        steps_per_epoch=int(X_train.shape[0] / batch_size),
        epochs=n_epochs)
    test_loss, test_acc = keras_model.evaluate_generator(
        generator=batch_generator(X_test, y_test, batch_size=batch_size),
        steps=int(X_test.shape[0] / batch_size))
    print('Test accuracy:', test_acc)

The code above is what I used in earlier practical research. It completed the training task at the time, but its shortcomings are obvious. On the one hand, the space complexity is high: keeping the whole dataset resident in memory affects other processes, and for truly large datasets it does not work at all. On the other hand, the usability is poor.
The batch_generator has to be hand-written to batch the data, which is time-consuming and error-prone. TensorFlow's Dataset is the perfect solution. However, at the time I was not familiar with Dataset and did not know how to parse LibSVM with the TF low-level API and convert a SparseTensor to a DenseTensor, so I shelved the problem due to limited time; it was solved later. The key is the decode_libsvm function in the code below. Once LibSVM-formatted data is converted into a Dataset, the DNN is unlocked and can run freely on datasets of any size. The following sections describe, in turn, the use of Dataset with a Keras model, Keras to Estimator, and DNNClassifier. First, the Embedding code, which uses the output of the first hidden layer as the Embedding:

from sklearn.datasets import load_svmlight_file
from tensorflow.keras.models import Model, load_model

def save_output_file(output_array, filename):
    result = list()
    for row_data in output_array:
        line = ','.join([str(x) for x in row_data.tolist()])
        result.append(line)
    with open(filename, 'w') as fw:
        fw.write('%s' % '\n'.join(result))

X_test, y_test = load_svmlight_file("./data/test_libsvm.txt")
model = load_model('./dnn_onelayer_tanh.model')
dense1_layer_model = Model(inputs=model.input, outputs=model.layers[0].output)
dense1_output = dense1_layer_model.predict(X_test)
save_output_file(dense1_output, './hidden_output/hidden_output_test.txt')

Keras Dataset Here, the LibSVM-formatted data read by load_svmlight_file is replaced with a Dataset parsed by decode_libsvm.
import tensorflow as tf
from tensorflow import keras

feature_len = 138830
n_epochs = 1
batch_size = 256
train_file_path = './data/train_libsvm.txt'
test_file_path = './data/test_libsvm.txt'

def decode_libsvm(line):
    columns = tf.string_split([line], ' ')
    labels = tf.string_to_number(columns.values[0], out_type=tf.int32)
    labels = tf.reshape(labels, [-1])
    splits = tf.string_split(columns.values[1:], ':')
    id_vals = tf.reshape(splits.values, splits.dense_shape)
    feat_ids, feat_vals = tf.split(id_vals, num_or_size_splits=2, axis=1)
    feat_ids = tf.string_to_number(feat_ids, out_type=tf.int64)
    feat_vals = tf.string_to_number(feat_vals, out_type=tf.float32)
    # LibSVM feature indices start at 1, so feat_ids must be decremented by 1
    sparse_feature = tf.SparseTensor(feat_ids - 1, tf.reshape(feat_vals, [-1]), [feature_len])
    dense_feature = tf.sparse.to_dense(sparse_feature)
    return dense_feature, labels

def create_keras_model():
    model = keras.Sequential([
        keras.layers.Dense(64, input_shape=[feature_len], activation=tf.nn.tanh),
        keras.layers.Dense(6, activation=tf.nn.softmax)
    ])
    model.compile(optimizer=tf.train.AdamOptimizer(),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

if __name__ == "__main__":
    dataset_train = tf.data.TextLineDataset([train_file_path]).map(decode_libsvm).batch(batch_size).repeat()
    dataset_test = tf.data.TextLineDataset([test_file_path]).map(decode_libsvm).batch(batch_size).repeat()
    keras_model = create_keras_model()
    sample_size = 10000  # fit requires steps_per_epoch, so the sample count must be obtained in advance
    keras_model.fit(dataset_train, steps_per_epoch=int(sample_size / batch_size), epochs=n_epochs)
    test_loss, test_acc = keras_model.evaluate(dataset_test, steps=int(sample_size / batch_size))
    print('Test accuracy:', test_acc)

This solves the problem of high memory usage: the data no longer has to occupy a large amount of resident memory.
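For reference, each line of a LibSVM file has the form `label index:value index:value …`, with 1-based feature indices. What decode_libsvm does per line with TF string ops can be sketched in plain Python (the helper name parse_libsvm_line is mine, not from the article):

```python
def parse_libsvm_line(line, feature_len):
    """Parse one LibSVM-formatted line into a dense feature vector and a label."""
    tokens = line.split()
    label = int(tokens[0])            # the first token is the class label
    dense = [0.0] * feature_len
    for tok in tokens[1:]:
        idx, val = tok.split(':')
        dense[int(idx) - 1] = float(val)  # shift the 1-based LibSVM index to 0-based
    return dense, label
```

For example, `parse_libsvm_line("2 1:0.5 3:1.5", 4)` yields the dense vector `[0.5, 0.0, 1.5, 0.0]` with label `2`, which is exactly the SparseTensor-to-DenseTensor conversion the TF code performs in-graph.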
However, in terms of usability, two inconveniences remain:

When the Keras fit function is used, steps_per_epoch must be specified. To ensure that each epoch covers the entire dataset, the sample size must be computed in advance, which is unreasonable; dataset.repeat by itself can already guarantee that each epoch runs over all the data. If Estimator is used, steps_per_epoch need not be specified.

The feature dimension feature_len needs to be computed in advance. LibSVM uses sparse encoding, so the feature dimension cannot be inferred by reading only one or a few rows of data. You can run load_svmlight_file offline to obtain the feature dimension as feature_len = X_train.shape[1], and then hard-code it. This is inherent to the LibSVM format, so precomputing the dimension is unavoidable.

Keras Model to Estimator

Another high-level TensorFlow API is Estimator, which is more flexible. Its standalone code is identical to its distributed code, and the underlying hardware need not be considered, so it combines conveniently with distributed scheduling frameworks (such as xlearning). In addition, Estimator seems to get more comprehensive support from TensorFlow than Keras does. Estimator is a high-level API independent of Keras; if Keras was used before, it is impractical to rewrite everything as Estimator code in a short time. For this, TensorFlow provides the model_to_estimator interface, which lets a Keras model benefit from Estimator as well.
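Returning briefly to the second inconvenience above: if loading the whole matrix with load_svmlight_file is too memory-hungry just to learn the dimension, the maximum feature index can also be found with a single streaming pass over the text file. This is a sketch, and the helper name max_libsvm_dim is mine, not from the article:

```python
def max_libsvm_dim(path):
    """Scan a LibSVM file line by line and return the largest feature index seen.

    Because LibSVM indices are 1-based, the largest index equals the
    feature dimension. Only one line is held in memory at a time.
    """
    max_idx = 0
    with open(path) as f:
        for line in f:
            for tok in line.split()[1:]:      # skip the label token
                idx = int(tok.split(':')[0])  # "index:value" -> index
                if idx > max_idx:
                    max_idx = idx
    return max_idx

# usage (hypothetical path): feature_len = max_libsvm_dim('./data/train_libsvm.txt')
```

This still has to be run offline over the training file, but it avoids materializing the sparse matrix.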
from tensorflow import keras
import tensorflow as tf
from tensorflow.python.platform import tf_logging

# enable Estimator logging so progress is printed during training
tf_logging.set_verbosity('INFO')

feature_len = 100000
n_epochs = 1
batch_size = 256
train_file_path = './data/train_libsvm.txt'
test_file_path = './data/test_libsvm.txt'

# note the extra input_name parameter; the return value also differs from the version above
def decode_libsvm(line, input_name):
    columns = tf.string_split([line], ' ')
    labels = tf.string_to_number(columns.values[0], out_type=tf.int32)
    labels = tf.reshape(labels, [-1])
    splits = tf.string_split(columns.values[1:], ':')
    id_vals = tf.reshape(splits.values, splits.dense_shape)
    feat_ids, feat_vals = tf.split(id_vals, num_or_size_splits=2, axis=1)
    feat_ids = tf.string_to_number(feat_ids, out_type=tf.int64)
    feat_vals = tf.string_to_number(feat_vals, out_type=tf.float32)
    sparse_feature = tf.SparseTensor(feat_ids - 1, tf.reshape(feat_vals, [-1]), [feature_len])
    dense_feature = tf.sparse.to_dense(sparse_feature)
    return {input_name: dense_feature}, labels

def input_train(input_name):
    # a lambda is used to pass the extra argument (besides line) to decode_libsvm inside map
    return tf.data.TextLineDataset([train_file_path]).map(
        lambda line: decode_libsvm(line, input_name)).batch(batch_size).repeat(n_epochs).make_one_shot_iterator().get_next()

def input_test(input_name):
    return tf.data.TextLineDataset([test_file_path]).map(
        lambda line: decode_libsvm(line, input_name)).batch(batch_size).make_one_shot_iterator().get_next()

def create_keras_model(feature_len):
    model = keras.Sequential([
        # more hidden layers can be added here
        keras.layers.Dense(64, input_shape=[feature_len], activation=tf.nn.tanh),
        keras.layers.Dense(6, activation=tf.nn.softmax)
    ])
    model.compile(optimizer=tf.train.AdamOptimizer(),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

def create_keras_estimator(feature_len):
    model = create_keras_model(feature_len)
    input_name = model.input_names[0]
    estimator = tf.keras.estimator.model_to_estimator(model)
    return estimator, input_name

if __name__ == "__main__":
    keras_estimator, input_name = create_keras_estimator(feature_len)
    keras_estimator.train(input_fn=lambda: input_train(input_name))
    eval_result = keras_estimator.evaluate(input_fn=lambda: input_test(input_name))
    print(eval_result)

Here, sample_size need not be computed, but feature_len still has to be computed in advance. Note that the dict key returned by the Estimator's input_fn must match the input name of the model; this value is passed through input_name. Many people use Keras, and many open-source projects use it to build complex models. Because of the special format of Keras models, some platforms cannot save them but do support saving Estimator models; in such cases it is very convenient to use model_to_estimator to save a Keras model.

DNNClassifier

Finally, let's directly use the Estimator pre-built by TensorFlow: DNNClassifier.

import tensorflow as tf
from tensorflow.python.platform import tf_logging

# enable Estimator logging so progress is printed during training
tf_logging.set_verbosity('INFO')

feature_len = 100000
n_epochs = 1
batch_size = 256
train_file_path = './data/train_libsvm.txt'
test_file_path = './data/test_libsvm.txt'

def decode_libsvm(line, input_name):
    columns = tf.string_split([line], ' ')
    labels = tf.string_to_number(columns.values[0], out_type=tf.int32)
    labels = tf.reshape(labels, [-1])
    splits = tf.string_split(columns.values[1:], ':')
    id_vals = tf.reshape(splits.values, splits.dense_shape)
    feat_ids, feat_vals = tf.split(id_vals, num_or_size_splits=2, axis=1)
    feat_ids = tf.string_to_number(feat_ids, out_type=tf.int64)
    feat_vals = tf.string_to_number(feat_vals, out_type=tf.float32)
    sparse_feature = tf.SparseTensor(feat_ids - 1, tf.reshape(feat_vals, [-1]), [feature_len])
    dense_feature = tf.sparse.to_dense(sparse_feature)
    return {input_name: dense_feature}, labels

def input_train(input_name):
    return tf.data.TextLineDataset([train_file_path]).map(
        lambda line: decode_libsvm(line, input_name)).batch(batch_size).repeat(n_epochs).make_one_shot_iterator().get_next()
def input_test(input_name):
    return tf.data.TextLineDataset([test_file_path]).map(
        lambda line: decode_libsvm(line, input_name)).batch(batch_size).make_one_shot_iterator().get_next()

def create_dnn_estimator():
    input_name = "dense_input"
    feature_columns = tf.feature_column.numeric_column(input_name, shape=[feature_len])
    estimator = tf.estimator.DNNClassifier(hidden_units=[64],
                                           n_classes=6,
                                           feature_columns=[feature_columns])
    return estimator, input_name

if __name__ == "__main__":
    dnn_estimator, input_name = create_dnn_estimator()
    dnn_estimator.train(input_fn=lambda: input_train(input_name))
    eval_result = dnn_estimator.evaluate(input_fn=lambda: input_test(input_name))
    print('Test set accuracy: {accuracy:0.3f}'.format(**eval_result))

The Estimator code has clear logic; it is easy to use and very powerful. For more information about Estimator, see the official documentation. All of the solutions above except the first (which cannot conveniently process big data) can be run on a single machine, and the network structure and objective function can be modified as needed. The code in this article comes from a survey that took several hours to debug and has sat idle since the survey was completed. It is shared here for reference, and I hope it is helpful to other readers. Original Source
https://medium.com/dataseries/dnn-training-for-libsvm-formatted-data-from-keras-to-estimator-12f12b2da6b8
['Alibaba Cloud']
2019-11-05 09:24:52.391000+00:00
['TensorFlow', 'Machine Learning', 'Big Data', 'Algorithms', 'Alibabacloud']
The Pressure of Perfection
I am willing to guess that at least once in your lifetime you have been told that “practice makes perfect” or to “strive for perfection” or that your work’s “not perfect but…”. Well, I am here to clear up a myth as old as time — perfection does not exist! Practice in fact makes permanence, and we should be striving for our best (which is different for everyone). The definition of perfection has been created by society's expectations of how we should present ourselves and act. Take the media, for example — we are presented with one ‘perfect’ body type which in most cases is completely unachievable, but it is what we have accepted in society as being a norm. Social media bombards us with perfection every day. Whether it is ads for weight loss products or an ab machine to fit us into said ‘perfect’ body type. Maybe it is an idolization of the ‘perfect’ lifestyle. You know, the one where you own a mansion with 50 members of staff, unlimited money, and everything you can ever dream of. The people who have that are usually celebrities who dominate social media and make us feel bad about our lives. What we are doing is taking our whole life and comparing it to the highlights of those that we see on social media and idolizing how amazing their life is. But I’m willing to guarantee you that their life isn’t ‘perfect’, and just like you and me they sit on the sofa crying into a tub of Ben and Jerry’s when life gets tough. And that’s okay — it’s a coping mechanism for your mental health, but it’s not something you’d share on social media because we’re led to believe that social media is only for the top-notch aspects of our lives. Basically, there is a checklist that we have created as a society. Only post a picture on social media if it ticks the following boxes:
· Do I look happy? Yes.
· Do I look rich? Yes.
· Do I look like I’ve got my whole life sorted out? Yes.
· Am I crying or super emotional? No (don’t be silly).
· Am I discussing my mental health? Shhhhh.
The point is we are told what to post on social media, which is why it is so toxic and filled with society’s version of ‘perfection’, which only makes us feel bad. We spend far too much time, energy, and money trying to match up to this unrealistic expectation. So, I have created my own checklist for posting on social media, and it is quite simple:
· Do you want to share it on social media? Yes.
Simple, right? If you want to share it, then share it. Does it have to be ‘perfect’? Nope. It just has to make you feel good. Why strive for perfection when you can channel all your energy and time into being authentic, unique, and just you?
https://medium.com/indian-thoughts/the-pressure-of-perfection-83b2dcb56360
[]
2020-12-11 14:57:13.444000+00:00
['Self Improvement', 'Self', 'Social Media', 'Perfection', 'Mental Health']
Roku Publishing Made Easy.
Now available at www.mediarazzi.com New Book Reveals All the Secrets to Publishing a Profitable Roku Channel By Mediarazzi Staff It goes without saying, with video marketing at its highest point ever, Roku and Connected TV channels are, “the new website.” At least, according to Mediarazzi founder/CEO Phil Autelitano they are. In his new “Publish Your Own Roku Channel” book and program, he reveals all the secrets he’s learned over nearly a decade of publishing profitable Roku TV Channels. If you’re one of the five people left in this world who’s never heard of Roku, you should check out their website before reading any further — www.roku.com If you know Roku, then you know it’s the future of television. The Connected TV industry, led by Roku, over the past ten years has EXPLODED from just a few thousand active viewers in 2008–9 to tens of millions now. And it’s STILL growing strong! Every day, millions of viewers tune into Roku for everything from TV and movies to music, sports, news, weather, politics, food, and a host of other exciting content. Roku has become a household name in many homes across the nation, and has cemented its place as the platform of choice for streaming television viewers. That said, with all those viewers, content creators are coming out of the woodwork to develop their own Roku “channels” to distribute and profit from their content in ways they just can’t on other platforms or online with sites like YouTube. There’s never been a better time in history to distribute your content to millions of viewers, quickly, easily — and downright inexpensively — than now. Roku has dropped the barrier-to-entry for television viewership down to the ridiculous, and because advertisers are quickly catching on, there’s never been a better time than now to PROFIT the most from your content.
Of course, it all starts with creating (publishing) a Roku Channel… In his “Publish Your Own Roku Channel” book and program, Phil Autelitano takes you step-by-step through the process of designing, building, monetizing and publishing your Roku Channel live to millions of Roku users nationally and worldwide. He simplifies the process to the point ANYONE with reasonable computer and Internet skills can do it — with zero coding knowledge or experience required. Heck, you don’t even have to have CONTENT, because he tells you where to get that, too, by offering dozens of sources for free and easy-to-acquire content for your channel. There’s never been a more comprehensive instruction manual to publishing a Roku Channel than now. “Publish Your Own Roku Channel” includes absolutely everything you need to do it: Comprehensive Roku-building instructions Easy-to-use graphics templates Pre-built JSON channel feed template Free Content Sources Monetization Sources (for making money with your channel) Easy-to-follow coding guide Hands-on instructions Ongoing advice and guidance And more! “Publish Your Own Roku Channel” is more than just a “book” — it’s a complete program, or course, better yet, a virtual “Roku Channel-in-a-box.” And it’s not just for beginners. The advice, strategies, tips, tactics and techniques Phil Autelitano reveals in this program will benefit even the most advanced Roku developer or publisher. He’s literally pulled out all the stops and revealed EVERYTHING he’s learned over nearly a decade of developing and publishing Roku Channels for clients that include major brands and celebrities like Paula Deen and boxing legend Oscar de la Hoya, and more. There’s no argument Phil Autelitano has earned his place as one of the foremost experts on Roku publishing and monetization, and in this book/program he proves it. It’s a value at ten times the price and it’s available exclusively at www.mediarazzi.com.
And if the book and program aren’t enough, Phil is now offering hands-on instructor-led training and one-on-one Roku coaching programs. He works with clients small and large to develop and produce quality, revenue-generating Roku Channels. Before you spend ONE DIME on Roku development, you need to read “Publish Your Own Roku Channel” by Phil Autelitano. It will give you the knowledge, insights and even the skills you need to create a professionally-designed Roku Channel that MAKES MONEY for you. Learn more at www.mediarazzi.com.
https://medium.com/business-marketing/roku-publishing-made-easy-e5c857d4d2f9
['Phil Autelitano']
2018-11-19 07:44:55.543000+00:00
['Video Marketing', 'Roku', 'Content Marketing', 'Marketing', 'Branding']
Pipenv: A powerful blend of “pip” and “venv”
Bring all the packages to the Python world

Anyone who has worked in Python must have used “pip” and “venv” for their projects. Pip helps you install a new library into your global Python environment. Let’s say you are working on a new scraping project, but you don’t have “Beautiful Soup” installed in your global environment. You can simply use the “pip” command to install it.

pip install beautifulsoup4

But what if you are working on two projects and both of them require different versions of the same library? pip becomes your foe and venv your only friend! A virtual environment, aka venv, lets you install different dependencies of a package, so you can use them in an isolated way. In short, “pip” works well if we are working on a single project and want to install a new package into the global environment, and if we are working on different projects, a virtual environment works best for us.

Why I ditched venv and pip, then? At PyCon 2018, Pipenv was introduced and was called “The Future of Python Dependency Management”. Pipenv combines both “pip” and “venv” into one simple and easy-to-use tool. Pipenv automatically creates and manages a virtual environment for your projects, as well as adds/removes packages from your Pipfile as you install/uninstall packages.

Installing pipenv

To install pipenv, just give the following command in your terminal.

pip install pipenv

Working with pipenv

Activating and deactivating a virtual environment

Now, if you run “pip3 freeze” in your command-line terminal, it will show everything you have in your current environment, i.e. the global environment. You can now create a new folder (wherever you want to build your project) and begin your pipenv journey. The following command will activate the environment.

pipenv shell

It will activate a new virtual environment for you. It will also create a Pipfile.
Now, if you open up Python and import the “sys” module, “sys.executable” will show you where Python is being executed, i.e. in the virtual environment. You can easily activate the environment using the “pipenv shell” command, while deactivation is done using the “exit” command.

Installing packages with pipenv

You can simply install packages in your virtual environment using the following command.

pipenv install <package name>

It also creates a “Pipfile.lock”, which records all of the dependencies and sub-dependencies. This gives us a deterministic build, so we can use the same versions of everything when we are ready to deploy. Now, if you open Python in the terminal, you can use the package you installed with the “pipenv install” command.

Listing packages in your environment

You can use the following command to list all the packages in your Python environment.

pipenv lock -r

Uninstalling packages

If you do not want a package in your Python environment, you can simply uninstall it.

pipenv uninstall <package name>

Summary

I have talked about how you can get rid of pip and venv and instead work with a combination of both, i.e. pipenv. I have created a guide to pipenv usability: how to install it and how to work with it. Easy to use, scalable, and bringing both “venv” and “pip” together, pipenv certainly earns the title “The Future of Python Dependency Management”. References
https://medium.com/swlh/pipenv-a-powerful-blend-of-pip-and-venv-84b2e9e66750
['Vishal Sharma']
2020-06-09 12:33:51.498000+00:00
['Coding', 'Programming', 'Software Development', 'Python', 'Computer Science']
Poetry in Motion
Photo by Sergei Akulich on Unsplash To make progress in AI, our models need to learn to cope with the messy real world By Samuel Flender — 5 min read in Science
https://towardsdatascience.com/poetry-in-motion-ea7d5e6c5bb6
['Tds Editors']
2020-09-21 14:06:05.477000+00:00
['Knowledge Graph', 'Science', 'The Daily Pick']
I Dated a Hoarder for Nearly a Decade
I Dated a Hoarder for Nearly a Decade What it’s really like to date a man who lives with trash Photo by Tobias Tullius on Unsplash I met a man at work, and we fell in love. He was neat and clean with hair that was flawlessly coifed. His mustache and goatee were trimmed and his clothes were ironed. Sharp creases ran down the legs of his pants. He was also a hoarder. From his outward appearance, no one could ever guess his secret. He lived with mounds and mounds of trash. That man never met a piece of garbage that he didn’t like. Receipts, junk mail, used Q-tips, and scraps of paper were all equally important. He hoarded them like they were gold. He brought me to the house where he lived with two roommates — his parents. It was dark. The only light illuminating the room came from the television set. After introducing me to his parents, he brought me downstairs to a room with a huge sectional sofa that had been arranged into a square and covered in pillows and blankets. That’s where we talked and watched television while The Suze Orman Show provided the only light in the room. After several nocturnal visits to the basement, which smelled vaguely like mold and rotting potatoes, he finally showed me his room on the second floor. It was immediately apparent why he hadn’t shown me sooner. The room was small, and there wasn’t enough space between the foot of the bed and the bureau to open the drawers more than six inches. The bed was covered — covered — in stacks of books and papers. A thick layer of dust and grime covered every surface in the room. It nearly obscured the screen of the television set that sat on a nightstand opposite the side of the bed. Snarls of wires led from electrical outlets under the window to the television, VCR, and video game systems against the adjacent wall. Like everything else in the room, there was a heavy gathering of dust and dirt surrounding the wires. In the center of the room, there lay a twin blanket, folded in half. 
That was the size of the usable space in the room. After his parents had the sectional sofa from the basement transported to the garbage dump, we spent all our time together in his bedroom on that folded blanket. We ate there, watched television, and played videogames there. Actually he played videogames and made me watch because he didn’t like the way I played. As our relationship continued, the empty space in the center of his bedroom grew smaller and smaller. Empty boxes and piles of junk mail encroached on our space. Mounds of used Q-Tips sat inches from my toes. Peeled and used packing tape scattered like crepe paper streamers. Eventually, I spent hours trapped in an area roughly 2'X2'. I sat criss-cross applesauce and barefoot, watching him play vintage Atari games. Since he was controlling in addition to being a hoarder, he didn’t permit me to read a book or use my laptop while he was playing his games. I’d sit there and wait patiently until it was time for us to eat picnic-style on the floor or watch cartoons, usually one after the other, so he could get back to his games. Then he stopped wearing deodorant. Our conversations, which had started out amazing when we had more space and he was still wearing deodorant, became an exercise in spatial relations and body odors. With my head partially under the bed and my feet immersed in a pile of used Q-Tips, achieving a rewarding and meaningful conversation with my hoarder boyfriend became increasingly difficult. We couldn’t hang out anywhere else in the house either. The hoarding wasn’t confined to my boyfriend’s room. The basement, minus the discarded sofa, was nothing but a concrete floor, stacks of books, and empty packaging material culled and saved from decades of packages because you never know when you might need hundreds of empty boxes or yards of used but perfectly serviceable bubble wrap. His mother was unemployed, and she was always home. 
That meant watching television on the living room sofa or eating at the kitchen table were out of the question anyhow, but the unmistakable signs of hoarding grew and spread throughout the home during the eight years that I visited. The bright and sunny dining room had a large window overlooking the front yard when it was accessible. A giant pile of refuse had formed in front of the window, keeping anyone from approaching within ten feet. The pile was an amalgam of dismembered doll parts, dog toys, and bones left over from t-bone steaks and boiled dinners. There was a matching pile of garbage in the kitchen, just random packages of half-eaten toddler snacks and decapitated doll heads mixed with animal bones, puddles of dog urine, and unopened utility bills. It was like something you would see on television only with the added bonus of being able to smell it. In summer, the house became home to an ant infestation the likes of which I had never seen. My boyfriend squished them beneath his toes, leaving the remains of dead ants on the kitchen floor. I was surprised they didn’t have a problem with cockroaches or mice. Spending time together became less desirable as space became more limited. If I couldn’t contort my body to fit inside two square feet of floor space, then I was out of luck. Watching television comfortably was out of the question with my feet in the air and my neck contorted at an uncomfortable angle against the corner made from the edge of his bed and the front of his bureau. He was too cheap to take me out to dinner at restaurants and forbade me from paying for dinner either. He was dirty and cheap. He called it frugal. That meant our options were severely stunted. Fortunately, I stopped caring. Unfortunately, we dated for nearly a decade. Finally, like all good things, and all bad things, our relationship came to an end. I am confident that his hoarding did not.
https://medium.com/traceys-folly/i-dated-a-hoarder-4ce87848b490
['Tracey Folly']
2020-12-14 18:21:28.068000+00:00
['Self', 'Relationships', 'Lifestyle', 'Nonfiction', 'This Happened To Me']
Iris Data Prediction using Decision Tree Algorithm
Bhavesh · Dec 14 · 5 min read

@Task — We are given a sample Iris dataset of flowers with 3 categories to train our algorithm/classifier, and the purpose is that if we feed any new data to this classifier, it will be able to predict the right class accordingly. We start by importing a few Python libraries for reading the data file and visualizing our data points. To download the Iris dataset, click here; the link to the ipython notebook is mentioned below.

python code to read csv file

After reading the csv file data, we explore the dataset and get some basic understanding of it.

Some Basic Information of the Dataset

Iris_data contains 6 features in total, of which 4 (SepalLengthCm, SepalWidthCm, PetalLengthCm, PetalWidthCm) are independent features and 1 (Species) is the dependent or target variable; the Id column is like a serial number for each data point. All independent features have non-null float values, and the target variable has class labels (Iris-setosa, Iris-versicolor, Iris-virginica).

Top 10 records

Basic Info of Data

With the “Iris_data.describe()” function we get some numerical information, such as the total datapoint count, mean, standard deviation, 50th-percentile value etc., for each numeric feature in our dataset. This helps us understand some basic statistics of the data. As we saw, each class (Species) has an equal number of data points, so our Iris data is said to be a balanced dataset: no class dominates.

Visualizing Iris Data

For visualizing the dataset we use Matplotlib or seaborn as a Python library. There are many plots, like scatter, hist, bar, count etc., to visualize the data for better understanding… By looking at the scatter plot we can say that all blue points (Iris-setosa) are separated perfectly, compared to the orange (Versicolor) or green (Virginica) points, for the features SepalLengthCm and SepalWidthCm. By looking at the result of the pair plot, we are sure that all blue points are well separated from the other two classes, but Versicolor and Virginica partially overlap with each other. In the pair plot we saw that there are some feature combinations with very little overlap between Versicolor and Virginica, which means those features are very important for our classification task.

Exploring Some New Features

Here I try to derive some new features from the existing ones, taking the difference of each feature with the others to get some more information, and visualizing it using plots. We then examine the relationships between the new features, based on class labels, using a pair plot. With the help of the pair plot we get some new information, but it is largely similar to what the main data features showed earlier: every combination separates Iris-setosa well but has some overlap between Versicolor and Virginica.

Building the Classification Model

First we need to drop the Id column, as it is of no use in classifying the class labels.

Visualizing the Decision Tree using the graphviz library

As our model has been trained, we can now validate our decision tree using cross-validation to get the accuracy or performance score of the model. Since our selected features work well and the model gives a very good accuracy score on the validation data, we can now train our model on the actual train dataset with the selected features, for evaluating/deploying the model in real-world cases.

'''Training model on actual train data…'''

Final decision tree built for deploying in real-world cases…
Checking the performance of the model on the actual test data… This is how we read, analyzed and visualized the Iris dataset using Python and built a simple Decision Tree classifier for predicting Iris species classes for new data points fed into the classifier. If you want the whole code, the link for the Iris dataset and the ipython notebook (.ipynb) is here.
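Since the notebook's code cells did not survive into this text, here is a minimal, self-contained sketch of the workflow described above. It uses scikit-learn's built-in copy of the Iris data instead of the CSV file (so there is no Id column to drop), and the split ratio and random seed are my own choices:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load the 4-feature, 3-class Iris data (equivalent to the CSV minus the Id column)
X, y = load_iris(return_X_y=True)

# Stratified split keeps the dataset balanced in both train and test sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

clf = DecisionTreeClassifier(random_state=42)

# Validate with cross-validation first, as the article does
cv_scores = cross_val_score(clf, X_train, y_train, cv=5)
print('Cross-validation accuracy: %.3f' % cv_scores.mean())

# Then train on the full training set and check the held-out test data
clf.fit(X_train, y_train)
print('Test accuracy: %.3f' % clf.score(X_test, y_test))
```

The fitted tree can then be rendered with graphviz via sklearn.tree.export_graphviz, as mentioned in the article.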
https://medium.com/analytics-vidhya/iris-data-prediction-using-decision-tree-algorithm-7948fb68201b
[]
2020-12-15 16:34:21.364000+00:00
['Decision Tree', 'Machine Learning', 'Prediction Model', 'Iris Dataset', 'Visualization']
Choreography Inside Kubernetes
There are probably thousands of articles about K8s to date. You may ask — what would this story be about, then? :) This story is all about my appreciation for the architecture and system design of Kubernetes. Everyone who has started the k8s journey will have figured out the components of the master plane and the worker plane. Less do we know about: What does the system design look like? How do the components interact with each other? What kind of message exchange happens? What sequence of events completes an action (against the cluster)? Answers to these questions have helped me understand k8s better. K8s master plane components are set up in a “Hub and Spoke” model. This would resonate well with architects and networking people. The “API server and the etcd data store” form the hub of the master plane. The controller manager (deployment controller, replica controller) and the scheduler form the spokes. How does this setup work? The spoke components do not directly interact with each other. Any message exchange happens through the API server & etcd. This makes the flow of events inside k8s a well-designed choreography (explained in detail in the next section). If you had to visualize the setup: For visualization purposes only, not from the official documentation. Getting back to why I call this a nicely designed choreography: each component has a clearly established responsibility inside K8s. The execution of a component’s tasks starts when an event that the component has subscribed to is published. Components, after completing their work (business logic in the k8s world), then perform a REST call to the API server.
The API server is a powerful component that exposes REST APIs in many well-separated namespaces (each namespace housing the APIs of a specific component or functionality). The page linked below (not the latest version, but a wealth of information) provides a clear picture of all the API endpoints, their definitions, and additional details of the parameters of the k8s APIs. What if you had a swagger file of the APIs exposed by the API server? https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#-strong-api-overview-strong- The API server then acts on the HTTPS API call and, after processing the request, creates another event for the downstream components. This flow keeps going until the intended action is completed. Take a look at this diagram and you will see why I used the terms “choreography” and “hub and spoke”. This is the sequence of actions that takes place when a pod deployment is executed. The majority of the actions in k8s work on a notification mechanism called “watch”. With any notification mechanism, there are of course subscribers and publishers. You will have inferred by now that the hub (the API server and the etcd store) acts as the publisher and the spoke components act as the subscribers. When you apply a yaml template (through deployment pipelines or from the command line), the following actions are executed: The yaml is converted to json and sent to the API server. The API server (after validating the payload) saves the requested deployment settings to the etcd database (this is the first step in creating the “desired state” of the cluster). The etcd database creates an event for the key-value data for the cluster resource that was added/updated/deleted; the consumer of this event is the API server. The API server, in addition to hosting the REST backend, also has the “watch” mechanism built into it.
Now the events are sent to the other components by the client-go component (watch the video linked in the next section). The controllers, on receiving the events, perform a sequence of operations (displayed in the diagram shown below). The interesting part of this step is the system design of the controller. You will appreciate its simplicity and beauty too! The controllers, after completing their defined tasks, send an update back to the API server, which is saved in etcd as an updated version. At each step, the controllers help move the actual state towards the desired state. You can get a picture of the details of the watch mechanism, the flow of events, and the system design of controllers from this video, an amazing one in my opinion: The Life of a Kubernetes Watch Event — Wenjia Zhang & Haowei Cai, Google Representation of the diagram shown in the video at 21:08 Among the many things that you will learn from this design (take time to digest the information in the video), one design decision worth a mention is this: the controllers are “level-based systems” and not “edge-based systems”. Not intending to duplicate any information, I urge you to read this amazing article — resources and controller. The emphasis here is on the design decision, though: the controller does not work on the events that it receives. Events act only as a change-notification source; reconciliation is performed using the data in the local cache. This is why the local cache is kept updated with every change in the current state of the system that is of interest to the listening controller. Below is an excerpt from the same article about the watch action (the point of interest of each component in the sequence). You can correlate this with the second diagram (the sequence) in this article.
Deployment Controller watches Deployments + ReplicaSets (+ Pods)
ReplicaSet Controller watches ReplicaSets + Pods
Scheduler (Controller) watches Pods
Node (Controller) watches Pods (+ Secrets + ConfigMaps)
That was all the feel-good information I wanted to share with my fellow enthusiasts. Hope you enjoyed this one too. Additional reference: https://medium.com/@tsuyoshiushio/kubernetes-in-three-diagrams-6aba8432541c
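To make the level-based idea concrete, here is a toy Python sketch of a controller in that style. This is purely illustrative — the class name, cache fields, and return strings are invented for this post, not real client-go or Kubernetes code. The watch event only refreshes a local cache, and reconciliation works entirely off the cached desired and actual state:

```python
# Toy "level-based" controller: events refresh the cache; the decision
# is made by comparing cached desired state against cached actual state.
class ToyReplicaSetController:
    def __init__(self):
        # Stand-in for the controller's local cache of cluster state
        self.cache = {"desired_replicas": 0, "actual_pods": 0}

    def on_event(self, update):
        # The watch event payload is only used to refresh the cache...
        self.cache.update(update)
        # ...reconciliation then works purely off the cached levels.
        return self.reconcile()

    def reconcile(self):
        diff = self.cache["desired_replicas"] - self.cache["actual_pods"]
        if diff > 0:
            return f"create {diff} pod(s)"
        if diff < 0:
            return f"delete {-diff} pod(s)"
        return "in sync"

ctrl = ToyReplicaSetController()
print(ctrl.on_event({"desired_replicas": 3}))  # create 3 pod(s)
print(ctrl.on_event({"actual_pods": 3}))       # in sync
```

Because the controller acts on levels (the cached state) rather than edges (individual events), a missed or duplicated event is harmless: the next reconcile still converges the actual state towards the desired state.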
https://medium.com/swlh/choreography-inside-kubernetes-aa5fcf69ac65
['Sriram Ganesan']
2020-09-08 08:05:21.013000+00:00
['Kubernetes']
How Not to Behave In a Pandemic
How Not to Behave In a Pandemic We’re not really at our best right now. The University of Illinois at Urbana-Champaign, my alma mater, has developed an incredible, unprecedented Covid-19 testing program, one that has administered more than one million tests so far. (That’s more than 10 states.) It’s a remarkable achievement, and it has allowed them to stay open for in-person classes while many other universities have been forced to shut down or offer only virtual classes. It’s an amazing system that has been deployed in efficient, intelligent fashion; at one point, they were responsible for roughly 2 percent of all tests in the entire country. I’m incredibly proud of my school. Still, the University was unable to avoid an early outbreak, back in September, even though its test-and-alert system was able to isolate cases and warn those with the virus that they were infectious. Why did the outbreak still happen? A chemist at the U of I who worked on the testing system explained: When we put the whole program in place, we did a bunch of modelling to try to understand how student socialization was going to integrate with the fast, recurrent testing. We modeled that they were going to go to parties and that they probably weren’t going to wear masks, and it would lead to some level of transmission. What we didn’t model for is that people would choose to go to a party if they knew that they were positive. The overwhelming majority of our students have done a great job, but unfortunately, a small number of students chose to make very bad decisions that led to a rise in cases. 2. Texas Monthly’s Emily McCullar has a fantastic story today in which she interviewed wedding photographers in Texas about what they experienced during wedding season this summer, in the middle of the pandemic.
There are many jaw-dropping stories, but the most staggering one is right there in the lede: The wedding photographer had already spent an hour or two inside with the unmasked wedding party when one of the bridesmaids approached her. The woman thanked her for still showing up, considering “everything that’s going on with the groom.” When the photographer asked what she meant by that, the bridesmaid said the groom had tested positive for COVID-19 the day before. “She was looking for me to be like, ‘Oh, that’s crazy,’ like I was going to agree with her that it was fine,” the photographer recalls. “So I was like, ‘What are you talking about?’ And she was like, ‘Oh no no no, don’t freak out. He doesn’t have symptoms. He’s fine.’” The photographer, who has asthma and three kids, left with her assistant before the night was over. Her exit was tense. The wedding planner said it was the most unprofessional thing she’d ever seen. Bridesmaids accused her of heartlessly ruining an innocent woman’s wedding day. She recalls one bridesmaid telling her, “I’m a teacher, I have fourteen students. If I’m willing to risk it, why aren’t you?” Another said everyone was going to get COVID eventually, so what was the big deal? The friend of the bride who’d spilled the beans cried about being the “worst bridesmaid ever.” After the photographer left, she canceled her Thanksgiving plans with family, sent her kids to relatives’ houses so they wouldn’t get sick, and informed the brides of her upcoming weddings that she’d be subcontracting to other shooters. A few days later she started to feel sick, and sure enough, tested positive for COVID-19. She informed the couple. “But they didn’t care,” she says. They didn’t offer to compensate her for the test, nor did they apologize for getting her sick. 3. A friend of mine teaches at a rural school that has been open throughout the pandemic and does not require masks for its students or its teachers. 
She has had to provide all of her own protection, constructing a plexiglass shield around her desk and pushing desks as far away from each other as she can. The school has provided no information on the number of positive Covid-19 cases among its students and teachers and has rebuffed inquiries requesting that specific information. On Monday, in the middle of the day, one of her students, the son of a fellow teacher at the school, was pulled out of class by school officials. They then went down the hall, to the room where his father teaches high school sophomores, and made him leave as well. It turned out that the father — who, I repeat, is a high school teacher — had been feeling sick the week before and gotten a Covid test on Saturday. On Sunday, it came back positive, as did his son’s. And then, on Monday, knowing all this, he went ahead and went into school to teach, unmasked, regardless. The only reason the school knew he’d tested positive was that a public health official saw his results and happened to remember he was a teacher. He was angry they made him leave. He found it a grand injustice that he was forced to leave his class. My friend is now worried she won’t get to have Christmas with her family because she was exposed 11 days before the holiday. 4. On Wednesday afternoon, Politico reported that Health and Human Services adviser Paul Alexander, back in July, wrote a memo to his bosses saying that the only way to control Covid-19 was to let everyone get it. He wrote: “There is no other way, we need to establish herd, and it only comes about allowing the non-high risk groups expose themselves to the virus. PERIOD. …Infants, kids, teens, young people, young adults, middle aged with no conditions etc.
have zero to little risk….so we use them to develop herd…we want them infected … [I]t may be that it will be best if we open up and flood the zone and let the kids and young folk get infected” This is what Paul Alexander looks like, by the way, in case you come across him on the street and want to make sure to avoid him. Don’t worry, you’ll recognize him: He won’t be wearing a mask, after all. These are four stories, and there are surely hundreds of thousands more like them. I know that the federal government’s response to the pandemic has been nearly non-existent, and that we had the absolute worst possible person in charge at this particular moment in history. They made it so much worse. But we have to do our part too. It is difficult to wrap your mind around the sort of person who would test positive for a fatal, infectious disease and then decide it’s still cool to go to that party, or to get married in front of hundreds of people, or to still teach class. I do not know how to deal with those people. I do not know how to fix the problem of people legitimately not giving a shit about any other person but themselves. Do you? I’m honestly asking. Because I’ve got no answers. Will Leitch writes multiple pieces a week for Medium. Make sure to follow him right here. He lives in Athens, Georgia, with his family, and is the author of five books, including the upcoming novel “How Lucky,” released by Harper next May. He also writes a free weekly newsletter that you might enjoy.
https://williamfleitch.medium.com/four-stories-of-human-behavior-in-the-worst-global-pandemic-in-a-century-e229b3c7e89
['Will Leitch']
2020-12-16 23:47:17.314000+00:00
['Pandemic', 'Covid 19', 'Coronavirus', 'Paul Alexander']
Vegan Chocolate Cake (and I Mean Chocolate!)
We had just enjoyed a lovely dinner out with our friends. “Let’s save a few pennies. Come over for dessert!” we offered. I was looking forward to a longer visit but had a short blast of anxiety as I flipped through a mental Rolodex of my go-to dessert list. What could I make that was super quick but still possessed some decadence? Store-bought or shortcut desserts are a rarity in our home. Desserts made from scratch just taste better to us and I think it’s because they are made with a particular standard in mind. You don’t want a facsimile of flavor, you want the real deal. Special preservatives to make that cake taste moist longer? How about we just bake and eat it right away so that it doesn’t have to stay moist for long? Chocolate lovers, you understand when I say let’s have rich, sexy chocolate over gritty sugar, please! We want the cocoa to be the star in a chocolate dessert, yes? In fact, I scoff at a recipe when it says “3 tablespoons of cocoa.” You’re kidding me. Is this a chocolate dessert or are we just adding cocoa for colour? Is this a cake the size of an Easy Bake Oven cake? Come on now! According to the recipe we’re about to make: ¼ cup of cocoa, it states. Now we’re getting somewhere but I become philosophical as to whether this measurement is a statement or a suggestion. If I use my ¼ cup measuring device rounded to about a ⅓ cup size portion, does that work for you? It works for me! However you like your chocolate level, this dessert is super duper Alice Cooper quick to make. So fast, in fact, that by the time our dinner friends knocked on our door I had the batter mixed and poured in the pan, ready to pop it in the oven.
https://medium.com/tenderlymag/vegan-chocolate-cake-and-i-mean-chocolate-3c9a63379b83
['Eira Braun-Labossiere']
2020-09-07 22:00:15.534000+00:00
['Chocolate', 'Recipe', 'Vegan', 'Food', 'Dessert']
The UpToken Burn Is Complete
Another week, another exciting UpToken announcement! Last week, we were able to share with you that industry-leading digital asset exchanges Bittrex, Upbit, and Bancor all added UpToken to their platforms. Moreover, the industry’s top token tracking sites, such as CoinMarketCap, LiveCoinWatch, and Bitgur, added UpToken to their platforms. While that is all incredibly exciting, we are even more excited to announce that on March 21st, our Senior Blockchain Engineer (and inventor of the Token Sale), J.R. Willett, completed a token burn. As detailed in the Terms of Sale, the team created 10,000,000,000 UpToken for the sale, with the intent of burning any unsold tokens. At approximately 9:14 am PT last Wednesday, we completed the token burn. You can see the transaction here. As shown above, after burning 9,815,362,000 tokens, the total global supply of UpToken was reduced to 184,638,000. Additionally, as promised in the whitepaper, we also moved 36,927,602 UpToken to the Coinme UpToken Vault. Per the terms of sale, the vaulted tokens will remain locked until we deploy 500 ATMs. Therefore, the total circulating supply of UpToken, after removing the amount in the Coinme UpToken Vault (as well as some tokens still to be distributed), sits at 136,471,746. We are thrilled about the rest of 2018: with our next wave of crypto ATM deployments beginning in April, as well as some exciting updates to our wallet coming soon after, we know that 2018 is going to be a banner year for Coinme. Thank you so much for being on this journey with us!
https://medium.com/coinme/the-uptoken-burn-is-complete-773dfa9e773b
[]
2018-03-26 16:35:22.991000+00:00
['Token Sale', 'ICO', 'Startup', 'Blockchain', 'Bitcoin']
Shades of Gray
Recently, I discovered a sugar daddy/baby website that functions just like a dating site, with the exception that the listed ladies are mostly college girls looking for “sponsorship.” While sex isn’t necessarily part of the deal, the fact remains that boys will be boys — and the great majority of the time we can assume it’s implicit in the transaction, in much the same way it is for straight-up escort directories. And it made me think that sexual prostitution is not as clearly defined as Webster’s might have you think. “Prostitution” (according to Webster’s) is the act of selling sex for money. We get that. But what about the college girls on the aforementioned site? And what about women who put out for the material things that “boyfriends” they care nothing about buy for them? Shouldn’t they be considered prostitutes as well — even if greenbacks themselves aren’t passed directly from one hand to another? And if all but the obvious are prostitutes, wouldn’t it follow that prosecuting one subset while letting the others off the hook is an act of hypocrisy? It all seems so meaningless. What I don’t get is why any of these pursuits are against the law — especially the one form that gets prosecuted. What the law sees as prosecutable prostitution is a straight-up business deal. “I do this and you do that.” That’s easy. But what about the woman who pretends she cares…relieving the clueless bastard of mountains of money…all while knowing it’s all about robbing the guy of not just his cash — but his dignity as well? Prosecutable prostitution stands on reasonably high moral ground (at least for me). Hoodwinking a guy into thinking the woman upon whom he heaps currency in the form of clothing, jewelry, etc. is in love is immoral. There’s your reprobate right there — not the girl who says upfront, “You want sex…I want money. Let’s make a deal!” We all know that the criminalization of traditional prostitution is based on religion and fear.
Zealots who are convinced there’s a deity are afraid that He would condemn such an act. But why? Is it any worse than selling some stupid widget nobody needs at a 200% profit and then blowing the proceeds on some vice? Then there’s the fear factor. Most guys don’t want to think that their girlfriends have experienced thousands of guys before them. It makes dudes insecure. And given that it’s mostly men who write laws (though admittedly, that’s changing), it follows that those men attempt to legislate against such activity as if it’s going to change anything. If there’s any pursuit more futile than the government’s war on drugs…it’s got to be their war on prostitution. Trafficking, where it exists, is something different. I’ll give ya that. But selling “it?” Who in his right mind cares? Consider the disease rationalization. Prostitutes spread disease. True enough — but no more than having sex with some chick you met at a bar — or anywhere else, for that matter. Having sex spreads disease. That is a documented fact. How one gets that sex is beside the point. And if you think that the pay-for-play trade spreads disease more often per union than in any other venue, wouldn’t it make sense to legalize the trade so all girls are licensed and checked by doctors periodically? Maybe someday…somebody will see the light. Ya know…like out in Nevada, where citizens vote on the business’s legality on a county-by-county basis? But I don’t hear anything about New York entering the 21st century! Aren’t we supposed to be enlightened? Haven’t we legalized same-sex marriage? It’s high time that those who would make our laws realize that providing sex for a fee isn’t that much different from giving a guy a haircut. It’s a service. And when performed honorably, professionally, and expertly, it’s one that some find more therapeutic than a visit to a psychiatrist. And often cheaper. So what’s the big deal? Decriminalize all of it. Let law enforcement catch criminals who hurt people.
Not those who help them.
https://medium.com/sex-and-satire/shades-of-gray-5849385f3bed
['William', 'Dollar Bill']
2020-10-18 06:29:47.591000+00:00
['Life', 'Prostitution', 'Culture', 'Psychology', 'Life Lessons']
The MCAT Taught Me You Always Need To Remind Yourself Why
Photo by Jessie Vissichio As I was planning on studying for the MCAT (the Medical College Admissions Test) a couple of months ago, I told myself something like this, trying to motivate myself to study for the behemoth that is the seven-and-a-half-hour exam: “Ryan, you’re going to study 12 hours a day, stay 100% sober, and take 14 practice tests over the next three months.” Well, a week before my MCAT, I have not accomplished these summer process-oriented goals, not even close. I have written before about how perfectionism is an attempt to be your own God, and here I was, shamelessly trying to be my own God. No, I have not studied 12 hours a day — although there are a couple of days I’ve come close, the reality is that I’ve averaged somewhere between six and eight hours of studying a day, with a lot of variation depending on how motivated I feel. I have not taken 14 practice tests — I’ve barely pumped out seven. I have not stayed 100% sober. But that doesn’t mean I’m not satisfied with the progress I’ve made along the way. From my first diagnostic practice test to my last full-length practice exam, I improved my score 25 points, almost 6 to 7 points per section. No matter how well I do on the real thing a week from now, I have to be satisfied with that growth. But if there were things I would have done differently and ways I would have approached this exam differently, this is what I would have told myself three months ago, and what I would tell anyone who is taking the MCAT soon. You need to have a life. Who actually studies 12 hours a day? I had this epiphany the other day — 90% of the people who tell you they’re studying 12 hours a day aren’t studying productively for 12 hours a day.
I know this because I am one of those people — studying two hours, getting really tired, taking a two-hour nap, working out, studying another two hours, watching YouTube videos, running errands, and then studying another two hours and going to bed is what people really mean when they say they “study 12 hours a day.” What I really meant was that the MCAT was on my mind 12 hours a day. I blew off hanging out with my friends and having a social life a couple of times in the past couple of months, and now I seriously regret that I did. The key to doing well on the exam, and doing well holistically, is balance — having a life, hanging out with friends, and spending time writing and doing things I enjoy were all a part of that. One thing that made sure I wasn’t doing too much and burning myself out was being on my college cross country team. The past several weeks, I have had an obligation to run 80 or more miles a week, and when that happens, there is a severe limit on my ability to study 12 hours a day. Another big thing I learned was this: Don’t spend time comparing yourself to others. You have to do whatever works for you. I shamelessly admit that I spent too much time on the MCAT Reddit page seeing how people raised their scores from somewhere in the 30th percentile to somewhere near the 100th percentile. In doing so, not only did I feel bad about myself and feel more and more impatient about my slow pace of progress, but I didn’t realize that it was statistically impossible for as many people to score 524+ (100th percentile) on the exam as the number of people who said they did. It was upon first scanning the MCAT Reddit page that I first set those unrealistic expectations for myself. I’m the first person to tell people not to compare yourself to others and not to care what other people think, and here I was doing precisely the opposite of what I preach.
There were people who said that they improved their scores on the MCAT by being 100% sober and studying 12 hours a day for three months, which was why I wanted to do the same. There were people who made 5,000 flashcards, which I obsessively downloaded and remade on my computer. What I realized was: you are not those people. You have your own style and ways of doing things — I wrote 150 pages of my own notes, which helped significantly more than some other person’s flashcards. It’s a game of patience, persistence, and endurance. I cannot tell you how many times I’ve felt like giving up only 10 to 15 minutes into a full-length practice test of the MCAT — the fact that I had to sit in a chair for the next seven hours felt excruciating. I cannot tell you how many times I did give up and just waited until the next day, when I had more energy and motivation, to finish it. But somewhere late in my preparation, I learned this: it is a very long exam. I can’t tell you how many times, too, I’ve gone through a passage in the CARS (Critical Analysis and Reasoning Skills) section, understood absolutely nothing about the passage, come back to it some other time, and gotten every question in the section right. I can’t tell you how many times I thought I was going to fail a passage and knew nothing about whatever topic they were testing, only to find that I did know it. Every cliche there could possibly be about life applies to the MCAT — it truly isn’t a sprint, it’s a marathon. You need to remind yourself why. The original reason why I wanted to be a pre-medical student and a doctor, coming into college, was a series of circumstances that afflicted my family. A not-so-noble and extrinsic reason for wanting to be a doctor was that my parents want me to be one. Several times down the line, I lost motivation and forgot about those reasons. Every time I gave up on studying or gave up on a practice exam, I had to remind myself why I started in the first place.
Writing this article is one of those times. “I want to be a doctor because my parents want me to” and “I’ve already taken all the pre-med courses” were sentiments I had to seriously reckon with multiple times in the process — I needed something better, something more, to push myself intrinsically. It said something to me that I’m inherently passionate about almost everything I do, and completely not passionate about the medical school application process and this exam — I’m not ready for medical school right now. I felt that before. I don’t know what the future holds, and I don’t know if I ever will be ready. God might have that plan for me down the line, or he might have some other plan. Reminding myself why has been a more challenging battle than studying any of the material on the exam, and that is a battle I will keep reckoning with as I go into my last year of college. The truth is there’s a chance I might not go to medical school, and might not ever be a doctor. I’m at a very transient stage of my life where that bridge is very far down the road, but no matter my career and life choices, and no matter the outcome, studying for the MCAT has taught me many life lessons and traits that will stick with me for a long time. It has taught me to be more patient, it has taught me to have more balance, and it has taught me not to compare myself to other people — and those lessons are far more valuable than the score.
https://medium.com/the-dream-verse/the-mcat-taught-me-you-always-need-to-remind-yourself-why-42abda2b89b9
['Ryan Fan']
2019-11-17 02:44:19.704000+00:00
['Education', 'Self', 'Life', 'Inspiration', 'Science']
The MAFAT Dataset — A Closer Look
Photo by Paweł Czerwiński on Unsplash This is the 2nd article in our MAFAT Radar competition series, where we take an in-depth look at the different aspects of the challenge and our approach to it. If you want a recap, check out this post. Let’s jump straight in. The competition organizers give a clear explanation of the data they provide: The dataset consists of signals recorded by ground doppler-pulse radars. Each radar “stares” at a fixed, wide area of interest. Whenever an animal or a human moves within the radar’s covered area, it is detected and tracked. The dataset contains records of those tracks. The tracks in the dataset are split into 32 time-unit segments. Each record in the dataset represents a single segment. A segment consists of a matrix with I/Q values and metadata. The matrix of each segment has a size of 32x128. The X-axis represents the pulse transmission time, also known as “slow-time”. The Y-axis represents the reception time of signals with respect to pulse transmission time, divided into 128 equal-sized bins, also known as “fast-time”. The Y-axis is usually referred to as “range” or “velocity”. The following datasets were provided: 5 CSV files (Training set, Public Test set, and Auxiliary set (3 files)) containing the metadata, and 5 pickle files (serialized Python object format) containing doppler readings that track the object’s center of mass and slow/fast time readings in the form of a standardized I/Q matrix. The Auxiliary datasets consisted of: an Auxiliary “Experiment” dataset of human-only labeled recordings, recorded in a controlled environment, which doesn’t necessarily reflect a “natural” recording; an Auxiliary “Synthetic” dataset with low-SNR segments that were created by transforming the high-SNR signals from the train set; and an Auxiliary “Background” dataset — segments that were recorded by a sensor in parallel to segments with tracks, but at a different range.
These segments contain the recorded “noise.” Each segment also contains a field mapping to the original high- or low-SNR track id. Braden Riggs & George Williams from GSI Technology — SPOILER ALERT: they were the winning team — wrote a very thorough post at the start of the competition where they provide a great overview of the dataset and give key insights into the challenges posed by it. We’ll give a summary below, and for those who want to read the whole thing, it’s available here: Data Description The radar data had a few important characteristics worth explaining: Signal-to-Noise Ratio (SNR) The SNR refers to the quality of the signal that produced the data, i.e., to what degree the signal was generated by the movement of the target as opposed to some other internal or external noise-generating process: for example, the weather, or the inherent noise of the machine. I/Q Matrix An I/Q matrix is an N x M matrix with complex values, in our case 32 x 128. The real and imaginary parts result from the amplitude and phase components of the doppler radar reading. In short, even though the radar is picking up a very complicated wave, it can still be described using only the amplitude and phase of two sinusoidal signals in quadrature. For a good explanation, read this lengthier description. Each row corresponds to a “slow-time” radar pulse, while the columns are points in the “fast-time” reading of the reflected signal, which corresponds to the distance from the origin. If you want to do a deep dive — MIT has a lecture series just for the courageous few: Doppler burst The doppler burst reading is a vector indicating the location of the “center of mass” for each slow-time radar burst. Segments vs Tracks The data was originally recorded in tracks, which did not have a set time length. However, they were split into 32-time-unit segments, and we needed to predict a classification from just that segment.
While in the train data we were given the track id for each segment (and could therefore theoretically stitch the track back together), in the test data we did not know the track ids and therefore couldn’t rely on a longer timeframe for prediction. Challenges The small size of the training data The training set consisted of only 6,656 segments, while the test set had 106 segments. To put that into some perspective, the CIFAR-10 dataset has 60,000 images, and the ImageNet dataset has over 14 million. In short, we’d need to generate a lot more data if we wanted to use any deep learning algorithm as a classifier. Signal-to-Noise Ratio Imbalance There was a 1.7:1 ratio of low-SNR to high-SNR segments in the train set. Not only were the segments inconsistent in their SNR, but the overwhelming majority (~2/3) of them were extremely noisy. In the test set, the low-SNR:high-SNR ratio was much more balanced, closer to 1:1. Class Imbalance There was a majority of animal segments/tracks in the training data, which would inevitably bias any model towards predicting animals. Again, in the test set, the ratio of labels was more balanced. The Scoring Metric To quote the official website: Submissions are evaluated on the Area Under the Receiver Operating Characteristic Curve (ROC AUC) between the predicted probability and the observed target as calculated by roc_auc_score in scikit-learn (v 0.23.1). If you’re unfamiliar with ROC AUC — check out this article. Conclusion The next article in the series will outline how we dealt with the primary limitation we saw, namely the limited number of training examples, by going deeper into the data augmentation techniques we utilized for this challenge. Stay tuned!
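As a small footnote on the scoring metric quoted above, this is all it takes to compute it with scikit-learn. The labels and predicted probabilities below are made-up toy values, not competition data:

```python
# ROC AUC between predicted probabilities and observed targets,
# as the competition computed it (roc_auc_score in scikit-learn).
from sklearn.metrics import roc_auc_score

y_true  = [0, 0, 1, 1, 0, 1]                # e.g. 0 = animal, 1 = human
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]   # model's predicted P(human)

print(round(roc_auc_score(y_true, y_score), 3))  # → 0.889
```

A score of 1.0 means every human segment was ranked above every animal segment, while 0.5 is no better than chance — which is why AUC is a natural fit for a probability-ranking task with class imbalance.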
https://medium.com/definitely-not-sota-but-we-do-our-best/the-mafat-dataset-a-closer-look-e567773071bc
['Adam Cohn']
2020-11-17 14:55:57.192000+00:00
['Radar', 'Data Exploration', 'Data Science']
The magic ingredient that improves workshop collaboration
The magic ingredient that improves workshop collaboration How I’m using the ‘alone together’ approach as a designer Photo by You X Ventures on Unsplash I love working collaboratively. I get a real buzz out of getting a group together, grappling with tricky problems and bouncing around ideas. At its best, collaboration can be an energising experience. But is it always productive and inclusive for everyone? My first experience with immersive collaboration was at Google’s +20 design workshop back in 2013. The all-day workshop was an experiment to see what would happen if you got a range of designers together to define and explore some truly blue-sky concepts. The whole day was carefully structured, even down to when and what we had for lunch. I left with 10 times the energy I started with and felt super productive, even though the whole day was a one-off experiment with no real follow-on actions. But in the years that followed, I rarely saw this impact from other workshops (and I’m sure it wasn’t just down to the copious amounts of free coffee they gave us). I wanted all the workshops I was involved in to be engaging and productive for everyone. +20 UX Workshop with Google — Sydney 2013 So I did my research, undertook some training and began facilitating workshops for my team and others in the organisation. Through this experience, I realised that not everyone enjoys workshops and team events. Looking back on those experiences with my newfound learnings, the poor participation was not because people didn’t like collaboration; it was because not everyone was having the same experience. Some people were confident collaborating and sharing their thoughts publicly, while others preferred more structure and time to provide their input. It was clear that I needed a way to structure workshops so that they gave everyone the same positive and productive experience, regardless of how participants preferred to work.
How could I give people their own personal time and space, while also maintaining an open forum for collaboration and discussion? Discovering ‘alone together’ One of the books I read on my journey to run better workshops was Sprint by Jake Knapp, John Zeratsky and Braden Kowitz. It’s a bestselling book that details a multi-day workshop structure for validating big ideas, fast. I could see subtle similarities between the design sprint process and that of the Google +20 workshop I attended in 2013. Both had very intentional plans for workshops. The authors construct the week-long workshop so that each activity has a specific purpose, and they all interlink and build off each other. Most importantly, workshop activities are structured so that participants are given chunks of time working alone on individual contributions to a shared goal, as well as time for sharing and making decisions as a team. This approach to collaboration is called ‘alone together’. ‘Sprint’ by Jake Knapp, John Zeratsky and Braden Kowitz (2016). My first exposure to the principle of working alone together. It sounds counterintuitive, but it means that as a team, we are working together towards a shared challenge, but we’re doing it individually. ‘Working alone together’ allows everyone to contribute — everyone’s voice is treated equally and there is no judgment. It allows people with different working styles to be given time and space to think through ideas and problems, with an opportunity to share them with the team without the risk of being influenced by other people or workplace dynamics. It also cuts down the unnecessary discussion and groupthink that can often happen in workshops. For example, there is a strong tendency for groups to converge on a single idea instead of evaluating a range of ideas against each other. Someone’s contribution to a group, and the response it receives, can also be influenced by things like level of seniority, unconscious bias, confidence and personality type.
This can lead to fewer, less original ideas. For more on that, check out Why Group Brainstorming Is a Waste of Time by Tomas Chamorro-Premuzic and The Journey of Brainstorming by Hanisha Besant. I still have the handouts from the Google +20 design workshop, and reading them again with years of experience attending, planning and facilitating workshops has made me realise why this standalone, all-day workshop was so effective. ‘Alone together’ was the magic ingredient that made it so special. We had the mental space to think through challenges in our own way, then come together as a team to decide on a shared direction. Google +20 design lab workshop brainwriting handout (2013) Practice, practice, practice I have found this structure of alone time, combined with group collaboration windows, an effective approach to make sure everyone is included and given an opportunity to contribute during the workshop. So how does it work? Let’s step through a really basic example from one of my past projects at Xero. One of our cross-functional product pods was looking to address some new business needs and customer insights for the project they were working on. They could only focus on one requirement at a time, so the team needed to align on what they were going to work on first. This is basically how we approached it: We listed all the business requirements and customer needs and put them in priority order. This reminded the team what they were working on and what to tackle first. The top couple of business priorities were rephrased as How Might We? statements. For example, one of our priorities was to help partnering businesses grow. This was reframed as: how might we empower partners to reach more potential clients? This extra detail made it easier for the team to focus on the opportunity and generate ideas in response to the statement. Everyone noted down their solution ideas for each statement on sticky notes. They did this alone, without speaking.
This gave all the participants time and space to think individually without being influenced by each other. The sticky notes were then put up on the wall and everyone had time to read through other people’s ideas. Again, this was done without discussion. We tried to save all questions for after the process, to mitigate any bias. Everyone quietly took the time to place sticky dots (all the same colour) on the ideas they wanted to vote for (each participant got multiple votes, usually around 3–7). SILENCE AGAIN! The whole aim is to reduce the influence that participants have on each other. It can be a bit tricky at this point, as participants can see the number of dots on each idea and sometimes who is voting on what. It’s not a perfect system, but giving participants multiple votes and making all the voting dots the same colour reduces the likelihood of groupthink. This is where running workshops digitally is actually an advantage, as tools like Miro have voting systems to keep everything anonymous. The facilitator (me!) tallied the votes and read out the popular ones, sharing them with the team. I wasn’t involved in the voting process, so I could remain objective about the ideas being voted on. The team then plotted these solutions on an impact-effort matrix. Based on this, we could see which solutions had the least effort and most impact. This was a shared team process, giving the whole team ownership over the idea they would execute. The team then discussed what action items needed to be assigned to each chosen idea, so we left the workshop knowing how we would move forward (rather than with just a list of great ideas). Everyone left the session feeling energised and aligned, and this continued in the following days when we started working on those actions. It worked because decision making was done anonymously and in silence, reducing the amount of influence that participants had on each other. But we also had the opportunity to plan our next steps as a team.
It gave everyone a sense of ownership of our newfound direction. Example of how to structure a basic remote workshop activity using the principles of ‘alone together’. It sounds like everything in my workshop went perfectly, but of course it didn’t. There are a handful of challenges I’ve encountered while using the ‘alone together’ approach in workshops — here’s how I’ve tried to address them: Sometimes people are uncomfortable with silence, so they try to have a discussion with their teammates, particularly while voting on ideas. This takes away many of the benefits, so try to enforce the silence — otherwise it can get out of hand. Another tip (from the AJ & Smart agency) is to play some background music to ease people’s discomfort with silence. Participants may want to discuss items that didn’t receive a high number of votes from the team. This can undermine the people who didn’t vote for that idea, so as a facilitator it’s important to gently move the focus back to the ideas with the highest votes. It can be really hard for a facilitator to run the workshop and enforce the structure while also contributing ideas and voting. It can also seem a little biased to other participants. So try to have a dedicated facilitator who isn’t involved in the process. There are always going to be people who want an extra few minutes here and there to finish their task. This time can really add up, so as a facilitator it’s important to keep each activity running on time. I like to keep 5–10 minutes up my sleeve just in case activities run over time. Reimagining collaboration in a virtual environment Of course, now that we’re all working from home due to COVID-19, the process can look a little different. But the approach remains the same (on the upside, you can mute participants to stop them chatting!). At Xero, we use Miro as a digital collaboration tool, so people can still put up their sticky notes and vote on them.
There are a bunch of great templates available via Miroverse which can give you ideas and tips for improving your remote workshops. Group discussion can be a little harder, but with a good facilitator it shouldn’t be any different to a usual remote standup. I still encourage the use of a timer to keep activities on track — I personally use the one in Miro, but there are some lightweight timers available online. If all else fails, just give people a verbal warning when an activity is nearing its end. It’s still quite a new approach for me, but I can already see results. All participants have the opportunity to be heard, decisions are being made as a team, and everyone seems to leave the sessions energised. I encourage you to try the ‘alone together’ approach for your next workshop, so it’s not an energy-sapping talkfest but a good collaborative experience for everyone involved. Next on the reading list: The Workshopper Playbook by Jonathan Courtney P.S. This just turned up at my front door as I was finishing this article: The Workshopper Playbook by Jonathan Courtney. I am looking forward to getting stuck into it! If you can’t get your hands on it, some of the content is on the website workshopper.com. Happy collaborating alone… together 😉
https://medium.com/humans-of-xero/the-magic-ingredient-that-improves-workshop-collaboration-6f14d8bc9aba
['Will Lester']
2020-08-24 21:45:23.339000+00:00
['Alone Together', 'Collaboration', 'Facilitation', 'Workshop', 'Design']
How Mobile App Development is Recasting the VR Landscape?
Given the momentum it has gained over the years, there is no sign of Virtual Reality declining any time soon. In fact, Virtual Reality is one of the most promising industries in the world today. The market for Virtual Reality is growing at an amazing pace, as are the services and products companies build on it. A handful of companies have enabled people to turn their mobile phones into VR gadgets by launching VR headsets and other devices. Mobile app development companies are certainly embracing this technology to launch new mobile applications. From a business perspective, most companies stand to gain a great deal from VR development, including those in fields like healthcare, fashion, entertainment, real estate, aerospace and manufacturing. Mobile app development is the foundation of this technological leap forward. A technology can only be applied when there is a product that makes use of it, and mobile apps are among the best products for putting Virtual Reality (VR) to efficient use. By now you must be wondering how mobile app development makes or breaks the VR market. The simplest answer is that mobile app development companies can improve the use of VR in people’s day-to-day lives. Let’s dig deeper to learn how this happens: Boost Business Productivity Mobile app developers are expected to transform conferences and business meetings to make them much more productive. They are working to save on companies’ business travel by reforming and rearranging teleconference operations. The day is not far off when meetings can happen virtually instead of members from across the globe gathering in one place. This reduces costs in areas such as travel, security and accommodation.
An Improved Learning Experience The VR market is expanding dramatically in the education sector, as interactive learning has become a main objective and institutions want to create a better learning environment for learners. In most schools today, digitisation has become an integral part of the teaching curriculum. In such an environment, virtual reality can make learning very interesting. Just imagine the fun students can have when they witness the events of history instead of just listening to lectures. And not to mention that pictorial memory stays longer! On-The-Go Experience VR-based app development provides unmatched customer experiences for users, irrespective of their current location. VR-powered apps let people have an enhanced experience of services and products, which also benefits e-commerce, tourism, hotels and real estate. What are the features of VR App Development? When enhancing mobile apps with VR, mobile app development companies need to consider a few factors along with the UI design. Examining the essence of the UI/UX of smart devices is the first thing a company should consider before even starting mobile app development. This ensures that users find the app interesting and easy to navigate. Smooth App Operation The main objective here is to let users experience Virtual Reality uninterrupted by ensuring smooth app operation. No user wants a slow or stuttering VR experience, so bugs need to be continually found and eliminated by the developers. Efficient and Stable Tracking The app uses sensors, notably accelerometers and gyroscopes, to gather data based on present conditions. Virtual Reality applications alter the images on the mobile screen and track and trace head positions. It would be our pleasure to help you out if you want a VR app for your mobile devices. Let us know what you think! Originally published at yugasa.com on October 23, 2018.
https://medium.com/yugasa/how-mobile-app-development-is-recasting-the-vr-landscape-c8bf2e3f4a95
['Yugasa Software Labs']
2018-10-23 10:35:54.121000+00:00
['Outsourcing Company India', 'Mobile App Development', 'Outsourcing', 'AR', 'VR']
4 principles for designing automation with great user experience
4 principles for designing automation with great user experience Improve user experience by adhering to the principles of transparency, predictability, adaptability and level of automation. Reference: Scott Adams (2018). “Dilbert”. Available at Dilbert.com While automation may improve our lives, that depends heavily on how we implement it. Unfortunately, there has yet to be a focus on how to design automation that improves user experience. However, major industries such as aviation started successfully automating functions back in the 1980s, which gives us 40-odd years of research to draw on. Last year, I wrote my master’s thesis in psychology about automation. More precisely, I wrote about how to design Shore Control Centers: centers which will be used for monitoring and/or controlling autonomous ships. Based on the extensive reading I did while writing my thesis, I have synthesized the four most important principles for designing great automation. 1. Transparency — guide the user’s mental model It is not a coincidence that transparency is the first item on this list. Above all else, transparency is key to successful automation. Transparency refers to understanding how a system works. For instance, imagine if a toaster was see-through: we could see the heaters glow, toasting the bread on both sides. Then perhaps it would be so transparent that Hobbes would understand where the bread goes. Reference: Bill Watterson (1988). “Calvin and Hobbes” Available at https://pbs.twimg.com/media/DbqbSqDUwAAaBVW.jpg Making a system understandable is nothing new in UX. Products that are simple to understand afford better mental models for our users. However, it becomes a bigger challenge when it comes to automation. When a user looks at a website, they usually have previous experience from visiting other websites. That makes the website easier to understand, because they roughly know what to expect. We call this external consistency.
However, automation is not as widespread yet. As an example, think of an automated home device, such as Google Home or Alexa. When you ask the device a question, how does it actually work? For instance, Google Home seems to work by looking up information on the internet. However, it does not seem able to integrate information in a meaningful way, the same way a normal Google search does. For example, if I do a computer Google search for “Viking FK”, my beloved local football team, Google gives me news, fixtures and table standings. Google Home, on the other hand, will read me the first paragraph of their Wikipedia article, telling me when the club was formed. If you understand how the system works, you can phrase your question accordingly. This makes it more likely that the device will be able to help you. If not, you might go through several frustrating, failed attempts. With this particular issue, human-centric design also comes into play, as a user should not have to adapt to technology. For now, though, designers must be very clever. An automatic system may not even have a screen, yet it must still contain enough signifiers to create transparency. Otherwise, users do not know what to expect from their automated system. Which brings us to our next topic. 2. Predictability — Understandable and predictable output Related to transparency, we must also ensure that users can predict what the automatic system will do. That is quite difficult at times, and this is a principle even the biggest companies struggle to implement. Let’s use Spotify’s “Discover Weekly” as an example. “Discover Weekly” is a playlist of music, automatically created by Spotify, customised to the specific user. The playlist contains music the user might like but has not listened to yet. It does this by comparing users with overlapping tastes in music and suggesting songs that one of them has listened to but the other has not. It is a brilliant idea!
However, in my personal “Discover Weekly”, I am mostly given a strange mix of old Scandinavian folk and country music. Strangely enough, that is not really my kind of music, which renders “Discover Weekly” a bit useless for me. Screenshot of my “Discover Weekly” in Spotify. I am unsure why Spotify suggests this music. I have a theory based on my own mental model (which might be wrong): I am guessing that Spotify emphasises matching music to my native language (Norwegian) above my preferred genres. If that is the case, the issue could have been resolved by letting me set rough filters, like prioritizing genres over language. Instead, the only option Spotify provides for adapting the algorithm is to manually go through the list, disliking each song/artist individually. That is repetitive work better done by a machine. In this case, I can make a fair guess about how the system works, without being able to resolve the issue. Therefore, a transparent system is not enough; it must also be predictable. In addition, it must allow users to make adaptations. 3. Adaptability — Let your user make changes Let’s say you want to send a text message to your buddy Jim about leaving work early to catch the game. Your phone replies “okay, sending message to Tim”, your boss. Oh no, is it about to send the message? Quick, how do I override it? Automation can be a great feature, but it can also be a massive annoyance. Most of all, it is affected by something called brittleness. Automation rarely fails; it does what it is programmed to do. However, if something is wrong with the input information (Jim vs Tim, for instance), it will perform the wrong action. Keep in mind that your users are quite smart people. You do not automate functions because they are unable to do them, but rather so they do not have to do them. Automation should let your customers relax. However, if they are unsure whether their automated product will act correctly, their life is not made any easier.
Instead, they might be anxious about using the product, and end up doing the task manually instead. Therefore, be mindful of what and how you automate. Is there a function which is hard to automate perfectly? Let the user solve it manually, or use automation to give them some options. The name of the game is “level of automation”. 4. Level of Automation — Choosing the right one Often when we speak of automation, we treat it as a unified concept. However, automation should be treated as a range, going from low-level automation all the way up to full automation. The taxonomy of automation was published by Raja Parasuraman (2000). He argued that by selecting the best fit, we can maximize the benefits and minimize the disadvantages of automation. He divides automation into four functions: Information acquisition: Sensing and registering input data. Information analysis: Allows for prediction of data, and the integration of information into a single parameter. Decisions and action selection: Creates a hypothesis of the problem, presents options for how to solve it and highlights the best option. Action implementation: Automation selects the action without human intervention and performs it. That might seem a bit abstract, so let us take a SatNav/car GPS as an example: Information acquisition: Your SatNav registers the number of cars on the road. Information analysis: Your SatNav takes a few pieces of data (cars on the road, time of day, weather conditions) and calculates your estimated time of arrival. Decisions and action selection: You enter a destination. The SatNav calculates different possible routes and presents you with some alternatives, highlighting the route it thinks is best. Action implementation: You enter the destination, and the SatNav gives you a route. Next time you design automation, take a moment to ask: does it actually need to be fully automated (“Action implementation”)?
Would it perhaps suffice to simply provide the users with choices (“Decisions and action selection”)? I will admit, it looks quite cool when someone presents a fully automated system at a technology conference. However, that does not necessarily equal good user experience if the product makes bad choices for users. By making the right choice about the level of automation, you avoid your users feeling like they are “fighting against the system”. As an example, have a look at this Norwegian commercial. Rema 1000 (2018). “Smarthus”. Available at https://www.youtube.com/watch?v=sgJLpuprQp8 The commercial was created by a grocery store chain whose slogan is “the simple option is often the best option”. That is a good message to keep in the back of your head as you design automation.
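To make the taxonomy concrete, here is a small Python sketch. The enum names and the SatNav mapping are my own paraphrase of the four functions described above, not code from Parasuraman's paper:

```python
from enum import IntEnum

# The four automation functions, ordered from lowest to highest
# machine involvement (hypothetical names, my own paraphrase).
class AutomationFunction(IntEnum):
    INFORMATION_ACQUISITION = 1   # sense and register input data
    INFORMATION_ANALYSIS = 2      # integrate/predict from the data
    DECISION_SELECTION = 3        # propose and rank possible actions
    ACTION_IMPLEMENTATION = 4     # carry out the chosen action itself

# The SatNav example mapped onto the taxonomy.
satnav = {
    AutomationFunction.INFORMATION_ACQUISITION: "register cars on the road",
    AutomationFunction.INFORMATION_ANALYSIS: "estimate time of arrival",
    AutomationFunction.DECISION_SELECTION: "suggest routes, highlight the best",
    AutomationFunction.ACTION_IMPLEMENTATION: "pick the route without asking",
}

for fn in AutomationFunction:
    print(fn.name, "->", satnav[fn])
```

Modelling the level as an ordered value makes the design question explicit: for each feature, pick the lowest level on this scale that still delivers the benefit.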
https://uxdesign.cc/4-principles-for-designing-automation-with-great-user-experience-c425878c4f64
['Lars Berntzen Arholm']
2020-06-16 11:44:09.793000+00:00
['Automation', 'Innovation', 'User Experience', 'Technology', 'Design']
“The American President” at 25: An Ode to My Favorite Movie
Image copyright: Sony/Universal/Warner Bros. This month marks the 25th anniversary of my favorite movie of all time. Written by a pre-West Wing Aaron Sorkin, directed by a post-When Harry Met Sally… Rob Reiner, and featuring a sterling ensemble that included some of the finest work of Michael Douglas, Annette Bening, and Michael J. Fox’s careers, The American President is a highly underrated, politically charged romance. It was released to warm critical reception and strong but unspectacular box office, but never got the following that I believed it deserved. In honor of its silver anniversary, I revisited the film and take a look at how it holds up. My Discovery of the Movie I have never claimed to have been a typical child. One of the many ways I deviated from the norm was through my interest in movies, music, and television that were clearly meant to appeal to consumers decades older than me. I appreciated behind-the-scenes craftsmanship, polished writing, challenging subject matter, and complex themes from a very young age. At least once a week during my childhood, I would venture to the video store to rent one (or two or three) movies. I still remember the feeling of walking around the video store to see the new releases, what was in stock, and what was coming soon. Honestly, I probably could have spent hours browsing the small Blockbuster Video in our town (or maybe that was even before Blockbuster came to Rome, New York and I was at the mom and pop shop). One day when I was in 6th grade, I had stayed home from school either because I had one of my classic ear infections or because I was in need of a “mental health day” (which my mother lovingly indulged). We went to the video store and picked up a copy of The American President on VHS. My mom loved Michael Douglas, I had read good coverage of it in Entertainment Weekly (my Bible as an adolescent), and it wasn’t Rated R, so we picked it up. It was my Mom’s day off so we sat at home and popped it in the VCR. 
We loved it so much that when the credits rolled, we immediately rewound it and watched it all over again. Since then I have seen it at least 30 times, probably more. So what was it about this middle-aged, heterosexual romantic comedy-cum-liberal political fable that intrigued a 12-year-old closeted gay boy growing up in a largely isolated Republican town? Well, everything. What Makes the Movie Work So Beautifully Here’s the movie’s setup: Andrew Shepard (Oscar-winner Michael Douglas) is the President of the United States. Widowed during his presidential campaign, he rode a wave of sympathy to his win and is now enjoying nearly historic approval ratings. He is a couple of months away from the State of the Union, where he plans to announce bold legislation regarding crime prevention and fossil fuel reductions. He is supported by his Chief of Staff A.J. McInnerney (Martin Sheen), domestic policy advisor Lewis Rothchild (Michael J. Fox), press secretary Robin McCall (Anna Deavere Smith), White House Deputy Chief of Staff Leon Kodak (David Paymer), and personal aide Janie Basdin (Samantha Mathis). Image copyright: Sony/Universal/Warner Bros. Sydney Ellen Wade (four-time Oscar nominee Annette Bening) is a liberal lobbyist recently hired by a high-profile environmental agency. She works under Leo Solomon (John Mahoney) and alongside Susan Sloane (Wendie Malick) and David (Joshua Malina). On her first trip to the White House she boldly critiques the president’s “mockery of environmental leadership” in a meeting with his Chief of Staff, not knowing that the President is standing behind her. He is instantly smitten. (It’s a “meet cute” that could have been endlessly schmaltzy, but this film is way too sharp and sophisticated for that.) Their whirlwind courtship instantly grabs major media attention and is exploited by the presumptive Republican nominee for President (Robert Rumson, played by Oscar winner Richard Dreyfuss) to launch a character attack on Shepard.
As the attacks heat up and the fossil fuel legislation is threatened, tensions erupt everywhere. The film has countless strengths. The most obvious at first is the superlative acting. Douglas is so charismatic and convincing as Shepard that it makes me wonder how he didn’t carve out a niche as the go-to president portrayer in his post-Fatal Attraction and Wall Street years. Perhaps most impressive is how he goes toe-to-toe with Bening, who turns in one of her finest performances, which is saying something given her utterly remarkable career. She oscillates from swooning romantic to ferocious political powerhouse with remarkable ease. Both Douglas and Bening get spectacular speeches before the credits roll that alone should have earned them Oscar nominations. They are supported by a top-notch supporting cast. Michael J. Fox takes on his most mature role to date and knocks it out of the park as the only member of the President’s team who is truly willing to challenge him. Martin Sheen effortlessly commands the screen in a fully believable, lived-in performance. Anna Deavere Smith, John Mahoney, Wendie Malick, and Joshua Malina don’t get much in the way of character development or back story, but they have several memorable moments. Richard Dreyfuss is mostly a one-dimensional antagonist, but he sneers with perfection. Every role down to the smallest seems to have been cast with the utmost care. One of the things that makes the acting so impressive is the fact that the screenplay is so challenging. Aaron Sorkin is known for his incredibly dense, verbose, fast-paced screenplays. This was his third movie screenplay, following the Oscar-nominated A Few Good Men and the forgettable Alec Baldwin-Nicole Kidman thriller Malice. Sorkin’s biggest successes were ahead of him.
He is perhaps most famous for creating and writing the massively successful series The West Wing (more on the striking influence of The American President on that show below) and for his Oscar win for writing David Fincher’s Facebook origin story The Social Network (he also scored additional Oscar nominations for writing Moneyball and Molly’s Game). But everything that made those projects so brilliant is evident here in droves — the wit, sophistication, intelligence, charm, tension, and power. Even as Sorkin continues to impress (his latest film, The Trial of the Chicago 7, recently premiered on Netflix and is garnering well-deserved Oscar buzz), I will always consider The American President his finest and most under-appreciated work. The American President was directed by Rob Reiner. The Emmy-winning star of the iconic 1970s sitcom All in the Family and son of comedy legend Carl Reiner, Reiner had an absolutely remarkable run as a director through the 1980s and early 1990s, including When Harry Met Sally…, The Princess Bride, Stand By Me, This is Spinal Tap, Misery, and A Few Good Men (making this his second collaboration with Sorkin). He directs his ninth film with vision and precision, bringing together the superb production values beautifully — the sweeping score of Marc Shaiman, the lush and detailed production design, the warm cinematography, and the seamless editing that keeps everything perfectly paced.
https://medium.com/rants-and-raves/the-american-president-at-25-an-ode-to-my-favorite-movie-300429befb93
['Richard Lebeau']
2020-11-30 20:42:40.390000+00:00
['Politics', 'Movies', 'Film', 'Culture', 'Writing']
Write for Memoirs from History
Write for Memoirs from History Memoirs from History offers you a chance to write for our growing publication! Photo by author Hello and welcome! Memoirs from History offers you a chance to write for our growing publication, where there’s always a story to tell. We publish stories on the following topics: True Crime, History, Fiction — and we’re pleased to announce a new category: BOOKS! True Crime: We’re looking for non-fiction crime stories — stories that answer the 5 W’s: Who, What, When, Where and Why. And of course, we can’t forget ‘How.’ Stories that bring suspense and mystery to the reader — ones that we can’t tear our eyes away from, even if they make us scared to be home alone! History: Anything and everything history-related. This may be about an inspirational historical figure, or about a cool or unique invention. Stories that teach the reader something they may not know! It’s never too late to keep learning! Fiction: We love fiction stories — they do not have to be crime-related. We don’t want to stifle writers’ inspiration, we want to share it. Books: We’ve now added a Books section and I couldn’t be more excited! This can be anything from book reviews to a story of a novel that inspired you, or one you believe can teach us something unique and rewarding. Plus, who doesn’t love books! Submission Guidelines The submission guidelines for Memoirs from History are the same as any other publication’s — no plagiarism, no hate speech and no glorifying of violence. I have no problem with profanity but we’ll keep nudity at bay. If you would like to submit your work to our growing publication, you can send me an email at [email protected] with your Medium handle. You do not need a draft prepared. Once you have been added as a writer (which should be done within 24 hours) you can submit drafts (unpublished pieces) via Medium. To submit your work, click the three dots on the top right of the draft page, click “add to publication” and choose Memoirs from History.
From here, you will be prompted to add five tags. Your submission must include one of the following tags: True crime, crime, history, fiction, or books. The remaining four are up to you! There is still one more step — this is the part where some of us become confused. You must ensure you hit ‘publish’ one more time for the submission to fully go through. Now you can sit back, relax, and wait for your story to be published. If edits are required, I will send you a private note. I will respond to all submissions within 48 hours. My goal is to create a safe platform for writers to share their work, connect with others and gain valuable feedback. As of today, we have been live for exactly one month, and in that first month we already hit 60,000 views! This would not have been possible without our talented and dedicated writers. Thank you for considering writing for us — I look forward to reading your stories.
https://medium.com/memoirsfromhistory/write-for-memoirs-from-history-4ef104bca193
['Fatim Hemraj']
2020-12-01 20:06:03.496000+00:00
['Submission', 'Writer', 'Call To Action', 'Publication', 'Writing']
Dove
Dove Forever Moon Courtesy of Redbubble/LordofMasks There’s a desperation that pins me down, covering me with hands of mercy. I sit at the window, gawk at the mighty moon, and wonder, “Will I see you tonight?” In my dreams, I have come to know your ways, you are designed to wake me at all hours even when sleep should not be disturbed and I sink into the thought of you watching me trying to shake the past into the present. I am Dove. You know me as flighty, shiftless, yet free. You could not love me the way I was made to be loved. You wanted a lie here. A moment of truth there. A reason to believe that I would not fly away from you too. I am Dove. My father named me. He saw me fall out of my mother with a wailing mouth, lungs furious enough to carry time, and blue-green eyes. One would soon turn gray. Dove. Dove. Dove. You caught me one day when my wing was broken. I had too many buildings in my view. Lost loves. Failed rebounds. Attempts at love from others who were still figuring out how to love themselves, and you bandaged me up — called me a “helpless soul,” and doted on my existence. Right now, this window gives me a great view of the moon and if I look hard enough, I can see your hands caressing it in a caring way. You are trying to heal it too. But the moon does not need you. It will shine as it hangs up there in the desolate sky, yearning to show up each night. The moon will not wait for you. I will see it just as I do tonight, tomorrow too. I think... I know… This bugs you — the fact that I have more faith in the moon than I do in you. I am Dove. I owe you nothing. Nothing more than what I have. Nothing more than what you need. The moon winks at me, it knows we, you and I, are parting. It knows we are through. However… The moon and I are forever.
https://medium.com/a-cornered-gurl/dove-ebf1a10c9428
['Tre L. Loadholt']
2017-10-24 17:35:56.672000+00:00
['Peace', 'Prose', 'Love', 'Life', 'Writing']
An Intelligent Crypto Trading Bot for Everyday Investors
18 Sep 2018 We launched HodlBot in early April 2018 with the goal of “democratizing investing for everyday people”. Our MVP was a simple cryptocurrency index called the HODL20 that anyone could invest in. All you needed was a Binance account. The HODL20 tracked the performance of the top 20 coins by market cap, which together made up 87% of the entire cryptocurrency market cap. *HODL20 allocation as of March 25, 2018. Each coin is capped at 10%. Excess is re-distributed to holdings less than 10%. The default rebalancing period was one month.* It’s been 5 months since we launched. We’ve shipped a ton of updates to our product and I’m excited to share with all of you what we’ve been cooking up. Free for Everyone under $500 We can’t democratize investing for everyday people if our product is too expensive. So the first thing we decided to do was to make HodlBot free for all accounts under $500. While our usual price of $10/month isn’t a lot, it was prohibitively high for smaller account sizes. Every bit of revenue is going to help support the team, so we can continue to roll out new and improved features for our users. As we scale, we’ll make more and more of HodlBot free. Expanding Beyond the HODL20 — Create your own Portfolio While the majority of people see better returns by passively indexing the market, there are definitely some lucky/talented individuals out there who can outperform the market. Now with HodlBot, you can create a portfolio with any of the 350+ coins listed on Binance. Simply select a coin and choose the percentage allocation you want. HodlBot will automatically rebalance your portfolio to your target allocation. Create your own portfolio with HodlBot Although it’s not always financially wise to speculate, the overall market benefits from opinionated traders. If everyone indexed the market, we would never see price differences between good & bad projects. Historical Backtesting It’s ill-advised to build & run a custom portfolio blind.
We’ve collected a ton of data from Coinmarketcap and Binance, so you can simulate the performance of any custom portfolio over time. Benchmark your portfolio performance against BTC and see how well your strategy does over time. Historical Performance of 33% ETH, 33% BTC, 33% ZEC vs. Bitcoin Custom Rebalancing When we originally launched the HODL20, the default rebalancing period was set to 28 days. There’s been a lot of healthy debate surrounding what the best rebalancing period is. While we maintain there is no best rebalancing period, we wanted to give our users a greater range of choice. You can now set your rebalancing period anywhere between 1 and 1,000,000,000,000,000 days. Index Blacklist There are some cryptocurrency projects among the top 20 coins that investors want to avoid like the plague. When you blacklist a coin from the HODL20, HodlBot skips it and adds the next highest coin by market cap. Cash Out There is no obligation to stay with HodlBot. You can liquidate your portfolio anytime you want. This is one of the main benefits of holding the underlying assets yourself. You can convert up to 100% of your entire portfolio into any coin that is tradable on Binance, e.g. USDT, BTC, ETH. Cryptocurrency Payments When we first launched HodlBot, we only accepted payments through credit card. Since all of our users have cryptocurrency, cryptocurrency payments were a natural fit. We want to do our part in promoting adoption, and so we launched crypto payments. For accounts over $500, we accept BTC, ETH, LTC, ETC, BCH. HodlBot is normally $10 a month, but only $8/month if you sign up for the whole year. Future Roadmap At HodlBot, we aim for short development cycles and push product updates frequently. The goal from here on out is to publish a product update every 2 weeks. Here are a few of the things we’re working on right now. New Indices In the upcoming weeks, we’re planning to launch a plethora of new indices.
We are big believers in passive investing and indexing, so we’ll continue to do research & development on that front. Advanced portfolio customization options Besides indices, we want our users to be able to create any kind of portfolio they can dream up. Expect more sophisticated portfolio customization options such as market cap weighting, caps, etc. Superior Trade Execution Bots are much more efficient at placing orders than humans. You may only be able to place one trade at a time by hand, but a bot can place fifty instantly. On Binance, you can reduce your transaction fee by 50% when you pay with BNB. Soon HodlBot will automatically purchase tiny amounts of BNB before every trade in order to minimize fees. Sign Up for HodlBot If you would like to try out HodlBot, the first 14 days are free. HodlBot is also free for any account under $500. To get started, all you need is a Binance account and $200 in any cryptocurrency. If you want to know how HodlBot indexes the market and completes rebalancing, check out the blog I wrote here. Talking to our users is where we get most of our ideas. If you’re interested in what we’re working on, come join our Telegram group. It’s a tight-knit community of 600+ members.
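The HODL20’s capped weighting described earlier (each coin capped at 10% of the portfolio, with the excess redistributed to the smaller holdings) can be sketched in a few lines. This is an illustrative reconstruction, not HodlBot’s actual code; the function name and the iterative redistribution scheme are my assumptions about how such a cap could be applied.

```python
def capped_index_weights(market_caps, cap=0.10):
    """Weight coins in proportion to market cap, capping each coin at
    `cap` and redistributing the excess among the uncapped coins.
    For the full weight to sum to 1, cap must be >= 1/len(market_caps)."""
    total = sum(market_caps.values())
    weights = {c: mc / total for c, mc in market_caps.items()}
    capped = set()
    while True:
        over = [c for c in weights if c not in capped and weights[c] > cap]
        if not over:
            break
        # Clip the over-weight coins and pool the excess.
        excess = sum(weights[c] - cap for c in over)
        for c in over:
            weights[c] = cap
            capped.add(c)
        rest = [c for c in weights if c not in capped]
        if not rest:  # every coin is at the cap; nothing left to receive excess
            break
        # Spread the excess proportionally over the remaining coins.
        rest_total = sum(weights[c] for c in rest)
        for c in rest:
            weights[c] += excess * weights[c] / rest_total
    return weights
```

For example, with one coin at 50% of total market cap and ten coins at 5% each, the dominant coin is clipped to 10% and each smaller coin rises from 5% to 9%, matching the “excess is re-distributed to holdings less than 10%” behavior described in the allocation caption.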
https://medium.com/hodlbot-blog/an-intelligent-crypto-trading-bot-for-everyday-investors-5885dd2fdb24
['Anthony Xie']
2019-04-05 14:46:00.903000+00:00
['Cryptocurrency', 'Investing', 'Bitcoin', 'Startup', 'Finance']
Stepping up: Image Classification using TensorFlow 2.x and TensorFlow Dataset
In this tutorial, we will build a network to predict the class of an image. Therefore we use the tf_flowers dataset, which consists of 3,670 images in 5 different classes. This tutorial builds upon a tutorial by TensorFlow. This tutorial follows our well-known workflow: Loading the data and preparing our datasets Define the model Training and evaluating the model Loading the data and preparing our datasets First, let's have a look at our dataset. We are using the tf_flowers dataset provided by the TensorFlow Datasets library (tfds). We therefore load the dataset in our get_dataset() function using tfds. After that, we apply a preprocessing function to each sample in the dataset and generate a batch of size one. In the main() function, we plot the first 9 images in the dataset. The result should be something like this: The first 9 images in our dataset. As you can see, the three images in row three belong to the class dandelion. They differ very much from the dandelion in the second row. There are a few problems in our dataset: the different sizes of the images, the range of the pixels is from 0 to 255 (we prefer them between 0 and 1), and the encoding of the image labels. To solve these problems, we resize the images, normalize them by dividing them by 255, and encode the labels using the one_hot() function. One-hot encoding means that instead of one scalar value, we represent our classes by vectors. Each position in the vector represents one class. We want to classify 5 classes, so label 4 becomes [0 0 0 1 0]. Also, we split the data into train and test data and shuffle them. As you can see, we added parameters to specify batch size, image width, and image height. This leaves us with the following output: The pixels' values in the images are between 0 and 1, and imshow() plots float values. Define the Model As in the tutorials before, we define our model as a class and inherit from tf.keras.Model. 
We define three convolutional layers, each followed by a max-pooling layer, and after that a flatten layer and a dense layer. We combine and train the model using our trainer_cnn.py and main.py (see Appendix): As you can see, the training and validation accuracy rise at first. After a couple of epochs, the accuracy doesn’t rise anymore and starts to decay. This effect is due to overfitting. Each epoch, the model learns using all available data points. After a few epochs, the model memorizes the training data. We can augment our data to overcome this issue: we use our existing images and generate new images by zooming or rotating them. Therefore we apply augmentation and add a dropout layer to our model. Both layers are only active at training time: After training, we get a much better validation accuracy, and both losses drop:
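Putting the pieces together, a minimal sketch of the model described above (three conv + max-pooling blocks, a flatten layer, a dense head, plus augmentation and dropout layers that are only active during training) might look like the following. The layer sizes, augmentation parameters, and class name are illustrative assumptions, not the article’s exact code, and the built-in augmentation layers assume TensorFlow 2.6 or newer.

```python
import tensorflow as tf

class FlowerClassifier(tf.keras.Model):
    """Three conv blocks (conv + max pooling), then flatten + dense head.
    Augmentation and dropout only act when training=True."""
    def __init__(self, num_classes=5):
        super().__init__()
        # Augmentation layers are no-ops at inference time.
        self.augment = tf.keras.Sequential([
            tf.keras.layers.RandomFlip("horizontal"),
            tf.keras.layers.RandomRotation(0.1),
            tf.keras.layers.RandomZoom(0.1),
        ])
        self.conv1 = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")
        self.pool1 = tf.keras.layers.MaxPooling2D()
        self.conv2 = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")
        self.pool2 = tf.keras.layers.MaxPooling2D()
        self.conv3 = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")
        self.pool3 = tf.keras.layers.MaxPooling2D()
        self.flatten = tf.keras.layers.Flatten()
        self.dropout = tf.keras.layers.Dropout(0.2)
        self.dense1 = tf.keras.layers.Dense(128, activation="relu")
        self.dense2 = tf.keras.layers.Dense(num_classes, activation="softmax")

    def call(self, x, training=False):
        x = self.augment(x, training=training)
        x = self.pool1(self.conv1(x))
        x = self.pool2(self.conv2(x))
        x = self.pool3(self.conv3(x))
        x = self.flatten(x)
        x = self.dropout(x, training=training)
        return self.dense2(self.dense1(x))
```

Because the model outputs a softmax over 5 classes, it pairs with the one-hot labels from the preprocessing step and a categorical cross-entropy loss during training.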
https://medium.com/the-innovation/stepping-up-image-classification-using-tensorflow-2-x-and-tensorflow-dataset-a37bf30f747d
['Jan Bollenbacher']
2020-11-15 17:30:45.366000+00:00
['Machine Learning', 'Python', 'TensorFlow', 'Neural Networks', 'Classification']
The Unsung Royalty: The Mechanical Royalty
Photo by Heidi Fin on Unsplash I refer to the mechanical royalty as the lost royalty because when most people think of publishing they almost always think of ASCAP or BMI. These performance rights organizations collect the so-called “public performance royalty”. Public performances can include play on television or radio, in clubs and restaurants, on websites, or on other broadcasting systems. When your song is publicly broadcast in one of the aforementioned mediums, a royalty is generated, which your PRO collects and pays out. Public performance royalties could make up a large chunk of your publishing income. Another large chunk of your publishing income should be coming from your mechanical royalty. The problem is a lot of producers and songwriters are not collecting them. Mechanical royalties are generated when a copyrighted work is reproduced in digital and physical formats. Songwriters and producers are paid mechanical royalties per song purchased, downloaded and/or streamed on digital platforms. The current statutory mechanical royalty rate for physical recordings (such as CDs, vinyl, etc.) and permanent digital downloads is 9.1¢ for recordings of a song 5 minutes or less, and 1.75¢ per minute or fraction thereof for those over 5 minutes. Although vinyl and physical reproductions of music are still alive today, the majority of your mechanical income will likely derive from streaming as we enter a new decade. The streaming calculation, however, is a bit more complicated. The mechanical streaming rate is about $0.06 per 100 streams, or $0.0006 per stream, according to Royalty Exchange. Audiam actually lets you calculate your mechanical royalty revenue here. Another quick note: mechanical royalties are collected by different societies in each country. Quick math showcases how lucrative mechanical royalties used to be. When an album was purchased on CD or cassette, 9.1¢ was generated for every song on it.
To generate $1,000 in mechanical royalties, you would only need approximately 11,000 downloads or physical albums sold, via Royalty Exchange. In the days when physicals and downloads reigned supreme, mechanical royalties were rather large. There is some hope for larger mechanical royalty payouts in the new streaming era. The new mechanical royalty rates as established by the Copyright Royalty Board will increase from 11.2% of streaming revenue in 2018 to 15.1% of streaming revenue by 2022, marking a 44% increase over the next few years. It is clear that the mechanical royalty will play a major part in publishing revenue over the course of the next generation. Still, many songwriters and producers have let this royalty sit in space instead of cashing in. A major reason why independent songwriters and producers don't collect their mechanical royalty is that the Harry Fox Agency (the lead mechanical collection society), while great at what they do, is notoriously difficult to deal with independently. This alone is one of the biggest reasons why a lot of mechanical royalties are left unpaid. Others who pay out US mechanical royalties are Music Reports, Inc. and Audiam. Audiam is an interesting player in the mechanical royalty space as well. While they only pay out streaming mechanical royalties in the US (not digital downloads), they look to be filling the gap of providing mechanical royalty collection for the independent composer. Audiam is an interactive streaming mechanical royalty collection agency. It gets its music publishing members accurately paid from YouTube, Spotify, Apple Music, Google Play, Rhapsody, Beats, Amazon Prime, Mood Music, TouchTunes and other interactive streaming services. We license, police, research, audit, collect and distribute “interactive” streaming mechanical royalties. 
Any publishing administrator (either self-published or a larger publishing entity) can join Audiam as long as they represent the administrative rights to their songs/compositions. via Audiam. Another way to collect your mechanical royalties is through a publishing administration partnership. A company like Songtrust will register and collect your publishing royalties across the board, including the elusive mechanical royalty. These companies serve as a one-stop shop for a majority of your publishing revenue for a rather small administration fee. This could save you a lot of time and hassle. Many songwriters and producers make the monumental mistake of thinking that because they registered with BMI, ASCAP, or SESAC they will be paid mechanical royalties, or that they are collecting all of their publishing income. To be clear again: performing rights organizations do not collect your mechanical royalty. To get paid your mechanical royalties, you must be registered with a separate collection agency such as the Harry Fox Agency (HFA), Music Reports (MRI) or Audiam. But remember, companies such as the Harry Fox Agency (HFA) or Music Reports (MRI) only really deal with music publishing companies or administrators, so it is hard for an individual to collect without a large publisher or administrator in the picture. Unlike performance royalties, which can be extremely unpredictable and nearly impossible to calculate, mechanical royalties are a steady and calculable source of publishing revenue. As stated earlier, with mechanical royalty rates on the rise over the next couple of years, for many, mechanical royalties can and will be a significant chunk of income. It is imperative to know whether you are collecting your mechanical royalty and, if not, to devise a plan to start collecting. 
If you are not signed to a big publishing company, or that route is not attractive to you, I strongly suggest you look into a publishing administration company such as Songtrust or TuneCore Music Publishing because of their global collection capacity and convenient portals. For a small one-time fee and a 15% commission on the ‘publisher’s share’ of royalties, a company like Songtrust can collect on your behalf, and you can confidently know that you are reaping a great majority of your publishing income. The reality is, if you are not with an admin company or a large music publishing company, more than likely you are not collecting your mechanical royalty. If this is you, it’s time to do something about it. Still, with everything that has been said, it will be interesting to see how companies like the Harry Fox Agency, Audiam and Music Reports co-exist with the government-mandated Mechanical Licensing Collective. The Mechanical Licensing Collective (MLC) was derived from the newly passed Music Modernization Act and will be tasked with tracking, collecting, and distributing mechanical licenses from streaming services in the U.S. The new collective will be game-changing to say the least and will institute a new public database that will house information relating to the ownership of publishing shares of all musical works. See below. Mechanical Licensing Collective The legislation establishes a “mechanical licensing collective” (“MLC”) to administer the blanket license, and a “digital licensee coordinator” (“DLC”) to coordinate the activities of the licensees and designate a representative to serve as a non-voting member on the board of the MLC. The MLC will receive notices and reports from digital music providers, collect and distribute royalties, and identify musical works and their owners for payment. 
The MLC will establish and maintain a publicly accessible database containing information relating to musical works (and shares of such works) and, to the extent known, the identity and location of the copyright owners of such works and the sound recordings in which the musical works are embodied. In cases where the MLC is not able to match musical works to copyright owners, the MLC is authorized to distribute the unclaimed royalties to copyright owners identified in the MLC records, based on the relative market shares of such copyright owners as reflected in reports of usage provided by digital music providers for the periods in question. via Copyright.Gov Just know, publishing is on the upswing and a big reason for that is the future of the mechanical royalty. Collect yours!
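As a back-of-the-envelope check on the numbers above, the gap between download-era and streaming-era mechanicals is easy to compute from the rates quoted in the article (9.1¢ per song downloaded or sold physically, and roughly $0.0006 per interactive stream). The function name here is mine, and the stream rate is the article’s approximation, not a statutory figure.

```python
import math

# Rates quoted in the article.
DOWNLOAD_RATE = 0.091    # dollars per song downloaded or sold physically
STREAM_RATE = 0.0006     # dollars per interactive stream (approximate)

def units_needed(target_dollars, rate_per_unit):
    """Number of sales or streams needed to earn a target dollar amount."""
    return math.ceil(target_dollars / rate_per_unit)

# Earning $1,000 takes roughly 11,000 downloads but about 1.67 million streams.
downloads = units_needed(1000, DOWNLOAD_RATE)
streams = units_needed(1000, STREAM_RATE)
```

The same $1,000 that once required about 11,000 sales now takes over 150 times as many streams, which is why the rising per-stream percentages set by the Copyright Royalty Board matter so much to songwriters.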
https://medium.com/the-courtroom/the-unsung-royalty-the-mechanical-royalty-f43a12d59e62
['Karl Fowlkes']
2019-12-30 05:02:18.672000+00:00
['Business', 'Music', 'Technology', 'Music Business', 'Entertainment']
The Lucky Ones: Mourka’s Story
BY MOURKA MEYENDOROFF I was 19 years old in the fall of 1966 when my friend Barbara and I drove my two-tone 1956 Chevrolet to Baltimore, Maryland, where I was to have an illegal abortion. One afternoon, a few weeks prior, I had returned from school, pulled into my driveway, parked, and stepped out of the car. I was loaded with books and bags and papers. Suddenly, Bo, my ex-boyfriend, was right there. I screamed from the unexpectedness of him. He was angry; I had broken up with him. He grabbed my arm; the books and papers went flying. I thought he would kill me but instead, he ripped my clothes off and raped me on my car’s hood. Right before he got into his car and drove away, he banged me on the head. I never saw him again. I slid off the car and fell into the dry brown leaves. I dressed. With leaves still in my hair, I slowly walked up the apartment steps where I lived with my parents. My father greeted me when I walked through the door. He asked me in Russian, “Как поживаешь?” How was I doing? I answered, “Нормально, всё нормально.” Everything is fine. After swallowing glass after glass of quinine water and drinking vast amounts of alcohol in failed attempts to abort, I managed to obtain $500 from a friend for the abortion. Barbara and I scraped up the rest of the money for a hotel room and gas — there was not much left for food — a minor consideration. I had the instructions memorized. I was to go to a certain Howard Johnson Motor Lodge in Baltimore. I was to check in, get a room, and wait for a taxi that would pick me up at a specified time. I was instructed to be alone. I was not frightened. I was in deep denial of the danger that awaited me. It was dark when we got to the Howard Johnson. We checked in. I looked out the window and noticed a taxi waiting at the entrance. It was time to go. Barbara and I hugged and I walked alone, down the hall, into the lobby, out the door and climbed into the taxi. I felt like I was moving in slow motion. My blinders were on. 
My thoughts were not on the danger of what was to come but on the necessity of going through it, to get it done and move on. The cab driver told me to lie face down in the car’s back seat and not to get up. I did as I was told. I felt the taxi winding around curves and going uphill. About twenty minutes later, we stopped at a dark house. He told me to go in. I was greeted by a woman who asked me for the money. She took the envelope of cash and told me to go into an adjacent room, take off my clothes and put on a paper dress. I went into the room and there was a woman lying on her side on the bed in one corner, groaning. We didn’t speak. I didn’t want to know. I soon walked into a very bright room and was told to lie down on the cold, flat metal bed and put my feet into the stirrups. My legs began to tremble. The doctor and nurse were wearing sunglasses. The operation began. The doctor told me that there would be cramping, but I was not given anything for pain. As it intensified I felt tears rolling down my cheeks. The procedure lasted fifteen minutes but felt like an eternity. And then it was over. The doctor asked me if I wanted to see the fetus. I said no. I was led into the original room. The woman was gone. I was told to lie down for a while, and that they would come for me. It was in this quiet moment that I realized what had just happened. I could bleed to death. I could get an infection. Would I see Barbara again? In about twenty minutes, I was given some pills for the bleeding and some menstrual pads. I got dressed and slowly and painfully walked out of the house and into the waiting taxi. Again, I was told to lie face down on the back seat. Once more, I did as I was told. Finally, the taxi dropped me off at the Howard Johnson. I walked down the hall and was so relieved to see Barbara rushing towards me. We hugged. It was over. The next morning, I wasn’t bleeding too badly. My angels were working overtime. I was going to make it. Some women die. I was one of the lucky ones.
https://medium.com/tmi-project/the-lucky-ones-mourkas-story-33029a1859d6
['Tmi Project']
2020-12-11 18:36:15.696000+00:00
['Abortion Rights', 'Reproductive Rights', 'Storytelling', 'Abortion', 'Reproductive Justice']
This Moment In Time, And The Futile Quest For Permanence.
Take a breath and let the moment pass. There is another one coming. Image by A Owen from Pixabay Life as we know it is temporary. The earth has been in, and remains in, a constant state of change. At the time that the earth and moon were formed, some 4.5 billion years ago, a day on earth was only 4 hours long. The moon was only 25,000 miles away, and the tidal forces on the earth were huge. As the moon moved away from the earth, the days grew longer, the tidal forces decreased and the surface of the earth received more sunlight in a longer day. The moon is now 238,000 miles away. The moon will continue to move away from the earth over time, at a rate of about 3.5 cm a year. The sun will continue to look about the same for a few billion years before it blows up into a red giant star, consuming all of the inner planets. But this moment in time…it is all we have to live for now. And we have no idea just how lucky we are to be here. Had there been no collision between the earth and another body about the size of Mars, there would be no moon and perhaps no life. The days might be much shorter now. The axis of rotation might shift wildly over geological time, precluding the stable environment needed for the complexity of life we have now to emerge. We are a result of a state of constant change. Yet, so many people seek permanence. Some people think that American exceptionalism should be made permanent. Some people think that their wealth should be permanent, handed down from generation to generation. Some people want to go to heaven, to exist in a constant state of joy, forever. Some people want to sit in front of their TV all day, hoping that nothing will change. Some people wish to be buried in a mausoleum after death, in a quest for permanence. John Lennon once sang a song, Across the Universe. The chorus? “Nothing’s gonna change my world.” There is something about the mausoleums that I’ve seen. They are built mostly of stone. They are made to be permanent. 
They are made to resist change. The sunlight shines through the stained glass windows, tracking an image across the floor, onto the wall, day after day, after day. But nothing that man has ever made has been able to resist the constant state of change in the world. Even mausoleums require maintenance. Sure, we can find artifacts from ancient civilizations. Like the Great Pyramids of Egypt, each one a giant mausoleum for a royal family. Their interior walls tell a story of a wish for the afterlife. The artifacts, the gold masks, the mummification of the bodies inside, all tell us of a wish for a permanent place in the afterlife. As if immortality was a thing. But all of that came about from change. Everything we touch came about from change. The pyramids themselves seem permanent to us, for we have such a short lifetime relative to them. And they will remain there for thousands of years more. But over geological time, nature will have its way with the pyramids. Eventually, as the tectonic plates move about the earth, the crust upon which the pyramids sit will be subducted under another plate, and the pyramids will go with them. More likely though, the pyramids will be ground into dust by the weather long before succumbing to the subduction of the crust that supports them. The motion of the tectonic plates arises from the heat in the core of the earth. The heaviest elements are down there, in the core, where they work like a giant nuclear reactor. Radioactive decay is what drives the currents of molten metal in the core and the mantle, and the crust floats on the mantle like a boat floats on water. It has been estimated that it will be 91 billion years before the core will turn solid. That will give earth plenty of time to recycle everything on the surface through plate subduction. There is nothing on the surface that will survive that process. *Everything* will be recycled a few times more until tectonic motion stops in about a billion years. 
Looking further down the road, much, much further, we project the end of the universe some 10¹³⁹ years from now. That is an unimaginably long period of time. A rough calculation suggests that “it’s more than the amount of time it would take to count every atom in the universe if you had to wait from the Big Bang until now in between counting each atom”. according to Gizmodo. Our tiny little brains are just not capable of comprehending how long that is. A million years might as well be “forever” for us, let alone 10¹³⁹ years. And what awaits us there, at the end of the universe? Heat death. That’s when every last erg contained in the universe has been expended to the point where there is no other free energy left in the universe. At that point in time, there will be no more energy for anything else to happen. I wonder what happens to time then. Surely, we’re not living our lives for that, are we? If that’s the end of the universe, then is there any meaning to be found? Instead of looking for meaning in life, I think that the best thing to do is to just find a way to live in peace with our brothers and sisters. Find a way to make someone’s day a better day. Remind ourselves that a bad day is not even remotely close to “forever”, and that there is nothing personal about a bad day. And for those people who are looking for permanence, would you really want to live forever? Wouldn’t you like to know that at some point, you can stop thinking, stop worrying, stop even having joy and just rest? Ever watch a toddler have a good day, only to become cranky by early afternoon? Yeah, that’s because it takes energy for everything, even to have fun. And when blood sugar is low, decision fatigue sets in, and things go south in a hurry. That’s what forever is like. So I’m not sure that I want permanence. I know that my time here, on this plane, and perhaps any other plane is limited, and I’m OK with that. The past is gone. The future is not here yet. 
And the present is all that I really have right now, regardless of all of the friends, possessions, and experiences I’ve had in my life. I don’t count my blessings, for counting them only makes them the lesser. I just enjoy the moment for what it is and let it pass. Write on.
https://scottcdunn.medium.com/this-moment-in-time-and-the-futile-quest-for-permanence-fed5bcc3a38d
[]
2019-07-14 20:53:57.088000+00:00
['Science', 'Philosophy', 'Reality', 'Time', 'Mindfulness']
How Your Need for Progress is Hijacking Your Happiness
This is the feeling I started to chase and have since stopped going to the gym. (Not that I don’t like the gym at all, mind you, but I mostly went for the machines and sauna. Ever since Corona, I haven’t wanted to be stuck inside with a bunch of panting gym rats, and who would want to wear a mask while they’re working out, anyway? I’m not training, I’m maintaining. A phrase I admittedly use as an excuse from time to time to not push myself when I probably could.) My girlfriend’s intentions were pure. She likes the tech and tracking and numbers that come along with making progress in areas like that. More power to her and anyone like that. (I mean, I’d be lying if I said I wasn’t at least a little proud of myself for finally hitting 600 followers…thanks!) But, as far as running went, I continued to do it for the simple joy of it. I already loved being at the beach, so killing two birds by running there instead of driving seemed like an easy solution to me. It also got me thinking. Why do we allow the desire to make progress in our lives to steal our happiness? Happiness doesn’t come from anything external, and yet we act as if any break in the upward momentum will cause the entire project of reaching peace to fall apart. We assume that every new endeavor we take on, every new project or poem or song or book or portrait or idea or allotted exercise time we have has to be better than the last one. If it’s not at least as good as what we did before, we amplify our inner critic to the point where all we hear is YOU. HAVE. FAILED. We place our happiness in the hands of this illusion of upward mobility without ever considering the fact that by doing this, we’re leaving ourselves feeling empty without our little project. It’s not that we shouldn’t set goals for ourselves, but if achieving goals is where you find happiness, what happens when you’re no longer capable of reaching them?
If you only find happiness in exercise when you’ve run farther than you did the day before, what happens when you get too old to run for very long? Can you no longer be happy simply running for the sake of it? Or walking, even? What about just sitting outside? In my case, the achievement of reaching a new running distance wasn’t where I was finding my happiness. I was perfectly content just being outside on a nice day. I got more out of the sunshine and saltwater than I did the mile tracker. Those numbers seemed petty and laughable in comparison to the grandeur I felt just being able to feel healthy and alive while slowly jogging along the tide as it stretched its foamy fingers, trying its hardest to lap my feet and soak my socks. That being said, if you consciously decide to run a certain distance, then sure, go for it! But unless you’re making that conscious effort to achieve something specific, there’s no point in living under the pressure of progress for the sake of progress. Because my goals had shifted from “run farther than last time” to simply “run outside,” I had a much easier time relaxing and enjoying my jog rather than feeling as if I had failed for not reaching some arbitrary number. It all depends on what your goals are. The thing is, most people at their core just want to be happy. And if you were to strip away the convoluted and unnecessary ways people invent to try and achieve this goal, most people could easily reach this state of peace on their own any time they wanted. I read things similar to this all the time when it comes to artists and making more money. I literally just posted something myself about that exact topic. When our standard for happiness has a minimum payment of One Accomplishment, we eventually become unable to find that same sense of contentment on our own. We always need the external validation from our personal achievement or acknowledgment from our peers to say, Okay, NOW you can be happy.
What happens when you’re no longer able to make your art or reach the same goals? What if you age out of it or, more tragically, get in an accident that strips you of your ability? Are you now incapable of finding happiness? Have you become so acclimated to acquiring badges of approval that you’ve forgotten that our sense of peace and wellbeing comes from within and not without? When you place too much importance on continual progress, especially for the sake of progress, you’re robbing yourself of the joy that is surrounding you at all times. Progress is merely an indicator that you’ve done something enough times to master that particular level of competency. If that indicator is where you place your requirement for happiness, how could you even be satisfied if you were the best in the world? Then what? There’d be nowhere to go. You would no longer have your indicators of achievement and would probably end up starting an entirely new game chasing the exact same trophies. I’m not saying progress is something to be avoided or dismissed as irrelevant, far from it. Progress has literally been what’s allowing me to type these very words to you right now. It’s the cornerstone of modern society and is what makes our lives more luxurious than the lives of the royalty of the past. That’s not the point. The point is that you’re giving up your most precious resource to something that doesn’t need it. Your progress doesn’t need your happiness. You can achieve all kinds of things in this world without feeling much of anything about it. Happiness is simply a byproduct of fulfilling goals. Goals, however, are not the source of happiness. That source is within you and always has been. When we can realize this — that our happiness and contentment doesn’t have to be conditional, that we can remove our source of peace from being so attached to achievement and validation — we can finally see that we are truly untouchable. 
We can finally create without the pressure of beating ourselves up over numbers that don’t really mean anything. We can finally go for a run and be perfectly content with the simple knowledge that our legs are able to do so.
https://medium.com/the-innovation/how-your-need-for-progress-is-hijacking-your-happiness-f6aa705f162f
['Scott Leonardi']
2020-12-25 21:32:23.666000+00:00
['Personal Development', 'Self', 'Progress', 'Self-awareness', 'Life']
Understanding Machine Learning
What is Machine Learning? Machine Learning is one of the fastest-growing areas of computer science, with far-reaching applications. Basically, Machine Learning is a system that can automatically learn and improve from past experience without being explicitly programmed. When Do We Need Machine Learning? When do we need machine learning rather than directly programming our computers to carry out the task at hand? Two aspects of a given problem may call for the use of programs that learn and improve on the basis of their “experience”:
1. Tasks That Are Too Complex to Program
- Tasks performed by animals/humans: examples include driving, speech recognition, and image understanding. In all of these tasks, state-of-the-art machine learning programs, programs that “learn from their experience,” achieve quite satisfactory results once exposed to sufficiently many training examples.
- Tasks beyond human capabilities: another wide family of tasks that benefit from machine learning techniques is the analysis of very large and complex data sets: astronomical data, turning medical archives into medical knowledge, weather prediction, analysis of genomic data, Web search engines, and electronic commerce.
2. Adaptivity
One limiting feature of programmed tools is their rigidity — once the program has been written down and installed, it stays unchanged. However, many tasks change over time or from one user to another. Machine learning tools — programs whose behavior adapts to their input data — offer a solution to such issues; they are, by nature, adaptive to changes in the environment they interact with.
Types of Learning
1. Supervised Learning
In supervised learning, the training data includes both the features (inputs) and the labels. In a regression problem, we try to predict a result with a continuous output; in a classification problem, we try to predict a result with a discrete output.
2. Unsupervised Learning
In unsupervised learning, the training data includes the features (inputs) but not the labels. Here, we do not tell the system where to go; the system has to find structure in the data on its own. It is more difficult than supervised learning because we do not know the desired output, which makes the model harder to train and evaluate.
3. Reinforcement Learning
In reinforcement learning, the machine is trained to make specific decisions. The machine is exposed to an environment where it trains itself continually using trial and error.
So let’s get started with Machine Learning. Keep watching this space and follow CodinGurukul for more tech articles, or you can reach out to me with any doubts and suggestions; the next blog in the series will be published soon. Visit: MLAIT - We Code For Better Tomorrow
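The supervised-learning idea above can be sketched in a few lines of plain Python: a from-scratch 1-nearest-neighbour classifier that predicts a discrete label (a classification problem) from labelled training examples. The data points and labels below are invented purely for illustration; real systems would use a library such as scikit-learn.

```python
# A minimal supervised-learning sketch: 1-nearest-neighbour classification.
# Training data = features (inputs) paired with labels, exactly as in the
# definition of supervised learning above.

def nearest_neighbor(train_points, train_labels, query):
    """Predict the label of `query` from the closest training point."""
    def dist(a, b):
        # Squared Euclidean distance (the square root is not needed for ranking).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train_points)), key=lambda i: dist(train_points[i], query))
    return train_labels[best]

# Made-up features and labels for illustration only.
points = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.1, 8.7)]
labels = ["small", "small", "large", "large"]

print(nearest_neighbor(points, labels, (1.1, 0.9)))  # → small
print(nearest_neighbor(points, labels, (8.5, 9.2)))  # → large
```

The classifier never sees an explicit rule; it generalizes from the labelled examples it was given, which is the essence of learning from “experience.”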
https://medium.com/codingurukul/understanding-machine-learning-114e85cd3f0e
['Paras Patidar']
2019-12-13 14:50:26.878000+00:00
['Artificial Intelligence', 'Understanding Ml And Ai', 'Machine Learning', 'Codingurukul']
In Marketing, Don’t Optimize Operational Experiences, Optimize Customer Experiences
Photo by Gary Chan on Unsplash Marketing automation is great, but does it really change the way people think about your company and products? I’ve been marketing B2B technology products and services for over twenty years and I’m seeing a pattern I don’t like: operational automation prioritized over customer experience. I have examples sitting in my inbox right now in the form of marketing emails from several major trade organizations and software companies. These companies send this stuff every day and sometimes multiple times a day. Have they never heard of the boy who cried wolf (maybe they haven’t)? The dark side of operational marketing Tools like HubSpot and Marketo give marketers the ability to set up and automate the delivery of marketing messages to multiple outputs with a few clicks. In fact, with proper setup there are no clicks: you just create and format the content and then plug it into the system for automated delivery. In some cases, marketers choose to deliver the same content every day to maximize exposure. But the reality is that they are actually rendering their efforts invisible. Remember ‘banner blindness’? Apparently not I’m ancient enough to remember when digital marketing was about intrusive crap like banner ads that littered online content, diluting its value and message. But humans are adaptive and we adapted to this onslaught by cognitively learning to ignore it. It didn’t take long because, even before the Internet, we were exposed to thousands of marketing messages on a daily basis and we became numb to them. Yet marketers still sent them out constantly in a ‘throw it at the wall and see what sticks’ approach. This, despite massive evidence that not only does this no longer work, but it actually does serious damage to the offending companies. Because it’s easy doesn’t make it right We know, for certain, that what people want online is high quality, actionable content.
Content that informs and helps them make problem-solving decisions, i.e. buying decisions. Yet we continue to send those daily ‘empty’ emails. Why? The answer is simple. Quality is hard and quantity is easy. Automation has made the operational lift in marketing much easier, as it should. But marketers have taken the lazy route at the expense of customer experience. I opted in for those emails I mentioned at the opening of this article, supposedly exchanging my valued attention and time for something of value. And what did I get? Intrusive, irrelevant, annoying barrages of content that I either ignore or delete and unsubscribe from. They acquired me with a promise and then used operational laziness to destroy the value of that effort. This is not really about marketing automation Ironically, marketing automation gives us the tools we need to fine-tune the experience our customers associate with us. It’s called feedback, and it tells us every day, via analytics, whether what we are sending and posting delights the customer or infuriates them. If I send six emails in a week for the same webinar to a customer who doesn’t open them, shouldn’t I get the message that this isn’t working or they don’t care about the topic? Apparently, if enough of the spaghetti sticks, it is worth annoying the 99% for whom it doesn’t. Google keeps telling us but… SEOers, who should know better, are often as guilty of this laziness as anyone else. The reason they should know better is that Google has made it loud and clear what it rewards: quality. They’re wise to the game. Potential buyers are no different. We know a good experience when we have it. And too many marketers are using tools that, used improperly, destroy that experience. Please stop. This is not rocket science Before you design campaigns and content, step back and imagine yourself on the receiving end of them. Even better, create a list in your marketing tool and put your marketing and management team on it.
Then send your campaign to them. Even better, to their personal email, if they’ll share it. Get their feedback. Is the experience useful, interesting, helpful, or…a drag? This seat-of-the-pants user experience (UX) testing is common in software and product development. Why would marketers ignore it? IMHO, there are only two reasons: incompetence or laziness. Or both. Time to step up and use the tools we have, like professionals.
https://medium.com/swlh/in-marketing-dont-optimize-operational-experiences-optimize-customer-experiences-5f93dc86104e
[]
2019-07-26 15:46:46.412000+00:00
['Professional Development', 'Customer Experience', 'Marketing', 'Automation', 'Advice']
Complete Guide to Data Visualization with Python
Let’s see the main libraries for data visualization with Python and all the types of charts that can be done with them. We will also see which library is recommended to use on each occasion and the unique capabilities of each library. We will start with the most basic visualization, which is looking at the data directly, then we will move on to plotting charts and finally, we will make interactive charts. Datasets We will work with two datasets that will adapt to the visualizations we show in the article; the datasets can be downloaded here. They are data on the popularity of searches on the Internet for three terms related to artificial intelligence (data science, machine learning and deep learning). They have been extracted from a famous search engine. There are two files, temporal.csv and mapa.csv. The first one, which we will use in the vast majority of the tutorial, includes popularity data for the three terms over time (from 2004 to the present, 2020). In addition, I have added a categorical variable (ones and zeros) to demonstrate the functionality of charts with categorical variables. The file mapa.csv includes popularity data separated by country. We will use it in the last section of the article when working with maps. Pandas Before we move on to more complex methods, let’s start with the most basic way of visualizing data. We will simply use pandas to take a look at the data and get an idea of how it is distributed. The first thing we must do is visualize a few examples to see what columns there are, what information they contain, how the values are coded…
import pandas as pd
df = pd.read_csv('temporal.csv')
df.head(10) #View first 10 data rows
With the describe command we will see how the data is distributed: the maximums, the minimums, the mean, …
df.describe()
With the info command we will see what type of data each column includes.
We could find the case of a column that, when viewed with the head command, seems numeric, but if we look at subsequent data there are values in string format; then the variable will be coded as a string.
df.info()
By default, pandas limits the number of rows and columns it displays. This usually bothers me because I want to be able to visualize all the data. With these commands, we increase the limits and we can visualize the whole data. Be careful with this option for big datasets; we can have problems showing them.
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
Using Pandas styles, we can get much more information when viewing the table. First, we define a format dictionary so that the numbers are shown in a legible way (with a certain number of decimals, date and hour in a relevant format, with a percentage, with a currency, …). Don’t panic, this is only a display and does not change the data; you will not have any problem processing it later. To give an example of each type, I have added currency and percentage symbols even though they do not make any sense for this data.
format_dict = {'data science':'${0:,.2f}', 'Mes':'{:%m-%Y}', 'machine learning':'{:.2%}'}
#We make sure that the Month column has datetime format
df['Mes'] = pd.to_datetime(df['Mes'])
#We apply the style to the visualization
df.head().style.format(format_dict)
We can highlight maximum and minimum values with colours.
format_dict = {'Mes':'{:%m-%Y}'} #Simplified format dictionary with values that do make sense for our data
df.head().style.format(format_dict).highlight_max(color='darkgreen').highlight_min(color='#ff0000')
We use a color gradient to display the data values.
df.head(10).style.format(format_dict).background_gradient(subset=['data science', 'machine learning'], cmap='BuGn')
We can also display the data values with bars.
df.head().style.format(format_dict).bar(color='red', subset=['data science', 'deep learning'])
Moreover, we can also combine the above functions and generate a more complex visualization.
df.head(10).style.format(format_dict).background_gradient(subset=['data science', 'machine learning'], cmap='BuGn').highlight_max(color='yellow')
Learn more about styling visualizations with Pandas here: https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html
Pandas Profiling Pandas profiling is a library that generates interactive reports from our data: we can see the distribution of the data, the types of data, and possible problems it might have. It is very easy to use; with only 3 lines we can generate a report that we can send to anyone and that can be used even by someone who does not know programming.
from pandas_profiling import ProfileReport
prof = ProfileReport(df)
prof.to_file(output_file='report.html')
You can see the interactive report generated from the data used in the article, here. You can find more information about Pandas Profiling in this article. Matplotlib Matplotlib is the most basic library for visualizing data graphically. It includes many of the graphs that we can think of. Just because it is basic does not mean that it is not powerful; many of the other data visualization libraries we are going to talk about are based on it. Matplotlib’s charts are made up of two main components: the axes (the lines that delimit the area of the chart) and the figure (where we draw the axes, titles and things that come out of the area of the axes). Now let’s create the simplest graph possible:
import matplotlib.pyplot as plt
plt.plot(df['Mes'], df['data science'], label='data science') #The label parameter indicates the legend. This doesn't mean that it will be shown; we'll have to use another command that I'll explain later.
We can plot multiple variables in the same graph and thus compare them.
plt.plot(df['Mes'], df['data science'], label='data science')
plt.plot(df['Mes'], df['machine learning'], label='machine learning')
plt.plot(df['Mes'], df['deep learning'], label='deep learning')
It is not very clear which variable each color represents. We’re going to improve the chart by adding a legend and titles.
plt.plot(df['Mes'], df['data science'], label='data science')
plt.plot(df['Mes'], df['machine learning'], label='machine learning')
plt.plot(df['Mes'], df['deep learning'], label='deep learning')
plt.xlabel('Date')
plt.ylabel('Popularity')
plt.title('Popularity of AI terms by date')
plt.grid(True)
plt.legend()
If you are working with Python from the terminal or a script, after defining the graph with the functions we have written above, use plt.show(). If you’re working from a Jupyter notebook, add %matplotlib inline to the beginning of the file and run it before making the chart. We can make multiple graphics in one figure. This works very well for comparing charts or for sharing data from several types of charts easily with a single image.
fig, axes = plt.subplots(2,2)
axes[0, 0].hist(df['data science'])
axes[0, 1].scatter(df['Mes'], df['data science'])
axes[1, 0].plot(df['Mes'], df['machine learning'])
axes[1, 1].plot(df['Mes'], df['deep learning'])
We can draw the graph with different styles for the points of each variable:
plt.plot(df['Mes'], df['data science'], 'r-')
plt.plot(df['Mes'], df['data science']*2, 'bs')
plt.plot(df['Mes'], df['data science']*3, 'g^')
Now let’s see a few examples of the different graphics we can do with Matplotlib. We start with a scatterplot:
plt.scatter(df['data science'], df['machine learning'])
Example of a bar chart:
plt.bar(df['Mes'], df['machine learning'], width=20)
Example of a histogram:
plt.hist(df['deep learning'], bins=15)
We can add text to the graphic; we indicate the position of the text in the same units that we see in the graphic.
In the text, we can even add special characters following the TeX language. We can also add markers that point to a particular point on the graph.
plt.plot(df['Mes'], df['data science'], label='data science')
plt.plot(df['Mes'], df['machine learning'], label='machine learning')
plt.plot(df['Mes'], df['deep learning'], label='deep learning')
plt.xlabel('Date')
plt.ylabel('Popularity')
plt.title('Popularity of AI terms by date')
plt.grid(True)
plt.text(x='2010-01-01', y=80, s=r'$\lambda=1, r^2=0.8$') #Coordinates use the same units as the graph
plt.annotate('Notice something?', xy=('2014-01-01', 30), xytext=('2006-01-01', 50), arrowprops={'facecolor':'red', 'shrink':0.05})
Gallery of examples: In this link: https://matplotlib.org/gallery/index.html we can see examples of all types of graphics that can be done with Matplotlib. Seaborn Seaborn is a library based on Matplotlib. Basically, what it gives us are nicer graphics and functions to make complex types of graphics with just one line of code. We import the library and initialize the style of the graphics with sns.set(); without this command the graphics would still have the same style as Matplotlib. We show one of the simplest graphics, a scatterplot:
import seaborn as sns
sns.set()
sns.scatterplot(df['Mes'], df['data science'])
We can add information about more than two variables in the same graph. For this we use colors and sizes. We also make a different graph according to the value of the categorical column:
sns.relplot(x='Mes', y='deep learning', hue='data science', size='machine learning', col='categorical', data=df)
One of the most popular graphics provided by Seaborn is the heatmap. It is very common to use it to show all the correlations between variables in a dataset:
sns.heatmap(df.corr(), annot=True, fmt='.2f')
Another of the most popular is the pairplot, which shows us the relationships between all the variables.
Be careful with this function if you have a large dataset: it draws one chart for every pair of columns, showing all the data points in each, so the number of charts (and the processing time) grows quadratically with the number of columns.
sns.pairplot(df)
Now let’s do the pairplot showing the charts segmented according to the values of the categorical variable:
sns.pairplot(df, hue='categorical')
A very informative graph is the jointplot, which allows us to see a scatterplot together with a histogram of the two variables and see how they are distributed:
sns.jointplot(x='data science', y='machine learning', data=df)
Another interesting graphic is the violin plot:
sns.catplot(x='categorical', y='data science', kind='violin', data=df)
We can create multiple graphics in one image just like we did with Matplotlib:
fig, axes = plt.subplots(1, 2, sharey=True, figsize=(8, 4))
sns.scatterplot(x="Mes", y="deep learning", hue="categorical", data=df, ax=axes[0])
axes[0].set_title('Deep Learning')
sns.scatterplot(x="Mes", y="machine learning", hue="categorical", data=df, ax=axes[1])
axes[1].set_title('Machine Learning')
Gallery of examples: In this link, we can see examples of everything that can be done with Seaborn. Bokeh Bokeh is a library that allows you to generate interactive graphics. We can export them to an HTML document that we can share with anyone who has a web browser. It is a very useful library when we are interested in looking for things in the graphics and we want to be able to zoom in and move around the graphic. Or when we want to share them and give another person the chance to explore the data.
We start by importing the library and defining the file in which we will save the graph:
from bokeh.plotting import figure, output_file, save
from bokeh.layouts import gridplot #needed later to combine several plots
output_file('data_science_popularity.html')
We draw what we want and save it in the file:
p = figure(title='data science', x_axis_label='Mes', y_axis_label='data science')
p.line(df['Mes'], df['data science'], legend='popularity', line_width=2)
save(p)
You can see how the file data_science_popularity.html looks by clicking here. It’s interactive: you can move around the graphic and zoom in as you like. Adding multiple graphics to a single file:
output_file('multiple_graphs.html')
s1 = figure(width=250, plot_height=250, title='data science')
s1.circle(df['Mes'], df['data science'], size=10, color='navy', alpha=0.5)
s2 = figure(width=250, height=250, x_range=s1.x_range, y_range=s1.y_range, title='machine learning') #share both axis ranges
s2.triangle(df['Mes'], df['machine learning'], size=10, color='red', alpha=0.5)
s3 = figure(width=250, height=250, x_range=s1.x_range, title='deep learning') #share only one axis range
s3.square(df['Mes'], df['deep learning'], size=5, color='green', alpha=0.5)
p = gridplot([[s1, s2, s3]])
save(p)
You can see how the file multiple_graphs.html looks by clicking here. Gallery of examples: In this link https://docs.bokeh.org/en/latest/docs/gallery.html you can see examples of everything that can be done with Bokeh. Altair Altair, in my opinion, does not bring anything new to what we have already discussed with the other libraries, and therefore I will not talk about it in depth. I want to mention this library because maybe in their gallery of examples we can find some specific graphic that can help us. Gallery of examples: In this link you can find the gallery of examples with all you can do with Altair. Folium Folium is a library that allows us to draw maps and markers, and we can also draw our data on them. Folium lets us choose the map supplier; this determines the style and quality of the map.
In this article, for simplicity, we’re only going to look at OpenStreetMap as a map provider. Working with maps is quite complex and deserves its own article. Here we’re just going to look at the basics and draw a couple of maps with the data we have. Let’s begin with the basics: we’ll draw a simple map with nothing on it.
import folium
m1 = folium.Map(location=[41.38, 2.17], tiles='openstreetmap', zoom_start=18)
m1.save('map1.html')
We generate an interactive file for the map in which you can move and zoom as you wish. You can see it here. We can add markers to the map:
m2 = folium.Map(location=[41.38, 2.17], tiles='openstreetmap', zoom_start=16)
folium.Marker([41.38, 2.176], popup='<i>You can use whatever HTML code you want</i>', tooltip='click here').add_to(m2)
folium.Marker([41.38, 2.174], popup='<b>You can use whatever HTML code you want</b>', tooltip='dont click here').add_to(m2)
m2.save('map2.html')
You can see the interactive map file where you can click on the markers by clicking here. In the dataset presented at the beginning, we have country names and the popularity of the terms of artificial intelligence. After a quick visualization you can see that there are countries where one of these values is missing. We are going to eliminate these countries to make it easier. Then we will use Geopandas to transform the country names into coordinates that we can draw on the map.
from geopandas.tools import geocode
df2 = pd.read_csv('mapa.csv')
df2.dropna(axis=0, inplace=True)
df2['geometry'] = geocode(df2['País'], provider='nominatim')['geometry'] #It may take a while because it downloads a lot of data.
df2['Latitude'] = df2['geometry'].apply(lambda l: l.y)
df2['Longitude'] = df2['geometry'].apply(lambda l: l.x)
Now that we have the data coded in latitude and longitude, let’s represent it on the map. We’ll start with a bubble map where we’ll draw circles over the countries.
Their size will depend on the popularity of the term and their colour will be red or green depending on whether their popularity is above a certain value or not.
m3 = folium.Map(location=[39.326234,-4.838065], tiles='openstreetmap', zoom_start=3)
def color_producer(val):
    if val <= 50:
        return 'red'
    else:
        return 'green'
for i in range(0,len(df2)):
    folium.Circle(location=[df2.iloc[i]['Latitude'], df2.iloc[i]['Longitude']], radius=5000*df2.iloc[i]['data science'], color=color_producer(df2.iloc[i]['data science'])).add_to(m3)
m3.save('map3.html')
You can view the interactive map file by clicking here. Which library to use at any given time? With all this variety of libraries you may be wondering which library is best for your project. The quick answer is: the library that allows you to easily make the graphic you want. For the initial phases of a project, with pandas and pandas profiling we will make a quick visualization to understand the data. If we need to visualize more information we could use the simple graphs available in matplotlib, such as scatterplots or histograms. For the advanced phases of the project, we can search the galleries of the main libraries (Matplotlib, Seaborn, Bokeh, Altair) for the graphics that we like and that fit the project. These graphics can be used to give information in reports, make interactive reports, search for specific values, …
https://towardsdatascience.com/complete-guide-to-data-visualization-with-python-2dd74df12b5e
['Albert Sanchez Lafuente']
2020-02-29 22:09:20.754000+00:00
['Maps', 'Python', 'Data', 'Data Science', 'Data Visualization']
Every Nomadic Life Needs Roots
Not everyone grows up with a strong sense of belonging. When home isn’t a safe place for a child because your primary caregiver is also your abuser, you can’t grow roots. Instead, you dream of being anywhere but there. My one regret in life is never finding the courage to run away from home as a teenager, waiting instead until I was 17 to leave under the cover of academia. I’ve been a runaway ever since, moving countries like other people move apartments. Such is the magic of the European Union: Being a citizen of any of the 28 member states means you’re free to live, study, and work in any of the other 27. Sometimes, we also pick up languages along the way and our cultural identity expands to accommodate a new linguistic homeland. For example, I fell into Portuguese and it became part of me. Just like German had done many years before. The closest thing I have to roots is languages, cultures, and places that feel familiar because they were home at one point in my life. In 2013, I immigrated to the US, intent on dropping anchor at last. I took America to be a place where the whole world was at home, or at least that’s what it looked like on paper. However, depression felled me almost as soon as I landed. The ongoing cultural exchange that had characterized my life until immigration ground to a halt as the various parts of my identity atrophied and dropped off. Unilateral curiosity is a ravenous and lonely beast. With gusto, I embraced the lifelong learning process that is figuring America out, but there was no interest in what I had brought with me. Thus began five interminable, airtight, mostly monolingual years as the plaything of a parasite in my head.
https://asingularstory.medium.com/every-nomadic-life-needs-roots-5eb34a9a2a0
['A Singular Story']
2019-10-16 20:31:02.282000+00:00
['Self', 'Relationships', 'Life Lessons', 'Mental Health', 'Travel']
Data Humanism — A Case Study. What I learned from humanizing my…
Data humanism Data is an imperfect abstraction of the world. We blindly use this interface as if it were reality itself. This is obscurantism, and data humanists are here to switch the light on. The world is made of complicated mechanisms and interactions. Thus, the data capturing the world is also complex. But do we always consider this complexity when working with data? In Big Data projects, a piece of data is just one more line. The more lines there are, the more credible the dataset is. The line itself is no more than a drop in the ocean and does not represent anything anymore. It has no specific form, and gives us very little hook on reality. Data is a magic plasma which we feed to data visualization tools, which then recommend to us the best way of depicting it. As far as data visualization is concerned, especially dashboards in which each widget answers a specific question and gives a limited amount of information, we hope to explain complex phenomena with generic questions and formatted answers. Most of the time, data is just what we take from the world to answer our most valuable questions. We lose the sense of what data represents. Thus, its complexity is not considered. Maybe using our digital tools to find answers to generic pre-existing questions is not the only way of working with data. Let’s say I have an app which tracks how many kilometers I ran in the last month. I chose a bar chart to show the distance in km for each week of the month. It is a valid design which gives valid information. It answers the question: “How many kilometers did I run each week?”. However, it’s only valid and nothing more, because it doesn’t try to mean anything more than what it shows. Maybe one bar in the visualization corresponds to hours of physical effort in various places with various people feeling various emotions. Data humanists like Giorgia Lupi, who wrote a manifesto, say the bar chart is not the issue.
The fault is ours for being willing to show an inherently human piece of information without considering the human. To find the human in the dataset, they suggest ways to reincorporate empathy at different levels of data handling. In the next part, I will describe how this can be done, and I will try to use these ideas as guidance for making a more human running app visualization.
https://medium.com/nightingale/data-humanism-a-case-study-c16d0efef533
['Guillaume Meigniez']
2019-06-12 12:02:21.239000+00:00
['Data Visualization', 'Design', 'Data', 'D3js', 'Datahumanism']
Managing Kubernetes with Kapitan. In my first I explained a little bit…
In my first post I explained a little of the philosophy behind Kapitan and how it came to be. In this post, I will give a more pragmatic introduction so that you can easily evaluate it and see if it fits your needs. Kapitan is a tool to template files. It can be used to template things like text, documentation, scripts or yaml/json manifests. It was created to manage Kubernetes-based deployments, but it is flexible enough to be used in completely different contexts. To get started with it, you can run it using docker or by following these instructions. For this post, I have also created a Katacoda scenario to allow you to play with it and take it for a spin. This post will focus on the use of Kapitan to manage Kubernetes deployments. Assuming that you are all set and can run kapitan , let’s continue! $ kapitan --version 0.22.3 # <-- Tested with this version: might be different! Download the examples Follow the instructions to download the examples and let’s use the example in the example-3 folder. $ git clone https://github.com/ademariag/kapitan-examples --depth 1 $ cd kapitan-examples/example-3/ Let’s sail the Seven Seas! This example will show you how to work with multiple environments. It comes with 2 environments: prod-sea and dev-sea . You can run kapitan compile to verify that the compilation works $ kapitan compile Compiled prod-sea (0.43s) Compiled dev-sea (0.45s) Kapitan works by compiling the templates into their concrete form. By default, kapitan stores the compiled files in the compiled/ subfolder of the workspace.
compiled |-- dev-sea | |-- README.md | |-- cod.md | |-- manifests | | |-- cod-configmap.yml | | |-- cod-deployment.yml | | |-- namespace.yml | | |-- sardine-configmap.yml | | |-- sardine-deployment.yml | | |-- tuna-configmap.yml | | `-- tuna-deployment.yml | |-- sardine.md | |-- scripts | | |-- apply.sh | | |-- kubectl.sh | | |-- setup_cluster.sh | | `-- setup_context.sh | `-- tuna.md `-- prod-sea [cut] Kapitan compilation is an idempotent operation. If none of the inputs have changed, you can expect no changes in the compiled files. In fact, let’s verify that the git status is clean $ git status On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean So far, nothing exciting. But you can see that there are several files in that compiled subfolder. Where do they come from? Let’s nuke the directory and run compile again. $ rm -fr compiled/* # <-- I shouldn't need to warn you.. $ kapitan compile Compiled dev-sea (0.46s) Compiled prod-sea (0.46s) $ git status On branch master Your branch is up to date with 'origin/master'. nothing to commit, working tree clean As you can see, the files are immediately recreated, and because kapitan compile is an idempotent operation, git status shows no changes. So far, so good! Targets and Inventory Kapitan uses an inventory that represents the single source of truth of your deployments. The inventory is based on a python library called reclass which is a fork of the amazing library by madduck. The entrypoint for the inventory is a target. As you can see in the inventory/targets folder, at the moment there are 2 targets: dev-sea.yml and prod-sea.yml $ cat inventory/targets/dev-sea.yml classes: - projects.minikube - components.tuna - components.cod - components.sardine - features.brine - features.canned - releases.v2.0.0 - stages.development parameters: target: dev-sea owner: developers Aaaahh… YAML at last! Now we are talking! 
I strongly suggest you read more about reclass if you intend to discover its full potential. Reclass essentially merges multiple yaml fragments using hierarchical class inheritance: this way, reclass allows you to reuse parameters across multiple targets. Compare the content in prod-sea.yml and dev-sea.yml and you will see that the differences between them are minimal. The “features.brine” class that you see in the target is nothing but a map to an actual yaml file: inventory/classes/features/brine.yml $ cat inventory/classes/features/brine.yml parameters: tuna: args: - --brine Experiment 1: remove the “brine” feature Let’s see what happens when you remove a class from a target. Remove the line corresponding to the class features.brine in the target dev-sea.yml and run kapitan compile again! $ sed -i '/features.brine/d' inventory/targets/dev-sea.yml $ kapitan compile Compiled dev-sea (0.45s) Compiled prod-sea (0.47s) $ git status On branch master Your branch is up to date with 'origin/master'. Changes not staged for commit: (use "git add <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) modified: compiled/dev-sea/manifests/tuna-deployment.yml modified: inventory/targets/dev-sea.yml no changes added to commit (use "git add" and/or "git commit -a") In this experiment, we have removed from the dev-sea target a class that represents the “brine” feature.
As you can see from the result of git status , this had the side-effect of also rewriting the compiled/dev-sea/manifests/tuna-deployment.yml Diffing the file you can see that the operation results in the --brine command-line argument being removed from the args list: $ git diff compiled/dev-sea/manifests/tuna-deployment.yml [cut] @@ -14,7 +14,6 @@ spec: - args: - --verbose=True - --secret=?{ref:targets/dev-sea/tuna:ee958e1c} - - --brine - --canned image: alledm/tuna:v2.0.0 name: tuna Experiment 2: remove the “tuna” component Let’s now try to remove the “tuna” component class. Similarly to the previous case, we just need to remove the “components.tuna” line from the dev-sea target. $ sed -i '/components.tuna/d' inventory/targets/dev-sea.yml $ kapitan compile Compiled dev-sea (0.32s) Compiled prod-sea (0.42s) $ git status On branch master Your branch is up to date with 'origin/master'. Changes not staged for commit: (use "git add/rm <file>..." to update what will be committed) (use "git checkout -- <file>..." to discard changes in working directory) deleted: compiled/dev-sea/manifests/tuna-configmap.yml deleted: compiled/dev-sea/manifests/tuna-deployment.yml deleted: compiled/dev-sea/tuna.md modified: inventory/targets/dev-sea.yml no changes added to commit (use "git add" and/or "git commit -a") As you can see, removing the component causes the deletion of all the manifests and documentation associated with that component. What can we learn from these experiments? Already a lot: Changing the dev-sea target produced a change that only affected the dev-sea environment. Kapitan and git make it easy to assess which environment/target will be affected by a change. Both the “cause” (the target change) and the “effect” (the deployment change) are visible to the reviewer.
You can take a guess at what the change will do without having to actually deploy it to Kubernetes! Let’s dissect the experiment When we looked at the “brine” feature, we saw a YAML fragment. $ cat inventory/classes/features/brine.yml parameters: tuna: args: - --brine Let’s see how it all ties together. Inventory variable interpolation — quick intro Reclass allows you to reference yaml subtrees from other sections of the inventory, e.g. parameters: example: var: value other_var: ${example:var} would result in an expanded inventory: parameters: example: var: value other_var: value kapitan inventory First, we can use the kapitan inventory command to see the actual inventory being computed by Kapitan. You can look at the full inventory: $ kapitan inventory parameters: < lots of output > You can drill down to a specific target: $ kapitan inventory -t dev-sea parameters: < lots of output > In our case, you can look at a specific sub-tree of the inventory: $ kapitan inventory -t dev-sea -p parameters.tuna.args - --verbose=True - --secret=?{ref:targets/dev-sea/tuna|randomstr} - --canned What happens when we restore the “features.brine” feature? $ git checkout -- inventory/targets/dev-sea.yml $ kapitan inventory -t dev-sea -p parameters.tuna.args - --verbose=True - --secret=?{ref:targets/dev-sea/tuna|randomstr} - --brine - --canned As you can see, the yaml fragment is injected into the parameters.tuna.args yaml list. But where is this list coming from?
kapitan searchvar We can use the kapitan searchvar command to find where a specific value is defined: $ kapitan searchvar parameters.tuna.args ./inventory/classes/features/canned.yml ['--canned'] ./inventory/classes/features/brine.yml ['--brine'] ./inventory/classes/components/tuna.yml ['--verbose=${verbose}', '--secret=?{ref:targets/${target}/tuna|randomstr}'] As you can see, the parameters.tuna.args list is referred to in these 3 files. You can already guess that canned.yml won’t be much different from the brine.yml we have already opened. Let’s look at inventory/classes/components/tuna.yml instead: $ cat inventory/classes/components/tuna.yml parameters: releases: tuna: latest tuna: image: alledm/tuna:${tuna:release} release: ${releases:tuna} replicas: ${replicas} args: - --verbose=${verbose} - --secret=?{ref:targets/${target}/tuna|randomstr} kapitan: mains: - components/tuna/main.jsonnet docs: - components/tuna/docs/tuna.md This inventory class defines the tuna inventory component. And this is also the first time we see a reference to jsonnet! Pretty much everything in the inventory is up to you to define. You can organise it the way you want. We have used “components/tuna.yml” but you can organise both the inventory files and the inventory content whichever way you want! It’s no surprise that we find the “tuna” component subtree defined in this file. But I want to draw your attention to the “kapitan” section. In fact, let’s use the kapitan inventory command to see the whole “parameters.kapitan” yaml sub-structure. $ kapitan inventory -t dev-sea -p parameters.kapitan compile: - output_path: manifests output_type: yaml input_type: jsonnet input_paths: ${kapitan:mains} # See notes* - output_path: .
input_type: jinja2 input_paths: ${kapitan:docs} # See notes* - output_path: scripts input_type: jinja2 input_paths: ${kapitan:scripts} # See notes* docs: - components/docs/README.md - components/tuna/docs/tuna.md - components/cod/docs/cod.md - components/sardine/docs/sardine.md mains: - components/namespace/main.jsonnet - components/tuna/main.jsonnet - components/cod/main.jsonnet - components/sardine/main.jsonnet scripts: - components/scripts/apply.sh - components/scripts/kubectl.sh - components/scripts/setup_cluster.sh - components/scripts/setup_context.sh vars: target: dev-sea notes*: in reality, running the command would show the interpolated version of these parameters, which would be a bit confusing. I have replaced these values with the pre-interpolation references. Now we can finally understand what is happening: the components.tuna class adds 2 files to be rendered via the parameters.kapitan sub-tree. kapitan: mains: - components/tuna/main.jsonnet docs: - components/tuna/docs/tuna.md The components/tuna/main.jsonnet is a jsonnet-type file to be compiled by the directive: compile: - input_paths: # this is a reference to ${kapitan:mains} - components/namespace/main.jsonnet - components/tuna/main.jsonnet - components/cod/main.jsonnet - components/sardine/main.jsonnet input_type: jsonnet output_path: manifests output_type: yaml mains: - components/tuna/main.jsonnet - ... and the components/tuna/docs/tuna.md is a jinja2-type file to be compiled by the directive: compile: - input_paths: # this is a reference to ${kapitan:docs} - components/docs/README.md - components/tuna/docs/tuna.md - components/cod/docs/cod.md - components/sardine/docs/sardine.md input_type: jinja2 output_path: . docs: - components/tuna/docs/tuna.md - ... This concludes this post. In the upcoming post, I will talk more about the actual deployment to Kubernetes, and we will go into the details of how jsonnet and jinja2 templates work. Please leave your feedback and let me know if you have comments.
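As an appendix, the two inventory mechanics used throughout this post (deep-merging yaml fragments from classes, and ${...} variable interpolation) can be sketched in a few lines of Python. This is a simplified illustration of the idea only, not Kapitan's or reclass's actual implementation:

```python
# Simplified sketch of reclass-style class merging and ${...} interpolation.
# Illustration only: not the real reclass code.
import re

def deep_merge(base, override):
    """Recursively merge two dicts; lists are concatenated, scalars overridden."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        elif isinstance(result.get(key), list) and isinstance(value, list):
            result[key] = result[key] + value
        else:
            result[key] = value
    return result

def interpolate(value, params):
    """Replace ${a:b} references with values looked up in params."""
    def lookup(match):
        node = params
        for part in match.group(1).split(':'):
            node = node[part]
        return str(node)
    return re.sub(r'\$\{([^}]+)\}', lookup, value)

# The tuna component defines base args; the brine feature appends one more,
# just like the features.brine class did in the experiments above.
tuna_class = {'tuna': {'args': ['--verbose=${example:verbose}']}}
brine_class = {'tuna': {'args': ['--brine']}}
params = deep_merge({'example': {'verbose': 'True'}},
                    deep_merge(tuna_class, brine_class))
args = [interpolate(a, params) for a in params['tuna']['args']]
print(args)  # -> ['--verbose=True', '--brine']
```

Removing brine_class from the merge drops --brine from the expanded args, which is exactly what Experiment 1 showed at the compiled-manifest level.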
Clap to this post, star https://github.com/deepmind/kapitan and join us in the kubernetes slack channel #kapitan!
https://medium.com/kapitan-blog/introduction-to-kapitan-adb6a488cd77
['Alessandro De Maria']
2019-03-08 11:29:41.710000+00:00
['Kubernetes', 'Kapitan', 'Katacoda', 'Git']
The Best Bang for the Buck Digital Camera
Quick and easy: the EOS RP is, in my opinion, the best bang-for-the-buck digital camera that you can get right now, considering it’s still around December 2020 when you’re reading this. I actually bought this camera for its full, original MSRP in February 2019 at $1,299, which included the body, a grip, and an adapter for my EF lenses. Even then, I thought this was a stellar deal. Now, although it’s just the body, the EOS RP can often be had for sub-$1,000 and even sub-$700 if you buy it certified refurbished on Canon’s website. Keeping in mind its price, here is what the EOS RP has going for it and what it lacks. But despite its shortcomings, its value per dollar is the best. The Great Things: It’s Full Frame In recent years, full-frame has really taken off in popularity. There’s an appeal to a full-size 35mm sensor, which is double the size of an APS-C sensor with its 1.5x crop. Full frame cameras tend to have cleaner images since the pixels are generally larger. Full frame also gives lenses their entire field of view, and since you can frame closer to your subject, full frame cameras tend to be seen as having a shallower depth of field. A shallower depth of field contributes to creamier backgrounds and bokeh, which is the dream of many portrait photographers like myself. It’s Mirrorless The EOS RP benefits from all the great aspects that mirrorless cameras have brought about. What really brought me around to the EOS RP as opposed to its DSLR counterpart, the 6D Mark II, was the RP’s focusing. Not only can the EOS RP’s focusing points be moved anywhere on the frame, but the RP provides stellar subject tracking and even eye autofocus. The EOS RP has made taking photographs easier than ever, given that my only worries are the composition and exposure. The autofocus is stellar and can lock on almost anywhere in the frame.
Hitting the right exposure is also easy on mirrorless cameras like the RP, since the electronic viewfinder provides exposure previews, so you know how your image is going to turn out before you have even taken the photo. This is a feature that is not present in the typical optical viewfinders of DSLRs. Like many other mirrorless cameras, the EOS RP is also incredibly small. To date, it is still Canon’s smallest full frame camera and likely one of the smallest full frame cameras in the entire camera market. This makes bringing around the RP a piece of cake, especially when paired with small lenses. The RP has proved to be both an excellent travel camera and an excellent street photography camera thanks to its small and compact size. An Amazing Selection of Lenses Canon falls short in a lot of areas when it comes to its mirrorless and DSLR lineups. Where it does not fall short is its lineup of lenses. Being part of the new RF system means that the RP has access to the amazing new lenses Canon has been producing. Luckily, Canon has really started focusing on more affordable lenses for the RF lens system and has recently completed a trinity of more inexpensive primes: the 35mm, 50mm, and 85mm. And if you have an arsenal of EF lenses from Canon like myself, then the RP is still an amazing choice. If you can get your hands on an adapter (which unfortunately has been pretty sparse these days), you have access to the entire lineup of EF lenses, which is highly touted as the greatest selection of glass ever made. In my observation, my EF lenses actually work better on the RP, since the RP’s fantastic on-sensor, dual pixel autofocus means lenses almost never miss focus, even the older ones from the 1990s.
Furthermore, since the camera is mirrorless and therefore the flange distance is very short, adapting old vintage lenses like Pentax K, M42, or Canon FD is amazingly easy and inexpensive, given that the adapters are more or less hollow tubes with the correct mounts on either side. The Perfect Camera for Video on Social Media I also believe that the EOS RP is a capable video camera for a lot of lighter projects. The EOS RP shoots pretty amazing 1080p at up to 60fps. With great autofocus and a full frame sensor in a small body, I really believe it is one of the best cameras for making content specifically for social media. The Canon colors in video are always great, and the camera is extremely easy and intuitive to use. But there are some video features I feel the camera is lacking for more professional-oriented work. My video reel, which was shot almost completely using the EOS RP The Not-So-Great Things: Frames Per Second The EOS RP maxes out, when shooting stills, at 5 FPS, which is not very fast at all. It’s never been an issue when shooting portraits, but for sports, this camera is very lacking. I have indeed gotten a lot of great action shots on the camera, but there is a lot left to be desired. I will say, though, that at least with the stellar autofocus, out-of-focus photos are few and far between, so at least I know the images are tack sharp. Lack of Dual Card Slots This one is pushing it a little for a camera that hovers around $1,000, but I really do wish the RP came with dual card slots, just for the peace of mind on a photoshoot. It’s not expected at this price point, but it would definitely have been much appreciated. You can, however, back up full-sized RAW or JPEG images when tethered to your phone, and this has proven useful. However, there is nothing like having two card slots, since it removes any extra steps.
https://medium.com/photo-paradox/the-best-bang-for-the-buck-digital-camera-photo-paradox-9bfb33cfa6a0
['Paulo Makalinao']
2020-12-07 18:44:29.892000+00:00
['Art', 'Photography', 'Gear', 'Technology', 'Creativity']
8 tips to motivate young learners
By Maria Blackburn Maybe your kid would rather nap than do their homework. Perhaps your child’s room is a disaster and though they promise daily to clean up, they never do. Or, let’s say, your youngest is more interested in playing Super Smash Bros. than doing just about anything else. We all struggle to find motivation sometimes. But for many kids, even when we’re not in the midst of a global pandemic, finding the drive to do what they’re supposed to do is no easy task. And for parents, this can be maddening. “People have been worried about undermotivation in children long before Covid-19 — it’s not a new thing,” said Michelle Muratori, Ph.D., a senior counselor at the Johns Hopkins Center for Talented Youth. Parents can help kids overcome their lack of drive to do certain tasks, but first “they need to dig deeper and understand what’s going on in their child that makes them appear to have a lack of motivation so they can respond appropriately.” Here’s her advice: Ditch the labels. Go beyond unhelpful labels like unmotivated or lazy, and identify the behavior. “When a parent talks about their kid having a motivation issue, I always want to know what it looks like and feels like,” Muratori said. Are they sleeping too much? Always on their phone? “It’s helpful to put it in behavioral terms since everyone’s definition looks a little bit different.” Consider underlying issues. Not all behaviors that look the same have the same underlying causes. For example, maybe all your child wants to do is play video games. Recognize that playing video games for hours may look passive to you, but may actually be a social activity for your child if they are playing online with friends. Or gaming may be self-soothing after a long day of class, Muratori said. In other cases, excessive gaming could be a sign that they are slipping into a depression or feeling lonely or anxious, and may need help. Know that it’s not an easy time to be a kid. 
“Kids are going through a lot right now,” Muratori said. Many aren’t going to school in person or engaging in regular extracurricular activities, and that lack of structure or new schedule may be taking its toll. Plus, not spending time with friends in the way they used to is challenging and can cause them to act in ways they don’t usually act, like sleeping a lot. “Kids’ routines have been upended,” she said. “Many have experienced a lack of predictability. That can be anxiety provoking and destabilizing.” Recognize that bright kids may face extra challenges. “Gifted kids tend to be perceptive and process information deeply,” she said. “With all of the traumatizing things happening in the world and in politics, can you really blame some kids for thinking, ‘Why even try?’” Muratori added that if your child tends to be a perfectionist and has a strong need for control, this is an especially trying time because things are far from perfect or stable right now. “It may be understandable that a child is feeling discouraged and weighed down, and although you shouldn’t let them wallow in despair, you may need to give them some grace.” Be an active listener. “It’s important to listen to your child and validate their feelings and have some empathy and compassion toward them,” Muratori said. Without being judgmental, parents can point out what behavior they’re noticing and have a conversation with their child about it. “Really hear them out, without being defensive or interrupting,” Muratori said. “Sometimes it can go a long way for a child to say, ‘Look, I am really struggling here.’ Or, ‘I hate that I can’t see my friends.’ Or, ‘I’m so angry that we’re in this situation.’” Help create solutions. “Once a child feels heard, they might be more open to what the parent has to say,” Muratori said.
“Maybe they need your help.” For example, if your child is repeatedly oversleeping through the start of school, they may need help creating more structure around bedtime so they get enough sleep or could use a new wake-up routine to make sure they meet their morning commitments. Or if they are gaming or online too much because they say they don’t have anything else to do, help them create a daily schedule that leaves some time for video games, as well as other activities. “Kids need to be collaborators in this process,” she said. “We all need to have some sense of control over our lives.” Watch for signs of distress. Many therapists report that they are extremely busy right now. “A lot of people are in distress right now, not just adults, but kids,” Muratori said. Parents need to be on the lookout for signs of anxiety or depression in their children and talk to them about what they’re seeing. Don’t be afraid to seek a professional’s expertise. If you don’t know where to start, ask your pediatrician. Don’t forget the joy. Building resilience in kids that helps them get through difficult times is important. One way parents can help kids manage their stress is to make a concerted effort to bring joy into their lives. Everybody needs joy, Muratori said. “Maybe parents fear that their child won’t know the limits to doing what makes them happy, but if you negotiate some ground rules that work for everyone, you’ll be on the right track.”
https://medium.com/brightnow/8-tips-to-motivate-young-learners-60279a25c839
['Johns Hopkins Center For Talented Youth']
2020-10-14 12:32:38.387000+00:00
['Gifted And Talented', 'Education', 'Motivation', 'Learning', 'Gifted']
You Have More Time Than You Think
Let’s do a little math to find out how much free time we really have. There are 24 hours in a day, 168 hours in a week, and 8,760 hours in a year. Some of those hours are spoken for. If you need eight hours of sleep a night, that’s 2,920 of the 8,760. If you work 40 hours a week for 49 weeks (so, excluding holidays and two weeks’ vacation), you’re working 1,960 hours. Subtracting all this takes us down to 3,880 waking, nonworking hours. Of course, people have vastly different levels of caregiving or chore responsibilities, and some people work more or fewer hours per day. But we could imagine that just about everyone has somewhere between 1,000 and 2,000 discretionary hours per year. (The American Time Use Survey pegs the population average at 5.19 hours of leisure per day, or 1,894.35 hours per year. The busiest segment — working mothers of children under age six — tends to have about 3.15 hours of discretionary time per day, or 1,149.75 per year.) When you put it that way, it’s not such a small number, is it? You can do a lot in 1,000 to 2,000 hours. Here’s how to make those hours work for you I know everyone feels busy and like they don’t have enough time to tackle those big goals. But take an honest look at how you spend your time. A few minutes here and there spent scrolling around on the phone doesn’t seem like much, nor does an evening routine of Hallmark movie devouring. But if it’s a regular habit, it adds up. Two hours a day of screen time is 730 hours in the year — or 73% of a busy person’s discretionary budget. That is necessarily going to preclude other things. When you understand how much time might reasonably be available, you can make smarter resolutions that stand a greater chance of happening. Let’s look at some goals people tend to set: learning a new language, training for a marathon, and writing a novel.
According to the State Department’s foreign service language training program, English speakers can reach general professional fluency in related languages such as French, Spanish, or Danish in 600 to 750 hours. Training for a marathon requires about 10 hours a week for 16 weeks, assuming you do one three-to-four hour long run each week, plus three other one-to-two hour runs and some cross-training. That’s 160 hours total. People who participate in National Novel Writing Month each November — writing a 50,000-word novel in 30 days — often devote about three hours a day to this pursuit. That means someone could crank out a very rough first draft of a book in about 90 hours — even if editing would more than double that. Now, of course all these activities take time to start up and stop, and logistical planning to fit in. Most people aren’t going to be able to do their weekly long run from 8 p.m. to 11 p.m. on a Tuesday night, even if those hours are technically free. But, theoretically, even a very busy person could learn a foreign language to proficiency in those 1,000 discretionary hours. A very busy person could train for a marathon and write a draft of a novel. However, this busy person could not learn a foreign language, train for three marathons, and write a book. Big goals require focus. Look at the year ahead, and think about how you want to spend all those hours.
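The arithmetic above is simple enough to check in a few lines; the numbers are the article's own assumptions (eight hours of sleep per night, a 40-hour week over 49 working weeks, two hours of daily screen time):

```python
# The time-budget math from the article, step by step
HOURS_PER_YEAR = 24 * 365            # 8,760 hours in a year
sleep = 8 * 365                      # 2,920 hours of sleep per year
work = 40 * 49                       # 1,960 working hours per year
discretionary = HOURS_PER_YEAR - sleep - work
print(discretionary)                 # -> 3880 waking, nonworking hours

# Two hours of daily screen time against a busy person's 1,000-hour budget
screen_time = 2 * 365                # 730 hours per year
print(round(screen_time / 1000 * 100))  # -> 73 (percent of the budget)
```

Swap in your own sleep, work, and habit numbers to see what your discretionary budget actually looks like.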
https://forge.medium.com/what-will-you-do-with-your-1-000-hours-76e9f2c0c960
['Laura Vanderkam']
2020-12-17 18:15:43.811000+00:00
['Self Improvement', 'Productivity', 'Personal Growth', 'Goals', 'Time Management']
Need Some Chaos in User Profile Algorithms
Containing Chaos by Michael Lang Because of how current user profile modeling works, we are getting more and more siloed into our biases. It affects what we watch, what we listen to, what we read, etc. Though it’s human nature to side with things we agree with, we should experience a little chaos in terms of what we expose ourselves to. Otherwise we will stay stuck in a constant feedback loop that doesn’t really help us experience different things and grow, whether we agree with those things or not. This is how current “User Profile” algorithms work (parts in blue). We need to add the part in orange, or at least have the option. Music I enjoy listening to music from The Killers, Snow Patrol, One Republic, Coldplay — they are considered Alternative Rock. I use Spotify, and when I listen to them it decides, based on my play history, that that’s my “taste”. Now when I use their Discover Weekly feature (their machine-driven suggestions), it mostly feeds me Alternative Rock music. All it’s doing is querying what a lot of users are listening to that I haven’t, that fits my “taste”, and building me a playlist. Usually this is great: I get to discover a lot of good bands and songs I wouldn’t have discovered otherwise. But it is also bad, because Spotify has now limited my exposure to mostly my “taste”. Unless I start listening to some Backstreet Boys, NSYNC, or One Direction, Boy Band music is not popping up anytime soon in my Discover Weekly. Not ideal, because I shouldn’t have to alter my habits to then alter the algorithms that a company like Spotify relies on to feed me new music. Spotify should expose me to good music from genres I haven’t listened to much, or at all. The challenge is not to send me music I already listen to; the challenge is to send me music I haven’t listened to but might like. News This is probably the most important area where chaos is needed. Most people get their news from social media.
It’s not ideal, but we mostly also happen to be friends with people, online and offline, with whom we share similar ideologies. This creates a big problem, as we get stuck in an echo chamber. If you are friends with people with similar ideologies, and you share and like news with similar ideologies, you are in an echo chamber that constantly validates your confirmation bias. Facebook, Twitter, and even YouTube deliver you content that fits your “User Profile” this way. So if you are a liberal you will get fed liberal news; conservative, conservative news. Animal lover? Lots of cute videos from The Dodo. Now, what if our timelines showed us news from the opposite side from time to time? Just enough to challenge us to at least hear the other voice. I always wanted Facebook to test this idea: to have a slider that moves from very liberal to very conservative, and based on what is selected, our timelines would show the corresponding news and posts. But I also believe platforms like Facebook should build into their current algorithms some news intentionally meant to challenge our biases, things that don’t necessarily fit our “User Profile”. Video The same problem exists on platforms like Netflix and YouTube. What you watch dictates what gets recommended to you, which then leads to what you watch: a feedback loop. I will be writing another post related to Netflix discovery sometime soon, hopefully, but essentially the idea is to recommend me content I wouldn’t necessarily find otherwise. Just because I enjoy murder mysteries, don’t think that’s my only “taste”. I mean, seriously, my friends who have seen my account probably think I am a psycho. Or just because I watched some episodes of Gossip Girl (to see lovely Blake Lively), please don’t think I enjoy Young Adult shows. Suggest content that we haven’t seen or that doesn’t fit our current “taste”, but is good and we should probably give a try. Help us expand our “User Profile” by continuously adding some chaos to the loop.
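The "chaos in the loop" being asked for here corresponds to a standard exploration technique in recommender systems: with some small probability, serve items from outside the user's profile. A minimal sketch, using hypothetical item lists rather than any platform's actual algorithm:

```python
import random

def recommend(profile_matches, outside_items, n=5, chaos_rate=0.2, seed=None):
    """Fill most slots from the user's profile, but reserve a fraction of
    slots for items outside it (the 'orange' part of the diagram above)."""
    rng = random.Random(seed)
    n_chaos = max(1, int(n * chaos_rate))          # always at least one wildcard
    picks = rng.sample(profile_matches, n - n_chaos)
    picks += rng.sample(outside_items, n_chaos)    # the injected chaos
    rng.shuffle(picks)
    return picks

# Hypothetical catalogs echoing the article's Spotify example
alt_rock = ['The Killers', 'Snow Patrol', 'OneRepublic', 'Coldplay', 'Keane', 'Travis']
boy_bands = ['Backstreet Boys', 'NSYNC', 'One Direction']
playlist = recommend(alt_rock, boy_bands, n=5, chaos_rate=0.2, seed=42)
```

Tuning chaos_rate is exactly the slider idea: 0 reproduces today's pure-profile feeds, while higher values trade relevance for exposure to the unfamiliar.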
https://medium.com/an-attempt-at-writing/need-some-chaos-in-user-profile-algorithms-f8262b58ccae
['Razeeb Mahmood']
2018-07-25 06:58:02.720000+00:00
['Netflix', 'Spotify', 'Facebook', 'Algorithms', 'Tech']
The Laid-Back Person’s Guide to Navigating the Coronavirus
How can I put my chill attitude in service to others? I’ve always joked that I’m the person you want in a crisis. I’ve been through a lot in my personal life, so there isn’t much that can shock or destabilize me. I was primed for interpersonal crisis from a young age, so, well, things being crazy just feels kind of normal to me. It’s the boring everyday stuff I tend to struggle with. As the spread of the coronavirus intensified over the last few weeks, it became increasingly clear that we were heading toward a public health crisis, far-reaching preventive measures, or both. Yesterday, the Premier of Québec declared a health emergency and announced a slew of restrictions and closures, including schools, universities, and daycares, for at least the next two weeks. As someone who teaches university and has two young children, this impacts me directly. I checked in with myself for any signs of hidden stress or worry. Nope, I felt fine. Social distancing feels like a rather anti-climactic sort of crisis. I understand that these preventive measures are being put in place to ease the load on the health care system (aka “flatten the curve”) and protect those most at risk. Even if the three of us are at low risk for complications, I, for one, am quite fine with my family not contracting COVID-19. We have better things to do with our bodies and time than getting sick. As a result, I am respecting social distancing guidelines and not at all stressed about the situation. I see these extra days at home as a respite from what was becoming an overly busy spring for me, and precious time to spend with my son and daughter. I’m welcoming this time to chill and catch up on some chores. A quick glance at my social media feed, however, was enough to make me realize that not everyone was reacting like me.
It seems that for many people I know, the social distancing measures and the possibility of getting sick with COVID-19 are freaking them the heck out. In view of this, I asked myself: How can I put my chill attitude in service of others? In response to my own query, here is my Laid-Back Person’s Guide to Navigating the Coronavirus: Don’t mock people for their reaction My Facebook feed seems to be more or less equally divided between posts from people desperately looking for supplies like pasta or sharing photos of their grocery store loot versus posts from people mocking this apocalyptic buying frenzy or showing detailed mathematical calculations determining exactly how many rolls of toilet paper a family of four would normally require for a two-week period of social distancing (apparently 16). Even though some of these posts (from both sides) are admittedly funny, I’ve decided to simply not engage. The closest to this kind of situation we Montrealers have experienced is the Ice Storm of 1998 (yes, capitalized), which saw the city paralyzed for a week under a treacherous sheet of ice, widespread power outages, and a real lack of basic supplies. The long-term effects of the Ice Storm are even part of a psychological study. I have a lot of sympathy for folks trying to get a handle on an unprecedented situation where they have little control other than what they can line their shelves with. This “social distancing” situation is new. A term I had never heard until last week is now common parlance. People are adapting the best they can. I can be kind about this. Be proactive with changing plans This semester I am teaching four undergraduate classes with over 200 students under my charge. If there is anything I’ve learned in my two years of teaching, it’s that many (many, many) of my students struggle with anxiety.
I knew that Friday’s announcement that universities were closing for two weeks (but were still expected to complete the semester) was going to freak out a considerable number of students. And sure enough, within minutes the worried emails started trickling into my inbox. As soon as I could, I sent my students a detailed — and I’d like to believe reassuring — email (entitled “Our plan for the next two weeks”) with instructions for alternative ways to submit the assignments due within the next two weeks, simple instructions to download Zoom, and a provisional plan for the rest of the semester should the shutdown continue past March 30th. Other than a few thank-yous and a couple of logistical questions, the emails completely ceased. I’m assuming that my students know where we are heading. Next on my list is to reach out to any of my facilitation clients who haven’t already contacted me to cancel and/or reschedule events, to offer my support in thinking through alternatives. I can use my level-headedness to make life easier for others who might be feeling overwhelmed, whether because of the anxiety this situation is provoking or the sheer number of things they now have to rearrange. Check on how friends and family are managing This afternoon, I called my elderly aunt to see how she was coping. She is a fiercely independent woman who lives in a retirement complex, complete with an on-site grocery store and pharmacy. I was pretty sure she’d refuse any offers of help (she did), given that her building is brilliantly equipped to deal with this kind of situation. However, I did want to see how she was doing emotionally. I placed the call mentally prepared to honour any anxiety she might be dealing with. She was just fine, and we had a nice chat about what the apocalypse might look like one day. Later, I’ll be texting a few friends to see how they are coping, and tomorrow I’ll be calling a few more.
In addition to the discomfort with social distancing and coronavirus fears, many people I know rely on contracts, gigs, and shift work to pay their bills, and cancelling so many services and events is causing massive financial anxiety as well. Since I’m feeling steady, I can hold space for others and allow them to talk it out until they can steady themselves. Offer to help out One of my closest friends is self-isolating after returning from a trip to see her guy in the US. She is symptom-free and almost certainly not a carrier of COVID-19. However, she is responsibly following social distancing guidelines. She also lives alone. It took me all of 17 seconds to text her today and ask if she needed anything. She was almost out of coffee (gasp!) and wanted some fresh bread from the delicious bakery next to my place. The bakery was out of bread (a regular occurrence at the best of times — it’s really good), so I picked up a few croissants instead, as well as her coffee beans, and power-walked the dozen blocks to her place. She was grateful for the Rapunzel-like social interaction, and I learned that throwing a bag of croissants onto a second-floor balcony is nigh impossible. She’ll have tasty crumbs with her fresh coffee tomorrow morning. I plan on checking in with her every day. I also accepted an invitation to join a local mutual aid Facebook group where people can post requests for, or offers of, help. Spend money at a local business I am viscerally aware of the fragility of our interdependence. All of these sudden changes will have an economic ripple effect. One of my only worries in this whole situation is the potential negative financial impact on already marginalized individuals and just-getting-by small businesses. After two of my favourite local businesses shut down in the last couple of months, I’ve upped my awareness of where I spend my dollars.
I’ve been vastly reducing my online shopping and visits to big box stores in favour of small shops where I can interact with real humans. Today — following all hygienic and social distancing guidelines — I spent money at the corner bakery (croissants), the café down the street (coffee beans for my buddy and a latte for me), and a locally-owned grocery store (fruits and vegetables for my almost-bare fridge). Over the next couple of weeks (and months) I plan to be extra, extra diligent about keeping my dollars local and my neighbourhood shops in business. Schedule self-care Especially in these few days while my son and daughter are with their father, my time home alone feels expansive and vague — kinda like that week between Christmas and New Year’s. I can easily get caught up in reaching out to other people, blasting through my grading, cleaning my apartment, or scrolling mindlessly through Facebook or Twitter. With my gym closed and my daily routine disrupted, I am prone to forget to take care of myself. My solution has been to create a to-do list for myself with pre-determined amounts of time for any given activity (this is called time-boxing). In addition to time dedicated to my household chores, work emails, and grading, today’s list included a one-hour walk, a journaling session, and an audiobook break. If I want to remain chill (and in service to others) I’ve gotta make sure to take good care of myself too.
https://medium.com/age-of-awareness/the-laid-back-persons-guide-to-navigating-the-coronavirus-13461bbe5f5a
['Elizabeth Katherine']
2020-12-11 18:47:23.249000+00:00
['Community', 'Helping Others', 'Self', 'Relationships', 'Coronavirus']
Why The Greatest Teams Succeed — The Culture Code by Daniel Coyle
Introduction: When Two Plus Two Equals Ten CULTURE: from the Latin cultus, which means CARE. The chapter starts with the spaghetti competition story: Peter Skillman challenged 4-person groups to build the tallest structure out of uncooked spaghetti, string, tape, and a marshmallow. The kindergartners won, the business students were second, the engineers were dead last. SKILL 1 — BUILD SAFETY Chapter 1: The Good Apples A study by Will Felps (University of New South Wales): a researcher plant named Nick is assigned to be a monkey wrench, trying to make experimental groups work badly. But one group was resistant to his poison because it had a member who kept defusing his comments. Chapter 2: The Billion-Dollar Day When Nothing Happened Bill Gross invented internet advertising. He owned Overture, an LA company that was competing to build a search engine. But as we all know, Google won. In part, that was because Larry Page posted a note about ads to a wall (“these ads suck”), an employee named Jeff Dean saw it, and, long story short, Google created AdWords and became famous. Belonging cues: If the author gave you a tricky puzzle, then later gave you a note with (not useful) tips from a previous puzzle solver who “wanted to share a tip with you,” you’d actually be motivated to work harder. You’re also more likely to lend your phone to someone who says “I’m sorry about the rain, can I borrow your phone?” than to someone who asks “can I borrow your phone?” out of the blue. The point is, there must be an unmistakable signal that you are safe to connect with people here. Small signals can have big effects, but they can’t be given only once. You have to establish a narrative, a relationship. Belonging needs to be continually refreshed and reinforced. 3 models of startups (from a study by sociologists Baron and Hannan): Star model: hire the best people. Professional model: build the group around skill sets. Commitment model: focus on developing shared values and strong emotional bonds. This was the most successful model.
Chapter 3: The Christmas Truce, the One-Hour Experiment, and the Missileers Why was there a Christmas truce in 1914? Military scholars tell us it was because World War I marked the historical intersection of modern weapons and medieval strategy. But in truth, it was mostly due to the mud. The German and British soldiers were engaged in trench warfare so close to each other that they could hear (and applaud) each other’s songs. Weeks of interactions before Christmas created “bonds of safety, identity, and trust” that paved the way for the Christmas truce. People could smell “the other side’s” cooking, hear their voices, etc. To end the fraternizing, generals on both sides rotated troops and destroyed the sensation of belonging. The next Christmas, there was no truce. Another story concerns India’s Wipro call center. In the 2000s, Wipro had high turnover, and workers couldn’t articulate why they wanted to leave. Basically, they lacked connection to the group. To solve this, Wipro gave new workers the standard training plus an additional hour focused NOT on the company but on the trainees, asking questions like “what’s unique about you that leads to your best performance and greatest happiness at work?” and then gave them a sweatshirt with the company logo and their name. This one-hour intervention worked incredibly well, even after trainees forgot the training. The point is to constantly give a stream of belonging cues: individualized, future-oriented cues, personal questions, a sweater with their name on it, etc. Belonging cues answer basic questions: Are we connected? The missileers at Minot Air Force Base did horribly as a team because they weren’t. Do we have a shared future? The missileer role was created in response to the Cold War, but after it ended, their mission no longer existed and the work was monotonous with no way out. Are we safe? The missileers were hounded by superiors who would fire them at the slightest infraction.
Chapter 4: How to Build Belonging The author: the greatest NBA coach is Gregg Popovich, because his team, the Spurs, constantly “perform the thousand little unselfish behaviors…that put the team’s interest above their own.” Marcin Gortat: “[Playing against them]…was like listening to Mozart.” Something makes the players on Popovich’s team unselfish. But what? Some elements: Popovich communicates and connects closely. He talks about life, brings in news reports and asks people their opinions, gets into people’s personal space, and does it all with love and truth (and some yelling). Popovich once spent 4 days hanging out and traveling with Spurs star Tim Duncan, talking about everything but basketball, before Duncan decided to join the Spurs, because Popovich wanted to see what Duncan was like. (Was he tough, humble, unselfish?) That created a “high trust, no bullshit” connection as a role model for other players. Popovich asked direct, personal, big-picture questions about things other than basketball, and created conversations during meals. Successful cultures are mostly NOT lighthearted places. They’re focused on solving tough problems together, and have many moments of “uncomfortable truth-telling” (like Page’s “these ads suck”). Stanford researchers found that one sentence of feedback helped students improve their essay-writing skills considerably: “I’m giving you these comments because I have very high expectations and I know that you can reach them.” This tells kids: You’re part of our group. Our group is special; we have high standards. I believe you can reach those standards. Aka: this is a safe place to give effort.
Popovich’s communication created 3 types of belonging cues: personal, close-up (body language, behavior); performance feedback (relentless criticism, aka yelling, when warranted); and big-picture perspective (conversations on life, politics, and history: life is greater than basketball). Once, when the Spurs lost a game they should have won, Popovich had them gather at a restaurant to encourage the players. No speeches, just a ton of small, intimate conversations. He also doesn’t use tech, but always communicates in person, up close. Chapter 5: How to Design for Belonging Tony Hsieh of Zappos wanted to “build an atmosphere of ‘fun and weirdness.’” He started the Downtown Project in Las Vegas, acting like a human social app, connecting with everyone and connecting others. Hsieh’s goal is to increase “collisions” — serendipitous personal encounters. “When an idea becomes part of a language, it becomes part of the default way of thinking.” — Tony Hsieh Thomas Allen, an MIT professor, studied successful engineering firms: “clusters of high communicators” drive successful projects, and it all boils down to distance between desks. Simple visual contact is extremely important. He plotted frequency of interaction against distance and came up with the Allen Curve (related to the Dunbar number of 150): interaction rises steeply as distance shrinks. Both basically mean that humans are designed to focus on a relatively small number of people within a finite distance. He found that we’re more likely to contact (even digitally) people who are physically close to us. “I never say very much; I don’t make any big pitch. I just let them experience this place and wait for the moment to be right.” Chapter 6: Ideas for Action Building safety is an improvisational skill. It’s about reading the situation and reacting quickly. Things you can do:
- Overcommunicate your listening: lean forward, nod, watch faces, keep a steady stream of affirmations like “Yeah, uh huh, gotcha.”
- Avoid interruptions: take turns.
- Be open about your faults/vulnerability: especially if you’re a leader. Actively invite input; say things like “I could be wrong, what do you think?”
- Embrace the bearer of bad news: don’t just tolerate, embrace. Make people feel safe to share tough feedback.
- Preview future connection: paint a vision of a great future. Ex: a coach would tell young ballplayers, “See that seat you’re sitting in? [A great player] sat there 3 years ago.”
- Overdo gratitude: say thank you A LOT. Popovich thanks each star player for letting him coach them. It’s not about the thanks, but about affirming the relationship. The chef at The French Laundry and Per Se thanks the dishwashers.
- Hire painstakingly: Zappos even offers to pay trainees to leave after training, so only the people they want stay.
- Throw out bad apples: know what bad-apple behaviors are and have low tolerance for them. New Zealand All Blacks = “no dickheads.”
- Create safe, collision-rich spaces: great groups care about design. Create spaces that maximize collisions.
- Make sure everyone has a voice: use simple mechanisms like “no meeting can end without everyone sharing something.”
- Pick up trash: literally. John Wooden picked up trash in the locker room; Ray Kroc of McDonald’s did it too. Leaders do the menial work in a kind of “muscular humility.”
- Capitalize on threshold moments: when someone enters a group, that’s a crucial moment, more important than others. Make use of the first day.
- Avoid sandwich feedback: giving praise-critique-praise actually confuses people and makes them focus on one or the other. Separate the processes.
- Embrace fun: laughter is a fundamental sign of connection/safety. Ex: Toyota’s andon is a cord any employee can pull to stop the entire assembly line, giving power and trust to the workers and creating belonging.
https://medium.com/be-a-brilliant-writer/why-the-greatest-teams-succeed-the-culture-code-by-daniel-coyle-c2ab7f517291
['Sarah Cy']
2020-08-20 15:21:01.159000+00:00
['Book', 'Inspiration', 'Love', 'Life Lessons', 'Writing']
Being a Child of God, Not of The World
Lately, I’ve felt very chaotic and spiritually unfulfilled. I’ve felt much more tied down to the problems of the world without keeping God first in my mind and heart, and I’ve been struggling with my faith amid all the challenges of the world lately: the pandemic and the chaos of the election. As a teacher doing virtual learning, balancing a Master’s degree, and attending to my side hustle of writing and editing, I’ve felt very worn down no matter how much rest I’m getting. Because I feel so connected to the world and everything going on, am I less of a Christian right now? I think of my Zoom church services, where I feel less connected and increasingly disengaged, to the point where I can barely pay attention at times. I feel lost. I feel like my faith has always been a center and a constant, but in these times, I’m struggling to think of what I should do and what life should look like. “Do not love the world or the things in the world. If anyone loves the world, the love of the Father is not in him.” (1 John 2:15, ESV). But what does “the world” mean in 1 John 2:15? According to John Piper at Desiring God, the verse starts with a command not to love the world, and later in the verse gives an incentive not to love it. According to Piper, as Christians, we should not love the world because “you can’t love the world and God at the same time.” “Love for the world pushes out love for God, and love for God pushes out love for the world,” Piper says. The world would be a master above God, so to love the world would mean to put the world above God. In 1 John 2:16 (ESV), John tells us: “For all that is in the world — the desires of the flesh and the desires of the eyes and pride of life — is not from the Father but is from the world.” According to Piper, in this verse, we have to not only say we love God but put God at the center and internalize our love for God.
Piper goes on to say that a “love for the world can’t coexist with love for God.” It is in this verse that John Piper defines what the world is: what we desire in our flesh, what we desire with our eyes, and what we take pride in. We find validation in all kinds of things, but finding our validation in faith and God is the most important part of being a Christian. Lastly, in 1 John 2:17 (ESV), John completes his section on not loving the world, and says: “And the world is passing away along with its desires, but whoever does the will of God abides forever.” Thinking of the world like a house, Piper makes the analogy that no one builds a house on a sinking ship; for Christians, the world is a sinking ship, and Piper stresses not to set your heart on the world because doing so is “only asking for heartache and misery in the end.” The lusts of the world are passing away too, like money, reputation, and pride. Instead, John tells us that the validation and incentive are in the will of the Father. If you love the Father, you will do the will of God. Love for the world and love for God cannot coexist. Piper explains that loving the world means you don’t love God, and that if you love the world, you perish with the world. Loving God instead of the world means you will live with God forever. “According to 1 John 2:15, if your love for God is cool this morning, it’s because love for the world has begun to take over your heart and choke your love for God,” Piper says. Maybe that speaks to me right now because I, perhaps, am using everything going on in the world as an excuse to subconsciously put the world above God. Faith, then, is the most important thing as a Christian. But there is a faith that trusts God and a faith that loves God — and saving faith means you can’t have one without the other. There is a lapse in my faith if I trust God but still love the world above God, if that makes sense.
And I know a lot of people are struggling in this day and age. Loving God, as John urges us in 2:15–17, is something we can say, but not something that’s easy to feel, especially in times of extreme chaos and turmoil. How to apply 1 John 2:15–17 John Piper does not mince words — he offers “your love has grown cool and weak” as one explanation for our not loving God. To regain a love for God, we have to immerse ourselves in the Holy Spirit and read God’s word. When love for the world overtakes our love for God, it’s time to renew our passion and faith. Simply put, loving the world and loving God can’t coexist. Of course, it’s not that our worldly desires don’t matter. We need food. We need work, and we need sleep and entertainment. But at the end of the day, all these worldly desires should be for God. In Colossians 3:17 (ESV), Paul writes: “And whatever you do, in word or deed, do everything in the name of the Lord Jesus, giving thanks to God the Father through him.” That means every desire we have, as Christians, is for God. Seeking food is for God. Seeking exercise is for God. Seeking a spouse is for God, and seeking a job is for God. In everything we desire, we should seek God. Whatever we confront in the world on a day-to-day basis, we should seek for God — and that means all the challenges before us in the craziness of 2020. For me, that means I’m working so hard for God. But whether work goes well or doesn’t go well, I’m reminded that God is more important than work. It’s an ongoing process, a back-and-forth exercise that we strengthen through reading Scripture and through prayer. Israel, after all, means wrestling with God — because God makes us wrestle to receive his blessing.
https://medium.com/koinonia/being-a-child-of-god-not-of-the-world-525a2c1dbe36
['Ryan Fan']
2020-10-30 11:40:39.417000+00:00
['Self', 'Scripture', 'Spirituality', 'Coronavirus', 'Religion']
Unforeseen Side Effects from NaNoWriMo: When the Storymaking Can’t Be Stopped
National Novel Writing Month throws down a mighty challenge to writers everywhere: channel story into a 50,000-word novel within thirty days. We respond as best we can — and then get to clean up the consequences of neglected families, homes, jobs, and dogs that needed lots more attention and walking. It’s rough going, but definitely worth it for the glory of story. Cleanup is the story of December for us repeat offenders, grateful for the holiday opportunities to share the love. Then there are the small side effects, as reliable and terrible as the obvious large ones. A particularly nasty and persistent one is the inability to turn off the storymaking. Even when we are far from the keyboard, the notebook, the writing desk, we are constantly weaving meaning from wisps of dead grass, playing with ideas, losing track of the conversation. This state can be highly productive, energizing, and exciting. Words erupt onto the page, compelling drafts are vomited, worthy ideas are splayed into scenes, chapters, subplots. There’s no need to worry about hitting 50,000 words by November 30; we’ll hit that point and go far, far beyond it by November 20. The joy is real, the giddy laughter a little scary. A lightbulb flares and goes out when you flick on the switch. It’s ghosts. You knew there were ghosts in the house. It’s the former owner, the one you’re pretty sure died in your bedroom, only no one would ever admit it was true. And he is not happy. This lightbulb incident is just the first in an ever-escalating horror movie brought to life. Is your will up to date? Fasten your seatbelt, boys, it’s going to be a rocky night. Your computer won’t let you blog. You can draft like crazy in a word processing program, but you can’t get anywhere close to posting it to a blog, any blog, anywhere. That cute purchase on a dodgy site? It’s plunged you into the dark web and you’re the next one to die.
It starts with not being able to post unless you go to a brightly lit public place with fresh coffee. Then there are the bumps in the night (oh, wait, no, that’s the ghost again) and your bank accounts being emptied, and no wonder you can’t get anyone to come to the house to fix the leak in the roof. They know. The dark web has warned them off. You belong to it. And you’re going to die — and you can’t do anything about it. Then the unthinkable happens. You change the lightbulb in the lamp again and it works. The replacement bulb was defective. The unthinkable keeps on happening. You learn that there was a problem with the service to your neighborhood, so that’s why you couldn’t access the internet. Today, all is well, the internet is its hyperactive, perky little self. However, you are stuck with the stories that erupted in your brain and that you were foolish enough to share with others because you were so scared. Now you’re the one that everyone laughs at and pities at dinner. April is not the cruelest month. The cruelest month is November, at least for those of us with stunningly well-developed storymaking abilities. It’s a gift, not a curse, I tell myself, but I’m not convinced.
https://medium.com/nanowrimo/unforeseen-side-effects-from-nanowrimo-when-the-storymaking-cant-be-stopped-1a2cc94c40a8
['Louise Foerster']
2017-11-06 21:13:43.535000+00:00
['Story', 'Humor', 'Writing', 'NaNoWriMo', 'Writers']
How to Use Python Datetimes Correctly
A datetime is a Python object that represents a single point in time, with attributes down to the year, month, day, hour, minute, second, and microsecond. This is very useful when building our programs. The datetime module provides classes to manipulate dates and times in both simple and complex ways. While date and time arithmetic is supported, the focus of the implementation is on efficient attribute extraction for output formatting and manipulation. You can download a Jupyter notebook with all these steps here. Let’s import the datetime class In [1]: from datetime import datetime Creating a date using year, month, and day (plus time fields) as arguments: datetime(year, month, day, hour, minute, second) In [2]: birthday = datetime(1994, 2, 15, 4, 25, 12) Now that we’ve created the object and assigned it to the variable called birthday , we can access each component just like this In [3]: birthday Out[3]: datetime.datetime(1994, 2, 15, 4, 25, 12) In [4]: birthday.year Out[4]: 1994 In [5]: birthday.month Out[5]: 2 In [6]: birthday.day Out[6]: 15 As you can see, it’s very easy to create a date using this module. Now we can do other interesting things, like: In [7]: birthday.weekday() Out[7]: 1 This means that the birthday fell on a Tuesday, because weekday() numbers the days 0 (Monday) through 6 (Sunday), indexed like a list (beginning with zero). But what if I want to know the current datetime ? In that case, we can use datetime.now() . Go ahead and write this in the next cell In [8]: datetime.now() Out[8]: datetime.datetime(2020, 11, 17, 11, 32, 11, 992169) Ok, that’s interesting. What if you run that command again? Go ahead and see the difference In [9]: datetime.now() Out[9]: datetime.datetime(2020, 11, 17, 11, 33, 36, 433919) As you can see, the output is now different, because time has passed. Great! Now you may ask, how do I calculate the time from one date to another?
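Rather than memorizing the 0-6 weekday convention, the standard library's calendar module can translate the index into a name. A minimal sketch (there is also isoweekday(), which numbers Monday as 1 through Sunday as 7):

```python
import calendar
from datetime import datetime

birthday = datetime(1994, 2, 15, 4, 25, 12)

# weekday() returns 0 for Monday through 6 for Sunday;
# calendar.day_name maps that index to a readable name.
print(birthday.weekday())                     # 1
print(calendar.day_name[birthday.weekday()])  # Tuesday
```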
Subtracting one date from another yields a duration; let’s see how this kind of time tracking works In [10]: # time tracking operation datetime(2018, 1, 1) - datetime(2017, 1, 1) Out[10]: datetime.timedelta(365) In [11]: datetime(2018, 1, 1) - datetime(2017, 1, 12) Out[11]: datetime.timedelta(354) You can see how easy it is: we can run arithmetic operations between dates, which is great! But what if you now want to know how much time has passed from a given date to today, at this very moment? How do you think that can be done? Think about it for a moment! In [12]: datetime.now() - datetime(2020, 1, 1) Out[12]: datetime.timedelta(321, 41994, 571469) Excellent! We use the .now() method and subtract the date we want to measure from. Easy! Using strptime This method helps us transform dates given as strings into datetime objects, which is quite useful! Let’s see it in action: In [13]: parsed_date = datetime.strptime('Nov 15, 2020', '%b %d, %Y') In [14]: parsed_date Out[14]: datetime.datetime(2020, 11, 15, 0, 0) In [15]: type(parsed_date) Out[15]: datetime.datetime As we see, we have passed two parameters to the strptime method: the first is the date string, and the second is the "directive" (format codes) describing the conversion. All the available "directives" are listed in the format-codes table of the Python documentation. We already have our date parsed into the parsed_date variable; now let’s start calling the methods it contains. In [16]: parsed_date.month Out[16]: 11 In [28]: parsed_date.year Out[28]: 2020 Using strftime All right, now let’s do the opposite operation: passing a datetime object as a parameter to the strftime function and converting it to a string. We do it like this: In [37]: date_string = datetime.strftime(datetime.now(), '%b %d, %Y') In [38]: date_string Out[38]: 'Nov 17, 2020' As you can see, we pass datetime.now() as the first argument and then the directives for the format in which we want the output. Really simple!
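Putting strptime and date arithmetic together, here is a small helper that parses two date strings and returns the whole days between them (days_between is my own hypothetical name, not part of the module):

```python
from datetime import datetime

def days_between(start_str, end_str, fmt='%b %d, %Y'):
    """Parse two date strings with strptime and return whole days between them."""
    start = datetime.strptime(start_str, fmt)
    end = datetime.strptime(end_str, fmt)
    # Subtracting datetimes gives a timedelta; .days drops the sub-day part.
    return (end - start).days

print(days_between('Jan 01, 2020', 'Nov 15, 2020'))  # 319
```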
Time object A time object represents a (local) time of day, independent of any particular day, and subject to adjustment through a tzinfo object. All arguments are optional. tzinfo can be None , or an instance of a tzinfo subclass. The rest of the arguments can be integers, in the following ranges: Image by Author If an argument is given outside these ranges, a ValueError is raised. All default values are 0 except tzinfo , which defaults to None . Time to play with this object! In [40]: from datetime import time In [42]: my_time = time(hour=12, minute=34, second=56, microsecond=123456) In [43]: my_time Out[43]: datetime.time(12, 34, 56, 123456) As we can see, it gives us a time object as a result. However, its default representation is not very “friendly”. With the time object we can use isoformat In [44]: my_time.isoformat(timespec='minutes') Out[44]: '12:34' In [45]: my_time.isoformat(timespec='microseconds') Out[45]: '12:34:56.123456' In [46]: my_time.isoformat(timespec='auto') Out[46]: '12:34:56.123456' In [47]: my_time.isoformat() Out[47]: '12:34:56.123456' We can see that there are several ISO formats for displaying the time. The default timespec is auto , which is used when we don’t pass the parameter explicitly. These are the possible formats to use Image by Author timedelta object A timedelta object represents a duration, the difference between two dates or times, which is quite useful! Let's look at how it works. First we need to import timedelta and then we can call its different operations In [48]: from datetime import timedelta In [49]: year = timedelta(days=365) In [50]: year Out[50]: datetime.timedelta(365) In [51]: year.total_seconds() Out[51]: 31536000.0 In [56]: ten_years = 10 * year In [58]: ten_years.total_seconds() Out[58]: 315360000.0 We’ve passed the parameter days=365 to timedelta and then performed two operations. One of them returns the total number of seconds in 365 days. The other one creates 10 years. 
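The timedelta arithmetic above can be condensed into one small sketch (the dates are just examples, reusing the values from the cells above):

```python
from datetime import datetime, timedelta

year = timedelta(days=365)

# timedelta supports multiplication and comparison
decade = 10 * year
assert decade.total_seconds() == 10 * 365 * 24 * 60 * 60

# Subtracting two datetimes yields a timedelta
gap = datetime(2018, 1, 1) - datetime(2017, 1, 1)
print(gap.days)  # → 365
```

Note that total_seconds() returns a float, while the .days, .seconds, and .microseconds attributes give the duration broken into integer components.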
Let’s make another calculation In [59]: another_year = timedelta(weeks=40, days=84, hours=23, minutes=50, seconds=600) # adds up to 365 days In [60]: another_year Out[60]: datetime.timedelta(365) In [61]: year == another_year Out[61]: True We have now done a boolean operation, asking whether one timedelta equals another. We get back True . Naive & Aware objects There are two types of date and time objects: “naive” and “aware”. An “aware” object has sufficient knowledge of the applicable algorithmic and political time settings, such as time zone and daylight saving information, to be able to position itself in relation to other “aware” objects. An “aware” object represents a specific moment in time that is not open to interpretation (ignoring relativity). A “naive” object does not contain enough information to place itself unambiguously in relation to other date/time objects. Whether a naive object represents Coordinated Universal Time (UTC), local time, or the time of some other time zone depends purely on the program, just as it depends on the program whether a given number represents meters, miles, or mass. Naive objects are easy to understand and work with, at the cost of ignoring some aspects of reality. This distinction matters when working with time zones and seasonal time changes (summer vs. winter time), such as the American zones EST and EDT. Supporting time zones at deeper levels of detail depends on the application. The rules for time adjustment worldwide are more political than rational, change frequently, and there is no standard suitable for every application other than UTC. Objects of these types are immutable. Objects of the date type are always naive. I hope you enjoyed this reading! You can follow me on twitter or linkedin Read these other posts I have written for Towards Data Science
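The naive/aware distinction discussed above is easiest to see in code. A minimal sketch using the standard library's timezone class (the timestamps are illustrative):

```python
from datetime import datetime, timezone

# A naive datetime: no tzinfo attached
naive = datetime(2020, 11, 17, 11, 32)
print(naive.tzinfo)  # → None

# An aware datetime: pinned to UTC via tzinfo
aware = datetime(2020, 11, 17, 11, 32, tzinfo=timezone.utc)
print(aware.tzinfo)  # → UTC

# Naive and aware datetimes cannot be mixed in arithmetic
# or comparisons; doing so raises a TypeError
try:
    naive - aware
except TypeError:
    print("can't mix naive and aware datetimes")
```

This is why a naive datetime is "not placeable": Python refuses to guess which zone it belongs to, so any operation that would require that guess fails loudly.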
https://towardsdatascience.com/how-to-use-python-datetimes-correctly-43505e8701ae
['Daniel Morales']
2020-12-14 19:24:47.575000+00:00
['Machine Learning', 'Data Science', 'Python Programming', 'Python', 'Python3']
The Psychology of the Sore Loser
Photo by Clay Banks on Unsplash When you think your worth is determined by outcomes, you’re asking for trouble. When you equate getting something good with being a good person, you’ve made a fundamental error. I don’t know how many people are aware but there was an election the other day. It was a pretty big deal. Anyways, the incumbent lost and he hasn’t conceded to his successor. It’s pretty embarrassing but it reminds me of the games nights I’ve had where one or two friends would either lose their minds when they couldn’t win (to the point where one of them tried to cheat) or quickly move on to something else while making excuses for not winning. The irony of the whole thing is that no one cares if one doesn’t win. Of course there are factors that led to one person winning and everyone else at the table losing. It’s a game. Only one person can win. Also, trying to cheat your way to victory isn’t victory. It’s an admission that you don’t know how to play, you don’t care to learn how to play, you don’t respect the other players, you don’t respect yourself and you’re taking this far too seriously. For most, a competition, game or an election is taken at face value. For someone to win, somebody has to lose. But to the sore loser, it’s not that simple. Losing is indicative of something that they refuse to accept. They think it means that they are beneath someone else. Now, this isn’t the actual truth, of course. It’s just a belief that was probably forged from an episode of bad parenting. When we do something good, valuable or sought-after, as children we develop the notion that we are good because we do something good. Unfortunately, the flip side is true. When we do something bad, we develop the notion that we are inherently bad. And if you know anything about humans, being bad, undesirable, wrong or faulty is just about the worst thing you could ever be. Our enemies are bad. Failure is bad. The final season of Game of Thrones is bad. 
We can’t bear to think of ourselves among the things we despise. And yet, despite this being an issue that all of humanity faces, some of us are gracious in defeat. So what’s the excuse the sore loser has for that? Narcissism and Superiority Lifetime Narcissistic Personality Disorder (NPD) has a prevalence rate of 6.2% in the United States. I don’t have statistics for how many sore losers there are but I’m willing to go out on a limb and say that they are more than just 6.2% of the population. This is why it’s important to make the demarcation between NPD and narcissistic behavior. For one to have NPD they have to meet a number of criteria over a certain time period. But for one to be narcissistic, they just need to be self-absorbed with a sense of superiority over others at the moment. It isn’t a trait that shows itself in every aspect of one’s life. A sore loser could be narcissistic in the moment that they lose a game and then return to normal. But it is more likely that if you are dealing with someone who has NPD, they are probably a sore loser. What these two manifestations of narcissism have in common is the sense of superiority over others. If a narcissist suffers a loss to someone they believe is better than they are, they aren’t going to act like a sore loser. If anything they may feel proud of themselves for competing against a great talent and let you know all about it. But with the belief that they are superior to everyone else, they are going to unwittingly have expectations of themselves. Expectations In his novel The Way of Kings, Brandon Sanderson wrote, “Expectations were like fine pottery. The harder you held them, the more likely they were to crack.” For the sore loser who believes his own hype, he unwittingly brings suffering onto himself. He has an image to maintain and a hierarchy that props up his sense of self. Expectations are nothing more than “premeditated resentments.” No one ever gets what they expect. 
You may reach your destination but it never matches what you thought, hoped or expected the journey to be. And why would it? You aren’t in control of other people, the environment or your subconscious tendencies and motivations. And if you can’t even control that, good luck to ya. So when it comes to the sore loser not getting what they want, it’s no different than when they do. They were not in control of either outcome. But when they try to seize control of the outcome because their belief in themselves to triumph over all didn’t do the job, they must use force. Trauma To the gracious loser, this must seem like much ado about nothing. The sore loser could be so much happier if they just enjoyed the process, did the best they could and awaited the outcome, knowing that winning or losing doesn’t mean anything about them as a person. But that’s just it. It means something to them. Perhaps it was a parent who devalued them when they failed. Perhaps they had negative role models who were sore losers themselves. Perhaps their grandiose sense of self is really just a smokescreen for low self-esteem. Whatever the culprit, the sore loser has a wound that gets triggered every time they lose. And as a result, they must soothe themselves in some way. Their sense of self-worth is derived from external gratification. To them, winning is not only positive, winning “means” that you are better than the competition. In reality, winning means that you were better than the competition today at the specific task you had to fulfill. It has no bearing on you as an individual. It’s totally fair to be disappointed at a loss. After all, the objective of the game, competition or election is to win. You pour all your resources into winning. You do your best to think positively and shrewdly. But sometimes, things don’t work out. Blaming people for your loss makes no sense whatsoever. Blaming yourself doesn’t make much sense either. 
There are factors beyond your control that can sway things in various ways. It’s a chaotic world and we’re all just living in it. So to all the sore losers out there, give yourself a break. Embrace your losses and congratulate those who won. But most importantly, seek help for the trauma that is making you think that you’re better than another human being and the wounds that lead you to embarrass yourselves and hurt others. Life is a lot more fun when you don’t have to prove your worth.
https://alchemisjah.medium.com/the-psychology-of-the-sore-loser-63e335d8f6e5
['Jason Henry']
2020-11-10 23:23:17.576000+00:00
['Self Improvement', 'Personal Development', 'Self', 'Psychology', 'Politics']
Erica Steps Away from the Corned Beef
BY ERICA BROWN A fucking corned beef sandwich. It’s 2011. I’m at a St. Patrick’s Day parade with my family, and we’re supposed to be having a good time, but he wanted potatoes and carrots and cabbage. We’ve been together for five years, and I’m supposed to know these things. Of course. I can’t read his fucking mind. He always expects me to read his mind. Everything I do, I take his reaction into account. I find myself tediously and meticulously putting things where he wants them, as I know that if a laundry basket or the vacuum cleaner is even a quarter of an inch out of place, the reaction from a slight stumble will quickly turn from an annoyance into a rage. Today we’re supposed to be enjoying one another. As if we do that anymore at all. I always thought that arguments opened up the potential for positively learning about one another. His parents are here watching the boys, and I’m not even enjoying myself. “We need to be a family,” he says. The boys and I…we ARE a family. Every day, I learn a little more about these two small humans that I was blessed to create. “It’s for the children,” he says. He has got to be kidding me. Is it for the children when he comes home after work with a case of Keystone Light in tow? I know the monster that is born out of the consumption of that godforsaken shitty beer. He tells me I’m selfish, unreasonable, irresponsible. I go to work five days a week, carry this 12 credit schedule at school, and I‘ve made sure that my child is close to me, so I enroll him in the childcare center on campus. We travel to school together. Talk. Figure one another out. Slowly. Out of love. Non-toxic love. Now, I am standing before a man who is berating me over a corned beef sandwich. I know that look. I know those bloodshot eyes. He’s empty now. When he’s been drinking, he becomes a magnified version of that creature who towers over me, surpassing my height significantly by 12 ominous inches. 
At home, when he is angry at me, he follows me, thundering footsteps at an increasing pace. I am agile. Small and quick. Usually, I can move fast enough to find a hiding spot where by the time he has checked all of the crevices and spaces that I can fit into, he’s given up and won’t bother to continue the attempt to find me. He’s still talking about the fucking corn beef sandwich, and I can’t stop thinking about all the times I’ve been here before. Like the time the baby was crying during the night. Our newborn needs so much of my attention. I am so tired. I love being a mother. I THRIVE off of being a mother, but a break would be so nice. So, I pump. I make sure that everything’s ready. Exhausted, I crawl into bed; baby just laid down in his crib, toddler, soundly asleep. I’m so excited I’m going to sleep! I’ve forgotten how a full night’s sleep feels. A few hours in, as always, there is the wailing. The language of a 1-month-old. I’m sure he is saying, “Momma…momma come and get me! I’m awake! I need you.” I roll over. I ask him, “Would you mind feeding the small human? There is milk in the freezer.” He grumbles about how he has to get up for work. Appalled, I ask him again, “Please, I am so tired.” “What do YOU do all day,” he questions. Hurt and sleep-deprived, I defend my daily routine. A quick motion — he is up. Aggravated. Furious. Oh no. What have I done? Why did I even bother to ask? I spring off the bed and run for the bedroom doorway. I quickly stumble, tired, down the hallway to my toddler’s room. The baby is still crying. He won’t hurt me in my three-year-old’s room. Never. There I feel safe. I stand there, relieved. But I’m mistaken. The man's arm, 12 ominous inches taller than me, snaps forward, his hand catching me by my neck. He lifts and lifts. I rise. I dangle in fear with the realization that even our children have no impact on his reasoning. I fall silently to the floor. I wonder what we’re teaching our boys. 
When they grow up, will their partners race away for safety to shelter themselves from this conditioned anger? Will my children find entertainment in manipulation? Are they being desensitized to domestic abuse? Will they become entangled with a sense of self deeply rooted in others’ control and abuse? When did I allow myself to become so quiet? Somewhere along this timeline of our relationship, I slowly ease into this false sense of comfort. The fear that rises in me more and more frequently is gradually forgotten when I’m offered the mere consolation of expensive material adornments. I walk past the fist-shaped hole in the wall now as if it’s a part of the everyday decor in every household in America. The designer purses seem to get larger in correlation to the length of distance I slide, shoved, smacking against the hardwood floors. — Shame. I’ve been quieted by the amount of shame that I feel every day for the life that I have allowed myself to fall so depressingly and fully into. Look at him, still babbling. How long can he go on about a corn beef sandwich? All of these people are staring. What must they think of me? Allowing such disrespect. Disrespect. Dis-re — Wait. Respect. Self. Respect. <Takes deep breath> Self-respect. I don’t have to be quiet. Did I use to be strong? I can be strong. Sure, I’m not sure how I got here, but…he would never expect me to walk away. Calmly. Say the words. <Takes a deep breath> I look at him and say, “I’m done.” Ok. Turn away. Jesus, I am so tense. Look for a napkin…look for a napkin. Don’t turn around, and he won’t see your tears. <Takes a deep breath>. Self. Respect. At this moment, my heart finally finds its voice. “Move,” it says. “He won’t follow you.” I take a step. “Now another…that’s it. Keep going” Self-respect. I never turn around. I walk for seven miles, and I never turn around. I know that behind me is the ghost of a woman who hangs her head in shame- she is quieted by fear. 
Step by step, I distance myself from the person I have come to believe I am. Step by step, I move gradually closer to becoming who I know I am, the person I’ve always respected.
https://medium.com/black-stories-matter/erica-steps-away-from-the-corned-beef-43e5f05a2754
['Tmi Project']
2020-12-09 20:46:02.231000+00:00
['Domestic Violence', 'BlackLivesMatter', 'Race', 'Domestic Abuse', 'Storytelling']
FPL Gameweek 14: Abandon Ship
I’ve only got one thing to say. Worst week ever. First, let’s recap the week. We saw a return to goals galore. Both Manchester United and Liverpool inflicted heavy damage on their opponents after they themselves got humiliated by much weaker opponents. It was odd to see that it was the very same two teams that bounced back this week. What a strange season it has been. Anyone who held onto United or Liverpool players was in luck, with 13 goals from these two teams definitely carrying a lot of points. Unfortunately, we were not so lucky. Gameweek 14 Results Gameweek 14 FPL Data Team; source: FPL Gameweek 14 FPL Data Team Points; source: FPL As far as points go, this isn’t our worst result. We’ve had worse weeks, our lowest tally coming in Gameweek 1 at 39 points. Yet, when we look at our averages, this one has to hurt the most. In fact, we scored 44 points last week as well. However, we were only 2 points below the average then. This week, it’s a whopping 16. Gotta pat ourselves on the back for failing so miserably. Or have we failed? One can also consider that with the league average being so high, our team selections are outliers compared to the players the rest of the FPL managers seem to hold. Hence, the huge differential. We may be putting out a team that can only perform on some occasions. It could also be just a skewed result from the United and Liverpool games. We will have to wait and see how the trend progresses. Vardy continues to impress and Kane had a poor outing. We were worried last week with many of our players facing each other, and the results show that. Bringing in John Stones seemed to be a great idea. Unfortunately, we kept him on the bench. It was unclear whether Guardiola would play him or not, so we had to pick someone who was a sure starter. This was a mistake, I admit. I could have put him in anyway and relied on an automatic sub. We will have to make riskier selections from now on. 
Many of our stellar performers faltered this week, whether against tougher opposition or through just a tiny blip. It was Grealish who bounced back after a few bad weeks with a better performance. One positive is that we finally have a clean sheet again from our goalkeeper Mendy. Our goalkeepers have been struggling lately and this is nice to see despite the results.
https://medium.com/fantasy-tactics-and-football-analytics/fpl-gameweek-14-abandon-ship-6d498e16788c
['Tom Thomas']
2020-12-25 11:48:50.200000+00:00
['Soccer Analytics', 'Fantasy Premier League', 'Premier League', 'FPL', 'Soccer']
2019 Was the Year Data Visualization Hit the Mainstream
2019 Was the Year Data Visualization Hit the Mainstream From our president to our clothes to our books, this year showed that Dataviz has become an integral part of modern culture There’s always something going on in the field of data visualization but until recently it was only something that people in the field noticed. To the outside world, beyond perhaps an occasional Amazing Map®, Tufte workshop or funny pie chart, these trends are invisible. Not so in 2019, where data visualization featured prominently in major news stories and key players in the field created work that didn’t just do well on Dataviz Twitter but all over. 2019 saw the United States President amend a data visualization product with a sharpie. That should have been enough to make 2019 special, but the year also saw the introduction of a data visualization-focused fashion line, a touching book that uses data visualization to express some of the anxieties and feelings we all struggle with, as well as the creation of the first holistic professional society focused on data visualization. The First Data Visualization President Illustration by Surasti Puri When Donald Trump was elected, he framed and hung in the White House a map of the United States that implied he was elected by an enormous landslide. But as every frustrated data visualization expert pointed out, this map neglected to indicate that more people didn’t vote for Trump than did. The United States has had data-driven presidents before — Thomas Jefferson famously charted the crops the slaves of his plantation planted every year. But the United States has never had a president that cared more about the appearance of data than the data itself, until now. The difference between Thomas Jefferson (the first data scientist president) and Donald Trump (the first data visualization president) is that Jefferson wanted to understand the data and used data visualization to do so whereas Trump was more concerned with the representation of the data. 
The critical thing to recognize is that it was the rhetorical value of the above geospatial data visualization and not its underlying dataset that was important to Trump. But it wasn’t the electoral map that cemented Donald Trump’s status as the first data visualization president because, critically, the map was representing the data (it was just doing so in a way that was misleading). It was this year in September when, confronted with an official map of the range of the effects of Hurricane Dorian — one that contradicted his claims about what states might be affected — he decided to draw onto the map an additional bit of range. The data didn’t support it and wasn’t even uncertain enough to allow it to be drawn with digital tools, but Trump knew if he could just change the visualization, that was all that mattered. This has been taken by many pundits as a sign that we live in a post-fact era but that’s short-sighted. Instead, public debates about the presentation of data increase the prominence of data visualization as a meaningful act. The previous way of looking at it, that you were just “showing the data” is naive and misleading and leads to products like Trump’s “Impeach This” map. The naive perspective that data visualization is just a final step to help people see the data ignores the importance of subtle steps like showing uncertainty as well as the necessity to design a product that engages the audience (something Trump does far better than many data visualization practitioners). Trump is a sign of this, not a cause, and as we move forward in our practice we need to be more aware of how, for many people, the visualization is the data. Reflective Data Visualization Illustration by Surasti Puri Just as there have been presidents before Trump who have shown charts, there have been books before Michelle Rial’s Am I Overthinking This that are filled with charts. 
But Michelle’s book, unlike the typical data visualization coffee table book, is not a collection of charts selected for their historical or design merits. Instead, she’s created a series of charts by hand in her inimitable style that highlights the contradictions, fears, and complexities of modern life in a way that text simply can’t. Michelle’s book is filled with playful charts like these that represent weighty subjects in an amusing but still analytical manner. Data visualization as a way of exploring and expressing one’s feelings and traits has always been present in the margins of the field. Data-driven badges are always popular and Dear Data provided a nice model for thinking about one’s life and connecting with each other in a systematic way. Likewise, XKCD has often produced data visualization content. The Internet is littered with jokes like the pie chart made of a real pie, which have always proved popular with audiences. But Michelle’s book signals a fundamental shift toward the act of creating data visualization as a standalone way of imputing meaning, not as a gimmick or a one-off but fully engaged as the primary method for dealing with an increasingly data-intruded life. That’s probably why Michelle’s work is constantly shared without credit. The Unstoppable Giorgia Lupi Illustration by Surasti Puri Our profession suffers from an implicit stratification that bubbles up into a disdain for those who use one method of doing data visualization over the other. Practitioners who use one tool think those who use another aren’t as good. Coders think people who rely on tools are less capable. People who write in one language or with one library think the others are worse. As a result, we see an overemphasis on learning technical skills over design. Except for Giorgia Lupi, who has throughout her career eschewed this entire line of reasoning to forge a path that touches on traditional data visualization, data art, design and data humanism. 
This year, Giorgia has tacked on two more significant achievements: She’s started a fashion line and joined Pentagram, the world’s largest independent design consultancy.
https://medium.com/nightingale/2019-was-the-year-data-visualization-hit-the-mainstream-d97685856ec
['Elijah Meeks']
2019-12-30 12:01:03.055000+00:00
['Data Visualization', 'Design', 'Data Humanism', 'Data', 'Dvsintro']
geovisualization?
Gapminder Imagine the world as a street. All houses are lined up by income, the poor living to the left and the rich to the right…
https://medium.com/tosseto-info/geovisualization-d5d706fe1a65
['Toshikazu Seto']
2017-02-07 14:56:50.629000+00:00
['API', '未分類', 'Visualization']
JFrog Artifactory and Oracle Cloud
Artifacts Overview These days companies are releasing software faster than ever in order to stay competitive in the market. Asking distributed teams to produce updates and features at a high rate presents new operational challenges that create an opportunity for system vulnerabilities to arise. This complexity can also cause testing problems and generally slow down the pace of software releases. One way to prevent these issues is to use a universal repository, rather than letting developers choose their own software versions and pull artifacts from arbitrary sources. One such tool, JFrog Artifactory, is useful for sharing libraries and third-party components at scale. Artifactory centralizes control, storage, and management of binaries throughout the software release cycle. It is easy to integrate with a variety of build tools and coding languages and provides users with the ability to tag libraries with searchable metadata for speed, security, and quality. Artifactory Pro provides all these features, while Artifactory’s Enterprise edition can be installed for high availability and includes Xray for package security scanning as well as Mission Control for realtime, centralized control of your environment. JFrog Artifactory Virtual Machine Installation Refer to the official Artifactory installation guide for a more detailed walkthrough of the Artifactory installation process. I chose to install Artifactory on Oracle Cloud Infrastructure, a modern, second-generation cloud which was built and optimized specifically to help enterprises run their most demanding workloads securely. For this test, I used the Oracle VM.Standard1.8 instance shape, which more than met the Artifactory system requirements. I provisioned this instance with a public IP address. I chose an x64-bit Canonical-Ubuntu-18.04 image because this is the most up-to-date Ubuntu version supported by Artifactory. The JFrog Systems Requirements page includes additional information about technical pre-requisites. 
Artifactory virtual machine After provisioning a virtual machine, make sure you have JDK 8 or above installed and your JAVA_HOME environment variable set to the JDK installation. To do so, I updated my list of packages with sudo apt update and installed the JDK with apt install default-jdk . Next, I updated my $JAVA_HOME with vi ~/.bashrc and added the following to the end of the file: export JAVA_HOME=/usr/lib/jvm/default-java export PATH=$JAVA_HOME/bin:$PATH I chose to install Artifactory as a Docker container. Alternatively, you can install Artifactory manually or install Artifactory in multiple containers using Docker Compose. I installed Docker onto the virtual machine by following the official documentation for installing Docker Engine on Ubuntu. After doing so, I pulled the Artifactory Pro Docker image docker pull docker.bintray.io/jfrog/artifactory-pro:latest and then ran the image in a container docker run --name artifactory -d -p 8081:8081 docker.bintray.io/jfrog/artifactory-pro:latest . To persist data, you create a Docker named volume and pass it to the container. By default, a named volume is a local directory under /var/lib/docker/volumes/<name> , but it can be set to work with other locations. For more details, please refer to the Docker documentation for Docker Volumes. The example below creates a Docker named volume called artifactory_data and mounts it to the Artifactory container under /var/opt/jfrog/artifactory: $ docker volume create --name artifactory_data $ docker run --name artifactory-pro -d -v artifactory_data:/var/opt/jfrog/artifactory -p 8081:8081 docker.bintray.io/jfrog/artifactory-pro:latest In this case, even if the container is stopped and removed, the volume persists and can be attached to a new running container using the above docker run command. 
Run docker ps to verify the container is running: $ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c5146646a363 docker.bintray.io/jfrog/artifactory-pro:latest "/entrypoint-artifac…" 2 days ago Up 2 days 0.0.0.0:8081->8081/tcp artifactory Artifactory serves traffic over port 8081. After the installation is complete you can connect to the Artifactory console by navigating to http://[IP Address]:8081/artifactory/webapp/#/home JFrog Artifactory dashboard Kubernetes Cluster Installation In addition to installing JFrog Artifactory onto an Oracle Cloud virtual machine, you also have the option to install it onto Oracle Container Engine for Kubernetes, a standard and conformant, developer-friendly, container-native, and enterprise-ready managed Kubernetes service for running highly available clusters with the control, security, and predictable performance of Oracle’s Cloud Infrastructure. Refer to the Installing on Kubernetes section of the JFrog documentation for more information regarding installing Artifactory on Kubernetes clusters. The installation is accomplished by means of Helm, a package management tool for Kubernetes applications. There are a number of options available on the Helm Hub, including the default Artifactory chart and a chart designed for high availability. This chart will deploy Artifactory-Pro/Artifactory-Edge (or OSS/CE if a custom image is set), along with a PostgreSQL database using the stable/postgresql chart, and an Nginx server. It also provides instructions for swapping out the underlying database, deploying small/medium/large installations, exposing Artifactory with Ingress, and other optional modifications. This guide will assume you have a Kubernetes cluster with a version greater than 1.8 and Helm installed. The first step to installing JFrog helm charts is to add the JFrog helm repository to your helm client. 
To do so, run: helm repo add jfrog https://charts.jfrog.io To install the chart with the release name artifactory : helm install --name artifactory jfrog/artifactory When the installation is complete you will see a prompt to get the Artifactory URL and the default credentials: 1. Get the Artifactory URL by running these commands: NOTE: It may take a few minutes for the LoadBalancer IP to be available. You can watch the status of the service by running 'kubectl get svc -w artifactory-nginx' export SERVICE_IP=$(kubectl get svc --namespace default artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}') echo http://$SERVICE_IP/ 2. Open Artifactory in your browser Default credentials for Artifactory: user: admin password: password The Artifactory chart documentation on the Helm Hub provides additional information for updating your deployment, specifying resource limits, and configuring storage. Next Steps After installing Artifactory on a virtual machine or Kubernetes cluster, the next step is configuration. Follow the steps to configure Artifactory as you would on any platform. You can get a 30-day free trial of Artifactory and a free trial of Oracle Cloud Infrastructure. Additional information can be found on the Artifactory Getting Started page.
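Whichever route you take, the service can need a minute or two before the console on port 8081 starts answering, so it can help to poll before moving on to configuration. A small Python sketch of such a readiness check; the helper name, host, and retry counts are my own choices, not part of the official tooling:

```python
import time
from urllib.request import urlopen


def wait_for(url, probe=None, attempts=30, delay=2.0):
    """Poll `url` until it answers, or give up after `attempts` tries.

    `probe` is injectable for testing; by default it performs a real
    HTTP GET and reports whether the service returned 200 OK.
    """
    if probe is None:
        def probe(u):
            try:
                return urlopen(u, timeout=5).status == 200
            except OSError:
                return False
    for _ in range(attempts):
        if probe(url):
            return True
        time.sleep(delay)
    return False


# Example against the console URL used above (blocks until Artifactory is up):
# wait_for('http://localhost:8081/artifactory/webapp/#/home')
```

Injecting the probe keeps the retry logic testable without a running server, and the same helper works for the VM and Kubernetes installs alike.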
https://medium.com/oracledevs/jfrog-artifactory-and-oracle-cloud-ac2a6e18a2ad
['Mickey Boxell']
2019-12-17 21:21:56.830000+00:00
['Kubernetes', 'Security', 'Oracle Cloud', 'Jfrog Artifactory', 'Docker']
The Roller-Coaster of Start-Ups
Jennifer Barnes is a strong business professional who learned the hard way that you can't succeed until you fail. Here, she shares some of her key reflections about her experience failing. My first start-up experience included an externally tested and solid market plan, innovation awards, and an investor that wanted to give us millions. I should have felt amazing, but I was uneasy. Something in the product development wasn't feeling right to me. Although I am a scientist by training, as the CEO my role was not to be the technical expert. I was torn — it would be easy to accept the results the team was providing, take the investment and keep going. We were on the roller coaster of start-ups: technical progress was slow but positive, interest was still high, and we were climbing to the top of the hill. I felt equal parts exhilaration and fear. But my uneasy feeling would not go away. I eventually decided to hire an external expert to stress test the science. In hindsight, it turned out to be one of my best decisions. To put it simply, our science failed, the remaining funds were returned to the investor, and we closed the company two weeks later. I got off the roller coaster feeling pretty shaken and, quite honestly, ill. I went away to lick my wounds and I spent time reflecting. Here are a few (of my many) reflections and learnings: Trust your gut. I had a feeling I couldn't shake, and it was there early on. Our science was ground-breaking and innovative and just plain sexy. It came out of an academic setting and was recognized as a world-leading invention. It was easy to be charmed by all it promised. But I had worked with cool science before, and I worried it would struggle to scale into a commercial product. I came from the highly regulated diagnostic medical device world. I knew it needed to be produced with extremely high reproducibility. And that is what killed it in the end.
It was not reproducible enough to make the commercial prototype viable. I made a solid decision, but I realize now I waited too long to stress test it, and should have brought in the external parties earlier. We were over 12 months in and had some investment. I can name lots of reasons why it took that long to uncover the issues, but actually, I should have just trusted my gut. My intuition. Because you can't be an effective decision maker without full conviction. And intuitive decision making comes with experience. It is as important as traditional analytical decision making. Trust your gut. I am proud we closed the company. Say what? Trust me, I did not feel proud at all at first. Closing the company immediately had many negative implications. But it was also absolutely the right and responsible thing to do. It didn't take long to feel proud of making the tough call and hard decision. To return funds. To return the IP to the University to ensure it may have a chance to be revisited after further research. To be truthful and upfront about the issues with investors. I didn't feel great but I didn't feel bad. I started getting so much positive feedback from people I told. Investors commended me for being brave and not just ploughing on and wasting funds. People understood and were supportive. Fast forward two months, and those same investors I had to tell that their funds would not deliver a return on investment recommended me for the CEO role of another start-up. The experience has given me better tools and a better chance at succeeding this time around. And the investment community appreciates that. Be you. When I reflect on my first startup experience and question why I hesitated to trust my gut, it was the overwhelming duty I felt to do right by everyone: co-founders, inventors, investors, and staff. Some of the dreaded imposter syndrome did exist. Who was I to tell the chief scientific officer or the investor I disagreed?
But ultimately, I felt an overwhelming duty to myself too: you can only do what feels right to you, what you can live with. For me that was ultimately to fail, reflect, get back up, dust myself off, cover those extra grey hairs, and start round two. I always did love roller coasters. Jen Barnes started her career as a molecular biologist focusing on medical diagnostics. She joined Roche in a scientific role but soon learned she had a passion and aptitude for the business side. She spent over a decade at Roche and, when she left, was responsible for two product divisions and teams. Her experience in both the science and business realms made her adept in the commercialisation of life science and medical opportunities, and she made the leap to working with early-stage companies in business development and CEO roles. She now runs JUNOFEM, a company that has created the world's first wearable smart device that can teach effective techniques to address pelvic floor disorders. The femfit® is on track to go to market in 2021, and she is starting a capital raise to support the commercialisation of this revolutionary medical device. With 20 years of experience providing business strategy, she is passionate about helping companies succeed.
https://medium.com/been-there-run-that/the-roller-coaster-of-start-ups-90c49c2843a0
['Springboard Enterprises']
2020-11-13 20:53:02.534000+00:00
['Growth', 'Entrepreneurship', 'Startup Lessons', 'Advice', 'Women']
The Case of the Dead Farmhand
The upstairs hallway resembled a slaughterhouse when local doctors William Coleman and Frank Green arrived on the scene. Summoned by a phone call from a nearby farm, the doctors found Cook soaked in blood, but, miraculously, he was still alive. He slipped in and out of unconsciousness. Occasionally he gasped "murder" in a weakened voice. The flow of blood from his wounds had slowed. Coleman and Green realized that Cook would not survive transport to a hospital. Working in the light of an oil lamp, they performed emergency surgery on the hallway floor. Green's scalpel drew a line across the victim's abdomen, splitting the skin to reveal blood leaking from the liver. Ten stitches staunched the worst of the internal bleeding. Although several bullets had struck Cook, most of the wounds were not life-threatening. The oddest, and potentially most dangerous, wound was a long gash that began behind his left ear and wrapped around the back of his neck to end in a spot near his right ear. This slice was two inches deep in places, and blood had poured from it. Coleman and Green sutured the slash to keep the victim alive. Unfortunately, their help arrived too late. Cook rallied when given heart stimulants, but he had lost too much blood. He slid into a coma, and then, at 12:15 AM Sunday morning, died. Although he had two witnesses in the doctors, Cook departed life without offering his side of the events. Did Norman Cook enter the Bankert house with criminal intent, possibly to seduce or rape Anna Bankert, or was there another explanation for the tragic event? Was this a case of justifiable homicide or premeditated murder? Sylvester Bankert. Rushville Republican, August 16, 1905. Public Domain. The case was not even twenty-four hours old before troubling details began to emerge. Perhaps the most problematic was the location of the husband during the incident. Anna told police that after escaping Cook, she had run outside and summoned her husband.
Together, they had ascended the stairs, but, rather unchivalrously, Sylvester had hung back while Anna led. She, rather than Sylvester, carried the only weapon. Why hadn't an armed Sylvester Bankert confronted Cook alone? It seemed odd, timid behavior for a man in such a gallant age. Unfortunately for Anna, her husband's statement did not match her own. Sylvester Bankert told the police that he had been working in the woods that Saturday evening. "The first I knew of the trouble was when I heard four or five shots fired on the second floor of my house," he said. "I do not know how the shooting occurred, but I think it happened between half past six and seven o'clock." Contrary to his wife's testimony, Sylvester was not in the house when she confronted Cook with the pistol. She did not call her husband before shooting Cook. Sylvester's first clue that something was amiss came when he heard gunfire explode in his house. Had Cook made trouble for the family in the past? No, said Bankert. "Cook had worked before for me, before he came to work in this neighborhood this fall. He helped me put up my corn on my farm near Glenwood last year, and he had never been in any trouble with any member of the family until last night." "I do not know much about Cook before he came to work for me last fall," continued Bankert, "but I have heard him say that he was raised at Laurel, Indiana, and that he was an engineer by trade. He has been working all around this neighborhood this year, helping during the threshing season." So why was Cook in the Bankert home? Why would he attack Anna? Sylvester Bankert had no answers for these questions. "He always had a good disposition, but he generally got drunk when he went to town.
He had been drinking last Saturday night, and there was no member of my family who saw him slip into the house Saturday night. In fact, we have no idea when he came." "The only reason that I know for the deed," concluded Sylvester, "was that Cook insulted my wife and as a result she shot him. Of course I will assist my wife in this case, as I believe she deserves my aid, and that she is innocent of any crime." Troubling Inconsistencies Accumulate Deputy Prosecutor John Kiplinger took charge of the investigation. As he and the Rushville Police interviewed witnesses and studied the crime scene, additional puzzles appeared. Anna claimed that she had not entered the south bedroom when confronting Cook; he grabbed her as she came into the hallway at the top of the stairs. Investigators discovered a bullet hole in the wall of the southern bedroom. The spent .38 caliber slug that had carved the hole was found under the carpet in the room. Measurements proved that the bullet had been fired by someone standing in the bedroom doorway. At least one shot had not been fired in the hallway. In the excitement after the shooting, the police had also failed to appreciate the significance of bloodstained bedding and a comforter piled on the bed. It appeared that Cook had made a small bed out of these items on the floor near the bullet hole. Had he been lying on the comforter when Anna shot him? Had she missed with her first shot, and then hit him several times as he lay before her? Someone had moved the comforter after the incident. Sadie Smay, the hired girl, also failed to support Anna Bankert's story. Her testimony at the coroner's inquest introduced additional anomalies. For example, Cook had not been the stranger to the household that Anna suggested. Five days before the shooting, Cook arrived, intoxicated, at the Bankert farm at nine AM. Sylvester Bankert was away. Anna did not tell Cook to leave; she allowed him to sleep it off in the house.
He stayed for lunch and most of the afternoon. Two months earlier, Sadie had caught Cook going through the letters Anna Bankert kept in the wardrobe in her bedroom. When Anna returned home, Sadie reported the incident. Cook denied the charge, claiming that Sadie was "a liar." Nothing more was made of the invasion of privacy. Sadie claimed that she did not hear the first five shots on Saturday evening. She didn't know anything had happened until she met Howard Bankert, Anna's eight-year-old son, who was crying. When she asked him what was wrong, Howard told her that there had been a shooting. Sadie raced upstairs and found Cook lying in the north bedroom, blood pouring from his wounds. She arrived in time to see Sylvester Bankert wrestling with his wife, trying to gain possession of a smoking revolver. Cook was screaming for water; the room was filled with the choking fumes of burnt gunpowder. Sylvester ordered Sadie to run downstairs and bring a glass of water for Cook. She hurried to obey, but then thought she should fill a bucket of water to bathe his wounds and clean the mess. While she was downstairs, procuring water, Sadie heard two more gunshots. Anna Bankert came downstairs; Sylvester remained with the victim until the doctors arrived. Sadie's story meshed with Sylvester's tale. It also suggested that Anna had fired two more gunshots into an incapacitated man after Sadie left the bedroom. Did Anna shoot him a second time to finish the job? And what, precisely, was the relationship between Cook and Anna Bankert? Sadie's testimony hinted that they were closer than farmhand and employer. Cook had been at the farm when Sylvester was away. Other witnesses testified that they had often seen Anna riding in a buggy with Cook, and the pair had been spotted alone in other places around the county. The police recovered four letters from Cook's pocket.
Although the officers refused to disclose the contents of the letters, they did confirm that Anna had written the letters to Cook. Cook also carried a picture of Anna in his pocket. It was becoming more difficult to believe that Anna's relationship with Cook was completely innocent. As the investigation continued, Anna remained unperturbed in her jail cell. She spent her days reading and sewing, and, according to the local paper, "does not worry, but on the contrary, seems very cheerful." Competing Explanations The preliminary investigation had discredited the idea that Cook's shooting was a simple case of an innocent woman attacked in her own home. There was nothing straightforward about this violent affray. The prosecutors believed that Anna Bankert and Norman Cook had enjoyed an adulterous relationship. When the pair fell out, for unknown reasons, Anna silenced Cook in the most effective way imaginable. She was a cold-blooded, crafty murderess. They charged Anna with first degree murder and held her, without bail, for trial. The defense team took a different view of the matter. Norman Cook had conceived an unhealthy obsession with Anna Bankert. When she did not reciprocate his interest, he broke into her home and lay in wait, hoping to rape her. A woman of commendable character, she was justified in defending her honor. To support their narrative, Anna's lawyers canvassed the surrounding counties, searching for evidence that would blacken Cook's character. The August 18 edition of the Rushville Republican reported that witnesses from Franklin, Henry, and Fayette counties would testify that Cook had frequently bragged about his relationships with respectable women across Indiana. The newspaper also claimed that Cook had served two years in the Franklin County Penitentiary. Moreover, his ex-wife, who lived in New Castle, would be called for the trial to attest to Cook's bad character.
The Trial Begins Judge William Sparks gaveled Anna Bankert’s trial to order on September 28, 1905. Prosecutor Elmer Bassett declared in his opening statement that the state would prove that Anna Bankert had shot Norman Cook, bludgeoned him with a pistol, and attempted to cut his throat with a razor. She had carefully set the drama’s scene, casting Cook in the role of unhinged stalker. Anna was no victim: she had engineered and executed a cold-blooded murder. Bassett called Sylvester Bankert as the state’s first witness. While Anna, dressed in a white silk blouse and brown skirt, fanned herself, her husband took the stand. Repeating the story he had told the police, Bankert said that he had not known Cook was in his house. Bankert was in the dining room when he heard gunshots erupt. He raced up the stairs, his son Ralph right behind him. Cook knelt before Anna in the south bedroom. She was beating him with the butt of a 38-caliber pistol. Bankert took the pistol away and ordered his son to run to a neighbor’s house to phone for the doctor. Anna fled downstairs. Cook looked at Bankert and moaned, “I am bleeding to death.” Sylvester took the spent pistol and carried it downstairs. He placed the revolver on the mantel in the living room. When he went back upstairs, Cook levered himself up on an elbow and gasped, “Help me.” Sylvester, half-dragging, half-carrying the victim, assisted Cook into the north bedroom. He then left the dying man alone. A few minutes later, he heard two gunshots. Anna had crept back upstairs and fired two more bullets into Cook with a 32-caliber pistol. Sylvester took the second gun away from his wife. She went back downstairs. Asked about the gash on Cook’s neck, Sylvester Bankert testified that he had not noticed a wound when he helped Cook into the north bedroom. Someone — and Sylvester had no idea who — cut Cook’s throat after he was placed in the second room. 
After corroborating testimony from Ralph Bankert, the state placed Coroner William Coleman on the stand. He had accompanied Frank Green on the emergency call. They had found the victim shot, in a weakened condition, gasping for water. Cook had not told the doctors anything about the incident. Coroner Coleman detailed the results of the autopsy. Six bullets had penetrated the body. The entry wounds were clustered on the left side of Cook's body, which suggested that he had been lying face down, in a prone position, when he was shot. A Nasty Surprise During the coroner's testimony, Prosecutor Bassett introduced the victim's clothing, the pistols, and the bloody razor into evidence. The defense team expected this gruesome display, but Anna's four attorneys were completely unprepared for what came next. Bassett asked the coroner if anything else had been found on the victim's body. "The coroner then produced four rubber instruments of a lewd nature," wrote the reporter for the Rushville Republican, "which he had found in Cook's pocket. One of these, the witness testified, was wet when he found it and he stated that it was not wet from blood." Although the newspaper was unwilling to be more specific about the precise nature of the "lewd" rubber instruments — and indeed, some of the other Indiana newspapers, mindful of their family readership, skipped this part of the coroner's testimony entirely — it is to be presumed that Cook carried four condoms. One of them was still wet from use. This was a bombshell. The state had hidden this terrible evidence from the defense team. Was it possible that before the shooting Anna and Cook engaged in sex on the makeshift bed found in the corner of the south bedroom? The salacious surprise unmanned Defense Attorney Watson; he was unable to return for the session after the lunch break. He took to his bed, ill.
The remaining three lawyers spent the rest of the afternoon arguing for a continuance, as they could not proceed with the case in Watson's absence. Pistol Practice The next morning, Senator Watson was still too ill to return to the courtroom, so William Green, a well-known defense attorney from Greenfield, took his place. The prosecution continued to build its damning wall of testimony. Hired girl Sadie Smay testified that in the weeks before the incident, Mrs. Bankert had started practicing shooting the pistols. When asked about her new interest in target practice, Mrs. Bankert asserted that some chickens had been stolen. A neighbor, Bertha Walker, recounted a day in early August when she watched Anna Bankert and her sons shoot in the barn. Anna shot at a small plug in the bottom of a keg and hit it with little trouble. After recovering the bullet from the target, testified Mrs. Walker, Anna took her aside and said, "It don't look like a little piece of lead like that would kill a man, does it? Can you kill a man with a gun against his body? How far does a man have to be away to kill him? Do you think I could shoot straight enough to kill a man? Where is the best place to shoot a man to kill him? In the heart?" Was there a problem with chicken thieves in the neighborhood? asked the prosecutor. Bertha didn't know of anyone who had lost chickens. At 3:00 PM, Judge Sparks ordered a one-hour recess. During the interval, he instructed the bailiffs to remove the women from the courtroom audience. The next witness would offer testimony that was unfit for female ears. After the break, the prosecution called a medical expert, Dr. Frank Wynn of Indianapolis. Dr. Wynn had conducted a microscopic examination of the wet substance found in the condom recovered from Cook's pocket, and had identified it. Local newspapers again walked the prim path and refused to specify the nature of the substance.
As the Rushville Republican wrote, Wynn “gave some damaging testimony which is wholly unfit for publication.” Presumably the condom contained fresh semen. Anna, who had not been debarred from this scandalous testimony, “showed signs of nervousness and anger. Her face turned a fiery red and her eyes were kept riveted upon the ceiling the greater part of the time.” The jurors were excused after Wynn finished. They went home pondering the import of a freshly used condom. A Virtuous Woman Prosecutor Bassett offered one final witness before concluding the state’s case. Elva Mains, nephew of Bertha Walker, had also been present when Anna had allegedly mused about killing a man with her pistol. The young man corroborated Walker’s version of what Anna had said. With that, the state rested. Senator Watson had sufficiently recovered from his condom shock to return to his role as lead defense attorney. After offering a biographical sketch of his client, Watson declared that the defense case would stand on her excellent reputation in the community. She was “an industrious woman and a model housewife who remained at her home the greater part of the time.” She was also a Christian and an active member of the local church. Against this stood the terrible reputation of the victim: a thief, a drunk, quarrelsome, vicious, and a dangerous man. This “degenerate” had become obsessed with Anna Bankert. He had sneaked into her house and attacked her when she went upstairs to get a lamp. Consequently, she was forced to defend herself. This was not a complicated case: a bad man tried — and failed — to suborn a virtuous woman. Justice had been served from the barrel of Anna’s pistol. Anna Bankert was the first witness for the defense. 
She stuck to the version of the story she had told the police in the initial phase of the investigation: Cook had surprised her upstairs; she had escaped and gone to find her husband; she thought she heard the front door slam, and so did not expect to find Cook still upstairs when she went back up; he grabbed her again, threatened her with a razor, and she shot him. The defense then devolved into a string of character witnesses. A long line of neighbors and townspeople from Rushville took the stand to attest to Mrs. Bankert's "reputation for peace and quietude." Others attested to the poor standing of Cook, and his habit of carrying a straight razor with him. Somewhat surprisingly, the defense offered no explanation for why Anna had returned with a second revolver to shoot Cook twice more after he had been moved to the north bedroom. Nor was there an explanation for the razor slash that had opened the victim's throat. And, quite naturally, the defense ignored the "lewd" condoms discovered in Cook's pocket. Senator Watson's defense strategy consisted of portraying Anna as a virtuous victim, and Cook as a vicious criminal. "It is hard to tell," wrote the Rushville Republican, "just what the outcome will be." After four days and a long line of character witnesses, the defense rested its case. The state countered with a chain of rebuttal witnesses who attested that Anna had a bad reputation in the county, while, to the contrary, Cook had been a good man who drank a little. The parade of character witnesses became so onerous that the courtroom, which had not been able to seat all the people who wanted to attend the trial in its opening days, was only one-third full by Friday morning. After another long day of character witnesses, both sides announced they were ready to finish. Closing Arguments The courtroom was full on Friday evening when the prosecutors and defense team faced off in their final appeals to the jury.
Prosecutor Kiplinger emphasized that the evidence showed that Anna had been involved with Cook, that she had invited him to her home on the fatal evening, and, that the number and variety of wounds found on the corpse suggested a malicious attack. Moreover, argued Kiplinger, with the victim incapacitated by the bullets from the first pistol, there was no reason for Anna to have returned and fired two more shots into his body. Kiplinger spoke for two hours, registering several strong points. The following morning, the defense took the baton. Cook was “a moral leper,” who had attacked an innocent woman; she, like everyone, had a right to defend herself. She had exercised that right. Moreover, asserted Attorney Watson, Anna Bankert had a right to return to the bedroom to shoot Cook a second time. She had the right “to pursue him until she had freed herself from all danger.” The state, claimed Watson, had failed to offer a motive for the crime, and thus, had not proven its case beyond a reasonable doubt. As for the razor slash found on Cook’s throat — well, Watson didn’t have an explanation. It was a mystery and it was likely that someone else had cut Cook’s neck. Prosecutor Bassett had the final oration. Anna Bankert had confessed to shooting Cook, and consequently, the state had no obligation to provide motive. After walking the jurors through the evidence that proved her guilt, he poured fire upon the defendant in his conclusion: This villainous woman, this tigress, came back upstairs, after Cook had been assisted to the north room — came back upstairs, if you please, and poured more lead into Cook’s body. This woman used more force than was necessary to protect herself. The theory of self-defense goes glimmering. Was Anna Bankert in danger? Was Anna Bankert’s virtue in danger? Yes, she is guilty of murder in the first degree. The jury, concluded Bassett, would be justified, not only in sentencing her to life in prison, but to execution. 
The Verdict After reading a fifty-one point list of instructions, Judge Sparks sent the jury away to consider their verdict. Sunday afternoon, at 5:00 PM, the jury returned, having been out for twenty-four hours. Judge Sparks asked R. H. Philips, the foreman, whether the jury had reached a decision. We have not, responded Philips. After several rounds of balloting and a day of passionate debate, the jury had failed to reach a conclusion. Judge Sparks polled the jurors individually, asking if there was a chance of reaching an agreement. The only thing that the jurors could agree about was that a verdict was impossible. It had been close: at one point during the Sunday afternoon balloting, ten men had voted guilty and two innocent. The last round of balloting, however, had broken six to six. Anna Bankert wept into her handkerchief; she had expected exoneration. That was not to be. After days of testimony and what in retrospect seems to have been a strong case for the prosecution, the trial resulted in a hung jury. Judge Sparks thanked the jurors for their time and dismissed them. "Public opinion has been and is still divided," wrote the Rushville Republican, "as to whether or not Mrs. Bankert was justified in killing Cook. The fact that the jury disagreed has occasioned no surprise. The Cook murder is still as much a mystery as ever." Epilogue In February 1906, Anna Bankert was placed on trial for a second time. After days of testimony, and much expense, the jury again failed to agree on a verdict, splitting seven for guilty, five for innocent. Anna was released on parole and returned to the farm where she had shot Norman Cook. Although the prosecutors vowed to try her a third time — possibly in another county where opinions had not been formed — ultimately they did not pursue this. On December 20, 1906, Judge Sparks dismissed the case. Anna, believing that she was in court to receive a new trial date, was caught off guard.
"How I thank you, judge," she exclaimed, when Sparks threw out the case. She then returned home to convey the good news to her husband and ten-year-old son. She and her husband, Sylvester, remained married until his death in 1947. Sources: The Indianapolis News, October 2, 1905; Indianapolis Star, October 1, 1905; Rushville Republican (IN), August 14, 1905–Dec. 20, 1906.
https://medium.com/lessons-from-history/the-case-of-the-dead-farmhand-b1a45ed9b11c
['Richard J. Goodrich']
2020-12-28 19:58:13.412000+00:00
['Murder', 'Nonfiction', 'True Crime', 'History', 'Crime']
Use These Scripts To Get More Effective Support, Ideas, And Feedback On Your Small Business Challenges
Everyone runs into questions, challenges, obstacles, and snags in their plans. It's a universal law of entrepreneurship and business ownership — it doesn't matter how long you've been in business, you'll run into a new problem eventually. And therefore, by the transitive property, you can then know that everyone — every entrepreneur or executive, from the coffee shop owner down the street to Warren Buffett, Sara Blakely, and Howard Schultz — is going to need support, a few new ideas, and some feedback from time to time. We're doing new things, creating fresh solutions, and building new organizations. Needing help is a feature, not a bug. But after spending the last decade watching small business owners ask for help, feedback, and support — or worse, not ask — I've realized that something as simple as asking for a bit of help can be fraught with difficulty. Since I've made it my business to provide support to small business owners and help them seek out feedback and help themselves, I wanted to share what I've learned about getting the best help you can and what to do with it once you've got it. Below, you'll find 8 dos and don'ts of asking for support for your small business. Plus, I've provided a number of scripts you can use to get better business feedback. They're based on 10 years of getting help on my own business journey, working as a business coach, and creating and observing a platform built for small business owners to ask questions, get feedback, and learn from each other. These scripts can be used in online communities, in mastermind sessions, with a business coach, in coffee chats, in workshops, and really anywhere online or offline you're seeking help with your small business questions and challenges. But before we get to those, we need to remember what I'll call The Golden Rule of Small Business Support: Find your own answer. The support, ideas, and feedback that any 1 person can provide are limited.
You will inevitably need to adapt their perspective or suggestion into something that is tailor-made for the way you do business. Even when you have access to a pool of other business owners or experts, their perspective is limited by what they see and what they’ve experienced. That doesn’t mean that their input isn’t valuable — it’s extremely valuable. It just means that there is more work to do than simply asking for and receiving help. You need to analyze the help you receive, you need to weigh your options (there’s always more than one), and you need to choose what you want to test or experiment with. At the end of the day, the feedback you receive won’t point you in the right direction — you have to point yourself and your business in the right direction. And that leads me to the first element of getting more effective support for your small business: 1. Don’t ask for advice. Most people love giving advice. We all think we could do better in any given situation and, at even the hint of an opening, we’ll offer up our 2 cents. Social media might as well be “advice media.” Everyone has an opinion and any question you ask or topic you suggest for discussion will be viewed as an opportunity to give advice. When you’re looking for an answer to your business question or support on a challenge you’re experiencing, don’t ask for advice. In fact, if you’re asking for support in a relationship or context where advice-giving is rampant, explicitly state that you’re not looking for advice. Instead, you can ask for different kinds of feedback depending on what would be helpful to you. Try asking people to share their experience: “I’m currently experiencing [your challenge] and I was thinking I might try [your solution] to solve it. What’s been your personal experience with this challenge or that solution?” Or, try asking people to share examples: “I’m considering solving [your challenge] by [your solution] because [your reasoning]. 
Have you seen a business take a similar approach? Please share the specifics with me!” Ask people what works for them: “I’m going to [your course of action]. What worked for you when you did that?” Or, ask people what didn’t work for them: “I’m dealing with [your challenge]. What didn’t work for you when trying to solve a similar challenge? Why?” I’m not going to pretend that you won’t receive some “advice” mixed in with your responses if you use one of these scripts. However, the majority of your responses will be more concrete, fact-based, and verifiable. While the support you receive in response to these kinds of requests might not immediately produce your “aha!” moment, you will have the raw ingredients you need to create your own solution — and with less time wasted digging for ideas. That’s a great outcome! 2. Do start with 1 question. Realistically, every challenge or question you face in your business is actually a set of at least 37 different questions or challenges all rolled up into one. That can make knowing what to ask for help with nearly impossible. At The What Works Network, the business support network my company runs, I’ve watched members put off asking for help over and over again because they haven’t found the exact right question to ask yet. The truth is that there is no exact right question to ask. There is only your first question, second question, third question, and so on! So don’t wait to ask for support until you know exactly what you need. On the other hand, don’t try to get answers or feedback on all 37 of the questions related to your main challenge at once. Pick some place — any place — to start. Think about your challenge or big question as a knot you have to untangle. Maybe that knot has 37 different strands to separate. You start picking at one strand, seeing how it’s connected to the rest, working it out of the mess in a few places. Then you start working on a different strand.
Eventually, you start to see how the knot is formed and you can strategically pick apart each strand until the knot is completely undone. The same thing happens with your business challenges. By starting with one tiny strand of the problem, you can start to work your way into the mess and eventually unravel it. One set of feedback leads to new clarity. The next set of feedback frees up a section of the knot. But you have to start with 1 single question. Try something like this: “I seem to be challenged when it comes to [the area you’re experiencing a problem in]. I’m not entirely sure what the real cause is yet, but finding out more about [your starting point] would help. How have you [solved this problem, handled this challenge, answered this question] yourself?” 3. Do provide context. Context is key when it comes to getting great support or feedback for your business. It can take a little longer, of course, to share more context than just asking a question flat out, but you’ll save time in the long run by weeding out inappropriate responses from the start. Plus, providing more context makes giving feedback or support more efficient too. Even though it might take longer to read your post or listen to your question, the people you’re asking for help from will appreciate that their response will be more helpful to you. After all, we all want to be of the highest service! When you’re giving context, consider including some of these things:

- What you’ve already tried
- Research you’ve already done
- How your business differs from what people might expect
- Core values that impact the way you work or solve problems
- How long you’ve been in business
- Special resources you have at your disposal
- Solutions you’d like to avoid
- Any mindset blocks you’re aware of
- Your short-term and/or long-term goals

Once you’ve given a solid context for your challenge, it will be extremely tempting to ask for advice (i.e. “Given all this, what should I do?”). Resist the urge.
Instead, lean on asking for relevant examples of what’s worked or hasn’t worked for the people you’re talking to based on the context you’ve provided. 4. Do make your request specific. Similarly to providing context, making specific requests can make it a lot easier for people to give you help and for you to receive high-quality feedback. A question like, “How can I get more leads for my business?” might be what you’re wondering but, unless you’re looking for a laundry list of ways that could be done, you’re going to be disappointed in the feedback you receive. Instead, you could ask, “How have you used Facebook ads to get more leads for your service-based business?” Or, “How have you optimized your website to get more inquiries about your products?” And that leads me to the next point… 5. Do get curious. Maybe you don’t know for sure that you want to use Facebook ads to get more leads or that you want to spend time optimizing your website for more inquiries. That doesn’t mean you shouldn’t ask about it! Curious business owners tend to discover more creative solutions to their problems. If you find yourself wondering “could this solve my problem?” that’s a good indicator that it’s worth asking about. Don’t just Google for answers (what a time suck!). Ask real people how they’ve made it work, what they would do differently, or how it all came together. You might very well end up not going that direction but you’ll have a much better understanding of your options for the future. And, your curiosity might just inspire an even better creative solution! 6. Do define the kind of feedback or support you’re looking for. Sometimes you ask for help when all the options are open. Sometimes you ask for support in the middle of a project when resources have been committed and choices have been made. Sometimes you ask for feedback at the end of a project when you really only want to make minor changes or soak up the applause. 
By defining the kind of feedback or support you’re looking for, you can avoid painful, unhelpful, or unrealistic responses. You can let people know: “I’m just getting started on this idea and I’m open to all the options. Would you help me brainstorm this?” Or, you can put it this way: “This project is going great and I’m happy with where we’re at — but now I’m at a crossroads between one option and another option. What would make you choose one over the other?” Or, you might try this: “I’m just about done with [your project] and, specifically, I’m wondering if [your concern]. Could you review it and give me your feedback on whether [your concern] is true?” And if what you really want is applause — and there’s nothing wrong with that — try this: “Yes! I just finished [your project] and I’m so proud of [specific thing you’re loving]. I couldn’t wait to share it with you!” One final tip here: if you are really looking for a specific type of feedback, it might pay to examine why that kind of feedback is important to you and whether you’re avoiding a bigger problem by steering feedback away from another area. This is an extremely uncomfortable thing to do, but it can uncover catastrophic problems before they wreak havoc on your final project. 7. Do consider the source. Different kinds of people offer different kinds of feedback and support. While I won’t pretend that everyone’s feedback is equally valuable, most people’s support can be useful when you carefully consider the source. Anytime you ask for help, the help you receive is going to be colored by the experience, expertise, and personal perspective of the person giving it. Both experienced and inexperienced, expert and novice people can give you really useful feedback if you’re willing to examine it. Highly experienced or expert sources have deep knowledge to draw on when they give feedback. They’re likely to spot common mistakes.
They often know when “intuitive” solutions lose out to unexpected counterintuitive solutions. But their perspective can also be clouded by experience or expertise, or they might approach your challenge with assumptions in mind. Inexperienced or novice sources often think creatively because they don’t have years of experience or expertise to fall back on. They are more likely to be able to turn constraints into opportunities. They also can draw on experience in other areas — for instance, when it comes to business problems, they can use their experience as a consumer to help guide their feedback. But, of course, their perspective can suffer from their lack of knowledge, and they can make mistakes that an expert might catch. Again, being specific with your questions can help you consider the source. If you’re talking to an expert or a group of experienced business owners, you might ask: “I’m dealing with [your challenge] and I’d value your perspective and experience here. What have you seen work for overcoming this challenge? What mistakes have you seen others make?” If you’re talking to a friend, colleague, or a group of less experienced business owners, you might ask: “I’m dealing with [your challenge] and it’s affecting our customers by [how it’s affecting them]. In your experience as a consumer, how would you want the business to handle this issue?” 8. Don’t limit yourself to familiar territory. This has to be the most common and limiting problem with the way small business owners pursue support and feedback for their challenges. All too often, I see people gravitate to receiving help and inspiration from people with businesses like theirs or like the one they want to have — instead of seeking out diverse perspectives. My greatest business breakthroughs have come from talking to people who have experienced similar challenges or achieved similar goals but have done so through vastly different approaches, in different industries, or with different methods.
The more I’ve sought out help from business owners and experts who feel unfamiliar and even intimidating, the more creative and effective my own solutions have become. The closer the source of feedback is to your style, your business model, your marketing strategy, your industry… …the more likely you are to copy instead of innovate. Sure, sometimes we want fast answers to questions that have already been solved. And yes, getting help from someone in-the-know in your field is a great way to do that. But I believe that 90% or more of the support you get on your business should come from different and diverse sources. When you do, you might just find answers to questions you didn’t even know you had and solutions to problems that hadn’t revealed themselves yet.
https://medium.com/help-yourself/8-ways-to-get-more-effective-support-ideas-and-feedback-on-your-small-business-challenges-72b5cc429e1c
['Tara Mcmullin']
2019-04-30 18:45:05.391000+00:00
['Life Lessons', 'Leadership', 'Small Business', 'Entrepreneurship', 'Freelancing']
When an Extrovert is Almost an Introvert (and Vice-Versa)
When an Extrovert is Almost an Introvert (and Vice-Versa) There’s a name for that and an explanation for those confusing feelings you have Photo by Allef Vinicius — Unsplash You would think that taking a test, like Myers-Briggs, would give you all the answers you need about how you deal with the world. And while I’ve found that personality tests give you a fairly accurate insight into your modus operandi in day-to-day life, I’ve noticed that there is still a lot of wiggle room. Whenever I take a personality test, my results normally show that I’m an extrovert. Anyone who knows me would agree with this. I even agree with it, most of the time. My scoring shows that I make the grade to extrovert, but just barely. I’m always just one or two points across that magical line. I’ve also taken a personality test that had me just barely in the introvert category. I suspect I was feeling very “unpeopley” the day I took that test. I love people. I have a lot of friends. I talk freely and easily with others. So, when I mention to someone that I’m shy, they usually laugh and give me that look. But I am actually shy most of the time. I just hide it well. And I’m not alone. Many people score just slightly across the line from introvert to extrovert. We can be either/or depending on the situation. Welcome to the ambivert club According to Merriam-Webster, an ambivert is “a person having characteristics of both extrovert and introvert”. Ambiverts fall in the middle of the spectrum between introvert and extrovert. It’s kind of the best of both worlds. We like our alone time but can shine in social situations just as easily. We can confidently step into a leadership role at work, or we can sit back and follow. We are happy being good team players or tackling a solo project. We can’t wait to get together with friends, or we are content to spend the weekend at home, binge-watching Breaking Bad. Either/or. This or that. 
Ambiverts are the yin and yang of the social spectrum, and while that sounds ideal, being an ambivert isn’t always as sweet a deal as it sounds. It’s a blessing and curse Before I learned I was an ambivert, I felt confused about my reactions in certain situations. I am an extrovert. That should mean that I am “on” all the time. I should love talking to anyone who walks within earshot. I should be happy to strike up a conversation at a moment’s notice, right? Nope. Not one bit. When I’m with friends, or in a situation I’m comfortable with, I’m on fire. I can converse like it’s my job. But take me even slightly out of my comfort zone, and I’m secretly looking for the exit. And I’ll smile while making a run for it. I think a big part of the problem is that I smile all the time. If anyone looks at me, a big, cheesy grin breaks out across my face, and for most people, that’s an invitation to talk to me. And I don’t always know how to handle it. There are the introvert moments When I’m out in public, I don’t always want to talk to everyone I meet. The grocery store checkout line, for example. There’s always that chance you’ll end up with someone who really wants to chat. I scan the magazines or peruse the chocolate bars. So many choices. I look up. Crap. I have made the dreaded eye contact with the lady in front of me. My face defaults to smile mode. Damn it, face. Stop that. Too late. I’m soon listening to a stranger tell me things or asking me questions. I smile and respond, never betraying the fact that I really don’t want to have this conversation. I just want to squirrel my groceries out of the store like a ninja and go home. Then there are the extrovert moments I’m in a checkout line at the grocery store and someone (perhaps an introvert) is behind me checking out the magazine covers and chocolate bars. She seems nice. She looks up and we make eye contact. She smiles and I notice it’s a tad strained. 
I comment on the ice cream in her cart and let her know that I, too, like salted caramel. We chat a bit. I leave the store and happily trot off to my car, my current need for human contact fulfilled. Are you feeling somewhere in the middle? Did you take a personality test and not quite feel like the test results rang true for you? If so, you are actually in the majority. According to Ronald E. Riggio, Ph.D., in Psychology Today, “about two-thirds of people are in the middle, and can be classified as ambiverts.” So, the next time you find your introverted self happily striking up a conversation with a stranger in a waiting room, or your extroverted self bailing on a party long before it’s over in favor of reading a book in bed, know that you’re not alone. You’re just doing what ambiverts do best.
https://medium.com/the-partnered-pen/when-an-extrovert-is-almost-an-introvert-and-vice-versa-63d7bed304a8
['Sandy Bishop']
2019-11-05 17:07:05.565000+00:00
['Extrovert', 'Psychology', 'Introvert', 'Ambivert', 'Self']
The Top 3 React UI Libraries for Beginners
The Top 3 React UI Libraries for Beginners The pros and cons of my favorite React UI libraries Photo by Tirza van Dijk on Unsplash. Early this year, I decided to take my JavaScript skills to the next level. Tired of jQuery, I started looking for something faster, more modern, and reliable. I opted for React, which was voted the most loved and wanted JavaScript framework in Stack Overflow’s 2020 Developer Survey by the 65K developers who participated. One of the first things I learned (and loved) about React is that thanks to its popularity, you don’t need to reinvent the wheel. There are countless UI suites ready for you. Not surprisingly, many developers have built React libraries offering several components to help you improve the UI of your application. In this article, I’ll show you the three React UI libraries I would suggest to anyone to start with, based on my experience.
https://medium.com/better-programming/the-top-3-react-ui-libraries-for-beginners-6987f7b62c78
['Antonello Zanini']
2020-12-07 17:50:42.895000+00:00
['Programming', 'JavaScript', 'Reactjs', 'React', 'Software Development']
What Makes Harry Potter so Memorable
If you’re anything like me, when you love something, you really love it. After watching The Philosopher’s Stone at a young age, I would look out the window at night in anticipation of an owl bringing my Hogwarts letter, run around the house with a stick wand, and dress up as Harry for Halloween (and for fun). Most of my childhood obsessions have died out… except for Harry Potter. As I am currently worldbuilding for my own fantasy series, I’ve been considering what elements I should incorporate — and how — to make it memorable to readers. It was only natural for me to peer into one of the most renowned series of all time — Harry Potter — and dissect it to figure out what it is that has made these books so influential on an entire generation. To make a worldwide bestselling novel (and the most successful one in history, at that), J.K. Rowling must have done many things right, which I, as a writer, would like to carry into my own work. Among the many amazing things I noticed, one thing stood out to me most of all: Fantasy is about escapism, and J.K. Rowling did a fantastic job of making her world feel real enough to escape into. She did so through well-developed characters, by making the magical world reflect the real one, by creating her world to have plenty of historical information and exhaustive lists of spells, items, government departments, beasts, and books, and by foreshadowing events — both near and far — in an ingenious way. Below are the main elements which I theorize have made the Harry Potter series both palpable and timeless, which are worth considering during your own worldbuilding process: UNFORGETTABLE CHARACTERS Although I wish there were more LGBTQ+ and POC characters in Harry Potter, there are many characters with various dispositions and backgrounds, many of whom are relatable to readers.
Within this lengthy 7-book series, we readers get to witness characters evolve slowly (and thus realistically) over time — sometimes even growing with them — and in doing so strengthening our connection and their impact on our lives (Fred and Dobby’s deaths still haunt me). Neville Longbottom, the underdog and the almost-chosen one, had one of the most impressive character arcs in the entire series. Beginning as an untalented and timid young wizard, he later became a brave Gryffindor, daring enough to stand up to Bellatrix, and even to Voldemort. The sword of Gryffindor presented itself to Neville, affirming his transformation and allowing him to destroy the final Horcrux, Nagini. Courage wasn’t something that came as naturally to Neville as it did for Harry; he had to develop it over the years, starting from the very (long) bottom. Underdogs can relate to Neville for this reason, especially those who are determined to succeed despite their hurdles. After all, it is those who you least expect it from who surprise you the most. Severus Snape is not particularly a relatable character — except perhaps to those who have lost someone they love, either to another person or to death — but he is a memorable one, in part because of Alan Rickman’s fantastic portrayal of Snape. He is morally grey, making his motives and actions a frequent topic of debate: Would Snape have turned against Voldemort if he hadn’t killed Lily? Did he truly love Lily, or was it mere obsession? Are Snape’s actions against Harry forgivable (he saved him multiple times, while also bullying him)? Either way, being an undercover Death Eater is pretty badass. Luna Lovegood is my favorite character despite her lack of a character arc, as she is introduced later on in the series and is amazing from the moment we meet her. She is loyal, brave, eccentric, kindhearted, and authentic, despite the snickers she receives for being so.
I believe everyone should strive to be more like Luna, and she serves as a great role model. Draco Malfoy’s redemption arc is often discussed within the fandom community, many being disappointed that he did not have a dramatic turning point. I believe his redemption arc was realistic, as it would take years to deprogram from all the hogwash he was taught throughout his life about blood superiority. Not only that, but people seldom risk what little they have left for a possibility, no matter how much they wish to take that risk. By the end of the series, all Draco had left were his parents (who the Death Eaters no longer held in high esteem), and betraying them for the group that he once openly opposed was too risky (why would they accept and trust him after all the harm he had done?). It was much more practical for him — a self-preserving Slytherin — to watch events unfold from the sidelines, helping discreetly where and when he could, and to avoid announcing his allegiance (which he was probably struggling to decide). Those who have had to unlearn the wrong they have been taught, who have had high expectations placed on their shoulders at a young age, and who have deep regrets, may relate to Draco. LIFE LESSONS AND MEMORABLE QUOTES Harry Potter taught me a lot about the realities of life when I was young, many of which I have carried with me throughout the years. These lessons — often summarized in the form of memorable quotes — are things all people, in my opinion, should know. Things are usually morally grey; neither all good nor all bad: “Besides the world isn’t split into good people and Death Eaters. We’ve all got both light and dark inside of us.
What matters is the part we choose to act on.” — Sirius Black, The Order of the Phoenix It is okay to imagine and dream, but not at the expense of your life: “It does not do to dwell on dreams and forget to live.” — Albus Dumbledore, The Philosopher’s Stone We decide who we are, not our past, nor anyone else: “It matters not what someone is born, but what they grow to be.” — Albus Dumbledore, The Goblet of Fire “It is our choices that show what we truly are far more than our abilities.” — Albus Dumbledore, The Chamber of Secrets Feelings are just as — if not more — important than “facts”: “The truth. It is a beautiful and terrible thing, and should therefore be treated with great caution.” — Albus Dumbledore, The Philosopher’s Stone The opposite of love is indifference, which is why it hurts so much more than being despised: “Indifference and neglect often do much more damage than outright dislike.” — Albus Dumbledore, The Order of the Phoenix Those who do not accept you as your authentic self are not worth the time of day anyway: “I am what I am, an’ I’m not ashamed. ‘Never be ashamed,’ my ol’ dad used ter say, ‘there’s some who’ll hold it against you, but they’re not worth botherin’ with.’” — Rubeus Hagrid, The Goblet of Fire The right decision is often the harder one to make: “Dark and difficult times lie ahead. Soon we must all face the choice between what is right and what is easy.” — Albus Dumbledore, The Goblet of Fire There are other memorable quotes, too, which most of us Potterheads have memorized by heart: “Don’t let the muggles get you down.” — Ron Weasley, The Prisoner of Azkaban “You’re just as sane as I am.” — Luna Lovegood, The Order of the Phoenix “I’ll be in my bedroom, making no noise and pretending I’m not there.” — Harry Potter, The Chamber of Secrets “Just because you’ve got the emotional range of a teaspoon doesn’t mean we all have.” — Hermione Granger, The Order of the Phoenix “All my shoes have mysteriously disappeared.
I suspect the Nargles are behind it.” — Luna Lovegood, The Order of the Phoenix “Things we lose have a way of coming back to us in the end, if not always in the way we expect.” — Luna Lovegood, The Order of the Phoenix “You — complete — arse — Ronald — Weasley!” — Hermione Granger, The Deathly Hallows “What exactly is the function of a rubber duck?” — Arthur Weasley, The Chamber of Secrets HISTORY The history within a fantasy/fiction novel is important for establishing why the world is presently the way it is. J.K. Rowling took worldbuilding very seriously, providing far more historical facts and theories about her wizarding world than can be memorized or contained within 7 books (much of it was later published on Pottermore (now wizardingworld.com), in Quidditch Through the Ages, and in Fantastic Beasts and Where to Find Them). As it took J.K. Rowling 6 years to write only the first book, it is no surprise that her universe has such a detailed history, which was inspired by both true and imagined events. Large-scale history, involving wizarding wars, lineages, and the government, and small-scale history, involving characters’ pasts, are evident. “‘Just interdepartmental memos. We used to use owls, but the mess was unbelievable… droppings all over the desks…’” — Arthur Weasley, The Order of the Phoenix Witches and wizards known for their achievements, inventions, and oddities (especially those recorded on Chocolate Frog cards): Andros the Invincible, Falco Aesalon (the first known Animagus), Fulbert the Fearful (a well-known homebody), Herpo the Foul (who created the Basilisk and the Horcrux), Merlin, Uric the Oddball (who wore a jellyfish on his head), Wendelin the Weird,.. The origins of the game Quidditch (originally Kwidditch or Cuaditch), first played in Queerditch Marsh c.
1050 A.D., influenced by the Scottish broomstick game Creaothceann, and the Golden Snitch influenced by the magical bird the Golden Snidget The history of the Quidditch World Cup — which has been held in Britain since 1473 and opened to other countries in the 17th century — including what teams had won and occurrences which made certain tournaments unique (such as the 1877 Quidditch World Cup that no one can remember) Family trees, especially of Pureblood witches and wizards: the Peverells, the Gaunts, the Lestranges, the Blacks, the Weasleys (who are considered blood-traitors for their disregard of their blood “purity”), the Potters,.. FORESHADOWING Since J.K. Rowling had such a clear image of the history of her world and the plot layout of each book, she was able to masterfully foreshadow future events. There is such a large amount of foreshadowing — both big and small, long-term and short-term — scattered throughout the pages of Harry Potter that many instances are easy to overlook until a second or third read-through. Connecting these dots gives readers a sense of satisfaction, similar to finding a fitting piece in a giant jigsaw puzzle. Sirius Black, the only family Harry has left, was referred to in passing in Book 1 Chapter 1 (Hagrid borrowed Sirius’ motorbike to transport Harry to Privet Drive), but is not mentioned again until the third book: “Borrowed it, Professor Dumbledore, sir. […] Young Sirius Black lent it to me.” — Hagrid, The Philosopher’s Stone Readers learn very early on that Harry has some kind of connection to Voldemort when Ollivander matches him with Voldemort’s twin wand: “Curious… curious… […] I remember every wand I’ve ever sold, Mr. Potter. Every single wand. It so happens that the phoenix whose tail feather is in your wand, gave another feather — just one other. It is very curious indeed that you should be destined for this wand when its brother — why, its brother gave you that scar. […] Curious indeed how these things happen.
The wand chooses the wizard, remember…. I think we must expect great things from you, Mr. Potter…. After all, He-Who-Must-Not-Be-Named did great things — terrible, yes, but great.” — Ollivander, The Philosopher’s Stone Professor Snape’s first words to Harry Potter indicate something much more profound: “Potter! What would I get if I added powdered root of asphodel to an infusion of wormwood?” — Severus Snape, The Philosopher’s Stone The asphodel is a type of lily and can sometimes mean “my regrets follow you to the grave,” and wormwood can mean “absence” or “bitter sorrow.” It is likely that Snape, being a highly intelligent man who prefers to keep “the best of himself” hidden away, would have used this hidden language as a way to acknowledge Lily’s death to Harry. Readers — and Harry — do not realize until the end of the series that Snape had a connection to Lily and that her death deeply affected him. Scabbers, Ron’s rat, was negatively affected by the news of Sirius Black escaping Azkaban, subtly foreshadowing that he was somehow involved in it all: “Scabbers was looking thinner than usual, and there was a definite droop to his whiskers” — Harry Potter and the Prisoner of Azkaban He progressively got worse throughout the course of the book, highlighting his increasing anxiety as Sirius Black got closer and closer: “He’s skin and bone!” — Ron Weasley, The Prisoner of Azkaban Harry saw Aberforth Dumbledore in the Hog’s Head, without knowing who he was or that Professor Dumbledore had a brother: “He was a grumpy-looking old man with a great deal of long gray hair and beard.
He was tall and thin and looked vaguely familiar to Harry.” — Harry Potter and the Order of the Phoenix Harry encountered the locket Horcrux before realizing what it truly was, while helping clean out a cabinet in 12 Grimmauld Place: “…a heavy locket that none of them could open…” — Harry Potter and the Order of the Phoenix Similarly, Harry came across a tiara (diadem) in the Room of Requirement when he was hiding the Half-Blood Prince’s book, unaware that it was also a Horcrux: “He stuffed the Half-Blood Prince’s book behind the cage and slammed the door. He paused for a moment, his heart thumping horribly, gazing around at all the clutter. . . . Would he be able to find this spot again amidst all this junk? Seizing the chipped bust of an ugly old warlock from on top of a nearby crate, he stood it on top of the cupboard where the book was now hidden, perched a dusty old wig and a tarnished tiara on the statue’s head to make it more distinctive, then sprinted back through the alleyways of hidden junk as fast as he could go..” — Harry Potter and the Half-Blood Prince J.K. Rowling also planted some amusing “Easter eggs” in the books: Fred and George Weasley unintentionally threw snowballs at Voldemort’s face: “The lake froze solid and the Weasley twins were punished for bewitching several snowballs so that they followed Quirrell around, bouncing off the back of his turban.” — Harry Potter and The Philosopher’s stone DETAILS J.K. Rowling created a comprehensive list of magical and enchanted items, spells, potions, books, Quidditch teams and fouls,.. making the magical world feel not only real, but also unique; separate from the rest of us. 
Some examples of these are: The departments within the Ministry of Magic: the Department of Magical Law Enforcement (Auror Office, Misuse of Muggle Artifacts Office, Wizengamot,..), the Department of Magical Accidents and Catastrophes (Accidental Magic Reversal Squad, Obliviator Headquarters,..), the Department for the Regulation and Control of Magical Creatures (Beast Division, Spirit Division, Goblin Liaison Office, Pest Advisory Board,..), the Department of International Magical Cooperation (International Confederation of Wizards,..), the Department of Magical Transportation (Floo Network Authority, Portkey Office, Apparition Test Centre,..), the Department of Magical Games and Sports (British and Irish Quidditch League Headquarters,..), and the Department of Mysteries (Brain Room, Death Chamber, Time Room, Hall of Prophecy,..) The various broomstick models, each with different abilities (Bluebottle, the Cleansweep series, the Comet series, Firebolt, Moontrimmer, the Nimbus series, Oakshaft 79, Shooting Star, Silver Arrow, Thunderbolt VII, Tinderblast,..) Potions, such as Amortentia (a strong love potion), Beautification Potion, Draught of Living Death (makes the drinker appear to be dead), Elixir of Life (made from the Philosopher’s Stone), Felix Felicis (Liquid Luck), Pepperup Potion (for colds), Polyjuice Potion, Skele-Gro, Veritaserum (compels the drinker to honesty), Wolfsbane Potion (for Werewolves),.. Spells (charms, jinxes, healing, transfiguration, hexes, curses, and counter-spells), including everything from tickling to killing: Accio, Aguamenti, Alohomora, Bombarda, Confundo (confuses target), Diffindo, Engorgio, Episkey, Expecto Patronum, Expelliarmus, Immobulus, Legilimens (opens the target’s mind to the spellcaster), Lumos and Nox, Mimblewimble, Obliviate, Petrificus Totalus, Portus (creates a Portkey), Protego, Reparo, Rictusempra (tickling charm), Riddikulus, Scourgify, Sectumsempra, Wingardium Leviosa,..
Books written by witches and wizards: A Guide to Medieval Sorcery, The Invisible Book of Invisibility, Gilderoy Lockhart’s series (Break with a Banshee, Gadding with Ghouls, Magical Me…), One Thousand Magical Herbs and Fungi (by Phyllida Spore), Hogwarts: A History, A History of Magic (by Bathilda Bagshot), Moste Potente Potions, Fantastic Beasts and Where to Find Them (by Newt Scamander), The Monster Book of Monsters, From Egg to Inferno: A Dragon Keeper’s Guide, Unfogging the Future (by Cassandra Vablatsky), The Philosophy of the Mundane: Why the Muggles Prefer Not to Know (by Mordicus Egg), Quidditch Through the Ages (by Kennilworthy Whisp), The Life and Lies of Albus Dumbledore (by Rita Skeeter), The Tales of Beedle the Bard (containing children’s stories such as Babbitty Rabbitty and her Cackling Stump and The Tale of the Three Brothers), Charm Your Own Cheese…

Wandlore (although not a perfect science, the Ollivanders have studied it well since 382 B.C.), especially concerning the main wand cores (Dragon Heartstring, Unicorn Hair, Phoenix Feather) and woods: Aspen (performs great charm work and martial spells), Beech (for wise and open-minded wizards), Blackthorn (for warriors, whether their purposes are good or bad), Ebony (for authentic wizards who don’t mind not fitting in), Hazel (for emotionally intelligent wizards), Hornbeam (for wizards with an intense passion), Maple (for adventurous wizards), Pine (for solitary wizards), Rowan (performs great defensive spells), Walnut (for intelligent and innovative wizards)…

Rare and unique objects such as the Deluminator, Howlers, the flying Ford Anglia, the Marauder’s Map, Remembralls, the Weasley family clock, Ravenclaw’s Diadem, the Sword of Gryffindor, the Sorting Hat, the Hand of Glory, the Vanishing Cabinets, the Philosopher’s Stone, the Goblet of Fire, the Mirror of Erised, the Pensieve, Time-Turners, and the Deathly Hallows (the Elder Wand, the Cloak of Invisibility, the Resurrection Stone)…
Fantastic beasts, many inspired by Celtic mythology (Acromantula, Basilisk, Blast-Ended Skrewt, Bowtruckle, Cornish Pixie, Demiguise, Doxy, Dragon (Antipodean Opaleye, Chinese Fireball, Hungarian Horntail, Norwegian Ridgeback, Swedish Short-Snout…), Fire Crab, Flesh-Eating Slug, Flobberworm, Giant Squid, Gnome, Grindylow, Hippogriff, Imp, Jarvey, Kelpie, Kneazle, Niffler, Phoenix, Pygmy Puff, Red Cap, Three-Headed Dog, Troll, Unicorn, Wrackspurt…) and intelligent beings (Centaurs, Giants, Goblins, House-Elves, Merpeople, Veela, Werewolves…)

Joke and gag items, especially from Weasleys’ Wizard Wheezes: Canary Cream (temporarily turns you into a giant canary), Decoy Detonators, Dungbombs, Extendable Ears (for eavesdropping), Fanged Frisbee, Nose-Biting Teacup, Peruvian Instant Darkness Powder, Portable Swamp, Rubby O’ Chicken (a rubber chicken bewitched to do an Irish dance), Skiving Snackbox (Fainting Fancies, Fever Fudge, Nosebleed Nougat, Puking Pastille), U-No-Poo…

Sweets, especially from Honeydukes: Acid Pops, Bertie Bott’s Every Flavour Beans, Butterbeer, Cauldron Cakes, Chocolate Frogs, Cockroach Clusters, Drooble’s Best Blowing Gum, Fizzing Whizzbees, Fudge Flies, Jelly Slugs, Liquorice Wands, Pepper Imps, Pumpkin Pasties, Sugar Quills, Toothflossing Stringmints, Treacle Fudge…

What does this mean for your writing? In short: plan ahead. Consider your world’s unique culture, history, characters, creatures, items, and magic before plotting and writing (although some ideas can come to you while writing freely, too). This helps keep your worldbuilding consistent and minimize plot holes. Include a wide variety of morally grey characters with relatable struggles and challenges to overcome, showing their evolution throughout the course of your novel. Contemplate the messages and themes you want to shine through, hinting at them early on and building on them.
“Words, in my opinion, are our most inexhaustible source of magic.” — Albus Dumbledore

Put the work in to make your world feel as close to tangible as you possibly can: that is what many of us escapists are searching for. And, most importantly, have fun with the creation process. I will be forever grateful to J.K. Rowling and the Harry Potter series for opening my mind up to possibility and creativity, for shaping my childhood, and for inspiring me to write, too.
https://medium.com/fantasy-writing-school/what-makes-harry-potter-so-memorable-7485ad135194
['Jenna Mcrae']
2020-12-14 22:37:35.635000+00:00
['Worldbuilding', 'Harry Potter', 'Writing Tips', 'Fantasy', 'Writing']
Review: Python Crash Course
The book — Is Python Crash Course good?

Python Crash Course, published by No Starch Press, will make you familiar with Python in no time. While learning a language well takes months (well, years…), this book speeds up your learning process by providing a solid foundation in general programming concepts using Python.

Part I: Basics

Learning what variables are, simple data structures, if statements, user input, while loops, classes and functions, how to read and write files… everything that is basic in Python is covered in its own chapter. At the start of each one, we get a brief explanation of what the chapter is going to cover and what we are going to learn. And then we jump to the code. Not too much, just a few lines as an example. Then we go back to theory, only to return to coding in the next paragraph.

This book uses a style that I love: intertwining theory and coding so the reader learns and codes at the same time. No boring seven pages of theory before the first print. No coding without an explanation of what we are looking at. You learn while you code, and you code while you learn.

Then, exercises. Every few pages, a ‘Try it yourself’ section reinforces what you have learnt and pushes you out of your comfort zone by making you solve problems. I love books like this because they offer the perfect blend of learning, following along, and coding on your own. That makes it fun to read, and fun to learn Python. Make no mistake: learning is hard, but when it is fun, we push ourselves to learn even more. And this book succeeds in doing that.

Part II: Projects

After the first part on Python basics, the book moves into its second part: projects. No more time for theory: now we are building real things. A video game with Pygame, a website with Django, and a bit of data visualization with Matplotlib are the projects we build in the second part of the book. Going beyond the basics, creating real projects.
Creating data visualizations with Matplotlib

And real ones. Not toy problems such as “iterate over all the numbers from 0 to 100 and print only the ones that, divided by 5 or 3, leave no remainder”. No, we build real things. For example, I use Django, a Python-based web framework, at my job. In fact, it is my favourite framework. How many beginner books go that far?

So, to answer the question in the title: yes, Python Crash Course is a good book. A very good one.
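For contrast, the throwaway drill quoted above really is tiny — here is a sketch of it in a few lines of Python (my own illustration, not code from the book):

```python
# The toy drill: numbers from 0 to 100 that, divided by 3 or by 5,
# leave no remainder.
def divisible_by_3_or_5(limit=100):
    return [n for n in range(limit + 1) if n % 3 == 0 or n % 5 == 0]

print(divisible_by_3_or_5(20))  # → [0, 3, 5, 6, 9, 10, 12, 15, 18, 20]
```

A drill like this only exercises syntax; the book’s Pygame, Django, and Matplotlib projects exercise design, which is the reviewer’s point.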
https://medium.com/quick-code/review-python-crash-course-78e83b761509
[]
2019-09-15 07:06:36.412000+00:00
['Programming', 'Python', 'Coding', 'Crash Course', 'Software Development']
The Newest and Greatest Marketing Move by KFC
The Newest and Greatest Marketing Move by KFC

The Lifetime Original Movie brought to you by KFC

Photo: Lifetime

From ‘Cyber Seduction: His Secret Life’ to ‘Newlywed and Dead’, Lifetime Original Movies have made themselves famous, and a lot of money, by being much more memorable than they are high quality. After years of spending the bare minimum to provide us with the most laugh-out-loud titles imaginable, Lifetime has a new movie for us that is breaking the internet in half. Backed by a licensing deal with KFC, Lifetime has made a horny romance flick starring Mario Lopez as sexy Colonel Sanders. In the movie, the Colonel is called Harland, and he’s the brand new chef for the beautiful and seductive Jessica. But oh no! Jessica is already engaged to be married to someone else! Will she choose the man her mother arranged for her? Or will Jessica side with the man who professes to have invented the perfect combination of 11 herbs and spices? The film is called ‘A Recipe for Seduction’ and will air on Lifetime on December 13th.

Crazy? Or Genius?

The film only runs for 15 minutes, and after airing will be available to stream on “all Lifetime apps,” whatever those are. A lot of netizens are taking their doubts to Twitter, confident that this is nothing more than a spectacular trailer and that no movie has actually been filmed. They’re sure that it’s a prank, but why would it be? There’s nothing more valuable to a business than being talked about, and even after Apple announced a new pair of overpriced earphones, all anyone is talking about is the sexy new KFC movie. It makes perfect sense that Lifetime would spend KFC’s money to make a zero-risk movie that tries to capture attention at the end of the worst year on record. At what better time could this movie have emerged? Lifetime caught us at the exact moment when we most desperately needed to be distracted, and judging by the trailer, they only spent a few hundred bucks to do it.
Lifetime has described this tiny movie as its first foray into branded mid-form content, which suggests that more could be on the horizon. Either KFC is going to receive a lot of positive attention from this endeavour and sponsor a sequel, or McDonald’s will get in on the action and commission “Sesame Seeds of Desire” for release in 2021.

Screenshot from ‘I Love You Colonel Sanders’ developed by Psyop

This Isn’t the First Time

Last year, KFC commissioned the creation of the video game “I Love You, Colonel Sanders! A Finger Lickin’ Good Dating Simulator”, developed by Psyop and released for free on Steam. Eager players are thrown into an anime world where sexy Colonel Sanders is daddy and teaches at the “University of Cooking School: Academy for Learning.” Over the three days you spend at the university, your goal is to attract the attention of sexy Colonel Sanders and win him over. You have a best friend, a rival, and even a robot. While critics panned the game for its lack of emotional depth, players gave it a 100% user approval rating for its gorgeous art and fun concept. Many, however, have seen it for the marketing gimmick it is.

Marketing

Ultimately, both ‘A Recipe for Seduction’ and ‘I Love You Colonel Sanders’ are clever creations of the KFC marketing team to ensure their brand is being discussed positively by the world at large. Twitter falling to pieces over this movie was their goal, so KFC executives must be thrilled right now. Hell, someone somewhere is probably cutting everyone in the KFC marketing department a giant Christmas bonus right now to celebrate the occasion. Marketing is getting more and more cut-throat because of how saturated the internet has become. YouTube is showing back-to-back unskippable ads, and even the New York Times is slipping native advertising in with the regular articles. The more we’re all marketed to, the more desensitised we become to it.
The only way for advertising to really resonate with us anymore is when we choose to consume it. This new movie will be us choosing to watch an ad, and many of us will willingly make that choice. As time goes on, companies will find new and creative ways to coax us into choosing our advertising. KFC has us for now, but within 15 minutes the internet will move on, and they’ll need something else to get our attention back. Because they know that without all the window dressing, their product is fried chicken: an unhealthy, meat-based meal built on cruelty to animals and on underpaid, under-represented staff. When your product has this many issues, you need to get creative when developing a new narrative for global conversation. So for now, let’s all tune in and watch Mario Lopez play sexy Colonel Sanders as he navigates a sexy love triangle on December 13th. As far as we know, it’s not a practical joke; it’s a real tiny movie, and it’s coming our way whether we’re ready or not.
https://medium.com/money-clip/the-newest-and-greatest-marketing-move-by-kfc-dab54c97e7c5
['Jordan Fraser']
2020-12-09 06:09:38.261000+00:00
['Money', 'Marketing', 'Advertising', 'Finance', 'Entertainment']
Digital Expectations Report From Razorfish
If you haven’t checked out the new report by Razorfish, DIGITAL DOPAMINE: 2015 GLOBAL DIGITAL MARKETING REPORT, you may want to do so sooner rather than later. And I’m not just saying that because I’m in it! (The report contains a one-page interview I did with one of their staff — page 29.) It’s an interesting report based on a survey of 1,600 Millennials and Gen Xers from the US, UK, Brazil, and China, as well as some in-depth interviews. Here are some of my favorite data points:

“56% of U.S. Millennials say their phone is their most valuable shopping tool in-store compared to just 28% of U.S. Gen Xers.”

“59% of U.S. Millennials use their device to check prices while shopping compared to 41% of U.S. Gen Xers.”

“Advertising is most effective when it is part of a value exchange. Consumers are now aware of how much their attention is worth to marketers, and they expect to be rewarded for it. They look to be compensated with loyalty programs, free content or useful tools that solve problems.”

“Over half of consumers in the U.S. and U.K. and 69% of consumers in China say they do anything they can to avoid seeing ads. What’s more, they’re actively availing themselves of technology to do so, with a majority of TV lovers using a DVR to skip through ads (U.S. — 65%, U.K. — 73%, China — 81%).”

Brazil is the outlier on this one: “Fifty-seven percent of Brazilian consumers endorse TV, radio and print ads as most influential.”

My favorite point is this one: “Seventy-six percent of people in the U.S., 72% in the U.K. and 73% in Brazil say they are more excited when their online purchases arrive in the mail than when they buy things in store.” I have heard the same comments in my behavioral science research. And the reason has to do with the anticipatory centers of the brain.
I wrote about this recently in my report “Why You Should Do Behavioral Science Research At Least Once This Year”. The Razorfish report is comprehensive. I think it’s worth reading if you design or produce digital products, marketing or advertising. And don’t forget to check out page 29! What do you think? Does any of this data surprise you?
https://medium.com/theteamw/digital-expectations-report-from-razorfish-2408b6d78231
['The Team W']
2016-09-21 22:13:26.894000+00:00
['Psychology', 'Research', 'Gen Exers', 'Generational Differences']
The Art of Breaking a Habit
The Art of Breaking a Habit

Life-Altering Tips To Lose The Bad Habit

Photo by Taylor Young on Unsplash

I’ll be the first person to tell you that I have cultivated a long list of horrendous habits over the course of my life: from smoking, to disordered eating, to lacking discipline. I’ve failed to protect myself and my future from my own destructive habits, just like most other people.

To Break a Habit, We Must First Understand How Habits Are Made

There are a couple of theories on how habits are formed, but personally I believe in the idea of “The 3 R’s”:

Reminder: A trigger or feeling that results in…

Routine: The behavioral result of the reminder. The reminder could be leaving the room, and the routine could be turning the lights off… or it could be something far less beneficial to your utility bill.

Reward: Whatever is associated with the routine that reinforces the “goodness” of the habit. For example, people often smoke to relieve stress; the alleviation of stress is the reward associated with the routine of smoking.

So now that we know how habits are formed, let’s talk about how to break them.

What Triggers Your Habit?

Identify what prompts the urge to act on the habit you want to break. Where do you frequently act on the habit? At what time? What feelings trigger the action? Does it happen right after a certain event or action?

How Can Breaking This Habit Help You?

Research suggests that the change you want to make will be easier if you believe that the change is beneficial to you and your wellbeing. Examine why you want to break this habit and the benefits that would come from breaking it.

Can Practicing Mindfulness Help Break The Habit?
According to the Mayo Clinic, mindfulness is a form of meditation in which you focus on being intensely aware of what you’re sensing and feeling in the moment, without interpretation or judgement. By becoming more aware of your routine and the triggers that cause your habits, you may find it easier to break that habit.

Remember You Will Fail… And That’s Okay

Breaking a habit is not a straight line to success. Change is hard, but don’t let a slip-up slow down your progress any more than it has to. Learn from your mistakes and visualize the life you imagine for yourself. Focusing on your goal will motivate you to pick yourself back up. Focus on your successes instead of the lack thereof. It doesn’t have to be all-or-nothing.

Take Care of Yourself

Sometimes we form a certain habit because we are lacking a healthier option in its place. Maybe you don’t get enough sleep, so you take a late-night smoke break? Pay attention to what your body needs, and let those needs be your focus instead of the habit. Get enough sleep. Eat meals that fill you up and are good for you. Take time for yourself. Implement physical activity into your day.

Reward Yourself For Your Progress

Breaking a habit is hard work. Don’t forget to focus on the progress you’ve made and to treat yourself because of it. You deserve to celebrate your successes.
https://medium.com/age-of-awareness/the-art-of-breaking-a-habit-5e1e178d8517
['Emma Comeaux']
2020-12-02 03:47:19.861000+00:00
['Life Lessons', 'Education', 'Motivation', 'Advice', 'Healing']
History of the Machine Learning Department at Carnegie Mellon
History of the Machine Learning Department at Carnegie Mellon

How the world’s first academic machine learning department was founded

Tom M. Mitchell, Machine Learning Department Head 1997–1999 and 2002–2015. Source: CMU.

The Machine Learning Department at Carnegie Mellon University was founded in the spring of 2006 as the world’s first machine learning academic department. It evolved from an earlier organization called the Center for Automated Learning and Discovery (CALD), created in 1997. CALD was designed to bring together an interdisciplinary group of researchers with a shared interest in statistics and machine learning. The first collection of CALD faculty participants were primarily from the Statistics Department and departments within the School of Computer Science, but also included faculty from philosophy, engineering, the business school, and biological science. Statistics Professor Stephen Fienberg and Computer Science Professor Tom Mitchell were the primary faculty involved in creating CALD.

In 1999, CALD began its first educational program, a Master’s degree in “Knowledge Discovery and Data Mining.” In 2002, we launched our Ph.D. program in “Computational and Statistical Learning,” and simultaneously converted the Master’s degree program into a secondary Master’s program, available only to CMU Ph.D. students. Once CALD began to offer educational programs, it also began hiring its own faculty.

Geoff Gordon, Interim Department Head, Machine Learning Department, 2016. Source: Microsoft.

By spring of 2006, we petitioned the university to change CALD into the Machine Learning Department. In creating this academic department in 2006, Carnegie Mellon University signaled both its belief that the discipline of machine learning forms a field of enduring academic importance, and its intention to be a leader in helping to shape this rapidly developing field.
The department’s research strategy is to maintain a balance between research into the core statistical-computational theory of machine learning, and research inventing new algorithms and new problem formulations relevant to practical applications.

Manuela Veloso, Department Head, Machine Learning Department 2016–2018. Source: CMU.

The Ph.D. and Master’s programs in Machine Learning were among the first degree programs in the world to offer specialized training in machine learning. The department offers a Ph.D. program in Machine Learning, and joint Ph.D. programs in Statistics and Machine Learning, Machine Learning and Public Policy, and Neural Computation and Machine Learning. We also offer an undergraduate minor in Machine Learning and primary and secondary Master’s degrees in Machine Learning.

Roni Rosenfeld, Department Head, Machine Learning Department 2018–Present. Source: CMU.

Our mission is to help lead the development of the discipline of machine learning by performing leading research in this field, by developing and propagating a model academic curriculum for the field, and by helping society benefit from the knowledge gained by the field.
https://medium.com/towards-artificial-intelligence/history-of-the-machine-learning-department-at-carnegie-mellon-1998e0ea6c37
['Roberto Iriondo']
2020-10-27 14:02:41.649000+00:00
['Machine Learning', 'Artificial Intelligence', 'Academia', 'Research', 'University']
10 Jokes to Brighten Up Your Lockdown
Things are very serious in the world right now. While I don’t want to play that down at all, a little bit of laughter can go a long way when we all need to keep our spirits up. These are my top ten favorite jokes; I hope they get a giggle out of you. Be warned, strong language ahead.

Teacher: Give me an example of a sentence with the word contagious in it. Pupil: Our neighbor is painting his fence with a two-inch brush and my dad says it will take the contagious.

Three conspiracy theorists walk into a bar. Now, that can’t be a coincidence.

My English teacher told me my grammar was awful. I said, yeah? Well, your grandad’s a bastard.

I went to my doctor and said, “Doctor, please help me. I keep farting. They’re silent and odor free, which is a great relief, but I’m so embarrassed and on edge all the time. Please can you help me?” The doctor said, “I’m going to recommend a hearing test and a referral to help with your sense of smell.”

You can’t run on a campsite. You can only ran, because it’s past tents.

My grandfather has the heart of a lion… and a lifetime ban from the zoo.

Did you hear about the bank that’s rated worst bank in the world? They’re called Norfolk & Gould.

Jesus goes into a fancy restaurant and asks for a table for 26. “But there are only 13 people in your party,” the waiter said. “Yes, but we’re all gonna sit round the same side.”

What did the pirate say when he turned 80? Aye matey.

What’s the difference between a tea bag and the England team? The tea bag stays in the cup longer.
https://medium.com/bettertoday/10-jokes-to-brighten-up-your-quarantine-98fe2cab53f0
['Stef Hill']
2020-05-17 22:38:28.115000+00:00
['Coronavirus', 'Pandemic', 'Quarantine', 'Jokes', 'Funny']
Average Image Generator
The Average Image Color Generator is a free online tool that calculates the average color of an image in RGB and HEX format. You can upload your own images or use the tool with the preset images. Besides the average color, you will also get the opposite color.

Average Image Generator

Never again struggle to find the perfect color to start your design. By finding the average color of an image, which can be used to build the perfect palette for mobile or web apps, you will be off to a great start.
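The tool’s implementation isn’t published, but the underlying idea is simple: the average color is the per-channel mean of all pixels, the HEX string encodes those three channels, and an “opposite” color can be taken as the RGB complement (each channel reflected around 255). A minimal Python sketch of that idea, using hypothetical helper names and a plain list of RGB tuples in place of an uploaded image:

```python
# Sketch of what an average-color tool computes: mean RGB, its HEX
# string, and the RGB complement ("opposite" color).

def average_color(pixels):
    """pixels: list of (r, g, b) tuples. Returns the per-channel mean."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)

def to_hex(rgb):
    """Encode an RGB tuple as a #rrggbb string."""
    return "#{:02x}{:02x}{:02x}".format(*rgb)

def opposite(rgb):
    """The RGB complement: each channel reflected around 255."""
    return tuple(255 - c for c in rgb)

avg = average_color([(255, 0, 0), (0, 0, 255)])  # pure red + pure blue
print(to_hex(avg), to_hex(opposite(avg)))  # → #7f007f #80ff80
```

For a real image you would feed in the actual pixel data (for example, `list(img.getdata())` from a Pillow image) instead of a hand-written list.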
https://medium.com/pixelsmarket/average-image-generator-f9547db4ed1c
['Alex Ionescu']
2020-04-08 20:06:04.766000+00:00
['Colors', 'Web Design', 'Image Processing', 'Online App', 'Graphic Design']
Adding Space Between the Cells of a UITableView
Photo by Tim Hüfner on Unsplash

Exordium

Thank you for stopping by the Rusty Nail Software dev blog! In this post, I will show you how to create a UITableView that has spacing between its cells. I came across this design while recently working on a client’s application. I wanted to get away from the basic table view look and use something tailored more toward the UI/UX of the client’s app. For those who work as freelance software developers, versatility in design is essential, because not every client wants the same thing. To accomplish a clean table view design with spacing between the cells, you must work with sections rather than rows. Let’s get into how to create a custom UITableView with spacing and a clean design. This post assumes that you have previous experience with Swift, Xcode, and UITableViews. You can find the source code for this post on GitHub. Be sure to start with the ‘start’ branch of the repo.

What Is Already Set Up

If you open the project and run it on a device or simulator, you will see the current UI layout. I have set up a basic restaurant review application, all done programmatically, and we’ll be working with the fictional burger shack of Rusty Nail. The first view controller you see is the RestaurantReviewsViewController class. There are two classes in the View group: one for a custom UITableViewCell and one for a custom UIView. In the current state of the app, you can see the UIView in action, as this is the orange circle that holds the restaurant logo. Back to the RestaurantReviewsViewController. First, I’ve stored a custom struct called Review that holds three properties: the writer of the review, the restaurant receiving the review, and the review’s text. There is also a method named setupView that sets up the UI elements of the view, and a method named applyAutoConstraints which activates the constraints for said UI elements.
Furthermore, in the viewDidLoad method, the view’s background color is set, and the setupView method gets called. Lastly, you will see a section marked Extensions and a method that makes a UIView circular. Now that we’ve gone through the current code of the main view controller, let’s build that table view. If you want to take a look at the other code provided in the project, take your time and get to know how it all works together. Briefly, the RestaurantReviewCell is a custom table view cell that contains two labels: one for the name of the person reviewing the restaurant and one for the review.

Configuring the Table View

To start, initialize the data array with some reviews. In this case, I’ve used characters from the show Bob’s Burgers. The data is initialized in the setupView method.

data = [
    Review(writer: "Calvin", receiver: "Rusty Nail Burgers", reviewText: "Okay burgers. Okay tenant. Would like to see them pay rent on time."),
    Review(writer: "Felix", receiver: "Rusty Nail Burgers", reviewText: "Though my brother isn't a fan of Bob's cooking, I can't find a better burger on the wharf."),
    Review(writer: "Teddy", receiver: "Rusty Nail Burgers", reviewText: "I'm here everyday. Couldn't ask for a better hangout spot."),
    Review(writer: "Mickey", receiver: "Rusty Nail Burgers", reviewText: "Hey Bob! Thanks for feeding me during that heist. I will definitely be back to visit."),
    Review(writer: "Marshmellow", receiver: "Rusty Nail Burgers", reviewText: "Hey Bob."),
    Review(writer: "Rudy", receiver: "Rusty Nail Burgers", reviewText: "My dad drops me off here every once in a while. I like to sit in the back corner and enjoy my food."),
]

Next, set the delegate and dataSource of the reviewTableView. I will conform to the UITableViewDelegate and UITableViewDataSource protocols in the section marked Extensions — though you can do this directly off the RestaurantReviewsViewController class.
Add the following to the setupView method after reviewTableView is initialized:

reviewTableView.delegate = self
reviewTableView.dataSource = self

The setupView method should now look like this:

func setupView() {
    data = [
        Review(writer: "Calvin", receiver: "Rusty Nail Burgers", reviewText: "Okay burgers. Okay tenant. Would like to see them pay rent on time."),
        Review(writer: "Felix", receiver: "Rusty Nail Burgers", reviewText: "Though my brother isn't a fan of Bob's cooking, I can't find a better burger on the wharf."),
        Review(writer: "Teddy", receiver: "Rusty Nail Burgers", reviewText: "I'm here everyday. Couldn't ask for a better hangout spot."),
        Review(writer: "Mickey", receiver: "Rusty Nail Burgers", reviewText: "Hey Bob! Thanks for feeding me during that heist. I will definitely be back to visit."),
        Review(writer: "Marshmellow", receiver: "Rusty Nail Burgers", reviewText: "Hey Bob."),
        Review(writer: "Rudy", receiver: "Rusty Nail Burgers", reviewText: "My dad drops me off here every once in a while. I like to sit in the back corner and enjoy my food."),
    ]

    restaurantImageViewBackground = RestaurantImageViewBackground()
    view.addSubview(restaurantImageViewBackground)

    restaurantImageView = UIImageView(image: UIImage(named: "cheese-burger"))
    restaurantImageView.translatesAutoresizingMaskIntoConstraints = false
    restaurantImageViewBackground.addSubview(restaurantImageView)

    restaurantNameLabel = UILabel()
    restaurantNameLabel.translatesAutoresizingMaskIntoConstraints = false
    restaurantNameLabel.text = "Rusty Nail Burgers"
    restaurantNameLabel.font = UIFont.boldSystemFont(ofSize: 28)
    restaurantNameLabel.textColor = .black
    view.addSubview(restaurantNameLabel)

    starStackView = UIStackView()
    starStackView.translatesAutoresizingMaskIntoConstraints = false
    starStackView.alignment = .center
    starStackView.axis = .horizontal
    starStackView.spacing = 5

    for _ in 0...5 {
        let starView = UIImageView(image: UIImage(systemName: "star.fill"))
        starView.frame = CGRect(x: 0, y: 0, width: 40, height: 40)
        starView.tintColor = .black
        starStackView.addArrangedSubview(starView)
    }
    view.addSubview(starStackView)

    reviewTableView = UITableView()
    reviewTableView.translatesAutoresizingMaskIntoConstraints = false
    reviewTableView.delegate = self
    reviewTableView.dataSource = self
    view.addSubview(reviewTableView)

    applyAutoConstraints()
}

The UITableViewDataSource is set up like any other table view. We return one row in each section and configure the cells based on our custom RestaurantReviewCell class. Before setting up the cell, register the UITableViewCell class used with the table view. Add the following to the UITableView setup:

reviewTableView.register(RestaurantReviewCell.self, forCellReuseIdentifier: "reviewCell")

To set up the cell, add the following to the cellForRowAt method:

func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
    // 1
    let cell = tableView.dequeueReusableCell(withIdentifier: "reviewCell", for: indexPath) as!
RestaurantReviewCell // 2 if let writer = data[indexPath.section].writer { cell.reviewerLabel.text = “\(writer) said:” } if let reviewText = data[indexPath.section].reviewText { cell.reviewLabel.text = “\(reviewText)” } // 3 return cell } Here’s what that does: 1. Dequeue the cell, using the `reviewCell` identifier and the `RestaurantReviewCell` class. 2. Pull the data, based on the section being displayed. First, unwrap the value of the `writer` property of our data source (remember that this is of the type `Review`). Then, unwrap the value of `reviewText`. 3. Return the cell. UI Work with the Table View Next, in the `UITableViewDelegate`, set the number of sections of the table view to the `count` of the array `data` to create the number of sections needed to display the correct amount of data. func numberOfSections(in tableView: UITableView) -> Int { return data.count } Set the height of the rows to 140 (this can be changed according to your use case). func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat { return 140 } Now, to add the spacing to the cells, we must create a view for each section’s header. Call the `viewForHeaderInSection` method, and add the following: func tableView(_ tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? { // 1 let headerView = UIView() // 2 headerView.backgroundColor = view.backgroundColor // 3 return headerView } Here is the breakdown of what was just added: 1. Create the header view as a `UIView`. 2. Set the background color of the header to the view controller’s `view.backgroundColor` property. (This creates a design effect that makes the header look transparent). 3. Return the `headerView` to set the header for each section of the table view. This produces a basic spacing between the cells. 
The height of the header can be set as follows:

func tableView(_ tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat {
    return 20
}

The UITableViewDelegate and UITableViewDataSource should look like this:

extension RestaurantReviewsViewController: UITableViewDelegate {
    func numberOfSections(in tableView: UITableView) -> Int {
        return data.count
    }

    func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat {
        return 140
    }

    func tableView(_ tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? {
        let headerView = UIView()
        headerView.backgroundColor = view.backgroundColor
        return headerView
    }

    func tableView(_ tableView: UITableView, heightForHeaderInSection section: Int) -> CGFloat {
        return 20
    }
}

extension RestaurantReviewsViewController: UITableViewDataSource {
    func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 1
    }

    func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "reviewCell", for: indexPath) as! RestaurantReviewCell
        if let writer = data[indexPath.section].writer {
            cell.reviewerLabel.text = "\(writer) said:"
        }
        if let reviewText = data[indexPath.section].reviewText {
            cell.reviewLabel.text = "\(reviewText)"
        }
        return cell
    }
}

Finishing Up

Earlier I mentioned a 'clean' design. Right now, each cell looks like a basic UITableViewCell: a white rectangle. To add to the 'cleanliness' of the design, change the leadingAnchor and trailingAnchor constraints to inset the table view 20 points from each side of the view:

reviewTableView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 20),
reviewTableView.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -20),

The cells are still basic white rectangles, even though the RestaurantReviewCell has its layer.cornerRadius set to 20. This is because the table view's background color is white.
To fix this, remove the table view's background color by setting its backgroundColor property to .clear:

reviewTableView.backgroundColor = .clear

Finally, since this is a basic list, tapping the cells does nothing and doesn't add to the user experience. To disable selection on the table view, set the allowsSelection property to false:

reviewTableView.allowsSelection = false

Closing Notes

Thanks for stopping by Rusty Nail Software and reading one of our blog posts! If you'd like to get in touch with me, you can find me on Twitter and LinkedIn. If you and your team are looking for a dependable iOS developer, let's talk about your project. Feel free to get in touch on social media, or via email. You can also read more of my writing on my blog.
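One piece this tutorial references but never shows is the RestaurantReviewCell class itself. For readers following along, here is a minimal, hypothetical sketch of what it might look like. Only the reviewerLabel and reviewLabel properties and the layer.cornerRadius of 20 come from the article; every other name, font, and layout constant below is an assumption for illustration, not the author's actual implementation.

```swift
import UIKit

// Hypothetical sketch: reviewerLabel, reviewLabel, and cornerRadius = 20
// are taken from the article; fonts and insets are assumed.
class RestaurantReviewCell: UITableViewCell {
    let reviewerLabel = UILabel()
    let reviewLabel = UILabel()

    override init(style: UITableViewCell.CellStyle, reuseIdentifier: String?) {
        super.init(style: style, reuseIdentifier: reuseIdentifier)

        // Rounded corners, as mentioned in the article.
        layer.cornerRadius = 20
        layer.masksToBounds = true

        reviewerLabel.font = UIFont.boldSystemFont(ofSize: 16) // assumed
        reviewLabel.font = UIFont.systemFont(ofSize: 14)       // assumed
        reviewLabel.numberOfLines = 0

        for label in [reviewerLabel, reviewLabel] {
            label.translatesAutoresizingMaskIntoConstraints = false
            contentView.addSubview(label)
        }

        NSLayoutConstraint.activate([
            reviewerLabel.topAnchor.constraint(equalTo: contentView.topAnchor, constant: 12),
            reviewerLabel.leadingAnchor.constraint(equalTo: contentView.leadingAnchor, constant: 16),
            reviewerLabel.trailingAnchor.constraint(equalTo: contentView.trailingAnchor, constant: -16),
            reviewLabel.topAnchor.constraint(equalTo: reviewerLabel.bottomAnchor, constant: 8),
            reviewLabel.leadingAnchor.constraint(equalTo: reviewerLabel.leadingAnchor),
            reviewLabel.trailingAnchor.constraint(equalTo: reviewerLabel.trailingAnchor),
            reviewLabel.bottomAnchor.constraint(lessThanOrEqualTo: contentView.bottomAnchor, constant: -12)
        ])
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

Because the cell class is registered with reviewTableView.register(RestaurantReviewCell.self, forCellReuseIdentifier: "reviewCell"), this initializer runs automatically whenever a cell is dequeued.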
https://andrewlundydev.medium.com/adding-space-between-the-cells-of-a-uitableview-590a0cfd2e22
['Andrew Lundy']
2020-10-19 06:36:26.873000+00:00
['iOS App Development', 'iOS', 'Swift Programming', 'App Development', 'Software Engineering']
IBM Watson’s New AI Tool Aims to Help Businesses Identify Client Needs
During the debut of "That's Debatable" on October 9, IBM Watson presented its latest advancement in NLP from IBM Research. The show is a limited series hosted by John Donvan, presented by Bloomberg Media and Intelligence Squared US, and sponsored by IBM. It features economists, intellectuals, industry leaders, and policy-makers who debate today's most pressing issues. The debut episode convened a vibrant debate that included former US Labor Secretary Robert Reich and former Greek Finance Minister Yanis Varoufakis, both arguing for the motion that "It's Time to Redistribute the Wealth." Manhattan Institute Senior Fellow Allison Schrager and former US Treasury Secretary Lawrence Summers argued against it. To determine who had won, the show's virtual audience was polled before and after the debate. Prior to the start, 57% of people polled were for the motion, 20% against, and 23% undecided. After the debate, the numbers had changed to 59% for and 37% against. With an increase of 17 percentage points for their side, Schrager and Summers were declared the winners. The use of Key Point Analysis in "That's Debatable" generated new insights based on the analysis of public submissions, identifying over 20 key points in 1,600 selected arguments.

Key Point Analysis

The Technology

Key Point Analysis is a novel advancement in NLP developed by IBM Research. This next-generation NLP-based extractive summarization evolved from the earlier IBM Project Debater, an artificial intelligence (AI) system that can debate humans on complex topics. The new technology was used during the show to determine the main points that mattered most to the public, based on the more than 3,500 submissions made online before the debate. To do so, it utilized four steps. First, the system classified arguments using a deep neural network to determine if the content of the submissions was for or against the motion. Irrelevant and neutral comments were removed.
The quality of each argument was then evaluated, identifying potential key points by grading the arguments and keeping those of the highest quality. Points that were too long, emotional in tone, or incoherent were disregarded. Finally, matching arguments were identified for each potential key point, and a prevalence was computed for each. A small subset of the strongest arguments was used to create narratives arguing the pros and cons of the debate.

Findings

From the total submissions, 1,600 arguments and 20 key points were identified. About 56% of the arguments analyzed favored the motion to redistribute wealth, while about 20% affirmed there was too much global wealth inequality. One of the arguments identified was that income inequality had dramatically increased over the past few decades and had caused suffering to many people, a tendency that would continue if wealth was not redistributed. The remaining 44% of arguments were against redistributing wealth. Of them, 15% argued this could discourage people from working hard, with examples cited about negative effects on entrepreneurship, individual initiative, and accountability for choices.

About IBM Research

IBM Research has been propelling innovation at IBM for 75 years. With more than 3,000 researchers distributed across more than a dozen locations around the globe, its mission is to positively impact business and society through science.

The Future of Key Point Analysis

Key Point Analysis is designed to empower businesses to employ AI for greater accuracy and efficiency. This can result in less data consumption and human oversight while giving companies a clearer view of relevant points and considerations so they can make data-driven operational decisions such as adjusting prices, evolving products, creating new marketing campaigns, and optimizing inventory.
The use of Key Point Analysis during “That’s Debatable” demonstrates that IBM is advancing Watson’s ability to understand the language of business — one that usually presents its own vernacular and is constantly evolving in response to new innovations, world events, and consumer expectations. IBM has plans to commercialize Key Point Analysis as part of Watson NLP products. This could include commercializing cutting-edge capabilities from Project Debater, helping states deliver critical voting information to citizens, and even transforming the fan experience at the US Open. For now, you can see it in action during the next episode of the show, which will cover whether “A US-China Space Race Is Good for Humanity.”
https://yisela.medium.com/ibm-watsons-new-ai-tool-aims-to-help-businesses-identify-client-needs-6993771a6c61
['Yisela Alvarez Trentini']
2020-10-30 15:54:13.458000+00:00
['IBM', 'AI', 'Technology', 'Business', 'Watson']
The Portal
After a time, Shirley opened one eye and could see Freddie’s Avengers backpack. She became transfixed by a plume of steam that was rising from it. Well, it looks like we’re alive, she thought. But where are we? She sat up to take a look around. She could see they were in another kitchen, by another washing machine. Did we come through one washing machine into another? she wondered. There was a calendar visible on the fridge opposite her. She stared at it for a while before realising why she found it so disturbing. The year showing on the calendar was 2025. They had travelled six years into the future. Freddie had been right. While she was still ruminating over that discovery, Shirley heard a noise from elsewhere in the house. “What on earth was that racket?!” a voice shouted. “I think we ought to go and investigate, Rosie — come on!” Freddie was stirred by this and sat up. “Mum!” he whispered, urgently, “we can’t be discovered! The ramifications to this timeline could be disastrous!” Shirley’s experience in time travel was highly limited, she had to admit, but even she knew that it might be a bad thing for them to be found out. “What do we do?” she whispered, conspiratorially, to her apparent partner in crime. “Er…leave it to me,” he said, as he stood up and marched bravely out of the kitchen, to confront the owner of the disembodied voice. Once again, Shirley was left amazed at her son’s bravery and resourcefulness. “Hello!” she heard him say to their potential captors, “We appear to be lost and had to spend the night in your house — I do hope that’s okay?” “Well…good morning young man!” was the kind response. “That’s fine by me, of course, but what can you tell me about the commotion I just heard? You might have noticed it — it sounded like a truck drove through the house?” There was a brief silence before Freddie spoke up again. “Ah, yes…that,” was his uncertain reply. 
Shirley was sure she could hear her son’s mind working to come up with a suitable answer to that question. “I’m slightly embarrassed to say,” he began, “that I got scared in the night and so I climbed into your washing machine but managed to get myself locked into it in the process. That commotion you heard was a mixture of me freaking out inside the machine and Mum trying to get me out of it, in a bit of a panic.” Shirley was impressed. So, it seemed, was their unwitting host. “Ah, I see. Well, no problem then. At least you managed to get out safely before the machine washed and dried you. Did you say your mum was here? Do you think I could say a quick hello?” “Yes, of course,” she heard Freddie reply and, before she had a chance to even get off the floor, a beautiful chocolate brown Labrador came bounding around the corner and ran straight into her arms for a hug. What a lovely dog! she thought. Is this Rosie, I wonder? “Ah, I see she likes you,” the man was saying, “she’s usually quite shy and retiring around new people, she must sense a friendly…” The man froze mid-sentence as he came around the corner himself, into the kitchen. As Shirley looked up at him, whilst continuing to fuss his dog, she saw he was staring open-mouthed at her. “Hello?” she said, nervously. “Sorry to have intruded.” “Shirley?” he said, almost inaudibly. Now it was her turn to freeze at the sound of her own name. She rose slowly from her spot on the kitchen floor with the dog and approached the man. The sunlight streaming in the large window over the sink had obscured her view of his face somewhat but now she could see him clearly. She almost fainted when she realised that she recognised those strong, handsome features. “Phil?” she said, his name coming out of her mouth in a strangled whisper. “Oh my god…Shirley…how are you here?” He came to her and swept her up in his arms. She threw herself around him and gripped him tightly. It felt so good to be held by him again. 
After a while, she glanced down at Freddie, who looked utterly perplexed at the scene before him. “Freddie,” she said, at last, “this is your Dad!” Phil had tears in his eyes as he released Shirley sufficiently to be able to look at his son. “Freddie…is that really you? You’ve grown so much!” “Dad? But…how are you here?” Freddie’s eyes were wide with amazement. “I live here, Son, same as I always did. It was the two of you who left me here on my own. Well, at least until I got Rosie here.” “No, no,” replied Shirley, shaking her head, “you left us! We thought you were killed in action on that second tour in Afghanistan!” “No, not at all! I came back from that tour to the news that the two of you had both died in an horrific car crash!” After a little while, looking thoroughly confused, the freshly reunited couple turned slowly to look at Freddie. He was smiling. He knew exactly what had happened. “It’s the portal!” he said, as he ran to them both. “It’s brought us all back together by creating a wormhole to connect our two time streams! In this time stream, we died. In ours, you died.” Freddie flung his arms around his parents and buried his face into their bodies. “Portal? You used a portal to cross from your time stream to mine? Did you say, the washing machine?” asked Phil, still not quite grasping what was going on. Suddenly, Freddie flung himself away from them. “Dad, you can come back through the portal with us!” “I can?” “Sure! Like Mum said, you’re only missing in action in our timeline…remember? In this timeline, we died. In our timeline, you’re only missing. I mean, we buried you, of course, but we didn’t find a body! So…you could just…find us again, couldn’t you?” There was a pause while Phil considered Freddie’s proposal. “Well…I think you might be right, son. The Top Brass are going to want a hell of a debrief. I might have to tell a few white lies. But…why not? I have nothing much here really, other than Rosie. 
I’d rather be with you both than be alone here. I mean, would that be okay with you Shirley? Would it be alright if Rosie came with me?” “That would be just perfect,” said Shirley, beaming, as she reached for the dog again. “Yes! Yes! That would be awesome!” said Freddie, unable to contain his excitement. Without saying anything further, the three of them, with Rosie by their side, turned to face the washing machine, as Freddie set it for the spin cycle. And then they went back home.
https://medium.com/lit-up/the-portal-918079f7fc55
['Derrick Cameron']
2019-07-01 20:45:44.762000+00:00
['Time Travel', 'Short Story', 'Science Fiction', 'Fiction', 'Writing']
The Parents Are Not All Right
The Parents Are Not All Right Even in the most privileged households, the pandemic is exposing the farce of how society treats families Photo courtesy of the author “I just want to cry,” I told my wife on Friday morning. I had just gotten off a work call and my brain was ticking through follow-up items, adding to a long list of untouched to-dos. My wife, meanwhile, was multitasking an onslaught of work questions while also trying to manage “homeschool” time with our son — but he refused to participate. Instead, he huddled in an increasingly secure couch fort, refusing to do anything — color, read, go outside, talk to his teacher — besides sit in silence in the dark or watch his iPad. (Today, he opted for sitting in silence in the dark). “Are we permanently ruining and psychologically damaging him?” my wife pleaded with me. We both felt guilty for the work we were not doing — and aching for the way our son was struggling and needed us to be present and calm. But that’s exactly what our current schedule prohibits, as we run back and forth between work calls, requests, and parenting. (Later, as I took over the homeschool shift and he stormed upstairs to cry, he told me it was because I had stopped smiling at him. Knife, meet heart.) This is really hard. What’s amazing to me is how consistent this struggle is among every parent I talk to. The texts and social media posts bouncing around my circle all echo each other. We feel like we’re failing at both. Our kids don’t just need us — they need more of us. Our kids are acting out, abandoning the routines they already had, dropping naps, sleeping less, doing less — except for jumping on top of their parents, which is happening much more. We’re letting them watch far greater amounts of screen time than we ever thought we’d tolerate. Forget homeschooling success — most of us are struggling to get our kids to do the basics that would have accounted for a Saturday-morning routine before this pandemic. 
The particular struggle reflects the most privileged perspective — that of two fully employed adults, sharing the burden, without fear of losing our jobs. Put another way, I’m not worried about how I’m going to feed my family — I’m just worried about getting my son to eat something besides a donut for two days straight. But it’s precisely the privilege of this vantage point that in a way makes it so stark. This is the best-case scenario? Viruses, or in this case, global pandemics, expose and exacerbate the existing dynamics of a society — good and bad. They are like a fun-house mirror, grossly reflecting ourselves back to us. One of those dynamics is the burden we put on individual parents and families. We ask individuals to solve problems that are systemically created. This current situation is almost prophetically designed to showcase the farce of our societal approach to separating work and family lives. There’s a subtle expectation that parents must find creative ways to handle this on their own. My in-box, social media feeds, and countertops are filled with creative ideas for educating and caring for your kids. Workbooks, games, creative projects and experiments, virtual yoga, virtual doodling, virtual zoo visits, virtual everything. I honestly am too tired and stretched thin to read the suggestions, let alone try them. The few I have tried have been met with astounding and fierce rejection by my son. I see these “helpful suggestions” alongside reminders to be gentle on ourselves. “Embrace imperfection!” “Lower your standards!” To be clear — my family’s standards at this point are simply to get through the day, ideally with my son doing something besides watching TV, and us not utterly sabotaging our work. But what’s missing in all these cloistered parent texts and Facebook groups, all these helpful tips, is acknowledgement that this situation is fundamentally farcical. And individual solutions don’t — and won’t — work. 
I thought by the fourth week of social distancing we would have all settled into the new norm a bit. But for my family (and others I’ve spoken to) that is not the case — things are harder than they were at the beginning. Harder because we’ve all accrued anxiety, stress, and sadness over this period. My to-do list is longer and further untouched; my guilt and anxiety for the ways my son is not being engaged enough is greater; his apparent sadness for his whole world shifting is intensified as he regularly acts out; and our collective exhaustion grows deeper. This cannot be solved by tweaks to the schedule, helpful routines, and virtual activities. We have to collectively recognize that parents — and any caregivers right now — have less to give at work. A lot less. The assumptions seem to be that parents have “settled into a routine” and “are doing okay now.” To be clear, parents are not doing okay. Everyone is grieving and struggling right now. When I’m not pulling my hair out, I’m trying to be grateful that I am with my family, they are healthy and safe, and I am not enduring this period in total isolation. But this pandemic is highlighting all that is wrong with our systems set up to support families. “Making it work” is only true for those with the most privilege among us. It exposes everything from the lack of paid sick leave and parental leave to the fact that the school day ends at 3 p.m. when the typical workday goes several hours longer — yet aftercare is not universally available. And that says nothing of our need for universal health care, irrespective of employment. Parents pour endless energy into solving for systems that don’t make sense and don’t work. It’s always been a farce to think about caretaking and family responsibilities as “personal life decisions” that get handled outside of work hours. From getting kids to pediatrician appointments to the onslaught of sick days when cold season hits to school closures and parent-teacher conferences. 
In my son’s first year of day care, I didn’t work a full week for months. Yet we just hide it better and make it work. And again, “making it work” is only true for those with the most privilege among us. This current situation is almost prophetically designed to showcase the farce of our societal approach to separating work and family lives. We are expected to work from home full time. And care for our children full time. And we cannot have anyone outside our immediate household help. It can’t work and we all are suffering at the illusion that it does. Our kids are losing out — on peace of mind, education, engagement, the socialization for which they are built. Our employers are losing out, too. Whether the office policy is to expect full-time work or whether, like in my experience, we are offered a lot of flexibility — work is less good, there is less of it, and returns will be diminishing the longer this juggle goes on. To be honest, I’m not sure what the solution is. But unless we step back and redefine where the burden of responsibility lies in providing care for our most vulnerable and reprioritize what work matters, we are going to emerge from this pandemic with some of our most powerful forces — parents and young people — not up for the task of rebuilding a better future. And in the meantime, remember this: Parents are not okay.
https://gen.medium.com/parents-are-not-ok-66ab2a3e42d9
['Chloe I. Cooney']
2020-04-08 15:57:33.490000+00:00
['Parenting', 'Homeschooling', 'Work', 'Family', 'Coronavirus']
The Oddity of Words.
"You have to be odd to be number one." Dr. Seuss

Theodor Geisel (Dr. Seuss), author of 42 children's books, didn't like kids. As Geisel once confessed, "You have'm, I'll amuse'm." He created The Cat in the Hat because he found Dick and Jane boring. Oh, The Places You'll Go was supposed to be read by expectant mothers. He also coined the word "nerd" for his book If I Ran the Zoo. "I like nonsense, it wakes up the brain cells," Geisel once said, and he certainly believed in nonsense. Little did he know he'd be coining a term (nerd) for future tech billionaires. In turn, they would invent their own odd words like "zap" and "pram," while rendering words like "help" completely useless. Not only did Geisel create "nerd," he also created "guff," which went on to become "I don't take no guff." He prided himself on not taking any guff from children. This was accomplished by not having any. His wife, Audrey, admitted they scared the crap out of him. He didn't do readings. Lewis Carroll liked children — possibly too much, according to some Carroll scholars. He took nude pictures of Alice's older sister, Lorina Liddell (then 13 years old). That could have landed him in prison. Perhaps writing Alice's Adventures in Wonderland showed he was just looking for a muse. Be that as it may, going from a 13- to an 11-year-old wasn't exactly moving in the right direction — even for a pedophile. Photography aside, Carroll still managed to come up with some pretty interesting words like "chortle" and "snark." Possibly he would have chortled and been a snark if someone had accused him of pedophilia. "Chortle" is actually a combination of "chuckle" and "snort," what linguists call a "portmanteau." This term was invented by Lewis Carroll, too.
He probably figured someone was going to take him to task over "slithy," which combines "slimy" and "lithe." Fans of Superman might be interested to know that the word itself wasn't created by Jerry Siegel and Joe Shuster. Long before they developed their "caped crusader," George Bernard Shaw coined the term for Man and Superman. He, in turn, stole it from Friedrich Nietzsche. Both men wore capes over their tuxedos, and Nietzsche thought he could fly. William Gibson is credited with the term "cyberspace," which first appeared in his short story Burning Chrome. Gibson was a conscientious objector during the Vietnam War. Moving to Toronto, he said that he wasn't avoiding the draft so much as wanting to "sleep with hippie chicks." He went on to help form the "cyberpunk" movement after he ran out of hippie chicks. So why are odd people responsible for so many words? Linguists believe it's all part of the creative process. Once you step outside standard literary form, throwing in a few words of your own seems perfectly acceptable. Anthony Burgess created a whole vocabulary for A Clockwork Orange. His characters were so weirdly violent, it seemed natural to give them weird names. Like Carroll, Burgess tended to use portmanteaus, such as "droog," a variation of the Russian "drug," meaning "friend." Sometimes it's not even the author but the critic who creates the term. Columnist Herb Caen created "beatnik" to describe the writers of the Beat Generation. "It kind of just came out," he admitted, noting this was around the time of Russia's Sputnik satellite. Disc jockey Alan Freed is credited with the term "rock and roll." As he admitted, "I was on air playing guys like Elvis Presley. I didn't know what he was doing.
It sounded country but it also sounded R&B. One day, I just blurted out 'rock 'n roll.'" It stuck because nobody else knew what to call it. Sometimes words and terms happen when we're stumped. It's the same with naming toys. What do you call a doll that's anatomically perfect? You call her "Barbie." Ruth Handler, the creator, said it was the name of her daughter. Others noticed a surprising number of Barbies with anatomically — or airbrushed — perfect bodies in Playboy. Linguists also point out that new words could reflect new thinking. Existing terms simply can't describe advancements. "Megawatt," for instance, refers to one million watts, but most of us wouldn't know a million watts from a lightning bolt. Joseph Heller had to invent "Catch-22" because there was no term for sending pilots on missions long after they were capable of even flying a kite. When they complained that they were "going mad," military psychiatrists kept them in the air by saying, "You can't be mad if you think you're mad." Sometimes words appear simply out of expediency. Back when detective novels were popular, critics and columnists had to produce reviews. Not being wordsmiths or creative, they relied on common descriptives. These were repeated ad nauseam until editors came out of their offices, screaming "Who wrote this crap?" — which led to the crime descriptive "whodunit." If it takes the odd or nonsensical to create new words, this might explain why Facebook recently had to shut down their AI. It was creating its own language. More frightening than that, other AIs understood the bots. Since Facebook didn't — and Mark Zuckerberg didn't — they shut it down. Elon Musk said he was worried because he didn't understand them, either. How do you make sense of "Balls have zero to me to me to me"?
If Lewis Carroll were still around, he'd probably know exactly what it means. The rest of us simply don't have the portmanteau — or the creatively odd minds — including Mark Zuckerberg and Elon Musk. If they're worried, maybe we should be, too…unless AIs are doing something far more nonsensical than we realize. Maybe they're learning how to talk to kids. Would it really be surprising if they prefer creating Through the Looking-Glass rather than stupid algorithms? What if, being stripped of human traits like copying everyone else, AIs realize that nonsense and being odd is…well…a lot more fun? Maybe being odd is how you become number one. It worked for Dr. Seuss. It's probably worked for lots of people, some known, some just cranks, creating their own little worlds and language, waiting for the days when robots come by and commend them for their thinking. Anything's possible where nonsense is concerned, and we're all more nonsensical than we realize, even the most sensical, or those who think they're sensical, and in a topsy-turvy AI world, nonsensical is sensical. Robert Cormack is a freelance copywriter, novelist and blogger. His first novel "You Can Lead a Horse to Water (But You Can't Make It Scuba Dive)" is available online and at most major bookstores. Check out Yucca Publishing or Skyhorse Press for more details.
https://robertcormack.medium.com/the-oddity-of-words-c3e54e6c1a58
['Robert Cormack']
2018-08-03 14:43:07.871000+00:00
['Life Lessons', 'Education', 'Words', 'Life', 'Artificial Intelligence']
Yugasa is one of the Top 10 Most Promising Mobile Apps Development Companies in India
Yugasa is proud to announce that it has recently been recognized and awarded as one of the Top 10 Most Promising Mobile Apps Development Companies in India by 'CIO Review', a magazine of global repute. "There is absolutely no repudiating the fact that the mobile world is expanding itself across all spectrums of the Indian market and will continue to grow. This spurt in growth has resulted in the ubiquity of mobile phones and advancements in the internet thereby, propelling the wheels of mobile app design and development. Additionally, trending technologies like IoT and AI act as catalysts for the proliferation of mobile apps across a gamut of industry verticals……" To read more, please visit https://goo.gl/XLrtdf. Click on the link if you want to download our magazine coverage news in PDF format. Originally published at yugasa.com on October 5, 2018.
https://medium.com/yugasa/yugasa-is-one-of-the-top-10-most-promising-mobile-apps-development-companies-in-india-323ec26a037
['Yugasa Software Labs']
2018-10-10 13:37:21.124000+00:00
['Outsourcing Company India', 'iOS App Development', 'Android App Development', 'Mobile App Development']
Development Update August 13, 2018 | iOS Wallet | The Force | Web API
The FAB Foundation provides technical updates related to the project every week.

Mobile Wallet

The FABcoin Mobile Wallet is available today for Apple iOS devices. To download the wallet, visit https://itunes.apple.com/us/app/fab-wallet/id1423626046?ls=1&mt=8 or http://fabcoin.pro/runtime.html, or search "FAB Wallet" in the App Store. For instructions on how to properly set up the wallet, read our wallet guide at https://fabcoin.co/help/. https://medium.com/fast-access-blockchain/fabcoin-mobile-wallet-now-on-ios-bb8f902d357e.

Web API

For our Web API we are currently adding features that will interact with the Kanban network. We are creating a Kanban data architecture and developing internal capabilities and integrations. We have also used a WebSocket setup to distribute the current node's recent changes to the network of other nodes. We are developing:

- the initial node network, and preparing the nodes for the first distributed network
- the initial web native API as a local Kanban service
- new transactions for linked nodes or networks

Next week, we will add a new Kanban trading model, implement two direct business transaction policies, and create original transactions through the trading model.

Smart Contract

We are improving the Equihash mining system and going through rounds of smart contract testing. Our online smart contract wallet is also undergoing a series of tests.

Kanban

For the Kanban network, we are progressing with a Proof of Concept for a Practical Byzantine Fault Tolerance (PBFT) consensus mechanism.

"The Force": Phase 1, Basic Edition

"The Force" is a planned hard fork of the Foundation Chain of the Fast Access Blockchain. We are currently ahead of the development schedule. "The Force" will be implemented in two phases, with the first phase coming at the end of August/beginning of September 2018.
Several features will be deployed to meet user and market needs, prepare the network for listing on new exchanges with increased volume such as OKEx, and provide better security against 51% attacks. All the design elements of the white paper will be introduced during the second phase in October 2018. For more details, read our full outline here: https://medium.com/@FABBlockchain/the-force-749fa40d5b3

This past week we recreated the website API for additional security features and standardization. This week we will be changing the KYC and order process, as well as adding additional account features.

Community Development

Over the last few weeks we have seen tremendous growth in the FAB community. This week, FAB will begin to focus on educating the community about the project and blockchain. To keep up to date, visit https://medium.com/fast-access-blockchain. We are also open to new ambassadors joining our team; visit https://fabcoin.co/ambassador-program to apply.

Events

Our first meetup in Shanghai is currently being planned. Details on the date and location will be released soon. Additionally, our Canadian ambassadors are in the process of preparing a Toronto meetup as well.

FAB Open House

Fast Access Blockchain will be hosting our very first open house on September 13, 2018. Come visit us to see the active work we are doing and discuss our project with us.
https://medium.com/fast-access-blockchain/development-update-august-13-2018-ios-wallet-the-force-web-api-22b79467e2e7
['Fab Info']
2018-08-13 18:48:21.444000+00:00
['API', 'Updates', 'Development', 'Blockchain', 'Bitcoin Wallet']
The 7 Biggest Issues Data Visualization Faces Today
According to the members of the Data Visualization Society…

On a dedicated channel, #dvs-topics-in-data-viz, in the Data Visualization Society Slack, our members discuss questions and issues pertinent to the field of data visualization. Discussion topics rotate every two weeks, and while subjects vary, each one challenges our members to think deeply and holistically about questions that affect the field of data visualization. At the end of each discussion, the moderator recaps some of the insights and observations in a post on Nightingale. You can find all of the other discussions here.

As a data visualization practitioner, it’s easy to feel isolated. We’re mostly swimming alone in our organizations or as freelancers. But in February 2019, we boarded our ship: the Data Visualization Society. Now that we’re together, what do we do? The founders have a vision to steer us toward a unified community that lifts up its members through collaboration and shared resources. So, every week on the member Slack, we’re asking a different question that challenges members to think about data visualization issues deeply and holistically. But what are those issues?

For our first week, we asked our 3,000+ members: “What do you think the most important issue in data visualization is?” The answers and open discussion ranged from light-hearted to deeply concerning. They mostly landed in three main groups: issues around data, visualization in practice, and the general profession. We’ve gathered the topics into these three themes so that the conversation doesn’t just benefit our membership, but can also be useful to the broader data visualization community.

When I first started, it was difficult to know what a typical workflow looked like to create a visualization. Having access to a centralized community would have been helpful!
I’m glad we’re assessing these issues now so we can be more supportive of those entering and actively practicing in the field. Here are the biggest issues that we currently face, according to our members.

Data

Data visualization is often framed as a solution to the data-access problem, and most professionals who have a job title that includes “data visualization” spend an inordinate amount of their time not visualizing that data but cleaning and processing it. So it’s no wonder many of the concerns expressed by our members focused on data. Importantly, though, the concern wasn’t about processing data. It was how organizations deal with data, how to teach about data, and how to deal with bias in data.

1. Organizations

Why do they spend so many resources on data collection without a plan? Why do they spend so much time competing to collect the most data? Why do they trust the raw data over the visualization? How do we create and integrate more data translators with domain knowledge into organizational teams?

As member SanPaw put it, “I see many dazzling visualizations but very few provide insights or lead to meaningful insights. Having someone on the team with domain knowledge and understanding the business issues helps a lot.”

2. Teaching

Why aren’t teachers taught how to use data visualization techniques? How do we prioritize teaching kids how to read a data visualization that isn’t a map?

Iris Morgenstern is working on bringing viz to teachers: “I am teaching future teachers how to use data visualization for their professional life. When they start they have no idea what DataViz has to do with anything. But in the end, many of them are delighted to see how much more they get out of their research projects.”

3. Bias

How do we raise awareness around bias in data collection methods? How do we get more people on board with visualizing uncertainty, beyond the academic space? How can we be more responsible in how we speak for the data?
Mike Cisneros says, “It is too easy and common that we wittingly or unwittingly relinquish responsibility for the quality of our work, be it in terms of how reliable the data is, how clearly it is presented, whether there’s a bias in the presentation reflecting anything other than the truth of the data, whether there are unsustainable claims being made, and so on.”

In Practice

The second category of concern was centered on practice. The Data Visualization Society is a professional society, so it only makes sense that we’d be particularly concerned about our practice. By this, our members typically referred to the techniques and tools that often define their roles. This includes dealing with how aesthetics and science overlap in the visual display of information, as well as how data visualization cannot be evaluated objectively with the kinds of performance tests in place in other technical fields; its value and impact are tied to its reception by its audience. On the tool side, there’s general anxiety about the sheer number of tools and how those tools enable users to make questionable data visualization decisions.

4. Technique

How do we balance beauty and understanding? How do we explain to clients that “it depends” (meaning that the right data visualization or right technique is context-dependent)? How do we create standards for accessibility? How do we prioritize designing with our audience in mind?

One member describes a common scenario: “It feels like [my coworkers] want some hard rules on what and what not to do whereas I’m more inclined to say, for the most part, ‘It depends on the data/scenario!’”

5. Tools

How do we help people create good charts when modern software makes it so easy to make charts quickly? How can we help those in the field not feel so overwhelmed with the number of tools to learn?
Bill Seliger said, “I think the underlying problem is that the democratization of data and easy learning curve on dataviz tools has far outpaced the education effort on how to present data.”

The Profession

Finally, there was a distinct focus on the profession as a whole. To outsiders, this might seem the least interesting aspect of data visualization. But if you scratch the surface, you’ll find that there’s little shared definition of roles and responsibilities, best practices, and resources. That internal chaos is reflected externally in how we approach stakeholders to explain how we make data visualization, how we measure its impact, and how we justify further investment by our organizations’ leaders in the roles and resources necessary to perform effective data visualization.

6. Internal

Where can we come together as a central community? How do we create centralized resources and best practices? Where can we publish articles in a central place? How do we learn from related disciplines (design, UX, etc.)? How do we come to a common understanding of what data visualization is for? How do we organize into sub-disciplines within data visualization?

Casey Haber added, “As a community, I think we need to study and integrate from related disciplines like design that have a long history with this problem.”

7. External

How do I effectively explain to other people what I do? How do I make the case that data visualization is necessary?

Francis Gagnon says, “I usually say that information design, including data visualization, has a huge market and little demand. People don’t realize that theirs sucks and that they need professional help.”

Having a community and central forum for questions, discussion, and self-reflection is an essential need for newbies and veterans alike. As we move forward, we hope to use these concerns as a guide to build a community that cultivates data visualization into a robust field. We look forward to your comments on any of the points above.
If you have additional ideas, please join the Data Visualization Society and add your voice to the conversation. There’s room for everyone! To sign up, register at datavisualizationsociety.com/join. Elijah Meeks and Jason Forrest contributed to this piece. Thanks to Mara Averick for your keen editing.
https://medium.com/nightingale/the-7-biggest-issues-data-visualization-faces-today-7bf6b6457b72
['Alli Torban']
2020-07-06 20:17:09.859000+00:00
['Design', 'Data', 'Topicsindv', 'Data Science', 'Data Visualization']
An End-to-End Project on Time Series Analysis and Forecasting with Python
Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. Time series are widely used for non-stationary data, like economic, weather, stock price, and retail sales data. In this post, we will demonstrate different approaches for forecasting a retail sales time series. Let’s get started!

The Data

We are using Superstore sales data that can be downloaded from here.

import warnings
import itertools
import numpy as np
import matplotlib.pyplot as plt
warnings.filterwarnings("ignore")
plt.style.use('fivethirtyeight')
import pandas as pd
import statsmodels.api as sm
import matplotlib

matplotlib.rcParams['axes.labelsize'] = 14
matplotlib.rcParams['xtick.labelsize'] = 12
matplotlib.rcParams['ytick.labelsize'] = 12
matplotlib.rcParams['text.color'] = 'k'

There are several categories in the Superstore sales data; we start with time series analysis and forecasting for furniture sales.

df = pd.read_excel("Superstore.xls")
furniture = df.loc[df['Category'] == 'Furniture']

We have a good 4 years of furniture sales data.

furniture['Order Date'].min(), furniture['Order Date'].max()

(Timestamp('2014-01-06 00:00:00'), Timestamp('2017-12-30 00:00:00'))

Data Preprocessing

This step includes removing columns we do not need, checking for missing values, aggregating sales by date, and so on.

cols = ['Row ID', 'Order ID', 'Ship Date', 'Ship Mode', 'Customer ID', 'Customer Name', 'Segment', 'Country', 'City', 'State', 'Postal Code', 'Region', 'Product ID', 'Category', 'Sub-Category', 'Product Name', 'Quantity', 'Discount', 'Profit']
furniture.drop(cols, axis=1, inplace=True)
furniture = furniture.sort_values('Order Date')
furniture.isnull().sum()

Figure 1

furniture = furniture.groupby('Order Date')['Sales'].sum().reset_index()

Indexing with Time Series Data
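The article is truncated at the indexing step, so as a rough, self-contained sketch of what that step typically looks like: set the Order Date as the index and resample daily sales to a monthly average (made-up numbers are used here instead of the Superstore file, so the snippet runs without the download).

```python
import pandas as pd

# Hypothetical mini version of the furniture sales data, mirroring the
# preprocessing above: one row per order date with a Sales amount.
sales = pd.DataFrame({
    "Order Date": pd.to_datetime(
        ["2014-01-06", "2014-01-20", "2014-02-03", "2014-02-17"]),
    "Sales": [100.0, 300.0, 50.0, 150.0],
})

# Aggregate sales by date; the groupby leaves a DatetimeIndex,
# which is what enables time-based resampling.
daily = sales.groupby("Order Date")["Sales"].sum()

# Resample to the average daily sales per month ('MS' = month start)
monthly_avg = daily.resample("MS").mean()
print(monthly_avg)
```

With a DatetimeIndex in place, the same pattern extends to rolling windows, seasonal decomposition, and the statsmodels SARIMAX modeling the imports above hint at.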
https://towardsdatascience.com/an-end-to-end-project-on-time-series-analysis-and-forecasting-with-python-4835e6bf050b
['Susan Li']
2018-09-05 12:18:21.429000+00:00
['Machine Learning', 'Data Science', 'Statistical Analysis', 'Timeseries', 'Python']
Experimenting with Ingress Controllers on Oracle Container Engine (OKE) — Part 2
Using multiple controllers and load balancers with OKE

In Part 1, I briefly described what Ingresses and Ingress Controllers are. We also took some of the most popular Ingress Controllers for Kubernetes, namely NGINX, HAProxy, Traefik, and Contour, for a spin on OKE. In this post, we’ll look at the Ingress class and how it can be used to deploy multiple Ingress Controllers concurrently. We’ll also briefly look at a different type of Kubernetes service (ExternalName) and one of the ways you would use it. Finally, we look at how Ingress Controllers can be used with public and internal load balancers. All the yamls in the exercises below can be found on github.

Using multiple Ingress Controllers

Which Ingress Controller to use is a difficult question to answer. As usual, it depends on your needs, your team’s skillset (e.g. your team may be more familiar with NGINX or HAProxy than Traefik or Contour), the type of applications you are deploying, the technical and operational features you need, the protocols supported, the maturity of the controller or the helm package (if that’s how you are deploying), your appetite for experimenting and risk, internal approvals, and so on. This excellent Kubedex guide, with its summary of features, can help you narrow the list to evaluate.

Regardless, it is possible to deploy multiple controllers in a single Kubernetes cluster. It is also possible to deploy multiple instances of the same controller in the same cluster by specifying a different Ingress class. So, let’s look at how the Ingress class works. Recall that an Ingress Controller listens for changes and updates its routing rules. However, when you deploy multiple Ingress Controllers and Ingresses, there has to be a way to ensure that a specific Ingress is picked up by the right controller. That’s the purpose of the Ingress class.
Think of the Ingress class as an instruction to all the deployed Ingress Controllers, saying either:

- you are responsible for implementing my routing rules, or
- you are not responsible for implementing my routing rules

The diagram below illustrates this. When an Ingress class value matches that of a controller instance, that controller instance will handle the routing rules for your Ingress. If you have other controller instances, whether of the same or a different type but with different Ingress class values from your Ingress, they will ignore that Ingress.

Ingress classes and Ingress mapping

Specifying the Ingress class through an annotation

The Ingress API has an annotation which allows you to specify this Ingress class:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-blog-ingress
  annotations:
    kubernetes.io/ingress.class: "ingressclass"

Except for Traefik (see below), each controller has a default Ingress class value but also allows you to specify your own custom class. This is useful, especially when you have to deploy multiple instances of them in a single cluster. Below are the default Ingress class values for each when you use the respective helm charts:

- nginx: nginx
- haproxy: haproxy
- traefik: (none by default)
- contour: contour

If you use Traefik’s helm chart as described in Part 1, no Ingress class is set by default. You need to explicitly specify it during your deployment as follows:

helm install stable/traefik --name traefikcontroller --set kubernetes.ingressClass=traefik

Finally, note that if you use Contour, you do not set the annotation kubernetes.io/ingress.class on the IngressRoute. More on Contour and its Ingress class later.

Using the default Ingress class annotations in your Ingresses

The examples below show the default annotation you need to provide for each of the four controllers.

NGINX Ingress Controller:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-website
  annotations:
    kubernetes.io/ingress.class: "nginx"
...
HAProxy:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-website
  annotations:
    kubernetes.io/ingress.class: "haproxy"
...

Traefik:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-website
  annotations:
    kubernetes.io/ingress.class: "traefik"
...

Contour:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-website
  annotations:
    kubernetes.io/ingress.class: "contour"
...

Customizing the Ingress class

Let’s now look at how to customize the Ingress class for each controller during their installation with the helm charts.

NGINX:

helm install --name acmecontroller stable/nginx-ingress --set controller.ingressClass=mycustomnginx

You can set any value for the NGINX Ingress class.

HAProxy:

helm install --name acmecontroller incubator/haproxy-ingress --set controller.ingressClass=mycustomhaproxy

Similarly, you can set any value for HAProxy’s Ingress class.

Traefik:

helm install stable/traefik --name traefikcontroller --set kubernetes.ingressClass=traefikcustom

With Traefik, the Ingress class value has to start with ‘traefik’. I’m not sure whether it’s the helm package or Traefik itself that imposes this; it is based on Traefik’s helm chart documentation.

Contour:

kubectl apply -f examples/deployment-grpc-v2/ --ingress-class-name=newcontour

Similar to NGINX and HAProxy, you can set any value for Contour’s Ingress class. However, note that the following applies according to Contour’s documentation:

contour.heptio.com/ingress.class: The Ingress class that should interpret and serve the IngressRoute. If not set, then all Contour instances serve the IngressRoute. If specified as contour.heptio.com/ingress.class: contour, then Contour serves the IngressRoute. If any other value, Contour ignores the IngressRoute definition. You can override the default class contour with the --ingress-class-name flag at runtime.

So, if you’re planning to use Contour’s IngressRoute API, you must set this value to ‘contour’ or not set it at all.
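The article discusses the IngressRoute API without showing one, so here is a minimal sketch of what an IngressRoute with Contour’s own class annotation might look like. The name, namespace, and backing service are illustrative (borrowed from the article’s blog example), and the apiVersion assumes Contour’s pre-1.0 CRD that was current when this was written:

```yaml
# Hypothetical IngressRoute for the blog, using Contour's own
# class annotation instead of kubernetes.io/ingress.class.
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: acme-blog
  namespace: acme
  annotations:
    contour.heptio.com/ingress.class: contour
spec:
  virtualhost:
    fqdn: blog.acme.com
  routes:
    - match: /
      services:
        - name: acme-blog
          port: 80
```

Per the documentation quoted above, leaving the annotation unset would make every Contour instance in the cluster serve this IngressRoute.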
Using multiple controllers in action

I said that it was possible to have multiple Ingress Controllers deployed simultaneously on a single cluster. Let’s look at how we can use them concurrently. Pick any 2 controllers you like and deploy them. In this example, we will use NGINX and Traefik for illustration, but you can use any combination as long as you specify the correct Ingress class. E.g. you can also have 2 NGINX controllers if you want, as long as you set a different class name for the 2nd controller.

This is how we want to deploy our artifacts:

- acme services (website, blog) in the acme namespace
- nginx controller and website ingress in the web namespace
- traefik controller and blog ingress in the blog namespace

The following diagram illustrates our deployment.

Using multiple controllers and public load balancers

Let’s first create the namespaces:

kubectl create ns acme
kubectl create ns web
kubectl create ns blog

We can now deploy the website and blog:

kubectl create -f ingresscontrollers/multiplecontrollers/acme-website.yaml
kubectl create -f ingresscontrollers/multiplecontrollers/acme-blog.yaml

Let’s now deploy our 2 Ingress Controllers (NGINX and Traefik).
NGINX:

helm install --name webcontroller stable/nginx-ingress \
  --namespace web \
  --set defaultBackend.enabled=true \
  --set defaultBackend.name=acmedefaultbackend \
  --set rbac.create=true

Verify the NGINX controller has been deployed properly:

kubectl -n web get pods

NAME                                                              READY   STATUS    RESTARTS   AGE
webcontroller-nginx-ingress-acmedefaultbackend-86689bd54b-dzpcg   1/1     Running   0          5m45s
webcontroller-nginx-ingress-controller-8bdc8c665-f6xqm            1/1     Running   0          5m45s

And retrieve the Load Balancer’s public IP address:

kubectl -n web get svc -o wide

NAME                                             TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
webcontroller-nginx-ingress-acmedefaultbackend   ClusterIP      10.96.122.109   <none>            80/TCP                       46s
webcontroller-nginx-ingress-controller           LoadBalancer   10.96.45.231    129.146.208.247   80:30698/TCP,443:32651/TCP   46s

Traefik:

helm install --name blogcontroller stable/traefik \
  --namespace blog \
  --set rbac.enabled=true \
  --set kubernetes.ingressClass=traefik

Verify the Traefik controller has been deployed properly:

kubectl -n blog get pods

NAME                                      READY   STATUS    RESTARTS   AGE
blogcontroller-traefik-65579ddcb4-7hqdz   1/1     Running   0          6m30s

And retrieve the Load Balancer’s public IP address:

kubectl -n blog get svc -o wide

NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
blogcontroller-traefik   LoadBalancer   10.96.127.166   129.146.89.209   80:30293/TCP,443:30455/TCP   69s

Update your DNS ‘A’ records so that they match the following:

- 129.146.208.247 (used by the Load Balancer created by the NGINX Ingress Controller) maps to www.acme.com
- 129.146.89.209 (used by the Load Balancer created by Traefik’s Ingress Controller) maps to blog.acme.com

Note:

- Your Load Balancer public IP addresses will be different; remember to use yours
- Replace acme.com with your domain

Let’s now create the Ingresses, starting with the website Ingress, using the same Ingress code as in the
examples in Part 1. Open a new terminal to watch the NGINX controller logs (replace with the name of your controller pod):

kubectl -n web logs webcontroller-nginx-ingress-controller-8bdc8c665-f6xqm -f
...
I0712 00:36:36.398883 8 status.go:86] new leader elected: webcontroller-nginx-ingress-controller-8bdc8c665-f6xqm

We’ll now create the following Ingress (replace acme.com with that of your domain):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-website
  namespace: web
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: www.acme.com
    http:
      paths:
      - path: /
        backend:
          serviceName: acme-website
          servicePort: 80

On your first terminal, create the Ingress:

kubectl create -f multiplecontrollers/acme-website-ingress.yaml

In the watch terminal, you’ll notice the following:

I0712 00:50:13.540522 8 controller.go:133] Configuration changes detected, backend reload required.
I0712 00:50:13.540695 8 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"acme", Name:"acme-website", UID:"02051a8b-a43f-11e9-a88d-0a580aed1b23", APIVersion:"extensions/v1beta1", ResourceVersion:"192541", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress acme/acme-website
I0712 00:50:13.624379 8 controller.go:149] Backend successfully reloaded.
[12/Jul/2019:00:50:13 +0000]TCP200000.000
I0712 00:50:36.427547 8 status.go:309] updating Ingress acme/acme-website status from [] to [{ }]
I0712 00:50:36.434886 8 event.go:258] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"acme", Name:"acme-website", UID:"02051a8b-a43f-11e9-a88d-0a580aed1b23", APIVersion:"extensions/v1beta1", ResourceVersion:"192593", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress acme/acme-website

And now if you access the website URL, you’ll be able to access the website page.
If you are still watching the controller pod logs, you’ll also notice the incoming traffic:

10.244.0.0 - [10.244.0.0] - - [12/Jul/2019:00:51:26 +0000] "GET / HTTP/1.1" 200 1855 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0" 388 0.002 [acme-acme-website-80] [] 10.244.2.13:80 4446 0.002 200 0bed30fdf9d16bde887ed8e488b80735
10.244.0.0 - [10.244.0.0] - - [12/Jul/2019:00:51:27 +0000] "GET /favicon.ico HTTP/1.1" 404 178 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0) Gecko/20100101 Firefox/68.0" 320 0.001 [acme-acme-website-80] [] 10.244.2.13:80 178 0.001 404 d91af2a400fd4230ec3dec8b0eaf713f

One important thing to note here is that the Ingress is defined in the same namespace as the service. If you create the Ingress in a namespace different from the targeted service, the Ingress Controller will still be able to detect the Ingress change. However, it won’t be able to obtain the endpoints and therefore won’t be able to update its configuration. To illustrate this, we’ll deliberately create the blog Ingress in the blog namespace, i.e. that of Traefik.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-blog-ingress
  namespace: blog
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - host: blog.acme.com
    http:
      paths:
      - path: /
        backend:
          serviceName: acme-blog
          servicePort: 80

On your 2nd terminal, stop watching the NGINX controller and start watching the Traefik controller (replace with the name of your controller pod) instead:

kubectl -n blog logs blogcontroller-traefik-65579ddcb4-7hqdz -f

Let’s now create the blog Ingress (remember to change the hostname in the Ingress rule):

kubectl create -f acme-blog-ingress.yaml

You’ll see the following in the traefikcontroller logs:

{"level":"error","msg":"Service not found for blog/acme-blog","time":"2019-07-12T01:02:20Z"}

Delete the blog Ingress:

kubectl delete -f acme-blog-ingress.yaml

Edit its namespace to acme:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-blog-ingress
  namespace: acme
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - host: blog.acme.com
    http:
      paths:
      - path: /
        backend:
          serviceName: acme-blog
          servicePort: 80

And recreate the Ingress:

kubectl create -f acme-blog-ingress.yaml

Inspect the controller log again:

kubectl -n blog logs blogcontroller-traefik-65579ddcb4-7hqdz -f
{"level":"info","msg":"Server configuration reloaded on :8880","time":"2019-07-12T02:01:36Z"}

And now if we access the blog URL, we can also access the blog. Let’s clean up before our final experiment:

kubectl delete -f acme-blog-ingress.yaml
kubectl delete -f acme-website-ingress.yaml
helm delete --purge blogcontroller
helm delete --purge webcontroller

Using a single public load balancer and private load balancers with multiple controllers

As you might have noticed in the previous experiments, whenever you deploy an Ingress Controller, a public Load Balancer is created by default. Deploying multiple controllers means you will have multiple public load balancers.
This can have an obvious impact on costs, and although OCI is quite generous in terms of data charges, you as the superadmin want to keep a close eye on it. Now, let’s say you have 2 teams (Team Web and Team Blog), each with their favourite Ingress Controller (NGINX and Traefik respectively) as above, and for whatever technical reasons, they both insist on using their favourites. 2 cannot become 1, despite what the Spice Girls sing.

Well, it turns out you can use internal Load Balancers by setting the right annotations. We used Load Balancer annotations before in an earlier post, but for changing the shape of the Load Balancer. But how to do this with Ingress Controllers and then expose the applications publicly? Below is a slight modification of the previous deployment architecture:

Using a public load balancer and multiple private load balancers

In the above diagram, when the user accesses either the website or the blog URLs, they will hit the public load balancer. We also want the web and blog traffic to go through the internal load balancers. We’ll use the following 3 controllers this time:

- nginx-ingress: private, website
- traefik: private, blog
- haproxy: frontend load balancer

Let us first deploy nginx-ingress and traefik in internal mode:

helm install --name webcontroller stable/nginx-ingress \
  --namespace web \
  --set controller.name=web \
  --set rbac.create=true \
  --set controller.ingressClass=nginx \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/oci-load-balancer-internal"=true

By adding the annotation above, the load balancer created by the NGINX Ingress Controller will be deployed with a private IP address only. Only requests originating from within the VCN can reach it.
Let’s do the same for Traefik:

helm install --name blogcontroller stable/traefik \
  --namespace blog \
  --set rbac.enabled=true \
  --set kubernetes.ingressClass=traefik \
  --set service.annotations."service\.beta\.kubernetes\.io/oci-load-balancer-internal"=true

And finally, let’s create a frontend namespace and deploy HAProxy:

kubectl create ns frontend

helm install --name frontendcontroller incubator/haproxy-ingress \
  --namespace frontend \
  --set defaultBackend.enabled=true \
  --set defaultBackend.name=defaultbackend \
  --set rbac.create=true \
  --set controller.ingressClass=haproxy

For HAProxy, we don’t set the oci-load-balancer-internal=true annotation because we want it to be publicly accessible. When you log in to the OCI Console, you can verify this:

Public and Private Load Balancers

Now, we need to deploy the services and ingresses. Recall that we want the traffic to route through the private load balancers. I also mentioned before that your Ingresses should be in the same namespace as your service. Since we want to force the traffic through a particular path, i.e. the internal load balancers, we’ll use an alternative method to the above. In this case, we’ll use ExternalName. As the documentation explains, services of type ExternalName map a service to a DNS name. Here’s the explanation from the documentation:

When looking up the host my-service.prod.svc.cluster.local, the cluster DNS Service returns a CNAME record with the value my.database.example.com. Accessing my-service works in the same way as other Services but with the crucial difference that redirection happens at the DNS level rather than via proxying or forwarding.

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com

However, there’s also this little note in the Kubernetes documentation:

ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address.
ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName is intended to specify a canonical DNS name.

Since ExternalNames that resemble IPv4 addresses are not resolved, we can use IPv4 addresses instead to force the traffic through the internal load balancers. The website and blog services in the acme namespace are still there, so we only need to create the following:

- an Ingress for the website with class “nginx”. This will allow the internal NGINX controller to do the necessary routing to the website service.
- an Ingress for the blog with class “traefik”. This will allow the internal Traefik controller to do the necessary routing to the blog service.

The above 2 Ingresses are the same as in the previous exercise with 2 public load balancers. We could have left the 2 Ingresses as is and not deleted them during clean-up, but I want to make it clearer here what is happening in the current scenario. We’ll also create:

- an acme-website service of type ExternalName whose externalName value is the private IP address of the internal load balancer created by the NGINX controller
- an acme-blog service of type ExternalName whose externalName value is the private IP address of the internal load balancer created by the Traefik controller

And finally, 2 Ingresses of class “haproxy” whose target services are the 2 ExternalName services for the website and the blog. All the yaml files can be found under the ingresscontrollers/private directory.

First, edit all the Ingresses and change ‘acme.com’ to your domain as before. Next, once the nginx-ingress and traefik controllers are deployed, obtain the private IP addresses of their load balancers.
NGINX:

kubectl -n web get svc -o wide

NAME                                          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
webcontroller-nginx-ingress-default-backend   ClusterIP      10.96.44.162   <none>        80/TCP                       29s
webcontroller-nginx-ingress-web               LoadBalancer   10.96.54.89    10.0.12.23    80:30626/TCP,443:30352/TCP   29s

Traefik:

kubectl -n blog get svc -o wide

NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
blogcontroller-traefik   LoadBalancer   10.96.236.78   10.0.12.26    80:30668/TCP,443:32269/TCP   86s

Edit frontend-website-ingress.yaml and change the externalName value:

kind: Service
apiVersion: v1
metadata:
  name: acme-website
  namespace: frontend
spec:
  type: ExternalName
  externalName: 10.0.12.23
  ports:
  - port: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-website
  namespace: frontend
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  rules:
  - host: www.acme.com
    http:
      paths:
      - path: /
        backend:
          serviceName: acme-website
          servicePort: 80

Repeat for frontend-blog-ingress.yaml and change its externalName value:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: acme-blog
  namespace: public
spec:
  type: ExternalName
  externalName: 10.0.12.26
  ports:
  - port: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: acme-blog
  namespace: public
  annotations:
    kubernetes.io/ingress.class: "haproxy"
spec:
  rules:
  - host: blog.acme.com
    http:
      paths:
      - path: /
        backend:
          serviceName: acme-blog
          servicePort: 80
```

Apply the manifests:

```
kubectl create -f ingresscontrollers/private/
ingress.extensions/acme-blog-ingress created
deployment.apps/acme-blog created
service/acme-blog created
ingress.extensions/acme-website created
deployment.apps/acme-website created
service/acme-website created
service/acme-blog created
ingress.extensions/acme-blog created
service/acme-website created
ingress.extensions/acme-website created
```

Finally, get the public IP address of your load balancer either from the OCI Console or with kubectl:

```
kubectl -n frontend get svc -o wide
NAME                                                TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE   SELECTOR
frontendcontroller-haproxy-ingress-controller       LoadBalancer   10.96.41.118   129.146.95.188   80:30746/TCP,443:30777/TCP   83s   app=haproxy-ingress,component=controller,release=frontendcontroller
frontendcontroller-haproxy-ingress-defaultbackend   ClusterIP      10.96.158.93   <none>           8080/TCP                     83s   app=haproxy-ingress,component=defaultbackend,release=frontendcontroller
```

Update the 2 entries in your OCI DNS for the website and the blog (replacing ‘acme.com’ with your domain) and set the IP address to that of the public load balancer:

- www.acme.com
- blog.acme.com

Remember to click on ‘Publish Changes’ to make it effective, then test access to your website and the blog.

Summary

We have experimented with 4 different ingress controllers on OKE. We then looked at the ingress class and its purpose, deployed multiple controllers simultaneously on a cluster, and used the ingress class to ensure the requests are routed accordingly.
Finally, we looked at how private load balancers can also be used with multiple ingress controllers while still exposing the services publicly. The examples we used are rather trivial and contrived, but I hope they serve to illustrate the flexibility you have when you deploy your applications on Kubernetes and Oracle Cloud Infrastructure.

Additional references:

- NGINX and multiple Ingress Controllers
- Deploying Traefik using helm with service annotation
- OCI Documentation: Creating Internal Load Balancers
- Ingress Controller Comparison
- Kubernetes Cross Namespace Ingress
https://medium.com/oracledevs/experimenting-with-ingress-controllers-on-oracle-container-engine-oke-part-2-96063927d2e6
['Ali Mukadam']
2019-07-29 02:17:58.667000+00:00
['Kubernetes', 'Oracle Cloud', 'Nginx Ingress', 'Haproxy', 'Traefik']
The Mean Girl of Morehouse Returns
Ten years ago, I wrote a story that changed lives forever — including my own. I went back to examine the wreckage.

In 2009, I was sifting through press releases and news items, looking for a story. In my office — really an illegal bedroom in a Newark, New Jersey, rooming house — my assistant sat nearby, reading headlines aloud to me. “Morehouse College has a new dress code,” he said. I shrugged. Nothing new there. For years, HBCUs had been struggling to keep the Black Ivy League moving into the modern era while trying to stay true to certain standards. When Rev. Dr. Martin Luther King Jr., Spike Lee, Samuel L. Jackson, and others are among the distinguished alumni, there’s not much room for freshmen wearing sagging jeans and durags on campus. I quickly read through the press release. It mentioned all the things I expected it to: no sunglasses worn in class, no head coverings inside buildings… Then my eye skipped to the bottom. I read it aloud to my assistant. “No wearing of clothing usually worn by women (dresses, tops, tunics, purses, pumps, etc.) on the Morehouse campus or at college-sponsored events.” He and I stared at each other, our heads cocked to the side, eyebrows raised as far as they would go. Why on earth would Morehouse College need to bury this rule at the end of the statement? And who were these students at Morehouse, an institution serving Black men for more than 100 years, who needed to be told not to wear clothing “usually worn by women”? A few days later, Morehouse College’s president clarified the section of the press release regarding women’s clothes, which also included makeup and handbags. He said it was a very small segment of the population that needed to be addressed. Oh, really? I called Jermaine Hall, now the editor-in-chief here at LEVEL — and then the editor-in-chief of Vibe. I’d worked with Jermaine for more than a decade by that point, and we’d come to trust each other’s instincts.
He gave me the green light, and a week later I was on a flight to Atlanta to find those Morehouse students who had found themselves in the president’s crosshairs. I’ve been doing investigative journalism for my entire career. I take a tiny tidbit of a story and gnaw on it like a bone until I break it apart. I found out who killed Sean “Diddy” Combs’ father after a two-year odyssey. After wondering about the woman who threw grits at Al Green, I found her children and her siblings and uncovered the true story of her life — a several-year process. I looked for young Malcolm Shabazz, grandson of Malcolm X, and ended up spending a year interviewing him. I’ve even been known to go in and work hard to find something as simple as a belt. If something feels like a story, I dig in and can’t let go. Even when I probably should. The first step was to find the students. Throughout 2010, I traveled to Atlanta multiple times and made dozens of phone calls, trying to put together the scene. How did a group of kids, some still teenagers, live their truth at a place like Morehouse? All told, it took me more than a year to find and speak with Diamond Poulin, Chanel Hudson, Brian Alston, Michael Leonard, Michael Brewer, and Kevin Webb. SafeSpace, Morehouse’s advocacy group for on-campus LGBTQ students, put me in touch with a few students. Some were members of a group called The Plastics, gender-bending queer young men whom the dress code seemed to target specifically. There were also other queer students struggling with gender identity and sexuality, some of whom were in the process of transferring out of Morehouse. While I spent time with several students, it was Chanel — then known as Philip Hudson — who stood out.
And not just because Chanel is 6'5", chocolate brown, with wide and expressive eyes. Before we could get a few minutes into our interview, at a steakhouse in Atlanta, she told me she’d recently thought about killing herself. She tossed off that fact like it’s something people think about every single day.

Formerly known as Philip Hudson while a student at Morehouse, Chanel says, “I only went to Morehouse because I wanted to be the traditional man my father wanted me to be.”

Over dinner, Chanel was wounded and raw. Her eye contact was furtive, and she offered tidbits of her life in a rapid-fire staccato. She talked about verbal and sexual abuse in her life. She talked about briefly working as an escort. (“I did not have sex,” she snapped. “Just company.”) She talked about moving from Florida to New York alone while still a teenager. She told me about finding black-market hormones at age 12 to begin to transition. She told me about her parents, both from Jamaica. And her father is a pastor. I winced at the triple whammy of a trans teenager living in the deep South with Caribbean parents who are also religious. I wondered if sharing her story with a national audience was a good look for Chanel. She was just barely 21 years old. But I folded up that worry and put it away. My job was to deliver the story, period. And so, a few days after we met, I pushed Chanel into the spotlight, literally and figuratively. Looking back, she never should have been a part of the story. She wasn’t in the right space to represent the LGBTQ community. She was just barely able to understand herself and her own identity. And if I’m honest with myself, I wasn’t in the space to write about her, either.
https://level.medium.com/the-mean-girl-of-morehouse-returns-280934648c3
['Aliya S. King']
2020-05-01 15:42:54.589000+00:00
['Race', 'Gender', 'Journalism', 'Culture', 'LGBTQ']
10 Rules of Front-End TDD to Abide By
The most beautiful thing to see when writing tests

It has been one year since I joined my current team: me and two other backend developers. My teammates are pretty experienced and have been practicing TDD extensively over the years. It wasn’t long before I started hearing the phrase “Always start with a test!”, and I started hearing it quite often. Before working as a React developer, I hadn’t had much opportunity to test my code, apart from testing a backend API, some integration testing, and some Angular tests with Jasmine and Karma. So I felt pretty uncomfortable submitting a PR and writing a test for it. I have to be honest: at first I was cheating. I was not following TDD principles at all, and all my tests were simple checks of whether some element was present or not. But then again, I guess something is better than nothing. In a way, I always had a feeling that front-end TDD is like trying to code blindfolded. Our team eventually grew, and as we started working from home we held pair programming sessions more frequently, so we practiced TDD more and more. At the beginning it was really hard, but in time it became surprisingly more and more interesting. What I’ve learned during this year is that the learning curve is indefinite; you can never say ‘Now I think I know everything’ or ‘I think now I do it best’. As we gain more knowledge about how to test, we should improve our tests. Just a few days ago we managed to improve a test that was increasing the test execution time significantly, simply by mocking all the components that were not directly used in the test. Imagine how many unnecessary renders were done and how much code was executed by using the components as they were.
In this post I will include mostly React + Jest examples, but they are pretty similar in any front-end testing framework, so please don’t be downhearted if you are not using them; you can learn something new anyway. Here are a couple of rules I’ve learned during this adventure:

1. The setup of the test should be well-devised

I think the setup of the testing environment is one of the most important things when writing integration tests. Believe me, it is really hard to get the setup right, especially if you are not starting from a clean codebase. More than once I have spent hours figuring out what was actually wrong with my test, when the obvious problem was me not having a good setup. My advice is: first try to identify all dependencies for the group of components you want to test. Then decide which dependencies should be used as-is and which should be mocked. TDD is not hard to set up when you are testing a single component, but once you start to test multiple components together, believe me, that is a whole different story. Within my team, we use setup functions per test suite which can receive different parameters; depending on the test, we send different parameters to the setup function. Make your setup function as configurable as you can, so you will not need to repeat the same setup code in each test or write several different setup functions. You should thoroughly check what your test really needs. If it depends on context, include the context. If it depends on the router, include the router. Below you can find a couple of different examples of setup functions. See how my setup changes when my component has different dependencies?

Example 1:

```javascript
const userList = [{ name: 'John Doe' }];

const setup = () =>
  render(
    <MemoryRouter>
      <UsersTable users={userList} />
    </MemoryRouter>
  );
```

The code above is an example of a users table that accepts a list of users as a parameter.
Let’s say that you can click on each row of the table to see the details of a user. That means you need the router, right? So we should wrap our component in a Router in order for our tests to be successful.

Example 2:

```javascript
fetchMock.mock('api/users/', {
  method: 'GET',
  status: 200,
  body: { users: [{ name: 'John Doe' }] },
});

const setup = () =>
  render(
    <MemoryRouter>
      <UsersTable />
    </MemoryRouter>
  );
```

Now, let’s say that your UsersTable component doesn’t accept a users list as a parameter but makes an API call to fetch the list. Then you should mock the API call associated with this component.

Example 3:

```javascript
const setup = () =>
  render(
    <MemoryRouter>
      <UsersContext.Provider value={{ users: [{ name: 'John Doe' }] }}>
        <UsersTable />
      </UsersContext.Provider>
    </MemoryRouter>
  );
```

If your component depends on a context for getting the users, then you should include the context in your setup. The example above shows how to mock the UsersContextProvider and provide a mocked list of users, but you can also use the default provider and just mock the API call behind it. It would look like this:

```javascript
fetchMock.mock('api/users/', {
  method: 'GET',
  status: 200,
  body: { users: [{ name: 'John Doe' }] },
});

const setup = () =>
  render(
    <MemoryRouter>
      <UsersContextProvider>
        <UsersTable />
      </UsersContextProvider>
    </MemoryRouter>
  );
```

Of course, in all of these cases you can make the list configurable and pass the users as a parameter, so each test can have a custom list of users. Here is an example:

```javascript
const setup = (userList) =>
  render(
    <MemoryRouter>
      <UsersContext.Provider value={{ users: userList }}>
        <UsersTable />
      </UsersContext.Provider>
    </MemoryRouter>
  );
```

A wide range of different scenarios, right? Which one you should choose depends heavily on your setup and what you want to test.

2. Mock each subcomponent/class that you do not directly use in the test

Everything you do not use directly in the test — mock it.
This is also valid for components whose inner logic is not being tested, only whether they appear or not. You can easily mock these components with some text wrapped in an HTML element, e.g. a div or span, and assert whether the text is there or not. For example, when you do not directly depend on the router or the route and query parameters, you can mock them. But if your component’s behavior depends on them and they will change during the test, you should not mock them, because then you will not get the expected behavior.

Example:

```javascript
jest.mock('users/components/EditForm', () => () => <div>This is an edit form</div>);

const setup = () => render(<UsersDashboard />);

describe('When the users dashboard is displayed', () => {
  describe('And a user edit button is clicked', () => {
    it('should display the edit form', () => {
      const { getByText, getAllByRole } = setup();
      const editButton = getAllByRole('button')[0];
      userEvent.click(editButton);
      expect(getByText('This is an edit form')).toBeInTheDocument();
    });
  });
});
```

3. Keep your data mocks closer to the test

You need this, believe me! The project I’m currently working on relies heavily on Redux, but we’ve slowly started to implement the Context API. When I started working on the project, the only tests we had were snapshot tests and one big mocked Redux store which was really hard to maintain. Not only that, it was really hard to find what exactly your test data was. As we started moving to the Context API, we were able to mock the API calls used in the context and switch to data objects per API call. Now, not only are we testing more realistic scenarios, we are also one click away from the test data and do not need to wonder where something came from.

4. Always start with a test

I never thought I would say this! Start with a test. Even if it is only an assertion of whether the element you expect to see is there, write it before the actual code.
And make the test fail on purpose — as part of the TDD core rules. Then you refactor the failing test with minimal code, just enough to make it pass. And again you fail the test and refactor with minimal code, repeating this cycle until all your criteria are satisfied. Of course, each criterion should be covered by a separate test. The same is valid for the corner cases: when you take a corner case into consideration, you first write a test about it, then implement it.

5. Make the test fail for the right reason

This is one of the most important things, along with the setup. You must know why your test fails. You cannot write a test that asserts whether some element is there and then, all of a sudden, see some random error that something else is undefined. That means you are probably missing something in the setup. You can, of course, continue to write your code, but your test will never pass.

6. Mock third-party library dependencies that you do not directly need

Each library has its own behavior, and you do not want to test that. That is the library’s job, right? So mock everything you do not need from that library, or even mock the entire library if your tests do not depend on it. Let’s say you use a library for validation and you have all your fields validated. You will not test whether the library’s default form errors are displayed; you will test the custom validation you may have. Sometimes you will not even be checking the validation at all, just using the form. You may get errors saying the library wants to interact with the component after the component was destroyed, and that is fine, because at that moment you are not testing the library’s behavior. For these situations you can just mock the part(s) of the library that do this.

7. Do not over-snapshot

Avoid making a lot of snapshots. You do not need to take a snapshot of everything.
Snapshots can be helpful while you’re writing your test, so you do not need to rely on the browser to check the output or some custom classes’ behavior. But you will end up just updating the snapshots and not checking what was changed and why. You can easily find yourself misinterpreting an error if you rely only on snapshots.

8. Use mocked API calls when possible

If you want to test behavior as close to the real thing as possible, I suggest mocking the API calls and using data mocks directly in the store/context/component props as little as possible. This way you test the whole process of fetching the data: whether all the API calls are made and everything behaves as expected. Using any mocking library, you can easily mock API calls with the correct method and status code. For example, if you use the same URL for a GET and a POST, you can mock them separately and configure only the response and status. I suggest you mock your API call exactly the number of times you expect it to be used in your test/test suite. We do not want to mock an API call 100 times while we expect it to be called only 10 times. If we mock only the expected number of calls, we prevent further errors and can easily discover whether our component behaves as expected. I also recommend resetting the mocking library’s history after each test, so you can be sure that what you are testing has happened exactly at that point of the testing process. We do not want false positives, right?

9. Add tests as you add new features/rules

As I’ve mentioned above, each of your rules should first be covered by a test and afterwards implemented. With this you strictly follow TDD. One simple scenario: imagine that we want one input field and a button. The value of the input field needs to be between 1 and 10000, and we want to check whether the button behaves correctly. We first write a test to check that if the value is between 1 and 10000, the button is enabled.
The test should fail, and then we go and implement the input field and the button. After we’ve implemented this, we write another test case for values below 1: the button should be disabled for these values. The test will fail because we haven’t implemented anything like this yet. We go back and implement that logic, run the tests again, and they should all pass. Afterwards, we continue with the tests for values above 10000, and we repeat the process until the end.

10. Try to wrap the particular test with multiple describe blocks (as much as possible) to make it more understandable

When you’re writing a test, imagine you are giving someone a book in which your code’s behavior is documented. You want this to be a good book, where everything is explained in detail and well documented. I will give you one bad and one good example. Let’s say we have a users form with a button that becomes disabled at a certain point in time (or after some condition is fulfilled).

Bad:

```javascript
describe('Users form', () => {
  it('Disabled button', () => {
    // the test
  });
});
```

Good:

```javascript
describe('Given the users form', () => {
  describe('When the user tries to update the username field', () => {
    describe('And he enters an invalid username', () => {
      it('The save button should become disabled', () => {
        // the test
      });
    });
  });
});
```

In the first example, the person who will read the test and try their best to understand what is happening in your code (this can be you in the future) has no idea when the button should be disabled. They just know it can be disabled in some cases. They will then need to dive into the code, maybe try to refactor the test, and even do a more thorough check to get an overall better understanding. In the second example it is obvious why and when the button should be disabled. There is no need to read the test code in detail to figure out what is happening — just read the describe blocks, that’s it.
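The rule 9 walkthrough above boils down to a single predicate. The sketch below assumes a hypothetical isSaveEnabled helper holding that validation logic; in the real component the same checks would drive the button’s disabled prop and be asserted through the rendered DOM:

```javascript
// Hypothetical validation logic for the 1-10000 rule described in rule 9.
function isSaveEnabled(value) {
  return Number.isFinite(value) && value >= 1 && value <= 10000;
}

// The three cases from the walkthrough, as bare assertions:
console.assert(isSaveEnabled(5000) === true, 'in-range value enables the button');
console.assert(isSaveEnabled(0) === false, 'values below 1 disable the button');
console.assert(isSaveEnabled(10001) === false, 'values above 10000 disable the button');
```

In Jest, each case would be its own it block, written (and failing) before the corresponding branch of the logic exists.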
And for the end, one more tip: bear in mind that two pairs of eyes are always better than one. We’ve all been there. Try to explain the issue to your colleague, your friend, or even your rubber duck when you cannot figure out what is wrong. Maybe they’ve experienced a similar situation, maybe they will have an idea of what you are missing in your test, or simply talking to your rubber duck might give you the solution you are aiming for. If you are still here, that means you’ve read the whole post. Thank you; it means a lot to me. I hope I’ve contributed to a better understanding of front-end TDD and some of its possibilities, or at least encouraged you to adopt TDD in your everyday coding.
https://medium.com/swlh/10-rules-of-front-end-tdd-to-abide-by-f48987dc2ffc
['Frosina Stamatoska']
2020-12-17 12:03:09.284000+00:00
['Software Testing', 'Front End Development', 'React', 'Tdd', 'Jest']
What Does Love Mean To You?
Love is thrown around as a word, yet we’re not taught to define it. What does love mean to you? The truth is, it means many different things to different people. In this article, we’ll explore the different ways you can interpret love, starting with the love languages and moving on to integrity, authenticity, and a sense of purpose. Love is an individual perception of the world. Love is both connected and personal. Some people aren’t our people. There are nearly eight billion people on this planet; they won’t all have meaningful lessons for us, because the energetic resonance is more potent in some than in others. What do I mean by energetic resonance? Have you ever met a complete stranger and known that they will be essential and intricate to your life story? It’s a common thing that we play off as weird or as chance. The idea behind this is simple: if someone has an experience in their life that is closely related to yours, then you’ll feel that resonance in your body. They’ll help you to understand your life story by mirroring it for you. Some people do this fleetingly; they mirror some aspect of pain, something you can’t fully sit with. Have you noticed these people stay in your memory and feeling, even if they don’t remain physically in your life? On the life journey, where we each unfold our true nature, or our spirit, into the world, we come across people who naturally resonate with our interests and vibrancy around certain aspects of life: social dynamics, including shared joys, social justice and issues, sports, habits, values, purpose. This shared resonance allows us to access belonging, connectedness, and hope. These things allow us to understand that we’re not alone, that someone out there cares, thinks we’re unique and valuable, and honours our process. Having others’ interest and curiosity in our lives is vital to a healthy life. It’s part of love. You feel love.
Let’s start with the five love languages:

Words of Affirmation: saying supportive things to your partner.
Acts of Service: doing helpful things for your partner.
Receiving Gifts: giving your partner gifts that tell them you were thinking about them.
Quality Time: spending meaningful time with your partner.
Physical Touch: being close to and caressed by your partner.

People receive love predominantly in one of these ways, perhaps two. That was a revelation to me when I learned it. If I’m not speaking someone’s love language, then they won’t feel love. Their love well will run dry. It gets interesting, as typically the love languages are things you lacked in your conditioning as a child. That’s not to say you cannot go on a healing journey and change that, but that’s generally how it starts. The conditioning in western culture told me that my love language was acts of service; however, when I did some more in-depth work I found that this was me trying to fulfil the cultural narrative of being the provider, combined with the conditioning of needing to show someone how to love me by demonstrating it. I found that my love language is physical touch, closely followed by quality time.

Connection

Connection is essential for all human beings; we’re wired that way. We all need our closest people to care and actively engage in our lives. That’s why trauma can be so painful and isolating: it dislocates you from being able to receive and give connecting actions. Your life will be a representation of the five people you spend the most time with. Not only because of the words those people use and the shared values you hold; it’s also the somatic imprint that you share. Mirror neurones in the brain activate strongly with sound, and we’re also closely scanning our environments at all times to work out whether we’re safe and secure, and whether we’re in relationship with our environment in the right way.
A great example of this is a relationship dynamic that you can’t get out of. There is inherent and implicit information in that dynamic, based on the routine and habitual things that you express somatically and verbally. Connection is essential to a feeling of belonging, and deep relational space breeds hope. Reciprocal and shared love is born in this area. When you share with someone you love, within the sharing is an appreciation and recognition that you will receive love back at some point in time, in the way that you appreciate. The constant goal of relating.

Being seen, heard, and belonging

The triad of love. It doesn’t necessarily have to mean you’re right. Being seen, heard, and having a sense of belonging allows you to make mistakes and develop; grow; fail forward. Feeling safe in a relationship fosters a trust that it doesn’t matter what you say or do, as long as you stay aligned to your values; if your intentions are right, then you will get there eventually, and the path doesn’t have to look perfect as you do that. Furthermore, being seen, heard, and feeling that you belong gives you a sense of certainty that the other person can and will challenge you to be your best self in a loving way. Some of the most connected and meaningful times, to me, are when my friends, colleagues, and intimate partners have shown me where I am acting out of alignment with my purpose and values. Especially in my men’s group.

Shame and love

Shame and love? Surely those two don’t interconnect? They are part of our world, and so they must. I’m not talking about toxic shame, which is a disconnecting and disempowering thing; I’m talking about healthy shame. Healthy shame lets you know your finiteness in the world. It’s akin to a community bringing a young man who is full of hubris back into the community’s embrace before he does something destructive. Shame is the natural way of doing that; he needs to recognise his finiteness, his part in the wider whole.
Healthy shame is a force that pushes and pulls us to integrate with others; it lets us know that we need others, that we cannot exist without them. It’s integral to the realisation of spirituality: that we are a part of something much bigger than ourselves. Ever stood at the edge of a cliff and marvelled at the vastness of it all? That wonder is partially born from a healthy shame. “Our healthy shame is essential as the foundation of our spirituality. By reminding us of our essential limitations, our healthy shame lets us know that we are not God. Our healthy shame points us in the direction of some larger meaning. Our healthy shame is the psychological ground of our humility.” ~ John Bradshaw

Nature’s love

We inherit the relationship with the earth, the sky, the Universe. We are born from it, and when we die, we will return to it. Death is the only constant in life, and it is the force that pushes us to live viscerally; pulsing, pushing, pulling at life, squeezing all the juice we can from the fruit’s nectar. Life works for you because you are life. It moves through you just as much as it is the very fabric of your being. What I find fascinating is that all the elements of the Universe are born from huge explosive acts in space. Physical elements are born in the oven of a dying star. It can’t be any other way. Death always sows the seeds of life. It’s impossible to be disconnected, even from the massiveness of a dying star, because we are the elements it spews forth. The only thing that can disconnect us from life is a refusal, in our minds, that this is so. Recognise that you are life, interconnected to all things. You can then start to remember that life works for you, with signs and signals that you were missing: how you react to something, what your intuition tells you, how you feel about a situation, aligning to your values and purpose.
All of these things can and will guide you to a more profound connection to your life; help you build the right security, and embrace things in your life that will bring you abundance. The upper limit is a concept that comes into play here; both Dr. Ron Seigel and Gay Hendricks describe it. If you cannot imagine yourself transcending a boundary, then you won’t be able to, or you’ll do it briefly, then sabotage yourself so you can return to a place where you feel safe. The body-mind is weighted for comfort, not change. It’s an active process. Once you are connected to the Universe, and you recognise that all life and possibility flow through you, you enter a new space of belonging; a deeper space, where the trees, plants, animals and rocks, and all other sentience for that matter, become a part of your connected world.

Integrating thoughts

O.K., don’t get overwhelmed; this is a long process of awareness. You can start with one of these concepts: journal, meditate, practice safety. You are exactly where you need to be, and everything is unfolding as it needs to. What is one aspect of love that you could deepen? What is one action that you could take today to do that? We start to see that the Hollywood idea of love (the shadow hero who rides in on his steed, slays the inner or outer dragons, and bursts open the door to find the princess before returning to Happy Ever After land) is so narrow. Love is a deep-rooted and rich experience. The story we’re told doesn’t include five years later, when the prince and princess need to co-inhabit a space in reverence, respect, and empowered safety. Love unfolds over many concentric layers; it is fractal in its opening. You must have self-love to start, then intimate friends, family, a life partner, a community, a nation, humanity, being part of Gaia and the natural world, the Universe. All these things are interwoven with the golden web of one’s individuality and uniqueness.
What does love mean to you, and how do you define it?
https://medium.com/hello-love/what-does-love-mean-to-you-fa6babeb9cf3
['Peter Middleton']
2020-12-01 01:07:13.791000+00:00
['Self Improvement', 'Self', 'Love', 'Self-awareness', 'Growth']
Facts About Major Social Media & Their Active Users Stats [Infographic]
Designed by Web Design Company Kolkata. Over the last half-decade, social media has become an integral part of our daily activity. The major social media platforms have managed to attract a huge share of internet traffic as their users. The scenario changed drastically, and the number of active social media users jumped to a new high with the increasing number of smartphone users. Today, the monthly active user counts of the major social media platforms have already crossed the million mark. Social media harnesses this huge user traffic to generate revenue through different forms of digital promotion and advertising with the help of its users. Unika Infocom, a web development company in Kolkata, developed the following infographic with some interesting facts about the leading social media platforms.
https://medium.com/unika/facts-about-major-social-media-their-active-users-stats-8711bcca1d61
['L Rahaman']
2017-05-29 20:21:53.437000+00:00
['Infographics', 'Social Media', 'Technology', 'Marketing', 'Digital Marketing']
Is Graphic Design Art or Science?
Graphic design is a creative process that combines art and technology to convey ideas. It is the procedure of communicating visually, using typography and pictures to present information, typically employed when visual sophistication and ingenuity are needed to present text and image. It may even be applied to the layout and format of informative material to make the information more accessible and more readily comprehended. Designing a masterpiece graphically is the art of combining text and graphics to convey an effective message. It is primarily utilised in the design of logos, pamphlets, newsletters, posters, signs, and other types of visual communication. Graphic design is the use of words and pictures to pass on information or to create a specific visual effect. This art form may also be referred to as commercial art due to its application to marketing and its essential contribution to company function. Graphic design practice includes a wide range of cognitive abilities, aesthetics and crafts, including typography, visual arts and page layout. Graphic designers possess a distinctive ability to sell a product or idea through powerful visual communications, and are asked to undertake the challenging job of being creative every day. Combining visual rhetoric with the rhetoric of user interaction and online branding, graphic designers frequently work with web designers to create both the look and feel of a website and improve the online experience of its visitors. Colour is another strong way to help users find their way around a website, and colour-coding sections of the site helps users determine where they are. Graphic design adds a visual and psychological context to the strictly intellectual text on the website. The main tool for this art form is, of necessity, the creative mind. 
With the arrival of computers and software applications, the task of the designer became a little easier, as these have provided more efficient production tools than traditional methods. Graphic design is a creative profession, and things that were once conceived only in the mind are brought to life through skill and imagination. There is a disadvantage to the addition of graphic design on websites, however. Many developers have tried to force the Web to be what it isn’t, creating inefficient and sometimes unusable websites. There’s a propensity to forget that words, and not pictures, are the building blocks for the majority of websites. People are clearly visually orientated, and their reaction to a site’s appearance and visual framework plays a strong part in how they communicate with it as a whole.
https://medium.com/freebiesmall/is-graphic-design-art-or-science-f15da3d54679
[]
2017-03-13 17:03:52.795000+00:00
['Design', 'Design Story', 'Creative Design', 'Graphic Design', 'Process Design']
The first terror attack of the Trump era was against Canadian Muslims
The first terror attack of the Trump era was against Canadian Muslims Psychometrics and the mainstreaming of Christofascism For immigrant folk like me, the question of the Trump administration wasn’t when authoritarianism would begin but how ghastly it would be. Justin Trudeau spoke out in the wake of a mosque shooting that left 6 innocent Canadians dead at the Quebec Islamic Cultural Centre, saying, “We will grieve with you. We will defend you. We will love you. And we will stand with you.” I wonder what words Trump had offered the prime minister, given that 1 in 2 Americans support his ban on Muslims, or that the “lone-wolf” terrorist was pro-Trump. I also wonder whether there are really “lone wolves” in a pack of rabid nativists. Given how Americans view other American Muslims (despite not knowing one), it’s hard to understand what condolences this country could offer to families as they were notified of the injuries and deaths at the crowded mosque. Standing at the San Francisco airport while we protested the xenophobic detainment of travelers, I hadn’t realized that Canada had just witnessed terrorism, nor that it was a religious hate crime of the nature that Bannon and his ilk have promoted. Meanwhile, widespread support of the ban and the dearth of reporting on a terrorist attack at a place of worship exemplify the media’s complicity in Christofascism, as well as our own privileging of alt-right narratives. It’s alarming how effectively the Trump administration uses psychometrics to monitor our digital footprints and manage the nation’s emotional responses. And given how accurately our authoritarians-in-chief can now track individual sentiment thanks to public Facebook data and disinformation, it’s unclear how we effectively push back against the mainstreaming of white supremacy, as seen in the downplaying of white terrorists or popular support of racially targeted policies. 
The only sincere apology the administration could offer would be for their complicity in a terrorist attack, an apology that Trump, Bannon and Giuliani should jointly author, just as they did the immigration executive orders. I’m curious how, as Spicer says, “the president is taking steps to be proactive instead of reactive when it comes to our nation’s safety and security” when those steps inspire acts of terror in North America. No surprise, then, that Bannon’s efforts pushed Canada to blacklist Breitbart, saying, “The Government of Canada does not support advertising on websites that are deemed to incite racial hatred, discrimination or the subversion of Canada’s democratic system of government.” His platform Breitbart has also led 5 Democratic reps to write a letter urging Trump to remove Bannon from the National Security Council, given that he’s “provided a platform for white nationalists and the alt-right, and he has also espoused a false theory of a violent clash of civilizations between the West and Islam that only serves to fuel violent extremism.” At least this much is clear: white supremacy has deeply instituted itself in the White House, which sadly aligns with the building’s construction by Black slaves. So as the economic contributions of immigrants to the nation swell, it seems Trump–Pence are poised to return America to a greatness built on exploitation and injustice. Some will argue that America never truly divested from those values, but that’s an argument about how neo-liberalism had eroded our democracy. This week the United States was downgraded from a full democracy to a flawed democracy. So it falls to us as citizens to govern and minimize the necropolitics of the current administration before more marginalized groups are traumatized, the EPA gets dismantled, or a war is launched, all while countering the mainstreaming of neo-Nazis. It is not that I’m in disbelief of this historical moment. 
I left India as a child because of the poverty that colonial exploitation had wrought, and later left the United Arab Emirates because of the imperialist Gulf Wars. Growing up in Canada seemed to offer some respite from the white legacy, except for the occasional nativist taunting of immigrants. The implicit popularity of white supremacy has stopped shocking me. What should shock us is the degree to which a small group of wealthy opportunists can explicitly overturn a nation in their white supremacist image. So how does a conservative turn from sociology student to white supremacist eugenics? As Flavia Dzodan writes, “If you are going to ask readers to decide if it’s OK to punch a nazi, you better provide them with all the facts to answer that question.” But those facts will be all that much harder to realize under a psychometric regime in a flawed democracy. Vikram Babu is a product designer and small business owner who thinks we should not normalize white supremacy. You can troll him on Twitter.
https://medium.com/endless/the-first-attack-of-the-trump-era-was-by-a-white-terrorist-against-canadian-muslims-ab4a132096a5
[]
2017-02-02 00:59:26.233000+00:00
['Tech', 'Big Data', 'Canada', 'Donald Trump', 'Politics']
Difference Between AI, ML, DL and DS
What are Artificial Intelligence, Machine Learning, Deep Learning and Data Science, and how are they different? Nowadays, because of hype, these terms are being used interchangeably. In this blog, I will try to briefly distinguish between them. Note: there are many different definitions of these terms available on the internet, which are also certainly valid; the explanation below is not the only way to interpret them. However, I will try to explain these terms in simple language, with the help of a few example applications. In this blog, I will use AI for Artificial Intelligence, ML for Machine Learning and DL for Deep Learning. What is Artificial Intelligence? Humans can think and make decisions. If a decision is implemented in an appropriate way, it becomes a success; if it is not implemented well, it becomes a failure. In simple terms, this decision-making and thinking process can be called “intelligence”. In the same way, teaching machines this decision-making process is called Artificial Intelligence. Artificial Intelligence is teaching computers to think in order to solve a problem; or, Artificial Intelligence is making computers simulate the kinds of things that humans can do (like playing chess or driving a car, and many more) and solve such problems in an ultimately better and faster way than people can. Creating a better AI-enabled machine always depends on humans (in the end, we are the ones using our brains to create it). People make mistakes sometimes, and in the same way machines also generate incorrect results in some instances. For instance: driving in hilly areas with many steep curves. In such cases, people who aren’t experienced at driving in such hilly areas sometimes fail to notice a vehicle coming in the opposite direction because of a curve, which leads to an accident. 
In the same way, a self-driving car (which is an AI-enabled machine) could also fail, as it might not be trained to drive on such steep curves. So, don’t assume Artificial Intelligence to be perfect. What is Machine Learning? Machine Learning is a part of Artificial Intelligence, and it is one technique for creating an AI-enabled machine. Machine Learning is about using a bunch of statistical tools to learn from data; or, Machine Learning is making computers programmed for a given task learn from their environment and improve their performance over time. Machine Learning is a part of AI. This technique became so successful that it made a lot of people think Machine Learning and Artificial Intelligence are synonymous. What is Deep Learning? It is a kind of Machine Learning that uses neural networks, which mimic the network of neurons connected in the human brain. Deep Learning is a recent technique in AI compared to the other two. There are a few complicated tasks that are difficult to solve using statistical models with machine learning; Deep Learning can solve such challenging tasks much faster with the help of GPUs. Deep Learning is a part of Machine Learning. Most of the advances in Artificial Intelligence in the last 10 years have happened in this small subarea called Deep Learning. Example to understand AI, ML and DL: let’s assume we are building a self-driving car
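The idea of "using a bunch of statistical tools to learn from data" can be sketched in a few lines of Python. The braking-distance numbers below are invented purely for illustration; the point is that a simple statistical model (here, ordinary least-squares linear regression) extracts a pattern from example data:

```python
# Minimal sketch of "learning from data": fit a line y = w*x + b
# to toy data with closed-form ordinary least squares.

def fit_line(xs, ys):
    """Least-squares fit for a single input feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var
    b = mean_y - w * mean_x
    return w, b

# Invented example: braking distance (m) observed at various speeds (km/h).
speeds = [10, 20, 30, 40]
distances = [5, 11, 14, 20]
w, b = fit_line(speeds, distances)
print(round(w, 2), round(b, 2))  # → 0.48 0.5
```

A deep-learning approach would instead stack many learned nonlinear transformations (a neural network) and fit them with gradient descent on a GPU, trading far more data and compute for far more flexibility.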
https://medium.com/analytics-vidhya/difference-between-artificial-intelligence-machine-learning-and-deep-learning-and-data-science-2fb482efb2b8
['Chamanth Mvs']
2020-06-02 16:10:31.449000+00:00
['Deep Learning', 'Artificial Intelligence', 'Machine Learning', 'Data Science']
[COVID-19] One Important Recommendation You May Not be Hearing
[COVID-19] One Important Recommendation You May Not be Hearing Especially if you don’t have it yet Photo by Ani Kolleshi on Unsplash My daughter’s preschool is now closed until the end of the month. My wife works as a nurse in a hospital that has one of the busiest emergency rooms in the country. People camp out at stores hours before they open, then run the shelves empty by mid-day. Every day, we’re flooded with more information in our feeds about this pandemic. And it’s all pretty scary the more you think about it. I was contemplating whether I should voice my perspective on this or not. But it’s way too important not to. Wherever you are in the world, you’re hearing about how Coronavirus (COVID-19) is rapidly spreading. I mean, countries have shut down over this. First, let me say yes, it’s important to take the necessary precautions. I’m not writing this to argue about that. But what I do want to share is something that’s been on my mind about the biggest problem in all of this. There’s something much worse that’s spreading a lot faster. It’s FEAR. And here’s the truth. The FEAR of the Coronavirus is deadlier than the virus itself. You see, when you are in fear, a reactive part of your brain called the amygdala takes control of your actions. You enter into a fight-flight-freeze response. (It’s what's causing people to buy way too much toilet paper.) And when you are in this reactive state, your body starts producing a steroid called cortisol to help you handle the stress. And guess what cortisol does to your immune system? It WEAKENS it! We have bacteria, viruses, fungus and a whole array of foreign particles we are exposed to every day, but it’s your immune system that prevents you from getting sick. When we’re stuck in fight-flight because of worries or anxiety, our bodies waste a ton of energy because they actually think they might die at that very moment. And all that lost energy makes our bodies weaker and more vulnerable. 
Being afraid is literally making you even more susceptible to getting sick. And guess what else happens when you stay in fight-flight-freeze mode? You’re in a SELFISH, self-protective state. You literally lose the capability of thinking or having empathy. This is where the racism is coming from, because people are so afraid that they’re only looking out for themselves. It’s what prevents us from thinking clearly and makes matters worse. Such as the people who are in denial and go outdoors when unnecessary and put other people at risk. Or those who take up physicians’ time saying they think they have the virus when their symptoms aren’t even related. This takes time away from serving people who are actually infected. How to Powerfully Boost Your Immune System So while taking proper precautions during these times, do the things that keep your immune system strong. Taking the steps to help your brain feel safe puts your body out of the fight-flight state and into a rest-and-digest state where your body is in recovery, which ultimately maximizes your health. So how can you do this? Practice the act of shifting your focus to things you are grateful for. Take some time to think about the things you have that others don’t. Or if that’s hard, try thinking about all the people on the front lines working their asses off to contain this pandemic. Whatever it is you do, take notice of things until you actually feel grateful. Doing this immediately gets you out of the fight-flight state. And practice empathy. Spend time connecting with your loved ones that you are home with. Laugh with them. Tell them you appreciate them. Spreading that joy to others boosts their immune systems as well. Spend these next few weeks as if it will be a long while before you again have this rare time to connect with your loved ones at home, or just have your alone time to recharge. And send your love to those who are getting ill, because this is bigger than just ourselves. 
Turn off The News And Turn on Your Personal Connections The statistics you are seeing all over the media are not necessarily 100% accurate. For example, the mortality rate will likely end up being lower as we continue gathering data, especially because of all the unconfirmed cases that are out there. And if you’re really curious about the data, search for science-based discussions from credible platforms other than the news. But limit that as well so you can take your mind off of this for a moment. With that said, take the proper precautions. Practice proper hand hygiene. Avoid large gatherings. And practice social distancing. But don’t do all this out of fear. Do it as an act of service, where you actually are preventing the potential deaths of more people. And instead of the news, watch something entertaining that makes you laugh and smile. Or perhaps break out that board game you used to love playing. Or get some sun! Going for a walk outside and getting some vitamin D also boosts your immune system. Or maybe pick up an old hobby you haven’t had the time for until now. Whatever it is, being less concerned and afraid is what will help you make the best decisions and make your immune system stronger. Why Feeling Brave During Scary Times is Easier Than You Think For those of you who read this and feel like it’s cliché advice because it is hard for you to implement: I want to take a moment to provide some food for thought. In 2009, Dr. Alvaro Pascual-Leone ran a study in which he had a group of people learn to play a simple melody on the piano. Then he split them up into two groups. One group practiced the melody on a keyboard for two hours a day over the next five days. The other group sat in front of the keyboard for the same amount of time, but didn’t play. Instead, he had them imagine playing the melody on the keyboard. Photo by Michal Czyz on Unsplash During the whole study, Dr. 
Pascual-Leone was mapping the brain activity of all the participants before, during and after the experiment. And the results were shocking. He found the exact same brain changes happened in both groups. This meant that the brains of the people who only imagined playing the piano changed as if they were really playing it. What does this mean? Our brains do not know the difference between imagination and reality. So how does this apply to you? If you’ve been anxious, worried, or afraid with all the uncertainty that exists around us, it means you can choose to be courageous, bold and confident simply by imagining what that would feel like. This means how you feel is YOUR choice. You Have a Lot More Power Over Your Feelings Than You Think Photo by Robina Weermeijer on Unsplash I want you to sit with this truth for a second. Your thoughts produce your feelings. If I had a brain scan hooked up to you, I could see what it looks like when you have a thought about something. You’d see a pulse of electrical activity occur at that moment. And that electrical activity stimulates the release of chemicals called neuropeptides that communicate with your body to produce a feeling. So the thoughts you have determine how you feel. If you’ve ever thought about someone you loved and then felt all bubbly inside, that’s what’s happening. Or if you think scary thoughts about all the bad things that can happen to you, then you feel your heart pounding along with the fear that comes with it. And if it’s proven that we can make changes occur in our brain simply by imagining them, then it means we can get out of the self-protective fight-flight state by imagining things that make us feel good instead of afraid. It’s your decision. How to Activate Your Brain’s Superpowers And Body’s Immune System There’s one part of your brain that has all of your amazing capabilities, such as your decision-making, critical-thinking, and creative skills. But here’s the problem. 
Research shows the best part of our brain is turned off for about 70% of our adult lives. When we are in the selfish fight-flight state, we turn off this amazing part of our brain, known as the prefrontal cortex. And when your prefrontal cortex is off, you prevent your immune system from operating at its best. It’s also what makes you feel stuck and unable to figure out the best solutions for yourself, especially if you’ve been taking a hit financially. And worst of all, you do not have the capability of having empathy. So that means if you’re with your loved ones and you’re feeling afraid for them, you can try to justify it all you want, but the person you’re really focused on is yourself. So while people are behaving like this is the apocalypse, spend the time to help comfort those around you who are genuinely afraid. Meet them where they are and acknowledge that yes, this can be scary, but do what it takes to brighten them up. Not only will you make them feel better, but you will actually make their bodies stronger. I promise you, it will go a long way. More than ever, this is a time to stand together. So let’s make empathy and connection even more contagious than fear.
https://medium.com/the-mission/why-coronavirus-should-be-the-least-of-your-worries-d6ed6abe75bc
['Dr. Eugene K. Choi']
2020-06-06 04:01:34.378000+00:00
['Self Improvement', 'Personal Development', 'Inspiration', 'Life Lessons', 'Self-awareness']
4 Ways Data Science Could Revolutionize the Testing Phase in Nearly Every Industry
The most successful companies in all industries typically have testing phases that help them develop new products, test new materials, guide marketing campaigns and more. Data science and big data platforms could collectively upend the testing phase in almost every industry, helping companies save money and better assess their results. Here are four ways that may happen. 1. Improving the Efficiency of Human Testers Data science won’t remove human testers from the equation, but when used properly, data analytics could help those people more quickly extract valuable insights from collections of data. For example, people on testing teams get feedback from various sources, ranging from comments on a social media feed to the remarks provided during a focus group. A data analytics platform can help testers make sense of the compiled data and spot trends within it. With the help of data science, people can pick up on patterns that inform the next steps in a product’s development. 2. Assessing Product Safety One of the reasons the testing phase is so crucial is that it can help product manufacturers identify issues that could lead to product recalls. IBM has a content analysis platform that screens for indicators that could relate to severe issues with products such as cars. Although manufacturers can and should use data analytics after the initial testing phase, it’s also smart to do so before a product has an extensive reach in the market. By taking that approach, they can save time, money and headaches by identifying possible safety flaws before they can harm the general public or people involved in small tests. Data analytics platforms examine enormous quantities of data much faster than humans could without help. That means those tools can identify warning signs that may indicate there are problems with a product going through testing. Testing simulations created with big data could also project what may happen when people use products in particular ways. 
For example, they might indicate that the material for the lid on a travel-friendly coffee mug becomes overly flexible after less than 250 uses and that the problem results in an inadequate seal that’s not immediately evident to a user. Then, a person might get burned if the lid falls off when they take a sip. In that case, product engineers would know it’s time to go back to the drawing board and find a more suitable material. 3. Streamlining Regulatory Requirements Construction, aerospace and several other industries must put their products through fire and flammability tests. They reveal what happens to a product following exposure to an open flame, or how long a product can resist fire. In the case of some fire-resistant doors, for example, they should tolerate flame exposure for at least 90 minutes. Data science can help companies take care of any required flammability tests in systematic ways, ensuring that their products meet or exceed what the regulations dictate. Also, even when an enterprise’s industry does not make fire testing mandatory, having it carried out can create a selling point for customers who want to avoid unnecessary risks. Big data can also help in another way by finding at-risk buildings during fire inspections. City officials in New York utilized this method with a data analytics platform that looked for more than 7,500 risk factors, making inspections nearly 20 percent more accurate. Public buildings have to undergo periodic checks, and this is an example of how data-driven tests have value even after initial testing happens. 4. Decreasing the Time to Market The world’s most competitive companies know how essential it is to release market-ready products faster than other entities. Succeeding in that feat leads to long-term profitability and dominance in the sector. Many companies rely on big data in the testing phase to get products ready for release faster. 
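The New York fire-inspection example boils down to ranking buildings by modelled risk. As a hypothetical sketch (the factor names and weights below are invented for illustration, not the city's actual model of 7,500 risk factors), prioritisation can be as simple as computing a weighted score per building and sorting in descending order:

```python
# Hypothetical risk-based inspection queue: weight each risk factor,
# score each building, and inspect the highest scores first.

WEIGHTS = {"past_violations": 3.0, "building_age": 1.5, "no_sprinklers": 4.0}

def risk_score(building):
    """Weighted sum of a building's risk-factor values."""
    return sum(WEIGHTS[factor] * value
               for factor, value in building["factors"].items())

buildings = [
    {"id": "A", "factors": {"past_violations": 2, "building_age": 1, "no_sprinklers": 0}},
    {"id": "B", "factors": {"past_violations": 0, "building_age": 3, "no_sprinklers": 1}},
]

# Highest-risk buildings go to the front of the inspection queue.
queue = sorted(buildings, key=risk_score, reverse=True)
print([b["id"] for b in queue])  # → ['B', 'A']
```

A real system would learn the weights from historical inspection outcomes rather than hand-picking them; that data-driven calibration is where accuracy gains like the ones the city reported would come from.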
Besides helping enterprises feel confident that products function as intended during tests, big data analytics can help companies assess the most pressing unmet needs in their sectors. Procter & Gamble has a presence in more than 175 countries, and the people in those nations have varying needs. The company discovered that big data helped it uncover insights and nimbly make changes to its products or introduce new ones to boost success. Reducing the time it takes to bring a product to the market pays off when companies have access to reliable insights, too. It’s useless to release a product as fast as possible and realize later that not enough people want or need it to make the new product launch worthwhile. So, a company may first deploy big data to make sure the market desires the product enough to justify its creation, by performing tests on segments of the market to see how they respond to it. Next, it could use big data to cut down on redundancy or errors that could make the testing process take longer than it should. Both of these things help products enter the market sooner than they otherwise might. Causing Meaningful Enhancements Consumers probably don’t devote a lot of thought to the testing that their favorite products, or the buildings they love to visit, went through during development. But most of them realize that the tests tend to cause overall improvements. Big data could help product developers reach their goals with fewer delays.
https://towardsdatascience.com/4-ways-data-science-could-revolutionize-the-testing-phase-in-nearly-every-industry-5f7929e147db
[]
2019-04-17 21:27:51.959000+00:00
['Data Science', 'Big Data', 'Towards Data Science', 'Data Scientist', 'Testing Tools']