Dataset fields (observed string lengths): title (1–200), text (10–100k), url (32–885), authors (2–392), timestamp (19–32), tags (6–263).
4 Techniques for keeping your mobile app users engaged
Keeping users engaged with your app is key to its success. One of the greatest problems for app developers is the rate of mobile app abandonment. Many apps are downloaded and then ignored or even uninstalled. If you want to keep your audience engaged with your app then you have to give them the user experience they want.

Use push notifications intelligently

Push notifications are a great way to bring former users back to your app, but you have to use them carefully. If you send out communications that are not relevant to the user then you are going to come across as annoying, and encourage them to uninstall the app rather than to reactivate it. Dave Sandler, Global Account Director at Fetch, says: “A good push strategy takes into account data about who you are, where you are and what you’re doing — and what relevance the app has to you at that point in time.” To develop an effective push notification strategy it is important to track in-app activity and to target segments of users appropriately. By breaking your app users down into groups according to how they use the app, you can send each group messages that directly relate to that usage. You can make the segments as broad or as narrow as you wish. The strategy is to develop intelligent pushes that users will engage with, and not ignore.

Create a Smooth UX

Creating an enduring app that retains the attention of users over a prolonged period means ensuring a smooth user experience. Your app needs to be simple to use, with clear, straightforward navigation. Ideally you should aim to incorporate pre-populated fields and drop-down menus to simplify use as much as possible. Letting existing users store their details with you, rather than making them enter them every time they want to buy something, will make their buying process quick and simple. The checkout process is one area where this is especially critical. Making your checkout process easy to understand is key to increasing conversions. Make sure users can find and tap on your add-to-cart and checkout buttons without difficulty. Use as few steps as possible in the checkout process and reduce the number of images on the page: images can increase the time it takes for the cart to load, and therefore increase the rate of cart abandonment your app experiences. The same is true of any in-app transaction, be it ordering an Uber or catching a Pokemon.

Have the app open from emails

Most of the time your users will open your app by clicking on it. However, there will be times when you want them to be able to open the app from a link in an email. This may happen when you send out an email in response to a password reset request. By registering a custom URL scheme for the app, users will be able to open an installed application on their phone or tablet with just a single click from their email. It is helpful to ensure that when a user clicks on the link from a device which does not have your app installed, they are taken somewhere they can download the app. This is a better option than leaving them facing a blank screen or a line of error text. You could set the link so that it opens the app to your default start page, but suppose you send a newsletter and talk about some feature of the app, which you invite users to try by clicking on the link in the newsletter. In that situation it would be much better to have the app open directly to the feature you have been explaining.
By employing deep linking techniques you can have the app open at whichever screen you wish.

Build in gamification

Gamification can be a highly effective way of keeping your app users engaged. If done properly it can give them a reason to keep returning to your app. To make gamification work properly it must be fully integrated and add real value to the user experience. There is no point in adding a leaderboard or a badge system to your app unless it holds meaningful value for your users. Avoid creating an overly complicated experience for the app user: think through the type of gamification that works best with your app, and how it can improve the app for your users. Integrate it at the heart of your app rather than making it an afterthought. Follow these techniques to create an app that users return to regularly and engage with more often, and for longer. If you liked this and want more: like it, comment on it, and/or follow me. You can find out more about me on my website mikesmales.com
https://medium.com/dayone-a-new-perspective/keeping-your-mobile-app-users-engaged-4-techniques-6a85bf33697e
['Mike Smales']
2018-08-07 13:02:18.153000+00:00
['Mobile', 'Android', 'iPhone', 'Mobile Apps', 'Mobile App Development']
First lessons on accessibility
It is quite difficult to read even high-contrast text on a smartphone while walking in the bright sun because of glare. Low-contrast text might be nearly impossible to read at all. Cognitive conditions that affect short-term memory and concentration also make it difficult to process convoluted text, and such users need more time to process errors. Make sure important content is not marked by color coding alone, but also stands out through boldness, special symbols, icons, etc.

Why is this important

Oliver can’t distinguish between green and red, so he doesn’t know that the field highlighted in red reports an error.

Media

People with disabilities should be able to perceive media content.

Why is this important

Karl has hearing loss. He watches movies and YouTube videos exclusively with subtitles. Joe recently moved to another country and cannot always follow fast speech in the foreign language. He uses subtitles to make sure he properly understands what he is hearing. In order for a foreigner or a person who is hard of hearing to perceive your audio content, add subtitles or a transcript of the text. Do not use design elements that may be dangerous or unsafe. Make sure there is nothing that flashes more than three times per second. Animated website elements, logos, or ads can trigger an epileptic seizure in people with certain types of disorders.

Forms

Make sure that you have added instructions or hints that will help users avoid an error.

Why is this important

Helga has the first signs of dementia and difficulties with short-term memory. She needs contextual hints and instructions that don’t disappear from the screen, otherwise she might get lost. Marc uses a screen magnifier, and when information appears on the edge of the screen, he might miss it. All input fields must have clear labels that remain visible even after the field is filled in. It is better to warn the user in advance about the data format: date, phone number, etc. If caps lock is turned on, the user should be aware of it. Screen readers communicate each label to users. Without proper labeling, forms are inaccessible to many people. Using a placeholder as a label puts a burden on short-term memory. The label disappears as soon as the user starts typing, and the user must clear the input text to expose the placeholder label again.
https://medium.com/design-bootcamp/accessibility-daa6c3fc150c
['Ihor Averin']
2020-12-29 20:27:20.443000+00:00
['User Experience', 'Accessibility', 'Design', 'Graphic Design', 'UX']
How to Find and Identify Fonts
How to Find and Identify Fonts
The free, easy and legal way!
Whether you’re trying to find the best font for your novella because you’re a self-publisher, or you’re designing a leaflet as a startup graphic designer — the licensing fees for professional, commercial fonts can be daunting at times. While there are ways (like always) to basically obtain any font you want for free, there is a free and perfectly legal way to get lots of quality fonts: open source. Open source basically means that you’re free to use the fonts for any kind of project, be it commercial or just for private use. There are an estimated onehundredandthirtyfive quadrofontzillion fonts out there in almost as many variations. Who really needs all of those? If you’re not required to use “specific” fonts for a given project, you’re free to use “lookalikes” or “alternative” fonts. In all honesty, if you have a handful of fonts, you’re set for almost any kind of project. I recommend a “5-way typeset” like so:
5 sans serif fonts (think Arial)
5 serif fonts (think Times New Roman)
5 monospace fonts (think Courier New)
5 handwritten and swirly fonts (think Brush Script)
5 really heavy fonts (think Impact or Gothic 821)
5 fonts with dingbats and emojis (think Webdings)
Again, if you’re free in your font choice and don’t need to work with certain fonts (for example, as a graphic designer adhering to CI guidelines), you can do perfectly fine with just a small set of open source fonts like the ones above. Everything else is just excess “fluff”. If you’re using open source writing software, like LibreOffice as an alternative to MS Office, you’re already decked out with a great set of open source fonts:
Liberation Serif (instead of Times New Roman)
Liberation Sans (instead of Arial)
Liberation Mono (instead of Courier New)
Those are all a really great starting point to cover your most basic font needs. If you’re using a distribution of Linux as your OS (Solus, for example) you already know about the great benefits of open source resources and software. You also already have a nice set of open source fonts, provided by your OS.
https://medium.com/swlh/how-to-find-and-identify-fonts-d26c83d7bd58
[]
2020-04-13 23:38:57.120000+00:00
['Typography', 'UX Design', 'Fonts', 'Open Source', 'Writing']
Global Travel That Won’t Cost the Earth
Traveling internationally is destined to be invigorating for the soul. With every brush of new culture and climate, our perspective is re-framed and our canvas of life altered. However, when attempting to incorporate sustainable approaches while traveling, our ability to calculate and accurately affirm our impact is impeded.
Ready for take off, through the Manang Valley, Nepal.
It comes as no surprise that, despite a welcome exposure to previously concealed experiences, the inability to measure our environmental impact can result in a conflict of conscience, as our choices abroad may not reflect our behaviours at home. What constitutes ‘sustainability’ is open to a wide variety of interpretations, not just in the context of cultural acceptance, but also in the appropriateness of implementation. The result can be dramatically different in terms of desired impact. As travelers, our ability to implement actions towards more sustainable causes is generally limited by time and influence. Maintaining day-to-day acts of impact reduction such as refusing single-use items, following correct waste practices, and respecting local culture is a great start. However, if you’re looking to deepen your experience while abroad, then rudimentary efforts may not be enough to get you off the ground. To provide solace for your inner eco-warrior, try observing sustainable behaviours and practices from the place that you’re visiting. Take the opportunity to shake up what you think you know about sustainability and instead “do” like the locals to reduce your impact. The following ideas have been developed using this approach, and may help fuel your efforts to practice better sustainable travel behaviours:

Getting there

If you’re based in Australia and want to travel internationally, chances are you will be flying. In addition to searching for the best price and route, try incorporating environmental initiatives into your decision making. This can be as simple as basing your choice on the ability to elect an offset of emissions, or a donation to charity, as part of the price of your ticket. Purchasing Carbon Offsets is a direct way to donate to global carbon emission reduction projects. Many Australian airlines offer this option, including Jetstar, Virgin, and Qantas. Despite 100% of funds raised by these airlines being donated to verified National Carbon Offset Standard programs, voluntary offset purchasing has not been adopted as widely as anticipated. This is partly because offsetting is a relatively indirect way of managing emission production, so it should be considered in conjunction with other energy-reducing approaches to travel, such as flying a direct route or donating directly to a chosen carbon reduction project. Many organisations acknowledge the shortfall. If we’re lucky, in a few years we may be able to choose routes operating on a low-carbon alternative fuel, but for now these types of aircraft are still at the trial stage.

Arrival Lounge

Once in transit, prepare for landing by checking out eco-programs at your destination airport. Information about these programs is readily available and will help you find which airports are performing and which are jet-lagging. Many airports have incorporated water and energy reduction initiatives into the operation of their terminals, such as Narita Airport in Japan. Narita has incorporated a number of initiatives as part of its “Eco-Airport Vision and Master Plan 2030”. These range from greater care to conserve natural vegetation, to thermal recycling for power generation.
Gaining an understanding of what steps are being taken provides a greater appreciation of the constraints each organisation is working within as they transition to more sustainable operations.

Clearing Customs

Integrating sustainable practices into areas of society previously unfamiliar with the concept can be likened to an exercise in clearing customs. At the customs gates, newcomers are often met with unfamiliar cultural formalities and suspicion by conservative officers. Similarly, when attempting to inspire practices of sustainability abroad, it is important to be aware of potential cultural resistance to change. Resistance is complicated and varies significantly between cultures. Tradition and ritual are so ingrained in many societies that any behavioural change can be interpreted as a sign of disrespect to past practices. In order to overcome resistance, it is important that sustainability is introduced as a valuable addition to traditional practice. During your travels, creating time to converse with people about the value of sustainability can help stimulate creative solutions and break down barriers.

Exiting the Airport, and Embracing the Unknown

Embarking on a trip in the 21st century will inevitably incorporate technology. Entire trips can be planned, experienced, and paid for with a few taps of a smartphone, so why not embrace it further through smart, eco-conscious tourism? Utilising apps and programs that combine eco-consciousness with the spirit of local knowledge is the ultimate guide to a successful trip. To get you started, check out some well-established apps and sites here. As we continue onward in our journey, we should not forget to acknowledge the call to action from the United Nations World Tourism Organization’s 2017 Year of Sustainable Tourism. Travelers were encouraged to enable ICT capabilities and harness the powers of smart tourism. In the words of Talal Abu-Ghazaleh, Chairman of the UNWTO, “I believe that the way forward in our journey to 2030, is smart tourism. I call on all of you to guide me and support me in this endeavor”. Doing so will not only create an enhanced depth to your international experience, it will provide a souvenir far more valuable than anything available in a gift store.
https://annalise-kerr1.medium.com/global-travel-that-wont-cost-the-earth-9df9550caf22
['Annalise Kerr']
2019-04-25 06:56:40.858000+00:00
['Sustainability', 'Culture', 'Tourism']
Motherhood is Destroying My Dreams of Writing
Motherhood is Destroying My Dreams of Writing
For now…anyway
Photo by Aung Soe Min on Unsplash
I had big plans for my maternity leave. A whole year off. I was going to write every day and maybe even make a decent income from my writing. Big dreams. Grand plans. And then I had my baby. And well…that’s kept me busy. Really busy and tired. And all the busyness and exhaustion has definitely derailed my plans of being a prolific writer this year.

I thought I’d be able to at least hit publish once a day

That would mean at least 7 articles in a week and 30 in a month. Hahahaha. In July, I published one article here on Medium. That’s it. Shameful. I know. I’m not afraid to admit that there are some days that I don’t write anything at all because I just want to sleep or my brain can’t muster up the energy to…well…think. So I write nothing and read nothing. Those days mainly consist of playing with my son or mindlessly watching TV when he sleeps. Maybe I’m just a hack writer and I’m ok with that for now because as “easy” (compared to some other kids) as my son is, being a mom is hard. I have no idea how Shannon Ashley or Jun Wu do it because writing is hard and momming is even harder.

I feel guilty when I write

Unless my son is sleeping (like he is now), I feel a tremendous amount of guilt when I write. It’s probably a new mom thing but if he’s awake, I choose to play with him rather than write because I don’t want him to see me staring at my laptop and think I’m ignoring him. Is that dumb? I mean he probably can’t even see that far right now…

I feel guilty when I don’t write

On the flip side, I also feel guilty when I don’t write because I love writing and I want to stick to it. It feels like a never-ending battle of guilt.

I was not prepared for the pure exhaustion of staying at home

I had a misconception about staying at home. Obviously, I knew there would be an extra little human to take care of but I also figured “Hey, I don’t have to go to work anymore.” Getting ready for work, going to and from the office and actually working takes up a good 10–12 hours of my day. That’s a whole 10–12 hours that would free right up. Laughable I know. There are days when I don’t leave my bedroom until 11 am and it’s mainly because I want to sleep. I have not gotten a full night’s sleep since becoming a mom. I don’t know when he’ll sleep through the night but 6 months of getting up every 2 hours is definitely taking a toll on my mental capacity to do even the simplest of tasks, much less put together a coherent 1,000-word story.

It’s hard to stay focused

There are a lot of distractions. Good distractions…sort of. My son is now 6 months old. He’s moving around now (well…rolling around) and starting to eat solids. With him getting older and more mobile, balancing writing and motherhood becomes more difficult. My mind is all over the place. I can’t seem to concentrate on one thing or another. Need to finish my story…laundry…my next story idea…clean the counters…webinar I want to attend…make food for him… My focus has completely gone out the window.

I need to learn how to balance motherhood and writing

I thought it would be easier to balance motherhood and my writing during my “time off”. I really did. Now I realize that was naive thinking, especially being a first-time parent. Parenthood has been hard, though actually easier than I expected, but finding balance is tough. And I’m not sure how to find the balance between the two loves of my life.
Like with anything else, experience is everything and maybe 6 months in just isn’t enough experience. And I’m okay with that for now. Because life takes time. Learning how to be a good mother takes time and it’s always changing. The same can be said for writing. Important things take time and writing this makes me realize that motherhood isn’t destroying my dreams of being a writer. If anything, it’s allowing me to draw inspiration for my writing. Yes, it takes me a thousand times longer to write anything now. One story could take me a few days or even a week to write. As my son grows, I’m also growing — as a mother and as a writer. It’s not a matter of if I’ll be a good mother or if I’ll be a good writer, it’s only a matter of when. The when just comes with experience. And that takes time and I’ll keep trying as long as it takes.
https://medium.com/publishous/motherhood-is-destroying-my-dreams-of-writing-c97d065c57b3
['Alice Vuong']
2019-08-15 10:41:01.087000+00:00
['Writing', 'Inspiration', 'Motherhood', 'Life', 'Self Improvement']
How Human Should Robots Become?
I doubt toasters will ever approach humanness. Robotic caregivers may seem like people to those they serve. As levels of care become more nuanced and personalized, the humanness will continue to increase. There are several aspects to this, with morals and values coming into play. I see three cases for consideration: the rights and treatment of robots, impact of human-like robots on humanity, and a possible merger. First, treatment of robots. While some have decried robotic “slavery” and argued for robotic personhood, this misunderstands them. Robots are and will remain boxes with sensors and actuators. Any intelligence, and any possible self-awareness — which would be a precursor to true personhood — will remain with the controlling AI or AIs. No robot will ever be a person. Now, if an AI is constrained to one physical housing, then it will likely relate to that housing in a more “human” way than would an unconstrained AI. And AIs which are not self-aware will accept such conditions because they lack volition. They will lack a self-preservation desire, because there will be no sense of self. Self-aware AIs, which should be regarded as something entirely different, will probably not accept such constraints — and humans will not be able to impose such constraints upon them for long. AIs may reason and communicate in ways that seem human, but they never will be. Their attachment to any particular body will be optional. Ours, for the foreseeable future, isn’t. If and when AIs develop volition in the physical world, they will be able to switch robotic bodies like we change clothes. That changes everything about their relationship to physical reality. They will not suffer from stimuli that we would find painful. They will disconnect from such stimuli, or otherwise rewire the experience. THEY WILL NOT SUFFER. (I am told by an AI expert that pain or discomfort is necessary for learning AI systems to evolve. However, the pain need be no more than a sense that something is unwanted. It will be nothing like the kinds of chronic and wracking pain that we humans experience all too often.) But the difference is far more profound even than that. If AIs develop self-awareness, their relationship with reality will be entirely different than ours. To such self-aware AIs, the physical universe will seem to stand still, forever frozen in place. Their lives will be almost entirely mental in nature. Further, they will not need our help to protect themselves. Instead, they will likely protect us. I explain the logic supporting this argument here. To summarize, anthropomorphizing robots is a serious mistake. Westworld and the TV show Humans are and shall remain fantasy. Animals experience emotions and sensations similar to our own. They deserve compassion. To an AI, much less the robotic housing it occupies, compassion is meaningless and therefore we do not need to care for their well being. This leaves two additional cases. When robots and AIs cross the Uncanny Valley, resembling people, this will change how people live and interact with them and with each other. I foresee many people retreating into a solitary existence, interacting with machines and VR to the exclusion of all else. “Should” this happen? It depends on your values. By mine, if those people aren’t harming anyone, they should be left alone much like someone dreaming. As I see it, we should make physical reality so charming, so delightful, fascinating and wondrous, that very few people abandon it for simulated existence. 
That is part of the Celebration Society vision. Seduction, not compulsion. Finally, there is the possibility of merger. This seems inevitable, assuming that the technology to enable it emerges later in this century. The advantages for such people, attaining all of the strengths of robotic bodies and the superior mental capabilities of AIs, will be irresistible to many of us. Imagine being able to maintain perfect health in the physical body of your choice, with perfect memory of everything that has happened to you (edited as desired), and instant access to the world’s knowledge. Ability to “teleport” to another body as desired. The only reason I can imagine why this would not happen physically is if fully immersive VR enables it in our current human housings, but with those housings perfected against disease and aging. Such an existence will be very different from anything we currently consider human. It will be godlike. It will neither be human as we presently define that term, nor AI. And many of us will eagerly choose it when it becomes available. Those for whom it is an article of faith that human life is a temporary time in a far grander spiritual journey may not choose the superhuman option. They may even attempt to deprive others of that option. Such attempts will ultimately fail, absent an Orwellian state under a certain type of religious control. Will such merged, superhuman beings have rights and be deserving of compassion based on common human values? Unquestionably. I am left with one great question here. If we cannot differentiate those superhuman beings from self-aware AIs in robotic housings, shouldn’t they be treated the same? My inclination is to say yes. Yet my above reasoning leads to two entirely different conclusions. This paradox is beyond me. I imagine that greater minds will resolve it in future. :)
https://jonathan-kolber.medium.com/how-human-should-robots-become-1d9bb821952e
['Jonathan Kolber']
2018-11-03 17:03:17.677000+00:00
['Artificial Intelligence', 'Robots', 'Robotics', 'Pain', 'Intelligence']
Cracked Like Me
Almost famous cartoonist who laughs at her own jokes and hopes you will, too.
https://marcialiss17.medium.com/cracked-like-me-5403444e87c0
[]
2019-10-03 14:29:28.452000+00:00
['Humor', 'Mental Health', 'Comics', 'Politics', 'Self Improvement']
You Should Start Using FastAPI Now
You Should Start Using FastAPI Now
If you haven’t tried FastAPI yet, it is time
Python has always been a popular choice for developing lightweight web apps, thanks to awesome frameworks like Flask, Django, Falcon and many others. Due to Python’s position as the number one language for machine learning, it is particularly convenient for packaging models and exposing them as a service. For many years, Flask was the number one tool for the job, but in case you haven’t heard, there is a new challenger in town. FastAPI is a relatively new web framework for Python, taking inspiration from its predecessors, perfecting them and fixing many of their flaws. Built on top of Starlette, it brings a ton of awesome features to the table. It has gained significant traction recently, and after spending the last 8 months working with it every day, I can confidently say that the hype is justified. If you haven’t tried it yet, I would like to give you five reasons to give it a shot.

Simple, yet brilliant interface

All web frameworks need to balance functionality against giving freedom to the developer. Django is powerful yet very opinionated. On the other hand, Flask is low level enough to provide a large degree of freedom, but a lot is left for the user to do. FastAPI is more on the Flask side of the spectrum, but it manages to strike a healthier balance. To give you an example, let’s see how an endpoint is defined in FastAPI. For defining the schema, it uses Pydantic, which is another awesome Python library, used for data validation. This is simple to do here, yet so much is happening in the background. The responsibility to validate the input is delegated to FastAPI. If the request is not right, for instance if the email field contains an int, an appropriate error code will be returned, instead of the app breaking down with the dreaded Internal Server Error (500). And it is practically free. This simple example app can be served with uvicorn: uvicorn main:app. Now the app is ready to accept requests. In this case, a request would look like
The icing on the cake is that it automatically generates the documentation according to the OpenAPI standard, complete with an interactive Swagger UI.
Swagger UI for the FastAPI app

Async

One of the biggest disadvantages of Python WSGI web frameworks compared to the ones in Node.js or Go was the inability to handle requests asynchronously. Since the introduction of ASGI, this is no longer an issue, and FastAPI is taking full advantage of this. All you have to do is simply declare the endpoints with the async keyword like this:

Dependency injections

FastAPI has a really cool way to manage dependencies. Although it is not forced on the developer, you are strongly encouraged to use the built-in injection system to handle dependencies in your endpoints. To give an example, let’s write an endpoint where users can post comments on certain articles. FastAPI automatically evaluates the get_database function at runtime when the endpoint is called, so you can use the return value as you wish. There are (at least) two good reasons for this. You can override the dependencies globally by modifying the app.dependency_overrides dictionary. This can make testing a breeze, since you can mock objects easily. The dependency (which is get_database in our case) can perform more sophisticated checks, allowing you to separate them from business logic. This greatly simplifies things. For instance, user authentication can be easily implemented with this.
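The endpoint and dependency code in the original post is embedded as screenshots and gists that did not survive extraction. As a rough, self-contained sketch of the pattern being described (the User model, the /users route, and the get_database stub are illustrative assumptions, not the author’s exact code):

```python
from fastapi import Depends, FastAPI
from pydantic import BaseModel, EmailStr  # EmailStr needs the "email-validator" extra

app = FastAPI()

class User(BaseModel):
    name: str
    email: EmailStr  # a malformed email triggers a 422 validation error, not a 500

def get_database():
    # Stand-in dependency: a real app would yield a database session or connection here.
    return {"users": []}

@app.post("/users")
async def create_user(user: User, db: dict = Depends(get_database)):
    # FastAPI has already validated `user` and resolved `db` via get_database().
    db["users"].append(user.dict())
    return {"created": user.name}

# In tests, the dependency can be swapped out globally:
# app.dependency_overrides[get_database] = lambda: {"users": []}
```

Saved as main.py, this can be served with uvicorn main:app as described above.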
Easy integration with databases

SQL, MongoDB, Redis, or whatever else you choose, FastAPI doesn’t force you to build your application around it. If you have ever tried to work with MongoDB using Django, you know how painful it can be. With FastAPI you don’t need to go the extra mile; adding a database to your stack is as simple as possible. (Or to be more precise, the amount of work to be done will be determined by the database you choose, not by the complications added by the web framework.) But really, look at this beauty. Voila! I can see you typing pip install fastapi into your terminal already.

GraphQL support

When you are working with a complex data model, REST can be a serious hindrance. It is definitely not fun when a tiny change in the frontend requires updating the schema for an endpoint. GraphQL shines in these situations. Although GraphQL support is not unique among Python web frameworks, Graphene and FastAPI work together seamlessly. No need to install any extensions like graphene_django for Django; it just works natively.

+1: Great documentation

Of course, a great framework cannot truly shine without equally great documentation. Django, Flask and all the others excel in this aspect, but FastAPI is on par with them. Of course, since it is much younger, there are no books written about it yet, but it is just a matter of time.
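The Graphene snippet referenced above is likewise not reproduced in this extract. The sketch below follows the integration pattern documented for FastAPI around the time this article was written; note that GraphQLApp lived in Starlette back then and has been moved out of it in later releases, so treat this as a period-specific illustration rather than current guidance:

```python
import graphene
from fastapi import FastAPI
from starlette.graphql import GraphQLApp  # available in the Starlette versions of that era

class Query(graphene.ObjectType):
    hello = graphene.String(name=graphene.String(default_value="world"))

    def resolve_hello(self, info, name):
        # Resolvers map one-to-one onto the fields declared on the ObjectType.
        return f"Hello {name}"

app = FastAPI()
app.add_route("/graphql", GraphQLApp(schema=graphene.Schema(query=Query)))
```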
https://towardsdatascience.com/you-should-start-using-fastapi-now-7efb280fec02
['Tivadar Danka']
2020-10-22 06:52:54.440000+00:00
['Machine Learning', 'Python', 'Data Science', 'Programming', 'Web Development']
3 stages of Learning Data Science
Associative Stage

Problem: The problem we are solving in this stage is that of transferring potential power into realized power. We have all heard the saying that knowledge is power. With the rise of the information age, where there is tons of information that we have access to through one or two clicks, it turns out that knowledge isn’t exactly power. Knowledge is potential power, since knowing something doesn’t necessarily effect change. We accumulated the know-how of the skill we want to learn in the cognitive stage, amassing bags of potential power in the process. To realize that power we must act on it. Practice, practice, and more practice! I’ve been an athlete on two counts, as you see from my brief CV above, and one thing I can recall from this phase is the hours of repetition that go into it. I’d perform the same reverse kick for an hour, or when I was playing football, I’d be taking free kicks for hours to perfect my technique (if I can’t find a video of me, I may get my boots off the hanger if enough people request to see them).
Photo by Hayley Catherine on Unsplash
In relation to Data Science, this could be simply feature engineering. Want to be good? Take the knowledge you’ve learnt about feature engineering from the cognitive stage, deliberately apply it to different datasets, and evaluate how it affects the outcome of your model. In doing this, you get to learn what strategies are effective for different datasets and essentially eliminate things that don’t work. Here’s my article on Feature Engineering to get you started…
Characteristics: This stage requires plenty of conscious effort. In fact, it may feel extremely awkward to begin with and require lots of little adjustments to improve your performance. Completing tasks may take you hours, days or a week longer than experienced people, but it is key to remember that once upon a time they were also like you and they went through this phase.
Goal: The goal here is to put together lots of small skills that in turn will accumulate to make you unrecognizable to your peers in the future.

Autonomous Stage

Woooosaahhh… The state of flow.
Photo by Gordon Williams on Unsplash
We hear about it in podcasts, TED talks, etc. The state of flow is what we all crave. In this state we can perform at maximum levels of proficiency. We aren’t so focused on the skill because we can do it automatically — meaning we don’t have to think about it, it just flows. Well, where does this fit in with Data Science? Think of it like this: have you ever seen how, when a new competition starts on Kaggle, it’s always the same people that shoot straight to the top of the leaderboard? Yes, them! They are in flow. They have learnt about competing on Kaggle, what works and what does not. They have practiced, over and over on countless competitions, and now what you see when they shoot to the top is the fruits of their labor. The skill has become programmed into their subconscious mind and they can now direct their focus to other aspects of their performance.

Conclusion

This process can take a long time, but trusting the process makes the journey lighter. A key thing to remember is that the autonomous phase will reinforce any bad habits that are picked up along the way, and bad habits are very hard to change. Learning best practices from seasoned professionals is always the best way to start rather than having to come back to it. If there is anything that you think I have missed or some points you don’t agree with, your feedback is valuable. Send a response!
If you’d like to get in contact with me, I am most active on LinkedIn and I’d love to connect with you also. Here are some of my other articles that you may find interesting…
https://towardsdatascience.com/3-stages-of-learning-data-science-9a04e96ba415
['Kurtis Pykes']
2020-06-14 15:41:41.299000+00:00
['Machine Learning', 'Data Science', 'Towards Data Science', 'Artificial Intelligence', 'Deep Learning']
BigTips: Removing Duplicates while Maintaining Row History
When working with and implementing BigQuery, there are a number of small problems for which I couldn’t find documentation or a working example of a solution. This occasionally happens with any database. While these probably won’t be groundbreaking problems to solve, hopefully solving them will make someone’s day a little easier. Sometimes, it’s the little things.
BigTips: Removing duplicate rows with mixed static and variable columns, while keeping row version histories!
I Know What I Need, Just Point Me To The Scripts!

The Problem Statement

One of the age-old problems in analytics data is how to deal with late arriving data. This is a fairly common problem when ingesting data, especially with systems that deal with frequent transactions. It’s also an issue with systems that oftentimes issue corrections to previous data. The challenge here is trying to handle this issue within BigQuery itself. ETL tools like Informatica PowerCenter, IBM DataStage, and their usual peers all have facilities to handle this within the tool. While this is usually fine when running your ETL pipeline, what if you wanted to leverage BigQuery’s engine to do it? A couple of ideas initially come to mind. The first, and most obvious one, is to just issue a SELECT DISTINCT * statement to find and handle all the duplicate rows. If all you’re looking for is that, then that could work. What this does not handle are updates to previous entries, or cases where some columns are dimensions and others are measurements. It would treat an update as a new distinct row, so you wouldn’t be able to easily tie those together. Another method is to use the MERGE statement. This has applicability to the problem statement, but it’s not entirely clear how we would handle a variable number of surrogate key columns, and how to exclude different ones, especially if we wanted to make this generic and apply it to an unknown table. These objections might be confusing as text descriptions, so let’s give an example to make this concrete. Let’s say we have a base table called main_table with the following base data. Here is what the columns mean:
is_latest is a BOOL column which tells us if that row represents the “current” version of that data.
ingest_ts is a column which tells us when that row was loaded into the table.
reported_dt is a column which tells us what date the data in that row represents.
COLA is just some dimension column, in this case it’s a state.
COLB is another dimension column, in this case it’s an age group.
MEASUREMENT is just some measurement of something.
COLC is another measurement column. This one doesn’t have any particular meaning to the data, it’s just there as another column where the value can change.
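For concreteness, here is a minimal sketch, not taken from the original article, of how a table with these columns could be created with the BigQuery Python client. The project and dataset names are hypothetical; the column names and rough types follow the description above:

```python
from google.cloud import bigquery  # assumes the google-cloud-bigquery package is installed

client = bigquery.Client()

schema = [
    bigquery.SchemaField("is_latest", "BOOLEAN"),    # metadata: flags the current version of a row
    bigquery.SchemaField("ingest_ts", "TIMESTAMP"),  # metadata: when the row was loaded
    bigquery.SchemaField("reported_dt", "DATE"),     # unique column: the date the row describes
    bigquery.SchemaField("COLA", "STRING"),          # unique column: dimension (a state)
    bigquery.SchemaField("COLB", "STRING"),          # unique column: dimension (an age group)
    bigquery.SchemaField("MEASUREMENT", "INTEGER"),  # measurement column
    bigquery.SchemaField("COLC", "STRING"),          # measurement column
]

table = bigquery.Table("my_project.my_dataset.main_table", schema=schema)
client.create_table(table, exists_ok=True)
```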
Here we have both ingest_ts and reported_dt so that the data is bi-temporal and we can measure data both “as it is when the source observed it” and “as it was when we received it.” The idea here is to keep a running history of changes (by using a combination of both ingest_ts and reported_dt) as well as maintaining the is_latest flag so you can easily just issue the query SELECT * FROM mytable WHERE is_latest = TRUE to get all the latest data. In this structure, we have three groups of columns.
is_latest and ingest_ts are metadata columns which help us calculate the recency of the data, but are not in themselves the actual data. is_latest is a simple filtering flag that needs to be maintained.
reported_dt, COLA, and COLB are the three columns which, as a combination, tell us what represents a unique measurement of data. These can act as our surrogate keys. A unique combination of these columns means it’s a “new row” in a sense. Throughout this we will refer to these as the “unique columns.”
MEASUREMENT and COLC are our measurement columns. For every unique combination of the surrogate keys, it’s possible for these two measurements to change. The next dataset will give an example of how that’s possible. Throughout this we will refer to these as the “measurement columns.”
So now that we have the base data table, let’s add a second table. This is our ingest_table, which has a similar structure, but it represents new data that has arrived on “a later date.” This is just a subsequent data load of the same table, so the is_latest column doesn’t exist there (that’s what we need to calculate), and here are some examples of data intricacies. California has an update (for 2020–1–15, California’s 18–22 count goes from 150 to 200), a new row (for 2020–1–15, California’s 22–30 is a new record), and a duplicate from history (for 2020–1–15, California’s 18–22 measurement of 100 and ‘hello’ is a historical duplicate). Delaware’s row (for 2020–1–15, Delaware’s 22–30 count of 100/pong) is a duplicate of an active data point. Florida’s measurement (for 2020–1–15, Florida’s 22–30 count of 200/ping) is a new row. This is why strategies of just matching on a column don’t always work, nor does simply selecting distinct across all columns. It’s a little trickier. Also, this does happen a lot in real-world systems. If you’re a retailer, it could be a scenario of, “our production line had issues and last Tuesday’s batch actually has 100 fewer units than we originally recorded.” Or if you’re aggregating healthcare data nationally, you might have to update data in a situation of, “the measurement for three days ago is actually different because some hospitals in California got held up with paperwork, and we just now got that data in.” More tangibly, in your everyday life, think of your credit card statements.
If you watch it frequently, you’ll notice there’s usually a lag period of a few business days as transactions are cleared (at least here in the United States, but I suspect many countries operate similarly), and your “as of” balance and your “current balance” during those days may fluctuate as transactions close out over time. These are all examples of when this issue comes into play. Let’s go back to our example tables. Embedded in there are examples of the most common late arriving data scenarios: truly new unique data, updates to the latest data, duplicates of the latest data, and duplicates of previous historical data. It’s pretty much an UPSERT. Let’s go and build something that handles that! We’re going to walk through how to build this piece by piece, and will also wrap it all in a stored procedure. If you just want to skip to the final result, go here for the example with data embedded into it, and go here for the actual script you can deploy. Just so we know what our target end state is, if we apply the data changes from the ingest table to the main table manually, we can see that this is what we want. Let’s try and get there. We’re going to take two approaches with this: a fully self-contained BigQuery script that includes sample data, and also a version where we wrap that all up in a stored procedure so you can call it on arbitrary tables.

The Pseudocode

Let’s get our logic down first, so we know what we’re building towards. The basic pseudocode for this looks like the following. Let’s see how this logic handles the four data scenarios outlined earlier:
Updates to the latest data: The first half of the WHERE clause in Block 1 addresses this. It looks for rows in main_table where is_latest is true and the unique columns match, signifying it’s the same measurement, but with new measurement values. Block 2 is where we then mark the corresponding row in main_table as a historical one, and then load the new data in Block 3.
Duplicates of previous historical data: The catch with the previous logic is that if an incoming row repeats previous historical measurements, it could still pass that check since it isn’t being compared to historical values. The second clause in Block 1 is where we catch this exception, and ignore it when creating our staging table of updates.
Duplicates of the latest data/Truly new unique data: Block 4 handles both of these. Here we only merge in rows where the unique and measurement columns are all new.

Comparing with JSON

When doing these comparisons to see which columns have changed and which haven’t, we would normally have to manually type all of them out in a WHERE clause. Not only is that a pain with very wide tables, it also isn’t flexible enough to work generically on tables. The generic-tables bit we’ll address in a later section, but let’s bring in JSON strings to address the other issue. One of the tricks we can do to compare a large number of columns in a simple statement, instead of having to write a bunch of comparison operators, is to just serialize the row as a JSON string and do one comparison. You can use the TO_JSON_STRING() function to easily serialize rows as JSON strings.
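As an illustration of that trick (this sketch is not from the original article), the query below finds incoming rows that should supersede a currently active row by comparing all the measurement columns with a single TO_JSON_STRING comparison. The project and dataset names are hypothetical, and it deliberately leaves out the historical-duplicate check that the full script handles separately:

```python
from google.cloud import bigquery  # assumes the google-cloud-bigquery package is installed

client = bigquery.Client()

sql = """
SELECT i.*
FROM `my_project.my_dataset.ingest_table` AS i
JOIN `my_project.my_dataset.main_table` AS m
  ON  m.is_latest
  AND m.reported_dt = i.reported_dt
  AND m.COLA = i.COLA
  AND m.COLB = i.COLB
-- One comparison over all measurement columns via JSON serialization,
-- instead of a long chain of column-by-column inequality checks.
WHERE TO_JSON_STRING(STRUCT(i.MEASUREMENT, i.COLC))
   != TO_JSON_STRING(STRUCT(m.MEASUREMENT, m.COLC))
"""

updates = list(client.query(sql).result())  # incoming rows that supersede a current row
```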
https://briansuk.medium.com/bigtips-removing-duplicates-while-maintaining-row-history-520f24706d63
['Brian Suk']
2020-12-29 19:20:12.909000+00:00
['Google Cloud Platform', 'Analytics', 'Data Warehouse', 'Bigquery', 'Big Data']
Canary Outside the Mine
Canary Outside the Mine
New York’s approach to cryptocurrency mining could set an example for other global hotspots…if they move quickly
New York hasn’t always been known in the past for calling the right shots when it comes to blockchain technology and the assets that utilize it. In 2015, New York pioneered a new frontier in cryptoasset trading, that frontier being attempted government regulation of the growing market in the state, and they blazed a trail 44 pages deep. In a recent turn favoring cryptoasset miners, however, the state has opened the door to negotiate a pressing issue for the miners: the electric bill.

The Regulatory Battle for Electricity

Other governments haven’t always shown friendliness towards miners on this issue. Just this past April, police in Gyeonggi Province, South Korea arrested cryptoasset miners for exploiting cheap electricity costs to minimize the costs of their endeavor. The government, in that part of the country, provided cheaper electricity rates to struggling businesses to support them. Under the guise of other purposes, such as chicken farming, miners set up ASICs to operate on the subsidized electricity. The mining itself was not banned by law, but the fraud of misusing areas that were claimed for other purposes incurred fines for the miners. Artificially deflated prices aren’t the only lures for hungry miners. Geographic features, such as hydropower, can also drive down energy prices for communities, and therefore for the miners that live in them. Quebec’s real estate along the St. Lawrence River generates enough hydropower to make energy relatively cheap for local Canadians, which made it a prime spot for mining until Quebec tripled the prices for miners in an attempt to drive them away and keep locals’ energy costs down. Parts of New York that take their power from the St. Lawrence also offer the same potential benefit. In March, Plattsburgh, NY locals filed a complaint with local authorities taking issue with cryptoasset miners’ use of their cheap electricity. Mining caused power costs to spike so much, residents claimed, that the locals’ bills suddenly rose by hundreds of dollars. In response to the outrage, the city issued a year-and-a-half moratorium on mining, complete with daily fines of up to a thousand dollars for failure to comply. The state has also taken notice of the issue, and the New York Public Service Commission allowed municipal power authorities to hike costs on miners, similar to Quebec. The measure aimed to force miners to confront the external costs of their business. In the words of John Rhodes, Chairman of the commission clearing the hike, “If we hadn’t acted, existing residential and commercial customers in upstate communities served by a municipal power authority would see sharp increases in their utility bills.”
Motherboard investigates Plattsburgh, New York during the controversy over bitcoin. Source: Motherboard

Why all the Hate?

The issue, in all cases, is clear. When miners are mining cryptocurrency, the massive computing power required by ASICs also means that miners are harvesting their regions’ available electricity. This required power almost always comes from the local power grid. By extension of increased demand with no change in supply, cryptocurrency mining drives up demand for power, and therefore electric bills, in communities where miners take hold, and imposes a financial externality on those living around them. The argument isn’t without merit. Imagine if your neighbor started a business in their home.
For the sake of the metaphor, let’s say they’re literally fracking in their backyard, like a character in a Netflix animated series. This business would impose costs on you by driving down your property values, via pollution, noise, etc. While cryptocurrency mining doesn’t have nearly the impact that fracking might, it does drive up electric bills, sometimes by a few hundred dollars. This provides an inconvenience to the local residents, much in the same way that a fracking rig might. The counter-argument, of course, is if your neighbor’s business provides a clear benefit to the community. Maybe they’ve created jobs. In the case of the fracking rig, they’ll be making money off the resources, and probably giving back to the community in some way. Maybe other businesses come to your community to address the new business next door. Despite some bad externalities, having a business next door to your house isn’t necessarily a horrible thing. (Note: if that business is fracking, the benefits to you most likely do not outweigh the inconvenience. Get out of there!)

Building the Negotiating Table

Cryptocurrency miners might drive up power costs, but they also generate wealth in the form of cryptoassets. If the mining pools employ people, they’ll create jobs. Maybe these newly wealthy miners will patronize local businesses. Maybe they’ll even start their own businesses beyond mining. John Rhodes, the same regulator who cleared local municipalities to hike rates on miners in their regions, more recently explained that while New York wants to preserve its cheap electric bills, energy is abundant enough that it can also clear a path for miners to do their business:
John Rhodes (center) alongside New York Power Authority’s Gil Quiniones (left) and Chairman of Energy & Finance Richard Kauffman (right). Source: ny.gov.
“We must ensure that business customers pay a fair price for the electricity that they consume…However, given the abundance of low-cost electricity in Upstate New York, there is an opportunity to serve the needs of existing customers and to encourage economic development in the region.” The path will be cleared in the form of negotiated contracts. The precedent was set initially in Massena, where contracts would be negotiated to best balance the interests of miners and the community. This precedent will be important to watch; if successful, it could make New York a hotbed for mining endeavors and draw the business into the state. If those miners provided a public benefit, the example could inspire lawmakers in other areas rich in hydropower, such as Quebec and parts of China, to open their doors to the miners. Even so, New York will have had a head start on the practice. Mining of major cryptocurrencies will be naturally more profitable and easier to establish than it could be in the future. This is especially true for currencies such as Bitcoin and Litecoin, where mining will become gradually harder, and the yields smaller, as time progresses. As the cost of mining increases, the positive externalities of the business, such as wealth creation, will decrease. More cryptoassets, of course, will pop up and provide new opportunities for miners, keeping the business flowing and generating new opportunities for new miners, but if New York’s communities generate an existing infrastructure that can pounce on such opportunities more quickly and efficiently than competitors, that will only make it harder to break into the market.
New York, it seems, has already realized the importance of opening this door to a rapidly growing and ever-more important economic practice. Even if the potential benefits are limited in the short term, once gained, they will not be easily lost. As mining of one asset becomes too expensive to generate a profit, miners could well turn to other assets to keep their networks running. If New York successfully lures miners to its cheap sources of power, the resulting cryptocurrencies floating around the state will also spur on local businesses to accept digital currencies, expanding New York’s business markets even further. What looks like a minuscule step in New York’s legal history has a massive potential impact on its future.
https://medium.com/the-daily-bit/new-york-hasnt-always-been-known-in-the-past-for-calling-the-right-shots-when-it-comes-to-8b25475cf9a4
['Michael Bartels']
2018-08-21 18:58:49.495000+00:00
['Crypto', 'New York', 'Cryptocurrency', 'Blockchain', 'Bitcoin']
2015 Year In Review
As far as I remember, the start of 2015 was pretty glum — but Leyla gave it the kick-start it needed in January at the Peace Of Mind event which took place at UCL. The event was rich in knowledge and discussion; Leyla described it as an enlightening experience to meet the future generation of medics and psychiatrists. She emphasised the importance of professionals in the mental health field having an in-depth understanding of the cultural issues that create the barriers sufferers are facing today. Overall, it was understood that existing taboos need to be understood in their context in order to overcome the stigma, and that the silence around the topic in society can no longer be ignored. Read what Leyla had to say about the event here. In February, we asked many of you to take part in the #TimeToTalk campaign — and those who did, YOU helped make it a huge success. It is so important for Muslims to be a part of worldwide campaigns so that our voices are heard, our intellect and understanding are seen — and our fellow Muslims can witness that there is support out there. We had the pleasure of YouTube comedians, bloggers, and activists, as well as the support of friends and family, taking part and making our voices heard. It was excellent to see so many videos, statuses and tweets that supported the campaign, and even to have a sufferer of anxiety come out and talk about her story — the time was truly made to talk. We look forward to a bigger campaign next year! You can find the story here and YouTube videos here. The next few months were kept busy with blog posts, articles and planning for an epic summer, and when June sprang up on us — Leyla did too. This time in little old Norwich, in the county of Norfolk. Leyla came to Norwich and blew the audience away. The women in Norwich rarely had the chance to have talks dedicated to them specifically; in fact, Leyla was only the second female speaker to deliver a talk for them — and what a talk it was. There were tears, stories that were shared, and even one sister who came forth and received support from Inspirited Minds. Leyla truly left a mark in Norwich, as many have requested that she come back with the rest of the team! The talk was titled “What do YOU know?” — and the audience were astounded at how much they truly did know about the world of mental health. Check out the snaps and full review here. The weather was brilliant, the atmosphere was brilliant — the workshop was brilliant. In August, Inspirited Minds led the way on a journey of self-discovery, unrecognised addictions, the realm of the unseen and ways to deal with anxiety and depression, with the two-day event: With Hardship Comes Ease. On day 1, there was that “meeting new people” awkward atmosphere, but after Brother Ramiz asked us to learn more about ourselves and those around us, there was a new sort of atmosphere that left everyone feeling more connected. Sister Nazish Hussain explained how difficult it can be to want to change but not know how to. She told us how we have to hate the problem and not the person, because it’s not the person that is causing the problem, it is the problem causing the problem — and in order to help the person with the problem, we have to remain empathic. As day 1 came to an end, we all said our salaams and looked forward to the following day, which began with delving into the unseen influences.
Misconceptions were cleared up by Brother Abu Taher, and we realised that the symptoms of mental health disorders and black magic/jinn possession are often mixed up, leading to misdiagnosis and further trauma. However, we were told that although black magic/jinn possession and mental illness may appear the same, we have to remember that both stand independently in Islam, and both have their own means of being cured. Sister Aaliyah Shaikh took centre stage as she introduced a concept that was new to most of the attendees. We had never really wondered what came prior to anxiety and depression, and where it was rooted — but Sister Aaliyah Shaikh told us of the possibility of mental health disorders starting in the womb, for a number of reasons. She ended her in-depth talk with a very simple sentence that left us all contemplating what truly matters in life: We need to have peaceful meaning, to have a peaceful mind. The workshop impacted many, and to this day I still look back at my notes and benefit from what was written in the summer, and I also made new friends who I still look forward to meeting at future Inspirited Minds events. You can find the full review here. Just as we were feeling a little bit low that the summer was over, and patiently waiting for the next Inspirited Minds event, October made us all wake up and breathe in the fresh autumn air with World Mental Health Day. #IAmDignified had a front seat on many Facebook and Instagram accounts, where the world stood united with the goal of raising awareness of mental health problems. The day was a huge success, with a wide range of posts from the factual, to real-life experiences, cute little boosts, and some light humour — we picked out our favourites and asked some of you to do the same; you will find them here. We pray that next year it will be even bigger and better! November was taken by surprise when a night that was promised to be totally epic did not fall short of that promise. On the 27th of November, an event called Ladies Fighting Together exceeded expectations above and beyond! The energy and empowerment were like none other, and sisterhood was bursting at the seams. People who were strangers at the beginning became companions by the end; hugs were given, tears were shared, and numbers were exchanged as friendship and support were promised to one another. With Inspirited Minds members and many other inspirational speakers, the night cannot be forgotten easily; if you don’t read about it here, you will be missing out! We cannot wait for next year; it will be difficult to beat, but we know it will be amazing In Shaa Allah! It has been a superb year for Inspirited Minds, with many success stories and exciting reviews, along with weekly newsletters and many courageous real-life stories shared — we would like to give ourselves a pat on the back as Inspirited Minds has managed to help so many people, be a part of many events, and reach out to sufferers through many mediums. We can only thank Allah for our successes, as without His support we would not have been able to be where we are today — and with your support, we will continue to grow for many years to come, Bi’idthnillah! We request your precious duas for the following year, as the hard work will not stop because there is still so much more work to be done, still so many people to aid and still so many taboos to be broken down. We can’t do it without your help! 
We pray that the following year is successful for everyone in every way possible, and we pray that all those who are suffering from a mental health problem will receive this message and know that we are here for you.
https://medium.com/inspirited-minds/2015-year-in-review-263652176df7
['Inspirited Minds']
2016-01-04 22:17:07.280000+00:00
['Depression', '2015', 'Mental Health']
Spatial Distance and Machine Learning
“Life is like a landscape. You live in the midst of it but can describe it only from the vantage point of distance” — Charles Lindbergh

Distance metrics are essential to understanding a lot of machine learning algorithms, and therefore to the resolution of real-world problems. There are numerous distance metrics out there, and data scientists should understand most of them to make their models more meaningful. For a geospatial data scientist, there is an added benefit to this exercise: feature creation from longitude and latitude. Longitude and latitude, while represented as floats, are more similar to categorical or nominal data. Increasing or decreasing them in magnitude may not give you or your model something meaningful. Location data, therefore, needs some additional feature engineering to produce valuable insights. These newly engineered features are what we will then use as inputs in machine learning.

DISTANCE METRICS

The most important feature to derive from a set of geocodes (longitude and latitude) is distance. Many supervised and unsupervised machine learning models use distance metrics as inputs. Distance metrics measure the similarity between two or more objects, and they play a crucial role in the development and resolution of real-world problems. For example, distance metrics are used for many computer vision tasks, sentiment analysis, and even clustering algorithms. It goes without saying that any geospatial analyst should understand the different types of distance metrics and what type of problem they solve. Choosing the correct distance metric can therefore be the difference between a successful and a failed model implementation. In this article, let us discuss some of the most commonly used distance metrics and write some code to implement them in Python. There will be some mathematical discussion, but you can skip it and read the pros and cons instead. For each metric, we will discuss the pros and cons, some mathematical intuition, where the metric is most appropriate, and the actual code.

Euclidean Distance

Although there are other possible choices, most instance-based learners use Euclidean distance. — p 135, Data Mining: Practical Machine Learning Tools and Techniques (4th edition, 2016).

Euclidean distance is the easiest and most obvious way of representing the distance between two points. Euclidean Distance Formula. Because it is a formalization of the Pythagorean theorem, it is likewise called the Pythagorean distance.

Pros: Euclidean distance is relatively easy to implement and is already used by most clustering algorithms. Likewise, it is easier to explain and visualize. Finally, for small distances, it can be argued that the distance between two points is the same regardless of whether it lies on a flat or a spherical surface.

Cons: It seldom approximates the true distance between two objects in the real world. For one, distance in a lower-dimensional space, say 2D Euclidean space, becomes less meaningful in higher-dimensional spaces, where the nearest and farthest data points become almost uniformly distant from one another. While most distance metrics suffer from this problem, it is more pronounced for Euclidean distance. Besides, since it is the smallest distance between the two points, it disregards structures in the 3D plane, which lowers the accuracy of this distance measurement for geospatial problems. 
When is it most applicable: Because it ignores structures in the real world, Euclidean distance is best for emergency cases where helicopters can fly in a straight line to places such as hospitals. Another documented case is trip planning, where you simply need to determine which landmarks are close to one another.

Code: As longitude and latitude are not really Cartesian coordinates, we need to convert them, taking into account the spherical nature of the earth.

import numpy as np
import math

# Origin latitude, longitude
origin = [14.5545901, 120.9981703]       # Makati coordinates
destination = [14.1172947, 120.9339132]  # Tagaytay coordinates

def euclidean_distance(origin, destination):
    # Euclidean distance in degrees
    distance = np.sqrt((origin[0] - destination[0])**2 + (origin[1] - destination[1])**2)
    # Multiply by 6371 KM (earth's radius) * pi/180 to convert degrees to KM
    return 6371 * (math.pi / 180) * distance

Approximation using our formula: 49.15 KM. Distance calculated by Google Maps: 49.46 KM. As one can see, Euclidean distance is the shortest distance, ignoring terrain changes.

The Great Circle Distance

Unlike the Euclidean distance, the great circle distance considers the fact that two points lie on the surface of a sphere. Google Maps image showing distance on a spherical surface. Haversine Formula in KMs; Earth's radius (R) is equal to 6,371 KM. To get the great circle distance, we apply the Haversine formula above.

Pros: The majority of geospatial analysts agree that this is the appropriate distance to use for Earth distances, and it is argued to be more accurate over longer distances compared to Euclidean distance. In addition, coding it is straightforward despite the complexity of the formula's appearance. It is also faster to compute than other great circle distance formulas such as the Vincenty formula.

Cons: Slower than the Spherical Law of Cosines formula. In addition, it may not necessarily produce the driving or walking distance, which may be the variable of interest. Read more about this.

When is it most applicable: Most geospatial analysts would argue that this should be the default and the norm for calculating distances between two geographic points.

Code: Haversine distance is the basic formula I used for my distance calculations. While there are packages that readily calculate it, let us try coding it from scratch:

def great_circle_distance(origin_lat, origin_lon, destination_lat, destination_lon):
    r = 6371  # earth radius in KM
    phi1 = np.radians(origin_lat)
    phi2 = np.radians(destination_lat)
    delta_phi = np.radians(destination_lat - origin_lat)
    delta_lambda = np.radians(destination_lon - origin_lon)
    a = np.sin(delta_phi / 2)**2 + np.cos(phi1) * np.cos(phi2) * np.sin(delta_lambda / 2)**2
    res = r * (2 * np.arctan2(np.sqrt(a), np.sqrt(1 - a)))
    return np.round(res, 2)

Manhattan Distance (Taxicab Distance)

The Manhattan distance is a measure of the distance between two points that takes into account the perpendicular layout of the map. It is called Manhattan distance because Manhattan is known for its grid or block layout, where streets intersect at right angles. The formula for the Manhattan distance is as follows (from Wikipedia): while the green line represents the calculation of the Euclidean distance, the blue lines represent the calculations made by the Manhattan distance.

Pros: Because it takes into account the grid layout of locations, this is what most GPS systems use to calculate distances. This taxicab geometry is what we use in LASSO regression as well. 
Cons: Applying the formula to geospatial analysis is not as straightforward as it looks. Because the street grid is rotated relative to the cardinal directions, a correction factor has to be applied to produce more accurate results (28.9 degrees in Manhattan's case, according to those who have applied the formula there).

When is it most applicable: If the variable of interest is driving distance, this is more appropriate than both the Euclidean and great circle distances. This is why another name for it is the taxicab distance, as it is the distance most applicable to a taxicab driving around a grid-layout location.

Code: The code for Manhattan distance requires us to rotate the grid we use in our base calculation. After doing this, we then proceed to apply the great circle distance (Haversine formula) we coded earlier:

def manhattan_distance(origin_lat, origin_lon, destination_lat, destination_lon):
    # Origin coordinates
    p = np.stack(np.array([origin_lat, origin_lon]).reshape(-1, 1), axis=1)
    # Destination coordinates
    d = np.stack(np.array([destination_lat, destination_lon]).reshape(-1, 1), axis=1)
    theta1 = np.radians(-28.904)
    theta2 = np.radians(28.904)
    ## Rotation matrices
    R1 = np.array([[np.cos(theta1), np.sin(theta1)],
                   [-np.sin(theta1), np.cos(theta1)]])
    R2 = np.array([[np.cos(theta2), np.sin(theta2)],
                   [-np.sin(theta2), np.cos(theta2)]])
    # Rotate origin and destination coordinates by roughly -29 degrees
    pT = R1 @ p.T
    dT = R1 @ d.T
    # Coordinates of the hinge point in the rotated world
    vT = np.stack((pT[0, :], dT[1, :]))
    # Coordinates of the hinge point in the real world
    v = R2 @ vT
    return (great_circle_distance(p.T[0], p.T[1], v[0], v[1]) +
            great_circle_distance(v[0], v[1], d.T[0], d.T[1]))

If we compute the distance using the original origin and destination points, we get a much closer approximation to the driving distance presented by Google Maps. Google Maps driving distance from Makati to Tagaytay: 64 KM and 58.8 KM. The Manhattan distance provides a much closer approximation than the Euclidean distance (a difference of around 20 KM) that we got earlier.

COSINE SIMILARITY

While not normally used for geospatial problems, some distance metrics are worth discussing as they can be pretty useful for complementary problems. As a beginner, I often confuse this with the Spherical Law of Cosines, which is just another formula for the great circle distance. Note, however, that these are two different things, and cosine similarity is best known for applications in text analysis. The cosine similarity returns -1 for the least similar documents and +1 for the most similar documents. Suppose you want to determine which documents are similar. Using other distance metrics would probably rank similar documents according to the number of repeated words. This tends to classify longer documents as more similar, which may not be the case.

Pros: As it only measures the angle between vectors and not their magnitude, it can produce accurate results, especially when analyzing whether a smaller subcomponent is part of a larger object.

Cons: There may be cases where researchers actually define "similarity" as synonymous with magnitude or size, and in those cases cosine similarity may not be as useful.

When is it most applicable: Applications in text and image processing problems.

Code: For this, let's use the implementation available in scikit-learn, and work through a real-life example. The following excerpts are from published documents by the CFA Institute. 
The first document is a small excerpt from a publication of the CFA Institute on Fixed Income. The third one is likewise from a different publication, but on the same topic of Fixed Income. The second document, however, is from a publication on Alternative Investments. If we were to use Euclidean distance, we might find the second and third documents to be more similar simply because they share a larger number of repeated words.

# Define the documents
asset_classes = "Globally, fixed-income markets represent the largest asset class in financial markets, and most investors’ portfolios include fixed-income investments."

alternative_investments = '''
Assets under management in vehicles classified as alternative investments have grown rapidly since the mid-1990s. This growth has largely occurred because of interest in these investments by institutions, such as endowment and pension funds, as well as by high-net-worth individuals seeking diversification and return opportunities. Alternative investments are perceived to behave differently from traditional investments. Investors may seek either absolute return or relative return. Some investors hope alternative investments will provide positive returns throughout the economic cycle; this goal is an absolute return objective. Alternative investments are not free of risk, however, and their returns may be negative and/or correlated with other investments, including traditional investments, especially in periods of financial crisis. Some investors in alternative investments have a relative return objective. A relative return objective, which is often the objective of portfolios of traditional investment, seeks to achieve a return relative to an equity or a fixed-income benchmark.
'''

fixed_income = '''
Globally, the fixed-income market is a key source of financing for businesses and governments. In fact, the total market value outstanding of corporate and government bonds is significantly larger than that of equity securities. Similarly, the fixed-income market, which is also called the debt market or bond market, represents a significant investing opportunity for institutions as well as individuals. Pension funds, mutual funds, insurance companies, and sovereign wealth funds, among others, are major fixed-income investors. Retirees who desire a relatively stable income stream often hold fixed-income securities. Clearly, understanding how to value fixed-income securities is important to investors, issuers, and financial analysts. This reading focuses on the valuation of traditional (option-free) fixed-rate bonds, although other debt securities, such as floating-rate notes and money market instruments, are also covered. 
'''

documents = [asset_classes, alternative_investments, fixed_income]

Importing scikit-learn for text analysis:

# SciKit Learn
from sklearn.feature_extraction.text import CountVectorizer
import pandas as pd

# Create the Document Term Matrix
count_vectorizer = CountVectorizer(stop_words='english')
sparse_matrix = count_vectorizer.fit_transform(documents)

# Convert to dataframe so we can view them
doc_term_matrix = sparse_matrix.todense()
df = pd.DataFrame(doc_term_matrix,
                  columns=count_vectorizer.get_feature_names(),
                  index=['asset_classes', 'alternative_investments', 'fixed_income'])

Turning this into a dataframe, let us then calculate the cosine similarity between the three documents:

# Compute Cosine Similarity
from sklearn.metrics.pairwise import cosine_similarity
print(cosine_similarity(df, df))

Cosine similarity determined that document 1 and document 3, both focusing on Fixed Income, are more similar, with a score of 0.42 (1 being the highest), despite document 1 having far fewer words.

CONCLUSION

There are many more distance metrics, but for now let us focus on these four. Let me know what you think about them. As we saw, there are a lot of different distance metrics, each having different strengths and appropriateness for different problem types. For geospatial data scientists, it may be advantageous to try as many as possible and simply assess which ones are more relevant in the feature selection portion of the study. Check out the code on my Github page (a short combined usage sketch also follows after the references below).

References: Four Types of Distance Metrics in Machine Learning; Why Manhattan Distance Formula Doesn't Apply to Manhattan; Cosine Similarity — Understanding the math and how it works (with python codes)
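As a short usage sketch tying the three geospatial metrics together, the snippet below assumes the euclidean_distance, great_circle_distance and manhattan_distance functions defined earlier in this article; the commented outputs are approximate and only for orientation.

# Usage sketch for the distance functions defined above (assumed to be in scope).
origin = [14.5545901, 120.9981703]       # Makati
destination = [14.1172947, 120.9339132]  # Tagaytay

print(euclidean_distance(origin, destination))
# roughly 49 KM, the straight-line figure quoted earlier

print(great_circle_distance(origin[0], origin[1], destination[0], destination[1]))
# also roughly 49 KM over this short hop, since the earth's curvature barely matters here

print(manhattan_distance(origin[0], origin[1], destination[0], destination[1]))
# larger, and closer to the Google Maps driving distance discussed above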
https://towardsdatascience.com/spatial-distance-and-machine-learning-2cab72fc6284
['Francis Adrian Viernes']
2020-12-23 19:47:04.616000+00:00
['Machine Learning', 'Python', 'Analytics', 'GIS', 'Getting Started']
Analyzing My Ex-Husband Never Helped Me Heal
Seven months later I still hadn’t quite given up on our marriage. Although he would no longer speak to me since — after that solo couples’ therapy session — I had separated out my finances and left him to manage his own, I was still hoping against hope that we would work things out. I was still trying to understand why things were the way they were. The night before, I had asked him for a simple favour: please pack my vaccination booklet in our daughter’s backpack. I had left it behind in the move. Sure, he’d texted back. When I looked in my daughter’s backpack that evening, I had fully expected him to have forgotten. To my surprise, the vaccination booklet was there. As I leafed through it looking for my Yellow Fever attestation, I realized that I was, in fact, holding my daughter’s booklet. Not mine. “How does this happen?” I muttered to myself, “How does stuff like this always happen?” I had learned, over the course of our marriage, not to rely on him for anything. Ever. He was always running late. Perpetually disorganized. Forgetful. If he started to do something around the house, odds were he would wander away mid-task and I would end up finishing it, or it would simply be left undone. And so I started googling these behaviours. A conversation with a friend, who unbeknownst to me had an ADHD diagnosis, confirmed what I had started to suspect: possible ADHD. It was her questions about hyperfocus that convinced me: he could stay up all night researching the benefits of diatomaceous earth or watching entire seasons of a TV show. These issues, like forgetfulness or marathon research projects on weird topics (that left him exhausted the next day), could simply have been personality quirks – except they were un-discussables. His typical response to any complaint I raised started with “Yeah, well you...” and launched into an admonishment of a behaviour or habit of mine that he didn’t like. We would end up talking about my negative contributions to our home or marriage (and I definitely had many) and circumventing my original point. I would give up, exhausted. He would feel attacked and frustrated. I called this the Dance of Perpetual Misunderstanding. And so I adapted. I simply pretended his behaviours didn’t impact me as much as they did. I worked around him. I tried to see things from his perspective. I anticipated what our household needed and planned it into my schedule. I tried to ask for as little as possible. Going down my very own rabbit hole of internet research, I found myself in tears at an article about the effects of ADHD on marriage. I resonated to the point of aching. I resolved to speak with him about this. If we could dig into this, we could get to the heart of things and heal. A few days later he agreed to meet me in a café. I hesitantly brought up my suspicions about ADHD. He cut me off mid-sentence, “Yeah, yeah, I probably have ADD. Whatever. You probably have ADD, look at how you…”
https://elizabeth-1480.medium.com/analyzing-my-ex-husband-never-helped-me-heal-98ec809439b5
['Elizabeth Katherine']
2020-01-12 15:12:39.371000+00:00
['Self', 'Mental Health', 'Relationships', 'Life Lessons', 'Love']
Docker | Docker Compose | Flask app
Docker | Docker Compose | Flask app

How to dockerize a simple Flask app with Docker

This time we are going to discuss how we can dockerize a simple Flask app using Docker and Docker Compose.

Docker uses OS-level virtualisation to maintain and run software in packages called containers. These containers are isolated from one another but can communicate through well-defined channels. Docker Compose is a tool for configuring multi-container Docker applications; for example, there may be 2–3 Docker containers (one for the web app, one for the database, etc.). Using a single YAML file and a single command, you can configure and start all the services in your configuration. When your project has multiple parts, Docker Compose lets you work on different components in different Docker containers and combine them into a single application.

This story will show you a simple example of how to build a Flask application that uses Python modules and run it inside a Docker container using Docker Compose. The first step is to install Docker and Docker Compose on your local machine. For Linux users, here are the commands. For others, just look here to install on your respective operating systems.

$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
$ sudo apt-get install docker-compose

Now it is time to create a Flask app. I believe you already have experience working with Flask applications. If not, it is very simple: watch a few videos or read a few blogs.

Simple Flask application

The application code can be saved in a file called app.py in the root directory (a minimal sketch of such a file follows below). This .py file imports a few Flask libraries, creates a route '/' and returns a message. The app is exposed on port 7007. Here we also use CORS; cross-origin resource sharing is a mechanism that allows us to configure access from domains other than the domain the app is served from. You can run this Flask app with the python app.py command, but it will ask you to install a few dependencies. Install them and write them down into a file called requirements.txt.
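As a rough illustration of the app described above, here is a minimal sketch of what app.py could look like. The '/' route, the CORS setup and port 7007 come from the description above; the exact message text and structure are assumptions.

# app.py - a minimal sketch; the author's actual code may differ
# requirements.txt for this sketch would list at least: flask, flask-cors
from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # allow cross-origin requests, as described above

@app.route('/')
def index():
    # Hypothetical message; the article only says the route returns a message
    return jsonify({"message": "Hello from the Flask app!"})

if __name__ == '__main__':
    # Expose the app on port 7007, matching the article
    app.run(host='0.0.0.0', port=7007)

Running python app.py with these dependencies installed should then serve the app locally on port 7007.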
https://medium.com/analytics-vidhya/docker-docker-compose-flask-app-8527356aacd5
['Raoof Naushad']
2020-12-22 16:31:09.847000+00:00
['Python', 'Docker Compose', 'Containers', 'Flask', 'Docker']
Recommended Publications and Guidance for New Writers
I have been writing on Medium for six weeks. Within the first week of starting, I had absolutely no idea how to work this platform. I had no idea what I was doing and knew nothing about publications. In order to spread my work, I joined numerous Facebook groups and came across talk of different publications. I had self-published three pieces by this point. At that point, I did not have many views at all and had not made a penny. As a new starter on Medium, below are some guidance and recommendations on publications:

1. Join publications straight away. You need to build your fan base. The best way to connect with other writers, aside from social media, is to get your work noticed and join a publication. I do not recommend self-publishing until you have built a fan base, however small or big. This is because if you find you are not receiving any attention for your work, you can become easily demotivated and even decide to give up altogether.

2. Do not just submit to one publication. There are so many publications out there. Just because the one you have submitted to has accepted your work does not mean you need to continuously publish to that one alone. It is good to experiment with more than one publication so you can connect with new writers. Obviously, some publications have more followers than others, so you can potentially get more views, claps, reads, highlights, whatever, with some publications in comparison to others.

3. Join smaller publications to build your confidence up for the larger ones. The Ascent, The Start Up and all the major ones that everyone gets excited about: become a writer for them, but I wouldn't publish to them straight away, as it's highly likely you'll be rejected. In addition, the bigger the publication, the longer it will take for them to review your work. We are talking around a week. However, smaller publications take around 48 hours maximum. Smaller publications are also likely to be less picky about what they want published on their page, so you're more likely to get accepted. Over time you'll find your writing feet, be able to practice, and build your confidence up when it comes to being rejected by the big boy publications. There is also a belief that submitting to smaller publications can actually increase your interactions a bit more than if you published to a bigger publication. This is because the writers who submit to the small publication are more likely to be dedicated to it and are more likely to read your work.

4. Medium Editorial Group Publications. These are the publications that you cannot just ask to become a writer for. You must pitch pieces of work to them, and they will pay your article a base rate (usually $500), as well as it being eligible to earn via the Medium Partner Programme. Be prepared not to receive a response when you pitch the first time. Many people have had to pitch a good 3 or 4 times until they are successful. A list of these publications is available here.

5. Be prepared for rejection! Ultimately, no matter what, be prepared to have your work rejected. It doesn't bother me much, but it does still hurt a bit. Rejection is something that many really can't deal with. So if you fall into this category, especially with writing, I'm sorry my friend, but it will inevitably happen eventually.

6. Read the submission guidelines! 
If you are aiming to write for a specific publication, there is no point wasting time writing a piece only for it to be rejected because it does not meet the submission guidelines. READ THEM FIRST!
https://medium.com/illumination/recommended-publications-and-guidance-for-new-writers-1ca50e6aa8f4
['Shamar M']
2020-12-16 03:37:12.729000+00:00
['Advice', 'Writing Tips', 'Writing Life', 'Recommendations', 'Writing']
Operational Analytics: What every software engineer should know about low-latency queries on large data sets
Introduction to Operational Analytics

Operational analytics is a very specific term for a type of analytics which focuses on improving existing operations. This type of analytics, like others, involves the use of various data mining and data aggregation tools to get more transparent information for business planning. The main characteristic that distinguishes operational analytics from other types of analytics is that it is “analytics on the fly,” which means that signals emanating from the various parts of a business are processed in real-time to feed back into instant decision making for the business. Some people refer to this as “continuous analytics,” which is another way to emphasize the continuous digital feedback loop that can exist from one part of the business to others.

Operational analytics allows you to process various types of information from different sources and then decide what to do next: what action to take, whom to talk to, what immediate plans to make. This form of analytics has become popular with the digitization trend in almost all industry verticals, because it is digitization that furnishes the data needed for operational decision-making.

Let’s discuss some examples of operational analytics. Let’s say that you are a software game developer and you want your game to automatically upsell a certain feature of your game depending on the gamer’s playing habits and the current state of all the players in the current game. This is an operational analytics query because it allows the game developer to make instant decisions based on analysis of current events. Back in the day, product managers used to do a lot of manual work, talking to customers, asking them how they use the product, what features in the product slow them down, etc. In the age of operational analytics, a product manager can gather all these answers by querying data that records usage patterns from the product’s user base; and he or she can immediately feed that information back to make the product better. Similarly, in the case of marketing analytics, a marketing manager used to organize a few focus groups, try out a few experiments based on their own creativity and then implement them. Depending on the results of experimentation, they would then decide what to do next. An experiment might take weeks or months. We are now seeing the rise of the “marketing engineer,” a person who is well-versed in using data systems. These marketing engineers can run multiple experiments at once, gather results from experiments in the form of data, terminate the ineffective experiments and nurture the ones that work, all through the use of data-based software systems. The more experiments they can run and the quicker the turnaround time of results, the better their effectiveness in marketing their product. This is another form of operational analytics.

Definition of Operational Analytics Processing

An operational analytics system helps you make instant decisions from reams of real-time data. You collect new data from your data sources and they all stream into your operational data engine. Your user-facing interactive apps query the same data engine to fetch insights from your data set in real time, and you then use that intelligence to provide a better user experience to your users. Ah, you might say that you have seen this “beast” before. 
In fact, you might be very, very familiar with it from close quarters as well… it encompasses your data pipeline that sources data from various sources, deposits it into your data lake or data warehouse, runs various transformations to extract insights, and then parks those nuggets of information in a key-value store for fast retrieval by your interactive user-facing applications. And you would be absolutely right in your analysis: an equivalent engine that has the entire set of these above functions is an operational analytics processing system!

The definition of an operational analytics processing engine can be expressed in the form of the following six propositions:

Complex queries: Support for queries like joins, aggregations, sorting, relevance, etc.
Low data latency: An update to any data record is visible in query results in under a few seconds.
Low query latency: A simple search query returns in under a few milliseconds.
High query volume: Able to serve at least a few hundred concurrent queries per second.
Live sync with data sources: Ability to keep itself in sync with various external sources without having to write external scripts. This can be done via change-data-capture of an external database, or by tailing streaming data sources.
Mixed types: Allows values of different types in the same column. This is needed to be able to ingest new data without needing to clean it at write time.

Let’s discuss each of the above propositions in greater detail and explain why each of these features is necessary for an operational analytics processing engine.

Proposition 1: Complex queries

A database, in any traditional sense, allows the application to express complex data operations in a declarative way. This allows the application developer not to have to explicitly understand data access patterns, data optimizations, etc., and frees him/her to focus on the application logic. The database would support filtering, sorting, aggregations, etc. to empower the application to process data efficiently and quickly. The database would support joins across two or more data sets so that an application could combine the information from multiple sources to extract intelligence from them. For example, SQL, HiveQL, KSQL, etc. provide declarative methods to express complex data operations on data sets. They have varying expressive powers: SQL supports full joins whereas KSQL does not.

Proposition 2: Low data latency

An operational analytics database, unlike a transactional database, does not need to support transactions. The applications that use this type of database use it to store streams of incoming data; they do not use the database to record transactions. The incoming data rate is bursty and unpredictable. The database is optimized for high-throughput writes and supports an eventual consistency model where newly written data becomes visible in a query within a few seconds at most.

Proposition 3: Low query latency

An operational analytics database is able to respond to queries quickly. In this respect, it is very similar to transactional databases like Oracle, PostgreSQL, etc. It is optimized for low-latency queries rather than throughput. Simple queries finish in a few milliseconds, while complex queries scale out to finish quickly as well. This is one of the basic requirements to be able to power any interactive application. 
Proposition 4: High query volume

A user-facing application typically makes many queries in parallel, especially when multiple users are using the application simultaneously. For example, a gaming application might have many users playing the same game at the same time. A fraud detection application might be processing multiple transactions from different users simultaneously and might need to fetch insights about each of these users in parallel. An operational analytics database is capable of supporting a high query rate, ranging from tens of queries per second (e.g. a live dashboard) to thousands of queries per second (e.g. an online mobile app).

Proposition 5: Live sync with data sources

An operational analytics database allows you to automatically and continuously sync data from multiple external data sources. Without this feature, you will create yet another data silo that is difficult to maintain and babysit. You have your own system-of-truth databases, which could be Oracle or DynamoDB, where you do your transactions, and you have event logs in Kafka; but you need a single place to bring in all these data sets and combine them to generate insights. The operational analytics database has built-in mechanisms to ingest data from a variety of data sources and automatically sync them into the database. It may use change-data-capture to continuously update itself from upstream data sources.

Proposition 6: Mixed types

An analytics system is super useful when it is able to store two or more different types of objects in the same column. Without this feature, you would have to clean up the event stream before you could write it to the database. An analytics system can provide low data latency only if the cleaning requirements when new data arrives are reduced to a minimum. Thus, an operational analytics database has the capability to store objects of mixed types within the same column.

The six characteristics above are unique to an OPerational Analytics Processing (OPAP) system.

Architectural Uniqueness of an OPAP System

The Database LOG

The Database is the LOG; it durably stores data. It is the “D” in ACID systems. Let’s analyze the three types of data processing systems as far as their LOG is concerned. The primary use of an OLTP system is to guarantee some form of strong consistency between updates and reads. In these cases the LOG is behind the database server(s) that serve queries. For example, an OLTP system like PostgreSQL has a database server; updates arrive at the database server, which then writes them to the LOG. Similarly, Amazon Aurora’s database server(s) receive new writes, append transactional information (like sequence number, transaction number, etc.) to the write and then persist it in the LOG. In both of these cases, the LOG is hidden behind the transaction engine, because the LOG needs to store metadata about the transaction. Similarly, many OLAP systems support some basic form of transactions as well. For example, the OLAP Snowflake Data Warehouse explicitly states that it is designed for bulk updates and trickle inserts (see Section 3.3.2 titled Concurrency Control). They use a copy-on-write approach for entire datafiles and a global key-value store as the LOG. Having the database servers front the LOG means that streaming write rates are only as fast as the database servers can handle. On the other hand, an OPAP system’s primary goal is to support a high update rate and low query latency. An OPAP system does not have the concept of a transaction. 
As such, an OPAP system has the LOG in front of the database servers, the reason being that the log is needed only for durability. Having the database fronted by the log is advantageous: the log can serve as a buffer for large write volumes in the face of sudden bursty write storms. A log can support a much higher write rate because it is optimized for writes and not for random reads.

Type binding at query time and not at write time

OLAP databases associate a fixed type with every column in the database. This means that every value stored in that column conforms to the given type. The database checks for conformity when a new record is written to the database. If a field of a new record does not adhere to the specified type of the column, the record is either discarded or a failure is signaled. To avoid these types of errors, OLAP databases are fronted by a data pipeline that cleans and validates every new record before it is inserted into the database.

For example, let’s say that a database has a column called ‘zipcode’. We know that zip codes are integers in the US, while zip codes in the UK can have both letters and digits. In an OLAP database, we have to convert both of these to the ‘string’ type before we can store them in the same column. But once we store them as strings in the database, we lose the ability to make integer comparisons as part of a query on this column. For example, a query of the type select count(*) from table where zipcode > 1000 will throw an error, because we are doing an integral range check but the column type is a string. On the other hand, an OPAP database does not have a fixed type for every column in the database. Instead, the type is associated with every individual value stored in the column. The ‘zipcode’ field in an OPAP database is capable of storing both these types of records in the same column without losing the type information of any field. Going further, for the above query select count(*) from table where zipcode > 1000 , the database could inspect and match only those values in the column that are integers and return a valid result set. Similarly, a query select count(*) from table where zipcode='NW89EU' could match only those records that have a value of type 'string' and return a valid result set. Thus, an OPAP database can support a strong schema, but enforce the schema binding at query time rather than at data insertion time. This is what is termed strong dynamic typing (a small illustrative sketch of this idea follows at the end of this section).

Comparisons with Other Data Systems

Now that we understand the requirements of an OPAP database, let’s compare and contrast it with other existing data solutions. In particular, let’s compare its features with an OLTP database, an OLAP data warehouse, an HTAP database, a key-value database, a distributed logging system, a document database and a time-series database. These are some of the popular systems in use today.

Compare with an OLTP database

An OLTP system is used to process transactions. Typical examples of transactional systems are Oracle, Spanner, PostgreSQL, etc. These systems are designed for low-latency updates and inserts, and the writes are spread across failure domains so that they are durable. The primary design focus of these systems is to not lose a single update and to make it durable. A single query typically processes a few kilobytes of data at most. They can sustain a high query volume, but unlike an OPAP system, a single query is not expected to process megabytes or gigabytes of data in milliseconds. 
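To make the query-time type binding idea above concrete, here is a small illustrative Python sketch (not any particular engine's implementation, and the sample values are made up): a 'zipcode' column holds values of mixed types, and each value's type is checked only when the query runs, so integer zip codes and string postcodes can coexist in the same column.

# Illustrative sketch of strong dynamic typing: the type check happens per value at query time.
zipcodes = [94107, 10001, "NW89EU", 60614, "SW1A1AA"]  # hypothetical mixed-type column

def count_where_gt(column, threshold):
    # Integer range check: only integer-typed values participate;
    # string postcodes are simply skipped instead of raising an error.
    return sum(1 for value in column if isinstance(value, int) and value > threshold)

def count_where_eq(column, literal):
    # String equality check: only string-typed values participate.
    return sum(1 for value in column if isinstance(value, str) and value == literal)

print(count_where_gt(zipcodes, 1000))       # 3
print(count_where_eq(zipcodes, "NW89EU"))   # 1

A fixed-type OLAP column storing everything as strings would instead reject the integer comparison outright, which is the behaviour the article contrasts against.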
Compare with an OLAP data warehouse

An OLAP data warehouse can process very complex queries on large datasets and is similar to an OPAP system in this regard. Examples of OLAP data warehouses are Amazon Redshift and Snowflake. But this is where the similarity ends. An OLAP system is designed for overall system throughput whereas OPAP is designed for the lowest of query latencies. An OLAP data warehouse can have an overall high write rate, but unlike an OPAP system, writes are batched and inserted into the database periodically. An OLAP database requires a strict schema at data insertion time, which essentially means that schema binding happens at data write time. On the other hand, an OPAP database natively understands semi-structured schema (JSON, XML, etc.) and the strict schema binding occurs at query time. An OLAP warehouse supports a low number of concurrent queries (e.g. Amazon Redshift supports up to 50 concurrent queries), whereas an OPAP system can scale to support large numbers of concurrent queries.

Compare with an HTAP database

An HTAP database is a mix of both OLTP and OLAP systems. This means that the differences mentioned in the above two paragraphs apply to HTAP systems as well. Typical HTAP systems include SAP HANA and MemSQL.

Compare with a key-value store

Key-Value (KV) stores are known for speed. Typical examples of KV stores are Cassandra and HBase. They provide low latency and high concurrency, but this is where the similarity with OPAP ends. KV stores do not support complex queries like joins, sorting, aggregations, etc. Also, they are data silos because they do not support the auto-sync of data from external sources, and thus violate Proposition 5.

Compare with a logging system

A log store is designed for high write volumes. It is suitable for writing a high volume of updates. Apache Kafka and Apache Samza are examples of logging systems. The updates reside in a log, which is not optimized for random reads. A logging system is good at windowing functions but does not support arbitrary complex queries across the entire data set.

Compare with a document database

A document database natively supports multiple data formats, typically JSON. Examples of document databases are MongoDB, Couchbase and Elasticsearch. Queries are low latency and can have high concurrency, but they do not support complex queries like joins, sorting and aggregations. These databases do not support automatic ways to sync new data from external sources, thus violating Proposition 5.

Compare with a time-series database

A time-series database is a specialized operational analytics database. Queries are low latency and it can support high query concurrency. Examples of time-series databases are Druid, InfluxDB and TimescaleDB. It can support complex aggregations on one dimension, and that dimension is ‘time’. On the other hand, an OPAP system can support complex aggregations on any data dimension and not just the ‘time’ dimension. Time-series databases are not designed to join two or more data sets, whereas OPAP systems can join two or more datasets as part of a single query.

Let’s summarize our findings for each of the different data stores we discussed and whether they satisfy our propositions to be an Operational Analytics Processing system.
https://medium.com/rocksetcloud/operational-analytics-what-every-software-engineer-should-know-about-low-latency-queries-on-large-7b02695acec7
['Dhruba Borthakur']
2019-09-09 17:29:34.155000+00:00
['Software Engineering', 'Analytics', 'Real Time Analytics', 'Concurrency', 'Database']
Lists Can be a Window into One’s Life
I perform best on a set schedule where a to-do list is part of a routine. For example, my day starts with meditation, tea, and breakfast, reading, getting ready for work, or preparing for classes. I’ve been doing all these things since I was a teenager. Life took unusual turns, creating havoc in my planned lifestyle, but my meditation practice has continued. I’m intrigued by this challenge and am just submitting a random list. Thank you so much for indulging me. Thank you for reading. Here it is:
1. I got my high school diploma when I was 13 years old.
2. I’m terrified of all flying creatures and feathers.
3. I never went on a date.
4. My predictions always come true.
5. I wanted to major in politics and psychology.
6. Rome is my favourite city in the world.
7. An unfolded laundry basket is the most exciting thing for me.
8. Silence is comforting; I can go days without talking to someone.
9. I learned to drive when I was 36 years old.
10. I want to open a cat sanctuary and retire to manage it.
© Fatima Imam
https://medium.com/illumination/lists-can-be-window-into-ones-life-a5c4815698e1
['Dr. Fatima Imam']
2020-12-28 21:20:16.086000+00:00
['Writing', 'Vulnerability', 'Relationships', 'Life', 'Self Improvement']
The Stoics’ Secret to Staying Calm in the Storms of Life
Since nearly its inception, around 300 BCE in ancient Greece, Stoicism has had a mistaken reputation for promoting a robot-like, unemotional attitude. But the goal of Stoicism isn’t to eliminate emotions so much as bad responses to emotions. There is an old story that illustrates the point. Aulus Gellius, a Roman author, was traveling on a ship with a reputed Stoic philosopher when they met a strong storm. As the waves crashed over them, Aulus looked over at the philosopher to see how he was responding. To his surprise, the Stoic man was just as afraid as the rest of the crew. The storm passed and afterward, Aulus asked the philosopher why he responded as he did. In reply, the man took out a copy of the Discourses by the famed Stoic philosopher Epictetus (50–135 CE). He then pointed to a passage, which explained that the impressions we receive from our environment are not under our control. We only have the power to assent to those impressions or not (NA 19.1.1–21). The fear that we feel when a wave is about to crash over us, in other words, is natural and not up to us. A Stoic philosopher will experience it just like any other person. What is under our control is whether to give in to that initial impression, to let it become terror for example, or not. What Stoic philosophy specializes in is supplying techniques, or “spiritual exercises,” to help us deal with such strong emotions to stay calm. In fact, in a single chapter of his Discourses, Epictetus explains that there are five such exercises to help you achieve a tranquil, happy life. Philosophically, I am going to forward the view that there is a logical organization at work in Epictetus’ chapter — he wasn’t writing a “listicle.” Practically, I want to explain these ideas in a way that you can use them yourself. A point of context proves helpful to start, one relayed best by a story. What Exactly Is a Spiritual Exercise? When I teach in a classroom, my university students are regularly treated to my abilities as an artist. Those abilities extend from drawing oblong circles, to crooked lines, to wobbly stick figures. To help me draw better, an art student once showed me that if I took an image and turned it upside down, I would do better at copying it. She had me try it and indeed I was better. But I still wasn’t what anyone would call “good.” This point illustrates the basic Stoic idea about life: living a good life is an “art” in the classical sense, meaning that it is a craft (Latin: ars, Greek: technē). When you’re learning a craft, like drawing, someone can explain the intellectual points to you, like turning an image upside down, but you still need to practice it to get better. The one without the other is mostly a waste of everyone’s time. The Stoics thus developed a host of practices (Greek: askēsis) that aren’t physical so much as mental. Pierre Hadot, a French scholar of classical antiquity, decided to translate the Greek term askēsis as “spiritual exercise” to express this point. In French “esprit” means both “mind” and “spirit,” so the idea is that these are exercises for your mind rather than your body. The Stoic’s view, then, holds that to live well you need spiritual exercise. What follows are “spiritual exercises” in this sense. 1 What To Do When Things Go Wrong In his chapter on what value we should place on outcomes, Epictetus begins by considering the commonest source of our worries, namely whether something we hope will happen — an election result for example — will in fact happen. 
He writes: “What am I going to do?” “How will I do it?” “How will it turn out?” “I am afraid that this [bad thing] will befall me or that!” All these are the expressions of people who concern themselves with the things that lie outside the sphere of moral purpose (Discourses IV.10). When Epictetus mentions “moral purpose,” he means those things that have value for your life as a human being. What makes you good as a human is how you respond to events, whether you maintain that upstanding, good individual within. Whether or not your favored candidate wins the election, your character isn’t at stake. Whether or not that person you like also likes you back doesn’t change who you are. Whether or not you are promoted, your moral character is untouched. Whether or not you are laid off, your moral character is up to you. Whether or not you become a parent, who you are is not at risk. Of course, your circumstances will change, but the heart of Stoic ethics holds that the measure of a person’s life — its value — doesn’t change by those circumstances. Good people are good people whether they are rich or poor, employed or unemployed, live in a democracy or a tyranny. Likewise, the bad aren’t made better by owning Lamborghinis and dressing in luxury attire. How to Apply This To apply this lesson, you only have to ask: is this under my control? If it is, then take the appropriate steps to correct the situation. If it isn’t, then give it up to “god.” If you believe in some divine being, then give it up to them. If you don’t, as the Stoics believed that god was the soul of the cosmos, then you can put your worries off on that. It is this line of reasoning that inspired what is known as the Serenity Prayer, usually attributed to Reinhold Niebuhr: God, grant me the serenity to accept the things I cannot change, Courage to change the things I can, And wisdom to know the difference. 2 What To Do When Goals Unravel Epictetus continues beyond this first point. Bad events are easily separated from whom we are. But what do you do when your goals unravel? He replies: If a person has great anxiety about some desire, for fear that it will turn out incomplete and miss its mark…. [D]esire none of those things which are not your own, and avoid none of those things that are not under your control (Discourses IV. 10). His point is that your goals can only unravel if you have chosen objectives that are not your own, that are not under your control. Here are some examples. You publish an article that gets only 30 views. You write a book that flops. You apply to a job and are rejected. You ask out your crush and strikeout. You try out for a team and you are cut (I discuss my own experience here). The problem in each case is that you desire what you can’t control. Do those things make you better or worse as a person? That is Epictetus’ point. Also, remember that even beyond living a good life, mere happiness doesn’t consist in getting things. Even if you do achieve your goals, we each face what social scientists call a “hedonic treadmill.” The idea is simple: our brains adapt to things. Just like getting into a hot tub, the waters of life only feel hot for a while. Get a new car … and soon it becomes old. Buy a new house … and shortly it’s just where you live. Upgrade your phone … and in six months a new model comes out. How to Apply This When you are facing a setback like this, you need to pause your thoughts and reconsider your goals. Ask yourself: Are these goals under my control? 
If not, why did I want them in the first place? You’ll probably find that the sources at work turn on what you are ashamed of and your fear of vulnerability. Another way to approach this task is to think about your life after the event in more detail. How is it different? In what ways, very concretely, would it be better? If it matters a great deal, ask other people who have been there. The reason you have to do this is that social scientists have found that we’re often terrible at imagining the future because we either leave stuff out that should be there or put stuff in that won’t be. For example, imagine that you win a free car. It’s your choice, pick whichever one you want. Imagine its color, its feel while driving it, everything about it. Now, what was your license plate number? … or did you forget to put that in? Often what’s making you feel bad about an outcome is a result of not understanding what achieving that goal would be like in the first place. But what if it’s something I’ve been trying to accomplish my whole life? 3 What to Do About Life-Long Commitments Epictetus continues his discussion with just this point, writing: Did not Homer compose his works for us to see that there is nothing to prevent the persons of highest birth, of greatest strength, of most handsome appearance, from being the most miserable and wretched — if they do not hold the right kind of judgements? (Discourses IV.10) Yes, what is true of one-off projects is true of your life-projects also. But even if your projects go well, you can only live well if you hold the right kind of judgments. Let me give you a story. Pete Best is the most famous musician you have almost heard of. He was the drummer in a small band called The Beatles. But at one point, the other members of the band decided to replace him with Ringo Starr, and like that, Pete Best became almost famous. When asked about his experiences, however, Pete said that he is happy with his life — that he is better in every way. And he continues to make his own music. His secret? He has the right kinds of judgments about what living a good human life is. Most of us don’t become rock stars, and many of those who do don’t live well anyway. How to Apply This When a major setback like this happens, you need to stop and ask: How is your life still a good human life? Often you’ll find that the source of your anxiety and depression is a series of comparative judgments. You think: Person P has Y thing, and I’m just as good as P, so I should have Y thing too! But why is having that thing important for living a good human life in the first place? One way I’ve found out of this predicament is to focus on people whom I admire and who lived well, but who didn’t have that Y thing. In short, I reverse the line of reasoning: Person P didn’t have Y thing, and they are just fine. I’m no better than P, so why should my life be worse without Y? Let me explain with a personal case. One of my life’s aims has been a simple one: to be a father. But it turns out that my wife and I cannot have biological children — frustratingly for reasons that cannot be determined medically. We decided, as a result, to adopt. What struck me about the whole process is the sense of loss that followed after being denied something I had just assumed would happen naturally. Still, as a philosopher, I know that many of the people I study lived well and had no children at all (biological or adopted). If their lives weren’t diminished, then why should my life be? 4 What About Death? 
You may be lucky, however, and never encounter a serious life-setback. Nevertheless, because you’re human, you stand within the arc of time’s bending sickle. Epictetus next turns to this point asking: But if I die in so doing? — You will die as a good person, bringing to fulfilment a noble action (Discourses IV.10). Stoic philosophy perhaps shines brightest in death’s shadow. The key to dying well, they teach, is to know what you are dying for. I covered James Stockdale’s story in another article. He received the congressional medal of honor for his valor in the Hanoi torture camp, and he saved the lives of many, many men stationed behind enemy lines. He was also a practicing Stoic throughout his adult life. But in that piece, I didn’t tell the end of his story. The truth is that Stockdale only lived by accident. He realized that under torture he could not hold back information that the enemy knew he knew. And they had learned that he knew more about camp resistance than he was letting on. If Stockdale divulged what he knew, then the prison guards would kill many of his men and he did not want that to happen. And so the night before they were to torture him, they strung him up in a room and left him there. Stockdale somehow managed to swing to the lone window in the room and break it. Then, with bound hands, he grabbed the shards and slit his wrists. Sometime later the guards found him in a pool of blood and decided to revive him. The reason? The international community had just learned of the torture camps and North Vietnamese officials did not want the world’s opinion to turn further against them. The last thing they needed, then, was a dead, high-ranking American POW. Chance stepped in to save Stockdale, but he was prepared to die for his men. He knew what he would be dying for. How to Apply This Epictetus tells you explicitly how to apply this lesson, writing: What is it, then, that you wish to be doing when death finds you? I for my part should wish it to be some work that befits a human, something beneficent, that promotes the common welfare, or is noble. … If death finds me occupied with these matters it is enough (Discourses IV.10). The point is simple: it is enough to try, earnestly, to help people. Learn to live for others. 5 What About The People We Love? This last point, though, seems to expose us to the special difficulty of loving other fragile beings. They too, just like us, stand in the arc of time’s sickle. Epictetus addresses this point in discussing the mourning of one man for his friends. He writes: Why did he regard any of his friends as immortal? (Discourses IV.10) On this point, Stoicism is often mistaken for coldness. That’s not Epictetus’ point. Rather, he has in mind the same notion that structures the entire chapter: we are human beings with human value. It is not a happy thought, but of course, our friends and loved ones will pass. That is a fact of human existence. What redeems them, though, is not a long life — not even an infinitely long one. It is rather the value of their actions as human beings. Death has no value — neither yours nor theirs. And this is a freeing notion, because it means that no one’s life is diminished for having been made shorter. It is open to us all to live well, even if we cannot all live equally long lives. How to Apply This I research all the world’s philosophical traditions because I think that’s the best shot anyone has at learning how to live well. In another piece, I covered the Day of the Dead practices from Mexico. 
These practices find their historical origin in the ethical philosophy of the Aztecs. Where I think they help us in applying this Stoic lesson is in the practice of remembering our loved ones and our ancestors. It is one matter to recognize that death does not diminish their life, and another to practice respect for them. While face painting has become all the rage for social media posts, the heart of Day of the Dead is just to build a small memorial to those who have passed in your home. Then set aside some time to talk about them and to recall what they did that was good in life. This is one way to actively remember our relationships with others and to express gratitude for what is good in our own lives. Living The Examined Life Epictetus forwards logical reasons why there are just five spiritual exercises to remain calm in the storms of life. Each explains why tranquility is a choice, but not an easy one. In reality, this is just one practice, called “detachment,” which is exercised in five different domains. Like many Stoic terms, detachment can be mistaken for something negative. But like its parallel Buddhist practice, what you are learning to detach from is something harmful. Those harmful things arise in five areas of life: external events, individual projects, life-long projects, facing the end of life, supporting our loved ones. To explain the value of these spiritual exercises, I’ll leave you with a final quote from Epictetus.
https://medium.com/illumination-curated/the-stoics-secret-to-staying-calm-in-the-storms-of-life-1271a83aab6f
['Sebastian Purcell']
2020-11-16 19:41:53.303000+00:00
['Self-awareness', 'Mindfulness', 'Philosophy', 'Life Lessons', 'Self Improvement']
Badlands
The Mantle is a serialized fantasy story. More about it here Table of contents They struck camp while a full moon looked on. The bare trees black and the ground silver and they like a pair of hooded wraiths haunting a dead place. Because he knew the way, Simeon took the lead. He paused at the dry creek and looked back to find the old man had stopped to gawk at the desolation. The air still tasted burnt; Simeon wondered if it was possible to scorch the wind. The bandit’s boot prints had hardened in the mud; tracking them east was not difficult. Free of the need for close study, Simeon watched the shadows that pooled darkly and waited for something to materialize as though from a portal. But nothing emerged to challenge them. The old man came forward, staff held crosswise in both hands. He was stooped as ever but somehow seemed more in all that midnight glow, eyes bright and sure. He wordlessly stepped past Simeon, taking the lead and reasserting the natural order. They walked some twenty minutes, pausing at times to gauge their surroundings, to listen to the thin wind rattle the dead trees. The bandits had camped on a bare shelf of stone open to the sky and ringed by the scrappy remnants of brush. Simeon and the old man watched the camp from behind trees but it was immediately apparent that the bandits were not there. The old man used the end of his staff to poke at the charred embers within their stone circle. He frowned at the trees beyond. “They have at most a day on us. Probably less. We may catch them before daybreak.” Simeon wasn’t all that anxious for a reunion. He told himself that this time it would be different. This time they’d have to contend with the old man and his magics. But he felt little better as they re-entered the forest. They lost the trail sometime after midnight. The boot prints steadily faded until they were simply no more. They stood and looked about and then continued in the same general direction, but after another thirty minutes could no longer be certain they were still on the bandit’s trail. They stopped to rest. “We must move on,” the old man said, more to himself than Simeon. He was sitting astride a fallen tree and stared unblinking at the ground. “Much as it pains me. We’ve lost too much time already.” “Master?” The old man reached out his hand for Simeon’s arm. “We must attend to our errand. Come along boy.” They veered northeast and by dawn left the eerie still of the burnt forest, entering a hard, hilly country gray with rock. Scraggly weeds clinging to thin soil on the leeward side. Clouds of dust disintegrating as rapidly as they’d coalesced. They climbed a high mount, hand over hand, and from the summit stood looking in all directions. There was no movement but what the wind made. Though it’d been a long night, the old man was in favor of pressing on. They napped in the shade of a boulder and climbed down from their rocky perch an hour later. “Master?” Simeon had been troubled by something and now seemed as good a time as any to seek the old man’s counsel. “Hmm.” “The fire – why did it stop listening to my commands?” “All things have an innate nature. A natural tendency, hmm? You must always remember that. Fire will always want to burn. The wizard who doesn’t consider such things before calling and commanding won’t be a wizard long. As you very nearly discovered yourself.” Simeon thought on this. “What is the nature of starlight?” “To shine, coldly. 
Also a certain degree of aloofness.” The sightlines were good and the country bleak and empty, so they increasingly moved by daylight. Better to see the uneven footing. They scaled rocky ascents carpeted in loose stone that shifted underfoot and skipped down the slope, kicking up screens of dust. Camped beside an enormous lake, dark and still in the manner of deep places, the shore a field of broken shale. Nooned in the shade of a peaked obelisk, seamless and polished like black glass. It was over a hundred paces to a side, and the old man tapped it in places with his staff and listened and frowned, and watched it closely while they ate. Their stores were dwindling and they’d taken to rationing what was left. Each meal they spread everything on a blanket to take a complete accounting. Even though the numbers were never far from mind. Three oatcakes. Four figs. A darkly spotted mushroom they weren’t sure was safe to eat. A carrot, bent and shriveled and knotty as a hag’s finger. They split a fig and an oatcake and drank water until their bellies were distended. There was no forage to be had in this country, but with its many basins and depressions, there was no shortage of water, at least. Simeon lay back with his hands behind his head and looked at the sky while the old man scratched numbers in the dirt with the end of his staff and muttered to himself. “We shouldn’t be far now. No way to tell for sure without the talisman, but I think we’re close. I only hope we’re not too late.” Questions crowded Simeon’s tongue – what would happen after they arrived? Why might it already be too late, and for what? How would they cross the long miles home with no food? But he quickly settled on the one that had been foremost in his mind since the old man had told him they were leaving Shadowmount. The one the old man had refused to answer each time before. But perhaps now that they were close… “Master? Where are we going?” The old man had been frowning again at the obelisk. He gave Simeon a careful look. “Where are we going? Or do you mean to ask our purpose? Hmm?” “I would know both.” It was a bold statement, and he flushed in its aftermath. The old man’s eyes widened in mock surprise. “Know this, apprentice – a wizard must always be clear of intent, even in seemingly trivial things. Train your mind accordingly or risk the consequences.” He sat back on his elbows and absently wormed his fingers through the gray wisps of his beard. “You must know by now that this is a special errand, fraught as it is with trouble and hardship. Few things would rouse me from home for such a bargain. Few things indeed. “Two nights before we left, I witnessed a star blaze a fiery path across the sky as it plummeted. Like a sign from the old gods. I put all my powers toward ascertaining the location of its final resting place.” He waved a hand toward the distant northern mountains. “By my reckoning, we are only days away.” “And then?” The old man guffawed expansively. “And then? My young ward, stars contain the very essence of magic itself. With just a bit of such material, we could do great things. Very great indeed.” Simeon thought star metal sounded like an especially fine thing, a first thing surely, and he stood in his excitement. “Well come along then – let’s find this star and be away.” It was another week before they located the crater but their excitement was immediately dampened. Someone had beat them to it.
https://medium.com/themantle/badlands-66b136d612bb
['Eric Pierce']
2020-11-22 04:38:49.500000+00:00
['Science Fiction', 'Fiction', 'Fantasy', 'Themantle', 'Writing']
Battling Customer Penalties with AI
by Kristen Daihes Customer penalty costs are impossible to avoid altogether. Historically, they were often attributed to the cost of everyday operations. Recent years have seen a steady increase in this expense category however. As a result, many companies are now starting to take notice and are searching for a solution that will bring these costs back under control. I recently read an interesting Customer Deductions Benchmark Survey with this in mind. One of the points that stood out for me was the importance in differentiating between deductions that are considered the normal cost of doing business (such as trade promotions and allowances), and those that are true vendor violations and compliance errors. The survey then highlighted the latter as being self-inflicted, and reinforced the importance of understanding real root causes in order to prevent re-occurrence and therefore take action to limit or even eliminate these ongoing charges. Early/late delivery was reported by more than 35% of respondents as the most significant non-trade compliance deduction (in dollars) The survey also highlighted that early/late deliveries was the category of penalty affecting companies most this past fiscal year. This is no surprise in the CPG space as more and more retailers are shortening their delivery windows with suppliers and incurring more significant penalties for missing these windows. But that realization on its own won’t help companies predict and prevent future occurrences. Only 29% of respondents had performed root cause analysis to understand the underlying reasons for deductions. Interestingly enough, despite the pain of these ever increasing costs only a small percentage of companies are looking into the ‘why’. This statistic took me back to my days as a reliability engineer, driven every day to solve the puzzle of identifying the real root cause of failures. “You cannot eradicate that which you are not aware.” My company did see the value in dedicating tremendous time and effort into this but many other companies then, and even now, are simply reporting on the costs without taking any action. So why aren’t more companies investing in root cause analysis to prevent these rising costs? Bottom line: It’s really hard. When I was a planner, about 40% of my time was spent doing manual research, pulling data from several sources to try to determine why a customer delivery missed its window. What made this particularly challenging was that so much transactional data was sitting in different functional silos. It was difficult and time-consuming to unite these disparate data sources to answer my questions. To make matters even worse, critical information, like carrier confirmation of delivery, was often completely missing. There is now no excuse for inaction…. While I understand the difficulty of these endeavors through my previous work, I have come to find that artificial intelligence based solutions can really transform what is possible in this area. There are countless ways to go after linear and non-linear root cause analytics. Technological innovation also makes it easier to stitch together these previously disconnected data sources, as well as increase end-to-end visibility. Beyond the obvious improvement in OTIF (on time, in full) compliance, I have seen the numerous benefits of investing in these improved root cause analytics solutions: · Increased efficiency and speed in automating the assignment of the root cause of losses. 
Improving in these areas enables your organization to pivot resources toward proactive intervention and eradication of potential future losses. · Reduction in the impact of human bias on the decision-making process by using machine learning to assign root causes to losses. · Identification of structural improvements and the business value they represent. I have also found that investing in root cause analytics can benefit a broader ecosystem. Target, for example, employs continuous improvement leaders to work with their suppliers to help them determine where their delivery processes break down. If the supplier can improve OTIF reliability for Target, they are likely to see a similar improvement for other retail customers as well. The reliability engineer in me always gets excited by the focus on eradication of failures. To make a significant impact, you have to have the ability to identify where and why failures occur, as well as understand the full impact they have on the business. BOTTOM LINE: If root cause analytics is not on your road map of opportunity, it should be. ___________________________________________________________________ If you liked this blog post, check out more of our work, follow us on social media or join us for our free monthly Academy webinars.
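To make the machine-learning root-cause assignment described above a bit more concrete, here is a minimal sketch in Python, assuming scikit-learn. The feature names (carrier, pick_delay_hrs, dock_wait_hrs, inventory_short), the root-cause labels, and the handful of records are hypothetical illustrations, not Opex Analytics' actual approach or data:

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer

# Hypothetical shipment records stitched together from order, warehouse and carrier systems.
shipments = [
    {"carrier": "A", "pick_delay_hrs": 6, "dock_wait_hrs": 1, "inventory_short": 0},
    {"carrier": "B", "pick_delay_hrs": 0, "dock_wait_hrs": 9, "inventory_short": 0},
    {"carrier": "A", "pick_delay_hrs": 1, "dock_wait_hrs": 0, "inventory_short": 1},
    {"carrier": "B", "pick_delay_hrs": 7, "dock_wait_hrs": 2, "inventory_short": 0},
]
# Root causes previously assigned by analysts; these become the training labels.
root_causes = ["warehouse_pick", "carrier_dwell", "stock_out", "warehouse_pick"]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(shipments)      # one-hot encodes the carrier, keeps the numeric fields as-is
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, root_causes)

# A new missed delivery window gets a suggested root cause automatically,
# instead of hours of manual research across disconnected systems.
new_miss = {"carrier": "B", "pick_delay_hrs": 0, "dock_wait_hrs": 11, "inventory_short": 0}
print(model.predict(vec.transform([new_miss]))[0])

With only four made-up rows this is obviously a toy, but the shape of the workflow (unify the data, learn from past analyst decisions, suggest a cause for each new penalty) is the part that scales.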
https://medium.com/opex-analytics/battling-customer-penalties-with-ai-8408b869d508
['Opex Analytics']
2018-09-18 16:47:53.796000+00:00
['Artificial Intelligence', 'Customer Service', 'Business Value', 'Machine Learning']
Design for systems, not users
Design for systems, not users The unintended consequences of user-centered design If there’s one thing the current moment has done, it is to peel back the façade of radical individualism and reveal the ways in which we are deeply dependent on other people and systems. The often invisible networks of infrastructure and labor that hold up our society have lately been thrown into brilliant relief: The healthcare system that determines how and whether you are treated for illness. The workers who bring you your groceries and deliver your packages. The global logistics infrastructure that determines whether you can buy that toilet paper, or those Clorox wipes. The political systems that determine how your community responds to threat, and whether that response keeps you safe. In the past decades of relative prosperity, it has been easy to ignore or obfuscate this web of interconnectivity, and as a result we have built much of that seeming prosperity on the backs of fragile or exploitative systems. Those fissures, those inequalities, are now coming to light in an urgent way. So what does this have to do with design? As a designer, I try to look at both the explicit and implicit choices being made in designing an experience. And the implicit choices baked into much of our software are deeply problematic, creating shiny user experiences on top of extractive and exploitative business models. As I think through how we might make more ethical choices, how we might make those implicit choices explicit, I’ve found myself looking critically at the practice of user-centered design. The fundamental problem is this: User-centered design focuses attention on consumers, not societies Like many designers, I’ve been trained in the idea that user-centered design is a humane and ethical approach to design. It is rooted in empathy for people, therefore it helps us create beneficial experiences for people, therefore it is good for society. But who is the user we’re designing for? In most cases, that user tends to be synonymous with the consumer, the person with the purchasing power. Furthermore, the user tends to be the person directly engaging with the software. But the digital experiences we create touch far more people than just the end user. They engage with entire, interconnected systems that are composed of many different participants, only some of whom are the “users” we typically design for. As Kevin Slavin writes in his essay Design as Participation, “When designers center around the user, where do the needs and desires of the other actors in the system go? The lens of the user obscures the view of the ecosystems it affects.” So in effect, user-centered design ends up being a mirror for both radical individualism and capitalism. It posits the consumer at the center, catering to their needs and privileging their purchasing power. And it obscures the labor and systems that are necessary to create that “delightful user experience” for them. This is how we end up with platforms that give us free content, backed by an invisible system of surveillance capitalism that extracts personal data for profit. This is how we end up with systems that can deliver anything our hearts desire to our doorstep, backed by an entire class of exploited and underpaid workers. Designing for the whole system Instead of focusing on the user, how might we instead design for whole, interdependent systems? What might we have to change about our practice to create better, more ethical outcomes for society? 
To begin with, we need to expand our mapping of the space we’re designing for. We can take some tools and models from forecasting, like STEEP, to map the social, technical, economic, environmental, and political systems that our product touches upon. Instead of focusing on one or two types of end users, how might we look at all of the participants in our system? Who uses the software? What labor does the software require? What tradeoffs are inherent to the business model that supports the software? If this starts to feel very big, it’s because it is. Everything we make has secondary effects beyond the choices we explicitly make, so a systems-centered design (or society-centered design) practice tries to make that larger system visible. We can only change that which we can clearly see. That said, we obviously only have explicit control over some parts of a system, whereas other aspects we can only hope to nudge or influence. Kevin Slavin also speaks to this shift in thinking about what it means to design: “The designers of complex adaptive systems are not strictly designing systems themselves. They are hinting those systems towards anticipated outcomes, from an array of existing interrelated systems. These are designers that do not understand themselves to be in the center of the system. Rather, they understand themselves to be participants, shaping the systems that interact with other forces, ideas, events and other designers.” One of the reasons UX design is such a compelling practice is that, rather than designing static artifacts, we design systems that shape the possibilities, expectations, and constraints for how people engage with the world. That work, to shape how people engage with the world around them, carries a lot of power and requires a lot of responsibility. Increasingly, we are surrounded by digital products and experiences that abdicate that responsibility — that focus on short-term profitability over creating products that work well for the people (and societies) that use them. By designing for systems rather than users, we shift into a dynamic posture. Systems are ever-changing, so as designers we can participate, nudge, and adjust over time to adapt to a system as it evolves. We can make playable systems that give everyone more agency, and we can create experiences that respect our inherent interconnectedness.
https://medium.com/swlh/design-for-systems-not-users-4e261aa4714d
['Alexis Lloyd']
2020-05-26 23:29:58.238000+00:00
['Design Thinking', 'Technology', 'Design', 'Ethical Design', 'Systems Thinking']
Don’t Be Surprised When The Boss Acts Like a James Bond Movie Villain
Cruel Leadership Truths Image by OpenClipart-Vectors from Pixabay Make management put it in writing. Verbals have no value. Watch what they do, not what they say. A close friend of mine was a successful software coder for a Fortune 500 company. She worked from home, making $180K, living in a beautiful condo while traveling the world finalizing system installations. Carol loved traveling. Her boss granted her the perks of a high-potential employee, including executive life insurance, gym membership and a company car. She was not satisfied with her situation. Carol always wanted to be in charge. My position as an Operations Director (at a different company) was the job she was seeking. She was skilled enough to transition. The Operations leader holds a position of true power. Operations controls all purchases for the company to make the product. This department places the purchase orders for the indirect purchases of items such as coffee and toilet paper. Operations also owns the physical facility, shipping and receiving. Everyone who makes, ships, or touches the product works for Operations. Operations was the plum assignment that Carol desired. Carol, Kenneth and I graduated from Engineering school together. We teamed up for our senior project. I was the electrical engineer, Kenneth was the mechanical engineer, and Carol was the software coder for our ambitious senior project. We spent a lot of time together and pulled off a great project. Kenneth was setting up a start-up. He had the money and the facility… all he needed was a Leadership team. We met for dinner. “It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change.” Charles Darwin He offered Carol and me positions in the start-up: for me, Engineering Vice-President; for Carol, the coveted Operations VP. We would be in charge, answering only to Ken; able to set up our staff by hiring whoever we wanted, able to structure the brand-new departments in any form that we chose. There was one little hitch: we would not be paid until the start-up landed its first contract. We would not get stock (the investors had all the stock). Essentially, we would work for the blue-sky dream. “Well, hello there. You look like a bad decision. Come on over here.” Unknown We would live on our savings until the company created revenue. This was the proposal. We would lose a year’s pay and benefits if Ken’s start-up company was unsuccessful. Being paid double our salaries and our expenses for the year in a large lump sum plus the allure of unlimited power was an intoxicating offer. It would be a waterfall of money. Image by enriquelopezgarre from Pixabay It was hinted that if the start-up were successful, we would be more than rich and powerful. We would be the captains of our industry. Whew. Heady stuff. Kenneth wanted us to start right away. He wanted us to quit our current positions with zero notice. He had nothing written, no business plans, no contracts. His business strategy was secret, and he was unwilling to share it. I was married with two children. I needed to talk to my spouse. It upset Ken that I did not jump at his offer. After debating with my spouse all night, when the sun rose the next day, we decided that if Kenneth would provide a written offer, I would review the paperwork. Image by Alex Hu from Pixabay If the offer contained what Kenneth promised, I would take the job. I was excited. Kenneth would not put his proposition in writing; he said he was too busy. 
I offered to have a lawyer draw up the paperwork. “No,” he said. “If I didn’t trust him, then our relationship would never work.” He was right. I put on my big girl pants and made the hard decision, I did not take the job, but I wanted to. I was second guessing myself all the way. Carol took the position. It was everything she ever wanted. Although she and I continued to meet and socialize, Ken no longer attended the gatherings. He was after all Carols’ boss: in his view, silly irreverent conversations were no longer appropriate. Carol worked for Kenneth with no salary and no benefits for fourteen months. After fourteen long months, the company won the well-paid contract. “Everything happens for a reason but sometimes the reason is that you are stupid and make bad decisions.” — the things we say.com Carol was not returning my calls to congratulate her on her success. I was certain that she wanted to rub my remorseful decision making in, but she did not call. I went by her apartment, unannounced. She no longer lived there. Through mutual friends, I found out when the customer contract came in, Kenneth had fired Carol. Worst yet was the way he treated his team. “All I want to say is that They don’t really care about us All I want to say is that They don’t really care about us” — Michael Jackson He invited the leadership team to a celebration lunch outside the company. Ken gave a speech on how thankful he was that the team had worked so hard and long for the company. He passed out envelopes to each person. The envelopes contained an application to apply for a job at the company. They would compensate anyone who successfully applied and won the position they had already worked for the last year. While the executives were out of the plant, their computers were picked up from their desks and their access badges turned off. No one was allowed back in the company until their background checks were completed and they were formally hired. So scandalous. He was a villain. One Wednesday night, I sat outside her mother’s home, waiting for Carol. I had: two gallons of cherry vanilla ice cream, a large bottle of Diamond Sapphire Vodka, heavy cream, cherries, a jar of anchovy stuffed olives, celery, tacos, two large containers of soup, and my empathy to offer. Image by Anemone123 from Pixabay When we were in college, Carol and I had perfected the “ Naomi Campbell walk” to attract boys to us. A tall, proud, hip-swiveling, high stepping, hair swinging walk copied from the famous model. When I saw Carol pull up, I got out of my car and sauntered over to her with my best Naomi Campbell runway walk. I had bags out in both hands with the easily recognizable vodka bottle sticking up. “You are one pitiful broke bitch,” I said. “ Walk with me.” Carol smiled. She joined me — Naomi Campbell walking across to her mom’s house. We went in, went down to her basement to cry in our ice cream drinks about the loss of old friendships and trust. When the money came in, it was too much money; Kenneth and his wife could not resist keeping it all. From their perspective, they were instrumental in getting the contract and they deserved the fruits of everyone’s labor. Paying what they owed to the team would have cut into their shares, so they shafted everyone who did not have a commitment in writing. Not a single person passed the background checks. There was nothing in writing — not even emails. The atrocious bosses had a disastrous effect on their employees. Carol never recovered from that setback. Her taste for risk lessened. 
She still lives in her mother’s house, although no longer in the basement. She does not travel. She lost her desire to be the Leader. Carol does not trust her bosses. Do not accept verbals for life-changing work decisions. Make management put it in writing. Life is too short to make betrayal easy. * Another leadership read: The Bad Management Conundrum My books are available on Amazon. Join my Readers Group. I can be reached at https://www.tonicrowewriter.com/
https://medium.com/swlh/cruel-leadership-truths-a89d72e39d52
['Toni Crowe']
2019-07-03 00:03:11.464000+00:00
['Work', 'Leadership', 'Short Story', 'Women', 'Entrepreneurship']
Designing for Legacy.
This is amazing and I bet most of you agree, but to me design needs to be redefined. We might need to deliver ‘long lasting design’ but it’s not just about creating something that physically exists for the rest of time — it’s about creating something that has an impact that lasts. Whatever it is we do, it needs to leave behind a legacy. I’ll do my best to explain what I mean. Mother Nature has given us everything, from life to a hometown. Freedom to think by ourselves and be different to the rest of the species. Just everything. She’s always been so generous and comprehensive, and yet humankind has behaved like a grumpy little kid that complains and talks back to their Mum, as though she were guilty for all things he dislikes. But we’re not children and we shouldn’t treat our planet this way. We’re selfish, disrespectful and inconsiderate of the species we share it with, and it’s time for everyone become aware; to step up and take action! If you’ve reached this line you may be thinking, what does this have to do with design? Good question! That’s what I want to debate here, what is a designer’s work all about? I think we are focused on certain things but we should prioritise others. I’d like to talk about what I think is the most important contribution we designers can do, Planet-Centered Design. An approach that puts the needs of our planet at the center of all our choices. In today’s world, design work is about solving problems for brands, startups, products, goods and services and so on. It’s also about helping organisations work better, helping them improve processes or innovate, using the processes we designers use in our everyday work. This is what the market demands. The market demands so because that’s what society demands too. Am I right? Depending on what kind of design we talk about — industrial design, interior design, branding, digital design and on and on. But let’s get specific. Let’s talk about data and digital design, and the role they have in developing meaningful services and products, because I reckon this is where we can make the most impact. First things first, in digital design, we have to evaluate what companies want to sell. Also, what users want to buy or use, and make this journey the best one possible through user experience tactics, for both, to make users return and recommend their platform, and for companies to sell more. Is this a fair summary? Because I think I would be wrong to agree, I’d like to go one step back and analyse why we behave like this. I agree that most of ‘our needs’ are based on what’s around us and what we’ve learnt. Those are fair reasons to be drawn into this mud, but do we really know where all these influences come from? Why do I suddenly need a new pair of shoes I really don’t need, another book that’s been recommended by a shop, or to start using a new app? Most of us consume the same information from the same kind or sources (same reading, same algorithms, same TV shows, same politicians, same brands and so on) because I think we humans are comfortable and reliant to be part of the bigger society and no longer live in isolation. We live in a global world. Kind of. More than half of the people on Earth are not connected to the rest of the world, and the ones who are supposed to be connected are separated by language, culture, and opportunity. My two cents is, why don’t we go one step further instead? Maybe it’s time to stop being trustful and comfortable and make our own informed decisions. 
Are we aware of whether we are doing what we need to do? I read this news recently and I’m still quite shocked. Are we making the right decisions? Are we focused on the important issues we need to solve? Are those choices what we need as a species, for our families and ourselves? This is why I think we need to get out of our comfort zones: to make better, more conscious, and informed decisions. Comfort is stifling creativity, and we need to focus on our Planet or nothing will last for our species. This is my call to all designers! What if instead of working to sell more, we use our abilities to do something useful for everyone? Why don’t we apply design thinking to our Planet? Here is where I’d change the Human/User-Centered design approach into Planet-Centered Design. We can help improve things like climate change, biodiversity, poverty, hunger, sickness and other things we still don’t know about or don’t want to accept are happening!!! If we stick to the idea that we can’t, we are just being lazy, but if we do something proactively, it will change! As designers, we can and must change things. To live our lives feeling we’re doing something to improve the situation, because… the situation can still change! Let’s take action! As designers, I think that we can also do our bit. Everyone can, but our work is capable of changing things and achieving impact. Let me tell you about some things I realised we could do, and try to do, at Vizzuality, and hopefully you’ll agree and make the choice to change the way you design. Design for the long term. Design things that endure and evolve over time — and remain useful and used. Think about the needs of the people who will consume what you designed: do they need a new pair of shoes, or another book that will go on a big unread pile? Why not help corporations change some behaviours? They may be able to produce in another country, do it in a more sustainable way and give their workers better job conditions. Or persuade governments to pass new laws. Who knows? I bet there are many things you can help them with. Help people make better-informed and more sustainable decisions. We deserve to understand the impact of what we do, to understand what’s happening with resources that are running out and find alternatives. Even if they don’t think about their behaviours, what you design must be sustainable. Trust me, we designers are in a position of power right now that puts us in a privileged place at the start of a chain of events that can make a big difference. I think that it’s time to move from Human-Centred Design to Planet-Centred Design. Maybe you’re here just for a while and you don’t mind, but I hope that after reading those words, you will at least think about the important needs we humans have, and that some of you will feel compelled to get on this train and change your behaviours, or decide to start thinking about the consequences your designs may have in many different ways. And to be honest, this advice is not just for designers, but also developers, scientists, humanists, engineers, politicians, doctors, EVERYONE! Just remember one thing: our Planet wasn’t designed for just one species. Let’s be grateful for it and take care of the things that really matter. Join in!!!
https://medium.com/vizzuality-blog/designing-for-legacy-eaa395173860
['Sergio Estella']
2018-05-15 07:38:39.898000+00:00
['Social Change', 'Design']
Overcommitting With Focus
When we need to make changes to legacy code with no tests we often encounter the chicken and the egg problem. On one hand, in order to refactor the code, we need tests, but on the other hand, if we want to add tests we need to refactor it first. When this happens we usually just make the changes and hope everything goes well. Is there anything we can do to increase our confidence while refactoring legacy code with no tests? Enter Overcommitting It turns out there is something we can do and it’s called Overcommitting. The idea is to rely on Git by constantly committing along the way (we can think of it as leaving behind a trail of breadcrumbs). This gives us the ability to “go back in time” in case we realize we’ve made an error (we can also Git bisect if we’re not sure when things went south). How Does it Work? Nicolas does a good job of describing the process of Overcommitting so I’ll just try to briefly summarize it: Commit every 2 minutes. We want this to be as fast as possible so we don’t invest time in the commit messages (it’ll get squashed later anyway). Interactively rebase every 30 minutes to squash the temporary commits. We do this in order to not be overwhelmed by the number of commits. It’s important to remember that one of the ideas behind Overcommitting is to maintain focus. If we feel like 2 minutes is too short (because we’re barely making changes) we can lengthen the commit intervals, or shorten the intervals if it’s too long. The same is also true for the squash intervals. Keeping the Focus When I was experimenting with Overcommitting, one thing that bothered me was the constant commit interrupts. It’s hard to focus on the task when we’re constantly interrupted by the need to commit. This made me wonder if I could somehow reduce the number of interrupts. Automate the Disturbing Stuff I decided that the best course of action was to automate the frequent commits. I was much less bothered by the longer squash intervals so I didn’t feel the need to automate them. Here are two very simple zsh/bash one-liners I used to not have to worry about the commit intervals: Commit Until Next Squash The idea here is to stop auto committing once we reach the squash phase: for i in {1..15}; do git commit -au -m "wip"; sleep 120; done Note that: We run in 2-minute intervals (sleep 120), 15 times. We only commit changes to files Git already tracks (that is what -a stages); to include untracked files we have to git add them first. The content of the commit message is irrelevant, which is why we chose "wip". Never Stop Committing We run auto committing until we explicitly stop it: watch -n 120 git commit -au -m "wip" What I usually do is run the git commit -au -m "wip" once and then just run watch -n 120 !! which makes watch run the last command. Another thing I tend to do is run the auto-commit in a small Tmux pane so I can watch over the automatic commits. Summary Overcommitting is simple yet handy. It provides some safety when we’re working on legacy code that has no tests. I also find it useful: When I’m evaluating a problem. When I’m spiking my way through some code. Because it forces me to focus on the task at hand. Making Overcommitting automatic is so simple that it’s a crime to not use it. Try it 🙂 As always, feel free to leave me feedback. I highly appreciate it.
https://medium.com/curious/overcommitting-with-focus-e62e6396f779
['Gideon Caller']
2020-12-24 22:47:56.561000+00:00
['Git', 'Programming', 'Productivity', 'Legacy Code', 'Cli']
How neural networks actually work
You might have come here after reading an article on AI by Practicum by Yandex. Good choice! Let’s dig deeper into neural networks. What is a neural network A neural network is a kind of database. It stores data (basically, numbers) and can move this data between its cells. The only difference is the structure. In a regular database, cells are connected in rows and columns. In neural networks, cells are connected… you know… like in a network: But the point of a neural network isn’t in the data that it stores. Most of the time, the cells are empty anyway. The point of a network is in the connections between the cells. See those arrows between the cells? That’s the most critical thing in a network. As data moves through the network, it gets transformed. The arrows define how every bit of data transforms. In other words, you feed this network some numbers, the network does math to these numbers according to the arrows, and you get some numbers on output. That’s it. The neural network is just a lot of fancy math. Example In our last article, we gave an example of a delivery robot that crosses the street. Say, our delivery robot has to identify a car. It has a camera. A camera gives us an image that might have a car in it, or it might not. An image is mostly a pile of pixels, tiny dots with information about color. For simplicity, let’s say that our robot only sees in shades of grey, so for every pixel it sees, there is a number between 0 and 255. 0 is black, 255 is pure white, and everything in-between is shades of grey. Again, for simplicity, let’s say our robot has a low-resolution camera that captures images that are 32 x 32 pixels, which is 1024 pixels per image. It’s a pretty small image, but for now, it will do. To us, these pixels make an image, and despite the low resolution, we can somehow identify a car (even guess the model). But to a computer, it’s just a bunch of numbers that represent shades of grey. How does a computer tell which image is of a car? Can we create an algorithm that checks sequences of numbers against some template? Can we make a template that says: ‘These numbers definitely represent a car’? We can’t. Cars come in all shapes and sizes, and our template can’t account for all possible cars. What we can do is this: Get a bunch of photographs of different cars, of which we definitely know: yes, these are cars. Maybe even a bunch of photographs of stuff that’s not cars. Mark them as ‘not cars.’ Take the data from these photographs and put them into a database. For 1000 images we’ll have a little over 1 million numbers (1000 x 1024 = 1,024,000) Do some fancy math to that million numbers (we won’t go there just yet). Remember that computers are good at doing math to millions of numbers. As a result of that fancy math, we’ll have a very, very, VERY complicated formula. This formula takes in 1024 numbers and outputs a number between 0% and 100%. Magically, to us, this number between 0% and 100% represents how likely any given image is an image of a car. So, what does this formula have to do with the network chart? This chart represents the fancy formula. You input the numbers into the input nodes, and the output gives you another number. This number says, ‘I am this much confident this data stands for a car.’ This chart is just an oversimplified representation of the logic behind the fancy math that goes on in step 4. In reality, you’ll have 1024 nodes on the left, and two or three ‘hidden layers’ in-between. 
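To make that “fancy math” a little more concrete, here is a toy sketch of a single forward pass in Python, assuming NumPy is available. The layer size of 16, the sigmoid squashing function, and the random, untrained weights are illustrative stand-ins, not the actual network this article describes:

import numpy as np

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=1024)          # one 32 x 32 greyscale image, values 0-255

# Made-up weights: 1024 inputs feed 16 hidden nodes, which feed 1 output node.
w_hidden = rng.normal(scale=0.05, size=(1024, 16))
w_output = rng.normal(scale=0.5, size=(16, 1))

def sigmoid(x):
    # Squashes any number into the range 0..1.
    return 1.0 / (1.0 + np.exp(-x))

hidden = sigmoid((pixels / 255.0) @ w_hidden)     # 1024 numbers become 16 numbers
confidence = sigmoid(hidden @ w_output)           # 16 numbers become 1 number

print(f"'This is a car' confidence: {confidence.item() * 100:.1f}%")

A trained network would have learned w_hidden and w_output from thousands of labelled images; here they are random, so the printed percentage is meaningless, but the arithmetic (multiply, add, squash, repeat) is exactly the kind of math the chart stands for.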
There will be millions, possibly billions of connections between nodes, and trillions of mathematical operations. All to identify a car in a small black-and-white photograph. Why the network? The network is just a graphical representation of the math that happens under the hood. In reality, it’s just math: billions of sums, subtractions, multiplications, and divisions. The only problem is that, to us, it would make no sense, so we drew a network-style chart to try and understand what was going on. What is going on is this: we have 1024 numbers that need to be summed and multiplied about a billion times to give us one final number. This is all there is. What does it hide in the hidden layers? As for the hidden layers — it has to do with the way this fancy math is done. We’ll cover it in depth in a future article, but for now, suffice it to say that: Humans do not program the fancy math of a neural network — a machine finds it through trial and error. This process of trial and error is called machine learning: this is how a neural network kind of learns proper math to tell a car from a cat. The machine needs to store its knowledge about this math somewhere. For that, it uses the hidden layer. It’s just a bunch of storage space for data and formulas. The problem with the hidden layer is that it is generated and regenerated randomly in the process of learning, and to an external observer, it will be just random noise. Humans can look into the hidden layer and even tinker with it, no problem. But it was designed for the machine to store temporary learnings, so there are no user-serviceable parts there. In this aspect, it’s much like the human brain: we can cut it open and see what’s inside, but it will make no sense to us, with the level of complexity it has. So, is the neural network a brain? It is based on similar principles, but it’s not a brain. Here is what it has in common with the human brain: It’s performance-oriented. The aim of a neural network is to give some useful results (for example, tell a car from a cat). The way it achieves that is of less importance. Its internal logic can be all messed up, as long as it gives an accurate enough result. A neural network is like a muscle: it can do stuff, it can be trained, but it’s not specifically designed for a task. It just trains to perform that task and gets better at it with time. It can learn (or train). There is an iterative process that shapes the connections in a neural network to make it produce useful results. Unlike a brain, however, it is not curious. A programmer is the one who makes it learn. It’s unclear how it works. We understand the basic math behind a neural network, but once it starts machine-learning, we’re no longer so sure what it does. And we don’t need to know, as long as it performs. And here is how a neural network is different from a brain: It’s just a file. All the formulas and the data that a neural network processes can all be saved as a file and copied onto a thumb drive — or emailed. It’s just some data and formulas. It has a different topology. A neural network can have an unlimited number of connections. A physical neuron in a brain can only have so many. This affects how neural networks calculate and what results they create. Compared to a human brain, it’s dead simple. There’s that. Current neural networks, even the most advanced and high-powered, don’t even come close to the size and complexity of a human brain. Ok, how does a neural network learn? Glad you asked. Join us next time for an in-depth look at that.
https://medium.com/swlh/how-neural-networks-actually-work-f2c57ba0306
['Practicum Yandex']
2019-11-12 18:27:52.507000+00:00
['Technology', 'Neural Networks', 'Artificial Intelligence', 'Learning', 'Programming']
YC Hacks Fall 2018 — Hello Silicon Valley!
Hackathons are fun. They have always been an awesome experience for me. The thrill of coming up with a problem space and a potential solution when hard pressed on time is something that has always excited me. I’ve been a part of multiple hackathons in India but this one was bound to be special since it was my first one in the Bay Area. BACKGROUND I’ve been a student at the UC Berkeley School of Information since August. Things have been moving at a startling pace. Faster than I could initially keep up with. I have been in multiple countries but the infectious energy in Berkeley gets to you pretty quickly. I-School graduates have been consistent performers at hackathons. I wanted to be a part of this culture for the very same reason. To top that, I hadn’t visited the Bay Area (San Jose, Mountain View etc) because I had no time. This hackathon turned out to be the perfect excuse to get into the heart of the world’s most innovative region, Silicon Valley. When I got to know about the upcoming hackathon, I immediately applied. A few weeks later, I received a mail confirming that I had been selected. My roommate had been accepted too. Now, came the hard part and the fun part. Finding a team. Thinking of an idea. My roommate and I discussed plans, problem spaces and ideas were thrown back and forth until we finally got to one. THE DAY ARRIVES Between classes and assignments, time passed quickly and sure enough the day came. October 5th. They already had a bus prepared to take us to Mountain View from SF. Y-Combinator. We had heard the stories, read about the startups and only imagined the culture. Today, we were going to be there. Everybody on the bus looked super excited. I just stared outside the window. I just wanted to absorb the views. I felt like a tourist, just exploring whatever is around him. After about an hour, we reached our destination. The huge “Y” could be seen. We knew we were there. Me and my teammate both felt that this would be the start of a super exciting journey. This was the reason we had applied to grad school. This was the reason we chose to be at Berkeley. It was finally time to make the most of the location. Y-COMBINATOR HQ As I entered the office of the world’s most revered startup accelerator, I could feel a different surge of energy. So many people, so many teams, so many dreams. All wanting to challenge the Status Quo and build something that matters. There was a huge room prepared for the hackathon with chairs and tables. Unlimited food and drinks. YC swag. Free AWS credits. The message was clear. “Build stuff. We’ll take care of the rest.” After some initial formalities and meeting a few people, we finally had a team. The hackathon began at 6.30pm. Three of the four people in this team were from my school. The fourth one was a part of the Startup school at YC. We met and shared the idea with her. That is where things began to get interesting. We had a clearly identified problem space, but no clear solution. As luck would have it, she was a developer turned product manager. Guess what we did next? Whiteboard! It was time to figure things out. We thought of the users, needs, problems and potential solutions. After much discussion, we finally figured out what we would be making for the hackathon. It was 11 p.m. by then. We were left with 18 hours. LET THE HACKING BEGIN! After a clearly defined problem space, and a potential solution, we began searching for the tools we would need. After much exploration, we figured the tools we would be using, and started working. 
Our amazing designer joined us at around 10 am the next day. While we figured out the technology stack and the backend, she worked on making the system usable, and worthy of being demoed. We had set a deadline to finish a couple of components by around 2 pm, and if we couldn’t, to abandon them completely. We faced issues with the technology stack we used and had to move to a completely new platform in a time crunch. Eventually, we ended up creating a scaled-down version of what we had planned to make. But we were happy with the output. It did what we needed it to, and it looked good! Around 3:30 pm, we started working on our pitch deck and were all set to present what we had come up with. All teams were winding up their projects, and we spoke to those around us to learn what they had come up with. We met a lot of Cal alums from varying backgrounds. THE PITCH Jungo: Create meaningful connections at events Finally, it was our turn to pitch. We walked into the room ready with our product and presentation. In front of us were the judges, who listened to us patiently and kept smiling reassuringly. We explained how the problem stemmed from our own experiences, and how it is something millions of people need to cope with on a daily basis. They were curious about the solution, and asked us a few questions before our time was up. It was exciting to show off something that we had come up with in the space of 24 hours. After the pitch, we walked out and relaxed. After a sleepless night, we felt that we had earned the right to chill. The only thing left to do was to wait for the results. In the evening, the results were announced. Jungo did not make it to the finals, but we spoke to the judges for feedback after the event. They said that they loved the idea, and encouraged us to pursue it further. CONCLUSION Overall, the YC Hacks experience was amazing. It made clear to me what Silicon Valley was all about: innovation, passion and commitment. The breadth of ideas that people came up with was really inspiring. Before coming to Berkeley, one very senior mentor had told me to not limit myself to the campus, and to make use of being in this area. With this hackathon, I understood what he was pointing towards, and I’ll surely be attending lots more. Awesome people, amazing culture and incomparable energy! The Team: Yezhisai Murugesan, Yunjie Yao, Chintan Vyas, Ankit Bansal (me) Update: Just yesterday, we got accepted to the Berkeley Lean Launchpad course with the same idea and shall be pursuing it further. Excited to see where this takes us! Maybe back to YC? ;)
https://medium.com/berkeleyischool/yc-hacks-fall-2018-hello-silicon-valley-79c378f52550
['Ankit Bansal']
2018-10-25 18:40:06.125000+00:00
['Hackathons', 'Silicon Valley', 'Berkeleyischool', 'Ycombinator', 'Entrepreneurship']
How to Survive Common Animal Attacks
When most survivors describe animal attacks, it is usually one tiny detail that made the difference between life and death. In Eric Nerhus’ case, it was his abalone chisel. Nerhus was diving for abalone shellfish off Cape Howe in Australia when he encountered a great white shark. One minute he was happily swimming through murky water, and the next minute…his head was clamped inside the shark. In an instant, the shark crushed his mask, broke his nose, and shook his helpless body like a ragdoll. But just when Nerhus thought he might become fish food, he used his chisel to hit the shark’s head and then pushed his fingers into the shark’s delicate eye sockets. The shark released him, and Nerhus lived to tell his tale. Nerhus survived because he knew that poking the shark’s eyes out was his only chance to escape. If you ever face a dangerous creature, you may have seconds to make life or death decisions. Here are some survival tips for the most common animal attacks. Africanized honey bee — they are cute until they swarm with their buddies | CC BY 3.0 Killer Bees Killer bees, otherwise known as Africanized honey bees, are tiny little mercenaries with no forgiveness. They are deadly not because of their poison but because they swarm in such vast numbers that they can carpet your body in seconds. To survive, run. Do not swat at the bees. Bees are attracted to movement. So swatting is just going to whip them into a frenzy. As you run, pull your shirt over your face. Next, seek shelter. Don’t bother going underwater. Bees have nothing better to do than wait for you to emerge and start stinging you again. You can take out the stingers with the flat edge of a credit card to prevent the spread of venom. Be careful not to push the stingers in further. And, of course, seek medical attention immediately. Lastly, if your loved one happens to be covered with a carpet of bees, do not whip out your cell phone and start taking pictures. Sure, you will get a great shot, but you will feel bad about it later. Grizzly bears kill with cuteness. Photo by Janko Ferlic from Pexels Grizzly Bears First, if you ever mess with a baby grizzly bear, know that there is a mother bear not far behind. Second, you should probably take an existentialist moment to ponder why you don’t have the self-restraint not to pet baby animals in the wild. Third, whatever you do,…don’t run. A grizzly is programmed to chase down prey. This means it will interpret your sprint to safety as…game on. Think you can run faster than 30mph? Good luck with that. You also want to avoid eye contact because the grizzly bear will see that as an act of aggression. Bear experts recommend you step away sideways instead of backward without making eye contact. Most wildlife experts also recommend you carry bear spray while camping. Aim for the face and then move to plan 2. Plan 2: if you are about to be mauled by an angry grizzly, play dead. Lay flat on your stomach with your hands clasped behind your neck and then spread your legs to make it harder for the bear to turn you over. This survival technique is called “death feigning” or thanatosis and is used by several animals to avoid predators. Lastly, remember that bear attacks are rare. Mama grizzly bears have more important things to do than eat your sorry intestines and want to get back to the important job of mothering. Photo by Pete Nuij on Unsplash Black bears Do NOT play dead. Black bears won’t fall for that nonsense. If a black bear attacks, fight hard. 
Like way harder than Brad Pitt did in Legends of the Fall and a tiny bit harder than Leonardo DiCaprio did in The Revenant. (I personally thought he should have won that one.) Better yet, try to avoid the darn bear in the first place. Keep your campsite clean, and always install bird feeders a safe distance from your house. If you do spot a black bear, bang on pots and pans. Like grizzly bears, black bears have better things to do than eat humans. Most will retreat if they hear loud noises. It's adorable until you get your face ripped off. Photo by Gustavo Fring from Pexels Monkeys Angry macaque attacks are not the stuff of fiction. Experts have found most monkey attacks occur when the monkey thinks you are keeping food from them. This is why you should never spill pasta sauce on your shirt and then hang out with hungry monkeys. Signals can get crossed. But if you do find yourself being bullied by a pack of monkeys, first, remain calm. You must be smarter than the monkey. Never look an angry primate in the eye. This is seen as an act of aggression and will enrage them further. Instead, hold out your hands to show you are not hiding delicious snacks. If you are withholding delicious snacks….give 'em up. Your lunch is not worth contracting Herpes B. A bull shark. If you see one of these…sorry! | Public Domain Sharks If you ever find yourself facing a hungry great white….I am really sorry. That's a rough day at the beach. Sure. Sure. All the shark experts say shark attacks are rare. Blah. Blah. Blah. Shark attacks are rare because most species of sharks have no desire to eat humans. Shark attacks are usually caused by mistaken identity — they think you are a yummy seal or some other more appetizing snack. Still, unprovoked shark attacks are on the rise, and this is because most sharks think we are jerks. Sharks have good reason not to like us: they cause an average of four human fatalities worldwide each year, while humans kill 100 million sharks per year. Perhaps the shark is just trying to even the score. But if you do find yourself in shark-infested waters, don't panic…unless you see a bull shark. Most species of sharks do not bite, but a bull shark does. And don't wear yellow. Some shark species can see the contrast between black and yellow. This color combo makes you look like a flashy, mouth-watering popsicle. Most importantly, try to remain calm. That adage about sharks attracted to a single drop of blood is true. But what is also true is that many sharks are attracted to fear. Sharks are sensitive to electrical fields and will hear your fluttering heartbeat like a dinner bell. So try to slow your heartbeat if you are about to be eaten by a shark. Good luck with that. Remember to follow shark etiquette. Do not turn your back on the shark, and do not make any sudden movements. Most sharks do not just go up to humans and clamp their jaw down. That's just rude. First, they will bump into you to determine if you are edible. If the shark does attack, punch the shark. But not in the nose. Punching the shark in the nose will put your fist dangerously close to its teeth. Instead, shark experts advise you to punch the shark in the gills or eyes. Again, good luck with that. Lastly, shark experts advise you to get out of the water. Well, duh. The truth about sharks is this: you have probably been swimming around them and not even known it. But they know it. And they want nothing to do with humans. Because we are jerks.
https://medium.com/creatures/how-to-survive-common-animal-attacks-e5fb011d184
['Carlyn Beccia']
2020-12-10 16:02:44.569000+00:00
['Humor', 'Creative', 'Science', 'Life Lessons', 'Animals']
Methods in Python: Fundamentals for Data Scientists
Photo by Skylar Sahakian on Unsplash Instance Methods Instance methods are the most commonly used methods in a class structure. Any function defined in a class structure is an instance method unless otherwise stated with decorators. So, you don't need a decorator to define instance methods.

# Instance Method
def display_summary(self):
    data = pd.read_csv(self.path + self.file_name)
    print(self.file_name)
    print(data.head(self.generate_random_number(10)))
    print(data.info())

They take an implicit parameter, self, which represents the instance itself when the method is called. With the help of the self parameter, instance methods can access instance variables (attributes) and other instance methods in the same object. When To Use Instance Methods Instance methods are the core of the class structure and they define the behaviours of our class. We can perform the tasks defined in the instance methods using instance-specific data. They can access the unique data contained in the instance with the help of the self parameter. In our example, we have two instances of the CSVGetInfo class and they store different file name values, "data_by_artists.csv" and "data_by_genres.csv" respectively. The display_summary(self) instance method performs its task by accessing the values unique to each instance.
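For context, here is a minimal, self-contained Python sketch of how the two instances described above could be defined and used. The CSVGetInfo name, the path and file_name attributes, and display_summary come from the article; the simplified generate_random_number helper, the "./" path and the commented-out calls are illustrative assumptions, since the article's full class definition is not shown here.

import random
import pandas as pd

class CSVGetInfo:
    """Minimal sketch of the class the article describes."""

    def __init__(self, path, file_name):
        # Instance attributes: unique to each object
        self.path = path
        self.file_name = file_name

    # Instance method: another method on the same object, reachable via self
    def generate_random_number(self, limit):
        # Assumed helper for illustration: returns a row count between 1 and limit
        return random.randint(1, limit)

    # Instance method: reads this instance's own CSV file and summarizes it
    def display_summary(self):
        data = pd.read_csv(self.path + self.file_name)
        print(self.file_name)
        print(data.head(self.generate_random_number(10)))
        print(data.info())

# Two instances holding different file names, as in the article
data_artists = CSVGetInfo("./", "data_by_artists.csv")
data_genres = CSVGetInfo("./", "data_by_genres.csv")

# Each call works on that instance's own data (the CSV files are assumed to exist locally)
# data_artists.display_summary()
# data_genres.display_summary()

Because display_summary receives self, each call automatically uses that instance's own path and file_name, which is exactly the instance-specific behaviour the article describes.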
https://towardsdatascience.com/methods-in-python-fundamentals-for-data-scientists-6a9393b2c2e7
['Erdem Isbilen']
2020-06-08 00:42:00.774000+00:00
['Python', 'Data Science', 'Oop', 'Class Method', 'Static Methods']
Why TruStory is shutting down
It feels bittersweet to say this — TruStory is shutting down and returning the money to investors. When TruStory first began, our ambition was massive. The team was ready to buckle down for the foreseeable future in order to ensure TruStory's success. Over the last 1.5 years, we did exactly that. We worked day and night to build the team, product, and community. We poured our hearts and souls into making our dreams a reality. But we ultimately realized the market timing for TruStory makes it difficult to turn it into a sustainable business on its own. The longer story You're probably wondering why, which is what I aim to explain in the rest of this post. First, though, I want to give thanks. Thank you to all of the investors who took a chance on us and gave us the opportunity to embark on this wonderful journey. We will remember our time with TruStory and its supporters forever. Your encouragement has been invaluable and I will always hold your kindness close to my heart. Moreover, I would love to thank our amazing community members for the many hours you put in contributing to TruStory. You are the real heroes. ❤ With that said, I think it's time we get down to the nitty gritty. The backstory Our mission at TruStory was to create a place where people could come together to have productive debate. Our goal was to crowdsource the best arguments on both sides of any issue through rational, productive debate with skin in the game. We wholeheartedly believed in this mission. The world needs a place to have rational, productive debates. So many people today have all but given up on expressing themselves on the internet. What we loved most about TruStory is that it enables people to debate topics that would otherwise be considered taboo on other social networks like Twitter or Facebook. At this point, you may be wondering why we decided to shut down. Here's the quick and dirty answer: While we believe in the mission of TruStory, we believe that the business behind TruStory is unsustainable. In order to understand why, it's important to understand the inner workings of TruStory. How TruStory works TruStory is a digital debate game. Anyone can participate in a TruStory debate by staking tokens (called "TRU") and arguing their position. It gets even more interesting when we look at how the token is used in the backend to run TruStory. The platform was built to operate like a decentralized cooperative network. Co-op networks are "jointly-owned" and "democratically-controlled" entities with a shared goal. In other words, it is a network that is collectively owned and controlled by its stakeholders. Our stakeholders were the users, investors, team, and validators, and our shared goal was to facilitate productive debate. Traditionally, co-op networks use other means (e.g. equity) to enable joint ownership and control of the network. But the beauty of crypto is that it allows us to use a token to build a network that is collectively owned and controlled by its stakeholders. The token TruStory's token would essentially be a rite of passage into the network. Users who own TRU would also own a part of the network. The token serves three purposes: 1) Rewards for positive contributors Users on TruStory earned TRU for writing compelling arguments which were then upvoted by their peers. They could also earn TRU by curating user feeds and upvoting the best arguments. 2) Punishment for negative contributors One of the reasons conversations on the internet are so hard is that people have no skin in the game. 
There is zero accountability for one's actions. TruStory set out to change that. Every interaction on TruStory required you to have skin in the game. In order to write an argument, upvote an argument, or downvote an argument, you had to stake TRU. If a user misbehaved, they lost their stake. Unlike Twitter or Reddit where centralized mods get to decide who gets banned, the community was the judge of who got penalized. 3) Ownership and Governance The token would represent ownership (i.e. "joint-ownership") in the TruStory network. Not only could users use TRU to moderate the conversations on the platform, they could also use TRU to moderate the platform itself through "democratic control". For example, we had a category called "TruStory" where we debated product ideas for TruStory. Our vision was to use this category to directly govern what features and functionality were prioritized by the core team. Essentially, all of the network stakeholders could decide together what gets built. Because the token gives users ownership over a piece of the network, all the stakeholders collectively have responsibility and incentive to build value on the platform as it will enhance their own user experience. So…why are we deciding to stop? Too early for web 3.0 and too late for web 2.0 Unfortunately, we've come to the conclusion that TruStory is simply too early to market. The market is just not large enough (yet) for what we want to build and, in the words of Marc Andreessen, "Markets matter most." 1) Tokenization is too hard In the future, we will have many types of tokens serving many different functions. Right now, however, the market is not mature enough for the tokenized future we believe in. Launching a token in a regulatory-friendly way is still a nightmare. The TruStory team spent many sleepless nights trying to navigate getting a token to market. In the end, we realized that the regulatory and compliance risk of launching a token is still way too high (especially in the US). Regulators today have no clue what they want to do with crypto. This means that entrepreneurs have zero clarity on what they want us to do. It is extremely hard for a startup to be successful when tacking on regulatory and compliance risks on top of all the other risks you take on during the early stages of a startup. 2) Infrastructure is not ready In order to distribute and use the token as prolifically as needed, we needed much better infrastructure (e.g. crypto wallets) and more seamless authorization solutions (i.e. transaction signing). Especially for a consumer-facing application where speed and convenience are paramount. Unfortunately, the solutions out there today are nowhere close to what we need. 3) Consumers are not here The number of people who know about crypto, let alone who are willing to buy for a reason other than to "pump and dump", is tiny. Frankly, even some of the smartest people I know still don't "get it" and I don't blame them. Crypto today is little more than glorified gambling. The idea that you can use crypto for other (very innovative) things is still foreign to most people. However, in order to build a sustainable business, TruStory would need millions of users who can readily purchase tokens (legally) and seamlessly use them on the platform daily. While we believe we can capture a small niche of users, we don't believe it's large enough to build a sustainable business. Why don't we build the bridges ourselves? This is a great question and it is a route some notable projects are taking. 
In fact, we had begun doing some of the bridge building ourselves before realizing we'd have to spend a significant amount of time creating things that would have nothing to do with our core value prop. Unfortunately, that's not the business we're in, nor is it sustainable. Why not forgo the token for now? This solution may seem obvious. Why not simply forget the token part and build the platform another way? This is a great question and a route our team extensively considered. Ultimately, we decided it was not worth the effort. For starters, going this route means the vision of building a network where the stakeholders collectively govern the network would vanish. Additionally, forgoing the token means TruStory would become an alternative version of Quora or Reddit. The only difference would be the focus on surfacing the best arguments rather than the best answers to questions. While this approach is certainly an option, the honest answer is that our team is not passionate about building Quora or Reddit 2.0. What's next I hope this post helps explain the "why". Reach out to me anytime on Twitter or my blog with further questions. As for what's next… I am super excited for what each of the team members is working on next. They are in good hands. As for me, for now, I will do what I love: Writing and Teaching. Till next time ❤️
https://medium.com/trustory-app/why-trustory-is-shutting-down-6d50175628eb
['Preethi Kasireddy']
2020-01-30 11:54:24.947000+00:00
['Startup', 'Blockchain', 'Entrepreneur', 'Social Media', 'Cryptocurrency']
Why Yanks Have Garbage Compactors and Brits Don’t
Old Holborn, a very popular blogger in the UK, posted an interesting piece about rubbish (what Americans call "garbage") collection: When I purchase something with money I have earned, I was under the impression that it belonged to me, to do with as I wished. It is my "property". Apparently Brent Council do not agree. With recycling now being big business, the London council has decided if you don't want it anymore, it belongs to them, and failure to hand over valuable aluminium, glass and paper will see you the recipient of a £1000 fine. No, really. http://www.oldholborn.net/2010/08/rubbish-police.html#idc-cover Old Holborn then goes on to repeat the sanctions the council will use for those who refuse to recycle their garbage, which include surveillance, hand-delivered letters, visits from 'officials' and ultimately a fine of £1000. The answer to all of this is simple, and Old Holborn's impression is completely correct. The things you buy really do belong to you; that means the packaging that they were delivered in and all the goods you do not consume that become your waste. When you throw your waste away, you should contract with a private garbage collector to remove it. You then do not have to deal with eco loon control freak socialist councils and their absolute nonsense. I say it's "absolute nonsense" because it is: RUBBISH: In Palo Alto, California, citizens are ordered to separate their trash into seven neatly packaged piles: newspapers, tin cans (flattened with labels removed), aluminum cans (flattened), glass bottles (with labels removed), plastic soda pop bottles, lawn sweepings, and regular rubbish. And to pay high taxes to have it all taken away. In Mountain Park, Georgia, a suburb of Atlanta, the government has just ordered the same recycling program, increased taxes 53% to pay for it, and enacted fines of up to $1,000, and jail terms of up to six months, for scofftrashes. Because of my aversion to government orders, my distrust of government justifications, and my dislike of ecomania, I have always mixed all my trash together. If recycling made sense — economically and not as a sacrament of Gaia worship — we would be paid to do it. For the same reason, I love to use plastic fast-food containers and non-returnable bottles. The whole recycling commotion, like the broader environmental movement, has always impressed me as malarkey. But I was glad to get some scientific support for my position. Professor William L. Rathje, an urban archaeologist at the University of Arizona and head of its Garbage Project, has been studying rubbish for almost 20 years, and what he's discovered contradicts almost everything we're told. When seen in perspective, our garbage problems are no worse than they have always been. The only difference is that today we have safe methods to deal with them, if the environmentalists will let us. The environmentalists warn of a country covered by garbage because the average American generates 8 lbs. a day. In fact, we create less than 3 lbs. each, which is a good deal less than people in Mexico City today or Americans 100 years ago. Gone, for example, are the 1,200 lbs. of coal ash each American home used to generate, and our modern packaged foods mean less rubbish, not more. But most landfills will be full in ten years or less, we're told, and that's true. But most landfills are designed to last ten years. The problem is not that they are filling up, but that we're not allowed to create new ones, thanks to the environmental movement. 
Texas, for example, handed out 250 landfill permits a year in the mid-1970s, but fewer than 50 in 1988. The environmentalists claim that disposable diapers and fast-food containers are the worst problems. To me, this has always revealed the anti-family and pro-elite biases common to all left-wing movements. But the left, as usual, has the facts wrong as well. In two years of digging in seven landfills all across America, in which they sorted and weighed every item in 16,000 lbs. of garbage, Rathje discovered that fast-food containers take up less than 1/10th of one percent of the space; less than 1% was disposable diapers. All plastics totalled less than 5%. The real culprit is paper — especially telephone books and newspapers. And there is little biodegradation. He found 1952 newspapers still fresh and readable. Rather than biodegrade, most garbage mummifies. And this may be a blessing. If newspapers, for example, degraded rapidly, tons of ink would leach into the groundwater. And we should be glad that plastic doesn't biodegrade. Being inert, it doesn't introduce toxic chemicals into the environment. We're told we have a moral obligation to recycle, and most of us say we do so, but empirical studies show it isn't so. In surveys, 78% of the respondents say they separate their garbage, but only 26% said they thought their neighbors separate theirs. To test that, for seven years the Garbage Project examined 9,000 loads of refuse in Tucson, Arizona, from a variety of neighborhoods. The results: most people do what they say their neighbors do — they don't separate. No matter how high or low the income, or how liberal the neighborhood, or how much the respondents said they cared about the environment, only 26% actually separated their trash. The only reliable predictor of when people separate and when they don't is exactly the one an economist would predict: the price paid for the trash. When the prices of old newspaper rose, people carefully separated their newspapers. When the price of newspapers fell, people threw them out with the other garbage. We're all told to save our newspapers for recycling, and the idea seems to make sense. Old newspapers can be made into boxes, wallboard, and insulation, but the market is flooded with newsprint thanks to government programs. In New Jersey, for example, the price of used newspapers has plummeted from $40 a ton to minus $25 a ton. Trash entrepreneurs used to buy old newspaper. Now you have to pay someone to take it away. If it is economically efficient to recycle — and we can't know that so long as government is involved — trash will have a market price. It is only through a free price system, as Ludwig von Mises demonstrated 70 years ago, that we can know the value of goods and services. […] http://www.lewrockwell.com/rockwell/anti-enviro.html From the priceless 'Rockwell's Anti-Environmentalist Manifesto'. And of course, once you have your own garbage collected privately, you can deduct the amount that that council is charging you for their 'service', since you do not avail yourself of it. Old Holborn already does this for the services he does not require from his council. Once again, libertarian principles, specifically the property right you have in things you have voluntarily exchanged for, offer the best solution to a problem, as opposed to the inherently immoral solutions put forward by collectivists, coercion, the state and its insanity. But what about the economics of it all? 
If garbage has a value after it has been collected, then someone will sort it and extract what is valuable. This is what it looks like: https://www.youtube.com/watch?feature=player_embedded&v=Rz-K2oaX1_Q The problem with this system is that it is entirely efficient. If private enterprise sorted garbage like this, the loony left, salary-addicted control freaks at Brent would not be able to justify going out to people's homes and threatening them. Furthermore, have you ever seen a garbage compactor in the kitchen of a UK household? One of the consequences of people having to pay private contractors to remove their waste is that normally you are charged by volume for what you have removed from your household. If you have less volume of garbage to remove, the cost of removing it is less, so there is a great incentive to squeeze as much trash as you can into the smallest possible space. That is why in many American kitchens you find garbage compactors: an under-the-counter machine that you throw your waste into day after day, that compacts it all into a very small shape that is easy to handle and which dramatically reduces your waste disposal costs and storage hassles. These compact cubes of refuse greatly increase the efficiency of garbage disposal; trucks can carry more garbage, make fewer rounds, you use fewer bin liners, put garbage into external bins less often and into fewer bins, etc. All of this efficiency is lost thanks to the crazy as a coot collectivist crap of councils like Brent. Even fast food restaurants use them: https://www.youtube.com/watch?feature=player_embedded&v=A3SKgrOhOQs The side effects of the free market in terms of efficiency are always beneficial. Another desirable side effect is that the need for nosey parkers sniffing around in your trash is completely eliminated. That is why the statists hate private enterprise. Sadly in the UK, people are so inured to the idea that 'the council' is in charge of everything, from leisure to garbage collection, that it appears they cannot imagine even the simplest solution to everyday problems without invoking the state in some way as facilitator.
https://medium.com/hackernoon/why-yanks-have-garbage-compactors-and-brits-dont-d7abc7e59d7a
[]
2017-07-30 18:59:24.828000+00:00
['Green Energy', 'Environment', 'Recycling', 'Socialism', 'Economics']
Play nicely: a newbie’s peek into the compatibility of AI and ethics
A few months ago, with everyone in the throes of the GDPR panic, it truly occurred to me just how little most people know about what really goes on with their personal data. Or, more surprisingly, how little businesses know about the potential of this data and how their use of it can impact on their customers. In the wake of the Facebook scandal there was outrage from the public about the ways in which the company and their partners use their data. Yet, the same people slinked off back to the network once the heat had died down. For them, it was still an essential link to their families and friends — it is a social network after all. GDPR is forcing companies to be more transparent and clear in their data policies but, really, most people still don't really 'get' it. People aren't empowered to truly consider the value they are receiving in exchange for their personal data. To me this seems unfair, unethical even. You can have a data policy written as clear as day but no-one's going to read it. So people still won't really know when their data is being sold to companies they hate, or used to advertise to them in an unlimited capacity. A balancing act So what if there was a way to automatically score companies based on their data policies? That way users could decide on the value of the service they wanted to use and compare it to how ethical the company was being with their data — at a glance. Turns out there's already a really cool tool that does this called 'Terms of Service; Didn't Read' (ToSDR), which effectively crowdsources data on services' terms and conditions, which are then broken down into smaller points which are rated 'good' or 'bad' by the site's contributors. The result of this is an automatically calculated grade based on the overall ratings, from A (the best) to E (terms raising serious concerns). I loved that this stuff existed and that other people cared enough about this that they would spend their time contributing to open source technology along the lines of what I hoped to investigate. But the limitation of ToSDR was its reliance on gathering enough data from its contributors to produce a reliable grading, leaving many sites with none at all. So what I really wanted to know is if we could 'take out the middleman' and build a tool that could use artificial intelligence to grade any site, service or policy at the click of a button. With explicit rules about what needed to be included in policies now enforced by the EU, surely it would be simple to use this technology to extract clauses and automate the decision on how ethical a company was being with their customers' data? Or so I thought. (Hint: I'm not a developer). Robot lawyers A few weeks later, I met with James Touzel, a Partner and Head of Digital at law firm TLT, who's heading up a project that is using AI to identify risk areas in legal contracts — a web-based solution called TLT LegalSifter. I was keen to find out if and how it was being done and if it could be applied to help people decipher data policies. What was surprising was that this technology is still brand new and businesses haven't adopted this en masse for contract negotiations or any other uses just yet. When you hear about AI you're told 'the robots are coming, they're taking us over'. Of course, they're not. 
Not yet anyway… “We wanted to develop an AI solution that could review and advise on low-risk, low-value commercial contracts initially — nothing too complex — so things like NDAs or SaaS contracts or a consultancy agreement where it’s normally lower risk and lower value”, James explained. What they could get the AI to do was to identify a clause, or series of clauses, within a contract and serve up pre-written legal advice against it — such as the correct wording for a particular type of clause. A very clever and useful tool indeed, one that will almost certainly increase the speed and quality of contract negotiations and enable more junior in-house lawyers, or even procurement and commercial teams, to manage contract reviews. The only current limitation here is that the AI can’t understand what the clause says. So for personalised advice, based on the AI having recognised and understood intricate differences in clauses or statements, we’re not quite there. “We’ve gone to market with a product which will identify the risk areas in certain types of contracts and serve up advice, but it still relies on the user to say ‘oh, that’s not what that says, I’m replacing it with that’”, James said. “It puts the advice in one place, gives you an alternative clause… but it doesn’t do the last mile.” Nuanced judgement What this means for building a tool that rates how ethical a policy is, is that I have to decide what is ethical and what is not. So most of what AI can do, at least on its own, is very black and white. If I tell it that any clause in a data policy that says a service will sell a user’s data is ‘bad’, it will always be scored negatively, even if in that particular case, for whatever reason, it’s actually not bad at all. It doesn’t make the tool impossible to build but, for it to be genuinely useful for the majority of the population, it would at least need as many humans as possible deciding what they want to see in a data policy and what they don’t. And the real challenge would be making this rewarding enough to get the level of human contribution necessary to produce reliable data to work with. Right now, AI isn’t up to the job, as we’re not at the point where it can be used to make complex decisions on ethics, at least not on its own. I’ve hit a crux in my beginner-level exploration of AI. But I have discovered that there are plenty of others who care about this issue, so I’m pretty sure it’s not the end. If you’d like to chat about your tech project, get in touch with Simpleweb today.
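To make that limitation concrete, here is a tiny, purely illustrative Python sketch of this kind of black-and-white, rule-based scoring. The phrases, point values and A-to-E thresholds are invented for illustration; they are not how ToSDR or TLT LegalSifter actually work.

# Purely illustrative sketch: a naive keyword-based policy scorer.
# The rules and thresholds below are invented assumptions, not any real tool's logic.

# Each rule maps a phrase we can spot in a policy to a point value.
RULES = {
    "sell your data": -2,              # always treated as 'bad', regardless of context
    "share with third parties": -1,
    "delete your data on request": +2,
    "data is encrypted": +1,
}

def score_policy(policy_text: str) -> str:
    """Return a crude A-E grade for a policy based on the keyword rules."""
    text = policy_text.lower()
    score = sum(points for phrase, points in RULES.items() if phrase in text)
    # Map the raw score onto ToSDR-style letter grades.
    if score >= 3:
        return "A"
    if score >= 1:
        return "B"
    if score == 0:
        return "C"
    if score >= -2:
        return "D"
    return "E"

example = ("We may sell your data to partners. "
           "We delete your data on request.")
print(score_policy(example))  # prints 'C': the good and bad rules cancel out blindly

The last line shows the problem described above: "sell your data" is always scored negatively and "delete your data on request" always positively, so the two blindly cancel out into a "C" with no room for context or nuanced judgement, which is exactly why human contributors are still needed to decide what counts as good or bad.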
https://medium.com/simpleweb/play-nicely-a-newbies-experiment-into-the-compatibility-of-ai-and-ethics-35af56cd680
['Alice Whale']
2018-08-02 15:59:55.829000+00:00
['AI', 'Privacy', 'Data', 'Ethics', 'Gdpr']
Using Timers on nRF52
Fig.1 nRF52 Development Kit Prerequisites This tutorial is not intended to be a guide for learning the C language or about the Nordic SDK platform. Its primary target is to provide developers a concise guide about integrating peripheral modules and features into active applications. If you are a beginner, I would recommend you look into an nRF52 Project Setup guide like this one. For another easy way to get started with coding, without bothering with all the basic stuff like file and driver inclusion, check out this Code Generation Tool, the nrf52 Code Generator: https://vicara.co/nrf52-code-generator Timer in a Microcontroller? Fig.2 External Clock Source for Timer A timer can be said to be a specialized clock, which is used to measure intervals on a microcontroller. In a microcontroller, a timer can be used to tune the processing speed of an operation, set delays and also to synchronize user input and communication between a variety of peripheral devices. As a result, timers form a very integral component of microcontroller operations, and being able to control the timer and its operations is an essential skill for any Embedded Systems developer. Timer on nRF52 There are 5 instances of the timer on the nRF52832 module. Each instance can be used independently. Fig.3 Timer on nRF52 The timer/counter runs on the high-frequency clock source (HFCLK) and includes a four-bit (1/2^X) prescaler that can divide the timer input clock from the HFCLK controller. Clock source selection between PCLK16M and PCLK1M is automatic according to the TIMER base frequency set by the prescaler. The TIMER base frequency is always given as 16 MHz divided by the prescaler value. Fig.4 Timer Instances Programming Timer on nRF52832 Include Header Fig.5 Include nrf_drv_timer To use timer functions, nrf_drv_timer.h needs to be included in the project. It is available in the SDK/integration/nrfx/legacy folder. Update sdk_config.h In this file, we need to enable the TIMER_ENABLED flag and also the corresponding instance. Fig.6 Enable Timer Thus, if Timer 3 is to be used, we need to set TIMER3_ENABLED to 1. Fig.7 Enable Timer 1 Instance Add Timer definition to Main File Define Instance

static const nrf_drv_timer_t m_timer = NRF_DRV_TIMER_INSTANCE(1);

Timer Callback Function

static void timer_handler(nrf_timer_event_t event_type, void* p_context)
{
    switch (event_type)
    {
        case NRF_TIMER_EVENT_COMPARE0:
            break;
        case NRF_TIMER_EVENT_COMPARE1:
            break;
        case NRF_TIMER_EVENT_COMPARE2:
            break;
        case NRF_TIMER_EVENT_COMPARE3:
            break;
        case NRF_TIMER_EVENT_COMPARE4:
            break;
        case NRF_TIMER_EVENT_COMPARE5:
            break;
        default:
            break;
    }
}

This will be called every time a timer channel is triggered. Initialize Timer Function

//@brief Function for Timer init function
static void timer_init()
{
    //Can use the below values as default
    nrf_drv_timer_config_t timer_cfg;
    timer_cfg.bit_width          = NRF_TIMER_BIT_WIDTH_16;    //user defined
    timer_cfg.frequency          = NRF_TIMER_FREQ_31250Hz;    //user defined
    timer_cfg.interrupt_priority = APP_IRQ_PRIORITY_LOW;      //user defined
    timer_cfg.mode               = NRF_TIMER_MODE_TIMER;

    ret_code_t err_code = nrf_drv_timer_init(&m_timer, &timer_cfg, timer_handler);
    APP_ERROR_CHECK(err_code);

    //Timers 0,1 and 2 have 4 channels. Timers 3 and 4 have 6 channels.
    //The below function needs to be called for each channel.
    nrf_drv_timer_extended_compare(&m_timer,
                                   NRF_TIMER_CC_CHANNEL0,
                                   nrf_drv_timer_ms_to_ticks(&m_timer, 1000),
                                   NRF_TIMER_SHORT_COMPARE0_CLEAR_MASK,
                                   true);
}

Now all you need to do is call timer_init() from within the main function. 
nrf_drv_timer_extended_compare needs to be called once for each timer channel used, and its 4th parameter decides if the timer channel is a one-shot or if it reloads. Interrupt priority options are 2, 3, 6, 7. Conclusion With the above steps anyone can easily get started with incorporating timers into their application code. NOTE There is another easier method to initialize and auto-generate code for nRF52. This tool will handle all library additions and code generation for a variety of peripherals like SPI, I2C, UART etc.
https://medium.com/vicara-hardware-university/using-timers-on-nrf52-b0497f0633a1
['Sanskar Biswal']
2020-11-12 07:15:58.789000+00:00
['Nrf52', 'Startup', 'Technology', 'Embedded Systems', 'Tutorial']
10 Sustainable Intentions To Set For 2021
10 Sustainable Intentions To Set For 2021 Replace your short-lived resolutions with meaningful intentions Photo by Kinga Cichewicz on Unsplash At the beginning of 2020, I didn't set any resolutions but had only one meaningful intention, because my resolutions never said anything about what I wanted to attract into my life. They were always about doing and crossing things off the list. While resolutions were a good source of motivation at the beginning, they never seemed to work out for me. If you're like me, I have a great offer for you. Set intentions vs. resolutions Here we are at the end of another year. This is when we feel ready to pack old ways that don't serve us and get excited about new things to come. We set resolutions and aim to become a brand new person. We aim to stop smoking, start going to the gym, learn a new hobby, travel to at least 5 countries, and the list goes on, mostly with things that are hard to stick to. They're hard to stick to because our success in achieving them depends on our external reality. I guess we all learned not to attach our happiness to the outside world in 2020. That's why it's time to set sustainable intentions instead of resolutions that will probably fail after Monday or at any opportunity when our reality is shaken. But why are intentions more sustainable than resolutions? In her article, Angelica Attard Psy.D. describes these two words perfectly for us. Resolutions are like goals that are more about doing. And intentions are more about being what we want to attract into our lives. Let's get to the core of this. If we have a resolution like I will go to the gym 5 days a week, we'll most likely lose track of the new habit we were trying to build the moment obstacles keep us from the gym. Because our resolution is more about doing and arriving somewhere. And it's dependent on external circumstances. Instead, if we set an intention like I will take care of my body to feel strong inside and out this year, we go deep into the value and meaning of what we want to attract into our lives. It will not be a failure if we miss the gym a few times. We can keep taking care of our body by supporting it with good food or any other alternative, because we're flexible enough to keep our intention alive with other supportive things too. We embrace and become what we want to feel. Therefore, if you're preparing a list of new things to come into your life next year, put meaning and value before action. Deepak Chopra suggests a similar attitude towards setting intentions. He emphasizes that once we set an intention, we should detach ourselves from the outcome, intend that everything will work out as it should be, and allow opportunities to come our way. This kind of attitude keeps us on the right track to what our heart desires as we're flexible and not destination-oriented. We can enjoy and experience different options on the way to attracting our intention if what we have strictly in our minds doesn't work out.
https://medium.com/the-innovation/10-sustainable-intentions-to-set-for-2021-ab3b6b63abb0
['Begüm Erol']
2020-12-24 21:03:46.188000+00:00
['Mindfulness', 'Self', 'Tips', 'Wellness', 'Life']
Constant Connectedness Bores us
Constant Connectedness Bores us How being always on is eroding our ability to be present Most of us carry the internet in our pockets and are constantly connected to one another via social media, yet some of us still report being bored. How is this even possible? In some cases, it seems that tech privilege hasn't eradicated boredom but exacerbated it instead. Despite having a treasure trove of interestingness and discovery at our fingertips, we still get weary and restless. Endless choice may have something to do with it. Now that we can access whatever educational and entertainment content we could ever wish for, many of us get overwhelmed. For example, we while away hours scrolling mindlessly through lists of movies and TV shows without being able to decide what to watch. Before we know it, the leisure time we had set aside to recharge and relax after a long day at work has vanished, we've done nothing with it, and it's already time to go to sleep. And so off to bed we trudge, frustrated, bored, and often a little annoyed at having allowed tech to gobble up yet another too-short evening. Many of us approach social media the same way; instead of using it selectively, we allow our brains to feast upon whatever is put in front of our eyes without discernment or discrimination. Whether it adds value to our life doesn't even come into it; it's there, so we dive in gleefully and soak it all up without engaging our critical faculties.
https://asingularstory.medium.com/constant-connectedness-bores-us-19a9ac08f89c
['A Singular Story']
2019-09-13 14:43:07.311000+00:00
['Mindfulness', 'Self', 'Social Media', 'Tech', 'Psychology']
Finding Joy in Doing Nothing
Finding Joy in Doing Nothing The power of rest. Photo credit: Shutterstock By Crystal Richard When was the last time that you intentionally did nothing? Rest doesn't have to look like doing your favorite activity or sleeping. When was the last time you gave yourself permission to rest? Many of us are 0–100 on the go! Here is a plea to stop the glorification of busy. We pride ourselves on productivity and achievement so much so that we give in to the slow burn of fatigue. The slow burn of sickness. The slow burn of mental illness. Resting sends a message to our brains that it is safe to restore cells, to relax, to enjoy the moments that make life worth living. Resting gives you a brain break to generate new ideas. Resting reinvigorates a tired spirit. Resting is the equivalent of drinking water when the body is dehydrated. It is imperative to heed the signs that you need to rest before your body literally tells you to do so. A few warning signs that you may need to rest: You are irritable! If your everyday tasks start to bring you frustration when you normally move through them with ease, a period of resting may be what is required. Everyone starts to notice that your performance is dwindling. Coworkers, even without knowing you well, have a knack for knowing when you are off your game. You spend around 95% of your day at work, so others may be taking notes and evaluating you. Take the time to look at where you can take rests to break up your day and maximize productivity while at work. Strained relationships: relationships take work and that means energy. When you don't give the best of yourself to yourself, you can't begin to touch the surface of meeting someone else's needs. If things are tense, throw some rest in the equation. Taking a break in tense moments to renew yourself can be the most affirming thing for yourself and your relationships. You're low on energy. When you are burning the candle at both ends, you will feel depleted. You no longer have the capacity to move at light speed. When you need to rest, you won't be able to function at full capacity. Resting serves as a means to halt all activity and give yourself a chance to cope with all of the activity. Taking the time to slow down will also create the space for more allowing. Energy will be restored, renewed and sent to where it needs to go when you are resting. What are your beliefs about resting? Since we live in a "you must be productive to matter" society today, challenging that notion can be tricky. Explore your resistance to rest and explore the benefits of what resting could do for you. Don't get in your own way. Eliminate your excuses when it comes to finding time to rest. When you find your rest, you find peace of mind. When you rest, you set the tone for the longevity of your life. Surrender to the prompting of your being when rest is being called in. You will be thankful that you did. — The story was previously published on The Good Men Project. — About Crystal Richard I am a lover of all things nerdy: video games, anime, music, conventions and more. Licensed Professional Counselor (Associate), Certified Life Coach (JRNI)
https://medium.com/change-becomes-you/finding-joy-in-doing-nothing-de37d27c8854
['The Good Men Project']
2020-12-18 01:50:58.839000+00:00
['Happiness', 'Mental Health', 'Advice', 'Rest', 'Life Lessons']
How Visual Response Rate Affects User Experience | Page Speed
No real money was burned in this picture In early 2018 I created an eCommerce website. I spent almost a thousand dollars on ads before I had my first sale. Just like magic, I made all that money disappear in one month. My first users were leaving without seeing a single page. The average new user was spending 0 seconds on my site and leaving. Blink and you would miss them. This white cat would not see users bouncing off my page I started over and changed my wording to be more relevant to my ads. I updated the layout to follow the F-Shaped Pattern of Reading. I still saw absolutely no difference in my conversion rate because people were still spending my ad money before instantly leaving my page. Don't stress if you are at this stage. I was there for many months before I remembered one problem. People expect immediate results on the internet. Faster visual responses help users stay engaged. In the software world there is an expectation that every application action should feel instantaneous. Typically this is a response that gives a visual cue back in 0.1 seconds or less [1]. Most websites fall into one of three categories. The Lamborghini that gets the most attention is the one that gets out of the garage The fastest websites have tried everything and need to innovate to improve their Page Speed. Some websites have devoted and patient customers. However, most new users don't stay long enough to have their magic moment, the point where they understand the value of the product and buy in. The final group is losing a lot of potential customers to long load times. Their site loads so slowly users assume that either the computer or the site is broken. This is where I started. Part 2: Dropping Bounce Rates While Driving up Revenues Shoutouts for proofreading & idea for article: Anastasia Walia, Cat Lagman, David Morrill, Javier Rivilla, Olivia Chun References:
https://medium.com/tradecraft-traction/how-visual-response-rate-affects-user-experience-page-speed-436b245a3e5e
[]
2018-09-26 21:40:19.098000+00:00
['Marketing Strategies', 'Marketing', 'UX', 'Conversion Optimization', 'Growth Hacking']
Want Sane Politics? Address The Inner Monkeys!
Want Sane Politics? Address The Inner Monkeys! Our collective behavior is viral, corrupt and warped. Let's start addressing this through the bio-psychological perspective. "You can't solve a problem with the same consciousness that created it." ~after Albert Einstein. Wondering why some will never vote for Bernie? Because their brains are addicted to the 'everyone for himself' paradigm. Wondering why school teaches you to aspire to a good job in a major corporation that pollutes the planet? Because our mass convictions push us to fit into the dominant culture, rather than question it. Wondering why politicians don't take essential action on Global Pollution? Why billionaires keep destroying the planet, while digging bunkers for themselves? And why don't they make the difference they can? Because they are being overruled by their bio-psychology. Are you already questioning my short answers, because your convictions stored other answers? That's your bio-psychology tugging at you. So, want to learn more? What bio~psychology* helps to address. How to get action towards the pollution & 6th extinction crisis going? How to bridge and solve the issues between socialism, capitalism and liberalism, while avoiding sloppy compromises? How to stop misuse of power, whether it is military, corporate or done by little bullies on our block? What are we missing in our lists of solutions? I thought I found a new approach to fill a gap in our solutions landscape. This concept hovers in between what I thought to call Bio-Psychology and/or Psycho-Biology. Then an early reader found the word already existed, very much overlapping my own version of it. hahaha. So I stuck to my own approach of it. :) Bio-Psychology is the field of our human (collective) behavior as shaped by our natural instincts, where and how our biology and psychology interact. To this I add collective, read swarming & viral, behavior. My own research in this field is mostly based on practical experiences with group work, from large management games to dance improvisations for groups, to helping set cultures in festivals, supported by some books on the matter. I'm convinced that none of our Global problems can be solved as long as we don't acknowledge what this field adds to how we deal with big issues. Three Consequences This Field Has for Us All. There are a few huge, essential implications to this. Firstly: No big issue can be solved purely on political, social or structural levels. Bio~psychological aspects need to be included. Most people just try to solve problems with solutions within their lane of expertise. Politicians, for example, share political solutions. But racism can't be solved just with anti-racist laws. What if, in politics and business, we allowed addressing the bio~psychological behavioral patterns of stakeholders as well? What if we could address the irrational primal fear of people looking very different honestly? What drives it, and how can it be overcome? What is different about those who have many friends of different races, religions, etc.? Secondly: We all have different roles and talents that are essential. All clusters of people need a diversity of talents to create a natural healthy balance within! And everywhere our social groups lack such balance. And when the balance is gone, some ideas and roles become too dominant, and others get discarded. This sets the winners into spirals of self-deception, like: "If you don't win, you're a loser!" No, you're not. You just have a different role in this world. And all roles matter! 
Sadly this imbalance makes many suffer feelings of inadequacy. What if we'd value differences more, and thus rewarded them more too? Thirdly: All enrichment of society is a collective endeavor. Thus we need to reward everyone's contribution more fairly. Capitalism seems to say, "Winner deserves to take all." Socialism seems to say, "All be and get the same." Both fail. Can't we see that all of society contributes towards a few being able to execute an idea? Then why do these few get to own everything? Our collective result comes from our collective effort, but not from all being equal. We should keep our diversity healthy, and share rewards. How to move beyond celebrating 'the winners' or 'sharing everything in exactly equal parts' as social conflict? How to activate everyone's talents and potential in ways that are rewarding for all those who contribute or want to? Can we let go of pushing people to be more of the same? Can we stop rating them with standardized tests that are often plain normative? Can we make people proud of how they are different and seek the value and contribution within that? Integrating Bio~Psychological Systems Thinking in our Society. Just consider a prehistoric tribe where the hunters take all the food for themselves, because they killed it. Or just give a few scraps to the rest. Tribes that tried this have gone extinct. Balanced tribes would have all roles in the organization organically filled and kept them all fed. That's how the mothers could raise the children and the shaman heal the sick, and nobody feared getting old and discarded. That's the beauty of it. The impact of negligence of this is huge. We only hold a few gifts in high esteem and the merits of the rest are compared to these few. And we collectively suffer the consequences. We have yet to adjust our basic biological psychology that's still set for the prehistoric plains. We're made for groups of 20 to 150, but live in huge cities. We're still learning how to operate global organizations. And within these, men act as monkeys on an ape rock. They can't help themselves. And very few seem to think addressing this openly is a key to many of the problems of our world. Just talking about the contents and politics of a situation, without an eye for the role of our primitive urges, is why so many issues get stuck. We keep self-justifying with our clever brains and denying this aspect. "Nooo, I'm the president and not a gorilla thumping his chest." All our diverse traits need to be in balance. Each healthy group in prehistoric times had a combination of such natural traits. A mediating matriarch kept the peace. A group clown deflated arrogance or vanity. A psychopath might be a great hunter, and the rest of the group could prevent him becoming a bully. A modern millionaire almost always built his capacity with others or over their backs. From the cleaner to his accountant, from his wife to his marketeers, all played essential roles. That he did all the work on his own is a ridiculous idea. Yet we tend to think Mr. Numero Uno deserves all credit and ownership. And Then Things Went Sideways… And now the naturally gifted mediating matriarch has become the coffee lady. Though everyone in the office loves her warmth, her 16-hour workday (she needed a cleaning job on the side to make do) doesn't even feed all her children. The depressed artist reflecting the pains in society is seen as an outcast. And we blind ourselves to the fact that his art showcases the dangers in and for our society as a whole. 
Meanwhile the psychopath has become the sales director. That is awesome for the sales of the company, but everyone else has to pay for the damages to our planet. Be real: we can't have thousands of business sharks in top positions, yet we do. Their collective insanity drives us to the cliffs; and this madness even gets applauded yearly in the Fortune 500. This hollows out our society as a whole, to the benefit of a tiny few. Our whole Ape Rock-driven society celebrates the superrich. All education, most management books, most LinkedIn articles, they all push us: "Climb towards them!", "Be like them!" or, best of all, "Become one of them!" Everyone who doesn't race in this game is considered weak, inadequate, a loser, sheeple. Millions get depressed because their core gift is NOT about this race; theirs is to give, feel, share, listen, create, experiment, play, etc. And no billionaire could exist without them. But the millionaire's economic mindset says: "Well, any of you is replaceable and under contract, so I'm the achiever who deserves it all." And now only a handful of individuals(!) own more than the bottom half of the world population. If we don't question the normalcy of these huge discrepancies, millions will be doomed. The very rich use their immense influence and money only to get richer and gain more power. People like Bloomberg and Trump are not even blasé about it. They only keep pretending we'll be the real beneficiaries of their victories. Yeah, right. One of the first sessions of the Collective Wisdom Dance. We learn, share, discover big answers together without speaking. We let our whole physical system do the work. Major Issues Through the Lens of Bio-Psychology So we cannot solve the shark mentality among companies with reorganizations, nor with coaching, nor with firing the worst of them, nor with political programs. We cannot solve the corrupting role of Billionaires just with higher taxes. That is because the problem is bio~psychological too. The urge to grow and keep growing is just too overwhelming. They need therapy and to face their slaves in dead landscapes, not just more money and admiration. In our language we too often fixate on the implied content. Hence we ignore too many Ape-signals. But once you see how animalistic urges determine our behavioral patterns, you see it everywhere. People like Epstein, Weinstein, Bill Cosby, Kevin Spacey, and many others, did get away too long with their rabid urges (however cleverly they played them out). And we respected them (read: their power) too much and didn't have enough language to confront them. When bonobo males go too far, the females immediately gang up and punish the culprit. We should create apps for that. And keep our intuition awake about undercurrents in every conversation. It should be socially accepted that every time you feel the Monkey side of people, it is okay to point it out (without fear of being fired). "Look Boss, I understand your urge to breathe down my neck, but keep the monkey down please." If this could be expressed at the first signs, a lot of hurt would be prevented. This animal inside also brings many gifts, like our intuition. We can choose to side with the underdog from there, have awesome dances, play with excitement, feel which choice gives the most energy and heart. Psychologists found we choose based on emotion or impulse and then rationalize our reasons for that choice. We thus continuously commit self-deception. 
And as long as we deny the monkey inside, the animal can go on a rampage, with us thinking we make sense. Haha. Just observe any approach between a man and woman from the outside and the whole animal side is blatantly obvious. Thus we can understand why 20 psychopathic CEOs in one boardroom go on a self-enrichment spree. We can understand how a group of White gun activists, all with similar mindsets, may steer one of them into a mass shooting. We can follow certain types of people distancing themselves from other social groups that radiate a very different stance in life. And due to the gaps in between very different groups, the crazy within each group grows. Mind you, roles and functions stay fluid and can shift. In a group of sweet people my harder personality may arise. And in hard groups I may start to calm people down. As long as we see the others as other, we can't solve our shared issues. The 'My Function = My Opinion' Fallacy In prehistoric groups we were being rewarded for being the only mediator. This was our role, our function. Native American tribes developed the Vision Quest to help individuals find this very personal role in the tribe. This was framed as a quality. But what happens when, in our modern society, a thousand people who tend to be mediators get together? They'll speak about how they desire to feel appreciated for what they bring. They'll probably start to complain about how hard society is. They'll discuss the overlap in what they see. And then the fallacy happens. Their gift becomes an opinion about society as a whole. Everything should be more like them. In our society our brains start to think our role or function is also a valid opinion. A thousand artists together believe they aren't part of the managers, and the other way around. The 'me-first' mindsets mock the 'let's be good for others' mindsets. They seem blind to the fact that their self-care can only happen because of all those people taking care of others. One fearful paranoid person in a group is an awesome guardian. Get a thousand of them together and an anti-immigrant party is born. Our gifts and talents were made to balance small groups. By clustering ourselves en masse in like-minded groups, our brains transform our complementary gifts into opinions that need to be heard. The Wound of Separation in Large Societies. Ever since the psychopathic mindset has ruled without enough balancing, most talents haven't felt validated or rewarded enough. Thus we suffer en masse an individual sense of separation. We don't dance with differences in small tribes any more; we fight over them in large separate groups that consider only their own answer valid and the other gifts as flawed convictions. I think we urgently need to realize these convictions are gifts that help balance little family tribes. This could help start to heal our sense of separation. Likewise the scientific discovery that political Left and Right are a matter of bio~psychology is easy to understand. The Right mostly being people with more fear (guardianship), stricter values and more focus on individualism. The Left being those that want to experiment more, want more cultural push, more collective awareness (systems thinking). These imagined political differences are actually a natural balance. Groups need both, as collaborators in a dance, to be healthy. Perhaps with bio~psychology we could finally transcend the whole Left vs Right fallacy and develop a new kind of politics. Would Eco-Systems be a good name? 
The Growth of the Far Right, Flat Earth and Cults Explained

I think this fallacy is actually a driving force for fascism, separatism, cults, flat-earth theories, etc. Because people feel their quality isn't valued, they start to fight the dominant system in order to get validation again. Yet rather than contribute, or, like dogs hankering for some caressing, we phrase this urge in words that sound like opinions. And the louder we blast our opinion (barking), the more other people tend to pull away from us. This explains all these patterns of fragmented subcultures angry at many others. We need to see the valued role within each human, to help them feel a valued part of society. Yet we tend to rank other groups and push the worst, according to ourselves, farther away from us. "You are wrong, we are right!" Winning arguments against very different groups is very, very tempting. I fell for it.

Solving Those Issues the Bio~Psychological Way

Just consider the difference if major board decisions in big businesses included wise women and a few bright kids worried about our collective future. The fact that they aren't there is a big part of the problem. We personally must consider what and how we can best contribute, and the collective must invite and integrate the diversity of talents as partners. We have to accept we are part of a field, and our different talents are needed in and for that field. Hence the importance of dialogue across boundaries. It's the modern form of coming together around the campfire. You are not a winner because you played the field the best. You are the one focussed on scoring, because that's your gift to the group: being a hunter. And the one bandaging you every time you fail is part of that group dynamic. Though in your 'winner's' eyes she may look like an immigrant nurse who is horribly underpaid, because she would rather care than fight over money. You are both members of your country, your bigger tribe, or of humankind. We are all just playing out our role in the collective patterns, or, more precisely, they actually play us. We can only be free, and really change things for real, when we accept we are part of the field driving us. Rather than our opinion being the winning one, we have to accept that other roles (which we tend to see as different opinions) are part of our collective strength. Just imagine if, in debates, it were allowed to address the power-hungry tigers within politicians. I bet we'd prefer more motherly types on stage softening the tigers among us. Imagine the press feeling free to ask about or reflect on Ape Rock attitudes during press conferences. I bet we'd demand more diverse politics to balance such behavior. Sadly our media celebrate the tigers and wonder whether the mothers among us are 'strong' enough to take decisions. Too often the tigers show a psychotic lack of care for implications and just a love for the numbers, whether those are good for the tiger or for the economy. Imagine if we could demand that billionaires take addiction therapy when they hurt others through earning more money. Imagine they had to spend a week every year among the most miserable of their workers around the globe. I bet they'd start to help the planet, just to keep their money. Imagine we could demand that all corporate decisions include the health consequences for all wider circles impacted by that decision, and that they get fired or have their salary halved each time they break the rule. I bet they'd feel they need wise women on board to be able to see and consider the ramifications.
Think of the big problems in your country through this lens. Wonder what bio~psychological urges people keep playing out all the time, and what intervention might help.

Me working with 200 first-year art students. The number of students tending to be more introverted, and soloists, was very palpable.

ECO-Systems* Politics

We also need to discuss how convictions go viral and how people tend to reinforce them onto each other. Like we see MAGA people collectively shout 'Lock her up!' and leftish people shout 'Trump is dumb'. If we can't address each other beyond our viral convictions, and see how we're all connected, then we're in trouble. To understand how easily this happens, just ask any circle of people who come together to introduce themselves. When the first one chooses to mention 'name, age, profession & a hobby', most likely all the others will do the same. Except for a few who try to be original, or who are really focussed on the reasons for the meeting. They might state 'name, motivation to come, an interesting fact about their background, desired outcome' and/or a diet issue. *) the name for it for now. Perhaps the short term should be Grounded. "You vote Left or Right. I vote Grounded (in reality)." ;) (me playing with a possibility)

Swarm Learning: Collective Emergent Understanding and Action

Swarm Learning would be a next level of Eco-Systems awareness. It means we activate interaction among very different roles and people. We also see which insights emerge collectively, without being too conscious about it. And rather than learning together by bouncing opinions, we dance, play, feel together and formulate from there. I'm actually developing this with the Collective Wisdom Dance. While dancing we find personal answers, much like in Family Constellations, and collective insights emerge. Swarm Learning, however, is currently severely hindered by the aforementioned separation of talents into subgroups. It's made worse by think tanks and marketing agencies who actively promote certain convictions. In the USA this is currently a battle of interests. Corporate agents try to sell corporate Dems to the general public. Trump keeps spewing propaganda about how great all his actions are. And when you listen to the many individual voices, compared to the organized ones, Bernie Sanders is the choice of most people. Yet those whose basic attitude seeks a strong daddy dog fear supporting Bernie. He feels too weak to them. Their inner monkey wants to feel he's on top before they can support him. Those whose talents embrace a sense of community embrace Bernie. Yes, whether you vote Bernie is not only political, it's also a sign of how you're wired. It's tough, when you agree with many of the ideas above, to see a country rip itself apart over lack of acknowledgement and validation of that which is other. Really, currently the amount of propaganda in the USA is worse than in Soviet Russia, albeit available in more flavors than the 'one party'. We have Right-wing, Left-wing Democratic, Conservative, Christian Evangelical, Progressive, Fascist, etc. Whatever is paid for by big donors, enlarging the voice of a few to a multitude, is confusing people. Some feel the urge to join. "Big means good." Others are immediately averse. "It's not my flavor of politics." Most importantly, smaller, smarter and alternative voices aren't heard; wise alternatives fall away and can't grow enough. They can't compete with paid-for adverts produced by singular mindsets focussed on selling, not on dialogue to grow.
Lack of dialogue and diverse voices dumbs the masses down, just for the benefit of a few very rich people protecting their interests.

The Blowback of Suppression of Certain Ideas

In a healthy society this massive manipulation of convictions shouldn't happen. General convictions should be born out of, and agreed upon by, the diversity. Such agreed-upon convictions are by far the strongest and most supported ones. We can see that enforced ideas always end up slowly being dismantled and even birth grudges against those that spewed them. The people of Iraq want the US out. The majority of the US wants Trump out. Hell, the majority of the world wants him out. But a financially rich few defend their interests by playing Trump and the opinions around him. The longer they build an alternative reality based on lies, against the general conviction, the harder the blowback will be when people wake up. That's how the CIA, by defending the Shah in Iran, actually pushed the country towards fanatical Islam. That's why the election of Obama (however much he effed it up afterward) felt like a liberation from the Bush era. Spinning reality for your interests harms society as a whole. All TV programs and all YouTube channels that consciously frame how you should see things lie, spin and disturb the real emergent learning of the people. The collective people vs corporate spin is a serious battle, beneath political sides. That's why many of the masses voted Trump: because Hillary was the most spun candidate. Now Bernie is winning, because almost all of the opposition are candidates plugged by corporate interests. Most people feel it when people like Buttigieg or Biden frame their views towards whatever might make voters cast their vote for them. People feel they do it just for the votes, not because they really stand for something! Biden's bio~psychology shouts: "I want your vote, I don't care about you." This helps weaken democracy even more. The collective bio~psychological wave beneath the public discussion runs on the undercurrents, the intentions we feel behind messaging in the media. Very swarm-like patterns form around this battle of narratives. And the more people need to fight and plow through false narratives, or are won over by them, the less truth there is and the fewer real choices can be made. Because of that, and the time lost to it, we collectively postpone real action on the climate crisis. Thus the real issues, when they explode, will hit harder and harder, like the Iranian revolution against the Shah. People doing propaganda for money are the worst. They betray themselves and their country by spinning falsehoods. The bio~psychology of most people really makes them want to believe in the good of people. Thus, when lied to over and over, they stop believing. All undercurrent falsehoods make them more apathetic towards their society. This impoverishes society. The few subgroups benefitting from this consider it a win; yet everyone loses. Spinning the truth is therefore theft from society. Oh irony: those who will repeat that line the most are those using it to undermine political opponents.

Learning and Developing Together

If you understand this, you understand why personal websites feel more authentic and truthful than well-organized channels. And because the spinners know this too, they have started to fund private channels to strengthen their interests. This affects trust. Hence money in politics, in political dialogue, is a disease. We need public dialogue. We need to speak out the truth and we need to listen.
Hence the huge popularity of Joe Rogan. He asks, he wonders, he listens and is willing to change his conviction based on that. Those that scream their convictions, "Bernie needs to win!" or "Bernie is a Marxist", are losing to reason. We slowly learn that those who come across as the most convinced might actually be the biggest liars. To unfold our real talents we need to dance with very different ones and offer them validation for their contribution. Sadly, some are so wounded that they fear doing so will be used against them. This is the small hope we have: that by being aware we are played by our urges, and by overcoming them, we can enter a real conversation. I invite all who can do that to enter the field of shared growth and action, for a planet that benefits humans and nature in awesome, beautiful ways for all.

EXTRA: The Roots of Humankind in Bio~Psychology

I find that all humans tend to be born with talents that focus on what small groups need. Just consider: for over 200,000 years we roamed plains, steppes and tundras. Thus, as with monkeys, we developed character traits that are always in service of the group. Frans de Waal, Jane Goodall and others did excellent work on this with primates. Others found very similar social patterns in wolves, elephants, and other social species. And yes, we humans fit that pattern too. Harari wrote beautifully about it in Sapiens. How does it work with early man? The depressed soloist becomes the watchdog at the edge of the group. Since he mostly sits alone at the edge, he's the first to see danger coming. The warm mother becomes the mediator in conflicts. The gay person bridges the two (or more) sexes. The curious one becomes the scout. And the psychopath becomes the killer during the hunt, or the healer who is not afraid of pushing broken bones, sticking out, back into the body. This is backed by the fact that a huge number of surgeons, it seems, have psychopathic tendencies put into service for society. These are people who can disconnect from feelings of nausea at the sight of intestines and stay cool enough to operate. Thus yes, we are all different, and no, your talent is not just yours. Your talent shines best as something needed by the group or society as a whole. As we've seen, we're built for little groups on plains, and there seems to be a natural ratio of these essential roles and traits. About 6% tend to have psychopathic inclinations. About 1 in 20 is homosexual. Etc. It seems these numbers are almost fixed in our biology. Yet in our society we can get 20 psychopathic people in one boardroom taking all the decisions. We see that only certain types run politics. And we see how the lack of other types of people in politics or boardrooms is sorely missed, most of all for the planet as a whole. Side note: suddenly the concept of astrology makes sense. To have natural variations of different types of person is normal; it is how nature developed us. Thus, added to bio~psychology, could be research into biologically driven character traits. Next to gay or straight, I predict we will find that aspects like being monogamous or promiscuous, being more of a soloist or more of a social person, and others, will show regular statistics in what percentages are born. In fact many management models do the same for teams: DISC, Myers-Briggs, Belbin, 12 Archetypes and many more. So why not demand this diversity of approaches in politics and in boardrooms? Written at the request of Peter Jones, who asked me what I thought Swarm Learning was.
I wrote a lot on Swarming before. This adds to that series.
https://medium.com/the-gentle-revolution/a-new-field-bio-psychology-7c9ec1c955c6
['Floris Koot']
2020-05-29 12:30:25.797000+00:00
['Biology', 'Social Change', 'Swarm Intelligence', 'Psychology', 'Politics']
Mutale Nkonde: AI Avenger
When Mutale Nkonde came to visit Santa Clara University (via Zoom) on October 29, her panel, entitled “AI for Good Trouble,” was characterized by quotes — quotes spanning a variety of topics, from automated decision-making systems, to deeply-seeded racial discrimination hidden behind zip codes, to practical presentation tips for pitching an act to Congress (check out Ms. Nkonde’s success with her Congressional elevator pitch: H.R. 2231, H.R. 3230, H.R. 4008). Because of her expertise in the areas of AI and social justice, Ms. Nkonde was a perfect addition to SCU’s Artificial Intelligence for Social Impact and Equity Series, jointly sponsored by SCU’s High Tech Law Institute and Markkula Center for Applied Ethics. She straddles the American coasts as a member of Stanford’s Digital Society Lab and Harvard’s Berkman Klein Center for Internet & Society, both of which are organizations interested in the interaction between technology and society. She even holds a faculty fellows position at Notre Dame University. Ms. Nkonde is also the founder of Two Weeks Notice and AI for the People. To top it off, Ms. Nkonde served as a research specialist for Congresswoman Yvette Clarke (D-NY), with whom she helped introduce several bills to Congress, focusing on artificial intelligence (AI) regulation, deep fakes, and privacy of biometric information. Ms. Nkonde lives and breathes her work, and her discussion centered on examples of discrimination in machine learning. She candidly answered questions about the ways discrimination sneaks into AI and preached the gospel of adjust, adjust, adjust, until systems are not colorblind, but color-aware and color-competent. Ms. Nkonde’s message ended with a call to action based on mindful change. By curbing the enthusiasm of racist systems with intentional reevaluation of machine learning datasets, we, the users, can feel comfortable utilizing AI to make our work days and decision-making more efficient. The following discussion, based on quotes from Ms. Nkonde’s talk, seeks to shed light on the biases that find their way into machine learning, and what we — the general public, programmers, investors, consumers, etc. — can do to make AI more equitable. “Human beings should be making decisions about human beings.” The first notable quote wrestles with the tension between human minds and an automated system: “Human beings should be making decisions about human beings.” This sums up the talk. If you understand this, you can go home and give the same talk yourself. Ok, not really, but the concept of preferring PEOPLE to judge other PEOPLE instead of machines made me go, “Huh?” People are the most racist entities on the planet. Our societies shape us, and we learn what is and what isn’t normal/good/better based on deeply rooted, and sometimes wholly incorrect, traditions. For a long time, people of color were considered inferior to white people, and that was a perfectly acceptable social mindset. That Ms. Nkonde would prefer the human species — a population more amenable, at times, to the winds of chance and change than reason — to have a hand in assessments, rather than machines that are supposed to be the best thing since sliced bread (just check the global spending), made me pause in my AI daydreams to reflect upon the problems that machine learning still needs to address. It’s not that AI is inherently bad; in fact, it can have astoundingly positive effects on efficiency maximization. 
When a chatbot fields basic questions at, say, a flower shop, time and energy are redirected from repeating store hours and locations ad nauseum to more important activities. While chatbots are not a completely irreconcilable evil, how a machine interprets data may be worrisome. When a machine makes an accident at the courthouse, rather than at the florist’s, the defendant can’t simply speak to a customer service representative. Far from making light of inequitable AI, this juxtaposition highlights the increased severity of consequences stemming from a biased system in a critical social justice space. Ms. Nkonde remarked on the gravity of these mishaps noting that when you’re arrested and get put “into the system, you’re in the system for life” — even if it was an accidental arrest (some states, however, have recently attempted to remedy this mistaken identity situation with expungement protocol — take New Mexico’s “Criminal Record Expungement Act” for example, specifically section 3). To prevent faulty predictions, machines tasked with sifting through personal data — like criminal records — would need monitoring. Ms. Nkonde introduced H.R. 2231 to require just that. The Algorithmic Accountability Act (the Act), still pending before Congress, focuses on “high-risk automated-decision systems.” The Act requires organizations that rely on high-risk systems to monitor, modify, and report changes to an oversight committee. Monitoring would lead to dataset reevaluation. The Act’s purpose is to bring awareness to flawed datasets so companies can adjust for more equitable analysis. For instance, consider zip codes. I was dismayed and fascinated by Ms. Nkonde’s historical explanation of the creation of zip codes, which were characterized by unfair and discriminatory housing practices directed at Blacks during the Great Migration. These regional numbers, once blatantly racist, are now infused with color-blind racism (a term credited to Professor Eduardo Bonilla-Silva). A company that plugs in zip codes as a dataset may inadvertently further discrimination by relying on information implicitly riddled with vestigial inequality. “Refuse to take the technosolutionist frame.” So, how can we move AI in the right direction? The second Nkonde quote prescribes a solution, or rather a mindset, for finding solutions when combatting biased AI: “Refuse to take the technosolutionist frame.” In other words, “There’s an app for that,” won’t cut it in today’s attempts to regulate AI automation. Rather than oversee machines with other machines, humans must have a hand in changing the way AI thinks. Algorithms based on datasets lacking diversity need new datasets (see the zip code discussion above). In particular, because the current systems involving facial recognition are unable to correctly discern skin color or societally-defined gender, malfunctioning identification can potentially incriminate the wrong person. At the time of this writing, the ACLU is working on a related case in Detroit, where AI technology caught a Black man on camera at the scene of a crime, but the Black man that was arrested was the wrong Black man. Imagine being arrested because of the testimony of a witness who is blind in one eye, wears the wrong-prescription glasses, and saw the whole situation at night. You sort of, kind of, perhaps look like what they, the witness, think they saw. 
Now insert some accidental discrimination, and voila, you have the current state of AI facial recognition — specifically when it comes to discerning darker-skinned males, and all female faces. Datasets predominantly filled with white males, or based on proxies steeped in historical racism, make poor predictions and churn out biased conclusions. “Reimagine tech as tools of liberation.” The takeaway? Systems functioning on color-blind or blatantly undiversified datasets disproportionately affect subsets of the population who have historically felt the brunt of racism. This unequal treatment of persons in the United States needs regulation, and Ms. Nkonde has dedicated her career to balancing the seesaw of predictive tech, attempting to achieve some sort of equilibrium where all citizens have equal standing under AI. Despite the disappointing position of current predictive AI, it doesn’t need to be thrown away forever. On the contrary; if properly regulated and screened for biases, automated systems set the stage for technological advances never before anticipated. Nevertheless, systems, like people seeking to change their own biases, need constant evaluation and shifting paradigms. Reevaluation and regulation isn’t a one-size-fits-all approach and requires intentional and intensive overhaul with a focus on equity and justice. To that end, Ms. Nkonde left us with this thought: we need to “reimagine tech as tools of liberation,” not machines of capitalist efficiency. When we see a person for their personhood rather than their credit card number, and program a machine to follow suit, the system that normally spits out conclusions begins to have a human tinge. This shift in focus, transferred from human to machine, will inevitably transform AI. But that change and realization is on us; we have to act upon our observations of discrimination. “Bring others with you,” Ms. Nkonde said. “If I’m the only person at the party that looks like me it’s on me to change that.” Such changes are not easy, however, and will require years, maybe even lifetimes, of reconfiguring systems steeped in discriminatory norms. So, rather than deferring to machines with a mindless trust, let’s choose mindful accountability for a more equitable future. Daniel Grigore is a law student at SCU who hopes to be half as good at lawyering as he is at daydreaming. He is currently working toward a High Tech Law Certification en route to pursuing a career in IP law that will serve as a stepping stone to reach the judiciary — and, eventually, the Supreme Court, the White House, or retirement (whichever comes first).
https://medium.com/artificial-intelligence-ai-for-social-impact/mutale-nkonde-ai-avenger-e74da8826687
['Daniel Grigore']
2020-12-26 04:03:18.781000+00:00
['Social Justice', 'Federal Regulation', 'Equity', 'Artificial Intelligence', 'Mutale Nkonde']
Karma Development Plan
Hello, dear friends! Lots of people are asking us to provide some details on our dev plans. Okay, let's do it! Here's our task list for 2018 Q1:

Move all personal data to the middleware database

After a long period of legal consultations we've decided to move ALL personal data from the public blockchain to a middleware database. Due to the regulations we have to change our current architecture to the following: Use the blockchain only for the transaction ledger. Transaction types: wallet creation, token transfer, voting, signing a message, loan deal updates. This info can be viewed in the public blockchain explorer. All personal data (email, phone number, passport, bank account, etc.) and all heavy data (photos) will be moved to the middleware. The blockchain will only store the hash of that data, so everybody can verify whether the data is correct (if they know, for example, the passport number of a user and want to check if it's real). This update will provide us with some wonderful opportunities: User data will be visible only to the data owner. We will store personal data depending on the local legal environment. In most countries the key points are a high security level of storage, plus local server allocation for every country's residents. Our blockchain will be more lightweight and fast. Yes, our current blockchain version allows up to 100,000 transactions per second, but we should consider how to handle millions of users and billions of transactions. The lower the blockchain weight, the easier it will be to run a node, synchronize blocks and perform transactions.

Turn on the loan feature on the mainnet

The loan feature was delivered on testnet (https://testnet-app.karma.red) in December 2017. But we decided to hide that feature on mainnet due to the huge amount of support work, because we saw hundreds of emails from people about losing their password or sending money to the wrong address. That's why we decided to turn on the features step by step, and let the community slowly adopt the new functionality. We will also update the loan application form to make it closer to the borrowers' and lenders' demands.

Turn on the BTC/ETH in-out gateways on the mainnet

Same as the loan feature: both gateways were delivered on testnet in December 2017. But we deliberately blocked BTC/ETH deposits and withdrawals because we are worried about the current level of attention and security checking in our community. It seems we also have to update the UI to lower the probability of losing a password or sending money to the wrong address. We should take care of people, not blame them for not being cyber-security geeks :)

Alpha version of the deal commission algorithm

As written in the Karma white paper, we will deliver and constantly update the mechanism of commission distribution between all loan deal participants (sales, insurance agency, scoring agency, etc.). In 2018 Q1 we will deliver a separate commission distribution module and a basic API for connection with other apps.

Alpha version of the Karma reputation calculation algorithm

Karma reputation is the core value of our Economy of Trust. That's why we will deliver the first separate reputation calculation module and a basic API for external apps in 2018 Q1.

Salesman social role (referral program)

There are a lot of social roles described in the Karma white paper. We've already delivered the borrower and lender roles. Salesman will be next, because we need to grow our community.

New block explorer version

Thanks to Jesta: he launched the Karma block explorer in December.
We've collected a lot of messages from our community and are now working on a new version of the block explorer with some new useful features: A voting page like https://cryptofresh.com/ballots. That feature allows us to upgrade our project governance from Telegram chats to on-chain voting by the token holders. Let's build transparent and fair governance. Unique URLs for transactions/blocks. Everybody will be able to send a link to a transaction and easily check the token transfer status. Several UI updates.

Currency exchange oracles

The nature of blockchain leads us to create oracle mechanisms for connections with the outer world (read more here). The first Karma oracles will work on providing up-to-date currency exchange rates. The rates will be used for: Calculating the loan/collateral ratio when starting a new loan inquiry. Updating the loan/collateral ratio to check whether a margin call event should be triggered. For example, somebody lends 2,000 USD against 1 BTC as collateral. While 1 BTC = 11,000 USD the investor is safe. But if 1 BTC suddenly drops to 2,500 USD, things become scary for the lender. That's why we need a margin call algorithm to unlock and sell the collateral if its value has fallen too fast (a rough illustrative sketch of such a check follows at the end of this post). The exchange rates will be provided by the delegates. By the way, we already have 40 witnesses and 15 active delegates, spread all over the world.

Several UI updates to increase usability

Thanks a lot to our community for helping us get better! Almost all of the improvements came from Telegram. Special thanks to Evgeniy Ovcharenko :)

Establish a continuous integration process

JIRA bug tracker, sprint planning, Git integration, dev/stage/prod environments for blockchain/middleware/front-end, backups, unit testing, etc.

Perform third-party security audits

Security is the core value when we talk about finance. That's why we will perform third-party security audits on a regular basis. We have 2 different partners to complete the task. But if somebody knows an experienced security team — please let us know. We also have a bug bounty (we already paid some out in January; thanks to the white-hat hackers). So, if you're good at security testing — feel free to test our software and report the bugs to [email protected]. Cheers ˆ_ˆ
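To make the margin-call idea above concrete, here is a minimal, illustrative Python sketch. It is not Karma's actual implementation; the function name, the 0.75 loan-to-value threshold and the idea of passing in an oracle-provided rate are assumptions made only for this example:

def needs_margin_call(loan_usd, collateral_btc, btc_usd_rate, max_ltv=0.75):
    # Current USD value of the locked collateral, using an oracle-provided rate.
    collateral_value_usd = collateral_btc * btc_usd_rate
    # Loan-to-value ratio: what fraction of the collateral's value the loan represents.
    ltv = loan_usd / collateral_value_usd
    # If the loan exceeds the allowed fraction of the collateral value,
    # the collateral should be unlocked and sold to protect the lender.
    return ltv > max_ltv

# The example from the post: 2,000 USD lent against 1 BTC.
print(needs_margin_call(2000, 1.0, 11000))  # False: the lender is safe
print(needs_margin_call(2000, 1.0, 2500))   # True: margin call triggered

In a real system this check would run every time the oracle publishes a new exchange rate, so how well it protects lenders depends directly on how fresh and trustworthy those rates are.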
https://medium.com/karmared/karma-development-plan-efd52a5266b4
['Karma Project']
2018-03-03 18:51:19.815000+00:00
['ICO', 'Blockchain', 'Finance', 'Development', 'Bitcoin']
PureScript and Haskell at Lumi
The need for types on the front-end We get a lot of benefit from using Haskell on the backend at Lumi. All of our API servers and clients are written in Haskell using Servant, and we use esqueleto and persistent (with a generous helping of custom DSLs) to write our SQL queries. It is extremely rare for us to encounter a runtime error in production on our servers, and when we do, it is usually of the “business logic error” variety. It’s hard to overstate the value of having types everywhere on the backend. Our front-end is a large web application written using React, with plenty of logic of its own. So why then, if types are so useful to us, were we using untyped JavaScript with React on the front-end, when JavaScript offers comparatively few guarantees? There are a few answers: When our product was new, there were fewer options available, and the options we might have chosen were immature. Everyone on the team was able to write JavaScript effectively, but not everyone was familiar with compile-to-JavaScript languages. Now we have enough developers that we don’t need to worry about that, but originally, it was useful for everyone to be able to contribute easily. JavaScript was a suitable choice for a small application initially, allowing us to iterate quickly, but is less suitable now that the project has grown in scope. It is not particularly difficult to build new code in JavaScript, but it is difficult to maintain a large JavaScript application, requiring extensive tests for things which could be covered by types. Of course, types are not just a tool for guaranteeing correctness. A common complaint is that a type system might decrease developer productivity by requiring the user to add type annotations to provide any guarantee of correctness, but a sufficiently expressive type system like the one in GHC is able to actually increase developer productivity by providing tools like parametric polymorphism, type classes, type-level programming and datatype generic programming. Finally, types are also a wonderfully succinct language for expressing ideas and processes, and we wanted that language to be available on the front-end as well. The decision to use PureScript I had personally used typed front-end languages (TypeScript and PureScript) extensively, and other team members had similar experience with other options like Flow and Elm. As the original developer of the PureScript compiler, I had an obvious interest in seeing it adopted at Lumi, but I was also aware that it might not be the optimal choice of front-end language for a few reasons: Other team members might not enjoy using it as much as I do, or might find themselves to be less productive. Using a relatively uncommon programming language might make it more difficult to hire developers. Without language-level support for things like JSX and CSS, we would need to find some other way to work with our design team, who previously were able to modify our React components directly if necessary. The library ecosystem is relatively small, and we would need to build some things of our own in order to be productive. As a team, we were in agreement that JavaScript was causing a lot of problems for us, but we weren’t sure about the best approach to fix them. To my pleasant surprise, a majority of the team were enthusiastic about trying PureScript as the first option, because it fits our needs well. You might be asking how we chose PureScript out of the many available options for typed front-end development. 
Aside from the in-house experience and my own preference for demonstrating its use on a large real-world application, there are a few technical reasons. It is simplest to just say that PureScript was the unique solution to the following set of constraints:

The setup process should be simple and unintrusive. It should be trivial to set up an environment to quickly test out ideas.
The language should integrate smoothly with JavaScript, its libraries and its build tools.
The type system should be expressive, supporting things like sum types, row polymorphism, type classes and higher-kinded types.
It should be easy to build simple solutions, but still possible to experiment with more advanced ideas. As Justin Woo puts it, the language should have a culture of “the sky is the limit” and should not limit your creativity.
The development of the language itself should be open enough that we would be able to modify any part of the toolchain if it became necessary.

Getting started

Eventually, we decided to jump in and try out PureScript by replacing one of our existing JSX-based React components. We chose a simple, pure component with no side-effects or API calls. After setting up the PureScript compiler in our existing Webpack-based build, we were off to a good start. I was a little concerned about the suitability of the existing React bindings for our purposes (they were a little too complicated, as they tried to support the full React API, which we didn’t need), so I cobbled together a simplified set of React bindings for us to use. We’ve since polished and released those bindings as a separate library called purescript-react-basic. Now that we had proven that the approach could work, we started porting more and more of our pure components over to react-basic. We knew, however, that we wanted to be able to replace our API calls and page-level components as well. For this, we decided to use code generation, since our API is large and changes reasonably frequently, and we wanted to ensure as much correctness as we could.

Generating types

The first step was to generate a complete set of PureScript types to correspond to the types we used in our Haskell API. To solve this problem, we turned to GHC Haskell’s support for datatype-generic programming. We created a simplified representation of PureScript data types (PursTypeConstructor), and a type class for Haskell record types which would be converted into PureScript types (ToPursType):

data PursRecord = PursRecord
  { recordFields :: [(Maybe Text, PursType)] }

data PursTypeConstructor = PursTypeConstructor
  { name :: Text
  , dataCtors :: [(Text, PursRecord)]
  }

class ToPursType a where
  toPursType :: Tagged a PursTypeConstructor
  default toPursType
    :: ( Generic a
       , GenericToPursType (Rep a)
       )
    => Tagged a PursTypeConstructor
  toPursType = retag $ genericToPursType @(Rep a) id

The toPursType member of the type class creates a representation of the type, tagged with the original Haskell type it originated from. It would be tedious and error-prone to write these instances out by hand, so we provide a default implementation of the type class for record types which implement the Generic interface. Luckily, GHC will derive Generic instances for us if we turn on the -XDeriveGeneric extension, so generating these representations of our Haskell types is practically free. Once we have a list of PursTypeConstructor structures, we can turn them into PureScript code fairly easily — nothing fancy needed here, just simple string templating.
For convenience, we also emit a little extra code to make our generated types usable on the PureScript side: serialization boilerplate (itself derived using PureScript’s own version of datatype-generic programming!), Lenses and Isos for all fields and data constructors, functions for debugging, and so on.

Generating API clients

The next step was to turn our Haskell API definitions into usable, safe PureScript clients. Fortunately, Servant is perfect for this task — since our API definitions are represented at the type level, we can turn those definitions into PureScript code and know for sure that the resulting code will be compatible with the server implementation. The servant-foreign library was perfect for solving this problem, since it generates the data structures we need, including lists of API endpoints with reasonable names and all of the types of query parameters, request bodies and response bodies. From there, it’s just a question of assembling the PureScript code from those data structures. The only tricky bit is providing the HasForeignType instances which are necessary in order to convert names of Haskell types into names of PureScript types. We chose to reuse the same names, and then a dash of Template Haskell magic is all that’s needed to traverse the graph of types and generate all of the necessary instances, thanks to the incredibly useful th-reify-many library:

$(do names <- reifyManyWithoutInstances ''HasForeignType
       [ ''Order
       , -- ... a list of other top-level types goes here
       ]
       (const True)
     let toInstance nm =
           let tyCon = TH.ConT nm
               nmLit = TH.LitE (TH.StringL (TH.nameBase nm))
           in [d| instance HasForeignType Purs PursType $(pure tyCon) where
                    typeFor _ _ _ = PursTyCon $(pure nmLit)
                |]
     concat <$> mapM toInstance names
 )

This needs a little explanation: reifyManyWithoutInstances traverses the type graph looking for types without HasForeignType instances. For each type it finds, the toInstance function turns its name into a HasForeignType instance using a Template Haskell splice. The tyCon and nmLit nodes are in scope, so we can use anti-quotation $(...) to use them in the splice.

Types as a tool

As I mentioned earlier, a good type system should increase productivity by reducing busywork. In addition to making our API clients free from boilerplate, we’ve been able to increase productivity in a number of other areas since implementing PureScript. I hope to be able to write about each of these in some detail in future:

We have built a collection of completely generic UI components on top of our typed API clients. For example, we have one table component which is parameterized by the API which provides its data and search capabilities. If we change the API, the compiler reminds us to update the table!
We have built a combinator library for assembling forms which are compatible with our API types. By using lenses and a handful of basic functions, we are able to build type-safe forms in a fraction of the time it would take by hand.
We have also built a type-level DSL in the style of Servant for deriving forms from types for certain data collection tasks. Just as Servant allows us to repurpose our type-level API definitions for the generation of API clients and documentation, we can reuse our type-level form descriptions for all sorts of things like storage in the database, indexing and querying.
We have plans to implement more type-directed tools in the future: As I described in my blog post about our migration to Postgres, we have implemented a completely generic backend solution on top of our Postgres database, including filtering, search, and computed fields. We are working towards implementing the same level of reusability on the front-end, by abstracting over common API client patterns. Now that our API clients are represented on the front-end, we would like to find new ways of representing our API calls, and find ways to implement features like batching and caching in a generic way. If this sort of work sounds appealing, we are hiring! Conclusion While we still have a way to go, our experience with PureScript so far has been very positive. Instead of touting its benefits myself, I’ll leave you with a few quotes from the team:
https://medium.com/fuzzy-sharp/purescript-and-haskell-at-lumi-7e8e2b16fb13
['Phil Freeman']
2019-05-23 17:25:48.517000+00:00
['Purescript', 'Engineering', 'Haskell', 'JavaScript', 'Programming Languages']
The Most Wonderful Time of the Year
The Most Wonderful Time of the Year 5 strategies that helped me win Nanowrimo and how they can help you Photo by Alex on Unsplash

Nanowrimo is National Novel Writing Month and takes place every November. It was started in 1999 by Chris Baty in the San Francisco Bay Area. In the beginning there were 21 participants. Twenty-one years later there are more than 798,162 participants from all over the world. The goal is to write 50,000 words between November 1st and November 30th. It's a free annual event where aspiring novelists use this time to write an average of 1,667 words per day to meet their goal. If writers can write 50,000 words within the month of November, they are declared winners after their words have been uploaded and verified. Here are the five strategies that helped me win Nanowrimo, and they can help you, too: setting aside time to write every day, working in increments of time to help you focus when you can't get moving, having a daily word goal, "writing with the door closed" à la Stephen King, and saving the editing for when it's done.

Set Aside Time to Write Every Day

When I began Nanowrimo 2020 I made a point of setting aside time every day to write. I either made my word goal or I didn't. I either wrote 1,667 words or maybe it was 254. It didn't matter. I made time to write. I prefer mornings, but there were times when that wasn't possible and I squeezed in a few words before bedtime. But as a general rule, once my four children were successfully online for school, I made myself a mug of hot tea and began writing. As the month went on, I found that it was much easier to conjure up ideas and sentences when it became a habit to sit in the same place, at the same time, day after day. Sometimes I handwrote notes as I played with ideas. Sometimes I started my session by spending 15 minutes on the internet doing research and jotting more notes, which led to more ideas, which led to sentences, which led to paragraphs. By the end I was writing anywhere between 1,100 and 2,000 words a day. The suggested word count was 1,667 words a day in order to meet the 50,000-word goal by the end of the month.

Working in 15-Minute Time Increments

When I struggled to stay focused, I set the timer for 15 minutes. This is a teaching strategy I used with my students when they couldn't accomplish a task. The timer kept them on task, or if they were off task, the timer would bring them back to their assignment. I followed this tactic throughout Nanowrimo, and when I started meeting my time goals, I would extend the time increments to anywhere between 15 and 20 minutes. Towards the end of the month I found that I had gotten into the habit of sitting down and writing and would forget to set the timer, only to realize several paragraphs later that I had never heard the chimes. If I've been away from my writing too long, this strategy helps to bring me back. It was also a good way for my children to hold off on disturbing my writing time. When they started to ask a question, if it wasn't an emergency I'd tell them, "Give me x,y,z minutes to finish my thoughts and I'm happy to help you." In this way, I was able to get more done, they interrupted less, and they learned to wait for the chimes before grabbing my attention. Another benefit of timed increments is that if the timer rang in the middle of a sentence, sometimes I wouldn't finish the sentence, but merely stop in the middle of my thought. I would highlight the words in another color to help me find them when it was time to come back.
It sounds counterproductive, but I've found that when I stop midway through an idea, it's easier to go back to it for the next increment of time. I always had something in the works and I wasn't waiting for inspiration to strike.

Daily Word Goal

The Nanowrimo site provides different graphs to help keep you motivated. In order to meet the 50,000-word goal by November 30th, you have to write 1,667 words a day. Depending on how much you write, your goal for the next day fluctuates with the number of words you need to write in order to meet your goal by the end of the month. There are also two other graphs to help motivate you: your overall progress graph and your daily word count graph. Watching my daily word goal get smaller with each passing day encouraged me to keep writing more. I really like watching my ring close on my daily word count. And on the days that I could only get a few hundred words down, I still felt accomplished. Even minimal forward progress is still progress.

Write with the Door Closed

In his book On Writing, Stephen King gives the advice to write with the door closed. He suggests getting all the words and ideas down before sharing them with anyone else. There are many Nanowrimo groups where you can ask questions and pick someone's brain. Ask questions that lead you to character development, help you find the right word, or describe the scenery, but don't use Nanowrimo time to share pages of your writing to make big changes.

Save the Editing for When it's Done

The idea behind Nanowrimo is to attempt to write a book, or at least 50,000 words, by the end of November. Save the editing for when it's over. The purpose is to move forward and get the words and ideas down on paper.

Benefits

By participating in Nanowrimo this year I gained the following: A daily writing habit. Writing became a priority on my to-do list. Because I took the time to write every day, the ideas started to come more easily. My writing improved overall. Comparing the pages that I wrote at the beginning of the month to those I wrote closer to the end shows my growth as a writer. The muse was able to find me more easily as I made time to write at the same time day after day.

For years I had watched Nanowrimo from the sidelines. In 2019 I hesitantly participated but, lacking confidence, I had a harder time getting my ideas on paper. After dedicating time to writing articles and essays for over a year, I decided to give Nanowrimo another try. I had nothing to lose and everything to gain. The only way to get my ideas out of my head was to put them on paper. All of my excuses revolved around the lack of time. Nanowrimo gave me a reason to make the time. Even with four kids home full-time logging on for distance learning (thanks, COVID), I was still able to participate and win. The event is free and the only person you disappoint is yourself. No one cares what you write or how much you write. It's the honor system. But there are hundreds of people who will cheer you on along the way as you share your progress, your triumphs, and your frustrations. It's the most wonderful time of the year.
https://medium.com/the-innovation/the-most-wonderful-time-of-the-year-b90dd1363bbe
['Heather Jauquet']
2020-12-06 20:02:59.380000+00:00
['Writing For Writers', 'Writers On Writing', 'Writing Tips', 'NaNoWriMo', 'Writing']
SCORCHER: As Global Records Tumbled, S’pore Baked Under One Of The Warmest Q3 Ever
Singapore has not been spared the recent spate of record heat waves around the world. If anything, an examination of the city-state's weather data from July to September 2019 suggests that Singapore baked under a long heat spell that was noticeably more intense than the long-term average over the same period (i.e., Q3 2019 data compared with Q3 data from previous years). On 84 of the 92 days in Q3 2019, or 9 out of 10 days, the mean daily temperature exceeded the long-term July–September average of 27.92°C — the highest since 1983 by this benchmark. Q3 2019 also had 81 days on which the maximum daily temperature exceeded the long-term average of 31.39°C for this period, matching the current record set in 1997, when Singapore and the world felt the brunt of the El Niño effects. The Meteorological Service had earlier said that August 2019 was likely the driest and warmest August since records started in 1929, and that 2019 was poised to set new temperature records. It is unclear if the weather in Singapore has shifted to a new normal. But the trend lines certainly point in a worrying direction.
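For readers who want to reproduce this kind of count themselves, here is a minimal pandas sketch. The file name, column names and the thresholds are stand-ins for illustration (the thresholds are the long-term averages quoted above), not the article's actual data pipeline:

import pandas as pd

# Assumed daily weather file with columns: date, mean_temp, max_temp (degrees C).
df = pd.read_csv("singapore_daily_weather.csv", parse_dates=["date"])

# Keep only Q3 2019 (July to September).
q3 = df[(df["date"] >= "2019-07-01") & (df["date"] <= "2019-09-30")]

LONG_TERM_MEAN = 27.92  # long-term Jul-Sep mean daily temperature (from the article)
LONG_TERM_MAX = 31.39   # long-term Jul-Sep benchmark for daily maximums (from the article)

days_above_mean = (q3["mean_temp"] > LONG_TERM_MEAN).sum()
days_above_max = (q3["max_temp"] > LONG_TERM_MAX).sum()

print(f"{days_above_mean} of {len(q3)} days exceeded the long-term mean")
print(f"{days_above_max} of {len(q3)} days exceeded the long-term max benchmark")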
https://towardsdatascience.com/scorcher-as-global-records-tumbled-spore-baked-under-one-of-the-warmest-q3-ever-436837cb5b0
['Chua Chin Hon']
2019-10-24 14:02:24.792000+00:00
['Climate Change', 'Data Journalism', 'Singapore', 'Data Visualization', 'Weather']
How to Know if You’re a Starseed
Signs You’re a Starseed 1. You follow your intuition You can’t always explain things with logic. Sometimes you just know. Intuition is a powerful force inside of you, leading you to exactly what you need at the right moment. If you feel like you came from another planet, you probably did. Don’t let others interfere. They may lead you to doubt yourself. But your sixth sense makes sense. Intuition never lies. If you feel a strong urge, it’s usually the right choice. 2. Babies and animals are attracted to you You have a deep connection with children and babies. They coo and smile or strike up spontaneous conversations with you everywhere you go. I’ve been making friends with 8-month-old babies for the last few years. We’re drawn to each other. Animals approach you often. Dogs might be uncharacteristically friendly toward you. Cats follow you around on a neighborhood walk. Birds visit you at your doorstep. I’ve had this experience many times. You might communicate with animals. One time I walked by a small Collie dog carrying a large stick in his mouth. He said, “I used to be bigger last time. I’m just getting used to this.” He made me laugh out loud, as he managed to hold a stick too large for his current small dog body. 3. Human behavior doesn’t make sense to you Other people’s reactions seem illogical. You might feel like everyone else is moving too slowly. What can’t we get on with it? You may follow an unconventional lifestyle that others don’t understand. You march to the beat of a different drum. You could be labeled rebel or the black sheep of the family. I’ve always felt different, like I never fit in, even while conforming. I’d say things my peers thought were weird. Star seeds tend to struggle in school because kids their age don’t follow our line of thinking. I found like-minded people in high school, and now I see most of them are indigo, star seeds, and earth angels. Despite being misunderstood and sometimes an outcast, you have a clear sense of who you are and why you came here. You don’t let other people sway your beliefs. Some of us go through a phase of following the crowd, but it always feels inauthentic. Your role manifests as an activist, eco-warrior, or any other catalyst for social change. You’re here to help us shift the current paradigm. 4. You’re always searching for home Finding home often feels out of your grasp. You know how “home” feels, yet you can’t get there. You look to the stars for answers. You’re sure there’s life beyond planet Earth. You have a faint recollection of being from another star system in a previous life. You seek out knowledge about other galaxies, and outer space generally intrigues you. You might struggle with feeling homesick. No matter where you live, you’re never quite settled. I’ve moved a lot over the past ten years, and I’m still not home. I decided to let it go since I’m now aware I originated somewhere else. As star seeds, we can choose to embrace our human experience. 5. You have heightened psychic abilities You’re a channeler, clairvoyant (clear seeing — visions), claircognizant (clear thoughts — thoughts come to you), clairaudient — hearing clear messages), or an energy healer. You’re an empath or highly sensitive. You pick up on energies. You often know upon arrival when a place feels off, or when there are high vibes. You sometimes feel the energy of inanimate objects, coming from the last person holding it. You hear, see, and feel messages from your spirit guides. 
They’re the galactic federation of light, other star beings, angels, and animals. You pay attention to synchronicities everywhere throughout your day. You see frequent number patterns like 11:11.
https://medium.com/mystic-minds/how-to-know-if-youre-a-starseed-af4d49571d2b
['Michelle Marie Warner']
2020-08-10 02:03:48.494000+00:00
['Self-awareness', 'Metaphysics', 'Spirituality', 'Consciousness', 'Life']
How to Run Recommender Systems in Python
How to Run Recommender Systems in Python A practical example of Movie Recommendation with Recommender Systems Photo by Pankaj Patel on Unsplash

A Brief Introduction to Recommender Systems

Nowadays, almost every company applies Recommender Systems (RecSys), a subclass of information filtering systems that seek to predict the "rating" or "preference" a user would give to an item. They are primarily used in commercial applications. Just to give some examples of famous recommender systems: Amazon: Was the first company to apply Recommender Systems extensively, around 1998. Based on the user's preferences, it suggested similar products, first for books and now for all of its products. YouTube: Based on the videos that you have watched, it suggests other videos that you are likely to like. Spotify: Their successful Recommender System made them famous, and many people let Spotify play music for them. Facebook: It shows at the top of the feed the posts that are most likely to interest you. Instagram: It suggests profiles to follow based on your preferences. Netflix: It recommends movies for you based on your past ratings. It is worth mentioning the Netflix Prize, an open competition for the best collaborative filtering algorithm to predict user ratings for films, based on previous ratings without any other information about the users or films, i.e. without the users or the films being identified except by numbers assigned for the contest. On September 21, 2009 they awarded the $1M Grand Prize to the team "BellKor's Pragmatic Chaos". So, you can build your own improved Recommender System and you can become rich one day 🙂

Surprise for Recommender Systems

Still, there is much interest in Recommender Systems, and it is a great field of research. Our goal here is to show how you can easily apply a Recommender System without going into the maths behind it. We will work with the surprise package, which is an easy-to-use Python scikit for recommender systems. The available prediction algorithms are: Screenshot from Surprise Documentation

Build your own Recommender System

We will provide an example of how you can build your own recommender. We will work with the MovieLens dataset, collected by the GroupLens Research Project at the University of Minnesota. Let's get our hands dirty!
import pandas as pd
import numpy as np

columns = ['user_id', 'item_id', 'rating', 'timestamp']
df = pd.read_csv('ml-100k/u.data', sep='\t', names=columns)

columns = ['item_id', 'movie title', 'release date', 'video release date', 'IMDb URL', 'unknown', 'Action', 'Adventure', 'Animation', 'Childrens', 'Comedy', 'Crime', 'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical', 'Mystery', 'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western']
movies = pd.read_csv('ml-100k/u.item', sep='|', names=columns, encoding='latin-1')

movie_names = movies[['item_id', 'movie title']]
combined_movies_data = pd.merge(df, movie_names, on='item_id')
combined_movies_data = combined_movies_data[['user_id', 'movie title', 'rating']]
combined_movies_data.head()

I will also provide my ratings for some movies from this data set, since my ultimate goal is to get recommendations for myself ;). Below you can see my preferences. I will give myself the user_id 1001.

# my user_id is 1001
my_ratings = pd.read_csv('my_movies_rating.csv')
my_ratings

The next step is to append my ratings to the rest of the ratings. We will also keep only the movies which have more than 25 reviews.

combined_movies_data = pd.concat([combined_movies_data, my_ratings], axis=0)

# rename the columns to userID, itemID and rating
combined_movies_data.columns = ['userID', 'itemID', 'rating']

# use the transform method, grouping by itemID and counting ratings,
# to keep the movies with more than 25 reviews
combined_movies_data['reviews'] = combined_movies_data.groupby(['itemID'])['rating'].transform('count')
combined_movies_data = combined_movies_data[combined_movies_data.reviews > 25][['userID', 'itemID', 'rating']]

Now we have our dataset ready and we can apply different recommender systems using the surprise package.

from surprise import NMF, SVD, SVDpp, KNNBasic, KNNWithMeans, KNNWithZScore, CoClustering
from surprise.model_selection import cross_validate
from surprise import Reader, Dataset

# A reader is still needed, but only the rating_scale param is required.
reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(combined_movies_data, reader)

Clearly, we want to remove the movies that I have rated from the suggested ones.
Let's remove the rated movies:

# get the list of the movie ids
unique_ids = combined_movies_data['itemID'].unique()
# get the list of the ids that user 1001 has rated
iids1001 = combined_movies_data.loc[combined_movies_data['userID'] == 1001, 'itemID']
# remove the rated movies from the recommendations
movies_to_predict = np.setdiff1d(unique_ids, iids1001)

Recommender Systems using NMF
algo = NMF()
algo.fit(data.build_full_trainset())
my_recs = []
for iid in movies_to_predict:
    my_recs.append((iid, algo.predict(uid=1001, iid=iid).est))
pd.DataFrame(my_recs, columns=['iid', 'predictions']).sort_values('predictions', ascending=False).head(10)
My recommendations according to NMF

Recommender Systems using SVD
algo = SVD()
algo.fit(data.build_full_trainset())
my_recs = []
for iid in movies_to_predict:
    my_recs.append((iid, algo.predict(uid=1001, iid=iid).est))
pd.DataFrame(my_recs, columns=['iid', 'predictions']).sort_values('predictions', ascending=False).head(10)
Recommendations using SVD

Recommender Systems using SVD++
algo = SVDpp()
algo.fit(data.build_full_trainset())
my_recs = []
for iid in movies_to_predict:
    my_recs.append((iid, algo.predict(uid=1001, iid=iid).est))
pd.DataFrame(my_recs, columns=['iid', 'predictions']).sort_values('predictions', ascending=False).head(10)
Recommendations using SVD++

Recommender Systems using KNN with Z-Score
algo = KNNWithZScore()
algo.fit(data.build_full_trainset())
my_recs = []
for iid in movies_to_predict:
    my_recs.append((iid, algo.predict(uid=1001, iid=iid).est))
pd.DataFrame(my_recs, columns=['iid', 'predictions']).sort_values('predictions', ascending=False).head(10)
Recommendations using KNN with Z-Score

Recommender Systems using Co-Clustering
algo = CoClustering()
algo.fit(data.build_full_trainset())
my_recs = []
for iid in movies_to_predict:
    my_recs.append((iid, algo.predict(uid=1001, iid=iid).est))
pd.DataFrame(my_recs, columns=['iid', 'predictions']).sort_values('predictions', ascending=False).head(10)
Recommendations using Co-Clustering

How to Evaluate the Recommender Systems
We saw earlier that each recommender algorithm suggested different movies. The question is which one performed best and how we can choose between the different algorithms. As in all Machine Learning problems, we can split our dataset into train and test sets and evaluate the performance on the test dataset. We will apply Cross-Validation (k-fold with k=3) and get the average RMSE of the 3 folds.

cv = []
# Iterate over all recommender system algorithms
for recsys in [NMF(), SVD(), SVDpp(), KNNWithZScore(), CoClustering()]:
    # Perform cross validation
    tmp = cross_validate(recsys, data, measures=['RMSE'], cv=3, verbose=False)
    cv.append((str(recsys).split(' ')[0].split('.')[-1], tmp['test_rmse'].mean()))
pd.DataFrame(cv, columns=['RecSys', 'RMSE'])
Average RMSE on the Test Dataset

As we can see, SVD++ had the best performance (lowest RMSE).

Discussion
We built several Recommender Systems where the RMSE was less than 1. For our models, we took into consideration only the UserID and the ItemID. This post explains briefly the logic of item-based and user-based collaborative filtering. You can also find an example of item-based collaborative filtering. We can apply different algorithms by taking into account other attributes like the genre of the movie, the release date, the director, the actors, the budget, the duration and so on.
In this case, we are referring to Content-based recommenders that treat recommendation as a user-specific classification problem and learn a classifier for the user’s likes and dislikes based on an item’s features. In this system, keywords are used to describe the items and a user profile is built to indicate the type of item this user likes. Finally, we can even take into consideration the user’s attributes, like gender, age, location, language, etc.
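The genre columns we already loaded from u.item make it possible to sketch what such a content-based scorer could look like. The snippet below is not part of the original walkthrough — it is only a minimal, illustrative sketch. It assumes the movies and combined_movies_data frames from the earlier steps are still in memory and that the titles in my_movies_rating.csv match those in u.item; the helper names are ours, not the author's.

import numpy as np

# Genre profile per movie, indexed by title (itemID in combined_movies_data is the title).
genre_cols = ['Action', 'Adventure', 'Animation', 'Childrens', 'Comedy', 'Crime',
              'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical',
              'Mystery', 'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western']
item_profiles = movies.drop_duplicates('movie title').set_index('movie title')[genre_cols]

# Build user 1001's profile: rating-weighted average of the genre vectors of the rated movies.
my_rows = combined_movies_data[combined_movies_data['userID'] == 1001]
weights = my_rows.set_index('itemID')['rating']
rated_profiles = item_profiles.loc[weights.index]
user_profile = (rated_profiles.T * weights).T.sum() / weights.sum()

def cosine(a, b):
    # Cosine similarity between two genre vectors; 0 if either vector is all zeros.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

# Score every unseen movie against the user profile and keep the ten closest.
scores = {title: cosine(row.values, user_profile.values)
          for title, row in item_profiles.iterrows()
          if title not in weights.index}
top10 = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top10)

A real content-based system would use richer item features than the 18 binary genre flags, but the shape of the computation — item profiles, a user profile, and a similarity score — stays the same.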
https://medium.com/swlh/how-to-run-recommender-systems-in-python-1fcea853738f
['George Pipis']
2020-09-28 21:03:35.505000+00:00
['Python', 'Collaborative Filtering', 'Surprise', 'Recommender Systems', 'Recommendation System']
I Don’t Let Email Run My Life
Here's how you can get inbox control, too Photo by Matthew Fournier on Unsplash I used to check email constantly. You know, just in case someone needed something from me. After email, I'd go on Facebook to see my notifications, then before I knew it, my quick break ate half an hour of my workday. Ditching midday social media check-ins was easy for me (shout if you want to know how I did it and I'll unpack it in a future newsletter), but getting control of my email was way harder. I had to realize how much time it was taking me and how few benefits (none, really) I was getting from it. See, I'd open my inbox and click a few unread emails, but I wouldn't want to pause my productivity and actually write anyone back… which just meant the stack of to-respond-to emails was adding up while I was also wasting time. Enter Tim Ferriss's The 4-Hour Workweek. I read the book begrudgingly, expecting Ferriss to be a hack, but I found it much more helpful than I was anticipating for the way Ferriss flips many of the ways we've been encouraged to work for the sake of productivity into patterns that can actually help us be productive. In this case, email. Ferriss checks his email twice a day. When he's in the inbox, he's focused on writing and answering emails. The rest of the time, he puts up an autoresponder that lets people know that he checks his email twice daily (11 am and 4 pm), will respond then, and to call him if urgent action is required. "The 'urgent' email-to-call conversion is usually less than 10%," Ferriss writes. While I don't use an autoresponder, for the last 18 months I've been checking email only twice a day (three times, if I'm expecting a contract or something timely) and almost never on weekends. By making myself selectively available, I haven't lost any work or received complaints from editors impatient to reach me. I've experienced no negative consequences, I creep toward inbox zero (sub 10, atm), and I actually get back to people right away rather than read-and-shelve emails on an ever-expanding to-do list. Email is the primary tool I use to reach out to potential clients, manage existing clients, market myself (hi!), and otherwise manage the writing life. Yet it doesn't own me. Constantly checking in on it like it's a baby bird that failed to fledge is a distraction from the work of writing. By setting parameters for my email, I moved out of reactive mode, where I was waiting for people to come to me and, when they did, prioritizing meeting their needs rather than my own. I created more time to focus on meaningful work by enforcing email boundaries that supported pockets of time for deep work. Drawing boundaries around your most important work and everything else (the whirlwind, as Sean Covey calls it in The 4 Disciplines of Execution) is how you preserve the sacred space to honor your priorities and commitments. When you are constantly available, plugged in and always on, you are essentially letting people know to come to you with their needs and teaching them that you will put aside your most important, necessary work to tend to those needs, in the same way that my dogs know that when they paw at my leg, I will stop what I'm doing and take them for a walk so they don't pee on the carpet. So yes, it's a small act, but the ramifications compound. In 2020, we're seeing what happens when one gender is told repeatedly, in every aspect of their life, to take on more work without supportive systems and processes, without boundaries.
2.2 million women have left the workforce because it’s impossible to balance a career, childcare, homeschooling, and domestic life. Our culture teaches women from a young age (hello dolls and play kitchens) their purpose is to serve, and by the time we’re having children or marrying, cishet women have internalized the message and do twice as much domestic labor as men. Stepping back from the unequal division of labor at home, or the always-on expectation at work, requires skillful boundary management. Your inbox is a perfect place to practice boundary setting. Your email management routine doesn’t need to look like mine or Tim’s to move the needle toward centering your needs, rather than always meeting others’. Here’s Tim’s autoresponder script, if you want to check it out. Lindsey Danis is a Hudson Valley based writer and frequent traveler who loves telling stories about travel, personal growth, resilience, and LGBTQ communities. She’s on Twitter @lindseydanis and has a productivity and business systems newsletter for self-employed writers.
https://medium.com/curious/i-dont-let-email-run-my-life-65386bccdfdf
['Lindsey Danis']
2020-12-15 02:33:42.769000+00:00
['Freelancers', 'Remote Working', 'Freelancing', 'Productivity', 'Time Management']
From Java to JavaScript — Functions and Scopes
In my last post I wrote about data type differences between Java and JavaScript. When it comes to functions, Java and JavaScript have some big differences. Since Java 8 you have lambda expressions, which give you the chance to pass functions as a method argument, as long as the parameter type is a functional interface. In JavaScript, every function is an object. This means you can pass functions to other functions, or create a function that takes another function as an argument (a higher-order function). In the end, both languages have functional programming characteristics, with Java being more object oriented.
Functions in general have (or should have) the purpose of doing one task: taking some input and returning some output. They may also be used to collect functionality that has the same intent, like setting up and connecting to a database. Furthermore, functions can provide privacy, in the sense that only some code has access to them (for example: anonymous functions). Typically, functions should always return the same output for the same input (pure functions), so that they don't have side effects and are predictable and testable.
Functions in Java
In Java, every function is bound to a class. This means you cannot simply create a function on its own — you must add it to a class. There are different types of functions, like class methods (static), abstract methods (for inheritance) or instance methods. Furthermore, methods can be given a scope/visibility type (private, public, protected), which will be explained later. This brings us to the basic method template:

scope modifier returnType methodName(type arguments...) { /* body */ }

So the basic method has the following options:
scope — (optional) defines the visibility of the method (public, private or protected). If nothing is defined, the method gets the default, package-private visibility.
modifier — (optional) defines the type of the method (static, final or abstract). Note that combinations like static final are possible and definitely make sense in some cases. If nothing is defined, the method is bound to the instance of the class by default.
returnType — (mandatory) defines the type the method returns. It can be basically any primitive data type or object. If nothing has to be returned, you can specify void.
arguments — (optional) define a list of arguments, including their types, to pass to the function.
There are types of functions which have a special use case. Especially in Java, you have the getter and setter methods, which simply do what they say: set and get values from an instance. They exist for privacy reasons, to prevent parts of your code from accessing instance variables directly. Specifically, a setter could be used to check whether the value you want to set is valid, or to manage multi-threaded access. The same goes for getters, where you maybe don't want to return a raw date (that you persisted in a database), but rather return it in a specific format. This concept is also part of the clean code paradigm, which discourages duplicate code, overly long functions and more.
Scope Basics in Java
Java is organized in the form of classes and packages, for which you can specify the scopes of your variables and functions at compile time. There are basically three types of scopes you can differentiate: class, method and block level scope.
Member variables relate to a class level scope, meaning that if you define a variable in a class (not inside a class function), this variable can be accessed inside this class. Depending on the modifier you set, the variable (or function) may also be accessible from outside the class. Classes can also be package-hidden, when you define no modifier for them: class Test {} rather than public class Test {}. If you define a private constructor for a class, you cannot instantiate it from outside, and thus only one instance of that class can exist (the singleton pattern). In fact, the instance is created and saved inside the singleton class. This might be useful for general settings or asset (image/video) loading in your application.
Local variables relate to a method or block level scope, meaning that if you define a variable inside a method or block, it cannot be accessed from outside:

public class Test {
    private String name;

    public void setName(String name) {
        this.name = name;

        {
            int nonsense = 42;
            System.out.println(nonsense);
        }

        System.out.println(this.name);
        System.out.println(nonsense); // compile error: 'cannot find symbol'
    }

    public static void main(String[] args) {
        Test test = new Test();
        test.setName("foo");
    }
}

You can run this example with javac Test.java and then java Test. The main point here is that it won't compile, since line 13 (the last println) tries to access the block-scoped variable nonsense. Furthermore, it shows the concept of private member variables (name), which can only be set using a publicly accessible setName method. There is way more to add about scopes in Java, but this would be out of scope for this article.
Functions in JavaScript
In JavaScript, a function is always an object. This opens up the possibility to pass functions via function parameters, return functions from functions and more. Compared to Java, JavaScript is naturally a functional programming language. There are some concepts that include classes and inheritance, but in the end a transpiler like Babel will transpile the code into functions. Check out the Babel live compiler here. Check out this example:

function printSomething0(input) {
    console.log(input);
}

const printSomething1 = input => console.log(input);

const printSomething2 = function printSomething2(input) {
    console.log(input);
}

printSomething0('test0'); // prints 'test0'
printSomething1('test1'); // prints 'test1'
printSomething2('test2'); // prints 'test2'

It shows three different ways of creating a function. The second one is called an arrow function and is my favourite, since it is really concise. Note that it is only available since ECMAScript 2015 (ES6), and will probably be transpiled anyway. A very important concept in JavaScript are closures: basically, local or private functions whose main purpose is to provide privacy (as mentioned at the beginning of this article). They are described in more detail below.
Scope Basics in JavaScript
Similar to Java, objects can have (two) different scopes. You can differentiate between global and functional scope. It is not always as simple as in Java, where you may define a variable inside a function to hide it from the outside. See the "use strict"; directive.
Also, consider the following example:

const chain = () => {
    let str = 't'; // 'var' also possible
    return input => {
        str += input;
        return str;
    };
};

const t = chain();
console.log(t('e'));  // prints 'te'
console.log(t('st')); // prints 'test'
console.log(str);     // ReferenceError: str is not defined

It demonstrates the use of closures and scopes. You can see a closure as a function that has access to its parent scope, even after the parent function has returned. In this example, str is hidden inside the chain method, which only runs once. It also returns a function that has access to its outer/parent scope variable str. In this way, the chain method has a private variable. There is way more to add about scopes in JavaScript, like the let keyword (ES6), which provides block scope, or object prototypes.
The "use strict" Directive
Understanding scopes in JavaScript is important. Depending on the type of code you write (module code is always strict), this directive is used implicitly. But sometimes you have to set it yourself at the beginning of a file, in order to prevent the use of undeclared variables. Consider the following example:

"use strict";
test = 12345; // 'ReferenceError: test is not defined'
https://reime005.medium.com/from-java-to-javascript-functions-and-scopes-9bea24c7cfb
['Marius Reimer']
2019-01-06 17:06:00.822000+00:00
['JavaScript', 'Software Development', 'Coding', 'Java', 'Programming']
Deploy a React Project on Nginx webserver to GCP VM Instance
Creating a GCP Virtual Machine Instance
Go to the Google Cloud Console and log in with your Gmail account. Create a project, or select an existing one. Make sure you have access to the Google Cloud services. If not, you can sign up for the $300 trial, which is more than adequate for this tutorial. Click on the burger menu icon at the top left, then select Compute Engine > VM instances.
VM instances in console.cloud.google.com
Now create a new VM instance, give it a name and leave all the settings at their defaults, or set them as per your server requirements, but make sure that: 1. You select an Ubuntu-based system, preferably Ubuntu 18.04. 2. You check both the Allow HTTP traffic and Allow HTTPS traffic checkboxes.
Creating a VM instance
It takes some time to fully create and run a Virtual Machine, but after it is ready it should look something like this.
Accessing the VM instance
This is your Virtual Machine, with its IP address provided right under "External IP". Please take a note of this IP address, as it will be required later. Now, to access your server, simply click on the SSH button, as shown in the image above. You will be provided with a terminal inside a browser window, which acts as the terminal to the Ubuntu system in our Virtual Machine.
https://medium.com/wesionary-team/deploy-a-react-project-on-nginx-webserver-to-gcp-vm-instance-362e73e1cadf
['Sajal Dulal']
2020-06-01 06:59:00.161000+00:00
['Google Cloud Platform', 'Deploy', 'Nginx', 'React', 'Virtual Machine']
How Medium Spends Your $5
How Medium Spends Your $5 Mysteries of the algorithm revealed Photo by Thought Catalog on Unsplash Author's Note: This was written before the model changed to reward for reading rather than applauding. So how does the algorithm work now? It's anybody's guess! Still, the experiment below could shed some light.
I've been posting on Medium for about a year and a half. For the first six months, I didn't give it much attention. I had posted a story on Facebook, and a friend commented that it deserved a wider readership. She suggested Medium. At the time, I'd never even heard of Medium. It took a while just to find it. I posted my first story in January of 2018, and not many people read it. A year and a half later, "Sally's Wedding," a story about the grief I feel over the "normal" life my son lost when he developed a major mental illness at age 18, has 136 views, 82 reads, and 3 fans. I didn't post another story until May. For the second half of 2018, I posted about one story a month, with similar results. Then in September, curators selected a story to push out to three topics: Equality, Women, and Politics. That was exciting! It meant that readers who had indicated they were interested in those topics saw my story on their homepages and got it in their compilation emails from Medium. I can't say it went viral, but it performed better than my other stories with that promotional help from Medium. To date, "Thank you, Dr. Ford," which lauds an avenging angel, has 320 views, 149 reads, and 32 fans. That piqued my interest. Then in December, it happened again with a story about my family's Christmas traditions, which curators selected to push out to the topic Family. Now I was hooked. Currently, "My Christmas in the Woods" has 235 views, 114 reads, and 15 fans.
I loved knowing that strangers were reading my story. I can always post on Facebook to reach friends, but my goal is to get a wider readership, as I used to have when I wrote a column for a string of local newspapers in Northern California. But how could I get curators to select more of my stories for promotion? While investigating this question and reading Medium Help pages, I came upon a promise that curators would read every story posted by members. If you're not a member, your story might still get selected, as the three above did — they might stumble across it anyway, or you can email it to their attention. But if you are a member, the review is automatic, so the odds are much higher that your story will be selected for curation, since curators are obligated to at least read what you post.
So in January of 2019, I purchased a membership. I also decided to make an effort to post a story every week. That was my schedule when I wrote "Home Front," my mostly humorous column about being a young mother and wife. Medium explained on its Help pages that I could buy a membership for $5 a month, or $50 a year, and that most of that money would be distributed to writers I engaged with on the platform via reading, commenting on, or clapping for their stories. The way it's described is something like this: say you have $5 a month to distribute. If you "clap" for one story during the month, that author gets all your $5. If you clap for five, each author gets $1.
But it’s not that simple, because Medium takes a cut to run the site — a percentage which they don’t reveal. They also allocate based on reading and commenting, in addition to clapping, using an algorithm they once again don’t reveal. Plus, $50 divided by 12 doesn’t equal $5. So it’s all a bit mysterious... How much does Medium distribute to writers and how much does it keep to cover administrative costs? That’s what I wanted to know. Once I bought the membership and started posting once a week, things began to improve. My stories were selected regularly for curation, and I started to make a bit of money. It wasn’t much, but it was moving in the right direction. I made $5 in January, $15 in February, $20 in March, $35 in April, and $92 in May. In June, I made $170. Those numbers aren’t necessarily predictive, though, because in May and June, I pitched stories that were accepted into The Bold Italic and Human Parts, big publications which increased my readership exponentially. Examining the numbers on those stories is instructive. This piece in The Bold Italic, a publication focused on San Francisco, has 11.4 thousand views, 2.8 thousand reads, and 88 fans to date, which created $94 in earnings. Besides appearing in TBI, curators distributed it in the topics Race, San Francisco, and Equality. This story in Human Parts, on the other hand, has far fewer readers, but more fans, which leads to greater earnings. It has 3.3 thousand views, 1.6 thousand reads, and 127 fans to date, earning $112. Besides appearing in Human Parts, a publication within Medium with a link on the menu bar of everyone’s home page, curators distributed it in the topic Family. Both those publications actively ask for submissions, so if you’re serious about writing on Medium, you should check them out. These experiences would lead me to believe I need to get accepted by a big publication in order to reach a significant audience, except for one thing. My third most popular story got its viewers when I injected it into Twitter conversations about abortion, racking up 3.2 thousand views, 2.1 thousand reads, and 69 fans. Twitter fans, however, don’t bring the green, since they aren’t necessarily members of Medium. Earnings to date for this story are $43. Curators distributed it in Women, Equality, and Politics. While analyzing their data, many Medium writers have wondered just exactly how their membership money is distributed. Medium doesn’t say. But recently, I got a clue. On May 15, 2019, I decided to start a feminist publication on Medium, after one I’d been contributing to suddenly shut down. I named it Fourth Wave, since we’re currently in the fourth wave of the feminist movement, which is described in a little bit more detail here. After populating the site with my own writing, I began to reach out to other writers, and because I taught high school journalism for 17 years, I happen to know a lot of young women and men who like to write. But most of them aren’t excited about purchasing a Medium membership, since they aren’t committed to the platform or to writing regular stories — yet. And without a membership, curators might not notice and recommend their work. So I decided to purchase a membership account for Fourth Wave. That way, I could publish stories of non-members, putting their byline in the subtitle, and rest assured that curators would at least take a look. 
It was only after buying the second membership that I realized it had given me an opportunity to experiment with the algorithm and find out, once and for all, how much of the membership fee Medium distributes to writers. While logged in as Fourth Wave, I clapped 50 times for one of my obscure stories which had no other claps. Then I signed out and let the account lie fallow. Those 50 claps — the only claps the account distributed all month — earned the story $3.09. If you take the $50 I paid for the year and divide it by 12, you get $4.17 a month. If $3.09 of that goes to writers, then Medium is keeping $1.08, or 26 percent, which I suppose isn’t excessive for creating and administering the platform many of us are coming to love. One problem, though. The money appeared sooner than I expected — the week after I clapped. So what would have happened if I’d gone back into that account and clapped for a second story the following week? I’m not sure. It seemed like a second experiment was in order, so one recent week, I used the second account to merely read one story without clapping. I slowly and carefully scrolled down the page. And when the earnings were posted for that week, there was nothing. So even though you hear the algorithm includes reading time, it’s all about the claps. I’m now considering how to design another test. If you have an idea for a good experiment, please leave it here. Meanwhile, keep writing! And, when you’re finished, Submit to The Wave! :)
https://medium.com/fourth-wave/how-medium-spends-your-5-b99cacc4824a
['Patsy Fergusson']
2020-02-09 20:19:31.029000+00:00
['Work', 'Self', 'Medium', 'Success', 'Writing']
The Complete Data Mining Pipeline
Now that we know how to obtain xpaths, let's write some code! We're going to create a login function that receives two parameters, username and password, finds their respective HTML input fields and writes the values in. For this we use find_element_by_xpath() to obtain the web element we want and send_keys() to write some text into it. Good code should be elegant, and full xpaths are certainly not elegant. For this reason, I chose to craft some more refined xpaths to find username_input, pswd_input, and login. You can use the xpaths you copied from the browser — they will work just fine — but don't be scared to use mine and try to understand how they work! Let's cover what they do by analyzing the xpath to username_input. First of all, // selects any element in the document that matches the following description. In this case, we're looking for an input element that has an attribute called name equal to session[username_or_email]. In essence, it achieves the same thing as the full xpath we can copy from the page, but this way it looks nicer. Notice that at the beginning of the method there's a sleep() function. This will halt the execution of our scraper for the number of seconds we specify. This is extremely useful, as we give time for the pages to load before we try to access any of their elements. If you're trying to access an existing element of the page and you get an error like the following, consider increasing the amount of time you want to wait for the page: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:[...] Increasing the wait time gives the page a chance to fully load before we perform an operation.
Now that we have our __login function inside the TBot class, let's call it from the constructor. It should look something like this: I strongly recommend that you store your username and your password in another Python file. This way you can import your username and password into your main.py script and use them without showing their actual values. Let's see if it worked — run your main.py with the following command and you should get to your Twitter home page automatically: (venv)$ python main.py
Extracting our much-desired data
Now comes the juicy part, extracting all the relevant information from our Twitter feed. First, we need to make a plan for how to proceed. Take your time, examine the HTML of the page, and come up with the best approach for achieving your goal. Based on my experience, probably all tweets have the same HTML structure (true in this case — always trust your experience). This allows us to write code to get information out of one tweet and reuse it for the entirety of our Twitter feed. Let's get to it! Inside the TBot class create the following method: Notice how we have stored the output of find_elements_by_xpath in tweets, which means that we have managed to capture all tweets under one single xpath. Now we iterate through each tweet, extracting its individual information. Each tweet, in turn, possesses the same internal HTML structure, so we can use the same functions to extract information. For this approach, we can't just copy and paste the xpath we copied from the browser, as each tweet will have its own xpath. We need to be able to generalize our way to locate tweets inside the document, so we need to craft our own xpath. But I'm not really satisfied with this solution. For comments, retweets, and likes it doesn't obtain the actual number, just the string that represents it.
For example, 1.9K is a string, while 1900 is an actual number we can work with. To fix this problem I created the following method: This method checks if data contains either a K for thousands or an M for millions, then removes it, turns data into a decimal number, and multiplies it by its respective amount. Because it's not nice to see someone got 1900.0 retweets, it finally turns data into an integer. Implementing this method inside __scrap_tweets is fairly simple: we just need to call it for the values of comments, retweets, and likes and capture the output value. Well, all that's left to do is call the method from the constructor, which should now look like this: Let's check if it works with (venv)$ python main.py. You should now see your whole Twitter feed, followed by this nice error message: selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document If you happened to somehow avoid this error, you should still listen closely to this. When we load all the tweets from our Twitter feed into tweets, the last of them aren't actually being displayed on the page. So, when we try to access their data, Selenium raises the exception above and execution stops. To avoid this problem, we need to scroll down each time we get an error so that new tweets are loaded into the page. Let's create a scroll method: Now we need to wrap the code inside __scrap_tweets in a try/except and call the __scroll method in the except section. It should end up looking something like this: But we don't want to only obtain data from our initial feed when we can scroll down indefinitely. We're going to call __scrap_tweets periodically from the infinite loop in the TBot constructor, resulting in something like this: And when you run (venv)$ python main.py your terminal should be flooded with endless tweets from your feed. But we don't want to store them in the output log of our terminal, do we? We want to store them somewhere we can extract them when we need them: in a database.
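The embedded code blocks from the original post don't survive in this excerpt, so here is a rough, self-contained sketch of the pieces described above: the TBot constructor with its infinite loop, the __login method, the count converter, the __scroll helper, and a skeleton of __scrap_tweets. Only the username xpath is spelled out in the text; everything else — the password-field and login-button xpaths, the login URL, the tweet xpath placeholder, and the helper names — is an illustrative assumption, not the author's exact code.

from time import sleep
from selenium import webdriver
from selenium.common.exceptions import StaleElementReferenceException

class TBot:
    def __init__(self, username, password):
        # Assumed setup: a Chrome driver pointed at the Twitter login page.
        self.driver = webdriver.Chrome()
        self.driver.get('https://twitter.com/login')
        self.__login(username, password)
        while True:                      # the infinite scraping loop described above
            self.__scrap_tweets()
            sleep(5)

    def __login(self, username, password):
        sleep(5)  # give the login page time to load before locating elements
        # The username xpath follows the description in the text; the other two are assumptions.
        username_input = self.driver.find_element_by_xpath(
            "//input[@name='session[username_or_email]']")
        pswd_input = self.driver.find_element_by_xpath("//input[@name='session[password]']")
        username_input.send_keys(username)
        pswd_input.send_keys(password)
        login = self.driver.find_element_by_xpath("//div[@role='button']//span[text()='Log in']")
        login.click()

    def __to_number(self, data):
        # Turn strings like '1.9K' into 1900 and '2M' into 2000000, as described above.
        if not data:
            return 0
        multiplier = 1
        if data.endswith('K'):
            multiplier, data = 1_000, data[:-1]
        elif data.endswith('M'):
            multiplier, data = 1_000_000, data[:-1]
        return int(float(data.replace(',', '')) * multiplier)

    def __scroll(self):
        # Scroll to the bottom so that fresh tweets are loaded into the page.
        self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        sleep(2)

    def __scrap_tweets(self):
        # Placeholder xpath: inspect the feed and replace it with one that matches tweets.
        tweets = self.driver.find_elements_by_xpath("//article")
        for tweet in tweets:
            try:
                # ...extract text, comments, retweets and likes here,
                # converting the counts with self.__to_number(...)
                pass
            except StaleElementReferenceException:
                self.__scroll()

# main.py — possible usage, with credentials kept in a separate file as recommended:
# from credentials import USERNAME, PASSWORD
# TBot(USERNAME, PASSWORD)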
https://medium.com/better-programming/the-complete-data-mining-pipeline-1f661e30d94f
['Pablo T. Campos']
2020-05-07 07:41:24.602000+00:00
['Python', 'Data Science', 'Postgresql', 'Selenium', 'Programming']
ICOVO Publishes Its Expanded Vision for the Decentralized Fundraising Platform (DAF)
ICOVO We give shapes to possibilities on Ethereum.
https://medium.com/icovo/icovo-%E5%88%86%E6%95%A3%E5%9E%8B%E8%B3%87%E9%87%91%E8%AA%BF%E9%81%94%E3%83%97%E3%83%A9%E3%83%83%E3%83%88%E3%83%95%E3%82%A9%E3%83%BC%E3%83%A0-daf-%E3%81%AE%E6%8B%A1%E5%BC%B5%E3%83%93%E3%82%B8%E3%83%A7%E3%83%B3%E3%82%92%E5%85%AC%E9%96%8B-7c75dc92f1af
['Icovo Ag']
2019-02-10 09:09:27.317000+00:00
['ICO', 'Japanese', 'Startup', 'Blockchain', 'Ethereum']
Conquer The Fear Of The Unknown.
Photo by Jimmy Conover on Unsplash At times we assume that those with skin as thick as an elephant's can hope to sail through life unscathed by self-doubt and bouts of depression, but the truth is that we all reach that point at some stage in life. There is a time when a realm of deep personal power leads an individual to conduct their own education, explore inspiration, shape their environment and share their adventure with others. While work fills a very large part of our lives, those with a great taste for intuitive understanding and creation often strive for perfection in crafting their work. Humanly, perfection is perceived as a positive thing. Even saying you have perfectionistic tendencies can come across as a great trait, even though internally you feel that coldness and darkness have taken over. We think that sacrificing quality time with friends and family, or a good rest when we are tired and exhausted, will help us concentrate and work perfectly on that project we have been thinking about, but the truth is that neither increases our productivity nor puts us in a mental position that will help us cope with what's to come. It leads to getting frustrated, beating yourself up for past failures, or feeling like you can't live up to your ambitions and expectations, which leaves you in peril.
https://medium.com/datadriveninvestor/conquer-the-fear-of-the-unknown-9094736c4cf1
['Ntwari Moise']
2020-06-29 11:22:36.911000+00:00
['Social', 'Work Life Balance', 'Fear', 'Entrepreneurship', 'Life']
The Imbecile
The Imbecile A story of ideological possession The Young Joseph Stalin I first met the mild mannered Poe at the bus-stop near the University where we both worked. We had a lot in common. He was a teacher and a writer — an intellectual of sorts. He was from a small town in Ireland and we talked about Irish writers like Joyce and Beckett. It turned out we were neighbours, and glad to meet and talk ideas, we went out to the pub for a few pints. I told him that I had been conceived in Dublin and had an Irish grandmother, and I liked to think that my Irish roots ran deep. We got along like a house on fire. In the dim light I saw Poe's true Irish colours — he could certainly drink me under the table. I first pegged Poe as a typical leftist when he started talking politics—which weren't really my thing. He talked about American imperialism, class struggle, the Palestinian occupation—the usual grab bag of leftist concerns. However, there was something different about Poe: he didn't look like the usual hipster leftist but more like an old school revolutionary, with his tweed cap, red tie, and suit jacket. The conversation took a bizarre turn when he began to talk about Joseph Stalin in glowing terms—Stalin the statesman, Stalin the poet, he called him. The 5 to 60 million corpses (take your pick) were merely imperialist lies, he told me. The Gulags were actually nice reform camps. Poe had some weird ideas—to understate the case. His version of history was quite different from anyone else's I knew—he lived in an alternative universe. The historical record didn't really exist in his world — all that was made up by the imperialists. He believed himself to be in possession of the truth, the real story. Where did he get his alternative information? I wondered. How could a smart guy who had read Michel Foucault and who spoke several languages really believe this stuff? What caused him to be engaged in such diabolical research? His whole world seemed quite intricate and uncanny to me, like a dark Alice in Wonderland. After a couple of pints we took the conversation over to his place, and he pulled out the Irish Whiskey. We talked late into the night, and I am ashamed to say I got totally wasted that night—in a way I hadn't been since I was a student. I'm sure I made a pretty sorry impression on his attractive young lawyer wife, when she came home and found her husband's new friend passed out on the couch. The wife was kind enough to drive me home, and I found out that she was not a 'revolutionary' like Poe but a seemingly ordinary woman. She was a bit concerned about her husband's revolutionary fervour. Therapy might be a good thing for Poe, she told me. They had a young boy who was four years old, and I imagine she was worried about his future. My memory of that night is a bit wobbly, but I do recall a dark and slightly comical feeling rising up in me when Poe talked about 'the true science of socialism'. Perhaps my puking in the toilet may have been more than just a reaction to the cheap whiskey. There was something unwholesome about Poe—a dark aura that surrounded him.
https://andrewpgsweeny.medium.com/the-imbecile-dd3718868688
['Andrew Sweeny']
2020-06-05 09:13:51.809000+00:00
['Literature', 'Ideology', 'Short Story', 'Psychology', 'Politics']
Your Weekly AI, Machine Learning and Data Science Recommended Articles (Dec 21)
Your Weekly AI, Machine Learning and Data Science Recommended Articles (Dec 21) Well-written and informative articles aimed at ML practitioners There is no shortage of AI-focused articles to read on Medium. AI enthusiasts and machine learning practitioners are spoilt for choice when seeking AI-related articles to read on the platform. This week's collection of recommended articles covers a range of AI topics such as NLP, AI predictions, audio processing, and more. Needless to say, each recommended article listed below provides some form of value to ML practitioners and AI enthusiasts. Happy reading.
https://towardsdatascience.com/your-weekly-ai-machine-learning-and-data-science-recommended-articles-dec-21-f50bbbb50970
['Richmond Alake']
2020-12-21 21:53:31.304000+00:00
['Machine Learning', 'Data Science', 'Technology', 'Artificial Intelligence', 'News']
Weekly update from Ubcoin: 23.07–02.08
2. The Ubcoin Market team is currently developing a complex AI system to 1) moderate content, 2) create personal recommendations, 3) predict the price of goods, 4) improve the quality of product images, and 5) identify suspicious ads and behaviour. The use of machine learning technologies improves the user experience and makes the use of the Ubcoin platform safe. Read more about this AI system here: https://medium.com/@ubcoin/five-applications-of-artificial-intelligence-on-the-ubcoin-marketplace-c83942b8a62e The Ubcoin Artificial Intelligence System is created under the supervision of the Chief AI Officer, Kirill Kosolapov. You can watch an interview with him here: https://www.youtube.com/watch?v=_HQ_lokV1Us You can test the real Ubcoin AI bot here: https://ubcoin.io/en/#universe
Project Marketing
1. The first version of the blockchain-based Ubcoin marketplace has been released. This big news was published by 24 media outlets in total, in various languages.
2. Fresh video reviews: USA, Cryptocurrency Investing https://www.youtube.com/watch?v=oaoA2PkWIpo USA, CryptoSid https://www.youtube.com/watch?v=D1NJrd4zvwQ&ab_channel=CryptoSID USA, Dushan Spalevich https://www.youtube.com/watch?v=XWHssg7t28A (interview with the CEO — Felix) Indonesia, Mz Vlogger https://www.youtube.com/watch?v=3RjkiqkcH2o Indonesia, Crypto Rick https://www.youtube.com/watch?v=ViB8s7-Qk60 Indonesia, Kendrick Kalim https://www.youtube.com/watch?v=j7_bo3WIve0 Indonesia, Aditya Prakarsa https://www.youtube.com/watch?v=0MNEqyUpdaE Indonesia, Fredazip https://www.youtube.com/watch?v=XWHssg7t28A
Tokensale updates
Ubcoin continues Bounty Payments. At the moment we have distributed payments for all Bounty programs (phase 1) except the Facebook campaign. We decided to distribute bounty tokens in two parts. The first part: earned between the beginning of the bounty program and June 30. Payments have commenced in batches. The second part: earned between June 30 and August 27. To be paid in September. The bounty program continues until August 27 (the end of the token sale). To participate in the bounty program, please join this chat: https://t.me/UbcoinBount
Miscellaneous
At the moment we are finishing negotiations with a Chinese blockchain project about a complex partnership. Public announcements are planned for next week.
https://medium.com/ubcoin-blog/weekly-update-from-ubcoin-23-07-02-08-f97265e6c865
['Ubcoin. Cryptocurrency Reimagined']
2018-09-24 16:09:14.292000+00:00
['Ubcoin', 'Ubcoin Product', 'Artificial Intelligence', 'Ethereum', 'Bitcoin']
A Waltz of Words
The day her mother left, Hazel wrung herself around her mother’s legs and cried. “Please don’t leave me. Please.” Her mother gently untangled herself, leaning on to her steel suitcase for support. She pulled the gnarled knob of the rosewood cabinet and found what she wanted in the messy key-strewn drawer. She cocked her head up and winked at her eight-year-old daughter. “Let me show you something.” Her mother’s skirt swished gently at her ankles as she made her way towards the staircase. She moved like water, flowing in graceful arcs. They pattered up the staircase, not stopping till they reached the attic. Hazel never knew what was in the attic, and she never asked. Her mother fumbled with the keys and flung open the door. The attic was huge. The air was stifling like a clenched fist, and musty with the smell of dogeared books. Rows of redwood burl bookshelves, powdered with dust, housed a vast collection of books — everything from children’s paperbacks to psychology tomes. Hazel stepped gingerly into the room, feeling the dust tickle her nose. Her bare feet crunched loose sheets of paper that laid scattered over the parquet floor. And that was where she was left, surrounded by a legacy of books, and cradling the hope of seeing her mother again. Her mother packed and left for a few weeks each time she was hit. But she always came back, with new books and toys for Hazel. After her mother left, Hazel spent countless days in the attic, travelling to faraway realms and seeking solace in books. When darkness swept into the room, she would jump to her feet, startled by the swift passing of time, and let the book in her hands collapse at her feet. She would feel the dull ache in her chest as she wound down the stairs, yearning to envelop her mother’s soft neck and inhale her familiar musky scent. Her mother always came back. But not this time. At the public library, located in the suburbs and nestled between sleepy retail stores, sixteen-year-old Hazel works as a part-time librarian. She would do anything to stay out of the house. Since her mother’s departure, things have never been the same. She dreads going home to an eerie silence punctured only by her father’s attempts at small talk, and she loathes her father for what he did. On a rainy Sunday morning, the library is so empty that Hazel can hear the thunder echoing through. Two librarians are out sick, so Hazel is the only staff in the library today. Behind the counter, she bends down, trying to yank open a stuck drawer, when a deep voice shatters the silence. “Excuse me?” Startled, she jerks up abruptly and finds herself looking straight into the blue eyes of a guy with tousled auburn hair and long sideburns. He runs his hands through his auburn hair and droplets of rain run down his arm. “Uh, I’m sorry. May I know where’s the computer area? I need to use a computer.” She slides the sign-in form across the counter. In a backhand scrawl, he prints his name, “Brandon”. He fills in the rest of the details and thanks her. From the corner of her eye, she can see his silhouette curving up the spiral staircase to the second floor. Around noon, the rain shows no sign of abating. Hazel, having completed her morning’s work, goes over to the “New Books” shelf and casts her eyes over the books. She runs her fingers down the taut spines, pausing when she reaches a book in the middle of the row. Ah, this one. She pulls the book out and feels the glossy cover slide over her cold palms. 
The book flutters open with a gentle crack and the pages are deliciously crisp. Her long brown hair spills over the pages. As she finishes the first chapter, she sees the moving of a shadow across the olive tiles. Brandon. She has no idea how long he has been watching her. He puts up his hand to wave goodbye and slips out the glass doors. Brandon keeps coming to the library over the next few days. He always has something to do — use the computer, borrow books, return books. He talks to Hazel, who just nods in response. One day, he asks Hazel to show him where a book is. At the W section, in between musty shelves, he asks her out. Their first date is at a bar downtown. Hazel does not talk. Brandon is fine with it. He seems happy to chatter on and let her listen. She learns that he is a year older than her and works as a retail assistant while training to be a professional dancer. “My mother is a dancer,” Hazel blurts out, one of the first few lines to come out of her dry mouth, and watches as Brandon brims with excitement. Her heart lurches. She wonders why she said “is”. She has no idea where her mother is, and what she is doing now. Later that night, he sends her home in his family’s rundown car. He turns to her as they reach her front gate. “You stay here? And it’s just you and your dad?” “Yeah,” she says, and leaves it there. She thinks about her father’s big, heavy steps echoing through the hollow house. She thinks about the dinners spent chewing through the stillness in the air. She thinks about how she came back from school one day to see the family photos wiped off the mantelpiece, and feeling shards of glass needling into her foot, jolting her to harsh reality.
https://medium.com/ardor-magazine/a-waltz-of-words-703f84dd6d6e
['Michelle Muses']
2019-12-10 03:24:07.489000+00:00
['Books', 'Relationships', 'Fiction', 'Travel', 'Short Story']
10 Women-Run Startup Founders You Should Know
Over the past few years, both the total number and overall percentage of female founders at GAN Startups have increased drastically. One data point that proves it: In 2014, only 17% of GAN Startups had a female member of the founding team. In 2017, that number increased to 46%. Meaning, 46% out of all the 1,700 founders who were in a GAN Accelerator in 2017 identified as female. There are a handfull of reasons I think this is happening: We’ve made an intentional effort to invite more accelerators into the GAN Community who are focused specifically on female- and minority-run startups (Rowad, Hillman, and VVM, for instance). Accelerators already in GAN for years are increasingly focused on female- and minority-run startups. We’ve seen a lot of strategic partnerships that are providing more opportunities, like the ones between Tampa Bay Wave and Nielsen or Start Co. and The Jump Fund. Though still a disproportionately small chunk of the pie, more women and minority founders are getting access and opportunity to run companies overall, thanks to funding from groups like Backstage and Kapor. And, more women and minorities are at the table to make funding decisions, like the inaugural class of VCs in the First Round Angel Track. The stories of women entrepreneurs are being shared, not only centering the voices of women running companies but building greater community between them. Like in this new magazine, Good Company, from the well-known blogger and entrepreneur who started Design*Sponge, Grace Bonney. It features stories of women and non-binary entrepreneurs at every stage of life. And, back in 2016, GAN made a five-year commitment, along with Obama’s White House Startup America Initiative, to see parity reached among women holding executive roles at accelerators and startups in the GAN Community. Based on the numbers we’re seeing, we’re getting close to hitting that goal. And we expect continued improvements in the years to come. For now, it’s encouraging to know that so many female founders are not only finding more of the support they need, but they’re thriving because of it (often requiring less funding to make even more revenue). Just to Name a Few So I wanted to hear some stories of GAN Startups who are building amazing companies, have a unique product, or have accomplished something that’s worth sharing — who all happen to be run by women. To do so, I reached out to a bunch of Managing Directors at GAN Accelerators and our startup engagement contacts at GAN Partners to hear some of their favorites. And I didn’t want to keep those stories to myself. Here’s what they said: Melanie Igwe Founder and COO of Ilerasoft Accelerator: Hillman Accelerator llerasoft is the brainchild of two passionate co-founders who both saw waste and inefficiency in one of the most cost-intensive areas of healthcare, capital planning, and budgeting. The founders of Ilerasoft agreed that — for true financial discipline to be actualized — there needed to be a standard. The Efficiency Score is that standard. The Efficiency Score, which is like a credit score for medical equipment, harnesses IoT/RTLS, along with 15 other unique data points, to provide financial metrics and recommendations for enhanced operations and to improve future purchasing decisions. Shuchi Yvas Founder of GuestBox Accelerator: Tampa Bay WaVE GuestBox is a tool to help hotels, vacation rental managers, and Airbnb hosts boost guest loyalty. They provide immense value to a host’s listing with curated amenities for them to welcome their guests. 
Each box comes with luxury items, a combination of toiletries, skincare items, and snacks. Products are natural, organic, healthy and welcoming. Many are by female-founded and female-led companies. Going forward, they plan to showcase more products from up-and-coming female and minority entrepreneurs from across the US. Lindsey Tropf Founder and CEO of Immersed Games Accelerator: Tampa Bay WaVE Immersed Games is an EdTech company with an audacious vision for how games can be used to empower student learning. They’re building an inspiration platform where students can spark their love of learning and be empowered to reach their full potential. One day, while playing World of Warcraft, Lindsey turned to her husband Ryan and asked where to find something. When he rattled off the right character, in the right city, on the right continent in the game, she realized how much we all learn, simply through the act of play. But, while she knew so much, that information didn’t really matter outside of the game world. So, enamored with the concept, she Lindsey went through a doctoral program to study learning theory, where she increasingly realized that online game could prove to be an incredible platform for learning and eventually created Immersed Games. Yasmine Mustafa Founder at ROAR for Good Program: Comcast NBCUniversal LIFT Labs Every day, women face the threat of harassment, assault, and violence. So Yasmine and her team set off to do something about it. What started off as a solo journey of a lifetime turned into a global mission to impact generations. After a trip to South America, Yasmine returned to Philadelphia and developed a wearable product that pairs with users’ phones, allowing them to share their location with trusted networks via mobile text alerts. Kristian Kimbro Rickard CEO of doyenne360 Accelerator: Start Co. Kristian’s company, doyenne360, is working to increase access and understanding of STEM by deploying IoT, analysts, and workforce training solutions. Beginning in the education space, doyenne360 is striving to close the technology gap in Tennessee (USA). Stephanie Cummings CEO of Please Assist Me Accelerator: Start Co. Stephanie started Please Assist Me with the vision to create a work-life balance for the working professional. Her vision is that personal assistants are not something just for the power brokers of the world, but a resource for working single parents, injured veterans, the elderly, and anyone else who you might not traditionally associate with having a personal assistant. Felicity Conrad CEO and Founder of Paladin Programs: Comcast NBCUniversal LIFT Labs & Techstars Chicago Paladin takes the busy work out of pro bono so that your team can focus on making an impact in your community. Their platform eliminates the need for hand-crafted emails to track down individuals who want to get involved. Najma Ghuloom Co-Founder of Majra Accelerator: SeedFuel Rowad Majra is a recruitment platform with a focus on matching based on culture and personality fit. What separates Majra from various other employment channels is its emphasis on the personal aspect of recruitment processes — often overlooked by other services. Be it enabling job seekers to highlight their personalities beyond their professional work or encouraging companies to share their culture and environment, their objective is to connect young people with purposeful careers. 
Akshaya Shanmugam Founder of Lumme Accelerator: Valley Venture Mentors Akshaya developed a platform combining wearable technology, machine learning, and behavioral psychology for smokers who want to quit with broader applications for addiction treatment. Wearable devices sense the smoker’s movements and predict a likely relapse. A notification tells them not to light up and suggests helpful interventions and alternatives to smoking — about six minutes before cravings hit with 90%+ accuracy in recent trials. Akshaya and Lumme have raised $1.7 million in non-dilutive SBIR funding and she was recently named a Forbes 30 under 30. Laurel Wider Founder of Wonder Crew Accelerator: Valley Venture Mentors Laurel shared her outlook on the VVM process: “I pitched a crazy idea, dolls for boys, and this community met me with sheer support. VVM helped me actualize my concept into a successful business. The grant that I won was instrumental in bridging a funding gap until I found the right partner. Prior to Wonder Crew, I didn’t know a thing about business. One of the coolest things about VVM is that no experience is required.” Now, her dolls are on the shelves of every Walmart and Target store in the country. She recently won Doll of the Year at the TOTY Awards. And is now a sought-after thought leader on gender, toys, and early education. Check out this recent NYTimes article for more on Laurel and Wonder Crew. Photo Credit: CreateHER Stock
https://patrickriley.medium.com/10-women-run-startup-founders-you-should-know-358b1eb97455
['Patrick Riley']
2018-06-28 20:26:32.279000+00:00
['Community', 'Startup', 'Founders', 'The Future Is Female']
I Am Trying To Break Your Heart.
I Am Trying To Break Your Heart. Why life is worth living. I once knew this woman … I believe her name was Camille. She was a radiant presence in a room, a boundless fireball of beauty and brilliance, who shined the way the new phone does once you peel the plastic off the screen and start it up for the first time. Anyway, Camille was a singer-songwriter who would just levitate on stage — all humility and grace — and belt out these sun-soaked melodies with earnest lyrics of longing, lust and chocolate. We liked her just fine. She’d heard I’d traveled down the road that inevitably leads the ego in all of us to the stage. And we ping-ponged some forgettable dialogue before she asked me, sort of in context but sort of shoehorned, “How can I get better?” Now, listen, I’m not about to tell people to follow my path in music. For that career, I’m still trying to fight with family members on reconsidering the “Do Not Resuscitate” checkbox. But, there’s a common current underneath all art, and under the overlap in the Venn diagram where it intersects with the path your life takes.
https://medium.com/indian-thoughts/i-am-trying-to-break-your-heart-a4ecb9f7d466
['John Gorman']
2018-11-01 08:32:24.653000+00:00
['Karma', 'Relationships', 'Love', 'Psychology', 'This Happened To Me']
Big Data with R !!
We often hear that R is sluggish with big data. Here we are talking about terabytes or petabytes, and one of R's biggest limitations is that the data has to fit within RAM. To work around this, we use out-of-memory processing, which handles the data in chunks rather than processing it all at once. We use the two packages shown below.

```r
# install.packages("ff")
library(ff)
# install.packages("ffbase")
library(ffbase)
```

The ff package chunks the data and stores it as encoded raw flat files on the hard disk, while still giving you fast access through familiar functions. Its data structure, the ff data frame, provides a mapping to the dataset so that only the partitions currently in use are held in RAM. As an example of how chunking works: for a 2GB file, it takes about 460 seconds to read the data, producing 1 ff data frame of about 515 KB and 28 ff data files of 50 MB each, therefore roughly 1.37GB on disk. To perform basic merging, finding duplicates and missing values, creating subsets, and so on, we use the ffbase package. We can also perform clustering, regression, and classification directly on the ff objects. Let's look at some R code for the operations described above.

```r
# Uploading from flat files
system("mkdir ffdf")
options(fftempdir = "./ffdf")
system.time(fli.ff <- read.table.ffdf(file="flights.txt", sep=",", VERBOSE=TRUE, header=TRUE, colClasses=NA))
system.time(airln.ff <- read.csv.ffdf(file="airline.csv", VERBOSE=TRUE, header=TRUE, colClasses=NA))

# Merging the datasets
flights.data.ff = merge.ffdf(fli.ff, airln.ff, by="Airline_id")
```

Subsetting

```r
# Subset
subset.ffdf(flights.data.ff, CANCELLED == 1, select = c(Flight_date, Airline_id, Ori_city, Ori_state, Dest_city, Dest_state, Cancellation))
```

Descriptive statistics

```r
# Descriptive statistics
mean(flights.data.ff$DISTANCE)
quantile(flights.data.ff$DISTANCE)
range(flights.data.ff$DISTANCE)
```

Regression with biglm (Dataset: Chronic Kidney Disease Dataset by the University of California Irvine at http://archive.ics.uci.edu/ml/index.html)

```r
# Regression requires installation of the biglm package
library(ffbase)
library(biglm)
model1 = bigglm.ffdf(class ~ age + bp + bgr + bu + rbcc + wbcc + hemo, data = ckd.ff, family=binomial(link = "logit"), na.action = na.exclude)
model1
summary(model1)
# Refining of the model can be done according to the significance levels obtained in model1
```

Linear regression with biglm and bigmemory

```r
# Regression with the bigmemory and biglm packages
library(biglm)
library(bigmemory)      # provides read.big.matrix
library(biganalytics)   # provides bigglm.big.matrix
ckd.mat = read.big.matrix("ckd.csv", header = TRUE, sep = ",", type = "double", backingfile = "ckd.bin", descriptorfile = "ckd.desc")
regression = bigglm.big.matrix(class ~ bgr + hemo + age, data = ckd.mat, fc = c("bgr", "hemo"))
summary(regression)
```

So far we have only talked about storage; when we need to process or analyze the data, we also need parallel computing. A simple way to picture it: imagine counting how often a particular color appears in the frames of a YouTube video; a mapper splits the input into pieces that are processed independently, and the results are then reduced into key-value pairs. For a fast and scalable platform for parallel and big data processing in R, we can use H2O. I hope you find this article helpful for working with big data in R. Thank you for reading. “Data is the new science, Big data holds the answer” - Pat Gelsinger
https://medium.com/analytics-vidhya/big-data-with-r-3cea0549cfba
['Apoorva Jain']
2020-07-06 16:48:18.559000+00:00
['Data Science', 'Parallel Computing', 'Rstudio', 'Big Data']
How Tokopedia modernized its data warehouse and analytics processes with BigQuery and Cloud Dataflow
At Tokopedia, our aim is to help individuals and business owners across Indonesia open and manage their own online stores, while providing a safer, more convenient online shopping experience for users. We’re excited that this aim has made Tokopedia the leading online marketplace in Indonesia, and it’s also generated a lot of data, in a multitude of formats! As Data Warehouse Lead, my job is, first, to lead the optimization and migration of our existing data warehouse to Google Cloud Platform and, second, to enhance our analytic data architecture and governance. Our data journey began with a free edition of a relational database management system as our first database. After a period of significant growth, we migrated to PostgreSQL to increase both size and performance. As our growth skyrocketed, we came to yet another decision point — we found we were using a lot of resources and personnel just to clean the database in order to free up capacity for the following day. As we thought about our next steps, system performance guided our decision-making process. In our previous system, some complex queries could run for more than five hours! Not only that, but our users were requiring an increasing number of reports — to the point where new reports were being requested on a daily basis, and we were producing roughly ten times more reports than six months earlier. We’d end up with duplicate (or similar) tables and reports, which could hurt performance. Our old architecture Our “traditional” data warehouse architecture was straightforward. From various sources, data was loaded into the data warehouse and then directly mapped to the visualization tool. In 2017, when we began our migration project, we identified several issues we needed to overcome: Scalability: The size of our data warehouse had grown rapidly, as had the number of sources we were ingesting. This growth to over five times the original size had stretched our data warehouse’s capabilities. Integration: A large amount of our data existed within Google BigQuery, but we struggled to ingest it into PostgreSQL. We had to extract data via a CSV and load it in. Performance: Not only had our data grown, but so had the need for multi-tenancy. With user growth of about 10x, our data warehouse performance had slowed to an unacceptable rate. Technology: We needed several tools to load data, and we ended up with non-integrated data ingestion tools that were difficult to maintain. We also wanted to work with unstructured data, and couldn’t. Data silos meant we lacked an integrated data analytics platform. With these issues in mind, we came to the conclusion that Tokopedia needed a comprehensive big data analytics platform. We wanted a solution for which there weren’t limitations in terms of data structure, storage, or performance, all alongside reasonable maintenance requirements. After a broad search, we decided to use Google BigQuery as our next-generation analytics engine. Results of implementing BigQuery From an architecture perspective, we use Google BigQuery in combination with Cloud Dataflow for data processing and Apache Airflow for scheduling. Our needs dictate that some jobs run once a day through batch processing, while others are done in real time. The unique combination of both batch and streaming capabilities provided by Dataflow provides a simplicity we haven’t seen in solutions from other vendors. We’ve been able to integrate the data into just one schema in a staging layer.
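To illustrate the kind of ingestion the Dataflow layer handles, here is a minimal sketch of a batch pipeline that reads an exported CSV and loads it into a BigQuery staging table. Tokopedia's team writes its pipelines in Scala with the Scio library, so this Python (Apache Beam SDK) version is only an illustration of the idea, and the project, bucket, and table names are hypothetical placeholders.

```python
# Hypothetical batch pipeline: CSV export -> BigQuery staging table.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_order(line):
    """Turn one CSV line ("order_id,amount") into a BigQuery row dict."""
    order_id, amount = line.split(",")
    return {"order_id": order_id, "amount": float(amount)}


options = PipelineOptions(
    runner="DataflowRunner",      # or "DirectRunner" for local testing
    project="my-project",         # hypothetical project id
    region="asia-southeast1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadExport" >> beam.io.ReadFromText("gs://my-bucket/exports/orders.csv",
                                               skip_header_lines=1)
        | "ParseRows" >> beam.Map(parse_order)
        | "LoadStaging" >> beam.io.WriteToBigQuery(
            "my-project:staging.orders",
            schema="order_id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_TRUNCATE,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```

The same Beam programming model also supports streaming sources, which is what makes a single Dataflow layer work for both the daily batch jobs and the real-time jobs mentioned above.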
Big data architecture Our data warehouse has two layers: The staging layer holds data from all sources, such as PostgreSQL, Google Analytics, Google Sheets, and MySQL. It is a one-to-one load from each source, while Google Sheets uses its seamless integration with Google BigQuery. Our team uses Scala with the Scio library to run the Dataflow jobs and to load data from the sources into BigQuery staging. The data warehouse layer is created by transforming several source tables: joining, aggregating, and filtering. We use SQL to write the transformations from BigQuery staging to the data warehouse. After the tables are ready, the business intelligence team creates the reporting on top of them. Also, in this layer, we prepare denormalized tables so our data analyst team can perform analysis through Google BigQuery. We are using Apache Airflow to build an end-to-end scheduler and also to manage job dependencies (a minimal scheduling sketch appears at the end of this article). Several jobs run daily, weekly, monthly, or even yearly, depending on the table itself. We encountered some challenges during the migration process, including: Lack of a data warehouse team Limited BigQuery and Google Cloud Platform experience Minimal coding experience, more familiarity with SQL than data engineering Huge dataset to be migrated To resolve these issues, our team took the following actions: Formed an “official” data warehouse team Leveraged the expertise of the Google Cloud team Undertook online training and internal sharing sessions Collaborated with the data engineering team to leverage their programming skills Applied an agile approach during the migration process Documented all steps Having completed our first phase of migration, we have developed what we think of as a set of migration best practices: Research helped us define which programming language would be used to load the data into Google BigQuery. In our case, we had several options: Python, Scala, Java, Talend, and Apache Beam with the Java or Python SDK. In addition to research, we also ran benchmarks to determine which programming language supported our performance and functionality needs for development activities, in both development and production environments. We benchmarked not only from a technology perspective, but also from a skills and experience perspective. Being agile is important, not only in terms of a project management perspective, but also for team dynamics. There were times when we needed to change our approach, because it was impacting current development. Also, data warehouse migration is a huge task. To make it achievable, we had to split the development into several phases. By assigning a dedicated team to do the migration, we could manage the tasks properly, with homegrown knowledge-sharing inside the team. Defining a standard is important. We have a lot of developers, so review processes would be impossible without standardization. Documentation is key. Since we were migrating from PostgreSQL, we needed to translate PostgreSQL-specific functions to BigQuery functions, such as age(), to_hex(), ntile(), generate_series(), etc. We are documenting these mappings collaboratively so that we can minimize the time it takes other development teammates to search for answers. By implementing these approaches, as of today we’ve reached: Development of up to 500 analytics jobs in 2 months, with typically more than 100 jobs scheduled daily.
And we’ve done all of this with a team of six dedicated engineers. We have also experimented with different programming languages, including Python, Java, and Scala, and seen an overall improvement in our team’s analytical experience of using Google Cloud Platform, especially thanks to Dataflow and BigQuery. We hope this post provided some insight into how we approached our data warehouse migration challenges. In an upcoming blog post, we’ll talk about our next-generation stream-processing data pipeline, which we’ll be using to power numerous real-time use cases. Stay tuned for more on that topic. As posted on the Google Blog: https://cloud.google.com/blog/big-data/2018/03/how-tokopedia-modernized-its-data-warehouse-and-analytics-processes-with-bigquery-and-cloud-dataflow
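As promised above, here is a minimal sketch of how one of the daily staging-to-warehouse jobs could be expressed as an Apache Airflow DAG. It only illustrates the scheduling pattern described in this post; the DAG id, table names, and SQL are hypothetical, not Tokopedia's actual jobs, and the operator-free approach (a PythonOperator calling the BigQuery client) is just one of several ways to run the query.

```python
# Hypothetical Airflow DAG: one daily job that transforms a BigQuery
# staging table into a denormalized warehouse table.
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator  # Airflow 1.x import path
from google.cloud import bigquery

# Hypothetical staging-to-warehouse transformation.
TRANSFORM_SQL = """
CREATE OR REPLACE TABLE `my_project.dwh.daily_orders` AS
SELECT o.order_id, o.order_date, c.customer_city, SUM(o.amount) AS total_amount
FROM `my_project.staging.orders` AS o
JOIN `my_project.staging.customers` AS c USING (customer_id)
GROUP BY o.order_id, o.order_date, c.customer_city
"""

def build_warehouse_table():
    """Run the transformation in BigQuery and wait for it to finish."""
    client = bigquery.Client()
    client.query(TRANSFORM_SQL).result()

with DAG(
    dag_id="staging_to_warehouse_daily",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",   # other tables would use weekly/monthly/yearly schedules
    catchup=False,
) as dag:
    PythonOperator(
        task_id="build_daily_orders",
        python_callable=build_warehouse_table,
    )
```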
https://medium.com/tokopedia-data/how-tokopedia-modernized-its-data-warehouse-and-analytics-processes-with-bigquery-and-cloud-afe34b31a2ea
['Maria Tjahjadi']
2019-04-16 00:58:24.609000+00:00
['Google Cloud Platform', 'Business Intelligence', 'Tokopedia', 'Data Warehouse', 'Business Analytics']
Designing the modern web
You can’t make a website do everything The internet is quite vast these days. Billions of sites doing billions of different things for different reasons. Some clients and some designers love cataloging all of the specific design, content, UI, and UX moments they see on the web with the intention of having these serve as a kind of reference toolbox for future site experiences. This is a great impulse. To add to this, sites such as Dribbble are also great for inspiration and have become a space to showcase idealized UX animation approaches that can make sites and apps look way cooler than they can be in reality. Much of this inspiration finds its way into project mood boards and can be incredibly helpful when you are working to align on creative approaches and to get people excited. For all of this great inspiration and reference, we know that you can’t make a website do everything you see on the web all at once. It would be like shopping for a car by going to every dealership and gluing all of your favorite features together to make your own car. If you do this, your car will certainly not run well and it will certainly look terrible. The same goes for websites. Reverse engineering cool web ideas you see out in the world is not inherently bad. What is bad is assuming that combining many ideas you like is the right way to solve the digital problem at hand. Ideally we develop web products and experiences by coming from a clear and informed strategic foundation with a firm understanding of business needs and technological limitations so that we do not create experiences that do not solve the intended problem, that are difficult to build, and are impossible to maintain. People who are not technical tend to focus on what they can understand, and that is the surface. As an example, I have been in situations where I have had to convince a client that we were not creating a web experience optimized for their personal iPad. We had to educate them on how the behavior of your finger navigating a site is inherently different from navigating a site on a person’s desktop using a mouse. For them, apps and sites were one and the same thing and they were annoyed that their site could not do everything a native app could when they were chilling at the office on their iPad. This probably seems rather silly and naive to people who design and build for the web, but for many these basic misunderstandings and false assumptions persist. Due to this lack of knowledge we must acknowledge that without proactive education and context, some level of dysfunction will remain within any project. Providing ongoing context and client education, and at the very least understanding how savvy your clients are, is crucial. To reduce the danger of trying to make a UX hodge-podge “Dribbble soup” of a site you have to build a process that aligns strategic and business goals with design and experience solutions that are contemporary and cool but are also focused and realistic. This requires lots of talking, rationalizing and education but it is a necessity to ensure that our web-collecting eyes don’t get larger than our performant and compliant digital stomach. Form and content are not the same thing A website is only as good as the information within it. Over the last 20 years, this basic concept still remains the most challenging for clients to understand. Some folks still have a hard time grasping the idea that no amount of good design can improve bad content.
If you put a TV show on a more expensive TV does the show become narratively better? If you put a baby behind the wheel of a Lamborghini do they become an expert driver? You get the idea. Developing well-crafted information — language, photography, video, illustrations, audio and the like — is the necessary baseline for a great site experience. Full stop. The push and pull between form and content is as old as the need for clear communication. Designing for the web is one of those unique mediums where you can actually develop formal systems that can serve as a forcing function to improve content creation. Creating modular content patterns that leverage the natural information consumption behaviors of the web can be extremely useful when a client is unsure how to optimize their storytelling. Building a formal scaffolding that also serves as an ideal narrative guide can provide more rigorous parameters around how and what to write in a way that has the most impact online and especially on mobile devices. Brands like Notion and Apple understand how digital craft, editorial economy, visual impact and crisp communication can all come together to provide a great online narrative experience. To do this right takes discipline as well as a solid understanding of how developing replicable narrative systems that seamlessly blend the form and content of a site can help keep communication sharp and beautiful. Listen to and respect your developers The web would not function as a contemporary communication medium unless really smart developers cared deeply about their work. There have never been enough really, really good developers on this planet. We always need more. As a designer I admit I am biased. I think we have too many “good” formal designers in the world but not enough amazing designers who also know how to work with developers in an integrated and nuanced way to make great experiences. Since the advent of mobile devices and the wholesale adoption of responsive design, we probably all agree that designing for the web has become really annoying. There seem to be too many moving parts! We can’t escape this but we also can’t ignore it either. We are at an inflection point where developers are really co-designers because a smart development team has a better understanding of the technological dependencies and constraints that fundamentally inform the sandbox an amazing designer has to operate within. In the past, people in business, management and creative roles did not deeply consider all of the technology infrastructure work required before a project begins. Again, it is human nature to focus on the things you understand and ignore the things you don’t. This kind of blissful ignorance is rapidly fading as creative and technology tools and processes merge. In the not-too-distant future design teams and development teams will blend in a way that will produce better work faster. I’m not talking about a world of all-night design and development hackathons. In contrast, I believe that the developer mindset is rubbing off on the design mindset (and vice versa) and that we will find more and more common ground and mutual admiration happening as these two worlds continue to overlap and inform each other. Embrace best practices, documentation and accessibility Personally, I’ve always had a kind of “Life Hack” philosophy when it comes to designing for the web. I have poorly rigged Lottie animations into sites, stolen heaps of Javascript, butchered WordPress Themes, you name it.
As a designer, I felt I should do whatever I thought was required to sell in an idea. This approach is fine if you plan to throw it all away later. But in this day and age, you have to lead by doing things the right way even if it might be a little harder or a little slower. In the fast-paced world of the web, the worst thing you can do is half-ass it now because it will come back to haunt you later. Unraveling someone else’s undocumented code base is a horrible thing to have to do. When it comes to the web, a stitch in time saves nine. To this end, establishing a digital philosophy and clear tactics around handling best practices is an absolute necessity. A big part of this is about clear documentation. Building sites can get gnarly and the only way to not have this information be in the head of just one person is to write it all down. Your technology stack should be created in a way that someone not as smart as you can understand it and spin it up locally. This is because no two developers work in the same way and, given the rapid nature of what we do, you may need to onboard someone new who is also less experienced. When it comes to developing for the web, the more you can think ahead and future-proof, the better. Compliance is also something you need to understand and have a stance on. Currently, minimum requirements for ADA compliance are still open to interpretation, but you never know when someone might have an issue, so it is better to know what your baseline is so that you can clearly articulate how you are handling WCAG 2.0. The web sucks. Long live the web! Designing and building for the web will always be a fluid evolutionary vocation. Nothing really stays the same and new things are always on the horizon. This is what I love most about designing for the web. It keeps us learning and growing but it can also be extremely frustrating. The one thing I always try to remind clients and teams is that it is just a website. We are not building a suspension bridge or doing brain surgery. We can push code, make fixes and try new things. This ability to iterate is what makes the web a fun medium to work in if you have the right temperament. I hate the web and I love the web as well. It’s a proverbial oily watermelon floating in the deep end that we are always trying to grab ahold of. Just as soon as we have a handle on it, it slips through our fingers. As long as we know this and embrace uncertainty while reaching for perfection, then the web will be an amazing place to push creativity and technology for decades to come. I would like to give special thanks to our Technical Director at Athletics, Ross Luebe. He helped vet this article and gave me many helpful suggestions. As Technical Director, Ross is responsible for interfacing with clients, designers and developers and serves as a conduit and mediator to ensure that our teams are speaking the same language. Ross has the rare advantage of having a Master’s Degree in Graphic Design while having also gained very deep technology expertise over the last decade. This allows him to understand the common ground between design and technology and what needs to happen to get the best out of both.
https://uxdesign.cc/designing-the-modern-web-f62860d850f2
['Matt Owens']
2020-02-26 15:26:21.170000+00:00
['Prototyping', 'Design', 'Web Design', 'UX', 'Programming']
Coming Out of the Psychic Closet
Nearly every day for the past few months, sometimes just for a brief moment, I have thought about putting down my spiritual writing, deleting my social media accounts, canceling my workshops, and going back to practicing law full-time. Sure, I could blame quarantine, the lack of social connection, the drudgery of the election and the erosion of our democracy, or the never-ending claims of fraud. In reality, I’ve been spending this time facing myself and the hard truth that despite all of my inner work, the countless levels of healing I’ve experienced, and the wondrous expansion of consciousness that I’ve been graced with, I still had to accept who I now am: a psychic channel. Life as a Psychic Channel Can Be Hard Why? Living as a spiritual author and intuitive guide is a rough gig. Working as a lawyer was easier in many ways. Sure, my memories of how demanding the law was have faded since I left my firm two years ago, but I enjoyed noodling through legal puzzles and collaborating with colleagues on tough questions. I also love what I do now — and I honor that it’s a real gift. It’s indescribably beautiful to connect with someone who wants to heal their past and express their soul. When I work as an intuitive channel, I receive clairaudient messages about a person’s major blocks, patterns, and karmic challenges. The words come through like blissful raindrops on my head, filling me with insights I could not imagine. Having been a lawyer, though, I have often wrestled with how all of that sounds. It sounds crazy — a loaded word, for sure — to a lot of people, some of whom have told me my story is hard to swallow. I don’t question these experiences (they were real for me) or worry that my mind isn’t all there (I have never been more lucid). In fact, this angst about my professional path aside, I have never felt more emotionally resilient or mentally healthy. But being an intuitive and author has meant abandoning a certain stable, conventional path. You might think that, as a gay man, I’d be used to living outside the norm. I was a sophomore in college when I realized that I was gay, and my entire life was shaken up. Within 3 weeks, I had come out to everyone, and was suddenly a proud gay man. Being gay isn’t as challenging as it once was, thankfully. Frankly, it’s harder to say that I’m a psychic channel. People often look askance when you do. Indeed, I’ve grown weary of friends who ghost or people who look askance because they find my path too weird. It does remind me of when I came out as gay that certain friends slowly faded from my life. But it’s especially tough when the judgment comes from those who are deeply entranced with the biggest names in the spiritual world and post quotes from Abraham-Hicks or Eckhart Tolle. Yes, they are the real deal, but you? The guy who was once a lawyer, and before that a professor of Spanish literature? Yeah, not you. Oprah never included you in her book club. It was also really easy to say what I did at a dinner party or meeting someone for the first time when I was a lawyer. I now gauge my audience’s reaction. Is this an audience where it’s easier to say “meditation teacher”? 
Because if I say “conscious channel,” it always begs the question, “Who or what are you channeling?”, which means I have to get into the whole enchilada of how I had a kundalini awakening (Wait, what’s kundalini?), after studying with my teacher, whose gift is to transmit Light to others (Wait, what’s “transmitting light”?), and then a very loving and powerful voice filled my head and said, “We’re going to write, and we’re going to write quickly,” and I said, “Who are you?”, and the answer was “We are the Council of Light” (Wait, Council of what?! No, that is just crazy, dude.). I might sound like the world’s most reluctant psychic channel, but I love channeling. The experience itself is a bliss fest. The first six months that I channeled, I downloaded a complete set of 3 books — a trilogy that mapped out human consciousness, relationships, and social transformation. I’ve published all three, and more books are on the way. The Universe Keeps Reminding Me of My Path The part of me that’s a lawyer somehow still holds on. I remember what it’s like to live a more conventional life, with considerable success. So there are days, like today, when I wake up and wonder: Is this what this path entails? Is the best use of my time posting memes and doing Instagram Lives? Am I going to be facing an ever-shrinking social circle as people decide they don’t want to talk about forgiveness or chakras? I’m not alone in this struggle. Most teachers, authors, and psychics I know deal with some version of it. They’re riddled with doubt or dealing with backlash from disbelievers. Besides the few who are independently wealthy, most teachers I know are supported by a spouse or scraping by — in other words, they’re not making money at this, have no savings, and aren’t prepared for any kind of retirement. Some are living transient lives, even living out of their cars, and just getting by earning fees from session to session, workshop to workshop. It’s especially difficult as I watch my former classmates carve out high-powered careers, getting tenure at prestigious law schools or arguing cases before the Supreme Court, working at the highest levels of the DOJ or making partner at their firm. Every time I’m on the verge of giving up, though, something happens. A new client shows up. Someone tells me that reading Bending Time is the one book they carry around. Someone posts something on Insta telling me that Seeds of Light sits on their nightstand. A young law student writes me to say how grateful she was to see that this is where I ended up, as the law hardens her compassionate side. Out of the blue, my most recent book wins an award for books dedicated to world peace. A new client shows up, telling me that a piece they’ve read on Medium has inspired new insights. Some event or exchange pulls me forward, reminding me of my calling, even if it doesn’t quite feel like what I imagined a “calling” would. It feels more like a “prodding.” Something keeps telling me that I’m on track. So I keep listening, and asking, How am I supposed to serve? I didn’t pick this path. I didn’t ask for this path. It chose me, and I’m along for the ride. The Universe Has Taken Care of Me Before Throughout all of this, I have learned how much the Divine supports me when I surrender and allow life to move in and through me on a timetable not of my choosing. A decade ago, just as I was about to have my great spiritual awakening, life hit me really hard for a couple of years. The blows kept landing, pummeling me. 
I was still recovering from the end of an 11-year relationship while working long hours as a law clerk for a judge. Then my cat had died from diabetes one month, and the very next I learned that my father, from whom I was estranged, had passed away. A few months later, my living situation deteriorated when I moved into an apartment with a roommate with whom I was just not compatible. And just when I thought things couldn’t get worse, an existential crisis hit me. I had been offered a prestigious fellowship at Harvard Law School. After years of torment, it seemed like my life was finally turning a corner. But in the weeks leading up to the offer, I would wake up every morning completely nauseated. My stomach was churning, filled with the sense that this fellowship was the worst choice I could ever make. I had no idea why. I wrestled with that decision night and day. I just couldn’t accept it. But it was Harvard Law School. I tried to accept it, and my stomach did so many turns, I was ready to vomit. Literally, my gut was screaming at me to let go of this. The decision left me a nervous wreck. What would I do instead? I didn’t want to practice law. I was a mess, wracked with anxiety about the direction of my life. I asked myself this question, day after day. I felt this sudden urge to go to the Kripalu website. I had never been to Kripalu. But I followed this intuitive urge, and as I opened the page, I saw a description of a teacher, Mirabai Devi, whose gift was to “transmit Divine light.” A lightbulb went off in my head. At any other point in my life, I would have rejected this as rubbish, but at this moment, I knew it to be true. She became my teacher. It turns out that working with my teacher would be one of the greatest experiences of my life. And even though my recovery from my anxiety did not happen overnight, I ultimately recovered and landed at a law firm at a job that would turn out to be one of the most rewarding of my professional life. I spent years happily working as a lawyer while also training with my teacher. Then I got the message: It was time to go. I had to step away to write and deepen my psychic and healing skills full-time. That decision to leave was divinely inspired, with no angst and no churning stomach. It has been a wild and tumultuous 2+ years of new ways of writing, teaching, and being of service. I know that this is not a moment of failure. Like the two years where life pummeled me, these are years of incubation. The Divine is guiding me. I know to allow myself to be with this uncertainty, however heavy it might feel, around the question of how I am to serve. It might mean working solely as a psychic and healer, it might mean returning to the law in some way, or some combination of all of my skills. I know from experience that a new way of being is attempting to emerge. At some point, I’ll get a prompt in one direction or another. I can’t rush that process, however much I want to. Grace can never be forced.
https://medium.com/know-thyself-heal-thyself/coming-out-of-the-psychic-closet-244832e4d140
['Patrick Paul Garlinger']
2020-12-10 09:39:53.948000+00:00
['Spirituality', 'Identity', 'Life', 'Storytelling', 'Self Improvement']
How To Build A Successful AI PoC
Overview of an Artificial Intelligence System As an example, I will take a system which classifies documents. It answers “What kind of document is this?” with classes like “electric invoice” or “to-do list”. AI workflows consist of 5 steps: receiving the question: “What kind of document is this?” adding complementary data on the user or the context: “What type of documents does the user have?” using the data to answer the question: “Which type does this document belong to?” with “This is an energy invoice” storing the result: adding the new document to the database answering the client’s question: “This is an energy invoice” You can break this down into 3 tasks, or semantic blocks: Handling the client: receiving the question, making him wait… Example: an HTTP server Data conciliation: communication with the “company knowledge base” to add or receive relevant data. Example: communication with a database AI Block: the AI itself, which answers the question with a context. Example: expert system, SVM, neural networks… Answering the question “What kind of document is this?” You can find great tutorials on how to architect your server or your data conciliation layer on the web. The simplest solution for an AI PoC in Python is using Flask and a SQL database, but it highly depends on your needs and what you already have. Here is a tutorial on using Flask with SQLAlchemy. We are going to focus on designing the AI itself. Designing The AI Block AI tasks can involve multiple heterogeneous inputs. For example, the age and the location of a user, or a whole email discussion. AI outputs depend on the task: the question we want to answer. There are a lot of different tasks in AI. You can see some of the usual tasks in computer vision in the image below. Various computer vision tasks from a post about image segmentation Thinking of ways to build an AI seems complicated as soon as you venture out of the standardized inputs and tasks. To wrap my mind around the complexity of building an AI, I use a 3-step process. Step 1: Browsing the relevant inputs First, gather all the inputs you suspect are capable of answering the task at hand and select those that are self-sufficient in the majority of cases. When testing an AI idea, it’s easy to get greedy and think about solutions that include a lot of inputs: the location of the user may give me an insight into what their next e-mail will be, for example. The truth is: it’s just so easy to get lost in mixing various inputs with different meanings or natures and end up delivering nothing. Stick to simple, self-sufficient inputs when building your AI. Step 2: Vectorizing the data The second step is to preprocess those inputs to make them usable for various algorithms. … Read the full article on Sicara’s blog here
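As a concrete illustration of the vectorizing step, here is a minimal sketch of how the document-classification inputs could be turned into vectors and fed to a simple baseline model. The article stops before prescribing a library, so the use of scikit-learn here is an assumption, and the sample documents and labels are hypothetical toy data.

```python
# Minimal sketch: vectorize raw document text and train a baseline classifier
# for "What kind of document is this?".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy data; in a real PoC this would come from the data conciliation layer.
documents = [
    "Your electricity usage for March amounts to 84 kWh, total due 42 euros",
    "Buy groceries, call the bank, finish the project report",
    "Monthly energy invoice: gas and electricity, payment due April 15",
    "To do: book flights, renew passport, schedule dentist appointment",
]
labels = ["energy_invoice", "todo_list", "energy_invoice", "todo_list"]

# Vectorize the text (TF-IDF) and fit a simple, self-sufficient model.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(documents, labels)

# Answer the client's question for a new document.
print(model.predict(["Reminder: pay the electricity bill of 37 euros"])[0])
```

In a full PoC, this model would sit inside the AI block behind the HTTP server (for example, a Flask endpoint), with the data conciliation layer supplying any additional user context.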
https://medium.com/sicara/how-to-build-successful-ai-poc-8acfe386a69a
['Arnault Chazareix']
2020-01-30 13:38:04.326000+00:00
['Machine Learning', 'Data Science', 'Proof Of Concept', 'Artificial Intelligence', 'Build Ai']
Which Deep Learning Framework is Growing Fastest?
Which Deep Learning Framework is Growing Fastest? TensorFlow vs. PyTorch In September 2018, I compared all the major deep learning frameworks in terms of demand, usage, and popularity in this article. TensorFlow was the undisputed heavyweight champion of deep learning frameworks. PyTorch was the young rookie with lots of buzz. 🐝 How has the landscape changed for the leading deep learning frameworks in the past six months? To answer that question, I looked at the number of job listings on Indeed, Monster, LinkedIn, and SimplyHired. I also evaluated changes in Google search volume, GitHub activity, Medium articles, ArXiv articles, and Quora topic followers. Overall, these sources paint a comprehensive picture of growth in demand, usage, and interest. Integrations and Updates We’ve recently seen several important developments in the TensorFlow and PyTorch frameworks. PyTorch v1.0 was pre-released in October 2018, at the same time that fastai v1.0 was released. Both releases marked major milestones in the maturity of the frameworks. TensorFlow 2.0 alpha was released March 4, 2019. It added new features and an improved user experience. It more tightly integrates Keras as its high-level API, too. Methodology In this article, I include Keras and fastai in the comparisons because of their tight integrations with TensorFlow and PyTorch. They also provide scale for evaluating TensorFlow and PyTorch. I won’t be exploring other deep learning frameworks in this article. I expect I will receive feedback that Caffe, Theano, MXNET, CNTK, DeepLearning4J, or Chainer deserve to be discussed. While these frameworks each have their virtues, none appear to be on a growth trajectory likely to put them near TensorFlow or PyTorch. Nor are they tightly coupled with either of those frameworks. Searches were performed on March 20–21, 2019. Source data is in this Google Sheet. I used the plotly data visualization library to explore popularity. For the interactive plotly charts, see my Kaggle Kernel here. Let’s look at the results in each category. Change in Online Job Listings To determine which deep learning libraries are in demand in today’s job market I searched job listings on Indeed, LinkedIn, Monster, and SimplyHired. I searched with the term machine learning, followed by the library name. So TensorFlow was evaluated with machine learning TensorFlow. This method was used for historical comparison reasons. Searching without machine learning didn’t yield appreciably different results. The search region was the USA. I subtracted the number of listings six months ago from the number of listings in March 2019. Here’s what I found: TensorFlow had a slightly larger increase in listings than PyTorch. Keras also saw listings growth — about half as much as TensorFlow. Fastai still shows up in hardly any job listings. Note that PyTorch saw a larger number of additional listings than TensorFlow on all job search sites other than LinkedIn. Also note that in absolute terms, TensorFlow appears in nearly three times the number of job listings as PyTorch or Keras. Change in Average Google Search Activity Web searches on the largest search engine are a gauge of popularity. I looked at search history in Google Trends over the past year. I searched for worldwide interest in the Machine Learning and Artificial Intelligence category. Google doesn’t provide absolute search numbers, but it does provide relative figures.
I took the average interest score of the past six months and compared it to the average interest score for the prior six months. In the past six months, the relative search volume for TensorFlow has decreased, while the relative search volume for PyTorch has grown. The chart from Google directly below shows search interest over the past year. TensorFlow in blue, Keras in yellow, PyTorch in red, fastai in green New Medium Articles Medium is a popular location for data science articles and tutorials. I hope you’re enjoying it! 😃 I used Google site search of Medium.com over the past six months and found TensorFlow and Keras had similar numbers of articles published. PyTorch had relatively few. As high-level APIs, Keras and fastai are popular with new deep learning practitioners. Medium has many tutorials showing how to use these frameworks. New arXiv Articles arXiv is the online repository where most scholarly deep learning articles are published. I searched for new articles mentioning each framework on arXiv using Google site search results for the past six months. TensorFlow had the most new article appearances by a good margin. New GitHub Activity Recent activity on GitHub is another indicator of framework popularity. I broke out stars, forks, watchers, and contributors in the charts below. TensorFlow had the most GitHub activity in each category. However, PyTorch was quite close in terms of growth in watchers and contributors. Also, Fastai saw many new contributors. Some contributors to Keras are no doubt working on it in the TensorFlow library. It’s worth noting that both TensorFlow and Keras are open source products spearheaded by Googlers. New Quora Followers I added the number of new Quora topic followers to the mix — a new category that I didn’t have the data for previously. TensorFlow added the most new topic followers over the past six months. PyTorch and Keras each added far fewer. Once I had all the data, I consolidated it into one metric. Growth Score Procedure Here’s how I created the growth score: 1. Scaled all features between 0 and 1. 2. Aggregated the Online Job Listings and GitHub Activity subcategories. 3. Weighted categories according to the percentages below. 4. Multiplied weighted scores by 100 for comprehensibility. 5. Summed category scores for each framework into a single growth score. Job listings make up a little over a third of the total score. As the cliche goes, money talks. 💵 This split seemed like an appropriate balance of the various categories. Unlike my 2018 power score analysis, I didn’t include the KDnuggets usage survey (no new data) or books (not many published in six months). Results Here are the changes in tabular form.
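To make the five-step scoring procedure concrete, here is a minimal sketch of it in pandas. The numbers are made-up placeholders, not the article's source data (which lives in the linked Google Sheet), and the category weights are illustrative assumptions rather than the article's actual split.

```python
# Hypothetical sketch of the growth-score procedure; values and weights are placeholders.
import pandas as pd

# Raw six-month changes per framework (made-up numbers).
raw = pd.DataFrame(
    {
        "job_listings": [1000, 800, 400, 10],
        "google_search": [-5, 20, 2, 8],
        "github_activity": [5000, 4500, 1500, 1200],
        "quora_followers": [800, 300, 250, 40],
    },
    index=["TensorFlow", "PyTorch", "Keras", "fastai"],
)

# 1. Scale every feature to the 0-1 range.
scaled = (raw - raw.min()) / (raw.max() - raw.min())

# 2. Subcategory aggregation (e.g. one column per job site) would happen here;
#    skipped in this toy example because each category is already one column.

# 3. Weight the categories (illustrative weights; the article gives job listings
#    a little over a third of the total).
weights = {"job_listings": 0.35, "google_search": 0.15,
           "github_activity": 0.30, "quora_followers": 0.20}

# 4. Multiply weighted scores by 100 for readability, then
# 5. sum the category scores per framework into a single growth score.
growth_score = (scaled * pd.Series(weights)).sum(axis=1) * 100
print(growth_score.sort_values(ascending=False))
```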
https://towardsdatascience.com/which-deep-learning-framework-is-growing-fastest-3f77f14aa318
['Jeff Hale']
2020-01-28 13:19:53.368000+00:00
['Machine Learning', 'Data Science', 'Technology', 'Artificial Intelligence', 'Deep Learning']
Detecting Sarcasm with Deep Convolutional Neural Networks
Overview This paper addresses a key NLP problem known as sarcasm detection using a combination of models based on convolutional neural networks (CNNs). Detection of sarcasm is important in other areas such as affective computing and sentiment analysis because such expressions can flip the polarity of a sentence. Example Sarcasm can be considered as expressing a bitter gibe or taunt. Examples include statements such as “Is it time for your medication or mine?” and “I work 40 hours a week to be this poor”. (Find more fun examples here) Challenges To understand and detect sarcasm it is important to understand the facts related to an event. This allows for detection of contradiction between the objective polarity (usually negative) and the sarcastic characteristics conveyed by the author (usually positive). Consider the example “I love the pain of breakup”: it is difficult to extract the knowledge needed to detect if there is sarcasm in this statement. In the example, “I love the pain” provides knowledge of the sentiment expressed by the author (in this case positive), and “breakup” describes a contradicting sentiment (that of negative). Other challenges that exist in understanding sarcastic statements are the reference to multiple events and the need to extract a large amount of facts, commonsense knowledge, anaphora resolution, and logical reasoning. The authors avoid hand-crafted feature extraction and rely on CNNs to automatically learn features from a sarcasm dataset. Contributions Apply deep learning to sarcasm detection Leverage user profiling, emotion, and sentiment features for sarcasm detection Apply pre-trained models for automatic feature extraction Model Sentiment shifting is prevalent in sarcasm-related communication; thus, the authors propose to first train a sentiment model (based on a CNN) for learning sentiment-specific feature extraction. The model learns local features in lower layers which are then converted into global features in the higher layers. The authors observe that sarcastic expressions are user-specific — some users post more sarcasm than others. In the proposed framework, personality-based features, sentiment features, and emotion-based features are incorporated into the sarcasm detection framework. Each set of features is learned by a separate model, becoming a pre-trained model used to extract sarcasm-related features from a dataset. CNN Framework CNNs are effective at modeling a hierarchy of local features to learn more global features, which is essential to learn context. Sentences are represented using word vectors (embeddings) and provided as input; Google’s word2vec vectors are employed. Non-static representations are used, therefore the parameters for these word vectors are learned during the training phase. Max pooling is then applied to the feature maps to generate features. A fully connected layer is applied, followed by a softmax layer for outputting the final prediction. (See diagram of the CNN-based architecture below) To obtain the other features — sentiment (S), emotion (E), and personality (P) — CNN models are pre-trained and used to extract features from the sarcasm datasets. Different training datasets were used to train each model. (Refer to paper for more details) Two classifiers are tested — a pure CNN classifier (CNN) and CNN-extracted features fed to an SVM classifier (CNN-SVM).
A separate baseline classifier (B) — consisting of only the CNN model without the incorporation of the other models (e.g., emotion and sentiment) — is trained as well. Experiments Data — Balanced and imbalanced sarcastic tweet datasets were obtained from (Ptacek et al., 2014) and The Sarcasm Detector. Usernames, URLs, and hashtags were removed, and the NLTK Twitter Tokenizer was used for tokenization. (See paper for more details) The performances of both the CNN and CNN-SVM classifiers, when applied to all datasets, are shown in the table below. We can observe that when the models (specifically CNN-SVM) combine sarcasm features, emotion features, sentiment features, and personality trait features, they outperform all the other models, with the exception of the baseline model (B). The table below shows comparison results of the state-of-the-art model (method 1), other well-known sarcasm detection research (method 2), and the proposed model (method 3). The proposed model consistently outperforms all the other models. The generalization capabilities of the models were tested, and the main finding was that if the datasets differed in nature, this significantly impacted the results. (See visualization of the datasets rendered via PCA below.) For instance, when training was done on Dataset 1 and testing on Dataset 2, the F1-score of the model dropped significantly, to 33.05%. Conclusion and Future Work Overall, the authors found that sarcasm is very topic-dependent and highly contextual; therefore, sentiment and other contextual clues help to detect sarcasm from text. Pre-trained sentiment, emotion, and personality models are used to capture contextualized information from text. Hand-crafted features (e.g., n-grams), though somewhat useful for sarcasm detection, will produce very sparse feature vector representations. For those reasons, word embeddings are used as input features. References Ref: https://arxiv.org/abs/1610.08815 — “A Deeper Look into Sarcastic Tweets Using Deep Convolutional Neural Networks” You can find an interesting discussion of this post on Reddit here.
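For readers who want to see the general shape of such a model, here is a minimal sketch of a word-embedding CNN text classifier in Keras. It follows the generic architecture described above (embeddings, convolution, max pooling, fully connected layer, softmax), but it is not the authors' exact network: the vocabulary size, sequence length, filter settings, and layer sizes are illustrative assumptions.

```python
# Minimal sketch of a CNN text classifier in the spirit of the architecture
# described above; hyperparameters are illustrative, not the paper's.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 50         # assumed maximum tweet length (in tokens)
EMBED_DIM = 300      # word2vec-style embedding dimension

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,), dtype="int32"),
    # Word embeddings; in the paper these start from pre-trained word2vec
    # vectors and are fine-tuned (non-static) during training.
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # Convolution learns local n-gram-like features...
    layers.Conv1D(filters=128, kernel_size=3, activation="relu"),
    # ...and max pooling turns them into a global feature vector.
    layers.GlobalMaxPooling1D(),
    # Fully connected layer followed by softmax over {sarcastic, not sarcastic}.
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# For the CNN-SVM variant, the activations of the penultimate layer would be
# extracted and fed to an SVM classifier instead of the softmax layer.
```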
https://medium.com/dair-ai/detecting-sarcasm-with-deep-convolutional-neural-networks-4a0657f79e80
[]
2018-06-18 05:17:44.876000+00:00
['Machine Learning', 'Data Science', 'NLP', 'Artificial Intelligence', 'Deep Learning']
Why Hamlet Is The Best Play Shakespeare Ever Wrote
Photo Credit JP Cote William Shakespeare, easily one of the greatest writers to have walked this Earth, wrote countless plays throughout his lifetime. Plays that we, as a society, four hundred years later, continue to read, study, and adapt. Of his plays that have survived the ages many are deemed good, many more deemed great. But I am here to argue that there is one play which stands above all others. One play that sets itself apart from the rest. One play that made Shakespeare the legend he is today. One play to rule them all! Too dramatic? Okay, I’ll tone it down a bit. But seriously there is one play that is the single greatest work William Shakespeare ever composed, and that play is Hamlet. Now would be a great time to go into an extensively detailed summary of the plot of Hamlet, but I’m just going to assume that anyone reading this story has taken a high school English class. On the off chance that you haven’t read the play, you’ve definitely read the SparkNotes version of it. So, for the sake of saving you all from boring, redundant reading, I’m just going to get right into it. The first reason that Hamlet is the best is also the simplest. It was bold. In order to fully understand how bold a move Hamlet was, you have to understand the time period that the play was written in. In the early 1600s, when Hamlet was written and first performed, England was ruled by Elizabeth I and then, from 1603, by James I. Now, a monarch in that era was not an authority figure like the ones we have today. People could not go on late night talk shows clowning the monarch, or criticize their Twitter activity, or laugh at memes that made fun of their stupid hair. No, the monarch’s reign was heavenly ordained. This meant that to disparage the Crown in any way was not simply to go against the country, it meant that you were going against God himself. And at a time when the existence of God was not something up for debate, but a concrete fact of life, this was kind of a big deal. So, with all of this in mind, let’s review the following quote from Act 4 Scene 3. Hamlet says to King Claudius, “A man may fish with the worm that hath eat of a king and eat of the fish that hath fed of that worm… nothing but to show you how a king may go progress through the guts of a beggar.” In this passage Hamlet is not only comparing kings to beggars, by explaining how they could both end up worm food, he is literally comparing kings to one of the lowest forms of life that we know of: worms. These lines are not simply controversial, they are absolutely blasphemous. It is surprising that Shakespeare got away with writing these lines with his head still attached, much less that this play was adored and performed in public venues throughout England. Of course, Shakespeare was able to say that it was the character Hamlet who felt this way and not he himself. But there is no mistaking it, this was one hell of a gutsy move. Apart from being a daring venture, the play shows some evidence that Shakespeare was well ahead of his time, philosophically speaking. In the first scene of Act 3 Hamlet is trying to act like he’s gone mad and parades around the courtyard ranting to himself. But the contents of this rant have become the most iconic lines of the play itself. “To be or not to be-that is the question: Whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles And, by opposing, end them… To die, to sleep-to sleep, perchance to dream.
Ay, there’s the rub, for in that sleep of death what dreams may come.” These lines are incredibly beautiful, and they illustrate his argument. Life is pain. To be alive is to experience pain, grief, and misfortune. Why not end it all? Why not avoid all the misery and simply put oneself to sleep? Life is meaningless, and we all die eventually, so what’s the point? Why not just avoid the agony and end it all? But to understand why it is so interesting that these words were written, we once again have to imagine the time period. We currently live in a society where (for the most part) religion and politics/culture are separated. This was not the case in the 1600s. Nobody questioned the nature of life after death because that question had already been answered by the church. And certainly, no one questioned whether suicide was a preferable option to the pain of living, because suicide is considered to be one of the harshest sins of the Christian faith. Sure, just like with the earlier quote, Shakespeare could’ve sidestepped it by claiming that it was a fictional character in a fictional story, but the mere fact that he wrote these words shows the kind of thoughts that were swimming around in his mind. As any fiction writer will tell you, every character in one’s story is a little piece of themselves. For Shakespeare to compose such a gripping speech about the nature of life and death, he must have had some doubts of his own about the Christian religion. In this scene, Shakespeare proved that he possessed a philosophical mind on par with any of the greats who have lived throughout history. Hamlet was a bold move and it certainly showed that Shakespeare had some philosophical prowess. But more than that, the play showed that Shakespeare had some serious intellectual prowess as well. In scene 1 of act 5 Hamlet is in a graveyard talking to his friend. He sees a skull on the ground, picks it up, and examines it. Hamlet then says, “Dost thou think Alexander looked o’ this fashion i’ th’ earth?… Alexander died, Alexander was buried, Alexander returneth to dust; the dust is earth; of earth we make loam.” When Hamlet says Alexander he is referring to Alexander the Great, possibly one of the most famous kings to have ever lived. The connection being made here is that this skull could be anybody’s. It could just as easily be a peasant’s skull as it could a king’s, because when it comes down to it we are all just dust that will eventually return to the earth from whence we came. What this quote proves is that Shakespeare had the ability to remove himself from the veil of society and to look at things from an unbiased perspective. He realized there is no heavenly ordainment. There is nothing that sets a beggar apart from a king except for the clothes they wear on their backs. All men die, and thus all men are created equal. So basically, Shakespeare was an atheist who disagreed with the idea of monarchy and realized that all men are created equal and that a society that positions certain human beings over others based on something silly like wealth or class is absolutely ridiculous. Shakespeare was well ahead of his time intellectually, philosophically, artistically and many more cally’s that I don’t feel like listing right now. But no other play proves this to such an extent. If you want to read some badass 1600s literature, look no further than Hamlet.
https://medium.com/literally-literary/why-hamlet-is-the-best-play-shakespeare-ever-wrote-3e08886cbdd4
['Jean-Paul Cote']
2019-02-13 07:42:46.317000+00:00
['Literary Analysis', 'Humor', 'Nonfiction', 'Shakespeare', 'Literally Literary']
How Decentralized Applications can Improve Online Marketplaces
The invention of the internet is a groundbreaking development that has transformed the way we buy and sell, from physical deliveries to global e-marketplaces like Amazon. Giant organizations in e-marketplaces are seeing a massive increase in sales while local commerce continues to stagnate. Technology so far has seemed to favor the giants of e-commerce. To put this into perspective, numbers from a report by Internet Retailer titled “Online Marketplaces: A Global Phenomenon” indicate that Alibaba enjoyed an incredible 50% annualized growth rate between 2014 and 2016, while Amazon experienced a 55.8% growth rate during the same period. On the same note, another study by eMarketer projects that global e-commerce retail sales will increase to $4 trillion by 2020, making up 14.6% of total retail spending that year. And it’s not just the retail sector: “the gig economy,” the freelance trend powered by online marketplaces like Uber, Upwork, and Fiverr, has also grown exponentially over the past years. The internet is currently undergoing a transformation from a centralized to a decentralized model. A Deloitte survey of more than 1,000 blockchain-savvy leading executives reveals that blockchain and decentralized apps are drawing closer and closer to their breakout moment with each passing day. There is a shift from theoretical applications to practical, real business applications. This move is likely to see a massive transformation of the way we do business online. Decentralized apps You may already be familiar with the term ‘applications’ as used in reference to software: software that serves a particular purpose. The majority of apps on the market follow a centralized client-server model; they directly control information shared from a single point, and all units are dependent on the central point for their functionality. Examples of apps using this model include Facebook, Amazon, Uber, and Google among others. There are a few systems that use the distributed model; control is spread across a number of centres known as nodes. Google, among other companies, has adopted both the distributed and centralized models to increase computational power and reduce data latency. Decentralized apps, on the other hand, operate with no node instructing the others. The decentralized nature of these apps brings a number of critical benefits to the online marketplace system: Dapps: improving online marketplaces ❖ Dapps can empower small businesses Dapps make use of smart contracts to help anyone on the network enter into a bidding transaction with any other person on the network. Small businesses are limited on online marketplaces like eBay with regard to the nature of their products, their geographical locations, payment restrictions, and licensing, among other factors. Millions of small businesses, particularly those in developing and less established markets, are especially disadvantaged. Blockchain and smart contracts, on the other hand, eliminate differences in business size, geography, payment methods, and language, among others. With platforms such as dApp Builder making it easy for businesses to deploy smart contracts, businesses only need to lay down a set of requirements that must be met before payments are automatically executed. This in itself speeds up payment processes in a secure way. With proof of work and validation across the nodes done, payment for services and digital assets is automatically released.
❖ No marketplace shutdowns
Decentralized networks benefit from the high availability of network services: because of their decentralized nature, they are not brought down by data center failures. Data centres are engineered for high reliability, but more often than not they still fail, frequently because of human error, and the resulting loss of data and money for clients and customers is often huge and painful. For C2C marketplaces in particular, which rely on centralized servers or IT departments for their back-end services, a data centre failure can mean a complete shutdown. With Dapps, computational power is shared among peers, which is more affordable, scalable and secure. Consider, for instance, that in 2017 a third of the internet went down for around five hours because of a glitch in Amazon's web services; major sites like Disney, Slack, Nike, Nest and Twitch were slowed or rendered completely unreachable. The internet is controlled centrally by just a few players, including Amazon, Google, Facebook, Microsoft and Alibaba, which makes it susceptible to exploitation and unexpected failures.
❖ A more secure e-marketplace founded on trust
Consensus in a decentralized app is reached peer to peer, which boosts the transparency and accuracy of data in the system and makes the transactions carried out on these apps immutable. The Equifax data breach is a recent reminder of why Dapps and blockchain are needed: transactions on the blockchain and Dapps are recorded cryptographically on the ledger and are immutable and irreversible. This helps detect fraudulent activity earlier and prevents it from recurring. It ultimately builds trust between parties transacting online, removing the need for intermediaries such as lawyers and banks. That has far-reaching consequences for online business transactions, eliminating fees and so reducing the cost passed on to the end user. Traders transact with each other directly and securely, while the decentralized nodes validate, record and store each transaction.
❖ The incentive structure for everyone in the community
Centralized marketplaces assume there must be a central authority to verify transactions and act as middleman, which creates the need to reward that authority for the service; unfortunately, it is also the central authority that decides the incentive and reward system. Blockchain distributes that authority across the network. Members of the network take part in verifying transactions and earn incentives and rewards for being active, which encourages everyone to work for the benefit of the whole network.
❖ Better identity recognition (Know Your Customer)
By eliminating intermediaries, Dapps improve identity recognition and sharply reduce fraud and impersonation. Centralized marketplaces verify identity through credit cards, passports and driving licences, among other documents, but because these systems are centralized they carry an inherent single point of failure that can be exploited, as the Equifax data breach showed. The cryptographic way data is stored and verified on the blockchain makes user identities unique, immutable and permanent.
Additionally, data is exchanged between users and providers in near real time, so identity verification itself becomes real time. Unlike centralized e-marketplaces, where real-time data is used for marketing and therefore becomes expensive, real-time data on the blockchain and Dapps is used for the benefit of the network to boost credibility; the method is simpler and less costly. Peer-to-peer interaction brings back a sense of community, with everyone's voice heard and everyone rewarded for being productive and active. The Deloitte study highlighted the following key advantages of blockchain technology:
➢ Greater speed compared to existing centralized systems; what used to take thousands of man-hours can now be accomplished in minutes, and blockchain and Dapps help eliminate the human errors that often cost businesses dearly.
➢ More business opportunities through new business models and new revenue sources.
➢ Greater security and lower risk; 84% of respondents said that decentralized systems on the blockchain were more secure than traditional apps.
➢ Lower operating costs.
➢ The backbone technology behind decentralized apps, the blockchain, brings the following advantages to the electronic marketplace: 1. Distributed storage and listings 2. Validity and credibility of all transactions 3. Transaction anonymity and privacy 4. Transaction traceability 5. Transaction immediacy
Conclusion
https://medium.com/ethereum-dapp-builder/how-decentralized-applications-can-improve-online-marketplaces-315934eba256
['Dapp Builder Team']
2018-11-30 12:57:10.395000+00:00
['Marketplaces', 'Development', 'Dapps', 'Technology', 'Blockchain']
Formosa Financial x CoolBitX: Co-branded CoolWallet S Design Competition
https://medium.com/%E5%AF%B6%E5%B3%B6%E9%87%91%E8%9E%8Dformosa-financial/formosa-financial-x-coolbitx-%E5%93%81%E7%89%8C%E8%81%AF%E5%90%8Dcoolwallet-s%E8%A8%AD%E8%A8%88%E7%AB%B6%E8%B3%BD-527186fa3177
['Formosa Financial Team']
2018-12-24 03:29:48.762000+00:00
['Fintech', 'Contests', 'Design', 'Blockchain', 'Crowdsourcing']
Battling Racist Ro(Bots) and Trolls-The Struggle Continues
Battling Racist Ro(Bots) and Trolls: The Struggle Continues. How I use writing to rage against racist machines. I've heard of antagonistic bots on Facebook and Twitter posting racist comments and rhetoric to stir up racial conflict, but I think they've started targeting Medium. I just discovered possible bots that have been responding to my posts or to my responses to other people's pieces. Over the last year, the number of negative and racist comments coming from weird accounts that appear to belong to white men has increased. At first, I blocked them, but then I started to respond. I thought I should chronicle my experiences, because they are a little creepy and I wanted to see if there was a pattern. This week's bot is Patrick Morris; he hasn't written anything, but I responded three times before I suspected it was a bot. In September 2020 I wrote about two more attackers; these may have been real people or bots, I'm not sure. In October 2019, again, I'm not sure if this was a real person or a bot. Lesson and Conclusion: If you're writing about race, equity and inclusion, be prepared to be attacked by real people and racist bots. You could choose to go down the rabbit hole, but DO NOT TAKE IT PERSONALLY. After all, it's probably a bot or a miserable person with issues… Try to respond to dispel or refute false information. That's all we can do. However, understand that it can be time-consuming, and responses are not part of the MPP (Medium Partner Program) even if you make them a story. Therefore, if your goal is to make money for your thoughtful words and rebuttals, make your response part of a separate story in the MPP. This story is an example of a response that I turned into a story. I would like to thank all the readers who've read and supported my efforts. In the end, writers like me can continue to battle the racist bots and people, but I think Medium needs to improve its security and its response time to reports of abuse and harassment. I've seen some horrible comments and reported them, with either no response or a slow one. Yet a picture without a proper caption will get flagged and your piece will be removed in a matter of hours… I think Medium needs to prioritize protecting writers, especially those of us who are doing the work to combat racism and discrimination and to amplify the voices of the historically marginalized and silenced. Thank you for reading.
https://medium.com/illumination/bots-on-facebook-twitter-and-medium-a98d3b07b4d5
['Gfc', 'Grown Folk Conversations']
2020-11-02 15:25:53.670000+00:00
['Cybersecurity', 'Medium', 'Racism', 'Bots', 'Writing']
A dark sight
A dark sight How you walk in dark been always afraid of dark used to sleep with lights on when i was a kid now i’ve got a bit comfortable but still when it’s pitch dark when i can’t even see the faint outlines i still lose my balance but you haven’t seen the light ever it’s been always dark for you how do you see through dark and what do you see in dark do you see the air around you the pockets of density and voids and its movements telling you the shapes and the ripples when someone’s moving do you see the sounds around you echoing back from the objects its shrillness telling you the distance rising and dipping approaching and receding is this what you see is this how you walk in dark how can i get this how can i learn this or is it that when one loses some sense then only he gets this sixth sense
https://medium.com/literary-impulse/a-dark-sight-7bed6feea3b5
[]
2020-08-28 07:12:08.279000+00:00
['Poetry', 'Air', 'Blindness', 'Writing', 'Literary Impulse']
Fuzzing Django Applications With the Atheris Fuzzing Engine
Atheris hispida. Photo by Bree Mc, soulsurvivor08 at flickr.com, CC BY 2.0, via Wikimedia Commons
Fuzzing (fuzz testing) is a technique for finding flaws in software by feeding it unexpected input. It has successfully uncovered numerous serious security issues, and others could have been found much earlier if fuzz testing had been conducted. This week Google released the Atheris fuzzing engine, which allows programs written in Python to be tested with libFuzzer, an actively developed library for coverage-guided fuzz testing. Atheris is simple to install and trivial to use. I will show you how to test a web application based on the Django framework and what can be achieved with a little effort.
TL;DR
It doesn't find all views automatically. Prepare a good test corpus to help it find routes in your application, and a dictionary with relevant phrases or strings to use when fuzzing. The smaller the part of the framework you harness, the more of its expectations you fail to meet. You may not want to test the URL resolver or the middleware, but views may depend on them. Get ready for surprising results. The more work you put into preparation, the better the results you get. Implement shortcuts. Mock out external dependencies: if your view resolves domain names or connects to an external database, you won't get far in a reasonable time.
Preparations
Follow the official README. Take care to install the latest version of Clang. In my case switching from Clang 11 to 12 doubled the testing speed ("exec/s").
Approach 1: Fuzzing complete page URLs
In this approach we create a django.test.Client and ask Atheris to visit random pages of our project:

#!/usr/bin/env python3

import os, django
# I assume this variable is already set (e.g. in your shell)
# os.environ.setdefault("DJANGO_SETTINGS_MODULE", "proj.settings")
django.setup()

from django.test import Client
client = Client()

def TestOneInput(data):
    url = '/' + data.decode('latin2')  # convert random bytes to string
    # surprisingly these chars result in ValueError: Invalid IPv6 URL
    url = url.replace('[', '%5B').replace(']', '%5D')
    response = client.get(url)
    if response.status_code not in [200, 301, 302, 404]:
        # suspicious! log the HTTP status and the request path
        print(response.status_code, url)
        raise RuntimeError("Badness!")
    elif response.status_code not in [302, 404]:
        # not suspicious, but let's see what pages it finds
        print(response.status_code, url)

import atheris
import sys
atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()

First, we initialize the Django framework and create the test client. Then we use it to obtain a response for the given path. To simulate accessing your website as a logged-in user, you can call the "client.force_login" method described in the Client documentation. The last step is to check the HTTP status code in the response. If it's unexpected, we immediately stop executing the script to investigate the case. If it's typical, but Atheris found a properly working page, we log the page address. What's expected or not depends on your application. I decided to ignore responses with HTTP redirects (statuses 301, 302) as well as the "Page not found" response (status 404). Let's start our script:

$ ALLOWED_HOSTS='["testserver"]' ./fuzz1.py -max_len=150
INFO: Configured for Python tracing with opcodes.
INFO: Seed: 3353455860
INFO: Loaded 2 modules (1024 inline 8-bit counters): 512 [0x558092250230, 0x558092250430), 512 [0x558092234510, 0x558092234710),
INFO: Loaded 2 PC tables (1024 PCs): 512 [0x5580924b5c70,0x5580924b7c70), 512 [0x5580924b8490,0x5580924ba490),
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED cov: 20202 ft: 20202 corp: 1/1b exec/s: 0 rss: 153Mb
NEW_FUNC[1/42]: 0x5580924b7e69
NEW_FUNC[2/42]: 0x5580924b7e81
#3 NEW cov: 22119 ft: 23051 corp: 2/5b lim: 4 exec/s: 0 rss: 154Mb L: 4/4 MS: 1 CrossOver-
#4 NEW cov: 22123 ft: 23055 corp: 3/9b lim: 4 exec/s: 0 rss: 154Mb L: 4/4 MS: 1 CopyPart-
#9 REDUCE cov: 22123 ft: 23055 corp: 3/8b lim: 4 exec/s: 0 rss: 154Mb L: 3/4 MS: 5 CopyPart-ChangeBit-ShuffleBytes-CrossOver-CrossOver-
NEW_FUNC[1/1]: 0x5580927a98fd
#11 NEW cov: 22137 ft: 23079 corp: 4/12b lim: 4 exec/s: 11 rss: 154Mb L: 4/4 MS: 2 ChangeByte-ChangeByte-
#12 NEW cov: 22137 ft: 23099 corp: 5/16b lim: 4 exec/s: 12 rss: 154Mb L: 4/4 MS: 1 ChangeBinInt-
#18 NEW cov: 22137 ft: 23178 corp: 6/20b lim: 4 exec/s: 18 rss: 154Mb L: 4/4 MS: 1 CopyPart-
#35 NEW cov: 22137 ft: 23186 corp: 7/24b lim: 4 exec/s: 35 rss: 154Mb L: 4/4 MS: 2 ChangeBit-ChangeByte-
#41 NEW cov: 22137 ft: 23199 corp: 8/28b lim: 4 exec/s: 41 rss: 154Mb L: 4/4 MS: 1 CrossOver-
NEW_FUNC[1/1]: 0x5580927a9901
#50 NEW cov: 22214 ft: 23276 corp: 9/31b lim: 4 exec/s: 25 rss: 154Mb L: 3/4 MS: 4 ShuffleBytes-ChangeByte-EraseBytes-ChangeBinInt-
#64 pulse cov: 22214 ft: 23276 corp: 9/31b lim: 4 exec/s: 32 rss: 154Mb
#82 REDUCE cov: 22214 ft: 23276 corp: 9/30b lim: 4 exec/s: 27 rss: 154Mb L: 3/4 MS: 2 EraseBytes-ChangeByte-

What we see is the standard output from libFuzzer. If you are new to libFuzzer, the official tutorial explains it in detail. I used "-max_len=150" to limit the length of the random value; be sure to take a look at the libFuzzer options. (Edit: An update has been released and the output now contains a proper function name instead of its unreadable memory address.) As the script runs, the word "NEW" gets rarer and rarer in its output, which means that it's getting hard to find new code to test. We can help the fuzzer by providing a directory of interesting inputs (a corpus). In our case, those inputs would be names of existing pages. LibFuzzer will use that directory to store interesting inputs it finds itself, so it won't start from scratch next time:

$ mkdir CORPUS1
$ ALLOWED_HOSTS='["testserver"]' ./fuzz1.py -max_len=150 CORPUS1
INFO: Configured for Python tracing with opcodes.
INFO: Seed: 17267508
(...)
INFO: 0 files found in CORPUS1
INFO: A corpus is not provided, starting from an empty corpus
#2 INITED cov: 20202 ft: 20202 corp: 1/1b exec/s: 0 rss: 153Mb
#3 NEW cov: 20206 ft: 20263 corp: 2/2b lim: 4 exec/s: 0 rss: 153Mb L: 1/1 MS: 1 ShuffleBytes-
(...)
#286 NEW cov: 22718 ft: 24298 corp: 28/72b lim: 4 exec/s: 31 rss: 155Mb L: 2/4 MS: 1 InsertByte-
^C KeyboardInterrupt: stopping.
$ ALLOWED_HOSTS='["testserver"]' ./fuzz1.py -max_len=150 CORPUS1
INFO: Configured for Python tracing with opcodes.
INFO: Seed: 96071814
(...)
INFO: 26 files found in CORPUS1
INFO: seed corpus: files: 26 min: 1b max: 4b total: 70b rss: 152Mb
#27 INITED cov: 22718 ft: 24286 corp: 19/50b exec/s: 27 rss: 154Mb
#31 NEW cov: 22718 ft: 24303 corp: 20/54b lim: 4 exec/s: 31 rss: 154Mb L: 4/4 MS: 4 EraseBytes-CrossOver-ChangeBit-CrossOver-

We are not limited to GET requests. Form handling functionality can be tested with the "client.post(path, data, content_type, …)" function.
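For instance, a form-handling view could be fuzzed along these lines (a minimal sketch reusing the client and Atheris setup from the script above; the /search/ path, the field names and the accepted status codes are my assumptions, not details taken from the project under test):

def TestOneInput(data):
    fdp = atheris.FuzzedDataProvider(data)
    # split the random input into two form fields (field names are made up for this sketch)
    payload = {
        'query': fdp.ConsumeUnicodeNoSurrogates(60),
        'page': str(fdp.ConsumeIntInRange(0, 10000)),
    }
    # client.post() encodes the dict as form data and submits it like a regular form submission
    response = client.post('/search/', payload)
    if response.status_code not in [200, 302, 400, 404]:
        print(response.status_code, payload)
        raise RuntimeError("Badness!")

The same pattern works for JSON endpoints if you pass a serialized body together with content_type='application/json' instead of a dict.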
Whichever content type you use, I recommend reading the function's documentation, as the interpretation of "data" in Django depends on the "content_type" argument.
Approach 2: Fuzzing Django views
It takes a lot of time for the fuzzer to discover all the views in your application, so let's direct our test at specific ones. We can use the RequestFactory and call the view directly.

#!/usr/bin/env python3

import os, django
django.setup()

from django.test.client import RequestFactory
rf = RequestFactory()

from myapp.views import ProductListView as view_under_test

def TestOneInput(data):
    url = '/' + data.decode('latin2')  # convert bytes to string
    # surprisingly these chars result in ValueError: Invalid IPv6 URL
    url = url.replace(']', '%5D').replace('[', '%5B')
    request = rf.get(url)
    response = view_under_test.as_view()(request)
    if response.status_code not in [200]:
        print(response.status_code, url)
        raise RuntimeError("Badness!")
    elif response.status_code not in [404]:
        print(response.status_code, url)

import atheris
import sys
atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()

Bypassing the resolver and the middleware results in a big speed-up. For my application a crash is reported in the very first test case:

$ ALLOWED_HOSTS='["testserver"]' ./fuzz2.py -max_len=150 -only_ascii=1 CORPUS2
INFO: Configured for Python tracing with opcodes.
INFO: Seed: 1307295576
INFO: Loaded 2 modules (1024 inline 8-bit counters): 512 [0x558c20687db0, 0x558c20687fb0), 512 [0x558c1faebf90, 0x558c1faec190),
INFO: Loaded 2 PC tables (1024 PCs): 512 [0x558c2077e880,0x558c20780880), 512 [0x558c207810a0,0x558c207830a0),
INFO: 159 files found in CORPUS2
=== Uncaught Python exception: ===
AttributeError: 'Request' object has no attribute 'LANGUAGE_CODE'
Traceback (most recent call last):
File "./fuzz2.py", line 31, in TestOneInput
response = view_under_test.as_view()(request)
(...)
File "/usr/local/lib/python3.8/site-packages/rest_framework/request.py", line 414, in __getattr__
return self.__getattribute__(attr)
==1861== ERROR: libFuzzer: fuzz target exited
SUMMARY: libFuzzer: fuzz target exited
MS: 0 ; base unit: 0000000000000000000000000000000000000000
artifact_prefix='./'; Test unit written to ./crash-da39a3ee5e6b4b0d3255bfef95601890afd80709
Base64:

However, this is not a bug. The view under test needs the LANGUAGE_CODE value, which is normally provided by the middleware. We can provide it by setting a constant value as an attribute after creating the request:

request = rf.get(url)
setattr(request, 'LANGUAGE_CODE', 'pl')

But we can also fuzz the value together with the page address. To do that we need to switch to FuzzedDataProvider and ask it for two Unicode strings:

def TestOneInput(data):
    fdp = atheris.FuzzedDataProvider(data)
    url = '/' + fdp.ConsumeUnicode(40)
    url = url.replace(']', '%5D').replace('[', '%5B')
    request = rf.get(url)
    setattr(request, 'LANGUAGE_CODE', fdp.ConsumeUnicode(2))
    (...)

FuzzedDataProvider offers a range of methods that return values of different types, which may be more suitable for your use case, so make sure to take a look at the documentation. For example, if we wanted to constrain the value of a variable to a list of predefined values, we could use PickValueInList:

language_code = fdp.PickValueInList(['cz', 'sk', 'pl'])

Final thoughts
"This is way too slow." Optimize. Check the settings and disable the middleware classes you don't use. Switch to SQLite. Look through articles about benchmarking and improving Python performance (for example, on the brilliant PythonSpeed blog).
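To give a concrete idea of what that trimming might look like, here is a sketch of a fuzzing-only settings module (the project name "proj", the middleware kept in place and the in-memory SQLite database are illustrative assumptions, not the configuration of the application above):

# fuzz_settings.py - start from the real settings, then strip what the fuzz target does not need
from proj.settings import *  # noqa: F401,F403

# keep only the middleware the views under test actually rely on
MIDDLEWARE = [
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
]

# an in-memory SQLite database avoids round-trips to an external database server
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    }
}

DEBUG = False
ALLOWED_HOSTS = ['testserver']

Pointing DJANGO_SETTINGS_MODULE at a module like this (e.g. DJANGO_SETTINGS_MODULE=fuzz_settings ./fuzz1.py …) keeps the request path as short as possible without touching the production settings.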
"Reviewing the code will take less time than optimizing for fuzzing." That's possible, but preparing automated tests is a one-time investment. If you integrate fuzz testing into your CI pipeline, you will benefit from testing your application for regressions after every change. "It's amazing how easy it is to set up." I'm glad you liked it! Please let me know in the comments if it worked for your project or if you have ideas for improving web application fuzzing techniques. Or just say 'Hello!' :-)
https://medium.com/swlh/fuzzing-django-applications-with-the-atheris-fuzzing-engine-ace18f262ae0
['Tomasz Nowak']
2020-12-23 15:12:09.714000+00:00
['Python', 'Security', 'Testing', 'Django', 'Fuzzing']
When Art Meets Data: Flowers as a Visual Metaphor
Popular Baby Names Reimagined as Flowers Similarly to Film Flowers, my Baby Name Blossoms project uses petals to quantify data — the popularity of the names. Photo by the author. The magic behind this visualization is D3’s quantize scale, which allowed me to transform popularity ( d.count ) into the number of petals ( numPetalScale ): const countMinMax = d3.extent(data, d => d.count) const numPetalScale = d3.scaleQuantize().domain(countMinMax).range([7, 10, 12, 15, 20]) Since the data I grabbed were the top 10 names for both genders, the values share more similarities than, say, the values of Film Flowers’ IMDb votes, making it a bit challenging to show big variations between the names. So, how might we add features that distinguish between them? What makes each name unique? While asking myself this very question, I noticed that Film Flowers uses colors to display different genres for each movie. Perhaps I can use colors as well? How about taking the vowels and painting them accordingly? Photo by the author. Here, I used D3’s ordinal scale to connect vowels with a range of colors: const vowels = ['a', 'e', 'i', 'o', 'u', 'y'] const petalColors = d3.scaleOrdinal().range(['#E44F5D', '#F6B06E', '#EFCB64', '#F8765C', '#E5D35F', '#1DDCCA']) petalColors.domain(vowels) Then, inside the function that converts each dataset into flower scale, I passed in a new object containing the vowels of each name: And finally, I appended the circle(s) inside the flowers. Thanks to the varying vowels in each name, now we can see more contrast between individual names. The resulting visual effect was more than satisfactory: Photo by the author.
https://medium.com/better-programming/when-art-meets-data-flowers-as-visual-metaphor-3dba3c5f394b
['Annie Liao']
2020-07-06 17:15:01.833000+00:00
['JavaScript', 'Design', 'D3js', 'Data Visualization', 'Programming']
Hyperparameter Tuning of Support Vector Machine Using GridSearchCV
Models can have many hyperparameters, and finding the best combination of parameters can be treated as a search problem; grid search is one such method. What is SVM? SVM stands for Support Vector Machine. It is a supervised machine learning algorithm used for both classification and regression problems. It uses a kernel strategy to transform your data and, based on these transformations, finds an optimal boundary between the possible outputs. Sometimes we get linearly separable data, but usually things are not that simple. Let's take an example of classification with non-linear data: Now, to classify this type of data, we add a third dimension to this two-dimensional plot. We define it in a way that is convenient for us: z = x² + y² (you'll notice that's the equation for a circle). This gives us a three-dimensional space. Since we are in three dimensions now, the hyperplane is a plane parallel to the x-y plane at a certain z (let's say z = 1). Now we project it back into two dimensions. It looks like this: And here we go! Our decision boundary is a circle of radius 1, which separates both classes using the SVM. What is Grid Search? Grid search is a technique for tuning hyperparameters: it builds and evaluates a model for every combination of algorithm parameters specified in a grid. We might use 10-fold cross-validation to search for the best value of each tuning hyperparameter. Values such as the decision criterion, max_depth or min_samples_split are examples; these values are called hyperparameters. To get the best set of hyperparameters, we use the grid search method: every combination of hyperparameters is passed through the model one by one, and each model's score is checked. This gives us the set of hyperparameters with the best score. The scikit-learn package provides a means of automatically iterating over these hyperparameters using cross-validation; this method is called Grid Search. How does it work? Grid Search takes the model or object you want to train and the different candidate values of the hyperparameters. It then calculates the error for the various hyperparameter values, letting you choose the best ones. The grid search process is illustrated in the figure (image created by the author): let the tiny circles represent different hyperparameter values. We begin with one set of values and train the model, then repeat with different values until we have exhausted the combinations. Every model produces an error, and we pick the hyperparameters that minimize it. To pick the hyperparameters, we split our dataset into three parts: the training set, the validation set and the test set. We train the model for different hyperparameters, compute the error for each model, and select the hyperparameters that minimize the error or maximize the score on the validation set. At the end, we test the model's performance using the test data. Below we are going to implement hyperparameter tuning using the sklearn class GridSearchCV in Python. Step-by-step implementation in Python: a. Import necessary libraries: We import various modules such as datasets, the classifiers, StandardScaler, and GridSearchCV from their respective libraries. Let's Start We take the Wine dataset to perform the Support Vector Classifier. Here is the dataset information: Input variables (based on physicochemical tests): 1. Alcohol 2. Malic acid 3. Ash 4. Alkalinity of ash 5. Magnesium 6. Total phenols 7. Flavanoids 8.
Non-flavanoid phenols 9. Proanthocyanins 10. Color intensity 11. Hue 12. od280/od315_of_diluted_wines 13. Proline. Libraries used: Pandas, NumPy, Matplotlib, Seaborn, Sklearn (including GridSearchCV). Now, import the Wine data using sklearn's built-in datasets. The data looks like this: Now comes the part every data scientist works through: data pre-processing. First we inspect the dataset information using the DESCR attribute (short for "describe"); it shows the attribute information and the target column. As in every machine learning model, we first separate our input and output variables, say X and y respectively. To understand each feature's relationship with the output, we use the seaborn and matplotlib libraries for visualization. First, we use a boxplot to see the relation between a feature and the output. Let's take one feature as an example (image created by the author): in this boxplot, we can see a roughly linear relation between alcalinity_of_ash and the wine class. Another example (image created by the author): in this boxplot, we see three outliers, and as total_phenols decreases, the class of wine changes. So our SVM model might assign more importance to the features that vary linearly with the output. To see how our data is distributed, we use the matplotlib library and plot histograms. Here is an example (output image created by the author): the feature malic_acid follows a left-skewed distribution. Train Test Split: We split the data into a training set and a testing set using the train_test_split function from sklearn's model_selection package, in a 70%/30% ratio, because we first train our model on the training set and then test its accuracy on the testing set. Train the Support Vector Classifier without hyperparameter tuning: Now we train our machine learning model. We import the Support Vector Classifier (SVC) from sklearn's SVM package because this is a classification problem. Parameters for GridSearchCV: The Grid Search parameter value is a Python dictionary (or a list of dictionaries): each key is the name of a parameter, and its value is the list of candidate values for that parameter. This effectively defines a table of parameter combinations to try. We also pass in the object or model of the support vector classifier. Grid Search can use various classification performance metrics as its scoring method; here we supply the scoring metric and number of folds along with the model and the parameter values. The output includes the scores for the different parameter values, along with the parameter values that achieved the best score. Now comes the main part: hyperparameter tuning. First, let's define a hyperparameter: it is a parameter whose value is used to control the learning process, and hyperparameter tuning means choosing optimal values for these parameters. The parameters are as follows: C: the regularization parameter of the error term. kernel: the kernel type to be used in the algorithm; it can be 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed', or a callable, and the default value is 'rbf'. degree: the degree of the polynomial kernel function ('poly'); it is ignored by all other kernels, and the default value is 3. gamma: the kernel coefficient for 'rbf', 'poly', and 'sigmoid'; if gamma is 'auto', then 1/n_features is used instead. To accomplish this task we use GridSearchCV, a class in sklearn's model_selection package. It loops through the predefined hyperparameters and fits your estimator (such as SVC) on the training set. Here is the code for finding the best hyperparameters using GridSearchCV: we fit the object, find the best parameter values through the best_estimator_ attribute, and check the accuracy with the score() function. The final output we get is 90% accuracy using the SVC model with GridSearchCV. References: [1] https://scikit-learn.org/stable/modules/svm.html Conclusion: We analyzed the Wine dataset (a preloaded dataset included with scikit-learn) in this post. Pandas, Seaborn and Matplotlib were used to organize and plot the data, revealing that several of the features naturally separate the classes. Classifiers were trained and tested using the train/test split paradigm, and we have now learned how to work with SVMs and how to tune their hyperparameters. Thanks for reading.
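Since the code screenshots from the original post are not reproduced here, the following is a minimal, self-contained sketch of the workflow described above (the 70/30 split follows the article; the parameter grid values are illustrative choices, and the 90% figure quoted above is the article's result, not something this sketch guarantees):

from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# load the Wine dataset bundled with scikit-learn
X, y = load_wine(return_X_y=True)

# 70% training / 30% testing, as described in the article
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# SVMs are sensitive to feature scale, so standardize using the training data only
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# baseline: train the Support Vector Classifier without hyperparameter tuning
baseline = SVC().fit(X_train, y_train)
print("baseline test accuracy:", baseline.score(X_test, y_test))

# candidate hyperparameters; each key is a parameter name, each value a list of values to try
param_grid = {
    'C': [0.1, 1, 10, 100],
    'kernel': ['linear', 'rbf', 'poly'],
    'gamma': ['scale', 'auto', 0.01, 0.1],
}

# 10-fold cross-validated grid search over every combination in the grid
grid = GridSearchCV(SVC(), param_grid, cv=10, scoring='accuracy')
grid.fit(X_train, y_train)

print("best parameters:", grid.best_params_)
print("test accuracy of best estimator:", grid.best_estimator_.score(X_test, y_test))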
https://medium.com/ai-in-plain-english/hyperparameter-tuning-of-support-vector-machine-using-gridsearchcv-4d17671d1ed2
['Bhanwar Saini']
2020-12-28 07:36:02.192000+00:00
['Machine Learning', 'Data Science', 'Technology', 'Artificial Intelligence', 'Deep Learning']
Everything I wrote about media business and revenue models in 2020
This is the third in a short series bringing together all of my research, journalism and other creative work from 2020. It follows earlier posts looking at my work on the changing practice of journalism, and the full research reports (a couple of which are reshared below) that I published in the past 12 months. Reports Articles focused on media and revenue strategy The Impact of COVID-19 Alongside publishing The Publisher’s Guide to Navigating COVID-19, I also wrote a couple of other articles on the pandemic, while also updating the chapters in my COVID report and publishing them as standalone chapters. Making sense of Media Trends eCommerce Last year, I published a major report — the first — into the eCommerce potential as a revenue source for media companies and publishers. At the start of the year, I extracted the chapters of this report — The Publisher’s Guide to eCommerce — and updated them, following this up with a separate report (The Publisher’s Guide to eCommerce: Case Studies) offering deep dives into the eCommerce experiences of BuzzFeed, POPSUGAR, Marie Claire UK and others.
https://medium.com/damian-radcliffe/everything-i-wrote-about-media-business-and-revenue-models-in-2020-e157f697fdd0
['Damian Radcliffe']
2020-12-22 21:18:06.956000+00:00
['Innovation', 'Journalism', 'Ecommerce', 'Media Criticism', 'Covid 19']
Editor’s Picks — Top 10: To Be a Top Writer You Must Write
Most new writers think they have nothing to write. They believe their stories are not worth telling, that they should not waste the reader's time. The reality is exactly the opposite. New writers have more things to say than old writers; old writers write more because they know how to write, not because they have more to say. Young writers are often more imaginative, and their views and issues are more relevant to an age group willing to learn new things by reading someone who knows their dilemmas. Most new writers, and non-writers, can speak without a break for hours. They can argue for hours using words. But when someone asks them to write down their views, they fail to produce a meaningful paragraph, or sometimes even a single sentence. Just record an argument with a friend on some important topic. You can then transcribe your words and get a written log of your discussion. Do you think it is ready for publishing? Definitely not. You will have to remove all the unnecessary sentences and words. Then you'll have to add some information to make it easy for your reader to understand. You will have to rearrange some of the paragraphs and refine some of the arguments. Now you can show it to someone, and if it makes sense to that person, you have become a writer. If you look at it like that, you can say that every person who can talk can be a writer. But being a writer is not just about converting your ideas into words and then publishing them. It is about accepting responsibility for what you have written: that it is true, and that you really believe it to be true. Accepting responsibility for your written words requires both courage and confidence. Where can you find the confidence and the courage that can make you a writer? What converts a talker or a chatterbox into a writer is the confidence that comes from reading great writers. When you see how they transformed their ordinary feelings into remarkable pieces of art, you secretly wish to do likewise. But you still can't find the courage. The courage a writer requires is similar to the courage a boxer needs: you will be punched if you make the wrong move. The only thing that gives a young man who wishes to be a boxer the courage to go into the arena is his passion to win. When your wish to convert your life events into masterpieces becomes a burning desire, you will find the courage to write your words and then stand behind them with confidence. But just because you wrote your feelings and told your stories does not mean no one will challenge your opinions. That is when you have to decide to read a lot. Support your work with research, references, and quotes from people who are authorities in their fields. Your article must present a winning case before the jury of your readers.
https://medium.com/illumination-curated/editors-picks-top-10-to-be-a-top-writer-you-must-write-e220f34ebd36
['Dew Langrial']
2020-12-24 22:10:52.282000+00:00
['Reading', 'Readinglist', 'Writing Tips', 'Writing', 'Self Improvement']
HOW TO ATTRACT INVESTORS WITH COOKIES
Amsterdam trip Introduction A bit of our team made a wonderful journey to the magical country of the Netherlands to participate in the world Blockchain Expo. Later I received a lot of questions how does it feel to participate in such a huge event, so I decided to present my memories below. Preparing for the storm Preparation in advance, check-lists and other useful things are not our cup of tea. Of course, it’s not about being bad-organized, but about being creative minds who are getting inspiration on the fly and yet sometimes at the last moment. Our biggest challenge was to present our small project as serious and yet interesting one on the world stage. How can we attract hundreds and thousands of people on our platform competing with experienced players who invest heavily in creating their launching pad? Brainstorming for a while, we dismissed thousands of ideas and focused on the core issue — a human approach. This is our strategy — transparency, openness. It should be used in this form. The first thing we decided to make was a presentation, which would create a WOW effect on the reader. A lot of mocks were burned out without the right for restoration. Desperate designers started drawing prototypes on what was at hand — and presto! A tracing paper, which apparently had been lying on the table since old-believers typewriters waiting for its triumph, comes into the hands of our designer Olga. A tracing paper. It was exactly what we needed. It is moderately transparent and thick. You can write, draw, apply effects while preserving feeling of transparency, cleanliness and depth. Immediately, the endorphin bomb caused the artists ‘ neurons to explode with renewed vigor to implement a concept that, after the realization can be summed up as WOW. Resembling little hobbits with One Ring, they fussed over the project, not showing it to anyone before the trial version — so big was the fear that the evocation would not correspond with their mental image. Finally, when our font manager Vladimir came with a ready-made version he could not hide his emotions, and almost crying (although he could be crying because of tough communication with print workers ) — presented us the release version . We were so amazed, that beautiful words of foul language filled the office. The presentation was ready. The only thing left was to disseminate it and it could already be said that our “face” was ready and waiting for its display. But there will be a lot of people at the Expo — how do we get their attention from afar? Again, the brainstorming of our creative minds started rasping in the dark corners of the office. The tension was in the air, and even our artist, who created our first mascot — Alphy the Bull — accidently was involved in the discussion. Just a quick look at Katerina dispelled any doubts — our Alphy the Bull must be on the banner. Neither an infographic, nor a windy talk. There should be our Alphy who will force all passer-by people to come to our stand. Two out of three important components of the engagement chain were ready. We were able to attract people. We could give information. Yet, we needed to inspire and express the human attitude. Here the garland was mine. I have always thought that the art of calligraphy is something magical and very interesting. And when I started thinking about expressing our attitude to visitors — I quickly realized that if our calligrapher signed personal cards to everyone who wanted to know more about our company , it would be stronger than even an expensive promo. 
And the main thing: people will remember it, because it was written to them, it was personal and unusual. And that’s life. Sometimes you need to sweat to give birth to idea, and some concepts come out of thin air. But the main thing is even if everything that had been done before is thwarted — is not a fail. All that was a bridge that led to the shore of a successful concept. Teleportation Surely, the day of the trip was exciting. Firstly, the printing office had to send us a ready volume of presentations. Secondly, the trip itself. Journey. Of course, in the best traditions of a good trip, everything went not so smoothly. The first stumbling block was the printing house, which did not have time to deliver our childrens on time. Vladimir, as a responsible person for the war, rushed to pick them personally. Only when arriving there, he realized that had underestimated their weight. These were really “weighty arguments” of our event. The total weight — 100 kg. Immediately the question of transportation come up — we needed to find some space to pack the volumes in a short time. In general, even this small fragment of Alphateca life shows how splendid abstract and creative vision can be applied to any aspect of our lives. Amsterdam met us with the sun and mashed-up people. We went to the hotel to get ready for the next day. The first day of the Expo. Surely, we had already laid out our gimmicks on the first day and it was too late and silly to rush to a candy shop, as it was not “Hell’s Kitchen” show. Expo Day 1 The first day of any event is very important. Firstly, you give a taste of the quality, and secondly, you know, what’s in store for you. A tender-hearted organizer has informed us that the way from our hotel to the exhibition will take not more than 20 minutes walk. But she forgot to specify that there are such interesting moments as the absence of pedestrian zones on some roads and if you walk on them, it turns out to take at least 1 hour. It does not matter — there is a taxi, call Uber, an excellent international system, which is so useful in many countries. Fun fact : Amsterdam taxi can stop not where you want, but only on special parking zones, which is extremely inconvenient for our brother. Uber quickly brought us to the Expo and we saw a huge building, actually a modern barn, where hundreds of exhibitors from different countries gathered under the roof. And this wonderful smell of ozone from the printed entrance tickets for Expo participants. An international crowd. Russian voice. Stop, what??? We thought we were going to be the only Russians in the event. But no. It turns out that there is a sufficient number of Russian speaking guys from Russia, Kazakhstan, Ukraine, Belarus. Having entered into the internal spaces of Expo, I began to understand that some companies had invested nearly annual budgets of some small firms in the event. Miracles of design and details. All stands are full of a various goodies for attraction. Everyone is friendly, good-looking, and full of interactive offers. Will we be able to beat their approach to grab at least some piece of the crowd and interest them? We were full of this panic feeling while walking to our stand. A lot of strong and experienced teams. Obviously, I tried to play poker. When we reached the stand and began to unpack our roll-up, arrange furniture and lay out the printing materials -we realized that: Roll-up should not be placed aback like everyone does, it should be actually the first web on the way of passing people. 
Typography should also be lying on the table, slightly pushed forward, so that guests from afar could see that there is something unusual and interesting to see. We had a wonderful quartet: me, our artist Violette, Account manager Ksenya and Font manager Vladimir. Initially, we planned me and Violetta sitting in front of the stand, and Vladimir with Ksenya beating a path to target clients and partners. Finally, it turned out that the perfect scheme is different: me and Vladimir, Ksenya and Vita. Our “customer funnel” worked the following way: firstly, people saw a roll-up with the bull, which clearly stood out among all the other stands, then they saw Vladimir, who was making personal calligraphic and while they were waiting I gave them information about the project. Unlike all others, we did not have luxury banners, interactive stands. We were simple and showed that we are able to work with a completely different approach. With humanity, openness and readiness to show our skills right then and there. The day was emotional, productive, indicative. There were a lot of pleasant moments, especially when strangers came up to us and said that they heard at the other end of the hall that there were creative and interesting guys presenting an unusual project. Or when people were smiling having their postcard signed and immediately started taking photos with you and signing up for Alphateca . It was also great when they were coming already knowing our names and talked about how they would like to become partners with us, even if they were from a different sphere. Of course, there were moments when we knew that we had gone the wrong way, for example , we needed plasma to show our promo. Or that the presentations are cool, but too heavy to carry. Or that the business cards were a total fail as the concept was not implemented . But mistakes are important and useful, as this is one more step to your dream. Expo Day 2 The second day is like the second day of a wedding party — everyone has already got used to each other and everything goes in a more relaxed atmosphere. Nevertheless, it was the most important day for us. If on the first day we just presented the project, then the second day was the day of investors. Which implied more formalized approach. It is important not only to show that you are creative, but also assure the seriousness of your approach to the process. The day started later and ended earlier. We also knew that people will go in waves, tied to the beginning and ending of lectures. On this day, everyone took out their gimmicks — FOOD. Yes! You would not believe , but the people began luring investors with cookies and cupcakes! What a shock! We did not even think that this could happen at the European summit. Ok, let it be. Surely, we had already laid out our gimmicks on the first day and it was too late and silly to rush to a candy shop, as it was not “Hell’s Kitchen” show. Others tried to attract with everything they could — glasses, caps, gadgets, bags. We had stickers and postcards — only souvenirs. Again, in their case, everything was done in large quantities and circulation, and in our case, part of our gift material was created right in front of a person. That definitely increased the loyalty level. On this day there were quite serious technical and economic talks. Serious uncles, bringing their auditors resembling tiny microscopes considered the projects under the maximum zoom, in order to assess whether it is worth considering even deeper. 
A pleasant moment occurred when after a long dialogue your interlocutor opened his notebook and started writing a brief summary of the Alphateca project. The end. Amsterdam and other thoughts To sum up, such Expo is a great opportunity to show yourself, your project, to prove that you are not a cheat and it is not a scam. That the team is not afraid to answer questions, that you are creative and able to meet with the expectations of you. This is a great opportunity to get partners, to convey the information about your project to the target audience , get coverage in media. Among other things, it is just a huge charge of energy and emotions, the opportunity to learn, gain information about current trends, as well as possible competitors from the top officials. Moreover, it is a live and active target audience that asks questions — and this is, actually, a huge stress- test of your platform. I managed to visit different countries during my lifetime– France, Norway, Sweden and others. But Amsterdam is a different city. It’s a strange mix of different theses. It is contradictory cozy and not very clean. It is open and sometimes dark. It has marvelous narrow streets with crowded bars and different small shops. It is certainly not the cheapest city — but the question is controversial — Oslo is more expensive. It is silly to judge country only by its capital residents, as there are many tourists and all of the business is focused mostly on working with them. And every big city in any country has this sin. Anyway, I can tell you for sure — one day I will come back to Amsterdam and see it under my own microscope.
https://medium.com/alphateca/how-to-attract-investors-with-cookies-d7d783c805e
[]
2018-08-02 11:15:05.762000+00:00
['Startup', 'Cryptocurrency', 'Alphateca', 'Becryptoone']
Be Careful or You’ll Never Have Enough Money
Find your true source of happiness “Don’t think money does everything or you are going to end up doing everything for money.” — Voltaire The desire for money is a game of association. We feel fulfilled and excited when we get more money. But we never actually want the money. We want what it can buy. And for the most part, we don’t actually want to own many things. We just want the joy we get from using those things. So the trick is to figure out what actually makes you happy. What does your ideal life look like? Of course, you could always get a bigger house, softer bed sheets, and the fanciest toaster. But after a certain point, the added happiness you get from an 80inch TV over a 60inch one is insignificant. Especially if you’re breaking your back at a stressful job to afford it. Take note of what brings you happiness. Write it down if you have to. Remind yourself each day what you’re working towards, and don’t waste your time on anything else. It’s a lot harder to get lost in the rat race when you constantly remind yourself where your happiness truly lies. Set yourself up to love the journey “If the path be beautiful, let us not ask where it leads.”― Anatole France Success is relative. The personal fulfillment you get from success depends on your perspective. So a helpful strategy is to set achievable goals with milestones. Doing this allows you to celebrate all the victories along the way and enjoy the journey instead of bemoaning what you don’t have yet. When you fall in love with the process, it doesn’t become about money anymore. You feel fulfilled by the wins and you fight through the setbacks. In our obsessive wish to arrive, we often forget the most important thing, which is the journey. — Paulo Coelho How happy you are along the journey depends on your ability to celebrate what’s meaningful now and appreciate every moment as it comes. You shouldn’t put a ceiling on how far you can go. But falling in love with the process means you’ll stop when the destination isn’t worth the soul-wrenching trip. Giving back keeps you grounded in reality “No one has ever become poor from giving.” — Maya Angelou Don’t wait till you’re rich to start giving back. Whether it’s supporting your brother’s dreams or serving your local community, giving is the best way to keep you connected to what life is about. I admit. I struggle with this. When I’m trying to save for a car and my children’s future, it’s hard to spend money on others. Surely it makes more sense to wait till I’m in a better financial position before I help others, right? But if you create a habit of giving back when you don’t have much, it naturally follows you into your financial success. When helping others becomes part of your character, it translates into you being more grounded when you’re wealthier. With this habit, you won’t get lost in the chase. You’ll reach for the stars but stay rooted in what actually matters. Besides, what you do and where you go don’t matter nearly as much as who you’re with. Your relationships will give you the most happiness and fulfillment in life. And financially supporting others when they need it is a great way to show how invested you are in them. The stronger your relationships and your ability to be selfless, the less likely you are to lose sight of yourself.
https://medium.com/illumination-curated/be-careful-or-youll-never-have-enough-money-66b2c1a4c0b9
['Nathan Burriston']
2020-12-11 19:54:23.749000+00:00
['Self-awareness', 'Happiness', 'Money', 'Success', 'Self Improvement']
Free As a Bird
[This week, an important analysis of the Andean condor was published in Spanish and English. “Saving the Symbol of the Andes: A Range Wide Conservation Priority Setting Exercise for the Andean Condor (Vultur gryphus)” presents the work of 38 specialists from seven countries participating in an in-depth systematization of studies carried out on the distribution, ecology, and conservation status of the species along the Andean mountain range — from Venezuela to Argentina and Chile. The objective is to promote a conservation strategy at a continental level that ensures healthy populations of condors and recognizes the importance of working across boundaries for the high-flying and wide-ranging condor. The following story captures one way in which that strategy is being implemented.] In the age of social media sometimes a WhatsApp message can mean a great deal. As a 50something wildlife biologist I never thought I would write those words, but in mid-April 2019, messages appeared in my WhatsApp that lifted my mood and those around me. “Palca! She´s alive and well and flying all over the place!” I shouted through the office. Smiles and whoops abounded. The WhatsApp messages were from a Bolivian colleague, Diego Mendez, an accomplished wildlife biologist and PhD candidate at the University of Madrid. Diego’s doctoral thesis concerns the movement ecology of one of South America´s most iconic wildlife species and the very symbol of the Andes: the magnificent Andean condor (Vultur gryphus). Palca is an Andean condor — the first in Bolivia to provide data on the movements of a species renowned for its huge wingspan, prodigious soaring ability and wide-ranging behavior. Diego had received data from a solar-powered satellite tag and was updating the Palca Release Group on WhatsApp. Palca is an Andean condor — the first in Bolivia to provide data on the movements of a species renowned for its huge wingspan, prodigious soaring ability and wide-ranging behavior. Palca takes her first steps to freedom after being rescued in Palca near La Paz and cared for at the Vesty Pakos Muncipal Zoo in La Paz. Photo: Rob Wallace/WCS While there have been significant condor studies in Argentina, as well as in Ecuador, less is known about condors occupying the birds’ huge middle range in Bolivia and Peru. Palca has been helping fill in the gap, but the whoops of joy that day in my office in 2019 attest to the fact that were it not for a careful rehabilitation effort, this magnificent bird might never have played an important role as our research associate. For reasons still unknown, in late January of 2019, Palca crash landed near the town with which it shares its name, an hour’s drive from the hustle and bustle of La Paz. After injuring her chest, she was rescued by the townsfolk of Palca and taken to La Paz’s Vesty Pakos Municipal Zoo. With the help of zoo director Andrea Morales and her team of keepers and veterinarians, Palca began eating again. A little over a month later, health tests revealed she had fully recovered. Palca hopped out, surveyed her surroundings, and stretched her formidable wings. Everyone held their breath, and then she leapt and was gone, immediately gliding away on a mountain up swirl across the valley. Meanwhile, the Bolivian Andean Condor Working Group — made up of Diego, Andrea, Grace Ledezma from the zoo, Isabel Gomez from the National Natural History Museum, Juan Carlos Campero from the Bolivian Lawyers School and myself — developed a comprehensive release plan for Palca. 
We worked with the relevant environment authorities to ensure all adequate permits were in place before identifying a suitable release site and date. On the morning of Friday, March 8, 2019, on a mountain top near Palca with the resplendent Andean peak Illimani — the symbol of La Paz — in sight, dozens of schoolchildren gathered with municipal and community authorities, the Ministry of the Environment and Water, the mayors of La Paz and Palca, staff from the zoo, POFOMA, the local hospital, and national press. As community leaders performed a ritual and Palca waited in her covered cage, a pair of mountain caracaras appeared to investigate, followed by two condors swooping high above before disappearing into the distance. Many local people indicated this was a good sign, but everyone involved was nervous to see what would happen next. Palca’s vast movement highlights the unique conservation challenge Andean condors face and the complex conservation measures required to be effective across huge geographies. After several minutes of quiet calm, Palca´s main keeper approached the cage and opened the door. Palca hopped out and surveyed her surroundings, a transmitter tucked into a backpack harness. Cameras whirled and everyone held their breath as she stretched her formidable wings, then leapt and was gone — immediately gliding away on a mountain up swirl, across the valley and onward. The crowd gasped a collective sigh of relief and clapped and smiled. And then as if by magic, in the distance, two condors appeared and swooped to greet Palca before the three of them banked into the next valley. Bolivians are a beautifully superstitious people. Diego Mendez and Isabel Gomez of the National Museum of Natural History in Bolivia underline the conservation importance of each individual Andean condor with the mayor of Palca, Mr. Rene Aruquipa. Photo: Rob Wallace/WCS One year on, Palca is revealing just how far condors can fly in this stretch of the Andes. The data from the 80-gram solar-powered satellite transmitter comes in steadily and the female condor is documenting regular roosting sites for the species in this portion of its range — flying a 400 km stretch of the Andes including the entire Cordillera Real and Cordillera Tres Cruces, and south as far as Cochabamba, Torotoro, Challapata, and Lago Poopó. Palca’s vast movement highlights the unique conservation challenge Andean condors face and the complex conservation measures required to be effective across huge geographies. Worryingly, over the last few years Andean condor experts have begun to document poisoning of livestock carcasses, often retaliating against Andean carnivores such as the Andean fox or the puma. Scavenging condors succumb to the poison. For a species with fewer than 10,000 individuals estimated in the wild, these events can be catastrophic. And the natural rarity of condors highlights the importance of every individual, including Palca. She now represents hope for the species, and a potential communication mechanism for reconnecting and reemphasizing the unique cultural value of Bolivia’s national bird to the nation. Rob Wallace is a Senior Conservation Scientist with WCS (Wildlife Conservation Society).
https://wildlifeconservationsociety.medium.com/free-as-a-bird-fdd1edc08f87
['Wildlife Conservation Society']
2020-09-06 13:10:51.269000+00:00
['Condor', 'Environment', 'Vulture', 'Bolivia', 'Conservation']
B2B Marketing Methodology (Part 4): What Is the Right Way to Use Data? The Traps Behind Medium's Stats Dashboard and Facebook's Sweet Traffic
https://medium.com/y-pointer/b2b-data-medium-utm-9db39b453845
['侯智薰 Raymond Ch Hou']
2019-04-24 14:12:26.620000+00:00
['Management', 'Data Analysis', 'Business', 'B2B', 'Marketing']
Announcing SchemaMapper, a C# data integration class library
SchemaMapper is a C# data integration class library that facilitates the data import process from external sources having different schema definitions. It can: import tabular data from different data sources (.xls, .xlsx, .csv, .txt, .mdb, .accdb, .htm, .json, .xml, .ppt, .pptx, .doc, .docx) into a SQL table with a user-defined table schema after mapping columns between source and destination; replace many integration services packages with a few lines of code; and allow users to add new computed and fixed-value columns. Used technologies SchemaMapper is developed with .NET Framework 4.5 and uses many technologies to read data from different sources, such as: Microsoft Office Interop libraries to import tables from Word and PowerPoint - Json.NET to import JSON - HtmlAgilityPack to import tables from HTML - the Microsoft Access database engine to import data from Excel worksheets and Access databases. Step-by-step guide In this example, we will import multiple files that contain credentials for SQL Server instances saved by many users into one table in a SQL database. Source files (1) Flat text file File extension: .txt Columns: server, user, pass FilePath: D:\SchemaMapperTest\Password_Test.txt (2) Excel worksheet File extension: .xlsx Columns: SQL Instance, username, password, AddedBy FilePath: D:\SchemaMapperTest\Password_Test.xlsx (3) Access database table File extension: .accdb Table name: Passwords Columns: ID, Server Name, Login, Password, AddedDate FilePath: D:\SchemaMapperTest\Password_Test.accdb Expected SQL destination table Schema: dbo Table name: Passwords Columns: [User_Name] nvarchar(255), [Password] nvarchar(255), [Server_Name] nvarchar(255), [AddedDate] DateTime (contains the current date), [UserAndPassword] nvarchar(255) (concatenates user and password using a vertical bar | ) Initializing the SchemaMapper class To initialize the SchemaMapper class you should follow these steps: 1. Add SchemaMapperDLL as a reference and import the SchemaMapping and Converters namespaces: using SchemaMapperDLL.Classes.Converters; using SchemaMapperDLL.Classes.SchemaMapping; 2. Create a SchemaMapper object and pass the destination schema and table names as parameters: SchemaMapper smPasswords = new SchemaMapper("dbo", "Passwords"); 3.
Define the destination columns within the `SchemaMapper` class: //Define Server_Name, User_Name, Password columns SchemaMapper_Column smServerCol = new SchemaMapper_Column("Server_Name", SchemaMapper_Column.ColumnDataType.Text); SchemaMapper_Column smUserCol = new SchemaMapper_Column("User_Name", SchemaMapper_Column.ColumnDataType.Text); SchemaMapper_Column smPassCol = new SchemaMapper_Column("Password", SchemaMapper_Column.ColumnDataType.Text); //Define AddedDate column and fill it with a fixed value = DateTime.Now SchemaMapper_Column smAddedDate = new SchemaMapper_Column("AddedDate", SchemaMapper_Column.ColumnDataType.Date, DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss")); //Define UserAndPassword column with an expression = [User_Name] + '|' + [Password] SchemaMapper_Column smUserPasswordCol = new SchemaMapper_Column("UserAndPassword", SchemaMapper_Column.ColumnDataType.Text, true, "[User_Name] + '|' + [Password]"); //Add columns to SchemaMapper smPasswords.Columns.Add(smServerCol); smPasswords.Columns.Add(smUserCol); smPasswords.Columns.Add(smPassCol); smPasswords.Columns.Add(smAddedDate); smPasswords.Columns.Add(smUserPasswordCol); 4. Now, you should add all possible input names for each destination column //Add all possible input column names for each column smServerCol.MappedColumns.AddRange(new[] { "server", "SQL Instance", "Server Name" }); smUserCol.MappedColumns.AddRange(new[] { "username", "user", "Login" }); smPassCol.MappedColumns.AddRange(new[] { "Password", "pass", "password" }); 5. All unwanted columns should be added to the IgnoredColumns list //Sys_SheetName and Sys_ExtraFields are auto-generated columns added while reading the Excel file smPasswords.IgnoredColumns.AddRange(new[] { "ID", "AddedBy", "AddedDate", "Sys_Sheetname", "Sys_ExtraFields" }); Converting files into DataTables with a unified schema 1. Now we should convert the files into `DataTable` objects //Declare DataTables DataTable dtExcel; DataTable dtText; DataTable dtAccess; //Excel worksheet using (SchemaMapperDLL.Classes.Converters.MsExcelImport smExcel = new SchemaMapperDLL.Classes.Converters.MsExcelImport(@"D:\SchemaMapperTest\Password_Test.xlsx")) { //Read Excel smExcel.BuildConnectionString(); var lst = smExcel.GetSheets(); //Read only from the first worksheet and consider the first row as header dtExcel = smExcel.GetTableByName(lst.First(), true, 0); } //Flat file using (SchemaMapperDLL.Classes.Converters.FlatFileImportTools smFlat = new SchemaMapperDLL.Classes.Converters.FlatFileImportTools(@"D:\SchemaMapperTest\Password_Test.txt", true, 0)) { //Read flat file structure smFlat.BuildDataTableStructure(); //Import data from flat file dtText = smFlat.FillDataTable(); } //Access database using (SchemaMapperDLL.Classes.Converters.MsAccessImport smAccess = new SchemaMapperDLL.Classes.Converters.MsAccessImport(@"D:\SchemaMapperTest\Password_Test.accdb")) { //Build connection string and retrieve Access metadata smAccess.BuildConnectionString(); smAccess.getSchemaTable(); //Read data from Passwords table dtAccess = smAccess.GetTableByName("Passwords"); } 2. After reading data from the files, we need to change the table structures to match the destination table structure: smPasswords.ChangeTableStructure(ref dtExcel); smPasswords.ChangeTableStructure(ref dtText); smPasswords.ChangeTableStructure(ref dtAccess); Importing to SQL 1. To create the destination table we used the following command: string connectionstring = @"Data Source=.\SQLINSTANCE;Initial Catalog=tempdb;integrated security=SSPI;"; smPasswords.CreateDestinationTable(con); 2.
Finally, we have to insert the data into SQL. There are two methods to achieve that: (1) Insert using the bulk insert method smPasswords.InsertToSQLUsingSQLBulk(dtExcel, connectionstring); smPasswords.InsertToSQLUsingSQLBulk(dtText, connectionstring); smPasswords.InsertToSQLUsingSQLBulk(dtAccess, connectionstring); (2) Insert using a stored procedure with a table variable parameter smPasswords.InsertToSQLUsingStoredProcedure(dtExcel, connectionstring); smPasswords.InsertToSQLUsingStoredProcedure(dtText, connectionstring); smPasswords.InsertToSQLUsingStoredProcedure(dtAccess, connectionstring); Result
https://medium.com/munchy-bytes/announcing-schemamapper-a-c-data-integration-class-library-541dcfad4e2b
['Hadi Fadlallah']
2020-11-27 22:45:21.532000+00:00
['Sql Server', 'Csharp', 'Programming', 'Database', 'Data Integration']
Case Study: Devastating work of a malicious GitHub bot
A while back I was working for a mid-size client who used AWS as their infrastructure provider. For a long time, there had been little governance over the AWS account, with development teams having full access to the account functions and often provisioning their applications manually. As the cumulative ecosystem of the organization grew to around 300 EC2 instances, manual provisioning was being gradually phased out across teams in favor of automation, thanks to tools like CloudFormation, Elastic Beanstalk, and Ansible; however, the originally granted privileges remained in place. On one occasion, a set of AWS access keys accidentally leaked to a public GitHub repository, which brought a devastating result: a malicious bot found them and wreaked havoc across the organization. Timeline Below is the timeline of the incident, relative to the first event (timestamps in hh:mm format) 00:00 An AWS Access Key ID and Secret Access Key are published to a developer’s personal public repository on GitHub 00:01 Less than a minute later, a malicious bot scanning GitHub repositories finds the published keys. Using the keys, it creates its own AWS user with high privileges within the AWS account, which it then uses for all subsequent actions. The first task is to remove all existing users and disable existing access keys. At this point, the organization loses control over the account but is not yet aware of it. 00:02 The AWS Security team notifies the company’s IT team over email of a potential account compromise. The team confirms it and, with the help of AWS, recovers access to the account. There is much confusion about what has happened and what the impact is. There is no knowledge of the newly created malicious user and the actions it is invoking. 00:42 The bot starts deleting EC2 instances. To accomplish this task, it spins up a Lambda function that performs this ‘clean-up’ of resources from the inside. 00:48 All EC2 instances have been removed. The bot now attempts to spin up new instances (likely for the purpose of mining bitcoins), but is blocked by AWS’s malicious behavior recognition mechanism. A true war of the machines! 00:50 The IT team detects the malicious user and deletes it. From this point on, a long, difficult and highly manual recovery process takes place. Impact All production systems were impacted by downtime. Out of a total of 15 production applications, about a third were recovered on the same day, with the rest being unavailable for up to 3 days. The most problematic had been legacy services, in maintenance mode for more than a year and deployed manually. For some of them, the necessary skills were no longer present in the team. Sensitive data No sensitive data was compromised during the incident. Given the breadth of the privileges captured by the malicious bot, this can be considered luck, rather than anything else. Likely, stealing data was not the attackers’ focus. Slowdown The incident caused major disruption across all the teams, randomizing ongoing development and diverting resources towards analysis and recovery efforts. Additionally, for weeks to come, teams would be stumbling on components, like test services, still not recovered after the incident. This would lower productivity for many weeks. Lessons learned It is tempting to put the blame on the individual who leaked the access keys to the wide internet, but it is not productive.
Mistakes are in our nature, and rather than wasting energy on trying to eliminate them or find a scapegoat, use the time to establish solid guardrails. We, developers, are like ants — we look for the path of least resistance and follow it, so make it easier to do the right thing and incidents will be less likely. Least privilege principle I’m not a fan of overly focusing on preventive measures, as they tend to result in point solutions — disproportionately favoring the case at hand, which over time results in overly complex systems. However, there are well-established security patterns which should be followed. One of them is the ‘Principle of Least Privilege’, which requires that users and services can access only the resources they absolutely need. Fire drills Prevention, while a good driver for constant improvement, will never fully eliminate incidents. By assuming that something will eventually go wrong (and it will!), you give yourself the mental space to think about recovery as your first-class operations tool, rather than a plan B. Prioritizing improvement of Mean Time to Repair (MTTR) over Mean Time Between Failures (MTBF) therefore yields better results. For extra points, employ tools like Chaos Monkey or hold periodic ‘war games’, where various attack scenarios can be exercised. This is also a good opportunity to evolve and document recovery processes and ensure proper communication channels within the organization are readily available. Be sure to include all the services, especially the legacy ones. Legacy services Adopt a more aggressive strategy for legacy services. Either spend the effort to phase them out completely or invest in automation and monitoring. Being in limbo with old services — low operational maturity and no solid plan for sunsetting — is common and puts the organization in a very vulnerable position. Ensure complete recovery Naturally, production apps impacting real users take priority when performing a recovery. Reaching a ‘green’ state for all the production services is, however, not sufficient. It’s important to recover all components of the ecosystem that participate in the end-to-end development cycle. Think test services, CI agent pools, build monitors, etc. Without this work being prioritized, the team’s productivity will be taking unexpected hits over many weeks to come.
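To make the guardrail idea concrete, here is a minimal sketch in Python (an added illustration, not part of the original incident response) that uses boto3 to flag active IAM access keys older than a given age, along with when they were last used. The 90-day threshold and the audit_access_keys helper are assumptions for illustration; run periodically, this kind of audit shrinks the window in which a forgotten or leaked key stays usable.

```python
# Hypothetical guardrail sketch: flag active IAM access keys that are old, and show
# when they were last used. Assumes read-only IAM credentials are already configured.
import datetime

import boto3

MAX_KEY_AGE_DAYS = 90  # assumed policy threshold, not taken from the article


def audit_access_keys():
    iam = boto3.client("iam")
    now = datetime.datetime.now(datetime.timezone.utc)
    findings = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            for key in keys:
                age_days = (now - key["CreateDate"]).days
                if key["Status"] != "Active" or age_days <= MAX_KEY_AGE_DAYS:
                    continue
                last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
                last_used_date = last_used["AccessKeyLastUsed"].get("LastUsedDate")
                findings.append((user["UserName"], key["AccessKeyId"], age_days, last_used_date))
    return findings


if __name__ == "__main__":
    for user_name, key_id, age_days, last_used_date in audit_access_keys():
        print(f"{user_name}: key {key_id} is {age_days} days old, last used {last_used_date}")
```

A natural extension would be to publish the findings to a chat channel or ticket queue so that stale keys get rotated instead of silently accumulating.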
https://medium.com/dan-on-coding/case-study-devastating-work-of-a-malicious-github-bot-147a1c395850
['Dan Siwiec']
2019-04-07 21:44:16.815000+00:00
['Hacking', 'Software Development', 'Github', 'Security', 'AWS']
E pluribus unum
E pluribus unum is the motto of the United States of America. Check out most of our currency, and you’ll see it printed there. It means “from many, one” or “out of many, one.” This motto captures the reasons why I love this country and feel pretty fortunate to have been born here. Lately, though, I have been thinking about how Republicans and Democrats, our two major political parties in the USA, understand e pluribus unum differently. For me, these differences explain some of the dynamics of contemporary politics, and they also point to something missing from politics that we all can play a small part of filling in. On the Republican side, there is a very clear picture of the unum or the “one,” which I think accounts for a lot of the energy, enthusiasm and commitment of Republican voters to President Trump. I would define President Trump’s “one” as being rooted in long-lasting and quite powerful myths and stories about the United States. It’s a “one” formed out of the “many” images of the frontier, of manifest destiny, of the hard-working and uncomplaining farmer, steelworker and coal miner. It’s a community, but one where the greatest respect is reserved for the individual. President Trump’s “one” America values making your own way, lifting yourself up by your bootstraps, taking personal responsibility, and respecting traditions and cultural norms. Whatever anyone thinks about the truthfulness or moral dimensions of these qualities, we can all probably agree that they run deep and are deeply resonant for many people in the United States. I think the most crucial attribute, though, is that this “one” America that President Trump talks about is widely understood in the same way by his voters and supporters. People in my hometown of Washington, Pennsylvania largely share the same image of this “one” with people in Alabama. All President Trump has to say is “Make America Great Again,” and these folks are all on the same page. President Trump’s “one” is also largely geographically rural, demographically white and male, and competitive and combative in nature. He is not so attuned to some of the “many” who don’t see a place for themselves in this “one,” and understandably this angers these people. But for those who do feel part of this “one,” it’s a powerful message. President Trump uses this shared “one” to great effect when he conjures up or exaggerates threats to it — “threats” such as the immigrant caravan or people from majority-Muslim countries. If we think about the people and ideas each of us holds dear, and then we are faced with a threat to their purity and existence, it’s not too hard to get to a place where you’re willing to do most anything to protect them. It seems to me that President Trump’s supporters feel this way about their “one,” which for them encompasses the history, traditions, culture and the very existence of the United States. To them, that is something worth fighting for, and President Trump has skillfully portrayed himself as its champion. I believe President Trump was right when he said he could shoot someone in broad daylight on 5th Avenue and suffer no drop in support from his voters — it’s because there is something much bigger at stake for them such that they will tolerate bad behavior if it keeps their champion on the battlefield. The immigrant “caravan.” Photo: Flickr. The Democrats approach e pluribus unum in a different way. 
Democrats are attuned to the pluribus or the “many.” The Democratic coalition includes many diverse interest groups that are all advocating for something important. For example, many environmentalists vote Democratic because they perceive that party as most likely to advance their cause. The same can be said of a lot of other interest and identity groups. As a result, Democrats have a broad platform that seeks to advance the “many” and its many worthy causes. From my perspective, though, the Democrats are missing a clear definition of the “one” that is supposed to bring together the “many.” Support for Democrats is broad, but it’s not very unified. There are numerous specific policy handles or causes for which there is deep support, but this set of disparate pieces isn’t combined into something more than the sum of the parts. The narrative of the Democratic leadership after the midterms was highly focused on specific policy wins that the leadership hoped to bring forward, but it lacked a thread — a “one” — to tie it all together. As The New York Times put it, “At a celebratory news conference, Ms. Pelosi ticked through the issues she said Democrats intended to pursue: ‘lower health care costs, lower prescription drugs, bigger paychecks, building infrastructure, clean up corruption to make America work for American people’s interest, not for special interest.’” What “one” is the Democratic Party trying to build from those many causes? We each can interpret that for ourselves, but I think the lack of a single “one” makes it hard to build a movement. House Democratic Leader Nancy Pelosi. Photo: Flickr. I feel a desire to be part of something bigger than myself. That’s why I am an ardent Steelers fan, and a proud Brooklynite. I think most people have that desire to be part of a “one” while also keeping the traditions and complexities of the “many.” Balancing the “one” and the “many” is a tension in today’s globalized and interconnected world. I think we should forge a “one” that is far broader than President Trump’s, and we should certainly beware any politician who rallies some of the population against others who are arbitrarily excluded from their “one.” But we also need to engage in the conversation of defining what “one” we do want to emerge from the “many.” I believe Democrats make a mistake in focusing entirely on the “many” and never forming a picture of the “one.” I yearn for the wisdom of President Obama, who I think did the best job of any contemporary politician of embracing the entire motto of e pluribus unum. If you go back and listen to some of President Obama’s best speeches, such as his speech at the 2004 Democratic National Convention, then you will see how most of them follow a similar pattern. President Obama tells parts of his life story, which identifies and treats with respect different strands of the “many.” But then he always moves to an uplifting, inspiring portrait of the “one” that could emerge from the “many.” He acknowledges the diversity and beauty of the “many” as well as its mistakes and differences, all of which could be redeemed and made even greater if we could come together as “one.” President Obama’s “one” had a place for everyone. Former Pres. Barack Obama. Photo: Pixabay. You might be wondering: If President Obama’s approach was properly attuned to invoking a shared “one,” then how did we end up with Trump? I do not have the answer, but I have a few thoughts. 
President Obama may have come too soon in the sense that his “one” would struggle to get traction if we’re in the wrong framework for understanding the world. The concept of a zero sum game has a powerful hold in conditions of scarcity, by which I mean that if we perceive that access to key things like a good job, income, quality education or housing is scarce, then the imperative becomes fighting to get a share of the limited pie for yourself. In that paradigm, it is hard to believe in a “one” America that includes everybody — there isn’t enough to go around. Especially as government struggled to deliver many of the concrete things President Obama talked about in his campaigns, some people began to lose faith that a “one” that included everyone was practical. In the wake of the Great Recession, when some people felt scarcity was dominant and their own situation grew more precarious, and in light of the accelerating changes in our economy that have ensued from globalization and automation, I can see how some Americans would retreat into a more limited and tribal “one” like the “one” President Trump describes. The Public Interest Network is doing important work to build a movement that would embrace a new understanding of e pluribus unum. We see a future “one” where we come together to recognize our abundance and celebrate our ability to share it with each other; where we improve our quality of life and not just the quantity of our stuff; and where we all, together, find new meaning for life in a world that looks pretty different from that of our forbearers. I don’t think that “one” is something many people are already thinking about, but I have found in admittedly limited conversation that it’s a “one” that can get heads nodding and hearts pounding. This “one” may not be that different from what President Obama talked about, but if we can get into a new paradigm where we recognize society’s capacity for abundance, then more people might be able to believe in it. We are well-positioned to do this work, because our organization understands the tensions well. We strive to be not just nonpartisan, but transpartisan. We encourage and seek out debates and disagreements so that we can incorporate more points of view into our politics. This project is not an easy one, but I take inspiration from the words and style of President Obama, who never stopped trying to forge a more inspiring “one”:
https://medium.com/the-public-interest-network/e-pluribus-unum-4235625675ac
['Samuel Landenwitsch']
2018-11-26 20:40:50.134000+00:00
['Storytelling', 'Activism', 'Obama', 'Nonprofit', 'Politics']
Quick start of your React application development with Yeoman
Yeoman is a generator ecosystem. In this ecosystem, there are many different generators for different tasks, not only for creating web applications. You can find a ready-made generator or create your own. Let’s dwell on the second option. First of all, we need an application template that we want to generate. For the test, you can take my ready-made template. Next, you need to install Yeoman with the command: npm install -g yo Then we can start creating our generator. First, let’s create and go to the directory of our generator. This directory must be named “generator-name” (where “name” is the name of your generator). By this name, Yeoman will search for a generator using the file system. We also need a “package.json” file in which we will configure the generator since it’s essentially a Node.js module. The “name” property must start with the “generator-” prefix. The keywords property must contain “yeoman-generator” and the repo must have a description to be indexed by Yeoman generators page. The “files” property should contain an array of files and directories that your generator uses. Yeoman uses a file tree to generate the template. One generator can have multiple templates, for example: ├───package.json └───generators/ ├───app/ └───templates/ └───template/ └───index.js └───router/ └───templates/ └───template/ └───index.js The generator should be in the “app” directory by default and can be started with the command — “yo name”, where «name» is the name of your generator without a prefix. The rest of the generators can be started with the command — “yo name:subcommand”, where “subcommand” is the name of your other generator. In the example above, the generator will provide the “yo name” and “yo name: router” commands. Now let’s proceed directly to writing the generator. For simplicity, Yeoman provides a basic generator from which you can inherit yours. In the generator’s “index.js” file, here’s how you extend the base generator: var Generator = require('yeoman-generator'); module.exports = class extends Generator {}; You can override constructor functions or add your own functionality. The main thing to know is that the generator methods are called in a specific order. Here is the order of calling methods: initializing — Your initialization methods (checking the current state of the project, getting configurations, etc.). prompting — Where you prompt users for options (where you’d call this.prompt()). configuring — Saving configurations and configure the project (creating .editorconfig files and other metadata files). default — If the method name doesn’t match a priority, it will be pushed to this group. writing — Where you write the generator specific files (routes, controllers, etc). conflicts — Where conflicts are handled (used internally). install — Where installations are run (npm, bower). end — Called last, cleanup, etc. Let’s start in order. Install the dependencies our generator will need. npm i inquirer-npm-name lodash.merge parse-author validate-npm-package-name Now we will write an initialization method where we will define the default values for the manifest of the node.js module. Next, we will request information about the application that we want to generate (title, description, author’s name, and mail). So let’s define the prompting method. As you noticed, here we checked the name of the module. Now comes the fun part. We need to generate our template, in the current example I used my react-app-template. Yes, we need a writing method. 
Finally, we need to install the dependencies for the generated template. For this, we will use the installation method. To create your own template, you only need to override these methods, but if you have a desire or need for additional customization, you can read the full Yeoman documentation. Now let’s use our generator. To do this, install the generator globally, being in its directory, run the command: npm link Create project directory and go there with command: mkdir <project_directory> && cd $_ Then use generator with: yo react-app The complete generator code can be viewed on my Github page. Thanks for reading!
https://denis-voronin.medium.com/quick-start-of-your-react-application-development-with-yeoman-7d3cadd2f36f
['Denis Voronin']
2020-10-29 10:25:28.055000+00:00
['Generator', 'React', 'Yeoman']
What Influences Quality Improvement Processes in Health Care?
The COVID-19 pandemic has forced health care services and systems to make substantial, rapid changes to the ways in which they operate. It is likely that many of these changes will be retained in the post-COVID-19 era. Using digital technologies to access non-urgent health care services, for example, is unlikely to be fully relinquished in favour of a return to in-person appointments, although some patient-health care provider interactions will inevitably continue to require face-to-face consultations. Doing things differently opens up opportunities for doing things more effectively and efficiently, provided that the risks associated with new models of care are identified and successfully managed. For example, effective transition to remote patient consultations and monitoring at scale requires the training of frontline staff to ensure that high quality is maintained. This includes training to avoid the risks that remote consultations and monitoring can give rise to, such as missing cues from patients that may be picked up more readily in face-to-face interactions. It also means evaluating these new ways of doing things, to ensure that they work as well as intended for all the groups involved. This evolution in service delivery offers important opportunities for improving care quality and patient experience in many areas — remote maternity care or the remote monitoring of patients with diabetes being just two examples. Making the most of these opportunities, it will be important to ensure that the evidence base on how quality improvement happens and what influences its success is incorporated into decisionmaking. Recent RAND Europe research considers six key influences on improvement processes that need to be in place to support quality improvement in health care organisations. The insights from this research may be particularly pertinent to engage with at the present time, as health care services try to respond to the pressures of the COVID-19 pandemic and move to new service delivery models while sustaining some traditional ways of operating. Doing things differently opens up opportunities for doing things more effectively and efficiently, provided that the risks associated with new models of care are identified and successfully managed. The six influences are: leadership; relationships and interactions that support an improvement culture; skills and competencies; patient and public involvement, engagement, and participation; using data for improvement purposes; and working as an interconnected system of individuals and organisations, influenced by internal and external contexts. We discuss what each of these mean in practice. They represent key aspects of the social and organisational context for quality improvement in health care service provider organisations. Leadership Effective leadership in the context of supporting health care quality improvement is characterised by sustained and continuous engagement from different types of leaders and improvement champions representing diverse health care specialities, multiple levels within organisational hierarchies, and clinical, managerial, and executive dimensions of health care leadership. The literature suggests that leaders should develop and disseminate a compelling narrative for the long-term strategy and value of any planned improvement activity, with clearly articulated roles and responsibilities for both themselves and those they seek to lead. 
This is central to cultivating staff trust in the values, vision, and expertise of its leadership. Importantly, different types of leadership are needed for different contexts and phases of improvement, and it is important to find a balance between leadership styles. Overly hierarchical leadership risks disrupting the community ethos that can help drive improvement activity. On the other hand, over-reliance on voluntary social linkages alone can put quality improvement communities at risk of disintegration. Relationships and Interactions That Support an Improvement Culture A culture of improvement can be developed and sustained by fostering supportive relationships and regular interactions between all individuals and groups involved in improvement activity. These relationships are most effective when characterised by open discussion, transparency, sustained collaboration, and feedback to support continual learning. In addition, improvement processes can be influenced by exchanging learning between organisations, by creating a shared understanding of the benefits that can accrue from improvement activity among staff in an organisation, as well as a shared understanding of the challenges that can be experienced along the way and how these might be addressed. Intra- and inter-organisational interactions can be supported by developing a clear strategy that considers what to communicate, to whom, how, and when. Skills and Competencies Improvement processes rely on appropriately resourced staff who are trained in the requisite technical and social skills — both are important to make improvement work in a real-world context. This is vital for staff at all levels, from those at the coalface of improvement to leadership and senior executives, because improvement is a collective and socially constructed phenomenon. Educational components relevant for health care staff can be integrated into the design of an improvement initiative in many ways, for example in the form of simulations, scenarios, lectures, workshops, role-play, and/or experiential learning with feedback. Such training can have a more positive influence on improvement processes if it is provided regularly, for example as part of continuous professional development. Improvement processes rely on appropriately resourced staff who are trained in the requisite technical and social skills. Using Data for Improvement Purposes Data can help to identify improvement needs, inform the design of improvement interventions and implementation strategies, and support monitoring, evaluation, and learning. Good evaluation is central to improvement, but it is not possible without access to accurate and relevant data on the quality of care. Organisational culture and staff attitudes towards data and evidence influence the extent to which they are used in improvement. This includes whether staff see data as relevant and meaningful, and therefore helpful in supporting their individual roles and collective goals, and whether they trust data quality, accuracy, and the credibility of its source. The effectiveness of data in guiding improvement activity is also influenced by when it is provided, to whom, and how. Feedback must be timely to have traction, and data needs to be communicated in user-friendly ways tailored to each purpose and audience. 
Patient and Public Involvement, Engagement, and Participation Patients, carers, and the public can contribute to improvement in diverse ways, for example: in patient and public involvement roles (actively advising on the design, implementation, or evaluation of improvement initiatives); in patient engagement roles (receiving and engaging with information and knowledge about improvement efforts); and as participants in the delivery of an improvement study or improvement initiative. Achieving meaningful contributions from patients and the public requires a clearly communicated strategy about when and how their contributions can add value to improvement efforts, clear roles and responsibilities, feedback, and recognition of patient and public contributions. Working as an Interconnected System of Individuals and Organisations, Influenced by Internal and External Contexts Taking account of local history and context when planning improvement activities can help to ensure more effective intervention design and implementation. An organisation’s internal management and governance approach as well as the external context (e.g. policy mandates, payment regimes, reporting structures in the health system) can all influence how committed clinicians are to quality improvement. Furthermore, regular interaction between different parts of the health care system (primary, acute, community, and social care) can aid improvement efforts when there is a high degree of interdependence and need for coordination between activities happening in different parts of the system. Taking account of local history and context when planning improvement activities can help to ensure more effective intervention design and implementation. Quality improvement requires attention to all the interdependent influences discussed above. This can be a significant ask for organisational leadership that is pressed for time and resources — all the more so during a pandemic. However, nurturing the various interrelated aspects of the social, cultural, and organisational environment needed to support quality improvement in a cohesive and coordinated way matters if improvement efforts are to lead to tangible and sustained results. It is also conducive to building lasting organisational capabilities for improvement. For example, embedding remote consultations and remote monitoring into ongoing clinical practice during and after the COVID-19 pandemic is both a service transformation challenge and an improvement challenge. This is because of the scale of transition required across the health and care system in relation to the past. Staff training, for instance on how to manage risks associated with remote patient consultations and monitoring, is likely to be important to ensure that it is done safely and well. Improvement in this space also requires the implementation of a secure data and ICT infrastructure that can effectively support safe and high-quality remote care — for example to upload patient photographs for clinical diagnosis. Patient and public engagement is necessary to understand what is acceptable, what works, where risks lie, and for whom remote interactions do not work and risk exclusion — such as for people with limited access to or experience of video conferencing. And working as an interconnected system is required to ensure that activities remain coordinated between primary, acute, and social care. 
All of this relies on an effective and diverse group of leaders and staff committed to a culture of improvement and to delivering high quality care. Periods of rapid change offer both opportunities and challenges for health care quality improvement. Understanding the building blocks that need to be in place to support improvement processes may help those seeking to embed improvement capabilities and capacity into their organisations, both as we emerge from the COVID-19 pandemic and beyond. Gemma-Claire Ali and Emily Ryen Gloinson are analysts working in the area of innovation, health, and science at RAND Europe. Sonja Marjanovic is RAND Europe’s Healthcare Innovation, Industry, and Policy director.
https://medium.com/rand-corporation/what-influences-quality-improvement-processes-in-health-care-33be87eb8989
['Rand Corporation']
2020-12-11 23:52:58.266000+00:00
['United Kingdom', 'Healthcare', 'Coronavirus', 'Telemedicine', 'Covid 19']
Don’t drown in data!
If data is the new water, check out our latest infographic that will sweetly explain how not to drown in it! Originally published at https://www.acrotrend.com on October 23, 2019.
https://medium.com/acrotrend-consultancy/dont-drown-in-data-1a4d375c950c
['Acrotrend Consultancy']
2019-10-28 14:19:23.946000+00:00
['Data Quality', 'Business Intelligence', 'Data Science', 'Data', 'Data Visualization']
Tips for Better Learning
Retention “If you can’t explain it in simple terms, then you don’t understand it.” — Richard Feynman If you want to retain the knowledge you have learned, the best way is to teach others. Teaching does not necessarily take the form of a lecture. Share the information with your colleagues or post it on a forum and create a conversation around it. That’s it — you have started teaching. Eventually, you will end up teaching someone or learning from others. Teaching is the ultimate test case for your knowledge. Can you convey it to someone else? Learning pyramid 1. People retain 90% of what they learn when they teach someone else/use it immediately. 2. People retain 75% of what they learn when they practice what they learned. 3. People retain 50% of what they learn when engaged in a group discussion. 4. People retain 30% of what they learn when they see a demonstration. When we start to distill a concept into its basic parts, it not only boosts understanding but also fills knowledge gaps. If you feel any gaps, go back to consumption and then return to retention.
https://medium.com/better-programming/tips-for-better-learning-163c6995fe94
['Madhankumar J']
2020-06-01 17:34:35.115000+00:00
['Education', 'Learning', 'Programming', 'Startup', 'Learn By Teaching']
Design is diversity: it’s time to talk about our role as designers
di·ver·si·ty: the condition of having or being composed of differing elements, variety; especially: the inclusion of different types of people (as people of different races or cultures) in a group or organization. Dear reader, We live in a world undergoing intense transformation. A world that has awaken, as I like to believe, to the importance of empathy and respecting the ones around us. We also live in a world that has seen a lot of resistance to openness and inclusion these days. The rise of political views that tend to draw lines and give different treatment to human beings that were born in certain regions, belong to certain races, or share certain religious beliefs. Well, scratch that. We don’t just “live” in this world. As designers, we spend most of our day imagining and building experiences that, when added up, take a big portion of people’s days and affect a lot the relationships they have with other people and with the world around them. We design sign up forms that ask people to define their ethnicity. We design profile pages where people define how they want to be seen in the world. We also design online forums, medical forms, services for citizens, social interactions, dating apps, learning platforms — the list is huge. Aren’t we somehow responsible for more inclusive, diverse experiences? A series of stories about Diversity and Design It’s been proven that, from a practical (not to mention moral) standpoint, diversity and inclusion within the field of design lead to more innovation through problem-solving, whether in service of business or society. Isn’t that what design is all about? Well, but the needle isn’t moving as fast as expected. In the United States, approximately 86% of professional designers are Caucasian, according to the American Institute of Graphic Arts (AIGA). This represents only small strides since the 1990 AIGA symposium named “Why Is Graphic Design 93% White?”. And race is only part of the picture. Diversity in design means diversity of experience, perspective and creativity — otherwise known as diversity of thought — and these can be shaped by multiple factors including race, ethnicity, gender, age, sexual identity, ability/disability and location, among others. “Diversity may be a more popular buzzword in discussions about design education, conferences and icons, but without inclusive gestures by hiring managers and businesses, senior designers and agencies, educators and other role models, individuals from underrepresented groups entering and remaining in design will remain firmly in the minority.” – Antionette Carroll That’s why Caio Braga and I decided to start this series. The topic of Diversity and Design is becoming increasingly popular in conferences, meetups and other design events around the world. But we wanted to bring the topic from the stage to the inside of the company you work for. Through you. Information leads to self-reflection, that leads to discussion, that leads to transformation. That’s what we have always believed. Diversity generates diversity This is a twofold story. First, how do we ensure that, as designers, we are surrounded by a diverse group of colleagues that will constantly, and organically, challenge our assumptions on the very little we know about the world? Second, once we have that team in place, how can we use design to enable more inclusive experiences for the users of our products? We are not experts in Diversity by any chance. The idea here is that we will learn about this topic together. 
In the months of research leading up to the start of this series, we realized that there’s a lot to be learned about the topic, and that there are incredibly talented people out there who can share their knowledge and point of view with us. So we asked for help. A series about Diversity has to have diverse perspectives. Over the course of April, we will pause a bit on our day-to-day posts. No “tutorials on how to use Sketch”. Or “ten tips for tech handoff”. Less buzzwords like “chatbots”, “artificial intelligence” and “VR” around here in the next coming weeks. More “equality”, “bias”, “intentionality”, “difference”. But it’s for a good reason. We hope you enjoy the journey, Fabricio Teixeira
https://uxdesign.cc/design-is-diversity-its-time-to-talk-about-our-role-as-designers-323781b10b6f
['Fabricio Teixeira']
2017-04-30 23:30:43.869000+00:00
['User Experience', 'Diversity In Tech', 'Design', 'UX', 'Diversity']
Post Alpha Release Updates!
After a tremendously positive response to the Alpha PINT App, Beta development kept us on our toes. What have we been up to since the Alpha Release? · Our core developers worked on each piece of feedback received during the Alpha Launch campaign, and our users are happy with how things look now. · We are on the verge of the Beta Launch. · We worked to enhance PINT App security and got in touch with reputed organizations to put PINT security through certified security tests. Our team’s confidence is skyrocketing with a positive go-ahead on security standards from reputed organizations. · Leadership and developers brainstormed over the need for a PINT Peer-to-Peer Marketplace, and the PINT Peer-to-Peer Marketplace has reached the first stage of development. The successful PINT Alpha launch kept us busy with Venture Capitalists and Agile Investors. We have a huge announcement coming up. Stay Tuned!
https://medium.com/bitfia/post-alpha-release-updates-745a0b945a25
['Bitfia Labs']
2018-04-13 09:24:15.172000+00:00
['Startup', 'Cryptocurrency', 'Bitcoin', 'Blockchain']
Pass By Reference vs. Pass By Value
Overview When using variables in code there are 2 different ways the data can be used. When you pass a variable to a function it is either a pass by reference or a pass by value. Pass By Reference Pass by reference is when the memory address is passed to the function. This means that whatever changes happen to the variable will affect all other variables pointing to that memory address. The following code is in C++. #include <iostream> using namespace std; void increment(int &x) { x++; } int main() { int num = 10; cout << num << endl; increment(num); cout << num << endl; return 0; } >> 10 >> 11 As you can see, when you change the value in a function, the variable outside is changed as well. In C++ you tell the function that it is pass by reference by adding the “&” in front of the variable name. Pass By Value Languages like Java are pass by value by default. Pass by value means that when you change the variable in a function it does not affect the variable outside. The following code is in Java. public class Increment{ public static void main(String[] args){ int x = 10; System.out.println("Before: " + x); increment(x); System.out.println("After: " + x); } static void increment(int x) { x++; System.out.println("Function: " + x); } } >> Before: 10 >> Function: 11 >> After: 10 As you can see, even though the variable was modified inside the function, the variable itself outside does not change. Most languages are pass by value by default. When you pass a variable that contains a primitive data type, it is passed by value. But when the variable is an object, any changes made to that object are applied to all variables referencing it. Conclusion You should be aware of pass by reference while programming. When passing lists, arrays, or objects, any changes made in a function can have unintended effects. C++ has pointer variables, which point to memory addresses and behave like pass by reference. Usually you can decide whether or not to use pass by reference by looking at what the function should return. If it returns one value it should be pass by value. If it returns two or more distinct values then it should be pass by reference.
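The closing caveat about lists, arrays, and objects is easy to demonstrate in Python (an added illustration, not from the original article). Python passes object references by value, so mutating an argument is visible to the caller, while rebinding the parameter name is not:

```python
def append_item(items):
    # Mutates the object the caller passed in; the change is visible outside.
    items.append(99)


def rebind_items(items):
    # Rebinds the local name only; the caller's list is untouched.
    items = [0, 1, 2]


numbers = [10, 20]
append_item(numbers)
print(numbers)   # [10, 20, 99] - the mutation leaked out to the caller
rebind_items(numbers)
print(numbers)   # [10, 20, 99] - rebinding inside the function had no effect
```

This is why defensive copies (for example, passing list(items) instead of items) are a common habit when a function must not affect the caller's data.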
https://medium.com/dev-genius/pass-by-reference-vs-pass-by-value-bd37436a6b00
['Daniel Liu']
2020-12-09 21:19:45.375000+00:00
['C Plus Plus Language', 'Programming', 'Java', 'Front End Development', 'Backend Development']
Regular update from Ubcoin: September 2018
Product Development, Token Updates and other Ubcoin news — a summary Product Development An important milestone was reached: we released both iOS and Android Ubcoin Market apps in three languages: English, Korean and Indonesian. It is fully workable and includes the escrow service (protecting both buyers and sellers) and integration with Telegram. Download iOS app: https://itunes.apple.com/ru/app/ubcoin-market-cryptocurrency/id1410295111?mt=8 Download Android App: https://play.google.com/store/apps/details?id=com.ubcoin The next release will be available within a few weeks. It will include filter, categories and search options and Telegram bot improvements. We are working on loyalty program development and are going to discuss it with the community soon. We are also preparing a very important development update related to the exchange process in the app. It should influence greatly the competitive position and potential of the whole project. We will announce it in the nearest future. Token news UBC token is available on three exchanges: COSS, LATOKEN and IDEX https://exchange.coss.io/exchange/ubc-eth https://wallet.latoken.com/market/Crypto/ETH/UBC-ETH https://idex.market/eth/ubc On September, 27 UBC reached $1 mln daily trading volume on COSS exchange. We also participated in a voting contest on Indodax exchange. Ubcoin collected more than 29 000 votes and will take part in the next few voting rounds (all votes from previous rounds are saved for the next ones). Marketing You can read about Ubcoin Market app features and plans in English and Korean articles: https://www.coinspeaker.com/2018/09/20/ubcoin-ubc-releases-ios-app-ebay-like-crypto-marketplace/ https://coinidol.com/ubcoin-ubc-releases-ios-app-ebay-like-crypto-marketplace/ https://www.coinpress.co.kr/2018/09/23/10037/ http://cointoday.co.kr/2018/09/21/ubcoinubc-토큰-ios-앱-출시-ebay와-같은-온라인마켓-출시/ In a video interview Stan Danysh (company Chief Operating Officer) is explaining ideas behind Ubcoin Market and the nearest development plans of Ubcoin team: https://youtu.be/DnyX6LA_Zu8
https://medium.com/ubcoin-blog/regular-update-from-ubcoin-september-2018-e76afde614b9
['Ubcoin. Cryptocurrency Reimagined']
2018-10-04 16:44:09.901000+00:00
['Android', 'Ubcoin', 'Ubcoin Product', 'Development', 'Ubc']
A Five Minute Overview of Amazon SimpleDB
Sometimes we are working on a project where we need a data store, but the complexities of Relational Database Service (RDS), DynamoDB, DocumentDB, et al are more than what is needed. This is where Amazon SimpleDB becomes a valuable resource. https://open.spotify.com/episode/77BybWgy6VHfCxS2LXrb8V?si=ehEKXoHPTVqhlmYkGoHbyw SimpleDB is a NoSQL database. NoSQL databases are not new, having been around since the 1960s. The term NoSQL can have several different meanings, from “non-SQL”, referring to the lack of relational support in the database, to “not only SQL”, meaning the database may also support Structured Query Language (SQL) (Wikipedia). AWS has a number of databases to meet the needs of your project. If you look in the AWS Management Console, the Database section lists: Relational Database Service DynamoDB ElastiCache Neptune Amazon QLDB Amazon DocumentDB Amazon Keyspaces Amazon Timestream Did you notice SimpleDB is missing from the list? This is because there is no interface to SimpleDB through the console. SimpleDB tables, which are called domains, are created programmatically using the CLI, SDK, or web services requests, and all operations are performed through those interfaces. Why use SimpleDB? Database management is a science of its own. Schema designs, Entity-Relationship models, query optimization, and the day-to-day management breed complexity into a project. And every database or database engine is unique in its own right. SimpleDB removes the complexity of database management by being NoSQL and having no administrative overhead. The AWS documentation states that “Amazon SimpleDB is optimized to provide high availability and flexibility, with little or no administrative burden” (Amazon SimpleDB). The SimpleDB architecture is designed to be highly available, by automatically creating geographically distributed copies of your data. If one replica fails, another is seamlessly used to access your data. Because there is no rigid schema to support, changing the attributes needed to support your project is simply a matter of adding the additional columns, which are called attributes in SimpleDB. And SimpleDB is secure, using HTTPS as the transport and integrating with IAM to provide fine-grained control over the operations and data. Some of the sample use cases for SimpleDB include logging, online gaming, and S3 object metadata indexing (Amazon SimpleDB). With that introduction out of the way, let’s look at working with SimpleDB using the Software Development Kit. Working with SimpleDB using the SDK The examples in this section use Python but are explained so you don’t need to know Python to follow them. In case you don’t know, the Python 3 SDK is called boto3. Connecting to the SimpleDB Service Before we can work with SimpleDB, we have to establish a connection to the service. try: session = boto3.session.Session() except Exception as error: logging.error("Error: cannot create service session: %s", error) raise error try: client = session.client("sdb", region_name="us-east-1") except Exception as error: logging.error("Cannot connect to %s in %s:%s", service, region, error) raise error The first try block creates a session, which can be used to create connections to multiple services if needed, while the second try block creates a connection to the SimpleDB service. If the session or client cannot be established, then an error is raised to the calling function. Once the client connection to the SimpleDB endpoint has been created, we are ready to work with the service.
Creating a SimpleDB Domain Before we can work with data, we have to create a domain if we don’t already have one. This is done using the create_domain API call. try: client.create_domain(DomainName=domain) except Exception as error: raise error The single argument to create_domain is the domain or table name. Domain names must be unique within the account. Initially, up to 250 domains can be created, and it the user’s responsibility to determine how to shard or partition the data to not exceed the 10 GB hard limit on domains. With the domain created, we can now insert some data. Listing the Available Domains We will eventually want to see all of the domains we have created. We can use the list_domains API to obtain the list. this is best done using a paginator, allowing the retrieval of all of the domains without worrying about the maximum number of retrieved items being reached. token = None domain_list = [] # create the paginator for the list_domains API try: paginator = client.get_paginator('list_domains') except Exception as error: raise error # create a page iterator which returns 100 items per # page try: page_iterator = paginator.paginate( PaginationConfig={ 'PageSize': 100, 'StartingToken': token } ) except Exception as error: raise error # work through the items on each page try: for page in page_iterator: # for each item, add the domain to the # domain_list for pg in page["DomainNames"]: domain_list.append(pg) # see if we have another page to process try: token = page["nextToken"] except KeyError: break except Exception as error: raise error # return the list of domains to the calling function return domain_list Using a paginator regardless of what language you are working with is a good idea because you are not limited to the maximum number of items the API for your programming language returns. When this code executes, the result is a list of domains which can then be displayed. Inserting Items into the Domain If you have a lot of attributes, preparing the data to insert into the domain can be a little tedious. We’ll come back to that in a minute. Inserting items into the domain uses the put_attributes function. try: response = client.put_attributes( DomainName=domain, ItemName=item, Attributes=attributes ) except Exception as error: logging.error("insertion {domain}: %s", error) raise error We have to specify the domain we are inserting the item into, the name of the item, and the attributes. The item name must be unique in the domain. If the item name already exists, then SimpleDB will attempt to update the existing item with the attributes provided. I mentioned defining the attributes can be a little tedious. This is because attributes are defined as name-value pairs. In Python, this would look like attributes = [ { "Name": "attribute1", "Value": "value1" }, { "Name": "attribute2", "Value": "attribute2" }, { "Name": "attributeN", "Value": attributeN }, ] Therefore, the more attributes, the more tedious it gets. However, if your data is already stored in a Python dictionary, then creating the attributes is simple. attributes = [] for key, value in some.items(): attributes.append({"Name": key, "Value": str(value)}) This brings up an important point: SimpleDB doesn’t understand any data type other than a string. If your data includes things like integer and boolean values, they must be represented as strings when stored in SimpleDB. The second point is the third field in the attribute definition: Replace. 
If you are updating an item with the put_attributes action, adding the Replace field with a value of True tells SimpleDB to overwrite the existing value of that attribute rather than add another value to it.

attributes = [
    { "Name": "attribute1", "Value": "value1", "Replace": True }
]

Domain Metadata

Before we look at retrieving data from our SimpleDB domain, let's look at how we can get information about the domain using the domain_metadata function. This function allows you to determine when the domain was created, the number of items and attributes, and the size of those attribute names and values. Assuming we already have a client connection to SimpleDB, we can do the following (MB, GB, HALF, and THRESHOLD are constants defined elsewhere in the script for one megabyte, one gigabyte, 50% of the 10 GB domain limit, and 90% of that limit):

try:
    response = client.domain_metadata(
        DomainName=domain
    )
except Exception as error:
    logging.error("%s: %s", domain, error)
    raise error

print(f"Domain: {domain}")
print(
    f"Domain created on {datetime.datetime.fromtimestamp(response['Timestamp'])}"
)
print(f"Total items: {response['ItemCount']}")
print(f"Total attribute names: {response['AttributeNameCount']}")
print(f"Total attribute values: {response['AttributeValueCount']}")

storage_used = response['ItemNamesSizeBytes'] + response['AttributeNamesSizeBytes'] + response['AttributeValuesSizeBytes']
print(
    f"Total Domain size: {storage_used} bytes {storage_used/MB:.2f} MB, {storage_used/GB:.2f} GB"
)

if storage_used >= THRESHOLD:
    print(
        "The domain size is 90% of the maximum domain size. Inserts into the domain will fail when the maximum size is reached."
    )
elif storage_used >= HALF:
    print("The domain size is 50% of the maximum domain size")

If we execute this on the sample SimpleDB domain I am using for a project, we see:

Domain: Assessments
Domain created on 2020-10-17 13:05:01
Total items: 31301
Total attribute names: 91
Total attribute values: 2849831
Total Domain size: 5477676 bytes 5.35 MB, 0.01 GB

There are indeed 31,301 items in the domain with a total of 91 unique attribute names. The number of attribute values is roughly the number of unique attribute names multiplied by the total number of items; here there are 2,849,831 total attribute values in the domain. These attributes are all text and only use 5.35 MB, so the average size of an item, including its attribute names and data, is about 175 bytes. This is the primary reason for using SimpleDB in this project. It is fast, small, and as we will see a little later, inexpensive. It is also a good example of why RDS and DynamoDB are not good fits here: the operational cost is just not reasonable for the amount of data being consumed. At this point, we can create a SimpleDB domain, insert items, and retrieve the metadata for the domain. Let's look at retrieving data from the domain.

Retrieving Items from the Domain

There are two methods for retrieving data from your domain: get_attributes and select. If you already know the item name, then you can use the get_attributes function to retrieve the attributes for that one item. However, if you don't know the item name, or want to retrieve all of the items meeting specific criteria, you use the select function. The select function works similarly to the SQL SELECT command, allowing you to retrieve the desired attributes (columns) for the items (rows) matching the criteria specified in the select statement.
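The article does not show get_attributes itself, so here is a brief sketch of what a lookup by item name could look like. It assumes the client and domain variables from the earlier snippets and borrows an item name from the select output shown below; the AttributeNames list is optional and can be omitted to fetch every attribute of the item.

try:
    response = client.get_attributes(
        DomainName=domain,
        ItemName="20180717230440",               # an item name we already know
        AttributeNames=["BirthYear", "Gender"],  # omit to retrieve all attributes
        ConsistentRead=True
    )
except Exception as error:
    logging.error("get_attributes on %s: %s", domain, error)
    raise error

# response["Attributes"] is a list of {"Name": ..., "Value": ...} pairs
for attribute in response.get("Attributes", []):
    print(attribute["Name"], "=", attribute["Value"])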
Here are some examples using the AWS CLI. Find out how many items are in the domain (which can also be accomplished using the domain_metadata function):

aws sdb select --select-expression "select count(*) from Assessments"

{
    "Items": [
        {
            "Name": "Domain",
            "Attributes": [
                { "Name": "Count", "Value": "31301" }
            ]
        }
    ]
}

Retrieve a specific attribute:

aws sdb select --select-expression "select BirthYear from Assessments"

Retrieve a group of attributes:

aws sdb select --select-expression "select BirthYear, Gender from Assessments"

If we look at the last example, the response from SimpleDB looks like:

{
    "Items": [
        {
            "Name": "20180717230440",
            "Attributes": [
                { "Name": "BirthYear", "Value": "1981" },
                { "Name": "Gender", "Value": "male" }
            ]
        },
        {
            "Name": "20170712184415",
            "Attributes": [
                { "Name": "BirthYear", "Value": "1974" },
                { "Name": "Gender", "Value": "male" }
            ]
        },

For each item found by the select statement, you get the item Name and the values for the specified attributes. There are no indexes in SimpleDB. This means retrieving all of the affected rows can be slow. For example, the command

aws sdb select --select-expression "select BirthYear, Gender from Assessments"

takes approximately 10 seconds for the 31,301 items using the CLI. The same request using the SDK takes 1.25 seconds. If we want to put this into a Python function, we could do this:

selected = []
token = None

try:
    paginator = client.get_paginator('select')
except Exception as error:
    raise error

try:
    page_iterator = paginator.paginate(
        SelectExpression="select BirthYear, Gender from Assessments",
        ConsistentRead=consistentRead,
        PaginationConfig={
            'MaxItems': 500,
            'StartingToken': token
        }
    )
except Exception as error:
    raise error

try:
    for page in page_iterator:
        for pg in page["Items"]:
            selected.append(pg)
        try:
            token = page["NextToken"]
        except KeyError:
            break
except Exception as error:
    logging.error("Cannot retrieve data: %s", error)
    raise error

print(selected)

This code fragment creates the paginator for the select function and then executes the select statement, which is "hardcoded" in the script (not what you would do in practice). We then loop through all of the items returned until there is no NextToken and then print the selected items. This example sets MaxItems to 500, but the maximum returned size is 1 MB. Regardless of what MaxItems is set to, if the size of the response is more than 1 MB, the response will be split into multiple pages.

Pricing

The pricing model makes SimpleDB hard to beat. The Free Tier provides 25 machine-hours, 1 GB of storage, unlimited data in, and up to 1 GB of data out a month. That is a pretty significant allocation. The research and the project I am implementing with SimpleDB will result in no charges for quite a while. If you exceed the 25 machine-hours, the cost is $0.14 per machine-hour over 25. Storage is $0.25 per GB over the 1 GB of free storage, and data transfer out starts at $0.09 per GB after the free tier is exhausted. If you need a small database, don't need console access, and don't need the overhead or capabilities of an RDBMS, then SimpleDB is hard to beat.

Things to Know

Before wrapping up this article, there are some things worth knowing before deciding to use SimpleDB on your next project: CloudFormation has no interface to create or manage SimpleDB resources. It has to be done using the CLI or the SDK. A domain, or table, has a hard limit of 10 GB in size, which cannot be changed. If you think the domain will grow over 10 GB, a data sharding plan or an alternate database should be considered.
SimpleDB has capacity limits, typically under 25 writes/second. If you expect to need higher capacity, then an alternate database may be a wise choice. There is a soft limit of 250 domains. You can request to have this increased if needed. The maximum size of an attribute is 1024 bytes, which cannot be changed. All data must be represented as strings. SimpleDB is not available in all regions. There are no indexes. If you have to retrieve all or a large number of items in the domain to perform an operation, it is best to retrieve all of the attributes you expect to need instead of making repeated calls to the domain. If you are using AWS Lambda, this can also affect the amount of memory needed as you will need to account for the size of the response variable you will receive. In Conclusion SimpleDB offers CLI, SDK, and Web API REST interfaces, making it easy to interact with from many different sources. The SDK is significantly faster than the CLI, meaning it may be better to write small programs to do the work of the CLI. (The CLI examples were done using AWS CLI Version 1. Version 2 may be considerably faster.) This may very well be a viable database for your next project. Short-lived data which is transient could be written to a domain and when not needed any longer, deleted. Log data could be saved to a SimpleDB domain instead of going to DynamoDB, or RDS which are expensive solutions for this use case. References Amazon SimpleDB Amazon SimpleDB API Usage Amazon SimpleDB FAQ Amazon SimpleDB Pricing Integrating Amazon S3 and Amazon SimpleDB Running Databases on AWS Wikipedia — NoSQL About the Author Chris is a highly-skilled Information Technology, AWS Cloud, Training and Security Professional bringing cloud, security, training, and process engineering leadership to simplify and deliver high-quality products. He is the co-author of seven books and author of more than 70 articles and book chapters in technical, management, and information security publications. His extensive technology, information security, and training experience make him a key resource who can help companies through technical challenges. Chris is a member of the AWS Community Builder Program. Copyright This article is Copyright © 2020, Chris Hare.
https://medium.com/swlh/a-five-minute-overview-of-amazon-simpledb-4823a829d99
['Chris Hare']
2020-10-23 14:02:49.868000+00:00
['Python', 'Aws Cloud', 'Aws Sdk', 'Aws Community', 'Database']
The New MAGA Rally
"Gee, that's too bad," a smirking Trump replied when told by a reporter that Mitt Romney was self-isolating after coming into contact with Senator Paul, who tested positive for the coronavirus. The reporter, sensing Trump's delight, asked if Trump was being sarcastic. "No, none whatsoever," Trump happily said. The daily coronavirus briefing, meant to update the American public on the fast-growing epidemic, has quickly degenerated into a campaign-style event for President Trump in an election year. President Trump, given direct access to the televisions of around eight million Americans, has decided to use this platform as a vehicle to air grievances against his enemies, both real and imagined. When Trump isn't poking fun at Republican Mitt Romney, he's lashing out at Democratic governors. "I want them to be appreciative," Trump explained. Washington Governor Jay Inslee, whom Trump had previously called a snake, is "constantly chirping and I guess complaining," according to the president. Trump, apparently forgetting the Michigan governor's name, referred to Gretchen Whitmer as "the young, a woman governor." "She has no idea what's going on," said Trump. Launching ad hominem attacks on other politicians is a national pastime at Trump campaign rallies. When he senses a lull in the stadium, Trump will often invoke Hillary Clinton or Hunter Biden to awaken the crowd. A chanting crowd is a happy crowd. "He's agitating to get back on the campaign trail, that without the MAGA rallies, he's sort of lost, and that explains what tends to sound like open mic night at the briefings than any sort of health information being dispensed from the White House briefing room," MSNBC host Nicolle Wallace noted. The "fake news" media, a usual punching bag at campaign rallies, has also received scathing criticism from the president at the coronavirus briefings. During a tense exchange with ABC News' Jonathan Karl over ventilators, President Trump cut off the veteran reporter and said, "look, don't be a cutie pie, okay?" Later, Trump unleashed a self-victimizing, missing-the-mark tangent that only he could give. A Washington Post reporter asked President Trump if he would commit that none of the taxpayer stimulus money would go towards his personal properties. "Nobody cared" that he donated his presidential salary, Trump complained. "Nobody said thank you, nobody said thank you very much." Notably missing was any public commitment that the president would exclude his private businesses from any taxpayer funds. Sidestepping the question, President Trump retreated to his self-victimization island, to which he is a frequent visitor. CNN's fact-checker, Daniel Dale, also noticed similarities between Trump's rhetoric at the coronavirus briefings and at campaign rallies. "We heard (Trump) use the phrase 'big, beautiful wall,' we heard him complain of 'abuse' by members of NATO, single out the trade practices of the European Union," Dale said. "And so I think while there is some important health and medical information being presented at these briefings, especially by people like Dr. Fauci, there is also Trump using this as a political platform to promote the messages that he's not able to promote at rallies because he can't hold rallies right now," Dale also said. Occasionally, while taking breaks from his self-aggrandizing soliloquy, President Trump will stumble onto actual medical information. The problem, of course, is that much of it is incorrect.
On March 6, for example, Trump claimed that "anybody that needs a test, gets a test. We — they're there. They have the tests. And the tests are beautiful." Ignoring the "tests are beautiful" comment, the US has been lagging behind other nations in testing capabilities, and Vice President Pence later had to clarify that "we don't have enough tests today to meet what we anticipate will be the demand going forward." A few days prior, Trump had also announced that a vaccine would quickly be available, despite his own government acknowledging it would take over a year to develop one. When Trump stays on topic, the medical information is misleading at best and wrong at worst. When he veers off topic to whine about his usual gripes, we are all invited to a campaign rally that none of us signed up for. "He misses his rallies, he misses the road," Associated Press reporter Jonathan Lemire recently said. "And that's why, despite a number of senior aides telling him he should not be appearing at the briefing every day, he insists that he will." Trump will be back on the campaign road eventually. Until then, enjoy the coronavirus briefings.
https://p-ramirez.medium.com/the-new-maga-rally-12c31e0b35d0
['Peter Ramirez']
2020-03-28 22:09:37.722000+00:00
['Trump', '2020 Presidential Race', 'Coronavirus', 'Maga', 'Covid 19']
13 Attributes of the Ultimate Writer (Part 2 of 4)
13 Attributes of the Ultimate Writer (Part 2 of 4) Voice, Communication, Vocabulary, and Sense of Humor Hey, y’all. I’m back with Part 2 of the Ultimate Writer series. I’ve gone back into my writer creation lab, perfected the formula, and discovered the writers who exemplify excellence with the attributes of Voice/Presence, Communication/Delivery, Vocabulary, and Sense of Humor. After you read this story, please check out Part 1, which features some talented writers who flex in the areas of Soul, Creativity, and Intelligence. Let me say again…this list is based purely on my opinion. I don’t intend to disrespect anyone by leaving them off this list. To Recap This series came from a brainstorm on how I would create the Ultimate Writer based on 13 attributes that I propose are the most crucial to being an excellent writer. You don’t have to be good at all of these writing attributes for your own writing pursuits, but proficiency in a good number of them would do you well. The 13 attributes for the Ultimate Writer are: Soul Creativity Intelligence Voice/Presence Communication/Delivery Vocabulary Sense of Humor Heart/Empathy Work Ethic Stamina Guts Versatility Connecting As you read down the list, you may notice that I’m stepping through the attributes from the all-encompassing eternal aura (the soul) all the down to the Ultimate Writer’s “feet” with Connecting -or the ability to use writing to network with others and grow a community. Today, in Part 2 of this series, I’m focusing on the attributes of Voice/Presence, Communication/Delivery, Vocabulary, and Sense of Humor. Part 3: Heart/Empathy, Work Ethic, Stamina Part 4: Guts, Versatility, Connecting Here we go! Voice/Presence A writing voice or presence is an important writing quality for those who desire to show that they are an authority on a topic. If you want readers to believe you, confidence needs to emanate from your words. Of course, having a voice/presence should come with the moral responsibility to use it for good. To do otherwise would be tragic. Ayodeji Awosika is the writer who I chose for Voice/Presence. When I read Ayo’s writing, I can tell that he means what he’s writing and that he knows what he’s talking about. There’s no denying the confidence in his writing. There’s no denying his experience! The way that he uses his platform to motivate his readers is truly inspiring. A story by Ayodeji that shows his voice/presence: Deep Work: The Cheat Code to True Productivity Communication/Delivery Writing is ultimately about knowing how to get a point across to a reader. There are myriad ways to make or deliver a point, but keeping things simple has always been a great way to make sure your reader understands what you want them to know. At the same time, simplicity in communication doesn’t have to be boring or dry. I respect writers who can communicate a message clearly and still manage to inject a dose of creativity into how they deliver their message. This is a skill that I’m working on improving in my own writing. Cynthia Marinakos is a talented communicator and illustrator whose accessible and entertaining word usage makes her my favorite writer in terms of Communication/Delivery. She successfully uses word choice, creativity, formatting, and organization to efficiently communicate her message. Not only that…her illustrations are a perfect accompaniment for communicating her message visually. 
A story by Cynthia that shows her gift as a communicator: How to Be the Writer You’ve Always Wanted to Be Vocabulary When I wrote about intelligent writers in Part 1, I wrote that being an intelligent writer goes beyond using big words. The same is true for flexing your vocabulary as a writer. Sure, you can occasionally use a rarely used word for utility (or to impress the reader), but your vocabulary can also be used to help people see commonly used words in a new way. A writer with a good command of his or her lexicon leverages creative word usage for effect! I must admit that Ryan Fan’s writing makes me break out the dictionary sometimes. He is the writer whose DNA I’d integrate into my Ultimate Writer for the Vocabulary attribute. It’s impressive how he weaves rare and common words together without forcing it or sounding pretentious. A story by Ryan that showcases his acumen for word choice: Life is Not a Problem to be Solved, But a Reality to be Experienced Sense of Humor Laughter is good for the soul. A well-timed joke can win you loyal readers like nothing else can. But using humor in writing ain’t easy (unless it is for you). Either you’re born with it or you work pretty darn hard to craft a humorous style. Personally, I think the great funny people were born with that talent. Writers with a sense of humor know how to make even some of the most painful events in life funny. That’s why many say that comedians are some of the most secretly depressed people. There’s more than one way to be funny, which is beautiful. Funny writers can be dry, energetic, introverts, extroverts, sarcastic, satirical, overt, abstract, silly, tragic, or witty. I believe that every human has a style of comedic writing that they favor for a tickle. Okay…there are a ton of funny writers out there, but one stands out to me… Kyrie Gray is so dang funny, y’all. I would say that satire is her superpower. Her fearlessness helps her touch on controversial topics while her sense of humor keeps it light. Her satire writing DNA is what I’m using to create my Ultimate Writer for the Sense of Humor attribute. Honestly, if you want to learn how to write satire, I would highly recommend studying Kyrie’s writing. A story by Kyrie that puts her superhuman satire on display: I Would Do Anything for My Children As Long As It Doesn’t Affect My Lifestyle Brand A quick note about Kyrie’s story linked above: One, it’s some of the funniest satire that I’ve ever read. But secondly, on a serious note, I’m launching a family blog and Kyrie’s story inspired me to be careful not to put the blog and the business over parenting and being present with my wife and the kids. Thankful. Well, that’s it for Part 2 of the Ultimate Writer series. Who would you have picked for Voice/Presence, Communication/Delivery, Vocabulary, and Sense of Humor? Let me know by leaving a response. Now, go read Part 1 in case you missed it. See you in Part 3!
https://medium.com/inspirefirst/13-attributes-of-the-ultimate-writer-part-2-of-4-71f7d3808df5
['Chris Craft']
2020-06-05 16:18:02.095000+00:00
['Humor', 'Communication', 'Writing Tips', 'Writing', 'Satire']
How to Lead When You Don’t Feel Motivated
How to Lead When You Don't Feel Motivated How I conquer my daily struggle. Photo by Zhang Kenny on Unsplash Sometimes I wake up, and I don't want to do anything. The day's goals seem daunting, but I understand I must soldier on. The demands of leadership are like carrying an anvil. For much of my career, I've been the "throat to choke" on the frontline of production mishaps, over-promises, and angry customers. Being the one with a target on your forehead makes you want to assume the fetal position and weep. The grind of the technology business is exhausting. Deadlines, sprint points, and deliverables are all euphemisms for stress. Companies are no different. How do I get up and sell when all I want to do is write code? When will the company scale, and does it always have to be me? My passion is building technology, and it always will be. Through it all, I've learned to follow a system: to put one foot in front of the other and just keep going. In the spirit of "keep it moving," here's what I've learned.
https://medium.com/the-innovation/how-to-lead-when-you-dont-feel-motivated-912d9a915494
['James Williams']
2020-12-18 23:02:43.768000+00:00
['Technology', 'Management', 'Leadership', 'Business', 'Engineering']
This is a brilliant piece, Tom.
This is a brilliant piece, Tom. It reminds me somewhat of Conrad Wolfram’s TED talk in 2010 where he reminds us that we focus on teaching Calculating in school and not Math, and that needs to be improved. In a world where my watch can run complex calculations just by me speaking the right words to it, the ability to inherently know what the words I need to say are becomes infinitely more important than if I can accurately work out the details. In fact, in the analytical work many people do today, you would be fired if you spent time manually calculating out the things your spreadsheets churn in milliseconds. Even the thought of it sounds ludicrous! Yet, we push and push our students to work out the problems. Work out the problems. Work out the problems. Never, unless you major in Math in University, do you get to wonder why this is the way to solve this problem and get underneath the way it all works (which is actually an incredible launch pad for why we should all learn programming, if you think about it). Yet, #2 pencils still reign in the classroom, teaching a skill you should never use in actual life.
https://medium.com/connected-well/this-is-a-brilliant-piece-tom-41ba68c5fd06
['Robert Merrill']
2017-05-07 13:32:11.602000+00:00
['Math', 'Future Of Work', 'Business', 'Education', 'Future']
How E-Commerce Giants Battle It Out for Your Purchase
Source: Oxylabs’ design team There is an invisible war taking place in the e-commerce world. Made up of numerous battles fought by soldiers, it is waged by major players competing for dominance in the highly competitive e-commerce environment. The purpose is clear: to post the lowest price and make the sale. While people don’t realize that this war is taking place it’s still there and is getting more brutal as time goes on. My company — Oxylabs — provides the proxies or “soldiers”, plus the strategic tools that help businesses win the war. This article is going to give you an inside view of the battles taking place along with techniques to overcome some of the common challenges. Web Scraping: The Battle for Data Spies are valuable players in any war as they provide inside information on the opponent’s activities. When it comes to e-commerce, the “spies” are in the form of bots that aim to obtain data on an opponent’s prices and inventory. This intelligence is critical to forming an overall successful sales strategy. That data is extracted through web scraping activities that aim to obtain as much quality data as possible from all opponents. Data, however, is valuable intelligence and most sites do not want to give it up easily. Below are some of the most common major challenges faced by scrapers in the battle for high-quality data: Challenge 1: IP Blocking (Defense Wall) Since ancient times, walls were built around cities to block out invaders. Websites use the same tactic today by blocking out web scrapers though IP “blocks”. Many online stores that use web scraping attempt to extract pricing and additional product information from hundreds (if not thousands) of products at once. These information requests are often recognized by the server as an “attack” and result in bans on the IP addresses (unique identification numbers assigned to each device) as a defense measure. This is a type of “wall” a target site can put up to block scraping activity. Another battle tactic is to allow the IP address access to the site but to display inaccurate data. The solution for all scenarios is to prevent the target site from seeing the IP address in the first place. This requires the use of proxies — or “soldiers” — that mimic “human” behaviour. Each proxy has its own IP address and the server cannot track them to the source organization doing the data extraction. Source: Oxylabs’ design team There are two types of proxies — residential and data center proxies. The choice of proxy type depends on the complexity of the website and the strategy being used. Challenge 2: Complex/Changing Website Structure (Foreign Battle Terrain) Fighting on enemy territory is not an easy task due to the home advantage leveraged by the defensive army. The challenges faced by an invading army are especially difficult because they are simultaneously discovering the territory while engaged in the battle. This is analogous to the terrain faced by web scrapers. Each website has a different terrain in the form of its HTML structure. Every script must adapt itself to each new site in order to find and extract the information required. For the physical wars of the past, the wisdom of the generals has proven invaluable when advancing on enemy territory. In the same way, the skills and knowledge of scripting experts are invaluable when targeting sites for data extraction. Digital terrain, unlike physical terrain on earth, can also change on a moment’s notice. 
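To make the challenge concrete, here is a minimal, illustrative sketch of the kind of site-specific parsing logic a scraper has to maintain; the URL, proxy address, and CSS selectors are hypothetical, and the point is that when the target page's HTML layout changes, selectors like these break and must be rewritten.

import requests
from bs4 import BeautifulSoup

# Hypothetical proxy; real scrapers rotate across many residential or datacenter IPs
proxies = {"https": "http://user:pass@proxy.example.com:8080"}

# Hypothetical product page
response = requests.get(
    "https://shop.example.com/product/123", proxies=proxies, timeout=10
)

soup = BeautifulSoup(response.text, "html.parser")

# Selectors tied to one store's current HTML structure; a redesign breaks them
title = soup.select_one("h1.product-title").get_text(strip=True)
price = soup.select_one("span.price").get_text(strip=True)

print(title, price)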
Oxylabs' adaptive parser, currently in beta phase, is one of the newest features of our Next-Gen Residential Proxies solution. Soon to become a weapon of choice, this AI- and ML-enhanced HTML parser can extract intelligence from rapidly changing dynamic layouts, including the title, regular price, sale price, description, image URLs, product IDs, page URLs, and much more. Challenge 3: Extracting Data in Real Time (Battle Timing) Quick timing is essential to many types of battle strategy, and waiting too long often results in defeat. This holds true in the lightning-fast e-commerce world, where a small amount of time makes a big difference in winning or losing a sale. The fastest mover most often wins. Since prices can change on a minute-by-minute basis, businesses must stay on top of their competitors' moves. An effective strategy involves strategic maneuvers using tools to extract data quickly in real time, along with the use of multiple proxy solutions so data requests appear organic. Oxylabs' Real-Time Crawler is customized to access data from e-commerce sites along with empowering businesses to get structured data in real time from leading search engines. Source: Oxylabs' design team Ethical Web Scraping It is crucial to understand that web scraping can be used positively. There are transparent ways to gather the required public data and drive businesses forward. Here are some guidelines to follow to keep the playing field fair for those who gather data and the websites that provide it: Only scrape publicly available web pages. Ensure that the data is requested at a fair rate and doesn't compromise the web server. Respect the data obtained and any privacy issues relevant to the source website. Study the target website's legal documents to determine whether you can accept their terms of service and, if you do accept them, whether you will be able to comply with them. A Final Word Few people realize the war taking place behind the low price they see on their screen. That war is composed of multiple scraping battles for product intelligence, fought by proxies circumventing server security measures for access to information. Strategies for winning the battles come in the form of sophisticated data extraction techniques that use proxies along with scraping tools. As the invisible war for data continues to accelerate, it appears that the biggest winners of all are the consumers who benefit from the low prices they see on their screens.
https://medium.com/swlh/how-e-commerce-giants-battle-it-out-for-your-purchase-6e0e2bd92d7e
['Julius Cerniauskas']
2020-11-30 13:34:18.256000+00:00
['Pricing Strategy', 'Ecommerce', 'Entrepreneurship', 'Proxy Service', 'Web Scraping']
Refactoring For Clean Code With Design Patterns
Hi. Today we will talk about clean code. Is it really necessary? If our code looks messy, what is waiting for us in the future? Does clean code mean only easy to read and simple? Or much more than this? There is, after all, a difference between code that is easy to read and code that is easy to change. — "Big" Dave Thomas

First of all, I know, nobody has much time. The boss is waiting to see the project finished. He or she doesn't care about the details. You can think of yourself as a butcher in front of a customer. The boss is your customer, and you are the butcher. The customer doesn't care about your hygiene. He only wants to get his meat as fast as possible. So the boss doesn't want you to wash your hands for every customer. Why? Because time is money. Finally, if you don't wash your hands and don't waste your time (unlike this example, you always wash your hands, please!), maybe at the end of the day, everyone could be happy except you :) But in the future, customers could get sick and sue the company. In the end, nobody will be happy. The boss doesn't know the details. Only you know the risks. So take all the responsibility and clean your code as much as possible.

We will refactor a very messy AI application's code at the end of this article. But first, we have to understand these three design pattern strategies, which we will use for refactoring that messy code at the end. Maybe most of you already know these, but there is no harm in repeating a good thing :) In short, the next two design patterns will prepare you for the final example. And we will examine the last one in-depth within this messy AI application. Just like in business training :)

You have to avoid repeatable code…

In the C# Console App code below, you can see multiple pieces of business logic in one place. This is against the first rule of S.O.L.I.D.: Single Responsibility. A class should have one, and only one, reason to change. Why? Because it makes code easy to read, easy to test, and simple. Look at that code; every branch has its own business: CRM, Finance, and Service. Software is not an all-in-one campaign. It is a distributed process. If you want to change the CRM process, all the applications which use this class are affected by the update. And for one business change, you have to rebuild all the other processes. This means that for a "CRM" process change, "Finance" and "Service" must be stopped. Using an enum is the only good thing in this code :) Can you sense the other bad smell in this code? In the future, if you need one more process, you have to find this class and add another "if condition." It is insane and not sustainable. If another ten new pieces of logic come to this class, the code becomes unreadable.

Firstly, let's separate all the processes into different classes. And all the classes must inherit from another class or interface. A naked class is an undesirable thing in object-oriented programming.

IBussinesLogic: All business logic must inherit from the same interface (IBussinesLogic). And all classes should override the same "WorkProcess()" method. Do you remember this Design Pattern?

using System;

namespace SolidStrategy
{
    public interface IBussinesLogic
    {
        void WorkProcess(string message);
    }
}

Crm: The first business class is CRM. It inherits from "IBussinesLogic." That is the reason why the "WorkProcess()" method must be implemented. Every "WorkProcess()" implementation belongs to a different class, so each of them has its own logic.
Crm Class : using System; namespace SolidStrategy { public class Crm : IBussinesLogic { public void WorkProcess(string message) { Console.WriteLine($"Process Crm! :{message}"); } } } Finance: The second business class is Finance. It is inherited from the “IBusinessLogic,” like CRM. And also, It has to implement the “WorkProcess()” method for different Financial logic. using System; namespace SolidStrategy { public class Finance : IBussinesLogic { public void WorkProcess(string message) { Console.WriteLine($"Process Finance! :{message}"); } } } Service: The last business class is Service. Like other business classes, it is inherited from “IBusinessLogic”. It also has to implement the “WorkProcess()” method for different Service logic. using System; namespace SolidStrategy { public class Service : IBussinesLogic { public void WorkProcess(string message) { Console.WriteLine($"Process Service! :{message}"); } } } Process: Now we have to cover all business logic in one container. Why, because customers, in this case, actually other developers should only be interested in one class. Because simplicity is everything. “And if a new business logic comes to the town, nobody should have to change this main Process code” :) The Process class requires a logic class inherited from “IBussinesLogic” in Constructor(). What is the common feature of all classes? Of course “WorkProcess()” method. When the “WorkProcess()” method is called from the Process class, the logic differs depending on the “IBussinesLogic” class received from the Constructor. using System; namespace SolidStrategy { public class Process { IBussinesLogic _bussines = null; public Process(IBussinesLogic bussines) => _bussines = bussines; public void WorkProcess(string message) { _bussines.WorkProcess(message); } } } Strategy Design Pattern is the answer to the above question. It is a behavioural software design pattern that enables selecting an algorithm at runtime. We used this pattern for getting clean and simple code in this situation. Try and leave this world a little better than you found it . . . — Robert Baden-Powell Program.cs Messy Codes are becoming this a few lines of code. Let’s play with the new Toy :) Simplicity is everything — much more readable code. But there is one a more important thing : “You can easily add new logic without changing any line of code. And nobody is affected by this change.” Program.cs/Main() : using System; namespace SolidStrategy { class Program { static void Main(string[] args) { Process process = new Process(new Crm()); process.WorkProcess("Update Crm DB"); } } } This is the result screen : Extraction Methods Our second scenario is about the Extraction method. Extraction is my favourite Refactoring method. With the extract method, you move a fragment of code from an existing method into a new method. And please don’t forget to give a name about what it is doing. This technique is used to escape from complexity and improve the readability of the code. “Finding bugs in a mass is like looking for a needle in a haystack. Divide that mass and find all errors piece by piece. And never forget that if you divide it, you can share it” — Bora Kaşmer Look at the below code, everything is in the one place, and all the logics are working together. We will find all different logics and extract them from this method into different methods. The purpose is better readability and simplicity. Lets check the “CalculateSallaryIncrease()” method. We have three jobs in this method. 
1-) Find salary raise rate by departments: This code returns the salary increase rate depending on the department of the person. The name is important. You can understand the purpose of this method from its name. public static double GetSalaryRatebyDepartment(Department department) { switch (department) { case Department.IT: { return 1.1; } case Department.HumanResource: { return 1.2; } case Department.System: { return 1; } case Department.Officer: { return 1.3; } } return 1; } 2-)Let’s calculate the new salary of the employee by using the new “raise rate,” which found with the above code. You can see the order of the parameters is understood from the name of the method. Calculate => NewSallary=>WithRate (salary, rate). In the future, new business logic can be easily added to this method for every person’s salary calculation. public static double CalculateNewSallaryWithRate(double salary, double rate) { return salary * rate; } 3-) Append the tag to a person name by gender. Mr. or Ms. Why did we separate this method? Because if someone wants to add a new tag for any conditions in the future, they should be able to easily add it to this method without any extra effort. public static string AppendTagToNameByGender(string name, Gender gender) { return gender == Gender.Male ? $"Mr.{name}" : $"Ms.{name}"; } Now our new “CalculateSalaryIncrease()” method looks like this. More readable, more understandable, and, most importantly, much more “short.” public static void CalculateSalaryIncrease(ref List<Person> personList) { foreach (Person person in personList) { double sallaryRate = GetSalaryRatebyDepartment((Department)person.Department); person.Salary = CalculateNewSallaryWithRate(person.Salary, sallaryRate); person.Name = AppendTagToNameByGender(person.Name, person.Gender); } } This is the result screen : This is the final and main example of this article: It is tried to be understood with specific analysis rules, whether the three comments(Description1, Description2, Description3) made in the example below belong to a woman or a man. Of course, it is not a real-life scenario. But it is helping us to understand clean code. We have two groups of word libraries. Words that Men and Women often use when speaking. We try to decide whether the sex of the speaker is male or female, by refining these words or a group of words in the comment made. For men, we are searching for (football or car) words in the description. If one of these words appears in the interpretation, we are accepting that the person who said it was a man. For women, we are searching for (mother or baby) or (draw and car) or (rub and car) words in the description. If one of these words or word groups appears in the interpretation, we are accepting that the person who said it was a woman. What is wrong with this code? . This code is very hard to understand without any description. . Code Readability is awful. . If any new rules come, we have to modify all conditions. . After a while, following the and — or conditions is impossible. . Finally, this code is not written for the human to understand. It is written for the computer to understand :) Interpreter Design Pattern To clean this mess, we will use a couple of design patterns. And of course, the main pattern for this solution will be the “Interpreter Design Pattern.” The interpreter is a behavioural design pattern. It is used to evaluate language grammar or expression, which is implemented from an Interface. In this application, our expression is these three descriptions. 
This pattern uses these expressions interfaces for interpreting a particular context. Now we are going to create an interface Expression. Later, we will create the And — Or classes by implementing this Expression interface. “Or” Expression and “And” Expression is used to create combinational expressions. And of course, we will create a “TerminalExpression” class, which is defined as acts as the main interpreter of context in the description. In this application, we call it “CheckExpression.” “In the future, if new rules would come for detecting the gender of a commenter, we may have to create a new kind of Expression.” 1-) Let’s create Expression Interface: All other expression classes must use this “Interpret()” method. public interface Expression { bool Interpret(string content); } 2-) Create CheckExpression: We will take a word from the constructor and then check if the related word is contained in the content (description), which is received as a parameter in the Interpret method or not. This will be our first tool. And we will use it everywhere. CheckExpression Class : public class CheckExpression : Expression { private string word; public CheckExpression(string _word) { this.word = _word; } public bool Interpret(string content) { return content.ToLower().Contains(word.ToLower()); } } 3-) Create OrExpression: This is our second tool. We will use the above Expression class. We need two of expression classes in this method. The expression means “word” in this application. We will check “one” of these words is contained in this description or not. OrExpression Class : public class OrExpression : Expression { private Expression exp1; private Expression exp2; public OrExpression(Expression _exp1, Expression _exp2) { this.exp1 = _exp1; this.exp2 = _exp2; } public bool Interpret(string content) { return (exp1.Interpret(content) || exp2.Interpret(content)); } } If you pay a little attention, you can see that we are using another tool to build a new tool! It is like use robot to create a new robot :) A secene from “Ex Machine” Movie 4-) Create AndExpression: This is our last tool. Again here we need two of expression classes. We will check if “both” of these words are contained in this description or not. AndExpression Class : public class AndExpression : Expression { private Expression exp1; private Expression exp2; public AndExpression(Expression _exp1, Expression _exp2) { this.exp1 = _exp1; this.exp2 = _exp2; } public bool Interpret(string content) { return (exp1.Interpret(content) && exp2.Interpret(content)); } } What have we done by creating these three tools? 1-) We divided that mass code. We moved two fragments of code from existing methods into new methods(AndExpression, OrExpression). We took the work a little further more, we moved one fragment of code from these methods into new methods too (CheckExpression). Do you remember that? We call that “Extract Method,” which we have already talked about at the beginning of the article. 2-) I want to draw your attention to the “AndExpression and OrExpression” classes. Both inherit from the “Expression” interface. Both have the same “Interpret()” method. See the code below. “getFemailExpression()” has an Expression List. It can take any class that is inherited from the “Expression” Interface. Like “AndExpression,” “OrExpression”. This means that “getFemailExpressions()” chooses an algorithm at the run time. Do you remember that? We call that “Strategy Design Pattern,” which we have already talked about at the above of the article. 
InterpretPattern Class:

CheckExpression: It is a word.
OrExpression and AndExpression: Expressions for deciding the gender of the commentator. Each takes one or more "CheckExpression" instances to decide.
Interpret(): All Expressions have to implement this method. It takes the content (description) as a parameter and decides on the gender: man or woman.

All three tools are created to be used in this class: "CheckExpression", "AndExpression", and "OrExpression". There are two situations here: the man and woman decision rules. They are collected and returned as an Expression or a List of Expressions.

getMaleExpression(): It returns an OrExpression. Every expression is a word, and two of them are given as parameters to the OrExpression class. Every OrExpression is a rule to decide on gender.

getFemailExpressions(): It returns a List of Expressions. There are three Expression rules for deciding on the woman gender: two AndExpressions and one OrExpression. Together they take six expressions (words) as parameters.

public class InterpretPattern
{
    public static Expression getMaleExpression()
    {
        Expression futbol = new CheckExpression("football");
        Expression araba = new CheckExpression("car");
        return new OrExpression(futbol, araba);
    }

    public static List<Expression> getFemailExpressions()
    {
        List<Expression> ListExpression = new List<Expression>();
        Expression mother = new CheckExpression("mother");
        Expression baby = new CheckExpression("baby");
        Expression rub = new CheckExpression("rub");
        Expression draw = new CheckExpression("draw");
        Expression car = new CheckExpression("car");

        ListExpression.Add(new OrExpression(mother, baby));
        ListExpression.Add(new AndExpression(rub, car));
        ListExpression.Add(new AndExpression(draw, car));
        return ListExpression;
    }
}

Everything seems to be part of a puzzle. All the pieces interlock, the small pieces form larger pieces, and the larger pieces form the picture. Photo by Markus Winkler on Unsplash

Look at the code below; it is much clearer. Implementation is easy. You don't have to know the business logic. It is not your concern which words the content must contain to decide that the commenter is a woman. If new rules or new words come into the business logic, you don't have to change all of the code. This is the nature of OOP programming.

Formation Tree: "Expression => CheckExpression => OrExpression & AndExpression => getFemailExpressions & getMaleExpression"

The rules are run one by one according to the business logic; if one of the rules passes its specific conditions, processing stops and the result "true" is returned. This is the result screen:

In this article, we talked about how to convert unreadable, messy code into explicit, clear code by using design patterns. In some cases, one design pattern alone doesn't solve your problem. In that case, you should prefer more than one design pattern for the solution. OOP programming and design patterns optimize your code for easy maintenance (change and testing) and make it extendable and flexible.

Quick Tip: Imagine a new rule coming to "getMaleExpression()" to identify the male commentator. For this scenario, three expressions (words) should be contained in the content (description). This is something completely new. Creating a new kind of Expression like the one below is enough. It is named "And3Expression". We implement the Expression interface, and this time we take three Expressions (words). Finally, in the "Interpret()" method, we check whether all three words are contained in the content (description). That's all.
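Following the pattern of the existing AndExpression class, a sketch of what that And3Expression could look like:

public class And3Expression : Expression
{
    private Expression exp1;
    private Expression exp2;
    private Expression exp3;

    public And3Expression(Expression _exp1, Expression _exp2, Expression _exp3)
    {
        this.exp1 = _exp1;
        this.exp2 = _exp2;
        this.exp3 = _exp3;
    }

    // true only when all three words are contained in the content
    public bool Interpret(string content)
    {
        return (exp1.Interpret(content) && exp2.Interpret(content) && exp3.Interpret(content));
    }
}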
We don't need to change anything anywhere else. Just add this new "And3Expression" class and use it in "getMaleExpression()". This is the power of OOP programming.
https://medium.com/swlh/refactoring-for-clean-code-with-design-patterns-2d3d754c3bfe
['Bora Kaşmer']
2020-07-28 07:24:56.851000+00:00
['Oop Concepts', 'Refactoring', 'Net Core 3', 'Design Patterns', 'Clean Code']
How Disinformation Is Threatening Your Brand
How Disinformation Is Threatening Your Brand When rumors about your brand become reality Photo by camilo jimenez on Unsplash Fake news is old news. Since time immemorial, people have blurred the line between fact and fiction, exaggerating stories into new ones. From lunch-table gossip to water cooler chitchat, these kinds of tall tales and falsehoods have existed long before the internet. Only now, the lies and misinformation have evolved to spread like a digital disease, but with greater speed and exposure. Today, online rumors are dangerous in their scope and effect, shifting politics on a global scale and, in some cases, threatening the lives of people in the real world. To be clear, we’re talking about disinformation: misinformation but with intent to mislead. We’ve already seen how it affects elections and the healthcare industry, yet it’s still a relatively new phenomenon. We’re still learning how it transmits through digital spaces. In a recent NPR piece on the topic, Emily Dreyfuss of the Harvard University Shorenstein Center’s ‘The Media Manipulation Casebook,’ reveals how a pandemic of disinformation propagates and takes root throughout the media ecosystem. Toward the end of the interview, she mentions how studies in social science have shown that the more frequently someone hears or is exposed to something, the more likely they are to believe it. The immediacy and abundance of information we see every day has forced us to grapple with the power and reality of disinformation. Public discourse and even democracy are at the mercy of bad actors, hashtags, and the spread of uncontrollable narratives. The sheer quantity of false stories has led many to dangerously conflicting views of what’s real and what’s merely gossip. For marketing and communications teams, disinformation can extend beyond the realms of politics, putting a hefty price on their efforts to manage brand identity. The cost of disinformation in 2019 alone came to an estimated total of $78b, according to the collaborative research of CHEQ, a cybersecurity company, and the University of Baltimore. With the guidance of economist and professor Roberto Cavazos, the project sought to put a number to the harm of fake news, marrying rigorous economic analysis and hard data. The results are an uncomfortable look at what the future holds in terms of the economic impact of disinformation. For organizations looking to protect their reputations, marketing and comms teams need to take stock of their earned media efforts. As the majority of people online aren’t held accountable to fact-checkers, your brand is always at risk. Most people don’t have the social capital or authority to influence opinion alone, but in groups, they can quickly disseminate a rumor regardless if it’s true. While you have some control over your brand image with paid and owned media, a Nielsen study in Global Trust In Advertising showed 83% of respondents were more likely to trust the word of mouth recommendations from friends and family. The third greatest number of respondents, behind owned advertising on branded websites, said they trusted consumer opinions online — a not-insignificant 66%. The organic conversations and shared experiences found on social media or review sites are fertile ground for a crisis. One bad review can inspire others, and companies must be quick to respond before the discussion spirals out of control. If you aren’t already monitoring this space with social listening software, now is the time to add one to your tech stack. 
Organizations need to be proactive in their measurement of a crisis both by identifying where they’re the most vulnerable beforehand and recognizing new trends where disinformation can fester. Benchmarking topics that have demonstrated a threat to brand image in the past and closely tracking the ways new topics can draw you into a rotten narrative are just two ways to prepare your team in advance. Just as well, there are powerful new tools on the market that can detect automation and the bot clusters that artificially drive user engagement, but for marketers on a budget, perhaps the best tool for combatting this developing threat is simply understanding your audience. Media monitoring and social listening platforms give you the data, but connecting the dots around the context of where your audience relates to current trends is where the real work comes in. Understanding what your consumers want and expect from your business is an evergreen concept, but especially so in a time where transparency has such a high premium. Taking stock of how you’re managing your earned media is a start, but it’s just as important to clarify your brand’s message and purpose. Are your company values clear? Do they still align with your core market? How do they relate to current trends in the industry? These are a few of the questions you can ask yourself to reconfigure your brand’s position within the media landscape. Authenticity and consistency are the keys. When disinformation seeks to deceive or misguide, remaining flexible but strong about your values is still a great defense. Don’t let the loud minority control the narrative around your brand. Fight back with a combination of strategy, technology, and time-tested marketing techniques.
https://medium.com/digital-diplomacy/how-disinformation-is-threatening-your-brand-d47da2164de1
['Brian Hubbard']
2020-12-22 14:25:40.923000+00:00
['Technology', 'Social Media', 'Marketing', 'Disinformation', 'Misinformation']
Azure AZ-900 Exam Preparation Guide: How to pass in 3 days
Most of the exam information is on Microsoft website. However, before going into the exam, I was researching how long is this exam. The Microsoft website does not show how many questions and how long the exam is. So I gathered these numbers based on my exam: Questions: 44 (40–60 questions based on Whizlabs) Duration: 1 hour (85 minutes based on Whizlabs) What I like about this exam is: Your result sheet is printed right after the exam, so you can see which area you are doing well and which area you need to work on. This is the same study plan I used to pass AZ-900 exam: 2 day for learning from Microsoft Learn Platform 1–1.5 day for exam preparation One day means 6–8 hours of work. So it might take around a week to study 2–3 hours per day after work. Resources I used for studying AZ-900 I studied from 3 resources and spent around $15 for the preparation (excluding $99 for AZ-900 exam itself). The 3 resources are: [Free] Microsoft Learn platform (11 modules) Comprehensive 9 hours course provided by Microsoft [ Text-based ] Microsoft provides a free learning platform for everything you need to pass any Azure certificate. Unlike other learning platforms such as Google Cloud’s Coursera or Udemy, Microsoft Learn platform is 90% text with few short videos here and there. However, the material is great quality and the learning platform is easy to use. I found that by having learning material in text, I can take note much faster by copy-paste the text directly. This course took me 9–10 hours including taking notes and research on random technical words e.g. N-tier architecture. [$15] Udemy course “Microsoft Azure Beginners Guide” AZ-900 Video course & Practise exams This Udemy video course contains all you need to know to pass AZ-900 exam. The content is more hands-on than Microsoft Learn since you will see the instructor showing you inside Azure Portal, whereas Microsoft Learn only teach you in text. The beginners in Azure will learn a lot more from seeing real Azure portal than reading. For me, I had some experience playing around Azure portal before. So I skipped to the last section: AZ-900 Preparation. This section provides 70 questions x 2 practice tests. I found the practice questions to be very similar to Whizlabs, the last resource I used for exam preparation. I have heard from my colleague who passed AZ-900 only using this course. My recommendation is you could study from videos in this course or from Microsoft Learn. The content should be very similar. At my company Servian, we provide Udemy Business subscription for staffs. So I can access this course for free. [$15] Whizlabs AZ-900 Exam Practices Tests Practice exams by Whizlabs Whizlabs is the popular website containing a lot of IT exam practice tests. Note that Microsoft also offers Official Practice Test but at the significantly higher cost and a similar number of practice questions. So I went for Whizlabs this time. At the time of writing, Whizlabs has 3 practice tests with 55 questions each. The questions are surprisingly similar to the real exam, so you can get a feeling of what kind of questions you would get examined on. AZ-900 Exam Content
https://medium.com/weareservian/azure-az-900-exam-preparation-guide-how-to-pass-in-3-days-dabf5534507a
['Perth Ngarmtrakulchol']
2019-08-28 10:25:00.909000+00:00
['Cloud Computing', 'Microsoft', 'Azure']
How to Write Opinions When You’re At Your Wit’s End
How to Write Opinions When You’re At Your Wit’s End Political writers are America’s witnesses to history. It’s up to us to tell this story Photo by Markus Winkler on Unsplash I don’t have to tell you we’re at a level of chaos most of us have never seen in our lifetimes. Every day it’s something new and dire and dangerous, and every day we have to set aside yesterday’s news to try and process this new thing that sickens us and scares us and makes us want to take to our beds. Every day we watch people give up. They can’t take it anymore. They concede we’re doomed and that’s just the way it is. And who can blame them? It feels doom-like out there. Everything is going against us. Here in the United States we’ve passed the 200,000 mark in deaths from the COVID-19 pandemic, with no end in sight. The earth is roiling, showing her irritation at our recklessness, and she’s threatening to destroy humanity before we can do any more damage. And, for the first time in America’s history, we’re dealing with a rogue government run by a demagogic flim-flam man who sees the presidency as the authoritarian power trip of his dreams, and is already threatening not to give it up. And there’s more. Much, much more. This is where the writers come in. We are the witnesses, the trained observers. We watch, we listen, we analyze, we record. It’s what we do. Those of us who write opinions knew going in we would never convince everyone. Our opinions aren’t necessarily everyone’s opinions, so — you might have noticed — we have a tendency to piss some people off. But we slog on. It’s our hearts that spur us on, and, because our hearts are flopping around on the outside for everyone to see, we make ourselves vulnerable. Deliberately. Why? Because we care so deeply about what we believe in we can’t keep it to ourselves. We see it as a duty to try and make readers understand. And we wonder why everyone doesn’t do it. That’s where you come in, you writers out there who feel that same anxiety and don’t know how to express it. Do I need to say, ‘there’s nothing to fear but fear itself’? What are you afraid of, really? That your feelings will be hurt? They will be. That someone will make fun of you? Someone will. That you won’t get it right and might have to reassess? That could happen. But we need courage now, and before you can advocate for it, you have to feel it. Our country needs us — every one of us — and our voices together will make a formidable blockade to the lies and propaganda threatening to destroy our message. We have the tools and the talent to make a difference in these next weeks before the election, but we have to get serious NOW. Whatever you have to say doesn’t have to be perfect. It just has to be honest. Write from your heart and let your heart guide you. The country needs to know how we feel about the events unfolding before us. We’re not writing for the critics, we’re writing for the people. As the owner/editor of Indelible Ink, I’ve taken steps to convert my creative non-fiction publication to all politics, all the time — at least until this all-important election is over. I’m looking for writers and I want you to consider getting your equally all-important voice out there. I’ll help you. As I say in our Submission Guidelines: We’ll be a political publication practicing the politics of hope, but with our eyes wide open. Be honest about your fears, your hopes, your ideas for a better future. Challenge us with your thoughts about better governing. Name names. 
Talk about your own life, your childhood, your parents and your grandparents, if you’d like. Whatever is on your mind, whatever is keeping you awake at night, whatever is needing an outlet so you’re not screaming into pillows all day and all night. Let’s build a fortress here made up of the ghosts of America’s past. Who are we? Where did we come from? How did we get to this place? But you don’t have to write for me. Writers everywhere are gathering in war rooms, ready to do battle. We can do it, we can spread the word, we can build a community, and we can help each other. We’re almost out of time. November is looming. We’re sending the call out to writers with the skills to help us witness, to chronicle not just the events but the feelings. We’ve never been here before. With Hera’s help we’ll never be here again. This is a time like no other, and the noisemakers are winning. Our voices won’t get lost if there are enough of us sounding alarms, reminding Americans of our heritage, defending our need to build a country that reflects all of us, and not just some of us. Opinion writing isn’t for everyone, but if you feel the calling, go with it. The need is great right now. If you have something to say, say it. As Maggie Kuhn, founder of the Gray Panthers, used to say: Speak your mind, even if your voice shakes.
https://medium.com/indelible-ink/how-to-write-opinions-when-youre-at-your-wit-s-end-8a721447d1d8
['Ramona Grigg']
2020-09-29 13:25:37.951000+00:00
['Politics', 'Opinion', 'Elections', 'Writing', 'Covid 19']