Re-imagining the wealth management stack
It is an exciting time to be working in the wealth management industry. There is a seismic change underway, with new technologies that make it easier and more affordable for firms both to offer services to clients who historically would not have been able to access their advice, and to look after their clients’ interests better. We think this presents a huge opportunity for forward-thinking companies and individuals who are willing to embrace the change and challenge the way things have ‘always been done’ by validating and adopting best-of-breed emerging solutions in their firms. This post goes into the background of how we have got to where we are today, and why change is happening now. We will outline our thinking in more detail in subsequent articles on specific pieces of the puzzle, but our working thesis is that:
- the future stacks powering wealth businesses will not rely on one-to-one integrations, but will be flexible, open components connected by an underlying data infrastructure,
- some firms will choose which components to prioritise (planning, risk, construction etc), so any point solutions must be both simple to plug & play and leverage channel partnerships,
- most firms will have 2–3 core systems and will rely on these providers to have selected the best-in-class components and included them as part of their suite,
- the direction of travel is towards truly bespoke portfolio construction and advice, so this must be supported by tech providers,
- adviser relationships will remain key, so the role of technology should be to support and not displace them.
Why do people use wealth managers and advisers?
Before we dive into the detail of some of the new innovations shaping modern wealth management, it’s useful to first take it back to basics and ask the question — why do people use advisers and wealth managers in the first place? The simple answer would be to achieve their financial goals… But it’s a much more complex question than it seems. Every person or family has different goals and requirements, as well as different levels of financial literacy. This means that while one family might need help with basic planning and saving goals, another might need complex cross-jurisdictional tax advice for multi-currency earnings and hedging. Layer on top of this the fact that everyone has different risk profiles (how comfortable you are with the trade-off between possibly losing money to earn potentially higher returns), knowledge and experience, and that country-specific regulations and fiduciary duties require a manager to understand where their clients sit on this spectrum, and it gets complex very quickly. This is also just the first step. A manager then needs to review these goals and construct an investment and savings plan by reviewing the many millions of stocks, bonds, ETFs, mutual funds and the myriad of other investment products available. They then must work out which are the best, check for tax implications, put them into a portfolio with the optimal mix (based on both goals and risk appetite), and keep track of all this whenever there is a change in markets or in the client’s circumstances or preferences. Phew! No wonder most advisers won’t help a client unless they have over $100,000 available to invest. On an ‘hours worked’ basis, they would be out of pocket helping anyone with less.
How do advisers currently cope?
This is the natural next question and, in our view, goes a long way to explaining why there are such large and publicly scrutinised advice gaps and industry conflicts.
In fact, according to research by McKinsey, the annual revenues generated by intermediation in the wealth segment are more than double those of all market infrastructure (trading, clearing, settlement etc) combined. A simple Google news search shows you how much of a focus this is for the press, regulators and governments alike — links here and here. To solve this, wealth managers and advisers need to be able to do more with less. This is where technology can help. At present, most advisers and wealth managers rely mainly on 3–4 core systems (usually a CRM, execution platform, and portfolio management/reporting engine) but have multiple other point solutions which feed into these for specific tasks. These might include client onboarding, annual suitability, cash flow forecasting, tax planning etc. Another key piece is a tool for general financial planning, which can feed into either the onboarding process or ongoing management, depending on the tool chosen.
© Illuminate Financial — non-exhaustive illustration of the wealth stack
Precisely how this fits together will depend on the firm and providers chosen but, generally speaking, while there are often specific integrations between tools (for example a risk score provider that integrates with a portfolio management platform, or a chatbot that can feed into a CRM client portal), these are largely one-to-one, meaning data is trapped in silos.
Why are simple questions so hard to answer?
Due to the current wealth stack architecture, advisers often re-key information manually from one system to another to get a complete view of their client’s wealth. This is clearly not ideal. Not only does this risk errors, it soaks up time that could otherwise be better spent managing relationships, and it makes seemingly simple questions very hard to answer. Am I on track to reach my goal? What has been the biggest drag on, or contributor to, my performance? What fees have I paid and where? What income have I generated? Can you show me the split of my holdings? If stock markets fall 20%, what will happen? What if I only want to invest in companies that are socially responsible? Being able to respond to these questions quickly and in real time intuitively feels like something that an adviser should be able to do. The reality is that, with current systems the way they are, it takes days of work in Excel. Change will not happen overnight but, given the increasing scrutiny from regulators, fee pressure, advances in technology, and new entrants in the direct-to-consumer landscape, the scene is set.
What has changed in the direct-to-consumer market?
Whether or not you believe the B2C fintechs making headlines will survive the test of time, they have undeniably tapped into something and show the time is now. Their combination of lower fees, easy online access, promise of ethical investments and reach into previously unserved segments of the market has attracted a new breed of customer in droves and piqued the interest of VC investors and incumbents alike. The latest market map from CB Insights gives you an idea of how much is going on. Although we do not believe that these direct offerings will ever displace human wealth managers, we do think the competition has reset the entry bar and consumer expectations. Micro-savings, robo portfolios, personalisation of goals, online portals and ESG screens can all become new tools that advisers and managers add to their arsenal as they help their clients.
How can a bank or a large established business adopt?
There is no easy answer, but we believe there is a huge opportunity for B2B companies who can help the industry re-architect itself with a new breed of data architecture that connects information and allows the simple questions to be answered. With the right infrastructure and tools in place, advisers and managers will be able to cut down on the admin, grow their businesses and get back to doing what they do best — managing relationships and helping clients understand their finances.
https://medium.com/illuminate-financial/re-imagining-the-wealth-management-stack-466d16c904a5
['Katherine Wilson']
2019-06-12 14:38:43.022000+00:00
['Enterprise Technology', 'Startup', 'Venture Capital', 'Wealth Management', 'Fintech']
How to Migrate Your Local MongoDB Database Between Computers
Restore MongoDB Data
At this stage, you already have your dump-files directory on your new machine. Next, we can proceed to restore the MongoDB data. Let’s assume I saved my dump directory at the path ~/Downloads/mongo-migration. Now we can use the default root role we have in our MongoDB to restore the database. Refer to the command below. We can use this single command to restore all of the databases, and you’d have the exact same databases as previously.
Restore multiple DBs
However, there could be a scenario where you only want to restore a few DBs, excluding a few DBs from the dump directory. For example, I want to restore the audit and client databases during this round. We can do this by using the --nsInclude or --nsExclude options in mongorestore.
Example of using --nsInclude
We can use --nsInclude to select only the databases and collections that we want to restore. In the command below, we restore all the collections in the audit database using audit.* and all collections in the client database using client.*. The wildcard after the dot notation means all collections in the database. If you want to include or exclude multiple databases, you’d need to specify multiple --nsInclude or --nsExclude options. Refer to the example below.
Example of using --nsExclude
We can use --nsExclude to exclude the databases and collections we don’t want to restore. In the command below, we exclude the partner, promotions, transaction, and utilities databases.
Restore a single DB
Lastly, this is the command to restore a single DB. There are a few things to take note of here: the -d option specifies the database name to be restored, and it is a required option here. Also, specify the correct dump directory. For example, if I’m restoring the utilities database, I must specify the utilities database dump directory. Refer to the command below.
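The mongorestore commands that the steps above point to (“refer to the command below”) were not preserved in this text version, so here is a hedged reconstruction of what they would roughly look like. The dump path, database names and the root user come from the article’s own examples; the exact authentication flags are assumptions, not the author’s original commands.

```bash
# Restore everything from the dump directory (the tool prompts for the
# root password, or pass -p explicitly)
mongorestore -u root --authenticationDatabase admin ~/Downloads/mongo-migration

# Restore only the audit and client databases (all of their collections)
mongorestore --nsInclude="audit.*" --nsInclude="client.*" ~/Downloads/mongo-migration

# Restore everything except the partner, promotions, transaction and utilities databases
mongorestore --nsExclude="partner.*" --nsExclude="promotions.*" \
             --nsExclude="transaction.*" --nsExclude="utilities.*" \
             ~/Downloads/mongo-migration

# Restore a single database: -d names the target DB, and the path must
# point at that database's own sub-directory inside the dump
mongorestore -d utilities ~/Downloads/mongo-migration/utilities
```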
https://medium.com/better-programming/how-to-migrate-your-local-mongodb-database-between-computers-debe57092ab5
['Tek Loon']
2020-07-31 14:22:05.348000+00:00
['Database', 'Mongodb', 'Software Engineering', 'Mongo', 'Programming']
This Weekend‘s News: Claims of Treason, CIA Spy Ops, and FBI Informants
An interesting situation has begun to develop following concurrent articles from the New York Times and Washington Post about an FBI informant that allegedly met with Trump advisors. The media response has been fractured, with several different narratives being told. With so many different accusations being thrown around, it can be difficult to cut through the bias and find the truth, so let’s dive in and see what the press is saying.
Prior to the Articles Being Published
The Intercept (Center), in their great analysis, describes the situation prior to the NYT and WaPo articles: Over the past several weeks, House Republicans have been claiming that the FBI during the 2016 election used an operative to spy on the Trump campaign, and they triggered outrage within the FBI by trying to learn his identity. […] In response, the DOJ and the FBI’s various media spokespeople did not deny the core accusation, but quibbled with the language (the FBI used an “informant,” not a “spy”), and then began using increasingly strident language to warn that exposing his name would jeopardize his life and those of others, and also put American national security at grave risk. […]
What is so controversial about all of this?
The controversial aspects of the case center on a few key points: 1. Is there culpability in revealing the FBI informant? The FBI and DOJ have been refusing to name the informant, insisting that doing so would endanger lives. Several accusations are being thrown around regarding the culpability of revealing his identity. 2. Were the FBI informant’s interactions with Trump advisors aboveboard, or were they somehow politically motivated? The FBI informant appears to have been contacting Trump advisors prior to the date established by the NYT and FBI as the start of the investigation. This would seem to contradict their earlier statements about why the investigation was started, and raise more questions about the motivations behind the investigation.
What the NYT and WaPo Articles Said
The NYT and WaPo articles both state an unnamed FBI informant made contact with several Trump advisors during his campaign. Halper allegedly met with several Trump advisors, including George Papadopoulos. The NYT article is mainly focused on the informant’s interactions with George Papadopoulos. For the uninitiated, Papadopoulos had bragged to an Australian diplomat about having access to hacked DNC emails prior to their public release, an event the NYT had claimed back in December began the FBI’s inquiry into potential ties between the Trump campaign and Russia. The Washington Post article, however, mentions several interactions between the informant and Carter Page, whom Trump had named as a foreign policy advisor, and appears to contradict the NYT timeline of the investigation. Exactly when the professor began working on the case is unknown. The FBI formally opened its counterintelligence investigation into Russia’s efforts to influence the 2016 campaign on July 31, 2016, spurred by a report from Australian officials that Papadopoulos boasted to an Australian diplomat of knowing that Russia had damaging material about Democratic nominee Hillary Clinton. The professor’s interactions with Trump advisers began a few weeks before the opening of the investigation, when Page met the professor at the British symposium. Several conservative outlets confirmed the informant met Page prior to Papadopoulos bragging to the Australian diplomat. Breitbart says: [..]
the problem with that account is that the FBI informant had approached Trump campaign adviser Carter Page before that email release on July 22, 2016, and before the Australians came forward with the information. The informant first approached Carter Page at a Cambridge symposium on the U.S. presidential election in London on July 11–12, 2016. Page was invited to the symposium in June 2016 by an unnamed doctoral student at Cambridge who knew Halper, according to a source. This would seem to imply that the FBI was investigating the Trump campaign before the date they had previously indicated.
Is there culpability in revealing the source?
Several of the far-left outlets are attempting to frame the story around the idea that revealing the source has somehow been a national security risk. Salon published an article saying the event has sparked a debate on treason, which ironically doesn’t actually feature the word “treason” aside from the title. ThinkProgress also published an article implying that not only did revealing the source create a security risk, but that Republicans were somehow responsible.
Who Actually Revealed The Source?
Neither the New York Times nor the Washington Post explicitly named the source, but they did list several specific details that allowed him to be identified, including: “an American academic who teaches in Britain” who “also met repeatedly in the ensuing months with the other aide, Carter Page,” and who met with Page “at a symposium about the White House race held at a British university.” Indeed, shortly after the articles were published, The Daily Caller was able to identify the source as Stefan A. Halper. While neither the NYT nor the WaPo explicitly named the source, other outlets generally agree they published enough information for the source to be identified. The Intercept notes: [..] both the Washington Post and New York Times — whose reporters, like pretty much everyone in Washington, knew exactly who the FBI informant is — published articles that, while deferring to the FBI’s demands by not naming him, provided so many details about him that it made it extremely easy to know exactly who it is.
Was revealing the source a national security risk?
The Intercept also notes how absurd the claim is that revealing Halper’s identity posed a national security risk. Earlier this week, records of payments were found that were made during 2016 to Halper by the Department of Defense’s Office of Net Assessment, though it is not possible from these records to know the exact work for which these payments were made. The Pentagon office that paid Halper in 2016, according to a 2015 Washington Post story on its new duties, “reports directly to Secretary of Defense and focuses heavily on future threats, has a $10 million budget.” It is difficult to understand how identifying someone whose connections to the CIA are a matter of such public record, and who has a long and well-known history of working on spying programs involving presidential elections on behalf of the intelligence community, could possibly endanger lives or lead to grave national security harm. It seems likely, then, that the idea that revealing this source was some sort of security risk is an attempt by several of the more left-leaning outlets to distract from the legitimate concerns that are surfacing.
2. Were the FBI informant’s interactions with Trump advisors aboveboard, or were they somehow politically motivated?
The primary question around all of this is whether the informant’s interactions with the Trump campaign were part of a legitimate investigation, or if they were politically motivated. Because the intent is so critical in this case, the media on both sides have been attempting to frame the incident in a way that supports their narrative.
What the right is saying
Right-wing sources frequently refer to the informant as an FBI spy, some claiming he was “inside the campaign”.
What the left is saying
The fixation around the use of the word ‘spy’ is especially interesting because the article they are referring to from the NYT is titled “F.B.I. Used Informant to Investigate Russia Ties to Campaign, Not to Spy, as Trump Claims”. The Intercept points out an odd conversation between CNN’s Andrew Kaczynski and the New York Times’ Trip Gabriel that seems to “vividly illustrate the strange machinations used by journalists to justify how all of this is being characterized”. Several other liberal outlets are trying desperately to frame the story the same way as the NYT, with responses ranging from the Washington Post claiming the informant was protecting Trump, to CNN using language like “informant talked to”, to CBS’ laughably awkward “interacted with”. Another popular strategy appears to be to attempt to discredit Trump’s claims entirely.
What the center is saying
Center outlets are trying to call attention to the lengths to which both sides are going to frame the context within their own narrative, and urging readers not to simply dismiss the claims as unfounded, while admitting the scope and intent of the FBI’s use of the informant is currently unknown. NBC News says the claims may not be unfounded: But it would not be absurd to think the FBI might have sent informants to speak to suspects in their counterintelligence investigation into whether anyone in the Trump orbit was working with Russia to interfere in the presidential election. In fact, it would have been accepted procedure for the FBI. The Intercept goes to great lengths to illustrate the bizarre way the media is framing the issue, and points out a huge missing piece of the puzzle: the same FBI informant was used in an unethical and potentially criminal election spying operation in the 1980s, a fact which would have been known by both the NYT and the Washington Post. The NYT in 1983 said the Reagan campaign spying operation “involved a number of retired Central Intelligence Agency officials and was highly secretive.” The article, by then-NYT reporter Leslie Gelb, added that its “sources identified Stefan A. Halper, a campaign aide involved in providing 24-hour news updates and policy ideas to the traveling Reagan party, as the person in charge.” […] Halper, through his CIA work, has extensive ties to the Bush family. Few remember that the CIA’s perceived meddling in the 1980 election — its open support for its former Director, George H.W. Bush, to become President — was a somewhat serious political controversy. And Halper was in the middle of that, too. […] So as it turns out, the informant used by the FBI in 2016 to gather information on the Trump campaign was not some previously unknown, top-secret asset whose exposure as an operative could jeopardize lives. Quite the contrary: his decades of work for the CIA — including his role in an obviously unethical if not criminal spying operation during the 1980 presidential campaign — is quite publicly known.
The Intercept also points out that this raises several questions that should not be ignored: THERE IS NOTHING inherently untoward, or even unusual, about the FBI using informants in an investigation. One would expect them to do so. But the use of Halper in this case, and the bizarre claims made to conceal his identity, do raise some questions that merit further inquiry. While it’s not rare for the FBI to gather information before formally opening an investigation, Halper’s earlier snooping does call into question the accuracy of the NYT’s claim that it was the drunken Papadopoulos ramblings that first prompted the FBI’s interest in these possible connections. And it suggests that CIA operatives, apparently working with at least some factions within the FBI, were trying to gather information about the Trump campaign earlier than had been previously reported. Then there are questions about what appear to be some fairly substantial government payments to Halper throughout 2016. Before finally concluding: Whatever else is true, the CIA operative and FBI informant used to gather information on the Trump campaign in the 2016 campaign has, for weeks, been falsely depicted as a sensitive intelligence asset rather than what he actually is: a long-time CIA operative with extensive links to the Bush family who was responsible for a dirty and likely illegal spying operation in the 1980 presidential election. For that reason, it’s easy to understand why many people in Washington were so desperate to conceal his identity, but that desperation had nothing to do with the lofty and noble concerns for national security they claimed were motivating them. The Hill, in an apparent plea to other media organizations, published an article expressing concern over the claims, stating that exposing the truth is in everyone’s interest. I have been highly critical of Trump’s attacks on the media. However, that does not mean his objections are wholly unfounded, and this seems one such example. There may have been legitimate reasons to investigate Russian influence before the election. Yet, very serious concerns are raised by the targeting of an opposing party in the midst of a heated election. These concerns will be magnified by the use of a confidential source to elicit information from Trump campaign associates, though officials deny that the FBI actually had an informant inside the campaign. They go on to offer an excellent summary of why it’s so important for both sides to cut through the bias and be properly informed about the situation. Just as it is too early to support allegations of a conspiracy to frame Trump, it is too early to dismiss allegations of bias against Trump. As shown by many of the emails and later criminal referrals and disciplinary actions at the FBI, an open hostility to Trump existed among some bureau figures. Moreover, the extensive unmasking of Trump figures and false statements from FBI officials cannot be dismissed as irrelevant. As a nation committed to the rule of law, we need a full and transparent investigation of these allegations. All of the allegations. That includes both the investigation of special counsel Mueller and the investigation of these latest allegations involving the FBI. For many Trump supporters, this new information deepens suspicions of the role of the “deep state.” If we ever hope to come out of these poisonous times as a unified nation, the public must be allowed to see the full record on both sides.
To conclude, the Conservative media is trying to frame the event as an attempt by the opposing political party to spy on the Trump campaign. The Liberal media is trying to discredit or downplay the claims. The Center media is urging people not to ignore the claims, while at the same time not jumping to conclusions. Help fight misinformation, sign up for Bitpress updates.
https://medium.com/bitpress/this-weekend-s-news-claims-of-treason-cia-spy-ops-and-fbi-informants-bbbe5c3092d0
['Eric Oetker']
2018-05-21 02:25:29.493000+00:00
['Mueller', 'Journalism', 'Misinformation', 'Trump']
This 3 Step Approach Can Transform Your Data Science Journey
Finding myself in a challenging situation.
It was a Monday, and we had to do a demo for the stakeholders by Wednesday on a product we had been working on. To give you some context, I was a Machine Learning Engineer who was mostly involved with researching, prototyping, and developing AI products. We got terrific results for the experiments we ran; the product was in good shape. But we also knew we’d have to build a working application for the demo, and the problem was we were short of time. And on top of it, we didn’t have any experience in building apps. With the limited time I had, I searched all over Google for tutorials, walkthrough videos, documentation, and tools to build machine learning applications. I stumbled upon a tool/library called Streamlit and understood this could do the job. All of us find ourselves in challenging situations; how we approach them makes all the difference.
Starting to learn all over again.
I love starting to learn all over again. (Photo by Jonathan Borba on Unsplash)
I love to learn all over again. I naturally learn every day. With years of experience, I’ve mastered the art of learning how to learn, and I’m relatively quick about it. I knew the time I had in hand was limited. Still, I went on to watch walkthrough videos on YouTube, navigate through articles written on Streamlit, and finally, the rabbit hole of official documentation. I was pleasantly surprised when I discovered how simple it was and built my machine learning app within the day, thereby meeting the Wednesday deadline. A day of learning and developing, and boom, we had the working product for the demo! Learning is truly the first step for mastery, and don’t let anybody tell you otherwise.
Grabbing an opportunity when it knocks.
We finally delivered the presentation and demo to the stakeholders. My manager, who was also at the demo, wanted me to introduce this tool to the team. I didn’t hesitate for a second. I was more than happy to share everything about machine learning app development with the team. I prepared and conducted a workshop, had a live-coding session, and helped the team upskill. More than anything, I saw it as an opportunity to grow and become better. Post-workshop, I asked for feedback from the team. I listened to what I did well and what I could improve. I couldn’t have been more satisfied; it was a win-win! Opportunities are everywhere; making it big or letting them go is all on us.
Sharing the experience with the world.
In most of the feedback I received, one thing stood out. My app development process was easy to grasp and saved the team a lot of hours. What if I could share this with the world for everyone’s benefit? Soon enough, I started writing my heart out. I took the time to create a new real-world example, push the code to a GitHub repository, embed code snippets, and finally publish it on Towards AI.
Towards AI featured my article on the front page! (Screenshot by Author)
This article went on to become one of my most-viewed articles. (If you’re curious to read the article, you can do so here.) Everyone who read it enjoyed it, and my LinkedIn was full of positive feedback. Eventually, the Towards AI publication featured my writing in the month's newsletter. I was tasting success — the success of my approach. And before I could process everything, I became a Top Writer. Here’s a thing about Medium: most Top Writers on Medium publish a lot. I don’t. I have hardly 10 articles published. Instead of quantity, I focussed on quality. I became a Medium Top Writer.
(Screenshot by Author) Every time I publish, I make sure I give something of value to the reader. While delivering value, I focus on improving myself, becoming a better data scientist, and becoming a better machine learning engineer with every article I publish. The Top Writer tag means nothing to me; becoming a better data scientist means everything to me.
https://medium.com/towards-artificial-intelligence/this-3-step-approach-can-transform-your-data-science-journey-a48f6a753097
['Arunn Thevapalan']
2020-12-11 01:02:58.661000+00:00
['Machine Learning', 'Artificial Intelligence', 'Education', 'Data Science', 'Learning']
How Reading Fiction Has Made Me a Better UX Designer
How Reading Fiction Has Made Me a Better UX Designer Applying lessons learned from literature to the design process Photo: pchyburrs/RooM/Getty Images Plus I used to be an avid reader. When I was a kid, I would go to the library and check out more books than I could carry home or ask my parents to drop me off at the bookstore (aka my backup library) on their way to work and spend the day reading there. As I got older, reading became less of a priority. I would relish books I had to read for homework and the passionate conversations we would have in class the next day, but I didn’t spend as much time reading for fun. A few months ago, I found myself living a two-hour train ride away from work. The first week was dreadfully boring, but I quickly realized this was four hours a day being handed to me on a silver platter. Three months and 6,616 pages later, I’ve noticed an unexpected side effect of rediscovering my first love: It’s making me a better UX designer. Empathy When an author describes a story from a character’s point of view, it allows me to see the world through the character’s eyes, experience their delight and frustration, and understand why they make the decisions they do. A richly drawn character is a reminder that people are multidimensional, always a product of their circumstances. Take The Kite Runner for example: The protagonist is cowardly, unlikable, and entitled. His actions hurt the people who care about him the most, but the author describes why. Through the author’s words, I feel the protagonist’s jealousy toward anyone who receives his father’s affection, his paralyzing fear when he encounters a bully, and his heart-wrenching guilt when he instantly regrets his actions. Reading allows me to see the world through his eyes and empathize with him despite never having had his experiences in my own life. Developing a deep understanding of characters who live in different circumstances than myself continues to teach me to empathize with user groups that I don’t belong to. For example, at Punchcut, I worked on a project that aimed to help people find their purpose. It was challenging for me to personally relate to this problem, but I thought back to the main character in Siddhartha and how he goes through the course of self-discovery. His journey helped guide the decisions I made when designing for this group of people. The relationships I’ve built with the characters of the stories I’ve read have allowed me to develop a nuanced empathy and understanding of diversity that helps me design for a wide range of people. At a larger scale, I’ve gained a detailed awareness of a greater variety of circumstances. I’ve started to view the world as more diverse — not just in terms of demographics, but as a spectrum of experiences, customs, and lifestyles. Imagination Reading a book requires me to use my imagination to visualize the world I’m reading about. When I read a book, my brain has the creative agency to explore as my imagination takes me to different places, events, and time periods. Hamlet transports me to a castle in Denmark, back in the late middle ages. A Clockwork Orange takes me to a dystopian future with a scheming totalitarian government. In The Lion, the Witch, and the Wardrobe, I travel to Narnia, a fantasy world full of talking animals. My imagination is the train that takes me there. Imagination is like a muscle: The more you use it, the more creative you can be. 
This is essential to UX design — I use my imagination to visualize scenarios, context, and design elements as I’m coming up with concepts. When I was working on a concept project recently, I had to imagine how a specific technology could be used though the technology does not yet exist. Reading about imaginary worlds has taught me how to conceptualize how something will look, act, and feel before it exists and has improved my ability to solve problems in innovative ways. All of those are core design skills. Communication Books aren’t all written the same way. Numerous techniques can be used to tell the same story — with devices like flashbacks and symbolism — and depending on the story, some work better than others. Reading has armed me with a toolset of storytelling strategies and new ways of communicating my ideas. Through example, literature continues to develop my understanding of how best to communicate based on the audience and content. Books reveal details with a deliberate order and pace to tell the story in a digestible way. By sharing information when it is most relevant, the author keeps the reader focused. This is parallel to the interaction design technique known as progressive disclosure, a method of revealing details in a design as they are needed. I’ve found the concept of storytelling just as important for presenting my designs. When I was putting together a recent presentation, I thought back to two books with drastically different communication styles. Hard-boiled Wonderland and the End of the World is a book that discloses details with restraint, weaving the context and plot together like a mystery leading up to a great reveal. Catch-22 invests in setting the scene to provide ample context before diving into the plot. The former builds anticipation while the latter is direct. The contrast between these two books helped me better consider my audience and decide that they were more of a Catch-22 bunch. As a UX designer, the art of storytelling is essential for communicating my ideas. By reading, I’ve observed and developed my own eloquence, making me more effective at telling the story of my designs. Analysis The analysis I naturally do while reading — picking out devices like metaphor, foreshadowing, and irony — has developed my eye for identifying motives, patterns, and themes. This is essentially the core of research synthesis. Design even uses some of the same methods as literature, such as identifying themes, hero moments, and archetypes. During a recent project, my team was collecting information about user behavior and actions when ordering medical supplies. While synthesizing these findings to identify patterns and insights, I realized this is exactly what I do while reading (except without the Post-its). Literary analysis is to reading what synthesis is to UX research. They use the same skills, so the more I read, the more instinctive synthesis becomes.
https://modus.medium.com/reading-fiction-has-made-me-a-better-ux-designer-bcaed958ca45
['Anoosha Baxi']
2019-06-03 23:12:05.630000+00:00
['Design', 'Craft', 'Reading', 'Visual Design', 'UX']
Build a Skeleton Component in React for Better UX
1. Structure the Skeleton UI
The first step is, obviously, to make the empty skeleton components:
The skeleton should look the same as the content as much as possible
When you make the skeleton component, there’s one thing you should be careful about. The purpose of the skeleton UI is to reduce a boring wait for the data, so it shouldn’t look too different from the real UI components. If they’re too different, users may feel like the skeleton is another independent UI component.
2. CSS animation
The second step is to choose the animation that will go through the skeleton. Some people use pure CSS animation and some people use an image. I personally prefer to use an image, especially when the animation contains some gradient background colors.
Original source of the image is npm
Do you see the white gradient that goes through the skeleton from the left to the right? Painting that every 16.6ms is an unnecessary amount of work for the CPU — it has to calculate every gradient color value on each spot and work with the GPU to represent it on the screen. For this reason, I prefer to use an image. However, in this example, I’m going to use pure CSS to show you how to do it. It may not always be possible to use an image. I added a gradient background like so to the element that wraps the whole content. But this looks weird — you can see the whole gradient pillar. We only want it to show in the gray area. To achieve this, we add the pseudo-class to every gray element. Remember that the element the gradient background belongs to should have styles like this:
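The CSS snippets this section refers to (“like so”, “styles like this”) did not survive in this text version. Below is a minimal sketch of the kind of styles being described, assuming a .skeleton class on each gray placeholder element; the class name, colors and timing are illustrative guesses rather than the author’s original code, and the sweeping highlight is implemented here with an ::after pseudo-element.

```css
/* Gray placeholder block; overflow: hidden clips the gradient "pillar"
   so the shine is only visible inside the skeleton itself. */
.skeleton {
  position: relative;
  overflow: hidden;
  background-color: #e2e2e2;
  border-radius: 4px;
}

/* The white gradient that sweeps across from left to right. */
.skeleton::after {
  content: "";
  position: absolute;
  top: 0;
  left: -100%;          /* start fully off to the left */
  width: 100%;
  height: 100%;
  background: linear-gradient(
    90deg,
    rgba(255, 255, 255, 0) 0%,
    rgba(255, 255, 255, 0.6) 50%,
    rgba(255, 255, 255, 0) 100%
  );
  animation: skeleton-sweep 1.5s ease-in-out infinite;
}

@keyframes skeleton-sweep {
  to { left: 100%; }    /* slide across, then repeat */
}
```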
https://medium.com/better-programming/build-a-skeleton-component-in-react-for-better-ux-b1dca9d783e6
[]
2020-05-17 07:51:15.840000+00:00
['React', 'Nodejs', 'Reactjs', 'Programming', 'JavaScript']
The Myth of the T-Shaped Developer
The Myth of the T-Shaped Developer Exploring Generalization and Specialization in Developer Careers Recently a younger developer I respect expressed a somewhat common concern. In essence, their concern was that they were finding themselves doing a little bit of everything and not specializing enough. They were specifically concerned that nobody would want to hire them without a key specialization. They also mentioned the idea of a “T-shaped” developer who has a wide breadth of experience but a specific area that they are deeply skilled in. Keep in mind that this was a new developer who had recently graduated from a bootcamp and that specializing early on can be both hard and limiting. This is a common concern for new developers and so, in this article, I’ll explore the pros and cons of generalizing, specializing, and being a so-called T-shaped developer, as well as introduce the term “comb-shaped,” which I believe is a more accurate picture of a developer career. Generalization Generalizing means having just enough skill at a wide array of things. As a generalist, you’re competent enough to wade through issues all over a technology stack, work with a wide variety of projects, and jump in and help out where needed. This means that you’re able to do a lot to help a team and are more likely to take features from beginning to end. This flexibility in turn tends to result in more things coming your way. You may get bugs assigned to you because people trust you will be able to identify the broken area and will have the familiarity needed to fix things in those areas. Additionally, your ability to get involved quickly in codebases makes you a leading candidate when the business needs to move someone onto a project to help out. This increases your value to the organization, but it can be frustrating and disruptive to you. The downside of generalizing is that you tend not to know the newest practices and patterns of a particular technology layer and are more likely to write sub-optimal code just because you don’t know the best way of doing something or have enough experience to avoid common pitfalls. Additionally, specialists may look down on your technical abilities because they see you contribute in technical stacks where you are not an expert and they judge you against specialist standards. This perception of your abilities may cause the organization as a whole to respect you less than they should or pay you less than your specialist peers. Specialization Specialization offers a chance to really explore and become an expert in certain technologies and areas. Subsets of larger frameworks, front-end or back-end code, data access layers, communications logic, etc. are all technical areas where people can specialize. Additionally, people can specialize in larger organizations on specific projects or specific types of projects, though this is not frequently viewed explicitly as a form of specialization. Specialists tend to get more complicated changes and more complicated bugs. They’re more likely to be labeled as experts because their role requires expert-level knowledge in a specific area. This expert-level knowledge results in a rarer skillset and can command higher pay, as fewer people in a job market have the needed expertise for senior-level roles. That’s not to say that specialists must be senior developers. Junior developers can also specialize in certain areas and often get forced into this by organizational needs.
This can help junior developers gain skill and comfort more quickly, but limit their usefulness to the larger organization or team. Specialization has its downsides, however. As a specialist, you are much more dependent on a specific technology. If this technology declines in popularity or suddenly becomes unsupported for a major platform, you are likely to be suddenly forced to compete with other specialists in that area who are looking to adapt their careers to account for the sudden change. Alternatively, if you stick it out when others move on, you can win some very lucrative roles maintaining old technologies no longer in active development, but bear in mind that it becomes increasingly difficult to find jobs that need your skillsets over time and you will lose respect in the eyes of other developers as you are seen associated with an ancient technology. Specialists also miss some opportunities in organizations because their skills are so narrowly focused. It can be hard to get promoted or moved to another project if there isn’t another person around with similar skills who can pick up your work. This means that specialists don’t see as much variety and don’t get to move on to some of the newer and more “fun” projects organizations might create. That said, being extremely skilled in a specific area can be a lot of fun for a simple reason: It’s fun to be really good at something. The T-Shaped Developer A “T-Shaped” developer is a term you’ll hear thrown around. This simply refers to someone who has the breadth of a generalist and the depth of a specialist in a specific area. T-Shaped developers tend to arise out of a generalist diving deeper into a specific area they find they have an affinity to or from a specialist branching out and learning more areas. Most developers will become T-Shaped by the time they hit senior developer, but I do not feel that this is an end-state for most developers. Being T-Shaped is not bad, but I don’t think it’s truly reflective of a modern development career. The Comb-Shaped Career Instead, I’d argue that as developers age and gain more experience with emerging technologies, they become closer to something more “comb-shaped”. That is to say that they have the breadth of a generalist and multiple specializations that they’ve gravitated to throughout their career. Additionally, the “base” of the comb is thicker or deeper than the traditional depth of a generalist. All of this is to say that as developers continue to develop and grow and encounter new libraries and technologies, they amass a lot of areas of specialization and a greater degree of comfort in areas they still are generalists in. The multiple specializations they have carry some interesting benefits. Because different disciplines and technologies look at different techniques to solve their problems, you can often borrow some specific tools and techniques from one specialization and apply them to other specializations. This isn’t always a literal reapplication of the same technology, but more recognizing a common principle and applying it in a new context. As you become more and more comb-shaped you will feel a greater degree of familiarity when working with newer technologies, be able to apply new things quickly, and feel a greater sense of general wisdom for software development. Important Note: There is no guarantee that by the time you become “comb-shaped” you will have any hair left. Such are the ironies of life. 
Directing your Own Path So, next time someone suggests that you should specialize more or that you should be more of a generalist or more of a “T-Shaped” developer, take it into consideration, but don’t fret that it’s an irreversible decision. Your career is your own and, if you stay in development, you’re quite likely to move back and forth between times of specialization and times of generalization. If it is helpful to anyone, here are the ebbs and flows of my own journey thus far:
- Started as a Java generalist
- Moved to a .NET generalist fixing bugs throughout a codebase
- Pivoted to a WPF specialist writing very involved and complex high-performance controls
- Leveraged that T-Shape to become a senior developer
- Gained an additional specialization in web services
- Generalized more into ASP .NET web development
- Specialized as a Silverlight developer
- Branched out into Angular as a true “full stack” developer as Silverlight rapidly fell
- Specialized more in Angular as we needed to make up some complex ground
- Changed jobs to become more of a .NET generalist and manager
- Specialized in software quality and technical debt management to help my organization meet its needs
Life is a journey and our careers are part of that journey. Pick a path that works well for you and keep in mind that it’s a temporary path that will often rejoin other major roads.
https://medium.com/swlh/the-myth-of-the-t-shaped-developer-8dbbbbb9f4fc
['Matt Eland']
2020-02-22 18:57:30.829000+00:00
['Bootcamp', 'Careers', 'Career Development', 'Software Engineering', 'Junior Developer']
Young Thomas Young. The Visionary 19th-Century Universal…
Quick Intro
The fourth entry in this series of Masters of Many in their formative years lacks the mainstream popularity of our previous protagonists, especially relative to Da Vinci & Franklin; Thomas Young, however, nonetheless embodies the perfect portrait of a 19th-century polymath. Praised by the likes of Einstein, Asimov, & Maxwell, Young’s accomplishments are countless & impactful; as renowned author Isaac Asimov once claimed [of Young]: He was the best kind of infant prodigy, the kind that matures into an adult prodigy. Young held within a rare, feverish desire to learn & accomplish; this tenacity in pursuit of knowledge, paired with his clearly-heightened, raw intelligence, led to a uniquely productive & prolific life. Staying in context with the previous entries, let’s narrow down the historical peripherals to a decade of his formative years, asking the question — what was he like in his twenties?
Note-Worthy Accomplishments
— Established the wave theory of light through his double-slit experiment; first physicist to suggest that light is both a particle & a wave
— Significant contributions to the decipherment of Egyptian hieroglyphs, specifically the Rosetta Stone; lifelong polyglot with mastery of five languages
— Medical physician that laid down the foundations for vision & color theory by identifying the presence of three kinds of nerve fibers in the retina
— Well-rounded Royal-Society physicist that made a dizzying number of entries to the Encyclopaedia Britannica, ranging from vision, light, mechanics, energy (credited with the term itself), language & finally music
20s To 30s (1793–1803)
From birth, it was quite clear that Thomas Young held prodigious intelligence. At the early age of six, for example, he found the village school quite dull — so he was sent to a clergyman as a form of advancement. Young, in his autobiography, almost immediately voiced his continued frustration, stating that the clergyman “had neither the talent nor temper to teach anything well.” Young spent only six more years in official learning institutions. By the time he left preparatory school in 1786, he was already knowledgeable in many languages (ancient & modern), as well as versed in Newtonian physics. At thirteen years old, bored with schooling, Young became a tutor to Hudson Gurney (grandson of David Barclay); he remained there from 1787 to 1792. The two became lifelong friends, and Gurney observed that Young: Believed what one man can do another can if he’s willing to make the effort. In a classic display of his lifelong perseverance, by seventeen Young had powered through nearly all of the major writers of antiquity (Aristotle, Plato, Newton, etc.) in their original tongues. Newton’s Opticks, in particular, sparked a lifelong interest in the concept of vision, the mechanics of light & the anatomy of the eye. He began to study medicine in London at St Bartholomew’s Hospital in 1792; additionally, he enrolled as an assistant pupil at the hospital. Required to dissect an ox’s eye, he began jotting down a flurry of ideas & experiments…
Isaac Newton’s Opticks
Young wasted no time implementing a life of prolific productivity — in the year 1793, as a twenty-year-old, he submitted his very first publication to the Royal Society of London: Observations On Vision. Based on this dissertation, at the unusually young age of twenty-one, Young was elected a fellow of the Royal Society.
A short two years later, after transferring again, twenty-three-year-old Young obtained the degree of Doctor of Medicine from the University of Göttingen in 1796. The next year, due to regulatory requirements set by the Royal College of Physicians, Young discovered that he’d have to repeat a few years of medical school in order to attain an MD & become a practicing physician in England; he, therefore, enrolled one last time at Emmanuel College, Cambridge. This same year (1797), at twenty-four, Young suddenly found himself financially stable as he inherited the estate of his grand-uncle Richard Brocklesby. At twenty-six, Young established himself as a physician & officially opened a proprietary place of practice at 48 Welbeck Street, London (seen below). His primary interest at this time consisted of sensory perception & the anatomy of the eye.
Commemorative Plaque @ 48 Welbeck Street, London
The following year, twenty-seven-year-old Young, a bit distracted from his physician practice, upped his publication pace. In the first half of the year he submitted Sound and Light to the Royal Society; in the second half of the year he submitted a further paper, On the Mechanism of the Eye — which measured astigmatism for the first time. In 1801 Young, continuing his flurry of publications, published one of his most legendary pieces: On the Theory of Light and Colours. This ground-breaking dissertation contained not one, but two monumental theories. First, he presented a wave theory of light, deriving the wavelengths of different colors of light using diffraction (preceding his double-slit experiment). Next, he put forth the theory of three-color vision to explain how the eye could detect colors — a theory that today we know to be true given the three types of cones located in the retina. To round the year out, at twenty-eight, Young was appointed professor of “natural philosophy” (basically physics) at the Royal Institution. In 1802, twenty-nine-year-old Young was appointed foreign secretary of the Royal Society. The following year (1803), the closing year of this introspection, our hero broke a commitment & resolved to enter another, lifelong one. At thirty, Young resigned from his professorship at the Royal Institution, claiming that he feared that its duties would interfere with his medical practice (it’s much more likely that he simply found the professorship distracting from his many experiments). This same year he met one Eliza Maxwell, his lifelong sweetheart whom he’d soon marry.
Quirks, Rumors & Controversies
People with enormous strengths tend to balance this out with tragic weaknesses. As we’ve seen with previous protagonists, every bright light casts a shadow; so, the question follows — what shadows followed Thomas Young? An overlap of a quirk & controversy, a negative association attributable to Young was his rather experimental, unorthodox, & at times arguably inhumane physician practice. It’s well-known that Young, in his brilliant rebelliousness & intellectual independence, strongly disliked traditional medical methods; he therefore often turned to untested & experimental methods for treatment. As a lover of science & history observing Young’s accomplishments centuries later, I find his physician methods intriguing; as a patient in the 1800s with a serious ailment in dire need of treatment, however, the kooky physician revered for his physics & wide-ranging experiments didn’t exactly inspire trust.
This apathy towards his physician career, while it undoubtedly opened up his bandwidth for pursuing his many interests, naturally led to a mediocre commercial track record. Had Young not inherited his grand-uncle’s estate, it’s highly likely that he would’ve either run into serious financial issues or, arguably worse, would’ve curtailed his rapidly expanding knowledge. Additionally, as well-rounded as Young was, he was infamous for being quite the poor communicator. His tenure as a professor, for example, leaves a few hints that perhaps Young didn’t leave so much as he was asked to resign; during the first two years of his lectures, it’s recorded that his teachings were very technical, contained too much breadth, &, of course, consisted of his latest home-experiment results. It was said by one of his contemporaries, George Peacock, that: His words were not those in familiar use, and the arrangement of his ideas seldom the same as those he conversed with. He was therefore worse calculated than any man I ever knew for the communication of knowledge.
In Closing
Who was Thomas Young in his twenties? A genuine workhorse genius infatuated with medicine, physics, & the best of both worlds: vision, optics & the eye. Was he accomplished in his twenties? Yes. Out of the four protagonists summarized, Young maintained a surreal consistency in productivity, seen from early childhood, that extended much past his twenties. Though, as noted above, many of his scientific breakthroughs were only possible due to the financial stability he inherited. Nearing his thirties, Young was on the cusp of his arguably greatest contributions (the double-slit experiment & the Rosetta Stone decipherment). His pace in publications, experiments & contributions rarely waned as he aged; & his scope in topics calling his interest, it seemed, only expanded over time. Reminiscent of other intellectual rebels that carved their own path, I’d like to highlight a lesson Young lived out: refuse to let academia & traditional paths overshadow your education. Your absolute best work will come from aligning multiple purposes, skillsets, & interests, all characteristics very personal to you — find a way to create your own path to that sweet spot; it’s hardly one that’s already been walked. Yes, Thomas Young is certainly nowhere near as much of a household name as Benjamin Franklin or Leonardo Da Vinci, but leave all doubts at the door: Thomas Young is vastly underappreciated as a 19th-century universal genius.
Additional Entries
Part I — Benjamin Franklin
Part II — Bertrand Russell
Part III — Leonardo Da Vinci
Part V — Mary Somerville
Part VI — Richard Feynman
Part VII — Sir Francis Bacon
Part VIII — Jacques Cousteau
Part IX — Nikola Tesla
Part X — Isaac Newton
Sources
The Last Man Who Knew Everything: Thomas Young
https://medium.com/young-polymaths/in-their-20s-thomas-young-5112c506906b
['Jesus Najera']
2019-12-28 19:57:40.035000+00:00
['Life', 'Physics', 'Education', 'Science', 'History']
The Kabbalahistic Origins of the Search for AGI, and its relation to Sheol.
A conceptual and theoretical apology for the morality of a hell marker.
Combinatorics is essential to Kabbalah and also AGI.
One of the fun things about the advent of AGI and VR is being able to actually access the functions of divine beings — the creation of environments and the entities that populate those environments. While what is possible once we unlock these functions is very fun to consider, it is also worth attempting to empathize with past societies and how they dealt with views of — for lack of a better term: Sinners. “Sin” has long been thought to come from an archery term which means “to miss the mark.” This is an extremely relevant term when you are thinking in terms of neural network and AI design. The exact corollary is to have a target-based learning set and a creative system that attempts to recreate the target. In this case a “sinner” is merely an iteration of a function that failed to evolve in a direction that increased “fitness” and was evolutionarily less weighted toward the goal set. In this case, “sinners” are gradient descent dissidents that go in the opposite direction of the training set and are discarded.
“Sinners go to Sheol”
To take the analogy one step further is to imagine a scenario of infinite or practically infinite computing power. (Side note — symbolic expansion on how this is feasible.) In this scenario the notion of having files, parts, and systems that are discarded outside of the system is actually wasteful, because you must then re-index past misses into the system only to re-discard them. Given a scenario of infinite compute, you should actually mark misses and store both valid and non-valid entities, in order to track similarities of non-validity. At the same time, having a large, organized store of known and named types of invalid entities is itself, as a meta practice, extremely valuable as a source of valid invalids. Consider, for example, the notion of Sheol, where the notion of Hell comes from: a physical place where they would burn trash. Hence the imagery of burning in Sheol is a mistranslation by early writers who did not understand that Sheol was a physical space outside of Jerusalem used to burn trash. However, given infinite compute (side note — definition of IC as opposed to Singularity), a marker that is basically saying “this data bit is garbage” would be appropriate, and should go to the garbage dump. I.e., “Sheol” means “the place where invalids are not discarded.”
The Moral and Digital Apology of Sheol
Considering the above conclusions, the notion of being discarded but not deleted then serves to justify the notion of “hell” not as a place of torture, where beings are getting their kicks off of torturing lesser beings, but as divine foresight that allows even the invalid an existence on their own terms. It further justifies the notion of the discarded place (Sheol) not as a place of torture, but as a place of torment, where the entities are actually experiencing their own natures. Free will allows the existence of evolution on our own terms, allows us to choose our own image and forsake the goal, and allows us to continue to self-define in the wrong direction forever. A more damned scenario, I cannot personally imagine. I will choose my targets from within the “Tao,” the Goal Set pattern, or “the way.” At the same time, adopting the stance of allowing for a discarded index is a good principle for AI and evolved-systems design.
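To make the essay’s closing design principle a bit more concrete, here is a small, hypothetical Python sketch of an evolutionary loop that marks and archives its “misses” instead of deleting them; the toy fitness function, names and proportions are all illustrative assumptions, not anything from the original post.

```python
import random

TARGET = 0.0  # the "goal set": candidates should approach this value


def fitness(candidate: float) -> float:
    """Toy fitness: smaller distance from the target is better."""
    return -abs(candidate - TARGET)


def evolve(generations: int = 100, population_size: int = 20):
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    archive = []  # the "Sheol" index: misses are labelled and kept, never deleted

    for gen in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        keepers = scored[: population_size // 2]
        misses = scored[population_size // 2:]

        # Instead of discarding misses outright, mark and store them so that
        # later analysis can study which kinds of invalid candidates recur.
        archive.extend(
            {"generation": gen, "value": m, "fitness": fitness(m)} for m in misses
        )

        # Refill the population by mutating the keepers.
        population = keepers + [k + random.gauss(0, 0.5) for k in keepers]

    return population, archive


if __name__ == "__main__":
    survivors, sheol = evolve()
    print(f"best survivor: {max(survivors, key=fitness):.4f}")
    print(f"archived misses: {len(sheol)}")
```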
https://medium.com/datadriveninvestor/the-kabbalahistic-origins-of-the-search-for-agi-and-its-relation-to-sheol-7950bf476f16
['Jordan Service']
2018-10-13 20:03:05.510000+00:00
['AGI', 'Christianity', 'Apologetics', 'Artificial Intelligence', 'Kabbalah']
How Wendy’s Revolutionized Corporate Social Media Accounts
How Wendy’s Revolutionized Corporate Social Media Accounts From Twitter to Twitch Source: author’s screenshot from Twitter In 2017, one of the biggest memes on the internet was Wendy’s Twitter account. In one of the most unexpected moves for a fast food joint trying to get an edge on its competitors, Wendy’s started roasting other Twitter users and brands in order to gain popularity. Source: author’s screenshot from Twitter Wendy’s Twitter account quickly became one of the best places on the internet to search for roasts or memes. This quickly boosted Wendy’s popularity both on Twitter and in its restaurants. In 2017, Wendy’s Twitter account went from 2.1 to 2.4 million followers in just six months, and the account currently has 3.7 million followers. Source: author’s screenshot from Twitter Financially, this also proved to be an ingenious marketing move by Wendy’s, as their net income increased from $129.6 million to $194 million. For those of you who don’t want to do the math, that is 49.7% growth in one year. For any brand, that kind of growth would be wildly impressive, but it is even more impressive from an established brand that many people might assume has already reached most of its consumer base.
https://medium.com/better-marketing/how-wendys-revolutionized-corporate-social-media-accounts-6d4aec739f37
['Justin Thorne']
2020-12-02 18:02:16.983000+00:00
['Marketing', 'Advertising', 'Wendys', 'Twitter', 'Social Media']
The Best AI Trends: An Overview of Coming Trends
The Best AI Trend Is Yet To Come A complete overview of all AI trends (we know of) coming at you Title Image by Author, License for background held through Envato Elements AI has made incredible progress over the last decade, and better tools and models are being developed every day. From GPU acceleration to progress in Natural Language Processing, we have seen accelerators and enablers taking shape and moving huge amounts of investment in the recent past. DeepMind showed us again just this week that things thought to be impossible for another decade can become a reality in no time. Ranging from smart robots to neuromorphic hardware, we will have a look at the top 13 AI trends that will be on everyone's mind from now until 2025. I am in no way affiliated with any of the following companies. All of the following technologies are listed in the Innovation Trigger category by Gartner. 1. Digital Ethics However, the risks are also great, as the allure of using sensitive information sometimes makes it too easy to overlook the downsides. Governments and companies need to address several ethical questions and find methods to capitalize on information while designing best practices to respect people’s privacy and maintain trust. The worst-case scenario in my mind for the future is a public that distrusts AI fundamentally and stalls development. Think about weapons, fake media, social media bots, deepfakes, and we haven’t even started on what state actors might be up to behind the scenes. Screenshot from Google 2. Knowledge Graphs Knowledge Graphs have been around for over 20 years. The basic idea is that of taxonomies (ontologies, more precisely), classifying and grouping information. Knowledge Graphs are the more conservative approach to intelligent systems: built on clear rules and highly interpretable. Combinations of new Deep Learning-based methodologies and graph databases have contributed to the modern hype around these structures. Example of a Knowledge Graph, Source Github They are often used in areas where transfer learning is needed or understanding context is crucial. Google holds the record with its knowledge graph, which is used for many services such as the search engine and voice assistants. 3. Intelligent Applications Intelligent apps, aka I-apps (I wonder why Apple doesn’t hold that patent;) are apps that use intelligence in any form. This includes Artificial Intelligence, Big Data, and everything you can sell under the umbrella term AI. The big promise here is applications that get to know their users and can learn how to serve them better through continuous use. The best-known applications of this type are Cortana/Siri and the other assistants, Ada, the healthcare app, and AI Dungeon, a multiplayer text adventure game that uses artificial intelligence to generate unlimited content. I asked AI Dungeon what it thinks about our stories so far, and this was its answer. example from I-app AI Dungeon, Yes it really wrote that 4. Deep Neural Network ASICs Many years ago, AI models were mainly trained on CPUs. The real breakthrough came after heavily optimized and parallelized code started running on GPUs. Google decided this trend should not end there and came up with TPUs (Tensor Processing Units), specialized chips perfectly adjusted for computations common in their AI framework TensorFlow. An ASIC (Application-Specific Integrated Circuit) is a chip customized for a particular use.
A GPU is technically nothing more than an ASIC optimized for graphical calculations, aka parallel matrix multiplications. The trend here is to design ASICs that are highly specific for a particular purpose. We could develop one that handles exactly the requirements of a Tesla car, or one perfectly adapted for face recognition. 5. Data Labeling and Annotation Services To validate systems and allow them to learn, we first need to know what is right ourselves. Just having data often is not enough. We usually also need labels describing the data. As in the image below, a machine would first need to know that the subject is wearing a mask before it could ever learn to distinguish mask wearers from non-mask wearers. Image of the author testing a Mask Detection AI This is where data labeling and annotation services come in. Step 1: Upload thousands of photos. Step 2: Qualified users tell you what is in the image: low-skilled workers for things like mask vs. no mask, doctors for labels like what type of cancer is in the radiology image. As you might imagine, this could be a pretty huge business in the coming years. Amazon’s Mechanical Turk is probably the best known. It allows you to earn money by labeling data or answering surveys. Amazon’s Mechanical Turk Dashboard Screenshot. 6. Smart Robots Humanity’s last dream, or its first? Since I was a little boy, I have dreamed about humanoid robots doing mundane tasks for me: cooking, cleaning, and assembling Ikea furniture. According to Yahoo Finance, the market for smart robots is expected to exceed $23 Bn by 2025. While it is still a long road to the types of robots I imagined as a child, current developments are interesting, to say the least. Industrial smart robots work the assembly lines of many international products, from cars to furniture, and their usage is not expected to reach a plateau soon. The most fascinating current inventions are not toys for children but for adults. Sex robots have been in the news for many years, and apparently, they are making money. Personally, I would not spend $5000+ on creepy dolls like these, but then again, who am I to judge? It can even make tea ;) 7. AI Developer and Teaching Kits AI kits is an umbrella term for instructions, examples, and software development kits that help developers and students understand and implement AI solutions. Such kits are currently in their infancy, and we can expect them to become more common in the next few years. Their goal is to teach and to bring developers and students to a level of knowledge where they can productively use and contribute to AI adoption. Current examples of developer kits include the Atlas 200 DK AI kit from Huawei, or Intel's AI on the PC Development Kit. On the teaching front, the Magician Lite from Dobot is at the top of the things I never needed but always wanted ;), be sure to have a look. 8. AI Governance AI Governance is the process of evaluating and monitoring algorithms. This may include things such as bias, ROI, risk, effectiveness, and all other metrics that we will come up with in the future. The main issue here is time. At the time of development, AI developers have to make assumptions based on the data available to them right then. But what happens 5 years later, after a possible black swan event such as COVID? Do your flight recommendation engine's base assumptions still hold? The result is missed opportunities and bad decisions based on ancient assumptions. This is why we need AI governance plans. In short, such a plan should at least include:
AI model definition: what is the purpose of the AI? AI model management: what can each model do, and which department is using it for what? Data governance: how is the data transformed? In which countries can you use it, and for which purposes can it be duplicated? For further reading, I can recommend Getting Started with Data Governance by TDWI. 9. Augmented Intelligence Augmented Intelligence refers to using AI to enhance the intelligence and productivity of humans. Instead of replacing workers, it strives to develop tools that help them become more efficient and effective. Current examples include portfolio management software that enables financial planners to offer their customers custom-tailored solutions, or tools assisting healthcare professionals in choosing the right drug for the right patient. “Decision support and AI augmentation will surpass all other types of AI initiatives” — Gartner. While that is certainly more a question of which solutions belong to which categories, it is definitely a strong statement. 10. Neuromorphic Hardware Neuromorphic hardware refers to specialized computing hardware that was designed with neural networks in mind from the beginning. Dedicated processing units emulate the neurons inside the hardware and are interconnected in a web-like manner for fast information exchange. So what's the difference? Currently, most processors implement the von Neumann architecture. This architecture has proven to be ideal for most tasks we were interested in for the last couple of decades. AI is different in terms of computation: it is heavily parallelized and requires decentralized memory accesses. This is where neuromorphic hardware sees an opening for more efficiency and speed. There is a huge variety of such architectures for different purposes, and no one-size-fits-all approach. 11. Responsible AI I personally realized how important this is only after reading Weapons of Math Destruction. When building systems, the developers themselves barely understand what happens when “normal” people use them. In her book, the author gives the example of schools that used an automated teacher evaluation system. The issue was that no one understood the score it gave to teachers. This resulted in good teachers in bad neighborhoods being fired based on results outside of their control. When they objected, no one was around to explain the results, which were deeply statistical in nature. This is only one of many areas where we should proceed with care, and it clearly highlights why we should monitor what our AI models are used for. The following video from Microsoft highlights the key components to think about. 12. Small Data We have all heard of Big Data and how you can improve a model by simply giving it more quality data to train on. Currently, we mostly see that an inferior model can easily outperform a better model simply by using more data. But what if we don’t have more data? Or what if we have a lovely use case, but there is not enough data and we need to gather and label everything from scratch? For example, we might want to automatically transcribe handwritten notes and drawings from mechanics. Probably no dataset exists that would allow us to learn the entire system on the real thing. So what can we do? This is where transfer learning comes in. We could first train our model on drawings and handwritten notes in general. Once the model has learned how to interpret words and drawings in general, we would show it the real thing, our small data from the mechanics.
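To make the transfer-learning idea above a bit more concrete, here is a minimal sketch using Keras. It is an illustration only, not code from the article: the pretrained MobileNetV2 backbone stands in for the generic "drawings and handwritten notes" stage, and the tiny labelled dataset (random arrays here) stands in for the mechanics' small data; the class count and layer choices are invented.

```python
# Minimal sketch of the "Small Data" / transfer-learning idea, using Keras.
# The pretrained backbone stands in for "learning to interpret drawings in general";
# the tiny labelled set (random here, real mechanics' notes in practice) is the small data.
import numpy as np
import tensorflow as tf

# Stage 1: start from a model pretrained on a large, generic image dataset.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3)
)
base.trainable = False  # keep the general visual knowledge frozen

# Stage 2: add a small head and train it on the small, domain-specific dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 hypothetical note/drawing categories
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Stand-in for the small labelled dataset of mechanics' notes (hypothetical data).
small_x = np.random.rand(32, 224, 224, 3).astype("float32")
small_y = np.random.randint(0, 5, size=32)
model.fit(small_x, small_y, epochs=3, batch_size=8)
```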
More and more papers are coming out, figuring out and piecing together how and in what order such a system should be trained. Transfer learning is surely here to stay. 13. AI Marketplaces Sharing is caring; AI developers know this and openly share knowledge through thousands of publications every year. While sharing knowledge is great, sharing the entire model is even better. The basic concept is that after training a model on your data, you upload it to a website, and users can then pay to use it. I have dedicated an entire video to this topic and think this trend is too huge to be ignored, so have a look. Conclusion The world is moving faster than ever. We are in the most exciting epoch of technological progress, and AI is a big part of it. We got to know the most exciting trends that should help AI deliver on its promise and accelerate progress across all industries applying it. While we won’t see most of these trends in full action until 2025, we can expect some of them to become an integral part of our lives. If you enjoyed this article, I would be excited to connect on Twitter or LinkedIn. Make sure to check out my YouTube channel, where I will be publishing new videos every week.
https://medium.com/towards-artificial-intelligence/the-best-ai-trend-is-yet-to-come-f21ac7145908
[]
2020-12-12 16:32:30.881000+00:00
['Programming', 'AI', 'Data', 'Data Science', 'Machine Learning']
CTF Write-Up :: Categorizing images in Python
We brainstormed a few solutions to that hiccup. We basically wanted a checksum algorithm we could use to fingerprint files quickly, but one that didn’t have a property of most hashing algorithms: the fact that any small change in the input value results in a completely different output value. One solution we explored was converting the PNG image to a bitmap, hashing each of the 166 lines individually and comparing the matrices with a ~2% threshold, because two of the rows could end up being different. As we painfully updated the script, we stumbled on a package that seemed promising. It basically does image hashing, but way better than what we intended to do. The hashing strategy that was particularly interesting to us was the average hash, which can be used to fingerprint images and compare them in a way that ignores minute differences (a.k.a. high frequencies). In pictures, high frequencies give you detail, while low frequencies give you structure. Think of pixel-level details and colors versus overall shapes and luminosity. The trick behind the algorithm is simply to scale down the image enough that you lose those high frequencies, e.g. by scaling it down to an 8x8 square and converting it to grayscale. It can then compute a Hamming distance between two images to tell you how similar they are. With a distance of 5 or less, it’s probably the same image, save a few minor artefacts. With a distance of 10 or more, the images are probably very different. With that new weapon in our arsenal, we updated our script! The distance comparison happens in the is_already_in_db() function: We knew our new strategy worked when the script started classifying images all by itself, never asking for user input 👌 Here’s what our database looked like after letting the script run for a while to make sure it prompted us for all the possible images. All the hashes, and all the associated answers. With that database, we could write a final script to submit our answers. Our “solver” script We started our solver script with the Python code that was generated from the POST request. This was to ensure we had all the headers we needed and the PHP session cookie. We started by loading our database in memory, then looped through the images from 1 to 14995, thinking we might want to do the last few manually, just so we wouldn’t miss the flag. The rest of our script looked pretty similar to the vision script: download the image, hash it, compare it to the hashes we have in our database and, when a match is found, submit the answer for that image. Because we’re a paranoid bunch, we decided to leave in the code that would prompt us for an answer, in case a random image was slipped into the lot by an evil challenge maker. We also decided to leave a bunch of debug statements in, just in case we had to figure out a problem. Conclusion We let the script run for a while, gingerly submitted the last few answers manually and finally got our flag! 🍾 It was a fun coding challenge, and while our code was arguably a lot more complicated than in other write-ups I saw, we learned a ton about the different ways you can compare an image. It was really satisfying to see the script classifying images all by itself when we finally got our image hashing right.
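For readers who want to try the approach described in this write-up, here is a minimal sketch of the average-hash comparison. The original post does not name the package it found, so the widely used imagehash package is an assumption, as are the function and variable names below; the thresholds (5 and 10) follow the rule of thumb above.

```python
# A minimal sketch of the average-hash comparison described in the write-up.
# The package name "imagehash" is an assumption; it provides average_hash and
# Hamming-distance comparison as described.
from PIL import Image
import imagehash  # pip install ImageHash

def hashes_match(path_a: str, path_b: str, max_distance: int = 5) -> bool:
    """Return True if the two images are probably the same, ignoring small artefacts."""
    hash_a = imagehash.average_hash(Image.open(path_a))  # 8x8 grayscale fingerprint
    hash_b = imagehash.average_hash(Image.open(path_b))
    # Subtracting two hashes gives the Hamming distance between them.
    return (hash_a - hash_b) <= max_distance

def lookup_answer(img_path, db, max_distance=5):
    """Rough equivalent of the write-up's is_already_in_db() check (db maps hash -> answer)."""
    h = imagehash.average_hash(Image.open(img_path))
    for known_hash, answer in db.items():
        if h - known_hash <= max_distance:
            return answer
    return None  # unknown image: prompt a human, then add it to the database
```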
https://medium.com/poka-techblog/ctf-write-up-categorizing-images-in-python-ab40e4aa6a4c
['Marc-Antoine Aubé']
2019-09-25 12:01:01.373000+00:00
['Security', 'Python', 'Conference']
Racist and Sexist AI — A Tale of Algorithmic Bias
Decision-making algorithms trained on large amounts of data regularly shape the world we live in, often without us knowing. From deciding which demographic a social media advertising campaign will target to granting parole to a prisoner, these algorithms are trusted daily to make important decisions in personal finance, health care, hiring, housing, education, policy and much more. First, let’s discuss the importance of data in AI… For anyone who hasn’t worked with data science before, some notions are important to define. The subset of AI which allows algorithms to learn by themselves is called machine learning. This technique is behind most of the “operational” AI that we find in our everyday lives, ranging from credit card fraud detection, targeted advertising, Facebook news feed generation, image recognition, voice recognition and many more. Machine learning can be understood as a sort of statistical pattern matching. Statistical models are fitted to a set of training data in order to represent the real world. This is how a statistical model learns to recognize fraud after being trained on thousands of transactions, for example. The algorithms are learning and making predictions from the data. Now you may ask, how can an algorithm even be racist? There exists an important ethical concern when it comes to determining whether an algorithm is impacted by existing human biases and stereotypes. Intuitively, if bias exists in society, algorithms naively learning from data will surely be affected and will themselves become biased. This is where the concept of data fairness comes into play. This phenomenon has been observed previously in many different spheres. According to research conducted by Latanya Sweeney, a Harvard professor of government and technology, a search on an ad-serving website for a black-identifying name such as DeShawn, Darnell or Jermaine was 25% more likely to return an ad suggestive of an arrest record. As outrageous as this example seems at first, when data fairness is discussed in relation to social media or ad campaigns, the issue, unfortunately, seems trivial. Although I disagree that algorithmic bias in social media is trivial, mainly because anything presented to us on social media platforms greatly shapes the way we perceive our world, I would agree that this issue becomes increasingly important when studied for algorithms making decisions for governments and cities. Jun Cen: Illustration In 2016, Angwin et al., writing for ProPublica, analyzed predictions made by COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a tool that creates risk scores to help assign bond amounts in Broward County, FL. The bond amount was determined by a human judge based on a computer-calculated “risk score” ranging from 0 to 10, 10 being a very high risk of a repeat offence. When comparing the offenders’ predicted risk scores to their actual criminal records, the analysis found that the tool was twice as likely to falsely label black offenders as future criminals than white offenders, and more likely to falsely label white offenders as low risk. The article went on to show an example where the COMPAS tool had assigned a risk score of 8 to a young African-American woman who had been arrested for attempting to steal a kid’s bicycle and scooter, totalling around $80. The same tool assigned a risk score of 3 to a middle-aged Caucasian male who shoplifted a similar amount, around $80.
Both had criminal records, though hardly comparable ones: the man had served five years in prison for armed robbery, while the woman had committed some misdemeanours when she was a minor. So who let an algorithm become racist? The simple answer is no one — the tricky answer is everyone. Algorithmic bias is described as a bias that is introduced into a decision-making algorithm by data that is tainted with historical human biases. However, it may not always be as evident as attributes such as race or gender — sometimes, other attributes of the dataset correlate highly with sensitive data such as ethnicity or gender, which can go completely unnoticed by the scientists handling the data. When thinking of algorithmic bias, there is truly no one to blame other than the historical training data, which is, in the end, only a mirror of our flawed society. For example, let’s take a system that predicts how likely a customer is to fail to pay back a bank loan. If you’ve ever applied for a loan, you have probably seen the bank teller enter your personal information into said system and almost immediately receive a decision such as “pre-approved”. Due to anti-discrimination laws, the system would not use race, ethnicity or gender as an attribute to predict such an outcome. However, the system could very well use a person’s postal code. The main problem here is that this feature often correlates highly with a person’s ethnicity. Therefore, removing ethnicity may not do much to remove algorithmic bias, as a person’s postal code is an excellent predictor for this attribute. In many cases, most statistical models would be able to pick up the correlation and therefore infer race. One can imagine how this example can be extrapolated to all sorts of different attributes for all sorts of different applications — city, state and school attended are all features that could be correlated with race. Now, the previously described algorithm would be trained on historical data of bank tellers making decisions on loans. The dataset contains years of bank tellers’ decisions on approving loans, prior to the system ever existing. It would look something like an aggregate of data related to a past customer and the customer’s financials mapped to an outcome (customer defaulted on the loan or paid everything back on time). As the system is trying to replace human judgement, it would then try to correlate these features to an outcome before approving or denying a loan. Many oversimplified concepts were explored here, but the problem is still clear — the historical data based on bank tellers’ decisions is tainted by unconscious (or conscious…) bias! The moment a dataset contains instances of human bank tellers historically denying loans to race A more often than race B with no valid reason, the dataset becomes inherently biased. The system will then learn the bias, even though race, ethnicity or gender was never even mentioned in the dataset! If data scientists are not mindful of algorithmic bias, an algorithm learning blindly from a deeply racist society can only become racist itself. These ethical concerns call for a discrimination-conscious development of algorithms and data mining systems, especially when used for governmental and justice-related decision-making. This remains a challenge in the data science community, and many novel techniques have been discussed by researchers to address algorithmic bias.
Different solutions range from de-emphasizing parts of the data fed to the model to introducing a second, independent algorithm to assess and reduce bias in the first. However, the problem needs to be discussed far more often, as these algorithms are being widely used to make life-changing decisions for the population.
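As a purely illustrative aside (not from the article), the proxy-variable effect described above is easy to reproduce on synthetic data: even when the protected attribute is withheld from the model, a correlated feature such as a postal-code score lets the model reconstruct the historical bias. All names and numbers below are made up.

```python
# Synthetic illustration of a proxy feature leaking a protected attribute:
# race is never given to the model, but a postal-code feature correlates with it,
# so predicted outcomes still differ by group. All figures are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
race = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B (hidden from the model)
postal_code = race * 0.8 + rng.normal(0, 0.3, n)   # proxy: strongly correlated with race
income = rng.normal(50, 10, n)

# Historical decisions are biased against group B even at equal income.
approved = ((income / 10 - 1.5 * race + rng.normal(0, 1, n)) > 3).astype(int)

X = np.column_stack([income, postal_code])         # race itself is excluded from the features
model = LogisticRegression().fit(X, approved)
preds = model.predict(X)
print("approval rate, group A:", preds[race == 0].mean())
print("approval rate, group B:", preds[race == 1].mean())
# The gap persists because postal_code lets the model reconstruct the historical bias.
```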
https://medium.com/swlh/racist-and-sexist-ai-a-tale-of-algorithmic-bias-3dc9128cc0ab
['Karine Mellata']
2020-07-22 21:33:47.486000+00:00
['AI', 'Ethics', 'Justice', 'Data Science', 'Racism']
Fashion History: Hermès Birkin
In the early 1980s, Jean-Louis Dumas was the chief executive officer of Hermès. He was on a flight from Paris to London. On the flight, he was seated next to a popular English actress and singer named Jane Birkin. Apparently, Birkin put her bag in the overhead compartment when all of a sudden, her bag dropped and everything inside of it fell out. (via Wikipedia) According to Hermès, Birkin told Dumas that she had a hard time finding a bag that was suitable for her because she was a young mother. However, Wikipedia claims that Jane Birkin said she could not find the right weekend bag. On the other hand, a few sites like Freeman’s claim that Dumas suggested Birkin find a bag with pockets inside. Although the topic of the conversation is unclear, it sparked something inside Dumas to create a leather bag called “The Birkin.” This bag quickly became a symbol of wealth and luxury. At some point, it was known as a rare bag because not everybody could purchase it. A Birkin can typically range between $8,500 and $500,000. According to Town & Country, it takes 48 hours to make one Birkin, and Hermès only allows a certain number of orders per year. These reasons also contribute to why the Birkin is rare. Today, if someone wanted to purchase a Birkin, they would be put on a waitlist. Many people have claimed that they cannot go to an Hermès store and/or their website to purchase a bag and bring it home immediately. There are specific ways for you to purchase the bag and to enter the waitlist. Update: There are a number of websites that claim that Hermès does not do a waitlist anymore. However, I see other websites that claim a waitlist is still implemented at the store. I’m assuming it depends on the store and its regulations. It’s encouraged that buyers go in person to purchase a bag or purchase the bags from a second-hand store.
https://jenniferarchives.medium.com/fashion-history-herm%C3%A8s-birkin-28e010a2edaf
['Jen Eve']
2020-12-27 23:59:46.824000+00:00
['Fashion', 'History', 'Art', 'Creativity', 'Style']
Building a historical price engine using InfluxDB
We wanted to build a service which can store price data for multiple financial products (like a BTC future quoted in USD) and can answer time-based price aggregation queries within milliseconds. Choosing InfluxDB We evaluated a bunch of time series databases and decided to finally go ahead with InfluxDB. Pros SQL-like query language Supports auto rollups using Continuous Queries Comes with clustering support (Enterprise version) Easy StatsD integration Since InfluxDB was going to do most of the major heavy lifting, we decided to go ahead with an Express server. Node.js was also an obvious choice to ingest trades data from our Kafka cluster. Setup Our trading engine publishes all trades on a Kafka topic. We have a service written in Node.js which reads data from the Kafka topic and writes it to InfluxDB. We define Continuous Queries (CQ) in InfluxDB to create aggregates for certain pre-defined resolutions. So, for example, we create a CQ which runs every 5 minutes and calculates price aggregates for the last 5 minutes. We chose the following resolutions — 1m, 5m, 15m, 1hr, 6hr, 1day. Node.js Service — Kafka to InfluxDB Service to ingest messages from Kafka and insert into InfluxDB Influx Schema OHLC data is computed periodically for different resolutions (1m, 5m, 15m …) For a detailed understanding of how to set up InfluxDB, we suggest the reader go through the official InfluxDB docs.
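For illustration, here is a minimal Python sketch of the same ingest flow; the article's actual service is written in Node.js, and the topic, database, measurement and field names below are assumptions rather than the production schema.

```python
# Minimal sketch of the Kafka -> InfluxDB ingest flow (Python, for illustration only;
# the article's service is Node.js). Topic, database, measurement and field names are hypothetical.
import json
from kafka import KafkaConsumer          # pip install kafka-python
from influxdb import InfluxDBClient      # pip install influxdb (InfluxDB 1.x client)

consumer = KafkaConsumer(
    "trades",
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
influx = InfluxDBClient(host="localhost", port=8086, database="prices")

for message in consumer:
    trade = message.value  # e.g. {"product": "BTCUSD", "price": 6500.5, "size": 2, "ts": "..."}
    point = {
        "measurement": "trades",
        "tags": {"product": trade["product"]},
        "time": trade["ts"],
        "fields": {"price": float(trade["price"]), "size": float(trade["size"])},
    }
    # Continuous Queries defined in InfluxDB roll these raw points up
    # into 1m/5m/15m/... OHLC aggregates.
    influx.write_points([point])
```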
https://medium.com/delta-exchange/building-a-historical-price-engine-using-influxdb-19d6b65c0ff3
['Saurabh Goyal']
2018-06-25 07:23:12.328000+00:00
['Big Data', 'Influxdb', 'Cryptocurrency']
decision
The river below looked as drab and grey as the still, forbidding sky above. The day was a perfect underscore for my mood. No. More than mood. The life I’ve prepared to abandon for months. The dreariness of living has laid a road ahead far too bleak to consider. I am through, as the cold wraps itself around my thin coat, my weak, frail body. I am numb to caring. As the wind picks up, I feel icy frost catch in my hair on my bare head. My face, too, is by now bitten by frost. It must be painful. I feel nothing. Just weary from standing, looking down, bracing myself against the wind.
https://joan-evans-nyc.medium.com/decision-aebe10b738e8
['Joan Evans']
2018-08-03 17:10:09.673000+00:00
['Depression', 'Poetry', 'Relationships', 'Self', 'Psychology']
The Meaning Of Wealth
The Meaning Of Wealth A poem. Photo by Budka Damdinsuren on Unsplash When you think of wealth, what are the thoughts and pictures that come to mind? Perhaps it’s a hot tub filled with money and diamonds and pearls, and a house so big you could get lost within its rooms and halls. And a private jet to take you wherever you’d like to run away. Or maybe it’s a house that you can call your own, with a table big enough to hold your friends and family and the ones that know your heart. Or perhaps wealth isn’t something that you can hold — but the freedom to do what you want, when you want. To go as freely as you stay. To give to your heart's content. To love as much as you’d like — without another soul telling you what to do. Maybe that’s what wealth is really about — the ability to give, to do, and be what you want. Perhaps, at the heart of it, wealth is about having freedom. The freedom to love. The freedom to breathe. The freedom to dream. The freedom to think. The freedom to give. The freedom to build. The freedom to be.
https://medium.com/assemblage/the-meaning-of-wealth-fa9f64aeacca
['Megan Minutillo']
2020-11-28 14:33:58.647000+00:00
['Poetry', 'Self-awareness', 'Mindset', 'Personal Growth', 'Wealth']
Creating accessible progress indicator in React
When creating our registration application, our UX designer came up with the following design (screenshot taken from our storybook): It was relatively easy to implement using React, so we wrote it and forgot about it. Several weeks later, we submitted the alpha version of the app to our accessibility testing consultant (as we do with all our apps). The results were that, among other minor accessibility problems, the most serious was that the progress indicator (we call it the thermometer) is completely inaccessible to screen readers. Not invisible, even worse — annoying, because when read using a screen reader it produced seemingly irrelevant words. The accessibility problems can be split into these points: There was no indication this component represents a progress display The circles were not numbered or labelled in any way The lines connecting the circles did not provide information about how much of the way they were filled There was no explanation of what A***** D*** means (this was explained in the rest of the app, so sighted users could easily read it; visually impaired users, however, could not) We will discuss how we fixed all these problems in the rest of this article. Declaring the semantic meaning First, let’s focus on how to declare that our component is a multi-step progress indicator. We followed our consultant’s recommendation: the whole component is an ordered list where the list items consist of the circle and optionally the connecting line. Below is the source code of the main component: Thermometer component As we can see, it renders an ordered list with a descriptive aria-label attribute that tells the user what it represents. The list takes two props: items, which specify the name, icon and other properties of the individual steps, and position – two numbers that indicate which step is currently active and what the percentage progress to the next one is. Rendering the circles Next, let’s see how the Segments (i.e. list items) are rendered. Below is the code for the Segment component: Segment component The Segment component renders one circle and, if it is not the last one, it renders the connecting line as well. The important thing here is that the li element is marked as aria-current when appropriate. This ensures that the screen reader reads the current step as “Account, current step”, ensuring the user knows what the current step is. Other than that, the component is relatively straightforward: it computes some derived props for the Point and Line components (see below) based on the current position on the thermometer. The Point component represents the circle on the thermometer. It has a name, an icon and a status that indicates whether it is the current step, a step already completed or one that is still to be visited. The whole code is listed below: Point component Point renders a div with some styling that ensures it is a circle of the right color and size depending on the status. The active step is larger than an inactive one, for example. Inside of the div, there are two spans – one for screen readers only and one for displays (hidden from screen readers using the aria-hidden attribute). The reason for this duplicity is that we needed to provide an additional description for screen reader users. In our case it was a note that the name of the person being registered is redacted and therefore is spelled A***** D*** instead of Arthur Dent.
You may wonder why we did not simplify this to something like: Simplification attempt While this would work relatively well, the screen reader would read the two parts separately. In other words, the user would have to press a button to hear props.description as well. The way we wrote it, both strings get read at once. Rendering the connecting lines The last part we haven’t shown is the lines connecting the circles. This is the responsibility of the Line component: Line component The visual part of its markup is simple: just a div containing two overlapping divs that emulate the partially filled progress bar. It's the ARIA part that's interesting! As we can see, the main div has several ARIA attributes, so let's explain those one by one. The role attribute declares what the component represents semantically – in this case a progressbar. There are quite a few roles available in the WAI-ARIA standard. We used aria-label here to give a name to the progress bar. In this case the title of the related step is used. This ensures the user knows which step the progress bar corresponds to. Finally, aria-valuenow specifies the current progress value, while aria-valuemin and aria-valuemax specify the lowest and highest possible values. These are used by screen readers to compute the percentage progress of the progress bar. This means that something like the aria-value* attributes example would be read as 50 percent. Conclusion Using a simple real-life example, we’ve shown how to make React components accessible to visually impaired users and hopefully inspired you to revisit your own components with this perspective in mind. After all, accessible components are better components.
https://medium.com/vzp-engineering/creating-accessible-progress-indicator-in-react-1ae7eca76633
['Dan Homola']
2018-07-25 20:26:12.955000+00:00
['Accessibility', 'React']
The Cycle
One day at a time you fight You step in and out of the light Breathe in and hold it tight What’s wrong can feel so right When the day turns into night When the frog becomes a knight His kiss makes your heart take flight Then he’s gone and out of sight The fight is gone and replaced with numbness The light disappears and fades into darkness Release your breath into nothingness What was right is now just emptiness The night swallows you into its vastness The knight vanishes into the wilderness His kiss is gone, leaving you breathless You’re alone again and filled with sadness One day at a time, you’re stronger The darkness starts to fade, the sky becomes lighter Your breath once more light as a feather Wrong and right are entangled together Day and night spin around and under The knight, you were, always the fighter From the past, now asunder Never alone, happiness now and forever
https://medium.com/romance-monsters/the-cycle-4627f6d7de47
['Edie Tuck']
2020-01-02 03:24:51.447000+00:00
['Poetry', 'Lov', 'Self-awareness', 'Broken', 'Self Love']
Machine Learning techno-premium babble
Machine Learning Techno-Premium Babble Machine Learning & Data Science — ambrosia for noobs. The article was supposed to be short. But there is no way to talk about “machine learning” without knowing the slogans used in the discussion. We will start in the traditional way — we will go through a list of the most commonly used techniques in “data mining” and then come to real-life examples. The subject of “data mining” itself is so capacious that one could write at least a few pages about it. For our needs, let us assume that it is simply working with data. And in this work, we usually use the techniques listed below in the “Data Mining Techniques” paragraph. Having a general idea of what’s behind the terms, we will move on to applying this knowledge in practice. In the section “the examples of machine learning”, there will be specific cases of using algorithms to solve real-life problems and to answer frequently asked questions in business. Data Mining Techniques 1. Prediction Classification and regression are data mining techniques used to solve similar problems. Both are used in predictive analysis, but regression is used to predict a numeric or continuous value, while classification uses labels to assign data to separate categories (classes). Classification Classification is the assignment of an object to a particular class based on its similarity to previous examples of other objects. Usually, classes are mutually exclusive. An example of a classification question would be “Which of our clients will respond to our offer?” with the creation of two classes: “will respond to the offer” and “will reject the offer”. Another example of a classification model is credit risk. It could be developed on the basis of observed data for loan applicants over a period of time. We can track employment history, house ownership or renting, length of residence, type of investment and so on. The target classes would be a credit rating, e.g. “low” and “high”. We will wisely call attributes (e.g. employment history) the “predictors” (or “independent variables”) and target variables the “dependent variables” or simply “classes”. Classification belongs to the “supervised” methods. What are supervised methods? Read “Supervised methods and unsupervised methods” below. Regression Regression is “value estimation”. It is the determination of the relation between different entities and, on this basis, an attempt to estimate (“predict”) unknown values. For example: if we know the company’s turnover from the previous year, month after month, and we know the advertising expenses in each month of the previous year, we are able to estimate the amount of revenue for an assumed advertising spend in the next year. Another question we can answer using regression might be “How often will a given customer use the service?” 2. Co-occurrence and associations Exploring groups or discovering associations tries to find the links between objects based on the transactions of these objects. Why do we want to find such occurrences? Association rules will tell us things like “customers who bought a new eWatch also bought a Bluetooth speaker”. Sequence pattern search algorithms can suggest how a service action or customer service should be organized. Association Association is the method of discovering interesting dependencies or correlations, commonly referred to as associations, between data in data sets. Relations between elements are expressed as association rules. Association rules are often used to analyze sales transactions.
For example, you can see that “customers who buy cereal in a grocery store often buy milk at the same time” (eureka!). An association rule can also be used in e-commerce to personalize websites. The associative model can discover that “a person who visited pages A and B will also visit page C in the same session with a probability of 70%”. Based on this rule we can create a dynamic link for users who might be interested in the C page. Searching for patterns Pattern searching, more precisely sequential pattern searching, is the search for ordered sequences. The order of sequences between elements is important. The patterns found are presented in order of ‘support’, i.e. the frequency of occurrence of a given pattern in the set of elements in relation to the number of considered transactions. 3. Clustering Clustering is the grouping of objects with similar properties. As a result of this operation, a cluster or class is created. Clustering can give us an answer to the question “do our clients form groups or segments?” and, consequently, “how should our customer service teams (or sales teams) look to adapt to them?” Clustering, like classification, is used to segment the data. In contrast to classification, clustering divides data into groups that were not previously defined. Clustering belongs to the ‘unsupervised’ methods. What are unsupervised methods? Read “Supervised methods and unsupervised methods” below. The examples of Machine Learning Having the theory and examples of algorithms in mind, let’s move on and apply them in practice. We have specific problems to solve and we will use the techniques described in the previous paragraph. Classification “Which customers can leave us in the near future?” The algorithm that we can use to try to find the answer to this question is J48, a successor of C4.5, whose predecessor is ID3. The algorithm uses the concept of entropy. In short: “how many questions are necessary to get to the information?” This algorithm belongs to the group of “decision tree” algorithms. There are at least 835463248965 articles on the Internet about these algorithms and decision trees… of which half are written by those who do not know how they work and a quarter by scientists who have focused on encrypting this information. To try to answer the question of which clients may leave, we will need historical data as a behavioral pattern (where we know which clients have left us). Analyzing this information (the attributes), we will create a model. This model will be used to analyze the data of customers who are still our clients (current data). The last column in the table below is the class (the answer we will look for in the data). Historical data: Current data: Which software will do this task for us? Unfortunately, we will not do it in Excel (although there are some experimental solutions for this program). If we could do it there — the whole spell of “machine learning” would evaporate in one moment. Among the few others, there are two programs that are great for this. The first is Weka. Created by the academic community, it offers an adequate interface. It is particularly suitable for testing the selection of attributes, which in effect will assign our clients properly to the appropriate class — by experimenting we can work out a model that we apply to the current data. We can use the built model in the second piece of software: Pentaho Data Integration.
The advantage of Pentaho (PDI) is the logical interface and the ability to process large amounts of data. PDI uses Weka libraries and is also able to apply a prepared model to production data. Regression “What turnover will we achieve by investing amount X in advertising?” We need two variables to calculate regression: a predictor (independent) variable and a target (dependent) variable. Historical data: After the calculation we have a ready-to-use formula: y = 10,788 * x + 368,848 Where X is the forecasted amount of advertising and Y is the expected revenue. The green line is the regression line. The linear model assumes that relations between variables can be summarized with a straight line. The line indicates the relation between X and Y. Regression can be calculated in Excel or using a simple online calculator. Thanks to Pentaho, we can automate the regression calculation — it can be part of the data analysis process. The example shows a simple linear regression. Real cases often use multiple regression with multiple predictor variables, or non-linear regression. Association “What is the most common product basket?” We can also ask a similar question: “What products are usually bought with hamburgers?” We want to learn the correlations that occur in customers’ purchases — get to know their shopping patterns. Let’s assume that we have a list of customer transactions: Best rules found: 1. burgers=y 4 ==> potatos=y 4 <conf:(1)> lift:(1.17) lev:(0.08) [0] conv:(0.57) 2. onion=y 3 ==> potatos=y 3 <conf:(1)> lift:(1.17) lev:(0.06) [0] conv:(0.43) 3. onion=y burgers=y 2 ==> potatos=y 2 <conf:(1)> lift:(1.17) lev:(0.04) [0] conv:(0.29) 4. burgers=y milk=y 2 ==> potatos=y 2 <conf:(1)> lift:(1.17) lev:(0.04) [0] conv:(0.29) 5. burgers=y beer=y 2 ==> potatos=y 2 <conf:(1)> lift:(1.17) lev:(0.04) [0] conv:(0.29) 6. potatos=y beer=y 2 ==> burgers=y 2 <conf:(1)> lift:(1.75) lev:(0.12) [0] conv:(0.86) 7. onion=y milk=y 1 ==> potatos=y 1 <conf:(1)> lift:(1.17) lev:(0.02) [0] conv:(0.14) 8. onion=y beer=y 1 ==> potatos=y 1 <conf:(1)> lift:(1.17) lev:(0.02) [0] conv:(0.14) 9. onion=y beer=y 1 ==> burgers=y 1 <conf:(1)> lift:(1.75) lev:(0.06) [0] conv:(0.43) 10. onion=y burgers=y beer=y 1 ==> potatos=y 1 <conf:(1)> lift:(1.17) lev:(0.02) [0] conv:(0.14) How to interpret the results? 10 rules were found in the data (default program settings). Let’s explain the first line, the pair “burgers ==> potatos”. Burgers were found in 4 customer transactions (in 4 customer baskets). This number is called ‘support’. The number 4 next to potatos means there were also 4 occurrences of the rule “burgers ==> potatos”. This variable is called “support” or “coverage” (coverage of the whole rule). The number in brackets after “conf:” is “confidence” or “trust”. Confidence is the probability of the rule holding, expressed as a percentage; here we are 100% sure that a customer who bought burgers will also buy potatoes. Confidence arises from the formula: confidence = rule support (the second number, the number of rule occurrences) / antecedent support (the first number, the number of instances of the left-hand side). The Apriori algorithm provides us with unordered sets of elements (without a specific sequence). Searching for patterns “What product will the customer who bought product X buy next?” The discovery of sequence patterns involves the analysis of a database containing information about events that occurred over a given period of time in order to find a relationship between the occurrence of specific events over time.
An example of a sequence pattern is customer purchases. The purchases included in the sequence pattern do not have to occur directly one after another — they can be separated by other purchases. This means that the customer usually purchases another product between the purchase of product X and the purchase of product Y, but the given sequence describes the typical behavior of most customers. Let’s assume that the purchases of our customers look as follows: Using the Weka program and applying the GeneralizedSequentialPatterns algorithm, we get the result: Frequent Sequences Details (filtered): - 1-sequences [1] <{coffee}> (3) [2] <{milk}> (2) [3] <{sugar}> (2) [4] <{pasta}> (2) - 2-sequences [1] <{coffe}{coffee}> (2) [2] <{coffee}{pasta}> (2) [3] <{milk}{coffee}> (2) [4] <{milk}{sugar}> (2) [5] <{sugar}{coffee}> (2) - 3-sequences [1] <{milk}{sugar}{coffee}> (2) How do we interpret the result of the program’s calculations? “X-sequences” are groups of instances that meet the calculation criteria (the minimum support setting, “minSupport”, that results must meet): single sequences, pairs, triples… The found patterns are presented in order of “support”, i.e. the frequency of occurrence of a given pattern in the set of elements in relation to the number of considered transactions. Clustering “What segments do our clients create?” Let’s assume that our clients have the following attributes and features: We would like to divide them into two groups (two clusters) to target them more accurately with our offer or to adjust the sales team to handle their specific needs. We will do it in the Weka program using the k-means algorithm. How to interpret the results? Attribute Full Data 0 1 (7.0) (5.0) (2.0) ================================================ age 29.8571 33.8 20 Marital status married married single Property house flat house Education elementary high elementary We obtained two clusters (program settings — we can request more of them): 0 and 1. In cluster 0 the average age is 34 years, marital status married, owner of a flat, with higher education. There are five records in this cluster. Cluster 1 has an average age of 20 years, marital status single, house owner, with elementary education. The ‘Full Data’ column is the average of all instances. The clustering used here is based on an algorithm that uses the arithmetic mean to calculate the distances of individual features within clusters. The example has been deliberately limited to a few records to better illustrate clustering. Actual grouping is done on thousands of records or more. With such a wide sample, the data can be visualized to give a clear picture of our groups. Supervised methods and unsupervised methods. In other words, ‘supervised learning’ and ‘unsupervised learning’. In supervised learning, we set a specific goal — we expect a certain result. Example: “Can we find groups of customers who have a particularly high probability of canceling their services shortly after the expiration of their contracts?” Or: “Let’s divide the clients by risk of insolvency: small, medium, large.” Examples of supervised methods are classification and regression. The algorithms used here are often decision trees, logistic regression, random forests, support vector machines and k-nearest neighbors. In unsupervised learning, we do not set a specific goal — we do not expect a specific target result.
The questions asked here are, for example: “Do our clients form different groups?” Examples of unsupervised methods are clustering and correlation (association). Metaphorically, a teacher “supervises” the learner by carefully providing target information along with a set of examples. An unsupervised learning task might involve the same set of examples but would not include the target information. The learner would be given no information about the purpose of the learning but would be left to form its own conclusions about what the examples have in common. Algorithms, Baby! The whole mystery of “Machine Learning” was born from the lack of easy access to the functions and algorithms that perform the work described above. The tools have been available on the market for years. What’s more, they are often free! However, to use them you need basic skills in databases, programming, SQL and file parsing — the data most often requires transforming into the appropriate form before it can be used. All these calculations are possible thanks to appropriate algorithms. Most of these calculations could have been made a decade or more earlier (!). The regression algorithm is more than two centuries old (its beginnings date to 1805). The J48 algorithm used for classification has its roots in the entropy of information — Claude Shannon’s work from 1948. We have even older algorithms — k-means groups objects based on the idea of Euclidean distance, which derives from ancient Greek geometry. If somebody were doing the “learning” here, it would certainly be humans, not machines. The computer, as the perfect computing machine, does in a second what a human would take weeks to do. There was no new revolution in science — we gained access to high-speed computing machines. If “Machine Learning” is the basis of today’s “artificial intelligence”, how does that intelligence itself look? Data mining is a craft. It involves the use of a significant amount of science and technology, but its proper application still includes art. No machine can pick the attributes the right way as a human does. For example, in retail the attribute “frequency of purchases” may be more reliable than in B2B relations. In the United States, there are data mining competitions (GE-NFL Head Health Challenge, GEQuest) and the rewards for solving specific problems of humanity are very high (e.g. 10 million dollars in the GE-NFL Head Health Challenge).
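As a small practical addendum, the simple linear regression example above can also be reproduced in a few lines of Python with scikit-learn; the monthly advertising and revenue figures below are invented, so the fitted coefficients will differ from the y = 10,788 * x + 368,848 formula in the article.

```python
# Minimal sketch of the simple linear regression example, using scikit-learn.
# The monthly advertising/revenue figures are made up for illustration only,
# so the fitted coefficients will not match the article's formula.
import numpy as np
from sklearn.linear_model import LinearRegression

# 12 months of advertising spend (predictor) and revenue (target), hypothetical units
ad_spend = np.array([5, 7, 8, 10, 12, 15, 14, 16, 18, 20, 22, 25]).reshape(-1, 1)
revenue = np.array([420, 450, 455, 480, 500, 540, 530, 555, 580, 600, 625, 660])

model = LinearRegression().fit(ad_spend, revenue)
print(f"y = {model.coef_[0]:.3f} * x + {model.intercept_:.3f}")

# Estimate the revenue if we plan to spend 30 on advertising next year
print(model.predict(np.array([[30]])))
```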
https://medium.com/datadriveninvestor/machine-learning-techno-premium-babble-e7a41f8b2bfb
['Mike Gosforth']
2019-06-12 10:42:20.145000+00:00
['Machine Learning', 'Artificial Intelligence', 'Data Driven Company', 'Data Science', 'Deep Learning']
What My Daily Walk Consists of
Human interaction: Humans are social beings; we don’t cope well in isolation, as we have seen from the many mental health experts telling us that social distancing will do great damage to our mental health. As a homebody, I’m coping with this new way of life of staying indoors to flatten the curve, as we all should be following what our governments say. Although, I don’t like that I don’t have the option to go outside. I liked knowing before that I could go outside without any worry about getting fined. I wouldn’t go so far as to say it’s amazing human interaction with the people who walk past you, with their curt smile and nod or a small greeting. It’s not much of an interaction, but an acknowledgment from people not from your own home makes a difference. It’s a sight to see people at the park doing their exercise: jogging, walking the dog, walking with family and riding bikes. It’s the livelier aspect of people doing what they can to get through the day, knowing it’s going to repeat itself. Glimpses of happiness and hope: I walked past a house the other week where a bunch of drawn pictures had been put up in the window for walkers to see, along with chalk pictures drawn on the path. Similar to the pictures are the bears being put on windowsills; as I walked past an Elmo doll, I thought of the article about kids putting teddies on windowsills for walkers to see. It’s these small things that give you a positive outlook on the situation that we’re all facing and put a smile on our faces; it’s the humanity in all of us. Random notable moments: Dogs (small dogs) can be scary. There was one time where my sister and I walked past a house with the garage open and a chihuahua standing guard without a leash. Given that we were only walking past, we got chased off because we happened to be in the proximity of danger. I also wave to cats that I see inside the windows of houses doing their lookout of the street. It’s honestly so stupid, but it’s one of the things I will never fail to do whenever I see a cat. I guarantee that I’m 100% being judged, but I love seeing cats.
https://medium.com/from-the-outside/what-my-daily-walk-consists-of-5b397f0410fa
['Tracy Nguyen']
2020-04-20 12:56:21.848000+00:00
['Life', 'Nature', 'Humanity', 'Self-awareness']
An Invincible Summer
~philosophical poetry My dear In the midst of strife, I found there was, within me, an invincible love. In the midst of tears, I found there was, within me, an invincible smile. In the midst of chaos, I found there was within me, an invincible calm. In the depth of winter, I finally learned that within me, there lay, an invincible summer. And, that makes me happy. For it says, that no matter how hard the world pushes against me, within me, there’s something stronger… ~ Albert Camus 🕊 Stepping out from chaos into the light to know, understand, to find meaning in the abstruse, the absurd which helps to minimize strife, internal and external with a patient, conscious, mindful heart actively reflecting, engaging, a soul-sourced voice. 🕊 Bring a desire to exceed the mundane, outside of fear, of chaos to grow exponentially, creatively an equanimous entwining, a knowing. 🕊 Tears wane, with this stronger voice, which is founded in idealism with right intentions, speaking, writing about alternative ways with the Great Mother and the Old Wise Man of our collective consciousness, with the honouring of creative potentials of individual offerings, of less restricted potential. 🕊 Cleansing the heart against the disease of winter, the archetypal Trickster, of absurdity, chaos, dark, profane shadows even stints in the desert, where the air suffocates and stifles — drying the voice, the heart and the soul — we persevere. 🕊 The realization that life is absurd and cannot be an end, but only a beginning. This is a truth nearly all great minds have taken as their starting point. It is not only this discovery that is interesting, but the consequences and rules of action drawn from it.~ Albert Camus 🕊 Toward fertile ground, the light, the order where there was chaos toward summer awakenings, a playful practical pragmatism on the black earth, in lush forests, the expansive ocean of flexible yet durable transformations. 🕊 An interplay of resisting, of letting go with turquoise seas of what could be and some revolution before what is going to be/ to take a stand, with awareness, with love, calm, we can stand up with a stronger pushing back, bravely, for even more. ~ namasté, Leah J. 🕊 © Leah J. Spence 2019, All Rights Reserved
https://medium.com/resistance-poetry/an-invincible-summer-e825c6427153
['Leah Spence']
2019-09-12 18:22:42.797000+00:00
['Resistance Poetry', 'Poetry', 'Life', 'Mental Health', 'Community']
Standardization is Crushing UX (and other UX links this week)
What’s hot in UX this week: “Let’s not reinvent the wheel.” If you’ve ever worked at a software company you’ve heard somebody say this during a design session. As liberated and brash as we may seem these days, we are — more than ever — fighting a creative culture of fear. There are alternative views, of course, but this is something that designers and developers are dealing with on a regular basis. We in the software industry often find ourselves struggling to break free of “conventions” and “standards” in the realms of UX design and front-end development. Interfaces are now crawling with informal standards that have quickly made their way from inventive to inevitable: hamburgers, swipes, likes, scrolls, and much, much more. Read full story → via Fabricio Teixeira
https://uxdesign.cc/standardization-is-crushing-ux-and-other-ux-links-this-week-722c510fc812
['Fabricio Teixeira']
2015-11-26 05:23:14.647000+00:00
['Design', 'UX', 'UX Design']
Combining ML Models to Detect Email Attacks
This article is a follow-up to one I wrote a year ago — Lessons from building AI to Stop Cyberattacks — in which I discussed the overall problem of detecting social engineering attacks using ML techniques and our general solution at Abnormal. This post aims to walk through the process we use at Abnormal to model various aspects of a given email and ultimately detect and block attacks. As discussed in the previous post, sophisticated social engineering email attacks are on the rise and getting more advanced every day. They prey on the trust we put in our business tools and social networks, especially when a message appears to be from someone on our contact list (but is not), or, even more insidiously, when the attack is actually from a contact whose account has been compromised. The FBI estimates that over the past few years over 75% of cyberattacks start with social engineering, usually through email. Why is this a hard ML problem? A needle in a haystack — The first challenge is that the base rate is very low. Advanced attacks are rare in comparison to the overall volume of legitimate email: 1 in 100,000 emails is advanced spear-phishing; less than 1 in 10,000,000 emails is advanced BEC (like invoice fraud) or lateral spear phishing (a compromised account phishing another employee). When compared to spam, which accounts for 65 in every 100 emails, we have an extremely biased classification problem, which raises all sorts of difficulties. Enormous amounts of data — At the same time, the data we have is large (many terabytes), messy, multi-modal, and difficult to collect and serve at low latency for a real-time system. For example, features that an ML system would want to evaluate include: the text of the email; metadata and headers; the history of communication for the parties involved, geo locations, IPs, etc.; account sign-ins, mail filters, browsers used; the content of all attachments; the content of all links and the landing pages those links lead to; and so much more. Turning all this data into useful features for a detection system is a huge challenge from a data engineering as well as ML point of view. Adversarial attackers — To make matters worse, attackers actively manipulate the data to make it hard on ML models, constantly improving their techniques and developing entirely new strategies. The precision must be very high — to build a product to prevent email attacks we must avoid false positives and disruption of legitimate business communications, but at the same time catch every attack. The false-positive rate needs to be as low as one in a million! For more examples of the challenges that go into building ML to stop email attacks, see the discussion in Lessons from building AI to Stop Cyberattacks. To effectively solve this problem we must be diligent and extremely thoughtful about how we break down the overall detection problem into components that are solved carefully. Example: Let’s start with this hypothetical email attack and imagine how we could model various dimensions and how those models come together. Subject: Reset your password From: Microsoft Support <[email protected]> Content: “Please click _here_ to reset the password to your account.” This is a simple and prototypical phishing attack. As with any well-crafted social engineering attack, it appears nearly identical to a legitimate message, in this case a legitimate password reset message from Microsoft. Because of this, modeling any single dimension of this message will be fruitless for classification purposes.
Instead, we need to break up the problem into component sub-problems. Thinking like the attacker: Our first step is always to put ourselves in the mind of the attacker. To do so we break an attack down into what we call “attack facets”. Attack Facets: Attack Goal — What is the attacker trying to accomplish? Steal money? Steal credentials? Etc. Impersonation Strategy — How is the attacker building credibility with the recipient? Are they impersonating someone? Are they sending from a compromised account? Impersonated Party — Who is being impersonated? A trusted brand? A known vendor? The CEO of a company? Payload Vector — How is the actual attack delivered? A link? An attachment? If we break down the Microsoft password reset example, we have: Attack goal: steal a user's credentials. Impersonation strategy: impersonate a brand through a lookalike display name (Microsoft). Impersonated party: the official Microsoft brand. Payload vector: a link to a fake login page. Modeling the problem: Building ML models to solve a problem with such a low base rate and such high precision requirements forces a high degree of diligence when modeling sub-problems and engineering features. We cannot rely just on the magic of ML. In the last section, we described a way to break an attack into components. We can use that same breakdown to help inspire the type of information we would like to model about an email in order to determine if it is an attack. All these models rely on similar underlying techniques — specifically: Behavior modeling: identifying abnormal behavior by modeling normal communication patterns and finding outliers from that. Content modeling: understanding the content of an email. Identity resolution: matching the identity of individuals and organizations referenced in an email (perhaps in an obfuscated way) to a database of these entities. Attack Goal and Payload: Identifying an attack goal requires modeling the content of a message. We must understand what is being said. Is the email asking the recipient to do anything? Does it have an urgent tone? And so forth. This model may identify not only malicious content but safe content as well, in order to differentiate the two. Impersonated Party: What does an impersonation look like? First of all, the email must appear to the recipient to come from someone they trust. We build identity models to match various parts of an email against known entities inside and outside an organization. For example, we may identify an employee impersonation by matching against the active directory. We may identify a brand impersonation by matching against the known patterns of brand-originating emails. We might identify a vendor impersonation by matching against our vendor database. Impersonation Strategy: An impersonation happens when an email is not from the entity it claims to be from. To detect it, we model normal behavior patterns so we can spot abnormal ones. This may be abnormal behavior between the recipient and the sender. It may be unusual sending patterns from the sender. In the simplest case, like the example above, we can simply note that Microsoft never sends from “fakemicrosoft.com”. In more difficult cases, like account takeover and vendor compromise, we must look at more subtle clues, like an unusual geo-location or IP address of the sender or incorrect authentication (for spoofs).
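As a toy illustration of the identity-resolution and impersonation-strategy ideas above, here is a minimal sketch (mine, not Abnormal's production logic) that flags an email whose display name claims a known brand while its sending domain is not one of that brand's legitimate domains; the brand table and helper name are hypothetical stand-ins for the learned identity and behavior models described in the article.

from email.utils import parseaddr

# Hypothetical lookup table: brands and the domains they legitimately send from.
KNOWN_BRAND_DOMAINS = {
    "microsoft": {"microsoft.com", "microsoftonline.com"},
}

def looks_like_brand_impersonation(from_header: str) -> bool:
    # Rough stand-in for identity resolution + impersonation detection:
    # does the display name claim a known brand while the domain does not match?
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    for brand, legit_domains in KNOWN_BRAND_DOMAINS.items():
        if brand in display_name.lower() and domain not in legit_domains:
            return True   # lookalike display name sent from an unrelated domain
    return False

print(looks_like_brand_impersonation("Microsoft Support <support@fakemicrosoft.com>"))      # True
print(looks_like_brand_impersonation("Microsoft Support <account-security@microsoft.com>")) # False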
Attack Payload: For the payload, we must understand the content of attachments and links. Modeling these requires a combination of NLP models, computer vision models to identify logos, URL models to identify suspicious links, and so forth. Modeling each of these dimensions gives our system an understanding of emails, particularly along dimensions that might be used by attackers to conduct social engineering attacks. The next step is actually detecting attacks. Combining Models to Detect Attacks: Ultimately we need to combine these sub-models to produce a classification result (for example, P(Attack)). Just like in any ML problem, the features given to a classifier are crucial for good performance. The careful modeling described above gives us very high bandwidth features. We can combine these models in a few possible ways. (1) One humongous classification model: Train a single classifier with all the inputs available to each sub-model. All the input features could be chosen based on the features that worked well within each sub-problem, but this final model combines everything and learns unique combinations and relationships. (2) Extract features from sub-models and combine them to predict the target — there are 3 ways we can go about this: (2.a) Ensemble of Models-as-Features: Each sub-model is a feature. Its output depends on the type of model. For example, a content model might predict a vector of binary topic features. (2.b) Ensemble of Classifiers: Build sub-classifiers that each predict some target and combine them using some kind of ensemble model or set of rules. For example, a content classifier would predict the probability of attack given the content alone. (2.c) Embeddings: Each sub-model is trained to predict P(attack) like above, or some other supervised or unsupervised target, but rather than combining their predictions, we extract embeddings, for example by taking the penultimate layer of a neural net. Each of the above approaches has advantages and disadvantages. Training one humongous model has the advantage of getting to learn all the complex cross dependencies, but it is harder to understand, harder to debug, and more prone to overfitting. It also requires all the data to be available in one shot, unlike building sub-models that could potentially operate on disparate datasets. The various methods of extracting features from sub-models also have tradeoffs. Training sub-classifiers is useful because they are very interpretable (for example, we could have a signal that represents the suspiciousness of text content alone), but in some cases it is difficult to predict the attack target directly from a sub-domain of data. For example, a rare communication pattern alone is not sufficient to slice the space meaningfully to predict an attack. Similarly, as discussed above, a pure content model cannot predict an attack without context regarding the communication pattern. The embeddings approach is good but also finicky: it is important to vet your embeddings and not just trust that they will work. Also, the embedding approach is more prone to overfitting or accidental label leakage. Most importantly, with all these approaches it is crucial to think deeply about all the data going into the models and also the actual distribution of outputs. Blindly trusting in the black box of ML is rarely a good idea. Careful modeling and feature engineering are necessary, especially when it comes to the inputs to each of the sub-models.
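The sketch below illustrates approach (2.b), an ensemble of sub-classifiers, in the simplest possible form: each sub-model emits a score for its own slice of the email (content, sender behavior, URL payload), and a small final classifier combines those scores into P(attack). The feature names, toy data, and the logistic-regression combiner are illustrative assumptions, not the actual Abnormal architecture.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-email scores emitted by independent sub-models:
# [content_suspiciousness, sender_behavior_anomaly, url_payload_risk]
X_train = np.array([
    [0.9, 0.8, 0.95],   # phishing-like examples
    [0.7, 0.9, 0.60],
    [0.1, 0.2, 0.05],   # legitimate examples
    [0.2, 0.1, 0.10],
])
y_train = np.array([1, 1, 0, 0])  # 1 = attack, 0 = safe

# A small combiner model learns how much weight to give each sub-signal.
combiner = LogisticRegression()
combiner.fit(X_train, y_train)

# New email: content looks benign, but sender behavior and URL are anomalous.
new_email_scores = np.array([[0.3, 0.85, 0.9]])
print("P(attack) =", combiner.predict_proba(new_email_scores)[0, 1])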
Our solution at Abnormal As a fast-growing startup, we originally had a very small ML team which has been growing quickly over the past year. With the growth of the team, we also have adapted our approach to modeling, feature engineering, and training our classifiers. At first, it was easiest to just focus on one large model that combined features carefully engineered to solve subproblems. However, as we’ve added more team members it has become important to split the problem up into various components that can be developed simultaneously. Our current solution is a combination of all the above approaches depending on the particular sub-model. We still use a large monolithic model as one signal, but our best models use a combination of inputs including embeddings representing an aspect of an email and prediction values from sub-classifiers (for example a suspicious URL score). Combining models and managing feature dependencies and versioning is also difficult. Takeaways for solving other ML problems Deeply understand your domain Carefully engineer features and sub-models, don’t trust black box ML Solving many sub-problems and combining them for a classifier works well, but don’t be dogmatic. Sure, embeddings may be the purest solution, but if it’s simpler to just create a sub-classifier or good set of features, start with that. Breaking up a problem also allows scaling a team. If multiple ML engineers are working on a single problem, they must necessarily focus on separate components. Modeling a problem as a combination of subproblems also helps with explainability. It’s easier to debug a text model than a giant multi-modal neural net. But, there’s a ton more to do! We need to figure out a more general pattern for developing good embeddings and better ways of modeling sub-parts of the problem, better data platforms, and feature engineering tools, and so much more. Attacks are constantly evolving and our client base is ever-growing leading to tons of new challenges every day. If these problems interest you, yes, we’re hiring!
https://medium.com/abnormal-security-engineering-blog/combining-ml-models-to-detect-email-attacks-e1b4d1f2d14e
['Jeshua Bratman']
2020-11-18 00:21:27.244000+00:00
['Machine Learning', 'Email Security', 'Artificial Intelligence', 'Cybersecurity', 'Data Science']
What they are and how to use Lambda expressions in Python
Definition of Lambda In Python, a lambda function refers to a small anonymous function. We call them “anonymous functions” because technically they have no name. Unlike a normal function, we do not define it with the standard keyword def that we use in Python. Instead, Lambda functions are defined as a line that executes a single expression. These types of functions can take any number of arguments, but can only have one expression. Basic syntax All Lambda functions in Python have exactly the same syntax: #I write p1 and p2 as parameters 1 and 2 of the function. lambda p1, p2: expression The best way I can explain it is by showing you a basic example, so let’s see a normal function and a Lambda equivalent: #Here we have a function created to add up. def sum(x,y): return(x + y) #Here we have a lambda function that also adds up. lambda x,y : x + y #In order to use it we need to save it in a variable. sum_two = lambda x,y : x + y As with list comprehensions, what we’ve done is write the code in one line and clean up the unnecessary syntax. Instead of using def to define our function, we have used the keyword lambda ; then we write x, y as the function arguments, and x + y as the expression. In addition, the keyword return is omitted, further condensing the syntax. Finally, and although the definition is anonymous, we store it in the variable sum_two to be able to call it from any part of the code; otherwise we could only use it in the line where we define it. Applying Lambdas I want to give you some ideas of where Lambdas could be applied. Below I have created some examples applying Lambdas for different purposes, so you can better understand how they work. Lambda in a Pandas DataFrame with the apply() method I think we can apply a Lambda function to data cleaning in Pandas with the apply() method, which can be useful to avoid creating a loop that goes through the whole DataFrame: #import pandas and numpy to create a DataFrame import pandas as pd import numpy as np #create a dictionary with two columns, Celsius and Kelvin, both with equal data data = {'Celsius': [22, 36, 20, 26, 30, 38], 'Kelvin': [22, 36, 20, 26, 30, 38]} #create the DataFrame with its index and the Celsius and Kelvin columns df = pd.DataFrame(data, index = ['Londres','Madrid','Barcelona','Sevilla','Cádiz','Lima']) #create a lambda function to convert degrees Celsius to Kelvin to_kelvin = lambda x: x + 273.15 #apply the Lambda to the 'Kelvin' column with the apply() method df['Kelvin'] = df['Kelvin'].apply(to_kelvin) #In the 'Kelvin' column the Lambda has been applied: 273.15 has been added to x (in this case 22 for Londres) Output : Londres 295.15 Madrid 309.15 Barcelona 293.15 Sevilla 299.15 Cádiz 303.15 Lima 311.15 Name: Kelvin, dtype: float64 Lambda in lists with the filter() method The filter() function returns a new collection with the filtered elements that meet a condition. We can check, for example, what the even numbers are in a given list.
To do this we will pass a lambda and the list to filter() in the following way: #I have a list with many numbers check = [38,24,99,42,2,3,11,23,53,21,3,53,77,12,34,92,122,1008,26] #I create a variable and apply filter() and lambda filt = filter(lambda x: x % 2 == 0, check) #I create a variable to convert the result of 'filt' into a list evens = list(filt) #Finally I get the list with the elements that returned True when passing through the filter print(evens) Output : [38, 24, 42, 2, 12, 34, 92, 122, 1008, 26]
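To round out the filter() example above, here is a small illustration, not taken from the original post, of the same one-line style with two other common companions of lambda: map() and the key argument of sorted().

# map(): apply a one-line expression to every element, no def needed
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))
print(squares)  # [1, 4, 9, 16]

# sorted() with key=: order (city, kelvin) tuples by their temperature field
cities = [("Madrid", 309.15), ("Londres", 295.15), ("Lima", 311.15)]
print(sorted(cities, key=lambda city: city[1]))
# [('Londres', 295.15), ('Madrid', 309.15), ('Lima', 311.15)]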
https://medium.com/datatau/what-they-are-and-how-to-use-lambda-expressions-in-python-41aa87852c83
['Borja Uría']
2020-05-14 15:55:59.284000+00:00
['Python', 'Data Cleaning', 'Lambda Function', 'Python Programming']
Behold the Winners of the 280-Character Story Contest
Tamar Nachmany, “Mount Sinai” All the electronics above my hospital bed are gossiping about when, exactly, I’m going to die. It sounds like a concert I heard in Berlin many years ago. We were told to close our eyes and listen. Static. Beeping. Rain. My monitor is the principal violin. I am not dying alone. Stephen Aubrey is a writer and theater-maker living in Brooklyn. His writing has appeared in Publishing Genius, Commonweal, The Brooklyn Review, Pomp & Circumstance, and Electric Literature. M. Lopes da Silva is an author and fine artist living in Los Angeles. Her work has appeared or is forthcoming in Blumhouse, The California Literary Review, and Queen Mob’s Teahouse, and anthologies by Mad Scientist Journal, Gehenna & Hinnom Press, and Fantasia Divinity Publishing. She recently illustrated the Centipede Press collector’s edition of Jonathan Carroll’s The Land of Laughs. Josh Lefkowitz won the Avery Hopwood Award for Poetry at the University of Michigan. His poems and essays have been published at The Awl, The Millions, The Rumpus, and many other places including publications in Canada, Ireland, and England. He lives in Brooklyn, NY. James Lough’s upcoming book, Short Circuits: Aphorisms, Fragments, and Literary Anomalies will be published by Schaffner Press in April. His oral history, This Ain’t No Holiday Inn: Down and Out in New York’s Chelsea Hotel 1980–1995 (Schaffner Press 2013) was optioned by Lionsgate Entertainment for TV production. He is a professor of nonfiction writing in the Savannah College of Art and Design’s writing department, which he formerly directed. Tamar Nachmany is the director of 1010 Residency, a former writer-in-residence at the New Mexico School of Poetics, and a former Johns Hopkins University Woodrow Wilson Research Fellow. Her work has been shown at the Bell House (Baltimore), the Cullom Gallery (Seattle), the Jewish Museum of Baltimore, and other venues. She is currently writing her first novel. Sara Lautman is a cartoonist, illustrator, and editor in Baltimore. Her drawings have been published by The New Yorker, Playboy, Mad, Jezebel, The Paris Review, The Pitchfork Review and The Awl. She is the Comics Editor for Electric Literature’s Okey-Panky, and in 2016, Recommended Reading published her illustrated “cut-up” collaboration with Shelia Heti, “The Humble Simple Thing.” Illustrations © 2017 by Sara Lautman.
https://medium.com/electric-literature/behold-the-winners-of-the-280-character-story-contest-17532fb1ab9e
['Electric Literature']
2017-10-24 13:51:25.004000+00:00
['Contests', 'Writing', 'Microfiction', 'Fiction', 'Twitter']
Tracker: Ingesting MySQL data at scale — Part 2
Robert Wultsch | Pinterest engineer, SRE In Part 1 we discussed our existing architecture for ingesting MySQL called Tracker, including its wins, challenges and an outline of the new architecture with a focus on the Hadoop side. Here we’ll focus on the implementation details on the MySQL side. The uploader of data to S3 has been open-sourced as part of the Pinterest MySQL Utils. Tracker V-0 As a proof of concept, we wrote a hacky 96-line Bash script to unblock backups to Hive for a new data set. The script spawned a bunch of workers that each worked on one database at a time. For each table in the database, it ran SELECT INTO OUTFILE and then uploaded the data to S3. It worked, but BASH… And that just isn’t a long term solution. Tracker V-1 For our maintainable implementation, we rewrote the Bash script into a Python script called mysql_backup_csv.py. The only significant difference (other than making us not feel bad about ourselves) was we added lzop compression in order to reduce the size of the data in S3. Why lzop? We thought it would be the lightest weight compression tool with a command line interface we could install from apt-get. We tested this against our large sharded MySQL fleet, and it was slow. Like, really slow — 8 hours slow. Speed it up We now had a tool that was maintainable for uploading our MySQL data to S3. The problem was that the tool would not process all data fast enough for our team to meet their SLA’s. We needed to improve the overall throughput significantly, and so went to work on the following: Implement locking so multiple slaves could cooperatively dump in parallel. The lock is maintained via a simple table on the master. This allowed us to get down to around 3.5 hours to dump all our data. Too slow! Skip writing to disk. The Percona distribution of MySQL has a very interesting feature in that SELECT INTO OUTFILE can write to a FIFO. As is, we had to dump all of our data, and then read it back from the filesystem. Using a fifo, we could build a pipeline that did not need to write to the local filesystem at all! This got us to somewhere around 1 hour which was way less than our requirement. Slow it down Per the fine manual (and this is in super old manuals): “ASCII NUL is escaped to make it easier to view with some pagers.” !@#()U@!#!!! We had to write a C program called nullescape to unescape the data. &*(@!#! Adding this to our pipeline resulted in our servers burning four cores just to unescape NUL bytes. This slowed us down to 1.5 hour to dump all our data. This was still within our requirements and left us a bit of breathing room. Winning the race against an EOF A problem with the system was that partial uploads must be prevented. Partial uploads could happen if anything in the pipeline failed. When a Linux program terminates (regardless of how or why), its open file handles will close. If the file handle is to a FIFO, the reader of the FIFO will receive an EOF without any indication of success or failure of the process feeding data into the FIFO. So, why does this matter? Well, dump queries get killed from time to time, and early versions of nullescape would segfault occasionally. When either happened, the rest of the pipe would think no more data was coming. It was possible to catch the non-zero return status and delete the uploaded data, but that’s kinda racy and eventually the race would be lost. We talked about it a bunch, and the best solution we came up with was a program that would sit just before s3gof3r in the pipeline. 
This program would repeat its input from stdin to stdout, but only transmit an EOF if all programs in the pipeline succeeded. This program is called safe_uploader and ended up being very lightweight. In the beginning there were subtle bugs in safe_uploader that resulted in zombie and orphan processes; however, once we fixed these, they quit appearing on database servers. Systemic improvements Compared to the previous system, this project significantly improved the usability of the resulting data and reduced operational issues: We added support for MySQL binary types. During the backup, rather than using hex encoding for binary columns (which doubled the size of the backup file), we chose to use escaping for some special characters (e.g. \t, \r). Hadoop’s built-in TextInputFormat can’t read the backup with newline characters escaped, so we wrote our own EscapedTextInputFormat for Hadoop/HIVE. We made a fix on the Hadoop Streaming side for this special TextInputFormat. We rewrote the CSV parser for our Python clients to read the new backup file. We added a consistent data retention policy to all backup files, and we added automatic adjustment of the HIVE table to make sure its schema is always in sync with the MySQL schema. Since all data is imported into Hive without significant modification, we now have a secondary backup system. This is useful for small losses of data. Restoring an xtrabackup takes hours, but pulling a single row or a small table from Hive is really fast and, better yet, doesn’t require help from the DBAs! When a failover occurs, a small script run by cron kills running backups. In the past, this would require dropping the MySQL user for the dumper framework. Often, this would also result in the DBA team and Data-Eng paging each other in the wee hours of the morning. Our backups are now fully consistent on the schema level and generally consistent on a replica set within a few seconds. This is a big improvement for cross-shard data consistency checking. An unintended benefit of Tracker pushing the slave servers really, really hard is that we are effectively running a benchmark every night that significantly stresses our slave servers. From time to time, we’ll remove slower servers from production. Tracker is now in full production with the capability of moving all our MySQL data into S3 within two hours. Future work We’re not stopping here. We realized that for some tables the daily change is actually not big enough to warrant a full snapshot pull, so we’re building an incremental pull pipeline that converts MySQL binary logs into a Kafka stream. This will then be incrementally pushed into S3 and later compacted with the previous snapshot to produce continuously updated snapshots. Stay tuned! Acknowledgements: Thanks to Henry Cai, Krishna Gade, Vamsi Ponnekanti, Mao Ye and Ernie Souhrada for their invaluable contributions to the Tracker project. For Pinterest engineering news and updates, follow our engineering Pinterest, Facebook and Twitter. Interested in joining the team? Check out our Careers site.
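To make the "no partial uploads" idea concrete, here is a rough, hypothetical Python sketch of a dump-compress-upload pipeline in the spirit of the one described above; the mysql and lzop stages are stand-ins for the real SELECT INTO OUTFILE + FIFO setup, and upload_to_s3, promote_s3_object, and delete_s3_object are placeholder helpers, not part of the open-sourced Pinterest MySQL Utils.

import subprocess

# Placeholder helpers -- stand-ins for s3gof3r/boto3 calls, not real utilities.
def upload_to_s3(stream, key): ...
def promote_s3_object(src_key, dst_key): ...
def delete_s3_object(key): ...

def dump_table(db, table, s3_staging_key, s3_final_key):
    # Stage 1: stream the table out of MySQL (a stand-in for SELECT INTO OUTFILE
    # writing into a FIFO on a Percona server).
    dump = subprocess.Popen(
        ["mysql", "--batch", "--raw", "-e", f"SELECT * FROM {db}.{table}"],
        stdout=subprocess.PIPE,
    )
    # Stage 2: compress with lzop before anything reaches S3.
    compress = subprocess.Popen(
        ["lzop", "-c"], stdin=dump.stdout, stdout=subprocess.PIPE
    )
    dump.stdout.close()  # so a failure downstream propagates back as SIGPIPE

    # Stage 3: upload the compressed stream to a staging key only.
    upload_to_s3(compress.stdout, s3_staging_key)

    # Only promote the object once every stage reports success; this mirrors
    # what safe_uploader enforces by withholding EOF until the pipeline succeeds.
    if dump.wait() == 0 and compress.wait() == 0:
        promote_s3_object(s3_staging_key, s3_final_key)
    else:
        delete_s3_object(s3_staging_key)
        raise RuntimeError(f"dump pipeline failed for {db}.{table}")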
https://medium.com/pinterest-engineering/tracker-ingesting-mysql-data-at-scale-part-2-9c5249e9332a
['Pinterest Engineering']
2017-02-21 20:07:23.540000+00:00
['Open Source', 'Big Data', 'Sre', 'MySQL', 'Data Science']
Mobile App Security: Issues And Standards
Photo by Author Mass migration to the online space has led to the widespread use of portable devices, especially mobile ones. It becomes more difficult to imagine users’ lives without mobile applications and constant information updates. Ensuring data protection is one of the critical aspects of application development. And it’s about both user data and the data of the application itself. It seems to be a banality. But too much depends on this banality to be neglected. Application security is important because modern applications are often available on multiple networks and connected to the cloud, making them more vulnerable to threats and security breaches. There is a growing need to provide security at the network level and the application level, and this approach is gaining more and more benefits. One reason is that hacker attacks are increasingly targeting applications. Application security validation identifies weaknesses in the application layer, which can help prevent such attacks. App Annie experts say that users spend 10 out of every 11 minutes on mobile apps. Given the role these IT solutions have come to play, many companies are concerned about the quality of applications, especially their security. BetaNews research shows that 44% of applications contain personal data requiring a high-security level, and 66% use functionality to compromise user information privacy. To ensure the security of the mobile apps they provide, companies rely on expert advice to provide security testing guidelines. This article is about common defects in this area and the proper ways to prevent them. Top-3 mobile app vulnerabilities Among mobile apps’ leaders in terms of installs, 94% contain at least three medium-risk vulnerabilities, and 77% have at least two critical ones. Difficulties with storing information and transferring data appear already in software development. And about 1/3 of applications contain hidden functionality and bottlenecks in the source code. The figure below shows that these defects rank in the first three places as the most critical vulnerabilities in mobile applications. Let’s discuss them in more detail. Source: NowSecure Research Insecure data storage Compared to other types of devices, mobile phones are more at risk of being stolen or lost than others. But even if the physical copy of the device is with the user, companies should take more care of the proper storage of sensitive data (confidential information, etc.) to prevent intentional and unintentional leaks. The issue is especially critical for software in banking and finance and healthcare: the insecurity of storing bank card numbers or nuances about patients’ health status in most cases can cause distrust on the part of users, which can lead to reputational losses, lawsuits, and scandals. Problems with data transmission Interoperability between different platforms has become the norm for many applications. They are using a Google account when registering or paying online are some of those conveniences that users don’t want to give up. Using open APIs doesn’t only benefit customers. The integration of services provides end-users with a multifunctional and convenient application. This helps companies meet their business needs. It is especially crucial to remember the risk of personal information leakage, which only becomes higher due to unsafe data transmission. 
It is important to remember to use TLS and SSL encryption and to ensure that third-party services connected to the app comply with security requirements, including the minimum set of permissions, validating input from external sources, and much more. Hidden functionality About half of all analyzed applications contain hidden functionality to simplify debugging and testing of the application. But often it remains even after the release, which makes it easier for fraudsters to get at data. How can it happen? Attackers can download the application, examine the configuration files, view the code itself, and gain access to the software’s administrative part. All this can lead to confidential data disclosure, cryptographic extensions, theft of intellectual property, and much more. Therefore, organizations should consider all possible scenarios and prevent potential risks by ensuring that the application runs securely. 3 Steps Towards Mobile App Reliability Many companies are now using an agile methodology. Thanks to this, all security items are checked at each stage. So if your software provider uses this methodology, you can be more confident about security. Step 1. Conduct security testing at all stages of the life cycle One in four of the respondents interviewed by the experts who compiled the World Quality Report 2019–2020 optimized their testing processes thanks to Agile. The statistics are as follows: if a company has introduced agile methodologies, processes are accelerated by 75%, and cycles automatically become shorter due to more frequent releases. That is why it is no longer enough to take care of mobile applications’ security at the last stages of the life cycle: critical defects are often found at the production stage, and it is much more expensive to fix them at later stages of the project. What’s more, having a robust early testing strategy, proactively identifying sensitive data, and building a threat mitigation model can help you avoid future security issues. Step 2. Implement penetration testing Almost every mobile application interacts with a back-end service where user data is stored. To prevent data leakage, security specialists simulate intruders’ actions, thereby checking the system for vulnerabilities. The essence of penetration testing is to detect bottlenecks that cybercriminals can use for their purposes: stealing data, deliberately stopping the server, or restarting it. Prepare for such a reaction of the system and develop a plan for its recovery. Step 3. Automate security testing Process optimization is directly related to automation, and security testing is no exception. More than half of the respondents to the World Quality Report report that information security risks have decreased since they automated testing. Keep in mind, however, that full automation is not always effective. For example, automated vulnerability scans can skip combinations that are critical for the application. Therefore, develop appropriate QA strategies, considering the architecture, business logic, and system features. While businesses devoted plenty of time to security testing in 2019, only 13% of security checks were automated. And this is precisely the growth point for companies to think about in 2020 and beyond. Conclusion To keep your application secure, you should consider the most common vulnerabilities and prevent them from the very beginning. With the help of these recommendations, you can prevent them and get reliable software that meets all the necessary standards.
https://medium.com/an-idea/mobile-app-security-issues-and-standards-671b4e4f0552
['Andrej Suschevich']
2020-12-02 15:06:12.935000+00:00
['Mobile App Development', 'Mobile Apps', 'Data Security', 'Software', 'Testing']
Top Data Science Trends to Watch For in 2019
Data science is a common term in the present time. This was not the case five years ago because only a few people know about it at that time. Before moving further, you need to understand what is it? It is nothing but is a multidisciplinary blend of data inference, algorithm development, and technology. The given chart displays people’s interest in data science from 2011 to 2017. There is a lot of buzz going on when it comes to major data science trends to watch in 2019. Each individual has their own prediction for data science trends to watch in 2019. Anthony Goldbloom, CEO of Kaggle thinks that one will see departmental or business-specific teams in place of data centers whereas Thomas H. Davenport, Professor at the Babson College thinks that artificial intelligence (AI) will see advances in 2019. AI remained at the top position when people were asked about data trends to watch for in 2019. 2019 can be considered as the year of artificial intelligence. Why did I say this? Don’t know? Just look at the number of startups with AI in their business names or taglines. AI is everywhere and there is hardly any field which has remained untouched with its impact. Now, we will discuss the top five data science trends that will see the light of the day in 2019. Artificial Intelligence and Intelligent Apps The buzz created by AI in 2018 is likely to continue in 2019. The world is at a nascent stage of AI and the coming year will see a more advanced application of AI in almost every field. However, harnessing AI will still pose a challenge. You can see more intelligent apps developed using AI and machine learning. Decision making will become a piece of cake with the incorporation of AI and improve the overall business experience. Automated machine learning will become commonplace and transform data science with improved data management. Applications will mostly rely on AI to improve the overall experience. So, you can expect a rise in the number of intelligent apps. Virtual Representation of Real World Objects The digital representation of real-life physical objects backed by AI capabilities will become rampant. This technology can be used to solve business problems across businesses. Not only this, it will speed up the pace of real-time innovations too. Augmented reality (AR) and virtual reality (VR) has already given way to massive transformations so you can expect more breakthroughs in this field in 2019. Human expectations from digital systems will surely rise. Regulatory Schemes A lot of data is generated every second and the pace of data generation increases by catalysts like IoT. With more data, data security becomes more important as everything depends on data. You can expect more data regulatory schemes to follow in 2019 as a security of data is the most important thing for each and every single entity whether it is an organization or an individual. Data regulatory events like GDPR (European General Data Protection Regulation), which was enforced in May 2018 regulated data science practices to some extent. GDPR set up a boundary and limited the collection and management of personal data. These regulatory activities will impact future predictive models. The recent cyberattacks have mandated the need for a less vulnerable data protection scheme. So, you can expect new protocols and procedures to secure data in 2019. Blockchain You can expect a lot of advancements in Blockchain technology, where the record of transactions made in bitcoin or any other cryptocurrency is maintained. 
You could say it is a highly secured ledger, as Blockchain technology has far-reaching implications when it comes to data security. Still, you can expect new security measures and processes in the coming year. Edge Computing With the growth of IoT, edge computing will become popular. The number of devices and sensors is increasing with each passing day, so demand for edge computing will also increase. Edge computing is necessary to maintain proximity to the source of information, as it eliminates issues such as connectivity, latency, and bandwidth. Edge computing blended with cloud technology will make way for a coordinated structure, much like a service-oriented model. IDC predicts that by 2020, new cloud pricing models will service specific analytics workloads. Conclusion: With these trends set to prevail in the coming year, the future for innovation looks bright. You can expect data science to witness massive use and development in 2019. Digital space will replace conventional modes when it comes to human experience. The field of data science is expected to grow, so investing in this field looks like a profitable move. Summary: The given article talks about data science trends to watch for in 2019. After reading this article, you will surely want to invest in the field of data science. Author Bio: Paige Griffin is a seasoned Content Writer at Net Solutions, Los Angeles, with 7 years of expertise in blogging, writing creative and technical copy for direct response markets, and promotional advertising for B2B and B2C industries. Born and brought up in New York, Paige holds a bachelor’s degree in English Literature. She has worked with industries like IT, Product Development, Lifestyle, and Retail, among others. Besides her technical background, she is a poet at heart who loves to connect with people through a dose of creativity and imagination.
https://medium.com/quick-code/top-data-science-trends-to-watch-for-in-2019-cafc8034db4c
['Paige Griffin']
2020-07-23 05:49:44.041000+00:00
['Data Science', 'Artificial Intelligence']
Python Numpy and Matrices Questions for Data Scientists
I’ve been preparing for Data Science interviews for a while, and the one thing that struck me the most is the lack of preparation for Numpy and matrices questions. Often, Data Scientists are asked to perform simple matrix operations in Python, which should be straightforward but, unfortunately, throws a lot of candidates off the bus! Me included! One time, I was asked by a FAANG company to perform a multiplication of two matrices, which I didn’t know how to do at the time. I find the best way of preparing for these types of interviews is to find a niche area and write a post on the topic. It’s a win-win situation for me and my fellow readers. So, here it goes. In this post, I walk through 4 Numpy/matrices questions that often come up in DS interviews and code them up in Python. Question 1: Given a 4x4 Numpy matrix, how to reverse the matrix? # step 0: construct a 4*4 Numpy matrix [[ 1 2 3 4] [ 5 6 7 8] [ 9 10 11 12] [13 14 15 16]] As a side note, np.arange(1,17) returns a Numpy array, and reshape(4,4) generates a 4*4 matrix. # step 1: flatten the numpy array array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]) There are two ways to flatten a matrix depending on the data type. For Numpy arrays, we use the np.array.flatten() command; for non-array matrices, we use matrix.ravel(). Please try it out. # step 2: read the elements backward into a new matrix array([16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1]) After flattening, we read the matrix backwards starting from the last element. # step 3: reshape the matrix back into a 4*4 matrix Last, we reshape the matrix and turn it back into a 4*4 matrix, as shown: array([[16, 15, 14, 13], [12, 11, 10, 9], [ 8, 7, 6, 5], [ 4, 3, 2, 1]]) The unique thing about Numpy arrays or matrices is that we have to flatten them out before reversing them, and there are two ways of doing so depending on the data format (Numpy or not). Question 2: How do you multiply two matrices? This type of question can be tricky! Two matrices? Numpy or not? The solution largely depends on the data type in question. Overall, there are two solutions: the dot method for Numpy arrays and nested for loops for non-arrays. Solution 1: Numpy arrays # step 1: numpy array # A array([[0, 1, 2], [3, 4, 5], [6, 7, 8]]) # B array([[10, 11, 12], [13, 14, 15], [16, 17, 18]]) # Step 2: dot method [[ 45 48 51] [162 174 186] [279 300 321]] The one-liner dot method easily solves the multiplication question for Numpy arrays, but I doubt interview questions would be so easy. So, you’re more likely to be asked to multiply non-arrays. Solution 2: Non-Arrays Step 0: construct two matrices. Python does not have a built-in matrix data type, so we use a list of lists (nested lists) instead. # step 1: create a zero matrix to store results The reason why we create an empty matrix is to store the multiplication results in the following nested for loop. # step 2: nested for loop [[14, 20, 31], [21, 30, 46], [21, 30, 44]] Let’s disentangle the nested for loops: for i in range(len(X)): iterates over the rows of X. for j in range(len(Y[0])): iterates over the columns of Y. A side note: we use Y[0] to access matrix columns and Y to access matrix rows. for k in range(len(Y)): iterates over the rows of Y. Z[i][j] += X[i][k]*Y[k][j]: fills in the values of Z with the sums of element-wise products.
Recall the procedure of matrix multiplication: the elements in the first row of X multiply the corresponding elements in the first column of Y, and we add the products up. Repeat the process until the end. The above nested loop simply follows the same procedure for calculating a matrix multiplication that we normally use. A quick refresher of linear algebra helps here. Question 3: How to transpose a matrix? Again, there are multiple solutions depending on whether you are allowed to use Numpy. Solution 1: nested for loop (don’t use Numpy) # step 1: create matrices # step 2: nested for loop [12, 1, 3] [7, 2, 4] Solution 2: using zip() for a list of tuples (don’t use Numpy) # step 1: create a list of tuples [(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)] We use a list of tuples to create a matrix. # step 2: unzip and zip in one line (1, 4, 7, 10) (2, 5, 8, 11) (3, 6, 9, 12) # Solution 3: use a Numpy array 3.1 np.transpose() If we can use Numpy, a one-liner gets the job done. 3.2 np.array.T array([[12, 1, 3], [ 7, 2, 4]]) Alternatively, we can change the matrix (list) into a Numpy array and use array.T. Question 4: How to add two Numpy matrices together? Again, there are two solutions depending on whether we can use Numpy. Solution 1: np.array() [[17 15 4] [10 12 9] [11 13 18]] Numpy handles element-wise addition with ease. Solution 2: nested for loops for an ordinary matrix [17. 15. 4.] [10. 12. 9.] [11. 13. 18.] The code is pretty self-evident, and we have covered it all in the above questions. On a related note, there are two variations of the question. 4.1 How to stack matrices vertically? array([[12, 7, 3], [ 4, 5, 6], [ 7, 8, 9], [ 5, 8, 1], [ 6, 7, 3], [ 4, 5, 9]]) 4.2 How to stack matrices horizontally?
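Since the embedded code snippets from the original post did not survive into this text, here is a hedged reconstruction of Numpy one-liners for Questions 1–4 (reverse, multiply, transpose, add, stack); it reproduces the outputs quoted above, though the variable names are mine rather than the author's.

import numpy as np

# Question 1: reverse a 4x4 matrix -- flatten, read backwards, reshape
m = np.arange(1, 17).reshape(4, 4)
print(m.flatten()[::-1].reshape(4, 4))   # [[16 15 14 13] ... [ 4  3  2  1]]

# Question 2: multiply two matrices with the dot method
A = np.arange(9).reshape(3, 3)           # [[0 1 2] [3 4 5] [6 7 8]]
B = np.arange(10, 19).reshape(3, 3)      # [[10 11 12] [13 14 15] [16 17 18]]
print(A.dot(B))                          # [[ 45  48  51] [162 174 186] [279 300 321]]

# Question 3: transpose with a one-liner
X = np.array([[12, 7], [1, 2], [3, 4]])
print(np.transpose(X))                   # [[12  1  3] [ 7  2  4]], same as X.T

# Question 4: element-wise addition, plus the two stacking variations
Y = np.array([[12, 7, 3], [4, 5, 6], [7, 8, 9]])
Z = np.array([[5, 8, 1], [6, 7, 3], [4, 5, 9]])
print(Y + Z)                             # [[17 15  4] [10 12  9] [11 13 18]]
print(np.vstack((Y, Z)))                 # stack vertically (4.1)
print(np.hstack((Y, Z)))                 # stack horizontally (4.2)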
https://towardsdatascience.com/python-numpy-and-matrices-questions-for-data-scientists-167af1c9d3a4
['Leihua Ye', 'Ph.D. Researcher']
2020-12-30 00:18:10.177000+00:00
['Python', 'Interview', 'Data Science', 'Programming', 'Matrix']
Geeks on a Train
Geeks on a Train starts today! This afternoon I’m leaving Dalian, Liaoning, China for a 10-day journey across China’s tech ecosystem with a bunch of awesome startup founders. By train. Because planes and buses are so last year. With stops scheduled in both Beijing and Shanghai and a number of events on the schedule including tech meetups, company visits, mentor meetings, and a 10x10 mini conference in each city, it’s bound to be a fun — and delightfully chaotic — experience. I’ve really been looking forward to it. Geeks on a Train (affectionately referred to as “GOAT”) is part of the Dalian-based Chinaccelerator program, which is a Chinese startup accelerator run by program director Cyril Ebersweiler and the good folks at SOS Ventures. I was fortunate enough to meet them through TechStars in 2010 and am really honored that they invited me over to work with some of the young teams that were accepted into the program this year. How could I say no? Chinaccelerator itself is much like TechStars, but with a brilliant international twist. The startups here aren’t just from China, they’re from all over the world, with founders from Malaysia, Canada, Italy, the Phillipines, India, and England, as well as China and the United States. It’s really fascinating to experience both the similarities and differences of startup life on the other side of the world, but a couple philosophies remain as constants: heartfelt motivation and JFDI are key. In the two weeks I’ve been here thus far, I’ve seen the founders produce some truly great work and make impressive progress. It’s good stuff. There’s nothing more inspiring than being trapped in a room full of crazy entrepreneurs with wildly different backgrounds who are trying to change the world. But I can only imagine what it’s going to be like trapped on an overnight train with them :).
https://medium.com/zerosum-dot-org/geeks-on-a-train-c30f751e8bd0
['Nick Plante']
2017-11-04 17:29:10.212000+00:00
['China', 'Startup']
Exploratory Data Analysis (EDA) — Hands-on NYC Airbnb Dataset
Dataset Download the dataset from: NYC Airbnb data REMEMBER: The goal is to predict rental prices for the given data. Note: I have just explained some steps of EDA on NYC Airbnb to give the intuition for approaching EDA on any problem. To know more about this dataset, check on Github. Try to understand the data. For example, ask questions like: How many columns are there in the dataset? What are the datatypes of those columns? How many data-points contain null values (i.e. missing values)? How should these missing values be replaced? How many data-points are there in the dataset? What is the range of all numeric columns? What are the mean, median, mode, percentile, etc. values of the features? How many values are there for a particular feature in some range? Try to answer or verify these questions, or similar kinds of questions, as follows. Now, check the output and note down the observation. For example, from the output of data.dtypes the observation can be: some columns contain text, so there may be a need to use BoW, tf-idf, w2vec, etc. for data modeling. Similarly, from the output of data.isnull().sum() the observation can be: there are 4 columns that have null values, and we have to replace them, as they might cause problems later when considering the remaining values of the same column. So here, the next step is to replace these null values. Now, to know the range of data-points for a particular feature, univariate plotting using a histogram is a good option. Observation: most of the prices are less than 1000. In all, there are 239 data-points with a rental price > 1000. These are either super lavish listings or there could be an error during input. Nonetheless, since these records are skewing our data, we can treat them as outliers and drop them. Observation: the result is roughly a Gaussian distribution for rental values < 250, so considering threshold = 250 is a good option. Likewise, we keep exploring different features of the dataset. Observation: almost 90% of the data is covered. Observation: there are 5 groups, and 85% of the listings are covered by Manhattan & Brooklyn. Here we can note that Brooklyn and Manhattan tend to have more listings with price > 150. Also, most listings with price > 100 are entire home types, followed by private rooms and then shared rooms, which are the cheapest. Following the exploration of features, it is good to drop features that are not important for prediction. Also, checking the correlation between features is important. Exploring the room_type feature: mostly, the room type is either an entire home or a private room. Detecting outliers is also a crucial part of EDA. Now, exploring the minimum_nights column: it’s quite weird, the range is between 1 night and 1250 nights. From the graph, most of the values are under 100. Only 747 data points have minimum_nights > 30, so we replace all of those with 30. Now, plotting the correlation graph for all features: this graph can reveal a lot about which features are important for achieving the goal of the problem. Now the data can be used to build a simple linear regression model, and then one can decide the approach for data modeling. You can find the source code on Github.
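Because the notebook outputs referenced above were embedded as images in the original post, here is a hedged sketch of the kind of pandas code the walkthrough describes; the file name and the fill values are my assumptions, and the author's actual code lives in the linked Github repository.

import pandas as pd

# Assumed file name for the Kaggle "New York City Airbnb Open Data" CSV.
data = pd.read_csv("AB_NYC_2019.csv")

# Understand the data: column types and missing values.
print(data.dtypes)
print(data.isnull().sum())

# Replace nulls (illustrative choices, not necessarily the author's).
data["reviews_per_month"] = data["reviews_per_month"].fillna(0)
data["name"] = data["name"].fillna("unknown")

# Price distribution: listings above 1000 skew the data, so drop them.
print((data["price"] > 1000).sum())           # the post reports 239 such listings
data = data[data["price"] <= 1000].copy()

# minimum_nights ranges from 1 to 1250 nights; cap the long tail at 30.
print((data["minimum_nights"] > 30).sum())    # the post reports 747 such rows
data.loc[data["minimum_nights"] > 30, "minimum_nights"] = 30

# Correlations between numeric features, as a starting point before modeling.
print(data.select_dtypes("number").corr())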
https://medium.com/towards-artificial-intelligence/exploratory-data-analysis-eda-hands-on-nyc-airbnb-dataset-c835f08195da
['Rajvi Shah']
2020-10-09 19:33:54.441000+00:00
['Data Analysis', 'Kaggle', 'Data Science', 'Data Visualization', 'Machine Learning']
Elements of Functional Programming in Python
“Object-oriented programming makes code understandable by encapsulating moving parts. Functional programming makes code understandable by minimizing moving parts.” : Michael Feathers There are multiple programming languages in the world, and there are many categories into which they can be classified. A programming paradigm is one such way of classifying programming languages based on their features or coding style. A programming paradigm is essentially a style or a way of programming. Most of the time, we understand Python as an object-oriented language, where we model our data in the form of classes, objects, and methods. However, there also exist several alternatives to OOP, and functional programming is one of them. Here are some of the conventional programming paradigms prevalent in the industry (source: Wikipedia). Functional Programming (FP) The Functional Programming Paradigm As per Wikipedia, functional programming is a programming paradigm, a style of building the structure and elements of computer programs, that treats computation as the evaluation of mathematical functions and avoids changing state and mutable data. The above definition might sound confusing at first, but it essentially tries to put forward the following aspects: FP relies on functions, and everything is done using functions. Moreover, FP focuses on defining what to do, instead of performing some action. The functions of this paradigm are treated as first-class functions. This means functions are treated like any other objects, and we can assign them to variables or pass them into other functions. The data used in functional programming must be immutable, i.e. it should never change. This means if we need to modify data in a list, we need to create a new list with the updated values rather than manipulating the existing one. The programs written in FP should be stateless. A stateless function has no knowledge about its past. Functional programs should carry out every task as if they are performing it for the first time. Simply put, the functions are only dependent on the data passed to them as arguments and never on outside data. Laziness is another property of FP wherein we don’t compute things that we don’t have to. Work is only done on demand. If this makes sense now, here is a nice comparison chart between OOP and FP which will make things even more apparent. Python provides features like lambda, filter, map, and reduce that can easily demonstrate the concept of Functional Programming. All the code used in this article can be accessed from the associated Github Repository or can be viewed on my_binder.
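As a small illustration of those aspects (first-class functions, immutability, and the lambda/filter/map/reduce toolkit mentioned above), here is a short sketch of my own, not taken from the article's repository:

from functools import reduce

# Functions are first-class: they can be stored in variables and passed around.
square = lambda x: x * x

numbers = (1, 2, 3, 4, 5)            # an immutable tuple; we never modify it

# Build new collections instead of mutating the original data.
squares = list(map(square, numbers))                 # [1, 4, 9, 16, 25]
evens = list(filter(lambda x: x % 2 == 0, numbers))  # [2, 4]
total = reduce(lambda acc, x: acc + x, numbers, 0)   # 15

print(squares, evens, total)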
https://towardsdatascience.com/elements-of-functional-programming-in-python-1b295ea5bbe0
['Parul Pandey']
2020-01-21 06:25:42.977000+00:00
['Python', 'Lists', 'Functional Programming', 'Data Science', 'Programming']
Want To Build A Truly Amazing Team? Focus Less On Talent And More On Personality How To Build A Great Team
By Andrew Wolfe, CEO of Skiplist There was a time when I thought building a strong culture simply meant hiring the most talented programmers. And I’m not alone. In an increasingly competitive tech landscape, company leaders are going to great lengths to attract “top talent.” But I’ve since learned there’s more to consider. After a few years dealing with hotshot programmers who didn’t gel with the rest of the team, I now know that there are far more important traits to seek than “talent.” If you want to build a truly great team, focus on the individual personalities your employees introduce to your organization. The collection of those personalities, and how they individually complement or contradict one another, is the chemistry that makes or breaks your startup. It also has a lot to do with your company’s culture. Especially in tech, where disparate teams (R&D, user experience, visual design, sales & marketing, etc.) must work together seamlessly, strong chemistry and culture are a must. In business, personality traits are often called “soft skills”: work ethic, flexibility, etc. Here’s why you shouldn’t overlook the intangibles when building your team: Tech leaders put too high a premium on hard skills. Every tech company needs coders, programmers, etc. who are good at their jobs. That’s a base requirement. But in Silicon Valley, there’s too much focus on surface-level qualities: technical abilities listed on resumes. There are other things to consider, like interpersonal “soft” skills. The truth is, soft skills, such as a candidate’s ability to communicate well, are incredibly important, too. And often the toughest to identify when you’re interviewing candidate after candidate. We all like to think we’re great communicators, but in reality, a lot of us are pretty bad at it. A strong communicator is a tough-to-find asset to any company. After all, communication is crucial to building chemistry and culture. There are too many high-potential companies chock full of high-end talent that overlooked one crucial element: the ever-important human side of business. Talent is great, but teamwork is better. With teamwork, nearly anything is possible. Without teamwork, things become much more difficult. I’ve known plenty of hotshots who look great on paper but are incredibly problematic once thrown into the mix. As a result, the team (and company) suffer because a non-team player acts as a broken link in the chain, and is a detriment to your company’s ability to work together. It’s just like the Golden State Warriors. They’re stacked with talent, from Steph Curry to Kevin Durant to DeMarcus Cousins. But the real reason they’re so lethal is because they have great team chemistry and culture. Everyone is always on the same page. And no one is unwilling to pass the ball. In my search for team players, above all, I look for kindness and a great attitude. It’s true that how someone treats a waiter or cab driver speaks volumes about their character. Kind people positively influence their peers, and as a result, they’re typically good team members. Unkind people, on the other hand, typically like to work in solitude.
When you have a bunch of people trying to work on their own, avoiding teamwork at every opportunity, less work actually gets done. Disorganization ensues. And the more disorderly your teams, the more managers you need. People who work well together solve problems on their own. People who are passionate about problem-solving will always work harder. In the startup world, passion is key. But it's not important solely in the context of your own career. A growth mindset is important, but it should apply at an organizational, not purely individual, level. And the most valuable passionate people are passionate problem-solvers: those with a strong desire to help their company overcome challenges. If you can find team players who are driven by overcoming challenges, you'll be in great shape, because you'll be able to work toward a common goal everyone's invested in. I've found that passionate people are often universally passionate. For example, someone with many hobbies and passions outside of work, whether that be homebrewing beer or playing classical piano, is typically also passionate at work. So I always ask people about their life passions in interviews. Passionate people produce great results because they truly care about the work they do. And they're excited by great end products. They're inspired by awesome end results. The right attitude is what defines high-quality people. People with great attitudes win more often. That's a fact. You can teach a barber to program if they have the right attitude, but you can't teach an asshole to not be an asshole. And in the same vein, a world-class person can become a world-class anything. So it's actually pretty simple, company leaders: find amazing people and give them access to the right resources. Watch them, and your company, flourish. Well, maybe it's not that simple. To bring this full circle, you also need to make sure the amazing people you bring in all work well together. Because that will build chemistry, which will create strong culture, which will motivate and inspire your team to achieve and accomplish at the highest level possible. At the end of the day, you're not buying a tool when you hire a programmer or data analyst. There's a vital human element to every hire, no matter how technical the role. With that considered, it's time we start focusing less on talent and technical skills and more on people.
https://medium.com/the-mission/want-to-build-a-truly-amazing-team-1b96fc11f332
[]
2019-07-08 15:39:28.045000+00:00
['Work', 'Life', 'Life Lessons', 'Entrepreneurship']
Up for the Challenge
We chose quality education as this year’s brief topic because of Microsoft’s commitment to providing accessible STEM education to students around the world. We were interested in finding students with diverse backgrounds who wanted to come together to solve issues, break barriers, and provide more access. -Margaret Price Before casually trying to topple that small subject, the nine student finalists did a group icebreaker/team-building activity — involving rope and a blindfold. Put the blindfold on, search for the rope hidden somewhere in the open space — but don’t say a word. Got the rope? Great. Find a way to make sure everyone else has it too. Once you’ve completed that, feel free to chat away as you attempt to make a perfect square with your rope. Easy, right…? Students tested out their communication skills with a blind-folded team-building exercise. Inclusive Design Primer Before receiving their design topic, the students completed a crash-course in Inclusive Design, a workshop Margaret has facilitated in companies and universities around the world. With material scaleable from an hour to nine weeks, the program helps provide a glimpse into what “inclusive design” means. Margaret Price (standing) leading an Inclusive design 101 crash-course with our student design finalists. Inclusive design allows us to accomplish designing for everyone, while also designing for individuals. Take closed captioning, for example — it was originally developed for those who were deaf or hard of hearing, but now, just about everyone benefits from it. Ever been that person on a bus who didn’t have their headphones, but reallllyyy wanted to know how Ellen’s latest interview with Michelle Obama went? Cue muted volume with closed captions. Or, wanted to teach your child to read, with a video that has captions? Yup, same thing. A design solution for a smaller group of people ends up benefitting millions. “As designers — emerging or professional — it’s our collective responsibility to understand the impact of the designs we make, and think mindfully about how to embrace all forms of human diversity in the design process,” Margaret added. Still with me? Great — now back to the students… After they heard from Margaret, they met with several design experts. Jean-Baptisse Joatton, a professor of interactive design at Lycee Leonard de Vinci, and one of the co-chairs of the Education Summit at IxDA this year, provided insight into the concept of teaching design, and what limitations you may encounter in a classroom. Andres Lombana, a post-doctorate fellow at Harvard, video chatted with students about the research he’s done on interactive design. Adam, Jenn, Kevin, Melodie and Amy chat with Harvard’s Andres Lombana over video. “At my school, I met a lot of people who are studying interaction design but only designing for themselves,” Milda Norkute said. “Inclusive design needs to be a bigger part of the curriculum, because otherwise we’re all only creating things that we like, and that leaves people out. You should bring it to the root of your designs.” Milda Norkute, KTH Royal Institute of Technology They received their briefs and were broken into teams of three. They’ll have three more days to devise a design solution addressing quality education, and to create a 3-minute pitch video for the entire conference to watch on Thursday. (I’ll be reporting on their progress here on Medium. No pressure, right?) 
I asked Jenn Lee about her first day of the challenge: "I'm so excited to see what we can come up with as a team in such a short amount of time, the people we get to meet, and the parties. I'm also excited about the brief — rhythm is unique, but there's a lot of room for this to be applicable to a lot of important problems." Pathikrit "Po" Bhattacharyya added, "I feel honored and flattered to be included in this. Coming in, I didn't know what to expect, except that it was focused on education, which is something I'm extremely passionate about. The challenge seems really well set up, by starting with the workshop on accessibility, doing research and now narrowing it down with the brief. It sets some limits to inspire us, but it isn't limiting." Milda, Jenn, and Po are all on a team together — which they named "Team Tango" — because it takes two to tango, "but three to find a rhythm-based solution." (from left) Jenn, Milda and Po (Team Tango) ideate on rhythm as an educational tool. The evening reception for the conference opening was filled with tasty treats, local drinks, networking, and exploration of the Musee de Confluences. Students will hit the ground running Tuesday morning on designing their solutions for quality education.
https://medium.com/microsoft-design/up-for-the-challenge-36acbad62574
['Ashley Walls']
2019-08-27 14:59:07.599000+00:00
['Design', 'Inclusive Design', 'UX Design', 'Ixd', 'Microsoft']
Tokenomic Explained (Distributorship)
With the Morpheus Labs BPaaS commercial version slated to be released in a few months, our business development activities will start to ramp up in accordance with a go-to-market plan. This article is intended to provide more details on our business model. 1. Who are distributors? Our go-to-market strategy is primarily based on channel sales partners who are currently working with a sizeable client base (our potential clients) and have outstanding past experience in the software/SaaS business. We have identified these partners as a catalyst to scale up the Morpheus Labs ecosystem. 2. What do distributors do? There are three categories of distributors, sketched in the example below: Trial Distributor (trial period only, least amount of security deposit required, no minimum purchase of licenses required); Country Distributor (low amount of security deposit, bulk purchase of licenses at a discounted rate, possibility of licenses being resold to trial-level distributors); Regional Distributor (high amount of security deposit needed, bulk purchase of licenses at a higher discounted rate, licenses can be resold to country- and trial-level distributors). *During the initial stage, a few channel partners will be selected for a fully discounted (no-fee) distributorship, subject to screening by the team. 3. How to become a distributor? With most distributorships, a security deposit is required for eligibility to sell licenses. This security deposit is a certain amount of MITx tokens that the distributor needs to purchase and hold for the entire distributorship period (staking). Different tiers require different amounts of MITx tokens to be staked. Once the security deposit is received, a partner will be officially accepted as a distributor of Morpheus Labs BPaaS licenses. 4. Distributors will receive training, comprehensive documentation and sales materials such as: User Guides, Platform Brochure, Datasheets, Presentations, Platform Intro (sales pitch presentation, customisable by distributors), App Library Presentation (an example of a specific presentation), Subscription Data Presentation (e.g. enterprise subscription vs advanced), Competition Analysis, and Distributor Price Book. Further support for distributors includes: joint training with blockchain partners, a dedicated account manager for service and joint sales, referral sales channelled back to distributors, joint marketing initiatives, sponsorship of events/exhibition planning (once a year), shared marketing event booths, and marketing material. 5. Proof of Alliance (PoA) Extension The benefits for our PoA members will be enhanced here. PoA members will help to facilitate possible transactions and could become a critical part of the ecosystem. The final mechanics of how this will work are currently being worked out. 👏 If you enjoyed reading this piece, leave us a clap (you can also leave multiple claps, of course) or comment below. We are curious to hear your thoughts!
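To make the tier structure above easier to see at a glance, here is a minimal sketch of the three distributorship levels as data. It is purely illustrative: the MITx stake amounts, discount rates, and eligibility check below are placeholders invented for the example, not figures published by Morpheus Labs.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class DistributorTier:
    name: str
    required_stake_mitx: int        # hypothetical security deposit, staked for the whole period
    license_discount: float         # hypothetical bulk-purchase discount
    can_resell_to: Tuple[str, ...]  # lower tiers this tier may resell licenses to

# Placeholder numbers for illustration only.
TIERS = {
    "trial":    DistributorTier("Trial Distributor",     10_000, 0.00, ()),
    "country":  DistributorTier("Country Distributor",   50_000, 0.10, ("trial",)),
    "regional": DistributorTier("Regional Distributor", 250_000, 0.20, ("country", "trial")),
}

def is_eligible(tier_key: str, staked_mitx: int) -> bool:
    """A partner becomes a distributor once the full deposit for that tier is staked."""
    return staked_mitx >= TIERS[tier_key].required_stake_mitx

print(is_eligible("country", 60_000))   # True  (60k staked >= placeholder 50k requirement)
print(is_eligible("regional", 60_000))  # False (below the placeholder regional deposit)
```

The key relationships to take away are the ones stated in the article: higher tiers stake more, buy at deeper discounts, and may resell to the tiers below them.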
https://medium.com/morpheus-labs/tokenomic-explained-distributorship-bfe6ad276d1
['Morpheus Labs Team']
2018-08-22 17:52:41.758000+00:00
['Bpaas', 'Startup', 'Blockchain', 'Morpheus Labs', 'Bitcoin']
How To Get The Most Out of Audiobooks
Tips for Retaining what you’ve Listened to When it comes to audiobooks, retention is queen. Enjoyment is king. If you aren’t enjoying what you’re listening to, then maybe consider switching books. Remember, don’t just ditch the whole practice of audiobooks based on one bad menu item. But right behind enjoyment is retention. If you aren’t remembering or retaining what you are listening to, why are you listening? With reading and audiobooks becoming more and more popular in leadership circles these days, the temptation is there to jump on the bandwagon just because everyone is doing it and it sounds good to be able to say you’re a “reader.” That’s called hollow reading. And it probably won’t help you in the long run. You want to have your imagination sparked. You want to remember what it feels like to laugh out loud when great writing captures your heart and tickles your mind. You want to feel the deep emotions that comes with powerful storytelling and get to know the characters, places, and circumstances that are awaiting you on every page or every minute of audio storytelling. Sink into the books and the audio and be someone who not only is touched by the content but someone who carries that content with them as you set off to touch the lives of others around you. If you are on board for retaining what you listen to, here are some tips for how you can do that. 1. Pause Button This is one of the most beautiful buttons on your audiobook screen. It is the equivalent of setting down your book on your kitchen table with the spine open as you take a short break before you come back to it again. When I listen to audiobooks, one of the biggest temptations I face is to keep listening. Which ultimately you need to do to finish the book. But I’ve found that after a particularly impactful minute or chapter, I’ve really enjoyed hitting the pause button and creating space for my brain to catch up to what I just heard. I like to roll the words over in my head and make sure that I am processing what the author is saying, not just allowing words to come in one ear and out the other. So if you are new to audiobooks, don’t be afraid of the pause button. Pause as much as you like. It’s your book. It’s your adventure. 2. Three Sentence Summary For every audiobook that I listen to, as soon as I’m done, I try to write out what I think would be a three-sentence summary of the book. I know what you’re thinking… only three sentences!! But here’s why this practice can be helpful to me and to you: short sentences require you to be precise and brief short sentences are easier to remember short sentences are easier to share with other people when they ask short sentences are less time for you in the long run short sentences are do-able by everyone. That last point is important. Everyone can write down three sentences about something. This post is full of sentences. This tip isn’t asking you to write a 12 page single-spaced summary of the books you’re reading. That would be daunting. Writing three sentences will help transform your understanding of the book and how you can keep that information close at hand for months and years to come. 3. Summary Sheets This is the next step up from point number two. Once you get in the rhythm of writing out your three sentences for each audiobook, you can start to think about building out a short, one-page summary sheet for each book you listen to. For me, my preferred tool to do this is Evernote. I have a folder titled “audiobooks” (creative, right?) 
where I make a new note for most books that I start. For fiction books, I don’t feel like I need a summary page since I’m not taking away a lot of practical and applicable contextual points. For most other categories, I try to make this summary sheet where I can write down some of the notes that stand out to me while I’m listening to that particular audiobook. If this sounds daunting and you don’t know what notes you would put down on your summary sheet, a good place to start that I’ve found is with the chapter titles and any list headers that the author uses in his book. It can be as simple as a table of contents. Basically, your summary sheet should be unique to you and should hopefully be a short cheat-sheet of shorts that allows you to remember and recall more of what you are listening to. When I am listening to a book, I often keep a note on my phone open and pause the book to write down items I want on my summary sheet. Then I transfer over to Evernote. Eventually, when I finish my audiobook, I go back and clean up my summary sheet! And viola! Building out a summary sheet is a massive help in your journey towards retaining what you listen to. 4. Bookmark your Reading This is a tip that is dependent on what platform you are using to listen to your audiobooks. Maybe you don’t have the ability to jot down notes as you listen, especially if you are in the car or doing something like mowing the grass. Most audiobook platforms will allow you to put in a bookmark if you are listening which serves as an indicator of a section that you liked and that you want to return back to. If I have time, as I’m listening, I will drop occasional bookmarks and will then go back towards the end of the audiobook and re-listen to those sections that were particularly impactful. 5. Follow Along with a Physical Copy I’ve only done this a handful of times, but it’s been helpful when I have had the space and time to employ this step. Retention can be increased by associating what you are seeking to remember with multiple senses. If you can see and smell something, odds are that you have a better chance of remembering that item than if you could only see it. Same goes for reading and listening. If you are still struggling to retain what you are reading, maybe consider taking an audiobook and getting the physical copy as well. That way, when you listen to your book, you can also read along and engage multiple senses in the process.
https://medium.com/the-post-grad-survival-guide/how-to-get-the-most-out-of-audiobooks-7cbc2e738012
['Jake Daghe']
2019-10-03 12:01:01.201000+00:00
['Coaching Corner', 'Coaching', 'Leadership Development', 'Growth', 'Books']
Creative Focus: Reach the right people with the right message in any industry
By: The Civis Team Today, we're excited to announce General Availability of Civis Creative Focus, our online message testing tool. Now, users in any industry can build better messaging, create more focused advertisements, and generate positive brand perception to improve advertising ROI. The Problem with Messaging Spend Today Whether you're thinking of creative content or wording, motivating your current and prospective customers to act on a message they encounter from you isn't as simple as it seems. It can be hard to deliver the right message to the right audience on a consistent basis, and marketers need to know with confidence that budget and resources are well spent. Currently, over 40 percent of ads have no impact on a person's behavior, and one in ten actually causes a negative reaction — whether or not your brand is helped by an advertisement is essentially a coin flip. Combine that with the time-consuming process of message testing, and with platforms that currently encourage buying more ads rather than the best ads, and marketers are facing a handful of challenges. The Evolution of Creative Focus: Better Messaging for Any Industry Today's announcement reflects an expansion of Creative Focus's capabilities to meet the needs of marketers, communications, and insights professionals in any industry. We introduced Creative Focus earlier this year to meet the needs of progressive campaigns that wanted to conduct effective and actionable message testing. We quickly discovered that the insights derived from Creative Focus could serve not only political and non-profit organizations — but any organization looking to make the most of their marketing budget. Create a test Why Creative Focus? Our proven message testing solution offers you the chance to test your ads before spending the money to put them in market, and to ensure they will be the most effective. Choose an audience Creative Focus tests message effectiveness among key audiences and subgroups, enabling you to make data-driven targeting decisions to further engage the right people. You can optimize creatives by audience to tailor messages effectively and decide which to promote in new campaigns. The tests are quick, affordable, and automated, so you can spend more time developing strategies for future campaigns. You'll receive actionable results based on KPIs (e.g. brand favorability, brand awareness, and intent to purchase) that will move you closer to your business goals instead of reporting on vanity metrics like click-through rates or shares. Brands have been using Creative Focus to identify the most effective ads for their key audience, so they can reach the right people with the right message at the right time. Upload your images/videos/text How It Works Creative Focus is built on the gold standard of scientific research — randomized controlled trials. This means that it randomly assigns respondents to either a treatment group or a control group. Those in the treatment group will see one of your creatives, and those in the control group will see no ad at all. By measuring the opinions of each group and comparing them against one another, Creative Focus minimizes noise and directly measures the impact of each ad on the outcomes that matter most, like purchase intent and brand favorability.
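As a rough sketch of the randomized-trial logic described above (this is not Civis code; the respondent pool, outcome scale, and effect size are invented for illustration), random assignment plus a simple difference in group means is all that is needed to estimate an ad's lift on a KPI such as purchase intent:

```python
import random

def estimate_ad_lift(n_respondents: int, seed: int = 7) -> float:
    """Toy randomized controlled trial.

    Each respondent is randomly assigned to treatment (sees the creative) or
    control (sees no ad). The simulated KPI is purchase intent on a 0-10 scale;
    here the ad is assumed to add roughly 0.4 points on average (a made-up effect).
    """
    rng = random.Random(seed)
    treatment, control = [], []
    for _ in range(n_respondents):
        if rng.random() < 0.5:                     # coin-flip assignment
            treatment.append(rng.gauss(5.4, 1.0))  # saw the creative
        else:
            control.append(rng.gauss(5.0, 1.0))    # saw no ad
    # Difference in means = estimated causal effect of the creative.
    return sum(treatment) / len(treatment) - sum(control) / len(control)

print(f"Estimated lift in purchase intent: {estimate_ad_lift(10_000):.2f}")
```

Comparing that lift (and its uncertainty) across creatives and audience subgroups is, in essence, the comparison a tool like this automates.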
View your results Within a few days of the original request, see how your messages performed overall, measure any potential backlash, and understand which subgroups were most receptive to each creative. Creative Focus can also help you optimize targeting across digital channels. Once the test is complete, leverage the results and use Civis technology to activate a list of individuals who are most likely to be persuaded by each creative, and target them accordingly. Based on your results, you're able to use Civis tools to generate a custom model to predict persuadability, score the consumer file to find similar individuals, and then generate a list of your best targets. Then, pass the list to data onboarding partners (e.g. LiveRamp) for activation. Most importantly, this creative-first targeting means that you are delivering your best messages to the most receptive audiences. Getting Started Creative Focus is now available for demos and testing — if you're interested in learning more, reach out today!
https://medium.com/civis-analytics/new-updates-to-civis-creative-focus-reach-the-right-people-with-the-right-message-in-any-industry-c58ef6843b16
['Civis Analytics']
2019-01-18 19:33:53.675000+00:00
['Message Testing', 'Software', 'Analytics', 'Advertising', 'Marketing']
Encouraging Diversity in Stories Puts Any Dream Within Reach
Stories are powerful. They teach lessons, provide distractions, and inspire audiences to expand the scope of their aspirations. The greatest stories relate to us on a personal level. We identify with the hopes, dreams, struggles, and triumphs of different characters, and that identification often begins with a character’s physical appearance. When we see ourselves as the hero in stories, we see ourselves differently in the world. My son believes wholeheartedly that he is a Power Ranger. My bruises are a testament to his crude, yet burgeoning, martial arts skills. What can I say? He is four-years-old and deeply committed to his heroic character as his older brother was before him. I can relate because I grew up seeing myself in most movies, television, and literature. I am a 6’ 1”, blue-eyed, (formerly) blond-haired, white man, so this is no stretch. As Americans, we live in a consumer-driven economy and culture. We create and export a lot of art in the form of music, movies, pop-culture novelties, and other shiny objects. We are not alone in this pursuit, but any visit to a Disney property reaffirms that we created a rock-solid blueprint for marketing, merchandising, and consumption. The problem is that those in the story business have been packaging, merchandising, distributing, and selling “my” story for far too long. You know that disclaimer in a movie that states, “the events, characters, and firms depicted in this photography are fictitious. Any similarity to actual persons, living or dead, or to actual firms, is purely coincidental?” Consider the so-called “Greatest Story Ever Told.” Is that why almost every movie about Jesus casts a white man in the role? The producers are worried about a celestial lawsuit? Those executives are not alone. Did you know that over 500 years ago Pope Alexander VI likely changed my perception of the appearance of Jesus when he commissioned paintings based on his favorite nephew as part of a propaganda campaign? Talk about false advertising! Here’s what I know, a Middle Eastern, Jewish carpenter most certainly had darker skin than I do. We need more diverse stories and content more than ever. When you consider that Hollywood is cranking out remakes of old movies for new audiences, don’t you think we’re ready for a much broader, color palate of stories and heroes? Hollywood is not alone. We need these stories across every industry. More Diversity in Content on the Way It’s no secret that pop culture, especially Hollywood and Madison Avenue, has a woeful track record of elevating multicultural stories. Even though Hollywood leans towards liberal politics (based on all the award acceptance speeches), the business people make decisions about what they think will sell and deliver the greatest return to their investors. You might say, I’ve watched a lot of movies with “Will Smith, Denzel Washington, Eddie Murphy, Jamie Foxx, Morgan Freeman, Samuel L. Jackson, Halle Berry, Kerry Washington, Idris Elba, and others. What’s the problem?” There has been progress in many areas, but the power structure that greenlights and funds stories has lacked diversity. Tyler Perry built his own media empire by telling stories about Black characters, and there are signs that money is flowing into the production of more diverse movies, television, and streaming content. 
Just a few examples outside of traditional media and studio companies include: A $100 million round of investment in SpringHill Co., a media and production company founded by Lebron James and Maverick Carter; The early work by the Obama’s Higher Ground Productions; and Everything Oprah Winfrey and Harpo Productions touch, including her recent “OWN Spotlight: Where Do We Go From Here?” Jordan Peele continues to deliver amazing work under his Monkeypaw Productions company. It was major news less than five years ago when Mattel gave its line of Barbie dolls a major makeover to finally recognize a more diverse population. Was this a move geared towards clearing a guilty conscience or because of declining revenue? I don’t know. I was a G.I. Joe kid growing up, but I saw stories over the years from Black women who expressed sadness and shame that their toys didn’t look like them. It’s not just skin color, but clothing sizes that have started to break through the polished veneer of the traditional advertising game. As I walked through a shopping mall in 2019, I was happy to see promotional photos of young models displayed prominently that did not adhere to the idea that only a size zero is commercially acceptable. Encourage Diversity in STEM by Telling Your Story I have worked in marketing, public relations, and communications in the technology industry for over 20 years. That particular job function attracts diversity in terms of female representation, but not as much with Black, Indigenous, People of Color (BIPOC). The executive headshots of many tech companies, and in particular the payments and financial technology sectors where I worked for over a decade, looked like a private school yearbook — white kids in their blue, buttoned-down Oxford shirts. Outside of marketing, there has been more geographical and ethnic diversity as the data shows, and I believe I’ve benefited from that diversity of experiences and perspectives. I vividly recall a former colleague sharing his excitement over earning his U.S. citizenship. That simple parking-lot conversation had a powerful effect on me. I remember sharing drinks and stories with a Pakistani, Muslim colleague. It was interesting to hear about his family life and community in his adopted hometown of Dallas, Texas. I worked with a female, engineering executive from India who persevered through obstacles to achieve great success in the technology industry. These people, their stories, and their contributions make the industry a better place to earn a living and build a career. More needs to happen to bring more diversity into the tech world. I work with Intel Corporation as a marketing consultant, and I have been incredibly impressed with their dedication and investment in creating a culture that champions diversity on every level. I am encouraged by some signals in the tech space that things will change, but it has never been more important for people in the industry to tell their personal stories. As someone who works with technology executives from different backgrounds, I believe stories of diversity matter. I believe that telling your unvarnished, personal story complete will be an inspiration to someone who is looking for a reason to try instead of an excuse to quit. The story is even more powerful when it includes a reasonable disclosure of your failures and triumphs, but there is still such a thing as T.M.I. 
The website "Because of Them We Can" recently published a blog post recognizing Jasmine Bowers as the first Black person to receive a doctorate in Computer Science from the University of Florida. Yes, this is wonderful news and should be held up as an example and story for other Black women. The fact that this is news shows how far we have to travel as an industry. This accomplishment should be common enough to warrant a backyard cookout or some other party with family and friends, not news in the blogosphere. I share these perspectives as an encouragement to anyone who has a story to tell. We are still suckers for the classic rags-to-riches, David-vs-Goliath, and redemption stories, and I look forward to hearing, reading, or watching yours. If I can be of any help to you in this process, please contact me on Twitter @davidfontaine.
https://medium.com/swlh/encouraging-diversity-in-stories-puts-any-dream-within-reach-1bcf02968871
['David Fontaine']
2020-09-08 12:56:53.444000+00:00
['Storytelling', 'Personal Development', 'Personal Branding', 'Diversity In Tech']
What Fills My Cup Up
A reflection from a Divine Feminine Photo by Zac Harris on Unsplash I was recently told to take a break and do what fills my cup up. I said "YES" without hesitation, only to walk out of the conversation feeling blank. I was taken aback because I thought I was filling my cup up! Isn't that what I've been doing? Doing what I love, loving what I do? I'm super good at this self-love sh*t, so what's missing? So, I went to my favorite tool of all time: journaling! I prompted myself with question after question, digging deeper until it finally led to the clarity that brought me to this post. What stood out for me were these three words: "shake it up." Yes, I love what I do. I love my business. I love my clients. I love Muay Thai. I love meditation. I love journaling. I love the beach, and the list goes on. Great news? I've been doing all the things I love consistently, and kudos, girl! While ritual is essential for keeping that fundamental strength, as Divine Feminine Goddesses we appreciate playfulness, spontaneity, and creativity. The same way we put in the effort to keep the sparks alive in our romantic relationships, we are to do the same with our relationship with our soul. After all, I'm my own most intimate lover, am I not? Striking the balance of ritual and spontaneity is the simple yet often neglected answer to the question of "How do I fill my cup up?" The real question is, "Are we sincerely willing to fill our cup up?"
https://medium.com/spiritual-secrets/what-fills-my-cup-up-2bdc12349adb
['Elies Hadi']
2020-10-16 19:31:26.940000+00:00
['Self-awareness', 'Spiritual Secrets', 'Advice', 'Life Lessons', 'Lessons Learned']
Way More Things Are Conscious Than We Realize. Here’s Why.
Prequel: scaling up our empathy When you put a bunch of humans together, they eventually find ways to assemble themselves into interesting structures. They form governments, start companies, and join organizations like the UK Roundabout Appreciation Society, which is a real thing that actually exists. We can think of these human structures as the organs and appendages of a vast “human super-organism”. Just as our hands and our kidneys allow us to function and achieve our goals, each government, company and club plays a specific role in advancing the interests of the human super-organism. “Wait…” you ask. “ the interests of the super-human organism? Jeremie, are you actually saying this thing is conscious? Come on, now.” I used to think this way myself: how could the human super-organism be conscious, if every component organism that it’s made from — every human being on the planet — is making their own decisions? The super-organism just reflects the choices we make as individuals, so how could it possibly have a mind of its own, and a consciousness to inhabit it? It’s easy to imagine the super-organism as a kind of mindless drone, unable to make its own decisions or think its own thoughts. But does this really make sense? Consider this: the behavior of a human being — yours and mine — is determined entirely by the behavior of each cell that makes up our bodies. From this perspective, we’re slaves to whatever processes are unfolding in our cells at any moment in time, unable to act in any way that doesn’t correspond to the precise “wishes” and “desires” of all of our cellular building blocks. So our actions are every bit as constrained by factors outside our conscious control as those of the human super-organism. Organisms that exist at different scales and levels of organization use different communication strategies. Our cells communicate via complex biochemical mechanisms that involve passing molecules and ions back and forth to one another, sending along messages that we lack the means to interpret or the context to understand. The human super-organism is presumably similar: it relates to the world in a way that is as incomprehensible to us as the electrochemical signalling strategies used by our cells. And because we can’t communicate with cells or super-organisms, we don’t perceive any hints that they have a genuine conscious experience of the world. Dogs salivate, so we assume they’re hungry. Flies avoid our swats, so we assume they don’t want to be killed. But cells… what do cells do exactly? They don’t communicate things like hunger or fear of being swatted in ways that we can understand, so we assume that they’re simply not conscious. From that perspective, the “consciousness continuum” that many people imagine exists from atoms, to cells, to insects, to humans isn’t a continuum of consciousness at all, but rather a continuum of our ability to perceive consciousness. So it’s our incompatible communication styles that make it hard to notice the consciousness of organisms that exist at vastly different spatial scales from our own. But the same is true for organisms that communicate at different temporal scales: plants don’t seem particularly conscious or relatable until you watch them interact with their environment on longer timescales than those we consider relevant for most human-to-human interactions. None of this is definitive evidence that plants or super-organisms are actually conscious, of course. But it’s a hint — a teaser for what’s to come. 
In what follows, I’m going to try to convince you that we can’t see or even conceive of most of the forms of consciousness that exists in the universe, and that there may be rich forms of conscious experience playing out right under our noses. I’m not going to tell you it’s because of quantum mechanics, or because “consciousness is the foundation of the universe”, or whatever new-agey woo-woo you might be worried about. All I’m going to assume is that consciousness arises from the physical state of the brain. By the end, you may or may not agree with me on any of this (I’m not sure I do). But I hope we’ll both have learned something fairly surprising about what consciousness really is. You are your program, not your hardware Your body takes in new atoms when you inhale or consume food, and releases old ones when you shed dead cells, exhale, or go to the bathroom. On average, the atoms in your body are exchanged for atoms from the outside world about once a year, so you’re quite literally made from completely different stuff today than you were a few months ago. But you still have just about all the same quirks, hobbies, hopes and dreams. So whatever those atoms are, they aren’t “you”. So then, what are you? If you ask most machine learning researchers — and increasingly, even most people — they’ll tell you that the human brain is essentially a computer, and that everything we do and experience is the output of some complicated program that we don’t fully understand, that’s being run on our brain’s neural hardware. If that’s true, then “you” are a computer program, and so am I. The things that are true about all computer programs must also be true about us. Here’s one thing that’s true about all computer programs: it doesn’t matter what hardware you run them on — they’ll always produce the same outputs. A tic-tac-toe algorithm works just the same whether it’s being instantiated on a PC, a Mac, in your brain, or on some experimental optical computer. Likewise, the “you” program that’s running in your brain right now would work just the same way — and would presumably give rise to the same consciousness — if it was running on silicon chips rather than cells. If you believe that you’re fundamentally nothing more than a computer program running on biological hardware, then we should be able to swap out your hardware without taking away what makes you “you”. This might not seem like a ground-breaking observation. But as we’re about to see, it’s actually the trapdoor that gets us to wonderland. Oh look, a rabbit hole! Rabbit hole, red pill, not in Kansas anymore If we could understand exactly what role each and every cell in your brain or body plays in storing the “you” program, we could create a copy of you on a computer — any computer — as long as it had enough memory. All we’d need to do is encode the structure and connections between every cell in your body — or if needed, every atom in every cell in your body — into bits and bytes, and voilà: a new you, every bit as valid as the original. But of course storing something as complex as the “you” program will take a lot of memory. We might need to distribute our storage across many different computers (using perhaps a service like AWS or Google Cloud). Remember, this is a move we’re allowed to make: “you” aren’t your hardware! 
This new, distributed “you” may be stored in bits and pieces on many different computers in many different parts of the planet, but as long as those pieces are split up in ways that are compatible with, and replicate, the “you” program that was originally your physical brain, “distributed you” should still be every bit as valid — and conscious — as “human you”. At this point, distributed you exist as nothing more than bits on a computer. And there’s a bit for every degree of freedom that’s required to fully capture the nuances of what it means to be “you” — every memory and association. I have no idea what level of detail would be required to do this, whether cell-level data would be sufficient, or if atomic or even subatomic effects would need to be factored in. But whatever that level is, we’ll assume our electronic copy of you captures it. Now, suppose AWS, or whatever service we’ve been using to store our copy of you, reduces their servers’ memory capacity dramatically. As a result, “you” are now being stored not on 4 servers, but on 40, or 400 servers. Actually, scratch that — let’s take it even further. Suppose that you’re being stored on as many different servers as there are bits required to represent the “you” copy, so that each mini-server is literally storing just one bit of information about you: You’re now spread all over the world in a nearly continuous way — you exist as a kind of dust, disembodied and distributed atom by atom, bit by bit, all over the planet. What’s more, from one moment to the next, AWS might decide to move one of these bits over from one mini-server to another. Depending on these mini-servers’ locations, the little piece of “dust you” that they encode might have moved from one continent to another in the process, all without compromising the integrity of the whole! As long as AWS is keeping track of which server stores which relevant bit — as long as they remember the “right” way to interpret the contents of all their mini-servers — the “you” copy is intact. But notice that caveat: the “you” copy persists only as long as AWS keeps track of how to interpret the bits in their mini-servers. In other words, at this stage “dust you” is quite literally composed of two things: first, the bits stored in those AWS mini-servers, and second, the interpretation of what those bits actually mean. Apply the wrong interpretation (say by confusing mini-server 170 with mini-server 443) and you’ll see nothing but incoherent garbage — it’s only when you look at the contents of those mini-servers in the right way that a mind pops into view. Now, suppose that a developer’s finger slips — they accidentally delete one of these trillions of mini-servers! And wouldn’t you know it, that server happened to store a bit that represented one of the most important neural connections in your brain. Your entire personality, and perhaps some of your most cherished memories, hinged on that one neural connection! What happens to you? Thinking outside the server Dust “you” only ever existed in our imagination to begin with. We imagined that we were storing a genuine copy of “you” because we interpreted certain atoms as bits, and because we interpreted those bits as being part of a “you” copy. And now, thanks to the negligence of an AWS employee, we’ve lost one of these bits — a bit that we had previously interpreted as representing a very important part of “you” in our stored copy. But can’t we just interpret another atom, somewhere else in the universe, as representing the missing bit? 
Let’s make this concrete. Imagine that we’ve been representing our bits using the charges of atoms in our mini-servers. We’ll say that a charged atom represents a “1” bit, and an uncharged atom represents a “0” bit. Suppose that the bit that was lost when its mini-server was deleted had a value of 1. Let’s also imagine that there’s an atom floating in the air next to that mini-server, that happens to be charged. We *could* just interpret that atom as representing the missing bit, couldn’t we? Would that make you whole again? Recall that the only reason we believed that we were storing a copy of “you” in the first place was that we chose to interpret each of our mini-servers as representing a specific part of you. Now that one of those parts has been deleted, what’s wrong with choosing another atom, somewhere else, and simply interpreting it as a new mini-server? At the very least, it’s unclear to me what the problem with this move would be. Sure, it means using a new interpretation of what parts of the universe count as “bits that represent you”, but why would one interpretation of the arrangement of matter in the universe be any more or less valid than another? How could the story we tell ourselves about the roles that different particles play in defining “you” affect your conscious experience? Dust theory There’s no reason to limit ourselves to swapping out one mini-server bit for one nearby charged atom. In fact, we could interpret literally any atom or particle in the universe as representing any bit required to complete the “you” copy. So why don’t we do that? Why don’t we just interpret a copy of you into existence right now, by assuming that the charge of some random atom in the Andromeda galaxy represents the bit value that we would have stored in mini-server 1, that the charge of another random atom somewhere else in the universe represents the bit value of mini-server 2, and so on? Why don’t we interpret the particle dust the universe is made from, to create a copy of you in our imaginations? Wouldn’t that copy be every bit as “real” as any other copy we might have stored on a cluster of Amazon servers, or in a single computer, or in your biological brain itself? This is “dust theory”: things exist simply because they can be interpreted to exist. Our experience of the world is itself nothing more or less than one interpretation of the arrangement of particles and fields all around us. Dust theory suggests that any interpretation that can be applied to the components that make up the universe — every story that we could imagine telling — is legitimately real. And just as we can interpret “you” into existence from the positions and movements of particles all across the universe, we can equally interpret other consciousnesses, and other conscious beings into existence by the same mechanism. When the ancient Greeks gazed up at the stars and saw bulls and hunters, they were doing the very same thing that AWS does every time it interprets bits in its servers into intelligible information for human consumption. When Netflix chooses to represent a datastream as pixels on your screen and sound vibrations in the air, they’re doing the same thing as I am when my brain converts text on a novel’s page into a simulated world with characters and scenery, action and emotion. According to dust theory, reality itself is in the eye of the beholder, and a pattern is real even if it emerges from pure noise. 
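A tiny, purely illustrative sketch of the point being made here (my own toy example, not part of the original essay): the same set of raw "atom charges" reads as a meaningful byte under one interpretation and as noise under another. All of the information that distinguishes the two readings lives in the interpretation, not in the atoms.

```python
# Toy model of "dust plus interpretation". The raw substrate is just a list of
# hypothetical atom charges (1 = charged, 0 = uncharged).
atom_charges = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def decode(charges, interpretation):
    """An 'interpretation' is an ordered choice of which atoms count as which bits.
    Read the chosen atoms off as a byte and render it as a character."""
    bits = "".join(str(charges[i]) for i in interpretation)
    return chr(int(bits, 2))

# Under one way of looking at the dust, it spells out the letter 'H'...
print(decode(atom_charges, [1, 0, 4, 5, 2, 7, 10, 12]))  # -> 'H' (binary 01001000)

# ...while the "obvious" reading (the first eight atoms in order) yields something else entirely.
print(decode(atom_charges, [0, 1, 2, 3, 4, 5, 6, 7]))    # -> an arbitrary, meaningless byte
```

Nothing about the atoms changed between the two calls; only the story we told about them did.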
A sufficiently large group of particles can be interpreted as representing just about anything — including the minds of creatures that don’t even exist in the version of the universe we see every day. But what we see around us is just that: nothing more than one version, one interpretation, of the universe. Other versions surely exist, but we’re unable to notice them because we can’t relate to the way they encode information, any more than we can relate to cellular communication or the communication patterns of the human super-organism we discussed earlier. But as our technology improves, more of these interpretations may come into view. We may eventually recognize life encoded in the atomic states of dust particles spread across planets, galaxies, or the vast expanse of the universe itself. Dust dynamics So far we’ve been thinking about a static copy of “you”, that stores the states of every cognitively relevant neuron and cell in your body. But consciousness isn’t a static experience — it’s intrinsically dynamic. So while we’ve seen that we can interpret a static brain into existence, what about the dynamic experiences of thought and perception? Consider this. While I may be able to interpret the charge of some atom in some galaxy, and the spin direction of another atom in a different part of the universe, as bits that encode part of a brain, if I wait just a fraction of a second, those “bits” will change as their atoms bump into things and acquire a new charge, spin direction, or other property. The more bits I need to represent your brain in my interpretation, the faster I can expect my representation to degrade. Meaningful, self-reflective consciousness probably requires a fair bit of complexity to emerge, so an interpretation that produces a long-lived copy of any conscious brain is going to be very rare. So most ways of interpreting your brain into existence from the states of atoms in distant galaxies will only be a faithful representation of your brain for a very short period of time. Very quickly, the correlations between the atoms we depend on to encode your brain in our interpretation will fall apart. Presumably then, the experience of that brain would be of existing for a brief moment — like a flash in the pan of conscious experience — before dissolving into garbled incoherence and noise. At best, we might find an interpretation of the behavior of particles in the universe that happens to reflect the time evolution of your brain as it really would unfold for perhaps several milliseconds. Out of pure chance, the atoms might jiggle in just the right way to cause our interpretation to correctly simulate a few milliseconds of conscious experience — but even then, rapid decay would follow. But recall that AWS was able to relocate bits of the “you” copy to different mini-servers at will, without losing the integrity of the copy. As long as they kept track of which mini-server they should interpret as each part of “you”, their ability to recognize the “you” copy hidden in the dust persisted. So why can’t we do the same? We can interpret your brain into existence by looking at the spins, charges and other properties of appropriately selected particles throughout the universe in just the right way. If, a second later, that interpretation no longer matches a reasonable brain state, why can’t we just change our interpretation to one that does? Isn’t that exactly what AWS does, when it assigns a “you” bit to a new mini-server? Here’s where we land the plane. 
Our perception of the world arises from two things: first, from the states of the particles and fields all around us; but second and equally crucially, from our interpretation of what those particles and fields represent. And because the universe is made of an immense number of particles, spatial locations, times and fields, there’s enough raw material — enough dust — to interpret just about anything you’d care to imagine into existence. So, why aren’t cells conscious? Presumably, because we simply aren’t interpreting them in a way that reveals them to be conscious. This isn’t surprising: we’re comfortable saying that humans are conscious as long as we see “indications of conscious activity” that we know how to interpret, like language, non-verbal cues, and other forms of communication. Similar cues exist in many animals, which is why we’re fairly comfortable modeling them as conscious entities as well. Quite often, this myopic focus on easy-to-interpret indicators of consciousness causes us to mistakenly assume that people aren’t conscious, when in fact they are (for example, in the case of certain comatose patients). Why aren’t human super-organisms, or rocks self-aware? Why is space dust itself not teeming with conscious life? Again, the answer may be the same: they are conscious, but in a way that we can’t appreciate because we simply haven’t found the right way to look at these things — the right interpretation of the behavior of their components — to reveal the consciousness hidden within them. The trouble with dust theory If dust theory is correct, then there are presumably an enormous number of conscious experiences waiting to be discovered in what appears to be pure cosmic dust. But how many of these conscious experiences really exist, and what form would they take? One would imagine that there are far more ways to interpret simple worlds — say, worlds containing nothing more than a single conscious brain in otherwise empty space — than complex ones. If that’s true, then the vast majority of imagined worlds would presumably be far simpler than our 14 billion year old, formerly dinosaur-occupied, iPad populated techno-jungle. In the space of all possible consciousness-containing universe interpretations, the vast majority should look totally random and incoherent, except for the minimum level of order required to create and maintain a conscious mind. Why, then, do we find ourselves here? I honestly don’t know. But we may be able to spot a hint of the answer to this question by looking back at the “you” copy we considered earlier. I argued earlier that “you” are really two things: raw material (which I called “dust”), and an interpretation of that raw material. If that’s true, then any version of you, whether it’s made of dust or cells, must contain enough information in total — spread out between the raw material, and its interpretation — to reconstruct a complete “you” copy. But which component contains the most information will vary from one kind of “you” copy to another. For example, “dust you” offloads most of its information content to its interpretation. In order for you to explain to someone how they could reveal a “you” copy in the cosmic dust, you’d have to explain to them how trillions of particles should be imagined to relate to one another, for every millisecond of the existence of the copy. The raw material — the dust — can be anything, and it doesn’t carry much relevant information at all. 
“Biological you” is different: here, the interpretation is actually pretty simple — and it’s entirely specified by a handful of equations that encode the laws of physics. Half a dozen (or perhaps far fewer!) lines of math are in principle all that’s needed to map one biological brain state onto the next. The substrate does just about all the work! But “distributed you” — the kind that exists as a copy stored in just 3 or 4 servers in different parts of the world — is somewhere in between. AWS has to keep track of which servers to interpret as which parts of your brain, but within each server, simple physical laws once again determine the interpretation required to bring coherent brain chunks into view. Through this lens, it certainly seems like an odd coincidence that we just happen to perceive a world where complexity is forced onto the raw material that serves as the substrate for intelligence, rather than in the interpretation. And perhaps that makes sense: before there can be an interpretation, there presumably needs to be an interpreter — a system with enough cognitive capacity to keep track of how every part of a substrate needs to be interpreted to give rise to consciousness or intelligence. Perhaps that system must be made of stuff itself: “computer you” only exists as long as the computer that stores the “you” bits keeps track of the right way to interpret those bits, by storing its interpretation on some other physical material. Likewise, “globally distributed you” is only “you” as long as AWS has stored the correct interpretation of the contents of their servers somewhere in another physical server. So arguably, “cosmic dust you” would only exist if we were to actually find and physically store the right interpretation of all the dust particles that are needed to keep a coherent copy of you going over an extended period of time. This requires much more than just “imagining a dust you into existence”. Remember: almost all of the information required to create “cosmic dust you” is in the interpretation. In order to build you from the dust, we’d have to: 1) build a computer with enough memory to store the relationships between trillions of dust particles; and 2) actually go out and find a set of dust particles that can be interpreted as by our stored relationships as a “you” copy. This would be much, much harder than just building a simulation of you on well-structured computer chips in the first place. From that perspective, it’s perhaps a bit less surprising that “cosmic dust you” can in principle be conjured into existence — because in practice, it’s about as hard to achieve as my intuition says it would be. But it’s also possible that there may be a deeper principle at play, whereby the universe — for whatever reason — genuinely “prefers” simple interpretations to complex ones. Speculating about that would fall way, way above my pay grade though. The bottom line is that the universe is damn hard to make sense of. No matter how rational, scientifically minded and sober you want to be about it, consciousness just seems to come out weird.
https://medium.com/swlh/way-more-things-are-conscious-than-we-realize-heres-why-4d5c879cadec
['Jeremie Harris']
2020-10-22 21:37:55.088000+00:00
['Artificial Intelligence', 'Philosophy', 'Consciousness']
My first time….
Prefer to read this in Dutch? You can do that here. It was pretty painful. Especially in the beginning. When I learned to let go, it wasn't all that bad. It brought a feeling of relief knowing that this wouldn't be the last time. I heard many stories about people's first time, but how can you really prepare for such a thing? Let me start from the beginning: We arrived late at night, and I was very much looking forward to it. The smells, colors and impressions were all unknown to me. Nina, our host in Ghana, guided us to the room that the three of us would share. It was warm, the beds were outdated, and the shower was simply a bucket of cold water. Images of our documentary Twentie Four — India. While wandering through the city we quickly lost our way. A boy from the shanty town offered his help. He guided us straight into the township. It was similar to the images I'd seen on television, but this was real. Everything there was nasty and crowded. There were so many people on such a tiny plot of land. The sounds and smells were intrusive. Did I just see a man taking a dump in a ditch? Our short walk felt like hours. Perplexed and overwhelmed, I asked whether we could drink a cup of tea in the garden of the English embassy. The porcelain cups and the napkins were in stark contrast with our previous endeavor. Still with the panic in my eyes, I pondered how we could contribute. We came to the conclusion that from that moment forth we'd only stimulate the local economy. Images of our documentary Twentie Four — India. We'd dine at places where no foreigners could be found. We spent our days on the street, in bars and restaurants with plastic chairs and gigantic menus, but with hardly anything actually available. The most common phrase from any waiter would be: "No have, no have." This consequently led to a diet of fried rice and chicken for three straight weeks. As a strict vegetarian in those days, this meant three weeks of fried rice. Everything was so different from what I was used to. For example: buses only departed when they were completely full. This could literally take a few hours. Inhabitants would constantly stare at us, pass their fingers over our pale white skin and shout "White one, White one." Also, my former boyfriend was reacting poorly to the strong malaria tablets he was taking. Images of our documentary Twentie Four — India. That being said, I wouldn't have wanted to miss any of this. My first experience away from Europe and into an impoverished country was daunting. Nevertheless, this trip has helped me put my other travels in perspective. The uplifting thought came to mind: "It could always be worse." A few days before our trip to India, to film an impact entrepreneur for 24 hours (more on this later), I got a bit scared. India was supposed to be so different… In short: Is India dirty and crowded? Most definitely. Am I having diarrhea? Most definitely. Is it hot and sticky here? Most definitely. Do people here drive like mad men? Most definitely. Are timetables a very loose concept here? Most definitely. However, my first trip to Ghana laid the foundation for managing my expectations. I'm enjoying India so intensely: the food, the busy streets, the clothing, and especially the people. Want to know more about Twentie Four? Check our Facebook page: Fix the world and make money.
https://medium.com/twentie-four/my-first-time-a4b13c90205d
[]
2016-07-27 06:32:29.018000+00:00
['Impact', 'India', 'Socialentrepreneurship', 'Entrepreneurship', 'Plastic']
Multiclass classification with softmax regression and gradient descent
Logic behind Softmax regression Ultimately, the algorithm is going to find a boundary line for each class. Something like the image below (but not actually the image below): Image by author Note: we as humans can easily eyeball the chart and categorize Sarah as waitlisted, but let's let the machine figure it out via machine learning, yeah? Just like in linear and logistic regressions, we want the output of the model to be as close as possible to the actual label. Any difference between the label and output will contribute to the "loss" of the function. The model learns via minimizing this loss. There are 3 classes in this example, so the label of our data, along with the output, are going to be vectors of 3 values. Each value is associated with an admission status. If the label is such that: admitted = [1, 0, 0] waitlisted = [0, 1, 0] rejected = [0, 0, 1] then the output vector will mean: [probability of being admitted, probability of being waitlisted, probability of being rejected] Thus, in softmax regression, we want to find a probability distribution over all the classes for each datapoint. Image by author We use the softmax function to find this probability distribution: softmax(z)_k = exp(z_k) / Σ_j exp(z_j). Why the softmax function? I think this function is best explained through an example. Let's look at the example: GPA = 4.5, exam score = 90, and status = admitted. When we train a model, we initialize the model with a guessed set of parameters — theta. Through gradient descent, we optimize those parameters. Because we have 3 classes (admitted, rejected, and waitlisted), we'll need three sets of parameters. Each class will have its own set of parameters. Let's have theta be of the shape: [bias, weight of GPA, weight of exam score] Let's initialize the thetas to be: theta_admitted = [-250, 40, 1] theta_waitlisted = [-220, 40, 1] theta_rejected = [-220, 40, 1] Why those values? Remember that a line is y = mx + b? The lines given by the initial thetas would be: admitted: -250 + 40x + y = 0, i.e. y = -40x + 250; waitlisted: -220 + 40x + y = 0, i.e. y = -40x + 220; rejected: -220 + 40x + y = 0, i.e. y = -40x + 220. If I just eyeball the data, I can see that the line that separates "admitted" from the rest has y-intercept around 250 and slope around -40. Note: It's a start, but these parameters are actually never going to work. First, the parameters for waitlisted and rejected are the same, so the parameters will always return the same probability for waitlisted and rejected regardless of what the input is. Second, only the biases differ, and rejected and waitlisted have a bigger bias than admitted (-220 > -250). Therefore, regardless of what the input is, these parameters will return roughly 0 for admitted and 0.5 for each of the other two. But it's okay to start with bad parameters; gradient descent will fix it! Let's visualize what the softmax function is doing. What happens when we run our datapoint through the softmax equation? Again, our datapoint is: GPA = 4.5, exam score = 90. First, we find the dot product of the parameters and datapoint: Image by author Then, we exponentiate that value to get rid of any potential negative dot products: Image by author Lastly, we normalize it to get a probability distribution: Image by author Because our initial set of parameters is not good, the model outputs 0.5 for rejected and 0.5 for waitlisted even though the label is admitted. Essentially, the softmax function normalizes an input vector into a probability distribution.
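To make the arithmetic above concrete, here is a minimal NumPy sketch that reproduces the example just described. The thetas and the datapoint are the ones from the article; the only addition is the standard max-subtraction trick for numerical stability, which leaves the resulting distribution unchanged.

```python
import numpy as np

# Initial parameters: [bias, weight of GPA, weight of exam score]
thetas = np.array([
    [-250, 40, 1],   # admitted
    [-220, 40, 1],   # waitlisted
    [-220, 40, 1],   # rejected
])

# Datapoint: GPA = 4.5, exam score = 90, with a leading 1 for the bias term
x = np.array([1, 4.5, 90])

logits = thetas @ x                   # dot products -> [20., 50., 50.]
exp = np.exp(logits - logits.max())   # stability trick; same distribution as plain exp
probs = exp / exp.sum()               # softmax -> roughly [0, 0.5, 0.5]

print(logits)  # [20. 50. 50.]
print(probs)   # [~0, 0.5, 0.5]
```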
In the example we just walked through, the input vector is composed of the dot products of each class's parameters with the training datapoint (i.e. [20, 50, 50]). The output is the probability distribution [0, 0.5, 0.5]. The machine learning algorithm will adjust the bias, weight of GPA, and weight of exam score so that the input vector will produce an output distribution that closely matches the label. What we really want is our model to output something like: Image by author So, let's change the parameters for all three classes to get better accuracy. One way to do this is by gradient descent. Gradient descent works by minimizing the loss function. In linear regression, that loss is the sum of squared errors. In softmax regression, that loss is the sum of distances between the labels and the output probability distributions. This loss is called the cross entropy. The formula for one datapoint's cross entropy is: L_i = − Σ_k 1{y^i = k} · log(p_k(x^i)), where p_k(x^i) is the softmax probability of class k for datapoint x^i. The inner 1{y=k} evaluates to 1 if the datapoint x^i belongs to class k. 1{y=k} evaluates to 0 if datapoint x^i does not belong to class k. Essentially, this function is measuring how similar the label and output vectors are. Here's a good blog post that goes into detail about this equation. The total cross entropy, or loss, will be the sum of all the cross entropies. We take the derivative of this loss with respect to theta in order to do gradient descent. The new parameters for class k after each iteration are: theta_k := theta_k − α · Σ_i (p_k(x^i) − 1{y^i = k}) · x^i, where α is the learning rate. Again, 1{y=k} will be 1 if x^i belongs to class k, and 0 if x^i does not belong to class k. We use this formula to calculate new thetas for each class. Now, let's implement the algorithm to arrive at optimal parameters theta.
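The article's figures carry the exact implementation, so as a stand-in here is a minimal, self-contained NumPy sketch of softmax regression trained with batch gradient descent. The tiny dataset below is made up purely for illustration (the real GPA/exam data is not reproduced here); the update rule is the standard softmax cross-entropy gradient described above.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax_regression(X, y, n_classes, lr=0.001, n_iters=5000):
    """Batch gradient descent on the cross-entropy loss.

    X: (n_samples, n_features); a leading column of 1s provides the bias term.
    y: (n_samples,) integer class labels in [0, n_classes).
    Returns theta of shape (n_classes, n_features).
    """
    n_samples, n_features = X.shape
    theta = np.zeros((n_classes, n_features))
    Y = np.eye(n_classes)[y]                 # one-hot labels, shape (n_samples, n_classes)

    for _ in range(n_iters):
        probs = softmax(X @ theta.T)         # (n_samples, n_classes)
        grad = (probs - Y).T @ X             # gradient of the total cross entropy w.r.t. theta
        theta -= lr * grad / n_samples
    return theta

# Hypothetical data: columns are [1 (bias), GPA, exam score];
# labels: 0 = admitted, 1 = waitlisted, 2 = rejected.
X = np.array([
    [1, 4.5, 90], [1, 4.2, 85], [1, 3.8, 80],
    [1, 3.5, 70], [1, 3.2, 65], [1, 2.5, 50],
], dtype=float)
y = np.array([0, 0, 1, 1, 2, 2])

theta = train_softmax_regression(X, y, n_classes=3)
print(softmax(X @ theta.T).round(2))   # each row should put most probability on its true class
```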
https://towardsdatascience.com/multiclass-classification-with-softmax-regression-explained-ea320518ea5d
['Lily Chen']
2020-12-22 19:13:29.930000+00:00
['Machine Learning', 'Data Science', 'Algorithms', 'Engineering', 'Editors Pick']
One Resolution Each Year
One Resolution Each Year The importance of having a game plan Image created by Lorie Kleiner Eckert on Canva My financial adviser is a smart guy. He believes that we need to set up rules for my investments and stick to them. The main rule is what percent of my assets is in the stock market and what percent is not. This keeps me from getting greedy (and soon-to-be-slaughtered) when the market is up and from getting scared (and running) when it is down. The main thing about this strategy is that it is a strategy. We don’t just behave in a willy-nilly fashion, we have a plan. This concept carries over to other areas of my life as evidenced by the journal I have kept since January 1, 2005. I make two entries in it each year, both on New Year’s Day. The first entry details how I spent New Year’s Eve so I can look back and remember happy times that would otherwise be forgotten. The second entry records my resolutions for the new year. In the early years, I wrote down five or six mundane goals such as “Cut the cussing” and “Take calcium daily,” but of course, before long I was forgetting to take the damn calcium. By 2013, one of my resolutions got me closer to a workable plan, closer to a non-willy-nilly strategy. That resolution said, “Do my routine — see card.” Then on an index card, I wrote down my daily routine, or at least the routine I followed on my good days. Here it is in its entirety: Walk M-F Do 20 minutes of weight training M, W, and F Do 10 minutes of yoga T and Th Dental care — use an electric toothbrush and floss at night and regular toothbrush and mouthwash in the morning. Nights (M-F) — Use lotion on face, feet, legs, and hands Be dressed — with makeup on — within two hours of walking M-F Go somewhere daily by noon Go to Temple on Friday nights In the years since 2013, I have had a singular resolution — yes, just one. My resolution is to follow my routine. I reinforce this mission by writing, “Make a schedule and stick to it. Without this structure, you are like an unmedicated bipolar patient. For happiness, you need to take this ‘drug’ daily.” Some may argue that my routine is a whole set of resolutions, difficult to carry out. But I disagree. I was doing all of these things regularly when life was good for me emotionally, but I failed to do them on down days. So, this gave me a strategy for the up markets and down markets of my life. I learned the beauty of living this way from my ancestors. My dad — post-retirement — used to have a different activity scheduled daily. If it was Monday, he mentored at an elementary school; if it was Tuesday, he volunteered at Temple, and so forth. My grandmother used to cook a different soup daily. If it was Monday it was pea soup; if it was Tuesday it was bean soup, and so forth. When my dad lost my mom, and when my grandmother lost my grandfather, they were not completely adrift. They at least had a plan for what to do the next day…and the days thereafter. It worked for them. It works for me. I recommend it to you! ***** If you like the stories of my life, try my memoir, Love, Loss, and Moving On.
https://medium.com/live-your-life-on-purpose/one-resolution-each-year-79cc099fb637
['Lorie Kleiner Eckert']
2020-12-28 15:02:31.167000+00:00
['Life', 'Motivation', 'Self Improvement', 'Life Lessons', 'Inspiration']
Beyond Coding: Watson Assistant Entities — Part 2, The Faces Of Entities
Photo by Brett Jordan on Unsplash In the first article, we went through an overview of where entities fit in the Watson Assistant structure. We also touched upon the different types of entities and their benefits. In this article, we will take a closer look at the different types of entities — synonyms and patterns. As we know, entities are a feature that allows us to group items or concepts. This allows us to strengthen the intelligence of our Watson Assistant. But to further improve a customer's (or end-user's) experience, we can use entities to recognize patterns and extract data. Entities are also broken down into two areas. The first is user-generated entities and the second is system entities. For now, we will focus on user-generated entities (entities you create for your assistant). System entities will be discussed in other articles. User Generated Entities — Synonyms At their very core, entities are synonyms which help the assistant determine what the customer is asking (in conjunction with intents). Entities are the mechanism by which we link concepts (words) to one another. Consider entities to be a collection of groups. Each entity contains its members, and each member contains its synonyms. The simplest example (which has been used to explain various concepts) is the group of Animals. For instance, we want to provide people with an assistant which helps them navigate a zoo's enclosures. Part of the process is to understand which animal is being discussed. With this in mind, we would create an entity called Animals. Within this entity, we would have several animals and their synonyms. For example, "Hippopotamus" would have the synonyms "hippo", "river horse", "hippopotami", "hippopotamuses" (to take into account the evolution of language) and perhaps even "Hippopotamus amphibius". For this example, we will ignore the pygmy hippopotamus. You would then continue to add synonyms for animals as required. But the level of detail depends on the domain, or topic, the assistant will discuss. In our fictional zoo, all birds may be housed in a single enclosure, which means you could create "Birds" as part of the "Animals" entity and populate it with synonyms such as "parrot", "owl", "kookaburra", "condor", "eagle" etc. If the birds were scattered in multiple enclosures, you could create an entry for "Birds of prey" / "Raptors", "Nocturnal birds" or "Marsh birds". An example of the Animals entity The delineation of how to define the entity is something we will look at in future articles, but for now it should be understood that an entity is the umbrella concept while the entries underneath it are the group's members. Possible expansions of the Animals entity This means the assistant now understands that a hippo and a parrot are both animals. But is that the extent of the synonyms capability? Not at all. We could expand the entity synonyms to use phrases, or multiple concepts. Consider the phrase "Tell me about". It's an indicator that the customer is asking for information. We could create entities for the components, but a phrase entity would suffice. We could consider adding "Explain to me", "What do you know", "Do you have information", etc. By going beyond simple word synonyms, we begin to expand the building blocks — and the power — of entities in our assistant.
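To make the "groups of members and synonyms" idea concrete, here is a small illustrative Python sketch. It is not Watson Assistant's actual JSON schema or API, and find_entity_value is a hypothetical helper; it simply shows the hierarchy described above (entity, then members, then synonyms), with values taken from the zoo example.

```python
# Illustrative only: a plain-Python view of the entity hierarchy, not Watson's schema.
animals_entity = {
    "entity": "Animals",
    "values": {
        "Hippopotamus": ["hippo", "river horse", "hippopotami",
                         "hippopotamuses", "Hippopotamus amphibius"],
        "Birds": ["parrot", "owl", "kookaburra", "condor", "eagle"],
    },
}

def find_entity_value(utterance, entity):
    """Return the member whose name or synonym appears in the utterance, if any."""
    text = utterance.lower()
    for value, synonyms in entity["values"].items():
        for term in [value] + synonyms:
            if term.lower() in text:
                return value
    return None

print(find_entity_value("Where can I see the river horse?", animals_entity))  # Hippopotamus
```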
We’ll take a closer look at creating and defining entities in future articles; for now, it’s important to understand that entities aren’t limited to single concepts or words — as I’ve mentioned, an entity houses groups of similar concepts. User Generated Entities — Patterns Entities as synonyms are useful in identifying information and determining the topic being discussed. But is data identification the most we can expect? Definitely not! With pattern entities, you can tell the Watson Assistant that you expect customers will provide certain data types — anything from an email address to an alphanumeric serial number. With pattern entities, you’re only limited by the customer information you need to identify. Unlike Synonym Entities, Pattern Entities identify a recognizable pattern in the customer’s query. The structure of the pattern is determined by the regular expression (or regex) defined in the entity. I won’t delve into regular expressions (not only is the subject extensive, but there are numerous online resources which can help), but I will provide a simple example to better understand what we mean by Pattern Entities. A pattern entity should follow the same considerations as a Synonym Entity — a top-level entity containing groups of concepts. But instead of synonyms, you are simply defining patterns. It should be noted that an entity cannot be both a Synonym Entity and a Pattern Entity — entities can only be one or the other. With that in mind, let’s consider our zoo navigation example. Our customers have been traveling around the zoo and they are told about discounts and offers with the zoo’s partners. To find the best discounts for their area, we need a zip code. With Synonym Entities alone, we can’t do anything more than understand that the topic is “zip code” and “discounts”. So, we create a new entity — Contact_Details. We now have an entity devoted to understanding how we can communicate with the customer. But more importantly, we can extract the information. Once we have the high-level entity, we add a “Zip” entry. We then tell the assistant which regular expression pattern we want to use — (\b|\s)\d{5}(\b|\s). This simple example tells the system to look for five consecutive digits with spaces or boundaries around those digits. An example of Pattern entities Now that we have a way of identifying the pattern, we can extract this information from the customer’s response. The way we achieve this will be discussed in future articles, but for now we should understand that once we have the zip code we can decide what to do next — send it to the discount systems to display partners; provide analytics for the zoo to determine where they should increase discount partners and so on. More About Entities Now we’ve taken a quick look at the two different types of entities available. It’s important to understand how they work and how they differ to get the best out of entities. We’ll be looking at entities in-depth in future articles, where we examine: How to use entities How to determine which entities we need Entity Best Practices Read about pre-built system entities in the third article of this series.
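The zip-code pattern in the example is just a regular expression, so you can sanity-check it outside of Watson Assistant before adding it to an entity. Here is a quick sketch using Python's built-in re module; the sample utterance is hypothetical.

```python
import re

# The same pattern used for the "Zip" entry: five consecutive digits
# bounded by whitespace or a word boundary.
ZIP_PATTERN = re.compile(r"(\b|\s)\d{5}(\b|\s)")

utterance = "Sure, my zip is 94103, what discounts are nearby?"

match = ZIP_PATTERN.search(utterance)
if match:
    zip_code = match.group().strip()   # drop any surrounding whitespace captured by the pattern
    print(zip_code)                    # 94103
```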
https://medium.com/ibm-watson/beyond-coding-watson-assistant-entities-part-2-the-faces-of-entities-b51e849f27e7
['Oliver Ivanoski']
2019-08-28 13:43:39.503000+00:00
['Watson Assistant', 'Wa Editorial', 'Artificial Intelligence', 'Chatbots', 'Editorial']
5 Essential Cybersecurity & Privacy Tips for Building Your Personal Brand
The Internet is growing up. So are you. And you’re ready to get out there and break it. You’re going to be an influencer, a digital brand expert, a social media marketing guru. The first brand you’re going to build, you’ve decided, is yours. Welcome to the fray. Sowing Your Oats You’re diving into an $8B market, chock full of people just like you who have decided to make being themselves (or whoever their sponsors pay them to be) their full-time gig. You’re voluntarily (and ambitiously) exposing yourself — or at least what the marketeers deem your most valuable parts to be — to the world (and perhaps the highest bidder?). There are over 1 billion users (potential fans!) on Instagram alone. What could go wrong? The opportunities are endless — both for you, and for social engineers (like me) that hack humans as targets of interest (and opportunity). Luckily for you, I’m one of the good guys, and I’m here to help you. First, a story. But Everyone’s Doing It… A friend of mine — let’s call him Bob — decided to build his personal brand as an interior designer. He had spent years honing his skills as an enthusiast, decorating homes for his friends and family as a part-time hobby. Feedback was very positive, and his work became all the rage amongst his circle, and their connections. Bob was elated — and so were the peeps who got free decorating. With much excitement, Bob announced that he would harness his gift, along with the power of social media and the Interwebs, to turn his hobby into a career and bring his designs to the (well-paying) masses. Bob opened a GoDaddy account, bought a domain, created his Instagram, Twitter, and Snapchat profiles, and set to work. And he worked hard. 60 days into his adventure, Bob had 10 paying clients — more than he needed to earn gown-up money — and several thousand followers on his social channels. Bob was elated — and so were the peeps who eventually hacked him and shut him down. 90 days into his adventure, Bob was out of business — or at least out of commission. Bob was devastated. So were his clients. How could this have happened? Bob’s downfall was due to a nasty combination of (not so skillful, but persistent) social engineering, and poor cybersecurity hygiene. Let’s Get to Know Each Other First Bob’s first Instagram posts were pictures of his most intimate design work — his own downtown apartment — and he blasted them out across all of his channels. Among the likes, comments, and accolades — there was a “woman” who we’ll call Alice, who stated she and her husband were moving to Bob’s home city, and wanted to know what building he lived in to get such a killer view of the skyline she saw in the pictures. Bob told Alice the name of the building and told her how great it was to live there. Among the continued small talk (online), she even convinced Bob to tell her the floor he lived on. Bob’s online admirer asked if the building accepted pets. He told her it did, which was lucky for his cat, Morpheus. And that’s how it started. Bob’s new friend was able to keep the conversation going by bringing in small details that only a friend or colleague would know — in this case, she was asking about a design contest entry from an industry fair in Los Angeles that Bob had attended. Alice obviously wasn’t there but found a photo online and used it to describe Bob’s work and how amazing it was — as if she had seen it in person and was in awe of such amazing talent. Certainly, he deserved better than an honorable mention. 
Alice mentioned she’d be willing to have a chat with some of her connections in L.A. and possibly get Bob some meetings with higher profile clients. She asked for a personal email address so she could send Bob some ideas, and said she’d connect with him on his other social accounts as well. She even mentioned meeting up for coffee when she and her husband got to town. Bob was elated. And so was Alice, who now had everything she needed to launch her attack. Your Tests Came Back Positive… Bob was being doxed, a technique where a bad actor works a target to compile pieces of open source (publicly available) information, along with verification information from the targeted individual, to build a profile and do something bad (as bad actors often do). In this instance, the bad actor (Alice) was verifying information she was able to find on Bob — and then she went to work. Alice had the building, even the floor, where Bob lived. Bob told her himself. She easily searched online to reverse-engineer the address. Strike one. Alice had Bob’s personal email address, which he gave her and she verified with a quick note and an even quicker reply. As you’ve probably already deduced, email addresses are also typically usernames — for almost everything. Strike two. Morpheus, Bob’s cat? It’s a response for Bob’s “secret” security question — “What’s your favorite pet’s name?” — on his web hosting account (and a ton of other things). Strike three. Bob’s not tech-savvy. Alice guessed that. She also guessed (cracked) the password for his email account using an online tool. Game over. While Alice didn’t inflict direct financial damage — thankfully she didn’t get to Bob’s bank accounts or credit cards — she was able to take control of two of his social media accounts and his primary email. Those thousands of followers? Gone (including Alice). Hours of work in selecting and posting photos and stories and comments? Trashed. Rather than growing his budding business, Bob spent the next four months cleaning up and becoming hacker-free. But that’s for another article. Sadly, this is a true (and all too common) story. But happily, Bob learned his lesson, and we can all learn from his mistakes. Let’s Have the Talk. Here are five essential steps to consider in securing your personal brand on social media. Don’t be like Bob. #1. Go Hack Yourself. Before you launch your world-changing online persona, conduct an audit of your current online footprint. It’s easy to get started — Google yourself and see what’s out there. Sites like Spokeo, Manta, and others routinely post sensitive information including addresses, phone numbers, and in some cases even spouses, relatives, and business associates. In the security world, we call this OSINT — open source intelligence — and there’s plenty of it available. Marriage and divorce records, census data and birth certificates, and in some cases more sensitive documents like tax returns, vehicle registration, and credit information are out there online for the picking. Not only is this information useful for a potential bad actor, it can also be considered by people who are getting to know your brand. You don’t want your first impression to be your DUI arrest from 2001. In subsequent articles, I’ll teach you how to get your information removed from these databases and lock down your publicly available persona. #2. Now Clean Yourself Up. Now it’s time to disinfect your online presence. Your entire digital footprint is in scope here. 
In addition to restricting access to sensitive data, you should be reviewing what information is out there that you did intend for people to see. Those revealing spring break pics from Cabo? Take ’em down. The mildly inappropriate meme about needing a beer before work that your best friend tagged you in? Consider removing the tag. The nasty Yelp review you left for that nasty taco joint on 3rd Avenue? Delete it. You see the trend here. Anything that could be used to compromise your brand is fair game for a bad actor, a competitor, or an investigator. As mentioned in my previous article on social media privacy — be intentional with what you post, even if you think others will never see it. #3. Be Discreet, and Always Use Protection. This is where the good security hygiene comes in. Don’t pass account numbers, financial information, personal data — or anything else that you’d consider sensitive — through email, text, or DM, even with people you trust. There are ways to exchange all of this information securely. Don’t use weak or common passwords for your online social media accounts. Moreover, don’t use the same passwords between all of your platforms. If one password is compromised, a (proficient) bad actor will try all the sites with that same password. Put a password or PIN on your phone/tablet, or even better, enable biometrics. Where possible, enable two-factor authentication on your individual social apps (Instagram, Facebook, Snapchat, etc.) to prevent unauthorized activity, particularly in a scenario where you ignored the first part of this recommendation and then lost your phone. These are the big three. I could easily write a whole guide on this portion of the process. In fact, I think I will! #4. Don’t Get Emotional. As illustrated in our example, it’s very easy to build rapport, and even friendships, with connections you make online as you’re building your brand. It’s exciting to see these relationships develop as you get closer and increase interaction with your fans and admirers — and your haters. Don’t let emotion take over in these scenarios. If an interaction seems overly friendly, makes you uncomfortable, or seems slightly suspicious, it probably is. A good social engineer who’s marked you as a target of opportunity will intentionally extend the building period of your online connection in order to divert that suspicion — so you should always do some recon of your own. Who’s this person connected to that you know? Do they have other followers, or follow others in your market? Do they have profiles on other platforms? These are basic but effective verification steps that you can perform fairly easily. This is another area where I could, and probably will, write a separate guide to help folks stay safe. #5. Get Checked Regularly. Monitor all of your platform profiles for strange activity or unauthorized access. Always review posts where you’re mentioned, tagged, or shared. You can usually control this particular feature in the social application’s settings. Make it a habit to conduct a periodic review of your online footprint, to include public and private information on your social accounts — and stay consistent with your grooming. Stay smart on how to get support on your various social various platforms if something goes bad. Technical support is usually just a live chat away and these teams will always prioritize potential security issues.
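As a small practical aside on tip #3 (not from the original article): one way to get a unique, strong password per platform is to generate them with Python's built-in secrets module, shown in the sketch below. A password manager achieves the same result with less effort.

```python
import secrets
import string

def generate_password(length=20):
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct credential per platform, so a leak on one site can't be replayed on another.
for platform in ["instagram", "twitter", "snapchat", "email"]:
    print(platform, generate_password())
```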
https://cybersage.medium.com/5-essential-cybersecurity-privacy-tips-for-building-your-personal-brand-878ddd4faad6
['David Scott']
2019-11-06 16:10:14.915000+00:00
['Marketing', 'Security', 'Privacy', 'Social Media', 'Personal Branding']
The UX Designer’s Role in Design Systems — Part 2
“A design system is like a pet. It’s easy to like the idea of having one but if you do go ahead and pull the trigger you are taking on a big responsibility,” says Apurv Ray, a design strategy lead. “When it’s a young design system it’s great to play with it, teach it new tricks by adding new components, and it doesn’t take up a lot of your time. But as your design system matures and grows it needs more and more of your time and you’re responsible for anything it breaks. It will always be there with its puppy eyes acting all innocent but remember you’re the one who built it and you have to live with your decisions.” With more and more organizations ramping up their efforts to build and maintain design systems, how can designers contribute to being responsible “pet owners?” Design systems, by nature, are a collaborative effort, and designers are one part of the equation. What do design systems mean for the role of the designer? Are there specific skills that are helpful for designers to hone as they work on these projects? What activities are designers doing as part of design systems and what contributions do they make? As part of our ongoing guide to successfully creating a design system, we surveyed 12 designers with experience on design systems and spoke to a variety of experts to learn more about the role of the designer within these projects. Evolving the role of the designer Design systems are quite literally creating new, specialized roles for designers as dedicated design systems team members. Organizations like Shopify, Airbnb, and Salesforce are investing in and hiring designers to focus on their design systems. Even at organizations where resources don’t allow for dedicated teams, designers are working on and championing design systems as side-of-desk efforts. A Design Language System partnership meeting at Airbnb, where designers from around the organization can participate in sharing feedback and systems thinking with the design systems team. Image by Airbnb. Some people have fears that the proliferation of design systems may limit designers’ creativity, and reduce the role to one of assembly. In the most extreme scenarios of this concern, designers simply become interface assemblers, building pages and interactions from predefined blocks and components. Ideally though, design systems divert the designers’ efforts to solving strategic problems, and getting creative on the truly challenging and unique use cases and patterns. Design systems can also provide constraints to innovate within, and radically reduce duplication of effort. Gavin Harvey, a platform design manager at Google, sees a future where design systems will start to automate and shift parts of the designer’s role. “That’s what makes it so interesting — it’s not just an evolution in terms of making a standard process more efficient. Design systems are a precursor to a revolutionary change in how we design software. The process of building software is about a lot more than assembly, we need designers to look beyond components to usability, task flow, and how the interactions are driving users towards success.” The skills needed to work on design systems While full design system automation is not the reality for most teams today, many designers are finding that their skills and day-to-day work are being shaped by their contributions to design systems projects. As design systems gain traction, what are the skills that designers need to foster in order to successfully contribute to these projects? 
As roles on specialized design systems teams become more common, what skills should interested designers be working on? In our research, several people highlighted the difference between being an early builder of a design system versus a steward of a more mature system. In the early stages of a system, designers will play more of a craft-oriented role that is focused on creating components and documentation. Over time, this will shift to a stewardship role of supporting the continued growth and evolution of the design system. How far along a design system is in its maturity will change the degree to which the following skills are flexed. However, the designers we surveyed highlighted the importance of all of the following skills: Collaboration skills Design systems won’t work with a lone wolf approach. There needs to be a team effort, and designers need to be collaboration oriented. This enables them to work with developers to articulate, document, and name the system, to understand the needs of the system users, and to contribute to roadmapping the design system’s evolution with product managers. For UX designer Vivian Chung, this means having a “team-oriented approach: you have to be able to collaborate and co-design with other designers, devs, marketing, etc. You need to involve others as early as possible so we can understand their needs and building that into the design system early leads to greater likelihood of the system’s survival and adoption.” Patience and passion Complex projects like design systems require an investment in the long game and a willingness to stick with the project even when things get challenging. The themes of tenacity, patience, and personal passion came up over and over as crucial skills for driving these projects forward. Lisa Guo is a product designer at Sensibill, and as she puts it, “Design systems are not for the faint of heart!” Systems thinking Design systems demand that designers flex their strategic and systems thinking skills at several levels of magnification. At the brand level designers need to be able to align components and UI into a coherent theme, that work with the overall direction of the brand. This includes all aspects of UI such as weight, color, spacing, and typographic styles. Designers also need to be able to see the bigger picture of the design system and how it serves the organization, for example by anticipating potential use cases and evolutions. Design systems require an ability to “disconnect from the immediate problems of today, looking beyond a single feature or a single product… a system is much larger than any one thing,” says Chris Bettig, design director at Youtube. Understanding of development and code Hybrid design/development skill sets really shine in design systems. At a minimum, effective designers are able to talk to developers and collaborate efficiently by building trust and respect. Andrew Couldwell, a designer who has written a book on his experience with design systems, says that having a hybrid skillset is his secret weapon on design systems projects. “Speaking developers’ language will get you really far. And being able to think systematically like a developer is important; for example, being very strict about how many fonts you use.” Project and product management It can be easy to get caught up in building a design system for the sake of it, rather than seeing the system as a means to an end. 
Product management skills, like understanding the problems the system will solve, being able to chunk out the work into a manageable and clear plan, and leveraging research in the right way are all ways that the designers we surveyed contributed to their design systems work. “It’s really important that you actually understand what the problems you are trying to solve are and for who, and that you make sure it’s not design for design’s sake or with only other designers in mind. You also need to be able to make solid business cases for the decisions being made and keep faith even when the public reacts negatively,” says Bettig. A designer’s day-to-day activities on design system efforts The day-to-day activities that designers take on related to design systems are varied, and span from the already familiar to some specialized tasks. Again, the maturity of the system will have an impact on the proportion of time being spent on certain types of activities. Regardless of team structure and the maturity of the design system, there are some key activities that repeatedly came up in our research: Infographic by Andreea Mica. Designing components Not surprisingly, all of the designers surveyed mentioned their role in actually designing components. Designers working on design systems are often defining how components like buttons, form fields, and radio buttons are visually styled and presented, along with their interaction models. Adam Glynn-Finnegan is a senior product designer at Airbnb whose role includes being “an active contributor to the design system, scoping, and creating new components.” Writing documentation Connected to creating components is writing documentation for the design system. This can be at the component level, or more broadly, can cover things like design principles, accessibility guidelines, UI states, and interactions. In addition, several people stressed the importance of documenting the decisions that led to how the components are built. As part of Soojin Cha’s role as a product designer, she spends time “building design system pages including item definition, usage, the do’s and don’ts or rules for the component, accessibility, UI states, and the style guide.” Advocating for the design system Designers’ roles on design systems projects often encompass the messy, human aspects of gaining support for the design system and demonstrating its value. It goes beyond the work of crafting components and interaction models. This includes presenting, demo-ing, and gently reminding team members to use the system, as well as “communication with clients and shareholders on the impact and advantages of the system,” says Samuel Yeung, a UX/UI designer and UI developer. Collaborating with developers on creation and adoption Design systems require close-knit collaboration between designers and developers, both in building the system and in maintaining it. Working together to define naming conventions and ensuring that design assets and code are staying up-to-date with each other can only happen through this collaboration. This is another part of Cha’s role, “collaborating with developers on which items in the design system can be built as a component each sprint, by deciding which part is reusable, worth ‘component-ifying’ as well as how we’re going to name the component. 
Once developers create components, I access the code in reactJS (to check naming and design token coverage) and UI representation on browser.” Governing the system The responsibility of stewarding the system as it evolves can fall to designers. For example, vetting contributions from other teams, or being the point person for documentation and owning design files. This also includes providing support and guidance to consumers of the system. For Ray, who shared his design system pet analogy in the introduction, this was part of his last role. “I was part of the design system governance body which helps to maintain the system and decides which new patterns make it in. It’s hard, but worth it!” Design systems come with a distinct set of challenges. Getting buy-in and the right governance process in place to support the project takes a lot of effort. They are also very time and resource intensive, which can add to designers’ already heavy workloads. Finally, design systems teams are constantly navigating the right balance between how strict and uniform to make the design system and its use, and how flexible and ‘open to evolution’ they should make it. Despite these challenges, the ways in which design systems stretch our skill sets, and the uncertainty about how our roles will change, many designers feel optimistic and excited about the potential of design systems. These projects can bring so many benefits, from aligning teams, to reducing duplication of effort and menial tasks, to improving accessibility and consistency of brand and experience across products. You’re solving problems and helping people across an organization to work faster. There’s also the satisfaction of seeing the design system propagate and the consistency it brings. At Wealthsimple, Senior Product Designer Eric Akaoka has seen this first hand. “A well-maintained design system has the capacity to greatly speed up both the design and development processes by taking advantage of reusable components; it makes handoffs so much easier; and it even allows designers to skip low-fidelity wireframing where appropriate.” Most exciting of all, designers get to see the impact of their work across an organization, at scale. As Samuel put it, “being able to see your work getting adopted across an entire platform is so exciting. You can see the impact that has been made for both developers and designers! It’s easier for developers to build pages faster, and it allows designers to have more time for critical design decisions instead of ‘how big should this button be?’” Continue to part three: Building Accessibility into Your Design System. Thank you to all of the people who participated in research, interviews and surveys, and shared their deep knowledge and expertise. For more on design systems, download the Adobe and Idean e-book ‘Hack the Design System.’
https://medium.com/thinking-design/design-system-teams-the-ux-designers-role-974422d2a883
['Linn Vizard']
2020-01-29 18:16:27.305000+00:00
['Ux Designer', 'Design Career', 'Design', 'Design Systems', 'UX']
Two Years in a Snap — RAPIDS Release 0.16
RAPIDS cuGRAPH In release 0.16, cuGraph kicked off three major long-term themes. The first is to go big. We have shifted to a new 2D data model that removes the 2 billion vertex limitation and offers better performance and scaling into the 100TB+ graph range. The first multi-node multi-GPU (MNMG) algorithms updated to use the new data model are PageRank, SSSP, BFS, and Louvain. The second theme is to go wide, by expanding our supported input data models. In 0.16, we are happy to announce that NetworkX graph objects are now valid data types into our algorithms. We are still expanding interoperability between cuGraph and NetworkX and moving to support CuPy and other data types. The last theme is to go small and develop a collection of graph primitives. The primitives support both single GPU and multi-GPU workflows and allow us to have a single code base for both. RAPIDS Memory Manager (RMM) RMM 0.16 focused on reduced fragmentation for multithreaded usage, and CMake improvements. This release includes a ton of CMake improvements from contributor Kai Germaschewski that make it easier to use RMM in other CMake-based projects (and more improvements to come!). It also includes a new arena memory resource that reduces fragmentation when many threads share a single GPU, as well as improvements to the pool memory resource to reduce the impact of fragmentation. Another new memory resource is the `limiting_resource_adaptor`, which allows you to impose a maximum memory usage on any `device_memory_resource`. We have improved diagnostics with debug and trace logging, currently supported in the pool_memory_resource. A new simulated memory resource allows running the RMM log replayer benchmark with a simulated larger memory, which can help with diagnosing out-of-memory errors and fragmentation problems. And last, but definitely not least, by removing previously deprecated functionality, librmm is now a header-only library. RAPIDS Dask-cuDF For the 0.16 release, Dask-cuDF added an optimized groupby aggregation path when applying many aggregations. Previously, for each aggregation operation, dask-cudf would run serially against the groupby object. Now, Dask-cuDF will call the aggregation operations in parallel on the GPU. This is a big step forward for performance. RAPIDS cuSignal cuSignal 0.16 focuses on benchmarking, testing, and performance. We now have 100% API coverage within our PyTest suite — ensuring that deployed features are numerically comparable to SciPy Signal. Further, via our performance studies, we found multiple functions that were better suited to ElementWise CuPy CUDA kernels versus standard CuPy functions — resulting in 2–4x performance gains over cuSignal 0.15. BlazingSQL It’s now easier than ever to get started with RAPIDS and BlazingSQL. You can now find BlazingSQL containers on the RAPIDS Getting Started Selector, and we have expanded Blazing Notebooks to include more RAPIDS packages (CLX and cuXfilter) with a multi-GPU private beta slated for public release in early November. For version 0.16, we have been working hard closing out dozens of user-submitted issues. At the same time, we have been working on a major overhaul of the communications layer in BlazingSQL. SQL queries are shuffle-heavy operations; this new communication layer, (soon to be merged into the 0.17 nightlies) increases performance across 95% of workloads while setting us up to utilize UCX, enabling the technologies of NVIDIA’s NVLink, Mellanox Infiniband, etc. for even greater performance. 
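The Dask-cuDF note above is easiest to see in code. Below is a minimal sketch of the kind of multi-aggregation groupby that benefits from the new parallel path; the file name and column names are placeholders, and it assumes a machine with RAPIDS 0.16 or later and at least one GPU.

```python
import dask_cudf

# Read CSV files into a GPU-backed Dask DataFrame (the path is a placeholder).
ddf = dask_cudf.read_csv("transactions_*.csv")

# Several aggregations applied to one groupby; in 0.16 these are
# dispatched to the GPU in parallel rather than run serially.
result = (
    ddf.groupby("customer_id")
       .agg({"amount": ["sum", "mean", "max"], "quantity": "sum"})
       .compute()
)
print(result.head())
```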
NVTabular NVTabular provides fast on GPU feature engineering and preprocessing and faster GPU based data loading to PyTorch, Tensorflow, HugeCTR, and Fast.ai, speeding up tabular deep learning workflows by 5–20x when used in conjunction with NVIDIA AMP. Since its inception, it has relied on RAPIDS cuDF to provide core IO and dataframe functionality. With the recent 0.2 release, NVTabular is now even more integrated with the RAPIDS ecosystem, switching from a custom iteration back end to one built entirely on Dask-cuDF. This means users can now pass Dask-cuDF or cuDF dataframes as input in addition to the many file formats already supported by cuIO and can mix and match between NVTabular and Dask-cuDF seamlessly, writing custom ops for NVTabular directly in Dask-cuDF. It also allows for easy scaling across multiple GPUs or even multiple nodes; In a recent benchmark, we were able to preprocess the 1.2TB — 4 Billion row Criteo Ads dataset in under 2 minutes on a DGX A100. RAPIDS version 0.16 introduces list support in cuDF which allows for the addition of NVTabular’s most requested feature, Multi-hot categorical columns, and the team is hard at work on that for NVTabular version 0.3. CLX While there were multiple performance improvements and tweaks to the example notebooks in CLX, making them more performant and easier to use, the big enhancements come to cyBERT. In addition to the on-demand batched mode previously supported, cyBERT can now utilize a streaming pipeline for continuous, inline log parsing. In addition, cyBERT has been modified to support ELECTRA models in addition to the previously supported BERT models. While BERT is still preferred for the log types we’ve observed thus far (providing higher parsing accuracy albeit at a slightly slower speed), ELECTRA support will allow others using cyBERT greater flexibility in choosing a model that works for them in their network environments. cyBERT also got a few more tweaks and improvements, including a new data loader that helps keep up with larger streaming pipelines. RAPIDS Community We’re starting a podcast called RAPIDS Fire. The first episode releases in early November, so keep your eyes out for the announcement. We’d love to have your feedback on topics and invite the community to join the dialogue. The format is going to be unique like RAPIDS is unique. There’s going to be a host, Paul Mahler, and rotating co-hosts. I’m up first. The two hosts will interview a guest on anything and everything related to accelerated data science and the RAPIDS Community. We’re really pumped about this, so expect a blog announcing the first episode. We expect the podcast to be available anywhere podcasts are found. Wrap Up RAPIDS has made so much progress in two years I almost can’t believe it myself. We have a lot of exciting new things on the way in version 0.17, more improvements to SHAP, more MNMG algorithms in cuGraph, and nested type support in cuDF will continue to improve. As always, find us on GitHub, follow us on Twitter, and check out our documentation and getting started resources. We’re excited to have you join us, and we’re looking forward to another great year of RAPIDS. And Vote!
https://medium.com/rapids-ai/two-years-in-a-snap-rapids-0-16-ae797795a5c4
['Josh Patterson']
2020-11-02 17:36:47.407000+00:00
['Big Data', 'Rapids Ai', 'Gpu', 'Data Science', 'Machine Learning']
5 Design Tips to Grow Your Online Business
Designing a website is hard. Until I built a website for my wedding, I’d never tried to do it before and I can honestly say that building the pages and content wasn’t as hard as initially thought, but making it look how I envisioned in my head… now that was a different story! Of course, designing a website for a wedding is different to designing a website for your business, but there are some common design principles that can help you create an effective, professional website that interests your visitors. Of this interest, people are more likely to take the sort of action that will ultimately improve your bottom line too! With everything included in designing a website, below are 5 design tips to help get you started. Tell a story When I watch music videos, I find that I quickly lose interest if it’s just a bunch of people dancing around my screen and there’s no real story being told. If the song lyrics don’t speak to me, it’s a lost cause. The same applies to your website! After all, You cannot create a music video without a song. So think of your website content like the song lyrics: they’re effective if they speak to your audience and the message is something they can relate to. Your website design is like your music video: it’s where you visualise your message. But first, figure out your story: What are you trying to say? Why do people need to buy your product? What colours best represent your message? What images represent your brand? Ask yourself these questions and make sure your customers can tell what your website is about within a few seconds of visiting it. Also, map out your user’s journey so you can ensure you’re guiding them in the right way towards what you want them to do next. Expand your toolkit Have you considered plugins? External plugins can more often than not be the missing pieces needed for a very specific or additional functionality to your site that may not be included in your website builder. There are many options available depending on what your goal is. Let’s say you want to boost conversions, you could try a popup. Need to create a custom order or returns form? Try building your own form. If you want to offer your customers better support, you could even enable comments or build a custom, searchable FAQ… The possibilities are endless! Capture attention with subtle animations And no, we don’t mean putting cartoons all over your website (even though that does sound kind of fun). Animations shouldn’t be too over-the-top otherwise they’ll distract rather than help focus attention. Once you have worked out the story behind your website, you can then use animations to draw attention to a particular message or call to action. Be careful though, complicated animations will take away from the message that you’re trying to get across. So simply try adding a motion effect to your headline or key call-to-action button and see if it gets more clicks! Use images! It’s an obvious fact that images catch attention and that they help get you more views, sometimes up to 94% more! This comes with a caveat though, you need to use the relevant images, and put them in the right place! The last thing you want is to turn users off with images that are not tied to your business, so it’s important not to pick just any image. Relate back to your website’s story and choose images that appeal and highlight your narrative. 
If the in-built functionality of your website builder doesn’t enable the flexibility you need, remember to utilise external plugins to achieve beautifully Pinterest-style galleries, eye-catching banner images and hero images. Be responsive It’s important to remember that customers visit your site on mobile and tablets too. If your website isn’t optimised for these users, they will have a completely different experience of your website. Readability or unclear calls to action may cause customers to abandon the site. With users spending an average of 69% of their media time on smartphones, thinking about mobile responsiveness is no longer optional if you want to remain competitive, so keep this in mind and test your designs on different devices. When you think about using external tools, make sure they’re mobile-responsive too. a lot to consider when creating your website and we can’t expect to become professional web designers overnight. So use these five tips to get you started and you’ll be on your way to creating a website that not only looks like what you envisioned, but that also helps your business grow.
https://medium.com/powrplugins/5-top-design-tips-to-grow-your-business-online-1546e920f8c
[]
2018-06-07 19:09:54.985000+00:00
['Design', 'Online Business', 'Ecommerce', 'Web Design', 'Small Business']
Sharded: A New Technique To Double The Size Of PyTorch Models
Sharded: A New Technique To Double The Size Of PyTorch Models Sharded is a new technique that helps you save over 60% memory and train models twice as large. Giving it scale (Photo by Peter Gonzalez on Unsplash) Deep learning models have been shown to improve with more data and more parameters. Even with the latest GPT-3 model from OpenAI, which uses 175B parameters, we have yet to see models plateau as the number of parameters grows. For some domains like NLP, the workhorse model has been the Transformer, which requires massive amounts of GPU memory. Realistically sized models often just don't fit in memory. The latest technique, called Sharded, was introduced in Microsoft's ZeRO paper, in which they develop a technique to bring us closer to 1 trillion parameters. In this article, I will give the intuition behind Sharded, and show you how to leverage it with PyTorch today to train models twice as large in just a few minutes. This capability in PyTorch is now available thanks to a collaboration between Facebook AI Research's FairScale team and the PyTorch Lightning team. By the way, I write about the latest in deep learning, explain the intuition behind methods, and share tricks to optimize PyTorch. If you enjoy this type of article, follow me on Twitter for more content like this!
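As a rough illustration of how little code the technique demands, here is a sketch of switching on sharded training in PyTorch Lightning. It assumes a Lightning version from around the time of writing in which the FairScale-backed "ddp_sharded" plugin is available, plus fairscale installed and a multi-GPU machine; MyModel and my_dataloader are placeholders for your own LightningModule and data, not names from this article.

```python
import pytorch_lightning as pl

# MyModel is a placeholder for your own pl.LightningModule;
# my_dataloader() is a placeholder for your own DataLoader.
model = MyModel()

trainer = pl.Trainer(
    gpus=8,
    precision=16,            # mixed precision stacks well with sharding
    plugins="ddp_sharded",   # shard optimizer state and gradients across GPUs via FairScale
)

trainer.fit(model, my_dataloader())
```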
https://towardsdatascience.com/sharded-a-new-technique-to-double-the-size-of-pytorch-models-3af057466dba
['William Falcon']
2020-12-12 15:11:28.095000+00:00
['Deep Learning', 'Artificial Intelligence', 'Editors Pick', 'Data Science', 'Machine Learning']
Marketing to millennials: Create a message they can’t ignore
Millennials — currently between the ages of 23 and 37 — are estimated to yield $200 billion in annual buying power. The caveat: only six percent of Millennials consider online advertising to be credible. So, how will you reach the other 94 percent? With Millennials, you must cut to the chase and tell the truth. If you don't, they'll move on or rat you out. However, when alignment between a brand and a Millennial is achieved, the results are powerful. This demographic proudly aligns itself with brands that authentically speak to its values. They can become your most vocal advocates and brand ambassadors. To get to this point, you must relentlessly build trust through compelling and authentic messaging. As with any generational segment, Millennials comprise individuals with diverse traits. Despite this, some characteristics are nearly universal and provide a starting point for creating a message they can't ignore. Loyal millennials: A paradox? Unlike Baby Boomers, Millennials inherently distrust organizations. They want to know the motivations behind what you're selling and whether it aligns with their own value system. If distrust is so rampant, some argue, Millennials will never be loyal customers. However, a Forbes and Elite Daily study found Millennials will develop strong brand loyalty when the product is high quality and they are actively engaged by the organization. So, through honest and strategic messaging, you can build trust and create a loyal Millennial audience. Get to the point Thankfully, combatting distrust and building loyalty doesn't require long, complicated messaging. Millennials prefer you get to the point simply and directly, and tie it back to what is genuinely valuable to them. But don't fool yourself into thinking simple means easy. Millennials have a strong aversion to anything remotely salesy. To strike a chord, your simple and direct message must be creative and original. Brands that talk like a human (not a corporate robot) and take an empathetic approach will win. How do you create such a message? Well, you are in luck. We rounded up some examples of brands that are framing their message in a way that the Millennial market can't ignore. Stand for something Millennials want to give back and drive change, and they seek out brands with the same mentality. Whether it's social advocacy, environmental sustainability, or community-building, brands that clearly stand for more than themselves will draw in the Millennial crowd. Take note from 26-year-old entrepreneur Marcus Harvey. He founded his apparel company, Portland Gear, on pure pride — the pride that Portlanders have for their city. He positions his company not just as an apparel company, but around the desire to build a strong community. With 300K followers on Instagram, Harvey has clearly struck a nerve. Almost every post features a community member's photo and invites others to contribute. In an interview with Oregon Live, Harvey said, "People do crazy things because they want to see their photo on the @portland page." Harvey is continually nourishing a brand that taps into people's values: their pride of place. Make a connection Millennials don't want to be marketed to; they want to be involved. They're more receptive to messages that come from those they trust. A study conducted by SocialChorus found that 95 percent of Millennials say friends are the most credible source of product information.
By tapping into pre-existing connections and encouraging engagement, brands can build trust with their audience. If they do so continually, they will grow a vocal group of brand advocates. In our digital world, there are many ways for brands to connect with their audience and build relationships. Crowdsourcing and user-generated content are just two examples. Connecting through crowdsourcing Crowdsourcing is when a brand presents a problem to their audience and asks for their help in creating a solution. Pabst Blue Ribbon, a beer that’s been on the market for more than a century, has witnessed a resurgence since shifting their marketing toward Millennials. In an effort to connect with this generation, Pabst Blue Ribbon turned over their product design to Millennials through its PBR Art contest. Users were asked to recreate the PBR logo for a chance to win $30,000. Connecting through user-generated content A user-generated content strategy involves inviting or curating user content (typically a video or photo). Since the content originates from other users, not the brand itself, it builds peer trust and engages your audience. Smart brands actively invite users to submit their content and then share it on the brand’s social networks. User-generated content helps turn passionate followers into brand advocates. There is no need to overthink this. Hydro Flask incorporates user-generated content on its social channels and website. Most Instagram pictures are curated from followers, and some web pages feature customer photos. Millennial messaging that resonates Millennial consumers will leave you in the dust if you give them a sales pitch. They want to connect with a business that speaks to their values. By being direct, standing for something, and encouraging engagement, you’ll build trust among this demographic. If you continue to deliver, they will become your extended grassroots marketing team.
https://medium.com/madison-ave-collective/marketing-to-millennials-create-a-message-they-cant-ignore-39c69275c178
['Hanna Knowles']
2018-01-15 18:28:31.743000+00:00
['Marketing', 'Messaging', 'Millennials']
Review: Residual Attention Network — Attention-Aware Features (Image Classification)
Review: Residual Attention Network — Attention-Aware Features (Image Classification) In this story, Residual Attention Network, by SenseTime, Tsinghua University, Chinese University of Hong Kong (CUHK), and Beijing University of Posts and Telecommunications, is reviewed. Multiple attention modules are stacked to generate attention-aware features. Attention residual learning is used for very deep networks. Finally, this is a 2017 CVPR paper with over 200 citations. (Sik-Ho Tsang @ Medium)
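As a brief reference for the attention residual learning mentioned above (a sketch of the idea, with the mask/trunk notation assumed from the paper rather than quoted from this review): each attention module combines a trunk branch F and a soft mask branch M as

$$ H_{i,c}(x) = \big(1 + M_{i,c}(x)\big)\, F_{i,c}(x) $$

where i indexes spatial position, c the channel, and M_{i,c}(x) lies in [0,1]. When the mask is close to 0 the module approximates an identity-style residual mapping, which is what lets many attention modules be stacked into a very deep, still-trainable network.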
https://towardsdatascience.com/review-residual-attention-network-attention-aware-features-image-classification-7ae44c4f4b8
['Sik-Ho Tsang']
2019-06-05 08:26:28.875000+00:00
['Image Classification', 'Deep Learning', 'Artificial Intelligence', 'Data Science', 'Machine Learning']
QC — Observable. In quantum mechanics, relationship…
Photo by Freddy Marschall In quantum mechanics, the superposition of states collapses when measured. This behaves as if nature cares only when you are looking. This is odd and we may expect the math will be bizarre also. In reality, the math is amazingly simple and elegant. A wave function |ψ⟩ collapses when measured and all measurements have an associated operator U (observable). After we calculate the observable, we use it to make measurements. In this article, we will detail how both are done. Eigenvalues & eigenvector In linear algebra, λ (a scalar) and v (a vector) are the eigenvalue and eigenvector of A if For example: For a matrix A, it can have multiple eigenvalues and eigenvectors. Observable By experiment, we know the spin of a particle can be measured in two unambiguous distinguishable states|u⟩ and |d⟩ with measured value +1 and -1 respectively. These vectors are orthogonal to each other. If we are given a particle in either state, we can always set up an experiment to distinguish them without ambiguity. By the principle of quantum dynamics, these are our eigenvectors and eigenvalues for our observable σ, i.e. With, Substitute |u⟩ and |d⟩ with the equations above, it becomes: The solution for these equations are: This is the observable along the z-axis. It is this simple! Let’s repeat the calculation for the x-axis. By experiment, when a particle is prepared to be right spin, it has half of the chance to be measured as up spin or down spin. So we can express the right and left spin as a superposition of up and down spin below. Again, |r⟩ and |l⟩ are unambiguous distinguishable when measured along the x-axis with measurements +1 and -1 respectively. So, i.e. The solution is: We repeat the calculation with the y-axis. Here are the observables along all three axes: Once we have the observable, we can use them to make measurements. Observable So for an observable, what is the value when a quantum state is measured. Let’s start with X which is the observable along the x-axis. The two eigenvalues of this observable are 1 and -1 with the corresponding eigenvectors: Let’s rewrite the ket and the bra of e0 in the matrix form: and compute, with the definition: i.e. The probability of measuring |ψ⟩ with a specific eigenvalue value is: So for state |0⟩ to be measured with eigenvalue 1, the probability is: The state after the measurement is: i.e. the states measured with eigenvalue 1 and -1 are: As visualized below, if |0⟩ is measured along the x-axis, it has an equal chance to be measured as |r⟩ or |l⟩. The quantum state becomes: For a particular state Ψ, the average value for the observable A when measured is: Hermitian operators Operators corresponding to physical observables are Hermitian. An operator is a Hermitian if it equals its transpose after taking a complex conjugate. Hermitian operators guarantee to have real eigenvalues. i.e. its measured values are real. The diagonal value of a Hermitian operator has to be real and the transposed elements are its complex conjugate. Thoughts Quantum mechanics is non-intuitive. But the beauty is it has a very elegant and amazing simple model based on math. Hope you have enjoyed the math here. Credit and reference Susskind, Leonard, Friedman, Art. Quantum Mechanics: The Theoretical Minimum. Jurgen Van Gael: The Role of Interference and Entanglement in Quantum Computing.
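The equations in this article were embedded as images and did not survive the text export. For reference, here is a reconstruction of the standard results the text walks through, using the usual basis conventions (so treat the explicit matrix forms as the textbook ones rather than a quote of the original figures). In linear algebra, λ and v are an eigenvalue and eigenvector of A if

$$ A\,v = \lambda\,v $$

Writing the up/down states in the z-basis and solving σ_z|u⟩ = +|u⟩, σ_z|d⟩ = −|d⟩ gives

$$ |u\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad |d\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} $$

The right/left spin states are equal superpositions of up and down, and repeating the same procedure along the x- and y-axes gives

$$ |r\rangle = \tfrac{1}{\sqrt{2}}\big(|u\rangle + |d\rangle\big), \quad |l\rangle = \tfrac{1}{\sqrt{2}}\big(|u\rangle - |d\rangle\big), \quad \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} $$

For an observable A with eigenvalue λ_i and eigenvector |e_i⟩, the probability of that outcome, the post-measurement state, and the average value are

$$ P(\lambda_i) = |\langle e_i|\psi\rangle|^2, \qquad |\psi\rangle \rightarrow |e_i\rangle, \qquad \langle A\rangle_\psi = \langle\psi|A|\psi\rangle $$

which reproduces the example in the text: measuring |0⟩ (that is, |u⟩) along the x-axis yields +1 or −1 with probability |⟨r|0⟩|² = |⟨l|0⟩|² = 1/2.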
https://jonathan-hui.medium.com/qc-observable-8a44d10c3f7a
['Jonathan Hui']
2019-09-03 13:39:38.465000+00:00
['Physics', 'Science']
Build your own machine learning model to predict the presence of heart disease
Build your own machine learning model to predict the presence of heart disease A detailed walk-through of an end-to-end machine learning project Photo by Alexandru Acea on Unsplash We all know that machine learning is currently used widely in various sectors like healthcare, automotive, financial services, retail, education, transportation, etc. When talking about the healthcare sector, machine learning has disrupted the domain in terms of the vast number of applications it can be used for. It is used in disease identification/diagnosis, robotic surgery, predicting epidemic outbreaks or even in the latest clinical trials/research. One such important application is predicting the presence of heart disease in human beings, given certain features or parameters. The model we are going to be dealing with in this blog will be a “classification model” that makes predictions using binary values (0 or 1). To dive into the process of creating our model, we first require the data set. The data set used for building our classification model has been obtained from Kaggle and can be found here. So, let’s get started with the process by exploring our data. Since many of us here, including myself, aren’t from a medical background, decoding the meaning of the various features in the data set could help us understand it better. After this step, we can move on to the juicier parts like visualizations, data pre-processing, model creation etc. Understanding the data set The following picture depicts the information displayed by the data set. First 5 rows of the data set The heart disease data set contains the following features: · Age — Age of the person · Sex — Sex or gender of the person · cp — Type of chest pain, represented by 4 values (0, 1, 2 and 3) · trestbps — Resting blood pressure · chol — Serum cholesterol is the combined measurement of HDL and LDL (high and low density lipoproteins). HDL is often referred to as the good cholesterol and indicates lower risk of heart disease, whereas LDL is considered to be bad cholesterol and indicates a higher risk of heart disease or increased plaque formation in your blood vessels and arteries. · fbs — Fasting blood sugar indicates the level of diabetes and is considered to be a risk factor if found to be above 120 mg/dl. · restecg — Resting electrocardiographic results measure the electrical activity of the heart. This factor can diagnose irregular heart rhythms, abnormally slow heart rhythms, evidence of an evolving or acute heart attack, etc. · thalach — Maximum heart rate achieved is the average maximum number of times our heart beats per minute. It is calculated as: 220 — age of the person · exang — Exercise induced angina (AP) is a common concern among cardiac patients. Angina is usually stable but is triggered when we do physical activity, especially in cold conditions. · oldpeak — Oldpeak is described as the ST depression induced by exercise relative to rest. Not all ST depressions represent an emergency condition · slope — The slope of the peak exercise ST segment · ca — The number of major vessels · thal — Thalassemia: 3 = normal; 6 = fixed defect; 7 = reversible defect · Target variable — tells us whether the person has heart disease (1) or not (0). Data visualization Visualizations help us familiarize ourselves with the data that we are dealing with. So, let’s look at the three important visualization techniques that tell us a lot more about the data we are dealing with than any tabular column ever could.
· Heatmap: The heatmap below was plotted using python’s seaborn library. The heatmap tells us the relation between the various variables in our dataset by indicating how they affect each other. It uses a color scheme as well as decimal values. Negative values indicate an inverse relationship between the two variables in question, whereas values closer to 1 indicate highly correlated variables. Here, df.corr() is used to find the pair-wise correlation of all columns in the data-frame. Read more about seaborn’s heatmap. Heatmap · Countplots: The 8 plots below represent the count of the categorical values with respect to the target variable (0 or 1). These plots are used to show the category (x-axis) vs. the count of the categorical values (y-axis). The presence or absence of heart disease i.e. the target variable is differentiated using colors. Also, the most common category for a given feature can be inferred from these plots. Read more about countplot. Countplot · Distplots: The 4 plots below are basically histograms that represent the range of values that the continuous numerical variables possess. These plots are useful for visualization when we are dealing with vastly different ranges of data. The plots represent a univariate distribution. The x-axis represents the feature, which is separated into different bins. Each bin shows the frequency of occurrence of values in that range. The most common value for each variable can be inferred by looking at the bin with the highest y value. Read more about distplots here. Distplot Data manipulation Now we’ll be performing some data manipulation. The values of some categorical variables cause ambiguity while fitting and training our model. So, we transform them into binary form i.e. convert categories consisting of alphabets or integers from 0–3, 0–4, etc. into 1’s and 0’s by adding separate columns for each category. We do this using a functionality provided by the pandas library called pd.get_dummies. The documentation for pd.get_dummies can be found here. We can achieve this by specifying which columns need to be encoded, and thus we get a data frame with the original columns replaced by new columns consisting of each of our encoded variables and their binary values. The picture below shows the new columns of the first 5 rows with their binary values. Feature creation Now, we can go ahead and create an extra feature. This process is called feature engineering and is used to either modify an existing feature to get a new column or create an entirely new one. An extremely simple version of feature engineering is performed using the pre-existing age column. A well-known fact is that adults over the age of 60 are more likely to suffer from heart disease as compared to younger adults. So, we create a separate column to filter the entries in which the person is either 60 years or older. We can do this by assigning 0’s to those below 60 years of age and 1’s to those aged 60 or above. We name the column ‘seniors’, which refers to senior citizens. Train-test split Our data now needs to be split into xtrain, xtest, ytrain and ytest to ensure that our model can fit itself to the training set (xtrain, ytrain) and predict values using the test set (xtest, ytest). The split is performed in the ratio of 80:20 for train:test using sklearn’s train_test_split function, which assigns 80% of our data to the training set and splits it into input features and labels. The features are stored in xtrain whereas the labels are stored in ytrain; a short code sketch of these steps follows below.
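The original code screenshots did not survive the text export, so here is a minimal sketch of the preprocessing and splitting steps just described. The exact set of columns passed to pd.get_dummies and the random_state are assumptions, not taken from the original kernel:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("heart.csv")  # the Kaggle heart disease data set

# One-hot encode the multi-valued categorical columns into separate 0/1 columns
df = pd.get_dummies(df, columns=["cp", "restecg", "slope", "ca", "thal"])

# Feature engineering: flag senior citizens (aged 60 or above)
df["seniors"] = (df["age"] >= 60).astype(int)

# 80:20 train/test split into features (x) and labels (y)
X = df.drop("target", axis=1)
y = df["target"]
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2, random_state=42)
```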
These lists are then used for model fitting. A similar process is used to assign 20% of the original data to xtest and ytest. These lists are used for making predictions. Read more about train_test_split here. Data scaling The next logical step would be data scaling. We need to necessarily scale our data before proceeding with the model creation since our dataset contains different features in various ranges. If the data isn’t scaled, it will be difficult for the model to assign equal importance to all the features as it will be biased towards features with higher values. So, the data is scaled down using sklearn’s StandardScaler(). This functionality transforms our data such that our data is scaled down to unit variance. The mean of the distribution will equal 0 and the deviation will equal 1. This makes it easier for the model to assign equal priority to all the features in spite of their range of values. Read more about StandardScaler() here. Model creation We can finally go ahead with model creation. Classification models basically take in the input values and make predictions by classifying various inputs into their respective labels/categories. The model that we will be creating will use Logistic regression which is a supervised learning classification algorithm. A single-class binary logistic regression model gives outputs in the form of 0 or 1 and it employs the sigmoid function which is, graphically, an S-shaped curve. This function is the crux of the computations performed. Graph and expression for the sigmoid function The sigmoid function binds various values into a range between 0 and 1 and maps the predictions to probabilities. A threshold value (for example, 0.5) is set and any probability equal to or above the threshold is assigned to the presence (1) class. Any value below the threshold is assigned to the absence (0) class. The code for the logistic regression classification model along with its accuracy, is given below: As you may have realized, the code for actually creating and training the model is very simple due to the hidden implementation of logistic regression in python’s sklearn library. Refer to the documentation for logistic regression to learn more about the parameters used. This model churns out an accuracy of roughly 90.16% which is very decent. Evaluation The accuracy values from 5 different models, including logistic regression are listed below: Accuracy values of 5 different classification models For better understanding of what exactly our model has predicted and why our accuracy is what it is, we can view 2 important metrics provided by sklearn — confusion_matrix and classification_report. The confusion_matrix is basically a representation of our true negatives, true positives, false negatives and false positives. This matrix tells us how many correct and incorrect predictions our model made. The confusion matrix for our Logistic Regression model is given below: Confusion matrix From this we can infer that our model has predicted 27+28 values (true positives and negatives) correctly, whereas it predicted 2+4 values (false positives and negatives) incorrectly. The classification_report is another important metric which uses 3 different aspects; precision, recall and f1-score. Let’s look at what these 3 aspects mean and how they represent an evaluation metric. · Precision is defined as the number of correct positives predicted with respect to the total number of positive predictions. It can be calculated using the simple formula: TP / (TP+FP). 
In the case of our model, the precision for the presence of heart disease will be 27 / (27+2) = 0.9310 ~ 0.93 · Recall is defined as the number of correctly predicted positives with respect to the total number of actual positives (true positives plus false negatives). It can be calculated using the simple formula: TP / (TP+FN) In the case of our model, the recall for the presence of heart disease will be 27 / (27+4) = 0.8709 ~ 0.87 Classification report An ambiguity arises when we need to balance precision and recall for better overall performance. · The F1 score is the solution to this ambiguity. It is calculated as: (2 x precision x recall) / (precision + recall) If we apply this formula, we get (2 x 0.93 x 0.87) / (0.93+0.87) = 0.899 ~ 0.90 Conclusion Thus our model has been created and evaluated successfully. Our model can make predictions if we provide values for all the input features. The aforementioned steps constitute the process involved in building a classification model for heart disease prediction with a very decent accuracy. Though there are other ways to improve model accuracy, which you are free to try, this blog specifically deals with the structure and process involved in building and understanding an end-to-end machine learning project. So, feel free to refer to my kernel on Kaggle for the entire code.
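For completeness, here is a sketch of what the scaling, training, and evaluation steps described above could look like in code. The solver settings are assumptions, and the exact numbers will vary with the split:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

# Scale features to zero mean and unit variance
scaler = StandardScaler()
xtrain_scaled = scaler.fit_transform(xtrain)
xtest_scaled = scaler.transform(xtest)

# Fit the logistic regression classifier and evaluate on the held-out 20%
model = LogisticRegression(max_iter=1000)
model.fit(xtrain_scaled, ytrain)
preds = model.predict(xtest_scaled)

print(accuracy_score(ytest, preds))
print(confusion_matrix(ytest, preds))
print(classification_report(ytest, preds))

# Checking the metric arithmetic quoted above against the confusion matrix
tp, fp, fn = 27, 2, 4
precision = tp / (tp + fp)                          # ~0.93
recall = tp / (tp + fn)                             # ~0.87
f1 = 2 * precision * recall / (precision + recall)  # ~0.90
```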
https://medium.com/srm-mic/build-your-own-machine-learning-model-to-predict-the-presence-of-heart-disease-8f5256afdc96
['Pooja Ravi']
2020-09-18 14:52:36.710000+00:00
['Machine Learning', 'Python', 'Classification', 'Heart Disease', 'Data Visualization']
Introduction to Big Data & Hadoop
In order to understand what Big Data and Hadoop are, you first need to understand what data is. What is data? In general, data is any set of characters that is gathered and translated for some purpose, usually analysis. What is Big Data? Big Data is also data, but of a huge size. Big Data is a term used to describe a collection of data that is huge in volume and growing with time. In short, such data is so large that none of the traditional data management techniques can store and process it efficiently. Big Data Examples Big data is getting bigger every minute in almost every sector. The volume of data processing we are talking about is mind-boggling. Here is some information to give you an idea: The Weather Channel receives 18,055,555 forecast requests every minute. Netflix users stream 97,222 hours of video every minute. Twitter users post 473,400 tweets every minute. Facebook generates 4 new petabytes of data per day. A single jet engine can generate 10+ terabytes of data in 30 minutes of flight time. With many thousands of flights per day, the generation of data reaches many petabytes. You can check the stats here: https://www.internetlivestats.com/ Types Of Big Data Big Data can be found in three forms: Structured Unstructured Semi-structured Structured Any data that can be stored, accessed, and processed in a fixed format is termed ‘structured’ data. Over time, computer science has achieved great success in developing techniques for working with this kind of data (where the format is well known in advance) and deriving value out of it. Unstructured Any data whose form or structure is unknown is classified as unstructured data. In addition to its huge size, unstructured data poses multiple challenges in terms of processing it to derive value from it. A typical example of unstructured data is data containing a combination of simple text files, images, videos, etc. Semi-structured Semi-structured data can contain both forms of data, that is, structured and unstructured. Characteristics of Big Data
https://venkateshpensalwar.medium.com/big-data-hadoop-edf6572b2232
['Venkatesh Pensalwar']
2020-09-17 11:47:35.678000+00:00
['Technology', 'Arth', 'Big Data', 'Hadoop']
How to Make Disney’s Vegan Dole Whip at Home
How to Make Disney’s Vegan Dole Whip at Home Disney released the cult-favorite recipe so you can transport yourself to the parks anytime I grew up in a house that had rules — and they were reasonable ones. No sugary cereal, no staying up past 10:00 PM, no computer time until after homework was done. But on vacation, there were no rules. I remember standing outside the gates for the Tomorrowland Speedway in Magic Kingdom, watching kids crammed into brightly-colored cars bump along the track. The Florida sun made the day muggy already as I bit into the Mickey-shaped bar, chocolate and ice cream dripping down my hand faster than I could slurp it. It was 9:30 AM. That was only the first ice cream stop. As we worked our way around Magic Kingdom, we finished up the afternoon in Adventureland. Still singing “Yo ho, yo ho, a pirate’s life for me!” we waited in line at the colorful stand at Aloha Aisle for another round. What emerged from the dispensers wasn’t ice cream, but a deliciously creamy pineapple treat. Disney has been serving Dole Whip at Walt Disney World since 1986, but on the day I discovered it, it was like what Aladdin sings to Jasmine on a magic carpet ride: a whole new world. Dole Whip is a cult-favorite snack at Disney, and I’m not exaggerating when I say it’s the best snack the parks have to offer. You can only find it in a few places at the parks — so I always make time to wait in line for a chance to cool down. I’m holding out that I’ll still be able to return to Disney later this year, but until it reopens, I’m whipping up my own Dole Whip at home, thanks to Disney releasing the recipe on the My Disney Experience app.
https://medium.com/tenderlymag/how-to-make-disneys-vegan-dole-whip-at-home-70aee84e0125
['Kayla Voigt']
2020-05-14 15:01:01.494000+00:00
['Dupes', 'Food', 'Food Hacks', 'Vegan', 'Disney']
We Are Paying $5 A Month
I stared at the numbers in disbelief. I refreshed the page repeatedly, hoping it was just some glitch, that the earnings were just not calculated yet. With a heavy heart, I had to accept what was to be my new reality. The rule changed and I suffered some losses as a result. Other than adding to the complaints already mounting on the platform, there really wasn’t much I could do about my disappointment. I started by earning $28.30 in October 2018, slowly building up until I finally reached my all-time best of $410.64 in September 2019 (it took me almost a year), only to have it drop to $142.22 in November 2019. All with one change of the MPP rules. My earnings have never really climbed back up since. Like many of the writers here who had been thriving on poems and short stories, I was extremely disappointed. But I knew I had to deal with it and move on. I probably earn a bit more than others these days, a little under a hundred, a little more than a hundred some months. But I haven’t earned as much as I used to ever since the rule changed. I still write poems and short stories. I write long stories and articles too, but I refuse to lengthen and puff up my work only for the sake of reading time and earnings. I say what I want to say, what I need to say and what I think needs to be said. I am here to express myself, to talk to you, to the world. Everything else is a bonus. Here to have a meaningful conversation with you. Photo by Etienne Boulanger on Unsplash I was very upset in the beginning. How could they? Don’t they know how hard I work to build my readership and learn my craft? How can they value short narratives less than long ones? I talked to my husband, my brother and my best friend about what happened, about my disappointment. They listened and nodded in sympathy. They knew how hard I work on my Medium. I was angry at one point. What the hell? A voice yelled in my head. Oh, the unfairness of it all. That must be how Icarus felt as he plunged into the Aegean sea. I was mad. I was very, very mad. Until I sat down with the numbers. And then I was ashamed for being angry.
https://medium.com/warm-hearts/we-are-paying-5-a-month-98a932fbf0e4
['Agnes Louis']
2020-10-06 15:13:41.189000+00:00
['Gratitude', 'Writing', 'Médium', 'Life', 'Writer']
How To Make API calls in Fullstack Apps or React Apps Using Axios
API CALLING When I began programming some years back, APIs, REST APIs and Endpoints were some of the programming concepts that I had the general idea of but didn’t fully understand. I later took up some internships and came across the term “API” again. This time my colleagues made it seem like an out of this world concept that required 10 years of programming experience and luck from the gods to make work. I felt like the APIs I had been exposed to were wrong just because of the way my internship mentors and online tutorials explained. But now looking back I realize it’s all the same story: Programmers love overcomplicating the definition of simple concepts. DEFINITION An API (Application Programming Interface) is a software/piece of code that allows two separate programmes to interact with each other by passing data with each other. A simpler definition is: an API is a code that triggers/ causes the transfer of data from one point to another TYPES There are two types of APIs that can be configured in your code: 1. ) Internal API: These are APIs that are configured in full-stack applications. The API is called in the front end requesting for information from the back end server of the same application. There are typically four operations that can be carried out with APIs. These are called CRUD (Create — POST API Request, Read — GET API Request, Update — PUT/PATCH API Request, Delete- Delete API Request). We will be focusing on the two main requests, GET and POST, as these are the most widely used. i.) Internal API GET Request A GET request means that we are receiving (GETTING) data. An Internal API GET request means we are receiving data from the back end server of the same application. import axios from "axios" ... axios.get(url) /*state which route/url to get data from*/ .then((res) => {setState(res.data)}) /*do something with data*/ .catch((error) => {console.log(error)}); /*show error if any*/ ii.) Internal API POST Request A POST request means that we are sending (POSTING) data. An Internal API POST request means we are sending data to the back end server of the same application. import axios from "axios" ... axios.post(url, "object to be posted")/*state url and data to send*/ .then((res) => {console.log(res.data)}) /*do something with data*/ .catch((error) => {console.log(error)}); /*show error if any*/ 2.) External APIs: These are APIs that are configured in either the front end or back end of applications to interact with an external server. The API is called in the front end/ backend requesting for data or sending data to the server of another individual’s server. We will be focusing on the two main requests, GET and POST, as these are the most widely used. To make proper external API requests, make sure you always check the external API’s documentation. However, this is the typical layout: i.) External API GET Request A GET request means that we are receiving (GETTING) data. An External API GET request means we are receiving data from the server of another person’s application. import axios from "axios" ... axios({ method: "get", url: "https://...../", headers: { x-api-key: *key/apikey* }, "params" : { custom_params : "custom_param_value" } }) .then((data) => {*do something with data*}) .catch((error) => {*do something error*}) ii.) External API POST Request A POST request means that we are sending (POSTING) data. An External API POST request means we are sending data to an external server. import axios from "axios" ... 
axios({ method: "post", url: "https://...../", data: { firstName: 'Peter', lastName: 'Parker' }, headers: { custom_header: *custom_key* }, params : { blah_blah: "place param value" } }) .then((data) => {*do something with data*}) .catch((error) => {*do something error*}) That’s it for calling APIs NOTE In addition, note the following: Only configure the routes in your backend server if it’s an internal API because External APIs do not require routes in your backend server as it interacts with routes on an EXTERNAL server. For headers and params, they are not always going to be configured in your API.Their requirement(if you should include them or not )will be stated in the external API documentation You can use both internal and external APIs in the same application. EXTRA: This is how backend routes are configured for Internal APIs *These are example Routes/End points. This code is placed in the routes file in the backend server* const mongoose = require("mongoose"); const express = require("express"); const router = express.Router(); const article = require("../models/articlesschema"); router.route("/addarticle").post((req, res, next) => { return article.create(req.body, (error, data) => { if(error){ return next(error); } else { console.log(data); res.json(data); } }); }); router.route("/").get((req, res)=> { return article.find((error, data) => { if(error){ return next (error); } else { return res.json(data); } }); }); router.route("/edit/:id").get((req,res) => { return article.findById(req.params.id, (error, data) => { if (error){ return next (error); } else { return res.json(data); } }); }); router.route('/update/:id').put((req, res, next) => { article.findByIdAndUpdate(req.params.id, { $set: req.body }, (error, data) => { if (error) { next(error); console.log(error) } else { res.json(data) console.log('User updated successfully !') } }) }) router.route("/delete/:id").delete((req, res, next) => { return article.findByIdAndRemove(req.params.id, (error, data) =>{ if (error){ return next(error); } else{ return res.status(200).json({ msg: data }) } }) }) module.exports = router; Fin.
https://medium.com/simply-complex/how-to-make-api-calls-in-fullstack-apps-or-react-apps-using-axios-12c2ae52d650
['Ohuoba Omoruyi David']
2020-11-09 11:19:05.244000+00:00
['React', 'API', 'Axios', 'Mern', 'Full Stack']
What You Don’t Know About Your Competitors
What You Don’t Know About Your Competitors How purchase data gives a broader view of your competition When you’re sizing up your brand’s competition, where do you typically look? Sometimes, we look at the category through the lens of product features. For example, ice cream competitors can be divided into the different formats (tubs, bars, cones, etc.). Then again, you could also look at the competition in terms of audience segment (health conscious, millennial, moms, etc.). However, identifying your competitors through product features or audience segmentation can prevent you from seeing how broadly your brand competes. The reality is that consumers do not exclusively buy a product format (i.e. ice cream bars) or a product designed for their audience segment (i.e. organic). Instead, per a previous blog post “5 Lies We’ve Been Fed About Brand Loyalty,” most consumers are occasional buyers who will buy ice cream bars on one occasion and opt for the organic offering on another. To understand the true buying behavior of a product, Byron Sharp recommends using “duplication of purchase” analysis to see the overlap of buyers in a category. Below is an example of duplication of purchase analysis for the ice cream category: The Duplication of Purchase Law indicates that the degree of buyer overlap in a category aligns with a brand’s penetration in the market. In other words, a brand shares a greater percentage of buyers with the biggest brands in the category and a smaller percentage of buyers with the smallest brands in the category. In the example above, you can see that all of the brands in the analysis share the most buyers with the largest brand, Carte D’Or (second column), and the fewest buyers with the smallest brand, Mars (seventh column). Key Takeaway: Sharp’s research tells us that a brand’s competition is defined not by how it is positioned in the market, but by the other brands purchased in the category. Moving forward, we should supplement our competitive audits with purchase data to understand the structure of the market and guide the category definition so we can keep our brand distinct.
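For readers who want to try this on their own transaction data, here is a toy sketch of how a duplication of purchase table could be computed. The purchase records and the choice of brands are made up for illustration only:

```python
import pandas as pd

# Toy data: one row per (buyer, brand) purchase occasion
purchases = pd.DataFrame({
    "buyer": [1, 1, 2, 2, 3, 3, 4, 5, 5],
    "brand": ["Carte D'Or", "Mars", "Carte D'Or", "Ben & Jerry's",
              "Carte D'Or", "Ben & Jerry's", "Mars", "Carte D'Or", "Mars"],
})

# Buyer-by-brand incidence matrix: did this buyer ever purchase this brand?
incidence = pd.crosstab(purchases["buyer"], purchases["brand"]).clip(upper=1)

# Shared-buyer counts for every pair of brands, then convert to
# "% of brand A's buyers who also bought brand B"
shared = incidence.T.dot(incidence)
duplication = shared.div(incidence.sum(), axis=0) * 100
print(duplication.round(1))
```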
https://medium.com/comms-planning/what-you-dont-know-about-your-competitors-1e9a3e6d497
['Julie Naidu']
2017-05-16 14:12:39.436000+00:00
['Strategy', 'Marketing', 'Advertising', 'Research']
Welcome To The Wilderness
It’s okay to pause here Here in between the promise and the things you know It’s not that you want to go back to how things were It’s just that familiar things are hard to discard It’s just that the future seems so hard And unknown And scary Welcome to the wilderness It’s okay to pause here And let the spirits of the Wild come and dance around you Let them welcome you to your initiation into the Wild inside of you While you wail for all you’ve given up to get here Welcome to the wilderness As the Wild rips off your clothes and dons you with dirt You sob for how terribly uncharted this territory is Surely you did not have to take the red pill Surely you could have found a way to stay comfortable Surely you could have made yourself smaller for them You could have betrayed yourself just a little more You could have fallen asleep “Wake up! Wake up! Wake up!” Chants the Wild Spinning you around until you can no longer keep a finger In the dam inside your belly Out comes terror Greif And rage Wailing and screaming Wailing and screaming And you realize the wails of the Wild dance in time with your own Their screams echo yours Their cry is your cry It has always been “Wake up! Wake up! Wake up!” Chant the trees Chant the wolves As you weep and gnash your teeth The release of your tears Your voice finally being used And the vines of terror creeping up your veins Are an invitation to realize your soul does not want comfort Your soul wants this This moment Experiencing everything you experience The places they used to fit you Cannot hold this They cannot hold your full experience They cannot hold your grief Your rage Your terror Your joy Only the Wild has space for you To be who you are It’s okay to pause here Here in between the promise and the things you know It’s okay to take your finger out of the dam And let your wild out After all, you are in the wilderness now What else is there to do Other than be yourself? The moment you took the red pill The moment you woke up Is the moment you gifted yourself with the space To be yourself To feel what you feel To use your voice Yes, it is terrifying But it is you And it is beautiful And for the first time in a long time You are living Welcome to the wilderness, darling. Welcome to you.
https://medium.com/blueinsight/welcome-to-the-wilderness-25d5064b011d
['Jordin James']
2020-12-07 01:36:19.349000+00:00
['Blue Insights', 'Spirituality', 'Self', 'Psychology', 'Poetry']
Introducing RISE v2
2018 was a very productive year for RISE. Let me recap some of the achievements the team was able to deliver during 2018: Along with all these updates, I started working in a private repository to create the new RISE v2. Why v2 The new v2 is built with the following concepts in mind: Extensible Modular Simpler More flexible These are the 4 core features that I think were lacking in the old v1 codebase, hence the rewrite. What’s new in v2 RISE v2 hosts several new improvements. Let’s start from the biggest change: RISE Modules Thanks to yarn and Lerna we were able to rewrite all the main RISE functionalities, separating the concerns of each module. This is not only a nice coding approach that allows component re-use, but it will also allow third-party developers to override core modules by injecting their own replacement module. For example, there is a “consensus-dpos” module which provides all the code needed to handle the DPoS rules and API; if a developer wants to, they might write their own consensus module and re-use all the other core modules. You can think about modules like pieces of a puzzle. As long as your piece has the same shape as the “original” one, it will fit nicely, creating a different picture. More info about modules here. Modules LifeCycle Since every module is an (almost) self-contained piece of software, it needs to follow a contract, or a so-called interface, in order to be a valid module. Module developers can also benefit from using the lifecycle declared within the above-mentioned interface. This is beneficial especially when modules need to initialize or tear down specific elements within their logic. The main 4 lifecycle events are (in order of execution): preBoot, boot, teardown, postTeardown. Modules can also override the configuration of other modules as well as add specific commander options when booting the application. Modules resolution DAG — Source: Wikipedia Since module ‘A’ could depend on ‘B’, a Directed Acyclic Graph is built on startup so that the lifecycle and config overrides respect the expected natural flow. This also ensures that no 2 modules reference each other (either directly or indirectly), leaving the codebase clean as well as making sure that different boots produce the same result in terms of booting priority. Since I follow (as much as possible) the KISS principle, modules can declare that they depend on another module by exploiting the ‘package.json’ capabilities that Node.js developers are already used to. Technology upgrades The underlying technology of the RISE core needs to be constantly updated to leverage both performance and security improvements. With this in mind we decided to update: Node.JS from v8 to v10; TypeScript from 2.8 to 3.4.5; PostgreSQL from 10.4 to 11.3. We then decided to remove Redis entirely, as it was no longer used in the codebase and there was no real reason to keep it as a third-party dependency. The WordPress-like Hook System If there’s one thing great about WordPress, it’s the plugin and hook system. Since one of the main assets of the whole WordPress ecosystem is the wide variety of installable plugins, it seemed a good idea to copy the good concepts and bring them, for the first time, into the blockchain ecosystem. In RISE v2 we use “mangiafuoco”, a Node.JS library written by me that mimics the WordPress hook system by adding utility functions.
Embracing this technology was kind of the next step when building a modular core since it allowed us to decouple modules inter-communication enhancing by several factors the maintainability and code clarity. To enhance and smoothen out the learning curve we even created TypeScript Decorators which you can apply on your methods. This will give you type-safety, the thing we all love about TypeScript. Example of multi-hooks usage. Of course, all filters and actions are declared in the “hooks” directory of each module allowing every developer to easily find exported actions and filters. Execution priorities you ask? We also support that and you can use it as well :). New Transaction Types and functionalities V1.x Transaction system has several design flaws and is not as flexible as I wanted it to be. The new tx schema has the following benefits compared to the previous version: Streamlined serialization and deserialization Ability to encode different address systems (more on this later) Unequivocally encode-decode payload Before this update, we had several constraints and, for example, the Ledger Nano integration was painful to code and test. The new tx schema opens a lot of new possibilities for RISE enabling the amount of flexibility we need to satisfy almost all use cases. Another nice side-effect of the new transaction types is that they do natively support having dynamic fees. Even if RISE does not necessarily need dynamic fees now, we always need to anticipate needs to avoid unnecessary network bottlenecks. The new Send transaction type has a new field that can hold up to 128bytes of raw data Send transaction with “banana” string encoded in the payload This opens up a lot of new possibilities and scenarios developers could use to permanently store data in the RISE blockchain. EG: An eternity wall, a payment processor, a notary system, … Arbitrary precision library with bigint In v1 we used BigDecimal javascript library which has been a great resource against very-well-known javascript issues when handling math. It’s very crucial that the math is bullet-proof unless you are a fan of Thanos and its snap and wanted to see it applied to the blockchain :). With v2 we leverage “bigint” which provides integers with arbitrary precision with blazing fast (native) speed. So, by switching to this new feature, we got rid of a third party dependency (which is always a great thing) and we leverage native speeds when dealing with big numbers. Introducing RISE address v2. The “oldy” addresses look like “12345678R” while this might sound great as only numerical digits are used, the way the address system was initially designed comes with 2 coupled security flaws: Address space is “only” 2⁶⁴. (compared to 2²⁵⁶ public-key space) 2 public keys can result in the same address. (collision attack) Along with the 2 issues above, the current RISE address system does not allow any data encoding so it’s basically impossible to have special addresses with a different purpose. Let’s get straight to the point. What does the new Address will look like? rise1q8gsvlw985fvx22me8mhnvpwhfzf4dkehzt5nq84mk35zup2prfs73z5ydg “OMG, that’s long… What are the PROs?” State of the art technology (borrowed from Bitcoin — bech32), flexible, has checksum!!!! Meaning that if you had to type a long address and made up to 4 mistakes we would be able to find exactly where the error was made, by far more secure, QR code efficient — easier to read. 
Along with all the goodies, we are making a big change in the way the 2 address systems will co-exist. Since V2, the primary account key will no longer be the publicKey but rather the address itself. Since V2 is, by far, more secure than the previous address system, we will discourage using V1 by raising the TX fees when sending/receiving. New P2P Comm Layer (Again) Since the move to ProtoBuf we basically solved most of the p2p bottlenecks. Unfortunately, some minor errors were made when designing the new p2p layer back then. The new P2P layer features: Same transactions-per-second output Same bandwidth efficiency More flexibility More robustness Let’s focus on the third and fourth elements above: The P2P layer has been decoupled and is now a separate module (“core-p2p”) which allows third-party devs to deploy their own p2p solution. For example, just for fun, a developer could use Morse signals over the infrared spectrum to send/receive data to/from neighbor nodes. About robustness: we designed the p2p classes to be easy to use and rock solid. The p2p layer will treat your data as an opaque message to be sent and received; serialization and de-serialization are up to the developer. Multiple codec technologies might co-exist; for example, a module could register a p2p endpoint for sending/receiving data in “plain-text” and another could use binary or ProtoBuf as the encapsulation method. Furthermore, requests can now be batched and can also eventually expire. Handy for many use-cases. The last breaking change in RISE v2 concerns the communication ports. In RISE v1.x we used the 5555 (or 5566) port for both public APIs and internal consensus communication. In V2 we decided to separate the 2 comm channels: Port 5554 for consensus communication; Port 5555 for APIs. This small change will further improve the performance of the communication layer by removing the API bloat from the consensus endpoints. Deprecating MultiSignature Accounts This might sound like a step back, right? No more multisig accounts… WhatTheHeck?? Well, there are 2 reasons behind this move: The multisig functionality was, to say the least, over-engineered and poorly conceived (both in code and database); The new address system is able to encapsulate multisig accounts, as in BTC. Note: you can’t really create multisig accounts in v2. (they’ll come soon™) To summarize, we removed a big chunk of codebase (and tests) that no longer needs to be maintained :). In RISE we also have second signature accounts. That functionality will be maintained as a separate module named after its functionality (core-secondsignature). We decided to keep this feature since many accounts were actively using it, compared to multisig, which was not used at all. Changes to the consensus (Again) Improving the current solution is crucial. That’s why we decided to stop distributing fees evenly to all delegates. “Wait, what?” In v1: All tx fees were collected and evenly distributed to all delegates who participated in the round. In v2: Delegates will be rewarded the fees they forge. In all honesty, the “v2 way” should have been “the way” from the start, as it provides an incentive for delegate nodes to keep their node up and running with proper networking. The Database Since RISE v2 is, besides many other things, a big refactor of v1.x, it was about time to change the database structure to a more meaningful schema.
A bunch of columns was removed from the accounts table and, at its base form, only 4 (vs over 10) column are needed. Furthermore, tables and some columns were properly renamed to follow a single standard convention. Idempotent DB Update Scripts Upgrading a blockchain isn’t as easy as you might expect. Database updates are no exception. In RISE v2 we decided to create “schema.sql” files which basically are idempotent. But what is idempotency? From wikipedia: Idempotence is the property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application. (wikipedia) With this change of approach we now have the ability to: Keep the current, most-up-to-date, database schema in a single place for easier maintenance and debugging. run the schema.sql upon each startup and make sure we don’t break anything How/When will the update happen? Since the p2p layer is no longer backward compatible we’ll provide an update script that will take care of upgrading the blockchain core at a specific point in time (or better said, in height). All node operator will need to trigger the update script before the deadline to keep their node(s) synced with the network. Delegates failing to do so will be eventually banned by the consensus leaving space to worthy delegates that updated instead! (more info). Of course we’ll give it a go using testnet first that has been rebooted a couple of weeks ago to have the same features and configuration we have in mainnet. If interested, please join our Slack, and enter the #testnet channel to receive some testnet tokens to play with and instruction on how to set up a node.
https://medium.com/rise-vision/introducing-rise-v2-521a58e1e9de
['Andrea Baccega']
2019-06-03 18:59:55.491000+00:00
['Blockchain', 'Typescript', 'Cryptocurrency']
What Exactly Is Creativity?
Creativity is giving yourself the freedom to share your truth. Not standing in your own way, not judging yourself on the ideas still in your mind, but allowing them to flow through your fingers without judgement. That is what creativity is. Creativity is freely sharing your truth with the world and yourself without judgement. If you share your truth without freedom, that’s not creativity. Creativity is about making something that’s never existed, without fear or judgement. That’s creativity.
https://medium.com/afwp/what-exactly-is-creativity-492e94a454d0
['Joshua Idegbere']
2020-11-23 15:31:08.165000+00:00
['Writers On Writing', 'Inspiration', 'Artist', 'Creativity', 'Art']
Keep Writing
Your Path by Nicole Ivy I know it’s only the first week for me on 100 Naked Words but sometimes I feel like giving up. The truth is, it’s hard to create something that I’m okay with posting everyday. And I don’t want to force it. I know that the situation I’m in right now won’t make it easy to keep up. I’m a volunteer at a sustainable eco-village community in Florida. This means I don’t have the easiest access to wifi. And as an introvert, it takes a lot of energy to go to a new place and meet so many new faces and personalities. I’ve also had to change everything about my routine. And I’m living in a tent. Having to create something I feel proud enough to share on Medium is challenging. I’m tired! But… I am so grateful that I keep choosing things for myself that are interesting, meaningful and fun. I didn’t used to live my life that way. I’m making more of my choices based on what I love and value rather than on fear. I’m going to be learning skills and more about permaculture. I’m living in paradise! It’s never perfect but I’m just so grateful that I’m living in choice rather than habit and the beaten path. The past 5 years have shown me that I can do anything. So I will rise to the challenge, 100 Naked Words! Let’s do this!
https://medium.com/100-naked-words/keep-writing-43c8f068fedf
['Nicole Ivy']
2020-01-14 17:01:01.286000+00:00
['Writing', 'Life', '100 Naked Words', 'Challenge', 'Determination']
5 lessons from a young leader. A reflection on self-confidence and…
5 lessons from a young leader A reflection on self-confidence and overcoming Imposter Syndrome Photo by David Marcu on Unsplash At 23, I became the manager of my company’s User Experience team. A few years prior, I had started in design as a one-person show. We were a small company, with around 30 employees worldwide. Back then, UX was a young, unexplored discipline for us. Like any start-up, relentless efficiency was our mantra. I loved every second of the hustle. Still, I was usually bent over my desk, churning out designs for 4–5 projects at once, too busy to slow down and look at the bigger picture of why I was there. Being the only UX designer in a brilliant and fast growing company also meant I lived with constant paranoia. Imposter syndrome felt like an overwhelming beast. I was certain it was only a matter of time before others realised I was a fraud basked in colours. I’d dismiss any positive feedback thinking my colleagues didn’t know better. Everything I did was new to the organisation, so at the end of the day, my work was the standard. If I happened to set the bar low, it was still as high as they’d ever known. I became even more out of my depth, paralysed with fear, when it was time to expand the team. I was about to become a manager. I had the confidence of my company to guide the team, but I felt 10 centimeters tall. Do I have what it takes to lead a team? I don’t know what I’m doing! What if I don’t have the answers to the difficult questions my team members have? They’ll see right through me! I spent months trying to find my feet. I have since then come to accept there is not always a clear path to good leadership. That said, the following five lessons have helped me make sense of the journey ahead as a young leader. 1. Don’t think too hard. Make a start. You are where you are because you’ve earned your spot there. Learn to trust and back yourself. When a new and complex task arrives, it will feel like everyone is looking to you for direction. Don’t freak out. Slow down instead. Start from what you know. You’ll understand the problem better if you help the team start small. Great work requires more than great individuals, it requires a team. As a team, you’ll be able to leverage each other’s knowledge and skills to achieve a common goal. So don’t fret. Make a start. You don’t need to know what it takes to do the job. Once you’ve done the job, you’ll know what it takes. 2. Base your decision on what’s best for the company Teamwork is the perfect example of the whole being greater than the sum of its parts. As a team leader, you’re responsible for the performance of your team. If your insecurities and ego are clouding any judgment, take a step back and focus on what is best for the company. Ask yourself who would be most suitable for the job to achieve the highest quality outcome. This will help you and your team stay focussed on the tasks at hand. When your team members share a new idea or come up with an intelligent solution to a tricky problem, embrace it. It doesn’t matter who comes up with a solution first. What counts is the new perspective the team now has to inform the next challenge you face together. 3. Provide opportunities you wish you had Being in a small team or the only member of a team means it’s very easy to hoard the opportunities. It takes a special kind of courage to let someone else in on the fun. Good employees crave trust and are unafraid to work for it. Create room for your team members to add value. 
The most meaningful way to do this is by delegating responsibilities you would’ve wanted. When your team rises to the occasion, acknowledge it. This will encourage them to help set the bar higher for the rest of the company. 4. Look back on the stepping stones you’ve placed Take time to recognise the contribution you’ve made to support your team on their own journeys. Your team members have voiced differing opinions? Great, you’ve created an environment where they feel comfortable doing so. Your team members have improved systems you’ve set in place? Fantastic, you’ve set up structures that were easy to follow. Your team members have refined existing processes? Excellent, you’ve built a solid foundation on which others can form better ideas. People we admire most also had others before them showing the way. It is a humbling experience knowing you’re paying it forward. 5. Be kind and forgiving towards yourself Don’t be quick to criticise and undersell your abilities. None of the leaders I know feel ready all the time. The vulnerability and humility that arise from this is a beautiful thing. Accepting you still have a lot to learn is a great exercise in self-awareness. It means you are recognising there are opportunities to improve. I have gone from being too afraid to reveal I don’t know something to saying, “I can’t answer that.” Later, I found myself comfortable adding, “I can’t answer that, but I can point you to the person who can.” Nowadays, I am able to admit, “I am not sure”, but then ask, “What do you think?” In time, you will mature as a leader by taking responsibility for the outcome of the team. So be kind to yourself when things go sideways. It may not always feel like you’re moving forward. It’s important yet to realise even a lateral step can open up new paths for those behind and beside you.
https://medium.com/age-of-awareness/five-lessons-from-a-young-leader-bcab781412f1
['Tina Hsu']
2020-04-26 03:16:58.157000+00:00
['Leadership', 'Design', 'Learning', 'UX', 'Imposter Syndrome']
Front-End Architecture for Scale
Three Constraints From my experience in working on large legacy systems and also immature systems still in development, the following are three important constraints that I have identified for building resilient front-end architectures. 1. Source code dependencies must point inwards A few ways of organising our dependencies It is important to think directionally. The picture above demonstrates three well-known ways of organising our application’s dependencies directionally. Your app is a big ball of mud when you don’t know what may not depend on what in your app and there are no rules to restrict mutations in your components. This is something we always aim to avoid. In a layered architecture, we have rules about what code may depend on and whether the direction of dependencies is only in a single direction. And then we have our favourite, modular architecture. Almost all of the time, if I asked an interview candidate about best practices, they would drop a line about how the code should be organised in a modular fashion. And when I asked them to elaborate on this, I’d hear about how we should break our code into different modules. Unfortunately, that’s not what makes your code modular. When I speak of modular architecture, I refer to things such as a micro front-end approach or mono-repo, code bases where you have modules with a very small API service area which are then able to call up to other modules which in turn are mono-functional. I’m sure you must be able to spot the big difference in the three architectures talked about here: dependency organization. In a big-ball-of-mud architecture, if you make a change in one part of the application, it is very difficult to predict the scope of regression, and that makes it a perfect way to not only cause cross-team conflicts but also to burn bridges with your QA team. After all, nobody likes to see an urgent ticket on their board because another team made a change without realising how it would affect their part of the code. However, in the layered approach, there is still a lot more predictability and control on the amount of surface area of the code that will be affected by a certain change. And better still, if you’ve organised your application on the basis of how your teams are set up, then the chances of having cross-team conflicts are contained to a large extent. Real-world example for layered architecture The image above shows a real world example of layered architecture. The router layer is the entry point to any page in your application, and then comes the data layer which handles all the API requests, caching, and duplication of data. Then comes the different pages/components of the application, each with their own UI layer and business logic. The important thing to note here is that these two pages are completely isolated from each other. Not only does this provide ease of testing and predictability of regression changes — for if they were sharing some code or business logic, changes in one page could have had unintended side effects on the other. However, with this model, it is easier to isolate the impact of changes, which is important for speedy and good quality delivery. And that’s something that directly results in a product that is more resilient to changes. This brings us to the next constraint. 2. Being conservative about code reuse The above is a tweet I did long back, and it summarises the point behind this constraint. 
When deciding whether to reuse a piece of code, one must think about it critically rather than seeing it as an abstraction waiting to happen. This helps us avoid coupling code that can diverge over time due to different business requirements. As engineers we love DRY, and when we're able to make some code DRY by introducing an abstraction, it feels like a eureka moment. It is the reason for our existence in this industry despite the tight competition. But the problem crops up when, in the name of DRYness, we link pieces of code that are not supposed to be linked. That makes our code brittle and prone to side effects. Hence, when deciding whether to reuse code, we should try to have some idea of the roadmap and how the business requirements will change over time, and then weigh up whether it makes sense to reuse the existing code or simply copy-paste it, so we don't couple it to the roadmap of some other feature and create problems in the future. So, in a nutshell, always remember: decoupled > DRY. 3. Preserving your architecture over time Coming to the last, and very interesting, constraint, let me recall an incident at my previous company. We were developing a very complex product, and in our discussions we decided to maintain a Confluence doc about the architectural decisions. It was very elaborate, with all the UML diagrams, which we thought would be helpful for onboarding new team members as well as for maintaining standards. But what happened over time was that even the existing engineers, let alone the new ones, tended not to consult that Confluence doc in detail before introducing new features and other changes. Similar-looking pieces of code that were meant to stay separate were abstracted and generalised. PRs were reviewed only on the basis of syntax, not from the point of view of their impact on the initially planned architecture. After some months the application made us pay for it, and we ended up untangling dependencies and rewriting large parts of the code. Instead of working on more complex problems such as scaling, senior engineers were busy rewriting code that should ideally have been written according to the constraints we had documented to preserve the architecture. To prevent this situation in the future, we decided to write forbidden-dependency tests. These tests let you verify the structure of your application from a dependency standpoint. So next time you want to prevent somebody from coupling to the code you're writing, you can write a plug-in that traverses your dependency graph and checks for forbidden dependencies. More specifically, if you're working in the JavaScript ecosystem, you can try out Dependency Cruiser ( npm i dependency-cruiser ). It lets you define rules about what can depend on what in your application inside a file called dependency-cruiser.js , and the tool then validates your application's dependency graph against these constraints and reports violations both as text and visually, by generating a dependency graph image for you. Sample rule in dependency-cruiser.js You can customise the settings for this tool to check for circular dependencies, orphans and widows, missing dependencies in package.json , and production code relying on dev or optional dependencies. Better still, you can run these tests as part of CI.
So the next time somebody diverges from the original architecture and introduces unwanted complexity into the code, you don't have to wait until the entropy grows large enough to stall new development, or argue with other engineers about reverting the change: the CI makes the argument on your behalf. Boiled down: it is not enough to document boundaries, you also have to enforce them in code.
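As a footnote to constraint 3: the dependency-cruiser rule above is shown only as an image, and its configuration is JavaScript-specific. To illustrate the underlying idea of a forbidden-dependency test in a language-agnostic way, here is a minimal sketch in Python; the directory src/pages/checkout and the module prefix pages.profile are hypothetical, chosen to mirror the isolated-pages example earlier, and this is not dependency-cruiser itself.

import ast
from pathlib import Path

# Hypothetical rule mirroring the layered example above:
# nothing under pages/checkout may import from pages.profile.
FORBIDDEN = {"pages/checkout": "pages.profile"}


def imported_modules(source_file: Path) -> set[str]:
    """Collect every module name imported by a Python source file."""
    tree = ast.parse(source_file.read_text(), filename=str(source_file))
    modules = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module)
    return modules


def check_forbidden_dependencies(src_root: Path) -> list[str]:
    """Return one violation message for every forbidden import found."""
    violations = []
    for from_dir, banned_prefix in FORBIDDEN.items():
        for source_file in (src_root / from_dir).rglob("*.py"):
            for module in imported_modules(source_file):
                if module == banned_prefix or module.startswith(banned_prefix + "."):
                    violations.append(f"{source_file} imports {module}")
    return violations


if __name__ == "__main__":
    problems = check_forbidden_dependencies(Path("src"))
    for problem in problems:
        print("Forbidden dependency:", problem)
    raise SystemExit(1 if problems else 0)  # non-zero exit fails the CI job

Run as a CI step, a check like this fails the build the moment a forbidden import appears, which is exactly the kind of enforcement the article argues for.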
https://medium.com/better-programming/frontend-architecture-for-scale-c4acc44a214e
['Manisha Sharma']
2020-11-27 17:16:00.573000+00:00
['Nodejs', 'React', 'Architecture', 'Programming', 'JavaScript']
Absolutely Electrifying
Absolutely Electrifying Like a shot of adrenaline to the heart Image by FelixMittermeier from Pixabay When the words strike you in just the right way at just the right time they can make you forget the world around you as your attention is consumed by whatever it is that your reading. It’s not easy to capture people’s attention these days, seeing as how it is split down so many different channels. Sitting down to read a ten minute article seems primitive. But I have you in this moment and I’m determined not to let you go. My father screamed at me once when I was trying to explain to him why I was arguing with my mother. I told him I was just trying to make my point. At the top of his lungs he hollered, ‘When are you going to realize that life isn’t about making a fucking point!’ I got up from my seat slowly. My whole body was shaking. I said very softly, ‘Don’t you ever speak to me like that again. Never. On your life. I swear to God. I swear to God.’ My father was sitting and I was standing over him with my hands balled up into fists. My mother must have heard the commotion because she rushed in and put a hand in my back and began guiding me out of the room as I looked back at my father, repeating ‘I swear to God’ over and over again. My mother said that she was very frightened. She said she was relieved I let myself be led out of the room, because otherwise she would have had no choice but to call the police. I’ve always been an angry man. You wouldn’t know it by talking to me. I come off as perfectly polite and congenial. I even asked my best friend of twenty years the other day if he’s ever seen me truly angry and he admitted that he has not. That’s astonishing really, considering how prone I am to fits of volcanic rage. When the rage takes over it’s like the detonation of a thousand nuclear warheads. The world goes up in flames. All concern for consequences goes out the window and it’s blood red wherever I look. This anger is dangerous. It’s volatile, destructive, exhausting and corrosive. I’m getting too old for temper tantrums. You know, I could write a paradigm-shifting viral masterpiece today and I would still be underachieving. I’m not supposed to be here. If I succeed, okay, I was supposed to. I was gifted with all the advantages a citizen of the world could hope for. If I fail then I’m doubly damned. Expectations were so high and I shattered every one of them. Now if I survive the day it’s a noteworthy accomplishment. I’m so sick of this. I’m so sick of being fixated on my own specialness. I’m so sick of being the impostor. I’m so sick of not giving my best effort. You think you’ve seen my best, but you haven’t. I’m afraid of giving everything, because then the only way to go is down. I’m reticent, paranoid and superstitious. But maybe that’s my black-and-white thinking, devoid of subtlety and nuance. One piece doesn’t have to be better than the last, as long as it’s different. Authors don’t always try to make their new book better than their previous book. They understand that each one has its own distinct personality, that it is its own unique work of art. So instead of resorting to a game of creative one-upmanship maybe I can make peace with whatever I’m working on in the present. I like that idea. Soon after my dear friend died I found myself writing a poem about him and it talked about how in the future the ice that freezes us in place would melt and our streams would cross over each other once more. I really hope that’s true. I miss him terribly. 
It’s been almost six years and I grieve him every day. I wish they didn’t have to leave. It’s so lonely here. I’m so lonely without you. And Fifty, you wouldn’t believe what I’ve been through. I really could have used you, brother. Percy is a good dog, but Percy is not the same. He doesn’t possess your gentle spirit or your good humor. I could never use Percy as a pillow to fall asleep on. Oh Fifty, oh Ella, oh these lights of my life. It’s all so painful. I loved them with a love I could never give myself. We might ask this question once a year, once a month or on a daily basis, but at some point we will all ask it, with rare exceptions. Have I done something to positively impact another person’s life? I know that’s in the front of my mind while I’m writing for my small audience. I want to write the piece that makes your morning, the piece that makes you smile, makes you think, lets you know you’re not alone. Why else would I be writing if not to benefit others? For my own enjoyment? That’s really not why I’m here. And trust me, I am no altruistic angel in disguise. It’s just how I’m wired. My work can’t just be my work. It has to be useful in some grounded and pragmatic way. It has to resonate with people. People have to be able to see themselves in it. My older sister is coming by today. We’re going to sit in the backyard and chat. Also, she asked me if I would read The Trotter Sisters Meet The Vampire to her. I said I would love to. I only have a few teeth left in my head these days and my mouth is always as dry as the desert so reading can be a challenge but it feels like story time around the campfire. It feels like we are participating in an ancient tradition. And what an honor to be asked. To think that my sister actually enjoys listening to me read my work. Hard for me to believe. And what is this? Whatever it is, I want to shut it down. This constant negativity and lack of self-belief. I’m constantly surprised someone is interested in me, surprised someone likes me, surprised someone may find something of worth in my work, surprised I’m wanted, surprised I’m needed, surprised that maybe I had to earn my way to purgatory. It’s like I have to prepare myself for when someone discovers the awful truth, that I’m a hideous person who can do nothing of merit. It’s also like I don’t care enough about myself to hold on to the well wishes, compliments and mere statements of fact. I let it go. It’s not for me. You must have the wrong number. They say we become what we repeatedly do. I’ve become a myopic mentally ill addict with no self esteem. My thoughts are not the thoughts of a healthy person. Healthy people don’t obsess over angels, demons and death on a regular basis. Healthy people don’t demand apologies from God. Healthy people’s heads aren’t filled with mistakes and sad memories, with failure and lost faith. I had to work hard to become a tortured artist. I didn’t become a cliché because I thought it was cool. I took the hand that I was dealt and I played like a drunken blind man who never bothered to learn the rules of poker and who continues to throw good money after bad. I didn’t ask for anxiety and depression and a paralyzing personality disorder. But I did pick up that first drink and pop that first pill. I made the decision to spiral. I made the decision to rob my family. I made the decision to lie every chance I got. I made the decision to end my life. And here I am, sick to death of a self I can’t escape. I wish I had good news. 
This was supposed to be an energy shot for your soul. Something to bolster your flagging spirit. I wanted to spin a tale with eloquent, elegant run on sentences that cascaded down the screen like the Niagara Falls of verbal excellence, washing away all your worries and stimulating your mind with their power to transform the mundane into the magnificent. I fell short of my goal. I made the tragic mistake of accepting the average, when I should have been shooting for greatness. I just tried to be honest. That has to count for something. And we’re not through yet. I want to take this small space and dedicate it to my sister, Susan. I love her with my whole heart, and I don’t love easily. I trust her with my life, and I don’t trust easily. We just click. She also deals with mental health issues, namely OCD, and has lived enough of her life around me that she can truly empathize with my struggles. She never blames or shames or judges me. And she’s different than other people whose love is conditional, who love me when I know my place, who love me when I don’t stray too far from the prison we’ve trapped ourselves in, whose love stops when their interests are no longer being served. She just wants what’s best for me. It’s a selfless love and I feel undeserving of it. Two weeks ago I read her two short pieces and she asked me the question that will burnish any writer’s ego, “How do you come up with this?” She sincerely wanted to know, as a non writer, how I came up with my ideas. She was interested in the process. This was exciting. Someone who genuinely admires what I have created and who wants to know more about how the sausage gets made? That’s the best compliment I could ever receive. So thank you, Susan. You’re simply the best. Until next time, faithful reader. Be well, Timothy
https://medium.com/grab-a-slice/absolutely-electrifying-b8bf7009d01e
["Timothy O'Neill"]
2020-12-13 15:54:32.571000+00:00
['Addiction', 'Spirituality', 'Life Lessons', 'Mental Health', 'Self']
Meanwhile in America
Alarming news! In from the New Yorker, That the health of poor Poesy has gotten worse! There are many theories and we have from her doctor That what’s bad for her heart is free verse. “She was desperate,” said her husband, Time, “She had strayed so far that she could not tell If music was carried away in the crime, Elsewhere bound, exerting good, hot hell.” Grandson Rap broke in: “What the hammer is happenin? Man, don’t jive, Nana, she live, and rhyme thrive Like bongos clappin’, hands a slappin’, an’ fires crackin’ At Pop with an edge she give, actin’ youn’, tryin’ to live ~” Then in with a hush ~ the Government Minister To read the last rights: “Whatever must be, must be… Please trust in your will’s administrator… Who we will appoint, whose views are not so lusty.” That’s the story; what can we say? Not much. Our only choice is to become the predator, Not exactly to devour, but only to touch Your human fragility; no matter the editor.
https://medium.com/no-crime-in-rhymin/meanwhile-in-america-b03c5a91b38c
['Hank Edson']
2020-02-25 12:31:01.128000+00:00
['Inspiration', 'Humor', 'Writing', 'Poetry', 'Writing Tips']
Building the Tree of Knowledge — Exploring the Use Case for Educational Chatbots
Building the Tree of Knowledge — Exploring the Use Case for Educational Chatbots With an increasing number of tech giants getting into the business of the smart home assistant, modern life is edging closer and closer to a sci-fi movie's depiction of future home life. “Hey Siri, add bananas to my shopping list.” “Alexa, buy more deodorant.” “Ok Google, set my alarm for 7 AM.” Beneath the dazzling home-control functionality and fancy voice interactions lies the essence of a chatbot program, which maps human utterances to intents, and from intents reaches the programs that carry out the intended tasks: Utterance → intent → task procedures → output This is exactly the model behind Amazon's Alexa, whose outputs drive hundreds of services: ticket booking, food ordering, hotel room reservation, and more. Smart Assistants in Education While the use case for an education-industry chatbot may be slightly different, its primary purpose is to retrieve the knowledge required by the student — be it disambiguation of concepts, clarification of contexts, or fetching resources. Like home assistants, it too is bound by only one intent: information retrieval. The assistant may depend on a single NLP (natural language processing) algorithm or a combination of several to handle the query, but without properly structured data, NLP algorithms alone will still produce a certain share of wrong outputs. Sensible curation of data is essential for effective information retrieval. For a search engine, data is organised as an index list, but for a domain-specific knowledge base, information is adapted into a tree structure. There is a reason we use the phrase knowledge tree — the tree branches out as knowledge goes from general to specific, from low resolution to high definition. The Tree of Knowledge Knowledge points are stored as tree nodes, and identifying the most relevant topic is simply a traversal of the tree, with each step guided by a particular filter function. The transformation of Wikipedia's knowledge base for “Anthropology” into a tree structure. Answering a query then takes O(log n) time. Organising the data in a way that is optimal for the intended task is essential to information retrieval. A tree structure enables fast and accurate queries, which makes it an ideal way to structure a chatbot's knowledge base.
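A minimal sketch of the guided traversal described above, assuming a node type and a scoring function of my own naming (neither comes from aXcelerate's actual system): at each level the query descends into whichever child the filter function rates most relevant, so a balanced tree is walked in roughly O(log n) steps.

from dataclasses import dataclass, field


@dataclass
class KnowledgeNode:
    """One knowledge point; children refine it from general to specific."""
    topic: str
    content: str = ""
    children: list["KnowledgeNode"] = field(default_factory=list)


def relevance(query: str, node: KnowledgeNode) -> int:
    """Toy filter function: count query words that appear in the topic name."""
    topic_words = node.topic.lower().split()
    return sum(word in topic_words for word in query.lower().split())


def retrieve(root: KnowledgeNode, query: str) -> KnowledgeNode:
    """Descend the tree, at each level following the most relevant child."""
    node = root
    while node.children:
        best_child = max(node.children, key=lambda child: relevance(query, child))
        if relevance(query, best_child) == 0:
            break  # no child is more specific than the current node
        node = best_child
    return node


# Tiny example tree in the spirit of the Wikipedia "Anthropology" illustration
root = KnowledgeNode("anthropology", children=[
    KnowledgeNode("cultural anthropology", "Study of cultural variation."),
    KnowledgeNode("linguistic anthropology", "How language shapes social life."),
])
print(retrieve(root, "what is linguistic anthropology").topic)

This is only the retrieval half, of course; in a real assistant the filter function would be an NLP model rather than simple word overlap.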
https://medium.com/vetexpress/building-the-tree-of-knowledge-exploring-the-use-case-for-educational-chatbots-fbb860d81dde
[]
2018-04-11 23:48:09.801000+00:00
['Edtech', 'Artificial Intelligence', 'Chatbots', 'Technology', 'Education']
Journalism and COVID-19: Impacts of the Global Pandemic
Key points by Ioanna Georgia Eskiadi The COVID-19 pandemic has impacted all facets of society and the economy. In this webinar, the ways in which the pandemic has affected the journalism industry, journalists themselves, and media outlets were discussed. Across news and media outlets, workers are facing layoffs, positions are being cut, and publications are folding. Beyond job insecurity, threats against journalists — such as verbal attacks and police harassment — have intensified and compromised journalists' safety during COVID-19. This raises concern about the short- and long-term implications of COVID-19's impact on journalism and the media. The pandemic has dramatically reshaped the journalism industry, both now and for the future. During the COVID-19 pandemic, three big trends in the field of journalism have emerged: 1. The financial position of many news and media organizations has been challenged by limited advertising opportunities, changes in business plans, loss of income, job losses, and pay cuts. Panelists also noted that their newsrooms' financial concerns centered on covering new operating costs such as training on new technologies and on science and health reporting; supporting remote reporting and publishing; and utilizing advanced verification and fact-checking systems and methodologies. 2. There was a massive uptick in news consumption and interest at the start of the pandemic, driven by the fear and uncertainty it created. This created a need for new formats to catch the audience's interest, such as newsletters, podcasts, and subscription services; consumers were more willing to pay for news because they understood the value of information in such an uncertain period. 3. Threats to journalists on the front lines of reporting have increased during this period. Coverage of the pandemic has increased the need for more training and safety protocols to support and protect journalists. This era is also creating an “infodemic,” resulting in public concern about misinformation and trust in the media. The “infodemic” demands finding a balance between media freedom and freedom of expression on the one hand and the ability to fact-check and correct misinformation on the other. Speakers agreed we should focus on protecting the mental health of journalists. Many journalists were called to cover the pandemic in its early stages, causing psychological distress from reporting in such uncertain circumstances. Journalists were working more hours while still dealing with isolation, financial issues, and the search for sustainable journalism models. When the pandemic began, the field of journalism was already facing problems that COVID-19 further exacerbated. Panelists noted that the most difficult aspects of covering COVID-19 are the psychological and emotional impacts, concerns about unemployment and finances, the intense workload, social isolation, and the physical risk of contracting or spreading the virus. One panelist detailed a new report on the field of journalism during the pandemic, with statistics such as:
· news organization revenue declined more than 75 percent during COVID-19
· 30 percent of journalists said their news organization was not supplying any safety equipment for field reporting
· 20 percent of journalists said online harassment was "much worse" during the pandemic, yet 96 percent indicated that their employers offered no help in dealing with the problem.
Relatedly, journalists noted that they encountered COVID-19 disinformation daily, with the most prolific spread of disinformation coming from Facebook, Twitter, and YouTube. Other top sources of disinformation are ordinary citizens, political leaders and elected officials, attention-seeking trolls, profiteers, propagandistic or heavily partisan news media or state media, and identifiable government agencies or their spokespeople. The webinar ended on a positive note, as the panelists said they felt an increased commitment to their craft and that news consumers trusted them. They had a positive outlook on the future of the industry, noting the opportunities for collaboration and team efforts. Key points:
· Fear of COVID-19-related issues
· Keep your sources close and avoid disinformation
· Check and cross-check the information
· Need for more video interviews and podcasts
· Increase in content creation
· Need for more quality and better representation of society
· Journalists have suffered psychological distress
· Journalists felt at personal risk while reporting
· More working hours during the pandemic
· Need to solve financial issues for sustainable journalism models
The discussion
Speakers: Fatima Bahja is a Research and Proposal Coordinator at the International Center for Journalists in Washington, DC, USA. Sher Ali Khalti is a Staff Reporter for The News in Lahore, Pakistan. Sarah Scire is a Staff Writer for Nieman Lab at the Nieman Foundation for Journalism in New York, USA. Parvathi Benu is a Senior Reporter and Sub Editor at The New Indian Express in Kerala, India, and a U.S. State Department Fellows Exchange Alumnus, Fall 2019. Damian Radcliffe is the Carolyn S. Chambers Professor of Journalism and a Professor of Practice at the University of Oregon, USA.
https://digicomnet.medium.com/journalism-and-covid-19-impacts-of-the-global-pandemic-5d3aa09a1802
[]
2020-10-15 19:28:08.513000+00:00
['Covid 19', 'Webinar', 'Journalism', 'Dcn']
Objectivity is Dying. This is What Comes Next.
For better or worse, partisan journalism is here to stay. Photo by Charles Deluvio on Unsplash The arbiters of information, perpetual seekers of a scoop, the sometimes-affirming, sometimes-spurious mouthpieces of the body politic — call them media, call them press, these are the hats of a journalist. These roles of the press remain today, and have remained as such since the inception of the profession. The speed, pressures and influence of reporting, however, have evolved with dramatic effect. Pamphlets and periodicals marked humanity’s initial foray into the world of news; penny papers followed shortly thereafter. The Boston News-Letter of 1704 was America’s first regular newspaper. At the time, the news of the day could more aptly be described as the news of last week, sometimes last month. Information moved like a cloud in a windless sky. Local news took the slow lane (the fastest lane available at the time) and intercontinental news traveled as if by sloth. The advent of the telegraph in the mid-19th century extended the reach of news. Innovations abound, the swiftness with which information spread skyrocketed. An evolution was under way: hearsay from a tête-à-tête at the local watering hole became ink on a newspaper, which ultimately beget the enterprise that is the news media industry. The onset of printed news as a new medium was not without a ripple effect (McLuhan’s dictum comes to mind). What started as a new form of information dissemination became not only a lens with which to view the world, it became the lens. Independent thought became greatly crowdsourced. To ensure you were keeping abreast, top reporters and journalists could — via newspaper column or periodical — sit in with your morning coffee or afternoon biscuit. The news (i.e. the press, the media, journalists) became the mouthpiece of the nation, the means with which places and events interacted and danced within the minds of the public.
https://philrosenn.medium.com/objectivity-is-dying-this-is-what-comes-next-a27547208e30
['Phil Rosen']
2020-10-21 15:06:31.539000+00:00
['Election 2020', 'Politics', 'Journalism', 'News', 'Government']
Machine Learning In Healthcare: Detecting Melanoma
Machine Learning In Healthcare: Detecting Melanoma Using the patient's diagnosis report and skin lesion images to detect whether the lesion is cancerous or non-cancerous by applying several machine learning algorithms. Photo by National Cancer Institute on Unsplash Skin cancer is the most common type of cancer. It occurs due to the abnormal growth of skin cells, usually on the areas exposed to sunlight. There are three major types of skin cancer — basal cell carcinoma, squamous cell carcinoma, and melanoma. Melanoma, specifically, is responsible for 75% of skin cancer deaths, despite being the least common skin cancer. Melanoma is a deadly disease, but if caught early, most melanomas can be cured with minor surgery. In this article, we shall identify melanoma from the patient’s diagnosis records and images of their skin lesion by applying several machine learning algorithms to classify the lesion as benign or malignant. Better detection of melanoma has the opportunity to positively impact millions of people. Note: The following code is implemented on a Kaggle kernel using the dataset SIIM-ISIC Melanoma Classification provided by Kaggle. You can find the entire code here. We shall follow the following steps for implementing the algorithm. Importing Libraries Let’s start the implementation by importing the required libraries inside a Kaggle kernel as shown below: import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import cv2 import pydicom as dicom import pandas as pd import numpy as np from sklearn.metrics import accuracy_score, confusion_matrix from sklearn.metrics import mean_squared_error, r2_score The pydicom library imported, is used for working with images provided in the dataset. These images are of type DICOM (Digital Imaging and Communications in Medicine) which is the international standard to transmit, store, retrieve, print, process, and display medical imaging information. Analyzing Dataset The next step is to import the dataset SIIM-ISIC Melanoma Classification provided by Kaggle. df = pd.read_csv('/kaggle/input/siim-isic-melanoma-classification/train.csv') df.head() SIIM-ISIC Melanoma Classification Dataset From the dataset given above, we can infer that classification of melanoma depends upon two things, the patient details and the image of the patient’s skin lesion. The whole process is explained below. We take the patient’s details and skin lesion images and then apply the machine learning algorithm to detect whether or not the person suffers from cancer. Image Classification Let us start by detecting melanoma present in the images provided in the dataset. Melanoma signs include: A large brownish spot with darker speckles A mole that changes in color, size or feel or that bleeds A small lesion with an irregular border and portions that appear red, pink, white, blue or blue-black A painful lesion that itches or burns Dark lesions on your palms, soles, fingertips or toes, or on mucous membranes lining your mouth, nose, vagina or anus Using these signs, the machine learning algorithm classifies skin lesions as benign or malignant. Lets us start the implementation by taking a small sample of data from the entire dataset to work on. s0 = df.target[df.target.eq(0)].sample(50).index s1 = df.target[df.target.eq(1)].sample(60).index df = df.loc[s0.union(s1)] df['target'].value_counts() This is our newly generated dataset which we will use to train our machine learning models. Lets us now take a look at the images. 
These images are in the DICOM format, and therefore we use the dicom.dcmread() function provided by the pydicom library to read them. image = '/kaggle/input/siim-isic-melanoma-classification/train/' + df['image_name'][1512] +'.dcm' ds = dicom.dcmread(image) plt.imshow(ds.pixel_array) Picture showing the difference between malignant and benign cancer The next step is to train our machine learning models. In order to do that, the images should first be represented in a format understood by the models. To do this, we need to convert the images into their pixel format. Therefore, for every image in the dataset, we read the image using dicom.dcmread() and extract the pixels using ds.pixel_array. These pixels are multidimensional, and hence we convert them to a one-dimensional array using the flatten function. We then append these pixel arrays to a list named images. images = [] for x in df['image_name']: image = '/kaggle/input/siim-isic-melanoma-classification/train/' + x +'.dcm' ds = dicom.dcmread(image) pixels = ds.pixel_array images.append(pixels.flatten()) Representation of images using pixels Now, the issue that arises is that each image has a different number of pixels, and therefore the generated arrays have uneven lengths. To overcome this, we use padding, which either adds extra values to the array or discards values so that the length of the array equals the specified maximum length. import tensorflow as tf images = tf.keras.preprocessing.sequence.pad_sequences( images, maxlen = 720, dtype = "int32", padding = "pre", truncating = "pre", value = 0 ) We will also need a test dataset containing images to test our model. We can generate the test dataset as follows. test = df.tail(50) test.head() Test Dataset Once we have our dataset ready, we repeat the procedure shown above. That is, we first convert the images into their pixel format and then apply the padding technique so that the images have the same number of pixels. test_images = [] count = 0 for x in test['image_name']: image = '/kaggle/input/siim-isic-melanoma-classification/train/' + x +'.dcm' ds = dicom.dcmread(image) pixels = ds.pixel_array test_images.append(pixels.flatten()) count +=1 print(count) test_images = tf.keras.preprocessing.sequence.pad_sequences( test_images, maxlen = 720, dtype = "int32", padding = "pre", truncating = "pre", value = 0 ) Finally, it's time to train our models. We will be using several machine learning algorithms to train our model and then test it to view the accuracy score. We will set the value of X = images (list containing images in pixel format) y = np.array(df['target']) (values that state whether the lesion in the image is benign or malignant) 1. Logistic Regression from sklearn.linear_model import LogisticRegression X = images y = np.array(df['target']) classifier_lr = LogisticRegression() classifier_lr.fit(X,y) X_test = test_images y_test = np.array(test['target']) y_pred_lr = classifier_lr.predict(X_test) print('Accuracy Score: ',accuracy_score(y_test,y_pred_lr)) print('Confusion Matrix: ',confusion_matrix(y_test,y_pred_lr)) Accuracy given by Logistic Regression Algorithm 2.
Support Vector Machine from sklearn import svm X = images y = np.array(df['target']) classifier_svm = svm.SVC() classifier_svm.fit(X,y) X_test = test_images y_test = np.array(test['target']) y_pred_svm = classifier_svm.predict(X_test) print('Accuracy Score: ',accuracy_score(y_test,y_pred_svm)) print('Confusion Matrix: ',confusion_matrix(y_test,y_pred_svm)) Accuracy given by Support Vector Machine Algorithm 3. Decision Tree from sklearn.tree import DecisionTreeClassifier X = images y = np.array(df['target']) classifier_dt = DecisionTreeClassifier() classifier_dt.fit(X,y) X_test = test_images y_test = np.array(test['target']) y_pred_dt = classifier_dt.predict(X_test) print('Accuracy Score: ',accuracy_score(y_test,y_pred_dt)) print('Confusion Matrix: ',confusion_matrix(y_test,y_pred_dt)) Accuracy given by Decision Tree Algorithm 4. Random Forest from sklearn.ensemble import RandomForestClassifier X = images y = np.array(df['target']) classifier_rf = RandomForestClassifier() classifier_rf.fit(X,y) X_test = test_images y_test = np.array(test['target']) y_pred_rf = classifier_rf.predict(X_test) print('Accuracy Score: ',accuracy_score(y_test,y_pred_rf)) print('Confusion Matrix: ',confusion_matrix(y_test,y_pred_rf)) Accuracy given by the Random Forest Algorithm 5. Adaptive Boosting from sklearn.ensemble import AdaBoostClassifier X = images y = np.array(df['target']) classifier_ab = AdaBoostClassifier() classifier_ab.fit(X,y) X_test = test_images y_test = np.array(test['target']) y_pred_ab = classifier_ab.predict(X_test) print('Accuracy Score: ',accuracy_score(y_test,y_pred_ab)) print('Confusion Matrix: ',confusion_matrix(y_test,y_pred_ab)) Accuracy given by Adaptive Boosting Algorithm 6. Gradient Boosting from sklearn.ensemble import GradientBoostingClassifier X = images y = np.array(df['target']) classifier_gb = GradientBoostingClassifier() classifier_gb.fit(X,y) X_test = test_images y_test = np.array(test['target']) y_pred_gb = classifier_gb.predict(X_test) print('Accuracy Score: ',accuracy_score(y_test,y_pred_gb)) print('Confusion Matrix: ',confusion_matrix(y_test,y_pred_gb)) Accuracy given by Gradient Boosting Algorithm After training and testing the images using several machine learning algorithms we get different accuracies. Some algorithms overfit the data while some underfit. Therefore we go forward with the Logistic Regression algorithm since it neither overfits nor underfits the data, giving an accuracy of 96%. Bar plot showing the accuracy of obtained using the machine learning algorithms Classification using Patient Records Lets us start by renaming “anatom_site_general_challenge” to “site” for our convenience. df = df.rename(columns = {'anatom_site_general_challenge':'site'}) To initiate the training of the data, we first need to understand the data. Let's apply some visualizations and data cleaning techniques to interpret our data better. First, let us remove all the missing values from the dataset using the following code df = df.dropna(axis=0, how = 'any') Considering age_approx and as a factor, we can use the following code to create the graphs shown below. 
age = [] for i in range(df.shape[0]): try: if df['target'][i] == 1: age.append(df['age_approx'][i]) except: pass plt.figure(figsize=(15,5)) plt.subplot(1,2,1) sns.distplot(age) plt.title('Distribution of age of people having malignant cancer') plt.subplot(1,2,2) sns.countplot(y = age) plt.ylabel('Age') plt.title('Count plot of age of people having malignant cancer') Graphs created using age_approx as a factor The following points can be inferred from the above graphs: Most patients suffering from malignant cancer fall in the age range of 40 to 80 years. The largest group of cancer patients is older than 60, followed by people aged between 55 and 60. Very few cancer patients are below the age of 10. Considering site as a factor, we can use the following code to create the graphs shown below. site = [] for i in range(df.shape[0]): try: if df['target'][i] == 1: site.append(df['site'][i]) except: pass sns.countplot(y = site) sns.countplot(y = site,palette="rocket") plt.title('Graph showing count of patients having cancer and the site it is located in') plt.ylabel('Site') Graph created using site as a factor The following points can be inferred from the above graphs: Most patients have the skin lesion on their torso. Very few patients have skin lesions on their palms/soles or oral/genital areas. As you can see, the attributes sex, site and diagnosis contain categorical data. And so, we use pandas.get_dummies() to convert the categorical data into dummy or indicator variables understood by the model. We also drop the non-required columns, benign_malignant and patient_id. This can be done as follows: df = pd.get_dummies(df, columns = ['sex'],drop_first=True) df = pd.get_dummies(df, columns = ['site'],drop_first=True) df = pd.get_dummies(df, columns = ['diagnosis'],drop_first=True) df = df.drop('diagnosis_unknown', axis = 1) df = df.drop(['benign_malignant', 'patient_id'], axis = 1) df.head() Our dataset now contains 16 attributes. The next step is selecting the right attributes to train our model. We can do this by finding the correlation between the attributes and the target. To do this, let us generate a heatmap using the following code: plt.figure(figsize = (10,10)) sns.heatmap(df.corr()[['target']].sort_values('target').tail(16), annot = True) Heatmap showing the correlation between the attributes and the target The following points can be inferred from the above heatmap: diagnosis_melanoma has a direct correlation with the target value. Therefore, if a person is diagnosed with melanoma, he has cancer, and if a person is not diagnosed with melanoma, he doesn't have cancer. age_approx, sex_male and site_upper extremity are positively correlated with the target. diagnosis_nevus, site_lower extremity and site_torso are negatively correlated with the target. This can also be interpreted from the graphs shown below. Graphs showing the correlation between the attributes and the target Let us now train our machine learning models. We will be using several machine learning algorithms to train our model and then test it to view the accuracy score. We start by creating the train and test dataset.
X = df[['diagnosis_melanoma','site_torso','diagnosis_nevus','site_lower extremity','site_upper extremity', 'sex_male', 'age_approx']] y = df['target'] from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 1) Now we need to apply the classification algorithms, which take these attributes and train the model against the target value. This can be done using the same algorithms and procedure used above to train the model with skin lesion images. After successfully training the algorithms, we obtain the following accuracy scores: As you can see, the Support Vector Machine algorithm doesn't overfit the data and gives an accuracy of 98%, so we select it for further processing. We selected the Logistic Regression algorithm for predicting cancer from images and the Support Vector Machine to predict cancer based on the medical details and diagnosis of the patient. In our next and final step, we test the entire record of a patient and detect whether he or she suffers from cancer. The patient record is as follows: image_name = ISIC_0149568 age_approx = 55 sex_male = 0 site_lower extremity = 0 site_torso = 0 site_upper extremity = 1 diagnosis_melanoma = 1 diagnosis_nevus = 0 image_path = '/kaggle/input/siim-isic-melanoma-classification/train/ISIC_0149568.dcm' details = [[55,0,0,0,1,1,0]] image_to_test = [] ds = dicom.dcmread(image_path) pixels = ds.pixel_array plt.imshow(pixels) image_to_test.append(pixels.flatten()) image_to_test = tf.keras.preprocessing.sequence.pad_sequences( image_to_test, maxlen = 720, dtype = "int32", padding = "pre", truncating = "pre", value = 0 ) print(train2.predict(image_to_test)) if train1.predict(details) == [1]: result1 = 'Malignant' else: result1 = 'Benign' if train2.predict(image_to_test) == [1]: result2 = 'Malignant' else: result2 = 'Benign' print('Result from patient details: ', result1) print('Result from patient image: ', result2) image_path stores the path to the image named in image_name. details stores the patient's record details. train1 is the model trained using the patient records. train2 is the model trained using the images of patients' skin lesions. After executing the code, we get the result as follows: Output after final testing As you can see, the algorithm correctly detected that the patient suffers from cancer. And so, we can infer that machine learning algorithms can be used effectively in healthcare and the medical field. Find the entire code here. If you want to know how to classify images using neural networks instead of classical machine learning algorithms, click here.
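One note on the final snippet: it calls train1 and train2, which are not defined in the excerpts shown here. Based on the choices stated above (Support Vector Machine for the patient-record model, Logistic Regression for the image model), they were presumably fitted along these lines; the variable names and exact setup are assumptions, not taken from the original notebook.

from sklearn import svm

# Assumption: train1 is the record-based model (the SVM selected above),
# fitted on the X_train/y_train split built from the patient-record attributes.
train1 = svm.SVC()
train1.fit(X_train, y_train)

# Assumption: train2 is the image-based model (the Logistic Regression selected
# above); here it is simply the classifier_lr already fitted on the flattened,
# padded lesion images earlier in the article.
train2 = classifier_lr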
https://towardsdatascience.com/machine-learning-in-healthcare-detecting-melanoma-70147e1b08de
['Sakshi Butala']
2020-10-19 12:54:00.068000+00:00
['Machine Learning', 'Data Science', 'Programming', 'Health', 'Data Visualization']
Airflow: Lesser Known Tips, Tricks, and Best Practises
There are certain things about every tool you use that you won't know even after using it for a long time. And once you learn them you think, “I wish I knew this before”, since you had already told your client that it couldn't be done in any better way 🤦🤦. Airflow, like any other tool, is no different: there are some hidden gems that can make your life easy and make DAG development fun. You might already know some of them, and if you know them all — well, you are a PRO then🕴🎩. (1) DAG with context manager Were you ever annoyed with yourself when you forgot to add dag=dag to your task and Airflow errored? Yes, it is easy to forget to add it for each task. It is also redundant to add the same parameter over and over, as shown in the following example ( example_dag.py file): The example ( example_dag.py file) above has just 2 tasks, but if you have 10 or more the redundancy becomes even more evident. To avoid this you can use Airflow DAGs as context managers, which automatically assign new operators to that DAG, as shown in the above example ( example_dag_with_context.py ) using the with statement. (2) Using a list to set task dependencies When you want to create a DAG similar to the one shown in the image below, you would have to repeat task names when setting task dependencies. As shown in the above code snippet, our normal way of setting task dependencies would mean that task_two and end are repeated 3 times. This can be replaced using Python lists to achieve the same result in a more elegant way. (3) Use default arguments to avoid repeating arguments Airflow allows passing a dictionary of parameters that is available to all the tasks in that DAG. For example, at DataReply we use BigQuery for all our data-warehouse-related DAGs, and instead of passing parameters like labels and bigquery_conn_id to each task, we simply pass them in the default_args dictionary as shown in the DAG below. This is also useful when you want alerts on individual task failures instead of just DAG failures, which I already mentioned in my last blog post on Integrating Slack Alerts in Airflow. (4) The “params” argument “params” is a dictionary of DAG-level parameters that are made accessible in templates. These params can be overridden at the task level. This is an extremely helpful argument and I have personally been using it a lot, as it can be accessed in templated fields with Jinja templating using params.param_name . An example usage is as follows: It makes it easy for you to write parameterized DAGs instead of hard-coding values. Also, as shown in the examples above, the params dictionary can be defined in 3 places: (1) in the DAG object, (2) in the default_args dictionary, and (3) on each task. (5) Storing sensitive data in Connections Most users are aware of this, but I have still seen passwords stored in plain text inside the DAG. For goodness sake — don't do that. You should write your DAGs in a way that makes you confident enough to store them in a public repository. By default, Airflow will save the passwords for a connection in plain text within the metadata database. The crypto package is highly recommended during Airflow installation and can be installed simply with pip install apache-airflow[crypto] . You can then easily access a connection's password as follows: from airflow.hooks.base_hook import BaseHook slack_token = BaseHook.get_connection('slack').password (6) Restrict the number of Airflow variables in your DAG Airflow Variables are stored in the metadata database, so any call to a variable means a connection to the metadata DB. Your DAG files are parsed every X seconds.
Using a large number of variables in your DAG (and worse, in default_args ) may mean you end up saturating the number of allowed connections to your database. To avoid this situation, you can just use a single Airflow variable with a JSON value. As an Airflow variable can contain a JSON value, you can store all your DAG configuration inside a single variable as shown in the image below: As shown in this screenshot, you can either store values in separate Airflow variables or under a single Airflow variable as a JSON field. You can then access them as shown below under Recommended way: (7) The “context” dictionary Users often forget the contents of the context dictionary when using PythonOperator with a callable function. The context contains references to objects related to the task instance and is documented under the macros section of the API, as these values are also available in templated fields. { 'dag': task.dag, 'ds': ds, 'next_ds': next_ds, 'next_ds_nodash': next_ds_nodash, 'prev_ds': prev_ds, 'prev_ds_nodash': prev_ds_nodash, 'ds_nodash': ds_nodash, 'ts': ts, 'ts_nodash': ts_nodash, 'ts_nodash_with_tz': ts_nodash_with_tz, 'yesterday_ds': yesterday_ds, 'yesterday_ds_nodash': yesterday_ds_nodash, 'tomorrow_ds': tomorrow_ds, 'tomorrow_ds_nodash': tomorrow_ds_nodash, 'END_DATE': ds, 'end_date': ds, 'dag_run': dag_run, 'run_id': run_id, 'execution_date': self.execution_date, 'prev_execution_date': prev_execution_date, 'next_execution_date': next_execution_date, 'latest_date': ds, 'macros': macros, 'params': params, 'tables': tables, 'task': task, 'task_instance': self, 'ti': self, 'task_instance_key_str': ti_key_str, 'conf': configuration, 'test_mode': self.test_mode, 'var': { 'value': VariableAccessor(), 'json': VariableJsonAccessor() }, 'inlets': task.inlets, 'outlets': task.outlets, } (8) Generating Dynamic Airflow Tasks I have been answering many questions on StackOverflow about how to create dynamic tasks. The answer is simple: you just need to generate a unique task_id for each of your tasks. Below are 2 examples of how to achieve that: (9) Run “airflow upgradedb” instead of “airflow initdb” Thanks to Ash Berlin for this tip from his talk at the first Apache Airflow London Meetup.
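The example files referenced above ( example_dag.py , example_dag_with_context.py ) and the dynamic-task snippets appear only as embedded images. As a rough reconstruction, not the author's exact code, a DAG that combines tips (1), (2) and (8), using the context manager, list-based dependencies and dynamically generated task IDs, might look like this (imports follow the Airflow 1.10-era layout that matches the post's date):

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator

default_args = {
    "owner": "airflow",
    "start_date": datetime(2020, 1, 1),
}

# Tip (1): the DAG as a context manager, so no task needs an explicit dag=dag.
with DAG("example_dag_with_context",
         default_args=default_args,
         schedule_interval="@daily") as dag:

    start = DummyOperator(task_id="start")
    end = DummyOperator(task_id="end")

    # Tip (8): generate tasks dynamically; each one gets a unique task_id.
    middle = [DummyOperator(task_id=f"process_{name}") for name in ("a", "b", "c")]

    # Tip (2): a Python list fans out and back in without repeating task names.
    start >> middle >> end

The same unique-task_id rule applies however you generate the tasks, whether from a list, a dictionary of configs, or an external file.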
https://medium.com/datareply/airflow-lesser-known-tips-tricks-and-best-practises-cf4d4a90f8f
['Kaxil Naik']
2020-02-06 19:59:19.740000+00:00
['Apache Airflow', 'Python', 'Airflow']
Do You Want to Know Why Suffering Is so Seductive?
Do You Want to Know Why Suffering Is so Seductive? My very first story on Medium reached 1k views and ~750 reads in less than two weeks. This made me curious about the reasoning behind and I’ll share my insight with you right now. Image by Jerzy Górecki from Pixabay I don’t expect to make a living from writing on Medium. My motivation behind writing, though, is to be able to vent my anger, especially towards my husband. Why? Because we have a toddler around us and hot topics are subject to potential conflict — as in, yelling and smashing things. To my surprise, I found writing to be a safe outlet where I can gain some clarity over my thoughts, and share my pain with people that might relate to it. And so far, I managed to find a crowd interested in my suffering. This story presents suffering’s seductive angle, from two perspectives: belongingness and narcissism. Belongingness One reason for the unexpected success of my first story was that I vented about my husband. I felt sabotaged for my efforts to better myself. And quite possibly, there are other people, readers, and writers, that can relate to my pain. Please notice that I am not putting a label on men, here. Being an asshole is gender-free. Feeling relatable stems from belongingness. On Maslow’s pyramid of needs, belongingness is defined as the emotional need of being accepted into a group where you can give and receive attention from others. In fact, this need is so important, that Maslow has included it along with the physiological needs, safety, self-esteem, and self-actualization. Furthermore, according to Roy Baumeister and Mark Leary ("The need to belong: Desire for interpersonal attachments as a fundamental human motivation". Psychological Bulletin. 1995): All human beings need a certain minimum quantity of regular, satisfying social interactions. Inability to meet this need results in loneliness, mental distress, and a strong desire to form new relationships. Judging on this need, it is understandable why my story of suffering was so seductive to many. Either we like it or not, relationships will always be tough. In long-term relationships arguments and disappointment happen even more often than many of us would like to admit. So when you read a story that you can relate to, it’s almost like you have said it yourself. It makes you feel that it’s yours. Narcissism There is a slight hint of narcissism in people when they help others. “God is attracted to weakness. He can’t resist those who humbly and honestly admit how desperately they need him.” — Pastor Jim Cymbala When my husband and I started dating I was very insecure. I felt so lucky that such a stable guy would even look at me let alone commit to a relationship. He also helped me from a financial perspective to go through my studies. I’m still grateful for that. But I never felt good about the whole set-up. I hated being dependent, while he thrived in this relationship. I felt that I was his shadow and my name didn’t even matter. I was presented as “his girlfriend”. For all practical purposes, he was helping me. Yet, as his satisfaction grew, my self-esteem sank even more. He was the Prince Charming saving the damsel in distress since many times he reminded me that he was treating me like a princess. My suffering, under the form of weakness, was seductive for him. Later on, when I established myself as a professional and was no longer dependent on him financially, the seductiveness faded away. The only thing he was willing to give me, was material resources. 
Whenever I asked for time, connection, authenticity, he told me that the reason I needed all these was that I was bored. So he recommended that I take a hobby instead. He randomly confessed one day a strange thing. I was overwhelmed by frustration that I can’t convince him to be there for me and while discussing this, I started crying. He said, “look, now I want to hug you”. He otherwise, never comes out of the blue, to show affection, but if he sees me weak, he’s triggered. This is not right; this is not love — it’s pity. Thanks, but no thank you!
https://medium.com/illumination/do-you-want-to-know-why-suffering-is-so-seductive-32288a7394be
['Eir Thunderbird']
2020-12-26 20:05:43.672000+00:00
['Husband Wife Dispute', 'Relationships', 'Empathy', 'Suffering', 'Writing']
I Write Erotica… And It’s So Boring
The stories I write are erotic shorts, around 5,000–8,000 words in length, retailing for the tried and tested price point of $2.99. I create “chapters” of around 500–800 words that move the story along. Each chapter has a “cliff hanger” that usually ends with the sex almost happening, but not quite. The stories have one main character — the cool, older chick — and one or two minor players. Emphasis on the players. My readers (such as they are) say the stories are fabulous. They say the stories are witty, well-written and (as much as possible) have a satisfying character arc. My writing process is that I outline the story, write it with placeholders where the sex should go, then go back and add in the sexy times once the story is complete. And it’s this going back to fill in the sex gaps that is boring to me. Coming (pun intended) up with various descriptions and synonyms for my characters getting hot and heavy in its various forms and shapes and permutations, is creatively taxing. Tab A. Slot B. Slot B.Tab A. I wish I found it exciting. I wish I didn’t find it formulaic. I wish I didn’t have to grapple with the myriad of ways to describe a tongue going into places it probably wasn’t designed for. Or the waves of pleasure washing over one as one is about to ride the cusp of eternal bliss via aforementioned tongue. Or getting graphic with words or language I never use ordinarily like See You Next Tuesday. I always thought I was comfortable with sex. Yes, I’m vanilla. I’ve never had a threesome or been interested in one. Hell, I don’t even like sharing my food. I don’t do anything that involves an implement (organic or inorganic) near the vicinity of my butt. I don’t dress up, mainly because I can’t be bothered. But within the bounds of vanilla, is definitely some strawberry: I like the lights on. Afternoons and mornings are lovely times of day for getting jiggy with it. Up against the furniture is fine. Anything involving water, I’m there. Ditto cream and chocolate sauce. So maybe it’s not so much that I find writing erotica boring (which I do) but more that I’d rather write about sex in the context of human relationships and emotional connections. An exploration of feelings and attachment and physicality. Because that, in all its messy beauty, is never boring.
https://medium.com/literally-literary/i-write-erotica-and-its-so-boring-b0bc6a4c4ed3
['Diane Lee']
2020-07-04 08:35:35.988000+00:00
['Publishing', 'Nonfiction', 'Sex', 'Self', 'Erotica']
Puffin Fully Cloud-Based Web Browsing Isolation: Applications
CloudMosa’s mission is to empower the world’s phones through cloud computing and make them universally powerful and useful.
https://medium.com/cloudmosa-tw/puffin-%E5%85%A8%E9%9B%B2%E7%AB%AF%E4%B8%8A%E7%B6%B2%E9%9A%94%E9%9B%A2-%E6%87%89%E7%94%A8%E7%AF%87-1fa07371559a
['Cloudmosa']
2020-11-10 03:31:00.269000+00:00
['Cloud Computing', 'Cybersecurity', 'Remote', 'Web Development', 'Cloud']
The Three Pillars of Quantum Computing
The Three Pillars of Quantum Computing The fundamentals of understanding how a quantum computer works Image by Gerd Altmann from Pixabay Quantum computing is one of those topics that people find very interesting yet quite intimidating at the same time. When people hear — or read — that the core of quantum computing is quantum physics and quantum mechanics, they often get intimidated by the topic and steer away from it. I will not deny that some aspects of quantum computing are incredibly puzzling and hard to wrap your mind around. However, I believe that you can understand the basics and the overall ideas of quantum computing without overcomplicating things. That is what I will attempt to do in this article and some upcoming articles about various quantum computing-related topics. In this article, we will talk about the three fundamentals of quantum computing: superposition, qubits, and entanglement. These three concepts are the building stones to understanding how quantum algorithms do their magic and how quantum computing has the potential to solve some of the problems classical computers have failed to solve. Now, let’s tackle each of them in detail… Superposition By author using Canva Superposition may be the most famous quantum mechanics term ever! I guarantee you have heard of it before, not directly perhaps, but as the infamous Schrodinger’s Cat. In case you never heard of the one and only cat, Schrodinger’s cat, is a thought experiment that explains the concept of quantum superposition. It goes like this: assume you have a cat and a sealed bottle of poison in a closed box together. The question now is, how would you know if the poison bottle broke open and the cat died or if the cat is still alive inside the box? Since you can’t tell for sure if the cat is dead or alive without opening the box, the cat is now in a state of superposition, which simply means it’s a 50/50 chance of the cat being dead or alive. That’s it! In technical terms, superposition is a principle — much as in classical physics — that refers to when a physical system exists in two states at the same time. Superposition is not an absolute term; that is, it doesn’t mean anything by itself; it instead refers to a specific set of solutions. The set of solutions differs based on your application and what you’re trying to achieve. Nevertheless, the most commonly used set of solutions is ALL possible solutions, also known as Hilbert Space. In the case of Schrodinger’s cat, the solution space (all possibles states of the cat) contains only two states; the cat is either dead or alive, there’s no other possibility. Hence the superposition of that solution space is that the cat is both dead and alive at the same time, or in other words, the cat is 50% dead and 50% alive. If we were to put that in an equation form it will look like this: By author using Canva This equation represents the mathematical form of the superposition of both states of the cat (dead and alive) with a 50/50 probability. The probability of any state equals the absolute value of the square of the amplitude. In our cat equation, both states have the same amplitude of 1/√2. Which if we calculated the absolute square of will gives us 1/2, or 50%. We also notice that the 2 under the square root also represents the size of the Hilbert space. So, our cat state remains a mystery, that is until we finally open the box. Once we do that, the superposition will collapse, and the cat will be either 100% dead or 100% alive.
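The cat-state equation above appears only as an image (captioned "By author using Canva"). Written out in standard Dirac notation, based on the 1/√2 amplitudes described in the text rather than copied from the author's graphic, it reads:

% Equal superposition of the two basis states, each with amplitude 1/sqrt(2)
\[
  |\text{cat}\rangle = \frac{1}{\sqrt{2}}\,|\text{dead}\rangle + \frac{1}{\sqrt{2}}\,|\text{alive}\rangle
\]

% The probability of each outcome is the absolute square of its amplitude,
% and the 2 under the square root matches the size of the two-state solution space
\[
  P(\text{dead}) = P(\text{alive}) = \left|\frac{1}{\sqrt{2}}\right|^{2} = \frac{1}{2} = 50\%
\]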
https://medium.com/digital-diplomacy/the-three-pillars-of-quantum-computing-d80ff5f50ec7
['Sara A. Metwalli']
2020-09-22 15:25:04.277000+00:00
['Quantum Computing', 'Quantum Physics', 'Demystifying Technology', 'Technology', 'Science']
André Christ, LeanIX: “Like Google Maps for IT in a company”
André Christ has built a company that could best be described as an underdog. Even though LeanIX isn’t often in the news, the company is one of the youngest German success stories. In June, they closed a Series D funding round, and their client list is a potpourri of international top brands like Adidas, Atlassian, or Kühne + Nagel. In our podcast REWRITE TECH, we talk to André Christ about enterprise architecture, product development, and why Bonn is the perfect place for a start-up that is looking for talent. André Christ, LeanIX: Mapping out the IT Infrastructure The technological infrastructure of big companies is complex, and responsibility lies in many different departments and teams. At a certain point of scale, a company runs hundreds of services and software products. And that’s where LeanIX comes in. “LeanIX is like Google Maps for IT in a company. It actually helps organizations to map out what software they have, for what processes that software is used, which organization is using it and which business capabilities, that means which functionality, is in that software,” explains André cleverly. The advantages are obvious: less time spent on reporting, faster onboarding for new colleagues, and cost savings due to the elimination of redundancies. Since they started in 2012, LeanIX has won several clients like Adidas, Atlassian, or Bosch. Recently they closed a Series D funding round led by Goldman Sachs — in the midst of a global pandemic. From bootstrapping to venture capital LeanIX has been backed by venture capital since 2015, when Capnamic Ventures and Iris Capital invested. In 2017, DTCP, the investment group of Deutsche Telekom, invested as well. For the first three years after the founding, however, André and his co-founder bootstrapped the company. Only when they got their first investment in 2015 did they switch to “growth mode”, as André calls it. But with more money and employees coming in, the right mindset stays important for André: “We try to be as lean and quick in decision-making as possible.” André Christ, CEO & Co-Founder LeanIX Even though LeanIX is a classic technology scale-up, their product doesn’t rely on tech alone. “Not everything can be automated,” as André states in the conversation. That’s why the software obtains data from various sources. “LeanIX is a hybrid of people putting their knowledge in and leveraging APIs and other systems to get data.” Besides that, André and his team provide additional content, like information about software lifecycles, to be the one-stop shop for everything related to enterprise architecture. Hidden champion based in Bonn Despite its huge success, LeanIX is perhaps not as well-known as other German start-ups. One reason could be the location. Instead of Berlin or Munich, LeanIX’s headquarters are located in Bonn. Office of LeanIX And, as André reveals in our discussion, he struggled with Bonn during the first years: “I was having this debate with myself: Was it the right idea to found a business in a region that is not really known for building a fast-growing company?” Now, with a track record behind them, André has come to a conclusion: “I’m over this question now. I am fully convinced that Bonn is a great place for us.” They have managed to build a brand around LeanIX and are now able to attract talent from the whole region, even competing with big players like Telekom or DHL, which are also headquartered in Bonn.
https://medium.com/rewrite-tech/andr%C3%A9-christ-lean-ix-not-everything-is-automatable-df0c925077c6
['Michael Mirwald']
2020-11-26 10:26:53.572000+00:00
['Enterprise Architecture', 'Startup', 'Podcast', 'Product Development', 'Digital Transformation']
The Marketing of Spirituality
Image from memes on me.me Or rather, is it possible to sell spirituality? So here’s the thing. I’ve had miraculous healing of an autoimmune disease by taking ayahuasca and medicinal plants, along with living in the jungle and having a skilled curandero (healer) to guide me. I initially thought that I wanted to own a retreat center in Peru. However, the longer I stayed in Peru and saw the overblown commercialization and marketing of ayahuasca and retreats, the less interested I became. Everyone seems hellbent on constantly advertising their own retreat centers, tearing each other down, and talking about the plants in flowery language without acknowledging any of the risks. I’m part of a few online groups on ayahuasca and plant dietas (shamanic retreats with medicinal plants), all of which are managed by foreigners who own retreat centers, mostly in Peru. And I can see how everything they post all leads back to…. the marketing and selling of their own retreat centers. Marketing, in essence, is pushing an agenda on people. Owning a business previously in San Francisco, and now online, I’m just as guilty of playing into the game. I know how it works. But when it comes to selling spirituality and truth, things get even more complicated. Mainly because it is selling something that by its very definition cannot be sold. So what happens when the selling and marketing of spirituality adds some sex to it? Sex, after all, always sells. Yesterday on Instagram, a New Age teacher I follow posted a photo of herself butterball-butt-naked, lying on a fallen tree trunk. She wrote a nice poem about nature, and ended with a link to her website to sign up for her class. Now don’t get me wrong, I’m not a prude. This woman is beautiful and has a great body to show off. Understanding how sexual energy is life force energy is important and is indeed part of the spiritual path. But it’s a bit over the top, right? It’s clickbait. I usually like her content but this seemed beneath her, along the lines of a shady car salesman. Are we really supposed to believe that because she is sexy and naked, she has all the secrets of how to live a spiritual life? There seem to be conflicting messages at play here. We want to own our sexuality, protect it and feel liberated. Using sex to market and advertise just feels outdated to me, especially in light of the #metoo movement. On the flip side, is this any better or worse than the many photos I see of someone meditating with a plant, eyes closed and looking angelic? The very idea of taking a selfie while meditating, or doing yoga, is just absurd, completely contradicting the idea of “going within”. These curated, picture-perfect images are just that: curated. They are not at all reflective of reality, walking the path, living in the moment. What is even more complicated is that in the ayahuasca community, there are more reports of sexual assault or inappropriate behavior against women during ceremonies. Obviously this is wrong, dangerous and exploitative. You know what else is exploitative? Seeing this as an opportunity, turning it around and hosting an all women retreat. Because who is really benefitting here? Is it the women on retreat, or the people choosing this as an advertising opportunity? This also doesn’t take into account that female curanderas (healers) can just as easily take advantage of men. And not every single male curandero has committed sexual misconduct. 
In my own experience, two women who considered themselves teachers tried to convince me that I needed a female teacher, and that they were the curandera I needed. It felt very manipulative. Having an all-women event may protect women from inappropriate and unwanted attention from men. But it still is not a perfect solution; it’s only marketed and sold that way to vulnerable consumers. I’ve also seen advertisements for all-women ayahuasca ceremonies and retreats using sexual and provocative images of naked women. And these are run by women. Given that it is always recommended to abstain from sex and to clean your energy during ceremonies and retreats, I can’t help but think that they are sorely missing the point. In another marketing twist, I saw one center quickly start representing itself as safe and supportive after two recent suicides at ayahuasca centers. No suicides here, sign up! A healer who really just wants to heal is not looking to be put on a pedestal. They don’t have time for social media arguments or for defending their views against online trolls. It’s when we, the consumers, put these healers on a pedestal, call them a master, and shower them with praise and adulation that things get messy. The point I’m trying to make here is that the marketing of the plants and spirituality has gone overboard. It makes me really question who is running things and what their purpose is. Is it truly for spiritual advancement, or is it simply about money?
https://lynnenardizzi.medium.com/the-marketing-of-spirituality-49bc5df3b286
['Lynne Nardizzi']
2019-05-18 00:16:02.587000+00:00
['Spirituality', 'Ayahuasca', 'Plants', 'Spiritual Growth', 'Marketing']
Circularity in General
Circularity in General The (Real) Theory of Everything Photo by Shapelined on Unsplash What is the circular theory? Nature is balanced (and explained) by the circle between any X and Y… Why is this relevant? Two is the most basic number (not one) thus 50–50 explains everything (complementarity is the basis for identity) … How does it affect me? My life? Observation is 50% flawed. The opposite of everything is always hidden in the background (any movement makes the opposing movement real) … Meaning? You have to use circular logic to get to the truth… Circular Logic? Hidden behind any ‘one’ is some ‘other’ ‘one’… thus, the circle is conserved (circularity is the core dynamic in nature). So, why is this important? It explains (actually, it predicts) behavior in all systems… Conservation of the Circle is the core dynamic in Nature.
https://medium.com/the-circular-theory/circularity-in-general-8c9f74e5b0c7
['Ilexa Yardley']
2020-02-20 14:17:01.408000+00:00
['Relationships', 'Quantum Computing', 'Deep Learning', 'Artificial Intelligence', 'Reality']
Love is Patient, Love is Kind, Love is Scary
Your endorphins are high, serotonin is running rampant, and oxytocin is overtaking your actions. You’re sharing your vulnerabilities and opening up your heart to someone who was once a stranger. You are in love. Love is one of the best feelings you can ever experience. It’s a feeling, an experience, that we continually seek. Without rhyme or reason, we crave it. Love is innate, and we need it. When you’re in love, it feels like nothing can bring you down. You have a person who will always hold you up and be your foundation. Everything will be alright. Until your endorphins level out, your serotonin slows to a peaceful walk, and your oxytocin lets you take back control. You’re more vulnerable than you once were to someone who was once a stranger. Are you still in love? Love is now one of the scariest feelings you can ever experience. You start to feel comfortable. Nothing is new. You still crave it, but you start doubting.
https://medium.com/the-shadow/love-is-patient-love-is-kind-love-is-scary-a362c4a13223
['Tiffany Hsu']
2020-12-17 22:07:49.136000+00:00
['Relationships', 'Love', 'Marriage', 'Trust', 'Dating']
Launch First and Listen To Your Customer Often as You Scale
Launch First and Listen To Your Customer Often as You Scale Danielle Bodinnar, a passionate entrepreneur with a desire to solve a problem realized that in order to do so, her company had to pivot. Here she shares how to do just that based on customer insight Photo by Brett Jordan on Unsplash The best strategy I learned as an entrepreneur after starting Karista — a unique healthcare business that assesses people’s needs and then matches them to disability providers — was that nothing really got rolling until you heard from your first customer. Put simply, what I learned was that customers simply don’t know what they want until you serve up the product to them, which means launch first, then learn from customer feedback along the way. Launch and learn. We launched our Australian based disability business and then set out to meet with our target customers. They welcomed us into their homes to share their insights. We were planning to initiate some very costly new product changes but once we spent time with real customers, the plan and our priorities changed. We’ve yet to implement those cool but costly products but we were quick to produce the product that our customers wanted. We were able to build a product that really added value to our customers and allowed us to scale. And they were simple to execute, for example a young disabled customer told us “do not put me in the same category as an old person” and so we built separate landing pages, one for the elderly and one for our young people with disabilities and linked our campaigns to the relevant target audience — cost of landing pages, less than AUD$2k. Value to Karista and our customers, priceless. Nail One Area First, then Expand. We did the whole let’s be all things to all people thing. We launched with options for the elderly, adults, teenagers and even newborns, right across Australia. Then I got some sound advice from my Springboard friends that there was a lack of focus and we were not in a position to scale with such an unrefined strategy. So we focused on the part of the market for which we had an expertise — teenage boys with autism. We’ve become the trusted source for that cohort, including the Australian government that is required to provide those services, and there’s so much business opportunity in this space alone. When and if we decide to expand, we know our company will be welcomed by both our service providers and clients seeking services. The key here is to launch boldly but be mindful of what your customers are telling you. Always go back to the ‘’why?’’ If you are confused about what to do next, or how to prioritise, or are not getting traction, go back to your why. Our why is about building the capability of people with autism — We believe that everybody has the right to equal opportunity; we give them that opportunity. When investors, staff members, clients, providers, suppliers tell me I should do something (and it happens often) I assess the suggestions based on our why. If it doesn’t fit with our why it does not make the cut. This helps me communicate with whoever is making the suggestion and helps me stay on track. Here’s a little TED talk on the “Why” — I recommend keeping it in your favorites: https://www.youtube.com/watch?v=IPYeCltXpxw&feature=youtu.be Danielle Bodinnar, a passionate entrepreneur, is the CEO of Karista. She has held senior management positions in sales, marketing, supply chain and project management in large corporations for over 20 years. 
She founded Karista after being inspired by the changes emerging in the healthcare industry. Danielle has deep knowledge of healthcare (specifically the aged care and disability sectors) from her experience at SCA HA (now Asaleo), where she spent four years as Healthcare General Manager.
https://medium.com/been-there-run-that/launch-first-and-listen-to-your-customer-often-as-you-scale-11c12e6ddb2c
['Springboard Enterprises']
2020-12-04 15:36:42.030000+00:00
['Entrepreneurship', 'Customer Engagement', 'Scalability', 'Entrepreneur', 'Women']
Kill Your Parents, Watch Batman, and Live Like a Prince
TRUE CRIME Kill Your Parents, Watch Batman, and Live Like a Prince Lyle and Erik Menendez murdered their parents and almost got away with it. Photo: Murderpedia In the late ’80s, Lyle and Erik Menendez were living a privileged, wealthy lifestyle thanks to their highly successful father. The boys weren’t particularly good at school, but they were promising athletes. The financial and emotional support they received from their parents was something that most children could only hope for. They learned quickly that money and social status can erase almost every mistake, and life can become a playground with no rules. Born into Wealth José Menendez was an immigrant who fled from Cuba to America after the Cuban Revolution. He had nothing but ambition and confidence. He attended Southern Illinois University, where he met his future wife, Mary Louise “Kitty” Andersen. She was a former model and elementary school teacher who came from a broken family. In 1963, the couple got married and moved to New York City. José climbed the corporate ladder quickly, and Mary became an exemplary housewife beside him. In 1968, their first child, Lyle, was born, and the family moved to New Jersey. In 1970, Mary gave birth to their second son, Erik. José’s career took the family to California, where he became an executive at Paramount studios. In school, the brothers’ grades were average, but they both excelled at sports. So their father asked them to choose one sport to focus on. According to friends of the family, once the boys had decided to compete in tennis, José became obsessed with the sport. He hired coaches and training partners, and he started studying the game religiously. As a father, he was strict and controlling, but he provided everything for his sons to succeed. Erik reached 44th in the national ranking among junior players, and Lyle’s career was looking up too. He had an immediate impact on the local tennis team. However, they both soon faced issues. Lyle had been placed on academic probation for weak grades and bad attendance. He was expelled for copying another student’s psychology lab report. His father flew in from California to argue his son’s case. Money and status didn’t work this time, and the expulsion stood. From that point, Lyle purposely tried to get out of school and into tennis full time. He asked his dad to fund a trip to join his girlfriend on her tour in Europe. However, José said that if he wasn’t studying, he should work at his company. Lyle decided to join his girlfriend anyway. When he returned to California, he started training hard and working for his dad at the same time. Erik ran with the wrong crowd and began a series of burglaries. He and his friends broke into several homes of their classmates. On one occasion, they stole over $100,000 in cash and property and were pulled over for speeding with the stolen goods. In court, Erik pled guilty and was sentenced to probation and to attend therapy with Dr. Jerome Oziel. Motive for Murder Despite José’s relentless efforts, his sons couldn’t live up to his standards. According to his brother-in-law, José’s disappointment made him think about taking Erik and Lyle out of his will, but he didn’t say who the money would go to. This would mean that the boys would not inherit their parents’ entire estate and other assets worth over $14 million. Plus, in the event of their parents’ death, the brothers would receive $600,000 in life insurance.
Once they found out about José’s plan by overhearing a phone conversation, Erik wanted to kill his mom and dad immediately to prevent them from changing their wills. But Lyle wanted to wait. Their greed to stay rich was stronger than the love they felt for their parents. The “Perfect Plan” On August 18, 1989, Erik and Lyle purchased shotguns at a Big 5 Sporting Goods chain store in San Diego. The following day they went shark fishing with their mom and dad. On the evening of August 20, 1989, José and Mary were sitting on a couch in their living room watching television. Their sons burst into the house, carrying the shotguns they had bought two days earlier. First, they shot José in the back of the head and then Mary in the leg while she tried to escape. After she slipped on her own blood and fell, Erik and Lyle shot her several times in the arm, face, and chest. Once their parents were dead, the brothers shot them in the kneecaps to make the killings seem as though they were connected to the mob. Later, the Wall Street Journal drew some links between José’s business and organized crime. Once they had executed the plan, the two went to see Tim Burton’s Batman in a local cinema, and shortly after, they headed to a festival in Santa Monica. This was supposed to be their alibi once they returned home and called the police to report the murders. Eight days after the killings, at the memorial service, Lyle read a letter he had received from his father earlier. It said, “I believe that both you and Erik can make a difference. I believe that you will. I encourage you not to select the easy road. I encourage you to walk the road with honor, regardless of the consequences, and to challenge yourself to excellence.” The Short-lived Victory According to Community News, the insurance policy in José’s name was paid out to the boys. In the months after the murders, they went on a spending spree. They decided to stay in luxury hotels instead of the mansion that had become a crime scene. Later, they rented a penthouse on the water in Marina Del Rey. Erik purchased clothes and watches and lost thousands of dollars gambling. He hired a professional tennis coach for $60,000 and practiced ten hours a day. Later, he flew to the Middle East to compete in a tournament. Lyle bought three Rolex watches that cost over $15,000. He also purchased a popular student restaurant for $550,000. The two bought a Porsche Carrera and courtside seats at basketball games, and they went on lavish vacations in the Caribbean. Over six months, they spent $1 million on parties, travel, and shopping. They thought they could do anything with impunity. An interview that Erik gave to People Magazine perfectly revealed how delusional their financial background had made them. He said, “My brother wants to become President of the U.S., and I want to be a senator and be with the people of Cuba.”
https://medium.com/crimebeat/kill-your-parents-watch-batman-and-live-like-a-prince-9d770378d1c3
['Akos Peterbencze']
2020-12-15 12:11:18.115000+00:00
['Murder', 'Family', 'Psychology', 'Crime', 'True Crime']
Everything you need to know about tree data structures
This post was originally published at TK’s Blog. When you first learn to code, it’s common to learn arrays as the “main data structure.” Eventually, you will learn about hash tables too. If you are pursuing a Computer Science degree, you have to take a class on data structures. You will also learn about linked lists, queues, and stacks. Those data structures are called “linear” data structures because they all have a logical start and a logical end. When we start learning about trees and graphs, it can get really confusing. We don’t store data in a linear way with them; each of these data structures stores data in its own specific way. This post is to help you better understand the tree data structure and to clarify any confusion you may have about it. In this article, we will learn: what a tree is, examples of trees, its terminology and how it works, and how to implement tree structures in code. Let’s start this learning journey. :) Definition When starting out in programming, it is common to understand linear data structures better than data structures like trees and graphs. Trees are well known as a non-linear data structure. They don’t store data in a linear way; they organize data hierarchically. Let’s dive into real-life examples! What do I mean when I say “in a hierarchical way”? Imagine a family tree with relationships from all generations: grandparents, parents, children, siblings, etc. We commonly organize family trees hierarchically. My family tree The above drawing is my family tree. Tossico, Akikazu, Hitomi, and Takemi are my grandparents. Toshiaki and Juliana are my parents. TK, Yuji, Bruno, and Kaio are the children of my parents (me and my brothers). An organization’s structure is another example of a hierarchy. A company’s structure is an example of a hierarchy In HTML, the Document Object Model (DOM) works as a tree. Document Object Model (DOM) The HTML tag contains other tags. We have a head tag and a body tag. Those tags contain specific elements. The head tag has meta and title tags. The body tag has elements that show in the user interface, for example, h1, a, li, etc. A technical definition A tree is a collection of entities called nodes. Nodes are connected by edges. Each node contains a value or data, and it may or may not have a child node. The first node of the tree is called the root. If this root node is connected to another node, the root is then a parent node and the connected node is a child. All tree nodes are connected by links called edges. Edges are an important part of trees, because they manage the relationships between nodes. Leaves are the last nodes on a tree. They are nodes without children. Like real trees, we have the root, the branches, and finally the leaves. Other important concepts to understand are height and depth. The height of a tree is the length of the longest path to a leaf. The depth of a node is the length of the path to its root. Terminology summary Root is the topmost node of the tree. Edge is the link between two nodes. Child is a node that has a parent node. Parent is a node that has an edge to a child node. Leaf is a node that does not have a child node in the tree. Height is the length of the longest path to a leaf. Depth is the length of the path to its root. Binary trees Now we will discuss a specific type of tree. We call it the binary tree.
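Before moving on to binary trees, here is a minimal sketch of the terminology above as Python code. The original post embedded its snippets as images, so this class (its name, the children list, and the tiny example) is illustrative rather than the author's exact code:

class TreeNode:
    """A generic tree node: a value plus any number of children."""

    def __init__(self, value):
        self.value = value
        self.children = []  # each entry is an edge to a child node

    def add_child(self, child_node):
        self.children.append(child_node)

    def is_leaf(self):
        # A leaf is a node without children.
        return len(self.children) == 0


# A tiny hierarchy: a root (depth 0), one child (depth 1), one leaf (depth 2).
root = TreeNode("grandparent")
parent = TreeNode("parent")
child = TreeNode("child")
root.add_child(parent)
parent.add_child(child)

print(root.is_leaf())   # False -- the root has a child
print(child.is_leaf())  # True -- no children, so it is a leaf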
“In computer science, a binary tree is a tree data structure in which each node has at most two children, which are referred to as the left child and the right child.” — Wikipedia So let’s look at an example of a binary tree. Let’s code a binary tree The first thing we need to keep in mind when we implement a binary tree is that it is a collection of nodes. Each node has three attributes: value, left_child, and right_child. How do we implement a simple binary tree that initializes with these three properties? Let’s take a look. Here it is. Our binary tree class. When we instantiate an object, we pass the value (the data of the node) as a parameter. Look at the left_child and the right_child. Both are set to None. Why? Because when we create our node, it doesn’t have any children. We just have the node data. Let’s test it: That’s it. We can pass the string ‘a’ as the value to our binary tree node. If we print the value, left_child, and right_child, we can see the values. Let’s go to the insertion part. What do we need to do here? We will implement a method to insert a new node to the right and to the left. Here are the rules: If the current node doesn’t have a left child, we just create a new node and set it to the current node’s left_child. If it does have the left child, we create a new node and put it in the current left child’s place. Allocate this left child node to the new node’s left child. Let’s draw it out. :) Here’s the code (a consolidated, runnable sketch appears after the traversal discussion below): Again, if the current node doesn’t have a left child, we just create a new node and set it to the current node’s left_child. Or else we create a new node and put it in the current left child’s place. Allocate this left child node to the new node’s left child. And we do the same thing to insert a right child node. Done. :) But not entirely. We still need to test it. Let’s build the following tree: To summarize the illustration of this tree: the a node will be the root of our binary tree; a’s left child is the b node; a’s right child is the c node; b’s right child is the d node (the b node doesn’t have a left child); c’s left child is the e node; c’s right child is the f node; both the e and f nodes do not have children. So here is the code for the tree: Insertion is done. Now we have to think about tree traversal. We have two options here: Depth-First Search (DFS) and Breadth-First Search (BFS). DFS “is an algorithm for traversing or searching tree data structure. One starts at the root and explores as far as possible along each branch before backtracking.” — Wikipedia BFS “is an algorithm for traversing or searching tree data structure. It starts at the tree root and explores the neighbor nodes first, before moving to the next level neighbors.” — Wikipedia So let’s dive into each tree traversal type. Depth-First Search (DFS) DFS explores a path all the way to a leaf before backtracking and exploring another path. Let’s take a look at an example with this type of traversal. The result for this algorithm will be 1–2–3–4–5–6–7. Why? Let’s break it down. 1. Start at the root (1). Print it. 2. Go to the left child (2). Print it. 3. Then go to the left child (3). Print it. (This node doesn’t have any children.) 4. Backtrack and go to the right child (4). Print it.
(This node doesn’t have any children.) 5. Backtrack to the root node and go to the right child (5). Print it. 6. Go to the left child (6). Print it. (This node doesn’t have any children.) 7. Backtrack and go to the right child (7). Print it. (This node doesn’t have any children.) 8. Done. When we go deep to the leaf and backtrack, this is called the DFS algorithm. Now that we are familiar with this traversal algorithm, we will discuss the types of DFS: pre-order, in-order, and post-order. Pre-order This is exactly what we did in the above example. Print the value of the node. Go to the left child and print it. This is if, and only if, it has a left child. Go to the right child and print it. This is if, and only if, it has a right child. In-order The result of the in-order algorithm for this tree example is 3–2–4–1–6–5–7. The left first, the middle second, and the right last. Now let’s code it. Go to the left child and print it. This is if, and only if, it has a left child. Print the node’s value. Go to the right child and print it. This is if, and only if, it has a right child. Post-order The result of the post-order algorithm for this tree example is 3–4–2–6–7–5–1. The left first, the right second, and the middle last. Let’s code this. Go to the left child and print it. This is if, and only if, it has a left child. Go to the right child and print it. This is if, and only if, it has a right child. Print the node’s value. Breadth-First Search (BFS) The BFS algorithm traverses the tree level by level and depth by depth. Here is an example that helps to better explain this algorithm: So we traverse level by level. In this example, the result is 1–2–5–3–4–6–7. Level/Depth 0: only the node with value 1. Level/Depth 1: nodes with values 2 and 5. Level/Depth 2: nodes with values 3, 4, 6, and 7. Now let’s code it. To implement a BFS algorithm, we use the queue data structure to help. How does it work? Here’s the explanation. First add the root node into the queue with the put method. Iterate while the queue is not empty. Get the first node in the queue, and then print its value. Add both left and right children into the queue (if the current node has children). Done. We will print the value of each node, level by level, with our queue helper (both the DFS orders and this BFS routine are shown in the sketch below). Binary Search Tree “A Binary Search Tree is sometimes called ordered or sorted binary trees, and it keeps its values in sorted order, so that lookup and other operations can use the principle of binary search” — Wikipedia An important property of a Binary Search Tree is that the value of a node is larger than the values of the offspring of its left child, but smaller than the values of the offspring of its right child. Here is a breakdown of the above illustration: A is inverted. The subtree 7–5–8–6 needs to be on the right side, and the subtree 2–1–3 needs to be on the left. B is the only correct option. It satisfies the Binary Search Tree property. C has one problem: the node with the value 4. It needs to be on the left side of the root because it is smaller than 5. Let’s code a Binary Search Tree! Now it’s time to code! What will we see here? We will insert new nodes, search for a value, delete nodes, and look at the balance of the tree. Let’s start.
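The code snippets in the original post were embedded as images and did not survive in this text, so here is a compact Python sketch that follows the prose above: the node class with insert_left and insert_right, the three DFS orders, and a queue-based BFS. Method names follow the article's wording; everything else (the demo tree at the bottom, the comments) is illustrative rather than the author's exact code:

from queue import Queue


class BinaryTree:
    def __init__(self, value):
        self.value = value
        self.left_child = None
        self.right_child = None

    def insert_left(self, value):
        if self.left_child is None:
            # No left child yet: the new node simply becomes it.
            self.left_child = BinaryTree(value)
        else:
            # Otherwise the new node takes the left child's place and the
            # old left child becomes the new node's left child.
            new_node = BinaryTree(value)
            new_node.left_child = self.left_child
            self.left_child = new_node

    def insert_right(self, value):
        if self.right_child is None:
            self.right_child = BinaryTree(value)
        else:
            new_node = BinaryTree(value)
            new_node.right_child = self.right_child
            self.right_child = new_node

    # Depth-First Search traversals
    def pre_order(self):
        print(self.value)
        if self.left_child:
            self.left_child.pre_order()
        if self.right_child:
            self.right_child.pre_order()

    def in_order(self):
        if self.left_child:
            self.left_child.in_order()
        print(self.value)
        if self.right_child:
            self.right_child.in_order()

    def post_order(self):
        if self.left_child:
            self.left_child.post_order()
        if self.right_child:
            self.right_child.post_order()
        print(self.value)

    # Breadth-First Search: level by level, using a queue helper
    def bfs(self):
        queue = Queue()
        queue.put(self)
        while not queue.empty():
            current_node = queue.get()
            print(current_node.value)
            if current_node.left_child:
                queue.put(current_node.left_child)
            if current_node.right_child:
                queue.put(current_node.right_child)


# The 1..7 tree used in the traversal walkthroughs above.
tree = BinaryTree(1)
tree.insert_left(2)
tree.insert_right(5)
tree.left_child.insert_left(3)
tree.left_child.insert_right(4)
tree.right_child.insert_left(6)
tree.right_child.insert_right(7)

tree.pre_order()   # 1 2 3 4 5 6 7
tree.in_order()    # 3 2 4 1 6 5 7
tree.post_order()  # 3 4 2 6 7 5 1
tree.bfs()         # 1 2 5 3 4 6 7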
Insertion: adding new nodes to our tree Imagine that we have an empty tree and we want to add new nodes with the following values in this order: 50, 76, 21, 4, 32, 100, 64, 52. Since the tree is empty, the first value, 50, becomes the root of our tree. We can now start inserting node by node. 76 is greater than 50, so insert 76 on the right side. 21 is smaller than 50, so insert 21 on the left side. 4 is smaller than 50. The node with value 50 has a left child, 21. Since 4 is smaller than 21, insert it on the left side of this node. 32 is smaller than 50. The node with value 50 has a left child, 21. Since 32 is greater than 21, insert 32 on the right side of this node. 100 is greater than 50. The node with value 50 has a right child, 76. Since 100 is greater than 76, insert 100 on the right side of this node. 64 is greater than 50. The node with value 50 has a right child, 76. Since 64 is smaller than 76, insert 64 on the left side of this node. 52 is greater than 50. The node with value 50 has a right child, 76. Since 52 is smaller than 76, we look at 76’s left child, 64. 52 is smaller than 64, so insert 52 on the left side of this node. Do you notice a pattern here? Let’s break it down. Is the new node’s value greater or smaller than the current node’s? If the value of the new node is greater than the current node’s, go to the right subtree. If the current node doesn’t have a right child, insert it there, or else go back to step #1. If the value of the new node is smaller than the current node’s, go to the left subtree. If the current node doesn’t have a left child, insert it there, or else go back to step #1. We did not handle special cases here. When the value of a new node is equal to the current value of the node, use rule number 3. That is, consider inserting equal values to the left side of the subtree. Now let’s code it. It seems very simple. The powerful part of this algorithm is the recursion, which is on line 9 and line 13 (the line numbers refer to the snippet embedded in the original post). Both lines of code call the insert_node method and use it for the left and right children, respectively. Lines 11 and 15 are the ones that do the insertion for each child. Let’s search for the node value… Or not… The algorithm that we will build now is about doing searches. For a given value (an integer number), we will say whether our Binary Search Tree does or does not have that value. An important item to note is how we defined the tree insertion algorithm. First we have our root node. All the left subtree nodes will have smaller values than the root node, and all the right subtree nodes will have values greater than the root node. Let’s take a look at an example. Imagine that we have this tree. Now we want to know if we have a node with the value 52. Let’s break it down. We start with the root node as our current node. Is the given value smaller than the current node’s value? If yes, then we will search for it in the left subtree. Is the given value greater than the current node’s value? If yes, then we will search for it in the right subtree. If rules #1 and #2 are both false, we compare the current node’s value and the given value to see if they are equal. If the comparison returns true, then we can say, “Yeah!
Our tree has the given value,” otherwise, we say, “Nooo, it hasn’t.” Now let’s code it. Let’s break down the code: Lines 8 and 9 fall under rule #1. Lines 10 and 11 fall under rule #2. Line 13 falls under rule #3. How do we test it? Let’s create our Binary Search Tree by initializing the root node with the value 15. And now we will insert many new nodes. For each inserted node, we will test if our find_node method really works. Yeah, it works for these given values! Let’s test for a value that doesn’t exist in our Binary Search Tree. Oh yeah. Our search is done. Deletion: removing and organizing Deletion is a more complex algorithm because we need to handle different cases. For a given value, we need to remove the node with this value. Imagine the following scenarios for this node: it has no children, it has a single child, or it has two children. Scenario #1: A node with no children (a leaf node). If the node we want to delete has no children, we simply delete it. The algorithm doesn’t need to reorganize the tree. Scenario #2: A node with just one child (a left or right child). In this case, our algorithm needs to make the parent of the node point to the child node. If the node is the left child, we make the parent of the left child point to the child. If the node is the right child of its parent, we make the parent of the right child point to the child. Scenario #3: A node with two children. When the node has two children, we need to find the node with the minimum value, starting from the node’s right child. We will put this node with the minimum value in the place of the node we want to remove. It’s time to code. First: note the parameters value and parent. We want to find the node that has this value, and the node’s parent is important to the removal of the node. Second: note the return value. Our algorithm returns a boolean value. It returns True if it finds the node and removes it. Otherwise it returns False. From line 2 to line 9: We start searching for the node that has the value we are looking for. If the value is smaller than the current node’s value, we go to the left subtree, recursively (if, and only if, the current node has a left child). If the value is greater, go to the right subtree, recursively. Line 10: We start to think about the removal algorithm. From line 11 to line 13: We cover the node with no children, where it is the left child of its parent. We remove the node by setting the parent’s left child to None. Lines 14 and 15: We cover the node with no children, where it is the right child of its parent. We remove the node by setting the parent’s right child to None. Clear node method: I will show the clear_node code below. It sets the node’s left child, right child, and value to None. From line 16 to line 18: We cover the node with just one child (a left child), where it is the left child of its parent. We set the parent’s left child to the node’s left child (the only child it has). From line 19 to line 21: We cover the node with just one child (a left child), where it is the right child of its parent. We set the parent’s right child to the node’s left child (the only child it has). From line 22 to line 24: We cover the node with just one child (a right child), where it is the left child of its parent. We set the parent’s left child to the node’s right child (the only child it has). From line 25 to line 27: We cover the node with just one child (a right child), where it is the right child of its parent.
We set the parent’s right child to the node’s right child (the only child it has). From line 28 to line 30: We cover the node with both left and right children. We get the smallest value starting from the node’s right subtree (the code is shown below) and set it as the value of the current node. Then we finish by removing that smallest node. Line 32: If we find the node we are looking for, the method needs to return True. From line 11 to line 31, we handle this case. So just return True and that’s it. To use the clear_node method: set all three attributes (value, left_child, and right_child) to None. To use the find_minimum_value method: go way down to the left. When we can’t go any further left, we have found the smallest value. Now let’s test it. We will use this tree to test our remove_node algorithm. Let’s remove the node with the value 8. It’s a node with no children. Now let’s remove the node with the value 17. It’s a node with just one child. Finally, we will remove a node with two children. This is the root of our tree. The tests are now done. :) That’s all for now! We learned a lot here. Congrats on finishing this dense content. It’s really tough to understand a concept we don’t yet know, but you did it. :) This is one more step forward in my journey to learning and mastering algorithms and data structures. You can see the documentation of my complete journey here on my Renaissance Developer publication. Have fun, keep learning and coding.
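As with the earlier snippets, the Binary Search Tree code that the article walks through (and whose line numbers it references) was embedded as images. The Python sketch below reimplements the same operations (insert_node, find_node, clear_node, find_minimum_value, and remove_node) from the prose description, so its line numbers will not match the original; treat it as an illustration under those assumptions rather than the author's exact code. Removing a root that has exactly one child is not covered, mirroring the cases discussed above:

class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left_child = None
        self.right_child = None

    def insert_node(self, value):
        # Smaller (or equal) values go to the left subtree, larger to the right.
        if value <= self.value:
            if self.left_child:
                self.left_child.insert_node(value)
            else:
                self.left_child = BSTNode(value)
        else:
            if self.right_child:
                self.right_child.insert_node(value)
            else:
                self.right_child = BSTNode(value)

    def find_node(self, value):
        # Rule #1: smaller values live in the left subtree.
        if value < self.value and self.left_child:
            return self.left_child.find_node(value)
        # Rule #2: greater values live in the right subtree.
        if value > self.value and self.right_child:
            return self.right_child.find_node(value)
        # Rule #3: otherwise, it is a match only if the values are equal.
        return value == self.value

    def clear_node(self):
        self.value = None
        self.left_child = None
        self.right_child = None

    def find_minimum_value(self):
        # Keep going left; the leftmost node holds the smallest value.
        if self.left_child:
            return self.left_child.find_minimum_value()
        return self.value

    def remove_node(self, value, parent=None):
        # Locate the node first, keeping track of its parent.
        if value < self.value:
            return self.left_child.remove_node(value, self) if self.left_child else False
        if value > self.value:
            return self.right_child.remove_node(value, self) if self.right_child else False
        # Found it -- handle the three removal scenarios.
        if self.left_child is None and self.right_child is None:
            # Scenario #1: a leaf node.
            if parent is None:
                self.clear_node()
            elif parent.left_child is self:
                parent.left_child = None
            else:
                parent.right_child = None
        elif self.left_child and self.right_child:
            # Scenario #3: two children -- copy up the smallest value of the
            # right subtree, then remove the node that held it.
            self.value = self.right_child.find_minimum_value()
            self.right_child.remove_node(self.value, self)
        else:
            # Scenario #2: exactly one child (assumes the node is not the root).
            child = self.left_child or self.right_child
            if parent.left_child is self:
                parent.left_child = child
            else:
                parent.right_child = child
        return True


# The tree from the insertion walkthrough: 50, 76, 21, 4, 32, 100, 64, 52.
root = BSTNode(50)
for v in [76, 21, 4, 32, 100, 64, 52]:
    root.insert_node(v)

print(root.find_node(52))   # True
print(root.find_node(999))  # False
root.remove_node(21)        # a node with two children (4 and 32)
print(root.find_node(21))   # False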
https://medium.com/free-code-camp/all-you-need-to-know-about-tree-data-structures-bceacb85490c
[]
2020-05-23 20:19:09.096000+00:00
['Python', 'Algorithms', 'Coding', 'Technology', 'Programming']
FinOps at SMPD
Cloud resources are free… until they are not. FinOps — Joel Marchand Cloud computing is amazing. Resources of all sizes and shapes can be provisioned with a few commands. But at the risk of using a tired trope, with great power comes great responsibility, or at least great cost. My team saved Cloud costs by 40% while doubling our infrastructure by adopting FinOps principles articulated around four tenets. In this blog, I will introduce our process. Some Background: The Social and Messaging Product Development (SMPD) team lives within T-Mobile’s contact center organization. Our mission is to develop tools and experiences empowering customers to have the best and most effective interactions with T-Mobile over asynchronous communication modes (such as messaging or social networks). We build tools and a platform used by Customer Care experts to accelerate and optimize every contact with our customers. Our application landscape is organized around an increasing number of microservices, running in containers orchestrated by Kubernetes and augmented by a collection of Amazon Web Services products for data persistence, streams management, encryption, etc. Within SMPD, my team is focusing on creating and maintaining the platform, tools and guardrails allowing the product teams to innovate, disrupt and optimize business flows with increasing speed and quality. This team is known as the Engineering Efficiency team or E2. As the custodian of the infrastructure and platform, the E2 team took on the responsibility to introduce FinOps principles as part of the quality markers for each SMPD product. So even as our portfolio of products and services continues to grow and change, the E2 team is focusing on maximizing resource utilization by identifying and reducing waste while remaining on the leading edge of technology innovation. These principles were put to the test during a recent project to achieve multi-region resilience. More on that later… To bootstrap our FinOps responsibilities, we organized four focus areas: 1. Increase cost awareness across the whole organization (and make it fun!) 2. Provide automated guardrails for resources creation and management. 3. Reduce waste by adopting ephemeral environments principles. 4. Create cost modelling tools at design time. 1. Making cost awareness fun The first step in our FinOps journey is to make the actual cost of services available to all. True to the spirit of the DevOps model, the operational responsibilities for a product lies fully with the team that creates the product. As such, it is important to make the cost of running a product completely transparent. The T-Mobile Cloud Center of Excellence (CCOE) provides each team using cloud resources a detailed report on the cost of running every aspect of our applications. From the CPUs and memory to network connection cost, the information is available in nearly real time. This invaluable tool is too often only available to comptrollers and management. We decided to provide that information to everybody inside the team. Awareness is the greatest tool to ensure the rightsizing of resources. This has become a part of our operational review of our applications. Every month, the E2 team reviews the numbers provided by the CCoE and presents them to the different product teams with suggestions on potential optimization opportunities. To increase the participation of the product teams, we have also taken a page out of the gamification playbook. 
We regularly create contests between teams to achieve some widespread goals for the platform. For example, we needed the team to right-size the provisioning of the Kubernetes pods resources. Our goal was for each container to use 60% of assigned memory as a baseline for provisioning. Creating a leaderboard showing how each team’s portfolio met that goal increased awareness while tickling their competitive streak. It is a simple but very effective solution to ensure the teams’ participation. Gamified Resources Utilization Dashboard 2. Guardrails for resource creation and management In SMPD, we strongly believe in bringing the decision-making process as close to the individual as possible. However, there are some requirements that are larger than the team. Security standards are a good example. To achieve adherence to organizational standards, the E2 team is designing automated guardrails that implement the patterns necessary for compliance without additional burdens on developers. Our CI/CD pipeline is a constantly evolving product designed to implement standards when they are needed. For example, the pipeline enforces naming conventions and tagging requirements on resources. And when automation is not possible, documentation starting with the “why” of a pattern is available for constant reference. We have affectionately named our documentation system E2Pedia. If the code does not meet our documented standards, the pipeline fails the build and prevents the deployment. 3. Ephemeral environments With the resource creation and management guardrails in place, we move a step closer to ephemeral environments. One of the promises of the Cloud is the ability to create and destroy resources as needed. The ability to create, let’s say a Redis cluster, with a few mouse clicks is truly a game changer when it comes to designing disruptive products. Destroying that Redis cluster when done however seems to not generate the same level of glee. It is also not in the Cloud providers’ business model to encourage us to clean up after ourselves. My parents would be proud. Our team is moving towards a completely ephemeral provisioning model. Every Cloud resource required is created upon requests by the CI/CD pipeline and destroyed either on schedule or with the click of a button. This ensures that resources are only created for as long as they are needed. This seems like an obvious habit to implement but it does require us to declare all the infrastructure needed by an application with the application itself. Adopting the principles of “Infrastructure as Code” and empowering every developer to declare what resources is needed is a critical step in managing our cloud resources efficiently. 4. Cost modeling tool at design time The next step in our FinOps initiative is to provide a cost modelling tool at design time. While we do not advocate for cost to be the sole or even a major design consideration, providing knowledge of the impact of those decisions helps shape better design decisions. The tool models the cost of the infrastructure required to run the service through a short survey. For example, a new application might require 4 new microservices deployed over 35 Kubernetes pods. Each pod is provisioned with 500 minutes of CPU time and 256 MiB of memory. The application also uses a couple DynamoDB tables and Kafka for streaming. Based on the information collected, we provide a strawman of the costs associated with running the application. 
The results can be saved for side-by-side comparison with different configurations. This information can also be compared with the results of some more rigorous performance testing to validate the design assumptions. This tool is still a work in progress. The feedback we gather from the different teams will make it more accurate over time. Data-driven decisions make for better decisions. Walking the talk: One of our team’s initiatives for 2020 is to improve the resilience of our applications by distributing our services across multiple cloud regions and following an active/active traffic distribution pattern. This meant duplicating our infrastructure and supporting cloud services. To validate our FinOps principles, we set a goal to be fully geo-redundant without increasing our cloud budget for the year. Six months into the year, the current forecast has us achieving that goal! Conclusions: Managing the costs associated with an application portfolio is rarely a preoccupation of a traditional development team. We believe, however, that focusing the whole team’s attention on the financial aspects of software development empowers us to make better, more conscious decisions about the design, management, and maintenance of our products. FinOps is a process complementary to any good DevOps team.
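None of SMPD's internal tooling is shown in the article, but the guardrail idea from focus area 2 can be illustrated with a small, hypothetical pipeline check. The required tags, the naming convention, and the function names below are made up for the example; a real pipeline would read these declarations from the infrastructure-as-code files checked in with the application:

import re
import sys

# Hypothetical organizational standards -- placeholders, not T-Mobile's actual rules.
REQUIRED_TAGS = {"team", "application", "environment", "cost-center"}
NAME_PATTERN = re.compile(r"^smpd-[a-z0-9-]+$")


def validate_resource(resource):
    """Return a list of guardrail violations for one declared cloud resource."""
    violations = []
    name = resource.get("name", "")
    if not NAME_PATTERN.match(name):
        violations.append(f"{name or '<unnamed>'}: does not follow the naming convention")
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    if missing:
        violations.append(f"{name}: missing required tags {sorted(missing)}")
    return violations


if __name__ == "__main__":
    # Example declarations; in practice these would come from the repository.
    resources = [
        {"name": "smpd-orders-cache",
         "tags": {"team": "e2", "application": "orders",
                  "environment": "dev", "cost-center": "1234"}},
        {"name": "tempRedis", "tags": {"team": "e2"}},  # violates both checks
    ]
    problems = [v for r in resources for v in validate_resource(r)]
    for p in problems:
        print("FAIL:", p)
    # Fail the build if any resource is out of compliance.
    sys.exit(1 if problems else 0)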
https://medium.com/tmobile-tech/finops-at-smpd-db7c9646bac5
["Olivier D'Hose"]
2020-07-06 23:57:24.871000+00:00
['DevOps', 'Cloud Computing', 'Finops', 'T Mobile Tech']
Diary of a NYC Hospital: Losing Hope in the ICU
Today wasn’t a great day. We did the best we could. It just went on and on. A lot of people just dying in front of us. Due to the nature of the crisis, there are so many sick patients overwhelming the staff. It’s very difficult to get everyone into the ICU in a timely manner. We’ve tried to deploy other physicians and medical personnel to help manage the critical-care patients on the floor, but nonetheless it’s very overwhelming. Maybe two or three patients died upstairs in our second ICU, but a lot of the deaths are of the patients that don’t make it to the ICU who are sitting on the regular floors and unfortunately can’t come here because there are no beds. In the ICU, you have specialized intensive care, specialized nurses and technicians that know how to manage the special medications. Ventilator management can be very complicated, and the staff on the floor are not as used to dealing with that and taking care of patients with those requirements. Rationing ventilators hasn’t been an issue yet. We have enough protective materials, and we’ve had help in terms of staffing, but there’s no other disease in our lifetime where you see 170, 180 patients in the hospital with the same disease. We still just don’t have the staffing or the space for everyone. We had a 12-bed ICU, we opened up a second 12-bed ICU, and yet we still have 15 to 20 patients in the hospital that we need to find a place for. It’s overwhelming because the list just goes on and on and on. We realize we’re not going to get to all of these patients in the right way. And then there are new patients coming in that we have to evaluate. Even if people die, the list does not get shorter. It’s an ongoing process, and it makes us feel horrible. It makes us feel helpless. Historically, for the most part, we can always get somebody an ICU bed in a reasonable time frame. Now people are waiting much longer. We’re still trying to get them into ICUs as soon as possible, and we’ve also been transferring patients to other hospitals in the Sinai system. The virus is horrible. It’s a horrifying disease. It’s not just a disease of the lungs that we’re seeing; it’s also a disease of the kidney. We’re seeing around 80 percent of critically ill patients experiencing kidney failure. We’re finding that to be a very poor prognostic indicator. Upwards of 80 to 90 percent of patients with kidney failure have died. We don’t have enough dialysis machines to take care of these patients. And the disease of the lungs is like nothing you’ve ever seen before. There are so many theories about what this is and the best way to manage it, and it just suggests to me that nobody really knows. The elderly patients seem to be the ones that just kind of die right away. The ones we’re seeing in the ICU who are just kind of lingering on the vents and not getting any better, or the ones who die after kind of a week or so, are the 50-to-60-year-old patients. They’re not that old. We’ve also seen some younger patients with comorbidities that have not done well. It’s horrible, they’re dying by themselves, we don’t have enough time to update the families as well as they deserve. There have been a couple of cases that have been particularly tough. One of our first patients we were particularly aggressive with and put a lot of effort into trying to do the right thing for her in terms of her therapies. The nurses weren’t used to doing what we call proning, which is putting people on their stomach, but we got it done and she was initially improving and we were optimistic. 
Then the past few days she has just gotten worse and worse. Probably through a blood clot or a complication in the lungs, she’s very likely to die in the next 24 hours. She’s only 53 years old. It’s important for the public to know that it’s not just old people who are suffering from this. It could be anybody, it could be any of us, any of our families, our parents, brothers, sisters. People sometimes get a sense of infallibility, but I think they need to realize how severe this is. We’re seeing such a high mortality rate in these younger patients. It’s not just a disease of the lungs; it’s a disease of the whole body. We were able to take our first two people off ventilators today, and so far they are doing okay. That was a big emotional hump for us. But not many. That’s two out of the 40-some patients total who have come in and out of our ICU, not including the patients on the floor that we haven’t been able to take care of in the right way. There are days that I’m convinced I’m going to get it and I’m going to die. I think that’s more emotional than scientific. At this point, if I was going to get sick from it, I probably would have already died. But it’s a concern. We could bring it home to our families. I’ve isolated myself from my family, which is hard, but I think it’s the safest way to handle it. I don’t know how long I can keep doing this for. In my mind, I’m going to do it, I’m going to be there until this is done, but it’s going to be very hard to come back to work after this is over. This whole experience has definitely been traumatizing. All we really have with this disease is supportive care. It makes me think about what we’re really doing — if we’re really doing good for people by just prolonging their death, when it’s just going to end up in mortality anyway. In the beginning, we’d bring someone to the ICU and put them on a ventilator, and we would think they have a reasonable chance of getting better like any other disease. But as we get more experience with this virus, we’re seeing through the data we have in our hospital that most of them don’t do well. Other diseases we still have hope. This has really taken away all our hope. Before it started, we expected it was going to come here and that it was going to be bad, and we knew it was overwhelming, but I didn’t imagine it was going to be like this. It’s worse than I thought. I’ve been working extra hours. Our normal shift is 12 hours. I’ve been there 14-, 15-hour days and working on days off. So I’m there six days a week. I try to take one day off a week. I try to sleep and catch up on things around the house. But I generally mostly find myself staring into space. The hospital is offering resources for health-care professionals for psychological health. It’s going to be a big need, for sure. I think during the crisis, people are getting by, but I think after, when it’s controlled, it’s going to be a big problem. I want people to know that we’re doing the best we can under the circumstances. We know that we’re not the ones suffering from this disease and we feel for the family and the patients that are. But we’re still having a hard time trying to help everybody through it. A really hard time.
https://medium.com/new-york-magazine/diary-of-a-nyc-hospital-losing-hope-in-the-icu-cd853ca2d92d
['New York Magazine']
2020-04-10 16:23:29.199000+00:00
['Healthcare', 'Coronavirus']
Using Zoom? Remove Background Noise With Single Click (Free-ium app)
Until a while back, this all sounded like a distant dream. Then I discovered a new AI-powered software called Krisp. Krisp does one job and it does it really well: removing background noise automatically. It does not matter what purpose you are using your microphone for; as long as there is background noise, Krisp will automatically remove all the noise, and your audio will sound like you are sitting in a quiet room. This freemium app is ideal for screencasters, work-from-home people, digital nomads, entrepreneurs, and the rest of the crowd who need to make calls online. I have tested this app on various occasions:
- Creating a screencast on a noisy beach
- Making a Zoom team call from a busy cafe
- Podcast recording
- Team call
The best part is that if you have a co-worker who is always making calls from a noisy background, you can use the Krisp app on your end to remove the background noise. Sounds unbelievable? When I first heard about Krisp on ProductHunt, that's what I thought, but I was blown away after I tested this noise cancellation app. I did not even realize how quietly this app had become an integral part of my day-to-day work life, and now I'm introducing it to you. Now, if you have ever been embarrassed due to background noise, be it a crying baby or airport noise in the background, this app is what you need. Before we delve more into how Krisp works and how to use it, watch this one-minute video to understand what Krisp could do for you: Moving on…
Krisp: The noise cancellation app you always wanted
Krisp is an AI-powered noise cancellation app. The revolutionary noise cancellation technology behind the app is based on deep neural network training and ongoing improvements. Thanks to this noise cancellation tech, the app mutes the background noise coming from your side of the call. Krisp is available as a desktop app (Windows and Mac), a Chrome extension, and also as an iOS app. I have extensively used the Mac app, and after seeing its benefits, I upgraded to a paid plan. However, the free plan is good enough for occasional use, and a majority of our readers will like that. Getting started with Krisp is easy, and it is also designed for the new work-from-home workforce, who are not too tech-savvy. Here is how you can start using Krisp:
- Head over to the Krisp website
- Download and install the Windows or Mac version
- Create an account using your work email (this will give you 14 days of the pro account for free)
There is no need for credit card details, and after the 14-day trial, you can continue using the free plan. (Plan details below) Now, start the software and it will help you configure many popular conferencing apps such as Zoom, Skype and Slack, to name a few. However, here are two things you should know:
- Select the right input device and enable "Remove noise". This is for those who are using an external microphone.
- In the app that you are using, select "Krisp" as your microphone
From here on, Krisp will automatically filter any background noise, and the party on the other end will always hear noise-free sound. If you are on a call with someone who has a noisy background, you can simply enable the "Remove noise" feature for the speaker. Simple, isn't it? Well, I'm pretty sure you will be blown away by the sound quality when you use this app. As I mentioned earlier, I was blown away by the quality when I used this app for the first time. Since this app offers a generous free plan, it is worth every second you spend on setting it up.
Krisp adopted the growth hacking model of Dropbox, where you can refer the app to friends and get a month of the pro account for free. Your friends will also get a month of the pro account for free. You can find your referral link within the Krisp dashboard. Download Krisp app If you are someone who is working from home or know someone who works from home these days (see what I did there 😀), you can let them know about this app, especially the ones who have a child or live in a noisy neighbourhood. They will certainly thank you for introducing them to the Krisp app. Here are a few nice words from people who have used Krisp for various purposes: If you are a digital nomad or someone who is always on the move, this is a must-have app for you. Let me know what other techniques you are using for noise cancellation. Any other app that I should try? Here are a few hand-picked apps/software that you would love to discover next:
https://medium.com/shoutmeloud/zoom-background-noise-cancellation-563f06fc53d3
['Harsh Agrawal']
2020-06-09 10:20:31.076000+00:00
['Skype', 'Noise Cancelling', 'Artificial Intelligence', 'Zoom']
Sketch or Figma which is better for UX Designers?
Becoming a UX/UI Designer is not that tough, but choosing which tool to use for designing is quite difficult. There are a lot of software tools that do pretty much the same thing, like Adobe XD, Figma & Sketch: turning your ideas into low- to high-fidelity prototypes. Which one should you start with? If you are a beginner in design and don't know which one to pick and start learning, in my opinion I would recommend either Figma or Sketch. But let's see what you might need to know while using these two, whether you are working in a small-scale design agency or in a large company. Figma or Sketch, which is better? Before jumping into which software is better, you need to understand what you can do with both these tools and in what ways each will help you, whether you are working in a small design agency or a large company, and whether it is a small project or a large-scale project. Let's see the pros & cons of both of these from my experience using them in my design journey.
Figma
- User friendly & easy to learn
- Easy to animate your prototypes
- Provides easy realtime collaboration with teammates
- Works well for small to big scale projects
- Free to use
- Paid features are available, which are not needed if you are working on small-level projects
Sketch
- A little bit of a learning curve is needed to get used to the tool
- It is a paid tool with a 30-day free trial, which is a downside
- Better for working with complicated components and style guides
- Not best suited for animating prototypes
- Previewing designs & prototypes is not as good when compared to Figma
In my experience I would definitely say that Figma is much better than Sketch to use as a design tool. But if you land in a company that doesn't use the tool you like, you just have to keep that aside and learn what is needed. But all these design tools, whether Figma, Sketch or XD, are incomplete when it comes to design handoff, and you need to know a little bit about other helpful tools, which I will talk about in my upcoming posts.
https://medium.com/design-bootcamp/sketch-or-figma-which-is-better-for-ux-designers-59bfd88d2424
['Sathish Kumar']
2020-12-28 22:28:41.207000+00:00
['UX Design', 'Design', 'UI Design', 'Design Tools', 'UX']
Funding & Exits — Chapter 1: The Fundable Entrepreneur
Now available in ebook + paperback “Funding & Exits is the definitive guide that I recommend every one of my portfolio companies’ management teams read and internalize” — Bruce Cleveland, Partner, Wildcat Venture Partners Entrepreneurship is woven into the fabric of America. 30 million small businesses operate in the US.¹ Each year, about 800,000 startups join their ranks.² Every month, 300 of every 100,000 adults start a new business.³ For the past forty years, this small business activity has been the growth engine of the US economy, responsible for almost all net new job creation.⁴ And yet, almost as many businesses fail every year as are started. Of all new businesses, only 1% employ fifty or more people after ten years in operation.⁵ This percentage has declined fairly steadily since 1990. Less than 0.1% of all US businesses employ more than 500 people.⁶ The number of public companies in the US was down to 4,333 by mid-2016 — a 46% drop from twenty years ago.⁷ And in 2017, just 160 companies went IPO — down 42% from three years ago.⁸ IPOs since 1999 Why have so many new businesses stayed small or failed? What factors have constricted their growth? Why have so few scaled into large companies, and even fewer gone public? Company building is a science. As CEO, the people you hire, the markets you choose, the products you build, the systems you design, the revenue generation practices you pursue and the funding path you follow all contribute to your growth trajectory. Not every company will go IPO, but armed with knowledge of the science of company building, a disciplined CEO can build a company that achieves greater success at every stage. The best CEOs will lead their companies towards impressive exits. A select few will break through and take their companies public. Technology companies have a unique path to scale. Because of their potential for outsized growth, tech companies have access to capital not available to other types of companies. In this book, our focus is on tech companies. As tech CEOs scale their companies and introduce their technologies to more and more customers, they disrupt and change the world. But achieving success is hard. If you are a tech company CEO, you know well that the scaling of your company is a ragged jog through a vicious gauntlet. The marketplace is unforgiving. Failure nips at your heels. Only the strongest get funded; only the strongest survive. Before an investor takes a deep look at your company, she will first take a deep look into your background (and into your soul). What makes you, the tech company CEO, fundable? When I started Digital Air Strike, my cofounder and I each put in $100K of our own money. Five months later, just before that money ran out, I raised a $900K seed round. Then, as our bank balance fell perilously low, we raised an $8M A round. After that, there were a couple more funding events — one of which powered an acquisition. One thing was common across every one of these funding events: they were all really hard. Investors were hard to reach. Once reached, most wouldn't take a meeting. Once we met, all were skeptical. After the meetings, most said no. Only a few — the bare minimum, actually — said yes. Even after we received a term sheet, every step until the close of funding was difficult. Every time. Was I fundable? Yes, I was — by the skin of my teeth. We had enough of a story and a strong enough team to convince just enough investors that our company was worthy of investment.
I have no doubt each investor in Digital Air Strike took a close, critical look at me, the CEO. Each had to make a judgment: would I be able to lead this company to an exciting exit? Investment is about confidence. An entrepreneur is fundable when she can create confidence in the hearts and minds of investors. At every company stage, domain expertise is important. But that’s just the beginning. In the early stage, the company’s choice of market, product vision and go-to-market plan are all just theories. Nothing has been proven yet. You have a concept and a category. You might have an initial product release, or even a Minimum Viable Product, but you have not yet proven a repeatable sales model. At this stage, it’s all about the team, especially the CEO. The investor knows that your initial choice of market, your initial vision for the product, and your planned scaling path will all change radically, so she can’t invest in that. All she can invest in is you. What exactly does the early stage investor look for in you, the entrepreneur? Grit, determination and persistence matter. Ben Horowitz, in his book The Hard Thing About Hard Things, said: “Great CEOs face the pain. They deal with the sleepless nights, the cold sweats, and what my friend the great Alfred Chuang (the legendary cofounder and CEO of BEA Systems) calls ‘the torture’. Whenever I meet a successful CEO, I ask them how they did it…. The great CEOs tend to be remarkably consistent in their answers. They all say, ‘I didn’t quit.’” ⁹ Vision, confidence and passion matter. Jeffrey Bussgang, author of the book Mastering the VC Game: A Venture Capital Insider Reveals How to Get From Start-up to IPO On Your Own Terms, observed: “As I listened to pitch after pitch and watched as some venture-backed startups took off and others didn’t, I became much more aware that the successful entrepreneur is built, in fact has to be built, in ways that are fundamentally different from other business people… The most important of these [characteristics of the successful entrepreneur] are a certain kind of visionary optimism; tremendous confidence in oneself that can inspire confidence in others; huge passion for an idea or phenomenon that drives them forward; and a desire to change the game, so much so that it changes the world.” ¹⁰ Adaptability matters. Jessica Livingston, in her book Founders at Work, said: “…founders need to be adaptable… People think startups grow out of some brilliant initial idea like a plant from a seed. But almost all the founders I interviewed changed their ideas as they developed them. PayPal started out writing encryption software, Excite started as a database search company, and Flickr grew out of an online game….” ¹¹ Intelligence, toughness, capacity to convince, task relevant fit, and special superpowers matter. In February, 2014, Rob Go, Partner at Nextview Ventures, wrote a blog identifying four attributes of a successful founder: “1. Smart and Tough…Not all awesome founders are necessarily genius-level smart. Nor are all founders UFC-level tough. But there is a baseline that is certainly well above the top 5% of humans on both… 2. Convincing…. Not all founders are evangelistic, charismatic or the greatest salespeople. But almost all are able to be convincing in their own way. 3. Superlative. I’ve found that awesome founders tend to be really really great at something. No one can be amazing at everything, but I’ve seen the most success with founders that show superlative traits. 
I'm always excited when I hear the word 'best' in reference calls… 4. Fitting for the task at hand…. Many founders have multiple strong attributes, but great founders are great in the context of the task at hand.” ¹² As the testimony noted above underscores, early stage investors want to see intelligence, vision, passion, confidence, never-quit perseverance, good fit with the task at hand, and adaptability. The perspectives of Horowitz, Bussgang, Livingston and Go anecdotally affirm the prevailing research. In the late nineties, Saras Sarasvathy, associate professor at the University of Virginia, published her first research into the reasoning of expert entrepreneurs. Since then, a large body of academic research has built up in the area now known as effectuation research (check out effectuation.org). In her wide-ranging interviews with founders of companies valued from $200M to $6.5B, Sarasvathy found that expert entrepreneurs exhibited a distinctive approach to decision making. Specifically, these entrepreneurs exhibit five key reasoning attributes:
1. Start with Means: who we are, what we know, who we know, what we have.
2. Manage Within Affordable Loss: primary focus on limiting risk; manage investment based on the amount that can be affordably lost; less focus on potential upside.
3. Build Partnerships: attract partners early and often; co-create the future with every partner; partners bring new knowledge, contacts and money to the enterprise.
4. Leverage Contingencies: what can we learn from everything we do; how do failures inform us; build / measure / learn.
5. Control, don't Predict: don't spend time trying to predict the future; control the accessible levers; learn and adjust.
These attributes are exhibited through a decision flow that continuously iterates as an opportunity is identified, refined and built into a company. It works like this:
https://medium.com/ceoquest/funding-exits-chapter-1-the-fundable-entrepreneur-ea8dec9ba874
['Tom Mohr']
2020-04-16 21:28:04.181000+00:00
['Funding', 'CEO', 'Startup', 'Venture Capital', 'Tech']
Friends On Medium
Friends On Medium Thanks for your help and example Photo by Simon Hattinga Verschure on Unsplash Although I have not personally met anyone on Medium, there are some people on the platform that I can consider as friends. They have responded to me, and I have responded to them. They have been kind and encouraging. They have made me feel good, and they make this experience worthwhile. The editors who have published my stories have been nice. Sometimes they clap after they have published a story. Sometimes they make a nice comment. Sometimes they offer advice. Those are encouraging and helpful. There was one very negative person whose writing I read a few times. It seemed that she had some of the same views as I do, but then I found that she puts a derogatory spin on things. I have decided not to read any more of her work. She is the only one I have found like that. Thanks to all the writers on Medium who spend hours writing and sharing ideas and experiences. Thanks to the editors who publish the many articles and stories. Thanks to Medium for providing this platform. It is interesting and informative. Hopefully, it will be around for a long, long time. Thanks for the friends I have “met” on Medium. Your friendship and kindness have enriched my life and many lives. Even though it is unlikely that anyone will meet other writers on Medium in person, it is nice to virtually meet new people who share your views — or not. There is so much good material to read on Medium from the thousands of writers who participate. It’s too bad we don’t have enough time to read more of your work. Good luck and success to everyone who writes and reads on Medium.
https://medium.com/illumination/friends-on-medium-97c1a028e973
['Floyd Mori']
2020-12-14 21:11:52.823000+00:00
['Medium', 'Friends', 'Gratitude', 'Writing', 'Reading']
Of writers and their writings
LI Weekly Editorial Of writers and their writings Literary Impulse mini-edition is live! Artwork by Bettina Baldassari As writers and poets, we scribble on, weaving a multitude of stories, every day or once in a few days. Some of what we compose we find closer to our heart than the rest, and yet our readers sometimes appreciate those "less close to our heart" pieces much, much more. That makes me think about the rainbow of perspectives each of us has, and it tells a lot about how and why someone connects to a particular piece more than to others. While I delved quite deep into this, a weird thought entered my mind. I wondered: just like we remember the best pieces we have written, do those pieces remember us too? Born of the ink and the paper dancing together in sync, do the poems that we compose remember the fingers that crafted them, molding them, giving them an identity for a wider audience or for the writer's own self? Do they feel as ecstatic as the writer once the piece, written from the heart, is completed? And do they get affected by the criticisms too, just like the writer sometimes does? Although a bit weird, I would still love to hear all of your takes on this little thought.
https://medium.com/literary-impulse/of-writers-and-their-writings-99683c2eea3b
['Somsubhra Banerjee']
2020-08-21 10:32:35.393000+00:00
['Fiction', 'Literary Impulse', 'Writing', 'Poetry', 'Letters From Li']
Yes, and: Use These Two Words to Build Trust
Most of us say "no" a lot, and it's important that we do. It's also important that we say "yes, and" — a foundational rule in improv — so we can build trust in key moments. Yes, and. These two short words convey a lot. Yes is an acknowledgement: yes, I see you and hear you. Yes, I acknowledge and accept your offer and contribution. And is a reciprocal offer to engage and collaborate: and I'm going to work with, not against, you. And I'm going to share power with you so we can collaborate effectively. And I'm going to embrace possibility and co-create with you. So what are some examples of critical moments when yes, and can be a power tool for trust building?
4 Ways to use YES, AND to build trust
1. Use YES, AND in a brainstorming session
Brainstorming can be scary. It involves cultivating courage, taking risks, and being vulnerable in putting your ideas forward to face possible judgment. Embracing yes, and makes brainstorming less scary and more productive for all involved. When you're brainstorming and the goal is to generate as many ideas as possible, yes, and indicates openness and invites generous ideation. It helps create an inclusive and encouraging space for brainstormers to take risks, to be bold, and to ideate freely.
2. Use YES, AND when welcoming someone
It's hard to be the new kid, especially when joining a pre-established group. Before feeling like you belong, you likely have to overcome feelings of being unsure, nervous, a bit anxious, concerned about looking foolish, and hesitant to contribute. Trust-Centered leaders help new people feel welcome, whether the new person is a new hire, a manager who has recently been promoted and is now in meetings with more senior colleagues, a new member of a community, or a prospect. When you're interacting with someone new, yes, and can be a simple way to demonstrate care and encourage connection. When the new person shares something, support their contribution with a yes, and. This is a subtle, yet powerful way to signal to them that their contribution is noticed, appreciated, and valued. This is a practical way to include them that can help build their confidence in the new context and likely encourage them to keep showing up and contributing.
3. Use YES, AND when encouraging someone to try something challenging and/or new
Trying something new and taking on challenges is daring. It involves sticking your neck out, facing the unknown, and making yourself vulnerable to take a risk that this might not work. It requires the humility to be a beginner and embrace a learning opportunity. And it also potentially has a big payoff of unlocking possibilities and making breakthroughs. It's easier to be daring when you feel supported by those around you and underpinned by psychological safety. Yes, and can help foster those conditions. Yes, and helps you empathize with the person taking the risk and places you on the same team, encouraging them in pursuit of their goal. "Yes, I can see that there's a lot of fear here, and I can also see how much you have to offer and it would be a shame to keep that to yourself. Keep going!"
4. Use YES, AND in a Q&A session
Have you ever been the first person to ask a question in a Q&A session? It takes guts to sit with the tension and go first. And how your first question is received sets the norms and tone for the questions to follow. If you're the person answering the questions, you are in a position to make space for possibility, especially when responding to the first question.
How you respond signals with action whether or not you're truly open to receiving questions, or merely saying that you are. If your response to the first question shuts down the audience, they're less likely to ask you questions. Yes, and shows respect to your audience and builds connection. Is the first question a tough one? Then be sure to yes, and! "Yes, that's an interesting question, and while that's not the focus today, I'm happy to chat about it offline later. Please come find me."
Building trust by embracing YES, AND
No is a terminal point. It prioritizes our ideas and opinions over the contributions of others. Yes, and includes and unleashes others by inviting and acknowledging their inputs and fostering the conditions for effective collaboration. Yes, and opens up pathways and possibilities to move forward together. In what key moments do you think you could more readily embrace yes, and to build trust and create possibility? Is it one of the examples listed above, or does another moment come to mind? How are you going to commit to your practice of yes, and this week?
https://medium.com/spotlight-trust/yes-and-use-these-two-words-to-build-trust-4ba78bc04571
['Lisa Lambert', 'Co-Founder At Spotlight Trust']
2020-07-22 19:15:40.771000+00:00
['Culture Change', 'Leadership Development', 'Teamwork', 'Trust', 'Collaboration']
Building a product growth model
Implementation
The model is meant to be used to change the input spend (especially by the product and marketing teams) and see how that translates to the overall growth of the client base over time. This spend is used in the signup sub-model to map it to the number of signups. Besides that, certain improvements that are expected in the product are quantified as either an increased conversion rate or retention rate (in %, such as 110% meaning a 10% improvement in retention because of some new product feature, or an improved conversion rate because of some decrease of friction in the conversion funnel). Below we can see an example scenario where we expect an improved conversion rate from February 2019 on, combined with an improved retention rate from April 2019 on. Example input sheet: Future spend and hypothesized product improvement impact
Signup sub-model
The signup sub-model was built using a Lasso linear regression model, mapping the historical spend to CPS (cost per signup). This was done as the relationship is mostly linear: increasing marginal cost per signup as the spend increases. The features are the country in which the money was spent, whether a particular month was a crypto craze month (to capture outliers using a boolean feature), and the month of spend (to capture seasonal effects, using a label binarizer). The crypto craze monthly cohorts are treated differently as we had a large inflow of new clients in that period who are not representative of the whole client base, therefore potentially introducing noise in the training data. This was one of the hypotheses that we tested in the feature engineering process and that proved to have enough feature importance to be used. Lasso regularization was used in order to minimize over-fitting, as the number of features increased considerably relative to the size of the data. The alpha parameter was chosen using grid search with cross-validation, as the runtime was not an issue considering the size of the available data. The seasonal effects are the hardest part to model, as the amount of spend in a particular month is decided by the performance marketing team leveraging their domain expertise (thus not captured in the data). This way some months naturally have more spend on average than others, which makes it harder for the model to generalize. Also, the product has been available in certain countries for less time than in others, which poses additional challenges. We tested a CatBoost regression model for a different treatment of categorical features, but the performance didn't improve so we decided on a simpler model instead. With the sub-model trained, we iterated through all the countries and cohort months to forecast the CPS for the whole range of spend that we expected in the future, with a certain step size, say from 0 to 10,000 euros with a step of 1,000 euros. The forecasted CPS values are then combined with the spend itself in order to extract the forecasted signups (spend divided by cost per signup). The model is retrained once a month to add new training values. Applying the Lasso linear regression (left graph) and extracting the desired relationship (right graph)
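To make the description above concrete, here is a hedged, minimal sketch of a Lasso-based CPS model. The column names and numbers are made up, and the pipeline is only loosely modeled on the description in this article, not the author's actual code:

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical training data: one row per (country, calendar month) spend observation.
df = pd.DataFrame({
    "country": ["NL", "NL", "DE", "DE", "FR", "FR"],
    "month": [1, 2, 1, 2, 1, 2],            # calendar month, to capture seasonality
    "crypto_craze": [0, 1, 0, 1, 0, 0],      # boolean flag for the outlier months
    "spend": [1000, 5000, 2000, 8000, 1500, 3000],
    "cps": [2.1, 3.4, 1.8, 3.9, 2.5, 2.9],   # observed cost per signup
})

# One-hot encode the categorical features, keep spend and the flag as numeric features.
features = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["country", "month"])],
    remainder="passthrough",
)
model = Pipeline([("features", features), ("lasso", Lasso(max_iter=10_000))])

# Pick the regularisation strength (alpha) with cross-validated grid search.
search = GridSearchCV(model, {"lasso__alpha": [0.01, 0.1, 1.0, 10.0]}, cv=3)
search.fit(df.drop(columns="cps"), df["cps"])

# Sweep a range of future spend for one country, predict CPS, derive signups.
future = pd.DataFrame({"country": "NL", "month": 3, "crypto_craze": 0,
                       "spend": np.arange(0, 10_001, 1_000)})
future["cps"] = search.predict(future)
future["signups"] = future["spend"] / future["cps"].clip(lower=0.1)  # signups = spend / CPS
```

In the real model each country and cohort month would be swept like this, and the fit refreshed monthly, as described above.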
Conversion sub-model
To model how new users convert over time we used Scipy's curve fit functionality. We tried different methods and this one provided the best performance considering the complexity. The modeling is performed on a country level. First, we extract the retention of each of the monthly cohorts of a country. This is followed by building a retention model for that country, using all of the existing cohorts as training data. Then, we iterate through all of the cohorts in that country and apply a different curve to extrapolate future retention from, based on certain conditions:
- If a cohort has a long enough tenure and is large enough (in tenure 0), we use the curve fit function on that specific cohort's retention data and extrapolate into the future
- If a curve cannot be fit on the cohort data but it satisfies both conditions mentioned above, we use the country-level curve and apply the peg rate (adjusting the country model with the currently available cohort retention data)
- If either of the conditions is not satisfied, we apply the fitted country curve
The function used to model conversion (using Scipy's curve_fit)
Retention sub-model
The business logic behind the Retention sub-model is mostly re-used from the Conversion sub-model. What changes is the curve fit function applied to fit the cohort retention data. The function used to model retention (using Scipy's curve_fit)
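The conversion and retention curves themselves are shown only as figures in the original post, so the functional form below is an assumption: a simple decaying exponential fitted with scipy.optimize.curve_fit on made-up cohort data, plus one possible reading of the peg-rate adjustment mentioned above:

```python
import numpy as np
from scipy.optimize import curve_fit

def retention_curve(tenure, a, b, c):
    # Hypothetical shape: starts near a + c at tenure 0 and decays towards c.
    return a * np.exp(-b * tenure) + c

# Made-up retention for one monthly cohort (fraction of clients still active).
tenure = np.array([0, 1, 2, 3, 4, 5], dtype=float)
retained = np.array([1.00, 0.55, 0.42, 0.36, 0.33, 0.31])

# Fit the curve to the observed part of the cohort's life...
params, _ = curve_fit(retention_curve, tenure, retained, p0=[0.7, 1.0, 0.3])

# ...and extrapolate it to future tenures.
future_tenure = np.arange(6, 13, dtype=float)
cohort_forecast = retention_curve(future_tenure, *params)

# One possible reading of the "peg rate": scale a country-level curve so that it
# passes through the cohort's most recent observed point.
country_params = params  # stand-in for a curve fitted on all cohorts of the country
peg = retained[-1] / retention_curve(tenure[-1], *country_params)
pegged_forecast = peg * retention_curve(future_tenure, *country_params)
```

The retention sub-model reuses the same mechanics with a different curve, and the combined pipeline described next chains spend to signups, conversions and returned clients.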
Combined output
The first pipeline starts with the future marketing spend as input into the Signup sub-model, from which we extract the number of new signups. This is used as the input into the Conversion sub-model, from which we calculate the number of conversions over time (the cohort's tenure), taking into account the potential improvements in the product's conversion rate. The output is the number of new clients in a particular month. After that, the second pipeline starts by using the number of clients in the current month (period t) as input and forecasts the number of returned clients in the next month (period t + 1) with an expanding window method, across all countries and all monthly cohorts. This is done by taking into account historical retention data and potential future improvements in the product's retention rate. The output is the number of returned clients in a particular month. The output of both pipelines is combined into the number of active clients in a particular month (MAU, or in this case monthly active clients).
Evaluation
For the last 6 months that the model was used, the average error in the forecasted number of new clients was 11.1%. The average error in the number of returned clients was 1.7%. This was all evaluated on forecasting t+1, a month ahead. The performance bottleneck was mostly the Signup sub-model, which makes sense considering the data that we're trying to fit and all the exogenous variables that are not included in the model (performance marketing trends over time, future market volatility, market penetration/saturation over time, etc.). What we also noticed is that if we modeled MAU further into the future, by simulating as if the model had been implemented earlier, the errors accumulate quickly. Even if percentage-wise the model performs well, it is not useful for the decision-making process a year or more in advance, as small errors accumulated over a year miss the realized value by too much for it to be useful. While there is no quantification of uncertainty embedded into the model (via stochastic processes), we can build some understanding of the reliability and consistency of results with this approach. Taking this into consideration, we can go more in-depth to gain insight into where there are discrepancies between the forecasted and the realized values when forecasting for period t+1 (next month). Is the CPS (cost per signup) in a particular country higher or lower than expected? Have more or fewer users converted? Did we retain more or fewer clients, taking into account the number of forecasted signups? It's also possible to go deeper and see in which country and which monthly cohort more or fewer clients were retained than expected. This can give us insight into where certain cohorts in certain countries could be churning more than expected, after which we can go deeper and look at the behavioral characteristics that bind them. These are some of the potential use-cases that this model can be used for.
Conclusion
One of the challenges, particularly in the Signup sub-model, is that while the data can be relatively large at the start, we need to get to the granularity that is used for the decision-making process to make it insightful (or at least, as close as possible). When split by country there is a lower signal-to-noise ratio for each of the countries, which makes it harder to model. It would be ideal to model down to the acquisition channel, but the data is just not large enough to generalize from that level. The biggest challenge was thinking through the problem formulation and the process of building the data pipeline (and with that, how to modularize the code in a way that makes it as easy as possible to maintain and improve over time). Much less time was invested in the process of algorithm selection and tuning. An obvious shortcoming of the model is that there is no estimation of the uncertainty in the forecasts. As mentioned, we used the expanding window approach to derive a general understanding of the reliability and performance of the output. Additionally, there are potential black swan events that could occur in the financial markets (in either direction) which would considerably influence the emergent behavior of the client base. By definition, these are high-impact events that are also practically impossible to forecast, so we don't model them either. Overall, the model proves to be useful to measure how certain changes in the product's funnel are reflected in the growth of the client base over time. Besides that, quantifying the effect of the diminishing returns of user acquisition spend is useful to better understand the business levers of growth, which can support the decision-making process. Sources:
https://medium.com/inside-bux/building-a-cohort-level-product-growth-model-b3412ed19fbd
['Jan Osolnik']
2020-07-27 22:44:46.678000+00:00
['Growth Hacking', 'Machine Learning', 'Startup', 'Data Science', 'Fintech']
Organic Search vs Paid Search: Which is the Best in 2k20?
Organic Search vs Paid Search: The Epic Showdown
Organic Search vs Paid Search: Which is the Best in 2k20? How can you incorporate both for maximum ROI in 2k20? I have more than 3.5 years of experience in this industry, and here I want to explain what is good for your business: organic search or paid search? In this digital landscape, a marketer needs to use both paid and organic, meaning PPC and SEO, to deliver results.
Organic Search: Organic search is a method that earns higher placement, where we can obtain maximum results without paying a single penny for lead generation. The ranking on the page is unpaid. With organic search, we get lots of traffic to the website, which in turn improves the search engine ranking of the page. As per research, if you are on the first page of Google (the top 10), then you receive 92% of all search traffic on Google, and traffic drops by 95% on the second page. Organic search includes taglines, URLs, keywords and many other things. People typically find this by googling, which can be referred to as search traffic. So after all this, you might be wondering how you could get organic footprints. Keep SEO in your mind. The main advantage of SEO is a lasting result. Start targeting your page or website with the keywords you want to rank for, and also check their volume. They will definitely help you grow organically in the search results.
Benefits of Organic Search in 2020:
1. Organic search costs nothing
2. Organic search delivers magnificent ROI
3. Organic search supports marketing channels
Organic search costs nothing: Compared to paid search, organic search comes at zero cost. With organic search, we don't have to find funds or budgets; the company doesn't have to spend any money. Just write a good quality blog, fulfill the SEO criteria and get a result. That's it.
Organic search delivers magnificent ROI: Organic search delivers magnificent ROI because SEO is a long-term game. When you start organic SEO, its results keep working for your business over the long run. 90% of marketers report being successful with SEO. We can easily measure the return on an SEO strategy, and easily browse a case to see its impact.
Organic search supports marketing channels: Organic search underpins online marketing. A social media campaign can support organic search and set the company up for successful results, and that advantage is one reason the paid vs organic debate exists. Google loves organic backlinks, so the online marketers ranking at the top in Google mostly get there because they have more organic backlinks compared to competitors.
Paid Search: Paid search means showing ads in the search engine. It works on the pay-per-click model, which is why it is called PPC (pay-per-click). It covers advertising programs like Listing Ads, Google Ads, Bing Ads and Shopping Ads. These are the trends which are helpful in 2020:
- PPC automation
- Amazon paid ads
- YouTube paid video advertising
- Responsive search ads
PPC Automation: In 2020, PPC advertising has changed digital marketing. Google has generated around $32.6 billion in revenue from advertising. There is a large amount of data working together to identify the proper audience, and Google has invested heavily in a wide variety of automation options. With PPC automation, experts can free up their time and spend it on top-level strategy. You will get your result as per your keyword bid.
YouTube Paid Video Advertising:
In January 2017, Google announced a YouTube advertising option to reach more viewers. On YouTube, the marketer can target ads at people who recently searched for particular services and products. It can also target based on your Google history. Keyword bids on YouTube are low compared to the average Google Search cost per click, so it is a cost-effective way to target an audience with more engagement.
Amazon Paid Ads: PPC does not mean that only Google ads and Facebook ads count and everything else can be moved aside. Amazon is the third-largest and fastest-growing advertising platform with Amazon ads, and it has many advantages over Google and Facebook. Advertising on the Amazon site helps to increase the diversity of your advertising mix. When you run Amazon paid ads, sponsored product ads are the advertisements on Amazon that appear first in search results and on product listing pages.
Responsive Search Ads: Responsive search ads were launched in 2019 and are a really great way to target your audience. You provide several different headlines and descriptions, and AI finds the most popular combination. Creativity is back in 2020, and the underlying machine learning techniques will keep improving, although Google's AI can sometimes treat the data very differently than you would expect.
In my opinion, I would suggest doing both for your business. Organic will give you the long-term game, and paid search will give you quality leads and business.
https://medium.com/devtechtoday/organic-search-vs-paid-search-which-is-the-best-in-2k20-4bddccab4945
['Nikhil Rangpariya']
2020-02-21 05:45:31.071000+00:00
['Marketing', 'Organic Marketing', 'Paid Search', 'Organic Seo', 'PPC Marketing']
How to Setup a New Project with Python Env
I'm writing this content to help you understand the best way to set up a new project with Python and to use virtual environments properly. An illustrative image of how virtual environments work. The fact is, when I learned about virtual environments I didn't realize how they could be used in a way that would help to scale and maintain my projects. Used right, they are a truly powerful tool; they help you isolate the tools used in each project so you never again feel lost with the packages you are using for a specific project. Why is that? When you create a virtual environment, activate it, and then install the packages you need for that project, you are installing them only inside that environment. By doing that you are isolating the project, and if you need to run that project on another machine you can trace all the packages it needs and install them in an automated way. But enough talk, let's set up a new project:
1. First create a project directory/folder on your machine. I'm going to guide you through Linux commands (I'm using two commands joined with &&, which runs the second command only after the first succeeds; the first command creates a directory in my current directory and the other changes the current directory to the one just created): mkdir myproject && cd myproject
2. Create the virtual environment inside your project directory, with some obvious name (in this case I'm calling it "env"): python3 -m venv env
3. Put the directory of your virtual environment into the ".gitignore" file. This is useful because when you add your project to git, the packages won't be added to the repository, keeping your repository clean of junk: echo 'env' > .gitignore
4. Activate the environment (and every time you work on that project you have to do this step to work inside your environment): source env/bin/activate
5. Now when you install a package it will be installed in your environment: pip install django
6. Freeze the requirements. Every time you install or remove packages from that environment, you should record that in a file that stores all packages necessary for that project (this is the file you should add to your git repository): pip freeze > requirements.txt
7. If you need to install the requirements for that project, you would use the file created before: python -m pip install -r requirements.txt
A programmatic alternative that uses only the Python standard library is sketched at the end of this post.
I'm new to blog posting, so if you have some tips or contributions to help me grow in this business, I'd appreciate it. My mission with this is to help everyone with the struggles we all have in the beginning with software development and some other IT stuff. So if there is anything you want me to write about, I'm open to that too. Thank you for reading this and I hope that it helped you.
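As mentioned above, here is a minimal, hedged sketch of the same steps driven from Python itself, using only the standard library's venv module. The project name and the django install are just placeholders taken from the steps above, and the pip path assumes a Unix-style layout:

```python
import subprocess
import venv
from pathlib import Path

project = Path("myproject")          # placeholder project name from the steps above
project.mkdir(exist_ok=True)

# Equivalent to `python3 -m venv env` inside the project directory.
venv.create(project / "env", with_pip=True)

# Keep the environment out of version control, as in step 3.
(project / ".gitignore").write_text("env\n")

# Install a package with the environment's own pip (Unix-style path assumed).
pip = str(project / "env" / "bin" / "pip")
subprocess.run([pip, "install", "django"], check=True)

# Record the installed packages, as in step 6.
frozen = subprocess.run([pip, "freeze"], capture_output=True, text=True, check=True)
(project / "requirements.txt").write_text(frozen.stdout)
```

Running the script once produces the same project layout as steps 1 to 6 above, and activating the environment in a shell still works exactly as in step 4.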
https://rutefig.medium.com/how-to-setup-a-new-project-with-python-env-f6a55a98fc05
['Rute Figueiredo']
2020-12-28 12:11:45.191000+00:00
['Python Programming', 'Python3', 'Python', 'Virtual Environment', 'Basic Python']
Meaning
Meaning A Poem Photo by Luisa Denu on Unsplash leave me on the sand in the rotation of raging waves and bluish caresses this is where I try to understand the meaning of everything
https://medium.com/scribe/meaning-657c7ea3a3fd
['Thomas Gaudex']
2020-12-07 10:38:33.187000+00:00
['Poetry', 'Life', 'Mental Health', 'Poem', 'Nature']
What Motivates Gamers?
What motivates gamers? How do these motivations correlate to personality traits? What impact do age and gender have? At Quantic Foundry, we combine social science with data science to answer these questions. As you’ll discover, the findings aren’t what you’d expect. Introducing the research model Using factor analysis, a technique used in psychology research, we were able to identify how variables of gamer motivations cluster together, and what their underlying structures are. We extracted gamer motivations presented in academic and industry literature, and generated about 50 questions for a survey, such as ‘how much do you enjoy games with elaborate storylines?’, and ‘how important is it for you to play a game at the highest difficulty setting?’. Starting with a panel of 600 gamers to create a good enough model, we then created an online app. Gamers could come, take a five minute survey, and get a personalized report of their gaming motivations, relative to the gamers in our sample. We optimized the process, changing items until a robust model emerged. The first version of this tool was released in June 2015 and in the first wave we surveyed about 30,000 gamers. As of 2019, over 400,000 gamers have taken our profile test, mainly from North America and western Europe, as well as sizeable samples from South-East Asia and South America. The 12 motivations, 6 pairs, and 3 clusters Collecting this data enabled us to identify 12 unique motivations, which we then split up into six pairs according to how correlated the unique motivations were to each other. We then assigned overall category labels to the pairs like Action or Creativity. Meanwhile, the map below visualizes the relationship between each motivation, using multidimensional scaling. It shows there are three clusters of gamers, which were consistent in all the geos we tested, split as follows: bottom left is Immersion-Creativity, bottom right is Action-Social, and Mastery-Achievement is shown at the top. The distances between each motivation are based on their correlation with each other. As you can see, there are two motivations that are on the edge, almost forming bridges between the clusters. ‘Discovery’ is the bridge between Immersion-Creativity and Mastery-Achievement, and ‘Power’ bridges Mastery-Achievement and Action-Social. From a developer’s standpoint then, leveraging these ‘bridge motivations’ in their gameplay could widen their audience by appealing to two ‘clusters’ of gamers. The personality link Fascinatingly, we found that the 3 main clusters of game motivations correlate to well-known personality traits in psychological research. The Action-Social cluster of motivations is an expression of extraversion in a gameplay context. In other words, people who score high on extraversion also tend to be thrill-seekers in games they play, and enjoy competition and social interaction. Openness and curiosity strongly correlate with the Immersion-Creativity cluster. Conscientiousness, organization, and self-discipline match the long-term thinking of the Mastery-Achievement group. Escaping the paradigm The widespread narrative that gaming is escapist, that it enables people to pretend to be something they’re not, is the antithesis of the picture painted by our data. Instead, games can be seen as a kind of identity management tool; gamers gravitate towards the gameplay that aligns with their core personality traits. It wouldn’t be an exaggeration to say video games help us become more of who we really are. 
Breaking down the data into demographics also provided some interesting findings. At first, we found that gender differences align with stereotypes: the primary motivations for women were more likely to be fantasy, design, and completion, whereas for men, destruction, competition, strategy, and challenge were the most likely. However, after digging deeper, we discovered that age differences were far more influential than gender. We focused on the motivation that had the greatest gender difference — competition — to highlight the impact of age. As the graph shows, gender difference fades with age: past age 45, there's virtually no difference between male and female gamers' desire for competition. Perhaps most interesting is the size of the gap between the youngest and oldest men. Quantifying this difference, age actually accounts for twice as much variance as gender does, and in terms of their scores for competition motivation, there's an 87% gender overlap, suggesting the impact of gender on competition is less than one might expect. In the gaming industry, there's much debate about the differences between men and women, and what games for women might look like, but in fact, these findings show we should divert our attention to age.
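For readers curious about the mechanics behind the model described above, the sketch below is a hedged, minimal illustration of the two techniques named in this piece, factor analysis and multidimensional scaling, run on random placeholder data rather than Quantic Foundry's actual survey (scikit-learn assumed):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Placeholder survey: 600 gamers answering 50 items on a 1-7 scale.
responses = rng.integers(1, 8, size=(600, 50)).astype(float)

# Reduce the ~50 items to 12 latent motivation factors.
fa = FactorAnalysis(n_components=12, random_state=0)
scores = fa.fit_transform(responses)          # per-gamer scores on each factor

# Turn the correlations between factors into distances and project them to 2D,
# which is roughly how a "motivation map" like the one described can be drawn.
corr = np.corrcoef(scores, rowvar=False)      # 12 x 12 correlation matrix
distances = 1.0 - corr
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(distances)
```

With real responses, the factor loadings would be inspected (and usually rotated) before naming the motivations; random data only shows the mechanics, not the clusters reported above.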
https://medium.com/ironsource-levelup/what-motivates-gamers-ccbffe9180c0
['Nick Yee']
2019-09-23 07:15:13.433000+00:00
['Gaming', 'Game Development', 'Game Motivations', 'Behavioral Science', 'Psychology']
6 Technologies That Will Make You a Wanted Front-end Developer in 2021
6 Technologies That Will Make You a Wanted Front-end Developer in 2021 How to choose your next move in your career, and turn it for the better Photo by Austin Distel on Unsplash Do you want to change your developer's career for the better in 2021? Maybe you want to move to a better-paying position. Or maybe you want to switch to a company working with the latest technologies. I believe that improving your position as a developer is always possible. You just need to gain the right knowledge: the knowledge that companies are looking for, that is the best paid, and that fills the Glassdoor job announcements. "When I began my developer's career, I was working with a lot of legacy projects involving Java. As time passed, however, my interest in JavaScript technologies grew, so I learned Node. I knew that this technology was growing at the time. So it was super easy for me to change my employer, upgrading my salary and starting to work with a more modern tech stack. Now, there was still a part of my job revolving around old AngularJS code. Yet, I knew that if I wanted to grow again and improve my position again, I had to switch to a newer technology. So I learned the basics of React. Those basics allowed me to fill a position in the company I'm working for right now, where I've increasingly gained knowledge about this framework, which is highly marketable and has opened the doors to a lot more opportunities for me." So you see, being a developer is also made of considerations like the one I just showed you. How can you be more marketable? What should you know to land a better position? To answer these questions for you, I've created the list you will find below of technologies I'm convinced you should learn in 2021.
https://medium.com/javascript-in-plain-english/6-technologies-that-will-make-you-a-wanted-front-end-developer-in-2021-ebe44e9245ee
['Piero Borrelli']
2020-11-25 11:21:34.992000+00:00
['Technology', 'Software Engineering', 'Work', 'Programming', 'Web Development']
Curate With Me :)
If you want to help me curate this weekly, please send an email to arihantverma1994[at]gmail[dot]com with the subject: Interested in curation of LOL and I’d be happy to add you as the editor of this medium pub after a brief chat 😃
https://medium.com/lol-weekly-list-of-lit/lol-link-sharer-a-chrome-extension-9279d4389611
['Arihant Verma']
2017-08-27 21:32:16.775000+00:00
['Storytelling', 'Chrome Extension', 'Lol', 'Lollistoflit', 'Chrome']
7 Fitness Tips to Keep You in Shape When You Don’t Know What Else to Do
Photo Credit: Pixabay Fitness carries different connotations depending upon whom you ask or talk to about it. We all know that it's what is on the inside that counts, but that doesn't mean that you should neglect your outsides. Take some time to improve your appearance and your health using the tips below.
1. Crunches Make a Difference
Work out your abs without doing crunches. That's right, all you have to do is to take a deep breath and, on the exhale, simply squeeze your belly to your spine and hold it for about 10 seconds. When doing crunches, be careful not to strain your neck. If you put your tongue to the roof of your mouth while doing them, this can actually help to properly align your head and neck. You'll be working your transversus abdominis muscle, which lies behind more prominent abdominal muscles but can flatten your stomach noticeably.
2. Green Tea All the Way
A really good way to help you get fit is to start drinking green tea. Green tea can be a great, natural alternative to coffee if you're not much of a fan of coffee. Green tea has been proven to give the metabolism a boost and it also provides energy.
3. Find your Balance
Studies have shown that meditating every day for eight weeks improves health and supports a calmer lifestyle. Meditating improves the fitness of the brain by reducing stress. Even if you don't feel like working out on a given day, at least try for five or ten minutes. You might find that once you get going, you can do more than that. Remaining calm has been shown to plump the part of the brain called the hippocampus, which is directly connected to memory and alertness.
4. Find Something that Makes You Happy
The best way to ensure you stick with getting regular exercise is to do things you enjoy doing. Getting an effective workout does not have to mean working out on boring machines like treadmills. Instead, find something you love to do like joining a dance class or riding a bike.
5. Actually Exercise
Exercising and staying in shape has many benefits, including beautiful skin. Staying physically fit not only helps your body to look good, but it helps keep a clean, youthful complexion. Make sure you are eating enough; your body requires fuel. Exercise calms the nerves, increases circulation and promotes a deeper, more revitalizing sleep, all of which helps your skin to look amazing. If you have a gym membership, use every piece of equipment offered. Try not to use just one or two different exercise machines. Using a variety of machines will not only prove more fun, but you'll effectively work more parts of your body. Try to learn to use at least a dozen different machines in your gym.
6. Accountability Matters
This is the most important one of them all. If you have trouble staying motivated when working out, consider hiring a personal trainer. As experts in fitness, personal trainers push people to their limits, and help them achieve their fitness goals. After a few sessions you will know exactly what you need to do to keep fit, even without guidance.
7. Make Time for Stretching
Make sure you're stretching before and after your workouts. You want to do moving stretches, like jumping jacks and windmills, in the beginning, to loosen your muscles up. For healthy fitness, staying hydrated is vitally important, and the benefits of getting plenty of water do not end at the gym door. Afterwards, you should do stationary stretches to stretch out your muscles and let your body cool down after your workout, to avoid getting any cramps.
It’s true that what’s on the inside of a person is important. That said, you still have a body that can always be refined. You can improve upon your body by you and your doctor coming up with a fitness routine that can help you become healthier. Hopefully, these tips gave you advice on how to do that.
https://medium.com/gethealthy/7-fitness-tips-to-keep-you-in-shape-when-you-dont-know-what-else-to-do-f066d73a4dd6
['Jeremy Colon']
2018-07-31 02:55:19.190000+00:00
['Weight Loss', 'Helping Others', 'Fitness Tips', 'Fitness', 'Motivation']
Google Cloud Spanner Nodes
As we continue with our exploration of Google Cloud Spanner, I found one concept to stand out: that of nodes. Typically in cloud computing, an instance is synonymous with a singular virtual machine, and a node is the CPUs, memory, storage and networking underlying that VM. In Cloud Spanner, the relationship still holds, but is more complex due to the nature of instances. Nodes are also directly related to your instance configuration, and thus your scaling, so it is important to have a good understanding, and I explain in more detail below.
Nodes
Cloud Spanner documentation loosely defines a node as a collection of resources, namely CPU, RAM, and 2TB of storage. This is quite simple, and means that if your database monitoring shows that you are using more resources than is optimal, or that you are running low on storage, you simply need to go into the Cloud Console and add a node. This will, as per the definition, add more computing and storage resources to your instance. Where the Cloud Spanner node differs from the norm is that adding a node doesn't just add a single set of compute resources to your instance; in fact it adds a set of resources to each replica within your instance. When adding a node to a traditional distributed/clustered database, you are only adding a single computing resource or server to your cluster. As an administrator, you have to manage how that node is used by the database, whether it becomes a new shard, a read-write replica, a read-only replica, a witness replica, a warm failover cluster. Depending on the database, the options could be numerous, and the administration overhead costly. icons © Google In Spanner, the administration is transparent, and the definition of a node therefore encompasses all the resources required to increase the capacity of your instance with a full set of resources, regardless of it being regional or multi-regional, and whether it requires read replicas or witness replicas or both. icons © Google That is quite a powerful concept, and it speaks to the "fully managed", "unlimited scale", and "99.999% availability". Each addition of a node is automatically managed by the instance and therefore replicated, sharded and essentially used to increase scale in the highly available architecture of Cloud Spanner. In other databases, adding compute or storage necessitates configuration: is it used for primary compute, failover, replication, backup, etc.? In Cloud Spanner, you simply click in the Cloud Console to add another node, and everything just happens in the background: your application has more highly available resources as per your instance configuration. The Cloud Spanner documentation on instances lists all the different instance configurations and explains the difference in the number of replicas you will have depending on your regional configuration.
A quick note on replicas, nodes and instance configurations
Each regional instance has 3 replicas, so each node added to your instance in this case will result in an increase in compute and storage for each replica, so 3 sets. This maintains 99.99% availability, and the stated performance of "up to 10,000 queries per second (QPS) of reads or 2,000 QPS of writes (writing single rows at 1 KB of data per row)". Most multi-region instances have 4 read-write replicas and one witness replica, so adding a node increases the total compute and storage across all 5 replicas. This allows Cloud Spanner to provide the 99.999% availability.
Over and above that, the "nam6" region has 2 additional read-only replicas, and the "nam-eur-asia1" configuration spans 3 continents, with 4 additional read-only replicas. This is quite astounding, as adding a node to your "nam-eur-asia1" instance means that Google provisions 9 sets of CPUs and memory, and 9 sets of 2TB of additional capacity (one for each replica), to support your highly available instance. These 9 sets of resources are managed in such a way that you not only get replication and fail-over for high availability, but you also get external consistency regardless of the global distribution of both your database and its users. It is worth reading the white papers and documentation on TrueTime and external consistency if you are interested in understanding how Google uses Paxos engines along with atomic and GPS clocks to manage all these resources AND provide the highest level of consistency available.
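To make the "just add a node" point concrete: the same change can also be made programmatically rather than through the Cloud Console. The snippet below is a hedged sketch based on my reading of the google-cloud-spanner Python client (the project and instance IDs are placeholders; check your client version's documentation before relying on it):

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")   # placeholder project ID
instance = client.instance("my-instance")       # placeholder instance ID

instance.reload()                                # fetch the current configuration
print("current node count:", instance.node_count)

instance.node_count += 1                         # request one additional node
operation = instance.update()                    # returns a long-running operation
operation.result(timeout=300)                    # wait for the resize to complete
```

Everything behind that single number is then handled by Google: the extra compute and storage is rolled out to every replica in the instance configuration, exactly as described above.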
https://medium.com/google-cloud/google-cloud-spanner-nodes-8cc38f46ebd1
['Ash Van Der Spuy']
2020-11-05 17:37:31.949000+00:00
['Database', 'Cloud Computing', 'Cloud Spanner', 'Distributed Systems', 'Google Cloud Platform']