Permanent Space Colonies Are Closer Than You Think
Most people imagine the distant future of humanity like an episode of Star Trek. Queue the cheesy music and hip-hugging space suits. The starship Enterprise is blasting bad guys with its photon torpedoes and settling space colonies that further expand the borders of a thriving United Federation of Planets, teeming with alien civilizations. But the reality is that our future may not be tied to planets. Because all of us were born on a planet, we suffer from a deeply ingrained “planetary bias.” Earth has been humanity’s womb, so we naturally expect to settle another spherical body. We think that new horizons are broached with “one small step for man” on a new habitable planet — whether that be Mars or some yet-undiscovered planet that aligns with humanity’s desired ecosystem. But even though life evolves on planets, celestial bodies are not the best long-term option for supporting technologically advanced civilizations due to their limited resources. Based on scientists' current interpretation of the laws of physics, the best option for establishing a permanent place for humanity may be something called a rotating habitat. What is a rotating habitat? Back in the seventies, physicist Gerard O’Neill spotlighted the concept of rotating habitats, cylindrical megastructures that perfectly replicate Earth’s gravity and atmospheric conditions. Today, many of the technologies necessary to build them already exist, and — with modern 21st-century materials — they can be large enough to comfortably house tens of millions of people. Even better, you don’t need to make it through NASA’s astronaut program; your eighty-year-old grandmother would be perfectly comfortable living in one of these habitats. But even if rocket launch costs weren’t still prohibitive, the raw materials to manufacture these dwellings must be acquired in space. Asteroids and the moon will provide the foundation for the initial infrastructure, including the first rotating habitat prototype. Image Credit: Katie Lane (Full distribution rights reserved by Erasmo Acosta) Why not just go to Mars or some other planet? Aside from the fact that we don’t know yet how to travel at warp-speed, humankind has been extremely optimistic about which planets we’ll be able to settle — never mind the lack of magnetic fields or breathable atmospheres. The deleterious effects of zero-gravity on human physiology are one of the reasons why the odds of being accepted in the NASA Astronaut Program are 0.065%. In other words, humans do not have the physiology to function well on another planet. After studying the effects of zero-gravity for nearly 60 years, scientists have begun to understand the disastrous effects of zero-gravity on the human body. We can try to guess the effects of Mars (at 38 percent of Earth’s gravity) or the moon (at 17 percent), but we won’t know for sure until scientists perform actual tests. Some speculate that we’ll need to make severe adaptations to our bodies in order to survive in those low-gravity environments, possibly branching into a different species in the not-too-distant future. 
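(A quick aside on the physics: the "gravity" inside a rotating habitat is simply centripetal acceleration from the spin, a = ω²r. As a rough illustration, here is a short calculation of the spin rate needed for Earth-like gravity at the rim; the radius used is an assumed, roughly O'Neill-cylinder-scale figure, not a number taken from this article.)

```python
import math

# Spin gravity in a rotating habitat is centripetal acceleration: a = omega^2 * r.
# To feel 1 g at the rim, the habitat must rotate at omega = sqrt(g / r).
# The radius below is purely illustrative (roughly O'Neill "Island Three" scale),
# not a figure taken from the article.

g = 9.81          # target acceleration at the rim, m/s^2
radius_m = 3_250  # assumed habitat radius in meters

omega = math.sqrt(g / radius_m)   # angular speed, rad/s
period_s = 2 * math.pi / omega    # time for one full rotation, seconds
rpm = 60 / period_s               # rotations per minute

print(f"Angular speed: {omega:.4f} rad/s")
print(f"Rotation period: {period_s:.0f} s (~{rpm:.2f} rpm)")
# For a ~3.25 km radius, a roughly two-minute rotation period yields Earth-like
# gravity at the rim, slow enough that, by common estimates, most occupants
# would not notice the spin.
```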
As Earth becomes smaller and smaller with the amount of land to feed our population fast approaching unsustainable levels, our short-term profit system continuing to destroy the planet, pandemics, inequality, political stagnation, and climate change, we ask ourselves, “Can we avoid self-annihilation?” “The sea level rise created a refugee exodus of catastrophic proportions when hundreds of millions were forced to flee their homelands, as living became impossible in the most vulnerable coastal regions.” -Excerpt from my book K3+ Image Credit: Katie Lane (Full distribution rights reserved by Erasmo Acosta) Move over, Mars and Mercury Mining asteroids can only bring us so far. As the human footprint in space grows, it will be necessary to find more abundant sources of raw materials, like those on Mercury. Being the leftover core of a past planetary collision, it won’t be necessary to dig very deep. The first rock from the Sun is made of 70 percent metals and 30 percent silicates and is perfect for building the first megastructures capable of housing millions. Each of these self-reliant island-sized habitats will be able to feed its entire population and satisfy their power demands. As Mercury becomes exhausted, the Sun (which contains thousands of times the mass of planet Earth in metals) will take its place as the main source of raw materials for the burgeoning human civilization. We know these elements are not sunk down to the core but swirling around, carried by the star’s convective process. Even if it requires huge amounts of energy to extract them, the Sun generates a virtually endless supply. Regardless of the technology we develop for this purpose, mining the sun will reduce its mass, delaying the moment it will become a red giant. Continent-sized rotating habitats, each capable of housing billions, will be constructed as new advanced materials of incredible tensile strength are developed. A few centuries in the future, a population of quintillions will finally level with the power output of the Sun, requiring humans to reach for the stars to find additional room for the growing population. Image Credit: Katie Lane (Full distribution rights reserved by Erasmo Acosta) Will humans change, living inside rotating habitats? In a single second, the sun produces close to 500,000 times the annual energy needs of our entire civilization. In space, solar panels harvest seven to eight times more energy than on Earth’s surface, and near-future technologies are bound to dramatically raise their efficiency. Thanks to genetic enhancements, crops can be adapted to thrive in the lower gravity areas of the cylinder — maximizing human use of the habitable surface — and lab-grown meat will produce better steaks without bringing cows to space. In time, it will be possible to grow an apple without a tree while using a fraction of the resources. AI-controlled and solar-powered laser systems will keep these colonies safe from asteroids and other debris. Radiation shielding can be achieved either by soil, water, or generating a strong magnetic field from superconducting wires in the cold temperatures of space. A system of vacuum tubes, with rails inside, will allow transports to travel at dizzying speeds between the habitats, using magnets. Within a century, the ever-growing population will be able to live for eons, oblivious to the ravages of aging, in larger and larger numbers of these megastructures. 
We can fit more people inside rotating habitats around the Sun than on all habitable planets in the entire Milky Way galaxy. Imagine a single mega-nation, billions of times larger than any other in history, where quintillions of inhabitants live safely and free from fear, inequality, disease, exploitation, and wars — a true renaissance for our species. Image Credit: Katie Lane (Full distribution rights reserved by Erasmo Acosta) Rotating habitats also happen to be ideal for interstellar travel, as they already provide a perfect habitat for humans, requiring only adaptations for the long interstellar journey: shields, brakes, maneuvering thrusters, and power generators, among others. Under a gentle acceleration, induced through solar-powered lasers in space, it will take a few years to reach the speed necessary to arrive at Alpha Centauri within decades. But stuck with business-as-usual, prey to our instant-gratification instinct, obsessed with greed, and fantasizing about war, we fail to realize the universe is a wondrous realm — rife with possibilities for permanent colonies that could provide an exceptional future for civilization. In the meantime, the cosmos lies empty, its doors wide open, just waiting for us to settle it.
https://medium.com/predict/9371e794e68a
['Erasmo Acosta']
2020-08-25 19:11:01.442000+00:00
['Asteroid Mining', 'Mars', 'Climate Change', 'Space', 'Stars']
Gumao-Flutter: Developing beautiful UIs in Flutter
Banner Flutter is a new trend in the world of mobile development for writing beautiful and expressive UIs for Android and iOS. Since I started developing apps with Flutter, I have loved picking up random designs from Dribbble, Behance, etc. and building them in Flutter. I have spent many days with Flutter and Dart, mostly coding user interfaces. I came across a beautiful design on Dribbble created by Vijay Verma. You can check out his work on Dribbble; he's a genius! This is the design we are going to create in Flutter throughout this article. The design has two screens; in this article, I will build the HomeScreen, and the DetailsScreen will be built in the next part. Gumao-Flutter Design First of all, the important thing while creating this app is the assets. I searched the internet for similar PNGs and luckily found them. So, if you are following this article, you can download them here. Initial Steps Here is the basic main.dart file we are going to start with. Also, I have created a separate file, constants.dart, for some constant styles like gradients, colors, fonts, etc. Now, it's time to jump onto the homepage and start developing it. I have used a SingleChildScrollView widget for the homepage for more flexibility, with a Container as the child of the SingleChildScrollView. Now, for the gradient background color scheme, I used homebody, defined in constants.dart (lines 54–67), as the BoxDecoration for the Container. For arranging the homebody layout, we will need to use a Column widget, and nesting widgets inside the column will get us the expected results. The top transparent appbar section is just a row with two widgets, Text & CircleAvatar. Here is the code for that. Now, it's time to introduce a new package that will help us create swiper cards. The name of the package is flutter_swiper. You can head over to pub.dev and add the package to your pubspec.yaml dependencies. For the swiper-card section, we'll use a container (of height 550, with padding from the left side only, to create a stacked card effect) as the parent of the Swiper widget. There are many things you can do with the flutter_swiper package, like multiple types of layouts, scrolling directions, pagination indicators, autoplay pagination, etc. The prerequisite for building swiper cards is one more file named swiper_data.dart; in this file, we will make a model for all the data we want to show in the card. Below is an example of one character's info. For the full file, head over to this link. Now, heading over to the Swiper widget in the container, below is the code for the same (a reconstructed sketch also appears after the line-wise explanation). Line-wise Explanation: line 5: If you want to make cards auto-swipe, you can set autoplay: true. line 6: the itemCount property is used to define how many item cards you want to display in the UI. line 7–8: the itemWidth & layout properties are used to set the width of the card and the type of layout you need to show in the UI; I used SwiperLayout.STACK to create a stacked-card-like effect. line 9–15: pagination is used to build the little page indicator below the cards, which takes a DotSwiperPaginationBuilder as its builder. activeSize, color, and activeColor are self-explanatory. line 16–23: itemBuilder is the property that builds the actual content. InkWell is used to create a nice splash effect on tap. We want to be routed to the details page by tapping on the card; that's why you see DetailPage as an attribute of the pageBuilder. So far, we have just created the bare architecture for the swiper cards; we haven't done anything related to the actual UI of the card.
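(The original gist is embedded on Medium and not reproduced in this text, so below is only a rough Dart sketch of the Swiper configuration that the line-wise explanation describes. The property names (autoplay, itemCount, itemWidth, layout, pagination, itemBuilder), SwiperLayout.STACK, and DotSwiperPaginationBuilder come from the article; the sizes, colors, card contents, and the swiperData/DetailPage types are placeholders standing in for the article's own files.)

```dart
// Rough sketch of the Swiper configuration described above (not the original gist).
// Assumes flutter_swiper is listed in pubspec.yaml and that swiperData holds the
// card models from swiper_data.dart; DetailPage comes from the article's project.
import 'package:flutter/material.dart';
import 'package:flutter_swiper/flutter_swiper.dart';

Widget buildCharacterSwiper(BuildContext context, List swiperData) {
  return Swiper(
    autoplay: false,                 // set to true to make the cards auto-swipe
    itemCount: swiperData.length,    // how many cards to display
    itemWidth: MediaQuery.of(context).size.width - 64, // card width (placeholder)
    layout: SwiperLayout.STACK,      // stacked-card effect
    pagination: SwiperPagination(
      builder: DotSwiperPaginationBuilder(
        activeSize: 12,              // placeholder sizes and colors
        size: 8,
        color: Colors.white54,
        activeColor: Colors.white,
      ),
    ),
    itemBuilder: (context, index) {
      return InkWell(
        // Tapping a card routes to the details page.
        onTap: () => Navigator.push(
          context,
          MaterialPageRoute(builder: (_) => DetailPage(data: swiperData[index])),
        ),
        child: Card(child: Center(child: Text(swiperData[index].name))),
      );
    },
  );
}
```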
We will start with a Stack widget as the child of the Swiper for stacking the character image, name, rating, game name, etc. I will not explain the alignment, color, and spacing between the widgets, as it's not that confusing. You can visit home_page.dart, lines 81–135, for the whole reference. For the rating box, game name, and character name, refer to the snippet below. For the position of the character, I have used a Positioned widget with a little opacity in the styling. The image of the character is also easy to place last in the Stack so that it stays on the very top. For the bottomNavigation part, I didn't use the inbuilt widget; rather, I used a simple Row and placed IconButtons into it. Below is the gist for that: Here is a demo of the final achieved screen (HomeScreen). If you want to see the full code for this article, head over to the repository I created for this project, and don't forget to ⭐️ the repo when you're there! For the detail screen development, stay tuned for the next part of the blog! Don't forget to share this article, you know the drill😉! Thanks for reading and following along, Peace ☮✌️.
https://medium.com/flutter-community/gumao-flutter-developing-beautiful-uis-in-flutter-38a610c9bb9c
['Abhishek Wagh']
2020-08-26 00:26:26.529000+00:00
['Flutter Community', 'Flutter Ui', 'UI Design', 'Flutter', 'Dart']
President Trump Doesn’t Care About Trafficking Victims
President Trump Doesn’t Care About Trafficking Victims An open letter to the anti-trafficking movement and anyone who cares about trafficking, commercial sexual exploitation and the wellbeing of vulnerable youth in this country: For the last four years, Trump, Ivanka and this administration have claimed to care about trafficking victims and survivors and boasted that Trump has done more to fight trafficking than any other president in history. As a trafficking survivor and the founder and CEO of GEMS, Girls Educational and Mentoring Services, the nation’s leading organization serving and empowering girls and young women who have experienced commercial sexual exploitation and domestic trafficking, and someone who has worked with thousands of survivors over the last 23 years throughout 3 prior administrations, I can unequivocally say that these claims couldn’t be further from the truth. Trump, his enablers and sycophants, are in fact harmful for victims and survivors and dangerous for those children and youth who are vulnerable and at risk for recruitment. The anti-trafficking movement, which has a significant intersection with the white evangelical community, has been silent and in many cases complicit in perpetuating the myth of Trump as a champion for trafficking victims, in some cases because they really believed it, mostly because they wanted to get legislation passed, funding allocated and attention paid to the issue. Those are worthy goals, and in some cases it’s worked; there have been some laws passed, federal funding allocated and awareness of the issue of trafficking, particularly child trafficking, has never been higher. A surface glance at a Trump press release might tell you that complicity was worth the price, after all there’s been gains, the movement has progressed forward and achieved those specific goals. Even a cursory glance at the reality however shows that while the administration is happy to tout its one tiny step forward, it’s the many huge steps back that really count. Legislation, public awareness, even federal grants that run directly counter to everything else this president and this administration says and does are at best meaningless, more often they have provided cover for the truly heinous actions of this administration. Anti-trafficking work began in the Bush administration, the Obama administration built upon that and helped ensure the inclusion of domestic victims. The Trump administration has taken credit for work and statistics that should rightfully be attributed to Obama and while there has been work continued under this administration none of it is impactful enough to balance out all the harm done. Trump’s words, actions, policies, tweets, staff, allies, history, appointees and defenders explicitly show that he doesn’t care about people of color, he doesn’t care about women and girls, he doesn’t care about low income and poor people, he doesn’t care about the LGBTQIA community, he doesn’t care about people from other countries…but that’s who trafficking victims and survivors are. Can anyone really argue with a straight face that Trump values the inherent worth and humanity of a a 16 year old Black girl who has been in juvenile detention for stealing a car or a woman from Guatemala who escaped a domestic violence relationship by coming to the US with her child and entering without documentation or the Latinx trans young woman who is struggling with drug addiction? Because those are the trafficking victims I know. 
They are the faces of trafficking in this country and all the evidence says that Trump and this administration do not care about these individuals. They don’t even see them as fully human. Trump has mocked the cities they live in and the countries they come from, he’s repeatedly encouraged racism and misogyny in its most violent forms against them and anyone who looks like them; he’s encouraged police brutality and argued for harsher sentencing; he’s stopped domestic violence victims claiming the US as a place of refuge, ripped children from the arms of scared parents and put kids in cages; he’s rolled back protections and rights for trans people, has tried repeatedly to take health coverage away and appointed justices who will do more of the same. He’s encouraged and empowered groups like QAnon and the #SaveOurChildren movement to spread lies and create imaginary trafficking scenarios that distract and pull resources from the real work and in some cases put people in real danger. Trump has vehemently condemned individuals who pulled down statues of Confederate heroes, — supporters of slavery — but when asked about Ghislaine Maxwell after her arrest for trafficking WISHED. HER. WELL. There are relatively few explanations for a sitting President to wish an infamous, alleged, child sex trafficker well. None of them are flattering to Trump. Trafficking and commercial sexual exploitation don’t happen in a vacuum. They happen in a world where gender based violence, racism, poverty, inequality are allowed to flourish. Trump only cares about trafficking and trafficking victims when he needs to make a speech and appeal to his base. The rest of the time he creates and encourages the very conditions that make children and adults vulnerable to trafficking in the first place. He has made his priorities clear. Now it’s about the movement’s priorities and all those who claim to care about trafficking victims and survivors. If you do, the only choice is Biden. Trump has used the pain and trauma of trafficking victims and survivors for his own gains. There’s no ‘worthy goal’, no short term win, no legitimate rationale that justifies allowing him and his administration to exploit and harm us anymore.
https://medium.com/@rachel2lloyd/an-open-letter-to-anti-trafficking-movement-and-anyone-who-cares-about-trafficking-commercial-755104d10466
['Rachel Lloyd']
2020-11-03 01:25:44.651000+00:00
['Biden', 'Trafficking', 'Trump Administration', 'Trump', 'Anti Trafficking']
The One Main Problem Confronting the LA Transportation Movement
The main problem for improving LA transportation is that most of our ideas for a safer, more sustainable network are not popular enough. Taking space on the street away from cars for bus-only lanes? Not popular enough. Taking space to put in protected bike lanes? Not popular enough. Implementing safety-oriented design treatments that slow traffic? Increasing bus frequency? Devoting more money to fix sidewalks? All not popular enough. “Enough” is the key word here. Nobody is against the bus or sidewalks, and few people are against biking as an activity. But these things just aren’t popular enough for communities to prioritize them and endure the perceived loss of utility for cars. The only really popular non-car mode in LA is light rail, but it’s a soft popularity. The evidence to date makes it clear that most people will not use the expanding rail system until the relative cost of driving goes up —and making driving more costly is another example of a transportation solution that is not popular enough. In short, if you care about transportation modes other than cars, it’s bleak right now. So what to do? I personally think it helps to have a “theory of change,” which is pretty much what it sounds like — a theory for how things might change within the current environment. My own theory is simple: transportation advocates need to do more work in the area of changing people’s minds so they actually see more value in safer streets that provide travel options other than a car. This is much more daunting than it sounds on paper. Changing minds around entrenched, often unconscious behavior is incredibly difficult. But before moving to a discussion of how to make our stuff more popular, the first step is simply acknowledging this basic fact: our ideas are not popular enough. At present, when I look at some of the prevailing theories of change in the transportation movement today, what I see instead is a state of denial about the fact that our ideas are not popular. Political Courage For example, one prevalent theory of change in transportation is based on the idea that our transportation ideas actually are popular, and people do want them prioritized — the problem is that we don’t have leaders with the political courage to enact them. According to this approach, the transportation movement doesn’t need to rethink anything about how road diets and bus lanes are resonating with communities. We just need to help elect political candidates who will implement them and then stay strong in the face of resistance from “the vocal minority” that opposes them. Now, there’s nothing wrong with supporting transportation-friendly candidates, but I personally find this “political courage” theory of change to be a profound misunderstanding of the political context in which elected officials make decisions about things like road diets and bus lanes. As I outline in my overview of the role that elected officials play in transportation, elected officials and their staffs have far greater access to a wider spectrum of people in their communities than transportation activists and advocates do. If elected officials are not putting their political capital behind our transportation ideas, it’s mostly because our stuff isn’t popular enough with their constituents. Full stop. The idea that there is latent demand for bike lanes and bus lanes that just needs to be activated is, to my mind, an act of wishful thinking by activists unaware of how people other than themselves feel about the things we ourselves cherish. 
A variant of the “political courage” model is this: even if many of our ideas in transportation are relatively unpopular, at the very least elected officials should be leading the campaign for transportation change. I think this is another misunderstanding of how elected officials operate. In the crowded marketplace of ideas for making our communities better, it isn’t the job of elected officials to make bus lanes and bike lanes popular — that’s our job as transportation advocates! We need to take responsibility for making our stuff popular instead of pawning it off. The first step to making safer streets requires a different kind of “advocacy courage” — a willingness to take responsibility for leading a transportation movement instead of insisting our ideas are popular or that it’s the job of elected officials to make them popular. If we make safer streets a more popular cause, city officials will be more likely to become committed to it as well. Bridge Building A second prominent theory of change that many transportation advocates are adopting right now is “building bridges” with adjacent causes within the larger progressive political movement in the areas of housing, police reform, and racial justice. This approach connects the transportation cause to the energy that these other movements are generating. Just as I’m skeptical of the “political courage” theory of change, I’m also skeptical of this “bridge building” theory of change. As I see it, the transportation movement has a basic job to do: get more people walking and riding bikes and taking buses and trains. That’s really it. And while achieving this goal reaches into a lot of adjacent spaces, we should never lose focus on our main thing. And I think getting too involved in solving the affordable housing crisis or improving the behavior of law enforcement, however important those causes may be, is a significant strategic mistake for a movement that is already faltering — it’s like a failing student adding additional difficult classes to her schedule. A transportation movement that can’t produce bike lanes and bus lanes is unlikely to contribute meaningfully to movements concerning housing or law enforcement. At a certain point, there is no limit to bridge-building with adjacent causes, and the bridges end up diluting rather than strengthening the work done within transportation. If a person simply finds the police reform issue more compelling, that’s perfectly fine, but you don’t need to leave the transportation space to focus on racial justice. This brings me back to the central question — how can we make our ideas more popular? How can we get more people to prioritize walking, biking, and public transit? I’ll be the first to admit there are no easy answers here. In fact, the absence of clear, promising methods for making our stuff more popular is precisely what makes these questions so difficult to confront. Complaining about elected officials is easy. Getting involved with causes adjacent to transportation is also relatively easy. But they are ultimately a waste of time and resources, because these approaches don’t operate within a valid theory of change. The first step toward making progress is to acknowledge the fact that our transportation ideas are not popular enough, and to take responsibility as a movement for making them more popular. This is the main thrust of my theory of change, and is where our energy should be focused.
A discussion of nuanced strategies for how to make our stuff more popular in different kinds of LA communities would be fantastic, and this is a topic I will write more about. Do you think I’m off-base? Do you have an altogether different theory of change? No problem. Weighing the pros and cons of different theories of change would be a very productive discussion to have within the transportation movement in LA. We’ve been complaining for decades and nothing ever changes — maybe we need a new theory of change. See an overview of the other things I’ve written, transportation-related and otherwise, at the link here.
https://medium.com/@nsholmes21/the-one-main-problem-confronting-the-la-transportation-movement-fdf6a5ce312d
['Nathan S. Holmes']
2021-06-30 01:40:43.334000+00:00
['Mobility', 'Bikes', 'Los Angeles', 'Bus', 'Transportation']
No Mental Health Days
Part 1: Bensalem October, 2019. Bensalem Pennsylvania. Would you believe that I wasn’t always The Gigconomist, Dear Reader? I had been a real estate agent. Albeit I couldn’t afford many of the tools I needed to successfully compete in a busy market of agents, I do have a natural inclination to help someone else feel like they’re comfortable and have power in a negotiation, when they really don’t. In other words, I’m a master manipulator. I’m a salesman. At that time, I had nothing to show for it. I was living in a weekly motel in Bensalem, Pennsylvania. I had to get away from my grandparents, who before I had lived with, and made it a case to abuse and insult me into regular fits of madness. I was 36, by the way, at this time. Also, I worked for a startup in Philadelphia as a contractor. I showed apartments to prospective tenants on behalf of landlords. And I was good too. I was the best in the company, and that’s not bragging. To put it into perspective, the Philadelphia office was at the best, in the middle of the company. There were six offices: Chicago, Cleveland, Pittsburgh, Philadelphia, Baltimore, and D.C. My first month, I closed more lease agreements by myself than the entirety of the Chicago office. I worked 60 hour weeks to pull this off. I ended up making $2,000 dollars a month. Thank you, Gig-Conomy. This story is going to jump around a bit, but we start at a nervous breakdown in a Bensalem motel. At the time, I think I was taking Lithium and…maybe an antidepressant. I’m not sure. Come to think of it, after this I started driving Uber and Lyft full-time, so this moment is my origin story. Figure 1: My Resignation Letter I had begged and pleaded for the month of September off. I was working too hard and couldn’t pay any bills and I was trying to move anywhere to Pennsylvania. It was granted, I think the guy who hired me kind of felt my burnout. But I simply lost interest. That’s where the title of this article comes from, Dear Reader. During my sabbatical all I did was drive for Uber, Lyft, and Postmates. I didn’t take a vacation so much as I just switched jobs. Which led me to Bensalem. So, come October 1st, they wanted me back. And I started showing apartments again even though I didn’t want to. This lasted two weeks, and I tried to get my passion for the job back. I love showing properties, I really do, but I couldn’t take anymore and sent the email above. And that’s that, as they say. I actually tried to come back to the company twice during Covid but they rarely, if ever, respond. The point is that I had to work myself into the ground for this job to be any form of beneficial to me. Doing so cost me my humanity and my personality. It cost me relationships. It cost me the love and respect of people who love and respected me. I should mention at this point in our journey together that I have cyclothymia, borderline personality disorder traits, and PTSD. And all three of those things are co-morbid with each other. I only received a proper diagnosis when I was 35. When I was 27 I was simply “depressed”. When I was 30 I was “Bipolar 2”. When I was 33 I was “If I give you a drug test, you mean to tell me that nothing is going to come up?” (Nothing did, by the way. Thank you State of Delaware appointed psychiatrist for making me feel bad about something I didn’t even do.) and finally at 34 I was “Well, we’re not sure.” I’ve been through a dozen psychologists and even more jobs. I’ve ruined every personal relationship I’ve ever had, some more than once. 
I’ve only ever known chaos, and never stability. And I’ve only ever been expected to just work through it. By my family, my employers, and even some counselors that I’ve had. And it was that way with the apartment showing startup in Philadelphia by way of Pittsburgh. It didn’t matter that I was on the edge of a nervous breakdown. It didn’t matter that I didn’t know where I was most of the time and living as a transient. If I didn’t smell too bad, put on that collared shirt and head to Northern Liberties, we have to rent this three-bedroom. And I wasn’t an employee, just a contractor, so what rights did I have? I just worked through it, like always. The problem is that working through it makes me appear too functioning to the outside world. Even though I qualify as disabled by American government definition (technically the government defines even simple depression as a disability, as it should) I still get up every day, and I mean every day, and go to work. See? I wasn’t lying! PTSD is right there! To me, that’s the way it is. Get up and go to work and fulfill my obligation. Sometimes I’ll try to kill myself. If the cuts aren’t too bad and my stomach doesn’t hurt too much from whatever household item I try to poison myself with, I should get through the day. I have drank bleach in my car before, prior to a seven-hour Uber shift. So it goes, right Kurt? I’m too crazy to hold a job, and too sane not to have one. I want to thank everyone, all of you, who have tried to talk me out of doing something reactionary these past few years. I want to make sure you know that I appreciate you trying to help me, and I’m sincerely sorry if I’m a burden. I’m trying to be better, I really am. I moved away from Bensalem and back to Delaware, by the way. Covid hit and I moved out of my grandparent’s house for good, this time. But that’s another article for when I feel up to it. As for my current employer, well, I’m not driving ride share full-time anymore, anyway. But, I am starting to work too many hours again. And, I’ve hidden my diagnosis from them, based on a past event I had with one of my many bosses. Get up go to work and fulfill my obligation. I think I’ll make this a series, as it seems to be ongoing. So Dear Reader, I hope you enjoyed Part 1, Bensalem. As for me, there’s a streak bonus I have to try and secure. Gig.
https://medium.com/@thegigconomist/no-mental-health-days-535f7b01fc42
['The Gigconomist']
2020-12-20 15:37:54.486000+00:00
['Gig Economy', 'Borderline Personality', 'Mental Health', 'Suicide', 'Real Estate']
IoT Applications in Manufacturing (Production & Supply Chain)
Photo by NASA on Unsplash Industrial IoT (IIoT), a key enabler of Industry 4.0 & smart manufacturing, is widely adopted by manufacturing companies across the globe to improve operational visibility, productivity & efficiency. According to Forbes, IIoT platforms are beginning to replace MES and related applications, including production maintenance, quality, and inventory management. As per the survey by IoT Analytics, IIT spend would be split 60%/40% between within and outside the factory, respectively. And most importantly, discrete manufacturing companies will outpace process/batch manufacturing companies in the IIoT adoption race. Find out how IoT for manufacturing is transforming production and supply chain in the sections below. Why IIoT projects are widely used in manufacturing MarketsandMarkets has forecasted that the IoT in the manufacturing market will grow from USD 12.7 billion in 2017 to USD 45.3 billion by 2022, at a Compound Annual Growth Rate (CAGR) of 29.0% during the forecast period. IoT implementations are solving the below manufacturing challenges for industries. Data collection for OEE Unplanned machine downtime Inefficient Inventory management Under-utilized assets & resources Increasing labour and machine maintenance costs IoT applications in manufacturing IIoT is the process of implementing IoT enabled devices and integrating with processes to track and improve production efficiency, product quality, and speed. IoT in production The implementation of IoT has transformed manufacturing production. Manufacturing involves lots of resources and managing them efficiently is critical for achieving planned production. IoT helps manufacturers to track and manage machines, humans effortlessly. Let’s get started with equipment/machine utilization. Monitoring machine utilization Machine utilization — IoT enables operators to facilitate business processes by collecting machine data in real-time and sharing it instantly for making informed decisions. The devices help operators to track everything under different conditions starting right from volume to the temperature where humans cannot intervene, and this helps to utilize equipment to the fullest for achieving planned production. Improving production quality In the digital era, a product’s quality shortcomings will impact a company financially and its branding. Monitoring production processes, making adjustments wherever needed is the best way to produce quality products that delight customers. Manual product quality controls are time-consuming and error-prone, resulting in defective products in the hands of customers. By collecting data and other metrics with IoT sensors, manufacturers can determine product quality standards expected. A physical inspection may be required to rectify the problem. So, there is no wonder why manufacturers are turning to IoT solutions to maintain quality standards Product traceability Tracking product movement when they are on the move is always challenging. The inability to find products on the move during production delays project delivery. With IoT devices, tracking products between two production points is simple. Real-time tracking of products on the move enables manufacturers to save time and cost, ensuring faster time to market. Predictive maintenance Predictive maintenance or Conditions Based Maintenance (CBM) uses embedded sensors and devices to monitor a machine’s critical parameters. With such a setup, monitoring the signs of machine failure or abnormalities is effortless. 
By studying these data using machine learning, advanced analytics and AI, developing a well-defined maintenance strategy is always simple. The analyzed data helps with making adjustments during run time and reduces unplanned maintenance costs. As the entire process is performed using the real-time data, identifying problems proactively and making informed decisions on maintenance cycles is effective. This approach has helped manufacturing companies to reduce downtime by 75% and its associated maintenance costs by 75%. IoT in supply chain management IoT has re-invented supply chain management (SCM). It is simple and seamless to discover where goods are, how to store and where they are within the manufacturing unit. Asset tracking & management IoT-enabled smart asset monitoring solution lets manufacturers locate the asset’s condition, lifecycle, etc. accurately. Adding intelligence to the system powers the asset management process with automated workflows, real-time alerts, data insights, and real-time visibility. Combining the power of IoT devices with mobile apps will ensure on-the-go and faster asset tracking. Identifying and authenticating an asset’s location at any time By attaching IoT devices to storage containers, identifying assets within a container is easy. Along with other technologies like RFID, Barcode, NFC, etc. operations can easily locate assets using the real-time data pushed to through mobile applications. This saves time and improves productivity. Inventory management Even storing the products in ideal conditions is possible, as IoT devices can help in monitoring the conditions of storage spots exposed to different environmental conditions. Setting up an alarm on such conditions will help staff to immediately act and avoid product damage. Tagging the materials with IoT devices (along with sensors) will simplify locating a product in a warehouse. In this way locating any product in a large warehouse is easy and accurate. Conclusion In short, IIoT adoption has created new revenue streams for manufacturing and the industry can mitigate the production, operational efficiency challenges with this strategy. Are you ready for the Industrial 4.0 revolution? Get started. Disclaimer: Being a follower of ‘The IoT Magazine’ offers lots of perks :) A consultation session with experts from across the industries is a major one. Submit your query here and we will connect you with the right IoT experts. He might be sitting next door, you never know.
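(To make the predictive-maintenance idea a bit more concrete, here is a minimal, purely illustrative Python sketch, not taken from the article or from any particular IIoT platform: it fits an anomaly detector to historical vibration and temperature readings and flags new readings that look abnormal, the kind of signal a maintenance team would investigate before a failure.)

```python
# Illustrative only: flag anomalous sensor readings with scikit-learn's IsolationForest.
# The sensor names, values, and thresholds here are assumptions, not from the article.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "healthy" history: [vibration_mm_s, temperature_C] for one machine.
healthy = np.column_stack([
    rng.normal(2.0, 0.3, 5000),   # vibration around 2 mm/s
    rng.normal(65.0, 2.0, 5000),  # temperature around 65 C
])

model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings streamed from the machine; the last two drift toward failure.
new_readings = np.array([
    [2.1, 64.8],
    [1.9, 66.1],
    [4.8, 78.5],   # high vibration and temperature
    [5.3, 81.2],
])

flags = model.predict(new_readings)  # +1 = looks normal, -1 = anomaly
for reading, flag in zip(new_readings, flags):
    status = "ALERT: schedule inspection" if flag == -1 else "ok"
    print(f"vibration={reading[0]:.1f} mm/s, temp={reading[1]:.1f} C -> {status}")
```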
https://theiotmagazine.com/iot-applications-in-manufacturing-production-supply-chain-c8f0698f5fa7
['Siva Prasadh G']
2020-05-27 14:02:23.718000+00:00
['Iiot', 'Manufacturing', 'Internet of Things', 'Predictive Analytics', 'Industry 4 0']
When Did Humanity Fall Apart From Society?
When Did Humanity Fall Apart From Society? 1. Humanity has fallen from society because society has forgotten that it was formed by human beings to look after the day-to-day crises of every human being. 2. Humanity has fallen from society because society has forgotten that its purpose is to look after the issues of its members, not to force its members into a “rat race” to prove themselves better than others. 3. Humanity has fallen from society because society has forgotten that the goal of its existence must be the security of each and every member, not to categorize its own members as more or less important than one another, leaving members of society vulnerable and victims of circumstance. 4. Humanity has fallen from society because society has forgotten that it must lead every member to happiness and fulfilment, not to deprivation and despondency. 5. Humanity has fallen from society because society has forgotten that its role must be to establish order among its members, not to be a reason for inventing chaos, loot, and injustice. 6. Humanity has fallen from society because society has forgotten that each and every life of its members is important and must be protected from being lost to neglect or the excessive use of power. 7. Humanity has fallen from society because society has forgotten that it must be the reason for the atonement and redemption of each and every member, not the reason for dejection and denial. Millions in Bengal died due to an artificial famine caused by British Prime Minister Winston Churchill during World War 2, yet not a single authority in the world took notice of it. This is the best example of why humanity has fallen from society today, and of how humanity has been allowed to fall at any time in human history. From: Dr. Nilesh Jaybhaye [1] Footnotes
https://medium.com/@nileshjaybhaye9999/spiritual-light-11-a59f1f648c75
['Nilesh Jaybhaye']
2021-07-05 09:24:23.969000+00:00
['Human Rights', 'Humanidade', 'Humanitarian', 'Humanism', 'Humanity']
Text Preprocessing for NLP and Machine Learning Tasks
As soon as you start working on a data science task, you realize the dependence of your results on the data quality. The initial step — data preparation — of any data science project sets the basis for effective performance of any sophisticated algorithm. In textual data science tasks, this means that any raw text needs to be carefully preprocessed before the algorithm can digest it. In the most general terms, we take some predetermined body of text and perform upon it some basic analysis and transformations, in order to be left with artefacts which will be much more useful for a more meaningful analytic task afterward. The preprocessing usually consists of several steps that depend on a given task and the text, but can be roughly categorized into segmentation, cleaning, normalization, annotation and analysis. Segmentation, lexical analysis, or tokenization, is the process that splits longer strings of text into smaller pieces, or tokens. Chunks of text can be tokenized into sentences, sentences can be tokenized into words, etc. Cleaning consists of getting rid of the less useful parts of text through stop-word removal, dealing with capitalization and characters, and other details. Normalization consists of the translation (mapping) of terms in the scheme or linguistic reductions through stemming, lemmatization and other forms of standardization. Annotation consists of the application of a scheme to texts. Annotations may include labeling, adding markups, or part-of-speech tagging. Analysis means statistically probing, manipulating and generalizing from the dataset for feature analysis and trying to extract relationships between words. Segmentation Sometimes segmentation is used to refer to the breakdown of a text into pieces larger than words, such as paragraphs and sentences, while tokenization is reserved for the breakdown process which results exclusively in words. This may sound like a straightforward process, but in reality it is anything but. Do you need a sentence or a phrase? And what is a phrase then? How are sentences identified within larger bodies of text? School grammar suggests that sentences have “sentence-ending punctuation”. But for a machine, a period looks the same whether it ends an abbreviation or a sentence. “Shall we call Mr. Brown?” can easily fall into two sentences if abbreviations are not taken care of. And then there are words: for different tasks, the apostrophe in “he’s” will make it a single word or two words. Then there are competing strategies, such as keeping the punctuation with one part of the word or discarding it altogether. Beware that each language has its own tricky moments (good luck with finding words in Japanese!), so in a task that involves several languages you’ll need to find a way to work on all of them.
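(As a minimal illustration of segmentation, here is a short Python sketch using NLTK; the article does not prescribe a particular toolkit, so NLTK is an assumption on my part. Note how the sentence tokenizer copes with the "Mr. Brown" abbreviation discussed above, and how the word tokenizer splits "he's".)

```python
# Minimal segmentation/tokenization sketch using NLTK (one possible toolkit; the
# article does not prescribe a specific library).
import nltk
nltk.download("punkt", quiet=True)      # tokenizer models, needed once
nltk.download("punkt_tab", quiet=True)  # resource name used by newer NLTK versions

from nltk.tokenize import sent_tokenize, word_tokenize

text = "Shall we call Mr. Brown? He's waiting."

sentences = sent_tokenize(text)
print(sentences)
# The abbreviation "Mr." is not treated as a sentence boundary:
# ['Shall we call Mr. Brown?', "He's waiting."]

tokens = word_tokenize(sentences[1])
print(tokens)
# This tokenizer splits "He's" at the apostrophe; other tokenizers may differ:
# ['He', "'s", 'waiting', '.']
```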
Cleaning The process of cleaning helps put all text on equal footing, involving relatively simple ideas of substitution or removal: setting all characters to lowercase; noise removal, including removing numbers and punctuation (this is part of tokenization, but still worth keeping in mind at this stage); and stop-word removal (language-specific). Lowercasing Text often has a variety of capitalization reflecting the beginning of sentences, proper nouns, or emphasis. The common approach is to reduce everything to lower case for simplicity. Lowercasing is applicable to most text mining and NLP tasks and significantly helps with consistency of the output. However, it is important to remember that some words, like “US” and “us”, can change meanings when reduced to lower case. Noise Removal Noise removal refers to removing characters, digits, and pieces of text that can interfere with the text analysis. There are various ways to remove noise, including punctuation removal, special character removal, number removal, HTML formatting removal, domain-specific keyword removal, source code removal, and more. Noise removal is highly domain dependent. For example, in tweets, noise could be all special characters except hashtags, as they signify concepts that can characterize a tweet. We should also remember that strategies may vary depending on the specific task: for example, numbers can be either removed or converted to textual representations. Stop-word removal Stop words are a set of commonly used words in a language, like “a”, “the”, “is” and “are” in English. These words do not carry important meaning and are removed from texts in many data science tasks. The intuition behind this approach is that, by removing low-information words from text, we can focus on the important words instead. Besides, it reduces the number of features in consideration, which helps keep your models appropriately sized. Stop-word removal is commonly applied in search systems, text classification applications, topic modeling, topic extraction and others. Stop-word lists can come from pre-established sets, or you can create a custom one for your domain. Normalization Normalization puts all words on equal footing and allows processing to proceed uniformly. It is closely related to cleaning, but takes the process a step further by stemming and lemmatizing the words. Stemming Stemming is the process of eliminating affixes (suffixes, prefixes, infixes, circumfixes) from a word in order to obtain a word stem. The results can be used to identify relationships and commonalities across large datasets. There are several stemming models, including Porter and Snowball. The danger here lies in the possibility of overstemming, where words like “universe” and “university” are reduced to the same root of “univers”. Lemmatization Lemmatization is related to stemming, but it is able to capture canonical forms based on a word’s lemma. By determining the part of speech and utilizing special tools, like WordNet’s lexical database of English, lemmatization can get better results: The stemmed form of leafs is: leaf. The stemmed form of leaves is: leav. The lemmatized form of leafs is: leaf. The lemmatized form of leaves is: leaf. Stemming may be more useful for database queries, whereas lemmatization may work much better when trying to determine text sentiment. Annotation Text annotation is a sophisticated and task-specific process of providing text with relevant markups.
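(Here is a compact sketch of the cleaning and normalization steps just described, again assuming NLTK for the stop-word list, Porter stemmer, and WordNet lemmatizer; exact outputs can differ slightly between library versions.)

```python
# Cleaning + normalization sketch with NLTK; outputs may vary slightly by version.
import re
import nltk
nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

text = "The leaves on 12 trees are falling, and the US is getting colder!"

# Cleaning: lowercase, strip digits/punctuation, drop stop words.
lowered = text.lower()                        # caution: this also turns "US" into "us"
no_noise = re.sub(r"[^a-z\s]", " ", lowered)  # remove digits and punctuation
tokens = no_noise.split()
stop_words = set(stopwords.words("english"))
content_words = [t for t in tokens if t not in stop_words]
print(content_words)  # stop words such as "the", "on", "are", "and", "is" are gone

# Normalization: stemming vs. lemmatization.
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()
for word in ["leafs", "leaves"]:
    print(word, "->", stemmer.stem(word), "(stem) /",
          lemmatizer.lemmatize(word), "(lemma)")
# The article reports: leafs -> leaf / leaf, and leaves -> leav / leaf.
```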
The most common and general practice is to add part-of-speech (POS) tags to the words. Part-of-speech tagging Understanding parts of speech can make a difference in determining the meaning of a sentence, as it provides more granular information about the words. For example, in a document classification problem, the appearance of the word book as a noun could result in a different classification than book as a verb. Part-of-speech tagging tries to assign a part of speech (such as noun, verb, adjective, and others) to each word of a given text based on its definition and the context. It often requires looking at the preceding and following words, combined with either a rule-based or a stochastic method. Analysis Finally, before actual model training, we can explore our data to extract features that might be used in model building. Count This is perhaps one of the more basic tools for feature engineering. Adding such statistical information as word count, sentence count, punctuation counts and industry-specific word counts can greatly help in prediction or classification. Chunking (shallow parsing) Chunking is a process that identifies constituent parts of sentences, such as nouns, verbs, and adjectives, and links them to higher-order units that have discrete grammatical meaning, for example, noun groups or phrases, verb groups, etc. Collocation extraction Collocations are more or less stable word combinations, such as “break the rules,” “free time,” “draw a conclusion,” “keep in mind,” “get ready,” and so on. As they usually convey a specific established meaning, it is worthwhile to extract them before the analysis. Word Embedding/Text Vectors Word embedding is the modern way of representing words as vectors, redefining high-dimensional word features as low-dimensional feature vectors. In other words, it represents words at X and Y vector coordinates where related words, based on a corpus of relationships, are placed closer together.
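(A short illustrative example of POS tagging with NLTK's default tagger, showing the noun/verb distinction for "book" mentioned above; the tag codes follow the Penn Treebank convention, and the exact tags may vary with the tagger version.)

```python
# POS-tagging sketch using NLTK's default (Penn Treebank style) tagger.
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
nltk.download("averaged_perceptron_tagger_eng", quiet=True)  # name used by newer NLTK

from nltk import pos_tag, word_tokenize

for sentence in ["I want to book a flight.", "She is reading a good book."]:
    print(pos_tag(word_tokenize(sentence)))

# Expected shape of the output (exact tags depend on the tagger version):
# [('I', 'PRP'), ('want', 'VBP'), ('to', 'TO'), ('book', 'VB'), ('a', 'DT'), ('flight', 'NN'), ('.', '.')]
# [('She', 'PRP'), ('is', 'VBZ'), ('reading', 'VBG'), ('a', 'DT'), ('good', 'JJ'), ('book', 'NN'), ('.', '.')]
```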
https://medium.com/sciforce/text-preprocessing-for-nlp-and-machine-learning-tasks-3e077aa4946e
[]
2020-05-05 11:11:00.901000+00:00
['NLP', 'Machine Learning', 'Data Science', 'Artificial Intelligence', 'Algorithms']
Class method vs static method in Python
This might be obvious for many, but based on my personal experience, I realized that people (including me) sometimes get confused about when to use @classmethod and @staticmethod . Hopefully this article will help clarify the usage of both. Class method First of all, what’s the definition of a class method? Based on Python’s official doc: “a class method receives the class as implicit first argument, just like an instance method receives the instance”. To declare a class method, we can use the @classmethod decorator: A class method can be called either on the class ( Example.foo() ) or on an instance ( Example().foo() ). The instance is ignored except for its class. A class method receives the derived class as the first implicit argument. Static method According to Python’s official doc, unlike a class method, a static method does not receive an implicit first argument. To declare a static method, we can use the @staticmethod decorator: Similar to a class method, a static method can be called either on the class ( Example.bar() ) or on an instance ( Example().bar() ). When to use Class method and Static method Based on the above definitions, class methods and static methods are arguably similar. So why do we need both, and what are their use cases? Let’s look at the following examples: Example 1 The output from calling Example.foo('FOO') and Example.bar('BAR') is: Message sent from foo: FOO Message sent from bar: BAR In this example, the effect of using a class method and a static method is similar, and both are capable of doing this task. Example 2 In this example, foo is calling the static method bar from inside its own body. The output from calling Example.foo("FOO") is: Message sent from foo: FOO attr_1 Message sent from bar: (from foo) BAR As we can see from the output, a class method is capable of accessing attributes and methods that are in scope at the class level (not the instance level). However, this is not the case for a static method, since a static method cannot access other class methods or class attributes (the class is not passed as an implicit first argument). In addition, neither class methods nor static methods have access to instance-level attributes and methods. Therefore, when designing a method, if the method requires access to any class-level attributes or methods, a class method is a suitable candidate; otherwise, a static method is sufficient. (A sketch reconstructing both examples appears below.)
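(The article's embedded gists are not reproduced in this text, so here is a hedged reconstruction of what Example 1 and Example 2 might have looked like. The decorator usage, the method names foo and bar, and the attr_1 attribute come from the article and its quoted output; the exact method bodies are my own guess, written only to reproduce that output, and the classes are split into Example1 and Example2 so both fit in one runnable file.)

```python
# Reconstruction of the two examples discussed above; bodies are inferred from the
# quoted output, not copied from the original gists.

# Example 1: for a simple, self-contained task, both decorators behave similarly.
class Example1:
    @classmethod
    def foo(cls, message):
        # cls is passed implicitly but is not needed here.
        print(f"Message sent from foo: {message}")

    @staticmethod
    def bar(message):
        # No implicit first argument at all.
        print(f"Message sent from bar: {message}")


# Example 2: only the class method can reach class-level attributes and methods.
class Example2:
    attr_1 = "attr_1"  # class-level attribute (name taken from the quoted output)

    @classmethod
    def foo(cls, message):
        # Because cls is passed implicitly, class attributes and other methods are reachable.
        print(f"Message sent from foo: {message} {cls.attr_1}")
        cls.bar("(from foo) BAR")

    @staticmethod
    def bar(message):
        # Cannot see cls or self, so no access to class or instance state.
        print(f"Message sent from bar: {message}")


Example1.foo("FOO")   # Message sent from foo: FOO
Example1.bar("BAR")   # Message sent from bar: BAR
Example2.foo("FOO")   # Message sent from foo: FOO attr_1
                      # Message sent from bar: (from foo) BAR
```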
https://medium.com/@att288/class-method-vs-static-method-in-python-be91dd6b3117
[]
2020-12-22 11:36:05.102000+00:00
['Python', 'Python3', 'Object Oriented', 'Software Engineering', 'Programming']
Living through the pandemic doesn’t mean you can’t thrive and be creative.
Living through the pandemic doesn’t mean you can’t thrive and be creative. In fact, creativity flourishes when we are alone with our thoughts and not distracted by the mundane events in the world. The key is to find a passion — something that brings you joy and nurtures your soul. For me, it’s writing. What about you? I hope you get a dose of inspiration from reading this article.
https://medium.com/illumination/living-through-the-pandemic-doesnt-mean-you-can-t-thrive-and-be-creative-28ef9e85b772
['Kristina Segarra']
2020-12-21 04:37:52.864000+00:00
['Mindfulness', 'Pandemic', 'Inspiration', 'Life Lessons', 'Short Form']
Hanyu Pinyin: Learn the pun, find the fun
Hanyu Pinyin (汉语拼音) is the official romanization system for Standard Chinese, created by the Chinese. Tones aside, there are two parts to each character — Sound (声母) and Rhythm (韵母), also known as the Initial and the Final to English speakers. Although these combinations are made up of modern English letters, they do not always sound like the English pronunciations that we are accustomed to. It is a functional system and has its merits for learners who start at a young age with no prerequisite of another language. At the age of 7, in Singapore, I learned English and Mandarin Chinese concurrently in separate classes. In English class, the pan refers to a cooking device while in Chinese class (tones aside), the Hanyu Pinyin “pan” usually refers to the plate (盘) and it actually sounds more like the English word for a play on words, pun. Although they were both “p-a-n”, I had not connected the English pronunciations with Hanyu Pinyin, because they were equally foreign and new to me at that point. When I took up Russian class at the age of 24, my first instinct was to read “Москва” like an English word. It is in Cyrillic but between English and Mandarin that I already knew, I instantly drew an association to the former. I read it as “Mock-bah” and soon learned that it should sound more like “Mosque-va” which means Moscow.
https://medium.com/@linzybearswings/hanyu-pinyin-learn-the-pun-find-the-fun-bea5d66fbf68
['Linz Lim']
2021-12-31 17:39:35.870000+00:00
['Creative', 'Language Learning', 'Chinese', 'Design', 'Ideas']
What Did Marie Curie Discover?
Inspired by Wilhelm Roentgen’s discovery of X-rays, Henri Becquerel was studying their properties using naturally fluorescent materials in 1896. While observing their behavior under a magnetic field, he found that uranium emits rays that are different from X-rays. This motivated Marie Curie to pursue her research in the field of uranium rays and their electromagnetic properties. When Curie experimented on uranium rays, she found that the rays remained constant regardless of the form of the uranium. Hence, these rays must have come from uranium’s atomic structure. To describe this phenomenon, she coined the term ‘radioactivity.’ Marie’s study of radioactivity gave birth to a new field of physics: atomic physics. Marie’s husband joins her research By this time, her husband, Pierre Curie, was very impressed and intrigued by her research, so he decided to drop his own research and help her in her discovery. Marie worked with two minerals of uranium. She also sampled tonnes of the ore pitchblende to understand more about its radioactive properties. She and Pierre began searching for other elements that are radioactive. In 1898, they discovered that the element thorium was also radioactive. Marie presented a brief paper on her discovery in 1898, since she knew that sharing her discovery with the world as early as possible was necessary to establish her priority as an important woman scientist. After years of extensive research, Marie and Pierre discovered two radioactive elements that they named “polonium” and “radium”. Marie and Pierre in their laboratory, 1904 (public domain image) Unfortunately, Pierre Curie died in a road accident in 1906. After his death, Madame Curie continued her research on radioactivity. She isolated the element radium in 1910 and devised an international standard to measure radioactivity. Winning the Nobel prizes In 1903, Henri Becquerel, Pierre Curie, and Marie Curie were jointly awarded the Nobel Prize in Physics for their work on radioactivity. A picture of the 1903 Nobel Diploma for Physics (public domain image) In 1911, she won her second Nobel Prize, in Chemistry, for the discovery of radium and polonium. This is a snippet from the biography of Madame Marie Curie. Read her entire biography here: Interesting facts about Marie Curie
https://medium.com/@freelancer-maddy1988/what-did-marie-curie-discover-d5e12528de1c
['Maddy M']
2020-12-08 06:05:16.441000+00:00
['Scientist', 'Important Events', 'Biography', 'History', 'Marie Curie']
How Will the EU MDR Shape the Future of MedTech?
This article first appeared via the Climedo Blog. From May 26, 2020 onwards, medical device manufacturers operating and/or selling within the EU must comply with the EU MDR (EU 2017/745). The regulation's objective is to improve patient safety by evaluating existing devices (e.g. through PMS and PMCF) and ensuring transparency throughout a device's lifecycle. In a recent survey conducted by KPMG and RAPS, 66% of respondents said they hadn't started planning for the long-term ramifications of MDR compliance, and only 27% believed they would be fully compliant by the cut-off in May 2020. In the following, we want to discuss how the future of medtech will be impacted by the new regulation. A New Chapter for MedTech Despite the amount of bureaucracy, increased costs, and longer waiting times the new regulation may entail, it has the potential to fundamentally change the MedTech industry (for the better, we believe). In our Playbook, we gaze into the future and predict five key ways in which we believe the MDR will shape the future MedTech world and how companies can use it as leverage for growth. Below is a sneak preview of what our Playbook talks about. Download the full Playbook 1. Accelerated Innovation Under EU MDR law, Post-Market Surveillance (PMS) will become an integral part of most manufacturers' Quality Management System (QMS). This means that they will need to gather and demonstrate much more evidence on whether their devices are fulfilling their intended purpose. They must also identify and potentially eliminate underperforming devices. The amount of data companies must collect will give them access to new insights which can help improve their products and devices, shifting from mere compliance to a higher quality standard. In light of the MDR, companies can take advantage of stronger product differentiation and an improved ability to design and promote products to key decision makers. Gathering clinical evaluation data and real-world evidence to back up product marketing claims could also lead to new or better claims, thus driving market adoption. Experts predict that 30% of medical devices could be taken off the market due to the MDR. Though it sounds daunting, this will enable manufacturers to invest their efforts into their most valuable devices and grow their market share. As part of their portfolio rationalization, manufacturers will be able to see which products may no longer be worth their investment and discontinue them or sell them off to larger companies. The mandatory Unique Device Identification (UDI) system will help detect and report counterfeit products more easily, thus decluttering the market. 2. Improved Safety & Transparency Unlike the MDD (Medical Device Directive), the MDR states in Article 10 (2) that manufacturers must establish, document, implement and maintain a Risk Management System (RMS) as part of their QMS. Better identifying, analysing, and communicating known and foreseeable hazards will protect patient safety in the long term. New sanctions, such as fines or even prosecution, are meant to ensure that manufacturers follow through with their RMS and have patients' best interests at heart. The MDR will also help to mitigate liability: should the safety of a medical device or product ever be called into question, manufacturers will need to act quickly. Not only will this help to ensure patient safety, it will also reduce the company's liability, since their reaction time has to be kept to a minimum.
Incidents need to be reported either immediately or within a given number of days (two to fifteen). The amount of data manufacturers will have gathered as part of their PMS by then will also help them build a stronger defense case if necessary. 3. Patient Empowerment On the end-user side, patients and medical staff can expect increased safety, traceability, and transparency. Since manufacturers will need to proactively gather and report on user feedback as part of their PMS, patients will have a clear say in the efficacy of products, how they will be modified, which should stay on the market, and which should be removed. This will contribute to a future of true patient empowerment. Some examples of what the MDR aims to find out from patients are: Is the device fulfilling its promised use? Are patients perhaps over- or underusing a device? Are they using the device’s default settings or having to change them every time? With the EUDAMED database becoming publicly available in May 2022, patients will be able to make more informed decisions about devices by researching them ahead of usage. Patients with implants will receive a registration card with access to information about the manufacturer and their safety records. 4. New Roles for Notified Bodies Finally, medium- and high-risk medical devices will require review by a Notified Body (NB). Their role will be to ensure that devices at every stage, from design through quality control to ongoing surveillance, comply with the regulation. For example, NBs will be obliged to enforce regulation through unannounced audits of manufacturers’ processes, and manufacturers may need to amend their contracts with subcontractors and/or suppliers as a result. For NBs to be able to issue MDR certificates (such as the CE mark), they need to be recertified according to MDR. This designation process consists of four steps and will take ~18 months per NB. We should thus expect some delays in the review and certification process. Furthermore, it is likely that not all current NBs will remain in business; the number has been declining since 2012, and as of November 20th, 2019, just seven NBs have been MDR-certified. A current list can be found via the NANDO database. On a more positive note, the more rigorous surveillance and scrutiny by NBs will mean that risks from unsafe devices will be drastically reduced. Manufacturers should prepare for these changes well in advance to ensure a smooth transition to MDR. 5) A Paperless World According to MDR Annex III, manufacturers’ technical documentation must be presented in a “clear, searchable and unambiguous manner”. Until now, many affected companies have been working with unstructured data collection and sharing systems, such as paper, Excel or email. Since these systems are cost-inefficient, unsafe and likely to struggle in the post-MDR era, we can expect a surge in companies moving their technical documentation into the cloud. As such, we will see more and more manufacturers embracing a holistic, digital solution that is accessible to all involved stakeholders. Some of the benefits manufacturers will see in the long term include improved collaboration and transparency, the elimination of silos, and more efficient information management processes. They will also be able to react to serious incidents more quickly, mitigating potential safety risks. 
To protect data from cybersecurity risks, however, certain security requirements must be met, such as data pseudonymization, appropriate encryption schemes, and consistent back-up strategies. Want to find out more? Get the Playbook And if you’d like a free, personalized demo for accelerating clinical validation, get in touch! Additional documents to get you ready for the EU MDR
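As an illustration of the pseudonymization requirement mentioned above, here is a minimal Python sketch that replaces a direct patient identifier with a keyed hash before a post-market surveillance record is stored. The MDR does not prescribe a specific algorithm, and the field names and values below are hypothetical.

```python
# Hypothetical sketch of pseudonymizing a patient identifier before storing a
# post-market surveillance (PMS) record in the cloud. Illustrative only; the
# MDR does not mandate this particular scheme, and all field names are made up.
import hashlib
import hmac

SECRET_KEY = b"load-this-from-a-secrets-manager"  # never hard-code a real key

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {
    "patient_ref": pseudonymize("PAT-2020-00123"),  # pseudonym, not the real ID
    "device_udi_di": "EXAMPLE-UDI-DI",               # placeholder identifier
    "incident": "unexpected battery drain",
    "reported_within_days": 15,
}
print(record)
```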
https://medium.com/@marius.tippkoetter/how-will-the-eu-mdr-shape-the-future-of-medtech-56a9426fcb48
['Marius Tippkoetter']
2020-02-20 10:08:33.034000+00:00
['Healthcare', 'Eu Mdr', 'Medical Devices', 'Medtech', 'Digital Health']
Musharraf Ali Farooqi — Literature filled with Optimism
Musharraf Ali Farooqi — Literature filled with Optimism Following one's passion requires a lot of hard work and focus for a dream to turn into reality. Musharraf Ali Farooqi is one such example. The author, novelist and translator was born in Hyderabad. After attending St. Bonaventure's School and completing his Intermediate, he enrolled in an undergraduate degree in engineering. His father taught Philosophy at Sindh University and dreamt of an academic career for him. After about a year of studying engineering, Musharraf had something else in mind. To put it in his own words, he happily dropped out of the engineering program and made his own way through experimenting with writing and translating, though he told a different story to his mother. "My dropping out from the engineering studies was a disappointment to her and I still hear about it!" While working as a journalist in Karachi, he also started a small literary magazine called Cipher in collaboration with his friends, Azhar Ali Abidi and Zainab Masud. This was the period when he started writing stories in English, and simultaneously translating poetry from Urdu into English. His first translation was of a poem by the contemporary poet Afzal Ahmed Syed. Musharraf has published work across various age groups and genres. These include the novels Salar Jang's Passion, The Story of a Widow, Between Clay and Dust, and Rabbit Rap: A Fable for the 21st Century; the children's novel Tik-Tik, The Master of Time, and the short story collection, The Amazing Moustaches of Mocchhander the Iron Man and Other Stories; his translated works include The Adventures of Amir Hamza, The Beast, Rococo and Other Worlds: Selected Poetry of Afzal Ahmed Syed, and Hoshruba: The Land and the Tilism; and his Microtalk series of essays for the HT/Livemint newspaper covers South Asian folklore and myths. Lexicography is a new area in which Farooqi has made contributions by editing and launching the first online Urdu Thesaurus website and mobile app (the iPhone app is under development). The Urdu Thesaurus app is a much-needed resource. With just a click, one can access a string of synonyms for Urdu words. It is easy to see that the Urdu Thesaurus is going to play a role in popularizing Urdu use. When we hear an unfamiliar word, or an obsolete, funny-sounding word, we discuss it and it comes back into circulation. With the app, words can be shared via text or social media, and the ease with which a word can be explored in detail will help the exchange and communication. No one has to wait to get back home and look up the word in a hardcover dictionary. No one does that any more anyway! So where did the idea for the online Urdu Thesaurus and app come from? The project idea and its structure have been with him for the last ten years. It came from Musharraf's practical experiences as a translator of classical Urdu texts. He realized how much of our language stays hidden from a common reader's view because the majority of modern readers are not familiar with classical Urdu texts. "Our language resources have not been centralized. I am familiar with these problems because of my work as a translator of classical Urdu texts. I know how hard it is to find definitions for words of our classical literature. So I thought of making a resource that would help others like me.
The Urdu Thesaurus website you see is just a small glimpse of what I dream would one day become a central tool and vehicle for literacy and education in the Urdu language.” Given that this idea has been taking root for a long period, Musharraf has been collecting dictionaries, and selecting the reliable ones from them. He started the data-entry, and ended up doing the initial proofreading himself. The synonyms data collected from these dictionaries was then merged, and another round of proofreading was done to remove duplication, followed by yet another phase. As he failed to secure any funding for the project and had to solely depend on his own resources — the progress was slow. Dr Rafaqat Ali Shahid helped with proofreading the first two letters, and then Humaira Ashraf took on the work and did the letters from BAY onward. Dr Awais Athar advised the project on the technological side. The two had met once in Lahore to discuss how to crowd-source the proofreading of large texts. Musharraf approached him again in 2015, and requested his help to which he kindly agreed, and since October 2015 Dr Awais has been advising the team building the Urdu Thesaurus database and the mobile app, voluntarily contributing his time and technical skills. “The Urdu Thesaurus would not have come out for the lack of resources had Dr Athar not offered help when I requested it. I don’t know how long it would have been delayed. The credit for Urdu Thesaurus’s success belongs equally to him.” The Urdu Thesaurus is a user friendly resource. According to Musharraf that was the whole point of making a language resource. Before finalizing the interface, the team ensured that it was easy to use, and the search function worked efficiently. Talking about what is in store for the project’s future, Musharraf shared that this is just the beginning. Now that the Beta version of the Urdu Thesaurus is out, he hopes that it will attract funding from here on. The plan is to add more data and several features to this resource to make it a comprehensive educational tool. A dictionary of antonyms and dictionaries of phrases, idioms and proverbs will come next. He has ideas for several other products that will grow around these resources in the long run. The general perception about the Urdu language is that it is in trouble, with the next generation not so comfortable with reading and writing it. Musharraf however, doesn’t agree with the majority’s stance. In his opinion there is no doomsday scenario approaching for the language. The proof being that just within the first few weeks of the Urdu Thesaurus’s launch, it has been accessed in 569 cities from 74 countries as per Google Analytics’ statistics. He quotes the fate of the English channels that closed down, and the Urdu channels which are thriving, as another example that Urdu language is not in any crisis of any kind. While Musharraf is quite optimistic about the future of the Urdu language, he knows that there are some issues that have led to the current state. Majority of the educated people don’t read Urdu classics anymore, not much is being written for children in Urdu, and local publishing for children is stagnant. These changes occur over a long period, and reversing them also takes time. For him, solutions should be looked at for their long term significance. In the meanwhile, persistence of the effort is needed to ensure that positive outcomes do transpire in time. 
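The merging and deduplication step Musharraf describes, combining synonym lists from several source dictionaries into a single entry per headword, can be pictured with a small sketch like the one below. It is a hypothetical illustration, not the Urdu Thesaurus's actual pipeline, and the sample entries are invented.

```python
# Hypothetical illustration of merging synonym data from several dictionaries
# and collapsing duplicates; not the Urdu Thesaurus's actual pipeline.
from collections import defaultdict

dictionary_a = {"خوشی": ["مسرت", "شادمانی"]}
dictionary_b = {"خوشی": ["شادمانی", "فرحت"]}

def merge_synonyms(*sources: dict[str, list[str]]) -> dict[str, list[str]]:
    merged: dict[str, set[str]] = defaultdict(set)
    for source in sources:
        for headword, synonyms in source.items():
            merged[headword].update(synonyms)  # duplicates collapse in the set
    return {word: sorted(syns) for word, syns in merged.items()}

print(merge_synonyms(dictionary_a, dictionary_b))
# one entry for خوشی containing the three distinct synonyms
```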
Curating our classical literature for children in modern editions and communicating it to them by making it widely available, is one way to begin the effort. “My belief is that you are given one life, and if you wish to make a change, or set something right, you should do everything that is within your power to carry out your vision and ideals with whatever resources you have. Waiting for external circumstances to change before you take any action yourself, is not going to end well for you and others.” In recent years, Pakistani English language writers have been in the news a lot. In Musharraf’s view the much bigger number of authors in Urdu and other languages should be a part of the conversation when we discuss Pakistani writing in English. The literature coming out of a society should be looked at in its totality. Our stories in this section always cover the topic of stereotypes in one way or the other, so we asked Musharraf his take on Pakistan and Pakistanis being stereotyped in the outside world, given his personal experience. For those who might not know, he emigrated to Canada in 1994 with his wife, returned in 2009, and still divides his time between the two countries. He said that he is not worried about other peoples’ opinion about us. It will change once we change what people find objectionable. In turn, our complaints about others’ opinions about us would end once that is fixed. Musharraf Ali Farooqi’s literature and optimism is inspiring. We got to learn a lot from it and hope that he continues to be a source of inspiration for others in this field. Interview and written by Fatima Arif
https://medium.com/storyfest/musharraf-ali-farooqi-literature-filled-with-optimism-30a25a4c377c
['Fatima Arif']
2019-03-13 07:14:22.184000+00:00
['Pakistan', 'Storytelling', 'Language', 'Authors', 'Urdu']
Virtual Reality boosts creativity in your child
VR has opened up an entirely new era of learning, be it for children or teachers. Virtual reality has the capability to overcome the challenges posed by expensive field trips and robotic experiments. Read more at https://fotonvr.com/virtual-reality-boosts-creativity-in-your-child/
https://medium.com/@fotonvr/virtual-reality-boosts-creativity-in-your-child-468abb032f97
[]
2020-12-22 12:09:48.055000+00:00
['Virtualrealitytechnology', 'Augmented Reality', 'Educational Technology', 'Virtual Reality']
Gemba Capital 2018 Year In Review
We started Gemba Capital in August 2017 as a family office to make early-stage investments. The transition to a micro VC fund now makes sense looking at the strong foundation we have laid in the past 18 months. We now have a strong advisory board with experts guiding us on specific deals which fall in their domain. We have deep relationships with accelerators, incubators, intermediaries, co-investors, family offices, micro VC funds, angel networks and fundraising platforms across India and across specific sectors. This strong ecosystem and network have helped us understand trends better and also form the foundation of our deal flow. Our investment thesis evolved from being initially more CPG (Consumer Packaged Goods) focused to more technology focused. We have also internally developed frameworks to assess critical aspects of an opportunity. One such example is the Founders Attribute Score ('FAS'), which is a weighted-average point system to rate key attributes we look for in a founder. It has so far worked wonderfully for us and the framework is constantly evolving. We will probably write a blog on FAS separately. Another aspect which evolved is the sector-agnostic nature of our investments. Returns in an early-stage portfolio follow a power-law distribution rather than a bell curve. We believe that in angel/seed investing, it is best to be sector agnostic since the portfolio is broader; on average, we intend to close 20–24 deals over a three-year timeframe. In terms of business models and investment themes, we developed a preference for SaaS, marketplace platforms and products for Emerging Bharat in 2018. For 2019, we will write a separate blog on the segments, sectors, and areas which look interesting to us. In terms of founders, we prefer at least two founders (at least one with a tech background) and founders who are brave and courageous enough to solve a real, big and interesting problem. Complementary skill sets and relevant domain experience are added preferences for founding teams. We have a portfolio approach to investing and there are some markers, like 70% B2C and 30% B2B; 70% Angel rounds and 30% Pre-Series A. We will continue with this approach in our micro VC fund as well. Our value-add to our portfolio companies starts even before we have cut the cheque. We do get involved to the extent our founders want us to get involved. Testimonials from our founders are the best way to put this across. We have led two deals as part of a syndicate and we expect to lead many more in 2019 as our deal size increases. Our misses in 2018 were Edtech, Agritech, Industrial IoT and a pure-play CPG brand. In 2019, our focus, besides our 2018 misses, will be on AR/VR, Insuretech, Drones, and Blockchain. If you are working on any of the mentioned areas then do reach out to us here. Below is our 2018 deal flow in numbers: Our current team is one partner and one analyst. Overall, our conversion ratio from sourcing to investments is 1.6%. In 2019, this is expected to reduce to ~1%. If we get on a call with the founders, there is a 43% probability that we will have a one-on-one meeting. Conversion from the meeting stage to investments is ~12%. As you can see, 58% of our deal flow has been a combination of proprietary deals and referrals. We plan to increase this to 70% in 2019. Also, 24% of our deals were seed stage and 69% angel stage. The remaining 7% includes bridge rounds, pre-Series A and others.
Delhi represents NCR. A few observations: CPG deals came more from Delhi, fintech deals came from Mumbai, and software deals from Bangalore. Many IoT and Agritech deals came from Pune and Hyderabad, while Chennai sent us hardware and SaaS startups. Our initial focus was on CPG, hence the higher deal flow from that segment. In 2019, we hope our deal flow improves in Edtech, Agritech, AR/VR and Blockchain. We received a healthy mix of B2C and B2B startups in 2018, although we would have liked to see more B2C startups considering our preference for investing in B2C businesses.
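To illustrate the kind of weighted-average point system the Founders Attribute Score describes, here is a small sketch. The attributes, weights, and ratings are hypothetical; Gemba's actual framework has not been published.

```python
# Hedged sketch of a weighted-average scoring system in the spirit of the
# Founders Attribute Score (FAS) mentioned above. Attributes and weights are
# invented for illustration; this is not Gemba Capital's actual framework.
ATTRIBUTE_WEIGHTS = {
    "domain_experience": 0.25,
    "tech_capability": 0.20,
    "complementary_team": 0.20,
    "problem_size_conviction": 0.20,
    "coachability": 0.15,
}

def founders_attribute_score(ratings: dict[str, float]) -> float:
    """Weighted average of 1-10 ratings; the weights sum to 1.0."""
    return sum(ratings[attr] * weight for attr, weight in ATTRIBUTE_WEIGHTS.items())

example_ratings = {
    "domain_experience": 8,
    "tech_capability": 7,
    "complementary_team": 9,
    "problem_size_conviction": 8,
    "coachability": 6,
}
print(founders_attribute_score(example_ratings))  # 7.7
```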
https://medium.com/@adithpodhar/gemba-capital-2018-year-in-review-5723b1a6c092
['Adith Podhar']
2019-01-05 09:31:19.020000+00:00
['Venture Capital', 'Angel Investing', 'Deal', 'Portfolio', 'Startup']
Viva La Vida Chords Coldplay
[Intro] C D G Em x2 [Verse 1] Em C D I used to rule the world G Em Seas would rise when I gave the word C D Now in the morning I sleep alone G Em Sweep the streets I used to own [Interlude] C D G Em x2 [Verse 2] Em C D I used to roll the dice G Em Feel the fear in my enemy’s eyes C D Listen as the crowd would sing: G Em “Now the old king is dead! Long live the king!” Em C D One minute I held the key G Em Next the walls were closed on me C D And I discovered that my castles stand G Em Upon pillars of salt and pillars of sand [Chorus] C D I hear Jerusalem bells are ringing G Em Roman Cavalry choirs are singing C D Be my mirror, my sword, and shield G Em My missionaries in a foreign field C D For some reason I can’t explain G Em C D Once you go there was never, never an honest word Bm Em That was when I ruled the world [Interlude] C D G Em x2 [Verse 3] Em C D It was the wicked and wild wind G Em Blew down the doors to let me in. C D Shattered windows and the sound of drums G Em People couldn’t believe what I’d become Em C D Revolutionaries wait G Em For my head on a silver plate C D Just a puppet on a lonely string G Em Oh who would ever want to be king? [Chorus] C D I hear Jerusalem bells are ringing G Em Roman Cavalry choirs are singing C D Be my mirror, my sword, and shield G Em My missionaries in a foreign field C D For some reason I can’t explain G Em I know Saint Peter won’t call my name , C D never an honest word Bm Em But that was when I ruled the world [Interlude] C Em x3 D x2 C D G Em x2 (Ohhhhh Ohhh Ohhh) [Chorus] C D I hear Jerusalem bells are ringing G Em Roman Cavalry choirs are singing C D Be my mirror, my sword, and shield G Em My missionaries in a foreign field C D For some reason I can’t explain G Em I know Saint Peter won’t call my name , C D never an honest word Bm Em But that was when I ruled the world [Outro] C D Bm Em Oooooh Oooooh Oooooh x2
https://medium.com/@vanesacitra05/viva-la-vida-chords-coldplay-33db9c478033
[]
2020-12-16 07:40:52.383000+00:00
['SEO', 'News', 'Music', 'Videos', 'Love']
What is a transaction?
🛤 Transaction — a request to record data on the blockchain; a logically complete data-exchange operation that is either confirmed and entered into a block or canceled. 💰 Transactions on DecimalChain are processed by specialized network participants, the validators, in exchange for a reward in the form of new DEL coins. 🗣 Each transaction on the Decimal network begins with the formation of a special message in the prescribed format, after which the message is sent to the master node, where the request is checked for validity. A valid transaction then receives the status of unconfirmed. All transactions that have been verified but not yet confirmed by validators are pending processing. 🎱 The place where unconfirmed transactions are stored is called the memory pool. A validator takes transactions from the mempool, collects them into a block, and proposes the resulting block to the other validators for agreement. If the block is agreed upon, all transactions included in it receive the status of confirmed and become part of the Decimal blockchain forever. 👨🏻‍💻 For an ordinary DecimalChain user, this entire transaction path is hidden behind a simple and understandable form in which you only need to specify the recipient's address and the amount of the transaction. After about six seconds, the transaction will be confirmed.
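The lifecycle above (form a message, validate it, hold it in the mempool, collect it into a block, confirm it) can be sketched in a few lines of Python. This is a simplified illustration; the class and field names are hypothetical and do not reflect DecimalChain's actual implementation.

```python
# Simplified, illustrative model of the transaction lifecycle described above:
# submit -> validate -> mempool -> block -> confirmed. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Transaction:
    sender: str
    recipient: str
    amount_del: float
    status: str = "unconfirmed"

@dataclass
class Blockchain:
    mempool: list[Transaction] = field(default_factory=list)
    blocks: list[list[Transaction]] = field(default_factory=list)

    def submit(self, tx: Transaction) -> None:
        if tx.amount_del > 0:                    # stand-in for the validators' checks
            self.mempool.append(tx)               # valid but not yet confirmed

    def produce_block(self) -> None:
        block, self.mempool = self.mempool, []    # a validator collects pending transactions
        for tx in block:
            tx.status = "confirmed"               # agreed block -> transactions are final
        self.blocks.append(block)

chain = Blockchain()
chain.submit(Transaction("sender-address", "recipient-address", 10.0))
chain.produce_block()
print(chain.blocks[0][0].status)  # confirmed
```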
https://medium.com/@decimalchain/what-is-a-transaction-be2bbb38141a
[]
2020-12-27 12:42:47.294000+00:00
['Decimalchain', 'Cryptocurrency', 'Dar', 'Blockchain Development', 'Del']
Making Real Money on Upwork
Making Real Money on Upwork Quarantined Income Workshop #2 Photo by Windows on Unsplash Welcome to the second workshop in the quarantined income workshop series. Two days ago, we discussed how to go about making a real income on Fiverr. I talked about my experience and asked for advice from another Fiverr seller named Mandy. Today we’re focusing on Fiverr’s biggest competitor, a rival online platform that also connects entrepreneurs with clients and businesses that need their services. The Upwork Model Upwork has a number of major differences to Fiverr, differences that make the platform feel arguably more professional in tone than that of Fiverr. While Fiverr has a lot of goofy gigs and markets equally to casual customers as it does to businesses, Upwork focuses on serious work for professional companies. Sellers on Upwork have more options than those selling on Fiverr, including being paid by the hour, and paying a smaller percentage of their gig income to Upwork when they pass certain earning thresholds. These conveniences are there to attract more professionals that are proficient in their industry and not just those who are casually interested. A seller with a new interest in their discipline is probably going to have an easier time getting started on Fiverr, but a seller with experience and training will find Upwork more gratifying. Photo by christian buehner on Unsplash Please welcome to the stage, Matt For today’s workshop, I talked with Matt, a filmmaker who got started in Upwork originally just to pick up a few gigs on the side to supplement his freelance income. Now he relies on Upwork for the majority of his clients and makes a decent income from the site. Firstly, I asked Matt about his experience getting started on Upwork. He said that he was interested in the site right out of film school, back when Upwork was called Elance-oDesk, but he didn’t take advantage. He was intimidated by the site back then because he didn’t consider pitching clients to be his strength. He was also worried that sending and receiving video files over the version of the internet that existed at the time would be too much of a headache to deal with on a daily basis. Instead, Matt got started in the industry the regular way. He found representation with an agency that books film crews and worked on commercials and TV shows for the next several years. As time went by, he started to notice the industry changing. Social media marketing was on the rise, and more and more companies were turning to freelance filmmakers to produce simpler web-based content marketing. Matt checked Upwork and saw that a lot of these companies were finding their freelancers there, so he finally decided to join up. He only anticipated that he’d be given scraps of work here and there, he never predicted that it would become his largest source of new clients. Photo by Marvin Meyer on Unsplash The Advantages of the Platform What Matt likes most about Upwork is the billing system. Working as a freelance filmmaker, editor, and camera operator can be really tough. There’s a lot of anxiety around not knowing when a client will pay the invoice, or even if they’ll pay at all. A freelancer can do a whole week of work for a client while stressing the entire time about whether or not that week will ever pay off. Upwork took that fear away, giving him the security that he’ll be paid promptly and in full. Unlike with Fiverr, on Upwork, he can keep a gig open with his best clients after the original project has been completed. 
The client can just keep sending work, and he can keep their tab open indefinitely. There's no need to constantly create and end gigs for clients who need constant work. For Those Getting Started For new sellers, Matt's advice is to keep working at it. Matt's first gig was a $20 job to do some light photoshopping. He couldn't believe that he was being paid to stay home in his pyjamas and photoshop some images. After that, gigs came in slowly and infrequently. But after each gig he'd complete, he'd add the finished result to his portfolio, building up his library of work slowly over time. Eventually, his portfolio became so impressive that gigs were coming in constantly. Up until now, Matt has always charged the hourly rate that seems fair to him. Some sellers prefer to charge a flat rate, which for editors can be really risky. You can charge one flat price, then spend more hours than you ever predicted on the job, lowering the value of each hour. Charging hourly has always been better for the seller, but it's a turnoff for clients. Clients want to know exactly how much money they need to spend on the job up front and can feel worried that hours might pile up. But recently, Upwork has taken a page out of Fiverr's book and has introduced a package pricing model. Clients can buy packages that ensure the job gets done within budget, and ensure Matt gets paid enough. He can predict the hours he thinks the package will take to complete, then work extra fast to potentially make every hour even more valuable. Packages give clients the ability to see what the gig will cost up-front, and give the seller the ability to up-sell the client with add-ons and bonus options. Right now Matt says that he's not incentivised to work at his fastest pace because he's paid hourly, but by adding the package system he can challenge himself to see what he can fit into every hour. Matt still has a lot of trouble pitching new clients and often relies on short and sweet pitches that get right to the point. Something like "Hey, I'm really familiar with the job you need doing and will do a great job with it. Take a look at my showreel and my portfolio. Have a great day!" Short pitches appeal to a lot of his clients, but he knows he has to improve his strategy and formulate better pitches. He especially needs to improve his pitches so that he can land an enterprise client. Enterprise clients are large and often recognisable companies that rely on a pool of regular freelancers to meet their content needs. Landing an enterprise client would mean lots of high-value work that could be relied upon into the future. Making yourself seem more appealing to both regular and enterprise clients is important, and Matt has found that featuring your showreel on your portfolio is critical for catching the client's eye. While his showreel thumbnail link featured on his portfolio has had hundreds of clicks, the standard showreel link has seen almost no attention at all. So it pays to be creative with your portfolio when trying to stand out in a packed crowd. Photo by Carl Heyerdahl on Unsplash Top Tips on getting started on Upwork
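As a quick illustration of the flat-rate risk Matt describes, here is a small sketch showing how the effective hourly rate drops when a flat-fee job takes longer than estimated. The numbers are made up for the example.

```python
# Illustrative only: how unplanned hours erode the value of a flat-fee gig.
def effective_hourly_rate(flat_fee: float, hours_worked: float) -> float:
    return flat_fee / hours_worked

flat_fee = 500.0
print(effective_hourly_rate(flat_fee, 10))  # 50.0  -- the rate you quoted for
print(effective_hourly_rate(flat_fee, 18))  # ~27.8 -- the rate you actually earned
```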
https://medium.com/money-clip/making-real-money-on-upwork-24e44f05391a
['Jordan Fraser']
2020-05-07 07:19:10.741000+00:00
['Money', 'Freelancing', 'Hustle', 'Entrepreneurship', 'Business']
Can the elevator OEMs become tech-enabled?
After publishing a series of articles five months ago outlining our contrarian vision of the elevator market, we received a great deal of attention from top executives, industry experts, and equity investors. They asked us similar strings of follow-up questions, which can be summed up as follows: Isn't it simply a matter of time before the Big 4 are able to replicate uptime's innovation? How does uptime scale? Are you the only company on the market doing this? Because we believe the industry is on the brink of a revolution, we decided to make our answers public. Predictive maintenance will shift the value from labor operators to tech operators. As we detailed in our previous articles, the entire industry will pivot on its service business model disruption. The disruption equation has three parts: the technology platform, which enables predictive maintenance; the operational model, which shifts from a labor-first, visits-based technician routine to tech-enabled, task-oriented interventions; and the value proposition, which shifts from selling useless visits, compliance, and last-minute repairs to selling performance and access to information. The three parts of the elevator-maintenance-disruption equation Simply adding "digital features" to the OEMs' existing service contracts is far from enough to deliver the required level of change. For an incumbent, this change requires a complete turnaround. We believe that once the OEMs have access to the right technology, they will gain the ability to perform a turnaround and emerge as winners in this industry transformation. However, even with lavish investments, they cannot build the tech by themselves, and definitely not within a reasonable timeframe. As detailed below, the barriers to entering such a technology platform are high, such that the value will shift from those who control the operational framework (service contracts, labor) to those who control the technology and the data. Time-out. It's good that we have our own opinion, but we understand why some of you might think we are biased. So we decided to tune in to the Q3 Earnings Calls of Kone, Schindler, and Otis in order to listen to what analysts and top execs were saying. Luckily, digital maintenance, although initially not part of the CEO presentations (apart from a brief mention by Otis' Judy Marks), was the main focus of the Q&As. Kone, for example, was asked nine questions — representing a third of all the questions posed — regarding their 24/7 Connected Services product, by no fewer than five different analysts. Smokescreen. Unveiled? To begin, let's look at the basic questions: What is the deployment pace of digital offerings? What value are they creating? Deployment pace First of all, the Q3 earnings Q&As confirmed that the deployment pace of the OEMs' digital maintenance products is slower than expected. Let's look at Kone's Q&A (source of the transcript: SeekingAlpha). Kone Q: Guillermo Peigneux, UBS « I am actually wondering whether you could share some numbers on 24/7 installation for us in unit terms or number of installations? » A: Henrik Ehrnrooth, Kone CEO « […] So we are currently in the range of between 5% and 10% of our service base where we have the 24/7 Connected Services.
» Q: Guillermo Peigneux, UBS « And maybe a follow-up on that, every — do I understand that every new unit, elevator unit that you are installing will be with 24/7 Connected? » A: Henrik Ehrnrooth, Kone CEO « […] Not every unit comes automatically connected yet, but when we have DX across the world, then every unit will automatically have connectivity built in. » Q: Andrew Wilson, JPMorgan « And I guess, just an additional question on 24/7, I think historically, you have sort of set I think it was that there could be 1 million connected elevators […]. I guess, just of the development that you have seen at 24/7 so far, have you changed your of ambitions? » A: Henrik Ehrnrooth, Kone CEO « […] Perhaps in the beginning, we thought it would be faster to roll it out. Perhaps what’s been a little bit slower as we have a commercial model to sell to install and all of that has been somewhat slower than I would have predicted a few years back. […] We think that the majority of our service base in the future will be connected. That view hasn’t gone anywhere, but we also haven’t given a time frame where we think that’s going to happen. » Q: Daniel Gleim, MainFirst « Yes, good afternoon. Thank you very much for taking my questions. Actually got three on ten on 24/7 Connected Service and apologies for belaboring the point. The first is on the short-term impact; you gave us some guidance for the incremental service growth tailwind. […] » A: Ilkka Hara, Kone CFO « Well, geographically, actually, for 24/7, we have a good coverage already. […] So geographically, it’s not so much of an expansion story from my perspective. Then second, I think the key for us really is to ramp up our capability to sell. […] » What did we learn, from this Q&A? 1. Deployments are below expectations. Henrik says it stands between 5% and 10% and that it is slower than expected. He further mentions that their total ambition is still here — but without any mention of a time frame. 2. Not every new elevator has built-in connectivity yet, although this will be the case shortly, with the DX rollout. 3. The main reason given for Kone’s slow pace of deployment is the go-to-market, mentioned by both Henrik and Ikka. The geographical footprint is already there — but the product seems hard to sell. Let’s now look at Schindler’s Q&A on this subject (source of the transcript: SeekingAlpha). Schindler Q: Andre Kukhnin, Crédit Suisse « Schindler Ahead, could you give us an update on where you are, to whatever degree of detail you can in terms of number of connected and paid for units, and maybe any indication on how much revenue that has added in last 12 months, so that we can run some benchmarking versus some of your peers and think about forecasting that going forward for that business? A: Thomas Oetterli, Schindler CEO « […] We are strongly convinced that connectivity is a key driver of future success. It is still a net investment for Schindler. […] So every new equipment is equipped with the Schindler Ahead. » Q: Lucie Carrier, Morgan Stanley « […] I just actually had a follow-up on the — on Andre’s question on connectivity. Are you maybe able to tell us how much of your install base is actually functioning in terms of connected services. » A: Thomas Oetterli, Schindler CEO « So we do not disclose the absolute number of connectivity, because we don’t want to enter into this race. 
» Here, it becomes clear that the deployment rate is too low for them to want to disclose it (Thomas doesn’t “want to enter into this race”), and their hints are in line with our previous assumption of < 10% of the installed base. It seems evident that if the products would bring definite value, they would be deployed. Let’s look at the value-creation questions. Value creation Kone Q: James Moore, Redburn « Firstly, could we get back to 24/7, please? Could you talk a little bit about what is the magnitude of the price uplift that we’re seeing in the latest quarter or so? […] And then we are talking about a 10% price hike against the standard contract, or more than that? » A: Henrik Ehrnrooth, Kone CEO « Much more than that. » Schindler Thomas Oetterli, Schindler CEO responding to Andre Kukhnin, Crédit Suisse « […] And yes, on the first analysis and we see that the overall connected units are performing better in portfolio retention than non-connected units. But it is a long — it’s a long term game. […] Of course, this will — the financial impact will depend on how much we are going ahead with connectivity. And this will take a couple of years. […] And last but not least, for those units where we can sell an Ahead package, we also see that our service price, including the Ahead module is increased — depends a little bit on the market — between 10% to 15%. » Otis (Source of the transcript: SeekingAlpha.) Q: Carter Copeland, Melius Research « And then just as a follow-up on Otis ONE and pricing differentials you’ve seen on those connected units or what your expectation is for those in the future, just high level thoughts on that would be appreciated? Thanks. » A: Judy Marks, Otis CEO « […] In terms of Otis ONE, we are ramping up and accelerating our deployment. We’ve seen productivity gains but in terms of additional subscriptions or revenue, it’s still early in terms of where we’re able to gain traction on that. […] It’s giving them [the technicians] the ability to show up quicker, to have less running on arrivals when they get there because they know it’s already running on arrival and they don’t have to actually make that service call. So we’re pleased with the early results, but it’s not anything that’s really added to the top line in terms of subscription revenue yet.» Here, it’s important to note in passing that they focus on “showing up quicker” rather than on avoiding breakdowns. This takes us back to the product vision of technology for elevator maintenance, which we discussed in our second article. A: Rahul Ghai, Otis CFO « So just to add to that Carter, Otis ONE joins the suite of other connected applications that we have like destination dispatch system, elevator management system, and those applications combined add about 30% to 40% of the subscription revenue. […] Where we do have remote service capability that we provide through even the phone lines, we are able to get incremental price in those units. […] » “Even the phone lines” — it seems that Rahul is referring to the Otis REM (a 1980s product that has evolved since), a great innovation at the time, which, however, could not help to build predictive maintenance. Q: Denise Molina, Morningstar « Hi, thanks. Thanks for the question. Denise Molina from Morningstar. I am just trying to go back to the comment you made about the 30% to 40% left on service revenue from the connected services. 
[…] Just wondering if you’re expecting that to be widely adopted for everyone to kind of have another 30% to 40% in their budgets for these services […]? A: Rahul Ghai, Otis CFO « Yeah Denise, so my comment on kind of 30% to 40% was more around the fact. So we have several connected solutions, in addition to Otis ONE. So we have a little bit of a management system. We provide the destination dispatch. We have eView systems that involves — that are connected. So when you put all that together, that can add about 30% to 40% off the revenue that we get on that unit. And there is not a lot of incremental cost to support that. So you would expect margins to be higher. […] to get higher uptime because as Judy responded earlier, we can dispatch a technician as soon as the unit breaks down. […] We are investing in this, we are installing these units at our own cost because we think the productivity benefits outweigh the cost. » A: Judy Marks, Otis CEO « And Denise, our conversion rates and our retention rates on connected elevators beats our industry leading retention rates across the globe. So we have the ability to actually retain in our service portfolio those connected units at several hundred basis points above what we do with our normal retention rate globally. […] » What did we learn? 1. Kone’s value from 24/7 focuses on additional topline, reaching more than 10% gains. We do not have details on productivity and retention, because questions concerning these factors were not asked. Taking into account the challenge in ramping up sales that Kone mentioned, is the value created for customers high enough to justify this price increase over the long term? 2. Schindler doesn’t value the topline first, and doesn’t see the financial results of the Ahead program yet. However, Thomas quotes increased retention, additional productivity, and a 10% — 15% price increase in some cases, that is, “for those units where we can sell an Ahead package.” The sale does seem hard as well. 3. Otis’ response is surprisingly contradictory: The CEO says it’s about productivity first, not additional revenues — but the CFO mentions that up to 40% of additional revenues are expected. However, the CFO combines quite different things, such as the destination dispatch system, in-cabin screens, and even old emergency phones — which do not seem linked to Otis ONE’s advertised focus on digital maintenance. Regarding Otis ONE specifically, both the CEO and CFO underline the productivity gains, though only curative ones (enhanced unscheduled callbacks — as opposed to the actual avoidance of a breakdown). The product does not seem to be currently commercialized per se. Now, If the productivity and retention gains are so high (Schindler and Otis), why not deploy massively ? ? If the product is working and generating much more than +10 points in additional revenues, and if there is already a global footprint (Kone), why is it hard to sell? Are their products fully functional? Our assumption, which we detailed in our previous articles, is that the OEMs’ products are not living up to the expectations. They do not enable the operational and go-to-market shifts that we detailed above. Some of our key views have been confirmed in the CEOs’ answers: Is the product vision even appropriate? Shouldn’t it be about avoiding breakdowns, instead of fixing them faster? « We can dispatch a technician as soon as the unit breaks down » says Rahul Ghai, Otis CFO. 
« It’s giving them [the technicians] the ability to show up quicker » points out Judy Marks, Otis CEO, before adding « to roll the trucks and to get our field professionals out there as quick as possible to drive uptime. » Is their go-to-market strategy based on the right approach? Do the customers really wish to purchase another “gadget” for their connected building? Maybe the sale is hard because their products focus only on nice-to-have features instead of must-have features? Data generated by an IoT device in itself has no compelling value proposition for customers. What customers want is an actual improvement in performance and transparent access to information. Have they developed the first layers of predictive maintenance, that is, brand-agnostic controller IoT and powerful field software? Why is it so hard for industrial companies to adopt a software-first approach? Predictive maintenance is deep-tech, not merely digital. The biggest market actors have been boasting about their digital initiatives for years. Those projects are simply prerequisites to operating a service company in the 21st century: deploying an ERP or CRM, giving technicians smartphones with reporting applications, tracking spare parts, etc. « We continue to deploy iPhones to our field professionals, adding four more countries during this quarter, and the adoption of our suite of apps continues to expand driving service productivity within the organization. » — Judy Marks, Otis CEO Of course, when an industry was “paper & pen” only a few years ago, it looks like tremendous progress. But it is the ground layer. Much deeper changes are required to unlock the future. Isn’t it only a matter of time before the Big 4 are able to replicate uptime’s innovation? This opinion is rooted in two misconceptions: Data. The OEMs have access to a tremendous amount of data, each of them maintaining 1m+ elevators. This was also Henrik Ehrnrooth’s answer during Kone Capital Market Days in September. Q: Klas Bergelind, Citi « I want to dig deeper into the maintenance and digital offerings, and threats out there. We’re hearing now of tech operators that are looking to join forces with ISPs, and quite big independents, and thinking they have an edge in understanding not only your equipment but also the equipment of third-parties from a tech perspective ? » A: Henrik Ehrnrooth, Kone CEO « […] Here I actually think the big players will have a clear benefit, because they have a broader base. And you require that data from a broader base to create the service needs, if you only have a small base, only some hundreds or thousands or couple of thousands, you will be very restricted in learning what are the service needs that come out of certain signals out of the data, and analyze and make sense out of it. […] We have to put it in perspective, the big players have 1m+ units in service, then we talk about smaller players that have some thousands it is quite a different game to scale it from one level to the other. » Money. The OEMs have tons of cash to invest into developing the right technology. Sheer volume is meaningless when you collect the wrong data. “Another common misconception is that IoT offerings generate so much data that companies should be able to discover a silver bullet somewhere in it. 
Misled by this belief, one multinational industrial company tied up dozens of highly qualified data scientists for a decade on data projects that failed to find a viable route to market or indeed demonstrate any real commercial potential." — McKinsey article, Tech-enabled disruption of products and services To build a powerful AI and generate real ROI for customers, the sheer volume of data itself is not the most relevant factor. The dataset has to be relevant in the first place. Otherwise, it's virtually useless. Brand-specific versus brand-agnostic data. How can data be useful if the dataset is appropriate only on one brand or model — provided that those units are connected? The operational routine of a technician is built around an area, not a brand or model. No field technician manages only elevators of a specific brand. The shift to tech-enabled services can happen only in a brand-agnostic model. Relevance. What if the data points gathered are irrelevant? This applies as much to the IoT data stream (e.g., in the elevator context, vibrations are far less insightful than firmware status codes) as to the field data stream (e.g., the actual normalization and quality of reporting by field technicians). How can a technician focused on productivity targets and doing countless compliance-only visits report useful data? At uptime, we built our tech based on what was needed, not what was there and merely available. And the threshold that must be met to have a significant impact is clearly not millions of elevators. Henrik Ehrnrooth confirmed this view: Q: Lucie Carrier, Morgan Stanley « My last question was maybe on the Connected Services, I remember at the Capital Markets Day, you made the argument that the scale of your installed base was giving you an advantage, especially versus a smaller provider of services. […] » A: Henrik Ehrnrooth, Kone CEO « […] My point in the Capital Markets Day is that before you have several thousand units connected, you're not going to have enough information to create good service needs and good algorithms to create that predictability and the understanding going forward. And that's why I think big players have a significant advantage here." Several thousands of units is a threshold that third-party technology providers fueling independent SMBs can achieve… fast. We estimate that the OEMs do not have the right data streams, either for IoT or for field data. Yet again, the Q3 Earnings Calls Q&As seem to validate our assumptions. IoT data relevance First, the OEMs do not seem to have brand-agnostic controller IoT (as a side note, we were impressed by the accuracy of the analysts' questions): Kone Q: Lucie Carrier, Morgan Stanley « Can you explain maybe which type of data you collect specifically? I mean which kind of KPIs are really important from your standpoint? […] And how also should we think about the scale argument, considering that half of the elevators you maintain are not of your own manufacture, if I understood well from the Capital Markets Day material? » A: Henrik Ehrnrooth, Kone CEO « I think your question is more related to the Connected Equipment and what data we collect and utilize.
That’s a proprietary information that we utilize. But of course, what are the most important things they have to do with movement of doors. It has to do with electronics, it has to do with the accuracy, with ride comfort and a lot of things like that. And we look at all those parameters, we learn from them, you create service needs.[…] Then you talked about that many of our equipment are non-Kone. That’s okay. We connect them as well. So that doesn’t really, again, change the picture here at all. They are connected in a slightly different way, but we get almost as good data from them as we get from Kone and we’re learning there as well, more and more, the broader base we have.” Henrik Ehrnrooth, Kone CEO answering Andrew Wilson, JPMorgan “[…] also the data that we collect from sensors and through the connectivity that we have into these elevators.” The word “sensors” could be a hint about the absence of a brand-agnostic controller-IoT product. Elevator expertise shows that connecting digitally to the controller yields massive results instead of adding additional sensors. To date, it seems that OEMs are able, at best, to connect to their own brand controllers only. Hence, regarding Kone versus non-Kone, what is the difference? Are we talking about additional sensors for non-Kone, or a connection to the controller? It changes everything. Schindler Thomas Oetterli, Schindler CEO answering Lucie Carrier, Morgan Stanley « So the older an installation is, the less information you get out of the controller. And you have to make a business case where it really makes sense to add a sensor kit. So you have enough data to get a meaningful data for your Ahead platform. » Again, additional sensors or controller-IoT? Especially for non-Schindler equipment? Moreover, is the age of the elevator the most relevant factor? Almost all controllers since the 1990s have digital diagnostic ports, and the average modernization pace is 20 years. Otis As we saw earlier, is Otis repackaging the REM as IoT products? Are they focused on gathering the right data? Q: Denise Molina, Morningstar « Can I just ask one follow-up on that, because we’ve heard a lot from Kone and Schindler respond on their connected services and I think we’re trying to figure out what the difference is amongst the players. But it sounds like the ISPs are the ones that or maybe not as much as they don’t have many elevators kind of feeding information to get those good kind of uptimes. Do you think that’s right? Do you think that if you were going up against and it’s difficult to say that you are going up against another OEM that had the same number of elevators feeding those algorithms, do you think your services would be differentiated still? » A: Judy Marks, Otis CEO « 55% of the service market right now is controlled by ISPs. That’s who we are going to get share from to grow our service portfolio above our leading service portfolio of over 2 million units already. And we do believe that with scale and with differentiation comes incredible data and clarity in terms of being able to make decisions, having a data lake, being able to do predictive and transparent maintenance, and as Rahul said to get the — to roll the trucks and to get our field professionals out there as quick as possible to drive uptime. It’s the value of that data and the analytics, that’s going to make a difference and we are — that’s where we’re going after. The ISPs have more than half of the share globally, and that’s what we intend, especially to get our Otis units back. 
» And as side note, if the target is the Otis units, why focus on ISPs? Isn’t Otis planning to be competitive and regain Otis units from Kone’s, Schindler’s, ThyssenKrupp’s portfolios as well? Field data relevance Kone Q: Andrew Wilson, JPMorgan « Just a couple of questions on 24/7, actually following-up on questions earlier, I think we were talking around the engineers and the way that they work in the field. Am I right in understanding from your comments that the engineers when they’re out on a site, they are documenting in all of the repairs, the mix is being logged and that data that you use in terms of helping to provide that connected service down the road? I am just trying to literally understand the practicalities of how the way the engineer is working has changed over time? » A: Henrik Ehrnrooth, Kone CEO « Clearly, they document what they do and through their devices they have in the field. So the data we have for any given elevator is what they have used, but of course, also the data that we collect from sensors and through the connectivity that we have into these elevators. […] And of course, if there’s a fault about to happen, you get a direct — it goes directly to the field device of the technician to inform them that they need to do an intervention. » Q: Lucie Carrier, Morgan Stanley « […] How are your maintenance employees incentivized to make sure they perform a good collection in monitoring of the data because obviously, for a lot of them, this is quite a new thing? » A: Henrik Ehrnrooth, Kone CEO « Okay. There were many questions in one and you wanted to understand what data employees collect, of course, the input data. We have — I think we have the leading field systems in the market. So we can, of course, gather data what they’ve done and what interactions. So that’s something that automatically happens. » Henrik is not providing much detail, unfortunately. Are the field technicians providing the right normalized data about elevator components, observations and actions, all in a harmonized way? Is the dataset usable by AI? Has this field data proven to be useful for building the “Service Needs”? Is this increased reporting requirement a cultural change for the field workforce? We would need to ask follow-up questions. Schindler Q: Lucie Carrier, Morgan Stanley « Yes, so if it wasn’t clear, I guess my question is, you know, we can have a lot of elevators being connected. But, you know, which type of metrics are you focusing on? And how much are they incentivized on collecting also the data? I mean, I’m just curious to understand how it worked really in practice for them, because historically, engineers were not necessarily serving a fixed base of elevator, they were kind of you know, serving an area rather than a portfolio of elevator. So in terms of their knowledge, and them being proactive in managing this install base of elevator, I’m just trying to understand how their training is changing or how their incentivization is changing, so they kind of adjust to the connectivity of elevator, in terms of their job? » A: Thomas Oetterli, Schindler CEO « […] Schindler Ahead is just an additional support for our engineers in the field. So we do not intend to reduce, for example, training, training efforts of our service engineers, but we add them additional data. 
So when a service engineer goes to the installation, he will have on his iPhone or on his smartphone, all the data of this and all the history of this elevator; if it is a breakdown of the elevator, he gets a guide and on how to resolve that, how to resolve that breakdown. For that purpose we have installed in all the countries, a so-called TOC, T-O-C, this is a technical operation center. All the data which comes from the connected unit goes into a central technical operation center of that company, where we analyze all the symptoms, all the data received by the elevator. And if there is a breakdown, or a normal visit, this technical operation center gives guidance to the technician. » Like Henrik, Thomas does not provide all the necessary details, and the same line of questioning arises. The answer mentioning the technical operation centers is insightful. Before, or instead of, developing AI algorithms that make sense of the IoT data automatically and send the insights to the field technicians directly, Schindler seems to analyze the data with people reading live streams. When we wrote our first article, we thought the OEMs lacked a suitable predictive maintenance product vision. Now, it also seems that they do not have the first layers done right. In our mind, their execution seems to suffer from the following problems: an unclear IoT strategy for their competitors' brand equipment; an unclear data strategy from the field; and an industrial product vision versus a software-first approach: technical centers instead of algorithms, rolling the trucks faster instead of avoiding breakdowns. Cash alone never generated any innovative outcome Considering that the OEMs have been trying to build such innovation for the last two to five years — and have invested significant money (to date, c.€200m-300m each for Schindler and Kone, according to Credit Suisse research), it is clear that they know they have to do it. But there is a big difference between trying and succeeding. IoT initiatives by the OEMs Source: Company data, Credit Suisse estimates Why then is it so hard for large industrial actors to develop the right IoT products, software, and AI? First, there are substantial cultural barriers. In our previous article, we offered a relevant quote: "It definitely needs a cultural change, because a company like ThyssenKrupp is more a classic engineering company," said Reinhold Achatz, ThyssenKrupp's own chief technology officer, to the Wall Street Journal in October 2019. Second, software and AI innovation is extremely hard and requires more focus and agility than money. The same McKinsey article advised that, in order to become tech-enabled, industrial companies should: First, listen to your customers — not something the elevator industry is known for. Second, place big bets. "Some tech-enabled industrial companies use a VC-like governance structure with a digital unit reporting directly to a "digital board" comprising the CEO, CTO, and CFO." Focus is key. The elevator OEMs have been trying to innovate across a wide range, from in-cabin screens to smartphone elevator calls by users, not necessarily focusing on what matters. Third, adopt agile product development.
This is much easier said than done — even for startups around the globe — let alone for huge manufacturing organizations. Fourth, build out your ecosystem. "Commercial as well as technological partnerships are essential to moving fast and scaling effectively. Building and maintaining a robust ecosystem of partners demands dedicated resources." Fifth, establish the right go-to-market capabilities. « Expecting your traditional sales channels to convert customers quickly or bolting a digital sales group onto a traditional organization could spell disaster. » (same article). Those who are the most advanced in the market know that without the right go-to-market (value proposition and sales system), no innovation is valuable. Again, what is needed by the market is far removed from the elevator industry's usual standards. Beyond a high enough level of investment, agility and focus matter much more than extra dollars. uptime is only getting started We have upended the industry's value proposition by offering a guarantee of performance to our clients. We have entirely rebuilt our technicians' routine by adapting various workforces to various tasks, and soon this allocation will be dynamic and condition-based. Most importantly, we have reached the key milestone of brand-agnostic digital IoT for elevators, as well as brand-agnostic data-driven field reporting. We have built the first predictive features (see our previous articles on the predictive maintenance steps), such as breakdown detection and remote callback discrimination, and we are now deploying our first CBM features. Our technology platform, the map®, is only getting started, and its full realization still lies far ahead, once the compounded effects of additional data, successful CBM, and further operations & sales model evolutions start kicking in. The OEMs are stuck in their legacy brand-specific product focus, legacy field data models, and legacy field and sales organizations. They might get somewhere with time, but we're talking in years. Where will uptime be in two or in five years, when the OEMs could perhaps be approaching the first milestone of where uptime currently is? Are incumbent automakers solving self-driving by themselves? Definitely not. Jaguar teamed up with Waymo; General Motors bought Cruise in 2016, before investing $1.1bn and attracting $2.75bn from Honda in 2018; Ford and Volkswagen teamed up with Argo ($3.6bn, 2019); Hyundai and Fiat Chrysler teamed up with Aurora; GM launched GM Ventures early on; and Renault-Nissan-Mitsubishi created Alliance Ventures with $1bn to be deployed. The bottom line is that deep-tech innovation depends entirely on pace and timing, and knowing that, industrial incumbents of any sector are ill-advised to walk this path entirely on their own. Is uptime the only company disrupting elevator maintenance? The short answer is yes. There are elevator service companies out there, and there are elevator equipment providers (which provide a certain kind of IoT tools). But there is no other company with expertise in software innovation as well as in service delivery and go-to-market. Andreessen Horowitz, one of the top VC funds, described this as "full-stack startups" in 2014. As Sep Kamvar puts it in a recent podcast: If you're starting a full-stack construction company, you have to start a construction company and a software company at the same time. And it's hard enough just to start either.
[…] But once you're able to do that, if you're able to do that, then it allows something really powerful, which is it allows you to write software not just for existing processes, but it allows you to innovate on process at the same time as you innovate on software. And very specifically, it allows you to innovate on process in the way that software enables. Growing our elevator portfolio from scratch in Paris allowed us to innovate simultaneously on the operational processes, the value proposition, and the go-to-market — and to do so in a way that our IoT and software enabled. Without an elevator portfolio as a development field, it is impossible to create an adequate predictive technology product, and it is impossible to change the operational model and go-to-market model without the right technology input. What about others? The only other start-up disruptor in this market also happens to be a French company, WeMaintain. WeMaintain's founders noticed the poor quality of service in the market and decided to solve it by adopting a labor-supply approach instead of a technological one. To our mind, there is indeed friction in the supply of technicians, but it is a distant second to the maintenance model — and it is addressed by developing predictive maintenance. Based on this idea, they started out as a marketplace in Paris in late 2017, and they communicated heavily about why predictive maintenance is the wrong answer for the elevator market, as well as about their goal of motivating the current OEMs' field employees to leave their jobs and start as independent contractors on their platform. However, we understand from the French technician labor market that WeMaintain now hires field operators, abandoning the pure marketplace route, which is hardly a good fit for the elevator maintenance market. At the beginning of 2020, WeMaintain acknowledged the technology shift and launched an IoT device based on the additional-sensors and vibration-analysis model that we depicted in our previous article. This limited solution looks mostly like another marketing push, similar to what some OEMs seem to be relying on so far. We also note that some elements of this IoT device's development appear to have been outsourced to a small generalist firm, Piwio. As they launched in the UK only recently, and their value proposition remains centered on customer communication and a direct connection to the field technician, we do not consider them a deep-tech company, and definitely not a complete full-stack company. How does uptime scale? uptime has everything required to continue building its technology platform, thanks to its full-stack development field. It is now in a position to scale globally by selling pickaxes to multiple actors during the imminent global gold rush for predictive maintenance. Four years ago, when we started the company, elevator predictive maintenance was a dream for us, and a distant illusion for others. Now we are ready, and the whole industry is progressively understanding that it has to shift ASAP. The OEM companies went for the gold rush alone, and have failed so far. Let's bear in mind that elevators are the same worldwide, most of them supplied by the same OEMs, so the IoT scales; and that field routines are similar worldwide as well, so the software scales easily with localization. Our platform is ready to be deployed globally.
The OEMs and regional ISPs that use our technology platform will be able to build on top of it a massive dataset of relevant IoT and field data, and a custom CBM engine adapted to their own operational framework. They would therefore finally differentiate by making the right operational and sales shifts, and should emerge as the best at maintaining any elevator of any brand. There is an opportunity to eliminate churn in mature markets, to generate strong organic growth of the installed base, and to solve the maintenance equation in China by jumping directly to predictive maintenance. Not all OEMs and ISPs will be able to perform this turnaround and emerge as the dominant players. Who, in the context of the COVID-19 pandemic and crisis-management pressure, will have what it takes to bet on the future? Who will build market leaders that operate the technology and serve any elevator better than their competitors? To enable that technological shift, uptime has the best product and the best team on earth. This is how we scale. www.uptime.ac/en
https://medium.com/@augustincelier/can-the-elevator-oems-become-tech-enabled-24efcd8a6e6
['Augustin Celier']
2020-12-06 23:07:47.789000+00:00
['Elevator', 'Industry', 'Predictive Maintenance']
Why Study IT Management?
In an ever-changing business landscape, it is increasingly important for companies and large organizations to invest in the best information technology services they can. Without maintaining the quality of their IT, it quickly becomes impossible to conduct daily business and maintain their integrity. For round-the-clock availability, maximum performance and security, organizations need IT professionals, which is where studying IT management comes in. Why is a Master's in IT Management so Important? A degree in IT Management will prepare you for many things. For instance, your Master's will help you understand what your employees are doing. Whilst it is true you won't have mastery of all the tasks your team members perform, it is still important to familiarize yourself with the business-critical issues you will be expected to handle. A Master's will also teach you how to support your employees as they seek to overcome the challenges they face. Good team leadership skills are essential, and taking a master's degree gives you opportunities to practice and refine them. They are a key part of what makes a good IT manager. Once students graduate, each is certified as being equipped with the knowledge, skills, and understanding they need to deliver IT services effectively, so that wherever and however IT is used, its use contributes towards achieving the organization's goals. As graduates, students possess up-to-date knowledge and understanding of the latest advances in a wide variety of IT-related fields. They will know how best to use it so clients get value for money. They will also know which kind of organization would benefit most from them. Better Job Positions After Studying IT Management After taking a master's degree, you will be in a much better position to set achievable goals for your team. As a manager, it will be your responsibility to monitor team progress and to maintain and reinforce a shared vision to deliver the best outcome. The people skills you refine during your degree will help your professional reputation and attract a good team. Because a Master's degree in IT management offers its students a curriculum based on the whole IT industry, as an employer you will be able to recognize and develop new talent. After you graduate and begin offering your professional IT management services, you also have the option to become an outsourcing solution. It is a great way to earn a living if you need a flexible way to work: these days, it's pretty usual for mid-sized to large organizations to outsource all their IT requirements to IT specialists and managers who work off-site remotely. Final Words You may already be working as an IT professional or know you have the technical skills. But if you're contemplating a transition into management, a Master of Information Technology Management is well worth considering. The program provides a solid foundation to help you design, deliver and operate business technologies more effectively.
https://medium.com/visualmodo/why-study-it-management-9c3463f3a8ff
[]
2020-04-21 00:38:34.034000+00:00
['Management', 'Study', 'Study Abroad', 'It', 'Why']
Building Your Own Facial Recognition System
Prerequisites So, before we get started, let's go over some prerequisites for this project. Here are the things you'll need in order to successfully build your own facial recognition system: an understanding of how facial recognition works, a T3 API key, Python 3.7+, PostgreSQL 12.6, PostgREST 7.0.1, React 17.0.2, and Node 10.19.0. For those who just want to look at the code, here's a link to the repo. Now, let's develop a better understanding of how facial recognition works and how we will use it. How Does Facial Recognition Work? For this article we won't be diving into Convolutional Neural Networks, Harris Corners, or the other technical inner workings of these systems. Instead, we want to focus on how these systems determine who is who. In facial recognition software, the machine learning models are first trained on hundreds of thousands or millions of images in which the key facial features within a picture of someone's face have been painstakingly annotated by hand, like the image below. source: quora.com The points on the face above are calculated by a machine learning model that "looks" at an image and does its best to figure out where certain facial features are located. Using this information, facial recognition systems can calculate the geometry of your face, or your facial fingerprint, if you will. Then, this fingerprint is compared against a database of fingerprints for similarity in facial features. What sets facial recognition systems apart is the number of features they use to measure your face, as more features typically mean higher accuracy. For our system, what we'll do is take a few pictures first to build a more accurate digital fingerprint; that way we don't have to keep our head in the exact same spot every time we want to use it. The Setup As I mentioned earlier, we'll need a T3 API Key to create our facial recognition app. T3 is an API service that I've created that offers free machine learning APIs to developers. For this, we'll use the /facekp endpoint to get the key points from a face.
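To make the fingerprint comparison described above concrete, here is a minimal Python sketch. It is an illustration under assumptions, not the article's actual code: the endpoint URL, the authorization header, the response shape, and the match threshold are all placeholders; check the T3 documentation for the real request format.

```python
import requests
import numpy as np

# Placeholders: URL, auth header and response shape are assumptions for illustration only.
T3_FACEKP_URL = "https://t3.example.com/facekp"
T3_API_KEY = "your-t3-api-key"

def get_keypoints(image_path):
    """Send an image to the (assumed) /facekp endpoint and return its keypoints."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            T3_FACEKP_URL,
            files={"image": f},
            headers={"Authorization": T3_API_KEY},
        )
    resp.raise_for_status()
    # Assumed response shape: {"keypoints": [[x1, y1], [x2, y2], ...]}
    return np.array(resp.json()["keypoints"], dtype=float)

def fingerprint(keypoints):
    """Center and scale the keypoints so the 'facial fingerprint' is comparable across photos."""
    centered = keypoints - keypoints.mean(axis=0)
    scale = np.linalg.norm(centered)
    return (centered / scale).ravel() if scale else centered.ravel()

def is_same_person(kp_a, kp_b, threshold=0.1):
    """A smaller distance between fingerprints means more similar face geometry."""
    return np.linalg.norm(fingerprint(kp_a) - fingerprint(kp_b)) < threshold
```

The idea is simply that two photos of the same person should produce nearly the same normalized facial geometry, so the distance between their fingerprints stays small; averaging the fingerprints of a few enrollment photos, as suggested above, gives a more stable template than a single photo would.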
https://medium.com/@tedtroxell/building-your-own-facial-recognition-system-57d4d597448d
['Ted Troxell']
2021-04-26 17:23:31.066000+00:00
['Machine Learning', 'Facial Recognition', 'Postgres', 'Computer Vision', 'AI']
Kicking and screaming
Kicking and screaming But how do I go about this? I'm going insane Med after med It feels like nothing will help I'm ready to give up But I know I can't I know I'm not supposed to I love them too much But I'm supposed to keep going Not just for them For myself first But I can't stop myself I can't get myself to begin I want to be able to go on But I don't know how much I can take How much more can I take? I'd be lying if I said I can do this right If I said that I felt okay But I need to push myself For me And for them I need to change for the better But it's scary I want to be a good daughter A good sister Friend Fiancèe I just want to be a good person I need to learn to become happy Happy with myself As happy as I am when I'm not alone As happy as they make me I feel like screaming always I feel like kicking away the past I need to live in the moment But also find my future I know it involves all of this love And I'll do it I'll find myself With love on my side
https://medium.com/@csnow0437/kicking-and-screaming-f29a2771c2e1
['Christina Snow']
2021-07-15 23:13:26.931000+00:00
['Self Love', 'Strength', 'Poetry', 'Love']
Day : 0. No need to mention how much of a bad of…
Introduction DATE : 19–July–2020 || Day : 0 No need to mention how bad of a year 2020 is. For me, the bad part began on the 31st of Dec 2019, when my grandfather was very sick. Everyone in my family was broken. The first 3 months of 2020 went almost entirely to my grandfather. He had two major operations and by God's grace he is doing fine now. Okay, for the past two years I was preparing for my GATE exam, as I was always excited to go to the IITs. And the exam was in February. I should not lie, my preparation was not really good. But I never thought of giving up and was expecting to score above 50 at least, but I was away from studying from January because I had to go to my grandfather's home at Tamluk. I really had no other choice. I was there helping my mom and grandmother (Maya) take care of my grandfather. Those were really bad days. One day in March I got my GATE results and I barely passed, with 30 marks and a rank of 10000. I was not sad but rather happy that I could at least pass the exam. I was so obsessed with the GATE exam that I did not sit for any college placements, and now I realize that it is one of my many mistakes in life. I was really into coding from my school days but did not have any idea about things like competitive coding until I reached college. I was performing quite well in the first two or three competitions, but I don't know why I stopped doing something I really liked. Again, another of the many mistakes of my life, as I understand now. I just wasted the whole of my college life messing around, playing video games and, you know, what one goes through during college days. A crush during college is okay, but blindly running after her even after getting rejected is the dumbest fucking thing one can ever do. But I had no choice and could not avoid her. Every time I gathered myself up and thought to let it go, one text after months totally melted everything. If it has happened to you, you can relate to me. Even today I could not get over her. She got a job and is really happy, as far as I know. I even had a relationship during this period and thought that would help me forget her, but even that did not work. The other thing I am addicted to is CS GO. That is also one of the major reasons I am in this position. The addiction level was such that I even told my father I wanted to be a gamer, and that was in November. That's what I had really become. But from May, I should say, I have been trying to pull myself together and am willing to fight back. I started competitive coding and really got a good result. I am currently a 4* coder in just 2 months. Now that I have promised myself to improve my life, I was really confused between a lot of things, like whether to appear for GATE (preferred by my parents) or to do competitive coding and some high-class projects so that I can apply to product-based companies. But my mom does not like that idea, as she wants me to prepare for GATE and get a rank under 100 so that I can get into a PSU. And on second thought, I also realized that isn't a bad idea, because if I could do that, then I can do competitive coding alongside a good job, and after two years I can get into the IITs, as GATE results are valid for 3 years. And during those two years I can apply for jobs at product-based companies. So here I am now, a jobless, confused, pathetic human trying to improve myself, whatever it takes. I will set my own goals for the next seven months, each day and each hour. Let's see how it goes.
https://medium.com/@sagardas7/date-19-july-2020-day-0-c14933ebec0e
['Sagar Das']
2020-07-19 14:37:00.175000+00:00
['Bad Decisions', 'Computer Science Engineer', 'Improvement', 'Goals', 'Life']
I Was Heartbroken Witnessing How Unfair I Was Towards My Kitten
I Was Heartbroken Witnessing How Unfair I Was Towards My Kitten The surgery made me realize my fault! Courtesy of the author Back in October — curiously, it feels like it was a year ago — I met a little living miracle. She was the one who adopted me if I want to be fair! To describe the last two months we spent together as magical would not give them enough credit! Most of our moments together are hilarious or affectionate. I’ve always adored making silly noises and giggling at her funny reaction! The last time it happened, I was talking to my dear soul friend Jill Horton. I laughed loudly for some reason, and when seeing her facial expression, my laughter became even louder! I’m pretty sure she’s saying something like, “Why do you need to be that weirdo, you humans? Can you at least warn me whenever you decide to have a silliness crisis?” She’s an enthusiastic, playful, adorably curious, and stubborn kitten. Occasionally, her last attribute could get the best of me. I was finding myself so frustrated with her tendency to test the boundaries — her very favorite game — that I was shocked at the aggressiveness in my tone! I even found myself hitting her gently as my silly way to tell her what she was doing was not acceptable, that Mommy was not happy; hence punishing her for her lack of discipline. I was feeling too bad about myself afterward. Something was off. I was out of my integrity and I knew the root cause. Nonetheless, I wasn’t ready to face the ugly truth. It was another bias and pattern from the residual part of my distorted subconscious program, “I am a dog person; I’ve always had such genuine and immediate connections with dogs; dogs love you unconditionally while cats are selfish and show affection merely when they need something!” Being so engaged in fighting against NPD (Narcissistic Personality Disorder) symptoms didn’t help!
https://medium.com/know-thyself-heal-thyself/i-was-heartbroken-witnessing-how-unfair-i-was-towards-my-kitten-21bb6c2330c6
['Myriam Ben Salem']
2020-12-28 09:26:53.357000+00:00
['Self-awareness', 'Self', 'Pets', 'Love', 'Self Improvement']
ViewState and Interactions — an easy contract between view and ViewModel
What I liked in the MVP architecture approach was the interface, the contract between a Presenter and a View (yes, I was the one who created an interface for each Presenter :). At a glance I knew what the user could do on that screen, and I could imagine how the screen looked by going through the methods the Presenter calls on the View. Everything was clear, everything in one place. When moving to MVVM I kind of lost it. I don't know why, to be honest, but with this new way of handling presentation-layer communication I just did it differently. I decided to come back to my lovely contract. Is it possible with ViewModel? Of course it is. Let's take this one as an example: Simple Fragment example. Imagine we've created a settings screen that allows us to change our name, enable/disable notifications and go to other, more specific settings screens. Now let's take a look at the ViewModel. This implementation of ViewModel is also nothing complicated. We can see there the user interaction methods and a couple of LiveData objects for view changes. There is nothing wrong with this ViewModel. It looks pretty good, right? I'm just missing my contract… or at least some place I could go and see what is happening on this screen. What could be changed, as simply as possible, to make me happy? What do you say about these changes? First, I've created a ViewState class which corresponds to everything that could be shown on the screen, each state of each view. Now just this one class will be used to render the screen data. Second, an Interaction class was introduced. It's a collection of interactions that the user can perform on the screen. Of course I could wrap these two classes into an interface to make it more formal and mandatory to implement by both the Fragment and the ViewModel, but to not complicate this example with additional inheritance, let's just leave it as it is; it's simple now. Ok, so how does our SettingsFragment look now? Nothing really changed, right? It looks almost the same, but now from its perspective performing actions is more like issuing commands. Rendering the view's data is also very similar to what we had before, with the small difference — now there is just one observer and we render the state as a whole. What about SettingsViewModel? A couple of things have changed here: currentViewState — it's a ViewState cache; when submitting a new ViewState we can just change some of its properties, take a look at the notificationsSwitchChanged() method; onInteraction() — a method which handles all of the interactions; and the public methods become private. The rest of the code is pretty much the same.
https://medium.com/@mateuszbudzar/viewstate-and-interactions-an-easy-contract-between-view-and-viewmodel-17cdfbd733c7
['Mateusz Budzar']
2020-11-24 05:01:06.725000+00:00
['Viewmodel', 'Android App Development', 'Clean Code', 'Android', 'Refactoring']
Happy Mother’s Day Fellow Mommiors
Happy Mother’s Day Fellow Mommiors You are the original badass Image by Annalise Batista from Pixabay If it weren’t for a post I saw on Facebook, I would not have known today was Mother’s Day. And yet, when I realized it was this weekend, I was not amused. One. More. Thing. To. Do. I don’t have the energy to get the house ready for ‘my day’ to celebrate ‘me’ because truthfully looking at the pile of clean laundry that has been growing on my coffee table for the past two months doesn’t put me in the mood. Nor does my floor — which I clean daily, and yet, the dirt only comes off on the white socks that traipse all over it. Don’t get me wrong, by all means, please take a day to think of me. But my work — my labor of love, blood sweat and tears, and general superheroness — has been compounded by this parallel universe/twilight zone of an era we’ve all been subject to. In case you didn’t notice, let me remind you all of what me and my sisters have done to keep this boat afloat. While we don’t necessarily want to get out of bed in the morning — we still do. And not because we’re being lazy or emotional. Because quite frankly, the sheer load of things we have to get done to keep everyone else sailing on this ship — is utter BS. Thank you to the school districts for scrambling to find a viable solution to teaching our children during lockdown. With all the brilliant minds at work, I was certain we would at least be able to see our kids’ teachers and classmates in tiny boxes on the screen, waving frantically and sharing quarantine stories. But no, no, no. That was not so. I realize the school board directors and scholars (you know the ones who write the curriculum but have never stepped foot in a classroom) were in a real pickle trying to figure out how to teach our children from a distance. However, online lessons without instruction or teacher support have only one direction to go. That’s right, “Hey mom! Can you help me?” Thanks for yet another thankless task we have to complete, and you somehow get credit for. Boredom and monotony are real issues as well. In fact, live entertainment is limited these days. But you know who brings on the works at least three times a day? Moms that’s who. Introducing: breakfast, lunch and dinner at your service. We prep, perform, and clean our set — only to realize by the time we’re done, it’s time to put on another show. While good partners can take on a meal or two, who’s the one who cleans it all up when the curtain goes down? Who checks behind the seats, sweeps the floors and puts everything back in its place? Oh and who runs the dishwasher two or three times a day? And the washing machine? Who magically vanquishes the mess and the spills? Who bids the boo-boos farewell? Who takes the brunt of the whining, the arguing, and the negotiating with frustrated children who didn’t ask for any of this and do not know how or where to process this new normal? Who offers support for our partners (who are hauling this ship too) as we writhe through exhaustion, blurred by our own tears, fully aware that we are neglecting ourselves (yet again) in the midst of all of this? Us, that’s who. Rock on sisters. You might be doing all of this before you even start your day job. You know, the one that may or may no longer be paying you. It’s very hard to have any sense of ‘me’ in this equation because at the end of the day, when you haven’t had a chance to sit down, write an email, or even wash your face, you just want to cry because this all seems too familiar and so unfair. 
Hey mom, remember how you spent years and years cultivating yourself? You wanted to be your own person, respected, valued, and appreciated for your worth. You may have gone the route of finishing school, getting a job and/or even starting a family. Whatever you did, you did well and worked your ass off. You took care of everything and everyone. But somewhere, in between the to-dos, you found time for you. You knew your worth and you treated yourself. Maybe it was good book, a trip to a coffee shop, a date night, a girls’ night out, or a fabulous bath. You developed this wonderful, hard earned sense of self. You had purpose. You had people that depended on you and you wanted to take care of them at home and elsewhere. You could go to the store, post office, gas station , beach — the good ol’ days, if you will. You weren’t afraid of standing too close to people, you actually liked seeing other humans. You had no clue these were things you might later think could have been taken for granted. You had no reason to believe the mundane would be a refreshing change of pace. You had no idea a coronavirus would make you question your sanity over and over again. And yet, here you are smack in the middle of Mother’s Day 2020. You may have thought this year was going to be great — a year of vision. My wise friend, Brittany, told me perhaps we are all having to focus our gaze on what is truly important. So fellow Mommior, fixer-of-things, good do-er, and lover of her children and family: you are the original badass, my dear. You have always focused your gaze on what is truly important. When the world offers you lemons, you turn it into Zoom. When all you can do is walk around the block, you teach your child how to ride a bike (even though you didn’t sleep well the night before). You make those chocolate chip cookies and play outside. And you let your children have tantrums in front of the neighbors because you know they’re trying and you love them more than your rules. When it’s your turn to have a bad day, you cry in front of your children so they know superheroes cry too. And when you wake up this morning (and possibly get to sleep in) remember the part of yourself you like the most. Gaze into the beautiful eyes of the creatures you helped create. Embrace the paper clip-on earrings and yarn bracelets they made for you. And the homemade cards and fresh flowers picked from the yard just for you. And the breakfast you asked for. And most importantly, the gift of them you cherish most. And know this: yes, you give a lot, way more than you ever thought you could. But you know what? You also get a lot and you mean so much to this world. So sisters, whether you’re sipping caffeine-free weed tea from your garden, mocha-I’m-gonna-choke-ya-latte, or Pinot de Drunkio, here’s to you, fellow Mommior. Cheers to YOU. And Happy Mother’s Day.
https://medium.com/the-partnered-pen/happy-mothers-day-fellow-mommiers-74c5b468b477
['Tami Bulmash']
2020-05-11 04:08:48.727000+00:00
['Lifestyle', 'Motherhood', 'Self', 'Mental Health', 'Parenting']
PurgeCSS Extractor for HAML. We’re using the Tailwind framework for…
We’re using the Tailwind framework for the new DocRaptor marketing website. We’re also continuing to use HAML for our templating system. The Tailwind library is an enormous 3MB so it uses PurgeCSS to remove unused CSS styles from your production CSS output. Unfortunately, the default PurgeCSS extractors aren’t compatible with HAML markup. HAML looks like this: %img.h-10.w-10.rounded-full{ :src => "... PurgeCSS uses a regex to scan your files for words then compares those words to your CSS selectors. The default regex doesn’t consider periods or curly brackets to be word breaks, so PurgeCSS views img.h-10.w-10.rounded-full{ as one big word. The fix is easy. In tailwind.config.js , update the default extractor regex to include brackets and periods as word breaks: purge: { content: [ './app/**/*.haml', './app/**/*.js', ], options: { defaultExtractor: content => content.match(/[^<>"{\.'`\s]*[^<>"{\.'`\s:]/g) || [], } },
https://medium.com/expected-behavior/purgecss-extractors-for-haml-a9dfe23a3504
['Expected Behavior']
2020-12-17 14:37:36.169000+00:00
['Tailwind Css', 'Haml', 'Technology', 'Purgecss', 'Ruby']
Statistical Learning Theory Part 1
Statistical Learning Theory Part 1 Introduction to Statistical Learning Theory Introduction In the 1920s, Fisher described different problems of estimating functions from given data as problems of parameter estimation for specific models and suggested the Maximum Likelihood method for estimating the unknown parameters in all these models. Glivenko and Cantelli proved that the empirical distribution function converges to the actual distribution function, and Kolmogorov found the asymptotically exact rate of this convergence to be exponentially fast and independent of the unknown distribution function. These two events gave birth to two main approaches to statistical inference: parametric and non-parametric inference. Parametric inference aims to create simple statistical methods of inference that can be used for solving real-life problems. Non-parametric inference, on the other hand, aims to find one inductive method for any problem of statistical inference. Parametric inference is based on the assumption that the processes generating the stochastic properties of the data, and the function whose finite set of parameters needs to be estimated, are known to the person handling the data; for this, one adopts the Maximum Likelihood method. Non-parametric inference, on the other hand, assumes that one does not have a priori information about the process or the function to be approximated, and thus a method for approximating the function from the data is necessary. The three beliefs that form the basis of the classical parametric paradigm are: To find a functional dependency from the data, it is possible to find a set of functions, linear in their parameters, that contains a good approximation to the desired function, and the number of free parameters describing this set is small. The statistical law underlying the stochastic component of most real-life problems involving a large number of random components is described by the normal law. The Maximum Likelihood method is a good tool for estimating parameters. However, the parametric inference paradigm has its own shortcomings: Curse of dimensionality: increasing the number of dimensions increases the required amount of computational resources exponentially. The distributions of real-life problems are assumed to be described only by classical statistical distribution functions. The Maximum Likelihood method does not perform best for some simple problems of density estimation. In 1958, after F. Rosenblatt suggested the Perceptron for solving simple learning problems, several different learning machines were suggested. The general induction principle that these machines implemented was the so-called Empirical Risk Minimization (ERM) principle. The ERM principle suggests a decision rule (an indicator function) that minimizes the number of training errors. The development of the ERM theory, however, was driven by two issues: to describe the necessary and sufficient conditions under which the ERM method defines functions that converge to the best possible solution with an increasing number of observations, and to estimate both the probability of error for the function that minimizes the empirical risk on the given set of training examples and how close this probability of error is to the smallest possible for the given set of functions. The resulting theorems described the qualitative model and the generalization ability of the ERM principle, respectively.
To construct the general theory of the ERM method for pattern recognition, a generalization of the Glivenko-Cantelli-Kolmogorov theory was made: For any given set of events, to determine whether the uniform law of large numbers holds. If uniform convergence holds, to find the bounds for the nonasymptotic rate of uniform convergence. These bounds are generalizations of Kolmogorov's bound in two respects: They must be valid for a finite number of observations. They must be valid for any set of events. This theory gave rise to the concept of capacity for a set of events (a set of indicator functions). Of these, the concept of VC dimension is of particular importance. The VC dimension of a set of events characterizes the variability of the set of events. Both the necessary and sufficient conditions of consistency and the rate of convergence of the ERM principle depend on the capacity of the set of events implemented by the learning machine. For any level of confidence, an equivalent form of the bounds for the rate of uniform convergence defines bounds on the probability of the test error, simultaneously for all functions of the learning machine, as a function of the number of training errors, of the VC dimension of the set of functions implemented by the learning machine, and of the number of observations. Achieving this form of bounds has two requirements — to minimize the number of training errors and to use a set of functions with a small VC dimension. These two requirements are contradictory: to minimize the number of training errors, one needs to choose from a wide set of functions, rather than a narrow set with small VC dimension. Hence, finding the best guaranteed solution requires making a compromise between the accuracy of approximation of the training data and the capacity (VC dimension) of the set of functions that is used to minimize the number of errors. This idea of minimizing the test error by making a trade-off between these two factors was formalized by introducing a new induction principle, known as the Structural Risk Minimization principle. Two important points in connection with the capacity concept: Capacity determines both the necessary and sufficient conditions for consistency of learning processes and the rate of convergence of learning processes, and thus reflects intrinsic properties of inductive inference. Naive notions of complexity do not reflect capacity properly. In many real-life problems, the goal is to find the values of an unknown function only at points of interest (the test set). To do this, we first find an estimate of the function from a given set of functions using inductive inference, and then we use that function to evaluate the values of the unknown function at the points of interest. Thus, we solve a problem which is more general than the one we need to solve. But in cases where we have a limited amount of data, we cannot estimate the values of the function at all points of its domain, yet we can estimate the values of the unknown function reasonably well at the given points of interest. This type of inference is called transductive inference.
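For reference, a commonly quoted form of the kind of bound described above (the standard textbook statement from the statistical learning literature, quoted here as a sketch rather than derived in this article) says that, with probability at least $1-\eta$, simultaneously for all functions in a set with VC dimension $h$,

$$ R(\alpha) \;\leq\; R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h\left(\ln\frac{2l}{h} + 1\right) - \ln\frac{\eta}{4}}{l}} $$

where $R(\alpha)$ is the expected (test) risk, $R_{\mathrm{emp}}(\alpha)$ is the empirical (training) risk, and $l$ is the number of observations. The second term grows with the capacity $h$ and shrinks with the number of observations $l$, which is exactly the trade-off that the Structural Risk Minimization principle formalizes.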
https://medium.com/the-owl/statistical-learning-theory-part-1-aa00d522557e
['Siladittya Manna']
2020-07-21 10:54:31.585000+00:00
['Mathematics', 'Machine Learning', 'Statistics']
“Like yeast, the way it leavens the dough.” — Csókási Zsolt (Magyar Telekom) on design maturity
Our publication is a (kind of) personal journal of meet. We tell stories about our work to inspire you. We also tell stories about our mistakes so you don’t make the same ones. We tell our stories but want to hear yours too. So let’s meet.
https://medium.com/meetperspectives/mint-az-%C3%A9leszt%C5%91-ahogy-megkeleszti-a-t%C3%A9szt%C3%A1t-3c96625801e7
['Katinka Boros']
2020-12-30 14:00:26.433000+00:00
['In House Design', 'Consultant', 'Customer Journey Map', 'Service Design', 'Service Designer']
Creating a minimal RabbitMQ client using Go
Photo by Fotis Fotopoulos on Unsplash Go is an open source programming language that makes it easy to build simple, reliable, and efficient software. RabbitMQ is an open-source message-broker software that originally implemented the Advanced Message Queuing Protocol and has since been extended with a plug-in architecture to support Streaming Text Oriented Messaging Protocol, MQ Telemetry Transport, and other protocols. In this tutorial, we are going to create a minimal RabbitMQ client that allows other packages consume or publish messages from or to RabbitMQ. The final source code can be found on my GitHub. Prerequisites I assume that you have installed Go already. If not, check here. already. If not, check here. You need to know a little bit about Go modules. We’re going to use modules as our dependency management solution. You need to be somewhat familiar with how RabbitMQ works. We’re not going to cover every aspect and types of consuming and publishing messages. First steps First you need to create a directory with the name of your module. I’m going to name it rmq . Then initialize go modules using go mod init command. Now you have to get the main dependency using go get github.com/streadway/amqp. Now create a main.go file in the root directory of your module. This file is where everything is going to start (and probably end). Your main file should be like this for start: package rmq Now let’s create a custom type named RabbitClient . We’re going to add consumer and publisher channels and connection separately in this client. type RabbitClient struct { sendConn *amqp.Connection recConn *amqp.Connection sendChan *amqp.Channel recChan *amqp.Channel } Connect and create channels In this part we’re going to add two private methods for our custom type that tries to connect to RabbitMQ and then creates a channel based on connection type (consumer|publisher) and reconnect (try to reconnect if it’s already exists). The connect method receives two boolean args to know about connection type and reconnect mode. We assume that you already have the information about RabbitMQ service (Username, Password, Host and Port) in a custom type named config . // Create a connection to rabbitmq func (rcl *RabbitClient) connect(isRec, reconnect bool) (*amqp.Connection, error) { if reconnect { if isRec { rcl.recConn = nil } else { rcl.sendConn = nil } } if isRec && rcl.recConn != nil { return rcl.recConn, nil } else if !isRec && rcl.sendConn != nil { return rcl.sendConn, nil } var c string if config.Username == "" { c = fmt.Sprintf("amqp://%s:%s/", config.Host, config.Port) } else { c = fmt.Sprintf("amqp://%s:%s@%s:%s/", config.Username, config.Password, config.Host, config.Port) } conn, err := amqp.Dial(c) if err != nil { log.Printf("\r --- could not create a conection ---\r ") time.Sleep(1 * time.Second) return nil, err } if isRec { rcl.recConn = conn return rcl.recConn, nil } else { rcl.sendConn = conn return rcl.sendConn, nil } } Same as the connect method, the channel method receives two boolean args to know about connection type and reconnect mode. This method tries forever to connect to RabbitMQ service and then create a channel based on the connection type. 
func (rcl *RabbitClient) channel(isRec, recreate bool) (*amqp.Channel, error) { if recreate { if isRec { rcl.recChan = nil } else { rcl.sendChan = nil } } if isRec && rcl.recConn == nil { rcl.recChan = nil } if !isRec && rcl.sendConn == nil { rcl.recChan = nil } if isRec && rcl.recChan != nil { return rcl.recChan, nil } else if !isRec && rcl.sendChan != nil { return rcl.sendChan, nil } for { _, err := rcl.connect(isRec, recreate) if err == nil { break } } var err error if isRec { rcl.recChan, err = rcl.recConn.Channel() } else { rcl.sendChan, err = rcl.sendConn.Channel() } if err != nil { log.Println("--- could not create channel ---") time.Sleep(1 * time.Second) return nil, err } if isRec { return rcl.recChan, err } else { return rcl.sendChan, err } } Now that we are able to connect and create channels, let’s start to consume and publish messages. We’re going to declare lazy-mode queues that are durable in both consume and publish modes. You can change it to whatever that fits your problem. Let’s consume something The Consume method receives two args, one is the queue’s name and the other one is the function that handles the consumed message’s body. We’re going to ack|nack based on the result of this function. // Consume based on name of the queue func (rcl *RabbitClient) Consume(n string, f func(interface{}) error) { for { for { _, err := rcl.channel(true, true) if err == nil { break } } log.Printf("--- connected to consume '%s' ---\r ", n) q, err := rcl.recChan.QueueDeclare( n, true, false, false, false, amqp.Table{"x-queue-mode": "lazy"}, ) if err != nil { log.Println("--- failed to declare a queue, trying to reconnect ---") continue } connClose := rcl.recConn.NotifyClose(make(chan *amqp.Error)) connBlocked := rcl.recConn.NotifyBlocked(make(chan amqp.Blocking)) chClose := rcl.recChan.NotifyClose(make(chan *amqp.Error)) m, err := rcl.recChan.Consume( q.Name, uuid.NewV4().String(), false, false, false, false, nil, ) if err != nil { log.Println("--- failed to consume from queue, trying again ---") continue } shouldBreak := false for { if shouldBreak { break } select { case _ = <-connBlocked: log.Println("--- connection blocked ---") shouldBreak = true break case err = <-connClose: log.Println("--- connection closed ---") shouldBreak = true break case err = <-chClose: log.Println("--- channel closed ---") shouldBreak = true break case d := <-m: err := f(d.Body) if err != nil { _ = d.Ack(false) break } _ = d.Ack(true) } } } } The Consume method handles NotifyClose , NotifyBlocked and NotifyClose from the connection and channel and try to reconnect or recreate them if needed. Let’s publish something The Publish method receives three args, one is the queue’s name and the other one is the array of bytes and contains the message’s body. // Publish an array of bytes to a queue func (rcl *RabbitClient) Publish(n string, b []byte) { r := false for { for { _, err := rcl.channel(false, r) if err == nil { break } } q, err := rcl.sendChan.QueueDeclare( n, true, false, false, false, amqp.Table{"x-queue-mode": "lazy"}, ) if err != nil { log.Println("--- failed to declare a queue, trying to resend ---") r = true continue } err = rcl.sendChan.Publish( "", q.Name, false, false, amqp.Publishing{ MessageId: uuid.NewV4().String(), DeliveryMode: amqp.Persistent, ContentType: "text/plain", Body: b, }) if err != nil { log.Println("--- failed to publish to queue, trying to resend ---") r = true continue } break } } This method handles reconnect or recreation of channel if needed. 
Usage Create an instance of the RabbitClient type and use the Consume or Publish methods.
var rc rmq.RabbitClient
rc.Consume("test-queue", funcName)
rc.Publish("test-queue", mBody)
What I didn’t cover
https://levelup.gitconnected.com/creating-a-minimal-rabbitmq-client-using-go-cbcec1470950
['Mehrdad Esmaeilpour']
2020-12-27 16:04:28.649000+00:00
['Golang Development', 'Development', 'Go', 'Golang', 'Rabbitmq']
Deployment maturity levels
As software engineers, we love to develop features that make our customers' lives easier, but until those features are available to end users, they add no value to the business. Therefore the deployment process is an essential part of succeeding as a development team. With six years of experience in Continuous Delivery, this is how I would classify the maturity levels for deployments: Level 0 – YOLO (You only live once) Diagram of deployments direct from the developer machine The deployment process is quite simple; the developer builds the code on his machine and copies and pastes it onto the production server. That can be quite fast, and it's a great way to prove a concept, but in the long run it's neither repeatable nor auditable. It is also fairly risky; no tests are run after the deployment, so the developer has to test manually. And if the application broke, pray that he can restore the production environment. Another aspect of this level with production systems is infrequent deployments, once or twice a month. Deployments require coordination across multiple teams, increasing the time from when a feature is requested until it is available to users. That is what often ends up happening to monoliths, because multiple teams maintain one codebase. Let's speak about deploying once a month. What is safer, to deploy once a month or to deploy every week? Let's say every week the development team can develop a single feature. When we deploy those features all at once, many changes happen simultaneously, increasing risk exponentially, and once things go wrong, which feature caused the problem? Sometimes it isn't even a single feature issue, but a combination of multiple changes that were not tested together. Big-bang deploys are often hard to roll back because they touch multiple applications and databases. Imagine breaking production on a Saturday, without an easy rollback; you will have a hard time. It is counter-intuitive, but deploying small changes often divides the risk, taking it little by little in a much more controllable manner. Another problem with batching up releases is that the development team has to context switch. Once they finish a feature, it still isn't released, and sometimes problems are found during deployment or testing, impacting the current work they are doing. And as developers, our memory gets tested when we are trying to remember the purpose of a feature developed a month ago. Level 1 — Continuous delivery Continuous delivery diagram with a commit, build and deploy-the-artefacts step. Automated builds and deployments, either through PowerShell/Bash scripts or with CI (Continuous Integration)/CD (Continuous Delivery) tools such as Octopus Deploy, CircleCI or Azure DevOps. We can go to any version of the application with the click of a button. (That's the definition of what we call continuous delivery.) The transition from YOLO to continuous delivery: When transitioning from manual deploys, I guarantee you there are steps in the deployment process that only the person deploying knows. A real-world example This is fine meme In 2015, we were automating a web API deployment with a vast customer base, so we waited until dawn to start the first automated deploy. Before doing the automated deployment we tested in the test environment, and everything looked promising. We did the first deploy at 2 AM, I checked the individual machines through the browser, and the API worked. I was happy and confident. At 4 am, we discovered we were not receiving any traffic.
Amid confusion and desperation, we contacted our ops team, who were speaking to the hosting company. After two long, agonising hours, someone found the problem: the load balancer couldn't see our web API machines. The load balancer would ping a specific URL such as http://api.com/index.html and expect a 200 OK; if not, it would remove the machine from the load balancer pool. Load balancer not forwarding traffic because the APIs didn't expose a specific URL The reason was that the person deploying always remembered to copy and paste that index.html file on every deployment, so again: manual deploys have steps only the person that deploys knows. Of course, the next day I added the file to source control so that future deployments didn't have the problem. Level 2 – Zero Downtime The previous models, even though automated, had some disruption to the service. Which leads our users to see something like this: When our deployments were scarce, this worked fine, maybe done once or twice a month during the weekends. But in the digital transformation world, our market is way more competitive, and releasing new features fast and often became a competitive advantage. Can you imagine working at Amazon.com and telling senior management we will deploy three times a day with 5-minute maintenance windows? That would cost the business millions of dollars quickly. And remember, you are part of one team; Amazon has hundreds of teams, so imagine the disruption if each one wants to deploy daily. That's why we have this concept of zero downtime, so we can roll out changes without causing disruption to the end users. We can achieve this in multiple ways; two common ones are: Blue/Green deployment Blue/Green deployment diagram In Blue/Green deployment, your production environment has two instances of the application. One is live, with users accessing it. The other, "staging", is a passive copy used for deployments and tests. After we deploy the new version to staging, we can warm up the application to avoid downtime and run smoke tests to validate that the application is healthy. That's an excellent safety net; this way you won't break the entire application because of a bad deployment. Once we are happy with the blue slot, we can swap it with the green one and start sending traffic. We can apply the swap at the load balancer level, for example. Canary/Rolling deployment Canary deployments can be useful when you have multiple machines. You deploy to them incrementally, in a rolling manner. So you don't take the application offline, and you minimise risk by deploying to only a subset of the traffic at a time. Before going to the next batch of machines, we can also run smoke tests. The canary deployment term comes from mining, a dangerous profession; in some caves there are poisonous gases that could kill the miners. So they would take a canary (a little bird) with them and leave the bird in the cave for a while; if the bird was still all right, it would be safe for them. We are using a similar approach by deploying to a subset first.
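To make the canary idea concrete, here is a minimal, illustrative Python sketch of a rolling deployment loop. The host list, batch size, health-check path, and the deploy/rollback helpers are placeholders standing in for whatever tooling you actually use; the point is only the shape of the loop: deploy to a small batch, run smoke tests, and only then continue (or roll everything back).

```python
import time
import urllib.request

# Placeholder values: substitute your own hosts, batch size and health-check path.
MACHINES = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]
BATCH_SIZE = 2

def smoke_test(host):
    """Hit a health URL and expect a 200 OK, the lesson from the index.html story above."""
    try:
        with urllib.request.urlopen(f"http://{host}/index.html", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def deploy_to(host, version):
    """Placeholder: call your real deployment tooling (scripts or CD server) here."""
    print(f"deploying {version} to {host}")

def rollback(host, version):
    """Placeholder: redeploy the previous known-good version."""
    print(f"rolling back {host} to {version}")

def canary_deploy(new_version, previous_version):
    deployed = []
    for i in range(0, len(MACHINES), BATCH_SIZE):
        batch = MACHINES[i:i + BATCH_SIZE]
        for host in batch:
            deploy_to(host, new_version)
            deployed.append(host)
        time.sleep(30)  # give the canary some time in the cave
        if not all(smoke_test(h) for h in batch):
            for host in deployed:
                rollback(host, previous_version)
            raise RuntimeError(f"smoke tests failed on {batch}; rolled back")
    print(f"{new_version} is now live on all machines")
```

The same skeleton covers blue/green as well: replace the per-batch loop with a deploy to the passive slot, a smoke test, and a single swap at the load balancer.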
Level 3 – Continuous deployment Photo by Oscar Sutton on Unsplash At this point, I think it is useful to make a clear distinction: Continuous Delivery — the ability to go to any application version at any time, as stated in Level 1. Continuous Deployment — all code changes get automatically deployed. All right, you have a mature project. You have high code coverage, and you trust your test suites to pick up issues during the build/release pipeline. Every code change goes all the way into production without anyone's approval. A small change takes less than 30 minutes, maybe even 10 minutes, to go to production. Well done! Just ensure you keep up the quality, and I am sure you will have delighted customers. Author's note: Continuous Deployment is not a must, and maybe I wouldn't even recommend it to every client. Bonus Although these aren't continuous-delivery-specific subjects, if you want to do a microservices architecture with continuous delivery, you will need to up your game in these areas: Logging and Metrics You aren't Daredevil: if you can't see it, you can't fix it. At any of these levels, you have got to have basic logging. Preferably not logging to a file on a virtual machine that you need to connect to remotely to check the logs. Preferably you will have a tool to centralise logs across applications, and even better if you have a distributed tracing system, where we can see all the steps of a user request across the downstream systems. App Insights example of end-to-end request tracing Alerting In a monolith architecture, you have a single service to keep healthy. In a microservices architecture, you have dozens if not hundreds. You will need a way to view, and be alerted about, the system's health. If the application goes down, I am sure you would prefer to receive an SMS from your alerting system than a call from the CTO asking what is going on. Automated tests Not only for the smoke tests: for us to start deploying often, we also need short testing times. It wouldn't be practical to have a manual testing process that takes weeks. Testing needs to be automated, run quickly, and provide the safety net for changes. I speak about my current testing strategy here. Conclusion Making deployments smaller, more frequent and faster reduces the risk of each deployment while enabling development teams to deliver. Automation also makes it more secure, enabling you to audit your deployments, so you know who pushed what change and when. Tracking teams' progress I like to use an Excel sheet to track my teams' maturity levels, so we can prioritise which project needs improvements depending on how often we apply changes and how important the project is for the customer. Excel matrix for the continuous delivery maturity levels References:
https://itnext.io/deployment-maturity-levels-feab55c20d04
['Raphael Yoshiga']
2021-01-08 16:27:24.556000+00:00
['Software Development', 'DevOps', 'Programming', 'Continuous Delivery', 'Software Engineering']
Why You Need a List?
Many people underestimate the power of a simple list, but it really helps with relieving your stress, managing your time, and coping with a packed schedule. In today's article, we are going to share with you how a list can work magic as one of the best productivity tools. Types of Lists Making a list is effortless. You simply write down the things you'd like to complete or achieve, but this small action can bring big changes and results to your life. Different types of lists serve different purposes, so let's look at some common list types. People have limited time and brain energy to process the many things we need to do in our daily lives. Writing down today's tasks on a to-do list (in priority order) early in the morning takes less than two minutes but helps you manage your time efficiently throughout the day. With this list in hand, you can complete and check off things in order without being anxious about what to do next and what's left unfinished. You only live once, so you need a bucket list — a list of the best things to do before you die! What kind of person do you want to become? What life do you want to live? What would you like to achieve? Think about your life's bucket list and never regret not doing something. You need a happy list whenever you feel anxious or depressed. A list of things that make you happy may lift you out of a depressing swamp and encourage you to keep living your life with strength. As mentioned in the previous How to Spend Money Wisely blog post, we introduced the wish list — a list of items you want before actually purchasing them. It will help you hold off impulsive spending desires and cut down on unnecessary spending. A checklist is the key to preparation. Whatever you're preparing for, packing travel items or an online interview, a checklist is great for checking every detail and making sure you're fully prepared. You will also feel joy whenever you finish an item and check it off your list! Tips for List-Making SMART Principle SMART stands for Specific, Measurable, Attainable, Relevant, and Time-bound. This principle is one of the most widely used tools to help people plan and achieve goals, and it serves as a guide to list-making as well, especially for lists with specific goals such as to-do lists and checklists. PDCA Cycle The Plan-Do-Check-Act cycle (PDCA) is another great management and planning tool for delivering change. There is no end to this cycle, so PDCA needs to be repeated again and again for continuous improvement, which is key for us to constantly grow and achieve higher goals through lists. Thank you for reading, and we hope that through this article you become familiar with the various types of lists and master the power of simple lists to better manage your time and bring productivity to your life. Make lists with XMind, starting today! Take care, and until next time…
https://medium.com/xmindofficial/why-you-need-a-list-cb2fa108804c
[]
2020-12-10 03:04:12.607000+00:00
['Xmind', 'Time Management', 'Schedule', 'Lists', 'Stress Management']
An Introduction to RiceQuant. The DeFi industry has experienced rapid…
NOTE: We rebranded recently. The new brand is RiceQuant and the old brand is ifarm.finance. The DeFi industry has experienced rapid evolution in 2020. We have seen Governance Tokens (GTs) burst out this year, and high-quality projects such as ResetDAO and Powerpool have also conducted profound experiments around meta-governance. However, ifarm.finance is not about solving core issues related to meta-governance. We observe that the capital utilization of governance tokens in the market is very low, and the degree of capitalization is also very shallow. Most people just deposit governance tokens in their wallets. There are some notable phenomena: holders who prefer the investment attributes of governance tokens lack derivative ways to earn income from them, while holders who prefer the governance attributes have lost their enthusiasm for community governance because they are trapped by capital. ifarm.finance is a processing farm for the capitalization of governance tokens. It hopes to deepen the degree of capitalization of governance tokens in the DeFi industry through its hierarchical lending model (a lending model unique to ifarm.finance) and the common ownership self-assessed tax (COST) mechanism described in "Radical Markets", and to let governance power really flow into the hands of those who want to truly participate in project governance. About the meaning of the project's name: ifarm.finance — a governance token capitalization processing farm. In the DeFi industry, Yield Farming has almost become the three meals a day of DeFi workers, and we invariably joke about ourselves as farmers. Therefore, ifarm hopes to become a popular Yield Farming farm among DeFi farmers. As for capitalization processing, we can explain our ideas from the perspective of industrial capital in the DeFi industry. In this specific context, industrial capital can still describe the deepening process of capitalization through its three functional forms. The functional form of currency capitalization of GT (Governance Token) Existing mainstream DeFi projects mainly use governance tokens as currency, to pay for operating expenses and purchase labor from community members. In ifarm.finance, governance tokens can also be invested in Seed-Lending (the first level in the hierarchical lending model), and completing a GTAS (Governance Token Attribute Split, a collateral-loan-type order) order in Seed-Lending will generate iMT (Governance Token Money Attribute Token) and iPT (Governance Token Governance-Power Attribute Token). The iMT, which is one of the two, can be invested in ifarm.finance's Money Market (the second level in the hierarchical lending model). In this process, we deepen the degree of currency capitalization of the governance token. The functional form of production capitalization of GT DeFi projects revolve around their core business and carry the value of community members' surplus labor by issuing native Governance Tokens. We use this as a contextual prerequisite for the production capitalization of Governance Tokens. We usually purchase or collateralize our BTC, ETH, USDT and other circulating currencies to hold various Governance Tokens, supporting the long-term governance and operation of DeFi DAO projects in order to earn forward income. This long-term process can be regarded as an asset purchase. At the same time, we also believe that participation in governance is an intangible asset for studying the future development of the project, and the value of such assets is carried by governance tokens.
Although most community members will not actually participate in project development tasks, in the process of participating in governance all stakeholders contribute actual labor to promote the sound development of the project. The labor here mainly refers to research. Research is a creative and planned investigation carried out to gain scientific and technological knowledge and understanding. For example, in the community governance of a certain DAO project, community members will spontaneously study the knowledge of the field, study competing products or their upstream and downstream industries, and then transform that work into actual candidate outcomes in the form of proposals. In existing DAOs, Governance Tokens are mainly used to obtain future income or amplify future income (such as $CRV) by serving as collateral, as expenses, or as production investments, which reduces the liquidity of the Governance Tokens. In the capitalization processing of ifarm.finance, GT holders can earn returns without further reducing liquidity or reducing the value of intangible assets such as research and contribution to the community as production activities. And it is foreseeable that these returns will be higher than those obtained simply by staking in the governance system of the original project. We can even imagine that after you have participated in ifarm.finance, the future income generated by contributing research and labor to the original project will be higher, which will be reflected in the future market price of the GT. More specific answers are also available from ifarm.finance's hierarchical lending model. As mentioned above, when you collateralize a GT (e.g., $UNI) to create a GTAS order in Seed-Lending, you will receive iMT and iPT. iMT can be further invested in the ifarm.finance Money Market to earn income, and can also be sold at will on DEXes such as Uniswap and Balancer. iPT can help you better participate in the research and production of DAO communities, because you may hold more governance power with a small amount of iPT, without being trapped by capital. At the same time, because Seed-Lending can split the Governance-Power attribute away from the GT, it can attract a large number of intelligent, creative people who are enthusiastic about research and participation in governance. You will also see more information and more frequent updates about governance, and managing this information becomes more convenient (especially in an open system, where the various protocols are intertwined and complicated and a black swan in one associated protocol produces a chain effect; in this case, investors or governors will no longer just focus on a single protocol or project, so efficient management of information and governance clusters is an effective way to improve governance). All of the above becomes the intangible added value of the GTs you hold. So, in this process, we deepen the degree of production capitalization of the GT. The functional form of commodity capitalization of GT In the DeFi industry, commodities take the form of services provided to users, with charges for the services provided. However, the profits that DAO projects earn from providing services externally are often injected back into the token's value by buying back GTs. Of course, this is just a simple example.
All in all, the profit and value of the services provided are often reflected in the market price of GTs, although this is not always the case. In a DAO project, the quality of its goods and services mainly depends on the development of the project itself, which is closely related to the quality of the community, its governance capacity, and the intensity of production investment. This is exactly how the two functional forms described above affect the functional form of commodity capitalization of GT. Therefore, we also believe that the entire processing process will deepen the degree of commodity capitalization of GT (mainly reflected in the overall business of the project). How does it work? ifarm.finance Hierarchical Lending Model ifarm Hierarchical Lending is divided into Seed-Lending (the first-level lending platform, with a fixed rate) and the Money Market (the second-level lending platform, with a floating rate). Seed-Lending Stake GTs (e.g., $SNX, $SUSHI) or LP tokens to farm IFA (ifarm.finance's native governance token). You can further choose to lock the GTs or LPs to create a GTAS order, and iMTs and iPTs will then be received in your wallet. This is actually a loan transaction, and the borrowed iMT can be supplied to the ifarm Money Market, or sold on Bounce, Uniswap, Balancer, etc. Early in the product launch, we will issue the iToken series, which also follow the rules we described, except that they cannot be attribute-split. It's worth explaining that iTokens enjoy privileges in ifarm's Hierarchical Lending: for example, in the Money Market, supplying iTokens earns higher interest rates and borrowing iTokens incurs lower interest rates. Money Market for GT The money market is based on the supply and demand of various GT assets, and a floating-interest-rate GT asset pool is derived through algorithms. The money market is divided into a supply market and a lending market. The market will support the supply and lending of assets such as iMT, iToken, and other mainstream assets like DAI, ETH, wBTC, etc., and more assets will continue to be supported. Governance Power applied in COST (Common Ownership Self-Assessed Tax) ifarm.finance realizes the separation of the GT money attribute and the GT Governance-Power attribute in Seed-Lending. Through Seed-Lending we split the GT into two fungible tokens, iMT and iPT, which represent the money attribute and the governance-power attribute respectively. Many items have both public value and private value, and the same is true for GTs. Some people hold GTs as monetary capital expecting appreciation, while others prefer to own more governance power to participate in project governance and expect long-term appreciation. Seed-Lending with GTAS is a new paradigm in GT design. It brings together familiar concepts into a never-before-seen protocol: P(m*GT) = P(x*iPT) + P(y*iMT/R) GT: governance token, e.g. UNI, YFI. iMT: GT money attribute token. iPT: GT governance-power attribute token. P: oracle price. R: collateral ratio. m, x, y: token amounts. Since GTs have different value to different holders, ifarm.finance lets farmers creating GTAS orders in Seed-Lending freely choose the valuation model they approve of, according to the definition above. Split GTs into iMTs and iPTs After a farmer splits a GT into iMT and iPT by creating a GTAS order, the iMT can be freely arbitraged in the market to earn greater benefits. At the same time, the iPT can also circulate freely in the market.
iPT is a measure of Governance Power, and it represents the governance-power capital of investors who are keen to participate in governance. C = P * H(iPT) / S(iPT) C: delegated governance power. P: pooled governance power. H(iPT): the amount of iPT you hold. S(iPT): the current total supply of iPT. When a project such as UNI needs to vote, you will have governance power C(UNI) = P(UNI) * H(iPT-UNI) / S(iPT-UNI) from ifarm.finance. You can then apply for an amount C of UNI from the GT Staking Pool to delegate your participation in Uniswap governance. Apply delegation from the Staking Pool for governance Negative Rebase in iPTs We hope that Governance Power will be in the hands of those who are keen to participate in community governance. Therefore, the iPT split off in ifarm.finance has a negative rebase mechanism, which follows a very similar idea to the COST system in Radical Markets. Different commodities have different private values to different people. If someone has a preference for a certain attribute, their private value is high, which means the valuation model changes. When you want to keep a commodity you like for yourself, you will quote a higher price in the market to prevent it from being easily bought by others, or, when the commodity is at a lower market price, you will choose not to sell. However, in order to avoid monopolies and malicious price hikes, we encourage the free circulation of Governance Power in the market and tax Governance Power through a rebase mechanism, with a daily rebase ratio of 0.08%. Therefore, people who do not want Governance Power will sell iPTs to the market as soon as possible, or even choose a higher weight of iMT when creating a GTAS order in Seed-Lending, while those who do want it are willing to pay high prices and pay taxes to obtain Governance Power. Tax Management In the Seed-Lending of ifarm.finance, paying taxes through the rebase mechanism directly changes the total supply of iPT. When a user who owns a GTAS order wants to redeem, he will need to buy a certain amount of iPT from the market so that the iPT balance in his wallet matches the amount of iPT in the original GTAS order; only then can he successfully redeem the GT that was initially collateralized. In addition, if his GTAS order contains a certain amount of iMT, he also needs to pay a daily interest rate of 0.05%. To ensure the market can always supply enough iPT to repay GTAS orders, and to prevent malicious pumping of the iPT market price, we have set up a tax administration, which mints iPTs every 24 hours. The minted iPTs are equal to the negative rebase. Users can purchase iPTs from the tax administration to make up the balance needed to redeem their GT. Redeem GTs Conclusion ifarm.finance is a processing farm for GT capitalization. It mainly solves two problems. Governance tokens currently offer only a single way to earn income from their money attribute: we provide diversified derivative investment methods for GTs. Governance is trapped by capital: we let governance power flow to users who are genuinely keen to participate in the governance of community projects. Website: ricequant.fi Twitter: https://twitter.com/RiceQuant Discord: discord.gg/PuKuxtW
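As a quick illustration of the delegated-governance-power formula and the daily negative rebase described above, here is a small sketch with made-up numbers; the pool size, iPT supply, and holdings below are hypothetical and are not taken from the RiceQuant/ifarm.finance protocol.

```python
# Illustrative numbers only -- a sketch of C = P * H(iPT) / S(iPT) and the
# 0.08% daily negative rebase, not values from the actual protocol.

def delegated_power(pooled_power, ipt_held, ipt_total_supply):
    """C = P * H(iPT) / S(iPT)."""
    return pooled_power * ipt_held / ipt_total_supply

# Suppose the UNI staking pool holds 1,000,000 UNI of pooled governance power,
# the current iPT-UNI supply is 500,000, and you hold 10,000 iPT-UNI.
c_uni = delegated_power(pooled_power=1_000_000, ipt_held=10_000, ipt_total_supply=500_000)
print(f"Delegable UNI voting power: {c_uni:,.0f}")  # 20,000 UNI

# The daily negative rebase (0.08% per day) shrinks a passive holder's iPT
# balance over time, so simply sitting on governance power has a cost.
balance = 10_000.0
for _ in range(30):
    balance *= (1 - 0.0008)
print(f"iPT balance after 30 days of negative rebase: {balance:,.2f}")
```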
https://medium.com/@ricequant/an-introduction-to-ifarm-finance-da85de466db
[]
2021-02-25 07:12:48.388000+00:00
['Governance', 'Defi', 'Uniswap', 'Blockchain', 'Dao']
Stress can ruin your life badly
Types of stress There are two types of stress, the good kind and the bad kind. Both result in a fight-or-flight response that sends hormonal signals around your body, causing an increase in cortisol and adrenaline. This leads to an increase in heart rate and blood pressure and, in turn, changes to almost every bodily system, including the immune system, the digestive system, and the brain. Cortisol "can be beneficial in some situations, such as when it motivates you to finish your work on time," says Dr. Patricia Celan. A 2013 animal study found that a short-term, moderate level of stress improved memory and increased alertness and performance in rats. But long-term stress, also known as chronic stress, doesn't have the same motivational effects. "Cortisol gets toxic in high doses over a chronic period," Celan explains, adding that this is what leads to serious health issues. If not stress, then what? Stress itself can't kill you. But "over time, it can cause damage that leads to premature death," Celan says. "That's why taking control over your stress is important."
https://medium.com/illumination/stress-can-kill-you-a-fear-tactic-ba8af038989c
['Fahim Chughtai']
2020-12-25 14:39:48.742000+00:00
['Stress Management', 'Stress Management Tips', 'Mental Health', 'Stress Relief', 'Stress']
Satoshi’s Identity Doesn’t Matter. Anonymity Does.
Journalists like to claim that they have uncovered the real-world identity of Bitcoin’s inventor, Satoshi Nakamoto, and later retract their claim: In 2011, the New York Times declared that Michael Clear was Satoshi. In 2015, the New York Times decided to guess again, this time saying it was Nick Szabo. Here’s The Economist in 2016, seeming pretty sure it is Craig Wright. At a magic show, there are two kinds of people in the audience: those who enjoy the mystification, and those who try to solve it. These journalists are obviously in the second category. But to the first kind of person, Satoshi is another anonymous node in the network, and that is how it should be. (Image from Giphy) Satoshi Nakamoto’s story is really is one of the best parts of cypherpunk history. It’s the creation myth of blockchain. The goal of the movement is to build systems that function without a central authority. Blockchain’s a data structure without a leader node, and its history is a story without an identifiable leader or founder. Another core value of the system is anonymity. Part of the appeal of cryptocurrencies is anonymous payments. Why shouldn’t the same value be applied to creating and publishing code anonymously? But anonymity is under threat. Bitcoin transactions are much less anonymous than once thought. Anyone can see bitcoins flow through the public ledger, and analysis of this data has been used to unmask Bitcoin users. There are even companies like Chainalysis dedicated to identifying Bitcoin users. Beam is taking up a fight for anonymity that began in a suitably anonymous way. In 2016, someone logged into the bitcoin-wizards IRC channel and dropped one message: hi, i have an idea for improving privacy in bitcoin. my friend who knows technology says this channel would have interest http://5pdcbgndmprm4wud.onion/mimblewimble.txt The account never posted another message before or after. The link led to a text dated July 19, signed ‘Tom Elvis Jedusor,’ which laid out the basic concept for Mimblewimble. In the two years since, it has been reviewed by developers, academics, and cryptographers… and it works. It is recognised as one of the most elegant proposals for improving cryptocurrency, offering guaranteed privacy, and improving efficiency in the same stroke. No doubt when you read that last paragraph, you thought, “Hey! ‘Tom Elvis Jedusor’ is an anagram of ‘Je suis Voldemort,’ which is French for ‘I am Voldemort,’ the villain from Harry Potter.” Well spotted. The name ‘Mimblewimble’ is another Harry Potter reference. In the books, Mimblewimble is a spell that stops people from spilling secrets (a sort of magical NDA) and Jedusor’s paper said: I call my creation Mimblewimble because it is used to prevent the blockchain from talking about all user’s information Other Harry Potter characters took Jedusor’s proposal and ran with it. Further details of Mimblewimble were contributed by Moaning Myrtle, Séamus Finnigan, and Ignotus Peverell — and we have no idea who any of these people are. Beam is proud to inherit this tradition. Our team believes in the right to privacy and anonymity — whether for developers, or for users sending transactions. We are building for anonymity, and building on anonymity. Anonymity takes work. It is not easy in an era when our privacy rights are under attack from every direction — phone networks track our location, personal assistants have always-on mics, and our personal finances are treated as an asset for corporate analysis. 
(It was recently revealed that Paypal is sharing your data with over 600 companies.) Our research points to Mimblewimble as the best technical solution to financial privacy, and we have an extremely capable team of engineers working on building a full-featured implementation.
https://medium.com/beam-mw/satoshis-identity-doesn-t-matter-anonymity-does-32a06d7f7a6b
["Conor O'Higgins"]
2018-12-10 12:27:09.369000+00:00
['Bitcoin', 'Blockchain', 'Blockchain Technology', 'Ethereum', 'Finance']
Age Comes, But So Does Wisdom
Age Comes, But So Does Wisdom Learning to love yourself can be a painful process, but worth it Image by author I don’t want to admit it, but this year has aged me. At the same time, miraculously, I feel younger inside. But my DNA ain’t havin’ it. Wrinkles have set up camp. Skin finally got gravity’s memo, sent long ago. Belly fat thinks it’s going to move in for the long haul (sorry, belly fat — you’re not). I’ve got to be honest, aside from any life challenges that have come up in the last few years, the 40s have been the best decade so far. I never imagined I’d feel so comfortable in my own skin. Pride in myself has become my new way, my new normal. Whereas before, I thought fairly little of who I was and what I was capable of. Now I know I can accomplish anything… It has been a methodical 20 years; full of imagining my better self, striving for better, finding new ways to achieve my goals, and learning how to believe in myself. Lots and lots of self-talk… I’ve gotten here, to this place of pride and self-love, not by walking the comfortable path, but through effort, sweat, cuts, and bruises. Through many phases where I just had to break down and learn how to get up newer and better than before. This sort of growth makes you stronger. No matter how many times you must break, each breakage brings a more valuable solution, a deeper knowing of self, a more possible YOU. Don’t lie to yourself and say that becoming your dream-self will be easy. Tell the truth: sometimes you have to come to the brink of your understanding to know there is a field out beyond your imagination…one you want to go to. One where you can create anything you desire. One where you can become someone you love, someone you’re proud of, beyond reckoning. I’ve seen that field of possibility many times. I’ve walked it. I’m walking it, even now, on both good days and bad. I believe we can all get here. I believe that no matter how much we fail, we will still get up, in the end. I believe that even as time ravages us, the embers within grow hotter and more capable of making things. You are the future you hope for. You are the light you’ve been praying for. You are the strongest self you never thought you’d be. Now, beautiful, capable human…get up from the floor. Get up and shake off those old excuses and become the fire. Become the person you know, somewhere within you, that you can be. I believe in you. Melissa Raise is the owner of Raise the Bar Wellness. She is a Certified Personal Trainer, Certified Health Coach, and Licensed Massage Therapist with 18 years of experience helping people become more comfortable and successful.
https://medium.com/illumination-curated/age-comes-but-so-does-wisdom-d46e0ffff134
['Melissa Raise']
2020-12-01 10:02:44.217000+00:00
['Illumination Curated', 'Self Love', 'Midlife', 'Life']
5 Brilliant Books to Read in the Next 6 Months
It’s (finally) the end of the year. You’ve spent a lot of time inside, and you feel like you’ve watched all of YouTube. You’ve listened to every podcast there is. You’re waiting for an award from Netflix for ‘Most Time Spent Browsing’ yet you still haven’t made a dent in that bookshelf you spent a full day building at the start of lockdown. Or maybe you have done nothing but read for the past six months and are looking for some new books to get stuck into over the next six. No matter which of these people you are, or if you are someone in between, I have a book for any taste below. Some of these books I read myself over lockdown, and others I have fond memories of reading while lying by the pool on a Greek island when travelling wasn’t borderline illegal. Hopefully you’ll find one of them to be suited to your own taste, or maybe one that makes you want to find a new taste altogether. Let’s get stuck in.
https://medium.com/books-are-our-superpower/5-brilliant-books-to-read-in-the-next-6-months-bdd92c551d1e
['Christopher Hanna']
2020-12-24 09:18:03.084000+00:00
['Book Recommendations', 'Readinglist', 'Book Review', 'Reading', 'Books']
You only need an income, Excel, and ambition! 12 small steps are included!
Here are the detailed steps: 1.) Decide how much money you need to save. 2.) Set a deadline. 3.) Count the working days remaining until your deadline. Don't include days without an income. 4.) Divide both the number of saving days and the total desired amount in two. My plan: I need to save $280, so 50% of the amount is $140. There are 22 working days left until the deadline, so the first half of my table will have 11 rows. 5.) Set a small value for the first day. For my example, I have chosen $4. 6.) Increase the amount by a constant value daily. On the last date, you will reach your peak value. I use the Excel formula =A1+2 for the A2 cell. A1 is the cell for the initial value. 2 is the value by which I increase my savings daily. Hold and drag the fill handle down the column, over the cells A3:A11, where you want to copy the formula. I use the A11 cell because that's the last one for my first half. You could go down to A30, for example, if you need 60 working days and your first half has 30 days (rows). 7.) Calculate the total amount for your progressive half. Formula example: =sum(A1:A11). A1 is the cell of the initial value. A11 is the last cell in the first half of the savings period. 8.) Adjust the progressive half. Ask yourself: Is the total amount the one I desire for my first half? The total amount for your first half should be a little higher than half of your total desired amount. Is my peak value affordable? If not, try changing the initial value and see how this change affects the total amount. If this does not produce the results you need, go to the A2 cell and change the value by which you increase your savings; then hold and drag the fill handle again. 9.) Build the regressive half. I leave the B column for comments about the values entered in the A column. I use the C column for the second half. For the C1 cell, I use the formula =A11-2. A11 is the peak value from my first half. 2 is the amount I selected for each increase and decrease; it has to be the same value. For the C2 cell, I use the formula =C1-2. Hold and drag the fill handle down the column over the cells where you want to apply the formula. In my case, the cells are C3:C11. 10.) Calculate the total amount for your regressive half. Formula example: =sum(C1:C11). In the formula, you insert the first and the last cell of the range you want to add. 11.) Calculate the final amount. Add the total amount of the progressive half to the total amount of the regressive half. Formula: =sum(A13;C13). A13 is the cell with the total amount of my first half. C13 is the cell with the total amount of my regressive half. 12.) When saving money, write the date next to the value saved. This way, you will make sure you don't skip a day. On some days, you will save more than you need. Add a note next to the cell with the peak value, and remember to reduce that value by the amount you exceeded. My table after three days of savings
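For readers who prefer code to spreadsheets, here is a small sketch that reproduces the same progressive/regressive schedule using the plan's own numbers (start at 4, increase by 2, 11 rows per half); it is only an illustration of the arithmetic the Excel table performs.

```python
# A sketch of the progressive/regressive saving schedule described above,
# built in Python instead of Excel, using the article's own plan numbers.

def savings_schedule(start, step, rows_per_half):
    progressive = [start + step * i for i in range(rows_per_half)]       # days 1..11
    peak = progressive[-1]
    regressive = [peak - step * (i + 1) for i in range(rows_per_half)]   # days 12..22
    return progressive + regressive

plan = savings_schedule(start=4, step=2, rows_per_half=11)
print("Daily amounts:", plan)
print("Peak day:", max(plan))      # 24
print("Total saved:", sum(plan))   # 286
```

The total of 286 comes out slightly above the 280 target, exactly as step 8 recommends for the first half plus the mirrored second half.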
https://themakingofamillionaire.com/the-progressive-regressive-saving-system-b2ec29b7ea34
['Alexandru Vasai']
2020-12-16 07:39:36.904000+00:00
['Personal Finance', 'Savings Plan', 'Financial Planning', 'Money', 'Savings Tips']
The most important social issue affecting us through cause/effect, problem/solution process
Question from the Internet: “What is a social issue topic that is really happening today using the cause-effect and problem-solution type of structure?” The most important social issue topic that affects everything through cause-effect, problem-solution is the following: We are all born inherently egocentric, individualistic, selfish, thus we don’t trust each other and we fight, compete exclusively, trying to succeed, survive at each other’s expense. So we are unable to build sustainable, positive, mutually responsible, mutually complementing interconnections, cooperation with each other. As a result we can’t assess and solve our mounting global problems that require global decision making and coordinated action. Thus our collective Human existence is in danger right now, as we are helplessly sleepwalking towards an unprecedented global meltdown that could be triggered at any time from multiple issues in multiple locations. Unless… We find a way and the necessary, purposeful and highly practical method that can teach us how to build the crucially important, life-saving global coordination, cooperation above and despite everything that separates, rejects people and nations from each other. https://youtu.be/P1mY0hV5tRo
https://medium.com/@samechphoto/the-most-important-social-issue-affecting-us-through-cause-effect-problem-solution-process-3042ff6cf81a
['Zsolt Hermann']
2020-11-16 05:29:50.467000+00:00
['Crisis', 'Humanity', 'Survival', 'Integration', 'Education']
How Stoicism Can Help You Tackle OCD Intrusive Feelings
How Stoicism Can Help You Tackle OCD Intrusive Feelings It was the philosophical inspiration for CBT after all As an OCD sufferer, I am well aware that many, if not all of us, have intrusive thoughts, but that it is our beliefs about these thoughts that make them so debilitating. In short, the OCD sufferer is taught, quite rightly, that they are not what they think. Recently, however, my OCD has started focusing on feelings of anger and rage. These feelings often lead to intrusive thoughts and come on just as suddenly, but I assign far more weight to feelings of anger than I ever do to thoughts of harm. Moreover, my harm OCD is often exacerbated by feelings of rage and anger. It makes them far more convincing; if I have an argument and momentarily feel like hurting someone, this is far more distressing than having a mere thought. For those unfamiliar with harm OCD, it is a common subtype of the disorder where sufferers have frequent and disturbing thoughts about harming themselves and others. What’s more, usually, intrusive thoughts about harming others do not exist in isolation; intense feelings of anger give power to the thoughts. As a result, I often find myself feeling guilty about feelings. It turns out that there is a scientific term for having a feeling in response to a feeling: meta-emotion. If you feel guilty about feeling happy, someone didn’t get a job, for instance, or if you feel guilty for feeling angry at a spouse, then this is a meta-emotion. An article from the Greater Good Science Centre at University Of California Berkeley quotes author Douglas Adams to demonstrate meta-emotions in action: “For a moment he felt good about this. A moment or two later he felt bad about feeling good about it. Then he felt good about feeling bad about feeling good about it and, satisfied, drove on into the night.” So how can OCD sufferers handle these “intrusive feelings” or momentary urges that they feel ashamed at having had afterward? And do our feelings always reflect our desires, more so than a thought, for instance? I believe the answer lies in an ancient school of thought. Let me explain. CBT has its roots in the ancient Hellenistic philosophy of Stoicism; in fact, CBT’s premise that it is our belief about events and thoughts that make us afraid, as opposed to the event or thought itself, is pulled straight from the Stoic handbooks. As Donald Robertson notes, writing in The Guardian: “T he pioneers of CBT, Albert Ellis and Aaron T Beck, both describe Stoicism as the philosophical inspiration for their approach.” It is a good place to seek answers for how to respond to “intrusive feelings” also. The answer that the Stoics give is both clear and convincing; in short, the Stoics argue that we all receive myriad feelings, images, and ideas in response to events, which they call “First Movements” or impressions, and that these are neither good nor bad in and of themselves. These impressions are likened to physiological responses like shivering in response to cold or sweating in response to heat. For example, if you learn that your friend has been betrayed and have a sudden thought or feeling about wanting to hurt the person that did this, the Stoics do not believe you are vengeful just yet. According to the Stoics, we are judged by how we act upon these images, urges, and feelings using our rational faculties. In short, the Stoics differentiate between our natural emotional responses to events and our rational decisions on what to do with them. 
They argue that we are only morally culpable for the latter. For example, if someone hits you, and you momentarily feel like hitting them back or worse, but decide against it and calm yourself down, the Stoics do not call this anger, but a First Motion which was tamed. Likewise, if you have a sudden urge to betray someone for financial gain but shun this momentary “impression,” the Stoics would not believe you to have been treacherous. Put, you are not defined by thoughts- or by momentary and responsive feelings- but by how you respond to these feelings. For me, at least, this has provided no small measure of comfort. I have often been seized by a feeling that runs contrary to all my moral beliefs and felt wretched for having had the emotion. Now, I see, I need not worry too much as long as I didn't entertain it. In his book Lessons in Stoicism, John Sellars elucidates the concept of “First Movements” perfectly: “All humans experience what Seneca calls ‘First Movements’. These are when we are moved by some experience, and we might feel nervous, shocked, excited or scared, or we might even cry. All these are quite natural reactions; they are physiological responses of the body, but not emotions in the Stoic Sense of the word. Someone who is upset and momentarily contemplates vengeance, but does not act on it, is not angry according to Seneca, because he remains in control.” According to the Stoics, we only become angry if we allow ourselves to be seized by the emotion or act upon the desire. Seneca puts it more simply: “Fear involves flight, anger involves assault”. Or, to simplify it further: Actions speak louder than feelings. Furthermore, I can give you an example of a “First Movement” in action. Walking home from the gym, a group of young boys, each on a bike, rode towards me jeering and smiling, deliberately blocking the path in an attempt to intimidate. As suddenly as an intrusive thought, and confronted with 3 derisive faces, I had a feeling or urge to hit one of them. The feeling was intense, palpable even, but after the moment had passed, I felt extremely guilty for having it. At the time, I didn't seem to have a choice in the matter; it was as intuitive as a shiver in response to cold or a flinch in response to an incoming object. What am I to make of that feeling? According to the Stoics, I shouldn't make that much of it at all because anger didn't get the better of me. Instead, I took control of the passion, and thus I am not morally culpable for it any more than I would be for an intrusive thought which wasn't accompanied by anger. So, next time you feel shame about a feeling you had, don’t beat yourself up about it too quickly. The virtuous man isn’t one who doesn’t have “First Movements,” but one who responds soberly to them. Moreover, like intrusive thoughts, accept that feelings, particularly “First Movements,” are often unwanted and sudden. Lastly, feeling guilty about a feeling often leads to virtuous action; in this sense, they can help us discover what we stand for. If you feel a certain way and then later feel ashamed for having had this thought, instead of beating yourself up about the feeling, do the right thing instead, and that is what defines you as a person. In short: you are not what you think. And according to the Stoics, you are not what you feel either.
https://medium.com/invisible-illness/how-stoicism-can-help-you-tackle-ocd-intrusive-feelings-5ef0046cdb0c
['Ross Carver-Carter']
2020-10-27 16:07:03.329000+00:00
['Mental Health', 'Ocd', 'Health', 'Philosophy', 'Self']
An App With SCA: Flow Testing
Last week we discussed how to export the dependencies of our Snake app to gain full control over our codebase. We also finalized the test of the state that we were not able to finish a couple of weeks ago. In the process of factoring out the dependencies, we worked on the timer, moving it from the Reducer to the Environment so that we could control it. However, we did not use it in our tests. Today, we complete the tests of our app, showing an interesting property of the Composable Architecture Test Support: thanks to its ergonomic API, it is possible to fully test our app in a very simple way. Our codebase so far In order to properly understand our tests, let me recall here the pieces of the app we need. The first element we need to recall is the Environment. This is the part of the app that gathers all the dependencies used to interact with the external world. The Environment is a container for a set of APIs: we use one version of those APIs in the production environment, but we use some mocked implementations in the test environment, so we can properly control them. Another piece of code we need to remember is the Reducer. The reducer is a pure function that takes the current State, the Action performed, and the Environment, and it updates the State. This is the single point where we implement the whole logic of our app, and it is the function that we are going to test. Lastly, let's recall the State and the Action. They are pretty simple: the State contains the structure that describes the Snake, the current location of the Mouse, and whether we should present an alert or not. The Action is an enum with the cases handled by the Reducer. For the sake of completeness, this is what they look like: Preparing the test Now, let's set up the testing environment. As already stated, we would like to fully control our test environment. That means, for example, that we need to know exactly what is returned by the dependencies and when the publishers publish a new value. And the value they publish, of course. Luckily, this is pretty simple to achieve. We defined our Environment so that we can plug in the implementation we want. Thus, we can write a mock version of the environment as follows: In this mocked environment, we are going to use an RNG that always returns 0. This can be used to compute a new location for the Mouse. We also pass a parameter that is an Effect, generic in the type produced by the timer. This step is important because it allows us to pass a Publisher that we fully control as a dependency for the reducer. Our test can be deterministic and fully predictable in this way. From the theory, we know that a good test should follow the AAA rule: Arrange, Act, Assert. We are not done with the Arrange part. We created the dependency we need, but we still need to assemble it into the test. So, let's move to the actual test to complete the first step: I omitted the Act and Assert parts to focus on the Arrange part. In the testGameFlow() function, we are creating a PassthroughSubject. This is an object from the Combine framework that can act both as a Publisher and as a Subscriber. We are going to use it mainly as a publisher in our test. Then we create the Snake for the initial state. Finally, we create the TestStore using another zeroGenerator that will place the initial mouse in the top-left corner of the game field. For the Environment, we use the mock we prepared, passing the PassthroughSubject as an effect.
At this point, our test has all the ingredients to finally be written, so… let's cook it! Writing the Test If you remember from the State Testing article, the ComposableArchitectureTestSupport allows us to Act and Assert in a single step, thanks to several utility functions and types that the good guys at pointfree.co prepared for us. In short, the TestStore has an assert method that lets us specify a set of Steps to simulate the evolution of our app. There are different types of steps: one to simulate an action entering the reducer, another to simulate an action received from an effect, a generic step to perform some operation, and a step to update the environment. By chaining these steps wisely, we can simulate a whole execution flow for the app. We can, for example, startGame, move the snake, changeCurrent(direction:), move the snake again, and so on. After every step, we have the possibility to describe what the resulting state should be. For example, after a move action, we describe the new positions for the snake's head and for its body. After a changeCurrent(direction:), we describe where the snake is facing, and so on. If any of these descriptions does not match the actual execution outcome, the test fails. Now, let's see what the final version of this test looks like: The first 8 lines of this snippet are the same as the Arrange step of the previous one. The interesting part comes after line 10. In the TestStore.assert function we pass a set of steps that describes the evolution of a game. After the game starts, for example, we update the current direction to up. At this point, we want to make the snake move by one step. In the actual snake game, this action is triggered by the timer. In the tests, we have to simulate that, and we can do it by leveraging our PassthroughSubject publisher. The do step lets us access the subject and use it to send a new DispatchTime. This simulates the firing of the timer. If we look at the reducer, we know that every time the timer fires, a move action is injected into the reducer. Therefore, we can handle it by using the receive step and describing the new expected state after the snake has moved. We then perform the same set of changeCurrent(direction:), send a DispatchTime through a do step, and receive a move action a couple more times to make the snake move left and down. This last movement, however, has a slightly different outcome: before the last move, the snake was facing its own body. By moving toward it, the game ends. This condition is captured at line 48, by describing the expected state as having an AlertState with the "Game Over" title, a message stating the exact length of the snake, and an Ok button. Remember, the Store is still subscribed to the timer Effect. We need to clean up our resources, otherwise the ComposableArchitecture would warn us with the following message: The last do step at line 54 does exactly that: by signaling the completion of the publisher, the Store is able to cancel its subscription and the cleanup is performed. Conclusion In this article, we harnessed the power of a completely controlled Environment. We have been able to fully test the execution of the app: from the startGame action to the final move that leads to the Game Over alert being presented.
https://medium.com/swlh/an-app-with-sca-flow-testing-ada82518d313
['Riccardo Cipolleschi']
2020-10-22 13:38:42.966000+00:00
['Swift Programming', 'App Development', 'iOS', 'Testing', 'Apple']
All You Need To Know About Hybrid Cloud Computing Blog- Web Hosting Services | Best Cloud Hosting | Cloud Web Hosting- CloudHostWorld
All You Need To Know About Hybrid Cloud Computing Blog- Web Hosting Services | Best Cloud Hosting | Cloud Web Hosting- CloudHostWorld Cloud computing is a model for enabling ever-present, convenient, on-demand network access to a shared pool of configurable computing resources such as networks, servers, storage, applications, and services. Clouds must have five essential characteristics: on-demand self-service; broad network access; resource pooling; rapid elasticity or expansion; and measured service. Cloud computing has, since its inception, played an exceptional role in revolutionizing most industries across the globe. Both public and private cloud computing models have helped many businesses prosper. Organizations need flexible computing solutions that can provide the benefits of both public and private cloud computing models to match ever-changing business requirements. Hybrid cloud computing is one such solution, and nowadays it is rapidly gaining a foothold in most businesses. There are three types of cloud services: public, private, and hybrid. Public clouds are controlled by their owners, who rent computing services to their clients. Some such platforms are Amazon Web Services (AWS), Google Cloud Platform, Microsoft Azure, etc. In comparison, a private cloud runs on an organization's own servers using cloud software such as Nextcloud, OpenStack, or VMware's vSphere. A hybrid cloud bridges the gap between public and private using its own mix of public and private cloud services. As the cloud has continued to develop, the gaps between public and private models have shrunk over time. From a business perspective, public cloud-based technology promises to replace high capital expenses (CapEx) with lower operating expenses (OpEx). Irrespective of the cloud model you use, there are three main ways to consume cloud resources: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). There are other cloud services as well, like data as a service, test environment as a service, desktop as a service, and API as a service, but the most widely used ones are those mentioned above. What is a Hybrid Cloud? One can say that hybrid cloud computing is a blend of two or more cloud computing deployment models, such as public and private. Furthermore, it can also be a combination of cloud and traditional IT models. Organizations choose hybrid plans according to their business requirements. Generally, however, a hybrid cloud computing environment uses a mix of on-premises private cloud and third-party public cloud services, with orchestration between the two platforms. A hybrid cloud gives businesses greater flexibility and more data deployment options by allowing workloads to move between private and public clouds as computing needs and costs change. Hybrid cloud architecture The general hybrid cloud architecture includes: a public IaaS platform, such as Amazon Web Services, Microsoft Azure or Google Cloud Platform; a private cloud, built either on-premises or through a hosted private cloud provider; and adequate wide area network (WAN) connectivity between those two environments.
Typically, an enterprise will choose a public cloud for various operations, like accessing compute instances, storage resources, or other services such as big data analytics clusters or serverless computing capabilities, but it does not have any control over the architecture of a public cloud. For a hybrid cloud deployment, however, the enterprise has to design its own private cloud to achieve compatibility with the desired public cloud or clouds, and in doing so it gains control over the private cloud. Designing a private cloud involves implementing suitable hardware within the data center, including servers, storage, a local area network (LAN), and load balancers. The enterprise must then deploy a hypervisor (virtualization layer) to create and support virtual machines (VMs) and, in some cases, containers. IT teams must then install private cloud software (say, OpenStack) on top of the hypervisor to deliver cloud capabilities like self-service, automation, reliability and resilience, billing and chargeback, etc. A private cloud architect will then create a menu of local services (compute instances or database instances) from which users can choose. Remember that the key to creating a successful hybrid cloud is to select a hypervisor and cloud software layers that are compatible with the desired public cloud, ensuring proper interoperability with that public cloud's application programming interfaces (APIs) and services. This also enables the seamless migration of instances between private and public clouds. Hybrid cloud platforms also help developers create advanced applications. Benefits of Hybrid Cloud Computing Both small and large businesses can benefit from hybrid cloud computing solutions. Creating in-house infrastructure consumes a lot of time and resources, but at the same time provides control and security. In the long run, however, these advantages cannot outweigh the expenditure the company has to bear. With hybrid cloud computing, organizations can save a great deal of money. The money the company would otherwise have spent on building infrastructure, hiring IT personnel, and so on can now be invested in pushing the business forward. Another point in favor of hybrid cloud computing is that it does not impact existing operations; it also allows all existing technologies and tools to be reused. Using the public cloud to store data that requires less security can also decrease a company's expenditure. Hybrid networks are much more secure than public or private clouds alone, as they allow companies to store critical information on dedicated servers and the rest in public clouds. In this way, companies retain complete control over their critical information by keeping it on private networks, and at the same time can expand and improve their security systems. With hybrid cloud networks, organizations can configure or optimize the network according to their business requirements. This helps improve local network performance by allowing organizations to relocate heavy processes or traffic to a separate public or private off-premises cloud. Better bandwidth scalability and improved latency can easily be gained with flexible network optimization, which significantly improves data transfer rates. Challenges of the Hybrid Cloud Nothing is perfect or complete. Alongside the benefits, hybrid clouds can present a few challenges: technical, business, and management.
A hybrid cloud requires API compatibility and solid network connectivity so that private cloud workloads can access and interact with public cloud providers. For the public part of a hybrid cloud, some of the problems that may arise are potential connectivity issues, service-level agreement (SLA) breaches, and other possible service disruptions. Designing hybrid cloud workloads to interoperate with multiple public cloud providers can be one way to mitigate these risks; however, this can complicate workload design and testing. Construction and maintenance of the private cloud itself is another challenge with hybrid cloud computing, as it requires substantial expertise from local IT staff and cloud architects. Implementing additional software, such as databases, helpdesk systems, and other tools, can further complicate a private cloud. Management tools for the Hybrid Cloud There are many management tools for hybrid clouds, like Egenera PAN Cloud Director, RightScale Cloud Management, Cisco CloudCenter, Scalr Enterprise Cloud Management Platform, etc. These tools can help companies handle workflow creation, service catalogs, billing, and other tasks related to the hybrid cloud. Some more such tools include BMC Cloud Lifecycle Management, IBM Cloud Orchestrator, Abiquo Hybrid Cloud, Red Hat CloudForms, and VMware vCloud Suite. However, it is important for potential adopters to test and evaluate these tools carefully in their own hybrid cloud environment before committing to any particular tool. These days the hybrid cloud has become the preferred choice for many organizations, as it helps them overcome the drawbacks of both purely public and purely private clouds. However, choosing a hybrid cloud will not resolve everything. To get optimum use from a hybrid cloud, organizations must choose cloud provider services according to their business needs. That's all you need to know about hybrid cloud computing.
https://medium.com/@sduphare785u/all-you-need-to-know-about-hybrid-cloud-computing-blog-web-hosting-services-best-cloud-hosting-7d4b40fbf97a
['Swapnil Duphare']
2021-07-06 07:40:47.745000+00:00
['Hybrid Cloud Computing', 'Cloud Hosting', 'Web Hosting', 'Servers', 'Cloud Computing']
Four Steps for Migrating from Hive 2.x to 3.x
Four Steps for Migrating from Hive 2.x to 3.x Get Hive prepped and ready if you're moving to Cloudera Data Platform by Shekhar Parnerkar and Pooja Sankpal If you are a current HDP 2.6.5 user, you are probably already contemplating a move to Cloudera Data Platform (CDP), since Cloudera has announced the discontinuation of support for this and older versions of HDP in favor of CDP at the end of 2020. Breathe easy, you are not alone! At Hashmap, we have many clients who are in the same boat. To make their journey safer and easier, we have developed a framework that provides a step-by-step approach to migrating to CDP. This framework has been documented in a previous blog post written by technical experts at Hashmap. This blog post covers the migration of Hive tables and data from version 2.x to 3.x (the target version supported by CDP). Other related articles are mentioned at the end of this article. Future posts will cover lift-and-shift migration from HDP 2.6.5 to CDP and migration of Spark and Oozie jobs to CDP. What's Different from Hive 2.x to 3.x There are many structural changes between Hive 2.x and 3.x, which makes this migration quite different from routine upgrades. To summarize a few of these changes: All managed (non-transactional) tables need to change to external tables. This includes both native tables (data stored in the hive_warehouse directory in HDFS) and non-native tables (data stored externally in HBase or Druid). In addition, for the native tables above, the property external.table.purge needs to be set to true. The use of MapReduce (MR) as a Hive execution engine has been discontinued in favor of Tez. The default root directory of Hive has changed to app/hive/warehouse. Many Hive configuration properties have changed. While these appear to be simple changes at first glance, there are many challenges that may arise: the latest or most current DDL scripts may not be available to make the changes; identifying the tables that need to be changed; and making and testing the change. Given that Hive is the de facto enterprise data warehouse, it will usually have thousands of tables spread across dozens of databases. Making these changes manually is very error-prone and time-consuming. This is where the Hashmap framework comes in to help. Here is a step-by-step workflow to achieve these goals. Hive 2.x to 3.x in Four Steps Step 0: Prerequisites This blog assumes that you have stood up a brand-new CDP cluster, or that you have upgraded your existing cluster to a fully configured Hive 3.x version. This upgrade should have covered the following: all CDP services are up and running, including Hive; Hive configuration properties have been set to their new values, either through hive-site.xml or by an upgrade script; and, in case you have stood up a brand-new CDP cluster, all Ranger policies have been imported from HDP and applied to the new cluster. Step 1: Get the current DDL scripts If you have them handy in a Git repo, download all such repos to the local file system for auto-correction by the framework. Our auto-correction script will look for .hql files in each repo and make the required changes directly to those files. In case you have chosen an in-place upgrade, the required changes to the Hive Metastore and data will be done automatically by the script provided by Cloudera. However, you can use this script to update your repo with the desired changes for future use. If you do not have the current DDL scripts, or are not sure if they are current, there is another option: create fresh DDL scripts from Hive metadata.
We have another script that generates a .hql file containing the CREATE TABLE statements for all the tables in a given database. However, please bear in mind that this script will add a LOCATION property to your CREATE TABLE statement, which will point to the location of the table in the current cluster. This location will need to be removed or changed to the new Hive root as per Hive 3.x. Once you have the DDL scripts, please arrange them in one or more directories for auto-correction, one directory at a time. Step 2: Run the Script Download our DDL correction script from this location: https://github.com/hashmapinc/Hashmap-Hive-Migrator This script takes a local directory as input, recursively traverses it looking for .hql files, and applies the changes described above (converting managed tables to external tables and setting external.table.purge where required). In addition to the above, the script also makes the following changes: If you have a CTAS statement, the script will still make the required changes to the created table, subject to the conditions above. If you have a LIKE statement, it will be modified according to the same criteria. However, sometimes manual changes are required. For example, the statement “Create External Table B as A” is correct in Hive 3.x only if A is also external, which may not be possible for the script to determine. Such instances are specifically logged for manual correction. If your CREATE TABLE statements have a LOCATION property for a managed table, the location will be changed to the new Hive root directory. (This will be the case if you created the DDL script using SHOW CREATE TABLE through our script in Step 1.) After a successful run, the script will have made all of the above changes to the .hql files in the input directory. All changes are logged in a log file, which should be reviewed for other issues or exceptions before the modified DDL is applied. Step 3: Create/Update Hive Metadata Next, we apply the modified DDL scripts on the target environment to create new databases and tables. In case of an in-place upgrade, this will not be required. This will need to be done database-by-database or repo-by-repo, depending upon how the DDL was created. If you have a CI/CD pipeline set up to execute your scripts on a new CDP cluster, you can check in the modified repo and trigger the deployment. There is another approach to moving the Hive Metastore. If your current Hive Metastore can create a data dump that is directly readable by your new Metastore database, you can move the Metastore to CDP directly. However, after the import into Hive 3.x, you will need to upgrade your Metastore using a Cloudera-supplied script, which is available as part of the AM2CM upgrade tool. Step 4: Migrate Data Once the Hive 3.x Metastore is updated in CDP, we are ready to move data from 2.x to 3.x. When moving data from Hive 2.x to 3.x, the following approach is recommended: The default root directory of Hive has changed to app/hive/warehouse. Therefore, the table data for every managed table should be moved to app/hive/warehouse/<db>/<table_name>. In case of an in-place upgrade, the table locations should be changed accordingly. If this is not possible, change the table to an external table. Compact all transactional tables before they are moved or used by Hive 3.x. This is due to changes in the compaction logic; if tables are not compacted beforehand, Hive 3.x will not allow further changes to those tables. You can use the Hive Pre-Upgrade Tool provided by Hortonworks/Cloudera to achieve this.
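To give a feel for what such a correction pass does, here is a deliberately simplified sketch. It is not the actual Hashmap script — real DDL needs a proper parser rather than regexes — but it shows the shape of the work: walk the directory, rewrite each CREATE TABLE, and log anything that needs a human decision:

```python
# Simplified illustration of a Hive 2.x -> 3.x DDL correction pass.
# Not the Hashmap script: statements are split naively on ';' and the
# TBLPROPERTIES append assumes none is already present.
import os
import re
import logging

logging.basicConfig(filename="ddl_migration.log", level=logging.INFO)

def convert_statement(stmt):
    """Rewrite one CREATE TABLE statement for Hive 3.x where possible."""
    if re.search(r"\b(LIKE|AS\s+SELECT)\b", stmt, re.IGNORECASE):
        # CTAS/LIKE may depend on whether the source table is external.
        logging.info("Manual review needed (CTAS/LIKE): %s", stmt[:80])
    if re.match(r"\s*CREATE\s+TABLE", stmt, re.IGNORECASE):
        stmt = re.sub(r"CREATE\s+TABLE", "CREATE EXTERNAL TABLE", stmt,
                      count=1, flags=re.IGNORECASE)
        stmt = stmt.rstrip() + "\nTBLPROPERTIES ('external.table.purge'='true')"
    return stmt

def convert_file(path):
    with open(path) as f:
        statements = f.read().split(";")
    converted = [convert_statement(s) for s in statements if s.strip()]
    with open(path, "w") as f:
        f.write(";\n".join(converted) + ";\n")

def convert_tree(root):
    """Recursively rewrite every .hql file under `root` in place."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".hql"):
                convert_file(os.path.join(dirpath, name))

convert_tree("./hive_ddl_repo")  # example usage with a placeholder path
```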
Since we are loading the data into the tables without using an HQL query, the tables’ statistics (information about partitions, buckets, files, their sizes, etc.) are not updated in the Hive metadata. Hive will update the metadata when queries are first run against a table. This could cause some degradation of query performance, depending upon the volume of data in the table. Move native Hive data to the CDP cluster using ‘distcp’. Please note that moving data for non-native tables from Apache HBase, Impala, Kudu, and Druid will require different approaches. These will be discussed in an upcoming blog. Each item in the list above can be broken down into a series of detailed steps to be performed. A complete description of these steps is beyond the scope of this blog; however, they can be made available upon request. Where Do You Go From Here? If you’d like assistance along the way, then please contact us. We’ve been working with Cloudera and Hortonworks since 2012 and would be glad to partner with you on anything from assessment and strategy to upgrade implementation. Hashmap offers a range of enablement workshops and assessment services, cloud modernization and migration services, and consulting service packages as part of our cloud migration and modernization service offerings.
https://medium.com/hashmapinc/four-steps-for-migrating-from-hive-2-x-to-3-x-e85a8363a18
[]
2020-10-21 14:23:16.152000+00:00
['Cdp', 'Hive Migration', 'Cloudera', 'Cloud Computing', 'Cloudera Data Platform']
Education is the new Marketing
One of the stories defining the first few weeks of Fund 2 was the deluge of proposals for funding podcasts about Cardano. Tasked with solving the problem of “encouraging developers and entrepreneurs to build businesses and applications on top of Cardano in the next six months,” community members created over a dozen proposals for podcasts, leading to some spirited conversations on Reddit, Twitter, and within the public Fund 2 Telegram. It would take a separate essay to summarize these conversations, but in sum, I think they were productive. The entire purpose of Catalyst is to foster community governance, which means having the chance to raise ideas, debate their merits, and arrive at outcomes. In the end, several podcasts withdrew their applications, including the flagship Cardano Effect, as well as AfroFinLab, to which I am a contributor. Counting them today, I see 7 proposals for podcasts in the final list. Why were there so many podcast proposals? In part it was Charles Hoskinson’s call for them back in August. This in turn bred a Catalyst Problem Sensing proposal called “CH’s Podcast Callout Too Successful”, which may be true. One way to read this is as an illustrative example of a leader’s words moving a market. But I think it’s more than that. The influx of podcast proposals indicates both the passion of our community and our shared recognition of the work ahead. People want to participate. We recognize that we have to tell the story of Cardano to so many people, and podcasts are one way to do that. Mix the high-level goal of driving adoption with a little dose of pandemic isolation, and we’ve got a great recipe for creating whatever we can from home. What remains unresolved in these conversations is whether or not podcasts actually successfully drive adoption of new technology. As a community we recognize a need for deeper public awareness of Cardano, and that for now, podcasts provide an accessible way to pitch in. If we agree on that need, then the question is really about how to address it.
https://medium.com/@workshopmaybe/education-is-the-new-marketing-dd609966c41a
['Workshop Maybe']
2020-10-22 20:01:16.302000+00:00
['Cardano', 'Cardano Project Catalyst', 'Blockchain', 'Blockchain Startup']
Armed man opens fire on protesters in Somali capital
Armed man opens fire on protesters in Somali capital An armed man without uniform has opened fire on protesters in the Somali capital Mogadishu. The protesters were against President Mohamed Abdullahi Farmajo. The opposition Wadajir Party Leader Abdirahman Abdishakur Warsame has linked the man to the National Intelligence and Security Agency (NISA). “President Farmajo & the NISA director have deployed plain-clothed armed men into the streets to suppress the people. This man who was firing at the protesting youths r among those armed men. If they r firing live bullets at peaceful protesters, why is it wrong to defend ourselves,” Warsame said as he shared a video of the incident on Twitter. An alliance of opposition presidential candidates has been organising protests against the government amid a dispute over the 2020/21 electoral process.
https://medium.com/@oceanstar843/armed-man-opens-fire-on-protesters-in-somali-capital-30ef3b2d61d8
['Bigocean Star']
2020-12-27 14:36:55.814000+00:00
['Elections', 'Somalia', 'Africa', 'Protest']
Open-sourcing KingPin, building blocks for scaling Pinterest
Shu Zhang | Pinterest engineer, Infrastructure When we first started building Pinterest, we used Python as our development language, which helped us build quickly and reliably. Over the years we built many tools around Python, including Pinball, MySQL_utils and pymemcache, as well as a set of libraries used daily for service communication and configuration management. Today we’re releasing this toolset, KingPin, as our latest open-source package. KingPin contains some of the best practices we learned when scaling Pinterest, including: A local daemon to deal with ZooKeeper’s single point of failure (SPOF) problem. The daemon is running on ~20K hosts delivering configuration data in less than 10 seconds. A Python Thrift client wrapper for enhanced functionality. We send hundreds of thousands of requests per second via this Python client across Pinterest. A configuration management framework. We have over 400 configurations being updated and consumed through this framework. KingPin use cases You may want to try out KingPin in any of the following cases: Your stack is also Python-oriented and running on AWS. You want to make your ZooKeeper cluster more robust and resilient. You’re building a configuration system and want your configurations to support a rich set of data structures like lists, maps, sets and JSON. You want to use S3 to store some of the most critical metadata. You’re using Thrift and looking for a more reliable client library. KingPin architecture KingPin has the following components working together: Kazoo Utils: A wrapper for Kazoo that implements the utils we use for the RPC framework, service discovery and some enhancements of native Kazoo APIs. Thrift Utils: A greenlet-safe wrapper for the Python Thrift client with error handling, retry handling, load balancing and connection pool management built in. Config Utils: A system that stores configuration on S3 and uses ZooKeeper as the notification system to broadcast updates to subscribers. (See our previous blog post for additional details.) ZK Update Monitor: A local daemon and server that syncs subscribed configurations and serversets to local disk from ZooKeeper and S3. This is a key part of how we make our use of ZooKeeper fault-tolerant. (For more on this design, check out this blog post.) Decider: A utility we use to control online logic flow; one typical use case is experiment control. Deciders are set so every A/B testing experiment can be turned on or off in real time without any code deploy. Decider is built on top of Config Utils.
Managed Data Structures: A convenient map/list data structure abstraction in Python built on top of Config Utils. MetaConfig Manager: A system that manages all configurations/serversets and dependencies (subscriptions), built on top of Config Utils. Real-time configuration management and deployment An additional use case of KingPin is managing configurations in real time. For example, engineers might create a new configuration via MetaConfig Manager and add it to a subscription we call “Dependencies.” Configuration content is stored in S3 as the ground truth, and ZooKeeper is used to track and propagate updates. In order for a subscribed configuration to be downloaded properly, ZK Update Monitor must be running on the subscriber machine. Applications can read the file and decode it into a Python object for CRUD operations using a variety of APIs. Service discovery We rely on KingPin to move towards SOA (service-oriented architecture) inside Pinterest. An essential building block for SOA is service discovery. A service client needs to know the addresses of the service endpoints to connect and send requests to them. KingPin provides a script for service endpoints to register themselves to ZooKeeper so the endpoint list (“serverset”) can be consumed by service clients. Similarly, ZK Update Monitor downloads the serverset from ZooKeeper and puts it into a local file. Serversets change dynamically when server nodes join or leave. Using the Mixin provided in Thrift Utils, a Thrift client reads the local serverset file and talks to the endpoints using any HostSelector algorithm. The Mixin also manages the connection pool and allows users to set various timeouts and retry policies according to specific use cases. Getting started We use KingPin across various parts of our infrastructure. For example, ZK Update Monitor is running on every box at Pinterest to deploy the latest configurations and serversets in real time. Managed data structures are used for serving write-rare-read-frequent configuration data, such as a blacklist of domains we use to filter spam. The Python service framework is used by every Thrift client. There are hundreds of deciders controlling online logic and turning on and off various experiments. You can now access the source code, how-tos and examples for your own use. If you have any questions or comments, reach us at [email protected]. Acknowledgements: KingPin is a joint effort across Pinterest engineering and has significantly evolved over the years. Contributors include Xiaofang Chen, Tracy Chou, Dannie Chu, Pavan Chitumalla, Steve Cohen, Jayme Cox, Michael Fu, Jiacheng Hong, Xun Liu, Yash Nelapati, Aren Sandersen, Aleksandar Veselinovic, Chris Walters, Yongsheng Wu and Shu Zhang. Thanks to Jon Parise for his support during the open-sourcing effort.
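As a closing illustration of the serverset idea, here is a rough conceptual sketch. It is not KingPin's actual API — the file path and the host:port-per-line file format are assumptions — but it shows how little a client needs to do once ZK Update Monitor keeps a local file fresh:

```python
# Conceptual sketch of consuming a locally synced serverset file.
# Not KingPin's real API: the path and "host:port per line" format are
# assumptions made purely for illustration.
import random

SERVERSET_PATH = "/var/serverset/discovery.myservice.prod"  # hypothetical path

def read_serverset(path):
    """Return the list of (host, port) endpoints currently in the local file."""
    endpoints = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            host, _, port = line.rpartition(":")
            endpoints.append((host, int(port)))
    return endpoints

def pick_endpoint(endpoints):
    """Trivial host selection; a real client adds health checks and retries."""
    if not endpoints:
        raise RuntimeError("serverset is empty -- no endpoints to talk to")
    return random.choice(endpoints)

host, port = pick_endpoint(read_serverset(SERVERSET_PATH))
print(f"connecting to {host}:{port}")
```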
https://medium.com/pinterest-engineering/open-sourcing-kingpin-building-blocks-for-scaling-pinterest-8febe81f2c1c
['Pinterest Engineering']
2017-02-21 19:33:53.218000+00:00
['Python', 'Open Source', 'Microservices', 'Kingpin', 'DevOps']
Python Curses Based ASCII Art Fire Animation
Python curses based ASCII art animation by Mark Simpson displayed in iTerm2 on OS X Get ready for winter with a toasty Python curses script that displays an animated ASCII fire in your terminal. Gist courtesy of Mark Simpson:
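The embedded gist itself is not reproduced here, but the general technique is easy to sketch. The following is my own minimal illustration, not Mark Simpson's script: keep a grid of heat values, seed the bottom row with random sparks each frame, average-and-decay upward, and map heat to characters:

```python
# Minimal ASCII-fire sketch (not the gist referenced above): heat buffer,
# random sparks on the bottom row, upward averaging with decay each frame.
import curses
import random
import time

CHARS = " .:*#@"  # low heat -> high heat

def fire(stdscr):
    curses.curs_set(0)
    stdscr.nodelay(True)
    height, width = stdscr.getmaxyx()
    heat = [[0] * width for _ in range(height)]
    while True:
        if stdscr.getch() == ord("q"):  # press q to quit
            break
        # Seed the bottom row with random sparks.
        for x in range(width):
            heat[height - 1][x] = random.randint(0, len(CHARS) - 1)
        # Propagate heat upward with a little cooling.
        for y in range(height - 1):
            for x in range(width):
                below = heat[y + 1][x]
                side = heat[y + 1][min(x + 1, width - 1)]
                heat[y][x] = max(0, (below + side) // 2 - random.randint(0, 1))
        # Draw the frame, skipping the bottom-right cell (a curses quirk).
        for y in range(height):
            for x in range(width):
                if y == height - 1 and x == width - 1:
                    continue
                stdscr.addch(y, x, CHARS[heat[y][x]])
        stdscr.refresh()
        time.sleep(0.05)

curses.wrapper(fire)
```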
https://medium.com/sweetmeat/python-curses-based-ascii-art-fire-animation-259e9e007767
['Chris Simpkins']
2017-10-14 23:51:55.859000+00:00
['Python', 'Animation', 'Programming', 'Fire', 'Software Development']
Sleepless Night
Sleepless Night kbeis for iStock The tumbling down of the sun, assembled a dark emptiness, on which the moon was mourning. the night was pregnant with, defiant stars and holy comets. the winds, untamed by cliffs, were snoring in the sky. the graveyards weary of, field notes of the ghosts and un-cared flowers, called the cold caravans, for sweet lullabies. the silence was too loud, to let the body rest. and the turmoil was too silent to let the pillow fall. the blanket was too unruly to let the thoughts settle. And the plants, were worn out after, pumping oxygen all day long, from their brittle lungs. the mist and fog, too thick to cover the wildness of passions, the clouds, too thin to dress the shades of the sky.
https://medium.com/@uday-neutron/sleepless-night-953657d6f12d
[]
2020-12-19 11:09:12.611000+00:00
['Poem', 'Poems On Medium', 'Poetry Writing', 'Poetry', 'Poetry On Medium']
The History of Trade And The Role of Barter System
Today we will talk with you about the history of the development of trade relations and the role of barter exchange. Barter was first noted in Egypt around 9000 BC, when farmers gathered in markets and traded cows for sheep and grain for butter. With the development of trade routes, barter exchange diversified significantly. For papyrus, precious stones, or a chariot, one could purchase an exotic animal, skins, or ore from Africa or Asia. In general, the ancient commodity economy was directly related to crops and livestock. Grain and livestock were examples of what is called commodity money, that is, money that has intrinsic value and is used primarily as a universal medium of exchange in barter. Over time, the barter system began to change. There were more goods and tools, and it became difficult to determine the price of anything. The first difficulty our ancestors faced was valuing goods: prices were determined individually, which caused disputes. Centuries have passed, but natural exchange still lives among us. It has changed beyond recognition, but its essence remains the same as in the primitive era. As with any economic phenomenon, there are pros and cons to the barter system. It helps to preserve capital in difficult economic situations, and it allows you to increase sales without spending money. Today, barter makes it possible for organizations to clear inventory without cash outlay. Barter-based transactions also make large-scale cash management more flexible. There are some disadvantages inherent in the barter system. Difficulties often arise in selecting goods for exchange; it is especially hard to find a mutually beneficial option if the goods cannot be exchanged at a favorable price. Barter Smartplace solves the problem of product matching by creating a modern barter marketplace that intelligently selects products based on their characteristics, making it easier to find suitable offers. Since the days of barter, the means of trade have changed beyond recognition. Thanks to new technologies, instead of a person working for money, money can now work for the person. Today the blockchain industry is very promising, and the speed of technology development is impossible to predict. With the development of the economy, barter and commodity money receded into the background but did not disappear completely. Barter Smartplace has reimagined barter trading by enabling users to tokenize real valuable assets and place them on a smartplace. It is now possible to easily exchange objects of equal or unequal value using this platform. Digital asset technologies increase the confidentiality of the parties to a transaction and the speed of its execution, and a legal smart contract with an electronic signature provides high reliability and convenience of service. The Barter project emerged as an idea in mid-2018, when the cryptocurrency market fell significantly but demand in the OTC market increased. Barter has expanded its ecosystem to include tokenized real-value assets that can be exchanged for any supported liquid token or cryptocurrency. Barter intends to create a blockchain platform for barter transactions, the objects of which can be any tokenized assets and liquid currencies. The Barter Smartplace ecosystem opens up the possibility of developing the barter industry in the modern world without much effort and investment.
Without leaving the walls of your home, going to work or rest, barter exchange will become available to everyone. 📢 Connect to Barter Smartplace for barter exchange at https://barter.company/ Join the community: http://t.me/barterteam Telegram wallet: http://t.me/barterwalletbot
https://medium.com/bartersmartplace/the-history-of-trade-and-the-role-of-barter-system-5d4a7b37cfb2
['Nansy Dunne']
2020-12-27 13:40:20.729000+00:00
['Barter', 'Cryptocurrency', 'Trade', 'Smart Contracts', 'Blockchain']
The Best Data Science Framework You’ve Never Heard Of
The Best Data Science Framework You’ve Never Heard Of A web app built in Streamlit (all screenshots by Author) If you’ve ever wanted to take the machine learning models or data visualisations you’ve created and turn them into web apps for other people to view and interact with, at some point you probably felt a little lost. You might be an excellent data scientist, very familiar with the tools you need to wrangle data, extract insights, build visualisations and create models, but putting those into production takes more. As Adrien Treuille, co-founder and CEO of Streamlit, put it, “Machine learning engineers are actually app-makers.” Streamlit is a relatively young framework that offers a breath of fresh air and a way to create beautiful data-driven apps really quickly and easily. So what problem does it solve, who is it meant for, and should you use it? The Problem The machine learning/data science pipeline looks something like this: Data > Training > Model > Production Throughout the process, it is often the case that many bespoke solutions have to be engineered, from prototypes to demos to dashboards and so on… Whether you’re building an enterprise solution that will be used widely by salespeople within your company, or just working on a personal project, this requires software engineering. An Example Adrien Treuille recalls when he worked at Zoox, and about 80 Machine Learning Engineers were building self-driving car systems. They were doing everything: planning, vision, pedestrian detection, the whole stack. He makes the example of a type of tool the engineers might need to build: an app to run two self-driving systems at the same time and compare them. This would probably start out as a project for a single engineer, written in a Jupyter Notebook, copied and pasted into a Python script, pushed to GitHub and built out with a framework like Flask. But suppose the app becomes an important part of the team’s workflow and suddenly more engineers need to use it and features need to be added, but it wasn't designed to be a well-polished, extendable piece of software. This is what Adrian calls “the unmaintainability trap”. At this stage, the Machine Learning Engineers would call in an internal team who were essentially experts in building web apps. This team would collect requirements of the system, wireframe it and develop it in React or Vue with JavaScript, Python, CSS, HTML etc., ultimately creating a stunning and well-designed app. But then they’d need to move on to a different project, support a different team, and the app would be in “the frozen zone” until they could return to it. Even if you’re working on a solo project and not in a large company, this dependency on software engineering to deploy your app can make life very difficult. The Solution Streamlit is an app framework that asks the question: What if we could make building tools as easy as writing Python scripts? The goal is to shift the web app-building philosophy from starting with a layout and developing an event model, to a Python script-esque top to bottom execution, data flow transformation style that data scientists should be very used to. In Adrien’s own words, a Streamlit app is “basically a data script that has been slightly annotated to make it an interactive app”. I can tell you from personal experience that Streamlit delivers on this promise. You can even take a Python script that already exists and turn it into a web app simply by adding a few Streamlit calls, all in a single file. Is it Really That Simple? Yes! 
Check this out: Want to create a slider to choose a value to input into a model? val = st.slider() # Use val You just declare a variable and set it equal to Streamlit’s slider widget, then adjust some parameters to your liking. This will display the slider in your app and set the value of val to whatever the user slides to. It’s just as easy to create buttons, checkboxes, radios, select boxes, multi-select boxes and more.
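To show how little glue a whole app takes, here is a complete toy example of my own (not from the article) in which a slider drives a chart. Save it as app.py and run streamlit run app.py:

```python
# Minimal Streamlit app sketch: one slider, one chart, no layout code.
import numpy as np
import streamlit as st

st.title("Random walk explorer")

# The slider renders in the app and returns whatever value the user picks.
steps = st.slider("Number of steps", min_value=10, max_value=1000, value=200)

# The script reruns top to bottom on every interaction, like a plain data script.
walk = np.cumsum(np.random.randn(steps))
st.line_chart(walk)
st.write(f"Final position after {steps} steps: {walk[-1]:.2f}")
```

Every interaction reruns the script from top to bottom, which is exactly the data-script mental model described above.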
https://towardsdatascience.com/the-best-data-science-framework-youve-never-heard-of-baf19120621c
['Aden Haussmann']
2021-07-05 12:48:45.351000+00:00
['Machine Learning', 'Data Science', 'Streamlit', 'Web App Development', 'Python']
Design thinking workshop — step by step guide
Usually, large companies will have multiple user groups, products, and an abundance of problems and questions. In those cases it's better to run high-level interviews and workshops focused on framing the problem first, and only after that do you plan follow-up workshops to find solutions to those problems. 3. Always run pre-workshop research You don’t want to have to ask basic questions just to get up to speed with the audience. Researching as much as you can before the workshop will help you identify the knowledge gaps that you need to close. It also helps you build credibility quickly, which is essential to coordinating the group effectively. 4. Choose the relevant activities After you have set a clear objective, it's time to pick the right tools (activities) to reach it. The choice of activities will depend not only on the expected output but also on the time you will have and the participants you can invite. When you need to build Empathy It is one of the cornerstones of design thinking. Empathy is the capacity to understand or feel what another person is experiencing from within their frame of reference, that is, the capacity to place oneself in another’s position. Building empathy is a great starting point for your workshop. It will help you understand your customers: what they are trying to achieve, what drives them, and what challenges they face in the process. Some activities that will help you build empathy: Proto Personas, Empathy Map, Journey Map, Jobs To Be Done, VPC, Customer Profile.
https://medium.com/windmill-smart-solutions/design-thinking-workshop-step-by-step-guide-428171c2adee
['Taras Bakusevych']
2021-09-10 11:39:27.670000+00:00
['Design', 'Workshop', 'Design Sprint', 'Design Thinking', 'Facilitation']
Are We “Making a Living?”
Photo by Markus Spiske on Unsplash Two old friends met after not having seen each other for years. “Bob!,” said Frank. “It’s been such a long time. How’re ya doing?” “Well, Frank,” replied Bob, “I guess I’m making a living.” And that was the typical opening of a dialogue between old friends, former neighbors, former co-workers, or members of a church or synagogue. Indeed, that was part of the social glue that bonded people in an earlier time. What is missing from that exchange today is one little item: the truth. Much of America’s Gross Domestic Product used to be based on manufacturing. This process included taking raw materials from the soil and gradually converting them into finished products that would be sold around this country and, later, around the world. Photo by JuniperPhoton on Unsplash In the “glory days” when America’s manufacturing was the behemoth of its economy and of exceptional importance and reputation around the world, no one cared about how we poisoned the air we breathed, the water we drank, or the soils we farmed. Our sole goal in this period from 1936 to about 1960 was to dig, convert, finish, and sell. We had no purpose other than to expand our markets and enhance our profit and loss statements. On September 27, 1962, Rachel Carson published Silent Spring, an environmental science book that studied the horrifying effects of our use of pesticides. Based on years of studies, Carson found out that the big chemical companies (DuPont, Monsanto, and others) had been assiduously lying to us in their marketing efforts to get us to use more pesticides. While Carson was an early “canary in the coal mine,” she was not alone. The following year, Secretary of the Interior Stewart Udall published The Quiet Crisis, which described the dangers of pollution, overuse of natural resources, and dwindling open spaces. Along with Silent Spring, Udall’s The Quiet Crisis is credited with creating a consciousness in the country that led to the environmental movement. Udall was a staunch supporter of Rachel Carson and her work. Stewart Udall once stated, “Plans to protect air and water, wilderness and wildlife are in fact, plans to protect Man.” In The Quiet Crisis, Udall discussed two myths that formed the backbone of American thinking about Man and his environment. The Myth of Unlimited Natural Resources. Because Europeans coming to North America had no idea of its magnitude, they felt that its natural resources were unlimited. As a result, they believed they had carte blanche to plunder, pollute, abandon, and move on. Only when we reached the shores of the Pacific Ocean did we realize that our terrestrial borders were fixed. However, the mentality of abusing our land was already deeply engrained and we felt we still had no real limits on what we could do to extract gold, oil, coal, and other vital resources. The Myth of Scientific/Technological Supremacy. Arising out of the Rationalist Movement within Europe’s Age of Enlightenment, we thought that no matter what problem Mankind created, there would be a scientific or technological solution to it. It was that sort of thinking which prompted America’s military to develop dichlorodiphenyltrichloroethane, commonly known as DDT. That pesticide was sprayed all over the place — rivers, lakes, ponds, steams, any and all kinds of standing bodies of water — in order to eradicate the malaria-carrying mosquitos that were annually causing millions of deaths around the globe. What our scientists didn’t know was that DDT was affecting birds. 
When DDT-affected birds laid their eggs, the shells were either incomplete or were so thin that when the mother birds sat on their nests, their weight crushed the shells, thus killing their offspring. As a result, there were fewer birds and, in turn, there were fewer enemies of pestiferous insects, meaning there was more crop damage and monumental losses to agriculture. The same principle held true with thalidomide, a tragedy which caused many thousands of children in Europe and the United States to suffer from phocomelia, resulting in the shortening or absence of limbs. The large pharmaceutical companies (“Big Pharma”) suffered revenue losses from the banning of many dozens of their drugs by the FDA and its European, Asian and Australian counterparts. In order to combat the threat to their bottom line here in America, Big Pharma started to partner with corporate farms, such as Archer Daniels Midland. In the late 1970s and early 1980s, a “Toxic Trio” of Big Food, Big Farming, and Big Pharma started to band together, providing us with high-calorie, high-sugar, high-salt and highly-processed foods that push more and more of us down the path towards obesity, diabetes, cardiac failure, and early death. So, in the pursuit of profits, the “Toxic Trio” formed an unholy alliance where Big Food companies (McDonald’s and the other national or international chains) joined Big Farming (ADM, Conagra, General Mills, Dow Chemical, Monsanto, J.M. Smucker) and Big Pharma (Bristol Myers Squibb, Amgen, Johnson & Johnson, Pfizer) in creating a “toxic stew” of pesticides, chemicals (anti-bacterials, growth hormones), and genetically-modified organisms which are fed to the U.S. population with light regulatory oversight. The result? Roughly three out of four adults in the U.S. are overweight or obese. More and more people are headed to the cemetery at a faster and faster pace. Our quality of life is deteriorating. More and more people are suffering from inflammation, anxiety, ADHD, autism, and many other disorders that were only marginally present 50–60 years ago. Now, monolithic producers such as Tyson are putting out huge amounts of highly-processed chicken that bears little resemblance to the barnyard fowl we thought we were getting. Raised in factory farms, these birds are incubated, raised and slaughtered with only one goal: profit. Likewise, beef and pork products are raised under inhumane conditions, pumped full of anti-bacterial medications, growth hormones, and other chemicals. Parents these days give their kids w-a-a-a-y too much milk. With all the garbage in milk, it’s not surprising that girls as young as nine and 10 are now displaying secondary sexual characteristics (development of breasts, pubic hair) and are starting their menstrual cycles at 10 and 11 instead of 14 or 15, as was the case for many generations. That’s all because of a diet full of growth hormones. But it’s not just the foods we eat; the makeup and clothes we wear; or the fragrances we use. It’s also the vehicular traffic; the construction patterns; the workplace tools; and the spewing of nitrogen and phosphorous into our soils and waters, the plastics that go into our landfills, rivers and oceans, and smog that goes into our air. The chemicals to which we’re exposed have turned our bodies into repositories of toxins, and they persist in their effects. Why do We do These Things? We are an immature society. We haven’t, as a general rule, really developed philosophical skills or tendencies.
We value the latest goodies and gadgets, trinkets and toys, but we haven’t really learned on a broad, society-wide basis to value human life above all else. That would explain our addiction to cell phones, to cars, to trucks, to material acquisitions, to “bright and shiny objects.” We assume that we are the same as our jobs — at least that’s the prevailing view — and we feel that our identities are severely threatened if we lose our job. So, many of us will make moral compromises in favor of keeping our jobs and losing our humanity. We say we value our children and grandchildren, but when it comes to making the sacrifices that would give them a real chance to survive and even thrive, we abdicate our responsibilities there in order to keep our jobs. Instead of saying, “I’m making a living” the honest answer is “I’m making a dying.” What’s the Solution? People act in accordance with how they’re incentivized. Reward someone for being a liar and a corrupt buffoon, and you’ll wind up with Donald J. Trump. Reward them for honesty and integrity, and you’ll have the many good people who are challenging Trump (Sens. Harris, Sanders, Booker, and Warren, for example). There’s been substantial talk, and detailed plans, about the Green New Deal. Is it perfect? No. But does it give us a chance to massively fight the Global Climate Crisis, revive our economy, put many people into great occupations and professions of which they can be proud, and restore America’s leadership in the eyes of the world? Absolutely. I would argue, therefore, that pursuing the Green New Deal is the paramount obligation of the new president, of Congress, of every statehouse, and every mayor in this country. I would argue that the American people have an extraordinary opportunity to boldly lead the world by the sacrifices they’re willing to make, and the new direction they’re willing to take, in saving this planet — and themselves at the same time. And it’s their opportunity to truthfully state: “I’m making a living!”
https://swatkinslaw.medium.com/are-we-making-a-living-c9a93aba319f
['Stephen P. Watkins']
2019-10-19 20:35:32.217000+00:00
['Environmental Protection', 'Jobs', 'Food', 'Extinction Rebellion', 'Green New Deal']
“Abraham, Martin and John” — Marvin Gaye
Although not a name you hear much any more, without Marvin Gaye we might never have experienced Motown Records as we know it and love it. Starting out in the early 1960s, Marvin Gaye was one of the key people, along with Smokey Robinson and Berry Gordy himself, who made Motown label such a success. He started out as a session drummer, producer and composer for groups like The Miracles and The Marvelettes, one of whose early releases “Please, Mr Postman” helped put Motown on the map, before being persuaded to perform himself. And it’s just as well he did, for the world would have been a much poorer place if Marvin Gaye had never stepped to the front of the stage. When you say “Marvin Gaye” to most people these days, they’re likely to think of his record “I Heard It Through The Grapevine”…unless, that is, you’re speaking with a young person who is more likely to think of the record called “Marvin Gaye”, a tribute to the great man’s style by Charlie Puth and Megan Trainor from a couple of years ago… Although if you’re going to be remembered for any individual song, “I Heard It Through The Grapevine” is by no means the worst song to have as your epitaph. It would be much worse if you were Joe Dolce, for example. With music and lyrics by superstar Motown songwriters Norman Whitfield and Barrett Strong, Marvin Gaye’s version of “I Heard It Through The Grapevine” was Motown’s third attempt at making this song a hit. Smokey Robinson and the Miracles tried it first but Berry Gordy stopped their version even reaching the record shops. Gladys Knight and the Pips then had a go — their version made it to Number Two on the Billboard chart. A decent enough result you might think…until Marvin Gaye’s version the following year stormed to Number One and stayed there for seven weeks. Marvin Gaye also had significant success in a series of duets with some great female singers, including “It Takes Two” with Kim Weston and “Ain’t No Mountain High Enough” and “You’re All I Need To Get By” with Tammi Terrell. You might have thought by now he had enough success for anyone to be going on with, but after Tammi Terrell’s brain haemorrhage stopped her touring and performing, he moved into what I consider to be the most important part of his career. Marvin Gaye was one of the first chart-topping artists to write and perform songs with a strong social conscience. But his unique trick was that the songs he wrote and performed were so beautiful, with his sweet, soaring vocals and the wonderful musicians who together comprised the Funk Brothers performing at their very best in the background that, unless you listened very, very carefully you might not realise what the songs were about at all. The way he delivered social conscience on a hit record almost subliminally has never been bettered. “What’s Going On” and “Mercy, Mercy Me (The Ecology)” are in themselves beautiful songs. But I talk to plenty of people who are surprised to learn that those lyrics were inspired by the civil rights movement and the inner-city riots in the US during the late 1960s. But for me, Marvin Gaye’s best song, and in many ways his most sobering commentary of all, was one he didn’t write himself… “Abraham, Martin And John”. Dick Holler wrote the music and lyrics for “Abraham, Martin And John”, which was his only significant hit as a songwriter. It was first recorded (in a version with slightly different words) by Dion, who’d achieved success in the early 1960s with hits like “Runaround Sue” and “The Wanderer”. 
The genius of the Marvin Gaye version, though, was making it into a much simpler song and in the process…paradoxically, you might think…making it even more poignant and powerful. The song was written about four heroes of the struggle for civil rights and emancipation — Abraham Lincoln (who freed the slaves), John F Kennedy and his brother Bobby (who between them did so much to put the civil rights legislation in place in 1960s America) and Martin Luther King. All those fine people shared one thing in common, apart from their concern for equality between people irrespective of the colour of their skin and the flavour of their beliefs. Tragically, all had their lives cut short by an assassin’s bullet. It is perhaps a tribute to their work that, in the end, the world became more like their vision for it than the hatred and segregation preached by those who had so cruelly ended their lives. In the end, the good guys won…but at a significant cost… The lyrics for “Abraham, Martin And John” are so powerful precisely because they are so simple. All four verses follow exactly the same structure, except for a change to the name in the first line of each verse. It’s an enormous tribute to Marvin Gaye’s vocal prowess that he very subtly changes the delivery for each verse, which keeps the listener’s interest in the vocal right the way through despite singing almost exactly the same words over and over again. Each verse starts with a poignant question… Has anybody here seen my old friend Abraham Can you tell me where he’s gone? He freed a lot of people But it seems the good die young I just looked around and he was gone There’s also an intriguing mystery to the title of “Abraham, Martin and John”. Based on the verses themselves, it should have perhaps been titled “Abraham, John, Martin and Bobby” as those great men are sung about in a different order to the title and Bobby Kennedy didn’t make the title at all. I’ve long wondered whether leaving Bobby Kennedy’s name out of the title was deliberate. Whilst most people have heard of his brother John F Kennedy, Bobby Kennedy is much less well-known today and certainly lacks the iconic status of his brother in popular culture. Yet it was actually Bobby Kennedy, whilst US Attorney General working for Lyndon Johnson after his brother’s assassination, who got the Civil Rights Act onto the statute books. Bobby Kennedy’s role in the struggle is often forgotten (as, in fairness, was Lyndon Johnson’s who invested a significant amount of his own personal credibility to get the legislation passed…perhaps surprisingly for a former Senator from the South in the early 1960s). I’ve always liked to think that Bobby Kennedy’s omission from the title of “Abraham, Martin And John” is, in its own way, some sort of commentary on that injustice. Whether or not I’m right about that, today’s song combines some poignant lyrics, sweet, soaring vocals and some of the best strings ever put onto a record. Ironically, Marvin Gaye also died too young. And also by a bullet, although in this case one fired by his own father in the context of a family dispute rather than an assassin. But the tragedy of a life cut short too young is just as poignant in his case as it was for those giants of the civil rights movement Marvin Gaye sings about in “Abraham, Martin And John”. I’m always humbled by the beauty and mastery of today’s song…I hope you are too. 
Here’s Marvin Gaye with “Abraham, Martin And John”…(and let’s not forget Bobby too…) If you’ve read this far, thank you for spending a few moments in the company of one of my favourite songs. The video is below, but if you prefer listening to your music on Spotify, you can find today’s track here… https://open.spotify.com/track/3zOwKyf9QND3gMCveQLFHt
https://nowordsnosong.medium.com/abraham-martin-and-john-marvin-gaye-6f6bead10786
['No Words', 'No Song']
2019-08-03 07:47:35.988000+00:00
['Music']
Addressing the Handover
I don’t like the word handover. In design circles it implies the end of involvement for one group and the start for another. Designers will create a suite of assets, whether it’s a style guide or page designs, and then hand those over to a developer to be coded. The designer dusts his hands and moves onto the next task, while the developer gets the design equivalent of a bucket of cold water to the face and is told to get cracking. For some reason, we expect developers to see all the countless micro-decisions the designer has made, understand the designer’s intent, and execute in accordance with it — even when the delivered files have unintentional discrepancies… (sorry designers, but you know it’s true). This traditional waterfall process is still ever present, and common amongst a lot of digital agencies. It’s another hangover from the old creative agency model. But even the often lauded agile method has limitations, and doesn’t solve the issue of asset handover and collaboration. Not fully. The main problem is that there is a gap between those who do the thinking and those who are responsible for the doing. We’re not really working as a team. Typically, there might be a group of strategists, responsible for setting up the vision or proposition for a project or product. Then there are designers who ideate concepts for an interface or design aesthetic. Then we have the developers who are responsible for bringing all of the above to fruition. Projects will often work their way through teams in this manner and in this order. It’s rare that you find a developer involved in the ideation process, rarer still to find them included in the strategic thinking. This is not how good teams work. The most effective teams have a shared understanding, language and goal. Sports teams are a great example of this. They can only be successful when they work together. They communicate often and openly, spending time getting to know how the other thinks and plays so that they can facilitate better for one another. They have a single purpose, and work together using each other’s individual skills to achieve a common goal — without a big handover. They are in-sync. Project teams are no different. We all have a common goal of creating a successful, useable and beautiful product for our client and the real people that use it, in the least stressful way possible. To do this we need to be inclusive, contributing to the entire project cycle through the lens of our individual disciplines. Only when we collaborate in this way will we be truly efficient and successful. This collaborative culture spells the beginning of the end for the unhelpful handover. The first step to greater collaboration across disciplines is understanding. It’s no longer viable for disciplines to be specialised in execution at the expense of understanding each other’s craft. I’m not saying that teams should be multi-disciplined in execution, in fact people who heavily differentiate their skills usually end up with one of them suffering as a result (jack of all trades, master of none). But like any great sports team, we need to have a clear understanding of how each other works and what we need from one another in order to do the best work we can. Fortunately, there are some simple steps that can help steer your project team in the right direction.
https://uxdesign.cc/addressing-the-handover-3f874e1e96d4
['Jonny Gibson']
2017-11-09 17:48:00.542000+00:00
['Development', 'Teamwork', 'Design', 'UX', 'Psychology']
The Oceans Are Our Best Hope
Recently, I have been exploring small technology-based startups at the forefront of climate change mitigation innovation. I have been doing this to further my own understanding of the current reality of climate solutions, but also so I can discern where my efforts will be best placed in the future; I am trying to understand in which sectors my career efforts will have the most impact. One of my recent calls led me to investigate a unique mitigation angle: that the oceans are our best hope for mitigation. Amazingly, the ocean sequesters as much as a third of the carbon emitted by anthropogenic activity each year. This equates to approximately two billion tons of carbon each year. Yet there is little talk of oceanic mitigation measures, and also little innovation happening. I only managed to identify two main oceanic sequestration strategies, highlighting either the lack of focus on this area or the poor potential for diverse sequestration strategies. These strategies are direct injection and ocean fertilization. Direct injection is a very hard-engineering approach to ocean sequestration. It requires the capture, separation, transport and injection of carbon dioxide from the atmosphere into the deep sea. Immediately, it is very obvious that this overall process is super energy heavy. Until our world has a reliable clean energy system, this cannot be considered a reliable mitigation effort. On the other hand, ocean fertilization — increasing the population density of carbon-sequestering phytoplankton at the ocean’s surface — is far more viable. Although the UN placed a moratorium on the process in 2008, the science is now there. The “key to producing net carbon sequestration” is to cause upwelling of deep, nutrient-rich water in low-nutrient, low-chlorophyll ocean gyres. Ocean-based Solutions is one company that is spearheading the development of feasible and effective ocean fertilization. They have developed an autonomous, wave-powered pumping technology designed to sit in groups in these ocean gyres and pump nutrient-rich water to the surface, thus stimulating phytoplankton growth. Each group of ten pumps is paired with an ARGO sensor: a profiling float that accurately transmits oceanic conditions in real time, including carbon dioxide content. Through the combination of these technologies, Ocean-based can monitor the exact efficacy of their pumping efforts and how this changes with varying scale. The company intends to target large corporations that have committed to carbon neutrality by offering their technology as an effective carbon offset whose real-time effect can be accurately validated. This company and technology are very exciting, and I look forward to tracking their progression as they attempt to scale.
https://medium.com/you-change-earth/the-oceans-are-our-best-hope-9aa711ccbb15
['Alec James']
2021-04-26 17:12:46.179000+00:00
['Climate Crisis', 'Climate Change', 'Oceans', 'Climate Action']
Basic Trajectory Prediction in Unity
Over the weekend, Austin Mackrell and I wanted to try out implementing the prediction and visualization of a trajectory in 2-D before it fires. Simply put, we’re firing an object through a gravity affected scene with a certain force, and we want to predict and show the path it would take before we fire it. We saw a ton of articles about it and have seen a number of games implement it in some way. There seemed to be two popular ways to do it, each having its own strengths and weaknesses. Basic intention of Trajectory Prediction Option One is to manually calculate how each of the Physics variables, like Force, Mass, Velocity, and Gravity would affect the object. With all the proper information and formulas, one can accurately forecast this. Option Two is to create a parallel scene in which physics are drastically accelerated. Every time you show intent to launch the object in your main scene, you’re essentially launching the object in an duplicated scene, which is completing the action at an almost instantaneous speed. Main Scene passes launch information into Duplicate Scene, and Duplicate Scene passes the results of that launch back to Main Scene. I chose Option One because I like how Math can in theory, perfectly calculate things for us in front of our eyes. Austin chose Option Two, and we thought it’d be interesting to illustrate and contrast the two. It sounded easy. Most people will know Trajectory Projection from Angry Birds It sounded easy until I attempted it. The bad news: there aren’t any mainstream methods of predicting this for us within Unity, we’ll have to build it ourselves. The good news: Many have been here before us. Before video games, before computer simulation and before coding, we’ve had science. Mathematicians and physicists have been at this since long before us. As early as four hundred years ago, Galileo was demonstrating accurate Parabolic Trajectory. f(t) = (x0 + x*t, y0 + y*t - g*t²/2) This is our mother equation. Looks complicated, I know. Let’s start basic. Take an object moving at a fixed directional speed through a two dimensional space, not being affected by gravity. I fire a bullet upwards at a speed (velocity) of 1x and 2y per second. The multitude of terms here can get dizzying quick. One helpful thing to note is the synonymity of Vector Velocity and Direction. Our object is traveling 1x and 2y every second, and both of these can be stored in one variable, a Vector2. Therefore, our Vector Velocity is (1ₓ, 2ᵧ) per time unit. Our Direction is also a Vector2 of the same ratio as V.V. The useful distinction is that we’ll normalize the ratio for Direction, which simplifies it into a magnitude of 1. What this boils down to is: Our V.V. is (1ₓ, 2ᵧ), normalized via Unity, returns ~(0.45ₓ, 0.89ᵧ), our Direction. Direction will be particularly useful later when we’re applying Force to our object. Notice they retain the same ratio; Y is twice as much as X. This is a feature in Unity that is inherently used often, even though you may not notice. Ever use transform.up, left, right, down, forward, or back? They’re constantly referenced in Unity, because the information they hold is very useful. Serialization of transform.up for this object. Do it some time and see for yourself. Transform Directions are always returned to us in magnitudes of 1, meaning the maximum value for X and Y are 1, when the other is 0. The same applies for all transform.directions, AKA the red, blue, and green axes of any object. 
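The project itself is written in Unity C#, but the arithmetic above is easy to sanity-check in plain Python. This is just a quick sketch of the normalization, not Unity code:

```python
# Quick check of the velocity-vs-direction arithmetic above. Plain Python
# rather than Unity C#, purely to verify the numbers.
import math

def normalize(x, y):
    """Scale a 2-D vector to magnitude 1 while keeping its ratio."""
    magnitude = math.hypot(x, y)
    return x / magnitude, y / magnitude

vx, vy = 1.0, 2.0                    # vector velocity: 1x, 2y per second
dx, dy = normalize(vx, vy)
print(round(dx, 2), round(dy, 2))    # 0.45 0.89 -- same 1:2 ratio, length 1
print(math.hypot(dx, dy))            # ~1.0
```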
TANGENT TIME: Ever wonder why the optimum angle for launching objects is 45°? Probably not, but I’ll tell you anyway. Notice at the end of the GIF where I set the Rotation to -45°. Transform.Up has X and Y equal at about 0.707 each. Add these together for a sum of ~1.41. No other possible sum of X and Y normalized is greater than 1.41. It’s the best angle for applying force, because both axes are receiving the highest possible force. If you feel like 1.41 sounds familiar, it’s the square root of two. It’s the length of a hypotenuse whose triangle has minor sides equal to 1, with minor angles of, you guessed it, 45°. No coincidence that it’s a significant number here. Going off of this, we can predict where the object will be by just asking how much Time has passed. I want the position of X after 5 seconds, so I’ll take the Origin of X and add the xVelocity * 5. Do the same to Y. x(t) = x0 + x*t y(t) = y0 + y*t Combine into one Vector2 — f(t) = (x0 + x*t, y0 + y*t) So at any given point in time, add the Velocity * Time to the Origin. The green line represents all positions at a Velocity of 1x, 2y. Converted to slope formula, which is represented in terms of Y, it’s y=2x (b is 0), or Y is traveling twice as fast as X. Follow the green line. Every second, we’re traveling 1x and 2y. At 2 seconds, we’re at 2x and 4y. At 5, we’re at 5x, 10y. We can predict every point of this trajectory at any point in time. There are infinite points.
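Putting the mother equation to work, predicting the whole arc is just a matter of sampling f(t) at increasing times. Again, this is a plain-Python sketch rather than Unity code, and the launch values are made-up examples:

```python
# Sample the trajectory equation f(t) = (x0 + vx*t, y0 + vy*t - g*t**2 / 2)
# at fixed time steps. Plain Python sketch, not Unity code.
def predict(x0, y0, vx, vy, duration, step=0.5, g=9.81):
    """Return (t, x, y) points along the predicted arc."""
    points = []
    t = 0.0
    while t <= duration:
        x = x0 + vx * t
        y = y0 + vy * t - 0.5 * g * t * t
        points.append((t, x, y))
        t += step
    return points

# Gravity off (g=0) reproduces the straight-line example above:
for t, x, y in predict(0, 0, 1, 2, duration=5, step=1, g=0):
    print(t, x, y)   # (0,0,0), (1,1,2), (2,2,4), ... (5,5,10)

# With gravity on, the same launch bends into a parabola:
for t, x, y in predict(0, 0, 1, 2, duration=1, step=0.25):
    print(t, round(x, 2), round(y, 2))
```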
https://medium.com/@schatzeder/basic-trajectory-prediction-in-unity-8537b52e1b34
['Dan Schatzeder']
2020-12-16 19:08:27.989000+00:00
['Game Development', 'Software Development', 'Unity', 'Gamedevhq', 'Physics']
5 Keys to Campaign Strategy #3: Undecideds in Polls
5 Keys to Campaign Strategy #3: Undecideds in Polls In the first two entries in this series on campaign strategy, we’ve seen how politicians can more efficiently use the tools in their toolkit by understanding concepts like elasticity, expected margin impact, and the persuasion-GOTV matrix. We’ve also learned how everyday citizens can become smarter consumers of political news by understanding that there are two very different kinds of swing states in America. To continue down that path, I want to use this piece to explain how we miss a key factor when we read election polls and how that leads voters, election forecasters, pundits, and politicians to make crucial mistakes. Not all 4-point leads are created equal Toward the end of the 2016 campaign cycle, Hillary Clinton was leading Donald Trump in key swing states by 3–6 points, per FiveThirtyEight’s polling averages. She famously went on to lose the “Blue Wall” states of Michigan, Pennsylvania, and Wisconsin by a hair. Joe Biden is now leading by slightly wider margins in these swing states, but I hear a lot of worried Democrats chastising anyone who dares to claim that Biden is in a better position than Clinton was. “Biden is up in the polls, but so was Hillary!” That analysis misses the mark because it overlooks a key factor: undecided voters. Suppose your candidate is up 4 points in the polls. It could be that they’re up 47–43, or it could be that they’re up 51–47. There is a huge difference between these two scenarios: If you’re up 47–43, a full 10% of voters are undecided. If 8 of the 10 swing against you, you’ll lose 49–51. Meanwhile, if you’re up 51–47, just 2% of voters are undecided. You’ll still win 51–49 even if all the undecideds move to your opponent at the last minute. The two very different types of 4-point polling leads. In other words, there’s a hidden number in polls: the percent of undecided voters. The more undecideds there are, the shakier any lead is. Depending on the number of undecideds, a 4-point lead could be either brittle or rock-solid. Polling sites often report just the partisan advantage: e.g. “The Republican is up by 2 points.” But that’s only half the equation. You won’t know the true state of the race unless you figure out how many undecideds there are. 2016 versus 2020 The big difference between 2020 and 2016 is that 2020 has far fewer undecideds than 2016, and thus Biden’s leads are much more solid than Clinton’s were, even though both leads are of similar sizes. In 2016, Clinton’s leads were of the 47–43 variety: she hit the mid-40s, but there were enough undecideds that she couldn’t break the crucial 51% threshold. In 2020, Biden’s leads are of the 51–47 variety: similar lead size, but with far fewer undecideds. Biden’s lead is thus a lot more solid than Clinton’s. By one estimate, only 6.4% of voters in battleground states are undecided or planning to vote third-party, compared to 17.8% in 2016. More surprisingly, Biden has 50%+ in the polls in nine battlegrounds: Maine, Virginia, New Mexico, Colorado, Michigan, Pennsylvania, New Hampshire, Wisconsin, and Nebraska’s 2nd Congressional district. Clinton had 50%+ in exactly zero battlegrounds. What everyone got wrong in 2016 Knowing this, we can also push back on the narrative that “the polls were terribly off in 2016.” Pollsters did make a mistake by failing to weight for education, which led them to systematically underrate Trump’s support. 
But if you look at the data, you see that pollsters captured the state of the election quite accurately: Clinton led among decided voters, but there were a ton of undecided voters. If you compare the 2016 polls to the 2016 results, you’ll see that both candidates gained support on Election Day as undecideds finally made up their minds. It’s not like the polls overestimated Hillary’s support. The shocking thing was that undecideds swung incredibly hard toward Trump, probably due to Comey’s letter, giving him a net gain of about 4 points in several key swing states. That was enough to put him over the top. Undecideds broke hard toward Trump, giving him an average boost of 4 points. This was enough to overcome Hillary’s slight lead in the polls. Sources: FiveThirtyEight and Wikipedia In 2016, the polls weren’t necessarily wrong. They just didn’t account for undecideds breaking toward Trump. Hillary’s 3–4 point edges couldn’t withstand the last-minute shift. The mistake of even breaks The polls told us that there were a lot of undecided voters, so we should have understood that they could easily break toward Trump, overcoming Clinton’s polling edge. Instead, most forecasters (and casual election-watchers, too) just assumed that the undecideds would break evenly, preserving Clinton’s slight edge. For instance, FiveThirtyEight saw that Clinton was up 44.8–40.8 in Michigan polls, with 5.4% for the Libertarian and 9% undecided. Nate Silver and co. guessed that 3.5% of voters would break for Clinton, and the same number would break for Trump. That would lead Clinton to maintain her 4-point lead on Election Day. FiveThirtyEight’s final projection for Michigan in the 2016 Presidential race. That wasn’t a bad assumption, but as we saw, it ended up being wrong. Instead of breaking 3.5–3.5, undecideds broke something like 1.5–5.5, enough to give Trump his narrow victory. Some caveats Hillary’s loss in 2016 was due to a combination of many, many factors. I argue that undecideds breaking against her in the last few weeks was a major factor, but there were doubtless many more. Many complacent Democrats stayed home; pollsters didn’t weight by education and thus underestimated Trump’s support in the Rust Belt; there may have been “shy Trump voters” who didn’t tell pollsters they supported him; Russia hacked state election systems; and so on. What’s more, just because Biden is above 51% in many swing states doesn’t mean that his victory is a done deal. Polls don’t count; only votes do! Remember to visit vote.org to find out where, how, and when you can vote. Conclusion Both the theory and the 2016 example show us that we can’t just look at the top-line results in the polls (like “Democrat +4”). If we ignore or downplay undecided voters, we might miss what’s really going on. This is significant for state parties, national parties, and folks looking to donate as well. If your candidates are leading in the polls, you need to figure out whether they have a 47–43 edge or a 51–47 edge. Or, if they’re trailing, you need to figure out whether they’re behind 43–47 or 47–51. The more undecideds there are, the more financial and other support a candidate needs. In part 4 of this series, we’ll zoom out to explore why gerrymandered states can suddenly become competitive, how you can predict these events, and why certain gerrymanderers are bad at their jobs. Stay tuned.
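To make the undecided-voter arithmetic above concrete, here is a small TypeScript sketch. It is purely illustrative: the break fractions are the hypothetical splits discussed in this piece, not real polling data.

```typescript
// Illustrative only: the splits below are hypothetical, not real polling data.
interface Poll {
  a: number; // percent supporting candidate A
  b: number; // percent supporting candidate B
}

// Final margin for A when the undecideds break toward A by the given fraction.
function finalMargin(poll: Poll, breakTowardA: number): number {
  const undecided = 100 - poll.a - poll.b;
  const finalA = poll.a + undecided * breakTowardA;
  const finalB = poll.b + undecided * (1 - breakTowardA);
  return finalA - finalB;
}

// Two very different "4-point leads".
const brittle: Poll = { a: 47, b: 43 }; // 10% undecided
const solid: Poll = { a: 51, b: 47 };   //  2% undecided

console.log(finalMargin(brittle, 0.2)); // -2: 8 of 10 undecideds break the other way -> lose 49-51
console.log(finalMargin(solid, 0));     // +2: even if every undecided breaks the other way -> win 51-49

// The 2016 Michigan figures quoted above: Clinton 44.8, Trump 40.8, ~9% undecided.
// FiveThirtyEight's working assumption of an even 3.5/3.5 split keeps a ~4-point lead...
console.log(44.8 + 3.5, 40.8 + 3.5); // ~48.3 vs ~44.3
// ...while the ~1.5/5.5 break that actually happened wipes the lead out.
console.log(44.8 + 1.5, 40.8 + 5.5); // ~46.3 vs ~46.3
```

The output makes the point: the same top-line lead either survives or collapses depending on how many undecideds there are and how they break.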
https://medium.com/@neelmehta/5-keys-to-campaign-strategy-3-undecideds-in-polls-ffd59d8be859
['Neel Mehta']
2020-10-27 13:17:43.732000+00:00
['Undecided Voters', 'Fivethirtyeight', 'Political Science', '2016 Election', '2020 Election']
It’s Christmas. We want gifts! We want gifts!
Everyone exclaims contentedly It’s Christmas! It’s Christmas! Where does everyone go anyway? They hurry to the streets For decorated stores They think about Christmas dinner, drinks And lit trees Everyone wants Santa Claus That in pomp descends from the heights Oh! Oh! Oh! So the little ones learn In delight, they throw kisses and they shout We want gifts! We want gifts! Very expensive and attractive And everyone forgets about the boy That poor little one! Who had nothing from his own Only a little straw crib where He was born And the cross where he died As Saint Teresa used to say And that even in His extreme poverty Was King of the kingdom that was not of this world Who can understand this deeper mystery? And many do not know Him Barely they have heard Of a nice little baby Who would come to save us Who slept warm in Maria’s lap Illuminated by the guiding star And me here in my selfish little world Consumerist and so material I dare to say: Dear Jesus Happy Christmas! É NATAL Toda a gente exclama contente É Natal! É Natal! Aonde vão todos afinal? Acorrem sôfregos para a rua Para as lojas enfeitadas Pensam em ceias, bebidas E árvores iluminadas. Todos querem o Papai Noel Que em pompa desce do Céu Oh! Oh! Oh! Assim aprendem os pequenos Em delírio jogam beijos e acenos Queremos presentes! Presentes! Bem caros e atraentes. E todos se esquecem do Menino Daquele pobre pequenino Que só mesmo teve de seu O presépio onde nasceu E a cruz onde morreu Como dizia Santa Teresa E que mesmo em sua extrema pobreza Foi rei de um reino que não era deste mundo E quem pode compreender mistério mais profundo? Muitos não o conhecem Vagamente ouviram contar De um bebezinho bonzinho Que viria nos salvar Que dormia quentinho no colo de Maria Iluminado pela estrela guia. E eu aqui no mundinho egoista Consumista e tão material Ouso dizer: Querido Jesus Feliz Natal!
https://medium.com/heart-revolution/its-almost-christmas-327643943ede
['Misa Ferreira De Rezende']
2020-12-26 19:41:23.133000+00:00
['Santa Claus', 'Christmas', 'Giving', 'Jesus', 'Gifts']
Top 10 Best Books I’ve Read This 2020
Photo by Lala Azizli on Unsplash This year, I set out to read 30 books, which I already considered a stretch goal after reaching last year's target of 25 books. Being stuck in quarantine, however, has made this much easier to accomplish since I pretty much had more time on my hands to read. So, from the 33 I've managed to read in 2020, I compiled my top 10 picks which I'd highly recommend to anyone looking for new books to read. My top 3 here are actually among the best books I've read ever since I began to read seriously around 3–4 years ago, while the rest in this list were chosen because of a specific insight or lesson I picked up from them. I share a little bit about what I learned from each book when I discuss each of them, so I hope that helps you see if you'd like to read one. Note: I'm aware that some books here may have conflicting reviews, but I included them here because there are still certain insights that were explained well which I felt were worth mentioning. I also acknowledge that these reviews are subject to my own perspectives, but of course everyone has different tastes. If you do decide to read them for yourselves, remember to remain critical about the content you take in and be mindful about your own preferences, too. Overall Top 3 Picks Atomic Habits by James Clear If you were one of the few people I got to talk to right after I read this book, I'm sure you'd remember how often I brought up the idea of habits (and I still do)! I'm not exaggerating when I say that my approach to routines and habits literally changed when I read the book, and I've heavily shifted to advocating for effective habit-formation as one of the keys to being more productive. Not only does James Clear do an incredible job explaining WHY habits are so important in improving our lives, but he also makes it as easy as possible to apply what he teaches in his book. If you've read the other well-known habit book, The Power of Habit by Charles Duhigg, you'll find some of the concepts in Atomic Habits somewhat familiar, but much more enriched with James Clear's experiences and examples. The writing style is also very clear and simple enough that it's personally one of the best books I'd recommend people pick up if they're looking to get into the self-help or productivity space. I've applied many of the points from this book, and actually read it twice. I first read it back in 2019, and once more this year for work, where I planned a learning session to teach our leaders about the concepts in this book. Yup, that's how much I loved what I learned here! Other insights I've gained here include: There's already SO much value if you improve even by just 1% every day for the next 365 days, even if you may not see results now. Making habits stick is more about changing your mindset about the type of person you want to become, rather than about changing specific behaviors. Even if you might have an unclear goal, as long as you have consistent and effective habits, that's still going to take you somewhere (i.e., "We don't rise to the level of our goals. We fall to the level of our systems."). The Power of Now by Eckhart Tolle One of the most common misconceptions about this book is that it's your typical mindfulness or spiritual book — it's not!
While it is about mindfulness, it goes beyond just defining what it is and teaching you how to "be more mindful"; it also discusses how you can still be present in spite of different circumstances you may experience. It's not really a faith-related book, but he cites different teachings from different spiritualities as proof of the universality of the value of presence. The first chapter gives you a taste of the depth and tone of the book: it's logical, straight to the point, and rich with substance in every chapter. It's not exactly a hard read, but it's not something you can finish in one sitting either. It's a book you'd want to fully concentrate on when you read, otherwise you miss the point and you'd have to go through it again. I've tried and tested many of the lessons he teaches about focusing on the now, and in such a short span of time, it's brought a lot of peace, clarity, and calmness in my life, despite all the many uncertainties and challenges I've faced. This has been one of those titles I've seen in bestseller lists and highly-recommended book suggestions, but have never really thought of buying because it's pretty pricey (even more than an average self-help book). Luckily, I got to borrow a copy from one of my friends, and now I'm considering getting my own! So yes, the price is definitely worth paying if you want to change how you approach the world. My best takeaways include: Everything happens in the now, and now is the only thing worth focusing on. And focusing on the present begins by dis-identifying with our mind. When faced with any situation, we are left with three things to do: we can accept it, we can change what we can, or we can leave. It's as simple as that. Every little fear or worry is just another manifestation of the fear of death. Even the fear of being wrong, or of being rejected, is just a form of fearing death (just not in the physical sense). And this fear can be removed by focusing on the now. The Courage to Be Disliked by Ichiro Kishimi and Fumitake Koga My only regret after reading this book is that I didn't read it sooner! I find this to be a good complement to The Power of Now because they have similar writing styles (i.e., a dialogue or question-and-answer form between two people) and most points in both books beautifully support each other. This book is a mix of Adlerian psychology and philosophy, so if you're ready to dismantle a lot of our preconceived notions about concepts like "freedom" and "courage" and "choice," then this book is for you. I personally enjoyed the experience of following the dialogue between the philosopher and the youth mentioned in the book; it's the type of story that leaves you with so much to think about and act on. It's also not the type of book you'd finish in one sitting. I mean, I could if I wanted to, but the content was so enlightening that I really did my best to take my time to enjoy reading it. Admittedly, many of these learnings are more of mindset shifts than anything, but these changes in perspective, if understood and practiced well, can be life-changing. Again, since the points brought up here aren't exactly common sense, it'll take practice to get used to them, or even accept them. If, at the end, you don't agree with what it teaches, the least it does is expand your perspective about how other people can view the world, which I think is still a pretty good win.
A few paragraphs are definitely not enough to explain what I learned, but some of the ones that stood out are: Living freely means living a life true to yourself, whether or not people like you for that. We're so used to explaining how we behave in certain ways because of past experiences. This thinking, done to the extreme, can limit us and subject us to being "victims" to our past circumstances, rather than focusing on how we can achieve goals even if things may have happened to us in the past. All problems can be traced back to being interpersonal relationship problems. The books above are what I would recommend overall! I purposely didn't go too in-depth with my main takeaways so I don't spoil much of the contents. If, however, you're looking to deepen your understanding about specific topics, I highly recommend you grab any of the books below. Top 7 Books for Specific Topics Originals by Adam Grant About: Leading and navigating through change and disruption, especially when this change is radical and new. Adam Grant shares what makes ideas original, and he explains how we can also champion change, especially when it is very radical and uncommon. He does a wonderful job explaining what you need to do to be the type of person who, when introducing new ideas, will be taken seriously instead of being thought of immediately as absurd. Challenging the status quo and questioning the default are things that many of us may want to do, but reading this book showed me that it's just like any other skill: it can definitely be mastered. If there's one more thing I learned, it's this: Coming up with a great idea is one thing. Sharing this idea with others and succeeding is a whole different matter. Dare to Lead by Brene Brown About: Learning concrete and actionable ways to step up and lead with courage, based on research, stories, and examples. Dare to Lead contains a summary of Brene Brown's other bestselling books, so if you've read her other works, this will serve as a good refresher. What's different here, though, is that she lists very concrete steps to begin leading with courage and vulnerability (a key concept in all her books) that you can do alone, and within your team or organization. Her points are backed up by research she's either done herself or gotten from other studies, so you can be sure that the suggestions are all well-supported by data. Just Walk Across the Room by Bill Hybels About: Trusting in the power and gravity of even our smallest decisions, if only we choose to do them. Bill Hybels brings up mostly his own examples of how just one small action can lead to big changes in another person's life (mostly people getting closer to God, given the nature of this book), making this a relatively easy read. He also briefly explains how one small action may not lead to big changes — and that's okay. Though this book is more of a spiritual book that focuses on personal evangelization, it's fast enough to read that it can easily motivate you to just say yes to opportunities to reach out to others. Talking to Strangers by Malcolm Gladwell About: Understanding very common but very human lapses in judgment when we interact with people we aren't familiar with, and seeing the effects of those errors. I've always appreciated how Malcolm Gladwell can explain topics in a very clear and easy manner, backed up by research and many cases; Talking to Strangers is no different.
What made his points easy to follow here is that he makes use of stories and tries to answer: "Why did this happen? And what does this say about us as humans?" He discusses common biases we have whenever we meet someone new, and overall just makes the reader more aware of those. Though I appreciated how Malcolm Gladwell wove together his ideas by bringing up many stories and real-life examples, he discusses these in a lot of detail. TW: Some of these topics include cases about rape, pedophilia, suicide, and the like, so I wouldn't recommend this for people who may be triggered by reading these stories. Lean In by Sheryl Sandberg About: Providing a general overview of the inequality, internal struggles, and challenges faced by women compared to men, supported by historical data and research. Reading about Sheryl Sandberg's own experiences and struggles confirmed that the issues that women face are very, very real. She discusses issues from sexism in the workplace, to internal thoughts women have about doubting their own abilities, and even how history has led to this much inequality. Even if the scope is a bit limited because its main perspective is that of American women in the corporate world, it's still a good foundation to more deeply understand what women face just by… well, being women, and living in a society that imposes so many norms on who we should be and what we should do. (On a personal note, this was a powerful read for me because it reassured me that feeling so strongly against being denied leadership opportunities because I'm a woman is very much valid. But that's a story for another time.) The Speed of Trust by Stephen M. R. Covey About: Defining and developing trust in oneself and others, and seeing how trust brings about positive results. As with any other Covey book, you're in for a load of information with The Speed of Trust, so I just had to include this in the list! He really does not skimp on any information and literally talks about trust from all perspectives possible — personal trust, trust for others, trust for organizations, how to build this trust at all levels, even what components there are to true trust, and more! Not to mention how he has exercises and worksheets that you can do to apply his lessons as soon as possible. This book was definitely jam-packed, informational, and backed up by well-known stories and personal examples. I thought I was already a person who understood what it meant to trust myself and others, but reading this book deepened my understanding and appreciation for the concept. This was such a refreshing perspective on trust! Doing Good Better by William MacAskill About: Using logic and data to answer how we can do the most good that we can. This book logically explains how we can do the most good for others given the resources that we have. Simple as that. Be prepared for a bit of math and numbers also, as William MacAskill explains step by step how he uses different formulas to prove his points. If you're like me and you prefer seeing hard facts before deciding what to do (in this case, it could be which charity to donate to, or what career path to choose that will allow you to help others the most, etc.), then reading this will definitely enlighten you and show you that it's possible to answer how we can really "do good better." This is not an easy read, but the book does a wonderful job identifying not just which causes need the most help, but how we can help in the most effective way possible.
Let me know which ones you've read, too (or plan to read)! I'd love to discuss them with you! (And if you're looking for ways to read as many books as I have while effectively retaining & applying the information you learn, I'll write something about that soon too!)
https://medium.com/@anastaciotanya/top-10-best-books-ive-read-this-2020-59bf85fac723
['Tanya Anastacio']
2020-12-23 12:03:53.784000+00:00
['Productivity', 'Book Recommendations', 'Ratings', 'Book Review', 'Self Help']
What is the design thinking process in UX design?
Design thinking is an extremely useful process when you want to tackle complex problems that are poorly defined or unknown. This design methodology provides a solution-based approach to solving problems, by understanding the human needs involved. Understanding the five stages of design thinking will empower anyone to apply these methods to solve complex problems — regardless of the scale, industry, or context of the issue. I choose to focus on the five-stage design thinking model proposed by the Hasso-Plattner Institute of Design at Stanford (d.school) because they are the leaders when it comes to teaching design thinking. Who better to take inspiration and learn from? The following information comes from this fantastic article on design thinking: 5 Stages in the Design Thinking Process*. *It’s important to note throughout that the five stages are not always sequential. They do not have to follow any specific order and they can often occur in parallel and be repeated iteratively — more on that later! Author/Copyright holder: Teo Yu Siang and Interaction Design Foundation. Copyright licence: CC BY-NC-SA 3.0 The five stages of design thinking, according to d.school, are: empathize, define, ideate, prototype, and test. 1. Empathize Author/Copyright holder: Teo Yu Siang and Interaction Design Foundation. Copyright licence: CC BY-NC-SA 3.0 The first stage of the design thinking process is to gain an empathetic understanding of the problem you’re trying to solve. This stage requires you to consult experts to find out more about the area of concern. You will gain these insights as you observe, engage and empathize with people to understand their experiences and motivations. Empathy is crucial to human-centered design processes and allows design thinkers to set aside their own assumptions about the world to gain insights into users and their needs. 2. Define (the Problem) Author/Copyright holder: Teo Yu Siang and Interaction Design Foundation. Copyright licence: CC BY-NC-SA 3.0 During the Define stage, all the information collected during the empathize stage is gathered. This is where you’ll analyze the observations and synthesize them to define the core problems the team has identified up to this point. You should seek to define the problem as a problem statement in a human-centered manner. For example, you shouldn’t define the problem as a personal wish or a need of a company: “We need to increase our food-product market share among teenage girls by 5%.” It’d be much better for you to define the problem in a more holistic way, and include the reasons behind the approach. Like this: “Teenage girls need to eat nutritious food in order to have more energy, be healthy and grow.” The subject is now well-defined — the generic term “girls” has been changed to “teenage girls” which, while still broad enough, is a more specific segment. “Food” is also replaced with “nutritious food” which is more meaningful. Finally, a mission is incorporated — to help the teenage girls “have more energy, be healthy and grow”. 3. Ideate Author/Copyright holder: Teo Yu Siang and Interaction Design Foundation. Copyright licence: CC BY-NC-SA 3.0 In the third stage of the process, designers start to generate ideas. By now, you will have grown to understand your users and their needs and will have a human-centered problem statement.
With this solid background, the team members can start to think outside the box to identify new solutions to the problem statement, and you can start to look for alternative ways to view the problem. There are hundreds of ideation techniques such as Brainstorm, Brainwriting, Worst Possible Idea, and SCAMPER. The Interaction Design Foundation actually has some downloadable templates on ideation techniques which are free for you to use. I've found them incredibly useful in ideation sessions I've been part of. 4. Prototype Author/Copyright holder: Teo Yu Siang and Interaction Design Foundation. Copyright licence: CC BY-NC-SA 3.0 Now, with all the gathered information, the design team is able to produce inexpensive prototypes or scaled-down versions of the key ideas generated within the ideation session. These prototypes may be shared and tested within the team itself, in other departments, or on a small group of people outside the design team. This is an experimental phase. The aim is to identify the best possible solution for each of the problems identified during the first three stages. The solutions are implemented within the prototypes and are investigated one by one. This will result in them either being accepted, improved and re-examined, or rejected on the basis of the users' experiences. By the end of this stage, the design team will have a better idea of the constraints and the problems inherent to the products that are present. It'll also provide a clearer view of how real users would behave, think and feel when they interact with the end product. 5. Test Author/Copyright holder: Teo Yu Siang and Interaction Design Foundation. Copyright licence: CC BY-NC-SA 3.0 In the test stage, designers or evaluators rigorously test the complete product using the best solutions identified during the prototype phase. This is the final stage of the five-stage model. As this is an iterative process, the results generated during the test phase are often used to redefine one or more problems, and you may find you loop back to the ideation phase based on this. These results also provide a way to understand the users, conditions of use, and how people think, behave, and feel about the solution. Although this is the final phase, it may not be the end of the process. Even at this stage, alterations and refinements are made in order to rule out problem solutions and acquire as deep a comprehension of the product and its users as possible. The Non-Linear Nature of Design Thinking This may read like a direct and linear outline of the design thinking process, where one stage seemingly leads to the next with a logical conclusion at the test phase. However, in practice, the process is carried out in a more flexible and non-linear fashion. For example: Different groups within the design team may conduct more than one stage concurrently. The designers may collect information and prototype during the entire project to be able to bring their ideas to life and visualize the problem solutions. Results from the test phase may reveal some insights about users, which in turn may lead to another brainstorming session (ideate) or the development of new prototypes (prototype). I hope all of the above has helped you to understand the design thinking process in a little more detail. I recommend you check out the original article for more information, and even explore what this online course Design Thinking: The Beginner's Guide could offer you.
Design thinking is pretty popular these days; reading more and investigating it further is a great way to supercharge your skills and learn how to apply the methodology to any problem you come across.
https://medium.com/nyc-design/what-is-the-design-thinking-process-in-ux-design-de26e2ab309f
['Eduardo Ramos']
2020-06-24 21:40:22.780000+00:00
['UX', 'UI Design', 'Design Thinking', 'New York', 'Design']
No Brainer Authentication in Django & React with Redux — Part 2
We'll pick up right where we left off in the last part, where we were done with our Django backend, and we'll start with using React in this one. There are a few ways to start a React project — I personally like create-react-app with the Typescript template. That's what we'll do. We'll create the frontend in the root directory and name the app ui . You can name it frontend or whatever you fancy. $ npx create-react-app ui --template typescript We need to install a few more dependencies. The first two are for redux. The next three are for redux enhancers and a router. The last few are the type declarations to work with Typescript. $ npm i --save redux react-redux $ npm i --save react-router-dom redux-logger redux-persist $ npm i --save @types/redux-logger @types/react-redux @types/react-router-dom Delete all the files in the src/ directory except index.tsx , App.tsx , and react-app-env.d.ts . Create a directory in src/ called store . All our redux logic — actions, reducers, types, hooks, etc., will live here. $ cd src/ $ mkdir -p store/auth/ && cd store/auth $ touch types.ts reducers.ts actions.ts hooks.ts Let's start with our types. Let's walk through it step by step. We create two string constants that act as identifiers for the type of action we'll dispatch. Next, we define the interface for our authentication state. This is the shape of the object that will store the token, the username, and a boolean signifying whether the user is logged in or not. Next, we define a bunch of interfaces for the actions that we can dispatch. The LoginPayload interface just describes the data we're supposed to send the reducer to authenticate a user. All actions also need to contain a unique type to help the reducer distinguish them from each other. Finally, we have a union of all the actions and export it as a generic AuthActionType . Let's look at the reducers.ts file — Again, if you've ever used Redux, this should feel like home. If not, let's walk through it again. authReducer is a function that is used to perform reads and writes to the auth state — based on the action, that is. And that's all we do. Inside the switch statement, we compare the type of the action to the constants we had defined earlier and then update the state accordingly. Important point to remember — never mutate the state inside the reducer directly. Always return an entirely new updated state. Don't ask me why, I don't quite understand it myself, but the reasons provided online seem to be good enough. Finally, all we have to do is create our actions. In the actions.ts file, enter the following — Pretty self-explanatory. We simply create functions that take in the required data and return the appropriate action object for the reducer to accept. One last thing, and this is completely optional but highly recommended: write hooks for easier access to commonly read state properties. In hooks.ts , enter the following — And we're done with our auth store! Of course, we still need to configure a root reducer for the entire app, but then we'll be on our way. Change directory back into the src/store/ folder and create two files — index.ts and reducer.ts . We'll create our root reducer inside the reducer file and export our final redux store in index.ts . I would explain the code, but most of it is boilerplate nonsense. If you remember the dependencies I made you install in the beginning, this is where they come in. I like redux-logger quite a bit and find it useful for debugging.
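As a rough sketch, here is what the four auth-store files described above might look like. The constant names, the exact state shape, and the assumption that the auth slice is mounted under state.auth are illustrative choices, not necessarily the author's exact code.

```typescript
import { useSelector } from "react-redux";

// --- store/auth/types.ts ---
// Two string constants that identify the kind of action being dispatched.
export const LOG_IN = "LOG_IN";
export const LOG_OUT = "LOG_OUT";

// Shape of the auth slice: token, username, and a logged-in flag.
export interface AuthState {
  token: string | null;
  username: string | null;
  isAuthenticated: boolean;
}

// Data the reducer needs to authenticate a user.
export interface LoginPayload {
  token: string;
  username: string;
}

interface LogInAction {
  type: typeof LOG_IN;
  payload: LoginPayload;
}

interface LogOutAction {
  type: typeof LOG_OUT;
}

// Union of every action the auth reducer understands.
export type AuthActionType = LogInAction | LogOutAction;

// --- store/auth/reducers.ts ---
const initialState: AuthState = {
  token: null,
  username: null,
  isAuthenticated: false,
};

// Never mutate state in place; always return a fresh object.
export function authReducer(
  state: AuthState = initialState,
  action: AuthActionType
): AuthState {
  switch (action.type) {
    case LOG_IN:
      return {
        ...state,
        token: action.payload.token,
        username: action.payload.username,
        isAuthenticated: true,
      };
    case LOG_OUT:
      return { ...initialState };
    default:
      return state;
  }
}

// --- store/auth/actions.ts ---
export const logIn = (payload: LoginPayload): AuthActionType => ({
  type: LOG_IN,
  payload,
});

export const logOut = (): AuthActionType => ({ type: LOG_OUT });

// --- store/auth/hooks.ts ---
// Assumes the auth slice is mounted under `state.auth` in the root reducer.
export const useAuthenticated = () =>
  useSelector((state: { auth: AuthState }) => state.auth.isAuthenticated);

export const useUsername = () =>
  useSelector((state: { auth: AuthState }) => state.auth.username);
```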
We're now completely done with our redux setup and can now move on to creating routes and other fun stuff. Go back into your src/ directory and create two more folders — $ cd src/ && mkdir pages components $ touch routes.tsx The pages/ directory will store all the components that will be used as routes. components/ will store all the reusable components that will be used in pages . The routes file will, of course, hold all the routes. PrivateRoute is a useful reusable component that protects pages that require authentication. It calls our trusty useAuthenticated hook to verify whether someone is logged in or not. The useAuthenticated hook in turn relies on the useSelector hook provided by react-redux that can "select" properties from our state. We still haven't written most of the components that we import in routes.tsx so let's do that now. Let's begin with the most complicated one, the Login component. It's actually simpler than you think - We have a couple of imports. React for… well, our React component and JSX. Redirect and useDispatch are interesting. Redirect will be used to… well, redirect the user to the home page if they're already logged in. We don't wanna show the login page to an authenticated user. useDispatch is a hook provided by react-redux that returns a function used to dispatch actions to the store reducers. Next, we import two things — Credentials, which seems to be a type, and getToken, which seems to be a function from an api/ directory. That's right, we're gonna be principled and create yet another directory api/ inside src/ to store all our fetching methods. Inside api/ , create a file auth.ts that will store all authentication-related methods. Once again, this just makes a POST request to the endpoint we set up in the first part and receives the token. If invalid credentials were provided, we simply throw an error saying so. Back to Login.tsx . We set up a couple of state variables, namely error and credentials , along with their set methods ( setError & setCredentials ). error will be used for displaying any errors we might face when we make a request to the token endpoint, and credentials will be used to store the username and password from the input fields we describe in the JSX. We have an asynchronous handleSubmit function that will be called whenever our HTML form is submitted. We use the getToken function defined in api/auth.ts and dispatch the token along with the username stored in our credentials state variable using the logIn action defined way back in store/auth/actions.ts . If there are any errors, we simply update the error state variable and it's rendered on the document. The rest is pretty easy to understand. Go through it yourself. That was for our Login page. Next we have the Home and Account pages. Once again, simple. You may be wondering why we don't check whether the user is authenticated inside the Account component. That's because of the PrivateRoute guard we put in our routes earlier — it makes sure that the Account page is only available if the current redux state has the isAuthenticated property set to true. Thus, adding an extra read for checking the auth state would be redundant inside the Account component. The final step is to have a reusable Navigation component, and then all that remains is to wrap the entire app up in a Provider . And that's it! We've written all our components, our pages, our routes, and our store that includes actions, reducers, types, and custom hooks! We've done so much!
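Here is a condensed, hypothetical sketch of the pieces just described: the PrivateRoute guard, the getToken helper in api/auth.ts, and the Login component. The token endpoint path ("/api/token/"), the redirect targets, and the form markup are placeholders rather than the author's exact code.

```typescript
import React, { useState } from "react";
import { Route, Redirect, RouteProps } from "react-router-dom"; // react-router v5 style
import { useDispatch } from "react-redux";
import { logIn } from "./store/auth/actions";
import { useAuthenticated } from "./store/auth/hooks";

// --- routes.tsx (excerpt): a guard for pages that need a logged-in user ---
export const PrivateRoute = ({ children, ...rest }: RouteProps) => {
  const isAuthenticated = useAuthenticated();
  return (
    <Route
      {...rest}
      render={() => (isAuthenticated ? children : <Redirect to="/login" />)}
    />
  );
};

// --- api/auth.ts: exchange credentials for a token ---
export interface Credentials {
  username: string;
  password: string;
}

export async function getToken(credentials: Credentials): Promise<string> {
  // The exact endpoint path depends on the Django backend from part 1; this one is a placeholder.
  const response = await fetch("/api/token/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(credentials),
  });
  if (!response.ok) {
    throw new Error("Invalid credentials");
  }
  const data = await response.json();
  return data.token;
}

// --- pages/Login.tsx (condensed) ---
const Login: React.FC = () => {
  const dispatch = useDispatch();
  const isAuthenticated = useAuthenticated();
  const [error, setError] = useState("");
  const [credentials, setCredentials] = useState<Credentials>({
    username: "",
    password: "",
  });

  // Don't show the login page to someone who is already logged in.
  if (isAuthenticated) {
    return <Redirect to="/" />;
  }

  const handleSubmit = async (event: React.FormEvent) => {
    event.preventDefault();
    try {
      const token = await getToken(credentials);
      dispatch(logIn({ token, username: credentials.username }));
    } catch {
      setError("Invalid username or password");
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      {error && <p>{error}</p>}
      <input
        placeholder="Username"
        value={credentials.username}
        onChange={(e) => setCredentials({ ...credentials, username: e.target.value })}
      />
      <input
        type="password"
        placeholder="Password"
        value={credentials.password}
        onChange={(e) => setCredentials({ ...credentials, password: e.target.value })}
      />
      <button type="submit">Log in</button>
    </form>
  );
};

export default Login;
```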
All that is left is to configure the Provider and connect it to our base App component. Inside src/App.tsx , enter the following — Once again, pretty much all normal except the PersistGate wrapper. All it does is wait for the store to be rehydrated and persisted before rendering the components, so that we don't have to manually check whether the store is the latest version or not. If you've followed everything so far, you should have a working React website with a Django backend!
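For reference, a rough sketch of how the store wiring and App.tsx could fit together, assuming the auth reducer sketched earlier; the persist key, the loading placeholder, and the Routes default export are assumptions, and the author's index.ts / reducer.ts split may differ slightly.

```typescript
// --- store/index.ts ---
import { createStore, combineReducers, applyMiddleware } from "redux";
import logger from "redux-logger";
import { persistStore, persistReducer } from "redux-persist";
import storage from "redux-persist/lib/storage"; // localStorage on the web
import { authReducer } from "./auth/reducers";

const rootReducer = combineReducers({ auth: authReducer });
export type RootState = ReturnType<typeof rootReducer>;

// Persist the whole store so a page refresh doesn't log the user out.
const persistedReducer = persistReducer({ key: "root", storage }, rootReducer);

export const store = createStore(persistedReducer, applyMiddleware(logger));
export const persistor = persistStore(store);

// --- App.tsx ---
import React from "react";
import { BrowserRouter } from "react-router-dom";
import { Provider } from "react-redux";
import { PersistGate } from "redux-persist/integration/react";
import Routes from "./routes"; // assumed default export of the route tree

const App: React.FC = () => (
  <Provider store={store}>
    {/* PersistGate holds off rendering until the persisted state is rehydrated */}
    <PersistGate loading={null} persistor={persistor}>
      <BrowserRouter>
        <Routes />
      </BrowserRouter>
    </PersistGate>
  </Provider>
);

export default App;
```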
https://medium.com/swlh/no-brainer-authentication-in-django-react-with-redux-part-2-10416af59cf0
[]
2020-12-04 07:07:14.661000+00:00
['React', 'JavaScript', 'Django', 'Web Development', 'Typescript']
How to create an astonishing portfolio
An art portfolio is the best way to make your way into top art schools. Students can include all their works in a portfolio, be it paintings, pencil portraits, sculptures or other artworks. For getting into art schools, this portfolio plays a vital role. It is necessary to make your portfolio stand out from others. It gives you an advantage for the application. So, how to make an art portfolio is an essential question to discuss. This article will take you through tips and ideas for an astonishing portfolio. It will also include a few online sketching courses that will prepare you for the best. Characteristics of a Good Portfolio An art portfolio is a self-explanatory collection that brings out the skills of a student. Some will exhibit excellent painting skills while some may be good at realistic pencil drawings. Irrespective of this, the portfolio should include the following. ● Skills As a professional or an amateur artist, it is essential to display your skills in the portfolio. It should be technically correct and provide every detail. Drawing skills are more important than presentation skills in a portfolio. ● Uniqueness As an artist, the person will have a unique stroke or presentation style. Your uniqueness should be highlighted in the portfolio. If you are an expert in pencil sketching, that can be displayed as your unique skill set. ● Variety in art The portfolio creates the first impression and that's why it should be the best. It is a good thing to try out new things and explore. Learning a new form of art and including it in the portfolio is a big plus point. This can include varieties like glass painting, carvings and doodling too. These tips can help you create a commendable art portfolio. Use of resources Other than these ideas, other sources can help you in sharpening your skills. You can make use of websites to master your existing skills. Our website has several online courses for pencil sketching and realistic drawings. This will surely reflect on your portfolio and make it eligible for most applications. Many students have benefited from these courses to make their portfolios excellent. You can enrol in these online sketching courses to develop good strokes. These courses provide you with guidance that can make even a simple drawing look realistic. These are the kinds of skills that art schools look out for the most. Things to remember Other than all the mentioned tips, there are a few more things to remember. ● Make sure to keep the portfolio precise and excellent. You can include 15–20 artworks that you think are the best. ● Bring out variety in your portfolio. ● Ensure a good depth of skill on display. Technical skills are widely appreciated. This concludes how to make an art portfolio. Your portfolio becomes an identity wherever you go. That is why it carries such significance. It can be complemented with our online courses for sketching. In turn, your portfolio gives you an advantage over others when applying.
https://medium.com/@pencilperceptions/how-to-create-an-astonishing-portfolio-ccec09e824eb
[]
2020-12-09 17:03:20.274000+00:00
['Art', 'How Create Portfolio', 'Create Art Portfolio', 'realistic pencil drawings', 'Portfolio']
Padres On Deck: Ornelas finished Mexican winter season with leading .353 average
Padres On Deck: Ornelas finished Mexican winter season with leading .353 average OF Tirso Ornelas The Mexican Pacific League regular season ended Dec. 23 with Padres' outfield prospect Tirso Ornelas in line for several awards. Ornelas, a 21-year-old native of Tijuana, finished with a .353 batting average in 60 games for league champion Navojoa. The 6-foot-4, 180-pound, left-handed hitter had the highest batting average in the league among qualifying players. Ornelas went 77-for-218 with 16 doubles, two triples, two home runs, 35 RBIs and 36 runs scored. The Padres' №29 prospect also ranked third in doubles, tied for fourth in triples, fifth in hits, tied for fifth in runs scored and tied for 13th in RBIs. He also finished 11th with a .869 OPS, with a .472 slugging percentage (12th-best in the league) and a .397 on-base percentage (14th-best in the MPL). Ornelas still qualifies as a rookie in the Mexican Pacific League. He is also a candidate for Most Valuable Player honors. Catcher Gilberto Vizcarra (Mexicali) and outfielder Agustin Ruiz (Jalisco) also completed the Mexican Pacific League regular season on teams headed to the post-season. Vizcarra, 22, hit .277 in 23 games with a .377 on-base percentage and a .323 slugging percentage for a .700 OPS. The 5-foot-10 Vizcarra went 18-for-65 with three doubles, six RBIs and 10 runs scored. He drew seven walks against only six strikeouts. Ruiz, 22, a left-handed hitter who is 6–2 and 175 pounds and is ranked the Padres' №28 prospect, finished 17-for-64 with two doubles, a triple, two home runs, seven RBIs and five runs scored for a .246/.325/.391/.716 slash line. Infielder Kelvin Melean, 23, continues to play for La Guaira in the Venezuelan winter league. He is hitting .300 (18-for-60) with two doubles, a home run, 10 RBIs and seven runs scored. He has a .328 on-base percentage and a .383 slugging percentage for a .711 OPS. Three Padres' minor league prospects continue to play in the Dominican Republic winter league. Outfielder Luis Liberato is hitting .295 (23-for-78) with three doubles, 10 walks, seven RBIs and 11 runs scored for a .389 on-base percentage and a .333 slugging percentage for a .722 OPS in his sixth season for Escogido. Infielder-outfielder Esteury Ruiz has a .163-.196-.209-.405 slash line with the Toros del Este. He has gone 7-for-43 with two doubles, five stolen bases, an RBI and six runs scored. Outfielder Nomar Mazara is 7-for-22 with a double, two home runs, two RBIs and two runs scored for Licey with a .318-.400-.636–1.036 slash line.
https://padres.mlblogs.com/padres-on-deck-ornelas-finished-mexican-winter-season-with-leading-353-average-5fb874e18f0b
[]
2021-12-30 19:10:44.172000+00:00
['MLB', 'Minor League Baseball', 'Padres', 'Padres On Deck', 'On Deck']
The Best Subscription Boxes for Every Type of Person
Hot Sauce of the Month Club ($14/month): With both quarterly and monthly options, Fuego Box is a hot sauce lover's dream come true. Sign up to receive one or three bottles of unique and interesting flavors and brands of hot sauce delivered to your door periodically. Readers get 5% off a subscription using code YumBacon! Hot Sauce of the Month Club Grass Fed Coffee Subscription ($44.99/month): Ready-to-drink butter coffee brand Grass Fed Coffee offers a monthly subscription service where you can have a 12-pack delivered right to your door. Grass Fed Coffee specifically uses organic, fair trade coffee extract, MCT oil and sustainably-produced grass fed butter that is high in vitamins, minerals, antioxidants, and healthy fats. Grass Fed Coffee BoxyCharm ($21/month): In each box, you will receive 4 to 5 full-size beauty items. Ranging from makeup and skincare, to beauty tools and color cosmetics, each box has a minimum value of $100. I got my hands on a box and it was filled to the brim with products I know and love and new ones I have yet to discover. Boxy Charm The Play Kits by Lovevery (Starts at $36/month): A subscription box service designed for the millennial mom that helps new mamas create meaningful playtime moments with their little ones. Each Play Kit delivers exactly the right science-backed, non-toxic toys babies both want and need, at exactly the child's right stage of development, so parents can rest assured they're giving their babies the best possible start in life. The Play Kits also include The Play Guides, providing all the need-to-know guidance new moms need, based on all the research they don't have the time to read. For a limited time only, get 20% off the new toddler play kits! Lovevery Breo Box (Starts at $159): Every box is handpacked in a custom wood brēō box with 5–8 premium essentials centered around everyday lifestyle, fitness, and tech, curated to fit the season. I got my hands on a recent box and it was filled with awesome gadgets like the world's first temperature-controlled mug, a mortar and pestle set, and a portable backgammon game set. The box can also be useful for future storage. The boxes are somewhat pricier, but they're filled with premium goods you'd pay more for separately. The longer you subscribe, the less each box costs. Use code GIMMIE for $20 off your box. breo box ipsy ($10/month; $110 annually): ipsy's Glam Bag is filled with five personalized sample-size beauty products per month. By filling out a quick customization quiz online (trust me, it is super detailed!) about your beauty preferences, ipsy is able to pinpoint exactly what products you'll love most. You can also adjust these preferences at any time during your subscription — just in case you didn't quite nail your preferences on the first try. ipsy MADE OF Deluxe Diapers Subscription ($110): A monthly disposable diaper subscription is a no-brainer. MADE OF's The Better Diaper is sent right to your door, shipped on the dates you choose. It also includes 6 diaper packs, 4 packs of 80 count wipes, 2 travel packs of 20, organic diaper cream, and organic baby powder. made of Walmart KIDBOX Stylebox ($48): Walmart's first kids' subscription stylebox of premium kids' brands at substantial savings, the new stylebox offers customers personalized style from more than 120 premium kids' brands. The stylebox includes four to five fashion items for $48 — which is approximately 50% off the suggested retail price for the group of bundled items.
Fill out a short style survey about your child's style preferences, have a box curated by a KIDBOX stylist, and it'll be delivered right to your front door. KIDBOX FoodStirs Out of This World Donut Kit ($24.99/month): This galaxy donut is almost too pretty to eat. This Out Of This World Donut Kit includes all the organic mixes, plant-based dyes, and decorative supplies to make this colorful treat. This kit is part of the Baker's Club Subscription, which ships around the 5th of every month. FoodStirs Book of the Month Club ($14.99/month): When you join the Book of the Month Club, you'll receive 1 (or more) book of your choice based on your book genre preferences. There are also new member offers from time to time, making it an even better deal for a solid, hardcover book. Book of the Month Club Urthbox ($9–49/month): Get delicious, gluten-free, organic goodies delivered straight to your home each month. Plus, there's even a vegan snack box! Get $10 off + a free box here. Tubby Todd's Bath Bomb Subscription Box ($20/month plus shipping): Every month, Tubby Todd will handpick 3 natural bath bombs and send them to you. Each box provides 2 natural bath bombs and they are all natural and organic, safe for sensitive skin types, dairy free, and more. Get free shipping here! Tubby Todd's NOTE: I may receive a small commission for any purchases made through links in this post.
https://medium.com/@kim-kornfeld/the-best-subscription-boxes-for-every-type-of-person-6d1e1a4afa20
['Kim Kornfeld']
2019-04-26 15:56:39.399000+00:00
['Shopping', 'Subscription Boxes', 'Deals', 'Food']
Mad Maps – visualizing geographical data for maximum impact
Mad Maps – visualizing geographical data for maximum impact How to effectively communicate data on maps for clear, insightful insights with Python (code and data included) The U.S. presidential election is upon us again. You have no doubt had maps flood your Twitter timeline and news feeds already, and the next few weeks after Nov. 3rd will see an escalation of that. We take maps for granted these days, and it's not difficult to see why. Smartphones have commoditised technology to accurately locate us and provide live directions to the 5 nearest Japanese restaurants with at least a 4.0 rating. Maps have never been more integrated in our lives, even though just a decade ago people still used street directories. (Shudders) But it would be doing the field of cartography a huge disservice to say that Google Maps is the final evolution of maps. It's far from the truth, and while useful, that's only a small part of what maps can do. Maps are feats of human ingenuity that can help to derive unique insights and effectively convey them, whether they be demographic, economic, health, or indeed political. Imagine being an early cartographer, and accurately tracing the surface contours of our world from the ground level. Imagine being among the first people from your nation to explore certain regions of the world, and capturing on a page their form, inhabitants, flora and fauna such that the next people to arrive can navigate the world that little bit more safely and quickly. Imagine how it might feel to create something that will help inform and guide everybody from fishermen to generals. Cartography, in other words, creates visual aids to help users navigate through the treacherous seas of data; it is, in that sense, the precursor to the broader field of data visualisation. The knowledge that one of the first known examples of data visualisation was an augmented map (of cholera outbreaks) has a certain full-circle quality to it as well, especially given the current pandemic and the proliferation of dashboards. John Snow's map of Cholera Outbreaks (Wikipedia) With this relatively simple map, John Snow was able to visualise the disease incidence data and illustrate that a central location of the outbreaks (and the water pump located there) was likely to be the cause of cholera cases. Fast forward to today, there exist amazing data visualisation tools (like Plotly, DataWrapper or Tableau) that are able to plot maps that are not only packed with information, but also visually striking. As an example, take a look at this image of crops farmed in the United States:
https://towardsdatascience.com/mad-maps-visualizing-geographical-data-for-maximum-impact-d7e2b5ff2471
['Jp Hwang']
2020-11-03 02:44:06.180000+00:00
['Getting Started', 'Data Visualization', 'Data Science', 'Programming', 'Python']
Optimizing CI/CD Pipeline for Rust Projects (Gitlab & Docker)
Before we begin, I would like to thank Astrolab-Agency for the internship opportunity and for their trust in me on this project, and I would like to thank Mr Mahdi Ben Chikh for his precious support throughout the internship period. What is Rust ? Rust is a general-purpose, C-like programming language, which means it is a compiled language, and it comes with strong new features for managing memory and more. The cool thing: Rust does not have a garbage collector, and that is awesome 😅. What is DevOps ? In short, DevOps is the key practice that helps the dev team and the ops team be friends 😃 without work conflicts. It is the ART of automation. It increases the velocity of delivering better software! Identifying the problem We can build a lot of things with Rust, like web apps, system drivers and much more, but there is one problem: the time that Rust takes to make a binary by downloading dependencies and compiling them. The cargo command helps us download packages (crates in the Rust world), and rustc is our compiler. Now we need to make a pipeline using Gitlab CI/CD and Docker to make the deployment faster. This is our challenge and the goal of this article! 👊 Static linking vs Dynamic linking Rust by default uses dynamic linking to build the binary, so what is dynamic linking? Dynamic linking uses shared libraries, so the lib is loaded into memory and only the address is integrated into the binary. In this case the libc is used. Static linking uses static libraries, which are integrated physically into the binary; no addresses are used and the binary size will be bigger. In this case the musl libc is used. Want to know more? Then check this: click here. Optimizing the CI/CD pipeline The CI/CD pipeline is a set of steps that allows us to: build → test → deploy In this article I will focus on the build stage because, in my opinion, it is a very sensitive phase and it affects the "time to market" approach! So the first thing is to optimize the size of our Docker images to make the deployment faster. I will use a simple Rust project for the demo. The project structure Let's understand the project structure: src : This dir contains all the source code of the app (*.rs files). Cargo.toml : This file contains the package metadata, the dependencies required by the app and some other features. Cargo.lock : It contains the exact information about your dependencies. Rocket.toml : With this file we specify the app status (development, staging or production) and the required configuration for each mode, for example the port configuration for each environment. Dockerfile : This is the Docker configuration to build the image with the specific environment that is already configured in Rocket.toml. Are you prepared 👊 😈? Let's begin the show!!
🎉 🎉 🎉 We will begin by building the app image locally, so let's see what the Dockerfile looks like: This Dockerfile is split into two sections: the builder section (a temporary container) and the final image (reduced in size). The builder section: In order to use Rust we have to get a pre-configured image that contains the rustc compiler and the Cargo tool. The image has the Rust nightly build version, and this is a real challenge because it's not stable 😠. We will use static linking to get a fully functional binary that doesn't need any shared libraries from the host image!! Let's break down the code: First we import the base image. We need MUSL support: install musl-tools after updating the source.list of your packages with apt-get update. MUSL makes for easy-to-deploy, static and minimal dynamically linked programs. Now we have to specify the target; if you don't know it, no problem! You can use x86_64-unknown-linux-musl, added with Rustup (the Rust toolchain installer). To define the project structure in the container we use cargo new --bin material (material is the project name); it's much like the structure we saw earlier. To make the material directory the default we use the WORKDIR Dockerfile command. The Cargo.toml and Cargo.lock are required for the dependency installation. Setting up the RUST_FLAGS with -Clinker=musl-gcc : this flag tells cargo to use the musl gcc to compile the source code, and the --release argument is used to prepare the code for a release (final binary optimization). --target specifies the compilation target, 64 or 32 bit. --feature vendored : this command is an angel 😄! It helps to solve any SSL problem by finding the SSL resources automatically without specifying the SSL lib directory and the SSL include directory. It saved me a lot of time; this command is associated with some configurations in the Cargo.toml file under the features section. Up to this point we only build the dependencies from Cargo.toml and then clean up (removing unnecessary files). After downloading and compiling the required packages, it's time to get the source code into the container and make the final build to produce the final, standalone binary. The builder stage is complete! Congrats 😙 🎉 yeah!! Now let's use Alpine as a base image to receive the binary from the build stage. But wait a second, what is Alpine??? Alpine is a Linux distribution; it's known in the Docker world for its size! It is a very small image (4MB) and it contains only the base commands (busybox). With --from=cargo-build ..../material we copy the final binary into the Alpine image; the intermediate container (cargo-build) is discarded, and as a result we get a very tiny image (12–20MB) ready to use 😃 😃 😃 You know how to build a Docker image, right 😲? Okay 😃 The CI/CD pipeline After testing the image locally, it seems good 😃. We have solved the Docker image size problem, but in a CI system velocity is even more important than size!!
So let's take on this challenge and reduce the compilation time of this Rust project!! Let's look at the .gitlab-ci.yml file (our CI configuration): There is a tip in this file: I just split the Dockerfile into two stages in this .gitlab-ci.yml : The builder stage (rustdocker/rust..) → builds dependencies and the binary The final stage (Alpine) → the build stage For the CI work I prepared a ready-to-use Docker image that contains all I need to make a reliable and fast pipeline for a Rust project; this image is hosted on my Docker Hub: hatembt/rust-ci:latest This image contains the following packages installed and configured: The sccache command: this command caches the compiled dependencies! So by adding this action to our build we can compile deps only one time!! 😅, and we gain much more time. The cargo-audit command: it's a helpful command that lets us scan dependencies for security issues. Let's break down the code and understand what's going on!! In the first job, prepare_deps_for_cargo , we need our base image hatembt/rust-ci . In this job, some settings required to make a successful build are placed in the before_script: Defining the cargo home in the path variable. Defining the cache directory that's generated by sccache (it contains the compilation cache). Adding cargo and rustup (they are under .cargo/bin) to the path. Specifying the RUSTC_WRAPPER variable in order to use the sccache command with rustc, or MUSL in our case. Now everything is ready! So let's make the build in the script section; you already know what we should do 😃, let's skip it 👇. The cache and artifacts sections are very important! They save the data under: .cargo/ .cache/sccache target/x86_64-unknown-linux-musl/release/material (this is our final binary). To know more about caching and artifacts, follow this link. All data that is created in the first run of the CI jobs will now be saved and uploaded to the Gitlab coordinator. On the next build (when new code is pushed), we will not start the build from scratch; we just build the new packages, and the old data will be injected with <<:*caching_rust after the image keyword. Let's move on to the next job, build_docker_image: I made a new Dockerfile for the docker build stage; it's based on the Alpine image and it contains only the binary from the previous stage. The new Dockerfile: First we need a docker-in-docker image (dind) → to get the docker command, and then we follow the steps below: Log in to the Gitlab registry Build the image with the new Dockerfile Push the image to the Gitlab registry And now the results! 😧 The image size: image size The CI time: NB: the time is the whole build time, covering the build binary and docker_build stages. This is the power of DevOps, the art of automation; with some philosophy in the configurations and the steps to follow, we can do even better than these results. In business, velocity, quality and the necessary features (in the application) are very important to bring the company to high levels of success → this is a successful digital transformation. Finally, I hope that this story helps you move on to the next steps in CI/CD systems; you can apply these ideas to any language (mostly compiled languages, but still the same steps). If you have any feedback or critiques, please feel free to share them with me.
If this walkthrough helped you, please like 👏 the article and connect with me on LinkedIn. Thank you 😄 👋
https://faun.pub/optimizing-ci-cd-pipeline-for-rust-projects-gitlab-docker-98df64ae3bc4
[]
2020-12-25 09:15:11.472000+00:00
['Docker', 'Rust', 'Dockerfiles', 'Ci Cd Pipeline', 'DevOps']
A designer’s challenge to entry-level jobs that require five years of experience
Young talents deserve better than this type of treatment from employers. But why is this vicious cycle of “we want experience but we won’t give you a chance to build one” still a thing now? It was certainly the case for me when I was job hunting as a young designer. Now a decade later, I was hoping it would be a thing of the past. Sadly, it isn’t. Now as a design educator, I am again confronted with this dilemma faced by frustrated students who are eager to get a foot in the door. To find a solution, let’s break down why it keeps happening. Then we will find a way to confront it head on. Reason 1: Employers Don’t Want To Waste Time On Training Employers are selfish. I’m not saying they are bad — human beings are inherently selfish and that’s okay. In many ways, businesses must be selfish to be profitable. They must always put their interests first. Employees with no previous experience require longer and more in-depth training. It comes at a cost to the employer who would rather be as productive as possible making as much money as possible. In this regard, an inexperienced employee becomes more of a liability at first, not an asset. In business, we know that we must maximize our assets and reduce liability. So naturally, employers prefer experienced talent. If they can get away with experienced talent being paid a minimal salary, that’s even better! Think manufacturing — why do most companies still prefer to manufacture in countries where labor is cheaper? Because they can reduce cost and make more money. The same thinking applies to employers in other sectors. It is how capitalism works. When we translate this to job postings, it is not at all surprising we are seeing the kind of ads that require 5 years of experience for a junior designer. Even if some employers don’t go as far as requiring 5 years of experience, most stipulate that they require 1–2 years of experience for an entry-level job. Common sense tells us that the 1–2 years of experience needs to be obtained from somewhere. So where is that magical place that new talents can all go? Reason 2: Job Candidates Don’t Dare to Challenge It In the job market, employers always have an intimidating presence. Many of them are large corporations with resources one cannot even imagine. Even if it is a smaller employer, candidates don’t often have the courage to challenge the process due to fear of losing opportunities. In a competitive job market, employers have an abundance of options to choose from. Candidates, however, don’t have as much leverage. When job requirements don’t cross any legal boundaries, such as experience requirements, employers don’t really have a reason to change them. And as we all know, if nobody says anything, it will never change. Apply Anyway And Prove Your Own Worth Creatively Instead of getting frustrated at the problem, let’s confront the problem head on, with a spin. Disregard the experience requirements you see. Apply anyway. If you are applying to a company that is open-minded and innovative, they will at least give you some consideration. If a company goes so far as to reject candidates automatically just because they don’t have the required experience level, it’s not the kind of company you want to join anyway.
This practice shows they are close-minded and rigid. Once you get someone to respond to you, whether that’s in the form of an initial phone screen or a formal interview, it is time to prove your own worth creatively. Let your work, your personality and your interview skills outshine your lack of experience. Without an outstanding portfolio that wows a company, you will not likely go too far beyond this point. So the first step before you get into the job market is to make sure that you actually have great work. When you compare your work to other designers, you will need to see a similar level of competence, or better. Without outstanding work, you might as well be a salesperson who tries to convince potential customers to buy a product that doesn’t work. Ask Them The Uncomfortable Question What happens if a recruiter brings up the level of experience during the initial call and, upon learning your lack of experience, attempts to hang up the phone? Believe it or not, some recruiters do that — I personally have experienced this before. As scary as it may be, it is time for you to bring out a counter question. Ask them this: “How would the company like to hire someone with 5 years of experience but not the level of work I can produce, and who lacks the work ethic that I have?” See how they respond to this question. If they try to come up with answers to justify it or get offended, that tells you everything you need to know about this company. At that point, it’s wise to move on and find another company that is at least willing to reflect on it. Candidates must remember that the job market is a two-way street. Employers want to test if you are the right candidate for them and you also want to test if the employer is the right place for you to work. Follow Up With A More Daring Question To take it a step further, follow up with an even more daring question. “Give me a small design challenge that a designer with 5 years of experience can complete in 30 minutes to an hour. Look at my submission and tell me if you think my lack of experience is still a deal breaker.” From my decade in Corporate America, I have not seen this level of confidence too often, but I have seen it once or twice. Every time I saw it happen, the person ended up getting hired. In fact, it was that kind of confidence that convinced me to stop thinking that I am not good enough and finally start making bold moves. What’s the worst case scenario of having that kind of confidence? They still don’t think you are the right fit and they don’t hire you. But if you don’t try it, they will most definitely not hire you. Easily Build Real Work Experience In 3 Different Places If you can’t get any interviews, there are only two reasons: your portfolio is not quite there yet, or you only passively apply for jobs and didn’t actively network.
My school launched a great program to help students fix the first problem. The second problem requires students to get started on networking early and often. Nowadays, there are so many channels available for networking: LinkedIn, online communities, virtual events, conferences, and engaging with people in the industry you admire on social media. I often find myself saving the Instagram handles of designers and artists whose work I admire and would someday want to hire or collaborate with. Be that person who is active every day at building your connections, genuinely and without expecting anything in return. When you can’t get an opportunity with an established and renowned company yet, you may find yourself welcomed by small business owners, early-stage startup founders and non-profit organizations. Many of these companies and organizations will be grateful for what you do because their businesses can’t afford higher-priced talent, but you can work with them to build real client experience. Soon enough, you will be populating your portfolio with lots of real client projects. They count as experience. Now you can confidently walk into bigger companies and show them what you’ve got. Don’t be afraid of the ridiculous 5-year experience requirement for entry-level jobs. Confront it. Question it. Prove your worth with confidence and back it up with really good work.
https://bootcamp.uxdesign.cc/a-designers-challenge-to-entry-level-jobs-that-require-5-years-of-experience-d77b7f8fba97
['Stella Guan']
2021-04-05 16:53:58.305000+00:00
['Jobs', 'Design', 'Designer', 'Job Hunting', 'Career']
Meet a Latina Marketer: Margarita Rojas
This profile is part of a five part series in which the Kapor Center for Social Impact is sharing stories from our diverse tech community in celebration of Hispanic Heritage Month! Margarita Rojas| Asana| International Marketing & Localization Lead How do you identify/What is your background? I identify as Latina. ​I was born and raised in Medellín, Colombia. After finishing high school, I moved to San Francisco for six months to study English. I fell in love with the city and always knew I had to come back. After graduating from college, getting my master’s, and working for P&G and Colgate for two years, I decided it was time to move to San Francisco, and enjoyed the rest of my 20’s in this magical city. ​I started my career in the US at La Cocina as their Marketing and Communication Manager. It was an incredibly fulfilling experience, as I had a direct impact on the lives of low-income immigrant women entrepreneurs as I helped them to launch and grow their food businesses. After two years at La Cocina, I decided to find a position in tech. I joined Evernote in 2012 as one of the first customer support agents. I was able to launch CS for Latin America and Spain, moving my way up to Senior Marketing Manager for Latin America. After almost 4 years, we grew the region from 2 million to 20 million users and I was ready to try bringing my talent and skills to a new company and product. In 2016, I joined the International team at Weebly, where I launched Latin America, Spain and Taiwan, and localized the website and product in 16 languages. I’m currently working at Asana, where I’m now leading the company’s international marketing and localization efforts. I sit on the board of Latinas in Tech, a non-profit organization that operates under the umbrella of the Latino Community Foundation, with the mission to connect, support and empower Latina women working in technology. How has your ethnicity/nationality/sexual preference/culture played into your story/brand? My culture and background has been really important to my continued success and growth. I came to SF with a strong background and experience in CPG but zero experience in technology. I didn’t graduate from a US University, and I didn’t have a strong network of people working in tech. After finding a great mentor, and meeting some Latinx in tech, I decided to build my narrative around my nationality, my working experience in Colombia, and the fact that I spoke three languages. This shift definitely helped me to get my first job at Evernote. What brought you into tech? When I moved to San Francisco, I was exposed to a lot of tech companies. At La Cocina, I had volunteers who worked for well-know Silicon Valley companies, and I even started dating an engineer (now my husband!). Everything was new to me, and I started absorbing a lot of information, attending events, and meeting people. I was curious about the products created in Silicon Valley, and I wanted to contribute. As I started looking for new opportunities, I found a role that combined both my experience in Marketing and my knowledge in the Latin America market, helping me feel connected to my home, which was also really important to me. What do you enjoy most about your role and the work you do? I enjoy making Asana available in other languages and creating go to market strategies to get more people to use our product across the world. I love learning about language nuances, and how this can impact the way you internationalize a product for a specific market. 
I feel fortunate to have the impact that I have at Asana, and help teams from all over the world achieve their goal no matter where they are or what language they speak. How do you think tech can help bring more opportunities to the Latinx community?
https://medium.com/kapor-the-bridge/meet-margarita-rojas-afc8e5d3b663
['Josh Torres']
2017-09-20 02:14:05.392000+00:00
['Latinx', 'Latin America', 'Hispanic Heritage Month', 'Tech For Good', 'Startup']
NFL, Visual Stats Frenzy — @CDM-Style
As with football standings in Europe, these tables are fine. They work, and everybody is used to consuming this content in this way: tables with numbers, ordered by the stat that defines the rank and complemented with extra stats that can explain tie-breaking rules or give more details on the performance. So what is the problem? What I find lacking is any visual clue of how the situation really is: each Division looks the same even if one may have two teams at 9–6 and the other has the top team at 12–3 and the second at 8–7, and the same applies to conference and league views. So can something be done, at least as an alternative visualisation? The basic idea is to start from the core data element on which standings are based: Won — Lost, ranging from 16–0 to 0–16 and everything in between at the end of the season: 6–4 or 8–2 or 0–10 (sigh!). OK, I know a tie is also possible and it usually happens once or twice a season at most, but let’s forget about that for now. These two numbers are always associated with a team, even on broadcast graphics. So, the idea is to vertically visualise “games won” above a baseline and “games lost” below the baseline, with the baseline carrying the team short code, like in the image below for the Kansas City Chiefs being 10–6 at the end of the regular season. In this case the first won game was against the Patriots (NE) and the first lost game was against the Steelers (PIT). Each team alone does not look very meaningful as-is, but when combined with others in a horizontal row to form a standings rank, it starts to make sense. This is the final standing for the regular season of the AFC West Division. If we put it all together, the resulting visual infographic at the end of the 2017 regular season looks like this. From this visual representation it’s easier to see how heterogeneously or homogeneously different divisions performed, some with one team dominating the group, others with more shared success. It’s also interesting to check which division sent the most teams to the Playoffs, through either Division titles or Wild Card spots. The NFC South was a quite balanced Division that was able to send 3 teams to the Playoffs.
https://medium.com/sport-the-digital-r-evolution/nfl-visual-stats-cdm-style-a55ea463576f
['Carlo De Marchis']
2018-03-14 09:57:25.395000+00:00
['Sports', 'NFL']
How To Make Money On Instagram 2021
If by now you’re wondering how many followers you need to make it happen, the short answer is “not as many as you think”. The longer answer depends on factors that range from: what niche you’re in and how easily you can tie it directly to a product category (fashion, food, beauty, and fitness are popular niches, based on top Instagram hashtags); how engaged your followers are (100K fake followers won’t amount to much); and which revenue channels you explore. Naturally, the more engaged followers you’ve got, the better. Check our recommendations on how to get followers on Instagram. While top Instagrammers make thousands per post on the photo-sharing platform, even those with a smaller-but-engaged following of 1,000 have the potential to start making money. How to make money on Instagram in 2021 Depending on your unique brand of Instagram content, your fans, and your level of commitment, you can earn money on Instagram in the following ways: doing sponsored posts for brands that want to get in front of your audience; becoming an affiliate and earning a commission selling other brands’ products; building and selling a physical or digital product, or offering a paid service; selling licenses for your photography or videos. The advantage here is that chasing one revenue stream doesn’t necessarily rule out another. So let’s start with the most common approach to Instagram monetization: partnering with brands as an influencer. http://prefectblogging.com/how-to-make-money-on-instagram/
https://medium.com/@sujeeshsujimon/how-to-make-money-on-instagram-2021-c5379f9e6b81
[]
2020-12-25 06:27:18.987000+00:00
['Instagram', 'Instagram Stories', 'Instagram Marketing', 'Earn Money Online']
PHP design patterns (Creational : part 2)
Let’s continue our exploration of PHP design patterns. In the first episode, we talked about the “simple factory” pattern. Now, we’ll go into a slightly more advanced structure. Let’s talk about the Factory Method: the point of this pattern can be compressed into one word: delegation. Let’s imagine that you just got access to your favorite heroic fantasy video game. Imagine there are 2 classes of characters to choose from: Warriors and Druids. You decided to share access to your account with two of your friends. One of your friends likes to play Warriors, while the other one likes to play Druids. Here are the character classes:

interface Character {
    public function attack();
}

class Warrior implements Character {
    public function attack() {
        echo 'the warrior attacks';
    }
}

class Druid implements Character {
    public function attack() {
        echo 'the Druid attacks';
    }
}

Let’s use an abstract class representing the idea of delegation:

abstract class player {
    abstract protected function createCharacter(): Character;

    public function attack() {
        $character = $this->createCharacter();
        $character->attack();
    }
}

class friendA extends player {
    protected function createCharacter(): Character {
        return new Warrior();
    }
}

class friendB extends player {
    protected function createCharacter(): Character {
        return new Druid();
    }
}

Now you have 2 different friends, both allowed to perform actions as a player. But even though they structurally use similar functions, they implement these functions in different fashions. They actually build distinct characters.
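To see the delegation in action, here is a short usage sketch; it assumes the classes defined above are loaded, and the variable names are just for illustration:

// Each friend builds their own character behind the scenes;
// the calling code only ever talks to the abstract player API.
$friendA = new friendA();
$friendB = new friendB();

$friendA->attack(); // prints: the warrior attacks
$friendB->attack(); // prints: the Druid attacks

Because the choice of concrete Character lives in each subclass, adding a new character type later only means adding a new player subclass, without touching the code that calls attack().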
https://medium.com/@michaelmanga/php-design-patterns-creational-part-2-4aa214808a02
['Michael Manga']
2020-12-08 02:01:35.077000+00:00
['PHP', 'Php Development', 'Code', 'Design Patterns']
An Identity Crisis Is Vital For Growth Because It Occurs At The Edge Of Chaos And Harmony
Internal Conflict Triggered By Chaos “Your pain is the breaking of the shell that encloses your understanding. It is the bitter potion by which the physician within you heals your sick self, so therefore, trust the physician and drink his remedy in silence and tranquillity.” — Kahlil Gibran It was the developmental psychologist Erik Erikson who first coined the term identity crisis. He formulated eight key stages one undergoes through their adolescent years based on their psychosocial development. They are: Stage 1 — Trust vs. Mistrust Stage 2 — Autonomy vs. Shame and Doubt Stage 3 — Initiative vs. Guilt Stage 4 — Industry vs. Inferiority Stage 5 — Identity vs. Confusion Stage 6 — Intimacy vs. Isolation Stage 7 — Generativity vs. Stagnation Stage 8 — Integrity vs. Despair Erikson believed a person’s personality develops in a series of stages. His model differs to Freud’s in that social interactions and relationships impact an individual’s development and growth throughout their life. Each stage builds on the previous one which creates the foundations for growth in the following years. At each stage, a person experiences internal conflicts, thus creating a turning point in the individual’s personality. The conflicts are based on the understanding that an individual experiences growth or fails to develop these qualities. In the educational book Key Concepts in Counselling and Psychotherapy: A Critical A-Z Guide to Theory author Vicki Smith gives a clear understanding of how an identity crisis can become a source of power within the individual’s psyche: “He (Erikson) believed that we all have identity crises at one time or another in our lives and that these crises do not necessarily represent a negative state but can be a driving force toward positive resolution.” If they integrate the conflicts into their personality, the subsequent growth and development will serve them later in life. If they don’t develop these abilities, they are likely to suffer an inhibited sense of self which dominates their life. Erikson’s understanding is that an individual becomes competent when moving through the eight stages and integrates the egoic self into their psyche. In a similar vein, author Jan Frazier explains in The Freedom of Being: At Ease with What Is the need to transcend the ego by stepping outside the known sense of self: “In order to look at yourself, you have to step outside of it. Look not with the eyes of the ego, but with the eyes of presence.” “A woman's hair covering her face.” by Sam Manns on Unsplash The Ego Is Not Meant To Dominate Your Life “Our stories come from our lives and from the playwright’s pen, the mind of the actor, the roles we create, the artistry of life itself and the quest for peace.” — Maya Angelou Many people identify with outer aspects of their life as the basis to their identity. For example, an individual may believe their role is that of a mother and wife. Yet, if their husband is unfaithful and the marriage dissolves, they will question their identity since they no longer associate with that label. Similarly, others presume their work, relationships, physical appearance, social and wealth status or performance are measures of their identity. Regrettably, if these aspects are removed from their life, they experience an identity crisis because they created a persona around them. I would argue these qualities do not shape your identity but are a vehicle in which to explore your life’s narrative. 
Your ego is the identity the mind constructs to define itself, yet this is a fictional narrative because external events can disrupt it. Jan Frazier reaffirms how the roles you play do not construct your identity since there is an underlying presence beneath that: “The roles you play, the features you exhibit, the things you believe in — while they matter very much in the ordinary realm of human discourse — are not what you are. When presence senses itself within you, none of these things have any substance.” Your true identity lies beneath the shadow of the egoic self. An identity crisis is vital to an individual’s growth because it allows for chaos and order to reveal one’s authentic nature. An identity crisis can be likened to the shell of an egg breaking open. The shell merely gives form to the ego so it can make sense of its role within society. The ego is not meant to dominate your life, nor do you wish to banish it. It must be integrated with the authentic self to develop the wholeness of who you are. Otherwise, the egoic self you once identified with is no longer something you can uphold. Photo by Seth Macey on Unsplash Don’t Try To Make Sense Of Chaos “It is not in the stars to hold our destiny but in ourselves.” — William Shakespeare To paint a contrasting view, psychotherapist and meditation teacher Loch Kelly writes in Shift into Freedom: The Science and Practice of Open-Hearted Awareness how consciousness creates a thinker to uphold the ego, thus forming a mistaken identity in the process: “Afflictive consciousness creates a thinker out of thinking and ego function, and this thought-based sense of self forms the core of mistaken identity. Nothing more than a self-referential loop of thinking about thinking, our mistaken identity is actually a continuous conceptual proliferation that creates solid things out of images and a solid self out of thinking.” What is essential is to unmask the egoic self so the pain, suffering and uncertainty are the underpinnings for future growth and development. It is like the progress of performance athletes experience when training for the Olympics. They must push to the edge of their limits and discover their potential or risk remaining where they are. If they push too far too soon, they may invoke physical injury that can sideline them. They may become depressed as a result because their identity is formed around their status of an athlete and performance. However, from a developmental viewpoint, the experience can be vital to their performance if they can let go of their fixed narrative and former identity. If you experienced an identity crisis, trust in the deeper psychological lesson guiding your personal development. Don’t try to make sense of the chaos, but surrender to the process, knowing whatever is breaking apart is doing so to make way for the true self to emerge. Call To Action To live a remarkable life, you must take consistent action in spite of your fears and doubts. Download your FREE COPY of my comprehensive eBook: NAVIGATE LIFE and embark upon your journey of greatness today!
https://medium.com/the-mission/an-identity-crisis-is-vital-for-growth-because-it-occurs-at-the-edge-of-chaos-and-harmony-78c713d49879
['Tony Fahkry']
2018-05-24 16:15:10.169000+00:00
['Self Improvement', 'Personal Growth', 'Identity', 'Personal Development', 'Ego']
The Naked and Famous & The Colourist — Review & Interview
As the seasons truly begin to change and the coolness of fall sets in, my playlist has officially been updated and fall touring is in full effect. A brisk Tuesday night led me to The Trocadero in Philly to see The Naked and Famous and The Colourist. Having just announced they would be opening for Imagine Dragons next year, I was curious to see what The Naked and Famous would be like. The Colourist on the other hand, I had discovered through the Nokia commercial and was curious to see what their live performance was actually like. About 8:30 the show began as The Colourist took to the stage. Keeping with their name, the stage was brightly lit in a variety of colors through the show. A roughly 30 minute set got the night started and had fans dancing and singing along. One of my favorite things about this band is the fact that not only do they have a female drummer, but she is also one of the lead singers. To me, that defines a woman who rocks and it was a pretty awesome sight to see. When it came time for “Little Games,” anyone in the crowd who didn’t know the band before instantly knew who they were simply because of their commercial with Nokia. As the set drew to a close, fans were prepped and ready for the rest of the show. The Naked and Famous took to the stage with bright strobes and bright lights. Beginning with “A Stillness” and transitioning into “Hearts Like Ours,” fans were taken in from the first note. The lighting was visually one of the most interesting designs I’d seen, making for a fabulous visual show to work in harmony with the fantastic vocals. From start to finish, fans watched, sang, and danced along to the music. To end the night, the final encore was the band’s hit song “Young Blood,” which like The Colourist, I had heard on a few different TV shows over the past few years. All in all, it was a great show and both artists are ones to catch if they’re ever in your area. Check out photos & an interview with The Colourist’s Adam Castilla below. Q: How did you all meet to form the band? A: Maya and I connected years ago. We had never sang in bands before, she was just a drummer and I just played guitar and piano. While we were trying to find a singer, we ended up just taking the ropes ourselves. Justin, and Kollin were introduced through mutual friends and our chemistry clicked from day one. When I introduced Kollin to Maya, Kollin had super long hair and I had told Maya that he used to be in a German Val Halen cover band called Von Holland. Q: How did the process of writing songs for the EP work for you? A: Each song on our Lido EP came together differently. I think thats what makes each song unique. Theres not really a formulaic way of writing for us other than making each story of the song come to life both live and on the record. Q: Do you have a favorite part of the live show? A: Personally, at times from stage if I connect with someone in the audience and they are mouthing the lyrics to a song that hasn’t been released yet, theres a special connection where the song and performance connects with the both of us. Its pretty exhilarating. Q: When the deal with Nokia was made to be the focus of one of their ads, what was that like? A: It was really spontaneous. When we received the offer we honestly thought it was some sort of spam in our inbox. The day of the shoot I remember the director having a vision for the band that allowed us to be us. He wanted the commercial to be as real as possible to create that organic live feel of our performance. 
Q: What can fans expect to see from you after the tour? A: More music on its way! Q: Who are some of your biggest musical influences? A: Ah, so many! The Church, Ian Curtis, Fleetwood Mac, Elton John, The Stooges and Roy Orbison off the top of my head. Q: Being on the road provides plenty of time for ‘road trip music,’ who is on your playlist right now? A: We listen to a lot of Art Bell who has a new talk show now. Depending on our moods the playlist fluctuates from annoying pop songs or jock jams to iconic songs that make us feel warm and tingly inside. Q: How big of a role has social media played in getting your music out to new fans and keeping in touch with older ones? A: We often enjoy taking pictures from the road or from shows, its nice being able to connect with old and new fans just from an app on your phone.
https://medium.com/a-teen-view/the-naked-and-famous-the-colourist-review-interview-1abdf751281f
['Arin Segal']
2016-11-04 00:41:51.015000+00:00
['Colourist', 'Review', 'And', 'Music', 'Interview']
DIG:ITA meets UniBG talents: Nicole Bosatelli
Nicole Bosatelli — a graduate in Economics and Data Analysis at the University of Bergamo — began an in-company training at DIG:ITA, going beyond the pure theory of her studies and getting into real practice. The internship, intended as a training and work-orientation tool, is a preparatory step toward the completion of her cycle of studies. Object of the training: realize the operational potential of industrial assets by collecting, analyzing and distributing actionable information in real time to gain production efficiency. Starting from a holistic view of the manufacturing process, Nicole will adopt “Real-Time Manufacturing Analytics”, transforming vast amounts of operational data from both old and new machines, people, IoT sensors and environments into actionable information accessible to trusted followers. By leveraging cutting-edge technologies and seamless integration with existing information and legacy in-house systems, Nicole will capture shop-floor data in a secure way to deliver straightforward insights for improving production efficiency. Training Activities The Real-Time Manufacturing Analytics work will be based on the following phases, on which Nicole will build her data science knowledge and skills. CONNECT Connect factory-floor machines, enabling the collection of data from industrial assets and processes as well as the management of that data to derive value. MONITOR Focus on understanding the performance of industrial assets and processes, and visualize what alarms and events are happening. ANALYSE Determine the root cause of issues based on historical and real-time data, so as to understand relationships, correlations and trends and effectively troubleshoot problems. PREDICT Provide foresight into impending problems so as to avoid issues before they occur and drive greater process consistency and asset uptime. All the aggregated data shall be made easily accessible from one source (an app or a desktop dashboard) to streamline workflows, boost productivity and break down any silos disrupting your manufacturing excellence.
https://medium.com/digita/dig-ita-meets-unibg-talents-nicole-bosatelli-9e6c37bc2a77
['Dig Ita']
2020-12-04 07:46:11.879000+00:00
['Training', 'Learning', 'Nativodigitale']
Conversational Forms — How we took Adaptive Cards, Power Automate and CRM Bot framework and turned it into something amazing 😎
Conversational Forms — How we took Adaptive Cards, Power Automate and CRM Bot framework and turned it into something amazing 😎 Artur Zielinski Follow Jun 8 · 5 min read Our latest implementation (fully integrated, automated and smart Chatbot Project Estimator) required us to refactor our CRM Bot platform to enhance the form booking capabilities. Our goal with Chatbot Estimator Form is simple — to give our clients the ability to quickly and easily estimate their next Chatbot project. In order to do that we have designed a simple form that: Collects some basic user data Analyses the industry, use cases and launch strategy Gathers the usage information Provides accurate licensing breakdown and high-level project scope overview AND IT LOOKS GREAT! 😉 Who said forms have to be boring? When you think about it, forms are not necessarily that much different to chatbots. In both applications, you are essentially looking to fulfil similar aims: Your main goal is to collect and process data from your user, usually following some sort of authentication pattern You want the user to be focused on a question at a time, and follow a logical process to its completion You want to be able to steer the “conversation” into the pattern you are in control of, and ensure the user experience is as quick and efficient as possible You want to augment your “replies” or “questions” with easy to navigate controls that seamlessly “do the job” That’s why at CRM Bot we took our standard Web Chat display and turned it into a fully mobile-optimised, “one-question-at-a-time” form generator, using adaptive cards and Power Automate behind the scenes. How we turn boring Adaptive Cards into a great looking Conversational Form Each transition is a separate Container in your Adaptive Card. We’re really pleased with the flow… How do we generate a great user experience with our forms? We use modals to ensure that the UX is adapted to all screens/resolutions/sizes to ensure that the UX is adapted to all screens/resolutions/sizes We use Adaptive Cards (with as many Containers within a Card as necessary) and show each Container as a separate “form step”. We have also added several new controls to the framework — sliders , process bars , file uploads and address lookups (to name a few…) (with as many Containers within a Card as necessary) and show each Container as a separate “form step”. We have also added several new controls to the framework — , , and (to name a few…) This allows the user to focus on answering just that one question at a time! We really focused on accessibility, by design all our controls are fully keyboard-friendly. You can go through the entire journey with just your keys We always use client-side validation to ensure fields are answered correctly (numbers only, email, even names can easily be “regexed”) Anytime there is an opportunity to introduce dynamic content or route the user into a specific pathway, we instigate a call to our Power Automate connector to handle the business logic One of the key differentiators between our approach and your standard Forms software is our unique capability of injecting business logic between one question and the next. Let’s have a look into what that means in practice. Dynamic Content Generation In our Chatbot Project Estimate form (if you want to give it a go, just click on “Make it Happen” on our main page!) after collecting some details about your industry, we will adapt the remaining questions to match your input parameters and save on the clutter. 
Not only we will adapt the answers, but also adjust the labels and error messages and even inject some video testimonials to guide the estimation process. Because you have selected Audio Bot as one of your chosen channels, we can then show you relevant content from our extensive library of Showcase Demos And because we are, after all, a conversational company, everything we do is infused with our NLP processes — meaning we can extract added meaning, sentiment analysis, contextualise and so much more. Quote in real-time Customers these days do not want to wait a day or two for your license estimate. That is why we have integrated our Dynamics 365 for Sales application with our Estimate your Project form to offer you direct quoting capabilities. All you have to do is tell us: How many customers are you expecting to use a chatbot on a monthly basis How often will they interact with a chatbot per month We will take these inputs, send the data to D365 for Sales, generate a new Opportunity, apply products, currencies, discount lists and generate your bespoke quotation without any human interaction whatsoever. The customers are benefitting from real-time quoting, thanks to our Power Automate fulfilment Estimate Chatbot Project costs After you know what license you need, the next step is to estimate how long it will take to get you started. In order to do that, we just need to know some important parameters: What channels you want to launch on What use cases interest you mostly What features would you be needing for Day 1 How many Line of Business systems you are planning to integrate Asking these questions upfront allows us to generate some high-level estimate of our project scope, which we promptly return to you with our set of assumptions. All of this is then combined into a single Opportunity Proposal document and dispatched to a user (following an internal sales review). It’s really as simple as that! Using Approval Flows in the pattern is really a good fit — they allow us to keep the Sales Team informed, send push notifications, offer escalation/reassignment and allow interactivity which is necessary to make the whole process work. For an in-depth review of Approval Flows, follow this link. And that’s pretty much it for forms! For more information on CRM Bot, please go here, or here, or here. Or just go ahead and ask us for a project estimate, you’ll be surprised how cost-effective our solutions really are! Till the next time! 🦾
https://medium.com/crm-bot/conversational-forms-adaptive-cards-power-automate-and-crm-bot-framework-showcase-737719045ab3
['Artur Zielinski']
2021-06-12 15:40:46.993000+00:00
['Chatbots', 'Conversational UI', 'Dynamics 365', 'Forms', 'CRM']
Founders Should Never Pay to Pitch
Founders Should Never Pay to Pitch I’m writing this in a Lyft on the way to judge a startup pitch competition, and I’m pissed. So pissed that I’m knowingly getting myself carsick because I want to get this out now. Someone I know referred the person running this competition to me because they were expanding to Boston and needed VCs to judge it. Neither person told me in their intro emails that they were charging founders to pitch. Thus I agreed, because meeting founders and judging pitches is something I do all the time. Then when I was getting ready for the event a couple of hours ago, I searched my email to remind myself of what I was supposed to do tonight. I saw that someone with the group running the event had sent me an email with a link to it, and that page listed prices for attendees. Here’s the kicker: they’re charging founders $200 to present to VCs for ten minutes. There are so many things wrong with this that I don’t know where to start. And now I’m in this uncomfortable position where I’d have to bail on an event that I committed to going to right before it happens, knowing that founders paid with the expectation that investors would be there, versus associating my and Accomplice’s brand with a practice that is completely against my values. I’m going, but I’m writing this on the way there and then directing people to it while I’m on stage. Awkward, but I feel like both the founders and I were misled. I’m not going to call out this event or these people in this post because they’re probably startups themselves and I don’t want to wreck someone publicly without giving them the benefit of the doubt. I’m sure you can figure it out if you search, though.
https://storiusmag.com/founders-should-never-pay-to-pitch-1b3c37d0729f
['Sarah A. Downey']
2020-05-09 06:44:33.987000+00:00
['Pitch', 'VC', 'Startup', 'Investing', 'Business']
KyberDMM Launches on Binance Smart Chain with $4M in Liquidity Mining Rewards
KyberDMM Launches on Binance Smart Chain with $4M in Liquidity Mining Rewards With the approval of KIP-12, we are excited to share that KyberDMM, DeFi’s 1st multi-chain dynamic market maker protocol, is now deployed and LIVE on Binance Smart Chain (BSC)! In celebration of BSC’s 1st Year Anniversary, we are also including the BSC network in our ‘Rainmaker’ Liquidity Mining Program. ~$4 million in KNC incentives will be up for grabs for liquidity providers, starting September 2nd! To support the KyberDMM launch, Binance has listed the BEP-20 version of KNC and will enable the deposit/withdrawal of KNC to the BSC network. KyberDMM previously launched its beta on the Ethereum and Polygon networks, where total trade volume and total value locked (TVL) have exceeded US$1.1 Billion and US$500 Million respectively. Kyber plans to emulate this success on BSC as part of its strategy to widen adoption across different chains and provide greater flexibility and capital efficiency for liquidity providers in DeFi. What is Binance Smart Chain (BSC)? Binance Smart Chain (BSC) is an Ethereum Virtual Machine (EVM)-compatible sidechain that has grown in popularity as an alternative to Ethereum mainnet due to its speed and lower transaction costs. Tokens on BSC follow the BEP-20 standard and you can transfer assets between Ethereum and BSC using the Binance Bridge. There are already hundreds of projects on the network today, with over $19B in total value locked. Kyber also plans to work with several popular projects on BSC, including Coin98, DeFi Warrior, Faraland, Bunicorn, My DeFi Pet, Wanaka Farm and others to help grow the DeFi and NFT ecosystem. Bringing High Capital Efficiency to the BSC Ecosystem KyberDMM is an extremely capital efficient and flexible liquidity protocol that enables liquidity providers to maximise the use of their capital. At a fraction of the TVL compared to typical AMMs, KyberDMM is able to provide amplified liquidity and very low slippage for popular token pairs. This would improve the overall trading and liquidity provisioning experience for BSC ecosystem users, Dapps and games. With the deployment of KyberDMM on BSC, a portion of trading fees will go to KyberDAO and subsequently to KNC voters, complementing the existing KyberDMM protocol deployments on Ethereum and Polygon. Rainmaker Liquidity Mining Program with ~$4M in Incentives Starting from September 2nd, 2 million KNC tokens, worth approximately $4 Million, will be allocated as incentives as part of the Rainmaker Liquidity Mining Program. Incentives will be distributed to liquidity providers of 4 eligible pools over the course of 2 months. You can already start adding liquidity now to prepare! Rainmaker: Important Details Starting Block: 10,557,000 (~Thur, September 2nd. 13:52:33 GMT+07) Reward Vesting Duration: 400,000 blocks (~14 days) Liquidity Mining Duration: 1,672,000 (ends at block 12,229,000 ~Mon, Nov 1st. 13:39:05 GMT+07) KNC Token Smart Contract Address on BSC: 0xfe56d5892bdffc7bf58f2e84be1b2c32d21c308b How to Participate in Rainmaker? Important Notes Bridging Assets to BSC: If you do not already have assets on BSC, you first need to deposit ERC20 tokens to Binance Exchange and withdraw them to your BSC wallets as BEP20 tokens. Alternatively, use the Binance Bridge to transfer ERC20 to your BSC Metamask wallet. 
Switching from Ethereum to BSC Network: On the KyberDMM site, click the Ethereum button at the top to switch your network to Binance Smart Chain or change your network to ‘Binance Smart Chain’ on your Metamask Wallet extension directly. Step 1: Add Liquidity First, enter the ‘Pools’ page. Connect your wallet e.g. Metamask on the BSC network and add liquidity by depositing the required tokens into one (or more) of the eligible pools. You will receive DMM LP pool tokens in your wallet representing your pool share and start earning standard protocol fees for that pool. View Liquidity Positions: On the ‘My Dashboard’ page, you can view all your liquidity positions and remove or add liquidity there. If you cannot see your liquidity position, click ‘Don’t see a pool you joined? Import it’ to add it manually. Eligible Liquidity Pools: Eligible Rainmaker pools are labelled with the 💧 raindrop icon next to them on the left. These pools are eligible for yield farming. 4 Eligible Pools *AMP = Amplification factor. Amplified pools have much higher capital efficiency. Higher AMP, higher capital efficiency within a tighter price range. Step 2: Stake LP tokens and start receiving incentives Next, enter the ‘Yield’ page. You can view the corresponding yield farms for the eligible pools here and add liquidity for the pool if you haven’t done so. Stake your DMM LP tokens in the corresponding farming pool contract. You need to approve the tokens if this is your first time here. After staking, you can view the amount you staked under the ‘My Deposit’ column. After staking, you will start receiving mining rewards on top of protocol fees. APY refers to the annualized percentage yield based on pool fees + rewards. No Lock-up for Liquidity: You can unstake your DMM LP tokens and withdraw liquidity at any time without a penalty to existing rewards received. If you unstake your LP tokens, your rewards are automatically harvested. Important: Please use the official KyberDMM user interface to stake LP tokens. Direct transfers to the liquidity mining pool address will result in the loss of your deposited tokens to KyberDMM. Step 3: Harvest and Claim Rewards After being allocated KNC rewards, you will have to harvest your rewards (also on the ‘Yield’ page). When you harvest rewards, it activates a new ~14-day vesting period. Harvested rewards are locked at the start and vested linearly over ~14 days, with some rewards being unlocked every block. Depending on how many times you harvest rewards, there could be multiple vesting periods running concurrently. Gas is required for every harvest and reward claim. Navigating to the ‘Vesting’ tab, you can view how much of your KNC rewards have been claimed, locked, and unlocked since the beginning. You can also view your current and past vesting periods. KNC Rewards KNC rewards can be added back into the KNC/BNB pools or staked on KyberDAO (kyber.org) for additional KNC rewards. With high yield for eligible token pairs, the Rainmaker program will enhance liquidity on BSC, showcase KyberDMM’s powerful benefits, and bring more users and developers into the Kyber ecosystem. Read more about KNC here. We welcome BSC projects to reach out and propose joint liquidity mining campaigns with Kyber! Learn more here. “Binance Smart Chain has been a popular avenue for DeFi and NFT Dapps and users. 
KyberDMM will provide BSC ecosystem players with a capital efficient and reliable protocol for their liquidity needs and help them maximise their use of capital.” - Loi Luu, Co-Founder, Kyber Network Why use KyberDMM? Liquidity providers enjoy important benefits that are not available on typical AMMs on BSC. Amplified Pools: Liquidity providers have the flexibility to select amplified liquidity pools that greatly improve capital efficiency and help reduce trade slippage. With the same pool and trade size, stable token pairs with low variability in the price range (e.g. USDT/BUSD) can enjoy up to 200–400 times better slippage compared to other platforms. Liquidity providers can provide better prices and earn more fees with less capital. Liquidity providers have the flexibility to select amplified liquidity pools that greatly improve capital efficiency and help reduce trade slippage. With the same pool and trade size, stable token pairs with low variability in the price range (e.g. USDT/BUSD) can enjoy up to slippage compared to other platforms. Liquidity providers can provide better prices and earn more fees with less capital. Dynamic Fees: Protocol fees are adjusted dynamically based on market conditions to maximise returns and reduce the impact of impermanent loss for liquidity providers, with fees automatically accruing from transactions in the pool. Protocol fees are adjusted dynamically based on market conditions to maximise returns and reduce the impact of impermanent loss for liquidity providers, with fees automatically accruing from transactions in the pool. Fully permissionless: Anyone can create a pool or add liquidity to existing pools; while any Dapp, aggregator, or end user can access this liquidity. KyberDMM is already integrated with 1inch and Matcha, with more aggregators and Dapps on the way. Anyone can create a pool or add liquidity to existing pools; while any Dapp, aggregator, or end user can access this liquidity. KyberDMM is already integrated with 1inch and Matcha, with more aggregators and Dapps on the way. Committed to security: KyberDMM’s codebase has been audited by both the team and external auditors such as Chain Security with no critical issues found, and is open source on Github for community review. KyberDMM doesn’t use 3rd-party oracles so it is not vulnerable to external oracle risks. Kyber DMM is also covered up to $20 Million by decentralized insurance provider Unslashed Finance. Welcoming DeFi Builders and Liquidity Providers on BSC Kyber’s vision is to deliver a sustainable liquidity infrastructure for DeFi, which includes popular networks such as Binance Smart Chain (BSC). This $4M Rainmaker program will kickstart KyberDMM’s presence on the BSC network and is the first major step in building a sustainable and scalable liquidity infrastructure for the BSC DeFi ecosystem. KNC rewards distributed will give BSC liquidity providers a stake in Kyber Network. Rainmaker on BSC starts at block: 10,557,000 (~Thur, September 2nd. 13:52:33 GMT+07) We welcome BSC ecosystem players to add liquidity on the KyberDMM to enjoy dynamic fees, higher capital efficiency and KNC rewards! BSC projects can also propose joint liquidity mining campaigns with Kyber! For developers looking to build with KyberDMM, please check out our developer documentation. Follow us on Twitter and Discord to stay updated! Onward, Kyber Network! Learn & Win 💰$3000 in prizes! 10 lucky winners get $300 each! 
Learn about KyberDMM’s advantages and the $4 Million Rainmaker liquidity mining program on BSC, and stand a chance to win! Follow the instructions on Twitter: https://twitter.com/KyberNetwork/status/1434866354469830656 Complete this quiz https://forms.gle/yhHsHPuYsvn6yi1E8 Contest ends: Monday, Sep 13, 11pm GMT+8
https://blog.kyber.network/kyberdmm-launches-on-binance-smart-chain-with-4m-in-liquidity-mining-rewards-6e0dddab3c6f
['Kyber Network']
2021-09-07 04:22:49.341000+00:00
['Featured', 'Ethereum', 'English']
5 Unusual and Counterintuitive Habits of Successful People
5 Unusual and Counterintuitive Habits of Successful People Most articles about the habits of successful people are total nonsense. In fact, I once wrote an article making fun of the concept altogether. But here I am, writing an article about the habits of successful people. What gives? Well, there are some things you can get in the habit of doing to increase your odds of success. That’s all you get. Odds. Probabilities. Chances. Once you understand that, you understand success itself. I’m going to put together a list of never heard before odds(or rarely mentioned) increasing success habits, which means the list won’t include: Reading more (although this an excellent, favorite of mine, hobby) Waking up at 5 a.m. (which is useful, but only if you know why you’re doing it) Drinking green tea. Or tea of any variety Finding a mentor (which is a bit overrated) Spending less time on your smart-phone. I’ve given up on this. I’m an addict, but I try to use my phone to learn, sometimes, when I’m not watching Joe Rogan Ok, are you ready to have your mind blown? Here we go. Get Lucky Luck is the giant elephant in the room when it comes to becoming successful. The idea of true meritocracy is impossible. We all have advantages and disadvantages we have no control over. Not only that, sometimes people with no talent succeed. Sometimes people with great talent fail. Often, success or failure in life can be due to pure chance. So why become a self-help writer? Why give any steps at all if it’s all chance? It’s not all chance, but there’s a lot more chance involved than you’d think. So what should you do? Put yourself out there more. Send more positive energy out in the universe, expect good things, act like you’re lucky. Then, on a long enough time scale, you’ll get lucky. This could easily be rephrased as “learn how to spot opportunities,” but I don’t think that does the concept justice. People with a bit of an irrational sense of confidence, those who feel destined, tend to find their destiny. Who cares about the underlying mechanisms if you end up getting what you want? Of course, work hard, be prudent, focus on self-improvement. But realize those are just the tickets to the dance my friend. When you do all of the above, you tend to put yourself in a position to get luckier. Positive people are just more attractive. Negativity repels. Maybe the metaphysical nerds are right and your aura is working for you. Either way, actively think of yourself as lucky and try to get in the habit of getting lucky while working really hard and good things will happen. Understand What Motivates People Most people sleepwalk through life. They’re existing, not living. Living involves consciously observing your surroundings, especially other people. If you get in the habit of observing people and trying to figure out what motivates them, you’ll learn how to influence people. You’ll know how to move them and get them to do what you want (for good, not evil of course). 
This doesn’t mean you have to turn into Machiavelli, but you should always question peoples motivations in every interaction, realizing people are: Self-interested Vain (even in altruism, especially maybe) Kind Operating from the perspective that they are in the center of the universe In desperate need of meaning In desperate need of love Generous Validation-seeking Religious — everybody worships something Often well-intentioned (even if it leads to hell) Aspirational Cynical Resilient Insecure Confident Jaded Open to the right opportunities This is a short-list of many, many, many variables. People are all of these at once — negative and positive. Try to get into the habit of seeing below the surface level of interactions — your personal interactions, the interactions of others, the interaction at large… the zeitgeist. All of it. Become a “people watcher,” in coffee shops and bars. Listen more than you talk in conversation. See the social and power dynamics that go on in everyday life. If you get in the habit of understanding people, you’ll have psychological superpowers.
https://chef-boyardeji.medium.com/5-unusual-and-counterintuitive-habits-of-successful-people-4e044c71ff8c
['Ayodeji Awosika']
2020-10-10 15:02:41.791000+00:00
['Self Improvement', 'Life Lessons', 'Psychology', 'Productivity', 'Personal Development']
The Trauma in Trusting a Flawed System Part 1
I called my own OB/GYN the next day and was informed that the medication was never approved and that I wasn’t to take it due to its dosage being far too high. When I went into the office for an examination, my OB/GYN told me that my vagina was still red and visibly irritated from Dr. Kmetzsch’s examination. It was also ruled out that I had no yeast infection. When I reached out to patient affairs at Richmond University Medical Center, I was told that they’d reach out to me in the early weeks of October. They never did. I contacted them again and was told that they’d follow up on the investigation. The follow up consisted of a typical my word against his response and well wishes and apologies that felt more like an insult than anything. I was irate. I called patient affairs back and was told that I could speak directly to the head of the Resident OB/GYNs department. To no surprise, he was a man. A man who felt it was necessary to tell me that he’s known Kmetzsch for eighteen months and was surprised to hear my story and how when he spoke to Kmetzsch, he too was floored. Imagine how I felt knowing Kmetzsch all but ten minutes and having to lay down for an examination that felt more like imposed foreplay from a silent predator who had to get off before he was seen as suspicious. I was told that they are both sorry that I feel the way that I do and that my experience was a learning experience for the residents doctors. Richmond University Medical Center is a teaching hospital, however, how do you teach a resident doctor how to do his job without misdiagnosing a patient and leaving them feeling violated?
https://medium.com/@charmaine.russell707/im-a-new-mom-6d1d9e34a3f5
['Meena Ali']
2020-12-25 06:22:10.692000+00:00
['Trauma', 'Parenting', 'Metoo', 'Medical', 'Black Women']
CS — Super Talent Can Be Made. Wrapping up a good week. Not just…
It was a good week. Not just because Google gives "bicycles to 1,000 great minds", but also because CS Education Week is a good excuse for a few of us to spend some time not "doing our day jobs". Especially this year, as we innovate our way out of COVID, what could be better than spending time helping the new generation learn a superpower for their future? 🦾 #CodeNext @Google 33 years ago, my mother bought me a PC-XT. It was the first important "toy" that brought me to where I am. Today, it's easier & more fun: the Raspberry Pi 400 is much more capable. The CodeNext curriculum is totally free; to apply: g.co/codenext/application. There are many projects in the Raspberry Pi community to inspire new souls to solve real-world problems. So if you run out of ideas for holiday gifts, why not consider a Raspimon Academy Kit? 🎁 No Hardware Is Even Needed As software architecture 101 (Separation of Concerns) teaches, good software is not supposed to depend on special hardware. With a proper emulator, you don't even need to buy or wait for your Raspberry Pi & Sense HAT to arrive. Just open the Trinket page and you can play around without leaving your browser; a minimal example is sketched just below. As CS-Literacy Is The New Black, what are you waiting for? 🤓 Disclaimer The opinions stated here are my own, not those of my company. My Raspberry Pi 400 With Sense HAT Over VNC For Virtual Coaching
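For readers who want to try it right away, here is a minimal Python sketch that runs both on a real Sense HAT and in the browser-based Trinket emulator mentioned above. The greeting text, colour, and temperature read-out are arbitrary choices for illustration, not part of any CodeNext curriculum.

from sense_hat import SenseHat

sense = SenseHat()

# Scroll a greeting across the 8x8 LED matrix
sense.show_message("Hello, future coder!", scroll_speed=0.05, text_colour=[0, 255, 0])

# Read an on-board sensor and show the result
temperature = round(sense.get_temperature(), 1)
sense.show_message("Temp: {} C".format(temperature))

# Switch the LEDs off when done
sense.clear()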
https://medium.com/@samlin001/cs-super-talent-can-be-made-96af280922e6
['Sam Lin']
2020-12-17 16:35:28.396000+00:00
['Raspberry Pi', 'Csedu', 'Python', 'Csedweek', 'Codenext']
Decolonizing and Rooting: Rebecca Orozco
Can you tell us a bit about your own story and what brought you to this work? Rebecca Orozco I'm originally from Los Angeles (Tongva land) and I've been living in the Bay Area (Ohlone land) since 2011. I moved to the Bay for UC Berkeley to study Medical Anthropology. Before and through school, I worked as an Early Childhood Education Provider. I was supporting parents and giving doula support before I even knew the term. I started volunteering at San Francisco General Hospital in their doula program in 2015. Although I'm super grateful for my experiences, I quickly noticed that these programs were lacking faces that looked like mine. My family is from Mexico — I'm Xicana. The patients at SF General were primarily low-income communities of color and the volunteers were not. Very early on in my birth work I was stuck in this volunteer cycle, which I couldn't afford as a low-income person. Through the journey to figure out how to get paid for this work, and support my community, I came across Roots of Labor and it was everything I was looking for. After about a year and a half of working in isolation — in spaces where I felt like I didn't really fit — I found community. The mission of Roots of Labor is to decolonize birth. It's to make this work sustainable. It's to empower each other. I feel like a poster child for our mission, having benefited so much. It has helped me in my own healing, in reclaiming my ancestral knowledge. I've been a core organizer with Roots since 2016. It's the first time I've been in a role like this and it's been a really nurturing place for me to step into that level of leadership within my community. What does a day in the life of a Roots organizer look like? No day looks the same. Saturday I was at a birth. Sunday I was meeting with a new client. Every single Monday we go to Santa Rita jail to support pregnant incarcerated community members. Yesterday I was on the computer for hours and then I went to a grants meeting, then went to a doula circle to debrief Santa Rita. And now I'm doing this interview and afterwards I have back-to-back postpartum visits. That's a glimpse into my week. What does it mean to decolonize birth? Thank you for asking that because I know this word "decolonize" has many different connotations these days. For us, it means many things. It means recognizing the long-standing effects of centuries and generations of oppression — racism, colonialism, homophobia, transphobia, and others — and how they intersect with practices, specifically medical practices, around birth. Birth in hospitals is relatively new, and this medical institution often doesn't serve us. It's created a culture where we don't trust our bodies, we don't trust the knowledge our bodies give us. For us, decolonizing birth means empowering and supporting birthing people of color using tools and skills that came long before hospital births. It looks like, wherever folks choose to birth, whether in the hospital or at home, we can collectively access that wisdom. How do you think Roots holds feminine leadership? Roots of Labor largely identifies as queer. Some of our members also identify as gender non-conforming/two-spirit/trans. Some of us don't identify as female, and some of us don't identify as feminine. However, we are all attempting to disrupt patriarchy and dominance. We hold spirit instead of binaries. We do talk about womb work, but not in gendered language. And we do uplift the fact that womb-carrying people have experienced generations of trauma.
For example, how bodies of color have been historically used in the name of medical research (e.g., Henrietta Lacks, the history of obstetrics). Feminine leadership for us is the emotional and spiritual nurturing we do within the collective. How do you care for one another? We are all really understanding of this life we are leading. There's a lot of community accountability for self-care and self-preservation. We try to model for each other the ways we know we need to be taking care of ourselves. So it's critical that we show new doulas that we are capable of setting healthy boundaries, getting enough sleep, eating good food, putting our own families first. You are only as effective a doula as you are able to show up for yourself. We also create space for when folks can't make it to a meeting because they are exhausted from a difficult birth or on their menstrual cycle, or if folks need to step away for a few months because the work is emotionally taxing — we understand. That can't really happen in the same way in a masc-dominated space. How has this work changed you? It has forever changed me. I've really learned how much unlearning I needed, and still need, to do. This work has grounded me in things that I've always known were important to me, and now I have community and language to share it. It's given me a healthy direction in all the ways I want to continue to grow as a doula, community member, and ancestor. What are some of the mistakes and impacts of those mistakes allies and supporters have made? I've seen more talking than listening. People want to share our stories, but not include us in the room. This makes us a talking point versus at the forefront of conversations regarding our experiences as people of color. Allies and supporters who make saints of us for supporting our community instead of doing the work to support us. For example, with our work at Santa Rita jail, we don't need people to tell us how "good" we are. We've gone every single Monday since 2016, and Birth Justice Project did the two years before that. For some reason, issues around incarceration are sexy or exciting to people and I can't tell you why; it's upsetting. What does it mean to think about being a birthing person who's incarcerated? What does it mean to be trans and be incarcerated? What does it mean to be a child from this experience? There are so many intersectional experiences here and I worry that truth gets lost. This is where people from Alameda County go. These are our neighbors. These are not invisible community members, even though the system is designed for their invisibility. I guess what I'm saying is: instead of painting us as saints, question the systems in place that propagate oppression, to the point of your personal discomfort. Use the discomfort as an opportunity to learn and grow. Why is this feeling pushing you to be defensive, reactive, and/or guilty? As allies and supporters in a position of privilege, being intentional in where your money, time, and skill-set goes has the potential to disrupt an oppressive status quo. Use your resources to support people and grassroots organizations doing the work at the community level.
Buy our products, support our patreons, offer us the use of your space for our events/retreats, donate your food goods to us for meetings, offer us a place to print our materials for free, offer us your skills/service, offer scholarships for our ongoing education, connect us to grant opportunities and funders, organize fundraisers and events that benefit us.
https://medium.com/blue-heart/decolonizing-and-rooting-rebecca-orozco-466d6dc378da
['Lindley Mease']
2019-02-01 05:46:00.858000+00:00
['Birth', 'Social Justice', 'Labor Justice', 'Racial Justice']
Rough Seas
image by Ulkar — purchased by the author

Friend ships can be sailed
Through rough seas and quiet storms
Scarred — never broken

First of all, you and Siri weren't stupid. You were a couple of kids doing what kids do — learning. I'm so glad you both escaped further harm, and that you built such a bond with one another.
https://medium.com/survivors/first-of-all-you-and-siri-werent-stupid-db927cef145c
['Toni Tails']
2020-09-10 07:47:16.835000+00:00
['Poetry', 'Relationships', 'Mental Health', 'Creativity', 'Life']
BLOOD RED DHALIA
When I think of my ex-lover Somadina, I think of some of the wilted flowers in the small garden behind his Ikoyi duplex, the stems of roses bent to submission, weak and dry like what we had. I think of how often I watched Somadina remove the weeds on his flower beds, each pull from the soil faded my senses into oblivion. “Our children will really love it in my garden, Dhalia”, he would say with a certainty that forced me to endlessly count the pinnules of the ferns in his garden, searching for ways to distract myself from the scary thought of the future he imagined for us. I liked Somadina and his love for plants, how he named me after his favorite flower - ‘Dahlia’, he immersed it into my heart until ‘Amaka’ stopped feeling like my name. I wanted to always hear him call me ‘Dhalia’, I loved the way the words fell off his lips, into my ears and down to my heart. He never forgot to water the plants that flanked his staircase except when he lost himself in his writing. I had helped him water them on a Sunday afternoon, I watched as the water trickled down the can and into the flower pots, they looked beautiful, like the words he said to me afterwards, “Thank you Dhalia, I cannot wait for our daughter to watch while you water our plants”. I loved how safe my tongue felt in Somadina’s mouth whenever we kissed. That night, when we made love in his dimly lit room, I thought of how he said ‘our plants’. It felt good to be included in his friendship with his garden, our garden. Somadina knew what to do and where to go with my body until he didn’t, until he stopped calling me ‘Dhalia’, until he stopped seeing our future and all the children we had in it, until I found out he wanted to spend the rest of his life with a fair girl he calls ‘Gazania’. He wanted to have kids with a thick girl he calls ‘Tulips’. Until I found out ‘Pretty Lotus’ reminded him of his late mother’s grace and he wanted to bask in that feeling forever. I was ‘Dhalia’ for a while but he was done writing his book and I was only a character in a love story. I wonder where he would love to be buried…in his garden? Surrounded by all the 42 species of Dhalia.
https://medium.com/@ugochiokoli/blood-red-dhalia-bd359199c303
['Ugochi Okoli']
2019-10-16 17:18:34.875000+00:00
['Gardening', 'Flowers', 'Love', 'Men', 'Women']
Elections in Chile: Big challenges await Gabriel Boric
Elections in Chile: Big challenges await Gabriel Boric Chile’s new president, Gabriel Boric, is not the extremist he was made out to be in the election campaign — but a hope for his people, who are suffering under neoliberalism. Mochi Dec 20, 2021·3 min read süddeutsche.de — Benedikt Peters, 20.12.2021– 04:37 p.m. Translated by Michele Corlito Photo: Martin Bernetti/AFP Fist on heart: Gabriel Boric, 35, has made it. The outcry before the election was big. This young man was a radical, a communist, an extremist; he could not possibly become president. In the end, however, all the smear campaigns could not prevent this from happening. Gabriel Boric, 35, with a full beard and tattooed forearms, will enter Chile’s presidential palace next spring. He will be the youngest head of state in the country’s history — and the first decidedly leftist to hold this office since Salvador Allende. The world now need not worry that Chile is going to drift into a communist dictatorship. In terms of his political content, Gabriel Boric is about as radical as Germany’s Chancellor Olaf Scholz. Free schools and universities, an adequate state pension, good health care even for those with statutory health insurance — the chancellor would sign all of these as a matter of course. And if he were not in government with the FDP, Scholz, according to his campaign promises, would have moderately increased taxes for the richest; Boric wants to place a somewhat heavier burden on the top 1.5 percent of Chileans and on mining corporations. Boric’s program is radical only in the literal sense: He wants to get to the root of the Chilean system’s problems. The neoliberalism that dictator Augusto Pinochet once imposed on the country is to be replaced — and with it the glaring social inequality that has kept Chile in a state of crisis to this day.
https://medium.com/@mochi.moth/elections-in-chile-big-challenges-await-gabriel-boric-107f5251b1cb
[]
2021-12-20 16:08:44.034000+00:00
['Translation', 'Chile', 'Gabriel Boric', 'Elections', 'News']
Escaping the Time-Scarcity Trap
In case you missed it, Janet Choi from @idonethis has a compelling piece on 99U titled “Escaping the Time-Scarcity Trap.” “When you are busy, you feel flustered. When you’re flustered you start focusing on reactive work — which makes you feel busier. And when you are busy, you feel flustered… The good news is that it’s possible to escape the trap of time-scarcity thinking by reframing how you perceive your lack of time.” The full piece can be found here: http://99u.com/articles/27117/escaping-the-time-scarcity-trap
https://medium.com/nirvanahq/escaping-the-time-scarcity-trap-4438887e6b73
['Team Nirvana']
2017-06-27 16:09:17.434000+00:00
['Gtd And Productivity', 'News', 'Philosophical']
How to Lock Down Windows 10 Devices for Employee or Public Use?
Lockdown Windows 10 for employee use. Today, organizations are fast shifting from older Windows versions to more advanced options like Windows 10 devices, which are highly recommended for the value and benefits they add to businesses and organizations. The new-generation Windows combines existing, useful functionality with state-of-the-art features while ensuring security. This much-needed transformation empowers the workforce to be faster and smarter, with more productivity on the table. With an array of devices in different shapes and sizes, strong security capabilities, sound app integration, and a new level of browsing experience, Windows 10 is conquering workplaces globally. It is gaining popularity across businesses — small, medium, or large — as they widely deploy Windows 10 devices for their employees, or use the devices in kiosk lockdown mode for marketing and advertising or for training and education. However, companies sometimes have a tough time ensuring that these Windows 10 devices are secured and fully managed so that they serve business purposes only, with no scope for misuse or data breach. That assurance is imperative for driving productivity, performance, accuracy, and efficiency across teams and the organization. This is where the intuitive and powerful MobiLock Mobile Device Management for Windows 10 devices comes to the rescue. With its many valuable features, it can help you lock down all your Windows 10 devices with complete control from a single unified dashboard. Not only that, you can remotely get exhaustive visibility into your entire MDM-integrated Windows 10 device inventory and monitor, control, secure, and troubleshoot from the same dashboard in real time. Also read: What do you know about Windows 10 Modern Management? It also gives you the option to lock devices into single-app mode or multi-app mode, depending on your requirements.

Now let's walk through the simple steps to lock your Windows 10 devices for employee or public use. The basic things you will need at the start of the procedure are:

I. A MobiLock Pro account
II. Access to a Windows 10 laptop/desktop/Surface device from the MobiLock Dashboard

After that, you need to do the following:

1. Create initial Windows Device Profile(s) — To get started with enrollment, the first thing to do is create a Device Profile. It lets you group all your device policies together so they can be applied to one or more devices when enrolled.

2. Create an Enrollment Configuration — On creating an Enrollment Configuration, you will get a QR code and a URL. Opening this QR code/URL in the MS Edge/IE browser starts the enrollment of enterprise mobile and other Windows 10 devices. An enrollment configuration consists of basic rules such as: a. device naming convention, b. default device profile, c. default license to be applied.

3. Enroll your Windows 10 device(s) — With the enrollment URL, you can enroll your Windows 10 devices either using Microsoft Edge or using the Connect to Work or School app that comes preloaded on Windows 10. Once a device is enrolled, it appears on the MobiLock Dashboard under the Devices section as a Managed device.

4. Create and configure Device Profile(s) — A device profile has two sections: a. Select Apps — configure your application policy from this section, such as application blacklisting, where you can block selected Windows applications from running; b. Select Settings — configure additional settings by category. Available sub-categories are: i. Branding — apply a home and/or lock screen wallpaper to your enterprise devices. ii. Wi-Fi — choose whether to allow or restrict users from connecting to Wi-Fi. iii. Edge Browser — choose a cookie policy, specify a start page URL, allow/restrict AutoFill, and allow/restrict pop-ups, to name a few. iv. Email/Exchange settings — select the Email/Exchange Configuration(s) you created in the Windows utility section so that they are published to the devices in this profile. v. General settings — all the generic settings that enhance control over the devices, such as whether to allow access to the USB port or Bluetooth, to name a few.

5. Apply the Device Profile — you can apply the device profile at the time of enrollment by following the steps above, or you can apply it at a later stage.

Enrolling your devices into MobiLock MDM is not only easy, it also opens a whole new world of possibilities that promises to enhance employee productivity, IT efficiency, and operational accuracy. The steps above will help you enroll your device(s) and apply a device profile effortlessly for your employees and executives to use, without any hassle or worry about misuse or data threats. MobiLock Pro makes it a cakewalk to lock down and manage all your Windows 10 devices from a single dashboard — so you can focus on your core business while it handles your Windows 10 device management with simplicity, security, and efficiency.
https://medium.com/@scalefusion/how-to-lock-down-windows-10-devices-for-employee-or-public-use-97694c7e74a0
[]
2019-02-06 06:16:00.057000+00:00
['Tutorial', 'Employees', 'Windows 10', 'Public', 'Lockdown']
Which Covid-19 Numbers Really Matter
Elemental: Where is the best place to find Covid-19 data for your area? And how local should you be going? Eleanor Murray: For most places, your state health department is a good first stop, but many states are so large that this won't be specific enough. Local data from your city (or from your metropolitan area in larger cities like New York) will be more useful in those states. Your local health department may have a website that provides these numbers, or your state health department website might provide local numbers as well. Aside from just generally staying up to date about what's happening in your area, when is it useful to look at local Covid-19 data? Local Covid-19 data is most useful when you are making decisions about whether to engage in higher-risk activities, such as indoor dining, attending religious services in person, or visiting friends or family in their homes. When Covid-19 rates are low in your area, these may be reasonable to do (potentially with some safety precautions such as masks), whereas during a surge they should be avoided. What are the most important numbers to look at? What's the first number you check for your area every day? I look at a couple of numbers: What are the per capita new case counts and — most importantly — how do they seem to have been changing over the past few days, and what are the test positivity rates? Test positivity rates can give you an indication of whether there is enough testing being done. We want to see those numbers below 2%–5% to indicate that there is not an outbreak happening and that sufficient testing is being done. Numbers closer to 10% indicate that there is under-testing, and numbers around 20% or above indicate a substantial surge in cases is probably happening. If cases and positivity are increasing or decreasing, then that's a good indication of whether your area is high or low risk at this time. How do you tell if things are "good" or "bad"? How big of an increase or decrease really matters? The trickiest part about assessing the numbers is that we are always playing catch-up. Deaths tell us about infections that happened a month or even more in the past, hospitalizations tell us about infections from a few weeks ago, and even cases can be misleading since people can be infectious before they have symptoms. We also see a lag in reporting, so that the numbers reported today aren't actually the final numbers for today — over the next few weeks today's numbers will change. This means that even if you thought cases were going down, once you get a few weeks out you can look back at the graphs and realize they were actually going up. Because of this, it's generally a good idea to take the maximum precautions suggested by the past few weeks of case data. If you're considering traveling, what should you know about where you're coming from and where you're going, and how might those numbers affect your plans? When traveling, you definitely want to avoid moving between high and low case count regions, because this could cause a new surge somewhere. In general that means you should really only travel if the cases are low where you are and have been low for the past couple weeks — and then only travel to and through other low case count areas! If cases are high where you are or where you want to go, it's inadvisable to travel if at all avoidable. How should you read/interpret the death rate and the hospitalization rate? Do those matter on a day-to-day basis for a layperson?
Hospitalization and death rates are really more of a measurement of how poorly the government is doing in controlling this pandemic. Doctors have gotten a bit better at treating patients who need to be hospitalized but in general the risk of death from Covid-19 is pretty close to where it was. Changes in the death rate over time mostly reflect changes in who is getting infected rather than any indication of disease severity. And hospitalization and death lag infection by so much that we really should not be looking at those numbers to think about how much risk we might be at on a day-to-day basis.
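To make those thresholds concrete, here is a small illustrative Python sketch that turns raw local counts into the two numbers Murray watches: per-capita new cases and test positivity. The input figures are invented for the example, and the cut-offs simply restate the roughly 5%, 10%, and 20% guidance from the interview.

# Illustrative only: the counts below are invented, not real data.
positive_tests = 180      # new confirmed cases reported this week
tests_performed = 4000    # tests performed over the same period
population = 250_000      # population of the area

cases_per_100k = positive_tests / population * 100_000
positivity = positive_tests / tests_performed * 100

print(f"New cases per 100k residents: {cases_per_100k:.1f}")
print(f"Test positivity: {positivity:.1f}%")

# A rough reading of positivity, following the thresholds in the interview
if positivity <= 5:
    print("Testing looks sufficient; no sign of a large outbreak.")
elif positivity < 20:
    print("Likely under-testing; treat the case count as an undercount.")
else:
    print("Positivity of 20% or more suggests a substantial surge is underway.")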
https://elemental.medium.com/which-covid-19-numbers-really-matter-cd492f2086aa
['Anna Maltby']
2020-10-14 14:25:16.599000+00:00
['Coronavirus', 'Health', 'Covid 19', 'Data']
CSS Flex: Creating dynamic, responsive lists
🏭️ The setup
For this article, we need to start with a common file. Here's the HTML:

<div class="container">
  <div>1</div>
  <div>2</div>
  <div>3</div>
  <div>4</div>
</div>

And the CSS:

.container {
  display: flex;
  margin: 0 5px;
}
.container div {
  background-color: white;
  margin: 5px;
  /*** Just to center the text ***/
  display: inline-flex;
  justify-content: center;
  align-items: center;
}

Adjust the margin (or width) of the container as you want. So you have something like this:

💼 Wrap the container
The flex-wrap CSS property sets whether flex items are forced onto one line or can wrap onto multiple lines. If wrapping is allowed, it also sets the direction in which lines are stacked. Add this to the container:

.container {
  display: flex;
  flex-wrap: wrap;
  margin: 30px 5px;
}

That does almost nothing yet, but that's normal; don't worry.

🔎 How to use Flex?
The flex shorthand combines three properties:
flex-grow: should the element fill the empty space of the parent?
flex-shrink: can the element be shrunk relative to the parent block?
flex-basis: the size of the element (used like a min-width)

flex: flex-grow flex-shrink flex-basis;

🧙 The magic touch
Now we add the CSS that lets each div adjust when the parent block has empty space, plus the size of our divs minus the margins. If we want 3 divs per line:

.container div {
  background-color: white;
  flex: 1 0 calc(33.33% - 10px);
  margin: 5px;
  /*** Just to center the text ***/
  display: inline-flex;
  justify-content: center;
  align-items: center;
}

If you want 4 divs per line, just use: flex: 1 0 calc(25% - 10px)

🎊 Congrats 🎊
You just created a container that allows you to add as many elements as you want without having to write any additional CSS. This technique is very useful if you have lists, products displayed across several lines, or articles! Enjoy 🖖
https://bootcamp.uxdesign.cc/css-flex-create-dynamic-lists-responsive-429ce8a8517d
['Robin Saulet']
2021-04-02 17:05:58.862000+00:00
['Web Design', 'Flex', 'CSS', 'Resources', 'HTML']
Warehouse Safety Management with IoT Security Systems
Posted On: December 18, 2020 Modern warehouses are large spaces, organized into designated areas, with machinery doing heavy lifting. But modernization has brought downsides. Warehouse safety is a huge concern for warehouse owners and personnel. People who work there are often risking their lives, and warehouse managers need to be aware of the unique safety challenges that their personnel face. Safety challenges faced by warehouses in India Warehouses are spaces where humans work alongside heavy equipment and machinery such as forklifts, lift stackers, and so on. When personnel receive proper training in using this machinery, and adhere to it, productivity can be sky-high. If safe operating practices are not adhered to, the result can be injury to personnel on the warehouse floor, as well as damage to the equipment. Spills on the warehouse floor, wires, or ropes have the potential to cause accidents if not taken care of right away. Personnel can slip or trip as a result of spillage and badly injure themselves. Warehouse fires are an overlooked hazard, but one that needs attention. Unlike an accident involving a person or heavy equipment, fires can spread quickly through the warehouse because of the density at which product is packed. Not only that, a fire can take out all your personnel, machinery, and product in a short period of time. If your warehouse has exposed electrical wires, houses flammable substances, or has sub-par roofing, fire is even more important to watch out for. Warehouses in India may be built to maximize storage space, and not much attention might be paid to an ergonomic layout that allows personnel to do their job most efficiently and with the highest degree of safety. Improper stacking of pallets can lead to falling objects, which are a leading cause of injury to the hapless workers who happen to be in the vicinity. Warehouse safety standards Warehouse safety standards are codified requirements that warehouse owners and managers must follow in order to ensure health and safety in their warehouses. When standards are present and followed, warehouse personnel are encouraged to learn about these policies, adhere to them, and point out co-workers who may not be following them as stringently. The Ministry of Labour in India set up the National Policy on Safety, Health and Environment at Workplace (NPSHEW), which includes guidelines on warehouse safety. A stated goal is to “establish a preventive safety and health culture in the country through elimination of the incidents of work-related injuries”. A review of these documents shows that, though they are well-meaning, they do not address the challenges to safety in modern warehouses. For example, the Warehouse Manual for Operationalizing Warehousing Development and Regulation devotes an entire section to the safe storage of pesticides but not much to other types of safety. It is thus incumbent on owners of modern warehouses in India to establish best practices and standards. We can look at the standards published by the United States Occupational Safety and Health Administration (OSHA) for some ideas. Some standards from OSHA, which can be adapted to the Indian context, are: communication of safety standards through appropriate training; safe operation of forklifts; establishing and following SOPs in the event of an emergency; marking floor/wall openings, holes, etc.;
clearly marked and operable exits; availability of fire safety equipment and training to use it; and use of appropriate PPE by all personnel. Warehouse safety management Warehouse owners can foster a strong culture of adherence to safety practices by creating a warehouse safety management plan. The plan should include personnel safety training. When training is comprehensive, and adhered to on a daily basis, it can help to build an informed workforce. Warehouse managers can also maintain a checklist of requirements specific to each warehouse area of operation, and encourage their staff to use it. Repeated practice of the correct procedures will lead to an organization-wide culture of safety. Training middle managers to personally follow safety practices and communicate them to those in their charge through regular meetings, poster presentations, periodic warehouse drills, and mandatory annual trainings can all serve to remind workers of what they learned in initial training. As your warehouse builds a roster of best practices, modify your training to include those lessons learned. Some tips on warehouse safety Train personnel in the proper operation and storage of machinery, and to follow safety SOPs at all times. Warehouse personnel should use appropriate personal protective equipment (PPE) at all times. Ensure that aisles, perimeters, exterior areas, or other designated areas are clear and free of obstructions. Ensure that racks are bolted to the floor. Overhead sprinklers, fire extinguishers, etc., must be in working condition and unobstructed, so that personnel can reach them without effort. Emergency exits must be marked clearly, and warehouse personnel should be able to get to their nearest exit without effort. Pallets should be stored properly, with no chance of tipping over. Adequate overhead lighting is paramount to safety. Identify any non-functional lights or any dark spots in the warehouse, where lack of lighting can cause serious harm to personnel and damage to equipment and products. Hazardous materials must be labeled clearly and stored in designated areas of the warehouse. Ensure that dock doors are in working condition. Doors should open and close fully in order not to obstruct tall loads or machinery. IoT and E-surveillance systems for warehouse safety Many of the tips noted above depend on the diligence of humans in following them. Recent improvements in surveillance technology and Internet-of-Things (IoT) platforms make it possible to automate warehouse safety. A warehouse e-surveillance system can proactively identify blockages to aisles or exits and flag areas of the warehouse that are poorly lit. It can help your control room monitor whether your staff wears PPE at all times and whether SOPs are followed. Based on If This Then That (IFTTT) rules, it can notify warehouse security and other officers of fires; a simple sketch of such a rule follows below. Investing in a comprehensive warehouse IoT and e-surveillance system, such as those provided by IGZY, can make warehouse safety less dependent on human intervention. IGZY delivers e-surveillance and security solutions that make your business premises intelligent, safe, and secure using our state-of-the-art unified IoT platform that is trusted by major brands such as ICICI Bank, Vodafone, Myntra.com, MJ Logistics, and many more. Ask for a demo today.
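The IFTTT-style rules mentioned above are easy to picture in code. The Python sketch below is a hypothetical illustration only; the sensor fields, the notify helper, and the thresholds are invented for the example and are not IGZY's actual API.

# Hypothetical IFTTT-style safety rules for a warehouse e-surveillance system.
def notify(recipient, message):
    # In a real deployment this would call the platform's alerting channel
    # (SMS, push notification, control-room dashboard, and so on).
    print(f"ALERT to {recipient}: {message}")

def evaluate_rules(reading):
    # IF smoke is detected THEN alert the security officer.
    if reading.get("smoke_detected"):
        notify("security officer", f"Possible fire near {reading['zone']}")
    # IF an emergency exit is blocked THEN alert the floor manager.
    if reading.get("exit_blocked"):
        notify("floor manager", f"Emergency exit obstructed in {reading['zone']}")
    # IF lighting drops below a safe level THEN raise a maintenance task.
    if reading.get("lux", 1000) < 150:
        notify("maintenance", f"Poor lighting ({reading['lux']} lux) in {reading['zone']}")

# Example reading pushed by an IoT gateway
evaluate_rules({"zone": "Aisle 7", "smoke_detected": False, "exit_blocked": True, "lux": 90})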
https://medium.com/@rounakpreet-singh/warehouse-safety-management-with-iot-security-systems-4db8e620740d
['Rounakpreet Singh']
2020-12-21 10:37:07.498000+00:00
['Warehouse Management', 'Iot Security', 'Iot Platform', 'Warehouse', 'Safety']
Bitfolio — March update
Wow, what's happening to the crypto world? 🙈 It seems that after that insane run, we are having that long-awaited correction; it's a really good opportunity to buy and invest when the numbers are so low. On the Bitfolio side, we have a big update coming out soon, and some very exciting news: development on the Android version has started! 🙌 The (still-low) numbers of March 4000 monthly active users 350 daily active users 1800 sessions per day 50 new daily users We were honestly expecting this, due to crypto prices falling and fewer users investing. What's new We are adding support for multiple portfolios from version 2.3.0! In version 2.2.0 we added a different way to calculate percentage gains for exchange transactions; refer to our FAQs to understand what this means exactly for you. Also new: Historical trends More portfolio insights Coinmarketcap support BTC value Coming in future updates More requested features are coming in future updates, such as: Android version 😱 Exchange sync In-app news Live tickers Reports And much more; check out the Bitfolio roadmap on Trello, where you can vote on your favorite feature and keep an eye on the development. 👍 Useful Links Thank you ❤️ We've received a lot of feedback and we love it, please keep it going, keep sending feature requests and if you see a bug or an issue, send a complaint! (but please don't be rude 😉). Please consider sharing Bitfolio with your friends and coworkers, it would help us tremendously. Again, thank you ✌️ Francesco
https://medium.com/bitfolioapp/bitfolio-march-update-564beddc42e2
['Francesco Pretelli']
2018-03-30 01:02:32.683000+00:00
['Blockchain', 'iOS', 'Cryptocurrency', 'Apps', 'Bitcoin']
The ACA Repeal Bill Takes Center Stage at Spotlight Health
June 23, 2017 And so, the Senate version of the Affordable Care Act repeal bill landed Thursday, on the first day of Spotlight Health. This is a bit like putting on a Star Trek fan convention, and having an ACTUAL KLINGON STARSHIP land in the convention center. Luckily, everyone's already dressed for battle, and it turns out we've already got a tricorder (thanks to Anita Goel of Nanobiosym). Before the welcoming session even kicked off, Kathleen Sebelius sat down with Judy Woodruff to talk about the Senate's proposal. Sebelius was Secretary of Health and Human Services under Obama and worked to pass and implement the Affordable Care Act, so no one expected her to have nice things to say. But she didn't just speak about a difference in approaches to health care; she spoke about the Senate bill's "assault on Medicaid." (One could easily imagine a Starfleet Commander Sebelius, it should be said.) What's on today's agenda? A breakfast discussion of the roots of the opioid epidemic with Nora Volkow, Director of the National Institute on Drug Abuse at the National Institutes of Health. (Spoiler: You probably know the basic story, but the sheer scale of the problem — and the potency of synthetic opioids — is mind-boggling.) A panel discussion that attempts to answer, "Will People, Data, or Payments Drive Health Care into the Future?" (As you can imagine, viewpoints differ.) "Quelling Violent Extremism with Public Health Tools" takes place at noon, and looks at an incredibly pressing problem from a health perspective. Perhaps the war on terrorism — like some other "wars" that have been declared in the last half century — could benefit from a solid public health approach. And, yeah. It's time to stop by the Mount Sinai booth for a free skin cancer screening. I can't get an appointment with a dermatologist at home until November. Sunday, at the Spotlight Health closing session, we'll see US Department of Health and Human Services Secretary Tom Price emerge from the Klingon ship, and assure us that AHCA comes in peace.
https://medium.com/the-aspen-institute/the-aca-repeal-bill-takes-center-stage-at-spotlight-health-b8ec53b6926f
['David K Gibson']
2017-06-28 13:36:37.281000+00:00
['Ahca', 'Health', 'Politics', 'Obamacare', 'Aspen Ideas Festival 2017']
Representation of PTSD in Modern Entertainment
In recent years, I have been happy to see more realistic representations of how earlier traumas can affect our current-day personalities. With the recent release and popularity of these three movies and shows in particular, a few of the characters stand out regarding how their ghosts move forward with them. In Avengers: Endgame, Thor carries the weight of what he feels is his failure in his choice of how/where to strike our main villain, Thanos. In the immediate aftermath of 'The Snap', we see him by turns sullen or angry, but as we move another five years into the future we see… well, we see Fat Thor. Yes, this character is played for laughs, and those who complain about "fat shaming" may have cause to be offended, but we'll skip that aspect for the moment. We see a Thor who has chosen to brush over his pain with alcohol, a tightknit crew of friends, and the dopamine rush of video games. While the community around him is doing what it can to move on, Thor has chosen to keep his priorities and challenges simple. These minor comforts and victories keep thoughts of his bigger shame at bay. Thor has a right to heal in his own way, but during his absence from the wider world, this Avenger and his skills have been badly needed on Earth and beyond. He's handicapped by his self-imposed trauma and unable to be of use to anyone, or so he may feel. A self-imposed trauma is just as damaging as an externally forced, physical one, and as someone with a mix of both, I can understand his desire to shelter and cocoon. Throughout the film, his current condition is used for comedy a few times, but I'm not so defined by PTSD that I took offense. I thought it was handled well. I was equally pleased to see that he didn't have a redemption scene. He did become an important member of the heroes again, but I like that he just soldiered on without craving the "one thing" that would make him worthy again. He regained his self-worth simply by being there. Well done, Russo Brothers. True Detective, Season 3 gave us Detective Wayne Hays (played by Mahershala Ali), a Vietnam Veteran and an Arkansas State Trooper investigating the abduction of children. Though the series time-hops between three different eras in his life, his experiences as a soldier don't directly come into play. Where his trauma affects him is in the persona that he has created for himself after the war. We don't know anything about his childhood or upbringing, so we're left to infer how combat and its horrors affected him. Hays isn't a stereotypical alcoholic brooding cop. He drinks, yes, but it doesn't define him. He's quiet, but out of self-control and self-defense. His failure in communication and expression with his wife and children is the biggest hurdle in his personal life, and we see this play out in all three time periods. He starts a relationship with his future wife by relying more on the moods of the moment than on establishing lines of trust and communication. Though not a fatal flaw, that beginning causes trouble throughout the marriage. At the end of the series, we are given the only flashback of his time in the Army, and the scene is handled quite beautifully. In the parts of the show that cover the current time period, we see Hays in his 70s still trying to unravel a case as his mind floats in and out of dementia. The final image of the show is him in a much younger form, in the jungles of Southeast Asia. He walks slowly, stops to take a look around, and heads off into the dark mist of the jungle. Gone.
Our last image of Wayne Hays is of him slipping back into the world in which he last felt safe. Not his family or friends or work, but the jungle and the combat that formed him into what he was for the remainder of his life. I can share a long list of my Veteran friends who are more comfortable "in the shit" than working with a bureaucratic system, for a retail empire, or struggling to exist in the gig economy. Game of Thrones is coming to an end soon and we still have CleganeBowl ahead of us. In this case, we may have redemption of The Hound's trauma arc, because it's his older brother, The Mountain, who inflicted the damage when Sandor was a child. Sandor Clegane had his face burned and scarred as a young child by his older brother, and the physical and emotional scars have trailed him since then. We don't see a young Hound, but when we're introduced to him, he's quite the asshole. A fierce fighter, to be sure, but he instigates many of those battles simply by being brusque and rude. Even as a warrior, he has an aversion to fire, and we see this affect him at several points in the show: the Battle of Blackwater, Beyond the Wall, and most recently the Battle of Winterfell. When the flames begin, he cannot tolerate it. It makes him flee or freeze. Several times he has to be defended by others while in this fright mode, and we have yet to see him "snap out of it" on his own. I like that this massive figure is hobbled by fire, a thing that can be dangerous but has yet to be an actual threat to him. In these scenes, the actual threat is close and valid, but the mere presence of flames triggers him and makes him lose focus on the actual danger. He becomes a child again with a freshly burned face. While the results of this trigger are a bit overt on the screen, that's precisely how they can feel when they strike. Whatever you may be doing at the moment instantly becomes less important than the flood of sensations swirling in the brain.
https://medium.com/@danscape/representation-of-ptsd-in-modern-entertainment-e23d572770d6
['Daniel D Baumer']
2019-05-06 21:52:38.706000+00:00
['True Detective 3', 'Avengers Endgame', 'Army', 'Game of Thrones', 'PTSD']
The Great Has Been and The Ruined
Photo by Jan Tinneberg on Unsplash Musical Selection: Queen|Another One Bites the Dust The Great Has Been and The Ruined An Audio Poem Crack goes the whip and every order or demand that can break a camel's back — thrown at us under a noon day's sun and just like that, the fun . . . is done. We've been asked to pack our things, close up shop, retire, put things to bed and not wake up and well, if you know us . . . you know we're not going quietly. Funny how money can shift a mountain of growth or how it can sharpen the hardest edges especially when the underdogs begin to climb too high. The Powers That Be say, “Drag them back down,” and as we fall, they clap and hoot and holler and throw confetti in the putrid air juddering around their safe spaces, laughing at the marks we've made. “It's been a nice run,” they'll say. “You've done well, but we have other plans in mind and you're not in them,” and we knew the hammer would come down, it was only a matter of time. We were hanging on to hope. It's hard to say goodbye to family, to friends. We've built a castle, invited the village, and made merry with thousands, and now . . . that castle is being demolished. Here comes the wrecking ball. These walls are being Jerichoed right before us and there is no time for tears, we must gather our pride, tuck it neatly alongside our egos, and zip the contents up . . . Shut it down. I wish there was another way of saying and I don't want to go, but the great has been and the ruined are two different things and I'm having trouble recognizing which one we are. Today is the first day of not being angry or hurt or sad because business is business and when business is business, words do not matter. I have nothing but love for this publication. As a writer, it is home to so many pieces of mine. As an editor, it was a place I shared in growing with others. To say that I will miss P. S. I Love You is an understatement. It is pressed deeply into my spirit and I will carry it with me forever. Peace and blessings, beautiful people. Thank you. ©2021 Tremaine L. Loadholt
https://psiloveyou.xyz/the-great-has-been-and-the-ruined-7778d7ac8a04
['Tre L. Loadholt']
2021-05-25 12:29:50.782000+00:00
['Swan Song', 'Music', 'Goodbyepsily', 'Poetry', 'Life']
Still Feelin’ Alive at 25(K): My NaNoWriMo Halfway Mark
Having never participated in NaNoWriMo before, I had no clue what writing 25,000 words felt like or if I could even do it. In my last post, I had proudly survived my first week, hit 10,000 words, and quickly figured out this stuff is, for real, not easy. And thanks to my amazing writing buddy, Martina, who has kept my ass in gear, I can happily report that I broke the 25,000 mark like a champ. Here are a few realizations I've had by reaching the halfway mark of my writing journey. Character inspiration is awesome. Have a real-life nemesis that you would love to see go down in the worst kind of way? Make them your inspiration for your story's evil villain! Need a name for a sweet, old woman who bakes cookies for the little town orphans? Name her after your gold-hearted grandmother who makes the most delicious snickerdoodles of all time. Find your inner Taylor Swift and, instead of writing songs about exes or people, be sneaky and incorporate them in your story. Keep reality alive in fiction. Weave real-life events into your story, such as historical milestones, modern-day conflicts, and important eras in pop culture. These will keep your story more grounded and give you something interesting to craft plot lines around. It also adds a very realistic tone to your story that will help your reader connect with your fictional storyline. I've surprised the hell out of myself. Looking back, my story seemed so simple and not nearly as interesting when I first started. There are days when I've definitely struggled and the words just haven't flowed. Some days I would much rather have slept an extra few hours than guzzle down a pot of coffee before 6 a.m. But I'm thanking myself for staying relatively disciplined. Now, my story has taken on a life of its own and is becoming way bigger and better than I ever could have imagined — all because I just started typing. So cheers to the next 25,000…and beyond!
https://medium.com/nanowrimo/still-feelin-alive-at-25-k-my-nanowrimo-halfway-mark-7dcd680821f6
['Megan Wagner']
2015-11-16 00:20:47.040000+00:00
['NaNoWriMo', 'Authors', 'Writing']
Within your eyes,
Cormoran Lee your words to me today, struck a chord, intrinsic. Without you I would be devoid. Null. With you. We. What a marvelous place to be.
https://medium.com/@hawkinsjulia63/within-your-eyes-7db89eb4d91b
['Julia Hawkins']
2020-12-16 10:35:07.357000+00:00
['The Bad Influence']
Is Human Nature Good Or Bad?
Is human nature good or bad? It's a controversial question. People have argued about this topic for thousands of years without reaching an answer. I believe that some people's nature is good, and some people's nature is bad. The environment is crucial for a person's growth. Their family, their childhood, their school, their experience in life, their work… all of these create different people, with different characteristics. A good environment can affect a person's life very much. But there are also some bad people who come from a good environment. How does that happen? Well, it's about their nature. Some people's nature is bad. A good example is people who are sociopaths. Some of them are from well-educated families. They have lower moral standards than normal people. They know how to manipulate people, and they are cold-blooded. A good environment doesn't change them much; it becomes a hunting ground for them. They lack empathy; they cannot feel others' feelings. They only want to gain attention and benefit from others. They do not see other people as human beings; they see others as their prey, and they are the predators. Some people's nature is good. For example, Jeanne d'Arc. Jeanne was from a small village in the French countryside. She was a French hero who protected France from the English army's invasion. She was brave and great. Before the war, she was an ordinary girl in a small village, but when the English army invaded France, she took great courage and joined the French army. She was illiterate, but her strategies were clever. Her good nature made her a hero, and people will not forget her. Is human nature good or bad? We might never know the answer. But believing in good will always be a good choice.
https://medium.com/@gogleqi123_9072/is-human-nature-good-or-bad-b909456b5708
['Elizabeth Chen']
2021-09-14 16:01:13.241000+00:00
['Human Nature', 'Human Behavior']
Ask a Clean Person: Fragrant Shoe Season Arrives Early
I don’t want to get rid of my sandals but seriously, they are disgusting. I am not a Sweaty Person or a Stinky Person in general. My socks never get stinky or sweaty, not even the ones I wear when I work out. Come the spring/sandal time, though, and I swear by 10 a.m. it smells like there’s a dirty dog and a bag of Fritos under my desk. I make myself sick, and I’m terrified that if I can smell my own feet, everyone else can, too. This happens with tiny strappy sandals, flip-flops — any shoe I wear without socks. I’ve tried sprinkling them with baking soda and letting them sit overnight, which helps for a day, maybe two. Can you help me? I don’t see how any kind of stink-destroying insoles could work here, since it’s only my OPEN shoes that smell like the devil. Why is this? This happens to my best friend, too, so I know I’m not the only one. Are we just The Stinky Feet Girls? I would give up on sandals entirely, but we live in Texas and shoes with socks just aren’t an option when it’s 103 degrees for 90 days running. Help us? Please? You might be The Stinky Feet Girls. But you know what? That’s fine! Sometimes I eat potato chips in my bed while watching episodes of The Sopranos that I’ve seen a hundred times. We’re all gross in our own ways. I think what we need here is a two-pronged approach: foot care and shoe care. Because you might be a Stinky Feet Girl, I would definitely suggest that you start using a daily foot powder like Gold Bond or Tinactin. There are also a whole heap of foot antiperspirant products out there that work a little differently from the powders, which absorb sweat but don’t stop it. Also they have delightful names like The Ugly Little Bottle, Ghost Grip, and Neat Feat. My suggestion would be for you and your friend to each buy one different product, test it out, compare notes, and if the first thing you tried doesn’t work, try another. Just in the same way with deodorant for the underarms you may need to experiment with different brands to find the one that works best with your body’s makeup. So that’s step one. Step two is to treat the shoes themselves. There are a ton of odor-neutralizing shoe sprays on the market, and I particularly like products made specifically for getting the stank out of athletic shoes, because they’re optimized for use on some of the smelliest shoes out there. Here’s one from Dr. Scholl and another from KIWI for you to check out. It may be that during the summer months you have to spray your shoes down each night when you take them off, but once you get into the habit of doing that it’ll become something you don’t even think about, you just automatically do. Like how you make your bed every morning. Automatically. Right??? I realize the topic of smelly shoes, feet, and all manner of foot/smells have already been covered. However, I have a smelly foot problem that is slightly unique. I am a lucky lady in that I get to walk to work every day — no public transport needed. As I am a comfort-over-style kind of gal, I walk to work in sneakers and change into heels once I get there. When I get to work in the morning I am usually first or second and can easily slip off my socks and sneakers with little embarrassment; I store my shoes in one of my filing cabinets, to keep any stink encased in steel. The problem is that in the afternoon I am surrounded by several coworkers (admittedly some very attractive coworkers, though that shouldn’t make a difference) and I feel extremely embarrassed when opening my smelly shoe drawer. 
Currently I have tried scent balls and scent sprays, but both just mix with the foot smell. I also have used baking soda, which works well in my shoes but not so much in the drawer! Please help! You just need to put a Bad Air Sponge up in that bitch. That'll do ya. Another idea is to get your paws on a pair of sneaker balls and pop 'em inside your kicks after you take them off in the morning, so that they absorb the smells right out of your sneaks while they're sitting in the drawer. Sneaker balls are pretty easy to find — most athletic shoe stores like Foot Locker will carry them — and they're pretty inexpensive. Amazon lists several options in the $4 to $10 range. If the drawer isn't big enough to fit a canister of Bad Air Sponge, you can also check out a thing called Fridge It. They're designed for refrigerators (as if that weren't utterly clear/why did I just explain that??), but the principle of odor-sucking-upping is the same, so they'll be small enough to pop into a desk drawer. I recently bought these cute flats from the Friday Bargain Bin. I love them! Great find! The first day I wore them was a gray day that turned into a rainy day, and they got wet in the rainwater. Now they're a little dirty. Since they're dyed leather I'm not sure how to go about cleaning them. A damp paper towel got way too much orange on it for me to be comfortable continuing. My roommate hasn't worn hers yet … should I get her drunk when she does and switch out her new unstained ones for mine like in the Judgement of Solomon? Yes, you should absolutely do that. I'm not even kidding. All's fair in love, war and Friday Bargain Bin Anthropologie Trinket Flats. Okay no, not really! (I've been working on this whole Ask an Astonishingly Mean Clean Person persona but it's not really taking.) You can try a couple different things. Since they're just "a little dirty," a damp (DAMP NOT WET!)(I swear one of these days I'm going to have pop-up sponges made up that say "DAMP NOT WET!" on them and make a killing selling them to people who are forever staining their couches and shoes and carpets and such) sponge with a little blurt of dish soap will probably take the grime right up. If that doesn't appeal to you, you can get your hands on some neutral-colored shoe polish and give the shoes a good going-over with it, or grab a pack of these nifty Express Shine Wipes from KIWI, which will clean the dirt off and shine the leather back up as well. "Neutral," by the by, is a pretty standard shoe polish color, and you really shouldn't have any trouble finding it. In terms of the dye coming up as you're cleaning them, unless the shoe is visibly changing color, don't worry too much about it. Dye is just forever leaching out of things — remember the girl who had the wild makeout session on her carpet and got dark denim dye all over the place? Previously: I Drank the Juice; I Cleaned It Up. Jolie Kerr is not paid to endorse any of the products mentioned in this column, but she sure would be very happy to accept any free samples the manufacturers care to send her way! Are you curious to know if she's answered a question you have? Do check out the archives, listed by topic. More importantly: is anything you own dirty? Photo by iofoto, via Shutterstock
https://medium.com/the-hairpin/ask-a-clean-person-fragrant-shoe-season-arrives-early-f6cbecdddf70
[]
2016-06-01 22:01:03.883000+00:00
['Advice', 'Cleaning']
He didn’t count on the institutional interests of the Supreme Court, and he probably has no…
He didn't count on the institutional interests of the Supreme Court, and he has probably never dealt with such an organization in his entire life. He has probably felt that most organizations are entirely susceptible to bending to his will. SCOTUS has no interest in destroying its brand, and with it its power, even if three of its members were appointed by Donald Trump. A lot of people say that Trump has made us weaker. I'm not sure that's the case. It's like a network that's just been attacked by a hacker. (Even more appropriate in that Donald Trump is the first Internet presidency.) The attack exposes the gaping security holes in the network, and the network administrators quickly go to work closing them. I believe that Trump has identified some security problems in our democratic system. There is a significant interest in repairing those, and in further incorporating the elephant in the room, the Internet, into our politics.
https://medium.com/@norm.cloudbaseflyer/he-didnt-count-on-the-institutional-interests-of-the-supreme-court-and-he-probably-has-no-79e0da155b8a
['Norm Young']
2020-12-12 17:37:50.729000+00:00
['Supreme Court', 'Donald Trump', 'Internet']
What makes one algorithm better than another?
https://towardsdatascience.com/what-makes-one-algorithm-better-than-another-378ff600acdb
['Tds Editors']
2020-09-01 12:27:57.400000+00:00
['The Daily Pick']