Rocks in Space
Space Science in a Nutshell Rocks in Space Our Solar System has planets, asteroids and other bodies. But what are the differences? Large Meteor (Fireball) over the Atacama Desert in Chile. The radio telescope dishes are part of ESO’s (European Southern Observatory) ALMA telescope (Atacama Large Millimeter/submillimeter Array). Credit: ESO/C. Malin; License: Creative Commons Attribution 4.0 International License “Last night I saw a meteorite in the sky!” “I thought it’s called an asteroid?” Hmmm … Astronomy and astrophysics are great research fields that inspire us with fascinating images of exoplanets, galaxies and black holes, or try to answer fundamental questions about the universe and our existence. But our own cosmic neighbourhood, the Solar System, has one advantage: we can reach and explore these foreign worlds with spacecraft missions. We analyse and catalogue miscellaneous bodies and can see different objects (and appearances) in the night sky. You have probably heard of asteroids, comets, meteors and many more. But what are the differences between these objects? And why do we need to differentiate between them? Let’s take a look: Planet In 2006, the International Astronomical Union set new definitions for planets and other minor bodies (IAU News: page 16 and following). Three requirements need to be fulfilled to classify an object as a planet: The object revolves around a star (without itself being a star, since there are double-star systems and more). The object has to have a nearly perfectly spherical shape. The IAU states that the “shape of objects with mass above 10²⁰ kg and diameter greater than 800 km would normally be determined by self-gravity, but all borderline cases would have to be established by observation”. The orbit of the object needs to be clear of debris and other objects (here, too, observations need to determine the cleanness of an orbit: our planet moves through different dust streams, and minor bodies cross its orbit as well). The result of this new definition: 8 planets; for all other objects, the general term “Small Solar System Bodies” applies. Dwarf Planet Not a planet since 2006: Pluto. Colour composite taken by NASA’s New Horizons mission in 2015. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Southwest Research Institute Dwarf planets are like planets, but their orbit is not cleared of other, almost equally sized objects. This definition applies to (1) Ceres, which orbits the Sun within the main belt between Mars and Jupiter; Pluto; Sedna; and other too-small-to-be-a-planet objects. You may have heard about Plutons or Plutoids, Kuiper Belt Objects (KBOs) or Trans-Neptunian Objects (TNOs). These classifications are based on the orbital elements of the objects, that is, on the orbit types these objects have. Asteroids When I started working in the area of asteroids, I thought: “Well, I’ll give it a try, but it MUST be boring.” I was wrong. Today, we know hundreds of thousands of these primordial objects, inhabiting different regions of the Solar System like the asteroid belt between Mars and Jupiter. Asteroids are smaller than dwarf planets and mostly do not have a round shape. Larger asteroids (the transition between dwarf planets and asteroids is not 100% clear) may have a dense and differentiated structure (an iron core in the centre, rocky layers and a regolith surface), but for the majority the so-called rubble-pile model applies. Asteroids have a much lower density than dwarf planets. This suggests that asteroids are not monoliths (a single rock), but conglomerates of rocks and boulders that are held together by gravity. The Saturn moons Helene and Pandora (which are captured asteroids) have such a low density that they would float in water. (21) Lutetia in the main belt between Mars and Jupiter. The asteroid was visited during a brief flyby in 2010 by ESA’s Rosetta mission.
Credit: ESA 2010 MPS for OSIRIS Team MPS/UPD/LAM/IAA/RSSD/INTA/UPM/DASP/IDA Michel, P., et al., editors. Asteroids IV. University of Arizona Press, 2015. JSTOR, www.jstor.org/stable/j.ctt18gzdvc. Accessed 13 May 2020. Comets Comets are, like asteroids, remnants of the formation of our Solar System. These objects are mostly referred to as dirty snowballs and contain water ice, dry ice, minerals and also complex organic compounds like amino acids. These building blocks of life are used by living organisms to create proteins. Comets consist of different components, explained in the following list: Core: The core is the solid part of the comet; the dirty ice ball with its minerals, ice, and so on. Although comets may appear quite bright in the night sky (see image below), the core has a reflectivity of only a few percent. The bright appearance results from other effects that are explained in a moment. Comet cores that approach the Sun closely (within a few AU) start to evaporate. Gas and dust leave the comet, creating a … Coma: This part is a spherically shaped region around the core that consists of dust and gas. The density of this cloud decreases with larger distance from the core, until the gas and dust are carried away and appear as … Tails: Take a look at the comet below. You see a blueish tail and a white/grey tail.
The blue one is the ion tail and is affected by the solar wind: a stream of charged particles escaping radially (with respect to the Sun). These particles interact with the charged particles of the comet, the ions, and carry them away. The grey one consists mostly of dust particles and spreads along the path of the comet. Gas and dust are responsible for the great and stunning appearance of a comet. Michel C. Festou, H. Uwe Keller, Harold A. Weaver Jr., editors. (2004). Comets II. University of Arizona Press. 780 pages. ISBN-10: 0816524505. ISBN-13: 978-0816524501 Active Asteroids It’s a comet! It’s an asteroid! It’s … both? Well, yes, in a way. There are objects in our Solar System that have the orbital properties of asteroids but show cometary activity, and vice versa. Further, asteroidal objects (rubble piles) can eject particles and cause faint tails of dust, e.g. due to: Collisions with other objects Fast body rotation that leads to centrifugal forces exceeding the gravitational pull on the surface Close encounters with the Sun that lead to thermal stress and cracking One well-known active asteroid is (3200) Phaethon, which ejects dust particles that intersect with our home planet, causing falling stars in December: the Geminids. Images of the active asteroid P/2013 P5, taken by the Hubble Space Telescope at different dates. Credit: NASA, ESA, D. Jewitt (UCLA) Meteoroids The International Astronomical Union (IAU) did not only introduce definitions to distinguish between planets and dwarf planets. It also introduced the definition of a meteoroid. A meteoroid is like a small asteroid: a boulder or rock with a diameter between 30 µm and 1 m. That’s basically it. Dust Particles with a size smaller than 30 µm are defined as (cosmic) dust. By the way: cosmic dust was the main topic of my doctorate studies, and did you know that the guitarist of Queen, Dr. Brian May, also received a doctorate degree in this particular field?
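The size boundaries just described (dust below 30 µm, meteoroids from 30 µm up to about 1 m, asteroids above that) can be condensed into a toy classifier. This is only an illustrative sketch: the function name and return labels are mine, while the thresholds are the IAU values quoted in the text.

```python
def classify_by_diameter(diameter_m: float) -> str:
    """Toy classification of a Solar System rock by diameter (in metres),
    using the IAU size boundaries quoted in the text."""
    if diameter_m < 30e-6:   # smaller than 30 micrometres
        return "cosmic dust"
    if diameter_m <= 1.0:    # from 30 micrometres up to about 1 metre
        return "meteoroid"
    return "asteroid"        # larger bodies, up to the dwarf-planet regime

print(classify_by_diameter(1e-6))    # a micrometre-sized grain -> cosmic dust
print(classify_by_diameter(0.2))     # a 20 cm boulder -> meteoroid
print(classify_by_diameter(100.0))   # a 100 m body -> asteroid
```

Keep in mind that the real transitions (especially asteroid versus dwarf planet) are not sharp, as the article notes.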
Due to the light pollution of our modern world, there are only a few places left on Earth where one can see the night sky without any artificial light sources. Dark skies have become rare, and people are missing a phenomenon Dr. Brian May wrote his thesis about: the zodiacal light. The ecliptic plane (the imaginary plane in which all planets revolve around the Sun) is populated with dust particles of only a few micrometres in diameter. These particles scatter the light of the Sun, causing this light phenomenon, as shown in the picture below. Dust sources in our Solar System are: Comets (Active) asteroids Collisions of asteroids, meteoroids, etc. Volcanically active moons like Io Meteor Dust particles that are ejected from comets disperse along the orbit of their parent body. They form a so-called stream. These streams eventually cross our home planet’s path. High-velocity dust particles evaporate in the atmosphere, and due to the friction with the air they light up. The result: a falling star, also called a meteor. Some meteor showers, like the Perseids in August or the Geminids in December, can be linked to their parent bodies. Other meteors appear randomly in the sky. Their orbits have been altered over thousands of years by radiation pressure and gravitational perturbations. Consequently, a parent body cannot be identified. These are called sporadic meteors. Peter Jenniskens. (2009). Meteor Showers and their Parent Comets. Cambridge University Press. 804 pages. ISBN-13: 978-0521076357 Meteorite If a meteoroid or dust particle enters our Earth’s atmosphere, a meteor can be seen. If the brightness exceeds a certain level, it is called a bolide, or fireball.
In such cases, depending on the velocity, mass, density and composition of the falling object, a remnant may hit the ground. If you find such an object, it is called a meteorite. Meteorites are classified into several sub-categories, depending on their origin and chemical composition. However, generally two meteorite types are quite common:
https://medium.com/space-science-in-a-nutshell/rocks-in-space-78e1ad132731
['Thomas Albin']
2020-07-25 12:05:53.444000+00:00
['Astronomy', 'Space', 'Education', 'Science', 'Physics']
The Shape of Character Design
Shape language is much more than a few geometric shapes, as shapes represent meaning and personality. It’s often categorized into 3 types of shapes: Natural/organic shapes are classified as shapes that appear irregular or asymmetric and seem to have a curvy flow to them. By composition, almost all the shapes found in nature are organic. Examples are trees, leaves, flowers, etc. Geometric shapes are more precise; they’re the basic shapes such as squares, rectangles, circles, triangles, crosses, and so on. They look symmetrical and have sharp edges and angles in their structure. Abstract shapes have a form that is recognizable, but not real. These are variations of organic shapes, stylized or simplified; the abstract shape of a person would be a stick figure. An abstract shape is often a mix of geometric and organic shapes. Additionally, it doesn’t end there, as specific shapes represent different meanings that help within the character designing process. Squares represent stability; the square is a dependable, familiar shape symbolizing honesty and solidity. Squares are not flashy or attention seekers as far as shapes go. Some would say they are dull, but creative designers transform them to add depth to a character. Triangles are associated with energy and power and indicate direction. Triangles may give a sense of action, tension, or even aggression. They can symbolize strength on the one hand and conflict on the other. Circles have a sense of freedom in terms of movement. They may also represent power and energy in their movement. Ovals and circles are graceful and complete because of their curved lines; they give a sense of perfection and integrity. Circles are great fits for archetypes like The Innocent, The Caregiver and The Everyman. The shape language technique is still widely used today, and it can be recognized especially in major motion pictures such as Walt Disney movies.
A notable case study is Big Hero 6, in which each character has a different shape that represents their personality. Starting from the left, Wasabi is a clear square shape, and his character is dependable and honest; however, he’s also neurotic and compulsive. The last two qualities go against what a square should be, but that’s where a character designer chooses to make a character more interesting. The next character, GoGo Tomago, is more circular in shape. Her character is fast, tough, athletic and an adrenaline junkie, which fits the circle description. The next four characters are built from circles, rounded rectangles and ovals. The most prominent of them is the last character, Baymax. He has a very large circular shape, which connects to his character as a Caregiver: his roundness allows him to be more approachable, along with being a calm character.
https://medium.com/media-reflections-past-present-future/the-shape-of-character-design-78c66eb97518
['Hana Al-Ali']
2020-05-14 16:53:15.650000+00:00
['Illustration', 'Character Design', 'Design', 'Character', 'Shape']
Design a Temperature Monitor in 20 Minutes Using Flutter Radial Gauge
Design a Temperature Monitor in 20 Minutes Using Flutter Radial Gauge In this tech-driven world, physical instruments are rarely used anymore to visualize units of temperature, speed, and other data. Every bit of detail is shown accurately using mobile phones and other electronic devices. Designing a temperature-monitoring application becomes an effortless task with the Syncfusion Flutter Radial Gauge widget. In this blog, you are going to learn how to create a temperature monitor application using various features of the Flutter Radial Gauge widget. We’ll use the following features of the Radial Gauge widget: Axes: To display the temperature range from its minimum to maximum values, with specific intervals in the radial scale along with the labels and ticks. Pointer: To point to the current value in the temperature scale. Annotation: To represent the unit of temperature provided in the corresponding radial scale. Configuring the Radial Gauge widget Follow the instructions provided in the Getting Started documentation to create a basic project in Flutter. Add the Radial Gauge dependency Include the Syncfusion Flutter Radial Gauge package dependency in the pubspec.yaml file of your project:

syncfusion_flutter_gauges: ^18.1.45

Get the package To get the referenced package, run the following command in the terminal window of your project:

$ flutter pub get

Import the package Import the Radial Gauge package into your main.dart file as shown in the following code example:

import 'package:syncfusion_flutter_gauges/gauges.dart';

Add the Radial Gauge widget Once the Radial Gauge package is imported in the sample, initialize the Radial Gauge widget and add it to the widget tree:

@override
Widget build(BuildContext context) {
  return Scaffold(
    body: Center(
      child: SfRadialGauge(),
    ),
  );
}

Design the temperature scale In the process of designing the temperature monitor, the very first step is to design a temperature scale to display the temperature range.
To achieve this, initialize a radial axis and add it to the axes collection of the widget. The radial axis allows you to display scale values that represent temperature in the axis labels of the circular scale, from the desired minimum to maximum values with a predefined interval. To customize the start and end positions of the temperature scale, use the startAngle and endAngle properties of the radial axis:

axes: <RadialAxis>[
  RadialAxis(
    minimum: -60,
    maximum: 120,
    interval: 20,
    startAngle: 115,
    endAngle: 65,
  ),
]

The major elements present in the temperature scale are labels, an axis line, and major and minor ticks. In this demo, the following properties of a radial axis are used to design this highly customizable temperature scale: axisLineStyle: To customize the appearance of the temperature scale line. axisLabelStyle: To customize the appearance of the temperature labels. minorTicksPerInterval: To increase the number of minor ticks per interval. majorTickStyle: To customize the appearance of the ticks corresponding to the temperature scale labels. minorTickStyle: To customize the appearance of the minor ticks that occur between the axis labels. ticksPosition: To place the major and minor ticks either inside or outside the axis line. labelsPosition: To place the axis labels either inside or outside the axis line. radiusFactor: To customize the size of the radial axis.
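As a quick sanity check on those angles: assuming the axis sweeps clockwise from startAngle to endAngle (so 115° to 65° wraps past the top of the gauge), the scale covers 310° and leaves a 50° gap at the bottom. A small sketch of that arithmetic (plain Python just for the calculation, not part of the Flutter app):

```python
def sweep_degrees(start_angle: float, end_angle: float) -> float:
    """Arc covered by a gauge that runs clockwise from start_angle
    to end_angle, wrapping around at 360 degrees."""
    return (end_angle - start_angle) % 360

print(sweep_degrees(115, 65))        # degrees of scale
print(360 - sweep_degrees(115, 65))  # degrees left as a gap
```

This is why the -60 label sits at the lower left and the 120 label at the lower right of the dial.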
axes: <RadialAxis>[
  RadialAxis(
    ticksPosition: ElementsPosition.outside,
    labelsPosition: ElementsPosition.outside,
    minorTicksPerInterval: 5,
    axisLineStyle: AxisLineStyle(
      thicknessUnit: GaugeSizeUnit.factor,
      thickness: 0.1,
    ),
    axisLabelStyle: GaugeTextStyle(fontWeight: FontWeight.bold, fontSize: 16),
    radiusFactor: 0.97,
    majorTickStyle: MajorTickStyle(
        length: 0.1, thickness: 2, lengthUnit: GaugeSizeUnit.factor),
    minorTickStyle: MinorTickStyle(
        length: 0.05, thickness: 1.5, lengthUnit: GaugeSizeUnit.factor),
    minimum: -60,
    maximum: 120,
    interval: 20,
    startAngle: 115,
    endAngle: 65,
  ),
]

Add a temperature range indicator Once the temperature scale design is completed, add a temperature range indicator to indicate the severity of the temperature. To achieve this, add a single range with gradient colors, or multiple ranges, using the ranges collection property of the radial axis. In this demo, we have added a single range with gradient colors to indicate the severity of the temperature: The green color indicates the low temperature range. The yellow color indicates the moderate temperature range. The red color indicates the high temperature range.

ranges: <GaugeRange>[
  GaugeRange(
      startValue: -60,
      endValue: 120,
      startWidth: 0.1,
      sizeUnit: GaugeSizeUnit.factor,
      endWidth: 0.1,
      gradient: SweepGradient(
          stops: <double>[0.2, 0.5, 0.75],
          colors: <Color>[Colors.green, Colors.yellow, Colors.red]))
]

Design a needle pointer to indicate the current temperature value Add a needle pointer to the pointers collection of the radial axis to indicate the current temperature value in the axis. Customize the appearance of the needle pointer with its various properties: needleColor: To customize the needle color. tailStyle: To customize the tail appearance of the needle. knobStyle: To customize the knob appearance. needleLength: To alter the needle length. needleStartWidth: To customize the start width of the needle. needleEndWidth: To customize the end width of the needle.
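Before moving on to the pointer code, it helps to see which temperatures the gradient stops configured above correspond to. Assuming the gradient is spread linearly over the whole -60…120 range (which is how the single full-range GaugeRange is set up), a stop fraction f maps to -60 + f × 180. A quick sketch of that mapping:

```python
AXIS_MIN, AXIS_MAX = -60, 120  # axis range from the RadialAxis snippet

def stop_to_temperature(stop: float) -> float:
    """Map a SweepGradient stop fraction (0..1) to a temperature value,
    assuming the gradient spans the full axis range linearly."""
    return AXIS_MIN + stop * (AXIS_MAX - AXIS_MIN)

for stop in (0.2, 0.5, 0.75):
    print(f"stop {stop} -> {stop_to_temperature(stop)} °F")
```

So the blend is pure green around -24 °F, yellow around 30 °F, and fully red from roughly 75 °F upward.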
pointers: <GaugePointer>[
  NeedlePointer(
      value: 60,
      needleColor: Colors.black,
      tailStyle: TailStyle(
          length: 0.18,
          width: 8,
          color: Colors.black,
          lengthUnit: GaugeSizeUnit.factor),
      needleLength: 0.68,
      needleStartWidth: 1,
      needleEndWidth: 8,
      knobStyle: KnobStyle(
          knobRadius: 0.07,
          color: Colors.white,
          borderWidth: 0.05,
          borderColor: Colors.black),
      lengthUnit: GaugeSizeUnit.factor)
],

Annotate the temperature unit In this example, we have used a gauge annotation to indicate whether the temperature unit type is °F or °C:

annotations: <GaugeAnnotation>[
  GaugeAnnotation(
      widget: Text(
        '°F',
        style: TextStyle(fontSize: 20, fontWeight: FontWeight.w600),
      ),
      positionFactor: 0.8,
      angle: 90)
],

Add a Celsius scale In the above section, we designed the scale to display the temperature value in Fahrenheit. Now you are going to add a temperature scale in Celsius (°C). To create a scale in °C, initialize a radial axis and insert it at the zeroth index of the axes collection, so that the needle pointer of the previous axis is rendered on top of this axis. For the Celsius scale, we are going to use the backgroundImage property of the radial axis. In this demo, an AssetImage is provided as the background image of the radial axis. Refer to this KB to learn how to set an AssetImage as a background image for a radial axis.

axes: <RadialAxis>[
  RadialAxis(
      backgroundImage: const AssetImage('images/light_frame.png')),
]

Customize the appearance of the temperature scale (°C) by using the various properties of the radial axis:

RadialAxis(
  backgroundImage: const AssetImage('images/light_frame.png'),
  minimum: -50,
  maximum: 50,
  interval: 10,
  radiusFactor: 0.5,
  showAxisLine: false,
  labelOffset: 5,
  useRangeColorForAxis: true,
  axisLabelStyle: GaugeTextStyle(fontWeight: FontWeight.bold),
),

As in the temperature scale in °F, ranges are added in this axis, too, to indicate the severity of the temperature range.
ranges: <GaugeRange>[
  GaugeRange(
      startValue: -50,
      endValue: -20,
      sizeUnit: GaugeSizeUnit.factor,
      color: Colors.green,
      endWidth: 0.03,
      startWidth: 0.03),
  GaugeRange(
      startValue: -20,
      endValue: 20,
      sizeUnit: GaugeSizeUnit.factor,
      color: Colors.yellow,
      endWidth: 0.03,
      startWidth: 0.03),
  GaugeRange(
      startValue: 20,
      endValue: 50,
      sizeUnit: GaugeSizeUnit.factor,
      color: Colors.red,
      endWidth: 0.03,
      startWidth: 0.03),
],

In this axis, the individual range colors are applied to the corresponding ticks and axis labels through the useRangeColorForAxis property of the radial axis. As with the previous axis, an annotation is added to display the temperature unit of the axis:

annotations: <GaugeAnnotation>[
  GaugeAnnotation(
      widget: Text(
        '°C',
        style: TextStyle(fontSize: 20, fontWeight: FontWeight.w600),
      ),
      positionFactor: 0.8,
      angle: 90)
],

You can download the entire sample code of this demo from this GitHub location. Conclusion I hope you now have a clear idea of the various features available in the Syncfusion Flutter Radial Gauge widget and how to create a highly customizable application with them. Browse our documentation to learn more about our Flutter widgets. You can find all our new features and controls in our release notes and on the What’s New page. You can also see Syncfusion’s Flutter app with many examples in this GitHub repo. Don’t miss our demo app in Google Play and the App Store. If you have any questions about this control, please let us know in the comments section. You can also contact us through our support forum, Direct-Trac, or feedback portal. We are happy to assist you!
https://medium.com/syncfusion/design-a-temperature-monitor-in-20-minutes-using-flutter-radial-gauge-10e13e689271
['Suresh Mohan']
2020-05-06 05:37:17.921000+00:00
['Web Development', 'Android', 'Flutter', 'iOS', 'Productivity']
This One Simple Exercise Will Help Counter Negative Thoughts
I’m going to be the first to admit that, at least since my teens (and probably before), I’ve developed some unhealthy thinking patterns born out of the desire to succeed and get ahead in life. Did it make me happy? Of course not. I was an idiot, thinking that being harsh on myself and supposedly disciplined would be helpful. But that’s what most of us do, right? We associate drive, ambition, and ticking off boxes on our goal list with happiness. Yet these mindsets cultivate the very negative thinking patterns we try to undo once we realize what actual happiness is. It’s easy to get carried away with the zeal of ambition and end up taking a downward spiral into negative thoughts and self-talk. Albert Ellis observed that most people habitually think in ways that are self-defeating and irrational (irrational in the sense of going against our basic desire for happiness), and he identified several faulty thinking patterns that we fall into on a daily basis. These insights inspired other researchers, such as Aaron Beck, to identify more examples of faulty thinking. Before being able to change the way we think, we must first identify which types of faulty thinking patterns we’re most prone to.
https://medium.com/curated-careers/this-one-simple-exercise-will-help-counter-negative-cognitions-so-you-can-love-yourself-dbe59a023038
['Michelle Middleton']
2020-12-17 22:02:51.496000+00:00
['Mindset Shift', 'Self-awareness', 'Personal Growth', 'Mindset', 'Self Love']
Huawei Cares or AppleCare?
Almost everyone knows about the AppleCare service, but Huawei Cares is a new attempt, so I want to compare both services. Apple performs excellently in customer service, especially in the after-sales period, and the company maintains a distinctive service quality in every region. The iPhone has a balance between hardware and software. Huawei has been using all the advantages that come with Apple being a big tech company. There’s a sub-brand named Honor, whose sales have almost reached a peak for the company. The Trump administration’s decisions made the situation harder for a short period, but the administration’s latest statements were positive: the statement that “American companies may work with China” was a big success for the patience of China. There won’t be complete peace around the world, though. HUAWEI CARES VS. APPLECARE Huawei was banned by the Trump administration, and most American tech companies supported this decision. Every piece of news made the situation worse than the day before; this period ended with the Trump administration’s official statement at the G20. Huawei Cares can be the answer to the current situation. Let me give more info about the service: - It covers every customer who buys a P30 or P30 Pro model between 10 and 27 July in the Eurozone. - The company researched what customers are afraid of when they purchase the product; physical damage is the main concern. - The product has 1 to 2 years of support that covers a scratched screen and liquid damage. - The technical service process is minimized to one hour in standard situations; in exceptional cases, the period may change. - The service should be the same whether you go to an official repair service or a third-party service. AppleCare has been a good model for after-sales, and the company may add features to counter Huawei. I should give a few details about AppleCare as well: - If you visit an Apple Store for any reason, a Genius starts to talk to you with a kind, sincere attitude. Whatever your situation is, and however bad it is, this may change your mind.
- You can see the device’s current status, like battery performance, with iOS-specific tools. - Since there are third-party technical services, technical service quality may vary; it depends on the service. When there’s a significant recession in the economy, a perfect smartphone alone won’t be enough. Technical service support is what makes one big tech company better than its opponent. Let the consumers decide.
https://medium.com/hackernoon/huawei-cares-or-applecare-c7ea2d40c8f
['Kemal Karataş']
2019-07-01 18:24:53.861000+00:00
['P30pro', 'Applecare', 'P30', 'Apple', 'Huawei']
God is a memetic organism.
What is a memetic organism? Well, first let’s consider its analogue and inspiration, the genetic organism. I am a genetic organism. My body and brain are encoded into little packets of information that “want” to reproduce, but they do so better together as an intelligent multicellular organism. Memetic organisms are very similar, only the packets of information that encode them don’t take the form of DNA. Instead, their code manifests as ideas. Like genes, memes often do better together as complex organisms. For example: Wal-Mart is a pretty ideal memetic organism. It’s lasted a long time and reproduced itself many times. It’s worth noting that memetic organisms often live for tens of thousands of years and command armies. What makes religion such a special memetic organism? At its core, each memetic organism is founded on some measure of philosophy, whether it’s “Buncha stuff cheap” like Wal-Mart or “Good stuff expensive” like Apple. The nature of the metaphysical beast changes when the core of a philosophy isn’t a three-word axiom, but rather an archetype. An archetype-powered memetic organism isn’t just founded on the universal principles of game theory or design: it’s founded on the universal principles governing the function and very nature of the human mind.
https://medium.com/interfaith-now/god-is-a-memetic-organism-c853e283a3de
['Grant Talkington']
2020-06-09 19:04:43.379000+00:00
['Memes', 'Relationships', 'Christianity', 'Science', 'Racism']
31events.com Re-Launched on AWS
Counting RSVPs on Calendar Clients March 2018: We have updated all of our content on 31events.com and calendarsnack.com to reflect the new product (kudos to Arnie and Jonn). Our product is now fully functional on AWS and can be provisioned for OEM use in a few minutes. You can see the latest posts from Arnie and me on how the product works: https://31events.com/rsvp-for-5-billion-calendar-clients/. If you’re an OEM and want to serve up your own RSVP service for Web and Email integration, please reach out to me at [email protected] or via one of the site chatbots. Another example here is one of our customers using the product in a MailChimp campaign to track users. In this use case, the customer used MailChimp to embed an RSVP request from a CalendarSnack. The Yes, No and Maybe counts are the actual interactions with the calendar clients. You can use our product for free here.
https://medium.com/calendar-marketing/31events-com-re-launched-on-aws-5305fecc1ee0
['Greg Hanchin']
2019-03-14 14:30:58.965000+00:00
['Calendar Invites', 'Mailchimp Integration', 'Mailchimp And Rsvp', 'Calendar Integration', 'Marketing']
Unless You Make Some Fundamental Changes, Your Life’s Gonna Stay the Same
“If you want more, you need to become more.” -Jim Rohn Things don’t change — you change. If you want your life to be different, the change needs to come from you, because you’re the person who’s most invested; you’re the one who’s going to reap the rewards (or suffer the consequences). Back in college, I had developed a pretty serious addiction to pornography, and it had started affecting my life pretty badly. I wasn’t sleeping well or eating right. I felt I couldn’t be friends with girls without “lust” entering my mind. I wanted a serious relationship, but I knew I couldn’t have one and keep my habit. I tried everything to stop: I read every book on addiction there was, prayed constantly, had others pray for me, and had “accountability partners” (guys I’d asked to check in on me); I even had my roommate change the password on my damn laptop so I couldn’t get online. I remember calling my mom and asking her to see if the phone company “could remove my internet” from my smartphone (they couldn’t do that). I kept waiting for “things” to change. But things only got worse. I realized that “things” don’t change — you change. So that’s what I did: I went to counseling, therapy, and even a 12-step program for addiction recovery. It was about as big a change as I’d ever made in my whole life. Years later, things are… much better. I broke my addiction and got better at being an adult instead of just escaping mentally from my problems. I learned how to have a great marriage. I learned how to feel discomfort and not numb myself out. Things don’t change — you change. And you might need to change some fundamental parts of your life if you want a better life. Living an Extraordinary Life Means Giving Up a Normal One An extraordinary life costs a “normal” life. You can’t have both. Good is the enemy of great. And that is one of the key reasons why we have so little that becomes great. Few people attain great lives, in large part because it is just so easy to settle for a good life.
-Jim Collins, Good to Great People who prefer to live a “normal” life don’t want to pay most of the costs of an extraordinary life. Everything worthwhile in life has an opportunity cost. If you accept opportunity “A,” that means passing on opportunity “B.” You have to give up something in order to accomplish something else. If you want to live an extraordinary life in the long term, you’ll need to give up some things in the short term. Some of these things may be dear to you, which makes them extremely difficult to let go of. No one said this would be easy. For some, that means giving up pornography entirely so they can start to actually connect with others. It might mean giving up some favorite foods to finally see abs they’ve never seen before. It might mean seeing friends less often in order to do the work necessary to succeed. It might mean declining wedding invitations because the trips are too expensive. Maybe it means giving up sleeping in so you can have more time in your days. Maybe it means saying no to opportunities at work so you can remain a loving, present father to your children. All great opportunities cost “good” ones. An extraordinary life costs a “normal” life. You can’t have both. You will have to sacrifice something that you value less than whatever it is you ultimately want. Make no mistake, this is a high price to pay. In fact, many people simply decline the offer of an extraordinary life after they discover how much it would cost. And that’s OK. An extraordinary life isn’t for everyone. But if you want to live the extraordinary life no one else is living, you’ll have to start living a life no one else does. This means giving up a “normal” life. The 3 Things Everyone Needs to Sacrifice Everyone has different, unique things they’ll need to sacrifice in order to begin living an extraordinary life. But there are 3 things everyone will need to give up. 1.
Security and Certainty One of the cornerstones of an extraordinary life is giving up the safety nets, security, and guarantees of a normal life. Maybe this is a steady paycheck at a job that will never allow you to reach your full potential. Maybe it’s the static 9–5 schedule. Maybe it’s a guaranteed retirement plan. Of course, you don’t have to live in this scenario for the rest of your life. This lifestyle is exhausting at first — you’re always on your toes, never knowing when the next paycheck is coming in, unsure of the future. But the extraordinary life gives you full control over your life and actions, at the cost of the comfort of having others call the shots. This is one of the hardest parts to give up and takes a long time to really sink in for even the most dedicated entrepreneurs, adventurers, and risk-takers. 2. Fear of Judgement “The worst part of success is to try to find someone who is happy for you.” -Bette Midler If you post a status on Facebook that says, “I got the job!” you’re likely to get dozens, even hundreds of likes. But if you post a status that says, “I finally started my own business!” you’re likely to experience little engagement at all. Which brings us to the next requirement of an extraordinary life: letting go of your fear of judgement. Trying to explain your extraordinary life to others will begin to seem like a lost cause. Most people are afraid you’ll achieve the dreams they never did, and so they attempt to protect themselves from that failure by bringing you down. The extraordinary life looks crazy to an outsider. They don’t understand it, and they’re afraid of it. To an individual living a “normal” life, the characteristics of an extraordinary life seem foolish, stupid, and unrealistic. They don’t understand why you go to the gym even when you’re exhausted. They don’t understand why you’d wake up at 6 am on the weekend when you could be sleeping in. 
They don’t understand why you’d prefer a wild, inconsistent, frightening life full of uncertainty when you could choose the comfort and safety of a normal one. So they judge you. They criticize you, condemn you, and ostracize you by singling you out as stupid, naive, and silly. You must ignore this. You will never succeed if you continue to put more stock in what your critics say than in what you believe about yourself. This is another extremely difficult thing to give up. Separating ourselves from the herd is scary, and the criticisms and warnings from others might even sound wise. Let it go. This is your life, not theirs. 3. Other People’s Definition of Success In the words of Srinivas Rao: At some point, I realized that I had to give up other people’s definition of success. This is one of the most difficult things to give up because it is so deeply embedded in our cultural narratives that it becomes the standard by which we measure our lives. Even as entrepreneurs we have collectively agreed that fame and fortune are the markers of success. But, giving up other people’s definition of success is incredibly liberating and ultimately leads to the fullest expression of who you are and what matters to you. It’s not a one-time thing. It’s a daily habit of comparing less and creating more. “Success” doesn’t just mean what the larger mob of society says it means: “lots of money, fame, and fortune.” Many people with fame, fortune, and lots of money have terribly empty, imbalanced lives. Your success isn’t defined by what other people say. No one can define your success but you. If you continue to let others tell you what success is, you’ll never reach it. Even if you did, it wouldn’t be a true success, because it’s not what you really valued. No, living an extraordinary life means defining your own version of what success is. You can begin to spend your time on what really matters to you. Do you really want 100,000 Twitter followers? 
Do you really need to be on the Forbes 30 Under 30 list? Do you really want to be a New York Times Bestselling Author? Or is your version of success narrower, more focused, more specific? If you want to live an extraordinary life, your definition of success must be your own. If we are always chasing what other people tell us to chase, we’ll never experience true success. Let go of other people’s versions of success. Define your own success, and achieve it. Ready To Level-Up? If you want to become extraordinary and become 10x more effective than you were before, check out my checklist. Click here to get the checklist now!
https://medium.com/publishous/unless-you-make-some-fundamental-changes-youre-life-s-gonna-stay-the-same-c0f9bb271407
['Anthony Moore']
2020-05-11 22:51:54.509000+00:00
['Life Lessons', 'Productivity', 'Self', 'Personal Development', 'Anthony Moore']
Code Sponsor Weekly Update
January 29, 2018 When we started Code Sponsor last year, the goal was simple: help sustain open source. This year, with the help of Gitcoin and ConsenSys, we are taking what we learned last year and rebuilding tools around funding open source. OSS FTW In the spirit of openness and eating our own dog food, we decided to build Code Sponsor completely open source. We hope that by doing so, developers can trust and contribute to a platform that will be built to help them sustain their projects of passion. The new platform is written in Python / Django. The source code can be found below. During development, we will be using Gitcoin to help fund contributions. If you are interested in helping and would like to make some money doing so, please visit Gitcoin and learn how it works. Once you are up to speed, you can work on funded issues on the codesponsor project. Q1 Goal The primary goal of this quarter is to ship a fully functional platform that allows developers to receive funding for placing ethical advertisements in their documentation websites. This is already working for websites such as JS Bin, Devhints.io and CodeSandbox.io. It’s our goal to provide this type of scalable funding to as many projects as possible. If you are a company that wishes to advertise to software developers, please reach out to us ([email protected]). If you are an open source maintainer and want to find out if you are eligible to participate in the early release of Code Sponsor, please fill out this form. Finally, if you’d like to participate in the development of Code Sponsor, join our Slack!
https://medium.com/codefund/code-sponsor-weekly-update-ad26ce8a7c73
['Eric Berry']
2018-01-29 16:22:08.119000+00:00
['Open Source', 'Foss', 'Open Source Software', 'Sustainability', 'Sustainable Development']
The Story of how Natural Language Processing is changing Financial Services in 2020
The Story of how Natural Language Processing is changing Financial Services in 2020 NLP Applications in Financial Services Natural language processing is transforming the financial services industry, with banks using NLP to evaluate performance drivers and forecast the market. From market analysis to content reviews and risk management, NLP is accelerating change in the financial industry¹. Traction toward NLP in financial services is increasing, with demand for BERT-based NLP growing among financial institutions. NLP can be used to assess a wide range of speech and text data from different contexts. Additionally, NLP enables banks to automate and optimize tasks such as amassing customer information and searching documents. Credit: Xenon Stack Banks can expect NLP solutions from AI vendors to extract data from both structured and unstructured documents with a reasonable level of accuracy. Accordingly, financial institutions need to be aware that data collected from past transactions and loan documents might not be useful for training #machinelearning models unless it is cleaned. Overview of Natural Language Processing Bank of America is leveraging natural language processing to stay competitive in the market. Other banks, including HSBC, are following suit, using natural language processing to streamline operations and gain market insights. According to Yahoo Finance, the natural language processing market will expand in 2020 at a growth rate of 19%, totaling $14B. The Alchemy Data tool from IBM² is changing the financial services experience by converting large information sets into insights used for decision-making. Companies such as Green Key Technologies¹¹ have developed NLP solutions for the financial industry, with their latest innovation centered on trading desks. Financial institutions use their tools for voice data capture and analysis of trading processes. 
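To make the point about unstructured documents concrete, here is a minimal, hypothetical sketch in plain Python (no NLP library; the function name and regexes are my own, not from any vendor mentioned above) of pulling structured fields out of free-form loan text. A production system would use a trained model, but the goal is the same: turning raw text into fields a bank can query.

```python
import re

def extract_loan_fields(text: str) -> dict:
    """Toy extractor: pull a dollar amount and an interest rate
    out of unstructured loan-document text."""
    amount = re.search(r"\$([\d,]+(?:\.\d{2})?)", text)
    rate = re.search(r"(\d+(?:\.\d+)?)\s*%", text)
    return {
        "amount": float(amount.group(1).replace(",", "")) if amount else None,
        "rate_percent": float(rate.group(1)) if rate else None,
    }

doc = "The borrower agrees to repay $250,000.00 at a fixed rate of 3.75% over 30 years."
print(extract_loan_fields(doc))  # {'amount': 250000.0, 'rate_percent': 3.75}
```

Even this crude version illustrates why cleaning matters: a document that writes the amount as "250k" or omits the percent sign would slip past the patterns, which is exactly the kind of inconsistency model training has to contend with.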
Why #naturallanguageprocessing in the financial services industry? The answer is simple: retrieving information from unstructured resources that financial institutions otherwise have trouble accessing. Banks need accurate information about their operations, and NLP tools are changing the landscape by helping them make decisions based on customer and market trends. 1. Customer Management and Predictions Financial institutions must deliver quality services to their clients, and this means going the extra mile to understand customer information. NLP reviews customer data³, including social interactions and cultural context, which helps institutions customize services. For instance, NLP filters through social media information and detects conversations that may help them offer better services. Credit: Analytics Insight Stripe¹² is using NLP to explore customer information and identify interest areas that influence customers positively. Predicting customer needs is critical in the financial industry, and Stripe is deploying #artificialintelligence and natural language processing to deliver better services. 2. Market Evaluation and Monitoring One challenge facing banks is the lack of tools for reporting market conditions, such as company news posted online or mentioned in business news. NLP is bridging this gap by supporting real-time dissemination of information about their services from customers and business partners. A company with a bad reputation performs poorly in the market, and NLP helps anticipate these problems and address them. The Alchemy Language tool enables financial institutions to track information about their operations in the market and make decisions. Part of IBM's Watson platform, Alchemy Language assists banks in exploring market trends⁴ and interactions around their services, which further supports the management process. Unlike in the past, when banks took a long time to get the whole market view, NLP is streamlining the process through #data extraction tools. 3. 
Compiling Financial Reports The financial services sector generates volumes of information that pose challenges when reviewing transactions. Natural language processing⁵ is making the process easier through information filtering that helps financial analysts access the right information. JP Morgan adopted NLP with much success after the company faced problems identifying key areas of its market operations. Client communication in financial services is critical, and NLP tools offer vital information to banks as they engage with customers. NLP systems predict and identify problem areas facing customers, and this helps banks develop policies around these challenges and serve them. Banks make decisions based on NLP tools, which further accelerates the preparation of financial reports⁶. 4. Automatic Updates on Company Operations Enterprises operating in financial services experience market changes because of new hires and key people exiting the company, and NLP helps manage these events by alerting banks to their market ramifications. The stock market falls or rises depending on company departures, and NLP tools⁷ relay information to management for further action. Banks look at the effects of staff reorganization on their share price and use NLP to facilitate the internal evaluation of operations to align with market expectations. 5. Risk Management The success of companies in the financial industry depends on the risk management procedures they adopt, and NLP supports this area. Fraud management is the first advantage of using NLP in financial services: banks monitor suspicious financial transactions⁸ and develop tools for addressing the problem. NLP systems point to risk areas and support communication across the financial organization about impending risks. This further reduces the chances of incurring losses. 
Chime¹³ is one banking institution that has had success using NLP for fraud detection, applying these tools across all transactions. According to the CEO of Chime, natural language processing is transforming financial services by reducing customer risks and offering value to investors. Cases of fraud in the financial industry rose by 60% in 2019 alone, according to a Pew Research poll, and Chime is taking advantage of NLP tools. Insider trading⁹ in financial services remains a major risk, with banks losing revenues because of financial misconduct. Natural language processing offers an ideal platform for the management of trading activities by relaying updates based on company operations. NLP pinpoints instances of insider trading before losses occur and safeguards the image of the business. 6. Stock Market Forecasting and Management The stock market matters in financial services, and NLP tools offer information about the behavior of stocks. For example, a bank can understand current stock performance, forecast risks, and respond to market forces. Alchemy Data from IBM develops responses that enable banks to determine the performance of their stock. A company needs to figure out ways of improving stock performance, and through NLP¹⁰ this becomes easier because of access to accurate information about the market. Trading in the stock market fluctuates, and responding to the problem requires technology solutions such as natural language processing, which interpret data. Natural language processing automation is helping banks and other financial institutions explore effective ways of managing their stocks, with HSBC implementing NLP across all its operations. By using NLP for market forecasting, HSBC explores stock market performance and offers recommendations based on prevailing market conditions. 7. Sentiment Analysis Banks need information about their operations to remain competitive and reduce losses. 
Natural language processing reviews complex information within financial services and surfaces accurate insights, even from inconsistent data. Unstructured information within a bank poses challenges when it comes to extracting insights, and this is where NLP comes in. Equity performance is one area where banks need attention, and NLP tools provide a clear analysis of operations. The categorization of financial data by NLP is what makes this technology vital for banks in the current digital age. Overall, banks use NLP to measure and understand their operations based on variables such as customer demand and stock market performance. 8. Financial Variable Relationships The #financialsector is adopting natural language processing to determine relationships among variables such as revenues, stock earnings, value, and competition. Graphical representations of these variables become easier with NLP, as banks can monitor them and compare against previous financial performance. Regression analysis on financial data is one area benefiting from NLP, as companies use the technology to determine their success rate in the market and to detect financial misconduct as well. By using NLP, banks establish connections between variables and use them to make strategic decisions. Entity modeling in NLP has made relationships between variables easier to track, as banks can determine the major areas affecting their operations. The Future of Financial Services is Natural Language Processing Advancements in natural language processing, such as voice solutions, are streamlining operations in the financial industry as banks use NLP tools to capture and convert voice and text data. The same applies to the customer service department, where financial institutions rely on NLP to track and understand customer insights. The ability to search through loads of financial information within a short time and with high accuracy makes NLP an important tool for the banking world. 
#Textanalytics and voice recognition solutions powered by NLP have created new opportunities for banks to improve their services and offer value to the market. Before, banks incurred costs for mining data because of the tedious task of searching through large data sets. In this era of COVID-19, financial institutions are using information generated by NLP systems to evaluate the market and estimate risks. Natural language processing systems are helping bank managers measure the implications of the pandemic for their operations and support decision-making. Where humans might misinterpret information, natural language processing improves accuracy by scanning large volumes of data and interpreting them consistently. Unlike humans, NLP technology scans large information sets within a short time and increases efficiency for players in the financial industry. Do you think NLP adoption in financial services is accelerating? Share your comments below to contribute to the discussion on The Story of how Natural Language Processing is changing Financial Services in 2020. Works Cited ¹Financial Industry, ²Alchemy Data Tool from IBM, ³Customer Data, ⁴Market Trends, ⁵Natural Language Processing, ⁶Financial Reports, ⁷NLP Tools, ⁸Financial Transactions, ⁹Insider Trading, ¹⁰News API Companies Cited ¹¹Green Key Technologies, ¹²Stripe, ¹³Chime More from David Yakobovitch: Listen to the HumAIn Podcast | Subscribe to my newsletter Online
https://medium.com/towards-artificial-intelligence/the-story-of-how-natural-language-processing-is-changing-financial-services-in-2020-8709cca3a100
['David Yakobovitch']
2020-12-19 13:04:56.855000+00:00
['Naturallanguageprocessing', 'NLP', 'Finance', 'Future', 'Technology']
Writing Fiction? Easy Ways to Create Talismanic Props That Bring Plot and Character to Life
Writing Fiction? Easy Ways to Create Talismanic Props That Bring Plot and Character to Life Fiction Writers Who Give Their Characters Props as Talismans of Meaning Can Turbocharge Their Plots’ Magic Photo by Daniele Levis Pelusi on Unsplash When you’re writing fiction, does your main character have a prop, some physical object they carry like a talisman and return to in times of stress or happiness? A prop that’s unique to them? Here’s an easy way to empower your characters with props that can turbocharge meaning in your fiction writing. What Is a Fictional Prop? A fictional prop is a physical object a character finds meaningful. A Sherlock Holmes clay pipe. A Jay Gatsby silk shirt. A Winnie the Pooh honey jar. A Tinky Winky red purse. Fictional Prop Example 1 My character Lily Paige, a young woman rising from the ashes of a poor upbringing, sports an old camel coat, a caramel wool trench with deep pockets and a buckled sash. To her mind, it’s sophisticated, a movie star coat. This special coat originally belonged to her mother, who couldn’t afford to get Lily a more expensive gift. So instead she gave Lily what she herself valued most. Lily slings that camel coat jauntily across chair backs. She saunters down city streets in it, hem swinging, collar popped. It isn’t just a coat, it’s a credo. It’s a touchable, practical symbol of what Lily values—style, grace under pressure, optimism, joy. The Camel Coat Becomes Her Saving Grace Lily’s camel coat becomes her saving grace one frantic night when her friend suffers a near-fatal assault. Lily spreads the coat tenderly across the shoulders of her bleeding and broken friend. Lily’s camel coat is her talisman, her magic object. And if I write it right, forever after, when my readers see a camel coat they’ll think of Lily Paige. A fictional prop is that powerful. How Do You Choose a Fictional Prop? At the planning stages, when you develop your characters, assign them props. 
Make the prop as uniquely powerful as a wand handed out at Hogwarts. Does every character need a prop? No, but your main character should have one, for sure. What makes a good prop? The short answer: something that’s common enough to be used frequently, or at least stashed away in a drawer. Something that stands the test of time. How about something with a little history? Think of a prop that speaks to a character’s past, makes readers wonder why a person would need to hang on to an old spelling-bee medal, say, or a tattered yellow tutu. Test Your Prop’s Power with This Simple Formula Try this easy Q and A formula to test your prop’s power. Just use the 5 Ws: Q and A for Creating a Prop Main character’s name: James Ruffin Prop: An old family Bible inscribed with relatives’ names Who is associated with this prop? The main character’s family, killed during the Tulsa Massacre fire bombings. What happened that makes the prop meaningful? His family was killed when the house burned down, but James and his grandmother, returning to the scene, were able to save the family Bible. When does the character use this prop? He opens it when he wants to feel comforted and see the names of his lost loved ones. Where in the story does the prop become important? At a turning point in the story, James shows the family Bible to his girlfriend as a gesture of trust and desire for connection. Why is the prop important? It says, without words, what the character would like to express about the importance of remembering tragic events, keeping the people we love alive in memory, and sharing life experiences in a powerful way. So when you think of characters, think of props for them and what those props signal. Lily’s camel coat and James’s family Bible say a lot about them in microcosm. That concentrated energy makes props powerful tools for character development. Props as Tools for Plotting Props are also stellar tools for plotting. 
Fictional Prop Example 2 Think of a toy music box carried across the ocean by Irina, a ballerina who escaped Russia. Once, long ago, it carried the meaning of innocence and promise. She stashes it away in a closet, loses track of it. One day, Irina takes the music box from the shelf and raises the rusty lid. The beloved object has definitely been through some changes. It’s battered now; its clasp is crooked; its crank is broken. The little music box dancer inside won’t pirouette. The tinkling music won’t play. Still, the aging dancer hangs on to her precious music box, values it more than ever. What does this fictional prop mean to your character? What does it mean to your plot? Ask yourself: Has Irina herself changed, become broken too? Or has she stayed true to her dreams while the prop itself decayed? Are the hope and promise the music box once held for her now lost forever? Or has Irina kept the essence of its meaning, that hope and promise, alive? Contrasting Props with Characters Yields Insights About Story Contrasting a prop with a character as they evolve over time yields insights about story. After all, think about it. What made Lily throw her precious camel coat over her bleeding friend? Wasn’t she afraid of getting it dirty? Something’s changed in Lily that makes her think of that coat, and herself, in a different way. Fictional Prop Example 3 A young man clutches a book of poetry everywhere he wanders—not only because he loves the verse inside but also because he loves the person who gave it to him. Perhaps he loses his love, but later finds her again in spirit, when he takes that poetry book off the shelf again. Inspired, he decides to create a play, a novel, a painting, or a dance based on what her legacy means to him now that she’s gone. The simple prop has become a generator of deeper meaning. (In description, the shining detail has a similar power.) 
Go Deeper Than Movie Motifs Props are commonplace in film—Charles Foster Kane has his childhood sled Rosebud. Marlene Dietrich flaunts her slinky cigarette holder. Jack Sparrow has his pirate hat. In fiction writing, props can go deeper. You can layer an object with meaning that can extend through the life of your story. Even beyond, into the memories of your readers. You can create a symbol that lasts. How to Create Talismans That Work Magic in Your Fiction Assign your characters props. Let them give those objects personal, talismanic meanings. Let the characters and props evolve together. In doing so, you’ll guarantee your own characters will never be cardboard “props” on a stage set. They’ll be flesh-and-blood people who live in the real world and reach out to touch its power and hold it in their hands. Try it!
https://medium.com/an-idea/writing-fiction-use-props-to-bring-plot-and-character-to-life-db6e205bbc30
['Paula Sue Bryant']
2020-10-08 19:27:13.479000+00:00
['Fiction', 'Fiction Writing', 'Creativity', 'Writing Tips', 'Art']
A Diverse Group of Friends, Self-Discovery, and a Queer Hero
Jake and Kenny hike in the dry heat of the desert. In a small cameo, Superman flies by in a soft silhouette above them. It leads to a conversational trope of the genre about being a hero and what it might mean, how cool it would be to have superpowers and help people. There is something endearing in this exchange that feels new. Two young gay men are confiding in one another, Jake coming out for the first time, not wallowing in shame about it but pondering something more. Jake was with Kenny, his people, and it wasn’t about who he was, but what the future might be — and how he might break the news to his best friend Maria, who they both suspect might be in love with him. Alex Sanchez does a great job of not making it too easy for Jake. The tunnel vision he has for his burgeoning feelings for Kenny and the emotional rollercoaster that puts his friendship with Maria on feels real. Just as he gets a sense of how unusual he is outside of his relationships — yes, it has to do with water and, no, Kenny’s swim team captain sensibilities about water safety don’t help.
https://medium.com/interstellar-flight-press/review-of-you-brought-me-the-ocean-by-alex-sanchez-julie-maroh-f2c64ad15f21
['Presley Thomas']
2020-10-02 14:01:09.768000+00:00
['Reading', 'You Brought Me The Ocean', 'Comics', 'Queer', 'Books']
React Firebase Authentication
Creating a React app: make sure you have Node installed by checking its version from the command line; otherwise, install it. Here I am using create-react-app (recommended for beginners) to create my React boilerplate app. Feel free to use Next.js or Gatsby.js. npx create-react-app [your-app-name] Configuring Firebase: now change the directory and install the firebase npm module: cd [your-app-name] npm i firebase react-router-dom Start the development server by running the command npm start. Create a .env file in your project root directory and paste in your Firebase config strings. This saves them to the environment as variables, which can be loaded into any of your project files at any time. The static site generator create-react-app, which we are using, has a unique way of storing custom environment variables: variable names need to start with REACT_APP_, and only then can you load those variables in your app. This is done for security reasons, so that you don’t accidentally overwrite environment variables that already exist. For more details, refer to the Official Documentation. Environment variables are commonly used by developers to hide their API keys or any app secrets when building open source projects, so that the config stays hidden but still usable. 
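As a concrete illustration of the REACT_APP_ convention, here is a small, hypothetical helper in plain JavaScript (the function name and the exact key list are my own, not part of the Firebase or create-react-app APIs) that assembles a Firebase-style config object from such variables and fails loudly if one is missing. In a real create-react-app project you would pass process.env.

```javascript
// Assemble a Firebase-style config from REACT_APP_-prefixed variables.
// Throws if a required variable is missing, so misconfiguration fails fast.
function buildFirebaseConfig(env) {
  const keys = ["API_KEY", "AUTH_DOMAIN", "PROJECT_ID", "APP_ID"];
  const config = {};
  for (const key of keys) {
    const value = env[`REACT_APP_${key}`];
    if (!value) throw new Error(`Missing environment variable REACT_APP_${key}`);
    // Convert to the camelCase names Firebase expects: API_KEY -> apiKey
    const camel = key.toLowerCase().replace(/_([a-z])/g, (_, c) => c.toUpperCase());
    config[camel] = value;
  }
  return config;
}

// In a real app: const config = buildFirebaseConfig(process.env);
const config = buildFirebaseConfig({
  REACT_APP_API_KEY: "demo-key",
  REACT_APP_AUTH_DOMAIN: "demo.firebaseapp.com",
  REACT_APP_PROJECT_ID: "demo",
  REACT_APP_APP_ID: "1:234:web:abc",
});
console.log(config.apiKey); // "demo-key"
```

The fail-fast check is the point of the sketch: a typo in a REACT_APP_ name would otherwise produce an undefined config value and a confusing Firebase error at runtime.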
If you are pushing the project to your repo on GitHub or any other git host, just add that .env file to your .gitignore so it stays out of version control. Create a new file base.js in your src folder and paste in the config object you copied from the Firebase website; the environment variables can be accessed via process.env as mentioned above. Creating the UI: here I have used the simplest way to create the functional components, without any styling, using the useState React hook to store the state. Create a Login component and write the code as below. Create another component, SignUp, and paste the following code. Alternatively, to redirect when there is already a signed-in user, you can make the routes for the Login and SignUp components Private Routes, which you can learn about from this blog post. Make the Dashboard component as per your requirements. You can get the current user details from the auth object exported from the base.js file as auth.currentUser; you can get the user’s name as auth.currentUser.displayName and profile image as auth.currentUser.photoURL. It is recommended to save the user in a context and use the Firebase real-time update on auth state changes as follows. You can also use Social Logins like Google, Facebook, GitHub, Twitter, and many more… you can see all those options in the authentication tab in the Firebase console.
https://18ganapathy04.medium.com/react-firebase-authentication-6098fadae1a5
['Ganapathy P T']
2020-12-07 13:46:02.980000+00:00
['React', 'Authentication', 'Firebase', 'React Hook']
Building a Rotating IP and User-Agent Web Scraping Script in PHP
Rotating the Exit IP To implement the IP rotation, we are going to use a proxy server. “A proxy server is basically another computer which serves as a hub through which internet requests are processed. By connecting through one of these servers, your computer sends your requests to the server which then processes your request and returns what you were wanting. Moreover, in this way it serves as an intermediary between your home machine and the rest of the computers on the internet.” ―What Is My IP? When using a proxy, the website we are making the request to sees the IP address of the proxy server — not ours. This enables us to scrape the target website anonymously without the risk of being banned or blocked. Using a single proxy means that its IP can be banned, interrupting our script. To avoid this, we would need to build a pool of proxies to route our requests through. Instead, we are going to use the Tor proxy. If you are not familiar with Tor, reading the following article is greatly recommended: How Does Tor Really Work? “Tor passes your traffic through at least 3 different servers before sending it on to the destination. Because there’s a separate layer of encryption for each of the three relays, somebody watching your Internet connection can’t modify, or read, what you are sending into the Tor network. Your traffic is encrypted between the Tor client (on your computer) and where it pops out somewhere else in the world.” — Tor’s official documentation First of all, we need to set up the Tor proxy. Following these OS-based guides is highly recommended: Now, we have a Tor service listening for SOCKS4 or SOCKS5 connections on port 9050. This service creates a circuit on start-up, and builds additional circuits whenever Tor decides it needs more.
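A minimal sketch of how the pieces fit together in PHP (the function names and the user-agent list are illustrative, not from the original script): a helper picks a random user-agent per request, and cURL is pointed at the local Tor SOCKS5 listener on port 9050 set up above.

```php
<?php
// Pick a random user-agent string from a pool, so each request
// presents a different browser identity.
function pickUserAgent(array $userAgents): string {
    return $userAgents[array_rand($userAgents)];
}

// Build a cURL handle that routes the request through the local Tor
// SOCKS5 proxy and sends a rotated user-agent header.
function buildTorRequest(string $url, array $userAgents) {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_PROXY          => '127.0.0.1:9050',
        CURLOPT_PROXYTYPE      => CURLPROXY_SOCKS5_HOSTNAME, // resolve DNS via Tor too
        CURLOPT_USERAGENT      => pickUserAgent($userAgents),
        CURLOPT_TIMEOUT        => 30,
    ]);
    return $ch;
}

$userAgents = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
];
// Usage (requires the Tor service to be running):
// $html = curl_exec(buildTorRequest('https://example.com', $userAgents));
```

Using CURLPROXY_SOCKS5_HOSTNAME rather than plain SOCKS5 makes Tor resolve the hostname as well, so DNS lookups don't leak outside the proxy.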
https://medium.com/better-programming/building-a-rotating-ip-and-user-agent-web-scraping-script-in-php-277bde659d20
['Antonello Zanini']
2020-08-24 14:19:08.582000+00:00
['Programming', 'PHP', 'Cybersecurity', 'Web Scraping', 'Startup']
Photo Manipulation with the HTML5 Canvas API
Picozu the HTML5 Image Editor | the HTML5 Image Editor - Sharing Creativity Open a new document and draw something using Picozu's various funky brushes or just upload a previous sketch from your…
https://medium.com/arabamlabs/html5-canvas-api-ile-foto%C4%9Fraf-manip%C3%BClasyonu-1efd632f4eb5
['Emre Yasin Çolakoğlu']
2018-04-17 08:34:45.932000+00:00
['Html5 Canvas', 'Image Processing', 'Html5', 'Frontend']
Microinteractions: small details matter
Microinteractions are the small details that exist inside features. They’re often used to accomplish a single task or to interact with a small portion of data. Think back to the last time you logged into an app for the first time, posted a photo, or liked a status. All of these actions are microinteractions because they have a single use case attached to them. Just because the interactions are “micro” doesn’t mean that less effort should be put into designing these features. With a world so focused on detail, thoughtful microinteractions can make or break a product. The McDonald’s Example It might help to think about microinteractions in terms of the physical world before moving to the digital world. Believe it or not, you can find many great examples of microinteractions at your local McDonald’s. For the following examples, let’s imagine you’re a McDonald’s employee getting ready for the day. You walk into the restaurant and immediately notice that it’s hotter than usual. If you go to turn down the temperature, you’re engaging with a microinteraction. Twisting the knob makes the temperature go up or down, depending on which way you turn it. After you adjust the temperature, you notice that it seems quiet, so you go turn the music up. Pressing the “+” button is a microinteraction, because it produces the result of the volume getting louder. Before your shift starts, you decide to take a restroom break. When you finish up, you wash your hands. McDonald’s, being a modern restaurant, has the type of sink where, if you stick your hand under the faucet, the water comes out without your having to press any buttons or interact with any physical objects. This is a microinteraction because it is designed to focus on a single action: making water come out of the faucet. Digital Microinteractions Now that we’ve covered physical microinteractions, let’s take a look at digital microinteractions. 
Microinteractions are present in nearly every digital product you use, whether you realize it or not. Think back to the last time you reacted to a status on Facebook, liked a post on Instagram, or retweeted someone on Twitter. These are all examples of microinteractions because they focus on one specific task of sharing or reacting to information. Twitter has nailed microinteractions in recent years. Taking a look at Twitter’s interface, three microinteractions stood out to me. First, the chat bubble that pops up when someone else is typing in the messages tab. Second, the interaction of retweeting someone else’s tweet. And third, swiping up to refresh the page. These are just a few examples to take inspiration from when designing microinteractions into your next digital product.

The Four Parts

According to Dan Saffer, author and designer at Twitter, there are four parts to any microinteraction.

Part One: The Trigger

It’s always important to help users as much as possible when designing products. The trigger is a cue that helps the user figure out what action to take next. Going back to Twitter for example, the “see new tweets” button is the trigger that initiates the interaction. The button’s sole purpose is to bring the user back to the top of the feed, where new information is present.

Part Two: Feedback

Providing feedback along the way is a great way of helping users through the process. No one likes to interact with things that don’t provide feedback. Imagine you use your turn signal at a left turn and nothing happens. No sound, no visual indicator, no internal feedback. You’d be stuck thinking something is wrong with your car, even if the feature is working perfectly externally. Bringing the same way of thinking into the digital world, it’s necessary to provide feedback. Airbnb, along with many other apps, handles the sign-in process by placing a checkmark next to the input field when the information is accepted.
This helps show the user that she’s on the right track.

Part Three: Rules

Rules can be daunting, even annoying sometimes. These rules are not put in place to limit the user, but to help her accomplish goals on the app. In a chain of continuous interactions, the rules engage with the trigger mentioned in part one. They help explain what’s going on in the interaction and the boundaries within it. Let’s take a look at Polymail as an example. Last semester I was applying for a scholarship and had to attach my transcript to an email. My email had the word “attached” in it, but I forgot to attach the file. Polymail recognized this and gave me this warning message: “You wrote the word ‘attached’ but there are no files attached to your message. Send anyway?” Polymail did a fantastic job of using rules to help, rather than annoy, a user. Rules help to keep the interaction smooth. For example, a rule is put in place to enforce a light turning on or off when you flick a light switch. If this doesn’t happen, the person is left feeling confused and irritated. Rules enforce feedback.

Part Four: Loops and Modes

How can the microinteraction adapt over time? The first time a user interacts with the microinteraction shouldn’t feel the same as the tenth or the hundredth time. As the product progresses, so should the microinteractions. For example, when updating a design system, the visual aspect of a microinteraction might change but the functional aspect will stay the same.

Conclusion

Microinteractions have the power to completely change a person’s experience and make people feel more comfortable using your product. Small details matter when building a digital product, so it’s important to sweat the details when designing features. The future of social products will be built upon thoughtful microinteractions, so design accordingly.
https://uxdesign.cc/microinteractions-detailed-design-9113c88946d0
['Mariano Avila']
2018-05-02 17:34:54.350000+00:00
['User Experience', 'Design Thinking', 'UX', 'Microinteractions', 'Design']
Don’t Date Guys Who Hate That You’re Awesome
Rewatching The Romantics 5 years later and wow, that guy’s the worst. Maybe you’ve heard of The Romantics. Maybe you’ve watched it on a Friday night with a bottle of wine after falling down an Amazon Prime Video hole. Maybe we’re not so different, you and I. The Romantics is a movie by Galt Niederhoffer that she adapted from her novel of the same name. By most accounts available online, it is not a good movie. It cost $4.5M to make and made less than $125K at the box office. I keep watching it because it’s very pretty visually and also the soundtrack is phenomenal. I’m going to use it here to give you dating advice, if you don’t mind. Since you can’t really ruin a movie that’s just okay, here’s the gist: A group of college friends get together because two of them are marrying each other. Except of course the groom is marrying the wrong girl. He should instead, if you fall for such things, marry her best friend because that’s the one he’s in love with and has been for like, years. There are actually three instances of couple swapping, at least. It’s a…close-knit group, so to speak, though not much background is given. This film relies on your assumptions pretty heavily, just go with it. Every few years I return to this film because I need to watch something that doesn’t require any investment on my part. I’ve typically just sat down on the couch with a bowl of pasta made from chickpeas and I want to watch something I don’t mind all that much before my dish gets cold. This film never tells you enough about any one character for you to develop feelings, and therefore suits my purposes. I also enjoy the idyllic setting of this film although no explanation is ever given as to where they actually are or why there are so many different buildings on this property that they’re allowed to be dysfunctional in. It might be the Hamptons, I’m not sure.
The main focal point of the film, apart from Anna Paquin in very small sundresses, is Katie Holmes and Josh Duhamel’s characters’ tortured lovers routine. Josh Duhamel’s character is set to marry Anna Paquin’s character, but Josh Duhamel’s character is in love with Katie Holmes, who he dated all through college and continued to fuck after graduation, occasionally. The wedding and indeed the relationship itself is depicted as such a shock to the system of Katie Holmes’ character that you’re often left to wonder why she’d put herself through the pain of even showing up. As you’ll learn, she makes self destruction something of a habit. Wait…I don’t get it, why is Josh Duhamel’s character marrying Anna Paquin’s character if he’s in love with the woman played by Katie Holmes? Here’s where the plot thins to crepe batter: We have no fucking idea. There’s a vague suggestion by Malin Åkerman’s character that it’s for money, which for sure could hold water, but these characters are supposedly hovering around 25 years old and it seems way too early in Josh Duhamel’s character’s career for him to decide to just sort of be “kept” for the rest of his life. And especially not while he’s still in love with and kind of still fucking seeing the character played by Katie Holmes. You have to accept a lot of nonsense to make it to this movie’s closing credits. Including why someone would name a child “Minnow.” It makes no sense. He’s in love with her, she’s in love with him, but like not just love. This is consuming, addicted love. They are each other’s “person.” Anyone with eyes and an Amazon subscription can see that. So during the whole movie we’re watching these tortured people — wait, I’m sorry, we mainly just see her torture because it’s way more fun to watch single women long for something than to also show the story from the man’s perspective, right?—dance around the fact that they’re both really upset about this upcoming wedding. 
She at least has the good sense to call out how this whole vaudevillian circus seems like a bad idea, but he doesn’t supply any answers that satiate. For a long time, I didn’t understand it. Why would Josh Duhamel’s character opt for the woman he basically just sort of likes when he could spend his days and nights with someone he loves so much that he has sex with her on the grass outside the night before his wedding to someone else? In previous viewings of this film, I was outraged that he wouldn’t just come to his damned senses and run off with Katie Holmes’ character to be like…happy! That’s what the viewer is supposed to want, right? The two people in love actually getting (back) together? But an older, wiser, more bullshit-resistant version of me sees things much more clearly now. He’s actually a piece of shit. There’s a special breed of trash human that I’ve had trouble spotting in the past: Guys Who Hate That You’re Awesome. They freak out. They run. They’re the ones who can’t handle something good, and you my darlings are what’s good. Know them. Recognize them. Block them on Instagram. Josh Duhamel’s character is clearly in love with Katie Holmes’ character. Like literally cannot stay away from her. I did mention that he has sex with her on the grass outside the night before his wedding to someone else. And yet, he’s about to marry her best friend, right in front of her. Oh yeah, I forgot, Katie Holmes’ character is the maid of honor because we haven’t yet poured enough lemon juice into the open wound that is her heart. The limp reasoning Josh Duhamel’s character gives for being unable to be happy with the woman he actually loves and his willingness to marry the frigid psychopath played by Anna Paquin is that he feels he can’t possibly live up to the amazing times they’ve had together in the past. He’s literally intimidated by his own relationship. 
Which makes no fucking sense at all but it never makes sense when a man really, really likes a woman and therefore cannot be with her. These creatures are real. I had one once. It’s a sobering experience, being made to feel wrong for being wonderful. Katie Holmes’ character is being denied the love of her life because she held a beautiful space for him. She was everything he wanted, and therefore he didn’t want her anymore. Not enough to marry her anyway. He can’t marry someone he actually loves, that would be insane. Katie Holmes’ character isn’t innocent here, by the way. In my opinion she should have never spoken to this man (or her “best friend” who thought it was totally chill to marry this guy without even fucking discussing it with her bestie first) ever again in her life. These people are red flags turning all of your clothes pink in the dryer and rather than running from them, Katie Holmes’ character is instead writing a speech for their wedding. There’s being a good friend, and then there’s protecting yourself from emotional ruin. Her character cannot discern the difference. Let me be clear: You cannot convince these people to stop freaking out. You cannot love them so much and be so “chill” about everything that they magically change their minds and decide to have a healthy relationship with you. Being “chill” about everything all the time for fear of scaring off the feral cat you’re dating is no way to live. If you freak someone out by being awesome, by being yourself, by loving them, run the fuck away. Don’t fall in love with the potential of things working out someday. Know that this kind of behavior is so much less than you deserve. Leave this shit in the wind and know that far better awaits you in the future. But let’s get back to Josh Duhamel’s character because I think he’s an important archetype in the dating space and I want to make sure we put an end to our willingness to tolerate his kind. 
He should be shut down and ignored the very second his nonsense bubbles to the surface. He’s a Guy Who Hates That You’re Awesome and there is no room for him on earth any longer. He is also simply the worst. Supporting arguments: He has sex with someone who is not his fiance on the grass outside the night before his wedding. He initiates this! He doesn’t tell the love of his life that he’s started dating her best friend. She has to learn of this when her best friend calls her to ask her to be her maid of honor. Would you die?! Right before all this ballyhoo started, I mean right before, Katie Holmes and Josh Duhamel’s characters have an amazing night together that ends in sex. She finds out about the engagement like…days later. If you’re keeping track, we know of two times this guy has cheated on the woman he’s proposed marriage to. He visits his bride-to-be as she goes to sleep the night before her wedding and starts to tell her about his doubts and whathaveyou. The night before her wedding. He leaves the woman he slept with the night before his wedding to wake up alone on the presumably wet grass in the morning. So this guy’s done. We can agree to that. Is he tall and handsome and possessing something resembling academic intelligence? Yes. But he is also the fucking worst. Further, no man is that tall or that attractive or that smart that we should ignore the simple truth that it’s weird when someone can’t handle something good. We as the un-freaked out parties to the relationship shouldn’t be responsible for someone else’s emotional immaturity. If you’re interacting with someone romantically or sexually and they literally cannot handle how amazing it is to be around you, run. Fast and far. Run before they do. Because they will. And it will hurt. But first they’re going to keep you around, just a little. Just the amount they can handle. And the longer you let them dip their toes in the water of you, the more it will hurt when it ends for real. So end it now. 
Don’t look back, don’t check in, just block, delete, and move on. The sooner you start, the sooner you won’t feel anything but pity for this person who missed out on the wonderful thing that scared them so much. And hey, if you’re looking for something to do while you pass the time getting over someone who couldn’t handle being happy with you, I’ve got a completely mediocre movie you can watch. Watching it with informed eyes is a very interesting experience, I dare say it makes the film more entertaining. The characters and conversations are weird and there will be lots you won’t understand, but you will know that this isn’t a story of how two people should be together. It’s a story of learning to recognize bullshit, and seeing the benefits in setting it free. The soundtrack is amazing though, really.
https://shanisilver.medium.com/dont-date-guys-who-hate-that-you-re-awesome-4bcb659904d7
['Shani Silver']
2020-05-27 10:24:01.032000+00:00
['Dating', 'Writing', 'Movies', 'Relationships', 'Humor']
Guide to Multimodal Machine Learning
Guide to Multimodal Machine Learning

Analysing Text and Image at the same time!

Meme with the same text but different meaning. Source: Author of this post

Multimodal learning first caught my attention through Facebook’s recent Hateful Memes Challenge 2020 on DrivenData. The challenge is about building an effective tool for detecting hate speech, one that understands content the way people do. It seems like a pretty cool challenge, as it makes use of both text and image for analysing content, which is similar to what humans do. Let’s dive deep into multimodal machine learning to see what it actually is.

Multimodal Learning

By definition, multimodal means communicating through combinations of two or more modes. Modes include written language, spoken language, and patterns of meaning that are visual, audio, gestural, tactile and spatial. In order to create an Artificial Intelligence (even A.G.I. 🤩) that is on par with humans, we need AI to understand, interpret and reason with multimodal messages. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. To understand how to approach this problem, we must first understand the challenges that need to be addressed in multimodal machine learning.

The Challenges of Multimodal AI

Representation: The first and foremost difficulty is how to represent and summarize multiple modalities in a way that exploits their complementary and redundant nature. Usually, all the modes of information we take into account point towards the same thing: the lip movements we see and the sound we hear from a person convey the same message. But using both together gives us a robustness that helps us understand what the other person wants to convey. So the first challenge is how we can combine multimodal data.
e.g. language is often symbolic, while audio and visual modalities are represented as signals. How can we combine them?

Alignment: Second, we need to identify the direct relations between sub-elements from different modalities. Let’s make this easy with a real-life example. We have a video on how to complete a cooking recipe, and we also have a transcript. To make sense of it all, we need to match the steps shown in the video with the transcript. This is known as alignment. How do we align different modalities and deal with possible long-range dependencies and ambiguities?

Translation: The process of changing data from one modality to another, where the translation relationship can often be open-ended or subjective. At some point, we might need to convert one form of information to another. Image captioning is one prime example of this. But there exist a number of correct ways to describe an image, and one perfect translation may not exist. So how do we map data from one modality to another?

Fusion: The fourth challenge is to join information from two or more modalities to perform a prediction. The Facebook AI Hateful Memes challenge discussed above is one example of it. Usually, we divide model-agnostic fusion techniques into two kinds: early fusion and late fusion.

Early Fusion And Late Fusion. Source: Author of this post

Co-Learning: Transferring knowledge between modalities, including their representations and predictive models. This is an interesting one because sometimes we have a unimodal problem, and what we want from other modalities is some extra information at training time so that our system can perform best at testing time.

If multimodal machine learning has got you hooked after reading this, I would suggest going through the CMU Multimodal Machine Learning Course. Link in the reference.

Reference:
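To make the early/late fusion distinction concrete, here is a minimal sketch in pure NumPy. Everything in it is illustrative: the feature dimensions, the random "models" (just linear scorers), and the score-averaging rule are assumptions for demonstration, not the challenge's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-modality features for one sample (hypothetical dimensions).
text_feat = rng.normal(size=128)    # e.g. a sentence embedding
image_feat = rng.normal(size=256)   # e.g. a pooled CNN feature

# --- Early fusion: combine raw features, then feed ONE model the result.
early_input = np.concatenate([text_feat, image_feat])  # shape (384,)

# --- Late fusion: run a separate model per modality, then combine outputs.
# Each "model" here is just a random linear scorer for illustration.
w_text = rng.normal(size=128)
w_image = rng.normal(size=256)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

text_score = sigmoid(text_feat @ w_text)
image_score = sigmoid(image_feat @ w_image)
late_score = (text_score + image_score) / 2  # simple score averaging

print(early_input.shape)  # (384,)
print(float(late_score))  # a probability-like value in (0, 1)
```

In practice the early-fused vector would go into a jointly trained classifier, while late fusion lets each modality keep its own specialized model at the cost of losing cross-modal interactions.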
https://medium.com/datadriveninvestor/guide-to-multimodal-machine-learning-b9b4f8e43cf7
['Parth Chokhra']
2020-11-05 02:57:12.230000+00:00
['Data Science', 'Data', 'Machine Learning', 'AI', 'Deep Learning']
4 Web Design Trends That Could Help Improve Your Site
It doesn’t look like this whole website thing is a fad. In fact, I think it might be here to stay! So with that in mind, I’d like to explore how you can get a little extra value out of your website in 2020. I’m always a little pained to write some sort of “OMG, YOU WON’T BELIEVE WHAT THE WEB TRENDS FOR 2020 ARE!” article, so I thought I’d approach the subject a little differently. I want to present the things you should consider for your website with one eye on the trends and how they could help add value to your brand or product. Let’s delve in, shall we?

The Trend: Dark Mode.

We’ve seen it steadily popping up in apps, UI and now the web for a little while; the infamous “Dark Mode”. The concept is simple: a little toggle or some such button that allows you to switch between your standard site and a “dark” (i.e. black) version of the site. A common response to this is “but why?” Well, there are actually some pretty big benefits. First and foremost, it’s excellent for saving battery life on OLED screens. On an OLED device (which encompasses most recent smartphones), black pixels draw no power, so the potential battery saving from using a dark theme is massive. Alongside this, the darkness of the site will cause the screen to adjust its brightness to the user’s current lighting conditions and really help to reduce eye strain. Offering someone a break from the retina-burning whiteness could be a very welcome sight/site indeed. It also provides improved contrast, which is awesome news for universal accessibility. Any users with impaired vision or colour blindness will immediately receive a huge boost to the usability of your site. This will increase the chances of achieving an AA rating with the Web Content Accessibility Guidelines (WCAG), which is no joke for the public sector and educational services. Outside of that, purely from a branding perspective, this extra contrast can also help by strengthening your brand.
Your accent colours are likely to pop more and potentially create more impact on your page.
https://medium.com/swlh/4-web-design-trends-that-could-help-improve-your-site-ad7af45e9a18
['Tom Alexander']
2020-03-01 06:20:51.301000+00:00
['Website', 'Design Thinking', 'Design', 'Web Design', 'UX']
Binary Size Woes
I wrote this post last week to share privately with some people who asked for it. Coincidentally, McLaren wrote a thread that blew up a few days later. It was surprising to me—I had no idea this was that interesting to people! So, I’m polishing up what I had already written and am sharing it, more or less unchanged, below. Not much of it is news after what was already disclosed on Twitter, but a few details are fleshed out. Maybe it’s interesting to some people.

Background

Uber rewrote its iOS Rider app in Swift in 2016. We were early adopters of Swift, so we encountered a few surprises along the way. About a year after the rewrite was released to users, in March 2017, a couple of engineers (robbert & mclaren) simultaneously & coincidentally realized that our app’s download size was growing much faster than our previous Objective-C app had grown. Robbert did some projections with historical size data he pulled using fastlane and calculated that we would surpass Apple’s then 100 MB over-the-air (OTA) download limit in three months. Crossing this threshold would mean that users would be required to connect to WiFi to download the app. We didn’t know how exactly this would impact business metrics, but we hypothesized that it would depress growth. Internally, this hypothesis was controversial. Colleagues who had worked at other companies with Very Large Apps said that they had seen no business impact from crossing the threshold — results that we were not sure generalized to our app (these companies had a better-known mobile web version of their app than we did). In addition, if we concluded that crossing the threshold had significant business impact, we would have to limit new code additions until we found a more sustainable solution to code growth. Naturally, product teams who depended on adding new functionality to the app to achieve their goals did not like the idea of being limited in what they could add.
In order for this to get the attention it needed, we had to quantify the business impact of crossing the OTA download limit. If it was high, then we would:

build tooling to give insights into binary size (e.g., across releases, commits, etc.);

investigate solutions to reducing the binary size, short-term and long-term; and

recommend a process for managing releases close to or over the OTA limit.

At the time, no team “owned” binary size, so this quantification exercise fell on a couple of us. It wasn’t straightforward. Looking at historical data, when we blew past the iOS 8 OTA limit, we couldn’t definitively say anything about the business impact. And if we intentionally bloated the app for one release, we would not have a real control group. Comparing growth metrics across releases was unreliable, due to seasonality, other experiments affecting the comparison, variable marketing spend, etc. I proposed we take advantage of app thinning. A universal build, containing the executable architectures and resources for all devices, is uploaded to the App Store. Users download thinned variants, which contain a subset of the universal build’s executable architectures and resources — only those that are needed for the target device and operating system.

Simplified look at how app thinning works

Our data scientist designed a switchback experiment, where we alternated bloating the download size of one group of devices (non-plus devices), then the other (plus devices). By manipulating the download size in a controlled way, we were able to observe variation in download size independently of time effects or device-user characteristics. The plus and non-plus devices basically served as control groups for one another. The release schedule looked something like this:

We analyzed the collected data and found an enormous drop in installs, with a disproportionately large effect on first trips (presumably people who download over cellular have higher intent to ride).
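The switchback idea can be sketched with simulated data. Everything here is made up for illustration: the four-release schedule, the install counts, and the 10% penalty applied to the bloated group; the point is only how each device group serves as its own control across releases.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical four-release switchback schedule: each release alternates
# which device group receives the artificially bloated download.
releases = ["r1", "r2", "r3", "r4"]
bloated = {"r1": "plus", "r2": "non-plus", "r3": "plus", "r4": "non-plus"}

# Simulated installs (thousands); the bloated group loses ~10% in this toy.
base = {"plus": 100.0, "non-plus": 300.0}
installs = {
    (r, g): base[g] * (0.9 if bloated[r] == g else 1.0) + rng.normal(0, 1)
    for r in releases
    for g in ("plus", "non-plus")
}

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Within each group, compare installs when bloated vs. when not. Because the
# group is its own control across releases, device-mix effects cancel out.
effects = []
for g in ("plus", "non-plus"):
    treated = mean(installs[(r, g)] for r in releases if bloated[r] == g)
    control = mean(installs[(r, g)] for r in releases if bloated[r] != g)
    effects.append(treated / control - 1.0)

print(round(mean(effects), 3))  # close to -0.10, the simulated penalty
```

A real analysis would also model time effects explicitly; alternating which group is treated in each period is what lets those effects be separated from the size effect.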
Without going into specifics, the numbers were an order of magnitude larger than anybody had expected and triggered the immediate formation of a task force to fix it.

Mapping Out the Solution Space

We found early on in our research that there was no silver-bullet solution to this problem that met all of our constraints, meaning we would have to simultaneously investigate many ways to address it. At a high level, the work was organized into a few tracks:

Talking to Apple

Process changes

Tooling improvements

Build improvements

Platform changes

Cross-cutting changes

Talking to Apple

Given that Apple has control over the Swift compiler, the Xcode toolchain, and the App Store, they were in the best position to work on more permanent remedies. But, because of Apple’s longer release cadence, we knew we could not expect an immediate solution. Even if they acknowledged the problem, we would still have to pursue other solutions in the interim. Our goal was to make Apple aware that Swift executables were much larger than their Objective-C equivalents, in hopes they would prioritize work to improve executable size output. The next major release of Swift (Swift 4) wound up having a few changes that delivered somewhat decent size improvements. And in September 2017, Apple increased the over-the-air limit to 150 MB.

Process Changes

The process changes were actually pretty interesting, but fortunately we never had to rely on them too much. The engineering and product leads in charge of our Rider app and I worked on defining a process to keep our iOS binary size under the OTA limit. We wanted a process that:

established a protocol on how to handle releases that were close to or over the OTA limit;

gave us headroom to add essential fixes/features to a release if needed; and

still allowed product teams to develop and release new features.
So, we began categorizing release candidates as Green, Yellow, or Red, based on their size:

Green: sufficient headroom that we were not worried about the release.

Yellow: small enough to not be over the OTA limit on any device, but the headroom was small. A Yellow build would be released, but subsequent builds would not, until the size was Green again. Critical new features and bug fixes would be cherry-picked onto the release branch.

Red: over the OTA limit for at least one device class; it would not be released.

We never had a Red build, which was the goal. In addition, we introduced per-team size accounting, with the aim of eventually enforcing quotas. Fortunately, this was never needed. We also kicked off an effort to remove stale feature branches from the code, as well as requiring teams to condition features-in-progress out of the binary. We saw a few megabytes of one-time impact from removing stale branches. The Experiments team later added tooling to automatically detect and suggest the removal of stale experiments from the app.

Tooling Improvements

The Amsterdam-based iOS Developer Experience team worked to introduce a number of tooling improvements. At the time we detected the problem, we had no way to accurately measure the binary size before App Store submission. Xcode provides tooling to produce builds for all devices and measure their sizes, but the tools are wildly inaccurate. This is primarily due to the opaque FairPlay encryption process used by Apple. When a user downloads an app from the App Store, the app is signed and encrypted with FairPlay DRM for that specific user.
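The traffic-light categorization is essentially a pair of thresholds. A minimal sketch, with an assumed 5 MB Yellow buffer (the real cutoff was not published in this post) against the then-current 100 MB limit:

```python
# Hypothetical thresholds; Apple's OTA limit at the time was 100 MB, and the
# 5 MB "Yellow" buffer is an assumption for illustration.
OTA_LIMIT_MB = 100.0
YELLOW_HEADROOM_MB = 5.0

def classify_build(largest_variant_mb: float) -> str:
    """Categorize a release candidate by the size of its LARGEST thinned
    variant, since the limit applies per downloaded variant."""
    if largest_variant_mb > OTA_LIMIT_MB:
        return "Red"      # over the limit for at least one device class
    if largest_variant_mb > OTA_LIMIT_MB - YELLOW_HEADROOM_MB:
        return "Yellow"   # releasable, but freeze subsequent builds
    return "Green"        # plenty of headroom

print(classify_build(80.0))   # Green
print(classify_build(97.5))   # Yellow
print(classify_build(101.0))  # Red
```

The useful property of a scheme like this is that a Yellow result triggers process (a feature freeze and cherry-picks) before a Red result ever occurs.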
Not every part of the app is encrypted, only certain parts of the __TEXT segment of the Mach-O executable file. Afterwards, the signed and encrypted IPA is compressed, and then delivered to the user’s device. Note the order of operations here. Encryption is done before compression. Executable files are generally fairly compressible, since they contain repeated strings, function sequences, etc. However, encrypting the executable beforehand makes it virtually incompressible, since encrypted data has maximal entropy. When an app is built and tested locally, FairPlay DRM is not applied, so the app compresses much better. Thus Xcode’s size estimates are far off. To work around this, the iOS DevEx team reverse-engineered which segments were encrypted for App Store distribution and mimicked the encryption with tools locally. We then compressed the app and output size metrics as part of CI builds. We never achieved 100% accuracy, but we were generally within a few hundred kilobytes, which was good enough to have reliable per-commit size metrics. The team then added a number of features on top of this, such as size-increase alerting and per-module size breakdown.

Build Improvements

The Amsterdam-based iOS DevEx team and our Palo Alto-based Programming Languages team also investigated a number of build-level size improvements we could make. We had a former LLVM engineer on the team, who uncovered a number of optimizations for us that reduced the executable size by close to 20%. Some of the changes were just using uncommon compiler/linker flags to tune the build process to be size-optimal, but many were quite brilliant, like using simulated annealing to determine the best compiler optimization pass ordering. These included:
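The "encryption before compression" effect is easy to demonstrate. This sketch uses repetitive bytes as a stand-in for an unencrypted executable and random bytes as a stand-in for FairPlay-encrypted data (good ciphertext is statistically indistinguishable from random noise); the data itself is obviously artificial.

```python
import os
import zlib

# One megabyte of highly repetitive data, standing in for an unencrypted
# executable's repeated strings and instruction sequences.
plain = b"spam and eggs " * (1_000_000 // 14)

# The same amount of random bytes, standing in for encrypted content.
encrypted_like = os.urandom(len(plain))

ratio_plain = len(zlib.compress(plain, 9)) / len(plain)
ratio_enc = len(zlib.compress(encrypted_like, 9)) / len(encrypted_like)

print(f"repetitive data compresses to {ratio_plain:.2%} of its size")
print(f"random data compresses to {ratio_enc:.2%} of its size")
```

The repetitive data shrinks to a tiny fraction of its size, while the random data does not shrink at all (zlib's framing overhead actually makes it slightly larger). This is exactly why a locally built, unencrypted IPA badly underestimates the App Store download size.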
Enabling link-time optimization for Objective-C

Relocating strings to non-encrypted locations of the Mach-O executable

Disabling loop unrolling

Disabling Swift generic specialization

Increasing the function inlining threshold

Disabling Swift Whole-Module Optimization

Running a simulated annealing algorithm to determine the binary-size-optimal order of compiler optimizations, and using this order instead of the standard order

Platform Changes

Here we explored changes to platform code that Uber wrote internally and product developers built on top of. Most had some developer impact — potentially a large one-time impact — but would not have a large ongoing cost. Some of our platform code was rewritten in Objective-C, as was some of the generated networking code. We also changed most of our uses of Swift structs to classes, due to size advantages. A few binary-size-suboptimal code patterns were discouraged by adding lint checks for them. Heavy use of structs was one of them; there were others I don’t remember anymore.

Cross-Cutting Changes

We investigated a number of larger-impact changes, including rewriting significant portions of the app in Objective-C. In our testing, we found Objective-C executables to be at least 50% smaller than similar Swift code at the time. If needed, as an absolute last resort, we believed we could rewrite parts of the app in Objective-C and encourage more future code to be written in Objective-C. This would require rewriting most of the platform libraries in Objective-C and porting features over, which would be an arduous process. Simultaneously, a team was developing a framework for server-driven UI. It was not ready for general use at the time, but we advocated for increasing its funding, since it would provide a scalable way to add more features to the app without increasing the app size significantly. We hoped this was going to be the solution for scalable product growth.
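The simulated-annealing search over pass orderings mentioned under Build Improvements can be sketched roughly like this. The pass names and the cost function are entirely made up (in reality every evaluation meant an actual build-and-measure cycle); only the annealing loop itself is the point.

```python
import math
import random

random.seed(42)

# Hypothetical optimization passes; real pipelines have far more.
passes = ["inline", "dce", "gvn", "licm", "sroa", "simplifycfg"]

def binary_size(order):
    # Toy stand-in for "size produced by this pass order": a deterministic
    # cost that depends on which passes end up adjacent to each other.
    return sum(abs(hash((a, b))) % 97 for a, b in zip(order, order[1:]))

def anneal(order, steps=5000, t0=50.0):
    best = cur = list(order)
    best_cost = cur_cost = binary_size(cur)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9       # linear cooling schedule
        i, j = random.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]      # swap two passes
        cost = binary_size(cand)
        # Always accept improvements; accept regressions with prob e^(-d/t),
        # which lets the search escape local minima early on.
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
    return best, best_cost

best_order, best_cost = anneal(passes)
print(best_cost <= binary_size(passes))  # True: never worse than the default
```

Swapping the toy cost function for a real "build, encrypt-mimic, compress, measure" step turns this into the kind of search described above, though each step then costs a full build.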
Conclusion

Our goal was to stay under the OTA limit without stopping product development for any significant time. We achieved this while also accomplishing other important goals along the way. The effort brought awareness to the problem, and tooling was built, and is still maintained, to track binary size. Our results proved compelling enough that further experiments on binary size were run to measure the impact of incremental megabytes of download size on growth metrics, on both iOS and Android. The Android team found a significant impact for each marginal megabyte of download size, and this helped them advocate for building a light version of the Android Rider app. We also began to measure the impact of other metrics, such as on-disk install size, on user retention.

After the initial scramble to make sense of the problem space, developer productivity was not significantly impacted and our process largely worked. We found sufficient fixes at the build and platform level to bring down the binary size without any significant impact on developer productivity. It was a lot of effort, but we overcame most of the issues we saw with Swift, so we never switched back to Objective-C. There was an unfortunate side effect to the problems we saw with Swift: we became very wary of new language adoption. When Android engineers wanted to adopt Kotlin, there was a lot of resistance to doing so. Members of our iOS DevEx and Programming Languages teams participated more in the Swift mailing lists to encourage or add compiler optimizations that generate smaller executables and, indeed, later versions of Swift have improved in this regard (e.g., #8018, #8909, and -Osize). A few months after we approached Apple, they increased the OTA limit to 150 MB. Some time later, they increased it to 200 MB. As of iOS 13, they removed the limit entirely and simply prompt users to confirm they want to download a large app over cellular data.
https://medium.com/nerd-for-tech/binary-size-woes-acb5d96f058a
['Chris Brauchli']
2020-12-18 02:58:10.197000+00:00
['Apple', 'iOS', 'Uber', 'Binary Size']
The Force Does Not Belong to the Jedi: How to Interpret Star Wars After “The Last Jedi”
Time for the Jedi To End

The stroke which prompts renewed interpretation for me is Luke Skywalker's reflections in "The Last Jedi." The proclamation, "It's time for the Jedi to end," caught my attention in the teaser trailer right away, because it sounded so radical. The Jedi are essential to the whole fictional universe, right?

The film pivots around several encounters between Rey and Luke. After a number of denials, Luke reluctantly agrees to train her, conditionally: "I will teach you the ways of the Jedi… and why they need to end." In the first lesson, Rey is meditating on a ledge, getting in touch with the plants, the bones in the earth, the porgs (so cute!), the crashing waves:

Luke: What do you see?
Rey: The island. Life. Death and decay, that feeds new life. Warmth. Cold. Peace. Violence.
Luke: And between it all?
Rey: Balance. And energy. A force.
Luke: And inside you?
Rey: The same force.

It's a powerful moment, similar to others we've rehearsed before in Jedi training scenes from past films, but here with more philosophical and emotional intensity. We reach out with our feelings along with Rey, and can almost feel that same Force. Luke concludes: "And this is the lesson. That Force does not belong to the Jedi. To say that if the Jedi die, the light dies, is vanity. Can you feel that?"

We don't get Rey's answer directly, only her realization (after being tempted by the island's dark side energy) that Luke, unlike the life and earth she felt before, has closed himself off from the Force: "But I didn't see you. Nothing from you." This becomes important later on.

Luke's argument, that the Force does not "belong" to anyone, least of all the Jedi, is compelling to me, and makes sense to say. We see Yoda's teachings from "Empire Strikes Back" in this realization:

My ally is the Force, and a powerful ally it is. Life creates it, makes it grow. Its energy surrounds us and binds us. Luminous beings are we, not this crude matter.
You must feel the Force around you; here, between you, me, the tree, the rock, everywhere, yes. Even between the land and the ship.

This mystical account of the Force is a far cry from the minimal account offered by Obi-Wan Kenobi in "A New Hope," that the Force is "an energy field." Obi-Wan's account is reminiscent of the scientistic one we get from Qui-Gon Jinn in "The Phantom Menace":

Midi-chlorians are a microscopic life form that reside within all living cells. … Without the midi-chlorians, life could not exist, and we would have no knowledge of the Force. They continually speak to you, telling you the will of the Force.

Qui-Gon's deflationary, science-y account is accepted by his peers on the Jedi council earlier in the film. They show the same incredulous amazement at the report of Anakin's raw power, measured with a blood sample in a handheld device. This kind of quantified, ultra-rational paradigm is precisely what we see Luke break with in "The Last Jedi," and having seen this break, we can recognize the same departure in the exiled hermit Yoda, and to a lesser extent in Obi-Wan in episodes IV through VI.

Which brings us to Luke's second lesson: "Lesson Two. Now that they're extinct, the Jedi are romanticized, deified. But if you strip away the myth and look at their deeds, the legacy of the Jedi is failure. Hypocrisy, hubris. … I failed [with Ben Solo]. Because I was Luke Skywalker. Jedi master. A legend." Rey vehemently denies this interpretation at first, but we, like Luke at this point, are familiar with that failure from having seen the story of Anakin Skywalker and the Jedi Order in the prequels. We have only to take an honest look.

The Jedi Ideology

The domestication, quantification and instrumentalization of the Force typified by Qui-Gon's midi-chlorian story is just what we would expect to see from the Jedi Order of the prequels: a powerful elite institution tied to the galaxy's largest political system, the Galactic Republic.
The order gets drawn into a war secretly planned by the Republic's own leader as part of a vast manipulation to seize power. For some Jedi, notably Yoda, the de facto militarization of the Jedi Order is to be resisted until the last possible moment. But really, the Jedi were already the muscle for the Republic's imperialistic machinations. We don't even need to dig that deep when we look at the prequel stories to see that the Jedi are pretty conceited, elitist, and prone to look down on their semi-colonial subjects as they do the Republic's dirty work.

Credit: Lucasfilm

Qui-Gon and Obi-Wan have no qualms about talking down to and using Jar Jar Binks, a member of Naboo's indigenous community, to get what they want (cf. "The ability to speak does not make you intelligent."). Later, the Jedi treat the Gungan leader, Boss Nass, with similar conceitedness and self-importance, even perhaps using the Force to subtly manipulate him.

Credit: Lucasfilm

(One need simply contrast this encounter with Padme Amidala's encounter with the Boss later in the film to see the stark difference.)

In episode II, Obi-Wan and Anakin continue the trend of conceited self-importance. As the assassin's minions creep toward Padme's sleeping form, we hear Anakin and Obi-Wan nonchalantly arguing about her in the other room:

Obi-Wan: And don't forget, she's a politician, and they're not to be trusted.
Anakin: She's not like the other senators, Master.
Obi-Wan: It is my experience that senators focus only on pleasing those who fund their campaigns. And they're in no means scared of forgetting the niceties of democracy in order to get those funds.
Anakin: Not another lecture… at least not on the economics of politics.

Obi-Wan's tone is abstract, bland, annoyingly condescending (as he is in much of the film) — the tone taken by those who choose to be ignorant of how the status quo benefits them. After all, Obi-Wan is certainly no revolutionary. None of the Jedi are.
They are staunchly conservative on the whole. The Jedi of the prequels comprise a bureaucratic, elitist institution with deadly political power and a grand sense of their own necessity, authority and political wisdom: a misguided sense which Palpatine easily manipulates for his own ends, resulting in the Jedi's destruction.

This myopia surfaces in multiple moments in the prequels, but nowhere so clearly as whenever Yoda or someone else frowns in frustration at a dilemma, bemoaning the fact that "the dark side clouds everything." It's not the dark side of the Force; it's the Jedi's own unwillingness to think differently of themselves, an attitude which extends even to the order's librarians:

Jocasta Nu: I hate to say it, but it looks like the system you're searching for doesn't exist.
Obi-Wan: That's impossible… perhaps the archives are incomplete.
Jocasta Nu: If an item does not appear in our records… it does not exist.

Consider how this ideology plays out:

The Jedi repeatedly use their control of the Force to control the "weak-minded" against their will. (Forcing (literally) a drug dealer to "rethink his life" is pretty demeaning, not to mention violent, taking away someone's free will for your own ends.)

The Jedi work without any external accountability, left to their own decisions about policy, ethics and influence.
The Jedi insist on their own authority, excluding those they perceive to be “outside.” While they seem to use a quantifiable metric for determining Force sensitivity, they have no trouble initially denying the nine-year-old Anakin’s request to train, as Yoda declares, “Our own counsel we will take on who is ready!” while Mace Windu explains, “He is too old.” If a nine-year-old is too old, then we can assume children are taken at very young ages into the Jedi Academy, raising serious ethical questions that the Jedi would no doubt dismiss on its “own counsel.” The Jedi acclimate incredibly quickly to becoming military leaders fighting wars in the back (and front) yards of hundreds of worlds as the Clone Wars commence, probably because they were already using deadly force in the capacity of unregulated warrior-enforcers, as in the opening of “The Phantom Menace.” As Nute Gunray and Daultray Dofine realize that the Jedi have been sent here to “force a settlement” between them and Naboo, which they are blockading, Obi-Wan and Qui-Gon leave us no room to doubt that their intentions are to threaten them on the Republic’s behalf: Obi-Wan: How do you think the trade viceroy will deal with the chancellor’s demands? Qui-Gon: These Federation types are cowards. The negotiations will be short. Cut to: (see: “negotiations with a lightsaber”) Credit: Lucasfilm If you need more evidence that the Jedi are used to behaving violently and with confidence in their superiority to others (particularly non-human others, as it seems to me), just take a look at Chris Weixelman and Oliver Chips’ article on the topic. As “Revenge of the Sith” (the final film of the prequel trilogy) shows, Palpatine’s coup, his seizure of the state through “emergency powers,” is almost a logical extension of what the Republic always was, at least as we came to meet it in “The Phantom Menace”: a galactic empire. 
It’s just that the Jedi have served the Republic only insofar as doing so maintains their own dominance as an institution (as we realize when Anakin tells Windu that Palpatine is the Sith lord; rather than worrying about the threat this poses for the pseudo-democracy of the Republic, Windu says that they must “act quickly if the Jedi Order is to survive”). Once Palpatine has an army of clones programmed for absolute loyalty, the Jedi are no longer politically necessary, besides Palpatine’s other, more evil and less pragmatic concerns for their demise. The Jedi Order, defined by extra-legal colonial thuggery and political arrogance as much as they are by poise and equanimity, present a view of the Force as an instrument, an object of knowledge, rather than infinite relationality itself. This is an ideology of violence and power, the ideology of the colonizer, the gatekeeper, and the despot. Even still, because its basis is the living Force, it is haunted by something radically different. “You Must Unlearn What You Have Learned” So what happens next? Yoda and Obi-Wan, among the last (if not the very last) survivors of the new Emperor’s purge, go into exile. When next we see them, things have changed. Holding out for Luke to mature (and…not…Leia?…I guess?), Obi-Wan begins training the young descendant of Anakin to be a Jedi in the hope that he will one day be able to face Darth Vader and end the Emperor’s rule. But Yoda, already the wisest of the old Jedi Order, has changed by the time we get to “Empire Strikes Back,” and presents a significant challenge. At 900 years old, Yoda undertakes Luke’s training by saying, “You must unlearn what you have learned.” There is a subtext here of a conflict between Obi-Wan and Yoda. Obi-Wan’s vision is still myopic, limited by an old image of what the Jedi are supposed to be: knights, bound to the loose code of something like aloof chivalric nobility. But Yoda’s teachings are in conflict with that image. 
Throughout the film, Yoda inveighs against the recklessness and arrogance that the old Jedi exemplified in their mastery of the Force as an instrument. This extends, obviously, to the use of the Force for making war on others. Whereas the old Jedi Order might have paid lip service to the idea that the Force should only be used for defense, when Yoda says, “A Jedi uses the Force for knowledge and defense, never for attack,” there is an urgency to the statement that recalls the Jedi’s violent history. “So certain are you,” Yoda says to Luke’s despair as his X-wing sinks into the swamp. Here is where we receive Yoda’s mystical account of the living, creative Force: a web of relations between life and death, light and dark, which denies rationalization and ownership. Luke sees only the quantitative difference between rocks and spaceships, but Yoda sees the Force that binds them to himself and to all things. When Yoda lifts the ship out of the water, Luke is amazed: “I can’t… I can’t believe it,” he says, and we can see here the same attitude that defines the old Jedi, one of self-certainty, of Cartesian incredulity. And Yoda responds, speaking to Luke and also, as it were, to the past, his own past: “That is why you fail.” Overall, Luke’s growing power seems to go to his head in similar ways to the ideology of the old Jedi Order. Consider Luke entering Jabba’s palace in “Return of the Jedi,” nonchalantly Force-choking two guards, mind-controlling Bib Fortuna, and then threatening Jabba to his face (“I warn you not to underestimate my powers”). On the skiff above the sarlacc pit, Luke proclaims, in the voice of his institutional predecessors, “This is your last chance, Jabba: Free us, or die.” Jedi business, nothing to see here. 
But throughout the remainder of the film, as Luke and the others fall victim to the Emperor's traps and provocations, Luke begins to realize that he is actually inviting violence upon his friends through his cocksure sense of importance and superiority. In a movement not often discussed, we come to the climax of the story, which involves Luke struggling against the temptation toward violence. He gives himself up willingly in the hopes of convincing Darth Vader, his father, to turn from the dark side of the Force. Later, he gives in to the dark side's violent expression when forced to watch his friends losing the battle against the Death Star in orbit. But Luke's victory comes later — in a moment "The Last Jedi" will reprise to countless fanboys' chagrin — when he tosses aside his lightsaber, declaring victory over jealousy, anger and the recurring cycle of violence that results in expressions of possessiveness: "You've failed, your highness. I am a Jedi, like my father before me."

Luke has no idea yet that this comparison is wholly inaccurate. Anakin Skywalker, as a Jedi per se, never accomplished (nor felt the need to accomplish) the act of defiant nonviolent resistance that Luke achieves in this moment of apotheosis. Even in his redemption, Anakin-Vader ultimately commits what amounts to an act of vengeance on Emperor Palpatine: violence to save someone he values, the same motivation for his original turn to the dark side. Thus, the redemption interpretation, when seen in the light of the whole sweep of the Jedi story, in the light of Luke's retrospection as a hermit, becomes complicated and troubling.

"The Greatest Teacher, Failure Is"

The Luke Skywalker we find in "The Last Jedi" is cynical and bitter, with grief lying underneath it all. He sees himself as an irredeemable failure for Ben Solo's turn to the dark side under his watch.
Luke rightly interprets this failure as coming from a place of arrogance that he sees now has always belonged to the Jedi ideology: “I failed. Because I was Luke Skywalker. Jedi master. A legend.” While Rey seems to disregard Luke’s interpretations as unreasonably down on himself and what she sees as unambiguously good, she is right to criticize Luke for his self-pity and cynicism. The difference between them is that Luke has decided that hope is irresponsible, where Rey is ready to stake her life on the same kind of hope Luke held when he gave himself up to Vader in “Return of the Jedi.” And here we see a difference between the two hermits — Luke and Yoda — which arises explicitly before long. As a hermit, Yoda grows in hope by gradually releasing himself from commitments of control and the need for certainty. In “Empire Strikes Back,” when Luke recklessly leaves in the middle of training to try to help his friends against Yoda’s wishes, Obi-Wan laments, “That boy was our only hope.” Yoda disagrees: “No, there is another.” We can only assume, as “Return of the Jedi” reveals, that this is a reference to Leia, Luke’s sister. It is only Yoda who sees Leia as having the potential to become a new kind of Jedi. In contrast, Luke’s hope dissipates after his self-exile because, in blaming himself for Ben’s fall, he still clings to shreds of the old Jedi ideology of control and possession, despite the wisdom he has gained through interpreting the Jedi’s past and the mystical nature of the Force. This conflict comes to a head in “The Last Jedi” when Rey leaves. In a fit of anger, Luke decides to finally destroy the ancient Jedi texts he has enshrined but, torch in hand, he can’t bring himself to do it. These objects, the last of the Jedi ideological apparatus, have become transcendent objects for Luke, despite his criticisms of the Jedi’s failings: fetish objects to which he ascribes authority and power beyond their materiality. Yoda’s not having it. 
In Luke's indecision, Yoda, now a ghost, calls down lightning on the shrine to finish the job. Ironically, Luke objects in anger to Yoda performing what he just said he was going to do but couldn't. (Such is the power of fetish objects, driving us into hypocrisy and alienation from our desires.) The poignant conversation afterward causes us to look back on the events of the film, of the entire saga, once again:

Credit: Lucasfilm

"Heeded my words not, did you?" Yoda says, concluding: "Pass on what you have learned. Strength, mastery. But weakness, folly, failure, also. Yes, failure most of all. The greatest teacher, failure is."

This last line serves as the theme for the entire film, which essentially follows, from start to finish, a series of near escapes and downright defeats on the part of our many heroes. But the failure is the point. Failure is the juncture at which the characters have the chance to meet the limits of the myths to which they cling, on which they so often depend. Failure is what undermines myth and ideology, both of which keep the characters stuck in the cycle of violence and myopia.

Rey's failure to encounter her parentage in the mirror cave disrupts her insecurities and brings her closer to Kylo Ren, allowing her to act on her hope for his capacity for good: a chance that Luke is unwilling to take. Finn's failure in the elaborate plan to save the Resistance ships on the run from the Fulminatrix (yes, that's the name of the Supreme Leader's dreadnought) leads him to encounter the truth about the galaxy's military industrial complex, and how both the Resistance and the First Order are caught up in the same economic game.
Poe’s failure to think ahead, to put lives above bravado, and to get over his toxic masculinity brings him to a greater awareness of himself and of the value of the lives around him, radically shifting his priorities by the end of the story: from a drive toward death disguised as positivity, to a drive toward life expressed through care. Luke’s failure with Ben Solo brings him into a deeper and more honest engagement with the hubris of the old Jedi ideology; and his failed relationship with Rey, combined with his failure to separate himself from the Jedi fetish objects, awakens him to what he has repressed for so long, leading him to act on behalf of the dwindling Resistance and to exemplify the way of the Force through nonviolent resistance against those who would dominate and disrupt the unity of life and death, light and dark, subject and other. The Force Does Not Belong to the Jedi In the final battle of “The Last Jedi,” a few Resistance fighters rally to the air with battered old ski speeders to try to intercept the First Order’s siege cannon. The battle is another failure, and Poe calls for a retreat. But Finn steels himself in defiance, resolving to ram the laser and sacrifice himself for the others. At the last second, Rose flies her speeder into Finn’s, crashing them both out of the laser’s path. In this moment, another failure for Finn, he comes face to face with another form of toxic masculinity that the women of the film wind up working to expose in their masculine counterparts. Committed to the idea that a man’s life is ultimately expendable (a belief no doubt reinforced by his programming in the First Order), he berates the wounded Rose after they crash: “Why would you do that? Huh? I was almost there. Why would you stop me?” Rose replies, “I saved you, dummy. That’s how we’re gonna win. 
Not fighting what we hate, but saving what we love." Rose's response represents yet another ideological break, a shift in point of view that is necessary for real change, lest the cycle of violence and domination begin again. Rose is central to the film's demonstration that "the Force does not belong to the Jedi." Arguably, the non-Force-users — Rose and Admiral Holdo chief among them — are the most successful of the characters in "The Last Jedi." It is Rose who enlists the child slaves to help her free the fathiers from their stable on Canto Bight and in so doing inspires them with dreams of resistance articulated through tales of the Jedi and, pointedly, through the life of the Force itself within and between them, as the film's ending scene clearly shows:

Credit: Lucasfilm

Every other numbered film in the Star Wars series ends with a focus on one or more of the main characters, often with per se Jedi included. But "The Last Jedi" ends with an unnamed child we may never see again. It ends with the "nobody," now seen as "somebody." Because the Force doesn't belong to anyone, least of all those with the most power. The Force is communal, diffuse, awaiting its mere recognition by the least as well as the greatest, for all are in a sense "one" by virtue of the Force, which symbolizes the power inherent in solidarity, in sheer relationality, in mutual care.

This dynamic is represented in the spinoff films, the "Star Wars Stories": "Rogue One" and "Solo." These films do not involve Jedi per se. They are about those whom the numbered films have taught us to see as "nobodies." "Rogue One" follows the Rebel Alliance's secret agents, drawn together from all walks of life, and their suicide mission to retrieve the Death Star plans for the rebellion. We may not see Jedi, but we are made to interpret the Force at work in these characters nonetheless, in their leaps of faith, their impossible victories, their hope against all odds.
The Force isn't even mentioned in "Solo," where we see the start of Han Solo's career with his friend Chewbacca in a fun sort of midrash. But still, we can interpret the workings of the plot (all of which revolve around the use and abuse of relationships of trust), the games of chance, the defiance of the odds, the impossible escape from the Maw, as enabled by the web of interdependence that is the Force. One of the greatest moments in the film is L3-37's liberation of the droids on Kessel. Droids — who we are used to seeing as lesser, as instruments, even if lovable ones — are connected with the Force, too. These films are about the giving of the Force back to the people. The socialization of the Force, if you will.

Conclusion

This story, this reinterpretation, gives me new reasons to enjoy the Star Wars films in chronological story order. In light of the interpretation of "The Last Jedi," these 10 films are nothing less than the deconstruction of the Jedi ideology, the unfolding of hope in a movement of resistance that ultimately learns it cannot rely on myths and legends alone, but must rediscover the power of solidarity among the oppressed, the embrace of the interdependence of life and death, light and dark.

The symbols in this story can give us something to work with in our own time. We are in desperate need of myth-busting. Ideologies of power, domination and capital growth threaten (as they always have) to rip each of us apart from each other, and then from ourselves. We are in need of resistance to the powers that be and their self-justifying narratives. The Force is just a symbol for what binds each of us together: We all breathe the same breath, we all die into the same dust, and we are all entwined in an empire of lies about what our dreams should be, who belongs and who does not, what we are capable of as a society, and the limits of political possibility. We have nothing to lose but our delusions and everything to gain. May that Force be with us.
https://medium.com/fan-fare/how-to-interpret-star-wars-1eea4b615b27
['Jedd Cole']
2019-07-02 13:10:39.322000+00:00
['Star Wars', 'Film', 'Politics', 'The Last Jedi', 'Analysis']
Essential Steps to Building and Refining an Effective UX Portfolio
To help you take advantage of this time, inspire you, and boost your skills, we talked to some leading designers and portfolio experts. Their insights and practical advice will help you create a strong, memorable portfolio that stands out and puts you on the path to a successful interview.

Know why you're making a portfolio

"The first step is knowing why you're building a portfolio," points out Juhie Tamboli, senior product manager for Adobe Portfolio, currently free on a 60-day trial (which you can activate any time before the end of the year). "Is it to land a freelance gig? Or perhaps you're looking to switch careers? If you answer this well, the rest will come more easily."

Then present your best work with the why in mind, Tamboli recommends. "You don't need to showcase everything in your portfolio, but focus on your best work to help you land the opportunity you have in mind. Amplify what's uniquely you, and share it throughout your portfolio, whether that's in a stellar About Me statement or in the body of your work that you've selected."

Finally, Tamboli advises not to forget to share the process. "A part of what makes your portfolio unique to you is the process behind the work, not just the final piece."

The digital product design portfolio by Tamara Oniani, a recent graduate of the University of Utah's Multidisciplinary Design program.

Start simple and don't try to perfect it

Multidisciplinary designer and creative director Tobias Van Schneider acknowledges that it's very easy to procrastinate on your portfolio — especially when you're feeling the pressure of a job search. Van Schneider says we do so for the same reason we put off anything else: because we're overcomplicating it. "Start simple," he advises. "Choose two projects — yes, only two — that represent the type of work you want to do more of in the future.
Think of those projects in phases — phase 1 being the ideation phase, phase 2 concepting, etc. — and write a few sentences about each phase, accompanied by an image. Done." With your projects out of the way, Van Schneider says you then just need to design your homepage, create an About page, and launch your portfolio. "As you do so, remember your portfolio doesn't need to be your creative masterpiece," he points out. "Just focus on putting the work you've already done in the best light."

Jessica Ivins, a UX designer and faculty member at the UX design school Center Centre, agrees. In her article, How to Get Great Feedback on Your UX Portfolio, she writes, "Even if you're a senior designer, it's tempting to make your portfolio perfect because it's about your work. But flawless designs don't exist in any project. There's a saying when it comes to software: 'Perfection never ships.'"

Tobias Van Schneider says that Mary Catherine Pflug pulls no punches with her portfolio intro. It's simple and straightforward, which is all it needs to be.

Use your research and design skills

Ian Fenn, author of Designing a UX Portfolio, has found that people often feel real terror at the prospect of creating their UX portfolio. A practical solution can help: exploit your research and design skills. "Research the needs of your intended audience," Fenn suggests. "Then design the content that represents you and that will resonate with them. Once you consider your portfolio just another product, the process ought to become much easier."

Ivins favors the same approach and points out that by treating your portfolio like a high-priority design project, you'll give it the care and attention it needs.

Make your portfolio represent your personality

Designer, developer, and artist Lynn Fisher refreshes her portfolio every year (see our interview, her archive, and her case study of the 2019 redesign).
She says your portfolio is one of the few spaces that are completely yours and recommends making it represent the ways you're uniquely you. "If you're a bit weird, make it weird," she encourages. "We apply for pre-defined positions, but each of us will fill those roles differently based on our own experience and perspective. The more your portfolio can convey your distinct strengths, the more memorable it will be."

Fisher also suggests writing about your experience, perspective, and challenges you've had to solve, if you don't have a lot of work to show. "As you rework your portfolio, document your process and decision-making to compile it into a case study. If you have side or just-for-fun projects, write about those, too. The way you talk about your work can often be more compelling than a set of screenshots and gives teams a look into how you might approach projects with them."

Van Schneider also recommends keeping the copy simple and straightforward ("clever usually translates to confusing when it comes to a portfolio"), making your case studies scannable ("nobody's going to read longer than two minutes"), and thinking of each case study as a magazine feature ("you wouldn't design every story in a magazine the same way, you'd customize each to tell that unique story in the best possible way").

Resizing the browser window will cause the illustrations on Lynn Fisher's 2019 portfolio redesign to crack open, revealing more within them.

Find a mentor to give you feedback

If you need feedback on your portfolio, Fenn cautions against simply posting it online and asking for feedback. "You'll get conflicting answers that will only serve to confuse you," he explains. "Instead, find somebody you trust and ask for feedback in the style of a product critique. Explain to this mentor who the portfolio is for and what you were hoping to communicate. Then ask them how you can make it better."

"Seek feedback early," Ivins adds. "Process it, make changes, and get more feedback.
Repeat this iterative process as much as possible throughout your portfolio project.” Iterate and optimize your portfolio All the designers we talked to agreed that iteration is crucial for the success of your portfolio. UX designer Sarah Doody, founder of The UX Portfolio Formula, points out that your UX portfolio is never ‘done’. It’s a work in progress, and just like a product, you keep evaluating and iterating it. “Even if you’re not actively looking for a role,” Doody says. “Being proactive and ensuring your portfolio is up to date will ensure that you don’t rush to finish it if an amazing opportunity came your way.” Doody also warns that rushing to get your UX portfolio ready will increase the chances you make mistakes — such as not considering the three users of your UX portfolio, or failing to write effective case studies that truly convey the process instead of just showing final deliverables. Like Ian, Sarah stresses that when you work on your UX portfolio, you’re also honing your UX skills because you must consider the UX of your UX portfolio. “The beauty of a portfolio is that you can continue iterating and optimizing as you go,” Van Schneider adds. “Once the foundation is there, it becomes easy to make it better and better. And every time you update your website, it’s another opportunity to promote yourself. The first step to a successful portfolio is simply launching it.” Don’t just read articles about portfolios There are a lot of articles (like this one!) crammed with tips on how you should improve your portfolio. However, Fenn warns that some of the advice is heavily biased (not like this one!) and suggests exercising caution. “Be sceptical of much of what you read online about UX portfolios,” he says. “Many articles are solution-heavy, reflecting a single person’s opinion of what they think will work. They can’t tell you what will work for the hiring manager you are trying to attract. 
Even if they are hiring managers themselves, what someone says they need can be very different from what they actually need. Conducting your own research is key.” Take some of the advice you hear with a pinch of salt, and ensure you get the essentials right. Ask yourself why you are creating a portfolio, keep it simple and focus on your best work, make sure what’s uniquely you shines through and document the process behind the work. Make use of your research and design skills and treat your portfolio like a product, find mentors to provide feedback, and then keep evaluating and iterating. Good luck!
https://medium.com/thinking-design/essential-steps-to-building-and-refining-an-effective-ux-portfolio-9e75695421d6
['Oliver Lindberg']
2020-07-29 13:42:26.321000+00:00
['Creative Career', 'UX Design', 'Portfolio', 'UX', 'Design']
How to learn from Apple’s mistakes on website accessibility
Written by Joe Chidzik, Principal Accessibility Consultant at AbilityNet We often post about how good Apple is on accessibility, but a legal complaint against the company by a screenreader user last month has shown that even the biggest tech giants can sometimes fall foul of accessibility regulations and guidelines. Smaller organisations often don't consider web accessibility at all, meaning they could be discriminating against disabled people without realising it. According to the complaint against Apple, the company's website misses out descriptions of some images for screenreader users and presents confusing, unclear links about store locations and hours. These problems can make using a website frustrating and difficult for a blind person. The AbilityNet accessibility team and I looked at the case last month and then ran through the Apple.com website to check out the issues. It revealed some simple, easily fixable problems, which I've described below. We found that although no issues completely prevented reaching the final stage of buying an iPhone, there are a number of challenges with the flow that will cause some users difficulties, such as:
- Dynamic progression through the shopping process
- An unclear method for returning to previous steps
- Insufficiently labelled radio buttons
Check out our detailed analysis for expert insight on how to address accessibility issues. And feel free to use the code and ideas to make changes to your own website and provide a more inclusive digital experience. Investigating Apple website accessibility issues First we looked at difficulties accessing store locations and hours. One of the issues noted in the complaint was that screenreader users had difficulty finding store information, e.g. location and opening times. So we ran through the user journey for finding a store using the JAWS screenreader, as this is one of the more popular screenreaders available and is specifically mentioned in the case.
There were multiple difficulties noted. The key issues were:
- Insufficiently labelled input fields (meaning screenreader users are left unsure of what information is needed in the boxes they're asked to enter information into)
- Links with identical text that lead to different locations. While this is not a failure of WCAG (Web Content Accessibility Guidelines), it is not good practice.
- Dynamic content not made accessible to screenreader users, e.g. auto-suggest search results
Further difficulties noted in the complaint were:
- Being unable to browse and purchase electronics such as the iPhone, iPad, and MacBook Pro laptop
- Inability to make service appointments online
- Trouble finding a store
Finding a store We found that navigating to the 'Find a store' page was relatively simple. From the homepage, using a JAWS screenreader, the user presses H until the 'Apple Store' heading is selected. Then they tab to the 'Find a store' link, which leads to the find a store page. The form on this page is presented as the image below. Sighted users are informed that this input field requires the City and State or Zip code (US). JAWS users will hear the placeholder text (City and State or Zip) announced if they use the JAWS form field list, but JAWS users who tab onto the input field will instead just hear 'Find a store. Edit. Required.' This tells them that:
- The label is 'Find a store'
- It is an edit field, so they need to enter some information
- It is a required field
But it is not clear what information they need to enter. Is it a town? Postcode? County? Whilst users may opt to guess at this point, it is straightforward to ensure that screenreader users get the same information as sighted users. The field is currently marked up with the following code: <input class="global-retail-search block" required="" type="search" placeholder="City and State or Zip" autocorrect="off" results="0" data-autoglobalsearch-module="retail-locator"> This uses the placeholder attribute to label the input field.
However, the placeholder attribute should not be relied on to convey information. This article on the popular Smashing Magazine website explains why not to rely on the placeholder attribute. The issue is easily remedied: simply use the aria-label attribute, as below, to duplicate the placeholder text, so that screenreader users hear the same information sighted users can see. <input class="global-retail-search block" required="" type="search" placeholder="City and State or Zip" aria-label="City and State or Zip" autocorrect="off" results="0" data-autoglobalsearch-module="retail-locator"> Alternatively, the more traditional HTML <label> element could be used to provide a hidden label, announced for screenreader users. Accessing Apple store information online Let's assume the user tries to enter a city name (not an unreasonable assumption) to see what happens. Entering some characters into the search field displays the following auto-suggest results: While these are easily visible below the search input field, screenreader users are not informed of these results. In addition, pressing enter has no effect except to set the focus back to the top of the page. This means that screenreader users will need to manually navigate to the search results in the bottom half of the screen; they are still not made aware of these results, however, so would need to explore manually. Once they reach the results, further challenges are presented. Each of the store location results has an identical 'View store details' link: These are all announced as 'View store details', and there is no easy way for screenreader users to distinguish them. On the homepage, Apple uses hidden text to distinguish otherwise similar links, e.g. 'Find out more' or 'Buy now'. This is not the case here. When reading through the links on the page, a screenreader hears the following: Note the multiple 'View store details' links.
However, each of the stores is prefixed with a descriptive heading, so a user can select a link and then use a shortcut to hear the preceding heading announced. This would tell them that the first 'View store details' link is about 'Apple Upper East Side'. As there is already a technique in place elsewhere on the website for augmenting links with descriptive hidden text, it would not be difficult to replicate it here, such that the links were announced as, for example, 'View store details: Apple Upper East Side'. Viewing store details Selecting a link to view the store details leads to a further page with store information: It was relatively straightforward to read the store information here. The address and store hours were announced as expected by the JAWS screenreader. Browsing and buying products When looking to buy products from the online store, some specific difficulties were encountered. For example, the flow to purchase the iPhone X consists of multiple steps, with a choice to make at each step, such as model, carrier, or finish. These steps are dynamic: as soon as a user chooses the option for step 1, they are taken to step 2 without warning. This is a failure of the WCAG 2.0 success criterion on input (3.2.2 On Input). Users inputting data (e.g. making a selection via a form control) should not experience a change in context, e.g. being taken to a new page or step, without warning. Otherwise, this flow works reasonably well: the user is told when they land on a new step, and can proceed as expected. The dynamic nature may still cause difficulties for some users, however, especially as the method for returning to the previous step is not clearly explained; this matters if a user selects an option by mistake. A preferred solution is to let users make their choice (colour, carrier, capacity, etc.) and then select a button to confirm their choice before they are taken to a new page.
There was one interesting issue related to the final step of selecting the capacity of the iPhone being purchased: These buttons are marked up as radio buttons. However, the labels are not usefully announced. Sighted users can see the superscript 2, but it is not distinguished as such for JAWS users, who hear this announced as “64GB 2$49.91/mo”, and it is not clear that this 2 refers to a footnote giving further information about the available capacity on the selected model. 'Mo' is also not explained adequately; it would be better to use 'month' in full to avoid ambiguity. Summary There were no issues here which completely prevented reaching the final stage of buying an iPhone, but there were a number of challenges with the flow that will cause some users difficulties, such as:
- Dynamic progression through the shopping process
- An unclear method for returning to previous steps
- Insufficiently labelled radio buttons
The fixes described above provide an expert view of how to address these issues and deliver a more inclusive user experience. This article was originally published here.
https://medium.com/digital-leaders-uk/how-to-learn-from-apples-mistakes-on-website-accessibility-f3844f232b13
['Digital Leaders']
2018-09-27 15:39:35.071000+00:00
['Accessibility', 'Digital Leadership', 'Digital Inclusion', 'Design', 'A11y']
The Co-op Close-up: AutoML and Fintech at UMF
SFU’s professional master’s program in computer science trains computational specialists in big data and visual computing. All students complete a paid co-op work placement as part of their degree. In this feature, we examine the co-op experiences of some of our big data students. Btara Truhandarien completed a Bachelor of Computer Science from the University of Waterloo. He worked as a software engineer at Japanese e-commerce giant Rakuten for two years before joining SFU’s professional master’s program in computer science. Can you tell us about UMF? What is it like working there? UMF, short for Union Mobile Financial Technology, was established in China in 2003 and has become a strong player in the Chinese financial market ever since. The company powers much of the Chinese market’s financial transactions for consumers, enterprises of various sizes, and financial institutions by providing fintech and payment solutions. In 2015, UMF started its overseas expansion and now has two locations, one of which is in Vancouver, BC. The office in Vancouver, where I work, is an R&D branch and develops new technologies for the company. Due to the nature of researching and building cutting-edge technology, the branch has a good amount of liberty in its approach and solutions, while still interacting with the main branch in China to stay aligned with the overall mission and vision. The branch work schedule is project-oriented, and, for this year, we are focusing on building an automated machine learning platform (AutoML) for the company. Can you tell us a bit more about machine learning and AutoML? There are multiple steps within the development of a machine learning algorithm, also known as the machine learning life cycle. Briefly, those steps are data gathering, data pre-processing or cleaning, feature engineering, feature selection, model training and tuning, model deployment, and model maintenance and monitoring. 
A complete AutoML system aims to achieve the automation of all of these steps, except data gathering. This enables people of broad skill levels to create powerful and effective machine learning solutions to various problem domains. What are your responsibilities in the project you are working on? The project I have been working on involves building a drag-and-drop machine learning web platform. This platform, similar to Microsoft Azure, allows users to build machine learning models using their own datasets. Most of UMF’s clients are financial institutions and governmental organizations working with financial data. So a simple example of how the platform can be used is to create a model that predicts whether a customer will pay back their loans, based on possibly thousands of features. As the main developer of the translation API layer, I was responsible for handling the data flow between the user-facing data and the data structures required to execute various user commands. It is also this layer’s responsibility to store any required metadata information and decide what kind of data users receive, when they will receive it, and how they will receive it. In conjunction with the other parts of the system, the platform we have built enables users to provide their own datasets to the system, explore the datasets’ statistical information, build experiments and models, and execute the experiments with various parameters. How has SFU’s master’s program prepared you for your co-op work? There is no better experience for learning complex problems than putting your own two hands to the problem directly. I find that the big data program at SFU highly encourages this through the course projects. I am fortunate to have worked on projects that are technically challenging like my capsule networks project or the job advisor model project which covered a variety of techniques and data sources. 
The projects cultivated my research, design, technical, and critical analysis skills - each crucial for building the AutoML platform. Without the hands-on approach of the program, my understanding and skills could not have grown as much as they have during my co-op. Where have you seen your biggest areas of growth during co-op? I feel my largest growth has come from the responsibilities and trust that I have been given. While I do work within a team, and there is somebody who occasionally helps me implement features, for the most part I work on the API layer alone. As the main developer, I have the responsibility of navigating through technical difficulties. I am often challenged with open-ended design decisions, such as structuring the data flow of the system. This drives me to always critically assess the decisions I make and carefully plan for their potential future impact. In a sense, I am not just a developer but also a system designer. This design skill applies beyond the system level, extending to the design of the implementation. For example, when I implement a solution in code, I rigorously check my own work, going through several decision points to examine code testability, maintainability, flexibility, usability, and more. What are your most valuable takeaways from this co-op experience? I feel fortunate to be working on a project as challenging and unique as AutoML, and am grateful for the experience gained from packaging it as a platform for real users. It is both challenging and innovative, and it is something I can proudly talk about when I look for future work opportunities. The second thing I feel most fortunate about is the unique experience of working within a cross-cultural team. Some of my team members are based in UMF's China branch.
This makes it tricky to get ideas and points across, due to the obvious language barrier and time zone difference, but I have managed so far with the help of my team members and also by communicating through technical designs. Overall, my work at UMF has been a pleasant and unique experience. The culture of trust and autonomy is truly beneficial in helping me grow on the technical side and develop my soft skills. I truly believe that the culture of trust within the company has enabled me to make the most of my capabilities, and I look forward to my next work term here at UMF.
https://medium.com/sfu-cspmp/the-co-op-close-up-automl-and-fintech-at-umf-3911e0d5cd9d
['Kathrin Knorr']
2019-11-14 17:58:47.444000+00:00
['Data Science', 'Fintech', 'Machine Learning', 'Co Op', 'Big Data']
Deep learning for recommender systems
In my past article on latent collaborative filtering, we used matrix factorization to recommend products to users. The input for that algorithm was the user-item rating matrix R, which contains all the ratings given by all the users to all the products. This is the same matrix we are going to use to train our neural network. But feeding it to a neural network directly raises a problem: users and items are represented by integer identifiers. For example, say the following users are interacting with the following items:

UserId  MovieId
455     344
345     433
23      425
567     753

If we feed these raw IDs into a neural network, it cannot learn anything meaningful from them. So the idea is to find a good representation for them. The same problem arises in neural nets when we deal with text tokens in NLP, or with categorical variables like tags and categories in other ML models. Let s be a symbol in the vocabulary V. Then a word like 'love' in the dictionary can be represented as: one-hot-encoding('love') = [0, 0, 0, 0, ..., 1, ..., 0] This vector is very sparse and has a huge dimension. Another problem is that all such vectors are equidistant: 'love' and 'hate' are exactly as far apart as 'love' and 'peace'. So this representation does not capture the meaning of the concepts at all. Embedding Instead, we want to encode the symbols as continuous values in a lower-dimensional space. This is known as an embedding. E.g. embedding('love') = [3.23, -4.5, 5.2, ..., 9.3] We can quantify similarity between embeddings using distance metrics like Euclidean distance or cosine similarity. If you want to learn more about distance metrics, here is a good article. Advantages: the representation is continuous and dense, and distances in the embedding space can capture semantic similarity. Another way to see an embedding is as a linear layer of the neural network, typically the input layer, that maps a one-hot representation into the continuous space.
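That last point, that an embedding layer is just a linear layer applied to a one-hot vector, can be verified in a few lines. A toy sketch (the matrix values are made up for illustration):

```javascript
// Toy embedding matrix W: n = 4 symbols in the vocabulary, d = 2 dimensions
const W = [
  [0.1, 0.2],
  [0.3, 0.4],
  [0.5, 0.6],
  [0.7, 0.8],
];

// One-hot vector for symbol index i in a vocabulary of size n
const oneHot = (i, n) => Array.from({ length: n }, (_, j) => (j === i ? 1 : 0));

// embedding(s) = onehot(s) . W, computed as an explicit matrix product
const embedByMatmul = (i) => {
  const v = oneHot(i, W.length);
  return W[0].map((_, col) => W.reduce((sum, row, r) => sum + v[r] * row[col], 0));
};

// ...which collapses to a plain row lookup: no multiplication needed
const embedByLookup = (i) => W[i];

console.log(embedByMatmul(2)); // [0.5, 0.6]
console.log(embedByLookup(2)); // [0.5, 0.6]
```

Real frameworks exploit exactly this equivalence: an embedding layer stores W and performs the row lookup directly instead of multiplying by a one-hot vector.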
We can achieve this by multiplying the one-hot representation with an embedding matrix W ∊ R^(n x d), where n is the vocabulary size and d is the embedding dimension: embedding(s) = onehot(s) · W We initialize W randomly at the start, and the entries of this matrix are tunable. They are also known as the embedding parameters. In Keras, this is what the Embedding layer provides. So now we have the output of the embedding that can be fed into our model, and we define a loss function depending on the target we want to predict. We then have a tunable architecture and use gradient descent to adjust the embedding parameters. Recall that our initial problem was to predict the rating of product j by user i. In my previous post, I talked about the matrix factorization way of doing so. The concept here is similar, but we input the embedding vectors for our users and items, take their dot product, and minimize the loss function. So what is the advantage of doing this the neural network way? First, instead of just using the dot product as the interaction (rating), we can use a multi-layer perceptron with many fully connected layers to calculate the rating: we concatenate the embeddings, feed them into the network, and it outputs the rating, trained with the same loss function. Second, the embedding sizes can differ, which matters when we have far more items than users. We can also concatenate metadata information into the neural network. For instance, if the metadata has categorical variables like the director of a movie, we can define a new embedding for directors, and so on. So we have many embeddings as input rather than a single matrix factorization model. But if we don't have explicit feedback from users, we cannot use a regression loss function as before. We should use another architecture, known as the triplet architecture. In this architecture, we have a user i who has watched movie j.
We then pick another movie k at random from the database; it is very likely that the user has not seen it and will not see it in the future, because most movies are negative (meaning the user is probably not interested in watching them). So we contrast a negative movie with a positive movie for a given user: we compute the two interactions by taking dot products and compute the difference between them, making sure the interaction between the user and the positive movie is larger than the interaction between the user and the random negative movie. We minimize the loss, which maximizes this difference, and in the end we have tuned embeddings that produce good recommendations. The item embedding matrix V is shared between positive and negative items; we just take different rows of the same matrix, so we train with the same model parameters. Networks that share model parameters in this way are known as Siamese networks. YouTube uses the same concept of learning embedding parameters to recommend videos to users. There, the metadata includes geographic information, age, gender, and more. These embeddings are fed into the neural network; at serving time, the top N nearest neighbors are selected and a softmax computes the probability of the user watching each video. The results are then sorted in descending order and presented to the user.
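The margin idea behind this triplet setup can be sketched numerically. The embedding values and margin below are invented for illustration; a real system would learn the embeddings by gradient descent:

```javascript
// Dot product of two vectors: our interaction score
const dot = (a, b) => a.reduce((sum, ai, i) => sum + ai * b[i], 0);

// Hypothetical learned 2-d embeddings for one user and two movies
const user = [0.5, 1.0];
const posMovie = [0.6, 0.9];  // movie the user actually watched
const negMovie = [-0.4, 0.1]; // movie sampled at random

// Margin-based triplet loss: zero once the positive interaction
// exceeds the negative interaction by at least the margin
const margin = 1.0;
const tripletLoss = (u, pos, neg) =>
  Math.max(0, margin - dot(u, pos) + dot(u, neg));

console.log(dot(user, posMovie)); // ~1.2, high score for the watched movie
console.log(dot(user, negMovie)); // ~-0.1, low score for the random movie
console.log(tripletLoss(user, posMovie, negMovie)); // 0, margin satisfied
```

Training adjusts the embeddings until this loss is near zero for most sampled triplets; the sketch only shows what the loss measures, not the optimization itself.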
https://medium.com/mldotcareers/deep-learning-for-recommender-systems-442fb9e46934
['Rabin Poudyal']
2020-09-12 13:16:58.685000+00:00
['Neural Networks', 'Data Science', 'Deep Learning', 'Artificial Intelligence', 'Machine Learning']
8 Ways You Can Integrate Video into Your Daily Social Strategy
At a local event I recently attended, I listened to a keynote speaker give a talk on video and why it needs to be a part of your social strategy. This was not your cut-and-dried talk on why you need video. He didn’t bring up metrics we have all heard, such as how a VP of Facebook recently said, “90% of content by 2019 will be video”. The talk was spot on, and the speaker brought up some things I hadn’t considered before. He posed a few insightful yet powerful questions to the audience to get his point across. The conversation went a little something like this. “How many of you in the audience read posts daily on social media?” Almost everyone in the audience raised their hand. “How many of you post to social media daily?” Almost everyone in the audience raised their hand. “How many of you in the audience watch video on social media daily?” Almost everyone in the audience raised their hand. “How many of you post video to social media daily?” Not one person in the audience raised their hand. The speaker was RendrFX founder Mat Silva. Do you see the disconnect here? Video is being consumed by all, but produced by few. This disconnect is what has made video one of the least saturated mediums. 1) Use Video to Make Announcements You have announcements to make on a regular basis, whether for awards, product enhancements, employee functions, new hires, and more. No matter the topic, there will be plenty of announcements. What better way to show people exactly what you are talking about than with a video? See how GoPro used video to announce their new drone. Here is an example of a company making a local announcement using video! 2) Video Makes Promoting Yourself Easy There is always something coming up you want to promote. Whether it is an upcoming event, a new product launch, an update to your software, or anything else you want your audience to know, video makes your message powerful and meaningful.
If it is a big deal to you and you want it to be a big deal to your audience, then put it in a video. In traditional marketing, promoting anything with video can be a hassle. You have to spend a boatload of money to hire talent, get your video filmed and edited, then spend even more to air it on TV on peak channels at peak watch times. Thank goodness there is a quicker and easier solution! Now it is easy to create a video and promote it online for a fraction of the cost! Here is how you can make a promotional video from print materials! 3) Celebrate National Days Did you know every day is a national something day? February 1st is Hula in the Coola Day, May 3rd is Lumpy Rug Day, but most important of all, May 14th is National Chicken Dance Day. To get a calendar of the national days go here. Check out our National Cat Day video! 4) Get to Know Your Employees An easy way to create video is to feature an employee each week. Creating simple videos showcases their story, how they help your business, and more importantly how they help your customers succeed. Make your company personable, become more transparent, and get each employee’s family and friends engaged. See how Amazon did a ‘Meet The Team’ video to promote Amazon Echo! 5) Only The Cool Kids Are Live Streaming Live streaming is one of the hottest topics of 2016, and platforms are pushing you to do it. They want to send traffic your way when you live stream to promote their service. Be an early adopter. Set up a date and time when you live stream about consistent topics. Make sure to promote your event throughout the week, and don’t think that only people watching live will see your content. Most live streaming platforms will allow you to automatically post your recording directly to your feed on your social channels! Here are some tips on developing a live stream strategy.
6) Create Holiday Videos Not only can you create videos for each holiday, you can also create holiday videos beforehand for some pre-buzz excitement. People love holidays, and you can expect to get a lot of love when you are posting. Check out one of our holiday videos! (TIP: Watch your spending on video ads after Thanksgiving. If your business is non-retail, consider slowing your online marketing efforts for the month of December. This time period is incredibly competitive with retailers selling for the holidays. Double down in January, when you can expect traffic to be at least 3 times cheaper.) 7) Make an Explainer Video No matter what your product or service is, you can benefit from creating explainer videos. Explainer videos should be under two minutes and have a few purposes. Have a simple strategy, like GET’EM, which stands for Give, Explain, Tell, Explain (even more), and Make. Here is a sample GET’EM strategy:
Give a call to action.
Explain the problem the viewer is having.
Tell your audience how life could be easier with your solution.
Explain how your product or service works.
Make it clear your product or service is the solution.
This 5-step formula helps you create an awesome explainer video. If you do it right, it may turn into one of your best-converting assets and you will reap the benefits for years to come. Check out this explainer video created by marketing guru Neil Patel’s company Crazy Egg. Check out RendrFX’s explainer video! 8) Promote Your Company with Recruitment Videos No one knows how awesome your company is as much as you do! Don’t be afraid to show off perks, benefits, and what makes you unique. When you are trying to attract talent to your company, you have to position yourself as an industry leader.
Here are some awesome recruitment videos: Dropbox — https://youtu.be/-ZuxQcp84o0 BambooHR — https://youtu.be/7WH8uxXXe9o Rackspace — https://youtu.be/ZfZPD2DrqkQ Conclusion With all these ideas, there is no reason why you shouldn’t be creating video for your social media channels on a daily basis. The possibilities are truly endless. If you want to start creating video for social, you can do so here. Don’t feel like reading? Here is a video summarizing the blog post! TL;DR
https://medium.com/rendrfx/8-ways-you-can-integrate-video-into-your-daily-social-strategy-255cdf0d889f
['Peter Schroeder']
2017-03-21 19:25:15.362000+00:00
['Social Media', 'Social Media Marketing', 'Video', 'Marketing', 'Video Marketing']
How to Write a Custom React Hook
How to Write a Custom React Hook A lesson on how to extract component logic into reusable hooks. React hooks, released in React v16.8, have changed the way developers write code. By default, React gives us access to a set of powerful base hooks, like useState, useEffect, useReducer, and others, but we can also build our own custom hooks to abstract complex state logic. Wait, what are custom hooks? When writing React applications, there are bound to be instances where you find yourself using the same repetitive or redundant state logic across multiple components. With custom hooks, we can extract this logic into a function to make our code cleaner and more reusable. Custom hooks are simply functions that encompass other hooks and contain common stateful logic that can be reused in multiple components. These functions are prefixed with the word use. Custom hooks mean fewer keystrokes and DRYer code. Before you write your own custom hook, keep in mind that the open source community has published thousands of hooks, so there’s a very high probability someone has written the logic you need and published it online. However, that is of no concern to us; this article focuses on how to write custom hooks, not on whether you should write them. Let’s dive right into our first example. useLocalStorageState Let’s take a look at this Counter component that stores its current count value in local storage: Here, we’re using useState and useEffect to sync up local state to local storage. Now what if we want to duplicate this logic across multiple components? Instead of copying and pasting code, we’ll create a new custom hook to handle this logic. useLocalStorageState takes in two parameters, the key and the default value. On first initialization, we get the count from local storage if it exists and set it to state; otherwise we use the default value.
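The persistence half of this hook is plain JavaScript: parse with a fallback on first read, stringify on every write. Here is a sketch of that logic, using an in-memory object as a stand-in for the browser's localStorage so it runs anywhere:

```javascript
// Stand-in for window.localStorage so this sketch runs outside a browser
const storage = {
  data: {},
  getItem(key) { return key in this.data ? this.data[key] : null; },
  setItem(key, value) { this.data[key] = String(value); },
};

// What useLocalStorageState does on first render: read, parse, fall back
const readState = (key, defaultValue) => {
  const raw = storage.getItem(key);
  return raw === null ? defaultValue : JSON.parse(raw);
};

// What the useEffect sync does on every state change: stringify, write
const writeState = (key, value) => storage.setItem(key, JSON.stringify(value));

console.log(readState("count", 0)); // 0 (nothing stored yet, so the default is used)
writeState("count", 42);
console.log(readState("count", 0)); // 42 (the value survives a "remount")
writeState("user", { name: "Ada" }); // objects round-trip too, thanks to JSON
console.log(readState("user", null).name); // Ada
```

Inside the actual hook, readState would seed useState's initial value and writeState would run inside useEffect whenever the state changes.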
We then use the useEffect hook to keep our local storage synced with local state, and return our state and setState functions as an array. Notice how we stringify and parse the values from localStorage using JSON.stringify and JSON.parse, meaning we can store objects with this hook as well! Now we can reuse this useLocalStorageState hook across any component we want! useArray Note that we don't always have to return an array from a custom hook. In this example, we'll be building a custom useArray hook that allows us to manage array state more easily. This hook is pretty intuitive. We return an object with a bunch of modifiers for our array state that allow us to easily manipulate the array. We take in an initial array as the hook argument, then provide add, clear, removeById, and removeIndex as additional functions besides the usual value and setValue operations. Implementing this hook in a component drastically reduces the amount of extraneous logic we would otherwise need. Pretty incredible. Do we have to start our custom hooks with use? Technically no, but really yes. As per the React docs: "Please do. This convention is very important. Without it, we wouldn't be able to automatically check for violations of rules of Hooks because we couldn't tell if a certain function contains calls to Hooks inside of it." Custom hooks don't share state Each instance of a custom hook has its own state, so unfortunately you can't share state by default with a custom hook. However, if you pair up a custom hook with a global state library like Redux or Recoil, you can build custom hooks that interact with global state! Conclusion React custom hooks are incredibly powerful for writing cleaner, more maintainable, and DRYer code. We looked at a couple of great examples of custom hooks, useLocalStorageState and useArray, and how we can use them to reduce code complexity and increase reusability. 
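Similarly, here is a reconstruction of the useArray hook from the description above (value and setValue, plus add, clear, removeById, and removeIndex). The one-slot useState stand-in is again an assumption so the sketch runs outside React; in a component you would import useState from "react".

```javascript
// One-slot useState stand-in (not part of the original post): keeps state
// in a module variable so it survives repeated "renders", and supports
// functional updates like React's setState does.
let slot;
function useState(initial) {
  if (slot === undefined) slot = initial;
  const setState = (next) => {
    slot = typeof next === "function" ? next(slot) : next;
  };
  return [slot, setState];
}

// Return an object of modifiers around a plain array state, so components
// can manipulate the array without writing filter/spread logic themselves.
function useArray(initialValue) {
  const [value, setValue] = useState(initialValue);
  return {
    value,
    setValue,
    add: (item) => setValue((arr) => [...arr, item]),
    clear: () => setValue([]),
    removeById: (id) => setValue((arr) => arr.filter((item) => item.id !== id)),
    removeIndex: (index) => setValue((arr) => arr.filter((_, i) => i !== index)),
  };
}
```

A todo-list component could then write `const todos = useArray([{ id: 1, text: "learn hooks" }])` and call `todos.add(...)` or `todos.removeById(1)` directly.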
Got some great custom React hooks you've used or made? I'd love to see them; feel free to comment below! Keep in Touch There's a lot of content out there and I appreciate you reading mine. I'm an undergraduate student at UC Berkeley in the MET program and a young entrepreneur. I write about software development, startups, and failure (something I'm quite adept at). You can sign up for my newsletter here or check out what I'm working on at my website. Feel free to reach out and connect with me on LinkedIn or Twitter; I love hearing from people who read my articles :)
https://medium.com/javascript-in-plain-english/how-to-write-a-custom-react-hook-6a8315f351f6
['Caelin Sutch']
2020-12-26 22:37:13.493000+00:00
['Programming', 'Software Development', 'JavaScript', 'Web Development', 'React']
Writers as Movie Heroes
Writers as Movie Heroes America’s top grossing films feature superheroes and fantasies. Imagine a nation where real writers, artists and poets are heroes. The top U.S. films are all fantasy (Avengers, The Lion King, Toy Story 4 and Captain Marvel). Major new Polish films feature stories of true writers and poets fighting the powerful to change the world. That contrast between heroes (American and Polish) struck me when I realized I first considered journalism when I was a boy noticing Clark Kent was a newspaper reporter and Superman. My ancestry is Polish and a number of true Polish stories center on writers who actually did change the world. America, the Super Power, demands larger than life superheroes. In Poland, a land that was conquered and enslaved for 183 of the last 224 years, artists (including writers, poets, painters and actors) had to use their gifts to keep the Polish culture and history alive during the decades Polishness was forbidden. November 11, Veteran’s Day in America, is the 101st anniversary of the end of World War I, and Polish Independence Day, the date Poland rose from 123 years of slavery to be reborn. As you write your next story, imagine true heroes like this: Piłsudski. In the new film Piłsudski aka “Pilsudski,” we meet Józef Piłsudski right after he has been arrested in 1900 for publishing an underground newspaper. His writing called for revolution against Russia (one of three neighbors who wiped Poland off the map from 1795–1918). The film starts in an insane asylum where he has been drugged and is a delusional mess when his people help him escape. He goes on to organize a fighting force, becoming the George Washington of Poland. Mr. Jones Ukraine, Russia and fake news? Mr. Jones, an award winning-Polish film coming to America in 2020, tells the true story of the newspaper reporter who revealed The Holodomor, how Russian despot Joseph Stalin starved between 3.3 million and 12 million Ukrainians in 1932–33. 
The New York Times’ Soviet Union correspondent covered up the story. Pan T Pan T, a comedy, tells the story of a Warsaw writer resisting the power of communist overlords in 1953 Poland when tyranny was forcing writers and the rest of Polish society to write the communist ruler’s version of reality. Mr. T, sees his work unpublished because he keeps trying to slip the truth into his work. The government fears him, not knowing the difference between his fictional and non-fictional stories. Love and Mercy: Faustina Love and Mercy: Faustina was a big hit in Poland and attracted a massive 100,000 American fans during its debut in October. It tells the true story of the young nun, St. Maria Faustina Kowalska, called “Jesus’ secretary.’’ She wrote a 700-page diary detailing her conversations with Jesus that has been published around the world. It’s being brought back in December. The Divine Plan The Divine Plan, an American film that debuted November 6, tells the true story of a Polish poet, prolific writer and actor, St. John Paul the Great, and how he partnered with another artist/actor, Ronald Reagan, to win the Cold War. This year is 20 years since John Paul’s 1999 Letter to Artists where JPII wrote:
https://medium.com/the-partnered-pen/writers-as-heroes-19fa8d4da224
['Joseph Serwach']
2019-11-13 01:15:23.127000+00:00
['Life Lessons', 'Poland', 'Writing', 'Leadership', 'Inspiration']
Why Do We Murder the Beautiful Friendships of Boys?
Why Do We Murder the Beautiful Friendships of Boys? An epidemic of loneliness is being forced on boys and men Photo by Mark Greene On a cold February night a few years ago, professor and researcher Niobe Way presented findings from her book Deep Secrets in New York. She was hosted by Partnership with Children, a groundbreaking organization doing powerful interventions with at-risk children in New York's public schools. The work done by folks like Way and Partnership with Children has produced reams of hard statistical data proving that emotional support directly impacts every metric of academic performance — and, as it turns out, every other aspect of our lives as well. That night, as my partner Saliha and I made our way down the snow-blown streets toward Fifth Avenue, I was feeling the somber weight of the third month of the dark Northeast winter, wondering how many days remained until spring would come. "It's February. Don't kid yourself," came the answer. My charming and lovely partner was to take me to dinner after Way's presentation. It was my birthday. Niobe Way is Professor of Applied Psychology at New York University and the co-director of the Center for Research on Culture, Development, and Education at NYU. A number of years ago, she started asking teenage boys what their closest friendships meant to them and documenting what they had to say. This particular question turns out to be an issue of life or death for American men. Before Way, no one would have thought to ask boys about what is happening in their closest friendships because we assumed we already knew. In fact, when it comes to what is happening emotionally with boys and men, we tend to confuse what we expect of them with what they actually feel. And, given enough time, they do as well. This surprisingly simple line of inquiry can open a Pandora's box of self-reflection for men. 
After a lifetime of being told how men “typically” experience feeling and emotion, the answer to the question “what do my closest friends mean to me” is lost to us. A survey published by AARP in 2010 found that one in three adults aged 45 or older reported being chronically lonely. Just a decade before, only one out of five said that. And men are facing the brunt of this epidemic of loneliness. Research shows that between 1999 and 2010, suicide among men age 50 and over rose by nearly 50 percent. The New York Times reports that “the suicide rate for middle-aged men was 27.3 deaths per 100,000, while for women it was 8.1 deaths per 100,000.” In an article for the New Republic titled The Lethality of Loneliness, Judith Shulevitz writes: Emotional isolation is ranked as high a risk factor for mortality as smoking. A partial list of the physical diseases thought to be caused or exacerbated by loneliness would include Alzheimer’s, obesity, diabetes, high blood pressure, heart disease, neurodegenerative diseases, and even cancer — tumors can metastasize faster in lonely people. As I sat down to write about Niobe Way’s research, a tweet by the philosopher Alain de Botton popped up in my stream: “An epidemic of loneliness generated by the misguided idea that romantic love is the only solution to loneliness.” And there you have it. What Niobe Way illuminates in her book is nothing less than the central source of our culture’s epidemic of male loneliness. Driven by our collective assumption that the friendships of boys are both casual and interchangeable, along with our relentless privileging of romantic love over platonic love, we are driving boys into lives Professor Way describes as “autonomous, emotionally stoic, and isolated.” What’s more, the traumatic loss of connection among boys is directly linked to our struggles as men in every aspect of our lives. These boys declare freely the love they feel for their closest friends. 
They use the word “love” and they are proud to do so. Professor Way’s research shows us that in early adolescence, boys express deeply fulfilling emotional connection and love for each other, but by the time they reach adulthood, that sense of connection evaporates. This is a catastrophic loss — one that we assume men will simply adjust to. They do not. Millions of men are experiencing a sense of deep loss that haunts them even if they are engaged in fully realized romantic relationships, marriages, and families. For men, the voices in Way’s book open a deeply private door to our pasts. In the words of the boys themselves, we experience the heartfelt expression of male emotional intimacy that echoes the sunlit afternoons of our youth. This passionate and loving boy-to-boy connection occurs across class, race, and culture. It is exclusive to neither white nor black, rich nor poor. Its universality is beautifully evident in the hundreds of interviews that Way conducted. These boys declare freely the love they feel for their closest friends. They use the word “love” and they are proud to do so. Consider this quote from a 15-year-old boy named Justin: [My best friend and I] love each other… that’s it… you have this thing that is deep, so deep, it’s within you, you can’t explain it… I guess in life, sometimes two people can really, really understand each other and really have a trust, respect, and love for each other. It just happens, it’s human nature. Way writes: Set against a culture that perceives boys and men to be activity oriented, emotionally illiterate, and interested only in independence, these responses seem shocking. The image of the lone cowboy, the cultural icon of masculinity… in the West, suggests that what boys want and need most are opportunities for competition and autonomy. 
Yet the vast majority of the hundreds of boys whom my research team and I have interviewed from early to late adolescence suggest that their closest friendships share the plot of Love Story more than the plot of Lord of the Flies. Boys valued their male friendships greatly and saw them as essential components to their health, not because their friends were worthy opponents in the competition for manhood but because they were able to share their thoughts and feelings — their deepest secrets — with these friends. Yet something happens to boys as they enter late adolescence. As boys enter manhood, they do, in fact, begin to talk less. They begin to say that they don’t have time for their male friendships even though they continue to express strong desires for having such friendships. In response to a simple question about their friendships, two boys reveal everything about the decline of connection between boys during adolescence. Justin, now in his senior year, reports a tapering off of his friendships: It’s like best friends become close friends, close friends become general friends and then general friends become acquaintances. So they just… if there’s distance whether it’s, I don’t know, natural or whatever. You can say that but it just happens that way. Another high school senior, Michael, says: Like my friendship with my best friend is fading… I mean, it’s still there ’cause we still do stuff together, but only once in a while. It’s sad ’cause he lives only one block away from me and I get to do stuff with him less than I get to do stuff with people who are way further… It’s like a DJ used his cross fader and started fading it slowly and slowly and now I’m like halfway through the cross fade. 
After presenting these testimonials, Way takes us through the logical results of this disconnection for boys: [Boys] became more distrustful and less willing to be close with their male peers and believe that such behavior, and even their emotional acuity, put them at risk of being labeled girly, immature, or gay. Thus, rather than focusing on who they are, they became obsessed with who they are not — they are not girls and not children, nor, in the case of heterosexual boys, are they gay. In response to a cultural context that links intimacy in male friendships and emotional sensitivity with a sex [female] and a sexuality [gay], the boys “matured” into men who are autonomous, emotionally stoic, and isolated. The ages of 16 to 19, however, are not only a period of disconnection for the boys in my studies, it is also the period in which the suicide rate for boys in the United States rises dramatically and becomes four times the rate for girls. In America, men perform masculinity within a narrow set of cultural rules often called the Man Box. One of the central tenets of the Man Box is the subjugation of women and, by extension, all things feminine. Since we Americans hold emotional connection as a female trait, we reject it in our boys, demanding that they “man up” and adopt a strict regimen of emotional independence, even isolation, as proof they are “real men.” Behind the drumbeat message that real men are stoic and detached is the brutal fist of homophobia, ready to crush any boy who might show too much of the wrong kind of emotion. And so, by late adolescence, boys routinely declare “no homo” following any intimate statement about their friends. And there it is — the smoking gun, the toxic poison that is leading to the life-killing epidemic of loneliness for men: “no homo.” This is one more reason why we are right to fight relentlessly for gay rights and marriage equality. It is a battle for the hearts and souls of our young sons. 
The sooner being gay is normalized, the sooner we will all be free of the shrill and violent homophobic policing of boys and men. America’s pervasive homophobic anti-feminine policing has forced generations of young men to abandon each other’s support at the crucial moment they enter manhood. It is a heartrending realization that even as men hunger for real connection in our male relationships, we have been trained away from embracing it. We have been trained to choose surface-level relationships, even isolation — to sleepwalk through our lives out of fear that we will not be viewed as real men. We lock away the loving impulses that once came so naturally to us. This training runs so deep that we’re no longer even conscious of it. And we pass this training on, men and women alike, to generation after generation of bright-eyed, loving little boys. By the time Professor Way was completing her presentation, I was feeling sick. A queasy nausea roiled up. Something was uncoiling in me, something cold and bleak that had taken root long ago and gone to sleep there. As Way read these boys’ words, that thing woke up. It was a baleful moment of recognition. A sense of utter despair came rushing up, vast, deeper than deep. A February moment to end all of them. Spring was never coming back. No matter how determined I had been all those years ago to put my grief away, it was here now — a wall of pain so pure and unflinchingly raw, I was shocked to discover that something so huge could fit in the frail confines of a human being. Even now, as I write these words, gingerly reaching out to give witness to that part of me, I am confronted with a dizzying abyss of sadness that stops my breath, leaving me flinching, waiting for the same killing blow to fall again. Over and over and over again. I never made it to my birthday dinner. Instead, I wept for George, my wife holding me, as we barreled home through the winter darkness on the New York City subway.
https://medium.com/remaking-manhood/why-do-we-murder-the-beautiful-friendships-of-boys-3ad722942755
['Mark Greene']
2020-05-21 23:52:39.682000+00:00
['Mental Health', 'Parenting', 'Gender Equality', 'Masculinity', 'Feminism']
I Respect Gary Vee for a Different Reason
Connecting on a different level Gary has spent a lot of his career being interviewed, talking on stage and having conversations with strangers in the street. Right from the start, what I watched Gary do was have empathy. He tried to understand how other people were feeling and get inside their heads with every conversation. You could see it in his reactions and the way he asked questions. From the outside, it's as if he discovered that to solve problems, he had to be more empathetic. Looking at his early YouTube channel, "Wine Library," I am not sure he had discovered this yet. His early work, from my inexperienced perspective, felt somewhat self-promotional. As I have watched Gary grow in front of the eyes of the internet, rather than be torn down like the tall poppy he is, he seems to have become more empathetic, which is not the natural by-product of influence or notoriety. The louder his voice has got, the softer his heart has become towards others and the daily struggles they go through. He has shown me how to connect with human beings on a completely different level. It is for this reason that I respect him as a person. It's not his number of followers but his empathy that has made me rethink my own life. Could we not all have a bit more empathy?
https://medium.com/swlh/i-respect-gary-vee-for-a-different-reason-2aa8bd99e41b
['Tim Denning']
2019-07-15 07:46:55.443000+00:00
['Social Media', 'Self Improvement', 'Entrepreneurship', 'Empathy', 'Life']
Submit your Truth
Submit your Truth We want raw and gritty stories about life, love, sex, death, and regeneration, to name a few. Photo by Jon Tyson on Unsplash Candor is a compliment; it implies equality. It's how true friends talk. -Peggy Noonan Honest writers wanted! If you're a writer and would like to share your story on a platform solely about the cathartic release of truth-telling, then Candour is for you. Candour is a publication dedicated to sharing stories that celebrate openness about the joys, victories, milestones, and pitfalls of life. This is not a place to sugarcoat things. We want raw and gritty stories that fall under life, life lessons, sex & sexuality, women, issues, mental illness, spirituality, self (including self-awareness, improvement, help, and mastery) and stories about your failures. Dig deep. Only when we tunnel through to the depths of ourselves do we successfully unfuck the self and emerge stronger, wiser, and closer to self-mastery. Submission Process The first step is to get you added as a writer. Email me your Medium handle, and I'll check out your current work, along with the piece you wish to publish or a similar article. If I love your style and storytelling muscles, I'll respond and add you to the publication. Email me here: [email protected] with the subject: 'Story Submission for Candour.' Please be sure to include your Medium handle, or your approval won't be processed. Write a unique story (unpublished elsewhere) and select 'add to publication' from the (…) drop-down menu. Candour will show up once you've been added as a writer. I'll read the story, run it through a plagiarism checker to be sure, and make any small edits and punctuation fixes if necessary. Make sure you use Grammarly! Check your stats. If accepted, your story will be published in 1–3 days. Not all stories will be accepted, but if yours isn't, do try again. I want to build a valuable place for writers to share their Candour with readers. 
Let’s learn from each other’s experiences.
https://medium.com/candour/submit-your-truth-66c0742c8108
['Nicole Bedford']
2020-03-27 19:54:56.616000+00:00
['Writers On Medium', 'Writing', 'Publication Spotlight', 'Publication', 'Stories']
Exploring Vercel Analytics Using Next.js 10 and GTMetrix
Exploring Vercel Analytics Using Next.js 10 and GTMetrix Add some data points to your new Vercel Analytics setup with GTMetrix and explore an overview of the results Image credit: National Cancer Institute Vercel announced their new analytics feature at their recent Next.js conference and, great news — it's now live to try out! In my most recent post, I deployed a simple Next.js 10 application to Vercel. Now it's time to test out some of the new features! In this post, we'll cover how to enable Vercel Analytics on a Vercel-hosted Next.js 10 project, then use GTMetrix to help send some requests from around the globe (using throttling for various speeds) that our analytics can collect (on top of any other potential visits to the site).
https://medium.com/better-programming/exploring-vercel-analytics-using-next-js-10-and-gtmetrix-f70a8e1bb7f7
["Dennis O'Keeffe"]
2020-11-11 16:26:47.045000+00:00
['Vercel', 'React', 'Programming', 'Nextjs']
Quantitative Analysis of Harvard’s COVID-19 Response
by Henry Austin and Kelsey Wu Introduction Over the past few months, coronavirus has spread across the globe, eventually arriving at the gates of Harvard Yard. In light of recent administrative responses to the pandemic, we decided to track community messages from Harvard administration and health services over time. This included all messages from Harvard's Updates & Community Messages page and emails sent to all Harvard community members. Through quantitative text analysis in R, we determined the most frequently used words in each correspondence, seeking to understand the university's concerns, priorities, and actions as the situation evolved. Here's an approximate timeline of Harvard's correspondences: January 24: Harvard University Health Services (HUHS) issued its initial warnings, informing students that HUHS would be "monitoring the global concern for the novel coronavirus coming out of Wuhan, China." HUHS also advised students to take general health precautions (e.g. washing hands often, coughing into a tissue, and avoiding contact with sick individuals) and discouraged travel to China. Late February: The university began restricting travel to China, South Korea, Italy, Iran, and other countries with CDC Level 3 Warning. In addition, inbound travelers were asked to self-isolate and complete confidential health forms. 
Early March: As many know, Harvard's response to the virus escalated dramatically in March. HUHS prohibited any university-related international and non-essential domestic travel and strongly discouraged university affiliates from personal travel. The administration began closing events with more than 100 attendees and asking affiliates to familiarize themselves with Zoom. March 10: Students received notice to leave campus as soon as possible and at least by March 15, and all classes were moved online. Some exceptions were granted. March 13: The university informed its community of the first confirmed, Harvard-affiliated case. Last 2 weeks of March: The administration thanked the Harvard community for its flexibility and adjustments. The administration also released additional guidelines for essential personnel, expressed the significance of anonymity during this crisis, and announced the cancellation of commencement. On March 24, President Larry Bacow announced that he and his wife had tested positive for the virus. Data and Interpretation To produce the table below, we used R (tidyverse and tokenizers packages) to produce a list of all the words in each correspondence and their frequency of appearance. We removed common words such as "and," "the," and "of" that did not contribute much meaning. 
The table shows the top 10 most frequently used words in each correspondence, in descending frequency. The last row of the table shows the top 10 most frequently used words among all correspondences. The initial emails in late January indicate Harvard University Health Services (HUHS) and Harvard administration’s preparations for the pandemic. Namely, Harvard was in the process of producing protocol for COVID-19’s arrival to the U.S. or Harvard campus. During this time, words such as “travel”, “international”, and “China” are frequently used as the administration warns the community of potential dangers and uncertainty surrounding the novel coronavirus. This frequency seems to suggest the remoteness of COVID-19, establishing a tone of comfort and security on campus. At this time, the virus had yet to appear on American soil. Given the prevalence of the word “China” in Harvard coronavirus correspondence, we decided to track the frequency of its appearance over time. Initially (late January and early February), when the pandemic was largely centered across the Chinese mainland, the use of the word steadily increased. At the end of February, it dropped to zero. These changes are documented in the graph below. In fact, the use of the word “China” in Harvard’s messages peaked as U.S. cases began to skyrocket. This trend is intuitive. As the pandemic moved across new borders and affected new communities around the globe, the university placed a new focus on their response — one centered around the domestic spread. During early March, as the pandemic intensified across the country, Harvard’s correspondence became increasingly focused on the Harvard community. Rather than using words like “China,” and “international,” the most frequent words shifted to “dorms,” “campus,” and “students.” As the virus spread around the world, especially the United States, university leaders likely realized that the pandemic was not a remote “international” issue. 
Rather, it was one that could turn campus life upside down. Correspondence also focused heavily on the word “community” (see “Frequency of ‘community’ across Harvard’s messages” graph below). Although “community” appeared in all correspondences, it is interesting to note that “community” decreased in frequency as “China” increased in frequency. As mentioned earlier, this further indicates Harvard’s increasing emphasis on unifying its community, especially as the first Harvard-affiliated cases were reported. Messages sent after students left campus (March 15) and after the second Harvard-affiliated case (March 16) used “community” less frequently. At this point, the university focused on establishing protocol for essential personnel and thanking students, faculty, and community members. Words such as “students”, “protect”, “campus”, and “best” began to appear more frequently during this early March time period. There was a strong commitment by administration, or at least the appearance of one, to prioritizing the safety of students. Another prevalent word was “travel” (see “Frequency of travel across Harvard’s messages” graph below). As the frequency of “travel” increased, words such as “China” and “international” also appeared more frequently. During this time, the university discouraged travel and warned students of risks and precautions, should travel be necessary. However, as the pandemic moved closer to Harvard, the use of “travel” decreased. Harvard began focusing more heavily on issues related directly to campus and the student population in Cambridge. Following March 10, when students were informed they would have to leave campus in five days, the use of “travel” became frequent again as the university communicated information about leaving campus. In early March, the tangible impacts of the pandemic on Harvard’s campus became evident. Email content began reflecting a stronger sense of urgency, utilizing words such as “possible”, “strongly”, and “protect”. 
This call to action was realized through the University’s announcement on March 10, when students received notice to leave campus within five days. After students left campus, the tone of Harvard’s correspondence became increasingly dire, in line with the worsening state of the U.S. pandemic. The administration used words such as “emergency,” “essential,” and “guidance” frequently to convey the gravity of the crisis. This tone differs drastically from the initial warnings in late January. Overall, the urgency of the correspondence, as one might expect, aligns closely with the domestic upsurge of COVID-19. Initially seen as a solely international travel hazard, the virus rapidly progressed into a campus emergency, resulting in unprecedented action by campus leadership. Further investigation of this data could assess whether the University’s response was adequate or called for. Should the university have acted sooner with greater urgency (given the increasing number of cases in the U.S.), was the response unnecessary, or was it exactly on time? As more information is published, data-driven answers to these questions may provide useful proposals for future crises. Data Analysis Procedures First, we collected all correspondences from Harvard and HUHS about COVID-19. This included all messages from Harvard’s Updates & Community Messages page and emails sent to all Harvard community members. We then used R (tidyverse and tokenizers packages) to produce a list of all the words in each correspondence and their frequency of appearance. To remove common articles and prepositions (e.g. the, and, of, I), we utilized the Google Web Trillion Word Corpus, a dataset produced by Google’s web crawlers containing English word n-grams and their observed frequency counts. Essentially, the Corpus provides a list of the most commonly used words and the word frequency (measured by the percentage of the Trillion Word Corpus consisting of the given word). 
In our analysis of Harvard’s correspondences, we eliminated all words with a Corpus frequency > 0.1% — basically, we filtered exclusively for words that occurred less than once every 1000 Corpus words. This specification removed words that wouldn’t give us meaningful insight, which was key in identifying how the university framed the outbreak and its response. We completed this analysis for each correspondence between January 24 and March 27, 2020. Learn More Data on the COVID-19 pandemic can be found on the CDC website and in this map created by Johns Hopkins University. Sources used for this project can be found here and here. The most up-to-date information on Harvard’s response can be found on the university’s COVID-19 response website. This article was an analysis by the Harvard Open Data Project, a student-faculty group that analyzes public Harvard data to increase transparency and analyze problems on campus. Edited by Terry and Sahana, cover image by Melissa. Want to work on more projects like this? Join us!
https://medium.com/harvard-open-data-project/quantitative-analysis-of-harvards-covid-19-response-bfffdc1e5974
['Harvard Open Data Project']
2020-04-03 16:06:15.826000+00:00
['Covid 19', 'Coronavirus', 'Harvard']
My Experience at Khipu AI 2019
About Khipu Khipu, the Latin American conference in Artificial Intelligence, was inspired by Deep Learning Indaba, a meeting that has been held in Africa since 2017. Its latest edition took place in November at the Facultad de Ingeniería, Universidad de la República in Montevideo, Uruguay. It was the first time that Latin America hosted an AI event of such scale and importance. Hundreds of researchers and professionals from all over the continent joined together with the purpose of empowering research in the field. It was an effort that was absolutely necessary. Area cartogram showing countries rescaled in proportion to their accepted NIPS papers for 2006–2016. (Ref) To make it happen, a large number of people and sponsors had to get involved. It is well known that being a researcher in a developing country is not easy. In Brazil, for instance, the monthly stipend the federal government offers a master’s student is equivalent to 350 US dollars; for a Ph.D. student, it is 520. This makes an academic career very unattractive, to say the least. The emotional and psychological pressures are huge. Very often students get frustrated with their work. They feel alone, they feel the social pressure and they feel undervalued. Not to mention that they always have to answer the “When are you going to get a real job?” question. Khipu’s Crowd Spending a week surrounded by so many interesting people was certainly motivating for each one of us. We felt part of a community. Moreover, it was also very important to be around so many successful researchers. They shared their stories and gave us plenty of advice. A successful academic career seemed like a less impossible goal, and a worthwhile and rewarding one as well. It inspired me to continue with my studies and, more importantly, to share my knowledge with those who need it. I felt the importance of sharing my experience with my local community and of being closer to it. This is how I hope to repay what I learned at Khipu.
Program Khipu’s “official” routine ran from 08:30 to 19:00, but there were many unofficial events as well. The schedule basically consisted of theoretical lectures interspersed with Spotlights and Sponsor’s Talks until late afternoon. Afterwards, we had parallel practical sessions in which we had the opportunity to choose between two topics or to participate in the Hackathon. The program was pretty intense, so I’ll just highlight my favourite moments. To see the whole schedule and get access to videos and slides, please see Khipu’s program. The first day of Khipu was mostly dedicated to Machine Learning and Deep Learning fundamentals. My favourite part of this day was the practical session on Optimisation for Deep Learning. Of course, I can’t say that I’ve mastered this subject, but I felt relieved because I now feel more confident about hyperparameter tuning for optimisation. Like I said earlier, Khipu had many unofficial events. The organizers worked very hard to entertain us with multiple surprises. The first day of Khipu ended with a Tango performance at Anfiteatro del Edificio Polifuncional José Luis Massera. It was awesome, and the week was just starting. My favourite parts of Day 2 were Kyunghyun Cho’s lecture on Recurrent Neural Networks and the panel “How to Write a Great Research Paper” with Nando de Freitas, Claire Monteleoni, David Lopez-Paz and Martin Arjovsky. The panelists shared precious advice: the tips ranged from text stylistics to collaborative editing and general guidelines. They also shared examples of great papers to inspire us. Reinforcement Learning Practical Session The best moments of Day 3 were related to the topic of Reinforcement Learning. Unfortunately, I arrived at Khipu knowing very little about this subject, but Nando presented an awesome lecture and I became very interested in studying it more deeply. The practical session afterwards fit like a glove.
To finish Day 3, one of Khipu’s major sponsors, Tryolabs, threw a great party at Plaza Mateo Rooftop & Bar. It was amazing to enjoy such a beautiful sunset and get together with so many great scientists. Those feelings of solitude felt far away by now. After three such amazing days, I thought it would be very hard for Khipu to keep surprising us, but I was very wrong. On Day 4, Chelsea Finn gave a fascinating talk about Robotics and Continuous Control. I’ve always been interested in robots, but my studies took me in a quite distant direction, so it was good to learn more about this subject. David Lopez-Paz gave one of my favourite talks ever. His presentation held my attention from the beginning. I highly encourage you to watch the video. In this talk, David guides us through the history of causality and how it relates to correlation. The most important slide of David’s talk The 2nd most important slide of David’s talk Women in AI hosted by Google To finish Day 4, Google AI hosted the event Women in AI with the panelists Sandra Avila, Chelsea Finn, Maria Simon, Giulia Pagallo, Guillermo Moncecchi and Nando de Freitas. Remember what I said about that feeling of isolation that researchers often feel? I believe it is much worse for female researchers. The lack of representative female figures makes it feel like this career is not for us. But the room was filled with amazing women, and this motivated me even further. Now it’s time to talk about the last day of Khipu. It’s tough to choose the best moments of such an incredible day. I’ll start by saying that I was looking forward to watching the parallel session on Advanced NLP with Oriol Vinyals (Video), Jorge Pérez (Video), Luciana Benotti (Video) and Lucia Specia (Video); and I wasn’t disappointed. It was one of the most important sessions to me, since this is my research area. This session basically summarized all the hot topics in NLP right now.
Next, Oriol Vinyals presented the exciting project AlphaStar: StarCraft II using multi-agent RL (Video). By now, I must say that if Khipu had been just that morning, I’d have been totally satisfied. But it went on! During lunch I had the chance to present my poster. It was a great opportunity to talk about my master’s research. Khipu happened two weeks before my defense, so it was the nicest way of concluding this stage of my life. Sadly, though, I didn’t have much time to see the other posters being presented alongside mine. After lunch we had to choose between two parallel sessions: i) AI for Social Good with Jeff Dean (Video), Danielle Belgrave (Video), Cecilia Aguerrebere (Video, Slides), Alejandro Noriega Campero (Video) and Guillermo Sapiro (Video), and ii) Life of a ML Startup with Mario Guajardo, Agustina Sartori, Martín Alcalá, Thiago Cardoso and Matthieu Jonckheere. I chose to watch AI for Social Good, and I was pleased to see that so many researchers are working so hard on social problems. And finally, the last talk at Khipu: Deep Learning to Solve Challenging Problems with Jeff Dean (Video). Jeff is currently the lead of Google AI, and most of his talk was structured around a 2008 publication by the U.S. National Academy of Engineering that listed Grand Engineering Challenges for the 21st Century. Jeff introduced this list of 14 problems and mentioned that Google is currently working on 10 of them. He selected 5 of these to share Google’s progress with us, namely: i) Restore and Improve Urban Infrastructure, ii) Advance Health Informatics, iii) Engineer Better Medicines, iv) Reverse-Engineer the Brain, v) Engineer the Tools for Scientific Discovery. During his talk, Jeff also mentioned that it would be very interesting to see what researchers outside of Google could do with more computational resources. And then, once more, we were surprised by Khipu: we got free access to cloud TPUs to support our research. Unbelievable.
After the talk, we were invited to an awesome closing party with dinner and music from a local band. It took place at Club Uruguay, a club in a historical neighbourhood of Montevideo (Ciudad Vieja). It was incredible.
https://medium.com/datalab-log/my-experience-at-khipu-ai-2019-ffe13d43f582
['Beatriz Albiero']
2019-11-28 13:42:12.940000+00:00
['Artificial Intelligence', 'Machine Learning']
A Physicist’s Guide to Lists in Python
Photo: Rhett Allain Physics isn’t just physics. You have to do a lot of other stuff too — like reading, writing, math, drawing, communicating…and yes…programming. For me, I prefer to use python. OK, I really like GlowScript VPython. It’s basically python but with some pre-loaded modules that have stuff to handle vectors, 3D objects, graphs and more. Yes, it’s my favorite. Oh, and it runs in a web browser too. Here is a List I previously created an introduction to functions in python, so now it’s time for lists. In python, lists are sort of like a list you make of your groceries. Let’s just make a simple list. Check this out (code is online). Here is the output: There you go. A list. Here are some notes: You can name a list as you would any python variable. It must be a unique name and it can’t start with a number. The list is created by using square brackets []. The items in the list are separated by commas. The list items can be anything. They don’t have to be just numbers, they can be a mix of stuff. Here is another list. This crazy list includes a vector, a string, another list, and a number. It’s awesome. Addressing List Items So, you’ve got a list. What now? Let’s make a list of numbers (I’m not really going to use strings in my lists since I am focusing on physics stuff). things=[2,8,1,3,0] Here are 5 numbers. Suppose I want to just print out the second element (the 8). That would look like this: print(things[1]) If you want to print out the first element, that would be things[0] — yes, the items are numbered starting from 0 (not 1). Here are a couple of other important things about lists. You can find the length (the number of items in a list) with len(things) — which would be 5 in this case. It’s possible to go through a list backwards. If you use an index of -1 (things[-1]), it gives you the last item in the list; things[-2] is the second-to-last item.
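A minimal sketch of the basics above in plain Python (the post uses GlowScript VPython, but these list operations behave the same there):

```python
# A simple list of numbers
things = [2, 8, 1, 3, 0]

print(things[1])    # second item: 8 (indexing starts at 0)
print(len(things))  # number of items: 5
print(things[-1])   # last item: 0
print(things[-2])   # second-to-last item: 3
```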
If you want to change one item in a list, you can do that. Just do something like things[-1]=20. This will change the last item from 0 to 20. You can make a list consisting of lists. This is pretty cool and it’s useful to make things that are sort of like matrices. I’ll cover this in more detail later. Finally, remember that glowscript.org and trinket.io aren’t real python. They actually take your code and convert it to javascript. This means that if you make two variables equal to the same list, changing one value changes the other. Just be careful. Adding and Removing Items Let’s make a list consisting of 10 random numbers. Remember that for GlowScript VPython, random() returns a random number between 0 and 1. Here is the plan. Make a loop that goes up to 10 (but not to 11). You could do this a bunch of different ways, but if you have followed my other python stuff for physics — the while loop is very common. In the loop, generate a random number. Add this to the list. Repeat. Shouldn’t be too difficult. Here is what it looks like (with some comments afterwards). The code is online. See. It worked. Now for some comments: Line 7: I start off with an empty list. There’s nothing in it. But it’s difficult to add to a list that doesn’t exist, so you need to make one first. Line 4 and 10: I think it’s pretty obvious these are the max number in the list and a “counter” for the list. Line 12: in the while loop, you need to put <= (less than or equal to) so that it gets all the way up to 10. If you aren’t sure, you can always just make a loop that prints out “n” to check that it’s working correctly (that’s what I do). Line 14: This adds an element to the list. The thing that you want to add has to be in square brackets.
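The loop just described might look roughly like this in plain Python (GlowScript pre-loads random(), so here it is imported explicitly; the comments follow the post's line-number commentary):

```python
from random import random  # GlowScript pre-loads random(); plain Python imports it

N = 10       # the max number of items ("line 4" in the post)
rando = []   # start with an empty list ("line 7")
n = 1        # the "counter" ("line 10")

while n <= N:               # <= so the loop gets all the way up to 10
    temp = random()         # a random number between 0 and 1
    rando = rando + [temp]  # add it to the list; note the square brackets
    n = n + 1

print(rando)
print(len(rando))  # 10
```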
Pretty cool, right? I like adding to the list with the square brackets because it just makes sense to me. If you don’t think that’s sophisticated enough, you could do this: rando.append(temp) Same thing. What if you want to remove items from a list? Here are some options. rando.pop(2) — this removes the [2] item (the third) from the list. rando.insert(2,3) — this puts the number 3 at the [2] position. rando.clear() — this removes everything from the list (not sure why you would want to do that). There’s some other stuff — but I don’t really use it too much. Here is a list of list stuff if you need more. Traversing a List You know I’m working my way up to an example using lists — right? Well, one of the things you need to do is to go through each item in a list and do something (physics stuff). I’m going to show you two ways to go through a list (although there are more). Let me start with a list of 5 vectors — I just feel like using vectors. Here is my list: Yes, you can have a list continue onto a second line — just end the first line with a comma and you should be fine. Now let’s go through each item in the list and print the magnitude of the vector. Remember that in GlowScript mag() is a built-in function that returns the magnitude of a vector. Here’s one way to do this. Comments to follow. It’s really cool. The for thing in list: does exactly what you would think it would. It temporarily assigns the variable thing to each item in the list. In this case, tempvector is each vector in the list (vectorstuff). Then you can just treat that like you would any other vector. Super simple. When would this method not be the best? What if you need to look at two items in the list at the same time? What if you want to calculate the distance between two position vectors in the list?
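Here is a stand-alone sketch of that for-each traversal. GlowScript's vector and mag() aren't available in plain Python, so (x, y, z) tuples and a small mag() helper stand in for them:

```python
from math import sqrt

# Plain (x, y, z) tuples stand in for GlowScript vectors,
# and this mag() mimics GlowScript's built-in magnitude function.
def mag(v):
    return sqrt(v[0]**2 + v[1]**2 + v[2]**2)

vectorstuff = [(1, 0, 0), (3, 4, 0), (0, 0, 2),
               (1, 1, 1), (2, -2, 1)]

# The for-each pattern: tempvector is assigned each item in turn
for tempvector in vectorstuff:
    print(mag(tempvector))
```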
In this case, you might need to use another method. Check this out. The output in this method is exactly the same as before. So, here are some notes: len(vectorstuff) returns the length of the list (but you already knew that). range(len(vectorstuff)) makes a list of numbers that’s the same length as the vectorstuff list. Now the variable n is a number, not the actual list item. With this index (n), vectorstuff[n] is the nth item. Let’s do some physics. Example: A Bunch of Projectiles The cool thing about GlowScript is the 3D aspect. I’m not going into all the details, but here is how you could make a projectile motion problem display in 3D. But I don’t want to make one ball. I want to make MANY balls. How many? N balls. Here is how this is going to work. I’m going to make a list of balls. Yes, you can put objects (like the sphere()) into a list. All the balls will start at the same location, but they are going to be launched at different angles. These angles will be evenly distributed between 0 and 90 degrees. For the projectile motion part, I am going to go through each element in my list: update the velocity and update the position. Here is the code. I’m going to go over this stuff in two parts. First, this is the set up stuff. Comments: Line 6: make an empty balls list. Line 7: All the balls will have the same starting velocity. This is that velocity. Line 9,10: This is the starting angle and then the angle step size (pi/2 divided by the number of balls) Line 13: for the list build, I use the index “i” in the range of the list. Line 14–15: adds a ball to the list. The make_trail=True just means the balls will leave a trail and look cool.
Line 18,19: this is the time and the time step (for the numerical calculation). Now for the code that runs the thing. Really, this is really similar to the loop for a single ball. Line 21: the while loop is for 1 second. Different launch angles will take different times, but they should all be finished by 1 second. Line 22: Remember, rate(100) is a GlowScript thing. This tells the code to not run any more than 100 loops per second. Since the time step is 0.01 second, a rate of 100 should run in real time. Line 24: Here is a loop in the main time loop. This loop just goes through each item in the balls list. Line 25, 26: since I am calling each item “ball”, I can just do stuff that looks like a normal line for a single ball. That’s nice. Line 27: This is just a check to make sure the ball “hits” the ground. If so, it stops. So, what does it look like when you run it? Like this: But wait. What if you want MORE BALLS? You just need to change N to 50 and you get this. It’s not just physics, it’s art. If you want to spice it up, you could try changing the ball colors and stuff like that. Example: Using a List for Data There is one more very useful way to use a list — for data. Yes, let’s say that you get some position-time data in a spreadsheet or something. Maybe this came from a video analysis or data from a sensor. Who knows. Yes, it’s very possible that you could plot this data in the spreadsheet or some other very awesome graphing program. But what if you want to plot this along with a numerical calculation? Or maybe you need to do some calculations on the data. Whatever the reason, I’m going to show you how to plot this in python using a list. Let’s start with some data. Here is position-time data from David Blaine’s ascension stunt (with balloons) — full analysis here. So, here’s what I do.
I’m going to start with the time column. I’ll highlight all the numbers and copy them. Then paste them into a list in GlowScript. I’ll do the same for the position data. When I paste it into GlowScript, it’s just a bunch of numbers, one on each line. I usually just manually add the commas and put them on one line. If you have a giant list of numbers, it might make more sense to copy them first into a word processor so you can “find and replace” the return-lines with commas. Yes, I also know that in real python you can open a file. Actually, you can do this in GlowScript too — but the file has to be on some server somewhere. I just find it’s easier to copy and paste the numbers into GlowScript. Now, to plot them I just need to traverse the lists and use normal graphing stuff (here is my graphing tutorial). Note: if your two data lists have a different number of elements, you are going to have a bad time. Here is the code to plot these two. I really don’t have any extra comments on this. Here is the graph that it creates. Cool. OK, I think that’s a good start to lists in python.
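For reference, the plotting traversal described above might look roughly like this. The numbers here are made-up placeholders (not the actual Blaine data), and plain (t, y) pairs stand in for GlowScript's graphing calls:

```python
# Hypothetical position-time data pasted in as two lists
tdata = [0, 0.5, 1.0, 1.5, 2.0, 2.5]         # time in seconds
ydata = [0, 12.1, 25.3, 37.8, 51.0, 63.9]    # position in meters

# The two lists must have the same number of elements
assert len(tdata) == len(ydata)

# Traverse both lists with one index and build (t, y) plot points;
# in GlowScript you would instead call something like f1.plot(tdata[n], ydata[n])
points = []
for n in range(len(tdata)):
    points.append((tdata[n], ydata[n]))

print(points)
```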
https://rjallain.medium.com/a-physicists-guide-to-lists-in-python-35f4304d6c6f
['Rhett Allain']
2020-12-28 22:55:53.491000+00:00
['Python', 'Data Science', 'Physics', 'Programming']
AI auto-generates M&A candidates
Traditional approach: Company X wants to expand in a particular technology area and wants to prepare a list of potential acquisition candidates. How does one identify these companies? How does one rank them? One hires an expensive investment banker to prepare a shortlist. Bankers love buy-side mandates. New approach: Hire a machine. Acknowledged, M&A is a lot more than shortlisting candidates, but let’s peel the M&A onion bit by bit, shall we? Target list generation is a key activity, and most conglomerates maintain an active list and spend hours of CXO time on it. Here is how a machine can help, illustrated via an example… Let’s take the Electric Vehicle sector. The “biggest electric car maker” (acquirer) wants to identify companies (targets) that have the closest matching technology portfolio to buy. A 2-dimensional representation of patent landscape Vectorize: 12,571 electric vehicle patents (recent ones) are accessed. This covers 1809 companies (patent assignees). The machine vectorizes the patents (see vector cloud figure). In this process, the machine understands what each word means (See Inset: Machine understands text via vectors). Interpret: The machine next works out which areas the acquirer owns technologies in. Below is a plot of the focus areas, which span from battery packs to torque control to thermal charging etc. 8 areas are identified by the machine. A 2-dimensional representation of focus areas in vector space Shortlist: For each area, the machine compares every patent of the acquirer with every patent of each potential target (1808 companies) and, given our definition of finding the closest technology targets, it uses a “relatedness”/“closeness” metric to shortlist candidates. The figure below has 8 diagrams for the 8 areas, plotting every company (1808) based on the patent vector analysis. The shortlist is in front of us. A shortlist for every area.
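To make the "relatedness" idea concrete, here is a heavily simplified sketch. The patent texts are tiny hypothetical stand-ins, and a bag-of-words cosine similarity replaces the learned vector embeddings a real pipeline over thousands of patents would use:

```python
from collections import Counter
from math import sqrt

# Toy stand-ins for patent texts (hypothetical, not real data)
acquirer_patent = "battery pack thermal management for electric vehicle"
target_patents = {
    "Company A": "thermal charging control of a battery pack",
    "Company B": "infotainment touchscreen user interface",
}

def vectorize(text):
    # Bag-of-words vector; a real pipeline would use trained embeddings
    return Counter(text.lower().split())

def cosine(u, v):
    # Standard cosine similarity between two sparse count vectors
    dot = sum(u[w] * v[w] for w in u)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

a = vectorize(acquirer_patent)
scores = {name: cosine(a, vectorize(text)) for name, text in target_patents.items()}

# Rank candidates by "closeness" to the acquirer's technology
shortlist = sorted(scores, key=scores.get, reverse=True)
print(shortlist)  # → ['Company A', 'Company B']
```

Swapping the metric (e.g. rewarding low overlap to find complementary portfolios) would reorder the shortlist, which is exactly the point made below.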
We defined the closest technology as the criterion… we could as well define it as most complementary/ most core/ most cross-connected/ etc… a metric change would change the shortlist. Acquisitive companies maintain an active target list. The machine can repeat this analysis in hours. Compare this with going through a procurement process to hire a banker and the costs associated with that. In the “digital” era, where increasingly everything is a vector, do we peel the onion or push the boundaries of what is possible with a different form of intelligence? If competitor 1 acquires competitor 2… what happens, what technology vanishes… The 1809 companies form a network where anyone can buy anyone else… should we acquire someone today given potential changes in the technology landscape? Should an organic technology research project become inorganic (an acquisition)? How do we model this? Existing human forms of intelligence will struggle to simulate such complexities… machines can… albeit artificially.
https://towardsdatascience.com/ai-auto-generates-m-a-candidates-41eca0b8d7c1
['Harsha Angeri']
2019-08-09 13:29:24.071000+00:00
['Machine Learning', 'Artificial Intelligence', 'Mergers And Acquisitions', 'Technology', 'Data Science']
How to Deploy Your Qt Cross-Platform Applications to Windows Operating System Using windeployqt
Step 7. Verify the execution of the ‘windeployqt’ command Check the windeployqt -h help command:
D:\CodeLab\ComposingWidgetsDemo Windeployqt>windeployqt -h
Usage: windeployqt [options] [files]
Qt Deploy Tool 5.14.2
The simplest way to use windeployqt is to add the bin directory of your Qt installation (e.g. <QT_DIR\bin>) to the PATH variable and then run:
windeployqt <path-to-app-binary>
If ICU, ANGLE, etc. are not in the bin directory, they need to be in the PATH variable. If your application uses Qt Quick, run:
windeployqt --qmldir <path-to-app-qml-files> <path-to-app-binary>
Options:
-?, -h, --help            Displays help on commandline options.
--help-all                Displays help including Qt specific options.
-v, --version             Displays version information.
--dir <directory>         Use directory instead of binary directory.
--libdir <path>           Copy libraries to path.
--plugindir <path>        Copy plugins to path.
--debug                   Assume debug binaries.
--release                 Assume release binaries.
--pdb                     Deploy .pdb files (MSVC).
--force                   Force updating files.
--dry-run                 Simulation mode. Behave normally, but do not copy/update any files.
--no-patchqt              Do not patch the Qt5Core library.
--no-plugins              Skip plugin deployment.
--no-libraries            Skip library deployment.
--qmldir <directory>      Scan for QML-imports starting from directory.
--qmlimport <directory>   Add the given path to the QML module search locations.
--no-quick-import         Skip deployment of Qt Quick imports.
--no-translations         Skip deployment of translations.
--no-system-d3d-compiler  Skip deployment of the system D3D compiler.
--compiler-runtime        Deploy compiler runtime (Desktop only).
--no-virtualkeyboard      Disable deployment of the Virtual Keyboard.
--no-compiler-runtime     Do not deploy compiler runtime (Desktop only).
--webkit2                 Deployment of WebKit2 (web process).
--no-webkit2              Skip deployment of WebKit2.
--json                    Print to stdout in JSON format.
--angle                   Force deployment of ANGLE.
--no-angle                Disable deployment of ANGLE.
--no-opengl-sw            Do not deploy the software rasterizer library.
--list <option>           Print only the names of the files copied. Available options:
                          source: absolute path of the source files
                          target: absolute path of the target files
                          relative: paths of the target files, relative to the target directory
                          mapping: outputs the source and the relative target, suitable for use within an Appx mapping file
--verbose <level>         Verbose level (0-2).
Qt libraries can be added by passing their name (-xml) or removed by passing the name prepended by --no- (--no-xml). Available libraries: bluetooth concurrent core declarative designer designercomponents enginio gamepad gui qthelp multimedia multimediawidgets multimediaquick network nfc opengl positioning printsupport qml qmltooling quick quickparticles quickwidgets script scripttools sensors serialport sql svg test webkit webkitwidgets websockets widgets winextras xml xmlpatterns webenginecore webengine webenginewidgets 3dcore 3drenderer 3dquick 3dquickrenderer 3dinput 3danimation 3dextras geoservices webchannel texttospeech serialbus webview
Arguments:
[files]                   Binaries or directory containing the binary.
Step 8. Execute the “windeployqt” command
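As an illustration of Step 8, a typical release deployment from the Qt command prompt might look like the following. The directory and binary name are hypothetical placeholders; substitute your own project's paths (this is Windows-only and assumes windeployqt is on PATH):

```shell
:: Run inside the Qt command prompt so windeployqt and the Qt DLLs are on PATH.
:: Point the tool at the release build of your application binary:
cd D:\CodeLab\ComposingWidgetsDemo\release
windeployqt --release ComposingWidgetsDemo.exe
```

After it finishes, the release folder should contain the Qt DLLs and plugin subdirectories your application needs to run on a machine without Qt installed.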
https://medium.com/swlh/how-to-deploy-your-qt-cross-platform-applications-to-windows-operating-system-by-using-windeployqt-a7cd5663d46e
['George Calin']
2020-12-17 08:39:20.492000+00:00
['Qt', 'Development', 'Deployment', 'Cross Platform Apps', 'Windows 10']
The Bully Pulpit: How Presidents Lead the Nation (and How Trump Blew It)
It is a sad thing for the current occupant of the White House, and for whatever legacy he may leave to posterity, that the term “bully pulpit” has nothing to do with the sort of bullying at which he excels. If only making fun of people, giving them mean nicknames, and rousing supporters to hate one’s enemies constituted a useful presidential skill — then we might have to think twice before calling Donald Trump’s presidency an abject failure. Unfortunately, though, when Theodore Roosevelt coined the term “bully pulpit,” he was not thinking of that kind of bullying. He meant it as an adjective. “Bully for England” meant “good for England.” A “bully chap” was a fine fellow. And a “bully pulpit” was a good platform, as in neat, nifty, swell, the bee’s knees, the cat’s pajamas — that sort of thing. What Roosevelt realized was that the President of the United States spoke from a unique position — not that of a monarch, since the president was elected by the people. But not that of an ordinary citizen either. Presidents command crowds. People care what they have to say. Newspapers print it, television stations broadcast it, bloggers blog about it. In Roosevelt’s day, as in our own, anything a president says will be listened to and given special attention just by the fact that the president is saying it. Roosevelt also recognized that the opportunity to use this bully pulpit was perhaps the greatest power that American presidents have. When they are campaigning, Presidents often talk about the laws and programs they will pass. But presidents don’t actually make laws. That is the job of Congress. The greatest influence that presidents have over legislation lies in their ability to persuade members of Congress to pass certain laws, or to persuade the public to pressure their Senators and Representatives until they are willing to do so. The President’s power to persuade is important because laws are very blunt instruments for doing anything that matters.
Laws can constrain people’s behavior, but presidential persuasion can inspire and comfort people, bring them together, build communities, and change society from the inside out. When John F. Kennedy challenged America to put a person on the moon by the end of the 1960s, he was not creating a policy; he was inspiring a vision. And when Dwight Eisenhower went on TV to explain why he federalized the National Guard to enforce the Supreme Court’s Brown v. Board of Education decision, he was appealing — as Abraham Lincoln did a hundred years earlier — to the better angels of our nature. The bully pulpit is especially important in times of great crisis. Our greatest presidents have been the ones who had great crises to shine in — Washington and the Revolution, Lincoln and the Civil War, FDR and the Great Depression. James Monroe and William McKinley were perfectly serviceable executives, but they presided over relatively good times. Their mettle was never put to the test by a great national upheaval, so they never rose above the second rank. James Buchanan, Andrew Johnson, and Herbert Hoover, on the other hand, are counted among our worst presidents because they couldn’t rise to the crises that fate handed them. Which brings us to Donald Trump, the current president of the United States. Unlike many recent presidents, Trump had the opportunity to respond to a crisis the way that great presidents must. The COVID-19 pandemic is one of the most serious public health crises in our nation’s history, and it has sparked one of the worst economic crises since the Great Depression. There is nothing that a president could have done about the virus coming to America, but there are a lot of things that he could have done to control its impact on the lives and livelihoods of the people. Most of these things, though, would have involved using the bully pulpit to inspire the nation and challenge us to rise to our better selves.
Imagine what might have happened if, back in February, the President had gone on TV and said something like this: For most people, COVID-19 is not fatal. Most people who are infected with the virus experience symptoms similar to the flu, and most cases end with full recovery. The United States has some of the best public health professionals in the world, and they are doing everything they can to identify cases and contain the spread of this disease. However, the virus can be very dangerous for some people. Older adults and those with weak immune systems are disproportionately at risk. If the coronavirus is not contained, it will spread in the United States the way that it has already spread in other countries, and many of the most vulnerable people in our society will suffer the most. By taking some very small precautionary steps, however, we can work together to slow this outbreak down, contain its effects, and give our public health officials the time that they need to understand what we are up against and figure out how to beat it. But this will only work if we all take these steps, even if there is no evidence of an outbreak in our immediate vicinity. At times like this, we must come together as Americans and protect each other from the ravages of this new disease. If we use this outbreak as a reason to blame each other, or to dig deeper into our silos and echo chambers, the situation will get worse. This is a public health emergency that we must greet with an outpouring of public virtue. Viruses are not anybody’s fault, and they do not work for one side and against another. This is not an election issue; it is an American issue and a human issue. We can do this together. We cannot do it alone. A speech like this, made at the outset, might have turned the virus into something that we beat together as Americans — and not as a reason to divide further into ideological categories.
There was no reason for the President to turn a national emergency into a campaign attack. He did not have to label the virus a hoax, or encourage his supporters to protest against “Democrat governors” when they tried to implement public health measures. He did not have to undermine our public health apparatus and cause people to doubt guidelines intended to keep us safe. And even now, as we experience a resurgence in cases and, now, in deaths, the President could do more than any other person in the country to save lives and contain the disease by wearing face coverings whenever he is in public and urging everyone in the nation to do the same. Masking prevents the spread of viruses best when people want to wear masks — when they want to promote public health because that is the sort of thing that Americans do. The fact that we are now debating compulsory mask ordinances across the nation demonstrates that we have already lost the battle for hearts and minds, which is the stage at which presidential persuasion matters the most. Donald Trump has failed miserably to use his bully pulpit because he has continually chosen to use his pulpit to be a bully. This is why he, nearly alone among the leaders of the world, has seen his approval ratings plummet during the worst part of the pandemic. In a crisis, people need leaders who can comfort and inspire them, bring them together, and make the enemy — as microscopic as it may be — seem insignificant by comparison to the might of a great nation united. The Coronavirus gave Donald Trump the opportunity to be a Lincoln, and he has instead used it as an opportunity to nudge James Buchanan and Andrew Johnson out of last place in the power rankings of American presidents.
https://michaelaustin-47141.medium.com/the-bully-pulpit-how-presidents-lead-the-nation-and-how-trump-blew-it-2260561c4553
['Michael Austin']
2020-07-16 23:05:56.968000+00:00
['Politics', 'Persuasion', 'Trump', 'Teddy Roosevelt', 'Coronavirus']
How I learned what “digital transformation” truly means after waving 👋 to a couple Gs
I just finished my class at MIT Sloan on digital transformation. Not teaching it, of course. Taking it. You see, learning is a passion of mine. And I find that some things are harder to learn than others unless you really get your head in the game, which is especially hard when you have a full-time job. I reasoned that if paying for a gym membership can be a motivator to go to the gym, I needed to locate a paid way to learn WTF “digital transformation” means. I’d come into contact with it via the management consultant world and it was bugging me that I hadn’t found a crisp meaning. So I splurged on an MIT-branded experience on the topic, instead of just watching a few YouTube videos like I usually try to do. Free videos and blog posts weren’t working for some reason. There are plenty of “digital transformation” haterz who believe that it is a lot of consultant blah-blah-blah, and ultimately a waste of time and money. That depends whether you value the difference between a painkiller (immediate impact) versus a vitamin (possible impact). IMHO digital transformation is definitely a vitamin, and yet a lot of smart people reach for it while knowing that it’s not a painkiller. They have the foresight and accountability knowing that like any carefully selected vitamin that is procured with scrutiny, taking it regularly might make you live healthier. And even better, it might help you live longer. So I dove into my 10-week course hoping to find the underlying chemical structure of the elusive digital transformation vitamin. What I quickly discovered was that I was disturbed by how the course was delivered. As a former agent of the MIT enterprise (I was a tenured professor at MIT many moons ago), I felt slightly aghast at the quality of the 3rd party commercial e-learning platform MIT used, which obviously sat atop an open source system I knew from two decades ago. 
The obvious seams of the computational machinery made it difficult to ignore how each module’s content was inconsistent in structure and format. In traditional “design” terms, this is the problem when so-called “form” (how it feels) and “content” (what it does) are not aligned. I felt like I personally paid for a set of training sessions at a fancy gym, and later found that each workout machine was inconsistent with the rest. Furthermore, when I peeled the Peloton sticker off of one of the machines, I discovered that it was actually a stiff, old Boeing 727 seat from my childhood memories. But I definitely got my money’s worth from this bad experience. Because the main thesis of digital transformation is that it consists of two different kinds of activities for a business: Digitizing: Taking what is an existing process or activity and making it electronic to create cost efficiencies. Digitalizing: Realizing an entirely new digital business that can take on the likes of Amazon or Netflix to generate new topline revenue. My MIT class had been digitized over a few years, as was evident from the discontinuity of the content. Over those iterations, it was able to eventually take over the job of the existing paper-based content. The course content had gotten digitized, and now could be easily reused and re-provisioned for other purposes. Then, it was likely moved to be sold on a third-party platform under the MIT brand, to bring in an entirely new revenue stream for MIT. Moving their content to this next phase was their act of digitalization. Me, the consumer, then contributed to MIT’s new topline revenue growth through Sloan’s digitalizing what had been digitized in the past. 
The only glorious “bug” in this approach, however, is that I’m unlikely to ever pay for another poorly digitalized MIT Sloan product. And although I know that the course I took wasn’t in their Lexus line — it was more of a regular Toyota model, costing only a couple (versus ten) thousand dollars — it made me think how difficult it will be for higher education to digitalize what they do without a much better understanding of user experience. Digitizing is easy, but succeeding in digitalization these days requires care and attention to the design of an experience if you are intending to charge a premium. Oddly enough, I feel like my money was entirely well spent. Maybe I should teach a course on it in the future. But, a properly digitalized one :+).
https://johnmaeda.medium.com/how-i-learned-what-digital-transformation-truly-means-after-waving-to-a-couple-gs-3be62c4cef7a
['John Maeda']
2020-12-24 00:30:39.201000+00:00
['Design', 'Digital Transformation', 'Online Learning']
Don’t Become Someone Else’s Collateral Damage on Their Way to Self-Discovery
Know your worth. REALLY know your worth. Photo by Kelli McClintock on Unsplash At some point in your dating lifetime, you learn what it’s like to have someone else as your “rebound” relationship or what it feels like to be the rebound. A rebound is a relationship (even if a short fling) with someone else shortly after the ending of a more significant relationship, instead of using that time to heal and get in the right headspace for someone new. It’s not always a bad thing. The expression “the best way to get over someone is to get under someone” has its merits. It’s easy to obsess over an ex; inserting someone new in the picture can break the cycle of repetitive thoughts. This works best when both people agree to the no-strings-attached condition of sex. It’s a whole other ballgame when you’re the collateral damage on their way to self-discovery. Some people are by nature selfish and self-serving. During a time of rebound and self-discovery, no one is immune from becoming somewhat selfish. It’s necessary to learn who we are at that moment, who we want to become, what we need to fulfill our goals in our next life chapter. It’s understandable to miss the signs of someone else in self-discovery mode. Naturally selfish people aren’t stealthy in their ways, and they’re easy to spot after a few interactions. Someone in self-discovery mode isn’t aware that they’re selfish. It’s often a temporary state, and they’re otherwise good people. After all, the first rule of self-awareness is becoming aware that you aren’t self-aware. While it’s great that they’re going through a personal metamorphosis, that doesn’t mean you need to stick around and take the hard knocks on their path. How do you do that? By knowing your worth. So cheesy. I cringed as I typed it. Hear me out. 
If you are on your path of self-discovery, it’s difficult to understand your boundaries, your interests, your goals, and everything else that would make for a sequel to Pixar’s Inside Out.

Know your worth: basic human rights

If you struggle to identify your worth, first think of the rules you would apply to every human, such as:

not tolerating physical abuse.
not tolerating blatant verbal abuse (such as “you’re stupid” or “you’re so fat”).
not tolerating theft or damage to your belongings.
not getting involved with someone who was incarcerated for any of the above reasons.

The rules you apply to every human are typically very blatant and black or white. They’re easy to identify. Unfortunately, for many people, this is a boundary that is difficult to enforce.

Know your worth: the bare minimum of dating

Next, think of the rules that you would apply to all humans but that are more nuanced. They’re more specific to you, but most would agree with them, such as:

not accepting proclaimed “harmless” flirting.
not accepting offhand insulting comments (“careful with those cookies, you’ll end up even bigger”).
them going off on a “guys’ weekend” or “girls’ weekend” with loads of alcohol and partying without checking in if they promised to do so.
keeping you hidden from their family and friends.
mooching off you financially because they make poor financial choices.

Know your worth: the rules specific to you

The hardest list to determine your worth is the list of acceptable behaviors applicable to you. No one else can make this list for you. To identify them, think of things in the past that made you feel bad but you couldn’t quite articulate why, or you felt irrational so you brushed them off. In particular, these are things that people do when they’re going through a life change, a major relationship change, and an all-around path to self-discovery. Since the list is unique to you, here are some of my rules I’m putting on my list.
https://medium.com/change-becomes-you/dont-become-someone-else-s-collateral-damage-on-their-way-to-self-discovery-64352d77f1cc
['Jennifer M. Wilson']
2020-12-29 19:32:45.911000+00:00
['Self Improvement', 'Sex', 'Mental Health', 'Love', 'Relationships']
Our World Needs a Reset Button or Maybe a Rewrite
With a desire to connect faces to this insidious disease, people were asked to participate in a project. Everyone who participated was asked the same questions, and their responses were developed into poems sharing their experiences.
https://medium.com/faces-of-coronavirus/our-world-needs-a-reset-button-or-maybe-a-rewrite-fcf7cf288daa
['Brenda Mahler']
2020-12-26 20:15:25.580000+00:00
['Poetry', 'Faces Of Covid', 'Covid 19', 'Reflections', 'Coronavirus']
A Medium Writer Sent My Family a Box Full of Rainbows and Unicorns
My daughter and I received a package in the mail yesterday from my writer friend Shannon Ashley, wrapped up with sparkly pink duct tape. I looked inside and immediately started to cry. You see, I’m in a bunch of Facebook groups for Medium writers. In one of the smaller groups, writer Kyrie Gray leads a weekly Rant Thread, where for 24 hours we can rant all we want — about work or anything else. Every week, people pour out their hearts, and every week, other Medium writers reply to the rants with kindness, empathy, and understanding. This alone is heartwarming, but this week, things got next-level for me. The Rant Thread came at a good time. I haven’t had a super-viral article in a while, and I was feeling overwhelmed and afraid I wouldn’t be able to keep building on my writing successes. And as I wrote my rant, I arrived at the root of it all: money. Money worries underscore all the other worries. I commented that, since my kid started kindergarten, she’s somehow ripped holes in the knees of every pair of her size 4T pants. The weather’s getting colder, and I was sending my child to school in pants with holes in them. I was ready to prioritize getting her new pants, but in the meantime, I really needed to rant, to cry about how sick I am of working so hard and still being low-income. Shannon, with her heart of gold, got my address over PM, and it was only a couple days later we received her package full of unicorns and rainbows. Shannon’s daughter’s the same age as mine, so I expected a box of hand-me-downs — which would’ve been incredible on its own— but it was clear Shannon and her daughter specifically chose some things just for my daughter. I felt her kindness and her friendship in the thoughtful items in this box: a rainbow hair bow and unicorn dress my daughter received just in time for School Picture Day; a Snoopy book about being “brave and kind,” the very words my daughter’s teacher used for her when she named her Star Student of the Week in kindergarten. 
Shannon’s daughter made my daughter a drawing of flowers and smiling butterflies. I can’t wait for my daughter to mail some art back. The amazing package we received! (Photo credit: Author) And Shannon even included a gift card, with a note that it was for me to use on myself. She’s written about how difficult it is as a mom to prioritize yourself, and I feel this so much. I used part of the gift card immediately, and bought myself new underwear for the first time since my daughter was 1. I also treated myself to a little milk steamer so I can make soy lattes at home. Too often I hear Conservative talking points about how “handouts” will make people lazy. Honestly, I was just the recipient of a whole lot of generosity, and right now, I feel the opposite of lazy. I feel inspired. I feel grateful. Dare I say it, I feel #blessed! I want to believe that I deserve nice stuff. And more than anything, I want to pay it forward. Regardless of how much money we make or don’t make, we can find ways to give to others. And when we make more money, that can be an opportunity to find even more ways to be generous. “Our friends are so generous,” I gushed. “I want to be generous like that too.” “Me too!” my daughter said. “I want to be generous like that too!”
https://medium.com/warm-hearts/a-medium-writer-sent-my-family-a-box-full-of-rainbows-and-unicorns-1d6da7721539
['Darcy Reeder']
2019-10-09 23:27:26.612000+00:00
['Life Lessons', 'Writing', 'Kindness', 'Gratitude', 'Empathy']
Handling Categorical Features using Encoding Techniques in Python
In this post we are going to discuss categorical features in machine learning and two of the most effective methods for handling them.

Categorical Features

In machine learning, features can be broadly classified into two main categories:

Numerical features (age, price, area etc.)
Categorical features (gender, marital status, occupation etc.)

All features that are composed of a certain number of categories are known as categorical features. Categorical features can be classified into two major types: nominal and ordinal. Nominal features are those having two or more categories, with no specific order. For example, if Gender has two values, male and female, it can be considered a nominal feature. Ordinal features, on the other hand, have categories in a particular order. For example, if we have a feature named Level with values high, medium and low, it will be considered an ordinal feature, because the order matters here.

Handling Categorical Features

So the first question that arises is: why do we need to handle categorical features separately? Why don’t we simply pass them as inputs to our model just like the numerical features? Well, the answer is that unlike humans, machines, and in this case machine learning models specifically, do not understand text data. We need to convert the text values into relevant numbers before feeding them into our model. This process of converting categories into numbers is called encoding. Two of the most effective and widely used encoding methods are:

Label Encoding
One Hot Encoding

Label Encoding

Label encoding is the process of assigning a numeric label to each category in the feature. If N is the number of categories, each category value will be assigned a unique number from 0 to N-1. 
If we have a feature named Colors, having values red, blue, green and yellow, it can be converted to a numeric mapping as follows:

```
Category : Label
"red"    : 0
"blue"   : 1
"green"  : 2
"yellow" : 3
```

Note: As we can see here, the labels produced for the categories are not normalized, i.e. not between 0 and 1. Because of this limitation, label encoding should not be used with linear models, where the magnitude of features plays an important role. Since tree-based algorithms do not need feature normalization, label encoding can be easily used with models such as decision trees, random forests, XGBoost and LightGBM. We can implement label encoding using scikit-learn’s LabelEncoder class. We will see the implementation in the next section.

One Hot Encoding

The limitation of label encoding can be overcome by binarizing the categories, i.e. representing them using only 0s and 1s. Here we represent each category by a vector of size N, where N is the number of categories in that feature. Each vector has a single 1, and all other values are 0; hence it is called one-hot encoding. Suppose we have a column named temperature with four values: Freezing, Cold, Warm and Hot. Each category will be represented as follows:

```
Category   Encoded vector
Freezing   0 0 0 1
Cold       0 0 1 0
Warm       0 1 0 0
Hot        1 0 0 0
```

As you can see here, each category is represented by a vector of length 4, since 4 is the number of unique categories in the feature. Each vector has a single 1, and all other values are 0. Since one-hot encoding generates normalized features, it can be used with linear models such as linear regression and logistic regression. Now that we have a basic understanding of both encoding techniques, let’s look at the Python implementation of both for a better understanding.

Implementation in Python

Before applying encoding to the categorical features, it is important to handle NaN values. A simple and effective way is to treat NaN values as a separate category. 
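As a tiny sketch of the NaN-as-a-category idea (the column values here are illustrative, not from the article's dataset): fill the missing values with a placeholder string such as 'NONE' before encoding, so the encoder treats missingness as just another category. One detail worth noting is the order of operations in pandas:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder

s = pd.Series(["red", np.nan, "blue", "red", np.nan])

# fillna must run BEFORE astype(str); converting to str first would turn
# NaN into the literal string 'nan', and fillna would find nothing to fill.
filled = s.fillna("NONE").astype(str)

# NaN now shows up as its own category alongside the real ones
labels = LabelEncoder().fit_transform(filled)
print(sorted(set(filled)))  # ['NONE', 'blue', 'red']
```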
By doing this, we make sure that we are not losing any important information. So the steps that we follow while handling categorical features are:

1. Fill the NaN values with a new category (such as NONE).
2. Convert categories to numeric values, using label encoding for tree-based models and one-hot encoding for linear models.
3. Build the model using the numeric, encoded features.

We will be using a public dataset named Cat in the Dat on Kaggle. Link here. This is a binary classification problem that consists of lots of categorical features. First we will create 5 folds for validation using the StratifiedKFold class in scikit-learn. This variant of KFold is used to ensure the same ratio of target classes in each fold.

```python
import pandas as pd
from sklearn import model_selection

# read training data
df = pd.read_csv('../input/train.csv')

# create a column for kfolds and fill it with -1
df['kfold'] = -1

# randomize the rows
df = df.sample(frac=1).reset_index(drop=True)

# fetch the targets
y = df['target'].values

# initiate the StratifiedKFold class from model_selection
kf = model_selection.StratifiedKFold(n_splits=5)

# fill the new kfold column
for f, (t_, v_) in enumerate(kf.split(X=df, y=y)):
    df.loc[v_, 'kfold'] = f

# save the new csv with the kfold column
df.to_csv('../input/train_folds.csv', index=False)
```

Label Encoding

Next, let’s define a function to run training and validation on each fold. We will be using LabelEncoder with a random forest for this example. 
```python
import pandas as pd
from sklearn import ensemble
from sklearn import metrics
from sklearn import preprocessing

def run(fold):
    # read training data with folds
    df = pd.read_csv('../input/train_folds.csv')

    # get all relevant features, excluding the id, target and kfold columns
    features = [feature for feature in df.columns
                if feature not in ['id', 'target', 'kfold']]

    # fill all NaN values with NONE (fillna must come before astype(str),
    # otherwise NaN becomes the string 'nan' and is never filled)
    for feature in features:
        df.loc[:, feature] = df[feature].fillna('NONE').astype(str)

    # label encode the features
    for feature in features:
        # initiate a LabelEncoder for each feature
        lbl = preprocessing.LabelEncoder()
        # fit the label encoder
        lbl.fit(df[feature])
        # transform the data
        df.loc[:, feature] = lbl.transform(df[feature])

    # get training data using folds
    df_train = df[df['kfold'] != fold].reset_index(drop=True)

    # get validation data using folds
    df_valid = df[df['kfold'] == fold].reset_index(drop=True)

    # get training features
    X_train = df_train[features].values

    # get validation features
    X_valid = df_valid[features].values

    # initiate the random forest model
    model = ensemble.RandomForestClassifier(n_jobs=-1)

    # fit the model on the training data
    model.fit(X_train, df_train['target'].values)

    # predict the probabilities on the validation data
    valid_preds = model.predict_proba(X_valid)[:, 1]

    # get the ROC AUC score
    auc = metrics.roc_auc_score(df_valid['target'].values, valid_preds)

    # print the AUC score for each fold
    print(f'Fold ={fold}, AUC = {auc}')
```

Finally, let’s call this run method for each fold:

```python
if __name__ == '__main__':
    for fold_ in range(5):
        run(fold_)
```

Executing this code will give an output like below:

```
Fold =0, AUC = 0.7163772816343564
Fold =1, AUC = 0.7136206487083182
Fold =2, AUC = 0.7171801474337066
Fold =3, AUC = 0.7158938474390842
Fold =4, AUC = 0.7186004462481813
```

One thing to note here is that we have not done any hyperparameter tuning on the random forest model. You can tweak the parameters to improve the validation accuracy. 
Another thing to mention in the above code is that we are using the ROC AUC score as the validation metric. This is because the target values are skewed, and metrics such as accuracy will not give us correct results.

One Hot Encoding

Now let’s see the implementation of one-hot encoding with logistic regression. Below is the modified version of the run method for this approach:

```python
import pandas as pd
from sklearn import linear_model
from sklearn import metrics
from sklearn import preprocessing

def run(fold):
    # read training data with folds
    df = pd.read_csv('../input/train_folds.csv')

    # get all relevant features, excluding the id, target and kfold columns
    features = [feature for feature in df.columns
                if feature not in ['id', 'target', 'kfold']]

    # fill all NaN values with NONE (fillna must come before astype(str))
    for feature in features:
        df.loc[:, feature] = df[feature].fillna('NONE').astype(str)

    # get training data using folds
    df_train = df[df['kfold'] != fold].reset_index(drop=True)

    # get validation data using folds
    df_valid = df[df['kfold'] == fold].reset_index(drop=True)

    # initiate OneHotEncoder from scikit-learn
    ohe = preprocessing.OneHotEncoder()

    # fit ohe on the training + validation features
    full_data = pd.concat([df_train[features], df_valid[features]], axis=0)
    ohe.fit(full_data[features])

    # transform the training data
    X_train = ohe.transform(df_train[features])

    # transform the validation data
    X_valid = ohe.transform(df_valid[features])

    # initiate logistic regression
    model = linear_model.LogisticRegression()

    # fit the model on the training data
    model.fit(X_train, df_train['target'].values)

    # predict the probabilities on the validation data
    valid_preds = model.predict_proba(X_valid)[:, 1]

    # get the ROC AUC score
    auc = metrics.roc_auc_score(df_valid['target'].values, valid_preds)

    # print the AUC score for each fold
    print(f'Fold ={fold}, AUC = {auc}')
```

The method to loop over all folds remains the same. 
```python
if __name__ == '__main__':
    for fold_ in range(5):
        run(fold_)
```

The output of this code will look like below:

```
Fold =0, AUC = 0.7872262099199782
Fold =1, AUC = 0.7856877416085041
Fold =2, AUC = 0.7850910855093067
Fold =3, AUC = 0.7842966593706009
Fold =4, AUC = 0.7887711592194284
```

As we can see here, a simple logistic regression gives us decent results just by applying the right feature encoding for the categorical features. One difference to note in the implementation of the two methods is that LabelEncoder has to be fitted on each categorical feature separately, while OneHotEncoder can be fitted on all the features together.

Conclusion

In this blog I have discussed what categorical features are in machine learning and why it is important to handle them. We also covered the two most important methods to encode categorical features into numeric form, along with their implementations. I hope I have helped you get a better understanding of the topics covered here. Please let me know your feedback in the comments, and give the post a clap if you liked it. Here is the link to my LinkedIn profile if you wish to connect. Thanks for reading. :)
https://medium.com/analytics-vidhya/handling-categorical-features-using-encoding-techniques-in-python-7b46207111ca
['Sawan Saxena']
2020-09-07 08:18:30.496000+00:00
['Feature Engineering', 'Data Science', 'Python', 'Machine Learning']
Can LSD Cure Mental Illnesses?
The psychedelic-assisted treatment explained. For a guy who suffers from anxiety, LSD has crossed my mind multiple times as an alternative treatment. I assume many of you have considered it as well. Psychedelics such as DMT, LSD, psilocybin, and ayahuasca are slowly but steadily being introduced into the psychotherapy-treatment world. For some, it may seem drastic and certainly illegal, but for others, it may be the only way through. Sometimes we are ready to do what it takes to cure mental illness. Many of you would relate. In the past years, micro-dosing on psychedelics has become more or less of a trend. Due to the substances’ properties and effects, their usage greatly enhances the effects of ordinary psychotherapy as we know it. What exactly is psychedelic-assisted therapy? As you may have already guessed, in simple terms, it’s therapy practice that involves the small-dose usage of psychedelic drugs such as DMT, LSD, MDMA, psilocybin (the active compound produced by mushrooms), and mescaline (a hallucinogen obtained from the peyote cactus). This kind of research, although more widely known today, goes as far back as the ’50s and ’60s. In the late ’60s, however, such drugs were widely restricted. The usage of psychedelics was prohibited when it concerned medical and psychiatric research. However, since the late ’90s and early 2000s, interest in and the practice of such therapies have been renewed. Due to the advances in technology and ultimately health care, professionals were able to collect and interpret more data regarding the topic. This all led to a better understanding of the drugs’ implications. In 2014, LSD and psilocybin were listed as Schedule I controlled drugs, and researchers could thus find out more about them. 
Although a huge effort has been put into better understanding psychedelic-assisted therapies, their effects are still considered to be strongly dependent on each individual and the environment the drug is used in. Nevertheless, numerous studies argue that they do have positive effects, rather than none.
https://medium.com/illumination/can-lsd-cure-mental-illnesses-b96f929b5ab0
['Viktor Marchev']
2020-06-11 21:42:47.612000+00:00
['Therapy', 'Mental Health', 'Psychedelics', 'Lifestyle', 'Drugs']
Introduction to Convolutional Neural Networks for Self Driving Cars
Convolutional Neural Networks (CNN) Let’s talk about Convolutional Networks, or ConvNets. ConvNets are neural networks that share their parameters across space. Imagine we have an image. It can be represented as a flat pancake. It has a width, a height, and because we typically have red, green, and blue channels, it also has a depth. In this instance, depth is three. That’s our input. Now, imagine taking a small patch of this image, and running a tiny neural network on it, with K outputs. Convolution Operation (Image by author) Now, let’s slide that little neural network across the image without changing the weights. Just slide it across horizontally and vertically, like we’re painting it with a brush. On the output, we’ve drawn another image. Patch over a dog image (Image by author) It’s got a different width, a different height. And more importantly, it’s got a different depth. Instead of just R, G, and B, now we have an output that’s got many colored channels, K of them. This operation is called a convolution. Shifted Patch over a dog image (Image by author) If our patch size were the size of the whole image, it would be no different from a regular layer of a neural network. But because we have this small patch instead, we have many fewer weights, and they are shared across space. A convolutional neural network is going to basically be a deep network where, instead of having stacks of matrix multiply layers, we’re going to have stacks of convolutions. The general idea is that they will form a pyramid. At the bottom, we have this big image, but a very shallow one: just R, G, and B. We’re going to apply convolutions that progressively squeeze the spatial dimensions while increasing the depth, which corresponds roughly to the semantic complexity of the representation. At the top, we can put our classifier. We have a representation where all the spatial information has been squeezed out, and only parameters that map to the content of the image remain. So that’s the general idea. 
If we’re going to implement this, there are lots of little details to get right, and a fair bit of lingo to get used to. We already know the concepts of patch and depth. Patches are sometimes called kernels. Each pancake in our stack is called a feature map. Another term that we need to know is stride. It’s the number of pixels by which we shift each time we move our filter. A stride of one makes the output roughly the same size as the input. A stride of two means it’s about half the size. I say roughly because it depends a bit on what we do at the edge of our image. Either we don’t go past the edge, which is often called valid padding as a shortcut, or we go off the edge and pad with zeros in such a way that the output map size is exactly the same size as the input map, which is often called same padding as a shortcut. Hierarchy Diagram showing detection at various layers of a convolutional neural network (Image by author) That’s it; we can build a simple convolutional neural network with just this. Stack up our convolutions, which thankfully we don’t have to implement ourselves. Then use strides to reduce the dimensionality and increase the depth of our network, layer after layer. And once we have a deep and narrow representation, connect the whole thing to a few regular, fully connected layers, and we’re ready to train our classifier. You might wonder what happens to training, and to the chain rule in particular, when you use shared weights like this. Nothing really happens; the math just works. You just add up the derivatives for all the possible locations on the image.
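The stride and padding arithmetic described above can be sketched in a few lines (the function name is mine, purely for illustration): with valid padding the output shrinks because the patch stays inside the edges, while same padding keeps a stride-1 output at the input size and a stride-2 output at about half.

```python
def conv_output_size(size, kernel, stride, padding):
    """Spatial output size along one dimension of a convolution.

    'valid': no zero padding, the patch never goes past the edge.
    'same' : zero-pad so that a stride of 1 preserves the input size.
    """
    if padding == "valid":
        return (size - kernel) // stride + 1
    if padding == "same":
        return -(-size // stride)  # ceil(size / stride)
    raise ValueError(f"unknown padding: {padding}")

# A 28x28 input with a 3x3 patch (kernel):
print(conv_output_size(28, 3, 1, "same"))   # 28: same size as the input
print(conv_output_size(28, 3, 2, "same"))   # 14: about half the size
print(conv_output_size(28, 3, 1, "valid"))  # 26: the patch stays inside the edges
```

This is the same bookkeeping every deep learning framework does internally when it tells you the shape of each feature map in the pyramid.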
https://towardsdatascience.com/introduction-to-convolutional-neural-networks-for-self-driving-cars-c61e4224508
['Prateek Sawhney']
2020-11-10 07:20:25.317000+00:00
['Self Driving Cars', 'Data Science', 'Deep Learning', 'Artificial Intelligence', 'Machine Learning']
Make art. Don’t ask permission.
A tweet showed up in my feed recently that said, “Should you release an album no one cares about?” In short, no you shouldn’t. YOU should care about it. AND THAT’S EVERYONE WHO MATTERS. I loathe marketing click-bait bullshit like this. Listen, artists: marketers have nothing to say to you in this regard. The link on this particular tweet went to the author’s blog where he entreats you to contact him to get started on your marketing plan. “It’s important for you and your band to look ahead and start planning for an album people will buy,” he blogs. No, it isn’t. Make the art. Get all thoughts of people buying it out of your head. It will kill your creativity and set distracting expectations. Marketing plans are all well and good but they should never prevent or even delay you from releasing your art. I know far too many musicians caught up in the belief that they should delay a release or even delay working on an album until the time is right. All it ever does is prevent anyone from hearing your work. I’ve seen bands break up with great albums “in the can.” I’ve watched nobody folk musicians constantly rework a record to try to make it sell. It’s depressing and antithetical to the act of creation. My advice here won’t help you make money. But 99 times out of 100, neither will the marketer’s. The world needs artists. Artists make their art public. Marketing comes on the back end. It should always be subservient to the art and NEVER the primary concern. Today, we’re flooded with brand managers, marketing managers, and just plain managers who want to tell musicians how to make their music before they make it. These marketers of art misunderstand their role and how art works. Here’s how to be an artist: make art publicly. It’s the public part that differentiates an artist from a hobbyist. You gotta hang your balls out there. 
We’ve always had managers telling musicians when to make an album, whether or not to leave it in the can while the manager tries to sell it, when to play shows, where to play shows, and far too many other things about the actual making of art. These people who supposedly sell the art we make also want to tell us what sells and therefore what to make. I had a licensor tell me once that a song seemed “specific” as if that was cause enough not to consider it. Not “too specific,” just “specific,” as if I shouldn’t write about actual things. The overall message from all of these marketers and managers to musicians is: make the music we already know how to sell. Musicians don’t need to learn to make music that sells. Marketers need to learn to sell the music we make. I used to work for a camera company that made crazy cutting-edge cinema cameras. Never did we in marketing tell the inventors of the camera we couldn’t sell it because it wasn’t like the other cameras on the market. We had meetings and we brainstormed and we figured out how to sell it. It was hard work. It was uncertain work. We didn’t know what would sell already. It would be great to see the marketing dynamic in music change to embrace that uncertainty. Nothing could make it more evident that the business side of the music business is awash with a bunch of philistines than the fact that these managers and marketers aren’t seeking interesting music to work with but attempting to reel in the same old shit. All that said, I realize there are managers and marketers and labels and such doing good work and finding interesting music. They’re just quiet about it. If you’re doing interesting art, hopefully they’ll find you. Just don’t listen to the guys telling you how to do your work. They’re trying not to focus on their own.
https://medium.com/hey-todd-a/make-art-don-t-ask-permission-93f1d0362d1f
['Todd A']
2017-09-12 20:32:18.654000+00:00
['Making', 'Music', 'Art']
Meet the Seven HAX Founders Named in This Year’s Forbes 30 Under 30 List
Meet the Seven HAX Founders Named in This Year’s Forbes 30 Under 30 List Forbes released its annual 30 under 30 list, detailing innovators making waves across the globe. Seven HAX founders made the cut this year, working on everything from making mines safer, to making fashion more sustainable. Lucas Frye, 28, and Joseph Varikooty, 24 | Cofounders of Amber Agriculture Lucas Frye, who received a bachelor’s in agricultural economics and an MBA from the University of Illinois at Urbana-Champaign, and Joey Varikooty, a 2018 Thiel Fellow who dropped out of the same school, founded Amber Agriculture to help farmers monitor and manage their crops with sensors. Its core technology is a wireless, kernel-like sensor that can flow with grain throughout the supply chain. By detecting moisture or incorrect temperatures, Chicago-based Amber (which has raised $2 million in funding) helps farmers protect grain from spoilage and capture high prices. HAX first invested in Amber Agriculture in 2017. Kevin Martin, 26 | Cofounder of Unspun Kevin Martin, who has a mechanical engineering degree from the University of Colorado Boulder, is cofounder of unspun, a tech-enabled apparel company that uses 3D scans to manufacture perfect-fitting 3D woven jeans for consumers on-demand, eliminating pattern-cutting waste and unsold inventory. He and his cofounders, who are over 30, took unspun through the HAX accelerator and have raised nearly $5 million from groups including The National Science Foundation, Fifty Years, and The Mills Fabrica. HAX first invested in Unspun in 2018. Noah Hill, 23 and Daniel Weinstein, 25 | Cofounders of Lura Health More than 1,000 health conditions are tracked by saliva tests. With Lura Health, Daniel Weinstein and Noah Hill have developed wireless, intra-oral, tooth-mounted sensors to enable users to track key diagnostics in oral health with saliva. 
Its first products focus on tracking acid to monitor tooth decay, while future sensors may be able to monitor allergens, electrolytes, hormones and disease markers. HAX first invested in Lura Health in 2019. Matthew Gubasta, 25 and Shelby Yee, 26 | Cofounders of Rockmass Technologies RockMass CEO Shelby Yee, a geological engineer, founded RockMass with Matthew Gubasta, a fellow student at Canada’s Queen’s University, in 2016 to digitize the mining industry. They’re streamlining and improving data collected at underground mines, starting with rock mechanics and making historically dangerous mine work safer and more efficient. Some of the largest global mining companies have adopted the Toronto-based startup’s flagship platform, called the Axis Mapper, and they’ve raised $3.1 million from SOSV, the Canadian government and others. HAX first invested in Rockmass Technologies in 2017.
https://medium.com/sosv-accelerator-vc/meet-the-seven-hax-founders-named-in-this-years-forbes-30-under-30-list-f881b02e1424
[]
2020-12-09 22:31:18.167000+00:00
['Manufacturing', 'Agtech', 'Apparel', 'Venture Capital', 'Startup']
cfgmgmtcamp 2020: Kapitan presentation by Ricardo Amaro
#father #kapitan #devops. Head of SRE at Synthace. Ex DeepMind. Ex Google. Opinions are my own.
https://medium.com/kapitan-blog/cfgmgmtcamp-2020-kapitan-presentation-by-ricardo-amaro-80888c893916
['Alessandro De Maria']
2020-02-05 08:24:38.225000+00:00
['Kubernetes', 'Cfgmgmtcamp', 'Kapitan', 'Helm']
Unique dashboards for external customers with Google Cloud
How BigQuery and Data Studio can enable you to build the dashboards your managers requested right now Publiq, a client of ours, wanted us to help them share their privacy-sensitive data insights with a number of cities in Belgium. The data describes how citizens participate in many events organized all over the country. It is stored in BigQuery and the cities each have an account with publiq. Each city should only see a dashboard based on “their data”, even though the entire dataset is driving Publiq’s publicly facing websites, such as uitinvlaanderen.be (for more info on this client project, see the note at the bottom). This is a common scenario. While dashboards are widely used within organizations, it should also be possible to share them with third parties. Upstream supply chain partners may want to know the current workload of a factory to decide whether or not to ship more raw materials to the partner. Public organizations may want to share their performance metrics with the public, and companies may want to share their KPIs with regulatory bodies. Google’s G Suite offers a flexible way of sharing documents, as every college student knows. What makes it cool, though, is the fact that this system also works for their data warehouse and reporting tools. Yes, with GCP it is as easy to share complex dashboards and metrics of a company as it is for a college student to share her lecture notes with her fellow students. Let’s look at how to get something like this set up with BigQuery and Data Studio in under an hour. For my example, I will use the GitHub public dataset, available on BigQuery. Below is a diagram that shows the target state. BigQuery: Let’s say I want to create a dashboard for specific repositories for a number of clients. In my case, I want to show torvalds/linux to my work email and apple/swift to my personal email.
To define the mapping, I have a table which simply maps email addresses to repository names, as I want to expose certain repositories’ statistics to certain individuals. Below is a SQL query that feeds a view which resides in a separate dataset. BigQuery only offers IAM rules at the dataset level, hence the separate dataset. This dataset is now shared with all of the clients, using either their own Google credentials or credentials which we create for them using the GCP Identity Platform. The SESSION_USER() function in SQL is where the magic happens: it returns the current session user’s email address. Next, we need to share the “public dataset” (containing the saved view) with the users that we want to be able to read from the view, and the view itself needs to be authorized to read from the “private dataset”, i.e. our customer data table. Google Data Studio: Hopping over to Data Studio, we create a new data source and select the view. In the source settings, it is important to select “Viewer’s credentials” so that the viewer’s credentials are used when accessing the report’s underlying data. In the explorer, you can now build your dashboard as you please. I decided to build a small dashboard that shows the different contributions of the top contributors of a repository (based on commit count). When I open the report in my two accounts (one part of our organization, one my personal account), I can see the two different repositories’ data and nothing else.
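The SQL itself isn't reproduced in the article. As a minimal sketch of what such a SESSION_USER()-filtered authorized view could look like (all table and column names below are hypothetical, not taken from the project), here it is as a query string, together with a plain-Python simulation of the same row-filtering semantics:

```python
# Hypothetical authorized view: join the email -> repository mapping table
# against the data and keep only rows mapped to the current session user.
AUTHORIZED_VIEW_SQL = """
CREATE OR REPLACE VIEW `public_dataset.repo_stats` AS
SELECT d.repo_name, d.author, d.commit_count
FROM `private_dataset.github_commits` AS d
JOIN `private_dataset.access_map` AS m
  ON d.repo_name = m.repo_name
WHERE m.email = SESSION_USER()
"""

def visible_rows(session_user, access_map, data):
    """Simulate the view's filtering: a user only sees rows for repos
    their email is mapped to."""
    allowed = {repo for email, repo in access_map if email == session_user}
    return [row for row in data if row["repo_name"] in allowed]

# Toy mapping table and data table (emails invented for illustration).
access_map = [("work@example.com", "torvalds/linux"),
              ("personal@example.com", "apple/swift")]
data = [{"repo_name": "torvalds/linux", "commit_count": 42},
        {"repo_name": "apple/swift", "commit_count": 17}]

print(visible_rows("work@example.com", access_map, data))
```

Because SESSION_USER() is evaluated per viewer, the same shared view serves every client, and no per-client dashboard copies are needed.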
https://medium.com/datamindedbe/unique-dashboards-for-external-customers-with-google-cloud-f5e1bcf947a
[]
2019-10-23 14:14:16.557000+00:00
['Google Cloud Platform', 'Big Data', 'Reporting', 'Bigquery', 'Data Warehouse']
The Best Books for Machine Learning Beginners
The Best Books for Machine Learning Beginners 4 of the best Machine Learning books out there and why they will make you the next big Data Scientist. Machine Learning is the hottest topic currently in the atmosphere of data science. With its unique applications, startups and larger companies are hiring more and more data scientists to implement these models to get a better understanding of their business and their customers. For anyone that is getting involved in data science, Data Scientist is the job we all dream of, but you have to know your stuff, and that starts with understanding the concepts of Machine Learning! Therefore, I’m going to give you the 4 best Machine Learning books to read if you’re a beginner looking to become a Data Scientist in the future, or just interested to learn more about the topic. Photo by Kimberly Farmer from Unsplash 1. Introduction to Machine Learning with Python: A Guide for Data Scientists. If you’re just getting started with Machine Learning, this is a must-read. It focuses mostly on the Scikit-Learn library with an in-depth tour of some of the most useful methods in Machine Learning — classification, regression, a bit of clustering, PCA, and all the different ways to measure the outcome of your model. Introduction to Machine Learning with Python is easy to understand and will explain thoroughly all the necessary steps to create a successful machine-learning application with Python. The unanimous book to read for those starting machine learning. Link: https://amzn.to/3b6ygSZ 2. The Hundred-Page Machine Learning Book The quintessential book for those looking to learn machine learning fast. This book can be read in one night and has all the information you would need to create your own models with machine learning. It is clear, concise, and probably the best machine learning book I’ve read with respect to number of pages and quality of content. Link: https://amzn.to/3fqMr8y 3.
Python Machine Learning This is a fantastic introductory book in machine learning with Python. It provides enough background about the theory of each (covered) technique followed by its Python code. One nice thing about the book is that it starts implementing Neural Networks from scratch, giving the reader the chance of truly understanding the key underlying techniques such as back-propagation. Even further, the book presents an efficient (and professional) way of coding in Python, key to data science. I strongly recommend it to those with a moderate level of understanding of machine learning principles and Python. Link: https://amzn.to/2L604fM 4. Hands-On Machine Learning with Scikit-Learn and TensorFlow The author, Aurélien Géron, does a great job of explaining different concepts with the prime focus on the practical implementation of Scikit-Learn and TensorFlow. The book is split in two, with the first half covering Scikit-Learn; it is a good mix of practical and theoretical. The Scikit-Learn section is a great reference and has nice detailed explanations with good references for further reading to deepen your knowledge. The second half dives deep into deep learning with TensorFlow, the next step to understanding Machine Learning to its fullest. Deep learning is explained using the easy-to-learn Keras library combined with the power of TensorFlow. Link: https://amzn.to/3fq1foc
https://towardsdatascience.com/the-best-books-for-machine-learning-beginners-b2317d1ee27c
['Christopher Zita']
2020-05-08 17:23:11.001000+00:00
['Programming', 'Data Science', 'Deep Learning', 'Python', 'Machine Learning']
I Discovered It’s Never Too Late to Invest in Bitcoin If You Understand It
The “Too Late” Mindset Is Toxic The too late mindset isn’t good for your psychology. It makes you feel like shit. When people think about bitcoin I quickly see them become disappointed. They wish they had listened earlier. Or they wish they had bought at least one bitcoin when the price was lower. The problem with bitcoin is you can never buy at a good price. Why? Bitcoin keeps being the best-performing asset each year, and is now the best performing asset of the decade. Think about that. This is the superpower of bitcoin. It doesn’t matter when you buy it. What matters is that you do eventually. Buy low, sell high We’re taught to buy assets at a low price and sell them at a higher price. What if this was industrial age factory worker thinking? The best time to buy bitcoin is when you decide to. The reason is, bitcoin has a fixed supply of coins and predictable code built into it that tells you its future. You don’t have to be a genius to understand bitcoin’s future price. A Dutch institutional investor known as Plan B created a stock to flow model which helps investors understand where the price is going based on the hard-coded, predictable monetary policy of bitcoin. Once you understand stock to flow, then you’ll understand the overwhelming dollar value of guaranteed scarcity. Forget the price. What’s the opportunity? I want to shake up your thinking. The opportunity of bitcoin has nothing to do with price. Bitcoin solves a problem. The current value of the circulating supply of bitcoin is roughly $355 billion. The current value of the world’s supply of gold is about $9 trillion. The current value of the world’s supply of derivatives (a popular financial instrument people dump money into) is $640 trillion. The current value of the world’s supply of stocks is $95 trillion. If only a small amount of money moves out of one of these asset classes and into bitcoin then the price will skyrocket. 
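Stock-to-flow itself is just a ratio: existing stock divided by annual new production, with higher values meaning scarcer assets. A toy calculation (the supply figures below are round, illustrative numbers, not live market data):

```python
# Stock-to-flow ratio: existing stock divided by annual new supply.
# Figures are illustrative round numbers, not live data.
def stock_to_flow(stock, annual_flow):
    return stock / annual_flow

# ~18.5M BTC in circulation, ~900 new BTC mined per day post-2020 halving
btc_s2f = stock_to_flow(18_500_000, 900 * 365)
# ~197k tonnes of gold above ground, ~3k tonnes mined per year
gold_s2f = stock_to_flow(197_000, 3_000)

print(round(btc_s2f), round(gold_s2f))  # prints: 56 66
```

The point the model makes is that bitcoin's flow halves on a fixed schedule, so its ratio is programmed to rise, while gold's flow is whatever humans manage to dig up.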
Investment firms are buying bitcoin, not because they want to, but because they have to. There are very few places you can put your money and get a return. There are even fewer places you can put your money to store it safely in the event of a downturn or recession. I don’t personally care about the bitcoin price. I care about the problem it solves and whether a financial return is likely over the long term. Long-term thinking The worst way to invest as an everyday person is for the short term. One of my early investments in bitcoin was short term. I bought a lot and then sold it when the price crashed in 2017. That short-term thinking has meant I’m working a few years longer than I need to. I would be retired if I still had all the bitcoin I owned in 2017. I’m okay with that. My investor psychology was weak back then. I needed to see a few 50% drops to learn my lesson. The March 2020 COVID crash was certainly a good test, and I passed without selling any bitcoin. Thinking about the assets you invest in over the long term helps reduce your stress levels. You’re less worried about whether it’s “too late” or “is now a good time” and more focused on doing your research and understanding what you’re buying. This phrase sums it up better than I could: Time in the market beats timing the market. Euphoria Is Coming It’s going to be bad for your psychology. When euphoria hits an asset like bitcoin, people lose their minds and throw a wall of money at it. How do I know? I lived through the euphoric phase of bitcoin back in 2017. People I worked with went from IT professionals to wild gamblers in a day. Every computer screen in my office at one point was showing the bitcoin price. One guy in the office mortgaged his house to buy bitcoin. The problem ended up being rather unusual: people didn’t buy bitcoin. They bought shitcoins that were worth less than $1, thinking they’d get rich. That’s what euphoria can do to people’s thinking.
It isn’t pleasant to watch — and it will happen again. The use-case for bitcoin was born during the pandemic. This is going to create ongoing euphoria. With Citibank throwing numbers out like $318,000 USD per bitcoin, and the price going over the all-time high of $20,000 USD per coin, people are guaranteed to collapse on the ground with a case of FOMO. What can you do about it? Stay calm. Your life won’t end if you don’t buy bitcoin. Take your time and research it. Start by reading the book, “The Bitcoin Standard” and go from there. Patience always wins when it comes to investing. The Risk of Bitcoin You Must Understand Nothing you put your money in is perfect. The biggest risk of bitcoin has always been the potential for it to be banned. In 2017, the risk was enormous. Over time that risk has decreased as more and more adopters of the technology — like PayPal and payments company, Square — have offered the product to their customers. The likelihood of bitcoin getting banned is almost zero. What will happen to bitcoin, though, is regulation. Regulation just means you will have to be identified when you buy and sell bitcoin. Unless you’re a drug dealer breaking the law, regulation is good for your investments. With more regulation comes more opportunities for legitimate wall street firms to invest their money in bitcoin and use it for what it was made for: a store of value. The other risk is your coins get stolen. The old school way to fix that issue is to purchase a Trezor or Ledger to store your coins in. I’m going to give you the non-technical definition of a hardware wallet: A hardware wallet takes your bitcoin off the internet. When your coins are off the internet nobody can access them except you. This is how you stay safe and prevent a Russian hacker from stealing your coins while you drink your early morning coffee by the jacuzzi. What’s the Difference Between Gold and Bitcoin? Scarcity is the one attribute both bitcoin and gold have. 
The issue is, the scarcity of gold is controlled by humans. The scarcity of bitcoin is controlled by nobody and can’t be changed. Every year the gold supply increases by roughly 2%, as new gold is discovered and mined. With oil, humans invented a technological advancement called fracking, which became popular in 1949. Humans could invent more ways to mine gold. We could also throw lots of money at gold to mine more of it. The final option is to mine asteroids for gold. Doomsdayers say that if NASA decided to bring an asteroid home with them from space it would “destroy commodity prices and cause the world’s economy — worth $75.5 trillion — to collapse.” I think this is ridiculous. Humans may mine asteroids for gold but it’s probably a long way away and incredibly difficult to do. I don’t see the downfall of gold coming anytime soon. But I do see the 2% increase in the supply of gold every year as an unfortunate feature of the metal. The speculation over banks manipulating the gold price is another factor to consider when thinking about the difference between bitcoin and gold. Bitcoin is different to gold. It’s understood by digitally savvy millennials and doesn’t require a safe to store it in or muscly arms to hold it. There is no one owner of bitcoin. You can’t rip Zucks out of bed when he’s butt-naked and drag him to court so you can sue him over your grievances with Facebook. There isn’t a human or company to sue when it comes to bitcoin. Bitcoin’s supply is fixed. Bitcoin technology is owned by nobody. Bitcoin’s inflation rate is locked and guarded by code. Bitcoin has no ego because no billionaire owns it. Bitcoin is an egoless, non-existent company. The game-changer with bitcoin is the network is owned by the users. Users own and give the technology value. Venture capitalists, bankers, and politicians don’t get to give bitcoin a permission slip or approve its use.
Every person from every country in the world gets to vote on bitcoin with their smartphone by buying it. How to Think About Bitcoin Safely Ric Edelman, founder of RIA Digital Assets Council, makes the idea of investing in bitcoin easy. As a financial advisor, he recommends his clients consider a 1% allocation to bitcoin. If you have $10,000 in savings, then consider a $100 investment in bitcoin. He says if you had invested 1% of your money in bitcoin in 2020, then it would have lifted everything you’d invested in by 25%. Now my approach is different. I’ve spent more than five years understanding how bitcoin works so my allocation is a lot more aggressive than 1%. But if you have no idea and want to access the benefits of bitcoin, then a 1% investment is hard to look past. A slow approach you can steal from the pros Iconic hedge fund managers like Raoul Pal use a dead-simple approach: dollar-cost averaging. Here’s how to do it. Step 1: Decide on how much bitcoin you’re going to buy every month. Let’s say you decide on $100. Step 2: In the first month, invest $100 into bitcoin. The price might be at an all-time high. Step 3: In the second month, invest $100 into bitcoin. The price might be at an all-time low. Step 4: In the third month, invest $100 into bitcoin. The price might be going up again. Step 5: Keep investing $100 every month until you’ve reached your desired investment amount (let’s say your desired amount is 1% of all your money). With this approach you can smooth out the price you get into bitcoin at without stressing and losing your mind. Some months the price will be high, and some months the price will be low. Just like a savings account, you don’t care. You lock your money away and let it be protected by code, not humans. It Isn’t Too Late to Buy Bitcoin. It’s not time to panic. It’s time to digitally upgrade your financial education.
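The five steps above amount to averaging your cost basis over time. A toy illustration (the prices are invented):

```python
# Dollar-cost averaging sketch: invest a fixed amount every month,
# whatever the price. Prices below are invented for illustration.
def dca(monthly_amount, prices):
    coins = sum(monthly_amount / p for p in prices)  # cheap months buy more coins
    invested = monthly_amount * len(prices)
    return invested, coins, invested / coins  # total in, coins bought, avg cost/coin

invested, coins, avg_cost = dca(100, [20_000, 10_000, 16_000])
print(invested, round(avg_cost, 2))  # prints: 300 14117.65
```

Because the fixed $100 buys more coins in the cheap months, the average cost per coin (the harmonic mean of the prices, about $14,118 here) ends up below the simple average price of $15,333 — which is the whole point of not trying to time the market.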
Physical cash in your wallet and nuggets of gold you take inside with a wheelbarrow is the pre-digital financial world. The world of money is a long way behind advances in technology. Bitcoin is trying to catch the financial world up with the digital narrative, so we can use the phone in our pocket to go about our day and store our money without its value being secretly eroded away and taxed by inflation, by the pinstripe suit club. If you’ve been thinking about bitcoin, do your research. Go down the rabbit hole. Avoid FOMO. Stay away from the hype. Understand the problem it solves, intimately. Get to know bitcoin like you would a new lover you invite into your bed to shag. The best time to buy bitcoin is when you understand it and see its value.
https://medium.com/the-ascent/i-discovered-its-never-too-late-to-invest-in-bitcoin-if-you-understand-it-d0848141144b
['Tim Denning']
2020-12-29 17:32:40.865000+00:00
['Self Improvement', 'Psychology', 'Cryptocurrency', 'Lifestyle', 'Money']
Welcoming Our First Engineer in Europe — Domen Grabec!
We are excited to announce that Domen Grabec has joined the Origin Protocol engineering team! Domen is based in Slovenia and will be joining our growing team of engineers that are contributing to Origin Protocol around the world. Having a truly distributed team comes with collaboration challenges. The culture of “public by default” and open-collaboration helps tremendously. At Origin, we have all our discussions in public in Discord, publish our engineering meeting notes to the world, track our progress on a public project board, and invite anyone to join our weekly engineering calls. Domen started as an extended team member last year and since then has had an impressive impact on several different projects. His first project was to build search capabilities for our DApp. Then Domen took on an initiative to improve our DApp’s mobile user experience. Currently, he is busy implementing growth initiatives that we’ll be announcing soon. These contributions clearly demonstrated his abilities to move fast and operate at any level in our stack. Domen has solid and diverse industry experience under his belt. Prior to Origin, he worked at Celtra, an advertising IT company, where he was part of the team responsible for a Big Data pipeline using Scala and Spark clusters to process hundreds of GB of data daily. Prior to that, he partnered with two Slovenian Chess Grandmasters to create an iOS and Android app where users can play simultaneous chess with Grandmasters. He is also famous for contributing to a significant number of people’s loss of productivity when he implemented and released an iOS game called Pigs! During his free time, Domen enjoys dancing Salsa. He is part of an amateur group that performs a few times a year on stage. Rumor is that once, during warmup before a performance, Domen managed to pull off doing five consecutive perfect pirouettes!…but alas nobody was there to witness his prowess. 
Domen is an avid hiker; his home country of Slovenia provides an abundance of mountain trails. In the summertime, he wakeboards, practicing 720 degree jumps. In his own words, here is why Domen is excited to join our team: “Origin Protocol ticks a lot of the boxes for me when it comes to what is important in a company. It is an open source codebase and has a transparent collaboration process. The team is highly skilled and enjoys working on tough challenges.” We’re looking forward to working with Domen on building amazing products at Origin Protocol. Please join me in welcoming him to our team! Learn more about Origin:
https://medium.com/originprotocol/welcoming-our-first-engineer-in-europe-domen-grabec-44bc203aa72c
['Franck Chastagnol']
2020-01-17 19:11:53.463000+00:00
['Team', 'Ethereum', 'Blockchain', 'Cryptocurrency', 'Startup']
NASA Puts Its Space Rock Collection Online
NASA Puts Its Space Rock Collection Online By Ryan Whitwam, ExtremeTech, Dec 18 NASA has a wealth of space rocks, known more properly as astromaterials. Some of them came directly from the surface of the moon during the Apollo era, and others were discovered after falling to Earth in Antarctica. Now, you can check out NASA’s collection in extreme detail using the new Astromaterials 3D Explorer site. Not only do you get high-resolution photos of the surface, but you can also peer inside the rocks using X-ray computed tomography. Getting these precious samples ready for their online debut wasn’t as simple as snapping a few photos and slapping some HTML together. First, photographer Erika Blumenfeld captured images of each rock from at least 240 different angles — that’s enough to produce what NASA calls a “research-grade 3D model.” Because the samples are so rare, the entire photoshoot takes place with the sample inside a sealed nitrogen cabinet, which is itself inside a cleanroom. NASA even includes data on the camera, which was a Hasselblad H4D-60, a $30,000 camera with a 60MP resolution. The HC 120 II lens Blumenfeld used costs about $2,500 all by itself. The visual data is measured in tens of gigabytes for each rock. Following the photoshoot, each of the two dozen astromaterial samples was scanned using X-ray computed tomography (CT). This allows researchers (and now you) to examine the internal structure of the rocks without damaging them. The Explorer site integrates the 360-degree 3D mesh from the photos with the internal data to produce a virtual representation you can examine from any angle. On the site, you can choose between the Apollo moon rocks and the Antarctic collection. The Apollo rocks were collected by hand on the surface of the moon and returned to Earth in the cargo hold of the Apollo command module. The Antarctic objects plummeted through the atmosphere and impacted the frozen wasteland.
These dark rocks are easy to spot against the white backdrop, making them easier to find than asteroids in other regions. The asteroid samples are organized by their origin — there are typical C, K, and M-type asteroids, as well as some that came from Mars, Vesta, and the moon. Transdisciplinary artist Erika Blumenfeld photographed each rock at NASA’s cleanroom laboratory inside nitrogen cabinets, NASA said, imaging the rock’s surface at 240 angles using a high-resolution camera to achieve the detailed surface texture seen in Astromaterials 3D’s exterior 3D models. (Credit: NASA) The Explorer interface includes a 3D model you can observe from all sides with different lighting and measurement tools — there’s even a 3D anaglyph mode. The CT scan data lets you isolate slices from the interior for closer examination. You can even view a high-resolution image of each individual X-ray slice. The Astromaterials 3D Explorer also links to the uncompressed TIFF images for each rock, clocking in at a few gigabytes per rock. NASA says this is just the start of an ongoing project. It wants to get as many of its samples as possible digitized so everyone from students to researchers will be able to examine these space treasures in detail. The agency says more samples will be live by summer 2021.
https://medium.com/extremetech-access/nasa-puts-its-space-rock-collection-online-29cd350dd8f8
[]
2020-12-18 13:08:57.163000+00:00
['Moon', 'NASA', 'Asteroids', 'Space', 'Science']
We Watch the Coronavirus Wreak Havoc…
Pandemics are horrible and scary. They spread chaos and wreak havoc, forcing entire cities down on their knees; they spread across continents and soon engulf the world. This is true, and I’m not here to downplay their obvious importance. But like all villains, they also have another devious super-power: Hogging the spotlight. Let me turn off that light switch for just a minute, and start with a statistic you may or may not be aware of. This is current information taken directly from the CDC, regarding the 2019–2020 flu season. You read that right; 8,200 reported deaths so far, this season alone. What’s more, these statistics only include the United States and Puerto Rico. I shudder to think of what those numbers might look like if they comprised the entire world. And what about this report from CBC News: Shouldn’t numbers like that be considered an epidemic as well? Or does the virus have to be ‘new’ to get that title? It kind of makes you wonder. Well, it makes me wonder. Now, I’m not by any means attempting to downplay this situation; I myself have friends who are being directly affected by this outbreak in China. I feel sincere sorrow for anyone who’s lost a loved one to this virus, and I hope and pray that anyone currently infected will be able to pull through and fully recover. I’m simply trying to bring attention to the fact that as bad as the Coronavirus is, it isn’t the only virus killing people. But I guess that just like a new starlet on the silver screen, these things stop being news-worthy and tend to fade in importance as the years pass, and we begin to turn a blind eye to what’s happening in our own backyards. H1N1 and his little friends are no longer the new kids in town, so even though they’re still taking thousands of lives every year, they’re relegated to the back of the classroom. 
It saddens me to know that in this age of modern medicine where we’re blessed to have a cure for nearly every disease, so many people are still dying of complications from viruses we consider ‘common’, such as influenza. Should something really be considered common if it has the power to take so many lives year after year?
https://medium.com/the-partnered-pen/we-watch-the-coronavirus-wreak-havoc-555887247eab
['Edie Tuck']
2020-01-28 20:13:47+00:00
['Pandemic', 'Illness', 'Virus', 'Coronavirus', 'Vaccines']
New Book Releases: December 1, 2020
I’M STAYING HERE, Marco Balzano, Jill Foulston (Translator). As fascism overtakes Europe in the early 20th century, a German woman in Italy must make impossible decisions when her young daughter goes missing. Publishers Weekly calls it “quietly devastating.” Bookshop.

ORDESA, Manuel Vilas, Andrea Rosenberg (Translator). From the acclaimed Spanish novelist and poet, a work of autofiction about a man looking back at the shattered pieces of his life. Bookshop.

REST AND BE THANKFUL, Emma Glass. A pediatric nurse approaches burnout in a novel the Star Tribune calls “a pungent piece of writing, tactile and sensory to the extreme.” Bookshop.

PERESTROIKA IN PARIS, Jane Smiley. A horse wanders out of her stable and discovers Paris, becoming friends with an elegant dog and a young boy who lives with his 100-year-old great-grandmother. From the Pulitzer Prize-winning author of A Thousand Acres. Bookshop.

THE ARCTIC FURY, Greer Macallister. After several search and rescue groups have failed to find Lady Jane Franklin’s husband and his expedition, Franklin decides to send twelve women to search for him instead. But one year later, the leader of the all-female search and rescue group is on trial for murder, and only five of the women she traveled with will stand behind her. What happened out there on the ice? Bookshop.

I use affiliate links for the wonderful Bookshop.org. If you click and buy through my link, I get a small commission, but it would be equally wonderful if you ordered these books through your local bookstore. :)
https://angelalashbrook.medium.com/new-book-releases-december-1-2020-398ffe3e7ac2
['Angela Lashbrook']
2020-11-30 15:11:12.066000+00:00
['Literature', 'Culture', 'Reading', 'Fiction', 'Books']
Predicting the Outcome of League of Legends Ranked Games in Champ Select via Machine Learning
Edit: you can now test it here: https://dodge-that.herokuapp.com/

League of Legends is a multiplayer online battle arena (MOBA) game where two teams of five players compete to destroy an enemy building called a Nexus. Before each game, the players select their champions and, once everything is locked, enter the game. This machine learning project aims to answer the following: is this game likely to be a win, or should I dodge, given my teammates’ picks and recent games? I built a Python app to retrieve the data, then explored and modelled this question with the scikit-learn library over more than 2,000 ranked games. Players can now test the ML model directly at: https://dodge-that.herokuapp.com/

Acquire

My choice of features is key to the way I want to acquire data. Features selected:

- Winrate of the player on his champion pick (planned)
- Number of games played with this champion (planned)
- Off-role metric: is the player playing his main role?
- KDA over the last 5 games: is the player a feeder or a challenger?
- Number of wins over the last 2 games: a metric checking whether the player is in the right state of mind
- Number of wins over the last 5 games: a metric checking the overall trend in the player’s mindset
- Experience of this player with his pick
- MMR, which is his rank on the competitive ladder (like Elo in chess)
- Number of games played during the last week: a metric checking whether the player is casual

Now, where to get all this information?

- The Riot API stores the information regarding the games, player experience, and more
- OP.GG stores the information about player profiles

Given the architecture of the Riot API, which is, to put it diplomatically, a challenge, I chained several API calls per game in order to get the correct information.
Here is the logic behind it:

1. Get the player name
2. Get the list of games he played during the last week
3. Loop over this list and get information about the result, champion played, and list of teammates
4. For each teammate, get their 5 last games, results, and experience (30 API calls per game)
5. Store all the information in MongoDB
6. Select a random teammate and restart the whole process

Once all sets of games have been retrieved, and as it is not possible with the Riot API to get the winrate of a champion per player, we do some web scraping on OP.GG to get the stats for each player. This allows us to get the winrate for each champion per player. We end up with two collections in our MongoDB: Matches and Winrate.

Prepare

The raw data from the Riot API and Champions GG needs some tweaks. Our input data that will feed our models consists of the features presented above.

- MMR is a metric aggregated according to the ladder of the ranked system. The ranking system in League of Legends is split into several tiers and divisions. From Iron to Challenger, each tier contains 4 divisions; in order to be promoted to a superior division, a player needs to earn 100 LP (League Points). To get the estimated MMR, I gave 100 points for each division and added the remaining LP.
- Last2: as we retrieve the history of games played for each player, we count the number of wins over the last 2 games
- Last5: same as above, over the last 5 games
- Winrate is scraped directly from OP.GG. If we did not retrieve any information for a specific champion, the default winrate is 50%
- Nbplayed is scraped directly from OP.GG. If we did not retrieve any information for a specific champion, the default number of games played is 5

Explore

Let’s take a quick look at our features. Thanks to the violin plot from seaborn, we can visualise the effect of our features.
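The MMR estimate just described (100 points per division climbed, plus the remaining LP) can be sketched in Python. The tier and division tables below are my own assumptions for illustration, and the sketch ignores that Master tier and above have no divisions in the live game:

```python
# Hedged sketch of the MMR estimate described above: 100 points per
# division climbed from Iron IV, plus the player's current LP.
TIERS = ["IRON", "BRONZE", "SILVER", "GOLD", "PLATINUM",
         "DIAMOND", "MASTER", "GRANDMASTER", "CHALLENGER"]
DIVISIONS = {"IV": 0, "III": 1, "II": 2, "I": 3}  # IV is the lowest

def estimate_mmr(tier: str, division: str, lp: int) -> int:
    """Estimate MMR as 100 points per division passed, plus current LP."""
    divisions_climbed = TIERS.index(tier.upper()) * 4 + DIVISIONS[division.upper()]
    return divisions_climbed * 100 + lp
```

For example, a Gold II player at 40 LP has climbed 3 tiers of 4 divisions plus 2 more divisions (14 in total), giving an estimate of 1440.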
Quick assumptions from these charts:

- It seems that the average experience of the team is positively correlated with the chance of winning the game
- The better the MMR, the more likely we are to win the game
- We have a slightly better chance of winning if our teammates play a lot
- Paradoxically, if our team has a high winrate over the last 5 games, we are less likely to win. This is a phenomenon most players can feel: once you are on a winning streak, Riot’s matchmaking often places you against stronger opponents, resulting in harder matches to win.

Model

Time to play with the scikit-learn library! We will iterate over 4 different models:

- Logistic Regression (LR): used to model the probability of a certain class or event. The model works by predicting the probability that Y belongs to a particular category, first fitting the data to a linear regression model, which is then passed to the sigmoid function. If the probability is higher than a predetermined threshold (usually P(Yes) > 0.5), the model predicts Yes (1).
- Random Forest Classifier (RFC): we combine many classifiers/models into one predictive model (ensemble learning), and the most-predicted class is the chosen one, using the idea of the wisdom of crowds. In an RFC we add bagging, decorrelating the different trees: during every split, we do not consider the full set of p predictors but just a random sample.
- Gradient Boosting Classifier (GBC): GBC is also based on ensemble learning, but the idea is that we improve the model by using information from previously constructed classifiers. We can tune this slow-learner model with 3 parameters: the number of classifiers B, the interaction depth d, and the learning parameter lambda.
- Multi-layer Perceptron (MLPC): a neural network using at least three layers of nodes (input layer / hidden layer / output layer).

Training our models

I chose to set our training set at 70% of the total population. As we have continuous features such as experience that can take high values, we will also scale our data.
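A self-contained sketch of this kind of comparison is shown below. It uses synthetic data as a stand-in for the real match features, and the parameter grids, iteration counts, and model subset are my own illustrative assumptions, not the author's actual search spaces:

```python
# Illustrative sketch (not the author's exact code): 70/30 split,
# scaling, then a small randomized hyperparameter search per model.
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Synthetic stand-in for the real match features (experience, MMR, ...).
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.7, random_state=0)

# Scale on the training set only, as in the article.
scaler = StandardScaler().fit(Xtr)
Xtr, Xte = scaler.transform(Xtr), scaler.transform(Xte)

# Assumed parameter grids for three of the four models compared above.
candidates = {
    "LR":  (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1.0, 10.0]}),
    "RFC": (RandomForestClassifier(random_state=0),
            {"n_estimators": [50, 100], "max_depth": [None, 5]}),
    "GBC": (GradientBoostingClassifier(random_state=0),
            {"n_estimators": [50, 100], "learning_rate": [0.05, 0.1]}),
}

results = {}
for name, (model, params) in candidates.items():
    search = RandomizedSearchCV(model, params, n_iter=4, cv=3, random_state=0)
    search.fit(Xtr, ytr)
    results[name] = search.best_estimator_.score(Xte, yte)  # test accuracy
```

Each entry of `results` holds the held-out accuracy of the best configuration found for that model, which is the comparison the article reports.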
A helper to sample, split, and scale the data (imports added for completeness):

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RandomizedSearchCV

def getdata(matrix, gamenumber=50, split_percent=0.7, scale=True):
    df = pd.DataFrame(data=matrix)  # full data
    df = df.sample(gamenumber)      # randomly select a subset of the data
    data = df.values
    np.random.shuffle(data)
    breakpoint = int(split_percent * len(data))
    Xtrain = data[0:breakpoint, :-1]
    Xtest = data[breakpoint:, :-1]
    Ytrain = data[0:breakpoint, -1].T.astype(float)
    Ytest = data[breakpoint:, -1].T.astype(float)
    if scale:
        xScaler = StandardScaler()
        xScaler.fit(Xtrain)  # fit on the training set only
        Xtrain = xScaler.transform(Xtrain)
        Xtest = xScaler.transform(Xtest)
    return Xtrain, Xtest, Ytrain, Ytest

We will use RandomizedSearchCV to tune our hyperparameters while controlling the number of search iterations, lowering our processing time. RandomizedSearchCV can be a good compromise when we do not have enough computing power to run all simulations.

def train_modelRSC(model, params, Xtrain, Ytrain):
    randomizedsearch = RandomizedSearchCV(estimator=model,
                                          param_distributions=params,
                                          n_jobs=-1)
    randomizedsearch.fit(Xtrain, Ytrain)
    return (randomizedsearch.cv_results_["mean_test_score"],
            randomizedsearch.best_params_,
            randomizedsearch.cv_results_["std_test_score"])

Checking the results and learning curves: the overall accuracy of the different models is surprisingly high. We notice that Logistic Regression is not a good model for this specific task. RFC/GBC/MLPC give about the same accuracy, around 70%+, and hit this threshold at about 2,000 samples. If we focus a bit more on the loss per model, we can see that the training loss decreases consistently to a point of stability. The test loss also decreases to a point of stability, with a small gap from the training loss for Logistic Regression. This seems to indicate a good fit. Finally, we can see our overall metric scores below.

Conclusion

Congratulations, we just won against the Riot matchmaking algorithm! The model is able to predict the outcome of the game around 73%+ of the time, while there is usually a 50/50 chance of winning the game.

What’s next?
In order to remove potential bias, I did not use the winrate feature (default 50% for everyone for now). But this item could be a key feature in increasing the accuracy of the model. The only problem is that I have to retrieve the winrate on the champion played before the player plays the game, which means going live on data collection. I created a website (Python/Flask) so you can run the model while you are in champ select, significantly increase your overall winrate, and avoid wasting 30 minutes in a game that you will most likely lose! It is available at: https://dodge-that.herokuapp.com/

Log:
08/11/2020: Added the web app to run the model: https://dodge-that.herokuapp.com/
03/11/2020: Added 2 features
https://ffaheroes.medium.com/predicting-outcome-of-league-of-legend-ranked-games-in-champselect-via-machine-learning-8f9d86669eae
['Dan Cabrol']
2020-11-09 08:21:28.896000+00:00
['Data Science', 'Python', 'Machine Learning', 'Mongodb', 'League of Legends']
The Grim Reaper Rebrand Project
The Grim Reaper Rebrand Project

Even Death needs help with her marketing

Image by GraphicMama-team from Pixabay

The smell of coffee pulled me out of my stupor as I walked into the bustling café from the pounding rain. I scanned the room, and I saw her. We had only corresponded via email, but there could be no doubt that she was the one I was supposed to meet. She was sitting in the far corner at a table near the large window facing the busy street, her long, flowing black cloak spilling onto the floor from her seat, her white skeletal features protruding from the mantle, and a long scythe propped up against the window behind her. Everyone else in the café was giving her table a wide berth. I approached her table. There were already two drinks there. She was sipping a chai tea. A tall paper cup sat in front of the seat opposite her. “Ms. Reaper?” I asked. “Please call me Grim. All my friends do.” With a graceful sweep of her hand, she motioned for me to sit and join her. “I got you a large hot chocolate, extra hot, with extra whipped cream. That’s your usual, right?” she said. I smiled awkwardly and nodded. I began to sip, letting the hot chocolate finish the job of awakening my senses that the aroma of the coffee had started. I was about to ask how she knew what I liked to drink when she started to speak again. “You know, drinking that every day isn’t healthy.” I stopped drinking and stared at her. She seemed genuinely concerned about my health. “Right.” I was a little unnerved. “Tell me about your branding problem, Grim.” It was time to get down to business. Grim put on a pair of hipster black, horn-rimmed glasses and pulled out a yellow legal pad from her cloak. “I’m tired of people being afraid of me. I can’t get anyone to talk to me. Everyone acts like I’m trying to kill them.” She smiled slightly, the way one does after telling a subtle joke, hoping someone else gets it. I chuckled. “But, isn’t that what you do? 
I mean, isn’t that why they call you Death?” Her smile disappeared into a straight line. “I could do without the mansplaining, Jason.” I gulped. She was right. “I’m sorry. You’re right. So, you’re looking to rebrand yourself away from being ‘Death’?” “Yes.” Her nod was curt. I was going to have to work hard to earn her trust back. I was losing control of the meeting. “Are you going to continue doing the same — uh — work?” “Yes. I will continue reaping souls. That’s what I do. It’s who I am. I can’t imagine having some kind of desk job. I love that each day is different. I am passionate about my work.” I scratched my chin and pulled my own yellow legal pad out of my bag. “It looks like you came with some notes. Do you mind telling me more about what direction you were thinking?” I asked. Finally, her face brightened, and she smiled again. “Absolutely! I wrote down all the different parts of my job. Would it help if I went over them with you?” “Definitely,” I said. She cleared her throat, which caused several nearby patrons to jump a little. “I visit the living in the moments when it is time for their mortal life to end. I guide their souls from this world into the next realm. I explain the setup there and help them get checked in. Because people are dying all the time, I can choose to be immune to time — it may make more sense if you think of me as being able to stop time — although strictly speaking, that’s not true.” I was writing furiously. “What do you use the scythe for?” “Oh, that functions as my ID badge to get into the next realm. We don’t have biometrics yet. The IT department is so antiquated.” “That’s it? It’s just so that you can get into work?” “Mostly. I also use it to reap stubborn souls who are unwilling to let go of their mortal attachments,” she said. I absentmindedly drank more of my hot chocolate. She looked at me over the top of her glasses. I thought I saw pity in her eyes. 
“You know, a good walk every day wouldn’t kill you.” Now she sounded preachy. “Right. I really should start doing that,” I said. Grim glared at me — her gaze piercing my soul. She shook her arm free from her cloak. It was covered with dozens of different wristwatches. She looked at one near her elbow and back at me. “According to this, you must be serious about finally getting a little more exercise. I’m so glad.” I froze for a second, letting the implications of what she had just said wash over me. “So, how can you help me?” she asked. “Let me make sure I understand what you are after,” I said. “You want to continue reaping souls. But, you don’t want people to be afraid of you. You think the whole ‘Death’ moniker is not a good brand, and you want something different. Is that right?” “Yes. You see, I don’t cause death. I show up when death is imminent. Mortals are awful with the entire correlation-never-implies-causation thing.” I made a few notes on my pad. “Have you thought about starting a blog?” I asked. “I’d love that! But, I’m not sure what to call it. All the good domains are taken. That’s another reason I need a new brand!” I nodded. I scanned my notes. “It’s okay if you can’t help. I won’t hold it against you,” she said. She was balancing her narrow white chin on top of her fist. She was a strange combination of stoic and vulnerable. I had never wanted to help a client so badly. “Let’s not give up yet. Rebranding efforts can take a lot of time,” I said. I looked over my notes once again, hoping that something would jump out at me. “I want to help people. I’m not in the death business, you know. I’m really in the soul business,” she said. I looked up. “What about building a brand around being the Uber for souls?” I said. She glared at me over the top of her glasses again. “That’s the best you can do? You’re suggesting I become another ‘Uber of’ something else? That doesn’t strike you as tired?” She was right; that was garbage. 
“I’m just brainstorming here,” I said. “How about you call yourself ‘Soul Escort’?” “No.” “Soul Train — wait, never mind,” I said. “Tell me again what you do after you collect a soul?” “I take them into the next realm and help them get checked in. I answer their questions as best as I can and I…” I jumped up from the table. “You’re a consultant!” “Well, kind of,” she said. “You’re an afterlife transition consultant. No! You’re the Afterlife Transition Consultant!” Her mouth gradually transformed from a thoughtful grimace into a broad grin and then into a full smile. “Yes! Yes! That’s it!” She was now standing too. We hugged. I felt a chill and became a little sick to my stomach. She pulled away. “Sorry about that. I got a little carried away!” I smiled as I sat back down. “Thank you so much. I’m so excited. I’ve got to run and do some reaping. But, later tonight, I’m going to keep working on this. You don’t do logos, do you?” “No. I only do writing and brainstorming,” I said. “Okay. No problem. I think I know someone. Afterlife transition consultant. I feel so much better about my job now!” We began packing up. “Jason, I’ll be in touch with you later this week,” she said. I felt all the warmth drain out of my face and hands. “I mean about doing content for my new site,” she said. I sighed and smiled. “Sounds great. I’ll talk to you later, Grim.” She smiled at me again, turned around, grabbed her scythe, and disappeared.
https://medium.com/weirdo-poetry/the-grim-reaper-rebrand-project-a5356fcd6d50
['Jason Mcbride']
2020-08-23 18:11:13.799000+00:00
['Satire', 'Writing', 'Short Story', 'Fiction', 'Humor']
2 Elements Every Writer You Love to Read Has in Common
2 Elements Every Writer You Love to Read Has in Common

If you’re looking to grow your readership, these are your targets

Photo by Debby Hudson on Unsplash

“Becoming a writer is about becoming conscious. When you’re conscious and writing from a place of insight and simplicity and real caring about the truth, you have the ability to throw the lights on for your reader.” — Anne Lamott, Bird by Bird

A good writer knows that his or her writing is not just the sharing of thoughts, the processing of emotions, or the observations about particular circumstances and experiences. That’s because a good writer doesn’t think that highly of his or her abilities, nor do they make the mistake of thinking that everything in their lives is interesting and worth reading. Rather, a good writer is committed to keeping in touch with what his or her readers are actively ingesting. Though relevancy is important in this equation, it is not the primary motivation of good writers, seeing as more often than not, a good writer gives his or her readers what they need to read rather than simply what is most popular at any given time. Practically speaking, that means a good writer doesn’t just fixate on the national headlines to drive his or her ideation. Rather, a good writer finds inspiration from the underlying tensions and emotions of the average reader, choosing to go a layer beneath the obvious so as to connect and, at times, challenge the reader in ways that would be beneficial to that reader’s livelihood. It is my belief that many good writers tend to work using a relatively straightforward equation. They may not call each of these categories by the same name, but if you read any good writing, I am certain you will find traces of these elements at play:

10% [Relevancy] + 20% [Personal Interest] + 30% [Relatability] + 40% [Vulnerability] = 100% good writing

The exact percentages should not be dissected, seeing as each is subject to fluctuation on any given piece or article. 
However, what should be noted and attended to is how each category corresponds with the others. As mentioned above, relevancy can be an important element of good writing, and yet it accounts for the lowest percentage within this equation. There should be some connection to the reader and their present circumstance; however, if you’ve found yourself engrossed in Amor Towles’s A Gentleman in Moscow or Delia Owens’s Where the Crawdads Sing, you quickly begin to realize that relevancy isn’t the top factor that indicates great writing. I included personal interest in this equation because it goes without saying that good writing often comes from areas that personally interest the writer. If you were a mariner, you may be interested in both reading and attempting to write something similar to Hemingway’s The Old Man and the Sea. However, if you were an astrologist, an anthropologist, or an activist, you might lack a personal interest in that lane of material and, therefore, your best writing might be done through a different avenue or topic. While both relevancy and personal interest are important, you will notice that my equation for good writing centers around two larger categories, which I believe the majority of what we consider to be good writing has in common:

#1 — relatability
#2 — vulnerability
https://medium.com/swlh/2-elements-every-writer-you-love-to-read-has-in-common-2ba02f40518e
['Jake Daghe']
2020-12-09 03:01:15.539000+00:00
['Writing Tips', 'Literature', 'Writing', 'Reading', 'Writer']
Why We Need Fewer Politicians in Politics
Why We Need Fewer Politicians in Politics

In today’s world, science is political.

Source: created by the author via Canva

As court battles dwindle, there are signs the 2020 US election is finally coming to an end. In many ways, this has been an election unlike any other, framed by the context of a global pandemic, a climate emergency, and unprecedented technological change. These issues were front and centre in campaigns, debates, and discussions — yet the more I watched and read, the clearer it became that a large majority of politicians lack any sort of scientific background. Indeed, aside from COVID-19 and climate change, science policy appears to be missing from campaigns, debates, and party platforms. Even when these issues are discussed, they’re talked about in a very human-centric way. The focus is always on the economy first, society second, and the environment third. Is this surprising? Not quite. In 2017 there were just two members of Congress with PhDs in a Science, Technology, Engineering or Mathematics (STEM) related field. By contrast, 222 held law degrees and a further 18 held only a high school diploma. Considering the lack of scientists in politics, it is hardly shocking that scientific interests are underrepresented. It is worth noting that this is not a US-centric problem. In 2016 Australia’s federal parliament had only 20 politicians with STEM training — that’s just 12% of senators and 7% of MPs. In the UK, the most popular subjects for MPs who won seats in the December 2019 election include Politics (20%), History (13%), and Law (12%). Career politicians are important, and many of them do great work. I am not arguing that every political institution should consist entirely of scientists, nor am I saying that the arts are an inferior background in any way. Rather, I believe that in a society that trusts and needs science more than ever, one must question why so few of our decision-makers come from a STEM or STEM-related field. 
Likewise, we must ask ourselves how we can fix this gap, and what scientists could bring to public policy and all levels of government.

The need for change

Around the world, scientists are becoming increasingly alarmed at the ignorance, apathy, and indifference of political leaders towards urgent issues. In South America, 2019 saw Brazilian President Jair Bolsonaro fire the head of the National Institute for Space Research. This followed tensions over one of the agency’s reports, which stated that deforestation in the Amazon had worsened during his time in power. In Hungary, the government took over around 40 institutes previously run under the Hungarian Academy of Sciences. In my home of sunny Australia, Environmental Science degrees have suffered funding cuts of almost 30%, a move that is predicted to harm the country’s ability to deal with drought, bushfires, coral bleaching, and global warming. On the other side of the planet, it is no secret that White House scientists and Donald Trump have clashed on more than a few occasions. Trump has publicly supported Andrew Wakefield, a British gastroenterologist who is now infamous for his 1998 study linking the measles, mumps, and rubella vaccine to autism. Even though the Lancet retracted his paper, Trump allegedly told him ‘I’m gonna do something about this because I know it happened, I’ve seen it in people who worked with me and their children’. Furthermore, research suggests Trump’s tweets may increase anti-vax attitudes. Likewise, in his time in office Trump delayed nominating a scientific advisor, withdrew the US from the Paris Agreement, and proposed cutting the Environmental Protection Agency’s budget by 25%. Clearly, our leadership must change if we are to have any chance of surviving the climate emergency. 
If anything, the wide and vocal support for the March for Science — an international series of Earth Day rallies and marches — proves that the people want effective and proactive leadership in the fight against climate change.

Why are scientists generally on the sidelines?

The answer is simple — scientists don’t typically run for office. There are a couple of surface explanations for this. Firstly, individuals with STEM degrees may find it difficult to get noticed by parties due to a lack of connections and contacts within the political sphere. Likewise, many may see science — a pure and objective pursuit of truth — as inherently opposed to the world of politics, which is often based on subjectivity. Still, there is a deeper, perhaps more troubling cause for the lack of scientific voices in government: a struggle to communicate effectively. Aside from a few noteworthy personalities such as David Attenborough, Bill Nye, and Neil deGrasse Tyson, scientists are rarely in the limelight. Those who become household names are the exception rather than the rule. In fact, research suggests most Americans can’t name a living scientist. Admittedly, scientists themselves are partially to blame. The uncomfortable truth is that despite the advent of the internet, science remains inaccessible to a significant number of people. Scientists, intelligent as they are, sometimes struggle to communicate their findings in a way that makes sense to the general public. At worst, this leads to the spread of misinformation, fake news, and confusion over what is and isn’t a fact. To illustrate, there are widespread misconceptions regarding topics such as climate change, vaccines, and evolution. Is part of this due to a refusal to listen? Sure, but it is also due to a struggle to communicate. To make matters worse, these communication problems are exacerbated by inaccurate and stereotypical portrayals of scientists in the media. 
Whether it be in TV shows, movies, or books, scientists are often categorised as eccentric and antisocial at best and ‘mad’ and egotistical at worst — not necessarily qualities we would want in our leaders. Lastly, we must of course consider that it takes decades to become a respected scientist and build a career as a scholar. Unlike academia, politics does not provide a great deal of stability. It remains uncertain how one might leave behind the laboratory and enter the world of politics knowing that the stint could last four years or less, especially since entering politics is more or less a death sentence for said science career.

What scientists can offer

Representation of scientific interests

Whether for better or for worse, science is political. From the Institute of Health and Welfare to the Department of Energy to the Department of Conservation, much of scientific research is funded by the government. Every day politicians make decisions regarding where to spend public money. One might choose to spend money on defense rather than a space exploration mission, for example. As such, deciding what gets funding — and subsequently what doesn’t — is a political decision, not a scientific one. For some, the idea that public money should fund any research is political. This brings me to my next point — perhaps the most obvious way in which scientists can further the interests of society is by advocating for funding. This is especially important at a time when science is increasingly undervalued. Data shows we are not even representing our interests well. The UNESCO Institute for Statistics estimates Australia’s R&D spending as a percentage of GDP is approximately 2.2%. In America and the UK, this totals 2.7% and 1.7% respectively. By contrast, Finland, Japan, and South Korea spend 3.2%, 3.4% and 4.3% of their GDP on R&D. Furthermore, scientists bring a unique perspective to the political arena. 
Unlike some politicians, scientists understand the inherent importance of basic research. In addition, they understand how patents, regulations, and other barriers to entry can stifle or encourage competition and innovation. Scientists can often bring a more holistic approach to policymaking, as they understand how budget cuts and funding can make important projects more or less likely to succeed.

Objectivity

Being a voice in favour of funding is not the only benefit scientists can bring to politics. In their day-to-day work, scientists usually rely on evidence and data to analyse the significance, methodology, and conclusions of their work. They are also naturally inquisitive, always interrogating the question and asking ‘why’. By contrast, many career politicians fail to consider all the evidence before making important decisions. If enough scientists take the leap into politics and bring with them these transferable skills, they have the potential to change these dangerous thinking patterns. In the real world, this translates to less wasteful spending and more effective public policy.

Trust

In a disturbing trend, democracies are losing the trust of their citizens. The Edelman Trust Barometer is an annual study that measures the level of trust in institutions across the world. The 2020 report reveals the public across 28 countries has low confidence in government and media, with businesses and non-governmental organisations doing only a little better. In contrast, and perhaps due to the aforementioned objectivity, over 80% of people trust scientists. Bringing more scientists into the political system may just be the key to stopping the erosion of trust and creating stronger democracies.

Final remarks

Scientists in politics can form the basis for stronger evidence-based leadership. A more diversified government is crucial in the context of technological disruption, global warming, and COVID-19. 
After all, by pushing a science agenda, providing objectivity, and increasing trust, scientists have the power to unite societies and change the world.
https://medium.com/climate-conscious/why-we-need-fewer-politicians-in-politics-e0464c8fd0ab
['Sol Kochi Carballo']
2020-12-17 15:50:50.018000+00:00
['Environment', 'Politics', 'STEM', 'Stem Education', 'Advocacy']
Improving how we calculate writer earnings
How we calculate your story’s earnings

From the start, our goal with the Medium Partner Program has been to reward the quality stories that resonate with readers. We recognized a broken status quo in traditional media, where advertisers were the ones indirectly paying for writing. Ultimately, this system rewarded the lowest common denominator, rather than encouraging creative autonomy and truly serving readers. At Medium, we’ve created a different path. Your stories are directly supported by readers’ membership fees, meaning everyone’s incentives are aligned. And you can focus on what matters: writing. In our new calculation, your earnings will have two parts. As always, we plan to continuously improve our model as we learn — and we’ll keep you informed along the way.

1. How long members spend reading your story. As Medium members spend more time reading your story (“member reading time”), you will earn more. We’ve improved our story stats page to show both your story’s daily total member reading time and its daily earnings, so that you can understand how much you are earning over time and where your earnings come from. Note that you will still be paid once per month. When we calculate your story’s earnings, we’ll also include reading time from non-members if they become members within 30 days of reading your story. So we encourage you to share your stories widely!

2. How much of their monthly reading time members spend on your story. By calculating a share of member reading time, we support authors who write about unique topics and connect with loyal readers. For example, if last month a member spent 10% of their monthly reading time on your story, you will receive 10% of their share (a portion of their subscription fee). Imagine an author writes about fly fishing. She finds an audience of fly fishing enthusiasts who subscribe to Medium primarily to read her stories, meaning she receives a strong share of reading time from each of her readers. 
In contrast, an author who writes about a wide variety of topics might receive smaller shares from a broader audience of readers, who also read a variety of other authors. While the generalist will often earn a lot through the first total reading time part, the fly fisher is well equipped to earn through this share part — even with a smaller audience. Reading time Reading time is the time that someone spends actively reading your story. As a user reads, we measure their scrolls and take care to differentiate between short pauses (like lingering over a particularly great passage) and longer breaks (like stepping away to grab a cup of coffee). Reading time incorporates signal from your readers without hurdles. You don’t need to ask your readers to remember to clap, or click, or do anything other than read. We believe in reading time because it represents the core value that our readers receive from Medium. It may not be a perfect measure of value, but we find that it’s a powerful proxy.
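The proportional split described above is simple enough to sketch in a few lines of Python. This is an illustrative model only: the function name, the dollar amounts, and the flat proportional allocation are assumptions for the example, not Medium's published formula.

```python
def story_earnings(member_fee, minutes_by_story):
    """Split one member's monthly fee across stories in proportion
    to the time that member spent reading each story.
    Illustrative sketch only, not Medium's actual model."""
    total_minutes = sum(minutes_by_story.values())
    if total_minutes == 0:
        return {story: 0.0 for story in minutes_by_story}
    return {
        story: member_fee * minutes / total_minutes
        for story, minutes in minutes_by_story.items()
    }

# A member's $5.00 share, split across three stories they read this month:
payouts = story_earnings(5.00, {"fly-fishing": 30, "cooking": 15, "travel": 15})
# "fly-fishing" got 30 of 60 minutes, so it receives half of the $5.00
```

Under this sketch the fly-fishing author earns $2.50 from this single loyal member, which is the "share" effect the article describes: a small but devoted audience can outweigh raw volume.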
https://blog.medium.com/improving-how-we-calculate-writer-earnings-d2d3f4329b26
['Emma Smith']
2019-10-22 15:39:07.368000+00:00
['Partner Program', 'Media', 'Product News', 'Writing']
The Future of Energy from the Mojave
About a half dozen years ago I woke up from a dead sleep in my Los Angeles home. Something said to me, “go to Vegas. You got nothing of utmost importance scheduled this week.” At about 4:00 AM I jumped on I-10 and headed east into the desert. As I descended the Mountain Pass on the Mojave Freeway, I noticed what looked like a futuristic space station. A few illuminated towers arose from the desert floor of the Primm Valley. Thousands of mirrors were reflecting the first glimmers of the morning sun. Intrigued by what I saw, I pulled off of the freeway to take a closer look. As the sun began to rise higher in the desert sky, the mirrors sent a blinding light across the Valley. It was difficult to make my way back onto the freeway as the light grew more intense. Finally, I arrived at Buffalo Bill’s which was just a few miles away from the phenomenal site. I sat down for coffee and started my research. What did I see? After entering a medley of crazy phrases and descriptions of my location and the elements I saw, Google revealed the mystery site: Ivanpah Solar Electric Generating System. Stretched over 4,000 acres of the desert floor, an illuminating beacon gleaming brightly in the naked and dry Mojave Desert, about 40 miles southwest of Sin City, is the world’s largest concentrating solar power (CSP) plant: The Ivanpah Solar Energy Facility. It generates enough energy to power approximately 140,000 homes. Photo By BrightSource The scene of thousands of mirrors surrounding three high-towering, glowing beacons is quite a sight. Each year it intrigues and attracts tourists who want to get a closer look at the mysterious site. The construction of the facility was made possible by funding from the Department of Energy. Thanks to the enormous scale of facilities like Ivanpah, solar energy has become considerably cheaper and contributed to the efforts of replacing fossil fuels with clean and reliable renewable energy sources. 
Photo by Author The technology used by CSP systems generates solar power by using mirrors and lenses to concentrate a large area of sunlight onto a smaller, focused area. Specifically, Ivanpah leverages “power tower” solar thermal technology to generate energy. More than 170,000 devices, known as heliostats, direct solar energy onto boilers fitted within the three power towers. Each heliostat consists of two mirrors, which concentrate sunlight onto the water-filled boilers to create high-temperature steam. The steam is then pumped to conventional steam turbines to generate electricity, which is then carried by transmission lines to power homes and businesses. Ivanpah also demonstrates more efficient use of land than its solar technology competitors, such as photovoltaic and trough solar. What are the sustainable advantages? High Air Quality — Ivanpah reduces the emissions of CO2 by millions of metric tons. Desert Plant Preservation — Many similar technologies require the entire site to be graded. Ivanpah preserves the site’s natural landscape and contours, which means vegetation can coexist with the facility. Water-Saving Technology — The solar tower technology used at Ivanpah uses up to 95% less water than wet-cooled solar thermal plants by using air instead of water to condense the steam. All water consumed during the steam production cycle is recycled back into the system. Next time you are leaving Las Vegas via car or plane and see this gem of a site in the desert, you’ll be well-informed.
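The "approximately 140,000 homes" figure can be sanity-checked with rough arithmetic. All of the inputs below are assumptions for the estimate (a nameplate capacity near 390 MW, a 25% capacity factor, and average household use of 6,000 kWh per year), not numbers taken from this article.

```python
# Back-of-the-envelope check on the "140,000 homes" claim.
# Every input here is an assumption, not a figure from the article.
capacity_mw = 390          # assumed nameplate capacity of the plant
capacity_factor = 0.25     # assumed average output as a fraction of peak
kwh_per_home_year = 6000   # assumed annual consumption of one home

hours_per_year = 8760
annual_kwh = capacity_mw * 1000 * hours_per_year * capacity_factor
homes_powered = annual_kwh / kwh_per_home_year
print(round(homes_powered))  # 142350, in the neighborhood of 140,000
```

With these assumed inputs the estimate lands close to the article's figure, which suggests the claim is in a plausible range.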
https://medium.com/california-dreaming/the-future-of-energy-from-the-mojave-1634de4011ce
['Gena Vazquez']
2020-09-15 16:25:26.162000+00:00
['Technology', 'Green Energy', 'Environment', 'Energy Efficiency', 'California']
From the Beginning
Suntonu Bhadra began this ‘Multiplier of Five’ Poetry Series challenge: 5 (five) poems in 5 (five) days, each in precisely 25 (twenty-five) words (spaces not counted). Anyone can join the challenge, and the theme is up to the writer, or it can differ from day to day. If you are up for it, please tag me in the endnote of the first poem you write so that I can follow your series too. Best regards.
https://medium.com/genius-in-a-bottle/from-the-beginning-a13a83f2c9bc
['Susannah Mackinnie']
2020-11-06 05:09:50.688000+00:00
['Poetry', 'Storytelling', 'Childhood', 'Fantasy', 'Life']
Step up your microservices architecture with Netflix Eureka
Introduction The search for the physical addresses of services within our microservices architecture has been a well-known concept since the beginning of distributed computing. It’s a key piece of our architecture since it directly affects our distributed design, starting with the following three key reasons: Horizontal Scalability: It offers the application the ability to quickly scale services up and down without disrupting the service consumers; Abstraction: As physical locations are not known by the service consumers, new instances can be added and removed at any time from the available services pool; Resiliency: If for some reason a microservice becomes unavailable, it will automatically be removed from the available services list, routing service consumers around it. The big difference between service discovery (SD) and DNS is that, while SD may have N application servers running, DNS will always have a static number that will not go up or down. Another difference is the persistence of state: while with SD we can restart any service at any time and recover its original state, with DNS we will always get the same state as it was at the time of the crash. Given these differences, we can already see how relying on DNS would limit our horizontal scalability. In this article, I will explain how this cloud-based solution works together with client-side caching and load-balancing (for when the SD is unavailable). A stub project will be created with the goal of showing you how easy it is to empower your microservice architecture in a few steps using Spring Cloud and Netflix’s Eureka service discovery agent.
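To give a concrete sense of how little wiring this takes, here is a sketch of the kind of `application.yml` a Eureka client typically carries in a Spring Cloud project. The service name, port, and server address are placeholder assumptions, not values from the stub project built in this article.

```yaml
# Sketch of a Eureka client configuration; names and ports are placeholders.
spring:
  application:
    name: licensing-service       # logical name other services discover this one by
server:
  port: 8080
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka/   # where the Eureka server lives
  instance:
    preferIpAddress: true         # register the instance's IP instead of its hostname
```

With this in place (plus the `spring-cloud-starter-netflix-eureka-client` dependency), the service registers itself on startup and consumers can resolve it by its logical name rather than a physical address, which is exactly the abstraction described above.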
https://rafael-as-martins.medium.com/step-up-your-microservices-architecture-with-netflix-eureka-cb3b92f90a18
['Rafael Martins']
2020-11-25 09:24:03.992000+00:00
['Programming', 'Java', 'Design Thinking', 'Spring Boot', 'Netflix']
The Write Plan. Plan on writing a novel in one month…
The Write Plan To Draft Your Novel, in One Month Plan on writing a novel in one month? It’s easy to dream, and much harder to achieve. My suggestion: go into this with a goal and a plan. Maybe you want to participate in NaNoWriMo. Or maybe you have a vacation, a chunk of free time, and want to finally write that book. Fine, but how? Before I wrote my first novel, I had no clue where to start, and had only ever written short stories up until then. I dreamed about writing a book. There was a goal, but no plan. Need structure? No matter where you’re starting today, follow these tips. Your writing will be the better for it. I recommend considering two important factors before jumping into a fiction project: 1. Set an end goal, and 2. Develop a plan. Set an end goal NaNoWriMo begins on November 1st, with a target of 50,000 words. This can be a good approach, but can also be scaled up or down. Only have one week? Shoot for an average-length short story of 7,000 words. Don’t have a lot of writing experience? Start with smaller incremental goals, and build up stamina over time. Develop a plan In a “pantser” approach, writers draft freely, without a clear goal of where the story is headed. But if your time is limited, consider not wasting that valuable asset. Have a system, and there is no need to invent one. Years ago I stumbled on Book in a Month by Victoria Schmidt quite by accident. I go back to it time and time again. Start with the section on creating an outline. Create scenes, character maps, and turning points; generate backstory; and identify plot holes. Write the whole damn first draft, or as much as you can muster, straight through from the opening sentence to the last line. Don’t write and edit at the same time! If you aim to reach the full 50K word count goal, I recommend setting up the outlines, characters and plot points in October — ahead of time. The limitations No, you won’t end the month with a complete novel.
It won’t be ready to “hit send” to agents or publishers. You might pull that off with a short story, but probably not if it’s your first time. What will you have? A first draft, which must be shared with critique partners and rewritten based on their feedback. Perhaps by that point it will be ready to self-publish or submit to publishers. Don’t skip professional editing if you’re going to independently publish. It’s not easy to write a complete novel, but it can be done. Who knows? If you have a good hook and a well-paced plot, and put in the work, it might even be a great novel. You’ll never know until you try.
https://medium.com/nanowrimo/the-write-plan-to-draft-your-novel-in-one-month-4967f46a6085
['Chad Schimke']
2020-11-14 23:12:58.684000+00:00
['Fiction', 'NaNoWriMo', 'Writing', 'Nonprofit']
How Doing Stand-Up Comedy Can Help Your Writing
January 28, 2015. A hundred pairs of eyes were on me as I stood on stage at Goodnights Comedy Club, blinking into the miniature sun of a spotlight. Me, a 42-year-old librarian and English professor. Father of two. Owner of seven cats and a corn snake. Plus my wife had crabs — hermit crabs, in an aquarium. Jerry Seinfeld. Ellen DeGeneres. Robin Williams. Lewis Black. Jay Leno. Chris Rock. These are people who had stood where I was standing. How did this happen? I have loved stand-up comedy since I was a teenager staying up late on weekends to watch A&E’s An Evening at the Improv. I often imagined myself up there, cool as iced tea, my eyes sweeping the crowd, my hands on the mic or spreading wide to welcome laughter. In my youth, I might have made it happen. But now? Not with a wife and crabs to support. Then I learned about Goodnights Comedy Academy, a course for beginning comics meeting one night a week for four weeks and culminating in a “graduation” performance in front of a real audience. I paid $300 and got a slot in the next class. There were three other students: a twentyish IT guy named Justin; a thirtysomething waitress named Brandy; and a mid-fifties folk singer named Jonathan. And there was the instructor, Charlie Viracola. He was also in his fifties, maybe 5’6”, and wore the uniform of urban smartasses: long sweater, cargo pants, and a beanie. He lived in Los Angeles but grew up in Raleigh and had a history with Goodnights: he was the club’s first act when it opened in 1983. He also had serious comedy cred, with appearances on Craig Ferguson, Dennis Miller, and Conan, plus half-hour specials on Comedy Central and Showtime. He has performed internationally, and he did the largest USO show in history at Fort Hood, Texas. On the first night, Charlie had us stand at the microphone and talk about ourselves, including our favorite comedians. Mine were Bill Hicks, Jerry Seinfeld, Steven Wright, Dennis Miller, and Woody Allen. Charlie nodded. 
He knew most of them personally. We spent the rest of that night coming up with an opening joke. All night. On one joke. That may seem like over-preparation, but I quickly learned that, in stand-up, there’s no such thing. “The good comedians,” Charlie said, “recreate spontaneity. That’s what comedy is. You write everything down and prepare like hell so that, when you’re on stage, it sounds like you’re making it up right then.” My classmates had no trouble coming up with their first joke. Then it was my turn. I got up, stood at the mic, and said . . . nothing. My brain had no information. Everyone looked at me, waiting for some sound to emerge. Seeing that I was, in stand-up parlance, “eating shit,” Charlie stepped in. “So you’re a librarian,” he mused, walking around the room. “How about this? ‘My name is Anthony, I’m a librarian, and I’m probably the only comic you’ll ever see who would prefer you to keep quiet.’” Perfect. And it just came to him, like the smell of rain on the morning breeze. Man, I was going to suck at this.
https://pisancantos43.medium.com/how-doing-stand-up-comedy-can-help-your-writing-a8eb5da85f28
['Anthony Aycock']
2020-09-15 23:57:51.757000+00:00
['Comedy', 'Humor', 'Education', 'Writing']
15 Mistakes Every Developer Has Made in Their Life
15 Mistakes Every Developer Has Made in Their Life You can probably relate to these mistakes Photo by Chris Ried on Unsplash. Making mistakes is human and is actually what makes us grow. You shouldn’t be afraid to make mistakes. Chances are that you’ve made a lot of mistakes that are on this list. If not, that’s great. Try to learn from the mistakes other developers have made so you don’t have to make them yourself.
https://medium.com/better-programming/15-mistakes-every-developer-has-made-in-their-life-7b7ef03cc84c
[]
2020-11-16 17:47:49.368000+00:00
['Programming', 'Software Development', 'Web Development', 'JavaScript', 'Startup']
Building the Perfect FireTV App
Building the Perfect FireTV App Part 4 of our OTT series By Zeinab Bagheri, Kevin Chow, and Jon Holtan This is the fourth part of our OTT series. The first two blogs focus on the challenges of building on Roku, offer solutions, and introduce the Roku App Kit. The third turns to tvOS, and now, we give the rundown on FireTV. On Product FireTV is one of the top competitors in the OTT space. The first device was released in 2014, and since then, Amazon has been constantly iterating, adding new features, and releasing new devices. We’ve been working closely with a leading digital streaming service and recently built and released a FireTV app; the app allows users to browse, search, and stream thousands of pieces of media content. To start the project, we engaged in platform discovery. We had a hard time finding clear documentation on how an app should interact with the FireTV Operating System (OS)/Amazon. We created our own documentation of dos and don’ts, which was based on live FireTV apps. However, we found that nearly every app is different in terms of guidelines and requirements for app-OS interaction. This was an added challenge that made it difficult to really understand what the final requirements should be for our app. Fast forward a few weeks, and we were ready to submit our app… It was rejected. The rejection came as a huge surprise to us and meant we needed to pivot quickly and re-release. But, it wasn’t all bad news and the rejection actually signified a strength: Our app was evaluated against particularly strict acceptance criteria due to our reputation as a leading developer. In the end, we successfully changed our implementation and the app passed through the criteria. In sum, when developing for FireTV, keep in mind the following high-level product pointers: No app works with FireTV OS the same way. Do your research. Consider the sheer volume of FireTV devices on the market. Test widely. Stay up to date with all FireTV devices, their technologies, and the OS.
Deliver a product that leverages the technology to the fullest. Designing for FireTV When designing, always keep your audience in mind, understand them, and make sure you’re designing for their behaviours and attitudes. Without understanding who your target audience is, it’s much harder to effectively design for them. When we started this engagement, we knew the audience we were targeting was aged 45–60. It was then much easier to design an interface that would target and meet their needs and interests. For example, given the audience’s age demographic, we decided to make the font sizes more legible, and to design icons and buttons that would be clear and easy to see. In our app, the player view is the portion in red. The player view (see above image) posed a challenge as there are many states needed for the buttons (e.g., highlighting a button to illustrate that it has been selected, or changing the colour of a button to indicate that it’s active). We needed to make sure each state was clearly identifiable and distinguishable so that our users could easily understand how to use the player. A further design consideration was designing a player that’s always on screen, without compromising usability and a user’s ability to navigate through other content on the app. It was important to our client that the player was on screen throughout use, while having easily accessible content. To manage both, we struck a careful balance between including as much content as possible and keeping that content legible. Engineering for FireTV When building an app, developers usually focus on the available library toolset. In our case, building for FireTV, that was the Leanback library. Leanback is the de facto TV framework that provides software engineers with a starting point for development. It has a standardized set of User Interface (UI) components which work pretty well if you use them as they were intended.
However, there are some limitations to the Leanback library, particularly when you want or need to modify any of the pre-specified functionalities. This is where we found ourselves at a fork in the road: we wanted to build a quality application that would meet our client’s needs, but we were unable to use the existing Leanback settings fragment. So we ended up building our own components instead of using what was provided by the Leanback library. By building our own components, we were able to meet the client’s desires. In short, engineers should use the Leanback library to enhance development, not drive it, and always keep the focus on the client’s needs. Our client wanted to have a cool wave animation that would play when the user was listening to music. Originally, we tried to have the animation match the music but this proved to be pretty difficult, especially in maintaining smoothness. Instead of getting lost in the weeds of perfectly matching the animation to the music, we felt it would be best if we created an elegant yet pleasing wave animation. In order to do this, we used Lottie. Lottie is a powerful library that allows developers to import the crazy cool artwork made by designers in Adobe After Effects into their apps. The Lottie After Effects plugin exports a minified JSON file into the app project. The animation can then be rendered onto the device through the Lottie library. Using Lottie allowed us to iterate on the animation quickly and effectively. Without Lottie, it would have been much harder to craft elegance into our app. Test-Driven Development (TDD) is one of the hottest trends in the development world. Why? Because it allows developers to essentially prove that the code will do X, X being the functionality that they are trying to write. By using TDD, when building applications, bugs can be found and fixed even before the application is run on a device.
It also gives developers a safety net when they go to refactor code or implement new features that have the potential of breaking any pre-existing code. The flow of TDD is: write a failing test, write some code to pass the test, refactor, and repeat. During the development of our FireTV app, we utilized TDD when structuring the flow for retrieving data from the network. This helped us organize our data flow and iron out any bugs that could have caused issues later in development. We highly encourage developers to use TDD when building applications as it can save you from headaches and will allow you to build the best app.
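The red-green-refactor loop described here is language-agnostic, so a minimal sketch is shown in Python's built-in unittest rather than the app's own Android code. The `parse_duration` helper is a made-up example, not part of the FireTV project:

```python
import unittest

def parse_duration(text):
    """Convert a 'MM:SS' string into total seconds.
    Written after the tests below: red first, then green."""
    minutes, seconds = text.split(":")
    return int(minutes) * 60 + int(seconds)

class ParseDurationTest(unittest.TestCase):
    # In TDD these assertions exist (and fail) before parse_duration does.
    def test_parses_minutes_and_seconds(self):
        self.assertEqual(parse_duration("03:25"), 205)

    def test_zero_duration(self):
        self.assertEqual(parse_duration("00:00"), 0)
```

Running `python -m unittest` goes red while the function is unimplemented and green once it passes; the implementation can then be refactored freely with the tests as a safety net, which is the same flow the team applied to its network layer.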
https://medium.com/tribalscale/building-the-perfect-firetv-app-f03e1d9f1ed6
['Tribalscale Inc.']
2018-10-25 19:47:48.149000+00:00
['OTT', 'Agile', 'Amazon Fire Tv', 'Entertainment And Media', 'Development']
Kindhumans & Nimiq Checkout
Kindhumans & Nimiq Checkout BTC, ETH, and NIM Cryptocurrency Adoption Through Sustainability Kindhumans is a marketplace for ethical products guided by the principle of delivering a more responsible, mindful, and holistic consumer platform. This positive, transparent and eco-friendly mindset resonates strongly with Nimiq’s spirit. As announced before, the two projects have been collaborating to record the hash of Kindhumans’ published reports on the Nimiq Blockchain so that the public can verify the genuine, untampered nature of each report. We are happy to announce that this collaboration has grown to include the new Nimiq Checkout for Crypto in the Kindhumans online shop! The vital role of social awareness There is no denying that the newer generation gives considerably more importance to eco-awareness, sustainability, and overall human kindness towards each other and our earth. This same generation is also likely to take the lead in crypto adoption in the years to come. Sustainability in the cryptocurrency space is probably not an obvious necessity at first glance but will come to play a significant role. Something all Nimiq Team Members have in common is that fairness and kindness guide their moral compass. This can easily be noticed in the way the project behaves as a whole. Initial charity allocation: Since the beginning of the project, Team Nimiq has shared its interest in good causes and sustainability, to the extent of allocating 2% of total NIM supply to its charity (ImpactX Foundation). These funds will be aimed at projects of high ecological and social impact and are time-locked in a vesting contract that limits rushed dispensation. Nimiq 2.0 and switch to Proof-of-Stake (PoS): One of the strong reasons for the transition to PoS with Nimiq 2.0 is the enormous electricity consumption of PoW Blockchains. The decrease in energy consumption by switching to PoS will be dramatic.
In addition, Nimiq’s novel PoS implementation also breaks technological ground due to its extraordinary transaction throughput approaching the theoretical limit for single chain blockchain protocols, all without compromising on decentralization and censorship resistance. Nimiq #TeamTrees Campaign: Running during November 2019, this campaign supports the effort of MrBeast in planting 20 Million Trees. Team Nimiq has added a “Tree Item” to the Nimiq Shop with which you can donate a tree using the NIM equivalent of less than a Dollar. The best part is not only the easy and smooth payment, but also that Nimiq is supporting #TeamTrees by donating an extra tree for each tree donated in the shop. Bringing cryptocurrency to the masses To reach mass-adoption of cryptocurrency, crypto payments need to be conveniently available for all audiences, especially the next generation of eco- and crypto-friendly Millennials. Kindhumans is embracing this mindful movement and, as an extension of our existing partnership, we have worked to integrate a brand new Nimiq Checkout for Crypto in the Kindhumans online shop. This enables Kindhumans to accept cryptocurrency payments in Bitcoin, Ethereum and of course, Nimiq. Comparing the payment experience with NIM to ETH and BTC, a few differences are noteworthy: Nimiq has low fees: Scalability of the Nimiq Network keeps the Blockchain decongested. The transactions per second are due to increase from 7 to 1000+ with the transition to Nimiq 2.0. Nimiq shines in the Checkout: Thanks to Nimiq’s blockchain code being native to the web, the Nimiq Checkout experience for NIM is all in-browser and exceptionally quick, smooth and easy. It just takes a couple of clicks and your Account password to pay for your rewarding Kindhumans products. Nimiq confirmations are fast: When compared to BTC, Nimiq transactions get confirmed 10x faster.
This will further improve with the transition to Nimiq 2.0, where transactions are expected to be confirmed in as little as 2–3 seconds. A place for the ethical crypto consumer The number of cryptocurrency wallets is growing exponentially, demonstrating that more people every day are developing a real interest. Kindhumans is providing mindful products to the world through online purchases and has made a conscious choice to support progressive forms of payment such as cryptocurrency. At Nimiq we are proud and humbled to be working with Kindhumans to now also fulfill the cryptocurrency part of the blockchain equation. We aim to support Kindhumans as a marketplace for sustainable products by providing any holders of Bitcoin, Ethereum and Nimiq with an easy-to-use checkout experience at kindhumans.com — try it out now! Thank you Kindhumans team for choosing us to make this happen! Pura Vida, Team Nimiq
https://medium.com/nimiq-network/kindhumans-nimiq-checkout-23a6d3a8f7db
['Team Nimiq']
2019-11-17 22:00:52.349000+00:00
['Bitcoin', 'Cryptocurrency', 'Ecommerce', 'Sustainability', 'Nimiq']
A Comprehensive Guide To ServiceNow
ServiceNow Tutorial — Edureka Every industry is being disrupted and, at the same time, transformed by automation, intuitive consumer experiences, machine learning and an explosion of connected devices. Now, to keep up with the pace, an enterprise needs to move faster, but outdated patterns slow it down. Other factors like IT incidents, customer requests, and HR cases each follow their own path, further slowing down the process. So how does an enterprise overcome these problems? Is there a way to structure and automate these processes to accelerate the speed of work? With ServiceNow, an enterprise can certainly achieve this goal. In this article, I will take you through the details of this cloud platform, so continue reading to know more. In this article, I will cover the following topics: Why ServiceNow And Its Need What Is ServiceNow? ServiceNow Capabilities ServiceNow Demo So let us not waste any time and get started with this article. Why ServiceNow and its Need The ServiceNow System of Action lets you replace unstructured work patterns of the past with intelligent workflows of the future. Every employee, customer, and machine in the enterprise or related to it can make requests on a single cloud platform. All the departments working on these requests can assign and prioritize, collaborate, get down to root cause issues, gain real‑time insights, and drive to action. This will help the employees perform better and the service levels will eventually improve. ServiceNow will help you Work at Lightspeed — making your work process smarter and faster. ServiceNow provides cloud services for the entire enterprise. Let us take a look at a few of the reasons why ServiceNow can be so integral to an enterprise: IT: ServiceNow can help increase agility and lower costs by consolidating legacy tools into a modern, easy‑to‑use service management solution in the cloud. Security Options: Security can collaborate with IT to resolve real threats faster.
To do this, it uses a structured response engine to prioritize and resolve incidents based on service impact. Customer Service: Customer service can drive case volume down and customer loyalty up — by assessing product service health in real time and working across departments to quickly solve service issues. HR: HR can consumerize the employee service experience with self‑service portals and get the insights they need to continually improve service delivery. Building Business Apps: ServiceNow helps any department to quickly build business applications and automate processes — with reusable components that help accelerate innovation. Now Platform: The Now Platform delivers a System of Action for the enterprise. Using a single data model, it’s easy to create contextual workflows and automate any business process. Anyone, from the business user to the professional developer, can easily build applications at light speed. Any application user on the Now Platform can make requests through service catalogues, find information in common knowledge bases, and be notified about the actions and information they care about the most. Departments, work groups, and even devices can assign, prioritize, collaborate, get down to root cause issues, and intelligently orchestrate actions. Now, your business moves faster. Non-stop Cloud: The ServiceNow Nonstop Cloud is always on. No customer instance is ever offline or taken down for any reason. The unique, multi‑instance architecture ensures each customer can fully customize cloud services and perform upgrades on their own schedule. Highly secure, the Nonstop Cloud conforms to the highest levels of compliance and global regulations. And an industry-leading, advanced, high‑availability infrastructure ensures instance redundancy between two data centre clusters in every geography, scaling to meet the needs of the largest global enterprises.
Now that we have seen why ServiceNow is needed, let us continue with this article and understand what ServiceNow is: What is ServiceNow? ServiceNow is a software platform that supports IT service management and automates common business processes. It contains a number of modular applications that can vary by instance and user. It was founded in 2004 by Fred Luddy, the former CTO of software companies such as Peregrine Systems and Remedy Corporation. ServiceNow is an integrated cloud solution that combines five major services in a single system of record. ServiceNow began its journey with IT Service Management applications providing Service Catalog Management. Later, other project management applications followed, which helped in managing entire projects when the magnitude of an incident, problem, or change is larger. It didn’t stop there; very soon, the Configuration Management Database (CMDB) made its way to the list of applications. Today ServiceNow has apps for both IT Service Management processes and the IT enterprise, such as HR Management, Security Management, PPM, etc. The following features make ServiceNow better than its competitors: Instance-based implementation Ease of customization Better support and low maintenance cost Real-time analysis and reporting Next, let us get into the nitty-gritty of ServiceNow’s capabilities: ServiceNow capabilities Authentication The Single Sign-On (SSO) feature is the essence of any tool, and ServiceNow is no different. This tool has multiple-provider SSO features. An organization can use several SSO IDPs (Identity Providers) to manage authentication. SSO enables a user to log into the application without providing any user ID or password; it uses the Windows ID and password. LDAP Companies can use Active Directory for various purposes, be it providing access to applications or maintaining Outlook distribution lists.
The LDAP integration is a piece of cake for ServiceNow, and the best part is that you do not have to code anything. Everything is a simple configuration! Orchestration ServiceNow provides the capability of orchestrating, or automating, simple or complex tasks on remote servers. Once Orchestration is implemented in an IT company, the work as a whole requires less skill and labour. It can automate systems like VMware, Microsoft Exchange mail servers, etc. Web Services The platform can both publish and consume APIs. SOAP, WSDL, and REST are the supported protocols, and you can create either codeless or scripted APIs. Enterprise Portal One of the most important requirements for any organization is a web portal where users can request access, services, or support. ServicePortal is giving wings to many organizations: today, enterprises develop their own ServicePortals to showcase their ServiceNow capabilities. ServicePortal also replaced the now-deprecated CMS site, the old version of the portal, which was not as capable as ServicePortal. Mobile Ready Today, most people want an enterprise application, service, or solution to be mobile-enabled: they need the ability to make changes on the go. ServiceNow makes this possible. ServiceNow forms and applications are mobile-friendly and can be published directly to mobile devices without any mobile-specific development. ServiceNow provides a web-based application for mobile as well as native apps for iOS and Android. This was about ServiceNow and its capabilities. Next in this ServiceNow tutorial, let us take a look at a demo that illustrates another important concept. Import Sets Demo Import Sets are another important concept: though simple, they are integral to ServiceNow's smooth functioning. Import Sets allow administrators to import data from various data sources and then map that data into ServiceNow tables.
After an import set completes, you can review the completed import and clean up the import set tables. The import log is where you can find information about the internal processing that occurs during an import. Let us try to do this practically. I will import a 'sample.xlsx' data set and then map it to a ServiceNow table. You may download the data set here. You will need a ServiceNow instance to perform this demo on your system; I imagine you all have one by now. So let us proceed with the final part of this ServiceNow tutorial. Search for Import Sets and select Load Data under the System Import Sets module. Select the file you want to import (in this case the 'sample.xlsx' shared in the above link) and click Submit. Click on Loaded Data to review the imported data. This is what the imported data set looks like. You can go ahead and click on the settings symbol to personalize your table columns as per your needs. The next step is to create an import set table. To create a target table for the import set, go to the Filter Navigator, type 'System Definition', click on Tables, and then New. I have gone ahead and labeled the table Sample Table. Next, click on the Columns field to add column names to the table. I have added the column names that I would like to map. Once you do that, click on Submit. Your table is created; however, it still holds no records. This is how the record field looks for now if you search for it in the Filter Navigator. Next, let us load the imported data set. Follow the steps mentioned in the image below. Once the data is loaded, the State field displays Complete. You can click on the Loaded Data tab to view it. This is how the data looks. Let us personalize the column list for simplicity. The image below shows a personalized view of the data we imported. Transform Map Go back to the previous page and click on Create Transform Map.
Provide the name and select the source table and target table for mapping. Click on Mapping Assist to map the fields; you can also auto-map them by clicking on Auto Map Matching Fields. Once you click on Mapping Assist, both source and destination tables are available so you can manually map the fields you want. Let us go ahead and map the fields as shown in the image below and click on Save. Once you have saved your progress, click Transform in the following two steps, and confirm by clicking on Transform again. The State field now has the value Complete, indicating the transformation is done. You can type the name of the table (in this case "Sample Table") in the Filter Navigator to see the required fields and records; the image below shows the same. Hence we have successfully imported a data set and mapped it to a table in ServiceNow. This brings us to the end of the article. Hope this was informative and helpful to you. If you wish to check out more articles on the market's most trending technologies like Artificial Intelligence, DevOps, and Ethical Hacking, you can refer to Edureka's official site. Do look out for other articles in this series which will explain the various other aspects of ServiceNow.
https://medium.com/edureka/servicenow-tutorial-55a3ce369e01
['Vardhan Ns']
2019-06-05 15:23:19.231000+00:00
['Servicenow Training', 'Servicenow Capabilities', 'Cloud Computing', 'Servicenow', 'Servicenow Demo']
User Input
C++ Basics : User Input To better understand someone, it is necessary to build a relationship in which you both can communicate with each other easily. I am going to discuss some expressions and user input, since I have already discussed data types in my DATA TYPE blog. There I wrote that a data type is a kind of sense for the computer, through which the computer understands what type of data you are providing and what kind of data you are demanding. I also discussed some data types, such as int for integer input/output, float for decimal-number input/output, and char for character input/output. You are all familiar with expressions: an expression is a group of numbers, variables, and operators. There are many types of expressions in mathematics (e.g. monomial, binomial, polynomial, linear, quadratic, cubic, bi-quadratic, numeric, and variable expressions, among many more). Now I want to give you a brief review of each. Monomial expression: an expression which consists of one term, one integer or coefficient (a number which multiplies a variable), and one or more variables, such as 7xy, where 7 is the coefficient and x, y are two variables; 38xyz is another monomial. Binomial expression: an expression consisting of two unlike terms and an operator, such as 5x + 3, where 5x is the first term, 3 is the second term, and the plus sign (+) is the operator. Polynomial expression: figure 1.1 An expression consisting of one, two, or many terms, as shown in figure 1.1, where x² is a single-term expression, y² + 2y - 1 is a three-term expression, and so on.
Linear expression: an expression in which the power of x is one. Quadratic expression: an expression in which the power of x is two (squared). Cubic expression: an expression in which the power of x is three (cubed). Bi-quadratic or quartic expression: an expression in which the power of x is four. Here I have shown examples of linear, quadratic, cubic, and bi-quadratic expressions respectively. Numeric expression: an expression consisting of two or more numeral terms and one or more operators. Variable expression: an expression consisting of two or more alphabetic terms and one or more operators. Here I have also shown examples of numeric and variable expressions.
https://medium.com/dev-genius/user-input-55e9987bd4a8
['Ahmed Yasin']
2020-07-14 08:12:32.641000+00:00
['Programming', 'Writing', 'Machine Learning', 'Language', 'Mathematics Education']
Skill Stacking: Be awesome with average skills
I was listening to a James Altucher podcast recently, with guest Scott Adams, the creator of the comic Dilbert. For part of the discussion Scott explained something he calls “skill stacking” and how the concept has helped him to create a successful career. What is skill stacking? Skill stacking is simply utilizing various skills that you have acquired throughout your life and combining them in a unique way that is individual to you and far more useful than the sum of each skill separately. The great thing about skill stacking is that you don’t need to have mastered any one skill. As a matter of fact, combining a variety of skills you are only average at is what can give you a unique competitive advantage. You don’t have to have learned from any particular source or place; the fact that you have the skills to work with is what really matters. The idea of a talent stack is that you can combine ordinary skills until you have enough of the right kind to be extraordinary. -Scott Adams Scott Adams explained that he is not a great artist, but rather just average compared to other artists. He is a good writer, but not a great writer, and has never taken a college-level writing course. He can be somewhat funny, but a lot of people are much funnier. He is just alright at business, and definitely not an expert by any means. Thoughtfully combining this list of mediocre skills has allowed him to become a very successful cartoonist and writer with an estimated net worth of $75 million. Here’s another very simple example, and I’m just making this one up: let’s say you have average photography skills, the ability to connect with people easily, and a knack for explaining things in an easy-to-follow way. You could try teaching some beginner-level photography classes, or make a series of YouTube videos explaining your techniques. This is a very simplified example, but I think you get the idea.
We all have a unique set of skills So how can we all use the concept of skill stacking to further our careers in a way that creates a competitive advantage of our own? Take time to figure out what skills you have and how to combine them to take you in a direction that is right for you and your interests. The thing is, everyone can benefit from skill stacking because, to some degree, everyone has a unique stack of skills that they have built throughout the course of their lives. The trick is to intentionally combine the skills that you already have into a stack of skills that can work together for what you are trying to pursue. You may not immediately realize what all of your skills are without taking a bit of time to work through what you are good at. Don’t worry about having a degree or official recognition of these skills. Just think about what you are interested in and can do moderately well. Maybe you are comfortable with public speaking or video editing or time management. Take some time to really think about your abilities and then fill in the gaps for whatever else you need to learn to create the perfect stack for you. Another great thing about a talent stack is you don’t have to put in your 10,000 hours to master each skill. A list of skills that you are pretty good at can be far more valuable than spending years to fully master any one skill. Of course, actually mastering a skill is fantastic; the point is, you can be awesome without fully mastering any one skill. I would place that in the “work smarter not harder” category. All you need to succeed is to be good at a number of skills that fit well together. -Scott Adams An ever evolving process I am still working on my own talent stack by pulling from my current skills while learning new ones that will create a usable stack for me.
I have formal training and years of experience as a pastry chef, for example, but that is not something I necessarily want to continue pursuing as a career; not in a typical way, at least. I do, however, have a set of individual skills I have learned from baking and pastry work that can be used in or out of the culinary field. Customer service, time management, costing, leadership, hiring and training employees, staying calm under pressure, and the list goes on. I may be just average in some of those skills and I’m definitely not an expert at any of them, but these are all things that I can take with me to thoughtfully use and stack together as needed. I personally hope to create a stack for myself that is genuinely valuable to others and also fulfilling to me. This is an ongoing effort as I transition into a new career phase in my life. It’s not always enough to rest on skills from the past; there’s a good chance you may need to take the initiative to add to your competencies. Fortunately, it’s easier than ever to acquire skills with online tutorials and even full university courses that are completely free through sites like Coursera and HarvardX. I am working on my own stack as I transition from working as a pastry chef into a seemingly unrelated field, like marketing. This may appear to be an unlikely and difficult transition, but learning how to combine skills from the past while developing new ones will allow me to piece together a skill stack that offers value to others in a way that’s completely unique to me alone.
https://medium.com/the-shortcut/skill-stacking-be-awesome-with-average-skills-a52de26026e4
['Jason Link']
2020-01-24 09:12:09.303000+00:00
['Entrepreneurship', 'Talent', 'Talent Acquisition', 'Skill Stacking']
Docker-Powered Web Development Utilizing HTTPS and Local Domain Names
Creation of Self-Signed Certificates We can perform all the next steps in the folder .certs , i.e. the one that nginx-proxy also uses for the HTTPS configuration. The process is as follows: First, we’ll create a root SSL certificate, also called CA certificate or just root CA. This has to be done only once. Then later, we will generate a certificate for each local domain and sign it with this root CA. I wrote a bash script to automate this process, but for now, we will go through the whole process step by step. 1. Creation of a root CA First, we generate a private key for the root CA, a file called rootCA.key : $ openssl genrsa -out rootCA.key 4096 This key is not password protected (it could be if we add the parameter -des3 to the command). Please be aware that the process described here is just for your local development environment. Do not use it in production! Now, we self-sign the generated key by answering some questions about us: $ openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.crt You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank. For some fields, there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]:DE State or Province Name (full name) [Some-State]:Berlin Locality Name (eg, city) []:Berlin Organization Name (eg, company) [Internet Widgits Pty Ltd]:My-Company Organizational Unit Name (eg, section) []:Development Common Name (e.g. server FQDN or YOUR name) []:My-Company Email Address []:[email protected] It is not important what we enter here. The Common Name will later appear in the list of our trusted authorities, so we should choose a name we can easily recognize. Now we have everything to become our own certificate authority. 
At least for our own purposes, because no browser in the world knows about our CA yet. To make a browser familiar with our root CA, we have to import it. On macOS, we can add the root CA to the Keychain. On Linux or Windows, we can import the root CA into the trusted authorities in our browser(s). Each browser has a different settings page, so please have a look at your browser’s documentation (or search e.g. “import root CA in chrome/firefox/edge”). 2. Create a Certificate for Each Domain The next step is the creation of a certificate for a local domain, e.g. for my-cool-app.localhost . Again, we create a private key first: $ openssl genrsa -out my-cool-app.localhost.key 2048 Then, we create a signing request for this key: $ openssl req -new -sha256 -key my-cool-app.localhost.key -out my-cool-app.localhost.csr We will again be asked some questions. The important one is the one about the Common Name: this has to be equal to our local domain (e.g. my-cool-app.localhost ). The last step will be the signing of our certificate with the root CA. To do so, we will need a small configuration file called my-cool-app.localhost.v3.ext : authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment subjectAltName = @alt_names [alt_names] DNS.1 = my-cool-app.localhost Important here is the last line, which again contains the local domain. For this domain, we can now sign the certificate with the root CA (note that the -CA and -CAkey parameters must point to the root CA's certificate and key, not the domain's own files): $ openssl x509 -req -in my-cool-app.localhost.csr -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out my-cool-app.localhost.crt -days 1024 -sha256 -extfile my-cool-app.localhost.v3.ext This finally creates our certificate: my-cool-app.localhost.crt . The .certs folder should now contain the key and certificate for my-cool-app.localhost . Restart Docker Compose and the nginx-proxy should serve our demo app via HTTPS now:
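The author mentions having written a bash script to automate this process but does not include it here; the following is a reconstruction of what such a script might look like (my sketch, not the original). The -subj flags are added so the openssl req calls run non-interactively; the subject fields and the DOMAIN variable are illustrative and should be adjusted to taste.

```shell
#!/usr/bin/env bash
# Sketch: create a root CA (once) and a signed certificate for a local domain.
# Run inside the .certs folder. For local development only, not production!
set -e

DOMAIN="my-cool-app.localhost"
SUBJ="/C=DE/ST=Berlin/L=Berlin/O=My-Company/CN"

# 1. Root CA: created only once, reused for every later domain certificate.
if [ ! -f rootCA.key ]; then
  openssl genrsa -out rootCA.key 4096
  openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 \
    -subj "${SUBJ}=My-Company" -out rootCA.crt
fi

# 2. Private key and signing request for the local domain.
#    The Common Name must equal the domain.
openssl genrsa -out "${DOMAIN}.key" 2048
openssl req -new -sha256 -key "${DOMAIN}.key" \
  -subj "${SUBJ}=${DOMAIN}" -out "${DOMAIN}.csr"

# 3. Extension file carrying the subjectAltName entry for the domain.
cat > "${DOMAIN}.v3.ext" <<EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = ${DOMAIN}
EOF

# 4. Sign the domain certificate with the root CA.
openssl x509 -req -in "${DOMAIN}.csr" -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -out "${DOMAIN}.crt" -days 1024 -sha256 \
  -extfile "${DOMAIN}.v3.ext"
```

Run it once per new local domain; the root CA is created on the first run and reused afterwards, so the browser import has to happen only once.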
https://medium.com/better-programming/docker-powered-web-development-utilizing-https-and-local-domain-names-a57f129e1c4d
['Onno Gabriel']
2019-07-15 20:47:12.006000+00:00
['Programming', 'Docker', 'Https', 'Web Development', 'Development']
Splunk HTTP Event Collector: Direct pipe to Splunk
by Jussi Heinonen In August 2016 the FT switched from on-premises Splunk to Splunk Cloud (SaaS). Since then we have seen big improvements in the service: Searches are faster than ever before Uptime is near 100% New features and security updates are deployed frequently One interesting new feature of Splunk Cloud is called HTTP Event Collector (HEC). HEC is an API that enables applications to send data directly to Splunk without having to rely on intermediate forwarder nodes. Token-based authentication and SSL encryption ensure that communication between peers is secure. HEC supports raw and JSON-formatted event payloads. Using JSON-formatted payloads makes it possible to batch multiple events into a single JSON document, which makes data delivery more efficient, as multiple events can be delivered within a single HTTP request. Time before HEC Before I dive into technical details, let’s look at what motivated us to start looking at HEC. I’m a member of the Integration Engineering team and I’m currently embedded in the Universal Publishing (UP) team. The problem that I was recently asked to investigate relates to log delivery to Splunk Cloud. Logs sent from UP clusters took several hours to appear in Splunk. This caused various issues with Splunk dashboards and alerts, and slowed down the troubleshooting process, as we didn’t have data instantly available in Splunk. The following screenshot highlights the issue, where an event that was logged at 7:45am (see Real Timestamp) appears in Splunk 8 hours and 45 minutes later at 4:30pm (see Splunk Timestamp). Figure: Logs arriving to Splunk several hours late The original log delivery pipeline included the following components.
Journald — a system service that collects and stores logging data forwarder.go — a Go application with a worker that receives events from journald and sends them to the splunk-forwarder cluster splunk-forwarder cluster — a cluster of four EC2 instances and a load balancer that receives events from the Go application and forwards them to Splunk Cloud The following diagram illustrates the log delivery pipeline back then. Figure: Original log delivery pipeline The initial investigation focused on the splunk-forwarder cluster, and from the logs in the cluster it seemed like event timestamps were lagging behind on arrival. This indicated that the Go application, with its single worker, was not able to handle the volume of events it received from journald. So we started planning iteration 1 of forwarder.go. Iteration 1: An event queue, parallel workers and Grafana metrics The new forwarder.go release introduced an event queue that caches events while workers are busy sending events to the splunk-forwarder cluster. The number of workers was also increased, which enabled events to be delivered to the splunk-forwarder cluster in parallel. The number of workers was made configurable so that we could easily add more in case there were not enough to process events from the queue. To gain visibility into the internals of forwarder.go, a few metrics were introduced and delivered to Grafana for graphing. After iteration 1, the log delivery pipeline diagram began to evolve. Figure: Forwarder.go with event queue, workers and Grafana integration After deploying the new release to production, it was disappointing to notice that the delay in log delivery had not been fully eliminated. But on a positive note, we now had better visibility into what was happening inside the Go application, thanks to Grafana. One of the metrics we introduced was the event queue size.
The following screenshot from Grafana after the deployment shows that the queue size (of 256 events) was maxing out on most of the nodes in the cluster. Figure: Event queue size metrics in Grafana As mentioned earlier, the number of workers was made configurable in this release, but increasing the number of workers from the default 8 to 12 didn’t have much impact on the queue size. This was a strong indication that the bottleneck was elsewhere than in the forwarder.go application. A closer look at the splunk-forwarder cluster revealed that a few of the nodes in the cluster were struggling to process incoming messages at the right speed. After these nodes were resized (adding CPUs and memory), the log delay was reduced significantly, but the queue size within the forwarder.go process was still staying at the same level of 256. Iteration 2: Splunk HTTP Event Collector with event batching It was time to have a fresh think about the current setup and look at alternatives to sending logs to Splunk Cloud via the splunk-forwarder cluster. I discovered a blog post about the Splunk HTTP Event Collector and decided to give it a try. Getting started with HEC To get started with HTTP Event Collector you will need an endpoint URL and an authentication token. You can request an authentication token from Splunk Support. Testing the token on the command line Once you have the token you should verify that it works and that you are able to send data to Splunk. The easiest way to test the token is to use curl. All you’ll need is the endpoint URL, the token and some data to send to Splunk. Here is an example command sending the JSON document {“event”: “Splunk HTTP Collector event”} to the HEC endpoint with the token in the Authorization header.
Figure: Testing the authorization token using curl When the request is successful it returns a response: {“text”: “Success”, “code”: 0} Splunk HEC client and event batching Implementing the HEC client required a small amount of effort and it simplified the delivery process, as we no longer had the splunk-forwarder cluster in the diagram. Figure: Forwarder.go connecting directly to Splunk HTTP Event Collector We also introduced a configurable batch size which enables forwarder.go to batch events before sending them to Splunk Cloud. After deploying this release to live we could see a big drop in event queue size in Grafana. Figure: Event queue size metrics in Grafana after iteration 2 At the 17:30 mark in the above graph, the new release was promoted to production with a default batch size of 10, which resulted in the queue size falling below 100 events. At the 17:40 mark, the batch size was reconfigured to 20, which made the queue size drop below 50 events across all nodes. After introducing Splunk HEC and event batching, the forwarder.go application has much more headroom in the event queue to store events from journald. We no longer have to wait for hours for logs to appear in Splunk. Instead, we can monitor logs in real time with latency down to ~100ms. Figure: Splunk real-time log view I strongly recommend HEC for any application that currently uses a splunk-forwarder cluster. A reference implementation of an HEC client written in Go can be found on Github: https://github.com/Financial-Times/coco-splunk-http-forwarder.
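Since the main gain in iteration 2 came from batching, here is a small Go sketch of how multiple events can be combined into one HEC request body: the HEC wire format accepts several event JSON documents concatenated back to back, so a whole batch goes out in a single HTTP POST. This is an illustration of the format, not an excerpt from the linked FT forwarder; the endpoint URL and token in the comment are placeholders.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// hecEvent is the JSON envelope the HTTP Event Collector expects.
type hecEvent struct {
	Event string `json:"event"`
}

// buildBatch concatenates one JSON document per event into a single
// request body, which HEC accepts in one HTTP POST.
func buildBatch(events []string) ([]byte, error) {
	var body bytes.Buffer
	for _, e := range events {
		doc, err := json.Marshal(hecEvent{Event: e})
		if err != nil {
			return nil, err
		}
		body.Write(doc)
	}
	return body.Bytes(), nil
}

func main() {
	batch, _ := buildBatch([]string{"first event", "second event"})
	fmt.Println(string(batch))
	// The batch would then be POSTed to the HEC endpoint with the token
	// in the Authorization header, e.g. (placeholders):
	//   req, _ := http.NewRequest("POST",
	//       "https://<your-stack>/services/collector/event",
	//       bytes.NewReader(batch))
	//   req.Header.Set("Authorization", "Splunk <token>")
}
```

Growing the batch size amortises the per-request overhead, which is exactly what let the event queue drain in the Grafana graphs above.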
https://medium.com/ft-product-technology/splunk-http-event-collector-direct-pipe-to-splunk-a0bb971e6080
['Ft Product']
2018-02-16 14:19:42.603000+00:00
['Golang', 'Monitoring', 'Grafana', 'Development', 'API']
The Forge Pitching Guide
The Forge Pitching Guide Pitching can be difficult. We’re here for you. Photo: Koson Rattanaphan/EyeEm/Getty Images Hi! We’re so glad you’re here, and we are excited to have you pitch your awesome idea for a Forge story. But first… Forge is Medium’s personal development publication. We love stories about productivity, self-improvement, optimization, personal progress, mindfulness, and creativity. We also love stories that comment on the world of personal development in a thought-provoking way. Our stories are backed by journalistic rigor and offer a toolbox of research- and expert-backed strategies to work, live, and be more productive, inspired, and whole. Formatting your pitch We’re looking for: longer features that really dig into a topic or trend in depth; daily stories that are about 1000 words in length; and quick hits that are more like 500–750 words and just include a tip or two. When you pitch us, please first take a look at what’s on the site today and make sure your piece feels like a Forge piece! In your actual pitch, please format your email with the subject line like so: “Pitch: [headline of the piece]” so we know what we’re looking at. In the body of your email, include a suggested headline and a brief paragraph that outlines: -Your thesis, which should be specific and fresh. Think of this less like a topic (for example, conversation skills or time management) and more like an actual statement or stance (for example, “People think gossip is bad but it’s actually good because it can create social cohesion and remind us of how to act in a society,” or “You don’t have to say yes to everything people ask of you and having this system of these seven yes/no questions will help you not to overcommit”). -Your backup, by which we mean the names of any experts you plan to interview, links to research you plan to cite, or specific examples of the thing you want to highlight. -The takeaways for the reader. 
While we have a lot of respect for a writing process that’s about discovering as you go… this is not the time for that. You know? You know. Start knowing where you’re going to finish. Basically, including lots of specific details early in the pitching process helps us to know whether or not this piece is going to work for Forge! It’s more efficient this way — we promise. It also helps if you can include two or three clips of your writing from different publications, so we can get a feel for your voice. Topics that work for Forge Forge’s topic areas include (in no particular order): creativity, leadership, productivity, work, digital life, family, health, lifestyle, philosophy, mental health, relationships, self, sexuality, mindfulness, money, parenting, psychology, spirituality, neuroscience, addiction, career, friendship, aging, habits, masculinity, love and dating, digital overload, personal finance, body image, immortality and life extension, time and chronemics, the wellness industry, trauma and recovery, motivation, personality, learning and teaching, caregiving, death, grieving, hobbies, sleep, the brain… and the list goes on. We look at all these topics through the lens of personal development. If a story touches on some aspect of making life better — on the individual level, the wider cultural level, or even in how we think about what “better” means — it might be a fit for us. Forge’s voice and approach The stories we commission are generally grounded in expert knowledge, reporting, or research. Many of our stories will be “evergreen,” but it always helps to have a strong peg, making it clear why now is the perfect time to explore the topic: a new report or study, a trend, news, a current conversation, or the zeitgeist. Most of our stories are voicey, and reflect the perspectives, experiences, and cultures of a wide and diverse range of writers. Having said that, our stories are all thoughtful, reflective, and smart. We are often funny, but never snarky. 
We are not clinical, detached, or disembodied. We don’t commission many pure personal essays, though many of our pieces are written in the first person. We don’t use vague inspiration-speak or glibly suggest quick fixes to major personal problems. We don’t write about fads for fads’ sake, or celebrities for celebrities’ sake. We are not interested in takedowns or screeds. We respect our readers’ time. Some examples of good Forge stories “7 Questions to Ask Yourself Before Committing to Anything”: This is clear, actionable, helpful, and feels fresh. There’s voice and personality, and a very specific takeaway. The writer has expertise because she invented this system and knows it works for her. And while it’s a topic that’s familiar and relatable, the actual strategy she’s offering is unique — it’s not the same advice we’ve heard a million times before. “Gossip is Good”: An unexpected take that sounds counterintuitive at first, but then is backed up by solid research and some reporting on the writer’s part. The authority comes from the experts she interviews and cites. The writer has a voice too, and the right amount of liveliness. “5 Ways to Train for Creative Work Like an Athlete”: Helpful and concrete advice that also offers some good scaffolding for long-term thinking. The writer has the authority that comes with having succeeded at both of the things she’s writing about. “Emma Watson Didn’t Invent ‘Self-Partnership’”: A good way for us to address a newsy, trending topic — lots of background information and solid evidence combine to provide some context and to offer the reader a strategy for life. In the end it’s an evergreen piece, which is ideal. The practicalities of writing for us Once the editor and writer agree upon a story idea, prospective headline, word length, and due date, we will send the writer a contract to sign. You will receive payment within 30 days of the story’s publish date. We pay by the word. 
Here’s a useful link explaining the nuts and bolts of writing commissioned work for Medium. A last note If you have sent us a pitch and you don’t hear back within a few days, please feel free to follow up. If it’s a timely or competitive story, follow up sooner. We try to respond in a timely way, but we are busy and human and sometimes things fall through the cracks. So! Email us already: Amy Shearn, Senior Editor [email protected] Cari Nazeer, Deputy Editor [email protected] If you have pieces that are already live on Medium that you would like to be considered for Forge, please email [email protected]. See you in our inboxes!
https://forge.medium.com/the-forge-pitching-guide-74c9d7c32cb5
['Amy Shearn']
2019-12-06 22:11:36.641000+00:00
['Medium', 'Forge', 'Pitching', 'Writing']
Album of the Day — November 6. Schuyler Fisk — The Good Stuff
Album of the Day — November 6 Schuyler Fisk — The Good Stuff 05.November.2020 Schuyler Fisk The Good Stuff 2009 Given her pedigree, it’s no surprise Schuyler Fisk (pronounced SKY-lar) began her career as an actress. She has acted in some rather high-profile projects, including The Baby-Sitters Club, Orange County, Law & Order: SVU, and One Tree Hill. Rest assured, The Good Stuff isn’t the vanity project of an actor deciding to become a recording artist. It’s worth noting that not only does Fisk bear a striking resemblance to her mother, but one of her mother’s most famous roles was that of a singer/songwriter. Schuyler Fisk’s mother is Sissy Spacek. There — that’s outta the way. While she still acts, Schuyler Fisk really excels as a singer-songwriter. The Good Stuff may be one of the best folk albums you’ve never heard of. Digitally distributed in January of 2009, the record did make its way to #1 on the iTunes Folk Chart. But don’t be misled by the marketing label of “folk.” This isn’t folk in the sense of Joan Baez or Peter, Paul, and Mary …think more modern folk. Frankly, if this album were released in 1978, it would’ve been categorized as rock, maybe folk-rock. Think “California Sound,” like Joni Mitchell or Linda Ronstadt. Don’t be turned away by marketing labels. Released when Fisk was 27, the album stands at the precipice of maturity while keeping youth’s coy playfulness. Her songs have the innocence of youth, written with the blossoming understanding of maturity. It’s a brilliant confluence of purity and adulthood. It’s got all the boxes checked for both women and men. Ever loved a bad boy? CHECK. You’re Happenin’ To Me Light me up like the sun, Shake me down like an earthquake, Gonna ride with your love going no place. I’m hanging on like a hero, And dragged like a fool, And nobody can tell me I’m crazy. 
You make me feel like I’m high, Like I’m low, Like ya know what I don’t, Like I’m here then I’m gone, Like you’re singing my song. You can’t love me like I need, But baby you’re still happening to me, You don’t see the things I see, But baby, you’re still happening to me. Ever loved the freshness of falling in love? CHECK Sunshine Sunshine, sunshine I was lost before, Now I've found everything I'm looking for, And every step I take, Turns everything to color, where everything was gray, 'Cause I can say You got me, I got you on my mind, When I see you, I'm smiling on the inside, 'Cause everything is alright, You are my sunshine, sunshine Rain, rain's gone away, There's nothing here but blue skies every day, Lucky, lucky me, If only everyone could feel so weak in the knees, We'd all be happy You got me, I got you on my mind, When I see you, I'm smiling on the inside, 'Cause everything is alright, You are my sunshine, sunshine Oh, you are so far from what I had, Sunshine, sunshine, And I never thought that I could feel this You got me, I got you on my mind, When I see you, I'm smiling on the inside, 'Cause everything is alright, Never thought I'd feel this fine You got me, I got you on my mind, When I see you, I'm smiling on the inside, 'Cause everything is alright, Never thought I'd feel this fine, You are my sunshine, sunshine Sunshine, sunshine Ever been gutted by a love affair that ended when you didn’t want it to? 
CHECK Fall Apart Today I don’t want us to fall apart today or ever You’re the one who said you’d never leave There’s no good reasons for giving up All this mess is just bad luck So please don’t lose your confidence in me I wish I wasn’t so fragile ’Cause I know that I’m not easy to handle Baby please Don’t forget you love me Don’t forget you love me today Oh my baby please Don’t forget you love me Don’t forget you love me today I don’t wanna feel like this But I’m so tired of missing you I don’t wanna beg for your time I want you mine, all mine I wish I wasn’t so fragile ’Cause I know that I’m not easy to handle Baby please Don’t forget you love me Don’t forget you love me today Oh my baby please Don’t forget you love me Don’t forget you love me today I bet you smile when you think of me You love me messy in the morning Freckles on my knees Oh baby please Oh my baby please Don’t forget you love me Don’t forget you love me today Oh my baby please Don’t forget you love me Don’t forget you love me today Oh baby, sweet baby, oh Oh my baby, sweet baby Don’t forget you love me Ever wondered where you stood in a relationship? CHECK (a personal favorite, which is no easy pick on this record) Who Am I To You? Quiet lies beneath the blue moon Couldn’t say much less now could you? It’s on your mind It’s in your eyes But you disguise it you’re so charming If it’s not me you need To sleep beside If I’m not the thought that’s always on your mind If I’m not the reason why you dream at night The love you’ll never lose Who am I to you? I’m awake but you’re still sleeping Not some secret you’ve been keeping You’re hard to see Missed a green’ Soon the sun will come and save you If it’s not me you need to sleep beside If I’m not the thought that’s always on your mind If I’m not the reason why you dream at night The love you’ll never lose Who am I to you? 
I was here locked out My head so conflicted Everything I am I’m contradicted I’m so caught up I can’t let you go I need to know I need to know If it’s not me you need to sleep beside If I’m not the thought that’s always on your mind If it’s not me you need to sleep beside If I’m not the love that’s always on your mind If I’m not the reason why you dream at night about the love you’ll never lose Then who am I to you Who am I to you? For a 27-year-old, Schuyler Fisk had done some living by the time she wrote and recorded The Good Stuff. She’d also done a lot of loving, and like any good artist, a lot of feeling. Perhaps her heart hasn’t benefited from all the pain and hurt, but her art has, and so have our ears. The Good Stuff is an astounding album that somehow went largely unnoticed. CRITICS: Aja Gabel wrote in Virginia Magazine — “It’s full of folk-rock songs in Fisk’s distinctive clear-as-a-bell, breathy voice accompanied by guitar and piano. The record is smart and sassy, but also has a stripped-down sincerity missing from many debut albums. Even songs like “You’re Happening to Me” and “Miss You,” while based on snappy guitar riffs, stand out because of the confessional quality of the coming-of-age lyrics.” This isn’t a review, but it’s a profile of her from the late New York Times media critic David Carr on her performance in 2009 at The Bell House in Brooklyn. It’s lovely, and like everyone who read him, I miss him. Don’t be misled by the fact that a woman is singing these songs. If you’re a human being and you’ve had a romantic entanglement with another person, there is a song on The Good Stuff that represents that. And the album is aptly titled because, at the end of the day, relationships are really the good stuff …most of the time anyway. When they’re good, it makes all the other bullshit worth muddling through.
https://medium.com/etc-magazine/album-of-the-day-november-6-d2b6fc55eb22
['Keith R. Higgons']
2020-11-06 13:49:37.398000+00:00
['Album', 'Culture', 'Singer', 'Music', 'Art']
Overcome Obstacles to Grow Your Career
“What’s the secret?” “How did you do it?” “Very cool, okay but like how did you really do it? Did you take something?” “Where do I buy whatever course you took to become as good as you are now?” “Thank you so much for doing this, I appreciate you for taking the time to tell me your background and sharing all of those setbacks you had and the advice on how you overcame it all… so what’s the secret though?” That used to be me — as well as a lot of you when you’re trying to grow your career. Every person you look up to is a magical unicorn in your eyes; you just can’t fathom how they got to where they are. They must have done something to get there — it can’t be merely hard work and perseverance because that’s what you’ve been doing for the last two months straight. No, absolutely not. They must be part of an elite group — or better yet, they’re all just naturally born with a scarlet letter. S for Successful. When you see people doing what you want to do, and they’re doing it well, something shifts inside you, resulting in you doing one of two things. a.) You’re inspired — you’re amped up and ready to get to work b.) You think, “They just got lucky, I could never reach that level of success, it’s too hard.” And you don’t even bother trying. Jim Rohn once said, “Successful people do what unsuccessful people are not willing to do. Don’t wish it were easier; wish you were better.” As you’re navigating your career, the first few months — and even years — will be rough. You’re going to come face-to-face with obstacles that can either break you — or become the next step toward your career blooming. Here are a few essentials that will help you overcome those obstacles seamlessly.
https://medium.com/publishous/overcome-obstacles-to-grow-grow-your-career-1d66d7ed3a53
['Dayana Sabatin']
2020-10-09 16:37:55.349000+00:00
['Self Improvement', 'Success', 'Self', 'Lifestyle', 'Entrepreneurship']
Understanding Learning Rates and How It Improves Performance in Deep Learning
This post is an attempt to document my understanding of the following topics: What is the learning rate? What is its significance? How does one systematically arrive at a good learning rate? Why do we change the learning rate during training? How do we deal with learning rates when using a pretrained model? Much of this post is based on material written by past fast.ai fellows [1], [2], [5] and [3]. This is a concise version of it, arranged in a way for one to quickly get to the meat of the material. Do go over the references for more details. First off, what is a learning rate? The learning rate is a hyper-parameter that controls how much we adjust the weights of our network with respect to the loss gradient. The lower the value, the slower we travel along the downward slope. While this might be a good idea (using a low learning rate) in terms of making sure that we do not miss any local minima, it could also mean that we’ll be taking a long time to converge — especially if we get stuck on a plateau region. The following formula shows the relationship. new_weight = existing_weight - learning_rate * gradient Gradient descent with small (top) and large (bottom) learning rates. Source: Andrew Ng’s Machine Learning course on Coursera Typically, learning rates are configured naively at random by the user. At best, the user would leverage past experiences (or other types of learning material) to gain an intuition for the best value to use in setting learning rates. As such, it’s often hard to get it right. The diagram below demonstrates the different scenarios one can fall into when configuring the learning rate. Effect of various learning rates on convergence (Img Credit: cs231n) Furthermore, the learning rate affects how quickly our model can converge to a local minimum (aka arrive at the best accuracy). Thus, getting it right from the get-go would mean less time for us to train the model. Less training time, less money spent on GPU cloud compute. 
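The update rule above (new_weight = existing_weight - learning_rate * gradient) can be sketched in a few lines of plain Python. The quadratic toy loss and the specific rates below are illustrative assumptions, not something from the post:

```python
def gradient_descent(grad, start, learning_rate, steps):
    """Repeatedly apply: new_weight = existing_weight - learning_rate * gradient."""
    w = start
    for _ in range(steps):
        w = w - learning_rate * grad(w)
    return w

# Toy loss L(w) = (w - 3)^2, whose gradient is 2 * (w - 3); the minimum is at w = 3.
grad = lambda w: 2 * (w - 3)

w_slow = gradient_descent(grad, start=0.0, learning_rate=0.01, steps=100)  # creeps toward 3
w_good = gradient_descent(grad, start=0.0, learning_rate=0.1, steps=100)   # converges to 3
w_huge = gradient_descent(grad, start=0.0, learning_rate=1.5, steps=100)   # overshoots, diverges
```

With the small rate, the weight is still short of the minimum after 100 steps; with the well-chosen rate it converges; with the oversized rate each update overshoots the minimum and the iterates grow without bound, mirroring the cs231n convergence diagram referenced above.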
Is there a better way to determine the learning rate? In Section 3.3 of “Cyclical Learning Rates for Training Neural Networks” [4], Leslie N. Smith argued that you could estimate a good learning rate by training the model initially with a very low learning rate and increasing it (either linearly or exponentially) at each iteration. Learning rate increases after each mini-batch If we record the loss at each iteration and plot the learning rate (on a log scale) against the loss, we will see that as the learning rate increases, there will be a point where the loss stops decreasing and starts to increase. In practice, our learning rate should ideally be somewhere to the left of the lowest point of the graph (as demonstrated in the graph below). In this case, 0.001 to 0.01. The above seems useful. How can I start using it? At the moment it is supported as a function in the fast.ai package, developed by Jeremy Howard as a way to abstract the PyTorch package (much like how Keras is an abstraction for TensorFlow). One only needs to type in the following command to start finding the optimal learning rate to use before training a neural network.
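Framework aside, Smith's range-test procedure itself can be sketched in plain Python. The helper name, the synthetic loss curve, and the divide-by-ten heuristic below are illustrative assumptions for this sketch, not the fast.ai API:

```python
def lr_range_test(loss_at_lr, lr_min=1e-5, lr_max=10.0, num_iters=100):
    """Grow the learning rate exponentially each iteration, recording the loss.

    `loss_at_lr` stands in for "train one mini-batch at this rate and return
    its loss" -- a simplification of the real procedure for this sketch.
    """
    mult = (lr_max / lr_min) ** (1.0 / (num_iters - 1))
    lr, lrs, losses = lr_min, [], []
    for _ in range(num_iters):
        lrs.append(lr)
        losses.append(loss_at_lr(lr))
        lr *= mult
    # Heuristic: pick a rate to the left of the loss minimum on the curve.
    best = min(range(num_iters), key=losses.__getitem__)
    return lrs, losses, lrs[best] / 10.0

# Synthetic stand-in loss: improves as the rate grows, then blows up past ~0.1.
toy_loss = lambda lr: 1e-3 / (lr + 1e-3) + (lr / 0.1) ** 2

lrs, losses, suggested = lr_range_test(toy_loss)
```

On a real training run the loss curve is noisy, so it is usually smoothed before reading off the region where the loss is still falling steeply.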
https://towardsdatascience.com/understanding-learning-rates-and-how-it-improves-performance-in-deep-learning-d0d4059c1c10
['Hafidz Zulkifli']
2018-01-27 17:18:00.194000+00:00
['Artificial Intelligence', 'Machine Learning', 'Neural Networks']
What Is The Boundary Between Data Visualization and Other Types of Images?
(I’d like to say that no butterflies were harmed in the making of these posts… but… ) I have a question, James Lytle and Stephanie Tuerk… why do you feel the data needs to be quantitative? Can qualitative data not be visualised… or does it only constitute data visualisation if the underlying data is quantitative? Pierre Dragicevic: Stephanie Tuerk, I’ve been having the exact same questions as you. Jacques Bertin’s monosemic/polysemic distinction can help understand conceptual differences between data visualizations and other types of information-carrying images: “A system is monosemic when the meaning of each sign is known prior to observation of the collection of signs” […] “a system is polysemic when the meaning of the individual sign follows and is deduced from consideration of the collection of signs” (Semiology of Graphics). Others (e.g., Leland Wilkinson, Yuri Engelhardt) talk about visualizations as having a grammar. Robert Kosara also has a very interesting discussion on visualizations vs. other types of images: In his blog post, there’s an interesting comment from Hadley Wickham that visualizations should be “invertible”. This is easier to understand in the context of automatically-generated computer visualizations, which have often been described as multi-stage processes that turn raw data into images (the “visualization pipeline”). The idea is that a reader should be able to start from the image and get back to the data somehow. None of these analyses goes very deep into what makes a good vs. bad visualization, but I think it’s interesting to try to reflect on what an artifact is (a philosophical question) irrespective of whether it is effective at achieving its designers’ intent (an empirical question). lord: Insightful, Pierre Dragicevic… there’s something else too (I don’t know how this is expressed in theory) where there is emergent understanding from the collected signs (a gestalt) that is qualitatively different from the various data. 
Pierre Dragicevic: Right. There is a perceptual side to that (e.g., ensemble perception) and also a cognitive side: researchers talk about going from data to knowledge, insights, sometimes wisdom. Those are key aspects of visualization but it’s much more difficult for me to think in that space. Hilma af Klint, Group IX/SUW, The Swan, №9, 1915. Oil on canvas, 149.5 x 149 cm. Photo: Albin Dahlström, The Moderna Museet, Stockholm. Jason Forrest: Pierre Dragicevic — this “invertible” idea is VERY interesting and one that I think is especially important when the subject matter is more intangible. I started thinking about this a lot when I wrote about Hilma af Klint as she was using a Theosophical semiology, which itself was part scientific and part poetic, so her visualizations were designed to be studied and meditated on as (invertible?) spiritual guidance. lord, the mix of qualitative and quantitative I think is one of the defining issues at the moment, and I think it’s something that separates much of the more analytical work from being more persuasive. IMO, it’s that editorial/qualitative nudge that people seem to connect to. Stephanie Tuerk: Jason Forrest, I would argue that the “invertible” requirement regulates not only the how information is symbolized but what *kind* of information can be encoded in a visualization. Meaning that “information” that is too complex for its comprehension/perception to be easily and unambiguously confirmed can’t be represented in a visualization, i.e. images that attempt to embody that kind of “information” (which I would call something like a “concept” or “idea”) are some other type of image aside from a visualization. That is to say, representation is a larger category than visualization. Jason Forrest: I’m not sure I agree, as an illustration of “heaven” is still a translation of an idea. But maybe I’m pushing too far at an extreme with that example. 
Would you consider an early diagram of an atom a visualization even if it turns out to be wildly inaccurate later on? Stephanie Tuerk: No, I would consider any diagram of an atom to be a diagram, not a visualization! Contemporary or historical, irrespective of the image’s relationship to an actual atom. Also, to me the status of something as a visualization has nothing to do with the veracity of the information that is visualized (responding to your comment about “even if the image is inaccurate later on”) and more to do with the fact that the information shown begins as something that the author has in a form that is completely distinct from any graphical representation… and then is translated into graphic form… and then can be retrieved by the viewer and reconstructed in that distinct, non-graphical form. The evolution of atomic models in the 20th century: Thomson, Rutherford, Bohr, Heisenberg/Schrödinger lord: I agree with Stephanie Tuerk about the Bohr Atom… while data drove his theory, he wasn’t visually presenting any underlying data — he was representing a theory of atomic structure… it was a theory visualisation… Jason Forrest: maybe it’s graphic translation? James Lytle: lord, Good question. So, in general, I tend to view DV as the language of “How much?” or the grammar of scale, which grounds the heart of vis in how much of this or that (how much time, how much space, how much who/what). Naturally, in order to properly describe who or what you are talking about it is helpful to describe the profile of things (with numbers and characteristics) but then it is back to how those things relate to each other (ie how does this system work or interrelate, how are the parts of the petal structured in relation to each other?) So system diagrams are still very much vis to me in as much as they communicate how different pieces relate to one another. 
Arran LR: I found Boris Müller’s piece on considering DV as a cultural image interesting: I think the question I’m interested in is considering DV as communication and as a tool for insight. Stephanie Tuerk: I totally agree with the argument that data visualizations are cultural images, but I think that there is a lot of literature in the history of science, for example, that argues that the kinds of images that Müller is setting up as strawmen/“technical” images are also cultural images. I mean, it’s the same argument that you see a lot these days (or at least I see a lot) arguing that the data themselves are cultural products and not “objective.” Essentially…cultural/technical is a false binary. The question of “why quantitative” is a good one. (Am I allowed to say, “oh lord, that is a good question!“?) I guess first I want to retract a bit and say that I think that ordinal information can be visualized as well, but I think that once you get to the idea of turning an idea into an image, you are in linguistic territory (and that of classic semiotics) — in that you are making an image signify the idea of cat the same way that the word “cat” is the signifier of the signified “cat.” I guess that is to say, I think that a characteristic of visualization is that it, in and of itself, doesn’t create meaning, although it communicates information. (People may interpret meaning from it, but that meaning is not in the visualization itself.) I mean, there is no one definition and sure some people are going to want to say that all diagrams are visualizations. 
But then we ignore that there is a smaller class of things within diagrams that make meaning in a very specific way — in which graphical elements are used to represent discrete relationships whose success in faithfully transmitting information relies on the lack of interpretability of the original information— and fail to recognize that that is something distinct because we have expanded the term visualization to include the larger class of things. Anyway, I went to the wrong source (Tufte). Pierre already mentioned this, but (as almost always), Bertin provides sound counsel on the matter. lord: I loved the way you described the importance of relationships James Lytle. It’s these relationships that allow a discerning person to pick the signal from the noise…. Like, what aspects of a butterfly are functional (wing dimensions) and what are too variable (have too much noise) to be analytically interpreted (eg number of spots). So I have another question… If the diagram of an atom is not a data visualisation… What about something that represents an equation? There’s no actual data… But the equation encodes data. Does that count? Example; a graph that represents the equation of an aerofoil or an animated metronome that is defined by a sinusoidal equation. Pierre Dragicevic: I like the equation case, that’s a tough one. You can also imagine you have a simulated dataset and visualize it with a set of complex plots. Most people would probably say they are visualizations, and yet that’s just a more complex version of your equation case. You could argue the parameters of your models are the data. In the case of a sinusoidal equation, your data would then be a set of three quantities (period, phase and amplitude). An interesting implication is that a sinusoidal curve can be seen as a technique for visualizing three quantities. That’s one possible response to your question, there are probably many others. 
About butterflies: many would probably agree that naturally-occurring phenomena like footsteps on a beach are not visualizations. Yuri Engelhardt discusses this in his thesis. To him, a visualization needs to be an artifact (i.e., something created by a human). So if you come across a bunch of dead butterflies during a walk in the woods, you could infer a lot of things using many of the same perceptual and cognitive skills as the ones you use when you read a visualization, but you couldn’t really call this a visualization. In your photos, however, the dead butterflies were purposefully arranged in a way that makes it presumably easier to extract information. That, I think, comes closer to a visualization. Stephanie Tuerk: …but does visualization not imply the human act of translation of information from a non-visual medium into a visual medium? Jason Forrest: See, that’s where I am still/too. In my world, the data aspect is important, but not all-encompassing. Stephanie Tuerk: I mean, I would never argue that a display of butterflies does not convey information. It very much does! So to me, it returns to a different unanswerable question, which is, what are we hoping to get out of defining the term visualization? For me, I’m trying to find a definition that distinguishes (data) visualization from other forms of representation — like, a visualization may often be a diagram, but I’m interested in what makes not all diagrams visualizations. Obviously one could also want to define it as broadly as possible for different reasons, and then I guess the question is, when does something stop being a visualization? I’m not sure if these are two sides of the same coin or not… lord: I agree, Stephanie Tuerk that there needs to be a human act of presenting the data in a form that facilitates understanding that could not be gained otherwise… But I’m on the fence about the form not being visual to start with. 
If you take a bar chart and plot a single dot for each observation instead of an incremental rectangle, you are visualising a number. Each dot has been arranged informatively… The overall visualisation gives you information you can glance at and reveals more about the data than you’d get from just looking at a bunch of numbers. How is this different from Nabokov’s scientific arrangement of butterflies? If I had been counting butterflies and placed them one on top of each other, like I did with these chocolates… Is that no longer a visualisation? Stephanie Tuerk: lord, as I see it, if you are counting the butterflies, and then using the butterflies that you counted to make the visualization by stacking them one on top of the other, the information you are visualizing is a number, not the butterfly itself. (This use of the things you count to make the histogram is a particularly cheeky kind of visualization — it’s probably an index in Peirce’s terms) versus in scientific displays, what is on display is the butterfly itself. lord: Not really… I picked them up and arranged them in order of type, one on top of each other. Nabokov’s arrangement reveals even more about the data because he analysed more details about the butterflies and then arranged them in a structured way that reveals more about them to the discerning eye. Stephanie Tuerk: By the same token, if one took all of the Impressionist paintings in MoMA’s collection, and put them in a room, and ordered them by artist and decade, and put them on the wall in these groupings so that you can see the visual differences between them, has one made a visualization? I guess for me visualization involves a signified and a separate signifier (the graphic element that represents the signified), which come together to make a sign. Jason Forrest: You mean like this: Stephanie Tuerk: When is something the object itself and when is it a representation of the object? 
If you put images of artworks in a timeline, to me they come way closer to actually being “data”. They aren’t the paintings themselves. If you took a room at MoMA and painted a timeline on the wall and hung paintings on it though…1. omg would the art world be pissed, and 2. I feel like that is really some kind of precipice of something! To me that is the equivalent of the histogram of candy bars. (I appreciate the provocation btw!) Jason Forrest: I honestly think that most museums are exactly arranged by timeline. Some actually have a line and year, so I don’t see that aspect as that unusual. Stephanie Tuerk: Yeah, I mean, after all, these “artworks” here TRULY are data in that what we see is literally a reconstruction of hex code somewhere. Oh god, now I just made the whole internet a data visualization.
https://medium.com/nightingale/what-is-the-boundary-between-data-visualization-and-other-types-of-images-61cde5b46643
['Jason Forrest']
2019-07-20 17:48:44.869000+00:00
['Data Visualization', 'Dvhistory', 'Data', 'Design', 'History']
Docker Containers: an absolute prevail over Virtual Machines
Docker containers allow developers to run their apps in any environment, be it bare metal servers, cloud or virtual machines. Why don’t we drop the VMs for good then? The main idea of a Docker container is that once the Docker image is created, the container with it can be easily built and run in any environment — and it will behave exactly the same wherever it runs. This means that regardless of the underlying infrastructure, operating system, installed software, and other variables, the app will always run, as it has its runtime environment in the container. However, despite certain benefits we mentioned in our Docker vs Vagrant comparison, containers are not universal. Below we list several points to consider when choosing between containers and VMs: Containers provide more operational agility Containers underpin multi-cloud and hybrid systems Containers are easily integrated with existing IT workflows Containers are cheaper than VMs Containers are inherently more secure This is a brief overview of the points above. More operational agility Docker containers are built and launched according to scripts, so one command provides everything a developer needs. Every time a new VM is provisioned, it must be configured and all the needed software must be installed and configured before it can be used. Containers can be rebooted in a matter of seconds, while rebooting an app running on a VM requires restarting the VM and all the required services. Cornerstones of multi-cloud and hybrid systems As containers run equally well anywhere, they can be easily used as building blocks for distributed multi-cloud systems, spanning multiple availability zones and using various features of different providers. They can also work wonders in hybrid systems, where some parts of the infrastructure are deployed to the public cloud and mission-critical systems are kept on-prem or in a private virtual environment. 
Ease of integration with the existing IT infrastructure Mature companies have built their unique IT infrastructures around certain tools, workflows, people and skills. Once your IT department grasps the basic concepts of Docker containers, they will see multiple ways of integrating them into your unique software ecosystem. The main purpose of this process is reducing the time needed for app deployment and removing the “works well on my machine” situation that plagues the software delivery pipelines of many businesses. Containers are much cheaper than VMs Containerized apps share the same OS, libraries and other components, allowing you to save a ton of computing resources when deploying at scale. Actually, due to removing the hypervisor layer and consolidating the virtualized resources, containers are 300% more cost-effective as compared to VMs. Containers are inherently more secure Containers have multiple security layers and form a protective layer for the virtual host or bare metal server they run on. Every set of security restrictions within the container protects both the host and all the colocated containers, working well with the whole range of virtualization security features. Final thoughts on why Docker containers prevail over Virtual Machines For all the reasons above, Docker containers gain ever-increasing importance for software delivery and production maintenance. However, there are multiple reasons why they cannot become the mainstay of software development. First of all, virtual machine software like Vagrant or vSphere is designed specifically to support software development pipelines and works great in tandem with multiple Big Data analytics tools, monitoring solutions and code version control systems. Secondly, full-scale migration to the cloud and transition to using Kubernetes clusters can be quite a costly endeavor, especially for mature enterprise IT infrastructures. 
That said, it can be done and it must be done, one project at a time — but until all the workloads move to Kubernetes, virtual machines will have to do. Is your company still running virtual machines, or have you containerized your workloads already? Which approach suits your operational profile best? Please let us know below!
https://medium.com/datadriveninvestor/docker-containers-an-absolute-prevail-over-virtual-machines-c595cdec1897
['Vladimir Fedak']
2018-11-12 10:21:49.595000+00:00
['Docker', 'Virtual Machine', 'Kubernetes', 'Cloud', 'Workflow']
The 10 YC Companies (W18) I'd Invest In
YC keeps getting bigger and bigger. There were 128 companies (!) in the Winter class. These companies presented to YC partners and potential investors on 3/19 and 3/20/18. The Top 10 Companies I’d Invest In **Credit to TechCrunch for their descriptions of the companies below** Juni Learning — Teach Children Computer Science TechCrunch Description: Juni is an online education program for kids that is targeting the $9 billion after-school market. The idea is to start with teaching kids computer science in a virtual, one-on-one setting by pairing them with tutors. It charges $250 per month for once-a-week classes. Juni says it’s grown 25 percent month over month in the last six months. The company also says it’s profitable, with a 95 percent monthly renewal rate. Without adequate computer science courses in schools, and the skills clearly becoming critical for future employment, Juni could educate the next generation of programmers. Why I Would Invest: There are currently over 260k open technology jobs worth $21B (in the US alone). These numbers are guaranteed to grow rapidly in the coming years. Early education is one of the ways to help fill this gap, and Juni has positioned itself well here. Traditional schools haven’t moved fast enough to teach computer science, and Juni could be the private solution that our country and our world needs to teach computer science at scale. 2. Veriff — Online Identity Verification TechCrunch Description: Veriff wants to be Stripe for online identity verification, handling the processing of drivers’ licenses, passports, and IDs for websites. They did $60k in revenue in February, and are currently profitable. They charge ~$1 per verification. Why I Would Invest: Checkr proved that a “simple” background-checking API has massive value to the gig economy (the company is valued >$1B currently). Veriff, while young, offers a similar “simple” value proposition: instant verification of identity online and a per-verification charge. 
Their potential customer base is huge and their presumably very high margins are attractive too! 3. Onederful — Dental Insurance Verification API TechCrunch Description: Onederful is an API for dental insurance. Onederful says dentists’ offices lose $6B in revenue per year due to insurance claim problems, and spend $3 billion a year on high-friction claim verification. Onederful’s API integrates with 240 insurance providers to rapidly and reliably verify a patient’s insurance and make sure the dentist gets paid. Onederful doesn’t have to sell dentist by dentist, and instead is developing partnerships with the top dental software suites for distribution. Why I Would Invest: How many times have you been at a dentist’s or doctor’s office and the receptionist spends 15–30 minutes on the phone with your insurance company’s IVR only to ultimately fail to get the information they need? This leads to the dentist/doctor taking a chance that your insurance is valid, and they ultimately lose $6B/year on invalid claims. Onederful provides instant, frictionless insurance verification for dentists’ offices to save time and avoid lost revenue. I’m a big fan of their distribution strategy: focusing on the top dental software suites instead of going door-to-door. While they will sacrifice some margin with this approach, they’ll be able to scale much faster. 4. HelloVerify — Background Checks in India Don’t judge HelloVerify by their website! TechCrunch Description: HelloVerify is doing online instant background checks in India, where the government has recently announced it will begin digitizing all personal records. The startup has lined itself up to be among the first to take advantage of this legislation. The company currently has $3 million in annual revenue and has closed $1 million in orders in the past 60 days. The company’s early customers include Accenture, Infosys and Cognizant.
Why I Would Invest: Are you starting to see a theme with a few of the companies I would invest in? HelloVerify takes the Checkr model to India. With India’s rapid digitization, HelloVerify is well-positioned to be the interface between companies in India and government-managed personal records. Given India’s massive population and rapidly growing skilled workforce, the opportunity here for HelloVerify is huge. 5. NexGenT — Certificate Programs for IT Positions TechCrunch Description: Bootcamps became insanely popular in the mid-2010s, but there’s been a big shakeout since then — and NexGenT hopes to take on the fundamentals of getting an engineer production-ready, but with a different approach. Rather than try to have someone ready to be a full-scale developer in 3 months, NexGenT focuses on just certificate programs to get people ready to be network engineers. The process is longer, but hopefully more robust as well. Why I Would Invest: NexGenT keeps IT career training simpler. Instead of promising that you’ll be full-stack in 8 weeks (not possible), they focus on entry-level technical careers that are still in high demand, but much more achievable. These careers also tend to have strong certificate programs that are well recognized and accepted by Fortune 500 companies. I’m interested to learn more about the cost/payment structure for NexGenT courses and would want to learn more about their job placement success rate before investing.
https://medium.com/startup-frontier/the-10-yc-companies-w18-id-invest-in-c1fff90653f5
['Alex Mitchell']
2020-07-11 13:16:33.293000+00:00
['Computer Science', 'Tech', 'Investing', 'Startup', 'API']
Data Visualization in Python
Hello World! This is Vivek from Goalist. If you want to build a very powerful machine learning algorithm on structured data, then the first step to take is to explore the data every which way you can. You draw a Histogram; you draw a Correlogram; you draw Cross Plots. You really want to understand what is in that data: what each variable means, what its distribution is, and ideally how it was collected. Once you have a real rock-solid understanding of what’s in the data, only then can you smoothly “roll down the hill” into creating your machine learning model. In this post, let’s go through the different libraries in Python and gain some insight into our data by visualizing it. Along with me, you may want to try and experiment with the packages that we are about to explore. Use Google Colab to follow along. Well, you may ask, what is Google Colab? Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser. Learn more about Google Colab here… Follow this URL to create your notebook. So let’s get started… We’ll be using the following packages to plot different graphs: Matplotlib, Seaborn, and Bokeh. Out of these, Matplotlib is the most common charting package; see its documentation for details and its examples for inspiration. 1) Line Graphs A line graph is commonly used to show trends over time. It can help readers to understand your data in a few seconds. Sample code to generate a Line Graph is given below. Go ahead and open the sample code in Colab and experiment with it.
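The line-graph sample the article points to did not survive extraction, so here is a minimal sketch of the kind of code it describes, using Matplotlib. The year/sales values and labels below are invented purely for illustration; in Colab, `plt.show()` renders the figure inline.

```python
# Minimal line-graph sketch with Matplotlib.
# The years/sales values are made-up illustration data, not from the article.
import matplotlib
matplotlib.use("Agg")  # headless backend; you can omit this line in Colab

import matplotlib.pyplot as plt

years = [2014, 2015, 2016, 2017, 2018]
sales = [120, 150, 135, 180, 210]

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(years, sales, marker="o", label="Units sold")  # one line = one trend
ax.set_xlabel("Year")
ax.set_ylabel("Units sold")
ax.set_title("A simple trend over time")
ax.legend()
fig.savefig("line_graph.png")  # in a notebook, use plt.show() instead
```

The same `fig, ax` pattern carries over to the other chart types and libraries discussed later, which is why it is worth getting comfortable with it first.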
https://medium.com/in-pursuit-of-artificial-intelligence/data-visualization-in-python-9aa1d9c2baec
['Vivek Amilkanthawar']
2019-02-28 01:54:35.931000+00:00
['Machine Learning Tools', 'Data Analysis', 'Python', 'Data Science', 'Data Visualization']
The Hopeful Truth Behind Todd Phillips’ Joker (2019)
The Hopeful Truth Behind Todd Phillips’ Joker (2019) Understanding the stigma surrounding mental illness Joker (2019) Arthur Fleck sits in a locked and padded room, wearing white scrubs and a crazed smile. He’s cuffed to a metal chair — to a metal desk that sits in-between him and the psychiatrist. A wicked cackle and a lit cigarette hang from his lips. “What’s so funny?” says the psychiatrist. “Just thinking. Thinking of a joke,” says Arthur. “You wanna tell it to me?” says the psychiatrist. “You wouldn’t get it,” says Arthur, smirking ear to ear, beginning to sing the words to Frank Sinatra’s That’s Life — That’s life (that’s life!). And as crazy as it may seem… If you’ve seen Todd Phillips’s controversial film Joker (2019), you’ve felt the terror of Arthur’s final words — it’s just a movie, right? But you know what comes next: the perilous fate that awaits the psychiatrist. Yet the knowledge of what’s to come can’t quite ward off this feeling: the cold tendrils creeping up your spine as the horrors of Phillips’s film unfold before you. Even when you’re at home, surrounded by loved ones, supposedly safe, in bed, the covers pulled tight around you — you’re still checking corners, looking into the dark ones, peeking through shades and veiled curtains, hoping the madness hasn’t seeped in somewhere, unnoticed. What’s that there in the darkness? Behind the painted face, the wicked smile, the cackling eyes and the maniacal laughter that will haunt your sleepless nights for days to come. Is it just attention that Arthur seeks, or is there more to it? Maybe, he’s right — you don’t get it. The Stigma Mental illness is a social health crisis that affects millions of people in the U.S. each year. Yet even as the numbers of mental health cases continue to rise, awareness surrounding the crisis is drastically trivialized and misunderstood. This is a problem. Because for people suffering with mental illness, loneliness is an everyday occurrence.
It makes it hard to get up in the morning. Sometimes, impossible. It leaves you feeling hopeless, like you can’t be any other way, like things will never get better. Ever. Because you’re not good enough. Not confident, happy or sane enough. Not normal. No one gets it…I’m utterly alone, you think to yourself as you lie in bed wearing the same sweats you’ve been wearing for the past three weeks, watching all eight Harry Potter movies in order — then reverse order and back again — just to feel an iota of happiness. Just one, an iota. An inkling of a spark of hope, joy, or a semblance of anything that can distract you from your shitty life for just a moment. Because it’s dark down there in those confined mental-spaces, those cramped thought-shafts and low-feeling tunnels, where you toil away restlessly, forever digging your own personal well of despair… Deeper and deeper down the rabbit hole. This kind of darkness grows every day because of the stigma: a gross generalization that perpetuates the corrosive perception that the mentally ill, or those suffering with mental afflictions, are a detriment to society — a bane to be ostracized and obscured into shameful subservience to the injustices of social norms. It’s this flawed perception that’s proving fatal to those who suffer with mental illness, because the stigma continues to perpetuate a serious lack of social awareness surrounding mental health. Which perpetuates a disastrous notion — the stigma — surrounding the crisis. Living With Mental Illness Mental illness is a widespread disease that doesn’t discriminate with its afflictions, infecting any mind, anywhere, anytime. According to a survey by NAMI (National Alliance on Mental Illness), 1 in 5 adults experience living with a mental illness (anxiety, depression, PTSD). Subsequently, 1 in 25 adults experience living with a serious mental illness (bipolar disorder, borderline personality disorder, obsessive compulsive disorder, schizophrenia).
Unfortunately, adults are not the only ones affected by this crisis. 17% of kids (ages 6–17) in the U.S. also experience living with this sickness. I can relate. I’ve been afraid for most of my life. I can hardly remember a time when I wasn’t afraid of something — afraid of being judged, laughed at, ostracized, anxious as hell every second of the day because I didn’t feel safe in this turbulent world we live in. Because like millions of others out there I, too, live with mental illness. I’m anxious, manic and depressed. And I live with these afflictions each and every day. But I’m dealing, coping, probably better than most; through years of therapy and cultivating a sense of self-awareness that I use to wage war against the deleterious thoughts that riddle my mind like parasites. Yet what has helped me most through these long years of pain and suffering has been the love and support of my family. It’s our collective strength that has guided me through the darkness. But I’m still afraid. In 2017, there were an estimated 1,400,000 suicide attempts in the U.S. 47,173 of those attempts were fatal. Today, suicide is the 10th leading cause of death in the U.S. Unfortunately, these are the most current statistics that we have on this health crisis. The stigma surrounding suicide has led to underreporting, and the AFSP (American Foundation for Suicide Prevention) estimates the numbers to be higher. Scary, right? It’s this kind of madness that Joker reveals. And through the eyes of Arthur Fleck you catch a rare glimpse of the pain and suffering of one of cinema’s most iconically disturbed minds. Joker (2019) FADE IN… INT. LOCKER ROOM — DAY ARTHUR FLECK sits in front of a mirror, painting his face. Tears begin to well in his eyes, as he forces a smile. It doesn’t come easily. He uses his index fingers to stretch it out from ear to ear. Then a silent tear trickles down his painted cheek. He continues to smile, a broken grin that stretches painfully, forcefully.
Ear to ear. EXT. SIDEWALK, GOTHAM — DAY Arthur stands out front of a shop, promoting. He’s in full clown attire, painted face and all. He’s twirling a sign that says: Everything Must Go. A gang of kids ramble down the sidewalk, jeering as they rush up to the would-be clown. Arthur is unaware, a jovial smile plastered on his face as he works, just trying to eke out a living the only way he knows how. But then, his sign is ripped from his unsuspecting grasp by the gang of kids, who hoot and holler, taking off down the street with Arthur in hot pursuit. EXT. ALLEYWAY, GOTHAM — LATER Arthur races down the street, in and out of traffic, then turns a corner into the alley and WHACK! One of the kids smashes the sign over Arthur’s head, shattering it to pieces. Head over heels, Arthur hits the asphalt before he can even shout for help — not like he’d get it, anyway. “Get him!” the lead kid says, the gang of kids beginning to kick the shit out of Arthur. “Harder! Kick him harder!” Arthur is helpless, huddled in a feeble ball, gangly, twisted, limbs that nearly crack under the pressure of the kids’ boots: sizes 7 and up. There’s nothing he can do… But wait for it to end. INSERT TITLE: JOKER FADE OUT… Joker’s opening sequence sets the tone. It shows you Arthur, his sadness, and the violent neglect that surrounds him. It’s this willful ignorance that Phillips portrays in his depiction of Gotham City, shedding light on the stigma and the controversy incited by the film’s representation of mental illness in society. The Case of Arthur Fleck (the Stigma Personified) Joker is a standalone depiction of one of DC’s most iconic villains. It’s a unique and widely controversial portrayal from director Todd Phillips (The Hangover Trilogy). It’s definitely not your typical superhero flick (and it’s not trying to be). First off, it’s about the origin of the villain: Arthur Fleck, played by Joaquin Phoenix — he’s definitely not [insert raspy Christian Bale voice] Batman.
Second, instead of creating an antagonist whom you love to hate, Phillips portrays Arthur as the protagonist, a symbolic hero in his own right — idol to the poor, the downtrodden, the lost and misunderstood rejects of Gotham City. This is where the controversy stems from — why the film incites criticism and fear — because there’s a serious misunderstanding of Joker’s underlying theme: the negative effect mental illness has on those who suffer from its afflictions. FADE IN… INT. SOCIAL WORKER’S OFFICE — DAY Arthur laughs, not yet a cackle. He’s crying, smoking a cigarette. The female SOCIAL WORKER grunts, waits. “Is it just me, or is it getting crazier out there?” says Arthur. “It’s certainly tense. People are upset. They’re struggling, looking for work. These are tough times. Have you been keeping up with your journal,” she says, looking down at his chart, scratching her pen needlessly as Arthur squirms in his seat. “Yes, ma’am,” says Arthur. “Great. Did you bring it,” she says. Arthur’s knee shakes as he takes a drag and stays silent. “Arthur, last time I told you to bring your journal. Can I see it,” she says. Arthur looks down, scattered. His leg continues to twitch. Finally, stuttering, he takes out his journal and hands it gingerly to the social worker. She takes the journal from Arthur with tired hands, beginning to rifle through the crumpled pages of manic scrawl. “I’ve been using it, as a journal. But also, as a joke diary, you know, funny thoughts, or, observations…I think I told you I was pursuing a career in stand-up comedy,” says Arthur. “No, you didn’t,” she says. “I think, I did,” says Arthur. But she’s not listening. She continues to flip through the psychotic ramblings of Arthur Fleck. She stops on a page. A line stands out in bold: I just hope my death makes more cents than my life. She reads the sentence aloud. Arthur giggles. The social worker shakes her head — done for the day. FADE OUT… This scene is the film’s most telling representation of the stigma.
It shows you the weakness of human nature. The flawed aspects of social conduct that tend towards pseudo-norms of acceptable ignorance, willful neglect and self-righteous indifference. Arthur should be able to rely on the social worker to understand his pain, yet she’s indifferent. She sees Arthur’s plea — I just hope my death makes more cents than my life — and all she wants is to get through the session, because the city has cut her funding and now she’s out of a job. This is Arthur’s plight, his pain. It’s what incites the film’s narrative. It’s the root of his motivation, stemming every action he takes, every interaction he has, and how each contributes to his creeping descent into madness. By showing Arthur’s pain, Phillips allows you to empathize with a character whose moral compass doesn’t contain a North Star; someone you wouldn’t think twice to care about — but you do, right? You feel for Arthur, you see him — his brutal upbringing and the violent neglect that ignites his turbulent mind. You begin to empathize with him — albeit, unwillingly. This is how Phillips gets you to invest in the film, and how he gets you to invest in the notion of the stigma and its disastrous effects on those who suffer with mental illness. Joker’s Gotham (Empathy, Hope and Madness) In Phillips’s Gotham, ignorance reigns supreme while the rich perpetuate the segregation of social classes through propaganda. Here you see the stigma; rooted deep and set in with concrete, built atop metal skyscrapers and the dying hopes and dreams of Gotham’s poor and downtrodden. Yet Phillips also shows you what Arthur needs most: someone to listen, to care, to show an iota of compassion for his plight. Because a person denied empathy — someone who’s devoid of society’s respect, compassion, and understanding — is a person without hope. And then the madness sets in. FADE IN… INT. ARTHUR’S APARTMENT — DAY Arthur sits in the entryway. Next to him is the body of RANDAL.
A pair of shears protrudes from his eye socket, blood pooling underneath his body. “Why would you do that, Arthur!” says GARY, huddling in the corner. Arthur huffs and puffs. “Do you watch the Murray Franklin show — I’m gonna be on tonight.” Gary cowers. “It’s okay, Gary. You can go. I’m not gonna hurt you,” Arthur says, breathing deep. Gary scurries past Arthur. As fast as his short legs can muster, he rushes to the door and pulls on the knob with ready hands — CLACKK-CHNKK-THUDD “Hey, Arthur…Can you get the, umm, lock,” says Gary, pointing up. “Sorry, Gary,” Arthur says, still huffing. He gets up and heads to the door. He begins to open it, then shuts it. “Gary…” “Yeah…” says Gary. “You were the only one that was ever nice to me,” Arthur says, kissing the crown of Gary’s bald head. “Get out of here.” The door opens and Gary rushes out, tail between his legs. FADE OUT… Preventing Madness (Human Kindness and a Critique of the Criticism) As you know, Joker has incited a wave of criticism for its controversial portrayal of Arthur Fleck, as an empathetic protagonist — this didn’t sit well with a lot of people. They said it was the violence. Surviving families from the 2012 Aurora shootings — where 12 were killed at the screening of The Dark Knight Rises — spoke out to express their concern that Joker would incite the same kind of violence…The U.S. military even warned against “incel violence” at the premiere of the film…(There were no specific, nor credible threats made) They said it was misinformed. One of the more toxic ideas that Joker subscribes to is the hackneyed association between serious mental illness and extreme violence…(Yet, there have been more mass shootings than days this [past] year…According to the Gun Violence Archive (GVA), there were 417 mass shootings by the end of 2019) The Fallacy The fallacy in these arguments comes down to 3 things: ( 1 ) The film is a work of fiction and not reality. 
Therefore the events that take place in the film are purposefully exaggerated. It’s art: an expression of reality, not reality itself. ( 2 ) Violence is a part of human nature — always has been, probably always will be — and cannot be avoided by assuming misconceptions about a widely misunderstood topic, nor by hysterical speculation. Hysteria is a madness of its own sorts. ( 3 ) Mental illness is a deeply personal experience and not subject to scrutiny under generalized terms. (i.e. per case, as each individual experiences a unique onset of afflictions. What is true for one, is not true for all) Yes, the events in the film are horrendous and you hope that Arthur’s heinous acts never reach your streets, but the possibility for mentally ill patients to take their own lives is devastatingly real. Because, when a person is denied the hope of a better tomorrow, death can seem like the only option. But it’s not. There’s still hope. Yet the fallacy stokes the controversy. In turn, perpetuating the hopelessness that leads to the unnecessary death of those living with mental illness. So when considering Joker and the issues it presents, you must acknowledge that the film is a work of fiction and not an explicit representation of reality. Therefore, the themes take precedence. Not the stigma.
https://medium.com/pop-off/the-hopeful-truth-behind-todd-phillipss-joker-2019-6b7e1e0a905a
['Ryan Dimalanta']
2020-05-30 14:52:39.652000+00:00
['Suicide', 'Mental Illness', 'Mental Health', 'Filmmaking', 'Film']
How I Write 2000+ Words Each Day
Thomas Dylan Daniel, 2019 Writing during times of stress is a sure-fire way to get clear about what’s bugging you, and in general, this seems to lead to relief. In this article, I’ll walk you through my routine. I’ve written and published three books at this point, with another one slated for release soon, and a fifth in the works. I’ve done a patent, as well as a whitepaper, and almost 50 articles here on Medium this year alone. I hold a certificate in Professional Ethics from Texas State University as well as a Master of Arts in Applied Philosophy and Ethics. I sit on an editorial board and recommend for or against publication for different books. I’ve started a Medium publication called Serious Philosophy. And I write, literally every day, something like 2000 words. The key concept in writing every day is not simply putting words down, willy-nilly. Instead, it helps to have a problem in mind that your words are centered around and designed to solve. The bigger, more timeless your problem happens to be, the more you’ll be able to write about it, but the less actual resolution you’ll be able to find. Going with the flow One trick I’ve picked up, which comes from a lifetime of immersion in texts — find my reading list for 2020 here — is to mentally structure a work and produce a first draft with one stream of consciousness. This is something you can do for a scene, a poem, or an essay — but likely not a whole book. The trick is to avoid writing about the subject for a few hours. Do your chores, go shopping, socialize, and then when you sit down in front of your keyboard, shift gears. Get into the headspace of the thing you want to do, now that you’ve diligently eliminated your distractions, and engage your process. Sometimes it’s best to write from an outline. What that looks like for me most of the time is a single paragraph that says everything I want to say.
I’ll go back and press Enter on the keyboard between each thought, with no regard for where the sentences begin and end. I’ll come back and fix them later, and as a rule of thumb, each Enter results in a paragraph or two of explication. Sometimes you’ll just have a good stream of consciousness you want to record. These generally take some work after they’re written — you’ll want to come back and cite sources or at least place links so that your reader understands the web of other writers you’re engaging with. If you’re telling a story, you may want to come back and edit for perspective, description, or plot. You can always come back later and look back over your work, make a few changes or corrections where necessary, and publish when it’s ready. The main thing is to focus on quantity, and write every day — you can come back and pick out the things you end up liking for publication later, but you will have a hard time remembering thoughts you never wrote down. Where the flow comes from When you’re sitting down to write, sometimes your mind will be blank. In my mind, this is a sure sign that I’m not reading enough. I’ll need to pick up a book or flip to an article and follow the bread crumbs until I’ve essentially proven my thought to myself, and at that point it becomes a matter of proving this to the reader in an objective way as well. I tend to read philosophy books a lot, because that’s my area of expertise. Yours may be quite different, but the process shouldn’t vary overmuch. I’ve successfully applied my method to everything from narrative fiction and nonfiction to scientific concepts and startup manifestos — once you have a solid method down, you’ll be good to go wherever you need to. Don’t believe in expertise Sure, experts exist. There are certain facts that lead one to a better grasp of a situation. But experts are not infallible, especially about the big stuff. 
Writers who do their job well can contribute to a wide range of topics meaningfully simply by reading the work of the field in which a given question resides and then applying this new knowledge in pursuit of the question. In some sense, this is the origin of expertise in the first place. Think about it. Before there were experts, there were driven people who really wanted to figure out how things worked. These people studied hard and eventually became experts because they knew more than others in whatever the given field turned out to be. All expertise really involves is becoming one of those people and being up to date in the mechanics and particular jargon and logic of a given field. This frequently takes years of school, but a writer and an expert are not the same thing — an expert lives in the house; a writer peeps in through the window to see what’s going on, or shows up at the front door to ask the expert a question or two. Now, a bunch of uneducated people pretending to be experts can, sure enough, yield absurdities like the flat earth movement, but it’s easy to avoid such tropes if you’re capable of participating in real scientific inquiry. Science is rooted in self-critique; that is, the ability to admit when you’re wrong about something. In fact, science often contains more questions than answers — scientists love to learn that they’re wrong because definitively disproving a hypothesis always produces new knowledge. To write about something important, you need to recognize that you don’t have to be an expert to see what’s going on — but you do, definitely, need to actually know what’s going on. The scientific approach to writing As lovely as it would surely be to just be right from the outset about everything, it’s relatively plain to see that this is impossible. So when you write, ask others to review your work critically. A good reviewer will be able to ask questions of your work without tearing it down.
I sit on the editorial board for philosophy at Cambridge Scholars Publishing, and anyone who has had me review their work over the years will probably appreciate the challenging criticism I tend to provide. This involves testing core theses of the works and pushing back against assumptions made by the authors — not always with the aim of preventing their books from being published, but often simply to engage with these works in a deep way, if possible! A good philosophy book isn’t always right about everything it says, but will always provide the reader with a variety of jumping-off points from which to think about a given issue or a handful of issues. Philosophy is a particularly scientific field, which is to say that it was the birthplace of scientific inquiry, sure, but also that the steps of the scientific method — conjecture, hypothesis, confirmation, and finally proof — are all integral parts of it. We want to show why a given train of thought is valid, but also where the problems are for it. How to know if it’s ready Unfortunately, the simplest answer is that you can’t. You can have others read it and gauge their responses, you can try it with different audiences, you can proof until your hands are covered with red ink and your eyes are glazed over and you just sit, catatonic for hours… But nothing you do will ever make anything you’ve written perfect. That’s why they say the perfect is the enemy of the good. It’s better to start with crap — believe me when I say crap is an integral part of doing anything good — and go for finished projects, which mean what you wanted them to mean when you started. That way, you’ll produce quantity, which will ultimately help you start to produce quality. What are you waiting for?! Get to it! You are a unique person with unique thoughts to share with the rest of us, and we can’t wait to hear them!
https://medium.com/bulletproof-writers/how-i-write-2000-words-each-day-8dc02ec1737e
['Thomas Dylan Daniel']
2020-04-08 21:40:38.095000+00:00
['Writing Tips', 'Writing', 'Writers Block', 'Creative Process', 'Creative Writing']
How Writing a Memoir Is an Experience in Shape Shifting
THIS IS US How Writing a Memoir Is an Experience in Shape Shifting Lyric essay “Dad, do you remember the Picasso painting in you and mom’s bedroom on Avondale?” I hear the catch in my voice, hoping I’m not hurting my stepmom, Lil’s, feelings. Before the memoir, my stomach clenched in knots of fear and worry when we talked about our history. Now, we smile and talk, and understand there’s a unique side to every story. Sometimes we cry. Memory is hazy. “Dad, when did you move to California? I mean what year? What grade was I in?” “Hmm…1990 or 1991. I took you to see Fantasia on my birthday, right before leaving.” We sing into each other’s ears. Lil’s texting me a Braque print, wondering if this is the one I’m talking about. My brain is working out the numbers. Sixth grade. It was sixth grade. I realize it doesn’t really matter if I say Picasso or Braque in my memoir. It’s the effect I want — the cubist effect of a whole picture splitting up — the opening scene — my parents’ divorce when I was four years old. 1982. Dad confirms I remember the year of the divorce correctly. Writing a memoir is hard work. There was a crab in the sand in Boston. Mom took me to the beach while Dad worked on his dissertation. We stayed in a distant relative’s mansion. I was two. Two years old. I let my eyes follow the crab crawling through the sand on the beach. My earliest memory? Maybe. Sunshine, ocean mist. Joy. Pure joy. Red crab crawling through grains of light brown sand, slow, calm crab, happy. I remember being on my parents’ bed. Being informed they were splitting up. My brain, at four, processed this information in a way to protect me, I think. I quickly sped to two of everything! I was smiling, happy. My parents were taken aback. This is not the reaction they were expecting. In some ways, I feel like I’ve spent the rest of my life catching up on emotions that were initially processed to save myself the pain of feeling hurt — or feeling deeply. Jot down a note. Ask mom about Jason.
Writing a memoir is having tough conversations. It’s an excavation. I feel like a ball of clay. Words are my medium and I shape them. I’m a shapeshifter. A magician. My dreams have been weird lately — with my mom, driving through an ocean. Walking through a mall and witnessing a blood-stained floor from a crime scene. Running back and forth. Towards and away from the hard things. Jason. He was a few years younger than me. Mexican. Pale skin. Bike rider. Porch visits. Not afraid to tease me. When I was in high school he was in middle school. You’re too old for him. I’d chastise myself. And, never, never let him know how much I adored him. Police swarming, sirens flashing red and blue. I saw the scene from my bedroom window. A scene I could never unsee. The police were there because Jason, thirteen, had been shot and killed. What happened exactly? Cold case. Case closed. Heartbroken. Guns and violence infiltrating my history in the 1990s. Not so long ago, my classmate had been murdered by her father who then turned the gun on himself. A family dead. Dead and gone. I didn’t understand why anyone would want a goddamned gun. My heart ached. I was traumatized, terrified. Depression, anxiety, OCD and rage infiltrating the cells of my body, my being, my energy — and no words came to me. I was a frozen child. Flight, fight, freeze — frozen. Biding my time to adulthood. Always, always, I return to my history. Writing a memoir is not easy. Was it Fantasia’s 50th anniversary? Maybe. I sat in the darkened movie theater in 1990, my throat tight with fear. A dull ache. A cry wanting to escape. Mickey Mouse, a wizard, working his magic, sweeping the flood away. The lights were fantastic. Dad, don’t leave me. There was nothing to say though. I gulped back tears and watched the movie unfold. The psychedelic effects, beautiful, and crashing hard against my mood. Happy Birthday, Dad. Goodbye. Dad, don’t leave me. So many words unspoken. I’m a lucky one.
My parents are alive and willing to work through the hard stuff — all three of them. I thought my memoir was written. I submitted it to publishers and everything. 48,000 confessional words. Nature heals. The main message. No. That’s not long enough. You want a shot at the bigs, like Penguin? You need to fill in your story. 70,000 words. Minimum. The screen stared at me. My chapters laughed at me. Child, child, they said. You didn’t write about ages 0–14, ages 17–25. Right. Writing a memoir is hard work. I avoided this history. The one where I might hurt my parents even more with the words I weave on the page. Shapeshifter. Magician. Words wield power I don’t want to misfire. Ages 0–14. Terrified. Panic. Rage. OCD. Depression. Ages 17–25. Lost. Sex, drugs, and rock and roll. Anxiety. Depression. OCD. Married. Terrified. Don’t want to misfire words at my husband. Writing a memoir is complicated. And, worth it. An afterthought. My double digit birthday. Mimi and Poppy bought me tickets to Disney World. Dad and I met them in Orlando. The fireworks were spectacular. I wouldn’t try a big rollercoaster. A Small World was my type of ride. I hugged Goofy. The photo is across from me as I type these words. Side trip to Kennedy Space Center. I saw a rocket blasting off. I was lonely. An only child. Only kid at Disney World. Lost in the waves of amusement. Memories, I drift into memories, my personal history, hoping my book is cohesive and interesting. Hoping my shape shifting holds a magician’s power and captivates you from beginning to end. From end to opening. Because storytelling is life. We’re all clay, shaping our memoirs through this life we’re living, telling our own stories.
https://aimeegramblin.medium.com/how-writing-a-memoir-is-an-experience-in-shape-shifting-41494701d46d
['Aimée Gramblin']
2020-12-29 08:04:32.045000+00:00
['Writing', 'Self', 'Memoir', 'Family', 'This Is Us']
What to Know About Using Cannabis Right Now
What to Know About Using Cannabis Right Now Should consumers be concerned about weed in the age of Covid-19? Photo: Gabriella Giannini/EyeEm/Getty Images During this time of national lockdown, some states have deemed medical and/or recreational cannabis dispensaries to be essential businesses, keeping them open while following new safety precautions, such as allowing for curbside pickup so customers don’t have to come in contact with other shoppers, and dispensary employees and delivery people wearing gloves and masks while working. (Rules are changing every day, but as of March 30, 21 states had dispensaries open to some extent.) But should consumers be concerned about using cannabis — particularly inhaling it — considering Covid-19 attacks the respiratory system, especially your lungs? While research on the effects of smoking cannabis on the novel coronavirus is scarce, experts warn that smoking or vaping anything is certainly not great for the lungs, no matter if it’s during a pandemic or not. “[Whether you’re] smoking tobacco, smoking cannabis, or vaping, you’re introducing foreign elements down deep into the lungs,” says Richard Castriotta, MD, a pulmonary critical care and sleep medicine specialist at Keck Medicine at the University of Southern California. “If you do a lot of it, you have more risk of a sustained injury with less of a chance for the lungs to recuperate and heal themselves over.” American Lung Association spokesperson Cedric “Jamie” Rutland, MD, says smoking specifically damages type 2 pneumocyte cells in the lungs — cells that are crucial to providing support to the lungs. “It turns out the coronavirus also binds to the type 2 pneumocytes and causes significant illness that way,” Rutland says. “If you already have less type 2 pneumocytes, your lung is already under a significant amount of stress. 
So if you smoke and you contract the coronavirus, you’re probably going to be that much worse off.” Castriotta also urges people to stop vaping — tobacco and cannabis — due to the added chemicals and their unknown long-term effects. He especially warns against using illicit cannabis vapes that have not been properly regulated and tested and are sold illegally. Illicit vapes are often contaminated with vitamin E acetate, a chemical that has been linked to many cases of respiratory illness and death. “Anything that you could do to reduce the risk of lung injury, you should do,” he says. “If you smoke anything and injure the lung in any way, you’re increasing the risk of the virus being able to penetrate deeper into the lung.”
https://elemental.medium.com/what-to-know-about-using-cannabis-right-now-27a9c53b8fe2
['Ashley Laderer']
2020-04-13 05:31:00.907000+00:00
['Marijuana', 'Body', 'Cannabis', 'Covid 19', 'Coronavirus']
Karma Progress 24.Sep-02.Oct. Working Fiat Banking and Detailed Borrowers' Rating
Hello, dear friends! What Has Been Delivered? First of all, we’ve updated the legal documents between borrower and investor according to Russian law and improved their automatic generation. When an investor accepts an offer, he or she now receives a complete preliminary document with all the necessary data. We’ve added a tool for working with guarantors (front-end and back-office parts). Now an investor can view and download the surety agreement offer before making a decision. The final test of the bank connection’s SOAP interface has been completed. It helps to: Track cash flows between the accounts of investor and borrower on the Karma platform automatically and in real time. Make automatic payments from the borrower’s account to the investor’s account. All new investments and investors will be displayed on the progress bar of every bid. A Detailed Explanation of Borrowers' Rating Some of our subscribers asked us to clarify the meaning of the rating values we announced in the previous post. So, let's get started! ААА — Maximum safety level. High capacity to fulfill debt obligations completely and on time. АА — High safety level. Strong security factors. High capacity to fulfill debt obligations completely and on time. The risk level is moderate, but it can change with the economic environment. А — The safety level is above average. The protection factors are not very strong, but quite effective. Moderately high capacity to fulfill debt obligations completely and on time. Greater sensitivity to the impact of negative changes in commercial, financial and economic conditions. ВВВ — The safety level is below average. The protection factors are below average, but sufficient for circumspect investment. Adequate capacity to fulfill debt obligations completely and on time, but at the same time there is a high sensitivity to the impact of negative changes in commercial, financial and economic conditions. ВВ — Speculative level (not investment grade). 
It is below the level of safe investment; however, principal liabilities are sufficiently protected. There is practically no risk of a sharp credit quality decline in the short term, but there is high sensitivity to the impact of negative changes in commercial, financial and economic conditions. Delays in interest payments and repayment of the principal of a loan are possible. В — High speculative level. It is below the level of safe investment, and there is a risk of default. The level of financial protection greatly depends on the stage of economic development. There is a higher sensitivity to negative commercial, financial and economic conditions. At the moment, redemption of the principal and interest is likely, but it is possible that a negative economic situation will lead to a delay in payments. Frequent increases or decreases in the rating may happen. ССС — Significant risk level; the rating object is hard pressed. It is significantly below the investment level. There is great uncertainty whether the principal and interest will be redeemed on time, or whether only a favorable combination of circumstances will allow the debt to be settled on time. In an unlucky turn of events, payments may be suspended. СС — Super speculative level. Very low level of credit quality. At the present time, there is a high probability that the rating object will fail to meet its debt obligations; refusal of payment is possible. C — Regular delays in principal or interest repayment. Very low level of credit quality. A bankruptcy procedure or similar action has been taken with respect to the rating object, but payments or fulfillment of debt obligations are still ongoing. Timely fulfillment of debt obligations is very unlikely without attracting additional sources of credit quality. D — Refusal of payment (default). Principal or interest payments are overdue at the moment. Cheers ^_^
https://medium.com/karmared/karma-progress-24-sep-02-oct-working-fiat-banking-and-detailed-borrowers-rating-812392a898a1
['Karma Project']
2018-10-04 11:25:34.825000+00:00
['Blockchain', 'Loans', 'Development', 'Finance', 'Banking']
Learn Anything Faster
Before you dive into this article I must mention that learning anything there is to know about everything is impossible. While it may be obvious that this is a fact, many of us struggle to accept that we know enough about anything. I am one of those people. As a web designer, developer, and entrepreneur I have this struggle day to day. I feel so far behind other peers in my community that I instantly feel incompetent in my skills. I strive to learn anything and everything I can but often find myself at wit’s end when I just can’t digest the new skills I’d like to acquire. To overcome this obstacle one might think they must stop and learn this fancy new sought-after skill immediately, but in reality, this is why we all have different job roles. Most people are great at a few sets of things and not so great at others. Combining forces and skills is what ultimately makes a great team. Teams can do more. I often have to remind myself to niche out my skills and seek to improve those over others I’m not fully comfortable with. With those polished skills, I can stand atop many others who aren’t as great in the same niche. This same principle can be applied to a business. Establish a niche first and win the hearts of your consumers. Once they trust your brand then you can start to branch outside into unfamiliar territory assuming you need to. Never stop learning Even if you are niching out your offerings or skills you can still always be learning. I for one enjoy learning new things about the web design industry. New design patterns, guidelines, programming languages and more fascinate me. To learn these things I have to submerge myself in the community. Doing this can happen both physically and through the internet. Forums, YouTube, Slack Channels, Newsletters, Blogs and more have become my blueprints towards fast-paced learning. Video, in particular, helps me tremendously which is why I set out to make Web-Crunch.com. 
A publication dedicated to showing the latest to do with the web. Seek Guidance and Feedback You can learn a lot alone. In fact, I learn best alone simply because I can go at my own pace. Only when you apply your knowledge will you know if you’re indeed learning. The best way to do this is to dive right in and practice. I, for example, begin building things when I want to learn a new language or form of web development. Books, guides, documentation, and videos only get you so far until you apply it. The best way to validate your work is to test it. Put it in front of others. Apply what you learn and broadcast it to seek feedback or guidance from your community. Doing so will both allow you to confirm what you have learned as well as connect with others who may be doing the same. Distractions Distractions can hinder progress a great deal when it comes to learning something new. Practice makes perfect. If practice ceases to exist your knowledge may do the same. Limit social media, phone calls, texting, television, and opening tons of new tabs on your computer’s internet browser ;). Priorities To learn you must focus on the task at hand. To learn fast you have to immerse yourself in the task. I recently learned an entirely new framework known as Ruby on Rails. The framework itself is very approachable for a novice web developer but the knowledge of getting things up and running took me a long time. To understand the framework I spent an hour or two nearly every day for a couple of months submerging myself in the Ruby on Rails community. I visited forums, watched YouTube videos, created very basic apps. I tried, failed and tried again until finally, things started to click. Now some 6 months later I’ve successfully built my first web app that I plan to launch within the next year. If not for focusing all that time on Ruby on Rails, I would still be learning and probably would have given up on it if it wasn’t my priority to learn. Commitments We all have commitments. 
Being a human is a challenge. Oftentimes family, work, travel, and more become things that can halt the learning process. A good learner can schedule around these commitments and still devote time to obtaining new knowledge. Even if you only have a spare 15 minutes on some days to learn something new it is still worth it! Planning To learn you need to plan accordingly. Learning fast requires planning, period. You don’t need to build a highly detailed calendar of learning or anything of that nature but I would suggest shooting for a set number of hours a week. Allow yourself to get lost in your learning. I do this at times and as a result, come out the other side with so many more ideas. Learning is inspirational. Use it to your advantage. The Downsides to Multitasking One might think that multitasking is the most efficient way to learn. Unfortunately, multitasking can actually hinder learning progress. It may even cause you to learn the wrong things as a result of constantly going back and forth between tasks. We can only do so much at once. While multitasking is a good thing for skills you already have, it isn’t for obtaining new skills. Spend some devoted time to learning and you will learn it at an accelerated rate. Going back to my example about learning a new web development framework: had I not gone head deep into that learning experience, my knowledge would have suffered, and that’s not counting the things I still don’t know. Give yourself a free day every week In a given week I tend to devote a single day to work on the things I have been meaning to. You can classify this day as my “side projects” day. These projects tend to be things I’m building, planning, or learning about. To build some of these side projects I need to obtain a skill I didn’t already have. As a result, a large chunk of my “side projects” day is spent doing research, reading the trials and errors of other people learning the same thing and applying that knowledge. 
These days in return give me immediate satisfaction in terms of applying new knowledge. At your normal job, you may do repetitive things that are pretty mind-numbing or just uninspirational. While these tasks make the world go around, they don’t cause you to learn anything new. Your brain goes unstimulated and, as a result, you get caught in a comfort zone. Only when you step outside this comfort zone will you be able to digest new knowledge and learn new skills. Key Takeaways To learn anything faster you have to be all in. Submerge yourself into what it is you want to learn. Talk to others who are trying to accomplish the same goal. Together you can bounce ideas, questions, and concerns back and forth, which ultimately makes you learn more while doing so. Learning takes time, patience, and tons of practice. No one I know can truly learn something until they apply it. Give it all you’ve got and you will come out the other side with new knowledge, ideas, and inspiration. Follow us:
https://medium.com/couple-of-creatives/learn-anything-faster-929d9c74d365
['Andy Leverenz']
2017-06-26 17:15:44.413000+00:00
['Self Improvement', 'Learning', 'Marketing', 'Mentorship', 'Web Development']
Should We Still Use OrderedDict in Python?
If you worked with Python 2 or an early version of Python 3, you probably remember that, in the past, dictionaries were not ordered. If you wanted to have a dictionary that preserved the insertion order, the go-to solution was to use OrderedDict from the collections module. In Python 3.6, dictionaries were redesigned to improve their performance (their memory usage was decreased by around 20–25%). This change had an interesting side-effect — dictionaries became ordered (although this order was not officially guaranteed). “Not officially guaranteed” means that it was just an implementation detail that could be removed in a future Python release. But starting from Python 3.7, the insertion-order preservation has been guaranteed in the language specification. If you started your journey with Python 3.7 or a newer version, you probably don’t know the world where you need a separate data structure to preserve the insertion order in a dictionary. So if there is no need to use the OrderedDict, why is it still included in the collections module? Maybe it’s more efficient? Let’s find out! OrderedDict vs dict For my benchmarks, I will perform some typical dictionary operations: Create a dictionary of 100 elements Add a new item Check if an item exists in a dictionary Grab an existing and nonexistent item with the get method To simplify the code, I wrap steps 2–4 in a function that accepts a dictionary (or OrderedDict) as an argument. Let’s compare both functions. I run my benchmarks under Python 3.8: OrderedDict is over 80% slower than the standard Python dictionary (8.6/4.7≈1.83). What happens if the dictionary size grows to 10,000 elements? After increasing the dictionary size by 100x, the difference between both functions stays the same. OrderedDict still takes almost twice as long to perform the same operations as a standard Python dictionary. There is no point in testing even bigger dictionaries. 
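The benchmark code itself is not reproduced in this excerpt; a rough sketch of the setup the article describes might look like this (the key names, the exact operations inside the wrapper, and the repeat count are my assumptions, not the author's code):

```python
import timeit
from collections import OrderedDict

def exercise(d):
    # steps 2-4 from the article: add a new item, check membership,
    # then get an existing and a nonexistent item
    d['new_key'] = 'new_value'
    'key50' in d
    d.get('key42')
    d.get('missing_key')
    return d

# step 1: a dictionary of 100 elements, in both flavors
plain = {f'key{i}': i for i in range(100)}
ordered = OrderedDict((f'key{i}', i) for i in range(100))

# copy on each run so the dictionaries don't grow between iterations
t_plain = timeit.timeit(lambda: exercise(dict(plain)), number=10_000)
t_ordered = timeit.timeit(lambda: exercise(OrderedDict(ordered)), number=10_000)
print(f'dict: {t_plain:.3f}s   OrderedDict: {t_ordered:.3f}s')
```

The absolute timings depend on your machine and Python build, so only the ratio between the two numbers is meaningful.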
If you need a really big dictionary, you should use more efficient data structures from the NumPy or pandas libraries. When to use OrderedDict? If the OrderedDict is slower, why would you want to use it? I can think of at least two reasons: You are still using a Python version that doesn’t guarantee the order in dictionaries (pre-3.7). In this case, you don’t have a choice. You want to use additional features that OrderedDict offers. For example, it can be reversed. If you try to run the reversed() function on a standard dictionary in Python 3.7 or earlier, you will get an error, but OrderedDict will nicely return a reversed version of itself. (Note that Python 3.8 added reversed() support for standard dictionaries as well.) How to stay up to date on Python changes? If you are using one of the latest versions of Python, dictionaries are ordered by default. But it’s easy to miss changes like this, especially if you upgrade your Python version by a few releases at once, and you don’t read the release notes carefully. I usually read some blog posts when there is a new version of Python coming out (there are plenty of blog posts around that time), so I catch the essential updates. The best source of information is the official documentation. Unlike a lot of documentation that I have seen in my life, the “What’s New in Python 3” page is written in a very approachable language. It’s easy to read and grasp the most significant changes. If you haven’t done it yet, go check it out. I reread it a few days ago, and I was surprised how many features I forgot about!
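Two concrete examples of the "additional features" mentioned above that a plain dict still does not offer: order-sensitive equality and in-place reordering with move_to_end():

```python
from collections import OrderedDict

a = OrderedDict([('x', 1), ('y', 2)])
b = OrderedDict([('y', 2), ('x', 1)])

# OrderedDict equality compares order as well as contents...
print(a == b)              # False

# ...while plain dict equality ignores insertion order
print(dict(a) == dict(b))  # True

# move_to_end() reorders a key in place; plain dict has no equivalent
a.move_to_end('x')
print(list(a))             # ['y', 'x']
```

Order-sensitive equality matters in tests that assert on ordering; move_to_end() is what makes OrderedDict a common building block for LRU caches.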
https://medium.com/python-in-plain-english/should-we-still-use-ordereddict-in-python-f223c85a01d5
['Sebastian Witowski']
2020-10-08 17:58:12.531000+00:00
['Best Practices', 'Python', 'Performance', 'Tips And Tricks', 'Programming']
Never use Alt-Tab again with this free productivity tool
Being a modern-day knowledge worker means having a lot of applications open at any given time. My usual culprits include Chrome, Outlook, Excel, MindManager, Notion, Explorer, Teams, along with a host of other applications that see intermittent use. Switching between these can be done in a few ways; some involve using a mouse while others involve the keyboard. Each switch only takes a few seconds at most, but I found out many years ago that it can easily take me out of a flow state if I have to click around too much. So I’ve always been playing around with ways to make this app switching more transparent in my workflow. Before I show you what my current setup looks like, let’s review the different ways Windows 10 allows us to switch between different applications. Using a mouse Click on the application in the taskbar This is probably the most common way most people use to switch between applications. Perfectly acceptable, if a little slow. Simply find the icon for the app you want, and hit it with the mouse pointer. Here I’m switching to Chrome. If the application you need is visible, click anywhere inside its window If you are rapidly switching between a few applications, you might have all of them visible. In which case it might be best to simply switch to it by clicking somewhere within the boundaries of the window. You see the window you want? Just click it! Task view button Click the “Task View” button in the Taskbar. This brings up a view of all open windows. It also shows a list of documents you’ve opened over the last 30 days. I’ve never used that feature; it’s always seemed like a gimmick. But maybe it floats your boat. You can also add virtual desktops on this view, but I don’t make use of that feature either. I find that things get lost that way. Using a keyboard Here is where we get to the good stuff. I like keeping my hands on my keyboard as much as possible. 
Going from the keyboard to the mouse and back again disrupts my flow state. I also get an ache in my wrist whenever I have to use a mouse for an extended period of time. Here are the different built-in ways Windows 10 allows us to switch between open applications. Alt + Tab Pressing the Alt+Tab keyboard combination, and continuing to hold the Alt key while repeatedly pressing Tab, cycles between open windows. Power tip: holding Alt while pressing Shift-Tab cycles in the opposite direction. This is the best Windows 10 has to offer. It’s been around since Windows 2.0! Windows 2.0 — Used with permission from Microsoft. Windows key + Tab Pressing Windows+Tab brings up a hybrid between Alt-Tab and macOS’s Exposé feature. It’s the same view as you get by clicking the Task View button on the taskbar, which I showed in a screenshot earlier. To use the keyboard to select a different window, you have to use the arrow keys. Hitting Win+Tab again just closes the view. This view feels like it’s designed to be used along with a mouse. So I don’t use it that much. Windows key + top number row Press and hold the Windows key and select any number from the top keyboard row, from 1 through 0. This corresponds to the pinned and open icons in the taskbar. Using my taskbar as an example, if I hit Win+2 I switch to Explorer, while Win+3 would switch to Chrome. A hot tip when using this feature: if more than one window is open for a particular application, holding down Win and repeatedly pressing the same number key cycles through the open windows. It doesn’t cycle between open browser tabs, though. Furthermore, if only one window is open, it alternates between minimizing the window and restoring it. What I do to shave h̵o̵u̵r̵s̵ seconds off my day Many years ago I came across something called AutoHotKey. It’s a small application you can download and use to run scripts on Windows. 
What it enables you to do is automate just about everything in Windows. Some people have taken it to extremes, which I really think is not the best use of it as a tool. Better and more robust tools exist if you want to build something complicated. AutoHotKey is a programming language, or a scripting language to be more precise. The possibilities are virtually endless. But what I want to show you today is how I use it to map a hotkey combination to perform certain actions. I use it here to activate a particular app. For example, I use Win+C to activate Chrome, and Win+N to activate Notion. In the script below you see the code I use to do this.

SetTitleMatchMode, 2 ; Match anywhere inside the window title

#c:: ; Windows+C to activate or run Chrome
IfWinExist, Google Chrome
    IfWinActive
        WinMinimize
    else
        WinActivate
else
    run, "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"
return

#n:: ; Windows+N to activate Notion
IfWinExist, ahk_exe Notion.exe
    IfWinActive
        WinMinimize
    else
        WinActivate
return

For most people, that looks like gibberish. So some explanation is required to get the hang of the syntax. Ignore the first line for now; it just tells AutoHotKey how to look for each window. If you want to read more about that particular function, the documentation is here. Let’s look at the first part:

#c::

This tells AutoHotKey to listen for the Windows key and C (the letter c on your keyboard) being pressed together. The hashtag is the sign AutoHotKey uses to indicate the Windows key. You can first press the Win key and then C, or press both at the same time. The two colons then indicate that an action is to be performed when that key combination is pressed. The next line is

IfWinExist, Google Chrome

This tells AutoHotKey to check if there is a window that contains the text “Google Chrome” somewhere in its title. It would report a match if the window is named “AutoHotkey — Google Search — Google Chrome”. 
This is because we set SetTitleMatchMode to 2, which matches a window title even if it’s only a partial match. If that window is already active, the script then minimizes it:

WinMinimize

If it exists but isn’t active, the script activates it instead, which brings the window to the front. Like magic, I say.

else
    WinActivate

The final piece is to indicate to AutoHotKey that this particular sequence of events is over. We do that by using the return command.

return

Switching between apps I use this method to switch between apps whenever I can. I still have to resort to clicking particular windows with my mouse or even Alt-Tab sometimes. But this is how I prefer to move around. It’s become instinct, second nature almost. I miss it whenever I use my Mac or borrow someone’s computer. These are the apps and hotkeys I’ve mapped to switch between apps:

Chrome → Win+C
Outlook → Win+O
Teams → Win+T
PowerPoint → Win+P
Word → Win+W
Excel → Win+X
Notion → Win+N

I’d love to have a count for each time I’ve used these hotkeys. It’s in the hundreds each day. It allows me to switch without taking my hands off the keyboard. If you like the idea of doing that, you’re going to need to learn other keyboard shortcuts as well, but that’s a post for another day. Using the script To run an AutoHotKey script, you need to download the software from Autohotkey.com. When you’ve done that, save your .ahk script somewhere and double-click it. It should automatically be run using AutoHotKey. Go create your own scripts AutoHotKey is a great way to learn about automation. I’ll share some of my other scripts soon, but in the meantime, check out these guides on the AutoHotKey site. https://www.autohotkey.com/docs/Tutorial.htm
https://medium.com/the-innovation/never-use-alt-tab-again-with-this-free-productivity-tool-2dc4e6058316
['Johannes Thor']
2020-11-08 14:27:33.497000+00:00
['Tips', 'Productivity', 'Keyboard', 'Computers', 'Life Hacking']
Scraping Whiskey Review Data to Build a Recommendation System
The Data We collected our data from Whiskybase, which is a website devoted to whiskey enthusiasts. Users on the site can rate whiskeys from all around the world and share their ratings and profile with all other users on the site. To gather our data we needed three primary components: User ID Whiskey the user rated Rating for that specific whiskey Scraping the data at first seemed to be an easy task — just use BeautifulSoup4 to pull the specific HTML tags from the page to get the content we needed. But wait, not so fast. Of course, there turned out to be more barriers than anticipated to get to the pages we needed. We realized that, for our script to run properly, we would need to be able to sign in to Whiskybase to access the profiles of the other users on the website. The perfect solution for this task is a package called Selenium (one of the coolest packages I’ve worked with). Selenium is a web automation tool that can be used to mimic human interactions with web pages. So we added this to our script to allow for an auto sign-in feature. Now with Selenium working, we’re able to successfully scrape the data that’s needed. We end up gathering user review data from the top 1,000 whiskeys in the world, as well as Whiskybase’s list of newly released whiskeys. When our scrape was finished, we had data from:
https://medium.com/better-programming/scraping-whiskey-review-data-to-build-a-recommendation-system-af6b82f31301
['Austin L.E. Krause']
2019-08-30 19:55:08.349000+00:00
['Programming', 'Data Science', 'Python', 'Machine Learning', 'Recommendation System']
Pair Push Notifications with your Email Campaigns
Unified marketing strategies can boost engagement across all channels. While email is still the best channel in terms of ROI and engagement, it’s not always the best method of communicating with customers. For example, a flash sale relies upon speedy responses from your customers. Just 21% of emails are opened within the first hour of receipt. The average email opening time is 6.5 hours. That’s where integrated campaigns come in: campaigns that utilize other channels, such as social posts or push notifications, to support email. Adding additional touchpoints to your marketing campaign is a great way to increase the engagement and conversion rates across your entire campaign. Pairing push notifications with your email campaigns can increase engagement by 3 times. Photo by Jonas Leupe on Unsplash From awareness to action At the beginning of your campaign, when the first email, ad, post or other communication goes out, it’s unlikely to convert the viewer into a customer or sale right away. People just don’t work like that. We need to be reminded. This is true even when it’s something that we want or have been looking for. An average of three views is needed before your ad gets noticed. The idea is that the first time your message is seen by an individual, it’ll get a ‘What was that?’ response from the viewer. The second time a ‘What of it?’ response, and the third time the viewer actually becomes engaged with the message. That’s if you’ve targeted correctly, and hit the right tone for the demographic. Once you’ve hit the three impressions mark for a lead, you might get a click. Other individuals in the group you’ve targeted may take a little longer, and, of course, some just won’t be interested at all. When you do get the initial engagement you’re able to collect some information. This allows you to purposefully begin moving leads through the sales and marketing funnel. The initial engagement and impressions stage could be social media ads, cold emails or direct mail. 
The three impressions mark is a general rule across all advertising forms. It’s a little like fishing. The initial hook needs to be recast several times in the pool of ideal customers (your segmented audience). Once you get a bite, you employ further tactics to reel them in and make a conversion. Customers, who have already engaged with your brand, often need reminders about promotions too. This is where push notifications come in for them, as well as the new leads you generate. The action plan Once potential customers have shown an interest in your brand, you’re able to begin engaging with them in a more meaningful way. That can involve sending emails about your latest promotions, new content or ads on the platforms you know they use. Pop-ups on blog posts, social ad clicks, whatever you’ve used to capture data are just the beginning of the ride. There are a couple of different types of push notification. Each has its own strengths and weaknesses. Web push notifications are, by necessity, short and sweet. Excellent for delivery notifications, order updates or time-sensitive messages, web push notifications can be used to prompt leads and customers to check their inbox for your offer. They’re used mostly for short alerts that direct the receiver to a specific action. The downside to web push is that they need an app and can only be used for smartphone users or web browsing on a desktop or other mobile device. If your ideal customer doesn’t have these, they won’t receive your web push notifications. SMS messages, on the other hand, don’t require a smart device to be received. You can also include links in SMS to send customers to landing pages, blogs or even social channels if they are on a smart device. The advantage of SMS messages over web push is that there’s no need to have an app built for them. They are covered by TCPA and GDPR though, just like web push. 
This means you need to be aware of the times you’re sending SMS messages and allow people to opt out of future communications. Whichever you settle on — or even if you use both — push notifications are an ideal way to keep your campaign front of mind with customers. Don’t overdo it, though: you don’t want to get in their faces and become annoying rather than helpful. Combining email and push notifications 70% of customers prefer to engage with brands via email. However, 72% of customers want an integrated marketing approach that allows for multiple touchpoints. Considering these facts, using email as the main artery of your campaign makes sense. Supplementing emails with push notifications and/or SMS makes even more sense. So how do you do it?
https://vicwomersley-42664.medium.com/pair-push-notifications-with-your-email-campaigns-f2b36cefc776
['Vic Womersley']
2020-06-03 07:34:34.577000+00:00
['Integrated Marketing', 'Email Marketing Tips', 'Marketing', 'Push Notification', 'Digital Marketing']
BeautifulSoup Python Library
Let’s start scraping real websites rather than a saved HTML file. To grab the source code of any website we use requests and pass the URL of the website as an argument. In this article we use the website ‘http://coreyms.com’ just for example purposes.

from bs4 import BeautifulSoup
import requests
import csv

source = requests.get('http://coreyms.com').text
soup = BeautifulSoup(source, 'lxml')
print(soup.prettify())
# output will be the source code of the website in HTML form

If you want to see the structure of a website page, just right-click and select the Inspect option; you’ll be able to see the HTML code of that page. Now, to get the first article from the website:

article = soup.find('article')
print(article.prettify())
# the output will be the HTML-structured code of the article section

# to grab the headline from the first article
headline = article.h2.a.text
print(headline)
# 'h2' is the heading tag and 'a' is the anchor tag, whose href attribute contains the link; together they hold the headline of the article. The output will be the headline of the article in plain text.

Use print(article.prettify()) again to see where the summary tag is; after that, comment it out using ‘#’.

# to grab the summary from the article
summary = article.find('div', class_='entry-content').p.text
print(summary)
# 'p' stands for paragraph; the output will be the first paragraph of the 'div' tag's content

Again, uncomment print(article.prettify()) and see where the video source is. If you know a bit about YouTube videos, every video has an ID. Now comment out print(article.prettify()) again.

# to grab the source of the video
vid_src = article.find('iframe', class_='youtube-player')['src']
print(vid_src)
# output will be the link to the embedded video

Now to grab the video ID (comment out print(vid_src) first): 
vid_id = vid_src.split('/')[4]
vid_id = vid_id.split('?')[0]
print(vid_id)
# split('/') breaks vid_src on '/' and [4] is the index of the video ID; split('?') then separates the ID from the extra query info, and [0] keeps the part before the '?'. The output will be the video ID.

Now we can create a YouTube video link using that video ID (comment out print(vid_id) first):

yt_link = f'https://youtube.com/watch?v={vid_id}'
print(yt_link)
# in the URL above, '?' starts the query string and 'v' is the video parameter, to which we pass the video ID. The output will be a link to YouTube; copy it into your browser and you'll be taken to the video.

That was just for one article from the website. To scrape all the articles, we use a for loop and save everything into a CSV file:

# 'cms_scrape.csv' is the file name; we open it to write all the info into it
csv_file = open('cms_scrape.csv', 'w')  # 'w' stands for write
csv_writer = csv.writer(csv_file)
csv_writer.writerow(['headline', 'summary', 'video_link'])

for article in soup.find_all('article'):
    headline = article.h2.a.text
    print(headline)
    summary = article.find('div', class_='entry-content').p.text
    print(summary)
    try:  # used for dealing with missing video links
        vid_src = article.find('iframe', class_='youtube-player')['src']
        vid_id = vid_src.split('/')[4]
        vid_id = vid_id.split('?')[0]
        yt_link = f'https://youtube.com/watch?v={vid_id}'
    except Exception:  # an article without a video raises here
        yt_link = None
    print(yt_link)
    print()
    csv_writer.writerow([headline, summary, yt_link])

csv_file.close()
# In the output, the first thing will be the headline, the second the summary and the third the YouTube link of the video.

BeautifulSoup’s Alternatives: Scrapy. It is the most popular web scraping framework in Python: an open source and collaborative framework for extracting the data you need from websites, in a fast, simple, yet extensible way. Selenium. Selenium automates browsers. That’s it! 
What you do with that power is entirely up to you. Primarily, it is for automating web applications for testing purposes, but it is certainly not limited to just that. Boring web-based administration tasks can (and should!) be automated as well. import.io import.io is a free web-based platform that puts the power of the machine-readable web in your hands. Using its tools you can create an API or crawl an entire website in a fraction of the time of traditional methods, no coding required. ParseHub You can extract data from anywhere. ParseHub works with single-page apps, multi-page apps and just about any other modern web technology. ParseHub can handle JavaScript, AJAX, cookies, sessions and redirects. You can easily fill in forms, loop through dropdowns, log in to websites, click on interactive maps and even deal with infinite scrolling. Portia Portia is an open source tool that lets you get data from websites. It facilitates and automates the process of data extraction. This visual web scraper works straight from your browser, so you don’t need to download or install anything.
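The video-ID extraction shown earlier is plain string splitting, so it can be checked in isolation without fetching anything. A minimal sketch, using a made-up embed URL of the kind found in the iframe’s src attribute (the real value would come from article.find('iframe', ...)['src']):

```python
# Hypothetical YouTube embed URL for illustration only.
vid_src = 'https://www.youtube.com/embed/abc123XYZ?version=3&rel=1'

vid_id = vid_src.split('/')[4]   # 'abc123XYZ?version=3&rel=1'
vid_id = vid_id.split('?')[0]    # 'abc123XYZ'
yt_link = f'https://youtube.com/watch?v={vid_id}'
print(yt_link)  # https://youtube.com/watch?v=abc123XYZ
```

Index 4 works because splitting on '/' yields ['https:', '', 'www.youtube.com', 'embed', 'abc123XYZ?version=3&rel=1']; a differently shaped embed URL would need a different index.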
https://medium.com/swlh/beautifulsoup-everything-a-data-scientist-should-know-ecc5a39f135a
['Jitendra Singh Balla']
2020-10-30 06:14:26.448000+00:00
['Machine Learning', 'Python', 'Beautifulsoup', 'Data Science', 'Web Scraping']
Be quiet! Why designers talk too much.
This tweet really stuck with me, kind of like a social media earworm. I can’t tell you how much I’ve seen and experienced (and done myself!) over the years. It’s made me come to this conclusion: Designers should stop trying to show that they are the smartest person in the room. Design’s superpower is asking better questions. Yet put designers in client meetings and most will spend their time talking and explaining to clients their clever ideas, findings, and beliefs. So much talk, so little listening. Many people believe that designers like explaining so much because they really, really want and need to convince clients of the depth, truth, and sheer brilliance of their ideas. The smartest-person-in-the-room syndrome. Design is one of those things that people claim NOT to understand. Many organizations act as if focusing on customers and citizens instead of internal assumptions is counterintuitive rather than plain common sense. So, designers feel the need not only to bring customer experiences and needs to life, but also to proselytize and evangelize design. Hallelujah! Two things should give designers pause and encourage them to pipe down. The first is something good designers should know already: you’ll never convince anyone to do something different by talking at them, unless they already want to do something different. That’s just human behavior 101, and if you don’t understand that, it’s time for some refresher courses. As designers we learn to ask great questions and listen. It’s how we understand people. It’s how we develop great reframes that allow us to come up with innovative ideas. It’s how we test our prototypes. As designers, asking questions and listening is what we should do best. The second is some new research I stumbled over. It turns out that talking is a drug. The more we talk and hear our own voice, the more we can’t stop talking. 
Mark Goulston, author of the book Just Listen explains: “…the process of talking about ourselves releases dopamine, the pleasure hormone. One of the reasons gabby people keep gabbing is because they become addicted to that pleasure.” Wonder why that person in meetings can’t stop talking? It may be because he or she is really falling in love with their own voice. According to Goulston, you should set timers to stop talking after 20 or 40 seconds. The great Swedish design firm Doberman created an app GenderEQ to monitor how much men spoke in meetings compared with women. Maybe they should add some features to track when it’s time to stop talking as well. We do need to explain things, at the right time, when people are ready to pay attention. We call these times presentations or something similar. Even then, when we’re supposed to talk and explain, we also need to pause, ask questions and listen. But at all of those other times when we’re meeting with clients, prospects or partners, we designers need to use the tools of our trade to keep asking great questions and listening more than we talk. Because it’s more important to show that we’re the wisest person leaving the room than showing that we’re the smartest person in it.
https://rnadworny.medium.com/be-quiet-why-designers-talk-too-much-293bc45e9e41
['Rich Nadworny']
2019-08-07 07:25:19.016000+00:00
['Communication', 'Design', 'Strategy', 'Design Thinking']
Your Guide to the ThinkPad Product Lineup
Photo by Olena Sergienko on Unsplash Your Guide to ThinkPads Understanding the vast product lineup and finding what’s best for you If you’re looking for a new mobile computer, you might have already stumbled across Lenovo’s ThinkPads. You might even have heard about their reputation as “no-nonsense workhorses”. So maybe you’re done with the fashionista laptops and you’re ready for something more “down to earth”. A ThinkPad. But after having a quick look at Lenovo’s website, you’re immediately intimidated by the plethora of ThinkPad models that don’t differ much from each other at first glance. If you’re easily swayed to click away and buy just another MacBook instead, this guide is for you! E, L, T, X, P — what? The confusing ThinkPad lineup explained. © Scollurio The naming scheme of Lenovo’s ThinkPad product lineup can truly be confusing. To shed some light on it, it’s helpful to first differentiate the letters from each other. Each letter corresponds to a series of ThinkPads. Let’s look at it. E-series This is your entry-level no-nonsense work machine. The E-series are the budget option. You still do get the classic aesthetics, reliable and performant internals and the awesome keyboard. But the E-series come in rather “thick” by today’s standards and the screen options are often underwhelming. Also, in many cases, the keyboard is not backlit and feels a bit “cheaper” with printed-on letters, compared to the higher-priced models. Still, if you’re all about value-for-your-buck with a new ThinkPad, this is where to start. Since the chassis is naturally thicker, you get a lot of upgrade options, which usually include RAM, storage and the WWAN module. Also there’s plenty of room for ports on the chassis. Neat. L-series The L-series really are a small step up from the E-series. Chassis are slightly thinner, keyboards a bit better, but you’re still mostly stuck with sub-par screens. 
On the other hand, you get to upgrade the internals relatively easily and you get the classic, nostalgic aesthetics. Also, thanks to the relatively thick chassis, there’s plenty of room for all the ports you’d need. If you’re a startup or hard on a budget, the L-series can offer great value and punch way above your expectations performance-wise, especially if you go with the latest AMD processors. T-series These are the classic ThinkPads. The T-series has been a staple for business users for decades now. They are the perfect blend between ThinkPad ruggedness, value for your money and features. T-series are very versatile and offer great variety in upgrades, screen options and processors. As of writing, you can either go with an Intel or AMD processor; the latter is what I’d highly recommend, as the AMD chips are far more efficient and performant. You also get a very decent selection of ports, which should suffice for most of your use cases. T-series ThinkPads also come in “s” versions. A T14s for example is a slightly slimmer and lighter, more portable version of the T14. It also comes with a slightly bigger battery, as mobility is the name of the game. Some s-models compromise on the port selection in exchange for portability. Still, the situation is much better than with MacBooks or the latest Dell XPS laptops, which need a dongle for everything except USB-C devices. X-series You could view the X-series of ThinkPads as something like the premium branch of the brand. Sleekness, portability and awesome screens are the focus here. The X1 Carbon ThinkPad is the “king of the hill” so to speak, as it’s the most portable and premium ThinkPad available. You have to lay down quite a bit though, to take one home with you. But for that, you get a carbon fibre chassis that’s incredibly light, stiff and sturdy, and a great selection of screens, though you have to let go of some ports. 
The X1 Extreme is the performance variation of the X1 Carbon and even sports a dedicated GPU, which theoretically makes full-HD gaming possible. But it has such a sleek and thin chassis that you might run into thermal issues if you keep it under load for too long, and it will throttle. That dedicated GPU is more geared towards content creation and video editing. The latest X13 ThinkPad would make for an extremely portable sub-notebook for coffee-shop writers and journalists, who might otherwise buy an Apple MacBook Air. It can also be very beneficial for “call-in” sysadmins and network technicians. P-series P stands for power. Actually, I don’t know if that’s correct and just totally made it up, but the P-series are the mobile workstations in the ThinkPad family. These are the most expensive but also most performant and versatile. You trade in some portability for loads of performance. So much performance it even rivals dedicated desktops. But be careful — for 2020 the P-series have not been refreshed with AMD processors, which provide much more power than Intel’s chips at the moment. Here’s hoping that next year we will see AMD-powered P-series. The workstations are not meant to be used off-grid for too long, as they are power-hogs. They are supposed to be lugged from your home to your workplace, for example, and then used in place. Typical professions that would get a ThinkPad workstation would be engineers, CAD designers, architects, scientists working with huge sets of data, etc. They are definitely not meant to be gaming machines, but yes of course, they will run games too. OK, and what about the number behind the series’ letter? This can be confusing. Prior to the new naming scheme introduced in late 2019, the model number combined screen size and generation/version. For example: A T480 is a 14" T-series laptop and one generation behind the T490. The T490s is the smaller version of the T490. 
Since Lenovo ran out of numbers (T450, T460, T470, … T490) last year, the newer models are basically a combination of series letter, screen size, processor indicator and the generation of this iteration, spelled out. Confused yet? For example: The T490 became this year’s T14 Gen 1. If it is sporting an Intel processor, it’s also often referred to as T14 i Gen 1. If it’s the slim and light version with an AMD processor (which I highly recommend), it is the T14s Gen 1. I know this can be confusing at first, until you wrap your head around it all. But with some attention and a bit of digging on Lenovo’s website, you’ll figure it out sooner rather than later! So, as a rule of thumb, the shorter-numbered models are the newer ones. What’s that “nub” for? That “nub”, also lovingly called the “nipple”, actually is the TrackPoint. It harks back to times when laptops didn’t have a trackpad yet. You can easily (with some practice) control your mouse cursor without ever using a trackpad, thanks to the TrackPoint. But why would anyone want to do that? Well, it’s still around for purists, really, but there are some applications where it even makes sense today. With enough practice you can use it for masking in Photoshop quite nicely and maybe even more precisely than a trackpad. It also allows you to stay in the home row with your fingers while quickly moving your cursor or highlighting a word when writing or programming (yes, this can also be done by using a trackpad with your thumbs). Photo by cetteup on Unsplash A thing not many people seem to know is that you can switch to “scrolling mode” by pressing down the middle (blue-dotted) one of the three buttons above the trackpad. You can scroll in all directions using this method, which often comes in handy in huge Excel or InDesign files. If you don’t like the “nub”, don’t fret. There are replacement caps for it available. 
Some of them are ultra-low-profile and if you’re irritated by the trademark bright red spot in the middle of your keyboard, they’re available in black too. Just look them up, usually you can get them really cheap, if you don’t go for the official ones. You could, theoretically, go completely without that cap too, if you prefer so (but the resulting hole will trap dirt). So what’s an Ideapad then? Ideapads are the consumer line of laptops by Lenovo and they also come in a wide range of models. They don’t sport the usual ThinkPad trademark features, but they are a great value. They offer less options when configuring them, but you can also get the latest (at the time of writing), high-performing Ryzen 4000 mobile processors in them, the build-quality is way above average and even though their keyboards are not quite of ThinkPad quality, they do come really close and you don’t have to worry about “the nub”. They have a great layout, great backlight and usually they’re really tactile and crisp, a joy to type on, easily rivaling Dell and Apple. So, if you don’t need the highest resolution (or brighter than 300 nits) screens and don’t care for the added ruggedness, nostalgia or security features of a ThinkPad, Ideapads are a way cheaper, valuable option for you. Thanks to a better thermal solution the latest AMD chips are even able to run within a higher power envelope, than on the more expensive ThinkPads, thus — numbers for numbers — providing even higher performance than an equivalent ThinkPad. The latest integrated Radeon graphics on AMD-powered Ideapads even make light gaming possible. Games like League of Legends run buttery smooth and if you don’t mind reducing details in your games you can even get away with playing some slightly older triple A titles. They are also fit for moderate content creation. Really great value here. And they’re stylish too. How about going “oldschool”? 
Speaking of value, depending on your budget, getting a used ThinkPad that is a few years old can be a great option. ThinkPads are very rugged, so even with a few scuff marks, they most likely will still perform very well. Also, older ThinkPads can be easily upgraded, cleaned and repaired. Some models even allow for a relatively easy panel swap, in case you want to get a higher-resolution screen. ThinkPads are well documented and replacement parts can be easy to come by. In most cases, the performance of an old ThinkPad is still enough to serve as a great laptop for uni or to kickstart your writing career, even though the battery life won’t be as impressive anymore. They’ll run Linux just fine, which is a great alternative for programmers and writers alike. Everyone knows MacBooks hold their value pretty well; ThinkPads come second. That is the reason why you can mostly find only MacBooks and ThinkPads on official refurb stores, which offer limited warranty and great value in comparison to buying brand new. Watch out for special offers Another way to save quite a bit on new and last year’s models of ThinkPads is keeping an eye on the Lenovo online store itself. Lenovo is well known for putting special offers and deals out there on a regular basis. It’s not unheard of to save as much as 30% on your dream machine on occasion. Just be patient. A ThinkPad for gaming? Well, ThinkPads really are meant for work, but yes, you could get away with some light gaming on most modern machines; just don’t expect anything mind-blowing and don’t get a ThinkPad for that specific reason only. Even the ThinkPad X1 Extreme, with its dedicated graphics chip, is severely hampered by thermal limitations under constant load. If you want to go all-in with gaming, you should check out Lenovo’s Legion line of gaming laptops and accessories. They offer astounding value, great cooling capabilities and actually can replace a dedicated gaming desktop easily. They look great too! 
Conclusion So who are ThinkPads for? Well, ThinkPads span such a wide range of use cases, there really is something for everyone. An aspiring writer or student can do well with a machine that is a few years old. If thin and light is your utmost priority, go with an X-series; if you’re after a great all-rounder, the T-series can be recommended; and if you’re a content creator, going for the X1 Extreme might be worth a thought. Mobile workstations usually are for engineers, scientists, architects etc., while the E- and L-series provide a great entry point for small businesses on a budget while not skimping on raw performance. Yes, the high-end, fully decked-out models can become quite costly, but they still are cheaper than an equivalent MacBook by quite a bit. Things may change though, once Apple starts rolling out their own silicon later this year (which I will be keeping a curious eye on). But if you can live with Windows 10 instead of macOS, ThinkPads are right up there with the very best Apple, Dell and HP have to offer. Perhaps the corporate-workhorse aesthetic of a ThinkPad is a breath of fresh air too. Oh, and don’t be afraid of “the nub” — if you really don’t want it, it hardly gets in the way when typing, especially if you use ultra-low-profile caps for it. If you want to try out the ThinkPad typing experience beforehand, you can do so easily with this separately available ThinkPad keyboard. The real keyboards on the ThinkPads, though, are “crisper” and more tactile.
https://medium.com/swlh/your-guide-to-thinkpads-6a66ad4c20ab
[]
2020-07-27 13:43:35.860000+00:00
['Technology', 'Writing', 'Tech', 'Gadgets', 'Laptops']
Dockerize Your Python Flask application and deploy it onto Heroku
In this blog, we’ll see how to dockerize a simple Python Flask app and deploy it onto Heroku. This blog assumes that you have already installed Docker and Heroku on your PC. If that’s not the case, you can easily install them from here and here. First, create a new directory my-dir and move inside the directory. Create a Python file named app.py with the following content in it:

from flask import Flask, render_template
import os

app = Flask(__name__)

@app.route('/')
def fun():
    return render_template('index.html')

if __name__ == '__main__':
    port = int(os.environ.get('PORT', 5000))
    app.run(host='0.0.0.0', port=port, debug=True)

Create a new directory named templates (inside the my-dir directory) and inside it create an HTML file named index.html with the following content in it:

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Document</title>
</head>
<body>
    <h1>Successfully dockerized your python app</h1>
</body>
</html>

Now create a requirements.txt with the following content in it:

Flask==1.1.1

Thus, we have created a basic Flask app setup. Now, we need to dockerize the application. Creating a Dockerfile In order to dockerize our Python Flask app, we need a Dockerfile, which contains the instructions needed to build a Docker image; the image in turn is the template from which a Docker container runs on the Docker platform. Create a file named Dockerfile (Note: without any extension) with the following content:

FROM python:3.6-buster
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]

For each instruction/command in the Dockerfile, the Docker image builder generates an image layer and stacks it upon the previous ones, so each layer contains only the changes from the layers before it. Hence, the Docker image thus obtained is a read-only stack of the constituent layers. Line by Line Explanation: 1. FROM python:3.6-buster FROM allows us to build our required image over a base image. 
The base image we choose here is python:3.6-buster, one of Docker’s own “official” Python images. This image itself is large, but its packages are installed via common image layers that other official Docker images also use, so the overall disk usage stays low. 2. WORKDIR /app Sets the present working directory. 3. COPY requirements.txt . Copies the dependencies file from your host to the present working directory. 4. RUN pip install -r requirements.txt Installs all dependencies (pip packages). 5. COPY . . Copies the rest of your app’s source code from your machine to the present working directory of the container. 6. CMD ["python", "app.py"] CMD is the command to run on container start. It specifies some metadata in your image that describes how to run a container based on this image. In this case, it’s saying that the containerized process that this image is meant to support is python app.py. There can be only one CMD command per Dockerfile; if there are more, the last CMD command takes effect. Now, we are done with all the source code we require. Currently the directory structure will be as follows:

my-dir
----app.py
----templates
--------index.html
----Dockerfile
----requirements.txt

Let’s Create the Docker image Let’s build the Docker image locally and run it to make sure that the service works locally. Run the following command in your terminal to create the Docker image from the my-dir directory. The -t flag is used to give a name to the newly-created image.

$ docker image build -t my-app .

The above command may take up to 16 minutes to finish running, depending on your internet speed. Run the below command to verify that you successfully built a Docker image. If you see your image (my-app) listed there, that means you are successful.

$ docker image ls

Run the docker container

$ docker run -p 5000:5000 -d my-app

In the above command the -p flag is used to publish a container’s port to the host. 
Here, we’re mapping port 5000 inside our Docker container to port 5000 on our host machine so that we can access the app at localhost:5000. The -d flag runs the container in the background and prints the container ID. Check localhost:5000, and if you see the heading “Successfully dockerized your python app”, then we successfully dockerized the app. Stop and remove the docker container The commands below stop and remove the dockerized container. The container ID was printed to the terminal earlier.

$ docker container stop <container id>
$ docker system prune

Deploy onto Heroku At first you need to log in to the Heroku CLI.

$ heroku container:login

It will open the browser and prompt you to log in with your Heroku credentials if you are not logged in already; if you are already logged in to your Heroku account in your browser, just click Login on the new browser tab. If you succeed in logging in, you will receive the message “Login Succeeded”. Run the below command to create an app in Heroku, which prepares Heroku to receive your source code. Enter any name for your app. Heroku doesn’t allow names that are already taken.

$ heroku create <name-for-your-app>

Now, you will receive a link like https://<name-for-your-app>.herokuapp.com/ Next, run the below command to push the container to Heroku (this command may take a long time depending on your internet speed).

$ heroku container:push web --app <name-for-your-app>

At this point, the Docker container is pushed to Heroku, but not deployed or released. The following command deploys the container.

$ heroku container:release web --app <name-for-your-app>

Now, the app is released and running on Heroku, and you can view it at https://<name-for-your-app>.herokuapp.com/ Thus, we successfully dockerized and deployed our Python Flask app onto Heroku.
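The one line of app.py that matters most for Heroku is the PORT lookup: Heroku injects the port your container must bind to as an environment variable at runtime, while a local run falls back to 5000. A minimal sketch of that fallback logic (the helper name resolve_port is ours for illustration; the real app would pass os.environ):

```python
def resolve_port(environ, default=5000):
    # Heroku sets PORT in the container's environment at runtime; locally
    # it is usually absent, so we fall back to the default used in app.py.
    # Environment variables are strings, hence the int() conversion.
    return int(environ.get('PORT', default))

print(resolve_port({}))                 # local run, no PORT set: 5000
print(resolve_port({'PORT': '33507'})) # a Heroku-assigned port: 33507
```

Hard-coding a port instead of reading PORT is a common reason a container works locally but crashes on Heroku with a boot timeout.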
https://medium.com/analytics-vidhya/dockerize-your-python-flask-application-and-deploy-it-onto-heroku-650b7a605cc9
['Hari Krishnan U']
2020-12-08 16:29:06.290000+00:00
['Heroku', 'Dockerfiles', 'Docker', 'Python', 'Flask']
When AI Self-Imposed Constraints Aren’t Good For Self-Driving Cars
Dr. Lance Eliot, AI Insider [Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/] Constraints. They are everywhere. It seems like whichever direction you want to move or proceed, there is some constraint either blocking your way or at least impeding your progress. In his famous 1762 book entitled “The Social Contract,” Jean-Jacques Rousseau proclaimed that mankind is born free and yet everywhere mankind is in chains. Though it might seem gloomy to have constraints, I’d dare say that we probably all welcome the fact that arbitrarily deciding to murder someone is prohibited by a societal constraint that inhibits such behavior. There are thus some constraints that we like and some that we don’t like. In the case of our laws, we as a society have gotten together and formed a set of constraints that governs our societal behaviors. In computer science and AI, we deal with constraints in a multitude of ways. When you are mathematically calculating something, there are constraints that you might apply to the formulas that you are using. Optimization is a popular example. You might want to figure something out in an optimal way, so you impose a constraint stating that among all the ways of figuring it out, the most optimal version is preferred. Hard Versus Soft Constraints There are so-called “hard” constraints and “soft” constraints. Some people assume that if the problem itself becomes hard, the constraint that caused it must be a “hard” constraint. That’s not what is meant, though, by the proper definition of “hard” and “soft” constraints. A “hard” constraint is considered a constraint that is inflexible. It is imperative. You cannot try to shake it off. You cannot try to bend it to become softer. A “soft” constraint is one that is considered flexible and you can bend it. 
It is not considered mandatory. This brings us to the topic of self-imposed constraints, and particularly ones that might be undue. Self-Imposed And Undue Constraints Problems that are of interest to computer scientists and AI specialists are often labeled as Constraint Satisfaction Problems (CSPs). These are problems for which there are some number of constraints that need to be abided by, or satisfied, as part of the solution that you are seeking. Some refer to a CSP that contains “soft” constraints as one that is considered flexible. A classic version of a CSP usually states that all of the given constraints are considered hard or inflexible. If you are faced with a problem that does allow for some of the constraints to be flexible, it is referred to as an FCSP (Flexible CSP), meaning there is some flexibility allowed in one or more of the constraints. It does not necessarily mean that all of the constraints are flexible or soft, just that some of them are. Autonomous Cars And Self-Imposed Undue Constraints What does this have to do with AI self-driving driverless autonomous cars? At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that deserves attention is the self-imposed undue constraints that some AI developers are putting into their AI systems for self-driving cars. As an example of how driving a car is shaped by our own mental “constraints,” there’s a story I like to tell. One day, I was driving down a street that was flooded due to a downpour of rain. Unfortunately, I drove into the flooded street thinking that I could just plow my way through the water, but I began to realize that the water was much deeper than I had assumed. I had cars behind me that were pinning me in, and so I couldn’t readily try to back out of the situation. If I went any further forward, my car was going to get so deep into the water that the water might pour into the car and likely stop the engine. 
What to do? There was a raised median in the middle of the road that had grass and was normally off-limits to cars. I would have never thought to drive onto the median, but I saw another car do so. This allowed the car to stay high enough in the water to make its way down the street. After a moment’s hesitation, I decided that driving on the median made sense in this situation, and I did likewise. As a law-abiding driver, I would never have considered driving up on the median of a road. It was a constraint that was part of my driving mindset. Was it a “hard” constraint or a “soft” constraint? In my mind, it was originally a “hard” constraint, but now I realize that I should have classified it as a “soft” constraint. AI Dealing With Constraints Let’s now recast this constraint in light of AI self-driving cars. Should an AI self-driving car ever be allowed to drive up onto the median and drive on the median? I’ve inspected and reviewed some of the open-source AI software being used for self-driving cars, and it contains constraints that prohibit such a driving act from ever occurring. It is verboten by the software. I would say it is a self-imposed undue constraint. Sure, we don’t want AI self-driving cars willy-nilly driving on medians. That would be dangerous and potentially horrific. Does this mean, though, that the constraint must be “hard” and inflexible? Does it mean that there might not ever be a circumstance in which an AI system would “rightfully” opt to drive on the median? I’m sure that in addition to my escape from flooding, we could come up with other bona fide reasons that a car might want or need to drive on a median. I assert that there are lots of these kinds of currently hidden constraints in many of the AI self-driving cars that are being experimented with in trials today on our public roadways. The question will be whether ultimately these self-imposed undue or “hard” constraints will limit the advent of true AI self-driving cars.
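The hard-versus-soft distinction above can be made concrete in a few lines. The following is a toy sketch, not anything from a production self-driving stack: the maneuver names, costs, and penalty weights are all illustrative assumptions. A hard constraint rules an option out entirely; a soft constraint merely adds a penalty cost, so the option can still win when everything else is worse.

```python
# Toy constraint-satisfaction sketch: choosing a driving maneuver under
# one hard constraint and one soft constraint. All names and weights
# are illustrative assumptions, not from any real self-driving system.

def evaluate(maneuver, flooded):
    """Return the cost of a maneuver, or None if a hard constraint forbids it."""
    # Hard constraint: never drive on the sidewalk -- inflexible, always infeasible.
    if maneuver == "sidewalk":
        return None

    cost = 0.0
    # Soft constraint: avoid the median, but allow it at a penalty.
    if maneuver == "median":
        cost += 10.0
    # Risk cost: continuing straight through deep water may stall the engine.
    if maneuver == "straight" and flooded:
        cost += 100.0
    return cost

def best_maneuver(flooded):
    """Pick the lowest-cost maneuver among those not ruled out by hard constraints."""
    options = ["straight", "median", "sidewalk"]
    feasible = {m: evaluate(m, flooded) for m in options}
    feasible = {m: c for m, c in feasible.items() if c is not None}
    return min(feasible, key=feasible.get)
```

On a dry day the solver stays off the median; in a flood, the soft median penalty is cheaper than the stall risk, so bending the rule becomes the rational choice, mirroring the story above.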
Machine Learning And Deep Learning Aspects For AI self-driving cars, it is anticipated that via Machine Learning (ML) and Deep Learning (DL) they will be able to gradually develop more and more of their driving skills over time. You might say that I learned that driving on the median was a possibility and viable in an emergency situation such as a flooded street. Would the AI of an AI self-driving car be able to learn the same kind of lesson? The “hard” constraints inside much of the AI systems for self-driving cars are embodied in a manner that typically does not allow them to be revised. The ML and DL take place for other aspects of the self-driving car, such as “learning” about new roads or new paths to take when driving the self-driving car. Doing ML or DL on the AI action-planning portions is still relatively untouched territory. It would pretty much require a human AI developer to go into the AI system and soften the constraint of driving on a median, rather than the AI itself doing some kind of introspective analysis and changing itself accordingly. There’s another aspect of much of today’s state-of-the-art in ML and DL that would make it difficult to have done what I did in terms of driving up onto the median. For most ML and DL, you need lots and lots of examples available for the ML or DL to pattern-match against. After examining thousands or maybe millions of pictures of road signs, the ML or DL can somewhat differentiate stop signs from, say, yield signs. Conclusion The nature of constraints is that we could not live without them, nor at times can we live with them, or at least that’s what many profess to say. For AI systems, it is important to be aware of the kinds of constraints that are hidden or hard-coded into them, along with understanding which of the constraints are hard and inflexible, and which ones are soft and flexible.
To achieve a true AI self-driving car, I claim that nearly all of the constraints must be “soft” and that the AI needs to discern when to appropriately bend them. This does not mean that the AI can do so arbitrarily. This also takes us into the realm of the ethics of AI self-driving cars. Who is to decide when the AI can and cannot flex those soft constraints? Let’s at least make sure that we are aware of the internal self-imposed constraints embedded in AI systems and whether the AI might be blind to taking appropriate action while driving on our roads. That’s the kind of undue constraint that we need to undo before it is too late. For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website The podcasts are also available on Spotify, iTunes, iHeartRadio, etc. More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/ For his AI Trends blog, see: www.aitrends.com/ai-insider/ For his Medium blog, see: https://medium.com/@lance.eliot For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot Copyright © 2019 Dr. Lance B. Eliot
https://lance-eliot.medium.com/when-ai-self-imposed-constraints-arent-good-for-self-driving-cars-133a5a9b83c9
['Lance Eliot']
2020-01-24 17:12:44.216000+00:00
['Driverless Cars', 'Self Driving Cars', 'Artificial Intelligence', 'Autonomous Vehicles', 'Autonomous Cars']
Fascinating read.
Fascinating read. The author gives you a glimpse where his ideas come from. From observations of a mind that flows like a river. Easy. You see a word in your day job, and there is the start of a story. It is said that the great sage, Valmiki, who wrote the Indian epic, Ramayana, started when he saw two crows. All our stories begin from, well, let’s say, spring from the world around us. The five skandhas, as The Buddha would say. Then the mind. The story is always there. It needs an excuse. Go find it.
https://medium.com/thoughts-philosophy-writing/fascinating-read-1280df04bcfe
['Arindam Basu']
2016-04-09 21:51:54.775000+00:00
['Creative Process', 'Storytelling']
It’s almost NaNoWriMo Eve
Fear is funny: it keeps you from doing things that you should do and stuff that you have always wanted to do. Learning to overcome your fears is a vital part of growing up. You just have to learn when you should do it and when you should let your better judgement take precedence. I’m attempting to overcome my fears this time to hopefully complete a novel during the 2017 National Novel Writing Month event. I’ve got a pretty good idea of the plot I want to use, I’ve figured out how I am going to be writing it (I’ve started my own Medium publication) and I am raring to go. Now if I only had any freaking clue how I should start without flaming out, that would be good, but you don’t always get what you want. Reading and writing are passions of mine. I can’t exactly say why I haven’t made the attempt before. If I’m being totally honest, I feel like the fat kid who is getting ready to publicly ask out the hottest girl in school. In short, I love the concept and have dreams about a pleasant result, but I am absolutely terrified of the possibility of getting laughed at. For now, I’m keeping an open mind. I shall not expect too much for a first-time effort like this. It likely won’t be the great American novel. My goal is going to be fairly simple, then. I’ll settle for people not wondering what sort of hallucinogens I was on when I came up with the plot. If you want to follow along with my journey you can see more about my work attempts here. I’ll also be posting a short summary of my efforts and the plot to date on the National Novel Writing Month publication. Even if my worst fears occur and I bomb, perhaps I can help someone with my thoughts.
https://medium.com/nanowrimo/its-almost-nanowrimo-eve-6f409c138230
['Legitimate Geek']
2017-10-31 04:47:15.652000+00:00
['Writing', 'Authors', 'Novel', 'Entertainment', 'Novel Writing']
Neural Machine Translation
For centuries people have been dreaming of easier communication with foreigners. The idea of teaching computers to translate human languages is probably as old as computers themselves. The first attempts to build such technology go back to the 1950s. However, the first decade of research failed to produce satisfactory results, and the idea of machine translation was forgotten until the late 1990s. At that time, the internet portal AltaVista launched a free online translation service called Babelfish, a system that became a forefather of a large family of similar services, including Google Translate. At present, modern machine translation systems rely on Machine Learning and Deep Learning techniques to improve the output and tackle the issues of understanding context, tone, language registers and informal expressions. The techniques that were used until recently, including by Google Translate, were mainly statistical. Although quite effective for related languages, they tended to perform worse for languages from different families. The problem lies in the fact that they break down sentences into individual words or phrases and can span only a few words at a time while generating translations. Therefore, if languages have different word orderings, this method results in an awkward sequence of chunks of text. Turn to Neural Networks The recent application of neural networks provides more accurate and fluent translations that take into account the entire context of the source sentence and everything generated so far. Neural machine translation is typically a neural network with an encoder/decoder architecture. Generally speaking, the encoder infers a continuous-space representation of the source sentence, and the decoder is a neural language model conditioned on the encoder output.
To maximize the likelihood of the source and the target sentences, the parameters of both models are learned jointly from a parallel corpus (Sutskever et al., 2014; Cho et al., 2014). At inference, a target sentence is generated by left-to-right decoding. Neural Network Advantages Dealing with Unknown Words Due to natural differences between languages, a word from a source sentence often has no direct translation in the target vocabulary. In this case, a neural system generates a placeholder for the unknown word with the help of the soft alignment between the source and the target enabled by the attention mechanism. Afterwards, the translation can be looked up in a bilingual lexicon built from the training data, allowing the system to handle typos, abbreviations and slips of the tongue, a problem that was not fully resolved by traditional statistical approaches. Tuning model parameters Neural networks have tunable hyperparameters that control things like the learning rate of the model. Finding the optimal set of hyperparameters can boost performance. In practice, however, this presents a significant challenge for machine translation at scale, since each translation direction is represented by a unique model with its own set of hyperparameters. Since the optimal values may differ for each model, we had to tune them for each production system separately. Less data Typically, neural machine translation models calculate a probability distribution over all the words in the target vocabulary, which increases the calculation time drastically.
However, for low-resource languages, it is possible to develop bi- or multilingual systems on related languages for parameter transfer, using linguistic features of the surface word form, and achieving direct zero-shot translation. Types of Neural Networks for Machine Translation There are a number of approaches that use different neural architectures, including recurrent networks (Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015), convolutional networks (Kalchbrenner et al., 2016; Gehring et al., 2017; Kaiser et al., 2017) and transformer networks (Vaswani et al., 2017). The state of the art, though, is attention mechanisms, where the encoder produces a sequence of vectors and the decoder attends to the most relevant part of the source through a context-dependent weighted sum of the encoder vectors (Bahdanau et al., 2015; Luong et al., 2015). Sequence-to-Sequence LSTM with Attention One of the most promising algorithms in this sense is the recurrent neural network known as sequence-to-sequence LSTM (long short-term memory) with attention. Sequence-to-Sequence (or Seq2Seq) models are very useful for translation tasks, as in their essence they take a sequence of words from one language and transform it into a sequence of different words in another language. Sentences are intrinsically sequence-dependent, since the order of the words is crucial for rendering the meaning. LSTM models, in their turn, can give meaning to the sequence by remembering (or forgetting) certain parts. Finally, the attention mechanism looks at an input sequence and decides which parts of the sequence are important, quite similar to human text perception. When we are reading, we focus on the current word, but at the same time we hold important keywords in our memory to build the context and make sense of the whole sentence. Transformer Another step forward was the introduction of the Transformer model in the paper ‘Attention Is All You Need’.
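The context-dependent weighted sum described above can be sketched in plain Python. This is a bare-bones dot-product attention in the spirit of Luong et al. (2015), stripped of learned projections and batching, so the function names and vector shapes here are illustrative assumptions rather than any library's API:

```python
import math

def attention_context(decoder_state, encoder_vectors):
    """Compute attention weights and the context vector as a
    softmax-weighted sum of encoder vectors (dot-product attention)."""
    # Alignment scores: dot product between the decoder state and each encoder vector.
    scores = [sum(d * e for d, e in zip(decoder_state, vec)) for vec in encoder_vectors]
    # Softmax over the scores (shifted by the max for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [x / total for x in exps]
    # Context vector: weighted sum of the encoder vectors.
    dim = len(encoder_vectors[0])
    context = [sum(w * vec[i] for w, vec in zip(weights, encoder_vectors))
               for i in range(dim)]
    return weights, context
```

Encoder vectors most similar to the current decoder state receive the largest weights, which is exactly the "focus on the relevant part of the source" behaviour the paragraph describes.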
Similar to LSTM, the Transformer translates one sequence into another with the help of an Encoder and a Decoder, but without any recurrent network. In the architecture diagram from the paper, the Encoder (on the left) and the Decoder (on the right) are composed of modules that can be stacked on top of each other multiple times and mainly consist of Multi-Head Attention and Feed Forward layers. The inputs and outputs are first embedded into an n-dimensional space. An important part of the Transformer is the positional encoding of different words. Since it has no recurrent networks to remember how sequences are fed into the model, it gives every word/part of a sequence a relative position, since a sequence depends on the order of its elements. These positions are added to the embedded representation (n-dimensional vector) of each word. Neural Machine Translation (NMT) has achieved significant results in large-scale translation tasks such as English to French (Luong et al., 2015) and English to German (Jean et al., 2015). Sciforce Takes Action Inspired by the results for the En-De model by Edunov et al. (2018), we expanded it with back translation. Our final goal was to develop a machine translation system for an En/De news website. For the task we created a De-En machine translation system based on the Transformer model (Edunov et al., 2018) that was a part of the fairseq toolkit. As a first step, we tested the performance of the pre-trained En-De models on Google Colab. The p1-model is 12gb, split into 6 models of 2gb. We only managed to start 3 of those because of RAM limits, but it still showed excellent results. The second p2-model is 1.9gb, and it performed reasonably well, though not as well as p1. At the same time, it is more lightweight and needs fewer resources to train. Following the advice of the authors of the reference paper, we used the transformer_wmt_en_de_big architecture to train the back-translation model.
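The sinusoidal positional encoding mentioned a few paragraphs back can be written out directly from the formulas in 'Attention Is All You Need': even dimensions get a sine, odd dimensions a cosine, with wavelengths forming a geometric progression. A minimal sketch (the function name is ours; real implementations precompute the whole position-by-dimension matrix as a tensor):

```python
import math

def positional_encoding(position, d_model):
    """Sinusoidal positional encoding from Vaswani et al. (2017):
    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    Returns one d_model-dimensional vector for the given position."""
    pe = []
    for i in range(0, d_model, 2):  # i steps over the even dimensions
        angle = position / (10000 ** (i / d_model))
        pe.append(math.sin(angle))
        if i + 1 < d_model:
            pe.append(math.cos(angle))
    return pe
```

Because each position maps to a unique pattern of phases, adding this vector to a word's embedding lets the attention layers recover word order without any recurrence.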
The task fell into three modules: De-En translation, De-En translation with back translation, and En-De translation. The internal stages for each module were the same: Data collection and cleaning We used two types of corpora for the tasks: De-En and En-De parallel corpora English monolingual corpora for news To collect and clean up the data we used the prepare-wmt14de2en.sh script, a modification of the original prepare-wmt14en2de.sh , using additional datasets and removing duplicates. cd examples/translations BPE_TOKENS=32764 bash prepare-wmt14en2de.sh For bilingual data generation, we assumed that all monolingual data was gathered, split into 104 shards, and available for downloading. To get back-translation data from the monolingual shards, we used the script named run_batches.sh . Then we distributed shard-translation tasks between GPUs manually. With all shards translated and all bilingual data gathered, we applied BPE to them, concatenated them into the whole dataset, and ran a clean-up script. The BPE code file obtained from the bilingual data was reused for all three subtasks. Preprocessing For the two De-En tasks, the shell commands and methods used were almost identical to those supplied with the model documentation. For the En-De task, we reused the dictionaries supplied with the baseline model with the following shell commands and methods: $ TEXT=examples/translation/wmt17_de_en $ python preprocess.py --source-lang en --target-lang de \ --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ --destdir data-bin/wmt17_en_de_joined_dict \ --srcdict data-bin/wmt17_en_de_joined_dict/dict.en.txt \ --tgtdict data-bin/wmt17_en_de_joined_dict/dict.de.txt Training For monolingual and bilingual En-De translation tasks, we used shell commands and methods similar to those specified here. To reduce the training time, we tried to use bigger batches and a higher learning rate on 8 GPUs. For this we specified --update-freq 16 and learning rate --lr 0.001 .
However, training often failed with an error message suggesting that we reduce the learning rate or increase the batch size. So, we had to reduce the learning rate several times during training. Overall, training to achieve the best BLEU score should take ~20 hours. The logic behind the reverse model was to train it using only parallel data. The target-side monolingual data was then translated with the model we trained at the previous stage. Afterwards, we combined the available bitext and the generated data, preprocessed it with preprocess.py and trained the final model. Shell commands and methods used: python train.py data-bin/wmt17_en_de_joined_dict \ --arch transformer_vaswani_wmt_en_de_big --share-all-embeddings \ --optimizer adam --adam-betas ‘(0.9, 0.98)’ --clip-norm 0.0 \ --lr-scheduler inverse_sqrt --warmup-init-lr 1e-07 --warmup-updates 4000 \ --lr 0.0005 --min-lr 1e-09 \ --dropout 0.3 --weight-decay 0.0 --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ --max-tokens 3584 \ --fp16 --reset-lr-scheduler The actual command for training may differ from the one specified above; however, the key point is specifying the --reset-lr-scheduler parameter, otherwise Fairseq will report an error. The resulting model scored as high a BLEU score (~35) as the reference model, or even higher. Empirically, it also performed as well as the pre-trained En-De model discussed in the reference paper by Edunov et al. (2018).
https://medium.com/sciforce/neural-machine-translation-1381b25c9574
[]
2019-10-25 14:46:34.190000+00:00
['NLP', 'Neural Networks', 'Deep Learning', 'Artificial Intelligence', 'Machine Learning']
You will be surprised by the custom event in Vue js
Vue js has a rich set of built-in events plus the facility of adding custom events. If you are familiar with Vue, you know that props are used to send data to a child component, and events are used to change the state of the parent component. The code below is nothing special and is just a modified version of this example of vue js custom events. Custom event in vue js In the html section, you can see the component tag where the surprise event is the custom event. After you click the Surprise Me button, a click event is fired from inside the surprise-app component, in which the surprise method is called. The surprise method of that component is responsible for emitting the surprise event, which is listened to by: <surprise-app @surprise=”surpriseMe”></surprise-app> after which the surpriseMe method of the Vue instance is called and the surpriseMessage is filled with ‘you got surprised’. I apologise if I didn’t surprise you!
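The code block itself appears to have been lost in extraction. A minimal reconstruction of the kind of component the post describes might look like the following; the names (surprise-app, surprise, surpriseMe, surpriseMessage) follow the post, but the exact markup is an assumed sketch, not the author's original code:

```html
<div id="app">
  <p>{{ surpriseMessage }}</p>
  <!-- The parent listens for the custom "surprise" event here -->
  <surprise-app @surprise="surpriseMe"></surprise-app>
</div>

<script>
// Child component: a native click event triggers the surprise method,
// which emits the custom "surprise" event upward.
Vue.component('surprise-app', {
  template: '<button @click="surprise">Surprise Me</button>',
  methods: {
    surprise: function () {
      this.$emit('surprise');
    }
  }
});

// Parent Vue instance: reacts to the custom event by updating its state.
new Vue({
  el: '#app',
  data: { surpriseMessage: '' },
  methods: {
    surpriseMe: function () {
      this.surpriseMessage = 'you got surprised';
    }
  }
});
</script>
```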
https://medium.com/introcept-hub/you-will-be-surprised-custom-event-in-vue-js-c713afdff8e5
['Madhu Sudhan Subedi']
2017-06-01 08:24:49.891000+00:00
['JavaScript', 'Software Development', 'Vuejs']