title | text | url | authors | timestamp | tags
---|---|---|---|---|---
Running Hot & Cold
|
Daily comic by Lisa Burdige and John Hazard about balancing life, love and kids in the gig economy.
|
https://backgroundnoisecomic.medium.com/running-hot-cold-2d2464b950bb
|
['Background Noise Comics']
|
2020-01-14 01:20:18.195000+00:00
|
['Humor', 'Global Warming', 'Comics', 'Climate Change', 'Weather']
|
Ignorance and Arrogance…
|
In Real Life — Ignorance and Arrogance — by Steve Rigell — https://bubbblesmedia.com/ignorance-and-arrogance
This strip was inspired by Genius Turner’s article in The Ascent in which you can encounter Socrates in a subway and get a choice bit of perspective from Albert Einstein.
|
https://medium.com/@bubbblesmedia/ignorance-and-arrogance-6b1480108953
|
['Bubbbles Media']
|
2020-12-20 17:03:27.854000+00:00
|
['Mindset', 'Damn Good Advice', 'Mindfulness', 'Attitude', 'Perspective']
|
TypeScript Best Practices — Member Access, Loops, and Function Types
|
Photo by Whitney Wright on Unsplash
TypeScript is an easy-to-learn extension of JavaScript. It's easy to write programs that run and do something. However, it's hard to account for all the use cases and write robust TypeScript code.
In this article, we'll look at the best practices to follow when writing TypeScript code, including disallowing member access on any-typed variables.
Also, we look at why we want to use const assertions.
We also look at why import should be used instead of require for importing modules.
We find out how we can use function types instead of interfaces to specify the type of functions.
And we look at why the for-of loop is better than the regular for loop.
No Member Access on any Typed Variables
any may leak into our codebase in nested entities.
For instance, we may have type declarations that have the any type.
Therefore, if we allow those nested members to be accessed, we may run into type errors at runtime.
So instead of writing:
declare const anyObj: { prop: any };
anyObj.prop.a.b;
We should write:
declare const anyObj: { prop: string };
anyObj.prop;
Now that we know anyObj.prop must be a string, we can access it safely.
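When the data really is untyped at the boundary, one alternative sketch (not from the article; the function and property names here are illustrative) is to take the value in as unknown and narrow it before any member access:

```typescript
// A sketch: `unknown` instead of `any` forces us to narrow before
// member access, so mistakes surface at compile time, not at runtime.
function getProp(obj: unknown): string | undefined {
  // Each check below is required by the compiler before we may drill in.
  if (typeof obj === "object" && obj !== null && "prop" in obj) {
    const prop = (obj as { prop: unknown }).prop;
    if (typeof prop === "string") {
      return prop;
    }
  }
  return undefined;
}

console.log(getProp({ prop: "hello" })); // "hello"
console.log(getProp({ prop: 42 }));      // undefined
console.log(getProp(null));              // undefined
```

Unlike any, unknown never leaks: the compiler rejects `obj.prop.a.b` until we have proven the shape.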
Don’t Return any From a Function
We shouldn't return anything with the any type in a function.
Instead, we should add a return type annotation so that we know what it’s returning.
For instance, instead of writing:
function foo() {
return 1 as any;
}
or:
function arr() {
return [] as any[];
}
We should write:
function arr(): number[] {
return [1, 2];
}
or:
function foo(): number {
return 1;
}
Now we know what the functions return.
No Unused Variables and Arguments
Unused variables and arguments are useless, so we probably shouldn’t have them in our code.
We can remove var, const, or let variable declarations that aren't used.
Likewise, functions and classes that aren't used can be removed.
enum, interface, and type declarations that aren't used can also be removed.
Class members like methods, instance variables, and parameters can all be removed if they aren't used.
This also applies to import statements.
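The compiler can enforce this for us. A minimal tsconfig.json sketch using the standard compiler options (only the relevant flags shown):

```json
{
  "compilerOptions": {
    "noUnusedLocals": true,
    "noUnusedParameters": true
  }
}
```

With these flags on, unused locals and unused function parameters become compile-time errors rather than silent clutter.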
No require Statements Except in Import Statements
We shouldn't use require statements anymore since ES6 modules have become standard.
Therefore, instead of writing:
const foo = require('foo');
We write:
import foo = require('foo');
or:
import foo from 'foo';
Use as const Over Literal Types
const assertions are better than literal type annotations since they make values constant.
If we use const assertions, literal types can't be widened, and object and array entries become readonly.
For instance, instead of writing:
let bar: 2 = 2;
or:
let bar = { bar: 'baz' as 'baz' };
We can write:
let foo = 'bar' as const;
or:
let foo = { bar: 'baz' } as const;
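As a quick sketch of what the assertion buys us (the config object here is made up for illustration; the inferred types are shown in comments):

```typescript
// Without `as const`, literal types widen when assigned to `let`.
let widened = "bar";             // inferred type: string

// With `as const`, the literal type is preserved and cannot widen.
const narrowed = "bar" as const; // inferred type: "bar"

// On objects, `as const` makes every property readonly and literal.
const config = { port: 8080, host: "localhost" } as const;
// inferred type: { readonly port: 8080; readonly host: "localhost" }

// config.port = 3000; // compile-time error: cannot assign to a readonly property

console.log(widened, narrowed, config.port);
```

Note that as const is a compile-time feature: the runtime values are unchanged, but the compiler now rejects widening and mutation.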
Use for-of Loop Instead of a for Loop
for-of loops let us loop through items with a simple loop instead of having to set up looping conditions and index variables.
It also works with any kind of iterable object, not just objects with indexes and a length property.
For instance, instead of writing:
for (let i = 0; i < arr.length; i++) {
console.log(arr[i]);
}
We write:
for (const x of arr) {
console.log(x);
}
As we can see, the for-of loop is much simpler if we just want to loop through all items in an iterable object.
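Because for-of works over any iterable, the same loop shape also covers strings, Maps, and Sets; a small illustrative sketch (the sample data is made up):

```typescript
// for-of iterates any iterable, not just arrays.
const word = "abc";
const letters: string[] = [];
for (const ch of word) {
  letters.push(ch); // visits "a", "b", "c" in order
}

// Maps iterate [key, value] tuples, which destructure cleanly.
const ages = new Map([["alice", 30], ["bob", 25]]);
const entries: string[] = [];
for (const [name, age] of ages) {
  entries.push(`${name}:${age}`); // insertion order is guaranteed
}

console.log(letters); // ["a", "b", "c"]
console.log(entries); // ["alice:30", "bob:25"]
```

A classic index-based for loop would need manual bookkeeping for each of these shapes; for-of handles them all uniformly.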
Photo by Brooke Lark on Unsplash
Use Function Types Instead of Interfaces with Call Signatures
We can use function types instead of interfaces or object type literals with a single call signature.
For instance, instead of writing:
function foo(bar: { (): number }): number {
return bar();
}
We write:
function foo(bar: () => number): number {
return bar();
}
However, when the value has properties in addition to being callable, an interface is the right tool:
interface Foo {
(): void;
bar: number;
}
const foo: Foo = Object.assign(() => {}, { bar: 1 });
foo();
We have a function that returns nothing and has a bar property that's a number.
Conclusion
We may want to use as const over literal types to prevent the type of the variable from widening and to make object and array entries read-only.
Also, we want to use import instead of require for importing modules.
We also want to use function types instead of interfaces to specify the type of functions.
Finally, the for-of loop is better than the for loop in most cases.
|
https://medium.com/javascript-in-plain-english/typescript-best-practices-member-access-loops-and-function-types-36dce7173b76
|
['John Au-Yeung']
|
2020-06-11 15:47:47.020000+00:00
|
['JavaScript', 'Software Development', 'Web Development', 'Programming', 'Technology']
|
Leetcode Algorithms
|
|
https://medium.com/jen-li-chen-in-data-science/leetcode-algorithms-f0708c41d8f4
|
['Jen-Li Chen']
|
2020-12-27 12:56:47.914000+00:00
|
['Leetcode', 'Algorithms', 'JavaScript']
|
4 Virality-Lessons From the Most Viral Writer Alive
|
You only need viral components
What is actually meant by “viral”? In the classical sense, viral means the extraordinary, digital distribution of something — be it a video, an image, a message, an article, or whatever.
When I talk about Drake’s virality, I don’t mean whole songs that get record-breaking clicks on YouTube and streams on Spotify. He already manages that anyway.
It is the individual parts of Drake’s work that go viral.
Challenges, dances, and above all, individual passages of text from what Drake sings and raps.
Especially on TikTok, Instagram, and YouTube, videos of people doing things like the In My Feelings challenge or the Toosie Slide reach massive audiences.
Both are named after Drake's songs — advantageous marketing of his music — without him having to do anything great to get it.
These things go viral in social media trends and reach people who have not been reached by Drake’s music before.
All he had to do in Toosie Slide was to show the dance steps in the official music video.
He released the song Toosie Slide in April 2020, which has 240 million clicks on YouTube — quite a lot. But it’s getting even bigger. Because what went viral about this song is not the lyrics but the dance he performs in the video.
On TikTok, clips under the hashtag #toosieslide have a total of 6.3 billion views. That's right: billions. The trend has conquered the platform, and with it, Drake has managed to reach many new people.
Besides viral videos, Drake's music has another strength: it expresses the attitude towards life of many people in his target group. Drake manages to highlight individual lines in his expressive and memorable music — they then conquer the world in the form of captions, tweets, and even WhatsApp statuses.
So it is not primarily whole songs or music videos of Drake that go viral, but the individual parts that conquer the Internet and help Drake reach new audiences virtually free of charge, even people who otherwise have no contact with his music.
|
https://medium.com/illumination/viral-marketing-83aae0068aaa
|
['Louis Petrik']
|
2020-12-11 15:34:45.586000+00:00
|
['Marketing', 'Drake', 'Viral', 'Rap', 'Music']
|
Do not be afraid to breathe deeply!
|
Do not be afraid to breathe deeply!
Why has mankind still not been able to defeat the flu virus? One cardinal way would be to not inhale the virus, but no one wants to walk around in a gas mask.
Perhaps humanity could forget about airborne infections if a brilliant designer could make a filter mask or helmet fashionable during an epidemic, perhaps a "hoop with a veil" design. So, we are waiting for a brilliant designer.
Sometimes, when viruses become very contagious or deadly, the situation spurs mask developers to create something that could really radically solve the problem. But when the virus season ends, this desire passes unrealized. This is understandable, because of the complexity of the task in terms of engineering, design, and commerce. Commercial challenges arise because the device must be very cheap in order to be accessible to the entire population of the Earth, including the poorest. Storing a filter mask in a warehouse between epidemics would already drive the price unacceptably high. The way out could be to design a device so simple that its mass production could be ramped up shortly after an outbreak begins. The engineering difficulty lies in ensuring sufficient virus absorption by a stand-alone device that has no expensive or hard-to-use removable components.
An absolutely necessary condition for the mask to function is to ensure moderate excess pressure of (virus-free) air under the mask. That is, a fan or a compressor should be part of an individual means of protection against the virus.
There is a good idea for the filter that needs to be checked: an electrostatic filter based on charged particles of a fine aerosol of ordinary water. It requires no consumables (except distilled water), is fantastically cheap to manufacture, and there is a high probability that our filter mask will be much more effective than other masks. It can be produced both in mass production and on an individual 3D printer. We (more about us at the end of the article) offer this idea for joint development by a virtual team to speed up the work. All you need to participate is access to a 3D printer, a desire to understand physical processes, the ability to find elegant engineering solutions, or experience in organizing large-scale production. Development is conducted open source on GitHub (https://github.com/FilterCOVID-19/FilterCOVID-19_ENG), alongside a complete description of the principles embodied in the filter and downloadable models for manufacturing on a 3D printer. Having made any of the variants of such a device and tested it, you can safely attend any events and meetings with large numbers of people, without risking getting infected or infecting someone else. For the safety of others, we strongly recommend installing an output filter of a similar design; the low cost of the filter and the lack of consumables allow this.
The concept is based on three main ideas. First, in order to protect yourself from the virus, you do not need any vaccinations: not against the flu virus, nor the common cold, nor any other infection. It is enough to deploy effective personal protective equipment on a mass scale. Second, the solution to a pandemic can be printing individual filters on a 3D printer without leaving your home. Third, for mass production, the cost and complexity of the filter and helmet should be minimal.
The filter consists of a small number of parts: a housing, a battery, an electric motor, a disk for generating fine water mist, dielectric inserts on which the disk is mounted, mesh-electrodes for collecting drops, a high-voltage collector from the disk, a plastic insert to create a “cyclone”, as well as, optionally, a charcoal or silica gel filter.
The principle of operation of the filter is based on two physical principles: 1) charging the virus-particle in order to then precipitate it with a charge of the opposite sign; 2) merging of the virus-containing droplets due to surface tension and precipitation of large particles. We assume that this can be done by purifying the air with a large number of electrically charged fine particles with a large collective surface.
Individual air purification device
Water is dispensed through a capillary to the inner (conical) side of the disk, which rotates at high speed. A droplet that falls into the center of the cup moves, due to centrifugal force, along the slightly conical surface, breaking into small drops and charging through friction on the surface of the disk. The resulting charged fine particles are carried away by the air stream generated by a small impeller on the reverse side of the disk, which creates a turbulent flow at high rotation speeds. Air with fine particles passes through a spiral section of the casing in which the virus particles are recharged and absorbed by water droplets, and the water droplets enlarge. The fine water droplets are positively charged due to the triboelectric effect, while the surface of the disk becomes negatively charged. The charge on the disk is used in the droplet collection unit with a mesh electrode. In addition, the inlet of the droplet collection unit and the grid electrode are designed so that droplets are collected by centrifugal force, as in a cyclone air separator. Thus, two mechanisms of absorption of a virus-containing biological aerosol are provided: mechanical (the merging of water droplets due to their surface tension, followed by collection in the "cyclone" system) and electrostatic (the targeted attraction between neutral virus-containing particles and charged particles, the recharging of viral particles through fusion with charged water drops, and their collection on a high-voltage particle-collecting electrode).
Of course, this is just an idea that needs refinement and verification.
We invite everyone to the virtual team, or, if you have such an opportunity, to carry out one of the parts of these works yourself. In connection with current events, everyone may need such a filter. The speed of successful creation of such a device will determine the number of lives saved by it. The production of such filters is not technically difficult; many production capacities can be converted for its production.
Practical Filter Design
All blocks are made in the style of a non-spill inkwell.
The principle of a non-spill inkwell
A motor with a disc and a non-spill chamber forming a droplet spraying unit
Mist-generating disk
The disk has radial blades forming a directed air flow. The inner surface of the disk has a conical section with notches for the formation of micron drops of water under the action of centrifugal force. Under the influence of the triboelectric effect, these drops acquire a positive charge.
The block for mixing of the suspended droplets with air
The chamber is divided by a partition forming a turbulent flow when air passes through it. The main task is to enable the droplets to draw dangerous viruses onto their surface.
Moisture settling unit
This block contains a guide nozzle which generates a vortex flow in the chamber. The chamber is arranged on the principle of a "cyclone" filter. In the "non-spill" zone there is a metal grid that provides electrostatic precipitation of the virus-containing charged droplets. The non-spill mixing and mist-settling chambers are connected in the water compartment and contain a disinfectant dispenser and a valve for draining waste water. Purified air exits through the central outlet.
General view of the device assembly
At the final stage of purification, the air passes through silica gel, which absorbs excess moisture. Silica gel plays the role of an indicator of the effective operation of the “cyclone” and electrostatic precipitator. The appearance of liquid in it indicates a filter malfunction (air filtration functions without the help of silica gel, it is simply a guarantee of safety in the event of a device breakdown).
Suggestions as to the helmet design
The design solution for the filter is a necklace fixed on the collar of any garment using magnetic latches: one magnet is located inside the necklace, and the second under the clothes or collar. If the case is made in the form of a necklace ring with some flexibility, and an annular groove is created along the upper edge of this necklace in which a film can be clamped, then the helmet can be made of any flat film folded "origami"-style, fixed on the body and sealed with an elastic opening for the neck. Inside the necklace there is a filter for the inlet and outlet air flows of the helmet, and a battery for powering the electric motor. A lack of charge in the battery or a malfunction of the electric motor shows itself visually as a decrease in the tension of the thin-film helmet and as the damper ceasing to move when inhaling; this signals that the helmet must be removed. The damper exists so that the helmet does not inflate and deflate with each breath. The damper, compensating for inhalation, is made in a separate chamber in the form of an inflating rubber ball in the tube connecting the helmet with the atmosphere. This is a mandatory element for all helmets, because it works like an emergency valve: when the pressure in the helmet drops due to the cessation of air blowing (for example, during a malfunction or battery discharge), the ball deflates and the helmet loses pressure.
The air flow rate of the filter is determined by human consumption of approximately 10 liters of fresh air per minute, multiplied by a redundancy coefficient of 2. In total, with a battery life of 12 hours, that translates to 14,400 liters of air. The pressure created by the fan must ensure this flow rate, taking into account the passage through the small-cross-section openings of the cyclone filter, the resistance of the chamber with the silica gel, and the resistance of the helmet. The fluid flow rate, and the volume that must be kept in the spray chamber, are determined by the size of the sprayed particles (we target 1 μm) and the volume of air pumped through the filter. The specific surface area is equal to the product of a shape coefficient k and the reciprocal of the droplet diameter. At a 1 μm droplet size, 1 cubic meter of water is sprayed into droplets with about 6×10⁶ square meters of total surface.
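The sizing arithmetic here (10 L/min consumption, a redundancy factor of 2, a 12-hour battery, 1 μm droplets; all figures taken from this section) can be sketched in a few lines of code as a sanity check:

```typescript
// Air volume the fan must supply over one battery charge.
const consumptionLitersPerMin = 10; // human fresh-air need
const redundancy = 2;               // safety coefficient
const batteryHours = 12;
const totalLiters = consumptionLitersPerMin * redundancy * batteryHours * 60;
console.log(totalLiters); // 14400 liters

// Specific surface of spherical droplets: S/V = 6 / d for a sphere,
// so a spray of 1 μm droplets has 6e6 m² of surface per m³ of water.
const dropletDiameterMicrons = 1;
const surfacePerCubicMeterOfWater = (6 / dropletDiameterMicrons) * 1e6;
console.log(surfacePerCubicMeterOfWater); // 6000000 m² per m³
```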
The diameter of the necklace beads is determined by the minimum diameter of our disk in the case (approximately 5 cm). A rough test of the device can be carried out: 1) checking for the presence of high voltage using a school or homemade electrometer (two strips of tissue paper); 2) checking the ability to absorb fine particles using cigarette smoke (when the filter is in operation, smoke supplied to the inlet must be completely absorbed and leave no trace of smell at the outlet).
You can check the operation of the “cyclone” by disabling the electric deposition unit and checking for the presence of water in the silica gel. The device emits a faint white noise when generating mist, which is an indicator of normal operation.
About us
Unfortunately, our five-person team has limited capabilities. I, Veronica, wrote this article and am ready to answer your questions. The author of the conceptual ideas is my father, Igor. If you have questions, ask as soon as possible; he is paralyzed and has very little time left. Yuri specializes in 3D modeling. Valeriy is ready to contact all interested parties about manufacturing the filter and organizing mass production. We will conduct all our work on GitHub (https://github.com/FilterCOVID-19/FilterCOVID-19_ENG). Please join.
Security guarantees: the user does everything at his own risk. The license is free.
|
https://medium.com/@pryadkoveronika18/do-not-be-afraid-to-breathe-deeply-5f3140518ac4
|
['Pryadko Veronika']
|
2020-03-18 23:12:33.902000+00:00
|
['Masks', 'News', 'Virus', 'Coronaviruses', 'Covid 19']
|
What we believe in. Chronotope
|
Fragment of an article in Tatlin Mono — “Young architects 2014”
Habidatum Chronotope: map (2D) + time (vertical)
City as a Process
A modern city is a process unfolding in time and space. Studying such a city is like performing a piece of music, or listening to it, where the impressions of the passage being performed at the moment are immersed in the context of what has already been played, and the momentary pleasure complements the integrity of the performed fragment.
The static view of the city has defined the current toolkit of urban studies. There are virtually no tools for analyzing the time trajectory of a place.
Free Time of Space
Meanwhile, such trajectories open up new opportunities for citizens, businesses and city authorities. With the help of the Habidatum Chronotope platform, you can identify the “free time” resources of a place, the mode of its use at different times of the day, week, month, year.
According to Habidatum, approximately 60% of the time urban space is not in use: thus, the resources of urban activity growth and compaction are huge. Habidatum Chronotope platform helps detect free “temporal niches” in different areas of urban space.
Compatibility of Functions in Time and Space
The demand for free "temporal niches" raises the question of the compatibility of different functions sharing the same place at different times — a situation reminiscent of the "right of passage" in old medieval cities, when the street was used alternately for the passage of cattle and of people. The compatibility potential of different functions can be assessed using analytical indicators that determine the "space capacity" and "usage type" for different categories of platform users.
Event as a Change of Trend
Habidatum believes in a special definition of the urban "event" — for us, it is not just a single happening (a football match, accident, demonstration, etc.). Rather, it is a change of "trend" or "pattern". Data processed through special analytical "filters" in real time allows recording the change of "trends", whether it is a change in a given parameter (for example, the average trip length) or in a spatial pattern (for example, the configuration of ethnic diasporas in the city).
Real-time vs. Historical Analysis
Most of the professional tasks solved with the Habidatum Chronotope platform lie between real-time monitoring and traditional historical analysis. The platform allows doing both, without being limited to either extreme. Its main goal is the practice of "lean planning", whose application requires linking historical analysis to constant engagement with the process being studied, and which seeks to adapt urban development scenarios based on constantly updated information.
Thank you for reading. Appreciate your comments!
|
https://medium.com/habidatum/what-we-believe-in-chronotope-8b99af40b648
|
['Katya Serova']
|
2019-01-11 16:10:15.534000+00:00
|
['English', 'Urban Planning', 'Data Visualization', 'Smart Cities', 'Data Science']
|
How Entitlement is Silently Ruining Your Life
|
Venture capitalist and author Guy Kawasaki once wrote, “Entitlement is the opposite of enchantment.” I’ve lived this so I know it to be true.
For most of my life I’ve been in pretty good situations both socially and financially. But right after I finished my degree in 2010, my family was hit with significant money problems.
We had suffered those before but this was different. Bills went unpaid. Dinners had to be rationed. Water was lunch. Thankfully we had friends and family donate stuff for us.
While this time in my life lasted less than a year, I had to call it like I saw it: we were poor. And yet, I had never felt more alive.
I didn’t feel more alive because I was hungry more often. I didn’t feel more alive because I was worried they’d cut off the light and water. I didn’t feel more alive because I personally had debts to pay and wondered if they’d come for my kneecaps.
I felt more alive because I gave up the chase for more. I was humbled. I knew things could be worse and was more grateful for the little we had.
When the money came back, I had mixed feelings. This life that was flowing through my veins, this enchantment Kawasaki wrote about and my ability to finally embrace the present moment… what would happen to it? Would I be back to being just okay, future-oriented and thinking I deserved more and more? You bet.
I tried to fight it but I eventually lost. I said to myself, “Am I to believe that the only way I can feel truly alive is to be food insecure?” That didn’t make sense. Clearly some people are rich and obnoxious, but there are some that were humble too. How can I be like them? And that leads us to the first way entitlement is ruining your life.
1. Entitlement is Unconscious
When we think of someone being entitled, we picture the person barking at the coffee shop over their order, or the person who thinks that because they are nice to someone, that someone should do whatever they want — or any time we put in little effort and expect immediate and/or big results.
And we’re right! Those are great examples of entitlement. But sometimes we fall prey to these tendencies. I for one hate when the light turns green and people are honking their horn, but one day I did it. I was in a rush and it seemed like the person in front of me was driving Miss Daisy. Usually, I’d just mutter to myself, but it’s the same entitlement!
These people who engage in these selfish behaviors are not villains. We think they are because we aren’t doing what they’re doing (at the moment). We think they’re bad and we condemn their actions but if we practice some self-awareness, we’d find that we’re guilty sometimes too. The solution to being unconscious and mindless is to be mindful and self-aware.
2. You Misunderstand Success
What I mean by this is that there’s this cross-cultural notion that successful people are entitled, arrogant pricks that believe that everyone should bend to their will.
As a result, people who want to be successful adopt this way of being before they’ve done anything of repute. This is a problem because they misunderstand success.
Yes, we have the laundry list of celebrities and rich people who abuse their power and are overall terrible but their success is not because they were entitled bastards. They were plugging away at their trade, became successful and then with the newfound power came the tendency for corruption. Also, there was a fear of losing the power, which made them even worse.
The only people who are entitled bastards who are rich were born into wealth (or were kids with parents who never told them no.)
Furthermore, if being entitled equals success, what of the celebrities and wealthy folk who aren’t entitled? They’re thriving and are not terrible. Why is that? Because being celebrated and providing the world with great stuff never had anything to do with being entitled. If anything, hard work and humility are why they made something of themselves.
3. You Have an Unhealthy Attitude with Volunteering
There are two ways one can volunteer. On the one hand, you do something for no pay but for a benefit. On the other hand, you do something for no pay and no benefit.
A lot of the time, if we can't see what's in it for us, we won't do it. But here's the thing: if there was info or training or food you desperately needed, and you needed someone to sacrifice some time for your benefit, you'd hope that they'd do it.
Sometimes people need help. Your help. They’re like babies in the sense that they can’t help you, they can’t give you anything in return and they may even leave a mess for you to clean up, but as a being of planet Earth who respects other beings, open your heart to the less fortunate. That could’ve been you. Actually, it was you once upon a time.
4. You Think Life Owes You
In addition to some of the rich and famous being entitled and obnoxious, there are some poor or average people that are entitled and obnoxious. How is that possible? Because they’ve been kicked around by life to the point where they think they are due a break.
I can relate to this. I was lucky to take my human rights and basic necessities for granted but socially and emotionally, I wasn’t as lucky.
There eventually came a point in high school where I figured I was due a break (or at least a girlfriend). None of these things came until I could…
5. Embrace Reality
Entitlement is thinking that you are inherently deserving of privileges or special treatment. Entitlement is essentially expectation. The antidote to expectation is humility.
When my family and I were struggling to make ends meet, my attitude changed. Life humbled me. Life didn’t humiliate me, it just grounded me. Expectation makes you fly off the handle and your attention is diverted here, there and everywhere. Humility keeps you rooted in the present moment.
It isn’t that you stop having ambitions. You just act from where you are, not where you imagine yourself to be. This is crucial because people who aren’t humble are too chummy, too flighty, too careless. When you’re humble you know you aren’t owed anything. You also embrace what is and the pros and the cons of it because there are always pros and cons to everything.
The great thing about this is that humility can be achieved by looking at the four pointers above.
You aren’t humble because you don’t know that you’re not.
You mistakenly connect arrogance with success.
You can’t give of yourself without getting something in return because you think you’re above that when you’ve literally benefited from that yourself.
And finally, either because you were born with a silver spoon in your mouth or no spoon at all, you think life owes you something when you’ve contributed nothing to life.
Now it’s time to stop worrying about what you can get and concern yourself with what you can give.
|
https://alchemisjah.medium.com/how-entitlement-is-silently-ruining-your-life-e3f9d03272ae
|
['Jason Henry']
|
2019-08-08 04:44:11.770000+00:00
|
['Self-awareness', 'Self Improvement', 'Life Lessons', 'Self', 'Life']
|
Is it true that open ecosystems are the future of application development?
|
We can’t get enough of our mobile apps. Last year 204 billion apps were downloaded, and that number is only growing in 2020.
As app stores entered the mainstream tech culture, developers were exposed to an audience of millions of people eager to embrace the innovative capabilities of their devices through the creativity of third-party developers.
The rapid commercialization of the app ecosystem has led big tech brands to restrict app innovation within their platforms, often to the detriment of developers.
As developers wage war on app store policies, Huawei is banking on a simple concept that lets developers sidestep the conflict: open ecosystems.
Developers versus closed ecosystems
In some ways, closed app ecosystems are the antithesis of dominant developer trends. We live in a time when anyone in the world with the skills and creativity to create apps can do so. Yet developers must adhere to strict policies on app stores that prevent them from showcasing their apps outside of branded app stores or monetizing their creations without paying the mandatory fees of major mobile platforms.
As the app economy thrives as a multi-billion-dollar industry, developers are increasingly exposed to the vagaries of the platforms, and the erosion of developer freedom is the cost. In his TNW2020 keynote, Peter Gauden, chief marketing officer for the ecosystem business in Western Europe at Huawei, proposed an "alternative software ecosystem" that would allow developers to embrace open innovation. Huawei Mobile Services (HMS) is the company's solution for claustrophobic developers who want to create new app experiences for a connected world.
We are at a critical moment in the pushback against closed ecosystems. A group of large app companies, including the makers of Fortnite and Spotify, recently formed a Coalition For App Fairness to fight against store commissions and app store policies from Google and Apple. Companies like Facebook openly criticize app stores for policies that disproportionately affect business users. It is clear that multinational and independent developers want more freedom, and as Huawei embarks on connected life through its Seamless AI Life strategy, developers are encouraged to take a new path to innovation in applications.
Open ecosystems, open innovation
Context is at the heart of Huawei’s approach to app development. In his TNW2020 opening keynote, Huawei’s Gauden shared insights into the HMS ecosystem that puts developers in the driver’s seat of application innovation. Developers will create different use cases for different parts of the hardware, depending on the needs of the device in that scenario.
Huawei provides developers with open access to HMS Core software code and 13,000 scenario APIs to support application functionality in various contexts. The API suites fall into seven main categories, including application utility services such as a location kit, a wallet kit, and a scan kit. More adventurous kits include virtual reality engine APIs, 3D facial recognition, and smart-device kits that enhance the capabilities of connected Huawei devices.
Developers can integrate APIs to create contextual experiences for users. One example is MyTunerRadio, a digital radio directory with 50,000 global listings and over a million podcasts. Its developers have used HMS Core and API Kits to create features that support various scenarios where a user activates the app outside of a smartphone. For example, MyTunerRadio integrates with smart home speakers and uses Huawei’s HiCar technology for in-car play. It can also integrate with a smartwatch so that users can control the volume from their wrist.
Huawei places the smartphone at the heart of its Seamless AI Life connected strategy but aims to expand its app capacity outside of the mobile phone. As Gauden told TNW: “We have an ambitious vision: to deliver digital to every person, home, and organization for a fully connected and intelligent world.” The scenarios that the MyTunerRadio developers envision are a realistic representation of how an individual interacts with the app on a daily basis, and Huawei wants to bring this sensitivity to all technologies related to the app.
Gauden is keen to highlight how Huawei encourages app innovation: “With Huawei creating the building blocks to make it easier for developers to take advantage of amazing device experiences across multiple Huawei device types, this ensures developers can continue to focus solely on innovation and developing their products.”
Increasing consumer interest in connected devices means that application innovation must extend beyond the mobile region and developers can meet this demand by pairing future Huawei devices with connected software. Huawei’s AppGallery is the world’s third-largest app ecosystem built by Huawei and two million third-party developers. It is the place where developers communicate with users through their apps, and Huawei aims to make it a welcome destination for developers who want to build a living, connected future.
The new challenges of building an open and connected app ecosystem present an active opportunity for developers who want to exercise their independence and creativity to the fullest.
Building an open future
The future of the open ecosystem is full of immense possibilities and immersive technologies. In less than two decades, app stores have grown from a fringe novelty to a standard requirement for modern smartphones. Huawei’s work in this field is not wasted; it is necessary. As Huawei aligns its Seamless AI Life strategy with open app development, it is forging an unconventional but futuristic partnership with developers. Huawei is setting a new industry standard for innovative collaboration, and brands are paying attention.
Huawei’s partnership with News UK is an example of this new approach. The company behind TalkSport Radio, The Times, and The Sun has created an immersive newspaper app for the Huawei Mate XS foldable device. The app simulates the experience of reading a newspaper on the larger screen.
As Huawei works to enable developers to build the future in an open environment, millions of Huawei users around the world will feel the impact. The effect of the ripple will change the daily activities of many and direct us towards a more connected future.
When asked about the impact of technologies like News UK’s Mate XS integration, Gauden made clear to The Next Web that open collaboration with developers was the way forward: “With the home and workplace changing, technology plays a more important role than ever in keeping things running smoothly. And if we want to make this life easier, instead of causing frustration, we need to work together — creating products and services that connect, from my point of view.”
|
https://medium.com/@globebusinesscenter/is-it-true-that-open-ecosystems-are-the-future-of-application-development-4ac000b40643
|
[]
|
2020-12-17 09:48:55.740000+00:00
|
['Android App Development', 'AndroidDev', 'Application', 'Ecosystem', 'Open Source']
|
Google Cloud Platform for SQL Practitioners
|
Image borrowed from the Google Cloud Platform blog
The purpose of this article is to give an overview of the SQL features in Google Cloud Platform (GCP). This article is intended for anyone who currently uses SQL and is looking to understand the current options for using their SQL skills on Google Cloud Platform.
MS SQL Server on Windows Server
If you want to run MS SQL Server on GCP you have a few options: IaaS BYOL, IaaS, and fully managed. A few benefits of running MS SQL Server on Windows Server on GCP:
Fastest instance startup times
Fast high performance global network
Fast disk throughput performance — SSD persistent disks (high IOPS)
Tempdb and windows paging files on local attached SSD (highest IOPS)
Custom instances sizes
Bring your own license (IaaS BYOL)
If your organization has license requirements that limit physical hardware usage or a service provider license agreement (SPLA), you can use Google Compute Engine (GCE) sole-tenant nodes. These instances allow you to import a custom image into GCE, use it to start an instance, and enable in-place restarts so the VM restarts on the same physical server. By using sole-tenant nodes you ensure your VMs run on hardware fully dedicated to your use while limiting physical core usage. To prove server usage for license reporting, Stackdriver Monitoring lets you determine server usage and export server IDs. More on using existing Microsoft Application Licenses.
Pay license with instance cost (IaaS)
If you do not have a physical hardware license requirement you can pay for SQL Server on Google Cloud Platform with your instance cost. This is the easiest way to get started with SQL Server on GCP with no upfront costs and per second billing. This is your flexible license option that allows you to pay for licensing the same way you pay for cloud infrastructure, pay as you go, pay only for what you use.
SQL Server Images on Google Cloud Platform (June 2019)
Cloud SQL for SQL Server (Fully Managed)
If you are looking to run SQL server in a fully managed service where the provider takes care of managing the instance for you Cloud SQL for SQL Server is being released and is in alpha as of writing this article. Cloud SQL takes care of backups, replication, patches and updates, and has features such as read replicas for scaling out. Cloud SQL has limits of 10TB for storage, so if you need beyond 10TB you’ll need to consider other options (or clean up your DB :)).
More on SQL server on Google Cloud Platform here.
Data Warehousing and Processing with SQL
We previously covered MSSQL server relational transaction processing use cases for SQL. Now we will cover analytics processing with SQL on GCP.
Bigquery is a very powerful serverless, highly scalable, data warehouse with in-memory BI Engine and machine learning built in on Google Cloud Platform. The idea with BigQuery is to focus on the analytics, not your infrastructure.
Enterprises use BigQuery to unlock insights and run blazing-fast SQL queries on gigabytes to petabytes of data. To learn more about BigQuery check out the product page here. We’ll go over some of the more specific features of BigQuery that may be interesting to SQL practitioners in the remainder of this article.
BigQuery Standard SQL
If you are an experienced SQL administrator you can use your SQL experience to operate BigQuery. BigQuery has a mode called Standard SQL which is compliant with the SQL 2011 standard, supports nested and repeated data, allows for user defined functions, and DML.
If you are a SQL administrator you can use SQL queries for data transformation, analytics, and even machine learning in BigQuery.
The syntax and features of both MS SQL Server and Standard SQL are very similar and in most cases you will be changing a function or data type such as DATEADD vs DATE_ADD in your query.
Sqllines.com has a good SQL Server to MySQL migration reference here.
SQL server:
WHEN (PurchaseDate < DATEADD(MM,+1,GETDATE()) AND (PERIOD_DATE__A >= DATEADD(MM,+1,GETDATE()) OR A.PERIOD_DATE__A IS NULL))
SQL Standard:
WHEN (DATE(PurchaseDate) < DATE_ADD(Current_Date, Interval 1 month) AND (a.PERIOD_DATE__A >= CAST(DATE_ADD(current_date, Interval 1 month) AS string) OR a.PERIOD_DATE__A = ''))
More on Standard SQL Reference here.
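To sanity-check a translated date expression like the one above, you can reproduce the month arithmetic locally. A minimal Python sketch (the helper name is illustrative and not part of either SQL dialect; both DATEADD and DATE_ADD clamp to the last valid day of the target month for typical inputs):

```python
from datetime import date

def add_months(d, months):
    """Shift a date by whole months, clamping the day to the last
    valid day of the target month (e.g. Jan 31 + 1 month -> Feb 28)."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days))

# Mirrors: PurchaseDate < DATE_ADD(Current_Date, Interval 1 month)
purchase_date = date(2019, 5, 15)
cutoff = add_months(date(2019, 6, 6), 1)
print(purchase_date < cutoff)  # True
```

The clamping branch is what makes month arithmetic non-obvious; verifying it locally before porting a query can save a round trip to the warehouse.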
BigQuery Machine Learning (BQML)
BigQuery ML (BQML) enables users to create and execute machine learning (ML) models in BigQuery using Standard SQL queries. BQML makes ML accessible to data analysis teams and enables SQL practitioners to build models using existing skills.
ML on large datasets typically requires Python experience and knowledge of ML frameworks. These requirements have restricted ML within organizations to a small group of individuals, and in many cases organizations simply do not yet have these skill sets on existing data teams. Many of these organizations have data analysts who understand their company's data but have limited or aspiring machine learning and programming skill sets.
Analysts can use BigQuery ML to build and evaluate ML models in BigQuery. Analysts no longer need to export small amounts of data into spreadsheets or other applications or wait for limited resources from a data science team.
So what type of insights and machine learning can be done within BigQuery by SQL practitioners? To start, understand that in BigQuery ML a model can be used with data from multiple BigQuery datasets for training and prediction.
Types of machine learning models that are supported by BigQuery ML:
Linear regression for forecasting. Example, sales of an item on a given day in the future.
Binary logistic regression for classification. Example, determining whether a customer will make a purchase.
Multiclass logistic regression for classification. Example, predict whether an input is low, medium, or high-value.
K-means clustering for data segmentation. Example, identifying customer segments.
After you have created a model and have run the ML.EVALUATE function query, you can use your model to predict outcomes using the ML.PREDICT function.
Example query:
#standardSQL
SELECT
  country,
  SUM(predicted_label) AS total_predicted_purchases
FROM
  ML.PREDICT(MODEL `bqml_tutorial.sample_model`, (
    SELECT
      IFNULL(device.operatingSystem, "") AS os,
      device.isMobile AS is_mobile,
      IFNULL(totals.pageviews, 0) AS pageviews,
      IFNULL(geoNetwork.country, "") AS country
    FROM
      `bigquery-public-data.google_analytics_sample.ga_sessions_*`
    WHERE
      _TABLE_SUFFIX BETWEEN '20170701' AND '20170801'))
GROUP BY country
ORDER BY total_predicted_purchases DESC
LIMIT 10
Result:
This is giving you total predicted purchases based on Google analytics data.
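The aggregation this query performs (summing predicted labels per country, then ranking) can be mirrored in plain Python to make the logic concrete. A sketch with made-up rows standing in for ML.PREDICT output; the data below is illustrative, not real Google Analytics results:

```python
from collections import defaultdict

# Illustrative (country, predicted_label) pairs standing in for ML.PREDICT rows.
predictions = [
    ("United States", 1), ("United States", 0), ("United States", 1),
    ("India", 1), ("India", 0),
    ("United Kingdom", 0),
]

# SUM(predicted_label) ... GROUP BY country
totals = defaultdict(int)
for country, label in predictions:
    totals[country] += label

# ORDER BY total_predicted_purchases DESC LIMIT 10
top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]
print(top)  # [('United States', 2), ('India', 1), ('United Kingdom', 0)]
```

Because the predicted label for binary logistic regression is 0 or 1, summing it per group counts the predicted purchases in that group.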
More on BigQuery ML here.
Cloud Dataflow SQL
Cloud Dataflow SQL lets you use SQL queries to develop and run Cloud Dataflow Jobs in the BigQuery UI. Dataflow SQL uses Beam SQL and this is big for SQL practitioners because previously you could only write Apache Beam or Dataflow pipelines with Java, Python, or Go languages. This functionality opens up a whole new area of data processing for SQL practitioners to explore.
What you can do with this:
Develop and run streaming pipelines from the BigQuery UI in SQL
Join streams (from Cloud PubSub) with snapshotted datasets (BigQuery tables)
Write results into a BigQuery table for analysis and dashboards
Currently you can only read from a PubSub topic or BigQuery table and write to a BigQuery table; however, I am sure more data sources and destinations will be enabled in the future.
Some possible use cases for Cloud Dataflow SQL
Join application events in AppEngine via PubSub with other monitoring system datasets in BigQuery for richer system insights
Join the Google Analytics Real Time Reporting API with CRM or sales table data in BigQuery for marketing analytics
If you are experienced with SSIS and looking for a UI-based ETL service, also consider Cloud Data Fusion. While it does not include SQL capabilities at this time, it is a code-free, GCP-native ETL pipeline service.
This article was intended to give an overview of some of the SQL options in GCP and to show how SQL is supported in many ways on Google Cloud. Lastly, this week Google Cloud databases were named a Leader in The Forrester Wave: Database-as-a-Service, Q2 2019.
I hope this post was helpful to show some of the options that SQL practitioners can try on Google Cloud Platform!
|
https://medium.com/google-cloud/google-cloud-platform-for-sql-practitioners-2b2e4507535e
|
['Mike Kahn']
|
2019-06-06 14:39:04.219000+00:00
|
['Sql Saturday', 'Relational Databases', 'Google Cloud Platform', 'Sql', 'Bigquery']
|
Securing access to Azure Data Lake gen2 from Azure Databricks
|
This pattern could be useful when both engineers and analysts require different sets of permissions and assigned to the same workspace. The engineers may need read access to one or more source data sets and then write access to a target location, with read write access to a staging or working location. This requires a single service principal to have access to all the data sets in order for the code to execute — more on this in the next pattern. The analysts may need read access to the target folder and nothing else. Analysts and engineers will be separated by cluster but allow them to work in same workspace.
The disadvantage of this approach is dedicated clusters for each permission group, i.e. no sharing of clusters across permission groups. In other words, each service principal, and therefore each cluster, should have sufficient permissions in the lake to run the desired workload on that cluster. The reason for this is that a cluster can only be configured with a single service principal at a time. In a production scenario the config should be specified through scripting the provisioning of clusters using the CLI or API.
Depending on the number of permission groups required, this pattern could result in a proliferation of clusters. The next pattern may overcome this challenge but will require each user to execute authentication code at run time.
Pattern 5. Session scoped Service principal
In this approach access control is governed at session level, so a cluster can be shared by multiple users, each with their own set of permissions. The user attempting to access ADLS will need to use the direct access method and execute OAuth code prior to accessing the required folder. Consequently this approach will not work when using ODBC/JDBC connections. Also note that only one service principal can be set in session at a time, and this will have a significant influence on the design as described later.
This pattern works well where different permission groups (such as analysts and engineers) are required but you don't want to take on the administrative burden of isolating them by cluster. As in the previous approach, mounting folders using the provided service principal/secret scope details should be forbidden.
The mechanism which ensures that each group has the appropriate level of access is their ability to "use" a service principal which has been added to the AAD group with the desired level of access. The way to effectively "map" the user group's level of access to a particular service principal is by granting the Databricks user group access to the secret scope (see below) which stores the credentials for that service principal. Armed with the secret scope name and the associated key name(s), users can then run the authorisation code shown above. The client.secret (service principal's secret) is stored as a secret in the secret scope, but so too can any other sensitive details such as the service principal's application ID and tenant ID.
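For reference, the session-scoped authorisation code typically takes the following shape. This is a configuration fragment only, runnable solely inside a Databricks session where `spark` and `dbutils` exist; the scope and key names are illustrative placeholders, not prescribed values:

```python
# Illustrative scope/key names; READ access on the secret scope determines
# which users can "use" this service principal.
client_id = dbutils.secrets.get(scope="SsWritersA", key="client-id")
client_secret = dbutils.secrets.get(scope="SsWritersA", key="client-secret")
tenant_id = dbutils.secrets.get(scope="SsWritersA", key="tenant-id")

spark.conf.set("fs.azure.account.auth.type", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type",
               "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id", client_id)
spark.conf.set("fs.azure.account.oauth2.client.secret", client_secret)
spark.conf.set("fs.azure.account.oauth2.client.endpoint",
               "https://login.microsoftonline.com/" + tenant_id + "/oauth2/token")
```

Because these settings live in the session configuration, they only affect the current notebook session and do not persist a mount visible to other users.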
The disadvantage of this approach is the proliferation of secret scopes of which there is a limit of 100 per workspace. Additionally the premium plan is required in order to assign granular permissions to the secret scope.
To help explain this pattern further and the setup required, let’s examine a simple scenario:
The above diagram depicts a single folder (A) with two sets of permissions, readers and writers. AAD groups reflect these roles and have been assigned appropriate folder ACLs. Each AAD group contains an associated service principal, and the credentials for each service principal are stored in a unique secret scope. Each group in the Databricks workspace contains the appropriate users, and the group is assigned READ ACLs on the associated secret scope, which allows them to "use" the service principal mapped to their level of permission. Here is an example CLI command to grant read permissions to the GrWritersA group on the SsWritersA secret scope. Note that ACLs are at secret scope level, not at secret level, which means that one secret scope will be required per service principal.
databricks secrets put-acl --scope SsWritersA --principal GrWritersA --permission READ

databricks secrets get-acl --scope SsWritersA --principal GrWritersA

Principal   Permission
---------   ----------
GrWritersA  READ
How this may be implemented for your data lake scenario requires careful thought and planning. In very general terms this pattern is applied in one of two ways: at folder granularity, representing a department or data lake zone (1), or at data project or module granularity (2):
Analysts (read-only) and engineers (read-write) are working within a single folder structure, and they do not require access to additional datasets outside of their current directory. The diagram below depicts two folders A and B, perhaps representing two departments, and each department has their own analysts and engineers working on their data, and should not be allowed access to the other department’s data.
2. Engineers and analysts are working on different projects and should have a clear separation of concerns. Engineers working on "Module 1" require read access to multiple source data assets (A and B). Transformations and joins are run to produce another data asset (C). Engineers may also require a working or staging directory for persisting output during various stages (X). For the entire pipeline to execute, the Service Principal for Module 1 Developers is added to the various groups which provide access to all necessary folders through assigned ACLs. Analysts need to produce analytics using the new data asset (C) but should not have access to the source data, therefore they use the Service Principal for Dataset C, which is added to the Readers C group only.
It may seem more logical to have one service principal per data asset but when multiple permissions are required for a single pipeline to execute in Spark then one needs to consider how lazy evaluation works. When attempting to use multiple service principals in the same notebook/session one needs to remember that the read and write will be executed only once the write is triggered. One cannot therefore set the authentication to one service principal for one folder and then to another prior to the final write operation, as the read operation will be executed only when the write is triggered.
This means a single service principal will need to encapsulate the permissions of a single pipeline execution rather than a single service principal per data asset.
Pattern 6. Databricks Table Access Control
One final pattern, which is not technically an access pattern to ADLS, implements security at the table (or view) level rather than the data lake level. This method is native to Databricks and involves granting, denying, and revoking access to tables or views which may have been created from files residing in ADLS. Access is granted programmatically (from Python or SQL) to tables or views based on user/group. This approach requires both cluster and table access control to be enabled and requires a premium tier workspace. File access is disabled through a cluster-level configuration which ensures the only method of data access for users is via the pre-configured tables or views. This works well for analytical (BI) tools accessing tables/views via ODBC but limits users in their ability to access files directly and does not support R and Scala.
Conclusion
In this blog we’ve covered a number of Data Lake gen2 access patterns available from Azure Databricks. There are merits and disadvantages to each and most likely it will be a combination of these patterns which will suit your production scenario. Below is a table summarising the above access patterns and some important considerations of each.
|
https://medium.com/microsoftazure/securing-access-to-azure-data-lake-gen2-from-azure-databricks-8580ddcbdc6
|
['Nicholas Hurt']
|
2020-04-07 11:46:48.325000+00:00
|
['Azure Data Lake', 'Azure Databrick', 'Security Token', 'Spark']
|
The Super Simple Formula You Need To Succeed At Everything
|
You've been thinking about starting that new career for the last 10 years. Today it's time to put legs on this goal and get going. The number one thing that keeps us from achieving our goals in life and business is indecisiveness and procrastination. In my formula for how to succeed, we have to crush them both systematically.
You may think there is a lack of opportunities or it’s not the right time. The truth is you just haven’t made the hard decision. You are still weighing several options or worse you are moving forward in multiple directions. It’s time to stop being afraid to decide on a course and take massive action.
Decide to Succeed
Before you can go off and change your life you have to make a decision about what you are going to do. This phase of your development is all about creating clarity and delivering an unambiguous answer to the question of: What? We don’t need to be focused on how you are going to achieve these goals now but we do need to make a choice.
What holds so many of us back is that we aren’t decisive, we are terrified to make a decision and commit. This is usually the number 1 reason why so many haven’t transitioned into that new career or started that new business, they just get stuck in analysis. I don’t want that for you though.
So here’s what we are going to do today: I want you to write down 3 things you have delayed making a decision on and put down a definitive answer. I don’t care what the answer is but the exercise of writing down the decision will offer a huge weight off of your shoulders.
Once you liberate yourself from the burden of in-decision you now can be free to go and act. Free yourself from the stress and regret of not making a decision day after day, decide today what your life will be like and get to work!
Commit to Your Success
In this age we lack the courage of real commitment. There are no lines in the sand anymore, only lukewarm promises. If you want to succeed then you have to make a commitment to the decision. We are usually so caught up in trying to make the right decision that we only half decide and then attempt to move forward while looking back.
This is where we usually try to take a new path halfway between the fork in the road. When we do this though we never really reach the success we desire because we have been working in opposing directions in a never ending tug of war. If you want to find success you must make a concrete decision.
If you want to succeed then you have to make a commitment to the decision.
If I had to pick the biggest area of concern when it comes to understanding how to succeed, it is most definitely lack of commitment. People often skip this step because they are not ready to really decide. When I'm coaching clients, these first 2 steps are the hardest to complete because there is always a level of risk in deciding on a direction.
While many claim to make a decision they usually don’t really decide because they fear being wrong. That fear and lack of commitment prevents our ability to take a focused course of action in the next step.
Act on your decision
Now that we have made a concrete decision and commitment, it's time to get moving toward success. Don't kid yourself thinking that any decision you make will not require immediate action. No matter what you decide there is something to be done. Whether that's signing up for a new course or typing your resignation letter to be used on a specific date, you've got work to do now.
What we tend to do is make a decision and then take a break, feeling relieved we actually did something productive. That's not enough to move you forward in the direction of your goals. If you want to succeed you've got to begin taking massive action immediately.
The key here is to find something, anything that you can get started on now so that you don’t lose momentum or start second guessing the decision you made.
Review your progress
Stage 4 is all about checking your progress and reviewing your plan. I like to call this the learning phase of your journey to success because it’s an opportunity for you to check in and see if you need to course correct. If you find something isn’t working during your review then learn what you can and move on to the next thing.
This is probably one of the most important and overlooked steps towards your success. We often just run full speed with full excitement not realizing we are completely going in the wrong direction.
You will learn more about how to reach your goals by learning from your mistakes and applying new knowledge so you can accelerate your progress. This works really well in a monthly or quarterly cycle.
Depending on your goal you may need to monitor more frequently but that’s for you to decide. The point is to have regular feedback sessions. I don’t want you to get discouraged if things aren’t going well. These reviews are not to shame you, but to encourage you to find a new way to achieve your goals. It’s not about co-signing negative self talk but actually to protect you from let down at the deadline.
Repeat the success formula
Congratulations, you have succeeded at the goal that you set out to crush! Now what? Well if you are smart, and I know you are, you'd be wise to start the process all over with something new and even more audacious. The achievement cycle is never ending. Once you have a method for continuous improvement you will always be ahead of the pack. With a formula for how to succeed, you can achieve your dreams like a boss!
Each time you set out to achieve something use the same method to ensure your success. Don’t kid yourself thinking this won’t work for your new goal. It will work for every goal you desire. The difference between those who consistently achieve results and those who don’t is a consistent process for achievement.
The message is clear
Decide what it is you want to do/be, wholeheartedly.
Commit to your decision.
Act immediately to move forward.
Review progress regularly, course correct as needed.
Then erase the board and start over…
While success may seem complicated those who succeed at high levels use a formula and follow steps in order to give themselves the greatest opportunity for success.
Let’s Get it.
|
https://medium.com/@flbcoach/the-super-simple-formula-you-need-to-succeed-at-everything-26e070c4d43b
|
['Steve Evans', 'Fit Life Balance']
|
2020-12-03 23:00:03.069000+00:00
|
['Personal Development', 'Goal Setting']
|
A Gentle Introduction to Coherence
|
A Gentle Introduction to Coherence
Just in case you missed the announcement, there is now a publicly available, open sourced version of Coherence! As a result we are publishing a series of articles that will help you use it effectively.
This initial article series provides a 30,000 foot overview of the major Coherence features, and explains how they work together to simplify development of distributed applications. We will follow up with a series of deep-dive articles, drilling into the details of various product areas that require more in-depth coverage. The combination of the two will allow you to quickly get up to speed and understand which features to use and when.
Let’s tackle the obvious questions you should be asking first: Why do I care? What does it offer me? What can I build with it?
Why do I care?
As an architect of a large, mission-critical web site or enterprise application, you need to address at least three major non-functional requirements: performance, scalability and availability.
Performance is defined as the amount of time an operation takes to complete. Performance is extremely important, as it is the main factor that determines application responsiveness — experience has shown us that no matter how great and full-featured an application is, if it is slow and unresponsive, the users will hate it.
Scalability is the ability of the system to maintain acceptable performance as the load increases. While it is relatively simple to make an application perform well in a single-user environment, it is significantly more difficult to maintain that level of performance as the number of simultaneous users increases to thousands, or in the case of very large public web sites, to tens or even hundreds of thousands. The bottom line is, if your application doesn’t scale well, its performance will degrade as the load increases, and the users will hate it.
Finally, availability is measured as the percentage of time an application is available to the users. While some applications can crash several times a day without causing major inconvenience to the user, most mission critical applications simply cannot afford that luxury and need to be available 24 hours per day, every day. If your application is mission critical, you need to ensure that it is highly available, or the users will hate it. To make things even worse, if you build an e-commerce site that crashes in the run-up to Christmas, your investors will hate you as well.
The moral of the story is that in order to keep your users happy and avoid all that hatred, you as an architect need to ensure that your application is fast, remains fast even under heavy load, and stays up and running even when the hardware or software components that it depends on fail. Unfortunately, while it is relatively easy to satisfy any one of these three requirements individually, and not too difficult to comply with any two of them, it is considerably more difficult to fulfill all three at the same time.
Fortunately, Coherence can help.
What does it give me?
Coherence is a fast, scalable, fault tolerant data store. It provides automatic discovery of cluster members, automatic data sharding, highly redundant data storage, built-in messaging, events for anything that happens to the data or the cluster itself, simple to use APIs, and above all else — a “coherent” system (it is literally in the name).
Coherence is fast. Typical data access operations are usually in low single-digit milliseconds range, and even sub-millisecond for basic key-based operations. Of course, the actual performance is very much determined by the network between the cluster members — the better the network, the faster all Coherence operations are going to be.
|
https://medium.com/oracle-coherence/a-gentle-introduction-to-coherence-1fc11f763970
|
['Coherence Team']
|
2020-09-09 04:38:41.964000+00:00
|
['Distributed Systems', 'Introduction', 'Coherence', 'Scalability', 'Microservices']
|
The ISW: W7 NFL 2020
|
How to interpret the ISW’s New Look Tables
p (w) % = Implied win probability per team based on the results stemming from 10,000 monte carlo simulated games (e.g. NYJ won 5,156 / 10,000 simulated games → NYJ’s p(W) % = 51.6%).
Avg. Score = The average (projected) score per team derived from 10,000 simulated games.
Spread = The average point differential between the two average scores.
Avg. Total vs. Market = The average total is equal to the sum of each team’s average projected scores. The market is ESPN’s quoted total (‘over/under’).
Market Spread = The current line or spread per ESPN’s NFL Daily Lines.
p (cover) % = Estimated likelihood that each team covers the spread given the market spread based on our 10k monte carlo simulations.
p (cover total) % = The likelihood that the total (i.e. the sum of each team’s final scores) goes over ‘O:’ or under ‘U:’ the market total based on 10k monte carlo simulations.
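The counting behind these columns can be sketched in a few lines of Java. The score model below (independent rounded normals with illustrative means and a 10-point standard deviation, plus a fixed seed) is an assumption for demonstration only, not the ISW’s proprietary model:

```java
import java.util.Random;

public class MonteCarloNfl {

    // Simulates one matchup many times and returns the home team's win probability.
    // The normal score model and the means/std-dev used here are illustrative
    // assumptions, not the ISW's actual (proprietary) model.
    static double simulateHomeWinProb(long seed, int sims,
                                      double homeMean, double awayMean, double sd) {
        Random rng = new Random(seed);
        int homeWins = 0;
        for (int i = 0; i < sims; i++) {
            long home = Math.round(homeMean + sd * rng.nextGaussian());
            long away = Math.round(awayMean + sd * rng.nextGaussian());
            if (home > away) homeWins++;
        }
        return (double) homeWins / sims;
    }

    public static void main(String[] args) {
        double pWin = simulateHomeWinProb(42L, 10_000, 27.0, 24.0, 10.0);
        // p(cover) and p(cover total) follow the same pattern: count the simulated
        // games where (home - away) beats the spread, or home + away beats the total.
        System.out.printf("p(win) = %.1f%%%n", 100.0 * pWin);
    }
}
```

With 10,000 simulations the sampling error on each probability is only a fraction of a percentage point, which is why the tables can quote values like 51.6% with a straight face.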
The ISW’s Power Ratings
The ISW employs its proprietary power rating system each NFL season beginning with preseason power ratings that are adjusted weekly according to team performance against the spread (ATS) as well as margin of victory throughout the season.
The ISW’s Power Rating Spread = The difference between each team’s power rating along with a standard home field edge of 2.31 points.
Market Moneyline: The quoted moneyline per ESPN’s NFL Daily Lines.
Est. Fair Value Moneyline: Our estimated fair value price, based on the average (‘Avg p(win) %’) of our estimated win probability, FiveThirtyEight’s traditional Elo win probability, and ESPN’s FPI.
Early Sunday Slate
|
https://medium.com/the-intelligent-sports-wagerer/the-isw-w7-nfl-2020-9a46efbcaa09
|
['John Culver']
|
2020-10-23 00:27:19.225000+00:00
|
['NFL', 'NFL Picks', 'Monte Carlo Simulation', 'Probability', 'Sports Betting']
|
On World AIDS Day, we must fight the stigma that continues to kill us, and uplift those thriving with HIV
|
By David J. Johns, Executive Director of the National Black Justice Coalition, and Angela Yee, cohost of The Breakfast Club and founder of Juices for Life
Thirty-two years ago, we observed the first World AIDS Day. In efforts to raise awareness and speak out against the stigma surrounding the disease, organizations worldwide designed programs to combat our biggest opponent: fear. This year is no different. The shame and stigma around sex and sexuality continue to prevent us from having life-saving conversations about sexual health. We must prioritize the issue of HIV in our communities and take steps towards awareness and prevention of this virus that disproportionately affects Black and Brown people and low income folks of all sexual orientations, gender identities, and gender expressions.
Black people account for 42% of HIV diagnoses, yet only represent 13% of the population.
Almost fifty years after the AIDS epidemic swept the globe, we need to discuss these disparities that have killed millions globally. We have to talk to each other honestly and with empathy, and we have to support one another.
That’s why this year, NBJC put out a new toolkit for World AIDS Day. We’re aiming to share the facts and resources with our communities, so we can all work to combat HIV and AIDS together. We know it can be hard, but we know that if we start talking more about HIV, sexual health, and well-being, more generally, we can ensure that everyone we know and love is healthy and able to thrive.
There’s a dangerous myth perpetuated by the medical community that Black people have higher rates of HIV due to engaging in risky sexual behavior compared to white people. Not only is this myth misleading and problematic in perpetuating racist stereotypes, but it also dismisses the reality that Black people still face many obstacles to accessing healthcare and managing our health, including barriers to getting comprehensive insurance; biased doctors who give Black patients subpar treatment; and trouble affording lifesaving HIV medications like post-exposure prophylaxis (PEP), pre-exposure prophylaxis (PrEP), and antiretroviral therapy. Each of these challenges is exacerbated by the lack of advocacy for Black people from the elected officials who took an oath to serve and support us. We can help to fill these gaps by arming ourselves with factual information and by learning from members of our community whose lived experience and demonstrated effort have equipped them with the expertise to lead. Together, we can reduce stigma, increase access to care and support, and do things that make us feel better — inside and out.
The stigma surrounding getting tested is one of our most significant challenges. To successfully raise awareness about HIV and prevention, we must first gain an accurate insight into our communities’ infection rates. We are often discouraged from getting tested for fear of testing positive and not being accepted by our families and friends. HIV has been historically dismissed as a “gay male disease.” Contrary to this belief, women actually account for a quarter of new HIV cases, and 59 percent of HIV positive folks are straight Black women. Black trans women are also at high risk of HIV. As of 2019, 44% tested positive for the virus.
According to the Centers for Disease Control and Prevention (CDC), nearly 40% of new HIV infections are transmitted by individuals unaware that they have the virus.
There is also a misconception that an HIV diagnosis means that one’s life is over. This is not true. Many people with HIV are living healthy and fulfilling lives. Once a person is aware of their HIV status, they can make choices to manage their health and focus on the rest of their lives whether that means investing in career success, getting married, or raising a family. ART is an antiretroviral treatment that reduces HIV in the blood, reduces illness, and helps prevent transmission to others. People who begin HIV treatment shortly after diagnosis benefit the most from ART. Testing is also beneficial for individuals who are negative for HIV, because they can make informed decisions about sex, drug use, and health care.
The work of the National Black Justice Coalition aims to fill the gap in education about HIV. In our Words Matter toolkit, NBJC details the power of language and ways to shape an effective narrative surrounding HIV. The key priorities include creating safe/brave spaces to discuss HIV, understanding the effect of everyday language, and replacing negative/harmful language with affirming and healing language. Resources like the Words Matter toolkit have the power to drive change. Although it will not occur overnight, it is time for a community-wide shift in perception and conversation.
In recognition of those who lost their lives and loved ones to AIDS, and in celebration of those thriving with HIV, let’s make a promise to our future generations to speak up. Speak up when we are unsure, in doubt, and fearful. The only way to end HIV and AIDS forever is to start meaningful conversations in our homes, our friend circles, and institutions.
|
https://medium.com/@nbjcmedia/on-world-aids-day-we-must-fight-the-stigma-that-continues-to-kill-us-and-uplift-those-thriving-fd7cc62be9e3
|
['National Black Justice Coalition']
|
2020-12-01 18:37:56.401000+00:00
|
['Health', 'Racial Justice', 'BlackLivesMatter', 'Healthcare', 'LGBTQ']
|
The Mercator Projection
|
The Mercator Projection
Here’s a handy piece of trivia to share at dinner parties — the mapping system used in most modern electric vehicles zipping along Europe’s pristine highways was first developed in the 16th Century. It’s called the Mercator Projection. The what?
Photo by Stephen Monroe on Unsplash
Named after the Flemish cartographer Gerardus Mercator, who introduced it in 1569 (yes, really!), the Mercator Projection was soon adopted as the go-to map for globe trotters. It managed the previously unfathomable feat of efficiently representing a three-dimensional entity, or to be specific, an ellipsoid (the earth), on a flat, rectangular surface (a map). Mercator managed this by distorting the size of sections of the globe the further you get from the equator. Thus, the most distorted regions are the North and South Poles. So, in Mercator’s representation, Greenland is the same size as Africa, when in reality Africa is 14 times larger. Mercator was considered to have cracked the Rubik’s Cube of the era, and so successful was his projection that it is still being used today.
Why is the Mercator Projection used by Chargetrip?
One of the key successes of the Mercator Projection was that it was able to preserve rhumb lines — the imaginary lines on the earth’s surface, cutting all meridians at the same angle, used as the standard method of plotting a ship’s course on a chart. The good news for EV drivers is that these rhumb lines serve as the basic tenets for all navigation. However, there are differences in plotting a ship’s course across the Atlantic and a car’s navigation to the next charge station and so modifications have been made since Mercator’s day.
Charge map with pre-rendered & filtered stations and clusters
Vector Tile Service
The rhumb lines give way to tiles, or a Vector Tile Service to give it its proper name, when zooming in becomes necessary to plot driving routes, as with a car’s navigation system. A map is divided into equal square tiles, with the entire earth fitting into a single tile at a zoom level of zero. The number of tiles increases by a factor of four with each zoom level (doubling both horizontally and vertically), so that zoom level 1 has 4 tiles, zoom level 2 has 16 tiles, and so on. Using tiles is necessary not only to allow detailed routing but also to save space and data and thus allow speedy searches, with each tile only containing the data needed for its particular section.
Pinpoint accuracy
It is possible to achieve pinpoint accuracy by continually zooming in. By a zoom level of 17 or 18 individual buildings occupy much of the screen. This is achieved by using X and Y coordinates based on the longitude and latitude of a building or point being searched.
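The standard slippy-map formulas that turn a longitude/latitude pair into tile X and Y indices at a given zoom level can be sketched in Java like this (the Amsterdam coordinates in the example are purely illustrative):

```java
public class TileMath {

    // Standard slippy-map tile indexing on the Web Mercator projection:
    // at zoom z the world is covered by 2^z x 2^z tiles (4^z in total).
    static int lonToTileX(double lon, int zoom) {
        return (int) Math.floor((lon + 180.0) / 360.0 * (1 << zoom));
    }

    static int latToTileY(double lat, int zoom) {
        double latRad = Math.toRadians(lat);
        double n = 1 << zoom;
        return (int) Math.floor(
                (1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
    }

    public static void main(String[] args) {
        // At zoom 0 the whole earth is one tile; each zoom level quadruples the count.
        System.out.println("tiles at zoom 2: " + (1L << (2 * 2)));  // 16
        // Tile containing Amsterdam (52.37 N, 4.90 E) at building-level zoom 17:
        System.out.println(lonToTileX(4.90, 17) + "/" + latToTileY(52.37, 17));
    }
}
```

Because each tile is addressed by its zoom/X/Y triple, a navigation client only ever downloads the handful of tiles covering the visible area, which is what keeps searches fast.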
Charge station icons
Web Mercator
To support zooming, a new projection was created in 2005 and assigned an identifier by the European Petroleum Survey Group. Still based on the Mercator Projection, it is commonly referred to as “Web Mercator” or “Spherical Mercator.” While the new projection distorts size in the same way as Mercator’s projection did, the further away from the equator you go, it simplifies the equations by using a spherical approximation for faster calculations.
Conclusion
There’s a lot to the saying, “If it ain’t broke, don’t fix it.” That’s why the Mercator Projection has been used for centuries. However, to accommodate technology and allow meticulous accuracy in navigation systems and, in the case of Chargetrip, to map charge stations, modifications have been necessary. By using the Vector Tile Service, regardless of your location, you’re a tap of a screen away from knowing the exact location of the nearest charge station to your EV.
|
https://medium.com/chargetrip/the-mercator-projection-d192ef72f6aa
|
['Jeff Vasishta']
|
2021-08-10 11:46:43.192000+00:00
|
['Routing Software', 'Electric Mobility', 'Maps', 'Routing', 'Electric Car']
|
Elliptic Curve Signatures and How to Use Them in Your Java Application
|
Let’s assume you want to send a message and you want to ensure that a) the receiver can detect whether or not the message was modified (integrity) and b) the receiver can verify that you’re the author of this message (message authentication). In that case you typically use digital signatures to digitally sign that message.
Actually, digital signatures also provide non-repudiation, i.e. the sender cannot deny the signing of a message. However, providing integrity and message authentication are the two most common use cases for signatures.
Initial Steps
Initially, you create a public key and a private key. You have to do this just once. When you want to sign a message, you use the private key. To verify a signature, you use the public key. The private key is a secret key and must not be shared with other people as everyone in possession of this key, would be able to compute a valid signature and could trick the receiver of a message into believing that the message is from you. On the other hand, the public key needs to be distributed to everyone who should be able to verify a signature, i.e. the public key is public.
Steps to Sign and Verify a Message
If you have created a public and private key, a message is typically signed as follows:
Compute a hash h of the message m with a cryptographic hash function such as SHA-256. Compute the signature s for the hash using the private key. Send the message m and the signature s to the receiver. The receiver verifies the signature s with the sender’s public key.
Algorithms
The most commonly used signature algorithms are those standardized by the National Institute of Standards and Technology (NIST) in FIPS 186–4. These are 1) the RSA Digital Signature Algorithm, 2) the Digital Signature Algorithm (DSA) and 3) the Elliptic Curve Digital Signature Algorithm (ECDSA).
Of these three algorithms, DSA should no longer be used for generating digital signatures, as it will no longer be approved by NIST in FIPS 186–5, the successor of FIPS 186–4. Instead, FIPS 186–5 will only approve the verification of DSA signatures. The generation of RSA and ECDSA signatures is still approved, and FIPS 186–5 additionally approves the generation of signatures with the Edwards-curve Digital Signature Algorithm (EdDSA).
Both ECDSA and EdDSA are based on elliptic curves, DSA is based on the discrete logarithm problem, and RSA’s security is based on the problem of factoring large numbers (decomposing a composite number into smaller integers) and the RSA problem (taking eth roots modulo a composite n), for which no efficient algorithms exist.
Security Level and Key Sizes
The big advantage of signature algorithms based on elliptic curves is that they require much smaller keys compared to RSA to achieve the same level of security resulting in much smaller signatures and reduced computational requirements. Due to this, these algorithms are particularly well suited for embedded and IoT devices but also for applications where a huge number of signatures needs to be computed or verified.
The security level is typically measured in number of bits. For instance, if a system provides a security of 5 bits, an attacker would need to perform 2·2·2·2·2=32 (2 to the power of 5) operations to break it. For symmetric ciphers such as AES the security level is typically equal to the size of the key that is used. For asymmetric algorithms such as RSA, ECDSA and EdDSA the security level of a key size is estimated from the difficulty of the underlying mathematical problem. The level is adjusted every time a new attack is discovered which reduces the effort to solve the mathematical problem. NIST has published their recommendations for key sizes in NIST Special Publication 800–57 Part 1 Revision 5. A summary for asymmetric algorithms is shown in the following table.
Also, in Publication 800–57 Part 1 Revision 5, NIST no longer approves key sizes which provide a security level of less than 112 bits, at least when cryptography is used to protect federal government information.
Recommendations on the key sizes may differ slightly between various organizations. An overview of recommendations can also be found here.
Signature Size
The size of a signature depends on the algorithm and the key size. The size of an RSA signature is equal to the size of the key. For instance, if you’re using a 3072 bit key to sign a message, the size of the signature is 3072 bit (384 bytes). ECDSA and EdDSA signatures have twice the size of the used key. For instance, if you’re using a 256 bit key to achieve 128 bit security, the signature will be 512 bit (64 bytes).
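These size rules are simple enough to check with a couple of helper methods. Note that the EC figure below is the raw signature size; Java’s ECDSA output is DER-encoded and therefore a few bytes larger than the raw 2× key size:

```java
public class SigSize {

    // An RSA signature is as large as the modulus (the key size).
    static int rsaSigBytes(int keyBits) {
        return keyBits / 8;
    }

    // A raw ECDSA/EdDSA signature is two scalars, each the size of the key.
    // (Java's ECDSA output is DER-encoded, so it is a few bytes larger.)
    static int ecRawSigBytes(int keyBits) {
        return 2 * ((keyBits + 7) / 8);
    }

    public static void main(String[] args) {
        System.out.println(rsaSigBytes(3072));   // 384 bytes for a 3072-bit RSA key
        System.out.println(ecRawSigBytes(256));  // 64 bytes for a 256-bit EC key
    }
}
```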
Compute ECDSA Signatures in Java
Select a Curve
Since version 7 Java supports various elliptic curves. If you have jshell installed on your machine, you can easily get a list of all supported curves by executing the following Java code in jshell:
jshell> import java.security.*;
jshell> Security.getProvider("SunEC").getService(
...> "AlgorithmParameters", "EC").getAttribute(
...> "SupportedCurves")
The list should contain at least the following curves:
secp192r1 (NIST P-192)
secp224r1 (NIST P-224)
secp256r1 (NIST P-256)
secp384r1 (NIST P-384)
secp521r1 (NIST P-521)
These are curves that have been standardized by NIST in FIPS 186–4. The prefix “sec” stands for “Standards for Efficient Cryptography”, the letter “p” indicates that this curve is over a prime field, the number following the “p” denotes the key size and finally, the letter “r” indicates that the parameters of the curve were chosen verifiably at random. More details on the naming convention, the curves and their parameters can also be found in “SEC 2: Recommended Elliptic Curve Domain Parameters”.
In the following example we use the curve secp224r1 which provides a security level of 112 bit, results in small signatures and doesn’t take too much resources to compute the signature. This curve is well suited for most applications with high security and high performance requirements.
Create a Public and Private Key
Before we can sign a message, we have to create a public and private key. We do this as follows:
import java.security.*;
import java.security.spec.*;

KeyPairGenerator g = KeyPairGenerator.getInstance("EC", "SunEC");
ECGenParameterSpec ecsp = new ECGenParameterSpec("secp224r1");
g.initialize(ecsp);

KeyPair kp = g.genKeyPair();
PrivateKey privKey = kp.getPrivate();
PublicKey pubKey = kp.getPublic();
Now, we select the signature algorithm. Here, we use ECDSA to sign the SHA-256 hash of the message.
Signature s = Signature.getInstance("SHA256withECDSA","SunEC");
s.initSign(privKey);
Next, we compute the signature of a message. First, we call the update function of the Signature instance and provide the message as input. Then we call the sign method which computes and returns the signature.
byte[] msg = "Hello, World!".getBytes("UTF-8");
byte[] sig;
s.update(msg);
sig = s.sign();
Now, the signature is stored in the byte array “sig”.
The receiver of the message can now verify the signature as follows: the receiver creates an instance of Signature and initializes this instance with the public key. Then this instance is updated with the message and finally the receiver can verify the signature by calling the verify method with the signature to be verified.
Signature sg = Signature.getInstance("SHA256withECDSA", "SunEC");
sg.initVerify(pubKey);
sg.update(msg);
boolean validSignature = sg.verify(sig);
If the provided message is the same for which the signature was computed, the verify method will return true, otherwise false.
The full source code is also available on GitHub here.
Performance
To get an idea of how many signatures we can compute per second, I signed the “Hello, World!” message in a loop and measured the time it takes on an i5-5300U at 2.3 GHz on one CPU core. I have reused the signature instance, i.e. in each iteration I just called the update and sign methods.
The following table summarizes the result.
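A benchmark loop along those lines can be sketched as follows (the iteration and warm-up counts are arbitrary, and absolute numbers will of course depend on hardware and JVM warm-up):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.security.spec.ECGenParameterSpec;

public class EcdsaBench {

    // Signs a fixed message in a loop, reusing the Signature instance,
    // and returns the measured throughput in signatures per second.
    static double signaturesPerSecond(int iterations) throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("EC", "SunEC");
        g.initialize(new ECGenParameterSpec("secp224r1"));
        KeyPair kp = g.genKeyPair();

        Signature s = Signature.getInstance("SHA256withECDSA", "SunEC");
        s.initSign(kp.getPrivate());
        byte[] msg = "Hello, World!".getBytes("UTF-8");

        // Warm-up so the JIT has compiled the hot path before we measure.
        for (int i = 0; i < 500; i++) { s.update(msg); s.sign(); }

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) { s.update(msg); s.sign(); }
        double seconds = (System.nanoTime() - start) / 1e9;
        return iterations / seconds;
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("%.0f signatures/second%n", signaturesPerSecond(2_000));
    }
}
```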
Conclusion
Elliptic curve cryptography can be confusing as many different curves exist and one curve is sometimes known under different names. Usually, the curves standardized by NIST (i.e. P-192 aka secp192r1, P-224 aka secp224r1 and so on) should be sufficient for most applications with high security requirements.
Since Java 7 it is quite easy to compute elliptic curve signatures as since then Java supports the most frequently used curves. External dependencies are not necessary anymore. Furthermore, the number of lines required to compute or verify signatures is really small so that the code is simple and clean.
Curves with small key sizes are well suited for high performance applications. On a modern CPU with four cores you should be able to compute more than 10,000 signatures per second.
|
https://medium.com/@etzold/elliptic-curve-signatures-and-how-to-use-them-in-your-java-application-b88825f8e926
|
['Daniel Etzold']
|
2020-12-23 21:42:20.983000+00:00
|
['Signature', 'Performance', 'Security', 'Java', 'Elliptic Curve']
|
What Were The Cod Wars?
|
A British fishing trawler (left) passes an Icelandic patrol boat (right) (Public domain)
What Were The Cod Wars?
There is a dispute brewing between the United Kingdom and the European Union over fishing rights surrounding the British Isles. This is the latest in a long, long line of open water fishery disputes that have plagued nations for centuries. The latest Brexit-fueled disagreement over fishing rights, catch limits and territorial sovereignty has evoked comparisons to the Cod Wars, which took place in the 1960s and 1970s.
The present is often an echo of the past. So what were the Cod Wars? What kind of lessons can they teach us about problems today?
A watery background
Icelandic controlled waters highlighted. (Public domain)
Both Iceland and the United Kingdom are island nations with a strong reliance on seafood as a part of their national diets. Fishing in the waters around the UK and Scandinavia has been occurring for hundreds of years. Norse and Anglo fishermen have been sailing the seas in search of fish, wealth and food since ancient times.
In 1901, the much more powerful British Empire secured an agreement with Denmark (which at the time controlled Iceland) that limited Iceland’s sovereign fishing territory to a measly three nautical miles (nm) offshore. The agreement ran for a period of fifty years, and at the time of its signing British fishermen around Iceland were more of a nuisance than a problem.
World War I and World War II saw dramatic decreases in fishing activity due to the heavy fighting at sea but from the 1930s onward, British fishing in Icelandic waters had jumped considerably. They were starting to be a problem rather than just a nuisance. However, the agreement which bound Icelandic fishermen to 3nm only was about to expire so they decided to simply wait the treaty out.
Conflict erupts
Iceland did not become an independent nation until 1944. The previous agreement between Denmark and the British expired in 1951 but Iceland, being a brand new and tiny nation, did not want to wade into the fray until they felt ready.
By 1958, Iceland had passed a new national law which expanded their economic zone from 3nm to 12nm. To the people of Iceland, who were tired of seeing fleets of British trawlers sail up from the northern parts of the UK all weeks of the year, this seemed like a reasonable expansion. The British were not happy.
Not wanting to give up the previously lucrative fishing grounds to an upstart nation, the British declared that they would continue to fish within the twelve mile zone under the protection of military patrol boats. The Cod Wars had kicked off.
The First Cod War lasted from 1958 to 1961 and saw mostly posturing on both sides. Iceland could not run off British patrol vessels and was forced to watch as foreign trawlers steamed into their waters under cannon protection. Instead, Iceland resorted to geopolitical threats to achieve their aims.
This was at the height of the Cold War and Iceland stated flatly that if they were going to be treated poorly by their so-called allies then they would simply pull out of NATO and go it alone, giving the USSR an opening to move into the valuable strategic position.
The threats worked and the British agreed to the new 12nm economic zone after two and a half years of saber rattling and patrols.
The Cod Wars Continue
A collision between an Icelandic ship (foreground) and British frigate (background) (www.hmsbacchante.co.uk / CC BY-SA 2.5)
The Second Cod War was a much tenser and more violent conflict than the first had been. Following a similar pattern that preceded the first fishing conflict, Iceland passed a new law dramatically expanding their exclusive fishing zones from 12nm to 50nm. It was another fourfold increase, similar to the first one but this one was much, much larger.
The Second Cod War would last from September of 1972 to November of 1973.
This time around, the Icelandic marine forces were much more equipped to deal with the British intrusions. Iceland had built new cutters for its Coast Guard and they had a new plan: cut their nets.
Iceland deployed a fleet of fast cutters that were to sail quickly around the lumbering fishing trawlers and cut their nets. Once the nets were severed they would either sink or float free allowing the catch to escape. It was a good plan but they underestimated how aggressive the British patrols would be.
Like in the First Cod War, Britain deployed military craft to protect their fishing fleets from Icelandic harassment and that is when things started to heat up. There were multiple collisions, one which led to the death of an Icelandic engineer.
Just two years later, in 1975, Iceland announced its plans to expand their economic zone from 50nm to 200nm which, to many, was a bridge too far. The Third Cod War broke out in 1975 and led to millions of dollars in damages.
Third time is the charm
The Third Cod War would be the last. At the time of the expansion to 200nm of an exclusive economic zone, the idea was being floated in the United Nations to give those sort of protections to all coastal nations. However, the unilateral expansion that Iceland pushed before any negotiating could be done at the global level rankled everyone, including the Warsaw Pact.
The third and final Cod War lasted just over six months but was the most costly yet. The British activated nearly two dozen frigates, some of them with reinforced wooden bows, to be used in specialized ramming operations with the intention of simply ramming the Icelandic Coast Guard cutters out of the way.
The British were now wise to the Icelandic strategy of quickly cutting nets and sailing away. Instead of threatening to shoot the cutters, instead, they simply sailed up and rammed the ships before they could cut the nets. This kind of sailing was extremely dangerous and led to serious damage on both sides of the conflict.
According to records there would be 55 ramming incidents during the Third Cod War.
Despite all of the bluster and ramming on the high seas, the outcome was the same as before. The new limits were eventually recognized and respected.
Results and lessons for today
Current oceanic economic zones as observed today. (Liam Mason / CC BY-SA 4.0)
The results of the final of the three Cod Wars actually led to the codification of a 200nm economic exclusionary zone around every coastal nation in the world. The United Nations Convention on the Law of the Sea was passed and is still followed to this day and was largely influenced by the ongoing conflicts that the Cod Wars spawned.
Today, a similar conflict is brewing between the UK and the EU over fishing rights within the waters that lie between EU countries and the newly economically independent United Kingdom. Just as in the Cod Wars, both sides are threatening to posture on the high seas and invade territorial waters with patrol craft and fishing boats.
The Icelandic fishing expansion drastically damaged the fishing industries in the northern parts of the UK, an economic downturn that still affects some of the old fishing towns in the region. The UK is hoping to avoid a similar outcome in its conflict with the EU over fishing rights.
Iceland showed the world that you can defend territorial waters in an aggressive manner and still get the outcomes you want, something that generally fuels more intense conflicts in the future. Iceland achieved all of its goals against a much larger, wealthier and more established United Kingdom through tenacity, aggressive behavior and an unwillingness to back down from a conflict.
The Cod Wars remain a sore subject for many people and they highlight an ancient brand of battle over fish. Who gets to catch them? How many should we catch? Where can we fish? Those questions have been asked for hundreds of years and continue to be asked today.
|
https://medium.com/exploring-history/what-were-the-cod-wars-28ccea9c4dc2
|
['Grant Piper']
|
2020-12-22 14:03:54.771000+00:00
|
['World', 'Europe', 'History', 'News', 'United Kingdom']
|
Are Some People Just Born Lucky?
|
To me, I’d attribute my luck to my bubbly personality and being blessed with the ability to learn quickly and get things done right away. I also rarely deviated from the rules and was extremely obedient. And obviously, I also took advantage of every single opportunity I received.
However, in my eyes, even though things came a little easier for me in comparison to other people, I never really deemed myself as a lucky person. In the grand scheme of things, I actually considered myself less fortunate than the average middle class.
My parents immigrated from another country, knowing no one and having very little. They did this so they could give me the opportunities that I have now — And I am eternally grateful. But, growing up, we didn’t have everything.
Our house wasn’t huge. We never went to Disney World. We never had a dishwasher. I always had to work growing up. Going to college, I didn’t receive any financial aid from my parents — Every dime I had for college came directly from student loans.
Of course, this isn’t me dismissing the things that I did have. By no means were we poor. We got by. But it was a humbling experience which taught me a lot about being thankful.
As I got older, I realized that the less you had, the more appreciative a person you were, because you know what it’s like to live with less. In turn, the more you know about living with less, the easier it is for you to do so.
So, where does luck fit into all of this?
|
https://lindseyruns.medium.com/are-some-of-us-just-born-lucky-d879a545ed38
|
['Lindsey', 'Lazarte']
|
2019-09-27 01:58:39.857000+00:00
|
['Self-awareness', 'Personal Growth', 'Advice', 'Life', 'Self']
|
How To Find Remote Jobs in 2021
|
As the jobs market edges closer to pre-pandemic normality, many workers are looking to maintain the flexibility and freedom they gained during the lockdown. As such, our consultants are seeing rising interest in remote jobs. Read this guide to learn more about how to find remote jobs in 2021.
The Rise of Remote Jobs
Data from Eurofound revealed that Ireland had one of the highest home-working rates in Europe at the height of the pandemic, with 40% of paid hours worked by employees performed from home. The National Remote Working Survey found that 94% of Irish workers are in favour of working remotely on an ongoing basis after the pandemic. The top benefits associated with remote working were:
No traffic and no commute
Reduced costs of going to work and commuting
Greater flexibility as to how to manage the working day
Reduced carbon footprint
Greater productivity
Less stressful than working onsite
The Opportunity of Remote Work
In April 2021, there were 55,000 remote jobs open in Ireland. Remote work lifts geographical barriers, meaning that job seekers can hugely expand their career prospects to roles all over Ireland and beyond. As such, ambitious professionals seeking career growth no longer have to stay in metropolitan areas with a high cost of living. What’s more, those with dreams of seeing the world can fulfil their wanderlust while also earning a living.
How To Find Remote Jobs: A Beginner’s Guide
If you are seeking a new role in 2021 then you should broaden your search to include remote jobs. This will add an entirely new dimension to your job hunt as there are many new tactics you will need to add to your traditional job-hunting strategy. Fear not, however, Berkley is here to help!
What Skills & Qualities Do Employers Look For in Remote Employees?
If you are applying for a remote role then your application will need to show that you have the right skills and qualities to work productively from home. You will need to reconfigure your CV to highlight how you have demonstrated relevant skills in the past. So what are the skills and qualities that make for a productive and reliable remote worker?
Previous Remote Experience
It’s a good idea to mention any remote work that you’ve already done. Perhaps you completed an online course from home or you worked from home part-time in a previous role? Employers will be seeking someone who can easily slot into the role, so this experience will give you a competitive advantage over other candidates.
Self-Efficacy
As a remote worker, you will be trusted to complete projects on time with minimal managerial oversight. As such, the independent nature of remote work is highly suited to self-starters. When writing your CV, try to highlight times when you showed autonomy, proactivity and self-motivation.
Organisation
Strong organisation skills go hand-in-hand with self-efficacy. In your application, show how you have used your organisation skills to complete projects on time. Showcase your time management skills and your task prioritisation abilities. You should highlight how your organisational skills have enabled you to multitask or quickly pivot when needed.
Technical Know-How
As a remote employee, you will be highly dependent on technology. Popular tools for remote work include email, instant messaging apps, and document sharing tools and video conferencing software. As such, it’s helpful if you are a tech-savvy individual who can quickly get to grips with new tools.
Communication
What tends to happen when people work remotely is that communication becomes a function of a particular need in time, rather than more social interaction, as would happen in the workplace. It’s these random social interactions that knit together the fabric of company and workplace culture and keeps employees engaged with their company and their colleagues. Pete Rawlinson — Silicon Republic
As a remote worker, you will mostly communicate with your colleagues through the written word (i.e. emails, instant messages, productivity tools etc.). As such, it’s vital to be able to relay information and ideas in a clear and concise manner. Be sure to highlight your communication skills in your application. Throughout your application, demonstrate the strength of your communication skills in all your engagements with your potential employer. Be timely in your responses and pay attention to spelling and grammar in your CV, cover letter and all written communications.
How to Find Remote Jobs: Our Top Tips
Partner with A Recruiter
Remote job hunting is a new challenge for everyone. Engaging with a Recruiter who knows their market well can be worth its weight in gold as they can guide you through processes and give you some strong insight into how a company functions — the finer detail can help you to secure the right role for you. Louise O’Neill, Manager of Berkley’s Business & Technology Team
If you are looking for remote work, we recommend reaching out to a recruitment consultant. They can talk to you about career opportunities in your area and support you with job applications. Your recruiter may also be able to tell you about companies that aren’t openly advertising remote roles but would be open to remote hires.
Choose Your Search Terms Wisely
If you are searching for work through online jobs boards, then pay attention to your search terms. Keywords to look out for include:
Remote
Work at home
Work from home
Home-based
Telecommute
Cyber commute
Use the Remote Job Search Feature on LinkedIn
LinkedIn is another great tool for finding remote jobs. To find suitable remote work opportunities, follow these instructions:
1. Click the Jobs icon at the top of your LinkedIn homepage.
2. Click the Search jobs field and enter keywords or a company name.
3. Click the Search location field and select Remote from the dropdown.
4. Use the filter options at the top of the search results page to filter the results.
5. Once you’ve applied all the filters, you can switch on the Job Alert toggle and set job alerts. Learn more about setting job alerts on LinkedIn.
6. Click the job posting to view the job description and apply for the job if it suits your requirements.
Security Tips for Remote Job-Hunting
When looking for remote jobs online, safety is paramount. Research from Zapier has revealed that there are approximately 60 “work from home” job scams online for each genuine opportunity. To avoid being taken advantage of, we recommend working with a trusted recruitment partner. If you are searching online, avoid general classified websites such as GumTree. Always research the company before applying and double-check that the provided email address is legitimate. For more advice on how to safely search for a job online, we advise reading this comprehensive guide from Indeed.
Start your remote job search today by scheduling a free call with one of our expert recruitment consultants!
|
https://medium.com/@berkleymarketingteam/how-to-find-remote-jobs-in-2021-5c22309f2aad
|
['Berkley Recruitment']
|
2021-06-08 09:16:51.333000+00:00
|
['Remote Job', 'Careers', 'Job Search Tips', 'Job Hunting Tips', 'How To Find A Job']
|
Adorable Sit Sand Height Adjustable Desk
|
Description
The new trend in the office is the ergonomic Sit Stand Height Adjustable Desk. People and work are more flexible these days, and that is changing the culture of work. Ergonomics and employee convenience have become management’s main aim in ensuring optimal effectiveness for every employee. A height adjustable desk is the greatest choice for long hours at the desk. It makes the job easier both mentally and physically.
Office Furniture Dubai
If you want a modern height adjustable table for your office or home office, you don’t have to think twice. The greatest place is Salam UAE. Just call us or visit us at our Ajman showroom.
Sit Sand Height Adjustable Desk
We Provide Different Kinds of Height Adjustable Desk like Sit Sand Height Adjustable Desk, office furniture dubai, office furniture, height adjustable desk, height adjustable desk Dubai, height-adjustable desk ikea, electric height adjustable desk, show electric height adjustable desk, best electric height adjustable desk, height adjustable desk amazon, height-adjustable desk legs, height adjustable desk frame, Ikea height adjustable desk, electric height adjustable desk, height adjustable table Ikea, height-adjustable table Dubai, standing table UAE, IKEA standing desk use, sit-stand table Dubai, electric height adjustable desk, uplift desk uae, best height adjustable desk, manual height adjustable desk, small height adjustable desk, flex spot height adjustable desk, Steelcase height adjustable desk, flex spot electric height adjustable desk, DIY height adjustable desk Office Furniture,
Modern Design Height Desk
An electric, dual-motor, ergonomically designed, height adjustable modern desk. A fantastic desk for home, office, and gaming. Salam UAE provides the top-quality adjustable desks in Dubai. Our office desks in Dubai are unique and awesome. Buy office desks and chairs in Dubai in a cost-effective way.
Visit Our Facebook page Salam UAE
|
https://medium.com/@salamuae008/adorable-sit-sand-height-adjustable-desk-had-18-salam-uae-b5f1a92c2b7b
|
[]
|
2021-12-16 12:15:51.908000+00:00
|
['Office Setup', 'Office Culture', 'Furniture Design', 'Office Furniture', 'Furniture']
|
Robinhood Opens Crypto Trading for New Yorkers
|
Robinhood Opens Crypto Trading for New Yorkers
Customers can now invest in seven cryptocurrencies, including bitcoin and ethereum, and track price movements and news for 10 other crypto assets.
Source: Robinhood
Robinhood Crypto is set to launch for residents of New York State Thursday.
The New York launch comes five months after Robinhood, which was initially founded as a free consumer stock-trading app, received a virtual currency activities license (aka the BitLicense) and a money transmitter license from New York State.
“We’ve introduced millions of people to equity investing on Robinhood, and want to do the same for everyone interested in crypto, so launching in New York is a crucial next step,” Josh Elman, VP of product, told Cheddar by email Wednesday.
Robinhood has been adamant about its plans to “democratize access to the American financial system” — something of a chorus among fintech startups. Operating in New York is obviously an integral part of achieving mainstream and institutional adoption, with 20 million consumers and the largest financial hub in the world.
Historically, New York regulators have been more rigorous in their approach to cryptocurrency businesses than other states, an unpopular approach among many in the crypto industry. But industry leaders, including Robinhood, Square, Circle, and Coinbase, hope that by complying with New York’s requirements, they can bring crypto closer to the incumbent financial system. New York has granted BitLicenses to 18 companies since introducing the license in 2015.
When Robinhood first introduced its zero-commission cryptocurrency trading service in January 2018, it became something of a gateway drug for users who would then move on to other Robinhood products like equities and options.
“We heard from lots of our customers that they want crypto to be a part of their investment strategy,” Elman said.
Robinhood Crypto customers can currently invest in Bitcoin, Bitcoin Cash, Bitcoin SV, Ethereum, Ethereum Classic, Litecoin, and Dogecoin, and track price movements and news for 10 other crypto assets.
The fintech startup is valued at $5.6 billion, after its last funding round which closed in March 2018. Robinhood has raised $539 million in capital to date and last fall revealed it’s preparing for an eventual IPO.
The company now boasts six million users in the U.S. across the platform, compared to four million this time last year. Crypto trading is available in 39 states, compared to 16 last year. The company declined to specify how many of its users are active on Robinhood Crypto.
|
https://medium.com/cheddar/robinhood-opens-crypto-trading-for-new-yorkers-d56d2eac8a18
|
['Tanaya Macheel']
|
2019-05-23 15:35:50.272000+00:00
|
['Cryptocurrency', 'Business', 'Fintech', 'Technology']
|
What is Redux, its importance, and its integration?
|
Redux is basically a powerful JavaScript library used for maintaining state. It can be used with any web technology, but it is most commonly used with React and AngularJS to manage data across the application.
The question is: is it a framework?
The answer is no, it’s not a framework like AngularJS or VueJS. So what is Redux, then? Redux is simply a data-flow mechanism used to manage application data flow and state.
Its predictable state allows you to manage the data you want to display and the response you generate for any action. Redux makes application state more manageable and testing easier.
Why Redux?
In a traditional application, it was quite difficult to maintain the data flow, and the architecture for passing data around was complex.
Let’s try to understand this through an example.
Assume the Parent component holds a `name` value that is displayed by a component three levels deep. We would pass it down using the format below:
class Parent extends React.Component {
  constructor(props) {
    super(props)
    this.state = {
      name: 'Manish'
    }
  }
  render() {
    return <ChildLevel1 name={this.state.name} />
  }
}

const ChildLevel1 = (props) => {
  let { name } = props
  return <ChildLevel2 name={name} />
}

const ChildLevel2 = (props) => {
  let { name } = props
  return <ChildLevel3 name={name} />
}

const ChildLevel3 = (props) => {
  let { name } = props
  return <p>My name is {name}.</p>
}
Looks complicated! So, to overcome these drawbacks, engineers designed Redux.
How Redux handle this?
Check the method below,
Make Reducers
— — — — — — — -
const name = (state = 'Manish', action) => {
  switch (action.type) {
    case 'SET_NAME':
      return action.payload
    default:
      return state
  }
}

const rootReducer = combineReducers({
  name
})
Create A store
— — — — — — — -
const store = createStore(
  rootReducer,
  compose(
    applyMiddleware(thunk),
    window.devToolsExtension ? window.devToolsExtension() : f => f
  )
)
Now use it in whichever component you want.
You can use it as given below.
There are two ways to get data from the Store:
Step 1:
– Import the store object
– Read the state like store.getState().name (output: ‘Manish’)
Step 2:
Pass the store to the Provider and then access it inside components like below.
```
<Provider store={store}>
  <App />
</Provider>
```
and then work inside components like below:

```
import { connect } from 'react-redux'

class ChildLevel3 extends React.Component {
  render() {
    return <p>My name is {this.props.name}</p>
  }
}

const mapStateToProps = (state) => {
  return {
    name: state.name
  }
}

export default connect(mapStateToProps, null)(ChildLevel3)
```
Here, some new keywords are used, like connect and mapStateToProps.
Connect:
Connect is a react-redux function used to connect store data with components. It also acts as a Higher-Order Component. It takes data from the Store and provides it to the component according to its needs.
It uses two functions,
mapStateToProps and mapDispatchToProps
mapStateToProps =>
It is used to get pieces of data from the store according to the component’s needs, just like we use name in the above example. It maps state data to the component’s props.
mapDispatchToProps =>
It is used to assign actions to the component’s props; those actions are dispatched to the store.
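As a rough sketch of what connect does with these two functions (React rendering and store subscription are ignored; computeProps and the action shape here are illustrative, not react-redux internals):

```javascript
// mapStateToProps: picks the slice of state this component needs.
const mapStateToProps = (state) => ({ name: state.name });

// mapDispatchToProps: wraps action dispatches as callable props.
const mapDispatchToProps = (dispatch) => ({
  updateName: (val) => dispatch({ type: 'UPDATE_NAME', value: val })
});

// A simplified picture of how connect computes the props a component receives.
const computeProps = (state, dispatch, ownProps = {}) => ({
  ...ownProps,
  ...mapStateToProps(state),
  ...mapDispatchToProps(dispatch)
});

const dispatched = [];
const props = computeProps({ name: 'Manish' }, (action) => dispatched.push(action));
console.log(props.name); // 'Manish'
props.updateName('Ashish');
console.log(dispatched[0].value); // 'Ashish'
```

In real react-redux, the merged props are also recomputed whenever the store state changes; this sketch only shows the merging.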
What are Actions, Action creators and reducers in React-redux?
Actions are simply JavaScript objects used to send data from the application to the store. They are the only source of information for the store. They can be sent to the store using store.dispatch().
Action creators are exactly that: functions that create and return action objects. Building action objects by hand is more error-prone, which is why action creators exist. It’s easy to conflate the terms “action” and “action creator”, so do your best to use the proper term.
Reducers define how the application’s state changes when an action is sent to the store. An action describes what just happened, but it doesn’t specify how the application’s state changes in response.
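To make actions, action creators, and reducers concrete, here is a minimal hand-rolled sketch with no Redux dependency (createTinyStore is a simplified stand-in for Redux’s createStore, not the real implementation):

```javascript
// Action creator: a function that returns an action object.
const updateName = (value) => ({ type: 'UPDATE_NAME', value });

// Reducer: (state, action) -> new state. It never mutates state in place.
const nameReducer = (state = 'Manish', action) => {
  switch (action.type) {
    case 'UPDATE_NAME':
      return action.value;
    default:
      return state;
  }
};

// A tiny store with the same shape as Redux's: getState / dispatch / subscribe.
const createTinyStore = (reducer) => {
  let state = reducer(undefined, { type: '@@INIT' });
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((l) => l());
      return action;
    },
    subscribe: (listener) => { listeners.push(listener); }
  };
};

const store = createTinyStore(nameReducer);
console.log(store.getState()); // 'Manish' (the reducer's default)
store.dispatch(updateName('Ashish'));
console.log(store.getState()); // 'Ashish'
```

The UPDATE_NAME action type and its value payload mirror the appActions/appReducer example later in this article.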
Redux Integration with web technologies
Redux does not belong only to React; it can also be used with Ember, jQuery, Angular, or vanilla JavaScript.
So, we are going to implement this with ReactJs.
Redux is not included with React, so to use it we need to install it first, along with the React bindings. The following code is provided by expert ReactJS developers:

```
npm install --save redux react-redux
```
Let’s start in the simplest way.
Below, we will use an input to get the name, and then, instead of using component state, we will use Redux for storage.
Inside App.js we are creating an input for the name.

```
import React from 'react'
import { connect } from 'react-redux'
import { updateName } from './Actions/appActions'

class App extends React.Component {
  handleChange = (e) => {
    this.props.updateName(e.target.value)
  }
  render() {
    return <input onChange={this.handleChange} />
  }
}

const mapDispatchToProps = (dispatch) => {
  return {
    updateName: (val) => dispatch(updateName(val))
  }
}

export default connect(null, mapDispatchToProps)(App)
```
Above, we use the updateName function to update the name; it creates the action that manipulates data in the Store.
Let’s define the action. Create an appActions.js file inside the Actions folder, like below:
appActions.js
```
export function updateName(value) {
  return {
    type: 'UPDATE_NAME',
    value
  }
}
```
Now, when the component dispatches an action, it will be handled by a reducer.
appReducer.js
```
export function name(state = '', action) {
  switch (action.type) {
    case 'UPDATE_NAME':
      return action.value
    default:
      return state
  }
}
```
To create the store object, we will use combineReducers to combine all the reducers defined inside our application, like below:
rootReducer.js
```
import { combineReducers } from 'redux'
import { name } from './reducers/appReducer'

export default combineReducers({ name })
```
We will use this combined reducer to create the store object; with the store object, our index.js file will look like below.
```
import React from 'react'
import { render } from 'react-dom'
import { Provider } from 'react-redux'
import { createStore } from 'redux'
import rootReducer from './rootReducer'
import App from './App'

const store = createStore(rootReducer)

render(
  <Provider store={store}>
    <App />
  </Provider>,
  document.getElementById('root')
)
```
Above, we created a store object using createStore and then passed that store object to the Provider, which makes the store state accessible to components.
Now, the ChildElement component will use the updated store data to display the name attribute:
```
import React from 'react'
import { connect } from 'react-redux'

class ChildElement extends React.Component {
  render() {
    return <p>{this.props.name}</p>
  }
}

const mapStateToProps = (state) => {
  return {
    name: state.name
  }
}

export default connect(mapStateToProps, null)(ChildElement)
```
So, instead of passing data down through many levels, we can use the store’s data in any component. This is how the Redux data-flow architecture works.
|
https://medium.com/@oaktreecloud/what-is-redux-its-importance-and-its-integration-4958098db5b5
|
['Ashish Sharma']
|
2020-02-07 07:19:38.893000+00:00
|
['Reactjs', 'Integration', 'Redux']
|
How to Use Neuroproductivity to Boost Your Potential
|
This article is not going to be an article that tells you to get up at 6 AM every morning. I won’t tell you at what time you should be checking your e-mails or how to get organized. You will not come out of it with the perfect planning to organize your busy day. You will learn the neuroscience behind your brain productivity and some little tricks to tap into it.
Your brain is always working. It is a powerful machine that allows you to breathe, swallow, pump your heart, see or hear without having to make much of a conscious effort. Your brain is always working in the background, and when you try to focus on a difficult task, your brain is either going to be with you or against you. Even though you’re not actively aware of your neural activity, it is the key to whether or not you will accomplish a flow-like state that will allow you to crunch all of your objectives. Let’s dig into what your brain needs to be productive!
Photo by David Cassolato from Pexels
This incredible machine that is your brain
Your brain communicates using neurotransmitters. These little guys are chemicals created by the nervous system to carry signals across your body. To help your brain achieve productivity, you’ll need the help of four of them:
Serotonin: This one makes you feel calm
Dopamine: This one makes you feel happy
Noradrenaline (or norepinephrine): This one makes you feel alert
Acetylcholine: This one helps you focus
We’re going to dig a little bit into each of them to see how they affect your productivity.
Serotonin: That little confidence boost you need to achieve your task.
Serotonin, like dopamine, is part of the neurotransmitters that are often called “happiness hormones.” It triggers a sense of security and pride. It helps you keep a stable mood when you work and keeps you awake. Serotonin is linked to your circadian rhythm and is the hormone that allows you to wake up in the morning. While the lack of serotonin is not always a cause for depression, serotonin intake can help people get out of depression. You can boost your serotonin levels by taking a deep breath, exercising, or finding someone kind enough to give you a massage. But we’ll dig more into the practical framework later in this article.
Dopamine: The sense of reward you get from your work
Dopamine is a bit trickier than its friend serotonin. It is one of the neurotransmitters with the biggest role in the reward system. This system tells your brain that something is good and should be repeated or, on the contrary, avoided at all costs. For this reason, dopamine has a key effect on the formation of habits. You can see how this sense of reward can motivate you in your work. If you find some accomplishment in what you are doing, you will physiologically want to do more. However, the other side of the dopamine coin is that you can get dopamine from other sources, distracting you from your work. For instance, an e-mail notification can pull you away from your work and break your flow. You need to be mindful of your reward system to take advantage of it.
Noradrenaline: The stress that is good for you
Noradrenaline is the neurotransmitter that triggers the fight-or-flight state in your body. It keeps you on high alert. While chronic stress is obviously harmful to you, positive stress can keep you alert and focused on the task. For instance, it is because of noradrenaline that people who tend to procrastinate manage to work more efficiently as their deadline approaches. The state of alertness enabled by noradrenaline also lets you easily retrieve the data you need to perform your task. If a bear chased you, for example, you would instinctively find appropriate solutions in record time to try to get out of it alive. The same can happen during your task: if you feel this positive stress, you will easily handle all the different concepts needed to perform it.
Acetylcholine: The focus and memory maker
Acetylcholine is the most obvious neurotransmitter when you see what it does, but surprisingly, it is less well known than the ones mentioned above. Its main role is in your autonomic nervous system, the system that handles all your basic functions, like breathing or managing your heart rate, without you having to worry about them. It is also involved in learning and memory. Damage to the cholinergic system (the acetylcholine factory) has been linked to degenerative memory diseases such as Alzheimer’s. Increases in it, on the other hand, allow for better cognitive function and better memory. The neurotransmitter also helps you stay focused.
Key Takeaways:
There are four neurotransmitters at play in your brain’s productivity: serotonin, dopamine, noradrenaline, and acetylcholine.
Serotonin and dopamine are part of the “happiness hormones”. Serotonin helps you wake up in the morning and stabilizes your mood, and dopamine makes you feel rewarded.
Noradrenaline is a “stress hormone” that, unlike cortisol, can positively affect you in small doses. It makes you feel focused and alert.
Acetylcholine is the neurotransmitter involved in all of your body’s background tasks, such as breathing, and can also increase your focus, memory, and learning abilities.
Art by the author
How to use this knowledge to boost your productivity
To increase your productivity, you will need to boost each of these neurotransmitters. Here is a list of actions you can take:
1. Know your reward system. Create a new one if you need it.
Photo by Giorgio Trovato on Unsplash
You need to find a sense of reward in doing your work. It will keep your dopamine levels up and help you stay in your flow. Sometimes we do tasks that we have to do but don’t find rewarding. In this case, why not give yourself external rewards that you truly care about? You can allow yourself a treat, for instance.
2 . Create positive pressure
You don’t have to wait for an external source of pressure on your work; you can create positive pressure to keep yourself on track. One trick is to create a To-Do list of what you want to accomplish in the day and a bigger To-Do list of what you want to accomplish in the week. These lists will help you stay on track and create deadlines that trigger this positive stress. And nothing prevents you from creating rewards for completing your To-Dos; that’ll give you an extra dopamine boost! If you don’t feel like self-discipline is doing it for you, you can also commit publicly to being done with your task by a due date. You can tell your colleagues that you will do a quick presentation of your current research project next week.
3. Create an environment that helps you stay focused
That advice is fairly common, I’ll admit it. It has to do with acetylcholine: if you turn off your notifications and prevent distractions, you’ll help yourself stay focused. Focusing takes willpower, and the better your working environment is, the less willpower you have to use to stay focused.
Photo by cottonbro from Pexels
4. Breathe and meditate
Slow breathing exercises and mindfulness help raise your serotonin levels. You need to feel safe and clear before you start your task. You can always incorporate 5-minute breathing exercises during your breaks, for instance.
5. Mind what you eat
To boost your acetylcholine levels, you can eat foods rich in choline. These mostly include:
Beef liver
Chicken liver
Salmon
Cod
Eggs
Cauliflower
Broccoli
Why not start your day with a boiled egg or a plate of broccoli? Eating foods rich in choline benefits not only your productivity but many other bodily functions as well. If you wonder which foods best support your cognitive activity, you can also check these recommendations from Harvard Health.
Photo by Jenny Hill on Unsplash
6. Exercise
This piece of advice is also fairly common, but the reasons exercise matters for productivity are manifold:
It helps you decrease your cortisol levels and feel more at peace.
It triggers endorphins, another happiness hormone that makes it easier to endure difficult tasks.
It helps you regulate your blood pressure and your health, and you need to be healthy to feel productive.
However, light exercise is recommended over strenuous exercise, like running a marathon, which can actually lower your acetylcholine levels. Try to take a walk in the morning or during your lunch break. It can help you gain some perspective on your work day.
7. Sleep, sleep, sleep
Photo by Jordan Whitt on Unsplash
With this last one, I will clearly go against the “Wake up at 6 AM to have more time” paradigm. Some people don’t need a lot of sleep to be fully restored; that much is true. It might, however, NOT be your case. You need to know your circadian rhythm, know what is best for you, and do everything you can to get good quality sleep. Here is some advice for achieving high-quality sleep:
Sleep and Wake up at a regular time
Avoid screens at least 30 minutes before sleeping
Try to unwind at least 40 minutes before sleeping through meditation or light reading.
Sleep in a calm environment.
Everybody knows by now that sleep is essential. For productivity, it is essential for several reasons; first, it allows you to keep stable serotonin levels that will help you have a stable enough mood to put in some work. It will also keep your levels of acetylcholine high. Finally, a rested body is a healthy body. You might avoid being distracted by sleepiness and dozing off during your work sessions, which will make you more productive.
Key Takeaways:
To boost your productivity, you can boost the production of the four neurotransmitters presented in this article.
You can act on your lifestyle by getting enough good quality sleep, minding what you eat to maximize your choline intake, and doing regular light exercise.
You can act on your working environment to prevent most distractions and help you stay on track.
You can create a reward system that works for you and helps you perform your tasks. On top of that, you can bring positive pressure into your environment to create the little positive stress that makes you more productive.
Take a deep breath, center yourself, and approach your tasks with clarity, and everything will feel better!
Photo by Minh Pham on Unsplash
I hope this article helped you understand more about your brain and ways to stimulate it to improve your productivity. If you have anything to add on tricks to improve productivity, I’ll be happy to see you in the comment section!
I wish you a happy journey toward self-improvement!
|
https://medium.com/curious/how-to-use-neuroproductivity-to-boost-your-potential-42571c1e1a00
|
['Cheshire Chaton']
|
2020-12-17 15:31:23.150000+00:00
|
['Neuroscience', 'Brain', 'Productivity', 'Self Improvement', 'Achievement']
|
IAGON Has Been Approved For SkatteFUNN
|
Dealing with uncertainty during these pandemic times, our team is moving forward to achieve IAGON’s mission and create a world where anyone can profit by joining a massive processing and storage platform.
We are happy to announce that IAGON has been approved for the well-known and important SkatteFUNN program.
It means that IAGON has been recognised as a company that seeks to develop new or improved products through dedicated R&D projects. This will help generate new knowledge, skills, and capabilities within the company.
The SkatteFUNN R&D Project is a government program designed to stimulate research and development (R&D) in Norwegian trade and industry. The incentive is a tax credit and comes in the form of a possible deduction from a company’s payable corporate tax.
We should note that SkatteFUNN aims to significantly increase recipients’ investments in R&D and enhance innovation and productivity.
The SkatteFUNN is open to all companies with a permanent establishment in Norway as long as the R&D costs can be attributed to future earnings of the Norwegian company.
How This Will Help IAGON
Participation in this program will help us stay on track and continue product development despite all the negative impacts of the coronavirus pandemic. It means we can invest more funds in the research and development of our products and become the next tech stack in the book.
We need to remind you that we have a complete working product and you can try our Platform — https://iagon.com.
Our main goal is to make the platform super functional but at the same time understandable and easy to use.
We are going strictly according to our roadmap, so please stay tuned with our updates with all the innovations and improvements.
About SkatteFUNN project
The SkatteFUNN R&D Project is a government program that is designed to stimulate research and development (R&D) in Norwegian trade and industry. Businesses and enterprises that are subject to taxation in Norway are eligible to apply for tax relief.
Approved projects may receive a tax deduction of up to 20 percent of the eligible costs related to R&D activity. All costs must be associated with the approved projects.
For more information and to see what else is going on with IAGON, please follow us at the social media links below, or head over to the IAGON Website!
Facebook, LinkedIn, Reddit, Twitter, Telegram, Youtube, Medium
|
https://medium.com/iagon-official/iagon-has-been-approved-for-skattefunn-9797fd7a5e18
|
['Iagon Team']
|
2020-11-10 10:16:42.481000+00:00
|
['Blockchain Technology', 'Cloud Computing', 'Blockchain Startup', 'Iagon', 'Cloud Services']
|
Why Positive Thinking Works
|
From “Thought-Forms” (1905) by Annie Besant and C.W. Leadbeater
Why Positive Thinking Works
Toward a Theory of Mind Causation
Why should positive thinking, “manifestation,” or the “law of attraction” work at all? Before you cry “confirmation bias!” (materialism’s equivalent of “lock her up!”) take a deep breath.
In my book The Miracle Club I propose a theory of mind causation. It may be wrong, it may be grossly incomplete, but I feel that we need to at least try to theorize from the intersection of testimony, science, and mysticism. It’s necessary, I believe, for our generation of seekers to do more than tell the same stories over and over. We must experiment, we must experience, we must have results — and we must attempt to come up with reasons why mind causation just might work.
I’ll start by quoting something that mystic Neville Goddard (1905–1972) said in 1948: “Scientists will one day explain why there is a serial universe. But in practice, how you use this serial universe to change the future is more important.”
It was a striking observation, because it wasn’t until years later that quantum physicists began to talk about the many-worlds theory. Physicist Hugh Everett III (1930–1982) devised the concept in 1957. He was trying to make sense of some of the extraordinary findings that had been occurring for about three decades in quantum particle physics. For example, scientists are able to demonstrate, through various interference patterns, that a subatomic particle occupies a wave state or state of superposition — that is, an infinite number of places — until someone takes a measurement: it is only when the measurement is taken that the particle collapses, so to speak, from a wave state into a localized state. At that point it occupies a definite, identifiable, measurable place. Before the measurement is taken, the localized particle exists only in potential.
Now I have just about squeezed all of quantum physics into roughly a sentence. I think it’s an accurate sentence, but obviously I’m taking huge complexities and reducing them into the dimensions of a marble. But I believe I’m faithfully stating what has been observed in the last eighty-plus years of particle experiments. And we’re seeing that on the subatomic scale, matter does not behave as we understand it to.
Our understanding of matter in our macro world generally comes from measuring things through our five senses, and experiencing them as singularities. There is one table. It is solid and definable. It’s not occupying an infinite number of spaces. But contemporary quantum physicists have theorized that we may not normally see or experience superposition phenomena because of information leakage. This means that we gain or lose data based on the fineness of our measurement. When you’re measuring things with exquisitely well-tuned instruments, like a microscope, you’re seeing more and more of what’s going on — and that’s actual reality. But when you pan the camera back, so to speak, your measurements coarsen and you’re seeing less and less of what’s actually happening.
To all ordinary appearances, a table is solid. The floor beneath your feet is solid. Where you’re sitting is solid. But measuring through atomic-scale microscopes, we realize that if you go deeper and deeper, you have space within these objects. Particles make up the atom, and still greater space appears. We don’t experience that; we experience solidity. But no one questions that there’s space between the particles that compose an atom. Furthermore, we possess decades of data demonstrating that when subatomic particles are directed at a target system, such as a double slit, they appear in infinite places at once until a measurement is made; only then does locality appear. But we fail to see this fact unless we’re measuring things with comparative exactitude. Hence what I’m describing seems unreal based on lived experience — but it’s actual.
In any event, my supposition is this: if particles appear in an infinite number of places at once until a measurement is taken; and if, as we know from studying the behavior and mechanics of subatomic particles, there’s an infinitude of possibilities; and if we know, as we have for many years, that time is relative, then it is possible to reason — and it’s almost necessary to reason — that linearity itself, by which we organize our lives, is an illusion. Linearity is a useful and necessary device for five-sensory beings to get through life, but it doesn’t stand up objectively. Linearity is a device, a subjective interpretation of what’s really going on. It’s not reflected in Einstein’s theory of relativity, which posits that time slows down when it begins to approach the speed of light. Nor is it reflected in quantum mechanics, where particles appear in an infinitude of places and do not obey any orderly modality. Linearity is not replicating itself when a measurement taken of a particle serves to localize the appearance or existence of the object.
If we pursue this line of thought further — and this is where the many-worlds theory comes into play — the very decision to take a measurement (or not to take a measurement) not only localizes a particle but creates a past, present, and future for that particle. The decision of an observer to take a measurement creates a multidimensional reality for the particle. This is implied in the famous thought-experiment called Schrodinger’s Cat, which I describe here (yes, pre-tattoos, but remember there is no time):
So whatever that particle is doing, the very fact that a sentient observer has chosen to take a measurement at that time, place, moment, and juncture creates a whole past, present, future — an entire infinitude of outcomes. A divergent set of outcomes would exist if that measurement were never taken. A divergent set of outcomes would also exist if that measurement were taken one second later, or five minutes later, or tomorrow. And what is tomorrow? When particles exist in superposition until somebody takes a measurement, there is no such thing as tomorrow, other than subjectively.
And what are our five senses but a technology by which we measure things? What are our five senses but a biological technology, not necessarily different in intake from a camera, photometer, digital recorder, or microscope? So it’s possible that within reality — within this extra-linear, super-positioned infinitude of possibilities in which we are taking measurements — we experience things based upon our perspective.
Neville Goddard’s instinct was correct in this sense. He taught that you can take a measurement by employing the visualizing forces of your own imagination. You’re taking a measurement within the infinitude of possible outcomes. The measurement localizes or actualizes the thing itself. Hence his formula: an assumption, if persisted in, hardens into fact. But the assumption must be persuasive; it must be convincing. That’s why the emotions and feeling states must come into play. And Neville observed that the hypnagogic state — a state of drowsy relaxation — helps facilitate that process.
You can use several different techniques in connection with Neville’s ideas, and, as he did, I challenge you to try them and see what happens. You’re entitled to results. I believe strongly in results. I believe that every therapeutic and ethical and spiritual philosophy should result in some concrete change and improvement in your life or your conduct; if it doesn’t, then such an idea should have no hold on you. I feel similarly strongly that the ability to describe a concrete outcome in your life is vitally important, and that too was always part of Neville’s teaching. Testimony is both an important source of ideas and an invitation to others.
One way of using Neville’s approach to mental creativity is to enter into an inner state of theatrical or childlike make-believe. Not childish but childlike: a state of internal wonder and pretending. Children are so good at this. We get embarrassed about this quality as we age, but Neville talked about walking the streets of Manhattan imagining that he was in the tree-lined lanes of Barbados, boarding a ship to some desired destination, or in a location where he wanted to be.
He would say: “Unfoldment will come. You will see.” He would always say that an assumption, although false, if persisted in, eventually hardens into fact. He would say, “Assume the state of the wish fulfilled. Live from the end. Live from the state of your wish fulfilled.” Remember, Neville would remind listeners, you’re not in a state of wanting; you’re in a state of having received. Your aim is simply to occupy the emotional and mental state that you would experience after having received.
One simple way to use Neville’s method is to freely enter this state of make-believe, as you used to when you were a child. Of course, you must also continue to go about your adult life in this world of Caesar and currency and commerce, and fulfill your obligations and do the things you need to do. You cooperate with the world. You must abide by the world. You must do the things that the world needs you to do. But the secret engine behind what’s really going on is what you’re imagining. Within are the hidden currents of emotionalized thought, which are the actual engine of what’s occurring.
How long will it take you to see your desired changes in outer life? How long will it take for outer life to conform to your internal focus, your living from the end of your ideal? This question of time intervals has recently become very hot for me personally, because with all the stresses that life throws at us, it is not easy to adopt a feeling state and stick with it for weeks. It’s very difficult, in part because the world we live in does everything possible to disrupt our inner quietude.
Neville noted later in his life that there could be a substantial time interval between your visioning, your mental imaging, and the appearance of the wished-for thing. He would point out that the gestation period of a human life is nine months. The gestation period of a horse is eleven months. The gestation period of a lamb is five months. The gestation period of a chick is twenty-one days. There is almost always going to be some time interval. You must persist. If you want to find yourself in Paris, and you wake up every day and you’re still far away from Paris, you’re naturally going to feel disappointed or dejected. But if you really stick with it, I venture that you will see that your assumptions eventually concretize into reality, and the correspondences will be uncanny.
I’ve had such experiences in my own life; but I’ve personally observed that in some cases, there have been extended time intervals. This has been true regarding my career as a writer, speaker, and narrator. The philosopher Goethe made an interesting observation. We’ve all heard the expression “Be careful what you wish for; you just might get it.” It actually has its roots in Goethe. Taking a leaf from Goethe’s play Faust, Ralph Waldo Emerson noted this dynamic in his 1860 essay “Fate,” which led to the popular adage. Emerson wrote:
And the moral is that what we seek we shall find; what we flee from flees from us; as Goethe said, “what we wish for in youth, comes in heaps on us in old age,” too often cursed with the granting of our prayer: and hence the high caution, that, since we are sure of having what we wish, we must beware to ask only for high things.
We are being warned to act with perspective: what we wish for when we are young will come upon us in waves when we are old. Many people would object to that claim, saying that they have all kinds of unfulfilled wishes. But unlocking the truth of this observation requires peeling back the layers of your mind and probing formative images and fantasies from when you were very young. What was the earliest dream you can remember when you first came into conscious memory, maybe at age three or four? I mean a literal nighttime dream. What were your fantasies when you were very young? I do believe that children — certainly this was true of me — have very intense fantasy lives even at age four or five. What were your earliest fantasies?
I believe that Goethe’s observation relates to Neville’s remarks about the perceived passage of time and the gestation period between the thought and the actualization. If you take Goethe’s counsel, you might be surprised to discover an extraordinary symmetry between the things that you’re living out in your life today and things that you harbored and thought about when you were very young. These can be positive, negative, or anywhere in between.
Neville recommends that you avoid thinking in terms of, “It will happen this way or that way” or “I’ll do something to make it happen.” His attitude was that the event will unfold in its own lovely, harmonious, perfect way. Your job is not to draw the map. Your job is to live from the destination.
I believe that Neville is going to be remembered, and is being looked upon today, as having created the most elegant mystical analog to quantum physics. He was thinking and talking about these ideas long before the popularization of quantum physics. He had a remarkable instinct in the 1940s, which has been tantalizingly, if indirectly, reiterated by people studying quantum theory — people who have never heard the name of Neville. Yet it wouldn’t surprise me if, within a generation or so, some physics students begin to read him as a philosophical adjunct to their work. That may sound unlikely, but remember that many of the current generation of physicists were inspired by Star Trek and Zen and the Art of Motorcycle Maintenance, and I believe there is greater openness today to questions of awareness and mind causation.
***
We all live by philosophies, unspoken or not. Even if we say we don’t have an ideology, we obviously have assumptions by which we navigate life. When I look back upon people like Neville and Zen teacher Alan Watts (1915–1973), I realize that their greatness is that they lived by the inner light of their ideas. That is a rare trait in our world today. We are a world of talkers. People are sarcastic or cruel over Twitter, and they think they’re taking some great moral stand. Is it brave for someone who lives miles away and doesn’t even use his real name to call people out online? That’s no victory. It’s make-believe morality.
When we look back on certain figures in the political, cultural, artistic, and spiritual spheres, those we remember are the ones who lived by the inner light of their ideas, who put themselves on the line, for success or failure, based upon an idea.
My wish for every one of you reading these words is that you provide that same example. And I really must say the following, and I mean this in my heart: if you sincerely attempt what I am describing, I believe that you will find greatness, because, if nothing else, you will be making the effort to live by the inner light of an idea.
(This article is adapted from Magician of the Beautiful: An Introduction to Neville Goddard.)
|
https://mitch-horowitz-nyc.medium.com/why-positive-thinking-works-fa1ca49e3d61
|
['Mitch Horowitz']
|
2020-08-02 22:39:39.342000+00:00
|
['Paranormal', 'Self Improvement', 'Mysticism', 'Science', 'Occult']
|
An Open Letter
|
Hey Guys
I hope you're all doing well. I just wanted to write to you all and let you know about a few things. How weird it is: just a few weeks back we were in our perfect little world, comfortable and safe. Even Trijit and I were worried about which movie to watch on the weekend and where to eat out. And how suddenly our whole world changed.
In the coming few months we should start to see a lot of changes. As far as I understand, this virus is not going away completely anytime soon. Humanity will have to evolve into a new way of living. Things like social distancing and practicing extreme hygiene, which were previously considered OCD, are going to be the new normal. How people communicate. How sales are done. How films are made. How healthcare is given. How work is done. What the work is. A lot of questions arise.
We might see more outbreaks, wars, poverty, death or what not in the coming days. When economies come crumbling down, the first thing that goes are jobs.
One small example: Boeing cancelled an order for software worth 160 Cr from a company in France. This company had partnered with another French company to test the software; they in turn had partnered with an Indian company to outsource the testing, and that company had hired people at good salaries for the effort. The effect eventually trickles down to the employees. When consumption goes down, every industry is eventually affected.
Who are most at risk?
Anybody who does what they do because they are required to, and not because they love it, is at risk.
For example, in IT, a major chunk of the software industry is filled with people who are not interested in software. To them it is a job like any other; they would do equally well at a CCD or anywhere else.
First to go will be these people.
What we do?
Make ourselves relevant and indispensable. Create value.
Focus on our passion and make value out of it. Sell our passion.
The virus brings us unseen benefits. We now have the time to self-evolve. We do not have the daily distractions. We may have some distractions being in the same house with our family 24x7, but relatively, we have more time.
Keep 1 hour a day dedicated to study.
Let us use this time to self-improve. This will make us all bulletproof for the coming recession. The knowledge we gain will translate into building better products, faster.
How do we survive?
As a tech product company our survival is completely dependent on how we adapt to technology. How fast we find opportunities and score goals.
The pattern to grow and succeed is simple.
1. Find the problem 2. Quickly build the solution 3. Market the solution 4. Keep updating the solution incrementally
The survival of each one of us is dependent on the other. Our team comprises each key element required to finish the above cycle. The faster we keep completing the cycle. The better our chances of survival.
We should start looking at how we, as technocrats, can fit into this economic shift. What can be built to ease the upcoming ways of life? These should be our questions.
Being a small company, we have to find solutions that are simple, can be built in days, and may have some commercial potential.
Our Team at Work
On a technical side note
The ever-changing landscape of technology has evolved from simple JS apps to frameworks like Ionic and Framework7. Furthermore, the world now demands stunning, highly responsive, native-like apps built in record time. This is where Flutter comes in.
On the UI end, Adobe XD has emerged as a key player, bridging the gap between designers and coders. UI can be drawn and exported as Flutter components.
Thus, we now need developers who understand graphics and UI design, as well as designers who understand code. To be unbeatable, we have to master Flutter + Adobe XD.
I will not point out to tutorials, there are just too many, but this is the direction I am pointing to technology wise.
Be ready for changes. Be ready to change. Changes are often good eventually.
Remember Together with passion we shall overcome this as well.
This shall pass too.
Stay Safe Stay In
Avijit
Team Regular.li and Avifa
Increase Sales with this Mobile CRM and AutoDialer GoDial
|
https://medium.com/godial/an-open-letter-95a36e81e5e6
|
['Avijit Sarkar']
|
2020-04-23 10:19:07.651000+00:00
|
['Letter', 'Survival', 'Technology']
|
Economics of AI: Agriculture
|
The big picture
Agriculture worldwide is a US $5 trillion industry. And artificial intelligence (AI) is revolutionizing this industry every step of the way — from preparing soils and sowing seeds to getting products to the kitchen table. AI-powered technologies are increasing productivity and reducing costs significantly throughout the production and supply chain.
The market value of global AI in the agricultural sector is currently estimated at $852.2 million. In the next decade alone this value is expected to grow more than 10 times, exceeding $8 billion annually. As of 2020, artificial intelligence is impacting about 70 million farmers globally. North America is a clear front runner in this technology race.
Increasing productivity and reducing costs
AI-powered information technology is boosting productivity from the industrial scale down to the individual farm level. Even small farmers are benefiting from advances in AI. As reported by a farmer in Japan, the productivity of his tomato farm has increased by as much as 15% due to the adoption of information technology that he uses to monitor all aspects of farm production.
Data analytics and the use of information technology is having a big impact in Africa as well. Maize production in Western Kenya is reported to have increased from an average of 6 to 9 bags (90 kg per bag) per farmer in just a single year.
Automation is increasing productivity and reducing the cost of production dramatically. Strawberry harvest is a classic example. A robot can pick strawberries as fast as 8 acres/day. The same amount of harvest takes 30 humans per day. This means a big saving in time and a significant reduction in labor costs.
Weeds are key enemies of agricultural commodities. On a global scale, the value of weed damage is estimated at $43 billion. In India alone, $11 billion worth of agricultural products is damaged by weeds every year. AI technologies such as robots are powered by computer vision algorithms and are trained to identify weeds and destroy them on the field. These robots use 90% less herbicide and are 30% cheaper compared to traditional weed treatments. So there is big financial savings there as well.
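The productivity and cost figures above can be sketched as a back-of-envelope calculation. The percentages come from the text; the wage and per-field cost used below are hypothetical placeholders, not figures from the article:

```python
# Back-of-envelope check of the article's figures.
# Only the ratios (6 -> 9 bags, 8 acres/day vs. 30 worker-days,
# 90% less herbicide, 30% cheaper) come from the text; the dollar
# amounts are illustrative assumptions.

def pct_change(before: float, after: float) -> float:
    """Percent change from `before` to `after`."""
    return (after - before) / before * 100

# Kenya maize: 6 -> 9 bags (90 kg each) per farmer in one year.
maize_gain = pct_change(6 * 90, 9 * 90)  # 50.0% increase

# Strawberry harvest: one robot covers 8 acres/day; the same area
# takes 30 human worker-days.
human_days_for_same_area = 30
assumed_daily_wage = 100  # hypothetical $/worker-day
labor_cost_displaced_per_day = human_days_for_same_area * assumed_daily_wage

# Weeding robots: 30% cheaper than traditional weed treatment.
traditional_cost = 1000.0                    # hypothetical $ per field
robot_cost = traditional_cost * (1 - 0.30)   # 700.0
herbicide_fraction_used = 1 - 0.90           # 10% of traditional herbicide

print(maize_gain, labor_cost_displaced_per_day, robot_cost)
```

Even with placeholder wages and field costs, the direction of the result holds: the percentage gains reported translate into large absolute savings once scaled to real farm budgets.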
I gave just a few examples of how AI is impacting the agricultural industry by increasing productivity and reducing production costs. But that's only the tip of the iceberg. Research on the economics of AI in agriculture (or AI generally) is still in its infancy; more research is needed to catch up with the accelerating pace of AI adoption in the agricultural sector.
Stay in touch for more articles like this via Medium, or you can follow me on Twitter and LinkedIn.
|
https://medium.com/datadriveninvestor/economics-of-ai-agriculture-7c363b3ae3eb
|
['Mahbubul Alam']
|
2020-11-05 05:33:44.724000+00:00
|
['Artificial Intelligence', 'Technology', 'Economics', 'Machine Learning', 'Agriculture']
|
What’s the Origin Behind These Car Names? Part 1
|
Here’s a look at how some brand of cars got their names and the history that made it so. This first part focuses on American car companies, and how many of them are interwoven with each other. Check out Part 2 and Part 3 for even more.
General Motors and Chevrolet
William Durant, the founder of General Motors (GM), initially made his fortune selling horse-drawn carriages in the late 1800s through a company he started with a $2,000 loan. Durant didn’t like cars and thought they were stinky, noisy, and dangerous, but he did see that the automobile industry could be consolidated. Durant envisioned a holding company that would be over other automobile manufacturers and their line of cars.
He started by acquiring the troubled Buick Motor Company in 1904. As will be mentioned later, he made Buick into the largest selling brand in the U.S. by 1908. He established General Motors in 1908, and in 1909, he bought Oldsmobile, Cadillac, and Oakland Motor Car, which would become Pontiac. Durant also bought a number of other companies that were put under GM, which consisted of paint companies, parts manufacturers, and wheel companies. But Durant had overextended the new company through the purchase of so many other companies. Durant was ousted by investors in 1911 when they didn’t like the direction the company was going.
Undeterred by his removal, Durant then co-founded another company with a Swiss-American race car driver named Louis Chevrolet. They started the Chevrolet Motor Car Company in 1911.
Incidentally, the iconic bowtie logo of Chevrolet, according to Ken Kaufmann, a Chevrolet historian, was believed to have come about when Durant saw a logo for a product in a newspaper. The product was made by the Southern Compressed Coal Company and called Coalettes. The logo for the product was found in a newspaper from 1911 and was the same logo as the Chevrolet company. Durant’s wife had even recounted that while he was reading a newspaper in 1912, he had told her that a particular logo in the paper would make a good logo for Chevrolet. The logo was first used on a Chevrolet in 1914.
Louis Chevrolet sold his shares in Chevrolet to Durant in 1914. Chevrolet was selling well enough that Durant was able to buy a controlling stake in the company he had originally founded, General Motors. He once again became president of GM in 1916.
He brought Chevrolet under GM in 1919 and also bought Frigidaire. But once again, his buying habits got the best of him, and he was again forced out of GM in 1920. Durant had nearly $1 billion in stocks by 1928, but then the Great Depression hit, and he was bankrupt by 1936. In the early 1940s, Durant began running a bowling alley in Flint, Michigan, and lived off a small pension with his wife until his death in 1947. Sources: (1)(2)(3)(4)(5)(6)
Buick
David Dunbar Buick built his first automobiles in 1899 and started Buick Motor Company in Detroit, Michigan, in 1903. He sold his newly incorporated company in 1904 to Benjamin Briscoe, who turned around and sold it to James Whiting in the same year. Whiting didn’t last long and ran out of money before 1904 was over and had to bring in William Durant as an investor.
Durant, at the time, owned his own carriage company. Durant made Buick into the largest selling brand in the United States until he started General Motors in 1908. David Buick remained a manager at the company through all these changes until 1906, when he sold all his stock in the company. Buick is currently the oldest active car maker in the country. Source: (7)
Ford, Cadillac, Lincoln, and Oldsmobile
Henry Ford built his first automobile, called a Quadricycle, in 1896. He started his first company, the Detroit Automobile Company, in 1899, but it was unsuccessful and dissolved in 1901. Ford, along with investors from the Detroit Automobile Company, started the Henry Ford Company, also in 1901. Ford quickly had a dispute with his backers and left the company and took his name with it in early 1902.
Henry Leland was brought into the company as a consultant and to conduct a liquidation audit later that year and recommended that instead of liquidating the company, it should be continued as a car company. The company was reorganized and renamed Cadillac after the founder of Detroit, Antoine de la Mothe Cadillac.
Leland and the other partners sold Cadillac in 1909 to General Motors. Leland stayed with GM until 1917 but left after a dispute with GM’s founder, William Durant. Leland then went on to start Lincoln the same year with his son and named the company after Abraham Lincoln.
In the meantime, Henry Ford had started the Ford Motor Company in 1903 with a portion belonging to the Dodge brothers. Ford then changed everything in 1908 with the introduction of his Model T.
Ford made the car affordable for many Americans and was credited with inventing the assembly line, but the assembly line had been patented and used earlier by Ransom E. Olds, founder of the Olds Motor Vehicle Company in 1897. His cars became known as Oldsmobiles.
Olds patented the assembly line in 1901 and used it for the production of his Oldsmobile Curved Dash car, which was the first mass-produced automobile. Henry Ford took the Olds assembly line and improved it by adding a moving conveyor. Ford then became the first to make cars on a moving assembly line.
Ford then crossed paths with someone from his past. In the early 1920s, Lincoln was having a difficult time and had to declare bankruptcy. Ford bought the company in 1922 for $8 million. Ford took over the company from Henry Leland, the same man that had turned one of his previous companies into Cadillac. Sources: (8)(9)(10)(11)(12)(13)(14)
Dodge
Before Horace and John Dodge founded the Dodge Brothers Company in 1900, the company supplied parts for the automobile industry in Detroit, Michigan. The brothers had previously run a successful bicycle business before entering the automobile industry.
They quickly became the largest supplier of parts in Detroit and made engines for the Olds Motor Vehicle Company. They also produced car parts that were assembled by Ford. The brothers held a 10% stake in Ford at the time and became the company's single supplier. It was an uncomfortable arrangement for both companies, and the Dodge brothers wanted to break out from making cars only for Ford.
In 1915, Henry Ford stopped paying the stock dividend. The Dodges filed suit because of the decision, which resulted in a buyout of the brothers’ stake in Ford for $25 million. This freed them up to start producing their own line of cars. The Dodge brothers had already produced a well-regarded car in 1914, and they soon had a full production line. The brothers weren’t generally accepted in Detroit society and were known for drinking hard, racing boats, and their tempers.
The brothers both died in the same year. John Dodge died in January 1920 of influenza and pneumonia, and his brother Horace died that December of pneumonia and cirrhosis of the liver, reportedly aggravated by grief over the loss of his brother. The company then passed to their widows and was run by a longtime employee named Frederick J. Haynes. In 1925, the company was sold for $146 million, and it was then purchased by Walter Chrysler's new Chrysler Corporation in 1928. Sources: (15)(16)(17)
Chrysler
Walter Chrysler didn’t start his career making cars. He wanted to follow in his older brother’s and father’s footsteps in the railroad business. His brother was already an apprentice in the machinist program for Union Pacific railroad in the late 1800s, and Walter wanted to do the same. But his father, Hank, who was a railroad engineer, wanted him to go to college.
Hank refused to sign off to let his 17-year-old son go into the program, so Walter got a job cleaning floors in the railroad machine shop instead. He soon got noticed for his work habits and was offered an apprenticeship by the shop’s master mechanic. This began Walter Chrysler’s 20-year career in the railroad business.
In 1908, Chrysler purchased his first car, the Locomobile. But instead of driving it, he studied it for three months by taking it apart and putting it back together. In that time, he learned everything he could about the car and how to drive it.
In 1911, Chrysler ended his railroad career. He was the works manager at the American Locomotive Company making $12,000 per year when he was offered a job at General Motors for half of his previous salary. He took the job at the age of 36 and entered the automobile business. He was tasked with running GM’s Buick plant in Flint, Michigan, and revolutionized its operation.
By 1916, Chrysler was made the president of Buick. He left Buick in 1919, and in 1920, he went on to save and reorganize the Willys-Overland Company, the automaker who later made the Jeep in the 1940s. Chrysler then took over another company in distress called the Maxwell Motor Company and acquired a controlling interest in it. In 1924, the first Chrysler car was built. It was a huge success, and the company was then renamed the Chrysler Corporation in 1925. Sources: (18)(19)
Pontiac
Pontiac was originally started in 1893 by Edward Murphy as the Pontiac Buggy Company. Murphy went into the automobile business in 1907 and formed the Oakland Motor Car Company, which was purchased by General Motors in 1909. General Motors next introduced Pontiac as one of its divisions in 1926 as a companion to its Oakland line of cars, and in 1931, Oakland was renamed to Pontiac.
The Pontiac name came from the Ottawa Indian Chief Pontiac, who led American Indians against British occupancy in the Great Lakes region in the mid-1700s. The city of Pontiac, Michigan, where Pontiac automobiles were produced, was also named for the chief. GM discontinued Pontiac in 2010. Source: (40)
Hummer
In 1983, the AM General Corporation was awarded a $1 billion contract from the U.S. military for 55,000 High Mobility Multipurpose Wheeled Vehicles to be delivered over five years. The new vehicle was known by the acronym “HMMWV” and later nicknamed the Humvee. The Humvee was later made into a civilian version in 1992 that was dubbed the Hummer. Source: (30)
Check out Part 2 and Part 3 in the “What’s the Origin Behind These Car Names.”
Want to delve into more facts? Try The Wonderful World of Completely Random Facts series, here on Medium.
Find even more interesting facts in the four volumes of Knowledge Stew: The Guide to the Most Interesting Facts in the World.
More great stories are waiting for you at Knowledge Stew.
|
https://medium.com/knowledge-stew/whats-the-origin-behind-these-car-names-part-1-115dddff0db8
|
['Daniel Ganninger']
|
2020-09-26 16:43:11.838000+00:00
|
['Design', 'Business', 'Automotive', 'Marketing', 'History']
|
Helm Audio DB12 AAAmp review: This mighty mite of a mobile headphone amplifier sounds mighty fine
|
Helm Audio DB12 AAAmp review: This mighty mite of a mobile headphone amplifier sounds mighty fine Jack Sep 8, 2020·8 min read
In addition to certifying the performance of commercial cinemas and consumer audio and video products, THX also develops new technologies and licenses them to various manufacturers. Among these new technologies is the company’s ultra-quiet AAA (Achromatic Audio Amplifier) power-amplifier design, which first appeared commercially in the Benchmark AHB2 in 2015.
Two years later, THX introduced the second generation of AAA technology, this time intended for headphone amps. I first heard some prototypes at CanJam SoCal 2017, and I was quite impressed. So, when Helm Audio announced at CES 2020 that it was implementing AAA technology in a mobile headphone amp, I immediately requested a review sample. Once the DB12 AAAmp finally arrived, it was well worth the wait.
This review is part of TechHive’s coverage of the best headphones, where you’ll find reviews of competing products, plus a buyer’s guide to the features you should consider when shopping.

Helm DB12 AAAmp feature set

The DB12 AAAmp is a small unit measuring 2.8 x 0.9 x 0.5 inches (LxWxH) and weighing only 1.08 ounces. A 12-inch cable emerging from one end terminates in a 3.5mm TRRS (tip-ring-ring-sleeve) male plug, while a 2-inch cable at the other end terminates in a 3.5mm TRRS female jack. Both cables are custom-shielded silver with molded strain relief.
Helm Audio If your smartphone has a headphone output, just connect it to the DB12 for a distinct improvement in sound quality on your headphones. If the device has no headphone output, you’ll need an adapter.
Essentially, the DB12 is a powered headphone cable. As such, it relies on a physical connection at both ends. That means the source device must have a headphone output, which many smartphones and other mobile devices no longer have. If a device doesn’t have such an output, you will need an adaptor for it.
The frequency response is specified from 20Hz to 20kHz (+0.01/-0.2 dB) with a 32-ohm load. The full-range gain is said to be +12 dB with an independent bass boost of an additional +6 dB for frequencies between 60 and 100Hz. When I asked why the boost was only down to 60Hz, I was informed, “The curve isn’t a bell and it’s not super sharp. It’s a shelf, and it does extend down to 20Hz.” I would recommend saying that in the product info. Output power is rated at 109 mW/channel into 16 ohms and 111 mW/channel into 32 ohms with <0.1% THD (total harmonic distortion).
Speaking of THD, it gets much better at lower output levels: 0.0008% (-102 dB) at 10 mW/16 ohms and 5 mW/32 ohms and 0.00035% (-109 dB) at 0.049 mW/10 kohms. Likewise, IMD (intermodulation distortion) is quite low: 0.03% (-70 dB) at 16 ohms and 0.01% (-80 dB) at 32 ohms and 10 kohms, all measured using the SMPTE standard 70Hz + 70kHz.
To achieve such low distortion levels, the THX AAA amplifier design uses a bipolar class-AB output stage with feed-forward error correction to cancel zero-crossing errors. According to THX, this allows the amp to exceed the performance of class-A designs without their low efficiency, poor damping, and high power consumption. It also allows long battery life by reducing bias currents by a factor of 10 to 100 without increasing distortion.
THX As this graph indicates, THX AAA amplifier modules exhibit much lower THD than competing amp modules at different amounts of quiescent power consumption.
The lithium-ion battery provides six to eight hours of play time depending on the volume setting and whether or not bass boost is active. A USB-C port on one side of the unit lets you connect it to a USB power source to charge the battery. The specs say “2.5 hours fast charge, traditional charging time longer,” which means that a USB power adapter with an output of 9V/1.67A or 5V/2A can charge the device from empty to full in 2.5 hours, while an ordinary charger with 5V/1A output takes about four hours.
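The relationship between charger rating and charge time quoted above can be sanity-checked with a back-of-envelope calculation. The sketch below is illustrative only: the function name and the 10 W fast-charge reference figure are my own assumptions, since the article gives only charger ratings and approximate times, and real lithium-ion charging tapers near full rather than scaling linearly with input power.

```python
# Rough charge-time estimate from charger wattage.
# Assumption: the quoted 2.5 h fast-charge time corresponds to roughly
# 10 W of input (9V/1.67A ~= 15 W or 5V/2A = 10 W per the specs above).
def charge_hours(charger_watts, reference_watts=10.0, reference_hours=2.5):
    """Linearly scale the quoted fast-charge time to another charger's power."""
    return reference_hours * reference_watts / charger_watts

# An ordinary 5V/1A charger supplies 5 W:
print(round(charge_hours(5 * 1), 1))  # -> 5.0
```

Naive linear scaling predicts about five hours on a 5 W charger, while the specs say roughly four; the gap is consistent with the battery not being power-limited for the entire charge cycle.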
A sliding power button on the other side also activates the bass-boost mode. The center button on the top lets you play/pause and skip to the next or previous track, while the “+” and “-“ buttons control the volume.
Helm DB12 AAAmp performance

My iPhone XS does not have a headphone output, so I used a Lightning-to-headphone adapter with the DB12. As usual, I played tracks from Tidal’s Master library of lossless high-res audio as my test material. And of course, I used the stunning Focal Stellia headphones, reviewed here, to present the best possible sound quality to my ears.
I’m always eager to listen to Jacob Collier, so I started with his version of “Here Comes the Sun” from Djesse Vol. 2. His incredible arrangement includes massive stacked vocals; a guest vocal by Dodie; guitars, deep bass, and percussion. The DB12 has a beautiful, clean, clear sound with excellent balance and delineation. The bass-boost function works extremely well, with no congestion or change in tonality, while remaining super clean. Bassheads will love it, though I did most of my listening without it; I heard plenty of bass with the more neutral balance.
Next up was “Noble Nobles” by bassist and vocalist Esperanza Spaulding, from her album Emily’s D+Evolution, one of my all-time favorites. The DB12 sounded clean and clear, and I could hear deep into the mix. The bass was perfectly balanced without bass boost, and her vocals were sublime.
For a bit of country, I listened to “Sheryl Crow” from Tim McGraw’s album Here on Earth. Actually, this track has a strong rock component with a dense mix, which sounded wonderful on the DB12. Despite the density, I could hear deep into the mix with super-clean vocals and guitars.
Being a jazz-rock trombone player from way back, I always appreciate horn-based funk, which I got in spades from “Basket Case” on Brasstracks’ debut album Golden Ticket. The brass licks, drums, guitar, and organ sounded punchy and clean on the DB12. The bass was a bit diffuse, which is probably in the mix, and the DB12’s bass boost made it worse.
Steely Dan is another of my all-time favorite groups, so I listened to “West of Hollywood” from Two Against Nature. Donald Fagen’s meticulous attention to every detail is plainly evident in this track, which sounded fantastic on the DB12. As before, I could hear deep into the mix, which nevertheless remained completely coherent and cohesive. In fact, I could make out the lyrics better than I can on many playback systems.
Turning to classical, I cued up Schoenberg’s Notturno for Strings and Harp as performed by Zürcher Kammerorchester, with violin soloist Daniel Hope on the album Belle Époch. The DB12 rendered the delicate strings, harp, and solo violin beautifully with excellent balance between the sections.
For a more forceful selection, I listened to the intro to Mahler’s Symphony No. 8, “Symphony of a Thousand,” as recorded live by Münchner Philharmoniker under the direction of Valery Gergiev. The DB12 sounded clean and clear with a natural rendering of the hall, and I could easily distinguish each section, the choir, and soloists.
Helm Audio The DB12 can be used with a computer as well as a mobile device.
Comparison to the iFi hip-dac and iPhone

In addition to the DB12, I listened to each track on the iFi hip-dac, reviewed here, and the direct output from the iPhone for comparison. Of course, comparing the DB12 with the hip-dac is not entirely apples-to-apples—both units replace the phone’s amplifier, but the DB12 uses the phone’s internal DAC (digital-to-analog converter), while the hip-dac replaces the phone’s DAC with its own. Still, these two units are very close in price, and they serve much the same purpose—improving the wired sound of a source such as the iPhone—so I forged ahead.
Overall, I heard no appreciable difference in sound quality between the DB12 and the hip-dac; both sounded exceptionally clean. The only difference—and it’s extremely minor—was that the hip-dac sounded a hair more neutral and laid back, while the DB12 was just a tad more punchy. But again, this difference was minuscule and easily missed if I had not been paying close attention.
There was a slightly bigger difference in the bass-boost function of the two units. The hip-dac’s bass boost sounded just a tiny bit congested and less well defined, but it was apparent only in direct comparison and nothing egregious at all.
Helm Audio The power switch on one side also activates the bass-boost function, with LEDs that indicate the unit’s status. Three buttons on the top control volume (“+” and “-“) and play/pause/skip track. A USB-C port on the other side allows charging.
As I was swapping connections between the DB12 and hip-dac, I noticed that the DB12’s volume controls didn’t work reliably; sometimes, I had to unplug and replug the unit into the iPhone to get them to work at all. Interestingly, the play/pause button seemed to work all the time. On a related note, the “+” button had a tendency to stick if I happened to push the outer part of the button, and it did not change the volume on the phone. This felt like an issue of poor alignment with the hole in the case, which could be a quality-control problem.
I also compared the sound of the DB12 with a direct connection to the iPhone using only the Lightning-to-headphone adaptor. As I had heard during my review of the hip-dac, the direct connection sounded a bit sharper and more brittle—not tremendously, but slightly. And on some tracks, the bass was a bit less distinct, and the sound was ever-so-slightly congested and veiled, especially in the low frequencies. This is probably due mostly to the iPhone’s headphone amp, since the DB12 (using the iPhone DAC) and hip-dac (using its own DAC) sounded nearly identical.
Bottom line on the Helm DB12 AAAmp

The Helm DB12 AAAmp claims to be the “world’s smallest portable hi-fi headphone amplifier.” I don’t know if it truly is the world’s smallest, but it is remarkably tiny, and it does provide exceptional sound quality. And of course, it’s THX certified.
My only real gripe is that the “+” volume button sometimes sticks if pressed in the wrong place. Also, the volume controls did not work reliably, though that might well have been because I was plugging and unplugging the unit frequently, which the average user wouldn’t do. And like the iFi hip-dac, the DB12 does not automatically power down after some period of inactivity, so it’s easy to run down the battery if you forget to turn it off.
Speaking of the hip-dac, its sound is virtually identical to the DB12’s, and it’s $50 less. It also has more robust build quality with a super-sturdy power/volume knob, and it doesn’t rely on the DAC in the device it’s used with. On the other hand, it’s larger and more awkward to manage with a smartphone.
At just under $200, the DB12 AAAmp is a bit on the pricey side, but it does offer a distinct improvement in the sound quality of mobile devices—depending on the quality of their DAC—in a super-convenient package.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details.
|
https://medium.com/@Jack99753526/helm-audio-db12-aaamp-review-this-mighty-mite-of-a-mobile-headphone-amplifier-sounds-mighty-fine-619bbf0446fa
|
[]
|
2020-09-08 15:08:08.861000+00:00
|
['Chromecast', 'Entertainment', 'Gear', 'Cutting']
|
Fake Data: A never ending nightmare of data research and market analysis
|
Fake Data: A never ending nightmare of data research and market analysis
More companies than ever rely on business and market predictions made by researchers and consultants. Although much data is received and shared through personal connections and business partnerships, digital domains in particular are increasing in relevance and attractiveness for analysts. Within the hype revolving around digital databases and automated market research, we must stay aware of fake data and bad actors attempting to manipulate your decisions and business actions. Specialists might even start to wonder about the need for a new role, solely dedicated to verifying available data.
To provide more details on malicious actors and sources, Schema 1 indicates a strategic model to categorize data threats and actors into one of four groups. This differentiation is intended to speed up first-response actions and to help researchers with methodological verification of data sources. The order of the schema suggests the expected cost of verification or clearance, where layer 1 is easier and cheaper to rule out than layer 4, without attempting to quantify relative cost or effort. Although critics might highlight the apparent lack of self-reports or damage reports by industry, a targeted attack against your business will ruin your forecast.
The following list contains past breaches; the names of some organizations are purposefully removed to avoid supporting bad actors in marketing activities. According to the BBC, many Fortune 500 companies were hit by the breach of SolarWinds, which is currently traced back to a militant group (Tidy, 2020). Furthermore, a lone hacker in the UK is charged with £1.1m for breaching hospital and medical care data, which included ransomware and modifications (BBC, 2020). Another example is the hack of Marriott and Starwood, which is currently being traced back to a dedicated industrial team in China (Fruhlinger, 2020). Finally, autonomous groups have made an impact by hacking companies like Sony (BBC, 2014).
To conclude, data researchers and market analysts must be more careful than ever before trusting and working with new data sources. Especially first suspicious evidence should lead analysts to look for applicable verification measures built on top of schema 1. You might even go as far as to suggest new roles or processes solely dedicated to ensuring a root of trust for your data!
Struggling with deployment or operationalization of verification processes in your business? Reach out to me through mail or my booking site at https://cobblevision.company.site. I’ll help you to embed necessary operative activities in your organization based on a methodological strategic framework sourced from scientific research and case studies.
References:
Tidy, J. (2020). SolarWinds: Why the Sunburst hack is so serious [online]. BBC. [Last accessed on 16.12.2020] Available at: https://www.bbc.co.uk/news/technology-55321643
BBC. (2020). Dark Overlord hacker pleads guilty [online]. BBC. [Last accessed on 16.12.2020] Available at: https://www.bbc.co.uk/news/technology-54247527
Fruhlinger, J. (2020). Marriott Data Breach FAQ: How did it happen and what was the impact? [online]. CSO Online. [Last accessed on 16.12.2020] Available at: https://www.csoonline.com/article/3441220/marriott-data-breach-faq-how-did-it-happen-and-what-was-the-impact.amp.html
BBC. (2014). Sony’s PlayStation hit by hack attack [online]. BBC. [Last accessed on 16.12.2020] Available at: https://www.bbc.co.uk/news/technology-30373686
|
https://medium.com/@Mark.Schuette/fake-data-a-never-ending-nightmare-of-data-research-and-market-analysis-61c8ded3964b
|
['Mark Christoph Schütte']
|
2020-12-16 07:25:44.581000+00:00
|
['Strategic Frameworks', 'Data Analysis', 'Research', 'Cyber Security', 'Fake News']
|
Know when to step away…
|
Last night I got involved in a discussion on social media that went off the rails pretty quickly. The problem is…the original post was made by a family member.
Learning when enough is enough…
Social media does funny things to people. Some folks use social media to post the goings-on in their lives while others use it as a platform to perpetuate their cause. Some posts are well thought out and yet others are simply ridiculous. The difficult thing for some of us is learning when to walk away. I often find myself trying to have a civilized discussion with friends online, only to see the whole thing descend into chaos. Often it’s not the original author or poster who causes the problems, but rather the folks who feel they have the right to hijack a discussion thread and turn it into a three-ring circus.
My problem? Sometimes it’s difficult for me to simply “let it go”. In my “real” life, I often find myself trying to explain a particular point of view or showing someone why their view might not be entirely accurate. I’m often successful, but not always. I make it a point to listen to others and consider their side of the story. There have been many times that my mind has been changed, however, it’s usually because the other person has presented something that I hadn’t known or considered. That’s not often the case when I’m online.
Social media allows people of all types to become keyboard warriors. Sadly not everyone has the ability to engage in thoughtful and purposeful discussions while considering all points of view. I try, but I’m not perfect.
Last night was one of those “less than perfect times.” I allowed myself to get sucked in. I responded when I shouldn’t have. Chaos erupted from all sides. What started as a simple response to an inaccurate post turned into an all-out war of craziness. Things went off the rails pretty quickly and any attempt by me to bring things back on the topic ended up being futile.
As men, we need to recognize this. A real man will notice destructive behavior and reel it in before it gets out of hand. In my case, I calmly bowed out of the discussion and let my family member know that I still loved and respected them no matter what. It doesn’t do any good to stay in the fight and continue to be bludgeoned by ridiculousness from all sides. There is a short distance between frustrating chaos devolving into name-calling and personal attacks. The trick is to know where that line is. Ending your part in the destructive behavior is not an admission of defeat, but rather an indication of your strength of character. Choosing to preserve real-world relationships over social media chaos is a far better solution than getting mired in the quagmire that could ultimately lead to “real world” issues later on.
I remember my Dad would often give us a stern look and use the phrase, “Ok, that’s enough.” Strong men have a way of saying things with a “look” that lets everyone know that they mean business. It was not often spoken in anger, but simply a statement to let us know that we had approached the line and were in danger of crossing it. This simple lesson can be applied to so many things in our adult lives. It’s also very important to teach our own children that there is almost always a line, and there is usually a danger in crossing it. Help them to learn the difference and when to step back from the brink.
So how do you know when to step back? Learn to recognize the little voice in your head that’s saying, “This is going nowhere” or “ who will get hurt if I continue down this path?” Reflect back on your life and some of the situations where you might have had a falling out with a friend or family member. Was there a point where you might have been able to walk away or step back to avoid trouble? Can you recognize similar situations that occurred at different points in your life? How are they the same or different? Thoughtful reflection can teach us a lot. There were times earlier in my life and my career where I would charge into an online debate with a fire ‘n brimstone and a “damn the torpedoes” state of mind. I didn’t care who it was, I was going to bludgeon them with my version of the truth. Sometimes this didn’t end well. Feathers were ruffled and I found myself doing damage control more often than not. A couple of nasty encounters made me realize what I was doing and brought me back to those times when my Dad simply said, “Ok, that’s enough.”
I have grown as a man. My father’s voice has become my own voice just as mine is for my son. As he grows and matures as a man, that same voice will carry on. Instead of hearing me say, “Ok, that’s enough”, he’ll hear his own voice saying the same.
We all slip up once in a while or we go just a little further than we should have. In tomorrow’s post, I’ll share what my Dad taught me about making and keeping the peace.
|
https://medium.com/@lifelessonsfromadad/know-when-to-step-away-3fe51343261
|
['David Trask']
|
2020-12-08 20:08:33.688000+00:00
|
['Dads', 'Life Lessons', 'Social Media', 'Manliness', 'Fatherhood']
|
A pragmatic approach to demonstrating impact
|
When it comes to impact measurement, it can be hard to know where to start. Most people now understand the need for measurement and seek to understand the ‘impact’ of impact enterprises. But there is not yet a common understanding of how to do this in ways suitable for smaller organisations or those just starting out.
What should we measure? How can we account for value beyond financial? How can we demonstrate impact generated across multiple dimensions in often-complex entities? How can we show some sort of industry consistency while accounting for our specific context? And how is this all possible in a small, stretched team with limited financial resources or time for the task?
At The Yunus Centre, Griffith University we recently completed an Outcomes Framework and Impact Report for Logan-based social enterprise Substation33.
Through this project we uncovered and tested six learnings we think may be useful to other impact enterprises and their key stakeholders, including funders.
6 steps for new economy organisations to start measuring and sharing their impact right now
1. Start with the Theory of Change
Also called an Impact Map — use this to step-out the logic and establish a framework for your narrative.
2. Work closely with those who will be involved in the collection, storage and reporting of data to design your outcomes framework
The people involved in these steps are best placed to understand what is already available, where and when there might be opportunities to collect something new, and how to collect and store it in the most practical way. They’re often closest to the impact too, so can provide useful insights into appropriateness of indicators and other design matters.
3. Focus on existing data and processes that already occur, first
Make the most of what you’ve got. Do a deep dive of all the data and processes that already link to various stages of your Impact Map. If you only have a few, that’s still a good place to start.
In this example the team built upon an existing sign-in/off process. They added more data points to the daily fingerprint scan process to improve understanding of people’s journeys to track changes over time. Now, volunteers and staff answer one randomly allocated question each time they sign-in for the day.
Planned data collection during daily sign-on process at Substation33
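The rotating-question approach described above can be sketched in a few lines. This is a hypothetical illustration, not Substation33's actual system: the question pool, function names, and record shape are all my own inventions to show the idea of spreading one question per visit across a larger pool.

```python
import random

# Illustrative question pool — not Substation33's real questions.
QUESTIONS = [
    "How confident do you feel about your skills today? (1-5)",
    "Did you learn something new since your last visit? (y/n)",
    "How connected do you feel to the team? (1-5)",
]

def sign_in(person_id: str, rng=random) -> dict:
    """Record a daily sign-in, attaching one randomly allocated question.

    Over many visits, answers accumulate across the whole pool while each
    individual sign-in stays quick (a single question per day).
    """
    question = rng.choice(QUESTIONS)
    return {"person": person_id, "question": question}

record = sign_in("volunteer-42")
print(record["question"] in QUESTIONS)  # -> True
```

The design choice here is the trade-off the article highlights: piggybacking on a process that already happens daily keeps the data-collection burden near zero while still building a longitudinal picture over time.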
4. Be transparent about where you’re at
Your data set may not be perfect, but that’s ok. The important thing is to always be transparent about how robust the data is, and seek to improve it over time.
5. Do a ‘pilot’ for the first reporting cycle
Linked to point four — make your first impact report a pilot. Test and trial it, discover what works and what doesn’t, find out how your stakeholders react to it.
6. Agree a staggered implementation timeline
Don’t overwhelm your team. As you improve your methods, create space for iteration and time for new practices to develop and be integrated into daily routines.
Think about a staggered approach that is fit for purpose and reflects the reality of the organisation
We hope this helps you find a place to start, or to have deeper discussions with key stakeholders about how to better understand, monitor and demonstrate your impact. For further inspiration and detail around some of these points you can delve into the full Substation 33 Outcomes Framework and Impact Report 2020.
|
https://medium.com/y-impact/a-pragmatic-approach-to-demonstrating-impact-c70398b72400
|
['Griffith University Yunus Centre']
|
2021-08-02 08:47:11.913000+00:00
|
['Impact Investing', 'Sdgs', 'Impact Measurement', 'Social Enterprise', 'Philanthropy']
|
A Rebuttal to Pedro Domingos’ Rebuttal to my Remarks Opposed to Signing the Open Letter on Academic “Free Speech”
|
A Rebuttal to Pedro Domingos’ Rebuttal to my Remarks Opposed to Signing the Open Letter on Academic “Free Speech” David Karger Jan 5·12 min read
I appreciate that Pedro took the time to write an extensive response to my initial post taking a position against signing the open letter that is currently circulating. Now it is my turn. Before getting to his specific rebuttal points, I offer some context.
I am assuming good faith among the signers; I do not believe their intent is to harm. I do not say this to patronize, but because I understand that some see simply signing the letter as justification to attack the signers, and I disagree. Indeed, I am not attacking the signers’ views at all because — due to the ambiguity of the letter — I do not know what their views are. My focus is on the text of the letter, not the views of the signers.
I am generally a supporter of free speech, even when it makes people uncomfortable. I often find myself disagreeing with “cancellations”, and defending my position against friends arguing for them. I consider the letter and Pedro’s rebuttal to be reasonable debate; there is nothing in them that I find out-of-bounds for discussion. In fact, I can easily imagine versions of the open letter that I would have signed. However, I oppose signing this particular letter because of its ambiguity and absolutism, as I’ve already explained in my previous post.
I have the easier side of this argument: my goal is to show that the letter can be misused — that there exist plausible interpretations of the letter that people should not support. The writers on the other hand need to demonstrate that all plausible interpretations of the letter are acceptable to a potential signer. And unlike the letter-writers, since I am not asking people to sign anything, but only to refrain from signing, the only harm in my posts being ambiguous is that they may fail to convince. But Pedro’s rebuttal, and other commentary I’ve seen, shows a misunderstanding of my objections, so I’ll try to clarify them here. I aim to point out natural interpretations of the letter that I reject, and which provide reasons not to sign on. As before, I’ll tackle one bullet at a time, but I’ll try to provide more specific examples of the letter’s problems.
The Bullets
Scientific work should be judged on the basis of scientific merit, independent of the researcher’s identity or personal views.
In my first speculative interpretation of bullet 1 as addressing scholarship submitted for publication, I argued that peer review is expressly designed to provide independent judgment of such work. Pedro rejected this argument by discussing “attacks for what they publish after it’s published, people being fired, ostracized, antagonized, and more.” So, it seems that his intended meaning of the ambiguous point here is the broader interpretation I explored immediately after, of judgment of the scholars themselves.
So let’s consider this broader point. Pedro and I may find ourselves in partial agreement here. I will readily agree with Pedro that there are cases where academics have faced excessive retaliation when their scientific work is inconvenient. Perhaps like me Pedro will sign the Standing with Dr. Timnit Gebru petition which argues that denying publication of her research and firing her was not an appropriate response to the disagreement about her scientific findings.
But while we might agree that Timnit’s firing (and some others that have been in the news) was excessive, Pedro seems to argue that every response is excessive. This is a fallacy of overgeneralization: the existence of overreactions does not mean that every reaction is an overreaction. There are cases where even “meritorious science” should have consequences for the scientist.
Consider for example the infamous Tuskegee Syphilis Study. This study asked the scientifically “meritorious” question of how syphilis affects African Americans. The experimenters deceived the subjects and left them untreated or mistreated, and many died of the disease. We now recognize the severe moral violations in this study; they informed the Belmont Report which guides many institutional review boards (IRBs) overseeing the use of humans as experimental subjects. Research that violates the Belmont principles is forbidden by our institutions no matter how meritorious the scientific questions are. And a scientist who violates these principles should be judged for doing so.
So what exactly does the open letter mean when it speaks of “judging on scientific merit”? Do they mean that the Tuskegee experiment was actually justified because it gathered scientifically useful data? Or do they mean that IRBs should be abolished, with the decision about appropriate ethical boundaries left to the scientists? Or do they consider it obvious that ethical review is part of “scientific merit”? And if the last, then what exactly are the bounds of “scientific merit” and what is being excluded? These interpretations are widely varying, but all consistent with bullet 1. Which is a signatory supporting?
The AMA’s Code of Medical Ethics also addresses this issue. Like me, and unlike the letter, it recognizes that there is no black and white answer, but instead that complex considerations must be weighed against each other in deciding how scientific work should be judged.
Pedro goes on to suggest that there is a contradiction between my objections to bullet 1 and my argument that we must avoid racism and sexism in judging scientific work. But this is another fallacious overgeneralization: while I agree that it is not valid to judge scientific work based on the researcher’s race and gender, there is no contradiction in simultaneously arguing that it is appropriate to include other considerations, such as ethical concerns, in such judgments.
Pedro then addresses the Nature Communications work on mentorship by women, and objects that it is unfair that this work was retracted (by the authors) only because it got so much attention which led to the identification of scientific flaws. If Pedro has some technique for ensuring that all scientific work receives equal scrutiny, I’d love to know it because I think many of my papers haven’t gotten the attention they deserved. I’m not sure what remedy Pedro is advocating here (and it certainly isn’t clear from bullet 1). Should we limit the number of people permitted to view a given article? Or should people not be permitted to question the science in papers they read if they disagree with the conclusions? Should authors not be permitted to retract? Pedro also claims this is an example where “the science is attacked because of the scientist” but goes on to remark that another paper by the same authors was not attacked, refuting his own claim. Overall, I couldn’t find a clear argument to rebut here.
Pedro then disagrees with my argument that it will sometimes be appropriate to exclude scientists from a conference even if we accept their work to be published there. I described the specific example of banning someone who has publicly stated that certain races or genders are inferior. Pedro asserts this is “letting objections to the person’s views affect the reception of the scientific work.” To make this concrete, let’s consider Ludwig Bieberbach, a talented mathematician and a Nazi who made the racial inferiority argument that “the spatial imagination is a characteristic of the Germanic races, while pure logical reasoning has a richer development among Romanic and Hebraic races” and “was enthusiastically involved in the efforts to dismiss his Jewish colleagues.” (Yes, I know this will generate piles of “David says Pedro Domingos is a Nazi” quotes from people who can’t read, but I think it is a useful test case.) He was dismissed from his position in 1945 because of his Nazi work, but was invited to lecture in 1949 by others who “considered Bieberbach’s political views irrelevant to his contributions to the field of mathematics”. So you can see that the disagreement between me and Pedro isn’t new. Bieberbach’s work is of course cited, and there’s a conjecture named after him. But, if he were alive today, would we want him at our conferences? I would say no. The letter, at least in Pedro’s interpretation, seems to say yes — so I cannot sign it.
Pedro finally argues that “the whole point of Principle 1 is to separate judgments of the science from judgments of the scientist.” Can we really do this? Long ago I served on various STOC and FOCS program committees. In those days the page limits meant that a lot of math got left out of the proof “sketches” that were submitted. At one of our committee meetings, one of our greatest researchers told me they were opposed to double-blind review because “you need to know who wrote the paper in order to decide whether you trust the proof sketches”. I disagreed with this particular argument, but it continues to be the case that a great deal of assessment of scientific work relies at a minimum on our trust in the author. When I read a paper describing execution of code that yielded certain data, I don’t run the code and I don’t compare its output to the data claimed. I trust the author. If this author had been convicted of research misconduct in the past, I would demand a higher standard of proof that they did the work they claimed. This would certainly be a case of judging science dependent on the scientist, and I think it would be appropriate.
2. Discussion and debate in the scientific community must be free of prior restraint as to topic or viewpoint.
I turn now to bullet 2 on prior restraint. Pedro claims “Karger says that this principle is vacuously satisfied” and that this shows a “shocking naivete.” He must have failed to notice my “taken literally” qualifier. My “shocking” statement was proposed only to point out that since nobody is literally prevented from publishing on the internet, the letter writers must mean something more than this. I then proceeded to argue against these broader interpretations.
In particular, I observed that conferences have always exercised prior restraint on which topics they will accept. Pedro appears to accept this point. But at the same time he objects to the restraint imposed by the NeurIPS ethics review. I’m not sure why Pedro says “Karger claims that (for example) NeurIPS’s requirement of broader impact statements and ethics reviews is not really a restraint” when in fact I described it as “a pretty clear prior restraint.” But overall Pedro seems to be saying that topic constraints are valid but that ethics review is an invalid restraint. This is inconsistent with the absolutist message of the letter. Bullet 2 does not say that discussions must be free of invalid restraints (which everyone would agree with, but would push off the entire debate into which restraints are invalid); instead it levels a blanket objection to any restraint. This would include the restraints on topic that we all expect. So how can anyone signing this letter understand exactly what kinds of restraints are covered by the principle? The letter writers might have a specific set of restraints in mind, but they remain hidden there.
3. No individual should suffer harassment or attack based on their personal or political views, religion, nationality, race, gender, or sexual orientation.
Finally we come to bullet 3 on attacks. Again we agree about a good part of this, but we disagree about the inclusion of personal or political views as grounds. Here, Pedro raises a valid objection to something I worded poorly. In my post, I explained that “If someone declares that a certain group is inferior, I want that “personal or political view” to be “attacked”. Or let’s consider another example: someone at a conference expressing their “personal view” that a particular scientist is sexy. I stand by the assertion that such views should lead to attack. But I then argued that these attacks “should not be ad hominem — -they should attack the individual on the grounds of the views they hold.” This was poorly worded; the attack should not be on the grounds of the views they hold, but on the grounds of their expression of those views.
Finally we come to bullet 3 on attacks. Again we agree about a good part of this, but we disagree about the inclusion of personal or political views as grounds. Here, Pedro raises a valid objection to something I worded poorly. In my post, I explained that “If someone declares that a certain group is inferior, I want that “personal or political view” to be “attacked”. Or let’s consider another example: someone at a conference expressing their “personal view” that a particular scientist is sexy. I stand by the assertion that such views should lead to attack. But I then argued that these attacks “should not be ad hominem — they should attack the individual on the grounds of the views they hold.” This was poorly worded; the attack should not be on the grounds of the views they hold, but on the grounds of their expression of those views.

What is the difference? Well, once you go from merely holding views to expressing them, that expression impacts those around you. Looking to my specific examples, I have heard from many researchers (especially students), of the psychological toll it takes to be told that you are inferior or unwanted or sexy. It burdens every interaction with concern about how one is being perceived and a pressure to be perfect. It discourages scientists from doing their best work. It drives people from the field. It is bad for science. Most important, it is hurting someone. Indeed, I would say these kinds of assertions are perfect examples of the attacks that bullet 3 claims should be off limits. I therefore believe it is entirely justified to respond to such attacks. Of course we can split hairs over whether and when such responses are themselves attacks. Is telling someone their comment was sexist an attack? What if it actually was sexist? Is telling them they are sexist an attack? Is refusing to have dinner with them an attack? Even if such actions are declared to be attacks, I think there are situations in which they are appropriate responses, and again the letter offers no guidance on the scope of “attack” that it rejects.
Balancing Harms
Pedro finishes by addressing my argument that we need a better balance between false positives and false negatives on accusations of racism and sexism. Here Pedro agrees with my high-level point but disagrees on the direction in which we need to shift. Pedro claims that we are tilted strongly in the direction of (imagining) “racism and sexism everywhere”. I have the opposite perspective: while I have seen a few eminent researchers like Pedro, and a few students, called out as racist or sexist for their expressed views, it seems that just about every female or minority student or researcher I have spoken to can tell me about their personal experiences of sexism and racism that have gone unaddressed in our field. Indeed, I suspect it is the rarity of call-outs that has made them newsworthy enough to be collected as anecdotal evidence of a “cancel culture”, while the experiences of those facing racism and sexism are so common that they go unremarked. To me it appears that our unwillingness to allow even the minimum of false positives has led to an astronomical cost in false negatives.
Pedro’s main anecdote here is from his own experience of being called a racist, sexist, misogynist bigot. This is unpleasant; I know because I too have been called these things. Not as often as Pedro, but enough to know that I’d like to avoid the experience. I’ve even modified my behavior a little bit to reduce the chances of it happening again; I think of it as “reflection and self-improvement.” But in the end, Pedro and I are pretty well situated to handle these kinds of attacks: we have prestige, strong social ties in academia, money, and tenure. False positives cost us relatively little. Contrast this with a smart, talented female undergraduate who dropped my class this semester because she faced sexism from the male students she needed as collaborators. For her the cost was very high. If we at the apex can reduce harm to our vulnerable members at the cost of a degree of discomfort for ourselves, deserved or not, that’s a trade worth making.
Conclusion
In sum, I cannot support a letter that takes such a black-and-white view on principles while simultaneously presenting significant ambiguity on exactly what those principles are. Pedro asks “where I draw the line.” As a scientist I get it; we like to draw lines and define absolute boundaries. But while our science may be precise, we do that science in a real world that is much more complicated. The relative costs of tolerating or rejecting certain positions, speech or behaviors depend on the speaker, the audience, the topic, and many other factors. The value of freedom in speech and scientific inquiry is high, sufficient to outweigh many other costs, but it is not infinite. And when we accept the costs that come with free inquiry, we have a responsibility to spread those costs fairly over our community, instead of letting them fall entirely on our most vulnerable members.
Lest some argue that my arguments reject signing any statement of principles, here’s an example of something I could sign: the World Wide Web Consortium’s code of conduct. It covers some of the same ground as the current letter — for example, like bullet 3, it rejects attacks based on “gender, gender identity and gender expression, sexual orientation, disability (both visible and invisible), mental health, neurotype, physical appearance, body, age, race, socio-economic status, ethnicity, caste, nationality, language, or religion.” Notably, parallel to my arguments above, it does not include “personal or political views” in this list. But it does expect members to “be inclusive and promote diversity” and asserts that “diversity of views and of people powers innovation, even if it is not always comfortable.” This matches a core theme of the open letter. But crucially, instead of laying out diversity of views as an inviolable principle, the code recognizes that it is a value that can be in tension with and overridden by others, and that such tensions ought to be resolved with empathy, in favor of those who would be most harmed. Unlike the current letter, the code’s precision and detail give me a better understanding of what I would be supporting. I’d be happy to sign a letter committing to the principles of this code.
|
https://medium.com/@david-karger/a-rebuttal-of-pedro-domingos-rebuttal-of-my-remarks-opposed-to-signing-the-open-letter-eb31821dd736
|
['David Karger']
|
2021-01-05 13:48:55.482000+00:00
|
['Computer Science', 'Diversity', 'Diversity In Tech', 'Free Speech', 'Academia']
|
What Would We Experience If Earth Spontaneously Turned Into A Black Hole?
|
One of the most remarkable facts about the Universe is this: in the absence of any other forces or interactions, if you start with any initial configuration of gravitationally bound masses at rest, they will inevitably collapse to form a black hole. A straightforward prediction of Einstein’s equations, it was Roger Penrose’s Nobel-winning work that not only demonstrated that black holes could realistically form in our Universe, but showed us how.
As it turns out, gravity doesn’t need to be the only force: just the dominant one. As the matter collapses, it crosses a critical threshold for the amount of mass within a certain volume, leading to the formation of an event horizon. Eventually, some time later, any object at rest — no matter how far away from the event horizon it initially was — will cross that horizon and encounter the central singularity.
If, somehow, the electromagnetic and quantum forces holding the Earth up against gravitational collapse were turned off, Earth would quickly become a black hole. Here’s what we would experience if that were to happen.
If you begin with a bound, stationary configuration of mass, and there are no non-gravitational forces or effects present (or they’re all negligible compared to gravity), that mass will always inevitably collapse down to a black hole. It’s one of the main reasons why a static, non-expanding Universe is inconsistent with Einstein’s relativity. (E. SIEGEL / BEYOND THE GALAXY)
Right now, the reason Earth is stable against gravitational collapse is that the forces between the atoms that make it up — specifically, between the electrons in neighboring atoms — are large enough to resist the cumulative force of gravity provided by the entire mass of the Earth. This shouldn’t be entirely surprising, as if you considered the gravitational versus the electromagnetic force between two electrons, you’d find that the latter force was stronger by about a factor of a whopping ~10⁴².
In the cores of stars that are massive enough, however, neither the electromagnetic force nor even the Pauli exclusion principle can stand up to the force inciting gravitational collapse; if the core’s radiation pressure (from nuclear fusion) drops below a critical threshold, collapse to a black hole becomes inevitable.
Although it would take some sort of magical process, such as instantaneously replacing Earth’s matter with dark matter or somehow turning off the non-gravitational forces for the material composing Earth, we can imagine what would occur if we allowed this to happen.
One of the most important contributions of Roger Penrose to black hole physics is the demonstration of how a realistic object in our Universe, such as a star (or any collection of matter), can form an event horizon and how all the matter bound to it will inevitably encounter the central singularity. (NOBEL MEDIA, THE NOBEL COMMITTEE FOR PHYSICS; ANNOTATIONS BY E. SIEGEL)
First off, the material composing the solid Earth would immediately begin accelerating, as though it were in perfect free-fall, towards the center of the Earth. In the central region, mass would accumulate, with its density steadily rising over time. The volume of this material would shrink as it accelerated towards the center, while the mass would remain the same.
Over the timescale of mere minutes, the density in the center would begin to rise fantastically, as material from all different radii passed through the exact center-of-mass of the Earth, simultaneously, over and over again. After somewhere between an estimated 10 and 20 minutes, enough matter would have gathered in the central few millimeters to form an event horizon for the first time.
After just a few minutes more — 21 to 22 minutes total — the entire mass of the Earth would have collapsed into a black hole just 1.75 centimeters (0.69”) in diameter: the inevitable result of an Earth’s mass worth of material collapsing into a black hole.
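That figure is easy to sanity-check. The Schwarzschild radius of a mass M is r_s = 2GM/c², and for an Earth mass it gives a horizon diameter of just under 2 centimeters (the small difference from the 1.75 cm quoted above comes down to rounding of the constants used):

```python
# Schwarzschild radius of an Earth-mass black hole: r_s = 2*G*M / c^2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # Earth's mass, kg

r_s = 2 * G * M_earth / c**2   # event-horizon radius, in meters
diameter_cm = 2 * r_s * 100    # horizon diameter, in centimeters

print(f"Schwarzschild diameter: {diameter_cm:.2f} cm")  # ~1.77 cm
```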
When matter collapses, it can inevitably form a black hole. Penrose was the first to work out the physics of the spacetime, applicable to all observers at all points in space and at all instants in time, that governs a system such as this. His conception has been the gold standard in General Relativity ever since. (JOHAN JARNESTAD/THE ROYAL SWEDISH ACADEMY OF SCIENCES)
If that’s what the Earth beneath our feet does, however, what would a human being on Earth’s surface experience as the planet collapsed into a black hole beneath our feet?
Believe it or not, the physical story that we’d experience in this scenario would be identical to what would happen if we instantly replaced the Earth with an Earth-mass black hole. The only exception is what we’d see: as we looked down, a black hole would simply distort the space beneath our feet while we fell down towards it, resulting in bent light due to gravitational lensing.
However, if the material composing the Earth still managed to emit or reflect the ambient light, it would remain opaque, and we’d be able to see what happened to the surface beneath our feet as we fell. Either way, the first thing that would happen would be a transition from being at rest — where the force from the atoms on Earth’s surface pushed back on us with an equal and opposite force to gravitational acceleration — to being in free-fall: at 9.8 m/s² (32 feet/s²), towards the center of the Earth.
When a human enters free-fall, such as this 1960 skydive jump by Colonel Joseph Kittinger from over 100,000 feet, they accelerate towards the center of the Earth at a roughly constant rate of ~9.8 m/s², but are resisted by the non-accelerating air molecules around them. After only a few seconds, a human will reach terminal velocity, as the drag force will counterbalance and cancel out the accelerative force of gravitation. (U.S. Air Force/NASA/Corbis via Getty Images)
Unlike most free-fall scenarios we experience on Earth today, such as a skydiver experiences when jumping out of an airplane, you’d have an eerie, lasting experience.
You wouldn’t feel the wind rushing past you, but rather the air would accelerate down towards the center of the Earth exactly at the same rate you did.
There would be no drag forces on you, and you would never reach a maximum speed: a terminal velocity. You’d simply fall faster and faster as time progressed.
That “rising stomach” sensation that you’d feel — like you get at the top of a drop on a roller coaster — would begin as soon as free-fall started, but would continue unabated.
You’d experience total weightlessness, like an astronaut on the International Space Station, and would be unable to “feel” how fast you were falling.
Which is a good thing, because not only would you fall faster and faster towards the Earth’s center as time went on, but your acceleration would actually increase as you got closer to that central singularity.
Both inside and outside the event horizon of a Schwarzschild (non-rotating) black hole, space flows like either a moving walkway or a waterfall, depending on how you want to visualize it. At the event horizon, even if you ran (or swam) at the speed of light, there would be no overcoming the flow of spacetime, which drags you into the singularity at the center. Outside the event horizon, though, other forces (like electromagnetism) can frequently overcome the pull of gravity, causing even infalling matter to escape. (ANDREW HAMILTON / JILA / UNIVERSITY OF COLORADO)
As you can see from the illustration above, the size of the arrows — as well as the speed that they move at — increases as we get closer to the central singularity of a black hole. In Newtonian gravity, which is a good approximation as long as you’re very far away from the event horizon (or the equivalent size of the event horizon), the gravitational acceleration you experience will quadruple every time your distance to a point halves. In Einsteinian gravity, which matters as you get close to the event horizon, your acceleration will increase even more significantly than that.
If you start off at rest with respect to the center of Earth, then by the time you’ve:
fallen halfway to Earth’s center, a distance of ~3187 km, you’ll be falling at a speed of 11 km/s,
fallen 90% of the way to Earth’s center, so you’re just ~637 km away, you fall at 34 km/s,
fallen 99% of the way to Earth’s center, so you’re only ~64 km away, you’re moving at 112 km/s,
made it to within 1 km of the very center, you’ll move at 895 km/s,
and while you might only be a millisecond from the event horizon, you’ll never get to experience what it’s like to get there.
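These speeds follow from a simple Newtonian energy-conservation sketch: a test mass falling from rest at Earth's surface radius R toward a central point mass M reaches speed v = √(2GM(1/r − 1/R)) at radius r. (This ignores relativistic corrections, which matter close to the horizon.)

```python
from math import sqrt

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
R = 6.371e6     # Earth's surface radius (starting point, at rest), m

def infall_speed(r):
    """Speed in m/s after free-falling from rest at R down to radius r,
    treating the collapsed Earth as a central point mass (Newtonian)."""
    return sqrt(2 * G * M * (1 / r - 1 / R))

# radii from the list above: halfway, 90%, 99%, and 1 km from the center
for r_km in (3187, 637, 64, 1):
    v = infall_speed(r_km * 1e3)
    print(f"r = {r_km:>4} km -> v = {v / 1e3:.0f} km/s")
```

Running this reproduces the ~11, ~34, ~112, and ~895 km/s figures quoted in the list.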
If you were represented by a sphere falling towards a central point mass, like a black hole, these arrows would represent the tidal forces on you. While, overall, you (as the falling object) would experience an average force over your entire body, these tidal forces would stretch you along the direction towards the black hole and compress you in the perpendicular direction. (KRISHNAVEDALA / WIKIMEDIA COMMONS)
That’s because your body, as you fall closer and closer to the center of the collapsing Earth, starts to experience enormous increases in tidal forces. While we normally associate tides with the Moon, the same physics is at play. Every point along any body in a gravitational field will experience a gravitational force whose direction and magnitude are determined by their displacement from the mass they’re attracted to.
For a sphere, like the Moon, the point closest to the mass will be attracted the most; the point farthest from it will be attracted the least; the points that are off-center will be preferentially attracted to the center. While the center itself experiences an average attraction, the points all around it will experience different levels, which stretches the object along the direction of attraction and compresses it along the perpendicular direction.
Here on the surface of Earth, these tidal forces on a human being are minuscule: a little less than a millinewton, or the gravitational force on a typical small earring. But as you get closer and closer to Earth’s center, these forces octuple each time you halve your distance.
At every point along an object attracted by a single point mass, the force of gravity (Fg) is different. The average force, for the point at the center, defines how the object accelerates, meaning that the entire object accelerates as though it were subject to the same overall force. If we subtract that force out (Fr) from every point, the red arrows showcase the tidal forces experienced at various points along the object. These forces, if they get large enough, can distort and even tear individual objects apart. (VITOLD MURATOV / CC-BY-S.A.-3.0)
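The "octuple each time you halve your distance" rule is just the 1/r³ scaling of tidal forces. For a body of head-to-toe length h (the 1.8 m below is an assumed value for illustration), the differential acceleration near a point mass is approximately Δa ≈ 2GMh/r³:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
h = 1.8         # assumed head-to-toe length of a person, m

def tidal_accel(r):
    """Approximate head-to-toe differential (tidal) acceleration, in m/s^2,
    at distance r from a point mass M: delta_a ~= 2*G*M*h / r**3."""
    return 2 * G * M * h / r**3

R = 6.371e6  # Earth's surface radius, m
print(f"at the surface: {tidal_accel(R):.1e} m/s^2")  # negligible
print(f"factor per halving of r: {tidal_accel(R / 2) / tidal_accel(R):.1f}")  # 8.0
```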
By the time you’re 99% of the way to Earth’s center, the force pulling your feet away from your torso and your head away from your feet works out to about 110 pounds, as though the equivalent of nearly your own body weight was working to pull you apart.
When you experience a force on your body that’s equivalent to the gravitational acceleration on Earth — or a force that’s equal to your weight — scientifically that’s known as “1g” (pronounced “one-gee”). Typically, humans can only withstand a handful of gs over a sustained period of time before either lasting damage occurs or we lose consciousness.
Roller coasters might get up to 5 or 6 gs, but only for a brief period of time.
Fighter pilots can endure up to 12 to 14 gs, but only in a pressurized suit without losing consciousness.
Humans have experienced and survived extremely brief (less than a second) accelerations of between 40 and 70 gs, but the risk of death is very real.
Above that threshold, you’re headed for trauma and possibly death.
This illustration of spaghettification shows how a human gets stretched and compressed into a spaghetti-like structure as they approach the event horizon of a black hole. Death by these tidal forces would be painful and traumatic, but at least it would also be quick. (NASA / PUBLIC DOMAIN / COSMOCURIO OF WIKIMEDIA COMMONS)
By the time you’ve reached about 25 kilometers from the central singularity, you’ll cross a critical threshold: one where these tidal forces will cause traumatic stretching of your spine, lengthening it so severely that the individual vertebrae can no longer remain intact. A little farther — about 14 kilometers away — and your joints will begin to come out of their sockets, similar to what happens, anatomically, if you were drawn-and-quartered.
In order to approach the actual event horizon itself, you’d have to somehow shield yourself from these tidal forces, which would rip your individual cells apart and even the individual atoms and molecules composing you before you crossed the event horizon. This stretching effect along one direction while compressing you along the other is known as spaghettification, and it’s how black holes would kill and tear apart any creature that ventured too close to an event horizon where space was too severely curved.
As spectacular as falling into a black hole would actually be, if Earth spontaneously became one, you’d never get to experience it for yourself. You’d get to live for about another 21 minutes in an incredibly odd state: free-falling, while the air around you free-fell at exactly the same rate. As time went on, you’d feel the atmosphere thicken and the air pressure increase as everything around the world accelerated towards the center, while objects that weren’t attached to the ground would appear to approach you from all directions.
But as you approached the center and you sped up, you wouldn’t be able to feel your motion through space. Instead, what you’d begin to feel was an uncomfortable tidal force, as though the individual constituent components of your body were being stretched internally. These spaghettifying forces would distort your body into a noodle-like shape, causing you pain, loss of consciousness, death, and then your corpse would be atomized. In the end, like everything on Earth, we’d be absorbed into the black hole, simply adding to its mass ever so slightly. For the final 21 minutes of everyone’s life, under only the laws of gravity, our demises would all truly be equal.
|
https://medium.com/starts-with-a-bang/what-would-we-experience-if-earth-spontaneously-turned-into-a-black-hole-c86d2a6b7ae1
|
['Ethan Siegel']
|
2020-10-22 14:01:18.796000+00:00
|
['Gravity', 'Einstein', 'Black Hole', 'Physics', 'Relativity']
|
Debts
|
Headline from foxnews.com: “Attacker targeting pro-Trump marchers ID’d as journalism student.” He’ll fit right in with the mainstream media — immediate hire.
Here’s a sad headline from FN: “Twitter says Antifa-aligned group cheering alleged arson at police officer’s home is allowed.” What if it were a right-wing group cheering a Democrat’s house being burned down?
Another FN headline: “Michael J. Fox thinks his Republican ‘Family Ties’ character wouldn’t support Trump.” To no one’s surprise, Alex Keaton is a RINO.
Except for the political posts, I enjoy Facebook. Some of my friends are moving to a site named Parler. I balked because it asked for my phone number. I always refuse FB requests for it. I also got to thinking why would I want to join a site simply because it’s supposedly neutral politically. I would prefer a site completely without politics. Would it be possible for FB and Twitter to prevent political posts from reaching the feed of anyone who wished to opt out from them? Then again, as I’ve told many friends — scroll, baby, scroll! Duh!
Leftist icons AOC, Ilhan Omar and Rashida Tlaib are pushing for student loan forgiveness. Each has such debt. The starting salary of a member of the House of Representatives is $174,000. What kind of economics is this? By the way, should such forgiveness include sending a check to anyone who has honorably repaid his/her debt? If not, why not?
Here’s a book to be published 11/24:
It’s always great when someone comes along you hadn’t seen since pre-pandemic. My thanks to Ludmila, who picked up where she left off, buying four books in Russian; and to the woman who selected two of same; and to Wolf, who purchased one in Russian and one in Hebrew; and to Maria, who took home two books on Anne Frank, one on brain health, and How to Make Love to a Man by Alexandra Penney; and to Ira, who chose a book on pranks; and to my constant benefactress, who donated five works of non-fiction; and to whoever (whomever?) left the sealed plastic bag filled with books, half in Russian, half English, in the garden of the apartment building where I set up shop.
My Amazon Author page: https://www.amazon.com/Vic-Fortezza/e/B002M4NLJE
FB: https://www.facebook.com/Vic-Fortezza-Author-118397641564801/?fref=ts
Read Vic’s Stories, free: http://fictionaut.com/users/vic-fortezza
|
https://medium.com/@vicf1950/debts-770d688554a3
|
['Vic Fortezza']
|
2020-11-17 22:27:29.003000+00:00
|
['Fb Vs Parler', 'Books', 'Headlines', 'Loan Forgiveness', 'Modern Warriors']
|
How To Run Angular With Java API on Minikube
|
Create Deployment and Service Objects
A pod is a group of one or more containers that share storage and network, with a specification for how to run the containers. You can check the pod documentation here.
Let’s create a pod with the file below. Before that, you need to start Minikube on your local machine with the command minikube start, then create the pod with kubectl create -f pod.yml
pod.yml
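The embedded pod.yml gist doesn't render in this copy, so below is a minimal sketch consistent with the commands in this post: the pod name webapp matches the exec command used later, while the image name and port are assumptions you'd replace with your own.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
spec:
  containers:
    - name: webapp
      # assumed image name; substitute your own Angular + Java API image
      image: your-dockerhub-user/angular-java-api:latest
      ports:
        - containerPort: 80
```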
It takes some time to pull the image from Docker Hub if you are doing this for the first time, depending on the image size. Now you can see the pod is in running status, and you can exec into it to explore the file structure, etc.
# get the pod
kubectl get po

# exec into the running pod
kubectl exec -it webapp -- /bin/sh
exec into running pods
Deployment
Creating just one pod is not enough. What if you want to scale out the application and run 10 replicas at the same time? What if you want to change the number of replicas depending on demand? That's where the Deployment comes into the picture: we specify the desired state in the Deployment object, such as how many replicas to run.
Kubernetes makes sure that the desired state is always met. The Deployment creates replica sets, which in turn create pods in the background. Let's create a Deployment for our project with this command: kubectl create -f deployment.yml
deployment.yml
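The deployment.yml gist likewise doesn't render here; a minimal sketch matching the 5 replicas described in this post might look like this (the image name is an assumption):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          # assumed image name; substitute your own
          image: your-dockerhub-user/angular-java-api:latest
          ports:
            - containerPort: 80
```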
We have 5 replicas in the specification and the deployment creates 5 pods and 1 replica set.
deployment running
Service
A Service is an abstract way to expose an application running on a set of Pods as a network service. Let's create a service of type NodePort so that we can access the Angular app from the browser. Here is the service object YAML:
service.yml
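The service.yml content is also missing from this copy; a minimal NodePort sketch consistent with the text might look like this (the nodePort value is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # assumed; NodePort values must fall in 30000-32767
```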
Create a service with this command kubectl create -f service.yml and you can list the service with this kubectl get svc
kubectl get svc
You can create one file called manifest.yml to place all the Kubernetes objects in one place and create all of the objects with one command kubectl create -f manifest.yml
|
https://medium.com/bb-tutorials-and-thoughts/how-to-run-angular-with-java-api-on-minikube-f5a1d1b1b697
|
['Bhargav Bachina']
|
2020-08-22 05:01:00.933000+00:00
|
['Angular', 'Programming', 'Java', 'Kubernetes', 'Web Development']
|
#2 Data Engineering — PIPELINES. This is the second blog in the series…
|
This is the second blog in the series of posts related to Data Engineering. I am going to write down all the important things that I learn as a part of the Data Scientist Nanodegree Program, Udacity. I have realized it is the best way to test my understanding of the course material and maintain the discipline to study. Please check the other posts as well.
The first thing we will learn as Data Engineer is Data Pipelining.
Pipelines — INTRODUCTION
Pipelining is nothing but moving data from one place to another. There are two major kinds of pipelines:
ETL
An ETL pipeline is a specific kind of data pipeline and is very common. ETL stands for Extract, Transform, Load. Imagine that you have a database containing web log data. Each entry contains the IP address of a user, a timestamp, and the link that the user clicked.
What if your company wanted to run an analysis of links clicked by city and by day? You would need another data set that maps an IP address to a city, and you would also need to extract the day from the timestamp. With an ETL pipeline, you could run code once per day that would extract the previous day’s log data, map the IP address to city, aggregate link clicks by city, and then load these results into a new database. That way, a data analyst or scientist would have access to a table of log data by city and day. That is more convenient than always having to run the same complex data transformations on the raw web log data.
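As a concrete sketch of that daily job (the log rows and the IP-to-city lookup below are invented for illustration):

```python
from collections import Counter
from datetime import datetime

# --- Extract: pretend these rows came from yesterday's web log table ---
raw_logs = [
    {"ip": "203.0.113.5", "timestamp": "2021-08-31T09:15:00", "link": "/pricing"},
    {"ip": "198.51.100.7", "timestamp": "2021-08-31T10:02:00", "link": "/docs"},
    {"ip": "203.0.113.5", "timestamp": "2021-08-31T11:30:00", "link": "/docs"},
]

# --- Transform: map each IP to a city, pull the day out of the timestamp,
# and aggregate link clicks by (city, day) ---
ip_to_city = {"203.0.113.5": "Chicago", "198.51.100.7": "Boston"}  # assumed lookup

clicks_by_city_day = Counter()
for row in raw_logs:
    city = ip_to_city.get(row["ip"], "unknown")
    day = datetime.fromisoformat(row["timestamp"]).date().isoformat()
    clicks_by_city_day[(city, day)] += 1

# --- Load: in a real pipeline this would be an INSERT into a new table ---
for (city, day), clicks in sorted(clicks_by_city_day.items()):
    print(f"{day}  {city:<8} {clicks} clicks")
```

A real version would read from the log database, use a proper IP-geolocation table, and write the aggregates into a warehouse table, but the shape of the job is the same.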
Before cloud computing, businesses stored their data on large, expensive, private servers. Running queries on large data sets, like raw web log data, could be expensive both economically and in terms of time. But data analysts might need to query a database multiple times even in the same day; hence, pre-aggregating the data with an ETL pipeline makes sense.
ELT
ELT (Extract, Load, Transform) pipelines have gained traction since the advent of cloud computing. Cloud computing has lowered the cost of storing data and running queries on large, raw data sets. Many of these cloud services, like Amazon Redshift, Google BigQuery, or IBM Db2 can be queried using SQL or a SQL-like language. With these tools, the data gets extracted, then loaded directly, and finally transformed at the end of the pipeline.
However, ETL pipelines are still used even with these cloud tools. Oftentimes, it still makes sense to run ETL pipelines and store data in a more readable or intuitive format. This can help data analysts and scientists work more efficiently as well as help an organization become more data-driven. We will learn about ETL in the following articles.
ETL Pipeline —Brief Introduction
ETL Pipeline Structure (Image from Databricks)
As a data engineer, you extract data from one place, transform it into another format, and then load it into storage somewhere else.
(Image from Udacity)
Data Scientists start working on the data after the ETL process is completed. However, in small-scale companies, one person often does the work of both Data Engineer and Data Scientist. The subsequent articles will give insights into everything you will need to get the job done.
Here is the outline we will follow:
Extract data from different sources such as:
CSV files
JSON files
APIs
Transform data
combining data from different sources
data cleaning
data types
parsing dates
file encodings
missing data
duplicate data
dummy variables
remove outliers
scaling features
engineering features
Load
send the transformed data to a database
ETL Pipeline
code an ETL pipeline
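As a preview of how these pieces fit together, here is a minimal end-to-end sketch in Python; the column names and table schema are invented for illustration:

```python
import csv
import io
import sqlite3

# Extract: parse CSV data (an in-memory string stands in for a real file).
raw = "name,score\nada,91\ngrace,88\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: fix data types and engineer a simple feature.
for row in rows:
    row["score"] = int(row["score"])
    row["passed"] = row["score"] >= 90

# Load: send the transformed rows to a database (SQLite for the sketch).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (name TEXT, score INTEGER, passed INTEGER)")
conn.executemany(
    "INSERT INTO results VALUES (:name, :score, :passed)",
    rows,
)
total = conn.execute("SELECT COUNT(*) FROM results").fetchone()[0]
```

Each stage in a production pipeline is more involved (file encodings, missing data, duplicates, and so on, as in the outline above), but the extract, transform, load structure stays the same.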
Keep watching this space to get started with ETL pipelines.
|
https://medium.com/@sakaggi/2-data-engineering-pipelines-b9fb587a5371
|
['Sakshi Agarwal']
|
2021-09-01 03:07:20.796000+00:00
|
['Data Engineering', 'Udacity', 'Data Science', 'Etl', 'Pipeline']
|
The Top 10 Earners on MTV’s The Challenge were all Crowned in the Last 6 Seasons*
|
NovaRogue, Dec 19, 2019
(with 1 exception)
updated November 2020 — no spoilers for Double Agents
Another season of MTV’s The Challenge has come to a close, crowning 1 new champ, Elite Jenny West, and somehow giving a seventh damn win to Johnny Devenanzio 🍌
The previous season crowned 2 new champions — Rogan and Dee — and heralded 2 competitors into the vaunted 3+ Win Club: Jordan and CT.
Now this impressive feat has only been achieved by 8 other people — Johnny, Darrell, Kenny, Landon, Evelyn, Veronica, Derrick K, and Jamie M.
Not all wins are created equal, however, especially when it comes to the prize purse. The money won has increased exponentially since Season 30, the first to have $1,000,000 up for grabs. And because of this, the Top 10 Earners on The Challenge were all crowned in the last 6 seasons (with 1 exception 👨🦰).
This is especially surprising considering the seasons in the 30s only represent 17% of the overall show. And 3 of these high earners have only been on 3 seasons or fewer — they were Rookie Champions — and the second richest achieved that feat on their 3rd full season as well.
In fact, if you only counted Seasons 30–35, these people would be the highest earners, minus the 1 exception. Fully HALF of the Top 10 Highest Earners were crowned in the past TWO seasons. That’s how much the prize money has increased. Cwazy 😵
So who are these “rich bitches,” and who are the other 2?
And what would happen if there were no steal-your-money twist?
1. Johnny “Bananas” Devenanzio — $1,184,720
won $500,000 on S35 Total Madness
won $276,000 on S28 Rivals III ($275k for first place and not sharing the money + $1k for a daily)
won $125,000 on S25 Free Agents
won $17,500 on S24 Rivals II (second place)
won $76,250 on S22 Battle of the Exes ($75k for first place + $1,250 for a daily)
won $52,000 on S21 Rivals ($50k for first place + $2k for 2 dailies)
won $52,970 on S17 The Ruins ($32k for first place + $20,970 for his individual bank account)
won $75,000 on S16 The Island
won $10,000 on S14 Inferno 3 (team bank account)
2. Ashley Mitchell — $1,121,250
won $1,000,000 on S32 Final Reckoning
won $121,250 on S29 Invasion (100k for winning the final + 21,250 from the Underdog team bank account)
3. Jordan Wisely — $833,000
won $250,000 on S34 War of the Worlds 2
won $450,000 on S30 Dirty Thirty
won $125,000 on S26 Battle of the Exes II
earned $8,000 on S24 Rivals II ($7,500 for third place + $500 for a daily)
4. Turabi “Turbo” Camkiran — $750,000
won $750k on S33 War of the Worlds 1
5. Cara Maria Sorbello — $602,250
won $378,750 on S31 Vendettas ($370k from first place + $8,750 from individual bank account)
won $35,000 on S30 Dirty Thirty (second place)
won $125,000 on S27 Battle of the Bloodlines
won $17,500 on S24 Rivals II (second place)
won $26,000 on S21 Rivals ($25k from second place + $1k from a daily)
won $20,000 on S20 Cutthroat (from team bank account)
6. Camila Nakagawa — $561,250
won $450,000 on S30 Dirty Thirty
won $27,500 on S29 Invasion (second place)
won $76,250 on S22 Battle of the Exes ($75k for first place + $1,250 for a daily)
7. Chris “CT” Tamburello — $514,750
won $250,000 on S34 War of the Worlds 2
won $15,000 on S30 Dirty Thirty (third place)
won $112,500 on S29 Invasion ($100k for first place + $12,500 from team bank account)
won $63,000 on S24 Rivals II ($62,500 for first place + $500 for a daily)
won $52,500 on S22 Battle of the Exes ($50k from second place + $2,500 for 2 dailies)
won $1,000 on S21 Rivals (from a daily)
won $10,000 on S10 Inferno II (from team bank account)
won $11,000 on S8 Inferno ($10k from team bank account + $1k for a daily)
8. Jenny West — $500,000
won $500,000 on S35 Total Madness
9. (exception) Wes Bergmann — $233,000
won $5,000 during the Throne Off daily on S35 Total Madness
won $50,000 on S33 War of the Worlds (third place)
won $63,000 on S24 Rivals II ($62,500 for first place + $500 for a daily)
won $25,000 on S21 Rivals (second place)
won $150,000 on S13 The Duel
won $10,000 on S12 Fresh Meat (third place)
10. Dee Nguyen — $255,000
won $5,000 during the Throne Off daily on S35 Total Madness
won $250k on S34 War of the Worlds 2
Honourable Mentions
Hunter Barfield — $0 / “$500,000”
won S32 Final Reckoning but his partner chose to keep his $500,000
Sarah Rice — $173,700 / “$311,200”
won S28 Rivals III but her partner chose to keep her $137,500. still earned $1,000 from a daily
won $125,000 on S26 Battle of the Exes II
won $10,000 on S23 Battle of the Seasons (third place)
won $20,000 on S20 Cutthroat (team bank account)
won $17,700 on S17 The Ruins (team bank account)
If there were no Steal-Your-Money Twist
The Top 10 Earners would be
1 Johnny Bananas — $1,047,220
2 Jordan Wisely — $833,000
3 Turbo Camkiran — $750,000
4 Ashley Mitchell — $621,250
5 Cara Maria Sorbello — $602,250
6 Camila Nakagawa — $561,250
7 CT Tamburello — $514,750
8 (tie) Hunter Barfield — $500,000
8 (tie) Jenny West — $500,000
9 Sarah Rice — $311,200
10 Wes Bergmann — $298,000
thereby leaving Dee-leted out of the Top 10. #justiceforSarah #justiceforHunter #bringthemback
|
https://medium.com/@novarogue/the-top-10-earners-on-mtvs-the-challenge-were-all-crowned-in-the-last-4-seasons-36862340068b
|
[]
|
2020-11-25 19:54:51.737000+00:00
|
['Reality TV', 'Television', 'The Challenge', 'MTV', 'TV']
|
My 9/11 Experience — 20 years later
|
One person’s recollection of something she has spent two decades trying to leave in the past.
The morning was clear and sunny, and from the moment I stepped through the door at the top of the brownstone’s stoop in Greenpoint, Brooklyn, I decided that I would walk through the park near my apartment to hop on the L train to get to my temp job in the Financial District, instead of catching the closer G subway line, which would connect to the A train and put me on the longer walk through the giant skyscrapers of Lower Manhattan. I put my headphones on and enjoyed the weather as U2’s Beautiful Day album became my morning soundtrack while I walked through Greenpoint and into Williamsburg to catch the L train.
Humanity squeezed in to metal tubes hurtling themselves through concrete tunnels under the Hudson and below the city buildings. In the last year, the subway had become routine to me. I was constantly running up and down steps, listening to buskers playing their varied instruments, waiting for trains, grabbing the poles at uncomfortable angles, and spending time in a sea of people who didn’t say much to each other. I was managing to barely pay my bills by bartending and doing office admin temp work throughout the city, which meant that I had learned the whole subway system very quickly. I transferred from the L train to the 4/5/6 train headed downtown. The train arrived at the Fulton Street station, and I quickly walked out to the platform and then up the concrete steps. When I walked onto the street the world seemed slightly askew. Everyone was looking up and behind me. I turned around and looked up myself. The sun glinted off the sides of the buildings and there were papers falling from the sky. I heard someone say something about a plane or helicopter, but it didn’t make logical sense to me. Did someone manage to fly a helicopter around downtown Manhattan and throw ads from it? That could make sense, not necessarily sane, but definitely plausible.
I needed to get to work. Today was my second day at an Israeli-based computer company named Camelot. I was covering for the company’s receptionist who was out on vacation for the week, and the task was pretty simple — answer the phone, push the buttons, use the phone extension list. My Midwest work ethic told me that I didn’t want to be late.
As I walked the block to the office, I called my mom who was at her work in Illinois. I told her “Hey momma. There’s something going on here, but I’m not sure what it is. You should turn on the news.”
“Are you okay?” She asked.
My mom wasn’t the biggest fan of me moving to New York, so I tried to keep my voice steady despite the concern growing in my gut. “Yeah. I’m fine. I’m just here in downtown going to my temp job. I’ll find out what’s happening when I get there. I’ll call you later. Gotta go. Love you.”
“Love you too, sweetie.”
I snapped my phone closed and entered the glass doors of the skyscraper’s lobby. People were scurrying in and out, chattering with questions and comments. I entered the elevator and pushed “9” while checking my watch to make sure I wasn’t late. When I entered the chaos in the office, nobody cared about my punctuality. A group of people were gathered around the receptionist’s desk, and the phone was ringing off the hook. One of the company’s managers was manning the phones, and she was not about to make me deal with the calls coming in. As she answered call after call I could hear her talking to spouses that were calling to see if their significant other had made it to the office yet, and corporate owners were calling from the other side of the world to check on everyone. Piece by piece, things began to take shape.
The internet didn’t work. Cell phone reception was spotty. A co-worker arrived to explain that he’d seen a plane with “American” written on the side hit one of the Trade Center towers. People kept asking questions, trying to make sense, hoping to talk him out of the facts he knew to be true. Nothing changed what he saw.
From behind came the high squeal of a jet plane which is usually reserved for runway landings. The sound passed over us, and then an earthquake followed. The building shook and we held our breath. Our view was blocked by the tall buildings surrounding us, and the man who had seen “American” ran outside to see what was happening and promised to report back. I was frozen with uncertainty. Part of me was anxious to run and help, find out what was happening, get to the problem and help fix it. But something else held me in my place. Even though I wanted to move, I felt like I had hands on my shoulders keeping me still, and I reasoned to myself that I should stay in case these Camelot people needed my help. I watched people trying to tune in to radio stations, get the internet to work, and talk to concerned family members. I stood there helpless. Then another sound came rolling in.
It was a low rumble that grew louder and closer, and a shadow crept toward us reflected in the sides of the buildings surrounding us. A woman screamed “We’re all gonna die!” She ran past me towards the stairs. I calmly turned and followed her, as did many others. Looking back, nine flights of stairs isn’t too bad, especially in comparison with the 100 flights of stairs others had to traverse in the Towers. When I arrived back in the building lobby, outside the glass wall the world was grey. A thick cloud hovered. We halted. The lobby door opened, and a woman covered in ash and dirt came in. And then a man, and another man, and one more woman. They found their way to the fountain in the lobby and began to remove the ash from their skin. The upper floors streamed into the lobby and we all stared at the cloud outside. I came to the realization I might not be able to get back to Brooklyn. My friend Christy lived on Manhattan — the Upper East Side, which was a long walk, but nonetheless had no bridges or tunnels to cross. My cell phone had signal, and I managed to reach her. She wasn’t sure whether she would have to go to work, so she told me she’d call me back after she talked to her boss. My cell phone didn’t work any more that day.
The people filling the lobby had no clue the extent of what was happening, but it didn’t stop everyone from guessing and trying to figure it out.
Could everyone have made it out? Of course. I hope so. Where will we go? How long should we stay? It can’t have been real planes.
A few fire fighters in full gear walked in and talked to the security guard. The security guard asked everyone to prepare for evacuation. When the fire fighters turned to leave, there was another rumbling, and people were running down the street in front of the glass wall as the cloud returned to chase them down and envelop them. The fire department ran into the cloud. Part of me wanted to go with them. Maybe I could stop it. Maybe I could help.
Some people were crying, but I just sat quiet in stunned silence. Words weren’t enough. Someone decided that it would be safe to return upstairs, so we did. We tried the internet. We answered the phone. We waited. Evacuation time arrived, and a tall husky man from the office told me he needed to go to Brooklyn and would walk with me. He handed me a navy blue baseball cap that read “Camelot” in white letters, explaining that I could use it to cover my hair from the dust cloud, and also suggested I use my black sweater to cover my mouth from breathing in whatever was in the air. I walked back down the stairs and out the front door of the lobby with these strangers. I stopped and looked left, toward the place where all the ash and dust emanated. I wanted to go left. I wanted to know for myself. I wanted to heal, to help, to save, do something to make anything different. I turned right instead. I still regret that right turn even though it was probably for the best. The tall man and I followed the line of people from all over downtown walking toward the Manhattan bridge, some of whom were covered head to toe in ash and debris.
The Manhattan bridge was a full, steady stream of people. Like a mantra, I kept
thinking “Did everyone get out? I hope they got out. They could’ve got out.” Many people on the bridge were cursed with the same question, wondering it aloud to each other. The sound of another plane appeared, it was off to my right headed from downtown toward uptown. It was moving fast and I wondered if this plane would hit the bridge, which was clearly visible and packed with people. And, if that didn’t kill me, whatever was in the Hudson River after I fell into it certainly would. A fighter jet appeared above and I did not fall in to the Hudson river. Sharp relief washed over me for a moment until I looked back to the smoke filling the air of lower Manhattan. It looked like two smokestacks I would have seen at a midwest factory in my childhood. And despite the sweater over my face, the smell of burning metal overpowered my senses. I turned my back to the nightmare and continued over the bridge.
When we reached the other side, I parted ways with the tall, Camelot man. I never saw him again. I found a G train stop that would get me close to home. As I walked into this station, I was leery and anxious, worried that the subways may be compromised too, but I got on the train anyway. I arrived at the Greenpoint station stop I avoided earlier in the day, and walked 3 blocks home, up the stoop and the stairs to the 2nd floor apartment, through the door and straight to the phone. The land line was working; I called my mom at her work. Her receptionist answered and all I could muster was “Can I talk to my mom?” She transferred me immediately. My mom was relieved, she thought I would have been in the middle of it all, helping people when everything fell down, and I thought “So did I. Why wasn’t I?” I didn’t tell her that, though. I was happy to ease her mind, but then I couldn’t talk about it any more and we hung up. I just didn’t have words to explain what happened or how I felt. So I stopped feeling for awhile. I didn’t know what to do. I was home alone and couldn’t reach my two roommates so I decided to do something sensible with my time and focused on my pile of laundry — I gathered it, walked down the block to the laundromat, and when I turned the corner I saw that the street outside the laundromat offered a straight view of what was left of the towers. Once I got settled in to the regularity of the laundry, I found myself drawn back out to the street while I waited for the washer to finish. I stared at the dust clouds that hovered where the towers used to be. At the time, I thought just the tops had fallen off. The dust was in such a tight column, it was like the ghost of the towers hung around to confuse everyone. It was a few more hours before I found out that everything had collapsed.
For days the count of the dead fluctuated. Photos and flyers with missing people and phone numbers of their loved ones lined parks and message boards throughout the city, and Union Square turned into the hub of searching and remembering. People lit candles, and strangers comforted each other. Everyone — strangers, friends, co-workers, bodega buyers and sellers, and subway riders — they all communicated honestly and thoughtfully as we made our way through the weeks that followed, checking in on whether any family or friends were unaccounted for, sharing stories of the experience, and being constantly kind to all fellow New Yorkers. Remarkably, it was a tragic and magical time in the city. There was a love on everyone’s sleeve born from the terror and disaster of that day.
I’ve carried this story and experience with me for 20 years. I always wanted to share it, but never really felt comfortable doing so. It’s a real conversation stopper, and how often do any of us want to talk about the most traumatic experiences of our life? I’ve shared it here and there, usually if asked or to help someone understand some of the quirks that came from that day, like how I can’t stand the smell of burning metal, low flying planes make me jump, and I am almost always on alert for the possibility of disaster. Everything that happens to us becomes a part of us, our own personal story whether we talk about it or not. And the other part of the story that I haven’t been telling involves quite a few “what ifs” that have haunted me for 20 years. If I had taken the other train that morning, I would have taken the G train to the A train and gotten off at the stop under the World Trade Center at the same time as the planes hit. If I had left the office after the second plane hit, and before the towers fell, would it have fallen on me? If I had turned left instead of right during evacuation, would I have been injured, could I have helped someone, would I have seen things that would completely change the person I am today, for good or bad? I will never stop wondering: did I let myself down, or did I save myself?
|
https://medium.com/@jenludden/my-9-11-experience-20-years-later-9c73c66594eb
|
['Jennifer M Ludden']
|
2021-09-11 05:13:38.400000+00:00
|
['New York', 'NYC', 'September 11', 'Life', 'Remembering']
|
Some pointers on code reviews — Part 1
|
Don’t look for mistakes, search for opportunities
As many other young developers out there, I started my journey with the non-coding aspects of my job ( including code reviews ) on the wrong note. The fact that I was under the mentorship of a coding wizard with a short fuse didn’t help either.
However, moving from simply writing code to fully embracing code reviews was not an easy step for me. Like any other starting developer, I thought the whole process was just a waste of time.
I no longer think that.
During the last couple of years, I have learned a number of good lessons regarding code reviews. I do believe the context or the specifics of my lessons are not relevant to anyone but me. To that end, I will spare you the entire story and move straight to the point.
Here are some things I have learned regarding code reviews.
Consider accountability
I believe a lot of young developers these days take a rather personal approach to coding and code reviews, in general. It’s not hard to see why, really: The project manager assigns me a task to do, I figure out the scope, I do my research, I look at the impact and I make sure I find the best possible approach. So why, then, do you get to just come in and “review” something that is clearly well-thought-out and almost perfect?
Let me ask you this: What if your perfect piece of code fails to perform in a production environment? What if the 0.001% happens and leads to massive losses, both financial and in terms of the user’s trust in the system?
Do you stand solely responsible for that loss?
That is simply not how this business works. Truth is, (most of the time) the company you work for is accountable. As a general rule of thumb, you should always remember: You are solely responsible for the code you have locally, however, once that reaches a code review, the responsibility shifts to the entire team. Finally, when that amazing piece of code reaches the live environment, it falls under the responsibility of the company itself.
Knowing this, I think it gets a little bit easier to understand why many seasoned developers or team leaders put a lot of emphasis on this process. In many ways, approving a code review is acknowledging the responsibility and accountability for the code being submitted.
Prioritize but don’t overlook
I think we’ve all been there. The team goes through an exceptionally productive sprint and the code reviews keep piling up. The day of the reckoning eventually comes when you have ten, twenty such code reviews to take in. What now?
Well, first, take a deep breath and strap in. You okay? Good!
It’s time to prioritize. I know a lot of us tend to give our own code reviews top priority. Well, let’s take a step back, have a look at the bigger picture and start asking some questions.
What seems to be the thing that’s most important for the team?
What seems to be the highest priority for the project manager?
What about the customer?
Keep in mind that it is always a good idea to ask if you’re not certain where to start.
Priority, however, is only half the story here. The other half speaks to overlooking. Overlooking happens when some code reviews are always stuck in the low priority queue. I won’t get into why postponing a code review for too long is a bad practice, but I will say that nobody wants that. If nothing else, remember this: Low priority doesn’t mean unnecessary!
Look at the code, not at the person
If the title wasn’t clear enough, let me unpack it quickly. You are part of a team, regardless of being an intern or a senior developer. As part of said team, you and your colleagues are working towards the same goal. We all just really want to write clean code, efficient code and high quality code. However, I think it’s fair to assume that nobody does that all the time.
This is when comments start popping up in your code reviews or, shifting perspective, this is when you start writing feedback on code reviews. Please keep in mind that the emphasis here is placed on “feedback” and “code”.
It’s important to understand the two aspects above. You should not review or evaluate the code based on the person who submitted it. All code must be treated fairly, regardless of it being written by an intern or a senior developer. Try to briefly detach these two entities (the code, the person) and I guarantee you the whole experience will get better.
By doing this, you will manage to avoid two common issues with code reviews. First, we tend to automatically trust the code submitted by our colleagues who seem to have more experience than us — not good. Second, we tend to treat more harshly the code submitted by our colleagues who have less experience than us — also not good.
Self review
Say you just finished a task, tested it one last time and finally opened a code review. It’s time to move on, get a cup of coffee and move to the next feature. Right? Ehm, hold on for just a second there.
I believe we all noticed that the code inside a code review is slightly different from the one you worked on. First, all of these pretty colors, the carefully arranged indentation and the quick navigation options are not there. Secondly, most of the context of the files, all the stuff that’s around your feature, is not highlighted inside the code review. Lastly, you don’t remember writing so much, do you?
Well, this is where self review comes into play.
I’m not against coffee breaks, don’t get me wrong. In fact, I think you deserve it. However, when that’s over, let’s have another look at the code and see how it looks outside your local environment. Does it still make sense? Do you think the scope is clear to everyone? Are you sure that going through it won’t give your colleagues a headache?
There will be code reviews where the answer to all of these questions is “Yes”. How about the other ones, though?
For all the other ones, there are some steps we can take.
First, don’t be shy to add comments or clarifications on your own code review. You can provide some insights on the scope, ask for special attention to some parts, explain some complex lines and even describe your decision process. It may take you a little bit of time but I assure you, it is well worth it.
Second, attach screenshots or relevant files. This is most useful if you’re working on some UI or frontend features. I think it’s fair to say most developers don’t have a CSS or HTML interpreter embedded in their brain. Having a visual aid to help them navigate your styling choices can save everyone a lot of time.
Third, make sure you indicate that some files can be omitted from the review process. Most modern programming languages have a lot of stuff pregenerated for you. It could be the actual migration code or a snapshot of the context. It can be the package list or even some csv reports. The code you write should be the target of the review, not the stuff that’s generated from it. Make sure you indicate that clearly as sometimes, it may not be obvious for everyone.
That wasn’t so hard, now, was it?
Thank you for your time. Part two will be out soon.
Until then, happy coding !
|
https://medium.com/@tarpescu-marian/some-pointers-on-code-reviews-part-1-4828ea40eb97
|
['Marian V. T.']
|
2021-06-17 12:24:36.428000+00:00
|
['Development', 'Programming', 'Developer Tools', 'Advice', 'Code Review']
|
Bash Speedup
|
The other day I had to write a shell script that generates 5637 queries crammed into a file. The goal was to run these queries against a database and fix discrepancies between what is and what should be. Anyway, this article is not about queries and databases, but about the raw power of Bash and why it makes sense to rethink the process before you go down the beaten path.
The situation
The input emanates from a file that contains precisely 5637 value pairs.
They look like this:
Value1.XYZ Value2.XYZ
The first field contains the current value stored in the database. The second field defines its new name. So, I created a query template and a loop that parses all the lines and replaces the patterns “==VALUE1==” and “==VALUE2==” with Value1.XYZ and Value2.XYZ. The template is reused in every iteration of the loop:
[==TITLE==]
...
ROOT = DB/DO_SOMETHING(NEW_VALUE_NAME="==VALUE2==")
...
IMPORTANT_FIELD = DB_NAME::==VALUE1== 0
...
The template was way bigger, but for the sake of discretion and because it could be potentially distracting, I commented all those lines out. And yes, there is the field “==TITLE==”, which must also be unique and has to be replaced with something like Value1Value2.
The final query file that contains all those subqueries will have 90205 lines.
first attempt
Since that was only a one-time task, I honestly did not really think about performance. Since we already have a template, a loop, and some replacements involved, I immediately took sed as the tool of choice, and the subsequent outcome looked similar to this:
template=template.tpl
workcopy=workcopy.tmp
output=query.out
while read -r line
do
array=(${line})
value1=${array[0]}
value2=${array[1]}
cp ${template} ${workcopy}
sed -i s/==TITLE==/${value1}${value2}/g ${workcopy}
sed -i s/==VALUE1==/${value1}/g ${workcopy}
sed -i s/==VALUE2==/${value2}/g ${workcopy}
cat ${workcopy} >> ${output}
done < inputfile.txt
When I ran the code snippet in my test VM, “time” returned the following measurements:
real 22m52.692s
user 0m33.331s
sys 1m47.344s
Wow, that took a long time. Almost 23 minutes. That script will run for all 20 or so DB instances, so that is a lot of time to wait. As such, I decided to make some improvements.
second attempt
I realized that in my first attempt I copied the template 5637 times into memory and then again to the filesystem. For every sed call, the workcopy had to be read into memory, the changes computed, and the result written back, and that happened 3 times per iteration. Then the workcopy was read from the filesystem once more, copied into memory, and appended to the existing output file (which itself probably had to be read into memory, modified, and written back to the filesystem). Not to mention that sed itself needs to be loaded into memory. In other words: a lot of io.
So I had to get rid of all the unnecessary io, and I focused first on copying the template. I thought I would append the template right to the output file and replace the “==XX==” fields with the correct values. That was a horrible idea, and you will soon see why. Here is the code:
template=template.tpl
workcopy=workcopy.tmp
output=query.out
while read -r line
do
array=(${line})
value1=${array[0]}
value2=${array[1]}
cat ${template} >> ${output}
sed -i s/==TITLE==/${value1}${value2}/g ${output}
sed -i s/==VALUE1==/${value1}/g ${output}
sed -i s/==VALUE2==/${value2}/g ${output}
done < inputfile.txt
The result was:
real 74m30.974s
user 13m8.227s
sys 46m4.676s
74 minutes was definitely no option. But what was happening here? Yes, I got rid of the unnecessary copying of the template. But to what end! Now I am copying the whole output file again and again from the filesystem to the memory and back. And with each iteration I add more and more read/write operations. In other words, this will never scale. With a big enough input file, this can break the whole system. So I went back to the start and decided to only optimize the sed calls.
third attempt
Removing the workcopy from the process was a bad idea. But there is still the part where sed is called 3 times, causing even more read/write operations. So I changed this part:
sed -i s/==TITLE==/${value1}${value2}/g ${workcopy}
sed -i s/==VALUE1==/${value1}/g ${workcopy}
sed -i s/==VALUE2==/${value2}/g ${workcopy}
to:
template=template.tpl
workcopy=workcopy.tmp
output=query.out
while read -r line
do
array=(${line})
value1=${array[0]}
value2=${array[1]}
cp ${template} ${workcopy}
sed -i "s/==TITLE==/${value1}${value2}/g;
s/==VALUE1==/${value1}/g;
s/==VALUE2==/${value2}/g" ${workcopy}
cat ${workcopy} >> ${output}
done < inputfile.txt
Again, the template is copied in and out of the memory, but by optimizing sed, I reduced at least 3 read/write operations to 1. And that ended in a much better result than the first attempt:
real 14m26.738s
user 0m20.367s
sys 1m0.661s
From almost 23 minutes down to under 15. That was not bad, but still not good. So why not execute all the computations in memory? Does Bash have the ability to do that? Can I even get rid of external tools like sed or awk?
fourth attempt
The short answer is: yes, Bash can do that and does an outstanding job at it.
Variables can be filled with the content of a file, and even substitutions can be done with a simple line of code. But there are a few things to mention!
Preserving the structure
If I echo a variable without double quotes, Bash does not print things like newlines. Instead of having a structured text, I will end up with a word salad if I don’t put the variable in quotes.
The second thing is that Bash truncates leading and trailing newlines. If you read from a file and you append the same text repeatedly to the existing text, add not only a newline at the end of your template but also a blank space; Bash will not cut that space. So my template was slightly changed, which I saw as the easiest way to get around the trailing newline problem:
[==TITLE==]
...
ROOT = DB/DO_SOMETHING(NEW_VALUE_NAME="==VALUE2==")
...
IMPORTANT_FIELD = DB_NAME::==VALUE1== 0
...
Now to the code. I got rid of my workcopy and the sed call. I copied the template to the variable $template and with each iteration I made a copy into the memory as variable $tempvar, which I used for the substitutions:
template=$(cat template.tpl)
output=query.out
workcopy=""

while read -r line
do
  array=(${line})
  value1=${array[0]}
  value2=${array[1]}
  tempvar="$template"
  # ${var//pattern/replacement} replaces ALL occurrences, matching sed's g flag
  tempvar=${tempvar//==TITLE==/${value1}${value2}}
  tempvar=${tempvar//==VALUE1==/${value1}}
  tempvar=${tempvar//==VALUE2==/${value2}}
  workcopy+="${tempvar}"
done < inputfile.txt

echo "$workcopy" >> ${output}
The result was overwhelming:
real 0m6.655s
user 0m2.883s
sys 0m3.272s
6 seconds instead of 23 minutes. That is awesome.
Conclusion
You might ask: “why all the fuss and not use Python or Perl instead?”. And I agree: I am a huge fan of both languages, but sometimes all that is needed is a Shell script. And for that, Shell can provide some decent tools that give you quite a remarkable speed.
If you stayed with me until this point, I want to thank you, and I hope this article might help you one day and save you some time!
|
https://medium.com/swlh/bash-speedup-14179a2dba68
|
['Pascal Thalmann']
|
2021-01-03 17:08:10.458000+00:00
|
['Computer Science', 'Bash', 'Programming Languages', 'Programming', 'Linux']
|
Community Airdrop (2nd Airdrop Event)
|
Hello from Bezant!
We are thrilled to share with you another Community Airdrop Event. The below are the details of this event.
1. Total Airdrop Reward : 1,000,000 BEP-2 BZNT-464 Token
2. Duration : August 12th (Monday) 18:00 [KST, UTC+9] — August 19th (Monday) 18:00 [KST, UTC+9]
3. Event Details
a. This event is to encourage community activity in Bezant’s Channels.
The event will be done on a “first-come first-serve” basis; all the reward distributions will be distributed at once.
Participants should participate through the mission below.
Participants can choose one or more of the three missions listed below
b. How to Participate (only the activities that are done within the event period will count)
Mission 1. Join the Official Bezant Telegram Channel : 200 BZNT(members who leave during the event will not be counted)
Mission 2. Follow Bezant Official Twitter account, and ‘retweet with comment’ the following three posts from the official Bezant Twitter Account : 300 BZNT
Mission 3. Participate in a Bezant Trivia based on the medium posts : 500 BZNT (must solve ALL of the 5 questions correctly)
4. How to Register? (You must register to win airdrop rewards!)
a. Click on the “Community Airdrop” menu on Bezant’s website
b. Fill in the Google form
Mission
Join Telegram Mission : input telegram ID and date joined Twitter Post Retweet Mission : Twitter ID, Links to the retweets Bezant Trivia Mission : Fill in the answer to the questions.
Binance DEX wallet address
c. Submit the Google form
Leave blank the missions you haven't participated in.
Once you submit, you can input the information from the additional missions that you have participated in. (However, you cannot use the same Binance DEX wallet address that you have previously used)
5. Airdrop Date : August 23rd (Friday), 18:00 (KST, UTC+9)
6. Notice
The Airdrop event can be ended earlier than the prescribed date.
Airdrop Distribution time can be subject to delay based on the network congestion.
Follow our official channels for the latest updates
|
https://medium.com/bezant/community-airdrop-2nd-airdrop-event-25184784fa6b
|
[]
|
2019-08-12 07:32:01.430000+00:00
|
['Bezantium', 'Bznt', 'Bezant', 'Blockchain', 'Binance']
|
How To Reduce Brain Fog and Improve Clear Thinking
|
What Causes Brain Fog?
Brain fog is an elusive condition and its symptoms are easily mistaken for a whole variety of illnesses. Some of these illnesses have brain fog as a symptom! But it can also come from various lifestyle problems.
Luckily, this phenomenon is not irreversible. Here’s what could be causing it and how to kick it to the curb.
1. Lack of oxygen
Sinusitis — an inflammation of the tissue lining the sinuses — causes swelling of the nasal crevices, applying pressure to the eyes and forehead.
As our nasal airways are blocked, the only way to supply our lungs with oxygen is by breathing through the mouth. Unfortunately, that oxygen is not being filtered, further decreasing the quality of the oxygen supplied to our brain.
Without oxygen, our brain performance decreases, resulting in cognitive dysfunction and difficulty concentrating.
Solution:
Although you can relieve sinusitis symptoms with some home remedies or over-the-counter sprays, treating chronic sinusitis requires expert advice.
Home treatment includes using a humidifier, inhaling steam vapors, or rinsing your nasal passages.
2. Medications
Certain medications can impact our cognitive health. Benzodiazepines — a class of medications encompassing lorazepam, diazepam, temazepam, alprazolam — are often prescribed to patients suffering from insomnia or anxiety because of their calming properties.
However, these medications have been linked to “impairment[s] in several cognitive domains, such as visuospatial ability, speed of processing, and verbal learning” (Stewart 2005). All of these cognitive deficits are characteristic of brain fog.
Other medications that induce similar side effects include non-benzodiazepine prescription sedatives (zolpidem, zaleplon, and eszopiclone), anticholinergics (brand names: Atropen, Cogentin, Cyclogyl, Enablex, Toviaz, Urispas, etc.) and mood-stabilizers (lithium).
Solution:
If it’s your physician that prescribed any of these medications, talk with them and ask for a change of prescription. Be honest about the side-effects and explain that the medications are impeding your functioning.
If you’re using non-prescription medication of any kind and are experiencing an uptick in brain fog symptoms, talk to your doctor or pharmacist.
3. Sleep problems
According to research, 25 percent of Americans experience acute insomnia each year and according to the Centers for Disease Control and Prevention, 1 in 3 adults don’t get enough sleep (seven-plus hours per night).
Sleep deprivation is known to influence attention and vigilance (Alhola & Polo-Cantola 2007), working memory, psychomotor and cognitive speed (Goel et al. 2009).
As such, sleep deprivation is one of the leading causes of brain fog.
Solution:
Get enough sleep! Make sure to go to bed on time and have some quality rest. Dim your bedroom, avoid using social media before bed, optimize your bedroom temperature (it should be colder than the rest of the house), and don’t snack or drink alcohol late in the evening.
4. Allergies
Similar to sinusitis, allergies cause nasal congestion by releasing cytokines — proteins that help our bodies fight foreign substances (allergens in this case). These proteins cause the inflammation in our nose, narrowing the airways and decreasing the amount and quality of oxygen coming to our brain.
Solution:
Protect your indoor air by keeping your windows and doors shut during pollen season. You can even consider purchasing air filters. When outside, wear a protective mask and wash your face as soon as you get home.
Lastly, schedule an allergy skin test and determine your triggers. Perhaps it’s really easy to avoid them. If not, consult your physician about the best antihistamine medicines you can use to treat the allergy.
5. A thyroid condition
Hyperthyroidism occurs when the thyroid gland becomes overactive and starts producing too many thyroid hormones. Studies demonstrate that “brain development is much more sensitive to thyroid hormone excess or deficit than previously thought” (Zoeller et al. 2002). It is likewise evident that there is a “link between [thyroid hormones] and cognitive processes that are mediated primarily by the frontal cortex, areas associated with executive function tasks.” (Grigorova & Sherwin 2012)
Solution:
If you suspect there might be a problem with your thyroid gland, you should visit a doctor immediately. The symptoms of thyroid dysfunction may vary from person to person but a blood test will certainly provide the most accurate diagnosis.
Once you start taking treatment, you’ll probably feel your brain fog symptoms diminishing. You can also introduce changes to your lifestyle: cut down on sugar, eat selenium-rich foods (tuna and turkey) or food packed with vitamin B-12 (beans, peas, and eggs).
In any case, you should always talk with your doctor about the proper treatment plan.
6. Inactivity
Physical exercise not only inflates our muscles but it also makes our brain bigger. A study done at the University of British Columbia discovered that “aerobic exercise training increases the size of the anterior hippocampus, leading to improvements in spatial memory.” (Erickson et al. 2011)
Inactivity, on the other hand, shrinks and weakens the brain, dampening its performance.
Solution:
Get moving! Search for ideas on how to get healthier and start exercising. You don’t have to go to the gym or start running, though. Long walks will suffice.
Brisk walking was shown to have tremendous benefits for our mental health. Hippocampal atrophy is a key biomarker in the preclinical stages of Alzheimer’s disease, and physical activity “has been shown to be associated with hippocampal volume; specifically increased aerobic activity and fitness may have a positive effect on the size of the hippocampus.” (Varma et al. 2015)
According to Harvard Health, you should aim for walking at least 30 to 45 minutes a day, preferably briskly so that you can feel your heart rate elevating a bit.
7. Dehydration
Our bodies are 60% water and our brain a whopping 75%. That means that even the slightest hint of dehydration negatively affects brain performance.
One study determined that mild dehydration “impaired vigilance and working memory and increased anxiety and fatigue,” while another study revealed that 36 hours of water deprivation had negative effects on the participants’ vigor, cognitive performance, short-term memory, and attention span (Na Zhang et al. 2019).
Solution:
Amp up the fluids. You can stick with the tried and trusted method of 8 glasses a day. But the amount of water you need will largely depend on your activities throughout the day — if you’re more active, you’ll need more water.
8. Food intolerance
If you google the symptoms of food intolerance, you’ll discover that they encompass nausea, headaches, nervousness, and irritability. Sounds familiar?
If you’re experiencing brain fog, there’s a chance you can blame it on certain foods.
Solution:
If you suspect you’re intolerant to some foods, your physician will probably perform a blood test or elimination diet. The usual suspects are dairy, gluten, and caffeine, and starting a diet that eliminates these may yield results. If not, a thorough medical exam will solve the mystery.
|
https://medium.com/wholistique/how-to-reduce-brain-fog-and-improve-clear-thinking-1b1f81b224ba
|
['Eric Sangerma']
|
2020-09-23 11:05:07.851000+00:00
|
['Sleep', 'Self', 'Productivity', 'Brain Fog', 'Mental Health']
|
Giving Text an Inner Shadow with ImageMagick and Perl
|
Giving Text an Inner Shadow with ImageMagick and Perl
Creating a CGI script that composites text with fancy effects onto an existing image is easier than you think
Image licensed from Bigstock
My memoir, The Accidental Terrorist, is about my youthful misadventures as a Mormon missionary. Missionaries always wear black name tags, so to promote my book I thought it would be nice to give fans a way to create and share their own customized name tag images.
To accomplish this, I figured a simple CGI script written in Perl would be best. I had a vague sense that I could use the Perl interface to ImageMagick to overlay a name in bold white text onto a blank name tag image like this one:
Blank name tag graphic (image-magick-step-1.jpg)
What’s more, I wanted the name to look like it had actually been stamped or drilled into the name tag, with maybe a slightly pebbled white surface to give things a nice feeling of texture.
I had used ImageMagick before for some simple applications, and I knew it was a very powerful graphics-processing package. However, it’s also very arcane, without much in the way of user-friendly documentation. (Oh, there’s plenty of documentation. It just helps to be fluent already in graphics-processing-ese to understand it.) Stack Overflow, to name just one forum, overflows with questions about how to do this or that with ImageMagick.
I scoured the web for an answer to what I thought was my very simple question about how to make an inner shadow, but I came up empty. Finally, all I could do was start playing around until I figured it out for myself.
I did figure it out, and I’ll lay out my method below in case there’s anyone else out there looking for an answer to the same question. I’m not claiming this is the best solution; in fact, I’m sure there’s probably some fiendishly clever way to do this in ImageMagick with a single convoluted command. Me, though, I like to take things step by step so I can easily see what’s happening at every point and why.
Having said that, my method is pretty straightforward, though a few of the details are a little tricky. We’ll start with the declarations, initializing a bunch of variables we’ll need later (some of which we can futz around with to adjust our output):
That all should be pretty self-explanatory, though we’ll talk more about some of these variables below.
Next, we declare a couple of Image::Magick objects and load them with, respectively, the blank name tag graphic from above and the pebbled texture graphic below:
Pebbled texture graphic (image-magick-step-2.jpg)
So far, so good. But before we actually try to print any text on either of these images, we need to gather some information about the text itself — specifically, how wide it will be when rendered:
QueryFontMetrics is a method of Image::Magick that, when passed some text descriptors, returns an array of stats about how that text will be rendered. The only return value we're interested in for our purposes here is $width , which will help us center the text properly.
Our variables $startx and $starty describe the point on the name tag around which we'll center the text. Knowing the width of the text, we can easily calculate where the upper left corner will need to fall:
If we wanted to center the text vertically as well, we could calculate that from the $height value, but in this case, we only need to know where the top edge of the text will fall.
Now we start getting to the interesting stuff. Our next step is to construct a mask, which is a grayscale image used as a filter when compositing one image onto another. The black parts of a mask will render the composited layer transparent, while the white parts will render it opaque. The levels of gray in between provide varying degrees of opacity.
I find it a little difficult to think of masks in those terms, though. It might be simpler to think of a mask as a stencil. You can lay your stencil down on the base layer of the image you want to composite, then sort of “spray-paint” your top layer through it.
You’ll see what I mean after a couple more steps. For now, we’re going to create our mask image by initializing a new Image::Magick object, filling it with black, and then printing our (properly positioned) text on it in white:
The chunk of code above results in the following image:
Our mask layer, stored in the $mask object
See, doesn’t that look like a stencil? We’ll be using this mask in our final step to spray bits of one image onto another while blocking out other bits.
Okay, now we’re going to construct our shadow. This is what we’ll eventually composite with our text layer to give our name tag the 3-D look that we want. To create this shadow, we need to construct a new image that looks a lot like a mask but really isn’t.
The process is very similar to making our mask above. We want our shadow to be shaped like our text, so we again build an image with white text on a black background (though we could just as easily use a brown or purple background, or anything else we feel like):
But this time we do two things differently. We offset the text a little, in this case moving it down vertically by two pixels. Then we apply a Gaussian blur effect to the image, using a couple of variables that affect the degree to which the image gets blurred (play around with those values to see what happens). This gives us the following result:
Our shadow layer, stored in the $shadow object
Like I said, while this looks very similar to our mask image, it’s not exactly the same sort of thing. What we’re going to do with it — and this is where the magic really starts to happen — is layer a translucent version of it on top of our texture image. The code to do this is very simple:
And that gives us the following image:
Our composite shadow/texture layer, now stored in the $texture object
We now have a composite image that looks like bright fuzzy letters projected onto a pebbled charcoal wall. The fact that the texture is only faintly visible is the result of our $opacity parameter, which we could easily dial up or down, depending on the effect we wanted.
Now we’re ready for the final step. We take that stencil from way back and spray our composite shadow layer through it onto our original blank name tag:
We write the result to the file system, and voilà! Here’s our final image, looking quite fine:
Our final composite image (image-magick-step-6.jpg)
There’s no doubt a way to do this in fewer steps, but what we have here was certainly acceptable for my purposes and not all that difficult.
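If you want to experiment with the same stencil idea outside Perl, here is a rough Python/Pillow sketch of the pipeline (my own illustration with made-up sizes, text, and stand-in images — not the author’s code): build a text mask, build a blurred offset shadow, light the texture through the shadow, then composite through the mask.

```python
from PIL import Image, ImageDraw, ImageFilter

W, H = 400, 150
base = Image.new("RGB", (W, H), "black")             # stand-in for the blank tag
texture = Image.new("RGB", (W, H), (200, 200, 200))  # stand-in for the pebbled texture

# 1. The mask ("stencil"): white text on black
mask = Image.new("L", (W, H), 0)
ImageDraw.Draw(mask).text((150, 70), "NAME HERE", fill=255)

# 2. The shadow: the same text, offset down 2px and Gaussian-blurred
shadow = Image.new("L", (W, H), 0)
ImageDraw.Draw(shadow).text((150, 72), "NAME HERE", fill=255)
shadow = shadow.filter(ImageFilter.GaussianBlur(2))

# 3. "Project" the shadow onto the texture at roughly 85% opacity
white = Image.new("RGB", (W, H), "white")
lit = Image.composite(white, texture, shadow.point(lambda p: int(p * 0.85)))

# 4. Spray the lit texture through the stencil onto the base image
result = Image.composite(lit, base, mask)
result.save("tag-demo.png")
```

Swap in real images for `base` and `texture` and a TrueType font via `ImageFont.truetype` to get closer to the name-tag effect above.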
If you try this code out with your own images, I’d suggest spending time playing around with the values of the initial parameters, and with different colors for the shadow layer. You might be surprised what you end up with!
Hellfire bevel: $offsetx = -2, $offsety = -2, $sigma = 2, $opacity = ‘85%’, $shadow->ReadImage( ‘canvas:brown’ )
In the end, my script was a little more complicated than what I’ve presented here, giving users a way to input a name and also choose from different image sizes with various slogans. But the code above is where the magic (or rather, the Magick!) all happens.
Resources
|
https://medium.com/better-programming/giving-text-an-inner-shadow-with-imagemagick-and-perl-d8efd83affb8
|
['William Shunn']
|
2019-06-19 17:07:55.715000+00:00
|
['Design', 'Image Processing', 'Perl', 'Programming']
|
Do not go Agile!
|
Photo by Clemens van Lay on Unsplash
Do not go Agile!
Just don’t do it. Spend your dollars on something way more valuable, like a cool team activity or a new conference tool for all the teams to use in this remote working environment. Consider this tip the ‘money maker’ of 2020. It’s been a tough year, so don’t overcomplicate it with this agile thing.
We are successful… right?
Recently I had a very interesting conversation with the Chief Transformation Officer at a large corporate organisation here in New Zealand. This organisation has started its agile transformation over a year ago. This same organisation is now questioning if the change has actually created any organisational value compared to the way they worked before.
A similar comment was echoed during a meetup that I attended a few weeks ago. The organisation that hosted the meetup was sharing the success story of its agile transformation. When someone asked how they were measuring the success of the actual transformation, their response was that they were not measuring outcomes (a.k.a. value). That was part of ‘phase two’ of the transformation.
In both situations, I wondered how organisations spend lots of time and effort, let alone the investment of thousands of dollars on a transformation, without being sure there will be benefits. Isn’t it the purpose of being agile, that you collect feedback to check if you are doing the right thing? If so, why is this lack of visibility on value and/or benefits happening?
‘Agile way of working’
I am currently writing a book (together with my colleague) about why organisations so often fail when implementing any sort of change initiative. For this book, we have spent many hours researching and analysing transformations. During that research, we have found that many unsuccessful change initiatives come from an imbalance between driver, actions and belief. When we look specifically at change initiatives connected to ‘agile’, there is another reason: the ‘agile way of working’ implementation.
Many organisations look at agile as a way of working. They pick a framework, get the roles in place (like a Product Owner and Tribe Lead) and work via iterations. Often, they measure indicators like employee satisfaction and delivery outputs. This is not a problem per se as these elements can be important to measure.
The risk is, that it tells us how well we are executing a plan but not if the plan is actually the right plan to execute. We need to measure the why, what and the how.
Your future proof organisation
This brings me back to my suggestion, that you shouldn’t ‘go agile’. If you believe that ‘agile’ is a way of working and it will be successful by implementing Scrum or Kanban (or any other framework), don’t go there. Agile has never been a ‘way of working’. It is always about mindset; a way of thinking and being. The frameworks within agile are designed to support people and teams to adopt the mindset. Agile changes behaviour, thinking, responsibilities and collaboration. The magic and benefits come from realizing this and supporting the culture change in your organisation.
What has been seen cannot be unseen, what has been learned cannot be unknown. — C.A. Woolf
For now, it is called agile and maybe in a few months it will be called ‘Human-Centred Design’. Tomorrow will be the future and the past. Adapting to the future is therefore a never-ending thing. Changing towards an agile organisation is not something you can start and then go back from. It is a way of thinking and we cannot say to our employees ‘thanks for joining the pilot but now go back to the way you were thinking before this.’ Agile is a journey, as with any journey, you don’t reach your destination immediately by pressing ‘GO’.
|
https://medium.com/@marcellakoopman/do-not-go-agile-54734e779df3
|
['Marcella Koopman']
|
2020-12-14 22:22:07.072000+00:00
|
['Organizational Culture', 'Change', 'Agile', 'Scrum', 'Journey']
|
Exploring Cycles in data.
|
Exploring Cycles in data.
We are full of cycles.
Cycles are part of life, nature, and perhaps some data you might encounter. By cycles we mean that events repeat themselves in time and space with a certain periodicity.
If you live on planet earth, you experience the day/night cycle every day: it gets dark and colder for roughly one third of the day, then light and warm for the rest of it, and this series of events repeats itself over a period we call a day.
Seasons are another type of cycle: it is cold for a number of days, then warmer, and this repeats itself over a longer period of time. Life and death are yet another example of a cycle, but here the time scale is so large that we usually forget or don’t notice we are part of a greater cycle.
Why study cycles?
By studying cycles, we can then detect them and adapt our behavior to exploit or avoid a certain phase of said cycle, for example if we know that temperatures will be cold and food scarce 6 months from now, we can prepare accordingly.
As mentioned earlier, cycles are everywhere, we biological beings are hardcoded to some aspects of them (days and season), and create our own cycles (sleep/wake, fertility cycles, work/play, etc, etc.), yet the utility of knowing how to identify and describe them extends to other domains…
Consider the problem of cycles in the financial markets and when to invest in them, here the cycles are influenced by known and unknown factors, yet if you want to be a successful investor, you need to be aware of where you are in the cycle, or as one prominent investor puts it:
"Being too far ahead of your time is indistinguishable from being wrong." - Howard Marks ( Oaktree capital )
A cycle in detail.
For starters, let’s look at the simplest of cycles:
And some relevant data points:

+---+----+---+---+----+----+----+----+----+----+
| X |  0 | 4 | 8 | 12 | 16 | 20 | 24 | 28 | 32 |
+---+----+---+---+----+----+----+----+----+----+
| Y | -4 | 0 | 4 |  0 | -4 |  0 |  4 |  0 | -4 |
+---+----+---+---+----+----+----+----+----+----+

Note that the values -4 and 4 repeat themselves over the non-repeating axis 0…32; what we have here are 2 cycles that start and end at (0,-4) with a length of 16.
Here’s another cycle found all over nature (science and engineering) and is usually referred to as a sine wave :
Sine waves deserve their own separate discussion; for now just note that they provide us with additional ways to talk about cycles and describe their parts.
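As a quick sketch (my own illustration in plain Python, not part of the original article), the standard sine-wave vocabulary maps directly onto code: amplitude scales the wave, frequency sets how many cycles fit in a unit interval, and the period is the frequency’s inverse:

```python
import math

amplitude = 4.0              # peak value, like the -4/+4 cycle above
frequency = 2.0              # cycles per unit of x
period = 1.0 / frequency     # length of one full cycle

xs = [i / 1000 for i in range(1001)]
wave = [amplitude * math.sin(2 * math.pi * frequency * x) for x in xs]

print(max(wave))  # close to +4 (the peak)
print(min(wave))  # close to -4 (the trough)
```

Changing `frequency` squeezes more cycles into the same span without touching the peaks; changing `amplitude` does the opposite.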
But more often than not you will encounter cycles in the raw like these ones:
The axes are left out on purpose so you can hopefully note that there are 2 large full cycles and an incomplete 3rd one. You can identify the first 2 by their peaks and troughs; the 3rd one is longer in length and hasn’t peaked yet…

After noticing these features, we can reveal the mystery data as the Dow Jones Industrial Stock Average (DJIA) from May 1997 to May 2019 (~22 years). These cycles represent the financial ups and downs of millions of people on planet earth during those years.
Detecting cycles
Visually detecting cycles on a chart representing your data is a perfectly valid way to figure out this cycle business. Unfortunately it lacks refinement: if we wanted specific metrics about our cycles, we would be left gesturing at a chart… this cycle is about, hmmm, 2 thumbs wide!
Fortunately, smart people have been tackling cycles in a structured and mathematical fashion, so we can take advantage of that.
I'll explore a common and popular algorithm for cycle detection (Floyd's Tortoise and Hare) but there are a few more if you want to explore them at your own pace, here's a good place to start: https://en.wikipedia.org/wiki/Cycle_detection
Floyd’s Tortoise and Hare
We start with a number sequence (here the cycle is obvious) and place both the tortoise and the hare on the same starting point.
Like in the fable, the Hare is fast and the Tortoise slow: the Hare moves 2 spaces at a time and the Tortoise just 1.
At this rate, if there is a cycle, both the Tortoise and the Hare will eventually meet on the same value 0, thus revealing the cycle 0,4,8,4,0. Simple and elegant, but…
Notes:
(1) This is a very naive explanation of the algorithm (for the sake of clarity). In reality we need to deal with nodes and pointers and implement the algorithm in your language of choice; a good starting point is Python. You will need to learn and implement linked lists first, after that you can add complexity. Here are a few implementations: Rosetta Code: Cycle Detection.
(2) It might not be obvious, but you can now get cycle metrics. Once you have a cycle, you can get the min/max (trough/peak … 0,8) and calculate amplitude; things like frequency and period are also possible once you incorporate pointers (the X axis, which in this example we are omitting, but assume the data is continuous like a time series).
(3) This problem/algorithm has multiple practical applications and is a favorite of coding interviews; it also helps detect infinite loops and cryptographic collisions amongst other uses.
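For readers who want something runnable right away, here is the classic function-iteration form of Floyd’s algorithm in Python (a generic sketch; the dict-based successor function at the bottom is a made-up example, not the exact sequence from the figure). It returns the cycle length and the index where the cycle starts:

```python
def floyd(f, x0):
    """Detect a cycle in the sequence x0, f(x0), f(f(x0)), ...
    Returns (lam, mu): cycle length and index of the cycle's first element."""
    # Phase 1: hare moves twice as fast until both meet somewhere inside the cycle
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))

    # Phase 2: restart the tortoise; they now meet at the cycle's first element
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1

    # Phase 3: walk the hare once around the cycle to measure its length
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return lam, mu

# A successor function with a tail (1 -> 0) entering the cycle 0 -> 4 -> 8 -> 0
step = {1: 0, 0: 4, 4: 8, 8: 0}.get
print(floyd(step, 1))  # → (3, 1): a cycle of length 3 starting at index 1
```

For a time series you would wrap your data in a successor function (or linked list) the same way, then derive period and amplitude from the detected cycle.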
Advanced Cycleology
The world of cycles is vast, depending on your specific needs and project it might be convenient to create your own research path or analysis, that’s not to say that there are more advanced ways to look at cycles in data and corresponding techniques and tools, here are a few rabbit holes you might want to consider…
Fourier Analysis: If it’s a natural phenomenon (and what isn’t), chances are it has a frequency (see the illustration on sine waves). Fourier analysis involves breaking down or extracting those frequencies and finding functions that recreate them (once more a gross simplification), the idea being that by reversing the process you can generate a new time series with your own variables.
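To make that slightly less abstract, here is a minimal sketch (a naive pure-Python DFT of my own, chosen to avoid dependencies; in practice you would reach for numpy.fft or scipy): we sample a signal with a known 8-sample cycle and recover it as the strongest frequency bin.

```python
import cmath
import math

N = 64
# A signal with one dominant cycle: period 8 samples -> frequency bin N/8 = 8
signal = [3.0 * math.cos(2 * math.pi * n / 8) for n in range(N)]

def dft(x):
    """Naive O(N^2) discrete Fourier transform -- fine for a small demo."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts))
            for k in range(n_pts)]

spectrum = dft(signal)
# Strongest bin in the positive-frequency half (skip the DC bin at k=0)
k = max(range(1, N // 2), key=lambda i: abs(spectrum[i]))
print(k, N // k)  # → 8 8: the dominant bin, whose period is 8 samples
```

Reversing the process (an inverse transform of selected bins) is what lets you resynthesize or forecast the cyclical component.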
Hilbert-Huang & Wavelet transforms: 2 additional ways to decompose signals; each takes a different approach than Fourier analysis and might work better depending on the type and availability of data.
Forecasting:
Forecasting is a heavy subject (check the notes below). Once you have figured out that your data has a cyclical component and are done quantifying it, you also need to figure out if and when it will repeat itself. Will it follow a trend? Up, down, sideways? What is driving the cycle, and what makes you so sure it will repeat itself forever?
These questions require not only some knowledge about cycles and the math & algorithms to recreate them in the future, but more importantly also knowledge of the subject you are forecasting to gain insights about future behavior and the underlying reasons that drive it.
For instance, a cycle in biology will sooner or later come to an abrupt stop when the organism dies; a cyclical seasonal trend (think holiday sales) can be disrupted by new technology or like we will see an external factor can also affect a cycle, context here is king, let’s say you encounter the following unlabeled data/chart…
Without context we can make the reasonable observation that there are cycles and we can forecast the next one quite comfortably, here’s what actually happened along with the missing context…
With the context restored, actual data and the previous observations, we can now realize that the current cycle is not behaving in a normal or expected way, we can then look for possible causes and gain clarity.
|
https://towardsdatascience.com/exploring-cycles-in-data-a1746fb19735
|
['Keno Leon']
|
2019-07-24 03:50:07.152000+00:00
|
['Cycles', 'Data Science', 'Forecasting', 'Data Visualization', 'Finance']
|
21 Hot Cryptocurrency Apps for 2019
|
Crypto-currencies are virtual, private currencies that allow their holders to purchase certain types of goods or services without having a bank account.
The first crypto-currencies were created soon after the advent of the Internet, but their use only developed with Bitcoin, designed by Satoshi Nakamoto in 2009. Bitcoin records nearly 80 transactions per minute worldwide. Nevertheless, this figure remains far from the number of transactions made with Visa and Mastercard payment cards, on the order of 100,000 transactions per minute each.
The crypto-asset ecosystem is increasingly attracting public scrutiny over how its use can be diverted. The anonymity guaranteed by the absence of an account can encourage practices related to money laundering or other criminal activities such as the financing of terrorism. However, this surveillance comes up against the desire to encourage technological innovation, of which crypto-currencies have been one of the main representatives in recent years.
In this article I am going to list some cryptocurrency apps and websites which I personally like. Let's begin!
Hot Apps for Crypto-Investor
When you take your first steps in the world of cryptocurrency, it can be practical to get some smartphone applications that allow effective and permanent management of your funds.
Indeed, you do not always have the means to be on the computer all day, and this is the strong point of applications on mobile phones: they allow you to manage everything remotely, via your smartphone, anywhere anytime!
Today, I will advise you some applications, classified by features, so that you can choose the one that will be most useful depending on what you want to do. These apps are all available on Android and iOS.
Portfolio Applications
They allow you to manage your crypto-currencies and store them in wallets. The most famous are:
Jaxx
With this application, you own your wallet with your own private keys, and this wallet is compatible with several different crypto-currencies, including Bitcoin and Ethereum. Thus, you are totally free in your operations and the management of your funds. In terms of security, Jaxx has some advantages thanks to its recovery-phrase system (12 words!) to restore your wallet. However, you are the only real guarantor of the security of your funds (as always with cryptos), and it is up to you to pay close attention.
Mycelium
With Mycelium Bitcoin Wallet, you can send and receive Bitcoins using your mobile phone. The unparalleled “cold storage” feature allows you to secure 100% of your funds until you are ready to spend them.
Coinomi
Take absolute control of your money and security today with the Coinomi app. Bitcoin, Ethereum and an impressive number of altcoins are available here! Carry your coins safely with this cross-chain hybrid mobile wallet.
FreeWallet
This application is perfect if you need to store small amounts of virtual currencies. You will be able to own several different cryptocurrency portfolios, and the majority of these will be stored offline, in a safe. Rather safe, so! An all-in-one version (in beta) is even available to gather all the portfolios of all cryptocurrencies available on the site (Bitcoin, Ethereum, Bytecoin, Zcash, Monero, Bancor, XDN, Ardor, Steem, Dash, FantomCoin, Tether, Dogecoin, Lisk …)
Coinbase
It’s simply the application of the site Coinbase. According to the users, it works better than the site in times of fall of the course of Bitcoin or high traffic. It will allow you to buy, sell, and trade Bitcoin, Bitcoin Cash, Litecoin and Ethereum, all in a secure way. It is also a way to simply obtain, with the help of your credit card, some cryptocurrencies.
Cryptonator
It is a very practical application that will allow you to have your different cryptocurrencies in your associated portfolios. More than a dozen virtual currencies available, and many possibilities: storage, sending and receipt of funds. Very positive point, you can also make instant exchanges between different crypto-currencies of your portfolios.
Bitcoin wallet from Blockchain.info
It is an application that allows you to have a wallet and send and receive Bitcoins, but also to view transactions on the Bitcoin network. You can also install a PIN code when opening the application, and secure your cryptocurrencies with a “Recovery Phrase” (a kind of recovery phrase).
TabTrader
TabTrader is a trading terminal that lets you connect to and trade on a large number of different exchange platforms from one app; it is presented in more detail in the trading section below.
Trackers
This kind of application allows you to view in real time the price of crypto-currencies on different trading platforms. The big advantage lies in the system of notifications and alerts put in place by the majority of these mobile applications, and this so that you do not miss anything on the variations of the price of your favorite currencies! Most of them allow you to track the status of your portfolio in real time.
Blockfolio
As its name suggests, this tracker app follows investment prices and lets you set price alerts. By indicating only the number of cryptocurrencies you own, you get very useful information on a daily basis: your total balance, buy and sell orders, evolution charts and a lot more. In short, it is a very good way to follow the evolution of your various portfolios.
Ztrader
This is a slightly similar application to Blockfolio, used by a large number of investors. It makes it possible to follow your investments, to consult the price of the various currencies, as well as interesting and very detailed graphs.
Crypto calc
With Crypto Calc, you have at your disposal the price of about 50 different currencies, in Bitcoin and in your usual fiat currency (euros in our case, but any currency of your choice). You can also follow the exchange rates between fiat currencies. It is a very simple application to use, useful for checking current prices in real time and performing conversions.
Bitcoin Price Widget
With this application you will be able to install widgets directly on your homepage. This will allow you to track the currency of your choice at any time, just by clicking on the widget. Moreover, you can also consult graphs. The application offers a wide variety of crypto-currencies, including Litecoin.
Coincap (Coinmarketcap.io)
This is a relatively simple price-tracking application that performs its function. It is not as practical and detailed as the ones I presented above, but it can help you out without any problem. This is the official application of the site Coinmarketcap.io, which is not bad when you start, but happens to be quite limited for more advanced users. Note that the charts are very basic and prices are not always updated in real time!
Delta
Delta is an ultimate electronic currency wallet tracking tool available on iOS and Android. Manage all your currencies from this App, including Bitcoin, Ethereum, Litecoin and over 2000 other alts.
For Traders
These applications will allow transactions and exchanges of crypto-currencies, and of course to manage your existing portfolios.
Tradingview
The official application of the site tradingview.com that allows you to share your analyzes and discover those of many professional traders!
Tabtrader
I’ve already presented it in more detail in this article. This application allows you to trade on a large number of different trading platforms, including Kraken, Coinbase (GDAX), Bitstamp, ANXPRO, HitBtc, BTC-E, BTCChina, Huobi, ItBit, Bitbay, Bter, Bitfinex, Bitmarket, Gatecoin , Bluetrade, QUOINE, Bittrex, BL3P (Bitonic), Polonyx, EXMO, Gemini, Vaultoro, Mercado Bitcoin.
Bitfinex and CEX.io
These are the applications of exchange sites with the same name. You will find the main features of these same sites.
My Bittrex Wallet
This application, connected to Bittrex, allows you to simply consult and manage your Bittrex wallet at any time on your smartphone, wherever you are.
Binance
The official application of the Binance exchange platform. A good application to consult the different markets available on Binance and connect to your own account!
Cobinhood
The official application of the new Cobinhood exchange, launched just a few months ago and rather promising. The platform recently listed a lot of new coins and is currently (January 2018) the only exchange not charging any trading fees (yes, 0%!). Their application is really well optimized for smartphones and I find their project/coin promising (I bought some personally, for the most curious of you ;-))
To Be Aware Of the News
Bitcoin Map World
The function of this application? Allow you, anywhere in the world, to know all places accepting Bitcoins. Very convenient!
Icoalert / IcoAlarm
This application allows you to be notified of new ICO releases, (ICO meaning Initial Coin Offering, which is a crowdfunding system for the development of a new cryptocurrency or a new Blockchain project). You will be notified of any news related to ICO, as well as all ICO still in progress.
What Else your Can Do with CryptoCurrency
1. Wix.com will soon allow its users to pay their subscription in cryptocurrency through a partnership with the startup PumaPay. A similar option will be available to creators of e-commerce sites in order to accept this type of payment in turn.
2. UNICEF France has indicated that it is now possible to make the donation in crypto-currencies like Bitcoin for various causes supported by the body.
Sébastien Lyon, Executive Director of UNICEF France,
Cryptocurrencies and blockchain technology for charitable purposes offer a new opportunity to appeal to the generosity of the public and continue to develop our actions with children in our country of intervention. It is an innovation in terms of solidarity and fundraising that we are still few to propose, but which tends to become more democratic.
3. Coinbase Commerce: Coinbase announced that WooCommerce users now have access to Coinbase Commerce and can accept Bitcoin and Litecoin. More than 28% of online stores are said to use the popular WordPress plugin.
In a note posted on his blog, the California firm Coinbase explained that the traders who run a shop built on the open source system WordPress with the plugin WooCommerce can now accept the crypto-currency as means of payment.
Coinbase Commerce opened in February 2018 and its first integration took place on Shopify a few days after its release. This solution aims to democratize the settlement in virtual currencies and help merchants to support this method of payment.
Coinbase said that more than 28% of online stores are based on WooCommerce, meaning several million online shop owners could accept cryptocurrency.
I hope I have given you a rather complete overview of the range of applications available for managing your crypto-currencies in 2018 and beyond.
Of course, it is impossible to list all knowing that there is a package now, but it should help you, according to your needs, to find the application that suits you perfectly and you cannot live without!
|
https://medium.com/hackernoon/top-hot-list-cryptocurrency-apps-for-2019-f1e06308dec0
|
['Hardik Patel']
|
2019-07-04 06:11:51.847000+00:00
|
['Bitcoin', 'Online', 'Cryptocurrency', 'Payment Cards', 'Cryptocurrency Apps']
|
My Romantic Vision of Becoming a Freelance Writer
|
My Romantic Vision of Becoming a Freelance Writer
I remember when I accidentally came across this website while browsing the internet for marketing purposes. I found out that Medium has one of the highest domain authorities on the web, so it was worth checking.
The real breakthrough came when I discovered the true purpose of the platform.
I found out that Medium is a platform that gives everyone a chance to express themselves through writing and monetize content not by ads, but by the unusual connection of writer and reader. It’s quite innovative, but the founder- Ev Williams, could be the definition of “innovation”.
When I found out about it, I changed my plans right away, signed up, and on the very first day, I wrote a 2000-word article. I was unaware of the curation system and publications. I was not interested in any trends- I didn’t know anything, I just wanted to write.
All I had was my romantic vision, which I intermediately shared in my journal on an internet forum that I was running at the time. Everything was sublime, exciting, and I was in a state of great fascination, the joy that I had found something I wanted to do. Sounds familiar?
Reality check on “writing game”
Medium is like YouTube for writers. These days YouTube is dominated by professional channels, and very often corporations. It's not that bad on Medium, but the competition is certainly noticeable, as it is the leading platform in the writing niche.
By “noticeable competition,” I mean, I don’t know, thousands, tens of thousands, hundreds of thousands of active people trying to become a “freelance writer”?
The "race" has been running since 2012; it is not a fresh platform. So there are people who have been writing for several years yet still haven't managed to achieve the dream of being a full-time freelance writer. It sounds depressing, but well, that's the reality.
After all, anyone who likes to write would love to sit on laptops for a few hours, write some valuable articles, and get a good salary for that.
No one cares about my passion
The brutal truth is that no one cares that I or anyone else found a passion and started fulfilling their dreams. Literally not a single person cares, not even family. Most people will ask, "Can you make a living from it?"
The times when we accidentally take up the guitar, and everyone was enthusiastic, “oh, maybe he will be a musician,” are gone.
Part of the growing up process is grounding yourself in reality, which often means giving up on your potential and doing something that ensures the safety and decent life.
In this sense, like any other passion, writing is a bit of a struggle with yourself and an attempt to combine it with other duties that are necessary.
Quality content and a sense of unfairness
Well, if you have more than twelve years, you know that the world isn’t fair. But if you are accidentally passionate about something, you may forget that.
You can write a brilliant article that shows your perspective on some important topic, but the thing is, no one cares about your view until you have strong authority.
In this way, you may write better pieces than someone and still be 100x less popular than him, simply because of the lack of authority.
Social life is a race, a "rat race". Everyone has their own, most important life, and the only thing they are looking for in others is value.
Yes, that's why articles presenting lifehacks get the most clicks. Nobody after work wants your 3000-word story or an argument on politics (unless you have authority); everyone is looking for some value for themselves or fun to rest.
You need immense authority to make people read you on some professional topic. Even if you have huge knowledge, it takes some time to gain people’s trust.
|
https://medium.com/@jakemura/my-romantic-vision-of-becoming-a-freelance-writer-fe7a3033d3e6
|
['Jake Mura']
|
2021-02-19 01:04:25.508000+00:00
|
['Writing', 'Writing Journey', 'Writers On Medium', 'Self Publishing', 'Writers On Writing']
|
Makers: Lessons of the First Few Weeks
|
Today is Sunday, and my third week at Makers is officially over. The theme of fast-flying days that arose in week one has stuck around and to know that I’m 1/4 of the way through the main course is slightly scary. On one hand, the days have passed so quickly that it almost feels as though they didn’t even happen. On the other, the amount that I’ve crammed into those days makes them feel like a lifetime.
Below is a representation of what the inside of my brain feels like at the moment:
Makers has a saying: ‘it’s not hard, it’s just new’. And I’m already feeling the truth of it. Content that felt impossibly difficult in week one, now seems like a breeze. Our progression has been amazing and it’s inspired me to try harder in all aspects of life. Turns out that setting smart, achievable goals every morning can do wonders for productivity, and can easily be applied to fitness goals, personal goals, or whatever goals you like.
Anyway! Back to the point.
The main learning curve of weeks one and two was test-driven development. I had been prepared to learn new, Ruby-specific knowledge, but Makers prefers to look at the bigger picture. It’s all about process. It makes sense really, you can learn more about a specific language anytime you want with a quick google search. But developing a methodical process and learning how to break down problems into manageable ones takes practice and guidance and is a lot more valuable in the long-term.
Test-Driven Development (TDD)
Using TDD when writing code essentially just involves following a particular process: RGR, or 'red, green, refactor'.
Red: Write a test, run it, and pay attention to the red error messages that appear as it (predictably) fails.
Green: Write the minimum amount of code required to pass the test.
Refactor: Go back to the code and refactor it. Now that the test already passes, you can easily re-run it as you alter the code to ensure that you don't break anything.
Repeat.
It’s a simple idea, but hard to grasp at first. It feels counter-intuitive. Why write a test and run it when I already know that it’s going to fail? In fact, it didn’t start making sense until the weekend challenge.
Our task was to create a program with Airport objects which could release and land planes depending on the weather. It seems simple enough looking back, but this was the first challenge Makers gave us after taking the training wheels off. We’d been practicing TDD during the week but were given plenty of guidance and hints along the way. For this, we were on our own with the exception of a few user stories.
Looking at the task on Saturday morning, I panicked a little. I knew I could do it, but could I do it well? Starting was tricky. The instructions didn’t seem completely clear and I wasn’t sure which first steps I should take. That’s where TDD came in and it finally clicked.
We were taught to incorporate feature tests into our RGR cycle. A feature test is essentially a test for the features of a program. We implemented them by running a REPL (irb) and attempting to use the programme in the way we hoped it would function once complete.
The cycle essentially became the following: write a feature test and fail it, write a unit test with the aim of producing the same error message as the feature test, then write the code to pass the tests and refactor.
Following this cycle gave me my first steps and eventually, all of the ones after that. Opening irb, I realised that first and foremost, a user must be able to create a new airport. Working from irb, it was obvious that this should come before even thinking about the planes or the weather, and it stopped me from getting ahead of myself.
As you can probably predict, typing ‘airport = Airport.new’ didn’t go so well, and I got my first error message: uninitialized constant Airport.
It followed that the next step was to create a unit test for an Airport class and, when that failed as well, I knew to create the class itself. TDD gives direction.
It’s easy to race ahead when you’re coding, but it’s not helpful. Often, it means running into unpredictable errors or writing overcomplicated code. The above example might sound simple or redundant, but as the project progressed and the code gained complexity, it kept me on the right path and ensured that I was creating a program with exactly the functionality that was required. No more and no less. TDD has plenty of other benefits which will quickly become apparent as the code that we’re writing becomes more and more complex.
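At Makers this exercise was done in Ruby with RSpec, but the TDD idea translates directly. As a hedged sketch in Python (the class, method and test names here are my own invention, not the course's), the airport example might look like:

```python
import unittest


class Airport:
    """Minimal sketch of the airport exercise (names are hypothetical)."""

    def __init__(self, weather=lambda: "sunny"):
        self.planes = []
        # The weather source is injected as a callable so tests can
        # stub out randomness instead of hoping for the right forecast.
        self.weather = weather

    def land(self, plane):
        if self.weather() == "stormy":
            raise RuntimeError("cannot land in a storm")
        self.planes.append(plane)

    def release(self, plane):
        if self.weather() == "stormy":
            raise RuntimeError("cannot take off in a storm")
        self.planes.remove(plane)


class TestAirport(unittest.TestCase):
    def test_lands_plane_in_good_weather(self):
        airport = Airport(weather=lambda: "sunny")
        airport.land("plane")
        self.assertIn("plane", airport.planes)

    def test_refuses_landing_in_storm(self):
        airport = Airport(weather=lambda: "stormy")
        with self.assertRaises(RuntimeError):
            airport.land("plane")
```

Injecting the weather as a callable is one way to keep the stormy-weather branch deterministic in tests, mirroring how the RSpec version would stub the weather object.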
Other Lessons
I won’t go into too much detail about the remaining lessons of the first three weeks, otherwise this will quickly turn into a ten thousand word essay, but here’s a quick summary of the main lessons (this is by no means exhaustive):
Week 1: Intro to TDD and Debugging
Debugging was another subject that I underestimated at first. I knew to have a rough look at the error message and which line the problem seemed to originate from, but I was also guilty of simply trying things, altering code here and there and hoping I would get lucky. Makers taught me that process is everything and gave me a specific mantra to repeat: “Tighten the loop, create visibility”. In more words: look at the type of the error, use the stack trace to work out where the problem is happening in the code and create visibility, or put another way: ‘p everything’.
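The mantra is Ruby-flavoured ('p' is Ruby's inspect-and-print), but the idea carries over directly. A hedged Python sketch of "creating visibility" on a toy function might look like:

```python
# "Tighten the loop, create visibility":
# 1) read the error type, 2) follow the stack trace to the failing line,
# 3) print the intermediate values until the faulty assumption shows up.

def average(numbers):
    total = 0
    for n in numbers:
        print("adding:", repr(n))  # create visibility into each value
        total += n
    return total / len(numbers)
```

Calling `average([])` here would raise a ZeroDivisionError; the printed values and the stack trace together tighten the loop around exactly where the bad input enters.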
Week 2: OO Principles, Process Workshops and Diagramming
Object-oriented principles that we covered included encapsulation, polymorphism and forwarding, among others. We practiced applying each of these with the mini challenges and practicals provided by Makers during the week, but it all came together for most of us during the main weekly and weekend projects. My code has become immeasurably easier to make sense of (for myself as well as new eyes), and I can’t wait until I have time to go back and improve on all of my lengthy Codewars solutions.
Process workshops: these occur weekly, but I attended my first session in week two. After a day of work, we all get together on zoom and pair up to work on challenges one at a time while the other watches, so that we can get feedback on our process. Though kind of nerve-wracking at first, it’s been incredibly helpful (and I’d much rather fight the nerves and get used to being watched while I code now as opposed to later on in, for example, a job interview).
Diagramming: In addition to strictly using TDD when we code, we were taught how to plan properly. Again, this is often underestimated as a skill and people get away with bad planning all the time, but there’s no denying that bad planning will trip you up when you’re least expecting it.
Week 3: Web
This week brought something completely different. We had workshops to teach us the basics of how the web works and by Thursday I was building my own Macronutrients Calculator web application. The weekend challenge of building a Rock, Paper, Scissors web app was my favourite so far. It was clear this week that we’ve already become better learners. We had to quickly become familiar with Sinatra, Capybara, HTTP request types and the MVC pattern and best practices, and I really don’t know if I would have trusted myself to do that so quickly just a few weeks ago.
Above all else, the main lesson of Makers so far is to focus on process. And not just in the coding sense. The workload is immense, and it’s pretty much impossible to complete every practical, workshop and project to perfection. But that’s the point. We are here to learn as much as possible, so Makers throws more at us than we can handle. This way, we can be sure that our potential isn’t being capped.
It’s been an emotional rollercoaster, to say the least. I’m a perfectionist, and it’s not easy to accept that I can’t complete everything. I’ve been overwhelmed several times (thank you Dana, thank you meditation, thank you yoga and thank you reflection sessions for grounding me), but I’m realising that I’m learning at a pace I hadn’t anticipated and I have a lot to be proud of, as do the rest of my cohort. Makers know what they’re doing, and I’m feeling more and more confident that I’m going to leave the course feeling more prepared for the world of work in the coding industry than I had hoped for only a few weeks ago. Until then, I‘m happy to weather the storm.
|
https://blog.makersacademy.com/makers-lessons-of-the-first-few-weeks-fcfc7e1b60ff
|
['Emily-Alice Sesto']
|
2020-10-18 19:05:53.165000+00:00
|
['Codingbootcamp', 'Test Driven Development', 'Self Learning', 'Makers', 'Learning To Code']
|
Football/Soccer Talk, Analysis, Prediction
|
Football/Soccer Talk, Prediction (EPL)
As we move towards the xmas and new years period, it is fortunate we still have football to entertain us. Let’s go ahead and cover the games for 27 December 2020.
Leeds vs Burnley
Interestingly enough, these 2 sides haven't met for the last 3 years; the last meeting was an EFL Cup game that Leeds won on penalties. Obviously things are far more different this time around... or are they?
If you have been following the EPL, you know Leeds games can be very exciting. Last time out they did fall to Man United, but they gave a brave performance nevertheless, as they usually do. It's hard to say how Bielsa will approach this game, with Leeds playing conservative football after a couple of heavy defeats being somewhat of a pattern.
Burnley started their late surge in gaining momentum before January remaining undefeated in the last 4 games. The results have been solid beating Wolves and Gunners in the process.
Leeds may want to test out this Burnley side and perhaps won’t be as conservative in the process and may stick to their high attacking football style. (some may even say reckless!) Both teams should be charging for a win here to push themselves away from relegation zone.
Prediction: Both teams capable of delivering a result here but if in doubt go with goals, after all it is Leeds playing.
West Ham vs Brighton
West Ham got off to a flying start but their last 4 games may have cast some doubt about their form but in fairness the losses were against Chelsea and Man United.
Brighton have been frustrating to watch as they keep making a lot of mistakes when they move forward; they could seal games but never take the opportunity, and games generally end in a draw. In fact, 50% of their games have ended in a draw. Welbeck has provided more creativity going forward, but he is generally limited to a support/assist role; more needs to be done by Brighton's midfielders.
West Ham only kept 2 clean sheets out of 10 games so this should provide encouragement to Brighton to find the back of the net however as things are with Brighton, it’s likely they will make mistakes making life hard for themselves.
Prediction: I am predicting a draw here with both teams to score.
Liverpool v West Brom
History shows West Brom don’t do too bad against Liverpool but history doesn’t win games! (well sometimes it does)
Liverpool running riot at Selhurst Park last time out would have left a lot of Liverpool fans overcome with joy, as some of the Reds' games this year have been somewhat sketchy; regardless, they still close out games. Klopp has been trying to survive with a squad hit by injuries, which may explain Liverpool's incomplete game style. The one difference with this squad is that everyone wants to play hard and be part of the starting 11.
Not sure what Sam can produce with West Brom for the remainder of the season, or if he can even manage to stay there until the end of it... I felt sorry for Slaven Bilic; it's not easy to coach in the EPL. It's honestly not looking good for the Baggies at the moment, and it's probably the worst time for their morale to face Liverpool.
Prediction: All signs are indicating another riot at Anfield. I really don’t see how West Brom can manage this game, Liverpool with their tails up wouldn’t want anything less than a victory. Victory, goals and clean sheet for the reds here.
Wolverhampton vs Tottenham
Historically these 2 sides have traded victories. Since 18/19 the trend has been Spurs, Wolves, Spurs, Wolves, each winning in turn. So does that mean it's Spurs winning again?
Well there you have it, my magical formula above indicates a Spurs victory but perhaps we need to have a closer look.
Wolves have suffered defeats in the hands of Burnley, Aston Villa and Liverpool in the last 4 games but managed a victory against Chelsea. Wolves playing at home has seen cagey games where they generally defend well.
Spurs have averaged 2 goals in away games, which is actually more impressive than their home record. Just recently Spurs beat Stoke in the cup, bouncing back from back-to-back losses.
The games between these sides have seen 3 goals or more, so we may see another goal fest, but Wolves' defense at home has not been bad.
|
https://medium.com/@worldgame/football-soccer-talk-analysis-prediction-abe21eaa5c0b
|
['Sevket Eris']
|
2020-12-27 03:51:32.170000+00:00
|
['Football', 'Bundesliga', 'English Premier League', 'Soccer']
|
Oracle ADF BC REST — Performance Review and Tuning
|
I thought to check how well ADF BC REST scales and how fast it performs. For that reason, I implemented sample ADF BC REST application and executed JMeter stress load test against it. You can access source code for application and JMeter script on my GitHub repository. Application is called Blog Visitor Counter app for a reason — I’m using same app to count blog visitors. This means each time you are accessing blog page — ADF BC REST service is triggered in the background and it logs counter value with timestamp (no personal data).
Application structure is straightforward — ADF BC REST implementation:
When REST service is accessed (GET request is executed) — it creates and commits new row in the background (this is why I like ADF BC REST — you have a lot of power and flexibility in the backend), before returning total logged rows count:
The new row is assigned a counter value from a DB sequence, as well as a timestamp. Both values are calculated in Groovy. Another bonus point for ADF BC REST: besides writing logic in Java, you can do scripting in Groovy, which makes the code simpler:
That's it — the ADF BC REST service is ready to run. You may wonder how I'm accessing it from the blog page. ADF BC REST services, like any other REST service, can be invoked through an HTTP request. In this particular case, I'm calling the GET operation through an Ajax call in JavaScript on the client side. This script is uploaded to the blogger HTML:
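The Ajax snippet itself isn't reproduced here. As a hedged sketch — the endpoint path and the JSON field name below are assumptions, not the actual service contract — the same GET call could be exercised from Python's standard library:

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint; the real ADF BC REST URL depends on the deployment.
SERVICE_URL = "http://host:port/restapp/rest/1/VisitorCounter"


def parse_count(body):
    """Pull the total row count out of the JSON payload (field name assumed)."""
    return json.loads(body)["TotalRows"]


def fetch_count(url=SERVICE_URL):
    """Issue the GET request; server-side, ADF BC logs a new counter row."""
    with urlopen(url) as resp:
        return parse_count(resp.read().decode("utf-8"))
```

The blog page performs the equivalent call with client-side JavaScript from the uploaded script; any HTTP client would do, which is the point of exposing the logic over REST.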
Performance
I'm using JMeter to execute the performance test. In the example below, the REST GET request is invoked in an infinite loop by 100 concurrent threads. This creates a constant load and lets us measure how the ADF BC REST application performs under it:
ADF BC REST scales well; with 100 concurrent threads it processes each request in 0.1–0.2 seconds. Compared to ADF UI request processing time, that is around 10 times faster. This is expected, because the JSF and ADF Faces UI classes are not involved in an ADF BC REST request. Performance test statistics for 100 threads (see Avg logged time in milliseconds):
Tuning
1. Referenced Pool Size and Application Module Pooling
ADF BC REST executes requests in stateless mode; REST is stateless by nature. I thought to check what this means for Application Module tuning parameters. I have observed that changing the Referenced Pool Size value doesn't influence application performance; it works the same way whether the value is 0 or anything else. The Referenced Pool Size parameter is not important for ADF BC REST runtime:
Application performs well under load, there are no passivations/activations logged, even when Referenced Pool Size is set to zero.
However, I found that it is still important to keep Enable Application Module Pooling = ON. If you switch it OFF — passivation will start to appear, which consumes processing power and is highly unrecommended. So, keep Enable Application Module Pooling = ON.
2. Disconnect Application Module Upon Release
It is important to set Disconnect Application Module Upon Release = ON (read more about it in ADF BC Tuning with Do Connection Pooling and TXN Disconnect Level). This ensures there are always near-zero DB connections left open:
Otherwise if we keep Disconnect Application Module Upon Release = OFF:
DB connections will not be released promptly:
This summarises important points related to ADF BC REST tuning.
|
https://medium.com/oracledevs/oracle-adf-bc-rest-performance-review-and-tuning-c3acadecd477
|
['Andrej Baranovskij']
|
2018-05-29 17:13:19.610000+00:00
|
['Oracle', 'Oracle Adf', 'Rest', 'Java']
|
Animations of Multiple Linear Regression with Python
|
In this article, we aim to expand our capabilities in visualizing gradient descent to Multiple Linear Regression. This is the follow-up article to “Gradient Descent Animation: 1. Simple linear regression”. Just as we did before, our goal is to set up a model, fit the model to our training data using batch gradient descent while storing the parameter values for each epoch. Afterwards, we can use our stored data to create animations with Python’s celluloid module.
This is the revised version of an article about the same topic I uploaded on July 20th. Key improvements include the cover photo and some of the animations.
Setting up the model
Multiple linear regression is an extended version of simple linear regression in which more than one predictor variable X is used to predict a single dependent variable Y. With n predictor variables, this can be mathematically expressed as
with
and b representing the y-intercept (‘bias’) of our regression plane. Our objective is to find the hyperplane which minimizes the mean squared distance between the training data points and that hyperplane. Gradient descent enables us to determine the optimal values for our model parameters θ, consisting of our weights w and our bias term b, to minimize the mean squared error between observed data points y and data points we predicted with our regression model (ŷ). During training, we aim to update our parameter values according to the following formula until we reach convergence:
with ∇J(θ) representing the gradient of our cost function J with respect to our model parameters θ. The learning rate is represented by α. The partial derivatives for each weight and the bias term are the same as in our simple linear regression model. This time, however, we want to set up a (multi-)linear regression model which is flexible to the number of predictor variables and introduce a weight matrix to adjust all weights simultaneously. Additionally, we store our parameter values in arrays directly during the fitting process itself, which is computationally faster than using for-loops and lists, as we did in the last article. I decided to arbitrarily set the initial parameter values for the weight(s) to 3 and to -1 for the bias. In Python, we import some libraries and set up our model:
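The fitting procedure described above can be sketched as follows. This is a hypothetical re-implementation, not the article's actual class (the original code lives in the linked gists); the initial weights of 3 and the initial bias of -1 match the text, and parameters and costs are stored per epoch for the later animations.

```python
import numpy as np

class LinearRegressionGD:
    """Batch gradient descent for multiple linear regression (sketch)."""

    def __init__(self, lr=0.001, epochs=100_000):
        self.lr = lr
        self.epochs = epochs

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.w = np.full(n_features, 3.0)  # arbitrary initial weights
        self.b = -1.0                      # arbitrary initial bias
        # store parameter values and costs for every epoch
        self.w_hist = np.zeros((self.epochs, n_features))
        self.b_hist = np.zeros(self.epochs)
        self.cost_hist = np.zeros(self.epochs)
        for e in range(self.epochs):
            err = X @ self.w + self.b - y            # residuals y_hat - y
            # partial derivatives of the MSE cost
            self.w -= self.lr * (2 / n_samples) * (X.T @ err)
            self.b -= self.lr * (2 / n_samples) * err.sum()
            self.w_hist[e], self.b_hist[e] = self.w, self.b
            self.cost_hist[e] = (err ** 2).mean()    # MSE before this update
        return self

    def predict(self, X):
        return X @ self.w + self.b
```

For a small enough learning rate this converges to the least-squares hyperplane, regardless of the number of predictor variables.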
After that, we intend to fit our model to some arbitrary training data. Although our model can theoretically handle any number of predictor variables, I chose a training dataset with two predictor variables. We intentionally use a particularly small learning rate, α=0.001, to avoid overly large steps in our animations. Since this article focuses on animations rather than statistical inference, we simply ignore the assumptions of linear regression (e.g. absence of multicollinearity, etc.) for now.
To ensure our fitted parameters converged to their true values, we verify our results with sklearn’s inborn linear regression model:
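The same cross-check can also be done without sklearn, via the closed-form least-squares solution, which is what sklearn's LinearRegression effectively computes. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def normal_equation(X, y):
    """Closed-form least squares: returns (weights, intercept)."""
    Xb = np.column_stack([X, np.ones(len(X))])    # append a bias column
    theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return theta[:-1], theta[-1]
```

If gradient descent converged, its parameters should match this solution to several decimal places.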
Now, we can finally create our first animation. Like before, we want to begin with visualizing values our cost function and parameters take on with respect to the epoch while plotting the corresponding regression plane in 3-D. As described previously, we intend to only plot values for selected epochs, since the largest steps are usually observed at the beginning of gradient descent. After each for-loop, we take snapshots of our plots. Via Camera’s animate-function we can turn snapshots into animations. In order to get the desired regression plane, we introduce a coordinate grid (M1, M2) via numpy.meshgrid and define a function “pred_meshgrid( )” to calculate the respective z-values with respect to the model parameters at a certain epoch. The dashed connecting lines seen in the following animations can be obtained through line plots between training data points and predicted points. By returning the final parameter values (see commented-out code!) we obtained in our animation, we ensure that we roughly visualized model convergence despite not using the full range of parameter values, we stored during the fitting process.
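A sketch of a pred_meshgrid-style helper (the name comes from the article, but this exact signature is an assumption): it evaluates the regression plane z = w0*m1 + w1*m2 + b over the coordinate grid, ready to be drawn with ax.plot_surface(M1, M2, Z, ...).

```python
import numpy as np

def pred_meshgrid(M1, M2, w, b):
    # evaluate the regression plane at every grid point
    return w[0] * M1 + w[1] * M2 + b

# illustrative grid and parameter values, not the article's data
M1, M2 = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
Z = pred_meshgrid(M1, M2, w=np.array([2.0, 3.0]), b=1.0)
```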
Despite its simplicity, plotting the values parameters take on for each epoch separately is actually the most informative and most realistic way of visualizing gradient descent with more than two model parameters. This is because we can only witness changes in costs and all model parameters involved simultaneously this way.
Fixed intercept model:
When creating more sophisticated animations of gradient descent, especially in 3-D, we have to focus on two model parameters while keeping the third parameter ‘fixed’. We are generally more interested in the weights rather than the bias term of our multiple linear regression model. In order to visualize how costs steadily decrease with our adjusted weights (w₀, w₁), we have to set up a new linear regression model where the bias term b is fixed. This can easily be done by setting the initial value for b to a predefined value b_fixed and remove the part of code where b is being updated in the new model. b_fixed can take on any value. In this case, we just set it to the y-intercept value the former model converged to:
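The fixed-intercept training loop can be sketched like this (an illustrative re-implementation, assuming the same update rule as before with the bias frozen at b_fixed):

```python
import numpy as np

def fit_fixed_bias(X, y, b_fixed, lr=0.001, epochs=400):
    """Gradient descent where only the weights are updated."""
    w = np.full(X.shape[1], 3.0)              # same initial weights as before
    w_hist = np.zeros((epochs, X.shape[1]))   # per-epoch history for animations
    for e in range(epochs):
        err = X @ w + b_fixed - y
        w -= lr * (2 / len(y)) * (X.T @ err)  # weight update only; b stays fixed
        w_hist[e] = w
    return w, w_hist
```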
After accumulating new data with our new model, we yet again create another coordinate grid (N1, N2) for the following animations. This coordinate grid enables us to plot the costs for every possible pair of w₀ and w₁ within the given range of numbers.
Contour plot
Contour plots allow us to visualize three-dimensional surfaces on two-dimensional planes via contour lines and filled contours (= contourfs). In our case, we want to plot w₀ and w₁ on the x-axis and y-axis respectively with the costs J as contours. We can label specific contour levels within the graph with Matplotlib’s contour-function. Since we know that our final costs are roughly 76, we can set our last contour level to 80.
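The cost values behind such a contour plot can be computed on a (w0, w1) grid as follows. This is a hedged sketch: the training data and grid ranges are illustrative stand-ins, not the article's dataset, and the resulting array J is what would be passed to plt.contour / plt.contourf.

```python
import numpy as np

# illustrative training data generated from y = 2*x0 + 3*x1 + 1
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([3.0, 4.0, 6.0, 8.0])
b_fixed = 1.0

# coordinate grid over candidate (w0, w1) pairs
N1, N2 = np.meshgrid(np.linspace(-1, 5, 100), np.linspace(0, 6, 100))
residuals = (X[:, 0, None, None] * N1 + X[:, 1, None, None] * N2
             + b_fixed - y[:, None, None])
J = np.mean(residuals ** 2, axis=0)  # MSE cost at every grid point

# the grid point with minimal cost should sit near the true (w0, w1)
i, j = np.unravel_index(J.argmin(), J.shape)
w0_best, w1_best = N1[i, j], N2[i, j]
```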
When the y-intercept was allowed to vary, we saw significant changes of parameter values and costs with later epochs, that we did not want to miss out in our regression- and parameter plots. However, with b being fixed, most of the ‘action’ appears to be confined to the first 400 epochs. Thus, we limit the epochs we intend to visualize in the following animations to 400, which is less computationally expensive and results in more appealing animations. To confirm this impression, we can compare the final parameter values, the fixed intercept model returned after 100,000 epochs, to the parameter values we obtain after 400 epochs (see commented-out code below!). Since parameter- and cost values generally match up, it is fair to say that we visualized model convergence despite restricting the number of epochs being visualized.
Animated regression plane and contour plot (large)
Surface plot
For the last plot, we want to incorporate the same epochs as we did with the contour plot. This time, however, we want to visualize gradient descent in three dimensions using a surface plot.
Animated regression plane and surface plot (large)
Theoretically, plotting the trajectory of gradient descent in the x-y-plane, as we did with the contour plot, corresponds to the ‘real’ trajectory of gradient descent. In contrast to the surface plot above, gradient descent actually does not involve moving in the z-direction at all since only parameters are free to vary. For a more in-depth explanation see here. Lastly, I want to point out that creating animations with Celluloid can be very time-consuming. Especially animations involving surface plots can take up to 40 minutes to return the desired result. Nevertheless, I preferred Celluloid over other packages (e.g. matplotlib.animation) due to its simplicity and clarity. As always, creative input and constructive criticism is appreciated!
I hope you found this article informative and useful. Should any questions arise or if you noticed any mistakes, feel free to leave a comment. In the next article of this series, I will dedicate myself to animations of Gradient Descent on the example of Logistic Regression. The complete notebook can be found on my GitHub.
Thank you for your interest!
References:
Appendix:
|
https://towardsdatascience.com/animations-of-multiple-linear-regression-with-python-73010a4d7b11
|
['Tobias Roeschl']
|
2020-10-07 02:43:36.042000+00:00
|
['Machine Learning', 'Regression', 'Statistics', 'Python', 'Animation']
|
Ancient Japanese Recipe For Weight Loss!
|
Random image from google for security purposes!
I am an affiliate, which means I promote other people’s products. However, I decided to review a product that I found really helpful to a lot of people: the ‘’Okinawa Flat Belly Tonic’’. I came across this topic while I was looking for a good product to promote, and I found it timely because recently a lot of people have been doing loads of research on how to reduce belly fat!
This is also a very helpful video reviewing the Okinawa Flat Belly Tonic and its health benefits; trust me, it’s worth your time. A lot of the time, people who search for how to reduce their weight end up quitting, and the reason is that most of their search results say: eat a low-calorie diet, exercise, eat healthy, drink water, etc., and to be honest, who really wants to do that? This product assures you that you do not have to eat less to reduce fat, only drink, which makes it one of the most successful products now selling in the market.
Here is a customer review I found days ago!
Here this customer purchased the product and confirmed it worked well for him. I decided I needed to share this product with the world and was inspired to write this article today on Medium. So if you are ready to stop the low self-esteem, the embarrassment, and the boring salads, why not click on this link now: Okinawa belly fat. In this link, a few people tell stories about their weight loss and how they achieved it in just a few months! This costs only $49–$69.
|
https://medium.com/@habibatibrahim2006/ancient-japanese-recipe-for-weight-loss-ef48cbf8b633
|
[]
|
2020-12-20 23:51:34.126000+00:00
|
['Fitness', 'Weight Loss', 'How To Lose Weight Fast', 'Health Foods', 'Transformation']
|
Why Bragging About Your Online Success Isn’t Helping Me
|
Why Bragging About Your Online Success Isn’t Helping Me
Or anyone else for that matter
Image: olezzo/Adobe Stock
If you’ve spent any amount of time on the internet, then you know what it’s like to come across the humblebragger. At least, they think they’re being humble. If you’ve been on social media, then you’re also familiar with overt braggarts.
You’re minding your own business, scrolling your newsfeed, and you stumble across a longwinded post from an old friend. You know them well, but a lot of people also know them because they’ve grown into something of an Internet sensation.
Let’s be honest, anyone with a bit of knowledge these days can build a healthy following online. And your old friend just made a post that was a longwinded way of bragging about all of their achievements. There was literally no other point to the post, but to highlight all of the incredible work they’ve done.
While there’s nothing wrong in theory with being proud of one’s accomplishments, it hits a little different when you take time out of your day to write a post with the intent to brag about those accomplishments.
I’ll be completely honest with you. That post changed my view of that person. It’s something I hadn’t noticed about them before so I went to browse the rest of their posts and realized it was par for the course. That’s all they seemed to do. And, by injecting words like blessed, proud, humbled, they tried to make it sound like something it wasn’t.
What is Bragging?
Look, social media is social. It’s there to share your life with your friends and family. So, it makes perfect sense that you share moments of success and happiness. Of course, you want to announce a promotion, a new addition (whether human or furry), or any number of positive achievements or events going on in your life. Generally speaking, people share in your happiness. They congratulate you because friends like to see their friends succeed.
The issue comes when you share not to spread happiness, but to make others feel envious of you. The announcements or information you’re sharing isn’t useful, there’s no informative purpose, it’s intentionally dysfunctional, and is all about showing off. When you brag, there are two considerations — the information you’re sharing, as well as the people you’re sharing it with.
If we take it in a professional context, you may make a post on LinkedIn announcing an upcoming paper. That’s useful information. However, making a random post where you tell your audience you’ve been cited thousands of times… that’s not helpful or useful.
Which begs the question, if someone wants to share information that isn’t useful or helpful, why do they do so? What were they trying to accomplish? What harm are they causing by sharing as they did?
Let’s define bragging. If it comes up naturally, then revealing impressive information about yourself isn’t necessarily bragging. For example, if someone asks you where you live and you happen to live in The Hamptons, then you aren’t bragging by responding to their question. Now, if you were to disclose it without being asked, then it’s bragging.
If you complete your LinkedIn profile by filling in relevant achievements, then you’re not bragging; you’re simply doing what the platform requests. So perhaps the best way to determine whether something is bragging or not is to consider whether you’re imposing your status-elevating thoughts on others by sharing what you’re sharing.
The Result of Bragging
What bragging does is highlight negative information about you, the sharer. You’ll become known as a braggart. Guess what? Most braggarts don’t know they’re braggarts, but the rest of the world recognizes it and they don’t like those people. What that bragging impresses upon your audience is that you hold a negative view of the people around you because ultimately, the message you send is that you’re better than them.
If only those people kept their bragging to social media where you could mute them. Unfortunately, the people who brag online are just as apt to do so in person. Research shows that bragging is linked to more undesirable traits. People who tend to brag when positive events occur are reportedly less empathetic, less agreeable, and less conscientious. Whereas those who brag and overshare have higher levels of narcissism.
It shouldn’t come as a surprise, but what might come as a surprise is how often you brag without even realizing you’re doing it. I’d encourage you to go look at the posts you make and determine whether you’re guilty of a humblebrag or two or if you go all out with straight-up, overt bragging. If you do, what does it do for you?
Often, what happens is the braggart attracts a crowd. A lot of people gravitate to them. Those people ingratiate themselves to the braggart because they see a benefit in doing so. Generally, these are people who have a lower status and/or ulterior motives to build a relationship with the person. The braggart now has an entourage. It sounds innocent, but in the real world (and online) that entourage can be used to tear others down in a bid to protect the braggart.
This is something you can watch unfold online all the time. A celebrity can call someone out and all of their fans rush to attack that person. It isn’t just celebrities who do this, however; anyone with a following can sic their followers on someone with whom they have an issue. It creates a vicious cycle of bullying and power exertion.
There’s a new habit emerging, people brag (online and in-person) about the success they’ve found and they brag about it by acting as though they’re trying to inspire you to also succeed. They package their brag as a way of empowering you to take the same steps they did to create the success they have.
In fact, some people even go to the trouble of writing it all out and selling the information in a book. Then, they brag about that success. Look at what I did! I managed this online business while working a full-time job and now look at me! What are you waiting for? It may come with good intentions, but it doesn’t always lead to positive results.
We’re all guilty of a bit of self-promotion now and again. We want to highlight our strengths, we want to have our moment to show our expertise or competence, whether it’s in the workplace, online, or in general. We all do it.
But what is your intention when you do so? Because I can tell you this, when you brag about your online success it isn’t helping me. It isn’t helping anyone. It isn’t helping you either, because one over-the-top brag or poorly worded post highlighting your success can tank your reputation. You might continue to brag, thinking others are jealous of your success or inspiring success in others and the reality is it’s making everyone dislike you.
All that to say, be careful how you communicate your success to others. You might think you’re helping, but you’re more than likely doing real damage to the people around you.
|
https://medium.com/assemblage/why-bragging-about-your-online-success-isnt-helping-me-3471fe1a6d65
|
['George J. Ziogas']
|
2020-12-28 13:43:03.628000+00:00
|
['Psychology', 'Self', 'Success', 'Freelancing', 'Work']
|
Bowl/LivE||NFL Super Bowl Locations Reddit StreaMs, how to watch Chiefs vs Buccaneers online
|
How to watch Super Bowl 2021: Live stream online without cable, TV channel, time for Chiefs vs. Buccaneers NFL game.
Go LIVE: https://superbowl2021.page.link/live
Super Bowl Sunday 2021 is right around the corner and the Kansas City Chiefs and Tampa Bay Buccaneers are set to play in football’s biggest game. NBC Sports has you covered with TV channel information and every live streaming option on Roku, Apple TV and more for Super Bowl LV. Plus, find out where to watch the game for free and options for anyone without cable TV.
Super Bowl MVP Patrick Mahomes and the 14–2 Kansas City Chiefs are looking to repeat as Super Bowl champions after last year’s victory over the San Francisco 49ers. Mahomes threw for 286 yards and two touchdowns with two interceptions, while Travis Kelce scored one touchdown on six receptions. This year, Mahomes will go up against Tom Brady and the Tampa Bay Buccaneers. According to PointsBet, the Buccaneers are favorites over the Chiefs. Click here to bet on the game. Follow ProFootballTalk for more on the 2021 NFL Playoffs as well as game previews, recaps, news, rumors and more leading up to Super Bowl 2021.
Go LIVE: https://superbowl2021.page.link/live
PointsBet is our Official Sports Betting Partner and we may receive compensation if you place a bet on PointsBet for the first time after clicking our links.
RELATED: Where does Tom Brady rank amongst quarterbacks with the most Super Bowl wins?
The Weeknd will perform this year’s Super Bowl halftime show at around 8:00–8:30 p.m. ET. R&B singer Jazmine Sullivan and country singer Eric Church will perform a duet of the national anthem prior to kickoff, while Grammy-award winning artist H.E.R. will sing “America the Beautiful.”
What channel is the Super Bowl on this year?
CBS will broadcast this year’s Super Bowl with Jim Nantz and Tony Romo announcing the game. Check your local listings to see what TV channel CBS is in your area. For those without access to CBS, you can watch the game for free on your phone or connected devices with the CBS Sports App or on CBSSports.com, as well as with the NFL App and the Yahoo! Sports App. In addition, the Super Bowl can be streamed live other ways with services such as FuboTV, YouTube TV, Sling TV, Hulu with Live TV and AT&T TV.
According to The Verge, this year’s Super Bowl will not be broadcast in 4K or HDR by CBS.
RELATED: Check out the full 2021 NFL Playoffs schedule including scores, recaps and more here
NBC was originally scheduled to broadcast the 2021 Super Bowl, with CBS airing the 2022 Super Bowl. However, the two networks decided to swap years in order for NBC to have both the Super Bowl and Winter Olympics in 2022.
How to watch Super Bowl LV
When: Sunday, February 7, 2021
Kickoff time: 6:30 p.m. ET (3:30 p.m. PT)
Where: Raymond James Stadium in Tampa, Florida
TV Channel: CBS
How to watch, live stream: Super Bowl halftime show
Follow along with ProFootballTalk and NBC Sports for Super Bowl news, updates, scores, injuries and more
Get betting tools, DFS, season-long fantasy help, live odds and more for Super Bowl 2021 with Rotoworld Premium
2021 NFL Playoff Bracket
RELATED SUPER BOWL POSTS
What kind of Super Bowl food, appetizers, snacks should you make?
Why does the NFL use Roman numerals for the Super Bowl?
What are Super Bowl Squares and how do they work?
|
https://medium.com/@livecbsonline/bowl-live-nfl-super-bowl-locations-reddit-streams-how-to-watch-chiefs-vs-buccaneers-online-d6755030f303
|
['Online Tv All Game']
|
2021-02-07 09:38:33.978000+00:00
|
['Super', 'Live', 'CBS', 'Bowl', 'TV']
|
Convert Darknet model to Keras model
|
1. Clone keras-yolo3
https://github.com/qqwweee/keras-yolo3
2. Download yolov3-tiny.weights
https://pjreddie.com/media/files/yolov3-tiny.weights
Clicking the link above downloads the file. Put the downloaded file into the repository you cloned above.
3. Convert the weights to .h5
Let's test whether the conversion works.
python convert.py yolov3-tiny.cfg yolov3-tiny.weights model_data/yolo_tiny.h5
Running the command above produced the following error:
File "/Users/kimnan-young/opt/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py", line 703, in is_tensor
    return isinstance(x, tf_ops._TensorLike) or tf_ops.is_dense_tensor_like(x)
AttributeError: module 'tensorflow.python.framework.ops' has no attribute '_TensorLike'
4. Fixing the TensorLike module error
/Users/kimnan-young/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/ops.py
Look at line 212 of this file:
def is_dense_tensor_like(t):
    return isinstance(t, core_tf_types.Tensor)
Copy the type used in the return value (core_tf_types.Tensor).
/Users/kimnan-young/opt/anaconda3/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py
In this file, modify line 703, where the problem occurred, as follows:
def is_tensor(x):
    # return isinstance(x, tf_ops._TensorLike) or tf_ops.is_dense_tensor_like(x)
    return isinstance(x, tf_ops.core_tf_types.Tensor) or tf_ops.is_dense_tensor_like(x)
The commented-out part is the code before the fix, and the second return is after the fix.
5. Generating the .h5 file
The yolo_tiny.h5 file was generated.
I downloaded the dog.jpg file and ran the code below.
from IPython.display import display
from PIL import Image
from yolo import YOLO
def objectDetection(file, model_path, class_path):
    yolo = YOLO(model_path=model_path, classes_path=class_path, anchors_path="model_data/tiny_yolo_anchors.txt")
    image = Image.open(file)
    result_image = yolo.detect_image(image)
    display(result_image)
objectDetection('dog.jpg', 'model_data/yolo_tiny.h5', 'model_data/coco_classes.txt')
It works well.
|
https://medium.com/@nanyoung18/darknet-model-to-keras-model-9534da3c2819
|
['Nanyoung Kim']
|
2020-11-01 14:35:14.605000+00:00
|
['Darknet', 'Keras', 'Macos']
|
Destruction By Ignorance
|
Vaccine Injury
Destruction By Ignorance
CDC Comes Clean
canva.com
December 19, 2020: a report leaks under the cover of night from the CDC. It states those injected with the mRNA vaccine suffered severe adverse reactions. From the 272k doses administered, 2.8% suffered from vaccine injury.
Next to the notation on page six [1] of the presentation, it states that health impact events refer to individuals “not able to perform normal daily activities, unable to work, and required care from a doctor or health care professional.” If we carry the trend forward, the projected number of injured from this vaccine will reach into the hundreds of millions worldwide.
And who’s to blame? Experts around the world warned us daily not to partake in the human experiment of mRNA. They told us that if we suffered from allergic reactions to food or medication, or if pregnant, we should not allow ourselves to be injected. Yet the CDC results show pregnant women participated in this rollout.
These warnings came as Big Pharma put profit over people as the headlines read. Pfizer and BioNTech Begin Human Clinical Trials of COVID-19 [2]
“Messenger RNA (mRNA) vaccines, which have never been licensed for use in humans, inject cells with mRNA, usually within lipid nanoparticles, to stimulate cells in the body to become manufacturers of viral proteins. … mRNA vaccines have potential safety issues, including local and systemic inflammation and stimulation of auto-reactive antibodies and autoimmunity, as well as development of edema (swelling) and blood clots.” May 05, 2020
NIH ‘Very Concerned’ about Serious Side Effect in Coronavirus Vaccine Trial [3]: “The test was halted when a participant suffered spinal cord damage…” Sept. 15, 2020
Pregnant women told not to get Pfizer Covid-19 vaccine because risks unknown [4]: “There are no data as yet on the safety of Covid-19 vaccines in pregnancy, either from human or animal studies…Women should be advised not to come forward for vaccination if they may be pregnant or are planning a pregnancy within three months of the first dose.” Dec. 02, 2020
FDA Says 2 Participants In Pfizer COVID Vaccine Trial Have Died [5] Dec. 08, 2020
UK Warns People With “Severe Allergies” Shouldn’t Take COVID Vaccine [6]: “any person with a history of significant allergic reactions to vaccines, medicine or food should not receive the Pfizer/BioNTech vaccine.” Dec. 09, 2020
University Of Pittsburgh Medical Center Won’t Require Staff To Take COVID-19 Vaccine Due To ‘General Uncertainty’ [7]: “The reason are several-fold, … For starters, general uncertainty over the vaccine. — Dr. Graham Snyder” Dec. 09, 2020
Today, we see the same injuries happen, which experts warned us about months prior. We look and watch the devastation unfold in the headline news.
Alaskan has allergic reaction after getting Pfizer’s COVID-19 vaccine [8] Dec. 16, 2020
Suburban Hospital Temporarily Pauses Vaccinations ‘Out of Abundance of Caution’ Following Adverse Reactions [9] Dec. 18, 2020
Australia Cancels COVID Vaccine Trial Over ‘Unexpected’ False Positives For HIV [10] Dec. 11, 2020
Fairbanks clinician is third Alaskan with adverse reaction to COVID-19 vaccine [11] Dec. 18, 2020
CDC confirms COVID-19 vaccine allergic reactions, issues new guidance [12] Dec. 19, 2020
Over 3,000 “Health Impact Events” After COVID-19 mRNA Vaccinations [13] Dec. 22, 2020
The Australian story brings great concern with the HIV positive test results in those injected. In 2011, patents filed for a SARS-COV-2 added four sequences of HIV. [14].
Two thousand eleven saw the Institut Pasteur filing a further patent application for “SARS-COV-2,” which was identical to the previous one, … because commercial exploitation of the first patent started in 2003 and would expire 20 years later, in 2023. According to Fourtillan, four sequences of the HIV virus — responsible for AIDS — were added to the virus, in view of creating further vaccines.
We’re witnessing a replay of Hitler’s reign of terror. But today, it’s genocide by medical practitioners who have not disclosed the danger behind the Pfizer and Moderna technology. According to the Nuremberg Code, anyone who takes part in experimentation on humans without their voluntary consent can, if convicted, be sentenced to prison or receive the death penalty.
It’s time to take a stand and stop the madness before it’s too late.
|
https://medium.com/common-sense-now/destruction-by-ignorance-fab44ef3f022
|
[]
|
2020-12-26 17:42:46.035000+00:00
|
['Covid 19', 'Vaccines', 'News', 'Medical', 'Pandemic']
|
Boost your Business by making a free Website in just 5 minutes
|
Businesses are locked down and bleeding due to this pandemic.
“But remember, the current crisis is temporary.”
Your Business must go on.
Take your business online in 5 minutes by clicking this link : https://lnkd.in/earjW8y
No technical skills needed. If you know how to use WhatsApp, you can create your website right now.
Your website works for you 24/7 and gets you more business from all over the world. Take your mind off things and start building your website with amazing content, offers, products, and services.
Don’t worry, try it out. It’s free.
Let’s make all small businesses grow bigger by getting them online.
One of our fellow startups, Websites.co.in, developed an easy-to-use website-building tool.
Uniqueness
1. Pay a monthly subscription charge of at most Rs 2,100 (pricing depends on the location: Urban, Semi-Urban, or Rural)
2. Domain and hosting are free
3. Inbuilt SEO helps you get a better ranking in Google; Google AdSense is also enabled (only after payment)
4. You can convert the website into any language
5. No technical skills or coding needed; design your own website just like you create a Gmail account
6. Develop your own eCommerce, B2B sales, or dynamic text website
You can also contact us on: 8802153865
#websites.co.in #comeunity #freewebsite #onlinebusiness
|
https://medium.com/@eishagoel15/boost-your-business-by-making-a-free-website-in-just-5minutes-226b0fc1e15c
|
[]
|
2020-12-21 08:55:58.376000+00:00
|
['Free Website', 'Website Design']
|
Danny Gonzalez recommends using a VPN — what’s the best VPN deal
|
Danny Gonzalez is a viral YouTuber, and if you’re frequently on YT checking influencers or celebrities, then you must’ve heard of him. He joined the YouTube community in 2014 and since then gathered 3.44 million subscribers with over 450 million views — impressive. On his channel, you will find reactions to celebrity videos, analysis of who’s good on TikTok and who’s not, or just charmingly making fun of other YouTubers videos. For his fans, he even provides a discount for a Virtual Private Network, but is it the best discount there is?
What’s the best VPN deal out there?
Danny Gonzalez is suggesting a well-known VPN service, but it's not the cheapest one, and it's your right to choose.
NordVPN is currently on a 70% sale, which makes it one of the cheapest options out there amounting to just $3.49/month for a three-year plan.
Click here to apply the NordVPN discount automatically
What does NordVPN offer?
NordVPN is one of the most popular VPN service providers out there, and for a good reason. First of all, some cybersecurity services can be a bit tricky to use, but NordVPN has been developed with ease of use in mind. Everything is just a few clicks away, selecting and changing a server is very easy, and all other options are clearly explained.
Then there’s the server quality. It offers 5600+ servers in 58 countries, one of the biggest selections out there. You’ll need these if you want to bypass geographical restrictions efficiently. If you stumble on a video or a web page that is not available in your country, you choose another server in the required region and boom, you’re in.
Last but not least are the cybersecurity features. This service will add additional encryption to your connection, reroute all the data traffic through its secure servers, protecting against malware, phishing, trackers, and even block annoying ads. In terms of security, it is truly second to none.
What is a VPN?
VPN stands for Virtual Private Network, and it’s cybersecurity software. It works by creating an encrypted tunnel between your device and the VPN’s server, providing additional protection against any third party that wants to take a peek inside. It also obfuscates your original IP address, allowing you to bypass the aforementioned geographical restrictions.
|
https://medium.com/@luciopatterson221/danny-gonzalez-recommends-using-a-vpn-bc7329e86cd4
|
['Lucas Patterson']
|
2020-03-26 12:50:45.971000+00:00
|
['Cybersecurity', 'Discount', 'Deal', 'Privacy', 'VPN']
|
The Deliberate Rat Crisis of 1902
|
The Deliberate Rat Crisis of 1902
A creative retaliation against white supremacists
Vietnam during the French occupation (Public domain/Wikimedia Commons)
When the French came to Asia, they were determined to bring white civility to the unfortunate locals through the use of superior science and technology. After conquering Indochina, which included what is now Vietnam, the French made Hanoi into the capital of their new colony. They were positive that this city would become a testament to how white man’s modernity could solve every one of the indigenous Vietnamese people’s shortcomings.
The city that was once filled with narrow alleys was transformed so that it now had shady trees and luxurious mansions lining the broad roads. Meanwhile, the natives were herded into the Old Quarter — ninety per cent of the population was crammed into less than one-third of the land they had previously occupied.
Of all the innovations the French brought with them, nothing rivaled their state-of-the-art plumbing system. Running water and flushable toilets ensured that the residents of the European Quarter lived in a state of constant luxury. Clean water could be gathered with almost no effort at all, and the removal of waste was just as simple.
While the French people were congratulating themselves for liberating the Vietnamese from their backward lifestyle, the locals were less than enthusiastic when it came to welcoming their white overlords. Europeans had indoor plumbing installed into their own homes, but the Old Quarter was only given public fountains.
Although these water fountains made a slight improvement over their previous way of life, the downsides of this new plumbing and sewage system hardly made the convenience worthwhile. The French had already spent so many resources making their own drainage network that the Old Quarter was left with a system that was woefully inadequate. Water would carry human waste down the sides of the streets until the gutters emptied into a nearby lake. This not only polluted the waters that surrounded a temple, but the lake would overflow once a year, filling the streets with sewage.
However, when a Frenchman flushed a toilet, he could be assured that superior technology ensured that his feces disappeared forever — at least, so he thought.
The Crisis Begins
Few people realized that these advanced sewers would soon become the ideal breeding ground for rats. The dark tunnels not only kept the rats safe from predators, but they served as a network of roads that would lead them into homes of every European family. The rats began pouring out of the toilets and into the bedrooms of even the wealthy elite.
The council responded by deploying a team of rat-catchers into the sewers to eliminate the infestation. But of course, no self-respecting Frenchman would voluntarily enter a sewer. Because the entire city of Hanoi had been built by employing the locals to perform manual labor, it only made sense to have the Vietnamese people climb into the filthy underground tunnels.
The French reasoned that they were employing the locals and providing a source of income for these unfortunate neighbors. And, indeed, the Vietnamese people were so impoverished that when the city council offered one cent per rat killed, an army of men rose up to take the job.
In the first week, the rat-catchers killed over 1,000 rats per day. After a few weeks passed, and the laborers became more efficient at their trade, they were able to amass 4,000 rat corpses per day. A month into the crusade, rat catchers were consistently reporting having killed over 10,000 rodents in a single day. Because the sewers seemed to produce an unlimited supply of rats, the Vietnamese laborers eagerly descended into them. A day’s work could yield enough to buy fifty measures of rice — but of course, this had to be shared among the laborers.
But as tens of thousands of rodent corpses began accumulating in the streets, the wealthy Europeans began complaining about the ghastly sight and stench. When the locals emerged covered in sewage and carrying buckets filled with the bloody remains of rodents, the residents of the white community began insisting that the indigenous people were as filthy as the rats they hunted. They needed to be kept off the streets.
This was the final straw for the Vietnamese people. They had labored to build this marvelous city but were not allowed to enjoy any of its luxuries. Now, they were being coerced into filthy — not to mention dangerous — working environments. But to be banned from sight was unreasonable.
The rat-catchers went on strike. Within a few weeks, the rodent population had grown exponentially to the point that the city council was willing to offer a fourfold increase in the workers’ wages. Additionally, the laborers were to dispose of the corpses on their own; they only needed to provide the rat tails to receive payment.
At first, the natives grumbled. But then, someone proposed a brilliant plan.
Now that they were supposed to enter and exit the sewers unseen, there was no need to risk exposure to diseases and wild animals when they could receive four cents per tail by farming rats. When it came time to harvest the tails, the rats were allowed to live and continue to breed. Soon, the Old Quarter was harvesting thousands of tails each day. Rats became a valuable commodity, and some locals even began importing rats from the surrounding villages.
But of course, the rats in the sewers only continued to multiply. When the city council realized that there was no reduction in the number of rats pouring out of their toilets, they became suspicious. When they began to see a few stray tailless rats running free, they became so upset that they canceled the rat-catching program altogether.
The locals canceled their farms, and let all their rodents go free — presumably into the sewers of the European Quarter.
Sources
|
https://medium.com/history-of-yesterday/the-deliberate-rat-crisis-of-1902-91520ee230bc
|
['Philip Naudus']
|
2020-12-29 09:03:00.366000+00:00
|
['White Privilege', 'Humor', 'Karma', 'History', 'Racism']
|
Keeping the Future of Music Tech Exciting and Weird
|
Oval, performing live in 1998
Flashback to 1998…
My high school friends and I stood in Philadelphia’s Theatre of the Living Arts, waiting for Tortoise to come on. But before the post-rock hero headliners crowded the stage with an array of guitars, mallet percussion, and vintage synths, Markus Popp, aka Oval, stood almost motionless, staring at his huge CRT monitor as he filled the room with a wash of digital sound. It was the first time I’d seen someone perform with only a computer, and my friends and I cracked the obvious jokes about playing video games or checking email during the show. But I left that concert feeling like I’d seen the future, or at least a compelling version of it.
I’d started using music production software myself a few months earlier and my head was swimming with the possibilities — so many sounds to explore, all at the click of a mouse. As the last surly powerchords of ’90s rock radio rang out, plugging my guitar into a computer instead of an amplifier felt transgressive in its restraint. My buddies and I had also started filling our computers with MP3s, getting our first instantly gratifying taste of wanting to hear something one minute and finding it on the internet the next. And all this, just in time for the millennium!
But as with most thrilling innovations, the shocking newness quickly faded. At this point, the ubiquity of laptops on the stage and digital music in our pockets has become normal — even dreary, prompting the question: where do we go from here?
Making Space for Innovation in the Music Tech Landscape
Certainly the most notable trend in music technology over the last twenty years has been the migration of both music listening and music making to the same trio of devices.
Our laptops, smartphones, and tablets now offer multitrack recording studios, endless effects racks, and more synthesizers, virtual instruments, and samples than we could ever hope to audition—let alone use. These new tools also give us access to huge libraries of recorded music for downloading or streaming and, if we’re not sure what we want, they can recommend infinitely long playlists for us.
With this ability to record, distribute, and listen to a great-sounding album all on the same device, what else could we want? What is the future of music technology if not simply to create further refinements of this very agreeable situation?
The answer largely depends on whether we locate innovation within the nice clean lines of optimized efficiency and convenience, or whether we venture onto the mushy, unstable terrain of inspiration and expression. The seamlessness of our digital devices is hugely impressive, but when it comes to creativity, seams can be where the good stuff happens.
These days, we’re accustomed to thinking about innovation and technological progress as a linear path towards an objectively desirable goal. For example, consider the above timeline of notable hardware and software updates. This computer begat that computer, getting smaller and faster at every turn.
We also have models of exponential growth that see technological development through the same up-and-to-right lens favored by the stock market and venture capital.
But perhaps technological evolution, particularly as it pertains to creative fields, looks less like these tidy progress charts and more like an Anthony Braxton composition.
Anthony Braxton, Composition Number 368m (circa 2007)
The point illustrated in the Braxton composition isn’t directionality or constant upward movement, but rather unpredictability and complexity. Instead of ideas toppling into each other with the satisfying predictability of dominoes, we have a series of tangents and nodes that constantly scatter, go into holding patterns, cross paths, and recombine. It’s exciting.
Left: Tony Martin, Bill Maginnis, Ramon Sender, Morton Subotnick, and Pauline Oliveros at the San Francisco Tape Center. Right: Don Buchla with his 100 Series modular system.
This lively dynamic brings to mind the recent resurgence of interest in modular synthesizers—and not simply because the squiggly lines recall patch cables. Modern sound synthesis came into its own in the mid ’60s with the codevelopment of Bob Moog and Don Buchla’s first modular systems in New York and California respectively. Both reflect efforts of a connected creative community rather than a single inventor toiling in solitude. Buchla famously collaborated with the composers of the San Francisco Tape Center, including Morton Subotnick, Ramon Sender, and Pauline Oliveros, to build an electronic instrument that could create the adventurous, unprecedented musical ideas they wanted to pursue. Similarly, Moog had musicians like Herb Deutsch and Wendy Carlos helping to decide how to shape the voltages passed across wires into musical content, suitable for producing pop confections or performing a Bach prelude.
As synthesizers became smaller and more accessible, wall-sized modular systems faded into the background, largely becoming the domain of academic composition departments or rock stars like Keith Emerson. But in the ’90s the aforementioned migration of music production to computers brought a new type of patchable music making in the form of visual “control flow” programming languages like Max and Pure Data. All of a sudden, music makers with a desire to look beyond presets had incredibly flexible tools for shaping sound on their laptops.
The mid-90s also saw Germany’s Doepfer Musikelektronik introduce Eurorack, a more compact format for modular synths. Crucially, it was also an open standard, meaning that any company could create compatible modules by designing to Doepfer’s specs. The astonishing variety of Eurorack modules on display at Moogfest’s Modular Marketplace reveals just how many independent builders have embraced the format to explore their own creative ideas.
Malekko’s setup at the Moogfest Modular Marketplace
Between the aforementioned digital devices and this renewed interest in analog and modular synths, we have a kaleidoscope of music technologies from which to draw. So, taking these devices as a baseline for music making and listening, how do we build on them? What paths for creating new music technologies are instrument builders and product designers pursuing? Taking a peek at recent Kickstarter campaigns, several notable strategies emerge as we seek a more vibrant, weird, and exciting future for innovations in music tech.
Strategy One: Make the Exotic Accessible
With a whole generation of musicians who’d cut their teeth on digital platforms looking to move beyond laptops and touchscreens, Andrew Kilpatrick, an established synth builder from Toronto, recognized the opportunity to create a user-friendly entry point to the pricey, complicated world of modular synths. His patchable tabletop Phenol synth gives budding synthesists a complete system to start sculpting sounds without having to spend months researching the thousands of available Eurorack modules. Kilpatrick brought Phenol to Kickstarter at the end of 2014 and found a very enthusiastic community of musicians who were eager not only to use the instrument, but also contribute to refining its design.
Strategy Two: Make the Digital Physical
Music making software and apps largely recreate physical instruments or equipment, complete with skeuomorphic interfaces that mimic the look of mixing board sliders, knobs on an amp, or even a vintage drum kit. Using the digital inputs of MIDI controllers and sequencers to control virtual versions of acoustic or analog instruments has become the norm for music production.
Sunhouse’s revolutionary Sensory Percussion system flips this arrangement on its head, using vibration sensors and machine-learning algorithms to let acoustic drums control digital sounds and effects, opening up a world of experimental possibilities. Inventor Tlacael Esparza, himself a professional drummer, was frustrated by the clunkiness of existing drum triggers and created the system to let percussionists translate their technique to the digital realm without giving up the nuance and responsiveness of their acoustic kits.
Like Sensory Percussion, Artiphon aims to give digital music making a more compelling, expressive physical form.
Founder Mike Butera recognized that our phones, computers, and tablets are now sophisticated sound engines that can recreate virtually any instrument, and set out to create an equally flexible and inviting control interface. Artiphon’s INSTRUMENT 1 can be played like a guitar, violin, keyboard, or lap steel and can also be programmed by users to meet their individual creative needs. In the first-impression video created for their wildly successful Kickstarter campaign, a range of people with varying degrees of musical experience try the instrument. While a professional jazz guitarist picks up on the unique affordances of a digital fretboard—marveling at the ability to play two notes on the same string—a young musical explorer beams as she has her first taste of strumming a fuzzed-out guitar chord, made accessible through INSTRUMENT 1’s ability to put sonorous sounds immediately under a beginner’s fingertips. It’s equally at home in a recording studio and on the living room couch.
Strategy Three: Blur the Line Between Music Making and Listening
Artiphon’s ability to span the worlds of professional and casual music making is hardly a new idea, but it’s one that’s ripe for reinvention. In a way, it takes us back to the pre-sound-recording era when many people’s interactions with popular music involved sitting around a parlor piano and singing as a friend or relative played the latest hits from sheet music.
Many new musical products are exploring the creative territory between passive listening and music making with the goal of creating playful, interactive experiences.
Superficially, this can mean designing a user experience in the digital UX sense. But on a deeper level, it’s about breaking down the hierarchy of performer and audience. The prescient, iconoclastic musical vision of Anthony Braxton comes to mind once again, specifically his use of the term “friendly experiencer” to describe listeners as active participants in creating music.
The Ototo musical invention kit is certainly a product created with friendly experiencers in mind. Growing out of the interactive sound work of artist Yuri Suzuki, the compact device combines capacitive touch sensors with an onboard synthesizer, inviting users to rapidly prototype their own musical interfaces—using vegetables, tin foil, or even their friend’s hands to trigger sounds.
Acquired for MoMA’s Humble Masterpieces collection in 2014, Ototo exemplifies a stream of music tech innovation that, like the 19th century parlor piano, creates a participatory, communal experience of music and looks beyond album sales or rockstar endorsements for validation. The chuckles at the end of this video say it all:
Datagarden’s MIDI Sprout goes even further in disrupting expectations around music making, recognizing that creative collaborators needn’t be human. The Philadelphia collective have long explored the cross pollination of electronic music and nature and came to Kickstarter to create a device that harnesses the biorhythms of plants, converting them into a signal to control synthesizers.
As we think about the convergence of music-making and listening devices, another obvious point of reference is Grandmaster Flash and other hip hop DJs’ use of turntables, traditionally playback devices, as musical instruments. So it makes sense that virtuoso turntablist DJ QBert of the Invisibl Skratch Piklz saw the increasing capabilities of flat paper circuits as an opportunity to turn a record’s packaging into a playable instrument. Collaborating with interactive poster wizards Novalia, he invites listeners to scratch samples and loop beats from his latest album, Extraterrestria, using a cardboard controller inserted alongside the vinyl.
Strategy Four: Invent New Ways of Listening
In the rousing talk she gave on the final day of Moogfest, Laurie Anderson spoke of the need for new types of listening spaces—places beyond traditional concert halls and museums where audiences could come together and hear adventurous sound art and music in the way creators intended. Excitingly, Kickstarter’s music tech community includes people working to open venues for doing just that.
One such example is Envelop, a forthcoming space in San Francisco that will feature a sophisticated ambisonic speaker array, allowing artists to create a highly spatialized surround-sound mix.
The migration of music listening to smart devices means that, more than ever, our experience of music is an individual one. A number of recent projects have taken the familiar form-factor of headphones to create wildly different listening experiences. Not taking for granted the recorded stereo sound that has been the standard for music distribution since the 1950s, these listening products harness the processing power of the computers we now carry in our pockets.
Nura, currently live on Kickstarter, is most simply described as a pair of headphones that listens to you before you listen to it. By measuring the unique frequency response of users’ ears, it creates a personalized mix, giving each listener as full and balanced a sound as possible.
Ossic explores the same spatialized-sonic territory as Envelop, putting an array of four headphone drivers in each ear cup and using head tracking sensors to create an immersive sound environment that simulates the experience of moving through space. While this naturally lends itself to enriching virtual-reality gaming, it’s equally compelling to consider what music made for a platform like this might sound like. If albums could be virtual sonic spaces, what would our most adventurous sound architects build?
Doppler Labs’ Here moves beyond recorded music entirely, offering listeners the ability to apply studio production techniques to their sonic environments in real time. This includes the familiar options for noise cancellation, but also frequency adjustments to override the choices made by a sound person at a live concert, and even effects like flanger and delay to add a psychedelic quality to the sounds of everyday life.
Strategy Five: Embrace Weirdness and Invite Friends
Some of the most interesting projects seem to exist on islands of their own. We’ve become quite accustomed to visuals accompanying electronic music, but rarely do they have as literal a connection as Jerobeam Fenderson’s Oscilloscope Music project.
The results are both mesmerizing and hilarious. But beyond creating a wonderfully playful exploration of sound and image, Fenderson used his Kickstarter campaign as an opportunity to share the process he developed with backers, inviting them to become not just listeners but creative collaborators.
This sampling of projects demonstrates the rich variety of ideas found within Kickstarter’s creative community for shaping the way we make and listen to music. I’m excited to see what new twists and turns emerge from them.
To keep up with fresh developments in music tech, check out the Sound section of our Technology category.
Nick Yulman curates Kickstarter’s Design & Technology categories and has worked with many music technology creators to develop campaigns. His own creative work features musical robots, and he has exhibited his interactive sound work internationally. He’s an alumnus of, and adjunct professor at, NYU’s Interactive Telecommunications Program.
|
https://medium.com/kickstarter/beyond-convenience-6278c275ea3c
|
[]
|
2018-03-09 21:48:29.623000+00:00
|
['Technology', 'Innovation', 'Startup', 'Music', 'Design']
|
ABD-Çin Teknolojik Soğuk Savaşı Tırmanıyor
|
|
https://medium.com/t%C3%BCrkiye/abd-%C3%A7in-teknolojik-so%C4%9Fuk-sava%C5%9F%C4%B1-t%C4%B1rman%C4%B1yor-3a751b3aa167
|
['Av. Kâzım Üstün']
|
2020-12-23 21:44:44.273000+00:00
|
['Çin', 'Türkçe', 'Teknoloji', 'Abd', 'NASA']
|
The Shadows and Connection Between Sex + Money
|
“I feel jealous of other women’s beauty.”
“I feel guilt + shame around my pleasure.”
“I feel embarrassed when thinking about my finances.”
These are common shadows I hear from the women I work with ( and ones I’ve experienced myself ).
Shadows are the aspects of ourselves we shame, judge or deny. Often they’re the things we wouldn’t want anyone else to know about us.
Here’s the thing: until you learn to embrace your shadows they 𝙊𝙒𝙉 you.
MEANING: if you don’t acknowledge, clear and integrate those pieces of yourself they will continue to drive your behaviors and decisions, usually in a destructive way.
Want epic, heart-opening transcendent se-x? Not possible if you’re holding shame around your body and pleasure.
Want a loaded bank account? Not possible if you’re holding judgement of others’ monetary success.
When you integrate your shadows you 𝙡𝙞𝙗𝙚𝙧𝙖𝙩𝙚 𝙮𝙤𝙪𝙧𝙨𝙚𝙡𝙛 + your energy to actually magnetize your desires + express your radiant soul.
Where are most people trapped in limitation?
𝙎𝙀-𝙓 + 𝙈𝙊𝙉𝙀𝙔.
So naturally that’s our intention together during the Scorpio New Moon Ceremony next Monday ( 11.16 ).
Serena
@serenavamoroso
PS — for the latest event info send me a DM. More on the connection between sex and money energetics coming soon.
|
https://medium.com/@serenavamoroso/the-shadows-and-connection-between-sex-money-c3f5ff9f9e3b
|
['Serena V Amoroso']
|
2020-12-22 22:40:17.787000+00:00
|
['Spiritual Growth', 'Money Mindset', 'Abundance', 'Money Management', 'Sexuality']
|
Petscop: The Game That Doesn’t Exist
|
Petscop: The Game That Doesn’t Exist
The scariest game that you’ll never play
Petscop, developed by Garalina and released for the PlayStation in 1997, is a 2.5D platformer that has players navigate a cute and vibrant world catching creatures known as “pets” by solving puzzles. However, there’s only one problem…the game doesn’t actually exist.
I should probably come clean here and reveal that Petscop is in fact a horror web-series/ARG that centers around someone named Paul who receives a copy of the mysterious “Petscop”. The series is shown in a commentary format with Paul talking to the audience as he plays through the game, which slowly swaps its cute and harmless aesthetic for something much more sinister.
Although Petscop is a video series, I would actually class it as an ARG (Alternate Reality Game). If you just sat through the videos, you’d probably be left with nothing but questions. The community that Petscop spawned worked together to try and solve its cryptic clues. It was strangely heartwarming to see a small fan base blow up into a massive collection of conspiracy theorists, and to say that I was there when it happened feels like an honor. It felt like a multiplayer game at times, with everyone collaborating to try and unravel Petscop’s overwhelming mystery.
It took over 2 years for the series to finish with a total of 24 parts, with each video becoming increasingly unnerving and creepy. If Petscop taught me anything, it was patience. Having to wait for the next video to release only to be left with more questions was infuriatingly painful. Petscop would leave fans waiting months for each instalment, and there were even points where people started to believe that the series had been abandoned.
Petscop manages to use its simplicity of being a live play-through to create the most unnerving atmosphere possible. The only comfort that the audience has is the knowledge that Paul is with them, commentating over the game and reacting to everything that happens. However, Petscop uses this comfort to its advantage, with Paul seemingly disappearing at certain points in the series, leaving the audience alone with the game. It doesn’t help that Paul isn’t the most talkative person, and as you’ll find out through watching Petscop, it’s not just the audience that he ends up talking to…
Paul is a very interesting character and as you learn through each video, Petscop is so much more than just a spooky game, often breaking the fourth wall to interact with Paul and the audience. To say what Petscop reveals about Paul and how it interacts with him would be a huge spoiler and it’s best to enter Petscop as blind as possible so I’ll leave that for you to find out for yourself.
What makes Petscop so unique is how convincing it is. When I first came across the series I genuinely believed that it was a real game until I looked into it further, and even after doing my research it was still a struggle to believe that it was fake. The way the game glitches out and how the player moves around is so realistic that it’s honestly a shame that it’s not a real game. The creator behind Petscop revealed after the project had ended that he had no plans to make a playable version, and as disappointing as that sounds, I can understand why. Being able to explore the game would ruin the series and the mystery that surrounds it.
Petscop feels so realistic that it often seems like a real collection of tapes that should have been left alone. Being misled by its initial vibrant world only to be pulled into something much darker and more sinister gives off a surreal feeling that’s extremely difficult to explain. By the time Petscop reveals its true evil nature, you’re already hooked by its complex mystery. It feels like you should stop watching, but you become so invested in its world that you feel unable to. Petscop represents the best in horror. There are no jumpscares or loud noises, it’s simply a combination of fantastic storytelling and a constant build of dread that keeps the audience both intrigued and horrified.
I don’t exactly know what it is about Petscop that makes it so creepy. From the long silences where the player doesn’t move to the disturbing subject matter, Petscop is a complex and twisted puzzle that very slowly reveals itself to the audience. It takes a lot of patience but when everything starts to piece together, Petscop becomes an extremely unique and unforgettable experience that will get under your skin and stay with you long after finishing its series of strange videos. If you’re obsessed with horror and mystery like me then I honestly can’t recommend this series enough.
If you’re interested in exploring Petscop and its many horrific mysteries, you can find a link to its YouTube channel here.
|
https://medium.com/super-jump/petscop-the-game-that-doesnt-exist-b1c853d321b2
|
['Anthony Wright']
|
2020-11-30 23:50:28.890000+00:00
|
['Gaming', 'Horror', 'Features', 'Culture', 'Social Media']
|
How it works: The LinkedIn Algorithm
|
At Squirrels&Bears we believe in simplicity and we like to provide clear answers to your questions. In our #howitworks series we focus on simple explanations of various aspects of small business, highlighting the basic facts, what works and what doesn’t work. And we hope to make your life a little easier.
The Basics:
Almost 600 million business professionals use LinkedIn to find jobs, grow their networks and share content.
Your LinkedIn feed doesn’t show everything your network is posting by default — it’s only showing content it believes is relevant to you.
However, you can sort the content in your LinkedIn feeds by recency by changing the filter on the top of your news feed.
The feed has a spam filter, which determines whether your content shows up in the feed, how far of an audience it reaches within LinkedIn or whether it’s spam.
When you post an update, a bot first classifies it using three key categories: spam, low-quality or clear. If you pass the test, your post will appear for a short while and the bot will track the level of engagement. If others are interacting with your post, you are likely to make it through the next filter. But if users mark your post as spam or hide it from their feeds, it’s not good news for you.
After the initial check, the algorithm looks beyond your post. It considers you and your network to determine if the post should be showing up in other users’ feed.
At the final stage, your post gets reviewed by a human, who determines whether to keep showing it and tries to understand exactly why it is popular. As long as your post keeps getting noticed, it will remain in the mix — this is why you sometimes see posts from a few days ago.
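As a toy illustration, the three-stage flow above can be sketched in a few lines of Python. LinkedIn’s real classifier, signals and thresholds are proprietary, so every rule, keyword and number below is invented purely to show the shape of the pipeline:

```python
# Toy model of the three-stage feed filter described above.
# All rules, keywords and thresholds are made up for illustration;
# LinkedIn's actual classifier is proprietary.

def initial_classification(post_text: str) -> str:
    """Stage 1: a bot sorts each update into spam, low-quality or clear."""
    spam_phrases = {"buy now", "free money", "click here"}
    text = post_text.lower()
    if any(phrase in text for phrase in spam_phrases):
        return "spam"
    if len(post_text.split()) < 3:
        return "low-quality"
    return "clear"


def passes_engagement(likes: int, hides: int, spam_flags: int) -> bool:
    """Stage 2: the post is shown briefly while interactions are tracked."""
    return likes > hides + spam_flags


def feed_decision(post_text: str, likes: int = 0, hides: int = 0,
                  spam_flags: int = 0) -> str:
    """Combine the stages: only clear, well-received posts reach human review."""
    if initial_classification(post_text) != "clear":
        return "filtered"
    if not passes_engagement(likes, hides, spam_flags):
        return "demoted"
    return "human review"  # Stage 3: an editor decides whether to keep it showing


print(feed_decision("Click here for free money!"))          # filtered
print(feed_decision("Five tips for writing better LinkedIn articles",
                    likes=12, hides=1))                     # human review
```

Nothing here reflects LinkedIn’s actual implementation; the point is simply that a post has to clear an automated quality gate and then earn engagement before a human ever looks at it.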
Do
Use common SEO and content marketing tactics.
Keep your posts short, interesting and visually strong.
Write your own articles using LinkedIn Publisher — the articles will show up on your feed, appear on your profile and could be selected to be included highlight emails sent by LinkedIn.
Optimise the most effective times to post — look at best practice guides, but also test and track various times that work for your posts and focus on those effective time slots.
Offer valuable tips and advice relevant to your audience.
If a single post is liked and shared, it will be seen more than if multiple people share the same link — this is particularly effective on company pages, so encourage your team to engage with posts on your page.
Post a variety of content such as videos, images, podcasts and links to other content.
Promote your LinkedIn profile and company page on your website.
Use relevant hashtags and keywords.
Follow influencers in your industry to demonstrate your interests.
Join and participate in relevant groups.
Comment on updates posted by others.
Mention (@) people in your posts if you want them to see it or if they are somehow connected to the post.
Don’t
|
https://medium.com/@squirrelsandbears/how-it-works-the-linkedin-algorithm-db918ceb8d59
|
['Petra Smith']
|
2019-03-29 14:36:19.401000+00:00
|
['LinkedIn', 'Social Media Marketing', 'Marketing', 'Social Media', 'Digital']
|
Essential vs. Expendable. How you should be thinking about your sponsorship packages
|
(BONUS: An active worksheet your team can use to game plan your outreach…but first you have to read the article to understand & use it. It will be worth it…I promise)
I’m a fanatic about how to thrive in economic downturns. And what I’ve realized is most of it comes from fear of looking back and saying “Man, I missed that opportunity.”
With this, I had been reading constantly on the dynamics of a downturn in all industries and see how we can shift to our work in sponsorship.
In that research, I came across a great HBR article that looked at the changing purchasing behavior in consumer segments and how brands can adjust their offering to ensure that sales don’t slump.
There was such an alignment to our sponsors and prospects that I edited the process to fit our industry based on their framework. Here is the result (Plus again a worksheet at the end to jam on):
Overall, your sponsors can be broken down into 4 groups of customers
Normally we break our customers into categories: food partners, car partners, etc., and build packages that overall encompass all of them.
There is an issue there. Some restaurants make more than others. Some have different goals. Some are franchisees vs. mom & pop.
This article brings a new way to segment our sponsors into four groups: Slam-on-the-brakes, Pained-but-patient, Comfortably well-off, Live-for-today.
Slam-on-the-brakes
As the article defines the segment:
The slam-on-the-brakes segment feels most vulnerable and hardest hit financially. This group reduces all types of spending by eliminating, postponing, decreasing, or substituting purchases. Although lower-income consumers typically fall into this segment, anxious higher-income consumers can as well, particularly if health or income circumstances change for the worse.
These are our sponsors hit hardest by the shutdown and will most likely see a slow recovery when we get back. One segment that jumps to the top of my head are most restaurants…others are barbershops and in-person attractions like bowling alleys.
The biggest test here is whether they are looking for money back or to break contracts. They will be slamming the brakes on any partnership spend.
Another test…have you seen any digital ads from this partner or category?
Pained-but-patient
Pained-but-patient consumers tend to be resilient and optimistic about the long term but less confident about the prospects for recovery in the near term or their ability to maintain their standard of living. Like slam-on-the-brakes consumers, they economize in all areas, though less aggressively. They constitute the largest segment and include the great majority of households unscathed by unemployment, representing a wide range of income levels. As news gets worse, pained-but-patient consumers increasingly migrate into the slam-on-the-brakes segment.
These are our sponsors that are less affected directly by the shutdown and have the ability to wait it out on their sponsorship packages. They still have a budget to spend…but they are waiting to spend as they think games are coming back.
Auto dealerships come to mind here. Right now they are pained, with little to no purchases coming…but if this doesn’t change and they can’t get people back into dealerships, they can drop into the Slam-on-the-brakes segment.
Comfortably well-off
Comfortably well-off consumers feel secure about their ability to ride out current and future bumps in the economy. They consume at near-prerecession levels, though now they tend to be a little more selective (and less conspicuous) about their purchases. The segment consists primarily of people in the top 5% income bracket. It also includes those who are less wealthy but feel confident about the stability of their finances — the comfortably retired, for example, or investors who got out of the market early or had their money in low-risk investments such as CDs.
Here we see the brands that are doubling down on marketing efforts. Notice I said marketing efforts…not sponsorship spend. While they might not be spending with you…you could still see them double down on digital ads.
Generally speaking, beer, insurance, and banks come to mind here. Again, we don’t want to place a whole industry here, as microbrews could be in the Slam-on-the-brakes segment while big brews fall into this category as comfortable.
Live-for-today
The live-for-today segment carries on as usual and for the most part remains unconcerned about savings. The consumers in this group respond to the recession mainly by extending their timetables for making major purchases. Typically urban and younger, they are more likely to rent than to own, and they spend on experiences rather than stuff (with the exception of consumer electronics). They’re unlikely to change their consumption behavior unless they become unemployed.
On the sponsorship end, these are the wildcards. Sometimes it’s because they are well-funded startups…sometimes it is because they just had a ton of cash on hand before the shutdown.
As the description says above they will still spend but push the timetable of their big spending.
The next part: Segmenting how they will respond to our packages
So we understand the segments, how can we understand how they will purchase in the new world? Well, this article does a great job of breaking down the purchases & products into a few categories:
Regardless of which group consumers belong to, they prioritize consumption by sorting products and services into four categories:
Essentials are necessary for survival or perceived as central to well-being.
Treats are indulgences whose immediate purchase is considered justifiable.
Postponables are needed or desired items whose purchase can be reasonably put off.
Expendables are perceived as unnecessary or unjustifiable.
Again this is based on consumers, but it holds true to our sponsorship categories as well. This is how they will analyze our products in their heads before buying.
How can we know which items fall under these categories? You’ll have to be brutally honest with yourself, ask your sponsors, and do a bit of analysis of what they have purchased in the past.
For example if a sponsor has always demanded couponing in their packages…most likely that is essential for them.
On the macro-level we can look at what they have said is most valuable as well, as IEG did with the below graph.
I’ll caveat this with the fact that this could be totally upside-down with the pandemic. I would imagine tickets & hospitality has plummeted, access to personalities (think IG Live with players) has skyrocketed. Use this as a hypothesis building chart…but really challenge these with your conversations with partners.
Again what is expendable to one is essential to another. Really use the knowledge you have on a partner in order to finish this.
From here we can start to chart how each segment will react & behave to our products based on their classification.
From the chart above we can see where we will have success and where we’ll struggle for each partner. As you can see, this will allow us to adjust our sales pushes to set ourselves up for success.
From here we can start to look at what tactics will work for each of these behaviors.
So how do we put this into practice? We can use this to build packages & prices for all segments to maximize revenue.
The second graph the article has created a beautiful map to do just that:
Some of these again we’ll have to adjust based on our industry…but we can get a pretty good idea of where we will have to make adjustments in order to be successful.
For example, even though a slam-on-the-brakes sponsor might find an asset essential…that doesn’t mean they won’t be price sensitive. We’ll need to unbundle our $20,000 packages into a la carte items that may be $500 each. I’m thinking social posts here, charge by the tweet….not the whole season.
Understanding these dynamics could be the make or break to a deal.
Ok, now how do I get tactical?
As the article states:
Begin by performing triage on your brands and products or services. Determine which have poor survival prospects, which may suffer declining sales but can be stabilized, and which are likely to flourish during the recession and afterward.
Each sponsor will be different, each category as well. But the first step will be to categorize each one of your current partners into these customer segments.
Once you do this, you can understand which of your products in the packages you send out will need to be tweaked and which will be fine for the downturn.
Off the top of my head, I’m really looking at signage as a tough sale. For most brands this is an expendable item: vanity metrics that don’t prove sales, and something you have to buy all at once.
If you understand that this asset will sell less in the downturn you can either adjust the pricing model or overall understand there will be a drop in sales here so we need to change our sales goals to push more essentials.
By breaking these down this allows us to understand which products we can push at what price to each segment. This also shows us that we are going to have to get creative with our packages and how we price them.
One important piece in this is don’t forget your core brand when looking at these items. As the article states:
When sales start to decline, companies shouldn’t panic and alter a brand’s fundamental proposition or positioning. For instance, marketers catering to middle- or upper-income consumers in the pained-but-patient segment may be tempted to move down-market. This could confuse and alienate loyal customers; it could also provoke stiff resistance from competitors whose operations are geared to a low-cost strategy and who have intimate knowledge of cost-conscious customers. Marketers that drift away from their established base may attract some new customers in the near term but find themselves in a weaker position when the recession ends. Their best course is to stabilize the brand.
You don’t want to move down market and start slashing prices. If you can understand the customer segments and their needs you can get more flexible with your prices and packages.
Don’t bring down the price of your packages, but offer alternatives and pieces. You don’t want to drop the value of what you are selling…rather, your goal should be to re-structure your assets so they can fit the cash-flow budgets of these segments.
Pepsi did this by offering single cans in supermarkets over the 12-pack. They cost the same…it just gives the Slam-on-the-brakes customers the ability to buy within their budget.
Overall we should be making small but powerful shifts with this information.
We may have to piece together more deals…but you will be able to reach all segments with the packages and therefore have more customers willing to buy.
You are being empathetic toward the sponsor’s situation. You are offering them the ability to still be a part of your influence and reach even if things are tough. This is our goal with this structure.
— — — —
And now….as promised Click HERE for a free spreadsheet worksheet that puts this into a process. You won’t be able to edit it…but if you duplicate it through File-> Make A Copy you can have your own version to work with.
As always my goal is to help sports sponsorship thrive. If we can understand our sponsorship needs better than other options we’ll win. This is hopefully the blueprint your organization needs to offer the perfect products that will sell and be a fit with our sponsors.
|
https://medium.com/sqwadblog/essential-vs-expendable-how-you-should-be-thinking-about-your-sponsorship-packages-342cc00b935
|
['Nick Lawson']
|
2020-05-13 02:42:47.110000+00:00
|
['Sponsorship Activation', 'Sports Business', 'Sponsorship', 'Sportsbiz']
|
Keep Workplace Safe from the COVID-19!
|
The last couple of months have been very stressful. Due to the Coronavirus pandemic, mayhem has been created in every industry, and millions of people worldwide have lost their jobs. Now the time has come when we have to step out of our homes to get back on track, and we have to take ownership to make our cities, states, countries, and society COVID-19-free. Are you planning to reopen your office? Then you are required to take the necessary measures to protect your safety and that of your employees from the virus. One of the first things to implement in the office is a Touchless Visitor Management System.
What is the Touchless Visitor Management System?
A Touchless Visitor Management System manages the arrival and departure of visitors to an office. A manual system is not safe enough and also slows down the productivity of the business. That’s why we have thought through how to make the visitor check-in experience seamless and touch-free. Going touchless is a way to prevent viruses from spreading and make your workplace safe. Visitors can check in without touching a shared tablet; they can use their own smartphones instead. Touchless sign-in saves time: once visitors arrive, they are not bothered by check-in and can more quickly get to who they are there to see.
Visitor Management System for Covid-19!
The visitor management system helps prevent Coronavirus transmission by going touchless. It also assists in contact tracing. Using this system, you can ask visitors questions about their health, travel, and possible exposure to COVID-19. These features can also be implemented for office staff. Visitors can check in using their own smartphones without the help of another person.
Know your Visitor’s Location:
With the visitor management system, the organization will know which visitors are on-premises.
With the visitor management system, you can record a digital timestamp when a person departs the property.
Contact Tracing:
Contact-tracing technology is one that helps authorities track the virus and warn staff with consistent alerts. Here are some ways to protect the workplaces with this tool.
The contact tracing system provides both visitors and employees with check-in and check-out, adding their information and capturing a photo through a user-friendly interface.
Alerts and Notifications:
With the visitor management system, you can alert your staff by any number of means, including email or phone call, when visitors arrive in the workplace.
A visitor management system also notifies the employee if they have been in contact with an infected visitor.
Watchlists:
An organization may have banned certain visitors from its facilities and want to restrict the entry of unwanted people.
With the visitor management system, receptionists can be alerted with information such as photographs and background details.
Conclusion!!
The visitor management system has upgraded itself with touchless technology and become the best system to prevent the spread of viruses. These effective systems not only consider security but also offer reliable features for curbing the Coronavirus pandemic. Let’s make our workplaces virus-free with a touch-free visitor management system. Start your touchless journey with Vizitor’s signup.
|
https://medium.com/vizitor/keep-workplace-safe-from-the-covid-19-76587750182a
|
['Ritika Bhagat']
|
2021-03-03 16:11:28.818000+00:00
|
['Coronavirus', 'Visitor Management System', 'Workplace', 'Covid 19', 'Safety']
|
The Sun and The Moon
|
A short story.
“Grandpa, can you tell me one last story? I promise to go to bed afterward,” said the little girl with big bright sparkly little eyes, staring opposite an elderly man with long, silky white hair.
“Ahh, how can I refuse when you look at me like that, you little troublemaker? This is the last one for today, and you must go to bed afterward or else no more bedtime stories from tomorrow onwards, okay?”
“Okay-y-y, pinky promise.”
“What kind of stories do you wish to hear?” asked the elderly man, while sitting himself in the rolling chair beside her bed.
“Ohhh ohhh hmmm,” the little girl uttered. She glanced across her bed as she put her pointer finger on her chin, a gesture she learned from the elderly man. “Oh I know, I want to listen to a story about friendship!” the girl exclaimed, proud of her newfound knowledge of the topic.
“Hmm, then let me tell you a story about the Sun and the Moon,” replied the elderly man scratching his chin with his pointer finger.
“Ha ha ha, how can the Sun and Moon be friends? They are days and nights apart. That’s silly grandpa” The warm giggles of the little girl warmed up the chilly night.
“Ooof, then what do you think friendship is?” asked the elderly man, squeezing the tip of his granddaughter’s nose teasingly.
“Ouch no, that hurts.” Pushing her grandpa’s hand away, the little girl answered seriously, “a friendship is when you share your favourite chocolate chip cookie with your friends and they share their favourite toys with you.”
“Well, that is surely one of the ways to explain friendship. Friendship is a wondrous thing and there are many ways to explain it. You little one, do you want to listen to it or not?”
“I wanna listen to it, sorry I’ll be quiet”.
“Okay.” The elderly man carefully put a blanket over his granddaughter, patted the top of her head, and began telling the story.
*******************************************************
Friendship. Friendship is extraordinary. It is diverse and possesses different meanings for everyone. For most people, friendship is a mixture of one’s admiration, affection, compassion, loyalty, commitment, and dependence on one another. For some, friendship goes beyond sharing time: for them, it is for eternity. Whereas for some others, it is not: they part together with a bittersweet memory and move forward in life. Friendship is bizarre, and it is rare in life to find a friendship that is genuine, true, and pure. A rare friendship is when someone understands you better than yourself. Someone you can share and express everything regarding yourself. Someone who takes a stand in your most salutary concerns in a crisis, and the Sun and Moon are a great illustration of that.
It was the little things in their lives that made them shine the brightest in the dark. Or perhaps they shined bright just for each other. In any way, the sun and the moon were indeed two of the brightest luminescence. If one was like a volcano blaze about to erupt, then the other was as a gleaming calm of radiance. Yet when you mention one you can’t mention without the other.
I guess both were each other’s saving grace. When the Sun was born, it was the brightest of all, blinding everyone with its brightness so they could not see anything other than itself. Everyone has their own selfishness and wishes to shine brighter than one another. Yet it was their envy and selfishness that led them to isolate the Sun.
The same can be said for the Moon. When the Moon was born, it shined brighter than the rest, despite its small size. Though the Moon didn’t blind everyone with its radiance it was still the brightest luminance of all. Once again behind the faces of admiration hid the darkness of envy and jealousy.
Their meeting was quite accidental. The Sun and the Moon were not fated to meet when they did and in the blindness of the eclipse, it was accidentally perfect. In that endless perpetual darkness, they found each other. In the calamity of their existence, they found the remedy. And their darkest cadences started to become filled with wondrous sparkling light.
Years turned to decades, and decades turned to centuries. The young Sun and Moon became mature and filled with youthful radiance. Maturity comes with its disposition. It is a journey from physical to the intellectual, from childhood toward adulthood and developing intellectually, comes with its own responsibilities, burdens, and rules. No one is an exception to that, not even the Sun and Moon.
But despite the responsibilities, burdens, and rules, there is always room for commonalities. Some struggle with it. Some in-depth their mutual understanding. Whereas for the Sun and Moon, it was the latter. In their commonalities, they created a mutual friend, the Earth.
Their duo became a trio. Earth couldn’t shine as bright as the Sun and Moon or the rest. Yet when the Sun and Moon shined, their luminescence transcended the beauty of Earth, highlighting its uniqueness. Their friendship was filled with love and laughter. It was flawless, but as the hour of the clock ticked, there came another calamity.
The year of the Eclipse. The year of separation came. Earth could not stand the darkness of the eclipse, as it was born weak. As the hour of the clock ticked, the weaker and the closer to its deathbed the Earth got. And once again, the darkness of eclipse started shadowing over the three of them. Along with the darkness, there is always light, but in this cadence, the consequences of the light could only be won by the separation of the Sun and Moon.
The Sun and Moon cannot coexist without each other. Just as in every darkness there is light, in every light there is a hint of darkness. The one cannot exist without the other. The Sun and Moon are not some altruistic beings; they too have their own selfishness in their darkness. Yet no matter how selfish they wanted to be, at this hour of the clock they could not.
A rare friendship is when someone takes a stand in your most salutary concerns in a crisis. Even if they wanted to, they could not, for how much they loved each other and the Earth. They had a choice to be selfish, but being selfish in this choice was another type of death. Year by year the darkness grew and commenced to swallow the other radiances of light, little by little, leaving only the Sun, Moon, Earth, and a few others.
The Sun and the Moon wanted the Earth to live a life full of content, safe and well. For them, the Earth was the precious entity they created and despite the opposition of the Earth, they made their choice. And in this chaotic hour of darkness, they taught the Earth a bittersweet lesson. That is no matter how much one begs someone, they cannot change the mind of someone who made their decision.
As the hour of the clock ticked, the hour for the solution to this calamity came: the sacrifice of the Sun and the Moon and their separation. Even the brightest luminescences have to pay the price. Earth, as the sole witness, watched it all happen — the last interaction of the Sun and Moon.
As the cadence for separation came between the Sun and Moon. No intimate gesture, no words spoken between them. They locked eyes for a brief moment, and somehow they both knew that this would be the last time they would ever see each other.
*************
“Wait, what kind of friendship story is this? They didn’t even say goodbye. That’s not what friends do, they just looked at each other. And all just for what, a new friend they created? This doesn’t make sense. Hmmph,” the girl asked in indignant confusion.
“Ha ha,” laughed out loud the elderly man and paused before answering carefully. “That is because you are still little. Sometimes just a little glance and eye contact can have unfathomable emotions beneath them. It is similar to music being played in the background that contains a hidden message. Perhaps one day when you grow up and listen to a sad song while reminiscing this moment you will fully understand it. But for now, you better go to sleep.”
“B-But that doesn’t make any sense. Grandpa you party pooper.”
“Yeah, yeah I am. It is pretty late in the night, goodnight little one.” said the elderly man as he stood up and kissed the forehead of his granddaughter dearly before he left, and just as he was about to close the door the little girl asked a question.
“Wait, what song should I listen to, when I think of this story again?”
Surprised by the sudden question of his granddaughter, the elderly man went stiff for a second by the door, before he gave a warm-hearted smile and replied, “Maybe Kokoronashi. Now Sleep Well, grandpa loves you”.
“I love you too, Grandpa. Goodnight,” The little girl said, watching her grandpa close the door. She closed her eyes and fell into a peaceful slumber.
************
Kokoronashi playing in the background…..
‘If I abandoned everything
Would it be easier to laugh and live?
My chest is starting to hurt again
Don’t say any more
If I forgot everything
Would it be easier to live without tears?
But I can’t do that
Don’t show me any more
However close I get to you
I only have one heart
It’s cruel, it’s ugly, I’d rather you take my body
And destroy it, tear it apart, do as you like with it
No matter how much I scream and struggle, or my eyelids swell
You just hold me without letting go
You can stop now
If my wishes came true
I would want the same as you
But I don’t have any
At least come here
However much I’m loved by you
I only have one heart
Stop, quit it, you’re being too kind to me
I can’t understand, however, I try
It hurts, I’m in pain, use your words and tell me
I don’t know what’s going on, don’t leave me alone
It’s cruel, it’s ugly, I’d rather you take my body
And destroy it, tear it apart, do as you like with it
No matter how much I scream and struggle, or my eyelids swell
You just hold me without letting go
You can stop now
If I have a heart
How could I find it?
You smile a little and say to me
If that’s what you’re looking for, it’s right here”
Ver.Sou — Kokoronashi/Gumi Lyrics
They locked eyes for a brief moment and somehow they both knew that this would be the last time they would ever see each other.
|
https://medium.com/@mushyy/the-sun-and-the-moon-fd6ecaf4728c
|
[]
|
2020-12-24 19:59:53.260000+00:00
|
['Short Story', 'Short Stories And Poems', 'Moon', 'Poem', 'Sun']
|
How to improve data quality for machine learning?
|
Why is data preparation so important?
Photo by Austin Distel on Unsplash
It is no secret that data preparation in the process of data analytics is ‘an essential but unsexy’ task and more than half of data scientists regard cleaning and organizing data as the least enjoyable part of their work.
Multiple surveys with data scientists and experts have indeed confirmed the common 80/20 trope — whereby 80% of the time is mired in the mundane janitorial work of prepping data, from collecting and cleaning to finding insights in the data (data wrangling or munging); leaving only 20% for the actual analytic work of modeling and building algorithms.
Thus, the Achilles heel of a data analytic process is in fact the unjustifiable amount of time spent on just data preparation. For data scientists, this can be a big hurdle in productivity for building a meaningful model. For businesses, this can be a huge blow to the resources as the investment into data analytics only sees the remaining one-fifth of the allocation dedicated to the original intent.
Heard of GIGO (garbage in, garbage out)? This is exactly what happens here. A data scientist arrives at a task with a given set of data, with the expectation of building the best model to fulfill the goal of the task. But halfway through the assignment, he realizes that no matter how good the model is, he can never achieve better results. After going back and forth, he finds out that there are lapses in data quality and starts scrubbing through the data to make them “clean and usable”. By the time the data are finally fit again, the deadline is creeping in and resources have started drying up, and he is left with a limited amount of time to build and refine the actual model he was hired for.
This is akin to a product recall. When defects are discovered in products already on the market, it is often too late to remedy and products have to be recalled to ensure the public safety of consumers. In most cases, the defects are results of negligence in quality control of the components or ingredients used in the supply chain. For example, laptops being recalled due to battery issues or chocolates being recalled due to contamination in the dairy produce. Be it a physical or digital product, the staggering similarity we see here is that it is always the raw material taking the blame.
But if data quality is a problem, why not just improve it?
To answer this question, we first have to understand what is data quality.
There are two aspects to the definition of data quality. First, the independent quality, a measure of the agreement between the data views presented and the same data in the real world based on inherent characteristics and features; second, the quality of the dependent application, a measure of conformance of the data to user needs for intended purposes.
Let’s say you are a university recruiter trying to recruit fresh grads for entry-level jobs. You have a pretty accurate contact list, but as you go through the list you realize that most of the contacts are people over 50 years old, making it unsuitable for you to approach them. By applying the definition, this scenario fulfills only the first half of the complete definition — the list has accuracy and consists of good data. But it does not meet the second criterion — the data, no matter how accurate, are not suitable for the application.
In this example, accuracy is the dimension we are looking at to assess the inherent quality of the data. There are a lot more different dimensions out there. To give you an idea of which dimensions are commonly studied and researched in peer-reviewed literature, here is a histogram showing the top 6 dimensions after studying 15 different data quality assessment methodologies involving 32 dimensions.
A systemic approach to Data Quality Assessment
If you fail to plan, you plan to fail. A good systemic approach cannot succeed without good planning. To plan well, you need a thorough understanding of the business, especially of the problems associated with data quality. In the previous example, one should be aware that the contact list, albeit correct, has a data quality problem: it is not applicable to the goal of the assigned task.
After the problems become clear, data quality dimensions to be investigated should be defined. This can be done using an empirical approach like surveys among stakeholders to find out which dimension matters the most in reference to the data quality problems.
A set of assessment steps should follow suit. Design the implementation so that these steps map the assessment, based on the selected dimensions, onto the actual data. For instance, the following five requirements can be used as an example:
[1] Timeframe — Decide on an interval for when the investigative data are collected.
[2] Definition — Define a standard on how to differentiate the good from the bad data.
[3] Aggregation — How to quantify the data for the assessment.
[4] Interpretability — A mathematical expression to assess the data.
[5] Threshold — Select a cut-off point to evaluate the results.
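As a rough sketch, the five requirements above can be wired together in a few lines of Python with pandas. The dataset, field names, and threshold below are all invented for illustration (reusing the recruiter example from earlier), not part of any particular assessment methodology:

```python
import pandas as pd

# Hypothetical contact dataset -- all fields and values are invented for illustration.
df = pd.DataFrame({
    "email": ["a@x.com", "b@x.com", None, "d@x.com", "e@x.com"],
    "age": [23, 55, 24, 61, 22],
    "collected_on": pd.to_datetime(
        ["2020-07-01", "2020-07-02", "2020-07-15", "2020-06-01", "2020-07-20"]
    ),
})

# [1] Timeframe: keep only records collected in the interval under assessment.
window = df[df["collected_on"] >= "2020-07-01"]

# [2] Definition: a "good" record has an email AND fits the intended use
#     (here: recruiting fresh grads, so age under 30).
good = window["email"].notna() & (window["age"] < 30)

# [3] Aggregation: count good records against the total in the window.
n_good, n_total = int(good.sum()), len(window)

# [4] Interpretability: express quality as a simple ratio in [0, 1].
quality_score = n_good / n_total if n_total else 0.0

# [5] Threshold: decide whether the data are fit for the analytic task.
THRESHOLD = 0.8
fit_for_use = quality_score >= THRESHOLD
print(f"quality={quality_score:.2f}, fit_for_use={fit_for_use}")
```

A real assessment would of course use dimensions and cut-offs agreed with stakeholders, but the shape of the process — filter, classify, aggregate, score, compare — stays the same.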
Once the assessment methodologies are in place, it is time to get hands-on and carry out the actual assessment. After the assessment, a reporting mechanism can be set up to evaluate the results. If the data quality is satisfactory, then the data are fit for further analytic purposes. Else, the data have to be revised and potentially to be collected again. An example can be seen in the following illustration.
|
https://towardsdatascience.com/how-to-improve-data-preparation-for-machine-learning-dde107b60091
|
['Jack Tan']
|
2020-08-12 15:48:34.720000+00:00
|
['Data Management', 'Machine Learning', 'Data Science', 'Big Data', 'Data Quality']
|
Chalkboard
|
Photo by Brooke Cagle on Unsplash
When you are far, there is warmth.
The warmth of sadness, solitude, and mislay of self.
©2020 Anamika Pokharel
|
https://medium.com/chalkboard/far-you-39de5d3266bd
|
['Anamika Pokharel']
|
2020-11-11 18:49:16.511000+00:00
|
['Longdistancerelationship', 'One Line', 'Lovers', 'Poetry', 'Love']
|
Georgia and the European Union
|
About the author: Justin Tomczyk ’20 is an FSI Global Policy Intern with the Economic Policy Research Center. He is currently an a Master’s student in Russian, East European, Eurasian Studies, at Stanford University.
A major component of Georgian foreign policy is the ongoing process of Euro-Atlantic integration. In an effort to distance itself from the Russian Federation, Tbilisi has fostered closer ties with the major pillars of the Euro-Atlantic community. One of the most noteworthy examples of this partnership is Georgia’s relationship with NATO and the United States. Outside of the areas of security and defense, the European Union remains one of Georgia’s largest partners on economic and political matters. While this relationship is largely based on a mutual interest in extending the rules-based order of the European Union to the Caucasus, there is also the potential that European integration and a healthy relationship with the EU may be used as a counterweight against Russian aggression in the region. To many in the South Caucasus, the EU and its associated forms of governance represent a departure from the corruption, oligarchy, and general mismanagement seen since the collapse of the USSR. While naturally Georgia will continue to trade extensively with other countries in the post-Soviet space (due to geography and preexisting supply chains), integrating with the EU and its associated single market would be one of the most viable ways to develop an innovative economy and efficient governance while moving beyond the economic malaise seen in much the post-Soviet space.
Georgia’s main form of engagement with the EU is through the framework of the Eastern Partnership (EaP) — an initiative designed to foster closer ties between the EU and six former Soviet Republics in Eastern Europe and the Caucasus. As a part of the Eastern Partnership, Georgia regularly participates in a variety of summits and conferences with not only representatives of the EU but also other members of the EaP. In addition to maintaining diplomatic relations with virtually every member of the EU, Georgia also maintains diplomatic representation with the union’s super-state structures through its embassy in Brussels and the EU delegation in Tbilisi. With regards to economic and political integration, two treaties serve as the bedrock for EU-Georgia relations. The first is the EU-Georgia Association Agreement, which is a bilateral treaty designed to approximate legal standards, regulations, and other elements of legislation between the EU and Georgia. The purpose of this is to enable a greater sense of political alignment between both parties. An example of the impact of this association agreement is the alignment of standards for biometric passports alongside entry and exit protocols, leading to the mutual removal of entry visas for citizens of Georgia and the EU. The second major treaty is the Deep and Comprehensive Free Trade Agreement (DCFTA), which is designed to remove as many barriers to trade as possible between Georgia and the EU. While the agreement does not contain provisions for the establishment of a customs union, the DCFTA has partially integrated Georgia into the Single Common Market while allowing Tbilisi to still hold and pursue FTAs with third parties.
Although Georgia has been officially elevated to the status of an “associate” of the European Union, Brussels often takes an ambiguous posture towards the prospect of future Georgian membership in the EU. The largest obstacle to Georgia’s EU aspirations is the unresolved status of the territories of Abkhazia and South Ossetia. These two territories are de jure part of Georgia but under the de facto control of Russian-backed separatist authorities. The unresolved nature of these conflicts and occasional outbreaks of fighting raise questions about Georgia’s ability to maintain its own territorial integrity. Additionally, a sense of “expansion fatigue” has taken hold in some parts of Western Europe following the 2004 eastern expansion of the EU. This is compounded by the perspective held by some in Brussels that eastward expansion of the EU serves to antagonize Russia. Regardless of these challenges, Georgia has shown itself to be a committed partner of the European Union and will likely remain the EU’s main partner in the South Caucasus.
|
https://medium.com/freeman-spogli-institute-for-international-studies/georgia-and-the-european-union-96bf16c7249b
|
['Fsi Student Programs']
|
2020-09-10 15:20:08.402000+00:00
|
['Georgia', 'Internships', 'Fsi Students', 'Stanford']
|
From CarrierWave to Active Storage
|
At Livewire Markets, a practice we follow is to keep upgrading Ruby, Rails, and Ruby gems to the latest possible versions, and to exploit built-in Rails features as much as we can.
Attachment handling is a popular and important feature in the Livewire web application. We use it to upload and display contributors’ profile pictures, attachments, and embedded images in wires…
In the old days, we used CarrierWave to manage attachment uploads and were quite happy with it until Active Storage was born. In this article, I will walk you through how we migrated attachment handling from CarrierWave to Active Storage.
Key differences
- Active Storage uses two polymorphic tables, active_storage_blobs and active_storage_attachments, to store all types of attachments, so we don't need to create a database migration whenever we have a new type of attachment as we do with CarrierWave.
- Active Storage can do image processing, such as resizing, at runtime.
- CarrierWave uses a generated method such as profile_picture_url to get the URL of the attachment, while Active Storage uses the helper method url_for(profile.profile_picture).
- Active Storage does not have built-in validation helpers as we have in CarrierWave. Fortunately, we can use the active_storage_validations gem for validation.
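As a sketch of how that gem is used in a model (the attribute name, accepted content types, and size limit here are illustrative assumptions, not taken from the Livewire codebase):

```ruby
# Declarative validations provided by the active_storage_validations gem.
# Attribute name and limits are assumptions for illustration.
class Profile < ApplicationRecord
  has_one_attached :profile_picture

  validates :profile_picture,
            content_type: ['image/png', 'image/jpeg'],
            size: { less_than: 5.megabytes }
end
```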
Installation
Add the following line into the config/application.rb file
require "active_storage/engine"
Then run
bundle exec rails active_storage:install
It will generate the config/storage.yml file for storage configuration, and the database migration to create the two tables active_storage_blobs and active_storage_attachments.
Run the migration: bundle exec rake db:migrate
Configuration
Following is our config/storage.yml but you can change it to suit yours.
local:
  service: Disk
  root: <%= Rails.root.join('storage') %>

amazon:
  service: S3
  bucket: <%= ENV['AWS_S3_BUCKET'] %>
  region: <%= ENV['AWS_REGION'] %>
  upload:
    cache_control: <%= "public, max-age=#{365.days.to_i}" %>
We use local config for the development and test environments
# Active Storage
config.active_storage.service = :local
and amazon config for production .
# Active Storage
config.active_storage.service = :amazon
Zero downtime migration
We want to keep the business running as usual while we are in the process of the migration. Therefore, we will store attachments in both systems.
mount_uploader :profile_picture, ProfilePictureUploader
has_one_attached :as_profile_picture
Then we implement an Active Record callback to make sure that whenever the attachment maintained by CarrierWave changes, the one maintained by Active Storage is updated accordingly.
after_commit do
  update_active_storage if previous_changes.keys.include?('profile_picture')
end

def update_active_storage
  self.as_profile_picture.purge if self.as_profile_picture.attached?
  sync_profile_picture if self.profile_picture.present?
rescue StandardError => error
  Log.error(error)
end

def sync_profile_picture
  picture = self.profile_picture
  picture.cache_stored_file!
  file = picture.sanitized_file.file
  content_type = picture.content_type
  self.as_profile_picture.attach(io: file, content_type: content_type, filename: self.attributes['profile_picture'])
end
With the above code in place, all newly uploaded attachments will be stored and synced in both systems. Now we can write a rake task to migrate the existing attachments uploaded by CarrierWave to be uploaded and managed by Active Storage as well.
namespace :active_storage do
  desc "Migrate profile pictures to use Active Storage"
  task migrate_profile_pictures: :environment do
    puts '*' * 50
    puts "Start migrating #{Profile.count} profiles..."
    Profile.find_each do |profile|
      next if !profile.profile_picture.present? || profile.as_profile_picture.attached?
      profile.sync_profile_picture
    end
    puts "Completed migrating #{Profile.count} profiles..."
    puts '*' * 50
  end

  desc 'Rename as_profile_picture to profile_picture'
  task rename_as_profile_picture_to_profile_picture: :environment do
    sql = <<-SQL
      UPDATE active_storage_attachments
      SET name = 'profile_picture'
      WHERE name = 'as_profile_picture';
    SQL
    ActiveRecord::Base.connection.execute(sql)
  end
end
We run the first rake task to migrate all existing attachments in CarrierWave to Active Storage.
bundle exec rake active_storage:migrate_profile_pictures
Then we can rename the has_one_attached attribute, remove the CarrierWave uploader, and remove the Active Record hook.
has_one_attached :profile_picture
Finally, we can run the second rake task to update existing records in the active_storage_attachments table to use the new name profile_picture .
bundle exec rake active_storage:rename_as_profile_picture_to_profile_picture
We implement a utility class to process image resizing and return the URL of a given file.
class ActiveStorageUtils
  def self.image_url(image, size = nil)
    url_for(image, size)
  end

  def self.file_url(file)
    url_for(file)
  end

  # Note: a bare `private` does not apply to `def self.` methods,
  # so we use `private_class_method` instead.
  private_class_method def self.url_for(file, size = nil)
    return unless file && file.attached?

    url_helpers = Rails.application.routes.url_helpers
    if size && file.variable?
      url_helpers.rails_representation_url(file.variant(resize_to_fill: size).processed, only_path: true)
    else
      url_helpers.rails_blob_path(file, only_path: true)
    end
  rescue StandardError => error
    Log.error(error)
    nil
  end
end
Then use it in the model class
# Active Storage
PICTURE_SIZES = {
  thumbnail: [20, 20],
  medium: [50, 50],
  wire: [65, 65],
  large: [200, 200]
}.freeze

def profile_picture_url(size = nil)
  ActiveStorageUtils.image_url(self.profile_picture, PICTURE_SIZES.fetch(size, nil))
end
Remove CarrierWave
Finally, we can remove the ProfilePictureUploader class, the profile_picture column and the carrierwave gem.
Voila! Now all profile pictures are uploaded and maintained by Active Storage.
|
https://medium.com/@huythieuhoang-0109/from-carrierwave-to-active-storage-b2fd3e71407f
|
['Huy Hoang']
|
2020-12-17 03:27:02.550000+00:00
|
['Active Storage', 'Livewire', 'Carrierwave', 'Ruby on Rails', 'Migration']
|
I Will Use My Voice
|
How have I always had hope? No matter what there is always a spark of hope in me. On the worst days at the darkest times, when the only thing I believe is that I am unworthy to exist… somehow there is a spark of hope. Somehow, tomorrow arrives and I’ve made it.
Where does that come from? How has it not been extinguished? The more I delve into my past, and allow all that which I have crammed down into my body and out of mind to resurface, I am in complete awe with myself. I have watched mini series based on true events that haven’t even scratched the surface of what I am now remembering and there was no hope. The victims/survivors became addicted to drugs, gave up, released themselves from this world (I like that much better than committed suicide, I don’t think suicide is a crime or a sin — and I will never think less of anyone who has attempted it or succeeded) So many people give up, and it isn’t their fault. It was never their fault.
How many times when I was 12–14 years of age sat with a razor blade, in utter fucking despair, begging for the courage to kill myself. Or how many times did I pray to god to just let me die. Please let the hurt end and take me home so I don’t have to feel this anymore. I hurt so fucking bad inside and had been left to deal with and process all the abuse on my own. But I could never do it. You know why? I was afraid that I would miss out. I didn’t know what might be coming and I had hope that it was better.
Some of it was better, and some of it was not. I was conditioned to accept abuse as love, and my life is filled with abuse. I don’t think I could even remember all of it, and some of it was so trivial compared to others it isn’t even worth worrying over. That is some seriously fucked up shit. How can humans do this to other human beings? How can one human dehumanize another? I can come up with explanations about what trauma and abuse they may have went through… but that is all it is. An explanation. It can never be an excuse. Never. We all have choices.
I have a choice and a voice. A voice that I am going to use, over and over again. When you look the other way, and allow abuse to continue because it’s not “your place” to call it out, or say something… your silence allows it to continue. Yes, the abuser is the one at fault, but anyone who knows and keeps silent… it’s on you as well. We have got to learn how to stand up for one another. Abuse can no longer stay in the dark. We have got to protect our children and the people who need us most.
Not everyone has the spark of hope that was inextinguishable in me. And for each of them I write. I speak up. We can no longer allow this to live in the dark and turn the other way. Yes, it hurts. It affects everyone on this earth. 1 in 4 girls and 1 in 6 boys are sexually abused. That means you or someone you know has been directly affected.
This is your fight. This fight belongs to all of us. Do not let the light of their hope be extinguished because you don’t know what to do, or because it’s safer and easier to look the other way. We have got to do more than believe. We have got to stand up. We have got to stand up for each other. That time is now. Right now, just start standing up, and each time it will get easier and inspire others to do the same. Never give up hope. Each person who is being abused is worth standing up for. Stand up!
|
https://medium.com/@cary.bach/i-will-use-my-voice-2ec95d0268f2
|
['Cary Bach Donahou']
|
2019-08-22 04:43:32.239000+00:00
|
['Childhood Sexual Abuse', 'Suicide Awareness', 'Life Lessons', 'PTSD', 'Healing From Trauma']
|
Is Marijuana Addictive?
|
Marijuana is addictive, despite what some people think. If it alters the user's mind and makes them feel good, then it has the potential for addiction.
Screw you, I can quit anytime I want.
Those of us who have experienced addiction of any nature know quitting a habit is difficult. Quitting involves changing many factors of the user’s life. They may have to move away from the substance, putting distance between themselves and the drug, old friends, and other potential triggers.
The user has to set firm boundaries in their relationships with friends and family members who are high risk. This can be a hurtful and difficult process to endure. I’ve learned to never shame a user, no matter their substance. Instead, encourage them to make the necessary changes to leave the substance behind.
But it’s just a harmless plant, exclaims the exasperated pothead. Yet a coca plant is also just a harmless plant until its leaves are extracted and cocaine is produced.
But it’s legal…
Thirty-three states in America have legalized marijuana. But just because it is legal in some places does not mean cannabis is good for us to consume. Heck, opioids are legal and people overdose every day. Nicotine is legal although we know it causes lung cancer. Sugar is legal and people are unhealthily obese.
|
https://medium.com/be-unique/is-marijuana-addictive-2474ceba4d56
|
['Olivia Fletter']
|
2021-01-02 05:34:40.057000+00:00
|
['Addiction', 'Health', 'Marijuana', 'Personal Growth', 'Drugs']
|
Warming your lambda functions using the AWS CDK
|
The AWS CDK is an awesome tool for describing your infrastructure.
In a language of your choice, TypeScript is my own, you can define and deploy your stack. For example:
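The embedded code example did not survive here; a minimal sketch of such a stack, using v2-style imports (the stack name, runtime, and asset path are assumptions), might look like:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// A minimal stack containing a single Lambda function.
class MyStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    new lambda.Function(this, 'MyFunction', {
      runtime: lambda.Runtime.NODEJS_14_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda'), // directory containing index.js (assumption)
    });
  }
}

const app = new cdk.App();
new MyStack(app, 'MyStack');
```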
and then simply run:
cdk deploy
to send it out the door.
Recently, AWS added provisioned concurrency, which lets you ensure a certain number of instances of your functions are available between certain times. This is great and can be done like so:
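The embedded snippet is missing; one way to sketch it, assuming `fn` is the `lambda.Function` from your stack (the alias name, capacity numbers, and schedule are illustrative):

```typescript
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as appscaling from 'aws-cdk-lib/aws-applicationautoscaling';

// Publish a version and point an alias at it with warm instances.
const alias = new lambda.Alias(this, 'LiveAlias', {
  aliasName: 'live',
  version: fn.currentVersion,
  provisionedConcurrentExecutions: 5,
});

// Optionally scale the warm pool on a schedule ("between certain times").
const target = alias.addAutoScaling({ minCapacity: 1, maxCapacity: 10 });
target.scaleOnSchedule('WarmBusinessHours', {
  schedule: appscaling.Schedule.cron({ hour: '8', minute: '0' }),
  minCapacity: 5,
});
```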
Another approach is polling the lambda. In days gone by I used the Serverless framework, which has a warmup plugin you can use. I wondered how I could do the same with the CDK. Well, here’s how:
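The original gist is gone; one way to do it, again assuming `fn` is your `lambda.Function` (the rule name, interval, and `warmup` payload key are assumptions — the handler should return early when it sees that key):

```typescript
import * as events from 'aws-cdk-lib/aws-events';
import * as targets from 'aws-cdk-lib/aws-events-targets';
import { Duration } from 'aws-cdk-lib';

// Invoke the function every 5 minutes with a marker payload so the
// handler can return early without running any business logic.
new events.Rule(this, 'WarmupRule', {
  schedule: events.Schedule.rate(Duration.minutes(5)),
  targets: [
    new targets.LambdaFunction(fn, {
      event: events.RuleTargetInput.fromObject({ warmup: true }),
    }),
  ],
});
```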
This will ensure you don’t invoke your business logic while keeping costs low and everything nice and toasty.
|
https://medium.com/@louisjq/warming-your-lambda-functions-using-the-aws-cdk-9cf35a420fe
|
['Louis Q']
|
2020-12-18 14:42:39.945000+00:00
|
['AWS', 'AWS Lambda', 'Cdk']
|
Say Hi to Mura!
|
In 2017, I decided to leave CNN-IBN, a place that had been home for more than half a decade. “Don’t quit” was the advice I received from nearly everyone from the industry.
Why? Since I had made it on-air, I was expected to stay there. It was the ‘sweet spot’ that nearly everyone in a TV newsroom was told to aspire for.
Many well wishers volunteered to make me see sense in staying back “Why are you quitting?! You look so good on-air!”.
While I had made up my mind to leave, that statement meant to convince me to stay is what reassured me in my exit.
I left that ‘sweet spot’ to help pilot data journalism projects across multiple newsrooms in India. During the last two years, I’ve had the opportunity to learn things about journalism that have little to do with mass media, big newsrooms, first news breaks and conventional storytelling. Why did I choose to learn new things? As my friend Lakshmi Sivadas puts it succinctly— In a non-linear world, it does not pay to be linear sources of information gatherers and distributors.
In my previous post, I talked about why I decided to become a Tow-Knight fellow — understanding how people consumed the news outside of big newsrooms was important to me.
Over the course of my interviews, I learnt that there are students in India who consume the daily news with the sole purpose of learning from it to clear the general knowledge sections of various exams. I am hoping to build Mura, my project as a Tow-Knight fellow as a news-based learning tool that can help this community.
Mura is a Telugu word roughly translated as a “hand’s measure”, used here to inspire taking one small step at a time.
Keeping this in mind, I envision Mura to be a virtual mentor that teaches students from the news and tracks their learning. For this, I built my first prototype in April and had 10 people test it.
Based on the feedback, the key takeaways have been both for understanding the technology needed to build this, as well as figuring out a conversational voice for Mura to be an effective learning tool.
|
https://medium.com/journalism-innovation/say-hi-to-1327efbe259a
|
['Kamala Sripada']
|
2019-04-08 19:37:04.206000+00:00
|
['Journalism']
|
The Ultimate Security Camera Installation and Purchasing Guide 2021 — Houston Security Solutions
|
Robbie Handy · Aug 9 · 36 min read
Pros of Security Cameras
The most important benefit of security cameras is that they deter crime. Whether the cameras are installed in your home or business, the sight of them typically scares off anyone with criminal intentions, since they realize their unlawful activities will be recorded on video. Security camera installation in Houston, TX is an excellent choice for areas with high crime rates. It will help keep your business or house from becoming a target.
Observing settings and actions — Security cameras may be installed virtually anywhere. Power over Ethernet (PoE), a newer technology, allows power and video to be carried to a camera over a single connection. Depending on your needs and requirements, you may install visible or concealed (covert) cameras to monitor the actions of visitors to your home or business. This is a fantastic method for monitoring and tracking suspicious visitors.
Pick up and put together evidence — Security cameras placed by a competent security camera installation firm are ideal for monitoring sounds and activities. Furthermore, as technology advances, cameras are increasingly equipped with high-quality audio and video capabilities for recording and documenting occurrences.
Improve public safety — Surveillance cameras, which are commonly utilized in public locations such as crosswalks, malls, and parking lots, provide great surveillance options for preventing and deterring crimes in public.
Reduce crime in public places — It is improbable that a person will commit a crime if they are aware that a surveillance camera will capture them in the act. Furthermore, if there is any suspicion of a crime occurring in a certain location, the area might be vacated as a safety measure.
Convenient monitoring from anyplace — Surveillance cameras are highly efficient since the camera feed can be accessed via the internet or even your smartphone.
You may use the camera system to keep an eye on your children as well as your pets. Pets are an important part of many people’s lives, and leaving them at home alone may be distressing as well as costly. You may check in on your dogs from work with a professionally fitted security camera system.
Cons of Security cameras
Costs — When compared to fake cameras, genuine security cameras are clearly more expensive to install, depending on the features, number of cameras, and monitoring systems.
Vulnerability — Advances in technology have enabled thieves and other intruders to detect genuine or dummy cameras and develop ways to deactivate or disconnect the power supply of cameras that have not been professionally installed.
Privacy infringement — Security cameras have sparked debate across the board, particularly in the professional sectors. Employees may perceive security cameras as an invasion of privacy or interpret their existence as an indication that their boss does not trust them.
Installation costs a lot of money — This is a significant disadvantage of using security cameras. Professional systems are often acquired on an a la carte basis, which means they do not come in pre-configured generic bundles. Professional systems are often assembled by the installation firm to precisely match the customer’s application and demands.
Complex to use — If you are unfamiliar with technology, you may find it difficult to operate some of the highest-quality cameras on the market. This is becoming less of an issue as time goes on. Surveillance camera makers are figuring out how to include high-tech capabilities into surveillance component software in a way that non-tech users can locate and utilize.
Surveillance systems are easily abused — Hackers and vandals may attack surveillance cameras installed in public areas.
What is a Surveillance Camera?
Security cameras and surveillance cameras are essentially the same thing, and cameras are one of the most common and well-known technologies used to observe us as we go about our everyday lives. Local governments and companies deploy surveillance camera networks. With the advent of real-time crime centers that access public and private video cameras, the distinction is becoming increasingly blurred. Surveillance cameras are commonly employed in public locations to monitor public behavior.
Difference between a Security and Surveillance Cameras
Surveillance cameras and security cameras are words that are sometimes used interchangeably. Both safeguard your house and let you examine video of incidents such as attempted break-ins. The phrases surveillance camera and security camera are sometimes used interchangeably to indicate whether a system is being professionally monitored. Security cameras, for example, are cameras that are actively monitored in the case of a break-in, a fire, or an accident. Surveillance cameras are cameras that monitor your house and can only be accessed on a smartphone, tablet, or computer.
CCTV and Surveillance Camera Fundamentals
A CCTV camera is a self-contained surveillance system that records or stores video. In the case of analog CCTV cameras, it transfers them to a recorder, which was formerly known as a DVR, either digitally or by cable wire. Surveillance cameras, on the other hand, are basic cameras that broadcast video and audio data to a network video recorder (NVR), where they may be watched and recorded. Surveillance cameras are used to secure your assets as part of security systems.
Involvement of Technology
The older CCTV system collects the video feeds from all linked cameras and sends them to a receiving device, such as a DVR. In an analog system, this connection is usually made via coaxial cable. In a more up-to-date camera setup, Ethernet cables are used to link an IP camera to a network video recorder (NVR) or to a network switch.
The closed-circuit system used to monitor and govern a specific property is made up of a whole network of surveillance cameras. IP (internet protocol) networks are commonly used to connect security (surveillance) cameras from remote locations to a central location.
Features
To deliver video feeds to a restricted number of displays, CCTV cameras require cabling. Furthermore, the cameras must be carefully located in a single spot. Surveillance or security cameras, on the other hand, send recorded footage as digital signals to an NVR (Network Video Recorder) through a single PoE connection, eliminating the need for power cords.
Applications
CCTV cameras are used to manage the security of both public and private buildings. These systems can be used in conjunction with intrusion detection sensors to provide enhanced security. The surveillance camera, on the other hand, is ideal for monitoring a specific region and therefore controlling any undesired occurrences.
Different type of Security Cameras
Let’s say you’re thinking of getting a security camera for your house or office. In such a scenario, you’ll have to choose between wired and wireless options. There’s a lot of misunderstanding about these two kinds of cameras.
Wireless Cameras
A WiFi camera, often known as a wireless security camera, broadcasts video over WiFi and is powered by either AC or battery power. This necessitates the use of a power cord to connect it to an outlet for AC power. It’s important to note that a wireless camera isn’t necessarily wire-free; instead, it’s termed a wireless camera because it transmits data via a wireless network (WiFi). When a wireless camera is powered by a battery, it becomes really wire-free.
The footage from wireless security cameras is often stored on a cloud server, allowing you to access it from anywhere. Some cameras can also save video on local media, such as a micro SD card. Wireless cameras are popular because they are simple to set up and view from a smartphone or computer.
When motion or sound is detected by wireless security cameras, they usually start recording. Even so, if hooked into electricity, some may be programmed to record 24 hours a day, seven days a week. They record high-resolution video and, if equipped with night vision, can record at night. Some consumer brands also feature two-way audio capabilities, allowing you to converse with the person who is visible to the camera. Finally, some models include machine learning, a technique that enables cameras to perform useful functions such as alerting you when a person or item is detected.
Wired Cameras
A wired security camera system combines cameras and a recording device. The number of cameras typically begins at four and may go to 256. They can record 24 hours a day, can be viewed remotely via the internet, and are hard-wired to the internet and power. Traditional DVR systems, which utilize a coaxial cable and a separate power connection to link the cameras and record the footage, and newer NVR (networked video recorder) systems, which use Ethernet cables to both power and record video, are the two types of wired home security camera systems.
An Ethernet cable may link both DVRs and NVRs to the internet. NVRs are more sophisticated than DVRs and can record higher-quality video. Wireless cameras offer several capabilities that NVRs may include, such as two-way chat and person recognition. The IP cameras that come with a wired home security system get their power from the NVR or a Power over Ethernet switch and don’t require a plug. Most wired systems feature a smartphone app for watching footage. You may still watch the recordings and real-time feeds by connecting a computer display to the recording device.
Whenever possible, we recommend that you utilize wired security cameras instead of wireless security cameras.
In comparison to wireless security cameras, professionally installed hard-wired cameras are always a superior choice because:
The biggest disappointment you could have if you pick a wireless camera is the monthly costs. The majority of wireless cameras rely on cloud storage, which comes with a monthly cost. As a result, if you install a wireless camera at your home or business, you will be charged a monthly cost. It’s also possible that you’ll have to pay extra for smart features like person and vehicle recognition. A wired camera, on the other hand, does not usually require a monthly subscription.
Another disappointment is that wireless cameras only function with WiFi, implying that they are only as good as your home WiFi network. You may have problems if your WiFi is too sluggish or your camera is too far away from your router. This video lags, freezes, or is unable to get a live view at all at times. Wired cameras, on the other hand, are directly connected to the network through a network connection and operate without any technical difficulties or glitches 24 hours a day, seven days a week.
As your internet speed fluctuates, so will the quality of your video stream from wireless cameras. Even if you have 1 GB of internet, the quality of your WiFi will fluctuate depending on a variety of variables, including how many other people in your area are online at the same time and radio interference from other wireless devices in your home.
As a result, because there isn’t enough bandwidth to offer higher quality footage, your 4K cameras may occasionally communicate in 720p (not even full high definition). That is why, especially in areas of Houston, TX where crime is a concern and continual monitoring is required, it is preferable to utilize a professionally installed connected security camera.
Wireless cameras offer a lot of flexibility in terms of location and setup. Even so, you’ll need to connect your cameras to a solar panel or remember to keep their batteries charged. They have battery problems since wireless cameras can’t record continuously without fast depleting their batteries. Instead, they record in brief bursts (10 seconds to five minutes, depending on the brand and amount of activity), so you could miss important moments. Wired cameras, on the other hand, are permanently attached to a power source and may record continuously.
Wireless cameras are susceptible to cyberattacks since they link to the internet and allow remote access. They have the potential to be hacked, jeopardizing your privacy and security.
As a result, we always recommend professional wired cameras installed by a security camera installation firm over wireless cameras for your home or business protection.
How Important are Security Cameras
As previously said, it has become important to install security cameras at your home or office in order to ensure your safety. Installing a security camera at your house or business as a security or preventative measure was always seen to be a dramatic, costly, and unneeded undertaking. However, given technology’s accessibility and cost, failing to install some sort of security camera now appears to be a reckless and strange move. Major advancements have been made possible because of technological advancements.
Surveillance cameras enable owners to see their house or place of business at any time and from nearly anywhere. Installing security cameras in your home is a wise choice for a variety of reasons. Here are some of the most compelling reasons to do so in Houston, TX.
Criminals and Crimes are being deterred.
Criminals are deterred by the sheer appearance of an outside camera. Even so, relying on fake cameras is risky since seasoned burglars can usually identify them from a mile away. The majority of the time, criminals will inspect a residence before robbing it. They will most likely abandon the burglary attempt if they see cameras placed by a competent installation company. Assume you’ve been a victim of a break-in. In that event, the cameras will catch the occurrence and aid in the arrest of the offender, as well as the restitution of your stolen property.
HPD said there had been 199 homicides within the city limits through Wednesday, June 10, 2021. Through the same time period in 2020, there had been 148 homicides. That’s a 35% increase.
Aiding The Police
Installing security cameras in your house may enable you to assist the authorities in the case of a break-in. The event will have been caught in high quality HD footage by your professionally placed cameras by a Houston camera installation business. These recordings and photos can help police apprehend the perpetrator, prevent future crimes, and restore your belongings.
Keeping an Eye On The Family
Surveillance cameras aren’t only for home security; they can also be used to keep an eye on your children while you’re at work. When a child gets out of school in the middle of the afternoon, many families with two working parents find themselves in a bind. A parent may always check in on their children from work utilizing a video security system’s remote monitoring option.
Don’t Forget About Your Pets
You can use the camera system to keep an eye on your kids, and even on your pets. Pets are an important part of many people's lives, and leaving them at home alone can be distressing, as well as expensive. You can check on your pets from work with a professionally installed home security camera system.
Insurance Benefits
Following a burglary, you must file an insurance claim for vandalism or theft. This is where your high-definition surveillance camera comes in handy. You can use the footage to document the occurrence and back up your insurance claim. In addition, a security system can often earn you a discount of up to 20% on your homeowners insurance.
Different Brands of Security Camera System
There are a multitude of security camera brands to select from. In today’s world, security is crucial. People want to feel safe in their own homes and businesses. Installing security cameras is one of the greatest methods to receive such protection. You should conduct research before purchasing any large-ticket products, such as a security camera, on which you will rely.
With the current epidemic, security cameras have become more popular as company owners seek to secure their assets amid nationwide shutdowns. There are a variety of CCTV camera brands available on the market. Professional security cameras and consumer security camera brands are the two most common types.
We’ve put up a list of some of the most well-known security camera brands for you to consider.
Professional Security Camera Brands
Axis Communications Offers The Best Security Systems For Business
RapidHandy, Inc. (Houston Security Solutions) is a Certified Axis Security Camera installer that sells security cameras, video management software, and integration software.
Houston Security Solutions’ Axis Security Cameras are based on an innovative, open technology platform and offer the security market’s most comprehensive variety of professional quality video surveillance cameras solutions. With the launch of the world’s first network camera in 1996, Axis revolutionized security and has remained the worldwide market’s number one option for network video solutions ever since.
If you’re not in the security sector, you’ve probably never heard of Axis Communications. However, this is due to their concentration on security camera systems. They’ve been making high-end analog and digital security systems for years. They collaborate with partners all around the world and specialize in networked solutions.
Bosch Security Camera
Bosch Security camera Systems is a massive security company that provides all types of security systems. From security cameras to alarm systems. They have everything to offer, from small home systems to multi-building business systems. The company is pretty well-known among businesses and is utilized around the globe.
Avigilon Security Camera
Avigilon Certified Partner Offers The Best In Class Video Surveillance.
RapidHandy, Inc. (Houston Security Solutions) is a full-service Avigilon partner that sells security cameras, video management software, and integration software from other Avigilon partners.
Avigilon's sophisticated security system may aid in the reduction of theft, the prevention of violence, and the tracking of questionable persons. When combined with AI, Avigilon transforms into a security powerhouse that few other systems can match.
The security professionals at Houston Security Solutions have over a decade of expertise providing, installing, and integrating Avigilon security systems. We will personally guarantee that your Avigilon security is perfectly customized to your business size, and we will maintain it throughout its lifespan.
Avigilon manufactures a wide range of security camera solutions for use with CCTV cameras. They develop software, hardware, and analytics as a whole. A full-service solution ensures that you receive all you need from your goods and more. Avigilon products are used by industries and consumers all around the world.
Hanwha Wisenet Security
Hanwha Security Offers The Best In Class Video Surveillance.
Hanwha, a leading security company, offers video surveillance solutions such as IP cameras, storage devices, and management software that are based on world-class optical design, manufacturing, and image processing technology. They provide end-to-end security solutions and have seen global success in a variety of industries.
The security professionals at Houston Security Solutions have over a decade of expertise providing, installing, and integrating Hanwha security systems. We will personally guarantee that your Hanwha security is perfectly customized to your business size, and we will maintain it throughout its lifespan.
FLIR
FLIR is the world's top infrared camera manufacturer, producing everything from surveillance cameras to systems designed to be mounted under helicopters. They also own Lorex. Thanks to the Lorex acquisition, they can now supply every level of camera requirement, from the most basic security systems to the most advanced.
Hikvision
Hikvision is a major manufacturer of security camera systems. They have a large number of employees who work hard to produce cutting-edge security technology. Products from Hikvision offer top-of-the-line lenses with NVR and HD features. With offices around the world, Hikvision’s sales of security cameras have done quite well.
Due to their dependability, Hikvision products can be found in homes and businesses around the world. They sell complete systems, including everything from the cameras themselves to video intercom systems and software to record, manage, and access footage. Some of their top solutions include applications in the retail industry, healthcare services, and smart buildings (including smart school systems). Such innovative products have led them to become one of the most well-recognized names in the industry. All of these brands are available through Houston security camera installation.
Dahua
Dahua is a surveillance systems business that specializes in security systems. They provide a diverse selection of high-quality security solutions. A five-year warranty is also included.
LTS Security
Monitor and record your properties with LTS security cameras.
With constant surveillance and video recordings, LTS security systems keep businesses safe. Site Admins and security personnel may use LTS cameras to remotely monitor buildings, rooms, and outdoor locations at all times, without needing to be there. Employee regulation is aided by surveillance systems, which allow administrators and workers to notice and respond to problems. They help decrease liability by preserving incident evidence.
We installed LTS’s highest level video surveillance and video equipment. Our expert technicians install LTS security cameras, ensuring proper setup and positioning for the greatest possible coverage.
Uniview
Uniview, commonly known as Unv, is a well-known Chinese surveillance technology business. Uniview’s financials indicate that it is a strong firm that will continue to lead China’s quality camera sector. Their cameras are mostly IP camera systems that can be utilized by both companies and households.
Consumer Security Camera Brands
Swann Security Camera
Swann is an Australian firm that has grown from its humble beginnings in a basement to selling goods all over the world. They provide a wide range of security devices to assist in the protection of homes and businesses. Swann offers anything from simple systems with only two cameras and limited functionality to complex systems with NVR and a big number of cameras. There are even cordless security camera solutions available.
Lorex Security Camera
Lorex offers a wide range of devices that aim to provide both security and value to households. They provide a wide range of options, from plug-and-play to more sophisticated camera systems. They provide a number of wireless security camera systems with data transmission distances of up to 500 feet. In addition, 4K camera choices are available.
Logitech Security Camera
Logitech is another company that manufactures a wide range of goods. Almost certainly, you’ve heard of them before. The company has been in the webcam and home camera market for some time. They now offer a wide range of security camera systems to choose from.
Netgear Arlo Security Camera
Netgear started off as a networking company, so it’s not surprising that they’d branch out into security cameras. Arlo is their camera line, which includes both wired and wireless cameras that save data in the cloud and can be accessed from any phone, tablet, or computer. Audio recording is also available on certain of their gadgets.
Nest Security Camera
Nest is well known for its internet-connected thermostats. These are some of the items they produce. Alarm systems and security cameras are among the many security devices available.
D-Link Security Camera
D-Link is another networking brand that you may be familiar with. Their network routers have made them famous. They also provide a range of Wi-Fi, HD cameras with a number of functions that may assist safeguard any property.
The difference between the Consumer and Professional Security camera brands.
Dahua, Hikvision, and Uniview are professional security brands, whereas Nest, Arlo, and Ring are consumer brands. This easy-to-follow guide will provide you with useful information to help you make an informed decision about your new video surveillance system in Houston.
A professional video surveillance camera may feature a varifocal lens or, more often now, an autofocus lens that allows the user to optically zoom in on a specific target or zoom out for a wider view (through a web browser interface). This will save you time in the long run because you won’t have to climb a ladder to adjust the focal length. These cameras are designed for a wide range of purposes, including forensic detail and situational awareness.
Depending on the company demands and requirements, professional security camera installers can utilize a 360-degree fisheye camera with a multi-sensor. Fisheye cameras include a fisheye lens that allows for 180-degree surveillance while maintaining HD video quality. A single fisheye security camera may cover up to 4,000 square feet and can be used to replace many conventional cameras without sacrificing coverage. Fisheye cameras have just one wire, but conventional cameras require many cords.
Video from consumer camera systems may be stored in the cloud. In principle, this is a wonderful capability, but the cost increases as the resolution is increased. Yes, just because it’s an HD camera doesn’t imply you can keep HD recordings in the cloud at a reasonable price. Overall, the better the resolution and the more cameras you have, the more money you’ll spend on cloud storage. Furthermore, most out-of-the-box features limit recordings to 10-second snippets, which are insufficient for commercial purposes. In the professional security camera sector, this is a hot topic.
Commercial cloud video surveillance, often known as VSaaS (Video Surveillance as a Service), may have some promise in the near future. Still, there are too many restrictions in terms of picture quality, intelligent video, and overall evidence management for us to suggest these platforms to our business clients that require mission-critical surveillance systems.
Another issue with these consumer systems masquerading as professional video surveillance kits is that they are frequently shipped with a low-cost, low-quality Linux operating system that is unlikely to function with video surveillance hard drives. Finally, embedded DVR systems generally have a fixed amount of storage; if you add cameras later, these systems may not enable you to simply add another hard disk drive and instead force you to buy a new system.
How Much Does It Cost To Install A Security Camera?
The typical cost of installing video security cameras in Houston is $400 to $5000. Without installation, the national average for a system with four or more cameras, a recording system, Smart features, and Cloud possibilities is $600. A single-unit doorbell camera may be purchased and installed for approximately $175. For $2,500, you may have 12 or more high-tech camera-wired systems installed, with monitoring.
We have categorized the Houston Security Camera installation cost into various categories.
Security Camera Cost By Camera Type
Surveillance cameras exist in a variety of shapes and sizes, allowing them to be used for a variety of purposes and scenarios. The numerous versions on the market are differentiated by factors like as film quality, internet capability, and configuration flexibility. Understanding the many model kinds can help you pick the best camera for your needs, as costs and features vary greatly.
Dummy Security Camera Cost
Dummy cameras usually cost between $10 and $15. They’re phony cameras that don’t record video but provide the impression of a working surveillance system. While these cameras have the apparent disadvantage of providing no genuine monitoring capability, they are extremely inexpensive and need almost no setup. Many come with genuine flashing lights to provide the illusion of a working system, making fake cameras a viable option for homeowners seeking a low-cost deterrent.
Bullet Security Camera Cost
Bullet cameras range in price from $30 to $500 apiece and may be either low-cost security cameras or high-resolution beasts that can see the tiniest particle of salt on a white floor. A bullet camera resembles a box camera in appearance. Its lens, like that of a dome camera, is permanently mounted within a glass casing. These cameras are more discrete and are available in both indoor and outdoor models. Despite this, due to the permanent nature of the housing, repositioning and performing maintenance might be challenging. Bullet cameras work with both CCTV and IP systems.
PoE Security Camera Cost
PoE cameras range in price from $50 to $500. PoE cameras, or power-over-Ethernet cameras, get their power from an Ethernet connection rather than a coaxial cable, another cable type, or batteries. If you already have Ethernet wires in your house, these cameras may help you save money on installation. PoE cameras work with both CCTV and IP systems. PoE is used in 90% of professional security cameras.
Box Security Camera Cost
Each box camera costs between $100 and $750. Cameras having a box-like body that is attached to a separate lens are known as box cameras. These cameras are often larger, more costly, and less appealing than other varieties. However, they generally come with better performance and product life, as well as the ability to change lenses. They might be a good alternative for Businesses who want a better level of security. Box cameras work with both CCTV and IP systems.
Hidden Security Camera Cost
Each hidden camera costs between $50 and $250. They are distinguished from other camera kinds by the fact that they do not generally resemble cameras. To prevent discovery, hidden cameras are sometimes disguised as other items such as smoke alarms or clocks, or are extremely tiny. These characteristics make them ideal for covert observation. Despite this, their small sizes and odd forms can occasionally result in video quality and memory space restrictions. CCTV and IP systems are both compatible with hidden cameras.
Doorbell Security Camera Cost
Doorbell cameras are becoming more popular and cost $75 to $250. Doorbell cameras combine camera technology with traditional doorbell features to allow you to survey the area in front of your home’s door. Doorbell cameras typically offer Smart features like smartphone alerts when someone rings the doorbell or movement is detected. However, they rely on WiFi signals like other wireless camera types. Doorbell cameras can be compatible with CCTV or IP systems but are mainly used with IP systems.
Dome Security Camera Cost
The cost of a dome camera ranges from $80 to $300. The clear, dome-shaped glass covering that covers the lens gives dome cameras their name. Dome cameras have the advantages of being unobtrusive in appearance, resistant to damage owing to the protective glass, and it is difficult to determine which way the lens is pointed when tinted. However, because of the glass housing, accessing the lens to adjust it or perform maintenance might be difficult. Dome cameras work with both CCTV and IP systems.
Outdoor Security Camera Cost
The price of outdoor security cameras ranges from $50 to $600. Outside security cameras are security cameras with extra characteristics aimed toward outdoor use, such as waterproof enclosures and low-light capabilities. Outdoor security cameras may be more expensive than other models as a result of these increased capabilities. CCTV and IP systems are both compatible with outdoor security cameras.
License Plate Recognition Security Camera Cost
Cameras capable of collecting high enough pictures to view and read license plate numbers are known as license plate capture cameras. These cameras cost between $300 and $1000. It’s worth noting that cameras labeled with this phrase may or may not have software that can automatically interpret numeric data. Many cameras labeled as license plate capture cameras are merely those with image quality good enough to discern numbers while reviewing film. CCTV and IP systems are both compatible with license plate capture cameras.
PTZ Security Camera Cost
PTZ cameras range in price from $250 to $1500. PTZ cameras, also known as pan-tilt-zoom cameras, are remote-controlled cameras that can move, swivel, and zoom the lens. These cameras have the huge advantage of being able to instantly alter the camera angle without having to remount the camera. Some even have software that automatically adjusts the camera to movement. PTZ cameras work with both CCTV and IP systems.
Professional Grade Security Camera Cost By Brand
The cost of the professional brands we offer as a security partner, such as Axis, Bosch, Avigilon, Wisenet, Hikvision, and LTS, varies widely. Depending on features and specifications, prices range from $200 to $3,000. These security cameras offer cutting-edge technology, including thermal sensors, people counting, motion detection, smart tracking, and microphone and two-way audio options. They also come in a range of shapes and models, and are available as both analog and network IP security cameras.
Hikvision Security Camera Cost
Hikvision cameras have a wide range of prices, ranging from $125 to $475 per camera. Another Chinese security manufacturer, Hikvision, offers a wide selection of camera solutions, including dome, bullet, PTZ, and license plate recognition cameras. Hikvision has a number of product lines that are customized to various security needs. They have cameras with deep-learning algorithms, cameras that catch color in low light, and even cameras that can withstand explosions, for example. DVRs, NVRs, and cabling are also available from Hikvision.
Dahua Security Camera Cost
Dahua cameras come in three different price ranges, covering the Lite, Pro, and Ultra Series. Their cameras range in price from $75 to $350 on average. Dahua is a Chinese security company that manufactures dome, bullet, PTZ, and license plate recognition cameras. They also have a diverse product range of DVRs, NVRs, cabling, and smart home integration capabilities. Dahua also has a customer service line.
Consumer Grade Security Camera Cost by Brand
The brand of a security camera may tell you a lot about its quality, longevity, and efficiency. Based on the product and the warranty, installation, and other services they provide, various brands may be more or less attractive depending on your specific needs. These are all important factors to consider when selecting a camera brand.
Swann Camera Price Cost
Swann cameras are priced between $70 and $200. Swann is an Australian security company that offers a wide variety of camera types, including dome, bullet, and floodlight. Swann cameras are available in both wired and wireless versions, with some models including built-in lighting and Google Assistant/Amazon Alexa compatibility. Swann cameras are an excellent middle-of-the-road choice, offering high quality and functionality for the money.
Night Owl Security Camera Cost
The cost of a Night Owl camera ranges from $100 to $150. Bullet and dome security cameras are available from Night Owl. Many Night Owl security cameras have improved night vision capabilities, and some even have heat-detection capabilities. These are the most well-known characteristics of Night Owl cameras. They’re particularly well-suited to long-range, outdoor, or extremely low-light settings.
Lorex Camera Cost
Lorex cameras cost from $100 to $175 apiece. Lorex is a Chinese security company that manufactures a high-quality line of wired and wireless cameras for use in the home, business, and commercial sectors. Lorex provides bundle-style solutions that include numerous cameras and NVRs with excellent resolution for a low price, as well as doorbell and wire-free cameras.
Nest Camera Cost
Nest cameras range in price from $150 to $300. Nest cameras are part of Google’s Home suite of devices. They come in four different camera types: normal, smart, indoor, and outdoor cameras. For homes interested in smart AI features like Google Assistant integration, microphone communication capabilities, and automated Smart notifications when sounds and movements are detected, these cameras stand out.
Cost of a Security Camera Systems Based on Storage Capacity
When installing a security camera system, it’s important to think about how the footage from the cameras will be stored. Physical copies, such as SD cards and DVRs, are supplemented by cloud-based storage and hybrid approaches, such as network video recorders (NVRs). Consider how accessible the video will be through the internet and mobile devices, as well as whether you’ll be paying a one-time price, as with memory cards, or a recurring monthly subscription, as with Cloud-based services, when choosing a storage solution.
SD Card CCTV Camera Cost
SD cards range in price from $10 to $50 on average. SD cards are a physical way of storing footage on a camera’s card. SD cards are less expensive than other storage options, don’t require internet connection, and can be viewed on any PC or smartphone with the necessary software. However, compared to other techniques, SD cards have limited storage space, do not automatically post film to the internet for remote viewing, and can be lost if the camera is stolen.
Security Camera Systems with DVR Cost
The cost of a DVR ranges from $200 to $2500. DVRs, often known as digital video recorders, are hard drives for analog surveillance systems. The DVR receives the analog signal from the cameras and transforms it to digital footage before saving it. DVRs have higher storage capacity than other types of storage, such as SD cards. However, their capabilities are usually restricted to wired cameras.
Security Camera Systems with NVR Cost
The price of an NVR ranges from $250 to $3,000. NVRs, also known as network video recorders, are hard drives that store video footage, similar to DVRs. NVRs have the advantage of being able to function with both wired and wireless IP cameras, which can be a huge benefit for homeowners who wish to install a wireless system.
Security Camera Systems with Cloud Storage
The term “cloud storage” refers to the storing of video on distant servers. The cost of cloud storage ranges from $15 to $50 per month. This technique of storing has a number of benefits and drawbacks. You can view your film from nearly anywhere thanks to cloud storage. It saves you the trouble of manually archiving your film. Most businesses, on the other hand, charge a monthly subscription for using Cloud storage on their servers. You will not have a physical backup of your footage by default.
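As a rough way to compare recurring cloud fees against a one-time recorder purchase, the sketch below computes the break-even point in months. The $600 NVR price and $30/month plan are illustrative assumptions, not quotes:

```python
def breakeven_months(recorder_cost, cloud_monthly):
    """Months after which a one-time recorder purchase is cheaper
    than a recurring cloud subscription (drive wear ignored)."""
    return recorder_cost / cloud_monthly

# A hypothetical $600 NVR vs. a $30/month cloud plan.
print(breakeven_months(600, 30))  # → 20.0
```

After roughly 20 months in this example, the one-time purchase comes out ahead, though cloud storage still offers off-site backup that local hardware cannot.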
Cost of a Security Camera based on Field of View
Field of view, along with resolution, is one of the most critical elements in deciding whether the image your camera generates meets your distance and detail requirements. Lens millimeters or angle degrees are used to measure field of vision. Larger lenses often have a narrower field of vision but provide greater information over a longer viewing distance. Smaller lenses, sometimes known as wide-angle lenses, provide a broader field of vision but can only be used at close distances. When picking a field of view, think about whether you want to capture a larger perspective or a more precise, specific region.
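The trade-off above can be made concrete with a little trigonometry: the horizontal width a camera covers at a given distance is 2·d·tan(θ/2), where θ is the lens's angle of view. A minimal sketch using the 50, 69, and 90-degree angles discussed below:

```python
import math

def coverage_width(distance, fov_degrees):
    """Horizontal width covered at a given distance (same units as
    distance) for a camera with the given horizontal field of view."""
    return 2 * distance * math.tan(math.radians(fov_degrees / 2))

# Coverage at 15 ft for the three common lens angles.
for angle in (50, 69, 90):
    print(f"{angle}-degree lens covers {coverage_width(15, angle):.1f} ft at 15 ft")
```

Note how the 90-degree lens covers twice the distance-width ratio: at 15 ft it sees a 30 ft wide strip, while the 50-degree lens sees about 14 ft.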
6 mm Security Camera Price
6 mm cameras range in price from $100 to $250. Security cameras with 6 mm, or 50-degree, lenses have somewhat smaller fields of vision than cameras with wider fields of view. These cameras have a resolution of at least 2 megapixels. These cameras can handle somewhat longer distances without losing too much information in the local environment, making them an excellent choice for confined settings up to 16 yards away from the camera.
3.6 mm Security Camera
Security cameras with 3.6 mm, or 69-degree, lenses have fields of vision that are generally balanced in terms of width and distance. For 3.6 mm cameras, expect to pay between $50 and $400. These cameras have a resolution of at least 2 MP. These cameras are ideal for producing images with a nice blend of detail and short to mid-distances, thus they’re good for places up to 9 yards away from the camera.
2.8 mm Security Camera
Wide-field-of-view security cameras with 2.8 mm, or 90-degree, lenses are available. Cameras with a focal length of 2.8 mm cost between $50 and $500. These cameras have a resolution of at least 2 MP. These cameras capture a broad field of vision but aren’t ideal for long distances, thus they’re best for tiny places up to 5.5 yards away from the camera.
Motorized Varifocal Security Cameras
Motorized varifocal security cameras are a good solution for monitoring over a long range of distances. Their lenses typically cover a 2 mm to 12 mm range, giving you plenty of room to adjust the view. Motorized security cameras are available in 2MP, 4MP, 6MP, and 8MP 4K resolutions, as both analog and IP network camera types. As of 2021, motorized security cameras are priced at $300 to $1,200 in the security market.
Security Camera Cost by Resolution
The size or detail of the image produced by a camera is referred to as resolution. While resolution is not the only element that affects image quality, it is crucial for security cameras since the more detail your camera catches, the more you can see in your film. The greater the area you wish to scan in your house, the higher the resolution you should choose for better detail over longer distances. However, as resolution improves, so does the price and the amount of memory space needed.
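To get a feel for how resolution drives storage needs, the sketch below estimates days of continuous recording from drive capacity and per-camera bitrate. The bitrates in the comments are rough assumptions; actual figures depend on codec, frame rate, and scene activity:

```python
def days_of_footage(capacity_tb, bitrate_mbps, cameras, hours_per_day=24):
    """Rough days of continuous recording a drive can hold.
    bitrate_mbps is megabits per second per camera (codec-dependent)."""
    capacity_megabits = capacity_tb * 1_000_000 * 8      # TB -> megabits (decimal)
    megabits_per_day = bitrate_mbps * cameras * hours_per_day * 3600
    return capacity_megabits / megabits_per_day

# Assumed H.264 bitrates: 2MP ~2 Mbps, 4MP ~4 Mbps, 8MP (4K) ~8 Mbps.
print(round(days_of_footage(2, 2, 4), 1))  # 4x 2MP cameras on a 2 TB drive → 23.1
```

Doubling the bitrate halves the retention, which is why stepping up from 2MP to 4K cameras often means budgeting for a much larger drive.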
2MP Security Camera Cost
CCTV cameras with a resolution of 2MP, or 1080p, cost between $40 and $100 apiece. 2MP cameras, also known as 2-megapixel or 1080p CCTV cameras, are the typical starting point for HD-quality security cameras. These cameras offer an 80-degree viewing angle and enough detail for facial recognition at distances of up to 30 feet. They do not, however, provide the clarity of higher resolutions.
4MP Security Camera Cost
4MP cameras, commonly known as 4-megapixel security cameras, capture twice as many pixels as 2MP cameras. The cost of a 4MP camera ranges from $80 to $200. They offer an 84-degree viewing angle and image quality high enough to catch facial characteristics from up to 50 feet away. One disadvantage is that as the resolution of the footage rises, so does the price and the amount of memory necessary to store it.
8MP Security Camera Cost
8MP cameras, commonly known as 4K security cameras, range in price from $150 to $400 per camera. These cameras have an extremely high resolution, capable of generating 4K (or 8.3 million pixel) footage. These cameras are ideal for people who want to examine bigger areas from afar without losing information, such as outdoor spaces. 8MP cameras, on the other hand, need more bandwidth and storage space. These cameras, which have a resolution of 8 megapixels or greater, are typically used in bigger industrial, corporate, and commercial areas.
12MP Security Camera Cost
12MP cameras, often known as 12-megapixel cameras, have some of the best picture resolution available in security cameras today. 12MP cameras with a 2.8 mm lens are available with 1080p and 4K screen resolutions. Each 12MP camera costs between $800 and $1,000. These cameras are capable of capturing a lot of visual detail. Large stadiums, airports, and military facilities frequently employ them. They, like other high-resolution cameras, need a lot of storage space and are among the most expensive types available.
4K Security Cameras
4K security cameras use an 8MP sensor and offer the best video resolution available for security camera systems. This type of camera comes in PTZ, dome, bullet, and other form factors, and is also available with motorized varifocal lenses in both analog and IP network versions. If you are thinking long term about your security, we recommend working with professional licensed security camera installers to purchase the best quality of security cameras available on the market. 4K security cameras are priced at $300 to $3,000 across professional-grade and consumer-grade use.
Many factors determine the labor cost of security camera installation in Houston, TX. One is whether you are installing a wired or wireless system.
Installing a wired system is more expensive than installing a wireless security system because wired systems require additional cables, drilling, and installation work. If you already have Ethernet lines in your house, much of that installation expense goes away, considerably lowering the overall cost of a wired system. A wired surveillance camera system costs $300 to $2,500 to install, bringing the total cost of supplies and installation to $500 to $3,000. In Houston, TX, CCTV systems are generally installed by a licensed security camera installation company, which may also provide the security cameras and equipment.
Wireless security camera installations, depending on the demands, are often significantly less expensive, costing approximately $50 — $100 per security camera. Depending on the arrangement, the total cost of supplies and installation for a wireless system ranges from $350 to $700. Professional installation may be a fantastic choice for getting the maximum performance out of your system.
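Using the per-camera wireless figures quoted above, a quick back-of-the-envelope labor estimate might look like this (a sketch only; actual quotes depend on the site and installer):

```python
def wireless_install_estimate(cameras, per_cam_low=50, per_cam_high=100):
    """Low/high labor estimate for a wireless install, using the
    $50-$100 per-camera range quoted above as default assumptions."""
    return cameras * per_cam_low, cameras * per_cam_high

# Labor range for a six-camera wireless system.
print(wireless_install_estimate(6))  # → (300, 600)
```

Equipment is extra, which is why the quoted all-in wireless totals of $350 to $700 sit above the labor-only figures for small systems.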
To keep your security system running well, you’ll need to do routine maintenance on your security cameras. Hardware maintenance is one of the most important aspects, including keeping lenses clean, ensuring outside equipment and wires are secured, ensuring cameras are oriented in the proper direction, and safeguarding power and WiFi connections.
Regular software upgrades are necessary for optimal performance, as well as to avoid hacking and other security concerns. Many cameras have automatic software update choices, but it’s a good idea to see whether manual upgrades are required. Consider upgrading your camera every 1–2 years to maintain your gear up-to-date and enjoy enhanced security capabilities, as cameras constantly come out with higher resolutions and Smart features at more competitive rates.
Assume you have your camera system professionally installed and are paying for a remote monitoring service. Periodic professional maintenance will almost certainly be included in the price in such scenarios. If you don’t want to pay for a monitoring service, you can do it yourself. Cleaning items like microfiber cloths and compressed air, as well as checking your system for broken connections and poorly pointed cameras, generally cost $50 or less each year.
Many homeowners install security cameras in areas where a significant number of crimes are attempted. Security cameras are commonly installed at the front entrance, first-floor windows, the rear door, and above garages. Beyond these specific locations, there are basic best practices for camera placement that guarantee the best security coverage. To gain a larger view of a room or region, position cameras in the corners. Another idea is to put cameras where they will be camouflaged (hidden), such as behind something or against a similar-colored surface.
However, depending on your security needs, you may want to position your camera(s) in a prominent, clearly visible area to provide the appearance of security and dissuade thieves. Finally, when installing exterior cameras, consider putting them where they will be protected from the elements and vandalism, such as high up or in a covered place. In any case, it’s usually a good idea to talk to the installation firm about the final placement of the cameras.
More Security Camera Features
When picking a security camera, keep in mind that it may offer a variety of specialized functions. Depending on your unique circumstances, certain features may give you additional options for properly monitoring your environment. Some camera functions are pre-installed, while others may be added afterwards. With a highly competent Houston security camera installation firm, you will receive all of the available features.
Videocheck Security Camera
Built-in security cameras are generally only offered as part of remote video monitoring service packages. These services typically cost $100 per month per camera on average. Although many cameras come with a built-in video monitoring capability that does not require a subscription, the homeowner must actively check their camera feeds at the time of the occurrence.
Surveillance Camera Floodlight
Large lights that are either integrated into or mounted next to a camera are known as floodlights. By shining a strong light on a specified location, these lights help cameras capture the finest video. These camera-integrated floodlights differ from ordinary floodlights because they connect with the CCTV system feed. When motion is detected, many switch on immediately and brighten the area, resulting in more usable footage in low-light situations. The cost of cameras with built-in floodlights ranges from $140 to $280.
Surveillance Cameras with Motion Detector
A surveillance system’s functionality is not limited to continuous monitoring. You may also set up cameras that only turn on when they detect movement. This lowers the camera’s running expenses, such as energy or battery power. When motion-sensing cameras are activated, they may send an alert to your security provider via a smartphone app, letting you get a feed just when you need it, rather than all the time. Costs range from $60 to $300.
Outdoor Security Cameras with Siren
Outdoor security cameras with sirens, either built-in or added afterwards, may be a useful deterrent for both notifying the public and discouraging criminals. These siren-equipped security cameras may be configured to automatically turn on whenever motion is detected or manually switched on and off by the homeowner via smartphone, depending on the camera, system, and user-determined specific settings. Many are also outfitted with red and blue lights similar to those used by cops to create the sense of enhanced protection. The cost of security cameras with built-in sirens ranges from $175 to $250.
Night Vision Camera Price
The price of a night vision camera can range from $50 to $500. The majority of cameras have night vision built in. It may, however, be feasible to purchase night vision lenses as an add-on for current cameras. The term “night vision” refers to cameras of varying resolutions that provide clearer images during the nighttime hours, when the majority of crimes occur.
Night vision cameras provide crisp pictures in low light in one of two ways: active or passive. Infrared light, which is invisible to the human eye, is combined with a camera lens that can take up infrared light and provide a clear image in active night vision cameras. Regular lenses are used in passive night vision systems, but image-intensifying technology amplifies the existing light in the image to create a brilliant image.
Security Camera with Mic
More cameras now include built-in microphones for communicating with pets, welcome visitors, or even undesirable attackers. Cameras with built-in mics range in price from $100 to $250. These cameras generally function in conjunction with an app or a cloud-based system to provide one-way or two-way communication with the person being recorded. Aside from cameras with built-in microphones, there are various independent microphones that may be added to operate with an existing camera. These stand-alone mics range in price from $20 to $35, not including installation.
Surveillance Camera with Facial Recognition
Facial recognition, a form of artificial intelligence, is a function integrated right into certain modern cameras. Facial recognition cameras include software that automatically scans footage for faces, including individual faces in certain situations. When used in conjunction with smart systems, this technology is extremely helpful.
When your face shows in the film, for example, certain smart cameras with facial recognition send smartphone notifications. Furthermore, cameras equipped with advanced or even basic facial recognition aid in the identification of suspects during a break-in, potentially speeding up the judicial process. The cost of a facial recognition camera ranges from $150 to $250.
IR (Infrared) CCTV Camera Price
Infrared CCTV cameras range in price from $150 to $250. IR cameras employ infrared technology to capture images in low-light conditions. Night vision technology is comparable to infrared technology. Many IR cameras are referred to as night vision cameras, and vice versa.
There are, nevertheless, significant variances. Some night vision cameras rely on some light and simply magnify it in the footage feedback to brighten the image, while infrared cameras function in a different way. They illuminate their subject using infrared light. Human eyes are unable to see infrared light. Even in low-light settings, it is visible to the camera and gives a considerably sharper image. Pre-installed lenses are common on infrared cameras.
Motion-Activated Smoke Detector Security Camera
When smoke is detected in the home, motion-activated smoke detector cameras detect, film, and even warn you through a mobile app. This Smart feature normally necessitates software that is only pre-installed and not accessible as an add-on. In the event of a fire, homeowners may wish to consider these for instant notice, allowing for a faster emergency response. Typically, motion-activated smoke detector cameras cost between $200 and $300.
Heat Sensor Security Camera
Heat sensor security cameras detect intruders by detecting heat rather than light, which allows them to detect attackers wearing dark clothing that other cameras may miss. The majority of cameras lack this function. Heat sensor security cameras, which typically cost $300 to $500, are required for homeowners that desire heat sensor security.
Additional Costs and Considerations for Installing Security Cameras in Houston
A professional security camera installation in Houston might save you from a variety of problems. The chances of difficulties with camera wiring, power access, recording device access, and footage access are greatly reduced with expert CCTV installation. In addition, most professionals have insurance to cover any potential problems. Professional installation should come with a guarantee if your security camera system is a large, wired system.
While having a surveillance system installed may qualify you for a discount on your homeowner’s insurance, most insurance policies only cover expert monitoring and installation. Compare the monthly expenses, peace of mind, and insurance savings to see if this is the best option for you.
Although some wireless cameras come with a built-in battery, they are not designed for extended usage. Having a power supply nearby is helpful unless you’re using a motion-activated camera.
Most jurisdictions allow you to install a hidden camera with audio on your property. Audio, on the other hand, may be considered wiretapping in some circumstances and may be prohibited in some places. Check the regulations in your state before installing a camera with built-in audio.
The stream from most cameras with in-home monitoring may be seen for free on your tablet, phone, or computer. Some professional monitoring companies, on the other hand, may charge an additional $10 per month to see the same stream. To learn more, contact your employer.
The cost of installing four wireless cameras is widely available, ranging from $350 to $700 for the whole system. However, the best option for ensuring correct installation is to engage an experienced expert who will ensure that the cameras are properly installed and that all of the necessary connections and encryption security are operational.
Many homes utilize cameras to deter burglars and catch intruders in the case of a break-in. Cameras with motion detection, night vision, and facial recognition help produce footage capable of capturing clothing, facial details, and other pieces of information to help catch criminals and recover losses.
Consider how many rooms or outside sides of your property you want to cover, as well as the varied views of your house, when choosing how many cameras you need. Angles have a big influence on the field of view. Furthermore, depending on the size of the room, more than one camera may be required to survey the whole space.
Security camera concealment is an important approach for maximizing the effectiveness of your security system. Some homeowners opt to conceal cameras in corners, which reduces visibility while providing a better, wider view of the film.
Cameras can also be hidden behind items or in regions that are similar in color to the camera. Some homeowners, on the other hand, opt to place their cameras in a prominent spot to signal to would-be burglars or trespassers that the area is under observation. Camouflaging your camera with standard concealing methods such as hidden camera photo frames, hidden camera electrical outlets, and even ordinary houseplants costs on average $30 to $90.
|
https://medium.com/@rahibismi/the-ultimate-security-camera-installation-and-purchasing-guide-2021-houston-security-solutions-a7b32926a160
|
['Robbie Handy']
|
2021-08-09 06:55:40.931000+00:00
|
['Cctv Installation', 'Security Camera Install', 'Cctv Surveillance System', 'Security Camera', 'Security Camera System']
|
A year of Flutter Community
|
A look back at the first year of the Flutter Community, by Nash Ramdial and Jay (Jeroen) Meijer (Jun 20, 2019).
The idea for Flutter Community was simple; create a place that users can visit to find the latest packages and articles written by Flutter developers from around the world. With this goal in mind, we set out to make it a reality. First, our GitHub organization was created by Simon Lightfoot (The Flutter Whisperer) and Jeroen Meijer. This was quickly followed by the Flutter Community Medium Publication, maintained by Nash Ramdial and Scott Stoll.
From the very beginning, there was a lot of enthusiasm from the wider community about the idea. Articles and packages slowly began filling Flutter community from 3rd party developers all over the world. As the months progressed we slowly grew from an average of one article per week to two articles each day. Popular posts such as “Flutter on desktop, a real competitor to Electron” by Norbert, “Flutter Layout Cheat Sheet” by Tomek Polański, and “Parsing complex JSON in Flutter” by Pooja Bhaumik were all published on Flutter Community Medium.
On the GitHub side of things, packages such as “flutter_downloader” by Hung HD, Flutter Launcher Icons by Mark O’Sullivan and “responsive_scaffold” by Rody Davis all found their homes on our GitHub.
As Jay Meijer, the primary maintainer of Flutter Community GitHub said,
“I’m very happy with how the GitHub turned out. We’ve already had a load of packages submitted that are now hosted by Flutter Community. Multiple people have become maintainers of packages that would have otherwise died out, which is one of the most important things we wanted to achieve with the organization. Going forward, we’d like to be more clear to our users on how to submit their packages and what our guidelines are, so we can make Flutter Community’s packages easier to use and a better place for package maintainers and users alike. Thank you to all the contributors and maintainers. It’s awesome to see people work together.”
Today, there are 22 packages under the Flutter Community GitHub and more than 360 articles published under our publication. In addition to this, we have 17K+ followers on Medium, 4,000+ on Twitter and, on any given day, there are at least 10K unique daily visitors and over 1 million minutes read per month on Medium.
Number of minutes read per month on Flutter Community
|
https://medium.com/flutter-community/a-year-of-flutter-community-eae82bcd9b69
|
[]
|
2019-06-20 19:31:20.918000+00:00
|
['Flutter Community', 'Open Source', 'Year In Review', 'Flutter']
|
neolexon
|
How would you describe your business idea to a potential investor?
We are developing digital solutions for the domain of Speech and Language Therapy. neolexon offers digital assistant systems that support therapists during their therapy sessions and enable patients (e.g., after stroke) to train at home without limits. Thereby we increase the effectiveness and efficacy of therapy.
What problem do you want to solve, what is your goal?
Speech and Language Therapy conventionally depends on analogue methods (e.g., picture cards). Moreover, patients usually receive only one hour of therapy per week, which is not enough. By using digital solutions, therapy content can be personalized to a much higher degree and trained at home without limits. Thus, we aim to increase therapy success for patients.
How did you come up with your idea/concept?
As speech and language therapists we experienced these problems in therapy sessions on our own. At the same time, we got a deep understanding of the scientific requirements when working as researchers at LMU München. To reduce the gap between research results and work practice, we decided to develop digital solutions for therapists and patients.
What is your business model?
We sell software-as-a-service B2B to clinical institutions and private practices as well as B2C to patients. Our aim is that health insurers will cover the costs for patients in the future. So far, we already have one insurance provider on board.
What is special about your product?
neolexon provides a huge database for speech and language therapy from which therapists can choose individual training materials for every patient. These can then be trained in user-specific apps. Other software solutions cannot be individualized in such a way. Moreover, neolexon is certified as a medical product and is compliant with data protection requirements.
The founders of neolexon: Dr. Mona Späth (left) und Dr. des. Hanna Jakob (right)
Why did you decide to work with XPRENEURS?
XPRENEURS has a great network of mentors, speakers, and investors which we wanted to profit from. It was highly recommended by their alumni teams.
You can get more information about neolexon on their website or follow them on Facebook.
You want to become part of the XPRENEURS incubator program as well?
Get more information and apply at https://xpreneurs.io/
|
https://stories.xpreneurs.io/neolexon-3f855ff0b51d
|
['Xpreneurs Incubator']
|
2019-03-12 09:14:46.998000+00:00
|
['Speech Therapy', 'Language Therapy', 'B2B', 'Medical Technology', 'Logopedia']
|
Implementing a custom drag event function in JavaScript and THREE.js
|
THREE.js is a cross-browser JavaScript library that allows us to unleash the potential of GPU-driven graphics within the web browser. Although it provides both OrbitControls and DragControls for object and scene manipulation, what if we want a more personalised response to drag events? That is what we shall cover here.
The code discussed within the article can apply to any HTML element, although the emphasis, in this case, is placed upon THREEjs. The only difference falls upon what goes into the dragAction function at the end.
— If you are new to the THREE.js library, have a browse through the following materials: here
— If you wish to apply the same principles to Data-Driven Documents (d3.js), have a look here
The Event Listener
To determine what is happening within our program we start by observing a number of mouse events. As the user ‘drags’, we need to know when they click, move and then release the object — in JavaScript, this is done through the use of event listeners.
Each event ( mousedown , mousemove and mouseup ) is attached to a DOM element within the webpage. In our case, it is the canvas upon which the THREE.js library is rendering. We can get this by using
let canvas = renderer.domElement
Default Variables
We begin by defining a set of default variables which our program will update. Here we have a logical ‘is the mouse down?’ mouseDown variable, and the cursor position variables.
var mouseDown = false,
mouseX = 0,
mouseY = 0;
Mouse Down
The first check we want to make is to see if our user has pressed the mouse button — as without this we do not have a ‘drag’ event. We do this by listening for a mousedown event on our canvas element.
canvas.addEventListener('mousedown', function (evt) {
evt.preventDefault();
mouseDown = true;
mouseX = evt.clientX;
mouseY = evt.clientY;
}, false);
When this happens we update our mouseDown logical variable and record the current cursor position.
Mouse Move
Now we have a pressed button, we want to track how much the user drags the mouse across the screen. We can do this as so:
canvas.addEventListener('mousemove', function (evt) {
    if (!mouseDown) { return } // is the button pressed?
    evt.preventDefault();

    var deltaX = evt.clientX - mouseX,
        deltaY = evt.clientY - mouseY;

    mouseX = evt.clientX;
    mouseY = evt.clientY;

    dragAction(deltaX, deltaY, object);
}, false);
This function is activated each time the user moves their mouse. If the mouse button has not been pressed, however, it does not do anything.
If the button is pressed, our user is ‘dragging’ across the screen. We therefore record the distance moved, and pass it on to a custom dragAction function, along with the object we wish to manipulate.
Mouse Up
Finally, we need to know when the user stops pressing the mouse button, and reset the mouseDown variable. This is done with the mouseup event listener:
canvas.addEventListener('mouseup', function (evt) {
evt.preventDefault();
mouseDown = false;
}, false);
The custom response function
So far we have looked at recording when the user applies a ‘drag’ event and how far they have moved the cursor throughout this. Now we create a function to pass that information to the part of our scene.
For the purpose of this example, we will be rotating a mesh object — or in my case a THREE.Group containing 5 objects (see here for groups). We do this by changing the x and y rotation as such:
function dragAction(deltaX, deltaY,object) {
object.rotation.y += deltaX / 100;
object.rotation.x += deltaY / 100;
}
Here deltaX and deltaY represent the relative change of the mouse cursor throughout the drag event. The actions however can be anything the user desires — for instance, if we wanted to translate the position, we could use object.position.x += X instead.
Putting it all together
As I plan to reuse this code, It is not within my interest to copy-paste each time I wish to apply it. Instead, I can package it into a module, place that within a shared folder and import it each time I desire to use it. The module contents are given below:
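The embedded gist has not survived here, so what follows is a minimal reconstruction of the module, assembled from the snippets above (the default dragAction is the rotation response shown earlier):

```javascript
// dragTHREE.js -- reusable drag handling for a THREE.js canvas (reconstruction)

// Default response: rotate the object proportionally to the drag distance.
function dragAction(deltaX, deltaY, object) {
  object.rotation.y += deltaX / 100;
  object.rotation.x += deltaY / 100;
}

// Attach mousedown/mousemove/mouseup listeners to `canvas` and forward
// drag deltas to `action(deltaX, deltaY, object)`.
function dragControls(canvas, action, object) {
  let mouseDown = false,
      mouseX = 0,
      mouseY = 0;

  canvas.addEventListener('mousedown', function (evt) {
    evt.preventDefault();
    mouseDown = true;
    mouseX = evt.clientX;
    mouseY = evt.clientY;
  }, false);

  canvas.addEventListener('mousemove', function (evt) {
    if (!mouseDown) { return; } // only act while the button is held
    evt.preventDefault();
    const deltaX = evt.clientX - mouseX,
          deltaY = evt.clientY - mouseY;
    mouseX = evt.clientX;
    mouseY = evt.clientY;
    action(deltaX, deltaY, object);
  }, false);

  canvas.addEventListener('mouseup', function (evt) {
    evt.preventDefault();
    mouseDown = false;
  }, false);
}

// In the actual module file these would be exported:
// export { dragControls, dragAction };
```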
Importing the module
Now we have a module, we can import it within our script as :
import {dragControls,dragAction} from './dragTHREE.js';
and use it as
dragControls(renderer.domElement,dragAction,graphs)
Where renderer.domElement is my canvas, dragAction is a function which takes the mouse deltas and an object as arguments, and graphs is a group object containing all the individual components of each graph in the title image.
Using a personal function with the module
Since the module is available to edit, we can either change the dragAction function directly, or just pass a different one to dragControls :
|
https://uxdesign.cc/implementing-a-custom-drag-event-function-in-javascript-and-three-js-dc79ee545d85
|
['Daniel Ellis']
|
2020-11-24 12:33:30.375000+00:00
|
['JavaScript', 'Webgl', 'Drag', 'D3js', 'Threejs']
|
December 2020 Deals Recap
|
As we approach the winter holidays we have a final monthly deals recap market map for you. With a new year, and a light at the end of the tunnel (vaccine rollouts), we are looking forward to a better, brighter, and healthier 2021. Things are looking bright for New England, as capital floods into biotech, deeptech, and just about all tech in the region. Again, we’re thankful for all you founders and investors continuing to move forward with your plans to make the world a better place! Now, onto the deals. [NOTE: Round info per Crunchbase reporting]
|
https://medium.com/the-startup-buzz/december-2020-deals-recap-5b36019ab47a
|
['Matt Snow']
|
2020-12-22 20:02:37.049000+00:00
|
['Technology', 'Fundraising', 'Startup', 'New England', 'Venture Capital']
|
Blogging Guide
|
Research has shown that people make decisions about the sites they click through based on the relevance of keywords present in the URL. Including pertinent keywords improves the likelihood they’ll choose your site among their options when you fill their need best.
When they encounter your link through social media, email, or a website, they’ll get a clearer picture of what your link offers. This can build enough trust and engagement that they click through.
In websites where your link is not included with anchor text, the URL itself becomes the anchor text. A readable, keyword-focused URL can drive traffic to your site by both boosting your rankings and encouraging a higher click through rate.
Luckily, Medium allows users to customize the URL link for individual stories.
How to Edit Your Medium Article URL
When you are editing your story draft click on the ⋯ (three dots) icon in the upper right hand corner. Select “more settings” from the drop down menu. Select advanced settings, and check the box that says custom. Enter your custom link.
An example of this process is shown below:
Additional Notes on Custom Medium Article URLS:
|
https://medium.com/blogging-guide/customize-medium-story-link-a7ab58ed0bce
|
['Casey Botticello']
|
2020-07-10 00:42:52.900000+00:00
|
['Url', 'Medium', 'Format', 'Story Link', 'Writing']
|
Hands on Machine Learning to program in Jupyter notebook
|
I created YouTube videos on Machine Learning using Jupyter notebook.
Hands-on Machine Learning programming in Jupyter notebooks that is easy to learn. This is my YouTube channel for data science and Machine Learning for beginners, in English, हिंदी (Hindi) and తెలుగు (Telugu)
I have covered video tutorials in three languages, viz., English, Hindi and Telugu.
Following are playlist in English:
Promotional: https://www.youtube.com/watch?v=OaXZieKMqWQ&list=PL01e4bnNRw-Sl9Xt4bw1hmCfMBatVR7YH Basics in English: https://www.youtube.com/watch?v=TAHr-dLsDy8&list=PL01e4bnNRw-S71tN3OJGEMbvb9blR733G Machine Learning algorithms: https://www.youtube.com/watch?v=Zyo9lTvhAxk&list=PL01e4bnNRw-SZvEaIxSOo44Ip4ZPzLYvh
Following are playlist in Hindi:
Promotional: https://www.youtube.com/watch?v=GcHVwuOpubc&list=PL01e4bnNRw-RUmsUDf-X7wK5l85AwAKBx Basics in Hindi: https://www.youtube.com/watch?v=TGvSkbYb8oA&list=PL01e4bnNRw-TyOmg_LHziwfa0HVfqOF9J Machine Learning algorithms: https://www.youtube.com/watch?v=aaarXiwDnyk&list=PL01e4bnNRw-Rx9paxRUOheI2X8tIf7nJ5
Following are playlist in Telugu
|
https://medium.com/@kmeeraj/machine-learning-for-all-2350e63e758d
|
['Meeraj Kanaparthi']
|
2020-11-24 22:21:27.041000+00:00
|
['Python', 'Ipynb', 'Colab', 'Machine Learning', 'Jupyter Notebook']
|
Emporia energy-monitoring smart plug review: Power management on a budget
|
Emporia is best known for its Vue energy monitor product, and now the brand is expanding—if ever so slightly—into additional power-centric smart home gear. Up first is the Emporia Energy Monitoring Smart Plug, which is designed to work with the EmporiaEnergy mobile app.
The hardware is straightforward, featuring a single three-prong outlet in the center of a reasonably compact chassis with well-rounded corners. A large power button is placed to the right of the outlet and a status LED appears to the left. The LED is small, but it can’t be disabled. The outlet has a 15-amp limit, or an implied maximum power rating of 1800 watts.
This review is part of TechHive’s coverage of the best smart plugs, where you’ll find reviews of competing products, plus a buyer’s guide to the features you should consider when shopping for this type of product.

A curious omission here is the lack of printed instructions of any kind. Instead there’s just a QR code printed on the box that directs you to download the EmporiaEnergy app to get you started. The good news is that that process is quite intuitive, and the Emporia app is effective at walking you through the basics of connecting the outlet to your Wi-Fi network (2.4GHz only), which primarily involves holding down the power button for six seconds to put the outlet into pairing mode.
Christopher Null / IDG: The graphs in Emporia’s mobile app are detailed and useful.
From there, the plug can be assigned to an existing circuit—if you already use the Emporia Vue system—or assigned to a new one. The plug doesn’t directly interact with the Emporia Vue hardware, but there is some synergy between the two devices. Namely, if you use Emporia Vue, a plug can be included as a subcomponent of one of your monitored circuits, so you can further break out and fine-tune the power draw on that specific circuit. But you don’t need Vue to use the smart plug if you have more modest power monitoring goals.
The big draw here is energy monitoring, of course, which the Emporia app breaks down numerically and graphically over a time horizon that you can set, and which can range anywhere from by-the-second to by-the-year. Few products in this category have both the depth and usability of the Emporia Vue outlet when it comes to managing power consumption. The intuitive graphics almost make energy management fun. Outside of the monitoring features, I encountered no trouble with the plug during my testing, it was quick and responsive to commands, and it never dropped offline.
Emporia Emporia’s smart plug is just slightly wider than the typical outlet cover plate.
Otherwise, the Vue plug isn’t the most sophisticated of devices. It includes a basic scheduling system, buried under the “Manage Devices” menu, but there’s no countdown timer system to shut power down after a specified length of time. Alexa and Google Assistant are both supported, but not IFTTT.
The $11 price tag (or as little as $6.50 each when bought in a four-pack) is perhaps the icing on the cake: this could be the lowest-priced Wi-Fi outlet with energy monitoring features on the market. It’s definitely a top pick whether you have a Vue system installed or not.
Note: When you purchase something after clicking links in our articles, we may earn a small commission. Read our affiliate link policy for more details.
|
https://medium.com/@crystal41134267/emporia-energy-monitoring-smart-plug-review-power-management-on-a-budget-b2ae6829bf3b
|
[]
|
2020-12-24 01:27:25.608000+00:00
|
['Surveillance', 'Home Theater', 'Cord', 'Connected Home']
|
Chess: The Significance of the First Move Advantage
|
Introduction
In the world of Chess, one question which remains open is whether or not White has a significant advantage due to having the first move. The player who controls the White pieces begins the game on the attack while the player who controls the Black pieces must defend, which is where this apparent advantage originates. If both players play identically and symmetrically, then White will win a significant majority of the time. The issue with this is that players do not play identically or symmetrically most of the time, in fact White wins only 37% of the time compared to Black’s 28%, according to the chessgames.com database of chess games (Chess Statistics). This paper aims to analyze whether White has a significant advantage in Chess due to having the first move, and this will be done through considering multiple factors including rating level, game type, and first move.
Chess remains one of the most popular games of all time, claiming 600 million fans worldwide (Cowen, 2018), hence the significance of Chess cannot be overstated. Analyzing whether White has a definite advantage over Black will help novice and expert players understand the game better and modify the strategies utilized by every player.
To identify whether White has a significant advantage over Black in Chess, a logistic regression model will be applied to the LiChess Chess Game Dataset (Jolly, 2017). Logistic regression is best used to describe the relationship between a binary outcome variable and one or more independent predictor variables. The logistic regression model will also provide estimates identifying how a certain predictor affects the outcome in an intuitive manner.
An outcome variable of the form white_won will be used in the logistic model, where 1 represents games where the White player won and 0 represents games where White lost or tied. The Methodology section (Section 2) will dive deeper into which predictors were chosen and how the logistic regression model was developed. Results of the model can be seen in Section 3, Results, and further discussion on the outcome of the model will be highlighted in Section 4, Discussion.
Methodology
This section will discuss the dataset further as well as the development and selection of the logistic model.
Data
The data used in this analysis was retrieved from https://www.kaggle.com/datasnaek/chess. This dataset consists of 20,000 games of chess collected from LiChess.org. The data was assembled using the LiChess API ( https://github.com/ornicar/lila), and consists of the most recent games taken from the top 100 teams on LiChess in 2017 (Jolly, 2017). The dataset contains 16 columns, including white_rating , black_rating , moves , opening_name , and winner , along with several other useful features. Additionally, this data is considered an experiment as LiChess randomly assigns players as Black or White. The population this dataset attempts to measure would be all players of online Chess, the frame being all LiChess members, and the sample would be the 20,000 most recent games taken from the top 100 teams on LiChess.
A table summarizing the characteristics of this dataset can be seen in Table 1 below (Revelle, 2020), along with a plot displaying the frequency of a White win, a Black win, and a draw (Figure 1).
Table 1: Characteristics of Chess Dataset
Figure 1: Frequency of Winners
Model
To create a logistic model to analyze the relationship between a player controlling the White pieces and winning games, a binary outcome variable denoted as white_won was created where if player White won, then this variable would be represented as 1. Otherwise, if Black won or the game was a draw, the variable white_won would be represented as 0.
Two additional variables were also created to assist the analysis. A continuous variable diff_rating was created to represent the difference in player rating between White and Black. A variable named game_type was created to replace increment_code from the raw dataset; it represents the game type as an integer corresponding to the time length of the game.
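The derived variables can be sketched in a few lines. The column names below follow the Kaggle schema, but the sample rows, and the assumption that game_type is parsed as the base minutes of the increment code, are purely illustrative (the original analysis was done in R):

```python
import pandas as pd

# A few hypothetical rows mimicking the LiChess dataset schema
# (column names from the Kaggle dataset; the values are made up).
games = pd.DataFrame({
    "winner": ["white", "black", "draw", "white"],
    "white_rating": [1500, 1400, 1600, 1800],
    "black_rating": [1450, 1500, 1600, 1700],
    "increment_code": ["10+0", "15+2", "5+5", "10+0"],
})

# Binary outcome: 1 if White won, 0 if Black won or the game was drawn.
games["white_won"] = (games["winner"] == "white").astype(int)

# Continuous predictor: difference in rating between White and Black.
games["diff_rating"] = games["white_rating"] - games["black_rating"]

# Integer game type: assumed here to be the base time (minutes)
# parsed from the increment code.
games["game_type"] = games["increment_code"].str.split("+").str[0].astype(int)

print(games[["white_won", "diff_rating", "game_type"]])
```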
The selection of predictors for the logistic model was chosen through backward step-wise variable selection. A full model comprised of all predictors was created and then through backward step-wise variable selection, predictors were dropped based on whether the new model reduced the Bayesian Information Criterion (BIC). The BIC is a criterion which penalizes models containing too many predictors (The Methodology Center). The full model and step-wise model were evaluated by comparing their AUC and ROC Curve, using the pROC package in R (Robin et al., 2011), seen in Figure 2.
Figure 2: ROC Curve of Full (left) and Reduced (right) Model
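The backward search described above was done with R's step() under the BIC penalty; a minimal Python sketch of the same idea, run on synthetic data, might look like the following (the data-generating process and the near-unpenalized sklearn fit are assumptions for the demo, not the paper's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bic(X, y, features):
    """BIC = k*ln(n) - 2*ln(L) for a near-unpenalized logistic fit."""
    n = len(y)
    model = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, features], y)
    p = model.predict_proba(X[:, features])[:, 1]
    log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    k = len(features) + 1  # slopes plus intercept
    return k * np.log(n) - 2 * log_lik

def backward_select(X, y):
    """Drop one predictor at a time while doing so lowers the BIC."""
    features = list(range(X.shape[1]))
    best = bic(X, y, features)
    improved = True
    while improved and len(features) > 1:
        improved = False
        for f in list(features):
            trial = [g for g in features if g != f]
            score = bic(X, y, trial)
            if score < best:  # dropping f improves (lowers) the BIC
                best, features, improved = score, trial, True
                break
    return features

# Synthetic demo: the outcome depends on columns 0 and 1; column 2 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
kept = backward_select(X, y)
print(kept)
```

The search keeps the informative predictors and tends to discard the noise column, mirroring how step-wise selection pruned the full Chess model.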
It is notable that the reduced model has a lower AUC, which implies it discriminates worse than the full model. Despite this, its Akaike Information Criterion (AIC) is lower, which suggests a better model fit (The Methodology Center). Thus, the final model combines the significant predictors retained by the reduced model with predictors from the full model that were deemed useful for the analysis: turns, diff_rating, and game_type are used to predict the outcome variable white_won.
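Fitting the final three-predictor model and computing its AUC can be sketched as below. The data here are synthetic, with an assumed data-generating process chosen only so the demo resembles the paper's findings; the real analysis fits white_won ~ turns + diff_rating + game_type on the LiChess data in R:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the three final predictors.
rng = np.random.default_rng(1)
n = 2000
turns = rng.integers(20, 150, size=n)
diff_rating = rng.normal(0, 200, size=n)
game_type = rng.choice([5, 10, 15, 30], size=n)

# Assumed data-generating process, for the demo only.
logit = 0.004 * diff_rating - 0.0005 * turns - 0.001 * game_type
white_won = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([turns, diff_rating, game_type])
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, white_won)
auc = roc_auc_score(white_won, model.predict_proba(X)[:, 1])
print(f"in-sample AUC = {auc:.3f}")
```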
A second model will be created to assess the relation between the first move made and the probability of White winning the game, where the outcome variable is white_won and the predictor is opening_eco. The variable opening_eco refers to a code representing the first move; each code's corresponding move name can be found here: https://www.365chess.com/eco.php. This model produces a high AIC compared to the first model, suggesting a poor fit, but it will be used to assess the variable of interest, opening_eco.
Results
The summary of the logistic model can be seen in Table 2.
Table 2: Summary of Logistic Model
The tables displaying the top and bottom coefficients of the logistic model using only the opening_eco variable as a predictor can be seen in Tables 3 and 4.
Table 3: Top Coefficients of Opening Moves
Table 4: Bottom Coefficients of Opening Moves
A two-sided t-test was performed along with the logistic model; the results can be seen in Table 5.
Table 5: Two Sided t-test Results
Discussion
In the previous sections, the raw data was cleaned to produce the predictors used in the logistic model. The model, built through backward step-wise variable selection together with other predictors judged useful, estimates the probability of white_won using the predictors turns, diff_rating, and game_type. A second model, created solely to understand the relationship between white_won and opening_eco, was also developed. Finally, a two-sided t-test was performed on the mean of white_won to identify whether this result was significant.
Conclusion
A two-sided t-test performed using the null hypothesis of “true mean is equal to 0.5” and the alternate hypothesis of “true mean is not equal to 0.5” resulted in a p-value of 0.6926. As this p-value is greater than our α value of 0.05, we fail to reject the null hypothesis. This implies that we do not have evidence that player White wins Chess games at a rate greater or less than 50%. From this, the conclusion drawn is that White does not have a significant advantage due to having the first move. The t-test is visualized in Figure 3 using the R package gginference (Charalampos and Kleanthis, 2020).
Figure 3: t-test Visualisation
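The reported test can be reproduced in outline with scipy (the original used R's t.test; the simulated 49.9% win rate below is an assumption chosen only to mimic the paper's near-50% observed result):

```python
import numpy as np
from scipy import stats

# H0: true mean of white_won equals 0.5; H1: it does not.
rng = np.random.default_rng(42)
white_won = rng.binomial(1, 0.499, size=20000)

t_stat, p_value = stats.ttest_1samp(white_won, popmean=0.5)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, reject at 0.05: {p_value < 0.05}")
```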
The coefficients of the first logistic model are shown in Table 2 and describe how each predictor affects the probability of White winning the chess game. It is notable that the coefficient for turns is -0.00046287, meaning the log odds of White winning decrease by 0.00046287 with each additional turn. In other words, the longer the game goes on, the more White's first-move advantage erodes. Similarly, the coefficient for game_type is -0.0013123: as game_type refers to the maximum time length of the game, the log odds of White winning decrease by 0.0013123 with each unit increase in game_type. Combining these two results, it is evident that the significance of the first-move advantage decreases with longer games. One reason for this is that human players are prone to "blunders", and the odds of winning shift between the players as more blunders are made.
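Because these are log-odds coefficients, exponentiating turns an additive change in log odds into a multiplicative change in the odds themselves, which is often easier to interpret:

```python
import math

# The reported log-odds coefficients from this analysis (Table 2).
coef_turns = -0.00046287

# exp() converts a change in log odds into an odds ratio.
or_per_turn = math.exp(coef_turns)            # odds ratio per extra turn
or_per_100_turns = math.exp(100 * coef_turns)  # cumulative over 100 turns
print(f"odds ratio per extra turn: {or_per_turn:.5f}")
print(f"odds ratio over 100 extra turns: {or_per_100_turns:.4f}")
```

So each extra turn multiplies White's odds of winning by roughly 0.9995, and a game that runs 100 turns longer shrinks them by about 4.5%.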
Figures 4 and 5 display how the probability of White winning varies with the number of turns and the game type, using the stat_smooth function in ggplot (Wickham, 2019). It is evident that the probability of White winning decreases steadily as the number of turns increases, while the decrease with game type is less pronounced.
Figure 4: White Win in Relation to Number of Turns
Figure 5: White Win in Relation to Game Type
The logistic model provides a positive coefficient for diff_rating, the difference between player White's rating and player Black's rating. This result is plausible, as the higher the difference in rating, the greater the advantage player White has. Figure 6 displays the sigmoid relationship between the difference in rating and White winning; when the difference in rating is around 0 (i.e. the players have roughly equal ratings), the probability of White winning is approximately 50%.
Figure 6: White Win in relation to Difference in Rating
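The sigmoid shape in Figure 6 follows directly from the logistic link. A minimal sketch, with an illustrative intercept and slope rather than the fitted coefficients, shows why equally rated players sit at 50%:

```python
import math

def win_probability(diff_rating, intercept=0.0, slope=0.004):
    """Logistic curve for P(white_won = 1) as a function of rating
    difference; the intercept and slope are illustrative values,
    not the fitted coefficients."""
    return 1 / (1 + math.exp(-(intercept + slope * diff_rating)))

# With equally rated players the curve sits at exactly 50%.
print(win_probability(0))
# A large positive rating gap pushes the probability well above
# the mirror-image negative gap.
print(win_probability(400) > win_probability(-400))
```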
The second logistic model computes a coefficient for each opening move, describing its effect on the log odds of player White winning the game. Tables 3 and 4 show the highest and lowest coefficients among the factor levels of opening_eco. Using https://www.365chess.com/eco.php to decode the associated codes, the best opening moves for White are the Benoni, Taimanov variation (A67), the Sicilian, Dragon, classical variation (B74), and the Queen's Gambit Declined, semi-Slav (D47). Conversely, the worst opening moves for White are the Benoni, classical (A71), the Dutch, Leningrad (A89), and the Queen's Gambit Declined, Tartakower (D58). It is interesting that slight variations of one opening can make it either strong or weak: the Benoni Defense is considered a strong opening under the Taimanov variation but a weak one under the classical variation.
In conclusion, player White does not win chess games at a significantly higher rate than player Black, even with the first-move advantage. The longer a game lasts, the lower the log odds of player White winning. Using this information, player Black can reduce the significance of White's first-move advantage by extending the game and avoiding blunders in the opening stages.
Weaknesses
One of the most significant weaknesses of this analysis is the sample size of the dataset. As stated previously, there are an estimated 600 million chess players worldwide, while this study analyzes approximately 20,000 games from a single platform. Due to this, the results may not generalize to the population of Chess players.
Another weakness concerns the logistic model itself. The model produced a rather high AIC score, which can imply a poor fit. This is also supported by an AUC closer to 0.5 than to 1, as seen in Figure 2, which can imply the accuracy is not satisfactory.
The fact that Chess is typically a game played by two humans can also be considered a weakness in this topic. Chess is prone to human error, which may not be recorded in the dataset and so analyzing whether a specific player has an advantage may not be accurate.
Next Steps
In terms of the model, adding interaction terms or other predictors which can model the human error aspect of Chess can help produce a more accurate and meaningful model. Adding more complicated terms can also decrease the AIC and improve the accuracy. It may also be useful to group the data by average rating between player White and player Black as this can show whether the first move advantage is more significant in lower rated games.
The key next step would be to collect more data. This can be done by collecting games from numerous online Chess platforms rather than just LiChess.
|
https://medium.com/@labibc01/chess-the-significance-of-the-first-move-advantage-f88d99dd8f3e
|
['Labib Chowdhury']
|
2020-12-24 20:55:33.669000+00:00
|
['Logistic Regression', 'Data', 'Chess', 'Data Science', 'Analysis']
|
Post- 3 Work
|
Post- 3 Work
In August this year I started a new job. To explain what I actually do would take far too long and would put an insomniac to sleep. Suffice to say, I work in operations for an investment firm so I do financial admin.
I have been told I have a good job. But what makes the job good? It has the benefits of a pension scheme, health insurance, a healthy bonus scheme (or so I’m told) and an above average salary. So by those measures it is a good job. It is also currently taking up approximately 60 hours of my week, and I can’t say that I wake up every morning excited to sit at a computer and track the ins and outs of the funds for hours on end each day. So maybe not such a good job then….
Did you ever look back and wonder how you ended up where you are? I’ve been doing that a lot lately, both personally and professionally. In college I was quite good with accounting and finance. I kept choosing those electives because I could boost my average grade by getting high marks with minimum effort. I also enjoyed being able to figure out financial problems and come up with a solution. When I found an elective called financial markets, I envisioned myself making my millions on Wall St.
See when it comes to my intellectual ability I have a bit of an undeserved ego. I can often think I’m smarter than I actually am. Luckily I’m also quite quiet and usually only pipe up when I’m sure I know what I’m talking about. So I quietly went about my business in this financial markets class, got a good grade and found I was actually interested in the markets. This led me to sign up for an MSc in UCC specializing in investment banking. This was much tougher than I’d like to admit and I would say I struggled at times. I got my 2.1 and got out of Cork.
Then it was time to find a job. I remember while I was in college my brother slagged me for doing Business Studies. “What are you going to do with a Business degree? Become a business man? Sure anyone can do that”. So I wanted to get a job that used my degrees and paid well to prove college wasn’t a waste of time.
A Google search led me to a job in Wexford with a large American bank. I spent 7 months in Wexford and met some great people but I was never settled there as there isn’t much going on. Luckily I was able to transfer to the Dublin office.
Fast forward 6 years and I decided to leave the bank and take a job with one of our clients. The reason behind this move was to push myself to a new challenge. I was bored of what I was doing in the bank and wanted a change. And a change is what I have gotten. The new role is a big change from my last.
The question I keep asking is, do I really want a challenging role that takes up my entire week. Do I really want to dedicate this much of my time to a career I’m not excited about? Is money really that important, or would I be happier with a different job and more free time to enjoy my life?
I’m starting to think the latter would be a better option…..
|
https://medium.com/@cooper.robin3/post-3-work-7119b1007bad
|
[]
|
2020-12-08 22:59:44.551000+00:00
|
['Career Change', 'Work Life Balance', 'Working From Home']
|
Getting the Last Drop
|
He got on the bed and I got undressed as fast as I could. This is not like me, I’m a lights off girl but I didn’t care. My days of drooling over this man were over. Time to get his dick wet. I got on the bed beside him, on my knees, and softly touched his thighs, massaging my way up, across his belly and down the other thigh. I paid special attention to the bruising on his inner thigh on both sides.
“That OK?” I asked. I wanted this first time to last, not just jump on that fabulous fat cock that was getting fatter by the minute.
“That’s grand,” He said and closed his eyes and enjoyed my touch. Then I felt his hand caress my thigh, high up near my butt. I hadn’t expected it, so I jumped up a little and his hand slipped under my ass and the tips of his fingers in the crack of my ass.
“Hey, I’m supposed to be the one massaging you,” I said. He responded by slipping one finger up inside me slowly. I rocked forward a little, holding myself up with one hand on his chest as I continued to rub his lower body. But as he finger fucked me, I forgot about the bruise and started to rub the length of his cock. Teasing the tip with my finger and thumb and then tickling the length. He adjusted his hand. Under me now, he slipped his hand down my belly and over my pubes. His fingers found their home again, and I spread my thighs for him a little. He had two fingers working my insides and the palm of his hand providing pressure to my clit. “Ohh, Mmm, fuck, that’s amazing. Oooo.” I cooed.
Not to be outdone, I tickled my way gently to his balls and cradled them. “Is that OK?”
“Fuck yeah, it is,” he said. I felt his balls just under the skin. Smooth and heavy. They were retired, all they had produced over the years reduced to just one more load, and it was all mine. I licked the head of his cock for a moment and scooped it up into my mouth, tasting the soft velvet texture of the tip of his cock, I couldn’t get more into my mouth if I tried. I was wondering now if I could take him. It has been a while, and I’d never had a man this size in any case. I made soft slurping noises as I wet him with my saliva. I wished now I had some lube.
His fingers left my hole, and I felt him grab me and pull me up onto his chest. The sensation of his tongue along my slit was intoxicating, I could barely think as it probed into my pussy. I arched my back a little only to feel his tongue bathing my asshole, probing my pucker, “Whooo, Ohh, fuck that tickles. I’ve never had anyone eat my ass before,” I squealed. I sat on his face a moment and wiggled my butt, feeling his tongue invade my naughtiest of holes.
Enough of this, I needed his cock. So I walked forward on all fours over his body until I was able the rub my wet slit on his cock. Back and forth I rode him a little, wetting that shaft and feeling his length and the thickness between my pussy lips. I reached between my legs and lifted his cock straight up and squatted over him, teasing myself with the tip a bit until I finally began my descent. His cock head slipping between my lips and my tight hole stretching around him, I should have started earlier, before he got this hard and huge.
It took some doing and a lot of spit on his shaft, but finally I felt him inside me a few inches. I bounced up and down a little to take more of him. I had never felt so full in my life. Fuck, this was heaven. I put my hands on his thighs and rode him as he put his hands on my hips to steady me. I watched as his cock slipped in and out of me, each stroke it got wetter and our juices became a sexy cream at the base of his cock. I was taking all of him now. We fucked like this for a good ten minutes. My thighs burned a little, but I was in love with the feeling of him touching me in places I never felt before. I felt him playing with my asshole, probing me with his thumb, it triggered something inside me and made me bounce faster and harder on his cock. My legs weak, my mind was gone as a massive orgasm ripped through my body and I became an uncoordinated useless mass as my legs shook, and I lay on his thighs a moment to catch my breath.
He rolled me off to one side and spread my legs; I was a passive partner now, about to be taken by this stallion. My pussy was his, my womb was his, I was his to do with as he pleased. I put one leg on his shoulder and the other on the bed, allowing his body to stretch my legs out for him. There was a fiery passion in each stroke. I loved the way my tits bounced as he found the bottom of my cunt, his balls touched my asshole each time.
“Fuck me. Give it to me. Give me your load,” I said weakly as I was being soul fucked by this amazing man. “Fill me up. Ohhh, Ohhh, Mmmmm.” a second orgasm or a continuation of the first, I couldn’t tell I just knew it was wonderful. I grabbed my tits and mashed them together, pinching my nipples and biting my lip as his cock became my whole world. His pace fast now, desperate, his last chance to do this for keeps, I hoped my body would accept him and give me his baby. I was getting a bit sore, but it was a good soreness I would have endured forever.
“Ahhh, Ahhh, fuck that’s tight, I’m… Ahhhh.” Finally he gave up his essence. I imagined the seed rushing into my womb, the last drops of fertile sperm franticly searching for their mark. I pushed him back a little. I didn’t want any of his come to leak out.
“Pull back, give me room to hold your come inside me,” I said as he came. Finally he was done. Pulling out of me, I rocked my hips so his seed would flow into me. “Hand me a pillow, I need to lay high so you don’t fall out.”
He grabbed the pillow and lifted me gently, putting the pillow under my pelvis. Then he laid down with me and rubbed the length of my body, lingering each time on my breasts as we kissed.
“Twenty-five more times. I don’t think we’ll be able to do half of that in a few days. Your cock will kill me,” I said.
“We have two weeks. I never called Martha, she and the girls won’t be back for two weeks,” He said. He kissed me deeply, like he appreciated my giving his sperm a place in my body. But I had to admit, I loved him, I had no right to him, but I loved him just the same and I was proud he would let me do this for him.
|
https://medium.com/bella-cooper-books/getting-the-last-drop-4c92d492377e
|
['Bella Cooper']
|
2020-12-20 16:35:20.738000+00:00
|
['Short Story', 'Sexuality', 'Erotica', 'Relationships', 'Love']
|
Responding to Vietnam’s Healthcare Challenges: Bolstering Sustainable Funding Models for Community-Based Organizations and Social Enterprises
|
Responding to Vietnam’s Healthcare Challenges: Bolstering Sustainable Funding Models for Community-Based Organizations and Social Enterprises
A healthcare worker stands outside a clinic in Vietnam. Photo: USAID Vietnam
By Mariah Redfern, INVEST Communications Specialist
At the height of the HIV/AIDS crisis in the 1990s, the United States established PEPFAR — the President’s Emergency Plan for AIDS relief — which is the largest commitment by any nation to address a single disease in history. In Vietnam, USAID used PEPFAR funding to support community-based organizations — private clinics that focused on HIV/AIDS. Thirty years later, medical advancements have transformed HIV/AIDS from an acute, fatal disease to a manageable, chronic condition, and USAID Missions project that PEPFAR funding will decline significantly. This decline in funding presents a problem for community-based organizations that are heavily reliant on donor funding. It also presents a challenge for the Government of Vietnam as it seeks to continue supporting access to specialized healthcare.
Over the past two decades Vietnam’s economic growth has been undeniably strong. Not only has Vietnam seen its per capita gross domestic product increase 2.7 times between 2002 and 2018, but also Vietnam was one of few countries that saw economic growth in 2020 during the COVID-19 pandemic. With its growing middle class, Vietnam is seeing an increased appetite in its citizens for private healthcare services from community-based organizations, social enterprises, and private clinics. As funding from the PEPFAR program declines, private investment may be the key to sustaining successful community-based organizations, social enterprises, and private clinic models throughout the country that provide critical healthcare services.
USAID INVEST’s Private Sector Solutions
USAID’s Vietnam Mission is working with the Vietnamese Government to find sustainable funding solutions for private healthcare providers. In August 2019, the Mission began working with USAID INVEST — an initiative that mobilizes private capital for better development results — to explore and facilitate private investment and diversify funding for community-based organizations, social enterprises, and private clinics.
In order to provide high-quality, specialized services to a wide range of patients, private healthcare providers will need to take advantage of the increased willingness to pay among the middle class and transition from subsidized delivery models to self-sustaining business models.
To address the challenges that Vietnam’s private healthcare providers are facing, INVEST developed a two-pronged strategy: first, to assess the opportunities in Vietnam’s healthcare system for community-based organizations, social enterprises, and private clinics, and second, to partner with the private sector to implement recommendations from the assessment’s findings.
In May 2021, INVEST worked with PATH and the Centre for Social Initiatives Promotion (CSIP) to conduct an assessment — with participation from 42 community-based organizations and social enterprises — of market-based, scalable models for community-based organizations and social enterprises to deliver primary healthcare services. The assessment mapped providers, identified service model types, and assessed the needs of community-based organizations and social enterprises as well as their capacity to implement new models. Based on data collected by INVEST and its partners, the assessment identified 15 middle- to late-stage community-based organizations and social enterprises across Vietnam that could benefit from new business models, commercial capital, or business mentoring.
Phase one of INVEST and USAID Vietnam’s two-pronged strategy: investment opportunity assessment. Graphic: Lauren Yang, INVEST Communications Advisor
In Phase Two, INVEST and its partners will provide technical assistance to a shortlist of organizations identified in the assessment — organizations that demonstrated readiness to expand their HIV and other primary healthcare services and that could benefit from business mentoring, support in transitioning their business models, or access to commercial capital for growth or expansion.
INVEST will identify and pilot scalable fee-for-service models and expanded service offerings to assist community-based organizations and social enterprises delivering essential HIV/AIDS services to become financially sustainable and deliver more diversified, quality care. This work will help to support healthcare community-based organizations and social enterprises to seek new business models and diversify their services to ensure viability and sustainability. INVEST will partner with the private sector to bolster the capacity of community-based organizations and social enterprises to test and implement new business models by providing roadmap development, technical assistance, and transaction support.
Nguyen Thi Chien interviews a client at a USAID-supported HIV testing and counselling center near Hanoi. Photo: USAID Vietnam
The Long-Term Impact of Private Sector Engagement
In Vietnam and other markets around the world, public and private sector actors have an opportunity to work together to provide patients with the services they need and want. While private capital may provide a key to sustainable business models for private healthcare providers, many investors are hesitant to invest in social enterprises due to concerns about cash flow and profitability, and current investment models are not well-fitted to newer and less experienced organizations without established track records. The work that USAID/Vietnam and INVEST are doing will help equip community-based organizations, social enterprises, and private clinics with the tools they need to effectively attract private investment and pursue new pathways towards sustainable funding models.
For decades, USAID has helped strengthen healthcare systems, and we cannot afford to lose those hard-won gains. By enabling community-based organizations and social enterprises to diversify their financing, USAID is testing a new way to help healthcare providers adapt to new funding realities and better meet the healthcare needs of their communities.
|
https://medium.com/usaid-invest/responding-to-vietnams-healthcare-challenges-bolstering-sustainable-funding-models-for-dffce5a371ec
|
[]
|
2021-10-13 16:13:18.351000+00:00
|
['Health', 'Global Health', 'Vietnam', 'Sustainability', 'Funding']
|
Reflections On A Tumultuous Year
|
Reflections On A Tumultuous Year
Well here I am, finally within reach of the minimum required credits for enrollment in the second year. Barely keeping my head above water in order to avoid a year of nothing, with my only requirement being to pass at least one of the four subjects, two of which I’ve repeatedly failed at, the other I’ve steadily avoided.
I’ve had the strangest summer yet, and not just because of the global pandemic. I’ve never had to shuffle with school for so long into the summer. So far, school used to encompass most of the week days throughout the year, pausing for holidays, weekends, a short winter break and an expansive three-month summer break.
During these times I neither thought about nor did anything for the institution that would have domain over me until 18. But now university is different. The separation of work and play is no longer here. I’m constantly in a mixed state, always on the fringe of going back to school mode.
I barely passed my first year. Now, I’ve spent the first two weeks of school traveling around, in one country for private reasons, in another because that’s where my college is. I only went to two physical classes, because I only stayed a week in my college town. I would have gone to two more, but my schedule got messed up and I missed them. Then I left, convinced that it was currently useless to stay there and it wouldn’t last.
I turned out to be right, since the next weekend the whole country shut down. Turns out you can’t just pretend a global pandemic doesn’t exist, while carrying on. Now I’m back in my state. Back in my home. Back with my family. Back in my old life.
Now I’m staring down at the beginning of another week of school, all new material racing ahead of me.
|
https://medium.com/@LordViktorSaint/reflections-on-a-tumultuous-year-d4b7372bb470
|
['Viktor Saint']
|
2020-10-26 16:58:24.731000+00:00
|
['College', '2020', 'Personal', 'Memories', 'Year In Review']
|
It’s Time for Self-Help Gurus to Sit Down
|
It’s Time for Self-Help Gurus to Sit Down
Positivity at the expense of reality is destructive.
Photo by Dollar Gill on Unsplash
Suffice it to say I’ve had quite an eventful couple of years. Since 2018, I suffered a series of losses, each challenging on their own, but all arduous as a whole. I’ve been dealing with trauma and the chronic physical pain that often accompanies it.
I’ve tried meditating, exercising, stretching, talking, crying, and yelling the pain out of me. I’ve seen a psychotherapist specializing in grief, a psychotherapist specializing in somatic therapy, my medical doctor on multiple occasions, a chiropractor, a physiotherapist specializing in vertigo, a physiotherapist trained in dry needling, and a massage therapist.
And while a lot of the aforementioned therapies and approaches have helped, none of them singlehandedly “cured” me of my pain. And they didn’t erase my grief, either. They simply provided me with better tools for coping with it.
Now that some time has passed, and after a lot of processing on my end, I’m a bit better equipped at navigating grief, as well as all the other unfortunate events that have ensued. But this experience has allowed me to see beyond the veil, to recognize a lot of my own privilege, and to contend with the fact that so many people are suffering every day for things they have no control over. I’d just never really noticed, that is – not until it had happened to me.
With this new knowledge brought the recognition of just how many people in the personal growth community are grossly ill-equipped at dealing with trauma and suffering. I’ve discovered that behind the idea of “manifesting” is an industry that profits off white privilege and the systemic inequalities that perpetuate it. I’ve witnessed self-proclaimed “gurus,” “lightworkers,” “spiritual coaches” and the likes, selling the notion of transcending one’s emotions and traumas, while directly perpetuating the use of spiritual bypassing. I’ve had my own emotions dismissed, downplayed, and disregarded by so-called “experts” who charge fees for their services and believe themselves more evolved than the rest of us.
And I’m here to tell you why this form of toxic spirituality is harmful, exploitative, and long past its expiration date.
Manifesting is a facet of privilege.
In order for you to believe you have complete control over your environment, to the extent that you can think something into existence by persistently wishing for it, you must live a life in which you haven’t yet been proven wrong. Which means you’re privileged.
As my university stats professor drilled into my head a decade ago, correlation does not equal causation. Just because things go right for you, doesn’t mean you caused them to. Being privileged doesn’t mean you’ve never had something unfavorable happen to you, it just means the things you long for were probably always at your fingertips, you just hadn’t realized it. More than likely, the odds were already skewed in your favor.
According to The Law of Attraction’s website, manifesting “is where your thoughts and your energy can create your reality,” and so if you think and act positively, then you’ll attract favorable circumstances. The flip side of this is that when things go wrong, it’s also because of you. Which is glorified victim-blaming, and discounts so many factors outside of our control that prohibit or hinder people’s successes.
There are significant boundaries to manifesting; for instance, racial inequality.
So manifesting implies that you can attract wealth, fame, and career success by changing your thoughts. But how does this notion translate to systemic racial inequalities? Let’s first look at a bit of history for why we have a racial wealth divide today.
According to a 2019 article in the Center for American Progress, the US federal government has directly contributed to racist housing policies. After The Great Depression, the Home Owners’ Loan Corporation and the Federal Housing Administration (FHA) promoted residential segregation by keeping middle-class neighborhoods white and making it difficult for black people to qualify for mortgages.
This, in turn, led white people to earn more equity, allowed them better access to education (due to tax-funded schooling), and enabled them to afford certain opportunities for their children, such as extracurricular activities and college tuition, that were less accessible in more impoverished communities.
Of course, the opposite happens for black people. “African Americans face systematic challenges in narrowing the wealth gap with whites,” reports the Center for American Progress. “The wealth gap persists regardless of households’ education, marital status, age, or income.” With less access to wealth and equity comes a lack of opportunity for higher education, which inevitably impacts employment prospects.
Further to that, according to the Stanford Centre on Poverty & Inequality, a huge barrier preventing people from achieving financial and employment success can be reduced to the spelling of their name. People with “white-sounding” names are more likely to get callbacks for interviews, making them more likely to get the job, and leaving them under the false impression that it’s their skills that earned it.
Equally-qualified, educated, and skilled black people — those beating the odds of systemic inequality raised against them — can still be turned down from jobs they apply to, simply because of the fact that their names sound “black.”
And there are many other barriers to success outside of racism that manifesting doesn't account for. According to the Centre for Disease Control and Prevention, the CDC-Kaiser Permanente Adverse Childhood Experiences (ACE) Study revealed that childhood abuse or neglect increases a person’s risk of developing negative outcomes, such as depression, anxiety, substance abuse, cancer, diabetes, and even suicide, as an adult.
Barriers like the gender pay gap, suffering from mental health conditions, growing up with neglect and abuse, being raised in poverty: these are all factors that are outside of a person’s control and can greatly impact their ability to “think their way” into success.
The personal growth community believes you are your only roadblock.
“The truth is that financial success starts in the mind and the number one thing holding many people back is their belief system concerning wealth and money,” says Jack Canfield, motivational speaker, and corporate trainer. On his website, Canfield boasts a subscriber’s list of 2.5 million people and has allegedly sold more than 500 million books worldwide, many of which earned him the title of New York Times’ Bestseller.
“I am successful because I have never once believed my dreams were someone else’s to manage,” writes motivational speaker and author Rachel Hollis in her book Girl Wash Your Face. Founder of The Hollis Company alongside her soon-to-be ex-husband Dave, Rachel’s company focuses on personal growth and motivational seminars. According to the company’s LinkedIn page, they are based on six core values, one of which is centered on the belief that “our only competition is who we were yesterday.”
And how can I even discuss toxic spirituality without discussing the eponymous face for the personal growth movement, motivational speaker, and author Tony Robbins? On his website, Robbins boasts the ability to help you master every area of your life, and he sells anything from training programs to supplements and retreats.
Despite the fact that he’s been accused of berating abuse victims, subjecting his followers to dangerous techniques, and sexual harassment, his personal and professional development program is said to be the #1 of all time, with more than four million people in attendance to date. “The only thing that’s keeping you from getting what you want is the story you keep telling yourself,” says Robbins.
Now I beg to differ.
What do these three personal growth “gurus” have in common? They’re all white, they’re all rich, and they’ve all ignored systemic inequalities as a possible barrier to achieving success. And if they were to admit that there are other reasons someone may not land their dream job or make six figures, it would dismantle their entire platform and raison d’etre.
Consequently, by ignoring these barriers, they’re also directly profiting off them.
Spiritual bypassing is a self-righteous form of avoidance coping.
“Once you attend the motivational workshop I went to last weekend, you won’t sink to the level of getting angry over this,” a friend of mine said to me recently, efficiently undermining my emotions, suggesting that anger wasn’t a healthy, nor appropriate, response.
I didn’t say anything more to her, mostly because the entire purpose of her spiritual bypassing was to circumvent my experience with self-righteous “holier than thou” spiritual rhetoric, and in doing so, creating a more comfortable reality for her. At that moment, though, I vowed to always be someone who was comfortable feeling anger.
Spiritual bypassing, by its very definition, is harmful.
Author and psychotherapist John Welwood, in his 2000 book “Toward a Psychology of Awakening,” described spiritual bypassing as “the use of spirituality, spiritual beliefs, spiritual practices, and spiritual life to avoid experiencing the emotional pain of working through psychological issues.”
Spiritual bypassing is the act of avoiding feelings deemed to be “negative,” blaming unfortunate circumstances as “vibrating at a lower frequency,” and claiming that transcending one’s emotional reactions is a goal to strive for.
It is the belief that everything has a “higher purpose,” and that difficult circumstances are no more than lessons in disguise. And it’s an efficient form of avoidance coping, which a 2011 article in the Journal of Personality defines as “attempting to evade a problem and deal with it indirectly.”
Changing its name but not its practice is just as futile.
Author Michael Beckwith decided that it might sound better to call it “spiritual shapeshifting” and talks about how he “shapeshifted” the energy from his healthy knee to the energy of his injured knee, holding a “higher, purer vibration” and somehow magically ridding himself of pain and inflammation.
Now I’m not exactly sure which frequency I’m vibrating at, but I can tell you that promoting the notion that your thoughts can cure your pain is a dangerous narrative to spin, particularly for those suffering from a debilitating disease or chronic pain.
Claiming that an injury, whether physical, emotional or otherwise, being reduced to no more than “low frequency” implies that it’s entirely under your control. Does this line of thinking translate to cancer sufferers? Can someone with multiple sclerosis think themselves out of symptoms? Is it our fault if we’re diagnosed with a life-threatening illness?
Can you see how this rhetoric is inherently damaging?
Anyone who’s ever experienced acute suffering and trauma can tell you that while positive thinking as a concept can be helpful at times, positivity at the expense of reality is utterly offensive and harmful. It may make other people feel good to respond to your pain with “love and light,” but it entirely disregards the very real suffering you’re going through. And it efficiently undermines your experience.
“Spiritual laws offer an elegant solution to the problem of unfairness,” writes author Kate Bowler. “They create a Newtonian universe in which the chaos of the world seems reducible to simple cause and effect. The stories of people’s lives can be plotted by whether or not they follow the rules. In this world there is no such thing as undeserved pain.”
But this is not the world most of us are living in. And if you still live in this world, count yourself lucky that you haven’t been kicked out yet.
Those who spiritually-bypass live in their own world, and are uncomfortable with yours.
When you suffer in a world not governed by spiritual laws, you are not required to find a silver lining. The notion that “everything happens for a reason” is not true, and it’s very harmful. The universe isn’t a sentient being out there to teach you a lesson by causing the death of a loved one or unleashing a pandemic. Your personal growth is not the focus of the entire universe.
It may help some people feel better about themselves to spiritually bypass your experience because it gives them a false sense of control, and prevents them from having to empathize with (and thus, acknowledge) your tangible fear. By avoiding the reality of your painful experience, spiritual bypassing enables people to separate themselves from believing it could also happen to them.
The truth is, every single day, horrible things happen to people who don’t deserve them, by absolutely no fault of their own. I’ve seen it, I’ve witnessed it, and I’ve lived it. Failing to acknowledge this fact is a gross disservice to ourselves and to others.
It’s not about transcending your emotions and always remaining positive, it’s about processing your life and adapting to its ups and downs. People who are suffering don’t need to be victim-shamed or feel at fault for their circumstances. Talk about adding insult to injury.
Those entitled enough to preach toxic spiritual rhetoric to vulnerable people need to take inventory of their own lives and process their unresolved traumas. Anyone feeling comfortable profiting off others with their MLM essential oils or motivational seminars should do us all a favor and sit down. And we should all take stock of our privilege; while it may be invisible to us, it’s certainly obvious to others.
Life is hard enough as it is. Let’s not make it any harder.
|
https://medium.com/swlh/its-time-for-self-help-gurus-to-sit-down-c1f2693d0239
|
['Shannon Leigh']
|
2020-10-23 22:02:47.850000+00:00
|
['Personal Growth', 'Self', 'Psychology', 'Trauma', 'Self Improvement']
|
Object-Oriented JavaScript — Parts of a Class
|
Photo by George Tseganis on Unsplash
JavaScript is partly an object-oriented language.
To learn JavaScript, we got to learn the object-oriented parts of JavaScript.
In this article, we’ll look at the parts of a JavaScript class.
Constructor
The constructor is a special method that’s used to create and initialize the object we create with the class.
The class constructor can call its parent class constructor with the super method.
super is used for creating subclasses.
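A short sketch can make the constructor and super concrete; Vehicle and Truck here are hypothetical names invented for illustration, not classes from the article:

```javascript
// Base class whose constructor initializes shared state
class Vehicle {
  constructor(wheels) {
    this.wheels = wheels;
  }
}

// Subclass: super must be called before `this` is used
class Truck extends Vehicle {
  constructor(wheels, payload) {
    super(wheels); // runs the Vehicle constructor
    this.payload = payload;
  }
}

const truck = new Truck(6, '2 tons');
console.log(truck.wheels); // 6
console.log(truck.payload); // '2 tons'
```

Calling super(wheels) is what lets the subclass reuse the parent's initialization instead of duplicating it.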
Prototype Methods
Prototype methods are prototype properties of a class.
They’re inherited by instances of a class.
For instance, we can write:
class Car {
  constructor(model, year) {
    this.model = model;
    this.year = year;
  }

  get modelName() {
    return this.model
  }

  calcValue() {
    return "2000"
  }
}
We have 2 prototype methods within our Car class.
One is the modelName getter method.
And the other is the calcValue method.
We can use them once we instantiate the class:
const corolla = new Car('Corolla', '2020');
console.log(corolla.modelName);
console.log(corolla.calcValue());
We created the Car instance.
Then we get the getter as a property.
And we called calcValue to get the value.
We can create class methods with dynamic names.
For instance, we can write:
const methodName = 'start';

class Car {
  constructor(model, year) {
    this.model = model;
    this.year = year;
  }

  get modelName() {
    return this.model;
  }

  calcValue() {
    return "2000"
  }

  [methodName]() {
    //...
  }
}
We pass in the methodName variable to the square brackets to make start the name of the method.
Static Methods
We can add static methods that can be run directly from the class.
For instance, we can write:
class Plane {
static fly(level) {
console.log(level)
}
}
We have the fly static method.
To run static methods, we can write:
Plane.fly(10000)
Static Properties
There’s no way to define static properties within the class body.
This may be added in future versions of JavaScript.
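In the meantime, a common workaround is to assign the property directly on the class object after the definition; maxAltitude below is a hypothetical example added for illustration:

```javascript
class Plane {
  static fly(level) {
    console.log(level)
  }
}

// A static "property" attached outside the class body
Plane.maxAltitude = 40000;

console.log(Plane.maxAltitude); // 40000
```

Because classes are objects themselves, the property lives on Plane, not on its instances.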
Generator Methods
We can add generator methods into our class.
For instance, we can make a class where its instances are iterable objects.
We can write:
class Iterable {
  constructor(...args) {
    this.args = args;
  }

  *[Symbol.iterator]() {
    for (const arg of this.args) {
      yield arg;
    }
  }
}
to create the Iterable class that takes a variable number of arguments.
Then we have the Symbol.iterator generator method to iterate through this.args and yield each argument.
The * indicates that the method is a generator method.
So we can use the class by writing:
const iterable = new Iterable(1, 2, 3, 4, 5);
for (const i of iterable) {
  console.log(i);
}
then we get:
1
2
3
4
5
We have created an instance of the Iterable class.
Then we looped through the iterable’s items and logged the values.
Conclusion
A class can have a constructor, instance variables, getters, instance methods, static methods, and generator methods.
Enjoyed this article? If so, get more similar content by subscribing to Decoded, our YouTube channel!
|
https://medium.com/javascript-in-plain-english/object-oriented-javascript-parts-of-a-class-37eceab91f4e
|
['John Au-Yeung']
|
2020-11-15 12:27:47.543000+00:00
|
['Technology', 'Programming', 'Software Development', 'Web Development', 'JavaScript']
|
PODHADITH: THINKING POSITIVELY
|
In a Hadith found in Sunan Abu Dawud, number 4993, the Holy Prophet Muhammad, peace and blessings be upon him, advised that thinking well about people is an aspect of worshipping Allah, the Exalted, correctly. Meaning, it is an aspect of obeying Allah, the Exalted.
Interpreting things in a negative way often leads to sins such as backbiting and slandering. Unfortunately, adopting a negative mind-set affects people from a family unit to a national level. For example, how many times has a nation gone to war over an assumption and suspicion? The vast majority of scandals which are found in the media are based on assumptions. Even laws have been created which support the use of assumptions and suspicion. This often leads to fractured and broken relationships as people with this mind-set always believe others are taking a dig at them through their words or actions. This prevents one from taking advice from others as they believe they are only being mocked by the advisor and it prevents one from giving advice as they believe the other person will not pay any attention to what they say. And a person will refrain from advising the one who possesses this negative mind-set as they believe it will only lead to an argument. This leads to other negative traits such as bitterness.
It is important for Muslims to understand that even if they assume someone is taking a dig at them they should still accept their advice if it is based on the Holy Quran and the traditions of the Holy Prophet Muhammad, peace and blessings be upon him. They should strive to interpret things where possible in a positive way in order to give the benefit of the doubt to others. This in turn leads to a positive mentality. And a positive mindset leads to healthy relationships and feelings. Chapter 49 Al Hujurat, verse 12:
“O you who have believed, avoid much [negative] assumption. Indeed, some assumption is sin…”
PodHadith: Thinking Positively: https://youtu.be/3OXDhNkngBg
PodHadith: Thinking Positively: https://fb.watch/5YwYHXhIDt/
#Allah #ShaykhPod #Islam #Quran #Hadith #Positivity #Prophet #Muhammad #Sunnah #Piety #Taqwa #Pessimism #Optimism #Negativity
|
https://medium.com/@shaykhpodblog/podhadith-thinking-positively-2dc0f380a3c3
|
['Shaykhpod Blog']
|
2021-06-17 12:29:24.739000+00:00
|
['Positivity', 'Hadith', 'Pessimism', 'Optimism', 'Islam']
|
Leetcode Algorithms
|
Leetcode Algorithms
905. Sort Array By Parity
Given an array A of non-negative integers, return an array consisting of all the even elements of A , followed by all the odd elements of A .
You may return any answer array that satisfies this condition.
Example 1:
Input: [3,1,2,4]
Output: [2,4,3,1]
The outputs [4,2,3,1], [2,4,1,3], and [4,2,1,3] would also be accepted.
Note:
1 <= A.length <= 5000 0 <= A[i] <= 5000
Solution:
class Solution:
    def sortArrayByParity(self, A: List[int]) -> List[int]:
        even = []
        odd = []
        for i in range(len(A)):
            if A[i] % 2 == 0:
                even.append(A[i])
            else:
                odd.append(A[i])
        return even + odd
Time: O(N)
Space: O(N)
Java solution in O(N) time and O(1) space:
class Solution {
    public int[] sortArrayByParity(int[] A) {
        int index = 0;
        for (int i = 0; i < A.length; i++) {
            if (A[i] % 2 == 0) { // A[i] is the current even element
                int temp = A[index]; // store it so we don't overwrite it
                A[index++] = A[i];
                A[i] = temp;
            }
        }
        return A;
    }
}
Reference
Link
|
https://medium.com/jen-li-chen-in-data-science/leetcode-algorithms-8dbf8c38552e
|
['Jen-Li Chen']
|
2020-12-27 13:31:04.269000+00:00
|
['Leetcode', 'Java', 'Algorithms', 'Python3']
|
Build A Cryptocurrency Trading Bot with R
|
Photo by Branko Stancevic on Unsplash
** Note that the API used in this tutorial is no longer in service. This article should be read for illustrative purposes with that in mind.
The trader’s mind is the weak link in any trading strategy or plan. Effective trading execution needs human inputs that run in the opposite direction to our instincts. We should buy when our reptile brain wants to sell. We should sell when our guts want us to buy more.
It is even more difficult to trade cryptocurrencies with a critical constitution. The young and emerging markets are flooded with “pump groups” that foster intense FOMO (fear of missing out) which drive prices sky-high before body-slamming them back down to earth. Many novice investors also trade on these markets, investors that possibly never entered a trade on the NYSE. On every trade, there is a maker and a taker, and shrewd crypto investors find it easy to take advantage of the novices flooding the space.
In order to detach my emotions from crypto trading and to take advantage of markets open 24/7, I decided to build a simple trading bot that would follow a simple strategy and execute trades as I slept.
Many “bot traders” as they are called, use the Python programming language to execute these trades. If you were to google, “crypto trading bot,” you would find links to Python code in various Github repositories.
I’m a data scientist, and R is my main tool. I searched for a decent tutorial on using the R language to build a trading bot but found nothing. I was set on building my own package to interface with the GDAX API when I found the package rgdax, which is an R wrapper for the GDAX API. The following is a guide to piecing together a trading bot that you can use to build your own strategies.
The Strategy
In a nutshell, we will be trading the Ethereum — USD pair on the GDAX exchange through their API via the rgdax wrapper. I like trading this pair because Ethereum (ETH) is typically in a bullish stance, which allows this strategy to shine.
Note: this is a super-simplistic strat that will only make a few bucks in a bull market. For all intents and purposes, use this as a base for building your own strat.
We will be buying when a combination of Relative Strength Index (RSI) indicators point to a temporarily oversold market, with the assumption that the bulls will once again push the prices up and we can gather profits.
Once we buy, the bot will enter three limit sell orders: one at 1% profit, another at 4% profit and the last at 7% profit. This allows us to quickly free up funds to enter another trade with the 1st two orders, and the 7% order bolsters our overall profitability.
Software
We will be using Rstudio and Windows task scheduler to execute our R code on a regular basis (every 10 minutes). You will need a GDAX account to send orders to, and a Gmail account to receive trade notifications.
Our Process
Part 1: Call Libraries and Build Functions
We will begin by calling several libraries:
The package rgdax provides the interface to the GDAX api, mailR is used to send us email updates with a Gmail account, stringi helps us parse numbers from JSON and TTR allows us to perform technical indicator calculations.
Function: curr_bal_usd & curr_bal_eth
You will use your api key, secret and passphrase that are generated from GDAX in the API section. These functions query your GDAX account for the most recent balance which we will use repeatedly in our trading:
Function: RSI
We will use the RSI or Relative Strength Index as our main indicators for this strategy. Curr_rsi14_api pulls in the value of the most recent 14 period RSI, using 15 minute candles. RSI14_api_less_one and so forth pull in the RSI for the periods prior:
Function: bid & ask
Next, we will need the current bid and ask prices for our strategy:
Function: usd_hold, eth_hold and cancel_orders
In order for us to place limit orders in an iterative fashion we need to be able to pull in the current status of our orders already placed, and be able to cancel orders that have moved too far down the order book to be filled. We will use the “holds” function of the rgdax package to do this for the former, and “cancel_order” for the latter:
Function: buy_exe
This is the big-daddy function that actual executes our limit orders. There are several steps that this function works through.
1. The order_size function calculates how much ETH we can buy, because we want to buy as much as possible each time, less 0.005 ETH to account for rounding errors.
2. Our WHILE function places limit orders while we still have zero ETH.
3. An order is added at the bid() price, the system sleeps 17 seconds to allow the order to be filled, and then checks to see if the order was filled. If it wasn’t, the process repeats.
Part 2: Store Variables
Next, we need to store some of our RSI indicator variables as objects so the trading loop runs faster and so that we don’t exceed the rate limit of the API:
Part 3: Trading Loop Executes
Up until now, we have just been preparing our functions and variables in order to execute the trading loop. The following is a verbal walk through of the actual trading loop:
If the current balance of our account in USD is greater than $20, we will start the loop. Next, if the current RSI is greater than or equal to 30 AND the RSI in the previous period was less than or equal to 30 AND the RSI in the previous 3 periods was less than 30 at least once, then we buy as much ETH as we can with the current USD balance.
Next, we save this buy price to a CSV file.
Then, we send an email to ourselves to alert us of the buy action.
The loop then prints “buy” so we can track that in our log file.
The system then sleeps for 3 seconds.
Now, we enter 3 tiered limit sell orders to take profits.
Our first limit sell order takes profit at a 1% gain, the next takes profit at a 4% gain, and the last takes profit at a 7% gain:
That’s it, that’s the entire script.
Part 4: Using Windows Task Scheduler to Automate the Script
The whole purpose of this bot is to take the human error out of the trade, and to allow us to enter trades without having to be present at a screen. We will use Windows Task Scheduler to accomplish this.
Schedule script with Rstudio addin
Use the handy Rstudio add in to easily schedule the script:
Modify the scheduled task with Task Scheduler
Navigate to the task created by the Rstudio add in and adjust the trigger to fire at the interval you wish. In my case I choose every 10 minutes indefinitely.
Keep an eye on your task with the log file
Every time your script runs it will make an entry in a text log file, which allows you to troubleshoot errors in your script:
You can see how the “START LOG ENTRY” and “END LOG ENTRY” print function comes in handy to separate our entries.
Make it Your Own
You can modify this script to make it as simple or as complex as you want. I’m working on improving this script with the addition of neural networks from the Keras module from Tensorflow for Rstudio. These neural networks add an exponentially more complex element to the script, but are incredibly powerful for finding hidden patterns in the data.
In addition, the TTR package provides us with a large number of financial functions and technical indicators that can be used to improve your model.
With all this being said, do not play with more money that you can afford to lose. The markets are not a game and you can and will lose your shirt.
Link to Full Source Code on Github
|
https://towardsdatascience.com/build-a-cryptocurrency-trading-bot-with-r-1445c429e1b1
|
['Brad Lindblad']
|
2019-10-01 21:34:49.803000+00:00
|
['Bitcoin', 'Investing', 'Trading', 'R Language', 'Cryptocurrency']
|
8 Surprising Diseases & Disorders That Coffee Combats
|
8 Surprising Diseases & Disorders That Coffee Combats
Photo by borphloy Adobe Stock
As an enthusiastic coffee drinker, I was happy to find out that coffee has some surprising health benefits that can fight many common diseases and disorders.
According to MedicalNewsToday, heart disease, stroke, diabetes, Alzheimer's, cancer, and suicide are within the top ten leading causes of death for Americans. Coffee can help lower these risks, as well as a couple of others.
Let’s break down exactly how your morning cup of joe can do more than get you through the day.
Alzheimer's Disease (and Dementia)
Alzheimer’s disease is an increasingly common neurodegenerative disease across the world and often leads to dementia. At this time, we don’t have a cure for Alzheimer’s, so focusing on prevention is essential.
Numerous studies have been done on coffee and Alzheimer’s prevention. Many show that daily coffee consumption can reduce the risk of Alzheimer’s by up to 65%.
The current idea is that caffeine is responsible for this reduction. But why?
Scientists and doctors have studied brain imaging with caffeine. Caffeine actually reduces inflammation in the brain.
In fact, adults over 65 years old who had high levels of caffeine in their blood (think about 3 cups a day of coffee) were able to delay or avoid Alzheimer’s disease even if they displayed mild cognitive decline already.
One neuroscientist, Dr. Cao, commented,
“These intriguing results suggest that older adults with mild memory impairment who drink moderate levels of coffee — about three cups a day — will not convert to Alzheimer’s disease or at least will experience a substantial delay before converting to Alzheimer’s.”
Parkinson’s Disease
Parkinson’s is a neurodegenerative disease caused by the death of dopamine-producing neurons in the brain. As with Alzheimer's disease, there’s no cure. Therefore, prevention is extra important.
According to numerous studies from Annals of Neurology, JAMMA, and Movement Disorders, drinking 1–3 cups of coffee per day can decrease the risk of Parkinson’s between 30–60%.
The protective property in coffee that fights Parkinson’s? Again, caffeine. When similar studies were run with decaf coffee, the results were negligible.
Type II Diabetes
Type II Diabetes is a huge health issue. More than 34 million Americans are diagnosed with it. Type II diabetes is caused when your body becomes insulin resistant, usually due to the damaging effects of high blood sugar.
Interestingly enough, coffee consumption has proven to reduce the risk of developing this form of diabetes.
One study shows that coffee may lower your risk of Type II Diabetes by 7% per cup. So if you drink 3 cups of coffee, you would potentially lower your risk by 21%.
Researchers believe that the reason coffee would provide this protection is that it is rich in antioxidants, which lower inflammation. Sugar is known to cause inflammation, so the antioxidants in coffee would help to counter the negative effects of high blood sugar.
Cirrhosis
Cirrhosis is a serious medical condition in which the liver becomes covered in scar tissue and is unable to function properly. Different things can cause this condition, such as fatty liver disease or alcoholism.
Many studies show the relationship between coffee and reduced risk for cirrhosis. This study indicates that coffee consumption can lower the risk of cirrhosis based on how many cups you drink.
1 cup of coffee = 30% reduced risk
2–3 cups of coffee = 40% reduced risk
4 cups of coffee = 80% reduced risk
These effects were more prominent in people who were at risk or suffered from alcoholic cirrhosis.
Depression
Depression is not only a serious mental disorder that 6.7% of Americans experience, but it can also lead to suicide if left unchecked. Two studies show that moderate coffee consumption may lower the risk of depression and suicide.
A ten-year study on over 200,000 people, showed that those who drank 4 or more cups of coffee per day lowered their risk for suicide by over 53%.
A study done by Harvard found that women who drank 4 or more cups of coffee per day had a 20% decreased risk of developing depression, to begin with.
Cancer
Cancer claims many lives each year and is a leading cause of death in the United States. Luckily, coffee consumption can lower the risk of cancer. A study on liver cancer specifically showed that just 2 cups of coffee per day can lower your chances of liver cancer by 43%.
But what about coffee makes it such a seemingly magical elixir?
Caffeic Acid: This component of coffee creates an anti-inflammatory, anti-carcinogenic, and anti-tumor response in the body.
Chlorogenic Acid: Coffee is the main way people ingest this acid. Chlorogenic acid also has anti-tumor properties.
Cafestol + Kahweol: These chemicals protect against oxidative stress and DNA damage, essentially acting as antioxidants to fight off any cancer cell growth and carcinogens.
Heart Disease
Coffee often gets a bad rep for “raising blood pressure.” Whereas your blood pressure can indeed elevate, there are two reasons why it may not be a big deal for most people.
It usually rises by only a small amount, about 3–4 mm Hg, AND
Most people who drink coffee regularly quickly grow resistant to this rise in blood pressure, meaning it will return to normal after a short duration.
Instead of hurting most people’s hearts and cardiovascular systems, a study done in the International Journal of Cardiology shows that women get a decreased risk of heart disease by drinking coffee.
Stroke
Two groundbreaking studies indicate that coffee may reduce your risk of stroke by up to 20%. Considering strokes are one of America’s top killers, this has huge potential to help citizens live longer lives. The studies are:
The first is a Japanese study that assessed nearly 90,000 Japanese adults. When looking at coffee specifically, they noticed that adults who drank coffee 1–2 times a day had a lower risk for strokes.
The second study looked at nearly 500,000 participants and concluded that moderate coffee consumption may lower your risk of stroke.
The Takeaway
Outside of being absolutely delicious and a highlight of any morning, coffee has many health benefits.
Moderate coffee consumption can lower your risk for some of the deadliest diseases and disorders, as well as make you a happier, more productive person overall. So, stop second-guessing if you should have that lunchtime cup, and go enjoy your coffee.
|
https://medium.com/in-fitness-and-in-health/8-surprising-diseases-disorders-that-coffee-combats-1ae5aed61f35
|
['Malinda Fusco']
|
2020-12-20 19:14:09.825000+00:00
|
['Food', 'Advice', 'Health', 'Coffee', 'Health Foods']
|
Bra
|
The bra is underwear worn to cover or lift a woman’s breasts.
The bra can have several functions, it can be used to enhance the shape of the breast to make it appear larger; to shape cleavage, for fashion purposes, for cultural or aesthetic reasons.
The misconception that wearing a bra helps prevent sagging of the breasts over time has long been held; this has been disproved by science and manufacturers are trying to avoid this in their advertising.
There are many types of bras, such as braced bras, non-braced bras, push-up bras, self-adhesive bras, strapless bras, bralettes, balconettes, nursing bras, and sports bras. Bras also come in many cuts: full, triangular, pre-shaped, adjustable-strap, low-back, high-neck, U-shaped, V-shaped, T-strap, front-strap, bandeau, elongated, cross-strap, lift, or breast-reduction styles. A bra may be padded or unpadded.
There are also men's bras, since men's breasts can become enlarged; these tend to flatten the chest. Cross-dressing men wear bras as well, though they usually prefer women's pieces. Male athletes such as runners may also wear sports bras to prevent painful nipple chafing.
There are many types and styles to choose from; the key is to wear one that is both physically comfortable and expresses your personality.
Underwear deserves attention: we rarely notice it, yet it is a defining element of our everyday comfort. And that is just the basics; there are also special occasions for which good underwear is essential.
It’s worth learning this from an early age.
There are many different manufacturers and materials, but the one I trust is the Pi.An.Bl. Lingerie webshop: they listen to me, I can get live help if I need it, and they offer colourful and varied lingerie, fast delivery at almost no charge, and a guarantee. You need a safe place in your life that you can always count on, and that is why Pi.An.Bl. is the best.
Pi.An.Bl. Lingerie
|
https://medium.com/@pianbl/bra-6fd0ac69fdb2
|
['Catherine Angel Sivie']
|
2021-12-14 19:36:33.796000+00:00
|
['Lingerie', 'Lingerie Shopping Online', 'Bras', 'Brands', 'Lingerie Online']
|
New Perk — Earn 15% Crypto Reward on iPhone 12
|
New Perk — Earn 15% Crypto Reward on iPhone 12 (Plutus, Nov 9)
We are heading into the shopping season, and e-commerce is booming! In light of the multitude of new tech products arriving on the market, we have introduced an exciting new Bonus Perk — 15% crypto rewards on iPhone 12 purchases.
iPhone 12—15% Crypto Reward
Promotional Offer
For the remainder of the year, you can earn 15% crypto rewards (PLU) on any iPhone 12 model purchases (includes Regular/Mini/Pro/Pro Max) directly from your Perks dashboard > Apple’s online store.
Current Availability
Timeline: 01 Dec 2020, 00:00 BST — 31 Dec 2020, 23:59 BST
Account Type: Premium / Pro
Staking Level 2: 450 PLU
Countries: All Supported Countries
From December 1st, this promotion is available only to Premium/Pro members who have staked the 450 PLU necessary to unlock this Perk. The offer is international and will be available across all of our supported countries.
Note: The PLU rewards will be visible in the Earned section of your Pluton Dashboard after the transaction has settled (typically less than 3 days). It will move to Available after 45 days while Plutus carries out its own internal validation checks. If your staked holdings drop below the required 450 PLU staking threshold during this 45-day period, your 15% reward for the iPhone 12 will become void.
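The void-if-unstaked rule above is easy to express in code. Here is a minimal sketch of that check; the function and variable names are my own illustrations, not anything published by Plutus:

```python
# Sketch of the 45-day validation rule: the reward is voided if the
# staked PLU balance ever drops below the 450 PLU threshold during the
# holding window. Names are illustrative, not from any Plutus API.
def reward_survives(staked_balances, threshold=450.0):
    """staked_balances: staked-PLU balances sampled over the 45-day window."""
    return all(balance >= threshold for balance in staked_balances)

print(reward_survives([500.0] * 45))                 # True: never dipped
print(reward_survives([500.0] * 30 + [449.0] * 15))  # False: reward voided
```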
How Does it Work?
To reap the savings upwards of £115, just follow the steps below.
Log in to your Plutus web application.
Click on the Perks tab and find the iPhone promotion.
Select the iPhone of choice — this will redirect you to Apple’s website.
Make the payment via your Plutus Card on the Apple Store.
Important Note: The link will take you to the UK Apple Store; if you reside outside this region, just amend the URL to your local Apple Store and you will still be rewarded for your purchase.
The purchase must be made on the Apple store via the Plutus Perks dashboard. The payment must be made using the Plutus Card and in full (no split monthly payment or contracts).
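The reward itself is simple percentage arithmetic. A minimal sketch of the calculation (the function name and the example prices are my own illustrations, not figures from Plutus or the Apple Store):

```python
# Illustrative sketch of the 15% crypto-reward calculation.
# Prices below are hypothetical examples, not Apple Store quotes.
def plu_reward(purchase_gbp, rate=0.15):
    """Return the GBP value of the PLU reward for a qualifying purchase."""
    return round(purchase_gbp * rate, 2)

# A handset around the GBP 799 mark would earn roughly GBP 119.85 worth
# of PLU, which is how a saving "upwards of GBP 115" falls out of the rate.
print(plu_reward(799.00))  # 119.85
print(plu_reward(999.00))  # 149.85
```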
What Products are Eligible?
The following products are eligible for the 15% crypto reward:
iPhone 12 SIM-free handset
iPhone 12 Mini SIM-free handset
iPhone 12 Pro SIM-free handset
iPhone 12 Pro Max SIM-free handset
Apple Care
Important Note: The phone must be purchased outright in full, we do not reward users on monthly payments or contracts.
How to Stake & Earn Crypto (PLU)?
If you have just received your Plutus Card and would like to learn how to get started then please check out the blog below.
You can also find details in our helpdesk article of all other benefits included in Plutus Perks, as well as the full breakdown of requirements and rewards for each household brand affiliated with Plutus:
|
https://medium.com/plutus/new-perk-earn-15-crypto-reward-on-iphone-12-c2f99fa13d67
|
[]
|
2020-12-14 16:57:47.984000+00:00
|
['Fintech', 'Announcements', 'Rewards', 'Cryptocurrency', 'Bitcoin']
|
Best high-resolution digital audio player: Which DAP reigns supreme?
|
I am thankful for all of those who said NO to me. It’s because of them I’m doing it myself (Albert Einstein)
|
https://medium.com/@omar37906010/best-high-resolution-digital-audio-player-which-dap-reigns-supreme-b1bdc48afe7d
|
[]
|
2020-12-23 21:58:37.517000+00:00
|
['Mobile', 'Consumer', 'Home Tech', 'Surveillance']
|
Zoom problems with Catalina MacOS Screen share Permissions
|
I spent quite some time figuring this out, and the problem was sooo silly and simple.
Writing this so that no other people have to waste time on this.
Problem I was facing :
When I tried to share my screen with Zoom it asked for screen sharing permissions. You'll see something like this:
And when you open System Preferences and navigate to the screen sharing permissions, the Zoom app is not in the list (if you see it, voila! You're lucky, just enable it and go go…). Like this:
What you have to do in this situation is a VERY BASIC and SIMPLE STEP: UPDATE THE APP!!! Voila!
You can thank me later.. ;)
Btw going off-topic for all quarantined office people,
Recently, I switched from iOS to Android. And I was missing Apple’s Continuity feature. Where I could easily make or receive phone calls on Mac, get access OTPs, messages quickly on my Mac.
I noticed I was ending up on social media or hacker-news :D, spending hours every time I picked up my phone for a call (there are a lot of calls in this work-from-home situation).
And then, after spending a day going through a lot of spam, I noticed this app on the App Store called “Connecton”. I'm surprised it was so hard to find.
Putting out loud, so that I can save someone’s time who needs this.
https://apps.apple.com/in/app/connecton/id1497000705?mt=12
|
https://medium.com/@nehaguptag/zoom-problems-with-catlina-macos-screen-share-permissions-fa544af043aa
|
['Neha Gupta']
|
2020-03-24 16:12:39.327000+00:00
|
['Catalina', 'Apple', 'Zoom', 'Screenshare', 'Macos']
|