Are software algorithms more biased than humans? | I believe that humans are more biased than algorithms. An algorithm's bias depends on the rules it was programmed with and the data set it was trained on. Humans, on the other hand, have a more complex decision-making process: System 1 is automatic but susceptible to bias, while System 2 takes conscious effort but is analytical and less prone to bias (Douglass). Because of this, algorithmic behavior is more malleable than human behavior: we can adjust it toward a more controllable and favorable level of performance to solve problems.
Human bias is part of the technology humans create. In 2015, Amazon realized that its AI recruitment tool was biased against women when ranking top candidates for tech jobs (Dastin). This was mainly because the training data provided to the tool came from a male-dominated pool of resumes spanning ten years. In the past, human resources departments perceived men as the rightful candidates for tech jobs such as IT and software development (Pot'Vin-Gorman 23). Although the AI's biased results were not programmed intentionally, the prevalence of male candidates in its output reflected the human bias that tech jobs are mostly applied for by men.
After Amazon's attempts at revising the algorithm to be more gender-neutral, the company halted the AI recruitment tool by the start of 2017 "because executives lost hope for the project" (Dastin). Perhaps the loss of hope was not about the inability to solve the AI tool's bias, but about the seeming impossibility of solving human bias. Diving deeper into gender discrimination in the workplace, researchers have identified two types: statistical discrimination, based on perceived average differences in abilities or skills between gender groups, and taste-based discrimination, driven by stereotypes, favouritism, and bias (Gerdeman). Statistical discrimination arises because, when we know little about an individual, we tend to assign them to a group; if that group is perceived to perform worse than another, a bias is created against its members. Of the two, statistical discrimination is much harder to root out (Gerdeman).
Despite the unsuccessful attempts to create a bias-free AI recruitment tool, I believe Amazon should continue to develop it. According to Moore's Law, computers grow remarkably faster over time and become exponentially more powerful (McAfee and Brynjolfsson 35); thus, if the AI recruitment tool becomes fine-tuned enough to recommend candidates fairly and reliably, it can significantly reduce hiring costs and the hiring problems humans are prone to. Résumé metrics, such as keywords like "women's chess club captain", can categorize candidates into groups. An algorithm would then use those keywords to decide whether a candidate belongs to the group perceived as inferior or the one perceived as superior. If this were the main basis for determining how qualified a candidate is for a position, it could lead to statistical discrimination. To minimize bias, Amazon can implement AI in its recruitment process by adapting Google's hiring approach (McAfee and Brynjolfsson 57). Instead of résumés, Amazon can use a standardized questionnaire and a consistent hiring rubric. In place of a human interviewer, the AI recruitment tool can indicate how the candidate performed in the hiring process based on measurable, comparable results. By better understanding the factors that contribute to human bias, we can use algorithms to eliminate, or at least better control, those factors, reducing the bias embedded in these human-created technologies and producing faster, more effective solutions.
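To make the statistical-discrimination mechanism concrete, here is a toy sketch. It is entirely hypothetical, not Amazon's actual system: every function name and keyword is invented to illustrate how keyword scoring can penalize a token that merely correlates with a group rather than with skill.

```python
# A toy sketch, NOT Amazon's actual system: the keyword sets below are
# invented. The "penalty" stands in for a weight a model could learn
# from a male-dominated historical resume pool.
REWARDED = {"software", "engineering"}
PENALIZED = {"women's"}   # group-correlated token, unrelated to skill

def score_resume(text: str) -> int:
    tokens = text.lower().split()
    return (sum(t in REWARDED for t in tokens)
            - sum(t in PENALIZED for t in tokens))

print(score_resume("software engineering lead"))
# an equally qualified resume that adds one group-correlated token:
print(score_resume("women's chess club captain software engineering lead"))
```

The second candidate has identical skills but receives a lower score, which is exactly the statistical discrimination described above.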
References: | https://medium.com/@jasrosebay/are-software-algorithms-more-biased-than-humans-e80dc3c2d25b | ['Jasmine Bayani'] | 2021-09-13 05:26:32.365000+00:00 | ['Human Resources', 'Human Behavior', 'Algorithms', 'Technology', 'Personal Biases'] |
FacetGrid vs AxesSubplot Type with Seaborn | Seaborn is a powerful data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. Two commonly used plot types are FacetGrid and AxesSubplot. While an AxesSubplot is a single graph drawn at one time, a FacetGrid can create several subplots at once, faceted by a chosen variable. First, I will explain the difference between the two plot types; then I will go a little deeper into FacetGrid, because it makes plotting easier.
Besides the seaborn library, we have to import pandas, because a FacetGrid expects its data in a pandas DataFrame where each row represents an observation and each column represents a variable. We will use the "tips" dataset, one of the example datasets built into the seaborn library.
We will plot two scatter diagrams from the same dataset, one with FacetGrid and one with AxesSubplot, using the seaborn library. The resulting plots look the same; we only know the type of each plot from the command that produced it, or by checking it with the type() function.
AxesSubPlot Type
FacetGrid Type
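The type check described above can be sketched as follows. A small inline DataFrame stands in for `sns.load_dataset("tips")`, which downloads the data on first use, and matplotlib's off-screen Agg backend is used so no window is needed:

```python
import matplotlib
matplotlib.use("Agg")            # render off-screen, no display required
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# stand-in for tips = sns.load_dataset("tips")
tips = pd.DataFrame({"total_bill": [16.99, 10.34, 21.01, 23.68],
                     "tip": [1.01, 1.66, 3.50, 3.31]})

ax = sns.scatterplot(data=tips, x="total_bill", y="tip")
print(type(ax))                  # a matplotlib Axes (an AxesSubplot)
plt.close("all")

g = sns.relplot(data=tips, x="total_bill", y="tip")
print(type(g))                   # a seaborn FacetGrid
```

Checking type() this way is the quickest method to tell which kind of object a plotting function returned.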
The AxesSubplot type is generated by plotting with a seaborn function called directly, such as seaborn.scatterplot or seaborn.countplot. Such a call generates only one graph at a time. So if we want to create an array of graphs, or even a single graph with faceting options, we can use the FacetGrid type instead.
The FacetGrid type is an array of graphs with three dimensions: column, row, and hue. It is easy and flexible to create subplots using the row and column variables. Seaborn's catplot and relplot are examples of functions that return a FacetGrid. relplot is usually used to draw a scatter or line plot showing the relation between two variables. We choose between a line and a scatter graph by setting the kind parameter in the relplot call. We define the column dimension with the col parameter, and col_wrap controls how many columns are generated per row.
scatter plot using relplot sample
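A sketch of the faceting just described, again using a small inline stand-in for the tips dataset; kind selects the underlying plot, col facets by a variable, and col_wrap caps the number of columns per row:

```python
import matplotlib
matplotlib.use("Agg")
import pandas as pd
import seaborn as sns

# stand-in for sns.load_dataset("tips")
tips = pd.DataFrame({"total_bill": [16.99, 10.34, 21.01, 23.68, 24.59, 25.29],
                     "tip": [1.01, 1.66, 3.50, 3.31, 3.61, 4.71],
                     "day": ["Thur", "Fri", "Sat", "Sat", "Sun", "Sun"]})

# kind="scatter" or kind="line"; col="day" makes one subplot per day,
# col_wrap=2 arranges the subplots two per row
g = sns.relplot(data=tips, x="total_bill", y="tip",
                kind="scatter", col="day", col_wrap=2)
print(g.axes.size)   # one subplot per distinct day
```

With four distinct day values, the grid contains four subplots arranged in two rows.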
Another FacetGrid function is catplot. catplot, or categorical plot, involves a categorical variable. Count plots, bar plots, and box plots are examples of plots produced with catplot; these kinds of plots are usually used to compare groups. As with relplot, we choose the type of plot (count, bar, or box) with the kind parameter. We can see a sample box plot drawn with catplot below.
barplot using catplot command
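The catplot call for the box-plot example in the text might look like this (again with an inline stand-in for the tips dataset):

```python
import matplotlib
matplotlib.use("Agg")
import pandas as pd
import seaborn as sns

# stand-in for sns.load_dataset("tips")
tips = pd.DataFrame({"total_bill": [16.99, 10.34, 21.01, 23.68, 24.59, 25.29],
                     "day": ["Sat", "Sat", "Sat", "Sun", "Sun", "Sun"]})

# kind may be "count", "bar", "box", etc.; here a box plot compares days
g = sns.catplot(data=tips, x="day", y="total_bill", kind="box")
print(type(g))   # catplot also returns a FacetGrid
```

Swapping kind="box" for kind="bar" or kind="count" switches the plot style without changing the rest of the call.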
So, after considering the advantages of the two plot types in seaborn, FacetGrid and AxesSubplot, I prefer FacetGrid, because it is more flexible for creating plots and makes it easy to generate subplots when they are needed. By using relplot for relations and catplot for comparisons, we can easily create the visualizations needed for exploring a data set or communicating a final analysis. | https://medium.com/@bravo-sierra/facetgrid-vs-axessubplot-type-with-seaborn-5aa730dd8add | ['Bangun M Sagala'] | 2020-09-03 05:34:03.535000+00:00 | ['Catplot', 'Facetgrid', 'Matplotlib', 'Seaborn', 'Relplot']
Identity in .NET Core? | | https://medium.com/appunti-di-uno-smanettone/gestione-degli-utenti-in-poche-righe-net-core-ecd80ecedef6 | ["Michael Disaro'"] | 2021-01-07 16:09:23.432000+00:00 | ['Identity Management', 'Gdpr', 'Open Source', 'Iam', 'Dotnet Core']
A New World Odor | Witches and wizards are the new power elite and best friends Henry Porter, Harmony Danger and Rob Easily will have to learn magical skills if they want to find good jobs.
Photo by Bee Felten-Leidel on Unsplash
Blundermore was the world’s greatest wizard. On his deathbed, surrounded by his closest friends, he decided to cast a final spell. This was no ordinary spell, but a devastating creation that he had been toiling on in secret for most of his life — one that would, if successful, change the course of history.
He closed his eyes, made a motion with his hand like a symphony conductor and began repeating a series of peculiar words and sounds, each time with greater intensity. Then, having spent the last of his power, he died.
The effect was not immediately apparent and those who were present wondered aloud what he had done. The answer soon came when they saw countless non-magic folk starting to gather around their formerly invisible castle.
Many great mages would spend years trying to undo what he had done, while others would simply try to understand why he had done it. For Blundermore had done the unthinkable. He had reversed the ancient, long-lost spell that for eons had concealed the magic folk and their civilization from the realms of man.
#
Harmony Danger parked in front of Henry Porter’s apartment and waited for him to come down. They’d been friends since middle school and today they were going to school together again, this time at the nearby community college for their first day of magical vocational training. They had both dropped out of top universities, abandoning formerly promising careers in medicine and engineering in the hope of finding positions in one of the agencies the magic folk had set up after taking over the governments of the world.
Henry had tested 66/100 on the DMAT — Dummel Magical Aptitude Test — while Harmony had tested 94. “Dummel,” a term that Henry didn’t care for, was what the magic folk called a person born without magical abilities. The test determined not only to what extent a dummel could learn magic, but also their ability to wield such power responsibly. Henry’s best friend and Harmony’s boyfriend, Rob Easily, had tested with a score of 30, disqualifying him from consideration for the most part. He had decided to attend the local college under a lacrosse scholarship with the goal of earning a degree in finance, a career which would prove useful for years to come while the dummel financial system was being phased out.
Harmony’s stellar test score had earned her a position in the most advanced starting classes, in which students actually got to see a wand. Henry’s courses would consist entirely of theory and history. After a short drive during which Henry stayed mostly silent, they arrived at the college, agreed to meet during lunch break, then said goodbye.
Henry walked into a nearly full classroom. An older man wearing a brown Victorian-era suit complete with a gold pocket watch greeted him. “I’m Professor Lester Lovejoy,” he said. “How do you do?”
“Good. Thanks.”
“And what is your name?”
“Henry Porter.”
Professor Lovejoy scanned his attendance sheet and wrote a check mark next to Henry’s name. “Very good. Leave your phone and any other electronic device in the basket and please be seated.”
For the next two hours, Professor Lovejoy paced around the classroom, expounding on some of the finer points of magic folk history and its key figures, including the wizard named Blundermore, who had cast the spell that had unveiled the magical world, Lovejoy claimed, out of a deep belief that all were created equal and should live together as one. Henry didn’t buy it. Whoever this Blundermore had been, Henry believed he’d either screwed up colossally or had exposed his own people out of spite for something they’d done to him. The chaos that had resulted from that one act wouldn’t be sorted out for decades, if ever.
Most people in the class seemed to hang on Lovejoy’s every word, which Lovejoy clearly relished. Henry was still upset over having to give up structural engineering for this. The world no longer needed people who understood static and dynamic loading when you had magic folk able to assemble a building out of marshmallows if they wanted, and it would hold together for eternity with a few words and the flick of a wand. Henry didn’t even know what kind of job to expect when his training was completed. Would he actually be casting spells and doing something useful? Or opening doors and getting coffee for magical agency bureaucrats?
Some things, he admitted, had changed to his liking. War was no longer allowed, along with missiles, bombs, nuclear weapons and poison gas. No more guns — they had all vanished in an instant, to the outcry of millions. Magical birds now policed the skies and reported crimes as they saw them, which were dealt with swiftly and justly by the Agency of Mischief. Cannabis was legalized worldwide while prescription drugs were banned. Medicine, in general, was seeing profound changes. The magic folk had introduced a cornucopia of herbal concoctions the likes of which humanity had never imagined, quickly making illnesses like cancer, HIV, blindness, paralysis, and erectile dysfunction things of the past.
“Who’s your professor?” Harmony asked as they sat down in the cafeteria.
“Lester Lovejoy.”
“What do you think?”
“Blah blah blah.”
“Really? I like mine. Harriet Hargrave. After lunch, I’m with Gilda Greylock, a potions teacher. It’s going to take years to even begin to understand their culture.”
“Wonderful.”
“What’s wrong?”
“I dunno. I just don’t care about any of it.” Henry sensed that she was going to dive into a lecture about the harmful effects of pessimism. “Don’t,” he said. “Please.”
“All right. Be miserable.” She pulled her knees up and said, “Rob wants to know what you’re doing later.”
“Nothing. We could meet him at the Strychnine.” The Strychnine was the campus bar at Rob’s school.
“Cool. I’ll let him know.”
Henry trudged to the next class and received a proper scolding when he arrived one minute late. For a second, he thought the professor — a short, beefy woman named Malvina Milliport — was going to smack his hand with a ruler.
He couldn’t check his phone or even look down and doodle. All the magic professors were schoolmarms, deadly serious about their jobs and expecting the students to pay utter attention and show endless gratitude for the opportunity to serve in their society. Henry hadn’t felt this helpless since his first day in elementary school when the teacher wouldn’t let him use the bathroom no matter how many times he raised his hand.
#
Henry crashed down into a booth at the Strychnine and sighed. “This day is finally over,” he said. Even the tune on the jukebox, a folksy dirge by Youth in Asia, felt like a godsend compared to being in the classroom.
Harmony slid gracefully into the opposite side, then Rob squeezed in next to her. “I wish they’d put on a game,” he said.
It had become common at bars to play the news instead of sports because something crazy happened nearly every day. A man dressed as Dorothy from The Wizard of Oz was, at that moment, climbing barefoot around the base of the Statue of Liberty, shouting and brandishing what was believed to be a toy wand.
A waitress came over. Rob ordered a coal-fired pizza and a pitcher of beer.
“Screw beer,” Henry said, taking a joint out of his cigarette case. He took a deep hit. “That’s it, right there.”
“Pass that,” Rob said.
Harmony took a hit and passed it down. “At least this is legal now.”
“Like that ever stopped Henry.”
“He didn’t like his first day at magic school,” she said, laughing.
“You didn’t like your professor?”
“Oh, but he liked me,” Henry said. “Professor Lovejoy couldn’t take his eyes off me.”
“No!” Harmony said, her mouth open.
“Really?” Rob said.
“Yeah.”
“Are you into him?”
Henry guffawed. “What? Oh, my God, no.”
Their order arrived after a few minutes. Rob stuffed a slice of pizza into his mouth while drinking and smoking at the same time, which made them all laugh.
More people started coming in, none of them magic folk as far as Henry could tell. He could usually spot them even when they dressed normally. There was something about the way they carried themselves, even the way they looked at people.
“They’re so smug,” he said, blowing a cloud of smoke.
“I know,” Rob said.
“Come on. Most of them are so nice, you guys are being way too harsh,” Harmony said.
“Oh, yeah?” Henry said. “Where were they during World War Two when we were slaughtering each other and nuking the planet? And why do all of their names have to be so alliterative? Lester Lovejizz, Henrietta Hargrave, Gilda globbedyglob. Ugh.”
“Harriet Hargrave,” Harmony corrected him.
“I don’t like that they call us dummels,” Rob said. “It sounds like ‘dummies.’ That’s so wrong.”
At a nearby table, a bearded guy wearing a backward baseball hat turned to them and said, “I hear ya, man. It’s bullshit.”
Rob raised his glass.
“All of this gloom and doom,” Harmony said. “We’re going to be fine.”
“Easy for you to say,” Rob said, giving her a kiss. “You scored a ninety-four. You’re going to be a junior wizardess. Sorceress. Whatever.”
Henry lit another joint. “Yeah. That’s hitting just right.”
“What even is magic?” Rob asked. “I mean, what is it, exactly?”
“It’s complicated,” Harmony said. “Sometimes it’s just words. Language is more powerful than we realize, almost anything spoken can become an incantation. Then you’ve got herbs and other natural objects whose magical properties can be amplified through speech and thought.”
“Paprika never had magic powers before they showed up,” Henry said.
“That’s the thing,” Harmony said. “It did.”
“How so?”
“The missing ingredient was the person. There’s something about the magic folk that we don’t understand. They’re able to harmonize with the frequencies, or who knows what, of things like words and plants, and use them in ways that we can’t. Like tuning forks or something. It’s amazing, really.”
“Still doesn’t answer what magic is,” Rob said.
A story came on the news about how the government was having trouble dealing with the growing number of criminals in the dummel prison system and something drastic would have to be done. There was talk of temporarily turning prisoners into mice until the Agency of Mischief had enough staff and facilities to handle the load.
The bearded guy stood up and shouted at the bartender, “Put on the Yankees!”
Rob, Henry and several others started whistling and stomping their feet amid a growing chorus of boos until the bartender finally raised his hands in defeat and pointed the remote at the TV.
The screen flickered, a baseball game appeared, and everyone applauded.
#
This story was first published in As Bad As It Looks. © 2016 Ivan Paganacci | https://medium.com/@ivanpaganacci/a-new-world-odor-4c1c4149bd6c | ['Ivan Paganacci'] | 2021-03-05 11:31:09.117000+00:00 | ['Short Fiction', 'Harry Potter', 'Fantasy', 'Parody', 'Fanfiction'] |
The Real Witch Hunts | The Trump Department of Justice was weaponized to pursue political enemies. They futilely pursued Andrew McCabe, presented his case to a grand jury, which to the humiliation of Trump DOJ refused to indict, a real black swan event.
The DOJ secretly acquired phone records of reporters from CNN, The Washington Post and The New York Times. The DOJ placed CNN attorneys under gag orders to keep this from reaching the light of day. The seizure of a CNN reporter's emails continued even after a judge told prosecutors their justifications were "speculative" and "unanchored in any facts."
Now being revealed are direct attacks on Trump’s enemies in Congress, a coequal branch of government. The DOJ secretly acquired the emails of various members of Congress, the ones so far identified include Eric Swalwell and House Intelligence Committee Chairman Adam Schiff.
The email sweep extended to their staffs, and even to their families to include at least one minor child. Even after prosecutors recommended that the investigation be dropped Attorney General Barr overruled them and ordered it to continue.
What did all this secret prying into Trump's media and political enemies, and even their families and children, produce? Nothing. Absolutely nothing. Not a single indictment. Not a single conviction. Zilch. That fact renders any comparison to the Mueller investigation absolutely fatuous.
In my view a special prosecutor should be appointed to investigate Trump’s own possible involvement in prompting these pointless and illegitimate investigations. The special prosecutor should have the power to subpoena Trump, Former Attorney General Bill Barr, and any other DOJ officials involved.
Donald Trump’s threat to our democratic institutions was far reaching, and with eventually deadly consequences. No American, whether they be a reporter, a member of Congress, or a member of their family should feel that an American President can abusively turn the wheels of justice, on his whims, against them. | https://medium.com/@keithdb/the-real-witch-hunts-c5ea7f893467 | [] | 2021-06-11 21:35:13.944000+00:00 | ['Special Prosecutor', 'DOJ', 'Trump Administration', 'William Barr', 'Corruption'] |
Who Are the Proud Boys? | Who Are the Proud Boys?
These are the white supremacists who follow Trump’s orders
The Proud Boys are a racist, white nationalist, and misogynist group that has woven violence into its identity. And last night during a national presidential debate, Trump gave them marching orders.
You can read more about the group here, in this post titled “Who Are The Proud Boys?”
It's important to know who these people are, as they have shown up at protests with the intent to cause violence. But be careful what you share, and how you amplify the group's message. These extremist organizations thrive on mainstream attention and social virality, as it perversely normalizes and amplifies their racist and sexist talking points.
Jane Lytvynenko, a disinformation reporter at BuzzFeed, suggests redacting names and identifying information out of any screenshots you share of Proud Boy content, and to also avoid linking to any that content or including names of Proud Boy members. | https://momentum.medium.com/who-are-the-proud-boys-66cdb218a494 | ['Dave Gershgorn'] | 2020-09-30 18:32:05.185000+00:00 | ['Racism', 'Proud Boys'] |
ground to air missile | Introduction
With the accelerating development of information warfare, the complexity, diversity, and uncertainty of the confrontation between offensive and defensive systems have become increasingly prominent. At the same time, with the development of advanced information technology and its military applications, battlefield information acquisition and organizational models are quietly changing, bringing profound changes to the organization of war.
The large-scale wars fought around the world over the past decade show that airstrike operations have become systematized, informatized, and networked.
The form of warfare under information-warfare conditions is developing from "platform-centric warfare" to "network-centric warfare". The core idea of network-centric warfare is to integrate all combat units into a combat information transmission network so that every participating unit can obtain the right information at the right place, at the right time, and in the right way; that is, to realize the coordinated joint operation of all participating units.
In the future, informatized, systematic warfare will break the boundaries between services and domains, and it will be necessary to integrate the various elements of power. The U.S. Army's concept of "multi-domain warfare" was proposed against this background: forces in different domains jointly use cross-domain firepower and maneuver to achieve physical, temporal, positional, and psychological advantages, realizing the transition from a "union of services" to "multi-domain integration".
To effectively counter current and future informatized air-strike systems, future air defense should develop into a networked, systematized combat model.
The United States pays special attention to the development of coordinated combat command and control and communication systems, such as IBCS, CEC, and the “Next Generation Enterprise Network” project.
Through the Command and Control, Battle Management, and Communications (C2BMC) system, global detection sensors, interception weapons, and command and control systems at every level are connected into a whole, forming the combined force of system-of-systems operations.
In recent years, especially with the continuous development and integration of cloud computing, big data, the Internet of Things, and artificial intelligence, the service areas of these technologies have continued to expand beyond industries such as social intelligence, medicine, business, geographic information, and biology. The world's major military powers are stepping up research on military applications of cloud computing and other intelligent technologies, hoping to transform information advantages into decision-making and combat advantages and to realize the transition to "network-centric warfare".
Drawing on the network service advantages of cloud-computing virtualization, we can study a virtualized cooperative combat mode for networked surface-to-air missiles, realize optimal resource allocation and information sharing, and enhance networked air defense combat capability.
The concept of virtualization
The US National Institute of Standards and Technology (NIST) defines cloud computing as a model for enabling convenient, on-demand network access, anytime and anywhere, to a shared pool of configurable computing resources (such as networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or interaction with the service provider.
According to the NIST definition, cloud computing is a form of distributed computing. Based on virtualization technology, a processing job is divided into multiple subtasks over the network, the computation is distributed across a resource pool composed of a large number of computers, and the results are returned to the user.
Computing resources exist on the network and are collectively referred to as “clouds”, which are abstractions of various resources. Cloud computing is a supercomputing model that integrates large-scale and dynamically scalable computing, storage, data, applications, and other distributed computing resources for collaborative work.
In essence, cloud computing achieves the reasonable allocation of computing resources through their virtual organization, allocation, and use. It avoids duplicated and wasted computation, lets users obtain computing power, storage space, and information services on demand, and improves the utilization rate of computing resources.
Therefore, virtualization is the core of cloud computing infrastructure and the foundation of cloud computing development.
In cloud computing, data, applications, and services are stored in the cloud; the cloud is the user's supercomputer. Cloud computing requires all resources to be managed by this supercomputer, but the differences between hardware devices leave them poorly compatible with one another, which poses a challenge for unified resource management.
Virtualization technology can abstract the underlying architecture such as physical resources, making the differences and compatibility of devices transparent to upper-layer applications, allowing the cloud to uniformly manage the diverse resources at the bottom.
Virtualization technology means that software runs on a virtual platform rather than directly on the physical hardware. It can expand the effective capacity of hardware, simplify the software reconfiguration process, reduce the overhead associated with virtual machines, and support a wider range of operating systems.
Virtualization technology can realize the isolation of software applications from the underlying hardware. It includes a split mode that divides a single resource into multiple virtual resources and an aggregation mode that integrates multiple resources into one virtual resource.
By object, virtualization can be divided into storage virtualization, computing virtualization, network virtualization, and so on; computing virtualization is further divided into system-level virtualization, application-level virtualization, and desktop virtualization.
In the realization of cloud computing, computing system virtualization is the foundation of all services and applications built on the “cloud”.
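The split and aggregation modes described above can be caricatured in a few lines of Python. This is a loose illustration only: the class and function names are invented for the sketch and do not come from any real virtualization API.

```python
# Toy model of virtualization's two resource modes; all names invented.

class Resource:
    """A physical or virtual resource with a name and a capacity."""
    def __init__(self, name: str, capacity: int):
        self.name, self.capacity = name, capacity

def split(resource: Resource, parts: int) -> list:
    """Split mode: carve one device into equal virtual slices."""
    share = resource.capacity // parts
    return [Resource(f"{resource.name}-vm{i}", share) for i in range(parts)]

def aggregate(resources: list) -> Resource:
    """Aggregation mode: present many devices as one virtual pool."""
    return Resource("virtual-pool", sum(r.capacity for r in resources))

server = Resource("server-1", 64)                 # say, 64 GB of RAM
vms = split(server, 4)
print([(v.name, v.capacity) for v in vms])        # four 16 GB virtual machines

disks = [Resource(f"disk-{i}", 500) for i in range(3)]
pool = aggregate(disks)
print(pool.capacity)                              # one 1500 GB virtual volume
```

The upper layer sees only `Resource` objects either way, which is the sense in which device differences become transparent to applications.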
Virtualization technology is currently applied mainly to CPUs, operating systems, and servers, and it is the best available means of improving service efficiency.
Advantages of virtualization in military applications
Cloud computing technology has a potentially comprehensive and positive impact on the military field, and the U.S. military has been committed to exploring and researching its use in recent years.
The U.S. Air Force recently proposed the concept of the "combat cloud", which seeks to use cloud-based virtualization to exploit the full potential of the advanced electronic information systems carried by U.S. Air Force combat aircraft, to realize collaborative processing over high-speed interconnection networks, and to make full use of the intelligence and data obtained by the entire combat formation.
This information can flow freely, on demand, throughout the network. Built on the comprehensive deployment of the various combat forces, the concept of coordinated operations will be expanded, systematized, and made concrete.
Applying virtualization-based cloud computing to networked operations will make network collaboration more concrete and more oriented toward combat tasks, and it will bring disruptive changes to future combat models.
Improve the effectiveness of coordinated operations
The core of cloud computing is the exchange of data among nodes over a high-speed interconnection network and the virtual construction of a resource pool from which each node can draw on demand. The operation and management of the resource pool are as autonomous as possible, and the acquisition and release of resources are simple and fast.
All combat nodes in the sea, land, air, and space domains are interconnected into a seamless, distributed, self-organizing, and self-healing comprehensive information system: the so-called cross-domain, cross-dimensional comprehensive information system. The functions of each combat platform, such as reconnaissance, strike, and logistics, are treated as a shared resource pool. On this basis, the system achieves seamless information sharing, multi-source fusion, and on-demand data acquisition, and strives to integrate cross-domain combat capabilities such as detection, command, fire strike, and logistics. Through system capability sharing, it gives full play to the capabilities of each node and combat system, deploys them uniformly on demand, improves the overall combat effectiveness of every combat unit and system, and maximizes the effectiveness of networked cooperative operations.
Photo by israel palacio on Unsplash
Improve resource utilization
Virtualization technology provides the combat system with a dynamically accessible resource pool that incorporates all currently available equipment resources. This gives the system strong task-scheduling capability.
When the system load rises rapidly in wartime, the system can quickly request additional resources; when demand is low, resources can be released. Distributed resources, network bandwidth, and even information flows can be allocated according to client needs, so the same resources can support more users at once, overcoming the limitations of the hardware.
Virtualization technology enables higher utilization of hardware computing resources and can improve resource utilization efficiency in wartime to meet the different needs of users at all levels.
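The elastic scale-up and scale-back behavior described above can be sketched as a toy resource pool. All names and capacities here are illustrative assumptions, not details from the article; a real system would track many resource types and enforce scheduling policy.

```python
class ResourcePool:
    """Toy model of a virtualized pool that grants and reclaims resources on demand."""

    def __init__(self, capacity):
        self.capacity = capacity   # total virtualized units in the pool
        self.allocated = 0         # units currently handed out

    def request(self, units):
        """Grant up to `units`; the grant is capped by what remains in the pool."""
        granted = min(units, self.capacity - self.allocated)
        self.allocated += granted
        return granted

    def release(self, units):
        """Return units to the pool when load drops."""
        self.allocated = max(0, self.allocated - units)

    def utilization(self):
        return self.allocated / self.capacity

pool = ResourcePool(capacity=100)
pool.request(60)   # wartime load spike: 60 units granted
pool.request(50)   # only 40 units remain, so the grant is capped at 40
pool.release(30)   # demand falls, 30 units are reclaimed for other users
```

The point of the sketch is the cap inside `request`: a client asks for what it needs, and the pool, not the client's own hardware, decides what can actually be granted.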
Improve command and control efficiency
The information flow of traditional combat command and control is mostly a “tree”: a hierarchical model that mirrors the existing command model and the established pattern of intelligence collection, processing, and distribution.
The structure is clear and balances efficiency against capability reasonably well. However, as battlefield data grows rapidly and the coordinated operations of combat units demand ever more data support, the hierarchy has become an obstacle to the on-demand flow of data.
Worse, if the central node fails, the entire system is paralyzed. Virtualization technology uses a decentralized, heterogeneous architecture to break the isolation between levels and build a flat, “cloud”-style command and control structure.
This distributed command and control mode offers far greater flexibility in combat command and is markedly more efficient.
A networked cooperative combat mode for surface-to-air missile systems based on virtualization
The aim is to meet the needs of future information warfare, implement the concept of networked warfare capability, and solve the real problems of the traditional ground-to-air missile weapon system: a rigid command and control architecture, poor flexibility and openness, weak network reconfiguration and reconstruction capability, and insufficient fire-level cooperative combat capability.
The specific points are as follows.
a) Broadband wireless autonomous collaboration and dynamic network reconfiguration.
To support coordinated operations, communication systems with strong hardware performance and support for software upgrades allow the network to connect the weapon system’s combat equipment in a flat, centerless topology, let combat equipment nodes join and leave the network dynamically, and support real-time reconstruction of the cooperative network as the battlefield environment changes.
b) Decoupling of combat resources and dynamic resource allocation.
Adopting a virtualization design built on the concept of “resource decoupling”, the weapon system’s detection and tracking, command and control, fire control, guidance, and launch resources are pooled into a system-wide resource pool. The command and control center can then dynamically assemble resources according to the operational deployment and task, forming a virtual combat unit that ensures optimal interception in a complex and highly dynamic battlefield environment.
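As a rough sketch of that “resource decoupling” idea, the snippet below assembles a virtual combat unit by drawing resources per function out of shared pools. The pool contents, function names, and task requirements are all invented for illustration, not taken from any real system.

```python
# Decoupled capabilities, pooled by function (contents are hypothetical).
pools = {
    "detection": ["radar_1", "radar_2", "eo_sensor_1"],
    "tracking":  ["tracker_1", "tracker_2"],
    "guidance":  ["guidance_1", "guidance_2"],
    "launch":    ["launcher_1", "launcher_2", "launcher_3"],
}

def form_virtual_unit(requirements, pools):
    """Assemble a virtual combat unit by drawing resources from the shared pools.
    Availability is checked first, so a request that cannot be met consumes nothing."""
    for function, count in requirements.items():
        if len(pools.get(function, [])) < count:
            return None  # the task cannot be staffed from the current pools
    return {function: [pools[function].pop(0) for _ in range(count)]
            for function, count in requirements.items()}

# One interception task: one detector, one guidance channel, two launchers.
task = {"detection": 1, "guidance": 1, "launch": 2}
unit = form_virtual_unit(task, pools)
```

Resources removed from the pools here would be returned when the virtual unit dissolves, mirroring the acquire-and-release cycle of the resource pool.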
Communication network architecture
Ground-to-air missile networked cooperative operations adopt a flat network structure. Built on a cooperative communication network, the system forms a distributed, multi-hop-reachable network carrying mixed real-time and non-real-time traffic, realizes distributed networking, and supports operation without a center.
Every node in the network has equal communication function and status, and any two nodes can exchange voice, data, imagery, and other services over wired or wireless channels, realizing the information-sharing capability of “discover at one point, known across the whole network”.
Every combat unit inside the system can “join the network wherever it is”. At the same time, intelligent environmental sensing lets the network adapt to the battlefield’s electromagnetic characteristics, geographic positions, communication distances, traffic volumes, and other conditions.
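The “discover at one point, known across the whole network” behavior is essentially a flood of the discovery over the mesh. The sketch below shows a minimal breadth-first flood; the topology and track identifier are made up for the example, and a real network would add sequence numbers, deduplication windows, and link-quality handling.

```python
from collections import deque

def flood(adjacency, source, message, inbox):
    """Breadth-first flood: a report discovered at `source` propagates hop by hop
    until every reachable node has a copy in its inbox."""
    seen, queue = {source}, deque([source])
    inbox[source].append(message)
    while queue:
        node = queue.popleft()
        for peer in adjacency[node]:
            if peer not in seen:
                seen.add(peer)
                inbox[peer].append(message)
                queue.append(peer)
    return seen

# A three-node chain: a sensor at "a" reports a track; "c" learns of it via "b".
adjacency = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
inbox = {node: [] for node in adjacency}
reached = flood(adjacency, "a", "track_001", inbox)
```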
Combat application architecture
Surface-to-air missile networked cooperative operations adopt an “edge cloud-end” architecture. An “edge cloud” is deployed in each combat unit; it aggregates the information of every device in that unit and provides centralized computing and information services for it. The “end” refers to the physical equipment that interacts with the edge cloud: equipment entities such as radar and electro-optical sensors, and human-computer interaction terminals such as operator consoles.
The edge cloud uses servers as its hardware platform and applies virtualization to combine the servers’ computing, storage, and data resources into computing and storage resource pools that support distributed computing and storage.
When a computing service fails, pooled computing resources can start backup services as needed, keeping the organization of combat mission channels flexible.
The specific functions are as follows
1. The combat unit’s “end” equipment can be turned into resources: radar, electro-optical, and similar equipment can be virtualized as available combat resources.
2. Algorithm models and strategy models can be decoupled and packaged as services.
3. The edge cloud can act as a gateway, publishing resources and services and pushing services to designated consumer nodes.
The “end” requests computing resources from the “edge cloud” in a service-oriented manner and exchanges data with it. “End” hardware varies with the equipment; conventionally it is an equipment node within the combat unit.
With the help of the edge cloud’s management services, the virtualized ground-to-air missile networked cooperative combat mode breaks the tightly closed loops that formerly bound combat resources. The weapon system’s detection, tracking, fire control, guidance, and launch units, missiles included, connect directly to the cooperative communication network and offer their capabilities as services.
The command and control system relies on the edge cloud to coordinate and schedule these functional units in time and space, forms optimal combat-resource allocation rules, and builds virtual combat units. Cooperative operations are then organized around the virtual combat unit, supported by resource services, so that combat resources are allocated where they yield the greatest interception effectiveness.
Combat mode design
Throughout networked cooperative operations of surface-to-air missiles, the system always maintains a command center.
In normal mode the command center is the command vehicle; in very large-scale operations, other combat vehicles can be promoted to command center. The command center provides data and computing resources and aggregates all detection resources in the system into an air-situation resource pool.
At the same time, according to the combat situation, the system’s tracking, guidance, and launch resource pools are combined optimally to form cloud combat capability.
If the combat environment requires two or more systems to cooperate, there will be two or more command nodes on the battlefield. The edge cloud data of the command nodes is merged automatically; one node can be promoted to main command center while the remaining nodes take on backup tasks.
The backup nodes can also contribute computing resources to balance the system’s computational load. Every combat unit connected to any command node can obtain the services it needs and perform its own tasks.
In this mode, the command center’s edge cloud receives real-time combat information from inside and outside the system, calls the related edge cloud services to display comprehensive situation information, and accepts manual operations.
The edge cloud of each combat unit reports its own vehicle’s tracking, guidance, and launch resources to the command center.
The command center acquires the system’s detection resources, receives air-situation and ground-status information from within the system in real time, and uses edge cloud services to perform unified filtering, air-situation fusion, and multi-station positioning, forming a unified air and ground picture. It also receives the system’s various internal resources in real time and aggregates all of this information into the resource pool.
Based on this resource pool, the command center dynamically optimizes the allocation of firepower resources and summarizes the combat resources under its control. System resources can be allocated to each combat unit in manual or automatic mode, with each allocation virtualized as a virtual combat unit.
During operations, the command center can designate another command node as the active standby. The standby node keeps a hot backup of the key data in the command center’s edge cloud.
When the command center can no longer operate, the standby node is promoted to command center and takes over the related command tasks. The new command center then automatically selects the next standby node and sends it the key data for hot backup.
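The promotion chain described above amounts to electing the highest-priority surviving node. A minimal sketch, with an invented priority order and failure set:

```python
def elect_command_center(priority_order, failed):
    """Return the highest-priority node that is still operational.
    `priority_order` lists nodes best-first: [primary, standby, alternate, ...]."""
    for node in priority_order:
        if node not in failed:
            return node
    return None  # every candidate is down

nodes = ["command_vehicle", "standby_1", "standby_2"]
elect_command_center(nodes, failed=set())                # normal mode: the command vehicle leads
elect_command_center(nodes, failed={"command_vehicle"})  # standby_1 is promoted
```

In the scheme described in the article, the newly promoted center would immediately start hot-backing its key data to the next node in line, so the chain never runs dry while candidates remain.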
Main technical approaches
Dynamic allocation of combat resources means coordinating the use of resources so as to maximize combat effectiveness. Combat resources include tracking resources, guidance resources, missile resources, and so on; the problem therefore decomposes into the dynamic allocation of tracking resources, of guidance resources, and of missile resources.
1. Dynamic allocation of tracking resources: tracking assignments are adjusted dynamically so that the total benefit of target tracking is maximized.
2. Dynamic allocation of guidance resources: by flexibly forming the fire-enable chain, guidance resources and missile resources are used in a balanced way, avoiding both shortfalls and waste.
3. Missile resource planning: subject to marginal-benefit constraints, a quantitative plan is selected for the optimal use of firepower against each target.
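Point 1 is an assignment problem. As a hedged illustration, the sketch below uses a simple greedy rule, repeatedly pairing the free tracker and target with the highest remaining benefit. The benefit matrix is invented, and a real system might instead solve the optimal assignment (for example with the Hungarian algorithm) and rescore it continuously as the air picture changes.

```python
def allocate_tracking(benefit):
    """Greedy tracker-to-target assignment. benefit[t][g] is the (assumed)
    payoff of tracker t covering target g; each tracker and each target
    is used at most once."""
    pairs = sorted(
        ((benefit[t][g], t, g)
         for t in range(len(benefit))
         for g in range(len(benefit[0]))),
        reverse=True,
    )
    used_trackers, used_targets = set(), set()
    plan, total = {}, 0
    for b, t, g in pairs:
        if t not in used_trackers and g not in used_targets:
            used_trackers.add(t)
            used_targets.add(g)
            plan[t] = g   # tracker t covers target g
            total += b
    return plan, total

# Two trackers, two targets: tracker 0 suits target 0, tracker 1 suits target 1.
plan, total = allocate_tracking([[3, 1], [2, 4]])
```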
Combat targets are divided according to whether they are adversarial: non-adversarial targets (such as cruise missiles, some types of unmanned aerial vehicles, and guided bombs) and adversarial targets (such as fixed-wing manned aircraft, armed helicopters, anti-radiation missile carriers, and anti-radiation missiles).
The two cases are considered separately because the strategies adopted for the two classes of targets differ in how actively they must respond.
For non-adversarial targets
There is no need to consider countermeasures by the target; this is a purely optimization-driven resource allocation problem.
For adversarial targets
The target’s countermeasures and tactical behavior must be considered, and the use and allocation of resources must be arranged so as to induce and steer the enemy’s behavior into the situation most favorable for engagement.
Networked collaboration mechanism technology
In the traditional platform-centric warfare mode, each combat unit’s sensors and missile resources are controlled by that unit’s fire control, forming a closed loop of control, tracking, and guidance within the unit.
Each combat unit accepts unified command from the upper-level command and control system, which maintains a unified situation picture and allocates targets optimally.
In the networked cooperative combat mode, by contrast, sensor resources, firepower resources, and launch resources are perceived in a distributed way, integrated on demand, and managed intelligently, so the resources of every platform are used comprehensively and optimally.
Resources distributed across a wide area are shared at several levels, including information, signal, and radio frequency, realizing the system’s combined combat capability against the enemy.
This mode is a cloud combat mode under the networked-combat concept; its core is the virtualization and pooling of combat resources.
Conclusion
As weaponry and military technology develop worldwide, new technologies such as cloud computing and big data, together with the continued deepening of networks, will profoundly influence the battlefields of the future.
To meet the needs of networked operations, this paper applies cloud computing virtualization technology to the networked cooperative operations of surface-to-air missiles: it studies a virtualized networked cooperative combat model, designs a distributed and flat communication architecture, and establishes an “edge cloud” and “end” combat application structure, making the organization of cooperative operations more flexible through services.
At the same time, thanks to backup and fault-tolerant processing, system reliability is further increased; the informatization of future air-defense operations, and with it the capability for networked cooperative operations, will be greatly improved.
Ethereum on a roll as cryptocurrency takes steps towards greener future

Carbonyte, Aug 10
At the time of publishing, one Ether (the token of the Ethereum network) was worth $4,252.
Less than a month ago one Ether was worth a staggering 42 per cent less at $2,434.
If you bought $1000 of Ether on July 20 — a little over 21 days ago — you’d have already made more than half your money back by today.
The cryptocurrency hit its highest ever price in early May this year, when it levelled off just one dollar shy of the AU$5,000 mark.
Simon Peters, market analyst and cryptocurrency expert at eToro, said Ethereum was making inroads in being “greener”, given a major criticism of Bitcoin is the amount of electricity used to mine coins.
“Ethereum’s London hard fork finally took place on Thursday, with the cryptoasset’s creator Vitalik Buterin hailing the moment as a step toward making it more energy efficient,” Mr Peters said.
“The hard fork is the single biggest upgrade to the Ethereum network since 2015 and has significant implications for the cost of fees on the blockchain — so-called ‘gas’ fees.
“But according to the founder Buterin, it has implications for the energy efficiency of ETH. Speaking to Bloomberg News in Singapore, Buterin said EIP-1559 could reduce emissions caused by the network by 99 per cent.”
Any step a cryptocurrency takes towards being more environmentally friendly is likely to drive up its price.
“Cryptoassets such as ETH and bitcoin have come in for criticism in recent times for high energy usage,” Mr Peters said.
“Tesla chief executive Elon Musk triggered a major selloff in May with comments on the high emissions of the bitcoin mining process.”
The information provided on this website is general in nature only and does not constitute personal financial advice. The information has been prepared without taking into account your personal objectives, financial situation or needs. Before acting on any information on this website you should consider the appropriateness of the information having regard to your objectives, financial situation and needs.
Behind the Scenes: The ALAX Team

Tomas Koprusak
If Matej and Kalvin are the Admirals of the fleet, then Tomas is the Captain of the ship, guiding it through storms and calm seas with a steady hand. He started his career as a web developer with IBM, where he moved up the ranks at an incredible pace to the position of web execution specialist. From there he moved on to Sygic, spending two years developing their online marketing and serving as product manager for the Open Street Map project. One highlight of his career was working at Smart media star as a product manager on a project that used big-data analysis and had to be scalable, available, and immediately responsive, which was no easy feat. Tomas is the full package as far as product leads go: he can carefully steer his team to the desired results while building a webpage and fulfilling his managing duties. Did we mention he has a wide array of programming languages at his disposal? HTML5, PHP, XHTML, jQuery, and the list goes on.
Matej Nemcek
Matej is the resident wizard, with less magic and more science. He is the brilliant backend engineer who makes sure everything works better than planned. His expertise spans Express.js, ES2015 (ES6), FlowType, StandardJS, Vue.js, Bulma, PouchDB, Bootstrap, and jQuery, so he is actually a full stack developer, though he prefers the title backend engineer. He doesn’t just program; he teaches people how to do it themselves at his Node.js school. He also has a keen interest in cryptocurrencies such as Bitcoin and Ethereum, running nodes and programming smart contracts, knowledge he put to use in SophiaTX. When he is not programming, he is organizing meetups at Progressbar (which he co-founded), introducing people to the world of blockchain by inviting guests from the field to share their experience.
Peter Student
Peter is our Head of Software Development, or as we like to call him, the Man with Answers. Whenever we stumble upon a tech problem we’d like to solve, he has the solutions, and if he doesn’t, he comes up with them. As our software architect, he and his team help us implement our ideas in the form of technical solutions. He has almost eight years of experience across different fields, which makes him extremely proficient in his work. He co-founded one of the very first online marketing agencies in the Czech and Slovak Republics, and throughout his career he has led teams on various web-based and mobile app projects. Most recently he worked as the Head of Software Development for Thirtyseventy Digital, where he and his team delivered global IoT initiatives for companies such as Electrolux and AEG.
If you are more interested in numbers, we have some for you. We have four community managers working around the clock to provide you with insights and answers; they are fun, highly motivated, and lack the words “I don’t know” in their vocabulary. Our resident PR and Operations managers are dedicated to building our brand awareness, working under the motto “Haven’t heard of us? Well, you should have!”. Last, but in no way least, we have a team of developers on two continents doing their best to roll out a product you will love and we will be proud of!
To find out more about the team visit our website, where you can read all the BIO’s of the people working on ALAX.
For more information join our Telegram or subscribe to our newsletter. For the most up-to-date news follow us on social media.
The Essence of Liberalism

Liberalism is the philosophy of liberty.
Why liberty?
Simply put: liberty works and liberty’s right.
Strictly speaking I could sit down now. I flew all the way here so I won’t, that would seem like a waste. So let me expand on liberalism.
I’d like to use one source to explain these ideas. I think the source a rather unimpeachable one as far as classical liberalism goes.
I’ll be preaching to you from the book of Thomas — Thomas Jefferson that is. And, for the main text, I’d like to use his stirring words from the Declaration of Independence.
“We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. — That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, — That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it….”
Our allies on the Left love the first part of Jefferson’s sentence where he said “all men are created equal.” The egalitarian impulse is so strong many are willing to distort these words to give substance to their flimsy philosophy.
Could Jefferson have meant some sort of equal results?
Not if you take him at his own word.
He said that “A right to property is founded in our natural wants, in the means by which we are endowed to satisfy those wants, and the right to what we acquire by those means without violating the equal rights of other sensible beings.”
The right to property is restricted only the equal rights of others. But is inequality of wealth a violation of equal rights?
Jefferson said it wasn’t. For instance he wrote: “To take from one, because it is thought that his own industry and that of his father has acquired too much, in order to spare to others, who, or whose fathers have not exercised equal industry and skill is to violate arbitrarily the first principle of association, ‘the guarantee to every one of a free exercise of his industry and the fruits acquired by it.”
In his Second Presidential Inaugural Address he said his presidency promoted “equality of rights” and the maintenance of “that state of property, equal or unequal, which results to every man from his own industry, or that of his fathers.”
In Jefferson’s view the equality of rights might, and probably would, lead to an inequality of results at the extreme, but it would also lead to general equality of results for a larger middle.
This is a point that FA Hayek made so well in The Constitution of Liberty: “It is just not true that human beings are born equal; . . . if we treat them equally, the result must be inequality in their actual positions; . . . [thus] the only way to place them in equal position would be to treat them differently. Equality before the law and material equality are, therefore, not only different but in conflict with each other.” [p. 87]
Because there is no equality of ability, equality of rights must lead to inequality of results.
Conversely this means to obtain equality of results a state must abolish equality in rights.
Imagine the Olympics conducted on such principles. Each runner in the marathon must finish at the same time. How? Perhaps by the imposition of added burdens on faster runners. Or maybe slow runners can use bicycles. We end up with the nightmarish society that Kurt Vonnegut described in his short story Harrison Bergeron. Vonnegut begins his story:
“The year was 2081, and everybody was finally equal. They weren’t only equal before God and the law. They were equal every which way. Nobody was smarter than anybody else. Nobody was better looking than everybody else. Nobody was stronger or quicker than anybody else. All this equality was due to the 211th, 212th and 213th Amendments to the Constitution and to the unceasing vigilance of agents of the Handicapper General.”
In this foreboding future a ballet is conducted with the best dancers weighted down with lead. The intelligent wear headpieces that periodically scramble their brain waves to disturb their ability to think. The greatest crime is to be under-handicapped.
Surely this was not the type of philosophy Jefferson had in mind.
It wasn’t. Jefferson knew different abilities would lead to different results.
He was aware there were various levels of wealth and property. Left floating out of context the term “created equal” is meaningless.
But, Jefferson did not leave it floating. He immediately stated precisely what the phrase meant. It’s all part of the same sentence — “all men are created equal, that they are endowed by their Creator with certain unalienable Rights.”
It was this equality of rights that Jefferson was speaking about, not equality of results.
This idea of equal rights is not hard to understand. Jefferson wrote about it often. In one letter he wrote: “No man has a natural right to commit aggression on the equal rights of another; and this is all from which the laws ought to restrain him…”
In another of his copious correspondence he wrote saying good legislation banishes “all arbitrary and unnecessary restraint on individual action” and “shall leave us free to do whatever does not violate the equal rights of others.”
In 1819 he wrote: “rightful liberty is unobstructed action according to our will within limits drawn around us by the equal rights of others.”
Note Jefferson argued these rights were part of human nature not the result of legislation at all. Governments do not grant rights. Rights precede and are superior to government.
In the Declaration of Independence he originally wrote that individual rights are “inherent & inalienable.” In the editing process, through the Continental Congress, this was simplified. Jefferson’s idea that rights are inherent is, in my view, the better way of looking at it.
Human rights are human rights because we are human not because we are subjects of any state. Since we are all equally human then all human rights must be equal. This was a primary principle of liberalism. It still is. Such rights don’t end at the border, nor do they start their either!
Jefferson was arguing that rights precede the state and the justification for government — the sole justification — is it protects these pre-existing rights. As he put it “To secure these rights Governments are instituted among Men.”
Government is not there to make us wealthy. It is not there to make us nice. It is not there to educate us, feed us, coddle us, caress us, harness us, or change our diapers when we need it. It is there for one primary reason—“to secure” the rights which all people have equally.
Jefferson then noted a truly radical principle of liberalism. Let me quote his entire sentence: “That to secure these rights Governments are instituted among Men — deriving their just powers from the consent of the governed.”
The old order believed what rights people held were grants given them by the King, who received his power through a Divine Right bestowed by God. Jefferson took a radical new approach. In the New World of liberal America, governments did not grant rights and did not exist because of Divine will. The source of the legitimacy of the state is the people themselves.
Each individual legitimately could defend his own rights with whatever was necessary to do so. This was the state of nature.
By creating governments men created an institution with the function of protecting their rights. That institution can’t grant rights as it has none to grant. It can only do what the individual can do. I can neither give you rights nor strip you of them. Neither can the state. Collectively we can work to protect those rights but not control the lives of others.
Jefferson spoke of the “just powers of government,” which was his way of saying there are also unjust powers of government.
A just power is derived from the consent of the governed. People, not a Divine Being, are the source of the power of government. And people can only give to government what powers they themselves hold.
Now, ask yourself what rights you have? Do you have the right to ransack the library of your neighbor censoring texts you find offensive? Can you tell him what wages he must pay his employees? Do you have a right to control his love life, tell him what color to paint his house, or what values to teach his children? If you as an individual do not have such rights how can you delegate these rights to the government? You can’t.
If government derives its just powers from the governed it can only receive from them the rights they held before the government was formed. The more authoritarian-minded argue when man left the state of nature for the artificial state of government he surrendered his natural rights. Jefferson thought that a dangerous delusion. He said “the idea is quite unfounded, that on entering into society we give up any natural right.”
Another attempt to strip people of their rights was the argument that the rights of the collective we call society somehow were greater than the rights of the individual. This is another idea that finds much favor with both Left and Right collectivists. Jefferson said quite bluntly: “The rights of the whole can be no more than the sum of the rights of individuals.”
Classical liberalism today is primarily known for its advocacy of economic freedom. Yet during its infancy liberalism was most concerned with freedom of the mind. With centuries of religiously-sanctioned genocide and war fresh in the minds of the great liberals they argued only when people are free to think and to express that opinion can peace be secured.
To restrict speech in the name of peace or social harmony is to guarantee conflict. Liberals knew this.
Today, with organized efforts being made to imprison the minds of men and incarcerate their voice, it is important to see how Jefferson saw this issue.
He wrote: “[John] Locke denies toleration to those who entertain opinion contrary to those moral rules necessary for the preservation of society. It was a great thing to go so far… but where he stopped short, we may go on.” Jefferson said that God, keep in mind he was a Deist, not a Christian, “created the mind free” and that “all attempts to influence it by temporal punishments, or burdens or by civil incapacitations, tend only to beget habits of hypocrisy and meanness, and are a departure from the plan of the holy author of our religion.”
He ridiculed rulers for “setting up their own opinions and modes of thinking as the only true and infallible, and as such endeavoring to impose them on others.”
Just in case people were unclear, Jefferson said quite bluntly: “The opinions of men are not the object of civil government, nor under its jurisdiction.”
The expression of even odious opinions, is not an object of the government. Why? Go back to the foundations of the liberal order to find out. No individual has the right to control the opinions of others. Thus no individual can give that right to the government. Since all government power is derived from the consent of the governed and since the governed have no such rights they can’t delegate this power to the State. Even if a majority of the people supported shackling the tongues of heretics, no such power can be granted the state because the rights of the group are merely the “sum of the rights of individuals.”
Liberals saw the moral issues of how each person lives their own life as belonging to the private sphere. The public sphere comprised those actions by which one directly violated the life, liberty or property of others. Church and state were separate because, as Jefferson put it, “the life and essence of religion consists in the internal persuasion or belief of the mind.” This is inherently a private concern of the individual, not a public concern of the state. He said: “The care of every man’s soul belongs to himself…”
Jefferson said “Our rulers can have authority over such natural rights only as we have submitted to them, the rights of conscience we never submitted, we could not submit.” “The legitimate powers of government extend to such acts as are injurious to others. But it does me no injury for my neighbor to say there are twenty gods, or no god. It neither picks my pocket nor breaks my leg.”
The mind of the individual is free to think as it wishes. Each is equally free to express those thoughts. No matter how offensive, that opinion “neither picks my pocket nor breaks my leg.” I have a right to my life, to my liberty and to my property. But there can be no such thing as a right to live unoffended by others. As Jefferson told the Danbury Baptists, in that famous letter where he coined the phrase “a wall of separation between Church and State,” “the legislative powers of government reach actions only, and not opinions…”
Jefferson was passionate about the ability of human reason, when left free, to deal with evil. When he established the University of Virginia he said that it “will be based on the illimitable freedom of the human mind. For here we are not afraid to follow truth wherever it may lead, nor to tolerate any error so long as reason is left free to combat it.”
We should remember Jefferson himself was a victim of a concerted smear campaign by the authoritarians of his day. His liberalism was too much for the conservatives of his day. And they were incensed when he allowed Thomas Paine, whom Teddy Roosevelt infamously called a "filthy, little atheist," to stay in the White House. The conservative press smeared him with tabloid-style articles. The pulpits were filled with preachers railing against Jefferson and claiming his views would destroy all civilization and were contrary to the word of God.
Yet in the midst of this Jefferson said: "Were it left to me to decide whether we should have a government without newspapers, or newspapers without a government, I should not hesitate a moment to prefer the latter." Of course, the opposite of Jefferson would say the opposite of that. He would stand in his bully pulpit claiming that "fake news" should give him the power to control the media. That is what the anti-Jefferson would and does advocate.
Liberalism was first and foremost a philosophy that advocated the unchaining of the human mind. It was the right to believe and to express that belief that compelled liberalism. The principles that justified freedom of conscience were the same principles that later justified freedom of commerce. This is why so many who damn social freedoms ultimately become blatant advocates of economic centralism and wage war on markets and free trade.
Liberals did not embrace free markets first and then attach social freedom to the mixture. It was social freedom that first compelled them and the consistency of their arguments caused them to adopt free markets in later battles.
Historians Isaac Kramnick and Laurence Moore noted this.
“We have seen that the very same people, the very same English and American liberals calling for an end to the role of the church in the state with the cry of individual freedom of conscience and a limited state, were calling for an end to the role of the state in the economy, again in the name of individual freedom. Such was the nature of the victory that laissez-faire liberalism sought and achieved. In the name of individualism and freedom, all restraint, be it religious, political or economic, was deemed tyrannical.” These historians say that “no one better captured this moment of liberal ascendance when religious laissez-faire went hand in hand with the triumph of economic laissez-faire than Jefferson did.” (The Godless Constitution, p. 85.)
The French laissez-faire economist Destutt de Tracy wrote: "Commerce, far from being evil, is the 'author of all social good.'" The industrious man "does more good to humanity, often even without knowing it, than the most humane idler, with all his zeal."
What many people don't know is that the man who translated this free market book from French into English was Jefferson himself. He embraced markets as fervently as freedom of conscience. He told Americans: "Let the general government be reduced to foreign concerns only, and let our affairs be disentangled from those of all other nations, except as to commerce, which the merchants will manage the better the more they are left free to manage for themselves, and our general government may be reduced to a very simple organization and a very unexpensive one — a few plain duties to be performed by a few servants."
Liberalism means liberty in all spheres of man's life. It includes his private and public life. It includes the boardroom and the bedroom; conscience and commerce; the mind and the market. This, Jefferson said, was the reason for limiting the power and scope of government.
This was and is the essence of the true liberal tradition.
Our one source of income remains payment or donations for the columns that you see here. Please consider either making a one time donation or a monthly donation to help sustain them. The link is below.
Your support to fund these columns is important, visit our page at Patreon. | https://medium.com/the-radical-center/the-essence-of-liberalism-b0c988c8a264 | ['James Peron'] | 2019-08-09 03:11:01.147000+00:00 | ['Government', 'Natural Rights', 'Classical Liberalism', 'Thomas Jefferson'] |
Designers guide to user data and CRUD | In this article, I'd like to describe the challenges of designing apps that manage user-generated content and how to create a great app experience. Examples here are mostly for the Apple platform, but the same principles apply everywhere. Without further ado, let's talk about user content.
User data or user-generated content
User data or user-generated content is anything that the user creates in your app. The app's purpose can be working with user content, like creating documents, writing notes, taking and editing photos, composing music, etc. Or this can be a feature in a larger app: commenting on articles, creating wish-lists in a shopping app, saving a place you're interested in on a map, etc.
What is CRUD
CRUD stands for Create, Read, Update, and Delete. You may know this acronym as functions of persistent storage. We can use CRUD as a framework to design user interactions.
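Though the article is aimed at designers, the four operations map directly onto code. A minimal, purely illustrative in-memory sketch in Python (none of these function names come from the article):

```python
# Minimal in-memory CRUD store; purely illustrative.
notes = {}
next_id = 1

def create(text):
    """Create: add a new entity and return its id."""
    global next_id
    note_id = next_id
    notes[note_id] = text
    next_id += 1
    return note_id

def read(note_id):
    """Read: fetch an entity, or None if it does not exist."""
    return notes.get(note_id)

def update(note_id, text):
    """Update: change an existing entity in place."""
    if note_id in notes:
        notes[note_id] = text

def delete(note_id):
    """Delete: remove an entity; ignore ids that are already gone."""
    notes.pop(note_id, None)

note_id = create("Buy avocados")
update(note_id, "Buy ripe avocados")
print(read(note_id))  # -> Buy ripe avocados
delete(note_id)
print(read(note_id))  # -> None
```

Every user-facing action in a content app ultimately resolves into one of these four calls against some store, which is why CRUD works as a framework for designing the interactions themselves.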
Importance of user data
User data is something people invest their time, skill, and soul in. Some things, like photos or videos, are irreplaceable. User data belongs to users regardless of what the privacy policy says.
“I just pressed something and everything disappeared.” — Frustrated user.
It is incredibly important to protect users from data loss or corruption. Because CRUD actions directly change user data, the UI must be predictable to minimize user error.
With this in mind, let's take a look at designing CRUD operations:
Create
New entities, depending on the app, can be empty or must contain information. | https://medium.com/flawless-app-stories/designers-guide-to-user-data-and-crud-4e53f7c5150d | ['Tanya Anokhina'] | 2019-09-26 09:06:54.002000+00:00 | ['Usability', 'iOS', 'Design', 'UI Design', 'UX Design'] |
A brief explanation of projects and products and the difference between the two | Easy read, easy understanding. Good writing is writing that can be understood easily.
Follow | https://medium.com/easyread/project-vs-product-b9c89a079fed | ['Fitra Akbar'] | 2020-11-22 07:34:45.314000+00:00 | ['Indonesia', 'Project Management', 'Product Manager', 'Startup', 'Product Management'] |
Explore Emerging Technologies in the Aerospace Industry | Innovations in the aerospace industry are continuously on the rise over the years. While a number of technologies have emerged in the industry, here are some of the top technologies that you must know.
• Virtual Reality Interiors
In recent times, there is hardly any industry that does not leverage the latest virtual reality technology. The aerospace industry is no different. Aircraft are now being designed with the use of virtual reality interiors. With virtual reality technology, the aircraft windows are designed to provide travelers with a view of the outside even when there is no window. This makes the aircraft lighter, thereby enabling faster flight.
• Zero Fuel Aircraft
Zero fuel aircraft have gained much popularity in recent years. They have proved helpful in the commercial as well as the civil sector. Such aircraft make use of photovoltaic panels to leverage solar energy in order to provide the essential thrust to the aircraft engine. Zero fuel aircraft have a wide range of uses in aerial photography, wildlife protection, agriculture, 3D mapping, and more.
• Autopilot
Autopilot is another emerging technology in the aerospace industry. Autopilot is all set to replace the need for human operators in an aircraft. Autopilot is a system used to control the trajectory motion of an aircraft, spacecraft, or marine craft without the requirement of manual control.
• Smart Automation
Manufacturing the different parts of an aircraft is daunting as well as very expensive. However, with the emergence of the latest technologies, creating efficient and smarter aircraft has become easy. Smart automation enables the technicians to easily scan the different metal surfaces with the use of smart glass or tablets. It helps in enhancing automated transactions as well as improves efficiency.
• Advanced Materials
Advanced materials are now used for the making of aircraft. Materials such as carbon nanotubes and graphene help make the wings of the aircraft more efficient through the reduction of weight and fuel consumption.
Apart from these technologies, 3D printing, AI-based design, and smart airports are some of the latest technologies in the aerospace industry. With seamless innovations in the industry, the sector is expected to experience optimum success. | https://medium.com/@infigentsolution/explore-emerging-technologies-in-the-aerospace-industry-37c79cb88a96 | ['Infigent Solution Pvt Ltd'] | 2021-08-17 08:42:15.297000+00:00 | ['Writers Block', 'Writer', 'Writers On Medium'] |
How to Use ElasticSearch With Django | ElasticSearch with Django
What is Elasticsearch?
Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java.
What is Elasticsearch used for?
Elasticsearch allows you to store, search, and analyze huge volumes of data quickly and in near real-time and give back answers in milliseconds. It’s able to achieve fast search responses because instead of searching the text directly, it searches an index.
Elasticsearch — some basic concepts
Index — a collection of different types of documents and document properties. For example, a document set may contain the data of a social networking application.
Type/Mapping − a collection of documents sharing a set of common fields present in the same index. For example, an index contains data of a social networking application; there can be a specific type for user profile data, another type for messaging data, and yet another one for comments data.
Document − a collection of fields defined in the JSON format in a specific manner. Every document belongs to a type and resides inside an index. Every document is associated with a unique identifier, called the UID.
Field — Elasticsearch fields can include multiple values of the same type (essentially a list). In SQL, on the other hand, a column can contain exactly one value of the said type.
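As a concrete illustration of these concepts, a single document from the social networking example might look like the JSON below. The index name, UID, and field names here are made up for the example, not taken from the article:

```python
import json

# A hypothetical user-profile document from a social networking app.
# It belongs to an index, carries a unique identifier (UID), and its
# fields live under "_source".
doc = {
    "_index": "social_app",   # the index this document belongs to
    "_id": "user-42",         # the UID
    "_source": {
        "username": "alice",
        # a field can hold multiple values of the same type
        "interests": ["hiking", "photography"],
    },
}

print(json.dumps(doc, indent=2))
```

Note the `interests` field: unlike an SQL column, it holds a list of values of one type, exactly as described above.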
Using Elasticsearch with Django
Install and configure:
Install Django Elasticsearch DSL: $pip install django-elasticsearch-dsl
Then add django_elasticsearch_dsl to the INSTALLED_APPS
You must define ELASTICSEARCH_DSL in your Django settings. For example:
ELASTICSEARCH_DSL = {
    'default': {
        'hosts': 'localhost:9200'
    },
}
Declare data to index:
Then for a model:
# models.py
from django.db import models


class Category(models.Model):
    name = models.CharField(max_length=30)
    desc = models.CharField(max_length=100, blank=True)

    def __str__(self):
        return '%s' % (self.name)
To make this model work with Elasticsearch, create a subclass of django_elasticsearch_dsl.Document, create a class Index inside the Document class to define your Elasticsearch index name and settings, and finally register the class using the registry.register_document decorator. The Document class is required to be defined in documents.py in your app directory.
# documents.py
from django_elasticsearch_dsl import Document
from django_elasticsearch_dsl.registries import registry
from .models import Category


@registry.register_document
class CategoryDocument(Document):
    class Index:
        name = 'category'
        settings = {
            'number_of_shards': 1,
            'number_of_replicas': 0
        }

    class Django:
        model = Category
        fields = [
            'name',
            'desc',
        ]
Populate:
To create and populate the Elasticsearch index and mapping, use the search_index command: $python manage.py search_index --rebuild
For more help, use the $python manage.py search_index --help command
Now, when you do something like:
category = Category(
    name="Computer and Accessories",
    desc="abc desc"
)
category.save()
The object will be saved in Elasticsearch too (using a signal handler).
Search:
To get an elasticsearch-dsl-py Search instance, use:
s = CategoryDocument.search().filter("term", name="computer")
# or
s = CategoryDocument.search().query("match", desc="abc")

for hit in s:
    print(
        "Category name : {}, description {}".format(hit.name, hit.desc)
    )
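Under the hood, elasticsearch-dsl just builds a JSON request body that is sent to the cluster. The dicts below are hand-written approximations of what the two searches above send, shown for illustration only (the library assembles these for you):

```python
import json

# Approximate request bodies for the two searches above.
# "term" matches the exact value; "match" runs full-text analysis.
term_body = {"query": {"term": {"name": "computer"}}}
match_body = {"query": {"match": {"desc": "abc"}}}

for body in (term_body, match_body):
    print(json.dumps(body))
```

Seeing the raw body makes it easier to debug queries with Kibana or curl, since you can paste the same JSON into the `_search` endpoint directly.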
To convert the Elasticsearch result into a real Django queryset, just be aware that this costs a SQL request to retrieve the model instances with the ids returned by the Elasticsearch query.
s = CategoryDocument.search().filter("term", name="computer")[:30]
qs = s.to_queryset()
# qs is just a django queryset and it is called with order_by to keep
# the same order as the elasticsearch result.
for cat in qs:
    print(cat.name)
Who uses Elasticsearch?
eBay — with countless business-critical text search and analytics use cases that utilize Elasticsearch as the backbone, eBay has created a custom ‘Elasticsearch as a Service’ platform to allow easy Elasticsearch cluster provisioning on their internal OpenStack-based cloud platform.
Facebook has been using Elasticsearch for 3+ years, having gone from a simple enterprise search to over 40 tools across multiple clusters with 60+ million queries a day and growing.
Uber — Elasticsearch plays a key role in Uber’s Marketplace Dynamics core data system, aggregating business metrics to control critical marketplace behaviors like dynamic (surge) pricing, supply positioning, and assess overall marketplace diagnostics — all in real-time.
GitHub uses Elasticsearch to index over 8 million code repositories, as well as critical event data.
Microsoft — uses Elasticsearch to power search and analytics across various products, including MSN, Microsoft Social Listening, and Azure Search.
Just Eat — Elasticsearch increases delivery radius accuracy as it can be used to define more complex delivery routes and provides real-time updates whenever a restaurant makes a change.
Thanks for reading. If you found the article useful, don't forget to clap and share it with your friends and colleagues. :) If you have any questions, feel free to reach out to me.
Connect with me on 👉 LinkedIn, Github :) | https://medium.com/geekculture/how-to-use-elasticsearch-with-django-ff49fe02b58d | ['Hitesh Mishra'] | 2021-04-26 05:27:05.888000+00:00 | ['Django', 'Coding', 'Programming', 'Python', 'Elasticsearch'] |
How to Simulate Your Return on Ad Spend Before Launching Your Product
Fake doors and multivariate testing can tell you which ideas will sell
Photo by Dil on Unsplash
Is it possible to simulate a product idea that only exists in your head right now?
What if we could understand exactly how customers would react to a hypothetical product — even down to the nitty-gritty details of what your CAC (customer acquisition cost) would be, your ROAS (return on ad spend), and even which ad would work best for any given audience?
If this was possible, wouldn’t it allow us to rapidly assess product-market fit without investing the $50k–$100k required to design, manufacture, and ship a product? Yes, it would. It would be kind of like a cheat code.
About a year ago, my team and I started to work with several well-capitalized entrepreneurs who were launching many different brands and products simultaneously — a business model referred to as a venture studio, which is when a single entity both founds and funds their own venture ambitions. This is in stark contrast to the venture capital model, where one entity founds the idea and the other entity funds the idea.
Entrepreneurs running venture studios have a ton of advantages that skew the odds heavily in their favor. They have experience launching successful products before, they have capital, they have a reliable network of tested vendor relationships, and they have proven instinct.
Their biggest challenge actually has nothing to do with the typical challenges most founders face (funding, team building, ops), but instead has to do entirely with time. They have a lot of ideas and not enough time to execute all of them. Execution against a product idea is at least a six-to-nine-month process that requires product design, manufacturing/logistics negotiations, creative/web/content production, and launch campaign planning/deployment. All this, with only a 20% chance of successfully reaching product-market fit.
A holy grail for entrepreneurs is to find a way to skip all the time and work in between product idea and product-market fit.
This is where we started playing with the idea of combining a design practice called fake doors and a marketing practice called multivariate testing (MVT). After much trial and error, we discovered that by combining the two principles in a strict methodology, we could successfully simulate hundreds of product launch scenarios and tell entrepreneurs which products were worth launching and how (before any product even exists).
I’m going to peel back the curtain and show you how we do it for venture studios, how it works in a three-step process, and how you can apply it to your product idea, no matter who you are. | https://medium.com/better-marketing/how-to-simulate-your-return-on-ad-spend-before-launching-your-product-a801b31b8a30 | ['Dan Pantelo'] | 2020-03-10 14:59:56.083000+00:00 | ['Entrepreneurship', 'Design Thinking', 'Business', 'Marketing', 'Product Management'] |
Supercharged Science: Will big data lead us to faster medical breakthroughs? | In the last few blogs, I have explored the future of health in a data-driven world, from smart devices in our homes improving our everyday health, to the importance of big data in the current pandemic. In my final blog on big data and health, I look at the way that science and pharmaceutical companies are using AI to develop faster, safer and more effective treatments of our biggest diseases. This has fast-tracked what would otherwise have taken years, even decades, of research. With AI, the future of medicine is looking bright.
Mapping diseases
There are nearly 25,000 genes in the human genome, but the connection between genes and disease is still largely not understood; so far, only 2,418 of our genes have been attributed as the causes of human illnesses. This has led science and pharmaceutical companies to pin their hopes on machine learning to detect patterns between the thousands of genes and diseases that would take humans many, many years to understand.
OccamzRazer has become the first to map everything science knows about Parkinson's disease. This means that all information from doctors' appointments, molecular processes in the brain, individuals' genetic profiles, and results from drug trials is available in a database accessed by their cutting-edge machine learning algorithm. They liken this to a human doctor who knows absolutely everything about Parkinson's. The possibilities here are endless; AI will be able to piece together hidden connections between all of this data, advancing the timeframe for potential cures. Katrina Sophia Voltz, CEO and founder of OccamzRazer, admits that while their database is massive, there are still gaps in our knowledge. This is where scientists and AI can work together, designing experiments that fill the remaining gaps in the puzzle.
With similar strategies applied to other human diseases, such as cancers and Alzheimer's, effective data management for AI could fast-track the discovery of cures and create a world where, one day, there is a cure for all.
Drug discovery
Pharmaceutical and technology companies are taking a number of approaches to use AI to assist their discovery of successful medical cures and advance our knowledge of the disease. Currently, around 90% of potential cancer treatments fail in the development stage.
Thanks to CRISPR gene editing technology, technology companies like DepMap can use large databases of genes, applying artificial intelligence and CRISPR to turn off genes one by one to identify which genes have an affect on the growth of different cancers. With the results, they can create medicines that target those genes in order to treat the cancer. Over 3000 drug combinations can be tested on the dataset of cell models to identify possible treatments, increasing the future success rate and reducing time of developing new treatments.
As we have learnt during the COVID-19 pandemic, the usual time for drug and vaccine development can take 10 years or more, with less than 12% of drugs making it to pharmacies.
AtomWise recently received funding of $123 million for their drug acceleration programme using AI. Their deep learning algorithm AtomNet uses a database that autonomously learns how millions of different molecules and proteins will bind, and is able to test over 16 billion different combinations in just 2 days, something that would normally take years. This will help scientists to identify combinations that are both effective and safe far quicker, and therefore fast-track the development process. AtomNet has been used to narrow down potential combinations that are effective targets for COVID-19, narrowing thousands of combinations down to a few hundred candidates.
With advances in technology in the coming years, as predicted by Moore’s law, it is conceivable that drug development processes like this, with far bigger databases and fast processors in the future, could find potential treatments within days of any new pandemic.
AI and COVID-19
AI is being used in a number of ways during this pandemic, one, as noted above, is by fast-tracking drug combinations that could be effective treatments for the virus.
The UK's Medicines and Healthcare Products Regulatory Agency (MHRA) has just granted a £1.5 million fund to GenPact to develop an algorithm to support the mass-vaccination scheme in the UK. Normally, up to 100,000 reports of vaccine side effects per 100 million doses could be expected within a 12-month period. The aim of the algorithm is to run safety checks on a large scale for any potential side effects that may pose a dangerous risk to the public, before mass-vaccination takes place. The MHRA have said they currently do not expect the vaccine to pose any more safety risks than other vaccines in the past, but the algorithm will act as an extra safety measure.
This is not the first algorithm of its kind. Earlier in the year, the U.S. Food and Drug Administration held a competition to source the best algorithms that could identify side-effect event reports to aid the processing of all US drugs. The winners were Enigma and two scientists from within the agency.
In addition to using algorithms to make safer treatments for COVID-19, it can also prove effective for diagnosing it. MIT researchers have been training an algorithm on thousands of recordings of coughs, both regular coughs and positive Covid coughs. Their algorithm has shown success in identifying differences between regular and Covid coughs, proving a 98.5% accuracy. In the future, they hope their FDA-approved app will allow the public to record themselves cough and get feedback on whether their cough is likely to be the virus, even when they feel no other symptoms. They identified that there are many people currently unaware they have the virus, but through recording a forced cough were proved to be positive. With some further development, this could become a great way for us to test whether that niggling cough that we have started with is something we should be staying at home with and getting tested.
I write on behalf of Digital Bucket Company, a consultancy specialising in Big Data, AI and Cyber Security, | https://medium.com/carre4/supercharged-science-how-big-data-will-lead-us-to-faster-medical-breakthroughs-8cf08d25074b | ['Lauren Toulson'] | 2020-11-28 15:12:34.784000+00:00 | ['Data', 'Artificial Intelligence', 'Health', 'Vaccines', 'Science'] |
Ego: Self-Discovery Oracle Card | Your ego is the roundtable of children within you doing their very best to meet your needs.
You may have been taught you must destroy your ego, but there is no part of you that requires destruction. Yes, your ego makes terrible masters, but they do make fantastic servants.
The children that make up your ego are the parts of you that only ever want to feel loved and helpful. Don’t shame them even more in your attempt to destroy them.
Rather, help them grow up. Treat the children inside of you better than you were treated.
In return, you will co-create a mature ego; one that helps you move with grace and power, always and forever affirming confidence in your worth. | https://medium.com/just-jordin/ego-self-discovery-oracle-card-d2ec2389eda1 | ['Jordin James'] | 2020-11-23 19:45:37.012000+00:00 | ['Mental Health', 'Psychology', 'Spirituality', 'Self', 'Inspiration'] |
Take Your Histograms to the Next Level Using Matplotlib | Step 3: Emphasize Information
Emphasizing information doesn't just mean increasing font sizes, but also, for example, choosing a color scheme that is beneficial to the message you are trying to convey with your visualization. Since this step will take much more code, I am going to show code snippets separately.
First, we need a different set of colors. The orange/blue combination in the previous plots just doesn’t look good. An easy way to change your plots layout is to change the matplotlib style sheet. Personally, I love the “bmh” style sheet. However, this style sheet adds a grid, which would be distracting in our plot. Let’s change the style sheet to “bmh” and remove the grid it produces.
plt.style.use("bmh")
# Later in the code
ax.grid(False)
Another aesthetic improvement would be to reduce the histogram opacity. This will make the KDE more dominant which will give the reader an overall smoother impression.
avocado.plot(kind = "hist", density = True, alpha = 0.65, bins = 15)
To make the title stand out more, we can increase its font size. The “pad” argument will allow us to add an offset, too.
ax.set_title("Avocado Prices in U.S. Markets", size = 17, pad = 10)
During this step, I also wondered whether the y label "Frequency" was necessary. After testing both variants, I found having a y label more confusing than helpful. If no other information is given, I think we intuitively read the y axis as "Frequency".
ax.set_ylabel("")
If we applied all of the above, we would get this plot:
Plot with emphasized information (1/3)
At this point, the visualization is ready for your presentation or report. However, there is one more thing we could do to make the plot more interpretable. In principle, histograms hold all the information we need to display percentiles. Unfortunately, it is impossible to read the information directly from the plot. What if we want to show our boss what the 75th percentile of avocado prices is while keeping all the information about the distribution? We could compute some percentiles and display them in the plot as vertical lines. This is similar to what a boxplot would do, but actually integrated into the histogram. Let’s try this!
First, we want to plot the vertical lines. For this, we’ll calculate the 5th, 25th, 50th, 75th, and 95th percentiles of the distribution. One way to do it would be to make every vertical line a bit longer than the previous one in a kind of stairwise motion. We can also give the inner lines a higher opacity than the outer lines.
To do this, we’ll store the quantiles, line opacities and line lengths in a list of lists. This will allow us to loop through this list and plot all lines automatically.
Plot with emphasized information (2/3)
Now, we need to add labels. Just like before, we can use different opacities and in this case font sizes to reflect the density of the distribution. Each text should have a little offset compared to the percentile lines for better readability.
Plot with emphasized information (3/3)
We can now use this histogram to make business decisions on at which price to sell our avocados. Maybe our avocados are a bit better than the average avocado and our company is rather well-known for their avocados. Maybe we should charge the price that is at the 75th percentile of the distribution (around 1.65$)? This histogram is a combination of histogram and boxplot, in a way.
This is the entire code for the final histogram: | https://towardsdatascience.com/take-your-histograms-to-the-next-level-using-matplotlib-5f093ad7b9d3 | ['Max Hilsdorf'] | 2020-05-28 15:38:43.422000+00:00 | ['Python', 'Programming', 'Data Visualization', 'Data Science', 'Data Analysis'] |
Let’s Take-Off Together! | Let’s Take-Off Together!
By Emma Duncan, FAA Communications
This November’s scheduled SpaceX and NASA Falcon 9 Crew-1 launch marks the first step toward a reinvigorated mission of space exploration. The FAA’s role in this mission is one that ensures safety and encourages innovation through its licensing processes.
This year alone, the FAA licensed over 30 spacecraft launches and reentries. Even more exciting, the Falcon 9 Crew-1 mission marks the first time the FAA has licensed a crewed launch to the International Space Station — ever. None of this would be possible if the agency did not have a diverse, creative and motivated workforce to oversee and certify these operations.
Liftoff of SpaceX’s CRS-17 Dragon Cargo Craft on May 4, 2020. Credit: NASA
The U.S. aerospace industry wants to get back out there and the FAA’s role in commercial space launches is vital for that to happen. The FAA licenses all space launch and reentry operations through a thorough application process. The industry partner that wants to conduct a launch must submit a request complete with the logistical plans for launch and reentry operations to the FAA. The plan is vetted by FAA engineers and we work together with the partner to ensure compliance and safety requirements are met. On the day of the operation, FAA officials are at the launch pad to ensure continued safety and the agency places a Temporary Flight Restriction (TFR) over the planned trajectory of the spacecraft.
Since its conception, the aviation and aerospace industry has been one of extraordinary innovation. This innovation comes from diverse thought, advancing technology and creativity. At the FAA, innovation is part of our mission — it’s that important. That’s just one reason why we invest resources in fostering the aviation workforce of the future: young people.
Now more than ever, the future of aviation relies on the next generation. It’s projected that the aviation and aerospace community will need more than 2 million more employees in the next twenty years than we have today. The world of aviation is rapidly developing with the integration of drones into the airspace, supersonic travel and commercial space.
The International Space Station 2011 Credit: NASA
There are a plethora of space related career and internship opportunities available at the agency. The FAA Office of Commercial Space hires engineers to work through the licensing processes of launch and reentry operations to ensure continued safety in our airspace. Our engineers also oversee operations at the launch and reentry sites. Everyday our engineers work in unique high performance positions with industry organizations and our partners at NASA. They conduct safety evaluations and utilize leading-edge technology and systems.
The partnership between the FAA, NASA and industry professionals fosters growth. At the agency, we are encouraged by the nation’s desire to explore more types of operations, develop better services and foster innovation all while maintaining the safety of the crew and the public. Launching a rocket is no easy feat; there are hundreds of professionals involved. We want to encourage young people with a passion for aviation and aerospace to look into the opportunities provided for them at the FAA. There are more facets of expertise within this agency than many people realize and developing a stronger, more diverse workforce will certainly help maintain the U.S.’s leading role in the aviation community world-wide.
If this launch has you feeling inspired, it’s time to tap into the aviation and aerospace community! We encourage you to explore STEM education resources and careers at the FAA. There are hundreds of opportunities for learning, growing and working within the aviation and aerospace community.
Let’s learn, work, create and take-off together. | https://medium.com/faa/stem-blog-lets-take-off-together-2e99ab450bf4 | ['Federal Aviation Administration'] | 2020-11-18 20:05:40.569000+00:00 | ['Spacex', 'Space', 'NASA', 'STEM', 'Engineering'] |
Hierarchical Time Series Forecast for Apparel Industry — Doppler Effect | The below chart shows an animation of the space dimension and shows how sales are distributed mostly across Province2.
Sales and Quantity across provinces over the years
The sales across the cities are spread as follows:
Bar chart showing sales spread across cities on the dashboard
Sales distribution across the product dimension is as shown:
Sales distribution across Product Dimension
4. Model Training and insights
Data preparation for training
a) We removed 667 store class combinations as these did not have more than 12 weeks of data from the year 2012 to 2016.
b) We applied logarithmic transformations on training data to re-scale large variations in sales. This data preparation technique did not give us good evaluation metrics as compared to the model trained without applying the transformation.
c) We saved the output variables for combinations with $0 sales as None so that we do not lose these combination labels during prediction in our bottom-up results.
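The transformation from step (b) can be sketched with the `log1p`/`expm1` pair, which maps $0 weeks to 0 and inverts exactly; the toy values below are illustrative, not from the project's data:

```python
import numpy as np
import pandas as pd

# Hypothetical weekly sales for one store-class combination,
# including a $0 week and a holiday spike.
sales = pd.Series([0.0, 120.0, 95.0, 4300.0, 150.0], name="weekly_sales")

# log1p compresses the large variations so the model trains on a
# smaller dynamic range (and maps the $0 week to exactly 0).
log_sales = np.log1p(sales)

# After forecasting on the log scale, expm1 inverts the transform
# so predictions come back in the original sales units.
recovered = np.expm1(log_sales)
```

In the project this preparation step was ultimately dropped because it hurt the evaluation metrics, but the round trip itself is lossless.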
We applied models ranging from simple ARIMA to more complex approaches like Auto-ARIMA, Prophet, and recurrent neural networks.
We implemented a 20-layer LSTM network in Keras and trained it for 500 epochs. Its performance was affected by the presence of dying classes.
Auto-ARIMA is a Python implementation of R's auto.arima package. It worked well only for certain combinations because its performance was significantly affected by outliers.
Prophet is an additive time series modeling package from Facebook. It is robust to outliers and missing data, and it works best with time series that have strong seasonal effects, several seasons of historical data, and holiday effects. Another benefit of Prophet is that it is fast and tuneable, and it provides human-interpretable parameters for improving the forecast by adding domain knowledge.
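As a sketch, each series handed to Prophet first has to be reshaped into its required two-column schema (`ds` for the timestamp, `y` for the value). The `week`/`sales` column names here are assumptions, and the fitting calls appear only in comments since they need the `prophet` package installed:

```python
import pandas as pd

# One store-class weekly series (toy values).
raw = pd.DataFrame({
    "week": pd.date_range("2016-01-03", periods=4, freq="W"),
    "sales": [120.0, 95.0, 143.0, 88.5],
})

# Prophet's required schema: a 'ds' datetime column and a 'y' value column.
train = raw.rename(columns={"week": "ds", "sales": "y"})

# With the package installed, one fit/forecast cycle looks roughly like:
#   from prophet import Prophet      # 'fbprophet' on older installs
#   m = Prophet(weekly_seasonality=True)
#   m.fit(train)
#   future = m.make_future_dataframe(periods=52, freq="W")
#   forecast = m.predict(future)[["ds", "yhat"]]
```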
We stored the models and forecasts as pickle files so that we can reuse them again for visualization, calculating bottom-up aggregations, and evaluation metrics.
Methodology: Grouped Timeseries
There are three main approaches to hierarchical time series modeling: top-down, middle-out, and bottom-up [1]. The reason we have chosen bottom-up is that top-down and middle-out disaggregate a higher time series into its components, and the proportions of disaggregation depend on domain knowledge and data distributions. It is appealing to be able to both aggregate and disaggregate consistently; this is an advanced approach used in probabilistic modeling [2], which we can tackle after having explored the bottom-up approach.
Two sample hierarchies of same time series
The way we approached this problem is through the use of the aggregation matrix S. At the top of the hierarchy is the Total, or most aggregate, level of data. The t-th observation of the Total series is denoted by y_t for t = 1, …, T. The Total is disaggregated into two series (A and B) at level 1. Separately, the same Total can be disaggregated along a different dimension (X, Y), producing a system of equations in terms of different leaf nodes (y_{X,t}, y_{Y,t}). Each can be further disaggregated into their components (y_{AX,t}, y_{AY,t}, y_{BX,t}, y_{BY,t}).
Aggregation matrix of each hierarchy
Both Totals are the same series, but there is more than one disaggregation. To combine them into a grouped hierarchy, we use the observation that further disaggregation of either hierarchy produces the same leaves, so any other combination of hierarchical levels can be represented in terms of these leaves.
Sample hierarchy and grouped aggregation matrix
For our data set, the leaves (b_t) are store-class combinations; the number of valid combinations between class and store in our hierarchy is 2217.
In our hierarchy, the cross combinations are as follows.
The count of individual nodes is 164. So the total number of terms is 5641.
The aggregation matrix thus has a dimension of 5641 x 2217 (plus one more row for the total sales).
Through the matrix multiplication y_t = S · b_t, we can obtain the predicted sales across the remaining 3424 levels.
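For the toy grouped hierarchy in the figures, S and the product y_t = S · b_t can be written out explicitly. The project's real matrix is 5641 × 2217 and kept as a sparse boolean matrix, but a dense NumPy sketch shows the same mechanics:

```python
import numpy as np

# Leaves (bottom level) for one week t: b_t = [AX, AY, BX, BY].
b_t = np.array([10.0, 20.0, 30.0, 40.0])

# Aggregation matrix S for the toy grouped hierarchy:
# rows = [Total, A, B, X, Y, AX, AY, BX, BY], columns = leaves.
S = np.array([
    [1, 1, 1, 1],   # Total = AX + AY + BX + BY
    [1, 1, 0, 0],   # A     = AX + AY
    [0, 0, 1, 1],   # B     = BX + BY
    [1, 0, 1, 0],   # X     = AX + BX
    [0, 1, 0, 1],   # Y     = AY + BY
    [1, 0, 0, 0],   # the leaves reproduce themselves
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])

# One matrix multiplication yields every level of the hierarchy at once.
y_t = S @ b_t
```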
Evaluation
From 6 years of sales, the last year is held out for validation purposes. Unseen forecasts are restricted to one year as well due to the apparel industry's dynamic nature. The metrics used are mean absolute error (MAE) and mean absolute percentage error (MAPE). As we noticed (and somewhat expected) during EDA, there is the possibility of outlier sales; for example, there is a substantial increase in sales during Thanksgiving/Black Friday week. These outliers mean that mean squared error would not be a useful metric, as it is not robust to outliers.
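A minimal sketch of the two metrics on a toy holdout; note that MAPE is undefined whenever an actual value is zero, so $0-sales weeks need special handling:

```python
import numpy as np

actual = np.array([100.0, 250.0, 80.0, 4000.0])      # toy holdout weeks,
forecast = np.array([110.0, 230.0, 100.0, 3000.0])   # incl. one outlier week

# MAE: average absolute error, in the original sales units.
mae = np.mean(np.abs(actual - forecast))

# MAPE: average absolute error relative to the actuals, in percent.
mape = np.mean(np.abs((actual - forecast) / actual)) * 100

# Unlike mean squared error, neither metric lets the single large
# outlier week dominate the score quadratically.
```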
We perform two evaluations:
1. Comparison of various time series forecast models (Auto-ARIMA and Prophet) at bottom leaf nodes.
Evaluation metrics comparison for Auto-ARIMA and Prophet in the bottom-up approach
Prophet provided us with better predictions in fitting the validation data at the leaf store-class-weekly combinations. From the above figure, we find that Prophet gives a lower MAPE than Auto-ARIMA at Province1, where province sales are an aggregation of leaf-level nodes.
2. Comparison of prediction from the same model applied individually at various levels and with bottom-up grouped time series approach.
Comparison of bottom-up predictions (yellow) with level predicted sales for province2
As we can see, the bottom-up grouped time series does a splendid job predicting sales for Province2-weekly, where most of the stores are concentrated. It also captures high and low sales and is thus optimal for this level. The lowest recorded MAPE was achieved by the weekly bottom-up sales forecast for Province2:
Comparison of node level and bottom level prediction for Province2 weekly sales
Data Product
Our final product is a user interface where retailers can explore the data and forecasts across hierarchies and visualize the spread of sales.
EDA Across Different Dimensions
They can also see the predicted sales for the years 2018 to 2019 on the forecast tab. We created sunburst charts, as they are ideal for displaying hierarchically grouped data. Each level of the hierarchy is represented by one ring or circle, with the innermost circle as the top of the hierarchy, which in our case is the province for the space dimension and the category for the product dimension.
Forecast Sales Across Different Dimensions
Lessons Learnt
We learned concepts revolving around time series forecasting for the retail domain and how to apply it in a hierarchy across various aggregation levels and multiple dimensions.
Experimental Learnings
1. AWS: We experimented with AWS Lambda and Docker to deploy the trained Prophet model on AWS Lambda. We also tried training the models on AWS EMR by converting our implementation to PySpark, but this failed because of compatibility issues between the cluster and PyArrow. We explored AWS AutoML forecast algorithms like DeepAR.

2. Time Series Models: Our data had huge variations and non-linear trends, and different models are better for different predictions at various aggregation levels. We learned about multiple time series models, their mathematics, and the parameters involved for better model tuning. Prophet has a lot of advantages over other models: it automatically adjusts to yearly and monthly seasonality and trends, handles missing values, and takes outliers into account.

3. Hierarchical Probabilistic Modeling: We learned about probabilistic forecasts that are "aggregate coherent," i.e., the forecast distribution of each aggregate series is equal to the convolution of the forecast distributions of the corresponding disaggregate series. Such forecasts naturally satisfy the aggregation constraints of the hierarchy. This method allows different types of distributions and accounts for dependencies to enable the computation of the predictive distribution of the aggregates. It proceeds by independently generating a density forecast for each series in the hierarchy, then applying a state-of-the-art hierarchical forecast combining method to produce revised coherent mean forecasts.
Technology Learnings
1. Plotly/Flask: This was something new for all of us. Integrating forecasting results into an end-to-end web dashboard was challenging and fun at the same time. We were successful in assembling a complete data product that a potential analyst can use to make optimized buying or selling decisions. We learned how powerfully we could communicate the results of a data science project by creating an online dashboard using Plotly Dash. Furthermore, we learned how to utilize different visualization tools such as the sunburst chart, how to embed the forecast results into the webpage, and how to improve the web UI by using various components of Plotly Dash.

2. Matrix Algebra: We learned how to extend hierarchical time series into the grouped time series approach through matrix algebra. We improved execution time and memory in constructing the aggregation matrix by using sparse matrices from the SciPy module. We saved our results as boolean values instead of integers for all 5641 possible combinations, providing further memory savings.

3. Multiprocessing: We had to combat huge computation times for getting results from 2217 combinations with different time series models. One iteration of Prophet over these combinations took 3 hours, which we reduced to 20 minutes by deploying the setup environment on powerful lab servers and leveraging the Python multiprocessing module. Its Pool object offers a convenient means of parallelizing the execution of a function across multiple input values, distributing the input data across processes (data parallelism).
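The Pool pattern described above can be sketched as follows; `fit_one` is a hypothetical stand-in for fitting one model per store-class combination, so the whole example stays runnable without the real training code:

```python
from multiprocessing import Pool

def fit_one(combination):
    """Stand-in for training one model on a single store-class series;
    returns the combination plus a fake deterministic 'forecast'."""
    store, klass = combination
    return (store, klass, sum(ord(c) for c in store + klass) % 100)

if __name__ == "__main__":
    combinations = [("Store1", "ClassA"), ("Store1", "ClassB"),
                    ("Store2", "ClassA"), ("Store2", "ClassB")]
    # Pool.map distributes the independent fits across worker processes
    # (data parallelism) and preserves the input order in the results.
    with Pool(processes=4) as pool:
        results = pool.map(fit_one, combinations)
```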
Prediction Learnings
Grouped time series is a mathematically consistent and sound modeling approach; however, it is prone to propagating error across deep and multidimensional hierarchies. The bottom-up approach aggregates the leaves' errors into the upper levels, and hence it does not give the best predictions for the non-leaf nodes. There is no single model that provides the best forecasts; different models are better for different dimensions and aggregation levels. We discovered that Auto-ARIMA performed well at province-department-monthly but gave drastically worse results at the store-class-week level.
Future Scope
1. Enable dynamic selection of models at various aggregation levels. Currently, we have hard-coded the modeling aspect, which we can make automatic in the future with more data and experimentation.

2. Every time series forecast has an uncertainty measure associated with it. Probabilistic modeling captures this uncertainty and gives confidence in how accurate the forecast is. Prophet does capture this uncertainty but has to be hyper-tuned to select the best parameter. Using hierarchical probabilistic modeling, we can select a better measure of this parameter.

3. The inner workings of the retail domain can help us preprocess the data better and engineer the time series models accordingly.

4. Explore AWS Forecast, a fully managed service for time series forecasting with high accuracy. It combines different variables, including historical data, and uses an AutoML approach that takes care of the machine learning aspect.

5. We suspect that recurrent neural networks, such as LSTMs, should perform better. We would like to revisit this avenue by preprocessing the series before training, hyper-tuning, and extending the depth of the model.

6. Explain which products are causing an increase in sales so that retailers can increase the supply of those and reduce the supply of those that are not performing well.
Summary
Hierarchical time series prediction has a lot of uncertainties. Trends and seasonality patterns vary at different levels of the hierarchy, so there is no single model that can work well at all levels. Since sales are the lifeblood of businesses, correct predictions using appropriate models become important. We have shown this using the Auto-ARIMA, LSTM, and Prophet models, each of which worked better only at certain levels. Prophet performed best, as it can handle seasonality and trends with minimal hyper-parameter tuning.
We have developed and compared two approaches to predicting sales at different aggregation levels: node-level prediction and bottom-up prediction. Whereas node-level prediction might lead to heavy computation and memory issues, bottom-up can propagate the leaves' errors to the upper levels. Based on evaluation metrics and forecast plots, we can easily compare and choose which of these approaches gives a better forecast for our final implementation.
Thanks for reading our post, and we hope you enjoyed a learning experience. Here is a three-minute overview of our project.
Acknowledgments
Special thanks to our Professors Jiannan Wang and Steven Bergner, and our industry mentor Hassan Saidinejad for the idea of this project and guiding us.
References
[1] R. Hyndman, G. Athanasopoulos, Forecasting: Principles & Practice, 2nd ed. OTexts, 2018. [online]. Available: https://otexts.com/fpp2/
[2] S. B. Taieb, J. W. Taylor, R. J. Hyndman, Hierarchical probabilistic forecasting of electricity demand with smart meter data, [online] Available: https://robjhyndman.com/papers/HPFelectricity.pdf.
[3] https://medium.com/spikelab/forecasting-multiples-time-series-using-prophet-in-parallel-2515abd1a245
[4] https://medium.com/@josemarcialportilla/using-python-and-auto-arima-to-forecast-seasonal-time-series-90877adff03c
[5] https://www.scipy.org/
[6] https://scikit-learn.org/stable/
[7] https://keras.io/
[8] https://facebook.github.io/prophet/
[9] https://docs.aws.amazon.com/
[10] https://dash.plotly.com/

Source: Ria Gupta, https://medium.com/sfu-cspmp/doppler-effect-hierarchical-time-series-forecast-for-apparel-industry-45a55bb23be6 (2020-04-20). Tags: Data Science, Time Series Forecasting, Machine Learning, Data Visualization, Deep Learning.
André Christ, LeanIX: “Like Google Maps for IT in a company” | André Christ has built a company that could be best described as an underdog. Even though LeanIX isn’t very present in the news, the company is one of the youngest German success stories. In June, they closed a Series D funding and their client list is a potpourri of international top brands like Adidas, Atlassian, or Kühne + Nagel. In our podcast REWRITE TECH, we talk to André Christ about enterprise architecture, product development, and why Bonn is the perfect place for a start-up that is looking for talent.
André Christ, LeanIX: Mapping out the IT Infrastructure
The technological infrastructure of big companies is complex, and responsibility lies in many different departments and teams. At a certain point of scale, a company runs hundreds of services and software products. And that’s where LeanIX comes in.
“LeanIX is like Google Maps for IT in a company. It actually helps organizations to map out what software they have, for what processes that software is used, which organization is using it and which business capabilities, that means which functionality, is in that software,”
explains André cleverly.
The advantages are obvious: less time spent on reporting, faster onboarding for new colleagues, and cost savings due to the elimination of redundancies. Since they started in 2012, LeanIX has won several clients like Adidas, Atlassian, and Bosch. Recently they closed a Series D funding round led by Goldman Sachs, in the midst of a global pandemic.
From bootstrapping to venture capital
LeanIX has been backed by venture capital since 2015, when Capnamic Ventures and Iris Capital invested. In 2017, DTCP, the investment group of Deutsche Telekom, followed. However, for the first three years after the founding, André and his co-founder bootstrapped the company.
Only when they got their first investment in 2015, they switched to “growth mode” as André calls it. But with more money and employees coming in, the right mindset stays important for André:
“We try to be as lean and quick in decisions making as possible.”
André Christ, CEO & Co-Founder LeanIX
Even though LeanIX is a classic technology scale-up, their product doesn’t rely on tech alone.
“Not everything can be automated”
as André states in the conversation. That’s why the software obtains data from various sources.
“LeanIX is a hybrid of people putting their knowledge in and leveraging APIs and other systems to get data.”
Besides, André and his team provide additional content, like information about software lifecycles, to be a one-stop shop for everything related to enterprise architecture.
Hidden champion based in Bonn
Despite its huge success, LeanIX is perhaps not as well-known as other German start-ups. One reason could be the location. Instead of Berlin or Munich, LeanIX’s headquarters are located in Bonn.
Office of LeanIX
And, as André reveals in our discussion, he struggled with Bonn during the first years:
“I was having this debate with myself: Was it the right idea to found a business in a region that is not really known for building a fast-growing company?”
Now, with a success record at their back, André has come to a conclusion:
“I’m over this question now. I am fully convinced that Bonn is a great place for us.”
They have managed to build a brand around LeanIX and are now able to attract talent from the whole region, even competing with big players like Telekom or DHL, which are also headquartered in Bonn.

Source: Michael Mirwald, https://medium.com/rewrite-tech/andr%C3%A9-christ-lean-ix-not-everything-is-automatable-df0c925077c6 (2020-11-26). Tags: Enterprise Architecture, Startup, Podcast, Product Development, Digital Transformation.
Open Skies | Open Skies
Photo by Sebastian Molina fotografía on Unsplash
we see you looking at us
looking at you,
toy breaker
we have all been to your
house that (one) time
that one birthday party
we went as a team —
every kid a guard
for the other
no one opened their
presents at the party
anymore
except you
which expectant, hopeful
child gave the best gift
you peeled the tape politely
at first, then greedily
clawed at the gifts
hiding a paper cut
you tossed each toy
into the pile
and sugared-up
refused to play
with any of them
until your guests all
left and your siblings
wanted a tiny glimpse
at your treasure
and what they got
was a display of broken
toys for no one
no one
no one
you never ever learned
to share and so here we
still are
people voted
for the tantrum to
continue
imagine that
people still love
the spoiled kid best

Source: Samantha Lazar, https://medium.com/resistance-poetry/open-skies-b0126697901a (2020-11-26). Tags: International Treaties, Resistance Poetry, News, Poetry, Politics.
Ghostrunner Comes to Switch | Ghostrunner Comes to Switch
Is Switch the best home for cyberpunk ninja parkour? Our full review.
The screen turns bright red as the katana drops from your fingers. Sweaty palms grip the controller for one more go — and you’re in. Implants let you fly past the first guard before they know you’re there, leaving only a stain on the railing behind you. Their guns pulse with the music, but you slide through a barrage of shots to get personal. The sword, even as you put it to work glows purple with the neon lights of the city. Short screams give way to fountains of blood pooling on the slick metal walkways — but you’re already gone.
Lights, blood, and speed punctuate Ghostrunner as artificial legs take you flying through each of its tech-coated levels. The whole game is built on the principle of movement. You have to be fast but precise, brutal yet efficient if you want to save Dharma Tower from its tyrant. The concept is exciting and trendy. It capitalizes on the success of other difficult die-and-try-again formulas to promise a game that sounds too good to be true.
Ghostrunner strives to be as beautiful as Cyberpunk 2077, as violent and difficult as Hotline Miami, with the dedicated platforming of Mirror’s Edge. It’s nothing if not ambitious, but trying to fulfill the roles of so many other games leaves Ghostrunner without its own identity.
Source: One More Level/Slipgate Ironworks.
During my time with Ghostrunner, I fluctuated between awe and frustration. Enjoyment and confusion. From early on in the game victory feels satisfying. Slicing through Dharma Tower guards feels good in that Dark Souls kill sound kind of way, and finally FINALLY tearing through the last enemy in a level where you’ve died fifty times is electrifying.
However, on the Switch getting to those satisfying kills or trying to execute high-flying acrobatics is much harder than it needs to be.
When it comes to my taste in games, I can be pretty sadistic. I adore banging my head against the same boss over and over again until I finally defeat it. Ghostrunner nails that rush, but instead of overcoming an intelligent enemy I’m trying to beat floaty controls and chugging frame rates. Several times the game had such a difficult time running at my TV’s resolution that I had to switch to handheld mode for a chance at beating certain encounters. I played on an original Switch, but I haven’t heard any better reports coming from the revised console. When the game did run well, I still felt like I had to get lucky to not be randomly killed by a bullet in the back because of the imprecise controls.
The Switch version also has more than a few hiccups in the software. Some of the options don’t work, the buttons the game tells you to use can be wrong, and in one menu you have to use the joystick to control a mouse cursor that lets you use the menu. These small bugs littered my game but were usually nuisances rather than experience defining. I also encountered a few that forced me to reboot. The most prominent was when the grappling hook point to enter an encounter no longer worked, causing me to hilariously jump to my death over and over again while I mashed the grapple button repeatedly in desperation.
For all that, there are moments where the game succeeds. When you unlock the grappling hook, the platforming suddenly becomes much easier and more exciting. In fact, every time I got an upgrade it felt flavorful and like an exciting addition to the gameplay. Using them is a different story, but being a cybernetic ninja getting upgrades that let you slow time or teleport is gonna be cool no matter what. One of my favorite things is how those upgrades are implemented. A little like NieR: Automata, you can plug upgrades in and remove them on the fly to change up your build. The plugging in is represented by a "circuit board" (see: a Tetris board) that you gradually unlock more sections of. Each upgrade is a Tetris piece that you can mix and match to try out any build. It's a fun take on a minigame that I really liked.
Using those upgrades is often a tedious exercise on Switch, but when it works it really works. Lining up a long slash to fly through three different enemies instantly killing them is awesome. And slowing time for everyone else while you still move at ninja-speed made me feel powerful. I wish they leaned into that power fantasy more, but it’s a hard line to walk when you also want the game to feel difficult.
Source: Rock, Paper, Shotgun.
I really want to love Ghostrunner, it’s an ambitious project that was tailor-made for speedrunners. It feels like the developers wanted to take their favorite experiences of the last ten years and mash them together with a slick cyberpunk paint job. Most of the things holding the game back aren’t big on their own, but my experience was filled with little gripes. All those issues came together to sour an exciting project. Sadly for me, it was less memorable than the composite experiences that it pulls from.
If you’ve been excited about digging into Ghostrunner. I recommend that you try it for a different system, PC if possible. If you have a hankering for cyberpunk action on the Switch, I’d recommend a bite-sized version that is much more suited for the console: Akane. | https://medium.com/super-jump/ghostrunner-comes-to-switch-11f0dcd65fac | ['Austin Horne'] | 2020-12-08 00:53:48.326000+00:00 | ['Gaming', 'Review', 'Features', 'Nintendo', 'Games'] |
Exactly how Dangerous are Prisons? Read and Decide for Yourself | I was out in the Rec yard at FCI Miami, a Low security Federal Prison, just walking in circles. Yes, everything in prison is circles and lines, circles and lines. Lines for medical, chow hall, the phone, laundry, commissary, you name it. It grates on you at times, until you remember that your sentence is passing regardless and you just sort of let it go, out of necessity. Out in the rec yard, it’s circles, typically on the track. The only way to get any kind of cardio. On this night, though, I was doing smaller ones, in the volleyball court which, incredibly enough, had a base of sand. I had my shoes off, enjoying the sensation of powdery sand squishing through my toes, listening to a local jazz station on my radio. With my eyes closed, I could almost imagine I was back in Costa Rica.
Not too far off, a basketball game was in progress. The court was old and ragged, with cracked asphalt, but served its purpose as two teams battled back and forth in front of crowded stands. It was a Rec Department organized league game and I could make out the score on the cheap electric scoreboard in the nearby distance. One team was comprised of black players and the other Puerto Ricans. Things weren’t supposed to be like that. They were supposed to mix things up to avoid problems, but officers get lazy and let inmates do their own thing. At least for something minor like that. It’s no big deal, things work out just fine, well, normally that is.
One black guy got fouled hard. It had been happening the entire game. This time he took exception and pushed back, with the Puerto Rican falling to the ground. His teammate came in and started swinging but got socked in the mouth and hit the ground as well, as the entire stands erupted to join the melee. It was the blacks against the Puerto Ricans, with the blacks outnumbered two-to-one.
I was watching from a relatively safe distance, and froze in my tracks entranced by the scene, as dozens of people ran by to jump into the fray. Prison protocol mandated that Rec officers should have been outside supervising, which probably would have kept everything from escalating. They were, however, hanging out in the Rec building mindlessly surfing the internet instead, and would later claim it was an unsanctioned game.
“Get the fuck out, everyone out,” they screamed, once they realized what was happening. All inmates in the building were kicked out into the turmoil as the officers barricaded themselves inside. In the meanwhile, a full-on race riot was breaking out, enveloping half the compound.
Makeshift pipes and shivs came from out of nowhere as inmates pummeled each other left and right. I started backing off, realizing this probably wasn’t the safest place, as I saw one person grab the scoring monitor, bashing another guy upside the head. It was five-on-one a few feet away, with someone on the ground beaten unrecognizable while crumbling into a fetal position. The odds were better in other spots, but not all that much. In fact, some were giving almost as good as they got, until they were surrounded by packs howling like hyenas, popping in for little cheap shots. Lingering tensions now unleashed, as all hell broke loose.
About 10 minutes in, officers started rushing the scene en masse and that scared off some inmates, but there were still too many battles for the guards’ liking. They were woefully outnumbered and didn’t want to get beaten themselves in the process. Lord knows, many inmates would have loved the opportunity. Things got so bad that the prison actually called 911 asking local police for back-up, which is typically taboo at a Federal facility.
“Get on the ground, get on the ground,” officers screamed, as more of them swarmed on in. At this point, the numbers were shifting in their favor, so most inmates complied, not wanting to get in trouble. I took the opportunity for full retreat back to my Unit, after all, I understood the potential consequences all too well. It was best to be as far away from this debacle as possible.
It took another half hour, but the officers regained control, locking up a few dozen inmates in handcuffs and roughly tending to the wounded. A few guys were still unconscious, and one had a knot on the back of his head the size of a softball. All told, almost a dozen inmates were sent to the hospital, some with major injuries, but all amazingly survived.
An hour later, the entire compound was paraded back out to the Rec yard, it was time for discipline. Standing for an hour, at full attention, while the Captain dressed us down, screaming and taunting, threatening to ship the entire lot of us. That hardly sounded fair, since only a couple hundred, or so, were involved at the most, but prisons are like that. Group punishment for all, for the actions of a few. The officers then inspected every one of us for injuries to see who else to charge. There were, of course, too many people for just the SHU so they’d be shipping people out in the morning, sending them to the SHU at the downtown Miami detention center as well.
Total lockdown. Compound wide. Ross, our third bunkie Eric and I confined to our cell. Food was delivered three times a day, mostly pre-packaged baloney sandwiches and cookies well beyond the expiration date. At least for the first few weeks, until they started serving food with no expiration dates at all. Only three showers a week and no access to phone or email. It was like being in the SHU all over again. When would it end, who knows? Rumors were floating around that the blacks were planning revenge and one guy, a former NASA rocket scientist, claimed to have heard there was a gun buried out in front of G Unit. It turned out to be bullshit but left us locked up a couple extra weeks as guards tore bushes and shrubs to shreds, turning a once beautiful garden into a graveyard. The rocket scientist, himself, got sent to the SHU and then shipped as his bad info turned up nothing. Stupid shmuck.
Lawrence Hartman is the author of (i) GUILTY TILL PROVEN INNOCENT: A Shocking Inside View Into America's Failing Justice System, (ii) BLIND GREED: From Ivy League to International Fugitive, and (iii) BLIND JUSTICE: The Consequences of Greed. He has also been featured in articles on Forbes.com, including "The Life of a White-Collar Fugitive Not All That It's Cracked Up To Be" and "A Voice From Prison Weighs In On Drug Addiction And A Solution."

Source: Failing Justice, https://medium.com/@justicefailing/exactly-how-dangerous-are-prisons-read-and-decide-for-yourself-23aef7167e7e (2020-10-08). Tags: Arrest, Injustice, Prison, Criminal Justice Reform, Prison Reform.
The Girl in the Mirror by Jewel Enrile | Photo credit: Collaboration of Kensuke Koike aka 小池健輔 (Japanese, b. 1980, Nagoya, Japan) & Thomas Sauvin aka Beijing Silvermine aka 北京银矿 (French, b. 1983, Paris, France, based Beijing, China) — No More No Less Photo Collages
The priest’s sermon came out of his mouth in a monotonous drone, casting a spell of obligatory stillness all across the church. For this particular Sunday routine, Adela is straight-backed, perfumed, and stuffed into a lilac dress with roses embroidered at the hem. She and her mother had left just before the sun had set, crossing the plaza in wobbly heels. They had taken their seats in front. She is awash with familiarity, her body going through the motions, knowing when to stand up for communion, when to move or open her mouth, where to hail a tricycle right after mass.
The routine is not complete without the eyes on her neck. Her body tilts to turn as an all too-familiar response. She knows where to look, turning her head slightly: three pews behind, and a couple of seats to her left. She catches a glimpse of a balding man with drooping, sleepy eyes a row behind her, his head nodding to the sermon blanketing the church. Behind him, a mother and her dark red lips is crooning to a baby clinging to her shoulder, with her arm clad with gold bangles holding him in place. Finally, behind her, there is a familiar face belonging to a boy, hair slicked back, catching her eyes, in a dark blue polo. Paolo. He was a couple of years above her in school, and even there, he paused to greet her in the hallways. He smiles now, abashed, and looks away.
She purses her lips; looks down. It was hard not to feel flattered. Hard not to enjoy the staring and the attention; the teasing she overheard. But what does he see, anyway? A stranger. A schoolmate. A tan girl, with long hair, pretty eyes — she’d heard him say that to a mutual friend once, after the sermon in the gardens by the fountain. A silver bracelet on her wrist. A floral dress every Sunday.
There is a sharp pinch on her thigh, and she jumps, hastily looking back up at the priest. Still, she catches a whiff of jasmine perfume and a view of cherry red lips turned down in a scowl on a woman’s face bearing likeness to hers. She bites the inside of her cheek, heart pounding in shame.
Her mother must see a younger her. That’s what everybody in the family says, even though she herself couldn’t see her face on her mother’s features, with all of her high cheekbones and arched brows. Her face was softer and rounder, but maybe that would be taken away by age.
Soft. Her mother could see her that way. A sensitive, dazed, impractical young girl, who stayed in the corner in any sort of gathering (instead of milling around and shaking everybody’s hands like her mother), and who liked to stay out on the balcony, the mosquitoes feasting on her legs, to stare out at a fogged-up view of the city.
“You better listen,” her mother mutters, lips barely moving.
She tries. She can’t. The priest seems to look at everyone in the pews, but she knows it can’t be so. If his eyes would rest upon her, what would he see? Nothing more than a teenage girl as part of the 6 PM session. Just another person in the crowd. Someone in a dress.
Time passes. An hour later, she follows the crowd out and excuses herself to the bathroom.
She follows the stone path leading to a white door. She opens the door to the smell of bleach. Her heels echo. She faces the mirror.
What does she see?
A girl. Long hair to her waist. A floral dress. A silver bracelet on her wrist. A daughter.
A girl in the church bathroom. Long hair she’s been itching to cut for months, except it seems such a waste. A floral dress she spilled scalding coffee on in her bedroom a week before, with the silver bracelet hiding the tiniest portion of the mark of the burn. A daughter, an only child, in fact, but not a prized one.
A girl. Someone with big dreams. Someone who desires too many things. Someone kind — yes, she could see that. She could admit to that. But what else? What else?
“I see you,” she says to the mirror. “Can you see me?”
The girl in the mirror nods. | https://medium.com/@hello.girlupzine/the-girl-in-the-mirror-by-jewel-enrile-c58d586bd8d9 | [] | 2020-01-14 12:39:19.582000+00:00 | ['Short Story', 'Philippines', 'Girl Up', 'Girls'] |
ISTQB Certification | ISTQB Certification
The Foundation Level syllabus forms the basis of the International Software Testing Qualifications Board (ISTQB®) Certified Tester Scheme.
ISTQB® Foundation Level is relevant across software delivery practices including Waterfall, Agile, DevOps and Continuous Delivery.
The 2018 Foundation Level qualification is suitable for anyone who needs to demonstrate practical knowledge of the fundamental concepts of software testing including people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers.
It is also appropriate for individuals who need a basic understanding of software testing including project managers, quality managers, software development managers, business analysts, IT directors and management consultants.
The new 2018 syllabus is recognised as a pre-requisite to other ISTQB® certifications where Foundation Level is required (note: all previous releases of Foundation Level, including the 2011 syllabus and “grandfathered” Foundation Level certifications, will remain valid).
Exam Structure
The Foundation Level exam comprises 40 multiple-choice questions, with a pass mark of 65%, to be completed within 60 minutes. Participants who take the exam in a language other than their native language receive an additional 25% time, for a total of 75 minutes.
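As a quick arithmetic check of the figures above, here is a small sketch (in Python, my choice; it is not part of the syllabus) that works out the minimum number of correct answers and the extended time:

```python
# Sanity-check the exam figures quoted above: 40 questions, 65% pass
# mark, 60 minutes base time, +25% extra time for non-native speakers.
import math

QUESTIONS = 40
PASS_MARK = 0.65
BASE_MINUTES = 60
EXTRA_TIME = 0.25

min_correct = math.ceil(QUESTIONS * PASS_MARK)             # 26 questions
extended_minutes = round(BASE_MINUTES * (1 + EXTRA_TIME))  # 75 minutes

print(f"Minimum correct answers to pass: {min_correct}")
print(f"Extended exam time: {extended_minutes} minutes")
```

So a candidate needs at least 26 of the 40 questions right to pass.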
Contact me for Certification help : +91–9030624001 (whatsapp) | https://medium.com/@divya_95774/istqb-certification-f1ec4e630485 | ['Divya S'] | 2020-02-20 07:33:00.740000+00:00 | ['Certification', 'Istqb', 'Tester', 'Foundation', 'Level'] |
Live Show Tokyo International Film Festival (2021) | Full Show | ❂ Artist Event : Tokyo International Film Festival
❂ Venue : Tokyo Midtown Hibiya, Tokyo, Japan
❂ Live Streaming Tokyo International Film Festival 2021
Conversation Series at Asia Lounge
The Japan Foundation Asia Center & Tokyo International Film Festival
Marking its second installment since 2020, this year’s Conversation Series will again be advised by the committee members led by filmmaker Kore-eda Hirokazu. Directors and actors from various countries and regions including Asia will gather at the Asia Lounge to engage in discussion with their Japanese counterparts.
This year’s theme will be “Crossing Borders”. Guests will share their thoughts and sentiments about film and filmmaking in terms of efforts and attempts to transcend borders. The festival will strive to invite as many international guests as possible to Japan so that they can engage in physical conversation and interaction at the Asia Lounge.
The sessions will be broadcast live from the festival venue in Tokyo Midtown Hibiya every day for eight days from October 31st to November 7th. Stay tuned! | https://medium.com/@b.i.m.sa.la.bi.mp.r.ok/live-show-tokyo-international-film-festival-2021-full-show-fea05d1b4aa9 | [] | 2021-10-30 14:03:46.256000+00:00 | ['Festivals', 'Film'] |
After the Athenian Democracy, was direct Democracy ever ruled as form of government? | [Disclaimer: This is a personal corner. Any views or opinions represented in this text are personal and belong solely to the writer owner and do not represent those of people, institutions or organizations that the owner may or may not be associated with in professional or personal capacity, unless explicitly stated. Any views or opinions are not intended to malign any religion, ethnic group, club, organization, company, or individual.]
We all have the feeling that there is a new open space to be occupied by humans in their short span of time. Technology, for good or for evil, is giving us new tools: tools that have given a few, starting from initial conditions not substantially different from ours, the opportunity to become unreasonably rich and, without special preparation except lots of money, to become the leaders of the new form of society that unfolds in front of us. Like a cosmic pendulum, the balance seems to tend toward dictatorship or toward democracy. In dictatorship there is no place for debate, so we are mostly interested in the possibility of Democracy. But has Democracy ever had a place in governance? We believe not, despite all the propaganda and mind manipulation meant to force us to believe otherwise.
«Athenian democracy developed around the 6th century BC in the Greek city-state (known as a polis) of Athens, comprising the city of Athens and the surrounding territory of Attica. Athenian democracy is often described as the first known democracy in the world. Other Greek cities set up democracies, most following the Athenian model, but none are as well documented as Athens’ democracy.» — [1]
The Athenian Democracy and the modern forms of Democracy, direct and representative, are first drafts of the idea that emerges from the observation of the universe and as it was understood by the great Italian philosopher Giordano Bruno: «There is no absolute up or down, as Aristotle taught; no absolute position in space; but the position of a body is relative to that of other bodies. Everywhere there is incessant relative change in position throughout the universe, and the observer is always at the center of things.»
This is the 19th-century monument to Giordano Bruno on the Campo de’ Fiori in Rome, the exact place where he was burned at the hands of the Inquisition.
To harness the full strength of our societies it is necessary to implement a citizen-centred form of governance. And we are at the verge of a transformation, driven by the advance of technology, that may allow this type of governance. The work of Nikolai Kondratyev (1892–1938), the Soviet economist and mathematician, gave us important clues with his contributions to the understanding of business cycles.
Kondratyev's K-cycles and the new phase of capitalism: In the 1930s, the Soviet Union commissioned the mathematician Nikolai Kondratieff to build a mathematical model proving that capitalism would fall and communism would survive the troubled course of history.
Kondratieff studied economic history in depth and came to the conclusion that economics is better explained by technological evolution than by class struggle, the Marxists' preferred explanation. He concluded that technological evolution is not linear, but evolves in cycles of 50 to 60 years, the famous patterns now called Kondratieff cycles, or K-waves, in homage to him. This conclusion was not to Stalin's satisfaction and, on September 17, 1938, Nikolay Kondratieff was sentenced to 'ten years in prison without the right to correspondence'. In other words: he was shot by a firing squad that same day.
K-waves have been studied and confirmed by a branch of mathematics called spectral analysis. There is controversy about the number of cycles, and when they are triggered, but Table 1 gives an interpretation of the cycles and the technology that activate them.
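To illustrate the kind of spectral analysis mentioned above, here is a sketch of my own (not taken from the K-wave literature) that recovers a long cycle from a synthetic annual series using NumPy's FFT; the 50-year period, noise level, and 200-year span are all assumptions invented for the demo:

```python
# Illustration only: recover a long "K-wave"-like cycle from noisy
# annual data with a discrete Fourier transform. The 50-year period,
# noise level, and 200-year span are invented for this demo.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(200)
series = np.sin(2 * np.pi * years / 50) + 0.3 * rng.standard_normal(200)

# Magnitude spectrum of the de-meaned series.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(len(series), d=1.0)  # cycles per year

# Skip the zero-frequency bin and pick the strongest component.
dominant = freqs[1:][np.argmax(spectrum[1:])]
print(f"Dominant period: {1 / dominant:.1f} years")  # 50.0 years
```

Real K-wave studies work on actual price and production series, of course; the point here is only that a periodicity of several decades shows up as a sharp low-frequency peak in the spectrum.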
Kondratyev's findings thus show that technology does not necessarily harmonize with dictators; quite the contrary, I think.
(Table 1 and figure: see Ref. [2])
AI, blockchain, deep learning, and automation accommodate the building-up of spontaneously established networks that are opposed to the linear structure of autocracies, and favour a society where each citizen can be a center of activity and contribution to a better society. This is the essence of democracy.
It is predictable that the agendas from the UN, the WEF, and BG, or any authoritarian, soul-impoverishing society, will face the counter-action of historical evolution, self-organized structures, and non-linearity. In fact, with the new era in technological evolution, the driving forces tend to be bottom-up, democratic leadership, and not the old top-down organization (preferred by the international-socialists and the Bilderberg group, as we surprisingly realized from their recent discourse), which is characterized by linear relationships, counterproductive nowadays and contrary to historical evolution. When Klaus Schwab and other leaders refer to the «small window» available to implement the new world order, they implicitly recognize this powerful ongoing process; that is why they conquered the support of a faction of the left wing to the cause of fascism.
The future is not adapted to top-down, authoritarian leadership, I believe, and I stand up for Democracy. Unless the few powerful rich, with the new sociological phenomenon of intellectual deviation, allow it [3]. In fact, «we see that any government — local, state, or national — wanting to embrace complexity and systems thinking must work towards:
«-Viewing themselves differently: Governments thinking in systems need to adopt a more humble mindset; one which recognises that government sits alongside other actors in the system, rather than above or at the centre. Governments must also invest in understanding the system — who is part of it, and how actors work together.
-Engaging differently: Governments need to build trust with other system actors, and work to address power imbalances by understanding the importance of different perspectives and voices. A co-design approach is critical.
-Leading differently: Leadership needs to shift from having the answers to asking the right questions. As Senge et al explain, “To be a systems leader one needs to: (1) see the larger system (2) be able to foster reflection and generative conversations (3) shift the focus from problem solving to co-creating the future.”
-Structuring themselves differently: Silos and hierarchies don’t work well in complex systems. Government agencies embracing a systems approach need to think about how to take a cross-portfolio, cross-disciplinary and cross-sectoral approach to their work.
-Working differently: Every interaction within a complex system changes the system itself. This means that governments must move away from the concept of developing ten (or even two!) year plans. Instead, they should adopt an approach of experimentation, learning and iteration. Interventions must be adaptive and responsive to the new conditions that constantly emerge.
-Governing and measuring differently: Thinking in systems also means that governments must adopt a different approach to evidence and evaluation. As John Burgoyne writes, “Measurement should not be used for top-down control, but rather to learn about complex problems and the people experiencing them, so we can adapt and improve our approach.”» [4]
As Noam Chomsky stated, «The most effective way to restrict democracy is to transfer decision-making from the public arena to unaccountable institutions: kings and princes, priestly castes, military juntas, party dictatorships, or modern corporations» [5].
When the outstanding Austrian mathematician (and not Jewish) Kurt Gödel immigrated to the USA with the support of Albert Einstein and the economist Oskar Morgenstern (game theory), he detected, during his US citizenship hearing, a breach in the US Constitution: a crucial contradiction that could open the door to dictatorship. So the danger is still there.
Of course, we can all together imagine a hatred society (the dual of Democracy), where each citizen is a source of hate and the subject of a rigid, constricted, and authoritarian society. Could the design of this dual already be planned in the fine print of Agenda 2030 and The Great Reset, the seeds that open the doors to a global dictatorship? Democracy needs vigilant citizens; this is the necessary but not sufficient condition for letting true, citizen-centered Democracy expand.
Fortunately, Nature has its own ways of combining harmony and chaos and, as we have seen in small beautiful gardens, of ensuring that things work.
GOOGLED REFERENCES:
[1] Athenian democracy — Wikipedia
[2] https://participate.melbourne.vic.gov.au/city-future/city-future-1/embracing-complexity-government-story-about-gardening-and-thinking-systems?fbclid=IwAR3XFHqe780vFZpXc0Bg7LytzUNC2EmbzMBUjjB6LxlY1WZQpDnmHKSReFQ
[3] https://knowledge.insead.edu/strategy/the-next-cycle-of-capitalism-5226?fbclid=IwAR1TKhz35P6FCCNR0tm0Sq3yD25QEms1mQwnT5b9r65u8hSQNxtUs16_nwo
[4] https://booksandideas.net/White-Collars-Dirty-Hands-and-Clean-Records.html
[5] Domestic Constituencies (chomsky.info) | https://medium.com/@mariojpinheiro/was-direct-democracy-ever-exercised-as-a-form-of-government-954bebaaaf4f | ['Mario J. Pinheiro'] | 2020-12-27 15:07:39.838000+00:00 | ['Technological Advances', 'Democracy', 'Society Politics', 'Network', 'Kondratieff'] |
Why You Should Use Ai Writing Tools | Note: This entire article was written by Jarvis AI. Check their official website here for a free sign up.
As AI writing tools are becoming more and more advanced, it’s important to know the benefits of using these helpful tools.
There are many uses for AI writing tools, including academic papers, blog posts, articles and even novels!
The best part is that you don’t need any knowledge about how to use them.
In this article we will discuss why people should use Ai writing Tools and what the limitations of using AI Writing Tools are.
How do AI writing tools work?
AI writing tools are software that can be used to automatically generate content.
They work by analyzing a topic sentence, paragraph or whole text and then generating sentences based on the user’s input in order to have an AI-generated article free of grammatical errors.
Academic papers, blog posts, articles and even novels can all be generated using these software programs. These powerful tools are designed to be used by anyone.
All you need is the topic and a couple of sentences about your idea, and an AI writing tool will do all the work for you!
The best part is that these tools can also help with SEO (Search Engine Optimization) because they’re free from any grammatical errors or spelling mistakes.
The benefits of using AI writing tools
AI tools are designed to help with all aspects of content creation: from the most basic blog posts to articles and academic papers.
These powerful programs can provide a good base for any type of content you may need. The best part is that they’re free from any grammatical errors or spelling mistakes!
They also help with editing your work. You can just let the computer do all the heavy lifting and take care of any errors you might have made in your text.
Another thing to consider when thinking about benefits is cost savings since most programs are free or very affordable.
Other great benefits of AI tools include:
- They allow you to spend more time on other activities.
- You don’t need any knowledge about how to use them.
- They produce quality work in a shorter amount of time than someone would normally take.
The limitations of using AI writing tools
It can be hard to find an appropriate topic with which you’re familiar enough that the automatic sentences generated by your software will make sense.
The other limitation is that if you're trying to write a novel, this kind of tool cannot generate an entire book for you; it would only provide content for certain parts of the book.
Also…
-AI writing tools cannot write or come up with new ideas by themselves. They are limited to what they have access to.
- AI writing tools cannot produce the same quality of work as a human.

- You have limited control over how they write or what content is written.
Which is the best AI writing tool?
Based on the benefits and limitations, you may be wondering which is the best AI writing tool for your needs.
There really isn't a “best” option because it all depends on what you're trying to accomplish with your writing; however, if I had to choose one, I would recommend the Jarvis AI writing tool.
Jarvis AI writing tool is designed to be used by anyone and can generate various types of content such as articles, blog posts or even novels.
It also helps with editing your work and it’s free from any grammatical errors or spelling mistakes!
Jarvis AI has an AI that can understand the intended audience and write content relevant to your interests.
Conclusion
I hope I’ve been able to convince you that AI writing tools are the best option for your content creation needs.
They’re free from any grammatical errors or spelling mistakes, and they can generate quality content in a shorter amount of time.
Just think about how much more time you’ll have to spend on other activities if you let AI do all the heavy lifting for you!
Jarvis, for instance, is the best AI writing tool in my opinion, and it's free. Visit their website and give it a try.
Cooperative Coffees infuses fairness and transparency into its ecosystem | Cooperative Coffees infuses fairness and transparency into its ecosystem
Lender RSF Social Finance is “so aligned with us it’s crazy,” allowing the cooperative to thrive without sacrificing values Melinda Cheel Follow Jun 1 · 5 min read
Photo credit: Cooperative Coffees
Bill Harris was on a Habitat for Humanity mission to Guatemala when he befriended a local coffee farmer in 1997. As they spoke, Harris learned that even though farmers did most of the hard work necessary for a great cup of coffee, they received little money in return. To remedy this, Harris decided to start his own roastery, Café Campesino, in Americus, Georgia, where he lived. He ordered a container (about 40,000 pounds) of green coffee from a Guatemalan farmer cooperative and paid top prices, hoping to put more money in the pockets of small-scale coffee farmers.
Harris had worked in the food industry before, but he still was shocked when he arrived home to discover how much coffee a container held. It was way more than one start-up roaster could use. So he got in his Volkswagen van and drove around the Southeastern U.S. trying to find roasters who would buy a share of his coffee and join with him to continue importing coffee at fair prices for farmers.
One of the roasters, Mike Mays of Louisville, Kentucky, met him in an airport parking lot with a bag of cash. As he handed Harris the money, Mays thought, “I am never going to see this money again.”
But he did. By 1999, Harris had recruited six roasters, including Mays, and started Cooperative Coffees, an importing cooperative now based in Georgia and Montreal. Over the past 20 years, Cooperative Coffees has grown to 23 members — local roasters across the U.S. and Canada — and is an industry leader in paying generous prices for coffee beans. “We want to be the top payer in our supply chain,” says Ed Canty, general manager of Cooperative Coffees, “because producers [small-scale farmers and cooperatives] deserve more money for the products they’re selling.”
This commitment is what ultimately led Cooperative Coffees to RSF. “When we switched to RSF, we felt we had finally found a lender that understands our mission and is excited about it. It’s a great partnership.”
Pushing the industry to do better
To create a sustainable livelihood for its producers in places such as Peru, Ethiopia and Honduras, Cooperative Coffees has always paid top dollar for beans. In fiscal year 2019–20, for example, the cooperative paid $2.51 a pound for beans — 2.3 times the commodity price at the time. This amount included a voluntary three cents per pound that every member contributes to the cooperative’s Carbon, Climate and Coffee Impact Fund.
Created to offset roasters’ carbon emissions in the future, the fund currently distributes grants to farming communities for projects such as reforestation, a compost facility, and tree inventories, based on what producers told the co-op they needed. During the pandemic and an active hurricane season in Central America, Cooperative Coffees pivoted to using $180,000 of this money to ensure farmers’ food security and access to medical care.
Cooperative Coffees prides itself on cultivating long-term relationships with its growers. This past year, 100% of the purchases Cooperative Coffees made were from producers it has been working with for three years or longer, and 68% were from producers it has worked with for 10 years or longer.
The cooperative is Fairtrade certified, but to this group of roasters, that wasn’t enough. To promote more transparency in the coffee industry, Cooperative Coffees also started a website, fairtradeproof.org, which provides details on its members and producers and offers visibility into every contract the co-op has ever executed, including the price paid to farmers.
“This was unprecedented in the industry and remains unmatched, though competitors and peer importers have started to take notice. It’s what makes Cooperative Coffees a perfect fit for us,” says Casey Johnson, relationship manager for RSF’s Food & Agriculture lending portfolio. “They’re paving the way for a more transparent and fair coffee-buying ecosystem.”
“These guys are so aligned with us, it’s crazy”
Cooperative Coffees has been consistently profitable, with an annual growth rate of 8% to 10%. But because of its co-op model — and its members’ decision to prioritize mission over profits — it has struggled to find the right financing relationship. “Three years ago, we were working with a bank that never understood our social mission,” Canty says. “Their sole focus was the profitability in our business. While that’s important, they missed the bigger picture of who we are and what we set out to accomplish.”
In addition to being fed up, Cooperative Coffees also needed a line of credit to bridge cash flow across inventory and accounts receivable. Like many enterprises that buy agricultural products, Cooperative Coffees often pays its producers months before members purchase their shares to roast. With a line of credit, they can finance that lag time. And they also have a better chance of staying stable if the volatile commodity price of coffee shoots up.
Harris (who was by then the co-op’s chief financial adviser) reached out to RSF, which had been on his radar for years. RSF invited everyone to the table and had a long conversation with Cooperative Coffees about its mission and financials, during which Canty realized, “These guys are so aligned with us, it’s crazy. Widening the circle to include buyer and seller, bringing us all into the conversation and then getting a better result: That’s what we’re all about.”
RSF issued a $5 million line of credit to Cooperative Coffees in 2018. In July 2020, in the heart of the pandemic, the cooperative experienced what Canty calls a “cash flow hiccup.” With producers expressing an urgent need to move coffee shipments out of their communities while trade routes were open, Cooperative Coffees needed more cash to manage the next four months of trade. RSF quickly upped that credit line to $5.5 million. It was the kind of quick action Canty could never have maneuvered with a bank. RSF subsequently increased the credit line to $6 million to support the co-op’s growth.
As part of its integrated capital approach, RSF also recently made a $450,000 equity investment in Cooperative Coffees. RSF was entitled to a 6.5% annual dividend but chose to take only 4.5%. It’s donating the other 2% to help Cooperative Coffees further its mission and diversify its membership base. “Decision-making like this is one of the reasons I love these guys,” Canty says.
The key to a good night’s sleep
Most of all, though, Canty loves RSF because the lender has allowed the cooperative to stay in business without sacrificing its values. Cooperative Coffee has leveraged its own $2.6 million in equity and RSF’s $6 million in loans and equity to purchase $16 million worth of coffee every year. That means a lot of farmers go to sleep at night knowing they can feed their families. It helps Canty too.
“Everything we do, we’re able to do because of a great financing relationship,” he says. “The fact that we can change course quickly with informed decision-making from a lender that believes in our vision of equity through trade — I don’t even know how to quantify that. Before RSF, it’s what I lost sleep over at night.” | https://medium.com/reimagine-money/cooperative-coffees-infuses-fairness-and-transparency-into-its-ecosystem-44c98f43d0cb | ['Melinda Cheel'] | 2021-06-01 16:15:38.922000+00:00 | ['Coffee', 'Farming', 'Cooperatives', 'Fair Trade', 'Social Enterprise'] |
Featuring: Sarah Sharif | Natalie Ruiz: Tell us about your work.
Sarah Sharif: I’m currently the Founder of Experimental Civics, an innovation institute, and consultancy, where we propel our clients further as innovators. We run think tanks, hackathons, internal innovation committees, workshops, and host our own global programs as well.
Natalie Ruiz: What do you love about it?
Sarah Sharif: The most profound reward is working with clients who are dedicated to pushing their boundaries and learning alongside them. The magical impact of connecting people and bringing them together to unleash their creativity and imagination is always what I take home in the evenings.
Photo provided by Sarah Sharif
Natalie Ruiz: What makes you great at what you do?
Sarah Sharif: I enjoy creating spaces of comfort, play, and creativity. I take wild ideas swimming around in my head, ground them on paper, and loop in the right people to build with. I’m always surprised how well-received and in-demand these spaces are and how much we all need to remind ourselves to take time to innovate.
Natalie Ruiz: What is the most challenging part of your job?
Sarah Sharif: Balancing work, play, and personal pursuits. My career has never defined me; it’s another means to an end to do good in the world before I pass on. I figure I have another 60+ years, give or take, left to redesign and build with people the type of world we should all want to live in and communities we want to be part of. However, finding a balance is hard. I have a slew of personal pursuits like summiting Mount Kilimanjaro to publishing my poetry book, which has to take front and center eventually.
Natalie Ruiz: When you were a kid, what did you want to be when you grew up?
Sarah Sharif: Lara Croft.
I wanted to travel the world, explore hidden treasures, own my home, play with gadgets, and save the world. I had chemistry kits, astronomy kits, archeology kits, art supplies, to owning giant tubs of lego. I was always creating, playing, and learning. To be honest, nothing has changed; if anything, I’m more like Lara than I have ever been.
Image provided by Sarah Sharif
Natalie Ruiz: Who is one of your heroes, and why?
Sarah Sharif: Björk Guðmundsdóttir.
She is so powerful, moving, and unapologetically herself. I find that vulnerability incredibly inspiring, and I have always wanted to channel that level of artistic confidence for myself. The ability to shine in your purest essence and not care what others think is what I’m striving for.
Photo provided by Sarah Sharif
Natalie Ruiz: What is one of your superpowers?
Sarah Sharif: Getting sh*t done. I operate in a do-it-once-do-it-right-automate-greatness mode with every personal or professional endeavor. Being an entrepreneur polishes and molds you into being self-sufficient and building reuse + recycle into everything you produce for ease later on.
Natalie Ruiz: What is one of your favorite books?
Sarah Sharif: Algorithms to Live By, by Brian Christian and Tom Griffiths. A very insightful read into how we make decisions, how we download the universe we’re living in, and how to make the “right” decisions.
“some of the biggest challenges faced by computers and human minds alike: how to manage finite space, finite time, limited attention, unknown unknowns, incomplete information, and an unforeseeable future; how to do so with grace and confidence; and how to do so in a community with others who are all simultaneously trying to do the same.”
Photo by Richa Sharma on Unsplash
Natalie Ruiz: What professional advice would you give to someone?
Sarah Sharif: I know it’s hard to “just be yourself.” When we want to make an impression or launch a new career path, don’t forget to bring your unique flavor to the conversation. No one else has the magic you were born with; let that soak in for a second.
Natalie Ruiz: What advice would you give your younger self?
Sarah Sharif: Recognize that you don’t know what you’re capable of, and no one else knows either, so dream big and go for it. Surround yourself with those who want to see you win and achieve your goals.
Natalie Ruiz: I believe there is power in sharing our big dreams and audacious goals. What are some big dreams of yours, and will you share them with us?
Sarah Sharif: Impact 1 million lives in some positive fashion before I’m 35. I have roughly 5 years left. Publish a collection of poetry by age 45. Design my own wardrobe; I have 10 outfits waiting to be sewn together. | https://medium.com/@its-natalieruiz/featuring-sarah-sharif-6414da54a314 | ['Natalie Ruiz'] | 2020-11-04 21:12:52.968000+00:00 | ['Female Founders', 'Leaders', 'Hackathons', 'Women In Business', 'Women In Tech'] |
The Good Fight (5x07): Season 5, Episode 7 on Paramount+
✌ THE STORY ✌
Jeremy Camp (K.J. Apa) is an aspiring musician who wants only to honor his God through the power of music. Leaving his Indiana home for the warmer climate of California and a university education, Jeremy soon comes across Melissa Henning
(Britt Robertson), a fellow university student he notices in the audience at a local concert. Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship, as she fears it will create an awkward situation between Jeremy and their mutual friend Jean-Luc (Nathan Parsons), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving relationship. However, their youthful courtship comes to a halt when the life-threatening news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremy’s love for her, and the couple marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and suffering through her illness, with Jeremy questioning his faith in music, in himself, and in God.
✌ STREAMING MEDIA ✌
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb “to stream” refers to the process of delivering or obtaining media in this manner. Streaming refers to the delivery method of the medium, rather than the medium itself. Distinguishing the delivery method from the media distributed applies especially to telecommunications networks, as most of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content, and users lacking compatible hardware or software systems may be unable to stream certain content.
Streaming is an alternative to file downloading, a process in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user can use their media player to begin playing digital video or digital audio content before the complete file has been transmitted. The term “streaming media” can also refer to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are considered “streaming text”.
This brings me around to discussing I Still Believe, a film release of the Christian religious faith-based variety. As is almost customary, Hollywood usually generates two (maybe three) films of this variety within its yearly theatrical release lineup, with the releases usually landing around spring and/or fall respectively. I didn’t hear much when this movie was initially announced (it probably got buried underneath all the popular movie news on the newsfeed). My first actual glimpse of the movie was when the film’s trailer premiered, which looked somewhat interesting to me. Yes, it looked like the movie was going to have the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like). Plus, the trailer for I Still Believe premiered for quite some time, so I kept seeing it whenever I visited my local cinema. You could sort of say that it was a bit “engrained in my brain”. Thus, I was a little keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it during its opening night), but, because of work scheduling, I haven’t had the time to do my review for it… until now. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, the film is a little too preachy and unbalanced in its narrative execution and character developments. The religious message is plainly there, but it takes way too many detours and fails to focus on certain aspects, which weighs down the feature’s presentation.
✌ TELEVISION SHOW AND HISTORY ✌
A television show (often simply TV show) is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. TV shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings.
A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure. A TV series is usually released in episodes that follow a narrative, and is usually split into seasons (US and Canada) or series (UK) — yearly or semiannual sets of new episodes. A show with a limited number of episodes may be called a miniseries, serial, or limited series. A one-time show may be called a “special”. A television film (“made-for-TV movie” or “television movie”) is a film that is initially broadcast on television rather than released in theaters or direct-to-video.
Television shows may be viewed as they are broadcast in real time (live), recorded on home video or a digital video recorder for later viewing, or viewed on demand via a set-top box or streamed over the internet.
The first television shows were experimental, sporadic broadcasts viewable only within a very short range from the broadcast tower, starting in the 1930s. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff’s famous introduction at the 1939 New York World’s Fair in the US spurred a rise in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948 the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr Television” and demonstrating that the medium was a stable, modern form of entertainment that could attract advertisers. The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman’s speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets.
✌ FINAL THOUGHTS ✌
The power of faith, love, and affinity take center stage in Jeremy Camp’s life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pin-pointing his early life and his relationship with Melissa Henning as they battle hardships and hold to their enduring love for one another through difficult times. While the movie’s intent and its thematic message of a person’s faith through trouble are indeed palpable, as are the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, too many preachy / cheesy dialogue moments, overused religious overtones, and mismanagement of many of its secondary / supporting characters. To me, this movie was somewhere between okay and “meh”. It was definitely a Christian faith-based movie endeavor (from start to finish) and definitely had its moments; nonetheless, it failed to resonate with me, struggling to find a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best, as some will like it (nothing wrong with that), while others will not and will dismiss it altogether. Whatever your stance on religious faith-based flicks, the film stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can be problematic when translated into a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature.
Explore the 5 Underwater Restaurants in Maldives | Maldives
“Mesmerizing” and “magical” are two words synonymous with the Maldives. This astounding destination is known worldwide for its scenic beaches and luxury accommodation, but that’s not all the Maldives has to offer. Apart from Insta-worthy, picture-perfect accommodation, the Maldives also offers exquisite dining experiences that will leave you awe-struck. If you’re planning a trip to the Maldives, make sure you dine at its underwater restaurants. Dining under the sea is a growing trend here, and there are 5 underwater restaurants in the Maldives that offer great food and incredible underwater views.
Best underwater restaurants in Maldives
Ithaa Undersea Restaurant
It is one of the best underwater restaurants in the Maldives, built 16 feet below sea level. This amazing undersea Maldivian restaurant offers a 180-degree panoramic sea view of marine life and beautiful corals. Ithaa is known for providing guests with an unforgettable experience, and the most amazing part about this restaurant is that you can also book it for private breakfasts, weddings, and other special occasions.
Cuisine served at Ithaa underwater restaurant: European
SEA Restaurant
If you want to experience the best underwater dining with chic décor and delicious food, visit the SEA restaurant, located at Anantara Kihavah. This restaurant takes the underwater experience to another level altogether. Its prime attractions among guests are the spectacular range of wines spanning over nine decades and the international gourmet cuisine.
Cuisine served at the SEA: International
Subsix Restaurant
It is another stunning underwater restaurant in the Maldives, and also the first underwater nightclub on the island. Subsix is one of the most visited underwater restaurants in the Maldives, and surely for obvious reasons. Everything from the décor to the seating arrangement to the shell-shaped bar reflects the charm of the sea. Moreover, what is really gorgeous and stunning in this restaurant are the coral-like chandeliers and the ocean-blue lighting that adds to the theme.
Cuisines served at Subsix Restaurant: Seafood and International
5.8 Undersea Restaurant
It is one of the most exotic undersea restaurants in the Maldives, well known for offering an exquisite dining experience. The 5.8 Undersea Restaurant is no less than a true paradise, offering heavenly sea views and an extraordinary culinary treat. It is an ideal restaurant for newlyweds to create special memories, and a great place to relish a tempting lunch or dinner. The prices might seem a little high, but it is all worth it.
Cuisines served at 5.8 Undersea Restaurant: European, International, Fusion and Seafood
Minus Six Meters
This underwater restaurant of the Maldives is set amidst mesmerizing marine life and is best for couples willing to experience something different and unique. Imagine eating with the fishes… amazing, isn’t it? The added exclusive perk of this restaurant is that you can dine here for free if you stay at the resort for more than four nights. It is one of the most romantic and surreal dining places in the Maldives, set 20 feet under the blue lagoon. Minus Six Meters is a restaurant one should definitely visit for an offbeat dining experience in the Maldives.
Cuisines served at Minus Six Meters: Seafood and European
So, what are you waiting for? Plan your trip with a cheap flight ticket to the Maldives and have a unique dining experience at some of these super-amazing underwater restaurants.
Super fancy splashes! | Base Photo by Johnny Brown on Unsplash
Super fancy splashes!
Creating an animated Splash Screen for your iOS app using Lottie
Splash screens are the windows to the app's souls. It's the first thing any user will see when they open your app (and we all know that the first impression is really important).
For a very long time, our app had just a plain, boring splash with a static image (with veeeeeeery little movement). Later, we created a second version of the splash, improved with a pulsing logo; it brought much bigger appeal to the users and was way nicer to watch.
But… I thought that it was not enough.
What if we could bring a little bit more life into our splash screen? What if we had the power to change the animation whenever we wanted?
The Beginning of a Dream
It was Christmas time! And how could I resist all the magic and beautiful decorations from the holidays?
I gathered some of the designers and managers with a not-so-clear meeting title: Christmas Magic on App 🎄. And it worked! Everyone came in really curious about what could it be.
So I threw the idea: Making a series of visual changes on the app to celebrate the holidays! And also, make them somehow remotely updatable so we could change it any time of the year.
And our main star: A new holiday-themed splash screen animation.
Opsies! Is it a real Splash?
Ok… I have something to confess. The actual app's splash screen is not really animated. It's a solid color screen (iOS splash screens cannot be dynamic 👀). But we have a few things that we need to load on the app start, so we created a secondary splash (where the actual loading happens).
The real splash screen is optimized to load as fast as possible, and the secondary splash has the same background color, so the transition between them is seamless, and if you are not aware it's happening, it's undetectable.
The transition between the two splash screens. It's important to keep the background color consistent across them!
Every animation will be taking place on this secondary splash, where we can have much better control of what's going on.
But, if we want to code every animation we want to display on the splash using UIView.animate(...) or any other animation framework, it would require a lot (and I mean a lot) of extra work. We need to find an easier way to create these animations.
Lottie: A mega blaster amazing animation Lib
We were already using on our project the Lottie lib, by Airbnb. If you are not familiar with Lottie, it converts a JSON file into a visual animation.
The animations are usually made using Adobe’s After Effects, later converted into a JSON using the Bodymovin plugin.
Example of an animation generated with a Lottie JSON
This gives us an easy way to add super fancy animations on the app — our motion designers can create incredible movable illustrations, and we can display, move, update them on the app with little effort.
So, Lottie is a very useful library, but maybe I didn’t emphasize one of its high points enough: it uses JSONs, and JSONs are just good old strings. It means they are easy to store, easy to download, easy to share… easy to everything!
Now we just need to use all the power Lottie gives us inside our Splash Screen.
The multiple screens problem
It's on! We have the design team on our side, everyone is excited to create a totally new experience, and people start having ideas. But suddenly, you are faced with a new issue: the animation placing.
Imagine your motion designer wants to make a rocket to come from the left side of the screen, spin around, and leave on the right side. Chill, they are the ones who are going to animate it, but you still have to place it on the app. And now the issues start to appear.
Positioning the animation is tricky. If you make it go side to side on the screen considering just one device size, it may look weird on a bigger device, because it will start in the middle of the screen and give a not-very-refined feeling; or worse, it could rescale and become something really big and scary to see.
Trying to use the same edge-to-edge animation on multiple devices
So, for our first version, we decided to fix an animation size in the middle of the screen, adjusting it to multiple screen sizes but keeping the animation proportions fixed. Giving these constraints to the designers yields animations that can fit on every screen size!
Defining a fixed proportional animation frame in the middle of the screen for every device
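In code, that fixed proportional frame can be sketched with Auto Layout. This is only a sketch of the idea, not our production code; the 0.7 width multiplier is an arbitrary assumption:

```swift
import UIKit
import Lottie

// Pin the animation to the center of any screen, scaling its width with
// the device while keeping the animation's own proportions fixed.
func addCenteredAnimationView(to view: UIView) -> AnimationView {
    let animationView = AnimationView()
    animationView.contentMode = .scaleAspectFit
    animationView.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(animationView)

    NSLayoutConstraint.activate([
        animationView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
        animationView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
        // Width follows the screen; height keeps a fixed 1:1 ratio
        animationView.widthAnchor.constraint(equalTo: view.widthAnchor, multiplier: 0.7),
        animationView.heightAnchor.constraint(equalTo: animationView.widthAnchor)
    ])
    return animationView
}
```

Because the width is proportional to the screen and the height is tied to the width, the same animation keeps its proportions on every device size.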
I’m not gonna lie to you: the side-to-side animation will probably be discarded, and the animations will need to rely much more on the center of the screen, using cross-dissolves and other transitions to appear from the solid background. But hey, this is an initial version; we can totally improve it later 😉.
Storing the animation remotely
We have a lib that allows us to animate everything, we have established standards to our animation run on every device, now we just need to be able to update it whenever we want.
Do you remember that Lottie animations are just JSONs? You can create an API that returns the animation based on the current date, or simply store it as a file on Amazon S3 and download it anytime you want!
The trick is: do not download it during the splash time! Network requests are unpredictable and unreliable… they can fail, and you would make your users wait on an empty screen for much longer than they need, just to maybe display an animation. Not a very good solution, I have to say.
To overcome this problem, we decided to download the animation some time after the splash is loaded, and then, the next time the app is opened, render the stored animation.
Loading a Lottie animation from a JSON is very simple; the Animation type itself is Decodable:
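A minimal sketch using Lottie 3.x; the stored data variable is a hypothetical stand-in for wherever the downloaded JSON was saved:

```swift
import Foundation
import Lottie

// `Animation` conforms to Decodable, so the stored JSON decodes directly.
func animation(from jsonData: Data) -> Animation? {
    try? JSONDecoder().decode(Animation.self, from: jsonData)
}

// Rendering it on the secondary splash:
let animationView = AnimationView()
if let data = storedAnimationData, // hypothetical: JSON saved on a previous launch
   let splashAnimation = animation(from: data) {
    animationView.animation = splashAnimation
    animationView.play()
}
```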
But we actually went a step further: along with the animation JSON, we also sent a start and an end date, so we could schedule the animation! Our final model was something like this:
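A sketch with assumed field names (the Lottie animation itself plus the start and end dates of its scheduling window):

```swift
import Foundation
import Lottie

// An animation plus the window of dates in which it should be shown.
struct ScheduledAnimation: Decodable {
    let animation: Animation
    let startDate: Date
    let endDate: Date

    var isActive: Bool {
        (startDate...endDate).contains(Date())
    }
}

// With more than one scheduled animation downloaded at once,
// the app picks whichever is active today.
func currentAnimation(in batch: [ScheduledAnimation]) -> Animation? {
    batch.first { $0.isActive }?.animation
}
```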
We could even send more than one animation at once, scheduled for different dates, and the app would know which one to render!
Also, by using some kind of cache (depending on where you store your animation), you can avoid downloading the same information multiple times and save some bytes from the user’s data plan.
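A simple disk cache is enough for this. The names below are assumptions for illustration; it stores the raw JSON in the caches directory and hands it back on the next launch:

```swift
import Foundation

// Minimal disk cache for the downloaded animation JSON.
enum SplashAnimationCache {
    private static var fileURL: URL {
        FileManager.default
            .urls(for: .cachesDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("splash-animation.json")
    }

    // Called some time after the splash, once the download finishes.
    static func save(_ data: Data) {
        try? data.write(to: fileURL, options: .atomic)
    }

    // Called on the next launch, before the secondary splash renders.
    static func load() -> Data? {
        try? Data(contentsOf: fileURL)
    }
}
```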
Tips and Tricks
Keep the animation duration small (less than 3 seconds)
This is the thing about splash animations: They have to be fast. In an ideal world, the splash screen would not even exist… and the Xcode would not randomly crash. But as we know, this is not an ideal world.
The animation itself should not have a background color
It has happened a bunch of times, we decide to add a new animation, and when I'm going to test it, the entire animation frame pops out with a different color than the rest of the background. Unless that's what you want — what I really doubt you do — keep your animations backgroundless.
The animation must not rely on external assets
While creating the animation in After Effects, the designer may add assets, like PNGs, vectors, colors, etc. When exported, these animations will depend on those assets as external files (which we are not ready to handle in our animation download service 😧). But do not fear! The designer can vectorize everything and embed it into the animation’s JSON so you don’t depend on any external file.
It’s not over yet!
That year’s Christmas is long gone, but the custom splash animation is very much alive! Like any other project, it needs maintenance, updates, improvements… Someday the design team may want to make some super crazy new thing and you’ll need to change it completely — but one thing has already happened: We brought the magic of the Holidays into our app!
And here on iFood we love bringing the magic of commemorative dates to life.
Christmas 2020 commemorative splash animation
And now that you can change it at any time, what are you going to do? Some fancy doodles like Google? Celebrate important and historical dates? It's up to you.
Animate all the way. | https://medium.com/macoclock/super-fancy-splashes-2f229a161155 | ['Giovani Pereira'] | 2020-12-22 07:29:09.761000+00:00 | ['Lottie', 'Splash Screen', 'Animation', 'iOS', 'Swift'] |
Problems we are solving at @onethousandADU | Problems we are solving at @onethousandADU
Problem / Significance
Housing Affordability crisis — Not enough Supply
Solution / Approach
State ADU regulations completely transform the risk and timeline associated with building new units on existing parcels.
My approach is to build new units on existing cash-flowing multifamily parcels, limiting the value-add risk.
Problem / Significance
Housing Affordability crisis — Too expensive
Solution / Approach
Building needs to evolve in the same way other technologies have evolved
Pre fabricated, factory built housing is far faster and less expensive than traditional construction
Problem / Significance
Energy crisis
Fossil fuels cause climate change
Shutoffs
Rates increase ~8% per year
Solution / Approach
We will power the ADUs with renewable energy sources
We will be grid-independent and self-sustaining
Protected from inevitable rate increases and shutoffs.
Problem / Significance
Wealth distribution crisis — Access to asset ownership is minimal
Solution / Approach
Residents will become investors in the ownership entity of the property
Start the snowball of wealth-building
Problem / Significance
Wealth distribution crisis — Disproportionately low representation by minority investors (women and people of color)
Solution / Approach
Find and train female operators and operators of color; educate them and help them do this
Help them become independently wealthy, change the trajectory of their lives, and expand access to their American Dream
The Pandemic Hasn’t Stopped Ultrarunner Pamela Chapman Markle | The Pandemic Hasn’t Stopped Ultrarunner Pamela Chapman Markle
Photo by Spencer Markle
I never tire of interviewing Pamela Chapman Markle. And she never tires of running grueling hours and distances, and setting USA and world records in in her age group, not even during the Covid-19 pandemic.
I recently interviewed Pamela about what it’s been like during the pandemic. It hasn’t been much fun. “Nine races cancelled on me. Badwater and Spartathlon were the last two cancelled. I was literally depressed about it,” she lamented. “But I helped a lot of race directors, because a lot of cancellations did not defer. In the Badwater series you lost every penny you sent.”
Pamela was one of twelve Americans given special permission to head to Greece in September, only to learn a week before flying that Spartathlon was cancelled.
With so many races cancelled in 2020 due to the pandemic, many runners, elite and ordinary, have turned to virtual races offered as alternatives to fill the void of canceled races. Pamela turned to one virtual ultra — the Trans-Texas 879 mile ultra. Runners have from July 1 to December 31 to cross the finish.
While Pamela likes to run by herself, and likes to compete against herself, she also wants to run with others. “I like to have other runners competing too. It’s inspirational to me.”
However, virtual races don’t lend themselves to real race competitions. Pamela finished the Trans-Texas 879 mile ultra on November 19 and not a day too soon. “It bored me to death.”
Recording one’s miles is an important part of running virtual races. Pamela does not like documenting how many miles she runs on any given day. “If I go out to run two hours and I feel I can run four, I’ll run four. I do not keep a log. I have a running coach and she keeps track of it.”
Chantalle Robitaille has been Pamela’s coach for about a year and a half. “She’s really positive and has good insight for me as a female runner.”
Setting Records in the Year of the Pandemic
While not a fan of virtual races, the 879 miles were good training miles, if not for Badwater and Spartathlon, for the Icarus Florida Ultrafest, which was not cancelled. At the 2019 Icarus events, Pamela set a few records.
Now, at age 65, she races in a new age category and set three records at the 2020 Icarus Florida Ultrafest in November, in the year of the Covid-19 pandemic.
She’s the world record holder in the 48-hour event with 174.76 miles. Pamela also set a new USA all-time age-group record in the 24-hour event with 109.3 miles, and a new USA all-time age-group record in the 12-hour event with 60.64 miles.
With all the race cancellations, Pamela said, “I was scared to death because I hadn’t raced in nine months.” Her strategy was to take it slow at Icarus.
Unlike running Icarus in 2019 in her favorite weather — hot and humid — the 2020 Icarus ultra events had runners endure a combination of sun, heat, rain, and 30 mph winds.
“It was hot and sunny. Then it rained for a few hours. Then it got hot again. I chapped so bad,” said Pamela.
Hard on Herself
Not racing for over nine months also affected her body.
With four and a half hours to go at the Icarus 48-hour in November, Pamela had had enough by mile 174.76. “I literally stopped because I was falling over. I told my husband I just want to lay down. I couldn’t get up. My toenails were coming off again. It was a very miserable way to go for 44 hours.”
Taking a nap or sleeping for a little while is not uncommon in 48-hour events. But not for Pamela. She did not nap or sleep. “I did not sleep at all until I stopped running.” But she did sit for about three minutes to wolf down a grilled cheese.
She also hallucinated. “I was way behind in fluids and way behind in fuel so the nine months you don’t race your belly forgets what you’ve done to it all year long.”
Sleep deprivation and hallucinations are not stopping Pamela from plotting her strategy at next year’s Icarus Ultrafest. “”I have to work out a better plan to redo this 48. Last year I took an hour and put my feet up and napped a little bit and then I took off running and I did 186 miles.”
Pamela is tough on herself. “I’m very hard on myself. Probably harder than I thought. My husband has pointed it out to me.”
Even though she set three records, she felt she didn’t do her best at Icarus. “I was happy but I was sad. I set two US records and a world record. The world record hadn’t been broken since it was set the late 80s early 90s, but I didn’t do as well as I knew I could.”
She is now rethinking her strategy for next year at Icarus.
Aging, Healing, and Sleep
At age 65, Pamela’s body takes longer to heal after such grueling distances. Her sleep is also affected, “The healing takes longer. It’s harder to sleep as you get older. It’s very hard to get a full 8 hour sleep. I think it’s aging.”
But her aging body doesn’t diminish Pamela’s fast running. “I always run faster when I have someone to pass,” she said. With runners to pass at Icarus, she kept running. “I didn’t stop to even sit down until I hit the mile 110 marker. I ran solid for 110 miles.”
Motivation and Nutrition
Having someone to pass during a race is a great motivator for Pamela. “Otherwise, I would run slower forever,” she added.
Nutrition and carb intake at Icarus came in the form of Maurten gels, Spring gels, and alternate peanut butter and jelly.
“Maurten gels are tasteless and that’s what I like about them. Spring gels are natural fruits and vegetables. I don’t take more than 200 calories an hour. I like the Spring gels but I get sick of the gels at mile 40. I have to grab something different for my carbs. Cookies. Something with sugar,” said Pamela.
Unstoppable and Grateful
After setting USA and world records in her age group at the 2020 Icarus Ultrafest, Pamela flew back home to Texas and went to work on Monday. She is retired but works part-time as a nurse anesthetist for plastic surgeons. On Thursday she went for a very slow hour-and-twenty-minute run, most likely plotting her 2021 ultras.
But before tackling her 2021 races, she’s running the inaugural Into the New Year in Florida on December 31 to go after the world record in the 12-hour event.
No doubt, Pamela will continue to be unstoppable in 2021. She’s having a go at the Pier 2 Pier 200K in Florida in January. She might take February off. In April, she’ll take on the Badwater Salton Sea. She’ll head back to The Keys 100 in May.
I’ve convinced her to take on the Dawn to Dusk to Dawn 24-hour track ultra on Mother’s Day weekend. She’ll go back to Badwater 135 and to Six Days in the Dome in July. Spartathlon is on the schedule for September. Then, it’s back to the Icarus Ultrafest in November.
“I’m a work in progress,” said Pamela. And she is also filled with gratitude.
Pamela with her husband Spencer
“I’m just grateful to be alive. I’m very grateful to have had my father for 88 years. My husband came down with Covid in July. I, like every one else, have had a rough year. However, I have so much to be grateful for. I bought the house of my dreams. I’m living in Texas which to me is the best state in the whole world. I train along the seawater. I see dolphins,” said Pamela with joy in her heart.
Originally published in www.miriamdiazgilbert.com
Miriam Diaz-Gilbert (aka Miriam Gilbert) is a published author and ultrarunner. Follow me on Twitter, Facebook, Instagram, and YouTube. | https://medium.com/@ultramiriam/the-pandemic-hasnt-stopped-ultrarunner-pamela-chapman-markle-916a7b55020f | ['Miriam Diaz-Gilbert'] | 2020-12-21 15:03:59.871000+00:00 | ['Life Lessons', 'Lifestyle', 'Sports', 'Ultrarunning', 'Aging'] |
5 Universal Truths of Modern Dating | 5 Universal Truths of Modern Dating
That we forget too often.
Photo by Alex Perez on Unsplash
Modern dating sometimes feels like a minefield.
It feels like exploring new territory without a map, like trying to find our way in a dark, unfamiliar house just by sense of touch.
But despite how lost we feel, there are some common landmarks that remind us we’re not treading such unfamiliar waters after all. In the end, we’re all in the same boat.
We all have a past
We all have exes, a history, and baggage.
We all have reasons to mistrust the opposite (or the same) sex. We can all begin sentences with “men always,” or “women never.”
If we wanted to, we could spend date after date turning our past dating woes into topics of conversation — a conversation that seems vulnerable and honest on the surface, but that’s actually empty of substance deep down. Sometimes, that’s exactly what we do.
But we’re not as good at accepting that other people also have baggage. We expect them to have all their issues figured out and/or neatly stored away out of sight from the moment we meet.
We hate low-effort dating…
It doesn’t matter how much you hate it, low-effort dating is a staple amongst 21st-century dating trends.
Low-effort dating is all about minimizing effort and maximizing rewards. It’s all about swiping for a last-minute Friday night date and settling for the first person within a certain mile range who says yes. It’s about skipping the romantic dinners and moving straight to Netflix and chill like a couple who’s been together for ten years and has long since stopped caring.
Low-effort dating is all about refusing to text first, to do any real pursuing, to do your best not to care so you’re not heartbroken when it eventually ends. It’s about going for convenience over feeling, and practicality over standards.
Low-effort dating inspires you to back away at the first sign of trouble, to bail instead of working in the relationship, and to only stay while the honeymoon phase lasts.
We hate low-effort dating because it makes us feel like that discount item next to the cashier at the grocery store, something someone picked just because it was there and available for cheap. It makes us feel like an afterthought, as if we’re not worth the trouble.
…but sooner or later we all engage in it
We hate the notion that putting effort into dating has become that unfashionable, but sooner or later, we all give up and give in. We low-effort date because we’re tired of running on the dating treadmill and getting nowhere. We’re exhausted from getting our hopes up just to see them come crashing down. We need a break from all of this, but we don’t know how to give ourselves one so we carry on taking shortcuts.
We look at the prospect of a 20-minute drive to go on a dinner date and shudder; our favorite place for a first date becomes any bar within two blocks of our home. We ghost to avoid difficult conversations, we block so we don’t have to deal, we don’t even spend the energy to pretend we’re too busy.
We make the exact mistakes we despise so much, and fall beneath our own standards because we are not perfect. There’s only so much pressure we can take before we crack.
The good news is, despite the fact that you might slip sometimes, you do not have to blindly follow the herd. If you believe you’re worth the effort, start putting in some effort.
We know luck plays a major role in finding love…
When we’re single, we see a happy couple and recognize how lucky they are to have found each other.
We watch romcoms and The Office reruns and logically, rationally, we understand those couples have a one-in-a-million shot of meeting and working it out. We know that for every fictional Jim that gets his Pam there are countless real-life Jims forever stuck in the friendzone; for every fictional Pam that gets a guy as devoted as Jim, there are countless real-life Pams who don’t get a second — let alone a third — chance at the wonderful guy they’ve thoughtlessly turned down.
… but we don’t like to think we depend on luck to find a soulmate
We know luck plays a major role in finding love, but we somehow refuse to accept it has any real influence over our romantic lives.
When it comes to our own love life, we forget about luck entirely. There’s no room for luck because we deserve a happy ending — that’s how it’s supposed to be, that’s how it’s written in the stars.
With the right attitude, we can all be Jim and Pam. Our real life story can be exactly like their fictitious one.
So we try to manufacture our happily ever after. We read every how-to article available online, we buy dating books, and listen to podcasts. We attend workshops and watch Ted Talks, we dissect our dating lives with our friends and engage a therapist in an attempt to fix whatever is wrong with us that’s making us “still single.”
While you could probably use that therapy regardless of the state of your love life, it doesn’t hurt to remind yourself you can’t self-improve a suitable partner into existence. Working on yourself to become a better person is a noble goal, but no amount of self-work will guarantee the right person will pop up into your life.
Some people get incredibly lucky in love, some do not. That’s not a reason to give up, but it should be an incentive to stop feeling like such a failure for being single, and to start setting up new measures of personal success. | https://medium.com/acid-sugar/5-universal-truths-of-modern-dating-6f62a54678c0 | ['Renata Gomes'] | 2020-12-24 12:45:26.186000+00:00 | ['Self', 'Relationships', 'Life Lessons', 'Love', 'Self Improvement'] |
Understanding Sex Addiction | Understanding Sex Addiction
How self-isolation from coronavirus impacts sex addicts and their partners
Photo by Joe deSousa on Unsplash
Self-isolation is causing partners to spend a considerable amount of time together under the same roof. I’m curious how Sex Addicts (SA’s) and their significant others are coping?
The question led me to have a conversation with a woman I know well. Let’s call her Pam for anonymity. Pam is a former wife of a Sex Addict (SA). For twenty years, she lived through hell and, through time and experience, evolved into a valuable knowledge source, managing an online sexual addiction support system and becoming an essential lifeline for spouses of Sex Addicts. Pam has either seen or heard it all.
The following interview with Pam delves into what SA’s and significant others may be facing right now during self-isolation. Pam explains how a person becomes addicted to pornography, how the addiction progresses, and what chemical changes occur in the brain. We discuss reprogramming opportunities, available treatment, and resources.
Pam’s former spouse is male; this interview is slanted toward men who are sexual addicts. In our discussion, we also refer to porn and sex addiction as interchangeable.
[Lisa] How do you envision pornography or sex addicts are coping right now in self-isolation?
[Pam] I believe being in self-isolation will exacerbate the problem. Many Sexaholics Anonymous (SA) group meetings are canceled, and no face-to-face support is available. The coronavirus is a time of uncertainty, fear, and anxiety for many, and Sex Addicts fall back on porn and masturbation as their default self-soothing mechanism.
With people in quarantine, men will be going through withdrawal, as their wives are with them regularly and they can’t watch as often as they want to, given they’ll be under scrutiny. I’m sure there will be more unhappiness than ever. While their partners may wish for more reassurance, more times of closeness, and feelings of being protected and loved, the SA is incapable of providing these emotions, so he will do whatever he can to create distance (coldness, fighting, being overly critical, etc.) to ensure there is space between them and to enable the addictive behavior.
[Lisa] Let’s take a step back for a moment; how do you define a sex addict?
Pam performs a specific search on Google and provides the following research paper. “Sex Addiction, Neuroscience Trauma, and More!” written by Stefanie Carnes, Ph.D., CSAT-S.
[Pam] reads:
A pathological relationship to a mood-altering experience (sex) that the individual continues to engage in despite adverse consequences.
It is very much a physical addiction.
[Lisa] This statement provides a clear understanding of the problem. During quarantine, what signs can spouses look out for?
[Pam] Increased time on the computer or phone. Hiding what he’s looking at. Passwording his devices and not allowing you access. Unexplained time away from home. Excuses not to accompany you on outings and preferring to stay home alone. Loss of interest in sex. Coldness. Continually picking fights to put distance between you. Making excuses not to sleep together.
[Lisa] What happens if the sex addict is not able to get what he or she needs right now due to being in isolation with their partner?
[Pam] When sex addicts can’t get their fix, they go into withdrawal, with almost the same symptoms as a drug addict. Symptoms include anxiety, short temper, anger, nausea, and even physical pain.
[Lisa] How do people become addicted to porn?
[Pam] One of the most common reasons is early sexual experience before maturity, often in childhood. It is not uncommon for SA’s to be victims of childhood sexual abuse. The first introduction to pornography may involve magazines, or videos, or online.
Pam is correct. Carnes’ research paper reports the following figures:
72% experienced physical abuse
81% experienced sexual abuse
97% experienced emotional abuse
[Lisa] In our previous conversations, you mentioned there’s a slippery slope with porn addiction. Can you describe this process from beginning to advanced stages?
[Pam] In the beginning, curiosity leads people to seek out images. Porn sites offer lots of free drawing cards.
[Lisa] Wait — what’s a drawing card?
[Pam] When you first go on porn sites you can see short videos which are usually soft porn with a short teaser trailer to harder stuff and a place to enter your credit card number.
For some, it whets the appetite for more consumption, like how a long sip of Crown Royal demands the whole bottle for an alcoholic. Soon, porn becomes harder as the neurons in the brain are re-wired and desensitized by tamer acts. To meet demand, watching more disgusting, demeaning, and sometimes violent or sex acts with children are required to stimulate the brain. Soon it can evolve from just watching pornography to performing the actions themselves. And affairs begin. Online dating sites are filled with SA’s looking for no strings attached thrills.
Photo by Kristina Flour on Unsplash
[Lisa] Tell me more about the term “emotional anorexia.” How does an SA’s personality change over time?
[Pam] Dr. Douglas Weiss coined the phrase. He’s a psychologist and the founder of Heart to Heart Counseling Center in Colorado Springs, Colorado. Here he explains what this is between man and wife. When love happens between two people in a ‘normal’ sexual, loving relationship, the man’s brain imprints with his partner’s body and sexual rewards come from that. Her body and looks become associated with sex in his brain, and the look of his wife turns him on sexually. It doesn’t matter if she is fat or thin, ugly or gorgeous; his imprint of her nude body triggers sexual desire and loving emotions.
Once an SA has become strongly addicted, chances are he no longer views sex with his partner as desirable for several reasons. She is human versus online porn women and may complain about things like demanding foreplay, romance, and expect sexual gratification. To most SA’s, this is not desirable. His online girls never whine. They always look perfect and do as they’re told. Of course, they adore him in his fantasy world. Since he lives in a world of make-believe, trying to integrate back into the real world with a partner who makes demands and has expectations is beyond what he is willing to give. Once he’s involved with viewing pornography and masturbates, it doesn’t take long before he can no longer enjoy intimacy.
According to Dr. Weiss’s research, intimacy anorexia is caused by four major sources
[Lisa] What happens on a chemical level to a person’s brain as they evolve more in-depth into the addiction?
Pam pulls up another online research paper from TE Robinson and B. Kolb titled Structural plasticity associated with exposure to drugs of abuse. She starts to read it aloud to me.
Recent research has shown that non-drug addictions such as gambling, binge-eating, and sexual activities affect brain function in ways similar to alcohol and drug addiction. Many addiction studies focus on what is referred to as the pleasure/reward circuitry and their corresponding neurotransmitters — chemicals that are responsible for the communication between neurons.
Pam looks up at me. I nod my head in understanding and ask her to continue reading aloud:
One of the neurotransmitters frequently identified as central to addiction is dopamine. A behavior or drug that produces pleasure induces a rush of dopamine that ultimately “reinforces” that behavior, making it more likely to occur. The amygdala, basal ganglia, and other reward centers play a role in the reinforcement of the activity that produces pleasure.
Changes in the brain’s neural pathways are referred to as “plasticity”; and “synaptic plasticity” refers to changes among neuronal connections.
[Lisa] To recap, research substantiates the idea that porn addiction can alter brain plasticity.
[Pam] Exactly.
Black and white image of a man staring at a fuzzy TV screen
[Lisa] Can the addict ever reprogram their brain back to its original state?
[Pam] With time, and a lot of work, the addict’s brain can return to normal. After all, it is considered plastic and can change and mold. It takes strict refusal to view porn and a program of recovery for both partners. There is a slow, gentle approach to introducing intimacy back into the marriage. In time, if conditions are met, the SA can once again look at his partner with desire and love. He must be forever vigilant not to allow himself to be triggered and may have to avoid places that he once went to fill his needs. Places like the beach, shopping mall, downtown street, and, most importantly, his computer. An accountability program for the computer can be installed, protecting the SA from entering sites that are not safe.
[Lisa] What treatment is available for porn addicts?
There are several treatment centers and resources available for sexual addiction. Some require in house treatment for 30 days or more with extensive programs involved. Others are geared towards the husband and wife. There are 12 step sexual addiction groups in most urban spaces and as well as support groups for partners. Online support groups exist also.
[Lisa] Support resources will be included at the end of this post. My last question to you, Pam — will you be willing to share your personal story with us in Part 2?
[Pam] Yes, absolutely.
In the next post, we continue to speak with Pam to understand what it was like to be a spouse of a sex addict. She reveals how she discovered her husband’s addiction, it’s progression and the evolution of her own feelings. Pam will describe her difficulty in accepting sexual addiction as a disease. Next, we talk about the support she received and how Pam became a voice for spouses of SA’s. The interview will conclude with the outcome of Pam’s marriage and where she is today on her journey.
If this interview resonates with you, know you are not alone. There are resources available to help in your time of need.
Resources
Help Available: United States
Help Available: Canada
Scholarly Journals
Associations
General
More like this: | https://medium.com/beingwell/interview-with-a-spouse-of-a-sex-addict-part-1-732146e1b5f9 | ['Lisa Bradburn'] | 2020-04-10 21:04:05.269000+00:00 | ['Addiction', 'Interview', 'Mental Health', 'Sex', 'Coronavirus'] |
Using Natural Language Processing (NLP) to Analyze Social Media Posts by Local Politicians | If you are a social media active person, you may understand how highly impactful social media content can be in creating visibility in audiences’ minds. You could post texts, do live videos, share posts, and much other social media content with your friends and followers. As all this occurs quite frequently as a normal part of daily living for most people, tens of thousands of data are generated in the process. Analyzing text data can be valuable in providing meanings to the opinion of people and organizations. And this is exactly what Natural Language Processing (NLP) is developed to do. NLP simply uses computers to understand human language by extracting and analyzing text to provide insight into the meaning of those texts.
Quite recently, I came across a python script that uses NLP to analyze and visualizes Donald Trump’s Tweets. Being curious to explore what such analysis would make of social media texts by local politicians, I replicated the project but instead of using Twitter, I used Facebook. I used python and NLP and related packages to extract Facebook posts from Sierra Leone’s State House and Mayor of Freetown. The textual data were scrapped, analyzed, and visualized to reveal top words, top organizations, and top people mentioned in thousands of social media texts posted on Facebook by the State House and Mayor of Freetown.
Analysis — Facebook Posts(texts) by State House, Sierra Leone
Analysis on State House, Sierra Leone, Facebook post reveled top words to be: “Presid”, “bio” , “maada”, “Sierra” “Leon”, “govern”, and “contri” “develop”, “support”,” excel”, “educ” and “new” etc.
NLP was also used to extract the top organizations and people mentioned in social media posts. State House has mentioned top organizations such as the UN, Millennium Challenge Corporation (MCC), EU, RSLAF, and others. The top people mentioned are Maada Bio, Fatima Maada Bio, Dr. Mohamed Juldeh Jalloh, Jacob Jusu Saffa, David Sengeh, etc. | https://lfvibbi.medium.com/using-natural-language-processing-nlp-to-analyze-social-media-posts-by-local-politicians-a78ada98db97 | ['Leonard Francis Vibbi'] | 2020-10-25 12:15:31.381000+00:00 | ['Data Science', 'NLP', 'Civictech', 'Civic Engagement'] |
Lagos seals 18 hotels, event centres over N91m tax evasion | The Lagos State Internal Revenue Service (LIRS) has shut 13 hotels, restaurants and event centers for failure to pay their taxes due under the Hotel Occupancy and Restaurant Consumption Laws of Lagos State 2009.
The Hotels and Restaurants were said to be owing the State a sum of N426,976,528.87
The Director Legal Services of the LIRS, Mr. Seyi Alade, on Friday 17 November 2017 during the state-wide tax enforcement exercise, warned that defaulting Hotels, Restaurants and Event Centers will henceforth face the full wrath of the law if they fail to deduct and remit their taxes as provided by the law. He promised to release in due course the names of the entities affected by the enforcement exercise.
According to him, failure to deduct and remit taxes as at when due attracts very serious penalties that may lead to the sealing, seizure of the goods and chattels and criminal prosecution of principal officers of recalcitrant entities.
He said the LIRS usually gives a long rope by issuing multiple notices to the taxpayers to inform and also remind them of their tax liabilities adding that only recalcitrant taxpayers are shut down as in the present case.
He, therefore, urged all business entities operating in the state to ensure prompt remittance of their taxes to avoid costly disruptions on their businesses as a result of a distrain exercise. | https://medium.com/@barometerng/lagos-seals-18-hotels-event-centres-over-n91m-tax-evasion-6ddcfe740bd2 | ['Barometer Ng'] | 2017-11-19 12:54:44.713000+00:00 | ['Hotels', 'Restaurants', 'Tax Evasion'] |
Introducing BlockLoan… Crypto Loans and Pooled Smart Contracts | What is BlockLoan?
Traditionally, if you want to get a loan, you would have to approach a financial institution and then go through the bureaucratic process of applying for it. This will most likely be a manual process, where you have to fill out lots of forms and provide lots of details. Then you will have to wait for your loan application to be reviewed and get either accepted or rejected. However, times have changed, and this manual process that most financial institutions use doesn’t meet customers’ needs.
Even the development of online or digital platforms that provide loans hasn’t yet been able to address the key problem that people who want to borrow money face. This is the possibility of borrowing money in real time in a simple, easy, cost-effective and transparent way. Also, if you have a bad or limited credit history, a lot of institutions won’t be willing to lend you money.
This is where BlockLoan comes in to fulfil the gaps in the market. It aims to make it possible for people to access credit anytime and anywhere in a transparent and efficient way. This is to be achieved globally through the use of the blockchain technology.
Currently BlockLoan is undertaking a private stage token offering to selected participants, you can view the BlockLoan Crypto Loans Whitepaper.
Defining BlockLoan
This is a decentralized ledger technology that uses a peer-to-peer network to give crypto loans to people from all over the world. This is founded on smart contracts using a new credit scoring system. The main goal of this process is to make it easier for consumers to access credit.
The Evolution of Banking Models
The 1980s — You had to visit the bank for all your transactions
The 1990s — ATMs were set up, making self-service possible
The 2000s — Online banking solutions were invented, making it possible to access banking services even after work hours.
The 2010s — The rise of peer-to-peer platforms offering competitive products
The 2020s — Banking platforms powered by APIs and blockchain technology
It’s expected that Peer to Peer (P2P) and digital loans will grow by more than 50% within the next five years. This is a result of advances in digital lending technology, decentralized technology, and the ability of developing countries to access more efficient lending systems.
The use of blockchain technology in the banking industry can help to speed up these processes, provide consumers with more options and greatly lower banking costs.
Traditional Lending Platforms Challenges
Old fashioned credit rating methods
Specific to certain channels and geographical areas
Long bureaucratic application processes
Borrowers cannot easily access their bio-data and personal information
Takes a long time to handle the paperwork and compare options
High transaction fees and other charges
Only a limited number of people can access these services as a result of data constraints, such as a bad or limited credit history.
Challenges in understanding legal implications and terms and conditions of the lending institution
The reason these problems exist is the need to minimize risk, combined with the fact that banks didn’t face much competition before. This has resulted in a lot of inefficiencies in the credit industry.
Characteristics of Banking Based on the Blockchain
BlockLoan provides an extraordinary platform and concept when it comes to global lending. This includes:
Traditional, non-traditional and Al credit scoring
Worldwide crypto private lending
Lending against equity portfolios while using cryptos
End to end loan application and management
Partnering with particular countries to reduce cases of loan defaults and for loan collections
A Crypto debit card that can be used globally
Pooled smart contracts that allow loans to be fulfilled completely when the funding goal is reached
Based on a Proven Existing Business — Lodex.co
BlockLoan has taken advantage of the experience and exposure that Lodex provides to access the Australian and global market. Lodex has more than 35,000 people who have signed up for their system. These people are already taking advantage of the various resources that Lodex provides, such as credit scoring, asset valuation reports, and social scoring. Also, the reverse auction feature makes it possible for banks and brokers to bid for loans or deposit requests put in by consumers.
BlockLoan will use the Lodex platform for end-to-end loan origination and management of crypto as well as fiat loans. The BlockLoan platform provides a network that connects borrowers and lenders through smart contracts. It also enables loans and repayments to flow on an ongoing basis through the layers of services that it offers.
This model highlights a power shift that’s happening in the industry, where power is taken away from lenders and is equally distributed between lenders and borrowers, as brokers and lenders try to outbid each other to meet the needs of the borrower.
Also, the credit rating of individuals will be based on data collected from third parties, proprietary scoring, and consumers’ social data. This way, a larger pool of people will be able to apply for loans. Consumers will even be able to access this information and export it to wherever they want. This will do away with the need for customers to send in multiple applications just to get a single loan.
Lodex Main Features
Some of the main features of Lodex include having several sources of information used to score and authenticate borrowers. Consumers can also auction off their loans or deposit needs on this platform and lenders and brokers will bid on the depositor’s business. The winner gets to fund the borrower or depositor.
Each lender has its own management system where it will manage the loans that it advances to borrowers. BlockLoan is going to revolutionize the credit industry through crypto loans, token-based transactions, smart contracts, and end-to-end loan management, from the moment the loan application is made to the point the loan is fully repaid.
BlockLoan Features
Well-experienced management team
Using blockchain banking as a platform through the proven Lodex platform
Revolutionize the lending industry through the use of smart contracts, decentralization and blockchain technology
High liquidity through the use of tokens
Provides services worldwide and can be easily scaled
Flexibility when it comes to loan products
Several sources of funding
Being able to take advantage of interest rate arbitrage
The Multi-Million Dollar Lending Market
At the beginning of 2016, the lending market stood at $42.3 trillion. At this time, $1,790 billion was transacted through Peer to Peer (P2P) lending platforms. It’s projected that this figure will go up to $83,460 billion by 2025.
How BlockLoan Works?
BlockLoan is a decentralized crypto platform that offers lending solutions to borrowers. Anyone from anywhere in the world can take out a crypto personal loan. The individual’s credit-worthiness and ability to make repayments will be assessed through a credit scoring system.
Here are the steps that the consumer will have to go through:
The borrower files an application, their identity is verified, and credit score is assessed
Lender key criteria engine
Collective smart contracts
Loan disbursement and settlement
Continuous loan management and repayments
How to Apply for a Crypto Loan
Create a user profile
Create your profile on the BlockLoan KYC portal. Additional information about you will be obtained from your social profiles and reporting agencies.
Apply for a specific loan amount
Once you are accepted into the system, apply for the specific amount that you need. Choose interest rates and terms of the loan. This will lead to a smart contract being generated in the system with the terms that you specified.
Get matched with a lender
There are two systems for matching borrowers with lenders. In the first, a P2P model is used, where many lenders bid for one loan contract. The second is the pooled smart contracts model, where lenders are brought together beforehand. You only get access to your funds when the loan has been fully funded. It’s also not possible for the system to refuse to release the loan, as the process is managed entirely by the smart contract and the lending pool’s digital wallet.
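The pooled model described above — funds are released only when the goal is fully met — lends itself to a tiny state machine. The production version would be an Ethereum smart contract; the Python sketch below (class and method names are invented, not BlockLoan’s API) only illustrates the release rule:

```python
class PooledLoanContract:
    """Illustrative sketch of the pooled-funding rule: the borrower can
    only draw down once pledges fully cover the requested amount."""

    def __init__(self, borrower, goal):
        self.borrower = borrower
        self.goal = goal
        self.pledges = {}        # lender -> amount pledged
        self.disbursed = False

    def pledge(self, lender, amount):
        if self.disbursed:
            raise RuntimeError("loan already funded and disbursed")
        self.pledges[lender] = self.pledges.get(lender, 0) + amount

    @property
    def funded(self):
        return sum(self.pledges.values()) >= self.goal

    def disburse(self):
        # Release funds only when the pool has reached the funding goal.
        if not self.funded:
            raise RuntimeError("funding goal not yet reached")
        self.disbursed = True
        return min(sum(self.pledges.values()), self.goal)

loan = PooledLoanContract(borrower="alice", goal=1000)
loan.pledge("lender_a", 400)
loan.pledge("lender_b", 600)
print(loan.funded, loan.disburse())  # True 1000
```

Note that this sketch simply truncates any pledges beyond the goal at disbursement; a real contract would also need to refund the excess to lenders.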
Crypto-Equity Margin Lending Model
One of the best ways for BlockLoan to succeed in the industry is to take advantage of the equities margin lending industry. This can be a great way to bridge the gap between cryptocurrency and fiat currency.
In this model, investors will be able to take crypto loans by using their equity shares as leverage. This increases the chances of the investor making more profit through the added financing that they get. However, it also increases their risk of bigger losses, in case the market doesn’t go as they thought it would. So far, this market has been controlled by fiat currency, but it now provides a great opportunity for investors to get into the cryptocurrency market this way.
Crypto-Equity Margin Lending Process
The equity holder applies through BlockLoan for credit
His equity shares get valued based on factors such as their liquidity and volatility
Crypto funds get transferred to the borrower
The purchased shares get transferred to the custodian
Custodian holds onto the shares until the loan is fully repaid
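The valuation step above — pricing the shares based on their liquidity and volatility — amounts to applying haircuts to the collateral before setting a maximum loan. The haircut and loan-to-value (LTV) percentages below are invented for illustration; the whitepaper does not publish actual parameters:

```python
def max_crypto_loan(share_value, volatility_haircut, liquidity_haircut, ltv_ratio):
    """Illustrative collateral valuation: discount the equity portfolio for
    volatility and liquidity risk, then lend a fraction (LTV) of the remainder.
    All percentages here are hypothetical, not BlockLoan's actual parameters."""
    adjusted = share_value * (1 - volatility_haircut) * (1 - liquidity_haircut)
    return round(adjusted * ltv_ratio, 2)

# $50,000 of shares, 20% volatility haircut, 10% liquidity haircut, 50% LTV:
print(max_crypto_loan(50_000, 0.20, 0.10, 0.50))  # 18000.0
```

Riskier, less liquid shares would get larger haircuts and so support a smaller crypto loan, which is the incentive the custodian arrangement is meant to protect.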
Revenue Model
The BlockLoan model has the potential to generate significant revenues, given the size of the personal loans industry and the early-stage nature of crypto loans (at this stage there aren’t many participants in the space). Future revenues will mainly come from the fees earned when loans are fulfilled, loan management fees, and transaction fees when transactions get debited. BlockLoan is also looking at the opportunity for secondary trading of loan portfolios, using the transferable nature of tokens and smart contracts.
BlockLoan Token Offering
BlockLoan’s Initial Token Offering is expected to take place in late Q3 2018 and will have a total max token supply of 500 million BLL tokens.
During the Token Offering, 329 million BLL Tokens will be available for purchase starting at a full price of US$0.20 per BlockLoan token. The BlockLoan tokens can be purchased with Ethereum (ETH).
There will be 62.5 million BLL tokens available during the private pre-sale period at the discounted price of US$0.08 per token (60% discount to the Main Sale price). The private sale will be limited to selected participants.
A total of 150 million tokens will be available during the public pre-sale period discounted at 30% to 10% off the Main Sale full price.
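The tier prices quoted above follow directly from the US$0.20 Main Sale price and can be checked in a few lines (the 30% and 10% figures are the endpoints of the stated public pre-sale discount range):

```python
FULL_PRICE = 0.20  # US$ per BLL token at the Main Sale

def discounted_price(discount):
    # Price after a given fractional discount off the full Main Sale price.
    return round(FULL_PRICE * (1 - discount), 4)

print(discounted_price(0.60))  # 0.08 -> private pre-sale price
print(discounted_price(0.30))  # 0.14 -> start of the public pre-sale range
print(discounted_price(0.10))  # 0.18 -> end of the public pre-sale range
```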
To get whitelisted please complete the BlockLoan Whitelist form.
Full details of the token offering are available at https://block.loan/BlockLoan_Whitepaper.pdf
Tech Features of the BlockLoan Platform
The key features that the BlockLoan platform has for borrowers and lenders to use include:
Smart Contracts — This is an electronic agreement that has been pre-programmed through the blockchain ledger. It gets automatically executed when various events within the contract get performed. Steps that were manually performed before through paper applications get electronically encoded into the blockchain in smart contracts.
Pooled Smart Contracts — Different kinds of lenders can use this platform. This includes individual lenders, as well as institutions and expert traders or funds. These pooled resources make it possible for loans to be fulfilled. Borrowers only get the loan when they raise all the funds that they need, and then they can access the funds in crypto on the BlockLoan platform.
Matching Engine and Loan Bidding — Through the use of the matching engine, lenders no longer have to review loan applications manually; borrowers can be matched automatically. The loan bidding process also allows lenders to put in place rules and criteria that they would like real-time applicants to meet.
Digital Wallet Partners — Users will need a digital wallet connected to the blockchain so they can transfer and store cryptocurrencies. These wallets will be connected to your crypto debit cards, making it possible for you to access your funds wherever you are in the world.
Third-Party Data Providers — BlockLoan will have its own credit scoring system that integrates with social data, third-party providers and user history that will be used to assess the authenticity and credit-worthiness of users.
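The pooled smart contract behaviour described above (borrowers only get the loan once it is fully raised) can be sketched in a few lines. This is a hypothetical Python illustration of the logic, not BlockLoan's actual contract code; the class and method names are invented for the example:

```python
# Hypothetical sketch of a pooled loan: individual lenders, institutions
# and funds contribute toward a target, and the borrower can only draw
# down once the loan is fully funded.
class PooledLoan:
    def __init__(self, target):
        self.target = target
        self.contributions = {}  # lender name -> amount contributed

    def contribute(self, lender, amount):
        self.contributions[lender] = self.contributions.get(lender, 0.0) + amount

    def raised(self):
        return sum(self.contributions.values())

    def release_funds(self):
        # Funds stay locked until the full target is raised.
        if self.raised() < self.target:
            raise RuntimeError("Loan not fully funded; funds remain locked")
        return self.raised()

loan = PooledLoan(target=1000.0)
loan.contribute("individual_lender", 400.0)
loan.contribute("institutional_fund", 600.0)
print(loan.release_funds())  # 1000.0, released only once fully raised
```

On the real platform this logic would live in an on-chain contract rather than application code; the sketch only shows the funding condition.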
Conclusion
BlockLoan is the technology of the future when it comes to advancing personal loans and credit. It is built on blockchain technology, underpinned by smart contracts and reinforced by a proven business model. It provides an alternative to traditional lending models that are inefficient and time-consuming, and is geared towards meeting the needs of borrowers and lenders in a modern, global marketplace.
To learn more about BlockLoan please visit:
BlockLoan website: https://block.loan
Download the whitepaper at: https://block.loan/BlockLoan_Whitepaper.pdf
To get whitelisted, please complete the BlockLoan Whitelist form
BlockLoan Telegram: https://t.me/BlockLoan
Token strategy and advisory: Kapitalized.com
Marketing and Distribution: BlockToken.ai
See the team on Linkedin at: https://www.linkedin.com/company/blockloan/ | https://medium.com/kapitalized/introducing-blockloan-crypto-loans-and-pooled-smart-contracts-1614096d3649 | ['Ivan Mantelli'] | 2018-07-28 07:42:45.541000+00:00 | ['Blockchain Lending', 'Banking As A Platform', 'Crypto Lending', 'Crypto Loans', 'Blockloan'] |
Stopping Global Warming? This is what experts are saying
Earlier this year, the Earth saw a huge dip in carbon emissions as nations around the globe locked down to slow the spread of the coronavirus. It offered a glimpse into what the world might look like if we took drastic steps to reduce our carbon emissions to slow the spread of global warming: For a brief moment, smog-choked cities around the world had clear skies.
But according to a new modeling study published in Scientific Reports today, even if we made such drastic reductions permanent, it would still not be enough. The study suggests that if we stopped all human-made greenhouse gas emissions immediately, the Earth’s temperatures would continue to rise because of self-sustaining melting ice and permafrost. These “feedback loops” — in which melting ice causes less sunlight to be reflected back into space, which in turn raises temperatures and causes more ice melt — have already been set into motion, the researchers argue.
Humanity “is beyond the point-of-no-return when it comes to halt the melting of the permafrost using greenhouse gas cuts as the single tool,” Jørgen Randers, PhD, professor emeritus of climate strategy at BI Norwegian Business School and lead author of the study, tells Future Human in an email. That’s not to say we should give up on reducing emissions: Rather, Randers says that the world “should accelerate its effort to cut greenhouse gas emissions (in order to postpone as much as possible the temperature rise) and start developing the technologies for large scale removal of greenhouse gases from the atmosphere.”
For decades, climate scientists have tried to predict the so-called tipping point at which it would be too late to stop global warming — too late to limit the amount the temperature rises, the amount of sea level rise, and the number of lives claimed by both, as well as by other climate-induced ecological disasters — through reducing carbon emissions alone. Climate scientists point to either 2030 or 2050 as deadlines for the world to get to zero emissions before runaway climate change kicks in. But according to the new study, no matter how much we reduce emissions now, warming will continue, and the self-sustained melting of Arctic ice and permafrost that has already begun could continue for 500 years.
“It simply will not stop from cutting manmade greenhouse gasses,” says Randers. “We need to do something more in order to stop it.” He and co-author Ulrich Goluke, an associate professor at Business School Lausanne in Switzerland, make the case that it’s time to pursue more aggressive climate strategies, like carbon sequestration.
They modeled the impact of greenhouse gas emission reductions on the global climate from 1850 to 2500, using data from a variety of sources, including NASA and the National Oceanic and Atmospheric Administration (NOAA). They found that even if we stop all greenhouse gas emissions this year, the Earth would still be 3 degrees Celsius warmer and the sea level would be 2.5 meters higher in 2500 than it was in 1850. And if we take even longer to take action — allowing our greenhouse gas emissions to peak during the 2030s and reach zero by 2100 — the Earth will be 3 degrees Celsius warmer and the sea level three meters higher in 2500.
“It simply will not stop from cutting manmade greenhouse gases. We need to do something more in order to stop it.”
Recent research on global temperature rise due to climate change predicts that Earth’s temperature will rise between 2.6 and 3.9 degrees Celsius over the next few hundred years. NOAA scientists projected in 2017 that the sea level will rise between 12 inches and 8.2 feet by 2100.
To stop this projected rise, we would have had to reduce our emissions to zero between 1960 and 1970, according to the model.
Randers says it’s urgent that larger organizations, like NASA and NOAA, check that their climate models address the self-sustained melting he and Goluke saw in their model. “Their models are much bigger than ours, and may reveal counteracting forces that can stop the melting we observe,” he says.
Self-sustained melting is caused by three things. First, there’s the ongoing melting of Arctic ice, which decreases the area of ice that can reflect the sun’s light and heat back to space. As a result, the remaining ice and permafrost absorb more heat. Second, higher temperatures increase the amount of water vapor in the atmosphere, which in turn increases the humidity and temperature. And third, changes in greenhouse gases driven by emissions from permafrost melting and absorption of carbon in biomass and oceans also contribute to increasing temperatures and lead to melting.
To stop self-sustained melting — and the expected rise in temperature and sea level after emissions cease — Randers says the world must undertake a massive effort to capture carbon out of the atmosphere and store it back underground, a technology known as carbon sequestration. And we would have to start sucking at least 33 gigatons out of the air every year, starting this year. For comparison, all animal life on Earth collectively weighs an estimated two gigatons.
There are already lots of carbon capture and storage projects of varying sizes underway all over the world — startups like Global Thermostat and CO2 Solutions and multinational corporations like Shell and Chevron have launched such projects. But none of them attempts to tackle carbon capture on the scale that Randers recommends. He says we will need tens of thousands of huge plants to capture and store the carbon for at least 100 years.
“It is a very big job,” Randers says, “but it’s equivalent to the work involved in putting all the manmade CO2 into the atmosphere, which has taken us 100–200 years of industrial activity. Getting it out again will be the same type of effort.” | https://medium.com/@rosoc66524/stopping-global-warming-this-is-what-experts-are-saying-a4e67a646caf | [] | 2020-11-13 17:22:04.336000+00:00 | ['Climate Change', 'Weather', 'Future', 'Expert'] |
Corporate Organizations and the Sustainable Development Goals. | Corporate Organizations and the Sustainable Development Goals.
The Global Goals aren’t just the business of a select few or the United Nations alone. It is the business of every single person, government, nation, and all businesses worldwide.
In every nation, businesses usually carry out Corporate Social Responsibility (CSR) where they conceive and birth projects that have social impact in their immediate environment. This is one way the communities around the company can benefit from the existence of such businesses in their locality.
Now with the rise of the 17 Global goals, corporate organizations can do more. The possibility of touching the world from the local standpoint is now more possible than ever before. Just as governments have their role to play, the private sector also has a critical role to play in ensuring the success of the Sustainable Development Goals. While the SDGs were being formulated back then in 2015, the pivotal role the Private Sector would play in its success was highlighted. As a matter of fact, the SDG document which was adopted and agreed upon by the 193 member nations of the UN stated that, “private business activities, investment, and innovation are major drivers of productivity, inclusive economic growth and job creation.”
In most economies of the world –especially those that are capitalist in nature, the private sector is the main driver of the economy. Many changes and development in such nations can be traced to the active roles the private sector plays. If the private sector can play such vital roles that lead to the development and growth of nations, it follows that they can also be instrumental in the achievement of Sustainable Development Goals.
Moreover, the combined efforts of corporate organizations and governments will further help to accomplish the goals.
One of the beautiful things about the SDGs is the fact that corporate organizations have different options to pick from and they can pick the goals that align the most with their mission and vision.
For instance, companies like Visa have aligned themselves with the goal that addresses poverty. This they do by bringing financial services to the underserved.
General Mills gives meals to local food banks. By so doing, they are helping to reduce the problem of global hunger.
Microsoft through its YouthSpark Program helps young people acquire computer skills. This in itself will help to ensure that young people get decent work and this will translate to economic growth.
Siemens aligns itself with communities that foster green and sustainable economic solutions. By moving into renewable energy, we can build sustainable cities and communities.
JetBlue teaches their customers and crew about climate change. Proper orientation and information about the challenges of climate change will help curb wrong behaviour and actions that affect our environment.
Discovery Channel supports clean oceans. This means that organisms living in water can be better protected.
These are just few examples of the many companies all over the globe who have aligned themselves to one goal or the other and are contributing their quota to see change happen.
How can corporate organizations really help in achieving the goals?
*Job Creation: No government can fully carry out all its operations without the support of the private sector and foreign direct investment. Corporate organizations provide jobs to the populace of whatever country they are in. As these organizations expand, more people are employed.
The Nigerian banking sector and manufacturing sectors employ thousands of Nigerians thereby reducing the unemployment rate. The activities of these corporations help to generate jobs and income needed to overcome poverty.
*Expertise: Corporate organizations often have some of the most brilliant and competent personnel working within them. The combined effort of these highly intelligent and competent personnel can help to tackle the national problems inherent in their countries. Moreover, they will be able to lend their skills, expertise, and knowledge to tackling some of the global problems the SDGs seek to solve. | https://medium.com/@iholuwatoby/the-role-of-corporate-organizations-in-sustainable-development-goals-cc2437bc924e | ['Oluwatobi Aigbogun'] | 2019-03-21 08:47:06.784000+00:00 | ['Development', 'Csr', 'Sdgs', 'Sustainable Development', 'Corporate Culture']
I’m Tempted To Not Get My Child Anything For Christmas This Year | I’m Tempted To Not Get My Child Anything For Christmas This Year
Photo by Annie Spratt on Unsplash
Birthdays. Christmas. New Years. I don’t know when it began but the older I get, the less excited I feel towards these events. Perhaps it’s just part of adulting and growing up, but this year I’m struggling more than others.
After having my child at the end of 2017, I went on a financial diet. That meant every dollar and penny became carefully counted and spent accordingly. As I recalibrate my career, my work environment, and general life, spending just hasn’t been part of the main equation. Only the essential things in life that require money gets assigned a budget.
As I stood in line with my child to have a moment with Santa, a twelve-year-old was racking his brain for what to ask the jolly man in red. It became a toss-up between a new iPhone or a mini-fridge — not because he needed it or actually truly wanted either of them but because he couldn’t think of anything else pricier.
I remember looking towards my two-year-old, who had found the coloring pencils and was carefully putting all the chairs back into place. I tuned out the twelve-year-old and remember thinking with a slow sigh — thank f**k.
Contentment in the little things
There are two types of kids in the world — the ones that are happy with the little things and those who are never satisfied. Currently, my toddler is the first of the two. It might change but for now, I am grateful.
I want to keep her in this state for as long as possible. Once the materialism starts and the feeling of missing out slips in, it’s a slippery slope down the rat race and keeping up with Joneses.
Currently, her mind gets blown when we go under bridges and drive past big hedges. Her sources of excitement are simple — cats, flowers, dinosaurs, books, the free fruit basket at the grocery store and splashing water.
I don’t know when all this will change but at some point, materialism kicks in and little kids turn into little humans that want things.
All the junk and all the trash…
My mother likes to hoard. My dad is not much better. In short, we had a lot of stuff.
When I moved out, I thought I was free but then fell into the same habits. Over time, an empty house became filled with things. I’m not sure where they all came from or how they entered the house — only that they’re here now and have decided to stay.
As I tiptoe around the clutter that’s built up like settled dust, I’m tempted to just throw everything away and begin again, except this time smarter, with more appreciation and space.
This Saturday, rather than going Christmas shopping, I’ve got plans to purge the house and become a minimalist.
All this, if I succeed, will be sufficiently ruined once the Christmas rounds begin and the child gets given a million and one presents from family members.
They say a cluttered house equals a cluttered brain.
A cluttered brain leads to a stressful life.
A stressful life leads to the world falling apart around you.
Well, no one really says that but that’s what I’ve figured out so far over the past few years of having my own space. I’m struggling to properly adult as my life obligations and quest to re-create my career take priority over the life admin tasks.
No presents for next year?
I’m tempted to tell everyone not to give the toddler any presents next year — rather, to gift experiences instead. Or contribute to her investment account. Maybe it’ll help her buy a house or pay for her student loan if she chooses to get one.
Over the past two Christmases, I didn’t organize gifts in return for the adults that gave my toddler presents. What, exactly, would an under two-year-old give as presents? I didn’t want to give things just for the sake of giving.
I’m not a fan of empty gifts and perhaps that’s why I struggle with events that require giving things.
This year, I’ve organized a present making activity for the child, now that she’s old enough and aware of what’s happening. She’s making the presents this year. If it’s not perfect, it’s not going to stress me out. I didn’t want Christmas to be a shopping spree kind of event. I want the preparation for it to be something personal, infused with a touch of thought and care.
The activity for the toddler has its purpose as well. Rather than traversing through the mall trying to find something on a budget, I ‘help’ her produce a gift instead. It’ll probably take an entire day, keep things simple for this mommy and I don’t have to worry about judgment since, well, the toddler made it.
It’ll probably turn out more memorable this way.
The gift of sanity
It’s hard to figure out what exactly to give and a part of me wants to give her a sane mommy — one that isn’t stressed by all the stuff that surrounds her. I want to give her a mommy that is attentive and calm, one that is wholly present rather than distracted by her million other obligations and a backlog of life admin.
It sounds selfish but perhaps that’s what my toddler actually needs.
Not more physical things but a parent.
Maybe I’ll throw in new watercolors and some brushes so we can do some painting together. Or a set of markers.
I don’t know. We’ll see. I still have a few days left to decide. But this Saturday — it’s happening — the house is getting a purge to make mental space for a sane mommy to emerge and make the toddler’s year to come her best year yet. | https://hustlethrivegrow.com/im-tempted-to-not-get-my-child-anything-for-christmas-this-year-6441777cd2f1 | ['Aphinya Dechalert'] | 2019-12-20 08:47:32.166000+00:00 | ['Parenthood', 'Motherhood', 'Life Lessons', 'Life', 'Parenting'] |
Outdoor Date Ideas She’ll Like But Mosquitoes Will Love | Right now indoor dates are hard to organize. Forget a romantic dinner at an over-priced steakhouse or a quirky game of bowling. Since those options are off the table, try taking the romance outside with of these romantic outdoor date ideas. That special someone will end up itching for more. Or maybe just itching.
Date idea #1: A picnic at a cute lookout spot.
Your date will be so entranced with the delicious picnic you created that they won’t even notice you forgot to bring the bug spray. What’s more, a high vantage point is the perfect place to discuss the question, “What are we?” Your journey there will have you brushing up against each other as well as mosquito-filled foliage. By the time you arrive, your feelings won’t be the only thing you’re itching to get off your chest.
Date idea #2: Apple picking in early October.
Early October is a great time to go pick up apples and a blood-borne disease at your local orchard. Get in the fall spirit as you and your loved one pay $50 to perform manual labor so you can take home a small box of rotten apples in exchange for an Instagram photo that filters out the millions of blood-suckers.
Date idea #3: A bike ride through very unpopulated part of the forest.
This is the perfect way to show someone that you want to be alone with them even with the impending threat of getting lost. Battle your deep fears of being murdered in the woods and rub her arms with the mosquito repellent that will do nearly nothing for the growing sense of unease you feel as darkness sets in and you lose the path. Don’t worry! The mosquitoes will never forsake either of you.
Date idea #4: A moonlight walk through the marshes.
There’s really no better place to get to know someone quickly than a critter-filled swampland. Put her to the test physically and mentally as you see exactly how many mosquito bites and interactions with strange slimy creatures she can withstand before bailing on you and never speaking to you again.
Date idea #5: A quick swim in the watermelon wading pool you haven’t emptied out since the summer.
This date idea is the perfect way to show either a new love or an old flame that you are resourceful: when all the pools closed this summer, you said, “No, no! I will make my own pool or at least buy one from Target.” The stagnant water will reflect your earnest faces as you confess your love. It will also reflect the deep discomfort you experience as the swarm that’s been born from these still waters descends upon both of you to feast. | https://medium.com/jane-austens-wastebasket/outdoor-date-ideas-shell-like-but-mosquitoes-will-love-7e857abe02ce | ['Isabel B'] | 2020-10-11 19:54:25.327000+00:00 | ['Comedy', 'Satire', 'Relationships', 'Humor', 'Dating']
AMA Recap: Gem Chasers & Gather | Gem Radar ✪:
Ok @gemchasers, we have Gather Network here with us today for our AMA series. A warm welcome to @RJerath, the founder of Gather. How are you doing? 🙂
Reggie — Gather Founder:
Hey @radargem I am doing well! Thank you for hosting us, looking forward to a great AMA!
Gem Radar ✪:
We do too :). Ok so let’s start with you, Reggie. Could you please introduce yourself and your past experiences related to cryptocurrency? What is your past work experience and how does it relate to Gather?
Reggie — Gather Founder:
For sure, I’ll start with history and what led to Gather and my first exposure to crypto
Gem Radar ✪:
Perfect
Reggie — Gather Founder:
Started my career in Oil and Gas managing a couple of bases in and out of Iraq, managing about 200 or so people; I was there for about two years. Post evacuating the staff, I decided I wanted to shift industries. — I ended up working in advertising for a bit — with different agencies — the first being MSLgroup ( where I met our Marketing Advisor Naren ). Post this I had my first exposure to entrepreneurship. It was a startup similar to what Deliveroo is — provides the technology and riders, making deliveries for banks, supermarkets etc. Post this — I was hired as a consultant for a website, to help them figure out how to monetize. — That is where everything started. As for my first exposure to crypto — back when I was at university, me and a former founder were room mates, and someone told us about Bitcoin. Being typical college kids, we were broke, so we bought LTC lol. This was when BTC was around 600 or so
Gem Radar ✪:
Reggie, so far I have never seen such an interesting background in an AMA, haha! Wow, this is so cool. So how did Gather start then?
Reggie — Gather Founder:
Hah, thanks
Well while trying to figure out ways to monetise the site, I discovered something called coinhive — for those not familiar — it was one of the first JS based XMR miners. — Pirate Bay was using it for quite some time. So that idea itself — trading processing power for revenue — was very powerful. But since I had worked in advertising, I knew major publishers would not accept it. So that is when we got the initial team together. — I’ll share some images here of them.
The idea back then in 2018 was to build a really simple multi-miner, but with the features that publishers would need — i.e. it would mine the most profitable crypto at the time, and pay out in BTC/ETH/GTH or Fiat, plus a few other features
Gem Radar ✪:
That’s awesome, and thanks for sending over the Core Team and Advisory board, so just to be clear here. Gather started before the #DeFi trend. So where is Gather heading to now? Is your plan still the same or will you join the trend?
Reggie — Gather Founder:
Oh yea, we started in 2018 hah, we raised about 350k then, planned to do an ICO, but the bear hit. So we self funded with life savings essentially. Then went through a pivot to what you see as Gather now, and closed the private sale this year. There is a good tweet thread that sums this up really well one sec
Well
There are a lot of DeFi projects out there
and we started Gather with a clear problem to solve, I mean if we wanted to pivot to Defi, it would be before we had listed etc
Our expertise as a team and company lies in the problems we are solving. So it really does not make sense to add another feature just for the sake of being branded as DeFi
https://twitter.com/chatwithcharles/status/1311042919311253505?s=20
However, we can monetise DeFi apps etc; we partnered with CHR for this express purpose, along with FRM
but we are not a “DeFi”-based company
Gem Radar ✪:
Definitely agree, the team’s persistence throughout the bear has been amazing. (I’ve been following Gather for a while and even hold some myself.) I love the fact that Gather is not a #DeFi-based company, yet they are utilizing the trend and can build on it, such as through monetisation.
For the people who don’t know Gather Network, what is the best way to explain its use case to someone, in your own words? I mean, why should people use Gather? What are the sole benefits of Gather?
Reggie — Gather Founder:
In as few words as possible
Gather is a platform that provides publishers/apps an alternate form of monetization, without having to rely on ads, and enterprises and developers a cheaper form of distributed cloud computing
Here is an image that depicts the network: a multi-layer marketplace platform
Gem Radar ✪:
So how is that possible? How does a company, publisher or app get monetisation without having to rely on ads?
Reggie — Gather Founder:
Really simple honestly — for websites you sign up, and then get access to the admin panel, you generate a small script and put it on the backend of your website, and that’s it! Your users will then see a disclaimer ( which you can customise ) — Check out this thread — we did a UI demo a few weeks ago. — For apps, it would be via an SDK. A WordPress plugin is planned. Once the user provides consent ( we are all about privacy and consent, we do not collect data ) a very small portion of processing power is used to secure the Gather network. It would use the same amount of processing power needed to view an ad
https://twitter.com/Reggi3J/status/1322606917081071616
Gem Radar ✪:
Seems really interesting and useful tbh. Gather’s product is really flexible, that’s what I like about it. Ok, how about smaller-scale people? Maybe an influencer that has a website, or a small online business that doesn’t have any relation to crypto. Can they still use it?
Reggie — Gather Founder:
100% literally my grandma and her baking site can use Gather to monetise. — We identified that for publishers and applications — our whole narrative needs to focus not on crypto. Really leave crypto out of it. Because for them — they are just earning revenue here, but when they go to get their revenue. That is when they have their first ever exposure to crypto — get paid in Fiat or BTC/GTH/DIA/FRM/GIV/CHR and I am sure I’m missing a few
but yes, very small sites ( less than 10k sessions with 120 seconds, won’t stand to make a lot, still make something but not a lot )
my point is that Gather’s users — most of them would have their first experience with crypto by using the platform
Gem Radar ✪:
This is where things get interesting for Gather, in my opinion: the ability to also cater for non-crypto services. It is also good for crypto as a whole, since Gather helps to introduce new adopters to the crypto market. So this leads me to ask about Gather’s tokenomics. Briefly, what are the tokenomics of Gather, and why have you chosen to structure them this way?
Reggie — Gather Founder:
For sure, so which parts hah — the token sale metrics? Inflation rate? deflation mechanisms?
I’ll paste a few images as well
Gem Radar ✪:
So basically I think what will be interesting to know is the inflation and deflation mechanisms
Reggie — Gather Founder:
https://gather.network/docs/Gather_Token_Economy.pdf — this is pretty in-depth here, but let me summarize a bit
Gem Radar ✪:
Ok for sure if it’s possible 🙂
Reggie — Gather Founder:
So in terms of inflation — the majority comes from block rewards — which after about 10 years — is essentially 0 — via Halfing/halving ( depending on how you like to spell it ). At this point we are looking at a circular economy based on TX fees
In terms of deflation — a lot of it is derived from buyback and burns using USD generated from operations ( payments from cloud customers )
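To illustrate how halving drives block-reward inflation toward zero, here is a quick sketch. The initial reward and halving interval below are made-up numbers for illustration, not Gather's actual emission parameters:

```python
# Hypothetical halving schedule: the block reward is cut in half at a
# fixed interval, so annual emission from block rewards shrinks toward zero.
initial_reward = 100.0  # hypothetical GTH per block in year 0
halving_every = 2       # hypothetical: reward halves every 2 years

for year in range(0, 12, 2):
    reward = initial_reward / (2 ** (year // halving_every))
    print(f"Year {year:2d}: {reward:7.3f} GTH per block")

# By year 10 the reward is 3.125 GTH per block, about 3% of the original,
# which is why block-reward inflation becomes "essentially 0" over time.
```

With five halvings the per-block reward has fallen by a factor of 32, and each further halving cuts what remains in half again.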
Gem Radar ✪:
Perfect that sums it up pretty well actually!
Reggie — Gather Founder:
So on revenues of 1.681M you are looking at a buyback and burn of 336K — rough numbers, might be a bit off — we just introduced that all Masternode holders would also earn 6% of gross profits from cloud revenues, paid in USDT ( apart from the GTH APY )
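The figures quoted above imply a buyback-and-burn ratio of roughly 20% of revenue, which a quick check confirms:

```python
# Quick check of the buyback-and-burn ratio implied by the quoted figures.
revenue = 1_681_000  # US$ revenue from the example above
buyback = 336_000    # US$ allocated to buyback and burn

ratio = buyback / revenue
print(f"{ratio:.1%} of revenue")  # 20.0% of revenue
```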
in terms of traction: this is where we are so far
Gem Radar ✪:
So what about the benefits of holding $GTH? Just think of an ordinary person looking for an investment: will $GTH be an option for them to invest in? What sort of incentives can they expect?
Reggie — Gather Founder:
That is a question.. I cannot directly answer hah
But I can speak of the utility of $GTH
Gem Radar ✪:
I understand price talk may not be allowed on your behalf and we respect that here, but yes overall the utility of $GTH in regards to someone wanting to just buy some $GTH and hold it.
Reggie — Gather Founder:
GTH’s value is derived from serving as a medium of exchange within the ecosystem, along with providing holders the opportunity to vote on the future of Gather. The specific features that use $GTH as a medium of exchange include:
1) Gas fees and network interaction charges, settlement and processing-power-related charges for the Gather Cloud, and settlement of Gather Online Rewards.
2) Providing a financial incentive to nodes to act honestly.
3) Entitling staking holders to the right to participate in the governance and future of the Gather ecosystem
Speaking of nodes as well:
- We currently have opened reservations for masternodes. When the main net goes live — we are looking at 80 masternodes — of which 16 ( 4m GTH ) were reserved in a FCFS manner in 4 minutes hah — we only planned 10 ( just to test out demand ) but decided to let everyone who had applied reserve. We have opened interest batch 2 — and as of now there are about 7M tokens to be sent to reserve 20 additional masternodes. This would equate to 20% of supply locked in nodes. The 2nd batch should open next week
Gem Radar ✪:
What security measures has $GTH taken so far? Has the contract been externally audited? Have any other implementations taken place for clients who actually use the Gather Network script in their dapp or website?
Reggie — Gather Founder:
For sure, you can find our ERC-20 audit here:
https://gather.network/docs/Cryptonics_Gather_Audit.pdf
and for specific security precautions, best practices are always used — we have a good dev team ( rotating passwords and pem files every so often, user-based access etc )
Gem Radar ✪:
Great, thanks for that Reggie. What about developments, partnership deals or any major updates in general? What can we expect in the near future for $GTH?
Reggie — Gather Founder:
Can’t speculate about exchanges, but here is our list of partners and investors. I think I can tease one non-crypto partnership
Gem Radar ✪:
Oh how we love a tease 😉
Reggie — Gather Founder:
And for the tease, we have been negotiating with a very large streaming platform to use Gather when we go live. I don’t want to delve into too many details hah, otherwise the cat is out of the bag
Gem Radar ✪:
That is where I’d like to see Gather heading. Partnerships with streaming platforms! Ok, so we have preselected 3 questions for $GTH; please let me know when you are ready to answer them 😄
Reggie — Gather Founder:
Slowly but surely!
Good to go
Gem Radar ✪:
@ajibang1 asks, What is your relationship to $DIA (Oracle Project)? Why partner with them?
Reggie — Gather Founder:
We will use DIA’s oracle services for price feeds, and we have added $DIA as a payout currency within Gather. There are also ongoing conversations about the potential use of oracles in relation to firewalls
Gem Radar ✪:
@CryptoBoooming asks, Can you please explain the concept of Masternodes and their purpose in Gather ecosystem?
Reggie — Gather Founder:
For sure, Masternodes provide the backbone of the whole ecosystem. They have two major functions: 1) They help produce and validate blocks 2) They provide the infra for the decentralized cloud ( compute + storage )
Gem Radar ✪:
@SayaSiapa999 asks, Do the token holders have the right to participate in the governance of the project? What kind of decisions can they vote on about the project?
Reggie — Gather Founder:
Yes they do !
Gem Radar ✪:
Ok would it be possible to elaborate on what kind of decisions they can vote on as an example?
Reggie — Gather Founder:
they can propose whatever they would like essentially, but as with any governance system it needs a consensus and a final approval from the Gather foundation. I mean you could make a proposal to change the name of the foundation to BoatyMcBoatFace if you wanted. Here is a doc that goes a little more in-depth about how governance would work: https://gather.network/governance.html
Reggie — Gather Founder:
There is no cap at all, anyone can come and propose whatever they want. It just needs to be passed by everyone
Gem Radar ✪:
Perfect Reggie thank you so much for all of the information you have provided to us today, this concludes our AMA this time around. It was lovely to have a chat about you and your project. We wish the best for Gather! | https://medium.com/@gemchasers/ama-recap-gem-chasers-gather-d990c2eb02e1 | [] | 2020-12-18 16:21:56.853000+00:00 | ['Cryptocurrency', 'Defi', 'Altcoins', 'Staking', 'Bitcoin'] |
How great UX design starts and ends with the user | How is it that some products and digital interfaces can end up so poorly designed? The world is littered with such examples that are amusing and frustrating in equal measure!
Microwaves are notorious for this. This model below has 31 buttons, which is a little over the top when all I really need to do is adjust the power level and time required.
Kitchen appliances are clearly a theme. My experience with this slow cooker was made so much worse when I couldn’t work out whether it was on or not. Just put a light on please! I can’t leave the house till it slowly warms up. Not helping my morning much…
It’s not just electronic devices that this applies to of course. Check out these new scissors. How should I open that vacuum sealed packaging? With another pair of scissors maybe? The mind boggles.
Designers can become so engrossed with preconceived ideas of the look and feel of a product, they forget to solve some fundamental questions. Crucially these questions revolve around the user and not the designer’s view of the world.
What do they need?
What do they want to achieve?
How will they use it?
What are their cognitive and physical limitations?
This cartoon illustrates the problem perfectly! [1]
Image credit — UXMag
Ultimately products will be used by real people, so the design process should also start with real people. User centred design (UCD) is the philosophy that positions the user at the centre of the creative process.
Throughout all of the development phases, there is a focus on a deep understanding of who will be using the product in order to optimise the design for those users.
The international standard ISO 13407 is the basis for many UCD methodologies [2]. It can be embedded into any form of project management process such as Agile, Waterfall or otherwise, so there really is no excuse for not applying it.
In the 1980s, Donald Norman advocated the practice of UCD, promoting that such a philosophy was needed to make products usable and understandable.
As a result, users will effortlessly and instinctively know what to do, and will be less likely to become frustrated with the product [3].
If that is the case, then the product has a much greater chance of being a success. It’s as true today as it was then.
“Good design is actually a lot harder to notice than poor design, in part because good designs fit our needs so well that the design is invisible.” (Donald Norman).
Whether designing a physical product or digital interface, the process is the same. UCD starts with the user, by establishing their needs and requirements, and where and when the product will be operated.
Techniques including focus groups, questionnaires, interviews, physical measurements and modelling, observation and ethnography may all be used.
UCD also ends with the user, by field testing the product to validate those user needs and requirements and to ensure the product is desirable, attractive, usable and accessible.
Image credit — pinterest
I don’t believe it needs a rocket scientist to work out whether this piece of genius design is meeting user requirements… the loo for two, discovered at MIT no less.
Where was the UCD? What does the user need? What do they want to do? How will they use the product? There are no words!
How to follow a user-centred design process
Image credit — userbub.com
Variations of specific details of the UCD process exist, but a common thread persists where the user is involved in all design and evaluation phases [4].
UCD is iterative, allowing product design to evolve and improve until reaching an optimal state. It is typically categorised into four phases:
1. Context of the product is established first. For example, when designing an in-vehicle interface, factors such as reach, readability, vision angle, cognitive workload and potential distraction from the driving task are to be considered. The design of mobile apps demands a very different scrutiny.
2. User requirements are then created and agreed by stakeholders for clear definition to design teams. A number of techniques can be utilised, such as focus groups, interviews or user observations. All too often these critical requirements are poorly defined, introducing costly changes later.
3. Design can now actively begin and will be continuous throughout the product development lifecycle. This phase may also be subdivided as the design evolves from concept to completion.
4. Evaluation follows the design phase, and a number of iterative design/evaluation cycles may be needed to improve the product, pending feedback. Testing realistic prototypes is preferred, subject to time and resource. Lo-fidelity pen and paper prototypes still have a role, although hi-fidelity rapid prototyping is often required later in development to simulate representative designs and complex system interactions.
As product design reaches completion, the need to test in situ becomes more significant, as some issues may not be apparent in short desktop-based studies. They may require users to ‘live with’ a product in real-world situations for those issues to surface.
Acting on recommendations from evaluation is crucial for evolving design. Test protocols must be designed appropriately to answer open design queries with reliable and unbiased data.
Good moderators also unearth issues by observing and listening to participants, following up on interesting comments without ‘leading’ them or introducing bias.
Verbatims provide valuable information and clues to problems that numerical data might not (i.e. subjective ratings or task times).
What can happen if the user is ignored?
I wonder how much evaluation and UCD went on at Microsoft for Windows 8? Upon removing the ‘Start’ button there was a social media storm and a consumer backlash.
After some red faces and backtracking it was re-introduced for Windows 8.1. The shut down process is not much better.
This happened when Microsoft wanted to develop a platform that worked on all its devices from smartphone through to PC.
Unfortunately the metro interface which was designed to be touch friendly was in effect the same underlying system (more or less). The fusion created chaos.
Image credit — Microsoft
Microsoft isn’t alone in UX design calamities. Apple has been at the forefront of great design, yet iTunes remains terrible. Originally a mechanism to quickly download music, it became a monster.
Quality of the music experience has been diluted in adding masses of functionality.
Major issues include:
An unwieldy interface that is unnecessarily difficult to use and infuriatingly slow
Apple tried to crowbar way too much functionality into a single application
Lack of integration with social media
iTunes has for a long time been the great faux pas, perhaps only challenged by Apple Maps. It is designed around a business model rather than the needs of the user.
Perhaps in a time where streaming content is the preferred experience, the days of iTunes could well be numbered. As a user of Apple products I do hope so.
iTunes’ latest design in 2018
Measuring success of UCD
When evaluating a product, it’s important to consider if user requirements are met, the product is intuitive and usable, and evokes positive emotive responses such as user satisfaction and delight.
Image credit — pinterest
In some cases, it might also be an opportunity to observe if there are any potential adverse effects on the user’s safety or well-being in a controlled environment.
For example, simulations are often used to assess whether in-vehicle displays are distracting while driving.
Criteria for success will vary depending on the product and its intended context for use. Core to whether a product will be successful is its ‘usability’.
Major factors that contribute to this include being learnable, memorable, effective and efficient. Desirability, usefulness and delightfulness may also be used as measures for how a product resonates with users [5].
Errors will naturally happen when interacting with a complex product. That is symptomatic of human behaviour.
However, practising sound design principles and making good use of evaluation feedback will alleviate this by minimising the proliferation and the impact of such errors.
Yes, it’s worth the effort!
It is often clear when interacting with a product or interface whether UCD has been applied. Remote controls are a bug bear of mine.
As users we take them for granted, but it’s easy to get the design very wrong and it seems like I am not the only one who thinks so [6].
Take the Sky+ model, which sits wonderfully in either hand, allowing for comfortable, single-handed, often blind usage with minimal frustration. Frequently used functions are easily accessed and reliably operated.
Buttons for non-critical, often mind boggling features are thankfully sacrificed and accessed via on screen-menus.
Apple have a completely minimalist approach, albeit with a different functionality. The product is beautiful in hand and a joy to use.
Compare these to some standard models, which suffer from ‘feature-itis’, with buttons all over the place and little thought given to ergonomics. Then there is the Sony remote for Google TV: designed for a brain surgeon?
Enough said. The difference is startling. As technology advances, the challenge is to retain design simplicity despite increases in functionality.
Search mechanisms for on-demand content are a prime example, and although voice-control is a supportive interaction for that task it is not desirable for everyone.
Conclusion
There are undoubted benefits of working to a robust UCD:
Products will be easier to operate for users, who will encounter fewer errors and have an easier learning curve
Users will experience greater satisfaction and more trust in the product
Development costs will decrease with late change reduced
Customer retention and loyalty will strengthen
Brand credibility will improve and sales will increase
Working directly with users will aid the design of the best possible customer experiences, but it’s not always easy. Interpretation of test data can be an arduous process.
Participants can be notoriously ambiguous and are affected by fatigue and general emotional state. Furthermore, individual differences can produce conflicting results.
Nonetheless, it is worth the journey. As Faulkner stated, working with users can give you problems, but the alternative of ignoring them would be foolhardy and should never be an option [7].
References
[1] https://uxmag.com/content/comic-the-ux-designer-paradox
[2] https://www.usability.gov/what-and-why/user-centered-design.html
[3] Norman, D.A. (2002). The Design of Everyday Things. USA: Basic Books.
[4] http://www.theuserhub.com/literature/human-centered-design-process/
[5] Krug, S. (2014). Don’t make me think. Revisited. A common sense approach to Web and Mobile Usability. USA: Pearson.
[6] Rogers, Y., Sharp, H. and Preece, J. (2012). Interaction Design: Beyond human computer interaction. 3rd Ed. Great Britain: Palgrave.
[7] Faulkner, X (2000). Usability Engineering. Great Britain: John Wiley and Sons Ltd.
Get in touch with the author
Darren Wilson, Co-Founder and Director of Design at UXcentric
[email protected]
07854 781 908 | https://uxdesign.cc/how-great-ux-design-starts-and-ends-with-the-user-5a8a248f9551 | ['Darren Wilson'] | 2019-07-24 23:34:38.350000+00:00 | ['User Interface', 'User Experience', 'UX Design', 'Usability', 'UX'] |
Building a Node.js server | If you want to use the server starter directly without going through the tutorial, find the code on Github. Link to the next parts are at the bottom of this page.
In Part I we built a basic Koa.js server with Typescript and improved our workflows with some tooling. The next step would be to set up a database to store and retrieve data. We will use MongoDB, a NoSQL database. But we’d like to have a single and simple way to install it on each developer’s machine instead of relying on tedious manual configuration. We’d also like to make the installation process deterministic, with configuration stored in files instead of set by the OS itself (environment variables).
So let’s containerize our apps, including the existing server and the future MongoDB database. It will make installation of the whole back-end (which will now be considered as a suite of services) much easier, with only one step.
Requirements: Knowing the basics of Docker and MongoDB. We won’t cover the installation of docker as it is OS-dependent and requires some specific tweaking for everyone. Please refer to the getting started and/or installation guide provided by Docker.
Robin level: dockerize the server
With docker installed on your machine (see the requirements above), create the following Dockerfile:
(Eventually update the node and typescript versions)
These are the instructions to dockerize the existing server. Let’s get through the file:
We request a first container as the builder. It copies our entire working directory into /usr/src/app on the container (line 2 & 4). It installs the devDependencies and builds the project (line 6). We use a second container (runtime-container) that only contains the generated files, with no “useless” dev dependency (line 8 to 14). Line 10 and 16 we are defining and exposing ${SERVER_PORT} . We won’t use it yet, and it will remain undefined. However, the Koa server ( src/server.ts ) defaults to 3100 and we can explicitly expose the container’s port by using the -p argument when running Docker. We will only use the ${SERVER_PORT} argument later in this tutorial.
Run the following commands to build and run the server container:
docker build -t <name> .
docker run -p 3100:3100 -d <name>
Go to http://localhost:3100/health and check that we still have a 200 — OK response.
Batman level: dockerize the whole back-end
The benefit of using containers grows larger when our application is composed of many different services, like our back-end is. So let’s add a MongoDB service, in a container.
Add a docker-compose.yml file. Docker-compose files operate at a higher level than Dockerfiles. While Dockerfiles are nice to configure a single container/service, docker-compose files are great for listing all the containers/services.
Things are only getting slightly more complex from now on, I promise. I recommend keeping the Docker Compose API reference nearby if needed.
Starting from docker-compose version 3.4, we can set local variables. That’s what we’re doing here with DB_NAME because we’re going to use it many times in the file. Consider it as an alias for the string “database”.
because we’re going to use it many times in the file. Consider it as an alias for the string “database”. We are setting up many Environment Variables for both containers (all the variables prefixed with a dollar ($) sign). More info on that below.
We are defining 3 containers/services: database , mongo-express and server .
, and . For database we’re using the Docker image made by the community ( mongo:4 ).
we’re using the Docker image made by the community ( ). The database container is using volumes. The first volume is used to copy/paste a database initialization script into the container. More info on that below. The second one ( mongodb_data_container ) is used to persist data on the host when the container is turned off. It’s managed by Docker.
container is using volumes. The first volume is used to copy/paste a database initialization script into the container. More info on that below. The second one ( ) is used to persist data on the host when the container is turned off. It’s managed by Docker. mongo-express is a “Web-based MongoDB admin interface” (repo). It will be super useful to browse the database from our machine through the database container. 🤯
is a “Web-based MongoDB admin interface” (repo). It will be super useful to browse the database from our machine through the container. 🤯 For server we’re using the custom Docker image we prepared at the previous step with the Dockerfile (see line 39).
Here’s what it looks like:
The back-end architecture
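The docker-compose.yml gist is also missing from this copy. A sketch consistent with the description above might look like this (service names follow the text; environment variable names are assumptions to be matched against your .env file, and the line numbers cited in the text refer to the original file):

```yaml
version: "3.4"

x-db-name: &db-name database  # compose >= 3.4: reusable local variable

services:
  database:
    image: mongo:4
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
      MONGO_INITDB_DATABASE: *db-name
      DB_USERNAME: ${DB_USERNAME}
      DB_PASSWORD: ${DB_PASSWORD}
    volumes:
      # Initialization script, run only when the volume is empty:
      - ./init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro
      # Docker-managed volume persisting data between restarts:
      - mongodb_data_container:/data/db
    ports:
      - "27017:27017"

  mongo-express:
    image: mongo-express
    depends_on:
      - database
    environment:
      ME_CONFIG_MONGODB_SERVER: *db-name
      ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
      ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
      ME_CONFIG_BASICAUTH_USERNAME: ${MONGO_EXPRESS_USERNAME}
      ME_CONFIG_BASICAUTH_PASSWORD: ${MONGO_EXPRESS_PASSWORD}
    ports:
      - "8081:8081"

  server:
    build:
      context: .
      args:
        SERVER_PORT: ${SERVER_PORT}
    depends_on:
      - database
    environment:
      SERVER_PORT: ${SERVER_PORT}
    ports:
      - "${SERVER_PORT}:${SERVER_PORT}"

volumes:
  mongodb_data_container:
```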
🔐 A way to manage environment variables
How do we set the variables mongo needs without hard-coding them in the docker-compose file, where they would eventually be pushed to the repository remote and made publicly available? We can use .env files!
I recommend having a .gitignored .env file, copy/pasting the keys from a git-tracked sample.env file and filling the values with your own secrets. This way, each team member can have their own secret variables.
Not hard-coding the values in the docker-compose file will also be useful later when we deal with multiple environments (like development, testing, production).
So let’s write the sample.env file, duplicate it and rename the copy .env :
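The sample.env gist is missing here; a minimal sketch, assuming variable names consistent with the compose setup (all of the values are placeholders for each team member to fill in):

```shell
# sample.env — duplicate as ".env" and fill in your own secrets
SERVER_PORT=3100

MONGO_ROOT_USERNAME=
MONGO_ROOT_PASSWORD=

DB_USERNAME=
DB_PASSWORD=

MONGO_EXPRESS_USERNAME=
MONGO_EXPRESS_PASSWORD=
```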
Add this line to the .gitignore file if it’s not already there:
.env
A script to initialize our database
Before storing and retrieving data with MongoDB, we need a database and a user to interact with it. This is done by adding an initialization script to a specific volume in docker (see line 18 of the docker-compose.yml file). Here is an example of a script adding a database and a user as named in the .env file. Feel free to tweak the script or to rewrite it. Mongo DB also supports javascript (thread).
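The script itself is missing from this copy; here is a sketch of what it might contain. The MONGO_INITDB_* variables are provided by the official mongo image, while DB_USERNAME and DB_PASSWORD are assumed to come from the .env file:

```shell
#!/usr/bin/env bash
# Runs once inside the mongo container (via /docker-entrypoint-initdb.d)
# the first time the data volume is empty. Creates an application user
# with read/write access to the application database.
mongo <<EOF
use $MONGO_INITDB_DATABASE
db.createUser({
  user: "$DB_USERNAME",
  pwd: "$DB_PASSWORD",
  roles: [{ role: "readWrite", db: "$MONGO_INITDB_DATABASE" }]
})
EOF
```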
Adding init-mongo.sh at the root is the only thing we need to do, docker will copy it automatically where it needs to. It will be run only if the docker volume is empty (so only the first time, so that we can keep working on the same database afterwards).
Final touches
Finally, add a .dockerignore file to avoid putting useless files in the containers:
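The gist with the .dockerignore contents is missing; a typical minimal sketch for this project would be:

```
node_modules
dist
.env
.git
npm-debug.log
```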
We are now ready to see if everything works! Start the containers by running:
docker-compose up [-d if you want to start the containers in detached mode] [--build if you want to force rebuild]
Give it a few seconds to pull the images and build the containers. You should then be able to:
Browse http://localhost:3100/health and get a 200 — OK as before. This is our server.
and get a 200 — OK as before. This is our server. Browse http://localhost:8081 and log in, using the mongo express credentials you listed in the .env file. This is a way to visualize the database we created with our init-mongo.sh script and which is living inside the database container.
and log in, using the mongo express credentials you listed in the .env file. This is a way to visualize the database we created with our init-mongo.sh script and which is living inside the container. Note: To turn the containers off, use docker-compose down .
Note: it’s still possible to access the database container with a connectionString, for example in MongoDB Compass from the host:
mongodb://your_username:your_password@localhost:27017/database?authSource=database
Injecting variables at runtime
Containers are great, but when developing it’s better to instantly see the changes we made to the code rather than rebuilding the containers and waiting for them to start.
So the ideal workflow would be to only start the database and eventually mongo-express containers, and to keep our server on our host machine. But how will the same environment variables we defined in the .env file be injected into the server? There’s a package to answer that need:
npm i -D dotenv
We then have a few adjustments to make to our nodemon.json file. Replace the “exec” line with the following:
"exec": "npx ts-node --require dotenv/config ./src/server.ts"
Dotenv will take the variables defined in our .env file and inject them into our server, each time it finds process.env.VARIABLE. Remember this line at the start of our src/server.ts file?
src/server.ts, line 6
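The embedded snippet is missing from this copy; the referenced line presumably looks something like this (a sketch — the exact variable name is an assumption):

```typescript
// src/server.ts — use the injected SERVER_PORT when present,
// otherwise fall back to the default port 3100:
const port = process.env.SERVER_PORT || 3100;
```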
Now the server will use the $SERVER_PORT variable defined in the .env file, instead of defaulting to 3100.
We can then safely run the following commands and properly develop our server locally, interacting with the containerized database:
docker-compose up [-d] [--build] database mongo-express
npm run start
Now our server and database are ready to talk to each other. 🕺🏻
In part III, we will write some boilerplate code to set up GraphQL and MongoDB. Click the link below to go to part III: | https://theo-penavaire.medium.com/step-by-step-building-a-node-js-server-2021-edition-f2e7f5e84b17 | ['Théo Penavaire'] | 2020-11-29 15:07:50.088000+00:00 | ['Nodejs', 'Mongodb', 'GraphQL', 'Typescript', 'Docker'] |
A new solution to the problem of efficient charging of moving electric cars | Kopeycki ycheni ppedctavixa strategy for maximum efficient delivery of energy without conductors — through inductance. Unlike the standard approach, in which only one transmitting inductance is active, which is closest to the receiver, the new method is used. This technology makes it possible to overcome the difficulties of wireless charging of electric vehicles and industrial work.
To date, wireless charging methods with the help of one inductance are well researched, which cannot be said about the system. It is difficult to achieve maximum efficiency in one system with several inductances, because the receiver can be used in any of these systems.
Kopei specialists have developed an effective strategy for control, which achieves the maximum efficiency of wireless security with the help of support. They formulated the theoretical foundations of this method and found a number of important relationships.
The method proposed by them is not complicated and does not use censors to locate the receiver in order to include the closest to it. Instead of this, the students learned that it is possible with high accuracy in real time to find out where the name is based on the measurement of the inverter. This allows for dynamic tuning of any emitting inductance, thus maximizing wireless efficiency. | https://medium.com/@7tvus7/a-new-solution-to-the-problem-of-efficient-charging-of-moving-electric-cars-989646a2b61b | [] | 2020-12-09 14:47:41.773000+00:00 | ['Russia', 'Problem Solving', 'Electric Car', 'Cars'] |
BlackFin Tech Weekly — December 14th | Airbnb’s long-awaited IPO finally took place last week… and the company surprised everybody with a crazy valuation! Indeed, Airbnb opened at $146 on its first day trading, up from its $68 IPO price: even its CEO Brian Chesky couldn’t hide his shock. This also led Affirm, Inc. & Roblox to postpone their IPO until next year citing Airbnb’s price surge as a sign of mispricing risk. And who would want to leave money on the table, right?
And, closer to home, EU fintech deals came in strong too last week totalling €326m raised across 15 deals, including one of our own!
We are indeed proud to co-lead Modularbank’s €4m Series A alongside Karma Ventures and Plug and Play as well as star business angel Ott Kaukver (ex CTO of Twilio). Created in 2018 in Tallinn, Estonia, Modularbank has developed a cloud-based banking platform enabling traditional banks, fintechs but also non-financial institutions to deliver suitable financial services to their customers. If you want to know more about our investment thesis, feel free to read our article right here!
After this quick bout of self-promotion, let’s go back to European deals! Geography-wise, the UK leads the week once again with 5 deals, followed by Germany with 4 deals, Spain with 2 deals, and France, Sweden, Estonia and the Netherlands with 1 deal each.
Sub-sector wise, there were 7 deals in Banktech, 5 in Insurtech, 1 in Wealth Management, 1 in Payments and 1 in another Fintech sector related to blockchain.
Let’s dive in:
Made with ❤ by BlackFin Tech
Tink raises another €85m round:
This round was co-led by new investor Eurazeo Growth and Dawn Capital, with PayPal Ventures, HMI Capital, Heartcore Capital, ABN AMRO Ventures, Poste Italiane, and Opera Tech Ventures also participating.
The post-money valuation was €680m.
Created in 2012, the Swedish open banking platform has raised a total amount of $308.5m.
The company already links up 3400 banks, covering some 250 million people, with partners including PayPal and BNP Paribas. Moreover, 8000 developers are using its APIs.
The new funds will be used to further expand its network of banks and payment services in Europe.
Luko raises a €50m Series B:
This round was led by EQT Ventures, with existing investors Accel, Founders Fund and Speedinvest also participating.
Created in 2016, Luko is a French insurtech selling home insurance products. Its customer base jumped from 15 000 last year to 100 000 today.
It currently operates as a broker, which means it collects premiums for an insurer while getting a cut, typically somewhere between 20 and 30%. If the insurer makes a profit at the end of the year (a positive technical result), Luko gets a share of this, which, as a B Corp, it gives to an NGO of its choosing. This highlights the importance of Corporate Social Responsibility to the company and its management.
Today, the company employs 85 people and plans to expand beyond France, just as Lemonade sets foot in the French market.
Congrats also to Gohenry, Cleo, Outfund, Hellogetsafe, Finn.auto, Simplesurance, Battleface, Upvest, Asistensi, Payaut, Net Purpose and Bit2me! | https://medium.com/blackfintech/blackfin-tech-weekly-december-14th-ca1e56cd244e | ['Romain Grimal'] | 2020-12-14 10:28:48.152000+00:00 | ['Fundraising', 'Fintech', 'Bftw', 'Startup', 'Venture Capital'] |
The benefits of being a nobody.. When you are a child, you can have big… | When you are a child, you can have big dreams. I first wanted to become a professional soccer player and then a rock star when I hit my teen years. These were small and very attainable goals when you are young enough not to know any better.
I thought it would be cool to be famous for some reason. You never see the work behind the scenes, however, making it look easier than it is. It seems like people are making stuff up as they go along, but it’s all scripted and meticulously planned out.
You have to be a particular type of person to push yourself to a level of familiarity and to gain fame. There are probably cases of people just wanting to be creative and not be famous, and then it just happens,
Today, there are so many levels of famous, and it’s almost mind-blowing. There was reality tv, and then the internet caused people to go viral. There are the usual megastars and actors in many things, sometimes just because of one significant role.
In the last four years, politicians and government workers go from unknown in under twenty seconds because of a presidential tweet or something excellent or crazy they did. James Comey, the former FBI director, is writing a book. These are people who were doing their jobs and then got embroiled in a scandal.
Looking back, I now realize that it’s probably okay. I never became a famous person. It’s probably way more challenging than it seems and a hassle for most people. I am also becoming okay with who I am. I often obsess about my past and think about how things could have been different. I am what I am, like Popeye.
I don’t want to be a character on the internet or the male equivalent of a Karen. However, I would have liked to write one hit song or a fantastic novel where I could live off the proceeds and stay under the radar to create what I would like.
I also am envious of people who want to work their job, be good at it, and live a steady life: no fantasy life, no lottery dreams, or need for admiration. They are just happy about what life has given them and can do it one day at a time.
I am a worker bee, to make money and provide for my family. I dream of making money off of something I created. I am more realistic on the financial side that I can’t just sink money into creative projects and hope for the best. I write songs, am working on a novel, have other career aspirations, and write this article, hoping people will read it. No one is a nobody. We all live in a different view of how we should live our life. | https://medium.com/@riftyrich/i-am-nobody-and-here-is-what-i-think-b6d1a5337efb | ['Rich Horton'] | 2020-12-24 10:57:50.380000+00:00 | ['Writing', 'Famous', 'Creative', 'Working', 'Dreams'] |
Buy the wheat, not the chaff — how to profit from the next crypto cycle | The five most dangerous words in investing are: “this time it is different”. It never is. Crypto is just another market — it is more volatile, it moves faster, but ultimately it behaves like every market since the beginning of trading. As we approach the start of the next cycle, it’s worth looking at the historical parallels.
Crypto made many people rich almost overnight, just to lose most of it a few months later. People who bought in at the top feel being cheated.
It’s nothing new: the last time this happened was in 1999. Investors made unfathomable profits back then, and a precious few became millionaires for life. Most of them, though, left the floor with empty pockets. The challenge of 2018 in a nutshell: how to make sure you are part of the former group when it comes to crypto.
In hindsight, it’s easy to laugh at investors putting millions of dollars into companies like pets.com — a business clearly lacking a viable business model and making a loss on every single sale.
Yet, most people buy into crypto projects without doing any due diligence.
Crypto was apparently in a bubble at the end of last year, and many people lost 70–90% of their capital. Some might even argue the capitulation has not happened yet. My personal opinion is that we are getting close to the bottom, evidenced by the mushrooming of conspiracy theories. People starting to blame the “Cartel”, the “banks”, Soros (or whoever the scapegoat of the week is) for their own bad decisions is often a clear signal of a reversal.
The truth is, it doesn’t matter much in the long-term if Bitcoin bottoms at $6000 or $3000 or even at $1800. What does matter is what happens when the next market cycle starts.
Cycles separate the wheat from the chaff. It is a widely held view among crypto veterans that 90% of the altcoins will go to zero on the long term, and most people because of negligence, a lack of information or blind stupidity will be caught swimming naked, holding coins equivalent to 1999’s pets.com shares. Your goal should be finding the 10%. If you hodl the Amazon and Adobe stocks of crypto through 2018, you’ll never have to worry again about your financials ever.
The time to buy is now. The million dollar question (quite literally) is what to buy.
Projects with a real-world use case
The majority (!) of the crypto projects are solutions in search of a problem. Even worse, many of them decidedly don’t have a use case, like most of the masternode coins. It sounds good to make 500% ROI by merely holding a coin, but good luck cashing out. Those days are over.
The next cycle will favor projects solving real-world problems efficiently. Identity management, logistics and supply chain management, anti-counterfeiting, ownership records, you name it — blockchain technology has the potential to revolutionize a dozen industries. What we possibly don’t need is yet another payment solution or yet another fork coin.
A working product
In 2017 we’ve seen insane market valuations for projects with barely working tech like Cardano and IOTA. As the most visible sign of a bubble, dozens of ICOs raised millions of dollars without showing a single line of code.
Markets are often happy to finance companies that are "not there yet." Amazon was losing money for many years, raising funds to build up scale. Tesla, easily the most fascinating company of 2018, has its financials in shambles. Yet, Tesla does have a working product. You can buy it, you can drive it and crash it. When you buy crypto assets like ADA (Cardano) or TRON, you are buying stock in a car company that has nothing but a brochure about a car it might build one day. Look for projects with a working product instead. | https://medium.com/sandor-report/buy-the-wheat-not-the-chaff-how-to-profit-from-the-next-crypto-cycle-87a6b75e2e1b | ['Torsten', 'Quadrant Protocol'] | 2018-04-17 14:19:15.091000+00:00 | ['Cryptocurrency Investment', 'ICO', 'Bitcoin', 'Cryptocurrency', 'Blockchain']
Nmap from Scratch | Part-6 | NSE ( Nmap Scripting Engine ) | Initially, Nmap was just a port scanner. On 10 December 2006, however, Nmap launched the NSE (Nmap Scripting Engine), which changed how people looked at Nmap entirely: it was no longer just a port scanner, but a port scanner with so much more.
Nmap ships with a total of 589 scripts (version 7.70). Many of them are useful, but not all of them work perfectly; for some tasks, other tools simply do a better job. So we'll look at how to use the powerful NSE and which scripts are worth using.
To see all the scripts, navigate to
/usr/share/nmap/scripts
According to the Nmap manual:
-sC performs a script scan using the default set of scripts. It is equivalent to --script=default. Some of the scripts in this category are considered intrusive and should not be run against a target network without permission.
So, kindly obtain permission before running this type of scan against a target.
SYNTAX:
nmap -sC scanme.nmap.org
For manually specifying a script, we use the --script option.
Using this option we specify a script, and use other options like help to understand what the script is used for.
The best way to find a script that fits your needs is to use the locate command. For example, if we want to search for all SSH scripts, we'll write:
locate *ssh*.nse
Result:
We can see that we have a total of seven scripts for SSH. To understand what a script does, we can write:
SYNTAX:
nmap --script-help script_name.nse
Example:
nmap --script-help ssh-brute.nse
Result:
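As a side note, the --script option accepts more than a single file name: comma-separated lists, wildcards and whole categories all work. A quick sketch of the selection grammar; the commands are only echoed here so they can be reviewed first, and you should run scans solely against hosts you have permission to test:

```shell
# --script selection grammar: lists, wildcards and boolean category expressions.
# Commands are echoed for review instead of executed.
TARGET="scanme.nmap.org"
echo "nmap --script ssh-hostkey,ssh-auth-methods -p 22 $TARGET"   # comma-separated list
echo "nmap --script 'ssh-*' -p 22 $TARGET"                        # wildcard: every SSH script
echo "nmap --script 'default and safe' $TARGET"                   # category expression
```

Note that quoting the wildcard matters: without the quotes the shell, not Nmap, would try to expand ssh-*.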
So let’s see it in action
Scanning the target for SSH
nmap -sS scanme.nmap.org -p 22
Result:
From the result we can conclude that port 22 is open. Now let's find out what version of SSH the target host is running.
nmap -sS -sV scanme.nmap.org -p 22
Result:
So the target system is running OpenSSH 6.6.1p1. We could look for exploits for this version of SSH on the internet, but for now we'll only use the script.
Brute forcing SSH login using NSE:
nmap --script ssh-brute.nse scanme.nmap.org -vv -p 22
Nmap will try every possible username-password combination present in its word list to brute-force the target.
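By default the script draws usernames and passwords from Nmap's built-in lists; the --script-args option lets you point it at your own. A hedged sketch: the .lst file names are placeholders, and the command is echoed for review rather than executed:

```shell
# Build a custom ssh-brute invocation with user-supplied word lists.
# userdb/passdb override the built-in lists; brute.firstonly stops at the
# first valid credential. Echoed for review -- run only with permission.
TARGET="scanme.nmap.org"
ARGS="userdb=users.lst,passdb=passwords.lst,brute.firstonly=true"
CMD="nmap -p 22 --script ssh-brute.nse --script-args $ARGS $TARGET"
echo "$CMD"
# eval "$CMD"   # uncomment to actually run the scan
```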
In brief:
Scan the target and find out the services running
Look for exploits for the running service on the internet.
Try to exploit the target (using any tool from the plethora of available tools)
If the exploit fails, try another exploit
If it fails again, try again, until you successfully exploit the target.
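The output of step 1 can be mined for step 2 with ordinary shell tools. A small sketch using a saved scan; the scan output below is inlined purely as an illustration:

```shell
# Save the scan (normally: nmap -sS -sV -oN services.txt <target>), then
# pull out each open port and its service name to research for exploits.
cat > services.txt <<'EOF'
22/tcp   open  ssh         OpenSSH 6.6.1p1 Ubuntu 2ubuntu2.13
80/tcp   open  http        Apache httpd 2.4.7
9929/tcp open  nping-echo  Nping echo
EOF
awk '$2 == "open" {print $1, $3}' services.txt
# -> 22/tcp ssh
#    80/tcp http
#    9929/tcp nping-echo
```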
Remember:
“The master has failed more times than the beginner has even tried.”
― STEPHEN MCCRANIE
Few more script:
--script banner #grab banner
--script broadcast #reveals broadcast information
--script vuln #will use default scripts to report vulnerabilities if any.

##Looking for specific scripts
#Basic syntax:
locate *service_name*.nse

Examples:
locate *smb*.nse #will list all SMB related scripts
locate *http*.nse #list all web related scripts
That's it for this blog. Try to play around with the NSE scripts; you can even install new scripts from the internet using wget. And don't forget to update the NSE script database using
nmap --script-updatedb
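Putting that together, installing a third-party script might look like the sketch below. The URL is a placeholder, the commands are echoed for review rather than executed, and you should only install scripts whose code you have read and trust:

```shell
# Drop a new .nse file into Nmap's script directory, then refresh the database.
SCRIPT_URL="https://example.com/shiny-new-script.nse"
SCRIPTS_DIR="/usr/share/nmap/scripts"
echo "sudo wget -P $SCRIPTS_DIR $SCRIPT_URL"
echo "sudo nmap --script-updatedb"
```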
In the next blog we'll cover how to tune the timing and performance of Nmap scans. Till then, keep practicing! | https://medium.com/@a3h1nt/nmap-from-scratch-part-6-nse-nmap-scripting-engine-f26e30e28578 | [] | 2021-02-03 02:44:31.469000+00:00 | ['Nmap', 'Scanning', 'Cybersecurity', 'Information Security', 'Hacking']
Tomek Młodzki of PhotoAiD: This Is How We Brought Our Business In The Media | Tomek Młodzki of PhotoAiD: This Is How We Brought Our Business In The Media
PhotoAiD is a biometric photography startup that grew out of a photo booth service. Along with his brothers, Tomek Młodzki founded Poland's first company dealing with the production and servicing of photo booths. The business was fruitful, and once they had built a solid photo booth network, they felt they wanted to create something more meaningful and make their customers' lives easier. They have since invested in AI technology and new devices to become the leader in passport photo processing. Today the company serves over 100 countries and processes thousands of photos per month with a team of 80 highly skilled employees.
For PRontheGO, Tomek shares insights into how the team handles PR and growth activities for PhotoAid.
Have you had any PR experience before founding your business?
Not at all. I have studied law, a completely different field from media and public relations. Neither of my brothers — my business partners — had any experience in that field. However, I have learned more than just legislation from studying law. In some ways, I feel judicial work or lawyer professions are pretty similar to PR: your job is to portray a favorable image of your client.
Do you do DIY PR or do you work with a PR freelancer /agency? Please share your experiences.
I haven’t worked with a PR agency yet. We have a person who handles PR, but it’s only one of her duties. In addition, we’ve created an Outreach team responsible for PR, among other things. Right now, that solution suits our needs, so we are not planning to hire a separate PR specialist.
We see it as an advantage to train our employees in PR because they know our organization inside out. Things run more smoothly since they are familiar with other employees, our company culture, strategy, and vision.
What does the PR strategy for your business look like?
We are a Polish company, but we aim to become a global service provider. Therefore, we use both local and global media equally. Using HARO responses and digital PR articles, our outreach team increases brand awareness online at the international level.
Hand-in-hand, we promote our business locally. The best examples are campaigns that offer free student ID photos before starting a new school year or free personal ID photos before implementing new document requirements. While launching a campaign, we reach out to local media outlets and web portals to promote it.
How do you implement PR tactics in your every day work schedule?
PR entails various tasks, such as interviews, answering requests, planning meetings, writing requests, building databases of contacts and outlets, and team management. My schedule depends on which assignments take center stage during our current PR campaign. Some days I reach out to journalists; other days, I answer their questions. Sometimes I have to wait for answers, so I just stay updated on the market by reading the news.
What is your experience with Thought Leadership activities? Please share examples.
HARO is our way to build a credible online presence. Through this service, our insights appear in trusted media outlets. And how does it improve our image? For example, readers perceive our Head of Marketing as an expert source when quoted in a reputable publication. Her expertise and reputation have grown, and she has become recognized as a thought leader.
Which platforms and tools do you use for PR?
It depends on the tasks. For finding relevant outlets and contacts and research keywords and website rankings, we use Prowly and Ahrefs. Meanwhile, our outreach team uses HARO and Terkel to answer questions from their field of expertise. In addition, some tools simplify everyday tasks, such as Airtable for organizing assignments and Grammarly for correcting texts.
What was a PR success story of your business, and how did you get there? | https://medium.pronthego.com/tomek-m%C5%82odzki-of-photoaid-this-is-how-we-brought-our-business-in-the-media-9c8384eb7d40 | [] | 2021-12-29 12:39:03.101000+00:00 | ['Pr', 'Startup Marketing', 'Founder', 'Public Relations', 'Founder Stories'] |
7 Awesome Android Apps You’ve Never Heard Of | Ready, set, download!
It’s no secret that apps are changing the way we live. I have a friend that has over 100 on her phone. She swears she uses them all, but I doubt it. That’s the problem with apps. There are so many choices, it’s hard to find something new that you’ll want to use and keep.
Beyond Facebook
Everyone uses Facebook and Amazon. But Piggy and Pocket? You've never heard of those, have you? That's my point. There are thousands of apps built by innovative developers that make your Android device more useful and fun.
To save you time, I created a list of clever new apps. It's not scientific; it's based on new apps my friends have tried, personal use, user reviews and pure awesomeness.
These 7 Android apps will help you get more out of your phone or tablet and do things you didn’t even know were possible. Read on and become an Android expert, and feel free to add your own suggestions below.
1) Become a coupon pro with Piggy
Get coupons automatically with Piggy
Shop your favorite stores in your phone or tablet’s browser, and Piggy will automatically search for coupon codes and cashback whenever you’re checking out. Just click the Piggy button and it will scour the internet for legitimate coupon codes and apply them to your shopping cart. No matter what, you’re always earning cashback. It’s free, it’s easy and it saves you money!
Download Piggy here
2) Always have something to read with Pocket
Pocket is an easy way to save any article and read it later, and works on both Chrome and Android. If you come across any article or website and don’t have time to read it then, save it to Pocket and you can pull it from the queue and read it at any time from any device.
Download Pocket here
3) Never get left out in the rain with 1Weather
1Weather is arguably the best weather app out there. It has a very simple, paginated design that shows you the current weather and forecasts up to 12 weeks ahead. 1Weather offers two full versions: one that is free and ad-supported, or you can purchase it for $1.99 with no advertising. A fun bonus: 1Weather serves up weather trivia that is sure to keep you entertained indefinitely.
Download 1Weather here
4) Google Drive Suite — the complete storage solution
Google Drive is an essential cloud storage solution available on Android. All new users get 15GB of storage, 100% free, indefinitely. The best part is that through G Suite you also get, entirely free, what Microsoft charges a premium for. This includes Google Docs, Sheets, Slides, Photos, Gmail, Calendar, and Keep. Between the office apps and the photo app, which by the way allows unlimited photo and video backup, you have an app for practically anything.
Download Drive Suite here
5) Get a personal assistant with Google Now
And an intelligent one, at that. Just say the magic words “Okay Google” to get answers to your questions, make recommendations, and do just about anything and everything by making requests to various web services. Sync it to your Google Account to be able to pull up your schedule and notes in an instant, among many other actions; it also largely works hand in hand with Google Search so the repeated actions you perform are utilized to your advantage.
Download Google Now here
6) Don’t lose track of passwords with LastPass
Even if you have a photographic memory or a systematic way of safekeeping your passwords, LastPass will change your life. It's an awesome digital vault that takes its job of safeguarding all your online accounts seriously. Create a free account and secure it with a strong master password — your last password ever! Fill your vault with all your fave sites, save new sites automatically, and never be bothered with taking note of new passwords ever again.
Download Last Pass here
7) The best app for getting things done Wunderlist
This app surely lives up to the promise of its name, with its very user-friendly interface that packs in heroic features — from the digital notepad, alarms, and reminders, to the folders section and messaging function. You’ll be so excited to get your schedule, plans, goals, and lists in order because Wunderlist is so handy, you can access it anytime, anywhere on your mobile device or computer, and allows you to share your lists with anyone and work collaboratively with them.
Download Wunderlist here
We bet you won’t be able to put your Android device down after getting a hold of these apps… and we really can’t blame you. Enjoy! | https://medium.com/easysimplemore/7-awesome-android-apps-youve-never-heard-of-cb7a0d87fd8c | ['Katrina Angco'] | 2017-08-30 19:57:46.124000+00:00 | ['Android Apps', 'Lifestyle', 'Mobile Device', 'Productivity', 'Digital Marketing'] |
Reading the article I can understand that the problem was your lacking experience and not Firebase… | Reading the article, I can see that the problem was your lack of experience and not Firebase itself.
If you had chosen another NoSQL database like Mongo, you would have had the same problems. | https://medium.com/@giovanniarixi/reading-the-article-i-can-understand-that-the-problem-was-your-lacking-experience-and-not-firebase-7923b1ed0b4f | ['Giovanni Arixi'] | 2020-12-11 07:27:27.073000+00:00 | ['Development', 'NoSQL', 'Technology']
The Tesla bombshell almost nobody is talking about | Last week, Tesla held an event focused on their advances in autopilot and what they call “full self driving”. There, nearly three hours into the event, they made the announcement: not only will they have fully autonomous vehicles ready years ahead of the industry’s best estimates, Tesla expects to have a fleet of one million robotaxis on the road in 2020.
This time, there wasn’t the usual whooping and hollering from the Tesla loyalists. Instead, this was an event for investors. It was a more steadied and almost scholarly affair, with dizzyingly deep dives into the technology powering Tesla’s self-driving ambitions.
And the announcements—carrying world-changing ramifications, if the numbers are right—were delivered aloud into otherwise pindrop silence. It seemed the collective reaction of those in the room was a furrowed brow. Could Elon be believed? Still, onward he pressed, proclaiming a grand vision as though he’d only just returned from the future to share what he had seen. Maybe the claims seemed too far-fetched, maybe the event was too long, or perhaps it’s because the investors at the event were, for the most part, non-technical.
As someone in the tech industry myself, I had a hard time keeping up with the swirling Acronym Soup shared by presenters like Pete Bannon, esteemed former-Apple chip designer. But the overall message resonated: Tesla is ahead, they argued, because Tesla has the data. Whereas nearly every competitor is relying on Lidar, Tesla has placed their self-driving bets almost entirely on computer vision. Choosing to rely on cameras and cheap radar + ultrasonics has allowed them to deploy these sensors on every car they’ve sold for the past several years. Having sensors on every vehicle means they’ve been able to collect data from every mile driven from every Tesla produced in the past several years. That’s a huge number, and it’s increasing rapidly.
Lidar, meanwhile, is power hungry and expensive, adding anywhere from around $7K to $70K to the cost of the vehicle. The upshot is that the major Lidar-based competitors have several hundred cars on the road each, while Tesla has nearly half a million. And machine learning, which is needed for object recognition in any self-driving system, depends on access to mountains of data. In fact, it thrives on it — there’s a direct correlation between how much data you throw at a neural network and the quality of the results. Because they make their own cars, and because they’ve bet on cheaper sensors, Tesla is now sitting on an unmatched (and possibly unmatchable) pile of data, and that pile grows with each mile driven, with the rate of growth multiplying with each new vehicle sold.
In that light, Elon emphatically assures us their self-driving capabilities are improving “exponentially”, which would make the advent of full autonomy arrive much sooner than expected (and, frustratingly, will also make estimating its exact arrival date even more difficult). To demonstrate these advances, they gave investors fully-autonomous rides in standard off-the-shelf Teslas with remarkable capability improvements over the previous generation of software. In other words, given their advances in chip hardware and their substantial lead in real-world data, the final piece of their self-driving puzzle is software. Software which, once ready, can be deployed at the push of a button.
Tesla’s Full Self Driving system stopping for stop signs on surface roads. See the full demo video at the end of the article.
With the facts and figures out of the way, they delivered the real shocker of the event: robotaxis. As early as next year, your Tesla will be able to drive you home as you read a book, they say, as well as go off to make money for you as an autonomous robotaxi whenever you like. You can make money while you sleep, or earn a second paycheck while at your day job. The car can pay for itself, and then some.
Now, even taking that with the massive tablet of salt called “regulatory approval” combined with a healthy schedule adjustment to pad for Elon Time™, that’s still a staggeringly audacious proposal. Especially when you consider their back-of-the-napkin math:
The base self-driving Tesla costs about $38K.
As a robotaxi, the car will be able to earn around $30K per year. This assumes rides at half the cost of a Lyft or Uber, with half of the miles travelled being empty “dead legs”.
Tesla cars will be rated for one million miles, including the battery.
In the lifespan of the car, it can earn approximately $200K of income for the owner.
A $38K car bringing in $200K of income on its own? That’s insane. That’s impossible. And that’s being spearheaded by a team famous for achieving the impossibly insane. | https://medium.com/swlh/the-tesla-bombshell-almost-nobody-is-talking-about-robotaxis-930556d9f965 | ['Hans Van De Bruggen'] | 2020-07-06 19:43:17.151000+00:00 | ['Technology', 'Futurology', 'Future', 'Self Driving Cars', 'Tesla'] |
Ezra Blount, Age 9, Becomes 10th Person to Die | A tenth person has died from injuries suffered at last weekend’s Astroworld Festival. Nine-year-old Ezra Blount passed away Sunday, according to a statement released by his family’s attorney.
“The Blount family tonight is grieving the incomprehensible loss of their precious young son,” the Blount family attorney Ben Crump said in a statement. “This should not have been the outcome of taking their son to a concert, what should have been a joyful celebration. Ezra’s death is absolutely heartbreaking. We are committed to seeking answers and justice for the Blount family. But tonight we stand in solidarity with the family, in grief, and in prayer.”
Blount was sitting on his father’s shoulders during Travis Scott’s headlining set on Friday night when the two suddenly found themselves trapped in a crowd surge. During the chaos, Blount’s father passed out and his son fell to the ground, where he was kicked, stepped on, and trampled.
Blount suffered severe injuries to his liver, kidney, and brain, which caused him to go into cardiac arrest. Doctors put the boy into a medically induced coma to minimize brain and heart function with the hope of reducing swelling. Unfortunately, the boy succumbed to his injuries on Sunday.
Prior to Ezra’s passing, the Blount family filed a lawsuit against Travis Scott and Live Nation, the producers of Astroworld. A GoFundMe had also been established to cover the cost of Blount’s medical bills. As of Sunday evening, the family had raised close to $70,000 in donations.
Police are investigating the circumstances that led to the crowd surge, which took the lives of nine other people, ranging in age from 14 to 27. Earlier this week, Bharti Shahani, a 22-year-old Texas A&M University student, became the ninth victim to pass away.
Travis Scott has offered to cover funeral costs for all of the victims, and is providing free counseling for those festival-goers impacted by the tragedy. | https://medium.com/@veronasiglerztf80/ezra-blount-age-9-becomes-10th-person-to-die-eacbce9d151c | [] | 2021-11-15 14:29:03.253000+00:00 | ['Death', 'Houston', 'Festivals', 'Astroworld', 'News'] |
How to get the best out of your Android Smartphone | Photo by Daniel Romero on Unsplash
Android is the most popular operating system on the planet. And perhaps the most important factor in making it so is how versatile it is. It is designed for everyone to use, from the most clueless of users to the ones who want to tinker around with every setting. There's a term for those tinkerers and enthusiasts: they're called "power users".
But getting the best out of a device shouldn't be limited to just those who bother to mess around with it. There shouldn't be such a barrier to entry. So I'll try to put together a handy-dandy guide that everyone can use to make their phones better, more reliable, and, most importantly, more secure.
The list is pretty long and I’ve tried to include stuff that I found and use myself, ones that I discovered from the community over at r/Android, and other online communities. | https://medium.com/dsckiit/how-to-get-the-best-out-of-your-android-smartphone-2fba12006d07 | ['Anshuman Pati'] | 2020-12-18 17:04:02.587000+00:00 | ['Android', 'Android Tips', 'Smartphones'] |
The Political Ramifications of Trump’s Haste to Make Peace with the Taliban | Global Security Review, December 11, 2018
By Kambaiz Rafi
The same week NASA announced the successful landing of its probe InSight on Mars, violence erupted in Kabul, Afghanistan. A rogue militia commander’s arrest by government security forces triggered violent protests by his supporters, leaving at least 30 civilians and security personnel wounded, and brought parts of the city to a standstill.
Commander Alipour — known as “Commander Sword” by his supporters because of his exploits in fighting the Taliban in Afghanistan’s inland provinces — was detained because of accusations of human rights abuse. His arrest follows the detention of a powerful police chief in the North, Nizamuddin Qaisari, who was arrested by the orders of President Ashraf Ghani as part of a crackdown on unruly officers behaving like rogue militia leaders. That incident also led to violent protests.
What is worth noting in both these incidents is the overly explicit ethnic fervor shown by the supporters of both the men who were objecting their arrests. The supporters of Alipour and Qaisari were predominantly from Hazara and Uzbek ethnic groups, to which Alipour and Qaisari respectively belong. The intensity of the protests forced the government to walk back Alipour’s arrest. In Qaisari’s case, indictment proceedings of current vice president and Uzbek strongman Rashid Dostum on charges of sexual abuse had to be overlooked to let him return to the country from a self-imposed exile in Turkey to pacify pro-Qaisari protests. The unrest had, by then, paralyzed daily life in many northern provinces.
Another individual with a similar reputation as a rogue official, Kandahar’s police chief Abdul Raziq, was left unscathed until he was assassinated by the Taliban in November 2018. Raziq — now hailed as a national hero after his assassination — was no martyr. Although Raziq was spared from the crackdown largely because of his role in ensuring Kandahar’s security against the Taliban — and tacit support from the United States — his being spared from arrest highlights discrepancies in the treatment of individuals of different ethnic groups.
Raziq, unlike Alipour and Qaisari, was Pashtun, as is President Ashraf Ghani and the majority of his aides and advisors. Although the men are members of different Pashtun tribal confederations — Raziq was Durani while President Ghani is Ghilzai — the incident accentuates the privilege Raziq enjoyed due to his belonging to the same ethnic group as the president.
The peace talks with the Taliban have reached a critical juncture
Zalmai Khalilzad, the veteran Afghan-American diplomat and former U.S. ambassador to Afghanistan, has returned as U.S. President Donald Trump’s envoy to hasten along the negotiation process. During this critical phase, attention must be paid to the ethnically-charged uproar engulfing Kabul following Alipour’s arrest. This is especially important due to the ambiguity of the situation.
The content of the peace talks remains a matter of debate even between the U.S. envoy and his Afghan hosts in the National Unity Government. The opacity of the negotiations could create anxiety among sections of Afghan society that are wary of giving the Taliban too generous of a deal. The content and nature of any future negotiations remain unknown, both to those involved and to outside observers, according to the head of Washington-based American Academy of Diplomacy and former U.S. ambassador to Afghanistan Ronald Neumann in a recent discussion he attended at the Afghanistan Institute of Strategic Studies.
Bellicose and opportunistic individuals among non-Pashtuns, similar to Alipour and Qaisari, might proliferate and see a boost to their popularity if a peace deal with the Taliban, and the group's return to political dominance in any form, puts their ethnic groups in potential or actual harm.
What is more at stake — apart from safeguarding the delicate ethnic balance — is sustaining the advancements made by women, the civil society, and Afghanistan’s youth in post-Bonn Agreement Afghanistan. In the words of activist and former government official Shaharzad Akbar, what should be achieved through a deal is “an expansion of opportunities, not their curtailment.” Akbar added “if today a girl in Faryab cannot go to school, if in Helmand a girl is unable to study in a university, we want them to have this opportunity. Not a girl who is already going to school in Bamyan to be deprived of this right.”
A Hastily-Negotiated Peace Deal with the Taliban is a Slippery Slope
In a recent survey of over two thousand individuals from Afghanistan’s 34 provinces, 90 percent of the respondents indicated that they disagreed with the Taliban’s style of governance. To many who oppose the group’s politics and ideology, the Taliban’s blend of Islamic fundamentalism and tribal hierarchy might be palatable if they form a political party similar in nature to the parties formed by other militant groups from the Soviet resistance era. However, giving over too many concessions may lead to unforeseeable backlash, including by ordinary Afghans.
Over-concession would threaten the advancements in Afghan society made possible by an international commitment to Afghanistan’s reconstruction that has endured for over seventeen years. The Taliban’s return through an overly-generous power-sharing arrangement made possible by a hasty deal by U.S. envoy Khalilzad — allowing for the U.S. military withdrawal the U.S. president is impatient to achieve — will likely aggravate segments of society that view the group as a tribal outfit and political misfit.
Both the Taliban’s deeds and words do little to reinforce their claims of politico-ideological reform and a move away from the dogmatism demonstrated during the late 1990s. The Taliban’s online rhetoric grows harsher the more it nears what the group perceives as a military victory against the U.S. and Afghan security forces. This perception is reinforced by the constant unidirectional plea for peace from the U.S. and the Afghan governments that emboldens the group. The Taliban swiftly condemns anyone who contradicts their official pronouncements. Early in November, at a conference in Moscow to which the Taliban sent a delegation, the group issued statements demanding respect for the “Islamic Emirate of Afghanistan.” Such rhetoric should be taken into account when considering whether the Taliban has genuinely reformed.
Any peace deal should include strong guarantees to prevent the Taliban — who’ve shown a penchant for totalitarian rule — from gaining too much power. This is vital for preserving social dialogue on important issues such as women’s rights, freedom of expression, and civic equality — among all ethnic groups in Afghanistan.
A recent statement by the Afghan National Unity Government (NUG) gives some reassurance as to such guarantees through an Afghan-owned and Afghan-led negotiation process. The statement reiterates respect to the Afghan Constitution as a fundamental starting point. However, U.S. envoy Khalilzad has so far ostensibly circumvented the government in Kabul by directly engaging in talks with the Taliban’s political office in Doha.
More importantly, there is suspicion regarding Khalilzad himself who is not known for being impartial in his past dealings and has previously expressed favorable views about the Taliban. Members of the main anti-Taliban group, the Northern Alliance, might be reluctant to cooperate with him because Khalilzad has made efforts to politically sideline the group in the past — about which he elaborates effusively in his book, The Envoy. At present, sitting with Khalilzad would be akin to a “fool me twice” scenario for most of these actors.
The U.S. strategy should be depersonalized — with an impartial diplomat essential for doing so. Khalilzad himself could be seen as ethnically biased as he belongs to the Pashtun ethnic group. Any deal struck by Khalilzad, even if it is of sound basis, will be treated with suspicion by other Afghan ethnic groups. Further, nothing endangers the credibility of a peace deal more than a prevailing doubt concerning the intentions behind it. Replacing Khalilzad as U.S. envoy would go a long way in inviting confidence that the interests of all ethnic groups will be taken into consideration.
Projecting respect for equality is paramount to ensure the implementation of a sustainable agreement that won’t result in a civil war in the future. If any one ethnic group is perceived as receiving special treatment, feelings of insecurity will increase among the other groups. Historically — in Afghanistan and throughout the world — such insecurity has led to armed resistance and violence. Should history repeat itself in this way, the current stalemate with the Taliban may be seen in a nostalgic light. | https://medium.com/@kambaiz-rafi/the-political-ramifications-of-trumps-haste-to-make-peace-with-the-taliban-2d1cf671130e | ['Kambaiz Rafi - Oped Archive'] | 2020-04-23 16:37:05.753000+00:00 | ['Afghanistan', 'Donald Trump', 'Peace', 'Terrorism', 'USA'] |
Tutellus Roadmap — updated

Understanding how a digital product works is complex, and once we add a blockchain layer with a double-token model, the usual result is that no one understands what we are working on. To bring some light to the ecosystem and our community, we are sharing an updated view of the project status and the product development we plan for the mid term (the next 6 months).
1. The past: Tutellus, an APIcentric platform a little bit rusty
As my partner and CTO Javier Ortiz says, “I would burn most of the code”. Like any other big project, Tutellus is the result of many people, many years and many lines of code. Although the product works perfectly, it’s a little bit old: this version of Tutellus was designed and built during 2014–2015. At that time we started with Docker version 0.1.8; there were no dockerization services, and GraphQL was just a dream.
The past: Tutellus infrastructure model (2017), running over a MongoDB database
In this image you can see some of the main projects that run around the API. The main issue has always been front-end maintenance: in order not to lose SEO (remember, we have been live since 2013), in 2015 we had to build a SEO-friendly project that split the website from the application, with all the complexity of maintaining and duplicating code on both sides. Despite this, the product works, scales very well and at a very reasonable cost.
2. The present: API migration from Rest to GraphQL and services redefinition
During 2018, as we progressed on Tutellus’ tokenization, the development of Tutellus.io and the business around the token, we realized we would have to raise our technical requirements and change a lot of things. Our infrastructure model fit very well with the NEM infrastructure, the blockchain we bet on, but we had to renew the technology in order to scale better and to ease the maintenance of future services around the token.
Present: API changes, Frontend fusion and Backend redefinition services
In the current infrastructure model we work with (without the blockchain layer), we unify the front end with React and redesign the API connections with all the services.
3. Introducing the Blockchain layer over the actual infrastructure
The next natural step is to introduce the blockchain layer. Due to our tokenomics design (a lot of transactions per second and a double-token model, described in depth in the Whitepaper), we need two networks, one public (the NEM mainnet) and one private, with a different token on each and atomic cross-chain transactions every set number of blocks.
Work in Progress: APIcentric services combined with decentralized NEM services and atomic cross chain transactions
Why do we need atomic cross chain transactions between blockchains?
We have 2 tokens, the STUT and the TUT.
The STUT token measures the Relevance (knowledge) you are acquiring in each course. Every course is tagged proportionally per skill depending on its educational content. For example, a Blockchain course can have its Relevance distributed as:
45% knowledge in Asset Tokenization
30% knowledge in NEM — token creation
20% knowledge in NEM — apostille service
5% knowledge in NEM — Catapult development
The Relevance (the number of STUT tokens you get in the course) follows the same proportions. The sum of the skills in each course makes your global skills grow faster. Lastly, STUT tokens are distributed continuously during the course, in real time, so they are consolidated on the private chain where we have no consolidation cost (in the Catapult version of NEM, still in development).
The TUT is the liquid token: it fluctuates and lives on the mainnet, the public NEM blockchain. Since the STUT lives on the private chain, no outsider has access to it; and since we own the private chain, we would be able to do whatever we wanted on it, for example changing transactions. The way to ensure we don’t manipulate the private chain’s data is to create anchors between both blockchains every few blocks, sharing information from the private chain to the public one. This way we guarantee the immutability of the data (STUT tokens) on the public NEM chain. This awesome functionality is still under development by the Catapult team (we hope to have it in production in Q4).
4. Why a double token model, or why just a token?
After the crazy ICO year of 2017 and the 2018 crypto winter, people debate a lot about the need (or not) to launch a token, and only a few projects (like Tutellus) risk launching a double-token model. Here is a reminder of the value our token model adds to the community:
4.1. Decentralized Governance close to a DAO
The double-token model and STUT management let us decentralize operational decisions and delegate them to the community. We are inspired by how contributors earn tokens in the Bisq DAO, for example.
The STUT token, as you know, is used to assign Relevance to users. The more Relevance a user has in a given skill, the more decisions they will be able to make and the more STUT tokens they will get. For example: approving new courses, updating the content of Careers, or proposing added course services (exams, tests, projects, documentation, etc.). In this way (similar to Bisq), any user can add value to the community and be rewarded for their contributions, even with similar techniques: whereas in Bisq holders “color” bitcoins, we “color” TUT tokens, thereby creating STUT tokens, and assign them to contributors.
These kinds of services and contributions acquire even more value when we launch the platform in other languages (Q4)… imagine the team we would otherwise need to review, approve or reject courses and other materials in every language.
4.2. Reputational power untied from economic power
Here is where the double-token model makes full sense: it would be unfair for a TUT holder to earn rewards for operational tasks without knowledge or reputation in the skills involved. On the other hand, a STUT holder can have a lot of Relevance but no economic power (TUT tokens), so the governance of the contributions they can make revolves around them. Lastly, we need a mechanism to transfer value from one token (STUT) to the other (TUT): the “trade decision”, described in depth in the Whitepaper.
Each token has its own functions, and the two coexist in a healthy way.
4.3. The TUT token as the only way to access several services
Any user can pay for a product or service in Tutellus with TUT tokens, manually for now and soon automatically (work in progress). Imagine that at the end of 2020 we have millions of users with their Relevance tokenized, and we offer companies recruiting services based on that Relevance, updated (in time) and located (in space); in this scenario we can find people looking for a job in Madrid with strong skills in Blockchain, Git and JS. Services of this kind can be paid for only in TUT tokens, although this will be transparent to the company (it will be an internal operation between Tutellus and the exchange).
Last but not least, imagine the TUT token as the gas needed to execute smart contracts by third parties that use our dApps, as we describe next.
5. The TUT token as the [future?] decentralized protocol for the EdTech industry
Why are we so obsessed with launching a token that has a very clear use inside Tutellus (by students, teachers and companies) but a less obvious use outside it?
TUT token & TUT protocol approach published in the Paper (Tutellus.io)
This point is important. In January 2019 we published the TUT protocol Yellowpaper; you can find it here. The TUT token has a much longer life than just staying inside Tutellus. The TUT protocol includes a set of mechanisms, functionalities and smart contracts that lets anyone execute decentralized EdTech dApps without having to build a business model like the one we did.
We intend to open our smart contracts to anyone who wants to create an educational dApp and empower its community in a simple and cost-effective way. The only thing you will need to execute the smart contracts is gas in the form of TUT tokens.
This model is very interesting: making the TUT token a standard in the EdTech industry. Again, executing smart contracts while paying the gas with a non-native NEM token (without XEM) is something under development under the Catapult umbrella.
6. Roadmap: where we are and where we go
We are taking advantage of these months to build the internal functionality we described while the core Catapult dev team progresses on their roadmap.
In parallel, the first services to implement (depending on Catapult updates) with an impact on the end user are:
Wallet creation for STUT and TUT deposit. We’ll provide custody services to users from the STUT side.
Service to purchase/exchange courses & subscriptions in TUT tokens.
Service of STUT tokens generation associated to player (video consumption).
Service of STUT tokens generation associated to Answers.
Service of STUT tokens generation associated to Notes service.
Service to convert STUT to TUT tokens (‘trade decision’).
TUT token exchange listing. We are being cautious on this point, due to the complexity and opacity of the exchange business. Listing the TUT token should add value to the project, not take it away.
7. NEM Status — Catapult
As you can see, the services described on the private chain must be executed with Catapult, which is still under development. We hope to have it in production in Q4. We are convinced the wait will pay off and that NEM has always been the best choice for the project.
The kinds of services we can implement with Catapult are awesome: its speed (up to 4,000 tps on the private chain) and its easy-to-develop smart contracts (using the JavaScript SDK). Moreover, being the platform with the official Catapult certification course positions us in the front line.
Cheers! | https://medium.com/tutellus-io/tutellus-roadmap-updated-252ebbf5c534 | ['Miguel Caballero'] | 2019-07-16 13:22:27.249000+00:00 | ['Blockchain', 'Education', 'Token', 'Nem'] |
How to write a user story — a beginner’s guide

Before learning how to write a user story, it’s important to understand what exactly a user story is and its purpose.
A user story is a high-level description of a feature told from the perspective of the user. It is the smallest piece of work that can provide value to the user and can be delivered in a single sprint.
The purpose of a user story is to articulate how a piece of work will deliver value back to the user. Note that the “user” doesn’t have to be an end-user in the traditional sense, the user can be anyone who gains value from the feature being built. For example, if a product manager wanted to learn more about their users through a data collection feature, then the user for this story would be the product manager.
How to write a user story?
A user story is often written as an equation:
As a <type of user>, I want to <desired feature> so that I can <value gained>.
A further breakdown of each component;
Type of user: this is the ‘who’, who are you building this feature for? Who will benefit from this user story? Remember, it's common to assume all features are built for the end-user but features can be built for anyone who benefits from the feature being built.
Desired feature: this is the ‘what’, what are we building? What is the intention of this feature? What will it accomplish for the user?
Value gained: this is the ‘why’, why are we building it? What value or benefit does this feature provide to the end-user? This can also be looked at as what problem does this feature solve?
Example user stories:
Food delivery app: As a frequent food orderer, I want to be able to invite my friends, so that I can get referral incentives
Mobile banking app: As an app user, I want to transfer money, so that I can quickly send money without using a computer or going into a bank
Rideshare app: As a passenger, I want to rate my driver, so that I can share my feedback
Other helpful information to provide when writing a user story
Besides the user story itself, there is additional information that should be shared with the developer, usually in some form of a ‘ticket’. This information will include additional context for the team/developer to ensure the feature is built right the first time. Some examples are:
1. Link(s) to designs — designers often use Figma, Invision, or Zeplin to share designs with developers.
2. Link to product requirements — this will be a detailed document provided to the developer and includes feature functionality, user experience, scope, risks, and more.
3. Technical requirements — these typically refer to how the software should be built and any considerations a developer should take into account when developing the feature.
4. Acceptance criteria — a checklist to confirm that the work completed meets the intended purpose and functions as expected before being marked as ‘done’.
Conclusion
Just like any form of writing, writing user stories is a craft of its own. It takes time and practice to nail it down. It’s also important to remember that every team is different, what works for one team, might not work for the other, so there is definitely some trial and error. If you’re looking to improve your user story writing, always ask your team for feedback. You’ll quickly learn what is working and what isn’t. | https://medium.com/the-innovation/how-to-write-a-user-story-a-beginners-guide-ef21eb7bcfbf | ['Neha Nathani'] | 2020-11-15 17:45:37.278000+00:00 | ['User Stories', 'Software Development', 'Scrum', 'Product Management', 'Agile'] |
The Week To Be In Crypto [8/28/2018]

Market Overview
A few main themes over the past week (or two; I was on a baby-free vacay last week, so no update) that I have seen across a number of headlines. In general on the price front, a nice pop today; we will see if it sticks. Hash power is back to ATHs, which at least means that people with money (miners) think it’s worthwhile to pour effort and money into trying to acquire BTC, so that’s a decent sign. I’m not going to try and make any price predictions, but I have a hard time believing it will take years to get back to the bitcoin ATH. One year maybe, but at the current pace of development in the space it feels like we should see another run in the not too distant future.
Ethereum Bear Case
ETH has obviously taken a pretty big hit, actually being down on a 12 month timeframe. I have always considered there to be a ceiling on the price of ETH for two reasons.
It is inherently leveraged by all of the tokens running on ethereum, priced against ETH, and requiring ETH to transact/trade
There is no set limit for the total supply
That being said, I have been holding on and still buying tokens through the downturn. I don’t have a target ETH price in mind but it will be interesting to see what happens first………
ETH price returns to new heights
ERC20 projects release actually useful products/solutions
If value starts flowing into the various token economies built on ethereum before ETH recovers we may actually see the correlations decline. I think lower correlations are healthy for the space but we are probably still at least 6 months out from any product becoming what could be considered a commercial success.
ETFs
We have now entered bitcoin ETF backlash after the most recent proposal rejections, with many in the community now saying they think an ETF approval will be bad for the space. It’s kinda like getting rejected by someone at a bar and then telling your friends they’re ugly. Having an opinion on whether it will be good or bad isn’t particularly useful. It’s going to happen, there is too much money in it, so it comes down to this: what are you doing to help move the space in the direction you think it should go, to be prepared for this eventuality? The pro is it will make the crypto asset class accessible to more people. The downside is, depending on how the products are structured, it could in essence dilute the bitcoin supply through synthetic instruments. I think it will be best if physically backed/settled instruments are approved first so a distinction can be made between them and the truly synthetic. The introduction of futures trading basically signaled the top, so we will see what happens when an ETF is approved.
What I’m Buying
Buying ERC20s is like rearranging deck chairs on the titanic right now. The only thing I am actively buying is REN. The reasoning is that when their platform is live, those running darknodes will be able to earn fees in the traded coins/tokens (bitcoin, ethereum, ERC20s to start). mainnet should be live in a couple weeks, if things go as planned running one could be a great way to earn some crypto through a continued bear market.
Recommended Media
Read
Just finished Daemon. Entertaining read and was an interesting dive into what the extreme of decentralization could look like.
Listen
Really haven’t had much time to consume podcasts recently but here ya go.
From the Slack
A collection of content from The Bitcoin Podcast slack group the past week
Follow Me
twitter: https://twitter.com/Tompkins_Jon
linkedin: https://www.linkedin.com/in/thejonathantompkins
steemit: https://steemit.com/@j-o-n-t
telegram: https://t.me/joinchat/FqrpRUo8CKHV8gGKRuNNNQ (nothing really here yet but will start engaging if a few more folks join) | https://medium.com/the-bitcoin-podcast-blog/the-week-to-be-in-crypto-8-28-2018-dc557eb55092 | ['Jonathan Tompkins'] | 2018-08-28 20:33:38.790000+00:00 | ['Ethereum', 'Cryptocurrency', 'Blockchain', 'Bitcoin'] |
In Other Words

Liam has been falling asleep to ‘fly me to the moon’ for the past couple of weeks. He loves when I sing it to him, even if it’s bizarre. He listens to it, lying on my chest, with the backdrop of the so-offbeat rhythm of my heartbeat ruining the song, wrapping his arms around me. He slips into sleep just like that; I sometimes sprain my neck trying to make him comfortable. I’m not allowed to touch any electronic devices in the meantime, so every time I get lost in the beautiful lyrics.
If you are thinking how fancy our bedtime routine is, actually no, it’s not our everyday. Some days I beg him to sleep so that I can finally have some me-time, some days I will just pull out my guilt card for not being able to spend enough time with him, some days he will be too tired for any of my dramas.
But I never miss his bedtime, no matter what. I would ignore the pile of dishes and laundry that needs my immediate attention, shut my eyes to all the toys he spread on the floor, which are capable of taking someone’s life, and forget about the coffee I badly wanted after all the running.
I love that moment with him, just us. Hugging each other tight, forgetting whatever happened that day, be it deadlines, tantrums, guilt, chores, everything. That moment I simply stare at him and wonder how is he even real and caress his hair knowing I will miss all these moments someday when my little Leo won’t fit in my hands anymore. | https://medium.com/@shahnaz_siddiq/in-other-words-7108f63e3d81 | ['Shahnaz Siddiq'] | 2020-12-24 04:24:04.474000+00:00 | ['Random Thoughts', 'Motherhood', 'Dreams', 'Life', 'Love'] |
Climate Heroes Need Help in the Pacific

The entire world is feeling the effects of climate change. However, the countries emitting the lowest levels of deadly CO2 face the biggest consequences.
By Alex Clarke — ARNEC Conference 2019 — Hanoi, Vietnam — 6 Dec 2019
Although the industrial giants China and the US emit the biggest volumes of carbon dioxide, it’s only fair to judge them in relation to their size. The 2016 per-capita rankings for CO2 emissions revealed Saudi Arabia, Australia and the US to be the most intense contributors. It’s known that developed nations typically have high emissions, but what about the less developed parts of the world? The Pacific Islands comprise 11 countries, and not one of these makes it into the top 20 for CO2 output. In fact, the Islands contribute only 0.02% of global emissions. Despite their innocent living, the Pacific region is, and will continue to be, among the areas worst affected by the increasing and more intense storms that result from climate change.
At this year’s Global ARNEC Conference 2019, Soana Kaitapu (Ministry of Education, Government of Tonga) shared the painful scars of Cyclone Gita in 2018. The cyclone was the worst storm to rip through Tonga in 60 years. With winds of 230km per hour, it flattened houses, electricity lines and fruit trees, causing significant damage across the region. With more than 80% of homes in Tonga left without power, an intensive emergency effort was needed. An effort that cost $164.3 million — 38% of Tonga’s GDP (Gross Domestic Product). As Ms Kaitapu explained, it will take the best part of a decade for the country to recover from this event. They simply cannot afford to have this happen again any time soon.
While a decade-long recovery period from cyclone Gita is alarming in itself, what is even more worrying is that these cyclonic super storms are increasing with climate change. The Intergovernmental Panel on Climate Change says climate change will result in tropical cyclones having greater intensity, higher rainfall and increased area coverage. This is mainly due to warming sea temperatures, which drive cyclonic storm activity. Storm surge is also exaggerated by rising sea levels and changes in wind patterns — driven by climate change.
As discussed at the ARNEC Conference, climate change can be stopped; climate change is reversible. Climate experts must be listened to. Of the top 10 countries with the highest CO2 emissions, only one of them, India, is on track to meet its carbon emission goals. If the objective of stabilizing temperatures is to be met, the whole world must act as one. Millions of people worldwide are campaigning for more vigorous political action. Reforming policy is an integral part of changing the way society functions, and a multi-sector, coherent action plan is the only way to save our planet and its people. Countries like Tonga are left at the mercy of the world. They need our help.
About Author:
Alex is a student studying Landscape Architecture at the University of Sheffield. His studies are preparing him for a career in urban design as well as ecology. Alex’s interests stem from the natural world, and ways it can be protected from climate change. | https://medium.com/@SheffSocScience/climate-heroes-need-help-in-the-pacific-29e5a2c35fa9 | ['Human', 'Putting The Social In Science'] | 2020-11-24 14:48:34.910000+00:00 | ['Students', 'Cyclone', 'Pacific', 'Climate'] |
Top 40 iOS Swift Questions Solutions

In this article I have covered 40 questions.
1. What is Type Inference ?
In short, it’s an ability of Swift: you don’t always need to write the types of the variables and constants you create in your code. For example:
// Swift knows this is an Int
var age = 40 // Int
// You don't need to spell the type out, as below
var age : Int = 40
2. What is Generics ?
Generic code enables you to write flexible, reusable functions and types that can work with any type, subject to requirements that you define.
Understanding it with an example :
Suppose you want to swap to values of type Int , lets write a non-generic functions :
func swapTwoInts(_ a: inout Int, _ b: inout Int) {
let temporaryA = a
a = b
b = temporaryA
}

var num1 = 4
var num2 = 5
swapTwoInts(&num1 , &num2)
Now, suppose you want to swap two Double values or two String values; you would need to write another function for each, because the function above accepts only Int values.
What if we had a single function that could accept values of any type and swap them? This is what generics do.
Now lets do the same thing with a generic function :
func swapTwoValues<T>(_ a: inout T, _ b: inout T) {
let temporaryA = a
a = b
b = temporaryA
}
var num1 = 3
var num2 = 4
swapTwoValues(&num1 , &num2)

var str1 = "sdf"
var str2 = "dafdf"
swapTwoValues(&str1 , &str2)
Now you can swap values of any type; you don’t need to write a different function for each type.
T is a placeholder, called a type parameter.
The arrays and dictionaries we use in Swift are also generic types:
Array<Element> , Dictionary<Key , Value>
3. What are Protocols ?
Its a blueprint of methods, properties, and other requirements that suit a particular task and it could adopted by a class , structure or enumeration .
A protocol does not include any implementation! A type that adopts a protocol must implement all of the requirements the protocol declares, and doing so is called conforming to the protocol.
Its syntax looks like :
protocol Vehicle {
func accelerate()
func stop()
}class Unicycle : Vehicle {
var peddling = false
func accelerate(){
peddling = true
}
func stop() {
peddling = false
}
}
4. What are Tuples ?
Sometimes data comes in pairs or triplets. An example of this is a pair of (x, y) coordinates on a 2D grid. Similarly, a set of coordinates on a 3D grid is comprised of an x-value, a y-value and a z-value. In Swift, you can represent such related data in a very simple way through the use of a tuple.
let coordinates: (Int, Int) = (2, 3)
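To read the values back out, you can access tuple elements by index, or give the elements names for readability (a small sketch; the names here are just illustrative):

```swift
let coordinates: (Int, Int) = (2, 3)
let x = coordinates.0   // access by position: 2
let y = coordinates.1   // 3

// Elements can also be named, which reads better at the call site
let point = (x: 2, y: 3, z: 1)
print(point.z)          // prints 1
```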
5. What about Mutability in Swift ?
Constants (let) cannot be changed once assigned, while variables (var) can vary.
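A quick sketch of the difference:

```swift
let maxLives = 3   // constant: cannot be reassigned
var score = 0      // variable: free to change
score += 10
print(score)       // prints 10
// maxLives += 1   // compile-time error: 'maxLives' is a 'let' constant
```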
6. What are Subscripts ?
With subscripts you can quickly access the member elements of collections.
A subscript consists of:
The name of the collection, such as scores
Two square brackets [ and ]
A key or index inside the brackets
By default, you can use subscripts with arrays, dictionaries, collections, lists and sequences. You can also implement your own with the subscript function.
subscript(parameterList) -> ReturnType {
  get {
    // return someValue of ReturnType
  }
  set(newValue) {
    // set someValue of ReturnType to newValue
  }
}
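As a concrete sketch, here is a small custom type with a read-only subscript that multiplies its index by a fixed value:

```swift
struct TimesTable {
    let multiplier: Int
    subscript(index: Int) -> Int {
        // A read-only subscript can omit the get/set block entirely
        return multiplier * index
    }
}

let threeTimes = TimesTable(multiplier: 3)
print(threeTimes[5])   // prints 15
```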
7. What is an Optional ?
Optionals are Swift’s solution to the problem of representing both a value and the absence of a value. An optional is allowed to hold either a value or nil.
8. In what ways you could Unwrap an optional ?
We can unwrap an optional in the following ways:
1. By optional binding
2. By force unwrapping
3. By a guard statement
4. By nil coalescing
Optional Binding (If let)
It’s the simplest way to unwrap an optional.
var authorName : String? = "Mohd Yasir"

if let authorName = authorName {
    print("Author name is \(authorName)")
} else {
    print("No Author Name")
}
By Force Unwrapping
To force unwrap, we use "!".
var authorName : String? = "Mohd Yasir"
print("Author name: \(authorName!)")
Guard Statement
Sometimes you want to check a condition and only continue executing a function if the condition is true, such as when you use optionals. Imagine a function that fetches some data from the network. That fetch might fail if the network is down. The usual way to encapsulate this behavior is using an optional, which has a value if the fetch succeeds, and nil otherwise.
Swift has a useful and powerful feature to help in situations like this: the guard statement.
func testingGuard(_ name: String?) {
    guard let unwrappedName = name else {
        print("You didn't enter any name")
        return
    }
    print("Hello, \(unwrappedName)")
}
Nil Coalescing
let name: String? = nil
let unwrappedName = name ?? "Unknown"
9. What kind of memory allocations takes place in Swift ?
In short Stack and Heap
When you create a reference type such as a class, the system stores the actual instance in a region of memory known as the heap. Instances of a value type such as a struct reside in a region of memory called the stack.
10. What is the difference between stack and heap memory ?
The system uses the stack to store anything on the immediate thread of execution; it is tightly managed and optimized by the CPU. When a function creates a variable, the stack stores that variable and then destroys it when the function exits. Since the stack is so strictly organized, it’s very efficient, and thus quite fast.
The system uses the heap to store instances of reference types. The heap is generally a large pool of memory from which the system can request and dynamically allocate blocks of memory. Lifetime is flexible and dynamic. The heap doesn’t automatically destroy its data like the stack does; additional work is required to do that. This makes creating and removing data on the heap a slower process, compared to on the stack.
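A quick way to see the practical difference between the two in code (a sketch; the type names are made up):

```swift
class RefPoint { var x = 0 }    // reference type: instances live on the heap
struct ValPoint { var x = 0 }   // value type: instances live on the stack

let a = RefPoint()
let b = a          // b refers to the same heap instance as a
b.x = 5
print(a.x)         // prints 5: the change is visible through both references

var p = ValPoint()
var q = p          // q is an independent copy
q.x = 5
print(p.x)         // prints 0: the original copy is untouched
```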
11. What is In-Out Parameter ?
Function parameters are constants by default, which means they can’t be modified. To illustrate this point, consider the following code:
func incrementAndPrint(_ value: Int) {
value += 1
print(value)
}
This results in an error:
Left side of mutating operator isn't mutable: 'value' is a 'let' constant
This behavior is known as copy-in copy-out, or call by value result. You enable it like so:
func incrementAndPrint(_ value: inout Int) {
value += 1
print(value)
}
inout before the parameter type indicates that this parameter should be copied in, that local copy used within the function, and copied back out when the function returns.
Ampersand (&)
You need to make a slight tweak to the function call to complete this example. Add an ampersand (&) before the argument, which makes it clear at the call site that you are using copy-in copy-out:
var value = 5
incrementAndPrint(&value)
print(value)
12. What is the difference between synchronous and asynchronous ?
Asynchronous means you can execute multiple things at a time: you don’t have to finish executing the current thing in order to move on to the next one. Synchronous basically means that you can only execute one thing at a time.
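A minimal sketch with Grand Central Dispatch: the async call returns immediately instead of blocking the current thread (the sleep at the end is just to keep the demo process alive):

```swift
import Foundation
import Dispatch

var backgroundDone = false

print("start")
DispatchQueue.global().async {
    // Runs on a background queue; the caller does not wait for it
    backgroundDone = true
}
print("this line runs without waiting for the background work")

// Demo only: give the background task a moment to finish
Thread.sleep(forTimeInterval: 0.5)
print("backgroundDone =", backgroundDone)
```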
13. How you could pass data from one ViewController to another ?
You can pass data between view controllers in Swift in 6 ways:
1. By using an instance property (A → B)
2. By using segues (for Storyboards)
3. By using instance properties and functions (A ← B)
4. By using the delegation pattern
5. By using closures or completion handlers
6. By using NotificationCenter and the Observer pattern
14. What is Completion Handler in Swift ?
A completion handler is a closure (“a self-contained block of functionality that can be passed around and used in your code”). It gets passed to a function as an argument and then called when that function is done.
The point of a completion handler is to tell whatever is calling that function that it’s done and optionally to give it some data or an error. Sometimes they’re called callbacks since they call back to whatever called the function they’re in. Example:
import UIKit

let firstVC = UIViewController()
let nextVC = UIViewController()

firstVC.present(nextVC, animated: true, completion: { () in print("Welcome") })
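You can also add a completion handler to your own functions. A sketch with made-up names, where the handler delivers a result once the work finishes:

```swift
var latestScore = 0

func fetchScore(completion: (Int) -> Void) {
    let score = 42      // stand-in for real work, e.g. a network call
    completion(score)   // call back with the result when done
}

fetchScore { score in
    latestScore = score
    print("Received score:", score)   // prints "Received score: 42"
}
```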
15. What Compiler Swift Uses ?
The Swift compiler uses LLVM .
16. What is Lazy in Swift ?
In simple terms: “A lazy stored property is a property whose initial value is not calculated until the first time it is used.”
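A small sketch: the closure that builds `history` runs only on the first access, and the result is cached afterwards (the type and values are illustrative):

```swift
struct Player {
    lazy var history: [String] = {
        print("building history")   // runs only once, on first access
        return ["game1", "game2"]
    }()
}

var player = Player()          // nothing printed yet
print(player.history.count)    // prints "building history", then 2
print(player.history.count)    // prints 2 only; the value is cached
```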
17. Explain Core Data ?
~AppleDocumentation
It’s a framework 💯
Apple Says : “Use Core Data to save your application’s permanent data for offline use, to cache temporary data, and to add undo functionality to your app on a single device.” 😮
Core Data gives you these features: persistence, undo and redo of individual or batched changes, background data tasks, view synchronization, versioning and migration, and more.
Creating a Core Data Model
The first step in working with Core Data is to create a data model file. Here you define the structure of your application’s objects, including their object types, properties, and relationships.
You can create the Core Data model while creating the project by checking the “Use Core Data” box.
Core Data Stack
After you create a data model file , set up the classes that collaboratively support your app’s model layer. These classes are referred to collectively as the Core Data stack .
There are few Core Data Components :
An instance of NSManagedObjectModel represents your app’s model file describing your app’s types, properties, and relationships.
An instance of NSManagedObjectContext tracks changes to instances of your app’s types.
An instance of NSPersistentStoreCoordinator saves and fetches instances of your app’s types from stores.
An instance of NSPersistentContainer sets up the model, context, and store coordinator all at once.
Different Data Types in Core Data
Many apps need to persist and present different kinds of information. Core Data provides different attributes, including those common for all databases, such as Date or Decimal type, and non-standard attributes handled with Transformable type.
18. What is a sentinel value?
A valid value that represents a special condition, such as the absence of a value, is known as a sentinel value. That’s what an empty string would be. 🙂
19. Automatic Reference Counting (ARC)
Swift uses ARC to track and manage your app’s memory usage. ARC automatically frees up the memory used by class instances when those instances are no longer needed, so you usually do not need to think about memory management.
But in a few cases, ARC requires more information about the relationships between parts of your code in order to manage memory for you.
Reference counting applies only to instances of classes. Structures and enumerations are value types, not reference types, and are not stored and passed by reference. I will publish an entire article on this topic in more detail!
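A quick sketch of ARC at work (the deinit counter is only there to make deallocation observable):

```swift
var deinitCount = 0

class Tracked {
    let name: String
    init(name: String) { self.name = name }
    deinit { deinitCount += 1 }   // runs when ARC frees the instance
}

var ref1: Tracked? = Tracked(name: "example")
var ref2: Tracked? = ref1          // reference count is now 2
ref1 = nil                         // count 1 — instance still alive
let aliveAfterFirstNil = (deinitCount == 0)
ref2 = nil                         // count 0 — ARC deallocates
let freedAfterSecondNil = (deinitCount == 1)
```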
20. What is nested optional ?
Consider the following nested optional — it corresponds to a number inside a box inside a box inside a box.
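The snippet referenced above isn’t shown here; a sketch of a triply nested optional and how to unwrap it:

```swift
let number: Int??? = 5            // an Int inside three optional "boxes"

// Unwrap one box at a time:
if let outer = number, let middle = outer, let inner = middle {
    print(inner)                  // 5
}

// Or flatten the chain: each flatMap removes one level of nesting
let flattened = number.flatMap { $0 }.flatMap { $0 }   // Int?
```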
21. What are Property observers ?
A willSet observer is called when a property is about to be changed while a didSet observer is called after a property has been changed. Their syntax is similar to getters and setters :
struct S {
    var stored: String {
        willSet {
            print("willSet was called")
            print("stored is now equal to \(self.stored)")
            print("stored will be set to \(newValue)")
        }
        didSet {
            print("didSet was called")
            print("stored is now equal to \(self.stored)")
            print("stored was previously set to \(oldValue)")
        }
    }
}

var s = S(stored: "first")
s.stored = "second"
willSet was called
stored is now equal to first
stored will be set to second
didSet was called
stored is now equal to second
stored was previously set to first
22. When would you say that an app is in active state ?
An app is said to be in active state when it is accepting events and running in the foreground.
23. What is the difference between viewDidLoad and viewDidAppear?
viewDidLoad is called when the view is loaded into memory.
viewDidAppear is called when the view is visible and presented on the device.
24. What do you mean by concurrency?
Concurrency is a condition in a program where two or more tasks are defined independently, and each can execute independent of the other, even if the other is also executing at the same time.
25. Which are the ways of achieving concurrency in iOS ?
The three ways to achieve concurrency in iOS are:
Threads
Dispatch queues
Operation queues
26. What is Thread ?
According to Apple:
“Threads are especially useful when you need to perform a lengthy task, but don’t want it to block the execution of the rest of the application. In particular, you can use threads to avoid blocking the main thread of the application, which handles user interface and event-related actions. Threads can also be used to divide a large job into several smaller jobs, which can lead to performance increases on multi-core computers.”
27. What is a dispatch queue, in basic terms?
According to Apple:
An object that manages the execution of tasks serially or concurrently on your app’s main thread or on a background thread.
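A quick sketch of the serial behavior (the queue label is illustrative):

```swift
import Foundation

// A serial queue executes one task at a time, in FIFO order.
let queue = DispatchQueue(label: "com.example.worker")
var results: [Int] = []

queue.async { results.append(1) }    // runs in the background
queue.async { results.append(2) }
queue.sync  { results.append(3) }    // blocks until the queue reaches it

// Because the queue is serial and sync waits its turn,
// results is now [1, 2, 3]
```

Passing `attributes: .concurrent` to the initializer would instead allow the tasks to run simultaneously.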
28. Difference between Foreground and Background?
The foreground contains the applications the user is working on, and the background contains the applications that are behind the scenes .
29. Classes
Classes are reference types, as opposed to value types. You create a class like this:
class Person {
    var firstName: String
    var lastName: String

    init(firstName: String, lastName: String) {
        self.firstName = firstName
        self.lastName = lastName
    }

    var fullName: String {
        return "\(firstName) \(lastName)"
    }
}

let john = Person(firstName: "Johnny", lastName: "Appleseed")
In Swift, an instance of a structure is an immutable value whereas an instance of a class is a mutable object. Classes are reference types, so a variable of a class type doesn’t store an actual instance — it stores a reference to a location in memory that stores the instance .
Here Comes Stack vs Heap !
When you create a reference type such as a class, the system stores the actual instance in a region of memory known as the heap, while instances of a value type such as a struct reside in a region of memory called the stack.
The system uses the stack to store anything on the immediate thread of execution; it is tightly managed and optimized by the CPU. When a function creates a variable, the stack stores that variable and then destroys it when the function exits. Since the stack is so strictly organized, it’s very efficient, and thus quite fast.
The system uses the heap to store instances of reference types. The heap is generally a large pool of memory from which the system can request and dynamically allocate blocks of memory. Lifetime is flexible and dynamic. The heap doesn’t automatically destroy its data like the stack does; additional work is required to do that. This makes creating and removing data on the heap a slower process, compared to on the stack.
Working with Reference
var homeOwner = john
john.firstName = "John"
john.firstName // "John"
homeOwner.firstName // "John"
Sharing among class instances results in a new way of thinking when passing things around. For instance, if the john object changes, then anything holding a reference to john will automatically see the update. If you were using a structure, you would have to update each copy individually, or it would still have the old value of “Johnny”.
Identity Operators
Because classes are reference types, it’s possible for multiple constants and variables to refer to the same single instance of a class behind the scenes.
In Swift, the === operator lets you check if the identity of one object is equal to the identity of another:
john === homeOwner // true

let newInstance = Person(firstName: "Johnny", lastName: "Appleseed")
newInstance === john // false
30. What is MVC ?
MVC stands for Model View Controller. Models represent application data; views draw things on the screen; controllers manage data flow between model and view. Model and view never communicate with each other directly and rely on a controller to coordinate the communication.
31. What is @State ?
If you assign @State to a property, SwiftUI will monitor this property and, if it is mutated or changed, will invalidate the current layout and reload.
No need to invoke a refresh call (or a reloadData(), as you might have previously seen in CollectionViews and TableViews).
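A minimal sketch of that behavior (SwiftUI, so this builds only on Apple platforms):

```swift
import SwiftUI

struct CounterView: View {
    @State private var count = 0   // SwiftUI watches this property

    var body: some View {
        // Mutating `count` invalidates the layout and re-renders the view
        Button("Tapped \(count) times") {
            count += 1
        }
    }
}
```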
32. What are Modifiers ?
These are a way of rendering custom interactions and decoration. font(), background(), and clipShape() are some examples.
33. What is Nesting Syntax ?
A simple example is nesting a list inside a navigation view.
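A sketch of that nesting, with a couple of modifiers applied along the way (SwiftUI, Apple platforms only):

```swift
import SwiftUI

struct ContentView: View {
    var body: some View {
        NavigationView {              // outer container
            List(1..<4) { index in    // a list nested inside it
                Text("Row \(index)")
                    .font(.headline)          // modifiers decorate the view
                    .foregroundColor(.blue)
            }
            .navigationTitle("Nesting")
        }
    }
}
```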
34. What is Grouping in SwiftUI ?
Suppose you have written following code :
VStack {
Text("Line")
Text("Line")
Text("Line")
Text("Line")
Text("Line")
Text("Line")
Text("Line")
Text("Line")
Text("Line")
Text("Line")
}
That works just fine, but if you try adding an eleventh piece of text, you’ll get an error like this one:
ambiguous reference to member 'buildBlock()'
This is because SwiftUI’s view-building system has code designed to let us add anywhere from 1 up to 10 views, but nothing for 11 and beyond, so that doesn’t work.
But we can do this :
var body: some View {
    VStack {
        Group {
            Text("Line")
            Text("Line")
            Text("Line")
            Text("Line")
            Text("Line")
            Text("Line")
        }
        Group {
            Text("Line")
            Text("Line")
            Text("Line")
            Text("Line")
            Text("Line")
        }
    }
}
That creates exactly the same result, except now we can go beyond the 10 view limit because the VStack contains only two views – two groups.
35. What is Combine ?
Combine is Swift’s own version of Reactive Streams, and it enables objects to be monitored (observed) and data to be passed through streams from core application logic back up to the UI layer .
36. Which hash function does Swift’s Dictionary use?
Swift uses the SipHash hash function to handle many of the hash value calculations.
37. What is init() in Swift ?
Initialization is a process of preparing an instance of an enumeration, structure or class for use.
38. What are the control transfer statements used in Swift?
return, break, continue, and fallthrough.
39. What is a delegate in Swift?
Delegation is a design pattern used to pass data or communication between structs or classes. A delegate allows one object to send a message to another object when a specific event happens, and it is commonly used for handling table view and collection view events.
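A bare-bones sketch of the pattern (all names are illustrative):

```swift
// The delegate protocol declares what the delegate must handle.
protocol DownloadDelegate: AnyObject {
    func didFinishDownload(data: String)
}

class Downloader {
    weak var delegate: DownloadDelegate?   // weak avoids a retain cycle
    func start() {
        // ... work happens, then the event is forwarded:
        delegate?.didFinishDownload(data: "payload")
    }
}

class Screen: DownloadDelegate {
    var received = ""
    func didFinishDownload(data: String) { received = data }
}

let screen = Screen()
let downloader = Downloader()
downloader.delegate = screen
downloader.start()
// screen.received == "payload"
```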
40. What is Optional chaining ?
Optional chaining is a useful process that we can use, in combination with optionals, to call methods, properties, and subscripts on optional values that may or may not be nil. In this process, we try to retrieve a value from a chain of optional values.
Source: https://medium.com/datadriveninvestor/top-40-ios-swift-questions-solutions-d038c7f20a48 (Mohd Yasir, 2020-12-27)
New resolution for the new year
Deep down, we each have accomplishments we want to achieve, whether it’s losing weight, being healthier, or simply being a better mom, dad, sister, or brother. We tell ourselves over and over: next year I’ll be better. Then we wonder, why don’t I just start right now, before next year? And we answer ourselves: because I want to start fresh from the first day of the new year.
Does this scenario seem familiar?
Well, that’s mistake number one.
We all tell ourselves we’ll be better tomorrow, or next week, or next year, just to make ourselves feel better. So why is it that we can’t change ourselves once and for all?
I’ve been wondering this for years and One day I told myself that this is enough. We don’t change because we aren’t committed. The reason we’re not committed is because we haven’t mentally made the decision to go to the other side.
Don’t get me wrong it takes a lot of guts to actually change but as I’m sure you know that when you put your mind to something you will be successful now there are different tricks and techniques that you can learn to help you not procrastinate and to help you be more focused. But the fact is that you must have a mindset that is literally bulletproof as if you’ve dug your feet in the sand and you’re not going anywhere to the extent that you would rather die then retreat.
As if you’ve dug your feet in the sand and you’re not going anywhere, to the extent that you would rather die then retreat.
It is only after such commitment that you will really get where you want to go. Truly envision that there is a road in front of you with a choice of two paths: one leads to success and the other to death. There are no other choices.
I believe that’s the only way change will come about it needs to be ironclad I remember when I started writing and it was so difficult for me I’d have to wake up early and I’d have to write and go over the articles s then proofread it again then think about it decide whether it was something I wanted to publish or not but once I decided that I love to write and even if I don’t like it I’m going to write and publish anyway then it became an ironclad decision to the extent that it isn’t a decision anymore it is an absolute must.
Because after all is said and done, you realize that the problem is all about making the decision to change. That’s when your lazy self comes in and tells you not to do it. But if you’ve committed with an ironclad decision, then deciding to write isn’t a decision anymore, and you can use the strength you would have spent on that decision to think about which task to do instead.
So how do we come to an ironclad decision?
In order for you to decide with absolute certainty, you must envision a dream, and the clearer, the more physical, and the more comprehensive the dream, the better. You need to write it down, think about it over and over again, print out a picture and put it on your wall so you can see it every day. Remind yourself, but don’t just put a reminder in one location; put it in ten locations.
Bombard yourself with reminders and notes of what your dream is.
Maybe it’s going on a trip to Venice, or traveling on a sailboat around the world, or simply moving to Miami and sunbathing all day. The point is, you will never change yourself if you haven’t made a decision.
However it’s very important that we do not take big giant steps at once we must take baby steps because the only way to cement our progress is to do it in little chunks and once were confident that we’ve mastered the previous step we can move on to the next step.
It’s just like going from the main floor of your house to the upstairs: you don’t go three or five steps at a time, you go one step at a time, because that’s what works. And remember, each person knows themselves best.
I know that for myself, whenever I finish supper I quickly go and brush my teeth so I’m done eating for the night, because if I don’t brush my teeth I’ll end up in my pantry snacking on junk for no reason.
In the morning, I make sure my alarm clock is on the other side of the room, because that’s the only way I’ll get up. That’s what works.
Good luck on your new year’s resolution, and make sure that it isn’t luck but logic!
Source: https://medium.com/@enjoylifebetter1/new-resolution-for-the-new-year-ab1aebc86886 (Joe Ohan, 2020-12-25)
Convolutional Neural Network Champions — Part 1: LeNet-5 (TensorFlow 2.x)
Applying convolutional layers (“ConvNets”) to images and extracting key features for analysis is the main premise of convolutional networks. Each “Conv layer” applied to an image divides it into small slices known as receptive fields, extracting the important features and neglecting the less important ones. The kernel convolves with the image using a specific set of weights, multiplying its elements with the corresponding elements of the receptive field. It is common to use “pooling layers” in conjunction with Conv layers to downsample the convolved features and to reduce the sensitivity of the model to the locations of features in the input.
Finally, by adding dense blocks to the model and formulating the problem (classification and/or regression), one can train such models using typical gradient-descent algorithms such as SGD, Adam, and RMSprop.
Of course, prior to 1998, the use of convolutional neural networks was limited, and support vector machines were typically the method of choice in the field of image classification. This narrative changed when LeCun et al. [1998] published their work on the use of gradient-based learning for handwritten digit recognition.
Data
LeNet models were developed on the MNIST data set. This data set consists of the handwritten digits 0–9; sixty thousand images are used for training/validation of the model, and ten thousand images are used to test it. The images in this data set are 28×28 pixels in size. An example can be seen in the following figure. The challenge of the MNIST data set is that the digits often vary slightly in shape and appearance (for example, the number 7 is written in different ways).
Examples of MNIST data sample
Looking at the labels in the MNIST data set, we can see the number of labels are balanced, meaning there is not too much disparity.
Label counts in the MNIST data set
Network Structure
LeCun et al. [1998]: the proposed structure of the LeNet-5 network
The proposed LeNet-5 structure has 7 layers, excluding the input layer. As described in the Data section, the images used in this model are MNIST handwritten images. The proposed structure can be seen in the image above, taken from the LeCun et al. [1998] paper. The details of each layer are as follows:
Layer C1 is the first Conv layer, with 6 feature maps and strides of 1. Using the formula given in appendix 1, one can calculate the output dimension of this layer as 28×28, with 156 trainable parameters. The activation function of this layer is tanh (refer to appendix 2 for more details).
Layer S2 is an average pooling layer. This layer maps average values from the previous Conv layer to the next Conv layer. Pooling layers are used to reduce the dependence of the model on the location of features rather than their shape. The pooling layers in the LeNet model have a size of 2 and strides of 2.
Layer C3 is the second convolutional layer, with 16 feature maps. The output dimension of this layer is 10×10, with 2,416 parameters. The activation function of this layer is tanh.
Layer S4 is another average pooling layer with a size of 2 and a stride of 2.
The next layer is responsible for flattening the output of the previous layer into a one-dimensional array. Its output dimension is 400 (5×5×16).
Layer C5 is a dense block (fully connected layer) with 120 connections and 48,120 parameters (400×120+120). The activation function of this layer is tanh.
Layer F6 is another dense block with 84 units and 10,164 parameters (84×120+84). The activation function of this layer is tanh.
The output layer has dimension 10 (equal to the number of classes in the database), with 850 parameters (10×84+10). The activation function of the output layer is sigmoid (refer to appendix 2 for more details).
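The parameter counts above follow from (kernel height × kernel width × input channels + 1 bias) × filters for conv layers and (inputs + 1) × outputs for dense layers; the 5×5 kernels are inferred from the model summary shown later. A quick check:

```python
def conv_params(kh, kw, in_ch, filters):
    """Weights plus one bias per filter."""
    return (kh * kw * in_ch + 1) * filters

def dense_params(n_in, n_out):
    """Weight matrix plus one bias per output unit."""
    return (n_in + 1) * n_out

c1 = conv_params(5, 5, 1, 6)      # 156
c3 = conv_params(5, 5, 6, 16)     # 2,416
c5 = dense_params(400, 120)       # 48,120
f6 = dense_params(120, 84)        # 10,164
out = dense_params(84, 10)        # 850

total = c1 + c3 + c5 + f6 + out   # 61,706 — matches model.summary()
```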
The following code snippet demonstrates how to build a LeNet model in Python using the TensorFlow/Keras library. A Keras sequential model is a linear stack of layers, so we define each layer in order, as seen below. Finally, the model is compiled, and the choices of optimizer, loss function, and metrics are explicitly defined. The optimizer used in this work is sgd, or stochastic gradient descent. The loss function is what is optimized to train the machine learning model; the one used here is cross-entropy (log loss), which measures the performance of a classification model whose output is a probability value between 0 and 1. An accuracy metric is used to evaluate the performance of training. The loss function is a continuous probability function, while accuracy is a discrete function: the number of correctly predicted labels divided by the total number of predictions (refer to appendix 3).
LeNet-5 Layers Structure
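The embedded snippet doesn’t render here; a sketch that reproduces the summary below (kernel sizes and padding are inferred from the parameter counts and output shapes):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    # C1: 6 feature maps, 5x5 kernels; 'same' padding keeps 28x28
    layers.Conv2D(6, kernel_size=5, padding="same", activation="tanh"),
    # S2: average pooling, size 2, stride 2
    layers.AveragePooling2D(pool_size=2, strides=2),
    # C3: 16 feature maps, 5x5 kernels, no padding -> 10x10
    layers.Conv2D(16, kernel_size=5, activation="tanh"),
    # S4
    layers.AveragePooling2D(pool_size=2, strides=2),
    layers.Flatten(),                        # 5*5*16 = 400
    layers.Dense(120, activation="tanh"),    # C5
    layers.Dense(84, activation="tanh"),     # F6
    layers.Dense(10, activation="sigmoid"),  # output
])

model.compile(optimizer="sgd",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```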
Note that in the above code snippet, we have not specified how the weights of the neural network are initialized. By default, Keras uses the glorot_uniform initializer. Weight values are chosen randomly in a way that ensures the information passed through the network can still be processed and extracted. If the weights are too small, the signal shrinks as it passes through; if they are too large, it grows until it becomes too big to process. The Glorot uniform algorithm (also known as the Xavier algorithm) draws random weights from a uniform distribution scaled by the sizes of the layers of the neural network [refer to Glorot 2010].
The summary of the LeNet-5 network constructed with TensorFlow is given below (using model.summary()):
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 28, 28, 6) 156
_________________________________________________________________
average_pooling2d (AveragePo (None, 14, 14, 6) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 10, 10, 16) 2416
_________________________________________________________________
average_pooling2d_1 (Average (None, 5, 5, 16) 0
_________________________________________________________________
flatten (Flatten) (None, 400) 0
_________________________________________________________________
dense (Dense) (None, 120) 48120
_________________________________________________________________
dense_1 (Dense) (None, 84) 10164
_________________________________________________________________
dense_2 (Dense) (None, 10) 850
=================================================================
Total params: 61,706
Trainable params: 61,706
Non-trainable params: 0
_________________________________________________________________
Now that we have constructed the LeNet model using Tensorflow and Keras, we need to train the model. Using model.fit() and feeding training and validation sets, the model is trained. Additional parameters needed for training are the number of epochs, batch size and verbose:
An epoch is one complete presentation of training data. Samples from the training data set are selected randomly and presented to the model to learn. An epoch, therefore, represents one cycle through the full training data set.
As mentioned before, an epoch refers to the complete presentation of data to model. Training data is selected randomly and fed to the model. The number of samples in the randomly selected data is called batch size. Smaller batch sizes are noisy compared to the larger batch sizes, but they may generalize well. Larger batch sizes are used to avoid memory limitation problems especially when using Graphics Processing Units (GPU).
Verbose specifies the frequency of output logging. If set to 1, each iteration model loss will be printed.
Training code snippet. Note to_categorical command is used to convert a class vector to binary class matrix.
Once the model is trained, we can use a testing set that we have set aside to evaluate the performance of model training using model.evaluate() command:
Testing code snippet
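The training and testing snippets referenced above don’t render here; a self-contained sketch of the model.fit()/model.evaluate() calls (using random stand-in data in place of the real MNIST download, and 1 epoch instead of 10, purely for brevity):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.utils import to_categorical

# Stand-in data shaped like MNIST; the article uses mnist.load_data()
x_train = np.random.rand(256, 28, 28, 1).astype("float32")
# to_categorical converts a class vector to a binary class matrix
y_train = to_categorical(np.random.randint(0, 10, 256), 10)

model = models.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(6, 5, padding="same", activation="tanh"),
    layers.AveragePooling2D(2),
    layers.Conv2D(16, 5, activation="tanh"),
    layers.AveragePooling2D(2),
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),
    layers.Dense(84, activation="tanh"),
    layers.Dense(10, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train, epochs=1, batch_size=128,
                    validation_split=0.2, verbose=0)

# In the article, evaluation runs on the held-out test set
test_loss, test_acc = model.evaluate(x_train, y_train, verbose=0)
```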
The result of training the model for 10 epochs can be seen in the following figure. Initially, the weights of neural network are chosen randomly, but after 2 epochs of presenting 48,000 pictures to the model, the model loss reduced from 0.38 to 0.1. After 10 epochs of model training, the model accuracy surpassed 95% on testing set. This is a substantially improved accuracy compared to previous models at the time (mainly support vector machines), and as a result LeNet-5 cemented its legacy as one of the earliest champions of computer vision.
LeNet-5 Training results (10 epochs)
Using model.optimizer.get_config() we can interrogate the optimizer parameters. Note that we only specified the type of optimizer, the loss function, and the accuracy metric. As can be seen from the following snippet, the optimizer used to train the LeNet model is a stochastic gradient descent (SGD) optimizer. The learning rate by default is set at 0.01. The learning rate controls the change in the model parameters in response to the error measured by the loss function. Imagine the training process as traversing from a hill down to a valley; the learning rate defines the step size. Larger steps approach the solution faster but might jump over it, while smaller steps take too long to converge. In more complex problems, learning rate decay is commonly used; in this problem, however, no decay is needed to get good results. Another parameter of SGD is momentum. Instead of using only the gradient of the current step to guide the search, momentum also accumulates the gradients of past steps to determine the direction to go; momentum can therefore improve the convergence speed of SGD. Another parameter is nesterov. If set to true (boolean), SGD enables the Nesterov accelerated gradient (NAG) algorithm. NAG is closely related to momentum; it modifies the step sizes using the velocity of the learning-rate change (refer to Nesterov [1983]).
model.optimizer.get_config():
{'name': 'SGD',
'learning_rate': 0.01,
'decay': 0.0,
'momentum': 0.0,
'nesterov': False}
Model Interrogation
There are multiple ways to assess the performance of a classifier. Accuracy on a test data set that was held back from model development is the obvious choice. However, a confusion matrix can provide a detailed report on the classifier and better assess classification performance. Furthermore, classification accuracy can be misleading when an unequal number of observations is present in each class.
A confusion matrix is a detailed report of the number of samples classified correctly and incorrectly. The samples along the diagonal of the confusion matrix are correctly predicted; all other samples are misclassified. The more samples on the diagonal, the higher the model accuracy. As can be seen from the confusion matrix of LeNet-5 on the MNIST data set, most of the classes are classified correctly. However, there are a few cases where the classifier had trouble, such as labels 5, 4, and 8. For example, there were 16 cases in which the classifier incorrectly classified the digit 2 as 7. Those cases are depicted in the following image.
LeNet-5 Confusion Matrix
Examples of misclassified labels
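The diagonal/off-diagonal bookkeeping can be made concrete with a few lines of NumPy (sklearn.metrics.confusion_matrix does the same thing):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=10):
    """cm[i, j] counts samples with true label i predicted as j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

y_true = [2, 2, 2, 7, 5, 3]
y_pred = [2, 7, 2, 7, 5, 8]    # one 2->7 and one 3->8 mistake
cm = confusion_matrix(y_true, y_pred)

correct = np.trace(cm)         # diagonal = correctly classified (4 here)
accuracy = correct / cm.sum()  # 4/6
```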
Choice of Optimizer
In the previous section, the SGD optimizer was used to train the neural network model. However, because of SGD’s slow convergence and its tendency to get stuck at local minima, this method has fallen out of favor. Since its introduction, Adaptive Moment Estimation, aka Adam (refer to Kingma et al. [2014] for more details), has enjoyed significant popularity in the field of deep learning. The Adam optimization algorithm is an extension of stochastic gradient descent in which momentum is applied to the gradient calculation by default, with a separate learning rate for each parameter. Using the Adam optimizer and retraining LeNet-5 from scratch, model accuracy increases to 98%, as seen in the following learning curves:
LeNet-5 Training results (10 epochs) using Adam optimizer.
Effect of Batch Size
The batch size is one of the most important hyper-parameters in neural network training. As discussed in the previous section, during each training epoch the optimizer randomly selects data and feeds it to the model; the size of the selected data is called the batch size. Setting the batch size to the entire training set may leave the model unable to generalize well to data it hasn’t seen before (refer to Takase et al. [2018]); setting it to 1, on the other hand, results in higher computational training time. The proper choice of batch size is particularly important, as it leads to model stability and increased accuracy. The following two bar charts show testing accuracy and training time for batch sizes from 4 to 2,048. Testing accuracy for batch sizes 4 to 512 is above 98%, but the training time at batch size 4 is more than four times that at batch size 512. This effect can be more severe in complex problems with a large number of classes and a large number of training samples.
Effect of batch size on model accuracy (upper chart) and training time (lower chart)
Effect of Pooling Layer
As discussed before, the pooling layer is required to down sample the detection of features in feature maps. There are two most commonly used pooling operators: average pooling and max pooling layers. Average pooling layer operates by calculating average values of the selected patch in the feature map, whereas max pooling layer calculates maximum value of the feature map.
Max pooling, as can be seen in the following figure, works by selecting the maximum feature value from the feature map. Max pooling discriminates against features with less dominant activations and selects only the highest values, so only the most important features are fed through the pooling layer. The major drawback of max pooling is that in regions with several high-magnitude features, only the highest value is selected and the rest are ignored; the discerning features disappear after the max pooling operation, resulting in a loss of information (the purple region in the following figure).
Max Pooling operation
Average pooling, on the other hand, works by computing the average value of the features in the selected region of the feature map. All parts of the selected region are fed through, so if the magnitude of all the activations is low, the computed mean is also low and the contrast is reduced. The situation is worst when most of the activations in the pooling area are zero; in that case, the feature-map characteristics are reduced by a large amount.
Average Pooling Operation
As indicated earlier, the original LeNet-5 model uses an average pooling strategy. Changing average pooling to max pooling resulted in approximately the same testing accuracy on the MNIST data set. One could therefore question whether the choice of pooling layer matters here; however, it should be noted that the MNIST data set is rather simple compared to complex data sets such as CIFAR-10 or ImageNet, and the benefits of max pooling on such data sets can be far greater.
Comparison between average pooling and max pooling strategies
Effect of Feature Rotation and Flipping
Thus far we have explored different aspects of LeNet-5 model including the choice of optimizer, effect of batch size, and choice of pooling layer. LeNet-5 model is designed based on MNIST data. As we have seen so far, the digits are centered in each image. However, more often than not, the location of the digits in the image in real life is shifted, rotated, and sometimes flipped. In the following few sections we will explore the effects of image augmentation and sensitivity of the LeNet-5 model to image flipping, rotation, and shifting. Image augmentation is done with the help of the Tensorflow image processing module, tf.keras.preprocessing.
Effect of Flipping
In this exercise, images are flipped along the horizontal axis using ImageDataGenerator(horizontal_flip=True). Applying the generator to the test images results in a new data set with the images horizontally flipped, as seen in the following image. As expected, the model has low accuracy on the flipped images: as seen from the testing accuracy table, the accuracy of the LeNet-5 model dropped from 98% to 70%.
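The flip (and, later, the width shift) that the generator applies can be reproduced directly with NumPy to probe a trained model on transformed copies; a sketch on stand-in data:

```python
import numpy as np

# Stand-in batch of images shaped like MNIST: (N, 28, 28, 1)
x_test = np.arange(2 * 28 * 28).reshape(2, 28, 28, 1).astype("float32")

# Horizontal flip = mirror left-right, i.e. reverse the width axis (axis 2)
x_flipped = np.flip(x_test, axis=2)

# Width shift: roll pixels along the width axis by 3 columns
x_shifted = np.roll(x_test, shift=3, axis=2)

# Symmetric digits (0, 1, 8) survive flipping, which is why the
# flipped-set confusion matrix shows the best accuracy on those classes.
```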
Test accuracy on flipped images
A closer look at the confusion matrix of the flipped-images data set reveals a few interesting takeaways. The highest-accuracy labels are 0, 1, 8, and 4. The first three (0, 1, 8) are symmetrical, so the model classifies them well even when flipped; interestingly, the model also retains good classification accuracy on label 4. Another interesting aspect of this test is how the model identifies digits. For example, one of the labels the model struggles with in the flipped data set is 3, which it misclassifies as 8 almost half the time. It is very useful to understand how the model identifies each digit in order to build a robust classifier. Packages like SHAP can provide a means of understanding the input-output mapping of any deep neural network model (look for the DeepExplainer module in the SHAP library).
Confusion matrix of flipped images prediction
Image Rotation
Image rotation is another possible scenario in real life. Digits can be written in an angle with respect to the image boundaries. With the Tensorflow image augmentation module, one can produce randomly rotated images using the following line of code: ImageDataGenerator(rotation_range=angle) . The testing result of LeNet-5 model with various randomly rotated images can be seen in the following figure. The more rotation, the worse the prediction of the model. It is interesting to note that model predictions are rather satisfactory for up to 20 degrees of rotation, then the predictions degrade rapidly.
Prediction of LeNet-5 on randomly rotated images
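To make the rotation operation itself concrete, here is a from-scratch nearest-neighbour rotation in NumPy (a sketch only; ImageDataGenerator(rotation_range=angle) draws a random angle per image and uses proper interpolation and fill modes):

```python
import numpy as np

def rotate_nearest(img, angle_deg):
    """Rotate a 2-D image about its centre using nearest-neighbour
    sampling; pixels rotated in from outside the frame become 0."""
    h, w = img.shape
    theta = np.deg2rad(angle_deg)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse-map every output pixel back to its source location.
    y_src = cy + (ys - cy) * np.cos(theta) - (xs - cx) * np.sin(theta)
    x_src = cx + (ys - cy) * np.sin(theta) + (xs - cx) * np.cos(theta)
    yi = np.rint(y_src).astype(int)
    xi = np.rint(x_src).astype(int)
    out = np.zeros_like(img)
    valid = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
    out[ys[valid], xs[valid]] = img[yi[valid], xi[valid]]
    return out

# A single bright pixel at the top-centre of a 3x3 image...
img = np.zeros((3, 3))
img[0, 1] = 1.0
rotated = rotate_nearest(img, 90)  # ...moves to the right-centre
```

Sweeping such a rotation over a range of angles and re-evaluating the model at each one is exactly how the degradation curve in the figure above can be probed.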
Effect of Shifting
One final augmentation effect is shifting digits along the horizontal or vertical axis within the image. This can easily be applied to the MNIST test data set using ImageDataGenerator(width_shift_range=shift); note that this section demonstrates only the width-shift generator. The LeNet-5 network is far more sensitive to width shift than to image flipping or rotation. As the following figure shows, accuracy degrades much faster than under the other augmentation processes discussed: a width shift of only 10% of the image width drops accuracy from over 95% to about 48%. This effect might be attributed to the filter size and kernel dimensions of the model.
Prediction of LeNet-5 on randomly shifted images
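The width-shift operation can be sketched in a few lines of NumPy (a deterministic, whole-pixel version; ImageDataGenerator(width_shift_range=...) instead draws the shift randomly per image):

```python
import numpy as np

def shift_width(images, shift_px):
    """Shift a batch shaped (N, H, W, C) along the width axis,
    zero-filling the columns that are exposed."""
    out = np.zeros_like(images)
    if shift_px > 0:                      # shift content to the right
        out[:, :, shift_px:, :] = images[:, :, :-shift_px, :]
    elif shift_px < 0:                    # shift content to the left
        out[:, :, :shift_px, :] = images[:, :, -shift_px:, :]
    else:
        out = images.copy()
    return out

# Columns [0, 1, 2] shifted right by one pixel become [0, 0, 1].
batch = np.arange(3, dtype=np.float32).reshape(1, 1, 3, 1)
shifted = shift_width(batch, 1)
```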
Performance on CIFAR-10 Data set
As we have seen in the previous sections, the LeNet-5 model was a significant milestone in handwritten digit recognition. Owing to its strong classification performance, LeNet-5 was used by banks and in ATMs for automatic digit recognition in the mid-1990s. The next frontier for this model, however, was image recognition: identifying various objects in an image.
In this final section, we aim to train LeNet-5 on the CIFAR-10 data set. CIFAR-10 (Canadian Institute For Advanced Research) is an established computer vision data set of 60,000 32×32 color images in 10 object classes: airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. As the images below show, these images are considerably more complex than those of MNIST.
CIFAR-10 Data Set
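CIFAR-10's 32×32 inputs happen to match LeNet-5's original input size, so the classic feature-map arithmetic carries over unchanged. A quick sanity check of the spatial dimensions, assuming the canonical 5×5 valid convolutions and 2×2 stride-2 pooling:

```python
def out_size(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

s = 32                    # input: 32x32
s = out_size(s, 5)        # C1: 5x5 conv -> 28x28
s = out_size(s, 2, 2)     # S2: 2x2 pool -> 14x14
s = out_size(s, 5)        # C3: 5x5 conv -> 10x10
s = out_size(s, 2, 2)     # S4: 2x2 pool -> 5x5
# C5's 5x5 conv then reduces 5x5 to 1x1, i.e. effectively a fully
# connected layer, just as in the original MNIST setting.
```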
Applying the LeNet-5 structure to this data set and training the model for 10 epochs results in an accuracy of 73%, while the testing accuracy is 66%. Considering that human-level accuracy on this data set is about 94% (according to Ho-Phuoc [2018]), a LeNet-5-type structure is not powerful enough to achieve high recognition capability. | https://towardsdatascience.com/convolutional-neural-network-champions-part-1-lenet-5-7a8d6eb98df6 | ['Amir Nejad'] | 2020-11-06 15:30:13.193000+00:00 | ['Python', 'TensorFlow', 'Deep Learning', 'Convolutional Network', 'Data Science'] |
CAN A NATION DIVIDED HEAL? | CAN A NATION DIVIDED HEAL?
The 2020 Election “Game” that seems to never end…
“Football Field Panoramic (HDR)” by joebiologyuni is licensed under CC BY-NC-ND 2.0
The American 2020 election has been like watching a football game between rival teams that never seems to end. No one has decided what the final score should be, and there are questions about some of the touchdowns that have been made. And while the crowd has left the bleachers, we’re still checking in to see who the winner is ultimately going to be.
We have different announcers, telling us different facts about what we’re viewing. One is decrying fouls and penalties and questioning the rules of the game that were acceptable during the last game played. The other says that the referees are calling the game correctly, but they are going back to check the tape.
The fans and the whole world are watching to see if we’re going to follow our own rules and wonder how the rules of the game can be so differently interpreted by the officials in place. Meanwhile the cheerleaders on our opposing teams continue to shout that their team has won, don’t let the opposing team “steal the victory.” The fans could go to credible news sources and not follow opinions, but that would take too much work and effort. So a lot of us choose to just listen to our cheerleaders read off the cue cards and regurgitate what they’ve been promising for the last few months.
What’s evident is that at some point we’re going to have a winner and a loser. Is it that important who the winner is? Is it that important that the game is played fairly? In an actual football game, depending on the league or division, maybe not so much. But when you’re talking about the most powerful country in the world, the winner and the process is very important. Not just for this country, but for the world at large.
“American Flag” by JeepersMedia is licensed under CC BY 2.0
A Snapshot of our United States
The truth is that we’re actually not on different teams when it comes to governing the United States of America. When I served in the Armed Forces and took care of active duty military, veterans, and their families, I cared little about their political opinions or values. I treated them as my fellow citizens, worthy of the best care I could provide, as service to this country.
I’ve been trying to understand the hate, anger, bitterness, and contention that seem to tear at the fabric of honest discourse on politics. It seems that in the rush to express our opinions about what is best, we forget we are talking to fellow countrymen who have an equal right to express their ideas and opinions. When those opinions or votes are at odds with our own, we often get offended and speculate about the character or morals of these individuals. This stereotyping leads to biases and prejudices that hinder the conversation and the common ground needed to move forward.
As a mother, seeing a man on his stomach with a police officer’s knee on his neck, crying out for his mother as he breathed his last this summer, pierced my heart. It brought to the forefront the apparent injustices in the way policing is performed in this country, and many of us grieved the loss of his life and so many others, especially when there seemed to be no reasonable, swift action or justice for those lost lives and the officers’ reckless behavior. I used my voice to call on my representatives, and my daughter and I took to the streets in peaceful demonstrations for change. I will admit that I was appalled at the reaction I received from some of my white Christian congregants and friends concerning any protest of racial inequality and policing. They were offended at any acknowledgment that “Black Lives Matter,” and my state of Mississippi’s response was to pass a bill called “Blue Lives Matter” that further criminalized offenses against police officers or first responders.
At the same time, in Mississippi, we have been trying to change our state flag to one that doesn’t have inherent ties to racism and more aptly reflects the 21st century and the values we would like to have in the “hospitality state.” There was a lot of outrage and anger at any suggestion of change. The term “cancel culture” began to be used, and people expected all Mississippians to respect and honor the views of a culture that existed centuries ago, and to forfeit the representation of current Mississippians and our ability to move forward, united.
I dare say seeing these reactions from people that I worshiped with, studied with, and fellowshipped with, was troubling. They seem to accept me and like me, but didn’t know the world I came from or understand my plight in society outside of the church walls. The same church people who preached and talked about love, seem to show no love for those that were mistreated by law enforcement. Their collective response was to support the policemen, contrary to what they saw or heard. They could not understand why flashing a rebel state flag in today’s society would make an African-American feel uncomfortable or unwelcome. It is up to us, African-Americans, to “get over it.”
Even my peaceful protest was ridiculed by a couple of elderly men who wanted to know why I was protesting. I will refer to them as OG1 (Older grandad 1) and OG2 (Older grandad 2) from here on. OG1 questioned my Christianity for protesting, while OG2 was offended at us, of all colors, having the audacity to walk down his peaceful street. With OG1 around, OG2 stated that black people had let drugs ravage their communities, and that that was their problem, although he later admitted that George Floyd’s death was unfair. However, after OG1 left, OG2 reminisced about how his father treated black people on their land back in the day, and finished by saying something to the effect that “God must have hated black people to allow them to go through so much.” I will state that at this time my eldest daughter was with me; she was offended by the blunt questions the OGs were asking and tried to get me to move on. I told her that I wanted to understand and asked, “How can we ever get to know one another if we don’t communicate? This is what I’ve been praying for.” She left me, exasperated by the whole ordeal. I understood that these people had another perspective that was worth pursuing, and I saw these men as my grandfathers. Isn’t that my Christian duty? They came from a different era and a different culture, but they weren’t blatantly disrespectful. I think they wanted to talk to someone they saw on the other side. They were concerned about what was threatening their peaceful existence. When I let them know more about myself and that I wasn’t against them, we were able to put our guards down, listen to each other, and view each other as people, not “political parties” or “labels.” I learned about their past life, they learned about my faith, and we talked about how to go forward. OG1 couldn’t believe that I attended a predominantly white church. OG2 had his guitar, and he ended up playing a tune, while later I sang a gospel hymn. I left with mixed emotions, but I left with hope.
Maybe I had shattered some of their myths about me, and they certainly shattered some of mine.
This incident, though uncomfortable, helped me decide how to go forward during this divisive time. I dare say that few people can believe or agree with any one person, candidate, or political party 100%. So, I won’t let hurt or misunderstanding put a divide between me and my friends, white or black. I won’t label every Trump supporter a “racist” any more than I will label every Biden supporter a “baby killer.” I refuse to take the over-simplistic approach of referring to people as directions (right, left, far-right, and the like); I think we are too complicated to be put in these narrow confines, and I find it personally insulting.
http://www.theinclusionsolution.me/a-point-of-view-who-is-american/
I Have a Choice
There was a time when I saw all the world in black and white: this side was right and that side was wrong. That is a very dangerous and prideful place to be. There was a time when, listening to church leaders, I hedged all my votes on one issue; I now believe that is a very dangerous place to be as well. I also do not believe that one side or one party is inherently evil and another party or side is inherently morally right. I don’t listen to opinionated news to interpret and translate the facts for me. I go to news that gives me the facts, and I interpret the facts for myself.
I don’t believe the worst in people, I don’t believe that everyone in government or in Hollywood is out to get us, and I refuse to listen to people attacking others instead of dealing with facts and the issues. I don’t believe that people are the problem, but the solution. And I still believe that we can unite as Americans and go forth to make a more perfect union that includes us all.
https://en.wikipedia.org/wiki/Flag_of_Mississippi
By the way, on November 3rd, 2020, Mississippi voted to accept our new flag, “The New Magnolia.” Yes, it took a change of heart, a change of mind, a lot of bold people standing up for what they thought was right, and 126 years. It will not all be done on our watch. The “game” that we are watching play out is a “game” that will continue as long as this country remains a nation. It will go through times of upheaval, of discord, of strife, of unity, and of peace. As the Bible says, there is a time for everything under heaven (Ecclesiastes 3:1–8). I believe it’s time to heal, and that starts one heart at a time.
So How Do I Heal?
1. Believe that there is a God in heaven who is Just, even though this life may not seem fair. And that we all will be accountable for the things we have done on this earth; the good and bad.
2. Learn to love people for who they represent, (Man is made in the image of God) not for what they do.
3. Know that love is not a feeling, it is what you do: “Love is patient and kind; love does not envy or boast; it is not arrogant or rude. It does not insist on its own way; it is not irritable or resentful; it does not rejoice at wrongdoing, but rejoices with the truth. Love bears all things, believes all things, hopes all things, endures all things.” 1 Corinthians 13:4 — 7 ESV
4. Be as merciful to others as I would like God and others to be to me.
5. Be as forgiving to others as I would like God and others to be to me.
6. Stand for justice and balanced scales for all in every area of life.
7. Pray that my heart does not become burdened with the hatred and division that would keep me from reaching out and treating all people with love and respect.
8. Pray that all of our leaders would have the motive to do the will of the people with wisdom, honesty, transparency, and without bias.
9. Be engaged with my political leaders locally, state-wide, and nationally to let my voice on the issues be heard, regardless of their political party. They represent all their constituents.
10. Be purposeful about the kind of material I ingest in my mind and heart from news, articles, books, social media platforms, tv, etc. We are what we eat. If I continue to listen to information that may be biased towards one side or agenda, my perspective is skewed. It might be better to check several different news sources, and get news that is not translated and interpreted by someone’s opinion. It also helps to sometimes listen to the news of someone that I may not normally listen to, to get a better feel for their fears, their worries, or concerns.
11. Do not let fear, or what could happen, rule my decisions. (False Evidence Appearing Real) Deal with the facts and act in faith.
12. Remember that I don’t know it all, and I don’t have all the answers. Always be open to growing in knowledge about things I think I know and about things I don’t know, hoping that revelations of knowledge, information, and experience will transform my beliefs on certain issues, and allow for that growth.
13. Remember that in the end it is all about us as a nation and a united people. No one person will always have things their way, no one party will always have things their way, no demographic of people will always have things their way. The fact that we are a group of people, means that we must work in community and consider everyone. That is democracy.
14. Never give up on people.
15. Make every effort to live at peace with everyone…. | https://medium.com/@valeriehousen/a-lot-of-people-are-hurting-how-do-we-heal-b725de3a7c23 | [] | 2020-11-18 02:44:11.022000+00:00 | ['Healing', 'Racial Justice', 'Election 2020', '2020 Presidential Race', 'Mississippi'] |
Create a Infinite Color Flipper in JavaScript | In a previous post, i had created a Simple color flipper in JavaScript. In that project, we had used an array of colors.
In this project we are going to randomly generate a hex color code from the roughly 16.7 million available colors (that’s why it’s “infinite”).
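The random hex generation at the heart of the project can be done in one small function: each color is an integer between 0 and 16,777,215 (256³ − 1), rendered as six hex digits. The DOM wiring shown in the comments assumes the h2/button markup set up in the next step, so adjust the selectors to your own markup:

```javascript
// Pick one of the 16,777,216 possible colors and format it as "#rrggbb".
function randomHexColor() {
  const n = Math.floor(Math.random() * 0x1000000); // 0 .. 16777215
  return "#" + n.toString(16).padStart(6, "0");    // pad to 6 hex digits
}

// Hypothetical DOM wiring (selectors depend on your markup):
// document.querySelector("button").addEventListener("click", () => {
//   const color = randomHexColor();
//   document.querySelector("h2").textContent = color;
//   document.body.style.backgroundColor = color;
// });
```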
We are going to build this project in CodePen, so open a new pen and add this basic HTML and CSS. In the HTML, we have a container class wrapping an h2 and a button.
In the CSS, we center the main container and add basic styles for the h2 and the button. | https://medium.com/swlh/create-a-infinite-color-flipper-in-javascript-417ca5f0d7cd | ['Nabendu Biswas'] | 2020-09-14 21:24:05.192000+00:00 | ['Javascript Projects', 'Javascript Programming', 'Javascript Development', 'Javascript Dom', 'Javascript Tips'] |
When Should You Tell Someone You Have Covid-19 | Open and honest communication is currently one of the most invaluable tools we have to slow the spread of Covid-19, experts say: Promptly notifying those around you if you test positive helps others know if they need to quarantine, alter their social behaviors, and get tested. But the protocol for sending out this kind of alert isn’t always so clear. How do you know when to start notifying people — after you test positive or the second you have a scratchy throat? Who needs to know? And what’s a contact tracer’s role in all of this? Here’s what the experts say about whom you should notify — and when — if you test positive for Covid-19 or are experiencing Covid-like symptoms.
When should you say something?
If you test positive for Covid-19, it’s crucial to quickly inform those with whom you’ve had (masked or unmasked) contact. But should you really wait for an official diagnosis to make those calls and send those texts? Or should you start even sooner, when you feel symptoms associated with Covid? While more disclosure is generally better than less, cold and flu season is complicating matters because it’s hard to tell which illness you may be coming down with. “It will be important not to freak each other out at every turn,” says Kumi Smith, PhD, an assistant professor of epidemiology and community health at the University of Minnesota. “My personal practice would be, if I’m reaching a point where I am considering getting a test, then that’s when I’d start to let people know.”
That said, other experts prefer earlier action, like Susie Welty, an academic program manager and contact tracing expert at UCSF: “I’d recommend letting your close contacts know as soon as you start to feel symptoms. Whether it’s Covid or the flu, it doesn’t really matter. They’re both infectious and you should act the same either way.”
Whom to notify
If you test positive or start feeling ill, Smith says, “The first thing to remember is not to panic and not to blame yourself or feel guilty. This is a very contagious virus and it spreads in ways that we just aren’t prepared for. There’s a lot that is out of our control.”
After you’ve self-isolated, put together a list of everyone who needs to be contacted. According to Welty, the time frame you need to consider begins 48 hours before you first started showing symptoms. Or, if you were asymptomatic at the time of being tested, go back 48 hours before the positive test. Contact anyone who you were less than six feet apart from, (masked or unmasked) for approximately 15 minutes or longer. “That doesn’t have to mean talking — just close contact,” she says. “And it doesn’t have to be a consecutive 15 minutes. It can be over the course of a day.” Intimate partners, household members, and anyone whom you’ve shared utensils with should always be promptly notified.
People who you’ve been around who you don’t know personally (waiters, grocery store clerks, uber drivers, etc.) fall into a gray area. Welty says contact tracers won’t typically notify these types of contacts because interactions are typically brief enough that their risk of infection remains quite low. That being said, some social establishments (like restaurants) are being diligent about collecting patrons’ contact information for this exact reason. If you want to be extra cautious and considerate, Smith says it wouldn’t hurt to give the places you’ve been to a call to tell them you tested positive. If they have contact information for customers who were there around the same time, they can notify them.
As uncomfortable as these conversations may be, being transparent about your diagnosis is crucial not only to the health and well-being of those closest to you but to their loved ones as well.
What to expect from case investigators and contact tracers
On average, you should expect to get a call from a case investigator within 24 hours of receiving a positive diagnosis. However, the scope of contact tracing programs varies dramatically throughout the country. If your local health department’s program is limited or your area is currently experiencing a surge in cases, it may take longer for someone to get in touch. A quick reminder: “Anyone who asks for your social security number or any kind of insurance information or anything related to personal details beyond your health status and date of birth is most likely a scam,” Welty says.
You can expect them to ask about your symptoms, whether you’re getting the treatment you need, and whom you’ve been in close contact with. “You’ll give those names over and that will be handed off to the contact tracing team,” Welty says. “Then, they’ll reach out to those people to tell them they’ve been exposed and should quarantine, and will give them whatever testing guidance is needed.” (Contact tracers will protect your identity, though Welty says that many contacts can figure it out for themselves.)
While contact tracers can break the initial news to the contacts you provide them with, you will be able to notify your friends, family, and co-workers faster than a contact tracer is able to. You also will not be informed as to who a contact tracer has or has not been able to get in touch with, so if you want to be 100% sure, it may be best to alert them yourself. “Contact tracers are having to prioritize and triage cases. They also have to do it only during working hours,” says Smith. “The sooner people know, the sooner they can alter their behaviors and make decisions informed by your current status.”
How to actually share the news
Notifying people about a positive Covid-19 diagnosis can come with a sense of shame and fear of how they may react. To this, Smith offers an important reminder: “I don’t think we should see this as a personal or moral failing but rather a public health responsibility that reminds us that we’re all in this together.”
When sharing the news, Smith recommends using language like, “‘I don’t know that I necessarily gave it to anyone, but I care about your health and I want to make sure that you and I don’t end up becoming part of a transmission chain that we don’t otherwise stop. And so for that reason, I’m giving you a call.’”
As uncomfortable as these conversations may be, being transparent about your diagnosis is crucial not only to the health and well-being of those closest to you but to their loved ones as well. “If someone is about to go visit with an elderly family member and they knew that you just tested positive, then they might change their plans,” Smith says. “The more we can communicate with each other openly and without public shaming or rejection the better. You can’t predict what your friends will do, but you can do your best to protect any future contacts that your friends will have.” | https://medium.com/@jarin646289/when-should-you-tell-someone-you-have-covid-19-bd5a47dc5047 | [] | 2020-12-27 13:52:51.408000+00:00 | ['Health', 'Coronavirus', 'Public Health', 'Covid 19', 'Pandemic'] |
Top MedTech Startups: | Medical Tech Outlook
The role and value of medical technology are increasing by leaps and bounds as the healthcare industry moves rapidly toward digital transformation. The healthcare sector is taking advantage of advanced technologies to improve and broaden healthcare services. With the help of AI, 3D printing, and automation, medical researchers and entrepreneurs are making major breakthroughs that allow for easier and more accurate diagnoses, improved treatments, drug discovery, customized prosthetics, and more.
Over the course of 2019, medical experts have increasingly turned to data science to gain better insights from existing health information. Blockchain’s ability to provide data security is expected to drive a push towards more data-driven technologies, which should bring about a real revolution in digital healthcare. Looking ahead to the future of digital health, the use of AI to improve healthcare robotics is another promising field.
Understanding the changing times, MedTech Outlook has compiled a list of the Top 10 MedTech Startups to guide organizations associated with the medical sector in harnessing the power of technology to tackle today’s medical challenges, while simultaneously adapting it to improve and broaden healthcare services. With several innovative technological capabilities and success stories up their sleeves, startups like Innovative Sterilization Technologies, Biologica Technologies, OrthoGrid Systems, and more are constantly proving their mettle in the field of medical technology.
We present to you MedTech Outlook’s : “Top 10 MedTech Startups Companies”
Abilitech Medical is one of several medical device companies launching a product that, at first blush, appears straight out of science fiction, built for patients with very restricted mobility, such as those suffering from MS, muscular dystrophy, ALS, and spinal cord injury. Their first device increases patient function by helping individuals feed themselves, undertake self-care duties, and literally open doors to fresh possibilities. Significantly less time is spent fitting the device to the patient
abilitechmedical.com
Founded in 2015, Biologica works to find new and better ways to increase efficacy in orthobiologics by leveraging its proprietary technology. The company’s proprietary technology captures naturally occurring growth factors found within allograft tissue, providing patients and surgeons with novel biologic solutions. The company has been developing a number of orthopaedic initiatives involving our proprietary processing methodologies. The company’s primary product, ProteiOS growth factor possesses an array of osteoinductive, chemotactic, angiogenic and mitogenic growth factors that can be added to enhance a surgeon’s scaffold of choice
www.biologicatechnologies.com
Innovative Sterilization Technologies is at the nexus of disrupting the market with the highly efficient filtered vent sterilization containers — ONE TRAY®. The ultra-efficient technology underpinning ONE TRAY® Sealed Containers has been cleared by the FDA with a 4 minute at 270-degree sterilization cycle with no required dry time and a defined shelf life. IST has partnered with an internal K1 Medical and EZ-TRAX™ to organize OEM knee and hip into ONE TRAY®/EZ-TRAX™ universal sets that consist of just three individual levels of instrumentation, allowing a facility to process two full EZ-TRAX™ sets in one of the washer cycles. This brings in over 80 percent efficiency to the entire process
www.iststerilization.com
OrthoGrid Systems is a global MedTech leader in the development, innovation, and commercialization of alignment technologies and platforms for orthopedic surgery in North America, Asia, and Europe. Our intelligence-guided systems are designed to work within the surgical theater and interface with existing hospital equipment revealing fluoroscopic distortion and enhancing surgical outcomes by providing greater accuracy and proficiency. Our technology platforms work for all Orthopedic implants in the market, and ultimately prevent re-admissions, reduce hospitals costs, and increase positive patient outcomes
orthogrid.com
According to medical studies, one in every three women experiences stress urinary incontinence (SUI) — the leakage of urine due to the pressure on the bladder or urethra — at some point in their lives. Minneapolis-based pelvic health company, Pelvital, hopes to help change this. The company is working to commercialize FlyteTM, a simple in-home treatment for SUI, designed to treat weakened pelvic floor muscles and provide a good outcome at a fraction of the cost and risk of surgery. It is the only product to use mechanotherapy to treat pelvic floor muscle (PFM) disorders, the primary underlying cause of female SUI
www.pelvital.com
ProSomnus enables dentists to create better treatment experiences for people suffering from obstructive sleep apnea and snoring. Its next-generation sleep apnea devices utilize patented, FDA-cleared, advanced technologies, making them effective, comfortable, safe, and easy to use. ProSomnus is focused on commercializing device designs that are clinically relevant, and on creating treatment experiences that exceed the needs of practicing sleep dentists and their patients. The company is dedicated to further advancing the treatment of obstructive sleep apnea through ongoing research, product development, and process enhancement for improved effectiveness, efficiency, and convenience for patients and doctors alike.
prosomnus.com
ReGelTec, a pre-clinical medical device company, is developing the next generation of spinal implants for lower back pain and degenerative disc disease. The company’s HYDRAFIL™ solution is heated and injected into the nucleus of an intervertebral disc. As the HYDRAFIL™ cools, it solidifies as a homogenous solid mass between the vertebrae to eliminate pain. Since the implant is a hydrogel that has mechanical properties similar to the normal disc tissues, it helps restore the natural biomechanics of the disc and improve spinal alignment. With this commitment to developing innovative healthcare products, ReGelTec will start its clinical trial in Colombia by early 2020
www.regeltec.com
A clinical stage medical technology company, Smartlens has developed a first-of-its-kind disposable, electronics-free and ultra-sensitive soft contact lens that measures eye pressure and its fluctuations throughout the day, giving doctors and patients a better tool for glaucoma diagnosis and management. Smartlens’ non-invasive soft contact lens has a biocompatible sensor that constantly monitors and responds to changes in the eye pressure of the person wearing it. Patients simply have to take a selfie of their eye using their smartphone while wearing the lens. While Smartlens’ revolutionary product is yet to be commercialized, it is already making great strides in tests, providing accurate results
www.smartlens.health
Alphatec Spine
Helps in improving lives by providing innovative spine surgery solutions through relentless pursuit of superior outcomes
BrainCo
BrainCo strives to apply brain machine interface (BMI) and neurofeedback training to optimize the potential of the human brain. BrainCo was founded in 2015, transforming the most advanced technologies from the Center for Brain Science at Harvard, and Mcgovern Institute for Brain Research at MIT into research and development of wearable wireless EEG brain wave detector. The company has specialization in brain-machine interface technology, focus level enhancement, brainwave detection, analog-digital system, brain science. The products offered by the company are FocusFit, Focus EDU, FocusNow, Focus Developer Kit, and Stem Kit. | https://medium.com/@techmag1/top-medtech-startups-6af2b310d528 | ['Technology Magazine'] | 2020-08-20 05:35:50.158000+00:00 | ['Medtech', 'Technology', 'Medical', 'News', 'Medical Technology'] |
SOLARWINDS IN THE SOLAR SYSTEM: TO GO WHERE NO HACKER HAS GONE BEFORE | Hackers (allegedly nation-state actors) compromised last week SolarWinds software to lead a massive malware campaign affecting U.S. federal agencies and other governments and private companies worldwide. Cyber security threats pose a huge challenge for aging technologies of the satellite industry, which is on the verge of transition between generations of companies. Next generation is better prepared for cyber security challenges, and yet - a gap of a decade forces older companies to adopt new methods to prevent hijacking of their satellites and hacking their communications.
The world’s daily functional routine is heavily dependent on space infrastructure - about 2700 operational satellites and numerous ground stations - for communications, financial services, weather monitoring, defense, air and maritime transport and trade, to name a few. This dependence exposes space systems to cyber threats. Nevertheless, these threats are unfortunately under-recognized, and the space industry is mostly driven by commercial market demands and the pressure to speed up development and production and to cut costs. This leads companies to cut corners in areas like cyber security that are secondary to getting satellites into space.
All satellites and associated ground infrastructures are subject to remote cyber-attacks. These are not hypothetical concerns: the Center for Strategic and International Studies has documented growing cyber and physical threats to U.S. and allied satellites this year. Space systems are vulnerable because many were created before cyber security became a top priority. Their vulnerabilities — like hardcoded credentials used by ships, planes, and the military — make access by sophisticated and less-sophisticated actors incredibly easy: protocols explaining how to hack satellite internet and live demonstrations of piracy on U.S. Navy satellites are widely available.
Recent and more distant history includes many hostile incidents of satellite hijacking — for instance in 1998, when hackers took control of the U.S.-German ROSAT X-Ray satellite, and a decade later, when two U.S. environment-monitoring satellites were interfered with. The complexity of satellite architectures offers a hacker many possible entry points from which to eavesdrop, tamper with information, cause a service interruption or take control of the satellites. With the successful completion of SpaceX’s Demo-2 mission, and as spaceflight transforms from a public endeavor into a commercial industry, more spacecraft connect with ground-based assets and users, making the attack surface exponentially larger. Vulnerabilities grow hand in hand with these entry points.
Some space cyber security standards and regulations are evolving slowly (e.g., at the Committee on National Security Systems and the National Oceanic and Atmospheric Administration), and this fall, to counter potential satellite hacks and hijacks, the White House issued a space-security directive to industry, urging manufacturers to design their hardware and software so that operators can monitor and adapt to activities that could disrupt space system operations. Thus, while regulation and legislation lag behind the constantly evolving technology, the effective response to space-based cyber threats depends on creative and innovative solutions from the private sector.
A new generation of satellite communication companies is emerging: SpaceX has received U.S. FCC approval to launch 12,000 satellites in the coming decade. OneWeb, which seeks to create a “satellite constellation” - a network of 48,000 Low Earth Orbit satellites around the Earth - has raised more than $1 billion and launched its first 74 satellites. If either company successfully launches its satellite network and can offer broadband at an affordable price, older satellite communication companies could lose significant market share.
This generation change will make some of the recurring cyber security issues disappear: the vulnerabilities found in the satellite-based communications and internet system of Iridium Communications (IRDM), and the vulnerability discovered in Inmarsat’s SATCOM control software, which allowed an attacker to take arbitrary control of the system, may vanish as new technologies replace old ones. But to stay relevant, EchoStar (SATS) and other older satellite communication companies will need to 1) partner with or partially own new-generation companies and 2) embrace new technologies that provide better services and, specifically, improved cyber security protection. These steps may help the older industry grow for another decade, until new protocols, regulations, technology, satellites, and new hackers take over the space. | https://medium.com/@poderez/solarwinds-in-the-solar-system-to-go-where-no-hacker-has-gone-before-9936ba3858fb | ['Erez Podoly'] | 2020-12-22 12:50:06.850000+00:00 | ['Cybersecurity', 'Satellite Technology', 'Hacking', 'Stock Market', 'Space']
8/10 Best teen or college comedies, including Animal House | Gutting the Sacred Cow | We’d be shot like the horse was (with blanks) if we left off Animal House
Here are KI’s picks:
1. American Pie — this movie still makes me laugh…hard. It’s aged perfectly and in this time of PC hyperdrive, it’s even more irreverent.
2. CaddyShack — Yeah, to me this wasn’t about Chevy or Rodney…it was ALL about Danny Noonan…..NOONAN!
3. VanWilder — I laughed so hard when I saw this in the theater I almost threw up. Ryan Reynolds at his Ryan Reynolds-est. If watching a group of guys guzzle eclairs unknowingly filled with bulldog goo doesn’t make you laugh, go to the doctor…you might be dead.
4. Animal House — I don’t need to say anything here.
5. Roadtrip — I’m not sure how often my friends and I quoted this movie, but it definitely caused some girlfriends to leave me. Did I say two fingers? Better make it three.
6. Superbad — Jonah Hill peaked early with this one. A truly irreverent take on the end of high school and trying to maintain those friendships…oh, and choosing the right name for a fake ID.
7. Porky’s — I haven’t seen this one in a looooong time, but I had to include it on my list because I watched it A LOT in my younger years…for obvious reasons.
8. Revenge of the Nerds — Sure, there are some scenes that would be felonies today, but in the 80s, this movie was a bible. A booger filled bible.
9. PCU — I was in college in the 90s, this came out in the 90s and spoke DIRECTLY to me. Everyone wanted to be Piven and we all knew that Gutter was a tool (that was Jon Favreau BTW!)
10. Pitch Perfect — I’m sorry, I’m a sucker for singing… KG, stop rolling your eyes.
KG: Looks like KI had some of the same ones I have except one pick of his that’s very effeminate but hey, he’s on brand for that and also liking Grease.
1. KI put Caddyshack on the list so I’d be remiss if I didn’t as well.
2. Ferris Bueller’s Day Off- This should’ve been on KI’s list but he must have been doing his kegel exercises when he added Pitch Perfect. An absolute mainstay on all classic teen/college lists.
3. American Pie-If you don’t find Stiffler funny, we probably don’t have much else in common.
4. Road Trip-See above
5. Eurotrip-Ahh, topless women for no reason. Remember when films were fun and could be watched without a checklist? And oh yeah, one of the catchiest goddamn songs you’ll hear from a film. If I ever meet Matt Damon, I’m telling him that was my favorite role of his.
6. Animal House-the cornerstone of all college comedies.
7. Back to School-Give me Rodney Dangerfield in damn near anything. But his son in the movie was a cunt, a flat out cunt.
8. Van Wilder-KI hit it on the head but he forgot Tara Reid at her pinnacle. But again, he’s too busy singing along with Anna Kendrick. If John Hughes was still alive, she’d be every friend instead of the girlfriend she wanted to be.
9. Weird Science-Just kidding, that film is fucking trash. Stop pretending it’s good, you were brainwashed because it was on a 1.5 year loop on HBO
9. Revenge of the Nerds-If Animal House is a cornerstone, this is another cornerstone.
10. TIE: I love American Pie 2 for many a reason but Jim jerking off with superglue and then getting a trumpet stuck in his ass is Shakespearean. But I can’t forget a classic that most of you would yell at me if I didn’t put on. That’s right, HOUSE PARTY shall not be overlooked. Robin Harris was hilarious, lest we forget.
And before you say what about, I certainly will tell you all that Old School, Breakfast Club, PCU, Mean Girls, Clueless, Fast Times, Can’t Hardly Wait, and Grease were intentionally left off my list because they’re insanely overrated or just viciously unfunny. | https://medium.com/@guttingthesacredcow/8-10-best-teen-or-college-comedies-including-animal-house-gutting-the-sacred-cow-4af84fcba5ed | ['Kevin Gootee'] | 2020-12-27 15:34:12.884000+00:00 | ['Podcasting', 'Movie Review', 'Comedy', 'Movies', 'Podcast'] |
What to Look For in a College Tour | In the cycle of each academic year, two events come together on college campuses to demonstrate the continuity of tradition, business, and the calendar.
The first is “move in” day. It’s one of my favorite moments, exceeded only by Commencement and perhaps the emotion that comes during an unexpected win in an NCAA March Madness game or, in a very different way, a 9/11 commemoration ceremony.
“Move in” day is a kind of family initiation in which parents separate from their children who begin their lives officially as young adults. Neither may be ready. Indeed, the ritual is often more painful for the parents than it is for the new college student.
Campus Tour is Part of Family’s College Selection Ritual
Concurrent with “move in” day, however, are college tour visits. Colleges and universities vary in their opinion and openness to college tours. For example, many do not offer tours on Sundays, an especially convenient day for parents who work within driving distance.
A number of institutions also discourage interviews, citing various reasons including the sheer volume of applicants. But the college tour is an inescapable part of a family’s selection ritual.
A college tour is often a tense moment for families. Many chalk up almost impossible numbers of campus visits in an effort to reach the best decision possible.
Parents Behaving Badly on College Tour
Some parents behave badly during these tours, auditioning for their future role as “helicopter parents” buzzing in to solve problems that beset their offspring throughout their college careers. Others are in awe of the facilities, programs, and people that they encounter. And a few burst with pride at the opportunities they help to make available to their children, opportunities that were often not available to them.
College Tour Tips for Parents and Prospective Students
There is a practical side to these tours. Here are some points parents should consider while embarking on college tours:
The result of the campus visits may surprise you, but you’re not the one going to college. Where you think your child should go may not be where your child wishes to go. It’s not a parent’s decision.
Don’t limit the imagination or range of opportunity for your child until you have to do so. Anything is possible until the applicant receives the acceptance letter and financial aid package. The sticker price these days is not what almost any family will pay.
There are many colleges and universities at which your child may have a rich and fulfilling experience. Surprisingly, many of the prospective applicants “know it when they feel it.”
Plan your college tour strategically. Work hard to determine the differentiation that exists among college campuses and encourage your child not to apply to those colleges where the memories are not sharp and distinct.
Treat a college tour as something akin to a half-day bus or trolley tour in a city that you have never visited. Take the tour but return to those sites that your student found most intriguing as you passed by.
Locate the campus anchors — libraries, academic facilities, and residential and recreational complexes. Is the library of good quality, well-staffed, and a suitable learning commons? Are there excellent athletic facilities or simply excessive jock-plexes built off tuition and debt? Are the dorms clean and spacious, with progressive, varied housing options, capable of handling technology but well short of a Taj Mahal?
Talk to students and faculty. One of my sons visited the campus of a very respectable Northeast university. He ran into a history professor and noted that he’d like to consider the institution and major in history. The faculty member openly discouraged him from doing so and suggested other highly selective institutions of similar quality. The lesson was invaluable.
Do your research to know what questions to ask. How do faculty relate to students? What research, externships, internships, and study abroad options exist?
There are a thousand teachable moments outside the classroom, all of which will factor heavily into how quickly your child adapts to the new campus environment. If your child plays a trumpet, is there a jazz band, pep band, or university symphony to meet others who share similar interests?
Does the school respect your child’s prospective major? Should you major in music at an engineering school if the institution does not adequately fund the people, programs, and facilities for the major? Sometimes it’s the wrong kind of prestige that attracts a student to a college. Getting in does not mean happiness.
As a family, are you willing to sacrifice to pay for the family’s financial share of the price of admission? If you haven’t saved, are you willing to sacrifice or is your child’s college education a transaction negotiated as though it were a used car sale? The mindset and resources of a parent will heavily influence the outcome.
A college tour can be a very special moment for a nervous child and anxious parents if approached as an adventure.
Ultimately, the best advice to most parents is to stand back and watch with some pride the first steps that your child is taking to make their own way in the world. It’s what you’d always hoped would happen and what they need to do. | https://medium.com/academic-innovators/tips-college-campus-tour-7282238febab | ['Brian C. Mitchell'] | 2018-09-18 15:50:05.792000+00:00 | ['College Admissions', 'Higher Education', 'Financial Aid'] |
The Bad Writing Habits We Learned in School: And Advice to Forget Them | Photo by Evan Leith on Unsplash
The Bad Writing Habits We Learned in School: And Advice to Forget Them
‘Good habits make time your ally. Bad habits make time your enemy.’
Intro: Why Term Papers Need to Go
If you’re an undergraduate student right now, you are probably consuming and sharing more forms of communication than at any time in history: texts, blogs, Instagram, tweets, TikTok, email, news.
You are a node in a fast-moving network of incoming and outgoing communication of all kinds.
In a society in which most of us are immersed in massive amounts of information, sociology professor Deborah Cohan writes, the power of writing lies not merely in the ability to absorb and recycle endless amounts of information, but more so: “to appreciate essence, nuance, and depth, to distill and focus on important points without convenient guides to translate all the ideas for [us].”
It’s with this ethos of what writing enables us to do that Cohan calls for the end of a modern staple of higher education: the end-of-semester, final ‘term paper.’
In her essay, The Case Against the Term Paper, Cohan writes: | https://medium.com/swlh/the-bad-writing-habits-we-learned-in-school-and-advice-to-forget-them-7662e7517e61 | ['Gavin Lamb'] | 2020-07-20 22:22:45.146000+00:00 | ['Writing Tips', 'Education', 'Productivity', 'Learning', 'Writing'] |
Typewriter app ate my first 500 words | Note to self:
When using a new program, such as Typewriter, which is so minimal it barely has a menu, use caution. The first time you open this app when a list of “Key Mappings” is presented, you might want to copy that shortcut list somewhere safe, because three days before ‘wrimo starts officially, when you decide it’s not exactly cheating to begin your 50k word count, you are so excited that you don’t want to write more scenes by hand into your notebook, and you actually got your behind out of bed partially because a coughing fit overtook you, and you had to get water, and then you were up anyway so you may as well feed the cat, feed the dog, start the water boiling for coffee, and your girls, your characters are starting to talk to you, so hey! Let’s try out that program.
So you eke out 500 words, plus or minus, most of a chapter, which may not end up being the first chapter, because another ‘wrimo suggested that you don’t have to write your chapters in the order they may be in your (hypothetical) novel, and you suffer through the typos, and the really bad writing, because this is a program, like your trusty typewriter, where your typos and mishaps are immortalized on the page, and you go to exit, and get a prompt that says:
Use caution before exiting!
And you think, surely I’ll get to save first if I exit, because you can’t remember how you’re supposed to save in this app, and you don’t think to try your good, reliable friend Command+S (Apple+S), so you say No, at first, then think, well, let’s gamble, it’s only 500 words, and I wasn’t particularly attached to them.
So you go to exit Typewriter again, and you say Yes to exiting, and Poof! Those 500 words (give or take a dozen), that you thought you weren’t attached to, you thought you didn’t care much about because they were messy and imperfect and riddled with typos in this $%^%& app, are gone.
You sit and stare at your desktop picture of the Golden Gate Bridge, taken from the top of the bridge, not finding it ironic that the perspective is 100 feet in the air, wondering where your words went. And you chuckle. You’re slightly amused because you made your character chuckle, at something that was ironic more than funny as well. But those words are gone now, along with your character’s chuckle.
You are glad you are finding this out now, on October 29th, not on November 1st, when those words matter a lot more. So you install Typewriter on another Mac, because you can’t get back that shortcut menu no matter what you do, and it’s not on the page where you downloaded the app, and you see it, again, finally. And you chuckle again, because Apple+S could have saved those measly, poignant, probably cliche riddled 500 words.
magical key mappings
And you are never, ever going to forget this, especially not for the next 32 days.
Next: What is a NaNoWriMo, and is it contagious? (T minus 2) | https://medium.com/nanowrimo/typewriter-app-ate-my-first-500-words-5f8747d36703 | ['Julie Russell'] | 2015-10-29 03:19:25.509000+00:00 | ['Writing', 'NaNoWriMo'] |
Big Data Analytics Chats 1: Organizational & Personal Success Factors with Cuneyt Aksakalli | Çağatay Kıyıcı Sep 7
Cüneyt Aksakallı is my first guest for my “Case Studies in Analytics” course project in the MEF Big Data Analytics Graduate Program. Cüneyt has been working in data science since 2008. He has managed the R&D and AI teams of a large-scale e-commerce operation. As a consultant, he has also developed data science projects for local and international businesses, especially in retail, B2B, and e-commerce. Natural Language Processing is an area he has specialized in.
Cüneyt is also a seasoned entrepreneur. He sold his first startup, in retail. After long years of providing consulting services, he recently launched a new startup offering e-commerce search engine products with a SAAS business model, building further on his experience in NLP and e-commerce. Building custom search engines is extremely difficult and costly for e-commerce companies. Some SAAS products in the market try to address this need, but they require complicated manual configuration. Their e-commerce search solution simplifies integration and maintenance. They now have 9 customers generating over 100 million monthly requests.
Our Discussion
We had an interesting interview of about an hour. In this blog post, I have consolidated Cüneyt’s insights on two levels:
Organizational Level:
Why do most data science projects fail?
Organizational and team level issues and how to eliminate them
Impact of improving data literacy in non-technical teams
Personal Level
Scope of data science roles in various context
Self-development in data science careers
Key capabilities in data science career
Cüneyt was most interested in the execution themes from our topic list. AI projects are very costly and complicated due to infrastructure costs, the need for a skilled workforce, and privacy risks. Correct execution is the key success factor, because most data projects fail.
Why Most Data Science Projects Fail?
Most data science projects fail, period. Cüneyt, as a seasoned data professional with extensive project experience, opened our discussion head-on with this subject. This is definitely not the sexiest subject in data science, which is at the top of its hype cycle. I believe this issue will become much more visible once the hype cycle fades past its peak.
Cüneyt elaborated on the definition of “fail”. He thinks failing to fulfill technical requirements is the least relevant issue. The most important failure is the inability to operationalize a project and/or to deliver the intended business results, with failure rates reported as high as 80–90%.
He identified the inability to integrate data and AI approaches into organizations’ core processes as the main reason. AI and big data analytics are very powerful transformative tools. A data approach should be integrated into the core processes of an organization to deliver results. This is very difficult and requires hard work. When organizations take the shortcut and position data projects as a “side gig” sitting alongside core business processes, those projects lack the required attention and cannot be operationalized. Cüneyt thinks orphan data projects — which meet technical requirements but deliver no business value — are very common.
Orphaned Projects
One reason is concentrated project ownership. Some data projects are initiated at the top level of an organization but fail to stimulate participation from functional teams. When functional teams cannot participate, their experience, ideas and operational know-how will not contribute to the project requirements. The resulting project may meet top management’s requirements but will fail to deliver business value.
In some organizations, AI and data may not be understood as key drivers of digital transformation but instead be regarded as quick wins in marketing and PR. “Me too” data projects are also, unfortunately, not exceptions in business.
Cüneyt’s approach to these problems is setting up small, diverse and empowered teams. Every functional team is represented in these project teams and brings its unique insights and domain expertise. This small-team approach has consistently delivered successful data analytics and AI projects.
Increasing data literacy in organizations is another powerful tool. Training on core data science concepts boosted both the effectiveness of the small teams and the quality of contributions from functional team members.
But there is a caveat in collaboration: beyond the project teams, communication about the project should be handled centrally, by designated team members, and in a consistent way. Otherwise, communication errors may create confusion. Also, eliminating contact between developers and internal/external customers of data projects outside the project teams is crucial for flawless execution. Such interactions may inject ill-prepared features into projects without careful planning and trade-off analysis.
Challenges of Becoming the Data Guy
There is exploding interest in data science and AI in business: lots of hiring for data jobs, and lots of candidates trying to switch to data careers. The attention of most candidates and employers is mostly on technical capabilities. Cüneyt prioritizes non-technical capabilities.
Communication skills to function well in project teams and collect inputs from functional team members
At least basic domain knowledge, the more the better. Good domain knowledge directly contributes to the business value of the project and quality of communication with functional teams.
Specialization (e.g. customer analytics, NLP, ecommerce, fintech, CRM, digital analytics)
Specialization with domain knowledge (e.g. customer analytics in retail, customer analytics in banking)
Curiosity
Critical / scientific thinking capability
Specialization is good, but in most organizations data teams need to handle multiple tasks because of their small size. For example, many companies don’t have dedicated data engineering, ML Ops or visualization teams. Therefore, these tasks fall to the data guys. These demands are contradictory, but they are the sad realities of most businesses and public organizations.
Among soft skills, critical/scientific thinking is the one that connects them to the technical skills. Cüneyt thinks a solid educational background — and preferably doing academic studies, research, papers, etc. — is crucial to developing this skill. As data tools, libraries, and materials become more accessible and cheaper, many data career candidates can quickly learn the tools and start producing. Cüneyt believes that without domain knowledge, communication skills, specialization, scientific thinking skills and lots of hard work, those quick wins will not create real value for organizations.
The discussion with Cüneyt and his insights were very helpful in developing a more complete perspective of data science, covering both personal and organizational aspects. | https://medium.com/@cagataykiyici/big-data-analytics-chats-1-organizational-personal-success-factors-with-cuneyt-aksakalli-46b35f657e19 | ['Çağatay Kıyıcı'] | 2021-09-07 11:38:25.792000+00:00 | ['Big Data Analytics', 'Data Science', 'Mef Üniversitesi']
How to…calculate and compare owned media engagement rates across multiple social channels | Social media measurement programmes can be complex. Perhaps your brand is active on several social channels; maybe you are reporting outputs and outcomes separately, using metrics and exports from those platforms’ native analytics/insights functions. What metrics are comparable? Where should we be cautious when interpreting the data?
Steph Bridgeman, from Experienced Media Analysts, has experience of working on measurement programmes with organisations of all sizes. Keen to guide us through ‘the basics’ so we understand the principles and meaning behind some of the metrics that seem so familiar, she looks in this post at engagement rate and how to calculate it manually.
Engagement rate combines factors related to audience (reach / impressions) with those related to actions (evidence of consumers reacting to your content, likes, comments, shares, link clicks etc).
First, we need to get a picture of the number of people reached by each social post and the level of engagement with those posts. Some platforms make this job easy (Facebook, Twitter, LinkedIn), while others, such as Instagram, make it a little harder.
Facebook engagement rate
If you are an administrator of a Facebook page, there will be a tab called Insights which will allow you to export data at a page level and post level:
In this example, we’ll export data for the past three months — March to May 2019.
When you click on insights, it defaults to results for the last week, to change the dates, click on export data:
Then set your date range, choose .csv as your file format and choose post data.
Why choose post data? Post level data on Facebook gives you a detailed view per post showing potential reach, split between paid and organic (if any paid promotion was applied), evidence of actions taken, details related to media views etc. It is different from page level data which reveals information about people who have interacted with your Facebook page, rather than with specific content (posts) on that page.
Why choose .csv file format? When you export Facebook post level data to Excel (e.g. a .xls file format) rather than csv, it places different types of metrics on different tabs in Excel:
However, there are limitations to the number of posts (500) or days (180) which can be exported in one go with Facebook Insights. This means that if you are carrying out analysis over a longer time period such as for an annual review, a .csv export works best. A .csv export places the multiple tabs of data into just one worksheet in Excel. This makes it easier for you to paste multiple rows into a bigger combined master worksheet when carrying out analysis over a longer time period than the Facebook system allows, perhaps for year on year comparisons.
Tip: although your export will land in your downloads folder as a .csv file, it is always good to save it as an Excel file afterwards so you can carry out your data interrogation, save changes, etc.
Your Facebook post level export will look a little bit like this:
Different clients will have their own definitions for what constitutes engagement rate — I use these Facebook columns:
Column I — “Lifetime Post Total Reach” — defined as “The number of people who had your Page’s post enter their screen. Posts include statuses, photos, links, videos and more. (Unique Users)”.
Column O — “Lifetime Engaged Users” — defined as “The number of unique people who engaged in certain ways with your Page post, for example by commenting on, liking, sharing, or clicking upon particular elements of the post. (Unique Users)”.
In the export example above, the sum of the reach of all posts (col I) was 447 and the sum of the engaged users (col O) was 196.
Engagements / reach = engagement rate
So, the engagement rate on our team Facebook page during the three-month period was 196/447 = 43.8%.
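If you prefer to double-check the spreadsheet sums in code, the same calculation is easy to script. Below is a minimal Python sketch using only the standard library; the column headers are the ones from the post-level export described above, and the three sample rows are invented posts whose totals match our example (447 reached, 196 engaged).

```python
import csv
import io

# Column headers as they appear in the Facebook post-level export described above.
REACH_COL = "Lifetime Post Total Reach"
ENGAGED_COL = "Lifetime Engaged Users"

def facebook_engagement_rate(rows):
    """Aggregate engagement rate = sum of engaged users / sum of post reach."""
    total_reach = sum(int(r[REACH_COL]) for r in rows)
    total_engaged = sum(int(r[ENGAGED_COL]) for r in rows)
    return total_engaged / total_reach if total_reach else 0.0

# Three invented posts whose totals match the example above (447 reached, 196 engaged).
sample = io.StringIO(
    "Lifetime Post Total Reach,Lifetime Engaged Users\n"
    "200,90\n"
    "150,70\n"
    "97,36\n"
)
rate = facebook_engagement_rate(list(csv.DictReader(sample)))
print(f"{rate:.1%}")  # 43.8%
```

To run it on a real export, swap the StringIO sample for `csv.DictReader(open("your_export.csv", newline=""))` — and check the header names in your own file first, as Facebook has been known to rename columns.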
Twitter engagement rate
A deeper understanding of engagement per tweet can be accessed by clicking on analytics once logged into Twitter and then clicking on Tweets:
Twitter’s basic analytics dashboard provides a 28-day summary and also a month-by-month view of some key metrics such as top tweet, top followers, impressions, and profile visits.
On a month by month basis, Twitter will show you the engagement rate per Tweet and the aggregated engagement rate for the month:
Frustratingly, Twitter limits exports to a 90-day period, so it is not possible to export data for the months March to May all in one go. Instead I need to do three exports, one per month, and cobble the quarter’s data back together again on a combined master Excel sheet.
Your Tweet export will look like this:
If no paid promotion has taken place on Twitter, focus on columns F and E (you'll see the engagement rate per tweet in col G; Twitter does the calculation per tweet for you).
If you have supplemented organic activity with paid promotion, don’t forget to include any data to the right of col V — where the breakdown for paid activity is detailed.
In the quarter March to May, the total number of engagements was 267 and the sum of the impressions was 18,655. This means the quarterly engagement rate was 1.4%.
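Stitching the three monthly exports back into one quarter can also be scripted. A sketch, again stdlib-only; the header names `impressions` and `engagements` and the per-month figures here are assumptions for illustration (they sum to the quarter's 18,655 impressions and 267 engagements).

```python
import csv
import io

def combine_exports(export_texts, impressions_col="impressions",
                    engagements_col="engagements"):
    """Merge several tweet exports and return the combined
    (impressions, engagements, engagement_rate)."""
    impressions = engagements = 0
    for text in export_texts:
        for row in csv.DictReader(io.StringIO(text)):
            impressions += int(row[impressions_col])
            engagements += int(row[engagements_col])
    return impressions, engagements, engagements / impressions

# One hypothetical export per month of the quarter.
march = "impressions,engagements\n6000,90\n"
april = "impressions,engagements\n6655,87\n"
may = "impressions,engagements\n6000,90\n"

imp, eng, rate = combine_exports([march, april, may])
print(f"{eng}/{imp:,} = {rate:.1%}")  # prints: 267/18,655 = 1.4%
```

The same loop works for any number of exports, so the 90-day export limit only costs you a few extra downloads, not extra arithmetic.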
Instagram engagement rate
With Instagram, as we recently highlighted, the data collection is a little frustrating. Instagram Insights are only available to business accounts and, for many brands, only within the mobile app (desktop insights are now available to Instagram Creator accounts with more than 10,000 followers that are also connected to a Facebook business page).
There is no export to Excel button, making data collection over longer periods more time consuming.
The team at Experienced Media Analysts don’t currently have an Instagram page, so many thanks to CoverageBook for letting us take screenshots from their account.
We keep Excel open in the background and log the results manually to calculate the engagement rate.
Click on the right hand logo icon for your brand, this will show you the tab for insights:
You will then be able to click whichever post you need insights for and the app reveals reach and engagement figures:
In the example above, 52 people were reached and there were 19 likes, but no other engagement-related metrics (the other icons refer to comments, shares and saves). The engagement rate for this post would be 19 / 52 ≈ 37%.
The post above about the Resolution podcast had 22 engagements and reached 59 — so it also had a 37% engagement rate.
LinkedIn engagement rate
Company or brand pages on LinkedIn also have an export to Excel feature. If you are an administrator of such a page, click on analytics and then updates (this is the equivalent of a Facebook post-level export; we want to see who interacted with specific posts rather than the overall level of interaction with the page itself).
Just like Twitter, LinkedIn works out the engagement rate post by post — essentially all it does is add up the clicks, likes, comments and shares (the engagements) and divide this number by the post-level impressions.
In our example, over the quarter our LinkedIn posts had 174 engagements and 1,982 impressions. This means the engagement rate was 174/1,982 = 8.8%.
Making comparisons
Reflecting on the past three months’ social output by our team on the three social channels where we are active, we have found the following:
By incorporating the engagement rate into our suite of metrics, we can further triangulate the results.
Looking solely at post engagements, we might have deduced that Twitter was performing better than other channels, as it generated the strongest level of engagement and strongest reach overall. Looking at the engagement rate metric, we might have deduced that Facebook was performing best.
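That tension between raw engagements and engagement rate is easy to see in a few lines of Python, using the quarterly totals quoted in the sections above (note that the denominator is reach for Facebook but impressions for Twitter and LinkedIn):

```python
# Quarterly totals quoted earlier in the article.
channels = {
    "Facebook": {"engagements": 196, "denominator": 447},
    "Twitter": {"engagements": 267, "denominator": 18_655},
    "LinkedIn": {"engagements": 174, "denominator": 1_982},
}

for data in channels.values():
    data["rate"] = data["engagements"] / data["denominator"]

# The two metrics point at different "winners".
most_engaged = max(channels, key=lambda c: channels[c]["engagements"])
best_rate = max(channels, key=lambda c: channels[c]["rate"])

for name, d in channels.items():
    print(f"{name:9}{d['engagements']:>4} / {d['denominator']:>6} = {d['rate']:.1%}")
print(f"Most engagements: {most_engaged}, best rate: {best_rate}")
```

Running it shows Twitter winning on raw engagements while Facebook wins on rate, which is exactly why the rate alone needs the audience context discussed next.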
Adding context to the results, however, I know that the audience profile of the Facebook page followers is essentially team members and a small section of our network of media analysts. Therefore it is a small and, by its nature, engaged group.
We can therefore conclude that LinkedIn is performing well. The engagement rate is relatively strong, especially given the relatively small number of LinkedIn page followers itself. | https://medium.com/the-resolution/how-to-calculate-and-compare-owned-media-engagement-rates-across-multiple-social-channels-e6a10ec357b6 | ['Laura Joint'] | 2019-08-06 10:01:42.825000+00:00 | ['Engagement', 'Measurement', 'Owned Media', 'Engagement Marketing', 'Social Media'] |
8 Secrets Digital Marketers Should Know
Have you ever felt, "I know digital marketing, so why should I care about the basics of marketing anyway"?
Are you one of those people who learned digital marketing but aren't implementing it, or aren't getting any better results? Do you know why there are no results?
Everybody can learn digital marketing, but only a few people excel in it. Do you know why?
This article will answer all the questions above.
We are going to see some very important secrets that most digital marketers fail to know.
1. Marketing Fundamentals
Do you know there is a law of marketing?
Yes, there is; actually, there are many laws.
Sadly, most digital marketers don't know this. They don't even know there is a "Law of Marketing".
The aim of marketing is to know and understand the customer- so your product fits them and sells itself... the ultimate aim of marketing is to make selling superfluous — Peter Drucker.
Do you know when the actual marketing process starts? You might say "after the product/service launch".
No, it doesn't; it actually starts way before creating a product. It starts when you understand your customer's need, produce a product/service for it, and fulfil that need.
Marketing is all about delivering the right message to the right person at the right time. Many people think that marketing means just selling the product, but deep down it is about how you make your customers happy by solving their problems, and how you retain them for your next product/service.
This can be achieved only by constantly adding value for the customer and building trust; then they will start to transact with you. Sale done.
So, focus more on your product than on your marketing strategy: the more value your product delivers, the less effort you need for marketing.
Let us take the Google search engine as an example. It solves almost any problem and provides so much value that we never see ads for the Google search engine. It won't need any marketing until it stops providing value.
The next secret may be controversial: the fight between digital and traditional marketing.
2. Is Traditional Marketing still alive?
I always felt that traditional marketing had lost its shine. But we can still see TV ads, banners, radio ads and so on, right? That means it is still alive.
It is still doing a great job for some products.
If you are marketing a product with a generic, wide target audience, like the Apple iPhone, you can use TV ads because their cost-per-reach ratio is low.
Let's do the quick math: the total number of TVs in India is 197 million, out of 298 million households.
Total people watching TV (4–5 persons per household) = 800–1000 million.
Just imagine the amount you would have to spend on digital marketing to get this 800 million reach. It would be a nightmare, right?
So traditional marketing is still there and important: you can reach 65% of the Indian population with radio and 450+ million people with newspapers.
Digital vs Traditional Marketing
Still, digital marketing has its own advantages, like more engagement, high ROI, measurable insights, etc. And as a digital marketer, you may already know a lot about digital marketing.
But traditional marketing also has its own strength with its respective audience.
Simply put, you can use traditional marketing for generic and low-end B2C products, and digital marketing for high-end B2C and complex products.
Whether you are doing digital or traditional marketing, communication is very important for both. Let me tell you why.
3. Importance of communication
According to studies, 85% of your success is attributed to communication skills, and 15% to mastery of your work skills.
A good marketing is all about good communication — Digital Deepak
importance of communication
Communication is not about your grammar skills, your English, or your vocabulary; instead, communication is all about how effectively you turn your thoughts into a message.
There are some techniques to improve your communication skills:
Join the conversation that is already running in people's minds. For example, you might have wondered about the digital marketing secrets most people hide; what I did was simply join that conversation already running in your mind by revealing the secrets.
Start thinking in English rather than in your mother tongue.
Watch English series or sitcoms like Friends, The Office, etc. I still watch them to develop my communication.
For blogging: write at least 500 words a day. Don't worry about the structure for now; just keep the flow going and edit it later. This is what I learned from my mentor Divya Kothari, and it has made my writing skills much better.
Having said that, are you a fan of economics and numbers? If you are not, at least try to understand the facts below, because they are crucial.
4. Understanding of Global Economics in Marketing
Don't panic!! when you hear this term.
The economy in layman's terms means money circulation among the people. More the money circulation in your country, the stronger is the economy of your country.
That's why developed countries have a high standard of living, and developing countries have relatively less standard, due to more money circulation in former than later.
Every entrepreneur should learn some basics of global economics and make decisions accordingly — Digital Deepak
Do you know that if the average age of a country increases, the economy of the country will also grow? It does.
the average age of India
Because people will start spending more. For example, take India: the average age in India is 26.8 years now, and it is projected to reach 38.1 years by 2050. And we know that India is a developing country with constant economic growth. So you can focus on developing countries.
source: Sajith Pai — India 1,2,3
Just look at the table above. 88% of the workforce is in India 3. Our economy will grow by shifting people from India 3 to India 2, and then to India 1.
If you are reading this article on a laptop with a wifi connection, you are definitely in the India 1 section. With that, you can work out who is in India 2 and 3.
Most entrepreneurs start their businesses by providing their products/services to people in the first section and gaining some revenue from them. With that income, they serve section 2 people, and then go on to section 3.
So, you can start serving people from India 1, and then continue to India 2 and India 3.
For example, Tesla started by producing the high-end Tesla Roadster, a sports car for section 1 people, then the Model S for the next class, and then the Model 3 for average citizens.
So, understanding the basics of economics will help your business. Even when you want to choose a niche, you should know the market trends. You will see how the market influences niche selection.
5. How to choose your Niche
Deciding where to compete is half the success.
Many people think selecting a niche is just selecting a topic, but it is way more than that.
Selecting a niche covers not only the topic you choose but also how you are going to deliver it and engage with it.
For selecting a niche you should consider these things:
niche selection Venn diagram
1. Passion: will you be interested in this topic for at least 10 years?
2. Market: know where your niche will be in the market in the next 5 years (apply all you learned in the economics section above).
3. Talent: you should have skills related to this niche. If not yet, no problem; start learning today.
Note: I recommend you always start with a micro-niche first and then expand it over a period of time.
Don't feel stuck with a single niche; you have the freedom to change or expand it. Take Deepak Kanakaraju: he started blogging about motorcycles first, then changed his niche to digital marketing.
And remember, you should shine in your niche; you should be the number 1. For that you have to spend some time and effort building a personal brand in your niche. But how?
6. Brand building and Power of personal brand
Your brand is what other people say about you when you are not in the room — Jeff Bezos
Power of personal brand
Building a brand is the number one objective for most organizations.
People always remember only number 1.
If you can't become number 1 in your niche, be the only one in your micro-niche.
For example, take Volvo: it may not be the number 1 brand in the automobile sector, but it is definitely the number 1 brand for safe automobiles.
Develop a personal brand that is so strong that it can give rise to many new brands of your own. And once your personal brand is popular, you can be an influencer or a brand ambassador. For example, Elon Musk is a personal brand, and he uses his brand to influence the Dogecoin market.
But you can't develop your brand overnight; it takes time and effort to build a strong personal brand, and there is a framework for it too: the MassTrust blueprint. It consists of 6 stages of brand-building:
Mass trust blueprint from Digital Deepak
1. Learn- learning is always important; you can't excel unless you learn. But learning is not just reading or memorizing. It includes 3 parts:
Remembering the facts,
Understanding the concepts,
Practicing the procedure.
2. Work- once you have learned, start implementing what you have learned in the real world. Only when you put what you learned into action will you remember it better. Gain experience from your work.
3. Blog- start writing about what you have learned and experienced from work. You can understand better once you start writing about your niche and your experience in it. It is the first stage of Trust Building.
4. Consult- after blogging, you will have gained some authority among the audience in your niche. It is time to stop just working and start consulting. Listening to people's problems and needs lets you guide them toward better steps in the future.
5. Mentor- teach what you have learned and experienced to the people who want to become like you. Teaching will make you an expert in your niche.
6. Startup- after mentoring, it is time to build your startup. Apply all the knowledge you gained from the above steps to build a startup. And the cycle repeats.
After you have done all these 6 steps, you will have a strong personal brand in your hand. Yeah! Brand in hand.
And remember this “The best known will always beat the best”.
And it is time for sales now. As a digital marketer you might already know marketing funnels, but the upcoming one is way better.
7. CATT Marketing Funnel
Imagine what happens when you ask your audience for a sale in their very first interaction. Do you think they will purchase from you? Most probably they will avoid you.
They don't know who you are, what you offer, or why they should care about you.
For a sale to occur, they should know you and trust you enough to transact with you. For that, we need the CATT funnel.
A marketing funnel is the process of converting audience members who recently visited your website or social media into customers by leading them to purchase your products.
You might have heard about the AIDA (Attention-Interest-Desire-Action) funnel in your digital marketing course. But the CATT funnel is somewhat different, effective, and unique.
Lead generation formula from Digital Deepak: Wealth = n^CATT, where 'n' stands for Niche and 'CATT' stands for Content > Attention > Trust > Transact.
Lead generation formula from Digital Deepak
The CATT funnel involves 4 stages, and each stage is very important: a mistake at any stage and you might miss a sale. But a perfectly prepared CATT funnel will drive more sales than you expect. The four stages are:
1. Content: it can be your blog post, your social media post, your podcast or a video that captures your audience's attention. It can also be your lead magnet (like free ebooks, webinars, etc. to get their email id or phone number). The main purpose of the content is to provide value.
2. Attention: after creating the content and lead magnet, you need to drive your audience's attention to them. You can do this in many ways, including organic search traffic, paid traffic, referral traffic, etc.
3. Trust: the most important phase in your funnel. You can build trust through email sequences by providing value-filled content. And remember, once trust breaks, it is hard to fix.
4. Transact: understand whether your leads have reached the stage where they are ready to transact. You can do this with lead profiling and lead scoring. These methods tell you the right time to make the right offer to the right audience.
And the sale happens.
This framework is not known by many digital marketers out there; that's why it is like a secret. You should know it and implement it in your business. We will see the CATT funnel in depth in upcoming articles.
And finally,
8. Integrated Digital Marketing
Integrated digital marketing is the trending and most effective marketing method now.
You might have used digital marketing modules like content marketing, social media marketing, SEO, email marketing, paid ads, etc. But you might have used them separately. Have you ever tried to integrate them?
The result of integrating them is a snowball effect; you can't imagine how effective it is.
Every module in digital marketing has its own advantage. Take content marketing: it is the heart of marketing, and no other module can build as much trust as it does, but it can't be effective without the help of the other marketing modules. Similarly, every other module is more effective in conjunction with the rest.
Let me explain each connection here:
Integrated Digital Marketing is content marketing on steroids- Digital Deepak
Content + Social Media: say you have written an article and published it on the blog. On its own, the article will just sit there in peace. Is that what you want? We want traffic, right? That is what social media sharing does. It is advantageous for both of you (you and your audience):
Your good-quality content gets shared, and those who share your high-value content get good karma points.
That's why people call this integrated digital marketing process content marketing 2.0.
Content + Social Media + SEO: if your content is filled with high value, it will get its maximum share on social media (off-page SEO). If people share your content on social media, it will get a lot of quality backlinks, which is a good signal for search engines. It makes search engines happy, which in turn helps your content rank higher in the SERP.
Email + Content: instead of writing content in an email, you can publish it to your blog and email your subscribers the blog link. The number of people visiting your blog from email will be higher than from any other medium. If your subscribers like your content, they will share it on social media; this, in turn, signals search engines to rank your content well, and as a result you will attract new people whom you can also convert into subscribers.
Paid Ads: instead of using ads for sales, you can promote your content through paid ads. People who consume your content will gain trust and share it on social media, which increases social shares, search engine signals and your email subscriber list (as I said above). When paid ads are combined with the social-search-content trio, your results are going to boom.
Sales: nobody will buy from you at the first instant. Studies say that it takes at least 7 contact points with your prospects to make a sale. Your blog content and an automated email sequence are the best ways to follow up with your leads and gain their trust to transact.
Did you see how powerful your marketing is when you integrate all these different modules? Most marketers are not aware of this; that's why it is a secret.
Conclusion
I hope you understood all 8 secrets I've told you about marketing, economics and more, which most of the digital marketers out there don't know.
Make use of all of these in your digital marketing journey; once you start implementing them, you will see yourself way ahead of the other digital marketers out there.
If you want to know more about each secret in-depth, let me know in the comments below. I will read all the comments.
Thanks,
Sasi10x. | https://medium.com/@sasiramkanna46/8-digital-marketing-secrets-5e739e05673c | ['Sasi Ram'] | 2021-07-27 06:51:50.115000+00:00 | ['Catt Marketing Funnel', 'Law Of Marketing', 'The Global Economics', 'Digital Marketing Secrets', 'Personal Branding'] |
Why Having Secure Attachment Doesn't Make You Invincible
You Might Feel The Urge To Run In The Beginning
Have you recently worked on shifting your attachment style? Maybe you’ve opened up the door to attachment and are learning about it now.
Well, you might find dating securely attached people different.
Dating them might feel boring.
Don’t run yet!
Your addiction hasn't been fed. It's because secure people don't give you the emotional rollercoaster fix you're used to.
This doesn’t mean you’re a horrible monster who wants to suffer, not at all.
But when we’re insecurely attached, our brains and bodies are wired to anticipate emotional highs and lows.
There’s no need to tip-toe around a secure partner, or even hide your feelings from them. Heck, you can trust each other and not feel jealous.
Sounds like a scam, right?
Kristen Hick rightly said:
“Instead of feeling comfortable and even attracted to secure partnerships, you might feel bored and uninterested in partners who don’t keep you on your toes with emotional ups and downs.”
Predictability and having someone share their wants and needs with you can feel overwhelming when you’re not used to it.
I felt this way when I met my partner. She was loving, kind, dependable and supportive.
It can send your brain into shock mode when you’ve never experienced this kindness from someone before.
You haven’t had your insecure attachment fix
You might be caught off guard and have emotional reactions you didn’t expect.
I cried on the second, third and fourth date (maybe the fifth too) at the start of my relationship with my partner. We laugh about it now, but I felt terrible at the time.
Acknowledge your brain is weaning off its addiction to insecure drama. You might want to run away when these feelings jump out at you unexpectedly.
What To Do
Understand it could be your insecure attachment getting activated — if you feel bored or like there's no connection, why not give it a shot? You'll see if insecure attachment is clouding your vision, or if you genuinely don't connect.
Mindfulness practice — Practising breathing techniques when you’re feeling wobbly can help you feel grounded again. My therapist taught me the STOP Technique: | https://medium.com/candour/why-having-secure-attachment-doesnt-make-you-invincible-5cce2e1f11a7 | ['Kathrine Meraki'] | 2020-09-21 00:00:45.557000+00:00 | ['Self-awareness', 'Self Love', 'Relationships', 'Love', 'Psychology'] |
business as usual
At Carrefour you'll find plenty: Rice, beans, eggs and more! All at your local grocery store. And please ignore the dead man on the floor!
The sign above Manuel Moises Cavalcante says ‘Welcome to Brazil in 2020.’
In honor of November 20, the dark meat’s on sale, it’s practically free! That’s our savings guarantee. Get fresh meat so rare, it’s still bloody! We’re woke, so aware! It’s only fair — Joao Freitas reminds us all to care! | https://medium.com/resistance-poetry/business-as-usual-9dc206d3b62e | [] | 2020-12-06 17:12:50.807000+00:00 | ['Consciencia Negra', 'Brazil', 'Satire', 'Resistance Poetry', 'Poetry'] |
A lesson from Better Call Saul
At this point it is safe to say that Breaking Bad is one of the best-known and most recognised TV series of recent times, if not ever. Let's just take this fact into account: this article from The New York Times shows that a recent study made by Netflix itself revealed that Breaking Bad is [still] one of the most binge-watched shows on the streaming platform. That's huge.
And just ask around: I bet most people will tell you that they have watched it, whether they liked it or not. It's a big phenomenon. And that is important to highlight, because this is not a conventional show; it's one that raised the bar in storytelling by presenting a story in a way that only a few do, and that can be seen as boring by many. Among the multiple effective things Breaking Bad did is the fact that its makers really cared about telling the characters' arcs in the most complete sense, especially the main one, and doing it in a way that felt like real life.
But all this success and uniqueness that Breaking Bad brought made me think that Better Call Saul was going to be just a copy of it for the sake of making more money out of the world Breaking Bad had created.
After watching Season 1 of Better Call Saul last year, I was thankful to find that they were not copying Breaking Bad or relying on its stories to make this one good. And now, after finishing Season 2, I have confirmed that storytelling can be done with inspiration, and even imitation at some level, as long as it's not a copy but a source for doing a good job of telling a story.
Nothing is original, at least not entirely. But that doesn't mean copying, literally, is right. Better Call Saul treats its characters the way Breaking Bad did: as elements of the story that have to be developed as much as they need, no matter how long it takes. The main plot and subplots are explored in a way that most shows don't attempt, because they are too rushed; not here. And the good thing is that Better Call Saul, even though it makes use of some characters from Breaking Bad, doesn't use them only as a way for people to remember the previous show, but to tell their story and guide the plot the way it's intended.
Aesthetically, both in the photography and the art department, you know this is Breaking Bad's universe, but with some added features and some changes; long shots are common, wide-angle ones tell more than we imagine, and the care in composition is impressive, to the point where it's obvious that every single shot is thought through carefully.
To me, this is a true lesson and an eye-opening experience, as a viewer and as someone who also tells stories through audiovisual products. Society might teach us that we need to be original, and that's true, as long as you recognise that originality isn't being completely unique (because that is impossible) but being inspired by things and "adapting" them to your objectives and way of living, or in this case to what you want to tell and how you want to tell it.
Follow us on Twitter and Facebook!
We are Chevy, Sole, Stef and Susan. | https://medium.com/urmindace-stories/a-lesson-from-better-call-saul-7a7eec3201f7 | [] | 2016-07-05 20:19:42.579000+00:00 | ['Netflix', 'Television', 'Storytelling', 'TV'] |
A critique of Marx: Why Communism never works.
Equality of Outcome
I'll start with what Karl Marx really despises about capitalism: equality, or the lack thereof. Marx looks at the life of the average factory worker and argues that the worker is being exploited by his employer because the employer makes a profit off him. Marx viewed capitalism as a class system in which the rich, the bourgeoisie, inherited all wealth and exploited the workers. What Marx did not take into account was that capitalism is a merit-based system, not a class one. The workers too can accumulate enough capital and move up the hierarchies. And if you look around today, most of the richest men aren't the ones who inherited wealth but the ones who created it.
The Marxist solution to inequality is that the articles of commerce should be distributed equally among the workers, and that there should be no hierarchy on which society is organized. Everyone has equal access to goods regardless of whether they worked as much as others to accumulate them: equality of outcome.
The problem with equality of outcome in any creative endeavor is that it doesn't work; it cannot work. Some people are better at certain traits, and those traits matter in certain domains. Hence, in any competition, the outcomes are never distributed equally in a fair game; they have no reason to be. To understand this with an analogy, picture a classroom with 100 students. Now picture all the students who get A+ grades. They will be a tiny minority of the classroom, hardly 6%. Does this mean that the ones getting an A+ are ruling over the ones getting C's? Does this mean that the ones on top have some sort of power relation with the teacher because they are the only ones on top? Should they be obliged to "share" their grades to make up for the inequality they have caused in the classroom? And, worst of all, should we destroy the idea of grades and distribute test answers equally among all students, even when a tiny minority is working 5x harder than everyone else?
Now, I am not saying that the richest 1% today don't lobby politicians, or abuse the wages of their workers to maximize profit, or find new methods to hide their wealth and avoid taxes. They do all of that. But the solution is not to abolish the whole system or, even worse, the idea of money or value itself.
Marx fails to recognize how capitalism works and what causes inequality in any creative endeavor. Take the top 10% of football goal scorers in Europe in 2018, sum their goals and compare the total to the bottom 90%. The numbers may be way off and totally 'unequal', but there is no plutocracy or agenda going on in football goals as far as I am concerned. They are earned by those who specialize in that creative endeavor, and thus cause inequality. Similarly, take the top 10% of artists who sold the most records in 2018 and compare them to the rest, and so on.
The nature of work
Marx argued that capitalism at the dawn of the Industrial Revolution was repressive and sucked the blood out of the worker; that the life of the average man had dissolved into nothing but tired, hopeless work. This rendered the life of the worker meaningless, something he referred to as alienation.
Marx's solution was that work is not necessary in today's society because of how efficient our economy is (he did propose work as something that fulfills man too, but that wasn't necessarily the work that creates capital). We can just provide everyone with goods and services, as not all people need to work in the means of production. The immediate problem with this is that people who work more inherently get less: they work more and receive the same amount. If they work more and get more, that creates an elitist, dominant group, which is exactly what communism is against. If they work more and get less, that too creates an inequality, which is also exactly what communism is against. The Marxist approach to equality is unachievable.
And what happens when you apply the ultra-egalitarian ideals of communism to the economy is that nobody has to work any harder to earn what they deserve. Now, it goes without saying that Marx envisioned a utopia in which the human way of thinking would change too; according to him, capitalism has corrupted our minds and ideals and how we think about value in general. But arguably it is an inherent evolutionary trait of humans to improve with competition in any endeavor, not just markets. It is a trait found even in animals: the ability to grow and adapt to fit the competition.
A modern-day example is why the private capitalist sector is always so much more productive than the government sector in a country. Under capitalism, the private sector always has competition to contend with in its own domain, unlike the government sector, which faces no challenging objectives that shift in response to competition.
“From each according to his ability, to each according to his needs.”
On the surface, this sounds like the perfect utopian ideal: no one works harder than they can, and no one consumes more than what they need. But who defines those limits of ability and consumption is the bigger problem to solve. And ability in humans is something that is pushed, not a constant; without need or competition, that ability will remain limited relative to what it could be.
Authoritarianism
One of the greatest criticisms leveled against communism is the millions of people killed in the USSR and the other atrocities committed under communist regimes. The usual socialist response is that 'that wasn't real communism'. But anyone who has read the manifesto can conclude how the deaths of millions actually follow from Marx's theory. Let's begin with the idea that power should be distributed among the people. Marx wanted to break the political and monetary hold of the bourgeoisie and give it to the people. One presupposition behind that demand is that a state can be run by all people at once. Historically, the way this has been attempted is by giving the state all the power, resources and means of production so that it can redistribute them. Now, according to Marx, in communist countries the people run the country, but that isn't practically possible by any stretch of the imagination. You cannot have direct democracy for the same reason: every person cannot run the state at the same time. So you need a leader; you cannot have a revolution without one. And it goes without saying that no leader is perfect. Communism, with its promise of redistributing all articles of commerce, hands virtually every power to that leader, who eventually becomes a dictator, and like all dictators he just happens to misuse his power 10/10 times. Mix in communism's hatred of classes and it is not surprising that the most brutal dictators in history were aftermaths of Marx's utopian dream. Much has been said about democratic socialism or Marxism ever since. The problem is that even with socialist programs like a housing scheme, you still deal with the same problem at the core. For example, imagine a housing society that's owned equally by everyone living in it.
Now if you want to make any changes to your home, everyone in the society has to agree before you can change your own home. And this is true in all aspects of life under communism. The abolition of private ownership is the death of your control over what you own, since you own nothing. This taps into the bigger debate of individualism vs. collectivism, and it is precisely the problem with all such collective utopian dreams. The dream of a better future looks different for everyone in society. And adopting not only a way of life but the core values for a whole society to function socially and economically based on one man's ideals, and on the state's decision of how that dream will be achieved, tells us that the sacrifices of freedom in communist societies and the death of freedom of speech are nothing foreign to the theory of communism, and are not merely "not real communism".
Radical Revolutionary Fervor
Even after discussing these, let's call them "loopholes", in Marx's ideology through which violence can manifest itself in communist regimes and revolutions, it remains unclear how a theory built on the promise of libertarian ideals such as justice and equality for all always erupts into, and is even initiated by, brutal acts of slaughter and human rights violations.
Lenin's hanging order for the kulaks
Marx's desires are clearly stated in the communist manifesto: "overthrowing the bourgeoisie", and proletarians and communists of all nations uniting to strip the bourgeoisie of their wealth and distribute it to the community.
“Workers of the world unite!”
One very accurate portrayal of how this plays out in an actual revolution is what happened to the kulaks in the USSR. The rich peasant farmers who produced a large share of the agriculture of Russia and Ukraine were seen as a threat to the Communist Party by Stalin, so he stripped them of any power and shipped them to Siberia during collectivization in the early 1930s. This later caused the famine and extreme food shortage in the USSR that resulted in an estimated 5 million people starving to death.
Marx separates the working class and the owners of property and the means of production into two distinct social classes, portraying capitalism as a class-based system, when in reality capitalism does not care about who accumulates wealth.
"The history of all hitherto existing society is the history of class struggles"
The Marxist view of history is an ongoing fight between the working class and the bourgeoisie: the patrician vs. the plebeian, the oppressor vs. the oppressed.
It's clear that when Marx talks about overthrowing the property owners he quite definitely means a full-blown, war-like revolution. (How else can you literally seize all wealth and means of production and distribute them?)
Thus it comes as no surprise that the utopian vision of a man whose worldview was shaped by a distorted and limited perspective went on to become one of the most repressive, brutal and dogmatic political ideologies the world has seen.
The result at the individual level is an ingrained feeling of being oppressed and a constant hatred of anyone in a better position in any domain. This has shown itself throughout the history of communism, and even today in young activists who think their employer paying them $10 an hour is oppression and that wearing a Che Guevara T-shirt makes them a damn activist.
Final Words
Marx was one of the first philosophers I was exposed to, and his writings seemed influential to me for some time, as they do to many other teenagers when they read him for the first time. In a way he confirms the doubts you already had about capitalism, but in a more extreme form that appeals especially to young men looking to get behind something rebellious and different.
One thing we've got to give Marx credit for is recognizing some of the deepest issues in capitalism, and how they would affect our society, long before they truly did. Perhaps his criticism will play a key role in transforming capitalism in the future. But his alternative to capitalism should be left behind with it, if that isn't obvious enough.
Full Stack Development in 1 Twitter Thread | Full Stack Development in 1 Twitter Thread
End-to-End Steps for Turning Your Idea into Something Real
When Form Follows Function
People have ideas. Lots of ideas. But how often do they turn those ideas into something real? There is a distinct difference between what’s in our heads and what can be created.
Most people don't build software. This includes many people on actual software teams. If you're not coding now, you likely see the creation of working software as someone else's job. Perhaps you've wanted to start learning (or relearning) yourself but assume it takes too long, is too technically challenging, or just isn't something that interests you.
Ideas typically live as drawings, post-its, mockups and "clickable apps", all of which might capture ideas but are hardly validation of something workable. There is a transition point between one's ideas and what nature allows to happen, and it is in crossing that transition that innovation actually occurs. There is a beautiful correspondence between form and function that only a true piece of working software can expose.
There are no excuses left for someone not making software. No matter your interest, if you want to see your ideas come to life then you need to build software. Given today’s level of abstraction in tooling, full pieces of software can be created rapidly.
Software development isn't about "programming" computers; it's about crafting something real that interacts with people and becomes part of the working economy.
I originally wrote this on Twitter in a single thread, to show just how accessible it is for anyone to create end-to-end software applications.
There are many false gatekeepers to opportunity. The assumption that you need formal education, online courses or years of deep experience is patently false. Everyone has the ability to piece together their idea into something that actually works.
Let’s get started.
The Pieces
You only need a browser, a notepad, and your computer’s terminal (command prompt on PC). There are a few high-level pieces that go into building all applications:
a page
styles
layouts
interactive elements
events
server
fetching/storing data
People rarely write programs from the ground up anymore. That’s too slow. Today we use libraries. We will use JavaScript and Python libraries. I will use Azle for front-end stuff ( DISCLAIMER : I created Azle, but you can use any JS library), and Flask for back-end stuff.
Let’s start with our page.
Pages always have an index.html file. Go to the Azle homepage and click on the STARTER HTML icon in the top left. Copy and paste this into notepad or whatever editor you want (I’ll use TextWrangler), then save the file as index.html (on your Desktop).
Grabbing the STARTER HTML file from Azle.
Every web application has an index.html file. It’s the page your browser points to when you load a website or web application. Let’s place the file in a folder called my_app , and drag your index.html file into this folder:
If you right-click the index.html file and choose your browser you can view the page. Doing that will pop open your browser and show a blank page.
Let’s style the page. Go grab the style_body code from Azle and paste it into your index.html file (inside the create_azle function):
Don't be like Reddit and think you're cool cuz yer ugly. Go grab some minimalist hex codes and style your page like it's 2020. Nothing says I'm modern like Russian pastel:
flatuicolors.com/palette/ru
Choose your favorite color by clicking on it. Replace the “background” color in the style_body function with your new color. I’ll choose Biscay. We also don’t want to use standard browser fonts because the only thing worse than looking like Reddit is looking like Craigslist.
Google fonts to the rescue. I’ll choose Ubuntu. Nice and modern looking.
Of course, we need to make our Google font available. Let's load that prior to calling our style_body function (search for "font" in the docs to find the load_font function):
az.load_font("Ubuntu")
I also set the min-width property in the style_body function to 1150px so the page doesn’t squish when I resize the browser. Since we’re only prototyping an application we don’t care about responsivity.
Our code should now look like this:
Since this all sits in our index.html file we can refresh the browser and see the result:
Here’s the CodePen if want to play with the code. Try changing the color of the background. Of course, changing the font won’t make a difference yet since we are not showing any text.
Whenever you see CodePen in this article, click on it to open a tab showing a live interactive version of our current code. Play with the values to suit your preferences.
We are making progress…
Of course, nothing is on our page. Let’s change that. We will add a section to hold our content. Add 1 section using the add_sections function:
…and refresh your browser:
Azle’s default color for sections is a blue color, but of course, we can change that as needed.
CodePen
Try changing the number of sections.
Let’s head back to Russia and see if we can’t find a better color than blue for our section. Use the style_sections function (placed directly below our add_sections function) to style our new section. To style an element we must target it using its class name and instance number. We know the class name of our section is called “my_sections”, and since we only made 1 section it must be the first instance. Our style_sections function looks like this:
I will choose Apple Valley as my section background color. I also added a border-radius of 6px to round the corners. I set the height to auto , which allows HTML elements to grow and shrink based on the contents inside the element.
Since we want to arrange things on the page we will use layouts. These are just grids; boxes placed on the page, which we fill with text, buttons, sliders, inputs, etc. I’ll add a layout with 2 rows and 1 column using the add_layout function:
It looks like this:
I want the top row to hold my title. I’ll change the top row’s height to 60px by targeting the first row of my layout using another style_layout function, directly below our previous one:
Note how I targeted the class name of the row “my_layout_rows” and the first instance (row 1). Refresh the browser to see the difference:
Everything we’ve been adding inside the curly braces of our style functions is standard CSS styling. As you try to figure out how to achieve the styles you want just search online for the right CSS. With time you will learn many styling tricks.
Let’s go ahead and show off the Ubuntu font we loaded earlier by adding a title to our application. We target the first cell of our layout and add text using the add_text function:
Let’s increase the font size of our title, and center its alignment:
Pretty good so far. Now things get more interesting, as our next step is adding interactive elements to our application.
Let’s add another layout inside our 2nd cell to hold our interactive elements. Copy the same layout code we used before, pasting it at the bottom. Target the 2nd cell of our original layout, use 1 row, 2 columns, and color the background creamy peach (or whatever you like). We add the add_layout and style_layout as functions directly below our previous code, as follows:
Notice I also used the column_widths property of the style_layout function to give a 20/80 split between column widths.
The app looks too “liney.” Let’s remove the bordering from our most recent layout by setting its border to 0 :
That’s cleaner. However, I still want some kind of separation between the 2 cells of our inner layout. Let’s color the 2nd cell “Squeaky” from our Russian palette:
Notice our app no longer requires any bordering, since the coloring alone demarcates our layout cells. Let’s make it minimal and smooth by removing the outer layout’s (the first one we added) border.
Here’s the CodePen at this point.
Now we’re ready to add our interactive elements. We didn’t start with any mockups of what we wanted to create, which is fine. But now might be a good time to think about what we want this to be.
Most of today’s interesting applications are data-driven. What kind of interesting data could we fetch, and what kind of model might we use? While I write this, covid-19 is on everyone’s mind. Let’s fetch covid-19 data and use a model to predict cases.
****Usual Disclaimer***** Don’t be an idiot and use this model to make real-life decisions regarding a pandemic. This is just for demonstration purposes.
We want publicly-available data, ideally delivered as a “RESTful service” (“REST API”). REST APIs deliver data and functionality “over the wire” into our browser, making it easier to create interesting applications without having to write a ton of code.
If someone offers covid-19 data as a REST API, it means we don't have to store or manage the data ourselves; we can just use it.
I found one here: https://about-corona.net It’s free and requires no authentication.
REST APIs have “endpoints”, which are what we point to with our browser to fetch actual data. After reviewing the documentation I found the endpoint we will use:
https://corona-api.com/timeline
This gives global counts for deaths, confirmed and recovered cases.
Anytime you want to see what a REST API’s data looks like just open your browser to the endpoint:
To use the data in our application we don’t access it as above, rather we ingest the data using JavaScript, and parse the results into some useful form. But first, let’s get back to our mockup. Now that we’ve seen the data, we can think about how our app might look and act.
A dirty mockup is all we need to anchor our approach. Here’s what I came up with, using Google Slides to create the sketch:
The user chooses a type (deaths, confirmed, recovered), with results visualized on the right in a graph. The user then chooses a “horizon” (future days) and clicks FORECAST to run a model in the back-end, whose results also get visualized when they are returned.
With data and mockup in hand we can start adding our interactive UI elements. We need a dropdown, a slider, a button, and a line chart. Let’s start with the first 3. I’ll add a new layout inside the first inner layout cell to help position our elements:
Notice I kept the border set to 1 so I can see the new layout:
Now let’s add our UI elements inside these new cells. From Azle’s documentation we can grab the code we need.
Add dropdown, slider and button code to our application:
Boom, now we have UI elements:
Let’s tailor these elements for our app. We know the options we want for our dropdown (deaths, confirmed, recovered). Add those now:
Let’s allow up to 30 days for the prediction. Set the default to 1 week out ( 7 days), with min_value of 1 day and max_value 30 days:
Let’s center the elements and remove the border from our recent layout.
We will use the all_ prefix in front of a new style_layout function to apply center alignment to all 3 cells at once:
I also set the border on the layout to 0 since we no longer need it.
Toggle the border between 1 and 0 while developing.
halign with center ensures the contents of all 3 cells align horizontally.
Now let’s add our line chart visual.
Starting to look like a real application now. For the line chart we will use another library called Plotly. Plotly is built on top of D3.js, an industry-standard visualization library written in JavaScript.
While you can learn D3.js itself, Plotly offers an abstraction layer that makes development (much) more rapid. Click the line chart option on Plotly’s website:
We will copy the line chart code into our application. First, we need to make this library available. The simplest way to make a JS library available to an app is via CDN (content delivery network). We just add the appropriate URL to the header of our index.html file. We find this URL from Plotly’s Getting Started page:
Now we can use any of Plotly’s visuals in our application. Grab the line chart code from Plotly’s website and place it inside a function called draw_line_chart , like this:
Place this anywhere in the index.html file, outside the main create_azle function (so anywhere at the bottom, but still inside the <script> tags). Look in the next CodePen to see.
If we invoke our draw_line_chart function it will draw our line chart inside the element with the my_div id. Of course we don’t have any element like that now, so let’s create the HTML element that will house our plot.
We will use Azle’s add_html function. I will give the div an id called “hold_chart” :
Let’s go back and change ‘myDiv’ in the previous draw_line_chart function to ‘hold_chart’ so it targets correctly.
We need a way to call our draw_line_chart function. Let’s make it so that clicking on our FORECAST button draws our line chart.
To add events to UI elements in Azle we use the add_event function. Let’s add a click event to our FORECAST button like this:
If we now click the FORECAST button we will see the line chart drawn:
Looks great. Here is this latest CodePen.
Plotly provides a lot of slick stuff out of the box. We get tooltips, zooming, panning, and a host of customizable options. Doing this all yourself in raw D3 isn’t fun.
The data shown in the line chart is just mock data provided by Plotly. Obviously we want to hook up real data. Let’s do that now.
I mentioned earlier that we will ingest the REST data using JavaScript, and parse the results into some useful form. An important fact about building apps that rely on fetched data is the data must be available prior to using it.
While this sounds obvious, it's easy to miss when one is just beginning to learn software development. As an example, when our users first load our app we want to show the line chart, but that line chart depends on data being available.
To make sure data are available to any pieces of our app that require it we use so-called “asynchronous code.” Asynchronous code WAITS until something has happened (e.g. data has been fetched) before calling a function of our choice.
In our case, we want to fetch covid-19 data, WAIT until it’s available in our app, THEN draw our line chart. JavaScript makes all this possible via its “fetch” API. With fetch, we simply point to the URL provided by the REST service and tell it what to do once data are received.
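The same wait-then-act pattern can be sketched in Python with asyncio. This is a simulation only — the real app uses JavaScript's fetch, and the fetch_covid_data function and its short delay below are made up for illustration:

```python
import asyncio

async def fetch_covid_data():
    # Simulate a slow network call (placeholder for a real HTTP request)
    await asyncio.sleep(0.1)
    return {"data": [{"date": "2020-03-01", "deaths": 10}]}

def draw_line_chart(data):
    # Placeholder for the real plotting step; runs only once data exists
    return f"plotted {len(data['data'])} points"

async def main():
    data = await fetch_covid_data()   # WAIT here until data arrives
    return draw_line_chart(data)      # THEN act on it

result = asyncio.run(main())
print(result)
```

The key idea carries over directly: nothing that depends on the data runs until the awaited call completes.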
Let’s use fetch to bring our covid-19 data into our application. How do we use it? A simple Google search led me to this, which explains it quite nicely. It tells us to use fetch like this:
Let’s paste it into our code, using the covid-19 URL we found above (right after our Plotly code):
Refresh your browser. The app itself doesn’t look any different. But if we open the “browser console” we can see the covid-19 data we fetched. Open the browser console by right-clicking anywhere on the screen and clicking “inspect”. Then click console.
You will see an “Object” sitting in the console. This is the covid-19 data we fetched (notice the fetch API we pasted above says console.log(data)). Click repeatedly into this object to view its structure:
This looks a lot better than what appeared in the browser when we first pointed to the covid-19 URL. Now we can parse this data object, using its contents to populate our line chart. To do this we’ll need to make a slight change to how our line chart is being drawn.
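The same drilling-down can be done outside the browser. The sample below mirrors the structure seen in the console — treat the exact field names as an assumption based on what the timeline endpoint returned at the time of writing:

```python
import json

# A trimmed sample mirroring the object structure seen in dev tools
raw = """
{
  "data": [
    {"date": "2020-03-02", "deaths": 3117, "confirmed": 90308, "recovered": 45602},
    {"date": "2020-03-01", "deaths": 2977, "confirmed": 87137, "recovered": 42716}
  ]
}
"""

response = json.loads(raw)

# Drill into the object just like clicking through it in the console
first_day = response["data"][0]
print(first_day["date"], first_day["deaths"])
```

Once you know the path to each value (response → data → one record → field), writing the parser becomes mechanical.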
Inspecting the line chart code we added earlier we can see it uses “traces” to convert raw data into lines. We need to get the data from our fetch results into these x and y properties of the trace object:
Let’s write some JavaScript to parse the results of our fetch. We need to: 1. understand the source structure; 2. understand the destination structure. We can understand the source structure by inspecting the data in the browser console as we did previously.
The destination structure is the trace object required by Plotly. I wrote the following function to take the raw data retrieved from fetch and convert it into the trace structure needed for Plotly:
Functions are how we group code in software. They have a name, accept arguments, and return some result. Functions help keep code modular and maintainable.
What’s important here is to understand this is not the way to parse data. It’s a way. You must experiment with JavaScript and toy around until you find something that works. Search online for how to parse JavaScript objects, loop through them, and return new structures.
My get_dates_and_cases function accepts the fetched data, a choice (e.g. deaths), loops through its contents, extracts the pieces I need for plotting, and returns the object for Plotly. I will write another function to draw the line chart with our prepared data.
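The same reshaping logic, sketched in Python rather than JavaScript (the {x, y} shape follows the Plotly trace convention described above; the field names and helper name are assumptions mirroring the text):

```python
def get_dates_and_cases(fetched, choice):
    """Pull one series (e.g. 'deaths') out of the timeline payload
    and return it in the {x, y} shape a Plotly trace expects."""
    x, y = [], []
    for day in fetched["data"]:
        x.append(day["date"])
        y.append(day[choice])
    return {"x": x, "y": y}

sample = {"data": [
    {"date": "2020-03-01", "deaths": 2977, "confirmed": 87137, "recovered": 42716},
    {"date": "2020-03-02", "deaths": 3117, "confirmed": 90308, "recovered": 45602},
]}

trace = get_dates_and_cases(sample, "deaths")
print(trace["x"])
print(trace["y"])
```

Passing a different choice ("confirmed" or "recovered") yields the series for that dropdown option without touching the rest of the code.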
We can remove the original Plotly code we added and use this function instead. It uses our first function to prepare the trace data, then plots the line chart as usual.
Be sure to keep the “hold_chart” div we added so the plot has somewhere to live.
Recall that our fetch code only fetches data but does nothing with it. Let's make it so that our fetch function draws our line chart once the data have arrived. Change our original fetch code to look like this:
Finally, remove the call to draw_line_chart that currently sits inside our add_event function. We’ll add it back in a second. Also, don’t worry about the az.hold_value.fetched_data = data line yet. We’ll explain that later.
Your current code should look like this: CodePen.
Notice in the CodePen I wrapped a setTimeout around the fetch function. This just adds a slight delay to ensure the elements are on the screen when it goes to draw the plot. This won’t be an issue once we move the fetch function to inside our add_event in the next step.
Our covid data is now showing in the line chart. Importantly, the line chart will only be drawn when the data have been fully fetched from the REST API. To recap, we used asynchronous code in JavaScript to fetch data from an API, THEN created a visual once the data were ready.
We also wrote 2 functions to prepare the raw data and plot the results. We are getting close:
…although we haven’t discussed the data storing yet.
Our next step is to allow the user to select a choice (deaths, confirmed, recovered) to redraw the line chart accordingly. Let’s store the returned data so we can use it as needed, without having to refetch the data each time.
I will keep the returned data in a JavaScript object. I will just use Azle’s namespace like this:
Type az.hold_value.fetched_data into the browser console and hit enter. You can see that we can access our covid-19 data anytime by simply working with this object.
First, we want to redraw the line chart when the user makes a choice from the dropdown. Let’s use Azle’s “ change ” event to make this happen.
To add an event to our element we use Azle’s add_event function as we did before for the button, targeting the element of choice, just as we did with styling:
If you refresh your browser and make a selection from the dropdown you should see the choice alerted.
Now we just need to redraw the line chart instead of calling alert. Our draw_line_chart function we wrote earlier is ready to go. It already takes the data and choice as arguments and redraws the plot accordingly. So all we need to do is add draw_line_chart to our add_event function for the dropdown:
Notice we are using the data stored inside az.hold_value.fetched_data . We are also using a new Azle function called grab_value ; this allows us to grab whatever value the user chose on an element (targeted as usual, with class name and class instance ).
Let’s see if it works:
Beautiful.
Here’s the current CodePen.
Adding events to our other elements works the same way. But those elements relate to calling some back-end model to make predictions using our covid-19 data. So before adding the other events let’s start working on the back-end model.
For the “back-end” we need a server that supports the heavy lifting required to compute our predictions. We could use a cloud provider such as Digital Ocean or Amazon Web Services, but since we’re just prototyping ideas we’ll use our own local computer.
Getting a front-end communicating with a back-end requires a web service. A web service will allow us to send requests to the server and receive something in return. This is what we need, since we want to give a back-end model data from our front-end and return a prediction.
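Flask hides the plumbing, but the request/response round trip it manages can be sketched with nothing but Python's standard library. The echo behavior here is a stand-in for the real forecaster:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read ?x=...&y=... from the request and echo them back as JSON
        params = parse_qs(urlparse(self.path).query)
        body = json.dumps({"x": params.get("x", [""])[0],
                           "y": params.get("y", [""])[0]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Port 0 asks the OS for any free port
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/?x=100&y=400") as resp:
    reply = json.loads(resp.read())
server.shutdown()
print(reply)
```

A framework like Flask wraps exactly this loop (parse the request, run a function, send back a response) behind a much friendlier API.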
We will use a lightweight web framework called Flask to build our web service in Python. Let’s do that now. At the beginning, we created our index.html file. Let’s add another file to the same folder, calling it predict.py :
I just copied the index.html file and removed the contents to make an empty predict.py file.
Now we will add some basic Flask code to our currently blank predict.py file. Flask’s documentation has a Quickstart guide where they show us the minimal code needed to set things up:
We will need a few more pieces, like extra libraries, to get our web service suitable for our purpose. Here is what our predict.py file will look like:
Whereas in JavaScript we used “CDNs” to add extra libraries to our application, in Python we use import statements. Above we are importing Flask, as well as the “request” and “jsonify” libraries, which will enable us to receive and send data from-and-to our front-end.
We also set a “route” which is an “endpoint” for our service. Recall our discussion on endpoints when we looked at fetching covid-19 data from a REST API. If you’re thinking we are making our own REST API right now then you are correct :)
Much of today’s back-end machinery in enterprise software is made available as services, consumed as REST APIs. This makes it easier to patch together various functionality and maintain and scale the application.
Beneath the “route” in our predict.py file we create a function. Python functions look different than Javascript functions, but the idea is the same; a modular piece of code that can accept arguments and return a value. I called our function forecaster.
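For comparison, here is what a minimal Python function looks like next to its JavaScript cousin — the body below is a placeholder, not the real forecaster:

```python
# JavaScript: function forecaster(x, y) { return ...; }
# Python uses def, a colon, and indentation instead of braces:
def forecaster(x, y):
    """Accept a list of dates and a horizon, return a summary dict."""
    return {"points": len(x), "horizon": y}

result = forecaster(["2020-03-01", "2020-03-02"], 7)
print(result)
```

The shape is the same in both languages: a name, arguments in, a value out.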
Finally, we specify the "port" at the end of the file. A port is a programmatic docking point that grants the outside world access to our local system. I chose port 5000. You can choose a different number if you want (if it's already being used, your computer will tell you).
Let’s fire up our web service to see if it works. If it all goes well we’ll add our prediction model and start serving real forecasts for our application to consume and visualize.
So far we’ve just been using a browser and notepad to create our application. But now we need to speak directly with our operating system, and for that we must use terminal (command prompt on PC).
I am using a Mac, so everything you see here will be in macOS. But the same general steps apply to PC. Open terminal. Quickest way in Mac is to use Spotlight Search by typing “command + spacebar” and typing in “terminal”:
With terminal open, change into our my_app directory by running the following:
cd Desktop/my_app/
…and hit enter. Now type:
ls
… and hit enter again.
You should see both the index.html and predict.py files. Hard to believe our entire application is only 2 files.
Thanks to the level of abstraction available in today’s tools we only require minimal code to create a full blown app.
Now, start our web service by running the following:
python predict.py
…and hitting enter.
You are now running a web service, exposing your Python code to any application capable of communicating with it. This is pretty cool, considering so many high-powered libraries are available today in Python. Just think what you could create!
Our back-end is being served on “localhost”, port 5000, with an endpoint called “predict_covid”, accepting parameters called “x” and “y”. To pass all this information “over the wire” we can construct the following URL:
http://localhost:5000/predict_covid/?x=100&y=400
This is the standard way to communicate with a REST API using URLs in the browser. The ? introduces the query string and its first parameter, and each additional parameter is separated by an &. Open a new tab in your browser, add the above URL to the top, and hit enter. You should see the following:
Our web service simply returns the values we passed for “x” and “y”. Not too exciting, but it does prove our web service works.
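Python's standard library can confirm how the URL from above decomposes into a path and query parameters:

```python
from urllib.parse import urlparse, parse_qs

url = "http://localhost:5000/predict_covid/?x=100&y=400"
parts = urlparse(url)

print(parts.path)              # the endpoint path
print(parse_qs(parts.query))   # the x and y parameters, as lists
```

Flask's request object performs this same parsing for us on the server side, which is how our forecaster function receives its "x" and "y" values.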
Let’s hook it all up, end-to-end, so that our front-end application written in JavaScript (Azle) is passing data to, and receiving data from, our web service (rather than the browser). If we succeed, the only thing left to do will be building a good forecasting model in Python.
We need to pass the following data to our back-end:
dropdown choice
slider value
dates and # of cases
Note: We could fetch the covid data in the back-end rather than passing it over-the-wire, but then we would be fetching data twice. Since we are only forecasting a few dates and values it makes more sense to use the data already fetched from the front-end.
We already know how to “grab” the values from UI elements, using Azle’s grab_value function. We also have our fetched data structured nicely in our az.hold_value.fetched_data object.
To send this to our web service we could use JavaScript’s fetch API again. Instead, let’s use Azle’s call_api function, inside our button’s event listener:
Here is a breakdown of what is happening:
We added an “event listener” to our button, just like we did to our dropdown earlier. We also added Azle’s call_api inside the event’s function property, specifying the URL to our Flask service, the “x” and “y” parameters, and an alert when data are returned.
Refresh your browser and click on the FORECAST button:
Our application is officially interacting with our back-end service, passing data to, and receiving a response from, our Python code. Let’s replace the “x” and “y” parameters with the actual data we need to send, and also add the forecast horizon from the slider value.
Here I am grabbing the slider value, and the “x” and “y” data from our existing get_dates_and_cases function.
We now have our events hooked-up:
Finally, let’s build a forecasting model to make predictions based on the user’s choice of type (deaths, confirmed, recovered) and horizon. After some searching around I found a library called Prophet, which was open-sourced by Facebook.
Facebook recently released their neural prophet version as well.
Prophet is a forecasting library for time series, and has a Quickstart guide for Python that should get us up and running.
Our final predict.py file receives data from the front-end (which we already know how to do), prepares the data as needed by the Prophet library (as specified in their docs), trains a forecast model, makes a prediction, and returns the results.
All this in about 10 lines of code. Not bad! (There’s that abstraction working for us again).
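The final predict.py isn't shown in this excerpt, but based on the description above and Prophet's Quickstart, the forecasting step might be sketched as follows. The function and key names are illustrative assumptions, and the Prophet import is kept optional so the data-prep part runs even where the library isn't installed:

```python
import pandas as pd

try:  # the library was published as "fbprophet" before being renamed "prophet"
    from prophet import Prophet
except ImportError:
    try:
        from fbprophet import Prophet
    except ImportError:
        Prophet = None

def to_prophet_frame(dates, values):
    # Prophet requires a two-column DataFrame named exactly "ds" and "y".
    return pd.DataFrame({"ds": pd.to_datetime(dates), "y": values})

def forecast(dates, values, horizon_days):
    if Prophet is None:
        raise RuntimeError("Install prophet to run the forecasting step.")
    model = Prophet()
    model.fit(to_prophet_frame(dates, values))
    future = model.make_future_dataframe(periods=horizon_days)
    result = model.predict(future)[["ds", "yhat"]].tail(horizon_days)
    # Plain lists are easy to JSON-encode back to the front-end.
    return {"dates": result["ds"].dt.strftime("%Y-%m-%d").tolist(),
            "values": result["yhat"].tolist()}
```

The train, make_future_dataframe, predict sequence mirrors Prophet's own Quickstart; the reshaping into ds/y columns is the only prep the library demands.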
In our index.html file I added 2 new functions for prepping data for the 2nd trace (to show the forecast). You’ll see it’s very similar to the ones we wrote for trace 1. I also added the new draw_forecast function to the done property of our call_api function.
Have a look over the final index.html to see the changes. You should recognize the final changes as familiar code. If anything looks unfamiliar search online to learn why it might have been added.
Let’s refresh our browser one last time to see the full application in action.
We achieved everything on our original list:
Before signing off, I want to add a spinner when the user clicks FORECAST so they know to wait for the result. I also want the Plotly chart to be transparent so we can see that nice blue background we had. Finally, I’m going to change the title of our app now that it’s something real.
You can find the final code here:
And here is the final CodePen. Note that since CodePen isn’t running our back-end web service you won’t see the predictions. But everything should be working on your local machine.
These are the major parts that go into building any application. With these skills you can grow into the world of software development (or just use it as a hobby to build stuff).
That’s it. Might seem like a long article but considering you just crafted an entire application with all major pieces, not too bad. If you’re stuck, reach out to me on Twitter. We’ll work through the code together. And don’t stop here. Try other libraries. Other use cases. Other anything. Just build.
Again, the only gatekeeper to opportunity is you. | https://towardsdatascience.com/full-stack-development-in-1-twitter-thread-5a0cbce2e059 | ['Sean Mcclure'] | 2020-12-02 04:27:42.885000+00:00 | ['Front End Development', 'Hacking', 'Prototyping', 'JavaScript', 'Development'] |
5 different DIY Projects with Wine Corks | Some days just seem to call for a bottle of wine. So you grab a bottle, you pop the cork…and then you find yourself saying “well this is a nice bottle of wine, so I should keep the cork”. Jump to a year later, and suddenly you’ve got a bucket full of corks and no idea what to do with them, and clearly you’re not going to throw them out now…you just need the right project! Well, we’ve come up with our five favorite ways to use those leftover corks.
#1: Cork zoo? Cork zoo.
People have gone pretty wild with these cork animals, and there’s loads of ways to make them. You can use the cork as the body, and attach some body parts to it. Feeling more artistic? Grab a marker and design yourself a penguin or a tiger. In a more constructive mood? Take a few and build yourself a horse (here’s a hint: start with using a cork for each leg, another for the body, and cut a third into a big piece for the head and a smaller piece for the neck). Endless possibilities!
#2: Who started the fire? Wine corks!
You know what pairs well with a nice glass of wine on a cold day? A nice fire in your fireplace. And you don’t need to be a pro camper to make a fire if you’ve thought ahead with some wine corks. Just fill an airtight container with some rubbing alcohol, throw in your corks, and let ’em soak. Once it’s time for your fire, throw in some kindling with your logs, and then put in your corks. One match later, and you’ll be relaxed next to a warming fire with a warming glass of wine in hand.
#3: It’s gonna be a very classy affair
Wine is best shared with friends, and sometimes that means classing the place up a bit. You break out the nice plates, make sure all the silverware matches, and with a few corks, you’ve got yourself some placeholders too! All you need to do is cut a small slit lengthwise down the cork, and you’ve got yourself a perfect placeholder for all your guests’ name cards.
#4: Build a bathmat
If you’ve found yourself with a prodigious collection of corks, you may find yourself saying “well those suggestions are well and good, but it’s not going to dent my cork cabinet!”. Enter the cork bathmat. All you need are corks, a good powerful glue (hot glue works best), and a non-adhesive liner to place them on. Then all you need to do is build a rectangle out of your corks, cut the liner to size, and glue them into place. Boom, cork bathmat.
#5: A plant holder for even the least green of thumbs
If you’re the type who has managed to kill even your plastic plants, this is the plant holder for you. Take your cork, and punch a hole in the top (you can even use a corkscrew for this). Then use a knife to dig out a bigger hole, which you’ll want to go about halfway down the cork. Then fill the hole with soil, plant a succulent clipping. They’re cheap, hardy, and in style, so you’ll seem to be quite the fashionable gardner.
About Toast!:
We made Toast! gummies shareable so you can easily carry them in your bag or pocket and then be the talk of the group for bringing party favors! Check out our collection here and stock up today. | https://medium.com/@sean_71552/5-different-diy-projects-with-wine-corks-edf48e8535b2 | ["Sean O'Neill"] | 2019-03-07 18:39:38.113000+00:00 | ['Alcohol', 'DIY', 'Funny', 'Wine'] |
Hrithik Roshan Net Worth, Age, Movies, Height, Son, Birthday | Hrithik Roshan Net Worth, Age, Movies, Height, Son, Birthday
The monthly income of Hrithik Roshan is $1.5 million, and his daily income is $39,512. Hrithik Roshan’s net worth is $32.5 million, and his annual income is 21,45,12,000 rupees. He also earns money as the brand ambassador of many big companies, and from HRX, the clothing brand he started.
Hrithik Roshan’s birthday is on 10 January. Hrithik Roshan’s age is 47 years. Hrithik Roshan’s height is 5ft 9 in. Hrithik Roshan son are Hridaan Roshan and Hrehaan Roshan. The real name of Hrithik Roshan is Hrithik Rakesh Nagrath.
He has given many blockbuster movies to the Bollywood industry. His blockbuster movies include Kaho Naa Pyar Hai, Krrish, Agneepath, Koi Mil Gaya, Krrish 3, Zindagi Na Milegi Dobara, Jodhaa Akbar, Dhoom 2, Kaabil, Super 30, Guzaarish, Lakshya, and War.
Hrithik Roshan’s father is named Rakesh Roshan and he is a director and producer. Hrithik Roshan’s mother’s name is Pinky Roshan.
Hrithik Roshan’s grandfather is also touching the film industry. Jay Om Prakash’s grandfather of considered lucky Hrithik Roshan that’s why Hrithik Roshan always plays a small character in his movie.
At the age of 6, Hrithik Roshan made his debut as a child artist in the movie Aasha, for which he was paid 100 rupees. At an early age, Hrithik Roshan had a stuttering problem, which kept him from fully connecting with children his age. With the help of speech therapy, he solved his stuttering problem.
While working as a child artist, Hrithik Roshan decided to become an actor, but his father wanted him to focus entirely on his studies. At the age of 20, Hrithik Roshan faced another problem: scoliosis, a disease affecting his spinal cord.
Doctors advised Hrithik Roshan not to do stunts or dance, but he did not give up on his passion. He then completed his graduation in Economics from Sydenham College of Commerce. Having decided to enter the film industry, he started out working with his father as an assistant director.
As an assistant director, Hrithik Roshan worked on Khudgarz (1987), Uncle (1993), Karan Arjun (1995), and Koyla (1997). During this time, he also took acting classes from Kishore Namit Kapoor. In 2000, Hrithik Roshan starred in his debut movie as a leading actor, Kaho Naa Pyar Hai. The movie’s director, Rakesh Roshan, had originally wanted Shah Rukh Khan as the leading actor.
Shah Rukh Khan did not like the movie’s story and turned it down. Rakesh Roshan then decided to launch his son in Bollywood with this movie. Kaho Naa Pyar Hai was a blockbuster, and people appreciated Hrithik Roshan’s dance and acting in it.
Hrithik Roshan won Filmfare, IIFA, and Zee Cine Awards for Best Male Debut and Best Actor. After that, he never looked back.
He has worked in many movies, including Fiza, Mission Kashmir, Kabhi Khushi Kabhie Gham, Koi Mil Gaya, Krrish, Jodhaa Akbar, Dhoom, Guzaarish, Agneepath, Zindagi Na Milegi Dobara, Kaabil, Super 30, War, and many more.
Talking about Hrithik Roshan’s personal life, he married the love of his life, Sussanne Khan, on 20 December 2000, and they have two children. Unfortunately, due to some personal reasons, they divorced. Hrithik Roshan made his television debut in the Just Dance TV reality show. He also started his own clothing brand, HRX. Hrithik Roshan achieved all this with his hard work. | https://medium.com/@akshayakky/hrithik-roshan-net-worth-age-movies-height-son-birthday-b8bc7d09ea03 | ['Akshay Akky'] | 2021-11-18 08:37:49.858000+00:00 | ['Bollywood', 'Net Worth', 'Movies'] |
Fast Forward: What Marketers Need to Know about the Facebook Connect Event | Fast Forward: What Marketers Need to Know about the Facebook Connect Event
Facebook outlines its roadmap for AR and VR developments
Editor’s note: This is an abridged edition of our Fast Forward newsletter, a fast read for you and an easy forward to your clients. If you wish to receive the full version a day earlier in your inbox, please contact Josh Mallalieu ([email protected]) to get on our mailing list.
Facebook has always wanted to be a platform company. Mark Zuckerberg is acutely aware that his company’s ad-based business model is always at the whims of the platform owners, most recently demonstrated by a blog post the company published last month decrying Apple’s decision to tighten privacy settings in iOS 14 could cut its Audience Network revenue in half. This deep-seated goal explains why the company has always had the ambition to establish itself as a platform company in the post-mobile world.
The impending paradigm shift in computing interface, from mobile to AR and VR, is one that Facebook is deeply concerned about in its latest public event. Formerly known as the Oculus Connect event, hosted by the Facebook-owned VR subsidiary, the event has been renamed Facebook Connect and is now presented by Facebook Reality Labs, a new subsidiary of Facebook that merges Oculus with its internal team working on AR-related projects.
Compared to the Apple event on Tuesday, which ran a concise 60 minutes, Facebook Connect was over-ambitious and future-forward, laying out its roadmap for its transition towards an AR/VR future in great detail but with few convictions. Over a 100-minute event, Facebook announced a wide-ranging variety of developing projects, upcoming features, and one consumer-ready product — an updated Oculus Quest VR headset. The Verge has a rundown of all the announcements if you’d like to check them out. Here, we’ll focus on the big picture and what it means for brand marketers.
Oculus Quest 2
VR: Doubling Down on Gaming Won’t Solve Adoption Hurdles
The VR industry has been stuck in development limbo for a few years; even the stay-at-home orders didn’t do much to spur consumer interest in VR headsets. Facebook likes to tout that its Oculus Quest headsets have been sold out at many retailers, but the fact that it has so far shied away from announcing an exact sales number has led many to speculate that sales have not been that impressive, and that the sellouts are merely a result of insufficient inventory.
Nevertheless, the lack of consumer interest hasn’t deterred Facebook from its commitment to selling Oculus headsets. The new Oculus Quest 2 is lighter, faster, more comfortable to wear, and most of all, cheaper. Starting at $299, it is the most affordable cordless VR headset and arguably the best one available on the market. The problem, however, is that hardware quality has not been the main holdback for VR adoption for at least two years. Instead, it is the lack of content, especially non-gaming content, that is preventing VR from being considered by consumers outside of gamers and tech enthusiasts.
Given this, it is rather disappointing to see that Facebook is still unsure of how to solve this VR content issue. It attempted to position VR as a socialization channel by integrating its Messenger platform into Oculus Quest’s UI and rehashing the grand promises of Facebook Horizon (which still remains in invite-only beta testing). It argued for the need for “full presence” in this age of Zoom meetings, but the VR solution it proposed seemed a bit far from being deployed at scale. Similarly, Facebook also tried to position Oculus Quest as an essential productivity tool for the future of work with a new “Infinite Office” mode, but only going so far as to reimagine laptop displays as floating touch screens in VR without actualizing the real benefits of working in a simulated environment.
Predictably, Facebook fell right back into the reliable VR game route as the main selling point of Oculus Quest 2, devoting a significant amount of airtime to showcasing a dazzling array of upcoming VR games, including titles from popular franchises such as Assassin’s Creed, Jurassic Park, and Star Wars. While the games do look exciting, and the lower price of Oculus Quest 2 does make it more accessible, Facebook still faces a tough holiday season this year to compete in the gaming hardware market against new-gen consoles from both Playstation and Xbox.
While gaming has been a tried-and-true revenue generator for VR, it inherently limits possible use cases and caps VR headsets’ true potential as the next computing platform. Only by working out how to actualize non-gaming use cases and implementing them correctly can Facebook make a better sales pitch for Oculus and push VR into the mainstream consumer market. Unfortunately, this event showed little evidence that it has found the right ways to do so.
Spark AR platform
AR: Taking a Leap from Instagram to AR Glasses
Compared to its vision in steering VR development, Facebook fared better in terms of AR. Although Facebook has focused mostly on VR for the past few years, the reality is that the company is cognizant of the rapid development in consumer-facing AR and has been talking about building its own AR glasses for years.
Today, Instagram serves as the primary channel for Facebook’s AR initiatives, which means that the majority of the “1.2 million AR effects” that Facebook claims creators have made with its Spark AR framework are selfie lens and filters. Sure, introducing those AR effects into Messenger and Facebook Portals will help increase their reach, but at the end of the day, they are only the first step toward applications of mobile AR that don’t provide much utility or functional value. Compared to Snap’s AR announcements a couple of months ago, or Apple’s at WWDC, which included things like multi-user AR and occlusion, Facebook didn’t announce any comparable technical advancements for its own Spark AR platform.
Encouragingly, Facebook did expand its AR use cases beyond camera filters. It announced it will bring its existing AR try-on features from Instagram to the newly-launched Facebook Shops to help sellers better show off their products and engage with online shoppers. Beyond that, Facebook is also working with museums and cultural institutions such as the Smithsonian and the Palace of Versailles to capture their collections in 3D and making them available in virtual tours.
As Facebook continues to expand on its mobile AR initiative, it is also actively working to build its own AR hardware. In fact, Mark Zuckerberg kicked off the event with the debut of Project Aria, a pair of camera-equipped smart glasses that Facebook employees will wear in public to gather “ego-centric” location and content data for Facebook’s AR projects. Facebook emphasized that Project Aria is not meant to be a consumer-facing product nor a prototype, but rather a research device to help it advance its location dataset needed to build its AR glasses.
Project Aria
To that end, Facebook also announced it is looking to release a first pair of consumer “smart glasses” next year as a branded Ray-Ban product. Details about what this device will do remain unclear, but Facebook clarified with The Verge that “the device will not be classified as an AR device, and it will not have an integrated display of any kind.” Regardless, it will serve as a necessary stepping stone to Facebook’s eventual AR glasses, where information snippets and 3D virtual objects are placed contextually in the user’s surrounding environments. In addition, by partnering with Luxottica, a leader in premium eyewear, it will ensure its glasses will be fashionable and comfortable to wear for everyone.
Even without a consumer-ready hardware product, Facebook does seem to have a comparatively solid plan to extend its AR user experiences beyond mobile and leap onto AR wearables to establish itself as an AR platform owner. However, given existing public use cases of AR glasses, it does call into question whether Facebook has enough brand trust to convince consumers that their AR glasses won’t infringe on their and others’ privacy. The fact that the well-received Facebook Portal failed to gain market share is perhaps a testament to the idea that Facebook’s reputation is hurting its chances of entering new markets. To their credit, Facebook was self-aware enough to repeatedly underscore the precautions it is taking to ensure transparent data collection, clear controls, and inclusion of all people of all backgrounds, especially non-users who’d be affected. But trust is not easily gained by guidelines; it has to be earned through concrete actions. So far, Facebook’s past track record has shown little to inspire confidence that it could get things right this time. Nevertheless, hope springs eternal.
What Brand Marketers Should Do
Regardless of whether Facebook can successfully push VR into the mainstream or establish itself as a major AR platform owner, one thing is clear from this Facebook event, as well as the Apple event on Tuesday — the personal computing paradigm is shifting and major tech owners are all readying their next move. While there is no tangible immediate impact on brand marketing and advertising at the moment, smart brands know it is always better to stay ahead of the curve.
With an eye on the fast-approaching post-mobile era, now is the time for brands to start exploring AR-enabled brand experiences, virtual events held in games and other immersive environments, and, perhaps the most important and difficult of all, figuring out how to surface your products and services contextually to your target audience, via the best platform, at the right time, and in the right place. An AR/VR strategy should be made today to guide your brand into the post-mobile era of brand-consumer interactions, where contextual relevance and immersive experiences will be key to delivering value and building customer relationships.
If you’d like to learn more about Facebook’s latest announcements and what they mean for your brand, or just want to chat about how to prepare for a post-mobile future, the Lab is here to help. You can start a conservation by reaching out to our group director, Josh Mallalieu ([email protected]). | https://medium.com/ipg-media-lab/fast-forward-what-marketers-need-to-know-about-the-latest-facebook-connect-event-ced515383876 | ['Richard Yao'] | 2020-09-21 16:30:13.812000+00:00 | ['Virtual Reality', 'Augmented Reality', 'Innovation', 'Technology', 'Marketing'] |
How to Build a Reporting Dashboard using Dash and Plotly | A method to select either a condensed data table or the complete data table.
One of the features that I wanted for the data table was the ability to show a “condensed” version of the table as well as the complete data table. Therefore, I included a radio button in the layouts.py file to select which version of the table to present:
Code Block 17: Radio Button in layouts.py
The callback for this functionality takes input from the radio button and outputs the columns to render in the data table:
Code Block 18: Callback for Radio Button in layouts.py File
This callback is a little more complicated since I am adding columns for conditional formatting (which I will go into below). Essentially, just as the callback below changes the data presented in the data table based upon the dates selected, via the callback statement Output('datatable-paid-search', 'data'), this callback changes the columns presented in the data table based upon the radio button selection, via the callback statement Output('datatable-paid-search', 'columns').
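The heart of that callback, mapping the radio selection to the list of column definitions ({'name': ..., 'id': ...} dicts) that Dash's DataTable expects, can be factored into a plain function. The column names below are hypothetical stand-ins, not the article's actual columns:

```python
# Hypothetical column sets for the paid-search table (not the article's actual names).
CONDENSED = ["Date", "Spend", "Revenue"]
COMPLETE = CONDENSED + ["Sessions", "Transactions", "Revenue YoY (%)"]

def columns_for(view):
    """Return Dash DataTable column definitions for the chosen radio-button view."""
    names = CONDENSED if view == "condensed" else COMPLETE
    return [{"name": n, "id": n} for n in names]
```

Keeping this logic in a plain function makes it easy to unit-test outside of Dash's callback machinery.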
Conditionally Color-Code Different Data Table Cells
One of the features which the stakeholders wanted for the data table was the ability to have certain numbers or cells in the data table to be highlighted based upon a metric’s value; red for negative numbers for instance. However, conditional formatting of data table cells has three main issues.
There is lack of formatting functionality in Dash Data Tables at this time.
If a number is formatted prior to inclusion in a Dash Data Table (in pandas for instance), then data table functionality such as sorting and filtering does not work properly.
There is a bug in the Dash data table code in which conditional formatting does not work properly.
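The second issue is easy to demonstrate: once numbers are formatted as strings, sorting becomes lexicographic rather than numeric. A quick illustration in plain Python:

```python
revenues = ["$1,000", "$200", "$30"]

# Lexicographic sort: "$1,000" comes first because "1" < "2" < "3",
# even though numerically it is the largest value.
print(sorted(revenues))  # ['$1,000', '$200', '$30']

# Sorting on the underlying numbers restores the order a user expects.
as_number = lambda s: float(s.replace("$", "").replace(",", ""))
print(sorted(revenues, key=as_number))  # ['$30', '$200', '$1,000']
```

Filtering suffers the same way, since string comparisons are applied to the formatted text rather than the values.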
I ended up formatting the numbers in the data table in pandas despite the above limitations. I discovered that conditional formatting in Dash does not work properly for formatted numbers (numbers with commas, dollar signs, percent signs, etc.). Indeed, I found out that there is a bug with the method described in the Conditional Formatting — Highlighting Cells section of the Dash Data Table User Guide:
Code Block 19: Conditional Formatting — Highlighting Cells
The cell for New York City temperature shows up as green even though the value is less than 3.9.* I’ve tested this in other scenarios and it seems like the conditional formatting for numbers only uses the integer part of the condition (“3” but not “3.9”). The filter for Temperature used for conditional formatting somehow truncates the significant digits and only considers the integer part of a number. I posted to the Dash community forum about this bug, and it has since been fixed in a recent version of Dash.
*This has since been corrected in the Dash Documentation.
Conditional Formatting of Cells using Doppelganger Columns
Due to the above limitations with conditional formatting of cells, I came up with an alternative method in which I add “doppelganger” columns to both the pandas data frame and Dash data table. These doppelganger columns had either the value of the original column, or the value of the original column multiplied by 100 (to overcome the bug when the decimal portion of a value is not considered by conditional filtering). Then, the doppelganger columns can be added to the data table but are hidden from view with the following statements:
Code Block 20: Adding Doppelganger Columns
Then, the conditional cell formatting can be implemented using the following syntax:
Code Block 21: Conditional Cell Formatting
Essentially, the filter is applied on the “doppelganger” column, Revenue_YoY_percent_conditional (filtering cells in which the value is less than 0). However, the formatting is applied on the corresponding “real” column, Revenue YoY (%). One can imagine other uses for this method of conditional formatting; for instance, highlighting outlier values.
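Concretely, the trick can be sketched in pandas together with the style_data_conditional entry it feeds. The column names match the ones discussed above, but the exact filter key has changed across Dash versions, so treat this as a sketch rather than the article's literal code:

```python
import pandas as pd

df = pd.DataFrame({"Revenue YoY (%)": [12.5, -3.5, 0.0]})

# Hidden "doppelganger" column: the raw value times 100, so the integer-only
# filter bug described above cannot truncate away the decimal part.
df["Revenue_YoY_percent_conditional"] = df["Revenue YoY (%)"] * 100

# Filter on the doppelganger column, but style the visible "real" column.
# (Older Dash versions spell the key 'filter' instead of 'filter_query'.)
highlight_negative = {
    "if": {
        "filter_query": "{Revenue_YoY_percent_conditional} < 0",
        "column_id": "Revenue YoY (%)",
    },
    "color": "red",
}
```

The highlight_negative dict would then be appended to the table's style_data_conditional list.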
The complete statement for the data table is below (with conditional formatting for odd and even rows, as well highlighting cells that are above a certain threshold using the doppelganger method):
Code Block 22: Data Table with Conditional Formatting
I describe the method to update the graphs using the selected rows in the data table below. | https://medium.com/p/4f4257c18a7f#906f | ['David Comfort'] | 2019-03-13 14:21:44.055000+00:00 | ['Dashboard', 'Towards Data Science', 'Data Science', 'Data Visualization', 'Dash'] |
Introducing solids to your baby | I hope your parenting journey is going good. Once your baby reaches 4 months of age, you all want to know which foods to introduce to your baby apart from formula milk(FM) or breast milk(BM).
First, you need to realize that it's not until late in the first year of life that the digestive system matures enough to absorb nutrients and calories fully. So solid foods, which can fill a baby up but are low in calories and nutrients, don't help your baby grow until they're about a year old.
So, what are solid foods for in infancy? Well, they're really only there to help your baby develop a fondness for tastes and textures. The main ongoing source of nutrition for growth in that first year of life is the breast milk or formula milk you are giving your baby.
So, when is the best time to initiate solids for taste and texture? Well, certainly not before at least 4 months of age, and more importantly, only when your baby demonstrates good head and neck control. If solids are introduced too early or too late, there is a risk of food allergies.
The American Academy of Pediatrics recommends starting solids sometime between 4 and 6 months of age. At this age, babies will be watching you when you eat something in front of them. Watch out for the signs that they are ready to eat different foods.
As to what foods to start with, most nutritionists recommend starting with a single-grain cereal, like iron-fortified rice cereal, which can be easily digested at 4 to 6 months of age. And then moving up to pureed vegetables and subsequently, fruits. Saving the sweeter tastes for last so your baby adjusts to the non-sweet tastes of cereals and vegetables first. Don't introduce more than one new food at a time to make sure that your baby doesn't develop a food allergy to a particular new food or an ingredient you're giving him or her. If your baby doesn't like a food, wait a week or two and try again, since often the second or even third time is the charm.
Some baby food recipes you can prepare:
Cereal (Rice/Barley/Oatmeal) :
Ingredients:
1/4th cup of cereal powder
1 cup water
Step 1: Bring liquid to boil in saucepan. Add the cereal powder while stirring constantly.
Step 2: Simmer for 10 minutes, stirring constantly, then add some breast milk or formula milk. Serve warm.
Apple puree:
Ingredients :Apple, water
Step 1: peel the apple and cut into small pieces.
Step 2: Boil the pieces by adding some water.
Step 3: Blend the boiled pieces with a blender. Add some breast milk or formula milk. The apple puree is ready to serve.
Banana puree:
Step 1: peel a ripe banana and cut it into pieces.
Step 2: you can either mash the banana pieces with a fork or blend them in a blender. Add some breast milk or formula milk to get the desired consistency.
Once your baby gets used to semisolids, you can introduce solid foods. If your baby is teething, you can offer finger foods that have been kept in the fridge, as the cooling effect can be soothing while teething. Do not give your baby a lot of solids at a time. Introduce one food at a time, and slowly increase the amount if your baby is not allergic to it. All the best to all of you lovely mommies :)
Also read : Immunity booster foods for kids
Article from https://mammaandsara.blogspot.com/ | https://medium.com/@happyparenting/introducing-solids-to-your-baby-a28927484d88 | ['Happy Parenting'] | 2020-08-20 09:56:58.586000+00:00 | ['Solidfood', 'Baby', 'Baby Food', 'Parenting', 'Baby Care'] |
Airbnb | Airbnb
In the class our professor told us about the Ad Age 2018 A-List advertising and communications agencies . We got the chance to have a tour on the article and find out the top marketing and communication agencies for 2018. The one that catched our eyes was Airbnb.
What is Airbnb?
Airbnb is an online marketplace connecting travelers with local hosts. On one side the platform enables people to list their available space and earn extra income in the form of rent. On the other, Airbnb enables travelers to book unique home stays from local hosts, saving them money and giving them a chance to interact with locals. Catering to the on-demand travel industry, Airbnb is present in over 190 countries across the world.
Founders, Funding received, Salient features & Facts:
Airbnb was founded on 1 August 2008, and in a short span of time it grew into a technology masterpiece. Having received massive funding of $2.3 billion from 31 different investors, Airbnb is part of the billion-dollar club. Here are a few interesting facts about Airbnb.
· Founders: Nathan Blecharczyk, Joe Gebbia and Brian Chesky.
· Funding received: $2.3 Billion (Till June 2015)
· Airbnb company valuation: $25.5 Billion (As of June 2015)
· Airbnb is present in 34,000+ cities across 190+ countries.
· Having 1.2 Million listings, the company has served over 35 Million guests.
· Headquarters: San Francisco, California, USA.
· 140,000+ people stay at an Airbnb listed place every day.
Value Propositions of Airbnb
· Enables owners to list their space on the platform and earn rental money.
· Airbnb provides insurance to listed properties.
· Gives cheap options to travellers to stay with local hosts.
· Facilitates the process of booking living space for travellers.
· Rating and review system for hosts and guests.
The 5 step model about how Airbnb works:
1. Hosts list out their property details on Airbnb along with other factors like pricing, amenities provided etc.
2. Airbnb sends a professional photographer (if available) to the property location in order to take high quality photographs.
3. Travellers search for a property in the city where they wish to stay and browse available options according to price, amenities etc.
4. Booking is made through Airbnb where traveller pays the amount mentioned by host and some additional money as transaction charges.
5. Host approves the booking. Traveller stays there and finally Airbnb pays the amount to the host after deducting their commission.
The host and the traveler can rate each other and can write reviews based on the experience.
How does Airbnb find customers?
Customers can find and use Airbnb via social media, word of mouth, digital marketing (including internet ads and campaigns), and promotional offers. It is fast and easy for everyone to get to know the Airbnb network.
Airbnb Timeline
The Future of Airbnb
Airbnb is already a multi-billion-dollar company and is sure to grow further. With a presence in 190+ countries across the world, it is now concentrating on further increasing the daily transactions on its platform. With total funding of $2.3 billion to date, Airbnb's unique business model has become even stronger as people increasingly prefer staying at an Airbnb rather than a hotel.
Georgia Simitzi
Team: Georgia Simitzi Renata Stav Σοφία Τσαβδαρίδου Κωνσταντίνα Μουντάκη Stavriana
Professor: Betty Tsakarestou | https://medium.com/ad-discovery-and-creativity-lab/airbnb-906ddacf614e | ['Georgia Simitzi'] | 2019-06-18 09:28:41.180000+00:00 | ['Ads', 'Airbnb', 'Betty Tsakarestou', 'Advertising'] |
Word Sequence Decoding in Seq2Seq Architectures | Natural Language Generation (NLG) is a task of generating text. Natural Language Generation tasks like Machine Translation, Summarization, Dialogue Systems have a core component of generating the sequence of words as an output conditioned on a given input. For example — For a machine translation system, given an input sentence in English, the model needs to generate its French translation. Today most such systems are built on Encoder-Decoder architecture and it’s variations. Fig.1 shows a diagrammatic glimpse of such an architecture.
Fig.1 from Source
In the above-shown image, the Encoder is responsible for capturing the full context of the source/input language, whereas the Decoder is responsible for using this information to output a translation in the desired language. Such models are trained with a huge parallel corpus of sentences for both languages. Getting into the nitty-gritty of this architecture is beyond the scope of this blog; please read more here. Also, Google's actual NMT system has an attention component as well. To read about attention networks, please read this.
It is quite common to output a probability distribution over the vocabulary using the Softmax activation function at each decoder time-step. Choosing the final output sequence after the model has been trained depends on the decoding strategy one uses.
Here, we will discuss 3 decoding strategies that are widely used in practice during inference time—
1. Greedy Search
This strategy selects the most probable word (i.e. the argmax) from the model's vocabulary at each decoding time-step as the candidate for the output sequence.
Decoder Segment
The problem with this approach is that once a word is chosen at time-step t, we don't get the flexibility to go back and change the choice. In practice, the greedy decoding strategy is prone to grammatical errors in the generated text: it picks the best word at each individual time-step, but that does not necessarily yield the best full sentence in terms of grammar and sense.
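As a minimal sketch (assuming the model exposes a hypothetical step function that returns next-token logits for a given prefix; the signature is illustrative, not from any specific library), greedy decoding looks like this:

```python
import numpy as np

def greedy_decode(step_fn, bos_id, eos_id, max_len=20):
    """Greedily pick the argmax token at every decoding time-step.

    `step_fn(prefix)` is assumed to return a 1-D array of logits
    over the vocabulary for the next position.
    """
    seq = [bos_id]
    for _ in range(max_len):
        logits = step_fn(seq)
        next_id = int(np.argmax(logits))  # locally best word, no backtracking
        seq.append(next_id)
        if next_id == eos_id:
            break
    return seq
```

Note that once `next_id` is appended it is never revised, which is exactly the limitation described above.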
2. Beam Search with Fixed Beam Size
The beam search strategy tries to find an output sequence with maximum likelihood. It does this by extending greedy sampling to a top-k strategy: at any time-step t, it considers the top-k most probable words as the candidate words for that step. Here, k is called the beam size. Mathematically, we are trying to maximize the equation below —
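The equation itself was an image in the original post; a standard way to write the objective, reconstructed here as an assumption of what the figure showed, is the summed log-probability of the output sequence given the input x:

```latex
\mathrm{score}(y_1,\dots,y_T \mid x) \;=\; \sum_{t=1}^{T} \log P\!\left(y_t \mid y_1,\dots,y_{t-1},\, x\right)
```

Beam search keeps the k partial sequences with the highest score at every time-step.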
Below diagram shows how it happens in practice for k=3 —
Beam Decoding with k=3 at each time-step
Here, we search for a high-scoring output sequence by keeping track of the top-k vocabulary outputs at each time-step while decoding. We usually stop the search when the end-of-sentence token (<eos>) is produced, or after a maximum number of time-steps, for all (or at least some n) output sequences. We also normalize scores by length to avoid a bias towards shorter sequences.
Normalized Beam Search Scoring
Here, beam size acts as a trade-off variable between time complexity and accuracy. Let’s analyze the minimum and maximum values that k can take:
When (k=1) — It behaves like greedy search, where the argmax at each time-step is fed to the next consecutive step.
When (k=Size of Vocabulary) — It behaves like an exhaustive search, where the candidate words at each time-step are the whole vocabulary, and each candidate gives a probability distribution over the vocabulary for the next consecutive step.
Time Complexity Analysis
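Putting the pieces together, a minimal length-normalized beam search might look like the sketch below (the `step_logprobs(prefix)` function, returning log-probabilities over the vocabulary, is a hypothetical stand-in for the decoder):

```python
import math

def beam_search(step_logprobs, bos_id, eos_id, k=3, max_len=20):
    """Keep the top-k partial sequences at every decoding time-step.

    `step_logprobs(prefix)` is assumed to return a list of
    log-probabilities over the vocabulary for the next position.
    """
    beams = [([bos_id], 0.0)]   # (sequence, summed log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, lp in enumerate(step_logprobs(seq)):
                candidates.append((seq + [tok], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:k]:   # keep only the top-k
            if seq[-1] == eos_id:
                finished.append((seq, score))
            else:
                beams.append((seq, score))
        if not beams:
            break
    finished.extend(beams)
    # normalize by length so short sequences are not unfairly favoured
    return max(finished, key=lambda c: c[1] / len(c[0]))[0]
```

With k=1 this degenerates to greedy search, and with k equal to the vocabulary size it becomes the exhaustive search discussed above.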
3. Beam Search with Variable Beam Size [1]
The notion of a variable beam size was developed in response to the limitations of a fixed beam size, which is not necessarily an optimal choice at every decoding time-step.
Let's understand this with an example. Suppose the probability values of the top 5 words at some time-step t are comparable; with a fixed beam size k < 5, we might miss out on relevant candidates, resulting in information loss. Conversely, suppose only the top 2 words are comparable and the rest are far less probable; with a fixed beam size k > 2, we might add noise by including barely relevant candidates.
Instead, the beam size should be a function of the probability distribution at each decoding time-step t. In [1] the author discusses various techniques that can be used to derive this relation. Notice that this can also be seen as a binning problem that can be discretized based on entropy measures. Please read more here.
4. Temperature Induced Softmax
This isn’t a decoding strategy but can be used in-hand with any of the above-mentioned searches. At any decoding step, we typically use the Softmax activation function to deliver probability distribution over our vocabulary. Instead of using plain Softmax we use the modified version of it, shown below —
Temperature Induced Softmax
Here, T is the temperature variable. It is easy to see that higher values of T result in a flattened distribution (giving almost the same probability to every word), whereas lower values of T result in a peaked distribution (giving high probability to a few words).
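A small sketch of the temperature-scaled Softmax (plain NumPy, with the usual max-subtraction for numerical stability):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Higher T flattens the distribution; lower T sharpens it."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

For example, the same logits produce a much more peaked distribution at T = 0.5 than at T = 2.0.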
Although this blog is mainly focused on word sequence decoding, it goes without saying that these inference-time decoding algorithms can be used for any sequence decoding task.
Implementation of the above mentioned approaches can be found here.
Check out my blog repository at https://prakhartechviz.blogspot.com
Further Readings
Feel free to share your thoughts :) | https://towardsdatascience.com/word-sequence-decoding-in-seq2seq-architectures-d102000344ad | ['Prakhar Mishra'] | 2019-12-15 02:41:17.143000+00:00 | ['Machine Learning', 'Deep Learning', 'Neural Networks', 'Naturallanguageprocessing', 'Naturallanguagegeneration'] |
This Is Why You’ll Forever Be an Ignorant Person | You Think You're Smarter Than You Actually Are
On April 19, 1995, one McArthur Wheeler robbed two banks in Pittsburgh. What makes his case unique is that, having seen how lemon juice can be used as invisible ink, he smeared his own face with lemon juice, believing that this would make him invisible to surveillance cameras.
He then carried out the robberies full of confidence, in broad daylight, and he didn’t even bother wearing a mask to hide his face from cameras. Why would he? To him, that would be like superman wearing a bullet-proof vest.
Of course, he was quickly identified and arrested by the police later that day. And to prove just how confident he was in his little magic trick when he was being taken away by the cops he mumbled, "but I wore the juice."
Now, you think of him and laugh at just how ignorant he was and even wonder if he was high on drugs. But how many times have you been so damn sure about something… and still ended up being wrong? Because I for one know that has happened to me a lot. And this is what the Dunning-Kruger effect aims to explain.
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - Bertrand Russell
Named after social psychologists David Dunning and Justin Kruger, the Dunning-Kruger effect is a cognitive bias that shows that people tend to overestimate their own knowledge and abilities in areas where they are incompetent, while those who are more competent do the opposite; thereby creating an ironic scenario where the less we know about something, the more confident we are about that subject, and vice-versa.
To put it another way, the Dunning-Kruger effect shows that when people lack basic knowledge of a subject, they also lack the experience to understand how little they know. As a result, they vastly overestimate their comprehension of the subject.
To justify this claim, Dunning and Kruger conducted numerous studies and the results all pointed to a single truth: we are all Mcarthur Wheeler in one area of our lives or the other. | https://medium.com/live-your-life-on-purpose/this-is-why-youll-forever-be-an-ignorant-person-223f78c1e4fc | ['Raphael Paul'] | 2020-12-18 12:31:31.696000+00:00 | ['Life', 'Self Improvement', 'Psychology', 'Advice', 'Life Lessons'] |
Reverse Engineering The 4 Daily Habits of a Highly Effective Freelance Copywriter —Pt. 3 | Heya!….
Here’s another ass spanking daily habit for ya…
“Come Up With an Idea a Day”
Now, this article will be a little bit shorter than my previous posts as it’s tied together with the 2 habits that I mentioned in the past articles.
Anyways, let’s get down to it.
As a copywriter, you won’t just be writing copy all day, every day for the rest of the week.
Although the bulk of what we do is writing, there's another aspect of our service that enhances our value to a company: our ideas.
We are IDEA MACHINES.
I was talking with my retainer client the other day and she wanted me to brainstorm ideas for her new e-book.
Side note: To give you a little bit of context on the situation, I’m trying to revive her email list as she’s currently having trouble sending out her promotions to her subscribers.
So…
I sat down on my thinking chair.
Did a ton of market research.
And came up with 3 radically new titles
And here’s what I told her…
The result:
Here’s the thing…
Coming up with one simple idea a day has two major benefits:
1. It makes the copy stronger
2. It makes writing the rest of your copy 10x easier
As a copywriter, you must understand that your readers don't want to hear everything you have to say about a certain topic.
What your readers are looking for is a single, useful solution that could make them more successful.
That’s the power of coming up with an idea a day.
It strengthens your copy and helps you narrow in the things that are actually important for your reader.
“Well, what ideas should I come up with?”
It could be anything, really
Like my example above, it could be a cool title for a brand new e-book…
An awesome subject for an email…
What’s important here is that you stay in the habit of asking “what if…” or “what’s so good about this?”… as often as possible.
Now, here’s a little hack to generating ideas…
“Practice coming up with 10 small ideas a day instead of ONE BIG IDEA”
Here’s how I did it when I was starting out:
Check out your Gmail inbox and look for emails with subject lines that grab your attention
Exhibit A:
Subject line: Fancy a blue watch?
2. Then, come up with 10 different angles of your own.
For example:
1. The best watch for the month of July
2. A new trend in fancy watches
3. Looking for the perfect gift?
4. Almost SOLD OUT
5. This is an all-time classic
6. It’s a shame there’s only 5 left
7. Only 5 left in stock
8. Are you sporty and elegant?
9. A simple yet elegant watch for your man
10. You should give him this
3. Then, spend some time looking through your ideas and how you can make them better.
Do this simple exercise each day and you’ll be on your way to churning out money-making hooks, angles, and ideas every time you write a piece of copy for your clients.
Next week, I’m going to show you the counterintuitive approach to getting better at sales copy…this may not be for beginners but it’s definitely something that can instantly cut through the noise and sharpen your copywriting game.
For those of you reading this for the first time…
Part 1: Read a piece of copy a day
Part 2: Write a piece of copy
Like this article? Hit that 👏button and the “Follow” button to show your support. | https://medium.com/@GerrickWayne/reverse-engineering-the-4-daily-habits-of-a-highly-effective-freelance-copywriter-pt-3-4d7812bd2cab | ['Gerrick Wayne'] | 2021-08-19 06:37:15.545000+00:00 | ['Email Marketing', 'Copywriting', 'Direct Response Marketing', 'Ideas', 'Freelance Writing'] |
Primary Care Physicians: The Ultimate Referral Hack | How to Connect Your Patients with Effective Psychotherapy
Authors: Elizabeth Collison, Ph.D. & Chandler Chang, Ph.D.
Photo by National Cancer Institute on Unsplash
The Typical PCP’s Dilemma
You’re thinking, “This patient has so many physical complaints, but how will anything help when they’re facing so much stress?” And then you say in resignation, “How about we try a medication?”
Perhaps your patient says, “I don’t want to take any pills if I don’t have to,” so you decide to wait and see. But months later, here they are, sitting on your exam table with little change.
Or you may suggest CBT — Cognitive Behavioral Therapy, which you know is the gold standard treatment for depression, anxiety, and a growing number of psychophysiological conditions, like chronic pain/fibromyalgia, chronic fatigue, IBS, nonepileptic events, etc.
But how do you know which therapists actually provide evidence-based care or CBT? How do you know if a therapist you recommend will understand the balance between a patient’s experience of physical symptoms or pain and the psychological interventions that may reduce their suffering?
Be The Hero
At Therapy Lab, we hear versions of this dilemma from so many PCPs, and we’re eager to be a part of the solution. CBT treatment fidelity and transparency are our priorities. Your patients will know why you sent them to us because their physical relief and symptom reduction will be top priority as we work on emotional, mental, and functional wellness goals. They are woven together, and we take advantage of symptom reduction in any way possible!
Therapy Lab is an innovative therapy platform offering CBT treatment packages to help clients meet specific goals. We target mental health and specific physical health problem areas such as insomnia and IBS, and we like to work as efficiently as possible. Most of our treatment plans are designed to last 16 or fewer sessions, and often to significant benefit. We measure clients’ progress as we go and apply science-based principles to the therapy process.
What About Patients Who Dismiss an Emotion-Focused Approach?
Now, some patients may feel misunderstood by the suggestion that therapy can help with a physiological complaint. We get it. No one wants to feel unheard. Might we suggest using statements like these?
I hear you completely. Your physical pain/complaints are real and not imagined.
I’d like to recommend CBT to you, and I’m going to continue following you to measure how CBT helps with these physical symptoms.
In addition to the medical approach, adding the psychosocial component is a science-backed way to reduce your suffering.
At Therapy Lab, we avoid stigmatizing terms and have treatment packages designed specifically for people with physical health problems. Our transparent approach helps to quell fears about the unknowns of “therapy.”
And How Do I Get Them In?
Easy. Send patients to our website: www.therapylab.com, where they can easily book a prompt, free consultation call or complete an online form.
Patients will know the anticipated cost and time commitment upfront at Therapy Lab. They’ll be matched with the right therapist and will be able to choose a session time usually within 24 hours.
Who Can Benefit?
Any patients with depression or anxiety symptoms, or patients with a psychophysiological condition that remains unmanaged (despite medication trials or referrals to other specialists), will likely benefit. If cost is a consideration, keep in mind that our treatment plans are designed with an upfront budget, and we provide superbills for out-of-network reimbursement.
We look forward to collaborating with you and your patients in their pursuit of wellness! | https://medium.com/@shan-const-psych/primary-care-physicians-the-ultimate-referral-hack-5b323daccbe8 | ['Shannon Constable'] | 2021-03-25 22:37:10.060000+00:00 | ['Primary Care', 'Doctors', 'Physicians', 'Therapy', 'Referral Marketing'] |
How Sound Design Triggers Emotion | Sound design greatly contributes to the feelings of the audience while watching a movie. Valuable information about the emotional aspects of a film is contained in the sound design, so understanding how it works and what it tells you is a key aspect to grasping what your content is all about.
Vionlabs has developed an Emotional Fingerprint API that instead of using sub-par metadata to try to provide users with better content, measures the emotional aspects of various key components, such as sound, color, and pacing. This enables OTT providers to better understand their content, which in turn ensures that they can make better content recommendations.
Since we know what benefits a deeper understanding of the sound of a movie can give to OTT services, let’s examine how sound, just like color, can be used to alter how a cinematic moment is perceived emotionally by its audience.
The Truth Behind Great Sound Design
Looking at the clip from the episode “The Long Night” in the final season of Game of Thrones (2011–2019) listed above, your mind is instantly drawn to what you can see. We observe the whitewalkers climbing the wall and the people waiting in suspense trying to defend it. We see fire turning the environment into a murky brown color. But if you think about it, the story is actually mostly told through the sound. This happens to be the darkest Game of Thrones episode, sometimes rendering its visual aspects too dark to make out, thus putting greater emphasis on the sound. The clip features sounds of people calling out to each other to defend their wall, the grunts of the whitewalkers, and the sizzling of fire. The music has a distinct ticking in it, alerting us that time is of the essence.
And then, timestamped at 00:20 in the clip, we also hear something called a Shepard tone. An effect used quite heavily by filmmaker Christopher Nolan, this sound creates an auditory illusion: a scale that seems to rise forever without ending. The effect is created by superimposing different tones separated by octaves on top of each other and tinkering with the volume of the frequencies. The result is what one might call the sound of suspense; and in this particular Game of Thrones moment it is key to the emotional buildup we feel as an audience.
The truth behind great sound design is that it helps you truly immerse yourself into the story, so much so that you might not even consider that it is there. Whether it be the tumultuous thunder of weaponry in Saving Private Ryan (Spielberg, 1998), the sound of a lightsaber in Star Wars, or the theme song accompanying Indiana Jones as he goes on his adventures, the sound in a film helps create realism, adds credibility to the story and above all, sucks you into the emotional state of the moment.
Sound Throughout Film History — The Short Version
Throughout film history, sound has always accompanied our motion pictures. Even during the art form’s first trembling steps, and what would follow as the silent era, films were never fully silent. While some of the leading industry innovators like Edison had an interest in sound, the technology wasn’t advanced enough to entertain a larger audience. So in the silent era something else emerged to keep viewers engaged — music. Often played by a live musician hired by the cinema, it was the music that made all the difference. It was there to set the tempo, it created the mood, and told the audience what they should feel at any given moment.
In Japan, where filmmakers took a different approach from its Western counterpart, they even went so far as to hire benshi — men who would stand alongside the screen and narrate what was happening in the film. The benshi gained a lot of popularity among crowds and would make sound effects, take care of all the dialogue, and be the true performer of the film.
When The Jazz Singer (Crosland, 1927) hit theaters in 1927, it surprised audiences to hear the actor Al Jolson’s chatty ad-libs in between his singing. They instantly wanted more; and while most features included a mix of prerecorded audio and the various story-telling techniques of the silent era, just two years later in 1929, 75 percent of movies had turned into talkies.
Since then, storytellers and filmmakers have been doing their utmost to challenge how we hear and feel through sound in movies. In Francis Ford Coppola’s Apocalypse Now (Coppola, 1979) for example, sound designer Walter Murch forced both theaters and Hollywood to adopt a six-speaker surround system by changing the way sound was edited and mixed. Murch created sound that was able to travel around the movie theater, allowing the audience to experience the film through the traumatized war-heavy mind of Captain Willard. It is clear that sound has now become a major component of the filmmaking process.
Sound Design and its Components
The sound design of any movie can be broken down into several different parts. Filmmakers are able to manipulate each sound component to contribute to the emotional feelings created in that moment.
Usually, upon filming, the only sounds recorded would be the dialogue. Other key components, such as music, weather, the sound of objects, or a character’s interaction with them is all added later. The reason why filmmakers do this is because in order for the audience to fully grasp the emotional moment of a scene, the sound is hugely important. Being able to manipulate the sound after filming is therefore major in creating what you would like to convey. Dialogue, sound effects, music, and mixing all come together to tell you the story.
In a film there is diegetic sound — which is sound heard by the characters inside the filmic universe. Examples of this is dialogue, sound effects, or music played on the radio, like in the infamous scene in Reservoir Dogs (Tarantino, 1992) where a cheerful song accompanies Mr. Blonde as he performs an act of violence.
There is also non-diegetic sound — which is sound not heard by the characters — often added to make sure the audience feels a certain way. The biggest example of this is music, like the orchestral track featured in Steven Spielberg’s Jaws (Spielberg, 1975), a tune that quickly ensures that suspense and fear are added to the shark’s appearance on screen.
The Emotional Orchestrator of Film — Music
Ever since the beginning of cinematic history, music has been a major component of creating emotional and suspenseful stories. Music can be considered the emotional orchestrator of film. It suggests to the viewer what they should be feeling. For example the piano tune created by the AI in Her (Jonze, 2013) expresses the bittersweet technological love felt between the characters, whilst the intense violins in the Psycho (Hitchcock, 1960) shower scene convey precisely how horrifying this encounter is to the audience.
By attaching a signature melody to the story, filmmakers are able to use our expectations to quickly alert us as to what we should be feeling. Take a look at Darth Vader for instance. The menacing imperial march sets the tone. After hearing it a few times throughout the story, even someone not looking at the screen would be able to tell when the Imperial Empire is present and strong. Without it, Darth Vader would just be a man breathing heavily in a black suit.
The Deep Emotional Cues of Sound Design
But we also live in a world where nonmusical sounds are connected to deep emotional cues. The sounds of cars honking, birds chirping, or waves crashing against a cliff all induce feelings.
Since most of what you hear on screen is actually added in post-production, it gives the filmmakers the freedom to use sound effects to tell the story emotionally. In Gravity (Cuarón, 2013) the sound designers really explored how being inside a vacuum-like environment would affect the sound. In Eraserhead (Lynch, 1977) dreary, abstract sounds not reflected on screen were used to create a soundscape, a mood, and an atmosphere. In Brokeback Mountain (Lee, 2005), different types of wind were used to signal the isolated characters’ shifting emotional states.
Since sound effects provide a sense of realism to what we see on screen, each sound has to be carefully considered. Footsteps will sound different depending on whether the scene is happy or scary, if the character is stomping or sneaking. Foley artists — professionals who recreate sounds for moving pictures — therefore work hard to introduce different elements to enhance a sound in order to make it believable and just right for the story at hand.
And to go even further, sound designers also frequently use something called “sonic metaphors”, where a different sound is added to the mix to create emotional depth. In Mad Max: Fury Road (Miller, 2015), as the War Rig roars through the desert, whale sounds were added to the mix to reinforce the feeling of this vehicle as a metaphor of a heavy, slow animal. As the truck takes a hit, the whale groans. It is not noticeable unless you know it’s there, but it adds emotional depth to the sound of the vehicle. This is a powerful tool that keeps the audience engaged.
How To Become Cutting-Edge by Using Vionlabs Emotional Fingerprint API To Measure Sound
So in times where more streaming services are launched than ever, how can understanding sound be utilized to help OTT providers stand out, become cutting-edge, and reduce churn?
The problem with today’s content data is that it is not detailed enough. It simply fails to detect the nuances necessary in order to give good content recommendations. While it makes sense that the metadata for I, Tonya (Gillespie, 2017), a black comedy about figure skater Tonya Harding and the event that ended her career, and Blades of Glory (Gordon, 2007), the sports buddy comedy where Will Ferrell and Jon Heder decide to become an all-male figure skating duo, would be similar, the two films might not attract the same audiences. But, when using metadata, they would most likely be linked to each other.
Vionlabs’ Emotional Fingerprint API goes further than this. It measures and examines thousands of factors in a film including colors, pace, audio, object recognition, and much more. These factors are distilled by AI into a fingerprint that encodes the emotional structure of the content. Such fingerprints will allow the industry to make better informed decisions, and enable the consumers to come into contact with content they would actually want to consume.
— — — — — — —
Also published at:
https://www.vionlabs.com/post/how-sound-design-triggers-emotion
Read some of our other blog posts:
https://www.vionlabs.com/post/why-apple-should-buy-quibi
https://www.vionlabs.com/post/how-movies-use-color-to-create-emotion
https://www.vionlabs.com/post/how-the-movie-midsommar-is-creepy-despite-not-being-dark | https://medium.com/vionlabs-tech-blog/how-sound-design-triggers-emotion-bf052d3da2a9 | ['Josephine Holmström'] | 2020-10-19 08:54:41.381000+00:00 | ['Movies', 'Sound Design', 'Machine Learning', 'Streaming', 'Filmmaking'] |
Automating Rust and NodeJS deployment on AWS Lambda using Lambda Layers | Part of our stack at Clevy is written in Rust, and we use Neon to ease the bindings with other parts of the stack written in NodeJS.
Recently, we needed to deploy this stack on AWS Lambda, which runs a very specific NodeJS runtime, not cross-compatible with our existing stack.
Since we struggled a little bit with getting Lambda/Rust/Node to play nicely together, I figured I would post a short how-to of what we found worked well for us. You can of course take this as a base and change it to your liking!
1. The Setup
The first thing you need to know is that AWS Lambda runs on either Amazon Linux 1 or 2, depending on the version of NodeJS that you plan to use. So your build pipeline needs to reflect that. Luckily, Amazon provides Docker images for both: amazonlinux:1 or amazonlinux:2 . In our case, we want to use node v10.x, so:
FROM amazonlinux:2
Then of course, you need Rust and NodeJS to be installed onto the amazonlinux image.
# Install rust
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y --default-toolchain stable && \
PATH="/root/.cargo/bin:$PATH" rustup install stable
ENV PATH $PATH:/root/.cargo/bin

# Install node
RUN curl -sL https://rpm.nodesource.com/setup_10.x | bash - && \
yum install -y nodejs && yum clean all
Then, Neon requires a few dependencies, that you can customize based on what you actually require for your own needs. In our case we needed to add quite a few dependencies over what is stated in the docs, especially all the *-devel dependencies which were definitely not straightforward.
# Install dependencies
RUN yum install -y make gcc gcc-c++ libgcc openssl-devel readline-devel sqlite-devel && yum clean all
Finally, install neon-cli and you are all set.
RUN npm i -g neon-cli
Save this base image somewhere and use it for your AWS Lambda-compatible builds later!
docker build -t lambdabuildbase .
2. The Build
There are several ways to use this image, so let me share my script, which you can customize to your liking. The goal with Neon is to create a NodeJS addon that you can then require elsewhere like any other node module, but precompiled for the environment it runs on.
Let’s put our sources into a src/ folder, and inside, consider the following package.json :
{
  "name": "@clevy/lambda-build-demo",
  "version": "1.0.0",
  "description": "AWS Lambda demo",
  "main": "lib/index.js",
  "scripts": {
    "build": "neon build --release && mv native/index.node lib/addon.node"
  }
}
The reason why we mv native/index.node lib/addon.node is that we don’t need the whole native directory after the build. It is quite huge (over 700MB in our case), compared to what we really need (only the compiled addon, which is only a few MB). But of course you can leave it as is if you are happy with your final build size, simply note that AWS Lambda functions (including all the layers together) can never exceed 250MB unzipped.
The main lib/index.js contains:
module.exports = require("./addon");
And of course, native/ contains all my rust code.
Let’s create a second Dockerfile that looks like the following:
FROM lambdabuildbase

WORKDIR /dist

COPY src .

RUN npm install && npm run build

# remove now useless native/ directory
RUN rm -rf native
To extract built files from the Dockerfile, one easy way is the following bash script:
#!/bin/bash

image=lambdabuildpkg
docker build -t $image .

id=$(docker create $image)
docker cp $id:dist - | tar x
docker rm -v $id
This will build the image (which will in turn build the node module with the FFI bindings), copy the resulting built node module from inside the docker image into the dist/ folder on your host machine, then cleanup.
3. The Deployment
Lambda requires node layers to be prepared in a very specific way. First, it needs to be inside a directory called exactly nodejs . Then, if you are preparing a layer that contains a node module, it needs to be inside the usual node_modules/namespace/package_name tree, so in our case nodejs/node_modules/@clevy/lambda-build-demo .
path=node_modules/@clevy/lambda-build-demo

# remove any existing data
rm -rf nodejs

mkdir -p nodejs/$path
mv dist nodejs/$path
Then, Lambda tells us they need the layers to be zipped before we upload them. Fine:
# clean up any previous archive, then zip again
rm -f nodejs.zip
zip -r nodejs.zip nodejs -q
Then, proceed to deploy onto AWS Lambda (using the AWS CLI, so you can provide your credentials in whatever way you like). If the package is too big to upload directly, you can send it to S3 first, then publish the Lambda layer from the S3 bucket. The commands are quite simple, and you can customize them easily.
aws s3 cp nodejs.zip s3://my-bucket/nodejs.zip
aws lambda publish-layer-version \
--layer-name "my-lambda-layer" \
--content "S3Bucket=my-bucket,S3Key=nodejs.zip" \
--compatible-runtimes "nodejs10.x"
Voilà, you have your Rust-powered NodeJS-compatible AWS Lambda Layer ready to use in your Lambda functions!
4. The Usage
This is the easiest step. Create a Lambda function in whichever way you want, select nodejs10.x as the runtime, and attach the newly published layer by selecting “Layers” just below the Lambda function:
Then click on the Lambda function again and, inside your code, simply import your module as usual.
Notice that there are no node_modules in this Lambda function? That’s because the node_modules are inherited from the underlying layer.
Of course you can stack layers (up to 5) and you can also import your own node modules very easily, but using layers is a very simple way to use weird runtimes on AWS Lambda as well as to share common code.
Hope this tutorial helps you as much as it would have helped me to find it in the first place! | https://medium.com/clevyio/automating-rust-and-nodejs-deployment-on-aws-lambda-using-layers-2d47d129a6bc | ['Francois Falala-Sechet'] | 2019-06-06 11:28:57.140000+00:00 | ['AWS', 'Nodejs', 'Rust', 'AWS Lambda', 'Aws Lambda Layer'] |
Shortform: A Helpful Update Before You Jump the Bandwagon | Here is how it looks on desktop
The secret sauce:
it will be truncated in a way that encourages readers to click through to its story page.
Now readers have to click to read your shortform story.
But look at what I discovered: if I use a GIF on my shortform, it becomes visually appealing, and it is the final hook that will make readers click through to its story page.
Building Flexible Credit Decisioning for an Expanded Credit Box | At LendKey, we’re building a platform that improves lives with lending made simple. Our innovative cloud technology creates the most transparent online lending platform for consumers shopping for low-cost borrowing options from community banks and credit unions.
Credit decisioning– deciding whether or not to lend by predicting how likely applicants are to repay a loan– is a component of our platform that is critical to our lending partners.
The Problem
Enabling our lending partners to make credit decisions quickly, accurately, and in a way that can be explained is critical to our success. Our legacy credit decisioning system met those criteria, but it had three flaws:
1. It only supported a single ruleset-style model for determining whether a loan application exceeded our lending partners’ limits for credit risk. Every rule was considered in isolation: if a loan application couldn’t meet every single criterion, it was declined.
2. Its rules for determining credit risk were implemented in code. While it was possible for our lending partners to enable and disable particular rules and customize the threshold for each rule, it was challenging and time-consuming to add new rules.
3. It was part of a monolithic application. This made it impossible for us to offer it à la carte, as well as making it difficult for us to improve and maintain.
Seeing the Whole Picture
We began this initiative with the simple hypothesis that a more robust credit decisioning engine could give our lending partners a better toolset for approving loans.
We strongly suspected that applicants who would be able to successfully repay loans were being denied due to the simplistic “one strike and you’re out” nature of our platform’s decisioning engine and the underwriting utilized by our lending partners. For example, if our lending partners’ credit risk models required an applicant to have a gross monthly income of at least $5,000 and she had an income of $4,999, she would be declined– regardless of the other characteristics of her application.
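This legacy, single-ruleset behavior can be sketched as follows. The thresholds are purely illustrative (only the $5,000 income figure comes from the example above), not any lender’s actual criteria.

```javascript
// "One strike and you're out" ruleset decisioning: each rule is evaluated
// in isolation, and a single failed rule declines the application.
// Thresholds are illustrative, not real underwriting criteria.
const rules = [
  (a) => a.monthlyIncome >= 5000,
  (a) => a.ficoScore >= 680,
  (a) => a.latePayments === 0,
];

const approve = (applicant) => rules.every((rule) => rule(applicant));

// An otherwise strong applicant misses one threshold by $1 and is declined.
approve({ monthlyIncome: 4999, ficoScore: 800, latePayments: 0 }); // → false
```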
A more sophisticated model would consider the applicants holistically. If the applicant’s income was slightly low, but the total balance on her other accounts was small, her FICO score was high, she never had a late payment, and she was requesting a small amount, she might still be eligible for a loan.
Careful analysis of historical lending data by our data science team confirmed that credit risk models that considered applicants holistically would allow our lending partners to approve significantly more applications without increasing their risk.
Predictive Models Everywhere
Predictive models are simply algorithms for forecasting an outcome. They are developed using statistical techniques, like regression, to determine the relationship between dependent variables (like risk of loan default) and one or more independent variables (like income and the number of accounts past due.) In recent years, machine learning techniques have greatly increased the speed with which new models can be developed.
Predictive models can be implemented in many different ways depending on the application. For example, when making a credit decision, it is imperative that a lender be able to explain exactly why it made that decision. A “scorecard” model will allow our lending partners to consider loan applicants holistically and also to explain their lending decisions. In this type of model, applicants are assigned points for each of the characteristics considered by the model. The closer an applicant comes to the ideal value for that characteristic, the more points he or she is awarded. The overall score is the sum of these partial scores and represents how likely the applicant is to repay the loan. To determine which characteristics were most significant to the credit decision, we simply compare the points the applicant received for each characteristic to the maximum points possible for that characteristic and sort the characteristics based on that difference.
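The scorecard logic described above can be sketched like this. All point values and the approval cutoff are invented for illustration; in a real scorecard, the weights come from statistical fitting rather than hand-picked numbers.

```javascript
// Scorecard sketch: each characteristic contributes partial points, the
// applicant is judged on the total, and adverse reasons are the
// characteristics furthest below their maximum possible points.
// Point values and the cutoff are invented for illustration only.
const scorecard = {
  ficoScore:     { max: 40, points: (v) => (v >= 740 ? 40 : v >= 680 ? 25 : 10) },
  monthlyIncome: { max: 30, points: (v) => (v >= 5000 ? 30 : v >= 4000 ? 20 : 5) },
  latePayments:  { max: 30, points: (v) => (v === 0 ? 30 : v <= 2 ? 15 : 0) },
};

function score(applicant) {
  const partials = Object.entries(scorecard).map(([name, { max, points }]) => {
    const earned = points(applicant[name]);
    return { name, earned, shortfall: max - earned };
  });
  const total = partials.reduce((sum, p) => sum + p.earned, 0);
  // Most significant adverse factors first: largest gap to the maximum.
  const reasons = [...partials]
    .sort((a, b) => b.shortfall - a.shortfall)
    .map((p) => p.name);
  return { total, approved: total >= 60, reasons };
}

// The $4,999 applicant from earlier is now approved: her other
// characteristics compensate for the slightly low income.
score({ ficoScore: 700, monthlyIncome: 4999, latePayments: 0 });
// → { total: 75, approved: true,
//     reasons: ["ficoScore", "monthlyIncome", "latePayments"] }
```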
Our new holistic credit risk models are one application for predictive models, but we quickly thought of more. For example, we might:
Monitor the health of the loan portfolio we service on an on-going basis;
Flag applications that are likely to be fraudulent;
Automate time-consuming manual review processes;
Score the quality of the data provided by the applicant and third-parties to cut back on the situations when we require applicants to upload documents; and
Make more relevant product suggestions.
We knew we had to build a general-purpose tool for evaluating predictive models at scale.
Building on a Strong Foundation
Having software engineers re-implement the predictive models created by our data scientists and lending partners would be time-consuming and costly. Wouldn’t it be better if we could load the models that they developed directly into our lending platform? Enter PMML, Predictive Model Markup Language, a well-established industry standard format for describing predictive models. PMML files can be generated by the tools that data scientists use, like SAS, R, and Python. It is used extensively within analytics-heavy companies like Netflix and Airbnb. JPMML-Evaluator is a popular open source tool for evaluating PMML models.
PMML supports many different types of predictive models, including scorecards, rulesets, regression, clustering, and even neural networks.
PMML and the tools that produce it are perfect for data scientists, but they have a steep learning curve. It would be great to also allow non-technical people to create simple rulesets for credit decisioning. LendKey had already adopted the Camunda business process modelling platform, and it seemed like the perfect fit for this use case because it provides a simple user interface for describing business rules.
We had already identified two different engines for making credit decisions, and we knew there might be more. What if one of our lending partners wanted us to integrate our lending platform with a proprietary decisioning system? What if we wanted to add support for the complicated but powerful successor to PMML, PFA (Portable Framework for Analytics)? We definitely didn’t want to maintain support for all these predictive model evaluators in all of the exciting new apps we were planning. We needed a layer of abstraction around predictive models!
Introducing Insights
Rather than updating all of our apps to talk to multiple predictive model engines, we wanted to create a single common “vocabulary” for interacting with predictive models. We designed the Insights service as a well-defined and standards-compliant API for interacting with predictive models. It translates our apps’ requests into specific calls to each of the supported predictive model engines and then translates their responses back into a common format.
When our predictive models are used to do things like make credit decisions, we need to be able to “show our work.” We need to be able to demonstrate to our lending partners that we are correctly enforcing their underwriting criteria. Our data scientists also need to get feedback on our predictive models so that we can improve them over time. For these reasons, the Insights service stores the details of every predictive model transaction, including all of the arguments and results, in an encrypted data store. Our loan origination systems can then associate the unique identifiers for each Insights transaction with the loan event that spawned it and display the transaction’s details to anyone with the appropriate permissions.
We need a general-purpose, data-driven tool to evaluate predictive models, so Insights is, by design, completely ignorant of our business rules and concepts. Insights has no code specific to the concept of a “loan,” an “applicant,” or a “credit score.” It is the responsibility of the applications that use Insights to inquire about which arguments a particular predictive model requires and determine the values for those arguments before asking Insights to evaluate that predictive model.
We get our predictive models from a wide variety of sources, both inside and outside of LendKey. Different predictive models may refer to the same concept, like “primary applicant’s debt-to-income ratio,” using different names. When we set up a new model in Insights, we “map” the arguments it expects and results it generates to fields in Insights’ data dictionary. In this way, consumers of the Insights API are only required to support the fields in the data dictionary, rather than being required to support every field in every predictive model.
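For instance, the mapping for the soccer model in the sample transaction below might be sketched like this. The translation layer itself is hypothetical; the field names are taken from that sample transaction.

```javascript
// Hypothetical field mapping: translate between a model's native argument
// names and the shared data-dictionary names used by the Insights API.
const fieldMap = {
  temp_in_degrees_f: "temperature",
  lightning_in_area: "lightningReported",
  heavy_rain_in_last_24h: "heavyRainRecently",
};

// Convert data-dictionary arguments (what API consumers send) into the
// model-native names the underlying engine expects.
function toModelArguments(dictionaryArgs) {
  const inverse = Object.fromEntries(
    Object.entries(fieldMap).map(([modelName, dictName]) => [dictName, modelName])
  );
  return Object.fromEntries(
    Object.entries(dictionaryArgs).map(([name, value]) => [inverse[name], value])
  );
}
```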
A Sample Transaction
In the following example, our app would like to use Insights to determine the likelihood that my daughter’s soccer game will be cancelled. Our app will first make a request to the Insights service to find out which arguments our model requires. It will then determine the values for all of these arguments and then make a second request to evaluate the model. Finally, it will take some action based on the results.
Sequence diagram showing a typical transaction with the Insights API
We wanted the Insights API to feel instantly familiar to the developers who used it, so we chose to build on industry standards like JSON:API and provide OpenAPI documentation.
First, we ask Insights for the arguments required by our soccer cancellation model, which is identified by a UUID.
GET /insights/v1/predictive_models/00000000–0000–0000–0000–888888888888/fields
{
"data": [
{
"id": "473bb0a3-2e2a-45cb-a453-d9cf97012414",
"type": "predictiveModelField",
"attributes": {
"name": "temp_in_degrees_f",
"supportedFieldName": "temperature",
"fieldType": "argument",
"dataType": "number"
}
},
{
"id": "15f9eea2-c38d-4dca-9376-857a71c06e3c",
"type": "predictiveModelField",
"attributes": {
"name": "lightning_in_area",
"supportedFieldName": "lightningReported",
"fieldType": "argument",
"dataType": "boolean"
}
},
{
"id": "dd5fefc9-dcf8-40a5-b78e-dc3891d044c8",
"type": "predictiveModelField",
"attributes": {
"name": "heavy_rain_in_last_24h",
"supportedFieldName": "heavyRainRecently",
"fieldType": "argument",
"dataType": "boolean"
}
}
]
}
We can see that this model requires three arguments: temperature, lightningReported, and heavyRainRecently. Our app would look up those values and use them to create a request to evaluate the model.
POST /insights/v1/insights
{
"data": {
"type": "insight",
"attributes": {
"modelId": "00000000-0000-0000-0000-888888888888",
"arguments": {
"temperature": 67,
"lightningReported": false,
"heavyRainRecently": true
}
}
}
}
We tell Insights the unique identifier of the predictive model we’d like to use and the values for each of the fields required by that model. I don’t have to know any of the details about which engine will actually be evaluating this model– Insights already knows.
Insights will translate our request into the format required by the appropriate engine and use that engine to evaluate the specified predictive model. The model will make a prediction about whether or not we will be playing soccer today. Perhaps it will be based on a correlation between recent heavy rain and a muddy field.
Insights evaluates the model, securely stores all of the arguments and results, and responds.
{
"data": {
"id": "6e4866f9-7e99-40d0-afe3-1e925fe6e083",
"type": "insight",
"attributes": {
"created": "2019-05-14T18:49:05Z",
"arguments": {
"temperature": 45,
"lightningReported": false,
"heavyRainRecently": true
},
"results": {
"willPlay": "probably not",
"reason": "muddy field"
}
},
"relationships": {
"predictiveModel": {
"data": {
"id": "00000000-0000-0000-0000-888888888888",
"type": "predictiveModel"
}
}
}
}
}
We probably won’t be playing soccer today.
The Insights API also provides endpoints for retrieving the details of previous transactions– which is essential for auditing– and managing predictive models, including mapping their fields to fields in Insights’ data dictionary.
What’s Next
LendKey will continue to work with our lending partners to develop more accurate predictive models. The models that our lending partners use for credit decisioning must always be able to “show their work”– explain why they made a particular decision, including the application characteristics that were considered. This disqualifies certain types of machine learning, but ensures that all applicants are treated fairly.
Insights currently offers a RESTful HTTP API. This was simple to implement and perfect for the low-volume nature of credit decisioning. We are now exploring ways to improve its performance. These might include changes to our database encryption library and offering a low latency streaming interface built on top of our inter-application messaging bus, Kafka.
With Insights, we’ve built a strong foundation to leverage predictive models throughout our lending platform. We’re excited about being able to offer loans to people who would have otherwise been declined, and we can’t wait to use this new toolset to streamline the loan application process. | https://medium.com/lendgineering/building-flexible-credit-decisioning-for-an-expanded-credit-box-54dc626914b4 | ['John Moose'] | 2019-06-12 15:05:29.589000+00:00 | ['Lending', 'Machine Learning', 'Technology', 'Predictive Modeling', 'Data Science'] |
PUBG Mobile India: New features, launch date, expansion plan updates you must know! | PUBG Mobile is undoubtedly one of the most popular games ever in India. The hugely popular game was banned in the country along with several other Chinese apps. It seems the game became even more famous after the ban was imposed, considering the tremendous interest online regarding its comeback, especially its launch date and details of any changes to the game. PUBG Corporation announced its comeback in a whole new avatar, tailor-made for the Indian audience, during Diwali. Since then, there have been several updates about the battle royale game, but none of them has confirmed its release date.
It gave millions of mobile gamers in the country an opportunity to rejoice when PUBG Corporation announced their return to India through social media posts during the Diwali festive season. Soon…READ MORE | https://medium.com/@ezzguide/pubg-mobile-india-new-features-launch-date-expansion-plan-updates-you-must-know-594c3cb928f1 | ['Ezz Guide'] | 2020-12-26 15:16:35.792000+00:00 | ['Game Development', 'Pubg', 'Games', 'Pubg India'] |