id (int64, 10.8M–42.1M) | by (large_string, lengths 2–15) | time (timestamp[us]) | title (large_string, lengths 1–95, ⌀) | text (large_string, 0 classes) | url (large_string, lengths 12–917) | score (int64, 1–5.77k) | descendants (int64, 0–2.51k, ⌀) | kids (large list, lengths 1–472, ⌀) | deleted (large list) | dead (bool, 1 class) | scraping_error (large_string, 1 class) | scraped_title (large_string, lengths 1–59.3k) | scraped_published_at (large_string, lengths 4–66, ⌀) | scraped_byline (large_string, lengths 1–757, ⌀) | scraped_body (large_string, lengths 600–50k) | scraped_at (timestamp[us]) | scraped_language (large_string, 50 classes) | split (large_string, 1 class) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,820,498 | trengrj | 2016-01-01T00:16:04 | Debian creator Ian Murdock dies at 42 | null | http://www.zdnet.com/article/debian-linux-founder-ian-murdock-dies-at-42-cause-unknown/ | 1 | null | null | null | true | no_error | Debian Linux founder Ian Murdock dies at 42, cause unknown | null | Written by | UPDATED: I'd known Ian Murdock, founder of Debian Linux and most recently a senior Docker staffer, since 1996. He died this week much too young, 42, in unclear circumstances. Ian Murdock backed away from saying he would commit suicide in later tweets, but he continued to be seriously troubled by his experiences and died later that night. No details regarding the cause of his death have been disclosed. In a blog posting, Docker merely stated that: "It is with great sadness that we inform you that Ian Murdock passed away on Monday night. This is a tragic loss for his family, for the Docker community, and the broader open source world; we all mourn his passing."The San Francisco Police Department said they had nothing to say about Murdock's death at this time. A copy of what is reputed to be his arrest record is all but blank.Sources close to the police department said that officers were called in to responded to reports of a man, Ian Murdock, trying to break into a home at the corner of Steiner and Union St at 11.30pm on Saturday, December 26. Murdock was reportedly drunk and resisted arrest. He was given a ticket for two counts of assault and one for obstruction of an officer. An EMT treated an abrasion on his forehead at the site, and he was taken to a hospital.At 2:40 AM early Sunday morning, December 27, he was arrested after banging on the door of a neighbor in the same block. It is not clear if he was knocking on the same door he had attempted to enter earlier. A medic treated him there for un-described injuries. Murdock was then arrested and taken to the San Francisco county jail. On Sunday afternoon, Murdock was bailed out with a $25,000 bond.On Monday afternoon, December 28, the next day, Murdock started sending increasingly erratic tweets from his Twitter account. The most worrying of all read: "i'm committing suicide tonight.. do not intervene as i have many stories to tell and do not want them to die with me"At first people assumed that his Twitter account had been hacked. Having known Murdock and his subsequent death, I believe that he was the author of these tweets.His Twitter account has since been deleted, but copies of the tweets remain. He wrote that: "the police here beat me up for knowing [probably an auto-correct for "knocking"] on my neighbor's door.. they sent me to the hospital."I have been unable to find any San Francisco area hospital with a record of his admission. Murdock wrote that he had been assaulted by the police, had his clothes ripped off, and was told, "We're the police, we can do whatever the fuck we want." He also wrote: "they beat the shit out of me twice, then charged me $25,000 to get out of jail for battery against THEM."Murdock also vented his anger at the police."(1/2) The rest of my life will be devoted to fighting against police abuse.. I'm white, I made $1.4 million last year, (2/2) They are uneducated, bitter, and and only interested in power for its own sake. Contact me [email protected] if you can help. -ian"After leaving the courtroom, presumably a magistrate court, Murdock tweeted that he had been followed home by the police and assaulted again. He continued: "I'm not committing suicide today. 
I'll write this all up first, so the police brutality ENDEMIC in this so call free country will be known." He added, "Maybe my suicide at this, you now, a successful business man, not a N****R, will finally bring some attention to this very serious issue."His last tweet stated: "I am a white male, make a lot money, pay a lot of money in taxes, and yet their abuse is equally doned out. DO NOT CROSS THEM!?"He appears to have died that night, Monday, December 28. At the time of this writing, the cause of death still remains unknown.His death is a great loss to the open-source world. He created Debian, one of the first Linux distributions and still a major distro; he also served as an open-source leader at Sun; as CTO for the Linux Foundation, and as a Docker executive. He will be missed.This story has been updated with details about Murdock's arrest.Related Stories:Not a typo:Microsoft is offering a Linux certificationDebian GNU/Linux now supported on Microsoft's AzureWhat's what in Debian Jessie | 2024-11-08T20:50:45 | en | train |
10,820,620 | BuckRogers | 2016-01-01T00:52:20 | Where are we in the Python 3 transition? | null | http://www.snarky.ca/the-stages-of-the-python-3-transition | 4 | 0 | null | null | null | no_error | Where are we in the Python 3 transition? | 2015-12-31T04:35:00.000Z | Brett Cannon |
Dec 30, 2015
The Kübler-Ross model outlines the stages that one goes through in dealing with death:
Denial
Anger
Bargaining
Depression
Acceptance
This is sometimes referred to as the five stages of grief.
Some have jokingly called them the five stages of software development. I think it actually matches the Python community's transition to Python 3 rather well, both what has occurred and where we currently are (summary: the community is at least in stage 4 with some lucky to already be at the end in stage 5).
Denial
When Python 3 first came out and we said Python 2.7 was going to be the last release of Python 2, I think some people didn't entirely believe us. Others believed that Python 3 didn't offer enough to bother switching to it from Python 2, and so they ignored Python 3's existence. Basically the Python development team and people willing to trust that Python 3 wasn't some crazy experiment that we were going to abandon, ported their code to Python 3 while everyone else waited.
Anger
When it became obvious that the Python development team was serious about Python 3, some people got really upset. There were accusations of us not truly caring about the community and ignoring that the transition was hurting the community irreparably. This was when whispers of forking Python 2 to produce a Python 2.8 release came about, although that obviously never occurred.
Bargaining
Once people realized that being mad about Python 3 wasn't going to solve anything, the bargaining began. People came to the Python development team asking for features to be added to Python 3 to make transitioning easier, such as bringing back the u string prefix. People also requested exceptions to Python 2's "no new features" policy so that Python 2 could stay a feasible version of Python for longer while people transitioned (this all landed in Python 2.7.9). We also extended the maintenance timeline of Python 2.7 from 5 years to 10 years, giving people until 2020 to transition before they will need to pay for Python 2 support (as compared to the free support that the Python development team has provided).
Depression
7 years into the life of Python 3, it seems a decent number of people have reached the point of depression about the transition. With Python 2.7 not about to be pulled out from underneath them, people don't feel abandoned by the Python development team. Python 3 also has enough new features that are simply not accessible from Python 2 that people want to switch. And with porting Python 2 code to run on Python 2/3 simultaneously heavily automated and doable on a per-file basis, people no longer seem to be averse to porting their code like they once were (although it admittedly still takes some effort).
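As a minimal, illustrative sketch of that per-file approach (an addition here, not something from Cannon's post): a module meant to run unchanged on Python 2.7 and Python 3 typically begins with the __future__ imports that porting tools in this space (futurize and modernize, for example) lean on. The module below is invented for the example and assumes only the standard library.

```python
# -*- coding: utf-8 -*-
"""A toy module written to run unchanged on Python 2.7 and Python 3."""
from __future__ import absolute_import, division, print_function, unicode_literals

import sys

PY2 = sys.version_info[0] == 2  # handy flag for the rare version-specific branch


def greet(name):
    # With unicode_literals, this is a text (unicode) string on both versions.
    return "Hello, {}!".format(name)


def half(n):
    # With division imported, 1 / 2 == 0.5 on Python 2 as well as Python 3.
    return n / 2


if __name__ == "__main__":
    print(greet("world"), half(1))
```

Files ported this way can land one at a time, which is what makes the per-file strategy practical for a team.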
Unfortunately people are running up against the classic problem of lacking buy-in from management. I regularly hear from people that they would switch if they could, but their manager(s) don't see any reason to switch and so they can't (or that they would do per-file porting, but they don't think they can convince their teammates to maintain the porting work). This can be especially frustrating if you use Python 3 in personal projects but are stuck on Python 2 at work. Hopefully Python 3 will continue to offer new features that will eventually entice reluctant managers to switch. Otherwise financial arguments might be necessary in the form of pointing out that porting to Python 3 is a one-time cost while staying on Python 2 past 2020 will be a perpetual cost for support to some enterprise provider of Python and will cost more in the long-term (e.g., paying for RHEL so that someone supports your Python 2 install past 2020). Have hope, though, that you can get buy-in from management for porting to Python 3 since others have and thus reached the "acceptance" stage.
Acceptance
While some people feel stuck in Python 2 at work and are "depressed" over it, others have reached the point of having transitioned their projects and accepted Python 3, both at work and in personal projects. Various numbers I have seen this year suggest about 20% of the scientific Python community and 20% of the Python web community have reached this point (I have yet to see reliable numbers for the Python community as a whole; PyPI is not reliable enough for various reasons). I consistently hear from people using Python 3 that they are quite happy; I have yet to hear from someone who has used Python 3 that they think it is a worse language than Python 2 (people are typically unhappy with the transition process and not Python 3 itself).
With five years left until people will need to pay for Python 2 support, I'm glad that the community seems to have reached either the "depression" or "acceptance" stages and has clearly moved beyond the "bargaining" stage. Hopefully in the next couple of years, managers across the world will realize that switching to Python 3 is worth it and not as costly as they think it is compared to having to actually pay for Python 2 support and thus more people will get to move to the "acceptance" stage.
| 2024-11-08T12:09:11 | en | train |
10,820,781 | jonbaer | 2016-01-01T01:54:49 | Algorithms of the Mind – What Machine Learning Teaches Us About Ourselves | null | https://medium.com/deep-learning-101/algorithms-of-the-mind-10eb13f61fc4#.hzuheczet | 3 | 0 | null | null | null | no_error | Algorithms of the Mind - Deep Learning 101 - Medium | 2015-05-22T09:27:31.481Z | Christopher Nguyen | What Machine Learning Teaches Us About Ourselves“Science often follows technology, because inventions give us new ways to think about the world and new phenomena in need of explanation.”Or so Aram Harrow, an MIT physics professor, counter-intuitively argues in “Why now is the right time to study quantum computing”.He suggests that the scientific idea of entropy could not really be conceived until steam engine technology necessitated understanding of thermodynamics. Quantum computing similarly arose from attempts to simulate quantum mechanics on ordinary computers.So what does all this have to do with machine learning?Much like steam engines, machine learning is a technology intended to solve specific classes of problems. Yet results from the field are indicating intriguing—possibly profound—scientific clues about how our own brains might operate, perceive, and learn. The technology of machine learning is giving us new ways to think about the science of human thought … and imagination.Not Computer Vision, But Computer ImaginationFive years ago, deep learning pioneer Geoff Hinton (who currently splits his time between the University of Toronto and Google) published the following demo.Hinton had trained a five-layer neural network to recognize handwritten digits when given their bitmapped images. It was a form of computer vision, one that made handwriting machine-readable.But unlike previous works on the same topic, where the main objective is simply to recognize digits, Hinton’s network could also run in reverse. That is, given the concept of a digit, it can regenerate images corresponding to that very concept.We are seeing, quite literally, a machine imagining an image of the concept of “8”.The magic is encoded in the layers between inputs and outputs. These layers act as a kind of associative memory, mapping back-and-forth from image and concept, from concept to image, all in one neural network.“Is this how human imagination might work?But beyond the simplistic, brain-inspired machine vision technology here, the broader scientific question is whether this is how human imagination — visualization — works. If so, there’s a huge a-ha moment here.After all, isn’t this something our brains do quite naturally? When we see the digit 4, we think of the concept “4”. Conversely, when someone says “8”, we can conjure up in our minds’ eye an image of the digit 8.Is it all a kind of “running backwards” by the brain from concept to images (or sound, smell, feel, etc.) through the information encoded in the layers? Aren’t we watching this network create new pictures — and perhaps in a more advanced version, even new internal connections — as it does so?On Concepts and IntuitionsIf visual recognition and imagination are indeed just back-and-forth mapping between images and concepts, what’s happening between those layers? Do deep neural networks have some insight or analogies to offer us here?Let’s first go back 234 years, to Immanuel Kant’s Critique of Pure Reason, in which he argues that “Intuition is nothing but the representation of phenomena”.Kant railed against the idea that human knowledge could be explained purely as empirical and rational thought. 
It is necessary, he argued, to consider intuitions. In his definitions, “intuitions” are representations left in a person’s mind by sensory perceptions, where as “concepts” are descriptions of empirical objects or sensory data. Together, these make up human knowledge.Fast forwarding two centuries later, Berkeley CS professor Alyosha Efros, who specializes in Visual Understanding, pointed out that “there are many more things in our visual world than we have words to describe them with”. Using word labels to train models, Efros argues, exposes our techniques to a language bottleneck. There are many more un-namable intuitions than we have words for.There is an intriguing mapping between ML Labels and human Concepts, and between ML Encodings and human Intuitions.In training deep networks, such as the seminal “cat-recognition” work led by Quoc Le at Google/Stanford, we’re discovering that the activations in successive layers appear to go from lower to higher conceptual levels. An image recognition network encodes bitmaps at the lowest layer, then apparent corners and edges at the next layer, common shapes at the next, and so on. These intermediate layers don’t necessarily have any activations corresponding to explicit high-level concepts, like “cat” or “dog”, yet they do encode a distributed representation of the sensory inputs. Only the final, output layer has such a mapping to human-defined labels, because they are constrained to match those labels.“Is this Intuition staring at us in the face?Therefore, the above encodings and labels seem to correspond to exactly what Kant referred to as “intuitions” and “concepts”.In yet another example of machine learning technology revealing insights about human thought, the network diagram above makes you wonder whether this is how the architecture of Intuition — albeit vastly simplified — is being expressed.The Sapir-Whorf ControversyIf — as Efros has pointed out — there are a lot more conceptual patterns than words can describe, then do words constrain our thoughts? This question is at the heart of the Sapir-Whorf or Linguistic Relativity Hypothesis, and the debate about whether language completely determines the boundaries of our cognition, or whether we are unconstrained to conceptualize anything — regardless of the languages we speak.In its strongest form, the hypothesis posits that the structure and lexicon of languages constrain how one perceives and conceptualizes the world.Can you pick the odd one out? The Himba — who have distinct words for the two shades of green — can pick it out instantly. Credit: Mark Frauenfelder, How Language Affects Color Perception, and Randy MacDonald for verifying the RGB’s.One of the most striking effects of this is demonstrated in the color test shown here. When asked to pick out the one square with a shade of green that’s distinct from all the others, the Himba people of northern Namibia — who have distinct words for the two shades of green — can find it almost instantly.The rest of us, however, have a much harder time doing so.The theory is that — once we have words to distinguish one shade from another, our brains will train itself to discriminate between the shades, so the difference would become more and more “obvious” over time. In seeing with our brain, not with our eyes, language drives perception.“We see with our brains, not with our eyes.With machine learning, we also observe something similar. In supervised learning, we train our models to best match images (or text, audio, etc.) 
against provided labels or categories. By definition, these models are trained to discriminate much more effectively between categories that have provided labels, than between other possible categories for which we have not provided labels. When viewed from the perspective of supervised machine learning, this outcome is not at all surprising. So perhaps we shouldn’t be too surprised by the results of the color experiment above, either. Language does indeed influence our perception of the world, in the same way that labels in supervised machine learning influence the model’s ability to discriminate among categories.And yet, we also know that labels are not strictly required to discriminate between cues. In Google’s “cat-recognizing brain”, the network eventually discovers the concept of “cat”, “dog”, etc. all by itself — even without training the algorithm against explicit labels. After this unsupervised training, whenever the network is fed an image belonging to a certain category like “Cats”, the same corresponding set of “Cat” neurons always gets fired up. Simply by looking at the vast set of training images, this network has discovered the essential patterns of each category, as well as the differences of one category vs. another.In the same way, an infant who is repeatedly shown a paper cup would soon recognize the visual pattern of such a thing, even before it ever learns the words “paper cup” to attach that pattern to a name. In this sense, the strong form of the Sapir-Whorf hypothesis cannot be entirely correct — we can, and do, discover concepts even without the words to describe them.Supervised and unsupervised machine learning turn out to represent the two sides of the controversy’s coin. And if we recognized them as such, perhaps Sapir-Whorf would not be such a controversy, and more of a reflection of supervised and unsupervised human learning.I find these correspondences deeply fascinating — and we’ve only scratched the surface. Philosophers, psychologists, linguists, and neuroscientists have studied these topics for a long time. The connection to machine learning and computer science is more recent, especially with the advances in big data and deep learning. When fed with huge amounts of text, images, or audio data, the latest deep learning architectures are demonstrating near or even better-than-human performance in language translation, image classification, and speech recognition.Every new discovery in machine learning demystifies a bit more of what may be going on in our brains. We’re increasingly able to borrow from the vocabulary of machine learning to talk about our minds. | 2024-11-08T14:18:52 | en | train |
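The back-and-forth mapping the essay above describes (image to concept, and concept back to image) can be illustrated with a toy autoencoder. This is a speculative sketch, not Hinton's five-layer network: the data are random 8x8 bit patterns, and the single hidden layer, sizes, and training loop are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 200 random 8x8 binary patterns standing in for digit bitmaps.
X = (rng.random((200, 64)) < 0.3).astype(float)

n_in, n_code = 64, 8
W_enc = rng.normal(0.0, 0.1, (n_in, n_code))
W_dec = rng.normal(0.0, 0.1, (n_code, n_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(2000):
    code = sigmoid(X @ W_enc)        # image -> "concept"-like code
    recon = sigmoid(code @ W_dec)    # code  -> reconstructed ("imagined") image
    err = recon - X
    delta_out = err * recon * (1.0 - recon)
    grad_dec = code.T @ delta_out / len(X)
    delta_code = (delta_out @ W_dec.T) * code * (1.0 - code)
    grad_enc = X.T @ delta_code / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# "Imagination": take the code for one image and run only the decoder.
imagined = sigmoid(sigmoid(X[:1] @ W_enc) @ W_dec)
print(np.round(imagined.reshape(8, 8), 2))
```

Running only the decoder on a code vector is the "imagination" direction the essay describes: generating an image from a concept-like internal representation.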
10,820,785 | jonbaer | 2016-01-01T01:55:57 | Demystifying Deep Reinforcement Learning | null | http://www.nervanasys.com/demystifying-deep-reinforcement-learning/ | 3 | 0 | null | null | null | no_error | KINGGACOR | Situs Slot Gacor Hari Ini Slot PG Soft Maxwin di Indonesia | null | null |
KINGGACOR | Today's "Gacor" Slot Site for PG Soft Maxwin Slots in Indonesia
KINGGACOR is one of Indonesia's trusted online slot sites, a "slot gacor" platform that offers many kinds of slot games through trusted providers such as PG Soft and Pragmatic Play. Kinggacor has become the top choice of slot players because its high slot RTP gives players a better chance of winning. Many kinds of online slots are available at kinggacor, with a wide range of game themes, striking game animation, and promotional bonuses that can help players reach maxwin. Backed by kinggacor's artificial intelligence, players are assisted in winning slot games at any time.
Kinggacor works with PG Soft to give all "slot gacor" players the best possible playing experience; PG Soft is a well-known Thai-style slot provider that has proven to deliver many big wins for players. PG Soft offers high-quality slot games that feature not only attractive visuals but also features such as Free Spins, Wilds, and bonus features that benefit slot players. These advantages have made PG Soft the most sought-after and most-played online slot provider in Indonesia today.
The 5 Most Popular PG Soft Online Slot Games Today
PG Soft is the number 1 "Slot Gacor" provider in Indonesia today. With its many advantages, PG Soft frequently innovates and releases new slot games. Here are some of the well-known online slot games on the PG Soft platform:
Slot Gacor PG Soft Mahjong Ways 2
Slot Gacor PG Soft Caishen Wins
Slot Gacor PG Soft Ganesha Fortune
Slot Gacor PG Soft Treasure of Aztec
Slot Gacor PG Soft Lucky Neko
| 2024-11-08T08:38:29 | id | train
10,820,925 | Someone | 2016-01-01T02:51:11 | Users No Longer Need to Jailbreak Apple iOS to Load Rogue Apps | null | http://www.darkreading.com/vulnerabilities---threats/users-no-longer-need-to-jailbreak-apple-ios-to-load-rogue-apps/d/d-id/1323726 | 2 | 0 | null | null | null | no_error | Users No Longer Need to Jailbreak Apple iOS To Load Rogue Apps | 2015-12-29T17:00:00.000Z | Ericka Chickowski, Contributing Writer | Security practitioners who've counted on the protection of Apple App Store's walled garden approach now have something new to worry about: rogue app marketplaces are now using stolen enterprise certificates to allow users with even non-jailbroken iPhones and iPads to download applications through unapproved channels. Researchers from Proofpoint have dubbed the process used by these types of rogue app stores as "DarkSideLoaders." In their research, they pointed to one marketplace in particular, vShare, as an example of those using DarkSideLoader methods. Advertising one million apps available for iPhones and iPads, including pirated paid apps available for free, vShare in past years has catered to Android and jailbroken iOS devices. However, the game has now changed for this marketplace as it has figured out how to "sideload" applications, or circumvent the Apple App Store or legitimate app stores, into non-jailbroken iOS devices.Rogue app stores are doing this by signing their apps with Enterprise App distribution certificates issued by Apple."These certificates are normally issued to enterprises that want to operate their own internal app stores for employees," the researchers wrote. "A rogue app marketplace using the DarkSideLoader technique has implemented a large scale app re-signing capability. Legitimate games and other apps are decrypted, modified, and re-signed with an enterprise certificate for download by users of the rogue app marketplace."This capability puts enterprises at risk when their employees start loading applications from these unauthorized app stores."These apps can make use of private iOS APIs to access operating system functions that would not be permitted by apps that have been vetted by Apple for publishing on the official app store," Proofpoint researchers said.The biggest risk to enterprises, of course, is that these unauthorized apps are used as vehicles to carry known or zero-day vulnerabilities that will allow the app maker to compromise the device. Security experts have long warned about the dangers of jailbreaking devices in order to sideload devices due to the high prevalence of malicious mobile devices lurking in these types of marketplaces. Attackers load attractive applications--such as pirated popular games or productivity applications--with remote access trojans (RATs) that can be used to infiltrate corporate networks when infected devices connect to them."The vShare marketplace is noteworthy in that it is accessible to iOS devices connecting from anywhere in the world, representing a global expansion of this attack technique," wrote the researchers. "This technique also makes it possible to load onto the iOS devices configuration profiles that would allow an attacker to configure VPN settings to redirect network traffic to their man-in-the-middle nodes, as well as change various OS settings."About the AuthorEricka Chickowski specializes in coverage of information technology and business innovation. 
She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. | 2024-11-08T07:55:51 | en | train |
10,820,938 | pavornyoh | 2016-01-01T02:56:54 | In 2015, promising surveillance cases ran into legal brick walls | null | http://arstechnica.com/tech-policy/2015/12/in-2015-promising-surveillance-cases-ran-into-legal-brick-walls/ | 48 | 6 | [10833956, 10834045] | null | null | no_error | In 2015, promising surveillance cases ran into legal brick walls | 2015-12-31T16:00:25+00:00 | Cyrus Farivar |
Attorneys everywhere are calling things moot after the phone metadata program ended.
Today, the first Snowden disclosures in 2013 feel like a distant memory. The public perception of surveillance has changed dramatically since and, likewise, the battle to shape the legality and logistics of such snooping is continually evolving.
To us, 2015 appeared to be the year where major change would happen whether pro- or anti-surveillance. Experts felt a shift was equally imminent. "I think it's impossible to tell which case will be the one that does it, but I believe that, ultimately, the Supreme Court will have to step in and decide the constitutionality of some of the NSA's practices," Mark Rumold, an attorney with the Electronic Frontier Foundation, told Ars last year.
The presumed movement would all start with a lawsuit filed by veteran conservative activist Larry Klayman. Filed the day after the initial Snowden disclosures, his lawsuit would essentially put a stop to unchecked NSA surveillance. In January 2015, he remained the only plaintiff whose case had won when fighting for privacy against the newly understood government monitoring. (Of course, it was a victory in name only—the judicial order in Klayman was stayed pending the government’s appeal at the time).
With January 2016 mere hours away, however, the significance of Klayman is hard to decipher. The past year saw an end to the phone metadata program authorized under Section 215 of the USA Patriot Act, but it also saw the government flex its surveillance muscle in other ways, maintaining or establishing other avenues to keep its fingers on the pulse. That activity dramatically impacted Klayman and other cases we anticipated shaping surveillance in 2015, and we can admit our optimism was severely dashed. In total, zero of the cases we profiled last January got anywhere close to the nine Supreme Court justices in the last 12 months.
Tomorrow we'll bring you five new (and hopefully more active) cases that we’ve got our eye on for 2016, but let’s review what’s happened to our 2015 list first.
The grandaddy of them all
Case name: Klayman v. Obama
Status: Pending at the District of Columbia Circuit Court of Appeals for the second time.
This case is notable for two reasons. First, it was filed the day after the first published disclosures from the Snowden leaks. Second, the case marks a rare win against the government.
US District Judge Richard Leon ruled in favor of plaintiff and attorney Larry Klayman in December 2013, ordering that the NSA’s Bulk Telephony Metadata Program be immediately halted. However, he famously stayed his order pending an appeal to the District of Columbia Circuit Court of Appeals. The DC Circuit reversed his order in August 2015 and sent it back down to Judge Leon. The DC circuit found (as has often been the case) that Klayman did not have standing as there was not enough evidence that his records had been collected.
Judge Leon next suggested that the case be amended to include a specific plaintiff that had been a customer of Verizon Business Services, not Verizon Wireless. That person, California lawyer J.J. Little, was soon found and added to the case. The judge then ruled on November 9, 2015 that the government be ordered to immediately stop collecting Little’s records. As Judge Leon wrote:
With the Government’s authority to operate the Bulk Telephony Metadata Program quickly coming to an end, this case is perhaps the last chapter in the Judiciary’s evaluation of this particular Program’s compatibility with the Constitution. It will not, however, be the last chapter in the ongoing struggle to balance privacy rights and national security interests under our Constitution in an age of evolving technological wizardry. Although this Court appreciates the zealousness with which the Government seeks to protect the citizens of our Nation, that same Government bears just as great a responsibility to protect the individual liberties of those very citizens.
The government again appealed the decision back to the District of Columbia Circuit Court of Appeals. Weeks later though, the phone metadata program authorized under Section 215 of the USA Patriot Act ended on November 29, 2015. As such, the government said in December 2015 that it will soon formally appeal Judge Leon’s decision, largely on the basis that it’s now moot.
Phone metadata fallout
Case name: ACLU v. Clapper
Status: Sent back down to the Southern District of New York, likely to be dismissed as moot
In a landmark May 2015 decision, the 2nd Circuit Court of Appeals ruled that the bulk telephone metadata program was not authorized by Section 215 of the Patriot Act. Again, that program halted shortly after in November 2015. Today it’s likely that the lower court will soon dismiss the case as moot.
"The statutes to which the government points have never been interpreted to authorize anything approaching the breadth of the sweeping surveillance at issue here," the appeals court wrote last spring.
At the time, the court also noted that the Patriot Act gives the government wide powers to acquire all types of private records on Americans as long as they are "relevant" to an investigation. But according to the court, the government is going too far when it comes to acquiring, via a subpoena, the metadata of every telephone call made to and from the United States.
As 2nd Circuit judges concluded:
The records demanded are not those of suspects under investigation, or of people or businesses that have contact with such subjects, or of people or businesses that have contact with others who are in contact with the subjects—they extend to every record that exists, and indeed to records that do not yet exist, as they impose a continuing obligation on the recipient of the subpoena to provide such records on an ongoing basis as they are created. The government can point to no grand jury subpoena that is remotely comparable to the real‐time data collection undertaken under this program.
After the 2nd Circuit, the case was sent back down to the Southern District of New York, which has yet to schedule any arguments in this case for 2016.
Ridiculously slow
Case name: First Unitarian Church v. National Security Agency
Status: Pending in Northern District Court of California
Unlike Klayman and similar cases, First Unitarian Church v. National Security Agency was filed in 2013 on behalf of a number of wide-ranging religious and non-profit groups. This collective runs the gamut, representing Muslims, gun owners, marijuana legalization advocates, and even the Free Software Foundation. In total, the suit represents the broadest challenge to the metadata collection program so far.
First Unitarian Church takes the bulk collection of data and questions how it may reveal an individual's associations:
Plaintiffs’ associations and political advocacy efforts, as well as those of their members and staffs, are chilled by the fact that the Associational Tracking Program creates a permanent record of all of Plaintiffs’ telephone communications with their members and constituents, among others.
The plaintiffs demand that the metadata program be declared unconstitutional and formally shut down. In the latest chapter, the plaintiffs' attempt to hold a court hearing regarding their motion for summary judgment was denied in December 2015.
Overall within this past year, the docket only advanced slightly. Oakland-based US District Judge Jeffrey White did not hold a single hearing in the case, and nothing is scheduled so far for 2016. Like the previous two cases to watch from 2015, this case is also likely to be dismissed as moot given that the phone metadata program under Section 215 is no longer operational.
NSA snoops on a cab driver
Case name: United States v. Moalin
Status: Convicted in Southern District Court of California, appeal pending in 9th Circuit Court of Appeals
As is proven time and time again, the wheels of justice often turn quite slowly. Last year, we guessed that the 9th Circuit Court of Appeals would hear oral arguments in the only criminal case where the government is known to have used phone metadata collection to prosecute a terrorism-related case. It didn’t.
Most of the appellate case’s docket in 2015 was taken up by new lawyers being added to the case and extensions of time. Finally on December 14, 2015, lawyers for Basaaly Moalin and his three co-conspirators filed their opening 258-page brief.
United States v. Basaaly Saeed Moalin involves a Somali taxi driver who was convicted in a San Diego federal court in February 2013 on five counts. The counts include conspiracy to provide material support ($8,500) to the Somali terrorist group Al Shabaab, and Moalin was sentenced in November 2013 to 18 years in prison.
At congressional hearings in June 2013, FBI Deputy Director Sean Joyce testified that under Section 215, the NSA discovered Moalin indirectly conversing with a known terrorist overseas. However, the case was domestic and the FBI took over at that point. They began intercepting 1,800 phone calls over hundreds of hours from December 2007 to December 2008. The agency got access to hundreds of e-mails from Moalin’s Hotmail account, and this access was granted after the government applied for a court order at the FISC.
Attorney Joshua Dratel (Image credit: Aurich Lawson)
Though Moalin was arrested in December 2010, attorney Joshua Dratel (yes, the same attorney representing Ross Ulbricht) did not learn of the NSA's involvement until well after his client's conviction. Dratel challenged the validity of the spying in court, requesting that the court compel the government to produce the FBI's wiretap application to the FISC. The government responded with a heavily redacted 60-page brief, essentially arguing that since the case involved national security issues this information could not be revealed.
In the appeal to the 9th Circuit, Moalin and the other co-defendants “deny that was the purpose for which the funds were intended. Rather, the funds, consistent with contributions by the San Diego Somali community for all the years it has existed in the US, were designed to provide a regional Somali administration with humanitarian assistance relating to drought relief, educational services, orphan care, and security.”
Moalin’s legal team—which includes top lawyers from the American Civil Liberties Union—argue forcefully that the court heed the ruling in ACLU v. Clapper.
As it has often done, the government relied upon a legal theory known as the third-party doctrine. This emanated from a 1979 Supreme Court decision, Smith v. Maryland, where the court found that individuals do not have an inherent privacy right to data that has already been disclosed to a third party. So with telecom data for instance, the government has posited that because a call from one person to another forcibly transits Verizon’s network, those two parties have already shared that data with Verizon. Therefore, the government argues, such data can't be private, and it’s OK to collect it.
Moalin’s lawyers firmly reject the third-party doctrine:
The aggregation of records compounded the invasiveness and impact of the NSA program upon Moalin’s privacy because the government acquires more information about any given individual by monitoring the call records of that individual’s contacts—and by monitoring the call records of those contacts’ contacts.
…
As a result, it would be particularly inappropriate to hold that Smith—again, a case involving a very short-term and particularized, individualized surveillance of a person suspected of already having committed the specific crime under investigation—permitted the warrantless surveillance—including not only collection, but aggregation, retention, and review—of Moalin’s telephone metadata when the Supreme Court has expressly recognized that long-term dragnet surveillance raises distinct constitutional concerns.
In a previous government court filing, then-NSA director Gen. Keith Alexander testified that the NSA had reviewed a phone number “associated with Al-Qaeda,” and the agency saw that this number had “touched a phone number in San Diego.” Finally, Alexander said, the NSA observed Moalin’s number “talking to a facilitator in Somalia” in 2007.
Moalin’s lawyers fired back against this line of reasoning:
This information is relevant. Consistent with the FIG assessment, the 2003 investigation of Moalin “did not find any connection to terrorist activity.” The raw material for this finding would have established Moalin’s lack of connection to terrorist activity. (CR 345-3 at 18.)
What it meant to “touch” a number “associated with al-Qaeda[,]” raises questions. First, what does “associated” with al-Qaeda mean? The trial theory was that Moalin was contacting Aden Ayrow of al-Shabaab, and not someone “associated” with al-Qaeda. Second, was Moalin’s number in direct contact or was it a “hop” or two or three (or even more) away?
The government’s response is due April 15, 2016. With any luck, oral arguments will take place before the end of 2016.
"Backdoor searches" ?
Case name: United States v. Muhtorov
Status: Pending in District Court of Colorado
While all the previous cases have to do with bulk phone metadata surveillance under the now-defunct Section 215 of the Patriot Act, there’s is another case we singled out last year that involves another surveillance law.
Many different types of digital surveillance are authorized under the particularly thorny Section 702 of the FISA Amendment Act. This authorizes PRISM and "upstream" collection programs like XKeyscore, which can capture digital content (not just metadata) primarily where one party is a non-US person outside the US. Executive Order 12333 is believed to generally cover instances where both parties are non-US persons and are both overseas, although EO 12333 can "incidentally" cover wholly domestic communication as well. And with Section 215 now gone, cases under Section 702 take on greater importance.
This particular case begins in February 2013 with Clapper v. Amnesty International. The Supreme Court decided via a 5-4 decision that even groups with substantial reasons to believe that their communications are being surveilled by government intelligence agencies—such as journalists, activists, and attorneys with contacts overseas—have no standing to sue the federal government. The reason? They can't prove that they have been actively monitored. It's a major catch-22 since those who were being watched weren't exactly going to be told about the surveillance. But all that changed in October 2013 when the Justice Department altered its policy, stating that when prosecutors used warrantless wiretaps against criminal defendants, the defendants must be told.
Jamshid Muhtorov became the first such person to receive such a notification. The Uzbek human rights activist has lived in the US as a permanent resident and refugee since 2007. He's accused of providing material support and resources to the Islamic Jihad Union (IJU), and the US believes the IJU is an Islamic terrorist group. His criminal trial was scheduled to begin in April 2012, but it became beset with delays. Muhtorov pleaded not guilty during his arraignment hearing in March 2012. And in January 2014, Muhtorov became the first person to challenge warrantless collection of specific evidence in a criminal case against him.
In the latest development (from November 2015), US District Judge John Kane ruled against Muhtorov’s January 2014 motion to suppress evidence obtained under Section 702. As the judge concludes:
Mr. Muhtorov argues that § 702's minimization procedures are inadequate (and the approval scheme therefore constitutionally unreasonable) because they allow the government to maintain a database of incidentally collected information and query it for law enforcement purposes later. These “backdoor searches,” Muhtorov concludes, require a warrant and render the FAA approval scheme unconstitutional. I disagree. Accessing stored records in a database legitimately acquired is not a search in the context of the Fourth Amendment because there is no reasonable expectation of privacy in that information. Evidence obtained legally by one police agency may be shared with similar agencies without the need for obtaining a warrant, even if sought to be used for an entirely different purpose. This principle applies to fingerprint databases and has also been applied in the foreign intelligence context in Jabara v. Webster, 691 F.2d 272, 27779 (6th Cir. 1982).
The next hearing is currently set for January 4, 2016, and Ars plans on attending.
Cyrus is a former Senior Tech Policy Reporter at Ars Technica, and is also a radio producer and author. His latest book, Habeas Data, about the legal cases over the last 50 years that have had an outsized impact on surveillance and privacy law in America, is out now from Melville House. He is based in Oakland, California.
| 2024-11-07T14:59:51 | en | train |
10,821,026 | Shivetya | 2016-01-01T03:36:21 | A Glove That Lets You Feel What's Far Below the Water | null | http://www.popsci.com/glove-that-lets-you-feel-whats-under-water | 1 | 0 | null | null | null | no_error | A Glove That Lets You Feel What's Far Below The Water | null | Haniya Rae |
A haptic sonar glove developed by Ph.D. candidates Aisen Carolina Chacin and Takeshi Ozu of the Empowerment Informatics program at Tsukuba University in Japan allows wearers to “feel” objects that are just out of reach in underwater settings. In situations where there’s limited visibility, like flooded streets in an emergency, gloves like these could prove especially useful.
Inspired by the dolphin, IrukaTact (iruka means ‘dolphin’ in Japanese) uses echolocation to detect objects below the water, and provides haptic feedback to the wearer with pulsing jets of water. As the wearer’s hand floats closer to a sunken object, the stronger the jets become, and the wearer feels more pressure on her fingertips. Since the apparatus has minimal bulk, the wearer can grasp objects easily after they’ve been found.
“Our overall goal was to expand haptics,” says Chacin. “How can you feel different textures or sense depth without actually touching the object? Vibration alone doesn’t cut it for me, or most people, for that matter.”
The glove uses a MaxBotix MB7066 sonar sensor, three small motors, and an Arduino Pro Mini, and is programmed to send signals to the three middle fingers in silicone thimbles. The motors are placed on top of the index, middle, and ring fingers, and pump water from the surrounding environment. This water is siphoned onto the wearer’s fingertips to create pressure feedback. The thumb and pinky are left free in order to reduce clunkiness, save battery power, and improve movement. A silicone ring around the middle finger, connected to the sensor at the wrist by a small tube encasing the sensor’s wires, keeps the sensor parallel with the hand and allows it to read information from the direction the palm is facing. The sensor can receive and send signals from up to 2 feet of distance underwater, though Chacin says in the future it’d be possible to expand this range.
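The feedback rule described above (the closer the object, the stronger the jets) can be sketched as follows. This is a speculative Python illustration of the control idea only: the real device runs on an Arduino Pro Mini, and the ranges, scaling, and function names here are invented, not taken from the actual IrukaTact firmware.

```python
# Speculative sketch of the feedback logic described above: the closer the
# sonar reading, the harder the three fingertip pumps are driven.
MAX_RANGE_CM = 60.0   # the article cites roughly 2 feet of useful range underwater
MIN_RANGE_CM = 5.0    # assumed dead zone close to the hand


def pump_intensity(distance_cm):
    """Map a sonar distance to a 0.0-1.0 pump duty cycle (closer = stronger)."""
    if distance_cm >= MAX_RANGE_CM:
        return 0.0  # nothing in range: no feedback
    d = max(distance_cm, MIN_RANGE_CM)
    # Linear ramp from 0 at maximum range to 1 at the near limit.
    return (MAX_RANGE_CM - d) / (MAX_RANGE_CM - MIN_RANGE_CM)


def drive_fingers(distance_cm):
    """Return duty cycles for the index, middle, and ring finger pumps."""
    duty = pump_intensity(distance_cm)
    return {"index": duty, "middle": duty, "ring": duty}


if __name__ == "__main__":
    for d in (70, 45, 20, 6):
        print(d, "cm ->", drive_fingers(d))
```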
Chacin and Ozu, in collaboration with Ars Electronica, designed the glove as a DIY kit in hopes that the glove could potentially be used to search for victims, sunken objects, or hazards like sinkholes.
The glove could also be paired with a device like the Oculus Rift and outfitted with gyroscopes and accelerometers to provide haptic feedback in virtual reality.
| 2024-11-08T03:59:35 | en | train |
10,821,178 | apayan | 2016-01-01T05:29:34 | My Unwanted Sabbatical, Why I Went to Prison for Teaching Physics | null | http://iranpresswatch.org/post/13704/ | 2 | null | null | null | true | no_error | BIHE Professor | My Unwanted Sabbatical, Why I Went to Prison for Teaching Physics | 2015-12-22T18:18:08+00:00 | editor |
Source: www.technologyreview.com
Mahmoud Badavam and family on the day of his release from prison.
On April 30, 2015, I was standing behind the very tall and heavy door of Rajaee-Shahr prison in the suburbs of Tehran, anxiously waiting for a moment I’d been imagining for four years. At last, the door opened and I could see the waiting crowd that included my family, friends, and former students. The first thing I did was hug my wife. We were both crying.
In 2011, my career had taken an unexpected and unusual turn: I was imprisoned for the crime of teaching physics at an “unofficial” university called the Baha’i Institute for Higher Education (BIHE).
Iran’s Baha’i community created BIHE in the 1980s after our youth were banned from Iranian universities. I began volunteering there in 1989 after serving three years in prison for simply being an active Baha’i. At BIHE, I taught physics and electronics and, as a member of BIHE’s e-learning committee, was a liaison with MIT’s OpenCourseWare Consortium. When I was arrested in 2011, I was on the engineering faculty at BIHE on top of my day job at an engineering company.
After six months in solitary confinement, I joined 70 to 80 fellow prisoners of conscience (many of us Baha’is); I shared a two-by-four-meter room with five others. I spent most of my time meditating, praying, and reading any available books. I wrote letters to friends and family, talked to fellow prisoners, and taught English.
Weekly visits with my wife and daughter (and sometimes my sister) provided a connection to the outside world. They brought news of calls, e-mails, and visits from my friends and colleagues. Once my daughter brought me a copy of MIT Technology Review, which I read line by line and page by page, including all the advertisements! But the authorities did not allow me to receive the next issue, because it was in English and no one there could verify its contents.
Now that some months have passed since the day of my freedom, I am back to almost normal life and work at the same engineering company. But my heart is still with my fellow prisoners at Rajaee-Shahr.
The experience of being a prisoner showed me there is a lot in our daily lives that we take for granted. After being in solitary for months, I was given access to a 12-inch TV. Looking at the colors on the screen was very exciting—blue, red, green, pink. I hadn’t appreciated the importance of color until I went without it. And now whenever I walk with my wife, I am very conscious of how dear these moments are, and I try to enjoy every one. My wife suffered much more than I did. I have committed myself to comforting her for the rest of my life.
My gratitude for many things also increased: the love of my wife and daughter, the respect of my former students and friends. Now many of my students have graduated and are responsible people with respectable careers as engineers and managers. Many bring their children to visit me. I am happy that I have had a tiny share in their success, and when I look at them I sometimes think the whole prison term was worthwhile.
In prison, I had a chance to read Nelson Mandela’s inspiring book Long Walk to Freedom several times. In it, he writes that education is the great engine of personal development. Through education, the daughter of a peasant can become a doctor, the son of a mine worker can become the head of the mine, a child of farm workers can become the president of a great nation. This was exactly what I wanted to do at BIHE for the young Baha’is.
I am sharing my story with the MIT community because MIT means a lot to me and I follow its news closely. MIT has been involved with BIHE since the beginning, with several MIT alumni, staff members, and academics lending their support. In September 1999, Chuck Vest joined the presidents of several other U.S. universities to appeal to the Iranian government to restore education to Baha’i youth in Iran.
My incarceration taught me that we, the privileged and educated population, have a great responsibility to the world. Humanity is suffering from prejudice, poverty, and lack of democracy. As engineers and scientists, we can do much to address these issues.
Mahmoud Badavam, SM ’78, who works for an engineering consulting company in Tehran, has not resumed teaching physics at BIHE. But he hopes that it will one day be legal to do so.
| 2024-11-07T19:19:19 | en | train |
10,821,183 | jonbaer | 2016-01-01T05:33:20 | Time Warps and Black Holes: The Past, Present and Future of Space-Time | null | http://www.space.com/31495-space-time-warps-and-black-holes.html | 2 | 0 | null | null | null | no_error | Time Warps and Black Holes: The Past, Present & Future of Space-Time | 2015-12-31T15:51:27+00:00 | Nola Taylor Tillman |
A massive object like the Earth will bend space-time, and cause objects to fall toward it.
(Image credit: Science@NASA)
When giving the coordinates for a location, most people provide the latitude, longitude and perhaps altitude. But there is a fourth dimension often neglected: time. The combination of the physical coordinates with the temporal element creates a concept known as space-time, a background for all events in the universe.
"In physics, space-time is the mathematical model that combines space and time into a single interwoven continuum throughout the universe," Eric Davis, a physicist who works at the Institute for Advanced Studies at Austin and with the Tau Zero Foundation, told Space.com by email. Davis specializes in faster-than-light space-time and anti-gravity physics, both of which use Albert Einstein's general relativity theory field equations and quantum field theory, as well as quantum optics, to conduct lab experiments.
"Einstein's special theory of relativity, published in 1905, adapted [German mathematician] Hermann Minkowski's unified space-and-time model of the universe to show that time should be treated as a physical dimension on par with the three physical dimensions of space — height, width and length — that we experience in our lives," Davis said. [Einstein's Theory of Relativity Explained (Infographic)]
"Space-time is the landscape over which phenomena take place," added Luca Amendola, a member of the Euclid Theory Working Group (a team of theoretical scientists working with the European Space Agency's Euclid satellite) and a professor at Heidelberg University in Germany. "Just as any landscape is not set in stone, fixed forever, it changes just because things happen — planets move, particles interact, cells reproduce," he told Space.com via email.
The history of space-time
The idea that time and space are united is a fairly recent development in the history of science.
"The concepts of space remained practically the same from the early Greek philosophers until the beginning of the 20th century — an immutable stage over which matter moves," Amendola said. "Time was supposed to be even more immutable because, while you can move in space the way you like, you cannot travel in time freely, since it runs the same for everybody."
In the early 1900s, Minkowski built upon the earlier works of Dutch physicist Hendrik Lorentz and French mathematician and theoretical physicist Henri Poincare to create a unified model of space-time. Einstein, a student of Minkowski, adapted Minkowski's model when he published his special theory of relativity in 1905.
"Einstein had brought together Poincare's, Lorentz's and Minkowski's separate theoretical works into his overarching special relativity theory, which was much more comprehensive and thorough in its treatment of electromagnetic forces and motion, except that it left out the force of gravity, which Einstein later tackled in his magnum opus general theory of relativity," Davis said.
Space-time breakthroughs
In special relativity, the geometry of space-time is fixed, but observers measure different distances or time intervals according to their own relative velocity.
In general relativity, the geometry of space-time itself changes depending on how matter moves and is distributed."Einstein's general theory of relativity is the first major theoretical breakthrough that resulted from the unified space-time model," Davis said.General relativity led to the science of cosmology, the next major breakthrough that came thanks to the concept of unified space-time."It is because of the unified space-time model that we can have a theory for the creation and existence of our universe, and be able to study all the consequences that result thereof," Davis said.He explained that general relativity predicted phenomena such as black holes and white holes. It also predicts that they have an event horizon, the boundary that marks where nothing can escape, and the point of singularities at their center, a one dimensional point where gravity becomes infinite. General relativity could also explain rotating astronomical bodies that drag space-time with them, the Big Bang and the inflationary expansion of the universe, gravity waves, time and space dilation associated with curved space-time, gravitational lensing caused by massive galaxies, and the shifting orbit of Mercury and other planetary bodies, all of which science has shown true. It also predicts things such as warp-drive propulsions and traversable wormholes and time machines."All of these phenomena rely on the unified space-time model," he said, "and most of them have been observed."An improved understanding of space-time also led to quantum field theory. When quantum mechanics, the branch of theory concerned with the movement of atoms and photons, was first published in 1925, it was based on the idea that space and time were separate and independent. After World War II, theoretical physicists found a way to mathematically incorporate Einstein's special theory of relativity into quantum mechanics, giving birth to quantum field theory."The breakthroughs that resulted from quantum field theory are tremendous," Davis said.The theory gave rise to a quantum theory of electromagnetic radiation and electrically charged elementary particles — called quantum electrodynamics theory (QED theory) — in about 1950. In the 1970s, QED theory was unified with the weak nuclear force theory to produce the electroweak theory, which describes them both as different aspects of the same force. In 1973, scientists derived the quantum chromodynamics theory (QCD theory), the nuclear strong force theory of quarks and gluons, which are elementary particles.In the 1980s and the 1990s, physicists united the QED theory, the QCD theory and the electroweak theory to formulate the Standard Model of Particle Physics, the megatheory that describes all of the known elementary particles of nature and the fundamental forces of their interactions. Later on, Peter Higgs' 1960s prediction of a particle now known as the Higgs boson, which was discovered in 2012 by the Large Hadron Collider at CERN, was added to the mix.Experimental breakthroughs include the discovery of many of the elementary particles and their interaction forces known today, Davis said. They also include the advancement of condensed matter theory to predict two new states of matter beyond those taught in most textbooks. 
More states of matter are being discovered using condensed matter theory, which uses the quantum field theory as its mathematical machinery."Condensed matter has to do with the exotic states of matter, such as those found in metallic glass, photonic crystals, metamaterials, nanomaterials, semiconductors, crystals, liquid crystals, insulators, conductors, superconductors, superconducting fluids, etc.," Davis said. "All of this is based on the unified space-time model."The future of space-timeScientists are continuing to improve their understanding of space-time by using missions and experiments that observe many of the phenomena that interact with it. The Hubble Space Telescope, which measured the accelerating expansion of the universe, is one instrument doing so. NASA's Gravity Probe B mission, which launched in 2004, studied the twisting of space-time by a rotating body — the Earth. NASA's NuSTAR mission, launched in 2012, studies black holes. Many other telescopes and missions have also helped to study these phenomena.On the ground, particle accelerators have studied fast-moving particles for decades."One of the best confirmations of special relativity is the observations that particles, which should decay after a given time, take in fact much longer when traveling very fast, as, for instance, in particle accelerators," Amendola said. "This is because time intervals are longer when the relative velocity is very large."Future missions and experiments will continue to probe space-time as well. The European Space Agency-NASA satellite Euclid, set to launch in 2020, will continue to test the ideas at astronomical scales as it maps the geometry of dark energy and dark matter, the mysterious substances that make up the bulk of the universe. On the ground, the LIGO and VIRGO observatories continue to study gravitational waves, ripples in the curvature of space-time."If we could handle black holes the same way we handle particles in accelerators, we would learn much more about space-time," Amendola said.Merging black holes create ripples in space-time in this artist's concept. Experiments are searching for these ripples, known as gravitational waves, but none have been detected. (Image credit: Swinburne Astronomy Productions)Understanding space-timeWill scientists ever get a handle on the complex issue of space-time? That depends on precisely what you mean."Physicists have an excellent grasp of the concept of space-time at the classical levels provided by Einstein's two theories of relativity, with his general relativity theory being the magnum opus of space-time theory," Davis said. "However, physicists do not yet have a grasp on the quantum nature of space-time and gravity."Amendola agreed, noting that although scientists understand space-time across larger distances, the microscopic world of elementary particles remains less clear."It might be that space-time at very short distances takes yet another form and perhaps is not continuous," Amendola said. "However, we are still far from that frontier."Today's physicists cannot experiment with black holes or reach the high energies at which new phenomena are expected to occur. Even astronomical observations of black holes remain unsatisfactory due to the difficulty of studying something that absorbs all light, Amendola said. Scientists must instead use indirect probes."To understand the quantum nature of space-time is the holy grail of 21st century physics," Davis said. 
"We are stuck in a quagmire of multiple proposed new theories that don't seem to work to solve this problem."Amendola remained optimistic. "Nothing is holding us back," he said. "It's just that it takes time to understand space-time."Follow Nola Taylor Redd on Twitter @NolaTRedd or Google+. Follow us @Spacedotcom, Facebook and Google+. Original article on Space.com.
Nola Taylor Tillman is a contributing writer for Space.com. She loves all things space and astronomy-related, and enjoys the opportunity to learn more. She has a Bachelor’s degree in English and Astrophysics from Agnes Scott college and served as an intern at Sky & Telescope magazine. In her free time, she homeschools her four children. Follow her on Twitter at @NolaTRedd
| 2024-11-08T11:40:29 | en | train |
10,821,263 | shawndumas | 2016-01-01T06:28:33 | Hiking Minimum Wage an Inefficient Tool to Fight Poverty: Fed Research | null | http://www.nbcnews.com/business/economy/hiking-minimum-wage-inefficient-tool-fight-poverty-fed-research-n488111?cid=sm_tw&hootPostID=d54005cf9aa5678fcc7e97e310e5f2b4 | 5 | 0 | null | null | null | no_error | Hiking Minimum Wage an Inefficient Tool to Fight Poverty: Fed Research | 2015-12-30T20:33:29.000Z | By Jeff Cox, CNBC | Increasing the minimum wage is an inefficient way to reduce poverty, according to a Fed research paper that comes amid a national clamor to hike pay for workers at the low end of the salary scale.Fast-food workers and their supporters join a nationwide protest for higher wages and union rights outside McDonald's in Los Angeles on Nov. 10.Lucy Nicholson / ReutersDavid Neumark, visiting scholar at the San Francisco Fed, contends in the paper that raising the minimum wage has only limited benefits in the war against poverty, due in part because relatively few of those falling below the poverty line actually receive the wage.Many of the benefits from raising the wage, a move already undertaken by multiple governments around the country as well as some big-name companies, tend to go to higher-income families, said Neumark, who also pointed to research that shows raising wages kills jobs through higher costs to employers. Neumark is a professor of economics and director of the Center for Economics and Public Policy at the University of California, Irvine."Setting a higher minimum wage seems like a natural way to help lift families out of poverty. However, minimum wages target individual workers with low wages, rather than families with low incomes," he wrote. "Other policies that directly address low family income, such as the earned income tax credit, are more effective at reducing poverty."13 States to Raise Minimum Wage in 2016His conclusions drew a response from advocates for raising the wage who said the argument that boosting wages would cost jobs has been proven invalid and that an increase would help cut into poverty levels."The mainstream view, as illustrated by meta-surveys of the whole minimum wage research field, is that the job loss effects of raising the minimum wage are very, very small," Paul Sohn, general counsel for the National Employment Law Project, said in an email to CNBC.com. An NELP study "shows that the bulk of rigorous minimum wage studies show instead that raising the minimum wage boosts incomes for low-wage workers with only very small adverse impacts on employment."The U.S. poverty rate has been fairly flat in recent years but actually was 2.3 percent higher at the end of 2014 than it was before the Great Recession in 2008, according to the Census Bureau. Advocates for the poor believe raising the minimum wage is a linchpin in helping to eradicate poverty, and 29 states plus the District of Columbia have minimums above the national floor of $7.25.Fighting poverty, though, is more complicated than raising wages.Five Reasons Why Job Creation Is so WeakDemographically, about half of the 3 million or so workers receiving the minimum are 16 to 24 years old, with the highest concentration in the leisure and hospitality industry, according to the Bureau of Labor Statistics. 
Moreover, the percentage of workers at or below the minimum is on the decline, falling to 3.9 percent in 2014 from the most recent high of 6 percent in 2010.Neumark also points out that many of those receiving the wage aren't poor — there are no workers in 57 percent of families below the poverty line, while 46 percent of poor workers are getting paid more than $10.10 an hour, and 36 percent are making more than $12 an hour, he said."Mandating higher wages for low-wage workers does not necessarily do a good job of delivering benefits to poor families," Neumark wrote. "Simple calculations suggest that a sizable share of the benefits from raising the minimum wage would not go to poor families."Increasing the earned income tax credit is a more effective way to fight poverty, he said. A family of four can get a credit of up to $5,548, which Neumark said is more tailored toward low-income families than hikes in the minimum wage."The earned income tax credit targets low-income families much better, increases employment and reduces poverty, and for all these reasons seems far more effective," he wrote. "Policymakers are likely to do a better job fighting poverty by making the EITC more generous than by raising the minimum wage. Furthermore, using both of these policies together is more effective than minimum wage increases in isolation."Jeff Cox, CNBCJeff Cox is a finance editor with CNBC.com where he covers all aspects of the markets and monitors coverage of the financial markets and Wall Street. His stories are routinely among the most-read items on the site each day as he interviews some of the smartest and most well-respected analysts and advisors in the financial world.Over the course of a journalism career that began in 1987, Cox has covered everything from the collapse of the financial system to presidential politics to local government battles in his native Pennsylvania. | 2024-11-08T09:54:16 | en | train |
10,821,336 | waruqi | 2016-01-01T07:22:21 | Itrace v1.3 released | null | https://github.com/waruqi/itrace | 1 | 0 | null | null | null | no_error | GitHub - hack0z/itrace: 🍰 Trace objc method call for ios and mac | null | hack0z |
itrace
Trace objc method call for ios and mac
If you want to reverse engineer the call flow of certain apps, or the private framework class API call flow behind some features of system apps, you can give this tool a try.
You only need to configure the class names and the app name to hook, and you can trace the call flow of the relevant functionality in real time. Batch hooking of many class names is supported.
Features
Batch-trace all call flows of specified class objects on iOS
Supports iOS for armv6, armv7, arm64 as well as Mac for x86, x64
Automatically detects argument types and prints detailed information for all arguments
Changelog
Added arm64 support; it has only just been brought up and its stability still needs testing.
There was no time to implement arm64 process injection, so for now it uses substrate's hook process, which means you need to install libsubstrate.dylib first.
The armv7 version does not depend on substrate at all.
The arm64 version slightly improves the printing of argument information.
Note: this project is no longer maintained and is provided for reference only.
Configure the classes to hook
Edit the itrace.xml configuration file and add the class names you want to hook:
<?xml version="1.0" encoding="utf-8"?>
<itrace>
<class>
<SSDevice/>
<SSDownload/>
<SSDownloadManager/>
<SSDownloadQueue/>
<CPDistributedMessagingCenter/>
<CPDistributedNotificationCenter/>
<NSString args="0"/>
</class>
</itrace>
Note: try not to hook frequently called classes such as UIView or NSString, otherwise everything becomes very sluggish and hard to operate.
Note: if hooking a certain class crashes midway while printing argument information, you can add an args="0" attribute after the corresponding class name to disable argument printing, which makes things more stable.
If you want no class to print argument information, you can set it directly:
Installing the files
Upload all the files in the itracer directory to /tmp on the iOS device using a phone assistant tool:
/tmp/itracer
/tmp/itrace.dylib
/tmp/itrace.xml
Running a trace
Change into the directory where itracer is located:
Make it executable:
Run the program:
./itracer springboard (springboard is the name of the process to hook; simple fuzzy matching is supported)
Using substrate to inject itrace.dylib for tracing
On newer iOS arm64 devices, injecting itrace.dylib with itracer no longer works and has not been maintained recently. If you want to inject and trace on arm64, you can fall back on substrate and inject itrace.dylib as a substrate plugin.
Then configure itrace.plist to specify which process to inject into; see substrate's plugin documentation for the details.
Place itrace.dylib and itrace.plist into the substrate plugin directory /Library/MobileSubstrate/DynamicLibraries, process the dylib with ldid -S itrace.dylib, and then restart the process you want to trace. The itrace.xml configuration file path becomes /var/root/itrace/itrace.xml.
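For reference, a minimal sketch of what such a filter plist might look like, assuming the standard MobileSubstrate XML filter format; the SpringBoard bundle identifier below is only an example target, not something this README prescribes:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Standard MobileSubstrate filter: load itrace.dylib only into processes
         that have loaded the bundle named below (SpringBoard is just an example). -->
    <key>Filter</key>
    <dict>
        <key>Bundles</key>
        <array>
            <string>com.apple.springboard</string>
        </array>
    </dict>
</dict>
</plist>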
For iOS arm64 devices, build itrace.dylib with the xmake f -p iphoneos -a arm64 command.
View the trace log. Note: the log is actually written to the device log in the Console:
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadQueue downloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadManager downloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadManager _copyDownloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadQueue _sendDownloadStatusChangedAtIndex:]: 0
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadQueue _messageObserversWithFunction:context:]: 0x334c5d51: 0x2fe89de0
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadQueue downloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadManager downloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadManager _copyDownloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownload cachedApplicationIdentifier]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownload status]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownload cachedApplicationIdentifier]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [CPDistributedNotificationCenter postNotificationName:userInfo:]: SBApplicationNotificationStateChanged: {
SBApplicationStateDisplayIDKey = "com.apple.AppStore";
SBApplicationStateKey = 2;
SBApplicationStateProcessIDKey = 5868;
SBMostElevatedStateForProcessID = 2;
}
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [CPDistributedNotificationCenter postNotificationName:userInfo:toBundleIdentifier:]: SBApplicationNotificationStateChanged: {
SBApplicationStateDisplayIDKey = "com.apple.AppStore";
SBApplicationStateKey = 2;
SBApplicationStateProcessIDKey = 5868;
SBMostElevatedStateForProcessID = 2;
}: null
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownloadManager _handleMessage:fromServerConnection:]: 0xe6920b0: 0xe007040
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownloadManager _handleDownloadStatesChanged:]: 0xe6920b0
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownloadManager _copyDownloads]
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownload persistentIdentifier]
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownload _addCachedPropertyValues:]: {
I = SSDownloadPhaseDownloading;
}
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownload _applyPhase:toStatus:]: SSDownloadPhaseDownloading: <SSDownloadStatus: 0xe6b8e80>
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownloadQueue downloadManager:downloadStatesDidChange:]: <SSDownloadManager: 0x41ea60>: (
"<SSDownload: 0xe6bd970>: -4085275246093726486"
)
How to compile
Build the iOS version:
xmake f -p iphoneos
xmake
xmake f -p iphoneos -a arm64
xmake
Build the macOS version:
For more detailed xmake usage, please refer to the xmake documentation
Dependency library: tbox
Contact
Email: [email protected]
Homepage: the TBOOX open source project
QQ groups: 343118190 (TBOOX open source project), 260215194 (ihacker iOS reverse engineering)
WeChat official account: tboox-os
| 2024-11-08T11:01:55 | en | train |
10,821,365 | pmontra | 2016-01-01T07:42:26 | Facebook’s Controversial Free Basics Program Shuts Down in Egypt | null | http://techcrunch.com/2015/12/31/facebooks-controversial-free-basics-program-shuts-down-in-egypt/ | 4 | 0 | null | null | null | no_error | Facebook's Controversial Free Basics Program Shuts Down In Egypt | TechCrunch | 2015-12-31T08:36:48+00:00 | Catherine Shu |
Free Basics, a Facebook program that gives free access to certain Internet services, has been shut down in Egypt. The news comes the week after India’s telecom regulator ordered the suspension of Free Basics as it prepares to hold public hearings on net neutrality.
A report from Reuters cites a government official who said the service was suspended because Facebook had not renewed a necessary permit, and that the suspension was not related to security concerns.
A Facebook spokesperson confirmed the shut down in an emailed statement, but did not disclose the reason behind the suspension:
“We’re disappointed that Free Basics will no longer available in Egypt as of December 30, 2015. Already more than 3 million Egyptians use Free Basics and through Free Basics more than 1 million people who were previously unconnected are now using the internet because of these efforts. We are committed to Free Basics, and we’re going to keep working to serve our community to provide access to connectivity and valuable services. We hope to resolve this situation soon.”
Free Basics was available in Egypt on telecom Etisalat Egypt’s network. The program, which is run by Facebook’s Internet.org initiative, lets subscribers to its telecom partners access a limited group of services and websites, like Wikipedia, Bing search, and BBC News, without data charges.
While Free Basics, which has launched in 37 countries so far, is meant to help more people in emerging economies get online, critics say that it violates net neutrality and question Facebook’s motives, since the services included in Free Basics include both its social network and Facebook Messenger.
The controversy has become especially acute in India, Facebook's second biggest market outside of the United States. Facebook arguably committed a major public relations misstep there with its "Save Free Basics" campaign, which called on Facebook users to send a pre-filled email to the Telecom Regulatory Authority of India supporting the program. The company also purchased newspaper and billboard advertisements to defend Free Basics. Many people, however, found the campaign misleading. In response, Facebook chief executive officer Mark Zuckerberg defended the program in an opinion piece for The Times of India, comparing Free Basics to public libraries, while an Internet.org vice president took part in a Reddit AMA.
Catherine Shu covered startups in Asia and breaking news for TechCrunch. Her reporting has also appeared in the New York Times, the Taipei Times, Barron’s, the Wall Street Journal and the Village Voice. She studied at Sarah Lawrence College and the Columbia Graduate School of Journalism.
| 2024-11-07T23:25:41 | en | train |
10,821,384 | divramis | 2016-01-01T07:57:18 | SEO+:+Hosting+σε+Ελληνικό+server+-+SEO+|+WEB+DESIGN | null | http://paramarketing.gr/seo-hosting-%cf%83%ce%b5-%ce%b5%ce%bb%ce%bb%ce%b7%ce%bd%ce%b9%ce%ba%cf%8c-server/ | 1 | null | null | null | true | no_error | SEO : Hosting σε Ελληνικό server - Divramis | 2014-04-22T17:00:41+03:00 | null |
One of Google's 200 SEO factors is hosting, from both a geographic and a functional standpoint.
If your site targets the Greek market, you should prefer .gr domain name extensions such as .gr, .org.gr, edu.gr, net.gr and .com.gr, and Greek hosting.
If your site targets the international market, you can choose any of the other available domain name extensions (there are more than 700), preferably .com, .net and .org, and hosting closer to your target market. If your market is in Europe, prefer servers in the European country with your largest customer group or target market, and do the same for other countries.
In cases of multiple markets or international markets, besides separate sites (one for each market), you also need several different servers in the corresponding markets.
The Greek hosting market today
After extensive research, and after examining quite a few Greek companies with Greek hosting (incidentally, Greek hosting is one thing and a Greek company with German or American hosting is quite another), I settled on the reliable Greek company TopHost.
Companies that offer noteworthy Greek hosting based in Athens, where 50% of your prospective customers live, include the following:
Papaki.gr
Tophost.gr
Pointer.gr
Dnhost.gr
Otenet.gr
Cyta.gr
The list does not stop here, since you will find very many other hosting companies that have been treated unfairly by Google: although they are very good and provide quality services, they have no visibility in the search engines. Google's algorithm has simply drowned them out!
Should I prefer the GR-IX network?
The GR-IX node is the point at which selected Internet Service Providers in Greece interconnect. Peering is implemented through GR-IX, which allows direct traffic exchange between the Internet providers and consequently achieves very high speeds during communication.
What is the difference between hosting on the Ultra Fast GR-IX network and hosting that is not on a GR-IX network?
The difference is quite significant in terms of network speed. Web hosting in a Greek datacenter is not, on its own, enough to achieve the short response times mentioned above.
If, for example, a user in Greece visits a site on a server in Greece that is not on a GR-IX network, the load time will be comparable to the load time of a page hosted on a server elsewhere in Europe.
By contrast, hosting companies that are on the GR-IX network take advantage of the short domestic distances, and data traffic is routed without passing through nodes abroad. They thus secure, for their customers and for your site, the shortest possible page load times from within Greece.
Response times over GR-IX reach speeds up to 5 times faster.
Indicative ping times from Greece:
to a server in America: 225ms
to a server in Europe: 93ms
in Greece without GR-IX: 80ms
in Greece with GR-IX: 40ms
Does web hosting in Greece raise a site's ranking on Google?
For first-page SEO in Google's search results in Greece, you should take into account that the algorithm that evaluates and ranks sites is significantly influenced by the following factors:
the hosting location of the website, and
the TLD (.gr).
According to Matt Cutts, who is responsible for search results quality at Google, besides the TLD, the hosting country is an important factor, since Google checks the location of the IP of the server on which the website is hosted.
The reason is that they consider that a server hosted, for example, in Greece will also contain content that is more useful to users within Greece compared to the content of a server abroad. In the following video, Google's Matt Cutts confirms the importance that the location of an IP has for SEO at the local level.
Therefore, hosting on a Greek IP and the very fast GR-IX network can positively affect the page's ranking in the search results for internet users within Greece and give you the competitive advantage you are looking for.
Put your site on the first page of Google for good with Greek hosting!
TopHost, the hosting champion in Greece (at least for this month)
All of Tophost's packages are hosted on servers in Greece and give you the ability to take advantage of network speeds up to 5 times faster than the equivalents in America and Europe. Make use of the Ultra Fast GR-IX connectivity of Tophost's servers for the fastest hosting experience!
Which hosting services can be activated on the Tophost Ultra Fast GR-IX network in Greece?
The services activated in the datacenter in Greece are all the Shared Hosting and Reseller Hosting packages on Linux and Windows servers, as well as some of Tophost's Dedicated Servers.
SEO Google and Greek hosting sources:
Podcast: Google and the 200 SEO factors
Google and the 200 SEO factors
The Web Hosting blacklist No 2
The Web Hosting blacklist
SEO Google First Page: How to get onto the first page of Google
Ask for a quote for website development today
If you are about to build or rebuild your website, before you do anything else ask for a quote for building a site or an eshop with the Genesis Theme Framework.
Request a quote for website development or promotion
Free SEO Lessons Worth 129€
Get the video lesson guide SEO GOOGLE First Page, worth 129€, completely free. It is very sensible to devote 20% of your time and resources to your personal education and personal development. Sign up for the video lessons today, completely free!
| 2024-11-08T03:56:03 | el | train |
10,821,392 | Tomte | 2016-01-01T08:05:50 | Teller Reveals His Secrets (2012) | null | http://www.smithsonianmag.com/arts-culture/teller-reveals-his-secrets-100744801/?all?no-ist | 2 | 0 | null | null | null | no_error | Teller Reveals His Secrets | 2012-03-01T00:00:00-05:00 | Smithsonian Magazine |
According to magician Teller, "Neuroscientists are novices at deception. Magicians have done controlled testing in human perception for thousands of years."
Jared McMillen / Aurora Select
In the last half decade, magic—normally deemed entertainment fit only for children and tourists in Las Vegas—has become shockingly respectable in the scientific world. Even I—not exactly renowned as a public speaker—have been invited to address conferences on neuroscience and perception. I asked a scientist friend (whose identity I must protect) why the sudden interest. He replied that those who fund science research find magicians “sexier than lab rats.”
I’m all for helping science. But after I share what I know, my neuroscientist friends thank me by showing me eye-tracking and MRI equipment, and promising that someday such machinery will help make me a better magician.
I have my doubts. Neuroscientists are novices at deception. Magicians have done controlled testing in human perception for thousands of years.
I remember an experiment I did at the age of 11. My test subjects were Cub Scouts. My hypothesis (that nobody would see me sneak a fishbowl under a shawl) proved false and the Scouts pelted me with hard candy. If I could have avoided those welts by visiting an MRI lab, I surely would have.
But magic’s not easy to pick apart with machines, because it’s not really about the mechanics of your senses. Magic’s about understanding—and then manipulating—how viewers digest the sensory information.
I think you’ll see what I mean if I teach you a few principles magicians employ when they want to alter your perceptions.
1. Exploit pattern recognition. I magically produce four silver dollars, one at a time, with the back of my hand toward you. Then I allow you to see the palm of my hand empty before a fifth coin appears. As Homo sapiens, you grasp the pattern, and take away the impression that I produced all five coins from a hand whose palm was empty.
2. Make the secret a lot more trouble than the trick seems worth. You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest. My partner, Penn, and I once produced 500 live cockroaches from a top hat on the desk of talk-show host David Letterman. To prepare this took weeks. We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don’t hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can’t cling to) and worked out a devious routine for sneaking the compartment into the hat. More trouble than the trick was worth? To you, probably. But not to magicians.
3. It’s hard to think critically if you’re laughing. We often follow a secret move immediately with a joke. A viewer has only so much attention to give, and if he’s laughing, his mind is too busy with the joke to backtrack rationally.
4. Keep the trickery outside the frame. I take off my jacket and toss it aside. Then I reach into your pocket and pull out a tarantula. Getting rid of the jacket was just for my comfort, right? Not exactly. As I doffed the jacket, I copped the spider.
5. To fool the mind, combine at least two tricks. Every night in Las Vegas, I make a children’s ball come to life like a trained dog. My method—the thing that fools your eye—is to puppeteer the ball with a thread too fine to be seen from the audience. But during the routine, the ball jumps through a wooden hoop several times, and that seems to rule out the possibility of a thread. The hoop is what magicians call misdirection, a second trick that “proves” the first. The hoop is genuine, but the deceptive choreography I use took 18 months to develop (see No. 2—More trouble than it’s worth).
6. Nothing fools you better than the lie you tell yourself. David P. Abbott was an Omaha magician who invented the basis of my ball trick back in 1907. He used to make a golden ball float around his parlor. After the show, Abbott would absent-mindedly leave the ball on a bookshelf while he went to the kitchen for refreshments. Guests would sneak over, heft the ball and find it was much heavier than a thread could support. So they were mystified. But the ball the audience had seen floating weighed only five ounces. The one on the bookshelf was a heavy duplicate, left out to entice the curious. When a magician lets you notice something on your own, his lie becomes impenetrable.
7. If you are given a choice, you believe you have acted freely. This is one of the darkest of all psychological secrets. I’ll explain it by incorporating it (and the other six secrets you’ve just learned) into a card trick worthy of the most annoying uncle.
THE EFFECT I cut a deck of cards a couple of times, and you glimpse flashes of several different cards. I turn the cards facedown and invite you to choose one, memorize it and return it. Now I ask you to name your card. You say (for example), “The queen of hearts.” I take the deck in my mouth, bite down and groan and wiggle to suggest that your card is going down my throat, through my intestines, into my bloodstream and finally into my right foot. I lift that foot and invite you to pull off my shoe and look inside. You find the queen of hearts. You’re amazed. If you happen to pick up the deck later, you’ll find it’s missing the queen of hearts.
THE SECRET(S) First, the preparation: I slip a queen of hearts in my right shoe, an ace of spades in my left and a three of clubs in my wallet. Then I manufacture an entire deck out of duplicates of those three cards. That takes 18 decks, which is costly and tedious (No. 2—More trouble than it’s worth).
When I cut the cards, I let you glimpse a few different faces. You conclude the deck contains 52 different cards (No. 1—Pattern recognition). You think you’ve made a choice, just as when you choose between two candidates preselected by entrenched political parties (No. 7—Choice is not freedom).
Now I wiggle the card to my shoe (No. 3—If you’re laughing...). When I lift whichever foot has your card, or invite you to take my wallet from my back pocket, I turn away (No. 4—Outside the frame) and swap the deck for a normal one from which I’d removed all three possible selections (No. 5—Combine two tricks). Then I set the deck down to tempt you to examine it later and notice your card missing (No. 6—The lie you tell yourself).
Magic is an art, as capable of beauty as music, painting or poetry. But the core of every trick is a cold, cognitive experiment in perception: Does the trick fool the audience? A magician’s data sample spans centuries, and his experiments have been replicated often enough to constitute near-certainty. Neuroscientists—well intentioned as they are—are gathering soil samples from the foot of a mountain that magicians have mapped and mined for centuries. MRI machines are awesome, but if you want to learn the psychology of magic, you’re better off with Cub Scouts and hard candy.
| 2024-11-08T04:01:53 | en | train |
10,821,399 | egfx | 2016-01-01T08:14:28 | Converts Elixir to JavaScript | null | https://github.com/bryanjos/elixirscript | 3 | 0 | null | null | null | no_error | GitHub - elixirscript/elixirscript: Converts Elixir to JavaScript | null | elixirscript |
The goal is to convert a subset (or full set) of Elixir code to JavaScript, providing the ability to write JavaScript in Elixir. This is done by taking the Elixir AST and converting it into JavaScript AST and then to JavaScript code. This is done using the Elixir-ESTree library.
Documentation for current release
Requirements
Erlang 20 or greater
Elixir 1.6 or greater (must be compiled with Erlang 20 or greater)
Node 8.2.1 or greater (only for development)
Usage
Add dependency to your deps in mix.exs:
{:elixir_script, "~> x.x"}
Add elixir_script to list of mix compilers in mix.exs
Also add elixir_script configuration
def project do
[
app: :my_app,
# ...
# Add elixir_script as a compiler
compilers: Mix.compilers ++ [:elixir_script],
# Our elixir_script configuration
elixir_script: [
# Entry module. Can also be a list of modules
input: MyEntryModule,
# Output path. Either a path to a js file or a directory
output: "priv/elixir_script/build/elixirscript.build.js"
]
]
end
Run mix compile
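For illustration only, here is a minimal sketch of what an entry module might contain. The module name matches the MyEntryModule placeholder from the configuration above, and the start/2 function used as the entry point is an assumption for this example rather than something this README specifies:

defmodule MyEntryModule do
  # Hypothetical entry module: elixir_script compiles this module (and the
  # modules it references) into the JavaScript bundle written to the :output path.
  def start(_type, _args) do
    greet("world")
  end

  defp greet(name) do
    "Hello, " <> name <> "!"
  end
end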
Examples
Application
ElixirScript Todo Example
Library
ElixirScript React
Starter kit
Elixirscript Starter Kit
Development
# Clone the repo
git clone [email protected]:bryanjos/elixirscript.git
#Get dependencies
make deps
# Compile
make
# Test
make test
Communication
#elixirscript on the elixir-lang Slack
Contributing
Please check the CONTRIBUTING.md
| 2024-11-07T20:20:08 | en | train |
10,821,439 | codingdefined | 2016-01-01T08:50:37 | Capture Screen of Web Pages Through URL in Nodejs | null | http://www.codingdefined.com/2016/01/capture-screen-of-web-pages-through-url.html | 3 | 0 | null | null | null | no_error | Capture Screen of Web Pages through URL in Nodejs | null | null |
In this post we will be discussing how to capture a screenshot of web pages through a URL in Node.js. The following code snippet will convert any web URL into a JPEG image. We will be using PhantomJS, which is a headless WebKit scriptable with a JavaScript API. Since PhantomJS is using WebKit, a real layout and rendering engine, it can capture a web page as a screenshot.
To use PhantomJS in Node.js we will be using phantomjs-node (the phantom npm module) which acts as a bridge between PhantomJS and Node.js. To use this module you need to install PhantomJS and it should be available in the PATH environment variable. If you get any error while installing please refer to How to solve Cannot find module weak in Nodejs. Then install the phantom module by using the command npm install phantom.
Code :
var phantom = require('phantom');

// Read the URL and the output file name from the command line
var cLArguments = process.argv.slice(2);
var url;
var file = 'screenshot.jpg'; // default output name if none is given

if(cLArguments.length >= 1) {
  url = cLArguments[0];
}
if(cLArguments.length > 1) {
  file = cLArguments[1] + '.jpg';
}
console.log(url + ' ' + file);

phantom.create(function(ph) {
  console.log('Inside Phantom');
  ph.createPage(function(page) {
    console.log('Inside Create Page');
    // Render the page at a fixed desktop resolution
    page.set('viewportSize', {width: 1920, height: 1080});
    page.open(url, function(status) {
      if(status === 'success') {
        console.log('Success');
        page.render(file); // write the screenshot to disk
      } else {
        console.log('Failed to open ' + url);
      }
      ph.exit(); // always shut PhantomJS down so the process can end
    })
  })
}, {
  dnodeOpts: {
    weak: false
  }
})
In the above code we get the URL and the name of the output file from the command line. Then we start the PhantomJS process and create a web page out of it. Next we set a viewport with the desired width and height. After that we open the URL and, if it succeeds, render the page to the image file.
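Assuming the script above is saved as capture.js (the file name is arbitrary), a typical run passes the URL and the output name, for example:

node capture.js https://www.google.com google-home

This would save the screenshot as google-home.jpg in the current directory; if you omit the second argument, the code above falls back to screenshot.jpg.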
Please Like and Share the CodingDefined.com Blog, if you find it interesting and helpful.
| 2024-11-07T20:14:12 | en | train |
10,821,451 | edward | 2016-01-01T08:59:59 | Where are we in the Python 3 transition? | null | http://www.snarky.ca/the-stages-of-the-python-3-transition | 3 | 0 | null | null | null | no_error | Where are we in the Python 3 transition? | 2015-12-31T04:35:00.000Z | Brett Cannon |
Dec 30, 2015
3 min read
Python
The Kübler-Ross model outlines the stages that one goes through in dealing with death:
Denial
Anger
Bargaining
Depression
Acceptance
This is sometimes referred to as the five stages of grief. Some have jokingly called them the five stages of software development. I think it actually matches the Python community's transition to Python 3 rather well, both what has occurred and where we currently are (summary: the community is at least in stage 4 with some lucky to already be at the end in stage 5).
Denial
When Python 3 first came out and we said Python 2.7 was going to be the last release of Python 2, I think some people didn't entirely believe us. Others believed that Python 3 didn't offer enough to bother switching to it from Python 2, and so they ignored Python 3's existence. Basically the Python development team and people willing to trust that Python 3 wasn't some crazy experiment that we were going to abandon, ported their code to Python 3 while everyone else waited.
Anger
When it became obvious that the Python development team was serious about Python 3, some people got really upset. There were accusations of us not truly caring about the community and ignoring that the transition was hurting the community irreparably. This was when whispers of forking Python 2 to produce a Python 2.8 release came about, although that obviously never occurred.
Bargaining
Once people realized that being mad about Python 3 wasn't going to solve anything, the bargaining began. People came to the Python development team asking for features to be added to Python 3 to make transitioning easier, such as bringing back the u string prefix in Python 3. People also made requests for exceptions to Python 2's "no new features" policy to allow Python 2 to stay a feasible version of Python longer while people transitioned (this all landed in Python 2.7.9). We also extended the maintenance timeline of Python 2.7 from 5 years to 10 years to give people until 2020 to transition before people will need to pay for Python 2 support (as compared to the free support that the Python development team has provided).
Depression
7 years into the life of Python 3, it seems a decent number of people have reached the point of depression about the transition. With Python 2.7 not about to be pulled out from underneath them, people don't feel abandoned by the Python development team. Python 3 also has enough new features that are simply not accessible from Python 2 that people want to switch. And with porting Python 2 code to run on Python 2/3 simultaneously heavily automated and being doable on a per-file basis, people no longer seem to be averse to porting their code like they once were (although it admittedly still takes some effort).
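As a rough sketch of what that per-file, runs-on-both style typically looks like (a generic example, not taken from any particular project):

# A single module written to run unchanged on Python 2.7 and Python 3.
from __future__ import absolute_import, division, print_function, unicode_literals

import sys


def greet(name):
    # print() and true division behave the same on both versions
    # thanks to the __future__ imports above.
    print("Hello, {}!".format(name))
    return 7 / 2  # == 3.5 on both Python 2.7 and Python 3


if __name__ == "__main__":
    greet(sys.argv[1] if len(sys.argv) > 1 else "world")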
Unfortunately people are running up against the classic problem of lacking buy-in from management. I regularly hear from people that they would switch if they could, but their manager(s) don't see any reason to switch and so they can't (or that they would do per-file porting, but they don't think they can convince their teammates to maintain the porting work). This can be especially frustrating if you use Python 3 in personal projects but are stuck on Python 2 at work. Hopefully Python 3 will continue to offer new features that will eventually entice reluctant managers to switch. Otherwise financial arguments might be necessary in the form of pointing out that porting to Python 3 is a one-time cost while staying on Python 2 past 2020 will be a perpetual cost for support to some enterprise provider of Python and will cost more in the long-term (e.g., paying for RHEL so that someone supports your Python 2 install past 2020). Have hope, though, that you can get buy-in from management for porting to Python 3 since others have and thus reached the "acceptance" stage.
Acceptance
While some people feel stuck in Python 2 at work and are "depressed" over it, others have reached the point of having transitioned their projects and accepted Python 3, both at work and in personal projects. Various numbers I have seen this year suggest about 20% of the scientific Python community and 20% of the Python web community have reached this point (I have yet to see reliable numbers for the Python community as a whole; PyPI is not reliable enough for various reasons). I consistently hear from people using Python 3 that they are quite happy; I have yet to hear from someone who has used Python 3 that they think it is a worse language than Python 2 (people are typically unhappy with the transition process and not Python 3 itself).
With five years left until people will need to pay for Python 2 support, I'm glad that the community seems to have reached either the "depression" or "acceptance" stages and has clearly moved beyond the "bargaining" stage. Hopefully in the next couple of years, managers across the world will realize that switching to Python 3 is worth it and not as costly as they think it is compared to having to actually pay for Python 2 support and thus more people will get to move to the "acceptance" stage.
| 2024-11-08T12:09:11 | en | train |
10,821,588 | chei0aiV | 2016-01-01T10:57:27 | SFC 2015 YIR: Laying a Foundation for Growing Outreachy | null | https://sfconservancy.org/blog/2015/dec/31/yir-outreachy/ | 2 | 0 | null | null | null | no_error | 2015 YIR: Laying a Foundation for Growing Outreachy | null | Marina Zhurakhinskaya |
by Marina Zhurakhinskaya on December 31, 2015
[ This blog post is the fifth in our series, Conservancy
2015: Year in Review. ]
Marina Zhurakhinskaya, one of the coordinators of Conservancy's Outreachy program, writes about all the exciting things that happened in Outreachy's first year in its new home at Conservancy.
2015 was a year of transition and expansion
for Outreachy, which was
only possible with the fiscal and legal support Conservancy provided us. Becoming a Conservancy Supporter will ensure
the future in which more free software success stories like Outreachy's are
possible.
Outreachy helps people from groups underrepresented in free software get
involved through paid, mentored, remote internships with a variety of free
software projects. After successfully growing as the GNOME Foundation
project for four years, Outreachy needed a new home which could support its
further growth, be designed to work with a multitude of free software
projects, and provide extensive accounting services. With the current
participation numbers of about 35 interns and 15 sponsoring organizations a
round, and two rounds a year, Outreachy requires processing about 210 intern
payments and 30 sponsor invoices a year. Additionally, Outreachy requires
processing travel reimbursements, preparing tax documents, and providing
letters of participation for some interns. The legal entity hosting Outreachy
needs to enter into participation agreements with interns and mentors, as
well as into custom sponsorship agreements with some sponsors.
In February,
Outreachy announced
its transition to Conservancy and adopted its current name. The
alternative of creating its own non-profit was prohibitive because of the
overhead and time commitment that would have required. Conservancy was a
perfect new home, which provided a lot of the services Outreachy needed and
allowed seamlessly continuing the program throughout 2015. The transition to
Conservancy was completed
in May. 30 interns were accepted for the May-August round
with Karen Sandler, Sarah Sharp, and Marina Zhurakhinskaya serving as
Outreachy's Project Leadership Committee and
coordinators.
With the program's needs met, we were able to turn our minds to expanding
the reach of the program. In September,
Outreachy announced the
expansion to people of color underrepresented in tech in the U.S., while
continuing to be open to cis and trans women, trans men, and genderqueer
people worldwide. This expansion was guided by the lack of diversity
revealed by
the employee
demographic data released by many leading U.S. tech companies. Three new
coordinators, Cindy Pallares-Quezada, Tony Sebro, and Bryan Smith joined
Karen Sandler, Sarah Sharp, and Marina Zhurakhinskaya to help with the
expansion. 37 interns were accepted for
the December-March
round.
One of the most important measures of success for Outreachy is its alums
speaking at free software conferences. In 2015, 27 alums had full-time
sessions at conferences such as linux.conf.au, LibrePlanet, FOSSASIA,
OpenStack Summit, Open Source Bridge, FISL, and LinuxCon. Isabel Jimenez
gave a keynote
about the benefits of contributing to open source at All Things Open. In a
major recognition for an Outreachy alum, Yan Zhu
was named
among the women to watch in IT security by SC Magazine.
Outreachy coordinators are also being recognized for their contributions
to free and open source software. Sarah
Sharp won the
inaugural Women in Open Source Award, sponsored by Red Hat, and
generously donated her stipend to Outreachy. Marina
Zhurakhinskaya won an
O'Reilly Open Source Award.
Outreachy coordinators, mentors, and alums promoted Outreachy and
diversity in free and open source software in the following articles and
conference sessions:
Karen Sandler spoke about
Outreachy in her FOSDEM
and FISL
keynotes
Marina Zhurakhinskaya moderated and Cindy Pallares-Quezada
participated in the panel
about opportunities in open source at the ACM Richard Tapia Celebration
of Diversity in Computing
Mentor and former career
advisor Sumana Harihareswara wrote about the triumph of
Outreachy, with examples from its history
Alum Sucheta Ghoshal spoke about
her experience with
Outreachy at LibrePlanet and alums Jessica Canepa, Barbara Miller, and
Adam Okoye spoke about their experience
with Outreachy at Open Source Bridge
Linux kernel coordinator
Julia Lawall moderated the panel on
Outreachy internships with the Linux kernel at LinuxCon North America;
panel participants included Karen Sandler, mentors Greg Kroah-Hartman, Jes
Sorensen, and Konrad Wilk, and alums Lidza Louina, Lisa Nguyen, and Elena
Ufimtseva
Marina Zhurakhinskaya
was interviewed about Outreachy and her other diversity
work by
Opensource.com and, for the Ada Lovelace Day, by the Free
Software Foundation
Weaving their work on
Outreachy into their greater involvement in free software diversity efforts,
Sarah Sharp wrote about what
makes a good community on her blog, Marina Zhurakhinskaya gave
a keynote
on effective outreach at Fossetcon, and Cindy Pallares-Quezada wrote an
article on diversity
in open source highlights from 2015 for
Opensource.com
Outreachy is made
possible thanks to the contributions of its many coordinators, mentors, and
sponsors. For May and December rounds, with the credit given for the highest
level of sponsorship, Intel and Mozilla sponsored Outreachy at the Ceiling
Smasher level, Red Hat at the Equalizer level, Google, Hewlett-Packard,
Linux Foundation, and OpenStack Foundation at the Promoter level, and
Cadasta, Electronic Frontier Foundation, Endless, Free Software Foundation,
GNOME, Goldman Sachs, IBM, M-Lab, Mapbox, Mapzen, Mifos, Open Source
Robotics Foundation, Perl, Samsung, Twitter, VideoLAN, Wikimedia Foundation,
and Xen Project at the Includer level. Additionally, Red Hat supports
Outreachy by contributing Marina Zhurakhinskaya's time towards the
organization of the program and the GNOME Foundation provides infrastructure
support. However, first and foremost, Outreachy is possible thanks to
Conservancy being in place to be its non-profit home and handle the fiscal
and legal needs of the program.
Conservancy's service of helping free software projects establish a
foundation for growth without the prohibitive overhead of creating their own
non-profits is a cornerstone of the free software community. We need
Conservancy securely in place to continue providing exceptional support for
its 33 member projects and to offer this support to new projects. To help
free software thrive, please join Outreachy's Project Leadership Committee
members Karen Sandler, Sarah Sharp, and Marina Zhurakhinskaya
in becoming a
Conservancy Supporter.
| 2024-11-08T06:03:00 | en | train |
10,821,686 | networked | 2016-01-01T12:06:14 | PCem - an emulator for old x86 computers | null | http://pcem-emulator.co.uk/ | 3 | 0 | null | null | null | no_error | PCem | null | null |
19th December 2021
Michael Manley is taking over as project maintainer, and will be responsible for development and future direction of the project.
The forums have also been reopened.
14th June 2021
Just a quick note to say that I (Sarah Walker) have decided to call it quits. Thanks to those who sent supportive messages, they're genuinely appreciated. Also thanks to those who have supported me and the project over the last decade or so.
If anyone is interested in taking over the project & github repo, please contact me.
1st December 2020
PCem v17 released. Changes from v16 :
New machines added - Amstrad PC5086, Compaq Deskpro, Samsung SPC-6033P, Samsung SPC-6000A, Intel VS440FX, Gigabyte GA-686BX
New graphics cards added - 3DFX Voodoo Banshee, 3DFX Voodoo 3 2000, 3DFX Voodoo 3 3000, Creative 3D Blaster Banshee, Kasan Hangulmadang-16, Trident TVGA9000B
New CPUs - Pentium Pro, Pentium II, Celeron, Cyrix III
VHD disc image support
Numerous bug fixes
A few other bits and pieces
Thanks to davide78, davefiddes, Greatpsycho, leilei, sards3, shermanp, tigerforce and twilen for contributions towards this release.
19th April 2020
PCem v16 released. Changes from v15 :
New machines added - Commodore SL386SX-25, ECS 386/32, Goldstar GDC-212M, Hyundai Super-286TR, IBM PS/1 Model 2133 (EMEA 451), Itautec Infoway Multimidia, Samsung SPC-4620P, Leading Edge Model M
New graphics cards added - ATI EGA Wonder 800+, AVGA2, Cirrus Logic GD5428, IBM 1MB SVGA Adapter/A
New sound card added - Aztech Sound Galaxy Pro 16 AB (Washington)
New SCSI card added - IBM SCSI Adapter with Cache
Support FPU emulation on pre-486 machines
Numerous bug fixes
A few other bits and pieces
Thanks to EluanCM, Greatpsycho, John Elliott, and leilei for contributions towards this release.
19th May 2019
PCem v15 released. Changes from v14 :
New machines added - Zenith Data SupersPort, Bull Micral 45, Tulip AT Compact, Amstrad PPC512/640, Packard Bell PB410A, ASUS P/I-P55TVP4, ASUS P/I-P55T2P4, Epox P55-VA, FIC VA-503+
New graphics cards added - Image Manager 1024, Sigma Designs Color 400, Trigem Korean VGA
Added emulation of AMD K6 family and IDT Winchip 2
New CPU recompiler. This provides several optimisations, and the new design allows for greater portability and more scope for optimisation in the future
Experimental ARM and ARM64 host support
Read-only cassette emulation for IBM PC and PCjr
Numerous bug fixes
Thanks to dns2kv2, Greatpsycho, Greg V, John Elliott, Koutakun, leilei, Martin_Riarte, rene, Tale and Tux for contributions towards this release.
20th April 2018
PCem v14 released. Changes from v13.1 :
New machines added - Compaq Portable Plus, Compaq Portable II, Elonex PC-425X, IBM PS/2 Model 70 (types 3 & 4), Intel Advanced/ZP, NCR PC4i, Packard Bell Legend 300SX, Packard Bell PB520R, Packard Bell PB570, Thomson TO16 PC, Toshiba T1000, Toshiba T1200, Xi8088
New graphics cards added - ATI Korean VGA, Cirrus Logic CL-GD5429, Cirrus Logic CL-GD5430, Cirrus Logic CL-GD5435, OAK OTI-037, Trident TGUI9400CXi
New network adapters added - Realtek RTL8029AS
Iomega Zip drive emulation
Added option for default video timing
Added dynamic low-pass filter for SB16/AWE32 DSP playback
Can select external video card on some systems with built-in video
Can use IDE hard drives up to 127 GB
Can now use 7 SCSI devices
Implemented CMPXCHG8B on Winchip. Can now boot Windows XP on Winchip processors
CD-ROM emulation on OS X
Tweaks to Pentium and 6x86 timing
Numerous bug fixes
Thanks to darksabre76, dns2kv2, EluanCM, Greatpsycho, ja've, John Elliott, leilei and nerd73 for contributions towards this release.
17th December 2017
PCem v13.1 released. This is a quick bugfix release, with the following changes from v13 :
Minor recompiler tweak, fixed slowdown in some situations (mainly seen on Windows 9x just after booting)
Fixed issues with PCJr/Tandy sound on some Sierra games
Fixed plasma display on Toshiba 3100e
Fixed handling of configurations with full stops in the name
Fixed sound output gain when using OpenAL Soft
Switched to using OpenAL Soft by default
12th December 2017
Re-uploaded v13 Windows archive with missing mda.rom included - please re-download if you've been having issues.
11th December 2017
PCem v13 released. Changes from v12 :
New machines added - Atari PC3, Epson PC AX, Epson PC AX2e, GW-286CT GEAR, IBM PS/2 Model 30-286, IBM PS/2 Model 50, IBM PS/2 Model 55SX, IBM PS/2 Model 80, IBM XT Model 286, KMX-C-02, Samsung SPC-4200P, Samsung SPC-4216P, Toshiba 3100e
New graphics cards - ATI Video Xpression, MDSI Genius
New sound cards added - Disney Sound Source, Ensoniq AudioPCI (ES1371), LPT DAC, Sound Blaster PCI 128
New hard drive controllers added - AT Fixed Disk Adapter, DTC 5150X, Fixed Disk Adapter (Xebec), IBM ESDI Fixed Disk Controller, Western Digital WD1007V-SE1
New SCSI adapters added - Adaptec AHA-1542C, BusLogic BT-545S, Longshine LCS-6821N, Rancho RT1000B, Trantor T130B
New network adapters added - NE2000 compatible
New cross-platform GUI
Voodoo SLI emulation
Improvements to Sound Blaster emulation
Improvements to Pentium timing
Various bug fixes
Minor optimisations
Thanks to AmatCoder, basic2004, bit, dns2k, ecksemess, Greatpsycho, hOMER247, James-F, John Elliott, JosepMa, leilei, neozeed, ruben_balea, SA1988 and tomaszkam for contributions towards this release.
18th February 2017
PCem v12 released. Changes from v11 :
New machines added - AMI 386DX, MR 386DX
New graphics cards - Plantronics ColorPlus, Wyse WY-700, Obsidian SB50, Voodoo 2
CPU optimisations - up to 50% speedup seen
3DFX optimisations
Improved joystick emulation - analogue joystick up to 8 buttons, CH Flightstick Pro, ThrustMaster FCS, SideWinder pad(s)
Mouse can be selected between serial, PS/2, and IntelliMouse
Basic 286/386 prefetch emulation - 286 & 386 performance much closer to real systems
Improved CGA/PCjr/Tandy composite emulation
Various bug fixes
Thanks to Battler, leilei, John Elliott, Mahod, basic2004 and ecksemmess for contributions towards this release.
7th June 2016
Updated v11 binary - anyone who's been having problems with Voodoo emulation should re-download.
5th June 2016
PCem v11 released. Changes from v10.1 :
New machines added - Tandy 1000HX, Tandy 1000SL/2, Award 286 clone, IBM PS/1 model 2121
New graphics card - Hercules InColor
3DFX recompiler - 2-4x speedup over previous emulation
Added Cyrix 6x86 emulation
Some optimisations to dynamic recompiler - typically around 10-15% improvement over v10, more when MMX used
Fixed broken 8088/8086 timing
Fixes to Mach64 and ViRGE 2D blitters
XT machines can now have less than 640kb RAM
Added IBM PS/1 audio card emulation
Added Adlib Gold surround module emulation
Fixes to PCjr/Tandy PSG emulation
GUS now in stereo
Numerous FDC changes - more drive types, FIFO emulation, better support of XDF images, better FDI support
CD-ROM changes - CD-ROM IDE channel now configurable, improved disc change handling, better volume control support
Now directly supports .ISO format for CD-ROM emulation
Fixed crash when using Direct3D output on Intel HD graphics
Various other fixes
Thanks to Battler, SA1988, leilei, Greatpsycho, John Elliott, RichardG867, ecksemmess and cooprocks123e for contributions towards this release.
7th November 2015
PCem v10.1 released. This is a minor bugfix release. Changes from v10 :
Fixed buffer overruns in PIIX and ET4000/W32p emulation
Add command line options to start in fullscreen and to specify config file
Emulator doesn't die when the CPU jumps to an unexecutable address
Removed Voodoo memory dump on exit
24th October 2015
PCem v10 released. Changes from v9 :
New machines - AMI XT clone, VTech Laser Turbo XT, VTech Laser XT3, Phoenix XT clone, Juko XT clone, IBM PS/1 model 2011, Compaq Deskpro 386, DTK 386SX clone, Phoenix 386 clone, Intel Premiere/PCI, Intel Advanced/EV
New graphics cards - IBM VGA, 3DFX Voodoo Graphics
Experimental dynamic recompiler - up to 3x speedup
Pentium and Pentium MMX emulation
CPU fixes - fixed issues in Unreal, Half-Life, Final Fantasy VII, Little Big Adventure 2, Windows 9x setup, Coherent, BeOS and others
Improved FDC emulation - more accurate, supports FDI images, supports 1.2MB 5.25" floppy drive emulation, supports write protect correctly
Internal timer improvements, fixes sound in some games (eg Lion King)
Added support for up to 4 IDE hard drives
MIDI OUT code now handles sysex commands correctly
CD-ROM code now no longer crashes Windows 9x when CD-ROM drive empty
Fixes to ViRGE, S3 Vision series, ATI Mach64 and OAK OTI-067 cards
Various other fixes/changes
Thanks to te_lanus, ecksemmess, nerd73, GeeDee, Battler, leilei and kurumushi for contributions towards this release.
4th October 2014
PCem v9 released. Changes from v8.1 :
New machines - IBM PCjr
New graphics cards - Diamond Stealth 3D 2000 (S3 ViRGE/325), S3 ViRGE/DX
New sound cards - Innovation SSI-2001 (using ReSID-FP)
CPU fixes - Windows NT now works, OS/2 2.0+ works better
Fixed issue with port 3DA when in blanking, DOS 6.2/V now works
Re-written PIT emulation
IRQs 8-15 now handled correctly, Civilization no longer hangs
Fixed vertical axis on Amstrad mouse
Serial fixes - fixes mouse issues on Win 3.x and OS/2
New Windows keyboard code - should work better with international keyboards
Changes to keyboard emulation - should fix stuck keys
Some CD-ROM fixes
Joystick emulation
Preliminary Linux port
Thanks to HalfMinute, SA1988 and Battler for contributions towards this release.
3rd January 2014
PCem v8.1 released. This fixes a number of issues in v8.
20th December 2013
PCem v8 released. Changes from v0.7 :
New machines - SiS496/497, 430VX
WinChip emulation (including MMX emulation)
New graphics cards - S3 Trio64, Trident TGUI9440AGi, ATI VGA Edge-16, ATI VGA Charger, OAK OTI-067, ATI Mach64
New sound cards - Adlib Gold, Windows Sound System, SB AWE32
Improved GUS emulation
MPU-401 emulation (UART mode only) on SB16 and AWE32
Fixed DMA bug, floppy drives work properly in Windows 3.x
Fixed bug in FXAM - fixes Wolf 3D, Dogz, some other stuff as well
Other FPU fixes
Fixed serial bugs, mouse no longer disappears in Windows 9x hardware detection
Major reorganisation of CPU emulation
Direct3D output mode
Fullscreen mode
Various internal changes
13th July 2013
PCem is now in source control at http://www.retrosoftware.co.uk/hg/pcem.
3rd August 2012
PCem v0.7 released. Windows 98 now works, Win95 more stable, more machines + graphics cards, and a huge number of fixes.
19th December 2011
PCem v0.6 released. Windows 95 now works, FPU emulation, and loads of other stuff.
23rd September 2011
Uploaded a fixed version of PCem v0.5, which has working sound.
21st September 2011
PCem v0.5 released. Loads of fixes + new features in this version.
13th February 2011
PCem v0.41a released. This fixes a disc corruption bug, and re-adds (poor) composite colour emulation.
1st February 2011
PCem v0.41 released. This fixes some embarrassing bugs in v0.4, as well as a few games.
27th July 2010
PCem v0.4 released. 386/486 emulation (buggy), GUS emulation, accurate 8088/8086 timings, and lots of other changes.
30th July 2008
PCem v0.3 released. This adds more machines, SB Pro emulation, SVGA emulation, and some other stuff.
14th October 2007
PCem v0.2a released. This is a bugfix release over v0.2.
10th October 2007
PCem v0.2 released. This adds PC1640 and AT emulation, 286 emulation, EGA/VGA emulation, Soundblaster emulation, hard disc emulation, and some bugfixes.
19th August 2007
PCem archive updated with (hopefully) bugfixed version.
15th August 2007
PCem v0.1 released. This is a new emulator for various old XT-based PCs.
| 2024-11-07T23:20:49 | en | train |
10,821,721 | asadjb | 2016-01-01T12:31:58 | Dropletconn: CLI utility to quickly connect to your Digital Ocean droplets | null | https://github.com/theonejb/dropletconn | 3 | 0 | null | null | null | no_error | GitHub - theonejb/dropletconn: A simple golang base CLI app to list and connect to your DigitalOcean droplets | null | theonejb | dropletconn
List and connect to your Digital Ocean droplets instantly (without a .ssh/config)
Quick Start
go get github.com/theonejb/dropletconn
go install github.com/theonejb/dropletconn
dropletconn config
dropletconn list
dropletconn connect <NAME OF DROPLET>
Installing and Configuring dropletconn
Listing your droplets
Connecting to a droplet
Usage
To use, go get github.com/theonejb/dropletconn and go install github.com/theonejb/dropletconn. dropletconn is the
name of the generated binary. I personally have it aliased to dc using export dc=dropletconn in my .zshrc file since
I use it at least 20 times a day to connect to various servers at work.
You will also need to generate a token from Digital Ocean API Tokens
that dropletconn will use to get a list of droplets available in your account. For safety, use a Read only scoped token.
Available commands and their usage are described here. Some commands have a short version as well, which is what you see after the OR pipe (|) in their help text below.
config: Generate config file that stores the API token and other settings. This needs to be generated before the rest of
the commands can be used
list | l [<FILTER EXPRESSION>]..: Lists all droplets from your account. You can optionally pass a number of filter expressions.
If you do, only droplets whose names or IPs contain at least one of the given filter expressions will be listed
connect | c NAME: Connect to the droplet with the given name
run | r <FILTER EXPRESSION> <COMMAND>: Runs the given command on all droplets matching the filter expression. The filter expression is required, and only one filter
expression can be given
You can pass an optional --force-update flag. By default, the list of droplets is cached for a configurable duration (as set in
the config file). Passing this flag forces an update of this list before running the command.
The list command also accepts an optional --list-public-ip flag. If this flag is used, only the public IP of the nodes is printed, nothing else.
This is in case you want a list of all IPs in your DO account. I needed this to create a Fabric script.
Note: The way flags are parsed, you have to list your flags before your commands. For example, you can not do dropletconn list --list-public-ip.
Instead, you need to do dropletconn --list-public-ip list. Same for the --force-update flag.
To enable completion of droplet names, source the included Zsh completion file. Credit for that script goes to James Coglan. I copied it from his blog
(https://blog.jcoglan.com/2013/02/12/tab-completion-for-your-command-line-apps/).
| 2024-11-08T08:52:03 | en | train |
10,821,797 | SimplyUseless | 2016-01-01T13:13:06 | Web attack knocks BBC websites offline | null | http://www.bbc.co.uk/news/technology-35204915 | 2 | 0 | null | null | null | no_error | Web attack knocks BBC websites offline | 2015-12-31T10:41:54.000Z | BBC News | All the BBC's websites were unavailable early on Thursday morning because of a large web attack.The problems began about 0700 GMT and meant visitors to the site saw an error message rather than webpages.Sources within the BBC said the sites were offline thanks to what is known as a "distributed denial of service" attack.An earlier statement tweeted by the BBC, external laid the blame for problems on a "technical issue".In the message the corporation said it was aware of the ongoing trouble and was working to fix it so sites, services and pages were reachable again. At midday it released another statement saying that the BBC website was now "operating normally". "We apologise for any inconvenience you may have experienced," it said.The BBC has yet to confirm or deny that such an attack was responsible for the problems.It is now believed that a web attack technique known as a "distributed denial of service" was causing the patchy response. This aims to knock a site offline by swamping it with more traffic than it can handle. The attack on the BBC hit the main website as well as associated services including the main iPlayer catch-up service and iPlayer Radio app which were also not working properly. Social media reaction to the trouble was swift. Many urged the BBC to get the site back up quickly and lamented how long it was taking to fix the technical troubles.See more of the tweetsBy 1030 GMT the site was largely working again though some pages and indexes took longer than normal to load. The BBC's crop of websites have suffered other technical problems in the past. In July 2014, the iPlayer and many of its associated sites were offline for almost an entire weekend. That fault was traced to a database that sits behind the catch-up TV service. | 2024-11-08T08:12:45 | en | train |
10,821,882 | empressplay | 2016-01-01T13:58:31 | Perth man gets $330 Uber charge for 20km NYE ride | null | http://www.adelaidenow.com.au/business/companies/perth-man-lodges-complaint-after-copping-massive-uber-bill-on-new-years-eve/news-story/2a9d9f2596f19d7ba0f38a569b3fe574?nk=c8a03f3813ae2218c769e9ef8ed74320-1451656639 | 2 | 0 | null | null | null | no_error | No Cookies | The Advertiser | null | null |
Please note that by blocking any or all cookies you may not have access to certain features, content or personalization. For more information see our Cookie Policy.
To enable cookies, follow the instructions for your browser below.
Facebook App: Open links in External Browser
There is a specific issue with the Facebook in-app browser intermittently making requests to websites without cookies that had previously been set. This appears to be a defect in the browser which should be addressed soon. The simplest approach to avoid this problem is to continue to use the Facebook app but not use the in-app browser. This can be done through the following steps:
1. Open the settings menu by clicking the hamburger menu in the top right
2. Choose “App Settings” from the menu
3. Turn on the option “Links Open Externally” (This will use the device’s default browser)
Enabling Cookies in Internet Explorer 7, 8 & 9
1. Open the Internet Browser
2. Click Tools > Internet Options > Privacy > Advanced
3. Check Override automatic cookie handling
4. For First-party Cookies and Third-party Cookies click Accept
5. Click OK and OK
Enabling Cookies in Firefox
1. Open the Firefox browser
2. Click Tools > Options > Privacy > Use custom settings for history
3. Check Accept cookies from sites
4. Check Accept third party cookies
5. Select Keep until: they expire
6. Click OK
Enabling Cookies in Google Chrome
1. Open the Google Chrome browser
2. Click Tools > Options > Privacy Options > Under the Hood > Content Settings
3. Check Allow local data to be set
4. Uncheck Block third-party cookies from being set
5. Uncheck Clear cookies
6. Close all
Enabling Cookies in Mobile Safari (iPhone, iPad)
1. Go to the Home screen by pressing the Home button or by unlocking your phone/iPad
2. Select the Settings icon.
3. Select Safari from the settings menu.
4. Select ‘accept cookies’ from the safari menu.
5. Select ‘from visited’ from the accept cookies menu.
6. Press the home button to return the the iPhone home screen.
7. Select the Safari icon to return to Safari.
8. Before the cookie settings change will take effect, Safari must restart. To restart Safari press and hold the Home button (for around five seconds) until the iPhone/iPad display goes blank and the home screen appears.
9. Select the Safari icon to return to Safari.
| 2024-11-08T10:33:15 | en | train |
10,821,893 | xCathedra | 2016-01-01T14:03:27 | Automation should be like Iron Man, not Ultron | null | http://queue.acm.org/detail.cfm?id=2841313 | 4 | 0 | null | null | null | no_error | Automation Should Be Like Iron Man, Not Ultron | null | null |
Everything Sysadmin - @YesThatTom
October 31, 2015, Volume 13, issue 8
The "Leftover Principle" Requires Increasingly More Highly-skilled Humans.
Thomas A. Limoncelli
Q: Dear Tom: A few years ago we automated a major process in our system administration team. Now the system is impossible to debug. Nobody remembers the old manual process and the automation is beyond what any of us can understand. We feel like we've painted ourselves into a corner. Is all operations automation doomed to be this way?
A: The problem seems to be that this automation was written to be like Ultron, not Iron Man.
Iron Man's exoskeleton takes the abilities that Tony Stark has and accentuates them. Tony is a smart, strong guy. He can calculate power and trajectory on his own. However, by having his exoskeleton do this for him, he can focus on other things. Of course, if he disagrees or wants to do something the program wasn't coded to do, he can override the trajectory.
Ultron, on the other hand, was intended to be fully autonomous. It did everything and was, basically, so complex that when it had to be debugged the only choice was (spoiler alert!) to destroy it.
Had the screenwriter/director Joss Whedon consulted me (and Joss, if you are reading this, you really should have), I would have found a way to insert the famous Brian Kernighan quote, "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
The Source of the Problem:
Before we talk about how to prevent this kind of situation, we should discuss how we get into it.
The first way we get into this trap is by automating the easy parts and leaving the rest to be done manually. This sounds like the obvious way to automate things and, in fact, is something I generally encouraged until my awareness was raised by John Allspaw's excellent two-part blog post "A Mature Role for Automation" (http://www.kitchensoap.com/2012/09/21/a-mature-role-for-automation-part-i).
You certainly shouldn't automate the difficult cases first. What we learn while automating the easy cases makes us better prepared to automate the more difficult cases. This is called the Leftover Principle. You automate the easy parts and what is "left over" is done by humans.
In the long run this creates a very serious problem. The work left over for people to do becomes, by definition, more difficult. At the start of the process, people were doing a mixture of simple and complex tasks. After a while the mix shifts more and more towards the complex. This is a problem because people aren't getting smarter over time. Moore's Law predicts that computers will get more powerful over time, but sadly there is no such prediction about people.
Another reason the work becomes more difficult is that it becomes rarer. Easier work, done frequently, keeps a person's skills fresh and keeps us ready for the rare but difficult tasks.
Taken to its logical conclusion, this paradigm results in a need to employ impossibly smart people to do impossibly difficult work. Maybe this is why Google's recruiters sound so painfully desperate when they call about joining their SRE team.
One way to avoid the problems of the leftover principle is called the Compensatory Principle. There are certain tasks that people are good at that machines don't do well. Likewise there are other tasks that machines are good at that people don't do well. The compensatory principle says that people and machines should each do what they are good at and not attempt what they don't do well. That is, each group should compensate for the other's deficiencies.
Machines don't get bored, so they are better at repetitive tasks. They don't sleep, so they are better at tasks that must be done at all hours of the night. They are better at handling many operations at once, and at operations that require smooth or precise motion. They are better at literal reproduction, access restriction, and quantitative assessment.
People are better at improvisation and being flexible, exercising judgment, and coping with variations in written material, perceiving feelings.
Let's apply this principle to a monitoring system. The monitoring system collects metrics every five minutes, stores them, and then analyzes the data for the purposes of alerting, debugging, visualization, and interpretation.
A person could collect data about a system every five minutes, and with multiple shifts of workers they could do it around the clock. However, the people would become bored and sloppy. Therefore it is obvious that the data collection should be automated. Alerting requires precision, which is also best done by computers. However, while the computer is better at visualizing the data, people are better at interpreting those visualizations. Debugging requires improvisation, another human skill, so again people are assigned those tasks.
John Allspaw points out that only rarely can a project be broken down into such clear-cut cases of functionality this way.
Doing Better
A better way is to base automation decisions on the complementarity principle. This principle looks at automation from the human perspective. It improves the long-term results by considering how people's behavior will change as a result of automation.
For example, the people planning the automation should consider what is learned over time by doing the process manually and how that would be changed or reduced if the process was automated. When a person first learns a task, they are focused on the basic functions needed to achieve the goal. However, over time, they understand the ecosystem that surrounds the process and gain a big-picture view. This lets them perform global optimizations. When a process is automated the automation encapsulates learning thus far, permitting new people to perform the task without having to experience that learning. This stunts or prevents future learning. This kind of analysis is part of a cognitive systems engineering (CSE) approach.
The complementarity principle combines CSE with a joint cognitive system (JCS) approach. JCS examines how automation and people work together. A joint cognitive system is characterized by its ability to stay in control of a situation.
In other words, if you look at a highly automated system and think, "Isn't it beautiful? We have no idea how it works," you may be using the leftover principle. If you look at it and say, "Isn't it beautiful how we learn and grow together, sharing control over the system," then you've done a good job of applying the complementarity principle.
Designing automation using the complementarity principle is a relatively new concept and I admit I'm no expert, though I can look back at past projects and see where success has come from applying this principle by accident. Even the blind squirrel finds some acorns!
For example, I used to be on a team that maintained a very large (for its day) cloud infrastructure. We were responsible for the hundreds of physical machines that supported thousands of virtual machines.
We needed to automate the process of repairing the physical machines. When there was a hardware problem, virtual machines had to be moved off the physical machine, the machine had to be diagnosed, and a request for repairs had to be sent to the hardware techs in the data center. After the machine was fixed, it needed to be re-integrated into the cloud.
The automation we created abided by the complementarity principle. It was a partnership between human and machine. It did not limit our ability to learn and grow. The control over the system was shared between the automation and the humans involved.
In other words, rather than creating a system that took over the cluster and ran it, we created one that partnered with humans to take care of most of the work. It did its job autonomously, but we did not step on each other's toes.
The automation had two parts. The first part was a set of tools that the team used to do the various related tasks. Only after these tools had been working for some time did we build a system that automated the global process, and it did so more like an exoskeleton assistant than like a dictator.
The repair process was functionally decomposed into five major tasks, and one tool was written to handle each of them. The tools were (a) Evacuation: any virtual machines running on the physical machine needed to be migrated live to a different machine; (b) Revivification: an evacuation process required during the extreme case where a virtual machine had to be restarted from its last snapshot; (c) Recovery: attempts to get the machine working again by simple means such as powering it off and on again; (d) Send to Repair Depot: generate a work order describing what needs to be fixed and send this information to the data center technicians who actually fixed the machine; (e) Re- assimilate: once the machine has been repaired, configure it and re-introduce it to the service.
As the tools were completed, they replaced their respective manual processes. However the tools provided extensive visibility as to what they were doing and why.
The next step was to build automation that could bring all these tools together. The automation was designed based on a few specific principles:
• It should follow the same methodology as the human team members.
• It should use the same tools as the human team members.
• If another team member was doing administrative work on a machine or cluster (group of machines), the automation would step out of the way if asked, just like a human team member would.
• Like a good team member, if it got confused it would back off and ask other members of the team for help.
The automation was a state-machine-driven repair system. Each physical machine was in a particular state: normal, in trouble, recovery in progress, sent for repairs, being re-assimilated, and so on. The monitoring system that would normally page people when there was a problem instead alerted our automation. Based on whether the alerting system had news of a machine having problems, being dead, or returning to life, the appropriate tool was activated. The tool's result determined the new state assigned to the machine.
If the automation got confused, it paused its work on that machine and asked a human for help by opening a ticket in our request tracking system.
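To make the shape of this concrete, here is a minimal sketch of such a tool-driven repair state machine. The state and event names are hypothetical; the article does not show the actual implementation, which also handled ticketing and administrative locks.

    // Sketch: one transition step of a repair state machine driven by monitoring
    // alerts. Each transition corresponds to running one of the standalone tools
    // (evacuate, recover, send to depot, re-assimilate) described above.
    enum class MachineState { Normal, InTrouble, Recovering, AtRepairDepot, Reassimilating, NeedsHuman };
    enum class AlertEvent { HasProblems, IsDead, ReturnedToLife };

    MachineState step(MachineState state, AlertEvent event) {
        switch (state) {
          case MachineState::Normal:
            return MachineState::InTrouble;            // alert received: evacuate VMs next
          case MachineState::InTrouble:
            return MachineState::Recovering;           // try simple recovery (power cycle)
          case MachineState::Recovering:
            return MachineState::AtRepairDepot;        // recovery failed: file a work order
          case MachineState::AtRepairDepot:
            if (event == AlertEvent::ReturnedToLife)
                return MachineState::Reassimilating;   // repaired: configure and re-introduce
            return MachineState::AtRepairDepot;
          case MachineState::Reassimilating:
            return MachineState::Normal;
          case MachineState::NeedsHuman:
            return MachineState::NeedsHuman;           // a person clears the ticket manually
        }
        return MachineState::NeedsHuman;               // confused: back off and ask for help
    }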
If a human team member was doing manual maintenance on a machine, the automation was told to not touch the machine in an analogous way to how human team members would be, except people could now type a command instead of shouting to their coworkers in the surrounding cubicles.
The automation was very successful. Previously whoever was on call was paged once or twice a day. Now we were typically paged less than once a week.
Because of the design, the human team members continued to be involved in the system enough that they were always learning. Some people focused on making the tools better. Others focused on improving the software release and test process.
As stated earlier, one problem with the leftover principle is that the work left over for humans requires increasingly higher skill levels. At times we experienced the opposite! As the number of leftover tasks was reduced, it was easier to wrap our brains around the ones that remained. Without the mental clutter of so many other tasks, we were better able to assess the remaining tasks. For example, the most highly technical task involved a particularly heroic recovery procedure. We re-evaluated whether or not we should even be doing this particular procedure. We shouldn't.
The heroic approach risked data loss in an effort to avoid rebooting a virtual machine. This was the wrong priority. Our customers cared much more about data loss than about a quick reboot. We actually eliminated this leftover task by replacing it with an existing procedure that was already automated. We would not have seen this opportunity if our minds had still been cluttered with so many other tasks.
Another leftover process was building new clusters or machines. It happened infrequently enough that it was not worthwhile to fully automate. However, we found we could Tom Sawyer the automation into building the cluster for us if we created the right metadata to make it think that all the machines had just returned from repairs. Soon the cluster was built for us.
Processes requiring ad hoc improvisation, creativity, and evaluation were left to people. For example, certifying new models of hardware required improvisation and the ability to act given vague requirements.
The resulting system felt a lot like Iron Man's suit: enhancing our skills and taking care of the minutiae so we could focus on the big picture. One person could do the work of many, and we could do our jobs better thanks to the fact that we had an assistant taking care of the busy work. Learning did not stop because it was a collaborative effort. The automation took care of the boring stuff and the late-night work, and we could focus on the creative work of optimizing and enhancing the system for our customers.
I don't have a formula that will always achieve the benefits of the complementarity principle. However, by paying careful attention to how people's behavior will change as a result of automation and by maintaining shared control over the system, we can build automation that is more Iron Man, less Ultron.
Further Reading
John Allspaw's article "A Mature Role for Automation." (http://www.kitchensoap.com/2012/09/21/a-mature-role-for-automation-part-i).
David Woods and Erik Hollnagel's book Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Taylor and Francis, Boca Raton, FL, 2005.
Chapter 12 of The Practice of Cloud System Administration, by Thomas A. Limoncelli, Strata R. Chalup, and Christina J. Hogan. (http://the-cloud-book.com)
LOVE IT, HATE IT? LET US KNOW [email protected]
Thomas A. Limoncelli is an author, speaker, and system administrator. He is an SRE at Stack Overflow, Inc in NYC. His books include The Practice of Cloud Administration (the-cloud-book.com) and Time Management for System Administrators. He blogs at EverythingSysadmin.com
© 2015 ACM 1542-7730/15/0500 $10.00
Originally published in Queue vol. 13, no. 8
© ACM, Inc. All Rights Reserved.
| 2024-11-08T06:39:50 | en | train |
10,821,922 | kernelv | 2016-01-01T14:13:27 | What Is Going to Happen in 2016 | null | http://avc.com/2016/01/what-is-going-to-happen-in-2016/ | 5 | 0 | null | null | null | no_error | What Is Going To Happen In 2016 | -0001-11-30T00:00:00+00:00 | Fred Wilson |
It’s easier to predict the medium to long term future. We will be able to tell our cars to take us home after a late night of new year’s partying within a decade. I sat next to a life sciences investor at a dinner a couple months ago who told me cancer will be a curable disease within the next decade. As amazing as these things sound, they are coming and soon.
But what will happen this year that we are now in? That’s a bit trickier. But I will take some shots this morning.
Oculus will finally ship the Rift in 2016. Games and other VR apps for the Rift will be released. We just learned that the Touch controller won’t ship with the Rift and is delayed until later in 2016. I believe the initial commercial versions of Oculus technology will underwhelm. The technology has been so hyped and it is hard to live up to that. Games will be the strongest early use case, but not everyone is going to want to put on a headset to play a game. I think VR will only reach its true potential when they figure out how to deploy it in a more natural way.
We will see a new form of wearables take off in 2016. The wrist is not the only place we might want to wear a computer on our bodies. If I had to guess, I would bet on something we wear in or on our ears.
One of the big four will falter in 2016. My guess is Apple. They did not have a great year in 2015 and I’m thinking that it will get worse in 2016.
The FAA regulations on the commercial drone industry will turn out to be a boon for the drone sector, legitimizing drone flights for all sorts of use cases and establishing clear rules for what is acceptable and what is not.
The trend towards publishing inside of social networks (Facebook being the most popular one) will go badly for a number of high profile publishers who won’t be able to monetize as effectively inside social networks and there will be at least one high profile victim of this strategy who will go under as a result.
Time Warner will spin off its HBO business to create a direct competitor to Netflix and the independent HBO will trade at a higher market cap than the entire Time Warner business did pre spinoff.
Bitcoin finally finds a killer app with the emergence of Open Bazaar protocol powered zero take rate marketplaces. (note that OB1, an open bazaar powered service, is a USV portfolio company).
Slack will become so pervasive inside of enterprises that spam will become a problem and third party Slack spam filters will emerge. At the same time, the Slack platform will take off and building Slack bots will become the next big thing in enterprise software.
Donald Trump will be the Republican nominee and he will attack the tech sector for its support of immigrant labor. As a result the tech sector will line up behind Hillary Clinton who will be elected the first woman President.
Markdown mania will hit the venture capital sector as VC firms follow Fidelity’s lead and start aggressively taking down the valuations in their portfolios. Crunchbase will start capturing this valuation data and will become a de-facto “yahoo finance” for the startup sector. Employees will realize their options are underwater and will start leaving tech startups in droves.
Some of these predictions border on the ridiculous and that is somewhat intentional. I think there is an element of truth (or at least possibility) in all of them. And I will come back to this list a year from now and review the results.
Best wishes to everyone for a happy and healthy 2016.
| 2024-11-08T15:57:15 | en | train |
10,822,132 | lkrubner | 2016-01-01T15:47:54 | Deferred in the ‘Burbs | null | http://mikethemadbiologist.com/2015/12/31/deferred-in-the-burbs/ | 19 | 11 | [
10823390,
10823097,
10823154
] | null | null | no_error | Deferred in the ‘Burbs | 2015-12-31T14:59:35+00:00 | Posted on |
There is a the long-term–and completely undiscussed–problem U.S. suburbs face:
Something that’s lurking in the background of the U.S. economy, and which will erupt with a fury in ten years or so is the need to replace suburban infrastructure: underground wires, pipes, and so on. This is something new that most suburbs, unlike cities, haven’t had to confront. A suburb that was built in 1970 is long in the tooth today, and time only makes things worse. No suburbs that I’m aware of ever decided to amortize the future cost of repairs over a forty year period–that would require an increase in property taxes. In fact, many suburbs never even covered the expenses of building new subdivisions, never mind worried about expenses decades down the road….
Once suburbs start having to repair their infrastructure, it’s going to get very expensive to live there…
The problem we will face is how to keep suburbs economically viable, both in terms of infrastructure and quality of life. Part of that will have to involve increasing ‘urbanization’ of the suburbs, while other suburbs will be left to decline. But this, not gentrification (which can be reduced with progressive taxation) is a much more difficult problem. Not only will there be resistance by homeowners to changes, but the very, well, infrastructure of the suburbs doesn’t lend itself to increasing density.
It appears Charles Marohn beat me to the punch (boldface mine):
Marohn primarily takes issue with the financial structure of the suburbs. The amount of tax revenue their low-density setup generates, he says, doesn’t come close to paying for the cost of maintaining the vast and costly infrastructure systems, so the only way to keep the machine going is to keep adding and growing. “The public yield from the suburban development pattern is ridiculously low,” he says. One of the most popular articles on the Strong Towns Web site is a five-part series Marohn wrote likening American suburban development to a giant Ponzi scheme.
…The way suburban development usually works is that a town lays the pipes, plumbing, and infrastructure for housing development—often getting big loans from the government to do so—and soon after a developer appears and offers to build homes on it. Developers usually fund most of the cost of the infrastructure because they make their money back from the sale of the homes. The short-term cost to the city or town, therefore, is very low: it gets a cash infusion from whichever entity fronted the costs, and the city gets to keep all the revenue from property taxes. The thinking is that either taxes will cover the maintenance costs, or the city will keep growing and generate enough future cash flow to cover the obligations. But the tax revenue at low suburban densities isn’t nearly enough to pay the bills; in Marohn’s estimation, property taxes at suburban densities bring in anywhere from 4 cents to 65 cents for every dollar of liability. Most suburban municipalities, he says, are therefore unable to pay the maintenance costs of their infrastructure, let alone replace things when they inevitably wear out after twenty to twenty-five years. The only way to survive is to keep growing or take on more debt, or both. “It is a ridiculously unproductive system,” he says.
Marohn points out that while this has been an issue as long as there have been suburbs, the problem has become more acute with each additional “life cycle” of suburban infrastructure (the point at which the systems need to be replaced—funded by debt, more growth, or both). Most U.S. suburbs are now on their third life cycle, and infrastructure systems have only become more bloated, inefficient, and costly. “When people say we’re living beyond our means, they’re usually talking about a forty-inch TV instead of a twenty-inch TV,” he says. “This is like pennies compared to the dollars we’ve spent on the way we’ve arranged ourselves across the landscape.”
By comparison, urban gentrification is an easy problem (one of several solutions to prevent asset price inflation among high-end goods is more progressive taxation). Some suburbs will have to be left to die. Others will become impoverished. Others, the fortunate ones, will figure out ways to increase density.
This is going to be really ugly.
This entry was posted in Urban Planning. | 2024-11-07T22:41:49 | en | train |
10,822,185 | zdw | 2016-01-01T16:04:44 | Would You Put a Little Speaker in Your Vagina (for Your Baby)? | null | http://nymag.com/thecut/2015/12/vagina-speaker.html | 2 | 0 | null | null | null | no_error | Would You Put a Little Speaker in Your Vagina (for Your Baby)? | 2015-12-30T13:05:00.000-05:00 | Kelly Conaboy |
“I love this song!” —Your Baby
Photo: Mediscan/Corbis
Here’s a good question: Would you put a little tampon-style speaker in your vagina? What if I told you — you’re pregnant in this scenario — that, with the tampon-style speaker inserted into your vagina, your in-womb baby could listen to something like, ah, I don’t know, Dead Kennedys? The baby would learn to dislike California governor Jerry Brown in an outdated way, but at least he or she would come out with a healthy dislike of corporations and fascists. Hmm. Something to think about. Anyway, check out this speaker tampon.
The little speaker tampon is called “Babypod” and it’s shown in action, sort of, in a new video from the gynecology clinic Institut Marquès in Barcelona. In the video, singer Soraya performs a set of Christmas carols for expecting mothers (equipped with Babypods) and their unborn babies. “This is the first concert for fetuses ever held in the world” the video boasts multiple times, even though I highly doubt they did much research before making this claim.
The Babypod came about after a Spanish study proved fetuses are able to detect sound once they reach between 18 and 26 weeks. The fetuses apparently even sometimes move their mouths and tongues in response to the sound, which seems odd to me. Huh.
The Babypod is reportedly set at a rather quiet volume of 54 decibels, and it’s allegedly good for the fetus because of something about brain development on which experts don’t really agree.
So there you have it. Vaginal speaker — for your baby.
| 2024-11-08T07:33:47 | en | train |
10,822,249 | bobajeff | 2016-01-01T16:27:41 | BaldrMonkey: the name of Mozilla's WebAssembly compiler | null | https://bugzilla.mozilla.org/show_bug.cgi?id=1234985 | 3 | 0 | null | null | null | no_error | 1234985 - BaldrMonkey: land initial wasm compilation and testing functions | null | Luke Wagner [:luke] |
Bug 1234985 (Closed)
Opened 9 years ago; Closed 9 years ago
Categories: Core :: JavaScript Engine, defect
Tracking Status: firefox47: fixed
People: Reporter: luke, Assigned: luke
Attachments: 9 files, 6 obsolete files
This patch changes the interface between MG and clients to make things smoother for the baldr patch.
Building on bug 1239177, this patch adds the signature table so we can actually start validating function bodies. But to make more progress on wasm function bodies, we'll need stmt-expr unification.

Comment on attachment 8708651 [details] [diff] [review]
baldr (WIP 3)
Review of attachment 8708651 [details] [diff] [review]:
-----------------------------------------------------------------
Random driveby review comments:
::: js/src/asmjs/WasmBinary.h
@@ +396,5 @@
>
> + template <class T>
> + MOZ_WARN_UNUSED_RESULT
> + bool writeEnum(T v, size_t* offset) {
> + return write(uint32_t(v), offset);
For added safety, this could static_assert mozilla::IsEnum<T>::value and sizeof(T) <= sizeof(uint32_t).
::: js/src/asmjs/WasmText.cpp
@@ +191,5 @@
> + SigVector sigs_;
> + SigMap sigMap_;
> +
> + public:
> + WasmAstModule(LifoAlloc& lifo)
explicit
@@ +657,5 @@
> + return nullptr;
> + }
> +
> + return exp;
> +
Stray whitespace.
For now, the ops are serialized as a simple uint8. This patch hoists the serialization functions to wasm::Encoder/Decoder so they can be changed later.
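For illustration, this is roughly the shape the hoisted helpers take: an enum written as a single byte, with a static_assert guarding the template parameter as suggested in the review above. This is a sketch based on the quoted snippets, not the exact tree code, and it assumes writeU8/readU8 primitives like the ones used elsewhere in the patch.

    // Sketch only: serialize an enum as one byte on the Encoder side...
    template <class T>
    MOZ_WARN_UNUSED_RESULT bool writeEnum(T v, size_t* offset = nullptr) {
        static_assert(mozilla::IsEnum<T>::value, "is an enum");
        MOZ_ASSERT(uint64_t(v) < UINT8_MAX);   // variable-length encoding can come later
        return writeU8(uint8_t(v), offset);
    }

    // ...and decode it back on the Decoder side, leaving range checks to callers.
    template <class T>
    MOZ_WARN_UNUSED_RESULT bool readEnum(T* out) {
        static_assert(mozilla::IsEnum<T>::value, "is an enum");
        uint8_t u8;
        if (!readU8(&u8))
            return false;
        *out = T(u8);
        return true;
    }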
I realized neither of these methods belonged in the wasm::Module and they can be pushed into the AsmJSModule now that it derives wasm::Module.

(In reply to Luke Wagner [:luke] from comment #0)
> This bug is for introducing a path to compile wasm, a shell testing function
> 'wasmEval(bytecode)' (which compiles an ArrayBuffer of wasm bytecode), and a
> shell 'wasmTextToBinary(text)' (which encodes the given wasm text format
> into the wasm binary format). Together this will allow writing unit tests
> in the wasm text format a la `wasmEval(wasmTextToBinary('...'))` while also
> allowing direct fuzzing of the binary format via wasmEval.
Any chance I could convince you to name those wasm.eval and wasm.textToBinary? The shell has an insane number of functions these days, making help() mostly useless, and I've started trying to separate things out into module-ish things (see for example shell/OSObject.cpp, which defines os.getenv, os.system, etc. Together with aliases for the old names, but wasm wouldn't need that part.)
I still need to fix up the help stuff a little for this (these not-modules should have help text, and the aliases should inherit help text from the "real" names). But that shouldn't block anything.

(In reply to Steve Fink [:sfink, :s:] from comment #10)
Actually, you'll get your wish before long: what we'll probably standardize is a new 'wasm' or 'WASM' namespace object. 'wasmEval' is a temporary thing (and note that the tests are mostly not calling it directly so it'll be easy to switch).

Comment on attachment 8708649 [details] [diff] [review]
tweak-generator
Review of attachment 8708649 [details] [diff] [review]:
-----------------------------------------------------------------
I see where this is going. Thanks for the patch.
::: js/src/asmjs/WasmBinary.h
@@ +377,5 @@
> typedef Vector<uint8_t, 0, SystemAllocPolicy> Bytecode;
> typedef UniquePtr<Bytecode> UniqueBytecode;
>
> // The Encoder class recycles (through its constructor) or creates a new Bytecode (through its
> // init() method). Its Bytecode is released when it's done building the wasm IR in finish().
This comment needs an update.
@@ +392,4 @@
> }
>
> public:
> + Encoder(Bytecode& bytecode) : bytecode_(bytecode) {}
Please MOZ_ASSERT(empty()) in the ctor.
::: js/src/asmjs/WasmGenerator.cpp
@@ +108,2 @@
> {
> +
nit: blank line
@@ +126,2 @@
> shared_ = Move(shared);
> + MOZ_ASSERT(shared_->importSigs.length() == shared_->importGlobalDataOffsets.length());
This hasn't been updated to the latest shared_.imports vector, right?
@@ +313,5 @@
>
> void
> ModuleGenerator::initSig(uint32_t sigIndex, Sig&& sig)
> {
> + MOZ_ASSERT(sigIndex == numSigs_);
Perhaps also MOZ_ASSERT that we're in AsmJS mode?
@@ +318,3 @@
> MOZ_ASSERT(shared_->sigs[sigIndex] == Sig());
> +
> + numSigs_++;
Another way to keep numSigs_ (resp. numFuncSigs_) in sync would be to just take out the arrays' lengths in finishFuncDefs. I actually prefer your way, as we can assert order.
@@ +330,5 @@
>
> bool
> +ModuleGenerator::initFuncSig(uint32_t funcIndex, uint32_t sigIndex)
> +{
> + MOZ_ASSERT(funcIndex == numFuncSigs_);
Also can assert that we're in AsmJS mode.

Comment on attachment 8709265 [details] [diff] [review]
hoist Expr encoder/decoder ops
Review of attachment 8709265 [details] [diff] [review]:
-----------------------------------------------------------------
Cheers!
::: js/src/asmjs/AsmJS.cpp
@@ +2787,5 @@
> return encoder().writeU8(uint8_t(Expr::Unreachable), offset);
> }
> MOZ_WARN_UNUSED_RESULT
> bool tempOp(size_t* offset) {
> + return encoder().writeExpr(Expr::Unreachable, offset);
This can be unified with tempU8(); and temp32() can use tempOp() as well.
::: js/src/asmjs/WasmBinary.h
@@ +326,5 @@
> + // Variable-length is somewhat annoying at the moment due to the
> + // pre-order encoding and back-patching; let's see if we switch to
> + // post-order first.
> + static_assert(mozilla::IsEnum<T>::value, "is an enum");
> + MOZ_ASSERT(uint64_t(v) < UINT8_MAX);
static_assert(sizeof(T) == sizeof(uint8_t)); ?
@@ +334,5 @@
> + template <class T>
> + void patchEnum(size_t pc, T v) {
> + // See writeEnum comment.
> + static_assert(mozilla::IsEnum<T>::value, "is an enum");
> + MOZ_ASSERT(uint64_t(v) < UINT8_MAX);
ditto
@@ +457,5 @@
>
> + template <class T>
> + MOZ_WARN_UNUSED_RESULT bool
> + readEnum(T* out) {
> + // See Encoder::writeEnum.
The static_assert would be nice here too.
@@ +476,5 @@
> }
>
> + template <class T>
> + T uncheckedReadEnum() {
> + // See Encoder::writeEnum.
And here as well.

(In reply to Benjamin Bouvier [:bbouvier] from comment #14)
> > bool tempOp(size_t* offset) {
> > + return encoder().writeExpr(Expr::Unreachable, offset);
>
> This can be unified with tempU8(); and temp32() can use tempOp() as well.
Well my intention here is that, while *U8/32 will always write exactly those sizes (and, ultimately, be eliminated since I think practically all ints will be VarU32 eventually), Op will become variable length one day.
> static_assert(sizeof(T) == sizeof(uint8_t)); ?
I would've, but that assert won't hold in the Baldr patch where ValType is passed (b/c of that GCC bug preventing a uint8 enum class which I was hoping we could resolve by making configure fail...).
This patch makes names optional, auto-generating wasm names like "wasm-function[0]" in their absence.
This trivial patch switches integer immediates of I32Const to be VarU32 from U32. Later I'd actually like to introduce VarS32 so that small negative integers use a small number of bytes instead of 5 with VarU32, but this is fine for now.
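To show why this shrinks small immediates, here is a minimal LEB128-style variable-length u32 codec: 7 payload bits per byte with the high bit as a continuation flag, so values below 128 take one byte while a full u32 needs five. This is a sketch of the general technique, not the SpiderMonkey readVarU32/writeVarU32 code itself.

    #include <stdint.h>
    #include <vector>

    // Sketch: emit 7 bits per byte, low bits first; set bit 7 if more bytes follow.
    static void writeVarU32(std::vector<uint8_t>& out, uint32_t v) {
        do {
            uint8_t byte = v & 0x7f;
            v >>= 7;
            if (v != 0)
                byte |= 0x80;
            out.push_back(byte);
        } while (v != 0);
    }

    // Sketch: read until a byte without the continuation bit; fail on truncation
    // or on more than the five bytes a u32 can need. (Junk high bits in the last
    // byte are silently dropped in this sketch.)
    static bool readVarU32(const uint8_t*& cur, const uint8_t* end, uint32_t* out) {
        uint32_t result = 0;
        for (unsigned shift = 0; shift <= 28; shift += 7) {
            if (cur == end)
                return false;
            uint8_t byte = *cur++;
            result |= uint32_t(byte & 0x7f) << shift;
            if (!(byte & 0x80)) {
                *out = result;
                return true;
            }
        }
        return false;
    }

A VarS32, as mentioned, would typically be the sign-extending (SLEB128-style) variant of the same idea, so small negative numbers also stay short.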
Ok, I think I have enough of a "Hello, World" working to land and then start iterating in small patches.
In summary:
- Wasm.cpp implements the binary -> wasm::Module compilation using the same WasmGenerator.(h,cpp) abstraction as asm.js. Most of the work up to this point has just been pulling all the asm.js/JS-specific stuff out of this pipeline.
- Wasm.cpp decodes all the module-level stuff (imports, exports, etc) but hands over the function body bytes to the shared pipeline for decoding in WasmIonCompile.cpp.
- Since WasmIonCompile.cpp assumes well-formed bytes, to avoid a bunch of fuzzer asserts, Wasm.cpp performs a validation pass before OK'ing the bytes to be passed to WasmIonCompile.cpp. This is suboptimal (the bytes are decoded twice) and later this decoding could be moved into baseline-wasm.
- Decoding is also recursive now, but should be made iterative later (when folded into baseline).
- The wasm binary is reminiscent of, but not exactly the same as, the v8 binary format in https://github.com/WebAssembly/design/issues/497. The expectation is that we'll be iterating on the binary format design over the next few months towards convergence, so SM is basically our take on where we should be going.
Oops, this patch goes before baldr. It avoids the need for a final explicit return opcode, making function bodies expressions for wasm.

Comment on attachment 8710251 [details] [diff] [review]
final-return
Review of attachment 8710251 [details] [diff] [review]:
-----------------------------------------------------------------
Nice. I can already imagine all the tests checking that control flow expressions yield values!
::: js/src/asmjs/AsmJS.cpp
@@ +3424,3 @@
> }
>
> return true;
Control flow can be simplified a bit:
if (!lastNonEmpty... && !IsVoid(...))
return f.fail(...);
return true;

Comment on attachment 8710243 [details] [diff] [review]
optional-names
Review of attachment 8710243 [details] [diff] [review]:
-----------------------------------------------------------------
Quite elegant.
::: js/src/asmjs/WasmGenerator.cpp
@@ +123,5 @@
> // already initialized.
> shared_ = Move(shared);
> if (kind == ModuleKind::Wasm) {
> numSigs_ = shared_->sigs.length();
> + module_->numFuncs = shared_->funcSigs.length();
note to self: No need to initialize in asm.js mode because ModuleData is PodZero'd.
::: js/src/asmjs/WasmGenerator.h
@@ +35,5 @@
>
> struct SlowFunction
> {
> + SlowFunction(uint32_t index, unsigned ms, unsigned lineOrBytecode)
> + : index(index), ms(ms)
nit: don't forget to set lineOrBytecode here!
::: js/src/asmjs/WasmModule.cpp
@@ +681,5 @@
> + if (!PerfFuncEnabled())
> + return true;
> +#elif defined(MOZ_VTUNE)
> + if (!IsVTuneProfilingActive())
> + return true;
If Spidermonkey is built with Perf support *and* VTune support, this means we'll have to enable both or none. Maybe
if (false
#ifdef JS_ION_PERF
|| !PerfFuncEnabled()
#endif
#ifdef MOZ_VTUNE
|| !IsVTuneProfilingActive()
#endif
)
This is a bit ugly though...
Or another way is to unify what enables Perf *and* VTune.
Or just get back to what it was?
@@ +710,3 @@
> unsigned method_id = iJIT_GetNewMethodID();
> if (method_id == 0)
> return;
This should return a boolean, not void;
@@ +1424,5 @@
> +
> +const char*
> +Module::getFuncName(JSContext* cx, uint32_t funcIndex, UniqueChars* owner) const
> +{
> + if (!module_->prettyFuncNames.empty())
In webassembly, can anonymous and named functions live in a same module? (I would say so, assuming that you've moved numFuncs to the pod so as to have the upper bound of the number of names). If so, this condition will trigger a OOB if we're trying to get the name of an anonymous function.
@@ +1427,5 @@
> +{
> + if (!module_->prettyFuncNames.empty())
> + return prettyFuncName(funcIndex);
> +
> + char* chars = JS_smprintf("wasm-function[%u]", funcIndex);
I guess annoying know-it-all who'll explicitly name their function "wasm-function[0]" won't be able to distinguish it from unnamed functions. That's a reasonable price to pay for being annoying :-)
::: js/src/asmjs/WasmModule.h
@@ +219,5 @@
>
> CodeRange() = default;
> CodeRange(Kind kind, Offsets offsets);
> CodeRange(Kind kind, ProfilingOffsets offsets);
> + CodeRange(uint32_t nameIndex, uint32_t lineOrBytecode, FuncOffsets offsets);
s/nameIndex/funcIndex

(In reply to Benjamin Bouvier [:bbouvier] from comment #25)
Thanks! Good catches.
> In webassembly, can anonymous and named functions live in a same module? (I
> would say so, assuming that you've moved numFuncs to the pod so as to have
> the upper bound of the number of names). If so, this condition will trigger
> a OOB if we're trying to get the name of an anonymous function.
I'm expecting it'll be all-or-nothing, but if it isn't, then yeah, we'd have prettyFuncNames either be empty or numFuncs-sized and have null elements that we'd have to check for in getFuncName().
Rebased and fixed so that on platforms where wasm isn't supported (JS_CODEGEN_NONE, non-hardware-fp, non-standard-page-size (we could drop this requirement pretty easily)) the wasmEval/wasmTextToBinary functions simply aren't available so all the wasm tests can start with "if (!wasmEval) quit()".

Comment on attachment 8710250 [details] [diff] [review]
baldr
Review of attachment 8710250 [details] [diff] [review]:
-----------------------------------------------------------------
First set of comments that I made yesterday before the rebased patch.
::: js/src/asmjs/Wasm.cpp
@@ +65,5 @@
> + JSContext* cx() const { return cx_; }
> + Decoder& d() const { return d_; }
> + ModuleGenerator& mg() const { return mg_; }
> + FunctionGenerator& fg() const { return fg_; }
> + uint32_t funcIndex() const { return funcIndex_; }
These getters are unused so far (cx(), mg(), fg(), funcIndex()). How about we add them only when we need them?
(FunctionGenerator is entirely unused (yet))
@@ +93,5 @@
> + switch (expr) {
> + case Expr::Nop:
> + return CheckType(f, ExprType::Void, expected);
> + case Expr::I32Const:
> + return f.d().readVarU32() && CheckType(f, ExprType::I32, expected);
If readVarU32() fails, the whole validation will fail with oom. Could you f.fail("unable to read variable-length u32") here instead?
It would actually be even nicer to have helpers for this,
bool DecodeVarU32(FunctionContext& f)
{
if (!f.d().readVarU32())
return f.fail("blablabla");
return true;
}
::: js/src/asmjs/Wasm.h
@@ +27,5 @@
> +// Compile and instantiate the given binary-encoded wasm module and return the
> +// resulting export object, which contains the module's exported functions as
> +// properties.
> +bool
> +Eval(JSContext* cx, UniqueChars filename, Handle<ArrayBufferObject*> code,
If you needed js/Utility.h in WasmText.h for UniqueChars, it's likely you'll need it here as well.
::: js/src/asmjs/WasmBinary.h
@@ +379,5 @@
> MOZ_WARN_UNUSED_RESULT bool
> writeExpr(Expr expr, size_t* offset = nullptr) {
> return writeEnum(expr, offset);
> }
> + MOZ_WARN_UNUSED_RESULT bool writeValType(ValType type, size_t* offset = nullptr) {
Can you please unify the signature style with writeExpr above (in one way or the other)?
@@ +423,5 @@
> + return bytecode_.append(reinterpret_cast<const uint8_t*>(cstr), strlen(cstr) + 1);
> + }
> +
> + MOZ_WARN_UNUSED_RESULT bool startSection(size_t* offset) {
> + if (!writeU32(0))
If you wanted, you could put a sentinel value that gets checked in finishSection afterwards before patching.
@@ +429,5 @@
> + *offset = bytecode_.length();
> + return true;
> + }
> + void finishSection(size_t offset) {
> + uint8_t* patchAt = bytecode_.begin() + offset - sizeof(uint32_t);
MOZ_ASSERT(patchAt < bytecode_.end());
@@ +430,5 @@
> + return true;
> + }
> + void finishSection(size_t offset) {
> + uint8_t* patchAt = bytecode_.begin() + offset - sizeof(uint32_t);
> + uint32_t numBytes = bytecode_.length() - offset;
MOZ_ASSERT(numBytes <= bytecode_.length());
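To make the pattern being reviewed here concrete: the section's byte size isn't known until the section is finished, so a placeholder u32 is written up front and patched afterwards. The sketch below reconstructs that flow from the quoted hunks and folds in the asserts suggested above; it is illustrative, not the exact patch.

    // Encoder side (sketch): reserve space for the size, remember where the body starts...
    MOZ_WARN_UNUSED_RESULT bool startSection(size_t* offset) {
        if (!writeU32(0))                      // placeholder byte size
            return false;
        *offset = bytecode_.length();          // section body begins here
        return true;
    }

    // ...then back-patch the real size once the body has been written.
    void finishSection(size_t offset) {
        uint8_t* patchAt = bytecode_.begin() + offset - sizeof(uint32_t);
        MOZ_ASSERT(patchAt < bytecode_.end());
        uint32_t numBytes = bytecode_.length() - offset;
        MOZ_ASSERT(numBytes <= bytecode_.length());
        memcpy(patchAt, &numBytes, sizeof(uint32_t));
    }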
@@ +509,5 @@
> public:
> + Decoder(const uint8_t* begin, const uint8_t* end)
> + : beg_(begin),
> + end_(end),
> + cur_(begin)
MOZ_ASSERT(begin <= end);
@@ +532,2 @@
> void assertCurrentIs(const DebugOnly<size_t> offset) const {
> MOZ_ASSERT(size_t(cur_ - beg_) == offset);
Can you use currentOffset() in that function please?
@@ +619,5 @@
> + MOZ_WARN_UNUSED_RESULT bool startSection(uint32_t* offset) {
> + uint32_t unused;
> + if (!readU32(&unused))
> + return false;
> + *offset = cur_ - beg_;
You can use currentOffset() here
::: js/src/asmjs/WasmText.h
@@ +26,5 @@
> +namespace wasm {
> +
> +// Translate the textual representation of a wasm module (given by a
> +// null-terminated char16_t array) into a Bytecode object. If there is an error
> +// other than out-of-memory an error message string will be stored in 'error'
nit: end with period
::: js/src/jit-test/lib/asserts.js
@@ +74,5 @@
> if (!(e instanceof ctor))
> throw new Error("Assertion failed: expected exception " + ctor.name + ", got " + e);
> if (typeof test == "string") {
> if (test != e.message)
> + throw new Error("Assertion failed: expected " + test + ", got " + e.message);
I hear we even have template strings now...
::: js/src/jsutil.h
@@ +136,5 @@
> }
>
> +template <class Container1, class Container2>
> +static inline bool
> +EqualContainers(const Container1& lhs, const Container2& rhs)
Could you describe in a comment what's expected from the classes Container1/Container2 (namely, they need to implement length() and the const operator[])
@@ +149,5 @@
> +}
> +
> +template <class Container>
> +static inline HashNumber
> +HashContainer(const Container& c, HashNumber hn = 0)
Ditto
Considering the signature, would a better name be AddContainerToHash? Ideally, that'd be called HashContainer if hn = 0 and AddContainerToHash otherwise, but we can't go to that level of detail.
::: js/src/vm/ArrayBufferObject.h
@@ -276,5 @@
> - /*
> - * Ensure data is not stored inline in the object. Used when handing back a
> - * GC-safe pointer.
> - */
> - static bool ensureNonInline(JSContext* cx, Handle<ArrayBufferObject*> buffer);
nice catch (compilers are really permissive, aren't they)

(In reply to Benjamin Bouvier [:bbouvier] from comment #29)
Great comments, thanks!
> These getters are unused so far (cx(), mg(), fg(), funcIndex()). How about
> we add them only when we need them?
Ah, they were used by DecodeReturn in the previous patch which I was able to take out with 9c9ae4b5cacc. I could remove them but they're going to come back immediately as soon as we implement anything more complicated than nop/i32.const so I'd rather keep them to show the intended structure of the code.
> It would actually be even nicer to have helpers for this,
Yes, that's a good idea, probably as members of the FunctionContext which maybe should be renamed to FunctionDecoder for symmetry with the other Function*s. Perhaps that could be in a future patch that introduces formatting strings though so we could write `f.readVarU32("i32.const")` and get an error that mentioned the operator.
> MOZ_ASSERT(numBytes <= bytecode_.length());
This won't hold for wasm input since the bits are arbitrary.
> Could you describe in a comment what's expected from the classes
> Container1/Container2 (namely, they need to implement length() and the const
> operator[])
I could, but I felt like the name, tiny function body, and compile errors would make that sufficiently clear.

Comment on attachment 8710818 [details] [diff] [review]
baldr
Review of attachment 8710818 [details] [diff] [review]:
-----------------------------------------------------------------
Still need to give a look at the tests and re-read all of WasmText, but this looks very nice and elegant. A few more comments below.
::: js/src/asmjs/Wasm.cpp
@@ +297,5 @@
> + if (!d.finishSection(sectionStart))
> + return Fail(cx, d, "func section byte size mismatch");
> +
> + int64_t after = PRMJ_Now();
> + unsigned generateTime = (after - before) / PRMJ_USEC_PER_SEC;
We probably want PRMJ_USEC_PER_MSEC here, don't we?
@@ +319,5 @@
> + if (!d.readVarU32(&numFuncs))
> + return Fail(cx, d, "expected number of functions");
> +
> + for (uint32_t i = 0; i < numFuncs; i++) {
> + if (!DecodeFunc(cx, d, mg, funcIndex++))
Probably want to limit the number of functions or to check funcIndex for overflow here?
::: js/src/asmjs/WasmBinary.h
@@ +411,5 @@
> return true;
> }
>
> + MOZ_WARN_UNUSED_RESULT bool writeCString(const char* cstr) {
> + return bytecode_.append(reinterpret_cast<const uint8_t*>(cstr), strlen(cstr) + 1);
MOZ_ASSERT(cstr);
@@ +601,5 @@
> + MOZ_WARN_UNUSED_RESULT bool readCString(const char** cstr = nullptr) {
> + if (cstr)
> + *cstr = reinterpret_cast<const char*>(cur_);
> + for (; cur_ != end_; cur_++) {
> + if (*cur_ == '\0') {
For consistency with the next function, can we use: if (!*cur_) ?
@@ +631,5 @@
> + MOZ_WARN_UNUSED_RESULT bool finishSection(uint32_t offset) {
> + const uint8_t* start = beg_ + offset;
> + uint32_t numBytes;
> + memcpy(&numBytes, start - sizeof(uint32_t), sizeof(uint32_t));
> + return numBytes == uintptr_t(cur_ - start);
So if I understand correctly and look at all the uses of startSection/finishSection, the apis could be simpler:
- startSection could write the number of bytes in an outparam uint32_t
- then finishSection could directly use this number of bytes, instead of recording the offset to where the number of bytes can be read.
Also, skipSection() could just read the value recorded by startSection(), instead of being more aware of the section format and explicitly readU32() the size of bytes, in DecodeUnknownSection.
What do you think?
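Roughly, the simpler shape would be something like the following; the member names follow the quoted patch, but the bodies are only a sketch of the suggestion, not code taken from it:
  // startSection() hands the section's encoded byte count straight to the
  // caller; skipSection() (and a finishSection() variant) then reuse that
  // count instead of re-reading it through a recorded offset.
  MOZ_WARN_UNUSED_RESULT bool startSection(uint32_t* numBytes) {
      return readU32(numBytes);
  }
  MOZ_WARN_UNUSED_RESULT bool skipSection(uint32_t numBytes) {
      if (uintptr_t(end_ - cur_) < numBytes)
          return false;
      cur_ += numBytes;
      return true;
  }
(As the reply below notes, verifying the count at the end of a section still needs the position recorded at startSection, which is why the patch keeps the offset-based form.)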
@@ +636,5 @@
> + }
> + MOZ_WARN_UNUSED_RESULT bool skipSection(uint32_t numBytes) {
> + if (uintptr_t(end_ - cur_) < numBytes)
> + return false;
> + cur_ += numBytes;
Arbitrary bits could overflow this addition while still having the first assertion be correct, e.g. if numBytes == UINT32_MAX we can effectively provoke an infinite loop (see an unknown section of size -1, go back byte by byte to the start of a previous unknown section, skip it, see the unknown section of size -1 again).
If this is true, could you add a test, please?
::: js/src/asmjs/WasmText.cpp
@@ +47,5 @@
> +using WasmAstHashMap = HashMap<K, V, HP, LifoAllocPolicy<Fallible>>;
> +
> +typedef WasmAstVector<ValType> WasmAstValTypeVector;
> +
> +struct WasmAstBase
Would using TempObject from jit/JitAllocPolicy work here? Seems we could create a TempAllocator from the LifoAlloc to the same effect.
@@ +119,5 @@
> + return static_cast<T&>(*this);
> + }
> +};
> +
> +struct WasmAstNop : WasmAstExpr
no public inheritance for Nop?
@@ +167,5 @@
> + : WasmAstNode(WasmAstKind::Export),
> + externalName_(externalNameBegin),
> + externalNameLength_(externalNameEnd - externalNameBegin),
> + internalIndex_(UINT32_MAX)
> + {}
MOZ_ASSERT(externalNameBegin <= externalNameEnd);
@@ +314,5 @@
> + }
> + const char16_t* textEnd() const {
> + MOZ_ASSERT(kind_ == Text);
> + MOZ_ASSERT(end_[-1] == '"');
> + return end_ - 1;
Per the previous assertion this will be '"' all the time, is this off by one?
@@ +329,5 @@
> +
> +static bool
> +IsWasmNewLine(char16_t c)
> +{
> + return c == '\n';
nitpicking here, but what about '\r'?
@@ +335,5 @@
> +
> +static bool
> +IsWasmSpace(char16_t c)
> +{
> + return c == ' ' || c == '\t';
Should a new line be a space? I think so, considering the beginning of WasmTokenStream::next()
@@ +407,5 @@
> + case ')':
> + return WasmToken(WasmToken::CloseParen, begin, cur_);
> +
> + case '0': case '1': case '2': case '3': case '4':
> + case '5': case '6': case '7': case '8': case '9': {
What about signed numbers? (later patch?)
@@ +413,5 @@
> + while (cur_ != end_ && IsWasmDigit(*cur_)) {
> + u32 *= 10;
> + u32 += *cur_ - '0';
> + if (!u32.isValid())
> + break;
If we break here, then we call u32.value() which MOZ_ASSERT(isValid()), so this will trigger an error. In non-debug builds, this will just yield the first digit. Can we fail instead here?
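A minimal standalone sketch of the fail-instead-of-break behaviour, assuming the checked type used by the patch is mozilla::CheckedInt (the surrounding token-construction code is left out):
  #include <stdint.h>
  #include "mozilla/CheckedInt.h"

  // Accumulate decimal digits from [cur, end); return false on overflow so the
  // caller can report a parse error instead of tripping the assertion inside
  // value() later on.
  static bool
  ParseU32(const char16_t*& cur, const char16_t* end, uint32_t* result)
  {
      mozilla::CheckedInt<uint32_t> u32 = 0;
      while (cur != end && *cur >= '0' && *cur <= '9') {
          u32 *= 10;
          u32 += uint32_t(*cur - '0');
          if (!u32.isValid())
              return false;
          cur++;
      }
      *result = u32.value();
      return true;
  }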
::: js/src/builtin/TestingFunctions.cpp
@@ +3692,5 @@
> fuzzingSafe = true;
>
> disableOOMFunctions = disableOOMFunctions_;
>
> + if (!wasm::DefineTestingFunctions(cx, obj))
This isn't very idiomatic, but I guess you're doing this because wasm functions will get into a WASM global object later?
(In reply to Benjamin Bouvier [:bbouvier] from comment #32)
Thanks! Great catches; I'll upload a new patch with all these fixed.
> - startSection could write the number of bytes in an outparam uint32_t
> - then finishSection could directly use this number of bytes, instead of
> recording the offset to where the number of bytes can be read.
That would almost work but if finishSection is given the number of bytes, it doesn't know how to verify that this is correct: you need to know where you were at startSection. A less trixie strategy would be to return/pass both offset and the numBytes, but that didn't seem worth it.
> > + MOZ_WARN_UNUSED_RESULT bool skipSection(uint32_t numBytes) {
> > + if (uintptr_t(end_ - cur_) < numBytes)
> > + return false;
> > + cur_ += numBytes;
>
> Arbitrary bits could overflow addition this while having the first assertion
> being correct,
I don't see how: if the test doesn't return false then cur_ should be in the range [beg_, end_].
> Would using TempObject from jit/JitAllocPolicy work here? Seems we could
> create a TempAllocator from the LifoAlloc to the same effect.
It could perhaps but I kinda like decoupling the two (this is outside the JIT) and the code-reuse is pretty minimal. E.g., I don't want to do the ballast thing.
> > +struct WasmAstNop : WasmAstExpr
>
> no public inheritance for Nop?
structs default to public inheritance
> > + MOZ_ASSERT(end_[-1] == '"');
> > + return end_ - 1;
>
> Per the previous assertion this will be '"' all the time, is this off by one?
The end in a [begin, end) pair is usually non-inclusive so "one past the end" which, for string literals is what we want.
> What about signed numbers? (later patch?)
Yeah, this is super-bare-bones; *lots* left to do here if you compare with https://github.com/WebAssembly/spec/blob/master/ml-proto/host/lexer.mll.
> > + if (!wasm::DefineTestingFunctions(cx, obj))
>
> This isn't very idiomatic, but I guess you're doing this because wasm
> functions will get into a WASM global object later?
That's right and this function would be called instead from the vm/GlobalObject machinery. Even then, though, if the device doesn't support wasm (no hardfp, old SSE) I was thinking we'd represent that by not having a 'WASM' object.
Updated with comments addressed.
Comment on attachment 8711210 [details] [diff] [review]
baldr
Review of attachment 8711210 [details] [diff] [review]:
-----------------------------------------------------------------
Great work! It's so nice that no other pieces of the module generation et al. change in this patch; thank you for all the previous refactorings.
This is very nice to read and to understand in general; you can feel the asmjs/ subdir style touch (maybe time to rename this directory?).
I am excited to see this landing. Could we expose the wasmEval/wasmTextToBinary on Nightly only and/or behind a flag, so that we can demo it?
::: js/src/asmjs/WasmBinary.h
@@ +345,5 @@
> bytecode_[pc] = uint8_t(v);
> }
>
> + template <class T>
> + static const T loadUnaligned(const uint8_t* p) {
the "unaligned" is implicitly assumed in the Decoder class, maybe we could just assume it here too?
::: js/src/asmjs/WasmText.cpp
@@ +269,5 @@
> + ValType valueType_;
> + } u;
> + public:
> + WasmToken() = default;
> + WasmToken(Kind kind, const char16_t* begin, const char16_t* end)
Can we make it explicit as the other non-default ctors?
@@ +331,5 @@
> +
> +static bool
> +IsWasmNewLine(char16_t c)
> +{
> + return c == '\n' || c == '\r';
Hah, getting more information about EOL encoding: there could be a \r\n indicating a single end of line, although this will be counted as two lines per the current impl. Can you fix this, please?
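One possible shape for the fix, sketched with made-up names (the real tokenizer's position bookkeeping is more involved): fold "\r\n" into a single terminator before bumping the line count.
  #include <stdint.h>

  // Advance past the line terminator starting at cur, counting "\r\n" as a
  // single new line. Assumes cur != end and *cur is '\r' or '\n'.
  static const char16_t*
  ConsumeEOL(const char16_t* cur, const char16_t* end, uint32_t* line)
  {
      if (*cur == '\r' && cur + 1 != end && cur[1] == '\n')
          cur++;
      (*line)++;
      return cur + 1;
  }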
@@ +337,5 @@
> +
> +static bool
> +IsWasmSpace(char16_t c)
> +{
> + return IsWasmNewLine(c) || c == ' ' || c == '\t';
According to the comment below FirstCharKind in TokenStream.cpp, there are also \v and \f in this category.
Bonus points if you can share code with TokenStream.cpp/h.
@@ +370,5 @@
> + uint32_t lookaheadIndex_;
> + uint32_t lookaheadDepth_;
> + WasmToken lookahead_[LookaheadSize];
> +
> + bool finishKeyword(const char16_t* end, const char16_t* keyword) {
Considering that this is much used in the context of a prefix tree, can you name it "startsWith" or "prefixedBy"? The name is misleading in the sense that you can do finishKeyword('e') and then finishKeyword('xport') but this is a single keyword.
@@ +513,5 @@
> + WasmToken get() {
> + if (lookaheadDepth_) {
> + lookaheadDepth_--;
> + uint32_t got = lookaheadIndex_;
> + lookaheadIndex_ ^= 1;
nit: please static_assert LookaheadSize == 2 or MOZ_ASSERT(lookaheadIndex <= 1)
@@ +514,5 @@
> + if (lookaheadDepth_) {
> + lookaheadDepth_--;
> + uint32_t got = lookaheadIndex_;
> + lookaheadIndex_ ^= 1;
> + return lookahead_[got];
You can just store the WasmToken before xor-ing the lookaheadIndex, and return the token rather than storing the lookahead index?
@@ +520,5 @@
> + return next();
> + }
> + void unget(WasmToken token) {
> + MOZ_ASSERT(lookaheadDepth_ <= 1);
> + lookaheadDepth_++;
Is it me or lookaheadDepth_ === lookaheadIndex_ + 1 everywhere? Maybe we could spare one variable here...
@@ +614,5 @@
> + if (!args.append(valueType.valueType()))
> + return nullptr;
> + break;
> + }
> + case WasmToken::Result: {
Can Param and Result be put in any order and interleaved? It seems that (func) (param) (result) (param) would validate, and it doesn't look very intuitive. Or is your intent to let wasmTextToBinary be a bit loose (thus not exposed to the open web), so that all errors get caught by wasmEval? In any case, this needs tests.
@@ +616,5 @@
> + break;
> + }
> + case WasmToken::Result: {
> + if (result != ExprType::Void) {
> + c.ts.generateError(field, c.error);
Could we have a more explicit message, "can't have several result types"?
@@ +685,5 @@
> +
> + while (c.ts.getIf(WasmToken::OpenParen)) {
> + WasmToken section = c.ts.get();
> +
> + switch (section.kind()) {
Sections can be put in any order/interleaved, and there can be several of them? In any case, this needs tests!
::: js/src/jit-test/tests/wasm/basic.js
@@ +27,5 @@
> +var ver1 = 0xff;
> +var ver2 = 0xff;
> +var ver3 = 0xff;
> +
> +assertErrorMessage(() => wasmEval(Uint8Array.of().buffer), Error, magicError);
If wasmEval is supposed to be long lived and callable on the wasm global object, you might want to test it as well: 0 args, 3 args, 1 arg but not an array buffer, 2 args and second one isn't an object. Ditto for wasmTextToBinary. Of course, if these are just testing functions, no need to test them.
@@ +38,5 @@
> +assertErrorMessage(() => wasmEval(Uint8Array.of(magic0, magic1, magic2, magic3, ver0, ver1, ver2).buffer), Error, versionError);
> +assertErrorMessage(() => wasmEval(Uint8Array.of(magic0, magic1, magic2, magic3, ver0, ver1, ver2, ver3, 0, 1).buffer), Error, extraError);
> +
> +var o = wasmEval(Uint8Array.of(magic0, magic1, magic2, magic3, ver0, ver1, ver2, ver3, 0).buffer);
> +assertEq(Object.getOwnPropertyNames(o).length, 0);
Binary decoding can go wrong in so many ways that are untested here. Do you mind moving these first tests to a binary.js file and adding more binary tests there? This can be done in a later patch / bug; actually I'd volunteer to write such test cases.
@@ +44,5 @@
> +assertErrorMessage(() => wasmEvalText(""), Error, parsingError);
> +assertErrorMessage(() => wasmEvalText("("), Error, parsingError);
> +assertErrorMessage(() => wasmEvalText("(m"), Error, parsingError);
> +assertErrorMessage(() => wasmEvalText("(module"), Error, parsingError);
> +assertErrorMessage(() => wasmEvalText("(module"), Error, parsingError);
Can you add a test that "(moduler)" is also a parse error (to try the prefix tree)?
(In reply to Benjamin Bouvier [:bbouvier] from comment #35)
Thanks, great comments.
> I am excited to see this landing. Could we expose the
> wasmEval/wasmTextToBinary on Nightly only and/or behind a flag, so that we
> can demo it?
The plan is to expose it behind a pref (default off); however, that will likely be a separate WASM.compile/eval function which returns a Promise (which will be easier to do once we have bug 911216). Thus, wasmEval itself should remain a testing-only function.
> Is it me or lookaheadDepth_ === lookaheadIndex_ + 1 everywhere? Maybe we
> could spare one variable here...
When the lookaheadDepth is > 0, the lookaheadIndex can be either 0 or 1. You can think of this as a circular buffer specialized to the case where the buffer size is 2.
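To make that concrete, here is a tiny standalone sketch of the same two-slot scheme, with the token type reduced to int and the names changed; get() mirrors the quoted bookkeeping, while the unget() body is guessed since only its assertion is quoted above:
  #include <cassert>
  #include <cstdint>

  // Two-token lookahead: depth_ counts how many parked tokens are pending,
  // index_ says which of the two slots the next get() should read.
  class Lookahead2 {
      static const uint32_t Size = 2;
      int tokens_[Size];
      uint32_t index_ = 0;
      uint32_t depth_ = 0;

    public:
      // Park a token; the next get() returns it (LIFO if two are parked).
      void unget(int token) {
          assert(depth_ < Size);
          index_ ^= 1;
          tokens_[index_] = token;
          depth_++;
      }
      // Return a parked token if any; 'fallback' stands in for next().
      int get(int fallback) {
          if (depth_) {
              depth_--;
              int got = tokens_[index_];
              index_ ^= 1;
              return got;
          }
          return fallback;
      }
  };
Walking unget(A); unget(B); get(); get() through this sketch gives back B then A, which is the LIFO behaviour the parser relies on when it peeks two tokens ahead.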
> Can Param and Result be put in any order and interleaved?
Yep: https://github.com/WebAssembly/spec/blob/master/ml-proto/host/parser.mly#L293
> Or is your intent to let wasmTextToBinary be a bit loose (thus
> not exposed to the open web), so that all errors get caught by wasmEval?
This is ultimately a temporary text format (https://github.com/WebAssembly/design/blob/master/TextFormat.md#official-text-format) and so the goal is really just to have "enough to write tests". Later a more precisely-defined text language would be specified which gets a more careful treatment (and probably a switch to lex/yacc). Note that, in any case, this won't ever be content-visible.
> Could we have a more explicit message, "can't have several result types"?
Given the testing-only goals and possible rewrite to lex/yacc, I'm not too interested in building up the error reporting. Patches welcome, of course.
decoder wants to build the files in asmjs/ in non-unified mode. This patch fixes some compile issues blocking that.
Comment on attachment 8712209 [details] [diff] [review]
Fix non-unified build
Review of attachment 8712209 [details] [diff] [review]:
-----------------------------------------------------------------
Thanks!
::: js/src/asmjs/Wasm.cpp
@@ +122,5 @@
> uintptr_t bodyLength = bodyEnd - bodyBegin;
> if (!fg.bytecode().resize(bodyLength))
> return false;
>
> + mozilla::PodCopy(fg.bytecode().begin(), bodyBegin, bodyLength);
Can you add a 'using mozilla::PodCopy' up at the top?
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla47
| 2024-11-08T02:26:18 | en | train |
10,822,267 | javinpaul | 2016-01-01T16:33:23 | Most Entertaining Posts from StackOverFlow | null | http://javarevisited.blogspot.com/2015/08/5-entertaining-posts-from-stackoverflow.html | 3 | 0 | null | null | null | no_error | 5 Entertaining Posts from StackOverFlow - Must Read | null | null |
StackOverFlow is a great place to look for help, learn, and participate, but it's also a great place to find some real entertainment, contributed by programmers from all over the world. Due to the strict policies of stackoverflow.com, most of the entertaining posts eventually get closed or deleted, but some of them remain to entertain the programming community. As a regular reader of StackOverFlow for a long time, I have found a couple of threads which are truly amazing and have lots of funny and entertaining content. Here I am going to share 5 of my favorite StackOverFlow posts, which I suggest you read if you get bored or have some time to kill. By the way, don't forget to leave a comment and let us know which funny and entertaining StackOverFlow thread is your favorite.
What is your best programmer joke?
WOW, this was my reaction after reading a couple of these jokes :) Some of them you might have heard already, but this is a huge collection. Knowing a couple of programming jokes doesn't hurt your chances of being likeable in your team. By the way, here is my favorite one, "if you put a million monkeys at a million keyboards, one of them will eventually write a Java program. The rest of them will write Perl programs". I would rather change that: the rest will write JavaScript :).
Here is another one :
"What's the object-oriented way to become wealthy?"
A: Inheritance
If you want more fun, go check yourself this post here.
What is the best comment in source code you have ever encountered?
I guess, this is most classic, popular, entertaining and funny StackOverFlow thread, As you can see it's closed already :). While writing code comments best practices, I didn't think that world of comments are so funny. It has lots of classics, which you will remember forever. The very first one (highest voted), seems like a prologue of movies like Lord of the Rings to me, or may be 300 :)
and here is one more from same thread :)
Go, read some of them and enjoy, here you go http://stackoverflow.com/questions/184618/what-is-the-best-comment-in-source-code-you-have-ever-encountered
What's your favorite “programmer” cartoon?
How many of you are Dilbert fans? I am sure quite a lot. I just love them. They are short, humorous, with a pinch of reality, and sometimes just amazing. This post is one of the best collections of programming cartoons I have ever seen. Many of them are just amazing. By the way, this is my favorite, and I guess it is for many fellow programmers as well:
There is one more, which is about code quality. I first saw it in the Clean Code book and it became my second favorite.
If you enjoy these two cartoons then you will enjoy many more from this thread, here is the link http://stackoverflow.com/questions/84556/whats-your-favorite-programmer-cartoon
New Programming Jargon
Unfortunately, this post has been deleted from StackOverFlow, but Jeff Atwood keeps it alive with some meta-commentary on his blog. I love this post and it's quite interesting for any programmer. Out of the top 30 collected by Jeff, my favorite is Yoda Conditions, e.g. if( 4 == count), and of course its meta-commentary, as it reads like "Blue is the Sky" or "Round is the Earth".
Here is the link to enjoy : http://www.codinghorror.com/blog/2012/07/new-programming-jargon.html
Strangest language feature
This is a one-of-its-kind post and you will not find something similar. It's also a test of your programming love: how well you know the C programming language you have been programming in for years. To be honest, I didn't know about the first post, that you can index an array in C as 10[a]; it's really strange, isn't it?
As you can see it's closed but thankfully not deleted, and you can still check out some of the strangest features of any programming language, http://stackoverflow.com/questions/1995113/strangest-language-feature
That's all guys, have fun and make most of your spare time. I really hope if StackOverFlow was little relaxed about their policy and understand that these kind of posts also add value to their site. A community needs entertainment and programming community is not different, in fact it needs more to keep up with the stress of deliveries, deadlines and customer satisfaction. | 2024-11-08T12:33:06 | en | train |
10,822,298 | jimsojim | 2016-01-01T16:41:17 | The Slow Death of the University (2015) | null | http://www.socjobrumors.com/topic/the-slow-death-of-the-university-by-terry-eagleton-uk | 47 | 37 | [
10823446,
10824137,
10823599,
10823286,
10823424,
10823710
] | null | null | no_error | The Slow Death of the University, by Terry Eagleton (UK) « Sociology Job Market Rumors | null | Sociologist
0a3 |
A few years ago, I was being shown around a large, very technologically advanced university in Asia by its proud president. As befitted so eminent a personage, he was flanked by two burly young minders in black suits and shades, who for all I knew were carrying Kalashnikovs under their jackets. Having waxed lyrical about his gleaming new business school and state-of-the-art institute for management studies, the president paused to permit me a few words of fulsome praise. I remarked instead that there seemed to be no critical studies of any kind on his campus. He looked at me bemusedly, as though I had asked him how many Ph.D.’s in pole dancing they awarded each year, and replied rather stiffly "Your comment will be noted." He then took a small piece of cutting-edge technology out of his pocket, flicked it open and spoke a few curt words of Korean into it, probably "Kill him." A limousine the length of a cricket pitch then arrived, into which the president was bundled by his minders and swept away. I watched his car disappear from view, wondering when his order for my execution was to be implemented.
This happened in South Korea, but it might have taken place almost anywhere on the planet. From Cape Town to Reykjavik, Sydney to São Paulo, an event as momentous in its own way as the Cuban revolution or the invasion of Iraq is steadily under way: the slow death of the university as a center of humane critique. Universities, which in Britain have an 800-year history, have traditionally been derided as ivory towers, and there was always some truth in the accusation. Yet the distance they established between themselves and society at large could prove enabling as well as disabling, ...See full post
9 years ago # QUOTE 42 GOOD 13 NO GOOD !
Sociologist
360
The Professor's In hatred.
9 years ago # QUOTE 11 GOOD 29 NO GOOD !
Sociologist
3f9
lol humanities
8 years ago # QUOTE 9 GOOD 20 NO GOOD !
Sociologist
0c5
The first paragraph was so full of bull s**t, I didn't feel it necessary to read the rest. This is like one of those anecdotes preachers tell their congregation about "kids these days" that are 100% fabricated.
8 years ago # QUOTE 11 GOOD 21 NO GOOD !
Sociologist
ac4
If anything is to die first, it's Eagleton's English major Marxism Critical Theory based in Yuro. Oh gosh, it's like full-on idiocy hits on all spots.
8 years ago # QUOTE 9 GOOD 17 NO GOOD !
Sociologist
c0b
Considering that a lot of 'critique' amounts to unresearched conspiracy theory, the Internet has taken over and we have plenty of critique in society.
8 years ago # QUOTE 9 GOOD 15 NO GOOD !
Sociologist
f9f
Critique is necessary, even if it is borderline conspiracy theory territory. Its hard to tell what is true or what is just paranoia, what is certain is that main stream media serves main stream interests.
8 years ago # QUOTE 11 GOOD 8 NO GOOD !
Sociologist
0b0
If Terry wants to eschew filthy lucre and focus on lofty intellectual pursuits regardless of their utility, he should either find his own means of funding such an existence, or put his livelihood where his mouth is, swear an oath of poverty, and join a monastery.
8 years ago # QUOTE 7 GOOD 19 NO GOOD !
Humanomics
Rep: -157
This history of the academy is completely anachronistic. The most revolutionary or "critical" thing going on in the humanities up until the 18th century was the attempt to reconcile a collection of Greek and Christian books that fit in a backpack, and the modern humanities were routinely conservative up until leftists flooded the academic job market in the 60s and 70s.
Allan Bloom may or may not be a craggy conservative ball sack, but him and the other complainers whining about how the humanities have become dogmatically adherent to the critical tradition, and only recently, aren't wrong.
It's just incredible how many people who call themselves empiricists and scientists believe a demonstrably false story about the academy. I guess when sacralizing victimhood and remaining on the side of The Good is central to your worldview, you don't need evidence. A persecution complex about the big bad neoconservative world coming to get you will do.
8 years ago # QUOTE 8 GOOD 14 NO GOOD !
Sociologist
b72
This history of the academy is completely anachronistic. The most revolutionary or "critical" thing going on in the humanities up until the 18th century was the attempt to reconcile a collection of Greek and Christian books that fit in a backpack, and the modern humanities were routinely conservative up until leftists flooded the academic job market in the 60s and 70s.
This paragraph was good.
Allan Bloom may or may not be a craggy conservative ball sack, but him and the other complainers whining about how the humanities have become dogmatically adherent to the critical tradition, and only recently, aren't wrong.
This one seemed like a non-sequitar. And I don't know who Bloom is, and don't even care enough to Google.
It's just incredible how many people who call themselves empiricists and scientists believe a demonstrably false story about the academy. I guess when sacralizing victimhood and remaining on the side of The Good is central to your worldview, you don't need evidence. A persecution complex about the big bad neoconservative world coming to get you will do.
And this last one seemed like troll bait.
8 years ago # QUOTE 14 GOOD 13 NO GOOD !
Humanomics
Rep: -157
Closing of the American Mind is the most widely read criticism of the academy that emerged in the 90s culture wars. A large genre of conservative books bashing the academy came out in the 90s, as had they in the 70s when leftists themselves were upset that the university wasn't yet left enough.
http://www.amazon.com/Closing-American-Mind-Education-Impoverished/dp/1451683200
https://en.wikipedia.org/wiki/The_Closing_of_the_American_Mind
It's a relatively unenlightening genre in which conservatives assert with appeals to the authority of tradition that classical, liberal, Greek, Enlightenment documents are right, and that critical documents are wrong. But they're not wrong about when the critical tradition took over the humanities. Leftists are.
Troll bait or not, it is a complete myth that professors are a historically oppressed ideological minority that deserve protections from state and capitalist oppression. Progressives who defend tenure, self governance, and a billions dollar firehose of unmonitored demand and supply subsidies with this story are just embarrassing themselves.
8 years ago # QUOTE 4 GOOD 12 NO GOOD !
Humanomics
Rep: -157
And the lack of self awareness in paragraphs like the below is just astounding. In it, he admits that the professoriate is an aristocratic tradition in which blowhards feel entitled to financial patronage just for waking up in the morning and sneering at productive enterprise.
He is literally recommending premodern religious and landed gentrification as an alternative to capitalist oppression. Because though they may have been starving and illiterate, at least the poor had community in the 17th century.
When I first came to Oxford 30 years earlier, any such professionalism would have been greeted with patrician disdain. Those of my colleagues who had actually bothered to finish their Ph.D.’s would sometimes use the title of "Mr." rather than "Dr.," since "Dr." suggested a degree of ungentlemanly labor. Publishing books was regarded as a rather vulgar project. A brief article every 10 years or so on the syntax of Portuguese or the dietary habits of ancient Carthage was considered just about permissible. There had been a time earlier when college tutors might not even have bothered to arrange set tutorial times for their undergraduates. Instead, the undergraduate would simply drop round to their rooms when the spirit moved him for a glass of sherry and a civilized chat about Jane Austen or the function of the pancreas.
The odds an internet sociologist agrees with Eagleton are currently 23:4.
8 years ago # QUOTE 6 GOOD 18 NO GOOD !
Sociologist
049
He's basically accurate about what's happened to the universities
3 years ago # QUOTE 5 GOOD 4 NO GOOD !
Sociologist
38b
oh yeah, let's do a b/u/m/p!
2 years ago # QUOTE 3 GOOD 2 NO GOOD !
| 2024-11-08T12:40:38 | en | train |
10,822,438 | kneth | 2016-01-01T17:24:04 | Software with the most vulnerabilities in 2015: Mac OS X, iOS, and Flash | null | http://venturebeat.com/2015/12/31/software-with-the-most-vulnerabilities-in-2015-mac-os-x-ios-and-flash/ | 17 | null | [
10822790,
10822805,
10823025
] | null | true | no_error | Software with the most vulnerabilities in 2015: Mac OS X, iOS, and Flash | 2015-12-31T16:23:52+00:00 | Emil Protalinski |
December 31, 2015 8:23 AM
Image Credit: REUTERS/Mike Segar
Which software had the most publicly disclosed vulnerabilities this year? The winner is none other than Apple’s Mac OS X, with 384 vulnerabilities. The runner-up? Apple’s iOS, with 375 vulnerabilities.
Rounding out the top five are Adobe’s Flash Player, with 314 vulnerabilities; Adobe’s AIR SDK, with 246 vulnerabilities; and Adobe AIR itself, also with 246 vulnerabilities. For comparison, last year the top five (in order) were: Microsoft’s Internet Explorer, Apple’s Mac OS X, the Linux Kernel, Google’s Chrome, and Apple’s iOS.
These results come from CVE Details, which organizes data provided by the National Vulnerability Database (NVD). As its name implies, the Common Vulnerabilities and Exposures (CVE) system keeps track of publicly known information-security vulnerabilities and exposures.
Here is the 2015 list of the top 50 software products in order of total distinct vulnerabilities:
You’ll notice that Windows versions are split separately, unlike OS X. Many of the vulnerabilities across various Windows versions are the same, so there is undoubtedly a lot of overlap. The argument for separating them is probably one of market share, though that’s a hard one to agree to, given that Android and iOS are not split into separate versions. This is the nature of CVEs.
It’s also worth pointing out that the Linux kernel is separate from various Linux distributions. This is likely because the Linux kernel can be upgraded independently of the rest of the operating system, and so its vulnerabilities are split off.
If we take the top 50 list of products and categorize them by company, it’s easy to see that the top three are Microsoft, Adobe, and Apple:
Keep in mind that tech companies have different disclosure policies for security holes. Again, this list paints a picture of the number of publicly known vulnerabilities, not of all vulnerabilities, nor of the overall security of a given piece of software.
If you work in IT, or are generally responsible for the security of multiple systems, there are some obvious trends to keep in mind. Based on this list, it’s clear you should always patch and update operating systems, browsers, and Adobe’s free products.
| 2024-11-08T06:00:46 | en | train |
10,822,554 | merqurio | 2016-01-01T17:51:57 | The McDonald's of the Future Opens in Hong Kong | null | http://kotaku.com/the-mcdonalds-of-the-future-opens-in-hong-kong-1750271639 | 2 | 0 | null | null | null | no_error | The McDonald's Of The Future Opens In Hong Kong | 2015-12-31T04:00:00-05:00 | Brian Ashcraft | Hong Kong is now home to the newest and neatest McDonald's around. Say hello to McDonald's Next, the McDonald's of the future. According to Trend Hunter, Contemporist, and Hong Kong Navi, design firm Landini Associates joined forces with the fast food chain to create this oh-so slick McDonald's in Hong Kong, located near Admiralty Station. Photographer Ross Honeysett took these terrific photos: [Photos: Ross Honeysett] The restaurant features those Create Your Taste touch screens that previously launched in Australia. [Photo: Hong Kong Navi] The ingredients sure look fresh. [Photos: Hong Kong Navi] What a fancy-looking McDonald's. Top image: Ross Honeysett. To contact the author of this post, write to bashcraftATkotaku.com or find him on Twitter@Brian_Ashcraft. Kotaku East is your slice of Asian internet culture, bringing you the latest talking points from Japan, Korea, China and beyond. | 2024-11-08T02:12:35 | en | train |
10,822,572 | Doches | 2016-01-01T17:56:24 | Texas airports get ready for state’s new 'open carry' gun law | null | http://www.usatoday.com/story/travel/flights/todayinthesky/2015/12/31/texas-airports-open-carry-gun-law/78131864/ | 1 | 0 | null | null | null | no_error | Texas airports get ready for state’s new 'open carry' gun law | null | null | Airports in Texas are posting signs and issuing memos in advance of the state’s new “open carry” law, which goes into effect tomorrow, Jan. 1, 2016.The law allows legally licensed handgun owners to openly carry a holstered gun in public but, as the Houston Airport System notes, “there still are some restrictions in certain locations, including at airports.”A statement outlining what the new state rules mean for passengers and employees at George Bush Intercontinental Airport (IAH), William P. Hobby Airport (HOU) and Ellington Airport (EFD) — and presumably other airports in the state — says gun owners with properly licensed and displayed guns (as well as gun owners with licenses for concealed weapons) “can have their gun in public areas only, like baggage, ticketing, garages and public sidewalks or walkways.”Federal law still prohibits passengers from bringing weapons to or past airport security checkpoints and the TSA is permitted to issue fines to travelers found with loaded or unloaded guns. But an amendment to the Texas handgun licensing law that went into effect in September says a passenger found with a licensed gun at an airport checkpoint won’t be subject to felony charges as long as the gun was taken to the airport accidentally (the explanation the TSA says most everyone caught with a gun at an airport seems to give) and as long as the passenger immediately takes their gun away from the secure area when it’s found."The Houston Airport System does not anticipate a discernible impact on our day-to-day operations with the implementation of the state's new open carry law," said Bill Begley, public information officer for the Houston Airport System. "State public safety officials have put in place requirements that are designed to ensure the safe use of firearms, and Houston's airports will follow those regulations and safeguards to ensure everyone who comes to one of our airports feels safe and comfortable."No new signs will be posted at Austin Bergstrom International Airport to remind or alert travelers to the new state gun law."Previously, a passenger could only bring a gun into the airport if they had a concealed permit or were checking it in as checked luggage with their airline," said Jason Zielinski, spokesman for Austin Bergstrom International Airport. "Now, if they have a license, they can open carry in the airport's public areas, ticketing and bag claim, but cannot open carry through security screening, while boarding an aircraft or in any Air Operations Area."The TSA issues a weekly report of the number of firearms (and other prohibited items) found at airports checkpoints and does an annual year-end tally. 
Three Texas airports — DFW and both George Bush Intercontinental and Hobby Airport in Houston — were in the TSA’s list of “Top 10 Airports for Gun Catches in 2014.”Open carry laws in many other states already permit licensed gun owners to bring firearms into the public areas of airports and in June a man dropping his daughter off at Hartsfield-Jackson Atlanta International made national news by walking through the airport carrying an AR-15 rifle.Harriet Baskas is a Seattle-based airports and aviation writer and USA TODAY Travel's "At the Airport" columnist. She occasionally contributes to Ben Mutzabaugh's Today in the Sky blog. Follow her at twitter.com/hbaskas. | 2024-11-08T07:20:06 | en | train |
10,822,717 | selva86 | 2016-01-01T18:28:43 | Advanced Outlier Detection in R | null | http://r-statistics.co/Outlier-Treatment-With-R.html | 1 | null | null | null | true | no_error | Outlier Treatment With R | Multivariate Outliers | null | Selva Prabhakaran |
Outliers in data can distort predictions and affect accuracy if you don't detect and handle them appropriately, especially in regression models.
Why is outlier detection important?
Treating or altering the outlier/extreme values in genuine observations is not a standard operating procedure. However, it is essential to understand their impact on your predictive models. It is left to the best judgement of the investigator to decide whether treating outliers is necessary and how to go about it.
So, why is identifying the extreme values important? Because outliers can drastically bias/change the fit estimates and predictions. Let me illustrate this using the cars dataset.
To understand the implications of outliers better, I am going to compare the fit of a simple linear regression model on the cars dataset with and without outliers. In order to distinguish the effect clearly, I manually introduce extreme values to the original cars dataset. Then, I predict on both the datasets.
# Inject outliers into data.
cars1 <- cars[1:30, ] # original data
cars_outliers <- data.frame(speed=c(19,19,20,20,20), dist=c(190, 186, 210, 220, 218)) # introduce outliers.
cars2 <- rbind(cars1, cars_outliers) # data with outliers.
# Plot of data with outliers.
par(mfrow=c(1, 2))
plot(cars2$speed, cars2$dist, xlim=c(0, 28), ylim=c(0, 230), main="With Outliers", xlab="speed", ylab="dist", pch="*", col="red", cex=2)
abline(lm(dist ~ speed, data=cars2), col="blue", lwd=3, lty=2)
# Plot of original data without outliers. Note the change in slope (angle) of best fit line.
plot(cars1$speed, cars1$dist, xlim=c(0, 28), ylim=c(0, 230), main="Outliers removed \n A much better fit!", xlab="speed", ylab="dist", pch="*", col="red", cex=2)
abline(lm(dist ~ speed, data=cars1), col="blue", lwd=3, lty=2)
Notice the change in the slope of the best fit line after removing the outliers. Had we used the outliers to train the model (left chart), our predictions would be exaggerated (high error) for larger values of speed because of the larger slope.
Detect outliers
Univariate approach
For a given continuous variable, outliers are those observations that lie outside 1.5 * IQR, where IQR, the 'Inter Quartile Range', is the difference between the 75th and 25th quartiles. Look at the points outside the whiskers in the box plot below.
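In other words, writing $Q_1$ and $Q_3$ for the 25th and 75th percentiles (this just restates the rule above), an observation $x$ is flagged as an outlier when it falls outside the interval
$$\left[\, Q_1 - 1.5 \times IQR,\; Q_3 + 1.5 \times IQR \,\right], \qquad IQR = Q_3 - Q_1$$
that is, when it lies below the lower fence or above the upper fence; those are the points drawn beyond the whiskers.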
url <- "http://rstatistics.net/wp-content/uploads/2015/09/ozone.csv"
# alternate source: https://raw.githubusercontent.com/selva86/datasets/master/ozone.csv
inputData <- read.csv(url) # import data
outlier_values <- boxplot.stats(inputData$pressure_height)$out # outlier values.
boxplot(inputData$pressure_height, main="Pressure Height", boxwex=0.1)
mtext(paste("Outliers: ", paste(outlier_values, collapse=", ")), cex=0.6)
Bivariate approach
Visualize in box-plot of the X and Y, for categorical X's.
url <- "http://rstatistics.net/wp-content/uploads/2015/09/ozone.csv"
ozone <- read.csv(url)
# For categorical variable
boxplot(ozone_reading ~ Month, data=ozone, main="Ozone reading across months") # clear pattern is noticeable.
boxplot(ozone_reading ~ Day_of_week, data=ozone, main="Ozone reading for days of week") # this may not be significant, as day of week variable is a subset of the month var.
What is the inference? The change in the level of the boxes suggests that Month seems to have an impact on ozone_reading while Day_of_week does not. Any outliers in the respective categorical levels show up as dots outside the whiskers of the boxplot.
# For continuous variable (convert to categorical if needed.)
boxplot(ozone_reading ~ pressure_height, data=ozone, main="Boxplot for Pressure height (continuos var) vs Ozone")
boxplot(ozone_reading ~ cut(pressure_height, pretty(inputData$pressure_height)), data=ozone, main="Boxplot for Pressure height (categorial) vs Ozone", cex.axis=0.5)
You can see a few outliers in the box plot and how the ozone_reading increases with pressure_height. That's clear.
Multivariate Model Approach
Declaring an observation as an outlier based on just one (rather unimportant) feature could lead to unrealistic inferences. When you have to decide if an individual entity (represented by a row or observation) is an extreme value or not, it is better to collectively consider the features (X's) that matter. Enter Cook's Distance.
Cooks Distance
Cook's distance is a measure computed with respect to a given regression model and therefore is impacted only by the X variables included in the model. But what does Cook's distance mean? It computes the influence exerted by each data point (row) on the predicted outcome. The Cook's distance for each observation i measures the change in $\hat{Y}$ (fitted Y) for all observations with and without the presence of observation i, so we know how much observation i impacted the fitted values. Mathematically, Cook's distance Di for observation i is computed as:
$$D{_i}=\frac{\sum_{j=1}^{n}\left( \hat{Y}_{j} - \hat{Y}_{j \left(i \right)} \right)^{2}}{p \times MSE}$$ where,
$\hat{Y}_{j}$ is the value of jth fitted response when all the observations are included.
$\hat{Y}_{j \left(i \right)}$ is the value of jth fitted response, where the fit does not include observation i.
MSE is the mean squared error.
p is the number of coefficients in the regression model.
mod <- lm(ozone_reading ~ ., data=ozone)
cooksd <- cooks.distance(mod)
Influence measures
In general use, those observations that have a cook’s distance greater than 4 times the mean may be classified as influential. This is not a hard boundary.
plot(cooksd, pch="*", cex=2, main="Influential Obs by Cooks distance") # plot cook's distance
abline(h = 4*mean(cooksd, na.rm=T), col="red") # add cutoff line
text(x=1:length(cooksd)+1, y=cooksd, labels=ifelse(cooksd>4*mean(cooksd, na.rm=T),names(cooksd),""), col="red") # add labels
Now lets find out the influential rows from the original data. If you extract and examine each influential row 1-by-1 (from below output), you will be able to reason out why that row turned out influential. It is likely that one of the X variables included in the model had extreme values.
influential <- as.numeric(names(cooksd)[(cooksd > 4*mean(cooksd, na.rm=T))]) # influential row numbers
head(ozone[influential, ]) # influential observations.
#> Month Day_of_month Day_of_week ozone_reading pressure_height Wind_speed Humidity
#> 19 1 19 1 4.07 5680 5 73
#> 23 1 23 5 4.90 5700 5 59
#> 58 2 27 5 22.89 5740 3 47
#> 133 5 12 3 33.04 5880 3 80
#> 135 5 14 5 31.15 5850 4 76
#> 149 5 28 5 4.82 5750 3 76
#> Temperature_Sandburg Temperature_ElMonte Inversion_base_height Pressure_gradient
#> 19 52 56.48 393 -68
#> 23 69 51.08 3044 18
#> 58 53 58.82 885 -4
#> 133 80 73.04 436 0
#> 135 78 71.24 1181 50
#> 149 65 51.08 3644 86
#> Inversion_temperature Visibility
#> 19 69.80 10
#> 23 52.88 150
#> 58 67.10 80
#> 133 86.36 40
#> 135 79.88 17
#> 149 59.36 70
Let's examine the first 6 rows from the above output to find out why these rows could be tagged as influential observations.
Row 58, 133, 135 have very high ozone_reading.
Rows 23, 135 and 149 have very high Inversion_base_height.
Row 19 has very low Pressure_gradient.
Outliers Test
The function outlierTest from car package gives the most extreme observation based on the given model. Here’s an example based on the mod linear model object we’d just created.
car::outlierTest(mod)
#> No Studentized residuals with Bonferonni p < 0.05
#> Largest |rstudent|:
#> rstudent unadjusted p-value Bonferonni p
#> 243 3.045756 0.0026525 0.53845
This output suggests that observation in row 243 is most extreme.
outliers package
The outliers package provides a number of useful functions to systematically extract outliers. Some of these are convenient and come handy, especially the outlier() and scores() functions.
outliers
outlier() gets the observation that is farthest from the mean. If you set the argument opposite=TRUE, it fetches the one from the other side.
set.seed(1234)
y=rnorm(100)
outlier(y)
#> [1] 2.548991
outlier(y,opposite=TRUE)
#> [1] -2.345698
dim(y) <- c(20,5) # convert it to a matrix
outlier(y)
#> [1] 2.415835 1.102298 1.647817 2.548991 2.121117
outlier(y,opposite=TRUE)
#> [1] -2.345698 -2.180040 -1.806031 -1.390701 -1.372302
scores
There are two aspects to the scores() function.
Compute the normalised scores based on “z”, “t”, “chisq” etc
Find out observations that lie beyond a given percentile based on a given score.
set.seed(1234)
x = rnorm(10)
scores(x) # z-scores => (x-mean)/sd
scores(x, type="chisq") # chi-sq scores => (x - mean(x))^2/var(x)
#> [1] 0.68458034 0.44007451 2.17210689 3.88421971 0.66539631 . . .
scores(x, type="t") # t scores
scores(x, type="chisq", prob=0.9) # beyond 90th %ile based on chi-sq
#> [1] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE FALSE
scores(x, type="chisq", prob=0.95) # beyond 95th %ile
scores(x, type="z", prob=0.95) # beyond 95th %ile based on z-scores
scores(x, type="t", prob=0.95) # beyond 95th %ile based on t-scores
Treating the outliers
Once the outliers are identified and you have decided to make amends as per the nature of the problem, you may consider one of the following approaches.
1. Imputation
Imputation with mean / median / mode. This method has been dealt with in detail in the discussion about treating missing values.
2. Capping
For values that lie outside the 1.5 * IQR limits, we could cap them by replacing those observations below the lower limit with the value of the 5th %ile and those that lie above the upper limit with the value of the 95th %ile. Below is a sample code that achieves this.
x <- ozone$pressure_height
qnt <- quantile(x, probs=c(.25, .75), na.rm = T)
caps <- quantile(x, probs=c(.05, .95), na.rm = T)
H <- 1.5 * IQR(x, na.rm = T)
x[x < (qnt[1] - H)] <- caps[1]
x[x > (qnt[2] + H)] <- caps[2]
3. Prediction
In yet another approach, the outliers can be replaced with missing values (NA) and then can be predicted by considering them as a response variable. We already discussed how to predict missing values.
| 2024-11-07T09:13:56 | en | train |
10,822,750 | rvern | 2016-01-01T18:36:47 | Debian Linux founder Ian Murdock dies at 42, cause unknown | null | http://www.zdnet.com/article/debian-linux-founder-ian-murdock-dies-at-42-cause-unknown/ | 1 | null | null | null | true | no_error | Debian Linux founder Ian Murdock dies at 42, cause unknown | null | Written by | UPDATED: I'd known Ian Murdock, founder of Debian Linux and most recently a senior Docker staffer, since 1996. He died this week much too young, 42, in unclear circumstances. Ian Murdock backed away from saying he would commit suicide in later tweets, but he continued to be seriously troubled by his experiences and died later that night. No details regarding the cause of his death have been disclosed. In a blog posting, Docker merely stated that: "It is with great sadness that we inform you that Ian Murdock passed away on Monday night. This is a tragic loss for his family, for the Docker community, and the broader open source world; we all mourn his passing."The San Francisco Police Department said they had nothing to say about Murdock's death at this time. A copy of what is reputed to be his arrest record is all but blank.Sources close to the police department said that officers were called in to responded to reports of a man, Ian Murdock, trying to break into a home at the corner of Steiner and Union St at 11.30pm on Saturday, December 26. Murdock was reportedly drunk and resisted arrest. He was given a ticket for two counts of assault and one for obstruction of an officer. An EMT treated an abrasion on his forehead at the site, and he was taken to a hospital.At 2:40 AM early Sunday morning, December 27, he was arrested after banging on the door of a neighbor in the same block. It is not clear if he was knocking on the same door he had attempted to enter earlier. A medic treated him there for un-described injuries. Murdock was then arrested and taken to the San Francisco county jail. On Sunday afternoon, Murdock was bailed out with a $25,000 bond.On Monday afternoon, December 28, the next day, Murdock started sending increasingly erratic tweets from his Twitter account. The most worrying of all read: "i'm committing suicide tonight.. do not intervene as i have many stories to tell and do not want them to die with me"At first people assumed that his Twitter account had been hacked. Having known Murdock and his subsequent death, I believe that he was the author of these tweets.His Twitter account has since been deleted, but copies of the tweets remain. He wrote that: "the police here beat me up for knowing [probably an auto-correct for "knocking"] on my neighbor's door.. they sent me to the hospital."I have been unable to find any San Francisco area hospital with a record of his admission. Murdock wrote that he had been assaulted by the police, had his clothes ripped off, and was told, "We're the police, we can do whatever the fuck we want." He also wrote: "they beat the shit out of me twice, then charged me $25,000 to get out of jail for battery against THEM."Murdock also vented his anger at the police."(1/2) The rest of my life will be devoted to fighting against police abuse.. I'm white, I made $1.4 million last year, (2/2) They are uneducated, bitter, and and only interested in power for its own sake. Contact me [email protected] if you can help. -ian"After leaving the courtroom, presumably a magistrate court, Murdock tweeted that he had been followed home by the police and assaulted again. He continued: "I'm not committing suicide today. 
I'll write this all up first, so the police brutality ENDEMIC in this so call free country will be known." He added, "Maybe my suicide at this, you now, a successful business man, not a N****R, will finally bring some attention to this very serious issue."His last tweet stated: "I am a white male, make a lot money, pay a lot of money in taxes, and yet their abuse is equally doned out. DO NOT CROSS THEM!?"He appears to have died that night, Monday, December 28. At the time of this writing, the cause of death still remains unknown.His death is a great loss to the open-source world. He created Debian, one of the first Linux distributions and still a major distro; he also served as an open-source leader at Sun; as CTO for the Linux Foundation, and as a Docker executive. He will be missed.This story has been updated with details about Murdock's arrest.Related Stories:Not a typo:Microsoft is offering a Linux certificationDebian GNU/Linux now supported on Microsoft's AzureWhat's what in Debian Jessie | 2024-11-08T20:50:45 | en | train |
10,822,857 | harmindervirk | 2016-01-01T19:02:14 | AdonisJs will become popular in 2016 | null | http://adonisjs.com/ | 1 | 0 | null | null | null | no_error | AdonisJS - A fully featured web framework for Node.js | null | null |
Type-safe
We pay close attention to type-safety, seamless intellisense, and support for auto imports when designing framework APIs.
ESM ready
AdonisJS leverages modern JavaScript primitives, including ES modules, Node.js sub-path imports, and much more.
Fast - Wherever it matters
We ship with one of the fastest validation libraries, and our HTTP server has performance on par with Fastify.
World-class testing experience
Testing is not an afterthought for us. We ship with primitives to manage test databases, swap dependencies, generate fake data, interact with cookies and sessions, and much more.
With AdonisJS, you will love writing tests.
import { test } from '@japa/runner'
import { UserFactory } from '#factories/user_factory'
test('render polls created by the logged-in user', async ({ visit, browserContext }) => {
/**
* Create a user with 5 polls using Model factories
*/
const user = await UserFactory.with('polls', 5).create()
/**
* Mark the user as logged in
*/
await browserContext.loginAs(user)
/**
* Visit the endpoint that renders the list of
* polls for the logged-in user
*/
const page = await visit('/me/polls')
for (let poll of user.polls) {
await page.assertExists(
page.locator('h2', { hasText: poll.title })
)
}
})
A huge collection of officially maintained packages
Always find yourself switching between the latest ORMs or migrating away from those two unmaintained libraries?
We have been there too! That is why AdonisJS ships with a collection of officially maintained and well-documented packages.
Pick and use the ones you need.
Lucid
SQL ORM with a database query builder, Active record based models, support for migrations, and model factories for testing.
Auth
Driver-based authentication layer with support for sessions, API tokens, basic auth, and much more.
Bouncer
Bouncer provides low-level APIs to build custom authorization workflows like RBAC or ACL.
FlyDrive
FlyDrive provides a unified API to manage user-uploaded files on S3, GCS, and the local filesystem.
Limiter
Protect your API endpoints by implementing fine-grained rate limiting. Supports multiple backend stores.
Edge
A modern and batteries-included template engine for the ones who keep things simple.
Bentocache
Speed up your applications by storing slowly changing data in a multi-tier cache store.
VineJS
A superfast and type-safe validation library for Node.js. VineJS comes with 50+ validation rules and 12+ schema types.
Ally
Add support for Social login to your apps. Ally ships Twitter, GitHub, Google, LinkedIn, and many more drivers.
I18n
Create multi-lingual web apps. We provide APIs for localization, formatting values, and managing translations.
Health checks
Monitor database connectivity, redis memory size, and your application health using a single package.
Mailer
Send emails from your application using a multi-driver mailer. Supports SMTP, SES, Mailgun, Resend and much more.
Transmit
Send real-time updates by leveraging the power of Server Sent Events (SSE).
Lock
Synchronize the access to a shared resource to prevents several processes, or concurrent code, from executing a section of code at the same time.
Vite
Bundle the frontend assets of your applications using Vite.
Inertia
Make your frontend and backend work together in perfect harmony.
Tuyau
An end-to-end type-safe client for interacting with your AdonisJS APIs.
IoC container
AdonisJS ships with a zero-config IoC container that uses reflection to resolve and inject class dependencies.
Supercharged CLI
Scaffold resources, run database migrations, use application-aware REPL, or create custom commands. All using a unified CLI framework.
Security primitives
Implement CORS policies to protect your APIs, or use AdonisJS shield to protect against CSRF, XSS, and man-in-the-middle attacks.
A wall full of love and support from developers across the world
Adonis in MVC is a blessing tbh! I use it with HTMX or Turbo and it's just perfect!
Adonis is wonderful - just a perfect level of cognitive load a framework can have both in documentation and implementation. Not too much, not too little, but exactly what you need as a developer.
Reagan Ekhameye
@techreagan
@adonisframework is the first ever framework I learnt from the docs, I'm in love with this framework, it's just like @Laravel but for the @nodejs world . I will be like I'm stuck how can i solve this, the docs got you covered. It's gonna be my long term buddy.
The more I work with @adonisframework the more I'm convinced that it is the best framework in the Nodejs Ecosystem. The docs are well written, well designed, you just want to stay there and learn more. Thanks to all the people who are working on this project.
Dragos Nedelcu
@Drag0sNedelcu
Tried recently @adonisframework v6 and I am blown away. I love how the modules are separated, like orm, validation etc. Wish there were more discussions on this. Feels highly underrated...
Carl Mathisen
@carlmathisen
We’ve used it at work for a couple of years, just replacing these express cocktails. They were fine, but I was tired of maintaining and upgrading a unique set of libraries that were our company unique combination of regular features.
Having a solid experience in Laravel and wanting to explore NodeJs, I found @adonisframework a good choice for me to go through. It is been easy for me to jump in since it feels like I am just writing Laravel but in typescript and NodeJS , Awesome developer experience!.
I've been using @adonisframework for the last two days an I've already got more done then I did in the last 3 weeks with just using expressjs.
If you’re a PHP dev who loves Laravel and want to give another language a go and use a similar style framework then make sure to check out @adonisframework. It has its own ORM, Auth support, and even a CLI tool called Ace which is very similar to Artisan.#Laravel #NodeJS
So funny seeing everyone looking/begging for the node/ts equivalent of rails/Laravel when it’s existed for years. I started using @adonisframework in 2016 in production for one of the biggest startups in Asia. If you want full stack with everything built in take a look.
Sponsored by fantastic companies and individuals
| 2024-11-08T09:12:15 | en | train |
10,822,861 | joeyespo | 2016-01-01T19:02:48 | Where are we in the Python 3 transition? | null | http://www.snarky.ca/the-stages-of-the-python-3-transition | 110 | 132 | [
10823716,
10823353,
10823618,
10823406,
10823386,
10823843,
10823738,
10823506,
10823898,
10823492,
10823526,
10823723,
10824569,
10824438,
10823695,
10824793,
10824294,
10827662,
10823680,
10824388,
10823647,
10823387,
10823668,
10823658
] | null | null | no_error | Where are we in the Python 3 transition? | 2015-12-31T04:35:00.000Z | Brett Cannon |
The Kübler-Ross model outlines the stages that one goes through in dealing with death:
Denial
Anger
Bargaining
Depression
Acceptance
This is sometimes referred to as the five stages of grief. Some have jokingly called them the five stages of software development. I think it actually matches the Python community's transition to Python 3 rather well, both what has occurred and where we currently are (summary: the community is at least in stage 4 with some lucky to already be at the end in stage 5).
Denial
When Python 3 first came out and we said Python 2.7 was going to be the last release of Python 2, I think some people didn't entirely believe us. Others believed that Python 3 didn't offer enough to bother switching to it from Python 2, and so they ignored Python 3's existence. Basically the Python development team and people willing to trust that Python 3 wasn't some crazy experiment that we were going to abandon, ported their code to Python 3 while everyone else waited.
Anger
When it became obvious that the Python development team was serious about Python 3, some people got really upset. There were accusations of us not truly caring about the community and ignoring that the transition was hurting the community irreparably. This was when whispers of forking Python 2 to produce a Python 2.8 release came about, although that obviously never occurred.
Bargaining
Once people realized that being mad about Python 3 wasn't going to solve anything, the bargaining began. People came to the Python development team asking for features to be added to Python 3 to make transitioning easier such as bringing back the u string prefix in Python 3. People also made requests for exceptions to Python 2's "no new features" policy which were also made to allow for Python 2 to stay a feasible version of Python longer while people transitioned (this all landed in Python 2.7.9). We also extended the maintenance timeline of Python 2.7 from 5 years to 10 years to give people until 2020 to transition before people will need to pay for Python 2 support (as compared to the free support that the Python development team has provided).
Depression
7 years into the life of Python 3, it seems a decent amount of people have reached the point of depression about the transition. With Python 2.7 not about to be pulled out from underneath them, people don't feel abandoned by the Python development team. Python 3 also has enough new features that are simply not accessible from Python 2 that people want to switch. And with porting Python 2 code to run on Python 2/3 simultaneously heavily automated and being doable on a per-file basis, people no longer seem to be adverse to porting their code like they once were (although it admittedly still takes some effort).
Unfortunately people are running up against the classic problem of lacking buy-in from management. I regularly hear from people that they would switch if they could, but their manager(s) don't see any reason to switch and so they can't (or that they would do per-file porting, but they don't think they can convince their teammates to maintain the porting work). This can be especially frustrating if you use Python 3 in personal projects but are stuck on Python 2 at work. Hopefully Python 3 will continue to offer new features that will eventually entice reluctant managers to switch. Otherwise financial arguments might be necessary in the form of pointing out that porting to Python 3 is a one-time cost while staying on Python 2 past 2020 will be a perpetual cost for support to some enterprise provider of Python and will cost more in the long-term (e.g., paying for RHEL so that someone supports your Python 2 install past 2020). Have hope, though, that you can get buy-in from management for porting to Python 3 since others have and thus reached the "acceptance" stage.
Acceptance
While some people feel stuck in Python 2 at work and are "depressed" over it, others have reached the point of having transitioned their projects and accepted Python 3, both at work and in personal projects. Various numbers I have seen this year suggest about 20% of the scientific Python community and 20% of the Python web community have reached this point (I have yet to see reliable numbers for the Python community as a whole; PyPI is not reliable enough for various reasons). I consistently hear from people using Python 3 that they are quite happy; I have yet to hear from someone who has used Python 3 that they think it is a worse language than Python 2 (people are typically unhappy with the transition process and not Python 3 itself).
With five years left until people will need to pay for Python 2 support, I'm glad that the community seems to have reached either the "depression" or "acceptance" stages and has clearly moved beyond the "bargaining" stage. Hopefully in the next couple of years, managers across the world will realize that switching to Python 3 is worth it and not as costly as they think it is compared to having to actually pay for Python 2 support and thus more people will get to move to the "acceptance" stage.
| 2024-11-08T12:09:11 | en | train |
10,822,880 | amirmc | 2016-01-01T19:05:29 | A Unikernel Firewall for QubesOS | null | http://roscidus.com/blog/blog/2016/01/01/a-unikernel-firewall-for-qubesos/ | 111 | 10 | [
10829667,
10828778,
10829451,
10829129,
10833858
] | null | null | no_error | A Unikernel Firewall for QubesOS | null | Thomas Leonard |
QubesOS provides a desktop operating system made up of multiple virtual machines, running under Xen.
To protect against buggy network drivers, the physical network hardware is accessed only by a dedicated (and untrusted) "NetVM", which is connected to the rest of the system via a separate (trusted) "FirewallVM".
This firewall VM runs Linux, processing network traffic with code written in C.
In this blog post, I replace the Linux firewall VM with a MirageOS unikernel.
The resulting VM uses safe (bounds-checked, type-checked) OCaml code to process network traffic,
uses less than a tenth of the memory of the default FirewallVM, boots several times faster,
and should be much simpler to audit or extend.
Table of Contents
Qubes
Qubes networking
Problems with FirewallVM
A Unikernel Firewall
Booting a Unikernel on Qubes
Networking
The Xen virtual network layer
The Ethernet layer
The IP layer
Evaluation
Exercises
Summary
( this post also appeared on Reddit and Hacker News )
Qubes
QubesOS is a security-focused desktop operating system that uses virtual machines to isolate applications from each other. The screenshot below shows my current desktop. The windows with green borders are running Fedora in my "comms" VM, which I use for gmail and similar trusted sites (with NoScript). The blue windows are from a Debian VM which I use for software development. The red windows are another Fedora VM, which I use for general browsing (with flash, etc) and running various untrusted applications:
Another Fedora VM ("dom0") runs the window manager and drives most of the physical hardware (mouse, keyboard, screen, disks, etc).
Networking is a particularly dangerous activity, since attacks can come from anywhere in the world and handling network hardware and traffic is complex.
Qubes therefore uses two extra VMs for networking:
NetVM drives the physical network device directly. It runs network-manager and provides the system tray applet for configuring the network.
FirewallVM sits between the application VMs and NetVM. It implements a firewall and router.
The full system looks something like this:
The lines between VMs in the diagram above represent network connections.
If NetVM is compromised (e.g. by exploiting a bug in the kernel module driving the wifi card) then the system as a whole can still be considered secure - the attacker is still outside the firewall.
Besides traditional networking, all VMs can communicate with dom0 via some Qubes-specific protocols.
These are used to display window contents, tell VMs about their configuration, and provide direct channels between VMs where appropriate.
Qubes networking
There are three IP networks in the default configuration:
192.168.1.* is the external network (to my house router).
10.137.1.* is a virtual network connecting NetVM to the firewalls (you can have multiple firewall VMs).
10.137.2.* connects the app VMs to the default FirewallVM.
Both NetVM and FirewallVM perform NAT, so packets from "comms" appear to NetVM to have been sent by the firewall, and packets from the firewall appear to my house router to have come from NetVM.
Each of the AppVMs is configured to use the firewall (10.137.2.1) as its DNS resolver.
FirewallVM uses an iptables rule to forward DNS traffic to its resolver, which is NetVM.
Problems with FirewallVM
After using Qubes for a while, there are a number of things about the default FirewallVM that I'm unhappy about:
It runs a full Linux system, which uses at least 300 MB of RAM. This seems excessive.
It takes several seconds to boot.
There is a race somewhere setting up the DNS redirection. Adding some debug to track down the bug made it disappear.
The iptables configuration is huge and hard to understand.
There is another, more serious, problem.
Xen virtual network devices are implemented as a client ("netfront") and a server ("netback"), which are Linux kernel modules in sys-firewall.
In a traditional Xen system, the netback driver runs in dom0 and is fully trusted. It is coded to protect itself against misbehaving client VMs. Netfront, by contrast, assumes that netback is trustworthy.
The Xen developers only consider bugs in netback to be security critical.
In Qubes, NetVM acts as netback to FirewallVM, which acts as a netback in turn to its clients.
But in Qubes, NetVM is supposed to be untrusted! So, we have code running in kernel mode in the (trusted) FirewallVM that is talking to and trusting the (untrusted) NetVM!
For example, as the Qubes developers point out in Qubes Security Bulletin #23, the netfront code that processes responses from netback uses the request ID quoted by netback as an index into an array without even checking if it's in range (they have fixed this in their fork).
What can an attacker do once they've exploited FirewallVM's trusting netfront driver?
Presumably they now have complete control of FirewallVM.
At this point, they can simply reuse the same exploit to take control of the client VMs, which are running the same trusting netfront code!
A Unikernel Firewall
I decided to see whether I could replace the default firewall ("sys-firewall") with a MirageOS unikernel.
A Mirage unikernel is an OCaml program compiled to run as an operating system kernel.
It pulls in just the code it needs, as libraries.
For example, my firewall doesn't require or use a hard disk, so it doesn't contain any code for dealing with block devices.
If you want to follow along, my code is on GitHub in my qubes-mirage-firewall repository.
The README explains how to build it from source.
For testing, you can also just download the mirage-firewall-bin-0.1.tar.bz2 binary kernel tarball.
dom0 doesn't have network access, but you can proxy the download through another VM:
[tal@dom0 ~]$ cd /tmp
[tal@dom0 tmp]$ qvm-run -p sys-net 'wget -O - https://github.com/talex5/qubes-mirage-firewall/releases/download/0.1/mirage-firewall-bin-0.1.tar.bz2' > mirage-firewall-bin-0.1.tar.bz2
[tal@dom0 tmp]$ tar tf mirage-firewall-bin-0.1.tar.bz2
mirage-firewall/
mirage-firewall/vmlinuz
mirage-firewall/initramfs
mirage-firewall/modules.img
[tal@dom0 ~]$ cd /var/lib/qubes/vm-kernels/
[tal@dom0 vm-kernels]$ tar xf /tmp/mirage-firewall-bin-0.1.tar.bz2
The tarball contains vmlinuz, which is the unikernel itself, plus a couple of dummy files that Qubes requires to recognise it as a kernel (modules.img and initramfs).
Create a new ProxyVM named "mirage-firewall" to run the unikernel:
You can use any template, and make it standalone or not. It doesn't matter, since we don't use the hard disk.
Set the type to ProxyVM.
Select sys-net for networking (not sys-firewall).
Click OK to create the VM.
Go to the VM settings, and look in the "Advanced" tab.
Set the kernel to mirage-firewall.
Turn off memory balancing and set the memory to 32 MB or so (you might have to fight a bit with the Qubes GUI to get it this low).
Set VCPUs (number of virtual CPUs) to 1.
(this installation mechanism is obviously not ideal; hopefully future versions of Qubes will be more unikernel-friendly)
You can run mirage-firewall alongside your existing sys-firewall and you can choose which AppVMs use which firewall using the GUI.
For example, to configure "untrusted" to use mirage-firewall:
You can view the unikernel's log output from the GUI, or with sudo xl console mirage-firewall in dom0 if you want to see live updates.
If you want to explore the code but don't know OCaml, a good tip is that most modules (.ml files) have a corresponding .mli interface file which describes the module's public API (a bit like a .h file in C).
It's usually worth reading those interface files first.
I tested initially with Qubes 3.0 and have just upgraded to the 3.1 alpha. Both seem to work.
Booting a Unikernel on Qubes
Qubes runs on Xen and a Mirage application can be compiled to a Xen kernel image using mirage configure --xen.
However, Qubes expects a VM to provide three Qubes-specific services and doesn't consider the VM to be running until it has connected to each of them. They are qrexec (remote command execution), gui (displaying windows on the dom0 desktop) and QubesDB (a key-value store).
I wrote a little library, mirage-qubes, to implement enough of these three protocols for the firewall (the GUI does nothing except handshake with dom0, since the firewall has no GUI).
Here's the full boot code in my firewall, showing how to connect the agents:
unikernel.ml
let start () =
let start_time = Clock.time () in
Log_reporter.init_logging ();
(* Start qrexec agent, GUI agent and QubesDB agent in parallel *)
let qrexec = RExec.connect ~domid:0 () in
let gui = GUI.connect ~domid:0 () in
let qubesDB = DB.connect ~domid:0 () in
(* Wait for clients to connect *)
qrexec >>= fun qrexec ->
let agent_listener = RExec.listen qrexec Command.handler in
gui >>= fun gui ->
Lwt.async (fun () -> GUI.listen gui);
qubesDB >>= fun qubesDB ->
Log.info "agents connected in %.3f s (CPU time used since boot: %.3f s)"
(fun f -> f (Clock.time () -. start_time) (Sys.time ()));
(* Watch for shutdown requests from Qubes *)
let shutdown_rq = OS.Lifecycle.await_shutdown () >>= fun (`Poweroff | `Reboot) -> return () in
(* Set up networking *)
let net_listener = network qubesDB in
(* Run until something fails or we get a shutdown request. *)
Lwt.choose [agent_listener; net_listener; shutdown_rq] >>= fun () ->
(* Give the console daemon time to show any final log messages. *)
OS.Time.sleep 1.0
After connecting the agents, we start a thread watching for shutdown requests (which arrive via XenStore, a second database) and then configure networking.
Tips on reading OCaml
let x = ... defines a variable.
let fn args = ... defines a function.
Clock.time is the time function in the Clock module.
() is the empty tuple (called "unit"). It's used for functions that don't take arguments, or return nothing useful.
~foo is a named argument. connect ~domid:0 is like connect(domid = 0) in Python.
promise >>= f calls function f when the promise resolves. It's like promise.then(f) in JavaScript.
foo () >>= fun result -> is the asynchronous version of let result = foo () in.
return x creates an already-resolved promise (it does not make the function return).
Networking
The general setup is simple enough: we read various configuration settings (IP addresses, netmasks, etc) from QubesDB,
set up our two networks (the client-side one and the one with NetVM), and configure a router to send packets between them:
unikernel.ml
(* Set up networking and listen for incoming packets. *)
let network qubesDB =
(* Read configuration from QubesDB *)
let config = Dao.read_network_config qubesDB in
Logs.info "Client (internal) network is %a"
(fun f -> f Ipaddr.V4.Prefix.pp_hum config.Dao.clients_prefix);
(* Initialise connection to NetVM *)
Uplink.connect config >>= fun uplink ->
(* Report success *)
Dao.set_iptables_error qubesDB "" >>= fun () ->
(* Set up client-side networking *)
let client_eth = Client_eth.create
~client_gw:config.Dao.clients_our_ip
~prefix:config.Dao.clients_prefix in
(* Set up routing between networks and hosts *)
let router = Router.create
~client_eth
~uplink:(Uplink.interface uplink) in
(* Handle packets from both networks *)
Lwt.join [
Client_net.listen router;
Uplink.listen uplink router
]
OCaml notes
config.Dao.clients_our_ip means the clients_our_ip field of the config record, as defined in the Dao module.
~client_eth is short for ~client_eth:client_eth - i.e. pass the value of the client_eth variable as a parameter also named client_eth.
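Dao.read_network_config itself isn't shown in this post. As a rough idea of what it has to do, here is a minimal sketch (not the real Dao module) that builds the record used above out of QubesDB values. The get helper is hypothetical, and the key names other than /qubes-netmask (which comes up again below) are assumptions for illustration; only the record field names are taken from the code above.

(* Hedged sketch, not the real Dao module. [get] stands in for a QubesDB
   lookup that fails if the key is missing. Keys marked "assumed" are
   illustrative guesses, not confirmed by this post. *)
type network_config = {
  uplink_our_ip : Ipaddr.V4.t;          (* our address on the NetVM side *)
  uplink_netvm_ip : Ipaddr.V4.t;        (* NetVM's address, i.e. our default gateway *)
  clients_our_ip : Ipaddr.V4.t;         (* the gateway address we present to client VMs *)
  clients_prefix : Ipaddr.V4.Prefix.t;  (* the client subnet, e.g. 10.137.2.0/24 *)
}

let read_network_config (get : string -> string) =
  let ip key = Ipaddr.V4.of_string_exn (get key) in
  let clients_our_ip = ip "/qubes-netvm-gateway" in   (* assumed key name *)
  {
    uplink_our_ip = ip "/qubes-ip";                   (* assumed key name *)
    uplink_netvm_ip = ip "/qubes-gateway";            (* assumed key name *)
    clients_our_ip;
    (* /qubes-netmask is 255.255.255.0, i.e. a /24 (see later in this post) *)
    clients_prefix = Ipaddr.V4.Prefix.make 24 clients_our_ip;
  }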
The Xen virtual network layer
At the lowest level, networking requires the ability to send a blob of data from one VM to another.
This is the job of the Xen netback/netfront protocol.
For example, consider the case of a new AppVM (Xen domain ID 5) being connected to FirewallVM (4).
First, dom0 updates its XenStore database (which is shared with the VMs). It creates two directories:
/local/domain/4/backend/vif/5/0/
/local/domain/5/device/vif/0/
Each directory contains a state file (set to 1, which means initialising) and information about the other end.
The first directory is monitored by the firewall (domain 4).
When it sees the new entry, it knows it has a new network connection to domain 5, interface 0.
It writes to the directory information about what features it supports and sets the state to 2 (init-wait).
The second directory will be seen by the new domain 5 when it boots.
It tells it that is has a network connection to dom 4.
The client looks in dom 4's backend directory and waits for the state to change to init-wait, then checks the supported features.
It allocates memory to share with the firewall, tells Xen to grant access to dom 4, and writes the ID for the grant to the XenStore directory.
It sets its own state to 4 (connected).
When the firewall sees the client is connected, it reads the grant refs, tells Xen to map those pages of memory into its own address space, and sets its own state to connected too.
The two VMs can now use the shared memory to exchange messages (blocks of data up to 64 KB).
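To make the handshake concrete, here is a small self-contained sketch (not code from mirage-net-xen) of the states involved and the decision the backend has to make each time the frontend's state changes. The numeric codes in the comments are the ones quoted above; the remaining XenBus states are left out for brevity.

(* Toy model of the backend's half of the handshake described above;
   illustration only, not the real driver code. *)
type xenbus_state =
  | Initialising  (* state 1: directory just created by dom0 *)
  | Init_wait     (* state 2: backend has published its supported features *)
  | Connected     (* state 4: grant refs shared and mapped; ready for traffic *)
  | Closed        (* the other end has gone away *)

(* Given our own state and the frontend's, what should the backend do next? *)
let backend_step ~backend ~frontend =
  match backend, frontend with
  | Initialising, _      -> `Write_features_then_init_wait
  | Init_wait, Connected -> `Map_grant_refs_then_connect
  | Connected, Connected -> `Exchange_frames   (* normal operation *)
  | _, Closed            -> `Tear_down
  | _                    -> `Wait_for_frontend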
The reason I had to find out about all this is that the mirage-net-xen library only implemented the netfront side of the protocol.
Luckily, Dave Scott had already started adding support for netback and I was able to complete that work.
Getting this working with a Mirage client was fairly easy, but I spent a long time trying to figure out why my code was making Linux VMs kernel panic.
It turned out to be an amusing bug in my netback serialisation code, which only worked with Mirage by pure luck.
However, this did alert me to a second bug in the Linux netfront driver: even if the ID netback sends is within the array bounds, that entry isn't necessarily valid.
Sending an unused ID would cause netfront to try to unmap someone else's grant-ref.
Not exploitable, perhaps, but another good reason to replace this code!
The Ethernet layer
It might seem like we're nearly done: we want to send IP (Internet Protocol) packets between VMs, and we have a way to send blocks of data.
However, we must now take a little detour down Legacy Lane...
Operating systems don't expect to send IP packets directly.
Instead, they expect to be connected to an Ethernet network, which requires each IP packet to be wrapped in an Ethernet "frame".
Our virtual network needs to emulate an Ethernet network.
In an Ethernet network, each network interface device has a unique "MAC address" (e.g. 01:23:45:67:89:ab).
An Ethernet frame contains source and destination MAC addresses, plus a type (e.g. "IPv4 packet").
When a client VM wants to send an IP packet, it first broadcasts an Ethernet ARP request, asking for the MAC address of the target machine.
The target machine responds with its MAC address.
The client then transmits an Ethernet frame addressed to this MAC address, containing the IP packet inside.
If we were building our system out of physical machines, we'd connect everything via an Ethernet switch, like this:
This layout isn't very good for us, though, because it means the VMs can talk to each other directly.
Normally you might trust all the machines behind the firewall, but the point of Qubes is to isolate the VMs from each other.
Instead, we want a separate Ethernet network for each client VM:
In this layout, the Ethernet addressing is completely pointless - a frame simply goes to the machine at the other end of the link.
But we still have to add an Ethernet frame whenever we send a packet and remove it when we receive one.
And we still have to implement the ARP protocol for looking up MAC addresses.
That's the job of the Client_eth module (dom0 puts the addresses in XenStore for us).
As well as sending queries, a VM can also broadcast a "gratuitous ARP" to tell other VMs its address without being asked.
Receivers of a gratuitous ARP may then update their ARP cache, although FirewallVM is configured not to do this (see /proc/sys/net/ipv4/conf/all/arp_accept).
For mirage-firewall, I just log what the client requested but don't let it update anything:
client_eth.ml
let input_gratuitous t frame =
let open Arpv4_wire in
let spa = Ipaddr.V4.of_int32 (get_arp_spa frame) in
let sha = Macaddr.of_bytes_exn (copy_arp_sha frame) in
match lookup t spa with
| Some real_mac when Macaddr.compare sha real_mac = 0 ->
Log.info "client suggests updating %s -> %s (as expected)"
(fun f -> f (Ipaddr.V4.to_string spa) (Macaddr.to_string sha));
| Some other_mac ->
Log.warn "client suggests incorrect update %s -> %s (should be %s)"
(fun f -> f (Ipaddr.V4.to_string spa) (Macaddr.to_string sha) (Macaddr.to_string other_mac));
| None ->
Log.warn "client suggests incorrect update %s -> %s (unexpected IP)"
(fun f -> f (Ipaddr.V4.to_string spa) (Macaddr.to_string sha))
I'm not sure whether or not Qubes expects one client VM to be able to look up another one's MAC address.
It sets /qubes-netmask in QubesDB to 255.255.255.0, indicating that all clients are on the same Ethernet network.
Therefore, I wrote my ARP responder to respond on behalf of the other clients to maintain this illusion.
However, it appears that my Linux VMs have ignored the QubesDB setting and used a netmask of 255.255.255.255. Puzzling, but it should work either way.
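The query side of Client_eth.ARP isn't shown here. As a rough sketch of the policy just described (again, not the real module), the decision for each ARP "who-has" request looks something like this. lookup is the same IP-to-MAC table used by input_gratuitous above; whether to answer with the other client's MAC or the firewall's own is a design choice, and this sketch always hands back the firewall's MAC, since traffic between clients has to go through the firewall anyway.

(* Hedged sketch of the ARP answering policy described above. *)
let arp_reply_for ~client_gw ~our_mac ~lookup target_ip =
  if target_ip = client_gw then
    Some our_mac                    (* the client wants its gateway: that's us *)
  else match lookup target_ip with
    | Some _other_client_mac -> Some our_mac  (* answer on another client's behalf *)
    | None -> None                            (* unknown IP: stay silent *)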
Here's the code that connects a new client virtual interface (vif) to our router (in Client_net):
client_net.ml
(** Connect to a new client's interface and listen for incoming frames. *)
let add_vif { Dao.domid; device_id; client_ip } ~router ~cleanup_tasks =
Netback.make ~domid ~device_id >>= fun backend ->
Log.info "Client %d (IP: %s) ready" (fun f ->
f domid (Ipaddr.V4.to_string client_ip));
ClientEth.connect backend >>= or_fail "Can't make Ethernet device" >>= fun eth ->
let client_mac = Netback.mac backend in
let iface = new client_iface eth client_ip client_mac in
Router.add_client router iface;
Cleanup.on_cleanup cleanup_tasks (fun () -> Router.remove_client router iface);
let fixed_arp = Client_eth.ARP.create ~net:router.Router.client_eth iface in
Netback.listen backend (fun frame ->
match Wire_structs.parse_ethernet_frame frame with
| None -> Log.warn "Invalid Ethernet frame" Logs.unit; return ()
| Some (typ, _destination, payload) ->
match typ with
| Some Wire_structs.ARP -> input_arp ~fixed_arp ~eth payload
| Some Wire_structs.IPv4 -> input_ipv4 ~client_ip ~router frame payload
| Some Wire_structs.IPv6 -> return ()
| None -> Logs.warn "Unknown Ethernet type" Logs.unit; Lwt.return_unit
)
OCaml note: { x = 1; y = 2 } is a record (struct). { x = x; y = y } can be abbreviated to just { x; y }. Here we pattern-match on a Dao.client_vif record passed to the function to extract the fields.
The Netback.listen at the end runs a loop that communicates with the netfront driver in the client.
Each time a frame arrives, we check the type and dispatch to either the ARP handler or, for IPv4 packets,
the firewall code.
We don't support IPv6, since Qubes doesn't either.
client_net.ml
let input_arp ~fixed_arp ~eth request =
match Client_eth.ARP.input fixed_arp request with
| None -> return ()
| Some response -> ClientEth.write eth response
(** Handle an IPv4 packet from the client. *)
let input_ipv4 ~client_ip ~router frame packet =
let src = Wire_structs.Ipv4_wire.get_ipv4_src packet |> Ipaddr.V4.of_int32 in
if src = client_ip then Firewall.ipv4_from_client router frame
else (
Log.warn "Incorrect source IP %a in IP packet from %a (dropping)"
(fun f -> f Ipaddr.V4.pp_hum src Ipaddr.V4.pp_hum client_ip);
return ()
)
OCaml note: |> is the "pipe" operator. x |> fn is the same as fn x, but sometimes it reads better to have the values flowing left-to-right. You can also think of it as the synchronous version of >>=.
Notice that we check the source IP address is the one we expect.
This means that our firewall rules can rely on client addresses.
There is similar code in Uplink, which handles the NetVM side of things:
uplink.ml
let connect config =
let ip = config.Dao.uplink_our_ip in
Netif.connect "tap0" >>= or_fail "Can't connect uplink device" >>= fun net ->
Eth.connect net >>= or_fail "Can't make Ethernet device for tap" >>= fun eth ->
Arp.connect eth >>= or_fail "Can't add ARP" >>= fun arp ->
Arp.add_ip arp ip >>= fun () ->
let netvm_mac = Arp.query arp config.Dao.uplink_netvm_ip >|= function
| `Timeout -> failwith "ARP timeout getting MAC of our NetVM"
| `Ok netvm_mac -> netvm_mac in
let my_ip = Ipaddr.V4 ip in
let interface = new netvm_iface eth netvm_mac config.Dao.uplink_netvm_ip in
return { net; eth; arp; interface; my_ip }
let listen t router =
Netif.listen t.net (fun frame ->
(* Handle one Ethernet frame from NetVM *)
Eth.input t.eth
~arpv4:(Arp.input t.arp)
~ipv4:(fun _ip -> Firewall.ipv4_from_netvm router frame)
~ipv6:(fun _ip -> return ())
frame
)
OCaml note: Arp.input t.arp is a partially-applied function. It's short for fun x -> Arp.input t.arp x.
Here we just use the standard Eth.input code to dispatch on the frame.
It checks that the destination MAC matches ours and dispatches based on type.
We couldn't use it for the client code above because there we also want to
handle frames addressed to other clients, which Eth.input would discard.
Eth.input extracts the IP packet from the Ethernet frame and passes that to our callback,
but the NAT library I used likes to work on whole Ethernet frames, so I ignore the IP packet
(_ip) and send the frame instead.
The IP layer
Once an IP packet has been received, it is sent to the Firewall module
(either ipv4_from_netvm or ipv4_from_client, depending on where it came from).
The process is similar in each case:
Check if we have an existing NAT entry for this packet. If so, it's part of a conversation we've already approved, so perform the translation and send it on its way. NAT support is provided by the handy mirage-nat library.
If not, collect useful information about the packet (source, destination, protocol, ports) and check against the user's firewall rules, then take whatever action they request.
Here's the code that takes a client IPv4 frame and applies the firewall rules:
firewall.ml
let ipv4_from_client t frame =
match Memory_pressure.status () with
| `Memory_critical -> (* TODO: should happen before copying and async *)
Log.warn "Memory low - dropping packet" Logs.unit;
return ()
| `Ok ->
(* Check for existing NAT entry for this packet *)
match translate t frame with
| Some frame -> forward_ipv4 t frame (* Some existing connection or redirect *)
| None ->
(* No existing NAT entry. Check the firewall rules. *)
match classify t frame with
| None -> return ()
| Some info -> apply_rules t Rules.from_client info
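translate, classify and apply_rules aren't shown in the post. To give an idea of the shape of the information the rules below match on, here is a hedged sketch of the summary classify might produce. The field names (dst, proto, dport) and the host constructors are the ones used by the Rules module shown next; the ports record, the exact type layout and the classify_host helper are assumptions for illustration.

(* Hedged sketch, not the real Firewall module. *)
type ports = { sport : int; dport : int }

type host =
  [ `Client_gateway | `Firewall_uplink | `NetVM
  | `Client of Ipaddr.V4.t | `Unknown_client of Ipaddr.V4.t
  | `External of Ipaddr.V4.t ]

type info = {
  src : host;
  dst : host;
  proto : [ `TCP of ports | `UDP of ports | `ICMP | `Unknown ];
}

(* Map a raw IPv4 address to one of the symbolic hosts above, using what the
   router knows about the two networks and the currently connected clients. *)
let classify_host ~client_gw ~uplink_ip ~netvm_ip ~clients_prefix ~is_live_client ip : host =
  if ip = client_gw then `Client_gateway
  else if ip = uplink_ip then `Firewall_uplink
  else if ip = netvm_ip then `NetVM
  else if is_live_client ip then `Client ip
  else if Ipaddr.V4.Prefix.mem ip clients_prefix then `Unknown_client ip
  else `External ip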
Qubes provides a GUI that lets the user specify firewall rules.
It then encodes these as Linux iptables rules and puts them in QubesDB.
This isn't a very friendly format for non-Linux systems, so I ignore this and hard-code the rules in OCaml instead, in the Rules module:
(** Decide what to do with a packet from a client VM.
Note: If the packet matched an existing NAT rule then this isn't called. *)
let from_client = function
| { dst = (`External _ | `NetVM) } -> `NAT
| { dst = `Client_gateway; proto = `UDP { dport = 53 } } -> `NAT_to (`NetVM, 53)
| { dst = (`Client_gateway | `Firewall_uplink) } -> `Drop "packet addressed to firewall itself"
| { dst = `Client _ } -> `Drop "prevent communication between client VMs"
| { dst = `Unknown_client _ } -> `Drop "target client not running"
(** Decide what to do with a packet received from the outside world.
Note: If the packet matched an existing NAT rule then this isn't called. *)
let from_netvm = function
| _ -> `Drop "drop by default"
For packets from clients to the outside world we use the NAT action to rewrite the source address so the packets appear to come from the firewall (via some unused port).
DNS queries sent to the firewall get redirected to NetVM (UDP port 53 is DNS).
In both cases, the NAT actions update the NAT table so that we will forward any responses back to the client.
Everything else is dropped, with a log message.
I think it's rather nice the way we can use OCaml's existing support for pattern matching to implement the rules, without having to invent a new syntax.
Originally, I had a default-drop rule at the end of from_client, but OCaml helpfully pointed out that it wasn't needed, as the previous rules already covered every case.
The incoming policy is to drop everything that wasn't already allowed by a rule added by the out-bound NAT.
I don't know much about firewalls, but this scheme works for my needs.
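For completeness, here is a hedged sketch of what acting on a rule's verdict involves. The three verdict constructors are the ones returned by the Rules module above; the labelled helpers (add_nat_rule, add_redirect_rule, forward, drop) are stand-ins for the real calls into mirage-nat and the transmit path, which this post doesn't show.

(* Hedged sketch: dispatch on a policy verdict. The labelled function
   arguments stand in for the real NAT-table and transmit operations. *)
let apply_verdict ~add_nat_rule ~add_redirect_rule ~forward ~drop verdict frame =
  match verdict with
  | `NAT ->
      (* rewrite the source so replies come back via the firewall *)
      add_nat_rule frame >>= fun () -> forward frame
  | `NAT_to (host, port) ->
      (* e.g. client DNS queries redirected to NetVM port 53 *)
      add_redirect_rule frame host port >>= fun () -> forward frame
  | `Drop reason ->
      drop reason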
For comparison, the Linux iptables rules currently in my sys-firewall are:
[user@sys-firewall ~]$ sudo iptables -vL -n -t filter
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DROP udp -- vif+ * 0.0.0.0/0 0.0.0.0/0 udp dpt:68
55336 83M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
35540 23M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- vif0.0 * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- vif+ vif+ 0.0.0.0/0 0.0.0.0/0
519 33555 ACCEPT udp -- * * 10.137.2.12 10.137.1.1 udp dpt:53
16 1076 ACCEPT udp -- * * 10.137.2.12 10.137.1.254 udp dpt:53
0 0 ACCEPT tcp -- * * 10.137.2.12 10.137.1.1 tcp dpt:53
0 0 ACCEPT tcp -- * * 10.137.2.12 10.137.1.254 tcp dpt:53
0 0 ACCEPT icmp -- * * 10.137.2.12 0.0.0.0/0
0 0 DROP tcp -- * * 10.137.2.12 10.137.255.254 tcp dpt:8082
264 14484 ACCEPT all -- * * 10.137.2.12 0.0.0.0/0
254 16404 ACCEPT udp -- * * 10.137.2.9 10.137.1.1 udp dpt:53
2 130 ACCEPT udp -- * * 10.137.2.9 10.137.1.254 udp dpt:53
0 0 ACCEPT tcp -- * * 10.137.2.9 10.137.1.1 tcp dpt:53
0 0 ACCEPT tcp -- * * 10.137.2.9 10.137.1.254 tcp dpt:53
0 0 ACCEPT icmp -- * * 10.137.2.9 0.0.0.0/0
0 0 DROP tcp -- * * 10.137.2.9 10.137.255.254 tcp dpt:8082
133 7620 ACCEPT all -- * * 10.137.2.9 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 32551 packets, 1761K bytes)
pkts bytes target prot opt in out source destination
[user@sys-firewall ~]$ sudo iptables -vL -n -t nat
Chain PREROUTING (policy ACCEPT 362 packets, 20704 bytes)
pkts bytes target prot opt in out source destination
829 50900 PR-QBS all -- * * 0.0.0.0/0 0.0.0.0/0
362 20704 PR-QBS-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 116 packets, 7670 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * vif+ 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0
945 58570 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0
Chain PR-QBS (1 references)
pkts bytes target prot opt in out source destination
458 29593 DNAT udp -- * * 0.0.0.0/0 10.137.2.1 udp dpt:53 to:10.137.1.1
0 0 DNAT tcp -- * * 0.0.0.0/0 10.137.2.1 tcp dpt:53 to:10.137.1.1
9 603 DNAT udp -- * * 0.0.0.0/0 10.137.2.254 udp dpt:53 to:10.137.1.254
0 0 DNAT tcp -- * * 0.0.0.0/0 10.137.2.254 tcp dpt:53 to:10.137.1.254
Chain PR-QBS-SERVICES (1 references)
pkts bytes target prot opt in out source destination
[user@sys-firewall ~]$ sudo iptables -vL -n -t mangle
Chain PREROUTING (policy ACCEPT 12090 packets, 17M bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 11387 packets, 17M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 703 packets, 88528 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 6600 packets, 357K bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 7303 packets, 446K bytes)
pkts bytes target prot opt in out source destination
[user@sys-firewall ~]$ sudo iptables -vL -n -t raw
Chain PREROUTING (policy ACCEPT 92093 packets, 106M bytes)
pkts bytes target prot opt in out source destination
0 0 DROP all -- vif20.0 * !10.137.2.9 0.0.0.0/0
0 0 DROP all -- vif19.0 * !10.137.2.12 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 32551 packets, 1761K bytes)
pkts bytes target prot opt in out source destination
[user@sys-firewall ~]$ sudo iptables -vL -n -t security
Chain INPUT (policy ACCEPT 11387 packets, 17M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 659 packets, 86158 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 6600 packets, 357K bytes)
pkts bytes target prot opt in out source destination
I find it hard to tell, looking at these tables, exactly what sys-firewall's security policy will actually do.
Evaluation
I timed start-up for the Linux-based "sys-firewall" and for "mirage-firewall" (after shutting them both down):
[tal@dom0 ~]$ time qvm-start sys-firewall
--> Creating volatile image: /var/lib/qubes/servicevms/sys-firewall/volatile.img...
--> Loading the VM (type = ProxyVM)...
--> Starting Qubes DB...
--> Setting Qubes DB info for the VM...
--> Updating firewall rules...
--> Starting the VM...
--> Starting the qrexec daemon...
Waiting for VM's qrexec agent......connected
--> Starting Qubes GUId...
Connecting to VM's GUI agent: .connected
--> Sending monitor layout...
--> Waiting for qubes-session...
real 0m9.321s
user 0m0.163s
sys 0m0.262s
[tal@dom0 ~]$ time qvm-start mirage-firewall
--> Loading the VM (type = ProxyVM)...
--> Starting Qubes DB...
--> Setting Qubes DB info for the VM...
--> Updating firewall rules...
--> Starting the VM...
--> Starting the qrexec daemon...
Waiting for VM's qrexec agent.connected
--> Starting Qubes GUId...
Connecting to VM's GUI agent: .connected
--> Sending monitor layout...
--> Waiting for qubes-session...
real 0m1.079s
user 0m0.130s
sys 0m0.192s
So, mirage-firewall starts in 1 second rather than 9. However, even most of this time is Qubes code running in dom0. xl list shows:
[tal@dom0 ~]$ sudo xl list
Name ID Mem VCPUs State Time(s)
dom0 0 6097 4 r----- 623.8
sys-net 4 294 4 -b---- 79.2
sys-firewall 17 1293 4 -b---- 9.9
mirage-firewall 18 30 1 -b---- 0.0
I guess sys-firewall did more work after telling Qubes it was ready, because Xen reports it used 9.9 seconds of CPU time.
mirage-firewall uses too little time for Xen to report anything.
Notice also that sys-firewall is using 1293 MB with no clients (it's configured to balloon up or down; it could probably go down to 300 MB without much trouble). I gave mirage-firewall a fixed 30 MB allocation, which seems to be enough.
I'm not sure how it compares with Linux for transmission performance, but it can max out my 30 Mbit/s Internet connection with its single CPU, so it's unlikely to matter.
Exercises
I've only implemented the minimal features to let me use it as my firewall.
The great thing about having a simple unikernel is that you can modify it easily.
Here are some suggestions you can try at home (easy ones first):
Change the policy to allow communication between client VMs.
Query the QubesDB /qubes-debug-mode key. If present and set, set logging to debug level.
Edit command.ml to provide a qrexec command to add or remove rules at runtime.
When a packet is rejected, add the frame to a ring buffer. Edit command.ml to provide a "dump-rejects" command that returns the rejected packets in pcap format, ready to be loaded into wireshark. Hint: you can use the ocaml-pcap library to read and write the pcap format. (A sketch of one possible ring buffer appears after this list.)
All client VMs are reported as Client to the policy. Add a table mapping IP addresses to symbolic names, so you can e.g. allow DevVM to talk to TestVM or control access to specific external machines.
mirage-nat doesn't do NAT for ICMP packets. Add support, so ping works (see https://github.com/yomimono/mirage-nat/issues/15).
Qubes allows each VM to have two DNS servers. I only implemented the primary. Read the /qubes-secondary-dns and /qubes-netvm-secondary-dns keys from QubesDB and proxy that too.
Implement port knocking for new connections.
Add a Reject action that sends an ICMP rejection message.
Find out what we're supposed to do when a domain shuts down. Currently, we set the netback state to closed, but the directory in XenStore remains. Who is responsible for deleting it?
Update the firewall to use the latest version of the mirage-nat library, which has extra features such as expiry of old NAT table entries.
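The ring-buffer part of the "dump-rejects" exercise above needs no Xen or Mirage APIs at all, so here is a minimal self-contained sketch of one possible data structure for it (plain OCaml; the pcap encoding and the qrexec command are left as the exercise intends).

(* A fixed-size ring buffer: keep the last [size] rejected frames,
   overwriting the oldest. [size] must be positive. *)
module Reject_ring : sig
  type 'a t
  val create : int -> 'a t
  val push : 'a t -> 'a -> unit
  val to_list : 'a t -> 'a list   (* oldest first *)
end = struct
  type 'a t = { slots : 'a option array; mutable next : int }
  let create size = { slots = Array.make size None; next = 0 }
  let push t x =
    t.slots.(t.next) <- Some x;
    t.next <- (t.next + 1) mod Array.length t.slots
  let to_list t =
    let n = Array.length t.slots in
    let rec collect i acc =
      if i < 0 then acc
      else match t.slots.((t.next + i) mod n) with
        | None -> collect (i - 1) acc
        | Some x -> collect (i - 1) (x :: acc)
    in
    collect (n - 1) []
end

Pushing each rejected frame from the firewall's drop path and walking to_list from a new command in command.ml would be the remaining glue.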
Finally, Qubes Security Bulletin #4 says:
Due to a silly mistake made by the Qubes Team, the IPv6 filtering rules
have been set to ALLOW by default in all Service VMs, which results in
lack of filtering for IPv6 traffic originating between NetVM and the
corresponding FirewallVM, as well as between AppVMs and the
corresponding FirewallVM. Because the RPC services (rpcbind and
rpc.statd) are, by default, bound also to the IPv6 interfaces in all the
VMs by default, this opens up an avenue to attack a FirewallVM from a
corresponding NetVM or AppVM, and further attack another AppVM from the
compromised FirewallVM, using a hypothetical vulnerability in the above
mentioned RPC services (chained attack).
What changes would be needed to mirage-firewall to reproduce this bug?
Summary
QubesOS provides a desktop environment made from multiple virtual machines, isolated using Xen.
It runs the network drivers (which it doesn't trust) in a Linux "NetVM", which it assumes may be compromised, and places a "FirewallVM" between that and the VMs running user applications.
This design is intended to protect users from malicious or buggy network drivers.
However, the Linux kernel code running in FirewallVM is written with the assumption that NetVM is trustworthy.
It is fairly likely that a compromised NetVM could successfully attack FirewallVM.
Since both FirewallVM and the client VMs all run Linux, it is likely that the same exploit would then allow the client VMs to be compromised too.
I used MirageOS to write a replacement FirewallVM in OCaml.
The new virtual machine contains almost no C code (little more than malloc, printk, the OCaml GC and libm), and should therefore avoid problems such as the unchecked array bounds problem that recently affected the Qubes firewall.
It also uses less than a tenth of the minimum memory of the Linux FirewallVM, boots several times faster, and when it starts handling network traffic it is already fully configured, avoiding e.g. any race setting up firewalls or DNS forwarding.
The code is around 1000 lines of OCaml, and makes it easy to follow the progress of a network frame from the point where the network driver reads it from a Xen shared memory ring, through the Ethernet handling, to the IP firewall code, to the user firewall policy, and then finally to the shared memory ring of the output interface.
The code has only been lightly tested (I've just started using it as the FirewallVM on my main laptop), but will hopefully prove easy to extend (and, if necessary, debug).
| 2024-11-08T03:41:44 | en | train |
10,822,886 | crunk | 2016-01-01T19:06:38 | KeyBox: Web-Based SSH Access and Key Management | null | https://github.com/skavanagh/KeyBox | 2 | 0 | null | null | null | no_error | GitHub - bastillion-io/Bastillion: Bastillion is a web-based SSH console that centrally manages administrative access to systems. Web-based administration is combined with management and distribution of user's public SSH keys. | null | bastillion-io |
Bastillion
Bastillion is a web-based SSH console that centrally manages administrative access to systems. Web-based administration is combined with management and distribution of user's public SSH keys. Key management and administration is based on profiles assigned to defined users.
Administrators can login using two-factor authentication with Authy or Google Authenticator. From there they can manage their public SSH keys or connect to their systems through a web-shell. Commands can be shared across shells to make patching easier and eliminate redundant command execution.
Bastillion layers TLS/SSL on top of SSH and acts as a bastion host for administration. Protocols are stacked (TLS/SSL + SSH) so infrastructure cannot be exposed through tunneling / port forwarding. More details can be found in the following whitepaper: Implementing a Trusted Third-Party System for Secure Shell. Also, SSH key management is enabled by default to prevent unmanaged public keys and enforce best practices.
Bastillion Releases
Bastillion is available for free use under the Prosperity Public License
https://github.com/bastillion-io/Bastillion/releases
or purchase from the AWS marketplace
https://aws.amazon.com/marketplace/pp/Loophole-LLC-Bastillion/B076PNFPCL
Also, Bastillion can be installed on FreeBSD via the FreeBSD ports system. To install via the binary package, simply run:
pkg install security/bastillion
Prerequisites
Open-JDK / Oracle-JDK - 1.9 or greater
apt-get install openjdk-9-jdk
http://www.oracle.com/technetwork/java/javase/downloads/index.html
Install Authy or Google Authenticator to enable two-factor authentication with Android or iOS
Application          | Android     | iOS
Authy                | Google Play | iTunes
Google Authenticator | Google Play | iTunes
To Run Bundled with Jetty
Download bastillion-jetty-vXX.XX.tar.gz
https://github.com/bastillion-io/Bastillion/releases
Export environment variables
for Linux/Unix/OSX
export JAVA_HOME=/path/to/jdk
export PATH=$JAVA_HOME/bin:$PATH
for Windows
set JAVA_HOME=C:\path\to\jdk
set PATH=%JAVA_HOME%\bin;%PATH%
Start Bastillion
for Linux/Unix/OSX
for Windows
More Documentation at: https://www.bastillion.io/docs/index.html
Build from Source
Install Maven 3 or greater
apt-get install maven
http://maven.apache.org
Export environment variables
export JAVA_HOME=/path/to/jdk
export M2_HOME=/path/to/maven
export PATH=$JAVA_HOME/bin:$M2_HOME/bin:$PATH
In the directory that contains the pom.xml run
Note: Doing a mvn clean will delete the H2 DB and wipe out all the data.
Using Bastillion
Open browser to https://<whatever ip>:8443
Login with
username:admin
password:changeme
Note: When using the AMI instance, the password is defaulted to the <Instance ID>. Also, the AMI uses port 443 as in https://<Instance IP>:443
Managing SSH Keys
By default Bastillion will overwrite all values in the specified authorized_keys file for a system. You can disable key management by editing BastillionConfig.properties file and use Bastillion only as a bastion host. This file is located in the jetty/bastillion/WEB-INF/classes directory. (or the src/main/resources directory if building from source)
#set to false to disable key management. If false, the Bastillion public key will be appended to the authorized_keys file (instead of it being overwritten completely).
keyManagementEnabled=false
Also, the authorized_keys file is updated/refreshed periodically based on the relationships defined in the application. If key management is enabled the refresh interval can be specified in the BastillionConfig.properties file.
#authorized_keys refresh interval in minutes (no refresh for <=0)
authKeysRefreshInterval=120
By default Bastillion will generate and distribute the SSH keys managed by administrators while having them download the generated private key. This forces admins to use strong passphrases for keys that are set on systems. The private key is only available for download once and is not stored on the application side. To disable this and allow administrators to set any public key, edit the BastillionConfig.properties.
#set to true to generate keys when added/managed by users and enforce strong passphrases set to false to allow users to set their own public key
forceUserKeyGeneration=false
Supplying a Custom SSH Key Pair
Bastillion generates its own public/private SSH key upon initial startup for use when registering systems. You can specify a custom SSH key pair in the BastillionConfig.properties file.
For example:
#set to true to regenerate and import SSH keys --set to true
resetApplicationSSHKey=true
#SSH Key Type 'dsa' or 'rsa'
sshKeyType=rsa
#private key --set pvt key
privateKey=/Users/kavanagh/.ssh/id_rsa
#public key --set pub key
publicKey=/Users/kavanagh/.ssh/id_rsa.pub
#default passphrase --leave blank if passphrase is empty
defaultSSHPassphrase=myPa$$w0rd
After startup and once the key has been registered it can then be removed from the system. The passphrase and the key paths will be removed from the configuration file.
Adjusting Database Settings
Database settings can be adjusted in the configuration properties.
#Database user
dbUser=bastillion
#Database password
dbPassword=p@$$w0rd!!
#Database JDBC driver
dbDriver=org.h2.Driver
#Connection URL to the DB
dbConnectionURL=jdbc:h2:keydb/bastillion;CIPHER=AES;
By default the datastore is set as embedded, but a remote H2 database can be supported by adjusting the connection URL.
#Connection URL to the DB
dbConnectionURL=jdbc:h2:tcp://<host>:<port>/~/bastillion;CIPHER=AES;
External Authentication
External Authentication can be enabled through the BastillionConfig.properties.
For example:
#specify a external authentication module (ex: ldap-ol, ldap-ad). Edit the jaas.conf to set connection details
jaasModule=ldap-ol
Connection details need to be set in the jaas.conf file
ldap-ol {
com.sun.security.auth.module.LdapLoginModule SUFFICIENT
userProvider="ldap://hostname:389/ou=example,dc=bastillion,dc=com"
userFilter="(&(uid={USERNAME})(objectClass=inetOrgPerson))"
authzIdentity="{cn}"
useSSL=false
debug=false;
};
Administrators will be added as they are authenticated and profiles of systems may be assigned by full-privileged users.
User LDAP roles can be mapped to profiles defined in Bastillion through the use of the org.eclipse.jetty.jaas.spi.LdapLoginModule.
ldap-ol-with-roles {
//openldap auth with roles that can map to profiles
org.eclipse.jetty.jaas.spi.LdapLoginModule required
debug="false"
useLdaps="false"
contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
hostname="<SERVER>"
port="389"
bindDn="<BIND-DN>"
bindPassword="<BIND-DN PASSWORD>"
authenticationMethod="simple"
forceBindingLogin="true"
userBaseDn="ou=users,dc=bastillion,dc=com"
userRdnAttribute="uid"
userIdAttribute="uid"
userPasswordAttribute="userPassword"
userObjectClass="inetOrgPerson"
roleBaseDn="ou=groups,dc=bastillion,dc=com"
roleNameAttribute="cn"
roleMemberAttribute="member"
roleObjectClass="groupOfNames";
};
Users will be added/removed from defined profiles as they login and when the role name matches the profile name.
Auditing
Auditing is disabled by default. Audit logs can be enabled through the log4j2.xml by uncommenting the io.bastillion.manage.util.SystemAudit and the audit-appender definitions.
https://github.com/bastillion-io/Bastillion/blob/master/src/main/resources/log4j2.xml#L19-L22
Auditing through the application is only a proof of concept. It can be enabled in the BastillionConfig.properties.
#enable audit --set to true to enable
enableInternalAudit=true
Screenshots
Acknowledgments
Special thanks goes to these amazing projects which makes this (and other great projects) possible.
JSch Java Secure Channel - by ymnk
term.js A terminal written in javascript - by chjj
Third-party dependencies are mentioned in the 3rdPartyLicenses.md
The Prosperity Public License
Bastillion is available for use under the Prosperity Public License
Author
Loophole, LLC - Sean Kavanagh
[email protected]
https://twitter.com/spkavanagh6
| 2024-11-07T22:57:09 | en | train |
10,822,975 | joeyespo | 2016-01-01T19:26:48 | Introducing RAIL: A User-Centric Model for Performance | null | http://www.smashingmagazine.com/2015/10/rail-user-centric-model-performance/ | 3 | 0 | null | null | null | no_error | Introducing RAIL: A User-Centric Model For Performance — Smashing Magazine | 2015-10-02 16:40:10 +0000 UTC | About The Authors |
StrategyRAIL is a model for breaking down a user’s experience into key actions. It provides a structure for thinking about performance, so that designers and developers can reliably target the highest-impact work. The RAIL model is a lens to look at a user’s experience with a website or app as a journey comprising individual interactions. Once you know each interaction’s area, you will know what the user will perceive and, therefore, what your goals are. Sometimes it takes extra effort, but if you’re kind to the user, they’ll be good to you.There’s no shortage of performance advice, is there? The elephant in the room is the fact that it’s challenging to interpret: Everything comes with caveats and disclaimers, and sometimes one piece of advice can seem to actively contradict another. Phrases like “The DOM is slow” or “Always use CSS animations” make for great headlines, but the truth is often far more nuanced.Take something like loading time, the most common performance topic by far. The problem with loading time is that some people measure Speed Index, others go after first paint, and still others use body.onload, DOMContentLoaded or perhaps some other event. It’s rarely consistent. When it comes to other ways to measure performance, you’ve probably seen enough JavaScript benchmarks to last a lifetime. You may have also heard that 60 FPS matters. But when? All the time? Seems unrealistic.Very few of us have unlimited time to throw at optimization work, far from it, and we need criteria that help us decide what’s important to optimize (and what’s not!). When all is said is done, we want and need clear guidance on what “performant” means to our users, because that’s who we’re building for.On the Chrome team, we’ve been thinking about this, and we’ve come up with a model to put the user right back in the middle of the performance story. We call it the RAIL model.If you want the TL;DR on RAIL to share with your team, here you go!RAIL is a model for breaking down a user’s experience into key actions (for example, tap, drag, scroll, load).RAIL provides performance goals for these actions (for example, tap to paint in under 100 milliseconds).RAIL provides a structure for thinking about performance, so that designers and developers can reliably target the highest-impact work.Before diving into what RAIL involves and how it could be helpful in your project, let’s step back and look at where it comes from. Let’s start with every performance-minded person’s least favorite word in the whole wide world: “slow.”“Slow”Is changing the DOM slow? What about loading a <script> in the <head>? JavaScript animations are slower than CSS ones, right? Also, does a 20-millisecond operation take too long? What about 0.5 seconds? 10 seconds?What does slow really mean?While it’s true that different operations take different amounts of time to complete, it’s hard to say objectively whether something is slow or fast without the context of when it’s happening. For example, code running during idle time, in a touch handler or in the hot path of a game loop each has different performance requirements. Put another way, the people using your website or app have different performance expectations for each of those contexts. Like every aspect of UX, we build for our users, and what they perceive is what matters most. In fact, number one on Google’s ten things we know to be true is “Focus on the user and all else will follow.”Asking “What does slow mean?,” then, is really the wrong question. 
Instead, we need to ask “What does the user feel when they’re interacting with the things we build?”Putting The User In The Center Of PerformanceLuckily for us, there’s long-standing academic HCI research on this topic, and you may have seen it before in Jakob Nielsen’s work on response time limits. Based on those thresholds, and adding an extra one for animations (because it’s 2015 and we all love a bit of showbiz), we get the following:100 milliseconds. Respond to a user action within this time window and they will feel like the result is immediate. Any longer and that connection between action and reaction breaks.1 second. Within this window, things feel part of a natural and continuous progression of tasks. Beyond it, the user will lose focus on the task they were performing. For most users on the web, loading a page or changing views represents a task.16 milliseconds. Given a screen that is updating 60 times per second, this window represents the time to get a single frame to the screen (Professor Math says 1000 ÷ 60 = ~16). People are exceptionally good at tracking motion, and they dislike it when their expectation of motion isn’t met, either through variable frame rates or periodic halting.These perception thresholds are great because they give us the building blocks we need. What we need to do next is map them to reality. Let’s consider a typical interaction that our users have:In that brief session were a number of distinct interactions:waiting for the page to load,watching an animation,scrolling the page,tapping an icon,watching the navigation animate open,waiting for the page to load,watching an animation,scrolling the page.Labeling those actions from the video would give us something like this:(View large version)Each of those color blocks represents a type of action, and you can see that there are four of them. And there are four letters in RAIL. Curious.Here’s another user journey that we can break down and label, this time from Voice Memos:We can break down user interactions and categorize them into four distinct areas. At Google, we call these areas RAIL, and each comes with its own performance goals, which are based on the human perception thresholds we just saw.RAIL stands for response, animation, idle and load.These four distinct areas are a way to reason about the actions in your websites and apps. If you optimize based on each area’s performance goals (which we got from those perception thresholds), then your users will be very happy.Let’s look at each one in detail.1. ResponseIf a user clicks on a button, you have to respond to their click before they notice any lag. This applies to any input, really, whether it’s toggling a form control or starting an animation. If it doesn’t happen in a reasonable window of time, then the connection between action and reaction breaks and the user will notice.Response is all about input latency: the lag between the finger touching the glass and the resulting pixels hitting the screen. Have you ever tapped on something and it took so long to respond that you started wondering whether it registered your tap? That’s exactly the kind of thing we want to avoid!Response’s primary interaction is:tapping — when the user taps or clicks on a button or icon (for example, tapping a menu icon to open off-screen navigation).To respond responsively, we would:provide feedback in less than 100 milliseconds after initial input.Ideally, the feedback would show the desired state. 
But if it's going to take a long time, then a loading indicator or coloring for the active state will do. The main thing is to acknowledge the user so that they don't start wondering whether their tap was registered.

2. Animation

Animation is a pillar of modern apps, from scrolling to view transitions, and we must be judicious with what we do in this period of time, because the user will often be interacting directly and will really notice if the frame rate varies. However, the user expects very smooth feedback for more than what falls under the classic definition of animation. Animation includes the following:

visual animation. This includes entrance and exit animations, tweened state changes, and loading indicators.
scrolling. This refers to when the user starts scrolling and lets go and the page is flung.
drag. While we need to respond to the user's interaction in under 100 milliseconds, animation might follow as a result, as when panning a map or pinching to zoom.

To animate properly, each frame of animation should be completed in less than 16 milliseconds, thereby achieving 60 FPS (1 second ÷ 60 = 16.6 milliseconds).

3. Idle

Creating apps that respond and animate well often requires deferment of work. The Optimistic UI patterns leverage this technique to great effect. All sorts of work that must be completed likely does not need to happen within a critical time window in the same way as "response" or "load": Bootstrapping the comments functionality, initializing components, searching and sorting data, and beaconing back analytics data are all non-essential items that we can do when the browser is idle.

To use idle time wisely, the work is grouped into blocks of about 50 milliseconds. Why? Should a user begin interacting, we'll want to respond to them within the 100-millisecond response window, and not be stuck in the middle of a 2-second template rendering.

4. Load

Page-loading time is a well-trodden area of performance. We're most interested in getting the user to the first meaningful paint quickly. Once that's delivered, the app must remain responsive; the user mustn't encounter trouble when scrolling, tapping or watching animations. This can be super-challenging, especially when much of the work for a page shares a single thread.

To load pages quickly, we aim to deliver the first meaningful paint in under 1 second. Beyond this, the user's attention starts to wander and the perception of dealing with the task at hand is broken. Reaching this goal requires prioritizing the critical rendering path and often deferring subsequent non-essential loads to periods of idle time (or lazy-loading them on demand).

Performance Goals

Performance is essentially the art of avoiding work. And when it's unavoidable, make any work you do as efficient and well timed as possible. With the stages of RAIL interaction laid out, let's summarize the goals:

Response: tap to paint in less than 100 milliseconds; drag to paint in less than 16 milliseconds.
Animation: each frame completes in less than 16 milliseconds.
Idle: use idle time to proactively schedule work; complete that work in 50-millisecond chunks.
Page load: satisfy the "response" goals during full load; get first meaningful paint in 1,000 milliseconds.
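A minimal sketch of the idle-time chunking described above, assuming a browser that supports requestIdleCallback (the queued task functions are illustrative names, not from the article):

```js
// Drain a queue of non-essential work during idle periods, a little at a
// time, so a late-arriving tap can still be answered within its 100 ms budget.
const deferredTasks = [initComments, sortCachedResults, sendAnalyticsBeacon];

function processDeferredTasks(deadline) {
  // timeRemaining() reports how much of the current idle period is left;
  // idle deadlines are capped at roughly 50 ms, matching the chunk size above.
  while (deferredTasks.length > 0 && deadline.timeRemaining() > 0) {
    const task = deferredTasks.shift();
    task();
  }
  // Anything left over waits for the next idle period.
  if (deferredTasks.length > 0) {
    requestIdleCallback(processDeferredTasks);
  }
}

requestIdleCallback(processDeferredTasks);
```

Each individual task still needs to be small: the deadline only tells the loop when to stop, it cannot interrupt a task that is already running.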
A Note On Measurement

Here are a few handy tips on measuring your project's performance profile:

Measuring these on a MacBook Pro or comparable machine (which many of us designers and developers have) won't give a representative idea of your mobile users. Common phones can be over ten times slower than a desktop!
A Nexus 4 is a good middle-of-the-road device to use.
"Regular 3G" is recommended for network throttling.
Also, look at your analytics to see what your users are on. You can then mimic the 90th percentile's experience for testing.

What About Battery And Memory?

Delivering on all of the goals above but leaving the user with 10% battery and 0% available memory isn't putting the user first. We're not yet sure how to quantify being a responsible citizen of the shared resources, but RAIL may some day get a B (for battery) and an M (for memory), turning into BLAIMR, PRIMAL or something else equally fun and memorable.

Business Impact Of RAIL

Many of us do what we do because we love building awesome things. We don't work in a vacuum, though, and sometimes we have to make the business case to managers, stakeholders and clients to be able to prioritize performance as part of the user experience. Luckily, many large companies have shared numbers that help us make our case:

Google: 2% slower = 2% less searching per user
Yahoo: 400 milliseconds faster = 9% more traffic
AOL: Faster pages = more page views
Amazon: 100 milliseconds faster = 1% more revenue
Aberdeen Group: 1 second slower = 11% fewer page views, 7% less conversion

Google uses website speed in search ranking.

Summary

The RAIL model is a lens to look at a user's experience with a website or app as a journey comprising individual interactions. Once you know each interaction's area, you will know what the user will perceive and, therefore, what your goals are. Sometimes it takes extra effort, but if you're kind to the user, they'll be good to you.

Resources

If you want more information on RAIL and how to optimize your websites and apps, take a look at these.

Presentations On RAIL

"Performance RAIL's: The Art and Science of optimizing for Silicon and Wetware" (slides), Ilya Grigorik, Google, May 2015
"How Users Perceive the Speed of The Web" (slides), Paul Irish, FluentConf, April 2015
"Performance on RAILs" (video), Paul Lewis, Nordic.js, September 2015

Guidance And Documentation

"Optimizing Performance," Web Fundamentals, Google Developers
"Browser Rendering Optimization" (course), Udacity
"Profile" (performance), Google Web Tools, Google Developers
"The RAIL Performance Model," Web Fundamentals, Google Developers

Performance Audits

"Performance Audit of theverge.com" (July 2015)
"Performance Audit of imore.com" (July 2015)
"Performance Audit of m.espn.com" (April 2015)
"Performance Audit of squarespace.com" (April 2015)
"Performance Audit of cafepress.com" (April 2015)
"Performance Audit of CNET, Wikipedia and Time.com" (February 2015)
"Performance Audit of Wikipedia Rich Editor" (February 2015)

Academic Research On Perceivable Performance

1968: "Response Time in Man-Computer Conversational Transactions" (PDF), Robert B. Miller, Fall Joint Computer Conference 1968
1991: "Response Times: The 3 Important Limits," Jakob Nielsen, Nielsen Norman Group
2004: "A Study on Tolerable Waiting Time: How long Are Web Users Willing to Wait?" (PDF), Fiona Fui-Hoon Nah, Behaviour and Information Technology, 2004
2005: "Interaction in 4-Second Bursts: The Fragmented Nature of Attentional Resources in Mobile HCI" (PDF), Antti Oulasvirta, Sakari Tamminen, Virpi Roto, and Jaana Kuorelahti, Interruptions in Human Computer Interaction
2006: "Quantifying Interactive User Experience on Thin Clients" (PDF), Niraj Tolia, David G. Andersen, and M. Satyanarayanan, The Internet Suspend/Resume Project, Carnegie Mellon
2012: "Characterizing Web Use on Smartphones" (PDF), Chad C. Tossell, Philip Kortum, Ahmad Rahmati, Clayton Shepard, Lin Zhong, Conference on Human Factors in Computing Systems 2012
2011: "Playing With Tactile Feedback Latency in Touchscreen Interaction: Two Approaches" (PDF), Topi Kaaresoja, Eve Hoggan, Emilia Anttila, Human-Computer Interaction: INTERACT 2011

Further Reading

Front-End Performance Checklist 2017
Getting Ready For HTTP/2
Everything You Need To Know About AMP
Progressive Enhancement
Improving Smashing Magazine's Performance
(al, mrn) | 2024-11-08T08:49:23 | en | train |
10,823,046 | jseliger | 2016-01-01T19:43:25 | We must put a million people on Mars if we are to ensure humanity's future | null | https://aeon.co/essays/elon-musk-puts-his-case-for-a-multi-planet-civilisation | 3 | 0 | null | null | null | no_error | Elon Musk puts his case for a multi-planet civilisation | Aeon Essays | 2014-09-30 | Ross Andersen | ‘Fuck Earth!’ Elon Musk said to me, laughing. ‘Who cares about Earth?’ We were sitting in his cubicle, in the front corner of a large open-plan office at SpaceX headquarters in Los Angeles. It was a sunny afternoon, a Thursday, one of three designated weekdays Musk spends at SpaceX. Musk was laughing because he was joking: he cares a great deal about Earth. When he is not here at SpaceX, he is running an electric car company. But this is his manner. On television Musk can seem solemn, but in person he tells jokes. He giggles. He says things that surprise you.
When I arrived, Musk was at his computer, powering through a stream of single-line email replies. I took a seat and glanced around at his workspace. There was a black leather couch and a large desk, empty but for a few wine bottles and awards. The windows looked out to a sunbaked parking lot. The vibe was ordinary, utilitarian, even boring. After a few minutes passed, I began to worry that Musk had forgotten about me, but then suddenly, and somewhat theatrically, he wheeled around, scooted his chair over, and extended his hand. ‘I’m Elon,’ he said.
It was a nice gesture, but in the year 2014 Elon Musk doesn’t need much of an introduction. Not since Steve Jobs has an American technologist captured the cultural imagination like Musk. There are tumblrs and subreddits devoted to him. He is the inspiration for Robert Downey Jr’s Iron Man. His life story has already become a legend. There is the alienated childhood in South Africa, the video game he invented at 12, his migration to the US in the mid-1990s. Then the quick rise, beginning when Musk sold his software company Zip2 for $300 million at the age of 28, and continuing three years later, when he dealt PayPal to eBay for $1.5 billion. And finally, the double down, when Musk decided idle hedonism wasn’t for him, and instead sank his fortune into a pair of unusually ambitious startups. With Tesla he would replace the world’s cars with electric vehicles, and with SpaceX he would colonise Mars. Automobile manufacturing and aerospace are mature industries, dominated by corporate behemoths with plush lobbying budgets and factories in all the right congressional districts. No matter. Musk would transform both, simultaneously, and he would do it within the space of a single generation.
Musk announced these plans shortly after the bursting of the first internet bubble, when many tech millionaires were regarded as mere lottery winners. People snickered. They called him a dilettante. But in 2010, he took Tesla public and became a billionaire many times over. SpaceX is still privately held, but it too is now worth billions, and Musk owns two-thirds of it outright. SpaceX makes its rockets from scratch at its Los Angeles factory, and it sells rides on them cheap, which is why its launch manifest is booked out for years. The company specialises in small satellite launches, and cargo runs to the space station, but it is now moving into the more mythic business of human spaceflight. In September, NASA selected SpaceX, along with Boeing, to become the first private company to launch astronauts to the International Space Station (ISS). Musk is on an epic run. But he keeps pushing his luck. In every interview, there is an outlandish new claim, a seeming impossibility, to which he attaches a tangible date. He is always giving you new reasons to doubt him.
I had come to SpaceX to talk to Musk about his vision for the future of space exploration, and I opened our conversation by asking him an old question: why do we spend so much money in space, when Earth is rife with misery, human and otherwise? It might seem like an unfair question. Musk is a private businessman, not a publicly funded space agency. But he is also a special case. His biggest customer is NASA and, more importantly, Musk is someone who says he wants to influence the future of humanity. He will tell you so at the slightest prompting, without so much as flinching at the grandiosity of it, or the track record of people who have used this language in the past. Musk enjoys making money, of course, and he seems to relish the billionaire lifestyle, but he is more than just a capitalist. Whatever else might be said about him, Musk has staked his fortune on businesses that address fundamental human concerns. And so I wondered, why space?
Musk did not give me the usual reasons. He did not claim that we need space to inspire people. He did not sell space as an R & D lab, a font for spin-off technologies like astronaut food and wilderness blankets. He did not say that space is the ultimate testing ground for the human intellect. Instead, he said that going to Mars is as urgent and crucial as lifting billions out of poverty, or eradicating deadly disease.
‘I think there is a strong humanitarian argument for making life multi-planetary,’ he told me, ‘in order to safeguard the existence of humanity in the event that something catastrophic were to happen, in which case being poor or having a disease would be irrelevant, because humanity would be extinct. It would be like, “Good news, the problems of poverty and disease have been solved, but the bad news is there aren’t any humans left.”’
Musk has been pushing this line – Mars colonisation as extinction insurance – for more than a decade now, but not without pushback. ‘It’s funny,’ he told me. ‘Not everyone loves humanity. Either explicitly or implicitly, some people seem to think that humans are a blight on the Earth’s surface. They say things like, “Nature is so wonderful; things are always better in the countryside where there are no people around.” They imply that humanity and civilisation are less good than their absence. But I’m not in that school,’ he said. ‘I think we have a duty to maintain the light of consciousness, to make sure it continues into the future.’
People have been likening light to consciousness since the days of Plato and his cave because, like light, consciousness illuminates. It makes the world manifest. It is, in the formulation of the great Carl Sagan, the Universe knowing itself. But the metaphor is not perfect. Unlike light, whose photons permeate the entire cosmos, human-grade consciousness appears to be rare in our Universe. It appears to be something akin to a single candle flame, flickering weakly in a vast and drafty void.
Musk told me he often thinks about the mysterious absence of intelligent life in the observable Universe. Humans have yet to undertake an exhaustive, or even vigorous, search for extraterrestrial intelligence, of course. But we have gone a great deal further than a casual glance skyward. For more than 50 years, we have trained radio telescopes on nearby stars, hoping to detect an electromagnetic signal, a beacon beamed across the abyss. We have searched for sentry probes in our solar system, and we have examined local stars for evidence of alien engineering. Soon, we will begin looking for synthetic pollutants in the atmospheres of distant planets, and asteroid belts with missing metals, which might suggest mining activity.
The failure of these searches is mysterious, because human intelligence should not be special. Ever since the age of Copernicus, we have been told that we occupy a uniform Universe, a weblike structure stretching for tens of billions of light years, its every strand studded with starry discs, rich with planets and moons made from the same material as us. If nature obeys identical laws everywhere, then surely these vast reaches contain many cauldrons where energy is stirred into water and rock, until the three mix magically into life. And surely some of these places nurture those first fragile cells, until they evolve into intelligent creatures that band together to form civilisations, with the foresight and staying power to build starships.
‘At our current rate of technological growth, humanity is on a path to be godlike in its capabilities,’ Musk told me. ‘You could bicycle to Alpha Centauri in a few hundred thousand years, and that’s nothing on an evolutionary scale. If an advanced civilisation existed at any place in this galaxy, at any point in the past 13.8 billion years, why isn’t it everywhere? Even if it moved slowly, it would only need something like .01 per cent of the Universe’s lifespan to be everywhere. So why isn’t it?’
Life’s early emergence on Earth, only half a billion years after the planet coalesced and cooled, suggests that microbes will arise wherever Earthlike conditions obtain. But even if every rocky planet were slick with unicellular slime, it wouldn’t follow that intelligent life is ubiquitous. Evolution is endlessly inventive, but it seems to feel its way toward certain features, like wings and eyes, which evolved independently on several branches of life’s tree. So far, technological intelligence has sprouted only from one twig. It’s possible that we are merely the first in a great wave of species that will take up tool-making and language. But it’s also possible that intelligence just isn’t one of natural selection’s preferred modules. We might think of ourselves as nature’s pinnacle, the inevitable endpoint of evolution, but beings like us could be too rare to ever encounter one another. Or we could be the ultimate cosmic outliers, lone minds in a Universe that stretches to infinity.
Musk has a more sinister theory. ‘The absence of any noticeable life may be an argument in favour of us being in a simulation,’ he told me. ‘Like when you’re playing an adventure game, and you can see the stars in the background, but you can’t ever get there. If it’s not a simulation, then maybe we’re in a lab and there’s some advanced alien civilisation that’s just watching how we develop, out of curiosity, like mould in a petri dish.’ Musk flipped through a few more possibilities, each packing a deeper existential chill than the last, until finally he came around to the import of it all. ‘If you look at our current technology level, something strange has to happen to civilisations, and I mean strange in a bad way,’ he said. ‘And it could be that there are a whole lot of dead, one-planet civilisations.’
It is true that no civilisation can last long in this Universe if it stays confined to a single planet. The science of stellar evolution is complex, but we know that our mighty star, the ball of fusing hydrogen that anchors Earth and powers all of its life, will one day grow so large that its outer atmosphere will singe and sterilise our planet, and maybe even engulf it. This event is usually pegged for 5-10 billion years from now, and it tends to mark Armageddon in secular eschatologies. But our biosphere has little chance of surviving until then.
Five hundred million years from now, the Sun won’t be much larger than it is today but it will be swollen enough to start scorching the food chain. By then, Earth’s continents will have fused into a single landmass, a new Pangaea. As the Sun dilates, it will pour more and more radiation into the atmosphere, widening the daily swing between hot and cold. The supercontinent’s outer shell will suffer expansions and contractions of increasing violence. Its rocks will become brittle, and its silicates will begin to erode at unprecedented rates, taking carbon dioxide with them, down to the seafloor and into the deep crust. Eventually, the atmosphere will become so carbon-poor that trees will be unable to perform photosynthesis. The planet will be shorn of its forests, but a few plants will make a valiant last stand, until the brightening Sun kills them off, too, along with every animal that depends on them, which is to say every animal on Earth.
In a billion years, the oceans will have boiled away altogether, leaving empty trenches that are deeper than Everest is tall. Earth will become a new Venus, a hothouse planet where even the hardiest microbes cannot survive. And this is the optimistic scenario, for it assumes our biosphere will die of old age, and not something more sudden and stroke-like. After all, a billion years is a long time, long enough to make probabilistic space for all kinds of catastrophes, including those that have no precedent in human memory.
Of all the natural disasters that appear in our histories, the most severe are the floods, tales of global deluge inspired by the glacial melt at the end of the last Ice Age. There are a few stray glimmers of cosmic disasters, as in Plato’s Timaeus, when he tells the story of Phaeton, the son of the Sun god, who could not drive his father’s fiery chariot across the sky, and so crashed it into the Earth, burning the planet’s surface to a crisp. Plato writes:
That story, as it is told, has the fashion of a legend, but the truth of it lies in the occurrence of a shift of the bodies in the heavens which move round the Earth, and a destruction of the things on the Earth by fierce fire, which recurs at long intervals.
A remarkable piece of ancient wisdom, but on the whole, human culture is too fresh an invention to have preserved the scarier stuff we find in the geological record. We have no tales of mile-wide asteroid strikes, or super volcanoes, or the deep freezes that occasionally turn our blue planet white. The biosphere has bounced back from each of these shocks, but not before sacrificing terrifying percentages of its species. And even its most remarkable feats of resilience are cold comfort, for the future might subject Earth to entirely novel experiences.
A billion years will give us four more orbits of the Milky Way galaxy, any one of which could bring us into collision with another star, or a supernova shockwave, or the incinerating beam of a gamma ray burst. We could swing into the path of a rogue planet, one of the billions that roam our galaxy darkly, like cosmic wrecking balls. Planet Earth could be edging up to the end of an unusually fortunate run.
If human beings are to survive these catastrophes, both the black swans and the certainties, we will need to do what life has always done: move in the service of survival. We will need to develop new capabilities, as our aquatic forebears once evolved air-gulping lungs, and bony fins for crude locomotion, struggling their way onto land. We will need to harness the spirit that moved our own species to trek into new continents, so that our recent ancestors could trickle out to islands and archipelagos, before crossing whole oceans, on their way to the very ends of this Earth. We will need to set out for new planets and eventually, new stars. But need we make haste?
Some in the space exploration community, including no less a figure than the physicist Freeman Dyson, say that human spaceflight is folly in the short term. We humans are still in our technological infancy, after all, only a million years removed from the first control of fire. We have progressed quickly, from those first campfire sparks to the explosions we bottle in tall cylinders, to power our way out of Earth’s gravity well. But not everyone who sits atop our rockets returns safely. To seed a colony on another planet, we need astronaut safety to scale up. Perhaps we should park human missions for now, and explore space through the instruments of our cosmic drones, like the Voyager probe that recently slipped from the Solar System, to send us its impressions of interstellar space. We can resume human spaceflight later this century, or next, after we have reaped the full fruits of our current technological age. For all we know, revolutions in energy, artificial intelligence and materials science could be imminent. Any one of them would make human spaceflight a much easier affair.
‘There is an argument you often hear in space circles,’ I said to Musk, ‘where people say the focus on human space travel in the near-term is entirely misplaced – ’
‘What focus? There isn’t one, you know,’ he said, cutting me off.
‘But to the extent you’re advocating for one,’ I said, ‘there is an argument that says until we ramp up technologically, we’re better off sending probes because, as you know, the presence of a single human being on a spacecraft makes the engineering exponentially more difficult.’
‘Well, we are sending probes,’ Musk told me. ‘And they are very expensive probes, by the way. They aren’t exactly bargain-basement. The last RC car we sent to Mars cost more than $3 billion. That’s a hell of a droid. For that kind of money, we should be able to send a lot of people to Mars.’
There is a story Musk likes to tell, part of the founding myth of SpaceX, about how he stayed up late one night searching NASA’s website for information about a crewed mission to Mars. This was back in 2001, when the space shuttles were still flying, their launches providing a steady drumbeat of spectacle, just enough to convince the casual observer that human spaceflight wasn’t in serious decline. Today, it is impossible to sustain that delusion.
The idea that humans would one day venture into the sky is as old as mythology, but it wasn’t until the scientific revolution, when the telescope made the sky legible, that it began to seem like a realistic objective. In 1610, the astronomer Johannes Kepler wrote, in a letter to Galileo:
Let us create vessels and sails adjusted to the heavenly ether, and there will be plenty of people unafraid of the empty wastes. In the meantime, we shall prepare, for the brave sky-travellers, maps of the celestial bodies.
After the hot air balloon and airplane were invented, a few visionaries moved on to planning for space colonisation itself. But it wasn’t until the Space Race, the extraordinary period of progress that began with Sputnik in 1957 and ended with the first Moon landing in 1969, that the idea of cosmic manifest destiny moved from the fringe to the mainstream. In the ensuing decades, it would inspire whole literatures and subcultures, becoming, in the process, one of the dominant secular narratives of the human future. But reality has not kept up.
It has been three years since NASA, the world’s best-funded space agency, fired a human being into orbit. Americans who wish to fly to the ISS must now ride on Russian rockets, launched from Kazakhstan, at the pleasure of Vladimir Putin. Even the successful trips are, in their own way, evidence of decline, because the space station sits a thousand times closer to Earth than the Moon. Watching NASA astronauts visit it is about as thrilling as watching Columbus sail to Ibiza. But that’s as good as it’s going to get for a while. The agency’s next generation rocket isn’t due until 2018, and its first iteration will barely best the Saturn V, the pyrotechnic beast that powered the Apollo missions. American presidents occasionally make bold, Kennedy-like pronouncements about sending humans to Mars. But as Musk discovered more than a decade ago, there are no real missions planned, and even optimists say it will be 2030 at the earliest.
It wasn’t supposed to be like this. Only a few decades ago, it seemed as though we were entering a new epoch of exploration, one that would shame the seafarers of the High Renaissance. We would begin by mastering lower Earth orbit, so that visits to space were safe and routine. Then we’d go to the Moon and build a permanent base there, a way station that would let us leap to the planets, each in quick succession, as though they were lily pads on a pond, and not massive moving worlds spaced by hundreds of millions of miles. We’d start with Mars and then shoot through the asteroid belt to Jupiter and its ocean-harbouring moons. We’d drink in Saturn’s sublimity, its slanted rings and golden hue, and then head for the outer giants, and the icy rubble at the Solar System’s edge. The Sun would look small out there, and the stars beckoning. We would spread through the Milky Way’s safe zone, the doughnut of gas and fire, billions of stars strong, that surrounds our galaxy’s violent core, and then we’d press out into intergalactic space. We’d use wormholes or warp drives, or some other vaguely sketched physics, to pretend away the millions of light years that separate us from Andromeda and the glittering web beyond it, whose glimpsable regions alone contain hundreds of billions of galaxies.
When Musk realized there were no missions to Mars on the books, he figured Americans had lost interest in space exploration. Two years later, the public response to the Columbia shuttle disaster convinced him otherwise. ‘It was in every newspaper, every magazine, every news station, even those that had nothing to do with space,’ he told me. ‘And yeah, seven people died and that was awful, but seven people die all the time, and nobody pays any attention to it. It’s obvious that space is deeply ingrained in the American psyche.’ Musk now sees the Space Race as a transient Cold War phenomenon, a technological pissing match fuelled by unsustainable public spending. ‘The Soviets were crowing after Sputnik, about how they had better technology than we did, and so therefore communism is better,’ he told me. ‘And so we set a really tough target and said we would beat them there, and money was no object. But once the ideological battle was won, the impetus went away, and money very quickly became an object.’
NASA’s share of the US federal budget peaked at 4.4 per cent in 1966, but a decade later it was less than 1 per cent, where it has remained ever since. The funding cut forced NASA to shutter the Saturn V production lines, along with the final three Moon landings, and a mission to Mars slated for the late 1980s. That’s why the agency’s website looked so barren when Musk visited it in 2001.
Aghast at this backsliding, and still thinking it a failure of will, Musk began planning a Mars mission of his own. He wanted to send a greenhouse to Mars, filled with plants that would become, in the course of their long journeying, the most distant travellers of all multicellular life. Images of lush, leafy organisms living on the red planet would move people, he figured, just as images of the Earth rising, sunlike, on the lunar plain had moved previous generations. With a little luck, the sentiment would translate into political will for a larger NASA budget.
When Musk went to price the mission with US launch companies, he was told transport would cost $60-80 million. Reeling, he tried to buy a refurbished Russian intercontinental ballistic missile to do the job, but his dealer kept raising the price on him. Finally, he’d had enough. Instead of hunting around for a cheaper supplier, Musk founded his own rocket company. His friends thought he was crazy, and tried to intervene, but he would not be talked down. Musk identifies strongly as an engineer. That’s why he usually takes a title like chief technical officer at the companies he runs, in addition to chief executive officer. He had been reading stacks of books about rockets. He wanted to try building his own.
Six years later, it all looked like folly. It was 2008, a year Musk describes as the worst of his life. Tesla was on the verge of bankruptcy. Lehman had just imploded, making capital hard to come by. Musk was freshly divorced and borrowing cash from friends to pay living expenses. And SpaceX was a flameout, in the most literal sense. Musk had spent $100 million on the company and its new rocket, the Falcon 1. But its first three launches had all detonated before reaching orbit. The fourth was due to lift off in early Fall of that year, and if it too blew apart in the atmosphere, SpaceX would likely have numbered among the casualties. Aerospace journalists were drafting its obituary already. Musk needed a break, badly. And he got it, in the form of a fully intact Falcon 1, riding a clean column of flame out of the atmosphere and into the history books, as the first privately funded, liquid-fuelled rocket to reach orbit.
SpaceX nabbed a $1.6 billion contract with NASA in the aftermath of that launch, and Musk used the money to expand rapidly. In the years since, he has reeled off 15 straight launches without a major failure, including the first private cargo flights to the ISS. Last year, he signed a 20-year lease on launch pad 39A, the hallowed stretch of Cape Canaveral concrete that absorbed the fire of Apollo’s rockets. Earlier this year, he bought a tract of land near Brownsville, Texas, where he plans to build a dedicated spaceport for SpaceX. ‘It took us ages to get all the approvals,’ he told me. ‘There were a million federal agencies that needed to sign off, and the final call went to the National Historic Landmark Association, because the last battle of the Civil War was fought a few miles away from our site, and visitors might be able to see the tip of our rocket from there. We were like, “Really? Have you seen what it’s like around there? Nobody visits that place.”’
Musk isn’t shy about touting the speed of his progress. Indeed, he has an Ali-like appetite for needling the competition. A Bloomberg TV interviewer once asked him about one of Tesla’s competitors and he laughed in response. ‘Why do you laugh?’ she said. ‘Have you seen their car?’ he replied, incredulously. This same streak of showmanship surfaced when Musk and I discussed the aerospace industry. ‘There have been a number of space startups,’ he told me. ‘But they have all failed, or their success was irrelevant.’
But SpaceX does have competitors, both industry giants and scrappy startups alike. The company has just spent three years in a dogfight to become the first commercial space outfit to launch US astronauts to the space station. The awarding of this contract became more urgent in March, after the US sanctioned Russia for rolling tanks into Crimea. A week later, Russia’s Deputy Prime Minister Dmitry Rogozin quipped: ‘After analysing the sanctions against our space industry, I suggest the US deliver its astronauts to the ISS with a trampoline.’
SpaceX was an early favourite to win the contract, but it was never a lock. Critics have hammered the company for delaying launches, and in August it suffered a poorly timed mishap, when one of its test rockets blew up shortly after lift-off. In the end, NASA split the contract between Boeing and SpaceX, giving each six launches. Musk said that he would move into human missions, win or lose, but his progress would have been slowed considerably. The contract is only for short hops to lower Earth orbit, but it will give Musk the chance to demonstrate that he can do human spaceflight better than anyone else. And it will give him the money and reputation he needs to work up to a more extraordinary feat of engineering, one that has not been attempted in more than four decades: the safe transport of human beings to a new world.
Great migrations are often a matter of timing, of waiting for a strait to freeze, a sea to part, or a planet to draw near. The distance between Earth and Mars fluctuates widely as the two worlds whirl around in their orbits. At its furthest, Mars is a thousand times further than the Moon. But every 26 months they align, when the faster moving Earth swings into position between Mars and the Sun. When this alignment occurs where their orbits are tightest, Mars can come within 36 million miles, only 150 times further than the Moon. The next such window is only four years away, too soon to send a crewed ship. But in the mid-2030s, Mars will once again burn bright and orange in our sky, and by then Musk might be ready to send his first flurry of missions, to seed a citylike colony that he expects to be up and running by 2040.
‘SpaceX is only 12 years old now,’ he told me. ‘Between now and 2040, the company’s lifespan will have tripled. If we have linear improvement in technology, as opposed to logarithmic, then we should have a significant base on Mars, perhaps with thousands or tens of thousands of people.’
Musk told me this first group of settlers will need to pay their own way. ‘There needs to be an intersection of the set of people who wish to go, and the set of people who can afford to go,’ he said. ‘And that intersection of sets has to be enough to establish a self-sustaining civilisation. My rough guess is that for a half-million dollars, there are enough people that could afford to go and would want to go. But it’s not going to be a vacation jaunt. It’s going to be saving up all your money and selling all your stuff, like when people moved to the early American colonies.’
Even at that price, a one-way trip to Mars could be a tough sell. It would be fascinating to experience a deep space mission, to see the Earth receding behind you, to feel that you were afloat between worlds, to walk a strange desert under an alien sky. But one of the stars in that sky would be Earth, and one night, you might look up at it, through a telescope. At first, it might look like a blurry sapphire sphere, but as your eyes adjusted, you might be able to make out its oceans and continents. You might begin to long for its mountains and rivers, its flowers and trees, the astonishing array of life forms that roam its rainforests and seas. You might see a network of light sparkling on its dark side, and realise that its nodes were cities, where millions of lives are coming into collision. You might think of your family and friends, and the billions of other people you left behind, any one of which you could one day come to love.
The austerity of life on Mars might nurture these longings into regret, or even psychosis. From afar, the Martian desert evokes sweltering landscapes like the Sahara or the American West, but its climate is colder than the interior of Antarctica. Mars used to be wrapped in a thick blanket of atmosphere, but something in the depths of time blew it away, and the patchy remains are too thin to hold in heat or pressure. If you were to stroll onto its surface without a spacesuit, your eyes and skin would peel away like sheets of burning paper, and your blood would turn to steam, killing you within 30 seconds. Even in a suit you’d be vulnerable to cosmic radiation, and dust storms that occasionally coat the entire Martian globe, in clouds of skin-burning particulates, small enough to penetrate the tightest of seams. Never again would you feel the sun and wind on your skin, unmediated. Indeed, you would probably be living underground at first, in a windowless cave, only this time there would be no wild horses to sketch on the ceiling.
It is possible that Mars could one day be terraformed into an Earthly paradise, but not anytime soon. Even on our planet, whose natural systems we have studied for centuries, the weather is too complex to predict, and geoengineering is a frontier technology. We know we could tweak the Earth’s thermostat, by sending a silvery mist of aerosols into the stratosphere, to reflect away sunlight. But no one knows how to manufacture an entire atmosphere. On Mars, the best we can expect is a crude habitat, erected by robots. And even if they could build us a Four Seasons, near a glacier or easily mined ore, videoconferencing with Earth won’t be among the amenities. Messaging between the two planets will always be too delayed for any real-time give and take.
Cabin fever might set in quickly on Mars, and it might be contagious. Quarters would be tight. Governments would be fragile. Reinforcements would be seven months away. Colonies might descend into civil war, anarchy or even cannibalism, given the potential for scarcity. US colonies from Roanoke to Jamestown suffered similar social breakdowns, in environments that were Edenic by comparison. Some individuals might be able to endure these conditions for decades, or longer, but Musk told me he would need a million people to form a sustainable, genetically diverse civilisation.
‘Even at a million, you’re really assuming an incredible amount of productivity per person, because you would need to recreate the entire industrial base on Mars,’ he said. ‘You would need to mine and refine all of these different materials, in a much more difficult environment than Earth. There would be no trees growing. There would be no oxygen or nitrogen that are just there. No oil.’
I asked Musk how quickly a Mars colony could grow to a million people. ‘Excluding organic growth, if you could take 100 people at a time, you would need 10,000 trips to get to a million people,’ he said. ‘But you would also need a lot of cargo to support those people. In fact, your cargo to person ratio is going to be quite high. It would probably be 10 cargo trips for every human trip, so more like 100,000 trips. And we’re talking 100,000 trips of a giant spaceship.’
Musk told me all this could happen within a century. He is rumoured to have a design in mind for this giant spaceship, a concept vehicle he calls the Mars Colonial Transporter. But designing the ship is the easy part. The real challenge will be driving costs down far enough to launch whole fleets of them. Musk has an answer for that, too. He says he is working on a reusable rocket, one that can descend smoothly back to Earth after launch, and be ready to lift off again in an hour.
‘Rockets are the only form of transportation on Earth where the vehicle is built anew for each journey,’ he says. ‘What if you had to build a new plane for every flight?’ Musk’s progress on reusable rockets has been slow, but one of his prototypes has already flown a thousand metres into the air, before touching down softly again. He told me full reusability would reduce mission costs by two orders of magnitude, to tens of dollars per pound of weight. That’s the price that would convert Earth’s launch pads into machine guns, capable of firing streams of spacecraft at deep space destinations such as Mars. That’s the price that would launch his 100,000 ships.
All it takes is a glance over your shoulder, to the alien world of 1914, to remind yourself how much can happen in a century. But a million people on Mars sounds like a techno-futurist fantasy, one that would make Ray Kurzweil blush. And yet, the very existence of SpaceX is fantasy. After talking with Musk, I took a stroll through his cathedral-like rocket factory. I wandered the rows of chromed-out rocket engines, all agleam under blue neon. I saw white tubes as huge as stretched-out grain silos, with technicians crawling all over them, their ant-farm to-and-fro orchestrated from above, by managers in glass cube offices. Mix in the cleanroom jumpsuits and the EDM soundtrack, and the place felt something like Santa’s workshop as re-imagined by James Cameron. And to think: 12 years ago, this whole thrumming hive, this assembly line for spaceships, did not even exist, except as a hazy notion, a few electrified synapses in Musk’s overactive imagination.
Who am I to say what SpaceX will accomplish in a century’s time? For all I know Musk will be hailed as a visionary by then, a man of action without parallel in the annals of spaceflight. But there are darker scenarios, too. Musk could push the envelope, and see his first mission to Mars end in tragedy. Travel to Mars could prove elusive, like cold fusion. It might be one of those feats of technology that is always 25 years away. Musk could come to be seen as a cultural artifact, a personification of our post-Apollo hangover. An Icarus.
I asked Musk if he’d made peace with the possibility that his project could still be in its infancy, when death or infirmity forces him to pass the baton. ‘That’s what I expect will be the case,’ he said. ‘Make peace with it, of course. I’ve thought about that quite a lot. I’m trying to construct a world that maximises the probability that SpaceX continues its mission without me,’ he said. I nodded toward a cluster of frames on his wall, portraits of his five sons. ‘Will you give it to them?’ He told me he had planned to give it to an institution, or several, but now he thinks that a family influence might be stabilising. ‘I just don’t want it to be controlled by some private equity firm that would milk it for near-term revenue,’ he said. ‘That would be terrible.’
This fear, that the sacred mission of SpaceX could be compromised, resurfaced when I asked Musk if he would one day go to Mars himself. ‘I’d like to go, but if there is a high risk of death, I wouldn’t want to put the company in jeopardy,’ he told me. ‘I only want to go when I could be confident that my death wouldn’t result in the primary mission of the company falling away.’ It’s possible to read Musk as a Noah figure, a man obsessed with building a great vessel, one that will safeguard humankind against global catastrophe. But he seems to see himself as a Moses, someone who makes it possible to pass through the wilderness – the ‘empty wastes,’ as Kepler put it to Galileo – but never sets foot in the Promised Land.
Before I left SpaceX, I wanted to know how far Musk thought human exploration would go. When a man tells you that a million people will live on Mars within a century, you want to know his limits, if only for credibility’s sake. ‘Do you think we will go to the stars?’ I asked him.
‘Wow,’ he said. ‘It’s pretty hard to get to another star system. Alpha Centauri is four light years away, so if you go at 10 per cent of the speed of light, it’s going to take you 40 years, and that’s assuming you can instantly reach that speed, which isn’t going to be the case. You have to accelerate. You have to build up to 20 or 30 per cent and then slow down, assuming you want to stay at Alpha Centauri and not go zipping past.’ To accentuate this last point, Musk made a high-pitched zooming noise, like kids make when playing with toy spaceships.
I pressed him about star travel a bit more, but he stayed tight. ‘It’s just hard,’ he said. ‘With current life spans, you need generational ships. You need antimatter drives, because that’s the most mass-efficient. It’s doable, but it’s super slow.’
‘So you’re skeptical,’ I said. He cracked then, but only a little.
‘I’m not saying I’m skeptical of the stars,’ he said. ‘I just wonder what humanity will even look like when we try to do that. If we can establish a Mars colony, we can almost certainly colonise the whole Solar System, because we’ll have created a strong economic forcing function for the improvement of space travel. We’ll go to the moons of Jupiter, at least some of the outer ones for sure, and probably Titan on Saturn, and the asteroids. Once we have that forcing function, and an Earth-to-Mars economy, we’ll cover the whole Solar System. But the key is that we have to make the Mars thing work. If we’re going to have any chance of sending stuff to other star systems, we need to be laser-focused on becoming a multi-planet civilisation. That’s the next step.’
You can see why NASA has given Musk a shot at human spaceflight. He makes a great rocket but, more than that, he has the old vision in him. He is a revivalist, for those of us who still buy into cosmic manifest destiny. And he can preach. He says we are doomed if we stay here. He says we will suffer fire and brimstone, and even extinction. He says we should go with him, to that darkest and most treacherous of shores. He promises a miracle. | 2024-11-08T12:31:37 | en | train |
10,823,090 | filament | 2016-01-01T19:53:53 | Shields Down | null | http://randsinrepose.com/archives/shields-down/ | 11 | 2 | [
10823133,
10824708
] | null | null | no_error | Shields Down | 2016-01-01T11:51:17-08:00 | null | Resignations happen in a moment, and it's not when you declare, "I'm resigning." The moment happened a long time ago when you received a random email from a good friend who asked, "I know you're really happy with your current gig because you've been raving about it for a year, but would you like to come visit Our Company? No commitment. Just coffee." Now, everyone involved in this conversation transaction is aware of what is going down. While there is certainly no commitment, there is definitely an agenda. The reason they want you to visit The Company is because, of course, they want you there in the building because seeing a potential future is far more compelling than describing it. Still, seeing it isn't the moment of resignation. The moment happened the instant you decided, "What the hell? I haven't seen Don in months and it'd be good to see him." Your shields are officially down.

A Potential Future

Your shields drop the moment you let a glimpse of a potential different future into your mind. It seems like an unconsidered off-the-cuff thought sans consequence, but the thought opens you to possibilities that did not exist the moment before the thought existed. What is incredibly slippery about this moment is the complex, nuanced, and instant mental math performed that precedes the shields-down situation. When you are indirectly asked to lower your shields, you immediately parse, place a value, and aggregate your opinions on the following: Am I happy with my job? Do I like my manager? My team? Is this project I'm working on fulfilling? Am I learning? Am I respected? Am I growing? Do I feel fairly compensated? Is this company/team going anywhere? Do I believe in the vision? Do I trust the leaders?

Now, each human has a different prioritized subset of this list that they rank and value differently. Growth is paramount for some, truth for others. Whatever unique blend is important, you use that blend and ask yourself one final question as you consider lowering your shields. What has happened recently or in the past that either supports or detracts from what I value? The answer to that question determines whether your shields stay up or go down.

Humans Never Forget

As a leader of humans, I've watched sadly as valued co-workers have resigned. Each time I work to understand two things: Why are they leaving? When did their shields go down? In most cases, the answers to Question #1 are rehearsed and clear. It's the question they've been considering and asking themselves, so their answers are smooth. I'm looking for a smaller company where I can have more impact. I've been here for three years and I'm looking for a change of scenery. It happens. I want to work somewhere more established where I can dig my teeth into one hard problem.

These answers are fine, but they aren't the complete reason why they are leaving. It's the politically correct answer that is designed to easily answer the most obvious question. The real question, the real insight, comes from the answer to Question #2: When did their shields go down? To find and understand this shields-down moment, I ask, "When did you start looking?" Often the answers are a vague, "It kind'a just happened. I wasn't really looking. I'm really happy here." Bullshit.
If I'm sitting here talking with you it means two things: I don't want you to leave and, to the best of my knowledge, you didn't want to leave either but here you are leaving. It didn't just happen. You chose. Maybe you weren't looking, but once your shields dropped, you started looking. Happy people don't leave jobs they love.

The reason this reads cranky is because I, the leader of the humans, screwed up. Something in the construction of the team or the company nudged you at a critical moment. When that mail arrived gently asking you about coffee, you didn't answer the way you answered the prior five similar mails with a brief, "Really happy here. Let's get a drink some time!" You think you thought Hmmm… what the hell. It can't hurt. What you actually thought or realized was: You know, I have no idea when I'm going to be a tech lead here. Getting yelled at two days ago still stings. I don't believe a single thing senior leadership says. Often you've forgotten this original thought in your subsequent intense job deliberations, but when I ask, when I dig, I usually find a basic values violation that dug in, stuck, and festered. Sometimes it's a major values violation from months ago. Sometimes it's a small violation that occurred at the worst possible time. In either case, your expectations of your company and your job were not met and when faced with opportunity elsewhere, you engaged.

It's Not Just Boredom

I covered a major contributor to shield drops in Bored People Quit. Boredom in its many forms is a major contributor to resignations, but the truth is the list of contributing factors to shield weakening is immense. When you combine this with the near constant increasing demand for talented humans, you've got a complex leadership situation. The reason I'm cranky is I'm doing the math. I'm placing a cost on the departure of a wanted human leaving and comparing that cost with whatever usually minor situation existed in the past that led to a shields-down situation. The departure cost is always exponentially higher.

My advice is similarly frustrating. Strategies to prevent shields dropping are as numerous as the reasons shields drop in the first place. I've discovered shield drops after the fact with close co-workers whom I met with for a 1:1 every single week where I felt we were covering topics of substance; where I felt I understood what they valued and how they wanted to grow.

I've been here for three years and I'm looking for a change of scenery. It happens. Two months ago, someone told them their project was likely to be canceled. It wasn't.

You know, I have no idea when I'm going to be a tech lead here. At the end of last month, she heard via the grapevine that she wasn't going to be promoted. When she got the promotion she deserved, it was too late.

I don't believe a single thing senior leadership says. At the last All Hands, I blew off a question with a terse answer because I didn't want to dignify gossip. I forgot there is signal even in gossip.

Every moment as a leader is an opportunity to either strengthen or weaken shields. Every single moment.

Happy New Year.
10,823,102 | Amorymeltzer | 2016-01-01T19:55:48 | The Popularity of Perl in 2015 | null | http://szabgab.com/the-popularity-of-perl-in-2015.html | 3 | 0 | null | null | null | no_error | The Popularity of Perl in 2015 | null | null |
A year ago I published an article called The Popularity of Perl in 2014
It contained a list of sites with their Alexa ranking and a few sites with information from their Google Analytics.
This is an updated version of that report for January 2016.
Highlights:
PerlMonks is now more popular than perl.org.
PerlMaven is now in the 4th place.
perl6.org moved up to the 9th place
The Popularity of Perl
There are many ways to measure the popularity of a programming language.
We are going to look at 3 measurements here:
Perl Weekly subscribers
Alexa
Google Analytics of search.cpan.org, blogs.perl.org, MetaCPAN, and Perl Maven
Perl Weekly
The Perl Weekly website itself does not get a lot of visitors. What is interesting there is the number of subscribers.
It grew from 4,156 on 1 January 2013 to 5,103 on 31 December 2013, to 5,645 on 31 December 2014, and to 5,927 on 31 December 2015.
Its Google+ page was circled by 4,066 people on 1 January 2014, 4,874 on 1 January 2015,
and by 5,045 people in January 2016.
(I don't seem to have the number from the beginning of 2013.)
These numbers are, of course, more indicative of the popularity of the Perl Weekly than that of Perl.
Alexa
The Alexa rankings provide an approximate ranking of web sites.
It has big and strange fluctuations, but lacking more exact data,
it can be a good indicator for the relative popularity of web sites.
Ordered according to their position on 1 Jan 2016.
Perl
site | 15 Jan 2013 | 1 Jan 2014 | 1 Jan 2015 | 1 Jan 2016
cpan.org | 10,222 | 8,807 | 14,428 | 23,132
perlmonks.org | 21,521 | 21,273 | 26,500 | 25,771
perl.org | 18,634 | 14,772 | 21,893 | 28,302
perlmaven.com | 679,764 | 199,901 | 99,531 | 87,591
metacpan.org | 143,975 | 73,877 | 69,477 | 95,279
munin-monitoring.org | 106,196 | 89,993 | 129,329 | 166,561
twiki.org | 158,640 | 125,646 | 160,519 | 172,169
bugzilla.org | 94,309 | 91,851 | 148,722 | 182,114
perl6.org | 956,529 | 1,597,330 | 970,857 | 231,040
otrs.org | 187,82 | 130,611 | 327,586 | 277,734
perl.com | 189,149 | 200,674 | 271,028 | 301,739
dadamailproject.com | 153,316 | 135,448 | 357,760 | 322,546
perlmeme.org | - | 204,513 | 275,253 | 346,090
site 15 Jan 2013 1 Jan 2014 1 Jan 2015 1 Jan 2016
webgui.org 420,044 669,840 542,179 507,279
strawberryperl.com 421,282 354,047 406,097 526,210
foswiki.org 440,926 379,461 421,097 528,648
perltricks.com - 910,000 421,194 539,408
mojolicio.us 614,109 515,543 335,675 551,125
perlgeek.de 1,244,368 476,736 398,956 584,578
template-toolkit.org 535,891 407,008 483,630 622,442
perl-community.de 465,767 259,690 623,151 674,163
pm.org 382,919 447,865 765,765 681,008
perlide.org 941,509 799,919 763,733 733,524
perlfoundation.org 403,687 820,284 895,179 868,221
modernperlbooks.com 646,809 552,230 1,173,818 948,062
www.perlfect.com - 351,860 427,574 992,570
perl-begin.org 851,904 3,236,437 1,194,368 1,059,210
perldancer.org 612,468 566,055 723,862 1,207,023
1,341,985 4,497,852 - 1,232,354
cpantesters.org 553,281 265,792 344,731 1,243,869
catalystframework.org 679,637 447,718 563,539 1,399,517
dwimperl.com - - 1,759,392 2,001,871
perlhacks.com 4,922,207 1,440,058 1,640,884 2,110,479
perl6maven.com - - 7,301,179 4,153,084
perlnews.org 2,604,882 954,340 4,221,820 4,254,371
perl-tutorial.org 1,260,631 1,814,361 4,126,720 4,417,830
www.misc-perl-info.com - 597,487 963,995 5,199,525
perlbuzz.com 1,155,833 752,738 2,714,571 5,803,133
Google Analytics
Google Analytics provides much more accurate measurements of the number of visitors and page-views.
I have access to the data of several major, and a few smaller Perl-related sites.
Number of pageviews
Site December 2013 December 2014 Annual change December 2015 Annual change
CPAN total 2,183,610 1,940,160 -11.2% 1,516,152 -21.9%
search.cpan.org 1,758,977 1,363,260 -22.5% 1,053,618 -22.8%
metacpan.org 424,633 576,900 35.8% 462,534 -19.9%
perlmaven.com 116,574 250,970 115.0% 302,648 20.5%
blogs.perl.org 59,524 72,465 22.0% 53,379 -26.4%
strawberryperl.com 40,606 40,547 0.0% 38,931 -4.0%
padre.perlide.org 27,585 27,705 0.0% 27,042 -2.4%
dwimperl.com 10,942 10,221 -6.6% 9,400 -8.1%
perldancer.org 9,916 9,172 0.0% 6,847 -25.4%
perl6maven.com 1,561 3,983 155.0% 3,604 -9.6%
advent.perldancer.org 6,667 9,938 50.0% 1,967 -80.3%
Number of sessions
Site December 2013 December 2014 Annual change December 2015 Annual change
CPAN total 837,856 771,615 -7.9% 616,434 -20.2%
search.cpan.org 700,382 597,417 -14.8% 472,519 -21.0%
metacpan.org 137,474 174,198 27.0% 143,915 -17.4%
perlmaven.com 84,810 184,000 116.0% 201,418 9.4%
blogs.perl.org 44,195 52,012 18.2% 39,727 -23.7%
strawberryperl.com 31,690 31,329 0.0% 29,843 -4.8%
padre.perlide.org 10,333 11,360 9.7% 10,119 -11.0%
dwimperl.com 7,813 7,368 -6.5% 6,691 -9.2%
perldancer.org 5,214 5,388 0.0% 4,085 -24.2%
perl6maven.com 846 1,445 70.0% 1,608 11.2%
advent.perldancer.org 3,568 4,421 25.7% 1,257 -71.6%
Number of users
Site December 2013 December 2014 Annual change December 2015 Annual change
CPAN total 485,503 449,333 -7.5% 355,268 -21.0%
search.cpan.org 403,164 357,156 -11.5% 277,861 -22.3%
metacpan.org 82,339 92,177 12.2% 77,407 -16.1%
perlmaven.com 59,728 117,082 98.3% 116,869 -0.2%
blogs.perl.org 31,859 36,494 14.1% 27,834 -23.8%
strawberryperl.com 27,337 26,408 -3.3% 25,420 -3.8%
padre.perlide.org 8,531 9,559 11.7% 8,162 -14.7%
dwimperl.com 6,638 6,784 1.5% 6,147 -9.4%
perldancer.org 4,027 4,074 0.0% 3,232 -20.7%
perl6maven.com 680 1,169 72.0% 1,149 -1.8%
advent.perldancer.org 2,138 2,180 0.0% 880 -59.7%
CPAN total is metacpan.org+search.cpan.org
Data from perl.org and perl.com
See the report from last year.
Data from Stack Overflow
Number of Perl questions by year at StackOverflow
YearNumber of questions
20128,394
201311,230
201410,370
20157,794
Visitors stats of Perl6.org
Perl6.org visitor stats
| 2024-11-08T01:48:33 | en | train |
10,823,174 | ohjeez | 2016-01-01T20:13:59 | The Empathy Gap, and Why Women Are Treated Badly in Open Source Communities | null | http://perens.com/blog/2016/01/01/the-empathy-gap/ | 9 | 1 | [
10824461
] | null | null | no_error | The Empathy Gap, and Why Women are Treated Badly in Open Source Communities – Bruce Perens | null | null |
There are many stories of horrendous treatment of women in Open Source communities. Many projects are attempting to address the issue by instituting social codes and diversity policies. Yes, we really do need such things.
Some years ago, I contributed $1000 to be one of the seed funders of the Ada Initiative, which worked to assist women in participating in Open Source projects. That worked out for several years, and the organization had sort of an ugly meltdown in their last year that is best forgotten. There was something really admirable about the Ada Initiative in its good days, which is that it stuck to one message, stuck to the positive in helping women enter and continue in communities in which they were under-represented, and wasn’t anti-male. That’s the way we should do it.
People continue to work on women’s and diversity issues in the Open Source community in that tradition. Support them! But I remain interested in something they are not addressing:
How Did We Get Here??? How did we ever get to the point that a vocal minority of males in Open Source communities behave in the most boorish, misogynistic, objectifying manner toward women?
My theory is that in preschool through high school, we didn't teach those individuals how to have healthy friendships and mutually respectful social interaction with women, and that they ended up having very little empathy for women. If the school environment didn't actively segregate boys and girls, they naturally self-segregated and that wasn't corrected. And we ended up with another generation of boys who hadn't spent that much time around girl peers, didn't understand them, didn't have empathy for them. Later, when sexual attraction became a factor, the boys' lack of empathy led them to objectify women.
It’s unfortunately the case that software development in general and Open Source communities are frequented by males who have social development issues. I once complained online about how offended I was by a news story that said many software developers were on the autism spectrum. To my embarrassment, there were many replies to my complaint by people who wrote “no, I really am on the spectrum and I’m not alone here”.
Why is software a comfortable world for people with social development issues? The world of social relationships isn’t a fair one. People like you or not for reasons of their own. In contrast, software development is inherently fair. If you write it correctly, your program runs. Otherwise, it doesn’t. Your computer doesn’t get offended if you don’t state your message well. It doesn’t hold a grudge. It just waits until you write it correctly.
Online communities like those hosting Open Source developers tend to use textual communications. This is a comfortable environment for people who have trouble with face-to-face interaction.
So, we have an environment that attracts people with social development issues that might lead them to have a lack of empathy toward women, and we have some males who don’t have a pathology but weren’t properly socialized regarding their interaction with women.
This isn’t only a women’s problem. Back in the 1950’s and 1960’s, the United States started to address the problem that White people didn’t grow up with much empathy for Black people because so many White people didn’t grow up with any Blacks around them who were peers rather than servants. So we integrated the schools. I was in Junior High when we started “busing”, and there was so much resistance to integration that we evacuated for a bomb scare sometime during each school day. There is still a strong “segregationist South” political block within the United States, it’s a factor in every national election.
But school integration has addressed the issue of White people who simply grew up without any Black peers. We didn’t solve the problem of racial inequality but we did make progress.
Are we, as a society, paying as much attention to integrating male and female students throughout their preschool to 12th-grade years? Do we really do much to teach social maturity at all? Do we prevent males and females from naturally self-segregating whenever they have a chance in the school environment?
It’s still an open issue whether males and females have built-in biases that, for example, lead fewer women to be programmers, or if such biases only develop as a response to social signals. There is more science to be done. But it’s difficult to do that sort of science because we can’t separate the individuals from the social signals they’ve grown up with. Certainly we can improve the situation for the women who would be programmers except for the social signals.
Does your school district have the first policy regarding male and female integration and defeating self-segregation wherever it occurs? Can we, by implementing and following one, arrive at a generation with better social development and fewer anti-female biases?
Some feminists object to this idea, because they feel I'm saying that women's safe spaces are the problem. A "women's safe space" is a supportive environment where women can be together with other women separately from men, without the social conflict and intimidation that the presence of men (at least the misogynistic kind) would bring. Apparently, there are women's safe spaces at some software conferences, etc.
Women’s safe spaces are a symptom, and by the time we need them it might be too late to treat the disease, misogynistic behavior that develops in males in the preschool to 12th-grade years.
To prevent that disease, we can’t always put women in safe spaces, just as we can’t always put the Blacks and Whites in schools across town from each other if they are to live together as equals.
We can do so much with social codes, and that must be done because solving the real problem takes generations. We’ll only be able to solve that problem if we work today, with our children, to close the empathy gap.
| 2024-11-07T18:22:13 | en | train |
10,823,179 | kentf | 2016-01-01T20:14:47 | How I Read Over 50 Books in The Busiest Year of My Life | null | https://medium.com/@kentf/how-i-read-over-50-books-in-the-busiest-year-of-my-life-38b13ac40a97#.dz2l58aza | 1 | 0 | null | null | null | no_error | How I Read Over 50 Books in The Busiest Year of My Life | 2016-01-01T20:09:31.314Z | Kent Fenwick |
EDIT** — Disclaimer, I do not work for Amazon or Audible; I just love their products. If you want to get these books from the library or other sources, power to you. I just find it easier going to Audible. I am flattered that you all think this would be a sponsored post ;)
Being a father, having a spouse that works full time, working full time myself on a startup, helping friends with their startups on weekends, being the best father, husband, son, son-in-law, brother, brother-in-law, and friend I can be, doesn’t leave a lot of time to read.
So I stopped reading (mostly) and chose to listen to my books instead.
In a given week I don’t have a lot of free time, but I do have a lot of found time.
Daily walk to and from train, 4 hours / week.
Waiting for my son to fall asleep, 3 hours / week.
Cleaning the house, 4 hours / week.
Driving, 1 hour / week.
Grocery store, 1 hour / week.
Random solo errands, 1 hour / week.
This gives me 14 hours each week or about 700 hours every year to listen to books.
The length of books ranges a lot, especially when you are reading non-fiction. I love The Teaching Company’s offerings but they can run 30+ hours. I also just finished A Song of Ice and Fire, which was a heavy 200+ hours.
One of the biggest problems with audiobooks is that sometimes you just aren’t in the mood for them, especially dense non-fiction. So the system that I use is to always have at least 1 non-fiction and at most 1 fiction on the go at once. Audible makes it easy to switch back and forth while saving your position so that’s not a problem, and this gives your brain a break and makes listening that much easier.
Audible pro-tips:
Listen to the preview first. If you don’t like the reader’s voice it’s going to be a struggle. This doesn’t happen often, most voice actors are incredible.
Don’t listen to more than 3 non-fiction books at a time. You will likely gloss over details and you won’t retain nearly as much as you want.
Buy paper books of the audiobooks you love. Audiobooks are hard to reference later, so having a physical copy that you can skim through is key, especially for non-fiction.
Re-listen to books you love. Your brain will wander when listening and you will miss things. Re-listening lets you get the details you missed.
Download all books, don’t stream them.
Buy a subscription.
The last one is key. Audiobooks normally cost around $25–35, so this year could have cost me $1,700. Instead, I buy a subscription which gives me 24 books for $9 a book. 100% worth it.
I do still read physical books, in fact, I read about 10 physical books this year using the traditional reading times: before bed, while Jack was napping, on the train, early in the morning etc.
10 books vs 50+ books, simple math. If you love to read, and miss reading because life is busy, leverage that found time and stop reading, and start listening.
| 2024-11-08T02:34:34 | en | train |
10,823,222 | gkop | 2016-01-01T20:24:14 | Don’t Learn to Code in 2016 | null | https://blog.bloc.io/dont-learn-to-code-in-2016/ | 2 | 0 | null | null | null | no_error | Chegg Skills | Skills Programs for the Modern Workplace | null | Jaqueline BTeam Lead | Education that empowers everyoneBuild your dream career by mastering essential soft skills and technical topics through flexible learning, hands-on practice, and personalized support.INVEST IN YOUR CAREERLearn real-world skills, fastEach program features hands-on projects to equip you with the skills necessary for achieving your career goals, allowing you to explore new fields and stack courses for ongoing growth.Explore all programsEXPERIENCE LONG-TERM IMPACTA brighter future starts hereMany of our learners gain career benefits, including new skills, increased confidence and satisfaction, higher earning potential, and new job opportunities.Discover our impactSince graduating, I've been promoted and made the decision to continue my education towards a Bachelors degree. The learnings from Technology Fundamentals continue to impact me at work and in pursuit of my degree - I'm so glad I enrolled!Since Chegg Skills, I've become a penetration tester and began my bachelor's in Cybersecurity. The support from the program is invaluable in preparing you to enter the field as an entry-level IT professional.Austin DennisSecurity EngineerComResourceStart your journey with Skills todayFOR LEARNERSAccelerate your careerFind and apply for a Chegg Skills program on Guild to start learning in-demand skills.Apply for a program on GuildINTERESTED INSTITUTION PARTNERSUnlock learning pathways Contact us to learn more about Chegg Skills and how to incorporate our programs into your catalog.Get in touch1, 2, 3, 4Results reflect a Chegg Skills survey conducted among self-reporting Thinkful program graduates between February 7, 2022, and February 3, 2023. Respondent base (n=242) among 901 invites. Sample size represents this population within a margin of error of 5% at 95% confidence. Survey responses are not a guarantee of any particular results as individual experiences may vary. The survey was fielded between February 22, 2023, and August 12, 2023. Graduates invited to the survey were offered a $15 gift card. | 2024-11-08T20:58:52 | en | train |
10,823,256 | dkarapetyan | 2016-01-01T20:30:23 | Haxe – The Cross-platform Toolkit | null | http://haxe.org/ | 3 | 0 | null | null | null | no_error | Haxe - The Cross-platform Toolkit | null | null |
Haxe 4 is here!
Haxe is an open source high-level strictly-typed programming language with a fast optimizing cross-compiler.
Download 4.3.6
Released: 2024-08-07
Haxe can build cross-platform applications targeting JavaScript, C++, C#, Java, JVM, Python, Lua, PHP, Flash, and allows access to each platform's native capabilities.
Haxe has its own VMs (HashLink and NekoVM) but can also run in interpreted mode.
Code written in Haxe can be compiled to any target Haxe supports.
When Haxe?
Haxe is useful in a wide variety of domains; games, web, mobile, desktop, command-line and cross-platform APIs. Take a look at who is using Haxe and explore the use cases.
Use Cases
Many libraries
Haxelib is the package manager for Haxe, which offers many free libraries powered by the Haxe community. Manage your project dependencies and distribute libraries.
Haxelib
Latest news
Latest videos
Haxe Spring Report - Simon Krajewski
Haxe Checkstyle In Continuous Integration - Nicolas Banspach
Elvenar - Haxe: Switching to Haxe During Production - Alexander Rotter
[Re]Evolution of Forge of Empires with Haxe - Ricardo Neves & Nikolas Banspach
Closing words - Dan Korostelev
The Haxe Foundation
The goals of the Haxe Foundation are to support the Haxe ecosystem
by funding core technologies,
organizing events, helping the open-source community.
Haxe for your business?
If you are currently evaluating Haxe from either a business or technical point of view, you can contact us. We can help you either directly, or by putting you in touch with a consultant that will be able to help you understand how your company can benefit from using Haxe.
Partner program page
Haxe is what JavaScript should be: a lightweight, easy to learn, statically typed language with a real and useful compiler.
Peter Halacsy, Co-Founder & CTO at Prezi
| 2024-11-08T08:40:06 | en | train |
10,823,402 | jmount | 2016-01-01T21:00:06 | Some programming language theory in R | null | http://www.win-vector.com/blog/2016/01/some-programming-language-theory-in-r/ | 2 | 1 | [
10823418
] | null | null | no_error | Some programming language theory in R | 2016-01-01T20:52:44+00:00 | jmount |
Let’s take a break from statistics and data science to think a bit about programming language theory, and how the theory relates to the programming language used in the R analysis platform (the language is technically called “S”, but we are going to just call the whole analysis system “R”).
Our reasoning is: if you want to work as a modern data scientist you have to program (this is not optional for reasons of documentation, sharing and scientific repeatability). If you do program you are going to have to eventually think a bit about programming theory (hopefully not too early in your studies, but it will happen). Let’s use R’s powerful programming language (and implementation) to dive into some deep issues in programming language theory:
References versus values
Function abstraction
Equational reasoning
Recursion
Substitution and evaluation
Fixed point theory
To do this we will translate some common ideas from a theory called "the lambda calculus" into R (where we can actually execute them). This translation largely involves changing the word "lambda" to "function" and introducing some parentheses (which I think greatly improve readability; part of the mystery of the lambda calculus is how unreadable its preferred notation actually is).
Recursive Opus (on a Hyperbolic disk)
Lots of ink is spilled on the idea of "functional languages being based on the lambda calculus." This misses the point of the lambda calculus. Most functional programming languages deliberately include powerful features not found in the lambda calculus (such as being able to efficiently name and re-use variables and values). The interesting fact about the lambda calculus is that a re-writing system as seemingly weak as the one presented can already simulate the features that are directly implemented in typical functional languages.
Typical functional languages have more features than the lambda calculus, so in principle they may be harder to reason about than the lambda calculus. In practice the lambda calculus is itself fairly hard to reason about due to fundamental issues of recursion, lack of helpful type annotations, and illegibility due to overuse of associative notation (the shunning of helpful parentheses).
But let’s play with this in R a bit as it actually does help our “functional thinking” to see how equational reasoning, function creation, function application, and recursion are related.
The usual first example of a recursive function is the factorial function. Factorial is defined over the non-negative integers as:
factorial(0) = 1
factorial(x) = x*factorial(x-1) (for integer x>0)
It is usually written as x! (so x! is shorthand for factorial(x)). Now we are not really interested in the factorial() function as:
It is already implement in R as factorial() (see help(‘factorial’)).
Any recursive implementation of it is going to be very slow compared to a more standard "special function" implementation.
You can’t even compute factorial(200) over double precision floating point- so a small look-up table could do all the work.
The right way to implement it in properly idiomatic R would be to commit some memory space and write: prod(seq_len(x)).
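As a quick sanity check of those last two points, here is a small sketch of my own (an editor's addition, not code from the original article) that you can paste into an R session:
factorial(170) # about 7.26e306, still representable as a double
## [1] 7.257416e+306
factorial(171) # overflows double precision and returns Inf (with a warning)
## [1] Inf
prod(seq_len(5)) # the idiomatic, non-recursive way to compute 5!
## [1] 120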
But it is a somewhat familiar function that is one of the simplest examples of recursion. Here is a recursive implementation written in R:
# typical example recursive function
# function uses the fact it knows its own name to call itself
fact <- function(x) {
if(x<=0) {
return(1)
} else {
return(x*fact(x-1))
}
}
fact(5)
## [1] 120
Now suppose your programming language was weaker: maybe functions don't have names (like in the lambda calculus), or you don't trust calling yourself directly (like subroutines in early FORTRAN). You could try the following: build a function returning a function (actually a very powerful ability), and notice that the new function may call its supplied argument (as a function) at most once.
factstep <- function(h) {
function(x) {
if(x<=0) {
return(1)
} else {
return(x*h(x-1))
}
}
}
factstep(NULL)(0)
## [1] 1
Compose this beast with itself a few times; we are using the fact that each returned function is a new anonymous function (and thus has separate state) to make sure calls don't interfere. The idea is: this would work even in FORTRAN if FORTRAN allowed us to create anonymous subroutines at run-time. What we are showing is that the bookkeeping involved in function definition is already strong enough to support limited recursion (we don't need a separate facility for that, though R does have one).
f5 <- factstep(factstep(factstep(factstep(factstep(factstep(NULL))))))
f5(3)
## [1] 6
f5(4)
## [1] 24
But f5 isn’t the best implementation of factorial. It doesn’t work for long.
tryCatch(
f5(6),
error=function(e) print(e))
## <simpleError in h(x - 1): could not find function "h">
One way to solve this is to introduce what is called a fixed point function. We use the fact that we know our own function name (i.e. fix can call itself) to define a function called "fix" that calculates fixed points of other functions. By definition fix(f) is f(fix(f)), so fix(factstep) will be a function that knows how to recurse or call itself enough times. The function fix() should obey equations like the following:
fix(f) = f(fix(f))
So if x = fix(f) we have (by substitution) x = f(x), or x is a fixed point of f() (f being an arbitrary function). In our case we will write fact1 = fix(factstep) and have fact1 = factstep(fact1), which means we don't alter fact1 by one more application of factstep. fact1 is a function that seems to have an arbitrary number of applications of factstep rolled into it. So it is natural to suppose that fact1(x) = factorial(x) as both functions seem to obey the same recurrence relations.
This sort of direct algebra over functions is still using the fact that we have explicit names on the functions (which we are also using as variable names). This does require the power to call yourself, but what we are showing is that we can bootstrap one self-recursive function into a source of self-recursive functions.
Here is fix written in R:
fix <- function(f) { f(fix(f)) }
This gives us enough power to implement recursion/iteration for any function. This ability to write a function in terms of an expression or equation involving itself by name is considered ground breaking. Alan Kay called such an expression in the LISP 1.5 Programmer’s Manual “Maxwell’s Equations of Software!” (see here for some details). It is somewhat mind-blowing that both
You can write fix in terms of itself
The language implementation then essentially solves the above recursive equation (by mere substitution) and you use this to build recursive functions at well.
For example:
fact1 <- fix(factstep)
fact1(5)
## [1] 120
fact1(3)
## [1] 6
(factstep(fact1))(3) # same answer
## [1] 6
fact1(10)
## [1] 3628800
Fixed points are a bit less mysterious when applied to simpler functions, such as constants (which are their own fixed points as they ignore their argument).
# We can also apply this to a simple function like the following
g <- function(x) { 9 }
# applying the fixing operator is the same as evaluating such a simple function
fix(g)
## [1] 9
g(fix(g)) # same answer
## [1] 9
Note: fix(g) is not a fixed-point of fix(). We do not have fix(g) = fix(fix(g)), and fix() cannot find its own fixed point: fix(fix). fix(fix) fails to terminate even under R's lazy argument semantics. Also, this fix() doesn't work in languages that don't have lazy-argument semantics (which is most languages except R and Haskell).
# Bonus question: what is fix(fix)?
tryCatch(
fix(fix),
error=function(e) print(e))
## <simpleError: evaluation nested too deeply: infinite recursion / options(expressions=)?>
Why did that blow up even under R's lazy evaluation semantics? Because fix() always tried to use its argument, which caused the argument to be instantiated and in turn triggered unbounded recursion. Notice factstep() doesn't always use its argument, so it doesn't necessarily recurse forever.
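To see R's lazy argument semantics in isolation, here is a tiny sketch (an editor's addition, not from the original article). An argument that is never used is never evaluated, so we can even pass in an expression that would throw an error:
lazy_demo <- function(a, b) { a } # b is never touched
lazy_demo(1, stop("this never runs")) # returns 1; the stop() is never evaluated
## [1] 1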
The above method depended on R's lazy evaluation semantics. What if you are working with a language that has more standard aggressive evaluation semantics (arguments are evaluated before calling a function, meaning they are evaluated even if they don't end up getting used)? You can implement a fixed point operator by explicitly hiding your argument in a function construction abstraction as follows:
# Fixed point function, two argument version (works under most semantics, including Python)
fix2 <- function(f) { function(x) f(fix2(f))(x) }
fact2 <- fix2(factstep)
fact2(5)
## [1] 120
fact2(3)
## [1] 6
The idea being: with function introduction you can implement your own lazy evaluation scheme.
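As a concrete (editor-added) illustration of that idea: wrapping work in a zero-argument function, often called a "thunk", delays the work until you explicitly call it, which is essentially what fix2() does by hiding the recursive call inside a function of x.
delayed <- function() { print("computing now"); 9 } # a thunk: work wrapped in a function
# Nothing has been computed yet; we only pay the cost when we explicitly call it:
delayed()
## [1] "computing now"
## [1] 9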
The main feature we used up until now is the fact that we know the name of the function we are working with (and can thus conveniently evaluate it more than once and pass it around). A natural question is: if you didn't have this naming convenience (as you do not in the pure lambda calculus, which works only over values without variables), can you still implement recursion on your own? The answer is yes, and one of the methods is an expression called "the Y combinator."
Here is the Y combinator version of implementing recursion (a version that does not use its own name). Again, you don't have to do this if you have any of the following features already available:
Variable names
Language supplied recursion
A fixed point operator
The Y combinator implements a fixed point function using only function creation (abstraction) and application (beta-reduction), and NOT using its own variable name (in this case Y). The trick is: we can use any value passed in twice. So even though we may not have variables and names, we can give some other values access to related values (enough to implement fixed point calculations and recursion ourselves).
Here is the Y combinator written in R:
# Y combinator written in R
Y <-
function(f) {
(function(z) {
f(function(v) {
z(z)(v)
})
})(function(z) {
f(function(v) {
z(z)(v)
})
})
}
Y has the property that for any function g(): Y(g) = g(Y(g)), exactly the equation fix() obeyed. This is often written with minimal parentheses as Y g = g (Y g). Either way, Y is a solution to an equation over functions. What we wrote above isn't a derivation of the solution of the recurrence, but an explicit solution for Y that does not involve Y referencing itself. Notice it involves function introduction. The traditional way of writing the above function in lambda calculus notation is:
Y = lambda f . ( lambda x. f(x x)) ( lambda x.f(x x) )
Understand that the earlier R code is the same using R’s function introduction notation (“function” instead of “lambda”) and explicit introduction of more parenthesis to show the preferred association of operations.
Let’s prove what we have is a solution to the claimed recurrence equation. We are working over the lambda calculus, which is a theory of transformations of values The main inference step is function application or beta-reduction
function(x) { … } xvalue -> {…} with x replaced by xvalue throughout
The other step is eta-conversion which says the following two forms should be considered equivalent:
function(x) g(x) <–> g
Parenthesis and variable names are added and removed for convenience. Function application can be written as f(x) or as f x. (Notation associates to the left so z(z)(v) is equivalent to (z(z))(v).)
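Both rules have direct R analogues. Here is a small illustrative sketch (mine, not the original author's; the names inc and eta_inc are made up for the example):
(function(x) { x * x })(3) # beta-reduction: substitute 3 for x and evaluate
## [1] 9
inc <- function(x) x + 1
eta_inc <- function(x) inc(x) # eta-conversion says this is equivalent to inc itself
eta_inc(41) == inc(41)
## [1] TRUE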
Proof:
We are going to show one pattern of substitution that converts Y(g) to g(Y(g)).
The rules of the lambda calculus are that if one form can be converted to another then they are considered to be equivalent, and so we are done.
It is a non-trivial fact that the lambda-calculus is consistent. This is the Church-Rosser theorem and says if P -> Q (we can transform P to Q) and P -> R (we can transform P to R) then there is a form S such that Q->S and R->S. The idea is convertible statements form equivalence classes, so it makes sense to call them equivalent. So if we can find a conversion in any order of operations we are done (i.e. we are free to try operations in the order we want).
For clarity let’s introduce a substitution
T = function(f) ( function(z) { f(function(v) { z(z)(v) }) })
so Y = function(f) { T(f)(T(f)) }
Y(g) = (function(f) { T(f)(T(f)) })(g) # by definition
= T(g)(T(g)) # function application, call this equation 2
= ( function(z) { g(function(v) { z(z)(v) }) })(T(g)) # expanding left T(g)
= g(function(v) { T(g)(T(g))(v) }) # function application
= g(T(g)(T(g))) # eta conversion on inside g() argument
= g(Y(g)) # substitute in equation 2
Let’s try it:
fact3 <- Y(factstep)
fact3(5)
## [1] 120
fact3(3)
## [1] 6
# Bonus question: what is Y(Y)?
tryCatch(
Y(Y),
error=function(e) print(e))
## <simpleError: evaluation nested too deeply: infinite recursion / options(expressions=)?>
Remember in R we don't actually need the Y combinator as we have direct access to a large number of additional powerful constructs deliberately not found in the lambda calculus (where the Y combinator is a must):
Named variables (allowing re-use of values)
Explicit recursion (allowing direct definition of recursive functions)
Lazy evaluation (allowing recursion through mere substitution and evaluation)
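To make the contrast concrete, here is a short sketch (added by the editor, not part of the original article) of the direct support R gives you, which is why you never need the Y combinator in day-to-day R: named recursion just works, and base R's Recall() even lets an anonymous function call itself.
fact_direct <- function(x) { if(x<=0) 1 else x*fact_direct(x-1) } # plain named recursion
fact_direct(5)
## [1] 120
(function(x) { if(x<=0) 1 else x*Recall(x-1) })(5) # anonymous recursion via Recall()
## [1] 120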
R can also choose to implement the Y combinator because we have all of the essential lambda calculus tools directly available in R:
Evaluation (reductions)
Function introduction (abstraction)
And this concludes our excursion into programming language theory.
This complete article (including all code) as an R knitr worksheet can be found here.
jmount
Data Scientist and trainer at Win Vector LLC. One of the authors of Practical Data Science with R.
| 2024-11-08T06:53:37 | en | train |
10,823,415 | okasaki | 2016-01-01T21:01:22 | Mozilla Addons Blog: Loading Temporary Add-Ons | null | https://blog.mozilla.org/addons/2015/12/23/loading-temporary-add-ons/ | 2 | 0 | null | null | null | no_error | Loading temporary add-ons – Mozilla Add-ons Community Blog | null | null |
With Firefox 43 we’ve enabled add-on signing, which requires add-ons installed in Firefox to be signed. If you are running the Nightly or Dev Edition of Firefox, you can disable this globally by accessing about:config and changing the xpinstall.signatures.required value to false. In Firefox 43 and 44 you can do this on the Release and Beta versions, respectively.
In the future, however, disabling signing enforcement won’t be an option for the Beta and Release versions as Firefox will remove this option. We’ve released a signing API and the jpm sign command which will allow developers to get a signed add-on to test on stable releases of Firefox, but there is another solution in Firefox 45 aimed at add-on developers, which gives you the ability to run an unsigned, restartless add-on temporarily.
To enable this, visit a new page in Firefox, about:debugging:
In my case I’ve got a new version of an add-on I’m developing; a new WebExtension called “Sort Tabs by URL”. To load this unsigned version temporarily, click on the “Load Temporary Add-on” and select the .xpi file for the add-on. This will load an unsigned add-on temporarily, for the duration of the current browser session. You can see this in about:debugging and also by the green button in the toolbar that the add-on creates:
The add-on is only loaded temporarily; once you restart your browser, the add-on will no longer be loaded, and you’ll have to re-load it from the add-ons manager.
If you don’t want to go to the effort of making an .xpi of your add-on while developing – you can just select a file from within the add-on, and it will be loaded temporarily without having to make a zip file.
One last note: if you have an add-on installed and then temporarily install an add-on with the same id, the older add-on is disabled and the new temporary add-on is used.
| 2024-11-08T02:11:31 | en | train |
10,823,739 | mikemaccana | 2016-01-01T22:16:02 | Install Win32 OpenSSH test release | null | https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH | 49 | 45 | [
10824711,
10824162,
10824156,
10824111,
10824216,
10824439,
10824606,
10824334,
10824818
] | null | null | no_error | Install Win32 OpenSSH | null | PowerShell |
Install using WinGet
Starting with GitHub Release 8.9.1.0, OpenSSH Beta releases are available through WinGet.
With WinGet installed on the machine, use the following commands:
Search:
winget search "openssh beta"
Install:
winget install "openssh beta"
Uninstall:
winget uninstall "openssh beta"
note: to install/uninstall only the OpenSSH client or OpenSSH server, see https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH-Using-MSI for args that can be passed to winget via --override (https://learn.microsoft.com/en-us/windows/package-manager/winget/install).
Install Win32 OpenSSH (test release)
Win32-OpenSSH Github releases can be installed on Windows 7 and up.
Note these considerations and project scope first.
Download the latest build of OpenSSH.
To get links to latest downloads this wiki page.
Extract contents of the latest build to C:\Program Files\OpenSSH (make sure the binary location grants Write permission only to the SYSTEM and Administrators groups; Authenticated Users should only have Read and Execute).
In an elevated Powershell console, run the following
powershell.exe -ExecutionPolicy Bypass -File install-sshd.ps1
Open the firewall for sshd.exe to allow inbound SSH connections
New-NetFirewallRule -Name sshd -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22
Note: New-NetFirewallRule is for Windows 2012 and above servers only. If you're on a client desktop machine (like Windows 10) or Windows 2008 R2 and below, try:
netsh advfirewall firewall add rule name=sshd dir=in action=allow protocol=TCP localport=22
Start sshd (this will automatically generate host keys under %programdata%\ssh if they don't already exist)
net start sshd
Optional
To configure a default shell, see here
To setup sshd service to auto-start
Set-Service sshd -StartupType Automatic
To migrate sshd configuration from older versions (0.0.X.X), see here
Uninstall Win32 OpenSSH
Start Windows Powershell as Administrator
Navigate to the OpenSSH directory
cd 'C:\Program Files\OpenSSH'
Run the uninstall script
powershell.exe -ExecutionPolicy Bypass -File uninstall-sshd.ps1
| 2024-11-08T20:46:15 | en | train |
10,823,740 | randomname2 | 2016-01-01T22:16:12 | Where Some of the Worst Attacks on Social Science Come From | null | http://nymag.com/scienceofus/2015/12/when-liberals-attack-social-science.html | 28 | 2 | [
10825895,
10825789
] | null | null | no_error | Why Some of the Worst Attacks on Social Science Have Come From Liberals | 2015-12-30T09:03:00.000-05:00 | Jesse Singal |
Alice Dreger, author of “Galileo’s Middle Finger”
Photo: Jenny Stevenson Photography
I first read Galileo’s Middle Finger: Heretics, Activists, and the Search for Justice in Science when I was home for Thanksgiving, and I often left it lying around the house when I was doing other stuff. At one point, my dad picked it up off a table and started reading the back-jacket copy. “That’s an amazing book so far,” I said. “It’s about the politicization of science.” “Oh,” my dad responded. “You mean like Republicans and climate change?”
That exchange perfectly sums up why anyone who is interested in how tricky a construct “truth” has become in 2015 should read Alice Dreger’s book. No, it isn’t about climate change, but my dad could be excused for thinking any book about the politicization of science must be about conservatives. Many liberals, after all, have convinced themselves that it’s conservatives who attack science in the name of politics, while they would never do such a thing. Galileo’s Middle Finger corrects this misperception in a rather jarring fashion, and that’s why it’s one of the most important social-science books of 2015.
At its core, Galileo’s Middle Finger is about what happens when science and dogma collide — specifically, what happens when science makes a claim that doesn’t fit into an activist community’s accepted worldview. And many of Dreger’s most interesting, explosive examples of this phenomenon involve liberals, not conservatives, fighting tooth and nail against open scientific inquiry.
When Dreger criticizes liberal politicization of science, she isn’t doing so from the seat of a trolling conservative. Well before she dove into some of the biggest controversies in science and activism, she earned her progressive bona fides. A historian of science by training, she spent about a decade early in her career advocating on behalf of intersex people — those born with neither “traditional” male nor female genitalia. For a long time, established medical practice was for the doctor or doctors present at childbirth to make the call one way or another and effectively carve a newborn’s genitals into the “proper” configuration, and in some cases to eventually prescribe courses of potentially harmful or unnecessary hormones. Sometimes the child in question was never even informed that they hadn’t been born a boy or a girl in the classical sense — indeed, sometimes even their parents weren’t. To the medical Establishment, all that mattered — even above patients’ physical and psychological health — was that young bodies fit neatly into one established gender category or the other.
Working together with a group of intersex activists, Dreger lobbied and educated tirelessly, eventually nudging the medical Establishment away from this protocol and toward a new, more humane norm in cases of genital malformation that don’t pose any health risk: Leave the kid’s genitals alone, allow them to grow up a little, and see what they and their family want to do later on. There doesn’t need to be a rush to assign gender and take aggressive medical action to enforce it.
Eventually, as a result of burnout and other factors, Dreger’s work in this area waned, and she moved on to other projects. Through some of the social networks she had developed in her intersex work, she became interested in the broader world of scientific controversies, and began investigating them as thoroughly as possible — interviewing hundreds of people, chasing down primary documents, and so on. What she found, over and over, was that researchers whose conclusions didn’t line up with politically correct orthodoxies — whether the orthodoxy in question involved sexual abuse, transgender issues, or whatever else — often faced dire, career-threatening consequences simply for doing their jobs.
Two examples stand out as particularly egregious cases in which solid social science was attacked in the name of progressive causes. The first involves Napoleon Chagnon, an extremely influential anthropologist who dedicated years of his life to understanding and living among the Yanomamö, an indigenous tribe situated in the Amazon rain forest on the Brazil-Venezuela border — there are a million copies of his 1968 book Yanomamö: The Fierce People in print, and it’s viewed by many as an ethnographic classic. Chagnon made ideological enemies along the way; for one thing, he has long believed that human behavior and culture can be partially explained by evolution, which in some circles has been a frowned-upon idea. Perhaps more important, he has never sentimentalized his subjects, and his portrayal of the Yanomamö included, as Dreger writes, “males fighting violently over fertile females, domestic brutality, ritualized drug use, and ecological indifference.” Dreger suggests that Chagnon’s reputation as a careful, dedicated scholar didn’t matter to his critics — what mattered was that his version of the Yanomamö was “Not your standard liberal image of the unjustly oppressed, naturally peaceful, environmentally gentle rain-forest Indian family.”
In 2000, Chagnon’s critics seized upon a once-in-a-career opportunity to go after him. That was the year a journalist named Patrick Tierney published Darkness in El Dorado: How Scientists and Journalists Devastated the Amazon. The book — and a related New Yorker article by Tierney — leveled a series of spectacular allegations against Chagnon and James V. Neel Sr., a geneticist and physician with whom Chagnon had collaborated during his work with the Yanomamö (Neel died of cancer shortly before the book’s publication). Among other things, Tierney charged that Chagnon and Neel had intentionally used a faulty vaccine to infect the Yanomamö with measles so as to test Nazi-esque eugenics theories, and that one or both men had manipulated data, started wars on purpose, paid tribespeople to kill one another, and “purposefully with[held] medical care while experimental subjects died from the allegedly vaccine-induced measles,” as Dreger writes.
These charges stuck in part because Terence Turner and Leslie Sponsel, two anthropologists who disliked Chagnon and his work, sent the American Anthropological Association an alarming letter about Tierney’s allegations prior to the publication of Darkness in El Dorado. Rather than wait to see if the spectacular claims in the book passed the smell test, the AAA responded by quickly launching a full investigation in the form of the so-called El Dorado Task Force — a move that led to a number of its members resigning in protest. A media firestorm engulfed Chagnon — “Scientist ‘killed Amazon indians to test race theory’,” read a Guardian headline — and he was forced to defend himself against accusations that he had brutalized members of a tribe he had devoted his career to living with and studying and, naturally, had developed a strong sense of affection for in the process. A number of fellow anthropologists and professional organizations came to the defense of Chagnon and Neel, pointing out obvious problems with Tierney’s claims and timeline, but these voices were drowned out by the hysteria over the evil, murderous anthropologist and his doctor-accomplice. Dreger writes that Chagnon’s “career had essentially been halted by the whole mess.” (Chagnon’s memoirs, published in 2013, are entitled Noble Savages: My Life Among Two Dangerous Tribes — the Yanomamö and the Anthropologists.)
There was, it turns out, nothing to these claims. Over the course of a year of research and interviews with 40 people involved in the controversy in one way or another, Dreger discovered the disturbing, outrageous degree to which the charges against Chagnon and Neel were fabricated — to the point where some of the numerous footnotes in Tierney’s book plainly didn’t support his own claims. All the explosive accusations about Nazi-like activities and exploitation, and the intentional fomenting of violence, were simply made up or willfully misinterpreted. Worse, some of them could have been easily debunked with just a tiny bit of research — in one case, it took Dreger all of an hour in an archive of Neel’s papers to find strong evidence refuting the claim that he helped intentionally infect the Yanomamö with measles (a claim that was independently debunked by others, anyway).
In the end, Dreger published the results of her investigation in the journal Human Nature, recounting the full details of Chagnon's ordeal at the hands of Tierney, and the many ways Tierney fabricated and misrepresented data to attack the anthropologist and Neel. Darkness in El Dorado is still available on Amazon, its original, glowing reviews and mention of its National Book Award nomination intact; and Tierney's New Yorker article is still online, with no editor's note explaining the factual inaccuracies contained therein.
***
Dreger also recounts her earlier investigation into the controversy surrounding J. Michael Bailey, a Northwestern University psychologist and researcher of human sexuality and former chair of that university’s psychology department. In 2003, Bailey released The Man Who Would Be Queen: The Science of Gender-Bending and Transsexualism, a book in which he relates the stories of several transgender women and promotes the theories of Ray Blanchard, a Canadian sex researcher with a long history of working with patients who were born anatomically male but hoped to undergo gender reassignment.
In his book, Bailey explains that Blanchard believed his patients who had transitioned, or who were hoping to, fit into two rather different categories. Some were “transkids” (a non-clinical term Dreger, not Bailey, uses): folks who were born as boys but had been very effeminate by societal standards since childhood, and who were attracted to men once they hit puberty. In these cases, Blanchard posited, access to sex and intimate companionship might have been one component of what eventually pushed them to start presenting as female. As Dreger explains, the fact that transkids come across so effeminate “means that their sexual opportunities are often limited while they are presenting themselves as men. Straight men aren’t interested in having sex with them because they’re male, and gay men often aren’t sexually attracted to them because most gay men are sexually attracted to masculinity, not femininity, and these guys are really femme.” Transitioning, then, gives transkids an opportunity to have the relationships with men they’d like to — because they’re effeminate, they can pass as women whom straight men find themselves attracted to.
The second, more controversial type of male-to-female transitioner posited by Blanchard consisted of folks with so-called autogynephilia. These individuals have usually presented as male for most of their lives and are attracted to women, but they discover along the way that they are sexually aroused by the idea of being a woman. They tend to transition later in life, often after having married women and started families.
There’s also a really important cultural component to Blanchard’s theory, as Dreger writes:
Blanchard’s taxonomy of male-to-female transexuals recognized the importance of sexual orientation in the gendered self-identities of both those who begin as homosexual males and those who experience amour de soi en femme [the French phrase for “love of oneself as a woman”]. However, he didn’t see sexual orientation as the only thing a male factors in when deciding whether to transition. He recognized that in one environment — say, an urban gay neighborhood like Chicago’s Boystown — an ultrafemme gay man might find reasonable physical safety, employment, and sexual satisfaction simply by living as an ultrafemme gay man. But in a very different environment — say, a homophobic ethnic enclave in Chicago — he might find life survivable only via complete transition to womanhood. Whether a transkid grows up to become a gay man or a transgender woman would depend on the individual’s interaction with the surrounding cultural environment. Similarly, an autogynephilic man might not elect transition if his cultural milieu would make his post-transition life much harder.
There is, to say the least, a huge amount going on here. But what’s key to keep in mind is that some transgender people and activists hold very dear the idea that they have simply been born in the wrong type of body, that transitioning allows them to effectively fix a mistake that nature made. The notion that there might be a cultural component to the decision to transition, or that sexuality, rather than a hardwired gender identity, could be a factor, complicates this gender-identity-only narrative. It also brings sexuality back into a conversation that some trans activists have been trying to make solely about gender identity — roughly parallel to the way some gay-rights activists sweep conversations about actual gay sexuality under the rug, preferring to focus on idealized, unthreatening-to-heterosexuals portrayals of committed gay relationships between clean-cut, taxpaying adults.
But as Dreger explains, Bailey, being someone with a penchant for poking mischievously at political correctness, wasn’t too concerned about the political dimension of what he was arguing in his book. From a scientific perspective, he explicitly viewed the idea that “everybody is truly and easily assignable to one of two gender identities” as an oversimplification; part of his motivation for writing The Man Who Would Be Queen was to try to blow it up, to argue that transsexuality is more complicated than that. So it shouldn’t be surprising that some trans activists and allies didn’t appreciate the book’s argument — and they obviously have every right to disagree with Bailey and Blanchard’s views. What is surprising is just how big an explosion The Man Who Would Be Queen sparked, and how underhanded the campaign against Bailey subsequently got.
A small group of activists led by Lynn Conway, a transgender University of Michigan electrical engineer and computer scientist, and Andrea James, a trans activist, started going after Bailey shortly after the book’s publication. In allegations laid out on a large UM-hosted web page built by Conway, they charged that Bailey — as summed up by Dreger — “had failed to get ethics board approval for studies of transgender research subjects as required by federal regulation; that he had violated confidentiality; that he had been practicing psychology without a license; and that he had slept with a trans woman while she was his research subject.” Central to their argument was the idea that Bailey had dragged his trans subjects out into the spotlight without their consent, that he had callously manipulated them and used them for his own purposes — a particularly potent charge given that outing someone as transgender can, in the most extreme instances, put their life at risk given the scary levels of violence this population faces at the hands of bigots. (Conway’s website originally included Dreger’s own name on a list of trans activists and allies who were furious with Bailey over his book, even though, at that time, Dreger was only faintly familiar with the controversy and had never even expressed a public opinion on the issue. Dreger asked Conway to remove her name.)
James, in Dreger’s telling, went after Bailey with at-times-scary ferocity, engaging in a host of intimidation tactics: She posted photos of Bailey’s young daughter online with nasty text underneath (in one case calling her a “cock-starved exhibitionist”), sent angry emails to his colleagues, and quickly turned on anyone who didn’t join in her crusade — including some who said that they felt that their own life stories had been accurately and sympathetically captured in Bailey’s book. (James herself, Dreger reveals, acknowledged her own autogynephilia — using that exact word — in a 1998 letter.)
The allegations were so serious, and came in such a heaping quantity, that Bailey’s reputation was permanently tarnished in the eyes of many casual observers. What those observers can’t have known was his long-standing history of support for transgender people — he had used his perch as a researcher to advocate passionately for better treatment of this population and for improved access to gender-reassignment resources, and had even, at the request of one of the subjects in his book, written letters to physicians on behalf of a group of young trans women who were seeking reassignment surgery. Before the full weight of the controversy descended, The Man Who Would Be Queen had been nominated for the Lambda Literary Award’s 2004 prize in the transgender/genderqueer category for its textured, supportive portrayal of its transgender subjects. As a result of immense pressure — Deirdre McCloskey, a respected scholar of economics and history who wrote a memoir about her male-to-female transition, and who helped Conway and James go after Bailey, said nominating the book for the award “would be like nominating Mein Kampf for a literary prize in Jewish studies” — the organization voted to yank the nomination.
Just as she would later dive deep into the controversy that ensnared Napoleon Chagnon, Dreger devoted a huge amount of time to untangling what had really happened. It would take pages to even concisely summarize what she found — she eventually published her almost-50,000-word investigation in Archives of Sexual Behavior, in an article which starts, “This is not a simple story. If it were, it would be considerably shorter.”
But to get a flavor of the quality of the evidence amassed against Bailey by his critics, consider one charge: that Bailey had practiced psychology without a license. Conway, James, and McCloskey filed a formal complaint with the state of Illinois claiming that, since Bailey lacked a license as a clinical psychologist, he had violated state regulations by writing those letters in support of the young trans women seeking to transition. Not only was there no legal basis to the claim — if you don’t receive compensation for your services, which Bailey didn’t, you don’t even need a license to provide counseling in Illinois — but Bailey was completely forthright in his letters supporting the women, both about the fact that he had only had brief conversations with them (as opposed to having provided them with extensive counseling) and about his own qualifications and expertise — he even attached copies of his CV. “Presumably all this was why [Illinois] never bothered to pursue the charge,” writes Dreger, “although you’d never know that from reading the press accounts, which mentioned only the complaints, not that they had petered out.”
And that’s just one example. Over and over, in instances that covered every facet of the campaign against Bailey — including the charge that he had had sex with one of his subjects — Dreger discovered an astounding level of dishonesty and manipulation on the part of Bailey’s critics:
After nearly a year of research, I could come to only one conclusion: The whole thing was a sham. Bailey's sworn enemies had used every clever trick in the book — juxtaposing events in misleading ways, ignoring contrary evidence, working the rhetoric, and using anonymity whenever convenient, to make it look as though virtually every trans woman represented in Bailey's book had felt abused by him and had filed a charge.
Of course, of all the right-thinking people who know, based on surface-level reporting or blog posts they read, that Mike Bailey is an anti-trans monster, only a tiny percentage are ever going to read, or even learn about, Dreger’s investigation. That’s the problem.
***
There’s a risk of getting too cute here, of drawing false, unwarranted equivalencies. In a sense, my dad was right in what he was getting at — conservatives have done a lot of damage to sound science in the United States. It’s conservative lawmakers and organizations who have refused to acknowledge anthropogenic climate change, who have rallied to keep evolution out of textbooks and comprehensive sex education out of classrooms, who have stymied life-saving research into stem cells and gun control.
But that’s in the world of politics and lawmaking, where conservatives often have a numerical advantage. In the halls of social-science academia, where liberals do, it’s telling that some of the same sorts of feeding frenzies occur. This should stand as a wake-up call, as a rebuke to the smugness that sometimes infects progressive beliefs about who “respects” science more. After all, what both the Bailey and Chagnon cases have in common — alongside some of the others in Galileo’s Middle Finger — is the extent to which groups of progressive self-appointed defenders of social justice banded together to launch full-throated assaults on legitimate science, and the extent to which these attacks were abetted by left-leaning academic institutions and activists too scared to stand up to the attackers, often out of a fear of being lumped in with those being attacked, or of being accused of wobbly allyship.
It’s hard not to come away from Dreger’s wonderful book feeling like we’re doomed. Think about all the time and effort it took her — a professionally trained historian as equipped as anyone to dig into complex morasses of conflicting claims — to excavate the full details of just one of these controversies. Who has a year to research and produce a fact-finding report that only a tiny percentage of people will ever read or care about? Who’s going to figure out exactly how some contested conversation between Mike Bailey and a young transgender woman in Chicago in two thousand whatever actually went down? Dreger herself is transparent about the fact that these days she can only afford to do what she does because her physician husband has a high-paying job at a medical school. There aren’t a lot of Alice Dregers. Nor are there, these days, a lot of investigative journalists with the time and resources to understand complicated debates involving controversial science. There is, however, a lot of identity-driven content on the internet, because it’s easy to produce and tends to travel well. If you’re a writer or an editor looking for a quick hit, outrage at a perceived slight against some vulnerable group is a surefire bet.
While the false charges against Chagnon and Bailey were certainly helped along by the internet, neither episode occurred in our present age of bottomless social-media outrage. Imagine if the Bailey controversy dropped tomorrow. Imagine how various outlets, all racing to publish the hottest take and all forced to rely on only those sparse, ambiguous scraps of evidence that filter down in the first days of an uproar over an unfamiliar subject, would cover it. If anything, all the incentives have gotten worse; if anything, the ranks of dedicated, safely employed critical thinkers in a position to be the voice of reason have thinned. In all likelihood, the coverage today would be far uglier and more prejudicial than it was when the scandal actually broke.
Science can’t function in this sort of pressure-cooker environment. The way things are heading, with the lines of communication between scientific institutions and the general public growing increasingly direct (a good thing in many cases, to be sure), and with instant, furious reaction the increasingly favored response to anything with a whiff of injustice to it — details be damned — it will become hard, if not impossible, for careful researchers unencumbered by dogmatic ideology to make good-faith efforts to understand controversial subjects, and to then publish their findings. Chagnon and Bailey, after all, were good-faith researchers. They had both demonstrated, in the way only years of diligent scholarly work can, that they were fascinated by and cared deeply about their subjects. In their published writing, both men surfaced and amplified stories about hidden communities that never would have reached the wider world otherwise. And yet all this work counted for zilch, because when controversy erupted, they fit an easy-to-process, irresistible story line: They were white men exploiting vulnerable populations for personal gain. Imagine being a young psychologist genuinely interested in transgender issues, with a genuine desire to study them rigorously. What would the Bailey blowup tell you about the wisdom of staking your career on that field of research?
We should want researchers to poke around at the edges of “respectable” beliefs about gender and race and religion and sex and identity and trauma, and other issues that make us squirm. That’s why the scientific method was invented in the first place. If activists — any activists, regardless of their political orientation or the rightness of their cause — get to decide by fiat what is and isn’t an acceptable interpretation of the world, then science is pointless, and we should just throw the whole damn thing out.
| 2024-11-08T01:27:40 | en | train |
10,823,850 | gokhan | 2016-01-01T22:42:10 | What Is Going to Happen in 2016 | null | http://avc.com/2016/01/what-is-going-to-happen-in-2016/ | 8 | 0 | null | null | null | no_error | What Is Going To Happen In 2016 | -0001-11-30T00:00:00+00:00 | Fred Wilson |
It’s easier to predict the medium to long term future. We will be able to tell our cars to take us home after a late night of new year’s partying within a decade. I sat next to a life sciences investor at a dinner a couple months ago who told me cancer will be a curable disease within the next decade. As amazing as these things sound, they are coming and soon.
But what will happen this year that we are now in? That’s a bit trickier. But I will take some shots this morning.
Oculus will finally ship the Rift in 2016. Games and other VR apps for the Rift will be released. We just learned that the Touch controller won’t ship with the Rift and is delayed until later in 2016. I believe the initial commercial versions of Oculus technology will underwhelm. The technology has been so hyped and it is hard to live up to that. Games will be the strongest early use case, but not everyone is going to want to put on a headset to play a game. I think VR will only reach its true potential when they figure out how to deploy it in a more natural way.
We will see a new form of wearables take off in 2016. The wrist is not the only place we might want to wear a computer on our bodies. If I had to guess, I would bet on something we wear in or on our ears.
One of the big four will falter in 2016. My guess is Apple. They did not have a great year in 2015 and I’m thinking that it will get worse in 2016.
The FAA regulations on the commercial drone industry will turn out to be a boon for the drone sector, legitimizing drone flights for all sorts of use cases and establishing clear rules for what is acceptable and what is not.
The trend towards publishing inside of social networks (Facebook being the most popular one) will go badly for a number of high profile publishers who won’t be able to monetize as effectively inside social networks, and there will be at least one high profile victim of this strategy who will go under as a result.
Time Warner will spin off its HBO business to create a direct competitor to Netflix and the independent HBO will trade at a higher market cap than the entire Time Warner business did pre spinoff.
Bitcoin finally finds a killer app with the emergence of Open Bazaar protocol powered zero take rate marketplaces. (note that OB1, an open bazaar powered service, is a USV portfolio company).
Slack will become so pervasive inside of enterprises that spam will become a problem and third party Slack spam filters will emerge. At the same time, the Slack platform will take off and building Slack bots will become the next big thing in enterprise software.
Donald Trump will be the Republican nominee and he will attack the tech sector for its support of immigrant labor. As a result the tech sector will line up behind Hillary Clinton who will be elected the first woman President.
Markdown mania will hit the venture capital sector as VC firms follow Fidelity’s lead and start aggressively taking down the valuations in their portfolios. Crunchbase will start capturing this valuation data and will become a de-facto “yahoo finance” for the startup sector. Employees will realize their options are underwater and will start leaving tech startups in droves.
Some of these predictions border on the ridiculous and that is somewhat intentional. I think there is an element of truth (or at least possibility) in all of them. And I will come back to this list a year from now and review the results.
Best wishes to everyone for a happy and healthy 2016.
| 2024-11-08T15:57:15 | en | train |
10,824,251 | NvidiaCUDA | 2016-01-02T00:07:39 | FreeBSD Community Breathes Sigh of Relief as Toxic Activist Randi Harper Quits | null | http://www.breitbart.com/tech/2016/01/01/freebsd-community-breathes-sigh-of-relief-as-toxic-activist-randi-harper-finally-quits/ | 1 | null | null | null | true | no_error | FreeBSD Community Breathes Sigh Of Relief As Toxic Activist Randi Harper Finally Quits | 2016-01-01T22:00:49+00:00 | Milo |
A wave of relief is rippling across the FreeBSD community today, as the open source project’s most notorious “contributor,” Randi Harper, announced she was leaving for good.
Harper, who once went by the name “FreeBSDGirl” on Twitter, had been involved with FreeBSD for over 10 years – although her actual contributions are difficult to track down. But in a lengthy post on her blog, which reveals that her association with FreeBSD had become so toxic that the project asked her to change her Twitter username, Harper announced she was dissociating herself after 12 years.
Harper frames her departure as her own decision, but she has of course been regarded by everyone associated with FreeBSD as a liability for some time. In her long-winded post (I read these things so you don’t have to), Harper unleashes a blistering array of bizarre allegations against the FreeBSD foundation, including a claim that they stood by while she was being threatened online, and that she was “tone policed” by someone at FreeBSD who suggested she “be nicer” online.
No wonder that one stung her. Harper is one of the nastiest pieces of work on the internet. She is a self-proclaimed “anti-abuse activist” who has in fact spent most of the last two years generating hate-filled online mobs against her opponents, as voluminous reporting by Breitbart has demonstrated. Famous for telling people to “set themselves on fire,” this anti-abuse activist once drove a woman to tears for disagreeing with her.
Harper’s post details her history with FreeBSD, a story that is, of course, filled with fibs. She repeated her claim to have been responsible for fixing a bug that prevented FreeBSD from being installed via a USB thumbdrive. However, solutions for this problem existed for years before Harper arrived on the scene.
She also claims to have been harassed and threatened by another FreeBSD committer, but IRC logs of their discussion, unsurprisingly, show that Harper was doing most of the harassing. Harper is the consummate crybully – a vicious, aggressive firestarter who turns into a mewling victim whenever she receives a hint of her own medicine.
Judging by her blog post, she seems genuinely astonished that fellow FreeBSD developers weren’t taking her side. “A lot of developers didn’t know what was happening, just that one of the few women in the project was mad at another developer, and he quit. It made it look like the bullying was coming from my end.”
The bullying, coming from your end? Surely not, Randi!
Harper also sought to pressure the FreeBSD foundation into instituting a code of conduct for all their project members. Amazingly, they agreed, following the lead of a number of other open source projects that have instituted wacky, social justice warrior-led codes of conduct for their contributors in recent months. However, Harper still wasn’t satisfied.
“They later published a Code of Conduct, which went so far as to use the term “meritocracy”, and didn’t make a clear distinction between code mediation and responding to abuse.” That’s right, readers. One of the reasons Harper left FreeBSD was because the project valued meritocracy.
No doubt sick and tired of having to deal with her, and the damage she was doing to their public image, FreeBSD decided to ask Harper to remove the name of their project from her Twitter handle (then “@FreeBSDGirl”). According to Harper: “I received an email from this person threatening to involve the FreeBSD Foundation lawyers if I didn’t change my username immediately.”
Harper ends her post with a farewell to FreeBSD. “After 12 years, I left the IRC channel, and will not be going to any BSD conferences or participating in the community going forward. I remain friends with many people involved with the project, but FreeBSD no longer feels like my home.”
| 2024-11-08T10:12:31 | en | train |
10,824,287 | miiiiiike | 2016-01-02T00:16:54 | The Exemplary Narcissism of Snoopy | null | http://www.theatlantic.com/magazine/archive/2015/11/the-exemplary-narcissism-of-snoopy/407827/?single_page=true | 1 | 0 | null | null | null | no_error | Why Snoopy Is Such a Controversial Figure to ‘Peanuts’ Fans | 2015-10-09T11:30:00Z | Sarah Boxer | Charles M. Schulz Museum and Research CenterThe Exemplary Narcissism of SnoopySome of Charles Schulz’s fans blame the cartoon dog for ruining Peanuts. Here’s why they’re wrong.It really was a dark and stormy night. On February 12, 2000, Charles Schulz—who had single-handedly drawn some 18,000 Peanuts comic strips, who refused to use assistants to ink or letter his comics, who vowed that after he quit, no new Peanuts strips would be made—died, taking to the grave, it seemed, any further adventures of the gang.Hours later, his last Sunday strip came out with a farewell: “Charlie Brown, Snoopy, Linus, Lucy … How can I ever forget them.” By then, Peanuts was carried by more than 2,600 newspapers in 75 countries and read by some 300 million people. It had been going for five decades. Robert Thompson, a scholar of popular culture, called it “arguably the longest story told by a single artist in human history.”The arrival of The Peanuts Movie this fall breathes new life into the phrase over my dead body—starting with the movie’s title. Schulz hated and resented the name Peanuts, which was foisted on him by United Feature Syndicate. He avoided using it: “If someone asks me what I do, I always say, ‘I draw that comic strip with Snoopy in it, Charlie Brown and his dog.’ ” And unlike the classic Peanuts television specials, which were done in a style Schulz approvingly called “semi-animation,” where the characters flip around rather than turning smoothly in space, The Peanuts Movie (written by Schulz’s son Craig and grandson Bryan, along with Bryan’s writing partner, Cornelius Uliano) is a computer-generated 3-D-animated feature. What’s more, the Little Red-Haired Girl, Charlie Brown’s unrequited crush, whom Schulz promised never to draw, is supposed to make a grand appearance. AAUGH!!!Before all that happens, before the next generation gets a warped view of what Peanuts is and was, let’s go back in time. Why was this comic strip so wildly popular for half a century? How did Schulz’s cute and lovable characters (they’re almost always referred to that way) hold sway over so many people—everyone from Ronald Reagan to Whoopi Goldberg?Peanuts was deceptive. It looked like kid stuff, but it wasn’t. The strip’s cozy suburban conviviality, its warm fuzziness, actually conveyed some uncomfortable truths about the loneliness of social existence. The characters, though funny, could stir up shockingly heated arguments over how to survive and still be a decent human being in a bitter world. Who was better at it—Charlie Brown or Snoopy?The time is ripe to see what was really happening on the pages of Peanuts during all those years. Since 2004, the comics publisher Fantagraphics has been issuing The Complete Peanuts, both Sunday and daily strips, in books that each cover two years and include an appreciation from a notable fan. (The 25-volume series will be completed next year.) 
To read them straight through, alongside David Michaelis’s trenchant 2007 biography, Schulz and Peanuts, is to watch the characters evolve from undifferentiated little cusses into great social types.In the stone age of Peanuts—when only seven newspapers carried the strip, when Snoopy was still an itinerant four-legged creature with no owner or doghouse, when Lucy and Linus had yet to be born—Peanuts was surprisingly dark. The first strip, published on October 2, 1950, shows two children, a boy and a girl, sitting on the sidewalk. The boy, Shermy, says, “Well! Here comes ol’ Charlie Brown! Good ol’ Charlie Brown … Yes, sir! Good ol’ Charlie Brown.” When Charlie Brown is out of sight, Shermy adds, “How I hate him!” In the second Peanuts strip the girl, Patty, walks alone, chanting, “Little girls are made of sugar and spice … and everything nice.” As Charlie Brown comes into view, she slugs him and says, “That’s what little girls are made of!”Although key characters were missing or quite different from what they came to be, the Hobbesian ideas about society that made Peanuts Peanuts were already evident: People, especially children, are selfish and cruel to one another; social life is perpetual conflict; solitude is the only peaceful harbor; one’s deepest wishes will invariably be derailed and one’s comforts whisked away; and an unbridgeable gulf yawns between one’s fantasies about oneself and what others see. These bleak themes, which went against the tide of the go-go 1950s, floated freely on the pages of Peanuts at first, landing lightly on one kid or another until slowly each theme came to be embedded in a certain individual—particularly Lucy, Schroeder, Charlie Brown, Linus, and Snoopy.In other words, in the beginning all the Peanuts kids were, as Al Capp, the creator of Li’l Abner, observed, “good mean little bastards eager to hurt each other.” What came to be Lucy’s inimitable brand of bullying was suffused throughout the Peanuts population. Even Charlie Brown was a bit of a heel. In 1951, for example, after watching Patty fall off a curb into some mud, he smirks: “Right in the mud, eh? It’s a good thing I was carrying the ice cream!”Charles M. Schulz Museum and Research CenterMany early Peanuts fans—and this may come as a shock to later fans raised on the sweet milk of Happiness Is a Warm Puppy—were attracted to the strip’s decidedly unsweet view of society. Matt Groening, the creator of the strip Life in Hell and The Simpsons, remembers, “I was excited by the casual cruelty and offhand humiliations at the heart of the strip.” Garry Trudeau, of Doonesbury fame, saw Peanuts as “the first Beat strip” because it “vibrated with ’50s alienation.” And the editors of Charlie Mensuel, a raunchy precursor to the even raunchier Charlie Hebdo, so admired the existential angst of the strip that they named both publications after its lead character.At the center of this world was Charlie Brown, a new kind of epic hero—a loser who would lie in the dark recalling his defeats, charting his worries, planning his comebacks. One of his best-known lines was “My anxieties have anxieties.” Although he was the glue holding together the Peanuts crew (and its baseball team), he was also the undisputed butt of the strip. His mailbox was almost always empty. His dog often snubbed him, at least until suppertime, and the football was always yanked away from him. The cartoonist Tom Tomorrow calls him a Sisyphus. Frustration was his lot. 
When Schulz was asked whether for his final strip he would let Charlie Brown make contact with the football, he reportedly replied, “Oh, no! Definitely not! … That would be a terrible disservice to him after nearly half a century.”Although Schulz denied any strict identification with Charlie Brown (who was actually named for one of Schulz’s friends at the correspondence school in Minneapolis where Schulz learned and taught drawing), many readers assumed they were one and the same. More important for the strip’s success, readers saw themselves in Charlie Brown, even if they didn’t want to. “I aspired to Linus-ness; to be wise and kind and highly skilled at making gigantic structures out of playing cards,” the children’s-book author Mo Willems notes in one of the essays in the Fantagraphics series. But, he continues, “I knew, deep down, that I was Charlie Brown. I suspect we all did.”Well, I didn’t. And luckily, beginning in 1952 (after Schulz moved from his hometown, St. Paul, Minnesota, to Colorado Springs for a year with his first wife, Joyce, and her daughter, Meredith), there were plenty more alter egos to choose from. That was the year the Van Pelts were born. Lucy, the fussbudget, who was based at first on young Meredith, came in March. Lucy’s blanket-carrying little brother, Linus, Schulz’s favorite character to draw (he would start with his pen at the back of the neck), arrived only months later.And then, of course, there was Snoopy, who had been around from the outset (Schulz had intended to name him Sniffy) and was fast evolving into an articulate being. His first detailed expression of consciousness, recorded in a thought balloon, came in response to Charlie Brown making fun of his ears: “Kind of warm out today for ear muffs, isn’t it?” Snoopy sniffs: “Why do I have to suffer such indignities!?”I like to think that Peanuts and identity politics grew up together in America. By 1960, the main characters—Charlie Brown, Linus, Schroeder, Snoopy—had their roles and their acolytes. Even Lucy had her fans. The filmmaker John Waters, writing an introduction to one of the Fantagraphics volumes, gushes:I like Lucy’s politics (“I know everything!” …), her manners (“Get out of my way!” …), her narcissism … and especially her verbal abuse rants … Lucy’s “total warfare frown” … is just as iconic to me as Mona Lisa’s smirk.Finding one’s identity in the strip was like finding one’s political party or ethnic group or niche in the family. It was a big part of the appeal of Peanuts.Every character was a powerful personality with quirky attractions and profound faults, and every character, like some saint or hero, had at least one key prop or attribute. Charlie Brown had his tangled kite, Schroeder his toy piano, Linus his flannel blanket, Lucy her “Psychiatric Help” booth, and Snoopy his doghouse.In this blessedly solid world, each character came to be linked not only to certain objects but to certain kinds of interactions, too, much like the main players in Krazy Kat, one of the strips that Schulz admired and hoped to match. 
But unlike Krazy Kat, which was built upon a tragically repetitive love triangle that involved animals hurling bricks, Peanuts was a drama of social coping, outwardly simple but actually quite complex.Charlie Brown, whose very character depended on his wishes being stymied, developed what the actor Alec Baldwin, in one of the Fantagraphics introductions, calls a kind of “trudging, Jimmy Stewart–like decency and predictability.” The Charlie Brown way was to keep on keeping on, standing with a tangled kite or a losing baseball team day after day. Michaelis, Schulz’s biographer, locates the essence of Charlie Brown—and Peanuts itself—in a 1954 strip in which Charlie Brown visits Shermy and watches as he “plays with a model train set whose tracks and junctions and crossings spread … elaborately far and wide in Shermy’s family’s living room.” After a while,Charlie Brown pulls on his coat and walks home … [and] sits down at his railroad: a single, closed circle of track … Here was the moment when Charlie Brown became a national symbol, the Everyman who survives life’s slings and arrows simply by surviving himself.In fact, all of the characters were survivors. They just had different strategies for survival, none of which was exactly prosocial. Linus knew that he could take his blows philosophically—he was often seen, elbows on the wall, calmly chatting with Charlie Brown—as long as he had his security blanket nearby. He also knew that if he didn’t have his blanket, he would freak out. (In 1955 the child psychiatrist D. W. Winnicott asked for permission to use Linus’s blanket as an illustration of a “transitional object.”)Lucy, dishing out bad and unsympathetic advice from her “Psychiatric Help” booth, was the picture of bluster. On March 27, 1959, Charlie Brown, the first patient to visit her booth, says to Lucy, “I have deep feelings of depression … What can I do about this?” Lucy replies: “Snap out of it! Five cents, please.” That pretty much sums up the Lucy way.FantagraphicsSchroeder at his piano represented artistic retreat—ignoring the world to pursue one’s dream. And Snoopy’s coping philosophy was, in a sense, even more antisocial than Schroeder’s. Snoopy figured that since no one will ever see you the way you see yourself, you might as well build your world around fantasy, create the person you want to be, and live it out, live it up. Part of Snoopy’s Walter Mitty–esque charm lay in his implicit rejection of society’s view of him. Most of the kids saw him as just a dog, but he knew he was way more than that.Those characters who could not be summed up with both a social strategy and a recognizable attribute (Pig-Pen, for instance, had an attribute—dirt—but no social strategy) became bit players or fell by the wayside. Shermy, the character who uttered the bitter opening lines of Peanuts in 1950, became just another bland boy by the 1960s. Violet, the character who made endless mud pies, withheld countless invitations, and had the distinction of being the first person to pull the football away from Charlie Brown, was mercilessly demoted to just another snobby mean girl. Patty, one of the early stars, had her name recycled for another, more complicated character, Peppermint Patty, the narcoleptic tomboy who made her first appearance in 1966 and became a regular in the 1970s. (Her social gambit was to fall asleep, usually at her school desk.)Once the main cast was set, the iterations of their daily interplay were almost unlimited. 
“A cartoonist,” Schulz once said, “is someone who has to draw the same thing every day without repeating himself.” It was this “infinitely shifting repetition of the patterns,” Umberto Eco wrote in The New York Review of Books in 1985, that gave the strip its epic quality. Watching the permutations of every character working out how to get along with every other character demanded “from the reader a continuous act of empathy.”For a strip that depended on the reader’s empathy, Peanuts often involved dramas that displayed a shocking lack of empathy. And in many of those dramas, the pivotal figure was Lucy, the fussbudget who couldn’t exist without others to fuss at. She was so strident, Michaelis reports, that Schulz relied on certain pen nibs for her. (When Lucy was “doing some loud shouting,” as Schulz put it, he would ink up a B-5 pen, which made heavy, flat, rough lines. For “maximum screams,” he would get out the B-3.)Lucy was, in essence, society itself, or at least society as Schulz saw it. “Her aggressiveness threw the others off balance,” Michaelis writes, prompting each character to cope or withdraw in his or her own way. Charlie Brown, for instance, responded to her with incredible credulity, coming to her time and again for pointless advice or for football kicking. Linus always seemed to approach her with a combination of terror and equanimity. In one of my favorite strips, he takes refuge from his sister in the kitchen and, when Lucy tracks him down, addresses her pointedly: “Am I buttering too loud for you?”It was Lucy’s dealings with Schroeder that struck closest to home for Schulz, whose first marriage, to Joyce, began to fall apart in the 1960s while they were building up their huge estate in Sebastopol, California. Just as Schulz’s retreat into his comic-strip world antagonized Joyce, Michaelis observes, so Schroeder’s devotion to his piano was “an affront to Lucy.” At one point, Lucy becomes so fed up at her inability to distract Schroeder from his music that she hurls his piano into the sewer: “It’s woman against piano! Woman is winning!! Woman is winning!!!” When Schroeder shouts at her in disbelief, “You threw my piano down the sewer!!,” Lucy corrects him: “Not your piano, Sweetie … My competition!” Now, that’s a relationship!In this deeply dystopic strip, there was only one character who could—and some say finally did—tear the highly entertaining, disturbed social world to shreds. And that happens to be my favorite character, Snoopy.Before Snoopy had his signature doghouse, he was an emotional creature. Although he didn’t speak (he expressed himself in thought balloons), he was very connected to all the other characters. In one 1958 strip, for instance, Linus and Charlie Brown are talking in the background, and Snoopy comes dancing by. Linus says to Charlie Brown, “My gramma says that we live in a veil of tears.” Charlie Brown answers: “She’s right … This is a sad world.” Snoopy still goes on dancing. By the third frame, though, when Charlie Brown says, “This is a world filled with sorrow,” Snoopy’s dance slows and his face begins to fall. By the last frame, he is down on the ground—far more devastated than Linus or Charlie Brown, who are shown chatting off in the distance, “Sorrow, sadness and despair … grief, agony and woe …”But by the late 1960s, Snoopy had begun to change. For example, in a strip dated May 1, 1969, he’s dancing by himself: “This is my ‘First Day of May’ dance. 
It differs only slightly from my ‘First Day of Fall’ dance, which differs also only slightly from my ‘First Day of Spring’ dance.” Snoopy continues dancing and ends with: “Actually, even I have a hard time telling them apart.” Snoopy was still hilarious, but something fundamental had shifted. He didn’t need any of the other characters in order to be what he was. He needed only his imagination. More and more often he appeared alone on his doghouse, sleeping or typing a novel or a love letter. Indeed, his doghouse—which was hardly taller than a beagle yet big enough inside to hold an Andrew Wyeth painting as well as a pool table—came to be the objective correlative of Snoopy’s rich inner life, a place that no human ever got to see.Some thought this new Snoopy was an excellent thing, indeed the key to the strip’s greatness. Schulz was among them: “I don’t know how he got to walking, and I don’t know how he first began to think, but that was probably one of the best things that I ever did.” The novelist Jonathan Franzen is another Snoopy fan. Snoopy, as Franzen has noted, isthe protean trickster whose freedom is founded on his confidence that he’s lovable at heart, the quick-change artist who, for the sheer joy of it, can become a helicopter or a hockey player or Head Beagle and then again, in a flash, before his virtuosity has a chance to alienate you or diminish you, be the eager little dog who just wants dinner.But some people detested the new Snoopy and blamed him for what they viewed as the decline of Peanuts in the second half of its 50-year run. “It’s tough to fix the exact date when Snoopy went from being the strip’s besetting artistic weakness to ruining it altogether,” the journalist and critic Christopher Caldwell wrote in 2000, a month before Schulz died, in an essay in New York Press titled “Against Snoopy.” But certainly by the 1970s, Caldwell wrote, Snoopy had begun wrecking the delicate world that Schulz had built. The problem, as Caldwell saw it, was thatSnoopy was never a full participant in the tangle of relationships that drove Peanuts in its Golden Age. He couldn’t be: he doesn’t talk … and therefore he doesn’t interact. He’s there to be looked at.Snoopy unquestionably took the strip to a new realm beginning in the late 1960s. The turning point, I think, was the airing of It’s the Great Pumpkin, Charlie Brown in 1966. In this Halloween television special, Snoopy is shown sitting atop his doghouse living out his extended fantasy of being a World War I flying ace shot down by the Red Baron and then crawling alone behind enemy lines in France. Snoopy is front and center for six minutes, about one-quarter of the whole program, and he steals the show, proving that he doesn’t need the complicated world of Peanuts to thrive. He can go it alone. And after that he often did.In 1968, Snoopy became NASA’s mascot. The next year, Snoopy had a lunar module named after him for the Apollo 10 mission (the command module was called Charlie Brown). In 1968 and 1972, Snoopy was a write-in candidate for president of the United States. Plush stuffed Snoopys became popular. (I had one.) By 1975, Snoopy had replaced Charlie Brown as the center of the strip. He cut a swath through the world. For instance, in parts of Europe Peanuts came to be licensed as Snoopy. And in Tokyo, the floor of the vast toy store Kiddy Land that is devoted to Peanuts is called Snoopy Town.The Complete Peanuts: Volume 23To accommodate this new Snoopy-centric world, Schulz began making changes. 
He invented a whole new animal world for Snoopy. First came Woodstock, a bird who communicates only with Snoopy (in little tic marks). And then Snoopy acquired a family: Spike, a droopy-eyed, mustachioed beagle, followed by Olaf, Andy, Marbles, and Belle.In 1987, Schulz acknowledged that introducing Snoopy’s relatives had been a blunder, much as Eugene the Jeep had been an unwelcome intrusion into the comic strip Popeye:It’s possible—I think—to make a mistake in the strip and without realizing it, destroy it … I realized it myself a couple of years ago when I began to introduce Snoopy’s brothers and sisters … It destroyed the relationship that Snoopy has with the kids, which is a very strange relationship.He was right. Snoopy’s initial interactions with the kids—his understanding of humanity, indeed his deep empathy (just what they were often missing), coupled with his inability to speak—were unique. And that’s why whenever Snoopy’s relatives showed up, the air just went out of the strip.But for many fans, it wasn’t merely Snoopy’s brothers and sisters dragging him down. There was something fundamentally rotten about the new Snoopy, whose charm was based on his total lack of concern about what others thought of him. His confidence, his breezy sense that the world may be falling apart but one can still dance on, was worse than irritating. It was morally bankrupt. As the writer Daniel Mendelsohn put it in a piece in The New York Times Book Review, Snoopy “represents the part of ourselves—the smugness, the avidity, the pomposity, the rank egotism—most of us know we have but try to keep decently hidden away.” While Charlie Brown was made to be buffeted by other personalities and cared very much what others thought of him, Snoopy’s soul is all about self-invention—which can be seen as delusional self-love. This new Snoopy, his detractors felt, had no room for empathy.To his critics, part of what’s appalling about Snoopy is the idea that it’s possible to create any self-image one wants—in particular, the profile of someone with tons of friends and accomplishments—and sell that image to the world. Such self-flattery is not only shallow but wrong. Snoopy, viewed this way, is the very essence of selfie culture, of Facebook culture. He’s the kind of creature who would travel the world only in order to take his own picture and share it with everyone, to enhance his social image. He’s a braggart. Unlike Charlie Brown, who is alienated (and knows he’s alienated), Snoopy is alienating (and totally fails to recognize it). He believes that he is what he’s been selling to the world. Snoopy is “so self-involved,” Mendelsohn writes, “he doesn’t even realize he’s not human.”Just as some people thought that Charlie Brown, the insecure loser, the boy who never won the love of the Little Red-Haired Girl, was the alter ego of Schulz himself near the beginning of his career, so Snoopy could be cast as the egotistical alter ego of Schulz the world-famous millionaire, who finally found a little happiness in his second marriage and thus became insufferably cutesy. (In 1973, Schulz and his wife divorced, and a month later Schulz married Jeannie Clyde, a woman he met at the Warm Puppy Café, at his skating rink in Santa Rosa, California.) 
Two-legged Snoopy, with his airs and fantasies—peerless Snoopy, rich Snoopy, popular Snoopy, world-famous Snoopy, contented Snoopy—spoiled it all.Schulz, who had a lifelong fear of being seen as ostentatious, believed that the main character of a comic strip should not be too much of a showboat. He also once said he wished he could use Charlie Brown—whom he described as the lead character every good strip needs, “somebody that you like that holds things together”—a little more.But he was smitten with Snoopy. (During one of the Christmas ice shows in Santa Rosa, while watching Snoopy skate, Schulz leaned over and remarked to his friend Lynn Johnston, another cartoonist, “Just think … there was a time when there was no Snoopy!”) Schulz, Johnston writes in an introduction to one of the Fantagraphics volumes, found his winning self in this dog:Snoopy was the one through which he soared. Snoopy allowed him to be spontaneous, slapstick, silly, and wild. Snoopy was rhythm, comedy, glamour, and style … As Snoopy, he had no failures, no losses, no flaws … Snoopy had friends and admirers all over the globe.Snoopy was the polar opposite of Charlie Brown, who had nothing but failures, losses, and flaws.But were the two quite so radically far apart?Snoopy’s critics are wrong, and so are readers who think that Snoopy actually believes his self-delusions. Snoopy may be shallow in his way, but he’s also deep, and in the end deeply alone, as deeply alone as Charlie Brown is. Grand though his flights are, many of them end with his realizing that he’s tired and cold and lonely and that it’s suppertime. As Schulz noted on The Today Show when he announced his retirement, in December 1999: “Snoopy likes to think that he’s this independent dog who does all of these things and leads his own life, but he always makes sure that he never gets too far from that supper dish.” He has animal needs, and he knows it, which makes him, in a word, human.Even Snoopy’s wildest daydreams have a touch of pathos. When he marches alone through the trenches of World War I, yes, of course, he is fantasizing, but he also can be seen as the bereft young Charles Schulz, shipped off to war only days after his mother died at the age of 50, saying to him: “Good-bye, Sparky. We’ll probably never see each other again.”The final comic strips, which came out when Schulz realized he was dying, are pretty heartbreaking. All of the characters seem to be trying to say goodbye, reaching for the solidarity that has always eluded them. Peppermint Patty, standing in the rain after a football game, says, “Nobody shook hands and said, ‘Good game.’ ” Sally shouts to her brother, Charlie Brown: “Don’t you believe in brotherhood?!!” Linus lets out a giant, boldface “SIGH!” Lucy, leaning as ever on Schroeder’s piano, says to him, “Aren’t you going to thank me?”But it’s Snoopy who is grappling with the big questions, the existential ones. Indeed, by his thought balloons alone, you might mistake him for Charlie Brown. The strip dated January 15, 2000, shows Snoopy on his doghouse. “I’ve been very tense lately,” Snoopy thinks, rising up stiffly from his horizontal position. 
“I find myself worrying about everything … Take the Earth, for instance.” He lies back down, this time on his belly, clutching his doghouse: “Here we all are clinging helplessly to this globe that is hurtling through space …” Then he turns over onto his back: “What if the wings fall off?”Snoopy may have been delusional, but in the end he knew very well that everything could come tumbling down. His very existence seems to be a way of saying that no matter what a person builds up for himself inside or outside society, everyone is basically alone in it together. By the way, in the end Snoopy did admit to at least one shortcoming, though he claimed he wasn’t really to blame. In the strip that ran on January 1, 2000, drawn in shaky lines, the kids are having a great snowball fight. Snoopy sits on the sidelines, struggling to get his paws around a snowball: “Suddenly the dog realized that his dad had never taught him how to throw snowballs.”About the AuthorSarah Boxer, a critic and cartoonist, is the author of two psychoanalytic comics, In the Floyd Archives and Mother May I?, and one Shakespearean tragi-comic, Hamlet: Prince of Pigs. | 2024-11-07T14:57:23 | en | train |
10,824,305 | npluss | 2016-01-02T00:23:06 | Sentimental version numbers | null | http://sentimentalversioning.org/ | 2 | 0 | null | null | null | no_error | 1win Casino Perú - Tu Guía Definitiva de Casino en Línea y Apuestas Deportivas | 2023-12-04T12:44:55+03:00 | null |
1win Casino has quickly gained popularity among gaming fans in Peru thanks to its wide range of games, attractive bonuses, and high level of service.
Feature | Details
Website | 1win.pe
License | Curaçao license
Year founded | 2016
Available sports | Football, tennis, basketball, volleyball, UFC, eSports
Betting markets | Pregame, live, accumulators, systems
Odds | High, competitive in the market
Welcome bonus | 100% up to S/ 300
Payment methods | Credit cards, bank transfers, e-wallets
Customer support | Live chat, email, phone, WhatsApp
Languages | Spanish, English
Mobile apps | Yes, for Android and iOS
Security | Data encryption, KYC
Withdrawals | Processed within 24 hours
Betting limits | From S/ 20 to S/ 5000 depending on the sport
Promotions | Cashback, free spins, odds boost
This casino offers an impressive selection of slots from leading developers, a variety of table games, and the option to bet on sports. The site's design is intuitive, which makes navigation easy for beginners and experienced players alike.
My experience with online casinos
As someone who has spent a lot of time playing at online casinos and betting on sports, I value sites that offer not only a variety of games but also honesty, security, and convenience for users. At 1win Casino, I found all of this.
I especially like its wide selection of slots, including classic and modern options, as well as the ability to bet on different sporting events. The platform ensures transparency in every respect, from registration to withdrawing funds, which for me is a key factor in trusting an online casino.
Review of the Slots and Games at 1win Casino
At 1win Casino there is a wide range of slots to satisfy any player. Here you will find classic slots with simple rules and traditional symbols, as well as modern video slots with exciting themes, from adventure to fantasy.
Slots with progressive jackpots are especially attractive, since the potential prize grows with every bet. These games are ideal for those who dream of a big win.
Top 5 Casino Games | Game Type | Features
Book of Ra | Slot | Egyptian theme
Blackjack | Table | Classic game
Roulette | Table | European/American
Baccarat | Table | Simple rules
Poker | Cards | Different variants
Table Games and Video Poker
1win Casino also offers a variety of table games. Fans of the classics will find roulette, blackjack, and baccarat here. Each game comes in several variants, allowing players to choose the one they like best. Video poker is another popular category that combines elements of poker and slots, offering a unique gaming experience.
Unique or Exclusive Games
1win Casino stands out from other online casinos with its unique and exclusive games. These may be special slots created exclusively for 1win, or unique versions of popular table games. These games not only provide a new gaming experience but also increase the chances of winning thanks to special bonuses and features.
1win Casino offers some of the most attractive welcome bonuses on the market. New players can take advantage of the registration and first-deposit bonus, which often includes a percentage boost on the deposit amount and free spins on slots. These bonuses are an excellent way to start playing with extra funds and increase the chances of winning.
Bonus/Promotion | Description | Requirements
Welcome Bonus | 100% up to 300 PEN + 150 Free Spins | Minimum deposit of 10 PEN
Reload Bonus | 50% up to 500 PEN | Minimum deposit of 20 PEN
Casino Bonus | 100% up to 1000 PEN | Minimum deposit of 30 PEN
Live Casino Welcome Bonus | 100% up to 1000 PEN | Minimum deposit of 50 PEN
Sports Betting Welcome Bonus | 100% up to 500 PEN | Minimum deposit of 20 PEN
Bingo Bonus | 100% up to 200 PEN | Minimum deposit of 10 PEN
Cashback | Up to 15% back | Bets on slots and casino
First Deposit Bonus with Bitcoin | 500% up to 3000 PEN | Minimum deposit of 50 PEN with Bitcoin
Special Promotions | Variable bonuses and prizes | Participation in special campaigns and events
Regular and Seasonal Promotions
1win Casino regularly updates its promotions and offers, giving players new opportunities to win. These can include weekly bonuses, cashback, tournaments with large prize pools, and seasonal promotions tied to holidays or major events. Taking part in these promotions adds interest to the game and lets players pick up additional benefits.
Loyalty Program
For regular players, 1win Casino offers a loyalty program that rewards them for their activity and bets at the casino. The program can include cumulative bonuses, personal offers, higher withdrawal limits, and other advantages. The loyalty program makes playing at 1win Casino even more rewarding and enjoyable for returning players.
Level | Benefits
Bronze | Bonuses on bets
Silver | Better cashback
Gold | Personal account manager
Platinum | Exclusive promotions
Diamond | Individual gifts
Registration Process and Profile Creation at 1win Casino
Registering at 1win Casino is a simple, intuitive process. First, you need to visit the casino's official website and find the registration button. Then you fill out a form with basic information such as first name, last name, date of birth, email address, and a contact phone number.
After that, a username and password are created for accessing the account. It is important to make sure all the details are entered correctly, since they will be used later for account verification.
Documentation and security requirements
To guarantee the security of both users and the casino itself, 1win requires identity confirmation. This may include sending copies of documents such as a passport or driver's license, as well as documents confirming your residential address. All data is strictly protected and used exclusively for identity confirmation, which prevents fraud and ensures a safe gaming environment for all users.
Payment Systems and Transactions at 1win Casino
1win Casino offers a wide range of payment systems, which makes depositing and withdrawing funds convenient and accessible for players in Peru. Available options include traditional bank cards such as Visa and MasterCard, as well as various e-wallets and online payment systems. In addition, the casino supports cryptocurrency transactions, a big advantage for those who prefer the anonymity and security of digital currencies.
Payment Method | Deposits | Withdrawals | Limits | Processing Time
Credit/Debit Card | Yes | No | Min $1 / No maximum | Instant
Skrill | Yes | Yes | Min $1 / No maximum | 24-72 hours
Neteller | Yes | Yes | Min $1 / No maximum | 24-72 hours
PSE | Yes | Yes | Min $1 / No maximum | 1-3 days
Bank Transfer | Yes | Yes | Min $1 / No maximum | 1-7 days
Bitcoin | Yes | Yes | Min $1 / No maximum | 1-12 hours
Ethereum | Yes | Yes | Min $1 / No maximum | 1-12 hours
Litecoin | Yes | Yes | Min $1 / No maximum | 1-12 hours
Tether | Yes | Yes | Min $1 / No maximum | 1-12 hours
Transaction speed and security
An important factor when choosing an online casino is the speed of transactions and the security of payments. At 1win Casino, deposits are processed practically instantly, letting players start playing without delays.
Withdrawals generally take a little longer, but still stay within acceptable timeframes. The casino uses modern encryption technologies to secure transactions and keep user data confidential, guaranteeing a high level of protection for all customers.
Customer Support and Feedback
Customer support at 1win Casino is available 24/7 and offers several communication channels, including live chat, email, and phone. I personally contacted support with questions about withdrawing funds and was pleasantly surprised by how fast and professional they were.
The answers were clear and thorough, which reflects a high level of staff training and competence. In addition, the casino offers a feedback form where players can leave comments or suggestions, which contributes to the continuous improvement of the service.
Personal Experience and Recommendations for 1win Casino
My personal experience using the 1win Casino site has been exceptionally positive. I appreciated its clean, intuitive interface, which makes it easy to navigate and find the games you want. The variety of games is impressive, from classic slots to live dealer games and sports betting. The quality of the games is high, with excellent graphics and sound, creating the feeling of being immersed in the atmosphere of a real casino.
Comparison with Other Casinos
Comparing 1win with other online casinos in Peru, I can say it stands out for its focus on the user. Although many sites offer a large selection of games, 1win also provides excellent customer support and fast payouts, which makes it one of the best options on the market.
Recommendations for New Players
For new players, I would recommend starting with the FAQ section on the 1win site, which covers all the information beginners need. It is also important to take advantage of the welcome bonuses and promotions the casino offers. They are an excellent way to grow your bankroll and gain extra opportunities to play. Finally, always play responsibly and set limits on your bets and playing time.
1win offers a complete and secure gaming experience for bettors in Peru. With a careful combination of a wide variety of games, generous bonuses and promotions, convenient payment methods, and solid customer support, 1win establishes itself as a standout option in the Peruvian online casino market. Whether you enjoy exciting slots, classic table games, or lively sports betting, 1win Casino has something to satisfy every taste and skill level.
1win Casino takes social responsibility and safe gambling very seriously. The casino actively promotes responsible gambling and provides tools and resources to help players stay within their limits. These measures include deposit and loss limits, temporary or permanent self-exclusion, and links to problem-gambling support organizations. The health and well-being of customers are a priority for 1win Casino.
Frequently Asked Questions about 1win Casino in Peru
How do I register at 1win Casino?
To register, visit the official 1win Casino site and click the registration button. Fill in the required information, such as first name, last name, email, and contact details.
What bonuses does 1win Casino offer?
1win Casino offers several bonuses, including welcome bonuses for registration and first deposit, as well as regular promotions and a loyalty program for returning players.
What payment methods are available at 1win Casino?
Players can use bank cards, e-wallets, online banking, and cryptocurrencies to deposit funds and withdraw winnings.
Is there a 1win Casino mobile app?
Yes, 1win Casino offers a mobile app for iOS and Android, so you can play your favorite games on the go.
How is security guaranteed at 1win Casino?
1win Casino uses modern encryption technologies to protect user data and transactions, and strictly complies with its privacy policy.
| 2024-11-08T15:43:20 | es | train |
10,824,383 | bpierre | 2016-01-02T00:49:45 | The Future of Node Is in Microsoft’s Fork | null | https://blog.andyet.com/2015/12/31/the-future-of-node-is-microsofts-fork/ | 80 | 42 | [10825300, 10824913, 10825163, 10825514, 10828290, 10825095, 10824809, 10825317, 10825108, 10825109, 10825101, 10825370, 10824481, 10825121] | null | null | no_error | The Future of Node is in Microsoft's Fork | null | null |
V8 pushed JavaScript forward in terms of speed when it was first released, but hasn’t been keeping up with the accelerated pace of the ECMAScript Standard.
We’ll likely see a new release of the spec every year, but V8 is lagging far behind Mozilla’s SpiderMonkey and Microsoft’s Chakra in terms of support for ECMAScript 2015 (aka ES6).
Node.js developers that have been eager for ES2015 features that V8 doesn't yet support have turned to Babel.JS for compiling their ES2015 code into ES5 code, but this only works for syntax features like arrow functions.
There are features within ES2015 that Babel.JS can’t emulate because ES5 fundamentally lacks the ability to accomplish these features in any reasonable way, namely the Proxy constructor and extending built-in objects like Array.
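To make the second case concrete, here is a minimal sketch of my own (not from the original post) of what subclassing a built-in looks like. A transpiler can fake the class syntax in ES5, but it cannot produce a real Array subclass whose instances keep native length tracking and still pass instanceof checks, so this only works on an engine with native support:

class Stack extends Array {
  // peek() is an illustrative extra method; the name is arbitrary
  peek() {
    return this[this.length - 1];
  }
}

const stack = new Stack();
stack.push(1, 2, 3);                 // inherited Array method
console.log(stack.peek());           // 3
console.log(stack.length);           // 3, native length tracking still works
console.log(stack instanceof Stack); // true
console.log(Array.isArray(stack));   // true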
[Update: This controversial statement has been too distracting.] The Node.js Foundation would be wise to migrate to Chakra, because Google’s updates are coming in at a trickle while Microsoft’s are roaring in like a river, but that’s not really the point.
The point is that these features are coming regardless, and you can play with them now.
With annual ECMAScript releases adding new features, Microsoft's Node.js Chakra fork will continue to outpace Google's V8 engine by months.
So long as Microsoft maintains their fork, we’ll be able to preview features that aren’t yet ready in V8.
In order to use Microsoft's fork, you need a Windows 10 machine with the November update.
It’s a huge update, so just because you have auto-updates enabled doesn’t mean that you have it (this messed me up).
For the full instructions, look at an individual release on github, currently v1.3.
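Before trying the larger examples, a quick sanity check helps confirm what the installed build exposes. This little probe script is hypothetical (it is not part of the release notes); it only touches standard globals, so it runs on any Node build and simply reports what it finds:

// probe.js -- run it with the Chakra-based node you just installed
console.log('Proxy available:  ', typeof Proxy !== 'undefined');
console.log('Reflect available:', typeof Reflect !== 'undefined');

With stock Node 5.x on V8 and default flags, both lines should print false; with the Chakra-based build on your path, both should print true and the example below will run.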
Once you have that working, you can play with features that V8 and Babel don't support, like proxies in this example:
const someObject = {};
const someProxy = new Proxy(someObject, {
  // get trap: 'happy_'-prefixed properties are forwarded to the target
  // with the prefix stripped; anything else comes back as 'sad'.
  get: function (target, property, receiver) {
    if (property.substr(0, 6) === 'happy_') {
      return Reflect.get(target, property.substr(6), receiver);
    }
    return 'sad';
  },
  // set trap: only 'happy_'-prefixed properties are written through.
  set: function (target, property, value, receiver) {
    if (property.substr(0, 6) === 'happy_') {
      return Reflect.set(target, property.substr(6), value, receiver);
    }
    return false;
  }
});
someProxy.happy_x = 'happy';
console.log(`someproxy.happy_x = 'happy';`);
console.log('='.repeat(40));
console.log('someProxy.x =', someProxy.x);
console.log('someObject.x =', someObject.x);
console.log('someProxy.happy_x =', someProxy.happy_x);
console.log('someObject.happy_x =', someObject.happy_x);
Output:
someProxy.happy_x = 'happy';
========================================
someProxy.x: sad
someObject.x: happy
someProxy.happy_x: happy
someObject.happy_x: undefined
@kangax maintains an ECMAScript Compatibility Table where you can compare support for Node.js and Edge for several versions of ECMAScript so that you know what features to explore with Microsoft's Node.js fork.
Feel free to poke me on Twitter @fritzy if you have any thoughts or feedback.
Update (Dec 31, 2015):
There are several people pointing out that Chakra isn't open source. However, it is going open source in January, and cross platform afterward.
My main point is not that ChakraCore should be the new Node.js JavaScript engine, but that Microsoft's fork of Node.js with ChakraCore in it is a pretty handy way to preview Node.js's future, regardless.
Update (Jan 2, 2016):
I've been receiving a lot of feedback.
I'm removing my statement saying that Node.js should move to Chakra -- that was meant to be controversial and get people thinking, but it has proven distracting to my main point, that you can play with Microsoft's Fork to preview features that they have that Node.js doesn't yet have.
Mikeal Rogers pointed out that I was making an inappropriate suggestion for the Node.js Foundation:
@fritzy "The Node.js Foundation" does not make technical decisions, the contributors to Node.js make them :)— Mikeal Rogers (@mikeal) January 2, 2016
He goes on to give some more context:
@fritzy the code in MS's fork will be in a PR to core in January, they've already announced it. It'll make it optional to compile w/ Chakra.— Mikeal Rogers (@mikeal) January 2, 2016
Several people pointed this out:
@fritzy it isn't clear that Chakra will *always* outpace v8 in ES-WHATEVA features.— Mikeal Rogers (@mikeal) January 2, 2016
In fact, Jake Archibald references a tweet showing that the V8 team has been catching up:
@mikeal @fritzy Canary (albeit with flags) currently has the highest es6 support, "lagging far behind" seems unfair https://t.co/9Qae8kURhK— Jake Archibald (@jaffathecake) January 2, 2016
Mostly, I'm excited to see the Node.js platform have some diverse implementations. I'm hoping we see more of this.
I mentioned that I recalled a similar Mozilla effort in the past, and Brendan Eich pitched in with the details:
@mikeal @fritzy 'twas a team of three, and all left for Facebook soon after: @sdwilsh @zpao @robarnold (onstage w/ me at NodeConf 2011).— BrendanEich (@BrendanEich) January 2, 2016
Personally, I'd love to see Mozilla come back and do this again; diversity makes for a healthier ecosystem.
This does raise the question: is there a better long term solution for multiple JS engines than V8 API shims?
Keep the feedback rolling. | 2024-11-07T22:51:12 | en | train |
10,824,406 | digitalnalogika | 2016-01-02T00:59:48 | Ruby 2.3 on Heroku with Matz | null | https://blog.heroku.com/archives/2015/12/25/ruby-2-3-0-on-heroku-with-matz?utm_content=buffer36656&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer | 2 | 0 | null | null | null | no_error | Ruby 2.3 on Heroku with Matz | null | Posted by Terence Lee, Build & Languages Architect |
Happy Holidays from Heroku. Congratulations to the ruby-core team on a successful 2.3.0 release, which is now available on Heroku -- you can learn more about Ruby on Heroku at heroku.com/ruby. We had the pleasure of speaking with Matz (Yukihiro Matsumoto), the creator of Ruby and Chief Ruby Architect at Heroku, about the release.
What’s New in Ruby 2.3: Interview with Matz
Ruby releases happen every year on Christmas day. Why Christmas?
Ruby was originally my pet project, my side project. So releases usually happened during my holiday time. Now, it’s a tradition. It’s ruby-core’s gift to the Ruby community.
Do you have any favorite features coming in Ruby 2.3?
I’m excited about the safe navigation operator, or “lonely operator.” It’s similar to what we see in other programming languages like Swift and Groovy— it makes it simple to handle exceptions. The basic idea is from Objective C in which nil accepts every message and returns nil without doing anything. In that way you can safely ignore the error that is denoted by nil. In Swift, they have better error handling using optional types, but the fundamental idea is the same. Instead of using nil, they use optional types.
Since Ruby is an older language, it follows the old way. In the very early stages of Ruby, before releasing in 1995, I added that nil pattern in the language. But it consumed all the errors, which meant error detection was terrible, so I gave up that idea. With the new operator, you can use the nil pattern and avoid accidentally ignoring errors.
Swift and Groovy use a different operator: ?. (question dot). And Ruby couldn’t use that because of predicate methods. So I coined it from the &. (ampersand dot) pattern.
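For readers who have not met the operator yet, here is a minimal sketch of our own (not from the interview) showing how &. short-circuits on nil; the Struct classes are invented for the example:
Address = Struct.new(:city)
User = Struct.new(:address)
user = User.new(Address.new("Matsue"))
loner = User.new(nil)
# Old idiom: guard each step against nil by hand.
user.address && user.address.city   # => "Matsue"
# Ruby 2.3: the safe navigation ("lonely") operator returns nil as soon as
# any receiver in the chain is nil, instead of raising NoMethodError.
user&.address&.city                 # => "Matsue"
loner&.address&.city                # => nil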
Was there any discussion of adopting Active Support’s try?
The try uses the send method, which is not really fast enough. I believe those kind of things should be built into the language.
What are some other highlights in 2.3?
The did_you_mean gem, which helps with method name misspellings, is great as a default feature. We bundle it now, so it’s now a default and you don’t have to do anything extra.
We included it because for Ruby 3 we’re working towards better compilation and collaboration, and this is the first step in that direction. Better error messages help developers’ productivity and happiness.
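As a rough illustration of our own (the exact wording can vary between versions), a typo in irb now yields a suggestion instead of a bare error:
"heroku".lenght
# NoMethodError: undefined method `lenght' for "heroku":String
# Did you mean?  length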
Are there any features that are not getting the attention they deserve or may surprise developers?
I don’t know. I like the new dig method, which is related to the safe navigation operator. It makes it simple to “dig up” nested elements from nested hashes and arrays, which is useful when dealing with parsed JSON. These days, this method is kind of popular, especially when using an API for microservices. In such cases, the dig method and safe navigation operator would be quite useful, especially with optional attributes.
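Here is a small sketch of our own, against an invented JSON payload, of what dig buys you over manual guarding:
require 'json'
payload = JSON.parse('{"user": {"address": {"city": "Matsue"}}}')
# Before 2.3: chain lookups and guard every level by hand.
payload["user"] && payload["user"]["address"] && payload["user"]["address"]["city"]   # => "Matsue"
# Ruby 2.3: dig walks the nesting and returns nil if any key along the way is missing.
payload.dig("user", "address", "city")   # => "Matsue"
payload.dig("user", "phone", "number")   # => nil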
Are there any features that you secretly hope no one will use, so you can remove them later?
No, but the frozen string literal pragma is kind of controversial. We might change it in a later version.
How would you decide if it stays or goes?
The Rails team got a lot of pull requests that say add the freeze everywhere. The literal freeze method is much faster by comparison in 2.2. People wanted to make it faster by avoiding the string allocation and reducing the GC pressure.
We understood that intention, but adding a freeze everywhere is so ugly and we didn’t want that. That’s why we introduced the magic comment of frozen string literals. At the same time, I don’t like that a magic comment can change the semantics or behavior of the language. So that point is kind of controversial. We’ll see how well it works out.
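For context, a minimal sketch of our own showing the pragma under discussion:
# frozen_string_literal: true
# With the magic comment above, every string literal in this file is frozen,
# so there is no per-literal allocation to garbage-collect and no need to
# sprinkle .freeze calls by hand.
greeting = "hello"
greeting.frozen?     # => true
greeting << " world" # raises an error: can't modify frozen String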
Do you anticipate any backwards-compatibility issues when migrating from Ruby 2.3 to Ruby 3?
Fundamentally, we will not change anything in a backward incompatible way. I expect every Ruby 2.x program to run without any modification in the future version of Ruby. Even the frozen strings stuff is basically compatible.
In Ruby 3, if we do have to make an incompatible change, we have to provide a reasonable reason. And also we have to provide a migration path that is not too difficult. That’s our strategy towards the future version.
Thank you for your time!
I hope we can have this kind of conversation again in the future!
This interview was conducted by Terence Lee and Richard Schneeman, members of Heroku’s Ruby Task Force. You can find them on twitter, @hone02 and @schneems.
| 2024-11-08T03:58:57 | en | train |
10,824,407 | donohoe | 2016-01-02T01:00:02 | Listening to “Star Wars” | null | http://www.newyorker.com/culture/cultural-comment/listening-to-star-wars | 2 | 0 | null | null | null | no_error | Listening to “Star Wars” | 2016-01-01T02:00:29.000-05:00 | Alex Ross | My favorite film of 1977 was not “Star Wars” but “Close Encounters of the Third Kind,” Steven Spielberg’s U.F.O fantasia. Notwithstanding the fact that I was nine years old, I considered “Star Wars” a little childish. Also, the trash-compactor scene scared me. “Close Encounters,” on the other hand, drew me back to the theatre—the late, great K-B Cinema, in Washington, D.C.—five or six times. I irritated friends by insisting that it was better than “Star Wars,” and followed the box-office grosses in the forlorn hope that my favorite would surpass its rival.“Close Encounters” still strikes me as an amazing creation—a one-off fusion of blockbuster spectacle with the disheveled realism of nineteen-seventies filmmaking. It has a wildness, a madness that is missing from Spielberg’s subsequent movies. The Disneyesque fireworks of the finale can’t hide the fact that the hero of the tale is abandoning his family in the grip of a monomaniacal obsession. Looking back, though, I’m sure that what really held me spellbound was the score, which, like that of “Star Wars,” was written by John Williams. I was a full-on classical-music nerd, playing the piano and trying to write my own compositions. I’d dabbled in Wagner, Bruckner, and Mahler, but knew nothing of twentieth-century music. “Close Encounters” offered, at the start, a seething mass of dissonant clusters, which abruptly coalesce into a bright, clipped C-major chord, somehow just as spooky as what came before. The “Star Wars” music had a familiar ring, but this kind of free, frenzied painting with sound was new to me, and has fascinated me ever since.Now eighty-three years old, Williams remains a vital presence. “Star Wars: The Force Awakens,” his latest effort, is doing fairly good business, and he is at work on Spielberg’s next picture. He has scored all of the “Star Wars” movies, all of the Indiana Jones movies, several Harry Potters, “Jaws,” “E.T.,” “Superman,” “Jurassic Park,” and almost a hundred others. BoxOfficeMojo.com calculates that since 1975 Williams’s films have grossed around twenty billion dollars worldwide—and that leaves out the first seventeen years of his career. He has received forty-nine Oscar nominations, with a fiftieth almost certain for 2016. Perhaps his most crucial contribution is the role he has played in preserving the art of orchestral film music, which, in the early seventies, was losing ground to pop-song soundtracks. “Star Wars,” exuberantly blasted out by the London Symphony, made the orchestra seem essential again.Williams’s wider influence on musical culture can’t be quantified, but it’s surely vast. The brilliant young composer Andrew Norman took up writing music after watching “Star Wars” on video, as William Robin notes in a Times profile. The conductor David Robertson, a disciple of Pierre Boulez and an unabashed Williams fan, told me that some current London Symphony players first became interested in their instruments after encountering “Star Wars.” Robertson, who regularly stages all-Williams concerts with the St. Louis Symphony, observed that professional musicians enjoy playing the scores because they are full of the kinds of intricacies and motivic connections that enliven the classic repertory. 
“He’s a man singularly fluent in the language of music,” Robertson said. “He’s very unassuming, very humble, but when he talks about music he can be the most interesting professor you’ve ever heard. He’s a deep listener, and that explains his ability to respond to film so acutely.”It has long been fashionable to dismiss Williams as a mere pasticheur, who assembles scores from classical spare parts. Some have gone as far as to call him a plagiarist. A widely viewed YouTube video pairs the “Star Wars” main title with Erich Wolfgang Korngold’s music for “Kings Row,” a 1942 picture starring Ronald Reagan. Indeed, both share a fundamental pattern: a triplet figure, a rising fifth, a stepwise three-note descent. Also Korngoldesque are the glinting dissonances that affirm rather than undermine the diatonic harmony, as if putting floodlights on the chords.To accuse Williams of plagiarism, however, brings to mind the famous retort made by Brahms when it was pointed out that the big tune in the finale of his First Symphony resembled Beethoven’s Ode to Joy: “Any ass can hear that.” Williams takes material from Korngold and uses it to forge something new. After the initial rising statement, the melodies go in quite different directions: Korngold’s winds downward to the tonic note, while Williams’s insists on the triplet rhythm and leaps up a minor seventh. I used to think that the latter gesture was taken from a passage in Bruckner’s Fourth Symphony, but the theme can’t have been stolen from two places simultaneously.Although it’s fun to play tune detective, what makes these ideas indelible is the way they’re fleshed out, in harmony, rhythm, and orchestration. (To save time, Williams uses orchestrators, but his manuscripts arrive with almost all of the instrumentation spelled out.) We can all hum the trumpet line of the “Star Wars” main title, but the piece is more complicated than it seems. There’s a rhythmic quirk in the basic pattern of a triplet followed by two held notes: the first triplet falls on the fourth beat of the bar, while later ones fall on the first beat, with the second held note foreshortened. There are harmonic quirks, too. The opening fanfare is based on chains of fourths, adorning the initial B-flat-major triad with E-flats and A-flats. Those notes recur in the orchestral swirl around the trumpet theme. In the reprise, a bass line moves in contrary motion, further tweaking the chords above. All this interior activity creates dynamism. The march lunges forward with an irregular gait, rugged and ragged, like the Rebellion we see onscreen.This is not to deny that Williams has a history of drawing heavily on established models. The Tatooine desert in “Star Wars” is a dead ringer for the steppes of Stravinsky’s “The Rite of Spring.” The “Mars” movement of Holst’s “Planets” frequently lurks behind menacing situations. Jeremy Orosz, in a recent academic paper, describes these gestures as “paraphrases”: rather than quoting outright, Williams “uses pre-existing material as a creative template to compose new music at a remarkable pace.” There’s another reason that “Star Wars” contains so many near-citations. At first, George Lucas had planned to fill the soundtrack with classical recordings, as Stanley Kubrick had done in “2001.” The temp track included Holst and Korngold. Williams, whom Lucas hired at Spielberg’s suggestion, acknowledged the director’s favorites while demonstrating the power of a freshly composed score. 
He seems to be saying: I can mimic anything you want, but you need a living voice.In that delicate balancing act, Williams may have succeeded all too well. After “Star Wars,” he became a sound, a brand. The diversity and occasional daring of the composer’s earlier work—I’m thinking not only of “Close Encounters” but also of Robert Altman’s “Images” and “The Long Goodbye” and of Brian De Palma’s “The Fury”—subsided over time. Williams invariably achieves a level of craftsmanship that no other living Hollywood composer can match; his fundamental skill is equally evident in his sizable catalogue of concert-hall scores. Yet he’s been boxed in by the billions that his music has helped to earn. He has become integral to a populist economy on which thousands of careers depend. | 2024-11-08T00:32:53 | en | train |
10,824,499 | fariliang | 2016-01-02T01:32:46 | 6 Image Tools to Add Magic to Your Content | null | https://ratafire.com/blog/6-image-tools-to-add-magic-to-your-content/ | 1 | 0 | null | null | null | no_error | RataFire.com | null | null | RataFire.comis available for saleAbout RataFire.comFormer one word, exceptionally brandable .com domain representing Ratafire - a new crowdfunding platform, is designed to be easily integrated with pre-existing sites such as Facebook or a blog, allowing artists to raise money for their creative projects while simultaneously connecting with fans and fellow musicians.Exclusively on Odys Marketplace$3,470What's included:Domain name RataFire.comBecome the new owner of the domain in less than 24 hours.Complimentary Logo DesignSave time hiring a designer by using the existing high resolution original artwork, provided for free by Odys Global with your purchase.Built-In SEOSave tens of thousands of dollars and hundreds of hours of outreach by tapping into the existing authority backlink profile of the domain.Free Ownership TransferTech Expert Consulting100% Secure PaymentsOwn this Domain in 3 Easy StepsWith Odys, buying domains is easy and safe. Your dream domain is just a few clicks away..1Buy your Favorite DomainChoose the domain you want, add it to your cart, and pay with your preferred method.....2Transfer it to your RegistrarFollow our instructions to transfer ownership from the current registrar to you.....3Get your Brand AssetsDownload the available logos and brand assets and start building your dream website.Trusted by the Top SEO Experts and EntrepreneursRachel Parisi★ ★ ★ ★ ★I purchased another three aged domains from Odys in a seamless and painless transaction. John at Odys was super helpful! Odys is my only source for aged domains —you can trust their product.Stefan★ ★ ★ ★ ★Odys is absolutely the best premium domain marketplace in the whole internet space. You will not go wrong with them.Adam Smith★ ★ ★ ★ ★Great domains. Great to deal with. In this arena peace of mind can be difficult to come by, but I always have it with Odys and will continue to use them and recommend them to colleagues and clients.Brett Helling★ ★ ★ ★ ★Great company. Very professional setup, communication, and workflows. I will definitely do business with Odys Global moving forward.Larrian Gillespie Csi★ ★ ★ ★ ★I have bought 2 sites from Odys Global and they have both been of high quality with great backlinks. I have used one as the basis for creating a new site with a great DR and the other is a redirect with again high DR backlinks. Other sites I have looked through have low quality backlinks, mostly spam. I highly recommend this company for reliable sites.Henry Fox★ ★ ★ ★ ★Great company!Vijai Chandrasekaran★ ★ ★ ★ ★I’ve bought over 30 domains from Odys Global in the last two years and I was always very satisfied. Besides great quality, niche-specific auction domains, Alex also helped me a lot with SEO and marketing strategies. Auction domains are not cheap, but quality comes with a price. If you have the budget and a working strategy, these domains will make you serious money.Keith★ ★ ★ ★ ★Earlier this year, I purchased an aged domain from Odys as part of a promo they’re running at the time. It was my first experience with buying an aged domain so I wanted to keep my spend low. I ended up getting a mid level DR domain for a good price. The domain had solid links from niche relevant high authority websites. I used the site as a 301 redirect to a blog I had recently started. 
Within a few weeks I enjoyed new traffic levels on my existing site. Happy to say that the Odys staff are friendly and helpful and they run a great business that is respected within the industry. | 2024-11-08T08:32:10 | en | train |
10,824,608 | tagawa | 2016-01-02T02:07:42 | CES 2016 Preview | null | http://www.bbc.com/news/technology-35192737 | 1 | 0 | null | null | null | no_error | CES 2016: Preview of the Las Vegas tech showcase | 2016-01-02T00:15:43.000Z | Leo Kelion | Media caption, WATCH: Buddy the robot and his creator will be at CESAcross the globe, tech industry insiders are charging their batteries and taking a deep breath.CES kicks off next week - a sprawling consumer technology showcase that seems to extend to more Las Vegas venues every year.From Samsung to one-person start-ups, thousands of companies will demo new products, while, behind-the-scenes, deals will be struck to make further generations of gadgets possible."Every CES is fresh and different, and we try to see what the future will bring," the event's organiser Gary Shapiro says."What I've learned is that sometimes the companies themselves don't even know if they're going to get their product finished in time."Image source, Getty ImagesImage caption, This year's CES will cover 2.4 million sq ft (223,000 sq m) of exhibition space, up from 2.2 million sq ft last yearCES 2016: Vegas tech show previewVHS recorders, HD TVs, the Xbox games console and Blu-ray discs all made their debut at past shows.But one expert suggests the tech giants may temper their ambitions this time round."Say goodbye to cool, say hello to practical," explains John Curran from the consultancy Accenture."Many of the larger companies now put less emphasis on CES as a launch pad for major hardware. So, they will focus instead on showing off new services to help garner excitement for existing products."But for the smaller businesses this is as big a venue as they are going to be able to find and is an excellent opportunity to catch the eye of journalists and key buyers from the retailers."Image caption, Baby tech is one of the shows fastest growing sectorsTV techNew TV tech always makes a stir at CES, even if some of the innovations are not always practical.In recent years, Samsung and LG have slugged it out to boast the biggest sets, but this year it may be about having the bendiest.Image source, LG DisplayImage caption, LG has shown off small bendy screens in the past but has yet to offer the tech to the publicLG made headlines in May when it showed off an ultra-thin prototype that could be peeled off the wall, external - it will be fascinating to see how much further the two South Korean firms have developed the concept.As far as screens you might actually want to buy soon, expect the focus on be on "HDR".The acronym refers to high dynamic range, and basically means that TVs can show millions more shades of colour and a wider dynamic range - added shades of brightness in between black and white - letting more detail be shown.Image source, Getty ImagesImage caption, HDR-badged TVs should offer noticeably richer picturesAmazon actually started streaming some of its shows in HDR this year, but competing standards meant the TV-makers hadn't put their marketing muscle behind the format.That's likely to change at CES when a coalition of the leading players reveal a new specification, external. 
It will let them badge TVs to show they will support future HDR-coded content.Image source, Getty ImagesImage caption, Netflix co-founder Reed Hastings delivers his keynote address on WednesdayThat should prevent an embarrassing repeat of the fact that many of the early 4K sets ended up being incompatible with the way ultra high definition video is now streamed.Netflix had previously said it was waiting until this moment, external to start supporting HDR - expect its chief executive Reed Hastings to reveal more at his keynote CES speech.And while it's likely to be many years before the mainstream broadcasters adopt HDR, several of the movie studios have said they will offer it on 4K Blu-ray discs - the first players are also expected to be unveiled at CES.Image source, Film publicityImage caption, Mad Max: Fury Road has been confirmed as one of the first 4K Blu-ray discs that will go on saleDronesA big will-they won't-they question mark hangs over GoPro's CES plans.The action camera-maker has promised to launch a drone called Karma in 2016. Image source, GoProImage caption, GoPro has teased that "Karma is coming" and released a video shot by the aircraft, from which this image was takenRumours suggest it could, external deliver 360-degree views and incorporate collision-avoidance tech, externalThe firm's chief executive Nicholas Woodman is speaking at a dinner event, external, but it's still unclear if he'll offer a first peek at the aircraft.Even if GoPro holds fire, there are dozens of other firms set to show off flying tech, including:Image source, PowerUpImage caption, PowerUp FPV offers owners a first-person view from their aircraftPowerUp FPV - a piece of kit from an Israeli start-up that transforms a paper aeroplane into a controllable aircraft that streams views back to a virtual reality headsetFleye - a ball-shaped drone from Belgium that hides its rotor blades behind a plastic sphere to reduce the risk of injuriesUvify - a drone from South Korea said to be fitted with 3D-recognition equipment that can navigate its way around indoor environmentsImage source, FleyeImage caption, Fleye is pitching its crowdfunded drone as having a safer designBritain's Intelligent Energy will also be showing off a hydrogen fuel cell, which it says lets drones stay airborne for hours, external, rather than minutes, at a time.Chip-makers Intel, external and Qualcomm will, external try to explain why adopting their rival drone technologies could give manufacturers an edge.And the Federal Aviation Administration has a booth and will likely provide an update on its new register for US-based drone owners.Image source, UvifyImage caption, The Uvify drone can steer itself around objects indoors and outdoorsVirtual realityFollowing years of hype, 2016 looks to be the year that virtual reality becomes - well, reality.HTC is inviting select journalists to take a look at its revamped Vive VR headset on Monday.Media caption, WATCH: Spencer Kelly tried out a prototype Vive headset in JulyThe headset - created in conjunction with video games firm Valve - was supposed to have gone on sale by now. However, the Taiwanese firm delayed the launch to add what it says is a "very big technological breakthrough".Is it eye-tracking sensors, a way to get rid of its external wiring or something else? 
We'll soon know.Image source, Getty ImagesImage caption, Sony previously said its PlayStation VR headset would go on sale before July 2016Sony is set to follow with its own press conference on Tuesday when we should get more details of the PlayStation VR - the add-on headset for its bestselling console.But one company watcher thinks the Japanese firm will miss a trick if it doesn't make another VR-centric announcement."Sony should come out with an accessory to convert its Z5 Premium smartphone into a VR solution," says Ben Wood from consultants CCS Insight."Its 4K screen is a solution looking for a problem - its high resolution really would lend itself to the experience."Oculus has already carried out a similar trick for Samsung's phones, but its focus this time will be on the Rift.Image source, Getty ImagesImage caption, There were long queues to try out Oculus' Rift headset last yearThe Facebook-owned business recently confirmed the PC-powered VR headset is "on-target" for a Q1 launch, external. With pre-orders about to begin, surely it's time to find out how much it will cost.There should also be news about "affordable" 360-degree cameras - if VR is going to take off beyond gaming, people need an easy way to record their own videos.Image source, SamsungImage caption, New VR controllers will be on show including a first look at Samsung's Rink sensorsIn addition, keep an ear out for new audio-recording equipment capable of matching sounds to a VR user's point-of-view - France's Arkamys has already teased one solution, external that will be on show.RobotsThe Consumer Technology Association is talking up this category, noting there's 71% more space dedicated to robots than in 2015.Image source, JiboImage caption, Jibo's designers had intended to put it on sale last year, but have spent extra time working on making its movements seem more naturalFew think robots are ready to go mainstream just yet, but there's still several companies worth keeping an eye on.The US start-up Jibo is creating a lot of buzz after its "social robot" raised over $3.7m (£2.5m) on the crowdfunding site Indiegogo. The project's chief, Cynthia Breazeal, will be popping into town to provide an update.There are a couple of new droids from France - the Buddy companion bot (which you can see at the top of this article) and Leka, a machine designed to stimulate children with autism and other developmental disorders.Media caption, WATCH: Leka light-up robot could help autistic childrenAnd from Japan, Flower Robotics promises to bring "beauty" to the field by showing off robots designed to be as aesthetically pleasing as they are useful.You might appreciate its efforts in the future, when squadrons of the automatons are zipping about.Image source, Flower RoboticsImage caption, Flower Robotics Platina droid is designed to alert its owner by flashing lights and making soundsCarsAs we career towards roads full of self-driving, electric-powered vehicles, the automakers are embracing CES as a chance to reveal their latest innovations.Ten of the big-name car manufacturers, external are exhibiting this year, but much of the pre-show buzz is being generated by a Chinese firm looking to disrupt the sector.Image source, Faraday FutureImage caption, Faraday Future has trailed its event with teaser images of its concept carFaraday Future has promised to unveil a "concept" that will help "define the future of mobility". 
The company has already lured several executives away from Tesla, announced plans to build a state-of-the-art factory near Las Vegas, external, and received backing from China's tech billionaire Jia Yueting. We'll learn more at its event on Monday.Volkswagen will hope to steer back attention the next day - and repair its battered reputation - when its chief executive takes to the stage.Image source, VolkswagenImage caption, Will this mystery new vehicle help Volkswagen move on from its diesel emissions scandal?The firm has said it will have a new concept vehicle to show off - rumours suggest it will be an electric Microbus, external capable of driving up to 500km (310 miles) on a single charge.Ford's boss, Mark Fields, is also in attendance. It's been reported that he's working on a tie-up with Google, external to create a new self-driving car business. But it's not clear whether this will feature in his CES presentation or be kept back for the Detroit Auto Show later in the month.BMW, however, has already confirmed it will be demonstrating Air Touch, external - a control mechanism for its in-car maps and entertainment systems that 3D-scans hand gestures to let drivers avoid having to fumble for buttons.Image source, BMWImage caption, BMW is to demo advances to its existing gesture-based recognition systemAnd Toyota is promising to show off a new "high-precision" road imaging system, external that will let self-driving cars share what they've seen to keep their maps up to date. The technology may do away with the need to send out special vehicles equipped with expensive laser scanners to get the data."There'll also be more than 100 smaller auto tech companies," adds Accenture's John Curran."This year's focus will be a little bit less on infotainment and more on security and safety - so, we should see new collision avoidance technologies, anti-car jacking tech and ways for cars to communicate with each other."Media caption, WATCH: The Smartwheel will be one of the new car safety technologies at CESWearablesDespite smartwatches gaining ground in 2015, Fitbit and its fitness trackers remain wearable tech's bestselling brand.The firm has an early-morning press conference on Tuesday, suggesting it has something major to reveal.Image source, FitbitImage caption, Fitbit is hinting at something new with a leather strap in its CES invitation"We could see a revision of Fitbit devices and software to better track stress via heart rate variability and skin temperature, along with software that offers coaching for better sleep and stress management," predicts Charles Anderson from the investment bank Dougherty & Company."We also expect to see Fitbit in more pacts with fashion brands."The firm's activity-logging rival Misfit is also at CES. 
The business was recently taken over by the watch giant Fossil, and we may see the first fruits of their tie-up.Chinese tech giant Huawei is tipped to unveil a smartwatch targeted at women and smaller start-ups are also expected to unveil female-friendly wrist-wear.Image source, WisewearImage caption, Wisewear will target women with its tech-enabled "luxury" jewellery"There's been a very male bias to wearable tech but you're going to see what I call the jewellification of this stuff," predicts CCS Insight's Ben Wood."There's a big gap in the market - wearables for women will be a big theme."Of course, another theme will be wearables that don't call attention to themselves, at least not until needed.For instance, the French firm Atol will be showing normal looking glasses that tell a smartphone app where they are when lost.Image source, AtolImage caption, Atol's glasses use an app called Teou to indicate how far away a lost pair of glasses isIn&motion has what it says is the world's first "smart airbag" for skiers - a vest that inflates in less than a tenth of a second upon impact.And Digitsole has new shoes that tie up their own laces - something we've been waiting for ever since Back to the Future II.Wearable tech for pets is also set to become more subtle.Image source, PitPatImage caption, The PitPat is waterproof and tracks how much walking, resting and playing a dog has indulged inPitPat, for instance, has a fairly unobtrusive activity tracker for dogs - a far cry from some of the more clunky animal-centric efforts seen at past CES shows.Health and beautyCosmetics companies are starting to embrace consumer electronics.L'Oreal is back for a second year with a new mystery product following the success of its Makeup Genius app in 2015.Another French firm, Romy, is in town with a device that custom-mixes skincare ingredients to suit each user at different times of the day.Media caption, WATCH: Hands-on with the 'mix your own moisturiser' machineAnd the UK's Amirose is pitching in to offer special cucumber-enhanced eye pads said to be specially formulated to soothe "computer eyes".There should also be a plethora of products that promise to aid longer-term benefits.For instance, Ceracor will debut a sensor that measures the level of haemoglobin in the blood, which it says athletes can use to boost their endurance.Image source, EmberImage caption, The Ember logs haemoglobin levels to help athletes modify their training to boost enduranceAnd Skulpt is showing off Chisel, a device that it says can be used to measure body fat and muscle quality.As ever with health tech, some of the claims will need to be put under scrutiny.Canada's Medical Wearable Solutions, for example, will have to justify its boast that its EyeForcer glasses are the solution to "the epidemic of Gameboy disease".Image source, Medical Wearable SolutionsImage caption, The EyeForcer supposedly prevents children suffering from poor posture, neck pain and vision problemsAlso look out for an explosion in the number of products targeted at new parents, including a sensor that measures contractions, telling mothers when to go to hospital, and a smart changing pad that tracks the growth of newborns.Smart HomeFour million UK households already contain some sort of smart home system, according to a recent report by Strategy Analytics.Media caption, WATCH: A start-up is making a smart button for your home - but will anyone buy it?Nest and Philips paved the way, expect a fresh flood of internet-connected thermostats, lights, fire alarms 
and plug sockets at the show - as well as new ways to control them.Samsung has said it intends to make its next range of Smart TVs double as command-and-control "smart hubs", while LG has pre-announced the SmartThinQ Hub - a cylindrical device that does much the same thing. If at this point you are trying to stifle a yawn, hold on - there are a few products in this category that intrigue.Image source, LGImage caption, LG's SmartThinQ Hub collects information from a range of compatible smart home devicesThe man who developed the original iMovie for Apple has turned his attention to laundry and will unveil a washing/drying machine called Marathon at Monday's CES Unveiled preview.The device can be set to keep clothes locked inside until their rightful owner returns - although how many flatshares will be able to afford its $1,200 (£810) price is another matter.Media caption, WATCH: The Triby speaker adds Amazon's Alexa assistant to fridgesInvoxia will be showing off Triby - a connected-kitchen product powered by Amazon's voice-activated assistant Alexa.And there will also be other new kit to enhance the home including a showerhead that tracks how much water it has used, a module that lets you control gas fireplaces from your phone and a sofa that vibrates in time with shows on TV.Image source, SensorwakeImage caption, Attendees can also sniff out Sensorwake - a clock that wake you up with the smells of food, places... or moneyIt's been said that it's a heck of a lot easier to get an overview of CES from outside Vegas rather than trying to tramp around its epic-sized show floors.We'll certainly try to highlight the key announcements as well as some of the more weird and quirky reveals.Image caption, There's also a lot on show that we won't report about - like this booth dedicated to different types of wire at CES 2015More on this storyRelated internet linksThe BBC is not responsible for the content of external sites. | 2024-11-08T03:09:46 | en | train |
10,824,665 | jrkelly | 2016-01-02T02:26:11 | How to Hire | null | https://medium.com/@henrysward/how-to-hire-34f4ded5f176#.w0av7pj1k | 30 | 11 | [
10824947,
10834394,
10834441,
10837437
] | null | null | no_error | How to Hire - Owner’s Manual, Blog by Carta - Medium | 2016-01-01T19:56:01.494Z | Henry Ward | Below is an excerpt from a talk I gave at the eShares Town Hall in November 2015. I hope it is helpful to other CEOs struggling with hiring.Hi everyone. As you know, I have opened 27 new headcount for department heads to fill by end of year. If they succeed our headcount will be 74 in January. Of our 74 employees, 37% will have been at eShares less than three months and 79% less than one year.Many of you will be interviewing and hiring over the next couple of months. How do we hire quickly and create a better company (and culture) than we have today? That is the challenge we face now. To that end, I’d like to offer you a set of hiring principles and heuristics to guide your decisions.Hiring Principles1. Hiring means we failed to execute and need helpFirst, let me quell a misconception. Hiring is not a consequence of success. Revenue and customers are. Hiring is a consequence of our failure to create enough leverage (see eShares 101) to grow on our own. It means we need outside help. The perfect business is a computer plugged into the internet. Starting with me, every human thereafter is overhead. And we are increasing overhead by 50%.I want to repeat this point. We are increasing overhead by 50% because we failed to execute. It is not something to be proud of. It is humbling to go back to the labor market, hat-in-hand, asking for help. We did this when we hired you. We asked each of you to help us. You did not need us. There are plenty of great jobs. But we needed you. And thank goodness you came. We wouldn’t be here without you. But each of you was hired because the team before you failed to execute without you. And this is still true today.2. Employee effectiveness is a power lawMuch like startup performance follows a power law, so do startup employees. The most effective employees create 20x more leverage than an average employee. This is not true in an efficiency company — the best employees might work 2x faster than their peers. But in a high-leverage startup like ours, the effectiveness gap between employees can be multiple orders of magnitude.Our minds find it easier to think in terms of efficiency and normal distributions than leverage and power law distributions. So we mentally squash the employee power law curve into a normal distribution curve. We underestimate the most effective employees and overestimate the ineffective ones.We rationalize this behavior with “lies we tell ourselves.” Here are a few lies people use to keep an ineffective employee:He is trying really hard.She deserves another chance.People really like her.I feel bad for him.He’s good at other things.He has stuff going on in his personal lifeShe is in the wrong role.Conversely, we should dramatically expand the responsibility of 20x performers. Most don’t and rationalize limiting their most effective employees by saying:She’s great but not ready for a promotionHe’s good but I’m not blown awayShe doesn’t have the right backgroundHe’s never done this job beforeIf we promote and she doesn’t work out, what then?In startup hiring there are few shades of grey. Most employees are great. Some are not. There are surprisingly few in between.3. False Positives are ok, False Negatives are notA False Positive (FP) is when we hire somebody who doesn’t work out (i.e. we falsely believed they would be great). A False Negative (FN) is when we did not hire somebody who would have been great. 
Hiring efficacy is measured by a low False Positive and False Negative rate. A perfect hiring team would never hire somebody that didn’t work out and never pass on somebody that would have been great.The problem arises in measurement. It is easy to know False Positives but impossible to know False Negatives (i.e. we know if we made a bad hire but we know nothing about those we passed on). This, and a reluctance to fire, is why companies focus on reducing False Positives — it is their only measurement. The phrase “Hire slow, fire fast.” comes from this asymmetry. Companies hire slow because they fear False Positives.We should not be afraid of False Positives. We can quickly fix a False Positive hiring decision. However, we should be afraid of False Negatives. We can never fix a False Negative mistake. And the cost is unknown and uncapped. Facebook passed on Brian Acton (WhatsApp cofounder) and it cost $8B and a board seat.There are dozens of employees at eShares I describe as saying “I don’t know how we would have gotten here without them.” Most were controversial hires. I’ll pick on Eric — a classic 20x hire. It is hard to imagine this today, but the majority of people who interviewed him did not want to hire him. Today his nickname is the The Oracle. Eric is not unique. I can count at least a dozen more examples.It sucks to let people go. I hope we get better at not hiring False Positives. But False Positives is the only way we learn. We learn nothing from False Negatives. And there is a huge risk we miss out on a 20x employee. The way we get better at hiring is to hire, learn, and improve. Do not be afraid of hiring False Positives. Give people chances. Be afraid of missing the 20x employee.For those of you questioning the morality of fast iteration of new hires please consider the alternative: we deny people opportunity for fear they won’t succeed or we keep people in roles where they won’t be successful. This creates walls around (and within) organizations. Let’s welcome those who want to join us. Let’s give them as much as opportunity as we can. And let’s quickly tell them if they will have more opportunity elsewhere. As long as we do it helpfully and respectfully (which we always will), helping people sort themselves into and out of eShares is good for all involved.4. Culture-contributors are better than culture-fittersWhen eShares was a company of one, me, our culture was “my culture”. My culture was quickly replaced by the culture of the first ten employees. I couldn’t stop it if I tried — I was outnumbered 10 to 1. Today, the 37 of you that joined us since have vastly outnumbered the first ten. And thank goodness! Our culture today is far better than it was when we were ten and infinitely better than when it was just me.Because of Built to Last, good corporate culture is considered static and decided early in a company’s life. For these companies, hiring means selecting people who fit the existing culture and keeping out those who don’t. Hiring is gatekeeping. If culture is a Venn diagram and each circle is an employee’s contribution, then gatekeeping is preserving the intersection of those circles.Our culture is dynamic. It should expand like our business. We welcome its change. Just like we want people to contribute new skills and ideas, we want people to contribute new culture. Hiring culture-fitters does not make our culture better. On the contrary it makes culture worse through decay. The 47 of us in this room will soon be outnumbered by new hires. 
They will decide our future culture, not us. Hire culture-contributors who will make our culture better.Hiring HeuristicsSo how do we find these Helpful, 20x, Positive Positive, Culture Contributors? It is hard. Very hard. But I can offer a few interviewing and hiring tips that I hope will help:1. Hire for Strength vs Lack of WeaknessMost companies hire by consensus and committee. In a committee of N, each positive vote is worth 1/N of a hiring decision. However, one negative vote will reject a candidate. No matter how a strong a voter’s conviction, their vote will never count more than 1/N. Conversely, the slightest negative view will kill a hire. Consensus optimizes for employees with the fewest objections (least weaknesses). It works well to reduce False Positives but creates many False Negatives.**For that reason, hiring at eShares is not a democracy. We do not vote. The hiring manager makes the decision. However, we have an interview team help the hiring manager triangulate a good decision. They help by looking for and discovering strengths. The single purpose of an interviewing team is to answer the question, “what is this candidate amazing at?”We do this for two reasons. First, someone who is amazing at one thing will often become amazing at other things too. Most often the candidate hasn’t been trained yet. We can train them. As an organization we are very good at that.Second, we can hire complementary skills. For example, we can hire one candidate who is amazing at web development and another who is amazing at algorithms and make a team out of them. You may retort, “that means you need to hire two people when one person could be good at both.” That is the efficiency argument. We are a leverage company. Two amazing people are always better than one pretty good person. If we were in the business of buying cars we would buy trucks and Teslas — we would never buy a Prius.2. Hire for Trajectory vs ExperienceIt is important to note that Trajectory and Experience are not opposites. Trajectory is the first (and second) derivative of Experience. Most candidates have both and both are important. But Trajectory is far more valuable. Our job is not to hire for Experience. That’s what everyone else does. Our job is to hire people whose Trajectory will explode when they join eShares, pulling us along with them.Interviewing for experience is easy because you are discovering what someone has done. Interviewing for Trajectory is hard because you are predicting what they will do. The best indicator that someone will have high Trajectory is if they value Trajectory over Experience. The tell? They get excited talking about what they could do rather than what they have done.3. Hire Doers vs TellersThe best predictor of a successful new hire at eShares is if they like to get their hands dirty. Whether it is writing code, building spreadsheets, calling customers, or stocking the fridge. This is true at every level. Our senior managers are hands-on, care about details, and are not afraid to roll up their sleeves. They don’t last long otherwise.One way to find Doers is to ask a candidate how to do something and then ask them to do it. I ask engineers how they would solve a coding exercise or a sales rep how they would sell cap table software. After listening to the answer I ask them to take out their computer and write code or pretend I am a buyer and sell me software. You can quickly see who prefers doing something versus talking about doing it.Be wary of seductive Tellers. 
They tend to be good interviewers. Interviewing ability has almost no correlation to employee effectiveness. The most common hiring mistake is hiring good interviewers. Don’t make that mistake — hire Doers, not Tellers.4. Hire Learners vs ExpertsThis doesn’t mean expertise isn’t important. We are a company of specialists, not generalists. Each of us is an expert, or becoming an expert, in our domain. You cannot be successful at eShares without being an expert at something.However, the velocity of change at eShares is so high that static expertise quickly becomes obsolete. To survive and grow we must be a learning organization. And that means we need people who are awesome at learning. As Paul Graham says, “When experts are wrong it is often because they are experts on an earlier version of the world.”The clearest signal of a Learner is curiosity. Curious people, by definition, love to learn. While Experts talk about what they know, the Curious talk about what they don’t know. When you interview, verify expertise by discovering strengths. And then look for Curiosity.5. Hire Different vs SimilarIn our short history, our best hires were very Different from the team that hired them. They don’t seem Different now because they expanded our culture. They changed what Different looked like.There is a deep and natural human bias to hire people “like us.” Fight this bias. Hiring Similar means we value repeatability and efficiency over creativity and leverage. Hiring Different brings new skills, paradigms, and ideas which are the sparks and tinder of leverage. They expand our Venn diagram rather than contract it.I can’t stress this enough. You will naturally want to hire people you “connect” with. Fight your instincts. Hire Different.A quick aside about diversity: This is an important topic for a broader discussion we will have in the future. But I will preview that talk by saying that diversity is fundamentally about valuing people who are different. If we view our culture as the sum of our people, and a broader group of people creates a broader and better culture, then diversity, as defined by hiring people who are different, is competitive advantage. Notice I say nothing about race, gender, religious preference, sexual orientation, or any of the categories typically ascribed to diversity. Diversity at eShares starts with a self-awareness about personal bias and a conscious effort to recalibrate. Hire Different vs Similar.6. Always pass on egoFor most of this talk I have talked about signals to hire. I’ll conclude with a signal to not hire. Confidence and ego are opposites. Modesty and humility are traits of the strong. Ego and arrogance is a disease of the weak and insecure. The truly confident don’t need people to know they are great. They are happy to know it themselves. And the truly Great use their greatness to make those around them greater.Most companies have a no assholes rule. We do too. But there are many people who would pass the asshole test but not the ego test. And ego is the far more dangerous disease. Assholes are not contagious but ego is because it creates an arms race of competing egos. Anyone can be a victim but executives are the most susceptible making it an attack on our central nervous system. I would rather hire the humble asshole than the arrogant nice guy.The good news is egos and assholes are highly correlated. But not always. There are nice people with huge egos. They just disguise it well. Your job as the interviewer is to figure that out.Always pass on ego. 
Always.Good luckThat’s all I have. Good luck with your hiring. I’m excited to see who you bring to eShares.** I once spoke to a venture investor about this and he said the same thing about investment decisions. Their partnership doesn’t vote on deals because it produces the least risky investments. They prefer seeing a partner with huge conviction over broad consensus. They think that produces the most exciting and ambitious deals. | 2024-11-07T19:31:24 | en | train |
10,824,724 | skbohra123 | 2016-01-02T02:51:09 | Netflix set to be launched in India next week | null | http://www.thequint.com/technology/2016/01/02/netflix-all-set-for-india-to-be-announced-at-ces-2016-next-week | 3 | 0 | null | null | null | no_error | Netflix All Set for India, To Be Announced At CES 2016 Next Week | 2016-01-02T07:16:15+05:30 | The Quint | It’s time India got the taste of uncensored, latest television series streamed online on mobile. We’re talking about Netflix, the popular online streaming platform, which will reportedly make its Indian debut announcement at the Consumer Electronics Show (CES) 2016, Las Vegas next week. The service will be offered to consumers in the coming months.According to the Hindu Business Line (HBL) report, Netflix could join hands with local telecom operator, thereby taking advantage of the growing 4G connectivity access in the country. It’s highly probable that Reliance Jio could bear the fruit of partnership, providing video streaming at 4G speeds in India. Users can subscribe to Netflix for viewing latest video content known in the market. The company charges $8.99 (Rs 600) per month in the US for HD-quality shows and serials. In India, Netflix is expected to take the Apple Music route and price it much lower than its US pricing. India already has services like Bigflix, RComm, Google Play Movies and even Youtube offers chance to pay-and-watch movie titles in the country but Netflix entering India could change the dynamics of video consumption on mobile in the country and your life.(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.) | 2024-11-08T15:51:37 | en | train |
10,824,798 | ericjang | 2016-01-02T03:16:51 | Awesome Hacking | null | https://github.com/carpedm20/awesome-hacking | 19 | 0 | null | null | null | no_error | GitHub - carpedm20/awesome-hacking: A curated list of awesome Hacking tutorials, tools and resources | null | carpedm20 | Awesome Hacking -An Amazing Project
A curated list of awesome Hacking. Inspired by awesome-machine-learning
If you want to contribute to this list (please do), send me a pull request!
For a list of free hacking books available for download, go here
Table of Contents
System
Tutorials
Tools
Docker
General
Reverse Engineering
Tutorials
Tools
General
Web
Tools
General
Network
Tools
Forensic
Tools
Cryptography
Tools
Wargame
System
Reverse Engineering
Web
Cryptography
Bug bounty
CTF
Competition
General
OS
Online resources
Post exploitation
tools
ETC
System
Tutorials
Roppers Computing Fundamentals
Free, self-paced curriculum that builds a base of knowledge in computers and networking. Intended to build up a student with no prior technical knowledge to be confident in their ability to learn anything and continue their security education. Full text available as a gitbook.
Corelan Team's Exploit writing tutorial
Exploit Writing Tutorials for Pentesters
Understanding the basics of Linux Binary Exploitation
Shells
Missing Semester
Tools
Metasploit - A computer security project that provides information about security vulnerabilities and aids in penetration testing and IDS signature development.
mimikatz - A little tool to play with Windows security
Hackers tools - Tutorial on tools.
Docker Images for Penetration Testing & Security
docker pull kalilinux/kali-linux-docker official Kali Linux
docker pull owasp/zap2docker-stable - official OWASP ZAP
docker pull wpscanteam/wpscan - official WPScan
docker pull metasploitframework/metasploit-framework - Official Metasploit
docker pull citizenstig/dvwa - Damn Vulnerable Web Application (DVWA)
docker pull wpscanteam/vulnerablewordpress - Vulnerable WordPress Installation
docker pull hmlio/vaas-cve-2014-6271 - Vulnerability as a service: Shellshock
docker pull hmlio/vaas-cve-2014-0160 - Vulnerability as a service: Heartbleed
docker pull opendns/security-ninjas - Security Ninjas
docker pull noncetonic/archlinux-pentest-lxde - Arch Linux Penetration Tester
docker pull diogomonica/docker-bench-security - Docker Bench for Security
docker pull ismisepaul/securityshepherd - OWASP Security Shepherd
docker pull danmx/docker-owasp-webgoat - OWASP WebGoat Project docker image
docker pull vulnerables/web-owasp-nodegoat - OWASP NodeGoat
docker pull citizenstig/nowasp - OWASP Mutillidae II Web Pen-Test Practice Application
docker pull bkimminich/juice-shop - OWASP Juice Shop
docker pull phocean/msf - Docker Metasploit
General
Exploit database - An ultimate archive of exploits and vulnerable software
Reverse Engineering
Tutorials
Begin RE: A Reverse Engineering Tutorial Workshop
Malware Analysis Tutorials: a Reverse Engineering Approach
Malware Unicorn Reverse Engineering Tutorial
Lena151: Reversing With Lena
Tools
Disassemblers and debuggers
IDA - IDA is a Windows, Linux or Mac OS X hosted multi-processor disassembler and debugger
OllyDbg - A 32-bit assembler level analysing debugger for Windows
x64dbg - An open-source x64/x32 debugger for Windows
radare2 - A portable reversing framework
plasma - Interactive disassembler for x86/ARM/MIPS. Generates indented pseudo-code with colored syntax code.
ScratchABit - Easily retargetable and hackable interactive disassembler with IDAPython-compatible plugin API
Capstone
Ghidra - A software reverse engineering (SRE) suite of tools developed by NSA's Research Directorate in support of the Cybersecurity mission
Decompilers
JVM-based languages
Krakatau - the best decompiler I have used. It is able to decompile apps written in Scala and Kotlin into Java code. JD-GUI and Luyten have failed to do it fully.
JD-GUI
procyon
Luyten - one of the best, though a bit slow, hangs on some binaries and not very well maintained.
JAD - JAD Java Decompiler (closed-source, unmaintained)
JADX - a decompiler for Android apps. Not related to JAD.
.net-based languages
dotPeek - a free-of-charge .NET decompiler from JetBrains
ILSpy - an open-source .NET assembly browser and decompiler
dnSpy - .NET assembly editor, decompiler, and debugger
native code
Hopper - An OS X and Linux Disassembler/Decompiler for 32/64-bit Windows/Mac/Linux/iOS executables.
cutter - a decompiler based on radare2.
retdec
snowman
Hex-Rays
Python
uncompyle6 - decompiler for the over 20 releases and 20 years of CPython.
Deobfuscators
de4dot - .NET deobfuscator and unpacker.
JS Beautifier
JS Nice - a web service guessing JS variables names and types based on the model derived from open source.
Other
nudge4j - Java tool to let the browser talk to the JVM
dex2jar - Tools to work with Android .dex and Java .class files
androguard - Reverse engineering, malware and goodware analysis of Android applications
antinet - .NET anti-managed debugger and anti-profiler code
UPX - the Ultimate Packer (and unpacker) for eXecutables
Execution logging and tracing
Wireshark - A free and open-source packet analyzer
tcpdump - A powerful command-line packet analyzer; and libpcap, a portable C/C++ library for network traffic capture
mitmproxy - An interactive, SSL-capable man-in-the-middle proxy for HTTP with a console interface
Charles Proxy - A cross-platform GUI web debugging proxy to view intercepted HTTP and HTTPS/SSL live traffic
usbmon - USB capture for Linux.
USBPcap - USB capture for Windows.
dynStruct - structures recovery via dynamic instrumentation.
drltrace - shared library calls tracing.
Binary files examination and editing
Hex editors
HxD - A hex editor which, additionally to raw disk editing and modifying of main memory (RAM), handles files of any size
WinHex - A hexadecimal editor, helpful in the realm of computer forensics, data recovery, low-level data processing, and IT security
wxHexEditor
Synalize It/Hexinator -
Other
Binwalk - Detects signatures, unpacks archives, visualizes entropy.
Veles - a visualizer for statistical properties of blobs.
Kaitai Struct - a DSL for creating parsers in a variety of programming languages. The Web IDE is particularly useful for reverse-engineering.
Protobuf inspector
DarunGrim - executable differ.
DBeaver - a DB editor.
Dependencies - a FOSS replacement to Dependency Walker.
PEview - A quick and easy way to view the structure and content of 32-bit Portable Executable (PE) and Component Object File Format (COFF) files
BinText - A small, very fast and powerful text extractor that will be of particular interest to programmers.
General
Open Malware
Web
Tools
Spyse - Data gathering service that collects web info using OSINT. Provided info: IPv4 hosts, domains/whois, ports/banners/protocols, technologies, OS, AS, maintains huge SSL/TLS DB, and more... All the data is stored in its own database allowing get the data without scanning.
sqlmap - Automatic SQL injection and database takeover tool
NoSQLMap - Automated NoSQL database enumeration and web application exploitation tool.
tools.web-max.ca - base64 base85 md4,5 hash, sha1 hash encoding/decoding
VHostScan - A virtual host scanner that performs reverse lookups, can be used with pivot tools, detect catch-all scenarios, aliases and dynamic default pages.
SubFinder - SubFinder is a subdomain discovery tool that discovers valid subdomains for any target using passive online sources.
Findsubdomains - A subdomains discovery tool that collects all possible subdomains from open source internet and validates them through various tools to provide accurate results.
badtouch - Scriptable network authentication cracker
PhpSploit - Full-featured C2 framework which silently persists on webserver via evil PHP oneliner
Git-Scanner - A tool for bug hunting or pentesting for targeting websites that have open .git repositories available in public
CSP Scanner - Analyze a site's Content-Security-Policy (CSP) to find bypasses and missing directives.
Shodan - A web-crawling search engine that lets users search for various types of servers connected to the internet.
masscan - Internet scale portscanner.
Keyscope - an extensible key and secret validation tool for auditing active secrets against multiple SaaS vendors
Decompiler.com - Java, Android, Python, C# online decompiler.
General
Strong node.js - An exhaustive checklist to assist in the source code security analysis of a node.js web service.
Network
Tools
NetworkMiner - A Network Forensic Analysis Tool (NFAT)
Paros - A Java-based HTTP/HTTPS proxy for assessing web application vulnerability
pig - A Linux packet crafting tool
findsubdomains - really fast subdomain scanning service that has much greater capabilities than a simple subdomain finder (works using OSINT).
cirt-fuzzer - A simple TCP/UDP protocol fuzzer.
ASlookup - a useful tool for exploring autonomous systems and all related info (CIDR, ASN, Org...)
ZAP - The Zed Attack Proxy (ZAP) is an easy to use integrated penetration testing tool for finding vulnerabilities in web applications
mitmsocks4j - Man-in-the-middle SOCKS Proxy for Java
ssh-mitm - An SSH/SFTP man-in-the-middle tool that logs interactive sessions and passwords.
nmap - Nmap (Network Mapper) is a security scanner
Aircrack-ng - An 802.11 WEP and WPA-PSK keys cracking program
Nipe - A script to make Tor Network your default gateway.
Habu - Python Network Hacking Toolkit
Wifi Jammer - Free program to jam all wifi clients in range
Firesheep - Free program for HTTP session hijacking attacks.
Scapy - A Python tool and library for low level packet creation and manipulation
Amass - In-depth subdomain enumeration tool that performs scraping, recursive brute forcing, crawling of web archives, name altering and reverse DNS sweeping
sniffglue - Secure multithreaded packet sniffer
Netz - Discover internet-wide misconfigurations, using zgrab2 and others.
RustScan - Extremely fast port scanner built with Rust, designed to scan all ports in a couple of seconds and utilizes nmap to perform port enumeration in a fraction of the time.
PETEP - Extensible TCP/UDP proxy with GUI for traffic analysis & modification with SSL/TLS support.
Forensic
Tools
Autopsy - A digital forensics platform and graphical interface to The Sleuth Kit and other digital forensics tools
sleuthkit - A library and collection of command-line digital forensics tools
EnCase - The shared technology within a suite of digital investigations products by Guidance Software
malzilla - Malware hunting tool
IPED - Indexador e Processador de Evidências Digitais - Brazilian Federal Police Tool for Forensic Investigation
CyLR - NTFS forensic image collector
CAINE - CAINE is an Ubuntu-based app that offers a complete forensic environment with a graphical interface. This tool can be integrated into existing software tools as a module. It automatically extracts a timeline from RAM.
Cryptography
Tools
xortool - A tool to analyze multi-byte XOR cipher
John the Ripper - A fast password cracker
Aircrack - Aircrack is 802.11 WEP and WPA-PSK keys cracking program.
Ciphey - Automated decryption tool using artificial intelligence & natural language processing.
Wargame
System
OverTheWire - Semtex
OverTheWire - Vortex
OverTheWire - Drifter
pwnable.kr - Provide various pwn challenges regarding system security
Exploit Exercises - Nebula
SmashTheStack
HackingLab
Reverse Engineering
Reversing.kr - This site tests your ability to Cracking & Reverse Code Engineering
CodeEngn - (Korean)
simples.kr - (Korean)
Crackmes.de - The world first and largest community website for crackmes and reversemes.
Web
Hack This Site! - a free, safe and legal training ground for hackers to test and expand their hacking skills
Hack The Box - a free site to perform pentesting in a variety of different systems.
Webhacking.kr
0xf.at - a website without logins or ads where you can solve password-riddles (so called hackits).
fuzzy.land - Website by an Austrian group. Lots of challenges taken from CTFs they participated in.
Gruyere
Others
TryHackMe - Hands-on cyber security training through real-world scenarios.
Cryptography
OverTheWire - Krypton
Bug bounty
Awesome bug bounty resources by EdOverflow
Bug bounty - Earn Some Money
Bugcrowd
Hackerone
Intigriti Europe's #1 ethical hacking and bug bounty program.
CTF
Competition
DEF CON
CSAW CTF
hack.lu CTF
Plaid CTF
RuCTFe
Ghost in the Shellcode
PHD CTF
SECUINSIDE CTF
Codegate CTF
Boston Key Party CTF
ZeroDays CTF
Insomni’hack
Pico CTF
prompt(1) to win - XSS Challenges
HackTheBox
General
Hack+ - An Intelligent network of bots that fetch the latest InfoSec content.
CTFtime.org - All about CTF (Capture The Flag)
WeChall
CTF archives (shell-storm)
Rookit Arsenal - OS RE and rootkit development
Pentest Cheat Sheets - Collection of cheat sheets useful for pentesting
Movies For Hackers - A curated list of movies every hacker & cyberpunk must watch.
Roppers CTF Fundamentals Course - Free course designed to get a student crushing CTFs as quickly as possible. Teaches the mentality and skills required for crypto, forensics, and more. Full text available as a gitbook.
OS
Online resources
Security related Operating Systems @ Rawsec - Complete list of security related operating systems
Best Linux Penetration Testing Distributions @ CyberPunk - Description of main penetration testing distributions
Security @ Distrowatch - Website dedicated to talking about, reviewing and keeping up to date with open source operating systems
Post exploitation
tools
empire - A post exploitation framework for powershell and python.
silenttrinity - A post exploitation tool that uses iron python to get past powershell restrictions.
PowerSploit - A PowerShell post exploitation framework
ebowla - Framework for Making Environmental Keyed Payloads
ETC
SecTools - Top 125 Network Security Tools
Roppers Security Fundamentals - Free course that teaches a beginner how security works in the real world. Learn security theory and execute defensive measures so that you are better prepared against threats online and in the physical world. Full text available as a gitbook.
Roppers Practical Networking - A hands-on, wildly practical introduction to networking and making packets dance. No wasted time, no memorizing, just learning the fundamentals.
Rawsec's CyberSecurity Inventory - An open-source inventory of tools, resources, CTF platforms and Operating Systems about CyberSecurity. (Source)
The Cyberclopaedia - The open-source encyclopedia of cybersecurity. GitHub Repository
| 2024-11-08T11:02:06 | en | train |
10,824,877 | dalek2point3 | 2016-01-02T03:38:33 | These consumers like a dud when they see it | null | http://www.cnbc.com/2015/12/23/these-consumers-like-a-dud-when-they-see-it.html | 2 | 0 | null | null | null | no_error | These consumers like a dud when they see it | 2015-12-23T19:37:53+0000 | John W. Schoen | Consumer product managers may want to rethink how they test new product ideas, thanks to study from MIT. Testing products before rolling them out often involves finding that rarified group of elite consumers who set trends, influence friends and family and blaze a trail for the next megapopular, pure gold, mass-market product. Come up with a product they like, the theory goes, and they'll spread the word. But what if you could find that ideal shopper's polar opposite? A consumer with such narrow, eclectic tastes that the new ideas they're drawn to consistently turn out to be products shunned by the mass market. Think diehard fans of Diet Crystal Pepsi. Or McDonald's Arch Deluxe. Katrina Wittkamp | Getty ImagesLeave it to a pair of researchers at MIT to identify this gifted cohort. Some consumers, it turns out, have an uncanny ability to buy the least popular products. Over and over. "There were some products you think to yourself: How could anyone have thought this was a good idea?" said MIT economist Catherine Tucker, one of the co-authors of "Harbingers of Failure," which recently appeared in the Journal of Marketing Research. The researchers identified these hapless consumers by sifting through two large data sets tracking purchases of nearly 9,000 new products by nearly 78,000 consumers at a national convenience store chain between November 2003 to November 2005. They considered a product a failure if it was pulled from the shelves within three years of launch. Only about 40 percent of new products survived that long. When they took a closer look at who was buying these failed products, a distinct pattern began to emerge. Some shoppers apparently have a natural gift for choosing new products that are doomed to fail, a group researchers dubbed "harbingers of failure." When these consumers made up between a quarter to half of a product's sales, the product's survival rate fell by 31 percent. And when they bought the product at least three times, the data showed, these consumers' interest was more often than not, the kiss of death; the survival rare for those products dropped by 56 percent. The researchers looked for alternative explanations, but the data confirmed that they had discovered a new breed of consumers who are drawn to doomed products. "We tried to control for everything," said Tucker. "We couldn't kill the effect." The finding could be a boon to consumer product companies. Identifying the losers early in a new product launch can save a company enormous expense — from marketing and advertising to distribution and precious retail shelf space. The failure rate is brutal — an estimated 60 to 80 percent of new products introduced every year fail to find a following. One recent study found that some 75 percent of consumer packaged goods and retail products fail to generate even $7.5 million in sales during the first year, and less than 3 percent reach first-year sales of $50 million, according a recent article in the Harvard Business Review. One reason is that consumers appear to be fiercely loyal to their favorite products. 
The HBR authors cited separate research showing that American families repeatedly buy the same 150 items — as much as 85 percent of their household needs. That's why much of the emphasis of testing new products typically is geared toward a small group of consumers who help drive the lion's share of new product sales. Less than 1 percent of consumers — or 1 in 143 — accounted for 80 percent of sales of new consumer packaged goods, according to a study from market researcher Catalina. Also, just 11 percent of those consumers who bought a product within the first six months of a launch were still buying after a year. The hope is that these early purchasers will identify products that will turn into long-running hits. But the MIT study turns that strategy on its head. The researchers speculate that consumers with a knack for finding failed products may be more willing to take risks with new products than the general population. And the products aren't necessarily bad — they just fail to attract the mass following product manufacturers are seeking. But the gift for finding failed products, the researchers say, appears to be comprehensive. "You might have thought this was a category-specific effect — someone who buys the wrong makeup," said Tucker. "But the strongest effects were going across category. If you bought the unpopular yogurt or the unpopular shade of lip gloss, you also bought the laundry detergent that no one wants to buy." | 2024-11-08T04:10:20 | en | train
10,824,894 | tokenadult | 2016-01-02T03:42:25 | What Is the Most Interesting Recent [Scientific] News? Why Is It Important? | null | https://edge.org/contributors/what-do-you-consider-the-most-interesting-recent-scientific-news-what-makes-it | 2 | 0 | null | null | null | no_error | 2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT? | null | null |
Early Life Adversity and Collective Outcomes
Associate Professor, UC Berkeley Department of Psychology...
Q-Bio
Physicist, University of Illinois at Urbana-Champaign
This Is The Science News Essay You Want To Read
Computer Scientist, UC Berkeley, School of Information;...
An Unexpected, Very Haunting Signal
Mallinckrodt Professor of Physics and Professor of the...
Society Is Not A Machine, Optimization Not The Right Paradigm!
Professor of Computational Social Science, ETH Zurich; TU...
Differentiable Programming
Research affiliate, MIT Media Lab
The Rule Of Anomalies
Professor of Physics at Brown University; Author, The Jazz...
The (Much Anticipated) Dawn Of Understanding
Associate Professor of Psychological and Brain Sciences,...
How To Be Bad Together: Antisocial Punishment of Pro-social Cooperators
Philosopher and Researcher, Centre National de la Recherche...
Quantum Entanglement Is Independent Of Space And Time
Nobel laureate (2022 - Physics); Physicist, University of...
Microbial Attractions
A World That Counts
Professor of Computer Science, MIT; Director, MIT...
r > g: Increasing Inequality of Wealth and Income is a Runaway Process
Founding Editor, 3QuarksDaily.com
The Big Bang Cannot Be What We Thought It Was
Albert Einstein Professor in Science, Departments of...
The Hermeneutic Hypercycle
Associate Professor in Arts and Technology, The University...
Computational Complexity And The Nature Of Reality
Science writer; Author, Trespassing on Einstein's Lawn
The Mother Of All Addictions
Biological Anthropologist, Rutgers University; Author, Why...
The Breathtaking Future Of A Connected World
Curator, TED conferences, TED Talks; author, TED Talks
Synthetic Learning
Senior Maverick, Wired; Author, What Technology Wants and...
Memory Is a Labile Fabrication
Professor of Behavioural Neuroscience, Dept. of...
Emotions Influence Environmental Well-Being
Assistant Professor of Psychology, University of Colorado,...
That Dress
Psychologist; Visiting Professor, University of Plymouth;...
Send In The Drones
Professor, Department of Psychology Hunter College; Author...
Pluto Now, Then On To 550 A.U.
Emeritus Professor of Physics and Astronomy, UC-Irvine;...
Broke People Ignoring $20 Bills On The Sidewalk
Co-founder and Chief Science Officer, MetaMed Research
Technobiophilic Cities
President & CEO, Science World British Columbia;...
We Are Not Unique In The Universe, But We Are Very Much Alone
Editor-at-large of the German Daily Newspaper, Sueddeutsche...
Self-Driving Genes Are Coming
Founder, the Whole Earth Catalog; Co-founder, The Well; Co-...
The Epidemic Of Absence
Science Writer; Fellow, Royal Society of Literature and the...
The Strongest Prejudice Was Identified
Social Psychologist; Thomas Cooley Professor of Ethical...
The Conquest Of Human Scale
Professor of Journalism, New York University; Former...
Looking Where The Light Isn’t
Chancellor's Distinguished Professor of Physics,...
Deep Science
Professor of Psychology and Neuroscience; Stanford...
Replacing Magic With Mechanism?
Professor of Security Engineering at Cambridge University
Anthropic Capitalism And The New Gimmick Economy
Mathematician and Economist; Managing Director of Thiel...
Interdisciplinary Social Research
President of Global Publishing, SAGE; Author, Judged: The...
The Origin Of Europeans
Consultant; Adaptive Optics and Adjunct Professor of...
Low Energy Nuclear Reactions Work And Could Supplant Fossil Fuels
Serial Entrepreneur; Co-founder, eGroups
Religious Morality Is Mostly Below The Belt
Professor of Psychology, Director, Evolution and Human...
The Infancy Of Meta-Science
Professor, Department of Psychological and Brain Sciences,...
The Dematerialization Of Consumption
Vice-Chairman, Ogilvy London; Columnist, the Spectator;...
The Platinum Rule: Dense, Heavy, But Worth It
Davis-Brack Professor in the Behavioral Sciences, Stanford...
Einstein Was Wrong
Professor of Philosophy, Princeton University
The Mating Crisis Among Educated Women
Professor of Psychology, University of Texas, Austin;...
Fecal Microbiota Transplantation
Director, MIT Media Lab; Coauthor (with Jeff Howe),...
New Experimental Probes Of Einstein’s Curved Spacetime—And Beyond?
Theoretical Physicist; Professor, Department of Physics,...
The Brain Is A Strange Planet
Artist; Founder, Pioneer Works
Simplicity
Director, Niels Bohr Chair in Theoretical Physics at...
Morality Is Made Of Meat
Senior Researcher, Director, The Oxford Morals Project,...
Pluto Is A Bump In The Road
Sterling Professor of Social and Natural Science, Yale...
Tabby's Star
Physicist; Entrepreneur & Venture Capitalist; Science...
Doing More With Less
Physicist, Institute of Advanced Study; Author, Disturbing...
Our Collective Blind Spot
Associate Professor of Environmental Studies, NYU; Author,...
Complete Head Transplants
Software Pioneer; Philosopher; Author, A Realtime...
There Is (Already) Life On Mars
Harold M. Brierley Professor of Business Administration,...
Everything Is Computation
Cognitive Scientist, MIT Media Lab, Harvard Program for...
Hope Beyond The Higgs Boson
Horace D. Taft Associate Professor of Physics, Yale...
Hi, Guys
Actor; Writer; Director; Host, PBS program Brains on Trial...
A Compelling Explanation For Science Misconduct
Neurobiologist; Professor of Pharmacology and Physiology,...
One Hundred Years Of Failure
Professor of Quantum Mechanical Engineering, MIT; Author,...
Energy Of Nothing
Theoretical Physicist, Stanford; Father of Eternal Chaotic...
The Twin Tides Of Change
Founding Managing Director, SchoolDash; Co-organizer, Sci...
The Broadening Scope Of Science
Professor of Psychology, UC Berkeley
Carpe Diem
Head of Research Group Systems, Neuroscience and Cognitive...
Magnet Meridian Contemporary
Diversity In Science
Professor of Physics & Astronomy, University of...
People Are Animals
Anthropologist; Historian
Rethinking Authority With The Blockchain Crypto Enlightenment
Philosophy and Economic Theory, the New School for Social...
Science Made This Possible
Visiting Professor, Stevens Institute of Technology; Author...
Fundamentally Newsworthy
Professor and Chair, Department of Biological Sciences,...
The Decline Of Cancer
Master of the New College of the Humanities; Supernumerary...
Weapons Technology Powered Human Evolution
Jan Eisner Professor of Archaeology, Comenius University in...
The Race Between Genetic Meltdown and Germline Engineering
Founder of field of Evolutionary Psychology; Co-director,...
Paleo-DNA and De-Extinction
Professor of Cognitive Biology, University of Vienna;...
A New Algorithm Makes Us Rethink What Computers Can—and Cannot—Do
Senior Research Fellow, Centre for Research in the Arts,...
J. M. Bergoglio’s 2015 Review of Global Ecology
Doris Duke Chair of Conservation Ecology, Duke University;...
Macro-Criminal Networks
Philosopher; Director, Scientific Vortex, Inc.
Pointing Is A Prerequisite For Language
Professor and Chair, Department of Linguistics, University...
Use of 3D printing in the medical field
Medical Director, Cardiac Surgery Step-Down Unit at...
The Predictive Brain
University Distinguished Professor of Psychology,...
True Breakthroughs Become Part Of The Culture
Physicist, Harvard University; Author, Dark Matter and the...
Science Itself
Brooks and Suzanne Ragen Professor of Psychology and...
The News That Wasn’t There
Space Exploration, New and Old
Professor Emeritus, University of Maryland, Baltimore...
Antibiotics Are Dead; Long Live Antibiotics!
Gerontologist; Chief Science Officer, SENS Foundation;...
The Ongoing Battles With Pathogens
Psychologist, UPenn; Director, Penn Laboratory for...
Virtual Reality Goes Mainstream: A Complex Convolution
Professor of Theoretical Philosophy, Johannes Gutenberg-...
Neuro-news
Associate Fellow, Warburg Insitute (London); Research...
Blue Marble 2.0
Chief Strategy Officer of The Nature Conservancy; Author,...
We Fear the Wrong Things
Professor of Psychology, Hope College; Co-author,...
Psychology’s Crisis
Psychologist, Boston College; Author, How Art Works: A...
A Call To Action
Curator, Serpentine Gallery, London; Editor: A Brief...
A Collective Realization—We May All Die Horribly
Neuroscientist, Stanford University; Author, Behave
Blinded By Data
Senior Scientist, MacroCognition LLC; Author, Seeing What...
Fatty Foods Are Good For Your Health
Evolutionary Scientist, University of Connecticut; Author,...
Unpublicized Implications Of Hawking Black Hole Evaporation
Professor of Mathematical Physics, Tulane University;...
Deep Learning, Semantics, And Society
Scientist, Self-Aware Systems; Co-founder, Center for...
Progress In Rocketry
Science Historian; Author, Analogia
The Wisdom Race Is Heating Up
Physicist, MIT; Researcher, Precision Cosmology; Scientific...
Leaking, Thinning, Sliding Ice
Professor of Environmental Studies, Brown University;...
Human Chimeras
George Putnam Professor of Biology, Harvard University;...
The 6 Billion Letters Of Our Genome
Professor of Genomics, The Scripps Translational Science...
The Truthiness Of Scientific Research
Independent Investigator and Theoretician; Author, The...
Biological Models of Mental Illness Reflect Essentialist Biases
Chair of Developmental Psychology in Society, University of...
Mathematics And Reality
Author, The Math Book, The Physics Book, and The Medical...
The News Is Not The News
Physicist, MIT; Recipient, 2004 Nobel Prize in Physics;...
Artificial Intelligence
Panasonic Professor of Robotics (emeritus); Former Director...
Bugs R Us
Biological Anthropologist and Paleobiologist; Evan Pugh...
3 Decarbonizing Scientific Breakthroughs and Some Lessons Learned
Co-Founder, Former Chief Scientist, Sun Microsystems;...
Growing A Brain In A Dish
Professor of Developmental Psychopathology, University of...
A New Space Age Takes Off…And Returns To Earth Again
Futurist; Senior Vice President for Global Government...
Seeing Our Cyborg Selves
Professor of English and Journalism, State University of...
Bayesian Program Learning
Nobel Prize in Physics; Senior Astrophysicist,...
The State Of The World Isn’t Nearly As Bad As You Think
Neuroscientist; Professor of Philosophy, Caltech; Co-author...
Gene Editing Will Transform Life Utterly
Research Professor of Life Sciences, Director (2014-2019),...
A Genuine Science Of Learning
Mathematician; Executive Director, H-STAR Institute,...
The Rejection of Science Itself
Media Analyst; Documentary Writer; Author, Throwing Rocks...
Breakthrough Listen
Former President, The Royal Society; Emeritus Professor of...
The Ironies of Higher Arithmetic
Author and Essayist, New York Times. New Yorker, Slate;...
Feces Standard Money (FSM)
Professor of Environmental Engineering, UNIST; Director,...
The Epistemic Trainwreck Of Soft-Side Psychology
Annenberg University Professor, University of Pennsylvania...
Theodiversity
Psychologist, University of British Columbia; Author, Big...
Cognitive Science Transforms Moral Philosophy
Board of Governors Professor, Department of Philosophy,...
News About How The Phyiscal World Operates
Felix Bloch Professor in Theoretical Physics, Stanford;...
The Universe Surprised Us, Close To Home, In Unexpected Places, And Unexpected Ways
Theoretical Physicist; Foundation Professor, School of...
The En-Gendering Of Genius
Philosopher, Novelist; Recipient, 2014 National Humanities...
News About Science News
Professor, Director, The Center for Internet Research,...
Intellectual Convergence
Psychologist; Assistant Professor of Marketing, Stern...
The Continually New You
Founding Dean, Minerva Schools at the Keck Graduate...
A Science Of The Consequences
Journalist; Editor, Nova 24, of Il Sole 24 Ore
Sub-Prime Science
Emeritus Professor of Psychology, London School of...
The Creation Of A "No Ethnic Majority" Society
Journalist; Author, Us and Them
Fear Of Dread Risks
Psychologist; Director, Harding Center for Risk Literacy,...
The Greatest Environmental Disaster in the World Today: Air Pollution
Physicist, UC Berkeley; Author, Now: The Physics of Time
Interconnectedness
Research Associate & Lecturer, Harvard; Author, Alex...
The Most Powerful Carcinogen May Be Entropy
Author; The Cancer Chronicles, The Ten Most Beautiful...
The Great Convergence
Deputy Technology Editor, The New York Times; Former...
We Are Not Special
Associate Professor of Psychology, University of North...
Programming Reality
Physicist, Director, MIT's Center for Bits and Atoms;...
DNA Programming
Behavioral Scientist, LSE; Author, Happy Ever After
How Widely Should We Draw The Circle?
David J. Bruton Centennial Professor of Computer Science,...
I, For One
Cyril G. Veinott Green and Gold Professor, Department of...
Imaging Deep Learning
Professor of Cognitive Philosophy, Department of Philosophy...
Modernity Is Winning
Independent Researcher; Author, The Princeton Field Guide...
The Thin Line Between Mental Illness And Mental Health
Psychiatrist; Clinical Associate Professor of Psychiatry,...
Open Water–The Internet Of Visible Thought
The Neural Net Reloaded
Psychologist; President Emeritus, Cooper Union
Neural Hacking, Handprints, And The Empathy Deficit
Psychologist; Author (with Richard Davidson), Altered Traits
News That Stayed News
Professor Emerita, George Mason University; Visiting...
Toddlers Can Master Computers
Psychologist, UC, Berkeley; Author, The Gardener and the...
Big Data And Better Government
Sara Miller McCune Director, Center For Advanced Study in...
The Democratization of Science
Publisher, Skeptic magazine; Monthly Columnist, Scientific...
Neuroprediction
Associate Professor of Psychology, Georgetown University
Juice
The Immune System: A Grand Unifying Theory for Biomedical Research
The State Of The Brain
Computational Neuroscientist; Francis Crick Professor, the...
Cancer Drugs For Brain Diseases
Distinguished Professor of Physiology, Pharmacology, and...
Sensors: Accelerating The Pace Of Scientific Discovery
Technology Forecaster; Consulting Associate Professor,...
The News Behind The News
Senior Consultant (and former Editor-in-Chief and...
Human Progress Quantified
Johnstone Family Professor, Department of Psychology;...
We Know All The Particles And Forces That We’re Made Of
Theoretical Physicist, Caltech; Author, Something Deeply...
No News Is Astounding News
Physicist, Perimeter Institute; Author, Einstein's...
The Facility Whose Data May Help Formulate And Test Ideas About A Final Theory Of Our Universe Is Now Working
Theoretical Particle Physicist and Cosmologist; Victor...
The Healthy Diet U-Turn
Science writer; Author, Monsters
Designer Humans
Professor of Evolutionary Biology, Reading University, UK;...
Adjusting To Feathered Dinosaurs
Professor of Linguistics and Western Civilization, Columbia...
Gigantic Black Holes At The Center Of Galaxies
Theoretical Physicist; Aix-Marseille University, in the...
The Trust Metric
Psychologist; Co-founder, The Gottman Relationship...
Optogenetics
Neuroscientist; Director, Social Brain Lab, Netherlands...
Advanced Ligo And Advanced Virgo
Theoretical physicist; cosmologist; astro-biologist; co-...
The Longevity Of News
Professor of Psychology, University of California, San...
Weather Prediction Has Quietly Gotten A Lot Better
Complexity Scientist; Scientist in Residence at Lux Capital...
Life Diverging…
Managing Director, Excel Venture Management; Co-author (...
The Universe Is Infinite
Mathematician; Computer Scientist; Cyberpunk Pioneer;...
People Kill Because It’s The Right Thing To Do
Classics Scholar, University Librarian, ASU; Author, Pagans
Global Warming: Once Again, A Most Serious Challenge To Our Species
Professor of Anthropology, University of Michigan; Adjunct...
Harnessing Our Natural Defenses Against Cancer
Evolutionist, CNRS, Santa Fe Institute, Institute for...
Super Massive Black Holes
Professor Emeritus, Stevens Institute of Technology; Former...
Extraterrestrials Don’t Land On Earth!
Director, Big History Institute and Distinguished Professor...
Datasets Over Algorithms
Scientist; Inventor; Entrepreneur; Investor
The Word: First As Art, Then As Science
Author, The Most Human Human; Co-author (with Tom Griffiths...
Cellular Alchemy
Director, External Affairs, Science Museum Group; Co-author...
The Convergence Of Images And Technology
Associate Professor of History in Art, University of...
Glaciers
Evolutionary Biologist; Professor of Anthropology and...
Life In The Milky Way
Astrophysicist; Author, Why?: What Makes Us Curious
Systems Medicine
Professor of Biological Sciences, Physics, Astronomy,...
A Cheap, Naturalistic, Large-Scale Research Method Designed To Assess And Interpret Our Social Media Linguistic Interactions
Web Psychologist, Speaker; Author, Webs of Influence: The...
High Tech Stone Age
Writer; Speaker; Thinker, Copenhagen, Denmark
The Mindful Opening And Meeting Of Minds
Archaeologist; Journalist; Author, Artifacts, Past Poetic
A Robust Challenge To The Value Of A University Education
Philosopher; Auguste Comte Chair in Social Epistemology,...
Nootropic Neural News
Professor, Harvard University; Director, Personal Genome...
Linking The Levels Of Human Variation
Assistant Professor, Department of Sociology, University of...
Our Changing Conceptions Of What It Means To Be Human
Hobbs Professor of Cognition and Education, Harvard...
The Abdication Of Space-Time
Cognitive Scientist, UC, Irvine; Author, The Case Against...
Those Annoying Ads? The Harbinger Of Good Things To Come
CEO, Socratic Arts Inc.; John Evans Professor Emeritus of...
The Disillusionment Hypothesis And The Decline and Disaffection For Poor White Americans
Theodore M. Newcomb Distinguished University Professor of...
The Most Important X...Y...Z...
Professor of Geography, University of California Los...
Identifying The Principles, Perhaps The Laws, Of Intelligence
Author, Machines Who Think, The Universal Machine, Bounded...
| 2024-11-08T07:27:41 | en | train |
10,824,918 | nafizh | 2016-01-02T03:49:50 | Learn to Speak or Teach Better in 30 Minutes by Andrew Ng | null | https://www.linkedin.com/pulse/20140320175655-176238488-learn-to-speak-or-teach-better-in-30-minutes | 14 | 0 | null | null | null | no_error | Learn to Speak or Teach Better in 30 Minutes | 2014-03-20T17:56:55.000+00:00 | Andrew Ng |
How can you rapidly improve your presentation skills? When I began teaching at Stanford University in 2002, I was one of the weakest teachers--bottom 13% according to my student reviews. Eleven years later, in 2013, students named me one of the top 10 professors across all of Stanford University. During that journey, there was one short period when my teaching and public speaking rapidly improved, through a process called deliberate practice.
We all know that to get better at a musical instrument or a sport, you have to practice. Practice does not simply mean “doing the activity over and over.” Instead, you learn fastest when you engage in a focused process called deliberate practice, in which you repeatedly attempt an especially challenging part of the task.
When the best musicians are working to improve, they don’t just play their favorite tunes for hours. Instead, they pick a short but challenging passage in a larger musical piece, and repeatedly play that passage until they get it right. Athletes use a similar process to hone their skills. This is hard work---you focus in every attempt, try to figure out what you’re doing wrong, and tweak your performance to make it better. If you do it right, you might be mentally drained after 30 minutes.
Deliberate practice is common in music and in sports, but is rarely used in the context of speaking or teaching. In fact, knowledge workers in most disciplines rarely engage in deliberate practice. This limits how rapidly we get better at our jobs; it also means that deliberate practice might help you progress faster than your peers.
Key elements of deliberate practice include:
Rapid iteration.
Immediate feedback.
Focus on a small part of the task that can be done in a short time.
Here’s a 30 minute deliberate practice exercise for improving your presentations:
Select a ~60 second portion of a presentation that you made recently, or that you plan to make.
Record yourself making that 60 second presentation. Use a webcam, camcorder, or your cellphone video camera to capture video and audio.
Watch your presentation. If you haven’t seen yourself on video much, you’ll be appalled at how you look or sound. This is a good sign; it means that your speaking ability is about to improve dramatically.
Decide what you’d like to adjust about your presentation. Then go back to Step 2, try again, making any changes you think will improve your speaking.
Repeat the cycle of recording, watching, and adjusting 8 - 10 times.
You want to select only a ~60 second portion of your presentation to practice. By using only 60 second segments, you can go through the steps above maybe 8-10 times in half an hour (i.e., you can perform many iterations in a short time). The first time I did this, I recorded myself talking for 30 minutes. But you don’t really want to watch a 30 minute video of yourself talking—it gets boring—and in a 30 minute video, you’ll also find far too many things to change that you won’t be able to keep them straight in your mind.
This was the process I used to improved my teaching. For about a year, I had a camcorder set up in my living room, and I went through the record-watch-adjust cycle whenever I had a few moments to spare. Although I still have much to learn, a series of many practice sessions helped me to improve my teaching more quickly than anything else I’ve done, and ultimately allowed me to develop and launch my first MOOC in 2011. In the later parts of my teaching career, when I was learning how to create MOOC-style online lecture videos, the process of deliberate practice helped me get much better at that too.
If you try this technique, or if you apply deliberate practice to other areas of your life, please comment below and share your experience.
---------------------------------------------------------------------------------------------------------------
Q: What 60 seconds of a presentation should I choose?
A: Don’t spend too much time picking the “perfect” 60 seconds. The first time out, you might pick a piece of your presentation that you’re already comfortable with. Once you feel like you’re mastering a particular 60 second piece, go on and pick a different 60 second part, ideally something that you find challenging.
Q: I really don’t like watching or hearing myself on video.
A: That’s like saying that you don’t want accurate feedback on your own performance. The video camera reflects back to you how you’re presenting to others. You should find out how others are seeing you, and you will need accurate feedback if you want to improve.
Q: Can this improve my speaking ability in other settings as well, for example improving my ability to give critical feedback in a 1:1, or improving how I speak at my team’s weekly meeting?
A: Yes! You can use this method to practice your delivery in these other settings.
Q: In sports and in music, usually having a coach improves the feedback you get. Won’t I need one too?
A: If you can have a friend or mentor give you better feedback—both what they see in your performance, as well as suggestions for how to adjust things—this would certainly accelerate your learning. But it’s more important for you to get going quickly, and you’ll be able to give yourself plenty of good feedback just by watching yourself on video. In order to get inspiration for ways to improve, I also watch YouTube videos of great speakers (my favorites include Bill Clinton, Steve Jobs and Michelle Obama) to identify things they do, which I then try to mimic. This can come much later in your learning process though.
Q: Can I apply deliberate practice to other aspects of knowledge work?
A: I don’t have a great answer, but frequently think about this. One challenge is that in other areas of knowledge work than public speaking (such as delegation, strategic planning, writing, …) it isn’t always easy to get good feedback. But if you have any ideas, please let me know in the comments below.
Q: Doesn’t this method only improve the delivery of my presentation, but not the actual content of the presentation?
A: Yes, that’s right. I will have more to say about the content of presentations in a later article. If you are interested in this other topic, please follow me on LinkedIn and Twitter (@AndrewYNg) so that you will be notified when I write about that.
| 2024-11-08T04:40:16 | en | train |
10,824,955 | mrry | 2016-01-02T04:02:06 | Everything is computation | null | http://edge.org/response-detail/26733 | 2 | 0 | null | null | null | no_error | 2016 : WHAT DO YOU CONSIDER THE MOST INTERESTING RECENT [SCIENTIFIC] NEWS? WHAT MAKES IT IMPORTANT? | null | null | These days see a tremendous number of significant scientific news, and it is hard to say which one has the highest significance. Climate models indicate that we are past crucial tipping points and are irrevocably headed for a new, difficult age for our civilization. Mark Van Raamsdonk expands on the work of Brian Swingle and Juan Maldacena, and demonstrates how we can abolish the idea of spacetime in favor of a discrete tensor network, thus opening the way for a unified theory of physics. Bruce Conklin, George Church and others have given us CRISPR, a technology that holds the promise for simple and ubiquitous gene editing. Deep Learning starts to tell us how hierarchies of interconnected feature detectors can autonomously form a model of the world, learn to solve problems, and recognize speech, images and video.
It is perhaps equally important to notice where we lack progress: sociology fails to teach us how societies work, philosophy seems to have become barren and infertile, the economical sciences seem to be ill-equipped to inform our economic and fiscal policies, psychology does not comprehend the logic of our psyche, and neuroscience tells us where things happen in the brain, but largely not what they are.
In my view, the 20th century’s most important addition to understanding the world is not positivist science, computer technology, spaceflight, or the foundational theories of physics. It is the notion of computation. Computation, at its core, and as informally described as possible, is very simple: every observation yields a set of discernible differences.
These, we call information. If the observation corresponds to a system that can change its state, we can describe these state changes. If we identify regularity in these state changes, we are looking at a computational system. If the regularity is completely described, we call this system an algorithm. Once a system can perform conditional state transitions and revisit earlier states, it becomes almost impossible to stop it from performing arbitrary computation. In the infinite case, that is, if we allow it to make an unbounded number of state transitions and use unbounded storage for the states, it becomes a Turing Machine, or a Lambda Calculus, or a Post machine, or one of the many other, mutually equivalent formalisms that capture universal computation.
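To make that definition concrete, here is a minimal sketch in Python (my own illustration, not taken from the essay): a "system" is just a state plus a transition rule, and running it is nothing more than applying that rule repeatedly.

    def run(transition, state, steps):
        """Apply a state-transition rule repeatedly and record every state."""
        history = [state]
        for _ in range(steps):
            state = transition(state)
            history.append(state)
        return history

    # One fully regular rule: a 3-bit counter that increments modulo 8.
    def increment(bits):
        value = (bits[0] * 4 + bits[1] * 2 + bits[2] + 1) % 8
        return ((value >> 2) & 1, (value >> 1) & 1, value & 1)

    print(run(increment, (0, 0, 0), 3))
    # [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]

The completely described regularity here is the "algorithm"; add conditional transitions, the ability to revisit earlier states, and unbounded storage, and you arrive at the universal formalisms named above.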
Computational terms rephrase the idea of "causality," something that philosophers have struggled with for centuries. Causality is the transition from one state in a computational system into the next. They also replace the concept of "mechanism" in mechanistic, or naturalistic philosophy. Computationalism is the new mechanism, and unlike its predecessor, it is not fraught with misleading intuitions of moving parts.
Computation is different from mathematics. Mathematics turns out to be the domain of formal languages, and is mostly undecidable, which is just another word for saying uncomputable (since decision making and proving are alternative words for computation, too). All our explorations into mathematics are computational ones, though. To compute means to actually do all the work, to move from one state to the next.
Computation changes our idea of knowledge: instead of treating it as justified true belief, knowledge describes a local minimum in capturing regularities between observables. Knowledge is almost never static, but progressing on a gradient through a state space of possible world views. We will no longer aspire to teach our children the truth, because like us, they will never stop changing their minds. We will teach them how to productively change their minds, how to explore the never ending land of insight.
A growing number of physicists understand that the universe is not mathematical, but computational, and physics is in the business of finding an algorithm that can reproduce our observations. The switch from uncomputable, mathematical notions (such as continuous space) makes progress possible. Climate science, molecular genetics, and AI are computational sciences. Sociology, psychology, and neuroscience are not: they still seem to be confused by the apparent dichotomy between mechanism (rigid, moving parts) and the objects of their study. They are looking for social, behavioral, chemical, neural regularities, where they should be looking for computational ones.
Everything is computation.
| 2024-11-08T15:53:54 | en | train |
10,824,964 | vmorgulis | 2016-01-02T04:04:03 | A terminal emulator in D | null | https://github.com/adamdruppe/terminal-emulator | 3 | 0 | null | null | null | no_error | GitHub - adamdruppe/terminal-emulator: A terminal emulator in D. | null | adamdruppe | This is a terminal emulator library and ui with some xterm features and some extensions.
BUILDING
See the example makefile for what I do on my system. You'll need a collection of files from my arsd repo, then just pass them all right over using the same version and -J flags I did.
dmd main.d terminalemulator.d -L-lutil ~/arsd/{color,eventloop,stb_truetype,xwindows,png,bmp,simpledisplay}.d -version=with_eventloop -J.
for one example
It expects a monospace font to be available. I use the bitstream vera or the dejavu monospaces and rename them as the monospace-2.ttf.
You can probably just do like
cp /usr/share/fonts/TTF/DejaVuSansMono.ttf monospace-2.ttf
if you're on Linux or download the font http://web.archive.org/web/20111127102009/http://www-old.gnome.org/fonts/
Or even just modify the code to remove that bit or load it from your system at runtime. Maybe I'll change that later anyway.
| 2024-11-08T14:09:55 | en | train |
10,825,062 | vmorgulis | 2016-01-02T04:38:04 | Red Language: Features and future directions | null | http://www.red-lang.org/2015/12/answers-to-community-questions.html | 66 | 9 | [
10827266,
10827316,
10827357
] | null | null | no_error | Answers to community questions | null | null |
Features, future directions
Will Red get lisp like macros?
The short answer is: yes, but.... We first need to define what we want the API for macros to look like and how to prevent users from shooting themselves in the foot too easily using them. Also, macros are challenging for a visual debugger, so we first need a clear vision of how an IDE would handle that (I have not yet looked into how top Lisp IDEs manage it, though I hope they have solved it elegantly).
About the ETA, we had a strong need for macros in our Android bridge, though that need might be gone now that Red's startup time has been vastly improved earlier this year. Answer to that should come in a few weeks, once we merge the existing Android backend with our new GUI engine.
When will Red get optional types?
It already has them, along with multi-typing: you can specify one or more allowed types for function arguments, return values and local words. Only argument type-checking is implemented for now; the rest will come on the path to 1.0. The most interesting part of optional typing is how much the compiler will be able to leverage it to generate much faster code when everything is mono-typed. Though, we will find that out after 1.0, once we work on the self-hosted compiler and toolchain. As the current toolchain has a limited lifetime, we try to avoid any non-mandatory feature before 1.0.
Will Red have built-in support for some type of concurrency and/or parallel processing? What kind of model is it going to follow?
Certainly. Increase in computation power is now horizontal, with multiple cores, multiple processors and distributed architectures, so, strong concurrency support is a key part of a modern general-purpose language.
For now, the model we aim at is the Actor model. It is a good fit for Red and would provide a clean way to handle concurrent and parallel tasks across multiple cores and processors. Though, a few years has passed since that plan was made, so we will need to revisit it when the work on 0.9.0 will start, and define what is the best option for Red then. One thing is sure, we do not want multithreading nor callback hell in Red. ;-)
Do you plan to implement any kind of app/web server in Red, similar to Cheyenne available for Rebol?
As Cheyenne's author, I have strong plans for a new web application server with much better scalability than what you could achieve using Rebol. Red 1.0 should come with pretty strong server-side programming abilities out of the box, then on top of that, we'll provide a modern framework for webapp creations (think GWT or Opalang-like approach).
In addition to that, we'll have a Cheyenne RSP compatibility layer for running old Cheyenne scripts aiming at at drop-in replacement for existing webapps.
Will Red support multiselect/switch soon?
As soon as possible, maybe for the upcoming 0.6.0 release.
Will we get promises/futures in Red?
Possibly. We will experiment with that in one of the 0.7.x releases. We will have to see how such an abstraction could integrate into our future concurrency model.
Will Red get direct access to Android's (and IOS later) camera, location, gyroscope, etc features?
Absolutely, our GUI engine already features a camera widget (in our Windows backend). The work on Android backend in 0.6.x version will bring wrappers to all the common hardware features.
Red is going to get modules support in future, what about Red/System?
As Red/System is an embedded dialect of Red, Red's upcoming modules system will allow inclusion of Red/System parts, so a separate modules system for it is not necessary for now.
Will function! be first-class datatype in Red/System v2?
Strictly speaking no, as you won't be allowed to create new functions from Red/System at run-time (but you will be able to create new Red/System functions from Red dynamically). The other first-class features will be possible (to a greater extent than today): passing function! pointer as arguments, returning a function! value from a function or assigning a function! pointer to a variable.
Will Red have the equivalent of Go lang's net package?
Red will feature a complete networking layer in 0.7.0, including async IO support, through a nice high-level API (similar to Rebol's one). So DNS, TCP, UDP and many more common protocols will be built-in, fortunately, relying on a very lightweight API, unlike Go's net package. ;-)
What about a package manager (in future)?
We have a modules system planned for 0.8.0. Design details are not yet defined, though we'll strive to integrate the best ideas from other existing package managers around.
Is there going to be inbuilt unit testing, something like http://golang.org/pkg/testing/?
We'll have a built-in unit testing support, probably starting with a lightweight one integrated into our upcoming modules system.
Is there a chance Red gets self-hosted sooner than initially planned, removing the R2 dependency?
Self-hosting Red means rewriting the toolchain (compilers, linker and packagers) in Red (currently written in Rebol2). Technically, 0.6.0 should have all the needed features for starting such a rewrite; unfortunately, we currently don't have the resources to start such a big task while continuing the work towards 1.0. The self-hosting work would not be a port of the current toolchain; it would use a very different architecture (because of the JIT-compilation requirement and extra features of Red compared to Rebol). We aim at a programmable modular toolchain model, not very far from LLVM (just simpler and orders of magnitude smaller).
To be accurate, fully removing Rebol2 dependency is a two steps process:
Removing the need for Rebol/SDK to build the Red binary, making it easy for anyone to rebuild Red binary from sources.
Rewriting the toolchain in Red.
Developers, community, documentation
How do you regard the development of Red 2.0 to proceed in terms of speed/progress? Will it be faster or equal to current road to Red 1.0?
Red 2.0 is mostly about rewriting the toolchain in Red, which represents only 25% of the current Red codebase (the other 75% part is the runtime library). Moreover, the modular architecture and public API of the new toolchain will make it much easier to write and integrate contributions from third-parties, so we'll gear all our efforts towards involving as many skilled contributors as possible. If you want Red 2.0 to come quicker, helping Red's user base grow up by contributing, writing apps, docs and tutorials is the best thing you can do right now. ;-)
What do you think is the "killer app" Red should provide, in order to attract more of developers/newcomers?
Definitely an innovative IDE. ;-) Beyond that, I believe that a successful Android app written in Red could do a lot to spread Red usage widely. If you have a great idea for such app, you'll soon be able to code it in Red 0.6.1, with full Android support.
What about documentation comments (something like rustdoc https://doc.rust-lang.org/book/documentation.html)?
In Red, like in Rebol, docstrings are allowed in functions (and in modules once we have them), so they can be processed more easily than comments (which exist only when the source is in text form). That's one of the tangible advantages of having an homoiconic language. That is also how the help command works in the console, it extracts information at runtime from functions and the environment.
That said, if you want heavier documentation embedded inside your code, you can easily define your own format for that and writing a preprocessor for it should be almost trivial using our Parse dialect (either in text form or after loading, in block form). You can go as far as implementing a Literate Programming layer if that suits your taste, like this one made for Rebol.
That's all for this time, if you want to discuss some parts deeper, you are welcome to join our chat-room on Gitter, which is way more convenient than Blogger's comment system.
See you soon for the 0.6.0 release, don't miss it! ;-)
| 2024-11-08T06:27:35 | en | train |
10,825,190 | dan_siepen | 2016-01-02T05:22:58 | We need scholarships for Women in Tech | null | https://medium.com/@DanSiepen/we-need-scholarships-for-women-in-tech-4b78691911b3#.ylw046d4o | 1 | null | null | null | true | no_error | We need scholarships for Women in Tech - Dan Siepen - Medium | 2016-01-02T05:21:45.228Z | Dan Siepen | This story originally appeared on Coder Factory Academy.
Despite companies becoming increasingly conscious of gender disparities in tech, the gap has not closed nearly to the extent one might expect. Luckily, companies like Toptal have taken notice and created initiatives to bridge the gap. The network of the world's top freelance software developers and designers has launched a scholarship program called Toptal Scholarships for Female Developers, which will award 12 scholarships ($US5000 each) in the next 12 months to help women pursue their technical career dreams. In this piece, Grace Fish breaks down the unfortunate reasons we desperately need scholarships for women in tech…
The cat's out of the bag: There's a major gender gap in tech. It's not shrinking, and it's hurting the industry. By most estimates, women make up only 30% of the workforce in STEM. The reality is actually much more grim.
Since Pinterest engineer Tracy Chou asked "where are the numbers?" in a viral editorial back in 2013, major players in the tech industry have started publishing their demographics. At Google and Facebook, women make up about a third of the workforce, but they fill only 18% and 16% of the technical roles respectively. Even though Twitter has equal gender representation in non-tech jobs, only 10% of its engineering jobs are held by women. This pattern is industry wide.
It's not always been this way. Of course not, you're probably thinking. Women have historically always had to fight for greater representation. They're probably on a slow uphill battle from 0% to 30% and they'll keep climbing. That's not what's been happening, though. Thirty years ago, the gender gap in tech was actually smaller, both in terms of female representation in tech jobs and in terms of engineering degrees received. In the 80s, nearly 40% of computer science graduates were women. Now, only 18% are women.
This decline doesn't make sense. The tech industry is booming and everyone knows it. Just take a look at the surge in rent prices in the Bay Area or watch the widely popular comedy series Silicon Valley. There's lots of money and there are lots of jobs, so what's going on?
According to Reshma Saujani, the founder of Girls Who Code, there are a bunch of deterrents keeping women away from tech, one of the biggest being the widely promulgated idea that tech is a man's world. From Silicon Valley to The Social Network, tee-shirt clad white guys are the heroes of tech while women are slotted in the role of sidekick. Introductory coding classes are called "Emails for Females" and teen megastores like Abercrombie and Fitch and Forever 21 sell tank tops with sayings like "allergic to algebra" written across their chests.
Now, we're way past the point of arguing about whether or not women are as smart as men. So, this idea isn't just harmful in terms of gender equality, it's harmful for the whole industry. Gender-balanced teams outperform all-male teams. Businesses with at least one woman in an executive leadership position receive valuations that are 64% larger than those who don't. Sending the message to women that they aren't wanted in tech hurts companies' bottom lines.
Reversing these trends is going to take a lot more than strong recruitment efforts by tech companies. There are too few female computer science grads to reach anything close to gender parity, and female engineers who are on the job market aren't going to be excited about joining teams that only have other women in non-tech roles. The solution begins with showing girls, starting at school level, that they are supported in STEM fields both in the classroom and in the work world.
By Grace Fish | 2024-11-08T08:18:48 | en | train |
10,825,233 | firebones | 2016-01-02T05:41:51 | Cap'n Proto: An Infinitely Faster Cerealization Protocol | null | https://capnproto.org/ | 1 | 0 | null | null | null | no_error | Cap'n Proto: Introduction | null | null |
Cap’n Proto is an insanely fast data interchange format and capability-based RPC system. Think
JSON, except binary. Or think Protocol Buffers, except faster.
In fact, in benchmarks, Cap’n Proto is INFINITY TIMES faster than Protocol Buffers.
This benchmark is, of course, unfair. It is only measuring the time to encode and decode a message
in memory. Cap’n Proto gets a perfect score because there is no encoding/decoding step. The Cap’n
Proto encoding is appropriate both as a data interchange format and an in-memory representation, so
once your structure is built, you can simply write the bytes straight out to disk!
But doesn’t that mean the encoding is platform-specific?
NO! The encoding is defined byte-for-byte independent of any platform. However, it is designed to
be efficiently manipulated on common modern CPUs. Data is arranged like a compiler would arrange a
struct – with fixed widths, fixed offsets, and proper alignment. Variable-sized elements are
embedded as pointers. Pointers are offset-based rather than absolute so that messages are
position-independent. Integers use little-endian byte order because most CPUs are little-endian,
and even big-endian CPUs usually have instructions for reading little-endian data.
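To make the "fixed widths, fixed offsets, little-endian" idea concrete, here is a minimal Python sketch (standard library only). The 16-byte layout, field names, and offsets are invented for illustration; this is not the actual Cap'n Proto wire format, just the general pattern of reading fields in place with no decode step.

```python
import struct

# Hypothetical 16-byte struct layout: a u32 "id" at offset 0 (then 4 bytes of
# padding), and a u64 "timestamp" at offset 8. Little-endian ("<") means the
# bytes can be used as-is on common CPUs, with no byte swapping.
buf = struct.pack("<IxxxxQ", 42, 1_700_000_000)

def read_id(data: bytes) -> int:
    # Fixed width at a fixed offset: no varints, no length prefixes to walk.
    (value,) = struct.unpack_from("<I", data, 0)
    return value

def read_timestamp(data: bytes) -> int:
    (value,) = struct.unpack_from("<Q", data, 8)
    return value

print(read_id(buf), read_timestamp(buf))  # 42 1700000000
```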
Doesn’t that make backwards-compatibility hard?
Not at all! New fields are always added to the end of a struct (or replace padding space), so
existing field positions are unchanged. The recipient simply needs to do a bounds check when
reading each field. Fields are numbered in the order in which they were added, so Cap’n Proto
always knows how to arrange them for backwards-compatibility.
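A hedged sketch of the bounds-check idea, reusing the toy layout above: a reader generated from a newer schema asks for a field that an older message never wrote, and gets the default value instead of reading past the end of the data. Again, this is illustrative Python, not Cap'n Proto's real struct encoding.

```python
import struct

def read_u32_field(data: bytes, offset: int, default: int = 0) -> int:
    # Older messages are simply shorter: a field added later falls outside
    # their bounds, so the reader falls back to the schema default.
    if offset + 4 > len(data):
        return default
    (value,) = struct.unpack_from("<I", data, offset)
    return value

old_message = bytes(8)    # written before the new field existed
new_message = bytes(12)   # has room for the field added at offset 8

print(read_u32_field(old_message, 8, default=7))  # 7 (field absent, default used)
print(read_u32_field(new_message, 8, default=7))  # 0 (field present, value is zero)
```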
Won’t fixed-width integers, unset optional fields, and padding waste space on the wire?
Yes. However, since all these extra bytes are zeros, when bandwidth matters, we can apply an
extremely fast Cap’n-Proto-specific compression scheme to remove them. Cap’n Proto calls this
“packing” the message; it achieves similar (better, even) message sizes to protobuf encoding, and
it’s still faster.
When bandwidth really matters, you should apply general-purpose compression, like
zlib or LZ4, regardless of your
encoding format.
Isn’t this all horribly insecure?
No no no! To be clear, we’re NOT just casting a buffer pointer to a struct pointer and calling it a day.
Cap’n Proto generates classes with accessor methods that you use to traverse the message. These accessors validate pointers before following them. If a pointer is invalid (e.g. out-of-bounds), the library can throw an exception or simply replace the value with a default / empty object (your choice).
Thus, Cap’n Proto checks the structural integrity of the message just like any other serialization protocol would. And, just like any other protocol, it is up to the app to check the validity of the content.
Cap’n Proto was built to be used in Sandstorm.io, and is now heavily used in Cloudflare Workers, two environments where security is a major concern. Cap’n Proto has undergone fuzzing and expert security review. Our response to security issues was once described by security guru Ben Laurie as “the most awesome response I’ve ever had.” (Please report all security issues to [email protected].)
Are there other advantages?
Glad you asked!
Incremental reads: It is easy to start processing a Cap’n Proto message before you have
received all of it since outer objects appear entirely before inner objects (as opposed to most
encodings, where outer objects encompass inner objects).
Random access: You can read just one field of a message without parsing the whole thing.
mmap: Read a large Cap'n Proto file by memory-mapping it. The OS won't even read in the
parts that you don't access (a minimal sketch of this pattern follows this list).
Inter-language communication: Calling C++ code from, say, Java or Python tends to be painful
or slow. With Cap’n Proto, the two languages can easily operate on the same in-memory data
structure.
Inter-process communication: Multiple processes running on the same machine can share a
Cap’n Proto message via shared memory. No need to pipe data through the kernel. Calling another
process can be just as fast and easy as calling another thread.
Arena allocation: Manipulating Protobuf objects tends to be bogged down by memory
allocation, unless you are very careful about object reuse. Cap’n Proto objects are always
allocated in an “arena” or “region” style, which is faster and promotes cache locality.
Tiny generated code: Protobuf generates dedicated parsing and serialization code for every
message type, and this code tends to be enormous. Cap’n Proto generated code is smaller by an
order of magnitude or more. In fact, usually it’s no more than some inline accessor methods!
Tiny runtime library: Due to the simplicity of the Cap’n Proto format, the runtime library
can be much smaller.
Time-traveling RPC: Cap’n Proto features an RPC system that implements time travel
such that call results are returned to the client before the request even arrives at the server!
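A minimal sketch of the mmap point above (Python standard library; the filename and slice offsets are made up): the file is mapped rather than read, so only the pages actually touched are ever loaded.

```python
import mmap

# "message.capnp.bin" is a hypothetical file that already holds a serialized message.
with open("message.capnp.bin", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        header = mm[:8]              # touches only the first page
        one_field = mm[4096:4100]    # touches a single page further into the file
        print(header.hex(), one_field.hex())
```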
Why do you pick on Protocol Buffers so much?
Because it’s easy to pick on myself. :) I, Kenton Varda, was the primary author of Protocol Buffers
version 2, which is the version that Google released open source. Cap’n Proto is the result of
years of experience working on Protobufs, listening to user feedback, and thinking about how
things could be done better.
Note that I no longer work for Google. Cap’n Proto is not, and never has been, affiliated with Google.
OK, how do I get started?
To install Cap’n Proto, head over to the installation page. If you’d like to help
hack on Cap’n Proto, such as by writing bindings in other languages, let us know on the
discussion group. If you’d like to receive e-mail
updates about future releases, add yourself to the
announcement list.
| 2024-11-08T10:07:36 | en | train |
10,825,261 | chdir | 2016-01-02T05:55:42 | Software with the most vulnerabilities in 2015: Mac OS X, iOS, and Flash | null | http://venturebeat.com/2015/12/31/software-with-the-most-vulnerabilities-in-2015-mac-os-x-ios-and-flash/ | 5 | 1 | [
10825434
] | null | null | no_error | Software with the most vulnerabilities in 2015: Mac OS X, iOS, and Flash | 2015-12-31T16:23:52+00:00 | Emil Protalinski |
December 31, 2015 8:23 AM
Image Credit: REUTERS/Mike Segar
Which software had the most publicly disclosed vulnerabilities this year? The winner is none other than Apple’s Mac OS X, with 384 vulnerabilities. The runner-up? Apple’s iOS, with 375 vulnerabilities.
Rounding out the top five are Adobe’s Flash Player, with 314 vulnerabilities; Adobe’s AIR SDK, with 246 vulnerabilities; and Adobe AIR itself, also with 246 vulnerabilities. For comparison, last year the top five (in order) were: Microsoft’s Internet Explorer, Apple’s Mac OS X, the Linux Kernel, Google’s Chrome, and Apple’s iOS.
These results come from CVE Details, which organizes data provided by the National Vulnerability Database (NVD). As its name implies, the Common Vulnerabilities and Exposures (CVE) system keeps track of publicly known information-security vulnerabilities and exposures.
Here is the 2015 list of the top 50 software products in order of total distinct vulnerabilities:
You’ll notice that Windows versions are split separately, unlike OS X. Many of the vulnerabilities across various Windows versions are the same, so there is undoubtedly a lot of overlap. The argument for separating them is probably one of market share, though that’s a hard one to agree to, given that Android and iOS are not split into separate versions. This is the nature of CVEs.
It’s also worth pointing out that the Linux kernel is separate from various Linux distributions. This is likely because the Linux kernel can be upgraded independently of the rest of the operating system, and so its vulnerabilities are split off.
If we take the top 50 list of products and categorize them by company, it’s easy to see that the top three are Microsoft, Adobe, and Apple:
Keep in mind that tech companies have different disclosure policies for security holes. Again, this list paints a picture of the number of publicly known vulnerabilities, not of all vulnerabilities, nor of the overall security of a given piece of software.
If you work in IT, or are generally responsible for the security of multiple systems, there are some obvious trends to keep in mind. Based on this list, it’s clear you should always patch and update operating systems, browsers, and Adobe’s free products.
| 2024-11-08T06:00:46 | en | train |
10,825,279 | scapbi | 2016-01-02T06:04:47 | Webpack-react-redux-babel-autoprefixer-hmr-postcss-css-modules-rucksack | null | https://github.com/tj/frontend-boilerplate | 1 | 0 | null | null | null | no_error | GitHub - tj/frontend-boilerplate: webpack-react-redux-babel-autoprefixer-hmr-postcss-css-modules-rucksack-boilerplate (unmaintained, I don't use it anymore) | null | tj |
Skip to content
Navigation Menu
GitHub Copilot
Write better code with AI
Security
Find and fix vulnerabilities
Actions
Automate any workflow
Codespaces
Instant dev environments
Issues
Plan and track work
Code Review
Manage code changes
Discussions
Collaborate outside of code
Code Search
Find more, search less
Explore
Learning Pathways
White papers, Ebooks, Webinars
Customer Stories
Partners
GitHub Sponsors
Fund open source developers
The ReadME Project
GitHub community articles
Enterprise platform
AI-powered developer platform
Pricing
Provide feedback
Saved searches
Use saved searches to filter your results more quickly
Sign up
| 2024-11-08T08:18:05 | en | train |
10,825,401 | DiabloD3 | 2016-01-02T07:08:46 | Guanfacine, but Not Clonidine, Improves Planning and Working Memory Performance | null | http://www.nature.com/npp/journal/v20/n5/full/1395310a.html | 1 | 0 | null | null | null | no_error | Guanfacine, But Not Clonidine, Improves Planning and Working Memory Performance in Humans | null | Riekkinen, Paavo |
MainDefective executive functions, such as planning, working memory and attentional set-shifting, are characteristic features of several neurological disorders including Alzheimer disease, frontal lobe dementia, and basal ganglia disorders (Sahakian and Owen 1992; Robbins et al. 1994; Coull et al. 1996). Previous evidence has suggested that the prefrontal cortex is involved in the modulation of executive functions, and that dysfunction of this region may result in specific cognitive defects (Luria 1969; Owen et al. 1990, 1995). For example, neuroimaging studies have revealed that discrete regions of the prefrontal cortex are activated by tasks that measure spatial working memory (Owen et al. 1996a), planning (Baker et al. 1996) or attention (Coull et al. 1996). Furthermore, Owen et al. (1991) compared the performance of patients with excision of frontal lobe or temporal lobe and patients who had undergone amygdalo-hippocampectomy in tests that measure executive functions. The frontal excision patients were impaired in measures of spatial working memory, planning [Tower of London (TOL)] and attentional set-shifting [Intra-dimensional/extra-dimensional set shifting, ID/ED)]. The patients with temporal excision and selective amygdalo-hippocampectomy performed accurately in the TOL test, were slower to respond at the extra-dimensional set-shifting stage of the ID/ED test, and were selectively impaired at the most difficult 8-box problem level in a spatial working memory test (Owen et al. 1990, 1991, 1995). The frontal excision patients solved the TOL task less accurately and their speed of thinking was not as quick (Owen et al. 1990). In the ID/ED attentional test, the frontal patients exhibited deficits when they were confronted with the demanding extra-dimensional set-shifting stage of this test, but speed of responding remained normal (Owen et al. 1991). In contrast, all three groups of patients were impaired in a spatial working memory task (Owen et al. 1995). However, the nature of the performance defect differed in these groups, such that frontal excision patients had a poor search strategy whereas temporal lobe patients exhibited a mnemonic failure.Pharmacological studies in animals have revealed that noradrenergic innervation is important for the functioning of the prefrontal cortex (Arnsten 1993; Berridge et al. 1993). Depletion of catecholamines by infusing 6-hydroxydopamine into the principal sulculs impairs the accuracy of young monkeys in a delayed response task (Brozoski et al. 1979) and the performance accuracy of these lesioned monkeys can be markedly improved by administration of α2-agonists, such as clonidine or guanfacine (Arnsten et al. 1988). As monkeys age, their levels of central noradrenaline become depleted and a defect in working memory performance is also revealed. This age-related spatial working memory failure can also be alleviated by treatment with α2-agonists (Arnsten and Contant 1992; Arnsten 1993; Arnsten and Cai 1993). Importantly, the improvement in working memory induced by α2-agonists is blocked by an α2-antagonist, but not by an α1-antagonist (Arnsten and Cai 1993). Therefore, one could propose that post-synaptic stimulation of α2-adrenoceptors enhances working memory performance and the efficiency of the sulcus principalis (Arnsten and Goldman-Rakic 1985). 
Unfortunately, the sedative and hypotensive side-effects associated with α2-agonists, such as clonidine, limit their use in the treatment of disorders associated with frontal lobe dysfunction (Arnsten et al. 1988).Three subtypes of α2-adrenoceptors have now been cloned in humans: the α2A, α2B, and α2C (Kobilka et al. 1987; Regan et al. 1988; Aoki et al. 1994). The anatomical distribution of all subtypes is unique, supporting the concept that adverse effects of α2-agonists can be dissociated from beneficial drug effects (MacDonald and Scheinin 1995). Indeed, in the rat brain, α2B messenger (m) RNA is found exclusively in the thalamus (Scheinin et al. 1994) and activation of this receptor subtype may impair functioning of thalamocortical arousal mechanisms (Riekkinen Jr. et al. 1993). The brainstem nucleus tractus solitarius, considered to be critical for the hypotensive effect of subtype non-selective α2-agonists (Reis et al. 1984) contains α2A and α2C mRNA (MacDonald and Scheinin 1995). Importantly, in the prefrontal cortex including the sulcus principalis region, the α2A subtype predominates (Aoki et al. 1994). In monkeys, the beneficial effects of subtype non-selective α2-agonists are related to their selectivity for the α2A site (Arnsten et al. 1988; Arnsten and Leslie 1991). Furthermore, idazoxan, a non-subtype selective α2-antagonist, blocked the beneficial effect of an α2-agonist on working memory, but prazosin which in addition to its α1-antagonist properties has quite high affinity for α2B- and α2C-adrenoceptors was ineffective in reversing the positive effects (Uhlen and Wikberg 1991; Marjamäki et al. 1993). These data suggest that actions at α2A-adrenoceptors may be important in mediating the improvement in prefrontal functions in monkeys (Arnsten et al. 1996).We designed the present study to investigate the hypothesis that α2A-adrenoceptor activation is important for the beneficial effect of α2-agonists on frontal lobe functions in humans (Arnsten et al. 1996). The theory of working memory that has been suggested by Baddeley (1986) includes an attentional co-ordinator, the “central executive,” which controls the modality related slave systems of working memory. Dysfunction of the central executive located in the prefrontal cortex also disrupts performance in other tests measuring executive functions, such as TOL planning and ID/ED attentional set-shifting tests. Therefore, if α2-adrenoceptors are important for the modulation of working memory, their role in other frontal functions, such as planning and attentional set-shifting, may also be important (Owen et al. 1990, 1991, 1995). We compared the actions of clonidine and guanfacine on spatial working memory, TOL and ID/ED attentional set-shifting tests in healthy volunteers. We hypothesized that the beneficial effects of guanfacine on frontal lobe functions would be more apparent than those of clonidine, as guanfacine has a greater α2A-adrenoceptor selectivity.MATERIALS AND METHODSSubjectsSix separate groups of healthy and equally intelligent (as indicated by WAIS-R Vocabulary subtest and verbal fluency tests) (Borkowski et al. 1967; Wechsler 1992) young (23–35 years of age, n = 55) university educated volunteers took part in the study. None of the volunteers were receiving concurrent medication, nor had a history of psychiatric, neurologic, or cardiovascular illnesses, or other medical conditions that could interfere with central nervous system functions or interpretation of the results. 
The studies were approved by the local ethical committee and national drug regulatory authority, and all the subjects provided their informed consent in writing. All the subjects were covered by an insurance. The number of test sessions was limited to two at the request of the local ethical committee.Pharmacological ManipulationsSix different experimental groups were used. Five of the groups received once placebo and clonidine or guanfacine at one dose. Clonidine hydrochloride (Catapressan,® Boehringer Ingelheim, Germany) was administered PO 0.5 (n = 6/group), 2.0 (n = 8/group) or 5.0 (n = 8/group) μg/kg in tablet form, or appropriate oral placebo, 90 min before starting the test session. Guanfacine hydrochloride (Estulic,® Sandoz Oy) was administered PO 7 (n = 9/group) and 29 (n = 12/group) μg/kg in tablet form, or appropriate oral placebo, 90 min before starting the test session. The doses were ±3% accurate: e.g., 5 ± 0.15 μg/kg clonidine. One group of 12 subjects received placebo before both of the two testing sessions.Subjects from the clonidine or guanfacine treated groups attended on two occasions (at least seven days between sessions), and received the relevant pharmacological manipulation on one occasion, with an appropriate placebo on the other day in a counterbalanced order for each group (placebo-controlled double-blind cross over design). One of the groups was tested identically but placebo was administered before both testing sessions. Both the subject and the investigator were blinded to the composition of the tablets.Procedure and Experimental DesignExperimental sessions were started at the same time of each testing day for each individual subject. The entire test session lasted 60–90 min for all the subjects, and the testing began 90 min post-ingestion of tablets for all the subjects.Visual Analogue ScaleAfter completion of the test session, the subjects were asked to rate themselves for subjective feelings of “sedation/tiredness” by asking them to place a mark on a 100 mm line numbered from 1 to 10, with 1 representing “not at all” and 10 representing “exceedingly sedated/tired.”Monitoring of Blood PressureBlood pressure of the subjects was measured before they received the study drugs or matching placebo tablets, 90 min afterwards (i.e., just before the beginning of the test session), and after the completion of the test session which lasted for 60–90 min.TestsSpatial Working Memory (Owen et al. 1990)This is a self-ordered search test of working memory, which also incorporates a strategic search component to tax “central executive” (Owen et al. 1990). Subjects had to search through a number of “boxes” (4, 6 or 8) for a hidden “token” without returning to a box which they had already examined on the same trial (to avoid “within search” errors) or which had already contained a token in the previous trial (to avoid “between search” errors). Tokens were hidden one at a time, and were never hidden in the same box twice. The numbers of each type of error at each level of difficulty were measured. In addition, a measure of the use of an efficient search strategy was also derived from this test, defined as the total number of times a subject began a search with a different box on the 6- and 8-box problems. The lower this number, the greater the use of a strategy (Owen et al. 1990 for a fuller description). The results of 6- and 8-box levels were used in the analysis. 
The 4-box level was not included in the analysis, since it is too straightforward for healthy volunteers and a ceiling effect occurs.Tower of London (Owen et al. 1990)This test of planning requires subjects to compare two different arrangements of “snooker or pool balls” in “socks or pockets” (one presented on the top half of the screen, the other on the bottom), and rearrange the balls in the lower half of the screen such that their positions match the goal arrangement in the upper half. Balls were moved by touching the ball to be moved, and then touching the space it was then to occupy. The number of moves required by the subject to rearrange the balls, as well as selection latencies for both the first and subsequent moves were recorded by the computer. These latencies were termed “initial” and “subsequent thinking times” respectively. For each test problem, a “yoked control” condition was employed to provide baseline measures of motor initiation and execution times. In this condition, the actual solutions that subject had generated for the two, three, four and five move problems were played back to the subject one move at a time, and he or she had simply to follow the movements that the computer made. In the analysis of results, these latencies were subtracted from the original selection latencies to give “purer” estimates of cognitive thinking time corrected for sensori-motor factors. The results of four and five move problems are shown here, as the ceiling effect blocks the ability to detect drug induced effects on performance on easier levels of the test.ID/ED Attentional Set-Shifting Task (Owen et al. 1991)This is a test of attentional set-shifting based in part on the Wisconsin card sort test (WCST). There are nine stages in which a subject has to learn a visual discrimination performance to a set criterion (six consecutive trials correct). The first two stages required a simple visual discrimination (SD), followed by a reversal of this discrimination upon reaching the criterion (SDR). Another visual dimension is then introduced which the subject must learn is irrelevant [compound discrimination with stimuli separated (C_D) or superimposed (CD)], even in the situation of a reversal of the original discrimination [compound discrimination reversal (CDR)]. An intra-dimensional shift is then introduced, at which point new exemplars of the two dimensions are given, and the subject must now learn a new discrimination to criterion [intra-dimensional shift (IDS)] followed by a reversal of this rule [intra-dimensional reversal (IDR)]. The penultimate stage of the test introduces an extra-dimensional shift, where again new exemplars of the two dimensions are presented, but this time the subject must shift his or her attention to the dimension which was previously irrelevant [extra-dimensional shift (EDS)]: followed finally by a reversal of this rule [extra-dimensional reversal (EDR)]. The EDS is akin to a category shift in the WCST. For each stage of the test, the computer calculates the number of trials to the criterion, number of errors made, and the latency to complete each stage.StatisticsThe repeated measures cross-over design may carry with it the problem of practice effects, which may confound the validity of the statistical interactions. To reveal possible practice effects in these tasks, we had beforehand tested a separate group (n = 12) of normal young healthy control subjects with the same test battery on two occasions with no less than 1 week between sessions. 
In the analysis of test data, a repeated measures analysis of variance (MANOVA) was used to analyze group (groups 1–6, see: pharmacological manipulations), repetition and difficulty level effects, and the appropriate interactions. A paired samples t-test was used also to compare drug-induced performance changes.RESULTSVAS Sedation RatingThe highest doses of clonidine and guanfacine (5 and 29 μg/kg, respectively) slightly increased the subjective feelings of sedation vs. placebo (p < .05 for both), whereas lower doses of the drugs had no effect (p > .01 for all) (Table 1).Table 1 Clonidine and Guanfacine at the Highest Doses Tested Increased Feelings of Sedation as Assessed with the Visual Analogue ScaleFull size tableBlood PressureBoth clonidine (5 μg/kg) and guanfacine (29 μg/kg) at the highest doses used slightly reduced both systolic and diastolic blood pressures (p < .05 vs. placebo), whereas the lower doses of the drugs had no significant effects on blood pressures (p > .1 for all) (Table 2).Table 2 Clonidine and Guanfacine at the Highest Doses Tested Decreased Blood PressureFull size tableSpatial Working MemoryThe between or within search errors and strategy score of the placebo treated group did not differ during the first and second testing session (t-test: p > .1; for all comparisons).ClonidineA comparison of the between search errors of clonidine 0.5 μg/kg, clonidine 2 μg/kg, clonidine 5 μg/kg with only placebo treated groups revealed a significant repetition (F(1,29) = 7.9, p = .009) effect and repetition × group interaction (F(3,29) = 4.22, p = .014), indicating that 0.5, 2, or 5 μg/kg of clonidine modulated performance (Figure 1 , Part A). The 0.5 and 5 μg/kg clonidine-treated groups made more between search errors after clonidine than after placebo treatment at 6- and 8-box levels (t-test: p < .01 for all comparisons). The medium dose of clonidine failed to affect the number of between search errors (t-test: p > .4; for both comparisons). In none of the groups were the number of within search errors or the strategy score affected by repetition ((F(1,29)/(3,29) < 0.4, p > .55; for all comparisons) (data not shown).Figure 1Effects of clonidine and guanfacine on between search errors in the spatial working memory test. On the Y-axis the number of errors made is shown. On the X-axis different treatments are shown. The values are expressed as mean ±SD. (A) Clonidine 0.5 and 5 μg/kg increased between search errors in the spatial working memory task, but clonidine 2 μg/kg had no effect. Clonidine (solid bars) and placebo (open bars). (B) Guanfacine 29 μg/kg decreased but guanfacine 7 μg/kg had no effect on between search errors. Guanfacine (solid bars) and placebo (open bars). *p < .05 vs. own baseline values.Full size imageGuanfacineAnalysis of 7 μg/kg or 29 μg/kg of guanfacine- and placebo-treated groups showed a significant repetition effect (F(1,30) = 12.3, p = .001) and repetition × group interaction (F(3,30) = 5.68, p = .008) on the between search errors (Figure 1, panel B). Guanfacine 7 μg/kg had no effect on between search errors (t-test: p = .65), but guanfacine 29 μg/kg decreased between search errors at 6- and 8-box levels (t-test: p < .01; for both comparisons). 
In the contrast, within search errors and strategy score analysis revealed no repetition effects or repetition × group interactions (F(1,30)/(2,30) < 2.0, p > .2, for all comparisons) (Data not shown).Tower of LondonThe number of excess moves did not significantly decrease in the placebo treated group during the second testing session (t-test: p = .28), but the initial thinking times were significantly reduced (t-test: p = .006) with a similar trend in subsequent thinking times (t-test: p = .071).ClonidineThe effect of repetition on the initial and subsequent thinking times varied between 0.5, 2, or 5 μg/kg clonidine- and placebo-treated groups (repetition × group interaction: F(3,29) > 9.2, p < .001; for both comparisons) (Table 3). A comparison of control treated group with 0.5 and 2 μg/kg clonidine treatment revealed that clonidine decreased initial thinking times (treatment: F(1,16)/(1,18) < 0.001, p < .008; for both comparisons), but had no effect on the subsequent thinking times. In contrast, 5 μg/kg clonidine increased initial and subsequent thinking times (treatment: F(1,17) > 17.7, p < .001; for both comparisons). The analysis of excess moves made revealed no repetition effects or repetition × group interaction (F(1,29)/(1,30) > 14.8, p < .001) (Figure 2 , panel A).Table 3 Tower of London Test (Planning Ability)Full size tableFigure 2Effects of clonidine and guanfacine on the number of excess moves made in Tower of London planning test. On the Y-axis the number of errors made is shown. On the X-axis different treatments are shown. The values are expressed as mean ±SD. (A) Clonidine 0.5, 2 or 5 μg/kg had no effect on the number of excess moves made. Clonidine (solid bars) and placebo (open bars). (B) Guanfacine 29 μg/kg decreased and guanfacine 7 μg/kg had no effect on the number of excess moves made. Guanfacine (solid bars) and placebo (open bars). *p < .05 vs. own baseline values.Full size imageGuanfacineAnalysis of 7 and 29 μg/kg placebo- and guanfacine-treated groups revealed a repetition effect (F(1,27) = 24.3, p < .001) but no repetition × group (F(2,27) = 0.001, p = .998) interaction, suggesting that guanfacine failed to modulate initial and subsequent thinking times (Table 3). In contrast, the number of excess moves showed a repetition effect (F(1,30) = 10.7, p < 0.003) and a nearly significant repetition × group (F(2,30) = 3.1, p = .06) interaction. Post hoc comparison of the group 5 (29 μg/kg guanfacine treated) with placebo-treated group revealed a repetition effect (F(1,22) = 19.0, p > .001) and a repetition × group (F(1,22) = 7.9, p < .01) interaction on performance, indicating that 29 μg/kg guanfacine significantly decreased the number of excess moves made (Figure 2, panel B).ID/ED Set-ShiftingThe placebo-treated subjects were faster at the ID and ED set-shifting stage during the second session than during the first session (p < .05 for all comparisons). However, no training effect was observed on the number of attempts needed to solve the ID set-shift stage (p > .05), but the training effect was significant at the ED set-shifting level (p = .004).ClonidineAnalysis of the number of trials required by the subjects of 0.5, 2, or 5 μg/kg clonidine and placebo-treated groups to solve the ID/ED set-shifting problem showed a significant repetition effect (F(1,29) = 32.3, p < .001), indicating that practice improved performance (Table 4). 
Clonidine 0.5, 2, or 5 μg/kg did not affect accuracy of performance at any level of the test (group × repetition × difficulty level interaction: F(24,232) = 0.63, p = .91). However, response latency after 5 μg/kg clonidine treatment was increased at the ED shift stage, but not at the ID shift stage (group × repetition × difficulty level interaction: F(3,29) = 6.95, p < .001 (Table 5).Table 4 Intra-Dimensional and Extra-Dimensional Attentional Set-Shifting TestFull size tableTable 5 Intra-Dimensional and Extra-Dimensional Attentional Set-Shifting Test Response LatenciesFull size tableGuanfacineIn contrast, guanfacine treatment failed to affect accuracy or speed of responses at ID or ED shift stage of the test (F(15,240) = 0.4, p > .6; for all comparisons) (Tables 4 and 5).DISCUSSIONClonidine and guanfacine induced qualitatively different effects on performance in tests measuring spatial working memory, planning and attentional set-shifting. Guanfacine enhanced performance in the spatial working memory and planning tests at the higher dose tested, but had no effect on ID/ED attentional set-shifting performance, whereas clonidine did not produce any reliable improvement of spatial working memory or planning performance. First, the deleterious effect of clonidine on spatial working memory followed an inverted U-shaped dose response curve. Second, 0.5 and 2 μg/kg clonidine increased impulsivity in the planning test. Third, the highest clonidine dose retarded response speed at the difficult test levels of planning and attentional set-shifting. These data showing that guanfacine was more effective in stimulating working memory and planning than clonidine in humans are in principle similar to those reported by Arnsten and collaborators in monkeys (Arnsten et al. 1996). Our results suggest that the effect of guanfacine is not limited to a single function mediated by the central executive system (Baddeley 1986), since spatial working memory and planning were both improved. However, ID/ED attentional set shifting, which is also considered to be dependent on the central executive was insensitive to guanfacine treatment. Furthermore, we observed beneficial effects with a slightly sedating and hypotensive dose of guanfacine, indicating that in neurologically healthy humans, its dose range for inducing side-effects and improvement of frontal functions may overlap.The beneficial effects of guanfacine occurred at 29 μg/kg in spatial working memory and planning tests, suggesting that guanfacine may produce its effects on working memory and planning via the same neurochemical mechanism(s). Interestingly, the previous pharmacological studies conducted with different animal cognition models suggest that post-synaptic α2A-adrenoceptors mediate the beneficial effect of guanfacine on working memory (Arnsten et al. 1996). Therefore, the present results supplement previous animal data in suggesting that guanfacine may stimulate the function of prefrontal cortical areas involved in the “central executive” (Baddeley 1986) component of working memory and planning tasks via post-synaptic α2A-adrenoceptors in different mammalian species (Arnsten et al. 1996). However, it is possible that guanfacine may modulate these two functions via different anatomical subregions of the prefrontal cortex.The dose response of clonidine to modulate performance in different tests was not unidirectional, but this is possibly explained by assuming that the drug acted on both pre- and postsynaptic α2-adrenoceptors. 
However, it is equally possible that the non-linear dose response curves of clonidine results from activation of α2-adrenoceptor subtypes located in characteristic anatomical regions (MacDonald and Scheinin 1995) or the highest dose of clonidine may have caused stimulation of α1-adrenoceptors (Arnsten and Leslie 1991).It is difficult to assign mechanisms of action to systematically administered drugs when they affect cognition, but comparisons with previous data describing the effects of more focal brain insults may be of help in elucidating the sites of action of α2-agonists. Therefore, it is relevant to compare the action of clonidine and guanfacine on spatial working memory, planning and attentional set-shifting function with the performance failure induced by temporal or frontal lobe lesions, and Parkinson's disease (Owen et al. 1990, 1991, 1993).The beneficial effect of guanfacine on between search errors at the 6- and 8-box levels in a spatial working memory test is the opposite of that seen in patients with frontal lobe excisions who are more error-prone in these tests (Owen et al. 1990) and is further support for the theory that α2-adrenoceptors located in the frontal cortex can mediate the effect of guanfacine on working memory (Arnsten 1997). However, frontal lobe excisions not only increased errors at the 6- and 8-box level, but also impaired the strategy measure. Indeed, it is possible that guanfacine stimulates selectively prefrontal areas involved in the processing of the working memory component of the paradigm, as this treatment had no effect on the strategy measure. Alternatively, it could be argued that guanfacine stimulates the accuracy of spatial working memory performance via temporal lobe structures, since temporal lobe excision impaired accuracy but had no effect on the strategy measure. However, Owen et al. (1996a, 1996b) revealed that a temporal lobe lesion disrupted accuracy selectively at the 8-box level. Therefore, it is less likely that guanfacine acts via temporal lobe structures, since an improvement in spatial working memory was observed at the 6- and 8-box levels.Clonidine produced a profile that is not easily interpreted in terms of dysfunction of frontal or temporal regions, since treatment lessened accuracy at the 6- and 8-box levels but had no effect on strategy score. The action of 0.5 and 5 μg/kg clonidine to increase between search spatial working memory errors at the 6- and 8-box levels cannot be explained in terms of a single behavioral factor, such as sedation. Indeed, 5 μg/kg clonidine impaired but 29 μg/kg guanfacine improved working memory, though at these doses both compounds induced similar subjective feelings of sedation. Furthermore, 0.5 μg/kg clonidine had no effect on subjective feelings of sedation, but still impaired working memory. It is possible that 0.5 μg/kg clonidine decreases the LC firing rate and noradrenaline release in the prefrontal cortex and on its own this is enough to disrupt working memory (Aghajanian et al. 1977; Cedarbaum and Aghajanian 1977; Aghajanian 1978). Furthermore, clonidine 2 μg/kg may have masked the working memory defect induced by presynaptic suppression of LC activity by stimulating a sufficient number of postsynaptic α2-adrenoceptors in the frontal cortex (Arnsten and Goldman-Rakic 1985). Finally, 5 μg/kg clonidine may also act via additional forebrain structures, such as basal ganglia or thalamus, all of which contain a characteristic distribution pattern of α2-adrenoceptor subtypes (Aantaa et al. 1995). 
Interestingly, patients with Parkinson's disease have a reduction of dopaminergic activity in the striatum and demonstrate a qualitatively similar failure as that induced by clonidine on spatial working memory behavior. Indeed, the spatial working memory behavior of Parkinson's disease patients is also characterized by an increase in number of errors at the 6- and 8-box levels, but an adequate strategy (Owen et al. 1993). Therefore, it is possible that the adverse effects of clonidine on spatial working memory is mediated via a decrease in dopaminergic activity or directly at the basal ganglia level (MacDonald and Scheinin 1995).The 29 μg/kg dose of guanfacine decreased excess moves but had no effect on thinking times in the TOL test, revealing that administration of this α2-agonist can also improve planning abilities. Again, this partly mirrors the effect of frontal lobe excision in humans, as Owen et al. (1990) described that excess moves and thinking times were increased in patients with frontal lobe excisions. It is tempting to speculate that guanfacine may facilitate functioning of the prefrontal cortex and improve planning, and that α2-adrenoceptors more effectively modulate this frontal mechanism underlying accuracy than having any effect on thinking times. In addition, our results agree with those of Coull et al. (1995) showing that 0.5 and 2 μg/kg clonidine increased impulsivity, as indicated in decreased initial responding latency in TOL, but did not affect subsequent thinking times or the number of excess moves made. It is relevant to note here that a previous positron emission tomography study suggested that an increase in rostral prefrontal activity was important for the components of executive functions comprising of response selection and evaluation (Owen et al. 1996a). Clonidine may act to suppress this frontal activation and thus increase impulsivity. However, as indicated earlier, the most characteristic change observed in subjects with frontal dysfunction is an increase in the excess moves in the TOL test (Owen et al. 1990). This raises an alternative possibility, i.e., clonidine acts via other brain structures that are important for suppressing impulsivity and are inhibited by α2-adrenoceptor activation, such as brainstem serotonin projections (MacDonald and Scheinin, 1995; Robbins, 1997).Guanfacine did not facilitate all forms of cognitive processes dependent on the integrity of the prefrontal cortex, since it had no effect on the ID/ED attentional set-shifting performance. Similarly, clonidine failed to stimulate ID/ED attentional set-shifting performance in agreement with earlier data (Coull et al. 1995). Thus, the ability to deduce rules on the basis of reinforcing feedback and to use them to solve discrimination tasks was not promoted by α2-adrenoceptor activation. A previous study of Owen et al. (1990) reported that frontal excision disrupted accuracy of performance at the ED stage in this test, in addition to causing defects in TOL and spatial working memory measures. However, the prefrontal areas involved in working memory, planning, and attentional set-shifting are partly distinct (Baker et al. 1996; Owen et al. 1996a) and it is possible that those areas differ in their sensitivity to guanfacine treatment. Indeed, neuroimaging studies in humans have shown a characteristic activation of different parts of prefrontal lobe during planning and spatial working memory performance (Baker et al. 1996; Owen et al. 1996a). 
Furthermore, in monkeys, lesions to the orbital and lateral prefrontal cortex impair affective processing and attentional set-shifting, respectively (Dias et al. 1996).In contrast to the effects of guanfacine, 5 μg/kg clonidine impaired not only working memory, but also slowed initial and subsequent responding in the TOL and decreased speed of responding in the ID/ED attentional set-shifting test at the ED stage. This may not result simply from sedation, since an equally sedative dose of guanfacine (29 μg/kg) had no effect on speed of responding. This shows that 5 μg/kg clonidine decreased the speed of effortful processing as well as vigilance, but 29 μg/kg guanfacine decreased only vigilance. The decrease in resting state vigilance induced by these two α2-agonists may be, at least partly, due to impaired thalamocortical activation (Riekkinen Jr et al. 1993; Aantaa et al. 1995; MacDonald and Scheinin 1995), as supported by a recent PET study (Coull et al. 1997).It is possible that during 5 μg/kg clonidine treatment, subjects have adopted an alternative strategy and traded speed for maintenance of higher accuracy of responding in TOL and ID/ED tests. However, it is difficult to pinpoint the site of action for clonidine to impair speed of effortful processing, though it may involve the temporal lobe, and areas of the “fronto-striatal” loops at the cortical and basal ganglia levels (Lange et al. 1992; Owen et al. 1993; Robbins et al. 1994). First, 5 μg/kg clonidine had no effect on the accuracy of ID/ED test but slowed responding at the ED stage, a finding mimicking closely the defect induced by temporal excision in humans (Owen et al. 1991). Thus, temporal lobe areas may mediate the action of clonidine on attentional set-shifting behavior. Second, the increase in initial and subsequent thinking times in TOL test induced by clonidine is not paralleled by excision of the frontal and temporal lobes, or Parkinson's disease (Owen et al. 1990; Owen et al. 1991; Owen et al. 1993). Indeed, Parkinson's disease slows only the initial movements, whereas frontal excisions decrease selectively the pace of subsequent responses (Owen et al. 1990; Owen et al. 1993). Therefore, it is theoretically possible that clonidine slows responding in TOL test via α2-adrenoceptors located both at cortical and subcortical levels of the fronto-striatal systems (Aantaa et al. 1995; MacDonald and Scheinin 1995).It is relevant to note here that the activity decreasing (Hunter et al. 1997) and hypotensive (MacMillan et al. 1996) effects of α2-agonists are attenuated in α2A-adrenoceptor mutant mice. Therefore, our result showing that 5 μg/kg clonidine and 29 μg/kg guanfacine had equal hypotensive and sedating action in resting subjects may indicate that these drugs at these doses had equal efficacy in stimulating α2A-adrenoceptors (MacMillan et al. 1996). Interestingly, we have observed that mice overexpressing α2C-adrenoceptors were retarded in developing an effective escape strategy in the water maze, and the involvement of α2C-adrenoceptors was supported by altered dose response relationships to an α2-antagonist (Björklund et al. 1996). Therefore, the increase in working memory errors and slowing of effortful mental processing by clonidine at the highest dose tested may be related to stronger activation of α2C-adrenoceptors by 5 μg/kg clonidine than that attained with 29 μg/kg guanfacine (Jansson et al. 1994). 
Furthermore, an additional possibility is that at the doses used in this study clonidine increases the activity of thalamic α2B-adrenoceptors more strongly than guanfacine (Jansson et al. 1994; Aantaa et al. 1995) and thus is more capable of depressing the speed of effortful information processing.CONCLUSIONIn conclusion, our data shows that guanfacine can stimulate spatial working memory and planning in humans, but has no effect on attentional set-shifting. This suggests that performance in those cognitive tests that utilize the central executive (Baddeley 1986) and are dependent on partly distinct prefrontal areas (Owen et al. 1996a, 1996b) is differentially sensitive to guanfacine. In contrast, clonidine produced many actions that may not be mediated via the prefrontal cortex and had no reliable effect in improving any measures of executive function. In fact, at a low dose it impaired working memory and increased impulsivity, and at a high dose, it impaired speed of effortful mental processing. The qualitatively different actions of guanfacine and clonidine may be related to greater selectivity ratio of guanfacine for α2A vs. non-α2A adrenoceptors compared with clonidine (Arnsten et al. 1996).
References
Aantaa R, Marjamaki A, Scheinin M (1995): Molecular pharmacology of alpha 2-adrenoceptor subtypes. Ann Med 27: 439–449
Aghajanian GK (1978): Feedback regulation of central monoaminergic neurons: Evidence from single cell recording studies. Essays Neurochem Neuropharmacol 3: 1–32
Aghajanian GK, Cedarbaum JM, Wang RY (1977): Evidence for norepinephrine-mediated collateral inhibition of locus coeruleus neurons. Brain Research 136: 570–577
Aoki C, Go CG, Venkatesan C, Kurose H (1994): Perikaryal and synaptic localization of alpha 2A-adrenergic receptor-like immunoreactivity. Brain Research 650: 181–204
Arnsten AFT (1993): Catecholamine mechanisms in age-related cognitive decline. Neurobiol Aging 14: 639–641
Arnsten AFT (1997): Alpha 1-adrenergic agonist, cirazoline, impairs spatial working memory performance in aged monkeys. Pharmacol Biochem Behav 58: 55–59
Arnsten AF, Cai JX (1993): Postsynaptic alpha-2 receptor stimulation improves memory in aged monkeys: Indirect effects of yohimbine versus direct effects of clonidine. Neurobiol Aging 14: 597–603
Arnsten AF, Contant TA (1992): Alpha-2 adrenergic agonists decrease distractibility in aged monkeys performing the delayed response tasks. Psychopharmacology Berl 108: 159–169
Arnsten AF, Goldman-Rakic PS (1985): Alpha-2 adrenergic mechanisms in prefrontal cortex associated with cognitive decline in aged nonhuman primates. Science 230: 1273–1276
Arnsten AF, Leslie FM (1991): Behavioral and receptor binding analysis of the alpha 2-adrenergic agonist, 5-bromo-6 [2-imidazoline-2-yl amino] quinoxaline (UK-14304): evidence for cognitive enhancement at an alpha 2-adrenoceptor subtype. Neuropharmacology 30: 1279–1289
Arnsten AF, Cai JX, Goldman-Rakic PS (1988): The alpha-2 adrenergic agonist guanfacine improves memory in aged monkeys without sedative or hypotensive side effects: Evidence for alpha-2 receptor subtypes. J Neurosci 8: 4287–4298
Arnsten AF, Steere JC, Hunt RD (1996): The contribution of alpha 2-noradrenergic mechanisms of prefrontal cortical cognitive function. Potential significance for attention-deficit hyperactivity disorder. Arch Gen Psychiatry 53: 448–455
Baddeley A (1986): The fractionation of working memory. Proc Natl Acad Sci USA 93: 13468–13472
Baker SC, Rogers RD, Owen AM, Frith CD, Dolan RJ, Frackowiak RS, Robbins TW (1996): Neural systems engaged by planning: A PET study of the Tower of London task. Neuropsychologia 34: 515–526
Berridge CW, Arnsten AF, Foote SL (1993): Noradrenergic modulation of cognitive function: Clinical implications of anatomical, electrophysiological and behavioural studies in animal models [editorial]. Psychological Medicine 23: 557–564
Björklund MG, Riekkinen M, Puoliväli J, Santtila P, Sirviö J, Sallinen J, Scheinin M, Haapalinna A, Riekkinen PJ (1996): Alpha2C-adrenoceptor overexpression impairs development of normal water maze search strategy in mice. Society for Neuroscience Abstract 22: 682
Borkowski JG, Benton AL, Spreen O (1967): Word fluency and brain damage. Neuropsychologia 5: 135–140
Brozoski TJ, Brown RM, Rosvold HE, Goldman PS (1979): Cognitive deficit caused by regional depletion of dopamine in prefrontal cortex of rhesus monkey. Science 205: 929–932
Cedarbaum JM, Aghajanian GK (1977): Catecholamine receptors on locus coeruleus neurons: Pharmacological characterization. Eur J Pharmacol 44: 375–385
Coull JT, Middleton HC, Robbins TW, Sahakian BJ (1995): Contrasting effects of clonidine and diazepam on tests of working memory and planning. Psychopharmacology Berl 120: 311–321
Coull JT, Frith CD, Frackowiak RS, Grasby PM (1996): A fronto-parietal network for rapid visual information processing: A PET study of sustained attention and working memory. Neuropsychologia 34: 1085–1095
Coull JT, Frith CD, Dolan RJ, Frackowiak RSJ, Grasby PM (1997): The neural correlates of the noradrenergic modulation of human attention, arousal and learning. Eur J Neurosci 9: 589–598
Dias R, Robbins TW, Roberts AC (1996): Dissociation in prefrontal cortex of affective and attentional shifts. Nature 380: 69–72
Hunter JC, Fontana DJ, Hedley LR, Jasper JR, Kassotakis L, Lewis R, Eglen RM (1997): The relative contribution of alpha2-adrenoceptor subtypes to the antinociceptive action of dexmedetomidine and clonidine in rodent models of acute and chronic pain. British J Pharmacol 120: 229P
Jansson CC, Marjamäki A, Luomala K, Savola J-M, Schein M, Akerman KEO (1994): Coupling of human alpha2-adrenoceptor subtypes to regulation of cAMP production in transfected S115 cells. European J Pharmacol Mol Pharm Sect 266: 165–174
Kobilka BK, Matsui H, Kobilka TS, Yang-Feng TL, Francke U, Caron MG, Lefkowitz RJ, Regan JW (1987): Cloning, sequencing, and expression of the gene coding for the human platelet alpha 2-adrenergic receptor. Science 238: 650–656
Lange KW, Robbins TW, Marsden CD, James M, Owen AM, Paul GM (1992): L-Dopa withdrawal in Parkinson's disease selectively impairs cognitive performance in tests sensitive to frontal lobe dysfunction. Psychopharmacology 107: 394–404
Luria AR (1969): Frontal-lobe syndromes. In Vinken PJ and Bruyn GW (eds), Handbook of Clinical Neurology, Vol. 2. North Holland, Amsterdam, pp 725–757
MacDonald E, Scheinin M (1995): Distribution and pharmacology of alpha 2-adrenoceptors in the central nervous system. J Physiol Pharmacol 46: 241–258
MacMillan LB, Hein L, Smith MS, Piascik MT, Limbird LE (1996): Central hypotensive effects of the alpha2a-adrenergic receptor subtype. Science 273: 801–803
Marjamäki A, Luomala K, Ala-Luottila S, Scheinin M (1993): Use of recombinant human alpha-2-adrenoceptors to characterize subtype selectivity of antagonist binding. Eur J Pharmacol 246: 219–226
Owen AM, Downes JJ, Sahakian BJ, Polkey CE, Robbins TW (1990): Planning and spatial working memory following frontal lobe lesions in man. Neuropsychologia 28: 1021–1034
Owen AM, Roberts AC, Polkey CE, Sahakian BJ, Robbins TW (1991): Extra-dimensional versus intra-dimensional set shifting performance following frontal lobe excisions, temporal lobe excisions or amygdalo-hippocampectomy in man. Neuropsychologia 29: 993–1006
Owen AM, Beksinska M, James M, Leigh PN, Summers BA, Marsden CD, Quinn NP, Sahakian BJ, Robbins TW (1993): Visuospatial memory deficits at different stages of Parkinson's disease. Neuropsychologia 31: 627–644
Owen AM, Sahakian BJ, Semple J, Polkey CE, Robbins TW (1995): Visuo-spatial short-term recognition memory and learning after temporal lobe excisions, frontal lobe excisions or amygdalo-hippocampectomy in man. Neuropsychologia 33: 1–24
Owen AM, Doyon J, Petrides M, Evans AC (1996a): Planning and spatial working memory: A positron emission tomography study in humans. Eur J Neurosci 8: 353–364
Owen AM, Evans AC, Petrides M (1996b): Evidence for a two-stage model of spatial working memory processing within the lateral frontal cortex: A positron emission tomography study. Cereb Cortex 6: 31–38
Regan JW, Kobilka TS, Yang-Feng TL, Caron MG, Lefkowitz RJ, Kobilka BK (1988): Cloning and expression of a human kidney cDNA for an alpha 2-adrenergic receptor subtype. Proc Natl Acad Sci USA 85: 6301–6305
Reis DJ, Granata AR, Joh TH, Ross CA, Ruggiero DA, Park DH (1984): Brainstem catecholamine mechanisms in tonic and reflex control of blood pressure. Hypertension 6: 7–15
Riekkinen P Jr, Lammintausta R, Ekonsalo T, Sirvio J (1993): The effects of alpha 2-adrenoceptor stimulation on neocortical EEG activity in control and 6-hydroxydopamine dorsal noradrenergic bundle-lesioned rats. Eur J Pharmacol 238: 263–272
Robbins TW, James M, Owen AM, Lange KW, Lees AJ, Leigh PN, Marsden CD, Quinn NP, Summers BA (1994): Cognitive deficits in progressive supranuclear palsy, Parkinson's disease, and multiple system atrophy in tests sensitive to frontal lobe dysfunction. J Neurol Neurosurg Psychiatry 57: 79–88
Robbins TW (1997): Arousal systems and attentional processes. Biological Psychology 21: 57–71
Sahakian BJ, Owen AM (1992): Computerized assessment in neuropsychiatry using CANTAB: Discussion paper. J R Soc Med 85: 399–402
Scheinin M, Lomasney JW, Hayden-Hixson DM, Schambra UB, Caron MG, Lefkowitz RJ, Fremeau RT Jr (1994): Distribution of alpha 2-adrenergic receptor subtype gene expression in rat brain. Brain Res Mol Brain Res 21: 133–149
Uhlen S, Wikberg JES (1991): Delineation of rat kidney alpha-2A and alpha-2B-adrenoceptors with (3H)RX 821002 radioligand binding: Computer modelling reveals that guanfacine is an alpha-2A-selective compound. Eur J Pharmacol 202: 235–243
Wechsler D (1992): Wechsler Adult Intelligence Scale—Revised. New York, The Psychological Corporation.
Author information
Authors and Affiliations: Department of Neuroscience and Neurology, University and University Hospital of Kuopio, Kuopio, Finland. Pekka Jäkälä MD, Minna Riekkinen MD, Jouni Sirviö Ph.D, Esa Koivisto BSc, Kosti Kejonen BSc, Matti Vanhanen BSc & Paavo Riekkinen Jr MD
Cite this article: Jäkälä, P., Riekkinen, M., Sirviö, J. et al. Guanfacine, But Not Clonidine, Improves Planning and Working Memory Performance in Humans. Neuropsychopharmacol 20, 460–470 (1999). https://doi.org/10.1016/S0893-133X(98)00127-4
Received: 08 July 1997; Revised: 29 July 1997; Accepted: 03 September 1998; Issue Date: 01 May 1999
| 2024-11-08T13:04:34 | en | train |
10,825,438 | bronz | 2016-01-02T07:26:31 | Gas Theft Gangs Fuel Pump Skimming Scams | null | http://krebsonsecurity.com/2015/11/gas-theft-gangs-fuel-pump-skimming-scams/ | 132 | 131 | [
10828336,
10829752,
10828772,
10830005,
10828390,
10828401,
10828375,
10829390,
10828296,
10829835,
10829261,
10830045,
10828491,
10828582
] | null | null | no_error | Gas Theft Gangs Fuel Pump Skimming Scams | null | null |
Few schemes for monetizing stolen credit cards are as bold as the fuel theft scam: Crooks embed skimming devices inside fuel station pumps to steal credit card data from customers. Thieves then clone the cards and use them to steal hundreds of gallons of gas at multiple filling stations. The gas is pumped into hollowed-out trucks and vans, which ferry the fuel to a giant tanker truck. The criminals then sell and deliver the gas at cut rate prices to shady and complicit fuel station owners.
Agent Steve Scarince of the U.S. Secret Service heads up a task force in Los Angeles that since 2009 has been combating fuel theft and fuel pump skimming rings. Scarince said the crooks who plant the skimmers and steal the cards from fuel stations usually are separate criminal groups from those who use the cards to steal and resell gas.
An external pump skimmer is attached to the end of this compromised fuel dispenser in Los Angeles (right).
“Generally the way it works is the skimmer will sell the cards to a fuel theft cell or ring,” he said. “The head of the ring or the number two guy will go purchase the credit cards and bring them back to the drivers. More often than not, the drivers don’t know a whole lot about the business. They just show up for work, the boss hands them 25 cards and says, ‘Make the most of it, and bring me back the cards that don’t work.’ And the leader of the ring will go back to the card skimmer and say, ‘Okay out of 100 of those you sold me, 50 of them didn’t work.'”
Scarince said the skimmer gangs will gain access to the inside of the fuel pumps either secretly or by bribing station attendants. Once inside the pumps, the thieves hook up their skimmer to the gas pump’s card reader and PIN pad. The devices also are connected to the pump’s electric power — so they don’t need batteries and can operate indefinitely.
Internal pump skimming device seized from a Los Angeles fuel station.
Most internal, modern pump skimmers are built to record the card data on a storage device that can transmit the data wirelessly via Bluetooth technology. This way, thieves can drive up with a laptop and fill their tank in the time it takes to suck down the card data that’s been freshly stolen since their last visit.
The Secret Service task force in Los Angeles has even found pump skimming devices that send the stolen card data via SMS/text message to the thieves, meaning the crooks don’t ever have to return to the scene of the crime and can receive the stolen cards and PINs anywhere in the world that has mobile phone service.
MOBILE BOMBS
Scarince said the fuel theft gangs use vans and trucks crudely modified and retrofitted with huge metal and/or plastic “bladders” capable of holding between 250 and 500 gallons of fuel.
“The fuel theft groups will drive a bladder truck from gas station to gas station, using counterfeit cards to fill up the bladder,” he said. “Then they’ll drive back to their compound and pump the fuel into a 4,000 or 5,000 [gallon] container truck.”
A bladder truck made to look like it’s hauling used tires. The wooden panel that was hiding the metal tank exposed here has been removed in this picture.
The fuel will be delivered to gas station owners with whom the fuel theft ring has previously brokered with on the price per gallon. And it’s always a cash transaction.
“The stations know they’re buying stolen gas,” Scarince said. “They’re fully aware the fuel is not coming from a legitimate source. There’s never any paperwork with the fuel driver, and these transactions are missing all the elements of a normal, legitimate transaction between what would be a refinery and a gas station.”
Fuel theft gangs converted this van into a bladder truck. Image: Secret Service.
Needless to say, the bladder trucks aren’t exactly road-worthy when they’re filled to the brim with stolen and highly flammable fuel. From time to time, one of the dimmer bladder truck drivers will temporarily forget his cargo and light up a smoke.
“Two or three summers ago we had this one guy who I guess was just jonesing for a cigarette,” Scarince said. “He lit up and that was the last thing he did.”
This bladder truck went up in (a) smoke.
Other bladder trucks have spontaneously burst into flames at filling stations while thieves pumped stolen gas.
“There have been other fires that took place during the transfer of fuel, where some static sparked and the whole place caught on fire,” Scarince said. “These vehicles are not road-worthy by any means. Some of the bladder tanks are poorly made, they leak. The trucks are often overweight and can’t handle the load. We see things like transmissions giving out, chassis going out. These things are real hazards just waiting to happen.”
How big are the fuel theft operations in and around Los Angeles? Scarince estimates that at any given time there are 20 to 30 of these deadly bladder trucks trundling down L.A. freeways and side streets.
“And that’s a very conservative guess, just based on what the credit card companies report,” he said.
Aaron Turner, vice president of identity service products at Verifone — a major manufacturer of credit card terminals — leads a team that has been studying many of the skimming devices that the Secret Service has retrieved from compromised filling stations. Turner says there is a huge potential for safety-related issues when it comes to skimmers in a gas-pump environment.
“Every piece of equipment that is installed by gas station owners in the pump area is reviewed and approved according to industry standards, but these skimmers…not so much,” Turner said. “One of the skimmers that we retrieved was sparking and arcing when we powered it up in our lab. I think it’s safe to say that skimmer manufacturers are not getting UL certifications for their gear.”
COUNTERING FUEL FRAUD
With some fuel theft gangs stealing more than $10 million per year, Scarince said financial institutions and credit card issuers have responded with a range of tactics to detect and stop suspicious fuel station transactions.
“A lot more card issuers and merchant processors are really pushing hard on velocity checks,” Scarince said, referring to a fraud detection technique that reviews transactions for repeating patterns within a brief period. “If you buy gas in Washington, D.C. and then 30 minutes later gas is being purchased on the opposite side of the city in a short period of time, those are things that are going to start triggering questions about the card. So, more checks like that are being tested and deployed, and banks are getting better at detecting this activity.”
Card issuers also can impose their own artificial spending limits on fuel purchases. Visa, for example, caps fuel purchases at $125. But thieves often learn to work just under those limits.
“The more intelligent crooks will use only a few cards per station, which keeps them a lower profile,” Scarince said. “They’ll come in a swipe two to three cards and fill up 40-80 gallons and move on down the road to another station. They definitely also have what we determine to be routes. Monday they’ll drive one direction, and Tuesday they’ll go the other way, just to make sure they don’t hit the same stations one day after another.”
Newer credit and debit cards with embedded chip technology should make the cards more costly and difficult to counterfeit. However, the chip cards still have the card data encoded in plain text on the card’s magnetic strip, and most fuel stations won’t have chip-enabled readers for several years to come.
On Oct. 1, 2015, Visa and MasterCard put in force new rules that can penalize merchants who do not yet have chip-enabled terminals. Under the new rules, merchants that don’t have the technology to accept chip cards will assume full liability for the cost of fraud from purchases in which the customer presented a chip-enabled card.
But those rules don’t apply to fuel stations in the United States until October 2017, and a great many stations won’t meet that deadline, said Verifone’s Turner.
“The petroleum stations and the trade organizations that represent them have been fairly public in their statements that they don’t feel they’re going to hit the 2017 dates,” Turner said. “If you look at the cost of replacing these dispensers and the number of systems that have been touched by qualified, licensed technicians…most of the stations are saying that even if they start this process now they’re going to struggle to meet that October 2017 date.”
Turner said that as chip card readers take hold in more retail establishments, card thieves will begin targeting fuel stations more intensively and systematically.
“We’re moving into this really interesting point of time when I think the criminals are going to focus on the approaches that offer them the greatest return on their investment,” Turner said. “In the future, I think there will be a liability shift specifically for petroleum stations [because] the amount of mag-stripe-facilitated fraud that will happen in that market is going to increase significantly along with chip card deployment.”
Part of the reason Los Angeles is such a hotbed of skimming activity may be related to ethnic Armenian organized crime members that have invested heavily in fuel theft schemes. Last month, the Justice Department announced charges against eight such men accused of planting skimmers in pumps throughout Southern California and Nevada.
Scarince and Turner say there is a great deal of room for the geographic spread of fuel theft scams. Although the bulk of fuel theft activity in the United States is centered around Los Angeles, the organized nature of the crime is slowly spreading to other cities.
“We are seeing pump skimming now shoot across the country,” Scarince said. “Los Angeles is still definitely ground zero, but Florida is now getting hit hard, as are Houston and parts of the midwest. Technology we first saw a couple of years ago in LA we’re now seeing show up in other locations across the country. They’re starting to pick on markets that are probably less aware of what’s going on as far as skimming goes and don’t secure their pumps as well as most stations do here.”
WHAT CAN YOU DO?
Avoid sketchy-looking stations and those that haven’t started using tamper-evident seals on their pumps.
“The fuel theft gangs certainly scout out the stations beforehand, looking for stations that haven’t upgraded their pump locks and haven’t started using tamper seals,” Scarince said. “If some franchised station decided not to spend the money to upgrade their systems with these security precautions, they’re going to be targeted.”
Scarince says he also tends to use pumps that are closest to the attendants.
“Those are less likely to have skimmers in or on them than street-side pumps,” he said.
Consumers should remember that they’re not liable for fraudulent charges on their credit or debit cards, but they still have to report the phony transactions. There is no substitute for keeping a close eye on your card statements. Also, use credit cards instead of debit cards at the pump; having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).
| 2024-11-08T10:19:34 | en | train |
10,825,440 | anemani10 | 2016-01-02T07:28:02 | Let’s Move Beyond Open Data Portals | null | https://medium.com/civic-technology/rethinking-data-portals-30b66f00585d#.6c9bktzge | 43 | 9 | [
10829749,
10828894,
10829243,
10829368,
10828760
] | null | null | no_error | Let’s Move Beyond Open Data Portals - Civic Technology - Medium | 2016-01-01T16:18:47.348Z | abhi nemani | What could be subtitled: biting the hand that “fed” meopen data has been my lifeFor a good number of years, open data has been my life: first at Code for America, where number of datasets opened and data portals launched annually were two of core metrics, and later at the City of Los Angeles, where managing, growing, and evangelizing open data was my stated top priority. Indeed, some of my most visible accomplishments have been publishing a book on open data, redesigning Los Angeles’ open data portal, and taking that city to #1 on the Open Data Census. I say all this in part just to emphasize that open data and open data portals, specifically, have been central to my career for some time.You may then think this will all be about open data portals, what they are, why we need them, and how to make them better. In fact, that’s not at all what I’m about to do.I actually think it’s time we abandon data portals altogether.Why? Because I think there’s a couple of key trends, certain profound shifts happening in the tech industry in general and in government technology in particular that force us to rethink the way we approach data, and that’s what I want to talk to you about today.Trend one. We’re moving from data to delivery.Let’s step back to about 2008. Many good government organizations were working on opening up data: lot’s of FOIA requests (which now I’ve had to deal with…), data scraping, etc. The premise: more data and more transparency, and so better accountability and better governance.That was 2008. Things started to change a little bit. Then we started to move into this idea of civic engagement, using technology for engagement. I’ll be honest: I haven’t seen too many of these platforms — Change by Us, IdeaScale, or even Google Moderator , etc — succeed; most have failed.That was step two. Now I think we’re finally moving into this area of digital transformation, of using technology to actually change the way we do service delivery. Taking processes that were before offline and information that was before on paper, online and machine readable.The interesting thing about that is actually that leads to better data.Right now it takes 16 months to get official police records from the state, 16 months, because we have to send paper forms to the state, they transcribe them and then send them back. The backlog is mind-numbing.That’s not the right way to handle data — open or otherwise. Instead, a digital first strategy — eg no paper forms, just iPhones or iPads in the field — would lead to immediately accessible and usable (and hopefully open) data.That’s trend one, is this movement from data to services.Trend two is from one app to many apps.A rather perceptive venture capitalist named Mary Meeker pens a must-read every year on internet trends. Recently, she noted that “the web is splintering.” Instead of just having one app to do everything, you in fact have multiple apps.A good example of this is Foursquare. Beforehand you’d do everything in one app. Now there’s Foursquare and Swarm. Facebook has Messenger as a separate app. Google has like 17 different apps. 
You’re seeing this shift from just one specific application that does everything to many different applications designed for a particular experience.Focus on a single, specific, and meaningful user experience, and build a beautiful interface simply for that.That’s trend two.Trend three is this notion of an architecture participation.What I like about this idea is that when you create web applications and create user experiences, you build the opportunity to get people to do more. You can up-sell them. We all know this. You see it every day when you go to Amazon or in a grocery store. Either when you’re checking out online or offline, you’re asked to pick up some else: a related book or a little bit of candy or a magazine (that you probably don’t need). They’re up-selling you. They’re asking you to do more.There’s no reason we couldn’t apply this same strategy to government engagement and services. Indeed it’s already happening. What this requires, though, is thinking less about a specific, sole engagement with an user, but instead about a string of related, though distinct interactions, architected to maximize participation. It’s about erecting architectures of participation.Bringing it togetherYou’re seeing a movement from data to delivery.You’re seeing the web start to splinter towards single-purpose apps.You’re starting to see levels of engagement increasing through architectures of participation.Now you’re thinking, “Abhi, weren’t you going to be tell me that we should abandon data portals? Services, splintering apps, architectures…? What gives?”At the core of all three trends is one basic notion: the web is moving away from building for one-size-fits-all towards designing for to-each-his/her-own — towards user-centric design. And we must do so for open data.That’s the core notion here: a shift away from building portals towards designing for people, away from unified platforms and more towards user-centric design.Then what do we do with data?First and foremost I think you have to go where people are.If you’ve every been to LA before — or even not — you may know that traffic is a little bit of a problem there. Just a bit. (That’s why I gave up my car.)The city, as we all know, has information about street closures, public events — all the planned activities that’ll likely gum traffic. Well, beforehand we kept all the data to ourselves. Indeed, all this data was available on our GIS Hub, but sadly most of the city staff didn’t even know that — let alone the regular citizen. Fortunately, we eventually found it…Now we actually give that directly to Waze, so they can reroute people dynamically. Indeed, this is a good open data story — taking the data to where people are —but there’s something more interesting: it’s a two-way street.Not only does Waze now share pothole and road condition data it collects regularly through its app, they went one step further. They began to proactively collect and share data in the interest public safety.Waze is giving the city data in return. Any time there’s a kidnapping or a hit and run, Waze will tell the users, saying, “This just happened. Do you have any intel/info?” Any time that there’s a kidnapping on Wilshire, Waze will actually ping you and say, “Hey, do you know what’s going on? Have you seen anybody? Do you know what’s happening?” Users can, in-app, report sightings directly, and immediately that data is sent to the LAPD. 
The news reported it as, “A Gift for Law Enforcement.”Two, I think we have to start thinking about data as content.Anyone want to guess what the most popular data set for the City of Los Angeles is?Immigration workshops.We collected information on accredited immigration workshops, and published it on the data portal; then we embedded that data on the mayor’s website — a website far, far more popular than data.lacity.org.That’s moving data from the fringes into the center, from a niche site to the homepage. Naturally, there are political and organizational dynamics to work through — the IT or data organization often does not control digital content or strategy. But it’s a fight worth having. If we say as cities that open data matters, then it should matter enough to be featured across the city’s digital presence. Open data ought to merit a few pixels off the portal.Finally you have to take data offline.An interesting thing that I learned when I was in LA is that there was a lot of interesting groups doing interesting work, but separately, even though they shared a common goal: using technology to serve the city they all loved. And to do so, using open civic data. But these efforts were too often disconnected. (Indeed, I found myself having the same conversation with different community leaders multiple times a week.)What we decided to do there was very deeply commit to actually bringing these groups together, having them all start to talk to each other. We launched a program called #TechLA. We’ve had the biggest hackathon, two years in a row. The idea was that actually just by building a brand, just this identity, #TechLA, it actually helped these different groups come together in a way that they couldn’t have done on their own. That’s one of the core responsibilities and opportunities that we have as government officials, is that we actually can build a tent, we can actually have people come together.And you can then take the data off of the portal, and put it directly into the hands of developers.Here’s an example from the #TechLA event last year:They built an app that helped people find homeless shelters, and all the kids were in high school. You’re seeing people do really remarkable and special things.I’m sure we can talk on and on about hackathons and app competitions and how to make those great. Those aren’t sustainable. You can have one great event. You can have another great event. Then what happens after that? People go home. They go back to their lives. The hard question is, how do you create staying power? How do you get people to come together on and on? How do you create infrastructure that lets people actually do meaningful work in an ongoing way?We built an organization specifically designed for this purpose in LA. It’s called Compiler.LA, a for-profit that actually works on civic issues, but they commit a percentage of all of their revenue back to the community to build the civic tech ecosystem. They’ll get contracts from foundations, from cities, and then they use part of that money to then go back and help build the civic tech community. That to me is the kind of staying power that we need to have.In closing, there’s a great book called Small Pieces, Loosely Joined. It’s about the web, and it’s about how the web is, by David Weinberger, who wrote this phrase, “the web is perfectly broken,” which I think is an interesting phrase, “perfectly broken,” because it’s messy deliberately. It’s not meant to be simple or easy; uniform and centralize. 
It’s supposed to be hacked together, distributed, customized, and remixed. It’s supposed to be human.Bringing this all back then to data and open data portals: We too often think about building that one great portal, one great experience, for everyone, but what we learn from the web is that actually we should be listening more than we are building, be understanding user needs and habits more than presuming them, and be going to where they are instead of asking them to come to us.We should do lots of different, small things — not just build one, big thing — and instead think about crafting the ideal experiences for the wonderfully diverse users you have: be it the mayor, city staff, or even your boss; be it the researcher, the entrepreneur, or the regular citizen; be it the person you hadn’t thought of until you meet them randomly at the library. Be it anyone.This essay is based on an edited transcript from my keynote address at the 2015 National Association of Government Webmasters 2015 (NAGW); video is below. | 2024-11-08T01:09:26 | en | train |
10,825,445 | F_J_H | 2016-01-02T07:29:34 | Fred Wilson: What is going to happen in 2016? | null | http://avc.com/2016/01/what-is-going-to-happen-in-2016/ | 1 | 0 | null | null | null | no_error | What Is Going To Happen In 2016 | -0001-11-30T00:00:00+00:00 | Fred Wilson |
It’s easier to predict the medium to long term future. We will be able to tell our cars to take us home after a late night of new year’s partying within a decade. I sat next to a life sciences investor at a dinner a couple months ago who told me cancer will be a curable disease within the next decade. As amazing as these things sound, they are coming and soon.
But what will happen this year that we are now in? That’s a bit trickier. But I will take some shots this morning.
Oculus will finally ship the Rift in 2016. Games and other VR apps for the Rift will be released. We just learned that the Touch controller won’t ship with the Rift and is delayed until later in 2016. I believe the initial commercial versions of Oculus technology will underwhelm. The technology has been so hyped and it is hard to live up to that. Games will be the strongest early use case, but not everyone is going to want to put on a headset to play a game. I think VR will only reach its true potential when they figure out how to deploy it in a more natural way.
We will see a new form of wearables take off in 2016. The wrist is not the only place we might want to wear a computer on our bodies. If I had to guess, I would bet on something we wear in or on our ears.
One of the big four will falter in 2016. My guess is Apple. They did not have a great year in 2015 and I’m thinking that it will get worse in 2016.
The FAA regulations on the commercial drone industry will turn out to be a boon for the drone sector, legitimizing drone flights for all sorts of use cases and establishing clear rules for what is acceptable and what is not.
The trend towards publishing inside of social networks (Facebook being the most popular one) will go badly for a number of high profile publishers who won’t be able to monetize as effectively inside social networks and there will be at least one high profile victim of this strategy who will go under as a result.
Time Warner will spin off its HBO business to create a direct competitor to Netflix and the independent HBO will trade at a higher market cap than the entire Time Warner business did pre spinoff.
Bitcoin finally finds a killer app with the emergence of Open Bazaar protocol powered zero take rate marketplaces. (note that OB1, an open bazaar powered service, is a USV portfolio company).
Slack will become so pervasive inside of enterprises that spam will become a problem and third party Slack spam filters will emerge. At the same time, the Slack platform will take off and building Slack bots will become the next big thing in enterprise software.
Donald Trump will be the Republican nominee and he will attack the tech sector for its support of immigrant labor. As a result the tech sector will line up behind Hillary Clinton who will be elected the first woman President.
Markdown mania will hit the venture capital sector as VC firms follow Fidelity’s lead and start aggressively taking down the valuations in their portfolios. Crunchbase will start capturing this valuation data and will become a de-facto “yahoo finance” for the startup sector. Employees will realize their options are underwater and will start leaving tech startups in droves.
Some of these predictions border on the ridiculous and that is somewhat intentional. I think there is an element of truth (or at least possibility) in all of them. And I will come back to this list a year from now and review the results.
Best wishes to everyone for a happy and healthy 2016.
| 2024-11-08T15:57:15 | en | train |
10,825,482 | otoburb | 2016-01-02T07:55:36 | A (MirageOS OCaml) Unikernel Firewall for QubesOS | null | http://roscidus.com/blog/blog/2016/01/01/a-unikernel-firewall-for-qubesos/ | 10 | 0 | null | null | null | no_error | A Unikernel Firewall for QubesOS | null | Thomas Leonard |
QubesOS provides a desktop operating system made up of multiple virtual machines, running under Xen.
To protect against buggy network drivers, the physical network hardware is accessed only by a dedicated (and untrusted) "NetVM", which is connected to the rest of the system via a separate (trusted) "FirewallVM".
This firewall VM runs Linux, processing network traffic with code written in C.
In this blog post, I replace the Linux firewall VM with a MirageOS unikernel.
The resulting VM uses safe (bounds-checked, type-checked) OCaml code to process network traffic,
uses less than a tenth of the memory of the default FirewallVM, boots several times faster,
and should be much simpler to audit or extend.
Table of Contents
Qubes
Qubes networking
Problems with FirewallVM
A Unikernel Firewall
Booting a Unikernel on Qubes
Networking
The Xen virtual network layer
The Ethernet layer
The IP layer
Evaluation
Exercises
Summary
( this post also appeared on Reddit and Hacker News )
Qubes
QubesOS is a security-focused desktop operating system that uses virtual machines to isolate applications from each other. The screenshot below shows my current desktop. The windows with green borders are running Fedora in my "comms" VM, which I use for gmail and similar trusted sites (with NoScript). The blue windows are from a Debian VM which I use for software development. The red windows are another Fedora VM, which I use for general browsing (with flash, etc) and running various untrusted applications:
Another Fedora VM ("dom0") runs the window manager and drives most of the physical hardware (mouse, keyboard, screen, disks, etc).
Networking is a particularly dangerous activity, since attacks can come from anywhere in the world and handling network hardware and traffic is complex.
Qubes therefore uses two extra VMs for networking:
NetVM drives the physical network device directly. It runs network-manager and provides the system tray applet for configuring the network.
FirewallVM sits between the application VMs and NetVM. It implements a firewall and router.
The full system looks something like this:
The lines between VMs in the diagram above represent network connections.
If NetVM is compromised (e.g. by exploiting a bug in the kernel module driving the wifi card) then the system as a whole can still be considered secure - the attacker is still outside the firewall.
Besides traditional networking, all VMs can communicate with dom0 via some Qubes-specific protocols.
These are used to display window contents, tell VMs about their configuration, and provide direct channels between VMs where appropriate.
Qubes networking
There are three IP networks in the default configuration:
192.168.1.* is the external network (to my house router).
10.137.1.* is a virtual network connecting NetVM to the firewalls (you can have multiple firewall VMs).
10.137.2.* connects the app VMs to the default FirewallVM.
Both NetVM and FirewallVM perform NAT, so packets from "comms" appear to NetVM to have been sent by the firewall, and packets from the firewall appear to my house router to have come from NetVM.
Each of the AppVMs is configured to use the firewall (10.137.2.1) as its DNS resolver.
FirewallVM uses an iptables rule to forward DNS traffic to its resolver, which is NetVM.
Problems with FirewallVM
After using Qubes for a while, there are a number of things about the default FirewallVM that I'm unhappy about:
It runs a full Linux system, which uses at least 300 MB of RAM. This seems excessive.
It takes several seconds to boot.
There is a race somewhere setting up the DNS redirection. Adding some debug to track down the bug made it disappear.
The iptables configuration is huge and hard to understand.
There is another, more serious, problem.
Xen virtual network devices are implemented as a client ("netfront") and a server ("netback"), which are Linux kernel modules in sys-firewall.
In a traditional Xen system, the netback driver runs in dom0 and is fully trusted. It is coded to protect itself against misbehaving client VMs. Netfront, by contrast, assumes that netback is trustworthy.
The Xen developers only considers bugs in netback to be security critical.
In Qubes, NetVM acts as netback to FirewallVM, which acts as a netback in turn to its clients.
But in Qubes, NetVM is supposed to be untrusted! So, we have code running in kernel mode in the (trusted) FirewallVM that is talking to and trusting the (untrusted) NetVM!
For example, as the Qubes developers point out in Qubes Security Bulletin #23, the netfront code that processes responses from netback uses the request ID quoted by netback as an index into an array without even checking if it's in range (they have fixed this in their fork).
What can an attacker do once they've exploited FirewallVM's trusting netfront driver?
Presumably they now have complete control of FirewallVM.
At this point, they can simply reuse the same exploit to take control of the client VMs, which are running the same trusting netfront code!
A Unikernel Firewall
I decided to see whether I could replace the default firewall ("sys-firewall") with a MirageOS unikernel.
A Mirage unikernel is an OCaml program compiled to run as an operating system kernel.
It pulls in just the code it needs, as libraries.
For example, my firewall doesn't require or use a hard disk, so it doesn't contain any code for dealing with block devices.
If you want to follow along, my code is on GitHub in my qubes-mirage-firewall repository.
The README explains how to build it from source.
For testing, you can also just download the mirage-firewall-bin-0.1.tar.bz2 binary kernel tarball.
dom0 doesn't have network access, but you can proxy the download through another VM:
[tal@dom0 ~]$ cd /tmp
[tal@dom0 tmp]$ qvm-run -p sys-net 'wget -O - https://github.com/talex5/qubes-mirage-firewall/releases/download/0.1/mirage-firewall-bin-0.1.tar.bz2' > mirage-firewall-bin-0.1.tar.bz2
[tal@dom0 tmp]$ tar tf mirage-firewall-bin-0.1.tar.bz2
mirage-firewall/
mirage-firewall/vmlinuz
mirage-firewall/initramfs
mirage-firewall/modules.img
[tal@dom0 ~]$ cd /var/lib/qubes/vm-kernels/
[tal@dom0 vm-kernels]$ tar xf /tmp/mirage-firewall-bin-0.1.tar.bz2
The tarball contains vmlinuz, which is the unikernel itself, plus a couple of dummy files that Qubes requires to recognise it as a kernel (modules.img and initramfs).
Create a new ProxyVM named "mirage-firewall" to run the unikernel:
You can use any template, and make it standalone or not. It doesn't matter, since we don't use the hard disk.
Set the type to ProxyVM.
Select sys-net for networking (not sys-firewall).
Click OK to create the VM.
Go to the VM settings, and look in the "Advanced" tab.
Set the kernel to mirage-firewall.
Turn off memory balancing and set the memory to 32 MB or so (you might have to fight a bit with the Qubes GUI to get it this low).
Set VCPUs (number of virtual CPUs) to 1.
(this installation mechanism is obviously not ideal; hopefully future versions of Qubes will be more unikernel-friendly)
You can run mirage-firewall alongside your existing sys-firewall and you can choose which AppVMs use which firewall using the GUI.
For example, to configure "untrusted" to use mirage-firewall:
You can view the unikernel's log output from the GUI, or with sudo xl console mirage-firewall in dom0 if you want to see live updates.
If you want to explore the code but don't know OCaml, a good tip is that most modules (.ml files) have a corresponding .mli interface file which describes the module's public API (a bit like a .h file in C).
It's usually worth reading those interface files first.
I tested initially with Qubes 3.0 and have just upgraded to the 3.1 alpha. Both seem to work.
Booting a Unikernel on Qubes
Qubes runs on Xen and a Mirage application can be compiled to a Xen kernel image using mirage configure --xen.
However, Qubes expects a VM to provide three Qubes-specific services and doesn't consider the VM to be running until it has connected to each of them. They are qrexec (remote command execution), gui (displaying windows on the dom0 desktop) and QubesDB (a key-value store).
I wrote a little library, mirage-qubes, to implement enough of these three protocols for the firewall (the GUI does nothing except handshake with dom0, since the firewall has no GUI).
Here's the full boot code in my firewall, showing how to connect the agents:
unikernel.ml
let start () =
let start_time = Clock.time () in
Log_reporter.init_logging ();
(* Start qrexec agent, GUI agent and QubesDB agent in parallel *)
let qrexec = RExec.connect ~domid:0 () in
let gui = GUI.connect ~domid:0 () in
let qubesDB = DB.connect ~domid:0 () in
(* Wait for clients to connect *)
qrexec >>= fun qrexec ->
let agent_listener = RExec.listen qrexec Command.handler in
gui >>= fun gui ->
Lwt.async (fun () -> GUI.listen gui);
qubesDB >>= fun qubesDB ->
Log.info "agents connected in %.3f s (CPU time used since boot: %.3f s)"
(fun f -> f (Clock.time () -. start_time) (Sys.time ()));
(* Watch for shutdown requests from Qubes *)
let shutdown_rq = OS.Lifecycle.await_shutdown () >>= fun (`Poweroff | `Reboot) -> return () in
(* Set up networking *)
let net_listener = network qubesDB in
(* Run until something fails or we get a shutdown request. *)
Lwt.choose [agent_listener; net_listener; shutdown_rq] >>= fun () ->
(* Give the console daemon time to show any final log messages. *)
OS.Time.sleep 1.0
After connecting the agents, we start a thread watching for shutdown requests (which arrive via XenStore, a second database) and then configure networking.
Tips on reading OCaml
let x = ... defines a variable.
let fn args = ... defines a function.
Clock.time is the time function in the Clock module.
() is the empty tuple (called "unit"). It's used for functions that don't take arguments, or return nothing useful.
~foo is a named argument. connect ~domid:0 is like connect(domid = 0) in Python.
promise >>= f calls function f when the promise resolves. It's like promise.then(f) in JavaScript.
foo () >>= fun result -> is the asynchronous version of let result = foo () in.
return x creates an already-resolved promise (it does not make the function return).
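Here is a tiny, stand-alone example (not taken from the firewall) that ties these pieces of notation together; it assumes nothing beyond the standard Lwt libraries (lwt, lwt.unix):

open Lwt.Infix

(* A function taking a labelled argument and (); it sleeps briefly,
   then produces a greeting as an already-resolved promise. *)
let greet ~name () =
  Lwt_unix.sleep 0.1 >>= fun () ->
  Lwt.return (Printf.sprintf "hello, %s" name)

let () =
  Lwt_main.run (
    greet ~name:"qubes" () >>= fun message ->  (* like: let message = greet ... in *)
    Lwt_io.printl message
  )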
Networking
The general setup is simple enough: we read various configuration settings (IP addresses, netmasks, etc) from QubesDB,
set up our two networks (the client-side one and the one with NetVM), and configure a router to send packets between them:
unikernel.ml
(* Set up networking and listen for incoming packets. *)
let network qubesDB =
(* Read configuration from QubesDB *)
let config = Dao.read_network_config qubesDB in
Logs.info "Client (internal) network is %a"
(fun f -> f Ipaddr.V4.Prefix.pp_hum config.Dao.clients_prefix);
(* Initialise connection to NetVM *)
Uplink.connect config >>= fun uplink ->
(* Report success *)
Dao.set_iptables_error qubesDB "" >>= fun () ->
(* Set up client-side networking *)
let client_eth = Client_eth.create
~client_gw:config.Dao.clients_our_ip
~prefix:config.Dao.clients_prefix in
(* Set up routing between networks and hosts *)
let router = Router.create
~client_eth
~uplink:(Uplink.interface uplink) in
(* Handle packets from both networks *)
Lwt.join [
Client_net.listen router;
Uplink.listen uplink router
]
OCaml notes
config.Dao.clients_our_ip means the clients_our_ip field of the config record, as defined in the Dao module.
~client_eth is short for ~client_eth:client_eth - i.e. pass the value of the client_eth variable as a parameter also named client_eth.
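For instance, the Client_eth.create call above can be written either way; the second form below just spells out the labelled-argument shorthand (this is an illustration, not extra firewall code):

(* passing the record fields directly under their labels *)
let client_eth =
  Client_eth.create
    ~client_gw:config.Dao.clients_our_ip
    ~prefix:config.Dao.clients_prefix

(* equivalent: a variable named like the label can be passed as just ~label *)
let client_eth =
  let client_gw = config.Dao.clients_our_ip in
  let prefix = config.Dao.clients_prefix in
  Client_eth.create ~client_gw ~prefix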
The Xen virtual network layer
At the lowest level, networking requires the ability to send a blob of data from one VM to another.
This is the job of the Xen netback/netfront protocol.
For example, consider the case of a new AppVM (Xen domain ID 5) being connected to FirewallVM (4).
First, dom0 updates its XenStore database (which is shared with the VMs). It creates two directories:
/local/domain/4/backend/vif/5/0/
/local/domain/5/device/vif/0/
Each directory contains a state file (set to 1, which means initialising) and information about the other end.
The first directory is monitored by the firewall (domain 4).
When it sees the new entry, it knows it has a new network connection to domain 5, interface 0.
It writes to the directory information about what features it supports and sets the state to 2 (init-wait).
The second directory will be seen by the new domain 5 when it boots.
It tells it that it has a network connection to dom 4.
The client looks in dom 4's backend directory and waits for the state to change to init-wait, then checks the supported features.
It allocates memory to share with the firewall, tells Xen to grant access to dom 4, and writes the ID for the grant to the XenStore directory.
It sets its own state to 4 (connected).
When the firewall sees the client is connected, it reads the grant refs, tells Xen to map those pages of memory into its own address space, and sets its own state to connected too.
The two VMs can now use the shared memory to exchange messages (blocks of data up to 64 KB).
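A toy model of that handshake in plain OCaml may help; it uses only the states described above and none of the real XenStore or mirage-net-xen APIs:

(* XenbusState values, as written to the "state" files above *)
type state =
  | Initialising   (* 1: directory just created by dom0 *)
  | Init_wait      (* 2: backend has published its supported features *)
  | Connected      (* 4: grant refs shared/mapped; ready to exchange messages *)

(* Frontend (the client VM): once the backend reaches init-wait, it shares
   memory pages, writes the grant refs to XenStore and declares itself connected. *)
let frontend_step ~backend state =
  match state with
  | Initialising when backend = Init_wait -> Connected
  | s -> s

(* Backend (the firewall): it first publishes its features, then waits for the
   frontend to connect before mapping the grant refs and connecting too. *)
let backend_step ~frontend state =
  match state with
  | Initialising -> Init_wait
  | Init_wait when frontend = Connected -> Connected
  | s -> s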
The reason I had to find out about all this is that the mirage-net-xen library only implemented the netfront side of the protocol.
Luckily, Dave Scott had already started adding support for netback and I was able to complete that work.
Getting this working with a Mirage client was fairly easy, but I spent a long time trying to figure out why my code was making Linux VMs kernel panic.
It turned out to be an amusing bug in my netback serialisation code, which only worked with Mirage by pure luck.
However, this did alert me to a second bug in the Linux netfront driver: even if the ID netback sends is within the array bounds, that entry isn't necessarily valid.
Sending an unused ID would cause netfront to try to unmap someone else's grant-ref.
Not exploitable, perhaps, but another good reason to replace this code!
The Ethernet layer
It might seem like we're nearly done: we want to send IP (Internet Protocol) packets between VMs, and we have a way to send blocks of data.
However, we must now take a little detour down Legacy Lane...
Operating systems don't expect to send IP packets directly.
Instead, they expect to be connected to an Ethernet network, which requires each IP packet to be wrapped in an Ethernet "frame".
Our virtual network needs to emulate an Ethernet network.
In an Ethernet network, each network interface device has a unique "MAC address" (e.g. 01:23:45:67:89:ab).
An Ethernet frame contains source and destination MAC addresses, plus a type (e.g. "IPv4 packet").
When a client VM wants to send an IP packet, it first broadcasts an Ethernet ARP request, asking for the MAC address of the target machine.
The target machine responds with its MAC address.
The client then transmits an Ethernet frame addressed to this MAC address, containing the IP packet inside.
If we were building our system out of physical machines, we'd connect everything via an Ethernet switch, like this:
This layout isn't very good for us, though, because it means the VMs can talk to each other directly.
Normally you might trust all the machines behind the firewall, but the point of Qubes is to isolate the VMs from each other.
Instead, we want a separate Ethernet network for each client VM:
In this layout, the Ethernet addressing is completely pointless - a frame simply goes to the machine at the other end of the link.
But we still have to add an Ethernet frame whenever we send a packet and remove it when we receive one.
And we still have to implement the ARP protocol for looking up MAC addresses.
That's the job of the Client_eth module (dom0 puts the addresses in XenStore for us).
As well as sending queries, a VM can also broadcast a "gratuitous ARP" to tell other VMs its address without being asked.
Receivers of a gratuitous ARP may then update their ARP cache, although FirewallVM is configured not to do this (see /proc/sys/net/ipv4/conf/all/arp_accept).
For mirage-firewall, I just log what the client requested but don't let it update anything:
client_eth.ml
let input_gratuitous t frame =
let open Arpv4_wire in
let spa = Ipaddr.V4.of_int32 (get_arp_spa frame) in
let sha = Macaddr.of_bytes_exn (copy_arp_sha frame) in
match lookup t spa with
| Some real_mac when Macaddr.compare sha real_mac = 0 ->
Log.info "client suggests updating %s -> %s (as expected)"
(fun f -> f (Ipaddr.V4.to_string spa) (Macaddr.to_string sha));
| Some other_mac ->
Log.warn "client suggests incorrect update %s -> %s (should be %s)"
(fun f -> f (Ipaddr.V4.to_string spa) (Macaddr.to_string sha) (Macaddr.to_string other_mac));
| None ->
Log.warn "client suggests incorrect update %s -> %s (unexpected IP)"
(fun f -> f (Ipaddr.V4.to_string spa) (Macaddr.to_string sha))
I'm not sure whether or not Qubes expects one client VM to be able to look up another one's MAC address.
It sets /qubes-netmask in QubesDB to 255.255.255.0, indicating that all clients are on the same Ethernet network.
Therefore, I wrote my ARP responder to respond on behalf of the other clients to maintain this illusion.
However, it appears that my Linux VMs have ignored the QubesDB setting and used a netmask of 255.255.255.255. Puzzling, but it should work either way.
Here's the code that connects a new client virtual interface (vif) to our router (in Client_net):
client_net.ml
(** Connect to a new client's interface and listen for incoming frames. *)
let add_vif { Dao.domid; device_id; client_ip } ~router ~cleanup_tasks =
Netback.make ~domid ~device_id >>= fun backend ->
Log.info "Client %d (IP: %s) ready" (fun f ->
f domid (Ipaddr.V4.to_string client_ip));
ClientEth.connect backend >>= or_fail "Can't make Ethernet device" >>= fun eth ->
let client_mac = Netback.mac backend in
let iface = new client_iface eth client_ip client_mac in
Router.add_client router iface;
Cleanup.on_cleanup cleanup_tasks (fun () -> Router.remove_client router iface);
let fixed_arp = Client_eth.ARP.create ~net:router.Router.client_eth iface in
Netback.listen backend (fun frame ->
match Wire_structs.parse_ethernet_frame frame with
| None -> Log.warn "Invalid Ethernet frame" Logs.unit; return ()
| Some (typ, _destination, payload) ->
match typ with
| Some Wire_structs.ARP -> input_arp ~fixed_arp ~eth payload
| Some Wire_structs.IPv4 -> input_ipv4 ~client_ip ~router frame payload
| Some Wire_structs.IPv6 -> return ()
| None -> Logs.warn "Unknown Ethernet type" Logs.unit; Lwt.return_unit
)
OCaml note: { x = 1; y = 2 } is a record (struct). { x = x; y = y } can be abbreviated to just { x; y }. Here we pattern-match on a Dao.client_vif record passed to the function to extract the fields.
The Netback.listen at the end runs a loop that communicates with the netfront driver in the client.
Each time a frame arrives, we check the type and dispatch to either the ARP handler or, for IPv4 packets,
the firewall code.
We don't support IPv6, since Qubes doesn't either.
client_net.ml
let input_arp ~fixed_arp ~eth request =
match Client_eth.ARP.input fixed_arp request with
| None -> return ()
| Some response -> ClientEth.write eth response
(** Handle an IPv4 packet from the client. *)
let input_ipv4 ~client_ip ~router frame packet =
let src = Wire_structs.Ipv4_wire.get_ipv4_src packet |> Ipaddr.V4.of_int32 in
if src = client_ip then Firewall.ipv4_from_client router frame
else (
Log.warn "Incorrect source IP %a in IP packet from %a (dropping)"
(fun f -> f Ipaddr.V4.pp_hum src Ipaddr.V4.pp_hum client_ip);
return ()
)
OCaml note: |> is the "pipe" operator. x |> fn is the same as fn x, but sometimes it reads better to have the values flowing left-to-right. You can also think of it as the synchronous version of >>=.
Notice that we check the source IP address is the one we expect.
This means that our firewall rules can rely on client addresses.
There is similar code in Uplink, which handles the NetVM side of things:
uplink.ml
let connect config =
let ip = config.Dao.uplink_our_ip in
Netif.connect "tap0" >>= or_fail "Can't connect uplink device" >>= fun net ->
Eth.connect net >>= or_fail "Can't make Ethernet device for tap" >>= fun eth ->
Arp.connect eth >>= or_fail "Can't add ARP" >>= fun arp ->
Arp.add_ip arp ip >>= fun () ->
let netvm_mac = Arp.query arp config.Dao.uplink_netvm_ip >|= function
| `Timeout -> failwith "ARP timeout getting MAC of our NetVM"
| `Ok netvm_mac -> netvm_mac in
let my_ip = Ipaddr.V4 ip in
let interface = new netvm_iface eth netvm_mac config.Dao.uplink_netvm_ip in
return { net; eth; arp; interface; my_ip }
let listen t router =
Netif.listen t.net (fun frame ->
(* Handle one Ethernet frame from NetVM *)
Eth.input t.eth
~arpv4:(Arp.input t.arp)
~ipv4:(fun _ip -> Firewall.ipv4_from_netvm router frame)
~ipv6:(fun _ip -> return ())
frame
)
OCaml note: Arp.input t.arp is a partially-applied function. It's short for fun x -> Arp.input t.arp x.
Here we just use the standard Eth.input code to dispatch on the frame.
It checks that the destination MAC matches ours and dispatches based on type.
We couldn't use it for the client code above because there we also want to
handle frames addressed to other clients, which Eth.input would discard.
Eth.input extracts the IP packet from the Ethernet frame and passes that to our callback,
but the NAT library I used likes to work on whole Ethernet frames, so I ignore the IP packet
(_ip) and send the frame instead.
The IP layer
Once an IP packet has been received, it is sent to the Firewall module
(either ipv4_from_netvm or ipv4_from_client, depending on where it came from).
The process is similar in each case:
Check if we have an existing NAT entry for this packet. If so, it's part of a conversation we've already approved, so perform the translation and send it on its way. NAT support is provided by the handy mirage-nat library.
If not, collect useful information about the packet (source, destination, protocol, ports) and check against the user's firewall rules, then take whatever action they request.
Here's the code that takes a client IPv4 frame and applies the firewall rules:
firewall.ml
let ipv4_from_client t frame =
match Memory_pressure.status () with
| `Memory_critical -> (* TODO: should happen before copying and async *)
Log.warn "Memory low - dropping packet" Logs.unit;
return ()
| `Ok ->
(* Check for existing NAT entry for this packet *)
match translate t frame with
| Some frame -> forward_ipv4 t frame (* Some existing connection or redirect *)
| None ->
(* No existing NAT entry. Check the firewall rules. *)
match classify t frame with
| None -> return ()
| Some info -> apply_rules t Rules.from_client info
Qubes provides a GUI that lets the user specify firewall rules.
It then encodes these as Linux iptables rules and puts them in QubesDB.
This isn't a very friendly format for non-Linux systems, so I ignore this and hard-code the rules in OCaml instead, in the Rules module:
(** Decide what to do with a packet from a client VM.
Note: If the packet matched an existing NAT rule then this isn't called. *)
let from_client = function
| { dst = (`External _ | `NetVM) } -> `NAT
| { dst = `Client_gateway; proto = `UDP { dport = 53 } } -> `NAT_to (`NetVM, 53)
| { dst = (`Client_gateway | `Firewall_uplink) } -> `Drop "packet addressed to firewall itself"
| { dst = `Client _ } -> `Drop "prevent communication between client VMs"
| { dst = `Unknown_client _ } -> `Drop "target client not running"
(** Decide what to do with a packet received from the outside world.
Note: If the packet matched an existing NAT rule then this isn't called. *)
let from_netvm = function
| _ -> `Drop "drop by default"
For packets from clients to the outside world we use the NAT action to rewrite the source address so the packets appear to come from the firewall (via some unused port).
DNS queries sent to the firewall get redirected to NetVM (UDP port 53 is DNS).
In both cases, the NAT actions update the NAT table so that we will forward any responses back to the client.
Everything else is dropped, with a log message.
I think it's rather nice the way we can use OCaml's existing support for pattern matching to implement the rules, without having to invent a new syntax.
Originally, I had a default-drop rule at the end of from_client, but OCaml helpfully pointed out that it wasn't needed, as the previous rules already covered every case.
The incoming policy is to drop everything that wasn't already allowed by a rule added by the out-bound NAT.
I don't know much about firewalls, but this scheme works for my needs.
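To see how this style extends, here is a hypothetical variant of from_client that also relays NTP requests (UDP port 123) sent to the firewall on to NetVM, in the same way as DNS; the extra line is an illustration only and is not part of the firewall as published:

let from_client = function
  | { dst = (`External _ | `NetVM) } -> `NAT
  | { dst = `Client_gateway; proto = `UDP { dport = 53 } } -> `NAT_to (`NetVM, 53)
  | { dst = `Client_gateway; proto = `UDP { dport = 123 } } -> `NAT_to (`NetVM, 123)  (* hypothetical NTP relay *)
  | { dst = (`Client_gateway | `Firewall_uplink) } -> `Drop "packet addressed to firewall itself"
  | { dst = `Client _ } -> `Drop "prevent communication between client VMs"
  | { dst = `Unknown_client _ } -> `Drop "target client not running"

As with DNS, the `NAT_to action rewrites the destination and records a NAT table entry, so the reply finds its way back to the client.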
For comparison, the Linux iptables rules currently in my sys-firewall are:
[user@sys-firewall ~]$ sudo iptables -vL -n -t filter
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 DROP udp -- vif+ * 0.0.0.0/0 0.0.0.0/0 udp dpt:68
55336 83M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
35540 23M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 ACCEPT all -- vif0.0 * 0.0.0.0/0 0.0.0.0/0
0 0 DROP all -- vif+ vif+ 0.0.0.0/0 0.0.0.0/0
519 33555 ACCEPT udp -- * * 10.137.2.12 10.137.1.1 udp dpt:53
16 1076 ACCEPT udp -- * * 10.137.2.12 10.137.1.254 udp dpt:53
0 0 ACCEPT tcp -- * * 10.137.2.12 10.137.1.1 tcp dpt:53
0 0 ACCEPT tcp -- * * 10.137.2.12 10.137.1.254 tcp dpt:53
0 0 ACCEPT icmp -- * * 10.137.2.12 0.0.0.0/0
0 0 DROP tcp -- * * 10.137.2.12 10.137.255.254 tcp dpt:8082
264 14484 ACCEPT all -- * * 10.137.2.12 0.0.0.0/0
254 16404 ACCEPT udp -- * * 10.137.2.9 10.137.1.1 udp dpt:53
2 130 ACCEPT udp -- * * 10.137.2.9 10.137.1.254 udp dpt:53
0 0 ACCEPT tcp -- * * 10.137.2.9 10.137.1.1 tcp dpt:53
0 0 ACCEPT tcp -- * * 10.137.2.9 10.137.1.254 tcp dpt:53
0 0 ACCEPT icmp -- * * 10.137.2.9 0.0.0.0/0
0 0 DROP tcp -- * * 10.137.2.9 10.137.255.254 tcp dpt:8082
133 7620 ACCEPT all -- * * 10.137.2.9 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 32551 packets, 1761K bytes)
pkts bytes target prot opt in out source destination
[user@sys-firewall ~]$ sudo iptables -vL -n -t nat
Chain PREROUTING (policy ACCEPT 362 packets, 20704 bytes)
pkts bytes target prot opt in out source destination
829 50900 PR-QBS all -- * * 0.0.0.0/0 0.0.0.0/0
362 20704 PR-QBS-SERVICES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 116 packets, 7670 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT all -- * vif+ 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0
945 58570 MASQUERADE all -- * * 0.0.0.0/0 0.0.0.0/0
Chain PR-QBS (1 references)
pkts bytes target prot opt in out source destination
458 29593 DNAT udp -- * * 0.0.0.0/0 10.137.2.1 udp dpt:53 to:10.137.1.1
0 0 DNAT tcp -- * * 0.0.0.0/0 10.137.2.1 tcp dpt:53 to:10.137.1.1
9 603 DNAT udp -- * * 0.0.0.0/0 10.137.2.254 udp dpt:53 to:10.137.1.254
0 0 DNAT tcp -- * * 0.0.0.0/0 10.137.2.254 tcp dpt:53 to:10.137.1.254
Chain PR-QBS-SERVICES (1 references)
pkts bytes target prot opt in out source destination
[user@sys-firewall ~]$ sudo iptables -vL -n -t mangle
Chain PREROUTING (policy ACCEPT 12090 packets, 17M bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 11387 packets, 17M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 703 packets, 88528 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 6600 packets, 357K bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 7303 packets, 446K bytes)
pkts bytes target prot opt in out source destination
[user@sys-firewall ~]$ sudo iptables -vL -n -t raw
Chain PREROUTING (policy ACCEPT 92093 packets, 106M bytes)
pkts bytes target prot opt in out source destination
0 0 DROP all -- vif20.0 * !10.137.2.9 0.0.0.0/0
0 0 DROP all -- vif19.0 * !10.137.2.12 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 32551 packets, 1761K bytes)
pkts bytes target prot opt in out source destination
[user@sys-firewall ~]$ sudo iptables -vL -n -t security
Chain INPUT (policy ACCEPT 11387 packets, 17M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 659 packets, 86158 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 6600 packets, 357K bytes)
pkts bytes target prot opt in out source destination
I find it hard to tell, looking at these tables, exactly what sys-firewall's security policy will actually do.
Evaluation
I timed start-up for the Linux-based "sys-firewall" and for "mirage-firewall" (after shutting them both down):
[tal@dom0 ~]$ time qvm-start sys-firewall
--> Creating volatile image: /var/lib/qubes/servicevms/sys-firewall/volatile.img...
--> Loading the VM (type = ProxyVM)...
--> Starting Qubes DB...
--> Setting Qubes DB info for the VM...
--> Updating firewall rules...
--> Starting the VM...
--> Starting the qrexec daemon...
Waiting for VM's qrexec agent......connected
--> Starting Qubes GUId...
Connecting to VM's GUI agent: .connected
--> Sending monitor layout...
--> Waiting for qubes-session...
real 0m9.321s
user 0m0.163s
sys 0m0.262s
[tal@dom0 ~]$ time qvm-start mirage-firewall
--> Loading the VM (type = ProxyVM)...
--> Starting Qubes DB...
--> Setting Qubes DB info for the VM...
--> Updating firewall rules...
--> Starting the VM...
--> Starting the qrexec daemon...
Waiting for VM's qrexec agent.connected
--> Starting Qubes GUId...
Connecting to VM's GUI agent: .connected
--> Sending monitor layout...
--> Waiting for qubes-session...
real 0m1.079s
user 0m0.130s
sys 0m0.192s
So, mirage-firewall starts in 1 second rather than 9. However, even this remaining second is mostly Qubes code running in dom0. xl list shows:
[tal@dom0 ~]$ sudo xl list
Name ID Mem VCPUs State Time(s)
dom0 0 6097 4 r----- 623.8
sys-net 4 294 4 -b---- 79.2
sys-firewall 17 1293 4 -b---- 9.9
mirage-firewall 18 30 1 -b---- 0.0
I guess sys-firewall did more work after telling Qubes it was ready, because Xen reports it used 9.9 seconds of CPU time.
mirage-firewall uses too little time for Xen to report anything.
Notice also that sys-firewall is using 1293 MB with no clients (it's configured to balloon up or down; it could probably go down to 300 MB without much trouble). I gave mirage-firewall a fixed 30 MB allocation, which seems to be enough.
I'm not sure how it compares with Linux for transmission performance, but it can max out my 30 Mbit/s Internet connection with its single CPU, so it's unlikely to matter.
Exercises
I've only implemented the minimal features to let me use it as my firewall.
The great thing about having a simple unikernel is that you can modify it easily.
Here are some suggestions you can try at home (easy ones first):
Change the policy to allow communication between client VMs (see the sketch just after this list).
Query the QubesDB /qubes-debug-mode key. If present and set, set logging to debug level.
Edit command.ml to provide a qrexec command to add or remove rules at runtime.
When a packet is rejected, add the frame to a ring buffer. Edit command.ml to provide a "dump-rejects" command that returns the rejected packets in pcap format, ready to be loaded into wireshark. Hint: you can use the ocaml-pcap library to read and write the pcap format.
All client VMs are reported as Client to the policy. Add a table mapping IP addresses to symbolic names, so you can e.g. allow DevVM to talk to TestVM or control access to specific external machines.
mirage-nat doesn't do NAT for ICMP packets. Add support, so ping works (see https://github.com/yomimono/mirage-nat/issues/15).
Qubes allows each VM to have two DNS servers. I only implemented the primary. Read the /qubes-secondary-dns and /qubes-netvm-secondary-dns keys from QubesDB and proxy that too.
Implement port knocking for new connections.
Add a Reject action that sends an ICMP rejection message.
Find out what we're supposed to do when a domain shuts down. Currently, we set the netback state to closed, but the directory in XenStore remains. Who is responsible for deleting it?
Update the firewall to use the latest version of the mirage-nat library, which has extra features such as expiry of old NAT table entries.
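To make the first suggestion concrete, here is a minimal, self-contained sketch of the kind of change involved. The type and constructor names below are simplified stand-ins for the firewall's real packet types and actions (they may not match your checkout of the code), so treat it as an illustration of the policy change rather than a drop-in patch for rules.ml:
(* Toy model of the client policy, for illustration only; the real firewall's
   types are richer than this. *)
type host = [ `External of string | `NetVM | `Client_gateway | `Firewall_uplink
            | `Client of int | `Unknown_client of int ]
type packet = { dst : host }
type action = [ `Accept | `NAT | `Drop of string ]

(* Exercise 1: return `Accept for traffic addressed to another client VM,
   where the current policy drops it. *)
let from_client : packet -> action = function
  | { dst = (`External _ | `NetVM) } -> `NAT
  | { dst = (`Client_gateway | `Firewall_uplink) } -> `Drop "packet addressed to the firewall itself"
  | { dst = `Client _ } -> `Accept   (* previously a `Drop that isolates client VMs *)
  | { dst = `Unknown_client _ } -> `Drop "target client not running"
Because the match must stay exhaustive, the compiler points out every case the policy forgets to handle, which is exactly the property that makes this kind of change safe to hand out as an exercise.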
Finally, Qubes Security Bulletin #4 says:
Due to a silly mistake made by the Qubes Team, the IPv6 filtering rules
have been set to ALLOW by default in all Service VMs, which results in
lack of filtering for IPv6 traffic originating between NetVM and the
corresponding FirewallVM, as well as between AppVMs and the
corresponding FirewallVM. Because the RPC services (rpcbind and
rpc.statd) are, by default, bound also to the IPv6 interfaces in all the
VMs by default, this opens up an avenue to attack a FirewallVM from a
corresponding NetVM or AppVM, and further attack another AppVM from the
compromised FirewallVM, using a hypothetical vulnerability in the above
mentioned RPC services (chained attack).
What changes would be needed to mirage-firewall to reproduce this bug?
Summary
QubesOS provides a desktop environment made from multiple virtual machines, isolated using Xen.
It runs the network drivers (which it doesn't trust) in a Linux "NetVM", which it assumes may be compromised, and places a "FirewallVM" between that and the VMs running user applications.
This design is intended to protect users from malicious or buggy network drivers.
However, the Linux kernel code running in FirewallVM is written with the assumption that NetVM is trustworthy.
It is fairly likely that a compromised NetVM could successfully attack FirewallVM.
Since both FirewallVM and the client VMs all run Linux, it is likely that the same exploit would then allow the client VMs to be compromised too.
I used MirageOS to write a replacement FirewallVM in OCaml.
The new virtual machine contains almost no C code (little more than malloc, printk, the OCaml GC and libm), and should therefore avoid problems such as the unchecked array bounds problem that recently affected the Qubes firewall.
It also uses less than a tenth of the minimum memory of the Linux FirewallVM, boots several times faster, and when it starts handling network traffic it is already fully configured, avoiding e.g. any race setting up firewalls or DNS forwarding.
The code is around 1000 lines of OCaml, and makes it easy to follow the progress of a network frame from the point where the network driver reads it from a Xen shared memory ring, through the Ethernet handling, to the IP firewall code, to the user firewall policy, and then finally to the shared memory ring of the output interface.
The code has only been lightly tested (I've just started using it as the FirewallVM on my main laptop), but will hopefully prove easy to extend (and, if necessary, debug).
| 2024-11-08T03:41:44 | en | train |
10,825,536 | BerislavLopac | 2016-01-02T08:29:13 | Happy people don’t leave jobs they love | null | http://randsinrepose.com/archives/shields-down/ | 360 | 124 | [
10826096,
10828339,
10826324,
10826158,
10826027,
10826280,
10826066,
10826268,
10826285,
10826297,
10825986,
10826304,
10826366,
10826242,
10826230,
10826193,
10826519,
10826182,
10828407,
10825672,
10828591,
10826621,
10826594,
10828447,
10827596,
10826126,
10826303,
10827515,
10827483,
10826262,
10827458,
10826218,
10826622,
10826233,
10826986
] | null | null | no_error | Shields Down | 2016-01-01T11:51:17-08:00 | null | Resignations happen in a moment, and it’s not when you declare, “I’m resigning.” The moment happened a long time ago when you received a random email from a good friend who asked, “I know you’re really happy with your current gig because you’ve been raving about it for a year, but would you like to come visit Our Company? No commitment. Just coffee.” Now, everyone involved in this conversation transaction is aware of what is going down. While there is certainly no commitment, there is a definitely an agenda. The reason they want you to visit The Company is because, of course, they want you there in the building because seeing a potential future is far more compelling than describing it. Still, seeing it isn’t the moment of resignation. The moment happened the instant you decided, “What the hell? I haven’t seen Don in months and it’d be good to see him.” Your shields are officially down. A Potential Future Your shields drop the moment you let a glimpse of a potential different future into your mind. It seems like a unconsidered off-the-cuff thought sans consequence, but the thought opens you to possibilities that did not exist the moment before the thought existed. What is incredibly slippery about this moment is the complex, nuanced, and instant mental math performed that precedes the shields-down situation. When you are indirectly asked to lower your shields, you immediately parse, place a value, and aggregate your opinions on the following: Am I happy with my job? Do I like my manager? My team? Is this project I’m working on fulfilling? Am I learning? Am I respected? Am I growing? Do I feel fairly compensated? Is this company/team going anywhere? Do I believe in the vision? Do I trust the leaders? Now, each human has a different prioritized subset of this list that they rank and value differently. Growth is paramount for some, truth for others. Whatever unique blend is important, you use that blend and ask yourself one final question as you consider lowering your shields. What has happened recently or in the past that either supports or detracts from what I value? The answer to that question determines whether your shields stay up or go down. Humans Never Forget As a leader of humans, I’ve watched sadly as valued co-workers have resigned. Each time I work to understand two things: Why are they leaving? When did their shields go down? In most cases, the answers to Question #1 are rehearsed and clear. It’s the question they’ve been considering and asking themselves, so their answers are smooth. I’m looking for a smaller company where I can have more impact. I’ve been here for three years and I’m looking for a change of scenery. It happens. I want to work somewhere more established where I can dig my teeth into one hard problem. These answers are fine, but they aren’t the complete reason why they are leaving. It’s the politically correct answer that is designed to easily answer the most obvious question. The real question, the real insight, comes from the answer to Question #2: When did their shields go down? Their shields drop when, in the moment they are presented with the offer of potential future opportunity, they quickly evaluate their rubric and make an instant call: Is this job meeting my bar? To find and understand this shields-down moment, I ask, “When did you start looking?” Often the answers are a vague, “It kind’a just happened. I wasn’t really looking. I’m really happy here.” Bullshit. 
If I’m sitting here talking with you it means two things: I don’t want you to leave and, to the best of my knowledge, you didn’t want to leave either but here you are leaving. It didn’t just happen. You chose. Maybe you weren’t looking, but once your shields dropped, you started looking. Happy people don’t leave jobs they love. The reason this reads cranky is because I, the leader of the humans, screwed up. Something in the construction of the team or the company nudged you at a critical moment. When that mail arrived gently asking you about coffee, you didn’t answer the way you answered the prior five similar mails with a brief, “Really happy here. Let’s get a drink some time!” You think you thought Hmmm… what the hell. It can’t hurt. What you actually thought or realized was: You know, I have no idea when I’m going to be a tech lead here. Getting yelled at two days ago still stings. I don’t believe a single thing senior leadership says. Often you’ve forgotten this original thought in your subsequent intense job deliberations, but when I ask, when I dig, I usually find a basic values violation that dug in, stuck, and festered. Sometimes it’s a major values violation from months ago. Sometimes it’s a small violation that occurred at the worst possible time. In either case, your expectations of your company and your job were not met and when faced with opportunity elsewhere, you engaged. It’s Not Just Boredom I covered a major contributor to shield drops in Bored People Quit. Boredom in its many forms is a major contributor to resignations, but the truth is the list of contributing factors to shield weakening is immense. When you combine this with the near constant increasing demand for talented humans, you’ve got a complex leadership situation. The reason I’m cranky is I’m doing the math. I’m placing a cost on the departure of a wanted human leaving and comparing that cost with whatever usually minor situation existed in the past that led to a shields-down situation. The departure cost is always exponentially higher. My advice is similarly frustrating. Strategies to prevent shields dropping are as numerous as the reasons shields drop in the first place. I’ve discovered shield drops after the fact with close co-workers whom I met with for a 1:1 every single week where I felt we were covering topics of substance; where I felt I understood what they valued and how they wanted to grow. I’ve been here for three years and I’m looking for a change of scenery. It happens. Two months ago, someone told them their project was likely to be canceled. It wasn’t. You know, I have no idea when I’m going to be a tech lead here. At the end of last month, she heard via the grapevine that she wasn’t going to be promoted. When she got the promotion she deserved, it was too late. I don’t believe a single thing senior leadership says. At the last All Hands, I blew off a question with a terse answer because I didn’t want to dignify gossip. I forgot there is signal even in gossip. Every moment as a leader is an opportunity to either strengthen or weaken shields. Every single moment. Happy New Year. Editor, March 2024: Shirts are now available to proudly display your favorite shield | 2024-11-08T17:29:23 | en | train |
10,825,580 | oldgun | 2016-01-02T08:51:09 | Encryption in the Balance: 2015 in Review | null | https://www.eff.org/deeplinks/2015/12/encryption-balance-2015-review | 56 | 8 | [
10828856,
10830380,
10830468
] | null | null | no_error | Encryption in the Balance: 2015 in Review | 2015-12-31T08:17:25-08:00 | Andrew Crocker and Bill Budington |
If you’ve spent any time reading about encryption this year, you know we’re in the midst of a “debate.” You may have also noted that it’s a strange debate, one that largely replays the same arguments made nearly 20 years ago, when the government abandoned its attempts to mandate weakened encryption and backdoors. Now some parts of the government have been trying to revisit that decision in the name of achieving “balance” between user security and public safety. The FBI, for example, acknowledges that widespread adoption of encryption has benefits for users, but it also claims its investigations of terrorists, criminals, and other wrongdoers will “go dark” unless it has a legal authority and the technical capability to read encrypted data. But because the principles of what makes encryption secure haven’t changed, the only “balance” that can satisfy the government’s goals is no balance at all—it would require dramatically rolling back the spread of strong encryption.
EFF has spent the past few months explaining the danger of the FBI’s demand, and mobilizing users to push back. And while the recent tragic attacks in Paris and San Bernardino have only increased the FBI’s (misguided) pressure to weaken encryption, we’ve also had real success in using grassroots advocacy to call on the president to support encryption. Here are some of the highlights:
Magical Thinking on Golden Keys
One of the biggest proponents of a “balanced” solution to the so-called Going Dark problem is FBI Director James Comey. At hearings in July and again this month, Comey has claimed that because some companies offer non-end-to-end encrypted communications tools, that’s proof that there is a way to achieve both user security and law enforcement access. He’s been backed up by the Washington Post editorial board and state and local law enforcement officials who all call on geniuses in Silicon Valley to “figure out” the balance.
The problem is that they don’t seem to have listened to the geniuses.
In fact, pushing back on the other side of this debate is a unified coalition of technologists, mega technology companies, and privacy advocates with a remarkably consistent message: weakening encryption is a terrible idea.
First up was an all-star group of cryptography experts who argued against government mandates for the Clipper Chip in the 90’s and reconvened to publish a paper in July rigorously analyzing a number of possible legislative mandates. As before, they concluded that inclusion of key escrow code that would siphon off key material to any third party would necessarily increase code complexity, in turn increasing the likelihood of security vulnerabilities and putting users at increased risk. They also noted that any organization (or set of organizations, with split-key schemas) holding "golden key" access to consumer devices would be a huge target for hackers. As recent compromises of such sensitive data as the employee records and fingerprints for 5.6 million government employees stored by the Office Of Personnel Management show, centralized high-risk databases can become targets for compromise.
What has changed since the initial report is the scale of encrypted communications that we rely on in our daily lives—from online transactions and HTTPS to device encryption and consumer security systems. A compromise in the 90s would have been very bad, but today it would be absolutely devastating. Finally, the experts point out important international governance questions that would have to be answered along with any mandate. Chief among these questions is whether encryption apps developed in the U.S. but intended for use in despotic regimes would be subject to similar key escrow systems based in those regimes. The intelligence community has offered no answers to these problematic looming questions. Nor have they explained how they could regulate the sizable number of open source and/or international encrypted apps.
Meanwhile, many of the biggest tech companies have also taken strong pro-encryption stances. Perhaps most noteworthy is Apple, which fought back against a court order demanding that it turn over communications by users of their iMessage app. Apple responded that the iMessage platform protects users with strong, end-to-end encryption, so Apple could not decrypt existing messages and was unwilling to jeopardize the security of its customers by building in a backdoor to compromise future communications. The government eventually backed off, but only after Apple delivered unencrypted iCloud backups of some of the messages in question. Apple CEO Tim Cook has vociferously opposed any new backdoor mandates, echoing cryptography experts with the statement "you can’t have a back door that’s only for the good guys."
Google, which has made full-disk encryption mandatory for smartphones running Android Marshmallow, joined Apple as well as over 140 other organizations and individuals (including EFF) in a joint letter delivered to President Obama in late May urging the administration to reject any proposal weakening the security of their products.
More recently, the Information Technology Industry Council, a tech trade association of the 62 largest global tech firms, issued a news release explaining that encryption "is a security tool we rely on everyday to stop criminals" and "preserve our security and safety," concluding that "weakening security with the aim of advancing security simply does not make sense."
Save Crypto
Above all, individual Americans don’t want the government to have backdoor access to their communications, and EFF has been working to help the government hear that message. On September 30, along with Access Now , we launched SaveCrypto.org as a way to let the public have its voice heard. Over 104,000 people signed on to a statement rejecting "any law, policy, or mandate that would undermine our security" and demanding "privacy, security, and integrity for our communications and systems"— enough to warrant an official public response by the White House. Despite the president’s support for “strong encryption,” the White House’s initial response to the petition was underwhelming.
In the midst of this action, a series of leaks to the Washington Post revealed that the Obama administration won’t seek legislation for now. (Though that hasn’t stopped Manhattan District Attorney Cy Vance from introducing his own deeply flawed model bill.) But statements by Comey and others suggest the government will instead try to privately pressure companies to weaken their own products. That’s an even worse outcome, so we’ll keep putting the pressure on the administration in 2016 to publicly disavow this approach and come out with the unequivocal defense of encryption the petition asked for in the first place.
This article is part of our Year In Review series; read other articles about the fight for digital rights in 2015. Like what you're reading? EFF is a member-supported nonprofit, powered by donations from individuals around the world. Join us today and defend free speech, privacy, and innovation.
| 2024-11-08T12:18:38 | en | train |
10,825,732 | jjude | 2016-01-02T10:07:19 | The Myth of Epiphany | null | http://scottberkun.com/2015/the-myth-of-epiphany/ | 2 | 0 | null | null | null | no_error | The Myth of Epiphany | 2015-01-13T18:17:13+00:00 | null |
One of the most provocative chapters of The Myths of Innovation is The Myth of Epiphany.
Do you love stories about flashes of insight? Or wish you had more of them so you can be more creative? Most people do. But the reasons these stories are loved have little to do with how breakthroughs usually happen.
The surprise is if you scratch the surface of any epiphany story, you’ll find they are mostly fabrications and exaggerations. As fun and inspiring as they seem, their value fades in practice. I don’t say this to depress you: it is true that the thrill of an epiphany feels great. But if you’re serious about ideas you need to look deeper into what these stories are really about.
One of the best accountings of the mythology is from Tim Berners-Lee, describing how he invented the World Wide Web, one of the greatest inventions of the 20th century:
“Journalists have always asked me what the crucial idea was or what the singular event was that allowed the web to exist one day when it hadn’t before. They are frustrated when I tell them there was no Eureka moment. It was not like the legendary apple falling on Newton’s head to demonstrate the concept of gravity… it was a process of accretion [growth by gradual addition]”
Even Berners-Lee was a victim of the epiphany myth, as the apple falling on Newton's head didn't happen, and the entire story is problematic as it's usually told.
We love these stories because they support our secret wish that creativity only requires a magic moment. That it’s like a lottery where we just need to be inspired enough, or have the Muses favor us. It feels safer to believe this, but it is dangerous because of how far removed it is from reality. Do you find the excitement of a flash of insight fades quickly? All of our creative heroes experience this too.
“Inspiration is for amateurs — the rest of us just show up and get to work.” – Chuck Close
We love stories of flashes of insight because we love dramatic stories. The notion of an epiphany ties back to religious and spiritual concepts like the Muses, where forces in the universe instantly grant things to people. Even if we don’t literally believe in these forces we love the notion that creativity works through some system, and that all we need is one brilliant moment that can change everything for us.
The smarter way to think of ideas is that a flash of insight is one part of the process. You can take any epiphany story and shift it into giving you useful advice for how to follow in a successful creator's footsteps.
Ask three questions:
What was the person doing before the epiphany? In most cases, they were working in their field trying to solve a problem, or building a project, and the work led them to learn things that increased the odds of making a breakthrough. Creativity is best thought of as a kind of effort.
What did they need to do after the epiphany to bring the idea to the world? There is always significant work after the flash to develop the idea into a prototype, much less a working solution. A brilliant idea for a movie or a business still demands years of effort to realize the idea. An epiphany is rarely the end of the challenge, but typically the beginning of a new one. While epiphanies are common, people willing to commit years of work to see them to fruition are rare.
What can we learn about how to have an epiphany ourselves? Most epiphany stories have no substance. They focus on seemingly ordinary facts, like Archimedes in a bathtub or Newton by a tree, where the discovery is presented as a surprise. Epiphany stories rarely teach us anything to do differently in our own lives as there are no useful patterns or habits suggested in the story.
Even the Newton apple story isn’t true in the way it’s commonly told. Newton certainly wasn’t hit on the head, and it’s unlikely that the singular moment of watching an apple fall from a tree, even if it happened, carried particular significance to a man who made daily observations and ran frequent experiments testing his ideas about the things he saw.
The lesson about creativity from Newton we should learn is his daily habits: he frequently asked questions and ran experiments, constantly trying new approaches and making prototypes to explore his ideas. But that's not nearly as exciting a story to tell as the apple tale, so it's rarely told.
Gordon Gould, a primary inventor of the laser beam, had this to say:
“In the middle of one Saturday night… the whole thing suddenly popped into my head and I saw how to build the laser… but that flash of insight required the 20 years of work I had done in physics and optics to put all of the bricks of that invention in there”
Most legendary stories of flashes of insight are like Gould’s: the inventor rarely obsesses about the epiphany, but everyone else does. Flashes of insight are best understood as our subconscious minds working on our behalf. In professor of psychology Csikszentmihalyi’s book Creativity he defines epiphany as having three parts: early, insight, and after. The insight feels like a flash because until the moment our subconscious mind surfaces an idea, we’re not fully aware that our minds are still working on the problem for us. We get ideas in the shower because it’s a place where it’s easier for our subconscious minds to speak up.
One way to think about the experience of epiphany is that it’s the moment when all of the pieces fall into place. But this does not require that the last piece has any particular significance (the last piece might be the hardest, but it doesn’t have to be). Whichever piece of the puzzle is sorted out last becomes the epiphany piece and brings the satisfying epiphany experience. However, the last piece isn’t necessarily more magical than the others and has no magic without its connection to the other pieces. It feels magical for psychological reasons, fueling the legend and myths about where the insight happened and why it was at that particular moment and not another.
Related:
Read about the other Myths of Innovation
See Creativity Is Not An Accident
Watch a lecture about the Myth of Epiphany (below), or buy the bestselling book
| 2024-11-08T09:59:53 | en | train |
10,825,762 | tangled | 2016-01-02T10:23:25 | Financial Strategies for Grad Students (2013) | null | https://itself.wordpress.com/2013/08/04/financial-strategies-for-grad-students/ | 2 | 0 | null | null | null | no_error | Financial Strategies for Grad Students | 2013-08-04T14:24:41+00:00 | Published by Adam Kotsko
|
When I was in grad school, I faced near-constant financial problems. My income was barely adequate, and the variety of streams it came from meant that my access to the money I’d already earned was often delayed in unpredictable ways. My one advantage was a good credit rating. I had gotten my first credit card as an undergrad, and I used it sparingly and paid it in full nearly every month. After a semester abroad, I was carrying a balance, and I took out a small bank loan to pay it off. So I had drawn on a significant amount of credit and used it responsibly. I understand that not everyone starts from this point, so my strategies may be inapplicable.
My strategy for coping with the difficulties of financial management was based on three simple principles:
Think short-term: Long-term questions like how I was going to pay everything off were moot. The important thing was how I was going to keep meeting my immediate obligations until the next influx of cash came.
Favor liquidity: Given my access to credit, the only hard constraint was the availability of cash (meaning money in my checking account). If given a choice between going further into debt or making a cash payment that would quickly put me at risk of not being able to meet another cash obligation, I always chose going further into debt.
Preserve the credit rating: This meant always paying every bill by whatever means necessary. If I missed a single payment, that could lead to a decline in my credit-worthiness, leading to higher minimum payments and a decline in liquidity that could further endanger my ability to meet my ongoing obligations.
To make this strategy work, I maintained at least three credit cards at all times. My intention was to have one credit card as my “rolling account,” which I would pay off every month. The other two gave me room to bounce money back and forth.
I absolutely refused to ever have a debit card for a variety of reasons. First, if the credit card company was willing to give me a free loan every month for my day-to-day purchases, why not take it? Second, if I did wind up carrying a balance, the consequences were likely to be less expensive than if I overdrew my checking account (fees and penalties were at their pre-crisis peak). Finally, if someone stole my debit card, that gave them access to my actual money — and even if I’d get that back, any serious disruption to my liquidity could have very negative consequences.
Oftentimes, I would not be able to pay the full amount of my “rolling account,” and so I would do a balance transfer. This actually helped my short-term liquidity because the balance transfer satisfied the need to pay that account on that particular month. I always timed my balance transfers to take advantage of the ability to “skip” a payment out of my checking account. Balance transfers do normally carry a fee, but the priority under the emergency circumstances of grad school is not to minimize your debt load, but to maintain your ability to keep rolling over your debt on favorable terms. Making sure to keep rolling over balance transfers with new offers does have the long-term benefit of minimizing your interest payments, but in the short term, it also reduces your minimum payment, hence helping the all-important liquidity. If your card has cash-back rewards, it helps to stockpile these so that you can get a free minimum payment out of it every once in a while.
Informal credit can be helpful, too. Periodically paying for group outings on your card and taking cash can reduce the need for ATM withdrawals for cash-only settings, maximizing the amount of money available in your checking account. Having a roommate with a more stable financial situation can also help if he’s willing to let you delay paying your portion of the rent until that next check comes in (thanks, Mike!). I always avoided taking direct loans from friends and family members, however, because I knew I would never actually pay it back, at least not within a reasonable amount of time. Between the stress of being indebted to an evil bank and the stress of letting my financial situation ruin an important personal relationship, I always went with the former. (Plus my family frankly had no money to give me anyway.)
For this system, it helps to be as anal-retentive as possible. I always paid my minimum payments for my credit cards within a day or two of receiving my statement, just to be safe. I set up as many other bills to charge my credit card automatically as possible. I also kept up the seemingly antiquated discipline of maintaining a written check register, which allowed me to keep better track of where funds had already been committed. People sometimes make fun of me for doing this, but one benefit is that I’ve literally never overdrawn my checking account at any point in my entire life. Given how badly the downward spiral of overdrawing your account can become, that’s huge.
Now that I’ve gotten a job, I’m on pace to pay off my credit card debt over a period equal to how long I was in grad school — meaning that it was essentially “income smoothing” on a very long timeframe. My student loans are excessive, but I can still pay them off within the normal 10-year period without living in abject poverty. And this has all been possible even though my salary at both places I’ve worked has been far below average. I could have worked more in order to take on less debt, but that would have significantly prolonged my time as a grad student — which likely would have hurt my long-term prospects even more than I already have. The amount of money you can make in a year, even for a visiting position, is always going to be more than the amount of debt you will allow yourself to go into.
Of course, all of this only worked out because I got a job. But if I had not, I was prepared to work outside of academia because I viewed adjunct teaching as an absolute rip-off that more often than not tends to hurt people’s long-term job prospects. All through grad school, I did a variety of freelance work in the corporate sector that paid much more, for much less work, than adjunct teaching ever could have. I’ve written about this before, though, so I won’t repeat myself here. Long story short: your overriding priority should be to finish, because that’s when you get the chance at a real meal ticket. I know people worry about their PhD going “stale,” and that’s a real issue — but locking yourself into a low-income trap indefinitely is most likely not the solution.
I don’t know how much my strategy is replicatable without my starting conditions, and I’m sure others have different strategies that may work better. Hence I open the floor to you, my dear readers.
| 2024-11-08T01:52:02 | en | train |
10,826,037 | kevindeasis | 2016-01-02T12:48:34 | Japan's Automated Indoor Farm Can Produce 30,000 Lettuce per Day | null | http://www.digitaltrends.com/cool-tech/japan-automated-factory-lettuce/ | 1 | 0 | null | null | null | no_error | Japan building fully automated indoor lettuce farm with robots | Digital Trends | 2015-08-31T15:30:20-07:00 | By
Kelly Hodgkins
August 31, 2015 |
Japan is building a fully-automated indoor farm capable of producing 30,000 heads of lettuce per day
Japan's Spread Vegetable Factory is working on a novel way of producing high quantities of lettuce using factory automation, reports the Wall Street Journal. Starting next year, the company will begin construction on a large-scale, fully-automated lettuce factory that'll cost up to 2 billion yen to build ($16.5 million USD). According to the Kyoto-based company, its automated process will be able to produce 30,000 heads of lettuce in a single day starting in summer 2017, with a goal of 500,000 heads of lettuce within five years. Except for seeding and germination, which require visual confirmation, most of the growing process is automated, requiring minimal human intervention to take the lettuce to harvest.
Spread already has years of experience growing vegetables at a factory level using an indoor vertical farm system and artificial LED lighting, but this automated plant takes the growing process to a whole new level. The automated process uses stacker cranes that’ll carry the seedlings to robots who will transplant them into their final growing spots. When the plants have reached maturity, they will be harvested and moved to the packaging plant without outside intervention. This entire process is automatically controlled even down to the environmental controls that adjust automatically and work in any climate around the world.
Though the initial investment in machinery may be costly, the complete automation of the cultivation process will improve output by maximizing growing space and reducing labor costs by almost 50%. Spread believes it can recoup its investment by increasing its lettuce output exponentially and lowering the cost of production through this fully automated production.
The company currently produces 20,000 heads of factory lettuce that it sells in 2,000 stores throughout Japan under the brand name "Vege-tus". Spread matches the price of lettuce from local farmers and says its lettuce tastes the same as local heads grown outdoors.
| 2024-11-07T22:30:17 | en | train |
10,826,053 | ColinWright | 2016-01-02T12:55:06 | Danish Researchers: A Better Way to Line Up Than 'First Come, First Served' | null | http://www.theatlantic.com/business/archive/2015/09/lines-efficient-first-come-served/404218/?single_page=true | 15 | 14 | [
10828233,
10828203,
10828195,
10829077,
10828187
] | null | null | no_error | Danish Researchers: There's a Better Way to Line Up Than 'First Come, First Served' | 2015-09-08T16:52:03Z | Aamna Mohdin, Quartz | Their analysis showed wait times would decrease if the latest arrivals were let in first. (Reuters)
George Mikes, a Hungarian-born British author, once wrote "an Englishman, even if he is alone, forms an orderly queue of one." Whether it's at the bank or the grocery store, waiting in line is a staple of British life. What, then, would Brits make of Danish researchers who suggest the age-old discipline of "first come, first served" is a waste of time?
In their study, published as a working paper with the University of Southern Denmark, the researchers describe the "first come, first served" principle as a "curse." For the study, they consider a purely theoretical situation in which people could line up at any time when a facility opens, like boarding an airplane.
The problem with "first come, first served" is it incentivizes people to arrive early, which researchers say results in people waiting for the longest period of time. When this incentive is removed—under a "last come, first served" system—the queues are more efficient. Researchers suggest that under this model, people are forced to change their behaviors and arrive at the queues at a slower rate. When people who arrive last are served first, there is less of a bottleneck and thus less congestion in queues.
In another study, also out of the University of Southern Denmark, researchers looked at three queuing systems: "first come, first served," "last come, first served," and "service-in-random-order." To test out their theory, researchers got 144 volunteers to queue under each system. When participants were told they would be served at random from the queue, the average waiting time decreased. The waiting time decreased even further under the "last come, first served" system. It seemed that most people didn't want to risk turning up early, only to end up being served last.
Yet when researchers measured how fair participants felt each queuing system was, "first come, first served" was seen to be the most fair, while "last come, first served" was seen as the least—so good luck trying to implement this system in real life.
| 2024-11-08T00:12:29 | en | train
10,826,075 | davidbarker | 2016-01-02T13:04:06 | VerbalExpressions: JavaScript Regular Expressions Made Easy | null | https://github.com/VerbalExpressions/JSVerbalExpressions | 2 | 0 | null | null | null | no_error | GitHub - VerbalExpressions/JSVerbalExpressions: JavaScript Regular expressions made easy | null | VerbalExpressions | VerbalExpressions
JavaScript Regular Expressions made easy
VerbalExpressions is a JavaScript library that helps construct difficult regular expressions.
How to get started
In the browser
<script src="VerbalExpressions.js"></script>
Or use the jsDelivr CDN.
On the server (node.js)
Install:
npm install verbal-expressions
Require:
const VerEx = require('verbal-expressions');
Or use ES6's import:
import VerEx from 'verbal-expressions';
Running tests
(or)
Creating a minified version
This will run Babel on VerbalExpressions.js and output the result to dist/verbalexpressions.js. A minified version of the same will also be written to dist/verbalexpressions.min.js.
A source map will also be created in dist, so you can use the original "un-babelified", unminified source file for debugging purposes.
Building the docs/ folder
The docs/ folder uses Jekyll for building the static HTML and is hosted at gh-pages.
To install the Ruby dependencies, run:
This installs all needed Ruby dependencies locally
After you've installed dependencies, you can run:
This builds all static files to docs/_site/ folder.
If you want to develop the files locally, you can run:
This starts a local development web server and starts watching your files for changes.
API documentation
You can find the API documentation at verbalexpressions.github.io/JSVerbalExpressions. You can find the source code for the docs in docs.
Examples
Here are some simple examples to give an idea of how VerbalExpressions works:
Testing if we have a valid URL
// Create an example of how to test for correctly formed URLs
const tester = VerEx()
.startOfLine()
.then('http')
.maybe('s')
.then('://')
.maybe('www.')
.anythingBut(' ')
.endOfLine();
// Create an example URL
const testMe = 'https://www.google.com';
// Use RegExp object's native test() function
if (tester.test(testMe)) {
alert('We have a correct URL'); // This output will fire
} else {
alert('The URL is incorrect');
}
console.log(tester); // Outputs the actual expression used: /^(http)(s)?(\:\/\/)(www\.)?([^\ ]*)$/
Replacing strings
// Create a test string
const replaceMe = 'Replace bird with a duck';
// Create an expression that seeks for word "bird"
const expression = VerEx().find('bird');
// Execute the expression like a normal RegExp object
const result = expression.replace(replaceMe, 'duck');
// Outputs "Replace duck with a duck"
alert(result);
Shorthand for string replace
const result = VerEx().find('red').replace('We have a red house', 'blue');
// Outputs "We have a blue house"
alert(result);
Contributions
Pull requests are warmly welcome!
Clone the repo and fork:
git clone https://github.com/VerbalExpressions/JSVerbalExpressions.git
Style guide
The Airbnb style guide is loosely used as a basis for creating clean and readable JavaScript code. Check .eslintrc.
Check out these slide decks for handy Github & git tips:
Git and Github Secrets
More Git and Github Secrets
Tools
https://verbalregex.com - it's a wrapper of JSVerbalExpressions; users can write down the code and compile to regex
https://jsbin.com/metukuzowi/edit?js,console - JSBin Playground
Other Implementations
You can see an up to date list of all ports on VerbalExpressions.github.io.
Ruby
C#
Python
Java
Groovy
PHP
Haskell
Haxe
C++
Objective-C
Perl
Swift
If you would like to contribute another port (which would be awesome!), please open an issue specifying the language in the VerbalExpressions/implementation repo. Please don't open PRs for other languages against this repo.
Similar projects
Here's a list of other similar projects that implement regular expression builders:
https://github.com/MaxArt2501/re-build
https://github.com/mathiasbynens/regenerate
| 2024-11-08T09:48:32 | en | train |
10,826,138 | sagargv | 2016-01-02T13:35:45 | Advice to Aimless, Excited Programmers (2010) | null | http://prog21.dadgum.com/80.html | 3 | 0 | null | null | null | no_error | Advice to Aimless, Excited Programmers | null | null | I occasionally see messages like this from aimless, excited programmers:
Hey everyone! I just learned Erlang/Haskell/Python, and now I'm looking for a big project to write in it. If you've got ideas, let me know!
or
I love Linux and open source and want to contribute to the community by starting a project. What's an important program that only runs under Windows that you'd love to have a Linux version of?
The wrong-way-aroundness of these requests always puzzles me. The key criterion is a programming language or an operating system or a software license. There's nothing about solving a problem or overall usefulness or any relevant connection between the application and the interests of the original poster. Would you trust a music notation program developed by a non-musician? A Photoshop clone written by someone who has never used Photoshop professionally? But I don't want to dwell on the negative side of this.
Here's my advice to people who make these queries:
Stop and think about all of your personal interests and solve a simple problem related to one of them. For example, I practice guitar by playing along to a drum machine, but I wish I could have human elements added to drum loops, like auto-fills and occasional variations and so on. What would it take to do that? I could start by writing a simple drum sequencing program--one without a GUI--and see how it went. I also take a lot of photographs, and I could use a tagging scheme that isn't tied to a do-everything program like Adobe Lightroom. That's simple enough that I could create a minimal solution in an afternoon.
The two keys: (1) keep it simple, (2) make it something you'd actually use.
Once you've got something working, then build a series of improved versions. Don't create pressure by making a version suitable for public distribution; just take a long look at the existing application, and make it better. Can I build an HTML 5 front end to my photo tagger?
If you keep this up for a couple of iterations, then you'll wind up an expert. An expert in a small, tightly-defined, maybe only relevant to you problem domain, yes, but an expert nonetheless. There's a very interesting side effect to becoming an expert: you can start experimenting with improvements and features that would have previously looked daunting or impossible. And those are the kind of improvements and features that might all of a sudden make your program appealing to a larger audience.
permalink September 23, 2010 | 2024-11-08T11:22:00 | en | train
10,826,235 | aaronchall | 2016-01-02T14:15:24 | Using Python's Method Resolution Order for Dependency Injection – Is This Bad? | null | http://programmers.stackexchange.com/q/306330/102438 | 1 | 0 | null | null | null | no_error | Using Python's Method Resolution Order for Dependency Injection - is this bad? | null | IainIain
|
Using Python's Method Resolution Order for Dependency Injection - is this bad?
No. This is a theoretical intended usage of the C3 linearization algorithm. This goes against your familiar is-a relationships, but some consider composition to be preferred to inheritance. In this case, you composed some has-a relationships. It seems you're on the right track (though Python has a logging module, so the semantics are a bit questionable, but as an academic exercise it's perfectly fine).
I don't think mocking or monkey-patching is a bad thing, but if you can avoid them with this method, good for you - with admittedly more complexity, you have avoided modifying the production class definitions.
Am I doing it wrong?
It looks good. You have overridden a potentially expensive method, without monkey-patching or using a mock patch, which, again, means you haven't even directly modified the production class definitions.
If the intent was to exercise the functionality without actually having credentials in the test, you should probably do something like:
>>> print(MockUser('foo', 'bar').authenticate())
> MockUserService::validate_credentials
True
instead of using your real credentials, and check that the parameters are received correctly, perhaps with assertions (as this is test code, after all.):
def validate_credentials(self, username, password):
    print('> MockUserService::validate_credentials')
    assert username_ok(username), 'username expected to be ok'
    assert password_ok(password), 'password expected to be ok'
    return True
Otherwise, looks like you've figured it out. You can verify the MRO like this:
>>> MockUser.mro()
[<class '__main__.MockUser'>,
<class '__main__.User'>,
<class '__main__.LoggingService'>,
<class '__main__.MockUserService'>,
<class '__main__.UserService'>,
<class 'object'>]
And you can verify that the MockUserService has precedence over the UserService.
| 2024-11-08T13:33:58 | en | train |
10,826,248 | cybice | 2016-01-02T14:20:05 | Babel 6 plugin which allows to use webpack loaders | null | https://github.com/istarkov/babel-plugin-webpack-loaders | 37 | 3 | [
10826629
] | null | null | no_error | GitHub - istarkov/babel-plugin-webpack-loaders: babel 6 plugin which allows to use webpack loaders | null | istarkov |
babel-plugin-webpack-loaders
Important note!!!
Since this plugin was published, there have been a lot of changes in testing software. Be sure that in most (all) cases you DON'T need this plugin for testing. I highly recommend using jest for testing, and using moduleNameMapper (identity-obj-proxy, etc.) to mock CSS Modules and other webpack loaders.
Begin
This Babel 6 plugin allows you to use webpack loaders in Babel.
It's now easy to run universal apps on the server without additional build steps, to create libraries as usual with babel src --out-dir lib command, to run tests without mocking-prebuilding source code.
It just replaces require - import statements with webpack loaders results. Take a look at this Babel build output diff to get the idea.
For now this plugin is of alpha quality and tested on webpack loaders I use in my projects.
These loaders are file-loader, url-loader, css-loader, style-loader, sass-loader, postcss-loader.
The plugin supports all webpack features like loader chaining, webpack plugins, and all loader params. This is easy to support because the plugin just runs webpack itself.
Three examples:
runtime css-modules example with simple webpack config,
run it with npm run example-run
library example with multi loaders-plugins webpack config,
build it with npm run example-build and execute with node build/myCoolLibrary/myCoolLibrary.js, assets and code will be placed at ./build/myCoolLibrary folder.
Here is an output diff of this library example built without and with the plugin.
minimal-example-standalone-repo
Warning
Do not run this plugin as part of a webpack frontend configuration. This plugin is intended only for backend compilation.
How it works
Take a look at this minimal-example
You need to create a webpack config
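A minimal config looks roughly like the following. This is a sketch consistent with the css-modules output shown below rather than the repo's exact file; in particular, output.libraryTarget: 'commonjs2' and the localIdentName pattern are assumptions based on how the plugin evaluates webpack's output:

// webpack.config.js (minimal sketch)
module.exports = {
  output: {
    // emit a commonjs module so the plugin can evaluate the compiled result
    libraryTarget: 'commonjs2',
  },
  module: {
    loaders: [
      {
        test: /\.css$/,
        loader: 'css-loader?modules&localIdentName=[name]__[local]--[hash:base64:5]',
      },
    ],
  },
};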
You need to add these lines to .babelrc
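Roughly the following, mirroring the AVA example further down; the env name matches the NODE_ENV=EXAMPLE used in the run command below, and the config path is assumed to point at the file above:

{
  "presets": ["es2015"],
  "env": {
    "EXAMPLE": {
      "plugins": [
        [
          "babel-plugin-webpack-loaders",
          {
            "config": "./webpack.config.js",
            "verbose": false
          }
        ]
      ]
    }
  }
}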
Now you can run example.js
// example.js
import css from './example.css';
console.log('css-modules result:', css);
with the command BABEL_DISABLE_CACHE=1 NODE_ENV=EXAMPLE ./node_modules/.bin/babel-node ./example.js and you'll get the following console output:
css-modules result: { main: 'example__main--zYOjd', item: 'example__item--W9XoN' }
Here I placed output diff
of this babel library build without and with the plugin.
As you can see, the plugin just replaces require calls with the loader results. All loaders and plugins have been applied to the generated assets.
Install
npm install --save-dev babel-cli babel-plugin-webpack-loaders
Examples
webpack configs,
examples,
.babelrc example,
tests,
minimal-example-repo
You can try out the examples by cloning this repo and running the following commands:
npm install
# example above
npm run example-run
# library example - build library with a lot of modules
npm run example-build
# and now you can use your library using just node
node build/myCoolLibrary/myCoolLibrary.js
# test sources are also good examples
npm run test
Why
The source of inspiration for this plugin was babel-plugin-css-modules-transform, but it was missing some features I wanted:
I love writing CSS using Sass
I like webpack and its loaders (chaining, plugins, settings)
I wanted to open source a UI library which heavily used CSS Modules, Sass and other webpack loaders.
The library consisted of many small modules, and every module needed to be available to users independently, such as lodash/blabla/blublu.
With this plugin the heavy build file for the library could be replaced with just one command: babel src --out-dir lib.
How the plugin works internally
The plugin tests all require paths with test regexps from the loaders in the webpack config, and then for each successful test:
synchronously executes webpack
parses the webpack output using babel-parse
replaces the require'd AST with the parsed AST output (illustrated below)
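Conceptually, that turns the example.css import from the example above into an inlined object. This is a simplified illustration of the idea, not the plugin's literal output (see the linked build output diff for the real thing):

// before (source)
import css from './example.css';

// after (build output, simplified)
var css = { main: 'example__main--zYOjd', item: 'example__item--W9XoN' };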
Caching issues
By default Babel caches compiled files, so any changes in files processed with loaders will not be visible in subsequent builds;
you need to run all commands with a BABEL_DISABLE_CACHE=1 prefix.
(More information: #T1186, #36)
Dynamic config path
It's possible to interpolate env vars into the webpack config path defined in your .babelrc using lodash.template syntax. This is mainly to achieve compatibility with ava.
The ava test runner runs each spec relative to its enclosing folder in a new process, which hampers this plugin's ability to use a relative path for the webpack config. An absolute path to the webpack config will work, however, and you can set one in your .babelrc using an env var like this:
{
  "presets": ["es2015"],
  "env": {
    "AVA": {
      "plugins": [
        [
          "babel-plugin-webpack-loaders",
          {
            "config": "${CONFIG}",
            "verbose": false
          }
        ]
      ]
    }
  }
}
And then invoke ava something like this,
CONFIG=$(pwd)/webpack.config.ava.js BABEL_DISABLE_CACHE=1 NODE_ENV=AVA ava --require babel-register src/**/*test.js
(More information: #41)
Thanks to
Felix Kling and his astexplorer
James Kyle and his babel-plugin-handbook
Michal Kvasničák and his babel-plugin-css-modules-transform
| 2024-11-08T03:46:05 | en | train |
10,826,276 | Garbage | 2016-01-02T14:34:26 | Tiobe Index: December Headline: Java's popularity is going through the roof | null | http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html | 4 | 2 | [
10828888,
10826834
] | null | null | no_error | Home - TIOBE | null | null |
When software quality is more than a code checker
Software quality does not arise from simply running a code checker: it requires strategy, oversight, and the right tools. TIOBE serves as a partner for organizations that want to be completely unburdened in the software quality domain. We adapt our solutions to your organization, tools and specific technologies.
We focus on software quality.
Quality models are based on ISO/IEC 25010.
Fully automated software quality framework TiCS.
Supports all levels in your organization, from bit to board.
Objective measurement of software quality.
Checking more than 1 billion lines of code per day.
TIOBE Quality Models
The proverb “the proof of the pudding is in the eating” applies perfectly to software quality: only after a software product has been shipped does its true quality reveal itself.
For this purpose, TIOBE has developed quality models to enable an objective, reproducible and independent statement of the quality of your software. Furthermore, we work with world-renowned certification authority TÜViT to further develop our models and provide certification.
Learn more about our TQI Quality Model
Software quality framework TiCS
With our software quality framework TiCS we support software quality improvements from development to board level: from bit to board. TiCS provides a clear overview using multiple reporting and dashboard tools for every layer and team in your organization.
We provide a smooth integration in your organization’s development process, and tailor our solution to fit your organization if required.
Learn more about our TiCS Framework
Get your own proof of concept
Curious how your projects will be ranked by our TiCS Framework? Request your proof of concept now.
Trusted by our customers
We are proud to support the following companies with our software quality solutions.
Since code quality is quite a broad term, lots of measurements should be taken to determine the code quality of a piece of code.
Possible metrics to be applied are unit test code coverage, the number of compiler warnings, cyclomatic complexity, etc. TIOBE offers a predefined set of 8 software metrics to get a good indication of code quality. See our TIOBE Quality Indicator Quality Model for more details.
This is quite a difficult question because there are all kinds of quality aspects to software code. These aspects are nicely defined in the ISO/IEC 25010 standard. They range from reliability (no bugs) to maintainability (is my code comprehensible).
Two easy ways to start with code quality are to start with manual code reviews and to use the compiler warnings as your “free” code checker. You can even use “treat warnings as errors” in your build environment to ensure no compiler warnings are accepted any more.
Manual code reviews appear to be the most effective way of improving code quality. Human beings are very good at spotting potential problems and issues, which are of course a very good indicator of whether code is maintainable.
Possible drawbacks of manual code reviews are that they are subjective (depending on the reviewer) and time consuming. Moreover, deep flow analysis issues are hard for human beings to detect. If manual code reviews are backed up by automated code checkers, you get a more complete picture that compensates for these drawbacks.
The costs depend on your application and your application domain and might vary a lot. If you are developing software for an aircraft, the costs of a software defect might be huge, whereas if you are developing software for a pet project the possible costs of a defect might be close to zero.
Costs might consist of claims because of liability, but most of the costs are indirect and hard to measure. Think about the loss of reputation if a software bug in your system shows up in the news. Or, at a smaller scale, if your software is not maintainable, lots of extra costs are needed to add a new feature.
That depends on your situation. If you are new to the field of software quality, an assessment is a good way to start. The reason for this is that it is just a one-time measurement including a lot of explanation in a report and presentation. Continuous monitoring will result in real-time quality data and helps in case you want to make sure your software quality is always meeting the requirements.
It begins with the realization that each stakeholder in the organization has different need for a view on the same data.
With TiCS you have the best of both worlds: Using the same data source (one point of truth), each stakeholder can receive their own specific view:
High level global overview for fast and reliable decision making, supported with quality labels.
Cross sectional overviews to determine risks towards design changes to be made.
Detailed violation overviews to fully understand why certain metrics score weak, and with that the tools to improve.
Check out the TiCS Framework section for a complete understanding how TiCS can work for your organization.
| 2024-11-07T23:46:05 | en | train |
10,826,295 | NavyDish | 2016-01-02T14:41:59 | Each self-driving car is expected to generate 2 petabytes of data ever year | null | https://blog.socialcops.com/best-practices/driven-data-new-force-automotive-industry | 2 | 0 | null | null | null | no_error | Driven by Data: The New Force in the Automotive Industry | SocialCops | 2015-12-31T08:33:00+05:30 | Gaurav Jha | The automotive industry has long been seen as a laggard in terms of technological progress. Compliance with safety regulations and the capital intensive nature of R&D and production means that innovation cannot be as rapid as in the IT industry, for instance. However, the changes that make it through are rock solid, so cars have steadily become more refined and reliable. The average age of a car on American roads today is over 11 years — the highest it has ever been.Today, though, we are on the cusp of the biggest revolution in the automotive industry since the time Ford created the Model T.Over the past few years, with the push towards green technology, smart driving aids, and even autonomous mobility, cars are on the way to becoming powerful data-intensive machines like your smartphone.With each self-driving car expected to generate 2 petabytes of data every year, every aspect of how we own, drive, and service our vehicles will undergo a massive transformation through data.Canvassing the Automotive IndustryMost customers begin their search for their new vehicle online. The search patterns generated by their queries can help manufacturers gain valuable insights into what people are looking for. This lets manufacturers formulate their strategic and tactical moves accordingly, rather than using a trial-and-error approach that could be very costly. With a ton of data at manufacturers’ disposal today, market research will never be the same.For product development decisions where trade-offs are involved, manufacturers in the automotive industry sometimes monitor social media chatter to get a sense of which options might be viewed more favourably and direct their efforts accordingly. This helps the manufacturers design their vehicles to suit the target segment, right from the ground up. Features that have been introduced using this approach have almost always received a tremendous response. After all, everybody likes to know that they’re being heard.Optimizing Sales and ServicingLarge automobile companies with a vast network of dealerships have been using location analytics to optimize their presence. Cars today come with a plethora of sensors that capture a great deal of information about how the vehicles have been used. This also helps the automotive industry tailor its marketing and distribution strategy to put the right type and number of units at the right places. Understanding existing and potential customers also enables these companies to invest more precisely in advertisements.The impact of data collection and processing has been even more significant on the after-sales servicing ecosystem. Manufacturers and dealerships are now able to pre-empt the requirements and manage their inventory and supply chain accordingly, rather than waiting for their customers to report a failure. Some of the high end cars, for instance, have sensors fitted to key components to check for performance parameters. By keeping tab on the health of the parts and making projections based on the historical usage, the sensor sends an alert to the manufacturer directly when a component is nearing failure and needs replacement. 
The part is manufactured on call, dispatched to the dealership, and the customer receives a phone call to come in for their replacement even before they know that something could have gone wrong. With Internet of Things slated to come up in a big way, we are going to see more of this magic in the years ahead.Driving Aids for Better Efficiency and PerformanceKeeping aside the unfortunate cheating device fiasco at the Volkswagen Group – an incident that is more of an exception than a general trend — software trickery has been an increasingly important part of the automotive industry. Clever use of software has made the cars much better than what pure mechanics could have achieved.With a plethora of sensors that gather data from every inch of the vehicle, GPS connectivity, and the processing capability to rival the best of computers, the cars of today can make even the most modest of us feel like a driving God.They make life much easier by correcting our mistakes seamlessly, keeping us in the lane on expressways, and taking care of the monotonous, distasteful aspects of driving in traffic while we enjoy the comforts of the in-car entertainment systems.The software support isn’t just limited to anticipating trouble and keeping you safe. The new Ferrari 488 has a feature that lets you drift better, the Jaguar XF has an artificially enhanced soundtrack to give the engine a mesmerizing roar worthy of a Jaguar, and the Rolls Royce always seems to know what gear to take on the road ahead even before the car gets there, thanks to its “satellite aided transmission.” These are exciting times for car enthusiasts.Self-Driving VehiclesSince vehicles have already developed the ability to capture and process a vast amount of data and make logical inferences in real time. The natural progression in the automotive industry from here is complete autonomy.Tesla has led the way so far with its Autopilot software update that enables the car to steer within a lane, change lanes with the tap of an indicator, and maintain traffic-aware cruise control. It also takes control of the brakes to help avoid potential front- and side-on collisions. Equipped with 12 long-range ultrasonic sensors that scan an area of 16 feet around the car, a forward-looking camera, electronic brakes, and a forward radar, the system currently works best in dense, slow-moving traffic. Drivers are still encouraged to have their hands on the wheel.The entire Tesla fleet will be feeding the data from autonomous driving to a central server to process the information and use it to become better over time.Elon Musk says that within three years we could have a fully autonomous Tesla on the roads.Apart from Tesla, Apple, Google, and Faraday Future, the mainstream marques like Mercedes, Audi, and BMW have also trialled self-driving technology recently. Mercedes, with its F015 concept, envisions a future where cars will be like mobile homes — making the daily commute through crowded cities a pleasant activity where passengers can have the freedom to use their time constructively for work or leisure.That said, there are some very big questions that still hover over these new developments in autonomous driving. Who is responsible if an automated car crashes? Can a vehicle ever really make ethical choices if an accident is unavoidable? Will the legislators ever get their act together to create the necessary legal framework? These are difficult questions and answers will need to be worked out before self-driving goes mainstream in the automotive industry. 
When that happens, the vast amount of data that these cars generate will also need a long, hard look.Data Security and PrivacyMany cars on the roads today include wireless technology that could be vulnerable to hacking or privacy intrusions. The cars collect and transmit tons of data back to the manufacturers who use it for improving customer experience. GPS location, seatbelt usage, navigation-related data, and a whole host of information on other driving components — that’s a lot of very personal information that could be misused if it falls into the wrong hands. While some basic security measures have been implemented, the fact remains that transmitting data always poses a risk.According to a report by Senator Edward J. Markey titled ‘Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk’, most automobile manufacturers could not provide a satisfactory answer as to how they secure this data during transmission. “Drivers have come to rely on these new technologies, but unfortunately the automakers haven’t done their part to protect us from cyber-attacks or privacy invasions. Even as we are more connected than ever in our cars and trucks, our technology systems and data security remain largely unprotected.”Senator Markey posed his questions after studies showed how hackers can get into the controls of some popular vehicles, causing them to suddenly accelerate, turn, kill the brakes, activate the horn, control the headlights, and modify the speedometer and gas gauge readings. Additional concerns came from the rise of navigation and other features that record and send location or driving history information. One of the reasons why data security in the automotive industry isn’t as robust as we would like it to be is that automakers’ security measures are purely voluntary at this point. Regulations haven’t caught up with the possible risks that such technologies can pose.Data can do wonders for the automotive industry. However, as cars become an increasing contributor to our data footprint, a lot of thought must go into securing this data and using it responsibly. | 2024-11-08T11:53:34 | en | train |
10,826,317 | aurhum | 2016-01-02T14:49:25 | Eurocopter X³ | null | https://en.wikipedia.org/wiki/Eurocopter_X3 | 5 | 0 | null | null | null | no_error | Eurocopter X³ | 2010-09-28T03:09:27Z | Contributors to Wikimedia projects |
X³
Airbus Helicopters X³ in flight
Role: Experimental compound helicopter
National origin: Multinational
Manufacturer: Eurocopter / Airbus Helicopters
First flight: 6 September 2010
Status: Retired
Number built: 1
Developed from: Eurocopter AS365 Dauphin, Eurocopter EC155
The Eurocopter X³ (X-Cubed) is a retired experimental high-speed compound helicopter developed by Airbus Helicopters (formerly Eurocopter). A technology demonstration platform for "high-speed, long-range hybrid helicopter" or H³ concept,[1] the X³ achieved 255 knots (472 km/h; 293 mph) in level flight on 7 June 2013, setting an unofficial helicopter speed record.[2][3] In June 2014, it was placed in a French air museum in the village of Saint-Victoret.
Design and development
Eurocopter X³
The X³ demonstrator is based on the Eurocopter AS365 Dauphin[1] helicopter, with the addition of short span wings each fitted with a tractor propeller, having a different pitch to counter the torque effect of the main rotor.[1][4][5] Conventional helicopters use tail rotors to counter the torque effect.[6] The tractor propellers are gear driven from the two main turboshaft engines which also drive the five-bladed main rotor system, taken from a Eurocopter EC155.[1][5]
Test pilots describe the X³ flight as smooth,[5][7] but the X³ does not have passive or active anti-vibration systems and can fly without stability augmentation systems,[1][8] unlike the Sikorsky X2.[9] The helicopter is designed to prove the concept of a high-speed helicopter which depends on slowing the rotor speed[5] (by 15%)[1] to avoid drag from the advancing blade tip, and to avoid retreating blade stall by unloading the rotor while a small wing[10][11][12] provides 40–80% lift instead.[1][5][13][14]
The X³ can hover with a pitch attitude between minus 10 and plus 15 degrees.[15] Its bank range is 40 degrees in hover, and is capable of flying at bank angles of 120 to 140 degrees.[16][17] During testing the aircraft demonstrated a rate of climb of 5,500 feet per minute and high-G turn rates of 2Gs at 210 knots.[18][19]
flying at the 2012 ILA Berlin Air Show
The X³ first flew on 6 September 2010 from French Direction générale de l'armement facility at Istres-Le Tubé Air Base.[citation needed]
On 12 May 2011 the X³ demonstrated a speed of 232 knots (267 mph; 430 km/h) while using less than 80 percent of available power.[8][20][21][22][23]
In May 2012, it was announced that the Eurocopter X³ development team had received the American Helicopter Society's Howard Hughes Award for 2012.[24]
Eurocopter demonstrated the X³ in the United States during the summer of 2012, the aircraft logging 55 flight hours, with a number of commercial and military operators being given the opportunity to fly the aircraft.[25]
With an aerodynamic fairing installed on the rotor head,[26] the X³ demonstrated a speed of 255 knots (293 mph; 472 km/h) in level flight and 263 knots (303 mph; 487 km/h) in a shallow dive on 7 June 2013,[27][28] beating the Sikorsky X2's unofficial record set in September 2010, and thus becoming the world's fastest non-jet augmented compound helicopter.
Eurocopter suggested that a production H³ application could appear as soon as 2020.[25] The company had also previously expressed an interest in offering an H³ technology based solution for the United States' Future Vertical Lift program, with EADS North America submitting bid to build a technology demonstrator under the US Army's Joint Multi Role (JMR) program,[29][30] but later withdrew due to cost[31] and because Eurocopter might have to transfer X³ intellectual property to the US,[32] and Eurocopter chose to focus on the Armed Aerial Scout instead.[33][34] Ultimately the company was not downselected for the JMR effort,[35] and the AAS program was cancelled.[36]
Eurocopter saw the offshore oil market[31] and Search and rescue community as potential customers for X³ technology. An X³-based unpressurised compound helicopter called LifeRCraft is also among the projects planned under the European Union's €4 billion ($5.44 billion) Clean Sky 2 research program as one of two high-speed rotorcraft flight demonstrators.[26][37][38][39] Airbus began development of the hybrid composite helicopter with a 4.6-litre V-8 piston engine[40] in 2014,[41] froze the design in 2016 to start building in 2017,[40] and had plans to fly it in 2019.[42]
The X³ was moved to Musée de l’air et de l’espace in 2014 for public display.[43]
RACER model at Paris Air Show 2017
The Airbus RACER (Rapid And Cost-Effective Rotorcraft) is a development revealed at the June 2017 Paris air show; final assembly was planned to start in 2019 for a 2020 first flight.
Cruising up to 400 km/h (216 kn), it aims for a 25% cost reduction per distance over a conventional helicopter.
Eurocopter X³ at ILA Berlin Air Show 2012
Data from FlightGlobal[44]
General characteristics
Crew: 2
Gross weight: 5,200 kg (11,464 lb) [43]
Powerplant: 2 × Rolls-Royce Turbomeca RTM322-01/9a[45] turboshaft engines, 1,693 kW (2,270 hp) each
Main rotor diameter: 12.6 m (41 ft 4 in)
Main rotor area: 124.7 m2 (1,342 sq ft)
Propellers: 5-bladed (two tractor propellers gear driven from main engines).
Main rotor: five-bladed from the Eurocopter EC155[1][46]
Performance
Maximum speed: 472 km/h (293 mph, 255 kn) at roughly 10,000 ft (3,048 m)[28]
Cruise speed: 407 km/h (253 mph, 220 kn) [20]
Service ceiling: 3,810 m (12,500 ft)
Rate of climb: 28 m/s (5,500 ft/min) [47][48][49]
Tip speed: 0.91 Mach[5]
Autorotation: 2,800 f.p.m[5]
Fairey Rotodyne
Related development
Eurocopter AS365 Dauphin
Eurocopter EC155
Aircraft of comparable role, configuration, and era
Kamov Ka-92
Mil Mi-X1
Piasecki X-49
Related lists
List of rotorcraft
^ a b c d e f g h Nelms, Douglas (9 July 2012), "Aviation Week Flies Eurocopter's X³", Aviation Week & Space Technology, archived from the original on 10 May 2014, retrieved 10 May 2014.
^ "Eurocopter's X³ hybrid helicopter makes aviation history in achieving a speed milestone of 255 knots during level flight", Helicopters (press release), Airbus, archived from the original on 27 May 2014, retrieved 26 May 2014.
^ Meet the World's Fastest Helicopter: The 293-Mph X³ (video), Bloomberg, 20 June 2013.
^ "An Update on the X3: A Conversation with Hervé Jammayrac" Second Line of Defense, 2011. Retrieved 5 September 2014.
^ a b c d e f g Erdos, Robert. "Flying the Future Archived 5 September 2014 at the Wayback Machine" Vertical (Magazine), 10 August 2012. Retrieved 5 September 2014.
^ "Eurocopter X³ (X Cubed) Experimental Compound Helicopter". Military Factory. 17 July 2011. Archived from the original on 13 April 2014..
^ Singing the X³'s praises (video) (press release), Airbus, 26 June 2012, archived from the original on 21 December 2021, retrieved 10 May 2014.
^ a b "The Eurocopter X³ hybrid helicopter exceeds its speed challenge: 232 knots (430 km/h) attained in level, stabilized flight", Eurocopter, Airbus helicopters, 16 May 2011, archived from the original on 12 May 2014, retrieved 10 May 2014.
^ Goodier, Rob (20 September 2010). "Inside Sikorsky's Speed-Record-Breaking Helicopter Technology". Popular Mechanics. Archived from the original on 17 October 2013. Retrieved 10 May 2014..
^ "The X³ concept", Helicopters, Airbus, archived from the original on 12 May 2014, retrieved 9 May 2014.
^ Video 1, 2 (Google You tube) (video), Airbus Helicopters, 26 September 2010, 2 m 50 s, archived from the original on 21 December 2021, retrieved 9 May 2014.
^ Stephens, Ernie (1 August 2012), "Pilot Report: The Exciting, Experimental, Exceptional X³", Rotor & Wing, archived from the original on 15 September 2012, retrieved 10 May 2014.
^ Eshel, Noam (6 September 2010), "Eurocopter Tests the X-Cubed, a New High-speed Hybrid Helicopter", Defense Update, archived from the original on 24 October 2013, retrieved 10 May 2014.
^ Norris, Guy (28 February 2012), "Eurocopter X-3 Targets US Market", Aviation Week, retrieved 1 March 2012[permanent dead link]. Mirror Archived 13 April 2014 at the Wayback Machine.
^ Padfield, R Randall (3 August 2013), "Eurocopter X³ 'Flies Intuitively,' Say Test Pilots", AIN online, archived from the original on 6 August 2013, retrieved 10 May 2014.
^ Parsons, Dan, "Eurocopter's X³ Shows Old Designs Could Be The Future of Army Aviation", National defense magazine.
^ Gubisch, Michael, Eurocopter's X³ restricted by US regulations, Farnborough: Flightglobal.
^ "Eurocopter's Revolutionary X³ Helicopter Continues Military Leg of Its US Tour". Reuters. 13 July 2012. Archived from the original on 14 July 2014..
^ "Eurocopter X³ Approaches the Sunset of its Brief Life", Aviation today, archived from the original on 14 July 2014, retrieved 26 May 2014.
^ a b "Flight testing of Eurocopter's X³ high-speed hybrid helicopter demonstrator marks a new milestone in the company's innovation roadmap". Eurocopter. 27 September 2010. Archived from the original on 29 September 2010. Retrieved 28 September 2010.
^ Eurocopters Hybridhubschrauber X³ übertrifft sein angestrebtes Geschwindigkeitsziel: 232 Knoten (430 km/h) bei stabilem Horizontalflug [Eurocopter's hybrid helicopter X³ surpasses its attempted speed target: 232 knots (430 km/h) with a stable horizontal flight] (in German), Presse Box, 16 May 2011, archived from the original on 11 September 2012, retrieved 7 June 2011
^ "L'Hélicoptère de démonstration X³ atteint les 430 km/h !" [The demonstration helicopter X³ reaches 430 km/h!]. Avia News (in French). 24 Heures. Archived from the original on 24 March 2012. Retrieved 17 May 2011.
^ "Le X³ d'Eurocopter a volé à 430 km/h" [Eurocopter's X³ flew at 430 km/h]. Zone Militaire (in French). Opex 360. 17 May 2011.
^ Eurocopter's X Development Team wins Howard Hughes Award for Outstanding Improvement in Helicopter Technology (press release), Airbus, 3 May 2012, archived from the original on 28 May 2014, retrieved 26 May 2014.
^ a b Norris, Guy (14 February 2012). "Eurocopter Outlines Plans For X4 Program". Aviation Week. Retrieved 24 March 2012.[permanent dead link]
^ a b Osborne, Tony. "Eurocopter Ponders X³ Helicopter’s Next Steps" Aviation Week & Space Technology, 17 June 2013. Retrieved 17 June 2014. Archived 13 May 2014 at the Wayback Machine on 13 May 2014.
^ Thivent, Viviane (11 June 2013), "Le X³, un hélico à 472 km/h" [The X³, a helicopter at 472 km/h], Le Monde, retrieved 10 May 2014.
^ a b Paur, Jason. "X³ Helicopter Sets Speed Record at Nearly 300 MPH" Wired, 11 June 2013. Retrieved 17 June 2014. Archived on 31 March 2014
^ Warwick, Graham (30 July 2012), "Eurocopter's X3 – Would You Go to War in One?", Aviation Week & Space Technology, archived from the original on 10 May 2014, retrieved 10 May 2014.
^ Warwick, Graham (11 March 2013), "EADS (ie Eurocopter) Bids for Army's JMR", Aviation Week & Space Technology, archived from the original on 13 May 2014, retrieved 17 June 2014.
^ a b Majumdar, Dave. "Cost drove EADS from US Army rotorcraft demonstration" 13 June 2013. Retrieved 17 June 2014. Archived 12 May 2014 at the Wayback Machine on 12 May 2014
^ "Intellectual Property Concerns Swayed EADS JMR Pullout", Aviation Week & Space Technology, 24 June 2013, retrieved 17 June 2014, Guillaume Faury said the company made the 'strategic decision' because it was concerned that it would have to transfer the intellectual property rights of the company's self-developed X³ technology to the US.
^ Warwick, Graham. "EADS Withdraws JMR Bid To Focus On AAS" Aviation Week & Space Technology, 4 June 2013. Retrieved 17 June 2014. Archived 17 June 2014 at the Wayback Machine
^ "EADS Quits Helo Competition To Pursue Uncertain AAS" Aviation Week & Space Technology, 10 June 2013. Retrieved 17 June 2014. Archived 17 June 2014 at the Wayback Machine
^ US Army selects Bell, Sikorsky/Boeing team for JMR demonstration, Flightglobal.
^ McLeary, Paul. "Outgoing General: US Army Must Continue To Fund Research and Development" DefenseNews, 14 January 2014. Retrieved 17 June 2014.
^ A Preliminary Programme Outline For Clean Sky 2 (PDF), EU: Clean sky, July 2012. Size 2 MB.
^ Dubois, Thierry (3 August 2014). "European Commission, Industry Launch Clean Sky 2". Aviation International News. Retrieved 6 September 2014.
^ "8.7 Compound Rotorcraft Demonstration (LifeRCraft) – WP2" pages 302–375. Size: 747 pages, 23 MB. Clean Sky 2, 27 June 2014. Retrieved 7 October 2014.
^ a b Nathan, Stuart (13 January 2017). "Rethinking rotorcraft: Airbus aims for speedy helicopter". The Engineer. Retrieved 14 January 2017. design was frozen last summer ahead of its construction phase starting this year. Clean Sky 2 rotorcraft is classified as a hybrid helicopter.
^ Sailer, J. "Airbus Helicopters to design new compound rotorcraft demonstrator in the frame of Clean Sky 2 program Archived 26 July 2014 at the Wayback Machine" Airbus PR, 16 July 2014. Retrieved 22 July 2014.
^ Dubois, Thierry (22 July 2014). "Airbus Helicopters Plans Follow-on to X3". Aviation International News. Retrieved 6 September 2014.
^ a b Airbus Helicopters X³ makes its new home at France's national Air and Space museum (press release), Airbus, 19 June 2014, archived from the original on 19 June 2014, retrieved 19 June 2014.
^ Croft, John (23 February 2009). "Heli-expo 2009: Rolls-Royce confirms role in Eurocopter X³ programme". Flightglobal. Retrieved 28 September 2010.
^ "The RTM322 shared a speed record with X³", Le Bourget, Safran, archived from the original on 17 October 2014, retrieved 26 May 2014.
^ "Airbus Helicopters Dauphin EC155 Characteristics", Specifications, Airbus Helicopters, archived from the original on 14 July 2014, retrieved 17 June 2014.
^ "Eurocopter's revolutionary X³ helicopter begins military leg of its US tour", Helicopters, Airbus, archived from the original on 1 February 2014, retrieved 27 January 2014.
^ Dubois, Thierry (August 2011), "Eurocopter Launches Dauphin Replacement; Preps for X³", Aviation Today, archived from the original on 13 May 2016, retrieved 31 March 2012.
^ Dubois, Thierry; Huber, Mark (February 2012), "New Rotorcraft 2012" (PDF), Aviation International, archived from the original (PDF) on 9 June 2016, retrieved 31 March 2012.
X3, Eurocopter, archived from the original on 30 September 2010.
Pictures: Eurocopter unveils high-speed hybrid helicopter, Flightglobal, 27 September 2010.
Eurocopter unveils new-look helicopter, Reuters, 27 September 2010.
Eurocopter High-speed, long-range Hybrid Helicopter H3 Demonstrator Makes First Flight, Deagel.
Video X³
Video X³, Cockpit
Making of
Ferrier, Jean-Jacques, Hybrid helicopter (patent), Eurocopter, US 8070089 B2.
| 2024-11-08T11:45:46 | en | train |
10,826,448 | antirez | 2016-01-02T15:34:28 | Disque 1.0 RC1 is out | null | http://antirez.com/news/100 | 275 | 51 | [
10828038,
10827204,
10828247,
10829328,
10831351,
10826983,
10827427,
10829349,
10827826,
10828377,
10827224,
10828465
] | null | null | no_error | Disque 1.0 RC1 is out! | null | null |
Today I’m happy to announce that the first release candidate for Disque 1.0 is available.
If you don't know what Disque is, the best starting point is to read the README in the Github project page at http://github.com/antirez/disque.
Disque is a just piece of software, so it has a material value which can be zero or more, depending on its ability to make useful things for people using it. But for me there is an huge value that goes over what Disque, materially, is. It is the value of designing and doing something you care about. It’s the magic of programming: where there was nothing, now there is something that works, that other people may potentially analyze, run, use.
Distributed systems are a beautiful field. Thanks to Redis, and to the people that tried to mentor me in a way or the other, I got exposed to distributed systems. I wanted to translate this love to something tangible. A new, small system, designed from scratch, without prejudices and without looking too closely to what other similar systems were doing. The experience with Redis shown me that message brokers were a very interesting topic, and that in some way, they are the perfect topic to apply DS concepts. I even pretend message brokers can be fun and exciting. So I tried to design a new message queue, and Disque is the result.
Disque design goal is to provide a system with a good user experience: to provide certain guarantees in the context of messaging, guarantees which are easy to reason about, and to provide extreme operational simplicity. The RC1 offers the foundation, but there is more work to do. For once I hope that Disque will be tested by Aphyr with Jepsen in depth. Since Disque is a system that provides certain kinds of guarantees that can be tested, if it fails certain tests, this translates directly to some bug to fix, that means to end with a better system.
On the operational side there is to test it in the real world. AP and message queues IMHO are a perfect match to provide operational robustness. However I’m not living into the illusion that I got everything right in the first release, so it will take months (or years?) of iteration to *really* reach the operational simplicity I’m targeting. Moreover this is an RC1 that was heavily modified in the latest weeks, I expect it to have a non trivial amount of bugs.
From the point of view of making a fun and exciting system, I tried to end with a simple and small API that does not force the user to think at the details of *this specific* implementation, but more generally at the messaging problem she or he got. Disque also has a set of introspection capabilities that should help making it a non-opaque system that is actually possible to debug and observe.
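To give a flavor of that API, here is a minimal sketch of the core commands described in the project README (the job ID is a placeholder, not real output):

ADDJOB myqueue "hello world" 0
GETJOB FROM myqueue
ACKJOB <job-id>

ADDJOB enqueues a job and returns its ID, GETJOB fetches a job from one or more queues, and ACKJOB acknowledges it so it will not be delivered again.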
Even with all the limits of new code and ideas, the RC release is a great first step, and I’m glad Disque is not in the list of side projects that we programmers start and never complete.
I was not alone during the past months, while hacking with Disque and trying to figure out how to shape it, I received the help of: He Sun, Damian Janowski, Josiah Carlson, Michel Martens, Jacques Chester, Kyle Kingsbury, Mark Paluch, Philipp Krenn, Justin Case, Nathan Fritz, Marcos Nils, Jasper Louis Andersen, Vojtech Vitek, Renato C., Sebastian Waisbrot, Redis Labs and Pivotal, and probably more people I’m not remembering right now. Thank you for your help.
The RC1 is tagged in the Disque Github repository. Have fun!
| 2024-11-07T08:30:43 | en | train |
10,826,495 | dpflan | 2016-01-02T15:47:53 | Hive Consciousness: What Does Brain-To-Brain Communication Mean for Humanity? | null | https://aeon.co/essays/do-we-really-want-to-fuse-our-brains-together | 3 | 0 | null | null | null | no_error | Do we really want to fuse our brains together? | Aeon Essays | 2015-05-27 | Peter Watts | You already know that we can run machines with our brainwaves. That’s been old news for almost a decade, ever since the first monkey fed himself using a robot arm and the power of positive thinking. Nowadays, even reports of human neuroprostheses barely raise an eyebrow. Brain-computer interfaces have become commonplace in everything from prosthetic vision to video games (a lot of video games; Emotiv and NeuroSky are perhaps the best-known purveyors of Mind Control to the gaming crowd) to novelty cat ears that perk up on your head when you get horny.
But we’ve moved beyond merely thinking orders at machinery. Now we’re using that machinery to wire living brains together. Last year, a team of European neuroscientists headed by Carles Grau of the University of Barcelona reported a kind of – let’s call it mail-order telepathy – in which the recorded brainwaves of someone thinking a salutation in India were emailed, decoded and implanted into the brains of recipients in Spain and France (where they were perceived as flashes of light).
You might also remember breathless reports of a hive mind emerging from the depths of Duke University in North Carolina during the winter of 2013. Miguel Pais-Vieira and his colleagues had wired together the brains of two rats. Present a stimulus to one, and the other would press a lever. The headlines evoked images of one mind reaching into another, commandeering its motor systems in a fit of Alien Paw Syndrome.
Of course, the press goes overboard sometimes. Once you look past those headlines you notice that Reaction Rat had been pre-trained to press his lever whenever he felt a particular itch in his motor cortex (in exactly the same way you’d train him to respond to a flashing light, for example). There was no fused consciousness. It was a step forward, but you don’t get to claim membership in the Borg Collective just because a stimulus happens to tickle you from the inside.
And yet, more recently, Rajesh Rao (of the University of Washington’s Center for Sensorimotor Neural Engineering) reported what appears to be a real Alien Hand Network – and going Pais-Vieira one better, he built it out of people. Someone thinks a command; downstream, someone else responds by pushing a button without conscious intent. Now we’re getting somewhere.
There’s a machine in a lab in Berkeley, California, that can read the voxels right off your visual cortex and figure out what you’re looking at based solely on brain activity. One of its creators, Kendrick Kay, suggested back in 2008 that we’d eventually be able to read dreams (also, that we might want to take a closer look at certain privacy issues before that happened). His best guess was that this might happen a few decades down the road – but it took only four years for a computer in a Japanese lab to predict the content of hypnagogic hallucinations (essentially, dreams without REM) at 60 per cent accuracy, based entirely on fMRI data.
When Moore’s Law shaves that much time off the predictions of experts, it’s not too early to start wondering about consequences. What are the implications of a technology that seems to be converging on the sharing of consciousness?
It would be a lot easier to answer that question if anyone knew what consciousness is. There’s no shortage of theories. The neuroscientist Giulio Tononi at the University of Wisconsin-Madison claims that consciousness reflects the integration of distributed brain functions. A model developed by Ezequiel Morsella, of San Francisco State University, describes it as a mediator between conflicting motor commands. The panpsychics regard it as a basic property of matter – like charge, or mass – and believe that our brains don’t generate the stuff so much as filter it from the ether like some kind of organic spirit-catchers. Neuroscience superstar V S Ramachandran (University of California in San Diego) blames everything on mirror neurons; Princeton’s Michael Graziano – right here in Aeon – describes it as an experiential map.
I think they’re all running a game on us. Their models – right or wrong – describe computation, not awareness. There’s no great mystery to intelligence; it’s easy to see how natural selection would promote flexible problem-solving, the triage of sensory input, the high-grading of relevant data (aka attention).
But why would any of that be self-aware?
If physics is right – if everything ultimately comes down to matter, energy and numbers – then any sufficiently accurate copy of a thing will manifest the characteristics of that thing. Sapience should therefore emerge from any physical structure that replicates the relevant properties of the brain.
We might be about to find out. SyNAPSE – a collaboration between the US Defense Advances Research Projects Agency (DARPA) and the IT industry – is even now working on a hardware reconstruction of a human brain. They’re hoping to have it running by 2019, although if physics is right, ‘awake’ might be a better term.
Then again, if physics is right, we shouldn’t exist. You can watch ions hop across synapses, follow nerve impulses from nose to toes; nothing in any of those processes would lead you to expect the emergence of subjective awareness. Physics describes a world of intelligent zombies who do everything we do, except understand that they’re doing it. That’s what we should be, that’s all we should be: meat and computation. Somehow the meat woke up. How the hell does that even work?
What we can get a handle on are the correlates of sapience, the neural signatures that accompany the conscious state. In humans at least, consciousness occurs when a bunch of subcortical structures – the brain stem, the thalamus and hypothalamus, the anterior cingulate cortex – talk to the frontal lobes. Integration is key. Neurons in all these far-flung regions have to be firing in sync, a co‑ordinated call-and-response with a signal lag of no more than 400 milliseconds. Tononi is using that insight to derive an integration metric he calls ɸ. It is designed not merely to detect consciousness but to quantify it: to hang a hard number on the level of self-awareness flickering in everything from roundworms to humans.
If it does all come down to neural integration – if self-awareness is a matter of degree, flickering at some rudimentary level even in the ganglia of nematodes – then the specific architecture of the conscious brain might be open to negotiation. This, at least, is the position of the so-called ‘Cambridge Declaration’ unveiled at the 2012 Francis Crick Memorial Conference on Consciousness. Its signatories – ‘cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists’ – attribute self-awareness to a wide variety of non-human species.
I’m not sure how seriously to take this. Not that I find the claim implausible – I’ve always believed that we humans tend to underestimate the cognitive complexity of other creatures – but it’s not as though the declaration announced the results of some ground-breaking new experiment that settled the issue once and for all. Rather, its signatories basically sat down over beers and took a show of hands on whether to publicly admit bonobos to the Sapients Club. (Something else that seems a bit iffy is all the fuss raised over the signing of the declaration ‘in the presence of Stephen Hawking’, even though he is neither a neuroscientist nor a signatory.)
Still, we are talking about a cadre of renowned neuroscientists, the least of whom is far more qualified than I to make assertions on the subject. One of the things they assert is that self-awareness does not depend on specific brain structures. The declaration grants ‘near-human levels of consciousness’ to parrots (who lack a neocortex) and to octopuses (whose brains – basically a bagel of neurons encircling the esophagus – don’t have any anatomical resemblance to ours at all). It’s neurological complexity that’s essential to the conscious state, they tell us. The motherboard can take any shape so long as it’s got enough synapses on board.
This is all preamble, though, a set‑up to the question posed at the outset: what are the implications of a technology that wires brains together, that in theory at least permits the existence of hive minds? In fact, you know a lot more about that than you might think.
You already are a hive mind. You always have been.
This thing you think of as you: it spreads across two cerebral hemispheres connected by the corpus callosum, a fat meaty pipe more than 200 million axons thick. Suppose I took a cleaver to that pipe, split it down the middle. (That’s no mere thought experiment: severing the corpus callosum is a last-ditch measure against certain forms of epilepsy.) In the wake of such violent separation, each hemisphere would go its own way. It would develop its own tastes in clothes, music, even its own religious beliefs. Ramachandran tells of a split-brain patient with a Christian hemisphere and an atheist one. You’ve probably heard of Alien Hand Syndrome, or at least seen the movie Dr Strangelove: try to put on a certain shirt, your evil hand rips it off. Try to pick up a favourite pen, your evil hand knocks it away and picks up a Sharpie instead.
Except it’s not your hand at all any more, of course. It belongs to that other self living across the hall, the one that used to be part of you before the break‑up.
You’re still talking, at least. Still friends of a sort. Even when the corpus callosum is severed, the hemispheres can communicate via the brainstem. It’s a longer route, though, and a much thinner pipe: think dial-up versus broadband. The essential variables, once again, are latency and bandwidth. When the pipe is intact, signals pass back and forth across the whole brain fast enough for the system to act as an integrated whole, to think of itself as I. But when you force those signals to take the scenic route – worse, squeeze them through a straw – the halves fall out of sync, lose their coherence. I shatters into we.
You might expect that an established personality, built over a lifetime and then split down the middle, might take some time to develop into distinct entities. Yet hemispheric isolation can also be induced chemically, by anaesthetising half the brain – and the undrugged hemisphere, unshackled from its counterpart, sometimes manifests a whole new suite of personality traits right on the spot. A shy, whole-brained introvert morphs into a wise-cracking flirtatious jokester. A pleasant, well-adjusted woman turns sarcastic and hostile. When the other half wakes up the new entity vanishes as quickly as it appeared.
So while the thing that calls itself I typically runs on a dual-core engine, it’s perfectly capable of running on a single core. Take you, for example. Chances are you’re running on two cores right now. Does each contain its own distinct sub-personality? Are there two of you in there, each thinking: Hey, I’m part of something bigger?
Not likely. Rather, the local personae are obliterated, absorbed into a greater whole; as the Finnish computer scientists Kaj Sotala (at the University of Helsinki) and Harri Valpola (Aalto University) recently declared in the International Journal of Machine Consciousness, ‘the biological brain cannot support multiple separate conscious attentional processes in the same brain medium’.
Remember that. It could end up biting us in the ass a few years down the road.
Krista and Tatiana Hogan of the city of Vernon in British Columbia are seven-year-old sisters fused at the head. Craniopagus twins are extremely rare in any event, but the Hogans appear to be utterly unique in that they aren’t just fused at the skull or the vascular system. They are fused at the brain – more specifically at the thalamus, which acts as (among other things) a sensory relay.
They share a common set of sensory inputs. Tickle one, the other laughs. Each sees through the other’s eyes; each tastes what’s on the other’s tongue. They smile and cry in sync. There’s anecdotal evidence that they share thoughts and, although they have distinct personalities, each uses the word ‘I’ when referring to the other. The Hogan twins are two souls with one sensorium. All because they’re fused at a sensory relay.
But the thalamus is lower-brain circuitry. Dial-up, not broadband. Suppose the twins were fused at the prefrontal cortex instead?
If two hemispheres can each run separate, standalone programs – yet fuse to form a single coherent entity – what about the fusion of complete brains, a single contiguous porridge of neurons spread across two heads? Given a slight developmental tweak to the left, would we still be talking about two souls, or a single conscious being with twice the neuronal mass of a normal human brain?
There are other ways to put our heads together. Neurosilicon interfaces, for example. We’ve had those for more than a decade now. In labs around the world, neuron cultures put robot bodies through their paces; puddles of brain tissue drive flight simulators. At Clemson University in South Carolina, Ganesh Venayagamoorthy is busy teaching tame neurons to run everything from power grids to stock markets. DARPA has thrown its weight behind the development of a ‘cortical modem’, a direct neural interface wired right into your gray matter (we’re already using implants to reprogram specific neurons in other primates). But DARPA may have already been scooped by Theodore Berger, down at the University of Southern California. Way back in 2011, he unveiled a kind of artificial, memory-forming hippocampus for rats. The memories encoded in that device can be accessed by the organic rat brain; they can also be ported to other rats. It won’t be long before such prostheses scale up to our own species (that is in fact the explicit goal of Berger’s research).
If the prospect of surgery squicks you out, Sony has registered blue-sky patents for technology that plants sensory input directly into the brain using radio waves and compressed ultrasound. They’re selling it as a great leap forward for everything from gaming to telesurgery. (For my part, I can’t help remembering that neurons fire pretty much the same way whether they’re processing sensory input or religious belief. The difference between instilling sights, sounds, political opinions – why not an irresistible craving for a certain brand of beer? – might come down to little more than where you aim the beam.)
None of these efforts are explicitly designed to connect one human mind to another. What they’re pioneering is an interface, the ability to translate thoughts from meat into mech and back again. What we are seeing, in other words, is the genesis of a new kind of corpus callosum that extends beyond the confines of a single skull.
We’re still in the Precambrian. Grau’s emailed brainwaves amount to a fancy kind of semaphore that happens to bypass the eyeballs. Pais-Vieira’s hive mind was a pair of distinct rat brains, pimped out so that a spark in one would trigger a poke in the other – a stimulus that would have been meaningless to the recipient if he hadn’t already been trained to respond in a certain way. That’s not integrated awareness, or even telepathy. It’s the difference between experiencing an orgasm and watching a signal light on a distant hill spell out oh-god-oh-god-yes in Morse Code.
So it’s early days yet. But it may be later than you think.
Cory Doctorow’s novel Down and Out in the Magic Kingdom (2003) describes a near future in which everyone is wired into the internet, 24/7, via cortical link. It’s not far-fetched, given recent developments. And the idea of hooking a bunch of brains into a common network has a certain appeal. Split-brain patients outperform normal folks on visual-search and pattern-recognition tasks, for one thing: two minds are better than one, even when they’re in the same head, even when limited to dial-up speeds. So if the future consists of myriad minds in high-speed contact with each other, you might say: Yay, bring it on.
I’m not sure that’s the way it’s going to happen, though.
I don’t necessarily buy into the hokey old trope of an internet that ‘wakes up’. Then again, I don’t reject it out of hand, either. Google’s ‘DeepMind’, a general-purpose AI explicitly designed to mimic the brain, is a bit too close to SyNAPSE for comfort (and a lot more imminent: its first incarnations are already poised to enter the market). The bandwidth of your cell phone is already comparable to that of your corpus callosum, once noise and synaptic redundancy are taken into account. We’re still a few theoretical advances away from an honest-to-God mind meld – still waiting for the ultrasonic ‘Neural Dust’ interface proposed by Berkley’s Dongjin Seo, or for researchers at Rice University to perfect their carbon-nanotube electrodes – but the pipes are already fat enough to handle that load when it arrives.
And those advances may come easier than you’d expect. Brains do a lot of their own heavy lifting when it comes to plugging unfamiliar parts together. A blind rat, wired into a geomagnetic sensor via a simple pair of electrodes, can use magnetic fields to navigate a maze just as well as her sighted siblings. If a rat can teach herself to use a completely new sensory modality – something the species has never experienced throughout the course of its evolutionary history – is there any cause to believe our own brains will prove any less capable of integrating novel forms of input?
Not even skeptics necessarily deny the likelihood of ‘thought-stealing technology’. They only protest that it won’t be here for decades (which, given the number of us who expect to be alive and kicking 30 years from now, is not an especially strong objection). If we do stop short of a hive mind, it’s unlikely to be because we lack the tech; it’ll only be because we lack the nerve.
So I don’t think it unreasonable to wonder if one day, not too far from now, Netflix might change its name to Mindflix and offer streaming first-person experience directly into the sensory cortex. I suspect people would sign up in droves for such a service. Moore’s Law will work its magic.
What might that mean to us as individuals?
Ask one-half of the supersized self that the Hogan twins might have been, if their brains had fused just a little further up. Ask the poor bastard who awakened into a single hemisphere and had a few minutes to live some fraction of a life before the drugs wore off and his other half swallowed him whole. Oh, but you can’t ask him. He doesn’t exist any more. Right now he has as much individuality as your parietal lobe.
Consciousness remains mysterious. But there’s no reason to regard it as magical, no evidence of spectral bonds that hold a soul in one head and keep it from leaking into another. And one of the things we do know is that consciousness spreads to fill the space available. Smaller selves disappear into larger; two hemispheres integrate into one. The architectural specifics aren’t even all that important if Tononi is right, if the Cambridge Declaration is anything to go on. You don’t need a neocortex or a hypothalamus. All you need is complexity and a sufficiently fat pipe.
Does a thought know to turn back at the edge of one skull when the paths lead into another? Does an electron know the difference between a corpus callosum and a brain-computer interface? Titles in the popular press – ‘Google Search Will Be Your Next Brain’ – might not be so much ominous as childishly naive; they assume, after all, that ‘you’ will continue to exist as a distinct entity. They assume that brains can support multiple separate conscious attentional processes in the same medium.
Throughout history we’ve communicated via the equivalent of dial‑up, through speech and writing and images on screens. A fat enough neural interface could turn everything broadband, act as a next-gen corpus callosum that fuses we into some new kind of I that’s never existed before.
Of course they’ll put safeguards in place, take every measure to ensure that nothing goes wrong. Maybe nothing will. Keep your baud rate dialled back far enough and you’ll be fine. But there are always those who push the envelope, who might actively embrace the prospect of union with another mind. They’re not all that uncommon in transhumanist circles. Some regard it as an inevitable step in abandoning the flesh, uploading consciousness into a gleaming new chassis with a longer warranty. To others it’s a way to commune with the souls of other species, to share consciousness with cats and octopuses. It’s a fine line, though. Keep the bandwidth too low and you lose the experience; edge it too high and you lose yourself.
Even if you’re not into that kind of thing, you use the internet – which neuroscientists and game developers, even now, are reshaping into an explicit embodiment of neural intelligence. The web’s ɸ score isn’t going anywhere but up. And servers hiccup sometimes. Floodgates fail. Shit happens, and – as Batman’s butler once pointed out – some men just want to watch the world burn. Given the option, those folks might get tired of distributed denial-of-service attacks and leaked celebrity emails, they might try hacking Mindflix for Allah or the lulz. God help anyone who’s streaming the latest Marvel Total-Immersion Extravaganza when that happens.
These are some of the things we might want to start thinking about now – because they won’t matter that much to you after some failsafe has failed, or you’ve been talked into trying the whole mind-meld thing by someone who figured out how to disable the bottleneck. You might not care about the potential of an emergent consciousness built from silicon or a network of 1,000 brains, or whether logging out of a freshly integrated hive mind should be defined as murder or mere lobotomy.
Immersed in that pool – reduced from standalone soul down to neural subroutine – there might not be enough of you left to even want to get out again. | 2024-11-08T13:57:18 | en | train |
10,826,516 | mikemaccana | 2016-01-02T15:54:15 | Why Are Digital-Privacy Apps So Hard to Use? | null | http://www.theatlantic.com/technology/archive/2015/12/why-are-digital-privacy-apps-so-hard-to-use/422310/?single_page=true | 2 | 1 | [
10826529
] | null | null | no_error | Why Are Digital Privacy Apps So Hard to Use? | 2015-12-31T14:43:22Z | Kaveh Waddell | Unless two people are in the same room, it’s hard for them to communicate information securely. Phone calls, emails, and text messages could be open to eavesdropping from governments, companies, or hackers—and even paper mail is subject to tracking.Truly private online communications have been available for some time, but most require a high level of technology know-how. Those uncomfortable setting up a PGP key to encrypt their emails, for example, have for decades been left without an option to communicate securely.But since Edward Snowden’s trove of leaked government documents revealed the extent of the National Security Agency’s domestic spying apparatus, digital privacy has begun to enter the consciousness of average consumers, and a small group of apps has sprung up to them. A few companies—most notably, Signal, Telegram, and WhatsApp—have created simple apps for private communication, their pleasant interfaces masking complex security systems built to withstand intense attacks.Another digital-security software company is trying to make straightforward privacy tools accessible to more app developers—and by extension, to more consumers. SpiderOak has been in the business of protecting data for years, with a Dropbox-like backup service that allows users to save mountains of data on the company’s servers, but in such a way that even the company itself can’t decrypt the information it holds.SpiderOak also developed an open-source platform called Crypton, a code library that’s free for other developers to lean on when creating their own apps.* The library handles privacy protections, allowing less crypto-savvy programmers to focus on other details.David Dahl, Crypton’s director, says privacy is a user-experience problem. “There has historically been very little interaction between [user-experience] designers who love to create very pretty and functional things and computer scientists who specialize in cryptography,” Dahl wrote in an email.That disconnect, he said, has prevented encrypted communication from “looking and acting like everyday software.” Sending a PGP-encrypted email, for example, is a many-step process that involves a lengthy initial setup, finding and verifying the public key of the intended recipient, using software to encrypt a message with that public key, and later decrypting the response.As a proof-of-concept for simple privacy software, SpiderOak built a basic social-networking app called Kloak on the Crypton platform. Like Twitter, Kloak allows users to broadcast short status messages—but unlike Twitter’s emphasis on public engagement, it only allows sharing between users who have agreed to follow one another, encrypting the messages and photos as they travel between users’ devices.Still in beta and rough around the edges, Kloak is more an experiment than a viable product.“It’s an easy way for us to encourage other people to build other zero-knowledge applications,” said Alan Fairless, SpiderOak’s co-founder and CEO. 
“Here’s a nice example of one: It was built without using any fancy tools, no advanced JavaScript frameworks—just very vanilla, approachable by new developers.”One of the privacy capabilities that Kloak demonstrates is a simple key-verification process, an essential part of most encrypted communications.When sending and receiving encrypted messages, each participant in a conversation must make certain that the person on the other end is indeed who they say they are. Many modern encryption services use a system of public and private keys, allowing users to verify their partners’ identities by comparing computer-generated passcodes or images. A Telegram encryption keyOne of the ways user-experience-focused apps are making secure messaging more friendly is by making this verification process easier. Telegram, for example, creates a pattern of blue squares based on the public keys of the participants in an encrypted chat, which both ends can view and compare: They should be identical. And an encrypted call made through Signal displays two words on the participants’ devices, which they can compare to verify that their conversation is secure.Kloak uses a system more akin to Telegram’s, generating a QR code that allows users to add others to their network by scanning it. But SpiderOak says it’s developing a “stylish” replacement for Kloak’s tired-looking QR code that will involve an animated pattern.“If you’ve ever used a product like PGP, the key-verification process is just a disaster for most people,” Fairless said. “How can we make it feel private, and be effective, and unobtrusive enough that people will actually do it?”The animated key-verification system will feature in a product SpiderOak plans to launch in the next few months, a team-collaboration application that will compete with tools like Microsoft SharePoint. Derived from the company’s Crypton framework, the software will allow teams to exchange messages and files that will remain encrypted and inaccessible even to SpiderOak employees.Although Kloak is a bare-bones experiment with little marketing behind it, it’s already attracted some early adopters. Andrew Mitry, a cloud-computing engineer at Walmart, said he was drawn in by Kloak’s privacy focus. Acknowledging that he runs in “pretty tech-savvy circles,” he said many in his network would only participate in social networking “in a private/secure environment.”Another early Kloak user, Brazil-based beta tester David Nielsen, said he ran into several early usability problems, but was also enticed the app’s approach to privacy. “At least Kloak offers something unique and hopefully valuable to users in this post-NSA-data-addiction world: the freedom to make an informed decision on privacy,” Nielsen said. “Provided everybody they care about make the same choice.”Indeed, the greatest obstacle to any privacy-first software is uptake. Security-conscious early adopters aside, many consumers aren’t willing to give up features they consider essential in exchange for encryption, a little-understood and often-vilified tool. 
And newer privacy-first apps are up against entrenched rivals like Facebook and Gmail, which have built enormous user bases, and survive by profiting from those users’ data.But as the compounded effect of Snowden’s NSA secrets and the ballooning list of hacks at major companies and government agencies seeps into the public consciousness, there is more of a chance that consumers will demand more privacy from their everyday software.To meet that demand, developers will need to invest in making the complexities of digital privacy accessible and user-friendly, extending the option of online security to even the least tech-savvy Internet users.* This article originally stated that SpiderOak's backup service is built on the open-source platform Crypton; it is actually built with proprietary software. We regret the error. | 2024-11-08T11:53:20 | en | train |
10,826,635 | n3mes1s | 2016-01-02T16:28:56 | Starting a tech startup with C++ | null | https://medium.com/@jamesperry/starting-a-tech-startup-with-c-6b5d5856e6de#.wv0nsmicb | 157 | 101 | [
10827655,
10826959,
10827553,
10826833,
10826782,
10828456,
10826873,
10827296,
10826764,
10826868,
10826880,
10826892,
10826856,
10826971,
10826899,
10828516,
10828243,
10827415,
10827765,
10826850,
10827363,
10827387,
10826991
] | null | null | no_error | Starting a tech startup with C++ - The Startup - Medium | 2016-01-02T10:10:39.404Z | James Perry | I founded a new tech-startup called Signal Analytics with an old University friend, Fedor Dzjuba of Linnworks. We are building a modern, cloud-based version of OLAP cubes (multi-dimensional data storage and retrieval) by building our own database system.
I am taking the lead on the technical side and I am most comfortable with C++, so I decided to build our OLAP engine with it. I did originally build a prototype in Rust but it was too high risk (I should write another post to explain more about this decision).
A lot of my peers think it is bizarre that I am building a cloud service with C++ and not with a dynamic language — such as Ruby or Python — that provides high productivity to ship quickly. I started to question my own judgement in using C++ and decided to research whether it was a good idea or not.
Productivity
C++ is not a dynamic language, but modern C++ (C++11/14) does have type inference. There are a lot of misconceptions that if you write it in C++, you must code with raw pointers, type long-winded namespaces/types and manage memory manually. A key feature for feeling more productive in C++ is auto: you do not have to type long-winded namespaces and classes; type inference works out the type of the variable for you.
Manual memory management is the most popular misconception of C++. Since C++11, it is now recommended to use std::shared_ptr or std::unique_ptr for automatic memory management. There is a small computational cost to maintaining reference-counted pointers, but it's minuscule and the safety outweighs this cost.
The last part of being productive is having libraries to build a service/product rapidly. Python, Ruby and others have great libraries to take care of the common infrastructure. In my opinion, the current C++ standard library is severely lacking in basic functionality and certain APIs have poor performance (for example, reading files from iostreams). Facebook has open-sourced high-quality libraries that have helped us to quickly ship out alphas of our OLAP cloud service:
Folly
This is a great general C++ library and has lots of high-performance classes to use. I use its fbvector and fbstring throughout our engine because they offer better performance than std::vector and std::string respectively. We also use a lot of its futures and atomic lock-free data structures.
Facebook made a really smart move with their dynamic growth allocations by not using quadratic growth (it can easily be proved mathematically why that is bad). Their containers grow memory by 1.5x instead of 2x to improve performance.
On a side note, reading Folly code has also made me a better C++ developer, so I strongly recommend reading it.
Proxygen
Proxygen is an asynchronous HTTP server that is also developed by Facebook. We use Proxygen as our HTTP server that inserts and retrieves data as JSON to and from our OLAP engine. It allowed us to create a high-performance HTTP server calling our engine in just 1 day. I decided to benchmark it against a Python Tornado server and got the following results for testing with 200 HTTP connections on an EC2 instance:
C++/Proxygen = 1,990,130 requests per second
Python/Tornado = 41,329 requests per second
Its API is more low-level and you will have to write your own HTTP routing, but this is a trivial task. Here's what our HTTP body handler roughly looks like:
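(What follows is only a minimal sketch of such a handler, assuming the stock proxygen::RequestHandler and ResponseBuilder interface; the QueryHandler class and the runOlapQuery call are illustrative placeholders rather than the actual Signal Analytics code.)

#include <memory>
#include <string>

#include <folly/io/IOBuf.h>
#include <proxygen/httpserver/RequestHandler.h>
#include <proxygen/httpserver/ResponseBuilder.h>

// Placeholder for the engine call: takes a JSON query, returns a JSON result.
std::string runOlapQuery(const std::string& jsonQuery);

class QueryHandler : public proxygen::RequestHandler {
 public:
  void onRequest(std::unique_ptr<proxygen::HTTPMessage> headers) noexcept override {
    // Routing decisions (e.g. based on headers->getPath()) would go here.
  }

  void onBody(std::unique_ptr<folly::IOBuf> body) noexcept override {
    // Accumulate the JSON request body chunk by chunk.
    if (body_) {
      body_->prependChain(std::move(body));
    } else {
      body_ = std::move(body);
    }
  }

  void onEOM() noexcept override {
    // Flatten the buffered body, run the query, and send the JSON result back.
    const std::string query = body_ ? body_->moveToFbString().toStdString() : "";
    proxygen::ResponseBuilder(downstream_)
        .status(200, "OK")
        .header("Content-Type", "application/json")
        .body(runOlapQuery(query))
        .sendWithEOM();
  }

  void onUpgrade(proxygen::UpgradeProtocol) noexcept override {}
  void requestComplete() noexcept override { delete this; }
  void onError(proxygen::ProxygenError) noexcept override { delete this; }

 private:
  std::unique_ptr<folly::IOBuf> body_;
};

Each request gets its own handler instance (hence the delete this in requestComplete and onError), which is the pattern Proxygen's own sample servers follow; the routing mentioned above would map request paths to handlers like this one inside a RequestHandlerFactory.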
Wangle
Our OLAP engine is essentially a distributed database used to store and query multi-dimensional data. The engine uses Wangle as the foundation of an application server. All the logic is layered into Wangle handlers that are chained together to form a pipeline. It communicates with our Proxygen HTTP server to serve data queries, and the nodes communicate with each other.
It uses a grid of servers that share the same (symmetric) binary executable, so there's no master/slave paradigm. Each server is a node that acts as both a master and a slave and uses a custom binary data protocol to pass data/messages to each other.
The only thing missing for our needs is fibers for cooperative scheduling of storage and querying tasks within the engine; the Folly/Wangle developers have an experimental version at the moment, but it is not production-ready yet.
2. Hardware/Labor Costs
I quantified that 1 C++ server is roughly equivalent to 40 load-balanced Python servers for raw computational power, based on our HTTP benchmarking. Thus using C++ can really squeeze all the computational juice out of the underlying hardware, cutting server costs to roughly 1/40th. I guess we could have written it in Python to start off with but, economically, it would be a waste of labor cost and time because, at some stage, we would have to scrap it for a C++ version to get the performance we need. The Python code will have no economic value once scrapped.
To summarize, C++ may not be the most popular choice for a startup, but I believe modern C++ can be a viable choice giving you near-C performance with high-level abstractions. I am worried about build times once the code base grows significantly, but hopefully C++17 modules will alleviate this.
I hope this post inspires others to look into C++ for their ventures. | 2024-11-08T01:10:25 | en | train
10,826,836 | urs | 2016-01-02T17:21:46 | The Refragmentation | null | http://paulgraham.com/re.html | 852 | 444 | [
10827603,
10826930,
10827301,
10827825,
10827180,
10827210,
10827956,
10827113,
10830171,
10827728,
10829958,
10827495,
10831051,
10827600,
10827055,
10827969,
10827126,
10828006,
10826974,
10828044,
10827639,
10828013,
10828156,
10827002,
10827358,
10827220,
10827788,
10827533,
10828200,
10830585,
10827838,
10827062,
10827023,
10828396,
10827054,
10827891,
10827019,
10829125,
10828921,
10827311,
10827264,
10828231,
10860053,
10827038,
10828897,
10827811,
10830407,
10834057,
10831560,
10831391,
10828464,
10827481,
10828927,
10841494,
10829889,
10828991,
10827320,
10830481,
10827160,
10827215,
10828839,
10827664,
10829937,
10828240,
10829115,
10829555,
10829865,
10827234,
10828945,
10834553,
10828035,
10828948,
10829906,
10830233,
10829695,
10829988,
10829167,
10828882,
10830442,
10828902,
10835643,
10827897,
10829526,
10828308,
10829003,
10830498,
10828608,
10827698,
10827217,
10827855,
10827760,
10829276,
10827900,
10827059,
10829116,
10839915,
10828851,
10829847,
10829383,
10827110,
10830444,
10827381,
10830739,
10826923,
10827080,
10828827,
10829134,
10827021,
10828883,
10827960
] | null | null | no_error | The Refragmentation | null | null | One advantage of being old is that you can see change happen in
your lifetime. A lot of the change I've seen is fragmentation. US
politics is much more polarized than it used to be. Culturally we
have ever less common ground. The creative class flocks to a handful
of happy cities, abandoning the rest. And increasing economic
inequality means the spread between rich and poor is growing too.
I'd like to propose a hypothesis: that all these trends are instances
of the same phenomenon. And moreover, that the cause is not some
force that's pulling us apart, but rather the erosion of forces
that had been pushing us together.
Worse still, for those who worry
that were pushing us together were an anomaly, a one-time combination
of circumstances that's unlikely to be repeated — and indeed, that
we would not want to repeat.
The two forces were war (above all World War II), and the rise of
large corporations.
The effects of World War II were both economic and social.
Economically, it decreased variation in income. Like all modern
armed forces, America's were socialist economically. From each
according to his ability, to each according to his need. More or
less. Higher ranking members of the military got more (as higher
ranking members of socialist societies always do), but what they
got was fixed according to their rank. And the flattening effect
wasn't limited to those under arms, because the US economy was
conscripted too. Between 1942 and 1945 all wages were set by the
National War Labor Board. Like the military, they defaulted to
flatness. And this national standardization of wages was so pervasive
that its effects could still be seen years after the war ended.
Business owners weren't supposed to be making money either. FDR
said "not a single war millionaire" would be permitted. To ensure
that, any increase in a company's profits over prewar levels was
taxed at 85%. And when what was left after corporate taxes reached
individuals, it was taxed again at a marginal rate of 93%.
Socially too the war tended to decrease variation. Over 16 million
men and women from all sorts of different backgrounds were brought
together in a way of life that was literally uniform. Service rates
for men born in the early 1920s approached 80%. And working toward
a common goal, often under stress, brought them still closer together.
Though strictly speaking World War II lasted less than 4 years for
the US, its effects lasted longer. Wars make central governments
more powerful, and World War II was an extreme case of this. In
the US, as in all the other Allied countries, the federal government
was slow to give up the new powers it had acquired. Indeed, in
some respects the war didn't end in 1945; the enemy just switched
to the Soviet Union. In tax rates, federal power, defense spending,
conscription, and nationalism, the decades after the war looked more
like wartime than prewar peacetime.
If total war was the big political story of the 20th century, the
big economic story was the rise of a new kind of company. And this
too tended to produce both social and economic cohesion.
The 20th century was the century of the big, national corporation.
General Electric, General Foods, General Motors. Developments in
finance, communications, transportation, and manufacturing enabled
a new type of company whose goal was above all scale. Version 1
of this world was low-res: a Duplo world of a few giant companies
dominating each big market.
The late 19th and early 20th centuries had been a time of consolidation,
led especially by J. P. Morgan. Thousands of companies run by their
founders were merged into a couple hundred giant ones run by
professional managers. Economies of scale ruled the day. It seemed
to people at the time that this was the final state of things. John
D. Rockefeller said in 1880 that "the day of combination is here
to stay. Individualism has gone, never to return."
The consolidation that began in the late 19th century continued for
most of the 20th. By the end of World War II, as Michael Lind
writes, "the major sectors of the economy were either organized
as government-backed cartels or dominated by a few oligopolistic
corporations."
For consumers this new world meant the same choices everywhere, but
only a few of them. When I grew up there were only 2 or 3 of most
things, and since they were all aiming at the middle of the market
there wasn't much to differentiate them.
One of the most important instances of this phenomenon was in TV.
Here there were 3 choices: NBC, CBS, and ABC. Plus public TV for
eggheads and communists. The programs that the 3 networks offered were
indistinguishable. In fact, here there was a triple pressure toward
the center. If one show did try something daring, local affiliates
in conservative markets would make them stop. Plus since TVs were
expensive, whole families watched the same shows together, so they
had to be suitable for everyone.
And not only did everyone get the same thing, they got it at the
same time. It's difficult to imagine now, but every night tens of
millions of families would sit down together in front of their TV
set watching the same show, at the same time, as their next door
neighbors. What happens now with the Super Bowl used to happen
every night. We were literally in sync.
In a way mid-century TV culture was good. The view it gave of the
world was like you'd find in a children's book, and it probably had
something of the effect that (parents hope) children's books have
in making people behave better. But, like children's books, TV was
also misleading. Dangerously misleading, for adults. In his
autobiography, Robert MacNeil talks of seeing gruesome images that
had just come in from Vietnam and thinking, we can't show these to
families while they're having dinner.
I know how pervasive the common culture was, because I tried to opt
out of it, and it was practically impossible to find alternatives.
When I was 13 I realized, more from internal evidence than any
outside source, that the ideas we were being fed on TV were crap,
and I stopped watching it.
nominally
apples. And in retrospect, it was crap.
[8]
But when I went looking for alternatives to fill this void, I found
practically nothing. There was no Internet then. The only place
to look was in the chain bookstore in our local shopping mall.
[9]
There I found a copy of The Atlantic. I wish I could say it became
a gateway into a wider world, but in fact I found it boring and
incomprehensible. Like a kid tasting whisky for the first time and
pretending to like it, I preserved that magazine as carefully as
if it had been a book. I'm sure I still have it somewhere. But
though it was evidence that there was, somewhere, a world that
wasn't red delicious, I didn't find it till college.
It wasn't just as consumers that the big companies made us similar.
They did as employers too. Within companies there were powerful
forces pushing people toward a single model of how to look and act.
IBM was particularly notorious for this, but they were only a little
more extreme than other big companies. And the models of how to
look and act varied little between companies. Meaning everyone
within this world was expected to seem more or less the same. And
not just those in the corporate world, but also everyone who aspired
to it — which in the middle of the 20th century meant most people
who weren't already in it. For most of the 20th century, working-class
people tried hard to look middle class. You can see it in old
photos. Few adults aspired to look dangerous in 1950.
But the rise of national corporations didn't just compress us
culturally. It compressed us economically too, and on both ends.
Along with giant national corporations, we got giant national labor
unions. And in the mid 20th century the corporations cut deals
with the unions where they paid over market price for labor. Partly
because the unions were monopolies.
[10]
Partly because, as
components of oligopolies themselves, the corporations knew they
could safely pass the cost on to their customers, because their
competitors would have to as well. And partly because in mid-century
most of the giant companies were still focused on finding new ways
to milk economies of scale. Just as startups rightly pay AWS a
premium over the cost of running their own servers so they can focus
on growth, many of the big national corporations were willing to
pay a premium for labor.
[11]As well as pushing incomes up from the bottom, by overpaying unions,
the big companies of the 20th century also pushed incomes down at
the top, by underpaying their top management. Economist J. K.
Galbraith wrote in 1967 that "There are few corporations in which
it would be suggested that executive salaries are at a maximum."
[12]
To some extent this was an illusion. Much of the de facto pay of
executives never showed up on their income tax returns, because it
took the form of perks. The higher the rate of income tax, the
more pressure there was to pay employees upstream of it. (In the
UK, where taxes were even higher than in the US, companies would
even pay their kids' private school tuitions.) One of the most
valuable things the big companies of the mid 20th century gave their
employees was job security, and this too didn't show up in tax
returns or income statistics. So the nature of employment in these
organizations tended to yield falsely low numbers about economic
inequality. But even accounting for that, the big companies paid
their best people less than market price. There was no market; the
expectation was that you'd work for the same company for decades
if not your whole career.
[13]
Your work was so illiquid there was little chance of getting market
price. But that same illiquidity also encouraged you not to seek
it. If the company promised to employ you till you retired and
give you a pension afterward, you didn't want to extract as much
from it this year as you could. You needed to take care of the
company so it could take care of you. Especially when you'd been
working with the same group of people for decades. If you tried
to squeeze the company for more money, you were squeezing the
organization that was going to take care of them. Plus if
you didn't put the company first you wouldn't be promoted, and if
you couldn't switch ladders, promotion on this one was the only way
up.
[14]To someone who'd spent several formative years in the armed forces,
this situation didn't seem as strange as it does to us now. From
their point of view, as big company executives, they were high-ranking
officers. They got paid a lot more than privates. They got to
have expense account lunches at the best restaurants and fly around
on the company's Gulfstreams. It probably didn't occur to most of
them to ask if they were being paid market price.
The ultimate way to get market price is to work for yourself, by
starting your own company. That seems obvious to any ambitious
person now. But in the mid 20th century it was an alien concept.
Not because starting one's own company seemed too ambitious, but
because it didn't seem ambitious enough. Even as late as the 1970s,
when I grew up, the ambitious plan was to get lots of education at
prestigious institutions, and then join some other prestigious
institution and work one's way up the hierarchy. Your prestige was
the prestige of the institution you belonged to. People did start
their own businesses of course, but educated people rarely did,
because in those days there was practically zero concept of starting
what we now call a startup:
a business that starts small and grows
big. That was much harder to do in the mid 20th century. Starting
one's own business meant starting a business that would start small
and stay small. Which in those days of big companies often meant
scurrying around trying to avoid being trampled by elephants. It
was more prestigious to be one of the executive class riding the
elephant.
By the 1970s, no one stopped to wonder where the big prestigious
companies had come from in the first place. It seemed like they'd
always been there, like the chemical elements. And indeed, there
was a double wall between ambitious kids in the 20th century and
the origins of the big companies. Many of the big companies were
roll-ups that didn't have clear founders. And when they did, the
founders didn't seem like us. Nearly all of them had been uneducated,
in the sense of not having been to college. They were what Shakespeare
called rude mechanicals. College trained one to be a member of the
professional classes. Its graduates didn't expect to do the sort
of grubby menial work that Andrew Carnegie or Henry Ford started
out doing.
[15]
And in the 20th century there were more and more college graduates.
They increased from about 2% of the population in 1900 to about 25%
in 2000. In the middle of the century our two big forces intersect,
in the form of the GI Bill, which sent 2.2 million World War II
veterans to college. Few thought of it in these terms, but the
result of making college the canonical path for the ambitious was
a world in which it was socially acceptable to work for Henry Ford,
but not to be Henry Ford.
[16]I remember this world well. I came of age just as it was starting
to break up. In my childhood it was still dominant. Not quite so
dominant as it had been. We could see from old TV shows and yearbooks
and the way adults acted that people in the 1950s and 60s had been
even more conformist than us. The mid-century model was already
starting to get old. But that was not how we saw it at the time.
We would at most have said that one could be a bit more daring in
1975 than 1965. And indeed, things hadn't changed much yet.
But change was coming soon. And when the Duplo economy started to
disintegrate, it disintegrated in several different ways at once.
Vertically integrated companies literally dis-integrated because
it was more efficient to. Incumbents faced new competitors as (a)
markets went global and (b) technical innovation started to trump
economies of scale, turning size from an asset into a liability.
Smaller companies were increasingly able to survive as formerly
narrow channels to consumers broadened. Markets themselves started
to change faster, as whole new categories of products appeared. And
last but not least, the federal government, which had previously
smiled upon J. P. Morgan's world as the natural state of things,
began to realize it wasn't the last word after all.
What J. P. Morgan was to the horizontal axis, Henry Ford was to the
vertical. He wanted to do everything himself. The giant plant he
built at River Rouge between 1917 and 1928 literally took in iron
ore at one end and sent cars out the other. 100,000 people worked
there. At the time it seemed the future. But that is not how car
companies operate today. Now much of the design and manufacturing
happens in a long supply chain, whose products the car companies
ultimately assemble and sell. The reason car companies operate
this way is that it works better. Each company in the supply chain
focuses on what they know best. And they each have to do it well
or they can be swapped out for another supplier.Why didn't Henry Ford realize that networks of cooperating companies
work better than a single big company? One reason is that supplier
networks take a while to evolve. In 1917, doing everything himself
seemed to Ford the only way to get the scale he needed. And the
second reason is that if you want to solve a problem using a network
of cooperating companies, you have to be able to coordinate their
efforts, and you can do that much better with computers. Computers
reduce the transaction costs that Coase argued are the raison d'etre
of corporations. That is a fundamental change.
In the early 20th century, big companies were synonymous with
efficiency. In the late 20th century they were synonymous with
inefficiency. To some extent this was because the companies
themselves had become sclerotic. But it was also because our
standards were higher.
It wasn't just within existing industries that change occurred.
The industries themselves changed. It became possible to make lots
of new things, and sometimes the existing companies weren't the
ones who did it best.
Microcomputers are a classic example. The market was pioneered by
upstarts like Apple. When it got big enough, IBM decided it was
worth paying attention to. At the time IBM completely dominated
the computer industry. They assumed that all they had to do, now
that this market was ripe, was to reach out and pick it. Most
people at the time would have agreed with them. But what happened
next illustrated how much more complicated the world had become.
IBM did launch a microcomputer. Though quite successful, it did
not crush Apple. But even more importantly, IBM itself ended up
being supplanted by a supplier coming in from the side — from
software, which didn't even seem to be the same business. IBM's
big mistake was to accept a non-exclusive license for DOS. It must
have seemed a safe move at the time. No other computer manufacturer
had ever been able to outsell them. What difference did it make if
other manufacturers could offer DOS too? The result of that
miscalculation was an explosion of inexpensive PC clones. Microsoft
now owned the PC standard, and the customer. And the microcomputer
business ended up being Apple vs Microsoft.
Basically, Apple bumped IBM and then Microsoft stole its wallet.
That sort of thing did not happen to big companies in mid-century.
But it was going to happen increasingly often in the future.
Change happened mostly by itself in the computer business. In other
industries, legal obstacles had to be removed first. Many of the
mid-century oligopolies had been anointed by the federal government
with policies (and in wartime, large orders) that kept out competitors.
This didn't seem as dubious to government officials at the time as
it sounds to us. They felt a two-party system ensured sufficient
competition in politics. It ought to work for business too.
Gradually the government realized that anti-competitive policies
were doing more harm than good, and during the Carter administration
it started to remove them. The word used for this process was
misleadingly narrow: deregulation. What was really happening was
de-oligopolization. It happened to one industry after another.
Two of the most visible to consumers were air travel and long-distance
phone service, which both became dramatically cheaper after
deregulation.
Deregulation also contributed to the wave of hostile takeovers in
the 1980s. In the old days the only limit on the inefficiency of
companies, short of actual bankruptcy, was the inefficiency of their
competitors. Now companies had to face absolute rather than relative
standards. Any public company that didn't generate sufficient
returns on its assets risked having its management replaced with
one that would. Often the new managers did this by breaking companies
up into components that were more valuable separately.
[17]
Version 1 of the national economy consisted of a few big blocks
whose relationships were negotiated in back rooms by a handful of
executives, politicians, regulators, and labor leaders. Version 2
was higher resolution: there were more companies, of more different
sizes, making more different things, and their relationships changed
faster. In this world there were still plenty of back room negotiations,
but more was left to market forces. Which further accelerated the
fragmentation.
It's a little misleading to talk of versions when describing a
gradual process, but not as misleading as it might seem. There was
a lot of change in a few decades, and what we ended up with was
qualitatively different. The companies in the S&P 500 in 1958 had
been there an average of 61 years. By 2012 that number was 18 years.
[18]
The breakup of the Duplo economy happened simultaneously with the
spread of computing power. To what extent were computers a precondition?
It would take a book to answer that. Obviously the spread of computing
power was a precondition for the rise of startups. I suspect it
was for most of what happened in finance too. But was it a
precondition for globalization or the LBO wave? I don't know, but
I wouldn't discount the possibility. It may be that the refragmentation
was driven by computers in the way the industrial revolution was
driven by steam engines. Whether or not computers were a precondition,
they have certainly accelerated it.
The new fluidity of companies changed people's relationships with
their employers. Why climb a corporate ladder that might be yanked
out from under you? Ambitious people started to think of a career
less as climbing a single ladder than as a series of jobs that might
be at different companies. More movement (or even potential movement)
between companies introduced more competition in salaries. Plus
as companies became smaller it became easier to estimate how much
an employee contributed to the company's revenue. Both changes
drove salaries toward market price. And since people vary dramatically
in productivity, paying market price meant salaries started to
diverge.
By no coincidence it was in the early 1980s that the term "yuppie"
was coined. That word is not much used now, because the phenomenon
it describes is so taken for granted, but at the time it was a label
for something novel. Yuppies were young professionals who made lots
of money. To someone in their twenties today, this wouldn't seem
worth naming. Why wouldn't young professionals make lots of money?
But until the 1980s, being underpaid early in your career was part
of what it meant to be a professional. Young professionals were
paying their dues, working their way up the ladder. The rewards
would come later. What was novel about yuppies was that they wanted
market price for the work they were doing now.
The first yuppies did not work for startups. That was still in the
future. Nor did they work for big companies. They were professionals
working in fields like law, finance, and consulting. But their example
rapidly inspired their peers. Once they saw that new BMW 325i, they
wanted one too.
Underpaying people at the beginning of their career only works if
everyone does it. Once some employer breaks ranks, everyone else
has to, or they can't get good people. And once started this process
spreads through the whole economy, because at the beginnings of
people's careers they can easily switch not merely employers but
industries.
But not all young professionals benefitted. You had to produce to
get paid a lot. It was no coincidence that the first yuppies worked
in fields where it was easy to measure that.
More generally, an idea was returning whose name sounds old-fashioned
precisely because it was so rare for so long: that you could make
your fortune. As in the past there were multiple ways to do it.
Some made their fortunes by creating wealth, and others by playing
zero-sum games. But once it became possible to make one's fortune,
the ambitious had to decide whether or not to. A physicist who
chose physics over Wall Street in 1990 was making a sacrifice that
a physicist in 1960 didn't have to think about.
The idea even flowed back into big companies. CEOs of big companies
make more now than they used to, and I think much of the reason is
prestige. In 1960, corporate CEOs had immense prestige. They were
the winners of the only economic game in town. But if they made as
little now as they did then, in real dollar terms, they'd seem like
small fry compared to professional athletes and whiz kids making
millions from startups and hedge funds. They don't like that idea,
so now they try to get as much as they can, which is more than they
had been getting.
[19]
Meanwhile a similar fragmentation was happening at the other end
of the economic scale. As big companies' oligopolies became less
secure, they were less able to pass costs on to customers and thus
less willing to overpay for labor. And as the Duplo world of a few
big blocks fragmented into many companies of different sizes — some
of them overseas — it became harder for unions to enforce their
monopolies. As a result workers' wages also tended toward market
price. Which (inevitably, if unions had been doing their job) tended
to be lower. Perhaps dramatically so, if automation had decreased
the need for some kind of work.
And just as the mid-century model induced social as well as economic
cohesion, its breakup brought social as well as economic fragmentation.
People started to dress and act differently. Those who would later
be called the "creative class" became more mobile. People who didn't
care much for religion felt less pressure to go to church for
appearances' sake, while those who liked it a lot opted for
increasingly colorful forms. Some switched from meat loaf to tofu,
and others to Hot Pockets. Some switched from driving Ford sedans
to driving small imported cars, and others to driving SUVs. Kids
who went to private schools or wished they did started to dress
"preppy," and kids who wanted to seem rebellious made a conscious
effort to look disreputable. In a hundred ways people spread apart.
[20]
Almost four decades later, fragmentation is still increasing. Has
it been net good or bad? I don't know; the question may be
unanswerable. Not entirely bad though. We take for granted the
forms of fragmentation we like, and worry only about the ones we
don't. But as someone who caught the tail end of mid-century
conformism,
I can tell you it was no utopia.
[21]
My goal here is not to say whether fragmentation has been good or
bad, just to explain why it's happening. With the centripetal
forces of total war and 20th century oligopoly mostly gone, what
will happen next? And more specifically, is it possible to reverse
some of the fragmentation we've seen?
If it is, it will have to happen piecemeal. You can't reproduce
mid-century cohesion the way it was originally produced. It would
be insane to go to war just to induce more national unity. And
once you understand the degree to which the economic history of the
20th century was a low-res version 1, it's clear you can't reproduce
that either.
20th century cohesion was something that happened at least in a
sense naturally. The war was due mostly to external forces, and
the Duplo economy was an evolutionary phase. If you want cohesion
now, you'd have to induce it deliberately. And it's not obvious
how. I suspect the best we'll be able to do is address the symptoms
of fragmentation. But that may be enough.
The form of fragmentation people worry most about lately is economic inequality, and if you want to eliminate
that you're up against a truly formidable headwind that has
been in operation since the stone age. Technology.
Technology is
a lever. It magnifies work. And the lever not only grows increasingly
long, but the rate at which it grows is itself increasing.
Which in turn means the variation in the amount of wealth people
can create has not only been increasing, but accelerating. The
unusual conditions that prevailed in the mid 20th century masked
this underlying trend. The ambitious had little choice but to join
large organizations that made them march in step with lots of other
people — literally in the case of the armed forces, figuratively
in the case of big corporations. Even if the big corporations had
wanted to pay people proportionate to their value, they couldn't
have figured out how. But that constraint has gone now. Ever since
it started to erode in the 1970s, we've seen the underlying forces
at work again.
[22]
Not everyone who gets rich now does it by creating wealth, certainly.
But a significant number do, and the Baumol Effect means all their
peers get dragged along too.
[23]
And as long as it's possible to
get rich by creating wealth, the default tendency will be for
economic inequality to increase. Even if you eliminate all the
other ways to get rich. You can mitigate this with subsidies at
the bottom and taxes at the top, but unless taxes are high enough
to discourage people from creating wealth, you're always going to
be fighting a losing battle against increasing variation in
productivity.
[24]
That form of fragmentation, like the others, is here to stay. Or
rather, back to stay. Nothing is forever, but the tendency toward
fragmentation should be more forever than most things, precisely
because it's not due to any particular cause. It's simply a reversion
to the mean. When Rockefeller said individualism was gone, he was
right for a hundred years. It's back now, and that's likely to be
true for longer.
I worry that if we don't acknowledge this, we're headed for trouble.
If we think 20th century cohesion disappeared because of few policy
tweaks, we'll be deluded into thinking we can get it back (minus
the bad parts, somehow) with a few countertweaks. And then we'll
waste our time trying to eliminate fragmentation, when we'd be
better off thinking about how to mitigate its consequences.
Notes
[1]
Lester Thurow, writing in 1975, said the wage differentials
prevailing at the end of World War II had become so embedded that
they "were regarded as 'just' even after the egalitarian pressures
of World War II had disappeared. Basically, the same differentials
exist to this day, thirty years later." But Goldin and Margo think
market forces in the postwar period also helped preserve the wartime
compression of wages — specifically increased demand for unskilled
workers, and oversupply of educated ones.(Oddly enough, the American custom of having employers pay for
health insurance derives from efforts by businesses to circumvent
NWLB wage controls in order to attract workers.)[2]
As always, tax rates don't tell the whole story. There were
lots of exemptions, especially for individuals. And in World War
II the tax codes were so new that the government had little acquired
immunity to tax avoidance. If the rich paid high taxes during the
war it was more because they wanted to than because they had to.After the war, federal tax receipts as a percentage of GDP were
about the same as they are now. In fact, for the entire period since
the war, tax receipts have stayed close to 18% of GDP, despite
dramatic changes in tax rates. The lowest point occurred when
marginal income tax rates were highest: 14.1% in 1950. Looking at
the data, it's hard to avoid the conclusion that tax rates have had
little effect on what people actually paid.[3]
Though in fact the decade preceding the war had been a time
of unprecedented federal power, in response to the Depression.
Which is not entirely a coincidence, because the Depression was one
of the causes of the war. In many ways the New Deal was a sort of
dress rehearsal for the measures the federal government took during
wartime. The wartime versions were much more drastic and more
pervasive though. As Anthony Badger wrote, "for many Americans the
decisive change in their experiences came not with the New Deal but
with World War II."[4]
I don't know enough about the origins of the world wars to
say, but it's not inconceivable they were connected to the rise of
big corporations. If that were the case, 20th century cohesion would
have a single cause.[5]
More precisely, there was a bimodal economy consisting, in
Galbraith's words, of "the world of the technically dynamic, massively
capitalized and highly organized corporations on the one hand and
the hundreds of thousands of small and traditional proprietors on
the other." Money, prestige, and power were concentrated in the
former, and there was near zero crossover.[6]
I wonder how much of the decline in families eating together
was due to the decline in families watching TV together afterward.[7]
I know when this happened because it was the season Dallas
premiered. Everyone else was talking about what was happening on
Dallas, and I had no idea what they meant.[8]
I didn't realize it till I started doing research for this
essay, but the meretriciousness of the products I grew up with is
a well-known byproduct of oligopoly. When companies can't compete
on price, they compete on tailfins.[9]
Monroeville Mall was at the time of its completion in 1969
the largest in the country. In the late 1970s the movie Dawn of
the Dead was shot there. Apparently the mall was not just the
location of the movie, but its inspiration; the crowds of shoppers
drifting through this huge mall reminded George Romero of zombies.
My first job was scooping ice cream in the Baskin-Robbins.[10]
Labor unions were exempted from antitrust laws by the Clayton
Antitrust Act in 1914 on the grounds that a person's work is not
"a commodity or article of commerce." I wonder if that means service
companies are also exempt.[11]
The relationships between unions and unionized companies can
even be symbiotic, because unions will exert political pressure to
protect their hosts. According to Michael Lind, when politicians
tried to attack the A&P supermarket chain because it was putting
local grocery stores out of business, "A&P successfully defended
itself by allowing the unionization of its workforce in 1938, thereby
gaining organized labor as a constituency." I've seen this phenomenon
myself: hotel unions are responsible for more of the political
pressure against Airbnb than hotel companies.[12]
Galbraith was clearly puzzled that corporate executives would
work so hard to make money for other people (the shareholders)
instead of themselves. He devoted much of The New Industrial
State to trying to figure this out.His theory was that professionalism had replaced money as a motive,
and that modern corporate executives were, like (good) scientists,
motivated less by financial rewards than by the desire to do good
work and thereby earn the respect of their peers. There is something
in this, though I think lack of movement between companies combined
with self-interest explains much of observed behavior.[13]
Galbraith (p. 94) says a 1952 study of the 800 highest paid
executives at 300 big corporations found that three quarters of
them had been with their company for more than 20 years.[14]
It seems likely that in the first third of the 20th century
executive salaries were low partly because companies then were more
dependent on banks, who would have disapproved if executives got
too much. This was certainly true in the beginning. The first big
company CEOs were J. P. Morgan's hired hands.Companies didn't start to finance themselves with retained earnings
till the 1920s. Till then they had to pay out their earnings in
dividends, and so depended on banks for capital for expansion.
Bankers continued to sit on corporate boards till the Glass-Steagall
act in 1933.By mid-century big companies funded 3/4 of their growth from earnings.
But the early years of bank dependence, reinforced by the financial
controls of World War II, must have had a big effect on social
conventions about executive salaries. So it may be that the lack
of movement between companies was as much the effect of low salaries
as the cause.Incidentally, the switch in the 1920s to financing growth with
retained earnings was one cause of the 1929 crash. The banks now
had to find someone else to lend to, so they made more margin loans.[15]
Even now it's hard to get them to. One of the things I find
hardest to get into the heads of would-be startup founders is how
important it is to do certain kinds of menial work early in the
life of a company. Doing things that don't
scale is to how Henry Ford got started as a high-fiber diet is
to the traditional peasant's diet: they had no choice but to do the
right thing, while we have to make a conscious effort.[16]
Founders weren't celebrated in the press when I was a kid.
"Our founder" meant a photograph of a severe-looking man with a
walrus mustache and a wing collar who had died decades ago. The
thing to be when I was a kid was an executive. If you weren't
around then it's hard to grasp the cachet that term had. The fancy
version of everything was called the "executive" model.[17]
The wave of hostile takeovers in the 1980s was enabled by a
combination of circumstances: court decisions striking down state
anti-takeover laws, starting with the Supreme Court's 1982 decision
in Edgar v. MITE Corp.; the Reagan administration's comparatively
sympathetic attitude toward takeovers; the Depository Institutions
Act of 1982, which allowed banks and savings and loans to buy
corporate bonds; a new SEC rule issued in 1982 (rule 415) that made
it possible to bring corporate bonds to market faster; the creation
of the junk bond business by Michael Milken; a vogue for conglomerates
in the preceding period that caused many companies to be combined
that never should have been; a decade of inflation that left many
public companies trading below the value of their assets; and not
least, the increasing complacency of managements.[18]
Foster, Richard. "Creative Destruction Whips through Corporate
America." Innosight, February 2012.[19]
CEOs of big companies may be overpaid. I don't know enough
about big companies to say. But it is certainly not impossible for
a CEO to make 200x as much difference to a company's revenues as
the average employee. Look at what Steve Jobs did for Apple when
he came back as CEO. It would have been a good deal for the board
to give him 95% of the company. Apple's market cap the day Steve
came back in July 1997 was 1.73 billion. 5% of Apple now (January
2016) would be worth about 30 billion. And it would not be if Steve
hadn't come back; Apple probably wouldn't even exist anymore.Merely including Steve in the sample might be enough to answer the
question of whether public company CEOs in the aggregate are overpaid.
And that is not as facile a trick as it might seem, because the
broader your holdings, the more the aggregate is what you care
about.[20]
The late 1960s were famous for social upheaval. But that was
more rebellion (which can happen in any era if people are provoked
sufficiently) than fragmentation. You're not seeing fragmentation
unless you see people breaking off to both left and right.[21]
Globally the trend has been in the other direction. While
the US is becoming more fragmented, the world as a whole is becoming
less fragmented, and mostly in good ways.[22]
There were a handful of ways to make a fortune in the mid
20th century. The main one was drilling for oil, which was open
to newcomers because it was not something big companies could
dominate through economies of scale. How did individuals accumulate
large fortunes in an era of such high taxes? Giant tax loopholes
defended by two of the most powerful men in Congress, Sam Rayburn
and Lyndon Johnson.But becoming a Texas oilman was not in 1950 something one could
aspire to the way starting a startup or going to work on Wall Street
were in 2000, because (a) there was a strong local component and
(b) success depended so much on luck.[23]
The Baumol Effect induced by startups is very visible in
Silicon Valley. Google will pay people millions of dollars a year
to keep them from leaving to start or join startups.[24]
I'm not claiming variation in productivity is the only cause
of economic inequality in the US. But it's a significant cause, and
it will become as big a cause as it needs to, in the sense that if
you ban other ways to get rich, people who want to get rich will
use this route instead.
Thanks to Sam Altman, Trevor Blackwell, Paul Buchheit, Patrick
Collison, Ron Conway, Chris Dixon, Benedict Evans, Richard Florida,
Ben Horowitz, Jessica Livingston, Robert Morris, Tim O'Reilly, Geoff
Ralston, Max Roser, Alexia Tsotsis, and Qasar Younis for reading
drafts of this. Max also told me about several valuable sources.
Bibliography
Allen, Frederick Lewis. The Big Change. Harper, 1952.
Averitt, Robert. The Dual Economy. Norton, 1968.
Badger, Anthony. The New Deal. Hill and Wang, 1989.
Bainbridge, John. The Super-Americans. Doubleday, 1961.
Beatty, Jack. Colossus. Broadway, 2001.
Brinkley, Douglas. Wheels for the World. Viking, 2003.
Brownlee, W. Elliot. Federal Taxation in America. Cambridge, 1996.
Chandler, Alfred. The Visible Hand. Harvard, 1977.
Chernow, Ron. The House of Morgan. Simon & Schuster, 1990.
Chernow, Ron. Titan: The Life of John D. Rockefeller. Random House, 1998.
Galbraith, John. The New Industrial State. Houghton Mifflin, 1967.
Goldin, Claudia and Robert A. Margo. "The Great Compression: The Wage Structure in the United States at Mid-Century." NBER Working Paper 3817, 1991.
Gordon, John. An Empire of Wealth. HarperCollins, 2004.
Klein, Maury. The Genesis of Industrial America, 1870-1920. Cambridge, 2007.
Lind, Michael. Land of Promise. HarperCollins, 2012.
Micklethwait, John, and Adrian Wooldridge. The Company. Modern Library, 2003.
Nasaw, David. Andrew Carnegie. Penguin, 2006.
Sobel, Robert. The Age of Giant Corporations. Praeger, 1993.
Thurow, Lester. Generating Inequality: Mechanisms of Distribution. Basic Books, 1975.
Witte, John. The Politics and Development of the Federal Income Tax. Wisconsin, 1985.
| 2024-11-08T00:42:53 | en | train
10,826,854 | xAMI | 2016-01-02T17:25:51 | Dionaea Honeypot Obfuscation – Avoiding Service Identification | null | http://devwerks.net/blog/15/dionaea-honeypot-obfuscation-avoiding-service-identification/ | 2 | 0 | null | null | null | no_error | devWerks | Webdesign und Webentwicklung | Marketing, Shop, SEO, Grafik, Homepage - Dionaea Honeypot Obfuscation - Avoiding service identification | null | devWerks | Webdesign und Webentwicklung |
Posted: 2016-01-02
by Admin
For some days now we have been working on a fork of the great Dionaea Honeypot. Dionaea is a low-interaction honeypot that captures attack payloads and malware by offering a variety of network services.
We modify some of its services to avoid identification by network scanners like Nmap.
All modifications below are in our repository on github https://github.com/devwerks/dionaea
We started an Intense scan with Nmap to see which services are identified and associated with Dionaea:
nmap -T4 -A -v host
FTP
21/tcp open ftp Dionaea honeypot ftpd
We search for the string "Dionaea honeypot ftpd" in the file nmap-service-probes. There we can see that Nmap detects the welcome message sent by the Dionaea FTP service. So we changed the message to show a ProFTPD server.
- self.reply(WELCOME_MSG, "Welcome to the ftp service")
+ self.reply(WELCOME_MSG, "ProFTPD 1.2.8 Server")
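To sanity-check what a scanner actually sees after this change, a tiny client along the following lines can print the new greeting (a rough sketch: the address below is just a placeholder for your own Dionaea instance, and we assume the service still answers on the standard FTP port 21):

import socket

HOST, PORT = "192.0.2.10", 21  # placeholder address - point this at your own honeypot

with socket.create_connection((HOST, PORT), timeout=5) as s:
    banner = s.recv(1024).decode(errors="replace")
    # The greeting should now advertise ProFTPD instead of the default
    # "Welcome to the ftp service" message.
    print(banner)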
HTTP
443/tcp open https Dionaea honeypot httpd
The same procedure as above. We respond now with the protocol HTTP/1.1 instead of HTTP/1.0.
- self.send("%s %d %s\r\n" % ("HTTP/1.0", code, message))
+ self.send("%s %d %s\r\n" % ("HTTP/1.1", code, message))
This should also work for port 80 (http).
SSL Certificate
Issuer: commonName=Nepenthes Development Team/organizationName=dionaea.carnivore.it/countryName=DE
This was not directly detected by Nmap, but it can be found by an attacker by reading the Nmap output.
- MBSTRING_ASC, (const unsigned char *)"Nepenthes Development Team", -1, -1, 0);
+ MBSTRING_ASC, (const unsigned char *)"RapidSSL SHA256 CA", -1, -1, 0);
- MBSTRING_ASC, (const unsigned char *)"dionaea.carnivore.it", -1, -1, 0);
+ MBSTRING_ASC, (const unsigned char *)"GeoTrust Inc.", -1, -1, 0);
We changed this to look like a real certificate.
How to use
To use dionaea, simply copy from https://github.com/devwerks/dionaea. The configuration is the same.
If you run into issues, feel free to get in touch on Twitter, check the current issues or create a new one. Patches are also welcome.
| 2024-11-08T17:51:41 | en | train |
10,826,872 | cl42 | 2016-01-02T17:31:24 | What Is Going to Happen in 2016 | null | http://avc.com/2016/01/what-is-going-to-happen-in-2016/ | 5 | 0 | null | null | null | no_error | What Is Going To Happen In 2016 | -0001-11-30T00:00:00+00:00 | Fred Wilson |
It’s easier to predict the medium to long term future. We will be able to tell our cars to take us home after a late night of new year’s partying within a decade. I sat next to a life sciences investor at a dinner a couple months ago who told me cancer will be a curable disease within the next decade. As amazing as these things sound, they are coming and soon.
But what will happen this year that we are now in? That’s a bit trickier. But I will take some shots this morning.
Oculus will finally ship the Rift in 2016. Games and other VR apps for the Rift will be released. We just learned that the Touch controller won’t ship with the Rift and is delayed until later in 2016. I believe the initial commercial versions of Oculus technology will underwhelm. The technology has been so hyped and it is hard to live up to that. Games will be the strongest early use case, but not everyone is going to want to put on a headset to play a game. I think VR will only reach its true potential when they figure out how to deploy it in a more natural way.
We will see a new form of wearables take off in 2016. The wrist is not the only place we might want to wear a computer on our bodies. If I had to guess, I would bet on something we wear in or on our ears.
One of the big four will falter in 2016. My guess is Apple. They did not have a great year in 2015 and I’m thinking that it will get worse in 2016.
The FAA regulations on the commercial drone industry will turn out to be a boon for the drone sector, legitimizing drone flights for all sorts of use cases and establishing clear rules for what is acceptable and what is not.
The trend towards publishing inside of social networks (Facebook being the most popular one) will go badly for a number of high profile publishers who won’t be able to monetize as effectively inside social networks and there will be at least one high profile victim of this strategy who will go under as a result.
Time Warner will spin off its HBO business to create a direct competitor to Netflix and the independent HBO will trade at a higher market cap than the entire Time Warner business did pre spinoff.
Bitcoin finally finds a killer app with the emergence of Open Bazaar protocol powered zero take rate marketplaces. (note that OB1, an open bazaar powered service, is a USV portfolio company).
Slack will become so pervasive inside of enterprises that spam will become a problem and third party Slack spam filters will emerge. At the same time, the Slack platform will take off and building Slack bots will become the next big thing in enterprise software.
Donald Trump will be the Republican nominee and he will attack the tech sector for its support of immigrant labor. As a result the tech sector will line up behind Hillary Clinton who will be elected the first woman President.
Markdown mania will hit the venture capital sector as VC firms follow Fidelity’s lead and start aggressively taking down the valuations in their portfolios. Crunchbase will start capturing this valuation data and will become a de-facto “yahoo finance” for the startup sector. Employees will realize their options are underwater and will start leaving tech startups in droves.
Some of these predictions border on the ridiculous and that is somewhat intentional. I think there is an element of truth (or at least possibility) in all of them. And I will come back to this list a year from now and review the results.
Best wishes to everyone for a happy and healthy 2016.
| 2024-11-08T15:57:15 | en | train |
10,826,954 | Davertron | 2016-01-02T17:51:39 | Where Flux Went Wrong | null | http://technologyadvice.github.io/where-flux-went-wrong/ | 2 | 0 | null | null | null | no_error | Where Flux Went Wrong | 2015-12-31 15:00:00 +0000 | null |
Already comfortable with the history around ReactJS and Flux? Skip to Flux and Component State to jump right into the problem statement.
Ahem. When ReactJS first entered the development scene it attracted front-end developers across the world with its promise to introduce some semblance of sanity back into the dreaded Single Page Application. The framework, commonly referred to as the V in MVC, popularized the concept of componentized applications; that is, everything you see on the page is a React component. “Rid yourself of complicated controllers and unintelligible views!” it shouted from the rooftops (if you’re not cool with the personification of a JavaScript framework, just imagine it’s Pete Hunt), and, of course, developers rejoiced and all was right in the world.
Except it never works like that. Many a blog writer saw fit to speak out against the blasphemous JSX and comingling of markup and JavaScript; a war which still rages today, though with far less fury. The intricacies of these arguments are not vital to the point of this post, but it's important to note that React didn't receive unanimous support upon release, nor can it boast such a claim even today. What is important, however, is that it introduced a paradigm shift for front end development. Developers were no longer fraught with fear over a labor necessitated by nearly all applications based on jQuery, AngularJS, or any of their kin: imperative DOM manipulation. They traded that imperative complexity for something more declarative: properties (props if you're hip) enter a React component, travel through the magical lands of the render cycle and VirtualDOM and, arriving at the end of their journey, some diffing occurs and they find themselves all grown up as part of the real DOM.
The internals are complicated, but the effect is simple: no more stressing about the DOM. Still, being only the V in MVC, some sort of larger structure had to be built around React in order to actually do things; you know, talk to the server, respond to events, and, most importantly, write TodoMVC’s. This is where Flux comes in, and it’s also about the point where I begin to argue that React really isn’t just the V in MVC because it encourages and lends itself to a not-so-MVC approach to application architecture. So what is that approach? You’ve probably heard of it: Flux. There isn’t enough time to cover the full history of Flux and all 52 of its flavors, but the gist is: components/views don’t manipulate application state directly, they dispatch actions that effect changes in stores, and those changes flow back through the application from top to bottom. The result: one-way data flow.
Here’s what just one small piece of this (the component) might look like:
import React from 'react'
import autobind from 'autobind-decorator'
import TodoStore from 'stores/todo'
import * as TodoActions from 'actions/todo'
// Sick of TodoMVC? Me too.
class TodoList extends React.Component {
constructor () {
super()
this.state = {
todos: TodoStore.getTodos()
}
}
componentDidMount () {
TodoStore.addChangeListener(this._onTodoStoreChange)
}
componentWillUnmount () {
TodoStore.removeChangeListener(this._onTodoStoreChange)
}
@autobind
_onTodoStoreChange () {
this.setState({
todos: TodoStore.getTodos()
})
}
_onToggleTodoComplete (id) {
TodoActions.toggleTodoComplete(id)
}
render () {
return (
<ul>
{this.state.todos.map(todo => (
<li key={todo.id} onClick={() => this._onToggleTodoComplete(todo.id)}>
{todo.text}
</li>
))}
</ul>
)
}
}
As you can see, the todos represented in the view are not managed by the component but instead live in a store. We’ve managed to create a single source of truth for the todos by way of the TodoStore. Some state has been eliminated from our component, but it’s not perfect.
Flux and Component State
So what exactly is the problem with traditional Flux? Well, surprisingly, it’s not the verbosity of it all. Many initial abstractions sought to reduce syntactical boilerplate but missed something so painfully obvious it hurts to look back on and realize that you didn’t see it either. The real issue with its design is that application state (read: stuff from stores) must be applied to local component state. How does one go about testing a component given this architecture? Well, now that it’s coupled to one or more stores, you’ll have the added work of mocking stores and actions before you can properly determine what your component looks like at the end of it all (and don’t forget to check the store shape, too). Yes, the core problem with this pattern is the usage of this.state.
State is the root of all evil
- Pete Hunt
So of course React avoids state, right? Right Pete Hunt?
Proofing this post, it sounds like I have a beef with Pete Hunt, but I'm only kidding; I wouldn't be where I am today without his inspirational talks. However, in all seriousness, state is in fact a core feature of React components – it's literally called this.state – and it is the yin to a component's yang (props). Local state makes it difficult to determine how a component will render because the logic determining its output is internal to the instance and can change without you ever knowing, and that's just not cool, man. What you have here is a rotten case of an impure function and, if that's not enough to set you to quaking, not only does that make testing more difficult, your component's dependence on a specific Flux store prevents it from being reused in different contexts.
Looking back at our previous example, what makes a <TodoList /> component so special that it needs to know how to retrieve its own data? Its objective is simply to render a list of todos, maybe have a handler in there to toggle completion, but nothing more, and even that handler can be passed via props; it has no need for internal state. Yet all of the early flux abstractions, while often reducing the amount of boilerplate needed to apply some global state to local component state, still did exactly that: relied on this.state. We're not much better off than in the pre-React days at this point; yes, we've gained some benefits with the VDOM and declarative rendering, but we're still left with local state that severely complicates testing, couples components to specific stores, and increases the application's cognitive overhead (I will jump through hoops all day not to say "it makes applications difficult to reason about").
We need a new approach.
A Better Way Forward
So what can be done resolve this predicament? Enter Redux, a Flux paradigm that is better for all the ways that it eschews Flux’s original implementation. There are many things that make Redux great, but the focus for this post is something specific that I find many Redux posts gloss over: react-redux’s connect decorator. What Dan Abramov, Redux’s creator and our lord and savior, figured out was that higher-order components could be used to abstract away the store subscription in a way that not only reduced boilerplate, but totes flipped the script on us and altered how application state enters a component. Let’s take a look:
import React from 'react'
import { connect } from 'react-redux'
import { actions as TodoActions } from 'modules/todo'
// Notice that we can export the raw class here as a named
// export, which means we can easily use the non-connected
// version in our tests or elsewhere in the application.
export class TodoList extends React.Component {
static propTypes = {
dispatch: React.PropTypes.func.isRequired,
todos: React.PropTypes.array.isRequired,
}
_onToggleTodoComplete (id) {
this.props.dispatch(TodoActions.toggleTodoComplete(id))
}
render () {
return (
<ul>
{this.props.todos.map(todo => (
<li key={todo.id} onClick={() => this._onToggleTodoComplete(todo.id)}>
{todo.text}
</li>
))}
</ul>
)
}
}
export default connect(state => ({
todos: state.todos
}))(TodoList)
Now, since the Redux documentation is awesome I’m not going to spend time covering this in great detail, but we’ll discuss the most important point: where is the <TodoList /> component receiving the todos from? And I don’t mean that they live in a store, but rather that they are entering the component as props.
The connect decorator is a higher-order component, a React component that wraps (i.e. renders) another component. Component state hasn’t completely packed its bags, since it is still used within the component generated by connect, but it’s been abstracted away and we don’t have to fret over it. We invoke connect just like we would any other function, passing it some arguments (in this example we provide mapStateToProps to tell it exactly what state slice we want from the global state) before finally handing it our component. When the higher-order component renders, it uses the arguments we provided to determine what props to pass down. That’s right, our component is rendered inside of it, which means we receive application state just like we would anything else in React land: as good old fashioned props.
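To make that concrete, here is a stripped-down sketch of what a connect-style wrapper does. To be clear, this is not react-redux's actual implementation; it assumes a plain Redux store imported from a hypothetical 'store' module (react-redux really obtains the store from context via <Provider>), but it shows how global state re-enters the tree as ordinary props:

import React from 'react'
import store from 'store' // hypothetical module that exports a Redux store

// connectSketch(mapStateToProps) returns a function that wraps a component
// in a container owning all of the subscription plumbing.
function connectSketch (mapStateToProps) {
  return function wrap (WrappedComponent) {
    return class Container extends React.Component {
      constructor (props) {
        super(props)
        this.state = mapStateToProps(store.getState())
      }
      componentDidMount () {
        // store.subscribe returns an unsubscribe function
        this.unsubscribe = store.subscribe(() => {
          this.setState(mapStateToProps(store.getState()))
        })
      }
      componentWillUnmount () {
        this.unsubscribe()
      }
      render () {
        // The wrapped component only ever sees plain props; the state and
        // the store subscription live up here in the container.
        return (
          <WrappedComponent
            {...this.props}
            {...this.state}
            dispatch={store.dispatch}
          />
        )
      }
    }
  }
}

Squint at it and the testing story becomes obvious: everything below the container is a pure function of its props.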
This seemingly simple change has enormous benefits in practice:
The base component (TodoList) can now be tested entirely independently from any stores or global state. We can simply import it into our favorite test suite, pass it some props, and see how it renders. No need to mock any stores (see the sketch just after this list).
The base component can be freely shared across the application, since the class isn’t directly tied to any one store. We can wrap it in entirely different higher order components, connect it with totally different state selectors, or even just pass it a plain-old array of todos.
You can now rest easy knowing that if you give your component a set of props, you know exactly how it will function.
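Here is roughly what that first point buys you in practice: a test of the raw, unconnected class using nothing but plain props (Jest and Enzyme are assumed here purely for illustration, as is the 'components/TodoList' path; any renderer works):

import React from 'react'
import { shallow } from 'enzyme'
import { TodoList } from 'components/TodoList' // the named (unconnected) export

it('renders one <li> per todo', () => {
  const todos = [
    { id: 1, text: 'Write post' },
    { id: 2, text: 'Ship it' }
  ]
  // No store, no mocks: just props in, markup out.
  const wrapper = shallow(<TodoList todos={todos} dispatch={() => {}} />)
  expect(wrapper.find('li').length).toBe(2)
})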
The point is: the component no longer cares where it gets its data from, that’s not its concern (and rightly so). And though in this case we’re using react-redux to create the container, the component itself is not actually tied to any specific framework and now behaves just like any other simpleminded React component. There are a slew of other benefits to this pattern, such as the ability to implement performance optimizations for state selectors (see: reselect), but at the end of the day the important part is that our components are once again sane and devoid of local state.
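On the reselect note, a minimal sketch of a memoized selector looks something like this (assuming a state shape where each todo carries a complete flag); you would then use it inside mapStateToProps when calling connect:

import { createSelector } from 'reselect'

const selectTodos = state => state.todos

// Recomputed only when state.todos changes; otherwise the cached result is returned.
const selectCompletedTodos = createSelector(
  [selectTodos],
  todos => todos.filter(todo => todo.complete)
)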
In Summary
State is a major contributor to application complexity, and looking back it's part of the reason why (to many, not all) React seemed like such a breath of fresh air after the two-way data binding craze. It showed us that we could break even the most complex applications down into manageable pieces, so let's not turn our backs on that by reintroducing state where it's not needed. It's been argued, quite convincingly, that component state should be avoided, even outside of the context of Flux, and doing so even opens up potential future optimizations. Sometimes all you need is a new way of looking at things.
Thanks Dan.
| 2024-11-08T14:26:29 | en | train |
10,826,972 | secfirstmd | 2016-01-02T17:54:27 | The intelligence behind tracking digital evidence | null | http://www.irishexaminer.com/viewpoints/analysis/future-of-mobile-the-intelligence-behind-tracking-digital-evidence-374065.html | 2 | 0 | null | null | null | no_error | FUTURE OF MOBILE: The intelligence behind tracking digital evidence | 2016-01-02T00:00:00+00:00 | Sat, 02 Jan, 2016 - 00:00 | In July 2007, an O2 electronics engineer testified that an analysis of mobile phone records put O’Reilly at or near the scene of the murder at the Naul, Dublin, and not the Broadstone Bus Depot, where he claimed to have been.
When O’Reilly was eventually convicted of bludgeoning his wife to death at their home, the mobile phone evidence was cited as crucial.
Phones too played a central role in Graham Dwyer’s conviction for the murder of Elaine O’Hara earlier this year.
Evidence led gardaí to Vartry Reservoir in September 2013, where, following a fingertip search, two Nokia phones were found.
Despite the fact both had been lying on the muddy bed of the reservoir for over a year, technical experts were able to retrieve hundreds of text messages and deleted data from them.
It appeared that the two phones — one of which was readily identified as Ms O’Hara’s — were in almost exclusive contact with each other.
Now the technical probe split into two. As one team explored the content of the messages in an effort to find out who the other phone belonged to, civilian crime and policing analyst Sarah Skedd set out to see where the phones had been.
The content search turned up a cluster of personal details; talk about a pay cut, the birth of a child, coming fifth in a model airplane flying competition.
Cross-referencing this material with content from Ms O’Hara’s computer eventually gave the gardaí a name: Graham Dwyer.
Mobile phone location analysis meanwhile revealed that the texts sent by the mystery phone during working hours originated in Dublin 2. During the evening, when far fewer texts went out, the phone was connecting with Co Dublin masts.
The critical piece of evidence, the piece which brought those two technical examinations back together, came when Skedd established that the phone had been used in Galway on July 4, 2012.
She then obtained toll booth records and searched for vehicles whose owners lived in south Co Dublin that went through the M6 tollbooth, then the M4 one an hour later.
The search turned up the registration number 99 G 11850. It was registered to Dwyer.
Trials like these provide the only real source of information on how gardaí access and use phone records in their investigations.
None of the telecommunications companies will say how often gardaí come looking for phone records.
In response to questions, they will only say they comply with their legal obligations.
The gardaí are equally tight-lipped.
A spokesman says that only a chief superintendent can request telephone data from a service provider, and only under one of three circumstances.
“The prevention, detection, investigation or prosecution of a serious offence, the safeguarding of the security of the State or the saving of human life.”
The spokesman goes on to say it is not Garda policy to release the number of requests applied for.
Vodafone does however produce a law enforcement disclosure report annually, which is a little more revealing.
It says that prior to the publication of last year’s report, the company asked the authorities if Vodafone could publish aggregate statistics about how often they — the authorities that is — tapped phones, something that is referred to as “lawful interception” in the report.
“In response”, says the report, “the authorities instructed us not to disclose this information”. Vodafone can tell us that last year, Irish law enforcement authorities demanded access to communications data 7,973 times.
Remember that these are requests across the Vodafone network only, and as such, can only represent a portion of the requests made to all providers.
The report says, as the Garda themselves imply, that these requests can take many forms: “For example, police investigating a murder could require the disclosure of all subscriber details for mobile phone numbers logged as having connected to a particular mobile network cell site over a particular time period, or an intelligence agency could demand details of all users visiting a particular website.
“Similarly, police dealing with a life-at-risk scenario, such as rescue missions or attempts to prevent suicide, require the ability to demand access to real-time location information.”
We’ve seen at least one example of the latter in the recent past. In June, a young Limerick woman was reported missing in the city.
Gardaí tracked her mobile phone signal to wetlands at Grove Ireland in Corbally, where she was found and rescued.
Despite the obvious benefits of these powers, privacy advocates don’t like the fact Irish authorities are so secretive about how they use the technology.
Richard Tynan is a technologist with Privacy International.
“The first step in this matter is to understand the process and safeguards in place for the Irish government to get this highly intrusive data en mass,” he says, pointing out the aforementioned Vodafone transparency report highlighted Ireland as one of only five countries that mandated direct and unfettered access to their network.
This effectively means there is no possibility of the company scrutinising any aspect of the interception regime and possibly pushing back against it.
“The Government needs to make clear all the requests it makes, how many users are affected and what is done with the data it receives.”
There’s another dimension to secrecy. Mr Tynan refers to the Garda Síochána Ombudsman Commission bugging scandal.
When a UK counter-surveillance firm was last year brought in to conduct a security sweep of GSOC’s offices, one of three anomalies revealed suggested the presence of a UK mobile network in the vicinity.
Mr Tynan explains that there are only two possible explanations for this. One is that one of the Irish mobile operators deployed a misconfigured device that incorrectly identified itself, but since none of the operators has come forward to admit that this happened, that only leaves the other possibility.
Someone was using a Stingray.
“Stingrays”, he explains, “or IMSI Catchers, are used by authorities around the world to put large groups of people under indiscriminate mass surveillance via their mobile phone”.
While there are many different forms of this technology, in essence, the stingray mimics a real cell phone tower, but instead of relaying your call, it tracks both the location and content of your mobile phone.
I asked gardaí if they use Stingray-type technology. Their response couldn’t really be called a denial: “Requests for call related data under the provisions of section 6 (1) of the 2011 (Act) are made to the relevant telecommunications service providers. An Garda has no input into the process of searching for or generating the results.”
Mr Tynan believes it’s quite possible that an IMSI Catcher was in use during the security sweep of the GSOC offices.
“While we have no other specific indication that Irish law enforcement or intelligence services are in possession of these devices, their low cost and ease of use mean that many countries around the world now admit to using them.”
Research by the American Civil Liberties Union has revealed that this technology is widely deployed across the US, by everyone from the FBI and the Internal Revenue Service to the army and the DEA.
However, it’s the fact there’s no regulation around how this technology is sold or used that causes most concern. Rory Byrne is founder and CEO of technology and physical security company, Security First.
“I’ve spoken to former security services people in Europe”, he says, “and they will tell you that for a thousand pounds I can ring a buddy and get the location of any mobile you want me to find.
"Because the technology is accessible by a number of people for government purposes, there’s obviously a sideline going on for people to do that for private purposes”.
Mr Byrne believes technology like this is being used widely for industrial espionage. Both he and Mr Tynan are however particularly concerned with how it’s being used by oppressive regimes.
“This is a very real concern,” says Tynan. “The trade in turn-key surveillance tools for internal repression is extremely worrying and one that requires political accountability. Call monitoring technology, such as IMSI catchers, have been deployed widely.
"Large taps or probes on the provider’s network can intercept thousands of calls and data simultaneously across an entire city or country from a central monitoring centre.”
Last year, The Washington Post reported on a New York-based company called Verint. It manufactures and exports communications analysis systems under the tagline “Locate. Track. Manipulate.”
The blurb says the system offers government agencies “a cost-effective, new approach to obtaining global location information concerning known targets”. The firm, which also has an office in Dundalk, claims to have clients in more than 10,000 organisations in over 180 countries.
An IMSI catcher, it should be said, is not a precision instrument. It operates indiscriminately, hoovering up the unique identifiers of every device within its reach; innocent parties as well as potential suspects.
One of the goals of Privacy International is to stem the export of these technologies to regimes where they will be used for repression.
Privacy activists are however focusing their attention closer to home at the moment.
The UK government published the draft text of the new Investigatory Powers bill last month.
If signed into law in its current form, this bill would require web and phone companies to store the online activity of every citizen in the UK for a period of 12 months.
Once stored, this data can then be legally accessed by police, security services and other public bodies.
The bill also explicitly enables security forces and police to hack into computers and phones, and places legal obligations on companies to help them to do this.
This situation, says Mr Tynan, will have repercussions for the privacy and data protection rights of people beyond UK borders.
“The UK government is seeking the power to compel companies, many of them based in Ireland, to hack users on their behalf.
"Accordingly, Facebook serving an Irish user malware for the British Government could be a reality by the summer.
"Similarly, Apple could be compelled to install the malware via an update onto an Irish person’s phone.
"These new powers for the UK government, and their reach into Ireland need to be scrutinised and debated by our lawmakers.”
| 2024-11-08T16:11:13 | en | train
10,826,978 | luu | 2016-01-02T17:55:17 | Compiler-Introduced Double-Fetch Vulnerabilities – Understanding XSA-155 | null | http://tkeetch.co.uk/blog/?p=58 | 2 | 0 | null | null | null | no_error | Compiler-Introduced Double-Fetch Vulnerabilities – Understanding XSA-155 | null | null |
I recently read a great blog post from Felix Wilhelm (@_fel1x) about some double-fetch vulnerabilities he discovered in the Xen Hypervisor. These bugs are described in the Xen Security Advisory XSA-155. This post is the result of me trying to understand the bug better.
Double-fetch vulnerabilities are introduced when the same memory location is read more than once and assumed by the programmer to hold the same value each time, when in fact the memory could have been modified by an attacker in a concurrent thread of execution, such as in a shared memory section. What was particularly interesting about this advisory was that upon inspection of the code, there was no apparent double-fetch, but it can clearly be seen in the compiled binary.
TL;DR – All pointers into shared memory should be labelled as volatile to avoid compiler optimisation introducing double-fetches. A good presentation about this is “Shattering Illusions in a Lock Free World” (especially slides 28+). The compiler is not doing anything wrong which is why this is a bug in Xen, not gcc.
To demonstrate the issue, here is the vulnerable code, distilled from the vulnerable Xen code:
#include <stdio.h>
#include <stdlib.h>

void doStuff(int* ps)
{
    printf("NON-VOLATILE");
    switch(*ps)
    {
        case 0: { printf("0"); break; }
        case 1: { printf("1"); break; }
        case 2: { printf("2"); break; }
        case 3: { printf("3"); break; }
        case 4: { printf("4"); break; }
        default: { printf("default"); break; }
    }
    return;
}

void main(int argc, void *argv)
{
    int i = rand();
    doStuff(&i);
}
This vulnerability is specific to how gcc optimises switch statements with jump tables when there are 5 or more cases (more information here).
And below is the resultant binary in IDA compiled with gcc 5.3.0 for Intel x64 (gcc 4.8.4 and x86 give the same result). The rbx register points into a memory allocation, which could be shared memory, and by dereferencing the pointer twice we have our double-fetch vulnerability.
The compiler appears to do this to avoid using a register in the case where the default case is hit. Given that the two memory accesses are so close together, there is unlikely to be a big performance hit from the double fetch. But it’s not enough to prevent the race-condition from being winnable by the attacker (see the bochspwn research for techniques to try and win these tight race conditions).
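For readers without the IDA screenshot to hand, the optimised code has roughly the following shape, hand-written here as C using GCC's computed-goto extension. This is an illustration of the pattern, not the actual compiler output:

#include <stdio.h>

/* Illustrative rendering of the jump-table codegen, not gcc's real output.
   'ps' points into memory that an attacker may be able to modify concurrently. */
void doStuff_as_compiled(int *ps)
{
    static void *table[] = { &&c0, &&c1, &&c2, &&c3, &&c4 };

    if ((unsigned)*ps > 4)   /* fetch #1: bounds check for the jump table */
        goto def;
    goto *table[*ps];        /* fetch #2: the same location is read again as the
                                table index; if it changes between the two reads,
                                the bounds check is bypassed */
c0: printf("0"); return;
c1: printf("1"); return;
c2: printf("2"); return;
c3: printf("3"); return;
c4: printf("4"); return;
def: printf("default"); return;
}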
The compiler is allowed to turn a single memory access in code into multiple accesses because without the ‘volatile’ attribute on the pointer, it’s assumed that the memory will not be changed by another thread of execution. Simply declaring the pointer as volatile as below resolves the issue.
void doStuff(volatile int* pi)
The double-fetch has now disappeared as a register is used to store the switch value:
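Again hand-written for illustration rather than taken from the real codegen, the effect of the volatile qualifier is that the value is copied out of memory exactly once, so the range check and the dispatch can no longer disagree:

#include <stdio.h>

void doStuff_fixed(volatile int *ps)
{
    int v = *ps;   /* single fetch into a local/register */
    switch (v)     /* both the bounds check and the jump use the copy */
    {
        case 0: { printf("0"); break; }
        case 1: { printf("1"); break; }
        case 2: { printf("2"); break; }
        case 3: { printf("3"); break; }
        case 4: { printf("4"); break; }
        default: { printf("default"); break; }
    }
}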
To try and find other cases where double-fetches would be introduced, I used the following code to trash all current registers and to try and force the compiler to dereference a pointer a second time (code assumes that -fomit-frame-pointer is enabled).
#define CLOBBER_REGISTERS asm volatile ( "nop" \
    : /* no outputs */ \
    : /* no inputs */ \
    : "eax", "ebx", "ecx", "edx", "esi", "edi", "ebp" \
);
But in every case that I tried, the compiler used the local variable in preference to a double-fetch from memory. The compiler could eliminate the local variable by fetching the memory location twice, but this didn’t happen with gcc.
Here are the setups I’ve tried so far that exhibit the double-fetch behaviour in switch statements:
Compilers: gcc 4.8.4 and 5.3.0
Architectures: x86 and x64
gcc Optimisation Levels: O1, O2, O3 and Os
Binaries compiled with optimisation disabled or with -fno-jump-tables are not affected. Also, an initial experiment with an arm64 compiler from the Android SDK suggests that ARM binaries may not be affected.
Conclusion: Failing to label pointers to shared memory regions as volatile allows compilers to introduce double-fetches that aren't reflected in source code. But in practice the compiler will only do this in specific circumstances. One case is switch statements that use jump tables on Intel processors. Further research is needed to figure out which other compilers, flow-control constructs and CPU architectures could introduce these double-fetches.
The code and build script are on GitHub.
| 2024-11-08T10:07:43 | en | train |