AT&T previews lawsuit it plans to file against FCC - Selfcommit
http://arstechnica.com/tech-policy/2015/02/att-previews-lawsuit-it-plans-to-file-against-fcc-over-net-neutrality/
======
cl42
The Economist's recent article on net neutrality and common carriage is
fascinating and related: [http://www.economist.com/news/leaders/21641201-why-
network-n...](http://www.economist.com/news/leaders/21641201-why-network-
neutrality-such-intractable-problemand-how-solve-it-gordian-net)
Part of the interesting piece is the history of "common carriage": "The idea
that certain businesses are so essential that they must not discriminate
between customers is as old as ferries. With only one vessel in town, a
boatman was generally not allowed to charge a butcher more than a carpenter to
move goods. This concept, called 'common carriage', has served the world well,
most recently on the internet."
I've never doubted my support for net neutrality, and the legal history of
"common carriage" makes this even more obvious.
~~~
wtallis
The Economist article is written with the presumption that there are
problems that need non-neutrality in order to be solved and that there is thus
a tradeoff of some sort to be made. They cite better network management, and
give as examples latency-sensitive applications that suffer from the lack of
guaranteed latency.
Those are fallacies. The lack of guaranteed latency is fundamentally no bigger
of a problem than the lack of guaranteed bandwidth. Non-neutral traffic
shaping is not necessary to maintain good performance, and in practice a best-
effort network can achieve acceptable latency for things like VoIP _without_
having to explicitly identify, classify, and prioritize VoIP protocols. ISPs
may be accustomed to using such explicit techniques, but neutral alternatives
exist and work better for general purpose connections, so we shouldn't carve
out any exceptions here.
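(A toy illustration of that point, not any real ISP's scheduler: plain per-flow
round-robin queueing keeps a low-rate VoIP-like flow's delay small without ever
inspecting what protocol the packets carry. The flow names and numbers below
are made up.)

    import collections

    # Two flows share one outgoing link. "bulk" dumps a 50-packet burst and
    # "voip" sends a single small packet. A protocol-blind round-robin
    # scheduler serves at most one packet per non-empty flow each round, so
    # the voip packet goes out in the first round instead of waiting behind
    # the entire bulk backlog.
    queues = {"bulk": collections.deque(), "voip": collections.deque()}

    for i in range(50):
        queues["bulk"].append(f"bulk-{i}")
    queues["voip"].append("voip-0")

    def dequeue_round_robin():
        """Serve at most one packet from each non-empty flow."""
        return [(flow, q.popleft()) for flow, q in queues.items() if q]

    print(dequeue_round_robin())
    # -> [('bulk', 'bulk-0'), ('voip', 'voip-0')], with no traffic classification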
The Economist also points out the difficulty of determining when traffic is or
should be illegal and blockable. It is not at all apparent to me that ISPs
have a need to be involved in subjective decisions like that in order for us
to have a functional Internet. Is erring on the side of allowing traffic and
prosecuting later _really_ going to cause serious problems, or can we come up
with an objective set of guidelines for what constitutes a DoS worth blocking?
~~~
AnthonyMouse
People keep using DoS as the canonical example of the need for this, but in
that case one of the endpoints is requesting that the traffic be dropped. The
ISP isn't making the decision for anybody.
~~~
wtallis
Except that there isn't actually a real and usable protocol for pushing drop
rules up to your ISP, so in practice you can't authorize your ISP to block a
DoS on your behalf without getting humans involved.
~~~
AnthonyMouse
Which is why you pretty much always get humans involved.
How is the ISP even supposed to know it's a DoS otherwise? Maybe you just made
the front page of HN and you want the traffic.
------
chasing
Is AT&T trying to muddle the concept of an ISP as something that delivers
content and an ISP as something that hosts content? That's all I can figure,
here.
Especially when writing about the ability to decline service to customers. I'm
not sure how net neutrality relates to AT&T having freedom to pick and choose
its customers. (Surely the folks that just want their content delivered --
Hacker News, for example -- don't consider themselves AT&T customers.)
~~~
rsingel
They are. If they just deliver packets then they are clearly Title II. Last
time the FCC did this (2004/5), the ISPs claimed that their homepages and
email service and DNS made them not "telecommunications".
Well, Gmail, OpenDNS, Google DNS and Facebook (new homepage) make those
arguments less useful. So "caching" is the new DNS.
Not sure how they'll deal with the rise of VPNs and HTTPS though. VPNs and
non-cacheable traffic are the clearest argument that people pay broadband ISPs
just to deliver their packets.
------
shmerl
Instead of wasting money on courts AT&T should upgrade all their DSL lines to
fiber optics, lower prices on their plans and increase bandwidth. But of
course they'd rather just whine about how their monopoly is threatened by
Title II. I hope they'll lose, big time.
Though it wouldn't help with any of the above anyway, since even with Title II
AT&T won't be facing much competition. They'd just be more limited in the ways
they can abuse their monopoly.
~~~
rayiner
Why should they do that when the FCC is making their product less profitable?
~~~
shmerl
You mean less able to rip off undeserved profits?
------
skywhopper
AT&T's argument appears to be that since they are already shaping traffic and
abusing their customers' trust, they aren't actually an Internet service
provider anyway, so they can't be regulated as one.
------
sarciszewski
I stopped having any sympathy for AT&T after the Auernheimer case. If they
wanted to garner public support for any reason, a move like this is likely to
kill it for a lot of people. Myself included.
(I know Auernheimer's not well-liked, and I don't agree with his politics, but
he deserved to win that case on appeal.)
~~~
mullingitover
I read the indictment [1]. Auernheimer wasn't a white hat security researcher
-- he intentionally went after AT&T with intent to damage them and reap rewards
from it. He's not exactly a martyr for the cause of freedom.
[1]
[http://www.scribd.com/doc/113664772/46-Indictment](http://www.scribd.com/doc/113664772/46-Indictment)
~~~
guelo
An indictment is a deliberately biased, one-sided document.
------
Tiksi
_" I have no illusions that any of this will change what happens on February
26," when the FCC is expected to vote, AT&T Federal Regulatory VP Hank
Hultquist wrote in a blog post yesterday. "But when the FCC has to defend
reclassification before an appellate court, it will have to grapple with these
and other arguments. "_
I might be misreading something, but is he not saying "We know this is useless
but we want to waste the FCC's time and resources anyways" ?
I'm not familiar with how lawsuits for these kinds of cases work, but wouldn't
this be enough for a judge to throw away any lawsuit they file? If they
clearly state they have no intention other than to get in the way, it doesn't
seem like a valid lawsuit to me.
~~~
Alupis
> I might be misreading something, but is he not saying "We know this is
> useless but we want to waste the FCC's time and resources anyways" ?
No, he's simply saying this prior release is not going to influence the FCC
vote in any way, shape or form on Feb 26th.
However, a court battle afterwards might (as it did previously with the FCC
National Broadband Act, when the courts ultimately ruled the FCC had no
authority to do what it proposed).
~~~
Tiksi
Ah, alright, thanks for the clarification, I guess I did misread it.
------
jasonjei
I hope the FCC has the balls to say, "So sue me."
------
aioprisan
In other words, this will have to be settled in court.
~~~
rsingel
This is largely political saber rattling. AT&T's best arguments are going to
be procedural.
If the FCC gets past arguments that it didn't dot the i's, then this will go to the
Supreme Court where AT&T and Verizon will get walloped. I can explain why, but
basically there are 9 votes lined up against the ISPs on reclassification.
~~~
r00fus
I'd love to hear why you think there are 9 votes for reclassification, btw.
------
shaftoe
While I don't like Internet providers creating different classes of traffic,
the idea of the government getting involved should terrify anyone who values
innovation and freedom.
Soon, we'll end up with a monopoly guided by regulations from lobbyists and
using laws as a weapon against competition. That's hardly better than the
problem we seek to resolve.
~~~
badsock
Genuinely curious, how would you resolve the problem differently?
~~~
aaron42net
The lack of provider competition isn't a natural monopoly. It's a government-
created monopoly/duopoly in a given market. Instead of regulating the one or
two providers in a market, we could try a different model that improves
competition.
One way is to do city-owned and maintained layer 1, like Chattanooga
([http://money.cnn.com/2014/05/20/technology/innovation/chatta...](http://money.cnn.com/2014/05/20/technology/innovation/chattanooga-
internet/)), and sell access to as many ISPs as want it.
The other way is to break the city franchise model. Cities generally grant
franchise rights to cable and phone companies, excluding other providers for a
promise of universal coverage and a few percent of the revenue.
The latter is what Google Fiber is asking for from the cities it goes into:
- It wants blanket access to all of the telephone poles and other
rights-of-way, without having to do per-pole applications, application fees,
and approval processes that can take weeks/months each.
- It wants to not have to do universal access, but rather only roll into
neighborhoods with a high enough density to be profitable.
- It won't pay the city a percentage of revenue. Instead, it agrees to build
out free internet access to schools, public spaces, etc.
Google Fiber's model has the advantage of not relying on a city to properly
maintain a fiber network, but the disadvantage of leaving poor communities
unserved.
| {
"pile_set_name": "HackerNews"
} |
Because everyone is a racist. - kamakazizuru
http://asymptotejournal.com/article.php?cat=Nonfiction&id=47
======
kamakazizuru
A Swedish writer with immigrant roots writes an open letter to the Swedish
justice minister.
| {
"pile_set_name": "HackerNews"
} |
Manufacturing bombshell: AMD cancels 28nm APUs, starts from scratch at TSMC - DigiHound
http://www.extremetech.com/computing/106217-manufacturing-bombshell-amd-cancels-28nm-apus-starts-from-scratch-at-tsmc
======
Symmetry
Yup, SemiAccurate reported on this story a week ago. It's probably a good move,
given that the slips in GF's 28nm process mean that the two products would
only have been produced for 6 months or so.
[http://semiaccurate.com/2011/11/15/exclusive-amd-kills-
wichi...](http://semiaccurate.com/2011/11/15/exclusive-amd-kills-wichita-and-
krishna/)
~~~
DigiHound
SemiAccurate got the story wrong and blames the issue on GF pushing back their
SHP process. They don't mention the move to TSMC and they claim there will be
a follow-up in months.
------
feralchimp
In case anyone else read that article and wondered "what's all this gate-last
vs. gate-first business?"
<http://www.eejournal.com/archives/articles/20111114-gate/>
~~~
CamperBob
I'm not even sure what an "APU" is, frankly. Lots of undefined buzzwords in
that article.
~~~
r00fus
<http://www.amd.com/us/products/desktop/apu/Pages/apu.aspx>
tl;dr - AMD's new GPU+CPU combined chipset... the logical extension of their
purchase of ATI.
------
nas
The following little piece of news is also interesting. Brad Burgess (chief
architect of Bobcat) is now at Samsung (<http://www.linkedin.com/pub/brad-
burgess/26/aa9/93>).
From what little I've read about AMD's recent processors, the low power line
is kicking butt (Bobcat, etc) while the high end (Bulldozer) is not.
------
manuscreationis
So...
Are these Bulldozers really as bad as I keep reading about?
A friend of mine who is very knowledgeable when it comes to hardware insists
that the issues are being overblown, and that if you get the correct
configuration of hardware (ram/mobo/etc) along with the right overclocking
setup, these procs are just as good, as well as more future-proof. He says
most benchmarking tests are more single-threaded examples of load, which the
Bulldozer obviously performs worse with, despite this being a more realistic
representation of the kind of load you'd find in your average desktop,
especially when it comes to gaming.
Thoughts?
~~~
eropple
As you note, for desktop/gaming purposes it's pretty obvious that single- or
few-threaded performance is still king, and I see no reason to expect that to
change in the foreseeable future. And Bulldozer is really, really, really bad
at it. "But you can overclock it!" is a silly argument; you can get an i7 up
to 5GHz or something equally unnecessary and it'll blow away whatever you can
get that Bulldozer silicon up to. The bigger problem is in the hardware
design, which is intensely over-shared and results in hardware-level blocking
conditions, as evidenced by the various reviews out there...and overclocking
doesn't help that.
I'd consider Magny-Cours for some types of server workloads, though I'd
probably go with Sandy Bridge (and definitely would for a desktop). I wouldn't
buy Bulldozer for anything.
~~~
manuscreationis
That's a shame...
I'm looking into a new rig, and he's completely sold on the design. I can
imagine a world in, let's say, 2013-2014 where the desktop becomes a more
multithreaded environment, but that just isn't where we're at today, and that's
just my guess. He's convinced the overclocking aspect makes all the
difference, and Intels don't OC as well, but that's not what I'm reading (nor
what you're saying).
I do like the conceptual architectural changes made with Bulldozer, but
current, and forthcoming, software just doesn't seem like it will make use of
it. It definitely seems like more of a server-minded approach to an
architecture.
~~~
eropple
I think you're being optimistic. We're going to see all our desktop
applications become pervasively multi-threaded in two years?
And Bulldozer is going to be better at this than Sandy Bridge, which is good
at both single- and multi-threaded loads?
Ehhh. Not likely. The design isn't even that good or interesting; as I
mentioned before, it's overly reliant on shared components that aren't
conducive to the sort of magical perf improvements that "but you can
overclock!" would require.
------
comex
Extremetech's mobile interface is still unusable.
~~~
wiredfool
Yep, I couldn't read the first column on my iPad; it's half cut off on first
view, and if you scroll over, the other half is cut off.
Wonder if Privoxy could do that for me.
| {
"pile_set_name": "HackerNews"
} |
Interactive FEC Campaign Finance Data Explorer - itay
http://blogs.splunk.com/2012/11/05/splunk4good-announces-public-data-project-highlighting-fec-campaign-finance-data/
======
erintsweeney
Nice way to explore campaign finance contributions by state, employer, job
role, etc. Check it out.
| {
"pile_set_name": "HackerNews"
} |
Ubuntu Phone, Jolla Sailfish and KDE Plasma Active to share API? - emilsedgh
http://aseigo.blogspot.com/2013/01/qml-component-apis-to-come-together.html
======
scriptproof
Another good resolution for the new year...
------
shmerl
Good development.
| {
"pile_set_name": "HackerNews"
} |
Sky: A 60fps GPU-Powered Text Editor - noajshu
https://github.com/evanw/sky
======
ngrilly
Impressive piece of work for a single developer.
~~~
jmiserez
Also some cool demos on his website:
[http://madebyevan.com/](http://madebyevan.com/)
------
jandrese
The title makes it sound like a crack at JavaScript developers: that they
need a GPU to make a text editor.
~~~
supernintendo
The title doesn't mention JavaScript nor is the editor written in or
exclusively compiled to it. Further, modern JavaScript engines are very fast.
The main performance bottleneck in web-based editors like Atom or Light Table
is the DOM. The web target of this editor only makes use of two DOM nodes -
the <canvas> that everything renders to and a hidden <input> for capturing
user input.
| {
"pile_set_name": "HackerNews"
} |
Why Scrum Should Basically Just Die in a Fire - gcoleman
http://gilesbowkett.blogspot.com/2014/09/why-scrum-should-basically-just-die-in.html
======
DupDetector
[https://news.ycombinator.com/item?id=8334905](https://news.ycombinator.com/item?id=8334905)
[https://news.ycombinator.com/item?id=8352235](https://news.ycombinator.com/item?id=8352235)
| {
"pile_set_name": "HackerNews"
} |
Hasp: An S-Expression to Haskell Compiler - kruhft
http://www-student.cs.york.ac.uk/~anc505/code/hasp/hasp.html
======
twfarland
Glad to see another attempt at this righteous marriage. The piping approach is
very tasteful.
I tried to get this working with Racket (on OS X) but hit too many blocks and
ended up going with Gambit, which works well so far.
------
winestock
Compare this with Liskell (Haskell semantics with Lisp syntax):
<http://www.liskell.org/>
With both Hasp and Liskell, one writes code that is indistinguishable from
S-expressions. In fact, they _are_ S-expressions. I don't doubt that Haskell
has things that Lisp doesn't and that Lispers should learn from, but this is
another example of what Paul Graham said, that adding Lisp-style macros to any
language will only result in another dialect of Lisp.
~~~
Locke1689
I'm sorry but this is just ridiculous. Just because it looks like Lisp (i.e.,
the syntax is s-exprs) doesn't make it Lisp.
Let's count important features of Lisp beyond the fact that it's functional:
- Dynamically typed
- Unsafe
- Eagerly evaluated
- Syntax is s-expr AST
Important features of Hasp/Liskell:
- Statically typed
- Type classes
- Side effects enforced by monads
- _Lazily evaluated_
- Syntax is s-expr AST
Congratulations, they have one major similarity. If anything Haskell is a
dialect of ML, not LISP. I know no one who isn't a PL grad student (guilty)
has ever even heard of ML, but it's helpful to look into the history of PL
before you start making uninformed statements which basically amount to, "all
functional languages are LISP."
P.S.
All functional languages are syntactic sugar on λ-calculus.
~~~
TheBoff
"Syntactic sugar on the lambda calculus" is just silly, really. I wish people
would stop saying this. It's like saying imperative languages are syntactic
sugar on Turing Machines.
This is a bit of a pet peeve for me, really. It seems like an unnecessary
pithy dismissal of computation theory.
~~~
DanWaterworth
I disagree, it's not like saying "imperative languages are syntactic sugar on
Turing Machines" at all.
Haskell compilers generally compile in the following way:
text -> tokens -> AST -> lambda calculus variant -> abstract functional
machine code -> imperative IR -> machine code
the AST to lambda calculus variant step is a single step. It takes the Haskell
representation of the lambda calculus and outputs lambda calculus.
Contrast this with an imperative compilation:
text -> tokens -> AST -> imperative IR -> machine code
The imperative IR may be LLVM IR. LLVM IR is almost a first-order functional
programming language; it is certainly not machine code for a Turing machine.
So imperative languages are not syntactic sugar over a Turing machine; there is
no desugaring step in the pipeline (except maybe when building the AST).
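(As a rough sketch of why that desugaring step is so small, and only as a toy,
not what GHC actually emits: a surface construct like `let` rewrites directly
into application of a lambda. The mini-AST below is invented for illustration.)

    from dataclasses import dataclass

    # Toy lambda-calculus AST; `Let` is the only piece of "surface sugar" here.
    @dataclass
    class Var:
        name: str

    @dataclass
    class Lam:
        param: str
        body: object

    @dataclass
    class App:
        func: object
        arg: object

    @dataclass
    class Let:
        name: str
        value: object
        body: object

    def desugar(node):
        """Rewrite `let x = v in b` as `(\\x -> b) v`; recurse everywhere else."""
        if isinstance(node, Let):
            return App(Lam(node.name, desugar(node.body)), desugar(node.value))
        if isinstance(node, Lam):
            return Lam(node.param, desugar(node.body))
        if isinstance(node, App):
            return App(desugar(node.func), desugar(node.arg))
        return node  # variables and literals pass through unchanged

    # let y = f x in g y   desugars to   (\y -> g y) (f x)
    expr = Let("y", App(Var("f"), Var("x")), App(Var("g"), Var("y")))
    print(desugar(expr))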
~~~
lubutu
And what of Lisp, of which most dialects have mutable state? If a Lisp
compiler were to convert to genuine λ-calculus, it would be as large a step as
it would be for C.
~~~
DanWaterworth
If you would re-read my comment, you'll find that I didn't actually say I
agree that all functional languages are syntactically sweetened lambda
calculus, though I certainly said Haskell was.
My point was that although there are functional languages that are syntactic
sugar over the lambda calculus, I don't know of any imperative language (and
in fact it would not make sense to design an imperative language) that is
syntactic sugar over Turing machine code.
I should have made my position clearer. I do agree that compiling any non-pure
functional language via lambda calculus is a fruitless endeavor.
------
lalolol
Pro Tip: Examples first
| {
"pile_set_name": "HackerNews"
} |
Ask HN: What are the best MOOCs on web development? - pyeu
======
hackermailman
If you're in the EU/US then probably Lambda School, but since you already know
Python you may as well try a practical data science course:
[http://www.datasciencecourse.org/lectures/](http://www.datasciencecourse.org/lectures/)
Most of that course is wrangling with APIs and scraping/parsing HTML to clean
and manipulate data. At the least it will give you a way to get paid
immediately afterwards by going on those terrible freelancer sites (Upwork) and
making $100 here and there scraping Amazon and cramming the results into
Shopify stores or Excel spreadsheets. You learn web development from the
opposite direction, as a human browser. Linear algebra isn't necessary (the
course is self-contained), but if you want there's a great course for that done
in Python too:
[http://cs.brown.edu/courses/cs053/current/lectures.htm](http://cs.brown.edu/courses/cs053/current/lectures.htm)
While this looks like a lot to do, if you have 45 minutes a day to eat
breakfast in front of a screen watching a lecture and another 45 minutes later
to try the homework, you'll find you finish these courses in a matter of weeks
and can move on to your own experimental hackery, building things, which is
when you really begin to learn, as you figure things out for yourself.
Once you have experience manipulating APIs as a user, you can try building your
own:
[http://www.cs.bc.edu/~muller/teaching/cs102/s06/lib/pdf/api-...](http://www.cs.bc.edu/~muller/teaching/cs102/s06/lib/pdf/api-design)
Now you are a jr "backend developer" who can move on to a systems programming
course to further understand what you're doing:
[https://scs.hosted.panopto.com/Panopto/Pages/Sessions/List.a...](https://scs.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx#folderID=%22b96d90ae-9871-4fae-91e2-b1627b43e25e%22&maxResults=50)
------
muzani
[http://freecodecamp.com](http://freecodecamp.com)
I find a lot of the MOOCs go too slow or cover things that aren't so relevant.
FCC has a good balance of both. It's not in the typical MOOC structure, but it
does have videos, forums, discussions, but much of it is code and text.
| {
"pile_set_name": "HackerNews"
} |
Show HN: Embed Map – Great looking maps, embedded, for free - jitnut
https://www.embed-map.com
======
slater
And pray tell how do you get around Google Maps' usage restrictions?
| {
"pile_set_name": "HackerNews"
} |
Show HN: Sawtooth – Online audio workspace - myzie
https://www.sawtooth.io
======
jensenbox
I find myself asking questions that could easily be answered with a demo or
some sort of try before signup.
The idea of everything I click on going to the signup page really is a huge
turn-off, and personally I consider it to be a poor first user experience.
You should add some sort of something that allows me to see what you are about
before making me sign up. I don't even know what I get if I sign up.
Otherwise, I am sure you have done a great job.
~~~
myzie
Thanks, you are right. I should add more content up front so that everyone can
get a better idea how it works without having to sign up.
~~~
andybak
Even better - the home page should be the app itself. Only prompt people to
register for an account when they've created something they might want to
save.
~~~
andai
This is what I was expecting.
------
cyberferret
As a frequent user of SoundCloud, Gobbler and plain old DropBox to share audio
files among fans and colleagues in the industry, I am wondering what the
advantage of Sawtooth could be?
I see that you have filters etc., but given that I would rather adjust EQ and
effects on my tracks on my own DAW with virtually no latency, I don't know
that I would actually do that on a web platform with all the vagaries of lag,
dropped signals etc.?
Plus the fact that most audio people have their favourite 'go to' plugins for
reverb, delay, chorus etc. as VST/AU plugins that they pull into their DAW -
it seems that Sawtooth straddles that line between being a quick 'grab an
audio recording snippet for sharing' and a full fledged web based DAW.
I am assuming a 'use case' for this would be to capture a song or riff idea
while I was sitting in a hotel room between travelling etc., but to be honest,
I have a lot of iPhone apps for doing that and posting directly to the sites I
mentioned above. To make Sawtooth compelling, it would probably have to supply
some rudimentary DAW like capabilities, such as perhaps a metronome, some sort
of ability to do basic MIDI patterns with uploaded samples, and perhaps some
rudimentary multi track ability - even 3 or 4 tracks would be great for doing
basic song ideas to send to my band members.
~~~
myzie
Gobbler looks great. Have you used the collaboration features and have they
worked well for you?
One of the goals is to streamline the experience of navigating and listening
to sets of files. If it's built correctly, then using Sawtooth should be a
smoother experience than managing where your files live in your Dropbox
folders, having it sync them, finding them on another computer, then listening
with a separate media player or the player in your OS file navigator (thinking
of Finder on MacOS).
This is not meant to replace any of your desktop DAWs. If anything, it could
interoperate with them in certain cases, if people are interested in that.
Maybe an API would be handy for others developing websites or apps that work
with audio.
I'd like Sawtooth to keep simple audio work really simple. I suspect it's easy
to overwhelm newcomers to the audio world with complicated UIs (which are
necessary for advanced work).
Gobbler for example seems very geared towards musicians and music creation,
which is great for many. But there are also lots of people working with audio
for other reasons... field recordings, voice recordings (podcasts etc), signal
analysis, etc. Maybe Sawtooth becomes more optimized for one of those other
cases.
------
puranjay
As an amateur producer, I'm wondering: what's the utility of something like
this?
If I create a new set, I have the option to use the 'Synth'.
Let's be very honest here: if I want to make real music, I'm going to turn to
a serious DAW + synth plugin. I personally use both Massive and Serum with
Ableton. Anything you cook up in a webapp is going to fall seriously,
_seriously_ short of what Serum can do.
Not to mention that a web tool just doesn't fit into the workflow. The synth
is the heart of digital music production. If I'm making music in Ableton, I
want my synth to be inside Ableton.
This might interest absolute amateurs, but amateurs won't pay for this, and by
the time they are advanced enough to _want_ to pay, they would have discovered
Ableton/Logic/FLStudio.
I don't mean any offense, but I find that online music tools like this are
generally very poorly thought out. They are a solution searching for a problem
instead of the other way around.
A lot of people need simple tweaks to their photos or graphics. This is why
online photo editing tools work even though they fall far short of Photoshop
in capabilities.
But audio/music? This isn't something your average Joe needs for his Instagram
profile or his Facebook business page. If someone is serious about music/audio
editing, he will eventually want to use a professional tool.
Not to mention that Ableton Lite is quite decent for someone new to music
production
~~~
myzie
Sawtooth creator here...
Thanks for the feedback and I agree with a lot of what you said. You're right
that this can't compete with pro audio software but it's also not my goal to
compete with those tools.
I'm putting Sawtooth out there to evaluate if there is demand for this type of
web app, and what groups may be the most interested. Whether anyone will pay
for it... great question! I'm not worried about that just yet.
Web apps have a lot of limitations compared to native desktop apps when it
comes to serious audio work. At the same time, I wonder how many people could
use a reliable web app for super quick edits, to listen to some of their audio
while on the go (not just at an audio workstation), or share tracks privately
with bandmates or coworkers. We'll see.
~~~
eeZah7Ux
Thanks for your project.
The sharing aspect is the interesting bit. Having a "github for sound
samples", with the ability track forks and have commit history could be
wonderful.
Implementing a professional DAW or synth takes a staggering amount of work.
The toy synth might confuse users about the purpose of sawtooth. (Personally,
I would remove it)
~~~
puranjay
See: Splice.com. It works like GitHub and is already very popular with
producers.
------
microcolonel
I think the marketing page could do with some demos. There are online DAWs
already and most of them are pretty sad. If you showcase some real audio work
accomplished with Sawtooth, it'll be more worth signing up to try.
I have DAW software already, I use Ardour and Pure Data on my computer, so I
want to know that I can do something compelling with your tool before
bothering to set up an account.
~~~
myzie
I appreciate the feedback. A couple aspects of Sawtooth that might be
compelling for you, as a supplement to what you're already using -
1. Share sets of recordings privately with any collaborators. All protected
by logins. This feature is basic right now (it's read-only if you are not the
owner) but could be expanded.
2. The convenience factor of accessing your most used audio clips or
recordings from anywhere, on most any device. Maybe your final Ardour mixes
you would upload to have easier access when you're not at your workstation,
for example.
I know this isn't for everyone, and it's certainly not intended as a
replacement for desktop pro audio software. Rather, it may be a complementary
tool for some for simple editing tasks, quickly streaming your recordings
while on the go, or collaborating with others on shared sound files.
~~~
microcolonel
Seems cool, good job. I look forward to seeing more of your platform.
DAW plugins are usually pretty hard to write and maintain, but maybe it'd be
cool to have a "send selected tracks to sawtooth and share" type flow.
------
jvanegmond
I've very long wanted something like this! Basically like a CyberChef (
[https://gchq.github.io/CyberChef/](https://gchq.github.io/CyberChef/) ) but
for audio. I work in digital speech processing and often work with audio
codecs and it would be very helpful to have an online tool which lets you
apply multiple filters to some audio.
What I'm missing after trying out Sawtooth is the interactivity of CyberChef.
Basically trying out a few filters, one after the other, showing the
intermediate results for each, and some frequency analysis on it, and hearing
the results. Can audio filters be applied in the browser for instant-feedback
to hear the audio 'live' (like CyberChef auto-bake) like it can be with text?
And some conversions would be really great as well, like being able to apply
various encodings: Treating the samples as a-law encoded and applying an a-law
decoding pass on it.
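(For the a-law part specifically, here is a minimal offline sketch using
Python's standard-library audioop module, just to show the shape of such a
decoding pass; the file names are hypothetical and audioop is deprecated in
recent Python versions.)

    import audioop
    import wave

    # Hypothetical raw G.711 a-law capture -> 16-bit PCM WAV, plus one extra
    # pass (a simple gain change) to mimic chaining filters CyberChef-style.
    with open("capture.alaw", "rb") as f:
        alaw_bytes = f.read()

    pcm = audioop.alaw2lin(alaw_bytes, 2)   # decode a-law to 16-bit linear PCM
    pcm = audioop.mul(pcm, 2, 0.5)          # follow-up pass: roughly -6 dB gain

    with wave.open("decoded.wav", "wb") as out:
        out.setnchannels(1)                 # telephony audio is mono
        out.setsampwidth(2)                 # 16-bit samples
        out.setframerate(8000)              # G.711 sample rate
        out.writeframes(pcm)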
~~~
myzie
I was wondering if anyone would be interested in having some signal analysis
available. It would be a logical step from what's there now.
At the moment all filtering is done in the backend. Hearing instant-feedback
though seems to be a common request so I'll have to give it some thought.
Sawtooth does keep all versions around and available (to support undo) so
improving the ability to quickly play the different versions of the same file
would be the easiest addition.
Thanks for giving it a try.
------
jasonkostempski
Did you ever have a desktop app? In the early 2000's I used a program I
thought was called Sawtooth, or maybe it was just SAW, 'saw' was in the name,
but I can't for the life of me find it anywhere. You could draw a wave form
with the mouse, name it, piano roll it, sequence it into a song and export as
a wave and that was about it. I know that sounds like every DAW on the planet
but it didn't have any advanced features and I think drawing the wave form was
unique to it at the time. If anyone knows what I'm talking about please let me
know.
~~~
jasonkostempski
After 2 days of really hard thinking I finally remembered: "SawCutter". Seems to be
abandoned though. cuttermusic.com is something else now and the download.com
link is, of course, completely shady. Hope I can find it on an old backup CD
somewhere.
Edit: Looks like this might be the author who seems to have a pretty
impressive resume:
[http://www.larryzitnick.org/](http://www.larryzitnick.org/)
------
gargarplex
I do a bit of online video production (for social media marketing and for
online courses) and I have to open up Audacity every once in a while. I don't
like the Audacity user experience. Here's my feature request list:
1) Make it easy to switch audio formats ([mp3|wav|au|etc]->[mp3|wav|au|etc])
2) One button to make it louder, one button to make it quieter
3) A good cut and paste interface with the ability to zoom in and out and see
the spectrogram so one may be sure that one starts cutting at the audio part
(if there is white noise)
4) The ability to selectively remove deep or high voices and remove background
noise
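(Items 1 and 2, plus a rough cut, are already scriptable outside Audacity; a
minimal sketch with the pydub library, using made-up file names and gain
values, in case it is useful.)

    from pydub import AudioSegment  # pydub needs ffmpeg installed for mp3 output

    audio = AudioSegment.from_file("narration.wav")   # hypothetical input file

    louder = audio + 3       # one step louder (+3 dB)
    quieter = audio - 3      # one step quieter (-3 dB)

    # Rough cut: drop the 2.0s-3.5s span (pydub slices are in milliseconds).
    trimmed = audio[:2000] + audio[3500:]

    # Format switch happens on export (wav in, mp3 out here).
    trimmed.export("narration_edit.mp3", format="mp3")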
~~~
myzie
Thanks for the list!
~~~
gargarplex
You're welcome. I know how challenging it can be to launch a new product and
all you want is information from the market regarding where to go.
------
Optimal_Persona
Interesting. In "Works With Multiple Formats" section, it mentions 'AU' \- do
you mean AIFF (AU is Apple's Audio Unit plugin format)? 'Chorus' is misspelled
under "Filters".
What is bitrate/quality of transcoding, and how would you rate your DSP
algorithms compared to those in pro DAWs/editors? Like, is that FreeVerb or
something fancier, and what about pitchshift/timestretch quality and zero-
delay filters? Audio folks are pretty picky about quality these days.
~~~
myzie
In supported browsers I'm using the Opus codec which in general is quite good
quality compared to MP3. It falls back to MP3 in browsers that don't support
Opus... Safari and IE I think.
I believe it's using the default encoding settings for Opus and MP3 at the
moment (using opusenc and... maybe lame for MP3 I forget). Certainly I'm
looking to have great streaming quality so I should confirm that those
defaults are reasonable.
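(Purely as a sketch of what pinning explicit settings could look like, assuming
the upload is already decoded to WAV; the bitrate and quality numbers below are
placeholders, not Sawtooth's actual configuration.)

    import subprocess

    def transcode(wav_path, base_name):
        """Produce Opus and MP3 renditions of a decoded WAV with explicit settings."""
        # ~96 kbit/s Opus via opusenc
        subprocess.run(["opusenc", "--bitrate", "96", wav_path, base_name + ".opus"],
                       check=True)
        # VBR quality 2 MP3 via lame
        subprocess.run(["lame", "-V", "2", wav_path, base_name + ".mp3"],
                       check=True)

    transcode("upload.wav", "upload")  # hypothetical file names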
I'm using various open source and custom tools for the processing. YMMV. In
general they should be solid, but not as fancy as many of the latest VST
plugins. This could all evolve depending on feedback and what people are
interested in. One thing I considered as an addition is the ability to define
custom filters on the webpage... either interpreted or compiled in the backend
to edit your files. I think that would be a neat way to experiment with
filters, but would have some limitations as well.
------
tommynicholas
I like this idea - but why do you let people edit the .wav part of the
extension? That was super counterintuitive to me; could I have done .mp3, for
example?
I'll leave any other feedback I have here unless you have an email I can send
it to - I've been looking for a good web version of Hum (the mobile app) and
this looks like it could be it + more!
~~~
myzie
Hey there, thanks for trying it out. Was this after uploading a wav then
clicking to edit its tags or file name? I'll see if I can improve that aspect.
In general, Sawtooth transcodes uploads behind the scenes to create mp3 and
opus encoded versions so that they can be streamed to your browser (not all
browsers support playing wav directly). These versions are all stored next to
each other. Editing the name string in the UI shouldn't be able to change file
extensions in the backend.
I'd be happy to field any other questions here or equally you can reach me at
curtis at sawtooth.io
------
mjmj
Seems like a great start. I too would like to see ways to combine waveforms.
And a bigger ask: being able to draw filters in real time while playing back,
to hear changes instead of having to process them to find out what will happen,
as well as looping the same section while editing. I realize I'm asking for
more DAW-like features! :P
------
sevilo
Seems like a really cool project with potential. Would like to see the ability
to preview filters and synths; that seems like a big downside compared to
the desktop DAWs.
Also not sure if there's a place to report bugs and provide feedback in the
future?
~~~
myzie
Thanks, please send any feedback to curtis at sawtooth.io
------
redmand
I was playing with it for a bit and came across a few issues, but can't seem
to find a support or contact link. Where would you like such information sent?
~~~
myzie
Please shoot me an email with your findings: curtis at sawtooth.io
Thank you!
------
acuozzo
It would be worthwhile to mention if multichannel audio is supported or if the
application is limited to working with mono and stereo inputs.
~~~
myzie
Good point. Multi-channel is supported. I'll make a note to add it to the
feature list on the page.
------
myzie
FYI - you can use Google login if you visit the Sign In page (instead of Sign
Up). Need to make this more obvious.
------
thecrumb
Click - sign up. ... No.
------
thenormal
Can it be used for direct streaming?
| {
"pile_set_name": "HackerNews"
} |
Autonomy misled HP about finances, Hewlett Packard says - finknotal
http://www.bbc.co.uk/news/business-20412186
======
masukomi
I worked for a company that was Acquired by Autonomy. They made us fire a
large portion of our staff before the final papers were signed so that they
could continue with their claims that they never fired people as the result of
an acquisition.
Slimy Bastards if you ask me. This does not surprise me in the least.
~~~
politician
It doesn't sound like your old company was all that principled either...
------
manishsharan
Wasn't Autonomy a public company when HP acquired it? Was it not the
responsibility of HP's board and management and their investment bankers to do
due diligence before they made such a big acquisition?
Could it be that HP management, having lost the position of largest PC maker
to Lenovo, is looking to deflect attention from their incompetence?
~~~
pisarzp
I agree. This story is quite hard to believe... Nobody spends $12bn without
looking carefully into the books. There always is a long and thorough Due
Diligence process on transactions like this one. Investment bankers, lawyers
and accountants get their fees mainly for going through every single
document in the company...
~~~
chucknelson
Apparently Deloitte was supposed to do this, with KPMG as a safety net. It
took a third (and also expensive) consulting firm (PwC) to notice issues.
I wonder if there will be any client fallout at either Deloitte or KPMG for
this. Probably not...
Maybe HP is just trying to blame it on them? Who knows!
------
ridruejo
"We did a whole host of due diligence but when you're lied to, it's hard to
find," Are you kidding me? That's the whole purpose of doing due diligence in
the first place.
~~~
rayiner
The purpose of due diligence is to uncover mistakes and ensure that the books
are in order, not to uncover fraud or wrongdoing.
~~~
olefoo
And one of the mistakes you should be looking for is the possibility that you
are being lied to so thoroughly that it's hard to know what's real and what
isn't. It's in situations like that, that forensic accountants earn their
salt.
~~~
wangarific
And even if they don't earn their salt, your lawyers should protect you in the
representations and warranties section of the definitive agreement.
------
j_col
> Autonomy founder Mike Lynch is a non-executive director of the BBC.
Way to go journalistic impartiality at the BBC.
~~~
bonaldi
There could be a technical concern here, but practically speaking the BBC is
so impartial it will happily half-destroy itself in the name of journalism.
Their _own Director-General_ just had to resign after a grilling by BBC
journalists on BBC programmes. If they'll do that to their own boss, some guy
from Autonomy has 0 chance of special treatment.
~~~
simonw
Extremely well put.
------
robk
This surely will hurt the reputation of Lynch. The other news reports have
quotes saying in effect this was perpetrated by senior management and that the
whistleblower who came forward is still at HP/Autonomy, which by inference
seems to point a rather stern finger at Lynch.
------
gadders
I have no comment on Autonomy's finances. We did however evaluate its product
vs Google's search appliance.
The Google Appliance we pretty much plugged in and let it do its thing. After
a few days it was giving excellent results on our massive (80,000 people)
company intranet.
The Autonomy server had to be constantly tweaked and fiddled with to even get
it near to the relevance of the results.
Unfortunately, Autonomy had flogged loads of licenses to another part of the
business for peanuts, so we had to go with their inferior product.
| {
"pile_set_name": "HackerNews"
} |
Google is testing an arrow next to trusted queries - davidedicillo
http://www.flickr.com/photos/7896006@N06/5040747038/
======
slipstream
Not testing, rolling it out as a keyboard navigation feature indicator,
enabled by default since a couple of days ago:
[http://googleblog.blogspot.com/2010/09/fly-through-your-
inst...](http://googleblog.blogspot.com/2010/09/fly-through-your-instant-
search-results.html)
| {
"pile_set_name": "HackerNews"
} |
Elon Musk Needs Sleep - smacktoward
https://slate.com/business/2018/08/elon-musk-needs-sleep-to-save-himself-and-tesla.html
======
amacalac
srsly?
~~~
lutorm
yes.
| {
"pile_set_name": "HackerNews"
} |
At Airport Gate, a Cyborg Unplugged (2002) - kick
https://www.nytimes.com/2002/03/14/technology/at-airport-gate-a-cyborg-unplugged.html
======
icedata
I once drove through the border from Canada to the US with Steve. He was
wearing his headset. He explained what it was, the guard said "you can't come
in here with that". I managed to assuage his concerns. This was around 2014.
~~~
kick
That's amazing! I really wish he'd write an autobio-like book describing all
of the social challenges he's encountered with his gear.
------
boudin
It reminded me of a similar story at a McDonald's restaurant in France. It's
actually the same person:
[https://www.forbes.com/sites/andygreenberg/2012/07/18/mcdona...](https://www.forbes.com/sites/andygreenberg/2012/07/18/mcdonalds-
staff-denies-physical-altercation-with-cyborg-scientist/)
~~~
matheusmoreira
> the best way to settle this may be for McDonald’s to release its own
> surveillance video footage of the incident–an ironic possibility given that
> the dustup seems to have started with the staff’s own concerns over
> recording
Isn't it interesting? Authorities like to surveil everyone but hate being
surveilled themselves.
Steve Mann coined the term for the idea of surveilling authorities:
[https://en.wikipedia.org/wiki/Sousveillance](https://en.wikipedia.org/wiki/Sousveillance)
------
ciymbpol
Disappointing reading a 2002 article without follow-ups. I tried searching the
web and the Canadian court cases database canlii.org. Perhaps this was settled
out of court?
------
schappim
Do you think this would still happen today?
------
anotheryou
is there any documentation on what exactly his setup is?
~~~
kick
His websites are filled with documentation for what it used to be, but I can't
remember the links to the _really_ juicy stuff. His setup circa-1997 (I think)
was pretty much entirely documented with reasoning for every piece, and it was
_fantastic_.
He also has some modernized stuff on Instructables:
[https://www.instructables.com/member/SteveMann/](https://www.instructables.com/member/SteveMann/)
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Anyone over 35 admitted to YC? - dreamzook
Is there any team admitted to YC where founders were over 35 years of age?
======
pg
There have been plenty that old. I believe the oldest founders we've funded
were in their early 50s.
~~~
dreamzook
Also, PG, thanks for the reply. I was wondering whether we have a chance with
founders 35+, but we still did an early submit.
------
bsims
Ray Kroc got his start with McDonald's at the age of 52. Never too old to
think new.
[http://franchises.about.com/od/mostpopularfranchises/a/ray-k...](http://franchises.about.com/od/mostpopularfranchises/a/ray-
kroc-story.htm)
| {
"pile_set_name": "HackerNews"
} |
The wildest insurance fraud scheme in Texas - diaphanous
https://www.texasmonthly.com/articles/it-was-never-enough/
======
breakfastduck
What a fascinating character & interesting read.
> he is taking college correspondence courses, “the path of least resistance”
> toward a business administration PhD. “I simply thought, if someone is going
> to call me a con man or [say] ‘you’re an asshole,’ well—it will be doctor
> asshole,” he said.
He may be in prison, but at least he's not lost his sense of humor. Completely
in character based on the rest of the piece!
------
NelsonMinar
I gotta say, ditching a small airplane 30 miles off shore in the Gulf of
Mexico is a hell of a risky way to collect $50,000 in an insurance payout.
You've got a lot of faith in your ability to make a "water landing", much less
that someone comes out and gets you before something goes wrong.
~~~
fny
Sounds like a hell of a lot more fun than setting a house on fire.
~~~
notatoad
yeah, the article seems to paint a pretty clear picture of a guy who figured
out how he could crash-land a plane _and_ get paid for the experience. he
probably would have done it for $5.
------
theli0nheart
Given how long it took to catch him, after years of outrageous purchases and
shady business dealings, it makes me wonder if frauds are much more common
than conventional wisdom would lead one to believe.
~~~
3pt14159
Fraud is extremely common and the best way to avoid it is to get personal
recommendations for anything important, like business contacts, lawyers, or
accountants.
Over a decade ago, a friend of mine was under the legal age when he sold his
collection of websites with the same theme for around $100m. I'm sure you've
heard of at least one of the properties. Being underage, he didn't know how to
protect himself, and he gave his lawyers power of attorney. They took almost
the entire acquisition for themselves and left the USA.
That's fraud. Those guys are still out there. He contacted other lawyers and
they basically said "it's been too long the money is gone and so are these
criminals."
~~~
sizzle
Wow that's a crazy story that would benefit from the Streisand effect to enact
some justice on those scumbag lawyers. Any idea why your friend isn't trying
to actively expose these individuals who screwed him over?
~~~
3pt14159
Well, at first he was worried that he'd be known as a sucker and wouldn't be
able to raise money for another company. "I sold XYZ for $100m, now fund my
new thing ABC." Sounds a lot better than the raw truth. Since then, he's
started something that's doing well. Some pretty interesting investors, some
market traction, but he's still nowhere near being worth $100m.
------
rudiv
They say everything's bigger in Texas, I didn't know it extended to
narcissistic personality disorder.
~~~
thebradbain
As a Texan, I can tell you that's exactly _why_ that saying exists in the
first place.
I'm only half joking...
[https://www.aiadallas.org/v/columns-detail/Everything-Is-
Big...](https://www.aiadallas.org/v/columns-detail/Everything-Is-Bigger-in-
Texas/qh/)
~~~
jfoutz
I'm from a neighboring state that provides hospitality to wealthy Texan
tourists. It's sort of an odd dynamic. Want people to enjoy themselves and
have fun, but also take the wind out of people's sails from time to time. It's
probably easier to explain with an old joke.
A Texan is bragging about how big his ranch is. "It takes all day to drive
around the edge of my ranch". The sly reply is "Yeah, my truck's like that
too".
Generally good natured, but from time to time, one side or the other is a
little too invested in the hype and it's not so funny.
------
AnIdiotOnTheNet
> The venture escalated on a kiosk-buying trip to the Shenzhen International
> Toy and Education Fair, in China, where, T. R. claimed, he came up with an
> idea for a console for pirated video games called Power Player that would
> plug into a TV and allow users to play classics like Space Invaders and
> Galaga. He decided to focus on selling Power Player wholesale. It was a huge
> hit, T. R. said, until the FBI began arresting the biggest Power Player
> retail operators. Panicking, he abandoned his business and left the United
> States with $8,000 to travel in Europe.
I'm pretty sure I actually own one of these. For a while I collected some of
these silly pirate consoles. If I recall this one correctly it had a N64
controller body for some reason.
------
simonebrunozzi
I don't want to read a novel before being able to understand what this is
about. At least the article could provide a quick glimpse at what the fraud
scheme is.
I kept reading for 5-6 minutes and then lost interest.
~~~
bluedevil2k
I don't understand this comment at all.
First of all, the first paragraph is about a suspicious plane fire - the plane
literally burned in half sitting in a hangar. That should provide you _some_
hint about what the fraud scheme is going to be. There's even an animated
image of a plane on fire! Did you need the author to print "this is a story
about insurance fraud" in big bold print at the top of the story?
Secondly, comments like this are really worthless on HN, it adds nothing to
the discussion and as you point out, you didn't even read the article. Why
even bother writing a waste of a comment?
~~~
jasode
_> I don't understand this comment at all. [...] Why even both writing a waste
of a comment?_
I understand where the op's reading frustration is coming from. For some urls
that point to pdf files or racy content, we might put informal warning tags
such as "[pdf]" or "[NSFW]". But there really isn't a meta tag such as
"[human_interest_story]" to warn readers of this type of article:
[https://en.wikipedia.org/wiki/Human-
interest_story](https://en.wikipedia.org/wiki/Human-interest_story)
There's nothing wrong with human interest stories (and even long form texts of
it) but it's tedious for many who aren't expecting it. (I.e. some global
readers aren't familiar with TexasMonthly and its editorial focus on long-form
human interest articles.)
One type of reader just wants the _mechanics of the insurance fraud_
explained. Thus, the human names -- whoever they are -- are not important --
because they will be forgotten 5 seconds after finishing the article. If it's
the "wildest" insurance fraud, what makes it more wild than other insurance
scams? Unfortunately, many articles "trick" readers with a compelling title
about some <situation> but the actual article is mostly about <person(s)>.
Some readers care more about details of the insurance deception than the
escapades of Mr. TR Wright.
Another example of this mis-alignment between some readers and the author is
the _" Why it's so hard to find dumbbells in the US (vox.com)"_ article that
was on HN front page today. The actual article starts off with human-interest
stuff by mentioning people like Andrew, Logan, Fread, etc. and goes on
like that for many paragraphs. But the HN top-voted comment extracts the
relevant explanation that actually _answers the question_ put forth in the
title:
[https://news.ycombinator.com/item?id=24270770](https://news.ycombinator.com/item?id=24270770)
~~~
simonebrunozzi
> One type of reader just wants the mechanics of the insurance fraud
> explained.
Exactly me. And I was not familiar with TexasMonthly, despite having lived in
the US (California) for the last 8 years. (I am originally from Europe)
~~~
bluedevil2k
Then go to Wikipedia and look up "insurance fraud".
------
ryanmarsh
The most disturbing part of this story is how light his sentence was. I sat in
a Harris County courtroom and saw a 30 year old woman with no priors plead
guilty to check fraud (a few thousand dollars worth), and get a longer
sentence.
~~~
phonon
[https://www.smithsonianmag.com/arts-culture/theft-
carnegie-l...](https://www.smithsonianmag.com/arts-culture/theft-carnegie-
library-books-maps-artworks-180975506/)
$8 million, three years’ house arrest and 12 years’ probation :-/
------
sasaf5
Very interesting read!
This fellow reminds me of Barry Seal [0], recently dramatized in the movie
"American Made."
[0]
[https://en.m.wikipedia.org/wiki/Barry_Seal](https://en.m.wikipedia.org/wiki/Barry_Seal)
------
locallost
Some things don't seem very plausible. Obviously he was successful, but I
doubt it was on the scale he tells it. If you're smuggling helicopters from
Marseille to Chad, you don't deal with 40k insurance scams. I also doubt the
claim of 35 million total in insurance fraud. If his scams were in the 40-200k
range as mentioned in the article, he would need to deal with hundreds of
claims. IMHO he's a medium-level Frank Abagnale "catch me if you can" type of con
artist, who also likes to exaggerate his success. He's also a bit too open
telling his story for everyone to know.
I read the whole thing and also did not appreciate the writing. Mostly it's
just cliches and fluff, and superficial in that it didn't really dig deeper
other than taking the said things for granted.
~~~
Semaphor
My impression was (and TR claimed so as well) that he did those things for
fun. He was an adrenaline junkie. It might mostly be lies, and you could be
right, but reading this, doing crazy insurance scams just because he could
seems 100% in character for the person depicted.
------
dkarp
5 years seems like such a small penalty.
He was an international arms dealer and from the sounds of it selling weapons
to countries that it was illegal to sell weapons to. Who knows what those
weapons were used for.
~~~
adrianmsmith
But I think he wasn’t convicted for that? (Not sure why.)
~~~
giarc
I think it's a common situation where you have to decide between crimes with
long sentences that are hard to prove and smaller charges that are easier to
prove.
------
W-Stool
Note - an L-39 is not a MiG - it was designed in Czechoslovakia by Aero
Vodochody. A small point, but these kinds of large errors in articles I'm
supposed to be taking seriously drive me crazy.
~~~
zaroth
There’s nothing about TFA that should be taken seriously. It’s pulp fiction.
Just like TR himself. Not to say “whoosh”, but I’m pretty sure that’s the
whole point of the piece?
~~~
Cederfjard
Completely beside the point, but isn’t it unnecessary to use ”TFA” in this
instance? Personally I’m not at all offended by strong language, but it just
seems hostile for no reason.
~~~
zaroth
To me 'TFA' means The Featured Article.
Is there an abbreviation for that which is as widely known as TFA which
doesn't have a negative potential interpretation (The Fucking Article)?
'OP' refers to the user who posted, not the post itself. The word 'Post' is I
guess an OK but not great alternative.
~~~
Cederfjard
>To me 'TFA' means The Featured Article.
Fair enough, I wasn’t aware/didn’t think of that. My mind went straight to
”the fucking article”, which is why it appeared hostile to me.
~~~
0xffff2
"TFA" is definitely "the fucking article". C.f. "RTFM": "Read the fucking
manual".
------
zhte415
Fascinating read. And perhaps more.
------
0xffff2
>“Yes, I had around $35 million in fraudulent insurance claims around the
world,” he wrote me
...
>He was also ordered to forfeit his Learjet and to pay $988,554.83 in
restitution to various insurance companies
And they say crime doesn't pay. :/
------
selimthegrim
Presumably title is a reference to
[https://en.wikipedia.org/wiki/The_Best_Little_Whorehouse_in_...](https://en.wikipedia.org/wiki/The_Best_Little_Whorehouse_in_Texas)
------
JoeAltmaier
Hey selling insurance in Texas is hard enough. A guy said "You're telling a
guy, buy this and your wife can live in your house and drive your car with
another guy after you're dead". Hard sell.
------
omega3
It's interesting that ATF would be investigating an insurance scam just
because there was arson involved. You would expect a more suitable agency,
one with more experience with such things, to take over.
------
efa
Sounds like a good candidate for an American Greed episode.
------
nakagin
Would make a good script for a Wolf of Wall Street type of movie
------
debacle
> Reed, a fit 29-year-old who was as careful with his clean-cut brown hair and
> clean-shaven face as he was with his deposition-ready phrasing
Is there a tl;dr that would allow me to skip the bulk of this creative writing
essay?
~~~
nwsm
[https://www.justice.gov/usao-edtx/pr/texas-pilot-
sentenced-w...](https://www.justice.gov/usao-edtx/pr/texas-pilot-sentenced-
wire-fraud-and-arson-conspiracies)
------
zalkota
He got away easy! Great read, thanks.
~~~
wyldfire
Losing his wife (and daughter?) seems like a pretty significant consequence.
| {
"pile_set_name": "HackerNews"
} |
Jquery is better than React - qhoc
https://medium.com/@melissamcewen/jquery-is-better-than-react-cd02dfb026a6
======
oxmo
Total bullshit. jQuery and React are very, very different.
| {
"pile_set_name": "HackerNews"
} |
Say Goodbye to Alexa and Hello to Gadgets Listening to Voice Inside Your Head - startupflix
https://medium.com/mit-technology-review/say-goodbye-to-alexa-and-hello-to-gadgets-listening-to-the-voice-inside-your-head-3405ef93835b
======
Finnucane
Video demo:
[https://www.youtube.com/watch?v=uUa3np4CKC4](https://www.youtube.com/watch?v=uUa3np4CKC4)
~~~
startupflix
Thanks :)
Programming reminds me of my stand up comedy days (2018) - songzme
https://medium.com/@kevinyckim33/how-programming-reminds-me-of-my-stand-up-comedy-days-5522722c4d73
======
JabavuAdams
"Whenever I catch myself wondering whether this function I’m writing is going
to crash or not, I remind myself to just run the tests or the app."
One lesson that often comes later in one's programming journey is that there
are surprisingly, shockingly, many bugs that won't be found by just running
the code. Like you're debugging one thing, and you find this other thing that
has been in the shipped code for a year, intermittently causing crashes, but
no one was the wiser. The "how could this possibly be working?" moment.
So, I fully embrace the idea of incremental ground-up development, but
running the code just to see if it crashes is too low a bar. Beginner versus
craftsperson. I would suggest a high-value compromise is to single-step
through any function that you just wrote. This has revealed a huge number of
logic errors to me, even without writing additional tests. It's super low-
hanging fruit that a lot of people don't even pick. I think that suggestion
came from Code Complete, which although old, is still a great resource for
beginning programmers. I also recommend The Pragmatic Programmer.
EDIT> Liked the article. Am also interested in standup. Also just recently had
a very productive couple of evenings kit-bashing together some wood scraps
where the tactile nature of holding a battery here, there, trying to orient it
etc. seems to have been vastly more productive than sitting down with
Solidworks or overthinking the early design.
------
AshleysBrain
I've been surprised at being able to draw parallels between programming and
music performance, writing and more. I think there's plenty in common at a
high level when working on a creative project of any kind, such as your
attitude to improvement, dealing with setbacks, analysing results, and the joy
of when it all comes together!
~~~
DenisM
OTOH humans are wired to find patterns, existent or imaginary if it comes to
that.
Perhaps more important is the act itself of pondering your occupation - doing
it long enough is bound to yield insights.
~~~
qznc
Yes. The point of this article is not that you can learn something about
programming by doing stand up comedy. You cannot. The point is that you can
learn something about programming (by doing it) and make it memorable via
analogy. In this case the analogy is comedy, but the actual topic does not
really matter.
------
qznc
> Whenever I catch myself wondering whether this function I’m writing is going
> to crash or not, I remind myself to just run the tests or the app. The
> terminal and the browser will always have the answer.
This is not generally true. It works if you program for fun. It does not work
on critical software. It works for more programming projects than it should.
~~~
ThalesX
When I was a junior, I was trying to debug a piece of software. I was
attaching my .NET debugger to the thing and just running line by line like a
madman trying to catch a race condition.
The technical team lead had a sit down with me where we just inspected the
functions and reasoned about them. It was eye opening just how fast we found
the fault and the understanding I had after actually reasoning about the
system.
------
hyperpallium
I agree with getting started.
If Michelangelo iterated studies (in marble!) and drafts to work out mistakes
and what he was doing, surely I can too.
BTW Seinfeld showed his (long) development of his "pop-tart" joke:
[https://youtube.com/watch?v=itWxXyCfW5s](https://youtube.com/watch?v=itWxXyCfW5s)
------
phirschybar
this is great. I have actually met 3 developers in the course of my career who
went from standup to programming. Those are also the only 3 standups I have
ever met!! Never understood the connection until now!
1.8M American truck drivers could lose their jobs to robots - Futurebot
http://www.vox.com/2016/8/3/12342764/autonomous-trucks-employment
======
pedalpete
This is definitely coming, and truckers need to be aware. Upskilling now is
imperative.
At the same time, it isn't just the truck drivers. Logistics is also changing
and becoming automated, then you've got delivery drivers (I'll assume they are
also considered in the same bucket as truck drivers), bus drivers etc.
Self-driving cars also have a huge impact. It isn't just taxis/uber-drivers
that are being replaced, there is the change to infrastructure. How do parking
needs change? How about traffic police?
I was thinking about this yesterday: how do we think far enough ahead to
prepare as a society, and benefit from these changes?
On a side-note, I seriously question the stats from NPR's most common
occupation. How can you have states where the most common job is Secretary? Is
Software Engineer really the most common profession in any state?
~~~
pkroll
Indeed, it appears there's a viable argument for "retail sales" as the most
common job in 42 states. [http://www.marketwatch.com/story/no-truck-driver-
isnt-the-mo...](http://www.marketwatch.com/story/no-truck-driver-isnt-the-
most-common-job-in-your-state-2015-02-12)
------
astrodust
This is an interesting sort of career path for people. A hundred years ago
there was no such thing as a long-haul trucker; the only reliable way to get
goods over vast distances was train if there were tracks, or boat if there
were navigable waterways.
Through the 1950s trucking became a force to reckon with, and these days with
seemingly everyone moving to zero-inventory systems it requires a degree of
flexibility that rail can't afford.
Still, you have to wonder how many horse and buggy drivers were put out of
business by long-haul truckers. A single motorized vehicle could do the work
of twenty wagons which were probably the standard mode of transportation in
many places until highways emerged.
Self-driving is just the next step.
~~~
edko
However, the effect this next step can have on the overall economy could be
disastrous. Not only is truck driver the most common profession, meaning
their unemployment would create hardships for millions of families and for
the industries related to truck driving, but there is also the knock-on
effect. Imagine all those families spending less, creating recession,
lay-offs, and bankruptcies in every industry. It could be more disastrous
than the housing crisis.
~~~
astrodust
It will have a pretty major impact on things, but then again, so did switching
from wagons to diesel trucks. How many people making horseshoes and buggies
went bankrupt? How many people were there that used to deal with the horses,
with the carriages, with selling goods and services to people?
The efficiencies gained by long-distance hauling being practical made entirely
new industries possible. So long as this automated transport enables _new_
business opportunities that require labour it will be a positive thing.
If this is robotic trucks driving goods to robotic factories with products
designed by robots, then we're screwed.
~~~
flukus
And how much social upheaval was there? Enough to kill off a good chunk of the
working population.
------
Shivetya
I think robots will be the future in many industries, but honestly
self-driving cars are nowhere near where they need to be. There are none on
the road right now, yeah, none. You cannot trust a single one to be safe in
conditions the average driver can drive in. Soon as it rains, snows, or such,
these systems start to fail.
Oh sure, they brag it up on the internet, but damn, they are doing the easiest
driving there is. Throw a few plastic supermarket bags in front of your "self
driving" car and you will find it isn't anything more than self-tailgating.
Really, why are these systems so big on tailgating when it's not guaranteed
you are tailgating another car with similar equipment? Finally, who is liable
when a supposedly self-driving car is told to speed?
Back to trucks: long haul might get it first, but not unless roads are set up
to accommodate them, because bad-weather driving is a lot further off than we
suspect.
------
force_reboot
There is a glaring contradiction between the two mainstream narratives, one of
which goes that we need working class jobs because not everyone is capable of
middle class jobs, the other of which says there aren't enough people to fill
working class jobs so we need immigration.
These two narratives exist in almost every Western nation.
Fake job listings to get users? - TokyoKid
Hi guys,
I'm a first-time poster but long-time start-up fan.
Recently I started looking for some work to help me through college. I found a
job on my school internship page and applied, but a few things made me
suspicious.
First, the job was for a "content selector", who must find good content and
"push it through" to the site. But on this site, that is exactly what the
users are supposed to do. Why hire someone to do the users' job?
Second, they ask the applicant to sign up on the site and post a certain
number of items, then include their username as proof. I have never seen that
as a requirement before.
When I applied, I mentioned I found a large bug. But their response to my
application was very canned-sounding and they did not ask about the bug.
Instead, they asked me to give feedback on the invite-a-friend feature.
I responded that I would give my feedback during the interview instead. After
sending a follow-up a few days later, I still haven't heard a
non-cut-and-paste response.
I noticed that the job is not posted on my city's de-facto job boards. It's
only on my college internship page and their Twitter, from what I see.
Also, it's "to be done remotely", and I see a few users on the site who have
about the same amount of activity as I do, from different areas.
Does this sound like a ploy to get users to anyone else? Is there a way it
could be proven, if it is? Is this illegal? Is this common in startups? Who
can I report this to?
Thanks for reading.
======
palakchokshi
For reference, Reddit created thousands of fake user accounts to solve the
"ghost town" website problem[1]. I created a product that relied on users
creating/posting content to their accounts on the site; however, only a few
users are brave/inquisitive/adventurous enough to post content on a site
that's a "ghost town". So to mitigate this, a company might pay interns to
create an account, post content and share that content, essentially becoming
a legitimate user of the site. The hope is that as they start to create
content, post content and share it, their friends will see it and maybe want
to join too.
Regarding the other stuff you mentioned about the bug, if it was my site I
would have investigated your bug report and if it was indeed a big bug I would
have fixed it and hired you.
[1] [http://www.dailydot.com/business/steve-huffman-built-
reddit-...](http://www.dailydot.com/business/steve-huffman-built-reddit-fake-
accounts/)
Ask HN: Software Engineering or Computer Science? - cmelbye
Hi all, I have a quick question. When hiring someone, is it more desirable that they carry a Software Engineering degree or a Computer Science degree, or are both equally desirable?
======
shrughes
It doesn't really matter, except that if they got a Software Engineering
degree from a university that offers both, they're probably a soulless cretin.
(The actual manner in which their education is judged is based on the quality
of the university, and, if you even have the information, the sort of courses
that they took.) When I see somebody that willfully decided to take boring
software engineering courses, I tend to have prejudicial thoughts about them.
~~~
eshvk
> It doesn't really matter, except that if they got a Software Engineering
> degree from a university that offers both, they're probably a soulless
> cretin.
Or, you know, they could have done an EE degree, decided after they had done
enough courses that EE was not their thing, and switched to Software
Engineering so as to graduate with some degree that made use of the
requirements they had already completed and was at least related to what they
wanted to do in real life.
~~~
shrughes
> EE was not their thing
Like I said, soulless cretin.
------
3amOpsGuy
That's really hard to give a decent answer to. The answer really depends on
the people involved in filtering the applications.
Can you mirror what the job advert lists it's looking for?
A bit more outlandishly, could you identify a person working in or near the
position you're after, then google their CV to see what they listed?
For me personally (not that it should be used to base your decision on at
all), I've never distinguished between the two.
Just for contrast, someone I used to work with (an electronic and software
engineering graduate) detested the idea of anyone without an engineer's
charter (himself included until recently) referring to themselves or their
education with the word "engineer". More than once he gave graduates a
dressing down for using the E word... life's too short for that IMO.
Where is your user name registered? - maxwell
http://www.usernamecheck.com/
======
randomwalker
There's this idea I've had for a while, but never got around to implementing:
every namespace is a market. So there should be a place where you can trade
ownership of tokens in different namespaces. There already exists a healthy
market for domain names, although it is heavily fragmented, but usernames on
websites are becoming valuable enough that it makes sense to trade usernames
as well. Once such an infrastructure exists, you can imagine auctioning a
variety of things this way -- phone numbers, license plates..
I've fleshed out the idea in my head in some detail, but that's the gist of
it.
------
mindaugas
Maybe it should do everything in parallel?
~~~
Hexstream
And cache results or at least prevent you from checking the same name twice
(or infinite times) in a row.
~~~
RossM
As far as I know it does do some caching (or at least I've heard
@usernamecheck (<http://www.twitter.com/usernamecheck>) tweet something to do
with caching).
------
ryanspahn
Does it let me claim my name and auto register said name to each site?
A new form of Open ID! Visual and lazy!
------
jasonkester
A natural feature for this would be to wait until the user leaves your site,
then register accounts at all those sites using the supplied username. Then
squat on those new accounts and offer to sell them back to people.
Collecting emails would help!
------
thwarted
The utility of a consistent username across sites that have them visible
(usernames are obviously more useful at flickr than at your bank, outside of
ease of remembering) is mitigated by content aggregation sites like
friendfeed. Once I've added a service to my "Me" listing at friendfeed, it
becomes authenticated to my identity, no matter what the exact username is.
------
alecco
Didn't have time to analyze his code, but from the privacy statement he does
server side stuff. No-go for me. This could be done client side only.
Edit: yep he does call /check/<site>/<username>.
~~~
mitchellh
A privacy statement for a usually public username anyways? I don't think this
kind of stuff needs to be done client-side only. If passwords were being sent
along too that would be something different but usernames are meant to be
shared and usually somewhat public information.
Unless you're worried about the maker of the site "stealing" your identity on
some site... although that's a pretty weak argument. If there is a "mitchellh"
on some site I just make a "mitchellh3" or something of the like... no big
deal.
~~~
alecco
This site could match usernames with OS/browser fingerprint and IP address
(location.)
------
petercooper
Very clever little tool. I can't see myself using it too often, but it's shown
me lots of semi-popular sites where I can still pick up my username.. so I
might just have to do that next!
------
HeyLaughingBoy
Am I the only one who still reads /. ?
------
indiejade
What, no Slashdot? ;)
A Sticky String Quandary - hyperpape
http://www.stephendiehl.com/posts/strings.html
======
teh
I read this and I agree with everything that he says - but I also think it
makes Haskell look worse than it is to some passer-by.
My impression is that the Haskell community is very self-critical (which is
great), but someone just peeking in from the outside might think that Haskell
is still in the random hobby project stage, and that it still hasn't figured
out strings.
That's totally not the case though! Strings are solved, just a bit annoying to
use sometimes. We're running Haskell in production and it's amazingly stable
and hard to break.
I wish the community was bigger though, that's why I'm posting this to
encourage everyone to try it.
~~~
StefanKarpinski
Stability seems like an orthogonal issue to the standard string representation
being "quite possibly the least efficient (non-contrived) representation of
text data possible". As a production Haskell user, what do you do when you
have to load a large amount of text data?
~~~
tome
> As a production Haskell user, what do you do when you have to load a large
> amount of text data?
Use Text
[https://hackage.haskell.org/package/text](https://hackage.haskell.org/package/text)
"The Text type represents Unicode character strings, in a time and space-
efficient manner. This package provides text processing capabilities that are
optimized for performance critical use, both in terms of large data quantities
and high speed."
~~~
StefanKarpinski
Isn't "hackage" the not-fully-vetted / unstable namespace of packages? Is it
recommended to use hackage packages in production systems?
~~~
Chris_Newton
In the case of the text package, it’s a well-respected _de facto_ standard
regardless of its official status, and probably as safe a dependency as you’re
ever going to get in practice.
Anecdotally, everything serious I write in Haskell uses Data.Text by default
these days, and just converts to and from other representations like String as
necessary. It’s mildly inconvenient for things like writing Show instances or
integrating with a library that uses a different convention, but still better
in almost any context than using String by default IME.
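To illustrate that boundary-conversion pattern (a sketch only; the UserName
type and helper below are hypothetical), pack/unpack bridge between
Text-first code and String-based interfaces like Show:

    import qualified Data.Text as T

    -- a hypothetical type that stores Text internally
    newtype UserName = UserName T.Text

    -- bridge to String-based interfaces (such as Show) via unpack
    instance Show UserName where
      show (UserName t) = "UserName " ++ show (T.unpack t)

    -- accept String input at the edge via pack
    mkUserName :: String -> UserName
    mkUserName = UserName . T.pack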
------
guard-of-terra
Haskell has well-known deficiencies in its std lib (Prelude).
I would make a conjecture that it's also true for Lisps (the quality of the
std lib is often poor) and other powerful languages.
On the other hand, languages of a simpler kind, like Java or Python, have more
adequate std libs.
It's because, for a really powerful language, the std lib has to be
opinionated. And people understand that, but they can't agree on anything. So
they live with whatever common denominator, and lose a lot of traction there.
A counterexample is Clojure, where the std lib is very nice, if heavily skewed
towards FP, reinforcing my point.
~~~
wyager
Complaints about Haskell's prelude usually fall strictly under the definition
of "first-world problems".
"Ugh, this length function isn't parameterized over the integrals?"
Or, alternatively:
"Ugh, this length function is parameterized over the foldables?"
People will never agree what's best, but it doesn't really matter. One nice
thing about Haskell is that all the "default" functions, types, and data
structures are just imported from the Prelude library. You can just import
your own version if you want, and people do that.
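A sketch of that mechanism (the module name and the headMay helper here are
made up, not any particular alternative-prelude package): turn off the
implicit Prelude and import a module of your own that re-exports what you
want:

    -- MyPrelude.hs: a project-local Prelude replacement (hypothetical name)
    module MyPrelude
      ( module Prelude   -- re-export the standard Prelude...
      , headMay          -- ...plus a total alternative to the partial 'head'
      ) where

    import Prelude

    headMay :: [a] -> Maybe a
    headMay (x:_) = Just x
    headMay []    = Nothing

    -- Main.hs: opt out of the implicit Prelude and use the local one
    {-# LANGUAGE NoImplicitPrelude #-}
    module Main where

    import MyPrelude

    main :: IO ()
    main = print (headMay ([] :: [Int]))  -- prints Nothing instead of crashing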
~~~
seagreen
I wish this was the case, but it's not. A bad string type and lots of partial
functions are legitimate red flags, not "first world problems".
~~~
wyager
> A bad string type
It's only bad in the sense that it's inefficient. It's preferable to most
other string types in many ways. It's also totally reasonable to have in the
prelude: [] is there, and Char is there, so really all they've done is slapped
the two together. The current situation (importing ByteString or Text for
performance) is pretty good IMO. Haskell doesn't have Map or Set or anything
in the prelude, so I think it's reasonable to leave powerful packed text types
out as well. In fact, I would even support leaving [] out of the prelude and
making the import of Data.List explicit.
> lots of partial functions are legitimate red flags
This is true, but the fact that we're even complaining about this is a first-
world problem. Other languages have partial behavior _built in_ to the
language. Try going to a python or java developer and telling them "array
access should return an optional type instead of throwing an exception on out-
of-bounds". This is totally reasonable to deal with in Haskell, but
inconceivable in other languages.
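For what it's worth, the Haskell side of that suggestion is only a few lines
(a sketch; the function name is invented, and the Prelude's standard (!!) is
the partial version being complained about):

    -- total (non-partial) list indexing: out-of-bounds yields Nothing
    safeIndex :: [a] -> Int -> Maybe a
    safeIndex xs n
      | n < 0     = Nothing
      | otherwise = case drop n xs of
          (x:_) -> Just x
          []    -> Nothing

    -- safeIndex [10, 20, 30] 1  ==  Just 20
    -- safeIndex [10, 20, 30] 7  ==  Nothing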
~~~
seagreen
The current situation (importing ByteString or Text for
performance) is pretty good IMO. Haskell doesn't have Map
or Set or anything in the prelude, so I think it's
reasonable to leave powerful packed text types out as well.
Interesting. Maybe all we have is a community problem then? Since `String` is
right there, and `Text` is both an additional dependency and an import away,
`String` gets used in situations it shouldn't.
Try going to a python or java developer and telling them
"array access should return an optional type instead of
throwing an exception on out-of-bounds".
Alright, I think we have synthesis. From a Python programmer's perspective the
Prelude issues are somewhat trivial. From a Haskell programmer's perspective
it's offensive to good engineering. So it depends on your perspective :)
~~~
wyager
Agreed on both points.
It's unfortunate that people use String just because it's there, or Int when
they should use Word (because Word wasn't added to the Prelude until
recently). For example, it's awful that length returns an Int. It should
return a Word or be generic.
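Both of those options already exist for lists via `Data.List.genericLength`
(a sketch; the wrapper names below are made up):

    import Data.List (genericLength)
    import Data.Word (Word)

    -- length pinned to Word instead of Int
    wordLength :: [a] -> Word
    wordLength = genericLength

    -- or generic over any Num instance
    genLength :: Num i => [a] -> i
    genLength = genericLength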
Indeed, the Haskell programmer is the one I'd say lives in "the first world".
Node.js - ANSIdom to share HTML templates between the browser and the terminal - coenhyde
http://ohh.io/ANSIdom
======
Cieplak
This is really awesome. I don't understand how it works yet.
~~~
Cieplak
My understanding is that the server sends either ANSI codes or HTML depending
on the user agent.
People often use the word ‘you’ rather than ‘I’ to cope with negative experience - upen
http://exactlyscience.com/archives/11689.html
======
DrScump
Blogspam of
[http://www.ns.umich.edu/new/releases/24689-it-s-really-
about...](http://www.ns.umich.edu/new/releases/24689-it-s-really-about-me-not-
you)
#GamerGate's Detractors Aren't Doing Themselves Any Favors - 5trokerac3
http://www.exitevent.com/article/gamergates-detractors-arent-doing-themselves-any-favors-101014
======
zimpenfish
Except, though, this has been going on for -decades- and there was no
organised campaign until the sudden miraculous pivot of the gamergaters when
people started noting and calling out their harassment bullshit.
------
jonifico
I'm sure FIFA is a massive example of this. Pretty much anything by EA, come
to that.
Apple copies rejected app - Gupie
http://www.theregister.co.uk/2011/06/08/apple_copies_rejected_app/
======
bradleyland
Hrm. Dan Goodin. I recognize that name. This is the same author who published
an article on _The Register_ with the title "Skype bug gives attackers root
access to Mac OS X", which was factually incorrect. He corrected the headline
after much hoopla, but it strikes me that Mr. Goodin is a professional link-
baiter.
The title has it backwards. Isn't WiFi sync a fairly obvious feature that
Apple has likely had in the works for quite some time?
Based on what I've read, this sync app was only possible because of some low-
level sync frameworks that were already present in iOS. The feature wasn't
ready by _Apple Standards_ , but Apple didn't want a poor implementation of
what should be a system-level feature in the wild. One could argue that the
rejection of his app was an act of protecting the user experience. This is
something Apple does regularly. If you don't want the protection, you should
head over to another platform.
Acting shocked at any of these facts just shows that you haven't been paying
attention.
~~~
brudgers
Every app is based on features of iOS implemented in a way that Apple has not
done yet.
~~~
bradleyland
If you want to look at it as a dichotomy, sure, but can't we agree that
there's a continuum here? Syncing is certainly a "core" feature. Can anyone be
surprised when Apple protects this as something they want to implement?
I think that as an app developer, you have to consider this continuum when you
set out to develop an app. Many "utility" apps would be considered closer to
the core. Apps like a medical x-ray viewer are further from the core. That's
not to say you shouldn't develop a to-do app, but you should A) plan your
product ramp in a way that you recover your investment quickly, and B) not be
surprised when Apple announces a simple, integrated to-do solution.
~~~
brudgers
One could carry the same analogy to level apps, compass apps, music streaming
apps (particularly given the iPod and iCloud), etc.
More relevant to this case, is that Apple rejected an app that (based on the
evidence presented) met every knowable requirement for being included in the
app store and which had a high probability of generating substantial revenue.
Then Apple appropriated the name and icon.
All this makes it hard to consider Apple's actions in this matter to be
ethical in any meaningful sense of the term.
~~~
tobylane
The icon was a mix of the Mac's wifi and sync icons. For all we know Apple
didn't want their icons used by someone else, that's a legit reason that's
been used elsewhere. Also judging by the comments on the TUAW post of this
topic, the app was low quality, and the support was even worse. Apple don't
like that, rightly.
~~~
brudgers
At the time it was submitted, Apple was not purging "low quality" apps - and
as noted in the _Register_ story, Apple thought enough of its
implementation to call the developer and request his CV. This would be more
consistent with Apple's engineers being impressed by the implementation rather
than it would be consistent with its poor quality being obvious.
Furthermore, given your premise that Apple had something in the works but
could not create an implementation which was good enough, it is clearly
plausible that Apple's technical review of the app provided a roadmap for
improving their implementation to the point where it was good enough.
Finally, it is highly unlikely for poor support to have been a reason for
rejecting the application because it can rarely if ever be determined for new
apps. The infringing icon argument is not backed up by the fact that Apple did
not mention it in their rejection and has not taken legal action in the year
it has been in use for the jailbroken versions.
~~~
scott_s
That something is _plausible_ is not evidence that it _happened_.
~~~
brudgers
Absolutely. That's why I chose "plausible."
I will be the first to recognize that the plausibility of one line of
speculation regarding the course of Apple's actions is no more evidence of an
actual state of affairs than the plausibility of other lines of speculation
within the discussion are evidence that those events indeed occurred.
------
peteretep
So to get this straight: the guy who took Apple's icon for syncing and added a
wifi symbol thinks Apple ripped him off taking their icon for syncing and
adding a wifi symbol? Who'd a thunk.
~~~
tmgrhm
Mhm. And the fact that he produced the first public implementation of this
means that Apple isn't allowed to implement their own version — never mind the
fact that such a feature requires lower level control than the App Store
guidelines allows for its apps (meaning it's exactly the kind of feature that
Apple should be implementing themselves, not App Store developers).
~~~
rb2k_
I think that's the main point: he created an app that was probably declined
because it used private APIs (or required root access?).
Also: he produced the first public implementation of what? A wireless syncing
software? iSync on OSX came a while before that and I'm sure there have been
quite a few before that.
~~~
tmgrhm
Mhm, so I don't understand why he expected anything different or why people
are so outraged it was rejected — that's one of the major benefits of the App
Store: sandboxing and access restrictions of apps.
Yeah, as far as I know it was the first publicly-released wireless syncing
software designed to let you sync iTunes and iOS.
~~~
jarin
I would guess that 99% of the outrage stems from the similarity of the icons.
~~~
maguay
Only problem is, that's about as generic an icon as you could get by mixing
Apple's default iOS icon style with a standard WiFi and Sync icon. Not that
unique...
~~~
jarin
I'm pretty sure that wasn't the "standard" WiFi icon until it showed up in OS
X. I think the original "standard" WiFi icon was either this one or the one
that looks like a person with radio waves coming out of their head:
<http://en.wikipedia.org/wiki/File:Wifi_logo.jpg>
~~~
alanh
There _was_ no international standard wireless Internet logo for a long, long
time. There still may not be. But this one is indeed Apple’s standard AirPort
icon (they dropped one ring going OS X → iOS due to size constraints).
------
sambeau
Does anyone here seriously believe that anyone in the Apple department
responsible for Wifi Syncing will have ever seen this app and its icon?
Apple will have been working on Wifi syncing since far earlier than last May.
I wouldn't be surprised if they had it working when they first launched the
iPhone but held it back for other sensible reasons (not everyone had wifi,
power usage, speed, reliability, no delta updates etc).
Just as authors are warned by their lawyers not to read or accept fan fiction,
Apple's developers will be kept well away from reviewing apps.
The concept is an obvious one; one that has had much discussion on the
internet and on this site in particular.
The icon is the most obvious and clearest solution you can draw. I spend most
of my day drawing icons and if you had asked me to create an icon for this I
am 100% certain that I would have put a wifi logo into the middle of a sync
logo. It is a completely obvious thing to do looking at the respective shapes
and line thicknesses.
This is a non story.
------
pseudonym
I wish I was surprised, but this seems to happen with a lot of OS-extending
apps on the iOS device. I've never heard of a game being banned from the app
store, but as soon as it's something that Apple doesn't already have baked
into the operating system...
It's been said before and it'll be said again: Playing in Apple's walled
garden isn't a safe way to make a living.
~~~
tvon
As a developer it is probably not a good idea to use private APIs to implement
a feature that has obviously been on the roadmap since day 1.
~~~
hullo
well, maybe it's actually all about timing, the article implies he's grossed
in the neighborhood of $500k so far (minus the impact of "sales")
~~~
jarin
Yeah, I mean stories like this shouldn't really discourage developers.
Number one: if you are a developer and you don't plan for something like this
that's just a lack of awareness on your part.
Which leads to number two: don't depend on a single revenue stream. You're
making decent money with your first app? Cool, now pay a couple of interns to
handle support requests and start working on the next one.
------
tobiasbischoff
Easily the greatest bullshit I've ever read. This Cydia tool was just a hack
that activated functions already in place in iTunes and iOS. Just have a look
at the 1st gen Apple TV, which has had wireless syncing to iTunes since 2006.
I guess they considered it too slow and unreliable in the past to activate it
for the iPhone; maybe the iCloud concept, faster processors and wireless
networks led to their decision to activate it in iOS 5.
~~~
voxmatt
Seriously. It was always obvious Apple would do this eventually. The kid's app
wasn't a novel idea, nor did it have a novel name or icon; it just did
something everyone and their mom knew Apple would implement eventually, but
just hadn't yet. This app was a hacked stopgap to put obvious tech in place.
This article is really over the top.
------
blownd
Ludicrous link bait headline and tabloid trash article from The Register.
Apple didn't copy the app; it sounds like they were maintaining control of
their interests; no one should be surprised by that given Apple's track
record.
That's not to say Apple haven't copied others' apps; they've positively
trampled on a slew of third-party apps with enhancements in Lion and iOS 5,
but that's all part of the game at this point.
~~~
metageek
So, once someone has a track record of being evil, any further evil they do is
not worth covering?
------
alanh
1\. The idea of wireless sync is so obvious that customers have been asking
for it since, oh, half a decade ago when iPhone was introduced.
2\. The icon, while similar in concept, is literally nothing more than Apple’s
standard “sync” icon plus Apple’s standard AirPort (Wifi) icon.
3\. (Bonus) After rejecting the app, which _did_ perform activities not
allowed in the SDK, Apple expressed interest in hiring the kid anyway.
Manufactured controversy. Snore.
~~~
ugh
People have been asking for wireless sync for the last decade. Does nobody
remember the immortal “No wireless. Less space than a nomad. Lame.”?
That was written in 2001 about the first iPod. (The actual introduction of
wireless sync nearly a decade later has been pretty anticlimactic. Apple took
so long that no one is very impressed or surprised anymore. I think that it
was about 2006 when everyone started believing that wireless sync would be the
next big thing for iPods but then came the iPhone.)
------
yardie
I tried this app in the past. It was very....slow.
Which is why I think Apple rejected it. Their syncing protocol, even over USB,
was painfully slow. Over wifi it was dreadful. Apple has a "do it right or
don't do it at all" philosophy.
They seem to have fixed USB syncing in 4.3 because it takes me less time
than before. I'm fairly confident that if he submitted his app after 4.3 was
released it probably would have passed, but now that iOS 5 is on the horizon
and contains the same functionality it has made his app irrelevant.
~~~
tmgrhm
I think it's far more likely it was rejected because it uses private APIs and
takes lower-level access than App Store guidelines allow.
~~~
yardie
From what I remember it used the published APIs which Apple then unpublished
and rejected his app. This is why the story got so much traction in the first
place. If it was another developer doing cool things with unpublished APIs it
would have been sold through one of the other app stores and that would have
been the end of it.
It was rejected because Apple changed the rules mid-game.
~~~
tmgrhm
That certainly does change the angle of my story — have you got a source for
that?
------
xedarius
I think the more interesting story is quite how much money you can make via
the jailbroken phone marketplace.
------
nphase
This seems silly to me. Apple knows its own product roadmap, so why wouldn't
they reject an app that implements a half-baked version of a product line
they're releasing themselves?
------
shinratdr
As a purchaser of Wi-Fi Sync, fuck him. He's an extremely unprofessional
developer who provides terrible customer service. Don't buy his app, even at
$2.99.
He dropped off the map after promising a Windows beta for WiFi Sync 2, he
won't refund purchases for any reason, and he used misleading language that he
refuses to own up to when promising sync over 3G.
Apple's implementation will be way better anyways. It's already much faster
and it syncs in the background over USB.
------
nhannah
Apple is setting themselves up for a Microsoft-style lawsuit in the future.
Everyone here seems very defensive of Apple, and while I think a review policy
does help a lot in keeping bad apps out, a move like this could easily be
brought to court with a huge settlement having to come from Apple. Actually
trying to hire the guy could look pretty bad for them, as it could be
construed as trying to avoid a possible suit.
------
bengl3rt
Happened to me as well... over a year ago, when iAd first came out, a friend
and I built an iAd gallery app. Rejected.
A few months ago I saw on Techcrunch that Apple had released their own iAd
gallery that looked practically identical. Oh well.
~~~
alanh
I bet the difference is that if you use Apple’s, no advertisers are forced to
pay for what are essentially fraudulent views.
------
scelerat
I'm not saying Apple _didn't_ blatantly rip off this guy's work. But. I'm
having a hard time believing someone at Apple saw this submitted to the App
Store in May and rushed to get it into the iOS 5 spec a month (or less) later.
More likely the app was rejected because the feature was already planned. The
rejection response was a cover lie.
------
Osiris
In cases like this, do developers have any legal grounds to sue? Would the
developer had to have patented some of the technology to gain a legal basis
for a suit? If Apple can claim it was a clean-room implementation copying the
same functionality, I assume he's just out of luck?
------
dbaugh
There is nothing like free contract work. This is no different than the way
Microsoft treated developers before the anti-trust hammer was brought down
upon them.
------
allan_
gaahh, all this apple shit, so 2009
Is Dentistry Science Based? - anthilemoon
https://sciencebasedmedicine.org/is-dentistry-science-based/
======
h2odragon
Dentistry is practiced in a wider variety of ways than medicine: Most dentists
would agree that things like drilling teeth at the gumline "to relieve
pressure" and then accusing the patient a year later of having taken up meth
because "look at all these cavities you have now!" aren't good.
If this was doctors, I'd expect other doctors to do something about their
fellow practitioner. Dentists seem to view that as rude, after all, "he's been
in business for a while," and he creates so much work for _them_.
I have to say dentistry is Market based.
Doom as a tool for system administration (1999) - ColinWright
http://www.cs.unm.edu/~dlchao/flake/doom/
======
ckeck
How did I never hear about this before?
"I Have a Startup" - Midwest vs Bay Area - garbowza
http://leavingcorporate.com/2008/12/30/i-have-a-startup-midwest-vs-bay-area/
======
Shooter
"I have a startup" seems a very ODD way of saying what you do when people ask,
anyway. It is almost intentionally oblique. Even as a serial entrepreneur, I
would probably have a similar "midwestern" reaction if someone said that when
I asked them what they did...it's just a weird, passionless answer.
If you ask a person at a big corporation what they do, they don't usually say
"I have a job with a big corporation." Do they? Instead, they would usually
say something like "I work in accounting for the largest trucking company in
the US." or "I'm a salesman with a company that makes office copiers." Boring,
but at least it's an honest answer.
An entrepreneur should be able to muster much more passion than "I have a
startup" when asked what it is they do. People tend to respond positively when
you show passion and enthusiasm and speak directly about what you do. Even if
you have to 'dumb it down' for them to understand (or omit secret
information.) Maybe this guy should have enthusiastically said, "Yeah, I'm
working on a really neat piece of software that helps people to communicate
better by XXX"
For some reason, "I have a startup" makes me think of a few people I know that
have no passion for school, but have stayed in grad school for years on end so
they can avoid choosing a career. People may have pity or confusion in their
voice only because what you're saying has a cop-out or apologetic vibe to
it...?
~~~
thingsilearned
This is a great point! Saying that you're working on some excellent software
and what it's trying to do may be the best way to approach the conversation,
especially in the Midwest...
Once you describe what you're doing, then you'll be asked who you work for,
and when you say "yourself", the idea of a startup will be better explained.
I'm always hesitant to do so because I feel that describing what project you
work on doesn't describe the hours you work, the responsibilities, and all the
extras that come with a startup.
~~~
mhartl
N.B. In case it wasn't clear, thingsilearned is the author of the post.
------
bradgessler
The Bay Area is special from a funding/VC perspective. After pitching in
Chicago, Boston, and SF, I've come to realize what makes the valley special:
there are so damn many investors that they don't all know each other, which
reduces "group think."
In cities like Chicago, there are a handful of VCs that all talk to each
other. If you pitch to one and they don't like your idea, they trade notes
with their other VC buddies in the area and that's it. You're done. Pack up
your bags and head to the next city. I could count the number of funded
Chicago web startups on one hand.
In the valley there are so many investors with so much more experience willing
to fund ideas that investors from other areas simply wouldn't touch. If you
pitch to one guy and he doesn't like it, go pitch to another investor... and
another... and another... it will be a while before you exhaust this list.
From my experience the bay area doesn't suffer from the same group think
problems that most other areas in the world suffer from; including a sizable
city like Chicago.
------
male_salmon
Isn't this exactly what PG said in his Cities and Ambition -
<http://www.paulgraham.com/cities.html> \- essay?
_How much does it matter what message a city sends? Empirically, the answer
seems to be: a lot. You might think that if you had enough strength of mind to
do great things, you'd be able to transcend your environment. Where you live
should make at most a couple percent difference. But if you look at the
historical evidence, it seems to matter more than that. Most people who did
great things were clumped together in a few places where that sort of thing
was done at the time._
~~~
thingsilearned
Yup, its very much the same point but narrower in focus and less eloquently
stated. I should have linked to his article in mine.
------
fallentimes
One of my favorite quotes: _"Being a startup founder in SF is like being an
actor/model in LA."_
~~~
jaspertheghost
And being a venture capitalist is like being a producer ?!? :-0 !
I kid because I love exclamation QuEsTiOn mArK
------
mattmaroon
I don't think that's fair at all. I find Midwesterners my age or below to be
just as excited about internet startups as anyone. The guy who plays poker for
a living in Las Vegas is just another gambler, the guy who does it in Indiana
is a rock star. Same thing with internet entrepreneurs. I know all of the
above first hand.
The big difference is that in the Bay Area, people significantly older than
myself still get it. In the Midwest they're still not sure what this
newfangled Facebook is all about. </sweeping generalization>
------
papa
I'm in the Bay Area, but really I find that "I have a startup" only has the
desired effect on similarly afflicted individuals.
If I say the same thing to my relatives, no matter where they live, I get the
same blank stares.
I personally think it's not "where" but "who".
~~~
Shooter
I used to have this "blank stare" problem with some of my relatives and former
colleagues when they asked what I was working on. I finally realized I was
often just being too technical or was using industry-specific terms people
weren't familiar with...and sometimes I was being too specific or too general
in my explanations. Like I expected everyone else to have the same background
and interests as I do. I finally tried putting myself in their shoes. If
you're too general, you will sound evasive or apologetic. If you're too
specific, it's easy to sound boring. It's a difficult balance to be specific
enough that you keep their interest, but not so specific you confuse them or
flip their "Techy-talk OFF switch."
I try to use very common analogies and to speak as simply and directly as
possible. I usually just explain the selling proposition and/or business model
of the startup without any additional information about "how we do it" or "in
what industry" or "who our competitors are," etc. If they want more details,
they can ask.
I've ended up with a few stock answers I use for every one of my startups. I
usually just state the problem, and my solution to that problem, in layman's
terms. That usually gets the best response. People usually start asking more
specific questions, and we go from there. I try to explain what my startup
does, and to convey my passion for the business without sounding too much like
a pushy salesman or a nut ;-) If the startup is profitable, I add that tidbit.
I actually went from getting blank stares to getting new business and
referrals. A simple, clear answer about the benefits your business offers can
literally turn people into customers on the spot...or at least promoters of
you and your ideas. Genuine enthusiasm and passion is memorable, and people
are drawn to it.
------
tom_rath
_Having a higher level of respect and more assurance will make a huge
difference in your general happiness and future success._
I'm not too sure about that. If other people's opinions will strongly
influence your mood and business outlook, entrepreneurship might not be for
you.
For myself, visualizing how those politely nodding and smiling "that's nice"
people would one day be green with envy was a delightful motivator.
These days, the condescension is gone and smiles are a bit tighter.
~~~
garbowza
I disagree. Morale is a huge deal within a startup. Startups are emotional
rollercoasters, and if your environment is always negatively affecting your
morale, the lows will eventually drag down your productivity. That doesn't
mean you aren't fit for entrepreneurship - if so, why are the majority of
successful web startups clustered in the Bay Area?
~~~
tom_rath
Respect has to be earned. If one's morale is buttressed by toadies and
'respecting' yes-men who gush about how awesome you are just because of your
job title, things are more likely to go off the rails when the sun stops
shining.
By all means mix with like-minded people, but your morale and motivation
shouldn't depend upon whether or not your neighbour thinks you're 'cool'.
~~~
potatolicious
This doesn't really have to do with yes-men. We're not talking about people
who feel negatively about _your company_ , but rather the fact that you _run
one at all_. The kind of scorn the article author talks about is not "oh man,
there's no future in (field)", but rather "why in the world would you risk
running your own company?!". The latter certainly still calls for a thick
skin, but it would help if your closest friends and family were supportive
about the concept.
~~~
tom_rath
If the scorn of others is enough to dissuade you, you will not have success in
business.
~~~
jaspertheghost
In theory, yes, it's true that scorn may even fan one's entrepreneurial flame.
The difference is that starting a company is like starting a fire. Little
gusts of wind can feed the fire and make it spread, but they can also blow it
out.
Having the support of the community within the bay area is just one less
headache to deal with in conjunction with the many headaches of doing a
startup.
------
dangrover
I'm from rural Vermont and more recently Boston. I'm moving to the Bay Area
next week. I'm really curious to see how pronounced this effect is.
~~~
jaspertheghost
I started a company in Chicago and moved to the Bay Area. The differences are
almost exactly what was described in the article. I jumped from a fairly large
and prestigious company to start the company, and people were automatically
incredulous that I would quit. There is no area like the Bay Area for starting
a company, in terms of emotional well-being and having fellow entrepreneurs in
the same boat.
------
strlen
Not always true. I'm in the Bay Area, but when I left a big co to join a
start-up, I got the question of "why would you quit a big company to join a
small place?" and got blank stares in response to my explanation.
By default, joining start-ups or founding businesses just _isn't_ what the
educated intelligentsia do. If pg is right, that _will_ change.
------
jmtame
Illinois has a bad entrepreneurial scene too. Everyone pushes you to get
employment; it's nauseating after a while.
Does a charger that is plugged in but has no load use energy? - fananta
http://skeptics.stackexchange.com/questions/7287/does-a-mobile-phone-charger-that-is-plugged-in-but-has-no-phone-attached-to-it-u
======
dchichkov
Lol @ stackexchange.com:
Inside virtually every phone charger is a transformer.
Transformers have a finite resistance, and hence there
will always be current flowing through them if they are
plugged in, even if there is no load (i.e. nothing
charging). That's basic physics.
Basic Physics. LOL. And how does that charger work with both 110 V AC and 230 V AC ;)?
------
raintrees
Yes. Also, heat is another way of consuming power...
NASA has no idea why it exists. Where to now for the space program? - Eurofooty
http://www.theage.com.au/opinion/society-and-culture/mars-a-mere-curiosity-in-days-of-thrift-20120818-24f84.html
======
thinkingisfun
<http://www.nasa.gov/offices/ogc/about/space_act1.html#POLICY>
_(1) The expansion of human knowledge of the Earth and of phenomena in the
atmosphere and space.
(2) The improvement of the usefulness, performance, speed, safety, and
efficiency of aeronautical and space vehicles.
(3) The development and operation of vehicles capable of carrying instruments,
equipment, supplies, and living organisms through space.
(4) The establishment of long-range studies of the potential benefits to be
gained from, the opportunities for, and the problems involved in the
utilization of aeronautical and space activities for peaceful and scientific
purposes.
(5) The preservation of the role of the United States as a leader in
aeronautical and space science and technology and in the application thereof
to the conduct of peaceful activities within and outside the atmosphere.
(6) The making available to agencies directly concerned with national defense
of discoveries that have military value or significance, and the furnishing by
such agencies, to the civilian agency established to direct and control
nonmilitary aeronautical and space activities, of information as to
discoveries which have value or significance to that agency.
(7) Cooperation by the United States with other nations and groups of nations
in work done pursuant to this chapter and in the peaceful application of the
results thereof.
(8) The most effective utilization of the scientific and engineering resources
of the United States, with close cooperation among all interested agencies of
the United States in order to avoid unnecessary duplication of effort,
facilities, and equipment.
(9) The preservation of the United States preeminent position in aeronautics
and space through research and technology development related to associated
manufacturing processes._
I personally would categorize the above as follows, not that I gave it much
thought:
1 = science
2 = could go either way
3 = could go either way
4 = science, peace
5 = dominance
6 = dominance, specifically military
7 = could go either way, science, peace
8 = efficiency (which I'll file under dominance, too)
9 = dominance
final scores:
science: 3
peace: 2
neither/nor: 3
dominance: 4
Buying groceries for rich people, I realized upward mobility is largely a myth - drags
http://www.buzzfeed.com/nielaorr/two-college-degrees-later-i-was-still-picking-kale-for-rich
======
pjlegato
The headline ("Two College Degrees Later, I Was Still Picking Kale For Rich
People") is a statement of disappointed entitlement, of outrage over a belief
that some implicit social contract has not been fulfilled.
The reality is that just having "a degree" in a generic sense is no longer the
magical ticket to a middle class lifestyle that it was in 1960. It's become
too common and is no longer much of a differentiator in most job markets.
A related issue is that many, many people have degrees in non-marketable
subjects. Whatever one may think of the intrinsic value of studying history,
philosophy, English literature, anthropology, art history, etc. there simply
is not much demand in our society for specialists in these fields -- and so
you wind up picking kale for rich people, with a pile of student loan debt to
pay.
We utterly fail to communicate that fact to young students entering college.
We do the opposite: follow your dream, follow your passion for anthropology or
whatever and it will all somehow work out in the end. Turns out that's not
actually true. Telling students that it is true is what leads to indignation
and this sense of entitlement. Society just doesn't need more than a tiny
number of anthropologists. Whether one thinks that society _ought_ to need
more of them is irrelevant.
It's disingenuous to keep encouraging kids to get degrees in non-marketable
subjects, to keep pretending that economic reality should not be a factor in
what you choose to study.
~~~
gearoidoc
I agree with you, with one caveat: there really _was_ a social contract in
place that said something along the lines of "Finish third level and you'll
walk into a job that's better than flipping burgers".
That contract has been broken. I think it was stupid to begin with, but it was
a message very clearly sent by the generations, society, and governments that
went before. As you say, it still is.
So does the writer have reason to be aggrieved? I think so. However, at some
stage in an adult's life they need to do some critical thinking and
independently decide what's the optimal way to climb the pay ladder (legally).
That critical thinking is something that is simply not taught in schools.
Perhaps it's not teachable at all.
~~~
_rpd
> there really was a social contract in place that said something along the
> lines of "Finish third level and you'll walk into a job that's better than
> flipping burgers"
Was this ever true for a Masters in Creative Writing? My understanding is that
this class of degree has always been a social signal for "my family is so
wealthy, I will never need to work."
~~~
pjlegato
With the advent of government subsidized mass higher education in the US
starting in the 1960s, that changed. Many high school teachers and college
professors began heavily encouraging their idealistic, young, poor students to
"follow their passion" and study creative writing and other non-marketable
subjects.
This group feels that it is vulgar and crass to even mention money or
economics in the context of art or pure academics, much less integrate it into
your life's plans, thus setting almost all of their naive students up for
massive disappointment when they graduate with a huge pile of student loan
debt and no jobs available except picking kale for rich people for barely
above minimum wage.
~~~
gearoidoc
"This group feels that it is vulgar and crass to even mention money or
economics in the context of art or pure academics" - I'm not sure where
you're gleaning that from. My point was there was/still is a social contract
in place that tells young people that graduating from a third-level
institution will be a signifier of above-average intelligence and/or work
ethic, thus leading to at least a better-than-working-class job.
Perhaps you had the foresight (or your parents did) to see that such
qualifications would drastically decrease in social value. Others didn't. Then
again, maybe you just happen to work in tech and lack empathy for those who
didn't luck out in their chosen industry.
~~~
pjlegato
No, actually, I speak from painfully learned experience, as a former poor
student who is now the holder of a non-marketable university degree in history
and philosophy, and a pile of student loan debt.
But my personal experience is entirely irrelevant to the discussion at hand.
Let's keep the personal stereotyping and passive-aggressive insults out of this
discussion.
------
mywittyname
After reading that article, I realize how much truth there is to the old adage
that, with a good article, you delete more than you save. This article feels
like an interesting and thought-provoking topic, but if it's there, it's hard
to pinpoint while wading through the biography of half the author's family.
Twenty-three paragraphs but no message.
~~~
skylan_q
_Twenty three paragraphs but no message._
The message is "tax the rich".
~~~
nostrademons
No, it's not. The message is "This is my experience."
I think much of the HN community is accustomed to a style of discourse that
deals in big ideas with immediate applications: "This is where tech is going."
"How I hacked the YC interview process and got in." "Here's what's wrong with
the Javascript dependency mess."
Much of the world doesn't think this way, though. For much of the world, their
goal is _to be heard_ , and to be understood, and to have their existence as
an individual human being validated. When articles speaking from this angle
come out, people react with "What's the point?" And the point is precisely
that people react with "What's the point?", and they shouldn't.
The author said as much in her last sentence: "It’s the work I want to own."
But there's no way to make that connection to readers who are accustomed to
thinking of the big picture without trivializing the little picture.
Related video clip: [https://www.youtube.com/watch?v=qM-
gZintWDc](https://www.youtube.com/watch?v=qM-gZintWDc)
~~~
NhanH
> When articles speaking from this angle come out, people react with "What's
> the point?" And the point is precisely that people react with "What's the
> point?", and they shouldn't.
Oliver Sacks's late writing is of the latter type you describe, and various
eulogy-type pieces have occasionally popped up on HN as well. So I don't
think it's just the _type_ of writing that isn't well received here.
~~~
nostrademons
Some get it already. The HN community isn't one monolithic hivemind, it's a
bunch of people who each have their own perspectives. But I'm trying to
connect with the people who _don't_, who still think in terms of the big
picture, and so my comment needs to be phrased in the same terms that it
complains about. It's _hard_ to make a perspective shift, because you are
trying to see things that, by definition, you did not see before.
I remember wrestling with this when a friend of mine posted her personal
experience, as a woman and as a psychologist and as someone who has been
discriminated against, a year or so ago, and she said "You're a hero for
making the effort. I mean that."
------
NhanH
The topic itself is worth discussing. However, I'm not actually sure why this
specific article was the one picked for a second chance.
It's one long life story (stories?), and there is nothing in it supporting
either the title or subtitle. Yes, her upbringing was bad, but I wanted to
know what happened to her personally after graduating that left her where she
is now. Or even better: what exactly could have helped her (from government or
society) get to where she wants to be in life? Those details are nowhere to be
found.
And I'd like to hear others' opinion on this, but I don't consider her writing
to be good. _Maybe_ if she wants to be a writer, that has something to do with
it?
~~~
x3n0ph3n3
I'm also curious what 2 degrees she has that prevent her from getting a job in
her field.
~~~
home_boi
I did a quick "site:linkedin.com" Google search.
A Bachelor's in English and a Master's in writing, from not the best colleges.
~~~
gearoidoc
"from not the best colleges"
You do realise that it's a considerable achievement for someone from a
working-class background to go to third level at all, right?
------
maerF0x0
> I was nonetheless positioned only marginally better off than my grandparents
> listing the indignities she felt working these jobs with a laconic intensity
and steady determination: washing the house’s windows inside and out, cleaning
the mattresses and box springs, scrubbing the floors on her knees, a lunch of
a cheese sandwich and a glass of milk offered by a client that was quickly
rejected, getting paid $3 a day.
I'd argue that a smartphone in hand, a wage above minimum, a flexible work
schedule and the option of going to post-secondary education constitute more
than a "marginal" improvement. This person is claiming that she's no better
off than two generations prior, but in reality is using a peer comparison to
try and prove it. Short of absolute equality, someone has to be behind someone
else. Someone has to have "less". But if that relative "less" is consistently
more in an absolute sense, with each generation, then clearly things are
getting better.
The "poor" of today have more food, more tvs, better technology, greater
rights than several generations back. Largely because the rising tide is
lifting the vast majority of ships.
~~~
gearoidoc
I see your point but surely you agree peer comparison is fair?
No doubt the author has things better than generations before (though you do
have to factor in things like increased expectations as a cost for this) but
if this was the only measure then social equality would move much more slowly
than it does.
~~~
maerF0x0
In my opinion only equal actions should demand equal results. I imagine nobody
working as a freelance writer, working for Instacart, or taking loans against
a useless asset is doing very well in this society. Therefore the author's
peers are likely doing roughly as well, and thus "fair".
Some sources of unfairness might be the disingenuous nature of post-secondary
education, selling assets (degrees) far beyond their value. Being lied to may
be the biggest claim the author has. But I don't see gender or race being a
part of that claim. The lie is unfair regardless.
------
eitally
I think this article is important and useful, but the title she chose is
misleading. The content has absolutely nothing to do with higher education,
nor any correlative or causative connection to her employment with Instacart.
Social mobility is an interesting research area, and it's important to be
aware that most people who claim bootstrapping out of poverty is easy are the
folks who've never been in poverty or worked menial jobs.
[https://en.wikipedia.org/wiki/Socio-
economic_mobility_in_the...](https://en.wikipedia.org/wiki/Socio-
economic_mobility_in_the_United_States)
Press coverage:
[http://www.theatlantic.com/business/archive/2015/07/america-...](http://www.theatlantic.com/business/archive/2015/07/america-
social-mobility-parents-income/399311/)
[http://www.salon.com/2015/03/07/the_myth_destroying_america_...](http://www.salon.com/2015/03/07/the_myth_destroying_america_why_social_mobility_is_beyond_ordinary_peoples_control/)
[http://www.brookings.edu/blogs/social-mobility-
memos/posts/2...](http://www.brookings.edu/blogs/social-mobility-
memos/posts/2015/05/27-inequality-great-gatsby-curve-sawhill)
[http://www.economist.com/news/united-
states/21595437-america...](http://www.economist.com/news/united-
states/21595437-america-no-less-socially-mobile-it-was-generation-ago-
mobility-measured)
Original research / scholarly articles:
[http://www.sciencedirect.com/science/article/pii/S0022103115...](http://www.sciencedirect.com/science/article/pii/S0022103115000062)
[http://www.irp.wisc.edu/publications/focus/pdfs/foc262g.pdf](http://www.irp.wisc.edu/publications/focus/pdfs/foc262g.pdf)
~~~
fleitz
Did people stop hiring plumbers? It's difficult in North America to become a
millionaire, but it's pretty trivial to escape poverty. What I often find is
people would rather be poor than sell their skills to the highest bidder, eg.
I know welders who refuse $60/hr jobs because they dislike the oil industry.
Or train for jobs that actually pay, like plumbing.
~~~
TheOtherHobbes
I don't know what it's like in the US, but in the UK becoming a plumber is a long
way from "pretty trivial."
[http://www.theguardian.com/money/2010/may/15/fast-track-
plum...](http://www.theguardian.com/money/2010/may/15/fast-track-plumbing-
courses)
~~~
marincounty
Here it's not difficult, unless you want a union plumbing job. The reason why
becoming a union plumber is difficult, say for San Francisco local ?, is
getting every question on an easy test right. Plus there are a lot of people
applying to take the test. Those union plumbers are paid, I believe, over
$100/hr.
The easy way is to just call yourself a plumber and put an ad on CL. Something
like what that very conservative guy did--what's his name--"Joe the Plumber".
In all honesty, these guys get the job done. I wouldn't want them installing
hydronic heating though.
The other way is to get a license through the state. You can get a general
contractor's license or a plumber's license. It's easy. There are schools that
will walk you through the paperwork and an easy test. You don't need the
school. They are a ripoff.
If you're young and have kids, a union plumbing job is great. Non-union
plumbing is a horrid job. The only one making a real living is the owner.
I was in the San Francisco electrical union and it was a good deal. I didn't
stick around. Just found construction very boring, but it paid well.
If anyone reads this who's thinking about going into a union trade, I'll pass
this along. Construction is construction. Stay away from non-union
construction. If you are going to be a construction worker, go union, and try
to get into these unions in this order. The order I'm picking is by quality of
work, and pay.
Elevator mechanics union (might have changed name?). Electrical union local 6
if in San Francisco. Plumbers union, or HVAC union (forget the name).
Stay away from the carpenters union, unless you get into the finish carpenters
union (if it's still around?). Stay away from roofing, concrete, insulation,
and painting--if you can. If you really want to go into one of those trades,
make sure to get into the union.
Non-union construction is right above not working. "Oh, but I see Tom, and
Horhe, and they seem happy?". I don't know how these guys are happy. I've
worked non-union, and it paid retail. The conditions were horrid.
To anyone against unions: do a non-union construction job just one day. Just
one day. Look at what you are paid. Then look at the house that the owner of
the non-union shop lives in. He usually has several houses, and he bought each
of his kids their first house.
Hands down, the worst job I ever had was at Bradley Electric. My father went
through a union apprenticeship program with this guy--if he's still around. He
opened a very successful non-union shop, and would hire desperate guys at
horrid wages.
------
amsha
In my experience, writing (or any other art) is maybe the most downwardly-
mobile profession there is. The supply of artists _far_ outstrips the demand
for art, and getting your first job often depends on proximity to industry
gatekeepers. I can't speak specifically for the publishing industry, but in
film and television people tend to get writing jobs through personal
connections.
The four paths I've seen for people who make it in film/tv:
* Have a family member who gets you your first job.
* Have rich parents who completely subsidize your work for a few years and provide anonymous funding for your first feature film.
* Have upper-middle class parents who partially subsidize your work for a few years, and get ready to be an assistant for 3-25 years while you build connections with the business bros that determine your future.
* Have lower-middle class parents. Be extraordinarily driven and ignore all material needs while you win festivals and get noticed.
The lower you are on the list, the more effort it takes to maximize your
probability of success. Realistically, almost no one makes it from the bottom
category.
~~~
skylan_q
Serious question: Do you feel that society should be re-adjusted/obligated in
some way to make more upwardly-mobile paths for writers?
~~~
home_boi
It already has. The internet has made distributing and selling text content
thousands of times easier.
------
bko
> The woman who laughed at me was one of these customers with very discerning
> tastes currently causing me a lot of anxiety... With “all my education,” as
> my family would say, two degrees and the student loans to show for it, I was
> nonetheless positioned only marginally better off than my grandparents, who
> ran errands and did other grunt work two generations removed from where I
> now stood.
I can't tell who the more entitled person is. The wealthy woman who believes
she is entitled to have her discerning tastes met, or the author who believes
she is entitled to work as an author regardless of her commercial success.
No excuse to treat people who work for you poorly, but I think the entitlement
runs both ways.
------
Alex3917
I find it amusing that one of Instacart's competitors, TaskRabbit, is
currently blanketing the NYC subway system with ads that say "We do chores.
You live life." The implication being, at least the way I read it, that they
consider their workforce to be perhaps slightly less than human.
~~~
cperciva
I'm struggling to understand how you can possibly jump from "we do chores; you
live life" to "our workers are subhuman".
Are you saying that only subhumans do chores?
~~~
chipsy
Cold logic. The advertisement defines life as something that is not doing
chores. Therefore people who do chores are not living. Whether that makes them
"subhuman" does involve some interpretation, but if you call someone "not
alive" you aren't exactly complementing them.
------
drags
I think it's interesting that we're 44 comments in and nobody has commented on
how race fits into this.
She sees herself as someone working her way up into a freelance writing
career. Her customers, her bosses and her family view her as the kind of
person unlikely to do anything more than what her parents and grandparents
did: bounce around through low-wage, low-prestige jobs like Instacart their
entire working life.
When everyone around you assumes you won't make it higher, it's hard not to
wonder if they're right. And society assumes African-Americans are much less
likely to achieve career success. [1]
[1] See
[http://www.nber.org/digest/sep03/w9873.html](http://www.nber.org/digest/sep03/w9873.html)
for instance: "Race, the authors add, also affects the reward to having a
better resume. Whites with higher quality resumes received 30 percent more
callbacks than whites with lower quality resumes. But the positive impact of a
better resume for those with African-American names was much smaller."
~~~
pjlegato
Nobody has commented on how race fits into this because race is entirely
irrelevant to the theme of the article.
Her skin color is not relevant to her picking kale for a living. She's picking
kale because she got two college degrees in non-marketable subjects, not
because she's black.
Not _every_ topic contains a hidden narrative of latent racist oppression just
waiting for an overeducated postmodernist to come along and deconstruct it,
even if it does involve people of a visibly different ethnic background than
their employer.
~~~
drags
Race is the theme of the article:
"Our national history is rife with examples of black Americans facing
exclusion from labor movements, as well as general workforce discrimination.
It’s not hard to see how the effects of these policies have trickled down. I
see my family’s work history, rendered briefly here, as a particular kind of
ingenuity necessary for black Americans."
~~~
pjlegato
It's not at all, though she seems to think it is. It's about how indignant she
is that she got _two_ college degrees and still can't get a middle class job.
The headline is: "Two College Degrees Later, I Was Still Picking Kale For Rich
People."
That happened because she studied creative writing, a largely non-marketable
subject. Her being black is not relevant. If she had studied chemical
engineering or dentistry or any of a large number of other in-demand subjects
instead of creative writing, she'd easily have obtained a middle class job
despite being black.
------
swagv1
It took her that long to recognize her own under-employment??
------
lostmsu
TL;DR Why is it a myth?
~~~
skylan_q
Find one example of a poor person working hard and becoming rich! You can't!
Ergo, myth.
~~~
lostmsu
[https://en.wikipedia.org/wiki/Sergey_Brin](https://en.wikipedia.org/wiki/Sergey_Brin)
How slow is Python really? Or how fast is your language? - kisamoto
http://codegolf.stackexchange.com/questions/26323/how-slow-is-python-really-or-how-fast-is-your-language
======
wting
Python is fast enough, until it isn't and then there are no simple
alternatives.
If your problem is numerical in nature, you can call popular C modules (numpy,
etc) or write your own.
If your functions and data are pickleable, you can use multiprocessing but run
into Amdahl's Law.
Maybe you try Celery / Gearman introducing IO bottlenecks transferring data to
workers.
Otherwise you might end up with PyPy (poor CPython extension module support)
and still restricted by the GIL. Or you'll try Cython, a bastard of C and
Python.
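For the multiprocessing route above, a minimal sketch (toy function and chunk
sizes are made up; the argument/result pickling and the serial combine step are
exactly where Amdahl's Law bites):

    from multiprocessing import Pool

    def work(chunk):
        # must live at module top level so it pickles cleanly
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        chunks = [range(i * 10**6, (i + 1) * 10**6) for i in range(4)]
        with Pool(processes=4) as pool:
            partials = pool.map(work, chunks)   # fan out to worker processes
        print(sum(partials))                    # serial combine step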
Python has been my primary language the past few years and it's great for
exploratory coding, prototypes, or smaller projects. However it's starting to
lose some of the charm. Julia is filling in as a great substitute for
scientific coding in a single language stack, and Go / Rust / Haskell for the
other stuff. I've switched back to the static language camp after working in a
multi-MLOC Python codebase.
~~~
oblique63
> _Julia is filling in as a great substitute for scientific coding in a single
> language stack, and Go / Rust / Haskell for the other stuff._
I've been wondering about why so many python devs have migrated to using Go
recently instead of Julia, given that Julia is a lot closer to python and has
performed as good as, if not better than, Go in some benchmarks [1]. Granted
I've really only toyed with Julia and Go a few times as I've never really
needed the performance much myself, but I'm curious about your preference of
Go/Rust over Julia for "the other stuff".
What would you say makes Julia less suitable (or Go more suitable) for
nonscientific applications? Is it just the community/support aspect? Cause
that seems like an easy tide to overturn by simply raising more awareness
about it (we see Go/Rust/Haskell blog posts on the front page of HN every
week, but not too many Julia posts).
Just curious cause I'm not nearly experienced enough with any of these young
languages yet to know any better, and have only recently started to consider
taking at least one of them up more seriously.
[1] [http://julialang.org/benchmarks/](http://julialang.org/benchmarks/)
~~~
wting
Static typing is a boon when refactoring large codebases, even with >90% test
coverage.
I'm migrating an in house ORM to SQLAlchemy. Lack of compiler support and/or
static code analysis makes the transition more difficult than it needs to be.
Dynamic typing allows one to defer error handling to the future, essentially
creating technical debt for the sake of developer speed and convenience. For
many use cases this is an acceptable trade off.
However as a codebase grows in complexity, it's better to handle errors as
early as possible since the cost of fixing an error grows exponentially the
farther it is from the developer (costs in ascending order: editor < compiler
< testing < code review < production).
~~~
igouy
Tools matter:
_A very large Smalltalk application was developed at Cargill to support the
operation of grain elevators and the associated commodity trading activities.
The Smalltalk client application has 385 windows and over 5,000 classes. About
2,000 classes in this application interacted with an early (circa 1993) data
access framework. The framework dynamically performed a mapping of object
attributes to data table columns.
Analysis showed that although dynamic look up consumed 40% of the client
execution time, it was unnecessary.
A new data layer interface was developed that required the business class to
provide the object attribute to column mapping in an explicitly coded method.
Testing showed that this interface was orders of magnitude faster. The issue
was how to change the 2,100 business class users of the data layer.
A large application under development cannot freeze code while a
transformation of an interface is constructed and tested. We had to construct
and test the transformations in a parallel branch of the code repository from
the main development stream. When the transformation was fully tested, then it
was applied to the main code stream in a single operation.
Less than 35 bugs were found in the 17,100 changes. All of the bugs were
quickly resolved in a three-week period.
If the changes were done manually we estimate that it would have taken 8,500
hours, compared with 235 hours to develop the transformation rules.
The task was completed in 3% of the expected time by using Rewrite Rules. This
is an improvement by a factor of 36._
from “Transformation of an application data layer” Will Loew-Blosser OOPSLA
2002
[http://portal.acm.org/citation.cfm?id=604258](http://portal.acm.org/citation.cfm?id=604258)
------
Wilya
Using a Numpy-based program as an example of how Python can be fast is a bit
strange.
It shows that Python can be fast enough _if you leave the heavy computational
parts to libraries written in other languages_. Which is interesting, but
doesn't say much about the speed of Python itself.
~~~
pekk
It isn't strange, it's standard practice. What would be strange is to force
Python to hold one hand behind its back and be used in a totally unrealistic
way that doesn't reflect normal practice. And by strange I mean basically
dishonest about the performance available when using Python.
~~~
stefantalpalaru
> the performance available when using Python
The honest thing would be to say "yes, Python is slow, but you can use some
fast modules written in C".
You can even go farther and say that if you're going to use Python for non-
trivial projects you better learn C (or some C generator like Cython).
~~~
calpaterson
You only need to know C to get fast Python code for a tiny minority of
situations. For one, most interesting C wrappers have already been written and
second, the majority of C code you will use in normal business software
contexts isn't used by FFI.
For a normal web application the request flow goes something like: nginx
(WSGI) -> uwsgi -> web application code -> psycopg2 driver -> postgres. Only
the web application part is written in Python, so for practical purposes you
actually have a C stack that uses Python for business logic. Loads of
libraries, from json parsers to templating libraries include optional C code
for speedups.
~~~
stefantalpalaru
Guess where the bottleneck is when you need to generate a dynamic page for
each and every request.
~~~
calpaterson
In the database? About 90% of bottlenecks boil down to something along the
lines of "this query is slow" or "there are a lot of queries here". This has
been true since the dotcom boom.
~~~
stefantalpalaru
Guess again. All these benchmarks use the same database:
[http://www.techempower.com/benchmarks/#section=data-r8&hw=i7...](http://www.techempower.com/benchmarks/#section=data-r8&hw=i7&test=query)
~~~
reitzensteinm
The real answer is "it depends".
The whole rise of memcached and NoSQL should pretty clearly indicate that many
developers are finding their database to be the bottleneck.
There's much less of a push for high performance languages, even though there
are many that are also quite nice to work with (eg, Clojure). Since this is a
Python discussion, searching for "Django PyPy" and "Django NoSQL" should be
instructive.
You're combining a false dichotomy with snark, which really shouldn't have a
place here on HN.
~~~
calpaterson
Well, at least some of the NoSQL movement was that some people preferred to
model data as documents or dictionaries or graphs, etc., instead of relations.
------
moe
Anecdotal data point: For a great many years I used to consider Python a very
slow language. Then I switched to Ruby and realized how slow a language can
_really_ be and yet still be practical for many use-cases.
~~~
Freaky
Except Ruby isn't really significantly slower than Python?
~~~
dragonwriter
The mainline implementations are not that far apart _now_ , but IIRC it wasn't
all that long ago that MRI was significantly slower than CPython.
~~~
Freaky
I suppose it depends on how you define "that long ago" and "mainline
implementations". From my point of view it's coming along to about half a
decade, but I guess 1.8's remained popular in many places as recently as 2-3
years ago.
~~~
moe
The difference does not primarily stem from raw interpreter performance but
rather from the different community mindsets and resulting ecosystems.
The average code-quality in rubygems is just not very high. Consequently most
libraries are completely oblivious to performance aspects.
This reaches straight into the core infrastructure (rubygems, bundler) and all
major projects (Rails). Leading to the simple fact that Ruby loses out on many
practical benchmarks _before your script has even loaded_.
Likewise, the synergies of less-than-bright behaviors from all the gems in your
average Rails project (and not least Rails itself) do indeed make the
performance gap towards an average Django project _much_ larger than the mere
difference in sheer interpreter performance.
That's not meant to bash Ruby, anyway. It's a trade-off me and many others
are willing to make, for the convenience that Ruby provides _after_ it has
finally set its fat belly into motion.
But let's not pretend these differences don't exist when everyone who has ever
used both languages knows them all too well.
------
lamby
Obviously, this is a silly benchmark and we should stop giving it any credit.
However, even "real world" anecdotes in this area can be a minefield.
Take, for example, an existing Python application that's slow and requires a
rewrite to fix fundamental architectural problems.
Because you feel you don't necessarily need the flexibility of Python the
second time around (as you've moved out of the experimental or exploratory
phase of development), you decide to rewrite it in, say, Go, or D or
$whatever.
The finished result turns out to be 100X faster—which is great!—but the danger
is always there that you internalise or condense that as "lamby rewrote Python
system X in Go and it was 100X faster!"
------
chrisBob
I spend a lot of time debating program speed (mostly C vs MATLAB), but the
problem is that the programming and compile time usually makes more of a
difference than people consider.
If my C is 1000x faster and saves me 60 seconds every time I run the program,
but takes an extra 2 days to write initially, and the program is seeing lots
of edits, meaning that on average I have to wait 2 minutes for it to compile,
then I am MUCH better off with the _slower_ MATLAB until I am running the same
thing a few thousand times.
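Rough break-even arithmetic for that trade-off (illustrative numbers only,
assuming 8-hour days and ignoring the compile waits, which only push the
break-even point further out):

    dev_cost = 2 * 8 * 3600        # two extra days of writing, in seconds
    saving_per_run = 60            # seconds saved on each run
    print(dev_cost / saving_per_run)   # ~960 runs before the C version pays off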
Plus there is the fact that I can look at HN while a slightly slower program
is running, so I win both ways.
~~~
jerf
I think a lot of that delta is going to prove to have been an accident of
history, though. In the past 10-15 years, we've had a lot of "dynamic"
languages, which have hit a major speed limit (see another comment I made in
this discussion about how languages really do seem to have implementation
speed limits). Using a "dynamic" language from the 1990s has been easier than
using gussied-up static 1970s tech for a quick prototype, but what if the real
difference has more to do with the fact that the 1990s tech simply has more
experience behind the design, rather than an inherent ease-of-use advantage?
It's not hard to imagine a world where you instead use Haskell, prototyping
your code in GHCi or even just writing it in Haskell directly, pay a minimal
speed penalty for development since you're not being forced to use a klunky
type system, and get compiled speeds or even GPGPU execution straight out of
the box. (And before anyone freaks out about Haskell, using it for numeric
computations requires pretty much zero knowledge about anything exotic... it's
pretty straightforward.) It's not out of the question that using Haskell in
this way would prototype even faster than a dynamic language, because when it
gives you a type error at compile time rather than at runtime, or worse,
running a nonsense computation that you only discover afterwards was nonsense,
you could save a lot of time.
I don't think there has to be an inherent penalty to develop with native-speed
tech... I think it's just how history went.
~~~
rdtsc
> In the past 10-15 years, we've had a lot of "dynamic" languages, which have
> hit a major speed limit
Exactly. I think that is correlated very well with single core CPU speedups.
Remember when Python was rising the fastest, single core CPU speed was also
pretty much doubling every year. SMP machines were exotic beasts for most
developers back then.
So just waiting for 2-3 years you got very nice speedup and Python ran
correspondingly faster (and fast enough!).
Then we started to see multiple cores, hyperthreads, and so on. That is when
talk about the GIL started. Before that nobody cared about the GIL much. But
at some point, it was all GIL,GIL,GIL.
> It's not hard to imagine a world where you instead use Haskell
Hmm interesting. I wonder if that approach is ever taken in a curriculum.
Teach kids to start with Haskell. It would be interesting.
~~~
jerf
I share that theory. Part of what led me down this road was when I
metaphorically looked around about two years ago and realized my code wasn't
speeding up anymore. Prior to that I'd never deeply thought about the
"language implementation speed is not language speed" dogma line, but just
accepted the sophomoric party line.
"Hmm interesting. I wonder if that approach is ever taken in a curriculum.
Teach kids to start with Haskell. It would be interesting."
To be clear, I was explicitly discussing the "heavy-duty numerical
computation" case, where typing as strong as Haskell's isn't even that hard.
Learn some magic incantations for loading and saving data, and it would be
easy to concentrate on just the manipulations.
But yes, people have done this and anecdotally report significant success. The
Google search "Haskell children" (no quotes in the real search) comes up with
what I know about, so I'll include that in this post by reference. It provides
support for the theory that Haskell is not _that_ intrinsically hard, it's
just so _foreign_ to what people know. If you don't start out knowing
anything, it's not that weird.
------
ahoge
Numpy is written in C. That's why it's fast.
Better:
[http://benchmarksgame.alioth.debian.org/](http://benchmarksgame.alioth.debian.org/)
~~~
wffurr
You didn't read the thread. The OP's code used very small arrays, and using
numpy was slowing the code down by an order of magnitude. The pure Python
solution is 17x faster.
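A rough way to see that overhead (a toy timing sketch; absolute numbers vary by
machine, but the fixed per-call dispatch and allocation cost of a numpy
operation tends to dominate when the array only has a handful of elements):

    import timeit

    setup = "import numpy as np; a = np.arange(8); l = list(range(8))"
    print(timeit.timeit("(a * 3).sum()", setup, number=100000))
    print(timeit.timeit("sum(x * 3 for x in l)", setup, number=100000))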
~~~
TillE
It's really important to remember that the interface of a VM can be one of the
slowest parts. When your LuaJIT code is making a ton of calls to tiny C
functions, it's gaining hardly any benefit from the JIT.
~~~
awj
Yeah, things like marshaling data across the interface barrier, or chasing the
pointer indirections inherent in calling functions, can have a significant
cost. Usually it isn't significant _enough_ , but as always the devil is in
the details.
------
fmdud
_Holy false comparison, Batman!_
Why would you use Numpy for arrays that small? Oh, looks like someone actually
just wrote it in CPython, no Numpy, and it clocked in at 0.283s. Which is
fine. It's Python.
This thread reminds me of the scene in RoboCop where Peter Weller gets shot to
pieces. Peter Weller is Python and the criminals are the other languages.
------
Igglyboo
Judging by the top submission also being written in Python, I think this just
shows how unoptimized the OP's original code was rather than how slow the
language is.
Not that Python is fast, it isn't. And using numpy seems a bit disingenuous
anyway: "Oh my Python program is faster because I use a library that's 95% C"
------
jzwinck
The same author previously posted this code as a question on Stack Overflow:
[http://stackoverflow.com/questions/23295642/](http://stackoverflow.com/questions/23295642/)
(but we didn't speed it up nearly as much as the Code Golf champions).
If you enjoyed this Python optimization, you may also enjoy:
[http://stackoverflow.com/questions/17529342/](http://stackoverflow.com/questions/17529342/)
This sort of thing comes up a lot: people write mathematical code which is
gratuitously inefficient, very often simply because they use a lot of loops,
repeated computations, and improper data structures. So pretty much the same
as any other language, plus the extra subtlety of knowing how and why to use
NumPy (as it turned out, this was not a good time for it, though that was not
obvious).
------
jules
You can make this far faster by changing the data representation. You can
represent S as a bit string so that if the i'th bit is 0 then S[i] = 1 and if
the i'th bit is 1 then S[i] = -1. Let's call that bit string A. You can
represent F as two bit strings B,C. If the i'th bit in B is 0 then F[i] = 0.
If the i'th bit of B is 1 then if the i'th bit of C is 0 then F[i] = 1 else
F[i] = -1. Now the whole thing can be expressed as parity((A & B) ^ C). The
parity of a bit string can be computed efficiently with bit twiddling as well.
Now the entire computation is in registers, no arrays required. The random
generation is also much simpler, since we only need to generate random bit
strings B,C and this is already directly what random generators give us. I
wouldn't be surprised if this is 1000x faster than his Python.
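A minimal sketch of that encoding in Python (illustrative only -- not the full
golfed solution, just the A/B/C representation and a check that the
bit-twiddled parity matches a direct count of the -1 terms):

    import random

    def encode(S, F):
        A = sum(1 << i for i, s in enumerate(S) if s == -1)  # bit set where S[i] == -1
        B = sum(1 << i for i, f in enumerate(F) if f != 0)   # bit set where F[i] != 0
        C = sum(1 << i for i, f in enumerate(F) if f == -1)  # bit set where F[i] == -1
        return A, B, C

    def parity(x):
        # popcount-based parity; Python ints are arbitrary width
        return bin(x).count("1") & 1

    n = 8
    S = [random.choice((1, -1)) for _ in range(n)]
    F = [random.choice((-1, 0, 1)) for _ in range(n)]
    A, B, C = encode(S, F)

    fast = parity((A & B) ^ C)
    slow = sum(1 for f, s in zip(F, S) if f * s == -1) & 1
    assert fast == slow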
------
alexchamberlain
It's really fast to develop in, and with NumPy/Pandas/Scipy it runs numerical
models fairly fast too. You do have to spend time getting to know `cProfile`
and `pstats`; I saved over 80% of the runtime on something the other day.
------
pjmlp
What should the OP ask instead:
"How fast is the code produced by your compiler."
I keep seeing this misconception about languages vs implementations.
EDIT: Clarified what my original remark meant.
~~~
jerf
I no longer accept the idea that languages don't have speeds. Languages place
an upper bound on realistic speed. If this isn't true in theory, it certainly
is true in practice. Python will forever be slower than C. If nothing else,
any hypothetical Python implementation that blows the socks off of PyPy must
still be executing code to verify that the fast paths are still valid and that
nobody has added an unexpected method override to a particular object or
something, which is an example of something in Python that makes it
fundamentally slower than a language that does not permit that sort of thing.
The "misconception" may be the casual assumption that the runtimes we have
today are necessarily the optimal runtimes, which is not generally true. But
after the past 5-10 years, in which _enormous_ amounts of effort have been
poured into salvaging our "dynamic" language's (Python, JS, etc.) run speeds,
which has pretty much resulted in them flatlining around ~5 times slower than
C with what strikes me as little realistic prospect of getting much lower than
that, it's really getting time to admit that language design decisions do in
fact impact the ultimate speed a language will be capable of running at. (For
an example in the opposite direction, see LuaJIT, a "dynamic" language that
due to careful design can often run at near-C.)
(BTW, before someone jumps in, no, current Javascript VMs do _NOT_ run at
speeds comparable to C. This is a common misconception. On trivial code that
manipulates numbers only you can get a particular benchmark to run at C
speeds, but no current JS VM runs at C speeds _in general_ , nor really comes
even close. That's why we need asm.js... if JS VMs were already at C speeds
you wouldn't be able to get such speed improvements from asm.js.)
~~~
pjmlp
It is always a matter of ROI: how far one is willing to invest money and
time in a compiler/interpreter/JIT implementation for the use cases a
language is targeted for.
As for the current state of native compilers for dynamic languages, they
suffer from the fact that past the Smalltalk/Lisp Machines days, very little
focus has been given to them in the industry.
Hence little money for research, while optimizing compilers for statically
typed languages were getting improved.
Dylan was the last dynamic language with an AOT compiler targeted at the
industry as a systems programming language, before being canceled by Apple.
If it weren't for the JavaScript JIT wars, the situation of compilers for
dynamic languages would probably be much worse.
~~~
lispm
Lisp Machines never had sophisticated compilers. The compilers for the Lisp
Machines were primitive in their capabilities. They compiled to a mostly stack
architecture, where some speed was recovered in hardware from generic
instructions. The work on better compilers for Lisp came mostly from other
places: CMU for their Unix-based CMUCL, Franz with Allegro CL, Harlequin with
LispWorks, Lucid had a very good compiler with Lucid CL, Dylan at Apple for
early ARM, CLICC in Germany, SBCL as a clean up of CMUCL, Scieneer CL as a
multi-core version of CMUCL, mocl as a version of CLICC for iOS/Android, ...
plus a few special purpose compilers...
Once you add sophisticated compilation, dynamic language implementations are
no longer 'dynamic'.
This topic is pretty much solved for Lisp. On one extreme we have fully
dynamic interpreters + then dynamic AOT compiler based ones. For delivery
there are static delivery modes available (for example with treeshakers as in
LispWorks).
On the other extreme you get full program compilers like Stalin (for a subset
of Scheme) or like mocl (a recent compiler for a static subset of Common Lisp,
for iOS and Android).
------
donniezazen
As a new programmer with only Java (Android) under my belt, I find the whole
concept of "your language" mind-boggling.
~~~
Roboprog
See:
[http://www.catb.org/jargon/html/L/languages-of-
choice.html](http://www.catb.org/jargon/html/L/languages-of-choice.html)
and
[http://www.catb.org/jargon/html/T/TMTOWTDI.html](http://www.catb.org/jargon/html/T/TMTOWTDI.html)
------
taude
Go/Scala, etc. programmers, go add your answer to the OP's S.E. question. Lots of
other entertaining coding examples there.
------
WoodenChair
Summary: The question asker wrote a program in Python using numpy (a Python
library that calls C code) which could've been more performant if written in
pure Python (something to do with the array sizes being used), and Python in
general is slower than C/C++/Fortran/Rust. Anything else new?
~~~
igouy
What's new is that StackExchange has a "Programming Puzzles & Code Golf beta".
~~~
yahelc
Not really new, at least not in internet time. It's been around for at least 3
years.
------
malkia
Python is slow, but handy for automation.
------
chrismorgan
Yet another attempt at a comparison scuttled by using randomness.
Different things use different types of randomness. Some are fast. Some are
slow. If your comparison is not using the same type of randomness, that
comparison is comparatively useless.
~~~
dbaupp
It's not being scuttled by randomness, since the times are so wildly
different; my measurements of my Rust code indicate "only" 20% of the time is
being spent in the RNG, ranging up to 35% if I use StdRng rather than
XorShiftRng. This difference is peanuts compared to the 750× (not percent)
speed-ups the Fortran/Rust/C/C++ versions see over the original Python (and
even compared to the 30× speed-up seen over the optimised Python).
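To put a rough bound on that (back-of-the-envelope, assuming the RNG share
could somehow be driven all the way to zero):

    # even a free RNG caps the possible speed-up at 1 / (1 - share)
    for rng_share in (0.20, 0.35):
        print(round(1 / (1 - rng_share), 2))   # ~1.25x and ~1.54x, vs 750x overall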
~~~
chrismorgan
OK, I retract the word "scuttled". But the comparison is still meaningfully
damaged by it once you get to closer comparisons, say between two of the top
performers.
What iOS7 looks like (and other tidbits) - drinchev
http://9to5mac.com/2013/06/09/what-ios7-looks-like/
======
tathagata
The new icons look ugly, especially the gradients in the background.
Psychedelic looking, should come with a warning.
~~~
drinchev
I agree. In fact those icons are not the real ones. They were created in
Photoshop by an inside beta tester, who described them. I really think that
we'll see something really cool in a couple of hours!
Show HN: Type-safe metaprogramming for Java (and other killer lang features) - rsmckinney
https://github.com/manifold-systems/manifold
======
rsmckinney
Think code generators are the only way to type-safely connect your Java code
to JSON, XML, SQL, Javascript, etc.? Think again. Manifold offers a radical
new way to connect Java to structured data.
IntelliJ IDEA provides comprehensive support for Manifold via the Manifold
Plugin. Connect directly to your data without wedging an expensive code gen
step in your build. Make incremental changes to files and type-safely access
the changes instantly. Plus usage searching, deterministic refactoring,
navigation, template editing, hotswap debugging, etc.
Manifold is both an API to build powerful metaprogramming features AND a set
of awesome prebuilt features such as:
* JSON Schema support
* Extension methods (like C# and Kotlin)
* Template files, 100% Java scripted
* Structural typing (like TypeScript and Go)
* more
Manifold is just a JAR file you can drop into your existing project – you can
begin using it incrementally without having to rewrite classes or conform to a
new way of doing things.
Give it a go: [http://manifold.systems/](http://manifold.systems/)
------
zipperhed
Uhhh... How in the world does this work?! This is amazing. I hate IDE's and
Java, but this makes me want to use both of them.
Benzodiazepines: Our Other Prescription Drug Problem - pratheekrebala
https://journalistsresource.org/studies/society/public-health/benzodiazepines-what-journalists-should-know
======
maddyboo
I have battled with severe anxiety and panic disorders for years. At my worst,
I was often unable to get out of bed for days at a time due to fear of having
a panic attack outside the safety of my home.
Taking an SSRI has helped a lot, but there are times where I can feel a panic
attack coming and know the only way to stop it is with a Xanax.
I regard benzodiazepines with a lot of respect. Their power is a blessing and
a curse. Used responsibly, I believe they can be a very effective and safe
tool to live a normal life free of panic attacks.
At this point, I rarely take them - one dose every month or two at most. But
the knowledge that I have a tool to quell a panic attack, should I need it,
has actually done more for me than the pills themselves. Knowing I’m not
powerless gives me the strength to overcome the panic attacks on my own.
Recently, I’ve noticed doctors becoming more and more apprehensive about
prescribing benzodiazepines. This is definitely a good thing - I think they
should be reserved for severe cases as a last resort. But I also worry about a
future where people who could have benefited greatly from them without abuse
are denied a prescription.
~~~
jnovek
I'm similar -- thankfully, when I panic I can frequently get through it
without taking a benzo, but I do have clonazepam on hand for the situations
where it might be very useful.
There are times where I would likely have made poor decisions during an acute
panic attack had it not been aborted quickly by benzos. The ability to say
"stop now" -- hell, even the security of knowing there's a _way_ to say "stop
now" -- is important to coping with anxiety.
------
piazz
I was taking 3mg Lorazepam nightly for almost three years. Weaning off of it
safely took almost an entire year of miserable work, and the final stages I
had to do while I had no other significant life responsibilities because of
the incredible rebound insomnia and background anxiety you experience
withdrawing from benzodiazepines. The only upshot is that when you finally do
manage to get yourself off of a drug like this, you sort of feel like you can
tackle most other challenges life throws your way.
So yeah, this stuff is serious. And of course, my Lorazepam was prescribed
legally, by a responsible, well regarded psychiatrist, with very little
warning regarding how quickly one builds both tolerance and physiological
dependence on this chemical.
~~~
icantdrive55
I really think most psychiatrists know how addictive benzodiazepines are, but
Americans are very stressed out.
In my case, I busted a gasket in my twenties. I went from the most capable
person in the room, to the trembling guy who could barely leave his room. I
can honestly say it ruined my life.
I was given a benzo with a long half-life. It worked a bit, but I never fully
recovered. I think we all know the drug. A 40-hour half-life.
I tried all kinds of medications over the years, and nothing worked except
benzodiazepines, and alcohol. Yes--alcohol hits so many different parts of
the brain, but it is horrid on the body. I really tried to avoid alcohol, but
some days the anxiety symptoms were just unbearable.
I've been on the long half-life benzodiazepine for decades. I take the same
low dose, and try not to drink.
I've never even asked my doctor, but he knows my low dose isn't going to cause
physical problems. They are better than alcohol, if you're self-medicating. I
believe his thinking is I need the drug. I've been on it forever. Why put him
through a miserable detox, at this stage of the game?
There are a few big studies done on patients who were on opiates and
benzodiazepines for long periods of time. They didn't necessarily need to
increase their dosages. I believe the studies were done on rest home geriatric
patients.
I feel, at my age, what's the point of a long withdrawal? It's easy to say for
myself because my doctor has reasonable rates. He is getting close to
retirement, and that has me very worried. The last thing I want is a long,
miserable detox.
I don't like the way this drug problem is playing out. I don't like blaming
doctors. All their patients are very different.
My wish is we let, especially Psychiatrists, make these hard calls concerning
what's best for their patients. That's what they went to school for.
I don't know why we are even discussing it here.
I don't want to live in a world where doctors send their patients home a mess
because they are afraid of being accused of some sinister reason for keeping a
patient on an addictive drug.
In all reality, so many doctors just don't prescribe certain drugs. That is
probably one of the main reasons why former patients go to the streets, or to
liquor stores.
(I would further like to see a governmental bill that would allow patients
who have been on addictive drugs for years the ability to authorize their
own scripts. The same dose; any increase would require a doctor's visit.
At this point my office visits are pointless. There is a bill in Congress
now, I believe, but it's for drugs that aren't addictive. I doubt the AMA
will ever let it pass though.)
~~~
piazz
It’s your life, and your call, but one compelling reason to ween off these
drugs is simply that you’ll feel better (most likely) when you’re off of them.
I felt like I got my old brain back when I got off Lorazepam. While we take
these drugs to initially treat acute anxiety, they have a tendency to create
chronic anxiety in the user. This of course requires more of the drug to
combat, and you have a positive feedback cycle that makes them so difficult to
get off of. But, at least in my experience, there was light at the end of the
tunnel. And, FWIW, my doctor was _extremely_ fallible despite his years of
education, as you noted.
------
qwerty456127
Taking 1/4 pill of Xanax occasionally together with 1200 mg piracetam + 3 mg
sunifiram + another 1200 mg piracetam pill some hours later is amazing for
concentration (but that's my personal experience, just sharing it, I don't
recommend this to anybody, also neither piracetam nor sunifiram are approved
by the FDA). Almost cures my ADHD and anxiety altogether and makes me happy
and super productive (as compared to my baseline which is severely hindered by
untreated ADHD and anxiety). And no addiction ever (perhaps people that take
higher doses get addicted but I don't). God save the black market and the
grannies who don't mind sharing a pill. I really believe people should stop
this witch hunt and embrace the BLTC (better life through chemistry)
philosophy and start developing ways to fight the bad effects (physiological
addiction, withdrawal syndromes, tolerance development, liver/kidney harm,
receptor dysregulation, etc.) instead of outlawing substances that improve
quality of life. A person's mood/attitude and performance is 99% chemistry and
demonizing the very idea of seeking to improve it (even above what is
considered a norm) is madness.
~~~
throwaway77384
I'm with you here.
The problem is that you look like someone who has done their research, is
knowledgeable and self aware, and trying to (seemingly successfully) address a
problem.
Lots and lots of people are nothing like that. They just want to get high.
Escape reality at all costs, no matter the damage to themselves or others.
This isn't the drugs' problem or fault, obviously. Those people will use
alcohol and other means to get fucked up and they will obtain the drugs they
want illegally anyway.
THE THING IS: While it's illegal to get those drugs, society can demonise
those people and politicians can run with that as their platform.
Should drugs be made legal, all it will take is one idiot killing themselves
or others while on drugs and suddenly it's the drugs' fault again, and the
next politician running with a 'tough on drugs' stance will win.
People will look for blame and they will not do so rationally.
Self-driving cars will be dragged through the press for every accident there
is, even if they are 10,000x less likely to crash. People are afraid of
flying. Videogames are the reason for killing sprees, etc. etc.
------
jnovek
Serious question: as we make opioids and now benzos increasingly difficult to
prescribe, what are the alternatives for people with chronic pain or chronic
anxiety?
I have friends and family members with chronic pain and, through them and
their communities, have become aware of many people who use opioids on a long-
term, occasional basis to manage their pain. A family member of mine who
suffers from chronic migraine lives in fear that she won't be able to get an
opioid, which she uses as a last-ditch rescue treatment before she ends up at
the ER (not to mention that she gets treated like a drug seeker when she does
end up there).
I don't really see an alternative for acute intense pain; likewise an
alternative for acute, intense anxiety. Meanwhile the crackdowns on these
drugs also create a chilling effect for physicians. What do we do for people
who fall in those categories?
(Edit: not to claim that abuse of these drugs is not a problem... It just
seems like the people these drugs are intended to help are being sidelined in
the dialog on the topic.)
~~~
spamizbad
For benzos: There really is no drug alternative to benzos other than _maybe_
SSRIs, but most people prescribed them probably tried SSRIs in the past to no
effect. Your other alternative is extensive psychotherapy, which your
insurance is unlikely to cover. Perhaps in the future marijuana, MDMA or
ketamine might prove useful.
Benzos generally require you to taper off them, as I believe the withdrawal
side-effects include seizures. You cannot safely "cold turkey" them... so I
hope they don't get all heavy-handed with them like they are for people who
rely on opioids to treat chronic pain.
~~~
tnecniv
> There really is no drug alternative to benzos other than maybe SSRIs but
> most people prescribed them probably tried SSRIs in the past to no effect.
Actually the two really serve different purposes. Benzos are commonly
prescribed as a way to manage panic attacks or other acute occurrences of
anxiety. SSRIs can help reduce your anxiety over time, but take a long time to
build up in your system. Often people are prescribed both simultaneously.
~~~
DanBC
> Benzos are commonly prescribed as a way to manage panic attacks or other
> acute occurrences of anxiety.
That's how they're supposed to be prescribed, but in this thread we see a few
people who take a daily benzo and have done for several months.
------
maxander
What is the thesis here? Benzodiazepines are commonly prescribed and have the
potential for abuse; these things are both true; I hadn't heard the rate of
prescription was rising, but I'd believe it. There doesn't seem to be any
evidence presented for a trend or rise in benzodiazepine abuse, or evidence of
general harm from the use of the drugs. It highlights parallels between the
existence of this prescription drug class and another class that is associated
with significant issues, and makes it _sound_ as if there were an issue
here... and then leaves it at that, the literary equivalent of a wink and a
nudge. Are they arguing that prescription of drugs with abuse potential is
_inherently_ a problem? Because that would be a very extreme position, one
which would challenge a sizable fraction of the medications available to
modern psychiatry.
And this is a "journalist's resource," one associated with the Harvard Kennedy
School? No wonder journalism is garbage these days.
~~~
lmpostor
> A study published in 2016 in the American Journal of Public Health finds that
from 1996 to 2013, the number of adults in the United States filling a
prescription for benzodiazepines increased 67 percent, from 8.1 million to
13.5 million. The death rate for overdoses involving benzodiazepines also
increased in this time period, from 0.58 per 100,000 adults to 3.07.
In the first link in the article:
> the quantity of benzodiazepines they obtained more than tripled during that
period, from 1.1-kg to 3.6-kg lorazepam-equivalents per 100,000 adults.
~~~
maxander
That's all prescribed doses, though. So, yes, the use of benzodiazepines is
going up, which obviously carries with it the associated rise in side effects
and drug-related deaths. It's not reasonably comparable to the narcotics
epidemic, where illegal use is driving mortality rates.
~~~
benbreen
I see how that's true on a legal level, but if we're just talking about social
and public health impacts, I don't see why the distinction between
prescription and illegal use matters here. A three-fold increase in a category
of drugs with major health impacts seems newsworthy to me. After all, the
boundaries between legal and illegal use are far from fixed. Methamphetamine
was once widely prescribed by physicians for weight loss, for instance (and is
indeed still legally available as a prescription medicine) [1].
Presumably we can agree that a world in which prescriptions for
methamphetamine have tripled might be a cause for concern, right? It's
debatable whether this class of drugs has the same abuse and health risks, but
based on my own reading and anecdotal experiences, I think they're pretty
comparable.
[1] [https://resobscura.blogspot.com/2012/06/from-quacks-to-
quaal...](https://resobscura.blogspot.com/2012/06/from-quacks-to-quaaludes-
three.html)
------
GABAthrowaway
GABA receptor modulation is no joke. I was prescribed Xanax for panic attacks.
My PC kept increasing my dosage, until I decided I had had enough. Withdrawal
was nightmarish, but luckily I hadn't been using it that long (only for two
weeks or so). My brain chemistry was never quite the same. I ended up looking
for substitutes like Phenibut and Etizolam. With these I was addicted to the
confidence they gave me in approaching women, so not quite physiological like
Xanax. What finally cured my anxiety was a macrodose of LSD-25 (111-150 ug).
Even then I wouldn't recommend it. Meditation is the best tool - our bodies
naturally produce Anandamide. In my case, due to certain traumas, LSD-25
allowed me to see the beauty of this World and Universe once again. It is a
powerful catalyst that allows one to See with clarity.
~~~
person_of_color
That's enough to trip
~~~
ssijak
That is why he called it MACROdose and not MICROdose.
------
honksillet
Fun fact: in county jails (and I'm sure in hospitals) there are 3 classes of
drugs that you will get detox medication for: opioids, benzos and alcohol. The
detox med for alcohol is benzos. The detox med for benzos is more benzos
(although in a controlled, tapered manner). Both of these are much more
dangerous to detox off of than opioids, with alcohol being the most dangerous.
Everything else (cocaine, meth, etc.) is not particularly dangerous to withdraw
from, and usually these patients will not get specific detox medications.
------
mnm1
I've seen plenty of doctors who prescribed benzos and not a single one had any
idea how to taper off their patients properly. Nor was a single one interested
in it. This is a money-making machine for them and they have no interest,
regardless of what's best for the patient. On the other hand, I've gone to
doctors who wanted to stop these cold-turkey risking seizures and death. Those
doctors clearly never heard of the Hippocratic oath. I have never seen a
doctor willing to work with a patient to taper off properly. Until we get to
that point, talking about reducing prescriptions is akin to signing possible
death sentences for patients or pushing them to the black market / pill mills.
My own withdrawal took a few months and I did it on my own. It wasn't
pleasant, but it wasn't as horrible as some others' experiences. Basically,
the medical establishment says 'fuck you' by putting you on these meds long-
term, and another 'fuck you and die,' by not knowing how to taper you off
properly or even knowing when it is appropriate. We have a long, long way
before solving this problem, and reducing prescriptions by itself is an
incredibly stupid and cruel way to go about this. I can see why it's being
done this way. Once you become dependent on something like benzos, most
doctors and most of society does not think your life is worth living and they
try their hardest to make it so.
------
rincebrain
It seems like benzos, while sometimes quite powerful, can have really nasty
side effects that some doctors irresponsibly don't disclose, including the
rapid tolerance, rebound properties, and withdrawal in general.
It also seems that, like opiates, it can vary a lot from person to person.
I've been fortunate, and the few times I've had occasion to try taking benzos
for a non-hospital interval, they didn't do anything for me - positive,
negative, or otherwise, without any sort of visible withdrawal effects when we
stopped.
Conversely, there are people I know who have reported nasty side effects and
dependency issues rather rapidly (in my own family, even).
I really think the way to move forward and minimize this see-sawing of public
opinion on necessary evil versus unnecessary tool will be gaining better
insight into people's personal response profiles to these things before and
after giving them the drugs, so you can try to notice "huh, that's a lot
higher concentration of those metabolites than I expect, I guess they process
it fast" or "well that opioid sure is lighting up the reward parts of the
brain, guess they're at decent risk for addiction."
(Unfortunately, I'd speculate we're at least 20y out from anything like that
being ubiquitous/useful, so ...)
------
cc-d
GABAergics (the class of drugs under which benzodiazepines fall) in general
are pretty much the sole class of popular recreational drug which have a very
real possibility of lethal withdrawals.
In the case of alcohol, it often takes years for addicts to reach a point
where withdrawal becomes lethal. In the case of short acting
benzodiazepines/barbiturates, this point can be reached in less than a month.
Of course, benzodiazepines are in schedule IV, which means they are viewed as
being rather benign with no/low potential for abuse. In the eyes of the
federal government, alprazolam (xanax) is far less dangerous than
marijuana/the traditional psychedelics.
Just another data point demonstrating the utter absurdity of US drug
legislation and regulation.
~~~
jnovek
The DEA drug schedule is a hot mess.
[https://www.dea.gov/drug-scheduling](https://www.dea.gov/drug-scheduling)
There's no planet where Ritalin has a higher potential for abuse and addiction
than Xanax. Not to mention all the lower-risk drugs that have been categorized
schedule I for political reasons.
Under the current system _rohypnol_ is schedule IV but has special date rape
laws passed to make possession of it punishable like a schedule I drug as a
workaround.
~~~
cc-d
>There's no planet where Ritalin has a higher potential for abuse and
addiction than Xanax.
Most of the prescription opiates such as Hydromorphone, Oxycodone, etc are
schedule II as well.
>Not to mention all the lower-risk drugs that have been categorized schedule I
for political reasons.
Not just for political reasons (clonazolam would be FAR superior to anything
currently scheduled as a 'date-rape' drug, thanks for keeping us safe,
politicians), but also anything 'new' is often placed in schedule I by
default, without any consideration of the actual properties of the drug.
A great recent example of this is when the DEA moved to schedule kratom as
schedule I. Kratom. The DEA, in an age where it gets constant flak for
classifying marijuana as a schedule I drug, attempted to classify kratom as
having more potential for abuse than Hydromorphone.
It's an absolute fucking sham, but good luck seeking a political career while
being seen as anything other than 'TOUGH ON DRUGS!'.
------
daeken
I take 1mg xanax up to once a day (typically every other day) and it has
completely changed my life for the better. In conjunction with propranolol
taken regularly (20mg twice a day, roughly), my anxiety is finally in a fairly
well-controlled state. Unfortunately, getting benzo prescriptions -- even for
the low dosage and frequency I'm on -- is hard and getting harder. Ordering it
online is possible but rife with scams and risks. I understand that some
people abuse these medications but for me they're life-saving; in cracking
down on benzo prescriptions, my anxiety medication is becoming a source of
anxiety in itself.
~~~
peteretep
Have you tried a medication that targets chronic rather than acute anxiety?
You're going to start building a tolerance to the Xanax sooner or later, so you
need a plan for when that starts to happen. Escitalopram has worked great for
me, in addition to propranolol as needed.
------
qubex
According to my psychiatrist (whom I turned to when I realised that I had an
addiction problem I had to deal with) were it not for some highly unusual
metabolic pathways my sixteen-year benzodiazepine habit would have had a
chance to end my life multiple times (as it is I just ended up in ER once
after inadvertently combining a hefty dose of Valium in the morning with a few
celebratory margaritas at midday).
Said pathways have also given me the privilege of being able to quit cold
turkey (in the sense that I suffered no crippling withdrawal symptoms or
rebound effects, but man is it difficult to break the _habit_ ).
I count myself amongst the very lucky.
------
code_duck
I’m pleased to see this getting more attention. I find the memory-erasing
effects of these drugs to be unpleasant, and their duration disturbing. If you
take three of them, 24 hours later you may still have the blood plasma level of
one pill or more, depending on which benzo it is. Most people don’t understand
drug half life and are unaware of that. Then if you mix in cannabis or
alcohol, things start to get really dangerous memory-wise. These drugs are
prescribed fairly casually to people who don’t have any serious medical or
psychological conditions, and in my observation are treated equally casually
by consumers.
~~~
tnecniv
> If you take three of them
One should note that "three of them" is (probably) a lot. Even half a pill is
often sufficient to quell panic attacks.
~~~
code_duck
They are widely abused recreationally, too, typically in higher doses.
Half-life of Xanax varies between 6 and 29 hours, averaging 11.5 hours. A
“pill” is an arbitrary amount and that’s not what I’m referring to. It’s the
proportion that still affects you hours later and how long it lasts that
matters, including that dosages can overlap.
It’s also important to note that tolerance to these drugs develops. Half a
pill to you might be two for someone who has been taking them every day for
years. Tolerance to the various effects develops to different extents, and the
impairment you subjectively feel at a given dose may understate the measurable
reduction in motor skills and judgment.
If you take half a pill, you’re still on more than a quarter of a pill when
you wake up the next day. If you take another half pill, you will be on more
than a half pill.
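A minimal sketch of the decay arithmetic behind that last claim, assuming simple first-order elimination and the 6-29 hour half-life range quoted above; whether more than a quarter of a dose is still circulating the next morning depends on where in that range a given person falls and on how many hours have passed.

    # First-order elimination: fraction of a dose remaining after t hours.
    def fraction_remaining(hours_elapsed, half_life_hours):
        return 0.5 ** (hours_elapsed / half_life_hours)

    # 12 hours after a dose, for the short, average and long half-lives quoted.
    for half_life in (6, 11.5, 29):
        left = fraction_remaining(12, half_life)
        print(f"half-life {half_life} h: {left:.0%} of the dose remains")
    # prints roughly 25%, 49% and 75% respectively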
------
seancoleman
For anyone looking to quit benzodiazepines, the Ashton Manual is the canonical
resource: [https://www.benzo.org.uk/manual/](https://www.benzo.org.uk/manual/)
~~~
winstonsmith
The Ashton Manual is good resource, but the state of the art taper method as
far as I know is the liquid titration (via suspension, not solution) micro-
taper. See, e.g.,
[https://www.google.com/search?q=benzobuddies+micro+taper+liq...](https://www.google.com/search?q=benzobuddies+micro+taper+liquid)
.
------
Ftuuky
My mother had insomnia and her doctor prescribed some benzo (can't remember
which) _3 times per day_. She would take one in the morning and spend the rest
of the day sleeping or calling random people with super weird conversations. I
went back to the doctor with her, demanding to know why he had prescribed such
a strong medicine 3 times per day when her problem was difficulty falling
asleep, and he said, "oh, she looked like she has anxiety". I wanted to punch
him in the face. These doctors prescribe whatever the pharma marketeers pay
them to prescribe.
------
Karrot_Kream
I feel for the patients that actually need a benzo to lead a normal,
functioning lifestyle. Due to the actions of abusers it seems the public is
starting to distrust medication.
~~~
jnovek
A similar situation has already played out with people who deal with chronic
pain and opioids. Stricter laws may reduce abuse, but they have a chilling
effect on prescribing physicians.
~~~
TylerE
I don't think the laws even reduce abuse. If anything, they move people from
prescribed, professionally manufactured drugs with some degree of monitoring
to the black market.
------
lmpostor
Dirt cheap, "synergizes" with alcohol, street presses being incredibly
overdosed, it is weird seeing the writing on the wall and then watching it be inked
into existence.
------
toonervoustosay
I've found Hemp-based CBD flower a viable alternative to benzos. There are a
few farm-to-customer websites where you can order it for a much more
reasonable price than full-spectrum cannabis. If anyone out there wants to rid
a benzo dependency, try CBD flower. The effects are rather immediate (due to
inhalation).
------
UpshotKnothole
A friend of mine got hooked on heroin and ended up on methadone maintenance.
He’s since managed to get off that and is clean, but he had horror stories of
people on methadone abusing benzodiazepines like crazy. Apparently mixing
methadone and high doses of drugs like Xanax produce effects similar to
heroin, but benzos are really hard to get off. He talked about a woman who
couldn’t get her Xanax fix, and she started having seizures. Benzodiazepines
take months to titrate off safely, and higher doses associated with abuse do
unpleasant things to your seizure threshold and memory.
Bad stuff unless you must have it.
~~~
stryk
It is incredibly, _incredibly_ dangerous to mix benzodiazepines (Xanax,
Ativan, etc.) with Methadone. This is common knowledge amongst opiate addicts,
at least everywhere I ever went in the US back in my wilder days. I have 3
close friends whom I grew up with that all died before age 30 from abusing
that exact combination of narcotics, and know of countless more just in my
home state alone.
Benzos are a respiratory depressant, and when combined with Methadone it
amplifies it to the point where you stop breathing in your sleep and never
wake up from respiratory failure, lack of oxygen to the brain, or your body
freaks out and has a coronary episode, etc. It's really, really risky -- no
joke & no exaggeration. If alcohol is in the mix too then it's even worse.
And I'm not going to pretend like it's not enjoyable -- because it is. It's a
great fuckin' buzz if downers are your thing. IMO it's better than heroin (no
'rush' to it, but the effects hit you like a ton of bricks and it lasts all
night long. And it's a cheap buzz too), but it's also asking for your life to
end.
Methadone clinics know this, and at every one that I've ever seen, heard of, or
been to personally, benzos are their one big 'no-no' [as in: if we find it in
your Whiz Quiz we kick you out; some won't even give you a second chance, and
most clinics have mandatory urine screening twice a month, some every week].
You can test positive for damn near anything else -- and they expect you to
test positive for opiates -- but if you have benzos in there then you kick
rocks.
~~~
mnm1
Do you have a source for this "common knowledge"? I've seen plenty of people
on methadone do just fine with benzos, especially if they take prescribed
doses. I'm not so sure this isn't some bullshit pushed by doctors without
evidence so that they have an excuse to stop treating their patients and leave
them without benzos in a state where they are forced to either go to the black
market or potentially withdraw and die. I've seen a lot of this from doctors
with regard to methadone patients, trying to take people who have been on
benzos for years or decades off without proper tapering and without a proper
reason. It's almost as if they think of methadone patients as less than human,
creatures whose lives are not of value. Wait, not almost. Whatever happened to
the Hippocratic oath?
~~~
stryk
I mean I cannot link you to a direct source, it was just something everyone
knew, ya know 'common knowledge'. This was on both coasts as well as the
midwest.
And it was explained to me at 3 different clinics in 3 different areas of the
country that it was really about #1) liability -- particularly at clinics that
accepted insurance for payment but not exclusively, there were cash-only ones
with the same rule: No Benzos full-stop. If you had a legit prescription for
xanax or ativan then they would send a letter to the prescribing doctor and
would not dose you until they got an affirmative, positive response -- and to
a somewhat lesser extent #2) they know it has the real potential to be fatal,
and they're not monsters; they don't want to kill all the junkies. Despite what
you might think, some of them actually do give a shit and got into substance
abuse medicine trying to help. Sure, for some it's just a job, and if you own
the clinic it's a gold-shitting goose, but there are a lot of them who are
genuinely trying to do good.
~~~
mnm1
Taking patients off benzos without properly tapering them off can lead to
death. Some clinics are putting their own liability worries ahead of patients'
well-being and risking patients' lives in the process. It's not every place,
but the places that do this clearly do not have the patients' best interests
in mind. It's hard not to think that it's because they are dealing with
addicts that they even consider such actions. The way addicts are treated at
some clinics is simply unbelievable. They are lied to, disrespected, and
ignored. That's bad enough but putting their lives in danger based on
something that's allegedly common knowledge but hasn't even been studied is
beyond preposterous. However as you say, they are raking in the dough so what
do they care. It's not everywhere, but it's like that at a lot of clinics.
Can the EU become another AI superpower? - PretzelFisch
https://www.economist.com/business/2018/09/22/can-the-eu-become-another-ai-superpower
======
Ftuuky
Apologies if this rant makes no sense but I'm somewhat frustrated with this AI
thing.
One thing I've noticed in large EU corporations (where I and friends of mine
worked as data analysts/"scientists"): upper management decides to invest in
AI because they don't want to miss on this, so they create a new department
("AI/Robotics" or something cooler) and fill it to the brim with smart PhDs in
mathematics and physics. They're all data scientists and ML engineers now,
which means all the data mining, cleaning/preparation and labeling is going to
be beneath them, it's not cool or impressive in their LinkedIn profile. They
all want to work with the latest thing, and each one has a different opinion
and holds to it pretty strongly. Nobody pays attention to product/project
managers; they don't want to spend time creating PowerPoint presentations and
dashboards to communicate and align with stakeholders. Discussing ethics in AI
is a silly hippie thing. Then you end up with something similar to what happened
in my company: you create a bot to parse through CVs and decide which ones are
better for any given job description. It took 4x more time than planned, and
it's racist and discriminatory because it mirrors what the company did until
now: hiring only certain kinds of people that studied specific degrees in
specific universities that learned which keywords are good on a CV even if it
means nothing. Nobody noticed or discussed this beforehand despite being so
obvious because everyone is busy troubleshooting Keras or complaining about
their GPU cluster.
~~~
gaius
_They 're all data scientists and ML engineers now, which means all the data
mining, cleaning/preparation and labeling is going to be beneath them_
You have absolutely hit the nail on the head there and this mirrors my
observations of what's happening in the wider industry. Data science and AI
are super cool viewed from the outside but the reality of the work day to day
is that it is not _fun_. Getting a result and making a meaningful impact is
very satisfying, but getting there requires careful, painstaking, meticulous
work, and getting the data into a format you can use is the vast majority of
it. It is essential of course, but no one enjoys spending weeks (or months)
decoding exactly what the fields are in this big weird CSV file you got from
the mainframe, how exactly they marry up with the XML you got from this other
system (there's no documentation and the people who programmed it originally
are long gone), then doing that 100 times to mash up all your data sources
into something you can finally feed into your ML step. And then you come up
with some recommendations which are immediately shot down because they are
actually illegal, and no one in the business can believe you even suggested it
because that gets taught in Compliance 101 (I have really seen this happen).
Someone with the patience and the good attitude to do the data prep, and who
has a bit of basic domain knowledge, armed with even the most rudimentary ML
techniques will in any practical sense run rings around any rockstar
researcher who just jumps in straight away with the AI. You would hope that
PhDs who spend literally years doing research before writing up would
understand this but it seems to be the first thing they forget!
~~~
Ftuuky
>Someone with the patience and the good attitude to do the data prep, and who
has a bit of basic domain knowledge, armed with even the most rudimentary ML
techniques will in any practical sense run rings around any rockstar
researcher who just jumps in straight away with the AI. You would hope that
PhDs who spend literally years doing research before writing up would
understand this but it seems to be the first thing they forget!
You articulated so well something that I've been trying to say to my managers.
Thank you for your post.
------
LeanderK
Excellent article. I don't think the chances of success are that bad, but
they have to take it seriously and also seriously invest (Germany doesn't... at
the moment!). I think in contrast to other technologies, the EU has a few
things going for it (from a German perspective):
\- There are some important industries that are very much interested in AI
(for example the automotive industry). So serious private-sector money could
be raised.
\- Europe has the ability to pull off large-scale research projects and has the
experience to do so, from the LHC to the various institutions like the
Helmholtz Association or the DKFZ in Germany (the EU also made some errors in
previous large-scale research projects to learn from).
\- The brains are there. I see many (very!) talented and dedicated students
and researchers here and the research infrastructure (universities, non-
university research agencies) is also established and quite diverse.
\- I see that there's an understanding that entrepreneurship and non-
traditional industries are areas in which the EU has been falling behind. I
feel like it's improving.
I also don't think we're too late yet.
I see two main obstacles:
\- Lack of serious investment from public and private sources (this requires
realising what a significant investment looks like). This is at the moment
quite obvious if you follow the BMBF (the German federal research ministry). It
seems like they don't realise how insignificant creating a few research groups
is.
\- No coherent strategy. Spreading everything thin without a thought about
where to reach critical mass is wasting time and energy. This is a problem
especially in Germany and, of course, the EU. We need a physical, European AI
research hub, with enough conferences and exchange with the other research
institutes to get traction.
EDIT: What makes me really angry and frustrated (because in the end I am more
or less powerless) is the complete waste of potential in Germany. We have many
great universities here with a lot of great faculty. But most of the
universities are seriously underfunded and not really well maintained. Some
universities are so poorly maintained that their buildings are uninhabitable
because of the danger of collapse. It's crazy; it's potential just going to
waste. I think that we in Germany wouldn't be in this situation if our
universities could seriously compete and realise their full potential.
~~~
Eug894
_So serious private-sector money could be raised._
Only after Elon's Tesla has a comfortable win over them again, I guess...
------
light_hue_1
I'm a European AI researcher in the US. I've seen both sides of the pond.
The EU needs serious reforms if it wants to be competitive in AI.
Academia in Europe is far behind academia in the US on average. And will be
far behind China in a decade or so. European academics generally don't adapt
to new technology. They have no reason to with fairly few links to industry, a
funding that's based on personal relationships and politics rather than
research, and an academic environment that doesn't really emphasize novel
research. The same people, do the same things, for decades on end, with little
to no progress or change. European academics don't put in much effort to get
industry funding since your students are funded by the university much of the
time.
Faculty hiring is very local and incestuous. German universities hire Germans,
and only from a few places. English universities hire the British. French
universities hire the French. etc. There are exceptions but it's rare.
The European PhD needs to be fixed. 3-4 years isn't nearly enough. It's
hurting everyone. The moment someone gets productive they graduate. It's a
total waste of time. They need to move to an American-Canadian style 5-6 year
PhD. The fact that students are generally not funded by projects and
researchers, but departments, also puts a big damper on people's motivation to
hustle and publish.
Funding for startups in Europe is a disaster. It's really hard compared to the
US and Canada, raising multiple rounds is harder, there's little
infrastructure for doing it, and universities are little to no help. Rich
people just don't have an appetite for risk, better to sit on your old money.
This should be fixed by tax laws.
Pay is terrible in academia in Europe. Around half of what it is in the US and
Canada at many ranks. When you can't live well, why would you stay in
academia?
The tenure system is a disaster in many places in Europe and drives anyone
good away to the UK, Canada, or the US. You have a lot of unpleasant steps
where you aren't autonomous.
European research is also very closed. Europeans cite Europeans, who go to
European conferences, and do research with Europeans. There are a lot of
communities like this that are very closed and second-tier compared to
international ones.
I could go on. Nothing will change any time soon unless governments take
action to revamp the university system, university funding, and the tax code
to encourage investment/risk. The next century won't be Europe's sadly.
~~~
bad_good_guy
I just want to point out you are wrong on one point: UK universities have an
extremely diverse faculty, with researchers from a wide range of European and
Asian countries.
~~~
light_hue_1
Yup. This falls into "There are exceptions but it's rare."
The UK, Norway, to some extent Denmark, are much more open than say Italy,
Germany, Spain, or France. Top-tier places in the UK are very open and
international, mid-tier places aren't as diverse as mid-tier US or Canadian
universities.
------
singularity2001
Related rant:
That is the third time I've heard Merkel utter this disgusting sentiment:
“In the US, control over personal data is privatised to a large extent. In
China the opposite is true: the state has mounted a takeover,” she said,
adding that it is between these two poles that Europe will have to find its
place.
It might not be that clear from the above statement, but a similar one left no
doubt that some leaders have told her: For AI we need data and for data we
need tracking/surveillance. So please look at ways to abolish privacy in the
EU.
It's not lack of data which hinders business in the EU, it's overtaxing small
businesses, cronyism, top-down-approaches, Google or something else which is
hard to grasp. Lack of ambition? Lack of youth? Foreign espionage/sabotage?
Negativity?
Angela (if you read this, which I'm sure you do), you did a fantastic job
protecting us from Cheney's torture doctrine, and much else afterwards. Erosion
of privacy leads to erosion of societies. Don't mess it up in your last
months.
~~~
eksemplar
The American paradox is that you often have very good governments, but trust
them very little. Whereas you often have very evil companies, but trust them
very much.
Europe is the opposite.
The EU is working hard to secure citizen rights though, but that doesn’t mean
it isn’t working for ways that you can share your data. The EU just wants
transparency and ownership to remain with its citizens.
~~~
JumpCrisscross
> _the American paradox is that you often have very good governments, but
> trust them very little_
That distrust is a reason why our government has been stable over the past 200
years, despite a series of technological, economic, geopolitical and cultural
changes.
~~~
adventured
The two party system also results in dramatically greater stability, typically
at a cost trade-off of dynamism. It's hard to change a two party system, so
hopefully if you've got one of those, you have something worth maintaining
underneath. Systems with lots of parties are far less stable over long periods
of time by contrast and are prone to rapid change and takeover. Australia for
example, with its 13 parties with parliamentary representation, lately can't
keep a leader for more than a year or two, with six prime ministers in a
decade. In that extreme case it's causing policy stagnation however, as none
of them are managing to tackle big, urgent problems before they're tossed out.
Europe's typical preference (excluding a few countries like Russia) toward
lots of parties has also resulted in neo-Nazi groups acquiring increasing
government power and representation, another downside to that approach.
------
novaRom
I have lived and worked in different parts of the world (the Bay Area, MA, the
Middle East, Japan). For me, Europe is simply the best environment if you have
a family and you care about your freedom, privacy, and comfort. Decisions made
by EU authorities have significant implications for everything, but the way it
works is a really democratic process, with long-term thinking and significant
transparency. In some regards living in Germany is similar to the Bay Area, but
with much more emphasis on the social well-being of the whole society, which in
fact affects your everyday life, your safety, your comfort, your family.
------
Barrin92
"Yet look beyond machine learning and consumer services, and the picture for
Europe is less dire. A self-driving car cannot run on data alone but needs
other AI techniques, such as machine reasoning, which is done by algorithms
that are coded rather than trained—an area in which Europe has some strength.
Germany has as many international patents for autonomous vehicles as America
and China combined, and not only because it has a big car industry."
Important point. Not only do I see more hope for deep innovation in the
manufacturing sector than in selling people the most targeted ads, this also
has the potential to create much more equitable outcomes for everyone in the
economy.
I don't really understand the concept of an 'AI superpower' at all. Superpower
at what, warfare? Concentration of wealth as AI returns flow to only a handful
of people? Have Americans and the Chinese pondered whether there is some
higher goal to the development of AI or just competition for competition's
sake?
As a European (and German) citizen, I am much less concerned about taking the
slow and steady route here. I have no interest in seeing Europe destroy its
privacy or using AI to malevolent ends just to stay ahead in some fictional
horse race.
People told us in the 80s that if we didn't move ahead with the service
economy we'd be stuck in an archaic industrial society, and Thatcher was
hailed as the reformer. I see parallels here to the AI debate. Now, where has
this gotten the UK outside of London? For me, AI looks more and more like the
hype around finance and services around that time. I'm okay with being a
somewhat slow and bureaucratic grumpy German, if we get to be the guys who put
advanced technologies into boring machines without much fanfare that's fine,
people seem to keep buying them.
~~~
ben_w
> I don't really understand the concept of an 'AI superpower' at all. Superpower
> at what, warfare? Concentration of wealth as AI returns flow to only a
> handful of people?
In principle, in the sense of being able to give all of your citizens the
ability to complete any task which previously only experts could achieve.
Google Übersetzer (Google Translate) is often terrible and often mishears(?) my
words, but it is still clearly better than my German, even though I have the
equivalent of a good high-school grade in it.
> Have Americans and the Chinese pondered whether there is some higher goal to
> the development of AI or just competition for competition's sake?
Americans certainly have; it's part of both utopian and dystopian fiction. This
fiction has been a guiding force with regard to what the AI looks and acts
like in many cases; for example, Alexa’s adverts were clearly trying to sell it
as the ship’s computer in Star Trek.
~~~
Barrin92
>In principle, in the sense of being able to give all of your citizens the
ability to complete any task which previously only experts could achieve.
I'm not concerned on the consumer side of things. We're all part of a global
economy. If you want to buy AI services in Germany you can do that easily,
just as I can use Google without a problem despite Germany not exactly being
at the forefront of the tech. Having Google or Facebook physically located in
your country, from a consumer standpoint, actually doesn't really matter at
all.
Honestly this whole arrangement has always puzzled me from an American
perspective. Americans sacrifice quite a lot in terms of equality, privacy and
so forth to be at the forefront of this stuff, and we can just use it all the
same because we're good customers. I'll be sad the day the US decides it
doesn't want to be in charge!
~~~
ben_w
Ah, I think I see the point here.
Software (including A.I.) developed outside of your culture won’t reflect your
culture. We already see this problem with regard to racism and sexism — A.I.
which cannot detect black people at all, or hand-written healthcare software
which cares more about insulin than periods — so for example, if you leave it
to America and China, you won’t have any education software (with or without
A.I.) which can cope with the difference between a Gymnasium, a Realschule,
and a Hauptschule.
USA companies are also already having a lot of trouble with the cultural
difference between the “easier to seek forgiveness than ask permission”
attitude they’re used to in America as compared to anywhere which enforces
rules because they exist for a reason. Google Street View in Germany, for
example. I don’t know about Chinese companies, but I assume they also have
cultural assumptions that won’t apply outside China.
------
rcarmo
I see this as unlikely in the same way that having the next Facebook in the EU
is unlikely. Companies in Europe tend to bet more on B2B instead of B2C, which
leads to a lot of data siloing and many attempts at building ML models with
tightly focused legacy data within a specific domain.
The article doesn’t make a very clear distinction between academic and
business AI, but I can’t see any inherent advantages for the EU from an academic
perspective either - there are fields of AI that are under-represented in
current consumer tech, but... I’m skeptical.
(I live in the EU and work in AI and ML - there is so much low-hanging fruit in
terms of just making companies aware of what they can do that I seldom have
deep enough engagements to step outside prepackaged approaches)
------
baxtr
Thanks HN for all the rants and the honesty about this topic (and in general).
By contrast, whenever I open LinkedIn people seem to be naively super excited
about Europe becoming an AI superpower (whatever that means). While I’m not
against excitement, I can’t support it because it seems so detached from any
real problem that we want to solve. And ultimately, that should be the driver
for AI. Of course we can still engage in basic research, but we won’t become an
AI superpower just because we want it.
------
PeterStuer
Let's say Europe against all odds succeeds in creating the seeds of some
promising AI scale-ups. How will they prevent the US, Korea and China from
cherry-picking and buying those out just like in all the previous IT-tech
waves?
~~~
snaky
They will fix it European way.
> BERLIN (Reuters) - The German government is taking steps to counter a surge
> in Chinese bids for stakes in German technology companies, including the
> creation of a billion-euro fund that could rescue such firms in financial
> trouble, a government source told Reuters.
~~~
petre
Maybe they shouldn't have taxed the crap out of these companies in the first
place?
Most successful EU companies are 100+ years old due to taxes, bureaucracy and
big-company bullying. I find posts about the startup scene in most if not all
EU countries laughable. Western Europe is too regulated and expensive (taxes)
and Eastern Europe is too corrupt and politically unstable. Southern Europe is
too hot and distracting. Maybe it has the mafia as well. 20% VAT? Come on,
that's outright theft. Hungary has 27% VAT. Portugal has 23% VAT. France has
20%, but they also tax the crap out of companies and private citizens,
especially if you're well off. Germany is over-regulated; even dogs' barking
hours are regulated. How do you expect tech startups to survive in this
environment? Oh wait, there's Ireland, which gets lots of rain (people are cool
working indoors) and has low taxes, except for VAT, which is 23%. That one could work.
~~~
pavlov
Taxes in the USA are not meaningfully lower, except for state/local sales taxes
vs. European VAT.
But the thing about VAT is that it’s unimportant for most businesses. When
selling B2B you just subtract all the VAT you paid from the VAT you owe. The
only kind of business that is seriously affected by VAT is highly price-
sensitive B2C, and technology startups usually aren’t that.
~~~
jstanley
I don't think you understand VAT.
If you buy some computers for $100, do some work, and manage to sell your
services for $1000, you get to reclaim $20 of VAT but you have to pay $200 of
VAT, so a 20% VAT rate costs this hypothetical business 18% of its revenue,
which is hardly "unimportant".
And a typical tech company would be taking in far more than 10x in revenue
compared to what it spends on VAT-able goods.
EDIT: But see below.
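A minimal sketch of the arithmetic in the example above, using the $100 purchase, $1000 sale and 20% rate from the comment; nothing here is specific to any real tax code.

    # Buy $100 of gear, sell $1000 of services, 20% VAT on both sides,
    # input VAT reclaimed against output VAT.
    VAT_RATE = 0.20
    purchases, sales = 100.0, 1000.0

    input_vat = purchases * VAT_RATE        # 20.0 paid on the computers
    output_vat = sales * VAT_RATE           # 200.0 charged on the services
    vat_remitted = output_vat - input_vat   # 180.0 handed over

    print(vat_remitted / sales)             # 0.18 -> the 18%-of-revenue figure

If the buyer is itself VAT-registered it reclaims that $200 in turn, which is the point made elsewhere in the thread about the burden landing on the final consumer.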
~~~
cuban-frisbee
He literally just said it only matters in a B2C setting and not B2B.
Can you name even one AI product that is sold directly to consumers? Most if
not all of it is B2B, and there VAT is not a burden, as it is a tax on
consumption levied on the consumer, not on other businesses.
~~~
petre
Amazon Alexa? Wait, the AI there is meant for vendor lock-in. Roomba vacuum
cleaners? Spotify? As a business you still have to buy stuff to recover VAT,
and you sell your product 20% inflated.
~~~
cuban-frisbee
One of the most successful EU businesses (Spotify) does not seem to support the
assertion that VAT is an undue burden. Also, I don't know if Spotify uses its
own tech or if it uses stuff from another vendor.
Do also remember that all your competitors are bound by the same rules, so
from a purely economic standpoint prices in a given country with VAT will
just appear x% higher across the board, and then the real question is more one
of disposable income.
To be perfectly honest I am no expert in VAT, but I do know that a country
like Denmark with 25% VAT is also ranked in the top 5 countries to do
business in.
Also, I don't know what you mean by "As a business you still have to buy stuff
to recover VAT", but oh well, maybe you can clarify.
~~~
jstanley
> prices in a given country with VAT will just appear x% higher across the
> board, and then the real question is more of disposable income.
In other words, "if you can't make at least a 20% profit on the value you add,
you're not allowed to make a profit at all". This stifles innovation at the
margin.
------
m00dy
As a Turk who migrated to the EU 5 years ago, I put my bet on EU politicians. I
hope they are going to find a sweet spot in this world of at least two poles.
Madoff Trustee Seeks $19.6 Billion From Austrian Banker - ojbyrne
http://dealbook.nytimes.com/2010/12/10/madoff-trustee-seeks-19-6-billion-from-austrian-banker/
======
badwetter
Makes me sick when I read of the greed. Hope the trustee has a solid case and
can retrieve the funds in whole.
5G will cost you a bundle - cdvonstinkpot
http://money.cnn.com/2015/05/18/technology/5g-cost-wireless-data/index.html?iid=ob_homepage_tech_pool&iid=obnetwork
======
DiabloD3
Except the article fails to say that upcoming LTE Advanced service is "true"
4G, not 5G.
4G has multiple requirements that are only satisfied by LTE Advanced and WiMAX
2. Both were ratified in 2011 by ITU-R in the IMT-Advanced specification. The
major sticking point is true 4G devices and networks must support up to
100mbit/sec for mobile devices, and up to 1gbit/sec for stationary or low
motion devices.
In other words, there have been no real 4G (as in, LTE Advanced) deployments
worldwide (very few at the end of 2013, most of them in 2014, all of them with
extremely limited scope), and none at all in the US. What they are now calling
our existing LTE and WiMAX networks is "3.9G".
A major feature being introduced by LTE Advanced is complex MIMO, where not
only can a device MIMO to a single base transceiver station (one or more of
these are on a cell phone tower), it can also communicate with multiple ones
belonging to the same network in disjunct physical locations (ie, you could be
in a middle triangle of them, and connect to all three if they were configured
correctly), and also be able to MIMO with a heterogeneous cluster (as in, a
nearby tower and a next generation femtocell sitting on your desk). Most
phones will only support 2x2 or 3x3, which is enough to support smooth hand
off as you pass by towers. Up to 12x12 is supported, I believe.
Other major features are much wider channels, better forward error
correction, higher coding complexity (128QAM), and required support for
cross-band MIMO.
Nexus 6 and HTC One M9 (both 2x2, 300mbps downlink maximum) and Galaxy S6
(3x3, 450mbps downlink maximum) support LTE-A, and upcoming LTE-A capable home
access points may support up to 8x8 and/or wider channels.
A test by DoCoMo managed to get 5gbit/sec with a non-stationary 12x12 100mhz
channel test rig.
Heterogeneous MIMO seems to be required for Google-Fi, which explains why
Nexus 6 can do it but not Nexus 5, although that is just an educated guess.
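A hedged back-of-envelope for how the quoted peak figures decompose; the ~75 Mbit/s per spatial layer per 20 MHz carrier value (64QAM plus typical control and coding overhead) and the layer/carrier splits below are illustrative assumptions, not something stated in the article.

    # Rough LTE peak-rate decomposition: layers x aggregated carriers x an
    # assumed ~75 Mbit/s per layer per 20 MHz carrier.
    PER_LAYER_PER_CARRIER_MBPS = 75

    def peak_mbps(mimo_layers, carriers_20mhz):
        return mimo_layers * carriers_20mhz * PER_LAYER_PER_CARRIER_MBPS

    print(peak_mbps(2, 2))  # 300 -> the "300 Mbps downlink maximum" class
    print(peak_mbps(3, 2))  # 450 -> the "450 Mbps" class under this rough model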
------
calgoo
What we need to do is remove all these limits on the amount of data we
transfer. If they really want us to use wireless for the home and on the go,
they really need to remake the entire pricing plan, as it should have the same
service as my landline fiber (unlimited transfer).
New Horizons enters safe mode 10 days before Pluto flyby - dandelany
http://www.planetary.org/blogs/emily-lakdawalla/2015/07042044-new-horizons-enters-safe-mode.html
======
dandelany
Alan Stern, the mission's P.I., dispelled rumors that contact had been lost on
the USF forum, saying "Such rumors are untrue. The bird is communicating
nominally."[0]
Also the Deep Space Network[1] page shows an ongoing 1kbps downlink from New
Horizons; during the safe mode event it was at only 9bps. So that's a good
sign! I'm sure they are still panicking a bit about what went wrong, but
hopefully we're out of the woods on this particular anomaly.
[0]
[http://www.unmannedspaceflight.com/index.php?showtopic=8047&...](http://www.unmannedspaceflight.com/index.php?showtopic=8047&st=180)
[1] [http://eyes.nasa.gov/dsn/dsn.html](http://eyes.nasa.gov/dsn/dsn.html)
~~~
tptacek
The idea that we have an ongoing digital comms with something this far from
Earth is completely fascinating to me. Are the encodings used documented
anywhere? All I can find out is that it's X-band, 1kb/s.
~~~
throwaway_yy2Di
"...encodes block frame data from the spacecraft Command and
Data Handling (C&DH) system into rate 1/6, CCSDS Turbo-coded
blocks."
[pdf]
[http://www.boulder.swri.edu/~tcase/NH%20RF%20Telecom%20Sys%2...](http://www.boulder.swri.edu/~tcase/NH%20RF%20Telecom%20Sys%20ID1369%20FINAL_Deboy.pdf)
~~~
tptacek
Cooooool. ["CCSDS" "standard"] was the Google search I was looking for.
~~~
planteen
If you are curious about reading more, CCSDS is an overloaded term in the
space industry for an onion of different OSI layers that are changing all the
time. My guess is that the telemetry coding standard CCSDS 101.0-B-6 is likely
to have been used on New Horizons. Even though it is now a "historical
document", some new space missions still use it.
Here is a copy:
[http://public.ccsds.org/publications/archive/101x0b6s.pdf](http://public.ccsds.org/publications/archive/101x0b6s.pdf)
------
r721
"NASA’s New Horizons mission is returning to normal science operations after a
July 4 anomaly and remains on track for its July 14 flyby of Pluto.
The investigation into the anomaly that caused New Horizons to enter “safe
mode” on July 4 has concluded that no hardware or software fault occurred on
the spacecraft. The underlying cause of the incident was a hard-to-detect
timing flaw in the spacecraft command sequence that occurred during an
operation to prepare for the close flyby. No similar operations are planned
for the remainder of the Pluto encounter."
[http://www.nasa.gov/nh/new-horizons-plans-july-7-return-
to-n...](http://www.nasa.gov/nh/new-horizons-plans-july-7-return-to-normal-
science-operations)
------
robertfw
I wouldn't want to be the one debugging this. Talk about pressure to deliver
and difficult constraints!
~~~
stevewepay
On the contrary, this is where you can really prove that you are worth your
salt. There is no better arena to prove yourself than a real, live
production-down situation.
~~~
kabouseng
It's a shame really, as the engineers fighting the crises get a lot of
attention, but very often those crises are caused by themselves.
It is the engineers whose projects run smoothly who are ultimately worth more,
as they can predict and prevent problems before they become a crisis, but they
get no recognition for it.
~~~
ende
Yes, this. It reminds me of goalies in hockey, and how people are in awe when
a goalie makes some ridiculous save when in fact the goalie would never have
had to make such a save if they hadn't been out of position in the first
place. The best goalies are pretty boring to watch.
~~~
JohnBooty
Yes! The Phillies used to have a fan-favorite outfielder who played hard and
often made spectacular catches - but he was actually a pretty bad outfielder;
the reason he made spectacular catches is because he turned routine plays into
adventures.
~~~
speeder
I remember now how some people considered the US goalie one of the best World
Cup players...
The thing is, he was considered one of the best World Cup players because the
US defense was so bad, so bad, that without him the US would have ended the
cup losing all their games outright.
------
nathanb
That is a really well-written article. It explains the problem, explains what
the remediation plans are, and puts a human face on it...while remaining
positive and optimistic.
Looking forward to seeing what the problem turned out to be and how they solve
it!
~~~
dandelany
I highly recommend the planetary.org blogs for space news, especially Emily
Lakdawalla's - they are always excellent.
------
zatkin
Does anyone know how good the cameras on New Horizons are?
~~~
elahd
Wikipedia says the camera is 1024x1024. There are a bunch of other non-optical
sensors on board, as well.
[https://en.wikipedia.org/wiki/New_Horizons](https://en.wikipedia.org/wiki/New_Horizons)
~~~
kenrikm
1024x1024 seems rather low res even by 2006 standards. Any idea what
special/expensive/practical reason it needed to be that low res?
~~~
rich90usa
One of the team members had an excellent answer to this question a few days
ago in a Reddit IAmA:
[https://www.reddit.com/r/IAmA/comments/3bnjhe/hi_i_am_alan_s...](https://www.reddit.com/r/IAmA/comments/3bnjhe/hi_i_am_alan_stern_head_of_nasas_new_horizons/csntvhk)
>We’re limited in other ways, weirdly. For example, LORRI, our high resolution
imager, has an 8-inch (20cm) aperture. The diffraction limit (how much an 8”
telescope can magnify) is 3.05 mircorad. which is just over half the size of
single pixel 4.95 microrad. So if we swapped out the current sensor with a
higher res one, we couldn’t do much better because of the laws of physics. A
bigger telescope would solve that problem, but then it would make the
spacecraft heavier, which require more fuel to send to Pluto AND a longer time
to get there, because the spacecraft is more massive. We launched Pluto on the
largest, most powerful rocket available at the time (the Atlas V, with extra
boosters), so again we’re limited by physics: “At the time” doesn’t mean best
ever. The Saturn V rocket, which sent astronauts to the moon, was actually
more powerful.
>More megapixels also means more memory. For example, LORRI images are made up
of a header and then the 1024x1024 array of numbers that make up our image and
go from 0 to 65535 (2^16). There’s not really a way to make that info smaller
if we went to 2048x2048. We could downlink a compressed version, but we want
the full info eventually.
tl;dr
1\. Between optical physics and balancing different costs to launch mass, it
was the sound engineering choice.
2\. Higher resolution would take even longer to retrieve the captured data.
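Two of the numbers in this subthread are easy to sanity-check; the sketch below assumes a ~500 nm wavelength for the diffraction limit and ignores compression, packet framing and coding overhead for the downlink time, so treat it as rough arithmetic only.

    # Diffraction limit of an 8-inch (~0.20 m) aperture at an assumed 500 nm:
    # theta ~ 1.22 * wavelength / aperture, in radians.
    theta_rad = 1.22 * 500e-9 / 0.20
    print(theta_rad * 1e6)      # ~3.05 microradians, matching the quote above

    # One raw 1024x1024, 16-bit LORRI frame, downlinked at the rates mentioned
    # earlier in the thread (1 kbps nominal, 9 bps while in safe mode).
    bits_per_frame = 1024 * 1024 * 16
    for rate_bps in (1000, 9):
        print(rate_bps, bits_per_frame / rate_bps / 3600, "hours per frame")
    # ~4.7 hours at 1 kbps, ~520 hours (about three weeks) at 9 bps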
------
biot
> and a less educational (but not catastrophic) gap in our
> light curves for Nix and Hydra.
Can someone explain the importance of this? What data would this have
provided?
~~~
greglindahl
You can figure out the rotational period, and get an idea of what the surface
looks like, from a light curve. For example, if half of it reflects a lot more
light than the other half, then you'll see a repeating
brighter/dimmer/brighter/dimmer pattern with a period equaling the rotational
period.
If you have a gap in the light curve, it increases your uncertainty.
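A toy illustration of the parent's point, with an invented 10-hour rotation period, a two-hemisphere brightness contrast and some noise; the periodogram peak recovers the period, and a gap in coverage simply widens the uncertainty on that peak.

    import numpy as np

    # Toy light curve: two hemispheres with different albedo, 10 h rotation,
    # sampled hourly for two weeks, plus measurement noise.
    true_period_h = 10.0
    t = np.arange(0, 14 * 24, 1.0)                     # hours
    flux = 1.0 + 0.2 * np.sin(2 * np.pi * t / true_period_h)
    flux += np.random.default_rng(0).normal(0, 0.02, t.size)

    # Strongest periodogram peak gives the rotation period back.
    freqs = np.fft.rfftfreq(t.size, d=1.0)             # cycles per hour
    power = np.abs(np.fft.rfft(flux - flux.mean())) ** 2
    print(1.0 / freqs[np.argmax(power)])               # ~10 hours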
------
tsieling
Edge. Of. Seat.
~~~
TheOtherHobbes
Edge. Of. Solar System.
~~~
kristofferR
It won't reach the edge of the solar system for many decades, at the very
least (if the edge is defined as interstellar space).
However, it truly won't leave the solar system entirely for at least 30 000
years: [http://spectrum.ieee.org/tech-
talk/aerospace/astrophysics/vo...](http://spectrum.ieee.org/tech-
talk/aerospace/astrophysics/voyager-1-hasnt-really-left-the-solar-system)
~~~
TheOtherHobbes
I was being metaphorical and poetic. :)
------
user
>Not Implemented Tor IP not allowed
Come on! What's the point of blocking Tor for them?
Mozilla Outlines Plan to Replace Firefox for Android with 'Fenix' - ecesena
https://www.androidpolice.com/2019/04/26/mozilla-outlines-plan-to-replace-firefox-for-android-with-fenix/
======
ZeroGravitas
Seems quite good, but still a few rough edges (only nightly builds with no
auto-update; LastPass doesn't seem to work with it; you don't get that "open
in app" icon; there doesn't seem to be any way to install add-ons, yet it
seems to be using my ad blocker from standard Firefox).
MongoDB Cloud and Backup Are DOWN - rmykhajliw
http://status.cloud.mongodb.com
======
rmykhajliw
[https://cloud.mongodb.com](https://cloud.mongodb.com) \- link to the main
page [http://screencast.com/t/L6nyk8DT](http://screencast.com/t/L6nyk8DT) \-
screenshot
Tails, you win - tosh
http://www.collaborativefund.com/blog/tails-you-win/
======
sharemywin
I doubt the author would have invested in Disney right before Snow White.
VCs only exist because rich people can only throw so much money at Google.
So, here's what I get out of this if you live in SV and you're technical:
For 85 out of 100 of you: go work for a big company.
10 out of 100 of you: if you're really successful, have lots of capital and are
well connected, you could possibly start a company and be moderately successful.
2-3 of you: if you happen to meet the next Steve Jobs, he/she is probably
talking about making $1 or 2 off poor people in India or China, and you're vital
to him/her getting some business off the ground. Go for it.
1-2 of you: if you actually are the next Steve Jobs and... I don't know, you're
probably actually in India or China right now and not reading dumb comments by
a burnt-out developer like me.
Ask HN: you're rich, now what? - ryanwaggoner
This question is just for fun:<p>Let's say you sold your startup yesterday for enough cash that you never have to work or worry about money again.<p>What would you do now?
======
sivers
I sold my startup a few months ago for enough cash that I never have to work
or worry about money again. <http://news.ycombinator.com/item?id=341565>
I went to the most peaceful spot I could find, and relaxed. I did nothing.
<http://www.vimeo.com/1292105>
After only a couple days, it was never more clear that I was never doing
anything for the money anyway, and the reason I'm always working, driving,
pushing, learning, growing, and building companies is NOT about the future-
goal but increasing the quality of my present moment. It's exciting! It's fun!
So, I started working again. Not because I have to, but because I want to. It
makes my brain spark in a way that not-working doesn't.
So here I am again, programming, excited about some new thing I'm working on,
exactly the same as before I sold the company. I didn't buy anything because
there's nothing I want. My debts were already paid off.
Philip Greenspun's article really does describe it best.
<http://philip.greenspun.com/materialism/early-retirement/>
So does Felix Dennis' book How to Get Rich.
<http://www.amazon.com/dp/1591842050>
Feel free to contact me directly if you have any specific questions you don't
feel comfortable posting on the board here. <http://sivers.org>
\- Derek
~~~
gridlock3d
What I would really like to know is how much money it takes to get to that
point.
$5 million isn't enough according to a very interesting New York Times
article: <http://www.nytimes.com/2007/08/05/technology/05rich.html>
So how much does it typically take for someone in Silicon Valley to "solve the
money problem"?
~~~
ryanwaggoner
$5m invested fairly conservatively can throw off 8%, or $400k / year.
I'm sorry, but if you can't live on $33k / month, even in the Bay Area, you've
got problems.
A couple of those people in the article have homes worth over $1m that they
own free and clear while they work 70 hours a week to make ends meet. Not
smart. Mortgage the house to 70-80% (rates are dirt cheap right now) and
invest the cash at the aforementioned 8%, which adds $100k / year to your
bottom line, plus the tax benefits.
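The arithmetic behind those figures, with the 8% yield taken from the comment and a 5% mortgage rate added as an illustrative assumption; the point is that the leverage play only nets out the spread between the two rates.

    principal = 5_000_000
    yield_rate = 0.08                  # the comment's assumed conservative yield
    print(principal * yield_rate)      # 400000 a year
    print(principal * yield_rate / 12) # ~33333 a month

    # Mortgaging a paid-off house and investing the proceeds nets the spread
    # between the investment yield and the (assumed) mortgage rate.
    house_value, loan_to_value, mortgage_rate = 1_000_000, 0.75, 0.05
    loan = house_value * loan_to_value
    print(loan * (yield_rate - mortgage_rate))  # 22500 a year before tax effects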
~~~
newsycaccount
Please link to a single "conservative" investment with an 8% yield.
~~~
gaius
Right now you can get 6% in the UK on a no-frills deposit account at Tesco
(hmm, thought they were a supermarket) but that'll be taxed so it's more like
4% real. Which is barely keeping pace with (real, not official) inflation.
~~~
weavejester
The base interest rate in the UK is at a 57 year low at the moment, so UK
banks might not be the best place to store $5 million ;)
~~~
gaius
True, but banks lend to each other at LIBOR which has remained stubbornly
high. Only a few weeks ago some were offering 8% 1-year deposit accounts to
get any cash they could get their hands on.
------
pg
I know this isn't the sense of "now" the question intended, but what I'd
recommend is taking a long vacation, to clear your head. You can't usually do
this immediately after selling, because you have to work for the acquirer for
a while. But when you finally leave the acquirer the best thing you can do is
go somewhere far away for at least a month.
------
rcoder
Improve the world: start a charitable foundation; support amazing candidates
for public office; invest in social services for developing nations.
Improve yourself: go back to school and get a degree in history, or music, or
art; travel to another continent and live somewhere new long enough to learn
the language; write a book, and read a lot of other peoples'.
Improve others: get your teaching certificate (usually <1 year of school) and
teach math to middle-school students; tutor kids at a local library or after-
school program in computers; adopt.
~~~
logjam
Admirable goals...none of which require we be rich...and many with which
riches will interfere.
~~~
gasull
For traveling around the world or working for a non-profit without a salary
you need financial independence.
------
strlen
Get a house in the Santa Cruz mountains large enough to hold a decent sized
library and a miniature lab/computer history museum, invest the money to get a
speedy Internet connection there. For the next few years spend the rest of the
time reading, writing code (especially code that can't immediately be
"something people want": design and write programming languages and OSes "just
for fun") doing photography (build out a proper studio and dark room) and
collect historical/esoteric hardware (e.g. PDP-10s, VAXen).
Then, in a few years, attempt another start-up (but without risking my entire
wealth on it - won't put more than $100k into it) and if that fails, take a job
(without regard to pay or seniority of position) at a research lab /
university / technology company I believe in.
------
ratsbane
I'd go back to school and study bioinformatics. I've seen too many friends and
family suffer slow deaths from cancer, Parkinson's, and so forth, and I think
the solutions to those problems are only a matter of time. To be even a small
part of that solution would be the most satisfying thing I can imagine.
~~~
antiform
If it's so important to you personally, what is stopping you from going after
it now?
~~~
etal
Doing the legwork to get back to school and lining your life up with the
admissions schedule can take a year or more of planning before it's safe to
quit your day job. It helps if other people you know are doing the same thing
at the same time. But one thing it doesn't require is getting rich beforehand.
------
noonespecial
Honestly, keep it a secret. Live like I've been living and worry _much_ less.
I've found (by other people I know actually getting rich) that there's really
nothing you can say or do with your money where you won't come off as a
condescending prick to _someone_ who knew you and imagines that they are in
some small (or not so small) part responsible for your current success.
Failing the "keep it a secret" part, Warren Buffet's life suggests that just
implementing the second part will go a long way in itself.
~~~
jimbokun
Sounds like this is the approach the funeral director with $250MM in IBM stock
took.
------
kirubakaran
1\. I'll spend time learning well. My current disjointed knowledge frustrates
me. [math, physics and computer science]
2\. I'll work on creating beautiful, useful, innovative products.
3\. Backpacking, flying paper planes, watching cartoons, video games, reading
fiction by the fireplace, writing, etc
4\. I'll fund cool projects using the YC model.
[I do these things now on a smaller scale. If I'm rich, I'll be able to do
much more of these, which would be awesome.]
------
jodrellblank
Grow my hair long (again) and get someone good to make it look stylish and
neat. Buy some nice clothes, such that I don't have to worry about not having
any sense of clothes.
No office dress code, woohoo.
Do some startup funding on useful wearable keyboards, better software for
small helpdesks/teams (I have plenty of ideas), better software/user
interfaces for virtual desktops, but with grouped applications by user task.
Inject self replicating nanobots under all of Africa, acting as a synthetic
water table, pumping, purifying, filtering seawater from the coast and
bringing it inland, then piping it up to the surface.
Secretly hire a contracting company to rip out the traffic lights at a nearby
road island and replace them with my new design. Keep stats and, when
convincing enough, announce the change to all and pressure to get all traffic
lights in the country moved over.
Bring some ultra-fantastic business internet access to this area for a
reasonable cost, spam the town with wifi and build a couple of good
datacenters around, buy various shop facilities around the town and
surreptitiously start selling the right kinds of interesting things, buy a
couple of central indoor spaces and run barcamp/unconvention/XYZcons
regularly. Try to seed and cultivate a high tech, hacker friendly yet quirky
and personal friendly cafe style environment around here. Or maybe in another
country with better weather. Or maybe both.
Daydream, in other words, while feeling awkward, guilty and undeserving.
~~~
t0pj
You're in Dallas, too?
------
wschroter
It's true about those who are endlessly fueled by creating something - that
doesn't change. You might not do it for money, but I'm not sure that really
matters.
I sold my first company in 1997 and am launching my 10th in January. It never
gets old.
~~~
sown
Wow! Where do you get ideas from?
------
jws
Finding myself in a similar situation, my answer is... "I don't know, I've
spent all my attention getting this done, I'll work on what comes next now."
Most people give me the eye like "Ah ha! He has some plan that he isn't ready
to divulge.", but really. I don't know. I'm working on it now.
------
brk
Do it again.
Once is luck. Twice is skill.
~~~
sebg
brilliant. reminds me of the maxim: if you are the type of person who would
stop working once you obtained a fortune, then you are never going to get a
fortune...
~~~
modoc
Define work?
Hopefully I'll get to the point where I can live comfortably off the wealth
generated by work I've already completed. If I do, at that point, I won't stop
using the computer. I won't stop writing software. I will stop setting the
alarm clock though, and I'll stop saying "I have to work" to my wife when I'd
rather be spending time with her, and I'll stop working with hard deadlines.
I won't sit on a beach all day, but a few hours a day would be very nice!
~~~
sebg
I work in finance (trading) so from my perspective, I have seen many people
come into this field wanting to "get rich." They soon find that the most
successful investors/traders are not those who want to get rich, but rather
love financial markets. So I am in complete agreement with you. The
equivalent, I think, would be me joining a web app startup to "get rich"
because I want to be "rich", rather than I am passionate about exploring the
problem space that I will solve with the web app.
As a side note, the reason this is always the first site I visit when I am on
the internet is that everybody is exploring their chosen problem space
passionately for the fun/excitement of it.
------
critke
Goof off for a while. Then do it again. Because goofing off is only fun for a
while.
------
raju
I don't work for a startup, and I don't own one - But to answer your question
-
To me being rich is not about money, it's about being able to do what I want to
do, when I want to do it. Money only buys me freedom, and that's it.
Having said that, I would sit back, kick off my shoes for a couple of months,
and spend time introspecting. What's my purpose? What do I want out of life?
Then go out and pursue that.
Spending time on a quiet beach sounds great, but gets old quickly.
And yes, 2 chicks :D
------
edw519
2 chicks
~~~
tjic
That's it? If you had a million dollars, you'd do two chicks at the same time?
~~~
edw519
Damn straight, man. I've always wanted to do that. I figure if I were a
millionaire, I could hook that up. Chicks dig guys with money.
<http://www.imsdb.com/scripts/Office-Space.html>
~~~
strlen
Well not all chicks...
~~~
PStamatiou
Well, the type that double up on a guy like me do.
~~~
strlen
Good point.
~~~
tlrobinson
_[insert comment about how Hacker News is becoming Reddit]_
~~~
edw519
_[delete comment about inserting comments]_
------
dmoney
Not that we need another Office Space reference, but I would do nothing. For a
while, I would drift and do nothing in beautiful and interesting places.
When I'd done enough of nothing, I'd create and record some music. Rock,
techno, maybe a hybrid of both.
Then I'd try and rediscover the joy of programming, now that I didn't have to
do it for money.
I don't think I have to be rich to do all of this (though that might get me
nicer accommodations, better guitars, and more impressive equipment), but
haven't yet figured out the details of how to do it from my own economic
situation. Getting out of debt is probably a good starting point.
------
Prrometheus
I'd be like the paypal guys and fund a lot of crazy/awesome/futuristic
projects (The Seasteading Institute, Space X, Tesla Motors).
Then I'd fly to the moon in a craft that costs less than $100 million and
paint a big fat logo up there.
~~~
owkaye
"Then I'd fly to the moon ... and paint a big fat logo up there."
Or you could get Hancock to do it for you ... :)
------
ercowo
spend time with my wife. walk the dog. go for hikes / ski / see the outdoors.
read books on philosophy. generally, lead a life of quiet contemplation
~~~
rfurmani
> spend time with my wife
Trying to off-load your wife on other people? tsk, tsk
------
pstinnett
Depending on HOW wealthy I am:
\- Clean up all friends/family student loans and credit
\- Build a new house, buy anything I want
\- Start a college fund for my nephew
\- Do it again
~~~
ph0rque
In your place, I'd make college funds for nephews (or anyone for that matter)
obsolete, by making education really inexpensive if not free.
~~~
pstinnett
Depending on the amount of $ I have, I agree with this. If I was only filthy
rich, family education would come first. If I was stupidly/insanely/infinitely
rich, everyone would benefit from free education.
~~~
ph0rque
I spent ~1.5 years working on an app that would do just that. I've since put
the project on hold: the problem is just so huge that it's very hard to
whittle it down to "just" an app that can be made within a reasonable amount
of time, and have it be useful.
------
iigs
1) Kick myself for not selling six months ago when valuations would have been
stupidly higher (relatively speaking). :) I think this is the grown up version
of "I'd wish for a million wishes!"
2) If the money is larger than just being comfortably set, I'd split it into
two pieces -- one being a charitable giving platform modelled after the Bill
and Melinda Gates foundation (
[http://www.gatesfoundation.org/grantseeker/Pages/foundation-...](http://www.gatesfoundation.org/grantseeker/Pages/foundation-
grant-making-priorities.aspx) ).
3) The other half into a fund for playing / investing / geeking. I think once
you get to the point that you have enough money that you don't need to go into
work any more the next horizon is to make other people successful like you
(looking at YC, here)
On the other hand, if it was enough less that this wasn't really doable, I'd
at least structure the money in such a way that I couldn't blow it
immediately. I understand that it's too common for lottery winners to become
depressed and possibly even suicidal after mismanaging their money. It would
be important to cover this before I started spending money on things that I
would think I wanted (Porsche 911 turbo comes to mind immediately).
~~~
bemmu
Wouldn't it be easier to just donate it to the Bill and Melinda Gates
foundation? ;)
~~~
iigs
It would, and it would arguably be more efficient, too. I'd consider it,
depending on the overlap of projects they undertake and the projects I'd
prefer to support.
------
cdibona
If you have kids and a wife, hang out with them. You owe them, as likely as
not.
------
thomasmallen
Maybe I'd found a zoo.
~~~
kirubakaran
May I ask why? I suppose the answer is "why not?", isn't it? :-)
~~~
thomasmallen
Because I care a great deal about conservation but am unwilling to put in the
many years of education required to become a zoologist. I think that I could
run a zoo effectively and compassionately, showing the public these majestic,
exotic animals that are at such great risk while doing my best to return their
species to viability. Isn't that what zoos are all about?
~~~
kirubakaran
Cool! My best wishes to you.
------
JabavuAdams
Duty: * Make sure that if I do nothing else, and barring a collapse of
society, I don't have to work for anyone else again. * Make sure my wife can
stop working for others, too. * Make sure my daughter can afford to go
anywhere for higher-education. If she doesn't want to go to school, provide
her the ability to try startup ideas. * Make sure my parents and parents-in-
law are looked after. * Keep my relatives provisioned with computers, and buy
my Dad copies of Mathematica and Matlab. * If I have sufficient funds, make
sure my relatives can study anything/anywhere they want, as long as their
marks are sufficient. * Prepare to leave some money to various causes.
Fun: * Actually spend the summer playing tennis, instead of in crunch mode. *
Take my parents to Wimbledon, the French Open, the US Open, and the Australian
Open. * Re-activate my Scuba licence, and spend some time diving. * Go sky-
diving * Go on a "shooting range tour" where I fire all those guns you only
see in games and movies, and to some kind of CQB training. * Learn to fly a
plane * Learn to fly a helicopter * Re-learn how to ski and spend more time
doing that. * Spend more time playing piano. Maybe learn the cello, the
violin, or guitar. * Spend some time in extremely remote locations, like the
high-seas, in mountains, or underground.
Non-Commercial Research: * Read even more. Make sure I've read the classics. *
Spend a few years really learning Physics, at least up to EM with relativistic
effects, QM, and GR. * Learn cell and molecular biology fundamentals. *
Research Superconductivity. * Research computer vision, motion control,
learning, and memory. * Research and implement a lifestyle that maximizes my
own cognitive abilities, even if it's eccentric. * Re-learn French and German.
Learn Mandarin and Japanese. * Research and implement programming languages. *
Research meta-programming.
Make: * Build a legged tele-operated or autonomous robot. * Build a submarine
* Start or contribute to open-source projects in games, simulation, and
education. * Start a game studio. * Become a competent illustrator * Make an
animated short. * Make a smart sci-fi film * Build a virtual CAVE.
... you get the idea ...
~~~
JabavuAdams
... learn how to format HN posts ...
~~~
jlsonline
lol at least you can do that one for free...
Having said that, I saved some money and took two years off to pursue all of
my life-interests (mine, in fact, are quite similar to yours) but when it came
down to it, I surfed the web, played games, did a small amount of programming
and just dawdled around all day for the majority of my time off. Then I went
back to work.
To me it comes down to one thing: purpose. If you have no specific purpose,
you just get caught-up in day-to-day living. The main thing I learned in my
two years off is that I can accomplish 99% of what I want to do with my life
whether I have a job or not.
------
ivankirigin
"never have to work"?
That doesn't mean anything to me. I want to build things. And some things I
want to build cost lots of money.
Robots, micronations, space elevators...
I'd probably buy a good home theatre system, with giantrobotlasers
------
maurycy
To be rich means a lot of things. You can be called rich if you have either
$10M or $10bn in the bank.
Let's say, realistically, that I have $10M.
First of all I'd cut all unnecessary costs, rent a smaller flat and spend a
year thinking about my life, studying something just to clear my mind,
collecting startup ideas. It is important not to freak out.
Then, I can either go back to the university or start another company. I think
it depends on my thoughts during this gap year.
------
look_lookatme
Live between Mexico, LA and NYC. Spend my days dallying.
Share the wealth with my family.
------
schtog
1\. Pay off everything for me and my closest family. 2\. Buy house on Hawaii
and in Whistler. 3\. Batmobile. 4\. Try to solve a big, hard problem that I
didn't have the means to before. Perhaps not one as "pie in the sky" as "make
solar power work" or "cure cancer" but something along those lines.
------
stcredzero
Philip Greenspun has some thoughts:
<http://philip.greenspun.com/non-profit/>
<http://philip.greenspun.com/materialism/early-retirement/>
~~~
petesmithy
"Most of Europe is simply too crowded and expensive to be attractive to the
average North American."
Who is this Greenspun fella?!
~~~
gcheong
It's true. On my last trip to Romania, Austria and France I didn't run into
any average Americans.
~~~
stcredzero
Makes those places sound a lot more attractive!
------
turtle7
I run to keep in shape. I would spend some of my extra time making sure I was
training to my potential rather than just enough.
I would build a small workshop and/or apprentice with someone to learn how to
do some modest woodworking. It appeals to me as a nice hobby, but one which
time and resources do not allow me to pursue at this point.
I would practice guitar more often/seriously.
I would read more often, and spend more time in quiet reflection.
I would continue to develop applications, as I love many aspects of it.
I would take a vacation per quarter rather than a vacation per year.
Basically, commit more time to improving myself and my skills rather than
committing more time towards improving my financial condition.
------
MaysonL
Join Esther Dyson in funding Fluidinfo.
~~~
gruseom
Your comment caught my attention (I studied logic with Esther's mother years
ago) so I looked up Fluidinfo. This is impressive stuff, and Terry Jones is a
singularly impressive, engaging, understated guy. After browsing his blog for
a while and watching part of some videos, I'm sure I want to see more.
I'm curious how you found out about this, and what you think of it?
~~~
MaysonL
<http://news.ycombinator.com/item?id=386811>
See my comment there: <http://news.ycombinator.com/item?id=388401>
------
cmos
Expand the basement laboratory and revolutionize the world. Again.
------
adrianh
Become a professional musician, and hack on things on the side.
------
teehee
Aside from the usual self-enrichment activities, I would set up a foundation
that provides many small, unique services to a community. Examples: go around
to different communities and offer free construction to link up sidewalks and
make them more pedestrian-friendly. Consulting and financing for small shoppes
so they can stay unique and viable against large competitors. Set up free
clinics that solely cater to helping people sleep more restfully.
------
petercooper
Move to the US. I wouldn't need to be super rich to do this - just have $300k
or so in liquid "doesn't matter if I lose it" funds. Yes, the US's immigration
rules are tricky ;-)
But then? Pursue all of my crazy number of hobbies to excess. Attend Stanford
or a similar institution (for the love of learning, not for a specific goal).
Be able to let my wife find the job she wants rather than the one she needs.
And so forth.
------
swombat
Examine my life with more attention.
(see <http://www.quotationspage.com/quote/24198.html> )
------
raleec
Let's quantify this: what number makes you rich?
~~~
jjs
Definitely one. If your bank account is full of zeroes, then you're not rich,
but with a lot of ones, you are (as long as none of them is a sign bit....)
------
DanielBMarkham
Find the hardest problem that intrigued me personally and start working to
solve it. Hire some top-drawer help to have some great minds to bounce ideas
around. Keep it simple and focus on bang-for-buck.
Heck. I guess I'd do the same thing as now, except I'd have more resources to
do it with.
------
aaronblohowiak
Mega mega rich: I would create an electromagnetic cannon to help us escape
earth more cheaply and rapidly, and invest in technologies to help us colonize
other planets.
Leisure rich: I would study martial arts, get personal training every day, and
work on projects with my friends.
~~~
Psyonic
You'll never get mega mega rich if you go ahead with your leisure rich plan
(assuming you make it there). Which is more important?
------
code_devil
invest your money in social causes, and if possible your time as well. OR
spend more time working on what made you rich in the first place, so you can
give even more to social causes. BUT do take time out to relax and enjoy life
with loved ones as well.
------
ashleyw
• Move to the SF area
• Attend loads of cool conferences
• Splash out on some new gear — a new Mac for starters
• Be able to spend a lot more time on personal projects
• Start a new startup; it's not work if you enjoy doing it. ;)
~~~
nailer
• Buy more fonts
~~~
yters
Wow, never seen that before.
------
mrtron
I would keep enough cash so I wouldn't have to work for someone again
(including investors).
Then I would continue doing things similar to now, but with no business model
in mind. I would also travel more.
Sadly(?) I think that's all that would change.
------
jamiequint
6-12 months of adventure traveling, lots of reading, and spending time with
people I don't get to see often enough. Buy a fast car and move to a little
nicer place (with a bigger garage). Then start another company.
------
asmosoinio
Do: snowboard and hike on awesome mountains, kiteboard on the nicest beaches,
learn both sports really well, spend more time with family & friends.
Not do: spend time indoors staring at a computer screen.
------
bestes
Become an angel investor. Help startups grow. Share knowledge. Provide relief
from those who don't know or understand how to really make a startup work.
And, ideally, make more money.
------
mooneater
Time to change the world.
~~~
rksprst
Isn't that what the startup is for?
------
paraschopra
I would do 30 by 30 :)
<http://www.paraschopra.com/blog/personal/30-by-30.htm>
------
cperciva
Write the world's most secure cryptographic library.
~~~
yan
As a personal achievement or as a product?
~~~
cperciva
It would be free and open source -- whether that counts as a product depends
on your perspective, I guess.
------
kirubakaran
<http://news.ycombinator.com/item?id=79057> for reference.
------
subpixel
Keep it a secret, and try to make a difference: <http://is.gd/bPVr>
------
ig1
Non-profit work, my two main ideas are an mturk for charity work and a decent
career guidance website.
------
gcheong
Travel, travel some more then plan my next vacation. Try and qualify for an
Olympic archery team.
------
kingnothing
I'm chiming in a bit late here, but I would probably go to medical school.
------
brianobush
spend _more_ time with my kids: hiking, skiing, building robots, volunteering
with school. Then do it again, except this time I want to control the money
AND the math.
------
jcapote
Besides 2 chicks? Start my own capital management firm
------
jmtame
Do another startup.
~~~
siong1987
I will continue working on it for the acquirer until the acquirer can actually
work it out without me.
------
okeumeni
No more need of VC to do the things I want to do.
------
trey
hoard my money and not share a red cent
------
mdolon
Share the wealth! (cough)
------
time_management
Crank out a novel. Then try another startup.
------
albertcardona
You've got to be kidding me. What _not_ to do? But at least be sure to keep
some cushion of money somewhere, perhaps properly, securely invested.
Ask HN: Git Hooks, who uses them and for what purpose? - mroche
https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
Decided to learn more about them the other day as they seem interesting. In a quick search here for past discussions it seems this feature is used in a variety of ways, and more so than I had anticipated. For those of you that use git hooks, how do you implement/enforce their usage for projects, and what are you doing with them?
======
alexjm
A couple years ago, I was designing an XML schema. The document was a mix of
the formal schema and English prose explaining each element. I used a pre-
commit hook that extracted the RelaxNG schema and confirmed that it was well-
formed, which helped me avoid committing invalid versions of the document.
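For anyone curious what that looks like in practice, here is a minimal sketch of such a pre-commit hook. It is not the actual script: it only checks well-formedness of staged .xml/.rng files with Python's standard library (full RelaxNG validation would need an external validator). It goes in .git/hooks/pre-commit and must be executable:

    #!/usr/bin/env python3
    # Minimal pre-commit sketch: abort the commit if any staged .xml/.rng file
    # is not well-formed XML (well-formedness only, not RelaxNG validation).
    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    def staged_paths():
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [p for p in out.splitlines() if p.endswith((".xml", ".rng"))]

    def staged_blob(path):
        # ":<path>" names the staged (index) version of the file.
        return subprocess.run(
            ["git", "show", ":%s" % path], capture_output=True, check=True
        ).stdout

    errors = []
    for path in staged_paths():
        try:
            ET.fromstring(staged_blob(path))
        except ET.ParseError as err:
            errors.append("%s: %s" % (path, err))

    if errors:
        print("Commit rejected, malformed XML:")
        print("\n".join(errors))
        sys.exit(1)  # any non-zero exit aborts the commit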
On the server side, I have a post-receive hook in the repo for a website. When
I push to 'develop' it runs the static generation script and deploys to my
local server as a preview. When I push to 'master' it builds and pushes to
production.
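The server-side part can be a very small script too. Below is a rough sketch of the branch-dispatch idea only: git feeds post-receive one "old new ref" line per updated ref on stdin, and the build commands here are placeholders (in a bare repo you would normally check the ref out into a work tree, e.g. via GIT_WORK_TREE, before building):

    #!/usr/bin/env python3
    # Minimal post-receive sketch. Git feeds one "<old-sha> <new-sha> <refname>"
    # line per updated ref on stdin; dispatch a build/deploy step per branch.
    # The build commands are placeholders, not a real deployment setup.
    import subprocess
    import sys

    DEPLOY = {
        "refs/heads/develop": ["./build.sh", "--target", "preview"],
        "refs/heads/master": ["./build.sh", "--target", "production"],
    }

    for line in sys.stdin:
        old, new, ref = line.split()
        cmd = DEPLOY.get(ref)
        if cmd:
            print("post-receive: %s updated, running %s" % (ref, " ".join(cmd)))
            subprocess.run(cmd, check=True)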
------
smt88
We use them to reformat code before it hits any shared branch, and we also
reject commits that fail linting (unless they're going into a hotfix branch).
Formatting all code using the same strict settings is _hugely_ helpful when
looking at change history. It also frees developers from ever thinking about
formatting, which takes up many more brain cycles than people realize.
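A minimal version of that kind of check is short enough to paste here. The tools below (black and flake8) are only stand-ins since the comment doesn't say what's actually used, and this sketch checks the working-tree copies rather than the staged blobs, which is a common simplification:

    #!/usr/bin/env python3
    # Minimal pre-commit sketch: block the commit if staged Python files are
    # unformatted (black --check) or fail linting (flake8). The tool names are
    # examples only; substitute your team's formatter/linter.
    import subprocess
    import sys

    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    py_files = [p for p in staged if p.endswith(".py")]

    if py_files:
        fmt = subprocess.run(["black", "--check"] + py_files)
        lint = subprocess.run(["flake8"] + py_files)
        if fmt.returncode or lint.returncode:
            print("pre-commit: fix formatting/lint errors before committing.")
            sys.exit(1)

Since .git/hooks isn't versioned, teams typically commit the scripts somewhere inside the repo and set core.hooksPath to that directory (or use a hook manager) so the same checks run for everyone, with a server-side check in CI as the real gate.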
------
auslegung
I use a prepare-commit-msg hook to enforce a commit template on myself that
includes the GitHub link to the issue. We also use them as a team to format code.
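For reference, a bare-bones sketch of that idea, assuming a branch-naming convention that starts with the issue number and using a placeholder repository URL (both assumptions, since the comment doesn't specify them):

    #!/usr/bin/env python3
    # Minimal prepare-commit-msg sketch: if the branch name starts with an
    # issue number (e.g. "123-fix-login"), append a link to that issue to the
    # commit message. REPO_URL and the branch convention are assumptions.
    import re
    import subprocess
    import sys

    REPO_URL = "https://github.com/example/example"  # placeholder

    msg_file = sys.argv[1]  # git passes the path to the message file as $1
    branch = subprocess.run(
        ["git", "symbolic-ref", "--short", "HEAD"],
        capture_output=True, text=True,
    ).stdout.strip()

    match = re.match(r"(\d+)", branch)
    if match:
        with open(msg_file, "a") as f:
            f.write("\n\nIssue: %s/issues/%s\n" % (REPO_URL, match.group(1)))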
Limitations and pitfalls of the job interview - galfarragem
https://fs.blog/2020/07/job-interviews/
======
simonw
If you want a really big competitive advantage, figure out how to hire great
people who don't interview well.
~~~
edoceo
I use a gig-to-hire process for this. It works very well. I've been meaning to
write more about it, but I also get income for helping with hiring around this
model, so I'm a bit conflicted. The whole answer is long and complex. Basically:
give the candidate a proper task on your project and evaluate them based on that
work - the "interview" is less than an hour, the "work" is longer (many hours,
and you pay for it) - and it provides a solid evaluation across their skill set.
~~~
fredophile
That will work for some people but not everyone. At my current job I can't
monetize work I do outside the company without approval. That means I can't go
through your application process unless I quit first. Even if I were allowed to
do this I don't think I'd want to. The time commitment of an extra project on
top of my normal workload is not something I want.
~~~
nawgszy
Well, then you can just work out a deal to not monetize it.
Your objection is of course valid - extra workload on top of a normal job
probably has its limits. But the fact one is willing to pay those who are
willing to be paid for such an interview doesn't suddenly make it a strict
requirement ha.
I must say though, I also view the interview standard I've faced - quick phone
screen, 1h tech screen, 4-6h onsite (probably online now) - as adding up to a
lot of time very quickly, especially as the "onsite" time is of course awkward
when it means you take PTO to go to an interview. In this sense, I'd probably
rather have an ~8-12h coding task than a 4-6h onsite. Maybe even a higher
factor
~~~
scarface74
It’s one day for the tech screen and one day for the on-site. Of course it’s
more if you have to travel.
------
stove
There's a growing rift in software between employers saying "there's a talent
shortage" and a rapidly growing population of devs who feel like they're
locked out due to the technical interview process.
Many of the engineers not being hired are recent bootcamp grads but there are
also tons of CS majors that can't seem to "crack" the interview process.
Part of my job is helping companies "fix" their hiring and one of the ideas
that I've been putting forth for years that's slowly gaining steam is
developing a "technical apprentice" role. This role would be responsible for
tasks that are frequently de-prioritized like documentation, testing, QA, bug
fixes, note taking, etc. and would be a foot in the door for entry-level
engineers. The role is designed to focus on communication and soft-skills
while also giving the person a chance to prove their "grit" on the technical
side. Even a few months in an apprenticeship role is generally enough for
companies to "take a chance" on someone as an entry-level engineer.
This has been a great way to shift interviews away from algorithms and more
towards finding people who can add immense value to technical teams even without
having on-the-job programming experience.
I'm curious what the HN crowd thinks about that role as a way to bridge the
hiring gap.
~~~
mac01021
This sounds like what most internships are intended to be?
How much does it cost to employ one of these apprentices?
~~~
stove
Internships are generally thought of as something you do during college
(summers or otherwise). Most CS grads would scoff at getting an "internship"
after graduating. Internships are also very structured and generally involve
working on a specific project within a technical team (I know they're all
different).
The apprentice role, the way I've been pitching it at least, is different.
This is a role where you join an engineering org and learn the product by
QAing it, join technical discussions and help out by taking notes for the
team, show off your communication skills by documenting new features and, big
picture, you find ways to add value to the team in whatever ways they need.
Over time, the bugs fixed get bigger and the person can bite off small
features, etc.
The problem is that companies _want_ to hire new grads (even bootcamp grads)
but don't feel comfortable paying SWE rates for someone who hasn't worked as a
SWE (often rightfully so). The comp for this is equivalent to a QA eng but has
a clear path towards being an entry-level SWE (3-6 months maximum). If after
3-6 months it's not clear if the person can add value as an engineer then it's
clearly not a good fit.
------
Zaheer
Related message for students / new-grads: Find an internship. Internship
interviews tend to be much easier as the stakes are lower. If you perform well
during your internship (arguably easier / more accurate indicator of success),
the company will likely extend an offer. At big companies I've seen internship
to offer rates exceed 30%.
~~~
TrackerFF
Unfortunately, this has two sides.
I've noticed that in banking - and I'm not talking about investment banking,
or high-finance in general, but regular consumer banking - there's been a
trend to basically hire woefully overqualified people for the lowest positions
around - I'm talking about bank greeters, customer support, and what not, and
then train them from there.
This could very well be a local thing, but there are so many people today
qualifying for these jobs - lots of BBA and MBA candidates out there, willing
to do pretty much anything to get a foot inside.
When I was interning for a bank, even the greeters (basically the person that
just greets the clients, and forwards them to the right people within the bank
- a receptionist, really) had a Bachelors degree, many were working on their
Masters.
The more sought after positions had been re-labeled "graduate programs" or
"trainee programs", and were aimed at the top-shelf students. While the rest
pretty much had to get a foot inside by working their way up from the bottom.
So you suddenly have a ton of highly educated candidates applying for jobs
that only 15 years ago required a HS diploma, if that even.
Then when they're first inside, they tend to get moved around internally - as
a lot of positions only get posted internally.
It's almost the same way with internships. Internships are there to
practically train and select future company workers. If you do well, you get a
return offer - if not, well, at least you have some experience.
With the rising number of graduates, I can foresee a future where candidates
are being divided into the regulars and the elites. The regulars will, no
mater how qualified they are, will have to start at the rock bottom, proving
themselves for $8.5/hr, while the elites are trained for
leadership/management-track positions.
~~~
908B64B197
That should be a signal that there's an oversupply of whatever Bachelor's
degrees these employees were holding.
It's the same thing in law, the best advice to give to an aspiring lawyer is
to go have a chat with a practicing lawyer at a non-elite firm.
------
stepstop
Wow, that headline is a stick in the mud on a nuanced topic.
> What’s your biggest weakness? Where do you see yourself in five years? Why
> do you want this job? Why are you leaving your current job?
I do a lot of tech & business interviews, and I don't ask those questions
(unless they've recently left A LOT of jobs). I ask situational questions to
understand how they think, who they talk to, what research they did to
understand the problem, and the solution (was it simple?). If they tell me
they built a Rube Goldberg machine, I ask if they would have done anything
different with hindsight. If they don't realize they built a Rube Goldberg
machine, well, perhaps they won't be a good fit.
I look for people who can solve problems, do their own research, ask for help
when they get stuck, aren't afraid to attempt solutions (many that will
knowingly fail), and have the introspection to identify failures and admit
them; these are generally people you want to hire for senior positions.
Now I admit that it's a lot harder to hire this way for junior positions, when
they have less examples, less job history, etc. Educational projects are a
substitute, as well as working on personal projects.
~~~
dang
Ok, we'll replace it with some of the mud in the headline above.
------
grugagag
It is true, and not that I am trying to be dishonest myself, but I am not the
same person while taking an interview. It's sad that we can't be honest or
humble, but in this system we have to sell ourselves: fake enthusiasm for the
hiring company is a must, and we do whatever it takes to pass the interview,
then think later about whether we take the position or not. If not, somebody
else with the same capability or less will snatch that job. Plus, interviews,
like tests, are gameable.
In an ideal world a trial period would ensure both the employees and employers
are a fit. But that could be abused as well if it becomes the norm.
~~~
Afton
Someone always suggests this. The issue is that if you did this, 95% of the
candidates that would agree to this kind of setup would be the kind you didn't
want.
If I'm sitting on 2 offers, one is a hire and one is a "Let's see how this
works out after 2 weeks of work", I'm going to take the first one. And that
says nothing for the necessary benefits question in the states, where changing
jobs often involves expensive (in money or time) changes to health care
insurance.
~~~
grugagag
We don't know for sure and it's hard to know how this works. What's obvious is
that the current system is broken and we need a replacement of some sort.
Being able to try a company and see whether they like me and whether I like
them and the type of projects I am supposed to work on would be a major factor in
finding the right marriage. We take jobs for the salary tag and quite often we
do whatever we have to do to continue getting that nice paycheck but we're not
happy with the work we do.
~~~
Afton
For sure, it's an empirical question what %age of people who would accept this
kind of offer are "I have no other choice, so I accept your offer" or "I want
to make sure I will _actually_ want to work at your company".
I'm just saying that I would 100% not do this unless I had no other choice, or
was independently wealthy.
------
jaaron
I'm wondering if much of the discussion here is even about the article, which
advocates for things we have long known work better:
* Structured interviews
* Blind auditions
* Competency-related evaluations
The title "Job interviews don't work" is rather bait-clicky when clearly they
advocate that _some_ form of job interviews work. Or are at least better.
As a technical hiring manager for over a decade, here's where I'm at:
\- The best interview is an internship. We can't always do that and often we
need senior talent _now_.
\- The next best interview would be a portfolio. I am _so envious_ of artists
with their public portfolios. If there's one thing I wish we as an industry
could figure out, it would be some way to stop testing and retesting
ourselves as if we have to constantly reprove what we've already done, and
instead find a way to better showcase our work.
\- The next best technical interview would be a "homework" project, but I've
come around to the mentality that this just isn't fair to candidates. As a
hiring manager I love it, but most folks just don't have the time to do a
bunch of unpaid work. Even if you compensate them, it's unrealistic for many.
So we're mostly back to the suggestions in the article. They're good. A good
hiring process is _not easy_ but it's worth it.
And finally, a bit of anecdotal evidence: yes, there are folks out there you
probably shouldn't hire. They aren't a good fit for the role. You want to set
them and yourself up for success. That said, there are probably more people
who can excel than you realize. A major factor in their success is the
maturity of the team and leadership that's already in your company. Sometimes
you'll get lucky and hire some rare talent, but if all you're doing is looking
for "rare" talent, then you're likely poorly calibrated and relying too much
on outside talent to come in and fix the mess already on your hands.
~~~
RangerScience
> The next best interview would be a portfolio.
I tried using my (limited) open source hobby project portfolio as a substitute
for coding interviews. Companies either didn't take me up on it, or still also
required me to do their regular take-home. Twice now, I have had two companies
ask for the same take-home, although in the first case they asked me to re-do
the work in their preferred language.
~~~
chucky_z
FWIW, as a hiring manager, if someone has a portfolio I definitely judge them
on it, and if it's good it allows me to bypass huge swaths of technical/coding
interview stuff and dig much deeper into where/what I want. I always take it
as a positive, even if it's old stuff.
~~~
RangerScience
What do you find makes a portfolio better or worse for these purposes?
Not so much "more likely to get them the job", more... I felt like my projects
weren't actually suitable to take the place of coding interviews, largely
because I couldn't actually drop in and work on them in the way that coding
interviews show me actually doing work.
~~~
chucky_z
Literally anything. If I see someone with a lot of relevant forked repos, even
if they're old, I take that as interest and something I can bring up.
If I see a repo of rcfiles I know they care about working efficiently. If I
see abandoned stuff with more than 1 commit that's OK, that's something that
was cared about at one point. These are just two super generic examples.
Almost everything is a positive.
The _only_ thing I don't like to see is repos with 1 commit, and nothing other
than a README with the repo title in it. Not really negative, more of a 'cmon
gimme more.'
------
kanox
> What’s the best way to test if someone can do a particular job well? Get
> them to carry out tasks that are part of the job. See if they can do what
> they say they can do. It’s much harder for someone to lie and mislead an
> interviewer during actual work than during an interview. Using competency
> tests for a blinded interview process is also possible—interviewers could
> look at depersonalized test results to make unbiased judgments.
Why would a practical test be more effective than a discussion about prior
work experience? Short coding tests under pressure are extremely
unrepresentative of real work and I'm not sure homework-style interviews are
much better. I personally don't want to spend an entire weekend on your test
and I've dropped opportunities for that reason in the past.
All these "competency tests" are good for is catching blatant lying. Is this a
serious problem? In my view job interviews are mostly about finding a match
between skills/interests and project needs.
Communication skills are a very big advantage in interviewing, and that feels
unfair, but I'm not sure that's true. Communication and persuasion skills are
extremely useful and important to all office jobs, even the most technical.
I'd bet that people who interview well also write beautiful code comments and
commit messages.
~~~
hellcow
> All these "competency tests" are good for is catching blatant lying. Is this
> a serious problem?
Of the people I interview that claim they have years of experience, mastery
over multiple languages, and expertise in various frameworks, a solid 80% or
more can't pass fizz-buzz in any language they choose.
~~~
smichel17
Claims made on resume or in person? On the seeking side, I've felt pressure to
put every technology I've used even in passing on my resume, to satisfy
"buzzword bingo" and get through the initial screen, but in an interview I
would give a (truthful) answer along the lines of "I don't have a _ton_ of
experience, but I know the basics; enough to be confident that I can quickly
pick up whatever else I need to learn on the job."
~~~
hellcow
> Claims made on resume or in person?
Both. And it's not "putting every technology I've used even in passing on the
resume" that bothers me. It's "I don't know how to write a loop or a function
in any programming language."
------
mundo
I interview a lot of people and found much to complain about in this essay.
This is the main thing:
> The key is to decide in advance on a list of questions, specifically
> designed to test job-specific skills, then ask them to all the candidates.
> In a structured interview, everyone gets the same questions with the same
> wording, and the interviewer doesn’t improvise.
This is good advice, _if and only if_ you're interviewing an undifferentiated
group of applicants, as in the cited examples (college entrance and army
recruits). If you're hiring a QA III and you have three different applicants,
it's terrible advice. You need to ask about the candidate's specific
experience, and ask follow-ups.
More generally, I don't think the stated goal of an interview according to
this essay (peering in to the candidate's soul to suss out traits like
"responsibility" or "sociability") is possible or reasonable. My goal is more
modest - I just want to figure out whether you were good at your last job or
not. If you say you're responsible, I can't prove you right or wrong in a one
hour conversation. But if you say you're a whiz at Selenium UI automation, and
you're lying, I will figure it out pretty easily.
------
MisterTea
My boss tasked me with hiring my assistant. HR filtered most of them and I was
left with three applicants. So I ran it the way I'd like to be hired which was
skip the useless small talk and other painful BS and just bring the applicants
around the shop and show them what I did.
The first applicant seemed like his mother dressed him and reminded him to
breathe that morning. The second guy was pretty sharp but disinterested during
the walk and talk. The third applicant immediately stood out. He was excited
and fascinated by our systems and kept asking technical questions - winner.
Excellent co-worker until he moved on to greener pastures.
All that "wear your best suit" and "where do you see yourself in 5 years"
(best answer: prison) nonsense sounds like it was lifted from one of those
cheesy 1950s self-help shorts the MST3K crew routinely riffed on.
------
ironman1478
This is a great article. An undiscussed issue I've personally seen when
conducting interviews and being part of roundups is a lack of
self-understanding. Lots of people who conduct interviews think they are way
smarter than they are (me included!). They hold candidates to very high
standards and then dismiss them at the slightest mistake; however, many of
these people also make many mistakes on the job. When I give interviews I
always just ask the easiest and clearest questions I possibly can that still
try to be relevant to the job to minimize this bias.
------
kube-system
> A job interview is meant to be a quick snapshot to tell a company how a
> candidate would be at a job.
> Unstructured interviews can make sense for certain roles. The ability to
> give a good first impression and be charming matters for a salesperson. But
> not all roles need charm, and just because you don’t want to hang out with
> someone after an interview doesn’t mean they won’t be an amazing software
> engineer.
If that's the attitude someone has towards interviews, then no wonder they
draw the conclusion that they don't work.
The real issue is that most teams either don't give much attention to
interviewing (because they have their primary job to attend to), or too much
of the process is delegated/outsourced elsewhere (where people only
tangentially understand the work area, and/or have no deep knowledge of the
job role).
Lean on interviews for measuring soft skills, and lean on demonstrations
(portfolios, code tests, pseudo-code, problem solving, etc) for measuring hard
skills. Every job requires _some_ balance of hard and soft skills. If you use
the wrong tool for the evaluation, or if the person using the tool doesn't
know how to use it, you get the wrong result. Interviews have their place, but
technical evaluation is not it.
------
xelxebar
This hits home too hard. My job search is going so poorly that I have started
to doubt whether I have any technical value at all.
Confusingly, personal one-on-one interactions with companies or hiring
agencies almost invariably result in positive feedback and comments: "you are
extremely hirable," "you have a strong technical background," etc. However,
none of these interactions has gone anywhere. Either they "move forward with
someone else" or simply evaporate into thin air. A few have even evaporated
after extensive interviews and claims that they wished to hire.
Is positive-sounding feedback just a polite way of avoiding some "elephant in
the room" problem? Am I inadvertently projecting an image of ineptitude or
hostility?
I have over 20 years of experience on Linux, tinker and program as a hobby,
and also lightly contribute to open source projects. I believe I have what it
takes, but geez, sometimes this job search is just soul crushing. I just want
to offer my skills and talents---to be a valuable member on a good team.
</vulnerable-rant>
~~~
putsjoe
Hang in there. I have a friend who's in a similar position; she's interviewed
and been ghosted a few times, and it has really gotten her down. It can take
time but eventually a company will come to their senses and realise your
value.
------
eminence32
In the summer before my senior year at college, I did a 3 month internship at
a software development company. The interview for the internship was very
soft, partly because it was only an internship, and also because I knew
someone at the company who helped me get the position.
After I graduated, I applied for a full-time position there and have been
there for 10 years. The interview process for the full-time position was also
fairly soft and non-technical because I was hiring into a team that I worked
with during my internship. I like to describe it as a 3-month long interview
process. Not only did the company get to know me and what I was capable of,
but I got to know the company and its people (in order to make a decision
about if it was some place that I would like to work).
Surely this isn't scalable (internships are fairly rare, and generally are not
available to anyone except students or recent grads), but the whole internship
process worked out very well for me. It allowed me to bypass the traditional
tech interview, which is something I feel very fortunate about.
------
lasereyes136
Finding good people to work with is hard. Nothing you do will find 100% of the
good people (finding 50% of them is extremely good) and filter out 100% of the
bad people. Nothing you do will be effective for everyone. Trying to develop a
hiring strategy based on what you want interviews or hiring processes to be
like will be biased.
Accept that you will make mistakes. Accept that many of the good ones will get
away or be undiscovered. Accept that you will make hiring mistakes and have to
fix those.
If you are interviewing, do you really want to spend a few years of your life
at a place that does the bare minimum to vet you? They do that for everyone
and guess what kind of coworkers you are going to get. Sure taking PTO and
spending a day in an interview process is a lot of time. What is the
alternative? Not really being vetted and working with horrible people.
Finding the right person or the right company to work with is hard. Take the time to
be comfortable that it can work for both sides.
------
exabrial
Silicon Valley interviews are worthless. The algorithm pop-quiz is just a way
for the interviewer to beat his chest about some obscure facts and demonstrate
his superior knowledge to the interviewee.
Has anyone found a better process than casual conversation? I've found it
effective as long as engineers and non-engineers get a chance to participate.
Usually I talk about what they want, talk about what you need, talk about
expectations from both parties, talk about needs, and talk about past
experiences both good and bad. Once expectations are set, there's literally no
opportunity for a "bad hire", because if they don't live up, it's a simple
conversation to refer back to the expectations that were set and help them
achieve them, or, worst case, offer them severance.
~~~
the_jeremy
> because if they don't live up, it's a simple conversation to refer back to
> the expectations that are set and help them achieve , or worst-case, offer
> them severance.
That is a bad hire. You are describing a PIP and then firing someone that you
wasted time and resources recruiting, on-boarding, and training.
~~~
exabrial
That's not what I said... If a person doesn't feel they can meet the discussed
expectations, you wouldn't hire them in the first place.
------
bartread
This was potentially interesting but then:
> They are in no way the most effective means of deciding who to hire because
> they maximize the role of bias and minimize the role of evaluating
> competency.
I can believe that _badly_ planned and executed job interviews do the above
but I've overseen or been directly involved in the interviewing of hundreds of
candidates over the years, for dozens of roles, and the hit rate has been
pretty good. Two probation failures, and that's about it.
We're looking to assess skill and character in our interview process. We are
interested in whether you can do the job, and whether you're a reasonable
human being, and that's it. We have strong structures and guidelines in place
in terms of questions, answers, and evaluation. And inasmuch as it's possible
we strive to make our hiring process a pleasant experience, regardless of
whether a candidate is successful or not (obviously there's some level of
stress inherent in going through a selection process). We also give feedback
that we hope will help unsuccessful candidates in future (I realise this is
unusual and even frowned upon in some circles but our experience has been that
most people appreciate it enough that it's worthwhile to deal with the
headaches caused by the odd person who wants to argue about it).
I get it. There's a cohort of people on HN who don't like job interviews.
Honestly, I'm one of them. But done well, they work well.
Our process isn't perfect, and we're always looking for ways to improve it -
there was quite a lot of tweaking early on, for sure - but for us it's worked
well. We've spent a lot of time on it because - although my role is as a CTO
in a mid-sized firm, and this might not fit with everybody's expectations of
that role - literally my most important job has been and continues to be
hiring, building, and maintaining a strong, effective team.
And I am very happy with the people we've hired. Just as important, I'm also
happy with the decisions we've made about people we've chosen not to hire.
------
jokoon
The first time I ever heard the expression "social filter" was from Barack
Obama.
There are many things that a democracy can give to its citizens. But
apparently, it seems civilization doesn't want to give up social filters.
I can understand social filters when it comes to friendships, sex and intimate
relationship, but for jobs, I will never understand why they exist.
~~~
antisthenes
> I can understand social filters when it comes to friendships, sex and
> intimate relationship, but for jobs, I will never understand why they exist.
Because jobs are just as social as intimate relationships, if not more so. Up
to about the 2010s, the majority of marriages came from getting to know
someone at work.
Even if nothing intimate does come from work, it's still people you have to
see 8 hours a day for years on end. If someone is repulsive/annoying/toxic, it
WILL make you miserable at work, and people are very wary of disrupting a
well-oiled collective.
~~~
jokoon
> Because jobs are just as social as intimate relationships, if not more so.
For very small or family companies, maybe. In small communities, maybe, but
small communities could also include people from any horizon.
But in other cases, I disagree. Work and production is the blood of human
civilization. Generally, the free market ideology says that if you're
competent, it's the only relevant parameter. Social filters are arbitrary,
unnecessary and backwards.
> If someone is repulsive/annoying/toxic
The nazis sent people who were not desired to death camps. Today, those same
people are being excluded from society through social filters. You cannot have
a healthy society if you keep segregating people like this, even if it's not
race, but other traits like education, politics or behavior. You're advocating
social darwinism through a detour.
~~~
the_jeremy
If you don't exclude socially toxic people from your company, talented people
who don't want to work with that type of person will leave. I know multiple
people who have left or transferred because of difficult coworkers, and more
who have left because of difficult managers. If one toxic person can't
outperform all the talented people who will leave, it is in the company's best
interests not to hire that person.
Yes, you can technically call this social darwinism. If you are an irritating
person that no one wants to be around, you will be socially excluded. I don't
know of any movement that is going to champion your cause. There is a
difference between preventing profiling / prejudice and avoiding manipulative,
mean people.
~~~
jokoon
> There is a difference between preventing profiling / prejudice and avoiding
> manipulative, mean people.
I see your point, and yes, there's a difference. I still believe those people
can be worked with, in some way or another. Being too selective at the
workplace isn't a healthy way to run society.
It's true that there's a difference between discriminating them and avoiding
them, but the result and the intention are the same, in my view. Avoiding them
is just politically correct and acceptable, but the truth is, it's the same
process.
The nazis lost the war, but they won the battle of social darwinism.
> If you are an irritating person that no one wants to be around, you will be
> socially excluded.
That's not really what I'm talking about. And that's an appeal-to-nature fallacy. The
role of civilization has always been to fix the problems of nature. There are
many ways to interpret some socials signs as being "irritating". They're often
normative or arbitrary.
> I don't know of any movement that is going to champion your cause.
Socialism, provided it is democratic, or any form of progressivism, attempts
to address that cause. Europe is more progressive about this.
------
mathattack
This is very true. Sometimes it's hard to know whether to hire someone even
after a 10-week internship. Crazy to think a 30-minute interview is better.
------
manfredo
A rather click-baity headline. The article doesn't claim that job interviews
don't work so much as it claims that subjective job interviews are more
subject to bias than structured job-interviews. I'd say that's true, but the
caveat is that structured job interviews are more subject to people studying
and honing skills specific to the job interview process. Grinding leetcode
definitely makes you better at solving algorithm problems on a whiteboard
in 60 minutes but doesn't do much to improve working effectiveness.
I think there are 3 core trade-offs for the job interview process: logistical
feasibility, consistency, and resemblance to the actual work experience.
Doing 2 rounds of coding interviews, and one round of systems design from a
set list of questions with explicit rubrics is easy to implement and is very
consistent. But you can leetcode your way to knowing most coding question
archetypes. Systems design question archetypes are even smaller in problem
space. These questions have moderate to low resemblance to the actual work
experience. Sure, it can identify people who can't code or aren't experienced
in systems design. But does it show how well someone takes feedback, or
reviews other people's code? Not really.
One of my co-workers used to conduct 90 interviews that started with one
question: "how would you build a text editor?" He didn't specify whether this
text editor was WYSIWYG like Word, a web-based editor, a code editor, etc. It
expanded and touched on a whole variety of questions. It could be traditional
data structures, or UI design, or systems (e.g. implementing auto-saving text
fields on the web). This was low consistency, since the interview was mostly
unique to each candidate. It had moderate logistical feasibility since it was
hard to train interviewers on these open ended questions. But I think it had
better resemblance to the actual work experience, since it didn't just test
coding ability. It tested thinking through the problem and what the desired
end behavior for the user really was and navigating how those expectations
influence implementation.
An idea of an interview process that I have is to do it asynchronously through
github or another version control system. Give the candidate a task to open a
PR on a mock codebase. See how they implement the task and justify their
design decisions. How thoroughly they test. Respond to the PR with comments and
see how the candidate responds. And next, have the candidate review another
person's PR and see what they look for in a review. This potentially has even
better logistical feasibility since it's not dependent on the candidate and
employee being active at the same time. I think it would have the most direct
resemblance to actual work experience, since it's emulating the workflow most
developers actually use in their day to day work. Consistency may be difficult
to achieve, but if the evaluation were broken up into separately scored
segments, it could be made consistent.
~~~
jaaron
Nice analysis.
In my hiring process, we use a number of filters to gather the _data_ we're
looking for to make a decision. That requires a bunch of different steps. By
the time we're done, we've spent at least 8-10 hours talking with this person.
From a technical perspective, we do the following:
\- A short screen to go over the resume and ensure we've got a rough fit to
the right role.
\- A _simple_ consistent coding exercise using coderpad. (surprisingly some
fail)
\- A series of consistent open-ended questions we ask everyone about their
tech background, such as, what was one of the most difficult bugs you ever
fixed?
\- A set of consistent design/architecture problems: "given this design, what
problems do you see? How would you fix them?"
\- Another consistent, more involved coding exercise with an existing code
base that is VERY much related to the work they'll be doing.
\- A Q&A session on a wide range of technical topics. Goal is NOT for someone
to know everything, it's to get a bit of a map of their strengths and
weaknesses. We found we make assumptions of what someone knows based on our
background and their resume. We try to make this fun.
\- Another set of behavioral and situational questions with a shared scoring
rubric on values such as teamwork, collaboration, communication and
leadership.
And then we have to take all of that data and look at it holistically and
across a wide range of candidates. And even then we'll make mistakes, but we
keep trying to optimize it, reduce bias, and make it better for candidates and
us alike.
One last note: I like to finish my first interview with the question:
"Is there anything about yourself that you really want me to know that we
haven’t discussed?"
Because I know I've only had ~40 minutes to get to know this person. I have my
agenda of what I want to know, but I could easily miss a lot. So I want to
give them a chance to represent themselves in the broadest way possible.
------
k__
Never say no, but don't say yes unless you're sure about it.
You won't believe how often that will be good enough.
"Can you code in X?"
"I can program in many languages!"
Didn't answer the question, but was often enough for the interviewer.
~~~
commandlinefan
Yikes, I don't know about that. If somebody asked me if I could program in,
say, Perl, I'd say I knew what it was but that was about it. That's like
somebody asking if I can speak Chinese: I know what it sounds like, but no, I
can't.
------
29athrowaway
Does your interview process perform better than a coin flip? (50% chance of
making the correct decision). If the answer is yes, you have a useful process.
The Looming Battle Over AI Chips - poster123
https://www.barrons.com/articles/the-looming-battle-over-ai-chips-1524268924
======
oneshot908
If you're FB, GOOG, AAPL, AMZN, BIDU, etc, this strategy makes sense because
much like they have siloed data, they also have siloed computation graphs for
which they can lovingly design artisan transistors to make the perfect craft
ASIC. There's big money in this.
Or you can be like BIDU, buy 100K consumer GPUs, and put them in your
datacenter. In response, Jensen altered the CUDA 9.1 licensing agreement and
the EULA for Titan V such that going forward, you cannot deploy Titan V in a
datacenter for anything but mining cryptocurrency, and his company reserves
the right going forward to audit your use of their SW and HW at any time to
force compliance with whatever rules Jensen pulled out of his butt that day
after his morning weed. And that's a shame. Because there's no way any of
these companies can beat the !/$ of consumer GPUs and NVDA is lying out of its
a$$ to say you can't do HPC on them.
But beyond NVDA shenanigans, I think it's incredibly risky to second guess
those siloed computation graphs from the outside in the hopes of anything but
an acqui-hire for an internal effort. Things ended well for Nervana even if
their HW didn't ship in time, but when I see a 2018 company
([http://nearist.ai/k-nn-benchmarks-part-wikipedia](http://nearist.ai/k-nn-
benchmarks-part-wikipedia)) comparing their unavailable powerpoint processor
to GPUs from 2013, and then doubling down on doing so when someone rightly
points out how stupid that is, I see a beached fail whale in the making, not a
threat to NVDA's Deepopoly.
~~~
lostgame
FYI I think your comment is informative and I understood a lot of it, but
that's a shitton of acronyms for the uninitiated.
~~~
hueving
FB: Facebook
GOOG: Google
AAPL: Apple
AMZN: Amazon
BIDU: Baidu
ASIC: Application specific integrated circuit
GPU: Graphics processing unit
CUDA: Compute-unified device architecture
EULA: End-user license agreement
weed: marijuana
HPC: High-performance computing
NVDA: Nvidia
HW: hardware
------
Nokinside
Nvidia will almost certainly respond to this challenge with its own
specialized machine learning and inference chips. It's probably what Google,
Facebook and others hope; forcing Nvidia to work harder is enough for them.
Developing a new high performance microarchitecture for a GPU or CPU is a
complex task. A new clean-sheet architecture takes 5-7 years even for teams
that have been doing it constantly for decades at Intel, AMD, ARM or Nvidia.
This includes optimizing the design for the process technology, yield, etc.,
and integrating memory architectures. Then there are economies of scale and
price points.
Nvidia's Volta microarchitecture design started in 2013; launch was December
2017. AMD's Zen CPU architecture design started in 2012 and the CPU was out in
2017.
~~~
osteele
Google’s gen2 TPU was announced May 2017, and available in beta February 2018.
That 2018.02 date is probably the appropriate comparison to Volta’s 2017.12
and Zen’s 2017 dates.
EDIT: I’m trying to draw a comparison between the availability dates (and
where the companies are now), not the start of production (and their
development velocity). Including the announcement date was probably a red
herring.
~~~
Nokinside
I'm aware.
Making a chip and making a competitive chip are two different things.
When Nvidia enters the market with a specialized chip, it will likely be on a
completely different level in bandwidth, energy consumption and
price-per-flop performance. They have so much more experience with this.
* [https://drive.google.com/file/d/0Bx4hafXDDq2EMzRNcy1vSUxtcEk...](https://drive.google.com/file/d/0Bx4hafXDDq2EMzRNcy1vSUxtcEk/view)
* [https://blogs.nvidia.com/blog/2017/04/10/ai-drives-rise-acce...](https://blogs.nvidia.com/blog/2017/04/10/ai-drives-rise-accelerated-computing-datacenter/)
------
joe_the_user
_Nvidia, moreover, increasingly views its software for programming its chips,
called CUDA, as a kind of vast operating system that would span all of the
machine learning in the world, an operating system akin to what Microsoft
(MSFT) was in the old days of PCs._
Yeah, nVidia throwing its weight around by requiring that data centers pay
more to use cheap consumer gaming chips may turn out to backfire, and it
certainly has an abusive-monopoly flavor to it.
As I've researched the field, CUDA really does seem to provide considerable
value to the individual programmer. But making maneuvers of this sort may show
the limits of that sort of advantage.
[https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/](https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/)
~~~
oneshot908
It probably won't. For every oppressive move NVDA has made so far, there has
been a swarm of low-information technophobe MBA sorts who eat their
computational agitprop right up, some of them even fashion themselves as data
scientists. More likely, NVDA continues becoming the Oracle of AI that
everyone needs and everyone hates.
~~~
arca_vorago
So is OpenCL dead? Because that's how everyone is talking. The tools you
choose, and their licensing, matters!
~~~
keldaris
OpenCL isn't dead, if you write your code from scratch you can use it just
fine and match CUDA performance. In my experience, OpenCL has two basic
issues.
The first is the ecosystem. Nvidia went to great lengths to provide well
optimized libraries built on top of CUDA that supply things people care about
- deep learning stuff, dense as well as sparse linear algebra, etc. There's
nothing meaningfully competitive on the OpenCL side of things.
The second is user friendliness of the API and the implementations. OpenCL is
basically analogous to OpenGL in terms of design, it's a verbose annoying C
API with huge amounts of trivial boilerplate. By contrast, CUDA supports most
of the C++ convenience features relevant in this problem space, has decent
tools, IDE and debugger integration, etc.
Neither of these issues is necessarily a dealbreaker if you're willing to invest
the effort, but choosing OpenCL over CUDA requires prioritizing portability
over user friendliness, available libraries and tooling. As a consequence, not
many people choose OpenCL and the dominance of CUDA continues to grow.
Unfortunately, I don't see that changing in the near future.
------
deepnotderp
Do people think that nobody at nVidia has ever heard of specialized deep
learning processors?
1\. Volta GPUs already have little matmul cores, basically a bunch of little
TPUs.
2\. The graphics dedicated silicon is an extremely tiny portion of the die, a
trivial component (source: Bill Dally, nVidia chief scientist).
3\. Memory access power and performance is the bottleneck (even in the TPU
paper), and will only continue to get worse.
~~~
oneshot908
Never overestimate the intelligence of the decision makers at big bureaucratic
tech companies. Also, it is not in the best interest of any of them to be
reliant on NVDA or any other single vendor for any critical workload
whatsoever. Doubly not so for NVDA's mostly closed source and haphazardly
optimized libraries.
All that said, Bill Dally rocks, and NVDA is a hardened target. But the DL
frameworks have enormous performance holes once one stops running Resnet-152
and other popular benchmark graphs in the same way that 3DMark performance is
not necessarily representative of actual gaming performance unless NVDA took
it upon themselves to make it so.
And since DL is such a dynamic field (just like game engines), I expect this
situation to persist for a very, very long time.
~~~
Dibes
> Never overestimate the intelligence of the decision makers at big
> bureaucratic tech companies.
See Google and anything chat related after 2010
------
alienreborn
Non paywall link: [https://outline.com/FucjTm](https://outline.com/FucjTm)
------
etaioinshrdlu
It would be interesting to try to emulate a many-core CPU as a GPU program and
then run an OS on it.
This sounds like a dumb idea, and it probably is. But consider a few things:
* NVIDIA GPUs have exceptional memory bandwidth, and memory can be a slow resource on CPU based systems (perhaps limited by latency more than bandwidth)
* The clock speed isn't _that_ slow, it's in the GHz. Still one's clocks per emulated instruction may not be great.
* You can still do pipelining, maybe enough to get the clocks-per-instruction down.
* Branch prediction can be done with ample resources. RNN-based predictors are a shoo-in.
* communication between "cores" should be fast
* a many-core emulated CPU might not do too bad for some workloads.
* It would have good SIMD support.
Food for thought.
~~~
Symmetry
Generally speaking emulating special purpose hardware in software slows things
down a lot so I don't think that relying on a software branch predictor is
going to result in performance anywhere close to what you'd see in, say, an
ARM A53. And since you have to trade off clock cycles used in your branch
predictor with clock cycles in your main thread I think it would be a net
loss. Remember that even though NVidia calls each execution port a "Core" it
can only execute one instruction across all of them at a time. The advantage
over regular SIMD is that each shader processor tracks its own PC and only
executes the broadcast instruction if it's appropriate - allowing diverging
control flows across functions in ways that normal SIMD+mask would have a very
hard time with except in the lowest level of a compute kernel.
That also means that you can really only emulate as many cores as the NVidia
card has streaming multiprocessors, not as many as it has shader processors or
"cores".
Also, while it's true that GPUs have huge memory bandwidth, they achieve it by
trading off against memory latency. You can actually think of GPUs as
throughput-optimized compute devices and CPUs as latency-optimized compute
devices and not be very misled.
So I expect the single-threaded performance of an NVidia general purpose
computer to be very low in cases where the memory and branch patterns aren't
obvious enough to be predictable to the compiler. Not unusably slow, but
something like the original Raspberry Pi.
Each emulated core would certainly have very good SIMD support but at the same
time pretending that they're just SIMD would sacrifice the extra flexibility
that NVidia's SIMT model gives you.
~~~
joe_the_user
_Remember that even though NVidia calls each execution port a "Core" it can
only execute one instruction across all of them at a time._
There are clever ways around this limitation; see the links in my post in
this thread.
[https://news.ycombinator.com/item?id=16892107](https://news.ycombinator.com/item?id=16892107)
~~~
Symmetry
Those are some really clever ways to make sure that all the threads in your
program are executing the same instruction, but it doesn't get around the
problem. Thanks for linking that video, though.
~~~
joe_the_user
The key of the Dietz system (MOG) is that the native code that the GPU runs is
a bytecode interpreter. Bytecode "instruction pointer" together with other
data is just data in registers and memory that's interpreted by the native
code interpreter. So for each thread, the instruction pointer can point at a
different command - the interpreter runs the same instructions but the results
are different. So effectively you are simulating a general purpose CPU running
a different instruction on each thread. There are further tricks required to
make this efficient, of course. But you are effectively running a different
general purpose instruction per thread (it actually runs MIPS assembly, as I
recall).
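To make that concrete, here is a toy Python/NumPy sketch of the trick (my own
illustration, not MOG's actual code; the opcodes and programs are invented).
Each "thread" keeps its program counter as ordinary data, and one shared
interpreter loop dispatches under a mask:

    import numpy as np

    # Opcodes: 0 = ADD1, 1 = DOUBLE, 2 = HALT.
    programs = np.array([[0, 0, 1, 2],     # thread 0: +1, +1, *2, halt
                         [1, 0, 2, 2]])    # thread 1: *2, +1, halt
    pc   = np.zeros(2, dtype=int)          # per-thread instruction pointer
    regs = np.array([3, 5])                # per-thread accumulator
    live = np.ones(2, dtype=bool)

    while live.any():
        op = programs[np.arange(2), pc]    # each thread fetches *its own* op
        regs = np.where(live & (op == 0), regs + 1, regs)   # ADD1 under mask
        regs = np.where(live & (op == 1), regs * 2, regs)   # DOUBLE under mask
        live &= op != 2                                     # HALT
        pc += live.astype(int)             # only live threads advance

    print(regs)                            # [10 11]

The native loop is identical for every thread; only the data (pc, regs)
differs, which is the whole point.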
~~~
etaioinshrdlu
This is more or less what I'm talking about. I wonder what possibilities lie
with using the huge numerical computation available on a GPU applied to
predictive parts of a CPU, such as memory prefetch prediction, branch
prediction, etc.
Not totally dissimilar to the thinking behind NetBurst which seemed to be all
about having a deep pipeline and keeping it fed with quality predictions.
~~~
joe_the_user
I'm not sure if your idea in particular is possible, but who knows. There may
be fundamental limits to speeding up computation based on speculative
look-ahead no matter how many parallel tracks you have, and it may run into
memory throughput issues.
But take a look at the MOG code and see what you can do.
Check out H. Dietz' stuff. Links above.
------
BooneJS
Pretty soft article. General purpose processors no longer have the performance
or energy efficiency that’s possible at scale. Further, if you have a choice
to control your own destiny, why wouldn’t you choose to?
~~~
jacksmith21006
Great post. It is like mining going to ASICs. We have hit limits and you now
have to do your own silicon.
A perfect example is the Google new speech synthesis. Doing 16k samples a
second through a NN is not going to be possible without your own silicon.
[https://cloudplatform.googleblog.com/2018/03/introducing-
Clo...](https://cloudplatform.googleblog.com/2018/03/introducing-Cloud-Text-
to-Speech-powered-by-Deepmind-WaveNet-technology.html)
Listen to the samples. Then think about the joules required to do it this way
versus the old way, and about trying to create a price-competitive product
with the improved results.
------
bogomipz
The article states:
>"LeCun and other scholars of machine learning know that if you were starting
with a blank sheet of paper, an Nvidia GPU would not be the ideal chip to
build. Because of the way machine-learning algorithms work, they are bumping
up against limitations in the way a GPU is designed. GPUs can actually degrade
the machine learning’s neural network, LeCun observed.
“The solution is a different architecture, one more specialized for neural
networks,” said LeCun."
Could someone explain to me what exactly the limitations of current GPGPUs,
such as those sold by Nvidia, are when used in machine learning/AI contexts?
Are these limitations only experienced at scale? If someone has resources or
links they could share regarding these limitations and better designs, I would
greatly appreciate it.
~~~
maffydub
I went to a talk from the CTO of Graphcore
([https://www.graphcore.ai/](https://www.graphcore.ai/)) on Monday. They are
designing chips targeted at machine learning. As I understood it, their
architecture comprises:
\- lots of "tiles"
\- small processing cores with collocated memory (essentially DSPs)
\- very high bandwidth (90TB/s!) switching fabric to move data between tiles
\- "Bulk Synchronous Parallel" operation, meaning that the tiles do their work
and then the switching fabric moves the data, and then we repeat
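To make the BSP part concrete, here is a toy Python sketch (nothing
Graphcore-specific; the tile count, the "work" and the ring-shift "fabric" are
all invented):

    import numpy as np

    # Toy bulk-synchronous-parallel loop: every "tile" computes on its own
    # local chunk, then all data moves at once in an exchange phase, repeat.
    n_tiles, steps = 4, 3
    local = [np.full(8, float(i)) for i in range(n_tiles)]  # per-tile memory

    for _ in range(steps):
        # compute phase: tiles work independently on collocated memory
        local = [chunk * 2 + 1 for chunk in local]
        # exchange phase: a simple ring shift stands in for the fabric
        local = local[1:] + local[:1]

    print([float(chunk[0]) for chunk in local])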
The key challenge he pointed to was power - both in terms of getting energy in
(modern CPUs/GPUs take similar current to your car starter motor!) and also
getting the heat out. Logic gates take a lot more power than RAM, so he argued
that collocating small chunks of RAM right next to your processing core was
much better from a power perspective (meaning you could then pack yet more
into your chip) as well as obviously being better from a performance
perspective.
[https://www.youtube.com/watch?v=Gh-
Tff7DdzU](https://www.youtube.com/watch?v=Gh-Tff7DdzU) isn't quite the
presentation I saw, but it has quite a lot of overlap.
Hope that helps!
~~~
bogomipz
Thanks for the detailed response and link! Cheers.
------
emcq
There is certainly a lot of hype around AI chips, but I'm very skeptical of
the reward. There are several technical concerns I have with any "AI" chip
that ultimately leave you with something more general purpose (and not really
an "AI" chip, but good at low precision matmul):
* For inference, how do you efficiently move your data to the chip? In general most of the time is spent in matmul, and there are lots of exciting DSPs, mobile GPUs, etc. that require a fair amount of jumping through hoops to get your data to the ML coprocessor. If you're doing anything low latency, good luck because you need tight control (or bypassing entirely) of the OS. Will this lead to a battle between chip makers? Seems more likely to be a battle between end to end platforms.
* For training, do you have an efficient data flow with distributed compute? For the foreseeable future any large model (or small model with lots of data) needs to be distributed. The bottlenecks that come from this limit the improvements from your new specialized architecture without good distributed computing. Again better chips don't really solve this, and comes from a platform. I've noticed many training loops have terrible GPU utilization, particularly with Tensorflow and V100s. Why does this happen? The GPU is so fast, but things like summary ops add to CPU time limiting perf. Bad data pipelines not actually pipelining transformations. Slow disks bottlenecking transfers. Not staging/pipelining transfers to the GPU. And then there is a bit of an open question of how to best pipeline transfers from the GPU. Is there a simulator feeding data? Then you have a whole new can of worms to train fast.
* For your chip architecture, do you have the right abstractions to train the next architecture efficiently? Backprop trains some wonderful nets but for the cost of a new chip (50-100M), and the time it takes to build (18 months min), how confident are you that the chip will still be relevant to the needs of your teams? This generally points you towards something more general purpose, which may leave some efficiency on the table. Eventually you end up at a low precision matmul core, which is the same thing everyone is moving towards or already doing whether you call yourself a GPU, DSP, or TPU (which is quite similar to DSPs).
Coming from an HPC/graphics background turned deep learning engineer, I've
worked with GPUs since 2006 and neural net chips since 2010 (before even
AlexNet!!), so I'm a bit of an outlier here, having seen so many perspectives.
From my point of view the computational fabric already exists; we're just not
using it well :)
~~~
justicezyx
Most top-tier tech companies all have their own working solutions for these.
It's a matter of turning them into products and moving the industry mindset.
~~~
jacksmith21006
There is nothing for the buyer to see. They are buying a service or
capability, and what silicon it runs on is neither here nor there to them.
A simple example is the Google New Speech synthesis service. It is done using
NN on their TPUs but nobody needs to know any of that.
[https://cloudplatform.googleblog.com/2018/03/introducing-
Clo...](https://cloudplatform.googleblog.com/2018/03/introducing-Cloud-Text-
to-Speech-powered-by-Deepmind-WaveNet-technology.html)
What the buyer knows is the cost and the quality of the service.
Now Google had to do their own silicon to offer this, as otherwise the cost
would have been astronomical. The compute required to do 16k samples a second
with a NN is enormous.
If I could not see it myself, I would say what Google did was not possible. I
just hope they share the details in a paper. If we can get 16k cycles through
a NN per second at a reasonable cost, that opens up a lot of interesting
applications.
------
MR4D
Back in the day, there was the 386. And also a 387 coprocessor to handle the
tougher math bits.
Then came a 486 and it got integrated again.
But during that time, the GPU split off. Companies like ATI and S3 began to
dominate, and anyone wanting a computer with decent graphics had one of these
chips in their computer.
Fast forward several years, and Intel again would bring specialized circuitry
back into their main chips, although this time for video.
Now we are just seeing the same thing again, but this time it’s an offshoot of
the GPU instead of the CPU. Seems like the early 1990’s again, but the
acronyms are different.
Should be fun to watch.
------
davidhakendel
Does anyone have a non-paywall but legitimate link to the story?
~~~
trimbo
Incognito -> search for headline -> click
~~~
bogomipz
Thank you for this tip. Out of curiosity why does this trick work?
~~~
_delirium
Websites like this want traffic from Google. To get indexed by Googlebot they
have to show the bot the article text, and Google's anti-blackhat-SEO rules
mean that you have to show a human clicking through from Google the same text
that you show Googlebot. So they have to show people visiting through that
route the article text too.
------
Barjak
If I were better credentialed, I would definitely be looking to get into
semiconductors right now. It's an exciting time in terms of manufacturing
processes, and I think some of the most interesting and meaningful
optimization problems ever formulated come from semiconductor design and
manufacturing, not to mention the growing popularity of specialized hardware.
I would tell a younger version of myself to focus their education on some
aspect of the semiconductor industry.
------
mtgx
Alphabet has already made its AI chip... in fact, it's on its second generation.
~~~
jacksmith21006
Plus Google has the data and the upper layers of the AI stack to keep well
ahead.
------
jacksmith21006
The new Google Speech solution is the perfect example of why Google had to do
their own silicon.
Doing speech at 16k samples a second through a NN while keeping the cost
reasonable is really, really difficult.
The old way was far more power efficient, so if you are going to use this new
technique, which gets you a far better result, and do it at a reasonable cost,
you have to go all the way down into the silicon.
Here listen to the results.
[https://cloudplatform.googleblog.com/2018/03/introducing-
Clo...](https://cloudplatform.googleblog.com/2018/03/introducing-Cloud-Text-
to-Speech-powered-by-Deepmind-WaveNet-technology.html)
Now I am curious about the cost difference Google was able to achieve. It is
still going to be more than the old way, but how close did Google come?
But my favorite new thing with these chips is the Jeff Dean paper.
[https://www.arxiv-vanity.com/papers/1712.01208v1/](https://www.arxiv-
vanity.com/papers/1712.01208v1/)
Can't wait to see the cost difference using Google TPUs and this technique
versus traditional approaches.
Plus this approach supports multi-core inherently. How would you ever do a
tree search with multiple cores?
Ultimately, to get the new applications we need Google and others doing the
silicon. We are getting to extremes where the entire stack has to be tuned
together.
I think Google's vision for Lens is going to be a similar situation.
~~~
taeric
This somewhat blows my mind. Yes, it is impressive. However, the work that
Nuance and similar companies used to do is still competitive, just not
getting nearly the money and exposure.
I remember over a decade ago, they even had mood analysis they could apply to
listening to people. Far from new. Is it truly more effective or efficient
nowadays? Or just getting marketed by companies you've heard of?
~~~
sanxiyn
It is truly better. Objective metrics (such as word error rate) don't lie. You
can argue whether it makes sense to use, say, 100x compute to get 2x less
error, but that's a different argument; I don't think anyone is really
disputing improved quality.
~~~
taeric
Do you have a good comparison point? And not, hopefully, comparing to what
they could do a decade ago. I'm assuming they didn't sit still. Did they?
I question whether it is just 100x compute. It feels like more, since
NaturallySpeaking and friends didn't hog the machine. Again, that was over a
full decade ago.
More, the resources that Google has to throw at training are ridiculous. Well
over 100x what was used to build the old models.
None of this is to say we should pack up and go back to a decade ago. I just
worry that we do the opposite; where we ignore progress that was made a decade
ago in favor of the new tricks alone.
~~~
jacksmith21006
The thing is, it is not simply the training: the inference would have
required an incredible amount of compute compared to the old way of doing it.
I hope Google will do a paper like they did with the Gen 1 TPUs. I would love
to see the difference in terms of joules per word spoken.
------
jacksmith21006
The dynamics of the chip industry have completely changed. It used to be that
a chip company like Intel sold its chips to a company like Dell, which then
sold the server with the chip to a business, which ran the chip and paid the
electric bill.
So the company that made the chip had no skin in the game with running the
chip or the cost of the electricity to run it.
Today we have massive clouds at Google and Amazon, and lowering the cost of
running their operations goes a long way, unlike in the days of the past.
This is why we will see more and more companies like Google create their own
silicon, which has already started and is well on its way.
Not only the TPUs: Google has also created its own network processors, having
quietly hired away the Lanai team years ago.
[https://www.informationweek.com/data-centers/google-runs-
cus...](https://www.informationweek.com/data-centers/google-runs-custom-
networking-chips/d/d-id/1324285)?
Also this article helps explain why Google built the TPUs.
[https://www.wired.com/2017/04/building-ai-chip-saved-
google-...](https://www.wired.com/2017/04/building-ai-chip-saved-google-
building-dozen-new-data-centers/)
------
willvarfar
I just seem to bump into a paywall.
The premise from the title seems plausible, although NVIDIA seems to be
catching up again fast.
~~~
madengr
I was impressed enough with their CES demo to buy some stock. Isn’t the Volta
at 15E9 transistors? It’s at the point only the big boys can play in that
field due to fab costs, unless it’s disrupted due to some totally new
architecture.
First time on HN I can read a paywalled article, as I have a Barron’s print
subscription.
~~~
twtw
21e9 transistors.
Mt.Gox put announce for mtgox acq here - MPetitt
The only HTML on the Mt.Gox homepage<p><html>
<head>
<title>MtGox.com</title>
</head>
<body>
<!-- put announce for mtgox acq here -->
</body>
</html>
======
defcon84
Dear MtGox Customers,
In the event of recent news reports and the potential repercussions on MtGox's
operations and the market, a decision was taken to close all transactions for
the time being in order to protect the site and our users. We will be closely
monitoring the situation and will react accordingly.
Best regards, MtGox Team
Remote Work – NoDesk - swimduck
http://nodesk.co/remote-work/
======
rijoja
Nice collection of links! I'm quite interested in seeking some interesting
freelance work. Currently I'm working on putting together a portfolio. Enough
about me! Does anybody have any feedback on these sites? Does any of them
stand out?
~~~
swimduck
I've used quite a few of those links and found them to be pretty good. What
kind of freelance work are you looking for? Design, marketing, tech etc?
~~~
rijoja
Thanks for the feedback!
I've done some web development, mostly PHP and such. I've also studied
computer science for three years, so I have some insight into the formal
academic world as well. In my spare time I do C programming and Linux stuff.
What We Know About Inequality (in 14 Charts) - warrenmar
http://blogs.wsj.com/economics/2015/01/01/what-we-know-about-inequality-in-14-charts/?mod=e2tw
======
jemacniddle
"The annual income Americans earn is unevenly distributed"
Income isn't distributed, but earned.
General quality of life is far better now than it has ever been.
~~~
mziel
You do realize that it's about statistical distribution, not political
REdistribution?
Ask HN: What kind of products would you rent after landing in a new country? - kseudo
I have a kernel of an idea that I was thinking about, but I think I need some honest external feedback before I decide to pursue it further. Basic idea: enable people entering a country on holiday/business to rent items that they can drop off again at the airport when they are leaving.<p>So some example use cases would be:
- A backpacker will be backpacking in the country for 10 days. Rather than having the hassle/cost of bringing over a tent/sleeping bag, they would like to rent them.
- A business person visiting Dublin for three nights. This person would like an iPad for this time.<p>In both of these cases the person uses our website/app to create an account and make a reservation. They determine the pickup/dropoff (i.e. city/airport) locations, and a price is agreed and paid by credit card. Upon arrival they pick up the item, show some ID and use the product for the duration of their stay. They can drop it off when they are leaving at a booth in the airport, or perhaps at other convenient locations.<p>My question is: would you see a market for this kind of service? If so, what products would you see as potentially applicable?<p>I am pretty aware of the pains involved in running a rental company, so I am under no illusions that this would be an easy company to run. However, what I would like to know is if there is potential for this idea. I don't want to invent a solution for a problem that does not exist (as I have done a couple of times in the past). I do feel there are some interesting aspects to it, especially when you think about the many potential revenue sources, and I would like to hear some opinions.<p>Brief note: I come from Ireland (though I do not currently live there), which I think is the perfect test location for this idea: a small island, very few entry points, lots of tourists, etc. As I am a soon-to-be-unemployed web developer who will be spending the next few months travelling around the world, I would like a project to ponder/work on while I'm away :)
======
chanced
>> A backpacker will be backpacking in the country for 10 days. Rather than
having the hassle/cost bringing over a tent/sleeping bag they would like to
rent them
Maybe some but I suspect many climbers/hikers put a great deal of faith in
their equipment. Rental equipment is often abused. Beyond that, they aren't
comfortable with it, which adds an element of risk.
>> A business person visiting Dublin for three nights. This person would like
an ipad for this time.
A business person traveling internationally will have all the tech they need
and then some.
~~~
kseudo
Good points. I think you are correct about the business user: also,
VPN/security requirements would likely mean that it would be unusable for
their needs.
Perhaps people would like to rent the latest tech products and use them while
on holiday. They would get to try out the latest iPad/Nexus out in the real
world.
------
dgunn
Think of what is already being rented to travelers. Cars, lodging, insurance,
motorcycles, boats, etc.. It's usually large things that people simply can't
travel with.
If you want to rent something to someone, you have to think of things which a
person couldn't reasonably have brought with them or offer them something that
makes them feel safe in a strange place.
~~~
kseudo
Agreed, or perhaps something that they would like to try out while on holiday.
Thanks for your response I think you make a good point.
------
kseudo
Thanks to everyone who responded. It is invaluable to have people to bounce
an idea off... it really helps to open your eyes.
------
gregjor
Lots of airports in the US have stores/kiosks that rent DVDs and DVD players.
I've also seen GPSs for rent through car rental agencies.
~~~
kseudo
Ok, so I'm not the first person to think of this :) Although I have not seen
these kiosks yet, I haven't been in the States in a few years, so it makes
sense.
I'm interested to know what their target market is, though. I understand that
people want entertainment while in the airport, but I'm thinking about items
that can be used during the course of a stay in a country:
\- MacBooks/iPads (with 3G data access)
\- Bulky items like tents/sleeping bags
\- Perhaps niche items like binoculars, cameras, etc.
I would like to know if there are items that people would be willing to pay to
rent if they had a convenient method to do so... but perhaps there are not.
Thanks for your response by the way
------
ig1
A smartphone with a local data sim.
------
rdouble
There are already many places that do camping equipment rentals. I think even
REI does it.
~~~
kseudo
Yep. Seems unlikely that someone would rent this type of equipment from a
kiosk at the airport too.
Is there any product that this would work for.. that is the question I'm
asking myself.
Effective coding for small children? - fyacob
So we're about to launch a toy on Kickstarter that effectively and truly introduces children to programming logic. It is basically a physical version of the Logo turtle, and uses a visual sequence of instructions to broach concepts of high-level abstraction.<p>The first time a child plays, he/she effectively writes his/her first line of code. The neat thing about all this is that children can do so without the need for literacy or numeracy.<p>We see all sorts of on-screen and off-screen products that deal with the subject of programming and early learning. They are either too advanced, too complex or completely useless.<p>The earliest thing we found was something called www.Codebabies.com, which we thought was a bit of a parody at first (not their more advanced stuff, which is OK, but their early-range products).<p>In terms of physical toys the best one we could find is the super popular Bee-bot. While it is a great toy, we thought it missed the point a little bit.<p>Here is a link to the product we are launching shortly. The actual packaging has changed slightly; this is footage we took while the product was still a university project. It's slicker and designed for mass production/consumption. Any feedback or conversation around the topic and/or product is massively appreciated.<p>http://tinyurl.com/on9zj9q
======
RodgerTheGreat
So it's essentially a physical version of the puzzle game Light Bot[1]. I can
definitely see this being an approachable toy, and it does encourage the kind
of visualization and sequential reasoning that underlies programming. That
said, with such a limited number and variety of instructions (Looks like a
12-step program with the opportunity for a single 4-step subroutine) I don't
see much potential for teaching about abstraction and reuse, and I think
children will reach the limits of the toy very quickly.
Given that you're already committed to putting this into production, are you
actually looking for critical feedback or are you simply trying to raise
awareness?
[1] <http://armorgames.com/play/2205/light-bot>
~~~
fyacob
No, genuinely trying to get some debate going. We have debated upgrades for a
while now at the Arduino head office in Torino where we work. We thought of
tapping into the endless wisdom of the HN community for a fresh perspective.
The learning curve plateaus eventually, but it's also about experimenting
control within new environments, "go around mommy's feet" "circle the kitchen
counter".
In it's basic form no numeracy or literacy is required to approach Primo. This
is key, but what do you suggest a suitable upgrade should be bearing this in
mind?
We have a Primo 2.0 for schools in R&D right now that will allow for multi
play (or super hog play) by controlling several vehicles from the same board.
what you think about that?
~~~
RodgerTheGreat
I see where you're going with taking the toy into different environments to
produce varied challenges. Controlling multiple vehicles at once could be
interesting- I'm reminded of tool-assisted speedruns in which a single set of
gamepad inputs are used to complete different games simultaneously. You could
create obstacle courses where the same program will help all the robots get
out.
From the perspective of learning about abstraction and code reuse, I think
having a single function definition that can only be four instructions long is
very limiting. If you had more (even just a second one) you'd have many more
ways of combining them together- they could be chained, they could call one
another, etc- and thus more ways to try breaking down problems. Breaking tasks
down into smaller subtasks is, in my opinion, one of the most fundamental
concepts in programming, and your toy _can_ teach this idea.
From a totally different angle, have you thought about adding a crayon/marker
mount to the car so that kids can lay out a big piece of paper and then have a
visual record of their trial-and-error progress? Thinking about the toy as a
sort of creative tool, if the kids had some ability to turn the overall
program into a repetitive loop rather than just a sequence, you could do some
neat Spirograph-style stuff. Maybe add a toggle switch to enable that?
How $96,000 can buy you a top 10 ranking in the U.S. app store - beshrkayali
http://venturebeat.com/2013/06/04/how-96000-can-buy-you-a-top-10-ranking-in-the-u-s-app-store
======
thedufer
I just wanted to point out that that last graph is outrageously misleading.
The bars start at around -20,000, which makes the 11x difference between the
USA and Spain bars appear to be only a 4x.
I doubt this is intentionally misleading, since I can see no incentive for
doing so, but it just goes to show that you have to be careful with
infographics - they're very easy to lie with, even accidentally.
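To see the effect with some made-up numbers (the article's actual figures
aren't reproduced here), the drawn bar heights get measured from the bottom of
the axis rather than from zero:

    # Hypothetical install counts with a real 11x gap between them.
    usa, spain = 94000, 8545        # invented numbers, roughly 11:1
    baseline   = -20000             # where the bars in the chart start

    true_ratio  = usa / spain                              # ~11x
    drawn_ratio = (usa - baseline) / (spain - baseline)    # ~4x on screen
    print(round(true_ratio, 1), round(drawn_ratio, 1))     # 11.0 4.0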
~~~
apalmer
This appears to be a marketing/advertisement piece so take it with a grain of
salt.
~~~
icpmacdo
Venture Beat always seems to be filled with click bait and advert pieces.
------
togasystems
I participated in the Free App of the Day promotion about 4 years ago. For
the week that it was featured, we experienced about 360% growth and made it to
the top ten in our category. Free App of the Day took a percentage of our
profits afterwards for a few weeks, riding the long tail. In the end, it was a
win-win for both of us.
------
coldcode
We experimented at work with them and yes it "worked". Thousands downloaded
the app in India, China and Egypt, drove us to #2 in our category, then 2 days
later back to the basement again. Not worth it. I hope Apple kills these
folks.
~~~
panabee
you experimented with appgratis, trademob, or who? curious to know who didn't
work for you. if you don't mind sharing, was it a game, or what kind of app
was it? thanks!
~~~
coldcode
Not a game, travel, and it was the site shown, fiksu. A $10,000 experiment.
------
ryandrake
If I were Apple, I'd be kind of embarrassed that my organic ranking system
could be gamed this easily. I mean, there's a huge cottage industry (with its
own jargon and professionals) built around figuring out the exact wizardry
needed to get your web page highly ranked, yet on the AppStore, apparently the
only thing that drives their ranking is "number of downloads".
I wonder if the result of all of these "discovery" apps will be a more
complicated ranking system from Apple that takes into account more than just
downloads...
~~~
tmandarano
I completely agree. It's honestly quite hard to believe that they have
calculated it to be precisely 80k downloads.
------
sjsivak
The organic install part of this calculation seems pretty suspect. In order to
get the organic lift mentioned in the article the app would need to be pretty
highly ranked _and_ likely sustain that rank for a while before getting
65%-100% of the additional installs.
Whenever running a burst campaign it is important to follow it up with
sustained installs afterwards to try and maintain the rank for as long as
possible to attempt to get some of the organic lift mentioned. It does not
simply happen instantly.
------
unreal37
I don't think it's a surprise that "advertising works". In any market (iOS or
bars of soap), spending money up front on commercials/ads/promotion gets you
some sales, and the rest happens through word-of-mouth, social sharing, social
proof, etc.
$96K is a lot of money to spend if your goal is a top 10 non-game app. Not
everyone can put up that kind of money.
And the more people do that (buying installs to kick off a campaign), the more
expensive it becomes and more difficult to get in the top 10.
~~~
interg12
Everybody who matters can put that kind of money up. Remember, the more
expensive it gets, the more the ad space is worth for publishers and the more
they can earn. As advertising gets more expensive, publishers earn more and
can ultimately do more.
~~~
uptown
"Everybody who matters can put that kind of money up."
Everybody who matters to whom? I understand what you're saying - it just seems
like a very deterministic view on the state of the gaming industry.
------
SurfScore
So how does this work for paid apps? If you sell your app for 2.99 and you're
paying 1.20 an install, aren't you making money? I know I'm missing something
here because otherwise anyone could pay for infinite installs and make
infinite money.
~~~
cclogg
I think it's not 1.20 for a paid install. Buying users can cost quite a bit
more for paid apps (for instance I know from googling that Flurry's 'buy user'
service doesn't work very well for paid apps).
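To put a rough number on the break-even point (illustrative only; the 30%
store cut is the only hard figure here):

    # Break-even cost per install for a $2.99 paid app, ignoring organics.
    price     = 2.99
    store_cut = 0.30                        # platform's share at the time
    developer_net = price * (1 - store_cut)
    print(round(developer_net, 2))          # ~2.09: any CPI above this loses
                                            # money on the bought users alone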
But in the end, it's just a mad rush to be on the charts to get organics.
Without that, it's kind of difficult with apps to make your money back from ad
spending. It's not like you are advertising a hotel room where one purchase
through your AdWords can net you 100+ dollars (plus all the room service
that person will consume!). In apps, typically, you lose money buying users
(if you were only counting ad-spend vs revenue from those users that
installed).
~~~
SurfScore
It's a shame that the App Store is becoming this commercialized. One of the
best things about it in the early days was the fact that indie games could get
noticed internationally. Sure we get games like Infinity Blade but indie
developers gave us Tiny Wings.
Maybe Apple will implement something similar to Steam, where this is still
possible.
~~~
cclogg
Yes, and I think it's true of many platforms. It's best to be there in the
beginning, before the flood haha. Look at Youtube, it was much easier to
become a Youtube celebrity (or just garner MANY views) if you were on there
since/near the beginning.
------
notahacker
How many actual sales would $96000 worth of paid installs ultimately translate
into for an app that isn't particularly original, addictive or brilliant?
Would it even make the $96000 back before sinking back down the rankings
without trace?
~~~
jyap
You don't just throw money at things. You take a calculated risk.
So before this spending on paid installs happens you need to have metrics on
the Customer Life Time Value (CLTV). Now since you are buying users this
calculation could be off.
If:
Customer Life Time Value > Cost Per User Acquisition
Then:
Spend money to acquire customers
So if you rank high and word of mouth spreads, then you hopefully have organic
growth where user acquisition costs equate to $0.
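A minimal sketch of that rule (all numbers invented; the $1.20 is just the
burst-campaign figure mentioned elsewhere in the thread):

    def should_buy_installs(lifetime_value, cost_per_install, margin=1.0):
        # Spend on paid installs only while a user is expected to return
        # more than they cost to acquire (times a safety margin).
        return lifetime_value > cost_per_install * margin

    # e.g. a $1.50 expected LTV against a ~$1.20 burst price, with a 20%
    # cushion for the "buying users skews the numbers" problem:
    print(should_buy_installs(1.50, 1.20, margin=1.2))   # True (1.50 > 1.44)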
Job Search: Interview Disasters Revealed By Employers - eroach
http://roachpost.com/2010/02/25/job-search-interview-disasters-revealed-by-employers/
======
xenophanes
-- Candidate wore a business suit with flip flops.
God, who cares?
-- Dressing inappropriately - 57 percent
If dropping your dress code expectations will increase your potential hire
pool by over 50%, that is a serious competitive advantage over more stodgy
companies.
~~~
radu_floricica
It's not about the suit, it's about the message. As an employer you want to
hire people who make an effort, and want to avoid people who can't be bothered
to. Dressing well for an interview is so obvious it's actually a pretty good
way of sorting candidates.
This may apply less for programmers and high-end technical positions, but this
is the exception not the rule.
~~~
pg
What you're overlooking is that attention is a limited resource. Someone who
pays attention to how they dress has thereby paid less attention than they
might have to something substantive.
So not only don't we care how people dress at YC interviews, dressing up is
actually a (minor) red flag. We'd rather you spent that effort on something
else.
~~~
radu_floricica
In certain domains this applies (and indeed, given the blog's background it's
on the mark). But in large segments of the job market I still think it's
important to dress reasonably well.
Even if attention is a limited resource, the interview usually is a place
where first impressions matter, so I'd say budgeting some attention towards
looks is not misguided at all.
~~~
pg
You're right that dressing well matters in large segments of the job market.
But I'd argue that this is a pretty good heuristic for deciding which segments
of the job market to avoid.
~~~
radu_floricica
That is true. However there still remain the entry-level interviews. When most
of what you sell is potential every detail under your control matters.
------
boredguy8
We're going through panel interviews right now, and a lot of that article is
rubbish, as others have pointed out. But I want to tease out the "don't be
negative" because it's got the reasons wrong.
"Also, no matter how tempting it is, don't say negative things about a
previous employer, regardless of how the job ended - hiring managers may fear
that you will say the same things about their organization."
That's not why you don't say negative things. You don't say negative things
because it means you blame other people or you're a negative person. You say,
"I don't like my current job because they don't listen to me," and all I hear
is that you don't have very good ideas, you're very bad at explaining those
ideas, or you're disconnected from the people you work with. And on top of that,
you don't understand how to solve interpersonal problems. None of that make me
super excited about you.
Instead try realizing that you might bear some of the blame for the things you
don't like, and realize the other party might have good justification. So
instead: "I find myself getting excited about very different opportunities
than my current coworkers. They have a real passion for solving the immediate
problem whereas I'm far more interested in solving the underlying cause. So
while I appreciate their desire to provide a quick solution, and have even
learned when that can be appropriate, I'm really looking for an environment
that emphasizes long-term thinking while still making sure customer needs are
met as quickly as possible."
------
CoryOndrejka
Pretty clear BigCo corporate bias in these results, as this would likely be a
positive in SF/Bay:
\-- Candidate used Dungeons and Dragons as an example of teamwork.
The meeting for drinks also cracked me up, as Linden ended many a job
interview with drinks at the end of the day!
------
radu_floricica
I wish there was a way to link to my younger acquaintances just this part:
-- Dressing inappropriately - 57 percent
-- Appearing disinterested - 55 percent
-- Speaking negatively about a current or previous employer - 52 percent
-- Appearing arrogant - 51 percent
-- Answering a cell phone or texting during the interview - 46 percent
-- Not providing specific answers - 34 percent
-- Not asking good questions - 34 percent
Edit: This and not re-reading the CV 3 times before sending it.
------
angelbob
This is an excellent summary of the kind of things a company should worry
about only if they can't actually measure performance.
Though "appearing disinterested" should still be a huge turn-off.
------
warfangle
Funny, the most common thing I've seen in candidates getting rejected promptly
is not being able to answer basic questions about things they've listed on
their resume. Example: listing Java and not understanding what an interface
is.
------
vital101
The author made a point about researching the company before the interview. I
always thought this was common sense (why are you applying with this company
anyways?), but apparently it isn't.
------
jacktasia
I am curious how many of these became "disasters" for the interviewer because
the interviewee had already reached a point where they considered the
interviewer/company/etc. a disaster (no longer wished to work there).
------
pw0ncakes
_\-- Candidate used Dungeons and Dragons as an example of teamwork._
That's a negative? WTF? There's a hell of a lot more interesting in the way of
teamwork, management, etc. in role-playing games (especially MMORPGs) than in
most corporations.
~~~
sounddust
On the flip-side, I once had an interviewer ask me to solve a problem where
several hurdles/criteria were introduced randomly based on rolling a 20-sided
die.
~~~
wlievens
That's pretty awesome. What kind of company was it?
------
lucifer
And at this late stage the realization comes that reciting poetry in
interviews was the silent assassin of my career! Just the other day I was
asked to explain a concurrent design and I just couldn't help but recite the
immortal words of Halyna Krouk:
Two couplets and a refrain
a carousel
of non-stop passing
at each turn one more door closes before us
with a rusty whinny
legless horses tear into the prairie –
racing
two couplets and a refrain
eyes gaping
two couplets and a refrain
catching up from hind to front
reach out to me
throw me the lasso of a glance
who made us so hopelessly distant
who conceived us such irreparable losers
on this overplayed record
– two couplets and a refrain –
where even love leaves only scratches
Military Aircraft Hit Mach 20 Before Ocean Crash, DARPA Says - Shalle
http://www.space.com/12670-superfast-hypersonic-military-aircraft-darpa-htv2.html
======
51Cards
August 18th, 2011? Possibly a few tests after this have already happened?
Edit: Looks like no third flight yet. Found this followup:
<http://www.darpa.mil/threeColumn.aspx?pageid=2147485247>
Also found this paragraph to be interesting:
“The initial shockwave disturbances experienced during second flight, from
which the vehicle was able to recover and continue controlled flight, exceeded
by more than 100 times what the vehicle was designed to withstand,” said DARPA
Acting Director, Kaigham J. Gabriel.
~~~
akavel
And this: "[G]radual wearing away of the vehicle’s skin as it reached stress
tolerance limits was expected. However, larger than anticipated portions of
the vehicle’s skin peeled from the aerostructure."
And, dated: April 20, 2012
~~~
iandh
The "gradual wearing of the vehicle's skin" is called is called ablation.
Its a standard way both hyper sonic and reentry vehicles manage heat.
Basically the skin boils off creating a pressure wave that the vehicle flights
behind.
Wikipedia has a good description of it.
<http://en.wikipedia.org/wiki/Atmospheric_reentry#Ablative>
------
Retric
Depending on what kind of impact angle they can achieve, they may not need to
add high explosives to this. Mach 20 is 400 times the kinetic energy of Mach 1
and 40,000x the kinetic energy of a 76 mph collision. So even a 100 lb craft
carries about 1,000 times the impact energy of a 4,000 lb truck at highway
speeds.
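Quick sanity check of those multiples (KE = 0.5 * m * v^2, so only the speed
ratios matter; 761 mph is the rough sea-level speed of sound):

    MPH = 0.44704                  # metres per second per mph
    mach1 = 761 * MPH              # ~340 m/s

    def ke(mass_kg, speed_ms):
        return 0.5 * mass_kg * speed_ms ** 2

    print(ke(1, 20 * mach1) / ke(1, mach1))      # 400.0  (Mach 20 vs Mach 1)
    print(ke(1, 20 * mach1) / ke(1, 76 * MPH))   # ~40,100 (vs a 76 mph crash)
    # 100 lb craft at Mach 20 vs a 4,000 lb truck at 76 mph:
    print(100 / 4000 * ke(1, 20 * mach1) / ke(1, 76 * MPH))   # ~1,000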
~~~
saosebastiao
Interesting perspective...I hadnt even thought of the possibility.
~~~
AUmrysh
Kinetic Bombardment has been a sci-fi concept for decades, and it's
interesting to see a real, possibly cost effective, technology that could make
it real. At those speeds, you don't need explosives, just a lot of mass to
slam into something.
~~~
Zarathust
According to the Project Thor concept, it would not be that cost efficient.
The Wikipedia page states that the rods would need to be around 8 tons. A ton
of tungsten is around $50k, so 8 tons is rather negligible. What is very
expensive is getting 8 tons of material to space. During the Space Shuttle
era, bringing a kilogram of matter to space cost around $20k. A ton is 907
kilograms.
8 x 907 x 20,000 = $145M per payload.
I don't have the exact numbers for the cost of an ICBM with a nuclear warhead,
but I'm pretty sure that it is less than that. We also didn't factor in the
cost of maintaining an orbital launcher, possibly manning that thing, and
other costs which being in space incurs.
Maybe with newer launch methods bringing goods to space will be cheaper, but I
don't think we'll see this kind of tech unless costs decrease to around
2000$/kg to space (1/10th of what it is now).
<http://en.wikipedia.org/wiki/Kinetic_bombardment>
~~~
duaneb
Well we are reaching an age (of asteroid mining) where it's probably more cost
efficient to manufacture in space and ship (or railgun, I guess) the mass to
earth. Probably not for a while, but I don't think it's science fiction
anymore.
~~~
AUmrysh
That gives me an interesting idea. Do you think it could be feasible to mine
ore from space and then drop-ship it to some low-depth (a few hundred feet at
most) part of the ocean? The heat from re-entry would likely turn the metal
into a molten blob, and the sea water would rapidly cool it.
~~~
duaneb
Can they guide things that accurately without assisting it mid-flight? If so,
that's a cool idea, dunno if it's actually practical.
------
zeteo
>HTV-2 is part of an advanced weapons program called Conventional Prompt
Global Strike, which is working to develop systems to reach an enemy target
anywhere in the world within one hour
They're getting really close. Mach 20 is about an hour and a half to the
opposite point on Earth.
~~~
lifeisstillgood
I read this too, but my first thought was "if this had been Iran we would have
invaded at midnight".
It's cool tech, yes, but really...
~~~
rayiner
Of course and that would've been the sensible thing to do. As Americans, we
want a world where the U.S. is the one that has this technology, not countries
who aren't the U.S. Our standard of living (1/4 of the world's resources for
5% of the world's population) literally depends on that state of affairs.
~~~
danielweber
The US doesn't need military dominance to have 1/4 of the world's consumption.
It needs to be able to pay people who have 1/4 of the world's production
enough that they sell it to us. And about 80% of that latter group are
Americans anyway.
~~~
rayiner
That works until the country that does have military dominance decides to
conquer you and take that production for itself. The market is meaningless
when force can be used to take what you can't purchase in the market.
It is possible to be a wealthy nation that doesn't have military dominance
(Switzerland, etc). The key is to be overall small enough that you can consume
a lot per capita but still fly under the radar of the big boys. A country as
big as the U.S. doesn't really have that choice.
~~~
danielweber
Okay, I think I misread you. The meme of "the US only has 5% of the world's
people but consumes 25% of the world's resources!" is usually said by people
who think it's "unfair" that the US is consuming "so much."
If you are saying that the US needs military dominance because we need to
protect our own domestic production from invasion, that's something else. I
still disagree but not as forcefully as I would to the other characterization.
(If an invader tried to invade America for the purpose of seizing our wealth,
much of it would evaporate instantly. They can capture factories but more and
more of it is IP that a potential invader could just stay home and pirate
instead.)
~~~
duaneb
Why is it that you do not care about the US consuming that much? Do you not
care that the other 95% of the world only has 75% of the world's resources?
Are you really so centered on yourself and your culture that you literally
want us to exploit the rest of humanity?
I don't think it's unfair, just morally despicable.
~~~
rgbrenner
The US produces 22% of the world's GDP. There's nothing despicable about
consuming what you are producing.
What is despicable is people who think they are entitled to the labor of
others for free.
~~~
duaneb
> There's nothing despicable about consuming what you are producing.
I agree, what's despicable is ignoring the rest of humanity. No, there's no
entitlement (whatever that means really, a pejorative term for rights), but I
do think you have a moral obligation to help people who need it.
~~~
danielweber
I want China and India and Africa to have first-world standards of living,
with similar per-capita GDPs. That will be awesome, both for them as well as
for the US and Europe.
~~~
duaneb
Yes, standard of living is the big thing, and not just across China, India,
and Africa. I would be fine with the US's dominance if I thought other people
in the world could have satisfying lives where they don't have to worry about
employment (and the resulting food, shelter, health care). But we are
probably decades from achieving that in even China and India, both of which
have a lot to offer the world even now.
Unfortunately, the US also has many inherent resource advantages (pretty
amazing farming, for instance), which much of the rest of the world doesn't
have. I really don't see at least the arid parts of Africa competing any time
soon on material goods, and they don't have the education or cultural draw to
attract production of intellectual goods. So at some point, the world does
need to help itself out. It's not just going to magically fix itself without
people helping each other.
I guess I should make this clear: I'm not advocating some kind of world
socialism thing. I'm pretty sure things like classes are inherent in human
societies. But I would like to drastically reduce the difference between the
poorest and the richest people, and I do want to make sure that the basic
things we take for granted in the US are available everywhere.
------
jvzr
Recent follow-up to the parent's old article:
[http://www.space.com/15388-darpa-hypersonic-glider-demise-
ex...](http://www.space.com/15388-darpa-hypersonic-glider-demise-
explained.html)
~~~
unwind
More recent, but still 11 months old. It's pretty cool to look at the
trajectory maps while remembering that the flight lasted 9 minutes. Mach 20
(around 7000 m/s) is _fast_. :)
------
cadetzero
I found it really interesting to read about the underlying tech, scramjets:
<http://en.wikipedia.org/wiki/Scramjet>
~~~
hencq
I don't think the HTV-2 uses (or used rather, since this was in 2011) a
scramjet though, but uses a rocket engine. The HTV-3X was supposed to use a
scramjet it seems, but that was cancelled.
<http://en.wikipedia.org/wiki/DARPA_Falcon_Project>
~~~
cadetzero
As I understand it, you can't hit those speeds with just a rocket engine. The
HTV-2 is considered a rocket glider - it needs a rocket to propel it to
altitudes and speeds where the SCRAMJET can kick in (as the scramjet itself
has no moving parts).
~~~
ballooney
You can hit those speeds and beyond with a rocket engine. Rockets don't
really have a speed limit: they'll keep accelerating something (provided
their thrust is greater than the drag and any other retarding forces) until
you turn them off. E.g. a payload going into Earth orbit, which would be
about Mach 25 if there were some atmosphere. Or significantly faster if
you're putting a probe on an escape trajectory; for example, the Pluto probe
New Horizons was launched with a solar escape velocity equivalent to about
Mach 50 (all relative to sea-level conditions).
The advantage of the scramjet over a rocket is that you don't have to carry
the oxidiser in a tank with you (like a conventional rocket does); you get it
from the atmosphere. But being inside an atmosphere makes thermal management
a big challenge, as the article describes.
------
will_brown
I am almost positive I have video of this aircraft. I took night video
(1/13/13) of an aircraft near the Everglades; the aircraft moves so fast that
it appears I am moving, but I was stationary. When I get home I will upload
the video and add a link here to see if people agree.
~~~
iandh
There has yet to be a third flight. DARPA is surprisingly open about the
Falcon HTV-2 test flights. The releases have included more info than I would
have expected.
DARPA was actually live-tweeting the 2nd flight.
~~~
will_brown
Here is the video:
[http://www.youtube.com/watch?v=f3VqiPhsnMM&feature=youtu...](http://www.youtube.com/watch?v=f3VqiPhsnMM&feature=youtu.be)
As "open" as DARPA is they are subject to confidentiality and are only allowed
to disclose what the DoD allows them to disclose. I am no conspiracy theorist
but compare the shape/light of the aircraft in my video to the artist night
time rendition from the OP article. Even if the aircraft in my video is
different, I can tell you I have never seen anything anywhere near as fast as
the aircraft in my video including F-15's, F-16's, F-18's or the space shuttle
when it lands (which breaks the sound barrier at very low altitude).
~~~
iandh
Thanks for sharing the video. Yeah, while it's not Falcon, there are tons of
classified launches coming from the Cape. Do you remember if the vehicle was
flying south or east?
~~~
will_brown
Sorry for the misunderstanding: the video is not from the Cape but from the
eastern edge of the Everglades around South Miami. The aircraft was traveling
north, seemingly along the edge of the Everglades.
There is an Air Force base very near (10-15 miles) in Homestead, FL, and a
Navy base in the Keys (over 100 miles away); still, it's not unknown for the
Navy to do exercises that far north, but I _think_ a single aircraft at night
might be unusual.
------
colinshark
If the US is going to field these weapons, we need to be cool with Russia and
China tossing around non-nuclear ICBMs- because that is what this is.
~~~
Thrymr
> non-nuclear ICBMs- because that is what this is.
Well, no, it's not. It's not a missile, it's not ballistic, and I don't know
if the range is truly intercontinental. Not that the Russians and Chinese
won't be concerned, but it is a rather different thing (and a much harder
engineering problem).
~~~
EwanToo
You're right that it's not a ballistic missile, but its end goal is in many
ways the same:
"Prompt Global Strike (PGS) is a United States military effort to develop a
system that can deliver a precision conventional weapon strike anywhere in the
world within one hour, in a similar manner to a nuclear ICBM"
<http://en.wikipedia.org/wiki/Prompt_Global_Strike>
------
ry0ohki
It did the equivalent of Boston to North Carolina in 3 minutes. Crazy testing
something that covers so much distance so fast.
------
will_brown
I commented earlier that I thought I recorded this aircraft at night on
1/13/13; people already bashed me, saying this aircraft has only flown 2
times, blah, blah, blah... (ever hear of military testing? They don't tell
the public)
Here is the video:
[http://www.youtube.com/watch?v=f3VqiPhsnMM&feature=youtu...](http://www.youtube.com/watch?v=f3VqiPhsnMM&feature=youtu.be)
Why do I think it is the same aircraft? When reading the OP article I
immediately identified the artist's night-time rendering as the same aircraft
I recorded in the video (the video quality does not do it justice, but in
person I can verify it looked identical to the artist's rendering).
Separately, the aircraft I recorded is by far the fastest thing I have ever
seen; not in some UFO-conspiracy way, but in the sense that I fly aircraft,
attend airshows (F-14, F-15, F-16, F-18), have seen Space Shuttles land
(breaking the sound barrier at low altitude), and I can say I have never seen
anything move as fast as the aircraft in my video.
------
quarterto
DARPA Maintains Control of Unmanned Aircraft at Mach 20
...for three _whole_ minutes!
~~~
melling
Sounds like a lot to me. For the first DARPA Grand Challenge, for example, no
car finished the race. The farthest went 7 miles.
<http://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2004)>
The next year, 5 vehicles completed the course.
<http://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2005)>
Also, consider all of the rocket failures in the 1960s, and we still made it
to the Moon by the end of the decade.
------
run4yourlives
Pretty sure this is 'old tech' because of the successful test of the Advanced
Hypersonic Weapon.
The Prompt Global Strike program seems to have moved on to other options.
<http://en.wikipedia.org/wiki/Prompt_Global_Strike>
------
axus
Watching that video feels like something out of Kerbal Space Program:
<http://www.youtube.com/watch?v=DWBgUnL_ya4>
------
baby
Mach 20 = 24,500.88 km/h (~6,800 m/s).
This is incredibly fast. The distance from New York to Paris is 5,851 km [1];
it would take about 14 minutes for this plane to cover that distance.
[1] <http://www.wolframalpha.com/input/?i=paris+to+new+york>
~~~
pkfrank
Why do we even reasonably need something this fast?
Wouldn't it be much more cost-effective to merely link up with strategic
partners in other countries (Germany / Israel; Philippines; etc.) to give us
the distribution we need for "conventional" missiles that would hit in the
same timeframe?
~~~
mpyne
Missiles can be shot down by state actors; non-state actors can hide deep in
areas which are out of range of the U.S. or its allies. But launching
anything on an ICBM makes it almost impossible for other nations to
distinguish it from a nuclear missile launch.
Of course, if Prompt Global Strike is capable of carrying cargo with it then
maybe countries like Russia or China would suspect it can _also_ carry a small
nuclear warhead, so it may be that concern doesn't completely go away.
~~~
duaneb
I don't think it's in any country's interest to carry out a nuclear attack
with anything but a blow that would cripple response; a small nuclear warhead
seems like an invitation to go from tolerated and accepted to hated.
------
sabertoothed
Why does it not have a mane?
~~~
metageek
It did have one, but it ablated.
------
tocomment
When is the next test?
| {
"pile_set_name": "HackerNews"
} |
Some Obvious Things About Internet Reputation Systems - jonathansizz
http://tomslee.net/2013/09/some-obvious-things-about-internet-reputation-systems.html?
======
ColinWright
I've noticed that you've deleted and re-submitted this constantly. This is the
third one I've counted - there may have been more.
It was submitted 9 hours ago here:
[https://news.ycombinator.com/item?id=6467599](https://news.ycombinator.com/item?id=6467599)
But you knew that, as you have deliberately appended a "?" to the URL.
Here are two of your earlier submissions:
* [https://news.ycombinator.com/item?id=6469199](https://news.ycombinator.com/item?id=6469199)
* [https://news.ycombinator.com/item?id=6469136](https://news.ycombinator.com/item?id=6469136)
Interestingly, one's just "gone missing" while the other is marked as
[deleted]. Not sure what the difference might be.
I see you've also done the re-submission thing here:
* [https://news.ycombinator.com/item?id=6468064](https://news.ycombinator.com/item?id=6468064)
* [https://news.ycombinator.com/item?id=6469285](https://news.ycombinator.com/item?id=6469285)
| {
"pile_set_name": "HackerNews"
} |
Which book describes all dirty business practises used in past? - xstartup
Is there any book which describes all the evil practices used in past by companies/individuals.
======
indescions_2018
Evil? Who am I to cast such a stone? But the recent PBS American Experience
episode on The Gilded Age demonstrates the scale at which J. P. Morgan backed
the full faith and credit of the US government.
[http://www.pbs.org/wgbh/americanexperience/films/gilded-
age/](http://www.pbs.org/wgbh/americanexperience/films/gilded-age/)
------
rafa2000
I just got Dark Money and am reading through it. It is very revealing.
[https://www.amazon.com/Dark-Money-History-Billionaires-
Radic...](https://www.amazon.com/Dark-Money-History-Billionaires-
Radical/dp/0307947904/ref=sr_1_5?s=books&ie=UTF8&qid=1518193379&sr=1-5&keywords=dirty+money)
------
thisisit
In which field? You have to realize there are tons of dirty or underhanded
practices across every industry. Chronicling each of them is nigh impossible.
So pick an industry and you will find yourself swamped with suggestions.
------
aflinik
Not a book, but you might like the Dirty Money series on Netflix.
------
iron0012
There sure is: Capital by Karl Marx.
(I wonder if down-voters have actually read this, or if it's just a bogeyman
to them?)
~~~
mcphage
It’s just not a very good answer to the question.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Does Reddit now also force you to use app? - drummer
I tried to visit /r/cpp just now via Firefox on mobile and got the following
message on Reddit:
"This community is available in the app. To view posts in r/cpp you must
continue in Reddit app or log in."
======
sarcasmatwork
Try: [https://old.reddit.com/r/cpp/](https://old.reddit.com/r/cpp/)
------
kuesji
I think this is A/B testing. I see this often, but not every time.
------
detaro
not seen that yet
| {
"pile_set_name": "HackerNews"
} |
Show HN: Practical Modern JavaScript - bevacqua
https://ponyfoo.com/books/practical-modern-javascript
======
bevacqua
OP here:
Just published this book on Amazon, and it's also free to read online[1][2].
It covers ES6 in a comprehensive and practical manner, focusing on how
features can be used to write better code. The book also goes beyond ES6 to
explain things like async/await, async iterators and generators,
Intl.Segmenter, proposals to improve regexp's unicode support, and so on.
[1]: [https://github.com/mjavascript/practical-modern-
javascript](https://github.com/mjavascript/practical-modern-javascript)
[2]: [https://ponyfoo.com/books/practical-modern-
javascript/chapte...](https://ponyfoo.com/books/practical-modern-
javascript/chapters/1#read)
| {
"pile_set_name": "HackerNews"
} |
Hackers conquer Tesla’s in-car web browser and win a Model 3 - sahin-boydas
https://techcrunch.com/2019/03/23/hackers-conquer-tesla-and-win-a-model-3/
======
Corrado
This is a really great accomplishment. Tesla says that they only breached the
entertainment center but other car manufacturers have said similar things,
only to have the attackers be able to flash the lights or open doors. I wonder
how serious this hack really is?
As a side note, I'm impressed with how engaged Tesla is with the "hacker"
community. Not only do they put their products directly in the path of people
trying to break their products, they are increasing the bounty as well!
------
gcb0
wish my ad blocker blocked those paid articles too.
~~~
sahin-boydas
I really think Hacker News should put a small icon on paid links, and also
add a fact checker.
| {
"pile_set_name": "HackerNews"
} |
Operation Crossbow: How 3D glasses helped win WWII - JacobAldridge
http://www.bbc.co.uk/news/magazine-13359064
======
rrrazdan
The same technique was used later in satellites to map terrain. I had the
chance to see an image of the Himalayas in my remote sensing class. You don't
use any special glasses and consequently experience minimal eyestrain, even
when you view the image for hours. (This is important if you are scouring the
image for some small detail.) And the 3D effect is so much better and more
lifelike than current 3D in movies. I wonder what it would take to bring that
kind of experience to devices today.
~~~
TheloniusPhunk
I have always wondered why 3-d tech is so gimmicky and silly looking. Life is
3-d, and it looks good.
~~~
Groxx
Because when you look around in the "real world" _you_ control the focus and
depth. 3D otherwise chooses that for you, so you're stuck being dragged around
unnaturally.
~~~
stcredzero
I wonder if someone has developed a stereoscope interface to Google Earth?
~~~
Groxx
Maybe? I thought I'd seen something that did red/blue anaglyphs a while back.
A quick Googling found this: [http://freegeographytools.com/2009/3d-anaglyph-
views-in-goog...](http://freegeographytools.com/2009/3d-anaglyph-views-in-
google-earth) and others that look older.
~~~
etcet
Press '3' in Google Maps street view
~~~
Groxx
Hah! Never knew that one. I meant Earth specifically, though.
| {
"pile_set_name": "HackerNews"
} |
Show HN - My Weekend Project: Twitter Secret Santa - biggitybones
http://thegreattwittersecretsanta.com/
======
ben1040
From the terms:
_IF YOU CHOOSE TO PARTICIPATE IN THE GIFT EXCHANGE AND YOU DO NOT SEND A
GIFT, YOUR TWITTER.COM USERNAME WILL BE POSTED PUBLICLY._
I didn't go any further, but didn't see this anywhere else but in the terms
page. Are you going to run a list of users who do not follow through?
~~~
biggitybones
Probably far too strong a choice of language. It's not something I plan on
doing, and I'm going to edit the terms to reflect that.
Sort of a line to encourage social responsibility :)
| {
"pile_set_name": "HackerNews"
} |
Training Computers to Find Future Criminals - nigrioid
http://www.bloomberg.com/features/2016-richard-berk-future-crime/
======
Houshalter
I could not disagree more with these comments. Psychologists are just now
starting to study the phenomenon of "algorithm aversion", where people
irrationally trust human judgement far more than algorithms, even after
watching an algorithm do far better in many examples.
The reality is humans are far worse. We are biased by all sorts of things.
Unattractive people were found to get twice as long sentences as attractive
ones. Judges were found to give much harsher sentences right before lunch
time, when they were hungry. Doing interviews was found to decrease the
performance of human judges, in domains like hiring and determining parolle.
As opposed to just looking at the facts.
Even very simple statistical algorithms far outperform humans in almost every
domain. As early as 1928, a simple statistical rule predicted recidivism
better than prison psychologists. They predict the success of college
students, job applicants, outcomes of medical treatment, etc, far better than
human experts. Human experts never even beat the most basic statistical
baseline.
You should never ever trust human judges. They are neither fair nor accurate.
In such an important domain as this, where better predictions reduce the time
people spend in prison and crime, there is no excuse not to use them. Anything
that gets low risk people out of prison is good.
I believe that any rules that apply to algorithms should apply to humans too.
We are algorithms too after all. If algorithms have to be blind to race and
gender, so should human judges. If economic information is bad to use, humans
should be blind to it also. If we have a right to see why an algorithm made a
decision the way it did, we should be able to inspect human brains too.
Perhaps put judges and parole officers in an MRI.
~~~
Smerity
Stating that method A is problematic does not automatically mean method B is
better.
> The reality is humans are far worse
Citation needed - especially when comparing against a specific instantiation
of a machine learning model. Papers published by the statistician in the
article used only 516 data points. Most data scientists running an A/B test
wouldn't change their homepage with only 516 data points. There's no guarantee
the methods he is using for the parole model involve better datasets or models
without deep flaws.
An algorithm or machine learning model is not magically less biased than the
process it is replacing. Indeed, if it's trained on biased data, as you
believe by stating "never ever trust human judges", then the models are
inherently biased in the exact same way.
If you give a machine learning model a dataset where one feature appears
entirely indicative (remember: correlation is not causation), it can overfit
to that, even if that does not reflect reality.
I highly recommend reading "How big data is unfair: understanding unintended
sources of unfairness in data driven decision making"[1], by Moritz Hardt, a
Google machine learning researcher who has published on the topic (see:
Fairness, Accountability, Transparency). It is a non-technical and general
introduction to some of the many issues that can result in bias and prejudice
in machine learning models. To summarize, "machine learning is not, by
default, fair or just in any meaningful way".
Algorithms and machine learning models _can_ be biased, for many reasons.
Without proper analysis, we don't know whether it's a good or bad model, full
stop.
[1]: [https://medium.com/@mrtz/how-big-data-is-
unfair-9aa544d739de...](https://medium.com/@mrtz/how-big-data-is-
unfair-9aa544d739de#.ykd1vedz1)
~~~
Houshalter
>Citation needed - especially when comparing against a specific instantiation
of a machine learning model. Papers published by the statistician in the
article used only 516 data points. Most data scientists running an A/B test
wouldn't change their homepage with only 516 data points. There's no guarantee
the methods he is using for the parole model involve better datasets or models
without deep flaws.
516 is more than enough to fit a simple model, as long as you use
cross-validation and held-out tests to make sure you aren't overfitting. 516
data points is more than a person needs to see to be called an "expert". Many
of the algorithms I referenced used fewer data points, or even totally
unoptimized weights, and still beat human experts.
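To make the cross-validation point concrete, here is a minimal sketch on
synthetic data (the features, labels, and model are invented for
illustration; they are not the ones used by any real parole system):
    # 5-fold cross-validation of a simple model on ~500 synthetic records.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    rng = np.random.default_rng(0)
    X = rng.normal(size=(516, 10))                        # 516 cases, 10 features
    y = (X[:, 0] + rng.normal(size=516) > 0).astype(int)  # toy outcome label
    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(scores.mean(), scores.std())  # held-out AUC is the honest estimate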
>An algorithm or machine learning model is not magically less biased than the
process it is replacing. Indeed, if it's trained on biased data, as you
believe by stating "never ever trust human judges", then the models are
inherently biased in the exact same way.
We have ground truth though. Whether someone will be convicted is a fairly
objective measure. Even if it's slightly biased, it's still the best possible
indicator we have of whether or not someone should be released. If you had a
time machine that could go into the future and see who would be convicted,
would you still argue against using that information, because it might be
biased? Leaving people to rot in prison, even if all the statistics point to
them being very low risk, is just wrong.
>"machine learning is not, by default, fair or just in any meaningful way"
_Humans are not, by default, fair or just in any meaningful way._ Nor are
they accurate at prediction. Any argument you can possibly use against
algorithms applies even more to humans. That's my entire point. You should
trust humans far, far less than you do.
~~~
pdkl95
> You should trust humans far, far less than you do.
Which is why I don't trust _the people picking the algorithm_. You still have
human bias, but now they are easier to hide behind complicated algorithms and
unreliable data.
edit: removed original editing error
edit2: You say I should trust the algorithm, but y9u seem to be going out of
your way to ignore that the algorithm itself has to be created by someone. You
haven't reduced the amount of bias; trusting an algorithm simply codifies the
author's bias.
~~~
Houshalter
You should trust the "complicated" algorithm far more than you trust the
complicated human brain of the judge, who was also trained on unreliable data.
Look, it's easy to verify whether parole officers are better at predicting
recidivism than an algorithm. If the algorithm is objectively better, then it
should be used.
~~~
kragen
Given an unbiased algorithm that does better at predicting recidivism, it
would be easy to _deliberately_ construct an algorithm that does almost as
much better, but is egregiously biased. For example, if you had been wronged
by somebody named Thiel, you could persuade it to never recommend parole for
anybody named Thiel. There aren't enough people named Thiel for this to
substantially worsen its measured performance.
Given that it's easy to construct an example of how you could _deliberately_
do this, and it's so easy to accidentally overfit machine-learning algorithms,
we should be very concerned about people _accidentally_ doing this. An easy
way would be to try a few thousand different algorithm variants and have a
biased group of people eyeball the results to see which ones look good. If
those people are racist, for example, they could subconsciously give undue
weight to type 1 errors for white prisoners and type 2 errors for black
prisoners, or vice versa.
The outcome of the process would be an algorithm that is "objectively better"
by whatever measure you optimize it for, but still unjustly worsens the
situation for some group of people.
A potential advantage of algorithms and other rules is that, unlike the brain
of the judge, they can be publicly analyzed, and the analyses debated. This is
the basis for the idea of "rule of law". Aside from historical precedents,
though, the exploitation of the publicly-analyzed DAO algorithms should give
us pause on this count.
Deeper rule of law may help, but it could easily make the situation worse. We
must be skeptical, and we must do better.
------
imh
I think the whole idea here is frightening and unjust. We are supposed to give
all people equal rights. What people might do is irrelevant. A person whose
demographic/conditional expectation is highly criminal should be given an
equal opportunity to rise above it, else they might see the system is rigged
against them and turn it into a self-fulfilling prophecy.
~~~
Taek
It's frightening depending on how you use the data.
A good example perhaps is that I like to horse around when I'm at the beach.
I'm more like to get hurt than others who are more cautious. I'm also more
likely to hit people accidentally.
I had some parents of younger children approach me and ask me to stay on the
far side of the beach. On one hand it felt rude, but on the other it allowed
me to be rambunctious and it allowed the parents to prioritize their
children's safety.
The world isn't flat enough for this to be a reality yet, but if you cluster
people by their morals, you don't have to throw them in jail. Put all the drug
users together. Keep the drugs away from people who don't want anything to do
with them.
Usually if people are more likely to commit crimes, it's either because they
are desperate (which means successful intervention is likely provided you can
solve their core problems), or it's because they find that
activity/action/crime to be morally or culturally acceptable. To the extent
that you can exclude that culture from your own daily life, you don't have to
punish/kill that culture.
Pollution is a good counter example. You can't really isolate a culture of
pollution because it's going to affect everyone else anyway. So there are
limits.
As long as our methods for dealing with criminals evolve appropriately against
our ability to detect them, I am okay.
Human history is full of genocide though. I don't think that bodes well for
our ability to respect cultures that allow or celebrate things we consider to
be crimes.
------
moconnor
"between 29 percent and 38 percent of predictions about whether someone is
low-risk end up being wrong"
Wouldn't win a Kaggle contest with that error rate. What's not disclosed is
the percent of predictions about whether someone is high-risk ending up being
wrong. These are the ones society should be worried about.
And these are the ones that are, if such a system is put into practice,
impossible to track. Because all the high-risk people are locked up. The
socio-political fallout of randomly letting some high-risk people free to
validate the algorithm makes this inevitable.
This leaves us in a situation where political pressure is _always_ towards
reducing the number of people classified as low-risk who then re-offend.
Statistical competence is not prevalent enough in the general population to
prevent this.
TL;DR our society is either not well-educated enough or is improperly
structured to correctly apply algorithms for criminal justice.
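A small simulation of the evaluation problem described above: once the
high-risk group is detained, their outcomes are never observed, so the only
measurable error rate is the one for people labelled low-risk. All numbers
are invented.
    # Outcomes are only observable for released (predicted low-risk) people.
    import numpy as np
    rng = np.random.default_rng(5)
    n = 50_000
    risk = rng.random(n)                                         # true reoffence probability
    predicted_high = risk + rng.normal(scale=0.3, size=n) > 0.7  # noisy model
    reoffends = rng.random(n) < risk
    released = ~predicted_high
    print(reoffends[released].mean())   # observable: error rate among "low risk"
    # reoffends[~released] is what auditing the "high risk" label would need,
    # but those people stay locked up, so it is never observed.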
~~~
Houshalter
The question is whether human judges do better, and they don't. We have no
better method of determining whether someone is low risk or high risk. But
keeping everyone locked up forever is just stupid. If these predictions let
some people get out of prison sooner, I think that is a net good.
------
brillenfux
The nonchalance of these people is what really terrifies me.
They just laugh any valid criticism off and start using the references
"ironically" themselves.
I don't understand how they can do that; do they not have a moral compass?
Are they psychopaths?
~~~
brillenfux
What happened here? There was a whole thread coming after that?!
~~~
dang
[https://news.ycombinator.com/item?id=12123453](https://news.ycombinator.com/item?id=12123453)
------
sevenless
The entire concept of using statistical algorithms to 'predict crime' is
wrong. It's just a kind of stereotyping.
What needs to happen is a consideration of the social-justice outcomes if
'profiling algorithms' become widely used. Just as in any complicated system,
you cannot simply assume reasonable looking rules will translate to desirable
emergent properties.
It is ethically imperative to aim to eliminate disparities and social
inequalities between races, even if, and this is what is usually left unsaid,
_judgments become less accurate in the process_.
Facts becoming common knowledge can harm people, even if they are true.
Increasingly accurate profiling will have bad effects at the macro scale, and
keep marginalized higher-crime groups permanently marginalized. If it were
legal to use all the information to hand, it would be totally rational for
employers to discriminate against certain groups on the basis of a higher
group risk of crime, and that would result in those groups being marginalized
even further. We should avoid this kind of societal positive feedback loop.
If you accept that government should want to avoid a segregated society, where
some groups of people form a permanent underclass, you should avoid any
algorithm that results in an increased differential arrest rate for those
groups, _even if that arrest rate is warranted by actual crimes committed_.
"The social norm against stereotyping, including the opposition to profiling,
has been highly beneficial in creating a more civilized and more equal
society. It is useful to remember, however, that _neglecting valid stereotypes
inevitably results in suboptimal judgments_. Resistance to stereotyping is a
laudable moral position, but the simplistic idea that the resistance is
costless is wrong. The costs are worth paying to achieve a better society, but
denying that the costs exist, while satisfying to the soul and politically
correct, is not scientifically defensible. Reliance on the affect heuristic is
common in politically charged arguments. The positions we favor have no cost
and those we oppose have no benefits. We should be able to do better."
–Daniel Kahneman, Nobel laureate, in Thinking, Fast and Slow, chapter 16
~~~
wtbob
> It is ethically imperative to aim to eliminate disparities and social
> inequalities between races, even if, and this is what is usually left
> unsaid, judgments become less accurate in the process.
Why? Why is it 'imperative' to be wrong?
> Facts becoming common knowledge can harm people, even if they are true.
Well, they can harm people who, statistically speaking, are more likely to be
bad.
If anything, I see accurate statistical profiling being helpful to black
folks. Right now, based on FBI arrest data, a random black man is 6.2 times as
likely to be a murderer as a random white man; a good statistical profiling
algorithm would be able to look at an _individual_ black man and see that he's
actually a married, college-educated middle-class recent immigrant from
Africa, who lives in a low-crime area — and say that he's _less_ likely than a
random white man to be a murderer.
Perhaps it could even look at an individual black man, the son of a single
mother from the projects, and see that he's actually _not_ like others whom
those phrases would describe, because of other factors the algorithm takes
into account.
> If you accept that government should want to avoid a segregated society,
> where some groups of people form a permanent underclass, you should avoid
> any algorithm that results in an increased differential arrest rate for
> those groups, even if that arrest rate is warranted by actual crimes
> committed.
That statement implies that we should avoid the algorithm 'arrest anyone who
has committed a crime, and no-one else,' because that algorithm will
_necessarily_ result in increased differential arrest rates. On the contrary,
I think that algorithm is obviously ideal, and thus any heuristic which leads
to rejecting it should itself be rejected.
~~~
pc86
> _a random black man is 6.2 times as likely to be a murderer as a random
> white man_
But I bet the likelihood of a random man or random person being a murderer is
so low that "6.2 times" doesn't really tell you much about the underlying
data.
------
andrewaylett
I like the proposal from the EU that automated decisions with a material
impact must firstly come with a justification -- so the system must be able to
tell you _why_ it came out with the answer it gave -- and must have the right
of appeal to a human.
The implementation is the difficult bit, of course, but as a principle, I
appreciate the ability to sanity-check outputs that currently lack
transparency.
~~~
dasboth
Interpretability is going to be a huge area of research in machine learning,
what with the advent of deep learning techniques. It's hard enough explaining
the output of a random forest; what about a deep net with 100 layers? In some
cases it doesn't matter, e.g. you generally don't care why Amazon thinks you
should buy book A over book B, but in instances where someone's prison
sentence is the output, it will be vital.
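As a rough illustration of the gap being described, here is a minimal sketch
of the kind of global explanation a random forest offers out of the box
(synthetic data; the feature names are invented). Even this is only a crude
importance ranking, not a per-decision justification, and deep nets don't
give you even that without extra tooling:
    # A random forest at least exposes a crude global feature ranking.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    rng = np.random.default_rng(2)
    X = rng.normal(size=(1000, 4))
    signal = X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000)
    y = (signal > 0).astype(int)
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    names = ["prior_arrests", "age", "employment", "zip_code"]  # invented labels
    for name, score in zip(names, forest.feature_importances_):
        print(name, round(score, 2))  # a ranking, not a per-case explanation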
------
Smerity
As someone who does machine learning, this absolutely terrifies me. The
"capstone project" of determining someone's probability of committing a crime
by their 18th birthday is beyond ridiculous. Either the author of the article
hyped it to the extreme (for the love of everything that's holy, stop freaking
hyping machine learning) or the statistician is stark raving mad.
The fact that he does this for free is also concerning, primarily as I doubt
this has any level of auditing behind it. The only thing I agree with him on
is that black box models are even worse as they have even worse audit issues.
Given the complexities in making these predictions and the potentially life
long impact they might have, there is such a desperately strong need for these
systems to have audit guarantees. It's noted that he supposedly shares the
code for his systems - if so, I'd love to see it? Is it just shared with the
relevant governmental departments who likely have no ability to audit such
models? Has it been audited?
Would you trust mission critical code that didn't have some level of unit
testing? Some level of code review? No? Then why would you potentially
destructively change someone's life based on that same level of quality?
> "[How risk scores are impacted by race] has not been analyzed yet," she
> said. "However, it needs to be noted that parole is very different than
> sentencing. The board is not determining guilt or innocence. We are looking
> at risk."
What? Seriously? Not analyzed? The other worrying assumption is that it isn't
used in sentencing. People have a tendency to seek out and misuse information
even if they're told not to. This was specifically noted in another article on
the misuse of Compas, the black box system. Deciding on parole also doesn't
mean you can avoid analyzing bias. If you're denying parole for specific
people algorithmically, that can still be insanely destructive.
> Berk readily acknowledges this as a concern, then quickly dismisses it. Race
> isn’t an input in any of his systems, and he says his own research has shown
> his algorithms produce similar risk scores regardless of race.
There are so many proxies for race within the feature set. It's touched on
lightly in the article - location, number of arrests, etc - but it gets even
more complex when you allow a sufficiently complex machine learning model
access to "innocuous" features. Specific ML systems ("deep") can infer hidden
variables such as race. Even location is a brilliant proxy for race as seen in
redlining[1]. It does appear from his publications that they're shallow models
- namely random forests, logistic regression, and boosting[2][3][4].
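To make the proxy point concrete, here is a small hypothetical sketch: even
with the protected attribute removed from the features, a model trained on a
strongly correlated proxy (a made-up neighborhood code standing in for
redlining-style segregation) largely reconstructs it. Everything below is
synthetic; it is not Berk's model or data.
    # A "race-blind" feature set can still encode race via a proxy variable.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    rng = np.random.default_rng(3)
    n = 20_000
    protected = rng.integers(0, 2, n)                # hidden attribute, not a feature
    flip = (rng.random(n) < 0.1).astype(int)         # 10% live "off-pattern"
    neighborhood = (protected ^ flip).astype(float)  # 90%-aligned proxy
    X = np.column_stack([neighborhood, rng.normal(size=n)])  # proxy + noise
    clf = LogisticRegression().fit(X, protected)
    print(clf.score(X, protected))                   # ~0.9: attribute recovered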
FOR THE LOVE OF EVERYTHING THAT'S HOLY STOP THROWING MACHINE LEARNING AT
EVERYTHING. Think it through. Please. Please please please. I am a big
believer that machine learning can enable wonderful things - but it could also
enable a destructive feedback loop in so many systems.
Resume screening, credit card applications, parole risk classification, ...
This is just the tip of the iceberg of potential misuses for machine learning.
Edit: I am literally physically feeling ill. He uses logistic regression,
random forests, boosting ... standard machine learning algorithms. Fine. Okay
... but you now think the algorithms that might get you okay results on Kaggle
competitions can be used to predict a child's future crimes?!?! WTF. What. The
actual. ^^^^.
Anyone who even knows the hello world of machine learning would laugh at this
if the person saying it wasn't literally supplying information to governmental
agencies right now.
I wrote an article last week on "It's ML, not magic"[5] but I didn't think I'd
need to cover this level of stupidity.
[1]:
[https://en.wikipedia.org/wiki/Redlining](https://en.wikipedia.org/wiki/Redlining)
[2]:
[https://books.google.com/books/about/Criminal_Justice_Foreca...](https://books.google.com/books/about/Criminal_Justice_Forecasts_of_Risk.html?id=Jrlb6Or8YisC&printsec=frontcover&source=kp_read_button&hl=en#v=onepage&q&f=false)
[3]: [https://www.semanticscholar.org/paper/Developing-a-
Practical...](https://www.semanticscholar.org/paper/Developing-a-Practical-
Forecasting-Screener-for-Berk-He/6999981067428dafadd10aa736e4b5c293f89823)
[4]: [https://www.semanticscholar.org/paper/Algorithmic-
criminolog...](https://www.semanticscholar.org/paper/Algorithmic-criminology-
Berk/226defcf96d30cf0a17c6caafd60457c9411f458)
[5]:
[http://smerity.com/articles/2016/ml_not_magic.html](http://smerity.com/articles/2016/ml_not_magic.html)
~~~
Houshalter
> There are so many proxies for race within the feature set.
Yeah but, so what? Surely you don't believe race is a strong predictor after
controlling for all the hundred other things? Algorithms are not prejudiced
and it has no reason to use racial information when so much other data is
available.
Even if somehow race was a strong predictor of crime in and of itself, so
what? Lets say economic status correlates with race, and it uses that as a
proxy. It still isn't treating a poor white person different than a poor black
person.
And if it makes a prediction like "poor people are twice as likely to commit a
crime", well it's objectively true based on the data. Its not treating the
group of poor people unfairly. They really are more likely to commit crime.
~~~
pdkl95
> Surely you don't believe race is a strong predictor after controlling for
> all the hundred other things?
It can be if you select the right data, algorithms, and analysis method.
> Algorithms are not prejudiced
That's correct. However, the _selection_ of algorithm and input data is
heavily biased. You're acting like there is some sort of formula that is
automagically available for any particular social question, with unbiased and
error free input data. In reality, data is often biased and a proxy for
prejudice.
> It still isn't treating a poor white person different than a poor black
> person.
I suggest spending a lot more time exploring how people actually use available
tools. You seem aware of how humans bring biased judgment, but you are
assuming that the _creation_ of an algorithmic tool and _use_ of that tool in
practice will somehow be free of that same human bias? Adding a complex
algorithm makes it easy to _hide_ prejudice; it doesn't do much to eliminate
prejudice.
> Its not treating the group of poor people unfairly.
Yes, it is. The entire point of this type of tool is to create a new way we
can _pre-judge_ someone based not on their individual behavior, but on a
separate group of people that happens to share an arbitrary set of attributes
and behaviors.
The problems of racism, sexism, and other types of prejudice don't go away
when you target a more complicated set of people. You're still pre-judging
people based on group association instead of treating them as an individual.
------
ccvannorman
>Risk scores, generated by algorithms, are an increasingly common factor in
sentencing. Computers crunch data—arrests, type of crime committed, and
demographic information—and a risk rating is generated. The idea is to create
a guide that’s less likely to be subject to unconscious biases, the mood of a
judge, or other human shortcomings. Similar tools are used to decide which
blocks police officers should patrol, where to put inmates in prison, and who
to let out on parole.
So, eventually a robot police officer will arrest someone for having the wrong
profile.
>Berk wants to predict at the moment of birth whether people will commit a
crime by their 18th birthday, based on factors such as environment and the
history of a new child’s parents. This would be almost impossible in the U.S.,
given that much of a person’s biographical information is spread out across
many agencies and subject to many restrictions. He’s not sure if it’s possible
in Norway, either, and he acknowledges he also hasn’t completely thought
through how best to use such information.
So, we're not sure how dangerous this will be, or how Minority Report
thoughtcrime will work, but we're damned sure we want it, because it's the
future and careers will be made?
This is a very scary trend in the U.S. Eventually, if you're born poor or
into a bad childhood, you will have even _less_ of a chance of making it.
~~~
sliverstorm
On the bright side, if we can pinpoint at-risk children with high accuracy,
we can also _help_ them make it.
Like the attempts to deradicalize individuals at very high risk of flying off
to Syria, rather than arresting them.
~~~
seanmcdirmid
We can already pinpoint at-risk children with high accuracy: just check out
any inner-city ghetto. We just lack the caring needed to do anything about
the root cause (mostly poverty) that causes kids to go bad later. It isn't a
mystery.
------
kriro
Predictive policing is quite the buzzword these days. IBM (via SPSS) is one
of the big players in the field. The most common use case is burglary, I
suspect because that's somewhat easy (and also directly actionable). You
rarely find other use cases in academic papers (well I only browsed the
literature a couple of times preparing for related projects).
The basic idea is sending more police patrols to areas that are identified as
high threat and thus using your available resources more efficiently. The
focus in that area is more on objects/areas than on individuals so you don't
try to predict who's a criminal but rather where they'll strike. It sounds
like a good enough idea in theory but at least in Germany I know that research
projects for predictive policing will be scaled down due to privacy concerns
even if the prediction is only area and not person based (noteworthy that
that's usually mentioned by the police as a reason why they won't participate
in the research). I'm not completely sure and only talked to a couple of state
police research people but quite often the data also involves social media in
some way and that's the major problem from what I can tell.
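A toy sketch of the area-based approach described above (synthetic incident
coordinates; real systems use richer features, temporal models, and far more
care):
    # Rank grid cells by historical incident counts to prioritise patrols.
    import numpy as np
    rng = np.random.default_rng(6)
    incidents = rng.normal(loc=[2.0, 3.0], scale=1.0, size=(1_000, 2))  # x, y in km
    counts, _, _ = np.histogram2d(incidents[:, 0], incidents[:, 1], bins=10)
    top_flat = np.argsort(counts, axis=None)[::-1][:3]
    print(np.unravel_index(top_flat, counts.shape))  # the three busiest cells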
~~~
jmngomes
> IBM (via SPSS) is one of the big players in the field
They have been pitching "crime prediction" since at least 2010 with no real
results so far...
~~~
antisthenes
The results are that IBM's consulting arm is flourishing from all the crime
prediction contracts.
------
peterbonney
Here's something I really dislike about all the coverage I've seen about these
"risk assessment algorithms": There is absolutely no discussion of the
magnitude of the distinctions between classifications. Is "low risk" supposed
to be (say) 0.01% likelihood of committing another crime and "high risk" (say)
90%? Or is "low risk" (say) 1% vs. "high risk" of (say) 3%?
Having worked on human some predictive modeling of "bad" human events (loan
defaults) my gut says it's more like the latter than the former, because
prediction of low-frequency human events is _really_ hard, and, well, they're
by definition infrequent. If that suspicion is right, then the signal-noise
ratio is probably too poor to even consider using them in sentencing, and
that's _without_ considering the issues of bias in the training data, etc.
But there is never enough detail provided (on either side of the debate) for
me to make an informed assessment. It's just a lot of optimism on one side and
pessimism on the other. I'd really love to see some concrete, testable claims
without having to dive down a rabbit hole to find them.
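A purely illustrative calculation of why the magnitude matters: with a low
base rate, even a reasonably accurate classifier's "high risk" label can
correspond to a modest absolute probability. The sensitivity, specificity,
and base rates below are invented, not taken from any deployed system.
    # How the base rate turns a "high risk" flag into an absolute probability.
    def p_reoffend_given_flagged(base_rate, sensitivity=0.8, specificity=0.8):
        true_pos = base_rate * sensitivity
        false_pos = (1 - base_rate) * (1 - specificity)
        return true_pos / (true_pos + false_pos)
    for base_rate in (0.01, 0.03, 0.30):
        p = p_reoffend_given_flagged(base_rate)
        print(f"base rate {base_rate:.0%} -> P(reoffend | flagged) = {p:.0%}")
    # 1% -> 4%, 3% -> 11%, 30% -> 63%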
------
conjectures
What is Berk's model? How well does it do across different risk bands? What
variables are fed into it in the states where it is used? How does prediction
success vary across types of crimes, versus demographics within crime?
This article treats ML like a magic wand, which it isn't. There's not enough
information to make a judgement on whether the tools are performing well or
not, or whether that performance, or lack of it, is based on discrimination.
Where we do have information it is worrying:
"Race isn’t an input in any of his systems, and he says his own research has
shown his algorithms produce similar risk scores regardless of race."
What?!? The appropriate approach would be to include race as a variable, fit
the model, and then marginalise out race when providing risk predictions.
Confounding is mentioned, but no explanation is given of how it is dealt with
without doing the above; there is just a (most likely false) reassurance.
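A minimal sketch of the "fit with the attribute, then marginalise it out"
idea described above, on synthetic data. The unweighted average over the
attribute's values is a simplification; a real implementation would weight by
the population distribution and needs far more care.
    # Include the protected attribute when fitting, average it out when predicting.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    rng = np.random.default_rng(4)
    n = 10_000
    race = rng.integers(0, 2, n)
    x = rng.normal(size=n) + 0.3 * race                 # feature correlated with race
    y = (0.8 * x + rng.normal(size=n) > 0).astype(int)  # outcome driven by x alone
    clf = LogisticRegression().fit(np.column_stack([x, race]), y)
    def marginalised_risk(x_new):
        # Average the prediction over the attribute instead of conditioning on it.
        probs = [clf.predict_proba([[x_new, r]])[0, 1] for r in (0, 1)]
        return float(np.mean(probs))
    print(marginalised_risk(1.0))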
------
anupshinde
This is like machine-introduced bias/racism/casteism... we need a new term
for that. And it's based on statistically induced pseudo-science, often
similar to astrology. This is the kind of AI everyone should be afraid of.
~~~
benkuykendall
I fail to see what is unscientific about stating conditional probabilities.
Astrology is unscientific because the orientation of the heavens is for the
most part independent of the observables of a person's life. But the inputs to
Berk's system clearly do effect the probability of committing crime. A
frequentist would say that a large group of people with this set of
characteristics would yield more or fewer criminals; a Bayesian would say we
have more knowledge about whether such an individual will commit future crime.
These are scientific conclusions. The question "how should we use this data"
is a question of ethics, not science.
~~~
eyelidlessness
> I fail to see what is unscientific about stating conditional probabilities.
1\. Create conditions which disadvantage and impoverish a segment of society.
2\. Refine those conditions for centuries, continually criminalizing the
margins that segment of society is forced to live in.
3\. Identify that many of the people in that segment of society are likely to
be identified as criminals.
4\. Pretend that you're doing math rather than reinforcing generations of
deleterious conditioning, completely ignoring the creation of those conditions
that led to the probabilities you're identifying.
And science can't be divorced from ethics. These are human pursuits.
~~~
vintermann
This looks like sloppy ML. But human judges do all those things already
(substitute "just applying the law" for "doing math" in 4) and they can't be
inspected - their brains are "closed source".
Sure, these humans can come up with wordy justifications for their decisions.
But there are plenty of intelligent people who employ their intelligence not
to arrive at a conclusion, but to justify the conclusion they already arrived
at. Legal professionals aren't merely capable of this, they're explicitly
trained to be experts at post-hoc justification.
And legal professionals basically ignore all criticism not coming from their
own professional class. They are rarely taught any kind of statistics in law
school. Nobody wants to discuss math in debate club - answers with hard right
and wrong answers are no fun to them.
Your pessimism wrt. modeling may be justified, but you're not nearly
pessimistic enough about people or the legal system.
~~~
eyelidlessness
I frankly don't understand your response. I described a list of despicable
things humans have done, and you're suggesting that I'm not pessimistic about
people.
------
acd
Would the following be common risk factors for a child becoming a future
criminal? Would it not be cheaper for society to invest in these at-risk
children early on rather than dealing with their actions as adults? Minority
Report. What are your observations about risk factors? Have there been any
social-science interviews of prisoners whose backgrounds were fed into
classification engines?
Classification ideas:
* Bad parents not raising their child
* Living in a poor neighbourhood with lots of crime
* Going to a bad school
* Parents who are workaholics
* Single parent
* Parent who is in jail
------
nl
For those who haven't read it, the ProPublica article on this is even better (and
scarier): [https://www.propublica.org/article/machine-bias-risk-
assessm...](https://www.propublica.org/article/machine-bias-risk-assessments-
in-criminal-sentencing)
------
phazelift
It might be a better idea to first train computers to define criminality
objectively, because most people cannot.
~~~
jobigoud
> most people cannot
So why do we hold computers to higher standards than humans? Either it's OK to
not being able to define criminality objectively and in that case algorithms
shouldn't be disqualified for this very reason, or it's not OK, but in that
case humans should not be allowed to do the job either.
~~~
phazelift
Convicting people of criminal behaviour based on a subjective definition of
it is what we already do wrong. Way too many innocent people end up being
punished or killed, which I expect is (objectively) a criminal act in itself.
So I thought we'd better first have a tool that solves that, instead of a
tool that amplifies it.
------
Digit-Al
I find this really interesting. I think what most people seem to be missing is
the wider social context. Think about this. If you exclude white collar
financial crime, pre-meditated murder, and organised crime - most other crimes
are committed by the socially disadvantaged. So, if the algorithm identifies
an area where crime is more likely to be committed, instead of being narrow
minded and just putting more police there to arrest people, why not instead
try to institute programs to raise the socioeconomic status of the area?
People are just concentrating on the crime aspect, but most crime is just a
symptom of social inequality.
~~~
darpa_escapee
The American thing to do is to cry "personal responsibility" and treat the
symptoms with jail time, fines and a lengthened rap sheet.
Suggesting that we treat the cause suggests we all have responsibilities as
members of communities to ensure no one is in a place where crime might make
sense despite the consequences.
------
mc32
The main question should be, as with autonomous vehicles: does this system
perform better than people (however you want to qualify that)? If so, it's
better than what we have.
Second, even if it's proven better (fewer false positives, fewer unduly
biased results), it can be improved continuously.
There is a danger that people may not like the results, because if we take
this and diffuse it, it has the potential to shape people's behavior in
unintended ways (gaming); on the other hand, this system has the potential
for objectivity when identifying white-collar crime, that is, surfacing it
better.
------
justaaron
gee, what could possibly go wrong, Mr. Phrenologist?
SOMEONE seems to have viewed Minority Report as a utopia rather than a
dystopia, I'm afraid.
------
DisgustingRobot
I'm curious how good an algorithm would be at identifying future white collar
criminals. What would the risk factors be for things like insider trading,
political corruption, or other common crimes?
------
liberal_arts
Consider the (fictional) possibility that an AI will be
" actively measuring the populace's mental states, personalities, and the
probability that individuals will commit crimes "
[https://en.wikipedia.org/wiki/Psycho-
Pass](https://en.wikipedia.org/wiki/Psycho-Pass)
AI may be worth the trade-off if violent crime can be almost eliminated.
Or consider (non-fiction): body-language/facial detection at airports; what
if they actually start catching terrorists?
~~~
vegabook
There is a school of thought that says some crime is necessary for a healthy,
functioning society. Personally, while I would hate to be the victim of
violent crime (obviously), I actually do agree that cities with very low crime
levels are often stultifyingly uncharismatic.
[https://www.d.umn.edu/~bmork/2111/readings/durkheimrules.htm](https://www.d.umn.edu/~bmork/2111/readings/durkheimrules.htm)
------
jamesrom
What is Bloomberg's MO with these near-unreadable articles?
~~~
puddintane
Still rocking Paint like it's 1995 - Bloomberg.
The color choice is just yikes!
Did anyone else get reminded of Futurama - Law and Oracle (S06E16 / E104)?
[1]
[https://en.wikipedia.org/wiki/Law_and_Oracle](https://en.wikipedia.org/wiki/Law_and_Oracle)
I do wonder if this type of technology is something we should approach
slowly, given the very nature of sentencing outcomes. We already incarcerate
a lot of innocent people, and I truly wonder if this is something we should
tread lightly on.
~~~
nickles
Law and Oracle is an adaptation of PKD's short story "Minority Report" in
which the Pre-Cog (short for pre-cognition) department polices based on
glimpses of future crime. Orwell's "1984" explored the similar, but distinct,
notion of 'thoughtcrime'. Both works examine the implications of such policing
methods and are certainly worth reading.
~~~
Pamar
To be honest, I have read 1984 maybe six or seven times and I do not remember
"prediction" being one of the main themes. See
[https://en.wikipedia.org/wiki/Thoughtcrime](https://en.wikipedia.org/wiki/Thoughtcrime)
The oppressive regime of 1984 does not use math or computers to look for
"suspects". We might argue this was because Orwell had no idea about these
methods (the book was written in 1948, after all), but personally I doubt he
would have used them, because delegating decisions to an impersonal algorithm
would make the bad guys slightly less bad: in the novel the main baddie says
something like "do you want to see the future of the human race? Think of a
boot stomping on a human face, forever...".
I.e. power for them is an end in itself, and the only way to use it is to
make someone else suffer. I don't see any place for "impersonal algorithms to
better adjudicate anything" in this.
------
niels_olson
Can someone just go ahead and inject a blink tag so we can get the full 1994
experience? Oh, my retina...
~~~
cloudjacker
[http://www.wonder-
tonic.com/geocitiesizer/content.php?theme=...](http://www.wonder-
tonic.com/geocitiesizer/content.php?theme=1&music=2&url=http://www.bloomberg.com/features/2016-richard-
berk-future-crime/)
surprisingly more readable
~~~
cJ0th
On a serious note: the original link looks okayish when you switch to article
mode in Firefox.
------
Dr_tldr
I know, it's almost as if they don't consider you the sole and undisputed
arbiter of the limits of technology in creating social policy. What a bunch of
psychopaths!
~~~
eyelidlessness
If you have (created!) a job that closely resembles a work of dystopian
fiction, laughing that off is absolutely lacking in human empathy. That's not
even the first problem with this line of work, but since you're also laughing
off the problem, it deserves a rebuttal.
If I said to you that I was going to create a network of surveillance devices
that also serves as mindless entertainment and routinely broadcasts faith
routines that non-participants will be punished for, and you told me that
sounds like something out of 1984, and I told you that you were paranoid, you'd think
I was mad.
And the advance of technology unhindered is not a universal good. Algorithms
only have better judgment than humans according to the constraints they were
assigned. If there's a role for automation in criminal justice, that role must
be constantly questioned and adjusted for human need, just as the role of
human intervention should be. Because it's all human intervention.
~~~
ionwake
What is a "faith routine" ? Thanks
~~~
Jtsummers
In context, faith routines would be things like, in the book _1984_ , the Two
Minutes Hate. In reality, it might be an (implicitly mandatory, if not
explicitly) routine such as pledging allegiance to a flag, or mandatory
participation in a moment of prayer, or something similar.
Ask HN: How to transition out of startup I founded? - throwawaynum1
I started a company (sole founder) a few years ago. We were mildly successful, and have a handful of employees and are breaking even.

We never raised money, and I think the company could continue to grow organically on its own, growing revenues maybe 10% a year. As CEO, I've been taking an under-market salary for several years, which has taken its toll on me and my family.

The company is in a position where the team in place can probably grow it organically, and have a nice, comfortable workplace.

I do not think there's much opportunity to raise funds, and even if we did, I think traditional investors would be disappointed by the returns. Worse, it would lock me in for a few more years. I also don't know if anything I do personally is going to change the trajectory of the company. I've certainly tried.

I'm a fairly senior developer, and live in an area where tech jobs are easy to come by. I would like to keep my company going (giving most of my salary back towards growing the business + improving staff salaries), but I'd like to go back to a more normal role with market pay + benefits.

Trying to figure out how to navigate this without disrupting my company or scaring off future employers. Would love any advice or feedback.
======
new_hackers
Have you grown and developed other leaders in the company? I think this is the
answer to any "how to transition" question. Is there someone whom you have
groomed and wants to take over?
~~~
throwawaynum1
There are no obvious "CEO" candidates (and the company is not large enough to
attract new CEO talent). I have 3 director types who can run day-to-day.
I'd imagine I could have a weekly 1 hour meeting via slack or phone and they'd
be able to handle it.
------
grizzles
Just be upfront about why you are leaving. One option might be to make it an
employee owned company, with the current team gradually diluting out. Have
bonus incentives for anyone who can bring new business. Some huge companies
like SAIC have been built this way. Maybe hire a part time ceo aka a new sales
& marketing person.
My guess is that if you are a developer, then you probably aren't focusing
enough on sales with your ceo hat on.
~~~
throwawaynum1
This is an interesting idea!
I probably didn't focus on sales as much as another CEO might, but I tried my
best. Sometimes that's just not good enough I guess!
Thank you all, today is my first anniversary at News.YC - edu
Today I read on my profile:

    user: edu
    created: 365 days ago

A full year here! Mostly lurking, and seldom commenting and submitting news. One year in and it is as good as ever.

I wanted to thank you all, and especially PG, for making news.YC possible and for keeping it fresh, clean and smart.

Thank you very much!
======
mixmax
Congratulations :-)
Especially since it seems that your submissions are interesting and your
comments insightful.
Keep up the good work...
------
edw519
Thank YOU, edu, for helping to keep it "fresh, clean, and smart". (nice choice
of words)
Help fight dementia by playing a mobile game - gregdoesit
http://www.seaheroquest.com/en/
======
brudgers
[The page load was painfully slow for me.]
The game appears to be sponsored by Deutsch Telekom (T-mobile).
The video explainer is here: [http://www.seaheroquest.com/en/what-is-
dementia](http://www.seaheroquest.com/en/what-is-dementia)
The goal is to research how people navigate and to use that data to understand
dementia.
Django Girls: workshops about programming in Python and Django for women - goblin89
http://djangogirls.org
======
bndr
I don't really understand the need to create a women only workshop. Why is
there such a need for this?
~~~
goblin89
Me neither. Every time I see Django Girls mentioned in my Twitter timeline
(often retweeted by a Django core developer), I fight the urge to ask aloud:
are girls not allowed at “regular” conferences and workshops? Or, perhaps, we
guys are being so mean that girls choose not to attend?
The first question is rhetorical, but the second isn't. Perhaps such a workshop
is indeed warranted and it's my worldview that's missing something. However, I
don't want to seem like a hater by actually asking those questions on Twitter,
so I decided to post this to HN (though I guess the post won't take off
this time).
Show HN: CV Compiler 2.0 – Instant Resume Suggestions for Techies - andrewstetsenko
https://cvcompiler.com/?hackernews
======
floki999
First off, where did it get its hands on 1M resumes to train their analytics?
Looks like yet another underhanded way of getting hold of personal information
on a massive scale.
If it isn’t the case, then address this upfront on the top of your landing
page. No time to dig further for it.
Looking for a job is hard enough work - don't make me lose precious time
having to research your service.
Advertisers using face recognition to watch people watching TV - prostoalex
http://fortune.com/2016/02/13/bbc-ads-crowdemotion/
======
KannO
"Koyaanisqatsi" director Godfrey Reggio made a film using just the blank gazes
of kids watching TV:
[https://www.youtube.com/watch?v=vuI_nCADnW0](https://www.youtube.com/watch?v=vuI_nCADnW0)
When people talk about how kids are spending too much time on their
smartphones or the internet, at least it's interactive and participatory, whereas TV is
a bizarre, receptive, one-way experience.
~~~
sandworm101
But at least this was the BBC. If you are going to watch mindlessly then at
least watch the best.
------
x1798DE
The article is extremely slim on details, but it seems more like they are
using the webcams of study participants to watch people watching TV, and using
facial recognition to process the data. The headline on fortune seems to imply
that they are somehow _secretly_ watching you.
------
anexprogrammer
The article just seems to be a rehash of this BBC blog post (from 2014):
[http://www.bbc.co.uk/mediacentre/worldwide/2014/labs-
crowdem...](http://www.bbc.co.uk/mediacentre/worldwide/2014/labs-crowdemotion)
The link gives a clearer idea of what's being done than the article does.
------
Animats
_" By visiting this site, you agree that Site, Inc. may activate your
computer's camera, identify you by facial recognition, record and analyze your
facial expressions, and track your eye movements. We use this information to
select ads which draw your attention."_
------
paulwitte253
Isn't this technique against all privacy ethics/manners?
What is behind the name duckduckgo - blackvine
http://www.altvirtual.com/tech-news/duckduckgo-a-new-way-to-search-the-web.html
No, seriously, is it because most of the good domains are in the hands of cyber-squatters?
======
villageidiot
Expected this to be entertaining or informative. Instead it's just a shameless
plug for the site and its founder. Lame, duck.
~~~
epi0Bauqu
For the record, I had nothing to do with the writing or submission of this
article. And I've never heard of the site.
~~~
villageidiot
No worries. Wasn't pointing the finger at you. Just thought the article was
kind of pointless. It's good that you clarified, though - thanks.
Stop Coming Up With Startup Ideas - rrhoover
http://ryanhoover.me/post/37498575737/stop-coming-up-with-startup-ideas
======
nomi137
i agree... we need more execution not ideas ali
Ask HN: Any domain name registrars that don't require JavaScript? - glockenspielen
I wish to manage nameservers and records via a text browser such as links, lynx or w3m.

I recall when namecheap and namesilo did not require scripts.

Anyone have current experience with a registrar free from JavaScript?
======
nlolks
Try this one. freedns.afraid.org
~~~
glockenspielen
Not a domain name registrar, but looks useful nonetheless. Looking into this.
Thanks.
Machine learning is easier than it looks - jasonwatkinspdx
http://insideintercom.io/machine-learning-way-easier-than-it-looks/
======
eof
I feel I'm in a _somewhat_ unique position to talk about easy/hardness of
machine learning; I've been working for several months on a project with a
machine learning aspect with a well-cited, respected scientist in the field.
But I effectively "can't do" machine learning myself. I'm a primarily 'self-
trained' hacker; started programming by writing 'proggies' for AOL in middle
school in like 1996.
My math starts getting pretty shaky around Calculus; vector calculus is beyond
me.
I did about half the machine learning class from coursera, andrew ng's.
Machine learning is conceptually much simpler than one would guess; both
gradient descent and the shallow-neural network type; and in fact it is
actually pretty simple to get basic things to work.
I agree with the author that the notation, etc, can be quite intimidating vs
what is "really going on".
However, _applied_ machine learning is still friggin' hard, at least to me,
and I consider myself a pretty decent programmer. Naive solutions are just
unusable in almost any real application, and the author's use of loops and
maps is great for teaching machine learning; but everything needs to be
transformed into higher-level vector/matrix problems in order to be genuinely
useful.
That isn't unattainable by any means; but the fact remains (imho) that without
a strong base in vector calculus and the idiosyncratic techniques for
transforming these problems into more efficient computations, usable
machine learning is far from "easy".
~~~
csmatt
Does there exist a tool that will translate mathematical symbols or an entire
equation of them into plain English?
~~~
andreasvc
This sounds to me like a pointless exercise. There is a reason for using
mathematical notation for non-trivial formulas, which is that it is more compact
and succinct, allowing it to convey information efficiently and unambiguously.
Think of a formula with a few levels of parentheses; you're not going to be
able to express that clearly in a paragraph of text. It's not so much the
symbols and notation itself which is hard to grasp, but the mental model of
the problem space; once you have that, the formula will usually make sense
because you can relate it to this mental model.
~~~
icelancer
>There is a reason for using mathematical notation for non trivial formulas,
which is that is more compact and succint, to allow it to convey information
efficiently and unambiguously.
Maybe, but not always. Remember that Richard Feynman took great issue with how
integration was taught in most math classes and devised his own method
(inspired by the Calculus for the Practical Man texts).
~~~
andreasvc
You can always try to find an even better notation, but the only point I was
making is that for certain cases anything is better than a wall of awkward
text.
------
hooande
Most machine learning concepts are very simple. I agree with the author that
mathematical formulae can be unnecessarily confusing in many cases. A lot of
the concepts can be expressed very clearly in code or plain english.
For example, a matrix factorization could be explained with two arrays, a and
b, that represent objects in the prediction:
    for each example
        for each weight w
            prediction += a[w] x b[w]
        err = (prediction - actual_value)
        for each weight w
            a[w] += err x small_number
            b[w] += err x small_number
It's that simple. Multiply the weights of a by the weights of b, calculate
error and adjust weights, repeat.
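For anyone who wants to actually run that idea, here is a small Python sketch of the same update loop. The learning rate, factor count and the `examples` format are placeholders chosen for illustration (not from any particular library), and the updates use the usual gradient form (err times the other factor's weight), which the pseudocode above abbreviates:

    import random

    def factorize(examples, num_weights=10, learning_rate=0.01, epochs=20):
        # examples: list of (row_id, col_id, actual_value), e.g. (user, item, rating)
        rows = {r for r, _, _ in examples}
        cols = {c for _, c, _ in examples}
        a = {r: [random.uniform(0, 0.1) for _ in range(num_weights)] for r in rows}
        b = {c: [random.uniform(0, 0.1) for _ in range(num_weights)] for c in cols}
        for _ in range(epochs):
            for r, c, actual_value in examples:
                prediction = sum(a[r][w] * b[c][w] for w in range(num_weights))
                err = actual_value - prediction
                for w in range(num_weights):
                    # nudge both factors in the direction that reduces the error
                    a_w = a[r][w]
                    a[r][w] += learning_rate * err * b[c][w]
                    b[c][w] += learning_rate * err * a_w
        return a, b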
K-Nearest Neighbor/KMeans are based on an even simpler operation:
    dist = 0
    for each weight w: dist += (a[w] - b[w])**2
Then make predictions/build clusters based on the smallest aggregate distance.
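As a runnable version of that distance step (plain Python; the function names are just for illustration):

    def squared_distance(a, b):
        # a and b are equal-length lists of feature weights
        return sum((a[w] - b[w]) ** 2 for w in range(len(a)))

    def nearest(query, examples):
        # nearest neighbour: the training example with the smallest distance
        return min(examples, key=lambda ex: squared_distance(query, ex))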
There are more advanced concepts. There are some serious mathematics involved
in some predictors. But the most basic elements of statistical prediction are
dead simple for a trained programmer to understand. Given enough data, 80%
solutions can easily be achieved with simple tools.
We should be spreading the word about the simplicity of fundamental prediction
algorithms, not telling people that it's hard and a lot of math background is
required. Machine learning is very powerful and can improve all of our lives,
but only if there is enough data available. Since information tends to be
unevenly distributed we need to get the tools into the hands of as many people
as possible. It would be much better to focus on the concepts that everyone
can understand instead of keeping statistics secrets behind the ivy clad walls
of academia.
~~~
ACow_Adonis
Spot on. I agree completely. I work at this stuff for a living, and it never
ceases to amaze how the academic literature and math papers are discussing
fundamentally simple ideas using styles and techniques programming evolved
past in the last few decades. Comments, structure, not-greek-characters (but
they make me look smart!), abstractions. When was the last time you saw a
stat/math paper in machine learning with comments/structure? And guess what,
its as undecipherable as code without comments/structure.
On the other hand, I'm also learning that what I/others find easy, a great
deal of people/programmers find hard. The number of programmers/hackers who
can actually implement such techniques on new real-world problems if they
don't have someone holding their hand I'm discovering is a very small
minority. So maybe its harder than it looks after all, and we just think its
easy because we've spent so much time with it? After all, programming isn't a
walk in the park for most people, and machine learning isn't a walk in the
park for most programmers.
~~~
bnegreve
> _When was the last time you saw a stat /math paper in machine learning with
> comments/structure? And guess what, its as undecipherable as code without
> comments/structure._
What do you mean? Academic papers are usually written in plain english with
mathematical formulas when it's necessary. What kind of comments would you
like to see?
~~~
dylandrop
Well, to quote the article, you can see things like "p is absolutely
continuous with respect to the Lebesgue measure on En" and hundreds of
variable names and sub/superscripts, none of which are intuitively named. It's
really hard to argue that anyone who is not in academia would understand this
without passing through it multiple times.
That being said, I think mathematicians should be perfectly permitted to do
this, seeing as most people who read their papers are themselves,
mathematicians. Thus spelling out every dumb detail would probably just be a
waste of their time for the one case that the brave soul who is a programmer
tries to decipher it.
~~~
icelancer
Like another comment above alluded to, mathematicians tend to have parallel
gripes about code. While technically we're speaking the same language, the
dialect is often quite different.
~~~
dylandrop
Yes, which is why I mentioned that it makes sense for mathematicians to use
the language they do.
------
j2kun
The author clearly didn't read the page of the math paper he posted in trying
to argue his point. It says, and I quote:
Stated informally, the k-means procedure consists of simply starting with k
groups each of which consists of a single random point, and thereafter adding
each new point to the group whose mean the new point is nearest.
Admittedly, it's not the prettiest English sentence ever written, but it's
stated just as plainly and simply as the author of this article puts it.
The article itself is interested in _proving_ asymptotic guarantees of the
algorithm (which the author of the article seems to completely ignore, as if
it were not part of machine learning at all). Of course you need mathematics
for that. If you go down further in the paper, the author reverts to a simple
English explanation of the various parameters of the algorithm and how they
affect the quality of the output.
So basically the author is cherry-picking his evidence and not even doing a
very good job of it.
------
munificent
This was a great post because I've heard of "k-means" but assumed it required
more math than my idle curiosity would be willing to handle. I love
algorithms, though, and now I feel like I have a handle on this. That's
awesome!
However, the higher level point of the post "ML is easy!" seems more than a
little disingenuous. Knowing next to nothing about machine learning, obvious
questions still come to mind:
Since you start with random points, are you guaranteed to reach a global
maximum? Can it get stuck?
How do you know how many clusters you want? How do I pick K?
This assumes that distance in the vector space strongly correlates to
"similarity" in the thing I'm trying to understand. How do I know my vector
model actually does that? (For example, how does the author know "has some
word" is a useful metric for measuring post similarity?)
I like what I got out of the post a lot, but the "this is easy" part only
seems easy because it swept the hard part under the rug.
~~~
andreasvc
You are asking exactly the right questions. As far as I know k-means will work
well when you can answer those questions for your problem, otherwise not so
much. In other words there's no silver bullet.
If you rephrase the "ML is easy" idea from the article to "it's easy to do
some simple but cool things with ML" then it's true, but in pushing the
envelope you can make it as complex as you like.
------
sieisteinmodel
Also: aerodynamics is not really hard, anyone can fold paper planes! Or:
programming 3D games is easy, just build new levels for an old game! Or: I
don't know what I am doing here, but look, this photoshop effect looks really
cool on my holiday photos!
etc.
Seriously: The writer would not be able to write anything about K-Means if not
for people looking at it from a mathematical view point. This angle is of
tremendous importance if you want to know how your algorithm behaves in corner
cases.
This does not suffice, if you have an actual application (e.g. a
recommendation or a hand tracking or an object recognition engine). These need
to work _as good as you can make it_ because every improvement of it will
result in $$$.
~~~
nikatwork
Author is not proposing "1 weird machine learning trick mathematicians HATE!"
They are encouraging non-maths people to give the basics a try even though it
seems intimidating. And it worked on me, like some others in this thread my
maths is shaky beyond diff calculus and my eyes glaze over when I see notation
- but now I'd like to give this a whirl. I have no illusions that I will
suddenly be an expert.
------
Daishiman
It's easy until you have to start adjusting parameters, understand the results
meaningfully, and tune the algorithms for actual "Big Data". Try doing most
statistical analysis with dense matrices and watch your app go out of memory
in two seconds.
It's great that we can stand on the shoulders of giants, but having a certain
understanding of what these algorithms are doing is critical for choosing them
and the parameters in question.
Also, K-means is relatively easy to understand intuitively. Try doing that
with Latent Dirichlet Allocation, Pachinko Allocation, etc. Even Principal
Component Analysis and Linear Least Squares have some nontrivial properties
that need to be understood.
------
amit_m
tl;dr: (1) Author does not understand the role of research papers (2) Claims
mathematical notation is more complicated than code and (3) Thinks ML is easy
because you can code the wrong algorithm in 40 lines of code.
I will reply to each of these points:
1\. Research papers are meant to be read by researchers who are interested in
advancing the state of the art. They are usually pretty bad introductory
texts.
In particular, mathematical details regarding whether or not the space is
closed, complete, convex, etc. are usually both irrelevant and
incomprehensible to a practitioner but are essential to the inner workings of
the mathematical proofs.
Practitioners who want to apply the classic algorithms should seek a good
book, a wikipedia article, blog post or survey paper. Just about anything
OTHER than a research paper would be more helpful.
2\. Mathematical notation is difficult if you cannot read it, just like any
programming language. Try learning to parse it! It's not that hard, really.
In cases where there is an equivalent piece of code implementing some
computation, the mathematical notation is usually much shorter.
3\. k-means is very simple, but it's the wrong approach to this type of
problem. There's an entire field called "recommender systems" with algorithms
that would do a much better job here. Some of them are pretty simple too!
~~~
Aardwolf
I'm pretty good at logic, problem solving, etc..., but do find parsing
mathematical notation quite hard. Is there actually a good way to learn it?
What I have most difficulty with is: it's not always clear which
symbols/letters are knowns, which are unknowns, and which are values you
choose yourself. Not all symbols/letters are always introduced, you sometimes
have to guess what they are. Sometimes axes of graphs are not labeled.
Sometimes explanation or examples for border cases are missing. And sometimes
when in slides or so, the parsing of the mathematical formulas takes too much
time compared to the speed, or, the memory of what was on previous slides
fades away so the formula on a later slide using something from a previous one
can no longer be parsed.
Also when you need to program the [whatever is explained mathematically in a
paper], then you have to tell the computer exactly how it works, for every
edge case, while in math notation people can and will be inexact.
Maybe there should be a compiler for math notation that gives an error if it's
incomplete. :)
~~~
amit_m
You probably want to look at a couple of good undergrad textbooks (calculus,
linear algebra, probability). The good textbooks explain the notation and have
an index for all the symbols.
Unfortunately, in most cases, you have to know a little bit about the field in
order to be able to parse the notation. The upside is that having some
background is pretty much a necessity to not screwing up when you try to
implement some algorithm.
------
myth_drannon
On Kaggle "The top 21 performers all have an M.S. or higher: 9 have Ph.D.s and
several have multiple degrees (including one member who has two Ph.D.s)."
[http://plotting-success.softwareadvice.com/who-are-the-
kaggl...](http://plotting-success.softwareadvice.com/who-are-the-kaggle-big-
data-wizards-1013/)
~~~
vikp
I've done pretty well at Kaggle with just a Bachelors in American History:
[http://www.kaggle.com/users/19518/vik-
paruchuri](http://www.kaggle.com/users/19518/vik-paruchuri), although I
haven't been using it as much lately. A lot of competition winners have lacked
advanced degrees. What I like most about Kaggle is that the only thing that
matters is what you can show. I learned a lot, and I highly recommend it to
anyone starting out in machine learning. I expanded on some of this stuff in a
Quora answer if you are interested: [http://www.quora.com/Kaggle/Does-
everyone-have-the-ability-t...](http://www.quora.com/Kaggle/Does-everyone-
have-the-ability-to-do-well-on-Kaggle-competitions-if-they-put-enough-time-
and-effort-into-them).
~~~
nl
Man..
To other readers, follow that link at your own risk. Vikp has a blog[1] which
has hours and hours of excellent writing about machine learning.
[1] [http://vikparuchuri.com/blog/](http://vikparuchuri.com/blog/)
~~~
vikp
Thanks, nl! Now I know that at least one person reads it.
~~~
sk2code
Count me as 2. Though I am sure there are lots of people who will be reading
the material on your blog. Pretty good stuff.
------
xyzzyz
I'd like to chime in here as a mathematician.
Many people here express their feelings that math or computer science papers
are very difficult to read. Some even suggest that they're deliberately
written this way. The truth is that yes, they in fact are deliberately written
this way, but the reason is actually opposite of many HNers impression:
authors want to make the papers _easier_ to understand, and not more
difficult.
Take for example a page from a paper that's linked in this article. Someone
here on HN complains that the paper talks about "p being absolutely continuous
with respect to the Lebesque measure on En", hundreds of subscripts and
superscripts, and unintuitively named variables, and that it makes paper very
difficult to understand, especially without doing multiple passes.
For non-mathematicians, it's very easy to identify with this sentiment. After
all, what does it even mean for a measure to be absolutely continuous with
respect to Lebesgue measure. Some of these words, like "measure" or
"continuous" make some intuitive sense, but how can "measure" be "continuous"
with respect to some other measure, and what the hell is Lebesgue measure
anyway?
Now, if you're a mathematician, you know that Lebesgue measure in simple cases
is just a natural notion of area or volume, but you also know that it's very
useful to be able to measure much more complicated sets than just rectangles,
polyhedrals, balls, and other similarly regular shapes. You know the Greeks
successfully approximated areas of curved shapes (like a disk) by polygons, so
you try to define such a measure by inscribing or circumscribing nice, regular
shapes for which the measure is easy to define, but you see it only works for
very simple and regular shapes, and is very hard to work with in practice. You
learned that Henri Lebesgue constructed a measure that assigns a volume to
most sensible sets you can think of (indeed, it's hard to even come up with an
example of a non-Lebesgue-measurable set), you've seen the construction of
that measure, and you know that it's indeed a cunning and nontrivial work. You
also know that any measure on Euclidean space satisfying some natural
conditions (like measure of rectangle with sides a, b is equal to product ab,
and if you move a set around without changing its shape, its measure shouldn't
change) must already be Lebesgue measure. You also worked a lot with Lebesgue
measure, it being an arguably most important measure of them all. You have an
intimate knowledge of Lebesgue measure. Thus, you see a reason to honor
Lebesgue by naming measure constructed by him with his name. Because of all of
this, whenever you read or hear about Lebesgue measure, you know precisely
what you're dealing with.
You know that a measure p is absolutely continuous with respect to q, if
whenever q(S) is zero for some set S, p(S) is also zero. You also know that if
you tried to express the concept defined in a previous sentence, but without
using names for measures involved, and a notation for a value a measure
assigns to some set, the sentence would come out awkward and complicated,
because you would have to say that a measure is absolutely continuous with
respect to some other measure, if whenever that other measure assigns a zero
value to some set, the value assigned to that set by the first measure must be
zero as well. You also know that, since you're not a native English speaker
(and I am not), your chances of making a grammatical error in a sentence riddled
with prepositions and conjunctions are very high, and it would make this
sentence even more awkward. Your programmer friend suggested that you should
use more intuitive and expressive names for your objects, but p and q are just
any measures, and apart from the property you're just now trying to define,
they don't have any additional interesting properties that would help you find
names more sensible than SomeMeasure and SomeOtherMeasure.
But you not only know the definition of absolute continuity of measures: in
fact, if the definition was the only thing you knew about it, you'd
have forgotten it long ago. You know that absolute continuity is important
because of the Radon-Nikodym theorem, which states that if p is absolutely
continuous with respect to q, then p(A) is in fact the integral over A of some
function g with respect to measure q (that is, p(A) = int_A g dq). You know
that it's important, because it can help you reduce many questions about
measure p to the questions about behaviour of function g with respect to
measure q (which in our machine learning case is a measure we know very, very
well, the Lebesgue measure).
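For reference, the standard symbolic statement (textbook notation, not something quoted from the paper under discussion):

    p \ll q \;\Longrightarrow\; \exists\, g \ge 0 \ \text{such that}\ p(A) = \int_A g \, dq \ \text{for every measurable } A, \qquad g =: \frac{dp}{dq}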
You also know why the hell it's called absolutely continuous: if you think
about it for a while, the function g we just mentioned is kind of like a
derivative of measure p with respect to measure q, kind of like
dp/dq. Now, if you write p(A) = int_A (dp/dq) dq = int_A p'(q) dq, even though
none of the symbols dp/dq or p'(q) make sense, it seems to mean that p is an
"integral of its derivative", and you recall that there's a class of real
valued functions for which it is true as well, guess what, the class of
absolutely continuous functions. If you think about these concepts even
harder, you'll see that the latter concept is a special case of our absolutely
continuous measures, so all of this makes perfect sense.
So anyway, you read that "p is absolutely continuous with respect to Lebesgue
measure", and instantly tons of associations light up in your memory, you know
what they are working with, you have some ideas why they might need it,
because you remember doing similar assumption in some similar context to
obtain some result (and as you're reading the paper further, you realize you
were right). All of what you're reading makes perfect sense, because you are
very familiar with the concepts author introduces, with methods of working
with them, and with known results about them. Every sentence you read is a
clear consequence of the previous one. You feel you're home.
_..._
Now, in an alternate reality, a nonmathematician-you also tries to read the same
paper. As the alternate-you hasn't spent months and years internalizing these
concepts to make them vis second nature, ve has to look up every other word,
digressing into Wikipedia to use DFS to find the connected component containing a
concept you just don't yet understand. You spend hours, and after them you
feel you learned nothing. You wonder if the mathematicians deliberately try to
make everything complicated.
Then you read a blog post which expresses the idea behind this paper very
clearly. Wow, you think, these assholes mathematicians are really trying to
keep their knowledge in an ivory tower of obscurity. But, since you only made
it through the few paragraphs of the paper, you missed an intuitive
explanation that's right there on that page from an paper reproduced by that
blog post:
_Stated informally, the k-means procedure consists of simply starting with k
groups each of which consists of a single random point, and thereafter adding
each new point to the group whose mean the new point is nearest. After a point
is added to a group, the mean of that groups is adjusted in order to take
account of that new point_
Hey, so there was an intuitive explanation in that paper after all! So, what
was all that bullshit about measures and absolute continuity all about?
You try to implement an algorithm from the blog post, and, as you finish, one
sentence from blog post catches your attention:
_Repeat steps 3-4. Until documents’ assignments stop changing._
You wonder, but when that actually happens? How can you be sure that they will
stop at all at some point? The blog post doesn't mention that. So you grab
that paper again...
~~~
Houshalter
I can get that academic papers do this, but even Wikipedia articles are
written in this cryptic language. For example, you read about some algorithm
and want to find out how it works and you get this:
[https://upload.wikimedia.org/math/f/1/c/f1c2177e965e20ab29c4...](https://upload.wikimedia.org/math/f/1/c/f1c2177e965e20ab29c4ba51f70bbdfc.png)
You try to highlight some of the symbols and find out that it's an image. You
can't even click on things and find out what they mean like is standard for
everything written in text. You try to google something like "P function" and
get completely unrelated stuff.
~~~
nilkn
Symbols are used because equations like that would be crazy long and hard to
keep track of if every symbol were a fully-formed word or phrase. It's an
unfortunate side effect that those unfamiliar with mathematical notation are
somewhat left in the dark.
The real problem here is the difficulty of looking up the meaning of a symbol
you don't know. The notation is great once you know it, but it's a pain to
learn if you're doing so completely out of context. Maybe you don't know that
the sigmas there represent summation. Your idea of just clicking the sigma and
seeing an explanation is a pretty great one that I'd like to see implemented.
~~~
eli_gottlieb
Crazy long and difficult-to-track equations are why _function abstraction_ was
invented. If this were code, we would say to cut it into understandable
chunks, label them as functions, and then compose them.
~~~
nilkn
Writing it all as code would exclude nonprogrammers just as much as this
excludes people not familiar with basic mathematical notation.
At least everybody should share some common mathematical instruction from
school. The same certainly cannot be said of programming.
As somebody who can read both code and math, though, I'd greatly, greatly
prefer to see math in math notation, not code. Code would be so much more
verbose that it would take easily twice as long to digest the information.
You'd also completely lose all the incredibly powerful visual reasoning that
comes along with modern mathematical notation.
------
kephra
The question "do I need hard math for ML" often comes up in #machinelearning
at irc.freenode.net
My point here is: you don't need hard math (most of the time) because most
machine learning methods are already coded in half a dozen different
languages. So it's similar to FFT: you do not need to understand why FFT works,
just when and how to apply it.
The typical machine learning workflow is: Data mining -> feature extraction ->
applying a ML method.
I often joke that I'm using Weka as a hammer, to check if I managed to shape
the problem into a nail. Now the critical part is feature extraction. Once
this is done right, most methods show more or less good results. Just pick the
one that fits best in results, time and memory constraints. You might need to
recode the method from Java to C to speed it up, or to embed it. But this requires
nearly no math skills, just code reading, writing and testing skills.
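As an illustration of that workflow in code (swapping the Weka hammer for scikit-learn here, and the texts, labels and choice of classifier are made up for the example — the point is that the method call at the end is the easy part):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # 1. data mining: however you collected your raw examples and labels
    texts = ["great product", "terrible support", "love it", "never again"]
    labels = [1, 0, 1, 0]

    # 2. feature extraction + 3. applying an off-the-shelf method
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["support was great"]))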
------
tlarkworthy
I find newbs in ml don't appreciate cross validation. That's the one main
trick. Keep some data out of the learning process to test an approach's
ability on data it has not seen. With this one trick you can determine which
algorithm is best, and the parameters. Advanced stuff like Bayes means you
don't need it, but for your own sanity you should still always cross validate.
Machine learning is about generalisation to unseen examples, cross validation
is the metric to test this. Machine learning is cross validation.
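A minimal sketch of that one trick (scikit-learn for convenience; the dataset and classifier here are just placeholders):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # train on part of the data, score on the part that was held out, 5 different ways
    scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)
    print(scores.mean(), scores.std())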
~~~
syllogism
First, you're always going to need to evaluate supervised models on some data
you didn't train on. No method will ever relieve the need for that.
Second, I never use cross-fold if I can help it. It's a last resort if the
data's really small. Otherwise, use two held-out sets, one for development,
and another for testing. Cross-fold actually really sucks!
The small problem is that cross-validation has you repeating everything N
times, when your experiment iteration cycle is going to be a big bottleneck.
The bigger problem is that it can introduce a lot of subtle bugs.
Let's say you need to read over your data to build some sort of dictionary of
possible labels for the instances, given one of the attributes. This can save
you a lot of time. But if you're cross-validating, you can forget to do these
pre-processes cross-validated too, and so perform a subtle type of cheating.
Coding that stuff correctly often has you writing code into the run-time about
the cross-fold validation. Nasty.
With held-out data, you can ensure your run-time never has access to the
labels. It writes out the predictions, and a separate evaluation script is
run. This way, you know your system is "clean".
Finally, when you're done developing, you can do a last run on a test set.
Bonus example of cross-fold problems: Sometimes your method assumes
independence between examples, even if this isn't strictly true. For instance,
if you're doing part-of-speech tagging, you might assume that sentences in
your documents are independent of each other (i.e. you don't have any cross-
sentence features). But they're not _actually_ independent! If you cross-fold,
and get to evaluate on sentences from the same documents you've trained on,
you might do much better than in a realistic evaluation, where you have to
evaluate on fresh documents.
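A sketch of that "clean" setup, with made-up file names — the prediction script never reads the gold labels, and a separate script does the scoring:

    # predict.py -- writes one prediction per input line; never sees the labels
    def predict(features):
        return "NN"  # stand-in for the real model

    with open("dev.features") as f, open("dev.predictions", "w") as out:
        for line in f:
            out.write(predict(line.strip()) + "\n")

    # evaluate.py -- the only place gold labels are read
    with open("dev.predictions") as p, open("dev.gold") as g:
        pairs = list(zip(p, g))
        print(sum(a.strip() == b.strip() for a, b in pairs) / len(pairs))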
~~~
tlarkworthy
Yes I realise the importance of a testing set too. I see that as just another
level validation. Hierarchies of left out data. If you understand in your
heart what cross validation _is_ , then the step to using a testing set is a
trivial extension.
I brought up cross validation in particular, because I see PhDs eating books
on advanced techniques like structure learning. But they have no practical
experience. And they ignore the basics for so long, as they don't realise
validation is the most important bit. If you have not drawn the curve of
training set error and validation error against training time, you have not done
machine learning yet...
This article doesn't mention it either :(
------
upquark
Math is essential for this field, anyone who tells you otherwise doesn't know
what they are talking about. You can hack together something quick and dirty
without understanding the underlying math, and you certainly can use existing
libraries and tools to do some basic stuff, but you won't get very far.
Machine learning is easy only if you know your linear algebra, calculus,
probability and stats, etc. I think this classic paper is a good way to test
if you have the right math background to dive deeper into the field:
[http://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pd...](http://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pdf)
------
Ihmahr
As a graduate in artificial intelligence and machine learning I can tell you
that machine learning IS hard.
Sure, the basic concepts are easy to understand. Sure, you can hack together a
program that performs quite well on some tasks. But there are so much
(interesting) problems that are not at all easy to solve or understand.
Like structural engineering it is easy to understand the concepts, and it is
even easy to build a pillow fort in the living room, but it is not easy to
build an actual bridge that is light, strong, etc.
------
pallandt
It's actually incredibly hard, especially if you want to achieve better
results than with a current 'gold standard' technique/algorithm, applied on
your particular problem.
While the article doesn't have this title (why would you even choose one with
such a high bias?), I presume the submitter decided upon this title after
being encouraged by this affirmation of the article's author: 'This data
indicates that the skills necessary to be a data “wizard” can be learned in
disciplines other than computer sciences and mathematics.'.
This is a half-baked conclusion. I'd reason most Kaggle participants are first
of all, machine learning fans, either professionals or 'amateurs' with no
formal qualifications, having studied it as a hobby. I doubt people with a
degree in cognitive sciences or otherwise in the 'other' categories as
mentioned in the article learned enough just through their university studies
to readily be able to jump into machine learning.
------
tptacek
Is k-means really what people are doing in serious production machine-learning
settings? In a previous job, we did k-means clustering to identify groups of
similar hosts on networks; we didn't call it "machine learning", but rather
just "statistical clustering". I had always assumed the anomaly models we
worked with were far simpler than what machine learning systems do; they
seemed unworthy even of the term "mathematical models".
~~~
patio11
_Is k-means really what people are doing in serious production machine-
learning settings?_
Yes, that is one option. Ask me over Christmas and I'll tell you a story about
a production system using k-means. (Sorry guys, not a story for public
consumption.)
More broadly, a _lot_ of machine learning / AI / etc is simpler than people
expect under the hood. It's almost a joke in the AI field: as soon as you
produce a system that actually works, people say "Oh that's not AI, that's
just math."
~~~
glimcat
Most of the classifiers I've seen used in industry aren't even k-means.
They're fixed heuristics (often with plausibly effective decision boundaries
found via k-means).
Consider the Netflix Prize, where the most effective algorithm couldn't
actually be used in production. Commercial applications tend to favor "good
enough" classification with low computational complexity.
> It's almost a joke in the AI field: as soon as you produce a system that
> actually works, people say "Oh that's not AI, that's just math."
I think that's the standard "insert joke here" any time someone has a "What is
AI?" slide.
------
misiti3780
I disagree with this article, although I did find it interesting. Replace
k-means with a supervised learning algorithm like an SVM, and use some more
complicated features other than binary and this article would be a lot
different.
Also - maybe "article recommendation" is "easy" in this context, but other
areas such as computer vision, sentiment analysis are not. Some other
questions I might ask
How do you know how well this algorithm is performing?
How are you going to compare this model to other models? Which metrics will
you use? What statistical tests would you use and why?
What assumptions are you making here ? How do you know you can make them and
why?
There are a lot of things that this article fails to address.
Disclaimer: I realize more complex models + features don't always lead to
better performance, but you need to know how to verify that to be sure.
~~~
iskander
SVM prediction is pretty straight-forward, even if you're using a kernel/dual
formulation:
"Here are some training points from classes A and B, each with an importance
weight. Compute the weighted average of your new point's similarity to all the
training exemplars. If the total comes out positive then it's from class A,
otherwise it's from class B."
If you tried to explain training from a quadratic programming perspective it
would get messy, but if you use stochastic gradient descent then even the
training algorithm (for primal linear SVM) is pretty light on math.
Of course, this is only possible after a decade or two of researchers playing
with and digesting the ideas for you. I think until Léon Bottou's SGD SVM
paper, even cursory explanations of SVMs seemed to get derailed into convex
duality, quadratic programming solvers, and reproducing kernel Hilbert spaces.
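To make the "light on math" point concrete, here's a toy primal linear SVM trained with SGD on the hinge loss (the learning rate and C are arbitrary placeholders, and there's no kernel — this is a sketch, not Bottou's actual code):

    import random

    def train_linear_svm(data, dim, epochs=10, lr=0.01, C=1.0):
        # data: list of (x, y) pairs, x a list of floats, y in {-1, +1}
        w, b = [0.0] * dim, 0.0
        for _ in range(epochs):
            random.shuffle(data)
            for x, y in data:
                margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
                if margin < 1:   # margin violated: hinge loss pushes w towards y*x
                    w = [wi + lr * (C * y * xi - wi) for wi, xi in zip(w, x)]
                    b += lr * C * y
                else:            # only the regularizer shrinks w
                    w = [wi - lr * wi for wi in w]
        return w, b

    def predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1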
~~~
misiti3780
I'm not sure I agree; I'll say it differently:
As far as I know, k-means (as the author described it) has two parameters:
the initial number of clusters K and max iterations (although I could be wrong).
SVMs have:
UPDATE: here is a gist:
[https://gist.github.com/josephmisiti/7572696](https://gist.github.com/josephmisiti/7572696)
taken from here :
[http://www.csie.ntu.edu.tw/~cjlin/libsvm/](http://www.csie.ntu.edu.tw/~cjlin/libsvm/)
I do not think that picking up LibSVM, compiling it, and running it is
straight forward for everyone ...
~~~
iskander
I think there's a big difference between explaining the essence of an
algorithm and understanding all the details and decisions that go into a
particular implementation.
If explaining SVM, I'd stick with the linear primal formulation, which only
really requires choosing the slack parameter C. If I needed to perform or
explain non-linear prediction, I'd switch to the RBF kernel, which gives you
one additional parameter (the RBF variance).
Implementations of K-means, by the way, can also potentially have lots of
parameters. Check out: [http://scikit-
learn.org/stable/modules/generated/sklearn.clu...](http://scikit-
learn.org/stable/modules/generated/sklearn.cluster.MiniBatchKMeans.html#sklearn.cluster.MiniBatchKMeans)
~~~
misiti3780
fair enough - btw i just realized i bookmarked your blog
([http://blog.explainmydata.com](http://blog.explainmydata.com)) a few weeks
back - i was really enjoying those posts. nice work
~~~
iskander
Thanks :-)
I'm going to finish grad school soon and the job I'm going to will be pretty
rich with data analysis, hopefully will lead to more blog posts.
------
apu
For those wanting to get started (or further) in machine learning, I highly
recommend the article, "A Few Useful Things to Know About Machine Learning,"
by Pedro Domingos (a well respected ML researcher):
[http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf](http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf).
It's written in a very accessible style (almost no math); contains a wealth of
practical information that everyone in the field "knows", but no one ever
bothered to write down in one place, until now; and suggests the best
approaches to use for a variety of common problems.
As someone who uses machine learning heavily in my own research, a lot of this
seemed like "common sense" to me when I read it, but on reflection I realized
that this is _precisely_ the stuff that is most valuable _and_ hardest to find
in existing papers and blog posts.
------
mrcactu5
The equations look fine to me - I was a math major in college. Honestly, I get
so tired of humanities people -- or programmers, bragging about how much they
hate math.
Except:
[https://gist.github.com/benmcredmond/0dec520b6ab2ce7c59d5#fi...](https://gist.github.com/benmcredmond/0dec520b6ab2ce7c59d5#file-
kmeans-rb)
I didn't know k-means clustering was that simple. I am taking notes...
* pick two centers at random
* run 15 times:
  * for each post, find the closest center
  * take the average point of your two clusters as your new center
This is cool. It is 2-means clustering and we can extend it to 5 or 13...
We don't need any more math, as long as we don't ask whether this algorithm
_converges_ or _how quickly_.
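Here is that recipe as a runnable toy (plain Python, points as tuples of numbers; k and the iteration count are whatever you like — and it makes no claims about convergence or speed, which is exactly the part the math papers worry about):

    import random

    def kmeans(points, k=2, iterations=15):
        centers = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:
                # assign each point to its closest center
                i = min(range(k),
                        key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
                clusters[i].append(p)
            # move each center to the average of its cluster (keep it if the cluster is empty)
            centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                       for i, cl in enumerate(clusters)]
        return centers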
------
syllogism
I write academic papers, and I've started writing blog posts about them, and I
think this post doesn't cover one of the main reasons that academic papers are
less accessible to non-specialists.
When you write an academic paper, it's basically a diff on previous work. It's
one of the most important considerations when the paper first comes out. The
reviewers and the people up-to-the-minute with the literature need to see
which bit is specifically new.
But to understand your algorithm from scratch, someone needs to go back and
read the previous four or five papers --- and probably follow false leads,
along the way!
It's another reason why academic code is often pretty bad. You really really
should write your system to first replicate the previous result, and then
write your changes in on top of it, with a _bare minimum_ of branching logic,
controlled by a flag, so that the same runtime can provide both results. And
you should be able to look at each point where you branch on that flag, and
check that your improvements are only exactly what you say they are.
When you start from scratch and implement a good bang-for-buck idea, yes, you
can get a very simple implementation with very good results. I wrote a blog
post explaining a 200-line POS tagger that's about as good as any around.[1]
Non-experts would usually not predict that the code could be so simple, from
the original paper, Collins (2002).[2]
I've got a follow-up blog post coming that describes a pretty good parser that
comes in at under 500 lines, and performs about as accurately as the Stanford
parser. The paper I wrote this year, which adds 0.2% to its accuracy, barely
covers the main algorithm --- that's all background. Neither does the paper
before me, released late last year, which adds about 2%. Nor the paper before
that, which describes the features...etc.
When you put it together and chop out the false-starts, okay, it's simple. But
it took a lot of people a lot of years to come up with those 500 lines of
Python...And they're almost certainly on the way towards a local maximum! The
way forward will probably involve one of the many other methods discussed
along the way, which don't help this particular system.
[1] [http://honnibal.wordpress.com/2013/09/11/a-good-part-of-
spee...](http://honnibal.wordpress.com/2013/09/11/a-good-part-of-speechpos-
tagger-in-about-200-lines-of-python/)
[2]
[http://acl.ldc.upenn.edu/W/W02/W02-1001.pdf](http://acl.ldc.upenn.edu/W/W02/W02-1001.pdf)
------
pyduan
As someone who works in machine learning, I have mixed feelings about this
article. While encouraging people to start learning about ML by demystifying
it is a great thing, this article comes off as slightly cocky and dangerous.
Programmers who believe they understand ML while only having a simplistic view
of it risk creating not just less-than-optimal algorithms but downright
dangerous models:
[http://static.squarespace.com/static/5150aec6e4b0e340ec52710...](http://static.squarespace.com/static/5150aec6e4b0e340ec52710a/t/51525c33e4b0b3e0d10f77ab/1364352052403/Data_Science_VD.png)
In the context of fraud detection (one of the main areas I work in these
days), a model that is right for the wrong reasons might lead to catastrophic
losses when the underlying assumption that made the results valid suddenly
ceases to be true.
Aside from the fact the techniques he mentioned are some of the simplest in
machine learning (and are hardly those that would immediately come to mind
when I think "machine learning"), the top comment on the article is spot on:
> "The academic papers are introducing new algorithms and proving properties
> about them, you’re applying the result. You’re standing on giants’ shoulders
> and thinking it’s easy to see as far as they do."
While understanding _how_ the algorithm works is of course important (and I do
agree that they are often more readable when translated to code),
understanding _why_ (and _when_ ) they work is equally important. Does each
K-Means iteration always reach a stable configuration? When can you expect it
to converge fast? How do you choose the number of clusters, and how does this
affect convergence speed? Does the way you initialize your centroids have a
significant effect on the outcome? If yes, which initializations tend to work
better in which situations?
These are all questions I might ask in an interview, but more importantly,
being able to answer these is often the difference between blindly applying a
technique and applying it intelligently. Even for "simple" algorithms such as
K-Means, implementing them is often only the tip of the iceberg.
~~~
benofsky
Hey, author here.
> Aside from the fact the techniques he mentioned are some of the simplest in
> machine learning (and are hardly those that would immediately come to mind
> when I think "machine learning")
The primary point of this article was the contrast between something as simple
as K-Means, and the literature that describes it. It wasn't meant as a full
intro to ML, but rather something along the lines of "give it a try, you might
be surprised by what you can achieve".
> Even for "simple" algorithms such as K-Means, implementing them is often
> only the tip of the iceberg.
Yup. But getting more people to explore the tip of the iceberg is, in my
opinion, a good thing. We don't discourage people from programming because
they don't instantly understand the runtime complexity of hash tables and
binary trees. We encourage them to use what's already built knowing that smart
people will eventually explore the rest of the iceberg.
~~~
pyduan
Thanks for responding. I fully agree with your comment -- as I said, I too
think many people are sometimes put off by the apparent complexity of machine
learning, and demystifying how it works is a great thing. Unfortunately
there's always a risk that a "hey, maybe this isn't so hard after all" might
turns into a "wow, that was easy". While I think the former is great, the
latter is dangerous because machine learning is often used to _make decisions_
(sometimes crucial, for example when dealing with financial transactions), so
I would argue more care should be taken than if we were talking about general
purpose programming: if you trust an algorithm with making important business
decisions, then you better have an intimate knowledge of how it works.
While I again agree with the underlying sentiment, I was just a bit
disappointed that it seems to invite the reader to be satisfied of himself
rather than motivate him to dig deeper. Nothing a future blog post can't solve
though!
------
rdtsc
A lot of concepts are easier when you know how they work.
CPUs were magical for me before I took a computer architecture course. So was
AI and machine learning. Once you see the "trick" so to speak you lose some of
the initial awe.
------
aidos
Most of the comments on here are from people in the field of ML saying "this
is a toy example, ML is hard."
Maybe that's the case. And maybe the title of the submission ruffled some
feathers but the thrust of it is that ML is approachable. I'm sure there's
devil in the detail, but it's nice for people who are unfamiliar in a subject
to see it presented in a way that's more familiar to them with their current
background.
I have a university background in Maths and Comp Sci so I'm not scared of code
or mathematical notation. Maybe if I'd read the comments on here I'd get the
sense that ML is too vast and difficult to pick up. I'm doing Andrew Ng's
coursera course at the moment and so far it's all been very easy to
understand. I'm sure it gets harder (I even hope so) and maybe I'll never get
to the point where I'm expert at it, but it would be nicer to see more of a
nurturing environment on here instead of the knee jerk reactions this seems to
have inspired.
~~~
m_ke
Ng's coursera class is really dumbed down and although he gives a nice
intuitive explanation of the basics, it's nowhere near as hard as his actual
Stanford class (or any other ml class from a good engineering school).
~~~
mrcactu5
many people still can't pass it or understand it...
------
ronaldx
I'm cynical about how machine learning of this type might be used in practice
and this is an illustration of why: the stated goal is a "you might also like"
section.
There is no reason to believe the results are any better than a random method
in respect of the goal (and it's reasonable to believe they may be worse) - we
would have to measure this separately by clickthrough rate or user
satisfaction survey, perhaps.
I believe you would get far better results by always posting the three most
popular articles. If you want to personalise, post personally-unread articles.
A lot less technical work, a lot less on-the-fly calculation, a lot more
effective. The machine learning tools do not fit the goal.
The most effective real example of a "you might also like" section is the Mail
Online's Sidebar of Shame. As best as I can tell, they display their popular
articles in a fixed order.
Machine Learning seems to make it easy to answer the wrong question.
~~~
andreasvc
This is really only an argument that the example in the article is not
realistic (which it doesn't have to be, it might be expository). There are in
fact countless applications of machine learning in actual daily use, such as
detecting credit card fraud, where simpler manual methods would perform
measurably worse in terms of money lost.
~~~
ronaldx
Sure, there are realistic applications of Machine Learning with great results.
But the article has failed in its headline goal ("ML is easier than it looks")
if it chooses an example that is mathematically more complicated and still
less effective than a naive alternative.
The first difficult task is to identify an effective method.
------
agibsonccc
Wait till you have to hand craft your algorithms because the off the shelf
ones are too slow ;). In the end you can stand on the shoulders of giants all
day, but until you actually sit down and write an SVM or even something more
cutting edge like stacked deep autoencoders yourself, machine learning isn't
"easy".
In the end, libs are there for simpler use cases or educational purposes.
Realistically, that's more than good enough for 90% of people.
That being said, it's not impossible to learn. Oversimplifying the statistics,
tuning, and work that goes in to these algorithms you're using though? Not a
good idea.
------
danialtz
I recently read a book called "Data Smart" [1], where the author does k-means
and prediction algorithms literally in Excel. This was quite eye opening as
the view to ML is not so enigmatic to enter. However, the translation of your
data into a format/model to run ML is another challenge.
[1] [http://www.amazon.com/Data-Smart-Science-Transform-
Informati...](http://www.amazon.com/Data-Smart-Science-Transform-
Information/dp/111866146X)
------
panarky
Sure, some ML concepts are intuitive and accessible without advanced math.
But it would help to highlight some of the fundamental challenges of a
simplistic approach.
For example, how is the author computing the distance between points in
n-dimensional space?
And does this mean that a one-paragraph post and a ten-paragraph post on the
same topic probably wouldn't be clustered together?
~~~
dylandrop
Well I don't think any of the problems you mentioned are difficult to solve,
even by a non-seasoned programmer.
For your first question, just use Euclidean distance. (In other words,
sqrt((x1 - y1)^2 + (x2 - y2)^2 + ... + (xn - yn)^2). IMO, anyone who can
program can do that.
Also for your second question - probably - but he's using a very simplistic
model. To fix that, you simply fix the heuristic. For example, you could
instead score groups by how frequently certain important words occur, like
"team" and "support".
~~~
m_ke
You have binary features, why would you use euclidean distance?
~~~
dylandrop
Well in this case, you can also do the normal, in other words adding the
differences. The poster asked about finding the differences between points in
N-dimensional space in general (he didn't specify for the author's example).
But the idea is that your means wouldn't be bucketed into just a couple of
values, but have a larger distribution. In any case, it doesn't matter much
for this problem, but if you scored it by word count rather than a simple 1 or
0, it would matter.
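As a sketch of the "just add the differences" idea for binary features
(essentially a Hamming distance; same invented vectors as above):

    def hamming_distance(a, b):
        """Count positions where two equal-length binary vectors differ."""
        return sum(x != y for x, y in zip(a, b))

    post_a = [1, 0, 1, 1, 0]
    post_b = [1, 1, 0, 1, 0]
    print(hamming_distance(post_a, post_b))  # 2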
------
sytelus
Building a model on some training data is easy. Debugging it is _hard_. The
machine-learned model is essentially the equivalent of code that gives
probabilistic answers. But there are no breakpoints to set or watches to add.
When the model doesn't give the answer you expect, you need to do _statistical
debugging_. Is the problem in the training data, do you have correct sampling,
are you missing a feature, is your feature selection optimal, does your
featurizer have a bug somewhere? The possibilities are endless. Debugging an ML
model is a black art. Most ML "users" would simply give up if the model doesn't
work on some cases or if they added a cool feature but the results are not that
good. In my experience, fewer than 10% of people who claim ML expertise are
actually good at statistical debugging and able to identify the issue when the
model doesn't work as expected.
------
mau
tldr: the ML algorithms look hard when you read the papers, while the code looks
simpler and shorter, and you can get pretty decent results in a few lines of
R/Python/Ruby, so ML is not that complex.
I disagree in so many ways:
1\. Complex algorithms are usually very short in practice (e.g. Dijkstra's
shortest path or edit distance are the first that come to mind; see the sketch
below).
2\. ML is not just applying ML algorithms: you have to evaluate your results,
experiment with features, visualize data, think about what you can exploit, and
discover patterns that can improve your models.
3\. If you know the properties of the algorithms you are using, then you can
have insights that might help you improve your results drastically. It's very
easy to apply the right algorithms with the wrong normalizations and still get
decent results in some tests.
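To illustrate point 1, here is a textbook dynamic-programming version of edit
distance in Python (not taken from the article, just a standard sketch):

    def edit_distance(s, t):
        """Levenshtein distance between strings s and t."""
        prev = list(range(len(t) + 1))
        for i, cs in enumerate(s, 1):
            curr = [i]
            for j, ct in enumerate(t, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (cs != ct)))   # substitution
            prev = curr
        return prev[-1]

    print(edit_distance("kitten", "sitting"))  # 3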
------
hokkos
Matrix multiplication, orthonormal basis, triangular matrix, gradient descent,
integrals, Lebesgue mesure, convex, and the mathematical notation in the paper
are not harder than the code shown here. It is better to have solid prof of
what you are doing is sound and will converge before jumping into the code.
------
outworlder
I don't get all negative comments.
From my limited text comprehension abilities, the author did not say that the
whole field is trivial and that we should sack all academics.
Instead, the argument is that basic Machine Learning techniques are easy and
one shouldn't be afraid of applying them.
~~~
freshhawk
Likely this part at the beginning that exposed the author's anti-
intellectualism/ignorance:
"The majority of literature on machine learning, however, is riddled with
complex notation, formulae and superfluous language. It puts walls up around
fundamentally simple ideas."
That's what got me, it's such a flat out stupid thing to say. Especially the
"puts walls up" phrasing to imply that math beyond what the author understands
is a conspiracy. _Especially_ from someone who writes code ... which is full
of "complex notation", symbols and "superfluous classes/libraries/error
checking/abstractions" by the same absurd reasoning.
~~~
benofsky
Hi, author here. I love reading academic papers and I love Math. But it's an
unfortunate fact that academic papers make up the majority of the literature
on ML — and that their notation, and writing style exclude a large population
who would otherwise find them useful. That's all I'm trying to get at here.
~~~
freshhawk
They don't exclude people by using math any more than programmers exclude
people by using subclasses or anonymous functions.
I too would love to see more entry level ML resources, but we are talking
about academic papers here.
Imagine how stupid it would seem if some outsider came in to HN and started
telling programmers to dumb down their code when it's open source because
their advanced techniques are too hard for a novice to understand off the bat.
Instead of optimizing for efficiency, elegance and readability for their peers
- the people they are collaborating with to solve actual problems - they're
told to cater to novices always.
The language in your post is textbook anti-intellectualism, isn't it? And
strangely entitled. You would certainly not apply these criteria to code but
since it's not your field they must cater to your lack of experience? You know
better than they how to communicate ML research to other experts?
------
samspenc
Upvoted this for an interesting read, but I agree with the sentiments in the
comments that (1) ML is in general hard (2) some parts of ML are not that
hard, but are likely the minority (3) we are standing on the shoulders of
giants, who did the hard work.
------
BjoernKW
The fundamentals of linear algebra and statistics are indeed quite easy to
understand. Common concepts and algorithms such as cosine similarity and
k-means are very straightforward.
Seemingly arcane mathematical notation is what frightens off beginners in many
cases, though. Once you've understood that - for instance - a sum symbol
actually is nothing else but a loop, many things become a lot easier.
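For example, the weighted sum that shows up all over ML papers in sigma notation
is, in code, just a loop (a minimal sketch; the numbers are made up):

    # The paper notation sum_{i=1}^{n} w_i * x_i is just a loop over the data.
    def weighted_sum(weights, xs):
        total = 0.0
        for w, x in zip(weights, xs):
            total += w * x
        return total

    print(weighted_sum([0.2, 0.5, 0.3], [1.0, 2.0, 3.0]))  # 2.1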
However, the devil's in the details. Many edge cases and advanced methods of
machine learning are really hard to understand. Moreover, when 'good enough'
isn't just good enough any more things tend to become very complex very
quickly.
------
Irishsteve
The post does do a good job of showing how easy it is to implement knn.
The post doesn't really go into centroid selection or evaluation, or the fact
that clustering on text is going to be painful once you move to a larger
dataset.
------
gms
The difficult aspects take centre stage when things go wrong.
------
kamilafsar
Some while back I implemented k-means in JavaScript. It's a really simple,
straight forward algorithm which makes sense to me, as a visual thinker and a
non-mathematician. Check out the implementation here:
[https://github.com/kamilafsar/k-means-
visualizer/blob/master...](https://github.com/kamilafsar/k-means-
visualizer/blob/master/k-means.js)
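For readers who prefer Python, here is a similarly minimal k-means sketch (one
possible implementation, not the linked JavaScript code): assign each point to
its nearest centre, recompute each centre as the mean of its cluster, repeat.

    import random

    def kmeans(points, k, iterations=10):
        """Minimal k-means for 2D points given as (x, y) tuples."""
        centres = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:
                # Assign the point to its nearest centre (squared Euclidean distance).
                i = min(range(k), key=lambda c: (p[0] - centres[c][0]) ** 2
                                                + (p[1] - centres[c][1]) ** 2)
                clusters[i].append(p)
            # Move each centre to the mean of its cluster (keep it if the cluster is empty).
            centres = [(sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
                       if cl else centres[i]
                       for i, cl in enumerate(clusters)]
        return centres, clusters

    centres, clusters = kmeans([(1, 1), (1.5, 2), (0.5, 1.2), (8, 8), (9, 9)], k=2)
    print(centres)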
------
Toenex
I think this is one of the reasons why it should become standard practice to
provide code implementations of described algorithms. It not only provides an
executable demonstration of the algorithm but, as importantly, an alternative
description that may be more accessible to other audiences. It can also be
used as confirmation that what is intended is indeed what is being done.
------
adammil
It is nice to read about this in plain language. But, can someone explain what
the X and Y axis are meant to represent in the graph?
~~~
davmre
They don't mean anything in particular. The _actual_ analysis is being done in
a high-dimensional space, in which each post is represented by a high-
dimensional vector of the form [0,0,1,0,...., 0,1,0]. The length of the vector
is the total number of distinct words used across all blog posts (maybe
something like 30,000), and each entry is either 0 or 1 depending on whether
the corresponding word occurs in this post. All the distances and cluster
centers are actually being computed in this 30000-dimensional space; the two-
dimensional visualization is just for intuition.
If you're wondering how the author came up with the two-dimensional
representation, the article doesn't say, but it's likely he used something
like Principal Component Analysis
([http://en.wikipedia.org/wiki/Principal_component_analysis](http://en.wikipedia.org/wiki/Principal_component_analysis)).
This is a standard technique for dimensionality reduction, meaning that it
finds the "best" two-dimensional representation of the original
30,000-dimensional points, where "best" in this case means something like
"preserves distances", so that points that were nearby in the original space
are still relatively close in the low-dimensional representation.
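A rough sketch of that pipeline with scikit-learn, assuming the library is
available (the example posts are invented): binary word-presence vectors
followed by PCA down to two dimensions for plotting.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import PCA

    posts = ["the team shipped a new release",
             "our support team answered every ticket",
             "gradient descent converged after ten epochs"]

    # Binary bag-of-words: one column per distinct word, 1 if the word occurs in the post.
    vectors = CountVectorizer(binary=True).fit_transform(posts).toarray()

    # Project the high-dimensional vectors down to 2D for visualisation.
    coords = PCA(n_components=2).fit_transform(vectors)
    print(coords)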
------
pesenti
Of the two methods described - search vs. clustering - the first one - simpler
and not involving ML - is better for this use case. The only reason it seems
to give worse results is that it's only used with the titles and not the
full body (unlike the clustering approach). So I guess machine learning is
easier to misuse than it looks...
------
dweinus
They should try using tf-idf to create the initial representation of the
keywords per post. Also, I find there are many cases where applying machine
learning/statistics correctly is harder than it looks, this single case
notwithstanding.
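A minimal sketch of the tf-idf suggestion with scikit-learn's TfidfVectorizer,
assuming the library is available (the example posts are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer

    posts = ["the team shipped a new release",
             "our support team answered every ticket",
             "gradient descent converged after ten epochs"]

    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(posts)   # rows: posts, columns: per-word tf-idf weights
    print(sorted(vectorizer.vocabulary_))     # the learned vocabulary
    print(tfidf.toarray().round(2))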
------
Rickasaurus
It may be easier to do than it looks, but it's also harder to do well.
------
m_ke
This is as valid as someone stating that computer science is easy because they
know HTML.
------
fexl
I like the simple explanation of K-Means, and I like the contrast with the
dense set-theoretic language -- a prime example of "mathematosis" as W.V.O.
Quine put it.
~~~
fexl
I admit that was an unfair swipe at formalism, but I do think that formalism
is far more effective in the light of a clear, intuitive exposition that
conveys the concept straight into the mind. Formalism is good for battening
things down tightly, after the fact.
EBook - 12 Things you can do to Shorten Your Lead Time in Software Development - RexDixon
http://www.abtests.com/test/39002/product-for-ebook---12-things-you-can-do-to-shorten-your-lead-time-in-software-development
Test Details
Big Takeaway: 3D eBook picture provides better conversion/downloads.
Results Hypothesis: The book looks like a book when done in 3D. It's worth reading the original writeup on this one, where the author breaks down the test results by click. The page had three download links:
======
wgj
I'm really not sure what was intended by this link. It's not actually a link
to the eBook at all. However, the ABTests site itself looks interesting and
useful.
New Sublime Text update - pyed
https://www.sublimetext.com/3
======
Overtonwindow
I'm not a programmer, please forgive me, but I love using Sublime as a
writing tool for legislation and public policy in my work. The color coding
system popular in programming has been invaluable in drafting legislation.
~~~
dreen
Out of curiosity, which syntax highlighter are you using? as in for which
language? (it says which one in lower right corner)
~~~
mromanuk
super interesting! I'm thinking out loud: could it be possible that there is a
niche waiting to be untapped with new tech tools for lawyers (and other
professions)?
Edit: I mean, not high-tech, but tools with a higher level of complexity, where
the UI isn't a dumbed-down version and where the software exposes some
functionality to a more skilled user. I'm thinking the "millennial"
professional should be more comfortable with software and push the limit a
little bit.
~~~
bonyt
I'm a third year law student (well, my last day was this week!), and I've been
using Sublime to take notes and outline things for class. I use a custom
syntax highlighting (modified version of the markdown syntax) to make it easy
to read.
It is common for law students to digest course material into a short-ish
(20-30 pages, depending on the class) outline of the material, as a way of
studying. With my modified syntax highlighting config, I use different color
bars to represent different level headings to make it easy to see how my
document or outline is organized.[1] I then have a latex template for pandoc
which lets me convert it to a beautiful document that is useful during open-
book exams.
Using Sublime as a WYSIWYM editor is much more pleasant, as the editor is far
more responsive, than using a WYSIWYG editor (like MS Word). I actually
recently wrote a paper for class entirely in Markdown in Sublime. Pandoc lets
you convert markdown to PDF (it uses latex internally), and when you convert
to docx format for Microsoft Word, you can use a reference file to define the
style formats. It's really easy to write something like a brief or a memo and
convert it with a reference file to a format that others can work with,
properly styled.
[1]: Here is a screenshot of what my editor looks like:
[http://i.imgur.com/xU9eSwt.png](http://i.imgur.com/xU9eSwt.png) :)
~~~
kochthesecond
This is pretty neat. As a programmer, my editor not only color codes, but it
gives me sort of hyperlinks to other definitions i am using/referencing, both
to others work and my own. It also provides me hints and help about the
current context i am writing about (working in), and it is often very relevant
to what I am currently working on. that help is invaluable.
------
micnguyen
The company behind Sublime really fascinates me.
Given how many developers I've seen use Sublime, in the modern age of social
media I'm so surprised SublimeHQ is still invisible. They hardly do any
marketing that I've seen online, no social media engagement, nothing. Not
necessarily a bad thing, but Sublime just seemed -primed- to be that sort of
company with a hyper-engaged user base.
~~~
ryanmaynard
IIRC, it's just one person: Jon Skinner.
~~~
bshimmin
Definitely at least one other person:
[http://www.sublimetext.com/blog/articles/sublime-
text-3-buil...](http://www.sublimetext.com/blog/articles/sublime-
text-3-build-3103)
And despite their lack of "social media engagement", they do have a Twitter
account (@sublimehq) with nearly sixty thousand followers - and they tweet
just about as regularly as updates for Sublime Text are released!
~~~
nikolay
Jon Skinner [0]
Will Bond [1]
Jon's only GitHub project is funny: [2]
[0]: [https://github.com/jskinner](https://github.com/jskinner)
[1]: [https://github.com/wbond](https://github.com/wbond)
[2]: [https://github.com/jskinner/test1](https://github.com/jskinner/test1)
------
keithnz
loved my time with sublime, then atom crossed a line where, while not as good
as sublime in some areas, it got good in other areas... then... vscode came
along and showed how snappy an electron app could be, but didn't have a lot of
stuff... then bam, it opened up, and while it still doesn't have as many toys as
Atom and isn't quite as slick as sublime, it hits a bit of a sweet spot
somewhere in between.
~~~
usaphp
can you tell me what exactly in Atom or vscode is worth switching from ST?
~~~
geoffpado
The fact that a minor update got released isn't a rare enough occurrence to
warrant a highly-voted post on HN.
Snark aside, VSC has grown up quite a lot, very quickly. If you like Sublime,
good for you, stick with it. But it looks like VSC is going to pick up support
for more new things, faster. Personal favorite: support exists to plug in
third-party debuggers and get debug tools/build errors inside VSC.
~~~
coldtea
> _The fact that a minor update got released isn 't a rare enough occurrence
> to warrant a highly-voted post on HN._
So, it's not mature enough and still adds things regularly, including still
struggling with speed issues?
How often does Vim have updates released? Who even cares for most of them?
~~~
Artemis2
Well, Sublime Text 3 is still a beta.
[https://github.com/vim/vim/releases](https://github.com/vim/vim/releases)
~~~
coldtea
That doesn't mean much, it's just a label. ST3 has been stable (and my main
editor) for 2+ years now.
React was 0.15 until recently, and Node was adopted by major companies like
Microsoft at 0.xx versions.
~~~
Artemis2
The way people number their versions does not mean anything (React is actually
in v15 now). 0.xx has no meaning, except if the developer of the software
explicitly indicates it's unfinished. Version numbers are entirely subjective
by default.
I agree with you on the stability of Sublime Text, but the label beta, unlike
version numbers, has a consistent meaning: it's not finished.
------
donatj
I forget that not everyone is in the dev channel and was wondering why this
was on the front page. There are new dev builds every couple of days. Didn't
realize there hadn't been a stable release in a long time.
~~~
HCIdivision17
Looking at the change list, it notes that there won't be artifacting with
theme changes any more? If so that's freaking _huge_. For a while there it
would start up, the package manager would update stuff, and then the interface
would sorta crap itself. A quick reload fixed it, but damn annoying.
Would you suggest switching to the dev builds? I really wouldn't mind being on
the bleeding edge if it meant getting fixes like that sooner.
~~~
coldtea
> _Looking at the change list, it notes that there won 't be artifacting with
> theme changes any more? If so that's freaking huge. For a while there it
> would start up, the package manager would update stuff, and then the
> interface would sorta crap itself._
How is that "huge"? It happened at most 1-2 times a year, and a simple 1
second restart fixed it -- and still left all the files you had loaded.
~~~
HCIdivision17
Due to making a backup shortly before a breaking glitch, it actually happened a
whole bunch of times across a whole bunch of computers. Sublime's licensed to
the user, not the machine, so over a few months while I was setting up a slew
of machines I kept forgetting about it when I restored my app directory. Add
that one of the packages would freak out when curl wasn't already installed
first and it was a mess. Sublime would start, update, screw up the interface,
popup, and then I'd have to close it and restart a few times until the package
manager got it up to a stable point.
I say _huge_ because it was _god damn irritating_. In a giant checklist of
stuff I was worried about, it was always an afterthought that caught me by
surprise. When the artifacting completely craps out the side bar such that the
folder tree no longer updates when files change, you can't even tell if the
files are still there.
EDIT: To really drive the point home: this happened _a whole lot_. The above
is a story of woe and sadness, but the real kicker was simply the autoupdate
as noted elsewhere by `jbrooksuk. For a long time I used the Soda Theme [0],
one of the top packages on Package Control. It had a weird glitch with hotkeys
and would end up reloading shortly after startup. It was possible that it
would garf things up even on restart as a result (possibly as a race condition
such that if no updates were performed, it restarted fast enough to not bork
whatever drew the interface). I used the Soda theme for years, but have recently
switched to Seti and enjoy the change quite a bit.
[0]
[https://packagecontrol.io/packages/Theme%20-%20Soda](https://packagecontrol.io/packages/Theme%20-%20Soda)
[1]
[https://packagecontrol.io/packages/Seti_UI](https://packagecontrol.io/packages/Seti_UI)
~~~
andromeduck
Whoa, that is a pretty nice theme
------
ihsw
> Themes may now be switched on the fly without artifacts
Excellent news.
Switching themes and seeing the UI barf is unsightly.
~~~
chatmasta
Why do you switch themes? Night time and day time?
~~~
dreamsofdragons
While not a sublime, I frequently switch iterm's themes when I code outdoors.
A dark background is my preference, but a light theme works much better in
sunlight.
------
bsclifton
If the author of Sublime is reading this thread, please know that I love your
work! I happily paid the $70 and use this as my primary editor (except when
using vim when working over SSH).
I'd also like to see more activity on the Twitter account or just more
engagement with the community. You've got a killer product :)
------
baldfat
RANT: "Ubuntu 64 bit - also available as a tarball for other Linux
distributions."
Linux requires THREE files (.deb, .rpm and a .tar). I personally use OpenSUSE
and can easily compile the software, BUT you are not "supporting" Linux when
you only support Ubuntu.
~~~
falcolas
So, they have two out of three then. Does there appear to be any reason you
couldn't use the .deb for Ubuntu on Debian? Perhaps someone would be willing
to take the tarball and make an RPM file for him to host (or set up the tooling
to make it dead simple for the author).
~~~
baldfat
So not supporting Red Hat, the #1 commercial Linux product, is fine? The vast
majority of commercial Linux is RPM-based.
The issue is that they should just automatically build an RPM along with the
DEB; if you have a DEB it isn't difficult to do an RPM.
------
PhasmaFelis
I was trying to choose a Mac text editor recently; got it down to Atom and
Sublime, and then discovered that _both_ of them do code folding wrong. They
think it's based on indentation, not syntax, so if you try to fold something
like:
if (something) {
some code
//commented-out code
more code
}
It folds everything from the opening bracket to the comment, then stops.
Both editors have had issues filed over this bug for years, which have been
ignored.
([https://github.com/atom/atom/issues/3442](https://github.com/atom/atom/issues/3442),
[https://github.com/SublimeTextIssues/Core/issues/101](https://github.com/SublimeTextIssues/Core/issues/101))
I eventually decided to go with Atom; it's open-source, so I can at least
aspire to fix the damn thing myself when I get annoyed enough.
~~~
coldtea
And why start the comment ahead of the indentation level?
~~~
PhasmaFelis
I didn't write the code in question, but I've done the same thing before.
Usually it's because I'm cursoring up and down a file, see some lines I want
to comment out, and don't bother moving over to the start of the text first.
Syntax highlighting greys out the comment, so it's not visually obtrusive.
(Which is another reason I'm baffled that Atom and Sublime both misbehave this
way--they've _got_ syntax highlighting! They're already aware of block
delimiters! Why don't they apply that knowledge to code folding?)
And I understand there are other reasons to not indent. In some languages,
it's conventional to have certain kinds of declarations come at the beginning
of a line, even if they're inside a block. (I think C++ does this, but I
haven't used it since college--anyone want to comment?)
------
bsbechtel
I recently updated from Sublime Text 2 to ST3. One thing I loved about version
2 was that it was insanely fast. Version 3 is significantly slower. Can anyone
point me to any resources that might help me determine the cause of the slowdown?
~~~
kmfrk
I had to disable all my linters, because I also ran into severe input lag. It
fixed the worst input lag, but I still have no idea what exactly the culprit
was.
Really wish packages could be subjected to some kind of performance benchmark
to shame the worst offenders into fixing their isht.
------
wiesson
Why is it so much faster than atom?
~~~
trymas
Because C (or C++, I do not know exactly) and not vast amounts of layers of
abstraction?
~~~
Hurtak
I thought Sublime is written mainly in Python?
~~~
Terribledactyl
It implements a Python interface so the plugin ecosystem uses it, but the core
of it is c++ afaik.
------
marcosscriven
Regarding the comparisons with Atom, can I take the opportunity to ask if
anyone from ST could expand on their answer here[0] as how they go about
making a custom OpenGL interface?
Do they use GLEW? Skia? How do they ensure anti-aliasing is done right?
[0]
[https://news.ycombinator.com/item?id=2822114](https://news.ycombinator.com/item?id=2822114)
~~~
jskinner
Sublime Text 1 used OpenGL (1.1 only, hence no GLEW) and Direct3D, while 2 and
3 are mostly software only.
While Sublime Text 2 and 3 do use Skia, only a tiny fraction of its
functionality is actually used, just for rasterizing lines and blitting
images. Font rendering is done by using the underlying platform APIs to do the
glyph rasterisation.
~~~
marcosscriven
Wow, thanks for responding! Very impressive work.
~~~
billforsternz
It seems Jon comments on hacker news about once a year. So you win the hacker
news lottery!
------
bsimpson
In case anyone was wondering, the new JavaScript syntax highlighter appears to
understand ES6, but chokes on JSX. If you're writing React, the Babel package
is still a good idea.
~~~
bpicolo
Makes sense. JSX isn't javascript. :P
------
oneeyedpigeon
Is there a roadmap anywhere? I bought ST2, and I think I'm going to need to
pay again for ST3 when I decide to upgrade, but it's still in beta and has
been for over 3 years now. Would be nice to know when it will eventually be
released.
~~~
oliwarner
I don't think so, it's the sort of software that's ready when it's ready. You
can _use_ the pre-release versions of ST3 (ie what's currently available) on a
ST2 license. That gets rid of all the nag screens.
I bought ST2 too in 2012 after a few weeks using it. It was certainly an
expense but given how much it was speeding some things up, I considered it
very good value.
I've used it for _at least_ 6,000 hours since then, probably closer to twice
that. I'm not going to fight an upgrade when it eventually gets here.
------
nikolay
I wonder what kind of release process this guy has as the dev version always
lags behind stable -
[https://www.sublimetext.com/3dev](https://www.sublimetext.com/3dev)
~~~
coldtea
Who told you it lags? Version numbers? That's because stable version X+1 comes
after dev version X.
This doesn't mean it lags, just that the dev version came out first, and only
after it was checked for a while, it was deemed worthy to hit the stable
(well, actually beta) channel.
That's not different than any other project I know.
~~~
nikolay
I don't look at the version number only, but at the changelog, too, but didn't
pay much attention, until now.
Most sane projects create a build and test it, and don't rely on testing
source code because the build process itself can cause defects as well.
What we've been testing as a dev build should have been promoted to stable,
but, I realize now, he has the changelog embedded in the executable, and it's
possibly merged across dev builds. He also has differences in the
functionality between the dev and beta builds as you can't run the dev without
a key.
~~~
SyneRyder
It sounds like his approach is similar to the Windows 10 Insider Builds model
of Fast Ring & Slow Ring. Even if all the Sublime builds in Dev are stable, I
don't want to be updating every 2 days to get some minor bugfix. When I launch
Sublime, I mostly just want to get to straight to work, not be thrown out of
the zone by an update.
It took me a while to learn that lesson from my own customers. As a developer
I'm excited to make sure customers always get new features ASAP, but customers
would complain they were perfectly happy with what they'd bought & only wanted
an update maybe once or twice a year. But everyone's different - that's why
the option of a Dev/Beta channel & a Stable channel is good.
~~~
nikolay
No, the dev/beta channels are a good idea. I think nginx has a similar release
process as their stable now is ahead of their mainline as well.
------
m_mueller
Is it just me or did this break the python syntax highlighter for line
continuations within strings? I had to install MagicPython to fix it.
~~~
bpicolo
In strings or docstrings?
~~~
m_mueller
normal strings with "\" line continuations.
------
wrcwill
Anyone know why this update changed highlighting for me?
Tomorrow Night Scheme, strings in double quotes are now gray instead of
green?..
~~~
dsego
Chalk it up to syntax highlighting improvements, I guess.
------
bobsoap
PSA: If you haven't updated yet, I'd wait. The new version apparently broke
syntax highlighting for many languages. Check out their support forum[1] -
it's causing ripples.
[1] [https://forum.sublimetext.com/c/technical-
support](https://forum.sublimetext.com/c/technical-support)
------
thincan11
Is sublime working on something big?
~~~
eddiecalzone
No.
------
Fizzadar
:( Python docstrings are no longer coloured as strings, but comments...
~~~
anentropic
I fixed that myself in ST2 in the syntax definition files
------
vyacheslavl
ST now supports async/await keywords in python! yay!
------
zyxley
It's good to see more ST updates, but it's sad that it feels like they're only
even bothering because of the popularity of Atom.
~~~
usaphp
What exactly are you looking for to be improved in ST? I just can't find
anything that I think can be any better; it's just perfect for me already. I've
tried using Atom but always went back to ST after a week because of Atom's poor
performance and battery drain.
~~~
elliotec
Yeah Atom still has issues. I've had crashing problems on 4 year old Macs.
But ST makes you pay $70. And also isn't open source. And more people know JS
than Python to hack on it. And Github has a powerful marketing team and lots
of resources to put behind Atom and it's development. And ST doesn't get
updated too often.
~~~
cageface
$70 is absolutely peanuts for a tool that you use for hours every day.
The main reason so much modern software is bad is that people aren't willing
to pay for even a fraction of the actual value it delivers. I would happily
pay $700 for my main code editor if that would guarantee regular updates and
new features.
~~~
jethro_tell
To your point I think a lot of people would pay that kind of money if it
guaranteed the company would both, continue to update and continue in its
current direction. But that really hasn't been the case with most projects,
and the open source products tend to have better shelf life.
I am happy to pay for these kind of things but rarely in a big upfront block
like that. I doubt I'm the only one.
~~~
drewcrawford
Very few people are willing to pay the kind of money that would actually
sustain a company.
Let's run the math, and see if we can build a business around a text editor.
Let's say we can get away with 2 good developers (across 3 platforms, that
seems difficult to me). Let's further stipulate they are willing to take a pay
cut from their previous life at Facebook and are willing to do this as a labor
of love. 90k/yr salary, 180k/yr for both.
Now we need a technical writer, to handle API documentation etc. Let's say we
can get a CS major to do it part-time at 20 hrs/week at $20/hr. 20k/yr.
We also need a sysadmin to wrangle the CI setup, website hosting, install
updates, setup email, etc. Suppose they automated everything in
Ansible/Docker/Kubernetes/whatever_the_cool_kids_do_these_days and so we only
need 10 hrs/week at $50/hr. 26k/yr.
We need a QA engineer to bang on things and file proper bugs. $50k/yr. Let's
further stipulate that this same person will handle customer support, because
we're a lean startup and combine multiple roles in the same individual.
We need somebody to handle marketing (or maybe direct sales, since it's a $700
pricepoint). Let's call it another $50k/yr.
We're now up to $325k/yr. Traditionally we would now have to lease office
space, buy macs, call our AWS sales rep, etc., but let's assume for this
conversation everybody works from home, they have their own equipment, and we
got free startup AWS credit, which is not very realistic at all, but whatever.
So let's stipulate this is a stable burn rate. Nobody will get poached by
Facebook, nobody needs to raise a round, if we can pull in _only_ 325k/yr, we
can do this forever.
When we sell our $700/license, let's say we net 75%. That is actually very
high: most software companies net around 50-60%, because the App Store,
WalMart, the state, etc., take their pound of flesh. But our $700 text editor
is so amazing that people will buy it direct from us, we will have a strong
brand, _handwave_. So we take home an incredible $525.
We now need to sell 700 licenses year-over-year in order to keep this thing
going. Not even 700 licenses _one-time_ , but we actually need to close 60
licenses a month, 2 licenses a day, 365 days a year. To do that, we need a
sales process.
I would love to live in a world where a salesman knocks on your door, does a
2-hour demo, and at the end you have 50% conversion to writing him a $700
check. I just don't live in that world. And that's the kind of sales process
you'd need to _sustainably_ develop a text editor.
Anything short of the exercise I just went through will result in a failed
product. A developer might take a pay cut for a year but will lose interest if
they're well below market. If we don't hire good QA then our quality will be
shit and not worth $700, just think of all the shitty software you use
already. If we don't have support then nobody will want to spend $700 to get
their emails ignored. If we fund via VC then we will have to blog about Our
Incredible Journey the next time Google Docs needs an engineer.
The result of this analysis is the current market. The only way to develop a
text editor is either all volunteers (like Atom), a small part of a large
corporation's marketing budget (VSCode), or a labor of love from one and a
half underfunded and overworked people who we somehow expect to stop being
underfunded and overworked even though we only paid them like half a billable
hour several years ago (ST).
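Restating the arithmetic above as a quick sketch (numbers copied from the
comment; the yearly license figure there is rounded up):

    # Hypothetical yearly burn from the comment above.
    burn = 180_000 + 20_000 + 26_000 + 50_000 + 50_000   # ~326k/yr ("325k/yr" above)

    price, margin = 700, 0.75
    net_per_license = price * margin                      # $525
    licenses_per_year = burn / net_per_license            # ~620, rounded up to ~700 above
    print(net_per_license, round(licenses_per_year), round(licenses_per_year / 12))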
~~~
coldtea
You DO know that apart from a programmer and a helper sales guy, ST has none
of those other "roles".
And judging from the community size, polls, and similar numbers from other
projects, it has sold at least 10,000 licenses (x$70 -> $700,000) and in all
probability many more.
> _The result of this analysis is the current market. The only way to develop
> a text editor is either all volunteers (like Atom), a small part of a large
> corporation 's marketing budget (VSCode), or a labor of love from one and a
> half underfunded and overworked people who we somehow expect to stop being
> underfunded and overworked even though we only paid them like half a
> billable hour several years ago (ST)._
Maybe it's the analysis that's way off base, especially the dev numbers.
In fact the whole breakdown sounds ludicrous, like assuming the only dev work
out there is done in cushy Facebook style jobs in the Valley or with
extravagant VC money.
In fact tons of successful indie apps, not just editors, fly in the face of
all you wrote above. No reason to believe a graphics editor (e.g. Acorn) or
VST plugin (e.g. uHe Diva) or FTP editor (e.g. Transmit) has many more
potential users than a language/OS-agnostic programming editor. And yet, all
these companies exist for years, and even have several employees and nice
offices.
As for editors, there's not only IntelliJ, that has tons of people working for
it and is doing just fine, but other long time companies, like UltraEdit (that
boasts 2 million users), a whole ecosystem around paid Eclipse plugins, etc.
------
wnevets
the 3.0 dev channel has been fairly active, 13 updates this year.
------
_RPM
The only proprietary software that I use as a matter of choice is Microsoft
products and VMWare products. For something as trivial as a text editor, I
wouldn't dare use something closed-source and proprietary like this.
~~~
haswell
I don't think it's quite fair to classify Sublime as a text editor. Your
comment trivializes the quality of the editing experience, the rich plugin
ecosystem, the speed of the editor compared to common competitors, etc.
You are of course entitled to your preferences, but I'm not sure why you
wouldn't "dare" use this?
~~~
ubernostrum
Emacs and vim are classified as text editors, and Sublime is a small fraction
of what Emacs and vim are.
~~~
pizza
To be fair, the small fraction of features that Sublime shares with Emacs and
vim is hardly insufficient for a good editor.
Robert Stone’s Bad Trips - samclemens
https://newrepublic.com/article/157573/robert-stone-madison-smartt-bell-biography-book-review
======
jonnypotty
Thanks for this. Never read any Robert stone but will now.
Ask HN: Why don't browsers display tabs on the left? - baby
I've been using "Tree Style Tabs" for years on Firefox. This is what is pushing me away from Chrome (which I use every day but very lightly, since it gets too messy when I have too many tabs open). I still don't get why it's not the default, or why it's not available right away.
======
antidoh
I stay with Firefox for daily use because I like a lot of the available
addons; it makes it _my_ browser, not just Firefox. Chrome just doesn't have
enough specific addons or customization for it to ever feel like _my_ browser.
Tree Style Tabs is always the first addon that I install in a new Firefox,
followed closely by All In One Sidebar, It's All Text and Uppity. Then come
Pinboard, and adding DDG to the search bar.
I've tried many times to like Chrome, but being a bit faster doesn't make up
for less customization, for me.
------
shyn3
It only makes sense to have a side tab panel considering everyone has a
widescreen display.
------
tnorthcutt
Because text is read horizontally, not vertically, so horizontally oriented
title areas are more natural. Same reason applications put menus at the top,
titles at the top, etc.
~~~
baby
tabs on the left are read horizontally. Check "Tree Style Tabs" to see what I
mean.
------
duiker101
because it occupies too much space? i do not know actually, i never tried it
but i usually do not have so many tabs to require a whole column,most of the
space would be blank. But maybe i do not have so many tabs because i do not
have space.
~~~
baby
Screens nowadays tend to be wider than they are tall, so there's usually
extra awkward space on the sides of a website. Which is perfect for putting
tabs in.
I've seen that in a presentation from Firefox years ago. I actually thought it
was clever; I still don't get why they haven't implemented it after all those
years.
------
cutie
I like the idea of it but it looks a bit ugly, since the boxes don't line up.
The challenging task of sorting colours - signa11
http://www.alanzucconi.com/2015/09/30/colour-sorting/
======
jugad
This is pretty cool.
4 years ago, I spent a few days playing with colors to come up with an algo
which allows us to automatically generate a set of colors for objects in our
3D scene. Some of the algos considered were to sort the colors in some way
which includes the darker and lighter colors, and still allows us to choose n
different colors from the whole range.
The result was good but not as great as we expected. The conclusion was that
our algorithms are still not capable of replacing human-chosen color themes.
Maybe if we had more time, or an intern with such an inclination for color
selection, we might have gotten farther.
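One simple version of that idea, as a hedged sketch (not the commenter's actual
algorithm): sample n evenly spaced hues in HSV and convert them to RGB.

    import colorsys

    def distinct_colors(n, saturation=0.65, value=0.9):
        """Generate n visually distinct RGB colors by spacing hues evenly."""
        return [colorsys.hsv_to_rgb(i / n, saturation, value) for i in range(n)]

    for r, g, b in distinct_colors(5):
        print(f"#{int(r * 255):02x}{int(g * 255):02x}{int(b * 255):02x}")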
Path to Success for One Palestinian Hacker: Publicly Owning Mark Zuckerberg - graeham
http://www.wired.com/threatlevel/2013/10/facebook_hacker/
======
simonswords82
If nothing else I hope this guy gets to continue what he so clearly loves
doing and doesn't have to switch back to hard labour to make ends meet.
Facebook dropped the PR ball on this one (again). A simple public
acknowledgement of the security hole, $500 sent to the ethical hacker and a
thank you note from Mark would have been sufficient.
~~~
therobot24
funny how getting $500 from a multi-billion company is pulling teeth,
especially when it's an easy PR move
How to Get Microsoft Office at 91 Percent Off - edw519
http://www.theultimatesteal.com/store/msshus/ContentTheme/pbPage.microsoft_office_ultimate
======
rantfoil
To this day I will not understand why Microsoft product managers like to use
non-microsoft.com domains for their promotions. It's just not a good idea --
so much trust is involved in a good trusted URL -- why throw it away?
~~~
wanorris
My guess would be that it has to do with the red tape involved in getting
something authorized to go up on whatever.microsoft.com.
~~~
rantfoil
You're right, that's probably it. Sadness.
------
nostrademons
This is a ripoff. I paid $5 for my copy. College site license. And it's still
legally mine (well, insofar as any software is...damn EULAs) even after
graduation.
~~~
rms
If this comes in a retail box and has a normal license on the box it could be
ebay'd for a lot more than $59.99.
------
rms
Was this in retail box? Can't believe I missed it.
------
weegee
too bad the offer ended April 30, 2008
Israel Asked Facebook CEO to Remove “Third Palestinian Intifada” Page - ArabGeek
http://arabcrunch.com/2011/03/israel-asked-facebook-ceo-mark-zuckerberg-to-remove-page-calling-for-third-palestinian-intifada.html
======
yuvadam
I'll give a slightly political insight on this issue, from within Israel.
The past few weeks have seen an ongoing effort in Israel to report said page
to Facebook. There are two questions to be asked here.
First, is this a call for violence? There is no clear cut answer. "Intifada",
in Arabic, means "uprising". Sure, the past two Intifadas (1987-1993 and
2000-2005) had a violent angle to them. But it seems the Palestinians, at this
stage, have established that violence and terror do not help their cause. They
understand very well that they can win (and _are winning_ ) Israel in the
international diplomacy arena. So, violence is probably not the intent, but I
find it hard to imagine a peaceful uprising, that does not expand to violent
acts.
Second, why the uproar in Israel? This seems to me like a classic oppressive
maneuver by the Israeli crowd. The status-quo in the peace talks with
Palestinians is largely to be blamed on current and previous Israeli
governments. Further expansion of settlements in the West Bank, with total
disregard to previous understandings with the international community does not
help. Sure, the Palestinians have their share of the blame, but it is clear
the Israeli government is doing nothing to help the situation.
The better part of the Israelis seem to think it's perfectly possible to
maintain this status quo, all the while oppressing the Palestinians'
aspirations towards an independent state. This will not happen. As long as the
occupation continues, and peace talks are stalled, the Third Intifada,
Facebook'd or not, is inevitable.
------
donnyg107
HN guidelines (in discussion of what is not proper to post): "Off-Topic: Most
stories about politics, or crime, or sports, unless they're evidence of some
interesting new phenomenon." Please do not post politically charged links
again.
------
ArabGeek
First time a government official asks facebook to remove a page?
~~~
maratd
Second time you post this anti-semitic garbage?
~~~
joe_the_user
Can we get a reference on how a page detailing an Israeli request for the
removal of a Facebook page is antisemitic?
You might agree with that request. Feel free to tell us why.
But by calling the _post itself_ antisemitic, are you not saying that _letting
people know what the Israeli government is doing_ is, in itself, antisemitic??
Seems rather problematic...
~~~
donnyg107
The problem is the claim that this page suggests no acts of violence toward
the Israeli people. In fact, I'd imagine the Israeli government would be quite
alright with what it believed to be a peaceful Palestinian rally. I don't
believe they would speak up to the CEO of Facebook over something they believe
to be benign. This post has nothing to do with technology, and in essence
encourages the encouragement of violence, which should not be mistaken for
something purely informative. This does not follow HN guidelines, Arabgeek, as
it encourages political discussion rather than technology discussion. Please do
not post politically charged links again.
How the Japanese IT Industry Destroys Talent - jbm
http://www.japaninc.com/node/2674
======
flocial
Japan's IT industry is so thoroughly messed up I don't even know where to
begin. However, for the scope of this article, I would say that Japanese
enterprise solution companies (this is where the so-called "system
integrators" live) is a tangled, incestuous mess. I'm only going to talk about
one small aspect of it. One of the great tragedies of the Japanese software
industry is how it seems to have taken on the worst aspects of the
construction industry and the manufacturing industry. Many of the large
companies dominate the industry, at least in terms of large contracts from
major corporations and government. Some names that come to mind are NTT (the
telecoms giant has a whole universe of IT subsidiaries), NEC, Fujitsu, etc.
Yes, all companies from Japan's past glory days. Naturally, these companies
get the lion's share of stable income. Of course, they have more work than
they can handle or do so they naturally close the contract THEN find a company
to implement it for them.
Middlemen Casual students of Japan might have heard of "shitauke" which
roughly equals outsourcing but the concept is completely different. Just as
many Japanese car manufacturers have their own go to parts makers, these parts
makers might have their own go to parts makers ("magouke" or grand children
outsourcing). So basically, you have this massive web of IT companies that are
inter-connected and related in some way.
So big corporations get contracts, little companies get the scraps. This
distorts the market because naturally the companies with big name brand value
keep a greater share of the profit while the weaker outsourcing companies make
it happen. So even within these outsourcing companies, they might try to cope
by hiring temp programmers (yes, this is common in Japan and they live in
internet cafes no less) or even outsourcing it to a third world country.
In most cases, the big IT corporations (that hire top recruits from the best
colleges) are filled to the brim with project managers who might have the raw
ability or potential to code but spend their front line career simply writing
up specs and enforcing ridiculous deadlines.
Anecdotally, many symptoms of this malaise abound: ATMs malfunctioning
because of bad system code (prevalent when upgrading systems created by mega
bank mergers, the latest and most notorious being on the recent earthquake
when a donation account took down the entire bank), the Tokyo Stock Exchange
shutting down from massive trade volume (despite having nothing like American
high frequency trading), the pension system database not designed to give
people unique keys (or something like that), etc.
~~~
nandemo
> Just as many Japanese car manufacturers have their own go to parts makers,
> these parts makers might have their own go to parts makers ("magouke" or
> grand children outsourcing).
I've worked for a "great-grandchild" outsourcer. Megacorp A outsourced to B,
which hired C, which hired my company. I actually worked at A's offices so
personally it was a good experience for me, though my salary was crap --
around my 2 year anniversary I talked to a recruiter, who kindly informed me
that sanitation workers in the US earned about the same as me.
Once I went to Europe for a business trip. Though A's employees had all their
expenses paid, my own contract with my company didn't predict that clients
would ask grunts like me to go on international business trips, so they only
paid a fixed daily allowance in yen (for meals and the like). To make it
worse, the euro was really high against the yen, so my allowance amounted to 7
euros or so. After 2 weeks in Europe I was in the red. It was still a good
experience though, and I always boast about it in job interviews in Japan.
~~~
flocial
Yeah, I've worked in fancy offices too (in my case earning more on contract
than regular employees, without plump benefits but hey). My experience is that
being the outsourced-to party is a polarized experience. You're either a hired
gun with special status or some dispensable clean-up crew. It's a symptom of the
rigid labor laws that keep everyone from achieving their potential.
------
geolqued
Actual Article <http://www.japaninc.com/mgz_nov-dec_2007_it-talent>
Also Part I - My struggle at the Frontline of Japanese Enterprise IT
[http://www.japaninc.com/mgz_spring_2007_frontline_japanese_i...](http://www.japaninc.com/mgz_spring_2007_frontline_japanese_it)
------
mwill
Aside from this, can anyone in the know give some info about the startup
community/hacker culture in Japan. Is there one?
I'm in Australia, and I've always had a suspicion that China or Japan would be
the closest hub of innovation and interesting stuff outside of Aus.
~~~
flocial
It's not nearly as open as what you'd expect in other countries. The hacker
community is really inward-looking, like Japan as a whole. You could check out
ON Lab. Hiro Maeda runs it and is a cool guy educated in the States.
<http://onlab.jp/>
------
epe
Apparently it's managed to destroy even the minimal level of talent needed to
construct a readable PNG.
------
edderly
"Instead of being evaluated on their capability to manage the overall system
architecture, Japanese IT project managers are often assessed on how they can
personally relate to the team members. Taking team members out for a drink,
listening to their personal issues, serving as both counselor and cheerleader,
are important to strengthen a project manager’s people network. "
OTOH, I've worked for several western corporate orgs where we have no shortage
of PMs, stuck in meetings and forwarding emails and not doing any of the
above.
+1 Japan
------
vph
as the main picture is too small to view, it remains a mystery why the
Japanese IT industry destroys talent.
~~~
5hoom
I zoomed right in and found it has something to do with three spirals and
pixelated text surrounded by jpeg artifacts...
------
droithomme
1\. This article is four years old, it was published in 2007. Have things
changed since then?
2\. The graph is unreadable and clicking to get a larger image goes through a
cycle of pages that don't have larger images but take a long time to load.
------
MikeMacMan
There are a few currents working against startups in Japan:
\- Smart, ambitious people tend to join large, prestigious corporations, or
government agencies
\- Up until a few years ago, forming a company was very expensive (3 million
yen minimum)
\- Seniority is a powerful force in the Japanese workplace, which I suspect
prevents young college grads from introducing the latest technologies in their
companies.
\- Japan has struggled to transition to a post-industrial economy, and IT is
still dominated by hardware companies.
------
cubicle67
any opinion from Patrick or any other HNers in JP? The article seems plausible
but is the opposite of what I'd have expected
~~~
patio11
I have a different take. Should write a blog post some day.
The usual disclaimer: Japan is a big place and not all Japanese
companies/people act the same, just like not every US company is Google.
------
suyash
I thought Ruby on Rails came from Japan. Hard to believe they don't contribute
to Open Source - per article.
~~~
weiran
I thought it was just the Ruby language, not the Rails framework.
~~~
bitops
That's correct - Ruby is from Japan, Rails was developed by a Danish
consultant working for a US-based firm. (He now lives in Chicago I believe).
~~~
enry_straker
Ruby Creator and Chief Designer - Yukihiro Matsumoto
Rails Creator and Maintainer - David Heinemeier Hansson
------
Maven911
can i get a cliff notes...
~~~
latch
"To summarize, Japanese corporations lack concrete IT strategies and the
ability to envision the appropriate enterprise architecture that aligns with
their business needs. As a result, SI services vendors adopt a ‘body shop’
strategy that gives no incentive for engineers to polish their skills.
Japanese software vendors are not encouraged to provide solutions with the
latest architecture that meets the needs of the global market. Being locked
into such a vicious cycle, even the most talented engineers have very little
opportunity to develop their skills to a world-class level."
Sim2Real – Using Simulation to Train Real-Life Grasping Robots - rreichman
https://www.lyrn.ai/2018/12/30/sim2real-using-simulation-to-train-real-life-grasping-robots/
======
rreichman
I found this paper to be very cool. Happy to answer any questions you may have
on it.
An Oral History of Unix - jacquesm
http://www.princeton.edu/~hos/Mahoney/unixhistory
======
barrkel
I wish I could hear the original audio. The transcripts look like they were
done by a non-computer expert and then edited to fix up the terminology
inaccuracies, but there are still missing and odd bits.
~~~
jacquesm
I'm sorry, I couldn't find them. I would have posted them as a comment if I
had. I spent over an hour looking for them earlier today, chances are I've
been looking in all the wrong places and they're out there somewhere.
Maybe someone else could give it a shot?
------
durbin
A Transcribed History of Unix
Show HN: Sellfy Market – Discover best digital content directly from creators - renaars
https://sellfy.com/
======
jdawg77
The first thing I'm missing is, "Why not games?"
The second thing is, as a creator, why not use Patreon, Indiegogo, Gumshow, or
IndieAisle. Most of my books are on all the major platforms, outside of Google
Play.
I can't see a compelling reason I'd list here, or shop here. The design is
decent, but it feels a bit...generic.
Ask HN: Any interest in data loss prevention(DLP) app for Slack? - alexgaribay
Would anyone be interested in a DLP app for Slack that monitors messages in realtime for sensitive info like credit cards, social security numbers, etc.? I've built a crude prototype that can do so and delete the culprit messages.
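A rough sketch of the kind of detection involved (purely illustrative, not the
author's prototype): scan message text with regexes, plus a Luhn check to cut
down on false positives, before deciding to delete a message.

    import re

    def luhn_ok(number: str) -> bool:
        """Luhn checksum, used to filter out random 13-16 digit strings."""
        digits = [int(d) for d in number][::-1]
        total = sum(digits[0::2]) + sum(sum(divmod(d * 2, 10)) for d in digits[1::2])
        return total % 10 == 0

    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
    SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    def is_sensitive(text: str) -> bool:
        if SSN_RE.search(text):
            return True
        return any(luhn_ok(re.sub(r"[ -]", "", m.group()))
                   for m in CARD_RE.finditer(text))

    print(is_sensitive("my card is 4242 4242 4242 4242"))  # True (Luhn-valid test number)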
======
danjony11
Interesting: Do you support all ports and protocols? Do you support all file
types? What about SSL transmissions? What about CCNs in images?
~~~
alexgaribay
I'm not sure what you mean when you're asking about all ports and
protocols in relation to Slack. I haven't added file inspection but I plan to
after an initial release.
Why did I get so many emails from sites regarding Privacy Policy changes today? - Froyoh
======
bausshf
GDPR takes effect on May 25th, so companies are changing their
privacy policies to comply with it.
It's probably a coincidence that it all happened the same day for you.
------
MBCook
GDPR is going into effect so tons of companies are changing privacy policies.
The Man Who Broke Ticketmaster (2017) - barry-cotter
https://motherboard.vice.com/en_us/article/mgxqb8/the-man-who-broke-ticketmaster
======
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=13643045](https://news.ycombinator.com/item?id=13643045)
------
parliament32
>After Lowson and his cofounders were arrested, the Department of Justice
based much of its argument on the idea that Wiseguy had "hacked" CAPTCHA by
using OCR.
This is an interesting argument. Are you "hacking"/"circumventing" a
gatekeeper system by doing exactly what it wants you to do, just using a
computer instead of a human?
~~~
nlawalker
This reminds me of that story about the original algorithmic traders. I can't
find it right now, but I know it's been on HN.
Long story short, Bloomberg came down on one of the first computer-automated
trading companies for reverse-engineering their proprietary wire protocols, so
they built a robot that would sit at the terminal and peck the keys.
Being Eric Schmidt (On Facebook) - stevefink
http://techcrunch.com/2010/10/10/being-eric-schmidt-on-facebook/
======
chrisbroadfoot
I started getting emails from Facebook, because someone had used my work
address to create an account. (I get a lot of spam on this address - it being
cb@... .com)
Really, very annoying! I guess this is kind of to be expected of Facebook,
though.
------
pama
I'm not sure what to make of this farce, but I'm definitely not following
Mike's suggestion to create a facebook account and list all my email accounts
there.
~~~
bhiller
You can create an account, add all of your emails, but then limit their
visibility to 'Only Me'. That way people can't impersonate you, and you don't
need to reveal all of your emails to everyone.
------
abraham
You can do this with Twitter too.
------
robryan
It's a tradeoff, really: had they required email validation before any
actions, there would definitely have been a drop-off in completed signups,
especially early on, when people didn't have large social graphs on Facebook
to pull them in.
~~~
gasull
It's not a tradeoff, but an externality paid with time instead of money.
Facebook won't suffer any inconvenience, only non-users of Facebook will.
Add this to their opt-out-by-default approach to new features, and Facebook is
becoming more and more of a spam platform.
Software Developer Salaries: Ruby on Rails vs. Java - iamelgringo
http://blogs.payscale.com/ask_dr_salary/2008/01/software-develo.html
======
Tichy
I don't think the differences in pay would be because Ruby is easy and Java is
(supposedly) hard. I think it is simply an effect of the different number of
jobs available.
------
far33d
How is it possible to have 20 years of java or ruby experience?
~~~
mechanical_fish
Those are dog years.
(Credit where credit is due: the author does explain that he's tabulating
years of generic "software development" experience, not experience in a
particular language. You do kinda have to read the fine print, though.)
Bitcasa Offers Infinite Storage for $100 - citizenkeys
http://money.cnn.com/2013/08/06/technology/innovation/bitcasa-cloud-storage
======
citizenkeys
The secret is MD5 hashes of all the files. If you upload a file and its MD5
hash matches an existing file's, then instead of storing the file again you're
effectively just storing a symlink to the original copy of the file.
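A rough sketch of how that kind of hash-based deduplication can work. This is
a generic, in-memory illustration built around MD5 as described above, not
Bitcasa's actual implementation; a real service would hash on the client so a
duplicate never needs to be uploaded at all, and would prefer a
collision-resistant hash over MD5:

    import hashlib

    class DedupStore:
        """Toy content-addressed store: identical files are stored once."""

        def __init__(self):
            self.blobs = {}  # md5 hex digest -> file bytes (stored once)
            self.files = {}  # user path -> md5 hex digest (the "symlink")

        def put(self, path, data: bytes):
            digest = hashlib.md5(data).hexdigest()
            if digest not in self.blobs:   # first copy: store the bytes
                self.blobs[digest] = data
            self.files[path] = digest      # later copies: just a pointer

        def get(self, path) -> bytes:
            return self.blobs[self.files[path]]

    store = DedupStore()
    store.put("/alice/movie.mkv", b"...big file...")
    store.put("/bob/movie.mkv", b"...big file...")   # duplicate upload
    assert store.get("/bob/movie.mkv") == store.get("/alice/movie.mkv")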
Ron Conway steps back as Y Combinator cuts team funding - pizu
http://news.cnet.com/8301-32973_3-57554351-296/ron-conway-steps-back-as-y-combinator-cuts-team-funding/
======
jasonkolb
Does it strike anyone else as odd that this discussion revolves solely around
the funding they're given, and that that's the only barometer being used?
While I get that it's not the reason being given, it's hard not to infer that
this pullback in funding is in some way related to the real-world success rate
of recent YC companies.
~~~
pg
Do you think I lied when I said we wanted the amount to be lower because $150k
was causing messy disputes?
To get the number down to $80k, I had to ask all the participants to invest
less than they'd originally planned to.
Among the YC partners there are still some who think the amount should be
lower than $80k.
~~~
kylec
$80k is still a lot more than the (iirc) initial $5k + $5k/founder.
~~~
pg
The $80k is not invested by YC. I get the impression from some of the threads
about this that people don't realize that.
YC still invests $11k + $3k per founder, as we've done for years. The $80k is
a separate, additional investment offered by third parties.
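To make the split concrete, a quick back-of-the-envelope calculation using the
figures quoted in this thread (the team sizes are just examples):

    def per_company_funding(founders: int):
        """Funding per company, using the numbers quoted above."""
        yc_money = 11_000 + 3_000 * founders  # invested by YC itself
        third_party = 80_000                  # separate outside investment
        return yc_money, yc_money + third_party

    for n in (2, 3, 4):
        yc, total = per_company_funding(n)
        print(f"{n} founders: ${yc:,} from YC, ${total:,} total")
    # 2 founders: $17,000 from YC, $97,000 total
    # 3 founders: $20,000 from YC, $100,000 total
    # 4 founders: $23,000 from YC, $103,000 total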
~~~
edanm
The article definitely reads as though YC is, at the very least, completely in
charge of this money.
I know that it's not true, as do other older members, but I wouldn't be
surprised if new members get the wrong impression.
~~~
tptacek
This is an 80k _sight unseen_ investment conditioned solely on acceptance into
a YC batch. YC controls it in the sense that they control the batches, but you
could fund your own YCXC program that offered $x0,000 to each company in each
YC batch. It would succeed or fail to the extent that individual YC companies,
all of which completely control their own operations, decided to take you up
on it.
------
ghshephard
November 26 - see the full thread of comments on reduction of team funding and
rationale behind it here: <http://news.ycombinator.com/item?id=4861867>
~~~
DanielRibeiro
More comments on Pg's submission[1], and relevant HN discussion[2]
[1] <http://ycombinator.com/ycvc.html>
[2] <http://news.ycombinator.com/item?id=4833074>
------
jnsaff2
Is there a list of failed/doomed YC startups? Maybe with some analysis?
~~~
astrodust
<http://yclist.com/> perhaps?
There's also <http://www.seed-db.com/accelerators> which was posted on HN
earlier.
~~~
balsam
On yclist it seems 70/84 of S12 made it to launch, compared to 45/60 for W11.
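For reference, those ratios work out as follows (computed directly from the
figures above):

    >>> 70 / 84, 45 / 60
    (0.8333333333333334, 0.75)
    # i.e. roughly 83% of S12 vs 75% of W11 made it to launch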
------
tomjen3
It seems insane to cut the best investor out of the group - even if you want
to reduce the total investment you still want the best people.
~~~
hayksaakian
From TFA, it seems like he'll still be available for office hours, just not
providing $$$
------
jonathanjaeger
Makes sense -- that extra $70K or so probably isn't the deciding factor in
seeing whether someone has a validated idea and product. Ron Conway probably
stepped back in part because now EVERY company is a Ron Conway-backed company
and his backing doesn't have the same cachet as before.
------
hayksaakian
How is YC doing financially?
~~~
kapilkale
Very well. From last year: <http://ycombinator.com/nums.html>
~~~
rdl
The other thing to remember is that YC is growing. Even if YC isn't getting
any better at this despite lots of experience, a bigger team, etc., they're
investing in larger batches (even the most recent decline to <50 is still
bigger than any batch up to W11). So, assuming a constant distribution of
AirBnBs and
great companies still in play. (I think Stripe is clearly one of those; Parse
and Meteor may be others).
------
tokenadult
Some investors have more of a long-term perspective than others. Always the
scary part of investing in early-stage startups has been the need to fund a
lot of losers to cast a wide enough net to find the occasional winner. The
main YC team will keep refining its procedures for searching out winners who
haven't won yet, while screening out losers who won't win even with a YC
investment, but that will always be an inexact art.
~~~
001sky
_searching out winners who haven't won yet, while screening out losers who
won't win even with_
-- Ad Hominem Investing. Also worth a second look, as a Thesis.
~~~
vidarh
It comes across as a bit harsh worded that way, but if you look at it purely
from the point of view of a VC looking for a decent ROI, most of them will be
"losers" - most startups fail, and while many founders will try again and
again and many will eventually make it, a large portion will never get enough
growth for an investor like YC to make a decent return.
If it was meant as a personality judgement, then I agree with your assessment.
But if you view it purely as a statement about their odds of success at
present, it's just business.
~~~
001sky
But most startups fail because the product sucks. It's not the people that
"fail" in the market. The end market couldn't care less about "the people", in
that sense. It's worth sanity-checking even good heuristics, occasionally.
The Problem with Tying Health Care to Trade - mrjaeger
http://fivethirtyeight.com/features/the-problem-with-tying-health-care-to-trade/
======
athenot
The word "patent" always sounds better than "guaranteed monopoly", yet there
is no fundamental difference. With respect to drug patents, I think the French
system has a good system:
\- Option 1: you sell the drug at whatever market price you want but no patent
will be granted and anyone is free to copy the drug.
\- Option 2: you receive a patent protection for you drug but you must defend
the price at which you want to sell in a bargaining round with the government.
In essence, it's a compromise where you as a drug manufacturer can recoup your
costs, and where the people don't overly get taken advantage of in life-or-
death situations.
Disclaimer: my Dad worked in a major French pharma company.
~~~
pkaye
Which option do the drug manufacturers typically take? The second option would
require a government that actually cares more about society in general vs
special interests. How has it worked out in practice?
~~~
ddingus
Seconded. I am very interested in learning more about this question.
~~~
refurb
I'm in the biotech business and I've never heard of the first option in France
at all. That said, individual countries' regulations can be quite complex, so
it might be true.
As for the second option, France, Germany and the UK all have a system where
an independent body evaluates a drug in order to determine its value. They do
this by comparing the new drug against what's currently available and
providing a rating. In France it's an ASMR rating, but I think that's
changing.
Once the drug receives a rating (Important, moderate, mild and insufficient
improvement over current agents, ASMR I-IV), the company and gov't enter
negotiations over the price.[1]
[1] [http://www.has-
sante.fr/portail/upload/docs/application/pdf/...](http://www.has-
sante.fr/portail/upload/docs/application/pdf/2014-03/pricing_reimbursement_of_drugs_and_hta_policies_in_france.pdf)
~~~
pjc50
The UK system is called NICE. It attracts hostility for not approving drugs
that grant a very marginal improvement to some late-stage cancer patients at
extreme expense.
------
AdeptusAquinas
Big fear in NZ, however unfounded, is that TPPA will somehow try and reduce us
to the health care standard in the US :(
~~~
pasbesoin
Hmm, a new spin on "race to the bottom."
(Spoken as one who is watching the quality health insurance plans disappear
from the Affordable Care Act (Obamacare) for the coming year.)
~~~
marincounty
I'll take your word that the Affordable Care Act is eroding "quality health
insurance plans". I have seen the rates, and they all seem outrageously
expensive.
What I have seen is insurance companies just finding loopholes, and doing what
"for profit" insurance companies do: make money.
I don't know how the Affordable Care Act is making things worse.
The insurance companies were greedy, selfish bastards before the act. I don't
think health care would have magically gotten better with time.
As to Obamacare: no, it's not what he originally wanted. Politically, he
needed to amend his original bill. At the time, the Republicans' pressure was
palpable. He knew it was this or nothing. An election was coming up, and he
knew he wouldn't be able to get anything through the next Congress.
I'm not a fan of Obamacare, but we were cornered. It was this or nothing.
I do believe we need to document our complaints about individual insurance
companies, but that will probably never happen. Why? Because it's one of those
subjects we don't like to think about.
I don't have an answer to the problem of horrid insurance companies, and
medical institutions/individuals that overcharge us. I would be the first one
pulling the cord on Obamacare if someone could get a better alternative
through Congress. I don't see that day coming soon.
(To the Doctors/Institutions that don't abuse their power; you have my highest
regard. To the ones that accept a medi-cal patient every blue moon; you have
my highest regard. To the ones who are just in it for the money, or blame
everything on the Insurance companies; some of us see right through the lie.)
~~~
anonymous854
According to this article, if you look at the prices insurance companies are
actually paying (after negotiations), they're still way higher than in other
countries:
[http://www.washingtonpost.com/news/wonkblog/wp/2013/03/26/21...](http://www.washingtonpost.com/news/wonkblog/wp/2013/03/26/21-graphs-
that-show-americas-health-care-prices-are-ludicrous/)
This suggests that the providers of medical care are actually the biggest
culprits for our out of control healthcare spending and lack of access to
affordable care, not the insurance companies, yet your post only passingly
blames the former.
~~~
ddingus
It's both.
Private insurers need to make a profit for distributing risk, and because
there are a lot of them, their risk pools are much smaller than they could
be. Adding a public, nonprofit "cost + x percent" type plan, say Medicare
part E for everyone, would very significantly improve this, particularly if it
were allowed to bargain for bulk pricing.
(that we don't allow Medicare to do this is just nuts!)
Private entities also need to make a profit, and there is no meaningful check
on this at present. A similar, public non-profit type delivery entity would
reduce prices on most common treatments and maintenance activities.
Those wanting to profit on primary care could add value, such as home service,
etc... and do it that way, rather than mark up common, mass delivered
treatments. Secondly, niche specialization could remain high profit, high
value add and more people would be able to afford those services.
Doing something like this would carve a big hole into much of the profit
inherent in the US system, which would bring our national costs closer to
those found in the rest of the world, while leaving high profit, high value on
the table.
Some of us would not be able to afford those high value offerings, but would
definitely benefit from much improved access to preventative and or more
common, well proven health care.
This is all a compromise. The US could make much better choices and improve
both access and outcomes.
------
ThomPete
The fundamental truth about health care is that no amount of taxation or
insurance can keep up with the demand. Even if you taxed every citizen at
100%, there still wouldn't be enough to pay for healthcare.
And so every society finds itself in one of the most complex paradoxes of
modern times.
If you tax your way to healthcare and provide everyone with free healthcare,
like in Denmark, then access to treatment is equal, but it's always a cost
center and you will mostly lag behind in access to the latest treatments. The
only way to pay for new equipment or new medicine is either through higher
taxes or budget cuts somewhere else.
If you go the insurance route of the US and try to let market forces rule,
then you end up with highly inflated prices and a younger generation that
doesn't insure itself and therefore doesn't pay into the healthcare system
until it's needed, which drives up the price of insurance (hence the
Affordable Care Act).
Here access isn't equal, but you have some of the best specialists in the
world and always the latest treatments.
Then there are countries in the middle trying to find a balance, like
Switzerland, Germany, the UK and France, but even there I fear no one has
found a perfect system either.
So no matter how you try to pay for it, whether through trade agreements or
NGOs etc., the problem still applies. Someone always has to pick up the bill,
and the bill is potentially infinitely big.
Edit: For clarification
~~~
laotzu
So if the problem is too much demand, let's look at how to lower the demand.
The answer to lowering demand is clearly to encourage preventative medicine
over reactive medicine. Most of the cash cows in the US for sickness
profiteering are preventable, heart disease being number one. I know that's
not profitable for sickness profiteers, but until we start to value health and
well-being over institutionalized greed we'll never have enough money to take
care of our health.
>He sacrifices his health in order to make money. Then he sacrifices money to
recuperate his health. And then he is so anxious about the future that he does
not enjoy the present; the result being that he does not live in the present
or the future; he lives as if he is never going to die, and then dies having
never really lived. -DL
~~~
orangecat
_The answer to lowering demand is clearly to encourage preventative medicine
over reaction medicine._
Not really: [http://www.reuters.com/article/2013/01/29/us-preventive-
econ...](http://www.reuters.com/article/2013/01/29/us-preventive-economics-
idUSBRE90S05M20130129)
~~~
laotzu
The article fails to convince. Mentions studies but does not cite them.
Doesn't even mention the words "exercise" or "healthy diet", the two best and
cheapest forms of preventative medicine. Narrowly defines preventative
medicine as "low- or no-benefit measures include annual physicals".
~~~
ThomPete
Do you know how few people have the energy or money to exercise and eat
healthily? It sounds easy, but it's a lot of work.
It's not just a rational choice; people aren't rational. It's kind of like
saying that the solution to overpopulation is to have fewer children.
That's not how things work.
~~~
laotzu
I'm not saying at all that it would be easy to get people to do what is in
their best interest; though once they are properly educated, it would create a
positive feedback loop. The more you exercise the more energy you have to
exercise. The healthier you eat the less you have to spend on preventable
diseases like heart disease and diabetes and so the more money you have to
spend on healthy food.
The solution to overpopulation, as evidenced by the Demographic-Economic
Paradox, is clearly to raise the standard of living by successfully
distributing the renewable surplus of food, water, and shelter that is
available but wasted every year. This is also an example of turning a negative
feedback loop into a positive feedback loop which in turn pays for itself.
[https://en.wikipedia.org/wiki/Demographic-
economic_paradox](https://en.wikipedia.org/wiki/Demographic-economic_paradox)
The issues of poverty, disease, and overpopulation are closely interrelated
indeed.
------
anonymous854
Tying healthcare to trade would be wonderful for the US, provided the trade
were free trade, because it would mean we could finally legally import dirt
cheap drugs and medical supplies from abroad and allow cheaper foreign doctors
and nurses to practice medicine in the US.
Instead, it apparently means stuff like this:
>He detailed the many ways in which U.S. law prohibits competition for
pharmaceuticals that Peruvian law doesn’t, including granting new patents to
old drugs for relatively small changes
------
evanpw
> In recent decades, the majority of new drugs brought to market have been of
> little real therapeutic benefit.
If drug companies are trying to charge ridiculous high prices for drugs with
"little therapeutic benefit", wouldn't it be easier just to _not buy those
drugs_ than to re-work the patent system?
~~~
zeveb
> If drug companies are trying to charge ridiculous high prices for drugs with
> "little therapeutic benefit", wouldn't it be easier just to not buy those
> drugs than to re-work the patent system?
But that would involve physicians making recommendations in the best interests
of their patients, and patients making decisions for themselves, and we
certainly can't have that!
I recall a surgeon friend of mine complaining about the decisions made by the
physician overseeing his child's care, and how hard it was to stand by and see
the wrong thing done. He's internalised the model of 'doctor knows best' so
much that even where his child was concerned he couldn't overcome that
conditioning!
This has analogies, I think, to the ideas of software users who don't develop
or people who rely on the police to protect them.
Strike that out, Sam [2004] - yuhong
http://lcamtuf.coredump.cx/strikeout/
======
yuhong
Inspired by this: <http://paulgraham.com/stypi.html>
Ask HN: How do you process and pay incoming invoices? - lbr
Trying to figure out how best to manage the invoices that are sent to me through various channels (email, fax, mail, etc). Does anyone use bill.com, QuickBooks, or other software? Thanks!
======
edoceo
I use one I built myself, Imperium. Manually enter received bills, track AP
and pay via bank. Reconcile with bank monthly via import.
------
calbear81
We use QuickBooks here but have been looking at other options like Freshbooks.
Ask HN: Looking for quote on technology being used for revolution and porn - steerpike
Hi,
I'm looking for a quote I read years ago that, paraphrased, said something along the lines of: You know your service is successful when it's being used by freedom fighters and pornographers.
Realise it might be a bit vague, but I think it was a fairly well known quote and I figured if anyone knew the source it would likely be someone at HackerNews.
Cheers.
======
scrame
Yes! I had the same search a few months ago. It's called "The Cute Cat Theory"
by Ethan Zuckerman.
[http://www.ethanzuckerman.com/blog/2008/03/08/the-cute-
cat-t...](http://www.ethanzuckerman.com/blog/2008/03/08/the-cute-cat-theory-
talk-at-etec)
~~~
steerpike
Legend. Thank you.
Link between health spending and life expectancy: US is an outlier - mbroncano
https://ourworldindata.org/the-link-between-life-expectancy-and-health-spending-us-focus
======
erpaa
Looks to me like in the USA there is no incentive for a healthy lifestyle at
all: if you can pay (for the insurance), the healthcare provider does anything
regardless of your condition or the costs. In government-funded health care
the idea is to maximize results across the whole population. For example,
surgery on a severely overweight person is so risky and expensive that it is
not worth it; you can treat 10 normal-weight people while the overweight
person tries to lose some weight. People know that, and that is why you rarely
see those Texas-style lumbering mounds of flesh in Stockholm.