Show HN: AgileCDN - jamescun
http://agilecdn.org/
======
pikewood
Your description for Font Awesome references jQuery instead.
David Simon, creator of The Wire and Treme, on the Times-Picayune cuts - kmfrk
http://www.cjr.org/the_kicker/david_simon_creator_of_the_wir.php
======
niallsmart
If you read that, read this:
[http://www.cjr.org/the_kicker/why_david_simon_is_wrong_about...](http://www.cjr.org/the_kicker/why_david_simon_is_wrong_about.php?page=all)
Giant archaeological trove found in Google Earth - djwebb1977
http://www.newscientist.com/blogs/onepercent/2011/02/giant-archaeological-trove-fou.html
======
aw3c2
* _potential_!
I am disappointed that there is no image of the aerial imagery nor the ground.
At least the photo looks 99% like generic stock footage.
The comments seem to be from spammers.
------
jackfoxy
To the best of my knowledge, Saudi Arabia is completely closed to archaeology
due to the state religion (happy to be corrected on this one). If it ever does
open up it will become a fantastic treasure trove of knowledge: all sites
never before excavated (by scientists) and the location astride the cross-
roads of all the old-world continents.
------
a5huynh
On a related note, National Geographic has been doing a similar thing for the
past couple years:
<http://exploration.nationalgeographic.com/>
Just this past summer they went to Mongolia and used data from that little
game to find tons of tombs and even an ancient city!
Imgur, please don't be the next TinyPic or ImageShack - dylz
https://dillpickle.github.io/imgur-please-dont-be-the-next-tinypic-or-imageshack.html
======
briholt
Let's dig a bit deeper and ask: _why_ is imgur doing these things? The fact is
image uploading is one of the most commoditized services you can offer. Long
term, the most you could possibly get out of that is razor-thin margins,
assuming Google, Amazon, or Microsoft doesn't just walk in one day and crush
your entire business with one fell swoop. Clearly, imgur is trying to build
some competitive advantages around their business - namely network effect by
creating a community on top of their content destination. They're comparing
Facebook to Dropbox and asking which business model looks better. I'm sure
they realize this makes straight image uploading slightly more of a hassle,
but from their perspective the business benefits of community far outweigh the
loss of efficiency-oriented uploaders. They know it's annoying and they don't
care.
~~~
TeMPOraL
> _I 'm sure they realize this makes straight image uploading slightly more of
> a hassle, but from their perspective the business benefits of community far
> outweigh the loss of efficiency-oriented uploaders. They know it's annoying and
> they don't care._
Which means they'll most likely die quickly. Nobody cares about their
"business side"; they were the Internet's No. 1 image host because they were
hassle-free and mostly bullshit-free. What I expect to see is most users
leaving for someone willing to provide a no-bullshit upload service, and imgur
crawling into a hole and dying, like PhotoBucket or ImageShack.
The sad thing is, many links will probably get broken. The web, unfortunately,
is extremely fragile.
------
minimaxir
I discovered the redirect behavior about 6 months ago, which generated a lot
of controversy:
[https://news.ycombinator.com/item?id=7190952](https://news.ycombinator.com/item?id=7190952)
Interestingly, some of the arguments then were "What did you expect from a
free startup?" and "Who cares? You still see the image."
Recently (post-$50M), I've seen this behavior when clicking random imgur links
on the web from unaffected sites, but have been unable to reproduce it using
spoofing tricks.
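(For anyone who wants to poke at this themselves, here's roughly what I mean by spoofing tricks - a minimal sketch in Python; the image URL and Referer values are placeholders, not a working reproduction.)

    import requests

    # Placeholder direct-image URL; substitute a real i.imgur.com link to test.
    IMAGE_URL = "https://i.imgur.com/example.jpg"

    # Fetch the image once with no Referer, then while pretending to come from
    # an external page, and compare whether the response is a redirect.
    for referer in (None, "https://example.com/some-forum-thread"):
        headers = {"Referer": referer} if referer else {}
        resp = requests.get(IMAGE_URL, headers=headers, allow_redirects=False)
        print(referer, resp.status_code, resp.headers.get("Location"))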
~~~
opendais
I kinda agree with them. Imgur has to make the money it needs to survive. It
is really the only viable option in the long term.
That said, I do find myself using alternatives [e.g. mediacrush] more and
more. It's just not worth bothering with Imgur.
~~~
funkyy
Imgur was profitable from day 1.
What they are doing now is called greed.
~~~
bdcravens
Once you take VC money, greed is necessary - your customers are the VCs, and
they expect an exit at several multiples.
~~~
TeMPOraL
That's why we're probably bound to replace our image hosting services every
few years. It's a waste of good links, though, which will get broken in the
desperate attempt to recoup losses from fleeing users.
------
joslin01
Imgur is providing a valuable service. It doesn't ruffle my feathers if they
want to expand upon it. I really don't think Imageshack.us, Tinypic, or
Photobucket are fair comparisons.
Back in the day, imageshack & photobucket were all we had to upload photos
into albums because Facebook hadn't taken off and MySpace didn't know what it
was doing. After Facebook, the disconnect of User <-> Photos was gone. It was
clear, these are _my_ photos and look I'm even tagged in them. Imageshack and
Photobucket seem to have pivoted to try to stay competitive. Tinypic is hardly
worth mentioning because its quality is poor. Imgur's isn't.
So where do you go if you really just like popular photos? Imgur. The fact
that they have a community that dislikes Reddit should be indicative of the
success of their additions. Would you rather they stagnate and forever provide
free image hosting? Who's going to pick up the bill after a while? They _need_
to do this.. and I think they're being pretty smart about it.
As a normal internet user, your points don't really stand out to me as a
reason why I should start to become wary of imgur. If they piss enough people
off, they'll get burned and another image hosting site will rise up to replace
it (because seriously, it's not that hard of a technical problem).
Finally, as a personal note, I don't really appreciate commentary that doesn't
at least try to outline the pros of what they're doing. Oh no, they added a
small button to create memes with? Oh, the horror...
~~~
dylz
Thanks for the constructive criticism at the bottom -- I'll try to be more
neutral and remember to give arguments for both sides (this was my first blog
post).
~~~
joslin01
Thanks for understanding :)
------
meowface
The direct link redirecting and the extremely annoying "memebase" rebranding
were the last straws for me as well.
[http://mediacru.sh](http://mediacru.sh) is a great alternative with a much
more minimalist interface, and is free and open source
([https://github.com/MediaCrush/MediaCrush](https://github.com/MediaCrush/MediaCrush))
~~~
Sir_Cmpwn
Thanks for mentioning us! I'm one of the lead devs for MediaCrush. I think
what sets it apart from the traditional list of promising but eventually
failed image hosts is two things:
\- It's open source, so you can fork it if we screw up
\- We are not a business and do not have a bottom line
I've recognized this pattern as well, and we hope to end it.
~~~
donniezazen
MediaCrush is indeed nice. I didn't know about it. Imgur can be painfully
slow. A fast, right-to-the-point image uploading service is quite
refreshing.
I have a few questions.
1\. Do you have accounts to store pics?
2\. It would be nice if after uploading pictures the page would show all
possible links like embed and share.
3\. Do you plan to implement something like delete in 30 days?
Thanks.
~~~
Sir_Cmpwn
1\. No, but you can use the default localStorage mechanism. We've been slowly
working on accounts for a while now, but it's not a priority.
2\. Click "share" on the view page
3\. No, but an external service could hook into MediaCrush to provide that
------
DanBC
Imgur is just following in the fine tradition of software that hit a sweet
spot and then kept adding new features.
See ACDSee, Nero Burning Rom, etc, etc.
Jeff Atwood wrote about this in 2007. It's a great shame that websites aren't
learning from the mistakes made by previous software authors.
[http://blog.codinghorror.com/why-does-software-
spoil/](http://blog.codinghorror.com/why-does-software-spoil/)
~~~
StephenGL
Nero... Perfect for a brief moment... Then WTF, what is all this stuff and why
does it all work poorly?
------
gnu8
Imgur is of little consequence. When it becomes too annoying to use it will no
doubt be replaced.
"This is the sixth time we have destroyed Zion and we have become exceedingly
efficient at it."
~~~
TeMPOraL
I'm just sad for the content that will go missing. The Internet is a fast-
forgetting place; it's damn hard to find anything from more than a few years
ago, because most of the links are broken.
~~~
scrollaway
Please donate to the Internet Archive, then!
[https://archive.org/donate/](https://archive.org/donate/)
They constantly have (or fund) projects archiving services being deleted, etc.
For example, _right now_ they are archiving an immense amount of Twitch.tv
VODs which will be deleted in just a few days. This is an example archive
which has recently been uploaded:
[https://archive.org/details/archiveteam_twitchtv_leech1_2014...](https://archive.org/details/archiveteam_twitchtv_leech1_20140814052241)
------
funkyy
It seems the breaking point, when everything changes for the worse, is when a
smart, fun, game-changing startup accepts a round of funding led by one of the
main tech funds.
Is it a coincidence, or are they legally or in some other way pressured to do
it? It seems most startups follow this route, and I thought Imgur would be one
of those exceptions. I was wrong, it seems.
~~~
TeMPOraL
I read a book by Felix Dennis (he was featured on HN when he died), in which
he strongly recommends against taking VC money for the very reasons pointed
out in comments here - VCs will want to get their returns, they'll be merciless,
so you'll have to either become a shark or let your company die.
------
ChrisAntaki
> Flash being enabled even very shortly drains the batteries of a lot of
> mobile and laptop devices.
Flash enables multiple uploads on legacy browsers. Just enabling Flash for
something simple like an uploader should not drain batteries.
Back in 2010, when Steve Jobs wrote a letter demonizing Flash, he had some
points. Many complex banner ads were being made in Flash, and disabling the
plugin led to static image ads being shown. Static images are much easier to
display than streaming videos. What's worth noting is that as HTML5
banners progress, with animations and video, we'll be faced with the same
problem.
Regardless, Jobs' letter had a noticeable and lasting effect on many
worldviews.
------
sergiotapia
I already loathe uploading images there to quickly share something because I
have to wait for the home page to load all the thumbnails, and all the heavy
javascript. Then I click upload and I have to wait for the javascript for
-that- to load.
Imgur used to be so fast and quick to use, feature creep is going to be the
death of it.
Something as simple as image hosting should not really take so long to load.
------
LukeB_UK
I actually enjoy the community at Imgur.
To me it seems Imgur has 2 uses:
1) Image hosting
2) Viral image community
One doesn't have to go with the other, but they can also work together.
As for their redirect behaviour? I don't think that's an issue at all. If you
don't want to get that page, host it yourself.
~~~
meowface
>I actually enjoy the community at Imgur.
I couldn't disagree more. Comments in the reddit default subs are bad enough
when compared to the kind of discussion you see on HN, but comments on imgur
are a mixture of the kinds of things you'll see on 9gag or funnyjunk fused
with some of the worst of reddit's default subs' communities. They're absolute
drivel, and they're only going to get worse because Imgur is specifically
trying to attract these kinds of people now.
~~~
LukeB_UK
Comparing Imgur to HN is comparing apples to oranges in my opinion, they have
different purposes and target audiences.
Target audiences:
Imgur - General public.
HN - Technical people, hackers, business people, founders.
Purposes:
Imgur - People submit funny/touching/stupid/whatever images and people post comments
HN - People post interesting/thought-provoking articles/sites and discuss them.
~~~
DanBC
Imgur used to have a great community. Even though it was a large community it
was still good fun.
Now? Not so much.
I'm not sure how the community collapsed so fast or if it can be rescued.
------
Dolimiter
Worse was when they ran noisy auto-play video adverts last month. Every time I
viewed an image, I was faced with "HEYY WAZAAAPP!!! JUSTIN BEIBER!!!"
nonsense.
I'm in Europe, the USA didn't get them, so there wasn't so much of a fuss.
They managed to get away with it. Appalling and cynical.
~~~
kemayo
That probably wasn't deliberate on their part. My own experience with running
ads is that the networks sell slots to each other in some sort of human
centipede-esque manner. And some of the lower levels of this selling centipede
get shady and will run abusive ads. You just have to watch out for them and
block them / report them to your own network when you can.
------
imnotsure
Too late, sorry.
------
kjackson2012
"Oh no, a free web site isn't behaving the way that I want it to!"
I don't know if it's just because I'm old, but people need to stop whining
about how free websites are behaving. If we were paying customers, then I
believe we should have a voice in how the product works but if we're using it
for free, then this feeling of entitlement has to stop.
Beggars can't be choosers. And whoever owns imgur has to make money as well,
they're entitled to do whatever it takes to make as much money as they can,
and if they lose you as a customer but make more money, that is their
prerogative.
~~~
meowface
Beggars absolutely can be choosers here, because there are tons of competing
sites, many of which don't have any of the annoying features that imgur has
been adding in recent months.
Imgur obviously needs money to stay afloat, but there are less annoying ways
they could have gone about it.
Reddit has been bleeding money for years, probably more money than Imgur has
yearly and for a longer time span, but they still have not compromised the
integrity or usability of their site to gain money. They rely only on non-
intrusive ads, Reddit gold, and donations.
4chan is in an even worse state, and has also only been making money through
their 4chan Pass semi-donation feature.
Imgur could have created a new subdomain for the "new" site, or could have
set up some entirely different applications that integrate with the main site,
instead of detracting from the main product. As a company, they have a right
to do what they want to make money, but as users most of us will always keep
moving to the best solution once the old ones start shooting themselves in the
foot.
Companies whose business model is to provide only a single free service for a
massive userbase will always have to balance revenue and user alienation.
Reddit and Imgur are leaning on opposite sides of that scale at the moment.
Honestly, I would consider an acquisition (by Google or whoever) to be a much
better solution for everyone involved compared to the things they're trying
now in desperation to get more revenue.
~~~
kjackson2012
Then switch. Stop whining about it. If this is as big of a problem as the OP
purports, and if people start leaving in droves, then imgur will die. This is
the risk that they are taking, and they know this and so do you and the OP. So
just switch. Stop whining about it.
~~~
plorkyeran
Whining about it _is_ switching. Since nearly everyone views far more images
than they upload, which image host they spend the most time interacting with
is dictated by what other people choose. Complaining about the currently
popular choice is perhaps not the most effective way to get people to switch,
but it is also not entirely ineffective.
Got $5? $10? $500? Here is how to start investing with any budget - flaviuspop
https://thefinancialdiary.com/got-5eur-10eur-500eur-here-is-how-to-start-investing-with-any-budget/
======
Cypher
Since when is P2P lending low risk?
Ask HN: Is it possible for a mobile app store, not to report in-app purchases? - leowoo91
Just being a bit paranoid: I was wondering whether an app I develop could get abused by that giant evil corporation - assuming it cuts off all the app's network access, lets users buy the app, but doesn't report the purchases back to me as a number. How confident would you be that this can't happen?
======
ryanbertrand
On iOS you can verify receipts with your server. This will also prevent jail
broken devices from faking purchases.
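The server-side check is roughly this (a sketch in Python against Apple's verifyReceipt endpoint; the shared secret and the base64 receipt come from your app, and you'd point at the sandbox host while testing):

    import requests

    VERIFY_URL = "https://buy.itunes.apple.com/verifyReceipt"  # sandbox.itunes.apple.com while testing

    def receipt_is_valid(receipt_b64: str, shared_secret: str) -> bool:
        # Apple re-validates the receipt; status 0 means it is genuine.
        payload = {"receipt-data": receipt_b64, "password": shared_secret}
        resp = requests.post(VERIFY_URL, json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json().get("status") == 0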
~~~
leowoo91
I just checked the mechanism - not bad after all, since the app itself
directly communicates with a server I can provide. So my concern now is
whether the evil corporation could modify my app to replace the purchase
button action with an internal buy strategy and then call my purchaseDone
function. I think that is still possible :)
More H-1B visas needed because of skill or money shortage? - SethMurphy
You often hear entrepreneurs say we need more H-1B visas. Are we really facing a skill shortage in tech, or is the real reason the lure of having an employee who is tied to their employer at lower wages than older Americans (who face a job shortage)?
======
nkb
I was hired by TCS in the US when TCS could not get the visa for a so-called
skilled worker; later down the road TCS managed to get the visa and I was
kicked out. The only reason to fire me was not skills but money. They replaced
me with a lower-skilled worker, who was paid a lot less money.
The AWS Controllers for Kubernetes - bdcravens
https://aws.amazon.com/blogs/containers/aws-controllers-for-kubernetes-ack/
======
solatic
Currently no support for provisioning IAM permissions. ACK will be happy to
construct an S3 bucket for you that is then inaccessible unless you use
dangerous IAM wildcard permissions.
The team is concerned about the security ramifications of setting up IAM
permissions from ACK: [https://github.com/aws/aws-
controllers-k8s/issues/22#issueco...](https://github.com/aws/aws-
controllers-k8s/issues/22#issuecomment-595816197)
Look, it'll be great when it matures... but this is very much in the developer
preview stage. Caveat emptor.
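(To spell out the wildcard problem for anyone unfamiliar - a quick sketch; the bucket name is made up:)

    import json

    # The lazy workaround: wildcard access to every bucket in the account.
    wildcard_policy = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
    }

    # What you actually want to be able to express: access scoped to one bucket.
    scoped_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-app-assets/*",
        }],
    }

    print(json.dumps(scoped_policy, indent=2))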
------
Legogris
Am I the only one who's pessimistic about this? One of the big upsides of
Kubernetes is having portable workloads, and provisioning cloud-provider-
specific resources (whose lifecycles very likely outlive clusters!) in
Kubernetes just seems wrong to me. Kubernetes is great for managing,
orchestrating and directing traffic for containerized workloads but it really
shouldn't be The One Tool For Everything.
Coupling everything together like this just seems to make things less
manageable.
IMO infrastructure including managed services are better provisioned through
tools like Terraform and Pulumi.
~~~
solatic
The issue (or benefit, depending on your perspective) with Terraform is that
it's a one-shot CLI binary. If you're not running the binary, then it's not
doing anything. If you want a long-running daemon that responds to non-human-
initiated events, then Terraform isn't a good tool.
Any time you try to declaratively define state, if you don't have a daemon
enforcing the declarative state, then you will suffer from drift. One approach
is the one Terraform has - assume that drift is possible, so ask the user to
run a separate plan stage, and manually reconcile drift if needed. Another
approach is the controller approach, where the controller tries to do the
right thing and errors/alerts if it doesn't know what to do.
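The controller approach boils down to a loop like this (a toy sketch, not the real ACK code):

    import time

    def desired_state():
        # In a real controller this comes from the spec of a custom resource.
        return {"bucket": "my-app-assets", "versioning": True}

    def actual_state():
        # In a real controller this comes from the cloud provider's API.
        return {"bucket": "my-app-assets", "versioning": False}

    while True:
        desired, actual = desired_state(), actual_state()
        drift = {k: v for k, v in desired.items() if actual.get(k) != v}
        if drift:
            # Call the cloud API to reconcile, or alert if we don't know how.
            print("reconciling drift:", drift)
        time.sleep(30)  # enforcement is continuous, not a one-shot plan/apply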
~~~
redwood
This is why Hashicorp needs to accelerate their cloud offering.
Frankly I get the sense they got a little bit too addicted to central ops
driven on-prem style deals for Vault but in the public cloud they need to be
front and center with SaaS which is a long road. They have a rudimentary
Terraform SaaS I believe but none for Vault as far as I'm aware. I see a lot
of folks going straight to cloud provider services because of this.
You sum it up well... In these times you don't want to run a daemon
~~~
t3rabytes
They used to have a managed Vault offering! But then it disappeared one day
never to return.
------
thinkersilver
Kubernetes is becoming the lingua-franca of building infrastructure. Through
CRDs and the kube api spec I can
\- start a single application
\- deploy a set of interconnected apps
\- define network topologies
\- define traffic flows
\- define vertical and horizontal scaling of resources
And now I can define AWS resources.
This creates an interesting scenario where infrastructure can be defined by
k8s API resources without necessarily having k8s build it - for example,
podman starting containers off a k8s Deployment spec. It's an API-first
approach and it's great for interoperability. The only downside is managing the
yaml and keeping it consistent across the interdependencies.
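Concretely, "define AWS resources" ends up looking like any other custom resource. A rough sketch with the Python kubernetes client - the ACK group/version/kind here are my guess from the announcement and may not match the preview exactly:

    from kubernetes import client, config

    config.load_kube_config()

    # Hypothetical ACK S3 bucket resource; the field names are illustrative only.
    bucket = {
        "apiVersion": "s3.services.k8s.aws/v1alpha1",
        "kind": "Bucket",
        "metadata": {"name": "my-app-assets"},
        "spec": {"name": "my-app-assets"},
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="s3.services.k8s.aws",
        version="v1alpha1",
        namespace="default",
        plural="buckets",
        body=bucket,
    )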
~~~
soulnothing
I really wish fabric8[1], and more specifically the Kotlin k8s DSL[2], were
getting more traction.
It removes the downside of YAML all over the place. It's missing the package
management features of Helm, but I have several jars acting as baseline
deployments and provisioning. It works really well, and I have an entire
language, so I can map over a list of services instead of doing templating.
The other big downside is that a Java run takes a minute or two to kick off.
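Roughly what I mean by mapping over services instead of templating, sketched in Python rather than the Kotlin DSL (service names and images made up):

    import yaml  # pip install pyyaml

    def deployment(name, image, replicas=2):
        # Build the Deployment manifest as plain data instead of a text template.
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": name},
            "spec": {
                "replicas": replicas,
                "selector": {"matchLabels": {"app": name}},
                "template": {
                    "metadata": {"labels": {"app": name}},
                    "spec": {"containers": [{"name": name, "image": image}]},
                },
            },
        }

    services = [("api", "registry.example.com/api:1.4"),
                ("worker", "registry.example.com/worker:1.4")]
    print(yaml.dump_all(deployment(n, i) for n, i in services))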
I was resistant to k8s for a long time. Complexity was secondary to cost, but
DigitalOcean has a relatively cheap offering now. This commonality and
persistence of tooling is great.
I want metrics: a simple annotation. I want a secret injected from Vault: just
add an annotation. It's also cloud agnostic, so this logic can be deployed
anywhere someone provides a k8s offering.
EKS was very powerful compared with running service accounts on non-managed
clusters. It removed the need to pass an access key pair to the application;
the service account just ran with a corresponding IAM role.
[1] [https://github.com/fabric8io/kubernetes-
client](https://github.com/fabric8io/kubernetes-client) [2]
[https://github.com/fkorotkov/k8s-kotlin-
dsl](https://github.com/fkorotkov/k8s-kotlin-dsl)
~~~
thinkersilver
It's been a while since I've looked at Fabric8, but it had good Java -> k8s
integration and was great for writing k8s tools.
It appears, though, that Fabric8 is mainly useful for solo Java projects
without complex dependencies on non-Java projects, or for a small Java shop.
It overlaps with where Jenkins X is going, which has made major strides in the
last 24 months. The original team that worked on Fabric8, led by James
Strachan, has all moved on from Red Hat and many of them are working on
Jenkins X.
------
harpratap
Glad to see AWS finally embracing Kubernetes too. Google did a similar thing a
while back - [https://cloud.google.com/config-
connector/](https://cloud.google.com/config-connector/) So I guess this
solidifies Kubernetes as the de facto standard for cloud platforms.
~~~
zxienin
Azure as well [https://github.com/Azure/azure-service-
operator](https://github.com/Azure/azure-service-operator)
~~~
FridgeSeal
Bold of you to assume that:
* the magic permissions ghost that runs in Azure whose job it is to inexplicably deny you resources to things won’t interfere
* Said Azure service will stay up long enough to be useful
* you finish writing the insane amount of config Azure services seemingly require before the heat death of the universe.
* Azure decides that it likes you and won’t arbitrarily block you from attaching things to your cluster/nodes because it’s the wrong moon cycle/day of the week/device type/etc
* you can somehow navigate Azure's kafka-esque documentation to figure out which services you're actually allowed to do this with.
It is only a slight exaggeration to say that Azure is the most painful and
frustrating software/cloud product I’ve used in a long time, probably ever,
and I earnestly look forward to having literally any excuse to migrate off it.
~~~
xiwenc
I’m feel your pain also my friend. Azure quality is terrible compared to
competitors:
* no good working examples in docs
* docs hard to read
* docs are not consistent with reality
* the web portal UX is inconsistent and outright weird (when you navigate through resource group, you can scroll back horizontally to previous context/screen; what a joke)
* there are a gazillion preview API versions that never get released officially.
* and if you're lucky enough to work with Azure DevOps, it's like building a house of cards with different card types and sizes.
I've worked with AWS and GCP in the past. Indeed, Azure is often chosen by
CIOs rather than by the people who have to work with the service every day.
~~~
FridgeSeal
Oh my god the web UX, how did I forget about that: for the life of me I cannot
figure out why they make all the interfaces scroll sideways. Why? Who does
that?
Docs being hard to read and inconsistent with reality is a big point. My
favourite mismatch is the storage classes one: it turns out there's actually 2
different grades of SSD available, but their examples and docs only mention
premium SSD's. I only discovered "normal" SSD because they happen to auto-
create a storage class with them in your Kubernetes cluster. The adventure to
figure out whether you can attach a premium SSD to an instance is a whole new
ball game - trying to find which instances _actually_ allow you to attach them
is like looking for a needle in a haystack. Why are they so difficult about
it? AWS is like "you want an EBS volume of type io1? There you go, done".
Azure: "oh no, you _can't_ have premium ssd. Because reasons".
~~~
ahoka
Actually there are three kinds of SSD storage in Azure: Standard, Premium
and Ultra. I’m assuming that you need to provision an ‘s’ VM because the
regular instances lack the connectivity for the faster storage, but that’s
just guessing.
~~~
FridgeSeal
Oh, I forgot about the ultra ones.
I found a few instance types when I went looking, but their interface does not
make it easy to figure out which ones are premium-eligible. I do remember
the price going up not insignificantly for a premium-capable machine, which
feels a bit like double-dipping if you're also paying extra for the SSD.
------
sytringy05
I can't decide if I think this is a good idea or not. Conceptually I like that
I can get an S3 bucket/RDS db/SQS queue by using kubectl, but I'm not sure if
that's the best way to manage the lifecycle, especially for something like a
container registry that likely outlives any given k8s cluster.
~~~
closeparen
Why are these clusters going away?
~~~
sytringy05
We rebuild ours all the time. New config, k8s version upgrade, node OS
patching.
~~~
closeparen
Interesting. I'm only familiar with Mesos/Aurora, which is often considered
outdated next to Kubernetes, but it can do all those things in place.
Do you end up with a "meta-kubernetes" to deploy kubernetes clusters and
migrate services between them?
~~~
harpratap
You definitely can do the same with Kubernetes too; it's just that the scope
is too large and it doesn't have a good reputation for rolling updates of the
control plane.
> Do you end up with a "meta-kubernetes" to deploy kubernetes clusters and
> migrate services between them?
Congratulations, you just discovered the ClusterAPI
------
ransom1538
Here is my container: run it. Where is my url? The end.
No, I don't want Terraforms, puppets, yaml files, load balancers, nodes, pods,
k8s, chaos monkeys, Pulumies, pumas, unicorns, trees, portobilities, or
shards.
I love cloudrun and fargate. Cloudrun has like 5 settings, I wish it had like
2.
~~~
throwaway894345
I too want simplicity, but Fargate still requires a load balancer in most
cases. Further, you’ll probably need a database (we’ll assume something like
Aurora so you needn’t think about sharding or scale so much) and S3 buckets at
some point, and security obligates you to create good IAM roles and policies.
You’ll need secret storage and probably third-party services to configure.
Things are starting to get complex and you’re going to want to be able to know
that you can recreate all of this stuff if your app goes down or if you simply
want to stand up other environments and keep them in sync with prod as your
infra changes, so you’re going to want some infra-as-code solution (Terraform
or CloudFormation or Pulumi etc). Further, you’ll probably want to do some
async work at some point, and you can’t just fork an async task from your
Fargate container (because the load balancer isn’t aware of these async tasks
and will happily kill the container in which the async task is running because
the load balancer only cares that the connections have drained) so now you
need something like async containers, lambdas, AWS stepfunctions, AWS Batch,
etc.
While serverless can address a lot of this stuff (the load balancer, DNS, cert
management, etc configuration could be much easier or builtin to Fargate
services), some of it you can’t just wave away (IAM policies, third party
service configuration, database configuration and management, etc). You need
some of this complexity and you need something to help you manage it, namely
infra-as-code.
~~~
nojvek
Cloud run is one of my favorite cloud services. It’s so easy to use and cheap
for low traffic things. I set one up last year. GCP bills me 5 cents a month
(they have no shame billing in cents)
[https://issoseva.org](https://issoseva.org) hasn’t ever gone down.
------
hardwaresofton
At the risk of being early, RIP CloudFormation.
I posited that this was the benefit in knowing Kubernetes all along, and
possibly the ace up GCP's sleeve -- soon no cloud provider will have to offer
their own interface, they'll all just offer the one invented by Kubernetes.
~~~
weiming
There is also the AWS CDK
([https://aws.amazon.com/cdk/](https://aws.amazon.com/cdk/)), which
essentially lets you use your favorite language, like TypeScript or Python, to
generate CloudFormation, with an experience similar to Terraform. We've been
experimenting with it instead of TF, hoping it's here to stay.
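A minimal sketch of what that looks like in Python, assuming the v1 CDK packages (stack and bucket names made up):

    from aws_cdk import core, aws_s3 as s3

    class DemoStack(core.Stack):
        """Synthesizes to a CloudFormation template containing one bucket."""
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            s3.Bucket(self, "Assets", versioned=True)

    app = core.App()
    DemoStack(app, "demo")
    app.synth()  # writes the generated CloudFormation template to cdk.out/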
~~~
Normal_gaussian
Take a look at pulumi; it provides a programmatic interface and related
tooling on top of terraform.
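For comparison, a minimal Pulumi program is just ordinary Python (bucket name made up; run with `pulumi up`):

    import pulumi
    import pulumi_aws as aws

    # Plain Python: loops, functions and any SDK are available alongside the resources.
    bucket = aws.s3.Bucket("my-app-assets", acl="private")

    pulumi.export("bucket_name", bucket.id)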
~~~
hardwaresofton
Took the comment right out of my keyboard (?) -- these days whenever I talk
about devops with people, I bring up pulumi. HCL and almost all config-
languages-but-really-DSLs are a death sentence.
I am very unlikely to pick terraform for any personal projects ever again.
Imagine being able to literally drop down to the AWS SDK or CDK in the middle
of a script and then go back to Pulumi land? AFAIK this is basically not possible
with terraform (and terraform charges for API access via terraform cloud? or
it's like a premium feature?)
------
ponderingfish
Orchestration tools are the way forward especially when it comes to on-demand
video compression - it's helpful to have the tools to be able to spin up 100s
of servers to handle peak loads and then go down to nothing. Kubernetes is so
helpful in this.
~~~
jtsiskin
Would AWS spot instances be useful here?
~~~
big-malloc
Currently the cluster autoscaler supports using a pool of spot instances based
on pricing, which is super helpful for test clusters, and there are some other
tools available to ensure that you can evict your spot nodes when amazon needs
them back
------
wavesquid
This is great!
Are other companies doing similar things? e.g. I would love to be able to set
up Cloudflare Access for services in k8s
~~~
sytringy05
GCP (Config Connector) and Azure (Service something) both have similar things.
I've not heard of it happening outside a managed k8s env.
~~~
harpratap
[https://crossplane.io](https://crossplane.io) is doing a multi-cloud one
~~~
zxienin
I like their work, but their OAM centricity is too heavily opinionated.
~~~
bassamtabbara
disclaimer: I'm a maintainer on Crossplane.
OAM is an optional feature of crossplane - you don't have to use it if you
don't want to
~~~
zxienin
Good to know; at least that warms me up to Crossplane further. The messaging
might need an update, including within the docs. I mean, "Crossplane is _the_
OAM implementation" - coupled with OAM sprinkled all over the docs - gave me a
very different impression.
That aside, I think the Crossplane work is interesting.
------
zxienin
There is now a secular push towards the use of custom operators instead of
OSB. I wonder what finally caused this.
~~~
jacques_chester
A mix of factors, I think.
1\. OSBAPI is not widely known outside of the Cloud Foundry community it came
from. In turn that's because Cloud Foundry is not widely known either. Its
backers never bothered to market Cloud Foundry or OSBAPI to a wider audience.
2\. It imposes a relatively high barrier to entry for implementers. You need
to fill in a lot of capabilities before your service can appear in a
conformant marketplace. With CRDs you can have a prototype by lunchtime. It
might be crappy and you will reinvent a whole bunch of wheels, but the first
attempt is easy.
3\. Fashion trends. The first appearance of OSBAPI in Kubernetes-land used API
aggregation, which was supplanted by CRDs. Later implementations switched to
CRDs but by then the ship was already sailing.
4\. RDD. You get more points for writing your own freeform controller than for
implementing a standard that isn't the latest, coolest, hippest thing.
It's very frustrating as an observer without any way to influence events.
OSBAPI was an important attempt to save a great deal of pain. It provided a
uniform model, so that improvements could be shared amongst all
implementations, so that tools could work against standard interfaces in
predictable ways, so that end-users had one and only one set of concepts,
terms and tools to learn. It also made a crisp division between marketplaces,
provisioning and binding.
What we have instead is a mess. Everyone writing their own things, their own
way. No standards, no uniformity, different terms, different assumptions,
different levels of functionality. No meaningful marketplace concept.
Provisioning conflated with binding and vice versa.
It is a medium-sized disaster, diffuse but very real. And thanks to the
marketing genius of enterprise vendors who never saw a short-term buck in
broad spectrum developer awareness, it is basically an invisible disaster.
What we're heading towards now is seen as _normal_ and _ordinary_. And it
drives me bonkers.
~~~
zxienin
I’d agree on the mess. I also find it on over-engineered side. Do I really
need service discovery of services that I already know of, from AWS GCP...?
~~~
jacques_chester
If you want a little from column A and a little from column G, having a single
interface is pretty helpful. It's easier to automate and manage.
------
Niksko
The approach of generating the code from the existing Golang API bindings
means that hopefully this project will get support for lots of resources
pretty quickly.
Excited about this, though you do wonder whether it'll suffer the same fate as
Cloudformation: the Cloudformation team finds out about new feature launches
at the same time as the general public does. If the Kubernetes operator lags
behind, you're going to have to fall back to something else if you need
cutting edge features.
------
moondev
Seems odd there is no controller for EC2, nor even one planned on the roadmap
[https://github.com/aws/aws-
controllers-k8s/projects/1](https://github.com/aws/aws-
controllers-k8s/projects/1)
~~~
alexeldeib
It's not weird at all. A prime use case for this is to use Kubernetes itself
for the compute layer and orchestrating peripheral AWS components using
Kubernetes as the common control plane.
You can orchestrate entire application stacks (pods, persistent storage, cloud
resources as CRDs) using this approach.
~~~
harpratap
There is fairly decent demand for orchestrating VMs using Kubernetes
(KubeVirt); many legacy apps are too expensive to be rewritten in a cloud-
native way.
------
etxm
This is nice from the app manifest perspective because you can declare your
database right alongside your deployment.
The provisioning time of a deployment and an RDS instance is very different
though. This is probably most useful when you’re starting a service up for the
first time. This is also when it's not going to work as expected, due to the
latency of RDS starting up while your app crashes repeatedly waiting for that
connection string.
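The crash loop is avoidable if the app (or an init step) just waits for the connection string to start working - something like this sketch, with a made-up DSN and assuming a Postgres RDS instance:

    import time
    import psycopg2  # assuming a Postgres RDS instance

    DSN = "postgresql://app:secret@my-db.example.rds.amazonaws.com:5432/app"  # made up

    def wait_for_db(dsn, attempts=60):
        # Retry with capped backoff; a fresh RDS instance can take many minutes.
        for attempt in range(1, attempts + 1):
            try:
                psycopg2.connect(dsn).close()
                return
            except psycopg2.OperationalError:
                time.sleep(min(attempt * 2, 30))
        raise RuntimeError("database never became reachable")

    wait_for_db(DSN)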
This would be really nice for buckets and near instant provisioned resources,
but also kinda scary that someone could nuke a whole data store because they
got trigger-happy with a weird deployment and deleted and reapplied it.
My feelings, they are mixed. :D
------
MichaelMoser123
Kubernetes is supposed to be cloud-vendor agnostic; the cloud vendors counter
that by having extension operators to create some tie-in to the Kubernetes
deployment of their making.
I guess the 'Kubernetes' way would be to create a generalized object for
'object store' that would be implemented by means of S3 on AWS, and on Azure
it would be done as Blob Storage.
Now, with this approach you can only use the features common to all platforms;
you would have a problem with features exclusive to AWS, for instance, or you
would need some mapping from a generalized CRD object to the specific
implementation on each platform.
------
1-KB-OK
Interesting how they enforce the namespaced scope for ACK custom resources.
This is a logical design choice but makes it trickier for operators to use.
Say I have an operator watching all namespaces on the cluster. Since operator
CRDs are global in scope it makes sense for some operators to be installed as
singletons. A CR for this operator gets created in some namespace, and it
wants to talk to s3 -- it has to bring along its own s3 credentials and only
that CR is allowed to use the s3 bucket? You can imagine a scenario where
multiple CRs across namespaces want access to the same s3 bucket.
------
la6471
Everything from DNS to AWS SDKs gets reinvented in Kubernetes. It is the most
anal approach to infrastructure design I have seen in the last three decades.
A good design builds on the things that are already there and does not go
around trying to change every well-established protocol in the world. KISS.
------
sytse
This feels like Crossplane.io but limited to only AWS. Kelsey seems to think
the same
[https://twitter.com/kelseyhightower/status/12963213771342315...](https://twitter.com/kelseyhightower/status/1296321377134231552)
------
toumorokoshi
This has been posted a couple times, but GCP has an equivalent that's been
around for a while:
[https://cloud.google.com/config-
connector/docs/overview](https://cloud.google.com/config-
connector/docs/overview)
disclaimer: I work at GCP.
------
sunilkumarc
Wow. Now we can directly manage AWS services from Kubernetes.
Github: [https://aws.github.io/aws-
controllers-k8s/](https://aws.github.io/aws-controllers-k8s/)
On a different note, recently I was looking to learn AWS concepts through
online courses. After much research, I finally found this e-book on Gumroad,
written by Daniel Vassallo, who has worked on the AWS team for 10+ years. I
found this e-book very helpful as a beginner.
This book covers most of the topics that you need to learn to get started:
If someone is interested, here is the link :)
[https://gumroad.com/a/238777459/MsVlG](https://gumroad.com/a/238777459/MsVlG)
Facebook bug 'kills' users in 'terrible error' - xufi
http://www.bbc.com/news/technology-37957593
======
jrockway
The biggest thing I take away from this is that users have learned to tolerate
minor problems in software. I always make it a personal goal to have 0 bugs,
but never succeed. It is good that users cut us some slack, because it means
we can spend some time pushing the featureset forward, rather than making
everything 100% perfect 100% of the time. (Be more careful if you're working
on life-critical software, though. Features are not necessarily the most
important thing there ;)
------
tbveralrud
"We hope people who love %s will find comfort in the things others share to
remember and celebrate %s life." is one of the most insincere code commits of
that day. Let others write about a lost loved one, not robots.
~~~
avg_dev
I don't know, my microwave flashes "Enjoy your meal" every time it's done
heating something and I like that little touch.
~~~
joshmn
What kind of microwave is that? Mine just swears at me. Consecutively.
~~~
InclinedPlane
Microwaves are the most aggressive robots.
"Hey! Hey! Your food has been ready for zero seconds! Hey!"
. . .
"Hey! F#$face! Your burrito has been ready for five seconds! Eat your g-d food
meat bag!"
. . .
"Ten seconds?! REALLY?! Are you serious?! We're enemies forever now! Eat this
crap for f's sake!"
~~~
terinjokes
I see you've met my dishwasher.
"Hey, the wash cycle is over" Yes, but everything is still as hot as lava.
"Hey, it's been 5 minutes." Still hot.
And then it beeps a "Hey!" every five minutes thereafter.
------
lsmod
"An unusual bug on Facebook briefly labelled many people as dead."
------
yawaramin
You know, I bet it was something to do with `memorializeUser` again (see
[https://www.columbia.edu/~ng2573/zuggybuggy_is_2scale4ios.pd...](https://www.columbia.edu/~ng2573/zuggybuggy_is_2scale4ios.pdf)
slide 46). In fact I would go so far as to say this is the kind of thing that
should be encoded in the type system so it's a compile error to try to do
this.
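Something like this, sketched in Python with a type checker in mind (names are made up, not Facebook's actual code):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LivingUser:
        user_id: int

    @dataclass(frozen=True)
    class MemorializedUser:
        user_id: int

    def memorialize(user: LivingUser) -> MemorializedUser:
        # The only way to obtain a MemorializedUser is through this function.
        return MemorializedUser(user.user_id)

    def render_memorial_banner(user: MemorializedUser) -> str:
        return "Remembering this account"

    # mypy/pyright reject this call, so you can't accidentally show the
    # memorial banner for someone who is still alive:
    # render_memorial_banner(LivingUser(user_id=4))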
------
lalaithion
Thirty years from now: Same headline without quotes.
~~~
taneq
"It turns out these users had been dead for several years, but due to a glitch
in payroll, they had still been physically alive. We, uh, just fixed the
glitch."
------
rhizome
I have to wonder what the attempted feature was that resulted in this.
~~~
agildehaus
If it's anything like my bugs, it's a single-character typo in the template.
~~~
ben_jones
I feel like Facebook must do a phased roll out of their front-end
modifications such that they'd detect that before it was big enough to matter
(or perhaps during language normalization which must be huge for them). My
guess is they were running something on the back-end to "clean up" dead users
and it went haywire.
------
inimino
Let's wait for the post-mortem on this one.
~~~
avg_dev
(lol) do they usually give such? I'd love to know what happened here.
------
warsaw
Who uses Facebook?
~~~
dang
Comments to HN need to be civil and substantive. Please read the site
guidelines:
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)
[https://news.ycombinator.com/newswelcome.html](https://news.ycombinator.com/newswelcome.html)
Show HN: Tool for testing sites and apps on slow connections - pkhach
https://www.httpdebugger.com/netthrottler.html
======
shereadsthenews
Chrome has this built in, FWIW, and of course you can always use the standard
tools to achieve this on Linux.
~~~
Animats
_You can always use the standard tools to achieve this on Linux._
Add 200ms of network delay under Linux:
sudo tc qdisc add dev eth0 root netem delay 200
This guy at least provided a usable user interface.
~~~
Pokepokalypse
fwiw: when you're done messing around with a `tc qdisc add`, it's probably a
good idea to do a `tc qdisc del`... :)
~~~
Animats
Yes, I know that.
------
bacon_waffle
So I can't test this as I don't have a Windows machine, but from the
screenshot it seems only concerned with bandwidth. For my personal situation,
latency is the real killer.
I've got a gigabit fibre connection but am way down South in New Zealand, and
interact with a Perforce server that's in California for my day job. When one
does an operation in Perforce, like the equivalent of a 'git pull', there
seems to be at least a couple round-trips between the client and the server,
for each file. There are some tasks take a few seconds for folks in the
California office, where for me those are easily several minutes to several
dozen minutes. It's convenient when the weather is nice or the fire needs
feeding :).
~~~
adrianN
The joys of using a VCS designed for LANs. Clearcase has the same problem.
~~~
repiret
Years ago I worked somewhere where we had to use Visual Source Safe, on a Mac
(OS9), over a sub-1MBit DSL. Doing anything would take hours to days.
------
eps
Mods, is this sort of spam blast an acceptable behavior on HN now -
[https://news.ycombinator.com/submitted?id=pkhach](https://news.ycombinator.com/submitted?id=pkhach)
?
It could've been a decent Show HN discussion, but as it stands this is nothing
more than an ad for a commercial software.
~~~
dang
It's a bit over the line. The FAQ says a small number of reposts is ok if a
topic hasn't had significant attention yet:
[https://news.ycombinator.com/newsfaq.html](https://news.ycombinator.com/newsfaq.html).
Now that this has, the reposts should stop.
~~~
miles
It's always felt as if HN members not only should, but as a rule do, submit
links they've found of interest or value. If a member merely and repeatedly
submits their own commercial project(s), it feels out of alignment with the
spirit of the site.
More explicit rules or code (like limiting the frequency of submissions from a
given member to the same domain name, etc) may not be necessary, as the voting
system generally seems to work quite well. Or perhaps such code is already in
place? which would help explain the high quality content on HN.
As ever, thank you Dan (and Scott) for maintaining this wonderful resource.
~~~
dang
There are a lot of users who come to HN just to submit their own stuff, aren't
participating in the community, but don't realize that they're breaking any
norms either. We tend not to treat them as spammers unless they really overdo
it. Often we explain to them that (a) using HN just for promotion is something
the community doesn't like, and which we eventually penalize or even ban
accounts for; (b) if they want to post to HN it would be better to fully join
the community and submit a variety of things they personally find
intellectually interesting, and (c) if they do that, it's fine to occasionally
include their own stuff.
Most people respond to that explanation pretty well and HN has even gained a
few excellent submitters that way. So we've learned to treat this class of
users with a lighter touch than outright spammers, who mostly leave quite
different fingerprints.
(p.s. thanks for the kind words!)
------
zinckiwi
No need for a tool, I can just go and stay in a Hilton.
------
jeremy_wiebe
If you’re on iOS or macOS there’s Network Link Conditioner which does the same
thing.
~~~
skunkworker
To install, download the Additional Tools for Xcode {{xcode version}}. In
the downloaded DMG, open "Hardware/Network Link Conditioner.prefpane".
[https://developer.apple.com/download/more/?=additional%20too...](https://developer.apple.com/download/more/?=additional%20tools)
------
argd678
If you need lower-level emulation, with packet loss etc., Clumsy is great,
and QA departments can get up to speed on it quickly too. It's also free.
[https://jagt.github.io/clumsy/](https://jagt.github.io/clumsy/)
------
rmetzler
I'm looking for something I could put between two docker images to test for
problems that arise from slow API connections.
I have Go code (open source, but not written by me) that I suspect has
timeouts in certain situations, and I would like to change the code to be
failure tolerant.
Does anyone have a tip?
~~~
ninjaoxygen
I used tc on a Linux VM between two hosts to simulate loss, latency and
bandwidth limits. Two interfaces, I think TC only works in one direction per
interface so you have to enable it on both interfaces to get delay in both
directions.
See [https://stackoverflow.com/questions/614795/simulate-
delayed-...](https://stackoverflow.com/questions/614795/simulate-delayed-and-
dropped-packets-on-linux)
------
exodust
Why not just use Firefox dev tools?
F12 > click 'Responsive Design Mode' > then click 'Throttling', the options
are 2G, 3G, 4G, DSL, Wi-fi.
I guess if you need throttling for something other than a web page, you need
something other than a web browser.
~~~
pkhach
Hello,
Yes, you can use the built-in dev tools in your browser.
But if you are creating your own application (C++, .NET, Java) then you need
an external tool like this.
------
Waterluvian
Help me out here because I've been caught between what this website is telling
me and what I think is true.
What does "portability" mean?
It says "FREE portable download" but it's a .exe. So windows only?
~~~
niij
Portable as in it doesn't need to be installed; it runs the program as soon as
you start the exe.
Portable is different from cross-platform.
~~~
frabert
To be fair, that's a definition of "portable" only used among windows users,
as far as I know.
------
adrianN
It would be nice if developers could also test their stuff on old computers.
Many websites are completely unusable on older mobile devices for example, and
not because of network issues.
------
keeler
You can probably accomplish this with Linux's tc command.
~~~
ninjaoxygen
In my experience tc is only designed to work in one direction, so I had to use
a separate VM, place that machine between the test host and the rest of the
network, then enable TC on both interfaces.
~~~
eitland
My understanding is that typically you'd create a minimal router out of a
minimal server distro installation (either physical or virtual) and make a
couple of scripts to automate standard settings.
Source: At some point I was part of a team that used a setup like this for
testing.
------
guidedlight
Any recommendations for doing something similar during load testing (e.g.
Jmeter)?
~~~
idoco
At Loadmill we use thousands of real user devices from around the world to
simulate the load.
This way you can simulate the complexity of different devices, geo-locations,
and network connectivity levels in your tests.
Disclaimer, I'm one of the founders of Loadmill.
------
zmarty
How does this work? What mechanism does it use?
Obstacles to Developing Cost-Lowering Health Technology: The Inventor’s Dilemma - wormold
http://jama.jamanetwork.com/article.aspx?articleID=2429454
======
ucha
I like the example of the MI- and stroke-reducing polypill. However, if the
only reason this pill isn't sold is that it delivers a low return on
investment due to the high cost of the clinical trials, why aren't countries
where medical research is less privatized stepping in?
Surely, this miracle pill would be tested and ultimately sold in Cuba and some
European countries with excellent medical facilities where the public sector
shares a larger part in healthcare costs.
What's preventing this?
~~~
ams6110
Um, the high cost of the clinical trials is not due to privatized medical
research.
_the cost of the large clinical trials required for FDA approval_
It is FDA regulations, not privatized medical R&D.
~~~
ucha
I didn't say that the cost was high because of private medical research, only
that a private pharmaceutical company will not want to have negative returns
regardless of the origin of the costs.
~~~
a3n
Yes, a public good, which a country can decide to acquire and provide or not.
------
angersock
Oh dear christ.
A lot of the medical devices that could actually help people are well within
the ability of even a small team of undergrads to produce--and many do!
It's the sheer fuckheadedry of the market and regulatory environment and
capture of the FDA and insurers that basically make bringing anything to bear
a tedious, hair-graying experience.
~~~
pinaceae
totally, who needs the FDA. a little thalidomide is good for you, nothing like
untested and unproven stuff to shove into your veins.
let's also abolish the FAA, air travel needs to be freed from the shackles of
safety.
because innovation and disruption and stuff.
~~~
angersock
How many things have _you_ gotten through the FDA, friend?
The fact is, and it's well-documented, that there are many scientifically
unsubstantiated claims and procedures being done, that there is a vast paucity
of decent software, that patients are being injured and maimed and killed and
empoverished. And yet, there are these super-strict market-approval processes
and weird regulatory requirements on where and how you can open hospitals
(which is how they killed a lot of abortion clinics in Texas, by the way) and
a cartel-enforced supply-shortage of trained physicians.
Things are fucked.
The interactions of the government and the not-quite-free-market have produced
this sort of perverted and broken system.
------
shard
If there's $300B in annual costs due to health care expenditures and lost
productivity which can be reduced with a $1/day drug, it seems like the
medical insurance industry should be very interested in such a drug. Is the
medical insurance industry unable to carry out the research or pay a research
lab to do the work? Or are there other reasons they don't do this kind of
research?
~~~
DougWebb
What makes you think the Medical Insurance Industry is interested in reducing
medical costs? They benefit from higher costs.
If medical costs were much lower, fewer people would need/want insurance.
Premiums would have to be lowered, which means less cash coming into the
insurance company, and claims would be fewer and/or smaller which means less
cash flowing through the insurance company. Cashflow is where the insurance
company makes its profit.
The other profit center is captured premiums: money people pay in that they
never get back out. The insurance company doesn't care how high your medical
bills are or how much you wind up paying for them; the company only cares
about reducing its share as much as possible. Lowering the overall cost
doesn't alter the payment distribution, so the insurance company has no
incentive to lower overall costs.
If there was a pill you could take which would allow you to pay a larger
proportion of your medical costs, without reducing your premiums, the
insurance company would be all over it.
------
srunni
> Although the polypill could produce substantial public health benefits,
> people in the United States are unlikely to find out anytime soon. This is
> because the pill’s price is so low (≤$1 per tablet) and the cost of the
> large clinical trials required for FDA approval is so high, it is
> unattractive to investors.
Here are two great articles on this topic by Ben Roin of MIT:
_Unpatentable Drugs and the Standards of Patentability_ :
[http://dash.harvard.edu/bitstream/handle/1/10611775/Unpatent...](http://dash.harvard.edu/bitstream/handle/1/10611775/Unpatentable%20Drugs%20and%20the%20Standards%20of%20Patentability%20-%202009.pdf)
_Solving the Problem of New Uses_ :
[http://dash.harvard.edu/bitstream/handle/1/11189865/Solving%...](http://dash.harvard.edu/bitstream/handle/1/11189865/Solving%20the%20Problem%20of%20New%20Uses%20.pdf)
------
wehadfun
Here is an article about the pain of dealing with the medical industry:
[0]
[http://www.washingtonmonthly.com/features/2010/1007.blake.ht...](http://www.washingtonmonthly.com/features/2010/1007.blake.html)
------
hmahncke
100% correct in my experience with medical device VCs.
------
coldcode
What were the drugs they wanted to combine?
------
worik
FFS Solution 6: Nationalise the pharmaceutical industry and get health experts
deciding priorities
~~~
aswanson
Sure. Creating an even greater bureaucracy, answerable only to itself, with
guaranteed taxpayer cashflow is certain to produce pure innovation.
500 Startups partner Elizabeth Yin resigns over McClure situation - Geekette
https://www.axios.com/500-startups-partner-elizabeth-yin-resigns-2452787280.html
======
nikcub
Related:
[https://www.axios.com/exclusive-dave-mcclure-resigns-as-
gene...](https://www.axios.com/exclusive-dave-mcclure-resigns-as-general-
partner-of-500-startups-2452701900.html)
[https://techcrunch.com/2017/07/03/employee-email-
claims-500-...](https://techcrunch.com/2017/07/03/employee-email-
claims-500-startups-leadership-delayed-acknowledging-mcclures-harassment-as-
new-allegations-surface/)
[https://cherylyeoh.com/2017/07/03/shedding-light-on-the-
blac...](https://cherylyeoh.com/2017/07/03/shedding-light-on-the-black-box-of-
inappropriateness/)
This is a big series of stories breaking - not sure why it isn't getting much
traction on HN today.
~~~
hdra
Cheryl Yeoh's story was the #1 story just a while ago, and it is now nowhere
to be seen. I suspect some people are actively downvoting/flagging such
stories.
I'm even more surprised at the number of people who are still defending
McClure even after all this.
I expected the usual "that can't be the entire story, there must be a detail
they aren't telling us", which is a skepticism I can still somewhat understand
given today's media landscape.
But now, even after the details came out, I am seeing the "that is a normal
behaviour from a heterosexual man" responses. It feels so disheartening to see
that these kinds of responses not only exist in the industry but that we are
actually having arguments about them. And I'm a man. Hard to imagine how it
must feel for the women in the industry.
~~~
pen2l
> It feels so disheartening to see that these kind of responses not only exist
> in the industry but we are actually having arguments about it.
It's disheartening. But is it surprising? Look at the things the dude sitting
in 1600 Pennsylvania Ave. is saying and has said about women. Sadly, it may
just be the unfortunate and sorry state of things today that some men are like this.
There has been a lot of clamor for change, probably most of it genuine about
making things better for the women in the industry. Do you think things will
be considerably better for women in ... say, 10 years? I don't think so.
Actually, based on the things I hear in video game voice chat by folks
sounding like they're 10-15, the future is scary. And don't be fooled into
thinking that this problem is unique to the tech industry (I work in a very
large hospital; the last CEO was forced to resign because there was some
situation about him giving his mistress a high-paying job, and similar things
happened to other higher-up staff).
I don't have any good answers as to what _would_ make things better, so I'd be
very curious to hear from users here what are possible solutions.
~~~
mikestew
_Do you think things will be considerably better for women in ... say, 10
years?_
From the perspective of an olde phart who has been around since the 80s, no, I
don't think it will get better because my single data point says things have
been on a downward trend in about the last 15 years. Yes, I feel that
attitudes and behavior have gotten worse, not better, in the last thirty years
or so. Not every company, of course not, but I think there is a lot of "new
normal" going around (the "brogrammer" being just one aspect).
Solutions? You want to "make things better for women in the industry"? Start
by making things better for everyone: the workplace is not a frat house. Maybe
ditch the kegerator, for starters. Alcohol is for when I don't want to be
serious about what I'm doing. Hell, maybe we ditch the shorts and start
wearing long pants to work, even if it's jeans. If I had a _good_ answer, I'd
write a book and retire on the royalties. I don't, so I'll grasp at straws,
but there's something I can't quite put my finger on that says too much
casualness in our work environment spills over to a much more casual attitude
about how we deal with our coworkers. And, again I'm just spitballin', a
casual attitude toward my female coworkers might very translate to, "hey,
baby, nice ass".
------
guard0g
everyone has a mother, sister or daughter and wouldn't want to see this stuff
happen
------
Zikes
Hey dang, still waiting on my ban.
Or a response detailing how HN will be revamping the flagging system to
prevent its abuse for censorship. But I'm not holding my breath for that one.
| {
"pile_set_name": "HackerNews"
} |
Restlet raises $2M to facilitate RESTful API creation with APISpark - ferrantim
http://www.rudebaguette.com/2013/11/14/restlet-raises-2m/
======
ferrantim
Congrats to Jerome and team. Hope you guys do great!
For anyone interested, here are the java and js repos:
* [https://github.com/restlet/restlet-framework-java](https://github.com/restlet/restlet-framework-java)
* [https://github.com/restlet/restlet-framework-js](https://github.com/restlet/restlet-framework-js)
| {
"pile_set_name": "HackerNews"
} |
The Unreasonable Effectiveness of Deep Feature Extraction - hiphipjorge
http://www.basilica.ai/blog/the-unreasonable-effectiveness-of-deep-feature-extraction/
======
asavinov
Deep feature extraction is important for not only image analysis but also in
other areas where specialized tools might be useful such as listed below:
o
[https://github.com/Featuretools/featuretools](https://github.com/Featuretools/featuretools)
\- Automated feature engineering with main focus on relational structures and
deep feature synthesis
o [https://github.com/blue-yonder/tsfresh](https://github.com/blue-
yonder/tsfresh) \- Automatic extraction of relevant features from time series
o
[https://github.com/machinalis/featureforge](https://github.com/machinalis/featureforge)
\- creating and testing machine learning features, with a scikit-learn
compatible API
o [https://github.com/asavinov/lambdo](https://github.com/asavinov/lambdo) \-
Feature engineering and machine learning: together at last! The workflow
engine allows for integrating feature training and data wrangling tasks with
conventional ML
o [https://github.com/xiaoganghan/awesome-feature-
engineering](https://github.com/xiaoganghan/awesome-feature-engineering) \-
other resource related to feature engineering (video, audio, text)
~~~
mlucy
Definitely. There's been a lot of exciting work recently for text in
particular, like
[https://arxiv.org/pdf/1810.04805.pdf](https://arxiv.org/pdf/1810.04805.pdf) .
~~~
nl
Or from today, OpenAI's response to BERT: [https://blog.openai.com/better-
language-models/](https://blog.openai.com/better-language-models/)
Breaks 70% accuracy on the Winograd schema for the first time! (a lazy 7%
improvement in performance....)
------
kieckerjan
As the author acknowledges, we might be living in a window of opportunity
where big data firms are giving something away for free that may yet turn out
to be a big part of their future IP. Grab it while you can.
On a tangent, I really like the tone of voice in this article. Wide eyed,
optimistic and forward looking while at the same time knowledgeable and
practical. (Thanks!)
~~~
gmac
_big data firms are giving something away for free_
On that note, does anyone know if state-of-the-art models trained on billions
of images (such as Facebook's model trained via Instagram tags/images,
mentioned in the post) are publicly available and, if so, where?
Everything I turn up with a brief Google seems to have been trained on
ImageNet, which the post leads me to believe is now small and sub-par ...
~~~
hamilyon2
Have you found anything?
~~~
gmac
Afraid not — I was hoping for some replies here!
------
bobosha
This is very interesting and timely to my work, I had been struggling with
training a Mobilenet CNN for classification of human emotions ("in the wild"),
and struggling to get the model to converge. I tried multiclass to binary
models e.g. angry|not_angry etc. But couldn't get past the 60-70% accuracy
range.
I switched to extracting features from Imagenet and trained an xgboost binary
and boom...right out of the box am seeing ~88% accuracy.
Also the author's points about speed of training and flexibility are a major
plus for my work. Hope this helps others.
~~~
mlucy
Yeah, I think this pattern is pretty common. (Basilica's main business is an
API that does deep feature extraction as a service, so we end up talking to a
lot of people with tasks like yours -- and there are a _lot_ of them.)
We're actually working on an image model specialized for human faces right
now, since it's such a common problem and people usually don't have huge
datasets.
------
fouc
>But in the future, I think ML will look more like a tower of transfer
learning. You'll have a sequence of models, each of which specializes the
previous model, which was trained on a more general task with more data
available.
He's almost describing a future where we might buy/license pre-trained models
from Google/Facebook/etc that are trained on huge datasets, and then extend
that with more specific training from other sources of data in order to end up
with a model suited to the problem being solved.
It also sounds like we can feed the model's learnings back into new models
with new architectures as well as we discover better approaches later.
~~~
XuMiao
What do you think of a life-long learning scenario where models are trained
incrementally forever? For example, I train a model with 1000 examples, and it
sucks. The next guy picks it up and trains a new one by putting a regularizer
over mine. It might still suck. But after maybe 1000 people, the model begins
to get significantly better. Now, I pick up where I left off and improve it by
leveraging the current best. This continues forever. Imagine that this
community is supported by a block chain. We won't be relying on big companies
any more eventually.
~~~
jacquesm
What is it with the word 'blockchain' that will make people toss it into
otherwise completely unrelated text?
~~~
oehpr
nothing, they're describing a series of content addressable blocks that link
back to their ancestors. Which is a good application of a block chain. Think
IPFS.
It's not cryptocurrency. Though cryptocurrency definitely popularized the
technique.
~~~
fwip
IPFS isn't a blockchain just like git isn't a blockchain. "Blockchain" has
semantic meaning that "a chain of blocks" does not.
------
stared
A few caveats here:
\- It works (that well) only for vision (for language it sort-of-works only at
the word level: [http://p.migdal.pl/2017/01/06/king-man-woman-queen-
why.html](http://p.migdal.pl/2017/01/06/king-man-woman-queen-why.html))
\- "Do Better ImageNet Models Transfer Better?"
[https://arxiv.org/abs/1805.08974](https://arxiv.org/abs/1805.08974)
And if you want to play with transfer learning, here is a tutorial with a
working notebook: [https://deepsense.ai/keras-vs-pytorch-avp-transfer-
learning/](https://deepsense.ai/keras-vs-pytorch-avp-transfer-learning/)
~~~
mlucy
There's actually been a lot of really good work recently around textual
transfer learning. Google's BERT paper does sentence-level pretraining and
transfer to get state of the art results on a bunch of problems:
[https://arxiv.org/pdf/1810.04805.pdf](https://arxiv.org/pdf/1810.04805.pdf)
~~~
stared
Thanks for this reference, I will look it up. Though, from my experience
people in NLP still (by default) train from scratch, with some exceptions for
tasks on the same dataset:
\- [https://blog.openai.com/unsupervised-sentiment-
neuron/](https://blog.openai.com/unsupervised-sentiment-neuron/)
\- [http://ruder.io/nlp-imagenet/](http://ruder.io/nlp-imagenet/)
~~~
samcodes
This is true, but rapidly changing. In addition to fine tuneable language
models, you can do deep feature extraction with something like bert-as-service
[0] ... You can even fine tune BERT on your data, then use the fine tuned
model as a feature extractor.
[0] [https://github.com/hanxiao/bert-as-
service](https://github.com/hanxiao/bert-as-service)
------
mlucy
Hi everyone! Author here. Let me know if you have any questions, this is one
of my favorite subjects in the world to talk about.
~~~
fouc
What do you think are the most interesting types of problems to solve with
this?
~~~
mlucy
I think if you have a small to medium sized dataset of images or text, deep
feature extraction would be the first thing I'd try.
I'm not sure what the most interesting problems with that property are. Maybe
making specialized classifiers for people based on personal labeling? I've
always wanted e.g. a twitter filter that excludes specifically the tweets that
I don't want to read from my stream.
~~~
fouc
One problem that intrigues me is Chinese-to-English machine translation.
Specifically for a subset of Chinese Martial Arts novels (especially given
there's plenty of human translated versions to work with).
So Google/Bing/etc have their own pre-trained models for translations.
How would I access that in order to develop my own refinement w/ the domain
specific dataset I put together?
~~~
mlucy
I don't think you could get access to the actual models that are being used to
run e.g. Google Translate, but if you just want a big pretrained model as a
starting point, their research departments release things pretty frequently.
For example, [https://github.com/google-
research/bert](https://github.com/google-research/bert) (the multilingual
model) might be a pretty good starting point for a translator. It will
probably still be a lot of work to get it hooked up to a decoder and trained,
though.
There's probably a better pretrained model out there specifically for
translation, but I'm not sure where you'd find it.
------
jfries
Very interesting article! It answered some questions I've had for a long time.
I'm curious about how this works in practice. Is it always good enough to take
the outputs of the next-to-last layer as features? When doing quick
iterations, I assume the images in the data set have been run through the big
net as a preparation step? And the inputs to the net you're training is the
features? Does the new net always only need 1 layer?
What are some examples of where this worked well (except for the flowers
mentioned in the article)?
~~~
mlucy
> Is it always good enough to take the outputs of the next-to-last layer as
> features?
It usually doesn't matter all that much whether you take the next-to-last or
the third from last, it all performs pretty similarly. If you're doing
transfer to a task that's very dissimilar from the pretraining task, I think
it can sometimes be helpful to take the first dense layer after the
convolutional layers instead, but I can't seem to find the paper where I
remember reading that, so take it with a grain of salt.
> When doing quick iterations, I assume the images in the data set have been
> run through the big net as a preparation step?
Yep. (And, crucially, you don't have to run them through again every
iteration.)
> And the inputs to the net you're training is the features? Does the new net
> always only need 1 layer?
Yeah, you take the activations of the late layer of the pretrained net and use
them as the input features to the new model you're training. The new model
you're training can be as complicated as you like, but usually a simple linear
model performs great.
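If it helps to see the whole recipe end to end, here's a rough sketch of that
workflow (assuming Keras/TensorFlow and scikit-learn are available; the random
arrays are just stand-ins for a real labelled image set):

    # Hedged sketch of pretrained-CNN feature extraction plus a linear model.
    import numpy as np
    from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
    from sklearn.linear_model import LogisticRegression

    # Pretrained ImageNet network with the classification head removed;
    # global average pooling gives one fixed-length feature vector per image.
    extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")

    # Stand-in for a small labelled dataset of 224x224 RGB images.
    images = np.random.rand(200, 224, 224, 3) * 255
    labels = np.random.randint(0, 2, size=200)

    # Run every image through the big net exactly once, up front...
    features = extractor.predict(preprocess_input(images))

    # ...then iterate cheaply on a simple linear model over the cached features.
    clf = LogisticRegression(max_iter=1000).fit(features, labels)
    print(clf.score(features, labels))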
> What are some examples of where this worked well (except for the flowers
> mentioned in the article)?
The first paper in the post
([https://arxiv.org/abs/1403.6382](https://arxiv.org/abs/1403.6382)) covers
about a dozen different tasks.
------
mikekchar
It's hard to ask my question without sounding a bit naive :-) Back in the
early nineties I did some work with convoluted neural nets, except that at
that time we didn't call them "convoluted". They were just the neural nets
that were not provably uninteresting :-) My biggest problem was that I didn't
have enough hardware and so I put that kind of stuff on a shelf waiting for
hardware to improve (which it did, but I never got back to that shelf).
What I find a bit strange is the excitement that's going on. I find a lot of
these results pretty expected. Or at least this is what _I_ and anybody I
talked to at the time seemed to think would happen. Of course, the thing about
science is that sometimes you have to do the boring work of seeing if it does,
indeed, work like that. So while I've been glancing sidelong at the ML work
going on, it's been mostly a checklist of "Oh cool. So it _does_ work. I'm
glad".
The excitement has really been catching me off guard, though. It's as if
nobody else expected it to work like this. This in turn makes me wonder if I'm
being stupidly naive. Normally I find when somebody thinks, "Oh it was
obvious" it's because they had an oversimplified view of it and it just
happened to superficially match with reality. I suspect that's the case with
me :-)
For those doing research in the area (and I know there are some people here),
what have been the biggest discoveries/hurdles that we've overcome in the last
20 or 30 years? In retrospect, what were the biggest worries you had in terms
of wondering if it would work the way you thought it might? Going forward,
what are the most obvious hurdles that, if they don't work out might slow down
or halt our progression?
~~~
aabajian
If you haven't, you should take a few moments to read the original AlexNet
paper (only 11 pages):
[https://papers.nips.cc/paper/4824-imagenet-classification-
wi...](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-
convolutional-neural-networks.pdf)
What you're saying is true, it _should_ have worked in theory, but it just
_wasn't_ working for decades. The AlexNet team made several critical
optimizations to get it to work: (a) big network, (b) training on GPU, and (c)
using a ReLU instead of tanh(x).
In the end, it was the hardware that made it possible, but up until their
paper it really wasn't for sure. A good analogy is the invention of the
airplane. You can speculate all you want about the curvature of a bird's wing
and lift, but until you actually build a wing that flies, it's all speculation.
------
al2o3cr
Contrast a similar writeup on some interesting observations about solving
ImageNet with a network that only sees small patches (largest is 33px on a
side)
[https://medium.com/bethgelab/neural-networks-seem-to-
follow-...](https://medium.com/bethgelab/neural-networks-seem-to-follow-a-
puzzlingly-simple-strategy-to-classify-images-f4229317261f)
------
purplezooey
The question to me is, can you do this with e.g. Random Forest too, or is it
specific to NNs?
------
gdubs
This is probably naive, but I’m imagining something like the US Library of
Congress providing these models in the future. E.g., some federally funded
program to procure / create enormous data sets / train.
~~~
rsfern
I don’t think it’s that naive. NIST is actively getting into this space:
[https://www.nist.gov/topics/artificial-
intelligence](https://www.nist.gov/topics/artificial-intelligence)
------
CMCDragonkai
I'm wondering how this compares to transfer learning applied to the same
model. That is compare deep feature extraction plus linear model at the end vs
just transferring the weights to the same model and retraining to your
specific dataset.
------
zackmorris
From the article:
_Where are things headed?
There's a growing consensus that deep learning is going to be a centralizing
technology rather than a decentralizing one. We seem to be headed toward a
world where the only people with enough data and compute to train truly state-
of-the-art networks are a handful of large tech companies._
This is terrifying, but the same conclusion that I've come to.
I'm starting to feel more and more dread that this isn't how the future was
supposed to be. I used to be so passionate about technology, especially about
AI as the last solution in computer science.
But anymore, the most likely scenario I see for myself is moving out into the
desert like OB1 Kenobi. I'm just, so weary. So unbelievably weary, day by day,
in ever increasing ways.
~~~
coffeemug
Hey, I hope you don't take it the wrong way -- I'm coming from a place where I
hope you start feeling better -- but what you're experiencing might be
depression/mood affiliation. I.e. you feel weary and bleak, so the world seems
weary and bleak.
There are enormous problems for humanity to solve, but that has _always_ been
the case. From plagues and famines, to world wars, to now climate change, AI
risk, and maybe technology centralization. We've solved massive problems
before at unbelievable odds, and I want to think we'll do it again. And if
not, what of it? What else is there to do but work tirelessly at attempting to
solve them?
I hope you feel better, and find help if you need it -- don't mean to presume
too much. My e-mail is in my profile if you (or anyone else) needs someone to
talk to.
| {
"pile_set_name": "HackerNews"
} |
Did Angry Birds eat the iPad mags market? - atularora
http://blogs.ft.com/fttechhub/2010/12/did-angry-birds-eat-the-ipad-mags-market/
======
bauchidgw
Simple enough: WIRED on the iPad sucked, that's why the sales dropped.
| {
"pile_set_name": "HackerNews"
} |
Backup Shell script: each Mysql database to a separated dump - giuseppeurso
http://blog.giuseppeurso.net/export-each-mysql-database-to-a-separated-dump/index.html
======
pkhamre
You should add a command-line option to skip single databases.
| {
"pile_set_name": "HackerNews"
} |
Postgres Job Queues and Failure by MVCC - craigkerstiens
https://brandur.org/postgres-queues
======
chanks
(I'm the author of Que, the job queue discussed in the post most extensively)
This isn't terribly surprising to me, since I have an appreciation for what
long-running transactions will do to a system, and I try to design systems to
use transactions that are as short-lived as possible on OLTP systems. I
realize that this should be explicitly mentioned in the docs, though, I'll fix
that.
I'll also note that since the beginning Que has gone out of its way to use
session-level locks, not transaction-level ones, to ensure that you can
execute long-running jobs without the need to hold open a transaction while
they work. So I don't see this so much as a flaw inherent in the library as
something that people should keep in mind when they use it.
(It's also something that I expect will be much less of an issue in version
1.0, which is set up to use LISTEN/NOTIFY rather than a polling query to
distribute most jobs. That said, 1.0 has been a relatively low priority for
much of the last year, due to a lack of free time on my part and since I've
never had any complaints with the locking performance before. I hope I'll be
able to get it out in the next few months.)
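For anyone unfamiliar with the distinction, here's a rough Python/psycopg2
illustration (the DSN, lock key, and job body are placeholders, not Que's
actual implementation):

    # Illustrative only -- not Que's code. Assumes psycopg2 and a reachable DB.
    import psycopg2

    conn = psycopg2.connect("dbname=app")   # placeholder DSN
    cur = conn.cursor()
    job_id = 12345                          # placeholder lock key

    # Session-level: held until explicitly released (or the connection closes),
    # so the worker can keep committing short transactions while the job runs.
    cur.execute("SELECT pg_advisory_lock(%s)", (job_id,))
    conn.commit()                           # the lock survives this commit
    # ... work the job using short-lived transactions ...
    cur.execute("SELECT pg_advisory_unlock(%s)", (job_id,))
    conn.commit()

    # Transaction-level: released automatically at commit/rollback, which means
    # the lock only protects the job if a (potentially long-lived) transaction
    # stays open for the whole duration of the work.
    cur.execute("SELECT pg_advisory_xact_lock(%s)", (job_id,))
    # ... job runs inside the transaction psycopg2 opened implicitly ...
    conn.commit()                           # lock released here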
~~~
brandur
> I'll also note that since the beginning Que has gone out of its way to use
> session-level locks, not transaction-level ones, to ensure that you can
> execute long-running jobs without the need to hold open a transaction while
> they work. So I don't see this so much as a flaw inherent in the library as
> something that people should keep in mind when they use it.
+1! I tried to clarify in the "Lessons Learnt" section that this isn't so much
a problem with Que, but something that should be kept in mind for any kind of
"hot" Postgres table (where "hot" means lots of deletions and lots of index
lookups). (Although many queues are more vulnerable due to the nature of their
locking mechanisms.)
But anyway, thanks for all the hard work on Que. The performance boost upon
moving over from QC was nice, but I'd say that the major win was that I could
eliminate 90% of the code where I was reaching into QC internal APIs to add
metrics, logging, and other missing features.
~~~
chanks
Thank you!
------
LukaAl
Thanks for the article, very interesting. Just one point on the solution. To
my understanding, you prefer a queue implemented on a DB because transactions
guarantee that a task could not start unless the task before as succeeded and
committed the required data on the DB. In our environment we run tasks on
RabbitMQ/Celery. One of the nice feature that I believe exists also in Sidekiq
is that it allows you to create chain (or more complex structure) of task and
the worker itself will take care of synchronizing them removing your problem
(when a task finishes successfully it commits on the db and triggers the next
step). The only problem we had where on entry points were we create the task
chains and we fire them before committing the transaction. One solution was
manually committing before firing the task. But that was somewhat difficult.
What we have implemented was a Python decorator to have the tasks actually
fired after the function completion (thus after the transaction commit). In Go
we achieve the same result in a more simple way using the defer statement. In
my experience, all these solutions are local to the process that fires the
task, so there is less risk of interaction with other process, easy to
implement and more robust compared to other solution.
~~~
brandur
> Thanks for the article, very interesting.
Thanks!
> What we have implemented was a Python decorator to have the tasks actually
> fired after the function completion (thus after the transaction commit). In
> Go we achieve the same result in a more simple way using the defer
> statement. In my experience, all these solutions are local to the process
> that fires the task, so there is less risk of interaction with other
> process, easy to implement and more robust compared to other solution.
Oh yes, totally. I've seen this same pattern in Ruby before whereby a job
enqueue is put on something like an ActiveRecord `after_commit` hook.
One (overly) pedantic observation is that this still leaves you with the
possibility of having your transaction and data safely committed in your
database, but with your job not enqueued if something happens to your process
between the time of commit and time of enqueue. Admittedly though, this
probably doesn't happen all that often in real life.
Probably the best answer I have is that we take this approach for the sheer
convenience. We can do things like this:
    def create_user(email, password)
      User.transaction do
        user = User.new email: email
        user.set_password_hash(password)
        check_for_abuse!
        # make an account ID
        user.billing_account_id = uuid()
        # queue job to create record in billing system
        async_create_account_in_billing_system(user)
        # queue job to send an e-mail invite
        async_send_invite_email(user)
        create_auditing_record
        ...
      end
    end
You can build out very explicit chains of actions that may enqueue jobs or
call into other chains that may enqueue their own jobs and all the while never
have to worry about any kind of ordering problem while working anything. There
are no hidden callbacks anywhere, but you still get to keep perfect
correctness: if anything fails at any point, the whole system rolls back like
it never happened.
~~~
LukaAl
The observation on failure modes is correct. When you process thousands of
tasks every day, even low-probability problems happen. What bugs me with the
transaction approach is that you lose all the information (except probably for
logs). In your example, if I sign up on your service and for any reason the
process fails (maybe after I received the confirmation that everything is ok),
I will end up having no record stored for my account. This is problematic for
post mortem diagnosis, for customer support (although it's easy to ask the
customer to redo the signup) and so forth. Imagine you are handling payments
with an external provider (e.g. Paypal): I could end up being billed without a
trace on your system that I've paid.
I'm not saying that my approach is correct. I'm saying that just assuming that
transactions will solve all your problems discounts the fact that your system,
to be useful, has side effects (not using the term in a strict way) on your
customer and possibly on systems outside your organization. I prefer to plan
for actively managing the possible failure modes and to quickly detect and
correct anomalies. For this reason, inconsistent data is sometimes better than
no data.
Another point on the database approach that I hadn't thought of before: in the
past we designed a system that stored and updated user timelines on MySQL. It
has always been a nightmare. SQL databases are not designed with a high ratio
of write operations in mind. The indexes quickly get fragmented, and even
deleting entries doesn't immediately reduce disk consumption, etc. I don't see
this as a problem immediately applicable to a queue use case, but with your
service growing in size there's a risk of hitting scalability problems.
Obviously you could react like we have done: use a bigger database, use better
disks, create cleanup jobs. But it is bad software engineering and you are
just buying more time.
------
atombender
Good article!
I am currently implementing a project in which we use Postgres to track job
state (eg., run status, failures, timings, resource usage, related log
entries), but Kafka as the actual queueing mechanism -- thus bypassing the
challenges mentioned in the article but still getting the best of Postgres.
This way we have complete, introspectable, measurable history about every
queue item. It greatly simplifies the Postgres part of it (state updates are
always appends, no locking) and thanks to Kafka, increases performance and
scalability.
It also adds a measure of safety: We can detect "lost" jobs that disappear
because of data failure, bugs in Kafka, failing clients etc. We know that if a
job was never logged as "complete", it probably died.
The job log also functions as an audit log, and we also intend to use it for
certain jobs that benefit from being incremental and from being able to
continue from where they last left off (for example, feed processing).
~~~
brandur
Interesting approach!
I'd be curious to hear about the mechanic that you came up with for division
of labor among workers — since every client is essentially reading the same
stream, I guess you'd have to distribute jobs based on job_id modulo
worker_number or something like that?
~~~
atombender
I suppose one could use a round-robin sharding approach like you mention, but
it goes against Kafka's design, and it's not necessary.
Kafka divides a queue into partitions. Each partition is a completely
independent silo. When you publish messages, Kafka distributes them across
partititons. When you read messages, you always read from a partition.
This means partitions are also the unit of parallelism: You don't want
multiple workers on a single partition (because of the labour division problem
you mention). Rather, Kafka expects you to have one partition per worker.
This is more elegant than it sounds if you're coming from something like
RabbitMQ. Partitions (ie., queues) in Kafka are append-only and strictly
linear; unlike RabbitMQ, you can never "nack" a message in a way that results
in the message ending up at the back of the queue and thus violating the
original message order. Rather, Kafka expects each consumer to maintain its
"read position" in the queue. Failure handling, then, is simply a matter of
winding back the read position. And unlike RabbitMQ, there's less need for
complicated routing, dead-letter exchanges and so on, because rather than move
messages around, you're just moving a cursor.
Of course, message order is only preserved within a single partition; if you
publish messages A, B and C and you have 3 partitions and 3 workers, then in a
real world, messages may be processed in the order C, B, A. That sounds bad,
but then other queue solutions such as Que or RabbitMQ suffer from the exact
same problem: If you run 3 workers against one queue, your queue may supply
each worker with messages in the right order, but there's no guarantee that
they will be _processed_ in that order. The only way to guarantee ordering is
to have just one worker per queue, using some kind of locking (RabbitMQ does
support "exclusive" consumers). But then you don't get any parallelism at all.
So I think Kafka's solution is quite sane, even if it's more low-level and
less developer-friendly than AMQP.
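A rough sketch of that consumer pattern in Python (assuming the kafka-python
package; the topic name, partition assignment, and offset-store helpers are
made up for illustration, not our production code):

    # Illustrative one-worker-per-partition consumer with a manual read cursor.
    from kafka import KafkaConsumer, TopicPartition

    def load_offset():            # stand-in: read this worker's saved position
        return 0

    def save_offset(offset):      # stand-in: persist the new read position
        pass

    def handle_job(payload):      # stand-in for real job execution
        print("processing", payload)

    consumer = KafkaConsumer(
        bootstrap_servers="localhost:9092",
        enable_auto_commit=False,          # we manage the read position ourselves
    )
    partition = TopicPartition("jobs", 0)  # this worker owns partition 0
    consumer.assign([partition])
    consumer.seek(partition, load_offset())

    for record in consumer:
        handle_job(record.value)
        save_offset(record.offset + 1)     # failure handling = rewind this cursor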
------
brandur
I'd be curious to hear what the general community thinks of putting a job
queue in a database and if there are a lot of other QC/Que users out there.
FWIW, the transactional properties of a Postgres-backed queue were so
convenient that we took advantage of them for a long time (and still do)
despite the fact that they have a few caveats (e.g. poor degraded performance
as outlined in the post), but more recently there's been a bit of a shift
towards Sidekiq (probably because it's generally very problem-free and has
some pretty nice monitoring tools).
(Disclaimer: I authored this article.)
~~~
apinstein
We are using Postgres as our queue backing store. I tried switching to Sidekiq
but ran into issues (read here
[https://github.com/mperham/sidekiq/pull/624](https://github.com/mperham/sidekiq/pull/624)).
Fortunately our job throughput is small enough to not hit any scaling issues
with Postgres, so I stuck with that because of my confidence and experience
w/Postgres over the years. The issues I ran into on Sidekiq just made me
skeptical of their architecture/code maturity, though that was several years
ago and it may be much improved by now.
We use JQJobs (which we authored) to manage queueing and it's architected such
that it could be ported to Redis or some other better backing store, or
potentially even to QC/Que, which I wasn't aware of until your article (so
thanks for that!).
~~~
brandur
Ah, nice, thank-you!
> Fortunately our job throughput is small enough to not hit any scaling issues
> with Postgres, so I stuck with that because of my confidence and experience
> w/Postgres over the years.
I think we're in a pretty similar situation. For what it's worth, I think that
a queue in PG can scale up about as well as Postgres can as long as you keep
an eye on the whole system (watch out for long-lived transaction and the
like).
------
wpeterson
This was a well written article with an interesting investigation.
However, storing small, ephemeral messages like jobs in a queue within
Postgres is a bad idea and the pain far outweighs the benefits of
transactional rollback for jobs.
Instead, a much simpler solution is to plan for jobs to run at least once, use
a more appropriate datastore like Redis or RabbitMQ, and build in idempotency
and error handling at the job layer.
Postgres used as a system of record shouldn't be used for ephemeral message
queues.
~~~
anarazel
Well, it's not necessarily that simple. It can be very interesting to be able
to directly enter jobs into a queue in a transactional manner, with very low
latency. Say from a trigger.
Edit: typo
~~~
wpeterson
For the sake of error handling on rollback, you're usually better off deferring
job enqueueing to after-commit hooks if you're concerned about failures
resulting in a rollback.
~~~
adamtj
Doing anything after committing misses the point. If you can do that, you
don't need postgres.
For example, suppose you mark an account closed, commit, and then enqueue an
event to issue a refund from another system. It's possible that your process
may crash or be killed at just the wrong time leaving you with a closed
account but no refund.
So what if you enqueue the event before you commit? In that case, you might
crash before committing which will automatically rollback. Now you've done a
refund on a non-closed account.
Transactions make it trivial to guarantee that either both happen or neither
do. There are other ways to get that guarantee, but they require more work and
are more error prone.
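A sketch of what that looks like in practice (psycopg2 here; the table and
column names are hypothetical, not any particular queue library's schema):

    # Illustrative "both or neither" enqueue; schema names are made up.
    import psycopg2

    conn = psycopg2.connect("dbname=app")   # placeholder DSN
    account_id = 42                         # placeholder

    with conn, conn.cursor() as cur:
        # Both statements share one transaction: if the process dies before the
        # commit, neither the closed flag nor the refund job exists.
        cur.execute("UPDATE accounts SET closed = true WHERE id = %s", (account_id,))
        cur.execute(
            "INSERT INTO jobs (job_class, args) VALUES (%s, %s)",
            ("IssueRefund", '{"account_id": 42}'),
        )
    # leaving the block commits; an exception rolls both changes back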
------
haavikko
Were you using SERIALIZABLE isolation level in your application, as one code
example in your article seems to show? Would using READ COMMITTED level have
made a difference?
------
bmm6o
The long-lived transaction that keeps the deleted jobs from being reclaimed -
is that process working with the job queue table, or is it working with other
tables in the database?
~~~
nierman
Even if the long-running transaction hasn't accessed the job queue table yet
it might do so before it completes. Postgres needs to keep the "dead" tuples
accessible as long as there are active transactions that started before the
deletion.
~~~
bmm6o
Right, that's what I thought. Is there value in giving the job queue its own
database then, or are they taking advantage of the fact that the jobs
modifications can be included in transactions that involve other tables?
| {
"pile_set_name": "HackerNews"
} |
New Windows backdoor: SSL encryption is not safe anymore - hazelnut
http://www.spiegel.de/netzwelt/web/windows-hintertuer-gefaehrdet-ssl-verschluesselung-a-913825.html
http://translate.google.de/translate?sl=de&tl=en&js=n&prev=_t&hl=de&ie=UTF-8&u=http%3A%2F%2Fwww.spiegel.de%2Fnetzwelt%2Fweb%2Fwindows-hintertuer-gefaehrdet-ssl-verschluesselung-a-913825.html&act=url
======
hazelnut
translation:
[http://translate.google.de/translate?sl=de&tl=en&js=n&prev=_...](http://translate.google.de/translate?sl=de&tl=en&js=n&prev=_t&hl=de&ie=UTF-8&u=http%3A%2F%2Fwww.spiegel.de%2Fnetzwelt%2Fweb%2Fwindows-
hintertuer-gefaehrdet-ssl-verschluesselung-a-913825.html&act=url)
| {
"pile_set_name": "HackerNews"
} |
Ask HN: What are you using for managing reference data in your app? - paramz
Hi HN, In almost every company I have ever worked in, we had some in-house solution for managing so-called dictionaries/parameters/reference-data. If you still wonder what I am talking about: assume you have some data that you would normally put in an enum, but it has to be dynamic and business people need to have control over it; what would you use? An admin panel would probably be the first choice, but sometimes there are not enough resources or budget to do so. Feature toggles are the closest thing that comes to my mind when I think about the potential product, but their use case is somewhat different (lack of queries, versioning, not suited to lots of data). I have decided with my friends that we could deliver a solution to this problem, but I wonder if this is something that you would use, or maybe you are already using some solution that solves this problem?
======
smt88
I've been in software for 20 years and can't figure out what use case you're
talking about.
Can you explain or give more concrete examples?
If you want to sell a product to people who don't have time to build an admin
panel, that's a market without much time or money -- both of the things they
need to research your product, test it, and integrate it.
~~~
paramz
Let's say you have a platform where you can define Excel-like tables
(parameters); each change to such a table creates a new version. Assume that
after creating a parameter, a new version is created with a generated REST
endpoint for it, e.g.
    /param/currencies/version/1
could give you
    { data: [ EUR, USD, SEK, GBP ] }
Use cases:
\- You have a map with store locations in your app. Adding or deleting these
locations would require some kind of UI or access to the DB; you could create a
param and fetch this data from there.
\- There is a discounts feature in your app; you could define a parameter with
discounts for each product, and non-technical people could easily change its
values without developer assistance.
\- You can create whitelists, blacklists, addresses, descriptions: almost
anything that changes and that non-technical people should be able to
configure/change.
Of course, creating parameters and managing them would be done through a simple
UI. The only thing the developer has to do is integrate the feature with the
REST API.
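To make the integration side concrete, a hypothetical client could be as small
as this (assuming the requests library; the base URL and endpoint shape just
mirror the example above):

    # Hypothetical consumer of the versioned parameter endpoint described above.
    import requests

    BASE_URL = "https://params.example.com"      # placeholder

    def get_param(name, version):
        resp = requests.get("%s/param/%s/version/%s" % (BASE_URL, name, version))
        resp.raise_for_status()
        return resp.json()["data"]

    currencies = get_param("currencies", 1)      # e.g. ["EUR", "USD", "SEK", "GBP"]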
Let me know what you think. I am really interested in knowing your point of
view and if this is something that you could find useful.
| {
"pile_set_name": "HackerNews"
} |
Jobs at Airbnb.com - bkwok
http://www.airbnb.com/jobs?=srcHN
======
bkwok
We announced to the Internet our latest round of funding. We launched our new
iPhone app and we're proud of it.
We're creating an industry and we need the best people to be a part of it plus
we love making people happy.
We have 40 positions open on our jobs page so take a look at the various
opportunities we have for you to join our revolution. If you have the passion
and hunger to change the world, then talk to us.
We're waiting to hear from you.
------
brianchesky
More than half of these positions are product, design, or engineering.
~~~
dotBen
Actually (+ interestingly) _none_ of them are product (as in "product
development", "product management", "product owner", etc).
No one seems to be hiring product people, and it seems to be a new trend from
what I've been seeing recently.
------
davidedicillo
You guys forgot to link the "You know the first names of who designed this
chair" to the Eames's chair picture in the UI Designer position ;)
------
Dramatize
Give me a year to finish learning RoR and I'd be happy to join Airbnb :)
------
alexcoomans
I'd love to work there - just a few more years to get out of college :)
------
perokreco
No engineering interns? Thats a bit odd.
| {
"pile_set_name": "HackerNews"
} |
Removing Python 2.x support from Django for version 2.0 - ReticentMonkey
https://github.com/django/django/pull/7867/files
======
Flimm
The next release, Django 1.11, will be a long-term support release, and the
one after that, Django 2.0, will no longer support Python 2.
[https://www.djangoproject.com/weblog/2015/jun/25/roadmap/](https://www.djangoproject.com/weblog/2015/jun/25/roadmap/)
I've grow to highly respect the Django project for its good documentation, its
healthy consideration for backwards compatibility, security, steady
improvements and all round goodness.
~~~
Galanwe
Interestingly, I have the exact opposite view on Django.
I hate their API and overall architecture, which I find to be the result of
glueing features on top of features for many years. The internal code also is
just like that: looks like every single method is riddled with out-of-band
conditionals, which is the result of a community that prefers to hack things
to work, instead of rethinking/refactoring.
~~~
stuaxo
Last time I looked a lot of the insides were terrible.
A project I was really impressed with the internals of is Celery (was
expecting the worst having seen Django).
~~~
Ralfp
> Last time I looked a lot of the insides were terrible.
But is it really? Speaking from personal experience, it is easy to compare a
project with a large featureset (and one with heritage) to one scoped to doing
a single thing and come to the conclusion that the smaller, focused codebase is
more consistent and better implemented. At the end of the day what matters is
whether those terrible parts actually bite back:
\- is this code changed frequently? Does it need to be changed frequently?
\- is it written in a way that makes fixes and improvements unbearably costly?
\- is it written in a way that allows it to be taken apart? How costly are
those individual parts to improve?
Django is a large codebase that is worked on by different people when time
permits, which means different parts differ in their age, practices and,
ultimately, quality. This eventually results in a codebase that may give the
appearance of being messy.
Joel Spolsky explains this nicely in his article about old and large codebases
appearing as hairy and messy to developers:
[https://www.joelonsoftware.com/2000/04/06/things-you-
should-...](https://www.joelonsoftware.com/2000/04/06/things-you-should-never-
do-part-i/)
Especially the part that follows below quote is valuable wisdom to keep in
mind:
> When programmers say that their code is a holy mess (as they always do),
> there are three kinds of things that are wrong with it.
------
yuvadam
This call has been made a while back, and it makes perfect sense. Python 2 is
slowly being EOL'd and if you're starting a brand new Django project there's
no reason on earth you should choose Python 2 anymore.
Sure legacy projects still need support and for that they get the 1.11 LTS,
but otherwise it's really time to move on.
~~~
hueving
Easy to say when you don't depend on C extensions only compatible with 2.7.
~~~
gkya
How hard is it to port a C extension? I don't really know the APIs, but is it
impossible to transform with a script?
~~~
throwawayish
Not hard. You can support 3 and 2 in the same file without much hassle.
Practical example:
[https://github.com/zopefoundation/BTrees/blob/master/BTrees/...](https://github.com/zopefoundation/BTrees/blob/master/BTrees/_compat.h)
[https://github.com/zopefoundation/BTrees/blob/master/BTrees/...](https://github.com/zopefoundation/BTrees/blob/master/BTrees/BTreeModuleTemplate.c)
There are a couple #if PY3K, but not much, really.
I ported a bunch of extension modules, total a couple thousand LOC, and it was
pretty much a matter of reading the docs (see guide at
[https://docs.python.org/3/howto/cporting.html](https://docs.python.org/3/howto/cporting.html)
) and adding a few #ifs. Total time maybe an hour or two.
------
rowanseymour
I'm glad they are making a clean break from Python 2 and I hope this pushes
other projects in the ecosystem to fix those remaining libraries without
Python 3 support. It does get a bit frustrating when things break between
Django releases, but they have a good system of deprecating things for a
couple of releases beforehand. And at the end of the day, Django is for people
who want to build websites, not life support machines... and I think they're
doing a decent job of striking a balance between breakage and stagnation.
------
nodamage
I have a Python 2.7 project that has been running smoothly for many years now
and I'm having trouble finding a reason to upgrade to Python 3. The project
uses the unicode type to represent all strings, and encodes/decodes as
necessary (usually to UTF-8) when doing I/O. I haven't really had any of the
Unicode handling problems that people seem to complain about in Python 2.
Can someone explain what benefit I would actually gain from upgrading to
Python 3 if I'm already "handling Unicode properly" in Python 2? So far it
still seems rather minimal at the moment, and the risk of breaking something
during the upgrade process (either in my own code or in one of my
dependencies) doesn't seem like it's worth the effort.
~~~
bmh100
You gain access to continued language support in 2020. New features involving
strings will have less risk of bugs. The "range" function is more memory
efficient. Integer division no longer truncates silently (it returns a float),
reducing bug risk.
Dictionaries with guaranteed ordering. Thousands separator in string
formatting. Bit length on integers. Combinations with replacement on
itertools. New, faster I/O library and faster json. Concurrent futures module.
Ability to define stable ABI for extensions. New CLI option parsing module.
Dictionary-based logging configuration. Index and count on ranges. Barrier
synchronization for threads. Faster sorting via internal upgrade to Timsort.
Async I/O. Support for spawn and forkserver in multiprocessing. Child context
in multiprocessing. Hash collision cost reduced. Significantly faster startup.
Type hints. Faster directory traversal. Faster regular expression parsing.
Faster I/O for bytes. Faster dumps. Reduced method memory usage through better
caching. Dramatically less memory usage by random. Faster string manipulation.
Faster property calls. Formatted string literals (interpolated strings).
Asynchronous generators. Asynchronous comprehensions.
How much faster might your code run just by upgrading to Python 3? How much
memory might you save?
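If you want a quick taste of a few of those in one snippet (Python 3.6+ for the
f-string):

    from itertools import combinations_with_replacement

    print(f"{123456789:,}")           # thousands separator: 123,456,789
    print((1024).bit_length())        # bit length on integers: 11
    print(7 / 2, 7 // 2)              # true division vs. explicit floor: 3.5 3
    print(list(combinations_with_replacement("AB", 2)))
    # [('A', 'A'), ('A', 'B'), ('B', 'B')]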
~~~
Buttons840
> Dictionaries with guaranteed ordering.
I don't think you're supposed to depend on the ordering of dictionaries. It's
an implementation detail which might get changed, although it won't actually
ever be changed because people will come to depend on it.
~~~
bmh100
I'm specifically referring to OrderedDict in this case, which does have
guaranteed, insertion-based ordering. It was introduced in 3.1, circa 2009,
via PEP 372.
~~~
wolf550e
[https://docs.python.org/2/library/collections.html#ordereddi...](https://docs.python.org/2/library/collections.html#ordereddict-
objects)
> New in version 2.7.
~~~
bmh100
That's a good point. I didn't realize that it was also introduced into 2.7 at
the same time.
------
stevehiehn
Good. I've been getting into python a bit because I have an interest in
data science. I'm mostly a Java dev. I have to say the python2/3 divide is a
real turn off. Many of the science libs I want to use seem to be on 2.7 with no
signs of moving.
~~~
joeyspn
> Many of the science libs want to use seem to be in 2.7 with no signs of
> moving
The most important scientific libraries have pledged to drop support before
2020, and are all python3-ready
[http://www.python3statement.org/](http://www.python3statement.org/)
~~~
lmm
Is there any hope of other libraries and OSes doing the same thing?
I used to work on a GUI app in Python. I ported it to Python 3, then switched
OSes for various reasons. _5 years on_ , on Ubuntu Xenial (so new I can't even
use it in Travis, but that's a separate whine), I install pykdeuic4 and it's
using Python 2. So I've basically abandoned that project for 5 years now,
because every time I looked at it I thought "surely Python 3 will be here in a
few months, I don't want to port backwards to Python 2".
(Serious question: is there a PPA or anything I can use to get these things
for Python 3? I need PyQwt as well as PyKDE)
~~~
xioxox
Have you tried python3-pykde4? It seems to contain a pykdeuic4.py file.
~~~
lmm
No - but you've set me on the right track, there looks to be a pykdeuic4-3.4
executable (I had assumed if there were anything like that it would be a case
of update-alternatives, but apparently not?). Will try that when I get home.
~~~
StavrosK
In general, I've found that Ubuntu versions of the last 1-2 years support
Python 3 fine, you just have to use the "python3-*" packages.
~~~
lmm
Fair enough. E.g. I was assuming the reason PyQwt didn't exist for python3 was
because Ubunty hadn't packaged it, but having looked further it seems the
library itself is unmaintained.
------
oliwarner
A whole pile of people complaining about upgrading Django highlights two
things to me:
Not enough people are using tests. A decent set of tests makes upgrades super
easy. The upgrade documentation is decent so you just spend 20 minutes
upgrading broken things until it all works again.
People pick the wrong version. I've seen people develop and even deploy on
-dev and it makes me cry inside because they'll need to track Django changes
in realtime or near enough. Pick an LTS release and you get up to three years
on that version with security and data-loss upgrades and no API changes.
------
misterhtmlcss
Is anyone going to talk about what this means for Python and Django? I read
the first 30-40 comments and they are all about off topic stuff related to
Django, but still the core premise is the committed move to Python 3.x going
forward.
What do people think of that?! I'm a newer dev and I'd really really love to
hear what people think of that and what it means for the future rather than
side conversations about how bad their API is, how good it is, how good their
Docs are and how bad they are.... Blah blah.
Please!! This community is filled with some of the most brilliant minds and I
for one don't want to miss out on this chance to hear what people think of
this change.
Please please don't reply that you disagree with my POV. That's irrelevant,
but please do reply if you are interested in the initial topic. I'd be very
excited to hear your thoughts.
So Django moving to Python 3.X Go :)
~~~
spiffyman
First, this is a good thing for the community. The ecosystem has been pretty
well prepared for 3.x adoption for a while, but we just haven't done it.
Still, when Django switched its default docs to use 3.x instead of 2.x, it
noticeably increased adoption of 3.x. (Source: Kenneth Reitz on "Talk Python
to Me" episode #6.) By pushing on with 3.x, Django is doing its part to drag
the rest of us forward with it.
Second, this is necessary. Support for Python 2.x is supposed to end in 2020,
per Guido's keynote at PyCon 2016, so Django is going to have to get in line
in ~3 years one way or the other. A major version increment is a great time to
introduce such a breaking change.
So ... "what this means" is that Django is doing what it has to do, which
happens to coincide with the interests of the community at large. _shrug_ I'm
glad it's happening, but there shouldn't be a whole lot of drama or hand-
wringing here.
------
gkya
This is a nice patch [1] to review for Python coders. Seems to me that most
incompatibilities are provoked by the unicode transition.
[1] [https://patch-
diff.githubusercontent.com/raw/django/django/p...](https://patch-
diff.githubusercontent.com/raw/django/django/pull/7867.patch)
------
erikb
There are only two possible opinions here:
A) You mostly have Python3 projects: Then you like it because you know more
ressources will be spent on your pipeline and having more Py3 packages is also
helpful.
B) You still have Python2 projects: You hate it, because it pushes you out of
your comfort zone.
But I have to say, we want our langauges to develop as well. We want our
packages to get attention. And there was lots of time to switch and experiment
with switching. Ergo, it should happen. Even if you don't like it as much,
that's where things are heading. Deal with it, move on. Let the community help
you, if necessary.
------
karyon
The related django issue is here:
[https://code.djangoproject.com/ticket/23919](https://code.djangoproject.com/ticket/23919)
there are lots of other cleanups happening right now. It's a real pleasure to
look at the diffs :)
------
myf01d
I hope they just find a way to support SQLAlchemy natively like they did with
Jinja2 because Django ORM is really very restrictive and has numerous serious
annoying bugs that have been open since I was in high school.
~~~
anentropic
> Django ORM ... has numerous serious annoying bugs
Such as?
I've worked primarily with Django for years and I think if the ORM really had
"numerous serious annoying bugs" I'd have a mental library of these things to
watch out for. But I can't think of any ORM bugs off the top of my head, I
don't really remember encountering any.
We all know SQL Alchemy is 'better' and there are things Django ORM can't do,
but 99% of the time it's adequate.
Are you sure you didn't mean "features I wish it had"...?
~~~
myf01d
such as
1\. multi-column primary key.
2\. annotate several counts for some query correctly.
That's what I remember for now.
~~~
jsmeaton
1\. Would be a new feature, not really a bug. There have been multiple
attempts to resolve which have all failed. DEPs exist to address this
shortcoming.
2\. Yep. Still a crappy situation to be in, but one that's also tricky to
solve due to not being able to control the joins across multi-valued
relationships.
------
ReticentMonkey
reddit discussion at /r/python :
[https://www.reddit.com/r/Python/comments/5otufg/django_20_no...](https://www.reddit.com/r/Python/comments/5otufg/django_20_now_on_master_will_not_support_python_2/?ref=share&ref_source=link)
------
Acalyptol
Time to introduce Python 4.
~~~
disconnected
I'm still waiting for Python 3.11 for Workgroups.
------
gigatexal
This is great news. It will help move people off their python 2 code bases
even more. Kudos to the Django team.
------
karthikp
Oh boy. And here I am still using Py2.7 with Django 1.6
~~~
Ensorceled
I'm in the midst of upgrading a 1.6 project to 1.10 and Python 3. Wish me
continued success :-)
~~~
karthikp
I've been trying that for the past 1 year with no luck. Product always takes
higher priority.
Make sure you have dedicated time for the migration
~~~
Daishiman
This is not a sustainable position to be in. Migrations _are_ product issues.
------
gojomo
Because incrementing version numbers is free, Django might as well bump the
Python-3-requiring version number to Django 3.0.
Lots of beginners and low-attention devs will find "Django 3 needs Python 3"
easier to keep straight than "Django 2 needs Python 3".
------
mark-r
I was surprised to see the elimination of the encoding comments, I thought
that the default encoding would be platform dependent. After a little research
I found PEP 3120 which mandates UTF-8 for everybody, implemented in Python
3.0. It also goes into the history of source encoding for 1.x and 2.x. I
wonder why there aren't more problems with Windows users whose editors don't
use UTF-8 by default?
~~~
rowanseymour
Makes sense given Python 3 lets identifiers contain non-ascii characters, e.g.
café=123, 变量="x"
~~~
mark-r
I'm all-in on the usefulness of UTF-8, I wish there was a way to configure
Windows to reliably use it as its default character encoding. If I create a
file with Notepad containing the line café=123 and save it without specifying
an encoding, I can't import it into Python. I spend a lot of time on
StackOverflow and I don't remember seeing that problem come up.
------
romanovcode
Good, it's about time this nonsense ends.
~~~
jdimov11
This is the beginning of the Python 3 nonsense, not the end yet. It will end
when the Python 3 joke is scrapped and replaced with Python 4 as a SEAMLESS
continuation of Python 2.
~~~
singularity2001
Don't know why you were downvoted: still waiting for a seamless Python X
upgrade as well, without code duplication.
~~~
stefantalpalaru
It already exists, as a Python2 fork:
[https://github.com/naftaliharris/placeholder](https://github.com/naftaliharris/placeholder)
~~~
billoday
This is the greatest thing ever - for systems work, python3 is a nightmare.
Thanks for sharing this.
~~~
detaro
What's special about "systems work" that makes python3 worse in your
experience? (also, was is "systems work" for you, since I might be
misinterpreting that -> I am assuming "low-level unix scripting" or something
like that)
~~~
billoday
In my last three companies, the bulk of the infrastructure was defined and
managed via Python scripts (a lot of this predated Ansible being great), so
what gets forgotten is the literal billions of lines of custom wrappers and
classes that are broken, usually on the print statement v function debate or
how string formatting works. I can't justify hiring someone to dig through all
that code just to bring it up to snuff and everything needs tweaking. Usually
we end up just writing new code in another language and call it from python or
the other way around. I can't seem to get comfortable handling both versions
in one project without getting REALLY frustrated. So, yeah, a fork with the
niceties from python3 that allow my tech debt to still run (and hopefully
better), allowing me to replace bits (likely into non python languages - Go is
growing on me) at a time and not en masse is pretty frikken awesome.
------
ReticentMonkey
Can we expect the async/await introduced from Python 3 for async request
handling or maybe some heavy operations ? Something like sanic:
[https://github.com/channelcat/sanic](https://github.com/channelcat/sanic)
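For anyone who hasn't seen it, a sanic handler looks roughly like this (a sketch
based on its README at the time — the exact constructor and arguments may differ
between versions):

    from sanic import Sanic
    from sanic.response import json

    app = Sanic(__name__)

    @app.route("/")
    async def index(request):          # async def handlers run on an asyncio loop
        return json({"hello": "world"})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)

The open question for Django is how to offer something like this behind an
interface that WSGI (a synchronous spec) wasn't designed for.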
~~~
lewiseason
It seems like that'll come, but that it'll cause some issues with some WSGI
implementations.
[https://www.reddit.com/r/Python/comments/5otufg/django_20_no...](https://www.reddit.com/r/Python/comments/5otufg/django_20_now_on_master_will_not_support_python_2/dcmiiv3/?st=iy4izox8&sh=e27de429)
------
hirokiky
Say good bye to django.utils.six. yay
------
alanfranzoni
So, after a poor evolution strategy that led the Python world to be split in
two and forces maintainers to offer two versions for the same library, and
upstream maintainers to offer support for two different python versions, the
same is happening for Django!
I speculate that the latest Django 1.x will remain used - and possibly the
most used - for a lot, lot of time.
~~~
alanfranzoni
Please, don't tell me how "Python3 is good" \- I know everything. I just still
don't approve of the way the transition was made - if we had gotten to Python 3
through progressive deprecation and evolution via Python 2.8 and 2.9, we wouldn't be
where we are now.
~~~
Al-Khwarizmi
Or if they had called Python 3 a different name, and let both branches evolve
freely and compete.
~~~
gtaylor
This sounds like a fork. Nothing need stop someone from forking and
maintaining CPython 2.x. Open source is a do-ocracy.
But I doubt it'd be worth it. Python 3 is getting great traction and is a
fundamentally better language.
~~~
Al-Khwarizmi
Python 3 is already a fork.
The problem is that Python 2 cannot evolve freely alongside Python 3, because
even if someone wants to maintain it and keep releasing versions, the Python
Software Foundation won't let them use the name Python (there was a post some
weeks ago about someone who actually tried). So there is no free competition
between 2 and 3. 2 has been basically killed by a decision from above.
Don't get me wrong, I'm no Python 3 hater. In fact, I have some projects in
Python 3 and I would leave Python 2 if I could. But I, like many people, have
to code stuff that has dependencies on Python 2, and the way they have handled
the update bothers us for no good reason. In fact, the whole schism fiasco is
making me use less Python and more Java, where my stone-age code still runs,
lately.
~~~
slig
> the Python Software Foundation won't let them use the name Python
That's how trademarks are supposed to work; they must go after anyone using it
without permission or they lose it.
~~~
true_religion
Or they could give permission.
~~~
rbanffy
And have a confusing set of different and incompatible languages with the same
name?
People are complaining about Python 3 being named Python because some code
breaks under it. That would be hell.
~~~
true_religion
You already have Python 2. This is a continuation of it that is simply closer
in semantics to Python 3. How could it be bad if, apart from the Unicode
semantics, the two versions became equivalent?
~~~
rbanffy
There is no guarantee the two branches would converge.
Some things that are very useful are backported to 2, but others are just too
much work.
------
daveguy
Seriously? The entire change to "unsupport" the majority of Python code is a
mass delete of from __future__ import unicode_literals and utf-8 encoding? Is
that really the extent of the "too difficult to maintain" code? There will be
a split.
~~~
jsmeaton
Just one step.
[https://code.djangoproject.com/ticket/23919](https://code.djangoproject.com/ticket/23919)
~~~
daveguy
Gotcha. Thanks for the clarification (actually 2 of those steps). This is a
great reference.
~~~
Ensorceled
Also factor in halving the ongoing QA, testing, and environment-dependent
bug-fixing efforts.
------
scrollaway
Oh my god stop. You're all over this thread. _What bit you_?
_This is the price you pay for staying on an old version_. You do not get to
stick to an old version AND demand that others do too.
You CAN stay on Python 2. You CAN stay on Django 1.11. It's LTS. So is Python
2.7. You get to use both until 2020 with no issues. After that, not upgrading
is a technical debt that will start to accrue, faster and faster as you can no
longer use recent versions of various software.
You are free to make your infrastructure immutable; you then become
responsible for it of course. And the money you're not willing to spend
porting to Python 3 today will be money you spend on costs related to being on
outdated infrastructure, years in the future. That's a tradeoff. Banks do it a
lot I hear. A bunch of companies still use ancient hardware and technologies
nobody would think of starting a business with today. These companies make
billions.
You know what the employees of these companies aren't doing? They're not
bitching on HN that the tech they're using is no longer supported.
~~~
coldtea
> _Oh my god stop. You 're all over this thread. What bit you?_
As someone who has 6 comments in this thread yourself, I don't think you are
in a position to complain.
I also find "what bit you" and "please stop" rude. You don't get to dictate
what others' opinions should be.
> _This is the price you pay for staying on an old version. You do not get to
> stick to an old version AND demand that others do too._
7+ years on and the "old" version has more users than the new one. That's a
fact supported by numbers. So maybe you want to recheck with reality whether
the transition was a success instead of arguing with me?
Not all transitions go well: the Perl 6 transition killed Perl, while the PHP 4
to 5 transition (another major one) went quite smoothly.
~~~
scrollaway
> _As someone who has 6 comments in this thread yourself, I don 't think you
> are in position to complaint._
This isn't a numbers contest. Unlike yours, none of my comments are shitting
on the efforts of volunteers that are doing their best to keep people like you
happy and making money using a project you're not paying for.
> _So maybe you want to recheck with reality whether the transition was a
> success instead of arguing with me?_
You completely missed the point.
~~~
Chris2048
> making money using a project you're not paying for
says who? It this the standard FOSS strives for?
~~~
scrollaway
Where do you get the idea that it's OK to "demand" things from a project when
you're not paying for it?
FOSS gives you freedom to do these things on your own. Money gets you other
people doing it for you.
~~~
Chris2048
Why is "demand" in scare-quotes? I never said that, and it's a loaded term.
The issue here is suggesting you shouldn't freely _criticise_ flaws in FOSS
software. This is harmful, and goes directly to affecting information people
have available to them in choosing whether or not to use a piece of software
in the first place.
Do you actually know what money/time OP might be spending, losing, or making
on Django?
> FOSS gives you freedom to do these things on your own. Money gets you other
> people doing it for you.
What a cop out. A lack of being paid (money at least) doesn't imply no
obligations, nor freedom from criticism.
Do you speak for every Django contributor?
~~~
scrollaway
> _Why is "demand" in scare-quotes? I never said that, and it's a loaded
> term._
Because I was referring to coldtea's demands.
> _What a cop out. A lack of being paid (money at least) doesn 't imply no
> obligations, nor freedom from criticism._
Excellent, then you should be fine with me criticizing the attitude that's
been displayed here.
> _The issue here is suggesting you shouldn 't freely criticise flaws in FOSS
> software. This is harmful, and goes directly to affecting information people
> have available to them in choosing whether or not to use a piece of software
> in the first place._
Why is this the conclusion you draw from my posts? I said it before, the
Python 3 transition sucked. It's something we kind of all agree on. There is
plenty of criticism to be made.
However, I really want to recontextualize this: Django is an open source
project, maintained by a non-profit. Python is an open source project,
maintained by a non-profit. The projects in question, with "tens of millions
of lines of Python 2 code" (only a tiny amount of which would need to be
ported, but I digress...), are most often for-profit projects. Yeah, it's a
bit rich.
This is the same as the IE6 situation: Want support for it? Pay extra for it!
You should not expect free support for technology for which the EOL was
announced years in advance just because you're using a lot of it. And you will
have _no issue_ finding paid support. Heck tell you what, if you do, shoot me
an email, I do contract work sometimes.
You know why FOSS is great? It's great because the PSF/DSF do not get to
revoke your license to use the software they're no longer supporting. You get
to use it forever. This is your freedom and it's a _good one_. Make use of it!
~~~
Chris2048
Which comments are you interpreting as demands?
>> A lack of being paid (money at least) doesn't imply no obligations, nor
freedom from criticism.
> Excellent, then you should be fine with me criticizing the attitude that's
> been displayed here.
Great. Do you actually have a response to this point in context, then?
> This is the same as the IE6 situation: Want support for it? Pay extra for
> it!
You did not argue this. You said "shit on", which doesn't translate to
"demanding support". you are deflecting from the one thing I _actually_
criticised.
Your strawman is "support is being demanded" \- this isn't the case. Any
further arguments on that topic are just beating the strawman.
Furthermore, _officially_ changing the direction of Django may also affect
contributions - changes to the roadmap or architectural design, for example.
~~~
scrollaway
You're appropriating criticism that was not directed to you, but to coldtea.
Here and elsewhere.
Edit: Yes, _appropriating_. You're taking criticism I specifically directed at
coldtea, applying them to your comments and then complaining it doesn't fit. I
am done talking to you.
Edit 2: This was not meant to sound as aggressive as it did, sorry.
~~~
dang
You've been breaking HN's civility rule with bits like "I am done talking to
you", "Oh my god stop", "bitching on HN", etc. That's not cool, regardless of
how wrong other commenters may be. Please take greater care to be respectful
in comments here.
[https://news.ycombinator.com/newsguidelines.html](https://news.ycombinator.com/newsguidelines.html)
[https://news.ycombinator.com/newswelcome.html](https://news.ycombinator.com/newswelcome.html)
------
belvoran
VERY GOOD NEWS!!!
Yeah, I know, shouting is not the best thing, but this really is good news.
------
jonatron
Django was designed for making content based sites and CMS's quickly. It
wasn't designed for webapps and REST APIs, and it can be used in those cases,
but it's not great. I'd look at other options.
| {
"pile_set_name": "HackerNews"
} |
Swedish Researchers Connect 160k Bees to the Internet - pseudolus
https://www.bloomberg.com/news/articles/2019-10-22/digital-beehive-becomes-latest-attempt-to-save-pollinators
======
sarcasmatwork
Not pay-walled:
[https://www.msn.com/en-us/news/technology/swedish-
researcher...](https://www.msn.com/en-us/news/technology/swedish-researchers-
connect-160000-bees-to-the-internet/ar-AAJ9FHA)
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Is email the new fax? - Fsp2WFuH
======
spyckie2
No. Fax is a specific communication medium where you can send physical
documents to others, and it was replaced by email because email was a general
communication medium that was more accessible (everyone had it) and didn't
have as high a barrier to entry, especially for non-business users (no fax
machine required).
The current trend is actually the reverse: general communication is being
replaced by specific communication that is better suited for the type of
communication it is. Social, pictures, group chat, business collaboration,
negotiation/agreement, etc - email CAN function as the medium but is not
ideal, whereas the apps that replace email streamline the communication
experience.
~~~
notheguyouthink
I've been wishing we'd come out with a few new email expectations, like email
in JSON format or something. It sounds insane, I know, but right now all email
is human-intended, right? Yet I _love_ the distributed medium of email... I
want more of it.
I realized I want more when, a while back, I saw someone working on a social
network over email. I realized that it's a brilliantly low-tech solution to a
problem people are trying to solve in fairly complex and inventive ways _(like
Scuttlebutt)_. Social network posts as email would be an interesting approach
to the problem.
Back to my original statement, data emails would allow users to carry a pile
of application data (like their social net feed) with them. Since email is
something federated, backup-able, migrate-able, etc - users would own their
own data. I like that idea.
Fundamentally I love the idea of email. I do of course have reservations with
the idea of introducing non-human oriented emails, but I hope you _(reader)_
can look past that and onto the intent - offering slightly more features on a
robust, tried-and-true platform.
I say "slightly more" with care, lest we make email a steaming pile of
innovation like we always do.
~~~
spyckie2
This is why I was so in love with Google Wave, and so disappointed in its
result.
------
newscracker
In the sense that it's not used as widely, except in certain circumstances? No
way! Email is here to stay for a long, long time (maybe even beyond the
lifespan of anyone reading this in 2018). Even if person-to-person
communication has shifted to different apps and platforms (like Facebook or
Telegram or WhatsApp or Snapchat or Instagram or Google+ or what have you),
communications within companies and communications from companies to
customers, potential customers, suppliers and others, are cases of large scale
regular use of email — as a communication platform and an archival platform.
------
randomerr
Yes. We have our Exchange system set up so that if we have 'SECURE' in the
title, it will check whether we have a TLS agreement with the receiving server.
If we do, we'll send the email through with encryption. If not, the receiver is
required to sign up for our secure web service and then use a one-time code to
download the file.
We do limit the file size of what we'll send with encryption. But does someone
really need a 3 gig PDF?
We really only keep fax for government requirements and the few outliers that
will not cut the phone cord. We were getting so many junk faxes we had to
implement a white-list.
------
na85
Yes, it is the new fax, in a good way.
I wish everyone would stop trying to disrupt it.
------
gsich
Every "competitor" has the same problem: No federation and walled garden.
Email is compatible with every programming language, device, operating system
... you name it. It's the most compatible system there is. Try sending a
message from Slack to Whatsapp for comparison.
------
jnordwick
As in dated technology that is still chugging along? Yes.
As in terms of usage? No.
I still have to fax things to the government, and I have no idea why. For
example, I had to fax an old tax return to California Franchise Tax Board, but
they would only accept mail or fax.
And I never scan something and email it. Even banks went straight past email
check deposits to smart phone photo check deposit. I put pictures on the web.
I text things way more. I use Google Docs, Drive, DropBox, or the cloud for
about everything else.
~~~
jlgaddis
> _... but they would only accept mail or fax._
It's because the USPS (and FedEx, UPS, et al.) and fax are considered "secure"
methods of transmission for confidential or private information (including
PHI).
E-mail is not, because it travels over the public Internet.
~~~
Rjevski
But technically it’s just as insecure, if not more - email has opportunistic
encryption, fax doesn’t.
------
htsideup
It's the new (old) fax since the 90s.
------
partycoder
Whoever argues that Slack is the new thing, I give you this
> sudo ps_mem | grep slack
564.3 MiB + 72.4 MiB = 636.8 MiB slack (5)
I run it because I have to, but I really hate how bloated it is, and how the
background color (white) cannot be changed.
Usually end up running "xcalib -i -a" (invert colors) before switching to
slack.
| {
"pile_set_name": "HackerNews"
} |
Portable contact lists and the case against XFN - bootload
http://factoryjoe.com/blog/2008/03/11/portable-contact-lists-and-the-case-against-xfn/
======
iamwil
Boils down to: of the 18 XFN relationships, only rel-contact and rel-me are
being used. For those of you that have never heard of XFN,
<http://gmpg.org/xfn/intro>, <http://gmpg.org/xfn/11>
I'm not sure I entirely agree. On one hand, I agree with the author that at
this stage, XFN needs to stick to simple, and just having rel-contact and rel-me
will work for quite some time, as it gets adopted. However, on the other, I've
always found social networks' descriptions of my relationships to people
wanting, because it isn't binary. I don't want all my friends on facebook to
see everything I do. Even with limited profiles, I resort to rejecting
acquaintances.
Rather than specific roles in the XFN relationships, like rel-sweetheart, or
rel-colleague, it might be easier to specify a degree of intimacy rather than
the actual role, because unless you're building a genealogy tree through XFN, I
would venture to guess that when an application imports a contact list, it
mainly cares who they are, and how intimate you are with them so it can set
privacy measures correctly.
| {
"pile_set_name": "HackerNews"
} |
New Android phone crushes iPhone X in speed test - incan1275
http://bgr.com/2017/11/21/iphone-x-review-speed-vs-oneplus-5t/
======
nv-vn
Let's see the actual benchmarks though. I'm more interested in seeing if it
beats the iPhone on benchmarks, since these app opening races have been done a
million times, and Android devices have frequently won.
~~~
ricardobeat
At around 8:30 in the video you can see the iPhone has more than double
single-core performance, and almost double multi-core.
| {
"pile_set_name": "HackerNews"
} |
PON-Z, the world's first honest ponzi scheme. Using bitcoin. - KennethMyers
http://www.pon-z.com/
======
burke
This raises an interesting question. I wonder if explicitly labeling itself as
a Ponzi scheme precludes it from being technically classified as a Ponzi
scheme.
Is it still technically fraud if you walk up to someone on the street and say:
"Hey, I want to defraud you, so I'm pretending to be your bank. I need
interest on that thing you did. Pay up please?"
------
guiomie
Is this legal?
~~~
KennethMyers
Since there's no fraud and no money, I hope so.
| {
"pile_set_name": "HackerNews"
} |
From a Farm in Egypt to Building a YC Computer Vision Startup for Fitness - dang
http://techcrunch.com/2015/03/23/smartspot/
======
x0x0
[http://www.smartspot.io](http://www.smartspot.io) because googling completely
failed and I had to find it via product hunt
This looks like a fascinating product; if the founder's around, does it track
back angles? Is it useful for dls/squats/cleans? Can it track bar paths? Can
it track body angles over time, or does it (as of now) just track final
angles?
~~~
augustinspring
Cofounder here. We do track back angle. Squat was our #1 priority, since so
many people screw up.
It does track body angles over time! Check out the end of our video for a few
different exercises:
[https://www.youtube.com/watch?v=L_qaqoGXHDU](https://www.youtube.com/watch?v=L_qaqoGXHDU)
~~~
kolencherry
Out of curiosity, does it track and differentiate between both high-bar and
low-bar squats? This is a pretty cool service.
~~~
augustinspring
Great question! It does and it doesn't - we can guide you with either type,
but that's something that you (or your personal trainer) might better be able
to dial in offline.
On our site, you'll be able to see your full 3D skeleton recording and do more
investigation into your form - slow mo, frame by frame, and all that detail-
oriented stuff.
------
7Figures2Commas
This looks like a cool technology that will most likely be stuck in no man's
land in its current incarnation.
Selling this at scale to gyms will be very difficult. Personal training is one
the largest profit centers for gyms and at many gyms, personal training is
_the_ most profitable profit center. Convincing members to sign up can be
difficult though (at an average gym typically less than 10% of gym members use
personal training services at any given time) so gym owners are going to be
skeptical about anything that might deter members from trying personal
training.
If this technology is as good as the founders say it is, it will be viewed by
most gyms as a problem, not a solution. A gym isn't going to pay $2,500/unit
for the privilege of potentially cannibalizing its personal training revenue.
On the flip side, if the technology isn't as good as the founders say it is, a
gym isn't going to pay $2,500/unit to add a piece of equipment that doesn't
offer any benefit.
Another poster mentioned selling to individuals for home gyms. Notwithstanding
the fact that this is a non-starter at anywhere near the $2,500 price point,
the market for personal fitness equipment is unfathomably competitive.
Customer acquisition costs are insane and you could easily spend millions of
dollars just to launch a new product.
If there's any value here, it's in the computer vision technology. The
question is whether the company and whoever invests will recognize that before
it's too late.
------
padobson
_Personal training, like hair cuts, is a non-tradable service._
But this is part of the value of personal training. You're not going to miss
your workout if you'll lose a hundred bucks by not going.
Also, there's value in motivation as well. Having someone there to push you to
get those extra one or two reps is another big part of the value add of a
personal trainer.
This is really cool, and I'm looking forward to seeing how the product evolves
going forward. Props to Mr. Eldeeb for his amazing journey and the beginnings
of an interesting startup. Sounds like the pressures of running a company
might be a cakewalk compared to the rest of his life!
~~~
augustinspring
Cofounder Josh Augustin here. Motivation is a huge part of what a personal
trainer does, and that's why we're going to help trainers reach out to their
clients with email and SMS.
For people who have never been able to afford a personal trainer, like Moawia
and I, this is an awesome alternative, and we believe it can be just as
motivating.
~~~
andrewfong
Josh, given that the Smartspot is using a Kinect, are there plans for this to
become an Xbox app? The motivational "hump" for working out in your living
room is much lower than going to the gym (and cheaper too).
~~~
augustinspring
We feel that our tech is most useful when it's helping people do high-weight
exercises better, without injuring themselves, so we're targeting gyms first,
but home gyms are definitely a possibility for the future.
------
xasos
Love the backstory. Fitness startups are pretty interesting, and I like that
Smartspot is using computer vision (because I haven't seen anyone else doing
it). I would love to see a performance comparison to Athos[1], which uses
embedded sensors in workout gear to detect muscle balance and engagement.
[1] [http://liveathos.com](http://liveathos.com)
------
paperwork
This is awesome. About a year ago I bought a Kinect with the intent of doing
something very similar. This story makes me want to get coding again.
I was excited by Amazon phone's stereo camera as well as Google's project
tango. Depth sensing technology is very exciting indeed!
------
tiffanyricks
Great Story!! I love the founder’s journey! Nothing was handed to you. You
worked hard for everything you have! I can identify with that. This company
would have kicked butt at SXSW because we did not see many fitness companies
there this year.
------
skizm
If you search "Smart Spot" in Google I get this article at #3 and not the
actual website at all. "Smartspot" returns the actual website as the last item
on the list. You should up your SEO game!
------
616c
As a personal rant, you can ignore the rest if you can tell this is off-topic
from here. I spent a lot of time in Egypt, I have a former Egyptian spouse,
and I know many Egyptians. This country has so much potential, and it is all
going to waste because variations of this story - kids struggling to even find
time to invest in the sub-standard education they are provided - are common.
Now, like a few other expats who studied there with me in 2006-2007, I was
really upset post-revolution. As expected, people wanted to bootstrap a nation
far behind. There were, like in other public sector jobs, numerous protests by
education sector staff and instructors about how it was bad, and not
improving. Keep in mind in the average university students are paying a few
hundred gineh (Egyptian pounds) per year. As a result, even higher education
is swampy messy of inadequate staff and resources like our worst inner city
middle schools. And this is the top of education chain in Egypt. Of course, as
expected even in the most developed nations (I was scarred and disturbed by
local politics, jaded from my time over there as I saw more parallels over
time), this was put on the back burner when it is crucial to improving the
general state of affairs. Not that many will address that, because it is not
politically expedient.
No one really focuses on the education problem in Egypt. I know some people
running companies there - web development firms, if you can even believe it,
pre- and post-revolution when internet cuts were common. Egypt invested in, for
its time, top-rate broadband availability in the region. It is one of the
first Arab countries to have Internet backbone as part of the higher education
networks in the 90s prior to the Zaki information economy push around the
millennium. When I studied at the time, with everything else faltering, the
Internet infrastructure, even for average consumer use, was surprisingly not
bad. They even had cool joint degrees. If I had money to spare, I would have
considered one.
[http://www.africabuild.eu/consortium/iti-
mcit](http://www.africabuild.eu/consortium/iti-mcit)
So there is great infrastructure and potential for the Eldeebs I know and
knew, pre-revolution at least. The sad reality is that stories like this are
so common in Egypt, it is crazy. No kid, even the most motivated, cannot be
expected to fight for enterpreneurship when he cannot eat and must work side
jobs so his whole family can survive. Education, even when kids can get
access, is terrible, in spite of great telecom infrastructure that was part of
a perceived information economy bump in the future.
I generally think Ahmed Zuweil, a famous Egyptian technocrat and Nobel
Chemistry Prize winner, is somewhat of an asshole. But he used to run ads during
Ramadan on Egyptian television the last few years underlining that education is THE
priority to make Egypt return even to a fraction of its true greatness.
If anyone thinks Eldeeb's story is moving, please look into orgs like this and
many other Egyptian NGOs invested in education. Lord knows I do.
[http://www.zewailcity.edu.eg](http://www.zewailcity.edu.eg)
~~~
jessaustin
AIUI, Egypt is yet another African country that fell victim to land reform.
The unique wrinkle to the story is that instead of taking the land from
experienced farmers and giving it to the less capable masses, Mubarak took it
from experienced farmers and gave it to corporate cronies. Less socialist,
still disastrous! Egypt could feed itself 8,000 years ago, it could feed
itself 800 years ago, and it could feed itself 80 years ago. Now, not so
much...
| {
"pile_set_name": "HackerNews"
} |
Seventh RISC-V Workshop: Day One - bshanks
http://www.lowrisc.org/blog/2017/11/seventh-risc-v-workshop-day-one/
======
rwmj
For those not following RISC-V closely, 2018 promises to be an interesting
year:
* 64 bit hardware will be available from SiFive. It'll be low-ish end, 4 application cores, but it'll run real Linux distros. SiFive are already shipping early hardware to various partners.
* Linux 4.15 will ship with RISC-V support. It's available in RC releases now. (-rc1 was 3 days ago I think)
* glibc will ship RISC-V support. That'll happen in February 2018. I think it's not appreciated how important this is. It means there will be a stable ABI to develop against, and we won't need to re-bootstrap Linux distros again.
* GCC and binutils have been upstream for a while.
* A lot of other projects have been holding off integrating RISC-V-related support and patches until RISC-V "gets real", ie. it's really available in the kernel, there's hardware. These projects are unblocked.
* Fedora and Debian bootstrapping will kick off (again). [Disclaimer: I'm the Fedora/RISC-V maintainer, but I'm also getting everything upstream and coordinating with Debian]
* There'll be finalized specs for virtualization, no hardware though.
* There should be at least a solid draft of a spec for industrial/server hardware. Of course no server-class hardware available for a while.
~~~
gorbypark
Has SiFive released any dates for their dev board that can run Linux? I
haven't been following it too closely but would love to get one.
~~~
baobrien
My reading of the blog post is that it'll be released in Q1 of 2018. Until
then, they're giving out a few FPGA stand-in boards. The dev board itself will
also use an FPGA to implement a few SoC peripherals, like USB and HDMI.
~~~
rwmj
Actually they are sampling the real chips out to some developers now. However
you are correct in saying that an FPGA is used to implement the Southbridge,
which is mainly for practicality of getting a board out quickly, not because
of awesome self-modifying hardware(!)
------
jabl
And day two: [http://www.lowrisc.org/blog/2017/11/seventh-risc-v-
workshop-...](http://www.lowrisc.org/blog/2017/11/seventh-risc-v-workshop-day-
two/)
~~~
AllSeeingEye
"Boomv2 achieves 3.92 CoreMark/MHz (on the taped out BOOM), vs 3.71 for the
Cortex-A9." \- it's a bit of a letdown. I was hoping it'd be closer to x64
performance than to Cortex-A, but it's probably not achievable on a RISC-V
budget.
~~~
_chris_
Sorry to disappoint, AllSeeingEyes. What I taped out is a fairly modest
instantiation of BOOM. I was trying to reduce risk and we had a very small
area to play with, so I settled for ~4 CM/MHz. One potential win here would
have been to use my TAGE-based predictor, which is an easy >20% IPC
improvement on Coremark. Of course, a lot more reworking would be needed to
achieve x86-64 clock frequencies.
~~~
microcolonel
> which is an easy >20% IPC improvement on Coremark. Of course, a lot more
> reworking would be needed to achieve x86-64 clock frequencies.
Maybe these should be expressed as Instructions Per Second (at peak and at the
point of diminishing returns?) or something like that, rather than two
independent numbers. Higher clock frequency actually seems like a _bad_ thing,
all else being equal. It seems to me that throughput ought to trend toward
infinity, clock frequency toward zero. ;- )
~~~
_chris_
Of course it's important to always keep the "Iron Law" in mind, but it's far
easier to compare ideas and talk about things in terms of IPC. For example, if
we switch out one branch predictor for another, we're talking about an
algorithmic change (implemented in hw) that will have an effect on IPC, and no
effect on clock period (assuming we didn't screw something up).
This is particularly useful when talking about a processor _design_ , and not
a specific processor in particular. As you said, there's a lot of good about
slower clock frequencies, so you'll see the same ARM design being deployed at
a variety of frequencies. Far easier to talk separately about a design's IPC
from its achievable clock frequency (although both are important!).
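As a back-of-the-envelope illustration of keeping the two separate (the clock
numbers below are invented purely for illustration; only the CoreMark/MHz figures
come from the talk):

    # total throughput = (work per cycle-equivalent) * (cycles per second)
    boom_cm_per_mhz = 3.92
    a9_cm_per_mhz = 3.71

    for mhz in (800, 1500):                   # hypothetical clock targets
        print(mhz, "MHz:",
              round(boom_cm_per_mhz * mhz),   # BOOM CoreMark at that clock
              "vs", round(a9_cm_per_mhz * mhz))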
------
x0ul
I've been following developments in RISC-V since I first heard about it a few
months ago and I want to get involved! What can I do? I do embedded systems
development as my day job and am really eager to dig into architecture and
hardware design.
Can somebody associated with the project reach out to me?
~~~
nickik
The lowrisc project [1] tries to be sort of like a hardware Linux. They are
working on a Linux-capable SoC and they have a number of innovative ideas you
might be interested in.
Maybe listen to this:
[https://www.youtube.com/watch?v=it3vVtnCYiI](https://www.youtube.com/watch?v=it3vVtnCYiI)
[1] [http://www.lowrisc.org/](http://www.lowrisc.org/)
------
greenhouse_gas
Practically, what's stopping someone from making a privacy-oriented ARM
implementation (meaning, while I can't "make my own" ARM board due to patents,
why can't I make a board which is "source-available" and doesn't contain a ME
or PSP-like engine)?
Is there something in the ARM license?
~~~
subway
A ton of ARM boards like this already exist. You can bring up most Allwinner
chips with 100% open source code. The boot mask ROM for many has been dumped
and even disassembled.
The trouble is they all suck in some slightly different way. Maybe only 1gb
ram, maybe a bottleneck bus in front of your net or storage io. Graphics are
the big pain point right now. Etnaviv/vivante is your only choice for free
accelerated graphics, and you'll only find it in a few chips. Mali and PowerVR
are all around, but have absurdly difficult to work with closed source
drivers.
The nicest oss-all-the-way-down arm socs are i.MX6, which is expensive and
frankly old/slow.
To be clear: riscv doesn't solve any of these concerns. That doesn't mean it
isn't an amazing project.
~~~
greenhouse_gas
So how will RISC-V help? Do ARM patents cost that much to license?
~~~
subway
RISC-V doesn't directly help with _any_ of these concerns.
But it is an alternative to ARM, and has a governance model that parties
interested in open standards might be more willing to join.
Right now, ARM CPUs on the low end have little standardization at the SOC
level. They have a few cores from ARM, (what you think of as the main CPU,
maybe a GPU, maybe a low end micro for power management, and a couple more for
realtime tasks). Then you have the non-ARM IP in the form of image processors
for cameras, video decoding, etc. The SoC mfg is responsible for gluing
together all these parts, and every mfg has their own proprietary take on the
process, meaning a different initialization sequence, different firmware
layouts, etc.
The SoC mfg's goal is to ship a product. It isn't to define a standard, and
since there is no standard to follow, 'anything' goes.
Because ARM costs enough that it isn't viable to do a SoC layout for a chip you
aren't going to ship millions of, academic research (where the seeds for
standards are often planted) just doesn't happen.
~~~
greenhouse_gas
>Because ARM costs enough that it isn't viable to do a SoC layout you aren't
going to ship millions of the chip, academic research (where the seeds for
standards are often planted) just doesn't happen.
Is it license or fab issues? I thought that the license is $0.X per chip, so it
shouldn't make a difference (license-wise) if you make Y chips or 100,000 * Y
chips. Fab costs change by overhead, but how would RISC-V help there?
~~~
subway
It's a mix of upfront costs and per-chip royalties, with upfront costs for the
IP being in the millions.
Here's a somewhat accurate article on the topic:
[https://www.anandtech.com/show/7112/the-arm-diaries-
part-1-h...](https://www.anandtech.com/show/7112/the-arm-diaries-part-1-how-
arms-business-model-works/2)
------
makomk
I know at least Allwinner were using OpenRISC for their embedded controller on
recent chips, and it wouldn't surprise me if other companies were doing the
same, so I do wonder to what extent RISC-V is replacing that rather than
commercial cores for this application.
------
sitkack
These notes are wonderfully detailed and compact. Please do this for all my
meetings!
~~~
asb
I'm glad you find them useful. Really I'm indebted to the presenters for
giving such clear and well explained presentations.
------
mycall
I wonder what Microsoft will do with RISC-V. Hopefully they don't think it is
too risk-y.
| {
"pile_set_name": "HackerNews"
} |
Click and Grow (YC S15) Lets You Grow an Indoor Garden with Zero Effort - katm
http://blog.ycombinator.com/click-and-grow-yc-s15-lets-you-grow-an-indoor-garden-with-zero-effort
======
mr_cat
Read about your product in WIRED and became very interested in the agronomical
potential it might have when developed further. Do you have any plans to
contact some local communities or areas in less-developed countries, which
can't produce a lot of nutritious food locally, but would love to do so? The
Smart Farm looks perfect for that. Also, the mini-version seems to be a
perfect solution for people in very urbanised areas, who would like to eat
more healthily, but don't have the time to go to a local marketplace or don't
get a lot of sunlight coming through their windows.
~~~
click-grow
Thanks for the support! We are definitely planning to use our technology to
improve the food production on a larger scale as well. We are currently in
discussions to start testing it with a company involved in large scale food
production. Our technology actually helps save quite a substantial amount of
water in comparison to traditional agricultural practices (up to 95%)! So we
see there is a lot of potential to make the whole process a lot more
efficient, make the ecologial footprints smaller while improving the yields
and health of the plants.
------
click-grow
Hey, we're developing an indoor garden at Y Combinator. Feel free to give
feedback and we'd love to answer any questions.
------
paxmaster
A great initiative! What do you use as the light source? LED or High-Intensity
discharge?
~~~
click-grow
Neither. Our experiments have shown that T5HO is the best and safest solution
for home use.
------
krand
Very interesting! A green addition to the white goods sector. The rise of green
goods?
~~~
click-grow
Hopefully! Hope we can reintroduce homegrown fresh food to urbanised
environments.
------
techmart26
Looks cool. Any idea when it can be purchased?
~~~
click-grow
Prototype ready in a week, planning to sell a couple to early adopters by the
end of the program as well. Currently planning to officially launch in the
beginning of 2016.
| {
"pile_set_name": "HackerNews"
} |
KLEKTD: Super Simple Social Bookmarking - adk3
http://klektd.com
======
adk3
I've been working on this in my spare time for a while now. I built it to
scratch an itch I was having with keeping track of things I'd come across
online. I wanted something that could: track links in one click, give me
visual references of all the stuff I had collected, archive the page as it was
when I viewed it, and give me ranked full-text search across all my stuff. It's
pretty beta at the moment so I'm very open to constructive feedback.
| {
"pile_set_name": "HackerNews"
} |
Why does unsubscribing from a newsletter take “a few days”? - scop
https://twitter.com/Joe8Bit/status/1156312965265707013
======
JakeStone
I work for a company in a department that sends out on average, 2 million
emails a day. These are emails to people who (1 or more):
\- have signed up and are receiving a verification email.
\- are giving us money
\- are using our site at least once a week during the first month, and even if
they taper off, are at least using it every 180 days.
So, we're not sending email to people who we've never met, as it were. We have
an unsubscribe link in all our emails, and an account email settings page that
has about half the mailings we can send as initially subscribed (we like
money), but which can be turned off. We even have links to our settings page
in the emails and on all the site pages.
I like to think we're being fairly responsible, all in all.
Like I said, we send 2 million emails a day. That goes out on a lot of
machines, and we have lots of automation going on nearly every minute, and
lots of email queues being prepopulated to send out in volumes acceptable to
gmail, outlook, yahoo, etc.
So, you've asked to unsubscribe from the email you received last week. We got
you, fam. I analyzed logs and did some live testing using our VPN at offices
across the world. Over the past 3 years, you've been removed from your chosen
lists within 2 seconds of clicking the button. This obviously doesn't cover
hardware/network/server problems.
So we're cool, right? Nope. You're unsubscribed all right. However, we already
placed you in the queue for our most recent mailing about 4 hours ago, and
you're going to get the last one from us anywhere within 2 seconds from now to
maybe late tomorrow.
You really won't get anymore from us after that, though.
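If it helps, here's the shape of the race in toy form (obviously nothing like
our real pipeline, just the idea):

    queue = [{"to": "you@example.com"}]   # snapshotted ~4 hours before you clicked
    unsubscribed = set()

    def unsubscribe(addr):                # the click itself lands in ~2 seconds
        unsubscribed.add(addr)

    unsubscribe("you@example.com")

    for msg in queue:                     # the batch sender drains the snapshot
        print("send", msg["to"])          # as-is, hence exactly one more email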
~~~
JMTQp8lwXL
It doesn't seem that complicated to remove a single entry from the e-mail
distribution list of millions that's going out in one to two days. Harder
problems have been solved. This doesn't adequately explain it.
The reality: it's financially a waste of time for a company to create an A+
off-boarding experience. Getting 1-2 extra emails isn't the end of the world:
it's good enough for the company, you get off the list, they don't have to
think about that unprofitable problem any further.
Most companies don't put much effort into off-boarding. Some go out of their
way to make it even less palatable. But for removing yourself for a mailing
list, I don't consider this to be the worse. Things surrounding payments are
much more important: if I cancel service, you better not continue charging me.
Most people don't consider the full lifecycle of anything, and I wish they
did. We've created a society, for example, that consumes disposable plastics,
as an ordinary daily activity, and that ends up in our waterways as
microplastics. It's easy to make things. It's an order of magnitude more
difficult to manage the full lifecycle of the thing.
~~~
pilsetnieks
The technology behind it all could actually function perfectly but in reality
an overworked marketing intern grabs a csv file from their desktop, opens it
in Excel, compares and manually removes the unsubscribes (from another csv
file that's delivered to them daily by email because that's easier) because
"that's how it's always been done around here", then saves it as "June email
push (1)(1) copy 2 (1)(1)(1)(1).xls" on their desktop, deletes all their
subscriber lists in Mailchimp, as per the usual procedure, and imports "June
email push (1)(1) copy 2 (1)(1)(1).xls" from their desktop, and sends out
whatever.
~~~
Fr0styMatt88
So much this. These kinds of workflows are far more common than you would
think they’d be, looking in from the outside.
This is true even if your company has in-house developers. At the small
company I work for, we have barely enough developer bandwidth to cover most
things related to our core products as it is. Improving internal support
processes just isn’t even on the radar for us as a dev team. So those things
either get contracted out (where we as a dev team may have limited or no
evaluation input with regards to the quality of the solution), purchased
(again with little or no input from us as devs) or are built ad-hoc by the
less technical side of the business.
Companies have a lot of internal moving parts and resource limitations that
together lead to these kinds of things.
------
lwf
Let's imagine you use 3 different marketing providers, plus an in-house one,
because you're a big company and your teams all want to use whichever tool
works best for them.
A user unsubscribes. This creates an entry in one specific system. In order
for that to be reflected across all systems, it needs to be copied somewhere
central, then synced back out.
Ideally you'd have a webhook hit a Lambda function and call it a day.
But, again, largish company with a gotta-move-fast employee growth mindset,
engineering doesn't want to work on it (or, if not a tech company, you don't
have in-house engineering).
So you hire some consultants who convince you that your email marketing is a
"big data" problem, and they contract out the work on some Enterprise
Infrastructure Platform as a Service product (an expensive, slow Lambda). The
resulting system is slow, and often breaks, and you run it every few days in
one bulk load/unload.
Poor engineering is why it takes a few days.
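The "webhook hits a Lambda" version really is about this small — a sketch with
made-up stand-ins for each provider's SDK call, since every ESP has its own API:

    import json

    # hypothetical stand-ins for the real ESP suppression calls
    def mailchimp_suppress(email): print("mailchimp:", email)
    def sendgrid_suppress(email):  print("sendgrid:", email)
    def inhouse_suppress(email):   print("in-house:", email)

    def lambda_handler(event, context):            # standard AWS Lambda signature
        email = json.loads(event["body"])["email"]
        for suppress in (mailchimp_suppress, sendgrid_suppress, inhouse_suppress):
            suppress(email)                        # fan the opt-out to every system
        return {"statusCode": 200}

The bulk load/unload approach does the same fan-out, just days later.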
~~~
dheera
If they don't want to make it easy to unsubscribe me immediately I just report
it as spam. Hopefully that goes into Gmail's whatever "big data" of spammers
and starts getting them classified as spam across the entire network.
~~~
ryandrake
Why not simply always report as spam? I have a zero strike policy with
spammers. I’m not going to even try your “unsubscribe” link. You’re going to
get marked as spam if you’re spamming me, and I’ll have the satisfaction of
knowing you’re 0.000001% more along the way towards not being able to have
your emails received.
~~~
marcosdumay
If I subscribed, I'll try to unsubscribe. That's just fair.
If the newsletter appeared from nowhere, then yes, it's spam.
~~~
ryandrake
Yea, and it’s insulting to even call it “unsubscribe” since I did not
subscribe in the first place! The word itself subtly tries to shift blame onto
the victim, as if it’s their fault they are getting spammed.
~~~
dvtrn
Like those Robocalls that leave you a voicemail featuring someone clearly
reading from a script, intimating that they're "returning your call" about
something you know for a fact you never called about?
Go away Janice, I don't need an extended vehicle warranty, and no I didn't
contact you for information.
------
rkho
One of the problems with unsubscribing is that I've seen a LOT of marketers
re-using old lists and importing me back into a new list from some obscure
snapshot, often with names like "new-list-feb-2019". There's no guarantee
that, even when I unsubscribe from a given list, the company hasn't already
exported my email address to some CSV file for future marketing efforts.
~~~
Macha
Or the linkedin approach of "oh, you opted out of featured posts and featured
comments on featured posts but we've just invented a new category of "your
three hop connections commented on a featured post" and you're included
------
chris_st
This is an awesome story.
Oh, and in case you're wondering how they did (similar) things in the Good Old
Days(TM), let me tell you a story from the late 70's/early 80's.
I subscribed to "Cycle" magazine in my youth, and due to mistakes on their
part, for a couple of years I got two copies, and when I finally decided to
unsubscribe, I got only one copy a month (but for another year).
My mother had written to them to unsubscribe me (it was a recurring birthday
present).
My first year of college I wound up meeting a guy in the magazine subscription
business, and told him my story, and he told me how it worked.
Turns out that they got a LOT of mail at the (US) publisher. So they had a
machine with a gripper that grabbed each letter, held it while a grinder
ground off (!) three edges of the letter, and then put it on a conveyer belt.
One person was tasked with folding the top page of the envelope back so the
letter was revealed, and a few people (IIRC) took them and put them in boxes,
taping the letter back together (!) when part had been ground off with the
edge of the envelope.
These boxes of letters were then sent to Ireland (!) to be processed, where
people entered what should be done on some kind of mainframe application,
which then cut 9-track tapes that were sent back to the US for processing.
Told this to my Mom, with the delightful result of seeing her collapse in
tears of laughter.
------
ryanworl
The actual reason is that CAN-SPAM dictates that an opt-out must be honored
within 10 days. [1]
[1] [https://www.ftc.gov/tips-advice/business-
center/guidance/can...](https://www.ftc.gov/tips-advice/business-
center/guidance/can-spam-act-compliance-guide-business)
~~~
stronglikedan
CAN-SPAM is US only and doesn't allow for a "REALLY REALLY want to
unsubscribe?" confirmation email.
------
verbatim
I've always assumed that the answer is that due to the way email works, mail
can, in certain (rare) situations, end up stuck in a queue somewhere between
mail servers and not delivered until a couple days later.
Saying that unsubscribing takes a few days means that in the off-chance that
this happens, the sender has some coverage against annoyed users who have one
of these mails delivered after unsubscribing.
But this is just my guess.
~~~
lbatx
I'm sure that's some of it, but it's also a result of lists being pulled ahead
of time. Imagine you have 100,000 subs and 10,000 of them are going to get
promotion A and then another 15,000 (with some overlap) promotion B. Often,
the lists are pulled before the content is ready. Sometimes getting the final
approval on marketing emails takes a bit, and so the person who unsubbed when
they got email A is already on the list for email B.
~~~
lancesells
As someone who works in email I've never used a platform that's not using
real-time lists or segments. A business could certainly do it this way but it
would be a lot more work and a lot less effective.
~~~
lbatx
I'm not saying it's best practice. I'm just describing some things I've seen.
------
ufmace
Nice story of enterprise life. But personally, I still apply strict standards
- if your unsubscribe link doesn't unsubscribe immediately, then you are spam,
and will be marked and treated as such.
~~~
WaylonKenning
Pretty tough when it's your bank. I get marketing phone calls from Bell all
the time trying to sell me on more TV channels. Joke's on them - I don't even
own a TV! I tell them every time they call, they say thank you, and I get a
call from them again every four weeks or so.
~~~
president
For most big banks/institutions, important account emails are usually sent
from a different domain than marketing campaigns. YMMV though.
~~~
NikkiA
I don't think I've ever had a single marketing email from my bank (cahoot),
tbh.
------
heyyyouu
It's because of permissions and different databases, plus, most importantly,
the drop lists are usually keyed up 48 hours in advance (you have to do it in
advance because of all the checks you have to do, etc.). They can take you out
of the main list but you're probably in some drop lists that have already been
sent to the provider -- that's why they say a few days/72 hours.
------
davchana
At least in India some companies sell/leak their user email list to others;
those others also cultivate these details from domain names, company
registrations, etc., and then send fake or referral-link emails in genuine
companies' names (with or without the genuine company's consent).
I got bombarded with 35+ emails every day a few years back, and documented it at
[https://gitlab.com/davchana/gmail-indian-spam-
domains/blob/m...](https://gitlab.com/davchana/gmail-indian-spam-
domains/blob/master/readme.md)
Clicking the unsubscribe link marks your email as live/hot and gets them a
higher price every time it is clicked by you; it is then sold by them as fresh
and hot.
~~~
lolc
I don't click on "unsubscribe". My standard procedure is to look up the
sending IP address in WHOIS and send a note to the operator's abuse address.
The good ones take reports about unsolicited mails seriously, and the bad ones
end up on blacklists.
Based on the responses I (rarely) get, I've helped boot a few spammers from
their servers by providing evidence to the operator.
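For anyone who wants to automate the lookup, a rough sketch (it assumes a system
`whois` client is on PATH; the IP is from the documentation range, purely
illustrative):

    import re
    import subprocess

    def abuse_addresses(ip):
        out = subprocess.run(["whois", ip], capture_output=True, text=True).stdout
        # pull anything that looks like an abuse@ mailbox out of the record
        return sorted(set(re.findall(r"abuse[\w.+-]*@[\w.-]+", out, re.I)))

    print(abuse_addresses("203.0.113.7"))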
------
snoldak924
At my company, we prepare one-off marketing and legal email blasts in advance,
and need the final recipient list a couple days before sending. This allows
time for processing the list for opt-outs, duplicates, etc.
~~~
ericd
That sounds like O(minutes) processing time with a reasonably written
program/indexed db and a few million subs? And I’m sure you could get it to be
faster than that. Based on all these comments, it frankly sounds like the real
answer is that no one gives enough of a shit to do this right.
------
mooreds
I would say because
* This gives cover in case it takes a bit of time. Better to promise a few days and do it sooner than vice versa.
* Letting people go is, in general, bad for business so hasn't been optimized.
* Multiple systems are involved, increasing complexity.
------
ufo
I think this is the first time that I have preferred to read something as a
long twitter thread instead of as a single blog post.
Seeing the explanation go on and on and on without a clear indication of how
close we are to the end enhanced the kafkaesque atmosphere.
------
parsimo2010
So often I assume a process is totally automated, but a lot of the time I
should be more empathetic, because there is a person in the loop and they are
usually just trying to keep the system they inherited from blowing up. They
have no time to fix it.
This illustrates something that I think a lot of us in the "computer industry"
often misunderstand. We see a mass email system (or anything happening at
scale) and assume the whole thing is automated, because that's how we would do
it.
Too often a system is cobbled together, only barely works, and is only semi-
automated. Even something that is 99% automated but generates thousands of
actions per day ends up creating a high workload for a human.
I've even seen where my email address was clearly hand-typed from a form (not
even copy/paste), because I usually sign up for website accounts using Gmail's
plus feature. I created an account at Website A with the address
"[email protected]" and then received an email a day layer sent to
"[email protected]" which Gmail still delivered because they ignore
everything after the plus sign. The only way that error happens is if someone
typed it by hand. [Plus emails are a great way to find out which companies
sell your info to spammers. Most of the time nobody bothers to run a regex to
fix "plus" addresses into the original address, so the evidence of data
selling ends up right in the email header.] I feel sorry for the person that
is eventually going to have RSI because they hand-type the entire list from
their web form into their email software.
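The normalization they skip really is about one line — for the curious, something
like this (the address is made up, purely for illustration):

    import re

    def strip_plus_tag(addr):
        local, _, domain = addr.partition("@")
        # Gmail delivers local+anything@ to local@, so drop the "+tag" suffix
        return re.sub(r"\+.*", "", local) + "@" + domain

    print(strip_plus_tag("myname+websiteA@gmail.com"))  # -> myname@gmail.com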
------
crtasm
What's with the large number of 'unroll' requests to some bot in that thread?
Can't they just click the first response from it?
------
theshadowknows
In our case we often have people write or call our global support number and
request they be unsubscribed. They don't have that ability, so they create a
ticket. That ticket gets sent along and eventually winds up in an admin's
inbox. That’s why it takes a few days.
------
Doctor_Fegg
> marketing team in Swindon
Presumably this refers to Nationwide Building Society, then.
------
ignu
This was a decade or so ago, but we used to get a list of ~2 million email
addresses that we'd import into a new database and a single mailing would take
anywhere from 2 to 3 days to complete. Sometimes it would take us a few days
to get to it though.
The one-to-two-week lead time always made a lot of sense to me.
(And yeah, you don't need to tell me how horrible everything about that
process is... but it worked and no one's motivated to fix it. I wouldn't be
surprised if that's the same process they're using today)
------
pier25
Creating a Mailchimp (or similar) account and using that for your newsletters
doesn't take much effort and would be so much more efficient than the mess
described.
Why are big and medium companies usually such a mess?
Is it because the bigger the company the less people care?
Maybe it's that the complexity and discipline required is simply too much for
the average human?
Maybe companies do not have or are unwilling to invest the needed resources?
(which ironically creates more waste)
~~~
heyyyouu
Giving over your list to a third-party provider like Mailchimp can be a risk.
One thing most publishers are more protective of than anything else is the
database. Also, Mailchimp just isn't ideal for high-volume mailings/companies
who do this as a revenue generator. It's designed more for the mom-and-pop
operations.
~~~
pier25
Ok, Mailchimp was a bad example (we actually use Sendgrid for our newsletters)
but my point was about using a service that solves this for you instead of
having such a convoluted process.
------
hanoz
The reason it takes a few days is because the next mailshot you're going to be
getting in the next couple of days will be sent out by some third party to
a list of email addresses which was exported to a CSV file and sent unencrypted
to their gmail account yesterday.
------
klauslovgreen
I came across this; if you are using Gmail, it's pretty efficient:
[https://ctrlq.org/code/19959-gmail-
unsubscribe](https://ctrlq.org/code/19959-gmail-unsubscribe)
------
davesmith1983
I suspect the bank they are talking about is most likely Nationwide.
They are based in Swindon. I am quite surprised their in house tech is this
bad because their online bank account is one of the better ones.
------
ptmcc
I used to work at an email service provider that managed email marketing
campaigns for some pretty large companies. It's been quite a number of years
now, but I don't imagine things have changed all that much.
Mostly, it's just CYA language because of the way the various old and slow
systems work, plus the CAN-SPAM act legally allows up to 10 days to process an
unsub.
There are multiple checkpoints that prune lists as they get churned through
the machine, so typically you'll be fully unsubbed within 24 hours (often much
less), but they don't catch all cases at all times.
The abbreviated process for sending out a marketing campaign at a large ESP
typically looks like:
\- Marketing manager makes a request for a list of people that match x/y/z
analytics criteria (e.g., purchased within last x months, typically opens
email, geographic region, etc). Depending on how "sophisticated" the criteria
are and how backed up the analyst department is, this may take a couple of
days to get turned around.
\- The list of addresses is created and then pruned down by known unsubscribes
or other do-not-email constraints in the system at the time the query runs.
\- The resulting list gets sent out for review and approval by the marketing
manager and client (how many people are we going to mail, what is it going to
cost, what sort of metrics do we expect, etc). Since this is a human-in-the-
loop process, it may again take up to several days to turnaround.
\- After approval, the list gets churned through the unsubscribe list again,
dropping any new unsubs. This step _should_ catch new unsubs within that "it
may take up to few days" window mentioned in the title.
\- The final list is then queued up for sending, which depending on the size
and meter rate may go out over the course of several hours. If you've already
been queued up your unsubscribe request is typically going to get missed for
this run.
Now, add on the complexity of syncing up multiple databases between the ESP
and the customer, which is typically a nightly batch job at best. So even
though your unsubscribe hit some web server instantly, it may take a couple of
days for it to fully filter through from the web server to the client's
marketing databases into the ESP's database. It's similar to why banking and
ACH is so terrible: it's just ancient design patterns and slow process and
nobody wants to pay money to modernize it. And if they miss a few unsubs they
are still well within the legal bounds so it's whatever.
tl;dr: A lot of email marketing still runs on chains of batch jobs which can
introduce windows of unsynchronized lists getting sent out.
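To make that batch window concrete, here is a minimal sketch (the types and
names are illustrative assumptions, not any real ESP's API): the recipient
list is snapshotted and pruned at queue time, so an unsubscribe that syncs in
afterwards still receives that campaign.

    interface Campaign { queuedAt: Date; recipients: string[]; }

    // Prune the send list against unsubscribes known at snapshot time.
    // Unsubscribes that arrive after queuedAt are not honoured for this run,
    // which is exactly the "may take a few days" window described above.
    function pruneAtQueueTime(
      allRecipients: string[],
      unsubscribedAt: Map<string, Date>,
      queuedAt: Date
    ): Campaign {
      const recipients = allRecipients.filter((addr) => {
        const when = unsubscribedAt.get(addr);
        return when === undefined || when.getTime() > queuedAt.getTime();
      });
      return { queuedAt, recipients };
    }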
~~~
heyyyouu
It 100 percent is still this way. But generally your queued list can be set up
1-2 days beforehand, so that's why it's often 48-72 hours, because you're
already in that list that's been cleaned at the provider for drop.
------
magoon
The next campaigns’ subscriber lists are compiled in advance whilst being
drafted.
------
pier25
Off topic, but god I hate the new Twitter for desktop.
~~~
tobib
Why they use twitter for what I think would fit way better in a short concise
blog post is beyond me.
~~~
drdrey
You get a lot more eyeballs posting directly on Twitter than posting an
external link. Sharing, liking, commenting are all one tap away.
~~~
pier25
Yes, but it's hard to follow branching conversations in Twitter unlike Reddit
or HN.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: How does this website know I'm in incognito mode? - MarkMc
When I try to view this page [1] in incognito mode it says, "You must exit incognito mode to read the content". How does it know I'm in incognito mode?<p>[1] https://www.technologyreview.com/s/429438/dear-everyone-teaching-programming-youre-doing-it-wrong/
======
phillipseamore
From looking at the source code they are using some mixture of these:
[https://github.com/Maykonn/js-detect-incognito-private-
brows...](https://github.com/Maykonn/js-detect-incognito-private-browsing-
paywall)
[https://gist.github.com/matyasfodor/15e8863ab15baf4791a5fa4c...](https://gist.github.com/matyasfodor/15e8863ab15baf4791a5fa4c748b64af)
And FYI in Chrome it's very easy to just F12, Application tab, Clear storage
in the left menu and "Clear site data" at the bottom on the right to get past
all these "you've read too much" blockers.
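For reference, the core trick in those scripts (for Chrome builds of that era)
is roughly the following; it leans on the legacy filesystem API, which
incognito windows refuse. Treat it as a sketch, not a reliable check in other
browsers or newer Chrome versions.

    // Heuristic incognito check for older Chrome: the temporary filesystem
    // is unavailable in incognito, so the error callback fires.
    function detectIncognito(): Promise<boolean> {
      return new Promise((resolve) => {
        const requestFs = (window as any).webkitRequestFileSystem;
        if (!requestFs) {
          resolve(false); // API absent: this particular trick can't tell
          return;
        }
        requestFs(
          (window as any).TEMPORARY,
          1,
          () => resolve(false), // storage granted: normal window
          () => resolve(true)   // storage refused: likely incognito
        );
      });
    }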
------
samjanis
Just tried using Firefox Quantum 62.2.2 on Debian with different
configurations - NoScript enabled and disabled, and tracking protection
enabled and disabled. Can't seem to replicate what you're getting.
What browser and OS are you using? Do you have any plugins/extensions active?
(Ad blockers, etc. I don't run any, NoScript does a better job.)
~~~
MarkMc
Seems to happen in Chrome for Mac and Android. No extensions or plugins.
~~~
samjanis
Gotcha. Turns out to be Chrome (or Chromium) specific.
In article.js loaded with this page there is a section that looks like:
dataLayer.push({event:"mittr:detectIncognitoMode",action:"detect",category:"incognito-
mode",label:n})
Commenting out the function block surrounding it where it starts with
"function y() {..." stops the class ".incognito-wall-shown" from being applied
to "section.incognito-wall" found in the main HTML page - although this is
just a quick dig and haven't debugged any further (I'm halfway through
something else but this caught my interest ;) )
| {
"pile_set_name": "HackerNews"
} |
Deaf New York residents sue Cuomo for not using a sign language interpreter - colinprince
https://www.cnn.com/2020/04/30/politics/andrew-cuomo-lawsuit-asl-interpreter-trnd/index.html
======
bb2018
I have been seeing all the ASL interpreters during government press
conferences and have been very confused about their usefulness. These feeds
are all on TV news channels with closed captioning. Do deaf people prefer an
ASL interpreter on a small corner of a TV over written text?
The article mentions a deaf man who did not know about the stay-at-home order
until a relative called him. Even if true, are we supposed to assume he had
been watching the news broadcast of the Cuomo presser and only didn't pick up
on it because there was no interpreter?
I would love for someone with more knowledge to chime in and tell me why I'm
misinformed. I'm more curious than anything else.
| {
"pile_set_name": "HackerNews"
} |
Turkish coup – bridges, social media blocked - AdamN
http://www.bbc.com/news/world-europe-36809083
======
chinathrow
Good live coverage also at the guardian. Latest news is that the military has
full control.
[https://www.theguardian.com/world/live/2016/jul/15/turkey-
co...](https://www.theguardian.com/world/live/2016/jul/15/turkey-coup-attempt-
military-gunfire-ankara)
~~~
Analemma_
> Latest news is that the military has full control.
Is there a source for this other than the military itself? Because this is one
of those situations where you can't really trust any reports until everything
shakes out: right now the Turkish state media is, of course, claiming the coup
has failed.
------
AdamN
Looks like social media is going down:
[https://twitter.com/Holbornlolz/status/754044391656914944](https://twitter.com/Holbornlolz/status/754044391656914944)
| {
"pile_set_name": "HackerNews"
} |
Fukushima's ground zero: No place for man or robot - kevindeasis
http://www.reuters.com/article/us-japan-disaster-decommissioning-idUSKCN0WB2X5
======
mkesper
Recent Greenpeace article re Fukushima: The environmental impacts are already
becoming apparent, with studies showing:
\- High radiation concentrations in new leaves, and at least in the case of
cedar, in pollen;
\- apparent increases in growth mutations of fir trees with rising radiation
levels;
\- heritable mutations in pale blue grass butterfly populations and DNA-
damaged worms in highly contaminated areas, as well as apparent reduced
fertility in barn swallows;
\- decreases in the abundance of 57 bird species with higher radiation levels
over a four year study; and
\- high levels of caesium contamination in commercially important freshwater
fish; and radiological contamination of one of the most important ecosystems –
coastal estuaries.
[http://www.greenpeace.org/international/en/press/releases/20...](http://www.greenpeace.org/international/en/press/releases/2016/Fukushima-
nuclear-disaster-will-impact-forests-rivers-and-estuaries-for-hundreds-of-
years-warns-Greenpeace-report-/)
~~~
mapt
Coming out of an ecology program? And observing the Chernobyl data?
None of that is remotely as damaging to the natural environment as continued
human habitation of the area that has been evacuated. Cry not for the fishes
and birds, for they are better off with slight genetic damage than they are
with us.
The threshold at which people become uncomfortable about radiation is several
orders of magnitude lower than the threshold at which it causes population
decline in wild populations, and people cause declines and extirpations in
wild populations all the time.
~~~
morsch
So a rational ecology program should advocate irradiating wide swathes of
land?
Of course the people displaced from the area didn't disappear, they're
affecting wildlife somewhere else. Seems like mostly zero sum in that specific
regard.
~~~
theoh
There's a major question in ecology of how we feel about our "anthropocentric"
civilization. For many environmentally conscious thinkers it seems like a
world without humans would be preferable.
~~~
mapt
A world without humans would be _necessary_ to fulfill their goals of the
environment being unmolested by humans. We cannot live in perfect harmony.
That's not a thing.
At best, we can establish areas that look roughly like they used to look
before humans, but this is no less deliberate design than the styrofoam rocks
at the zoo, it's just on a different scale. Many such areas are, in fact,
better seen as zoos, because of their small size or limited variety. Most of
the productive arable land is predictably already being used for something by
somebody.
Personally, I'm a humanist. The worst things environmental devastation can do
to us are to disrupt some of our services, like agricultural collapses; after
that environmentalism is mostly a romantic or novelty-based aesthetic, albeit
an attractive one. Pit it against human lives and human profit, and groups of
humans will nearly always make the same decision.
EDIT: Worth tacking on, for some perspective:
[https://xkcd.com/1338/](https://xkcd.com/1338/)
~~~
hemptemp
As much as I abhor commenting online, do bear with me. You say that "We cannot
live in perfect harmony. That's not a thing", however what is this in
comparison to? Surely this would be comparative to the rest of the ecosystem.
Animals make changes, they create habitats and dam rivers, however as it is
more rudimentary (sticks and holes in earth) we don't consider it a change to
the environment.
Creating our homes in a more ecological and recyclable way would be in
perfect harmony. The argument is whether we should go further and drastically
change the environment: do we have the right as the dominant species on the
planet to abuse its resources for ourselves (as seen in the attached xkcd
whose figures roughly show the populations of mammals we have artificially
inflated to such proportions for our dietary wants)?
So if we changed our architecture and agriculture then yes, we could live in
perfect sync and harmony, it's just that we won't.
~~~
jahewson
There is no such thing as "harmony" in nature. Species come and go, some
destroy their own habit, others drive their competitors to extinction. We do
need to be careful about how we use natural resources and that we don't change
our environment in a way that is detrimental to us. But understand that change
is the status quo for nature, that's especially easy to miss when it happens
on timescales longer than a human life, but happen it does.
------
taneq
I don't understand how "each robot has to be custom built for each building"
and "takes two years to build".
I mean, they don't need Atlas here. They just need a ruggedized remote control
car with a camera and a ton of lead plate on it.
~~~
TotesAThrowAway
Friend of BillinghamJ here.
We do build mock facilities and we have recently started working on the
Fukushima Daiichi inspections, and there is only so much I can actually say
but I'll share what I can. I'll explain what caused the failure in this part
and then I'll move onto the fun stuff in the next post (robots!).
The mock facilities we have made in the past were a quarter of a full-scale
reactor, rather than a quarter-scale reactor, if that makes any
sense at all.
Anyway back to Japan. I'm assuming people have a basic understanding of how
fission reactors work (Boiling Water Reactors if you are interested in doing
further reading).
To break the situation down, the cooling failed (believe it or not, diesel
generators don't work too well on water!) on reactors 1, 2 and 3, causing a
complete meltdown of the fuel rods. When cooling failed, all of the cooling
water was turned into steam, which in turn reacted with the overheated
zirconium fuel cladding, creating hydrogen. I'm assuming people know mixing
hydrogen with oxygen is basically a recipe for an explosion, and that is
important for what happened next. They tried to vent the gases out to the
atmosphere to prevent the pressure vessel from exploding, but the hydrogen
went the wrong way and caused reactors 1 through to 3 to explode in various
places. I can't remember correctly (I think its reactor 3?) but the explosion
happened within the pressure vessel which is why there was a large
contamination breach. Because of this complete loss of control of the
reactors, and the meltdown currently happening, they flooded the whole system
with sea water and pumped as much out into storage as they can, but they lose
a lot of it out to the sea, hence the Americans whining about the radiation in
the Pacific. Now, the reactors are stable (ish) and they are continuing to
pump water in to stop them going critical again.
I'll move onto the robots when I get home in part two, but right now I need to
go have an argument with a lawn mower as my hair is getting unruly. The video
linked below explains more about the actual failure of the reactor:-
[https://www.youtube.com/watch?v=JMaEjEWL6PU](https://www.youtube.com/watch?v=JMaEjEWL6PU)
Brb.
~~~
TotesAThrowAway
PART 2
So, ROVs.
We probably aren't going to use a ROV for our solution as it doesn't suit and
they're a pain in the arse quite frankly.
To start, I'll address the issue with the 'wires' failing. An American company
built a ROV that was heavily shielded and was driven via an umbilical.
Wireless is hard to use and autonomy is too hard to use as the reactor
conditions are unknown. The ROV was a good design and could survive the
radiation for a reasonable amount of time, however for whatever reason they
used PVC wire sheathes that break down under heavy radiation, and as a result
the wires touched and shorted the electronics out, rendering the ROV useless
and 'dead'.
In terms of why it takes so long and why we can't use an off the shelf
version, basically radiation is a bitch. At the base of the reactor vessel,
just above the corium, the radiation output is estimated to be around 3000
sieverts per hour, which translates roughly to a human life expectancy of
around 6 seconds, give or take. This amount of radiation causes electronics to
fail (transistors commonly), and materials to break down. The breaking down of
materials caused the American ROV to die, and another example would be that it
can cause greases to harden, which stops motors working.
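As a back-of-envelope check on that figure (taking roughly 5 Sv as an acutely
lethal dose, which is my round-number assumption):

    const doseRateSvPerHour = 3000;  // estimated field near the corium
    const acutelyLethalDoseSv = 5;   // assumed round number for a fatal dose
    const secondsToLethalDose = (acutelyLethalDoseSv / doseRateSvPerHour) * 3600;
    // ~6 seconds, matching the "life expectancy" quoted above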
Reactors aren't big spacious areas either, so it's not like we can just deploy
a lead (lead weighs a metric shit'tonne) shielded tank to have a look, it's
just too big and cumbersome. We decided against using a ROV as it had to be
30kg or less, which is absolutely nothing once you bring in drilling packages
and the likes.
Also quite often you will be deploying through a hole between the size of your
fist up to just smaller than the diameter of your head, so that restricts you
hugely as well.
You also have material compatibility. If you get something stuck you have to
be sure that it won't react and cause the reactor to become critical again,
which could happen in one of the reactors (can't go into more detail sorry).
One last major consideration as to why it takes so long to build and test a
ROV to suit. The reactors are under immense thermal stress, and metal likes to
bend and warp when it's heating/cooling. You have to build your solution
around the worst case scenario. An example would be we went into a boiler tube
trying to plug a 1 inch hole from 18m above it using a manipulator arm. That's
already hard on its own, but then we discovered the originally 7mm gap we were
aiming for was actually now as small as 3.5mm. Trying to develop ROVs and
remote solutions is really not easy, the best way I can put it is that this
line of work is an art, not a science. That's why it will probably take the
best part of a century I reckon to fix this problem.
~~~
x0x0
That's why it will probably take the best part of a century I reckon to fix
this problem.
Just wow.
But a question -- you say you won't use a ROV. But you say wireless is hard
and autonomy is too hard. So that leaves what?
And thanks for taking the time to share.
~~~
TotesAThrowAway
The alternative and what is most commonly used for control, in the nuclear
industry, is a very long 'umbilical' (wire with loads of cores). A lot of
nuclear providers are against people using wireless because if you lose
connectivity and it gets stuck, you are in trouble.
Also we are going to use a manipulator arm more than likely. Manipulator arms
are cheaper and more widely used in our industry.
~~~
effie
So its like a very long colonoscope that has camera on its end and can be
manipulated to turn around a corner?
[https://duckduckgo.com/?q=colonoscope&iax=1&ia=images](https://duckduckgo.com/?q=colonoscope&iax=1&ia=images)
~~~
TotesAThrowAway
More like this minus the electronics:-
[http://www.lasersnake.co.uk/images/galleries/2b8q6hcombined-...](http://www.lasersnake.co.uk/images/galleries/2b8q6hcombined-3.jpg?width=800&height=600&shrink=true)
Your idea would be perfect if we were just doing an inspection, however we
have other work we will need to do down there which will require various
packages so it has to be somewhat bulkier.
------
ams6110
The treated water should probably be pumped onto tanker ships and taken to the
middle of the ocean for release. It would be so massively diluted as to be
harmless, and remove any chance for local contamination. It would be far safer
than leaving it in tanks on-site, which are subject to leaks, intentional
damage, etc.
~~~
retube
Yeah but who's gonna crew a tanker of highly irradiated water?
~~~
joeyo
The lowest bidder!
In all seriousness, radiation shielding (necessary thickness of metal, etc) is
quite well understood. It's probably not even an especially hazardous cargo
compared to what most tankers haul.
------
sakopov
I'm curious if there is any research on purification of irradiated water. A
quick search took me here [1]. Couldn't find much else on this.
[1] [http://acselb-529643017.us-
west-2.elb.amazonaws.com/chem/243...](http://acselb-529643017.us-
west-2.elb.amazonaws.com/chem/243nm/program/view.php?obj_id=121787&terms=)
------
xigency
I used to live here! Not in Fukushima exactly but in Aizu, and I've traveled
to Fukushima City. The closest I've been to the reactor is probably 40 miles
away from the train in Koriyama.
------
hasenj
Am I correct in assuming that the cost of "fire fighting" the situation far
outweighs the value of the energy generated from the plant?
~~~
Symmetry
Unclear. In terms of deaths the number caused by Fukushima is comparable to an
equivalent coal plant[1] and we still use coal plants for some reason so the
consensus seems to be that the electricity is worth the cost. I'd guess that
the Fukushima cleanup and containment costs are going to be small compared to
the human costs. As to the cost of the evacuation, that might very well be
enough to zero out the economic value.
[1][http://hopefullyintersting.blogspot.com/2013/12/fukushima-
vs...](http://hopefullyintersting.blogspot.com/2013/12/fukushima-vs-coal.html)
~~~
hasenj
I meant the effort required to clean up the situation and turn off the plant.
------
mtahaalam
Indeed!
------
HillaryBriss
It's interesting that, in Japan's political environment, the local fishing
industry has enough clout to veto TEPCO's proposal to allow radioactive water
to leak into the ocean near the site.
It's also interesting that the fishing industry, which is usually (correctly)
assigned blame for depleting fish stocks is, in this case, protecting fish
habitat.
~~~
Pyxl101
They deplete fishing stocks by fishing. They don't want people worried that
their fish are contaminated, and they don't want their stock depleted by a
different cause than fishing.
~~~
HillaryBriss
Yeah. Makes sense.
I guess nuclear contamination is in a different category than say, pollution
from coal burning power plants, which add a lot of mercury to the environment
which then ends up in high level ocean predators like tuna. The fishing
industry doesn't seem to have the ability to stop that kind of pollution.
Maybe it's because many seafood consumers casually ignore mercury
levels in fish they eat. Or maybe it's because the Japanese seafood supply
chain is well enough managed and regulated that people really know where a
piece of fish in a market actually came from. Perhaps people carry geiger
counters into the supermarkets there. I don't know.
~~~
nitrogen
The documentary "The Cove" has a side story about mercury in Japanese seafood
from some places.
------
marze
The robot problem is simply a lack of imagination. Just set up a long drill,
like an oil rig, aimed sideways into the reactor. Drill six inch hole with
cutting torch drill head, then insert equally long periscope. No rad hard
electronics needed.
~~~
mikeash
Cutting a big new hole in the structure holding the nasty stuff inside doesn't
seem like a very good idea to me.
~~~
marze
Give me a break, the structures are already full of holes. That's why the ice
dam is needed.
| {
"pile_set_name": "HackerNews"
} |
Sad reality: It's cheaper to get hacked than build strong IT defenses - jazzyb
http://www.theregister.co.uk/2016/09/23/if_your_company_has_terrible_it_security_that_could_be_a_rational_business_decision/
======
Noseshine
Why is that "sad"? Nature has gone the same path. We have basic defenses that
are "on" all the time (passive immune system - nonspecific), and we have an
adaptive response that reacts to what actually happens to us, which also means
threats we actually encounter will be recognized and fought more quickly and
better in the future. Or houses - having lived in the US, those front doors
are at least an order of magnitude less secure than any German front door, but
even those are not really able to keep out any determined intruder.
Why should we mount a very expensive all-out defense against a lot of
perceived threats? It's similar to " _every_ child (programmer, etc.) MUST
know this!". Making demands is easy. If people don't care there probably is a
deeper reason. Yes, the heuristic gets it wrong, that's why it's a heuristic,
but that it is one in the first place also has similar reasons.
It sure is possible to criticize a concrete company for concrete problems, but
the blanket statement of the headline is not useful.
~~~
Bartweiss
The problem is that this isn't about saving money _overall_. Users pay the
primary costs of the company's security errors, so it's a moral hazard
problem.
Right now, companies that lose data don't pay any costs at all until
afterwards, and those costs are usually minimal. The reputational damage is
reduced because no one knows until (well) after the breach, and any financial
info lost is consumer credit cards rather than corporate accounts. Yes, users
sometimes get free identity theft monitoring, but those services are quite
cheap to account for the fact that they don't actually _work_.
More specifically, this is asymmetric information and therefore the market
can't adjust for it. When Yahoo loses my data, will my passwords be salted and
well-hashed? How could I possibly know in advance? Consumers aren't making
privacy and risk choices, they're using the internet as best they can and
getting repeatedly burned for it.
If you want a clear contrast, companies are enormously concerned about
"whaling" attacks, and are working hard to prevent them. Those attacks take
corporate money in real time, so the costs are properly factored in. Moral
hazard is inherently about broken cost-benefit measurement.
~~~
mahyarm
The real problem is most payments & identity are pull vs push and the username
is the password. If they were push, then there wouldn't be customer payment
information to steal in the first place. All that would be taken would be
personal shipping addresses, and those are mostly public as it is already.
Same with social security numbers and identity in general.
Solving the root cause in this case, though, was decided against by the
infrastructure organizations. Eating the fraud is cheaper than putting up
barriers to payments.
If fraud liability was moved 100% to banks, payment providers and governments,
we would see the problem fixed pretty quickly.
------
peterbonney
One reason it's true is because companies only measure actual cost, not
opportunity cost. How much did it cost Yahoo to have every tech-savvy person
in the world switch to Gmail because of Yahoo's lousy (and Google's excellent)
security infrastructure? Where the tech-savvy go, the tech-unsavvy often
follow. As they did with Gmail.
But lost revenue opportunities don't show up in the bottom line, so cost-
focused managers don't think about them. And they conclude it's "cheaper" to
not invest in this or that thing that their smarter competitors are doing.
"What gets measured gets managed." People think this (apocryphal) Drucker quote
is advice. It is not advice. It's a warning.
~~~
richmarr
Not sure I agree that it was Google and Yahoo's respective security
architecture that caused people to switch, even tech-savvy people.
~~~
peterbonney
Sure. But all the things Gmail offered were things that probably looked like
lousy investments to Yahoo. Why offer more storage? Why have better spam
filtering? Why have better security? It all costs money!!!
The point is only looking at actual cost, not opportunity cost.
~~~
richmarr
Yep. Good clarification.
------
vfxGer
I am sick of seeing headlines about teenage hackers being put in jail. It's
not because they are geniuses, it's because of poor IT defense. The companies
should be severely fined for criminal negligence.
~~~
saiya-jin
I get what you mean, but poor defense is no excuse to hack the hell out of a
company, neither legally nor morally. Plus, I don't buy the notion that some
teenager had no clue that what he was doing would harm others' livelihoods (if
he really didn't, then he should go through a psychiatric evaluation).
If I don't put a 3m electric fence with automatic sentry guns around my whole
hypothetical house and land, does it mean everybody is automatically invited
to freely try to break in, do damage, steal my stuff or post my private and
legal data online for others?
The state should have better uses for these guys, but there should definitely
be punishment, not reward in any way. That's how all countries run these days.
~~~
thr0waway1239
I am not sure the analogy is very accurate. You do not advertise your house as
a place where other people can come and freely store their valuables and then
take it out as they please.
If you did, there is a name for what you have built: a bank. And you can be
pretty sure people then will not have any issues with whatever security
measures you take. Most of all, your cost of security installation is now
covered by other people's money, which effectively gives you very precise
calculations on what exactly you can and cannot spend. You are more than free
to return the money and shut down shop if you feel you are in a completely
unsafe neighborhood which makes your bank impossible to run at a profit.
To stretch this point a little further, imagine you did have a bank, and your
customer comes and demands to take their money out, and you say "Oops. I had
just left it out here on this desk, and when I went to pee, a kid just came in
and ran out with all your money. I feel bad for you, but the cost of moving
the stuff back and forth between front desk and the vault would make the
service unprofitable. Its not my fault, its all these children in the
neighborhood who keep pranking me".
The lowered barriers to hacking, combined with an ever moving target for what
constitutes good security, are genuine concerns. But as a company, you are
expected to shoulder the burden of security as a precondition of making the
claim that you provide a good service. One way or another, people actually pay
you to take care of their data as part of the service.
~~~
posterboy
> You do not advertise your house as a place where other people can come and
> freely store their valuables
A house offers protection, no doubt about it and anyone but a social recluse
will potentially offer it to others, although not strangers. You are
certainly not trying to say negligence would be OK as long as it concerns
strangers.
------
nickpsecurity
I think this article is making a decent point but with bad data. We know of
many cases where the cost of insecurity drastically outweighed the cost of
basic security. The most obvious is banking where no security would drain all
their money. So, they combine preventing, detection, auditing, and computers
hackers can't afford to keep losses manageable. Another example on putting a
number on it is the Target hit that, in last article I read, was something
like $100+ million in losses. Lets not even get to scenario where they start
targeting power plants or industrial equipment whose management foolishly
connected to net.
It also helps to look at the other end: minimum cost to stop most problems.
Australia's DSD said that just patching stuff and using whitelisting would've
prevented 75% of so-called APT's in their country. Throw in MAC-enabled Linux,
OpenBSD, sandboxed (even physically) browsers w/ NoScript, custom apps in safe
languages, VPN's by default, sanest configuration by default, and so on.
Residual risk gets _tiny_. What I just listed barely cost anything. Apathy,
which the article acknowledges, is only explanation.
A nice example was Playstation Network hack. I didn't expect them to spend
much on security. I also didn't expect it to come down to having no firewall
(they're free) in front of an Apache server that was unpatched for six months
(patches are free). That this level of negligence is even legal is the main
problem.
------
hannob
I wonder if one of the problems is that the focus is too much on costs.
What I see all the time in IT security is that for many people doing security
means spending lots of money on products with highly questionable promises.
It's very doubtful that many of the security appliances you can see at RSA or
Black Hat do any good, in many cases they add additional risks. But the
industry is selling a story that the more boxes you buy and put in front of
your network the better.
For a lot of companies there are very cheap things they could do to improve
their security. This starts with such simple things as documenting on the
webpage who outside security researchers should contact if they think they
found an issue in the company's infrastructure.
So I have quite some doubts that the formula "spending more on security ==
better security" holds.
------
lagadu
It's sad because it's true. In 2018 the data protection EU regulation gets put
into play though, which might change that partially by effectively increasing
the cost of losing control of data.
~~~
sarnowski
For Reference:
[https://en.m.wikipedia.org/wiki/General_Data_Protection_Regu...](https://en.m.wikipedia.org/wiki/General_Data_Protection_Regulation)
This directive will drastically increase fines for data leaks in the EU.
------
marmot777
Everybody's probably seen this, but please: more forcing companies to
internalize their externalities. More lawsuits, please. I never thought I'd
say that. [http://www.scmagazine.com/class-action-lawsuit-filed-
against...](http://www.scmagazine.com/class-action-lawsuit-filed-against-
noodles-company-over-breach/article/521276/)
------
nathanaldensr
"Cheaper" is not including the full cost of compromised data. Compromises
don't only affect companies' bottom lines, but also those who were
compromised. The costs to individuals are undoubtedly much harder to quantify.
~~~
enraged_camel
I totally agree, but I think in this case they are saying it's cheaper _for
the company_ , which is what really matters in this context (since they're
comparing it to how much the company would pay for security).
I mean, if the company's website gets hacked and your credit card data is
stolen, then your card is charged $1,000, it's not the company that pays for
it, right? You either talk to your bank to mark the purchase as fraudulent and
get the charges reversed, or pay for it yourself (e.g. if it's a debit card).
Perhaps that's the solution though: a way to directly associate fraudulent
purchases with security breaches where credit card data has been stolen, and a
law that requires the breached party to pay all expenses related to that
fraud. _That_ would get all major retailers scramble to get their shit
secured.
~~~
nathanaldensr
Good point about what the article was comparing. I missed that.
I guess I'm just sour that articles like this tend to gloss over what is often
the most important impact of a security breach--the end-users' data and
privacy--and instead focus on easy-to-report numbers.
------
nmgsd
I'm not so sure it's cheaper. The business cost can be enormous. See the
Target breach, which led to FIRING the CEO. And Yahoo, which may have their
deal with Verizon at risk now due to the latest breach.
------
bikamonki
That is why as a sole dev I no longer offer full-stack solutions: clients
simply do not want to pay for the hours it takes to keep their back-ends
monitored and secured. Yet, dynamic data is mostly inevitable in any modern
web solution so I am increasingly relying on BAAS providers. My gamble is that
it should be easier/cheaper for BAAS providers to maintain a team of
knowledgeable and experienced engineers to tend infrastructure that runs
several back-ends. It seems like a natural step from _hey I trust you can run
my hardware take my money_ to _hey I trust you can manage my data take my
money_
------
jrochkind1
I think it's possible the global economy literally could not take the expense
of actually making everything secure.
~~~
raesene6
Definitely not if it was implemented in a big-bang, but a more gradual
approach might work.
The counterpoint of what will the costs be if we carry on with the current
level of security and drive IT systems more into everyone's lives has to be
considered too.
------
teekert
Yes, you notice it when you deal with sites where bad security can be costly,
like on a (bit)coin exchange (i.e. Bittrex). You get an email at every
successful login, 2FA is encouraged from the start, enabling the API keys
requires 2FA, Google reCAPTCHA at every login, logout as soon as you close the
browser, api keys with different levels of functionality, API requires SHA512
hashing of API key and API code and a time fingerprint. It's pretty refreshing
to be honest.
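The signing scheme there is the usual HMAC pattern; a minimal sketch
(parameter names are assumptions, not the exchange's documented API):

    import { createHmac } from "crypto";

    // Sign a request URI with the account's API secret. The server recomputes
    // the same HMAC-SHA512 over the URI (which embeds the key and a nonce,
    // the "time fingerprint") and rejects the call if the digests differ.
    function signRequest(baseUrl: string, apiKey: string, apiSecret: string) {
      const nonce = Date.now();
      const uri = `${baseUrl}?apikey=${apiKey}&nonce=${nonce}`;
      const signature = createHmac("sha512", apiSecret).update(uri).digest("hex");
      return { uri, signature }; // the signature is sent in a request header
    }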
~~~
joosters
Seriously? Bitfinex was the latest, greatest bitcoin business with a security
breach, and they just pushed the losses onto their customers. Bad security at
bitcoin exchanges does not generally affect the company itself, but the users.
~~~
petertodd
Volume at Bitfinex has gone _way_ down; bad security is definitely costing
them in lost business. Equally, how could those losses not get pushed onto
customers? They were larger than the assets the company had available.
Bitcoin services aren't a good example here - they're very different than data
breaches. If anything, they're a rare example of a case where hacks usually do
lead to the destruction of the company; that Bitfinex wasn't killed
immediately is an exception, not the norm.
------
cmurf
Yahoo customers are advertisers, not people with email accounts. Account
holders are just a resource, and in aggregate I'm willing to bet most won't
know what this hack means to them, even if they learn about it. What are the
chances they lose 30% or more of this resource, users terminating their
accounts? The stock price suggests the account holders don't care or have no
meaningful recourse.
------
jbb555
Well physical security is the same. You could make your house entirely thief
proof but nobody does because the cost isn't worth it.
------
hoodunit
Part of the issue is that legally in the U.S. a) privacy violations are
usually punishable by law only if a specific non-privacy harm comes of it and
b) privacy is treated as an individual right and not a societal good. If a
company gets hacked and loses your credit card and bank information afaik it's
punishable only if someone actually fraudulently uses the information. It's up
to individuals to jointly complain about specific damages to effect changes,
and for any given individual there's little incentive to make your own life
difficult for vague potential benefits. Also in most cases the individual harm
is quite small, even if in aggregate or viewed as a societal harm there is
huge damage.
------
bagacrap
I found this to be true of securing my house. I had several break ins and the
total cost (mostly repairs) was still far less than the cost of installing an
alarm system, to speak nothing of paying for police response to false alarms.
~~~
a3n
Is there an emotional cost?
------
rbc
I think a lot of these problems could be nipped in the bud by more aggressive
code auditing and patch management. It's better to start with fewer zero-day
vulnerabilities. Once the zero-day exploits are out there, you have to act to
mitigate them. Another way to think about it is to compare it to home
construction.
You have to use good building materials to start. After the house is built,
you get into the decision cycle of maintaining, repairing or replacing the
home.
------
sandworm101
Sadder reality: This principle has been extended by many CEOs to justify not
doing _any_ security. The OP speaks of the costs of running a top-notch
system. That's expensive. But please do something. Something more than just
relying on your head of IT and your web designer. Read the Ashley Madison
report by the Canadian privacy commissioner. A supposed unicorn, and they were
doing nothing.
------
sabujp
Has your identity been stolen? If so, were you able to determine if a large
scale hack was the cause of that? Then were you able to go back and sue that
company for your losses? You probably don't even have much recourse, i.e. it's
cheaper for you to try to fix your own stolen identity issue than to sue the
company that got hacked for remuneration.
------
devonkim
All we need to know to see that it really doesn't matter to the business
world, despite all the drama in corporate IT over security (if that), is that
Apple, Target, and Home Depot are having great quarters after their security
breaches, so any consumer backlash is materially ineffective even if people do
care - just not _enough_ of them care.
------
josaka
This may change as the plaintiff's bar gets more sophisticated. Many probably
remember the Home Depot data breach a few years ago. The card issuers brought
a class action against HD and the complaint (under MDL No. 14-02583-TWT) reads
like a nice treatise on causes of action in various states implicated by a
breach.
------
emodendroket
I feel like a lot of our problems would go away if companies faced penalties
with teeth for losing customer information.
------
tlogan
Now people ask why Oracle is still around. And this is the answer.
At least companies have somebody (with $$) to sue when a security breach
happens.
I'm really confused by the following: 1) people want free services and 2)
people want extra security.
The above is like getting a free home security system and then complaining
that the alarm does not work consistently.
------
jdc0589
unless you work in an industry that deals with fairly private and regulated
data, but aren't a huge company with tons and tons of cash to burn. Then you
are horrendously screwed.
The hardened security infrastructure is still extremely expensive to implement
and maintain. You can't just deal with breaches because the fines (straight
from Uncle Sam) can be huge relative to your profits. Even if the fines
weren't bad enough at face value, you aren't a huge corporate giant, so
customer churn after a bad enough breach is going to be worse than it would be
for a bigger/older company. You are also paying large insurance premiums that
don't even fully cover the fallout of a potential breach.
------
lgleason
It's actually the tip of the iceberg. Given that there is no standard of care
and no barrier to entry to being a software developer, there are a lot of
things that are poorly done in this industry. Security is just one of them.
With that being said, I've seen a lot of security people go overboard with
security and not take the other factors into account, e.g. security people
trying to prevent the CEO from having access to resources, or adding in
policies that cost more to implement than the cost of the threat.
------
pjmlp
You see this in users as well.
I don't monitor the Apple forums nowadays, but it was common in the early
switcher days to have people asking how to disable UNIX security and make it
work just like Windows 9x.
------
KirinDave
Unleeeeessssss you are a bank.
The costs of intrusions against financial institutions are seldom fully
understood by people outside the industry but represent a lot of ongoing
costs.
------
cowardlydragon
What's even worse?
A mountain of bureaucracy that slows down everything as much as if you had
strong defenses, but is effectively as weak as bad security.
------
Raphmedia
"Oh, we just leaked the passwords of 300,000 of our users? Too bad. Let's
make a tongue-in-cheek apology on Twitter and move on!"
------
omouse
Time to start class-action lawsuits and force IT companies to at least buy
_insurance_.
| {
"pile_set_name": "HackerNews"
} |
Unit testing in the enterprise: Five common myths dispelled - kungfudoi
http://searchsoftwarequality.techtarget.com/tip/0,289483,sid92_gci1312005,00.html
======
mleonhard
"The few organizations that enjoy long-term success are those that make unit
testing part of their daily workflow."
I wish the author would elaborate on that statement. I want supporting
evidence, even if it's anecdotal.
------
bsaunder
I'd rather spend the effort generating code (UPDATE: and I do mean generating,
not writing) than writing tests (presumably I could be generating tests at the
same time). But there are enough people on the TDD train that there must be
something of value there. Plus it's politically incorrect to say you don't
like testing your code.
When people claim to be dispelling myths, I'd really wish they would do so
with fact/numbers (ever see myth busters, they don't just dispel by telling
you). I'd really like to see some numbers from the TDD folks on number of bugs
found via TDD and its effectiveness. And I don't want the story about the one
bug found that saved them (just like that friend that lived because they were
ejected from their car in an accident (fortunately they weren't wearing their
seat belt)).
~~~
suboptimal
I've seen a lot of programmers go from writing no tests at all, to TDD (and
pair programming, etc.). This extreme shift in behavior makes it seem more
like a fad; I prefer a balanced approach. Use what works for you and your
team, and if it stops working, do something else.
------
chriszf
Has anyone here ever been in a situation where they thought, "Wow, a unit test
would have saved me" or "Man, thank god for our unit tests?"
I don't think I've personally encountered that. Plenty of bugs at compile time
or during regressions, but nothing where I could even imagine a unit test
saving me.
I think the true value of unit tests is documentation. Instead of writing
documentation that says, "Hey, this function should do this", you're writing
code to do that, which is usually a pretty good idea. Executable documentation
is the way to go. It's a very lisp-y idea.
That said, I'm not fond of the author's implication that unit testing will
save you from cascading failures and make you a better programmer. He even
says it's not a silver bullet, only to turn around and say it's the most
magical thing to happen to software productivity since the invention of the
keyboard.
It seems to me that the productivity gains are from forcing your developers to
actually understand the code they're writing and interfacing with, but
personally, I don't think I'd trust or hire someone who wasn't proud enough of
their work to do that on their own.
------
demallien
Does anyone else here dislike automated unit testing?
Where I work, we have unit tests that are all hand-coded. Even with that, it
takes half an hour to run through the full suite - too long to do on any minor
change, but still manageable when preparing a sandbox for checkin.
If we added in automated tests, we would have many times the number of tests,
with a corresponding increase in the time taken to make them run. If you can't
run unit tests quickly, their value is reduced.
I personally feel that a well-designed test framework should test all of the
common use cases, some obvious edge cases, and should be easily adaptable so
that when bugs are found, you can readily create a unit test that reproduces
the bug, preventing regression.
I'm curious as to how others here decide which unit tests to write.
~~~
mleonhard
How about splitting your tests into a fast smoke test and a long test suite?
You could run the smoke test before checking in. You could set up a continuous
build system that would build with your latest changes, launch the long-
running test suite, and email you the results.
~~~
demallien
In practice we run a subset of tests during development, but the current rule
is that we should never check in code that hasn't passed ALL of the unit
tests. The idea being that each person is responsible for their own
modifications passing all tests. If you check in, and then get shifted onto
another high priority task, and a later run of the unit tests discovers that a
funky side effect from your changes breaks the unit tests, you are no longer
available to fix the problem. Someone else has to do it, allowing you to
escape responsibility for your error. At least, that's the theory management
here subscribes to...
| {
"pile_set_name": "HackerNews"
} |
Euthanasia Coaster - ca98am79
http://en.wikipedia.org/wiki/Euthanasia_Coaster
======
gpcz
I know it's taboo to bring up the Holocaust in Internet discussions, but this
coaster and the verbiage that the Wikipedia article uses ("unloading of
corpses" in particular) creeps me out at a level similar to the industrial
processes the Nazis invented to perform mass murder. The coaster would
probably be horrifying to watch, too -- people would scream for the first drop
and the first few inversions, and then you would hear an eerie silence.
~~~
logfromblammo
The thing I found to be most offensive is that it does 24 passengers at a
time. That's really what makes it creepy. It's like they're not even trying to
make your death special. If I'm going to die on the most lethal roller coaster
ever designed, I don't want the last thing I see to be the backs of 22
screaming heads. And do you really need to kill 360 people an hour? Is demand
for suicide really that high? I don't think so. Pull out 23 of those seats and
put in a few video cameras. And play me Ride of the Valkyries or something on
the way down so I can go out feeling like a badass.
And don't "unload" me. You build a massive corpse ejector into that train that
launches my body through the air across the people waiting in line onto a
giant trampoline over a body funnel. I'll roll right into the plastic souvenir
coffin, which my family can purchase at the photo booth along with the final
video for $49.99, with free drink refills included for the duration of their
stay at the park.
~~~
shawkinaw
Wow where do I start with this? I LOLed at least 3 times reading this, but the
idea that _this is at a theme park with other rides_ is probably the best
part.
~~~
DanBC
They're still working on the log-flume of doom.
------
earljwagner
Trey Brackish: "I'm standing here at Thrill World where this roller coaster
continues taking the lives of innocent people. Earlier today we spoke with
John Oakfellow of the Red Cross."
John Oakfellow: "We're doing what we can, but the casualties continue to
mount."
Mr. Show "The Devastator"
[http://youtu.be/p5Oi57fqdU0](http://youtu.be/p5Oi57fqdU0)
~~~
zerohm
+1 for sending me down a Mr. Show youtube hole.
------
dkhenry
After seeing a close friend die slowly over three weeks from stage 4 ovarian
cancer. I think this would be worlds better than the slow euthanasia by
morphine drip that we currently use.
~~~
jobu
Totally agree. There are so many ways that a person can die a prolonged,
horrible death that is medically sanctioned and supposedly more ethical than
euthanasia.
After watching my father wither away with dementia I've decided to take up
skydiving after I retire (and pack my own parachute). If my mental state
becomes too poor to properly pack the chute I think hitting the ground at 200
mph would be a decent way to go.
------
jackschultz
What's with that last sentence of the intro? This is an article about a roller
coaster that kills people, not an argument for or against euthanasia. I come
to this page wanting to read about a death coaster and then I'm thrust into a
debate on whether it is moral to euthanize people.
Also, the source isn't really that solid. It's one sentence in a post on Metro
which, since it looks like it's based in the UK, I don't really know much
about.
If you want to include criticism for something make it a separate section or
something, not a sentence in the introduction.
It looks like this was the revision where that sentence was added in:
[http://en.wikipedia.org/w/index.php?title=Euthanasia_Coaster...](http://en.wikipedia.org/w/index.php?title=Euthanasia_Coaster&diff=426076801&oldid=426042569)
~~~
DanBC
Metro is a low quality free tabloid given away. It's found on buses and
trains.
~~~
bshimmin
Published by the same people as the Daily Mail, no less.
------
esquivalience
I'm interested that it's designed by an art student. I'm not sure whether it's
intended as art, but if so I suppose it lies in the dichotomy - horror of
death vs the very functional combination of entertainment and euthanasia.
In that case it would certainly be the most interesting piece of 'artistic
research' I've seen.
And the "ultimate" designation is a nice mockery of brand hype!
------
tiku
I'm getting flashbacks to Rollercoaster Tycoon..
~~~
izzydata
Yea, I had coasters way worse than this, but customers only ever complained
that it was awful. You weren't able to kill them with excessive G-forces.
~~~
Agathos
I thought there was a way to fatally catapult them into a neighboring park and
then, because they died there, the game engine would penalize that park for
killing its guests. But I might be thinking of a different amusement park
simulator.
------
marknutter
Somebody needs to make an Oculus Rift demo of the Euthanasia Coaster.
------
rimantas
Damn, something related to my country at the top of HN and it's about suicide
more or less. Sad fact: Lithuania is among the top countries regarding
suicides :(
------
fixermark
I feel like this wouldn't work.
I'm not a doctor, but I was under the impression that the brain could survive
lack of blood flow for several minutes. How long is this ride supposed to
last?
~~~
unwind
The ride time is 3:20 according to the linked-to page.
I thought this was horrible. I guess it's "art", but still. Yuck.
~~~
k-mcgrady
>> "I thought this was horrible."
What about it did you find horrible? If you wanted to end your life it seems
like a relatively pain free way to do so. Artistically it's also pretty
interesting imo.
~~~
nsxwolf
I find euthanasia to be horrible, so a euthanasia roller coaster is
automatically horrible.
~~~
k-mcgrady
Why do you find euthanasia horrible? It's much more horrible to force someone
to endure severe pain against their will.
~~~
davidw
I think that someone dying is pretty horrible, even if it is less horrible
than dying some other way.
~~~
davidcollantes
Dying is horrible? Why? We all die, it is a known end, why will it be
horrible?
~~~
davidw
[https://news.ycombinator.com/item?id=7708986](https://news.ycombinator.com/item?id=7708986)
------
njharman
That does not sound fun. 2 min to contemplate you've made the wrong choice but
with no way out. Then terror and passing out.
There's definitely more enjoyable ways to end your misery.
~~~
TausAmmer
Add eject button.
"I want to die, I will jump off the ledge!" \- "Please go ahead" \- "On second
thought...."
~~~
aaronem
For maximum fun potential, add an eject option but no parachute or other PPE
of any sort.
------
nilkn
Realistically, I think it would be much better to just use one of those
centrifuge machines. I imagine that actually riding this roller coaster,
before you faint, would be pretty painful. In a centrifuge you could probably
strap in the passenger a lot better and provide much more cushioned seating
and neck support so there's no pain involved.
------
maaarghk
Considering this in a purely pragmatic sense, my main concern is that if you
were one of the "particularly robust" "customers" yourself, you might end up
covered in someone else's vomit in your last moments.
~~~
ygra
I guess at 10 g, when the body isn't able to pump blood to the brain anymore,
your stomach would have difficulties expelling its contents as well. At least
if I understood the operating principle correctly.
------
ssprang
Reminds me of "The Centrifuge Brain Project"
[http://vimeo.com/58293017](http://vimeo.com/58293017)
------
mjamil
Any conversation about individuals having the right to control their lives
(including the means to end them) seems to me to be a good thing. Similarly,
on a larger scale, I also find talking about the (often shady) efforts by
various governments to control population to be a welcome thing. Vonnegut's
"Welcome to the Monkey House" touches on both topics; it's worth a read.
------
facesonflags
The artist's focus seems to be more on gravity than killing:
[http://www.julijonasurbonas.lt/t/gravitational-
aesthetics/](http://www.julijonasurbonas.lt/t/gravitational-aesthetics/)
------
GrinningFool
In related news:
[http://www.ibras.dk/montypython/episode17.htm](http://www.ibras.dk/montypython/episode17.htm)
------
lotsofmangos
It is less stupid and tasteless than the experiments being performed on humans
in Oklahoma at the moment.
------
cmiller1
I want to get off Mr. Bones Wild Ride.
~~~
facesonflags
"Not with a bang but a whimper."
[http://aduni.org/~heather/occs/honors/Poem.htm](http://aduni.org/~heather/occs/honors/Poem.htm)
------
easymovet
blackout for sure, but 60 seconds of oxygen starvation shouldn't kill you
right? 10 g's might snap your neck though.
~~~
iLoch
I would think that the severity of hypoxia depends on where in the body it
happens. As far as I know (not much, admittedly) exhaling then counting to 60
is different than not allowing any oxygen into the brain. If you look at the
Wikipedia page on Hypoxia you'll see a picture of a person with Hypoxia in
their hand. Can't imagine your brain would do very well if the same thing
occurred there.
~~~
aaronem
Exhale and count to 60, and your brain keeps running off the oxygen still in
your bloodstream. Evacuate your brain of blood, and it can't do that; as
deaths go, this one would probably be faster than a neck tourniquet, but
slower than the .50 BMG to the head I mentioned elsewhere in this thread. (How
can St. Peter tell who died that way? They're the ones who ask him "What the
hell was _that_?")
------
djanogo
Seems like a lot of energy just to euthanize.
------
nkozyra
I rode this, wasn't scary at all!
------
ds9
This could never succeed here in the US - (a) general opposition to euthanasia
and (b) the amusement park industry would lobby against it because it would
scare people away from the regular, milder roller coasters.
It seems mostly academic, as there are much less costly, and not-unpleasant
methods - overdose of opiates for example.
~~~
peteretep
> This could never succeed here in the US
Dang, ya think?
~~~
gadders
I bet repeat business would be appalling.
| {
"pile_set_name": "HackerNews"
} |
How UK Government spun 136 people into 7m illegal file sharers - edw519
http://www.pcpro.co.uk/news/351331/how-uk-government-spun-136-people-into-7m-illegal-file-sharers
======
tomsaffell
The extrapolation from 1176 is reasonable (I think the error is roughly +-2%
for n=1000) _if_ the sample is not biased..
But the real issues is how the question was asked. From the article:
... _11.6% of which admitted to having used file-sharing software_
So what question were they asked? If I use Instant Messenger to send a photo
to a friend, have I used 'file-sharing software'? Is Skype file sharing
software? I used to write surveys that tried to get at issues like this (sw
piracy), and I believe it is nigh-on impossible to get good data by asking a
direct question in this way. You either make it very pointed (e.g. ".. used
file-sharing software to illegal share files"), and then few people say _yes_
, or you make is less specific, and people accidentally say _yes_ because they
don't understand it. That's the real problem.
~~~
Donald
A study can be designed to account for embarrassing and/or incriminating
questions.
Say the question is about "illegal file-sharing" and Yes means the subject has
participated in this behavior in the past and No means the subject has not.
Under private conditions, have the subject perform the following:
1\. Flip a coin.
2\. Have the subject answer Yes if they have participated in the illegal
sharing of files.
3\. If the subject has not done this activity, have them answer No only if
they flipped 'tails' in step #1. Otherwise, if the coin came up 'heads', they
answer Yes.
The true Yes proportion in the survey population of size n can then be
determined by (Y_count - N_count) / n.
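A rough simulation shows why that estimator works (plain Python sketch, the
numbers here are made up, not from any actual survey):

import random

def run_survey(true_rate, n):
    # true_rate: actual fraction who did the activity; n: number of subjects
    yes = 0
    for _ in range(n):
        did_it = random.random() < true_rate
        if did_it:
            yes += 1                 # participants always answer Yes
        elif random.random() < 0.5:
            yes += 1                 # didn't do it, but the coin came up heads
    no = n - yes
    return (yes - no) / float(n)     # recovers the true Yes proportion

print(run_survey(0.116, 100000))     # prints something close to 0.116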
~~~
anamax
While that "works", what fraction of survey participants will actually do
that?
I suspect that many/most folks who are reluctant to admit that they've done
something won't admit it even if you tell them that other people "will"
falsely admit to doing said thing based on a coin flip. That reluctance is
rational, because I suspect that many people who haven't done said thing won't
say that they have just because the coin tells them to.
------
hughprime
Title is silly. The wonderful thing about statistics is that if you have a
truly random sample of 1176 people you can extrapolate from 136 people to
seven million (plus or minus a certain error bar which I'm too lazy to figure
out right now).
The other points are somewhat valid, but by the usual standards of political
misuse of statistics this is pretty small beer.
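For the lazy, the error bar takes about three lines to work out
(back-of-the-envelope, assuming a simple random sample with no bias):

from math import sqrt

n, p = 1176, 0.116            # sample size, observed proportion
se = sqrt(p * (1 - p) / n)    # standard error of a sample proportion
print(1.96 * se)              # ~0.018, i.e. roughly +/- 1.8 points at 95% confidence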
~~~
ajross
Yeah, I was immediately screaming the same thing. Arguing that "only" a tiny
subset of a population is involved with a survey as a basis for rejecting its
results is just plain ignorance.
The other stuff in the article does seem dodgy though, like arbitrarily
tacking on 50% to reflect an assumed-but-unmeasured bias in the input sample.
Ridiculous.
Still, the title is innumerate, sensationalist junk, and the poster should be
ashamed.
~~~
dkokelley
The title was actually copied directly from the article. I completely agree
that the title appears to be linkbait, but the blame should be placed on the
editor and/or author of the post, not the poster.
~~~
pbhjpbhj
The editor/author created the title to be sensationalist on purpose. HN is, I
hope, not here to offer sensational headlines merely to grab extra viewers but
as a way to share knowledge, information and wisdom. We should have higher
standards for titles here IMO.
What I'd like to see is a subtitle (or tagging) system with a descriptive
subtitle that can't be applied by the poster only by someone else. Poor
subtitles would be downmodded too.
------
olkjh
Remember when they used to bust CD pirates and claim there were 1000s of
copying machines? They counted a 56x drive as 56 copying devices.
~~~
hughprime
At the risk of sounding cliched, citation required?
~~~
trezor
Sorry. I'm not going to bother digging up a source for this, but if it adds
any credibility to his statement I remember this as well.
------
gaius
Everything New Labour says is a lie.
~~~
gjm11
In which respect they differ from other major political parties ... how,
exactly?
~~~
gaius
I'd say the Lib Dems are basically honest. Problem is they're batshit crazy
too.
| {
"pile_set_name": "HackerNews"
} |
How To Get 30 Million Facebook Fans - schlichtm
http://www.forbes.com/sites/tomiogeron/2011/12/19/30-under-30-tracks-bys-founders-on-how-to-get-30-million-facebook-fans/
======
latchkey
Yawn, this is basically just an advertisement article for UStream, Socialcam
and Crowdbooster.
------
danso
I'm guessing by how this article on how to use UStream (sample advice: "While
you're live, there will be a live chat from the fans that you need to engage
with!") has already hit the HN front with 8 points in almost as many minutes,
that we'll someday see a "How to Get 30000 HN Karma!" piece
------
Slimy
This is not an article. This is an advertisement.
| {
"pile_set_name": "HackerNews"
} |
An Ex-Car Rental Agent’s Money Saving Advice - raymondhome
http://bucks.blogs.nytimes.com/2010/11/15/an-ex-car-rental-agents-money-saving-advice/
======
binarray2000
While the NY Times article is a nice summary, you can still read the source
[http://www.edmunds.com/advice/buying/articles/165627/article...](http://www.edmunds.com/advice/buying/articles/165627/article.html)
which is part of a broader series
<http://www.edmunds.com/confessions/>
| {
"pile_set_name": "HackerNews"
} |
Do you know how much your computer can do in a second? - luu
http://computers-are-fast.github.io/
======
userbinator
Alternatively: do you know how much your computer _could_ do in a second, but
isn't, because the majority of software is so full of inefficiency?
In my experience, this is something that a lot of developers don't really
comprehend. Many of them will have some idea about theoretical time
complexity, but then see nothing wrong with what should be a very trivial
operation taking several _seconds_ of CPU time on a modern computer. One of
the things I like to do is tell them that such a period of time corresponds to
several _billion_ instructions, and then ask them to justify what it is about
that operation that needs that amount of instructions. Another thing is to
show them some demoscene productions.
I got a few of these questions wrong because I don't use Python, but I could
probably say with reasonable confidence how fast these operations _could_ be.
Related articles:
[https://en.wikipedia.org/wiki/Wirth%27s_law](https://en.wikipedia.org/wiki/Wirth%27s_law)
[http://hallicino.hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_Dua...](http://hallicino.hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_DualCore_You_Wont_Believe_Who_Wins)
(I know title is a BuzzFeed-ism, but this article came from before that era.)
~~~
bikeshack
Some of the work of Distributed.net (
[http://www.distributed.net/Main_Page](http://www.distributed.net/Main_Page) )
is wonderful. Does anyone know if this idea could be more than it is
currently, now that computers (more than ever) are sitting idle and not
contributing their cycles in any meaningful way? Even 5 minutes of raw 100% CPU
usage per device could do some serious computation, theoretically superseding
modern supercomputers.
~~~
pjc50
Computers more than ever _rely_ on not being at 100% CPU all the time, because
the increased power consumption and heat dissipation is a problem. Instead
it's all about the "race to idle": do the work and then go to sleep for a few
milliseconds to cool down.
~~~
reubenmorais
Case in point: with my MBP battery, I can get 8 hours of browsing the Web,
reading articles, watching a YouTube video or another. But if I spin a
parallel build that uses 100% of all cores for about 15 minutes, I eat through
half of my battery life.
~~~
agumonkey
I wonder what the sleep-state/consumption curve looks like. Linear or not?
------
barrkel
... wherein you learn how slow Python is, and learn that the author severely
underestimates how fast optimized C can be.
Many of these questions are heavily dependent on the OS you're running and the
filesystem used, and of course the heavy emphasis on Python makes it hard to
make good guesses if you've never written a significant amount of it. I mean,
I have no idea how much attention was paid to the development of Python's JSON
parser; it's trivial to write a low-quality parser using regexes for scanning,
OTOH it could be a C plugin with a high-quality scanner, and I could
reasonably expect 1000x differences in performance.
Interpreted languages tend to have less predictable performance profiles
because there can be a large variance in the amount of attention paid to
different idioms, and some higher-level constructs can be much more expensive
than a simple reading suggests. Higher level languages also usually make
elegant but incredibly inefficient implementations much more likely.
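Rather than guessing, it's a one-minute job to measure; something like this
(throwaway payload, and the numbers will vary wildly by machine and document
shape):

import json, timeit

doc = json.dumps({"users": [{"id": i, "name": "user%d" % i} for i in range(100)]})
n = 10000
secs = timeit.timeit(lambda: json.loads(doc), number=n)
print("%.0f parses/sec of a %d-byte document" % (n / secs, len(doc)))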
~~~
Matumio
Python's JSON parser will obviously create Python objects as its output. There
is a limit to how much you can gain with clever C string parsing when you
still have to create a PyObject* for every item that you parsed. Because of
this, I don't think you can gain 1000x performance with C optimizations unless
the parser is really horrible (unlikely, considering the widespread use of
JSON).
~~~
tim333
There are some speed comparisons of Python json parsers here
[http://stackoverflow.com/questions/706101/python-json-
decodi...](http://stackoverflow.com/questions/706101/python-json-decoding-
performance)
Yajl (Yet Another JSON Library) seems to go about 10x faster than the standard
library json
------
comex
The first example (sum.c) is mistaken: it says the number of iterations per
second is 550,000,000, but actually any compiler with -O will remove the loop
entirely (since the sum variable is not used for anything), so the execution
time does not depend on the number at all. The answer is limited only by the
size of the integer, and the program will always take far less than one
second.
~~~
schoen
You're completely right!
Other readers, check it out for yourself with
gcc -g sum.c ; echo 'disas main' | gdb ./a.out
gcc -g -O2 sum.c ; echo 'disas main' | gdb ./a.out
~~~
dima55
Pro tip:
gcc -S -o- sum.c
~~~
schoen
Oh yeah, that assembly already existed in order to create the binary in the
first place!
Thanks for the tip.
(The gdb disassembly shows memory offsets, which might be helpful for some
purposes.)
------
kator
Funny but true, many people think computers are "smart". When I am confronted
with this statement in the general public I always remind them: "Computers are
fast, at well computing, humans not so much. Computers however are stupid,
they follow my directions exactly as I give them and will keep doing the same
stupid thing until I figure out my mistake."
When we have a computer that can read the original post and give estimates and
comment here on HN I will be impressed. Until then it's just a faster z80 to
me, amazing, don't get me wrong, the things we can do today with the power at
our disposal starts to feel like magic. [1]
All that said it makes me sad when I find code that someone didn't bother to
think through or even profile, and that ends up consuming far more resources
than it should. It's true that "premature optimization is the
root of all evil"[2] however at some point it can be worth you time to review
your assumptions and crappy code and give it a tune up.[3]
[1]
[https://en.wikipedia.org/wiki/Clarke%27s_three_laws](https://en.wikipedia.org/wiki/Clarke%27s_three_laws)
[2]
[https://en.wikiquote.org/wiki/Donald_Knuth](https://en.wikiquote.org/wiki/Donald_Knuth)
[3]
[http://ubiquity.acm.org/article.cfm?id=1513451](http://ubiquity.acm.org/article.cfm?id=1513451)
~~~
Eleutheria
Thank god they're stupid, can you imagine a smart robot that can think a
billion times faster than us? It takes us a whole lifetime to generate new
knowledge (a PhD), but it would take them just seconds. Now imagine all that
knowledge accumulated in a couple of days, a week or a month. The last century
alone has brought us exponential discoveries with all the technological
advancements on our side.
No, we can't even comprehend.
~~~
Retra
You're describing a system that can quickly solve a large number of problems,
and you conclude that this is undesirable somehow?
------
blakecaldwell
As a developer, I think we'd all be better off if all software was developed
on 5-year-old machines, databases pre-loaded with a million records, and Redis
and Memcached swapped out with instances that use disk, not RAM.
~~~
candeira
Not related to performance, but please let's add "on machines connected with
average DSL speeds and sporting medium-resolution screens."
~~~
kps
… and phones with a 1G per month data cap, and no connectivity half the day.
~~~
Tyr42
1G? That's so generous. Try surviving on 20MB a month.
It's possible, but you really notice whenever things fail to be cached. (I'm
looking at you google maps!)
~~~
zymhan
You can save an offline version of a Google Map in the smartphone app:
[https://support.google.com/gmm/answer/3273567?hl=en](https://support.google.com/gmm/answer/3273567?hl=en)
~~~
Tyr42
Yes, but it will get deleted if your phone runs out of space (at least, that's
what I'm assuming happened to the map, because I did download a local cache
before leaving the hotel).
------
dilap
I like the idea, but I feel like as soon as I'm caring about performance and
looking at Python code, something has gone terribly wrong.
~~~
usrusr
But the basics are pretty much the same, no matter if it is python or
assembly: does this one-liner run entirely or mostly in L1 cache, or does it
have to wait for RAM access repeatedly? Does it have to wait for disk or does
it have to wait for network? Repeatedly? People who fail at this won't be able
to understand the difference between situations where python or not doesn't
matter much and those where it does.
Understanding that "computers are fast" (even in python!) is a very important
step towards understanding where we make them slow and whether that is because
of waste or because the task is naturally expensive.
Based on your skepticism I assume that you just haven't had much exposure to
people who are really bad at these things, despite having all the formal
education (and the paycheck to match). "I'm working in ${absurdly high level
language}, of course I'm not supposed to care about performance" is what they
tell you before venturing off to make a perfectly avoidable performance
blunder that would be crippling even in fully vectorized assembly, followed by
a few days spent transforming all their code into a different, but perfectly
equivalent, syntactic representation that looks a bit faster.
~~~
dilap
Good points.
& probably there are more python coders out there that could benefit from
developing this kind of thinking than C programmers, so it makes sense from
that perspective, too.
(Side note: It's a trickier exercise in python than in C, which is itself a
trickier exercise than plain assembly.)
------
exacube
Author makes a comment that "If we just run /bin/true, we can do 500 of them
in a second" \-- this is very platform dependent -- i think Linux' process
creation is supposed to be 1-2 orders of magnitude faster than Windows, for
example (i don't have the exact numbers though).
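It's a five-line measurement if anyone wants to check their own box (rough
sketch; this also counts Python's subprocess overhead, so treat it as a lower
bound):

import subprocess, time

n = 500
start = time.time()
for _ in range(n):
    subprocess.call(["/bin/true"])
print("%.0f processes/sec" % (n / (time.time() - start)))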
~~~
LukeShu
The implementation of true also makes a difference! Not quite an order of
magnitude difference, though (except for the shell builtin).
method         Hz      comment
--------------------------------------------------------
empty file     500     an empty file gets passed to /bin/sh
dynamic libc   1000    "int main() { return 0; }" -> gcc
static libc    1500    the same, but with "gcc -static"
assembly       2000    see below
bash builtin   150000  avoids hitting the kernel or filesystem
The empty file is the "traditional" implementation of true on Unix.
The assembly solution was my attempt at doing the littlest amount possible,
because libc initialization still takes time:
.globl _start
_start:
movl $1, %eax # %eax = SYS_exit
xorl %ebx, %ebx # %ebx = 0 (exit status)
int $0x80
~~~
kragen
This depends in part on how big the process that's forking is.
[http://canonical.org/~kragen/sw/dev3/server.s](http://canonical.org/~kragen/sw/dev3/server.s)
manages to get quite a bit more than 2000 forks per second out of Linux, which
might be in part because it only has two to four virtual memory pages mapped.
(see [http://canonical.org/~kragen/sw/dev3/httpdito-
readme](http://canonical.org/~kragen/sw/dev3/httpdito-readme) for more
details.)
------
rcconf
I got 10 / 18, that's a pass! I learned some of these numbers from doing a lot
of stress tests on the game I work on.
I think the really big thing is to actually create some infrastructure around
your product to run performance tests whenever you're developing a feature.
That's the only way you're ever going to get good data.
As an example, the SQL tests will act very differently depending on if the
table was in the buffer pool, or it had to be fetched from disk (I wrote my
own tool to run tests on MySQL if anyone is interested,
[https://github.com/arianitu/sql-stress](https://github.com/arianitu/sql-
stress))
~~~
emn13
14 / 18 and I don't really program python (e.g. have no idea what the bcrypt
lib's defaults in python are...) - but performance is something I've always
cared about, and most of these are things you might happen to know.
I'm surprised by the poor memory performance in his tests; my machine get's
around an order of magnitude better performance in terms of throughput; which
leads me to believe he's compiling using a very outdated gcc, and/or has
really slow memory (laptops- you never know), and/or (reasonable, since he
only mentioned -O2, but depends on the bitness of the compiler) he's compiling
in "compatibility with 80386" mode.
I think it's odd that people still haven't quite figured that one out yet.
People use "-O2" all over the place, when that's rarely faster than "-O3", and
they leave out one of the simplest optimization options the compiler has -
"-march=native".
~~~
falcolas
> have no idea what the bcrypt lib's defaults in python are
It defaults to 12, IIRC.
------
lqdc13
The grep one is tricky. If no characters match, it's fast.
But if some match, and if it is ignoring case, it's much slower. It's actually
faster to read the whole file into memory, lowercase it and check with python
for index of match.
~~~
mehrdada
Assuming your pattern is static, it shouldn't be much slower. String matching
can be done in linear time with some preprocessing: check out Knuth-Morris-
Pratt and Boyer-Moore algorithms.
Basically, the idea is that you build a deterministic finite state automaton
and try feeding the string through it. Each character would cause exactly one
automaton transition. Therefore, you can do the whole thing in O(n) after you
pay the cost of preprocessing to build the automaton, with a quite tiny
constant for small patterns.
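A minimal sketch of the KMP version of that idea (toy code; grep itself
reportedly uses Boyer-Moore for fixed strings plus a pile of other tricks):

def build_table(pattern):
    # table[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it
    table = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = table[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        table[i] = k
    return table

def find(text, pattern):
    table, k = build_table(pattern), 0
    for i, c in enumerate(text):
        while k and c != pattern[k]:
            k = table[k - 1]
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1   # each text character is consumed exactly once
    return -1

print(find("some long haystack", "hay"))  # -> 10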
~~~
weinzierl
Actually Boyer-Moore (somewhat counter intuitively) is faster for longer
search strings than for short ones.
It makes sense if you think about a search string that has the same length as
the searched string. If they don't match you can find that out with a single
character comparison.
------
CyberDildonics
Even when you get past all the indirection and interpreted languages people
use there is STILL usually 12x - 100x the speed left on the table.
Even in a native program heap allocations can slow something down to 1/7th.
After that memory ordering for cache locality can still gain 10x - 25x
speedups.
After that proper SIMD use (if dealing with bulk numeric computations) can buy
another 7x (that's the most I've gotten out of AVX and ISPC).
Then proper parallelism and concurrency are still on the table (but you better
believe that the concurrency can be very difficult to make scale).
The divide between how fast software can potentially run and how fast most
software actually runs is mind blowing.
------
scandinavian
Interesting enough, but kinda predictable. I ran most of the tests using PyPy
2.7 on OS X for fun. As expected PyPy performed vastly better in almost all
tests, as they are all loop heavy, so the JIT can get to work.
As an example, for the first test I got:
pypy test.py 1000000000 1.01s user 0.03s system 99% cpu 1.042 total
python test.py 55000000 1.02s user 0.01s system 99% cpu 1.038 total
So about 18 times faster. On most tests PyPy was 3-10 times faster than
cPython. So what does this tell us? Nothing really, the benchmarks are not
really indicative of anything you would do with Python. Oh, and PyPy is very
fast at some stuff.
~~~
rockmeamedee
I don't think they're benchmarks though. I think the great part about this
piece is that it gives people more intuition about computer speeds in specific
use cases to identify bottlenecks better. If you have a complex operation like
serving a web page, and you measure each part of the process, this page gives
you a feel for what the ideal cases of file IO, memory access, computation,
serialization and network access are so you can sort of tell what to fix a lot
faster. Essentially a broader version of Numbers Every Computer Programmer
Should Know.
------
sdkmvx
Algorithms matter. Do you know how Vim inserts text?
It's exponential. It's worse than a shell loop spawning a new echo process
every iteration.
[http://www.galexander.org/vim_sucks.html](http://www.galexander.org/vim_sucks.html)
~~~
rasz_pl
one of my fav performance bugs:
[https://bugzilla.gnome.org/show_bug.cgi?id=172099](https://bugzilla.gnome.org/show_bug.cgi?id=172099)
Reported: 2005-03-30, unpatched to this day, because parsing opened files on
the fly recursively with O(2^n) complexity is enough.
------
devit
The first C result is absurd, not sure how the author could have gotten it.
First of all, the code as written will just optimize to nothing, so we need to
add an asm("" : "=g" (s) : "0" (s)) in the loop to stop strength reduction and
autovectorization, and we need to return the final value to stop dead code
elimination.
Once that is done, the result is more than 2 billion iterations per second on
a ~3 GHz Intel desktop CPU, while the author gives an absurd value of 500m
iterations which could not have been possibly obtained with any recent Intel
Xeon/Core i5/i7 CPU.
BTW, the assembly code produced is this:
1:
add $0x1,%edx
add $0x1,%esi
cmp %eax,%edx
jne 1b
Which is unlikely to take more than 1/2 cycles to execute on any reasonable
CPU as my test data in fact shows.
~~~
CydeWeys
Well there's always flags to prevent compiler optimizations, or maybe the
example was purposefully presented in readable C, not whatever hack you'd need
to do to bypass optimization. Inline assembly isn't exactly C anymore.
But yeah, I was surprised by the number of operations per second too. I was
thinking it had to be over a billion.
------
kabdib
It's pretty amazing how much computation you can buy for less than a cup of
coffee.
For less than 20 cents (in quantity, perhaps) you can buy a chip that out-
performs the personal computers available in the early 80s. Of course you have
to add peripherals to bring it to true parity, but you can probably have a
working board for about five bucks that'll run rings around an Apple II or a
vintage PC. The keyboard and monitor are the most expensive components.
Likewise, memory. Recently I was thinking about doing some optimization and
reorganization of some data for a hardware management project, when I realized
that the data, for the entire life of the project, would fit into the CACHE of
the processor it runs on. Projecting out five or six years, it would _always_
fit. I stopped optimizing.
Most of the time, the most valuable resource is the time of the person
involved. Shaving milliseconds of response time rarely matters, shaving an
hour of dev time does. (There are big exceptions to this when you are
resource-constrained, as in video games, or hardware environments that need to
use minimal memory or cycles for cost reasons).
Premature optimization still remains a great evil.
~~~
Merad
> you can probably have a working board for about five bucks that'll run rings
> around an Apple II or a vintage PC.
Hell, you can do even better than that. Assuming that CHIP
([https://www.kickstarter.com/projects/1598272670/chip-the-
wor...](https://www.kickstarter.com/projects/1598272670/chip-the-worlds-
first-9-computer/description)) delivers on its Kickstarter, for $9 you get a
1 GHz CPU and 512 MB RAM. That's roughly on par with an average home PC from
about 2002-2003.
If you bump your budget up to $40, you get a Raspberry Pi 2 with a quad core 1
GHz chip and 1 GB of RAM. Now we're talking parity with a typical home PC
from 10 years ago, or less.
------
Veratyr
I was kinda stunned when I found out how much my computer can actually do.
I've been playing with Halide[0] and I wrote a simple bilinear demosaic
implementation in it and when I started I could process ~80 Megapixels/s.
After optimising the scheduling a bit (which thanks to Halide is only 6 lines
of code), I got that up to 640MP/s.
When I scheduled it for my Iris 6100 (integrated) GPU through Metal (replace
the 6 lines of CPU schedule with 6 lines of GPU schedule), I got that up to
~800MP/s.
Compare this to naïvely written C and the difference is massive.
I think it's amazing that my laptop can process nearly a gigapixel worth of
data in under a second. Meanwhile it takes ~7s to load and render The Verge.
[0]: [http://halide-lang.org/](http://halide-lang.org/)
------
suprjami
Yes, actually. In one second it can sieve the first ~33 million numbers for
primes using a Sieve of Eratosthenes. This requires about 115MiB of RAM.
~~~
dbaupp
You'll be happy to know that computers can go even faster, and it doesn't need
anywhere near 3.5 (= 115e6/33e6) bytes per number: you can use a single bit
for each one (3.9 MiB), or only store numbers that aren't obviously composite
(e.g. only odd numbers gives half that, and using a 30-wheel gives 1.0 MiB).
In any case, you can do a _lot_ better than merely 33 million: e.g.
[http://primesieve.org/](http://primesieve.org/) uses some seriously optimised
code and parallelism to count the primes below some number between 10 billion
and 100 billion in a streaming fashion (meaning very small memory use). For
non-streaming/caching the results, I'm not sure how primesieve does, but my
own primal[0] (which is heavily inspired by primesieve) can find the primes
below 5 billion and store everything in memory in 1 second using ~170 MiB of
RAM on my laptop (and it doesn't support any parallelism, at the moment), and
the primes below 500 million in ~0.75 seconds on a Nexus 5, and ~1 second on a
Nexus S (although both devices give very inconsistent timings).
[0]: [https://github.com/huonw/primal](https://github.com/huonw/primal)
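For anyone curious, the one-bit-per-number trick looks roughly like this in
plain Python (far slower than the C sieves above, but the memory arithmetic is
the same):

def sieve(limit):
    # one bit per number: ~limit/8 bytes of flags instead of a whole
    # Python object (or even a byte) per number
    bits = bytearray(limit // 8 + 1)
    primes = []
    for n in range(2, limit):
        if not bits[n >> 3] & (1 << (n & 7)):      # n not marked composite
            primes.append(n)
            for m in range(n * n, limit, n):
                bits[m >> 3] |= 1 << (m & 7)       # mark multiples of n
    return primes

# for limit = 33 million the flag array is ~4 MiB, the 3.9 MiB figure above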
------
ademarre
It would be helpful to know the default work factor for the bcrypt hash in
that Python library, since none was provided. Apparently it's 12:
[https://pypi.python.org/pypi/bcrypt/2.0.0](https://pypi.python.org/pypi/bcrypt/2.0.0)
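A quick way to feel what that work factor costs, using the same package
(rounds is the log2 of the iteration count, so each +1 roughly doubles the
time; 12 is the library default mentioned above):

import time
import bcrypt

for rounds in (10, 12, 14):
    start = time.time()
    bcrypt.hashpw(b"correct horse battery staple", bcrypt.gensalt(rounds=rounds))
    print(rounds, "%.2f sec" % (time.time() - start))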
~~~
vessenes
I guessed it was one, and answered a couple orders of magnitude off. I'm going
to give myself partial credit, since I actually thought about the work factor
before answering. But not enough to go read the docs, unlike you!
------
quaffapint
I run through loops with multiple inner loops playing around with football
stats. It will hit 4+ million combinations in under a second - all on an 8 yr
old Q6600 processor. Just amazes me every time the power we have available to
us.
------
Uptrenda
This is actually a useful site for learning about the costs of code. What
would be more useful is if a multi-language version were developed which I
imagine could turn into a pretty cool open source project.
------
tchow
This is extremely cool. Someone needs to do this for all the languages that
are commonly used. Knowing general speeds of various calls for javascript,
ruby, elixir, etc. would be great for web development.
~~~
amelius
There's a number of benchmarks at [1]. It would be nice if somebody would
compile+run them on an AltJS environment and publish the result for different
browsers.
[1]
[http://benchmarksgame.alioth.debian.org/](http://benchmarksgame.alioth.debian.org/)
------
amelius
Computers are fast? Try ray-tracing, or physics simulations in general :)
~~~
FLUX-YOU
Silly mortals and their non-N-body problems!
------
eklavya
So if it's all so fast what does atom (latest) do with it all?
------
cweagans
The first time I clicked on this link, I thought it was a joke, because the
page never loaded. I think there was some network issue at my ISP and things
weren't routing properly, but it tried to load for like 40 minutes. When I
finally clicked back on the tab and saw the URL, I laughed and closed it.
Clicked back again today from Hacker Newsletter and saw it was actually a
thing :P
------
kristopolous
Anyone else been struggling to get their suite
([https://github.com/kamalmarhubi/one-
second](https://github.com/kamalmarhubi/one-second)) running without
modification?
I've had to modify the python code in a few places ... don't know why it isn't
working out of the box - feel like I must be doing something wrong.
~~~
thedufer
With Python this is usually a version mismatch - 2.x and 3.x are subtly
incompatible.
~~~
mappu
Exacerbated by the fact the repo uses `/usr/bin/env python` instead of
explicitly python2 or python3 - which means it will use python 2.x on any
PEP394-compliant system, and python 3.x on e.g. Arch.
------
graycat
Yes, to some extent, and some of the examples are astounding: E.g., I wrote
some simple C code for solving systems of linear equations, and for 20
equations in 20 unknowns I got 10,000 solutions a second on a 1.8 GHz single
core processor. Fantastic.
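The same experiment is a few lines with numpy these days (LAPACK does the
heavy lifting; exact numbers will obviously depend on the machine and BLAS
build):

import time
import numpy as np

a = np.random.rand(20, 20)
b = np.random.rand(20)
n = 10000
start = time.time()
for _ in range(n):
    np.linalg.solve(a, b)
print("%.0f solves/sec" % (n / (time.time() - start)))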
------
skimpycompiler
3/18, most of the time I picked something 10x slower than the lower number.
Guess I'm stuck in the past :(
------
ctdonath
It's fast enough to do something useful before the light from the screen
reaches your eye.
------
em0ney
Thanks for the post! The part on serialisation blew my socks off - big eye
opener
------
rasz_pl
>new laptop with a fast SSD
or is it macbook with the fastest consumer grade ssd on the market (until
yesterday I think)? :)
------
introvertmac
nice
------
rkwasny
Awesome! I will definitely include this in interview questions, it's a very
good way to check how much someone knows about computers.
~~~
forgottenpass
That's probably a bad idea. This falls into the realms of pointless trivia and
needlessly-specific experience in a narrow domain. If you're actually worried
about optimization, these aren't the questions you would ask anyway.
~~~
kragen
It might depend on how wrong the answers are. If you ask, "How many HTTP
requests per second can Python's standard library parse on a modern machine?"
then answers in the range of 100 to 1 million might be acceptable, but if the
answer is "10" or "1" or "1 billion", then you know the person doesn't have
much of a clue, about Python in the former case or about computers in the
latter case.
| {
"pile_set_name": "HackerNews"
} |
A Better Qt Because of Open Source and KDE - t4h4
http://www.olafsw.de/a-better-qt-because-of-open-source-and-kde/
======
giancarlostoro
> In case The Qt Company would ever attempt to close down Open Source Qt, the
> foundation is entitled to publish Qt under the BSD license. This notable
> legal guarantee strengthens Qt. It creates trust among developers,
> contributors and customers.
Woah, I had no idea about this. I wonder what kind of new changes would take
place if Qt were BSD licensed, such as languages like D embedding it as a
solution for UIs as part of the standard library (they already do this for
SQLite and Curl).
~~~
toyg
Qt is _massive_ , I doubt small communities would rush to integrate it and
make it their job to maintain it all...
I doubt anyone in OSS communities is seriously deterred from using Qt because
of the LGPL. It’s just a very big and very complex project that requires a lot
of manpower to “tame”.
~~~
ori_b
Already happened: [https://www.copperspice.com](https://www.copperspice.com)
~~~
Rochus
Thanks for the hint. Didn't know it. As it seems it is no longer Qt. They once
started with Qt (don't know which version) but they have "completely diverged"
as they say. The goal is not to have a free Qt but to have "Extensive use of
modern C++ functionality" (with all these buzzwords). Not even shure if their
containers are still implicitly shared. Will even though continue to have a
look at it.
~~~
dev-il
> don't know which version
apparently, they forked from Qt 4.8 and QML is disabled in Copperspice. Ref:
[https://www.copperspice.com/docs/cs_overview/timeline.html#t...](https://www.copperspice.com/docs/cs_overview/timeline.html#tm-05-2012)
[https://news.ycombinator.com/item?id=9685022](https://news.ycombinator.com/item?id=9685022)
[https://forum.copperspice.com/viewtopic.php?f=11&t=1152](https://forum.copperspice.com/viewtopic.php?f=11&t=1152)
------
cjensen
The behavior of the Qt company lately is a bit troubling.
First, the core can be licensed under Commercial or LGPL licensing. This lets
non-paying developers use the core in commercial software. This policy was
established to ensure trust with the community during one of the many company
transitions. For all new modules, Qt evades that requirement by licensing
under Commercial or GPL. I have mixed feelings on this.
Second and more importantly, they have started sending aggressive audit
letters to customers. I guess that makes sense from a bean-counter point of
view where you poke the customer and try to get them to buy more licenses
either because the customer actually needs the licenses, or because the
customer is afraid to let any dev work without paying protection money. This
is a huge pain in the neck for me as a paying customer. They even sent the
aggressive audit letter to an old license we have that had not been renewed
(or used) in around a decade.
I'll definitely be rethinking my relationship as a customer when the next
renewal comes up.
~~~
api
It's weird what people and businesses will and will not support.
Businesses will shovel loads of money into SaaS and cloud hosting without
blinking, but support a programming tool? Never! Another hundred Office users
and 50 more AWS VMs? No problem.
People will spend $10 on a coffee but would never spend $5 to support a
project that saves them hundreds or thousands of hours of work. They'll spend
$15/month to host a site, but would never pay for the software that runs it
even though that took far more effort than racking up some servers.
No wonder everything is surveillanceware and mega-corp silos. We get what we
pay for, or rather we don't get what we won't pay for... like independent
software.
~~~
cjensen
Sure. In the case of Qt, it was very expensive but well worth paying for
because it does a good job. The LGPL stuff is important because it provides us
with an "out" if the Qt company goes crazy with prices.
Avoiding the "out" makes me not want to make use of the new modules. And the
hassle of audits makes me question the cost of the inconvenience to me, the
dev, of having a license.
It's tradeoffs all the way down.
~~~
Rochus
It's outrageously expensive. And you can't just buy what you really need, just
all or nothing. In many projects I only need Qt Core; for that I would have to
buy a license for everything from these people with a far worse contract than
LGPL and pay royalties. No thanks.
------
dev-il
Sadly, I fear Digia (and its owned spin-off, the Qt Company) will be the death
of Qt:
Unlike Nokia, which bought Qt and opened it to a more liberal license
(LGPLv2.1) because it saw it as a strategic platform basis to attract
developers to its platform (that is, until the MS shill Elop was injected as
Nokia's CEO and destroyed the company… and sold Qt off)…
… unlike Nokia, the Digia-owned "Qt Company" (now publicly traded as QT-COM on
Nasdaq Helsinki) sees Qt as a direct revenue source to monetize to the maximum
and developers as milk cows to maximally squeeze out as long as possible. And
unlike Nokia, Digia's "Qt Company" does so in a quite unsustainable way. They
enormously increased the prices of commercial licenses to a level that can
only be qualified as extortion, and they do whatever possible to force
developers out of LGPL and into Pay-to-Play:
they switched Qt's open source edition from LGPLv2.1 to LGPLv3… and they
switched from LGPL to GPL or commercial only for most new modules, including
QtQuick 3D.
The bottom line is: it's really going down the drain, and lots of developers
of Qt-based programs and apps are drawn away and looking for something new.
The need for a new modern and more liberally licensed cross-platform UI lib is
bigger than ever.
Also, many devs are even switching out of Qt-based cross-platform development
and back to separate codebases for OS-dependent native UI toolkits… which is
kinda sad, though partly alleviated by some other factors (such as the
similarities between Swift and Kotlin)
~~~
de_watcher
Switched QtQuick 3D from LGPL to GPL?
Is that the regular license FUD or what? I don't see which 3D module you're
talking about.
~~~
Kelteseth
No, QtQuick3D [1] was GPLv3 only from the beginning (it is not even released
yet, but the Qt 5.14 release is tomorrow). It is a bit awkward because now there
are competing 3D engines inside one toolkit.
[1] [https://doc-snapshots.qt.io/qt5-5.14/qtquick3d-index.html](https://doc-
snapshots.qt.io/qt5-5.14/qtquick3d-index.html)
------
toyg
Trolltech were pretty awesome, in their day. The “poison-pill BSD” setup is
pretty smart; if I remember correctly, it was introduced when they started
wobbling a bit from the commercial perspective, in order to keep the community
calm while they went looking for buyers (which they eventually found in
Nokia). It would be sad if the switch had to be triggered at a time when Qt is
supposed to be “back in the game” after years of uncertainty.
Dear Qt owners, don’t mess around. If you can’t make money from Qt, it’s not
because of the license. Build more bridges, and more developers will come to
you.
~~~
Nokinside
>If you can’t make money from Qt, it’s not because of the license.
QT stock is up 124% this year, +247,06% last 3 years.
Qt has a de facto monopoly in embedded, medical, automotive, appliance and
industry automation. It works in Embedded Linux, INTEGRITY, QNX, and VxWorks.
Qt just launched Qt for MCUs (bare metal toolkit for low end
microcontrollers). It runs on Cortex-M with several different 2D accelerators.
It's yet another market with no serious competitors.
~~~
toyg
That’s good, so why change the license now? Is it just greed?
~~~
Nokinside
Nobody knows what type of change they want. Their paying customers don't care
because the product is double licensed.
I suspect it has something to do with some 3d libraries and code they would
like to include, but I don't know.
------
jbk
> Background is the wish of The Qt Company to change some of the contract
> provisions. It is still a bit unclear which ideas exactly they are pursuing
I think this is the reason of the timing of this post.
Because else, this post is just reminding the existing contracts around Qt.
~~~
hoistbypetard
Can you briefly explain (or link an existing explanation) a summary of the
changes the Qt Company is asking for, to someone who's interested but not
intimately familiar with the details?
~~~
thomascgalvin
It says in the article and the quote that the proposed changes are still
unclear.
~~~
hoistbypetard
I understand that. I was hoping someone who's closer to the matter could
characterize them in broad strokes even if details were still unclear.
------
shmerl
It would be good for KDE to get stronger backing, but I've heard RedHat avoids
backing KDE and focuses on Gnome, due to aversion¹ to contributor agreements²,
is that correct in that case?
1\. [https://opensource.com/article/19/2/cla-
problems](https://opensource.com/article/19/2/cla-problems)
2\. [https://www.qt.io/legal-contribution-agreement-
qt](https://www.qt.io/legal-contribution-agreement-qt)
~~~
KozmoNau7
I recommend the KDE neon distro to every Linux-curious person I meet. It's the
latest and greatest KDE on top of an Ubuntu base, and it's by far the best
desktop distro I have tried in my ~20 years of using Linux on the desktop.
~~~
K0SM0S
A fantastic DE experience indeed.
Just some advice: public consensus is that if you don't want the bleeding edge
of KDE, Kubuntu is basically just as good (KDE over Ubuntu) and reportedly is
more compatible with various hardware — so if your laptop has issues with
Neon, try Kubuntu as a nearly identical alternative.
Note that you can get KDE on any major distro, e.g. Fedora, Arch. I can't
recommend it enough, KDE is the dream DE — great out-of-the-box, but settings
for pretty much everything, set each once and then forget it as it gets out of
your way without sacrificing any feature whatsoever. There are a few minor
glitches, but much less so than Gnome or MacOS or Win 10 in my anecdotal
experience (notwithstanding display support, that's driver-related and whole
other ballgame).
~~~
KozmoNau7
Neon _is_ KDE over Ubuntu, with a completely stock KDE packaging rather than
the slightly Ubuntu-modified KDE in Kubuntu.
So hardware support really should be identical.
------
BlueTemplar
Am I the only one that finds it weird that a document like this, that
basically shouldn't care about layout, and is very unlikely to be printed by
anyone (except maybe the author himself), would be distributed as a .pdf?
| {
"pile_set_name": "HackerNews"
} |
Show HN: ReSRC – Free programming learning resources incl. 500 free books - lalmachado
http://resrc.io/
======
eglover
Talk about information overload. :/
| {
"pile_set_name": "HackerNews"
} |
The first server side programmable graphics generator - kuszi
http://ideone.com/SQwvj
======
kuszi
Java example: <http://ideone.com/HrXrC>
------
brechin
First?
~~~
kuszi
Think so. Please let me know if you know more.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Books about industrial control systems - thecleaner
I know that control systems exist and I could find two companies that make those - Fujitsu and Mitsubishi. But I don't know anything else about how these systems are made. Books, reading-lists, articles / article sources most welcome.
======
aphextim
This was one someone once recommended to me regarding cyber-security of ICS
systems.
[https://www.amazon.com/Cybersecurity-Industrial-Control-
Syst...](https://www.amazon.com/Cybersecurity-Industrial-Control-Systems-
SCADA/dp/1439801967)
If you are looking at building them, that is something else.
------
oddly
Not books, but Honeywell and Foxboro are two other manufacturers.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Should I branch off my startup's technology into a separate company? - laundrysheet
My two co-founders and I own a profitable home services company that runs on a proprietary technology platform that I've built. Over the past two years, I've put the majority of work into the startup, with most of the effort dedicated towards building and maintaining the platform. However, over the course of the past year, my two co-founders have begun to take a backseat in the operation of the business. We've had many late-night conversations about this, but they don't appear to resolve any of my concerns. It's been extremely demotivating for me while I still continue to invest the same chunk of effort into building out the platform. Sometimes, features and fixes are delayed simply because of the occasional resentment I carry.
However, I have a proposal that I'd like to bring to the table--branching off the technology into a separate company and licensing it to the home services company to use for free. I feel this would greatly motivate me again to work on the platform--it's a win-win situation in my eyes. If this were to happen, there is still software development needed on the home services company's side to interface with the platform. What are your thoughts on this? Should I give equity to my co-founders within this new company? We did hire a short-term contractor to work on the platform, which never surpassed $5,000.00. The rest of the work was done solely by me. Interested to hear some feedback on this since this has been brewing in my head for a while!
======
mod
Depending on how you've set the business up, the company may (and probably
should) own the code, which would essentially make this not very feasible--
without a rewrite, anyway.
I've been in your shoes, in terms of co-founder not pulling his weight (it was
family, too--oops!), and I did not find a resolution. I ended up ceasing work
on what I think was likely to become a very nice lifestyle business (we were
already ramen-profitable), because I was going to have to do all of the work.
I could re-create the tech (I built everything), but I haven't, partially to
avoid any bad feelings, partially because I'm doing other things.
Given that you hired a contractor, and that presumably your technology is your
whole company's value (you seem to think it's almost all of the work that's
been done), I'm not sure why it'd be okay to take that elsewhere, except by
virtue of "these guys don't deserve it!"
The correct answer to "these guys don't deserve it" is to have clauses to
handle that in your business documents--vesting, cliffs, etc. It's almost
certainly not "take the technology and run."
You might be able to sort-of blackmail your co-founders into accepting your
deal, but I imagine it's going to be approached in that fashion. "I want this
to happen or I'm done building things" seems to be where you're at, and that's
really not okay. (Edit: I think it's not okay to demand the technology. I
think it's fine to demand their time, and perhaps a revised agreement where
they all have to put in time to vest shares!)
Don't get me wrong--I side with you. I just think the best option is probably
to not continue, or to continue at a capacity that you're comfortable with,
given their work input.
That said, I've been fucked in a few business ventures now, and perhaps I'm
erring way too far from where the money is. I don't think I have any regrets,
I'm proud of how I handle myself--but I don't have any money, either!
------
wayclever
Question: Have you and your co-founders each executed an agreement assigning
all right, title and interest in and to any IP to the company? If so, the code
isn't yours to take, and it makes no sense for your co-founders to give away
any IP rights (copyrights and potential patent rights).
Your narrative tends to cast your partners as distracted by other business
ventures, it seems that you are searching for a way to disengage. If they were
savvy, they would not agree to your proposal. The code base is company
property (that is if you have completed the process of corporate formation).
You might believe that your contribution in the form of code
disproportionately exceeds your co-founders' contributions. Had you actually
formed a company, assigned the IP to the company, determined the value of each
founder's contributions in addition to any IP (cash, equipment, space,
expertise, time commitments) you would be able to foresee how your co-founders
will respond to your proposal.
IF I'm incorrect and you have filed articles of incorporation, drafted bylaws,
and executed agreements issuing founders stock, then I suggest you consult
with your outside counsel (you know, the attorney(s) who drafted all those
agreements for you) to help ensure you don't f_ck up the operations of your
"profitable home services business".
Suggest you ask yourself the following "why am I walking away from a
profitable business? Especially when I committed to building that business
with my co-founders?"
Maybe you have realized you don't get along and so you want out. In that case,
offer to purchase the code base, and agree to payment terms that provides
enough runway for you to generate revenue and pay for the code base over time.
Finally, you wouldn't be "giving" equity to your co-founders. You would be
issuing equity in exchange for the "home services company" assigning all
right, title and interest in and to the code base and any associated IP to
your new company. If your platform is adding value, it makes little sense for
them to agree to an assignment. Instead, they should offer to license it to
your new company for royalties, equity, and continued development.
Summum Bonum.
------
santiagobasulto
I think that you already have your answer :) You want to do it, do it. I saw
your submissions, this is the second time you post this along with "What to do
if co-founders are distracted by other business ventures?". If you're not
happy, just try to do whatever makes you happy. Your time is valuable, don't
waste it being dissatisfied.
| {
"pile_set_name": "HackerNews"
} |
He Was a Hacker for the NSA and He Was Willing to Talk. I Was Willing to Listen - prostoalex
https://theintercept.com/2016/06/28/he-was-a-hacker-for-the-nsa-and-he-was-willing-to-talk-i-was-willing-to-listen/
======
batbomb
This is garbage. There is easily enough information to uniquely identify him
(lamb of god, religious convictions, recently left the NSA, social media),
which calls into question the entire premise of the article.
~~~
AdeptusAquinas
Those details almost seemed highlighted, which makes me assume the journalist
made them up to anonymise the hacker.
------
ryanmarsh
There's enough personally identifying information given for the lamb that,
assuming it's not a smoke screen, he'd be easy to dox.
My instincts tell me this was a PR/recruitment piece (or a hoax) and the
journalist fell right into it.
~~~
ryanlol
> assuming it's not a smoke screen, he'd be easy to dox.
So what? Just because he doesn't want his name in the article doesn't mean
he's scared to death about his name coming out.
------
Spooky23
Apparently the author wasn't willing to write about what he heard. Pretty low
content ratio here.
------
robocat
This article smells like the journalist has been tricked.
~~~
drawnwren
I agree. The interviewee sounds more like a script kiddie with a very powerful
script than anyone of note at the NSA.
~~~
anf
What qualifies someone to be "of note"?
~~~
drawnwren
In the context of my comment, I meant someone deeply involved in the technical
implementation of collection techniques or strategy. The interviewee comes
across as someone who is skilled at a small process but doesn't seem to be the
person who discovered the exploit or did a significant part of the creative
thinking involved in the exploit.
~~~
anf
For the purposes of an article like this, I don't think technical prowess
would be very useful. He's providing an opinion informed by speaking with a
variety of people with clearance speaking freely about their work. That's
pretty rare in its own right.
~~~
drawnwren
While you are correct, my intent was to serve as a counterpoint to statements
like this in the article, "He identified himself and his highly trained
colleagues at the NSA as a breed apart — a superior breed, much in the way
that soldiers look down on weekend paintballers. Perhaps this shouldn’t be
altogether surprising, because arrogance is one of the unfortunate hallmarks
of the male-dominated hacker culture."
------
sverige
The problem with spies of any stripe is that you can never know if they're
lying. Le Carre's books do the best job of communicating this depressing fact,
I think.
So if you can't know whether they're lying, any information they provide is
just noise ultimately, because you'll very rarely (like never) beat them at
their game; and if you do, they'll change the game.
~~~
spacecowboy_lon
Do you mean Spies or Officers?
------
simbalion
No matter what he might think he feels, he is not amongst his 'brethren' at
def con.
I thought the reference to the film 'good will hunting' was appropriate,
because the subject of the article is a sell-out.
~~~
spacecowboy_lon
That's a big ask and a very subjective POV.
~~~
simbalion
Subjective? I disagree. From everything I know of the hacker scene, working
for the NSA to invade people's privacy for Uncle Sam is not good karma.
------
systematical
Not much substance in this article...
------
vonnik
I didn't find this piece particularly illuminating, frankly.
------
alexandercrohde
Not too much new in this piece.
I wonder why the interviewee is so confident that the world will always be a
place of conflict. I can think of no rational, or no necessary, reason why
multiple conflicting powers should exist based on the lines on a map.
Perhaps it's for the same reason he is religious (i.e. he's just wrong
sometimes, like every human).
~~~
gnaritas
> I wonder why the interviewee is so confident that the world will always be a
> place of conflict.
It always has been.
> I can think of no rational, or no necessary, reason why multiple conflicting
> powers should exist based on the lines on a map.
You mean other than the fact that it's always been that way? These are just
tribes on a larger scale, humans are tribal, we fight, it's in our nature.
~~~
woodman
> ...we fight, it's in our nature.
Do we perform any action that is not "in our nature"? Do you feel the same way
about slavery which, until relatively recently, was the natural order of
things?
~~~
gnaritas
Slavery is still very much alive, it's just no longer publicly supported.
Saying something is in our nature doesn't justify it, it's merely an
explanation why something still is. As long as resources are scarce, man will
fight, that is the way of things.
~~~
woodman
> Slavery is still very much alive, it's just no longer publicly supported.
That misses the point, but I can pin the example down if that helps: ...
publicly supporting slavery, until relatively recently, was the natural order
of things. The point is that saying "something is in our nature" doesn't
justify it or explain it - because everything everybody has ever done or will
do is something in our nature.
> As long as resources are scarce, man will fight...
This is better than the nature angle, but not by much - as pretty much
anything (including adherents to a religion) can be called "resources". We
just need to crack the whole post-scarcity thing... kind of a silver lining.
~~~
gnaritas
> saying "something is in our nature" doesn't justify it or explain it
It doesn't justify it, never claimed it did, but it most certainly does
explain it.
~~~
woodman
I understand your position, but I guess I've failed to communicate mine
effectively - because you don't seem to understand me when I've said in two
different ways that your nature argument is tautological.
~~~
gnaritas
Ok, better to just say that, but I still don't agree.
> Do we perform any action that is not "in our nature"?
Of course we do, much of culture isn't in our nature, but is learned. Advanced
mathematics isn't in our nature, it is learned. "in our nature" essentially
means because it's human, and that isn't tautological imho. The OP could think
of no reason humans would have conflict over land, human nature is a reason,
this is not tautological.
~~~
woodman
Yeah we definitely aren't going to see eye to eye on this, because we don't
even agree on what it is to be human. Whereas I include the direct
consequences of biological imperatives in the definition, "Advanced
mathematics" is a direct and logical consequence of curiosity and the capacity
for relatively high level cognition, you seem to restrict the definition to
only include the biological imperatives... which leaves me to wonder at how
you differentiate the species from the rest of the animals.
> The OP could think of no reason humans would have conflict over land, human
> nature is a reason, this is not tautological.
That fits the very definition of a tautology: humans fight over land because
it is human nature to fight over land. You can substitute one or both
instances of "land" with "scarce resource" if you like, but it is still a
tautology - because land is a resource that is scarce :)
~~~
gnaritas
Your definition of human nature includes anything humans do as natural, which
is a completely useless definition of natural. Natural, to have any real
meaning, means not man-made; man's culture is man-made, our religions are
man-made; these things are not natural in that sense, and that's the only
meaningful use of that word in this context.
Advanced mathematics is not natural; it is a development of culture. Our
brains are in no way optimized for it, and learning to do it often requires
letting go of common sense. We're so bad at it that stupid machines are a
bazillion times faster at it. Maths is not in our nature, it is a product of
cultural evolution that could easily be lost should the wrong people die, and
could be reinvented with entirely different branches the next go around, if at
all.
When someone is talking about human nature, we're talking about those
behaviors that always naturally emerge in individual human development like
language, aggression, mating habits, etc, not things that may or may not
happen like the development of science or math which are artifacts of
particular cultures, not of humans in general.
> which leaves me to wonder at how you differentiate the species from the rest
> of the animals.
Why do I need to differentiate them, we're animals like any other, we do some
things far better than other animals and many things far worse than animals,
none of our abilities are unique in the animal kingdom, they're only unique in
the level at which we can perform them, animals think, humans think better;
we're only special when we choose to judge by things we ourselves are good at
and we rig the contest by setting ourselves as the bar on something we happen
to be good at and that's no different than a dolphin judging themselves
superior to us because we're terrible in water and can't echo locate. It's
hubris, nothing more.
> That fits the very definition of a tautology: humans fight over land because
> it is human nature to fight over land.
We'll just agree to disagree. I think your rephrasing is a strawman, and now
we're beating a dead horse.
~~~
woodman
> ...includes anything humans do as natural which is a completely useless
> definition of natural.
Useless for your purposes, where you are comparing things of the same kind -
you use behavior for that, not nature. Nature is used for comparing things of
a different kind, like humans vs sea slugs. Also, nature is not the same word
as natural...
> ...always naturally emerge in individual human development like language...
How is that any different from "Advanced mathematics"? No known humans have had
a written language but no numbering system, and speculation about the earliest
humans without a written language is just that, speculation.
> Why do I need to differentiate them...
So that you can quantify, classify, compare, understand, intelligently
discuss, etc.
> I think your rephrasing is a strawman...
Eh, it conveyed the exact same meaning - it just more clearly demonstrated the
logical flaw.
> ...and now we're beating a dead horse.
Maybe, but I will say that your last post communicated your thoughts on the
matter very clearly - I never would have known otherwise that we disagree on
about five other fundamental concepts.
~~~
gnaritas
> you use behavior for that, not nature.
I'll use whatever I choose to use when I'm making my point. You don't get to
define my choice of differentiation.
> Also, nature is not the same word as natural...
That's just absurdly pedantic and a ridiculous point; I defined what I meant,
take it or leave it but don't be obtuse.
> How is that any different from "Advanced mathmatics"? No known humans have
> had a written language but no numbering system, and speculation about the
> earliest humans without a written language is just that, speculation.
I think I was more than clear, naturally emerge in _individual_ human
development; i.e. all humans naturally develop it as part of their normal
life-cycle. Language for example, this is vastly different than advanced
mathematics which may not ever emerge until certain levels of culture are
accomplished. Mathematics are not a natural part of the development of the
individual human lifecycle.
> So that you can quantify, classify, compare, understand, intelligently
> discuss, etc.
Which can all be accomplished without said differentiation, so no, try again.
> Eh, it conveyed the exact same meaning - it just more clearly demonstrated
> the logical flaw.
No, it didn't.
| {
"pile_set_name": "HackerNews"
} |
Interesting Esoterica - cirgue
http://read.somethingorotherwhatever.com/
======
drallison
So many fascinating papers, so little time.
| {
"pile_set_name": "HackerNews"
} |
Google 2FA mobile breached? - ascended
https://imgur.com/a/dwwdj
======
flukus
Why do you suspect a breach and not someone that knows your phone number and
gmail address? They can try to log in as you, triggering the 2FA message to be
sent and then send you the reset message with sender spoofing.
| {
"pile_set_name": "HackerNews"
} |
Bats have outsmarted viruses–including coronaviruses–for 65M years - yread
https://www.sciencemag.org/news/2020/07/how-bats-have-outsmarted-viruses-including-coronaviruses-65-million-years
======
yread
Paper is here. Amazing that you can build such a high quality annotation for a
de novo genome
[https://www.nature.com/articles/s41586-020-2486-3](https://www.nature.com/articles/s41586-020-2486-3)
| {
"pile_set_name": "HackerNews"
} |
Did Stack Exchange staff members assist in the apprehension of Ross Ulbricht? - codezero
http://meta.stackoverflow.com/questions/199353/did-the-stack-exchange-staff-members-assist-in-the-apprehension-of-ross-ulbricht
======
kevinpet
Can we please not get distracted by police investigating criminal activity,
acting within the bounds of individualized, specific suspicion of a crime, and
keep things concentrated on warrantless wiretapping and wholesale
surveillance?
~~~
IvyMike
The question (probably unanswerable) that fascinates me is: Did the
authorities find DPR by analyzing Tor network traffic, or by some other means?
The Tor network being ineffective has wide-reaching ramifications.
I know the evidence has been presented to make us think they found him via a
series of mistakes, but the existence of parallel construction makes me
question everything.
------
sneak
This just in: absolutely no strangers (save statistically insignificant
outliers like Nacchio) will go to jail to protect your data during a police
investigation - nor should they.
Plan accordingly.
~~~
kylemaxwell
So on one hand, we have the execution of a warrant or subpoena that is
narrowly written, specific and reasonable, based on probable cause, and signed
by a judge.
On the other hand, we have recently seen much greater evidence about the
wholesale surveillance of society under secret law and apparently not
accountable in any practical sense to oversight.
The first is normal and appropriate and well within the bounds of civil
liberties guaranteed by the Fourth Amendment. The second is highly problematic -
but one doesn't necessarily lead to the other. Our civil liberties and civil
rights have always been subject to appropriate exceptions. The problem is when
those exceptions become so broad as to render the freedoms ineffective, not
that they exist at all.
~~~
rodgerd
Unfortunately the Dunning-Krugerrand crowd seem determined to try to conflate
what seems to be a fairly legitimate piece of police work[1] with the
Orwellian surveillance state.
[1] Well, obviously one can disagree about the criminalisation of recreational
drugs, but they are, so the cops are working within their brief.
~~~
sneak
While that remains a particularly amazing insult, perhaps it is in fact you
who have conflated libertarians with minarchists?
Not all of us, gold/bitcoin stash or no, accept that police are a necessary
part of society.
Given that one part of society (the state) has historically demonstrated that
it will expand to fill any and all available opportunities to exert
destructive power over others, it doesn't make much sense to grant them a
monopoly on the opportunity to use violence to uphold the law.
Laws, we need. Cops, we don't. The NSA has nothing to do with it.
TL;DR: Fuck the police.
------
anfedorov
_Some press on this case implies that the FBI found this person from his
activity on our site. I can't disprove that, but it is much more likely that
they found him through other means, and then tracked his activity on various
sites to build enough evidence for an arrest, indictment, etc._
Anyone care to speculate how they found him?
~~~
logn
The NSA et al. know everything. It's just a matter of whether they can figure
it out again using legal investigative techniques. I imagine it like the
solutions in the back of a math textbook. I'm given the answer but I won't get
credit for answering them correctly unless I can actually list every step in
deriving the solution. See the recent news stories on NSA-DEA parallel
reconstruction.
Specifically to your question, I'd guess they run a large number of Tor exit
nodes and from there it was fairly simple to see exactly who was doing what.
Also it's come out recently in the Guardian, the NSA can backdoor machines
through special servers running man-in-the-middle attacks.
Basically, the Internet (and planet Earth too) is not secure, so trying to
pull off a large-scale crime is kind of foolish.
~~~
krapp
>The NSA et al. know everything.
Parallel construction doesn't actually imply that the NSA is omniscient and
that the entire rest of the American justice system is a charade meant to mask
its power from the muggles. The NSA doesn't know everything. They don't see
everything. They don't whisper words of power into the ears of every
prosecutor, and a dark man smoking a cigarette doesn't appear from out of the
shadows with fabricated evidence for the Department of Justice and a dossier
from ten minutes into the future every time a hacker opens their browser.
It gets mentioned every time a post comes up involving court case or arrest,
and it's quite honestly as useless a form of speculation as suggesting divine
intervention as a first cause in science. Assuming too much power on the part
of the NSA (and by extension, that _no other methods_ used by any other bureau
or department are effective except as a smokescreen) is as dangerous as
dismissing them entirely.
~~~
logn
My 'know everything' comment was a slight exaggeration for effect. I didn't
say they fabricate evidence (you said that) or that they divulge their secrets
to every prosecutor (those are your words, portrayed as mine).
You have a good point though that maybe my comment is not constructive. I
don't wish for this to become a cliche response on HN that it 'must have been
the NSA' but we must acknowledge that they possess powers of surveillance the
world has never before seen (except if you believe in God... however, we
actually have architecture diagrams and proof of the NSA technology... not
just 'The Book of Edward').
But yeah, it's dangerous to dismiss the NSA entirely and dangerous to make it
a given they're more powerful than they are. However, given their secrecy,
that's all a fairly expected situation for us.
~~~
krapp
Fair enough, I admit I was extrapolating from a number of comments I've seen,
and I shouldn't have implied things in your comment that, you're right,
weren't there.
------
mjmsmith
_This happens very, very rarely. I have more than enough fingers to count the
times this has occurred since I started working here a year and a half ago. I
wouldn't need a single toe, and I'm pretty sure I wouldn't need both hands._
I'm not sure that I would call multiple times a year "very, very rarely".
~~~
benaiah
For a site that large and well-visited, with almost entirely user-generated
content, all of which are on technical problems many of which could involve
illegal activity?
I'd agree that that's very, very rarely.
------
turboroot
It's interesting to note from page 30 of the criminal complaint that
StackOverflow was able to record "Ulbricht [changing] his registration email
[...] to '[email protected]'".
Why do sites like StackOverflow keep audit logs of your account information?
~~~
baudehlo
More likely historical database backups
~~~
pstack
Actually, it's almost certainly as the other person stated - for
administrative moderation purposes. There is no other purpose to maintaining
historical backups of this sort of data. Especially not when that costs money.
When I built a site that existed for a very long time, was very popular, and
involved monetary transactions, I had to track nearly everything. IP
addresses, address changes, email changes. Everything I could think of. This
was then utilized when I suspected someone of fraudulent behavior. I could
pull up an administrative screen that compared data in an archive copy (where
I dumped the older information for just this purpose and to specifically keep
it inaccessible to the outside world for user security purposes). With that, I
could see whether several users were actually the SAME user. I even tracked
things like user-agent string and detected screen resolution.
A lot of pieces of data can come together to provide more than circumstantial
evidence that someone is shilling, trying to feedback-bomb another user, and
so on. Enough correlated points of data can confirm suspicions like this.
You'd be surprised how many people use an email address for one account,
change that address, then create a second account with the email address they
used to have on the first account and then use the second address to drive up
the value of their stuff by shill-bidding against another user on their own
item.
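A rough TypeScript sketch, purely for illustration, of what that kind of
signal correlation can look like. The field names, weights, and threshold are
made up rather than any real site's schema; the point is just that several
weak signals (shared IPs, a reused email address, a matching user-agent and
resolution) add up to a score worth a manual look.

    // Illustrative only: weighted overlap of account signals for fraud review.
    interface AccountSnapshot {
      accountId: string;
      ips: Set<string>;               // every IP ever seen for the account
      emails: Set<string>;            // current plus historical email addresses
      userAgents: Set<string>;
      screenResolutions: Set<string>;
    }

    function overlap<T>(a: Set<T>, b: Set<T>): number {
      let n = 0;
      for (const x of a) if (b.has(x)) n++;
      return n;
    }

    // Shared emails are a far stronger signal than a shared user-agent string,
    // which plenty of unrelated users will have in common.
    function sameOwnerScore(a: AccountSnapshot, b: AccountSnapshot): number {
      return 5 * overlap(a.emails, b.emails) +
             3 * overlap(a.ips, b.ips) +
             1 * overlap(a.userAgents, b.userAgents) +
             1 * overlap(a.screenResolutions, b.screenResolutions);
    }

    // Flag pairs whose combined signals cross a threshold for human review.
    function flagSuspiciousPairs(accounts: AccountSnapshot[], threshold = 6) {
      const flagged: Array<[string, string, number]> = [];
      for (let i = 0; i < accounts.length; i++) {
        for (let j = i + 1; j < accounts.length; j++) {
          const score = sameOwnerScore(accounts[i], accounts[j]);
          if (score >= threshold) {
            flagged.push([accounts[i].accountId, accounts[j].accountId, score]);
          }
        }
      }
      return flagged;
    }

In practice the weights and threshold would presumably be tuned against known
shill/fraud cases rather than hard-coded like this.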
~~~
kmontrose
Don't forget user support. It's not all that uncommon for someone to forget
their account, lose a password, or an email address. Circumstantial evidence
can support ownership of the account, and let us fix things for them.
There are also errors on our end like account merge bugs, moderation mistakes,
dropped/flagged/whatever recovery emails, and so on. Keeping additional
historical data can help us recover in those cases.
If you're smart about what you track it's not that much data; we record most
changes to user records into a history table (likewise, and for the same
reasons on post records). Keeping traffic logs around and queryable forever
_would_ be really, really expensive though. We keep some around, but only
really recent stuff is easy to query (about 2 days) since that tends to be
what's needed when reproducing bugs. I don't even think we have _all_ traffic
history, and old stuff would require digging a tape out (if we even move those
to tape like we do with DB backups, I honestly don't know; it's never come
up).
Moderation is a good reason to keep lots of data around, you're right, but
it's not the only one.
Disclaimer: Stack Exchange, Inc. employee.
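For the curious, the "history table" idea is roughly an append-only log of
changes kept next to the live record. A small TypeScript sketch of the email
case (illustrative shapes only, not Stack Exchange's actual schema):

    // The live user row only ever holds the latest value; every change also
    // appends a small record, so support can later answer "what addresses has
    // this account used?" without keeping full traffic logs around.
    interface EmailChange {
      userId: number;
      oldEmail: string;
      newEmail: string;
      changedAt: Date;
    }

    const emailHistory: EmailChange[] = [];

    function changeEmail(user: { id: number; email: string }, newEmail: string): void {
      emailHistory.push({
        userId: user.id,
        oldEmail: user.email,
        newEmail,
        changedAt: new Date(),
      });
      user.email = newEmail; // live record keeps only the current value
    }

    function emailsEverUsed(user: { id: number; email: string }): string[] {
      const past = emailHistory
        .filter(e => e.userId === user.id)
        .map(e => e.oldEmail);
      return [...past, user.email];
    }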
------
microcolonel
"This happens very, very rarely. I have more than enough fingers to count the
times this has occurred since I started working here a year and a half ago. I
wouldn't need a single toe, and I'm pretty sure I wouldn't need both hands."
Pfft, he's counting in ternary.
------
benologist
Almost as contrived as the crap all over Quora.
~~~
logn
Not sure I agree or get your point. Seems like he's doing the best he can to
explain what happened without being taken to a secret prison.
My guess is they've gotten one request from the NSA ('give us all your data
for everyone... otherwise we will just tap into your fiber lines at ISPs') and
one from the FBI ('we are doing some parallel re-construction and it says here
we have a warrant for a user by the name of Frosty').
I'm just surprised an admin on the site didn't close the Q&A as non-
constructive and speculative :)
~~~
nullc
> Seems like he's doing the best he can to explain what happened without being
> taken to a secret prison.
National security letters are only supposed to be used for national security,
not random drug busts. If they used an NSL it was unlawful.
If this was just a sealed request it should be open now that the indictment
has been handed down. If it was reasonable and lawful they should be asking
for it to be unsealed and the request should be granted.
| {
"pile_set_name": "HackerNews"
} |
Nicholas Carr on deep reading and digital thinking - elorant
https://www.vox.com/podcasts/2020/7/1/21308153/the-ezra-klein-show-the-shallows-twitter-facebook-attention-deep-reading-thinking
======
ciarannolan
His book, _The Shallows_ , is an incredible journey into the human/technology
relationship. As much as we create and change technology, it likewise changes
us.
Worth a read for anyone in tech.
[https://www.amazon.com/Shallows-What-Internet-Doing-
Brains/d...](https://www.amazon.com/Shallows-What-Internet-Doing-
Brains/dp/0393339750)
~~~
shmageggy
How does it hold up almost 10 years later? The internet has changed a lot in
that time. I recently became interested in reading this, but I was wondering
if I'd be missing out on anything more recent.
~~~
kmote00
From the article: "His book, a finalist for the Pulitzer that year, was
dismissed by many, including me. Ten years on, I regret that dismissal.
Reading it now, The Shallows is outrageously prescient, offering a framework
and language for ideas and experiences I’ve been struggling to define for a
decade."
------
flocial
One thing that surprised me about this pandemic is how vulnerable people are
to misinformation and conspiracy theories on social media. For many people
their idea of authority or "credentials" directly correlates with the strength
of the source's social media profile (much like the impact factor of peer-
reviewed articles only it feeds on itself). I don't know if they are simply
looking for a theory to fit their current preconceptions or information that
affirms their sense of self but it's really phenomenal. I honestly had no idea
how many people I know could take these crazy conspiracies so seriously.
~~~
zahma
What are you on about? While I don't really disagree with what you wrote, this
article and interview have nothing to do with conspiracy theories and their
propagation through social media.
~~~
flocial
Guilty as charged but I thought it was a reasonable jump to speculate on how
there might be a cumulative effect of being in the "shallows" where critical
thinking skills are atrophied and people are susceptible to misinformation. I
did read the article but I was under the impression that HN doesn't restrict
comments to the specifics of the article.
------
sgt101
>"But we also lost something. One thing we lost is a lot of our visual acuity
in reading nature and reading the world. If you look at older cultures that
aren’t text-based, you see incredible abilities to, for instance, navigate by
all sorts of natural signs. This acuity in reading the world, which also
requires a lot of the visual cortex, we lost some of that simply because we
had to reprogram our brain to become good readers."
Or because larger numbers of people lived in cities and used craft skills to
get money to live rather than hunting and gathering?
~~~
camillomiller
Good point. And what if, from an evolutionary point of view, this new way of
acquiring information is actually better in the long term?
------
lacker
It’s ironic to read an article about how it’s a shame the internet is so full
of distractions, when that article has multiple inline video advertisements.
| {
"pile_set_name": "HackerNews"
} |
$5,000 for your dream project: 2 days left - yrashk
http://5kgrant.com
======
ryanx435
In all honesty, what can be done with 5k for a dream project?
It's not enough to go full time. I guess maybe pay for hosting, maybe bring on
a short term consultant, or do some advertising?
Obviously 5k is better than 0, but it still seems pretty worthless in the
grand scheme of things for software projects
------
The13Beast
It could get you hardware or other resources that you wouldn't otherwise have
access to.
| {
"pile_set_name": "HackerNews"
} |
Algorithm Is the Problem, Not Mark Zuckerberg - ceohockey60
https://interconnected.blog/algorithm-is-the-problem-not-mark-zuckerberg/
======
sharemywin
I would argue the algorithm needs to be personalized.
People should have multiple channels. And you should have a volume control, if
not query access.
| {
"pile_set_name": "HackerNews"
} |
China bans new Bitcoin deposits - aronvox
http://www.ft.com/cms/s/0/6707013a-67af-11e3-8ada-00144feabdc0.html#axzz2noKo4XIN
======
sillysaurus2
I present, for your satisfaction, my comment from two weeks ago:
[https://news.ycombinator.com/item?id=6829180](https://news.ycombinator.com/item?id=6829180)
Along with this gem:
[https://news.ycombinator.com/item?id=6858258](https://news.ycombinator.com/item?id=6858258)
In summary, I invested $11k @ $1106/coin, predicting it'd rise to $1,500.
Shazam! I've transmuted that $11k into... let's see ... $5,100.
I feel pretty ill. Thought you'd all enjoy the schadenfreude, and perhaps
learn from my example: "How to ignore Sam Altman and lose more than half your
money."
[http://blog.samaltman.com/thoughts-on-
bitcoin](http://blog.samaltman.com/thoughts-on-bitcoin)
EDIT: I'm holding onto my coins. But, now that China is strongly incentivized
_not_ to invest in Bitcoin, it's going to be a very slow climb back to $1106.
And at the end of that long road (I'd say at least several months, but what
the hell do I know, right?) I get to look forward to: "Yay. I've lost $0. I've
also gained $0."
EDIT2: Oh, nope! It just fell from $520 to $500. So, at current market prices,
my $11k is now $4,780. Of course, we've all seen bitcoin dip and quickly rise
again. It's just very... interesting... to me that within the timespan of
writing a HN comment, I'm down another $320.
Yes. Interesting. That's the word. I'm _interested_. In learning from my
mistakes.
FYI, that -$5,900 was enough to pay off my car loan in its entirety, or all of
my (admittedly small, relatively speaking) credit card debt.
Oh well. I'm going to go build a product and sell it now.
EDIT3: The market rose from $500 to $540, so that means I've earned back $474!
And since my last edit was an hour ago, that must mean I'm earning $474/hr
writing HN comments in my pajamas! Why hold down a programming job when you
can write HN comments fulltime for 4x the wage? Quit tomorrow, sell everything
you own, and invest it all in bitcoin immediately!
~~~
pontifier
I anticipate your feelings being quite widespread. I only had $200 in them
after cashing out, but I've still felt terrible when the price drops. It's
rather odd. This is the first time I have ever participated in an actual
exchange market with volatility and all. I built a trading bot, and was making
a couple of dollars a day trading my 0.17 BTC. A few weeks ago I was feeling
suicidal based on not buying some a couple of years ago when it was $10. My
family has been having a rough time, and I kept thinking that I had missed my
chance to pull us out of the hole we are in.
This is a crazy and interesting time to live in, and I'm glad I'm still here
even if Bitcoin didn't make me rich.
~~~
eclipxe
U.S.A. Suicide Hotline 1-800-273-TALK (8255). Remember, it's just money.
~~~
tachion
I love when people just like that assume that _everything_ has to happen in
the U.S.A. and that the world around it doesn't exist ;)
~~~
pdx
Your comment is only valid because that's a 1-800 number. 1-800 numbers are
free to call in the USA, which is why many people have them, but their
downside is that you can't call them from outside the USA.
Upon first discovering SIP addresses and that everybody on earth could have an
email style phone number that would be free to call, I have been waiting and
waiting for the world to transition to SIP addresses instead of phone numbers.
If such a thing occurs, someday, then publishing a phone address from one
country will not be a problem. I'm sure the folks at the suicide hotline would
gladly take calls from anywhere in the world, if they could do so for free.
Sadly, as things stand now, they could publish a SIP address, but nobody would
know how to use it.
------
shadowmint
ft.com, source of all amazing paywalls.
Here's the same content from a non-paywall provider:
[http://www.scmp.com/business/banking-
finance/article/1384688...](http://www.scmp.com/business/banking-
finance/article/1384688/bitcoin-price-slump-after-beijing-bans-clearing-
services)
~~~
retube
Yeah I don't know how they get away with breaking Google's sacrosanct rule of
not sticking content behind a paywall that they make visible to Google.
~~~
gabemart
Anyone visiting from Google sees the full article with no paywall.
You can test this yourself:
[https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CC8QqQIwAA&url=http%3A%2F%2Fwww.ft.com%2Fcms%2Fs%2F0%2F6707013a-67af-11e3-8ada-00144feabdc0.html&ei=U3mxUpaLOY-
ShQfpioDYDg&usg=AFQjCNGw8dx8RLkD6od7_RmTI632CfZmpQ&sig2=YxY5zmippvgxXKbE1eFMlw&bvm=bv.58187178,d.ZG4)
~~~
retube
Doesn't work for me. Get the sign up prompt. It may work the first time or two
but after that....
~~~
maxerickson
That link doesn't send the right referrer for me (I also get the sign up).
If I click through from here I get a silly question and the content:
[https://www.google.com/search?q=China+bans+new+Bitcoin+depos...](https://www.google.com/search?q=China+bans+new+Bitcoin+deposits+\(ft.com\))
------
M4v3R
As far as I know they closed Chinese Yuan deposits. Bitcoin deposits or
withdrawals are NOT affected [1].
[1]
[http://www.reddit.com/r/Bitcoin/comments/1t5cfx/btcchina_clo...](http://www.reddit.com/r/Bitcoin/comments/1t5cfx/btcchina_closed_bank_deposit_as_a_way_to_deposit/)
------
skloubkov
Ouch, that market crash:
[http://markets.blockchain.info](http://markets.blockchain.info)
Here is another article: [http://www.scmp.com/business/banking-
finance/article/1384688...](http://www.scmp.com/business/banking-
finance/article/1384688/bitcoin-price-slump-after-beijing-bans-clearing-
services)
~~~
johnpowell
There was a lot of work to keep it over 500 for a bit.
------
gokhan
For serious entertainment, Bitcoin markets now are great places to try some
day trading rodeo. Instant buy/sell with lots of drama and reasonable depth.
Stick with your stop-loss strategy and it's a real adrenalin rush. I trade a
small amount up and down and even made 40% profit for the last two weeks just
by trying to catch bottoms and selling a little higher. Cheap entertainment
IMO.
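For anyone unfamiliar with the term, the stop-loss rule being described is
tiny: exit the position once price falls a fixed percentage below your entry.
A minimal TypeScript sketch (the 5% figure and prices are placeholders, not a
recommendation):

    interface Position {
      entryPrice: number;  // USD per BTC when the position was opened
      amountBtc: number;
    }

    // True once price has fallen stopLossPct below the entry price.
    function shouldStopOut(position: Position, currentPrice: number, stopLossPct = 0.05): boolean {
      return currentPrice <= position.entryPrice * (1 - stopLossPct);
    }

    // Example: bought at $520; a 5% stop sits at $494, so $490 triggers it.
    const pos: Position = { entryPrice: 520, amountBtc: 0.5 };
    console.log(shouldStopOut(pos, 490)); // true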
~~~
locusm
Same here, done OK just watching the BTCWisdom charts with a few basic
indicators. Its good fun...
------
thkim
This move seems obvious. Bitcoin's popularity has far more negative
consequences for the Chinese government (or any other government) than
benefits. Bitcoin might topple the dollar hegemony if popularized, which might
be what China wants, but central banks lose their power to control the money
flow in the system. Why would any government want that? Any money that goes
into Bitcoin is essentially money gone offline right now, so it reduces tax
revenue and adds noise to national statistics. Bitcoin or any other
crypto-coin has a real chance of adoption only when it's sponsored by a
government.
~~~
SwellJoe
_" Bitcoin or any other crypto-coin has real chance of adoption only when it's
sponsored by government."_
While I agree with your assertion that governments have no reason to support
Bitcoin, I don't believe it follows that government sponsorship is necessary
for the success of Bitcoin. I suspect most people betting on Bitcoin are aware
that at some point, governments are going to act forcefully to kill Bitcoin,
and the fallout will be utterly unpredictable.
But, I don't think it's something that can be put back in the bottle. We're
crossing into uncharted waters here. Nothing like Bitcoin has ever happened
before...so, we don't know how it's going to play out. I'm bullish even with
the expectation that major world governments will collude with current
financial elites to kill Bitcoin, because I suspect there is no way they can
actually kill it.
~~~
jasonwocky
> governments are going to act forcefully to kill Bitcoin, and the fallout
> will be utterly unpredictable.
I don't think so. I think it's quite predictable that, if that happens,
Bitcoin will lose. In the sense that it will ultimately become a
cryptocurrency also-ran. Something else will probably rise to take its place
after it's been bloodied, something designed to be resistant to whatever
response the world's governments bring to bear.
~~~
SwellJoe
In the same way that peer-to-peer file sharing lost?
~~~
jasonwocky
In the same way that Napster lost. Imagine if you'd invested in that one after
the mainstream noticed it.
------
obilgic
BTCChina closed bank deposit as a way to deposit Chinese Yuan. Right now no
way to deposit CNY into the exchange:
[http://www.reddit.com/r/Bitcoin/comments/1t5cfx/btcchina_clo...](http://www.reddit.com/r/Bitcoin/comments/1t5cfx/btcchina_closed_bank_deposit_as_a_way_to_deposit/)
------
MattyRad
By "ban", I assume they mean "made illegal." The former of course being a
euphemism for the latter. A couple questions come to my mind as such.... How
will China enforce such a ban? Can China physically or technologically bar
access to Bitcoin deposits?
Regardless of the Chinese government's ability to effectively enforce such a
ban, the price of Bitcoin has begun another wild (and fascinating) price
fluctuation period.
------
ck2
If your digital currency relies on governments accepting a currency that
competes with their own currency - you are going to have a bad time.
------
omegant
Now with this temporary crash, what will happen to the farms that heavily
invested in ASICs and that will likely be mining at a loss within weeks?
(honest question)
~~~
gokhan
What makes you say temporary?
~~~
omegant
I guess that bitcoin's price will continue to rise steadily and reach $1000
again in some time. Maybe years, maybe months, I don't know.
------
fragsworth
I see no article text here...
~~~
rahimnathwani
Try searching Google for
[http://www.ft.com/intl/cms/s/0/6707013a-67af-11e3-8ada-00144...](http://www.ft.com/intl/cms/s/0/6707013a-67af-11e3-8ada-00144feabdc0.html)
If you click through from Google, you will probably see the article text.
------
Mchl
An article behind a paywall? I'm sorry, but I'm disappointed.
------
DustinCalim
Yes, Bitcoin dropped 40% today - but keep in mind that on any average day it
swings 20%.
If you invest because you believe in the underlying technology, you'll have to
ride out the volatility waves until it matures (or dies).
But, it's going to take more than people holding onto their bitcoins and
waiting for the value to go up - people need to spend them on things for it to
work.
~~~
haakon
20% swings over the course of a day are not at all average, and are in fact
seen as quite dramatic.
------
xfour
Well now, let's hope after all this we can find a stable price, so people can
actually use this stuff as a mechanism for transferring money, though I
suppose with this China crackdown it's losing some of that appeal even. I
guess we're stuck with Western Union forever, nice pivot btw. Telegrams were
pretty cool though.
~~~
seanmcdirmid
You do know that RMB is not fully convertible right? This is just the
government cracking down on a forex loophole; you can still buy bitcoin in
China with your $50K/year USD exchange limit (available to all Chinese
citizens, foreigners with tax receipts). Of course, you have to take a hit on
forex now.
------
aabalkan
Why are you even posting a news site that requires a paid subscription? I
cannot read this article!
~~~
gabemart
[https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&c...](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CC8QqQIwAA&url=http%3A%2F%2Fwww.ft.com%2Fcms%2Fs%2F0%2F6707013a-67af-11e3-8ada-00144feabdc0.html&ei=U3mxUpaLOY-
ShQfpioDYDg&usg=AFQjCNGw8dx8RLkD6od7_RmTI632CfZmpQ&sig2=YxY5zmippvgxXKbE1eFMlw&bvm=bv.58187178,d.ZG4)
------
wtpiu
Am I the only one whose link has "Authorized=false.html?"...?
Can't get to the article...
------
userulluipeste
The Chinese are just against the trend of leaking the Yuan's value into
Bitcoin (hence the RMB->BTC restriction), not against Bitcoin itself. People
are still free to bring value into China through Bitcoin! ;)
------
dil8
Bitcoin is a threat to the Chinese government's control over the populace so
it's not surprising to see this...
------
lucb1e
Cannot read article. Paywall.
------
bitmania
Check the latest update at
[http://www.cryptocurrencylive.com/](http://www.cryptocurrencylive.com/)
| {
"pile_set_name": "HackerNews"
} |
HP made a laptop slightly thicker to add 3 hours of battery life - slantyyz
http://www.theverge.com/ces/2017/1/3/14126382/new-hp-spectre-x360-laptop-2017-announced-price-release-date-ces
======
rayiner
To put that into context, it will be 18 mm and 4.4 pounds, versus 15.5 mm and
4 pounds for the new 15" MBP. Apparently the new battery is 79 watt-hours (the
old one was 64 watt-hours), versus the 76 watt-hours on the 2016 rMBP. With
the power-guzzling 4K screen, it'll almost certainly get _less battery life_
than the new MBP.
Which confirms the consternation over the 2016 rMBP. It's a huge regression
compared to the 99 watt-hour battery in the 2015 rMBP 15", but with few
exceptions PC manufacturers were shipping 50-60 watt batteries in similar
configurations. And going with power-guzzling 4K displays instead of the much
more efficient 3K model in the MBP.
~~~
vacri
> 18 mm ... versus 15.5 mm
On a tangent, weight I can understand, as folks physically haul around a
laptop, but I've never understood shaving a millimeter or two from the
thickness of a laptop. I've heard genuine complaints about heavy laptops, or
too-large form-factor (big screen or overly large bezel), but thinness seems
to be just bragging rights. I've known people who bought a laptop to fit in a
leather bag they already liked (rather than the other way around!), but I've
never heard anyone complain that laptops are just too thick.
Am I missing something here?
~~~
ssalazar
In addition to what others have said here and elsewhere, I think some of it is
also internal competition gone too far among mechanical designers to make the
thinnest edge or the most subtle bezel. When I worked with product designers
in the past, there was always a lot of oohing and ahhing whenever they'd come
across a competing product with tighter tolerances than they'd seen before.
It's like the product design version of needing to use the latest pre-alpha
NodeJS framework or transpiled front-end language; it doesn't necessarily have
a direct effect on the end user experience, but it satiates its creators own
need for new and shiny.
~~~
6nf
This is like what Philippe Starck calls 'narcissistic design' \- design to
impress other designers.
~~~
digi_owl
And I get the impression that Apple has been doing a shit ton of that over the
years...
~~~
Fnoord
I agree, but I just bought a 15" MBP 2015 and 15,4-inch (2880 x 1800), ~2 kg,
plus the battery life feels just right _to me_ (YMMV). Its ~2/3 of my previous
15" MBP 2010, the battery life is ~25% better, and the screen is noticeably
brighter and crispier (partly due to the old screen being matte and 6 years
old).
I didn't have to go to extremely low weight device. I didn't have to go to
small battery. I didn't have to go to low or high resolution extremes. Sure,
the GPU isn't great, but I decided to not play games on it from the get go. I
wouldn't have my gaming equipment for it anyway (mechanical keyboard with
binds, Naga mouse, headset, etc).
------
proee
Wouldn't it be great if laptops had a "DeWalt" power-tool-style bottom plate?
You could decide if you want the "thin" or "thick" baseplate to balance
between all-day battery life vs. ultra-thin portable 3-4 hour life.
This would give the option to create a solid "3-inch thick" battery that lasts
a week. It would weigh a ton, but for some situations like remote fieldwork it
could be a really great solution.
~~~
dekhn
thinkpads have this. I use "the tumor" (a 9-cell battery) and it's amazing.
Also, you can hibernate the computer, remove the tumor, and replace it with
another tumor (or the 6-cell battery), and restart- no reboot required to
switch batteries.
~~~
trynewideas
Not only the tumor (which also made a decent carrying handle), but also the
dream bottom plate battery OP is describing: [https://www.amazon.com/Thinkpad-
Battery-19-Cell-Slice/dp/B00...](https://www.amazon.com/Thinkpad-
Battery-19-Cell-Slice/dp/B004UC59Q4)
~~~
dekhn
I didn't know that existed....
------
hackuser
Why don't HP desktops and laptops get more attention in 'hacker' communities?
I hardly ever see them discussed.
IME with large numbers of HP corporate (not consumer) machines, they have by
far the best quality of any Windows options. They run for so long that users
get frustrated, wanting shiny new equipment but having no reason to replace
their old ones. An EliteBook not far away from where I'm sitting even has a
tool-less case - you can pop off the cover and service it by moving one lever,
no screwdriver required - on a laptop!
However, I don't have data on quality; that is hard to come by.
(I have no relationship w/ HP.)
~~~
bsder
> Why don't HP desktops and laptops get more attention in 'hacker'
> communities? I hardly ever see them discussed.
Because HP did a _LOT_ of crappy things and produced a lot of crappy hardware
and provided a really crappy customer experience.
I know a lot of people who are in the "I will _NEVER_ buy another HP" camp.
That reputation takes time to undo, and you have to stick with it and not
screw up during that time or you are back to square one.
~~~
thrillgore
I can count on two hands the number of printers and consumer-level laptops
from HP that have died on me. I'm sure its different in EliteBook land but I
am still trying to find out what Windows laptop vendor offers the best build
quality and performance.
~~~
hackuser
Buy the corporate line laptops from whatever vendor you use. Speaking
generally,
* Consumer products are sold based on cost; when consumers shop, that's all they look at. Corners are cut to keep costs down - that's what consumers demand.
* Corporate products are sold based on availability (reliability), serviceability, and support; that's what IT departments shop for because those line items cost businesses far more than the extra couple hundred in up front cost. Imagine the cost of just a few hours of downtime over the entire lifetime of the computer - lost productivity, skilled labor for repairs, parts, distraction and disruption.
Yes, corporate products cost more; if you insist on lower up front cost, the
manufacturer is going to give it to you. You get what you pay for. The same
goes for support; pay for premium support.
My impression of HP's consumer-level stuff is that it's cheap in every way,
but I never use it. I do have a ~12 year old heavily used HP corporate line
laptop not far from where I'm sitting; it works perfectly except for the
trackpoint (touchpad is fine), and the finish is a bit worn where the user's
palms rest.
------
MR4D
TL:DR - 4K screen sucks regular batteries dry - needed bigger battery for same
battery life. :(
I got excited that this might be a good trend until I read this paragraph:
"Unfortunately, the claimed three hours of additional battery life aren’t
meant to make this laptop into some long-lasting wonder — they’re really just
meant to normalize its battery life. HP will only be selling the 15.6-inch
x360 with a 4K display this year, and that requires a lot more power."
~~~
slantyyz
The paragraph following yours is quite important:
"By increasing the laptop’s battery capacity, HP is able to push the machine’s
battery life from the 9.5 hours it estimated for the 4K version of its 2016
model to about 12 hours and 45 minutes for this model. So it is adding three
hours of battery life, but in doing so, it’s merely matching the battery life
of last year’s 1080p model."
It is still a net gain over the previous year's 4K model's battery life.
~~~
mulmen
I believe Apple is already making the argument but how much battery life do
you really need? 8 hours seems like an easy figure to throw out because it's a
"work day" although most people can just plug in their laptops.
What use case needs 12+? That seems "good enough" to me for most applications.
Is it building in more life so that as the battery degrades there is still
acceptable battery life?
Are there more applications than I think where the laptop is actually not
plugged in all day? When does that happen? Manufacturing? Some kind of in-the-
field work?
I have no experience with it so honestly curious how people use laptops as
more than a mobile terminal in an office setting.
~~~
developer2
The problem with Apple is that extra battery is thrown out the window for the
sake of thinness above all else. Can you _imagine_ how much battery life a
MacBook Pro could have if they just added back 3mm of height to the mold?
Throwing away 4-6+ hours of battery life in the supposedly _professional_
models for the sake of shaving off a couple of millimetres... it's a joke.
The latest MacBook "Pro" is no longer a professional machine... it's nothing
more than the new generation of MacBook Air. There is no longer a Pro line,
only upgrades for the least common denominator of consumer.
Source: desperately not wanting to give up OS X (it's the only operating
system I can stand), but I'm stuck in a position where I flat out refuse to
shell out $4200 CAD on a new laptop, when 3 years ago the same level of
upgrades cost $1000 less. Their pricing has reached unacceptable levels of
greed. That, and no fucking escape key... on a "professional" machine. Give me
a break.
~~~
krrrh
I hope you're taking into consideration that over roughly the last 3 years the
Canadian dollar depreciation has added around $1000CAD to the same levels of
upgrades by itself.
------
mcculley
How much more work would it be to put in more RAM that could only be used when
plugged into AC power? My understanding of the MacBook Pro design constraint
is that they had to hit <100 watt-hours while using LPDDR3. Could a laptop
easily have another 16 or 32 GB of RAM that consumes more power but is only
available when plugged into AC power?
I would be happy to have such a constraint. I love my MacBook Pro. I love
being able to carry it around and deal with things like email when on battery.
When I'm doing serious development work, I'm sitting down somewhere plugged in
anyhow.
The OS could gracefully page out to swap and power down the less efficient
memory when switched to battery.
~~~
pimlottc
Are there any existing operating systems that can handle a dynamically-
changing amount of RAM?
> The OS could gracefully page out to swap and power down the less efficient
> memory when switched to battery.
According to benchmarks, the SSD on the latest MacBook Pro hit 1.4GB/s. Even
at this speed, it would take at least 10 seconds to flush 16GB of RAM to disk.
I doubt your computer could do much while that was happening. I wouldn't call
that very graceful.
~~~
rrdharan
Solaris on SPARC had the ability to dynamically add or remove RAM as I
understand it. I believe IBM AIX (+ POWER maybe?) can do this too.
Modern Linux kernels will let you hot-add memory but won't let you hot remove
it.
~~~
scurvy
Yep, CPU's too. On the E4500's the CPU's and memory were on the same boards.
You could remove the board after sending the right ASR commands.
Then again, you were constantly swapping boards to find the ones not affected
by EDP (ecache data parity) errors.
------
Animats
Panasonic makes a rugged laptop with 27 hours of battery life.[1]
[1] [http://business.panasonic.com/toughbook/fully-rugged-
laptop-...](http://business.panasonic.com/toughbook/fully-rugged-laptop-
toughbook-31.html)
~~~
noonespecial
_8.2 lbs. with optional media bay 2nd battery_
I'm not sure it counts when it weighs so much you could just bring a second
laptop along!
~~~
StRoy
Most of the weight is due to the toughened design, not the battery, actually.
We are talking about laptops you can use as a weapon to bludgeon someone, then
take with you to swim, and they'd still be in a working state.
Actually you can run over them with an SUV and they still work!
[https://www.youtube.com/watch?v=41lXVKSTOGQ](https://www.youtube.com/watch?v=41lXVKSTOGQ)
Of course, unlike common laptops, they don't need a case or anything. Just
carry them as is with their built-in handle.
If they didn't cost so much I would consider buying one. The added weight may
make them slightly less comfortable in one way, but in another way, if you
think about it, they're not flimsy pieces of denting aluminum you always have
to treat with care, put in protective bags during transport and the like.
Using a laptop like this must feel.. liberating.
------
walrus01
I want something that's identical in size/weight to a 2009/2010 Macbook Pro
17" (huge in comparison to today's stuff), but with all of the internal volume
that was occupied by the DVD-RW drive full of battery... I bet you could fit
100Wh of battery.
~~~
stuckagain
I used to use (and still own) one of those PowerBook G3 laptops where you
could swap the media bay for a CD-RW, a floppy, or a Zip drive, or just more
batteries. With modern Li-ion battery technology in both the main battery and
the media bay, that thing ran for 20+ hours. It was amazing.
~~~
seanp2k2
Many Dell Latitudes can also accommodate a media bay battery. I had a D830
with a 1920x1200 screen and loved having the 9-cell + LiPo combo.
------
combatentropy
When batteries surpass 72 hours, then laptop bodies can shrink below 20 mm.
~~~
shpx
You have to cap either battery life or weight/size. They capped battery life
at ~10 hours and decrease size every year. Maybe one day laptops get light and
thin enough and that changes.
The 2016 13" macbook pro with 70 hours of battery life would weigh ~3kg [0].
Have you ever used a light laptop (new macbook, macbook airs or ms surface to
some extent)? The (lack of) weight feels really nice. The macbook would imo be
the best laptop if it weren't for the keyboard. In fairness I've never owned a
laptop with day long battery life so can't compare. Actually if I could buy a
100gram laptop with 1hour battery life I'd do that, the battery is less than
20% of the weight of a laptop though.
Portable 700gram 100Wh (the 13" pros are 55Wh and 50Wh) battery packs are less
than $100 on amazon.
Ultimately, if you _really_ believe in battery life, line the back of your
laptop with 18650's (one cell is 10Wh for less than $10), all you need is like
2 circuit boards, some solder and a bunch of electric tape. You can
theoretically fit 47 on the bottom of a 13" macbook pro (you need ~35 cells
for 72 hours[1]). Or get someone to manufacture a portable laptop battery
case, macbook pros vent through the hinge, so heat should be less of a
problem.
[0] [https://www.apple.com/ca/macbook-
pro/specs/](https://www.apple.com/ca/macbook-pro/specs/) and
[https://www.ifixit.com/Teardown/MacBook+Pro+13-Inch+Touch+Ba...](https://www.ifixit.com/Teardown/MacBook+Pro+13-Inch+Touch+Bar+Teardown/73480#s148072)
means 1370 grams + 235 grams × 6
[1] [http://www.batteryspace.com/ProductImages/li-
ion/specLI2400....](http://www.batteryspace.com/ProductImages/li-
ion/specLI2400.gif) and
[https://www.apple.com/ca/macbook/specs/](https://www.apple.com/ca/macbook/specs/)
(281mm × 197mm) / (65mm × 18mm) ≈ 47
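Spelling out the arithmetic behind those two figures in TypeScript, using only
the numbers above (~10 Wh per 18650 cell, a 50 Wh pack, a 65mm x 18mm
footprint per cell); the one assumption added here is the roughly 10-hour
rated life of the stock pack, which gives the average draw:

    const cellWh = 10;                        // one 18650 cell
    const avgDrawW = 50 / 10;                 // 50 Wh pack over ~10 rated hours => ~5 W

    const targetHours = 72;
    const cellsNeeded = Math.ceil((targetHours * avgDrawW) / cellWh);  // 36, i.e. "~35 cells"

    const bottomAreaMm2 = 281 * 197;          // 13" MacBook Pro footprint
    const cellAreaMm2 = 65 * 18;              // one cell lying flat
    const cellsThatFit = Math.floor(bottomAreaMm2 / cellAreaMm2);      // 47

    console.log({ avgDrawW, cellsNeeded, cellsThatFit });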
~~~
seanp2k2
Or spend $700 on a pair of Hypermac 100wh battery packs
[https://www.hypershop.com/products/hyperjuice-external-
batte...](https://www.hypershop.com/products/hyperjuice-external-battery-pack-
for-macbook?variant=11044791238)
------
yani205
I wish they start doing this with phones more often
~~~
rm_-rf_slash
That's the "eat your vegetables" line of smartphone marketing. Good for you
but a hard sell. Besides if battery life really matters to you then you would
get much more utility from an external battery that can store far more than a
single smartphone charge.
~~~
mulmen
Why is it hard to sell people vegetables? I agree with you but I don't
actually know why. Is it just because there's no industry group running a "got
carrots" campaign?
~~~
WildUtah
I sell people vegetables professionally. It's not difficult at all. People
will pay a fair price for quality organically grown tomatoes, cucumbers, kale,
chiles, peas and snow peas, and various Southern and Asian vegetables. There's
too much hand work involved in root vegetables to pay easily for first world
labor, but you can make it work.
The problem being invoked is that the junk food market is much bigger.
------
RodericDay
I have a 13" Air and a 13" Retina Pro.
I do not care even a little bit for the Retina screen. Makes me sad that so
much energy is wasted on it.
~~~
CoolGuySteve
This was the biggest factor to my switching to Ubuntu when deciding to upgrade
from my 2010 MacBook Pro.
I can't stand glossy displays and I want battery life. High res displays tend
to be glossy because at some point the grain on the matte coating is larger
than the pixels.
On the PC side, it's easy to pick a matte 1080p display on the XPS13 and Asus
UX305 and it greatly increases battery life. The 1080p XPS13 gets 14 hours!
And it's matte! Win-win.
Whereas it's Tough Shit For You on a MacBook. Don't know why though, my old
MacBook Pro has a matte display as a configurable option.
~~~
WildUtah
1080p? Why not just go back to individual blinking LEDs? A non-retina screen
is a throwback to previous centuries and should be tolerated only by the
visually impaired.
------
21
I see a parallel trend for big laptops.
Like this ridiculous one (which I would love if not the price):
[https://gizmodo.com/acers-absurd-curved-display-laptop-
has-a...](https://gizmodo.com/acers-absurd-curved-display-laptop-has-a-
predictably-hu-1790521279)
------
thoughtsimple
Who cares how thick it is; what matters is what it weighs. Not seeing that
information.
~~~
cwbrandsma
As an alternative point of view (as to say, I'm not calling you wrong at all,
just that there are other opinions on the matter): I don't care about weight
at all, or how thin the thing is. I want battery life, bigger screen size, and
horsepower. (Apple about lost me when they killed the 17" MBP.) I call my
laptops "luggable".
~~~
thoughtsimple
The good thing about bringing back the 17" MBP would be all the objections
over battery life and low power GPUs would be moot. Even a thin 17" is going
to be huge. The last one looked like 4 cafeteria trays stacked up.
I'm just not sure that there is much of a market for it. It's fun to imagine
though. Mobile Xeon, ECC memory, 32 or 64 GB DDR4-2400 RAM, RAID NVMe SSDs.
Priced like a Mac Pro :)
~~~
lj3
> The last one looked like 4 cafeteria trays stacked up.
It's funny you mention that. The nickname for the 17" MBP was 'the lunch
tray'.
~~~
seanp2k2
The 17" allow me to fit a dinner plate, side dish, and drink on top of it
(closed), then carry it back to my desk or to the next meeting. The 15" won't
fit the side dish anymore. I miss my Silicon Valley Lunch Tray.
------
randyrand
It's annoying that most manufactures are only giving you 1080p or 4k options.
WHY NOT SOMETHING IN BETWEEN!
Apple has the right approach IMO, 2880-by-1800 15", 2560-by-1600 13". ~48 -
67% more than 1080p.
~~~
seanp2k2
Windows and Linux have issues with HiDPI when scaling is not just 2x. 1080p is
conveniently ~1/4 of 4K, so 2X scaling for H and W makes it look normal.
Also, panel manufacturers want to pitch 4K, not 2.5k or whatever would make
sense. It's the same reason we have 16:9 instead of 16:10 which many people
(including me) liked much more: panel manufacturers and companies selling
these to consumers care more about marketing (and selling) than how good they
are to use. 100% sRGB coverage is easier to sell than e.g. backlight
uniformity or low backlight bleed.
------
sullyj3
On a side note, why is it that I see battery capacity measured in both mAh and
Wh? One is charge, the other is energy. Which is more relevant?
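They're related through the pack voltage: energy (Wh) = charge (Ah) x voltage
(V), which is why Wh is the more comparable number across devices with
different cell configurations. A quick TypeScript conversion; the example
numbers are just typical values, not any specific product:

    function mAhToWh(mAh: number, packVoltageV: number): number {
      return (mAh / 1000) * packVoltageV;
    }

    // A 10,000 mAh pack at a nominal 3.7 V holds about 37 Wh; the same mAh at
    // 7.4 V would hold twice the energy, which is why mAh alone can mislead.
    console.log(mAhToWh(10000, 3.7)); // 37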
------
jaxn
This is the same thing Microsoft did with the Surface Book a couple of months
ago. I bought one and love the battery.
------
bencollier49
I misread that as "He made a laptop slightly thicker to add 3 hours of battery
life". I was waiting for the subtitle "Apple hates him" or something.
| {
"pile_set_name": "HackerNews"
} |
The Story of a Great Monopoly (1881) - behoove
https://www.theatlantic.com/magazine/archive/1881/03/the-story-of-a-great-monopoly/306019/
======
dredmorbius
Far too much here for easy synopsis. Picking two arbitrary items:
_The contract is in print by which the Pennsylvania Railroad agreed with the
Standard, under the name of the South Improvement Company, to double the
freights on oil to everybody, but to repay the Standard one dollar for every
barrel of oil it shipped, and one dollar for every barrel any of its
competitors shipped._
Strong shades of Microsoft's per-CPU licensing agreement for PCs.
Or of how to respond to questions under inquiry:
_When Mr. Vanderbilt was questioned by Mr. Simon Sterne, of the New York
committee, about these and other things, his answers were, “I don’t know,” “I
forget,” “I don’t remember,” to 116 questions out of 249 by actual count._
The names change but the game's the same.
~~~
adventured
> The names change but the game's the same.
It's certainly true. Even though Gates was universally mocked for his
approach, and although it was widely interpreted as making Microsoft look more
guilty, it's exactly how Zuckerberg and Larry Page will respond when their
embryonic anti-trust parties get to that point. Their lawyers will advise that
that approach is still the safest way to go, despite how it will look. When in
doubt, play dumb.
~~~
dredmorbius
I actually had in mind both general recent testimony and Reagan-era inquiries.
The pattern is rather older, it seems.
| {
"pile_set_name": "HackerNews"
} |
Drag out files like Gmail - ahrjay
http://www.thecssninja.com/javascript/gmail-dragout
======
keltex
Does it seem sort of Microsoftish that Google uses an undocumented API to
their own benefit without letting the rest of the community know?
~~~
kragen
There's a bit of difference: there's an open-source version of Chrome which
presumably has a fully-commented source code implementation of this feature,
together with a public bug tracker, etc.
~~~
woodall
The files are here:
[http://src.chromium.org/svn/trunk/src/chrome/browser/downloa...](http://src.chromium.org/svn/trunk/src/chrome/browser/download/)
drag_download_file.cc:
[http://src.chromium.org/svn/trunk/src/chrome/browser/downloa...](http://src.chromium.org/svn/trunk/src/chrome/browser/download/drag_download_file.cc)
drag_download_file.h:
[http://src.chromium.org/svn/trunk/src/chrome/browser/downloa...](http://src.chromium.org/svn/trunk/src/chrome/browser/download/drag_download_file.h)
drag_download_util.cc:
[http://src.chromium.org/svn/trunk/src/chrome/browser/downloa...](http://src.chromium.org/svn/trunk/src/chrome/browser/download/drag_download_util.cc)
drag_download_util.h:
[http://src.chromium.org/svn/trunk/src/chrome/browser/downloa...](http://src.chromium.org/svn/trunk/src/chrome/browser/download/drag_download_util.h)
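For reference, the trick the linked article (and Gmail) relies on boils down
to setting Chrome's non-standard "DownloadURL" drag data type in a dragstart
handler. A TypeScript sketch; the selector, file name, and MIME type below are
placeholders, and this only works in Chrome:

    // Payload format is "<mime type>:<file name>:<absolute url>".
    const link = document.querySelector<HTMLAnchorElement>("a.draggable-attachment");

    if (link) {
      link.addEventListener("dragstart", (event: DragEvent) => {
        const payload = `application/octet-stream:report.pdf:${link.href}`;
        event.dataTransfer?.setData("DownloadURL", payload);
      });
    }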
| {
"pile_set_name": "HackerNews"
} |
How a Quantum Satellite Network Could Produce a Secure Internet - nextstep
http://motherboard.vice.com/blog/quantum-satellites
======
mtgx
Except they would be even more vulnerable to government's monitoring the
conversations, since they'd own those satellites. Unless we can envision a
future where even a small business could have such a satellite.
| {
"pile_set_name": "HackerNews"
} |
Why Is PayPal So Successful Yet They Treat Merchants Like Crap? - jkuria
https://capitalandgrowth.org/questions/1524/why-is-paypal-so-successful-yet-they-treat-merchan.html
======
theamk
Because they provide security for the users.
With paypal, I can buy from any random sellers, and know that in the worst
case, I am risking just a purchase amount.
Security is hard, and even if someone makes great products, that does not mean
they are great programmers. For a small or a medium store, I always assume
that their website will be hacked and entire database will be stolen (it is
really sad how often this happens). Entering credit card directly just means
more problems for me down the line.
This means some sort of trusted checkout service. Stripe has no extra
security, so this leaves Paypal or Google Checkout. I think the latter has
died (I have not seen it for a while), so this leaves Paypal.
~~~
cpncrunch
>Because they provide security for the users.
And for the sellers too. I've been using paypal for 13 years, and I've never
had a single issue with anyone using a stolen card, and I don't think I've
even had any chargebacks during that time. I previously used Worldpay and
regularly had issues with chargebacks and fraudulent transactions.
Paypal goes to extreme lengths to ensure a transaction is not fraudulent. It
can be somewhat annoying as they occasionally reject customers' cards for no
apparent reason, but overall I appreciate their security measures.
Tip for anyone in Canada, Australia, EU or UK accepting US$ transactions
through paypal: withdraw your money through Transferwise and you'll pay 0.5%
instead of 1.5% or 2.5%.
~~~
ttty
How do you withdraw with TransferWise?
~~~
matthewheath
Sign up for their borderless account:
[https://transferwise.com/gb/borderless/](https://transferwise.com/gb/borderless/)
It enables you to receive money from over 30 countries without any fees. You
get bank details for:
* Australia * UK * Eurozone * US * New Zealand
So you can take payouts from any of those countries (e.g. USD goes to your US
bank account attached to the borderless account, GBP to your UK bank account,
and so on.)
~~~
ttty
But how do you add the account to PayPal? You can only put in a sort code and account number, not the American USD bank account details. I can only put in a 6-digit sort code, not the ACH routing number, which has more than 6 digits.
~~~
cpncrunch
In the "add bank" page, the first field is country. For me (in Canada) I have
the option of choosing Canada or United States.
------
christocracy
I’m a merchant and I once won a PayPal chargeback.
I sell a software product. Customer generates a key to unlock the code. Once a
key is generated, that key will unlock the product forever; it cannot be
revoked, thus no refund at that point.
In the dispute form, I wrote “It’s like buying a DVD from Walmart — once the
cellophane is torn open...”
PayPal judged in my favor and refused the chargeback of $349.
I’ve had 2 or 3 chargebacks from Visa and never won those.
~~~
kalleboo
We had a credit card chargeback on our PayPal account and PayPal managed to successfully argue it with the credit card company, so we won (!)
------
bitcoinbutter
It's a combination of factors. As mentioned in the original link, PayPal has
an advantage because a portion of transactions are done internally using
PayPal balance.
This results in an instant and free transaction. They also have a network
effect which is compounded by saving the buyers' payment information, meaning
the buyer doesn't need to resubmit their payment information on each
transaction.
The company treats its merchants terribly and treats buyers like royalty. This
is a winning strategy because merchants have no choice but to use the payment
methods the buyer prefers. Otherwise the buyer will simply go to a competitor
who does take PayPal.
PayPal is an essential service to most merchants. The way they close and limit
accounts can basically determine which businesses in a specific industry are
successful. They seem to selectively enforce their own policies among
merchants. This can create monopolies or oligopolies determined simply by who has a functional PayPal account.
Considering this huge power they hold, it is unfortunate that their policies
are so opaque and inconsistent. A merchant is in a constant state of fear that
one day PayPal will just permanently close their account, and will have no
option to discuss or appeal the decision. It happens constantly and is the
reason many businesses eventually fold.
~~~
aboutruby
> The way they close and limit accounts can basically determine which
> businesses in a specific industry are successful
I think it used to be this way but now with the concentration of e-commerce
with Amazon and Shopify (and most other sites using Stripe / Braintree), it
seems to me that Paypal is becoming more and more irrelevant.
~~~
jkaplowitz
PayPal owns Braintree (and, relatedly, also Venmo).
~~~
aboutruby
Still those are different services (I was talking about the service becoming
irrelevant, not the company). And they probably bought Braintree to make
e-commerce companies use Paypal (Braintree is pushing everybody to integrate
Paypal into their payment methods).
------
lykr0n
Because a silent majority of their users have no issues. I've been using PayPal for 5+ years and have never had a single issue with them.
~~~
jseliger
Like many people, though, you never have an issue till you have an issue. And
then you're screwed. [https://jakeseliger.com/2011/12/09/december-2011-links-
paypa...](https://jakeseliger.com/2011/12/09/december-2011-links-paypals-
bogusness-ribbed-tees-literary-friendships-literary-research-alex-tabarroks-
new-book-and-more/). And you will then write a piece bemoaning about how awful
Paypal actually is, and you'll be confronted by other people shrugging and
saying, "Never had a problem myself."
~~~
jdietrich
There's a very simple quid-pro-quo with Paypal - they'll let basically anyone
accept payments, they offer very strong protections to buyers, but they'll
freeze any account that smells even slightly iffy. If your transactions could
plausibly look like fraud, then there's a strong likelihood that Paypal will
freeze your account; there's an even stronger likelihood that a mainstream
merchant account issuer will politely decline your business.
Go to your bank and ask them _"I'd like to accept donations on the internet via credit card. No, I'm not a registered charity"_ or _"I'm planning on holding an event next year. I don't have enough funds to cover the cost of running the event, so I need to pre-sell tickets."_ See if they offer you a
merchant account.
Paypal aren't perfect, they aren't the best fit for every business, but they
offer a genuinely unique service.
~~~
nitwit005
The issue with PayPal isn't that they freeze accounts, but that they have a history
of preventing people from getting at their money for months without
justification. That is a bit too much like theft and they've had to pay to
settle related lawsuits: [https://blogs.findlaw.com/decided/2016/02/paypal-to-
settle-i...](https://blogs.findlaw.com/decided/2016/02/paypal-to-settle-
improper-account-freezing-class-action.html)
------
oblib
I've been selling on PayPal for at least 17 years now and never had a problem.
My case may be rare but I think the reason is pretty simple, whenever anyone
has requested a refund I've issued it asap, no questions asked.
PayPal makes that very easy and last time I looked they made it clear that's
what they expect.
In their defense, I don't think I can or should expect them to arbitrate a
claim for a refund and I can certainly understand why they won't.
Paypal really only has two options. They can either say the money stays with
the vendor and buyer beware, or the money always goes back to the buyer and
the vendor must deal with any issues that arise from a complaint.
I have a PayPal debit card so I also use PayPal to make purchases. I like
having that protection.
------
rossdavidh
1) PayPal is a pain sometimes, but they do usually seem to be trying (my wife
is a small business owner), and refereeing fraud disputes is a Hard Problem 2)
their old-school competition was not always great 3) my (anecdotal) impression
is that Square and Stripe both ARE doing well (marketshare-wise, anyway), so
the idea that PayPal owns this whole space is not accurate
------
marcinzm
If it's a choice between giving my CC to a random web merchant that likely
doesn't know technology (so non-trivial chance that they or their third party
payment processor are compromised) or using Paypal to pay then I know which
one I'll choose.
~~~
zeroimpl
I look at it the opposite way. As a user, the only thing PayPal protects is your
credit card info. Your personal information (name and address) gets passed on
to the merchant regardless. There's zero risk to us consumers for stolen
credit card numbers, so why would I care about that? Instead, Paypal costs
more for the merchants, and I'd prefer the merchant stays in business so I'll
use their preferred payment system. Plus, why do I want Paypal keeping their
own record of everything I purchase? Thus I only use Paypal when there are no
other options.
~~~
felipelemos
> There's zero risk to us consumers for stolen credit card numbers
Care to elaborate on this?
~~~
techsupporter
If you're using a credit (NOT DEBIT) card number in the US, you have zero
liability for unauthorized and fraudulent transactions. You can even get your
credit card issuer to--usually--go to bat for you in the case of "not as
described" transactions, too. So there's theoretically no risk to you as a
buyer using a credit card because if it doesn't work out, you just tell your
credit card issuer to go get the money back.
This is, supposedly, one reason why PINs on chip cards weren't adopted widely
here. The customer can, with relative ease and usual success, reverse the
charge so what does it matter if someone makes off with a card?
The key is credit cards instead of debit. Your chargeback rights are much more
limited with a debit card and a debit card means your money has been held up,
not the card issuer's.
~~~
smelendez
I agree with you for myself but not for everyone.
Many people only have one credit card, so having it temporarily unavailable due
to fraud issues is a hardship. If you're not good with computers it can also
be a challenge to reconfigure all your automatic payments.
If you don't have language skills or phone skills reporting the fraud can also
be scary.
~~~
AnthonyMouse
> Many people only have one credit card so having it temporally unavailable
> due to fraud issues is a hardship.
"Get the Amazon card and get A% on your Amazon purchases." "Get the X card and
get X% on your X purchases."
The average American has 2.6 credit cards. People with less money typically
have more credit cards. So that's generally not a problem if you're making a
purchase in real time.
And they're pretty good about turning your card back on once you sort out the
issue, and will actually notify existing merchants of your new card number if
it changes, so it's basically the same level of inconvenience sorting it out
with the credit card company as sorting it out with Paypal.
------
mannykannot
Two of the first questions to ask are "could a competitor do better in this
regard?" and, if the answer is in the affirmative, "Could a competitor replace
PayPal by doing better in this regard?"
For a number of the most successful e-commerce fields (those having a large
market that can be served in an almost completely automated way), the answer
to the first is probably yes, but only in a tiny fraction of all transactions,
and only by spending a lot more on human customer service. Consequently, the
answer to the second question is probably no.
There is also the asymmetry with regard to whether PayPal leans towards the
customer or the vendor in a dispute. The former attitude tends to lubricate
commerce, while the latter applies the brake, so it is actually to the benefit
of vendors as a group, as well as customers (and also, of course, itself), if
it does the former. If customers regard PayPal as the preferable way to do business, the pressure is on vendors to use it, while vendors who spurn it in favor of other methods risk losing business to competitors.
Because PayPal made a number of smart choices, it offers benefits to both
sides of a transaction in the vast majority of cases, and that is pretty
unusual (compare it to, for example, the credit rating agencies.) It is in a
sweet spot where it would be difficult to compete either by reducing costs or
by improving service, and that sweet spot is defined by what it is currently
technically feasible to automate. That sweet spot will move with technological
developments, and PayPal will need to adapt accordingly to stay in it.
------
leowoo91
If my understanding is right, their security measures are over-engineered even if that means losing profits. As an individual I told them (about 10 times, after my account was recovered) that I owe them a couple of thousand dollars. They responded "no, you are perfectly fine" ¯\\_(ツ)_/¯
------
dpcx
Personally, I think the title of this should have been "Why is PayPal so
successful yet they treat their users like crap?"
I have had multiple people come to me asking why their PayPal payment was
rejected, or why PayPal closed their account, or myriad other questions. My
answer to them is always the same: PayPal does what PayPal wants, and (in
practice) users have little to no recourse to that fact.
------
alexandernst
IMHO PayPal offered 2 things that nobody else offered at the time they were
starting: easy & secure. Nowadays more platforms like PayPal offer the same
features, but once some company has made a big share of the market, it's
really hard for other companies to take that market share away.
~~~
marpstar
agreed. That on top of their eBay partnership (and later acquisition). I think
people forget how popular (relative to the rest of the internet) eBay was 20
years ago.
I remember getting a PayPal account in like...1999 or 2000. That's a long
time. Only within the past 5 years have processors like Stripe and Square
really gained visibility, and they each have their own niche (developer-
friendly and small-shop-owner-friendly, respectively).
------
confiscate
because PayPal is like any marketplace platform. Demand side is always more
important than Supply side.
If there is Demand (people willing to pay), there will most likely be lots of
folks happy to Supply (merchants willing to sell).
The opposite is not true. If you ramp up Supply (add more good merchants),
it's hard to ramp up Demand to match (getting more customers).
Hence, "the customer is always right".
------
toast0
For a long time, PayPal was the fastest way to accept credit cards. You could
sign up for an account practically instantly vs finding a merchant bank
account, a credit card processor, and a third company to interface with you
and your processor via the Internet.
And the fees are at least towards the bottom of entry level fees with the
traditional method, if not lower. Nowadays there are many more options (ex
Stripe, Amazon payments, Google payments, probably more), so maybe pick the
one or two that are the least terrible, until you have enough volume to get a
better deal?
------
neonate
The quote which forms the body of the post is surprisingly interesting.
~~~
code_duck
It seems like that is the actual content of this submission.
It makes sense, too… PayPal was the only payment processor that stood out as
having their own currency and balance system. I can see why that would save
them a lot of money.
------
kalleboo
PayPal is very useful to me since my main credit card brand (JCB) is poorly
accepted abroad, but if they accept PayPal I can use it anyway. They help
debit card users with their extra layer of protection. They offer a way for
people without credit/debit cards a way to pay. And they are probably still
the fastest and easiest way (but absolutely not the cheapest) to send money
between individuals internationally. They fill in a lot of cracks in payments
that others leave open.
------
notahacker
Isn't the true answer to this _because most payment providers set up much more
onerous requirements to let people take payments via their service and/or
charge merchants more_?
When your payment services have lower bars to entry for scammers and lower
margins, you're going to flag a higher proportion of possible issues with your
merchants and not handle them as smoothly.
------
xchaotic
As a consumer PayPal works well for me - it has an extra layer of insurance that I never used but that certainly gives me a 'feel good' thing. I can pay for things in one currency with a credit card in another. Yes, the exchange rate is
not ideal, but much better than a flat fee for every overseas transaction on
my otherwise cost-free credit card.
------
pier25
Because it's convenient for end users
~~~
runxel
So much this.
The convenience must never be underestimated. Paying with PayPal is just
_easy_.
------
chrischen
They treat merchants slightly less crappy than credit card companies.
------
fouc
Nice bit from the first answer:
>“That’s why we created a PayPal debit card. It’s a little counterintuitive,
but the easier you make it for people to get money out of PayPal, the less
they’ll want to do it.
Though technically the first answer didn't answer the question it seems.
------
Tigere
I think its success initially came from being the first truly web-based payment system to market, and then the eBay acquisition potentially drove transaction volumes. Almost every eBay account has a PayPal account.
------
buboard
Maybe they caught the disease from their friends in banks
~~~
cwyers
Canada's largest Bitcoin exchange has lost $190 million in assets because its
founder died and nobody else has access to the cold storage wallets.
[https://www.ccn.com/190m-gone-how-canada-biggest-bitcoin-
exc...](https://www.ccn.com/190m-gone-how-canada-biggest-bitcoin-exchange-
lost-it)
Traditional banks do not have the "one person died/ran away to the
Bahamas/whatever and now everybody is out their deposits" problem. Banks have
other problems! Banks have loads of problems. But banks have to exist in a
world of piles and piles of regulations, and you have to realize is that most
of those regulations exist not because regulators are bastards but in response
to someone screwing over someone else in the past. Banking regulations and
banking practices have evolved for reasons, and if you don't understand those
reasons and try and invent banking from first principles like all the people
in the wild west of crypto have been trying to do, you are going to learn real
fast that there are things you did not account for.
~~~
buboard
paypal is not crypto, and in fact in europe paypal IS a bank
------
doe88
Ubiquitous.
------
oosjc9a5
As an end user I am happy PayPal treats merchants like crap.
Just think about it: unless you drink the libertarian kool-aid, you know in
any given business transaction there's a minimum of 1 (one) sucker. I'd rather
the merchant be the sucker.
~~~
paulie_a
You should tell that to my friend who routinely tells me about sending 2000
dollars of equipment just to get scammed. It's happened so many times now he
barely gets annoyed. Just a cost of doing business, getting scammed and PayPal
is complicit.
------
cosmin800
I didn't know PayPal was still a thing in 2019. I stopped using it about 5 years ago for the same reasons as in the article (high fees, random payment rejects, high-profile accounts closed for no reason). I am better off with good old credit/debit cards and now with crypto.
~~~
RandallBrown
PayPal owns Venmo, which is pretty huge now in its own right.
| {
"pile_set_name": "HackerNews"
} |
Why do Macs need so much fixing? - edw519
http://blogs.zdnet.com/Bott/?p=446
======
mechanical_fish
Ladies and gentlemen, I give you... Philip Greenspun!
"Computers are the tools of the devil. It is as simple as that. There is no
monotheism strong enough that it cannot be shaken by Unix or any Microsoft
product. The devil is real. He lives inside C programs.
...
"Everything that I've learned about computers at MIT I have boiled down into
three principles:
Unix: You think it won't work, but if you find the right guru, you can make it
work.
Macintosh: You think it will work, but it won't.
PC/Windows: You think it won't work, and it won't."
\-- <http://philip.greenspun.com/wtr/servers.html>
\---
Notes:
\-- This was written before the Mac OS became Unix.
\-- It is possible that, given that this paragraph is the prelude to a sales
pitch for Unix as the best of a bad lot... Greenspun is overselling the extent
to which Unix will work. ;)
The corollary to all of this is that the web's primary function is to contain
websites by Unix newbies looking for gurus, by Unix gurus looking for newbies,
and by Windows users looking for stiff drinks.
| {
"pile_set_name": "HackerNews"
} |
iPhone Privacy - taranfx
http://seriot.ch/resources/talks_papers/iPhonePrivacy.pdf
======
DougBTX
More details here: <http://seriot.ch/resources/talks_papers/iPhonePrivacy.pdf>
Note that this applies only to applications installed by the user; there is no hacking going on. It is much like installing an application on a desktop.
~~~
mikedouglas
Ironic that a talk that mentions distortions in the press around security issues would be linked to by such a horribly written hit piece. It would be
nice if a mod could replace the original link with the pdf above.
The talk finishes with four recommendations:
1. User should be prompted to authorize read or read-write access to AddressBook
2. WIFI connection history shouldn’t be readable by “mobile” user
3. Keyboard cache should be an OS service
4. iPhone should feature an outgoing firewall
Seems fairly uncontroversial. Hopefully we'll see them in 4.0.
------
htsh
You still need to get the application on the iphone somehow, and there's no
indication this can happen through Safari. If anything, this is a good
argument for the controversial "walled garden" approach Apple has taken to
date.
Also, it's worth noting that if what is described as possible here is a security hole, then every operating system ever made is insecure. You can run
a keystroke logger on your mac or any other operating system that would access
everything you type, including passwords. You could also install a screen
capture utility that records and automatically uploads what you do. Just
because you can run a program that gets your personal data doesn't mean that
the platform is inherently insecure. Now I understand that it may be stupid to
allow apps access to information, but there may be a good reason here. It's possible that applications might need to access contacts, bookmarks, etc. and
without knowing more about this particular situation, I can see why these
types of things might be possible.
As things currently stand, some level of common sense is required by the end
user. With the walled garden approach Apple has taken and with the coming
Cloud operating systems, security will be force-fed to the end-user. And
though this isn't perfect, its pretty damn good from a security standpoint.
------
tewks
The level of transparency about which APIs a particular app uses on the iPhone
is not particularly good. I have a feeling that some apps and libraries,
particularly advertising/analytics solutions have been abusing this fact.
The Android system of notifying the user exactly which APIs are being used by
an app, prior to install, seems like a step in the right direction.
~~~
mikedouglas
_The Android system of notifying the user exactly which APIs are being used by
an app, prior to install, seems like a step in the right direction._
The talk mentions that class unmarshalling, encrypted payloads, and other tricks make this a very hard problem. The truth is that code-based
analysis can only go so far, especially when what you're looking for will be
deliberately obfuscated. The legal barriers that mechanical_fish brought up
are probably far more effective.
~~~
DenisM
There is no need for analysis - simply demand the app declare what it plans to use and then deny all other APIs at runtime.
------
beefburger
The problem is that Apple claims:
"Applications on the device are 'sandboxed' so they cannot access data
stored by other applications. In addition, system files, resources, and the
kernel are shielded from the user's application space."
[http://images.apple.com/iphone/business/docs/iPhone_Security...](http://images.apple.com/iphone/business/docs/iPhone_Security_Overview.pdf)
The research demonstrates the opposite.
------
zachbeane
I see process-title has been fixed not to mangle "iPhone" as the first word in
titles. Much better.
| {
"pile_set_name": "HackerNews"
} |
The Dumbest Business Idea Ever. The Myth of Maximizing Shareholder Value - brahmwg
http://evonomics.com/maximizing-shareholder-value-dumbest-idea/
======
jacquesm
[https://news.ycombinator.com/item?id=3392108](https://news.ycombinator.com/item?id=3392108)
| {
"pile_set_name": "HackerNews"
} |
4 months of work turned into Gnome, Debian testing based tablet - ashitlerferad
https://zgrimshell.github.io/posts/4-months-of-work-turned-into-gnome-debian-testing-based-tablet.html
======
lostmsu
Does it support deep sleep and push notifications?
| {
"pile_set_name": "HackerNews"
} |
2016 Annual Report - dtnewman
https://watsi.org/2016/?utm_source=2016&utm_campaign=annual%20report&utm_medium=email
======
saganus
This is a really nice idea for a transparency report.
I got the email from Chase Adam and he says:
"This year, we’re doing something different. Instead of using our annual
report to share 2016’s shiniest numbers, we’re using it to share the most
problematic ones — for example, the $54,242 in fraudulent donations we had to
refund."
It would be great if this caught on as a trend for companies (at least some of
them). Publishing these kind of numbers could give great insight into how
certain organizations are run.
Kudos to Watsi for making such a risky move.
| {
"pile_set_name": "HackerNews"
} |
Java won't curl up and die like Cobol, insists Oracle - thebootstrapper
http://www.theregister.co.uk/2012/03/07/oracle_java_9_10_roadmap/
======
karianna
Some surprisingly well balanced comments on that article, colour me surprised,
might actually read more of the Register's tech articles from now on.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Working at Pivotal Labs? - mxplusc
Hi HNers, here is a follow up to http://news.ycombinator.com/item?id=1488273.<p>There is surprisingly little of Pivotal Labs on HN, if you consider their reputation. Any opinions on them as a work place? Pivots?
======
zbrock
I used to work at Pivotal and I really enjoyed it. I learned more in a month
there than in a year at my previous job. If you're into the idea of TDD and
Pair Programming it's incredible. I got to spend time at a bunch of different
companies (Mavenlink, Twitter and Get Satisfaction to name a few), and really
get a sense for what works in a startup. They have most of the smartest, most
pragmatic and nicest programmers I've ever met. I'd highly recommend the
place.
If you have any other questions I'd be happy to answer them.
~~~
mxplusc
Thank you. It's refreshing to finally see some feedback on them.
Are there any "cons" of working there? What made you move on?
------
farhan
I run Engineering at a Pivotal Labs partner, Xtreme Labs (www.xtremelabs.com),
so also happy to comment on experiences at my shop.
~~~
mxplusc
Sure, please do comment! Or how should I reach you.
~~~
farhan
Well.. was more thinking of answering any questions you may have :)
| {
"pile_set_name": "HackerNews"
} |
Tesla is 'going out of business,' says former GM exec Bob Lutz - kushti
https://www.cnbc.com/2017/11/17/tesla-is-going-out-of-business-says-former-gm-exec-bob-lutz.html
======
Nokinside
At least some points he makes are true.
__Battery tech__
Tesla doesn't own any important battery technology or patents.
The main battery manufacturers are Panasonic, LG Chem and Samsung SDI. Panasonic is the leader and Tesla's manufacturing partner, but LG Chem has made lots of progress and they are supplying to GM. Panasonic is rapidly expanding its own battery production and is going to sell to other manufacturers also.
__Fixed cost and mass production__
Tesla has no experience with large-scale mass production (producing less than 100,000 cars annually is not big league). They have struggled with quality and delivery even with modest volumes for years, and it will be harder when volumes grow. They won't survive massive recalls if the quality does not improve.
They have to get Model 3 into mass production within a year or they will run out of money and investors' and customers' patience. Nobody wants to wait years for a car. The Roadster and Semi are just distractions. Tesla's future rests on Model 3.
------
mtgx
Wishful thinking. The main issue with Model 3 is just that they _can't make
batteries fast enough_. This is a solvable issue, and it would normally be
considered a "good problem to have" (not being able to make products fast
enough to meet demand).
The only reason this is seen so negatively right now is mainly because Musk
has set such high standards for Tesla by trying to increase production by 5x
within 12-18 months and hoping everything will go perfectly smooth.
~~~
jacquesm
Good problems to have are still problems and can still cause you to go out of
business. Car manufacturing is _all_ about logistics, quality and capital
management. Fail at any one of those three and your car company will fail.
There is a pretty good reason why there has been a huge consolidation
happening in the car industry over the last 30 years, getting those things
right isn't easy at all.
------
Fjolsvith
Hmmm... Didn't Pontiac go out of business recently? I thought they were owned
by GM. Am I wrong?
~~~
hkmurakami
iirc they wound down a few internal brands several years ago following the
financial crisis.
------
Boothroid
Well, he would say that, wouldn't he.
~~~
greglindahl
Same things he's been saying for years, yes.
| {
"pile_set_name": "HackerNews"
} |
Autonomous braking: 'The most significant development since the safety belt' - ZeljkoS
http://www.bbc.com/news/business-43752226
======
nxc18
Autonomous braking is a huge step, and, imho, more exciting at the moment than
full autonomous driving because it can save lives right now, and is unlikely
to take them (shots fired at Uber).
But buyer beware. Even within the IIHS safety standards, there is considerable
variability. I love my Toyota Corolla (2017), but it's braking will only take
a few mph off after warning you. I can't wait until my lease expires and I can
upgrade to the Subaru (edit: or maybe the Volvo from TFA). Look up the videos,
they are fully capable of stopping without any collision up to ~40 mph
(disclaimer: never rely on these safety features, it's still your
responsibility to be safe).
Do your research, happy and safe driving!
~~~
saltcured
I've also read enough complaints about inexplicable emergency braking
activation to say that all drivers should be aware these are on the road. You
need to consider that now random new cars may execute a "brake check" maneuver
in situations where you would never anticipate a human to have done the same
thing.
~~~
fantasticsid
I’ve thought about this and I think the best thing to do here is to relentlessly keep a long, safe distance from vehicles ahead. There are just things you can’t possibly see that the driver before you can (a puppy/squirrel/whatever that he wants to avoid killing, which you can’t possibly know about). You can’t anticipate what a human will do here.
You also give yourself a buffer when the vehicle ahead makes an emergency stop - imagine there’s a truck behind you.
I guess what I’m saying is these ‘brake checks’ are welcome, even if they only
serve to educate people.
~~~
dolzenko
Problem is everybody else has to behave similarly, otherwise other cars just take the place of that safe distance.
~~~
carlmr
It might be annoying, but even if a few cars get in between, you usually don't lose much by opening up a safe distance again.
~~~
NLips
I second this.
There are three scenarios if I'm not overtaking:
1) I'm going faster than the vehicle in front. In this case, it doesn't matter
if another vehicle pulls in between us because I'm about to overtake anyway.
2) I'm going slower than the vehicle in front. In this case, it doesn't matter
if another vehicle pulls in between us because I'm falling back and the gap is
ever-increasing.
3) I'm going approximately the same speed as the vehicle in front. In this
case, there tend to be two ways a vehicle pulls inbetween us:
a- it's merging from an on-slip-road (on-ramp?), in which case this doesn't
happen often, and I'll just fall back or overtake b- it's just overtaken me,
then slotted into a gap that's too small for it anyway. If the car has
overtaken me, it's mostly because it wants to go faster (in which case it will
probably vacate the space again soon) or it wants to pull off (in which case
it will definitely vacate the space again).
If I am overtaking, then yes, someone may pull into the gap, but I'm still
overtaking the vehicle I want to get past.
If you stop worrying about going 2mph faster than another lane of traffic,
then leaving a safe gap is mostly pretty easy and stress-free. It will only
take you 15 minutes longer to drive 200 miles at 65 than at 70.
~~~
Silhouette
Here in the UK, where we drive on the left and overtaking is only allowed to
the right of slower vehicles under normal conditions, there are some other
variations:
c- a vehicle in the lane to your left that you were going to overtake has
itself caught up with a slower vehicle and wants to pull out to overtake it,
moving into the gap in front of you
d- a vehicle with an impatient driver is undertaking traffic (passing to the
left of slower vehicles) and then moves into the gap in front of you.
The first of these is a normal situation, but still results in a vehicle
moving into the space in front of you, sometimes without accelerating up to
your speed first. Fortunately, it's usually easy to anticipate this situation,
and many drivers will helpfully drop back a little to allow more space for the
other vehicle to move out.
The second of these is a result of aggressive and probably illegal driving,
and is more of a problem because the driver cutting in may well be going too
fast, move out into a space that isn't really wide enough, and then brake
suddenly.
Still, in my experience these don't cause much delay if you're allowing a
sensible gap in front. I find people who try to keep closer to the car in
front to deter others from pulling into "their" space seem to get far more
upset about these situations than I do.
~~~
NLips
(also in the UK) Completely agree with your closing remark. By and large, by
deciding to not care about stopping cars getting in front of me, it doesn't
bother me when they do. Any delay is completely negligible.
------
boxcardavin
I drove into a dust storm in Central Washington in a rental Volvo several
years back and autobraking saved me from rear ending a car. I made it out, but
a huge pileup ended up happening just behind me.
[https://www.kiro7.com/news/massive-crash-closes-
eastbound-i-...](https://www.kiro7.com/news/massive-crash-closes-
eastbound-i-90-near-vantage-i/81725917)
~~~
jakobegger
I've read about a couple of accidents like this, and I always wonder why
people don't slow down or stop before they drive into dense dust or fog?
~~~
namanyayg
Maybe because of the worry of getting rear ended themselves? And I'm sure a
lot of people do; but then those drivers don't make the news.
~~~
KozmoNau7
Slow down, put on the rear fog light. That's how you're supposed to handle it,
but either people have never been taught to do it, or they've forgotten.
~~~
magduf
You can't do that. We do not have rear fogs lights in America.
~~~
KozmoNau7
Well, you should fix that, then. They're quite useful.
~~~
magduf
It's utterly impossible for us to change this. We didn't invent rear fog
lights in America, so because of this it would be completely impossible for us
to adopt them. We're only able to change the standards for our cars when we
invent them first.
------
oldgradstudent
It's a weird piece. It describes the XC90 as the safest car they ever tested, and says that it hadn't had a fatality since 2002. Then it attributes this to AEB.
The problem is that the XC90 got its AEB in 2015. This cannot be the reason
for the impressive safety levels since 2002.
~~~
Sammi
Yeah. The Volvo XC90 is a big car and most of the excellent safety record for
it can be attributed to the inherent higher safety of larger cars:
[https://www.edmunds.com/car-safety/are-smaller-cars-as-
safe-...](https://www.edmunds.com/car-safety/are-smaller-cars-as-safe-as-
large-cars.html)
[https://www.technologyreview.com/s/413018/laws-of-physics-
pe...](https://www.technologyreview.com/s/413018/laws-of-physics-persist-in-
crashes-big-cars-win/)
It's just physics. When a large object meets a small object then the large
object wins because it has more energy and ends up pushing the small object
backwards.
~~~
userbinator
Also the fact that it's a Volvo, which among other things will introduce a lot
of selection bias. For several years, there were no deaths in a Volvo 240
either:
[http://community.seattletimes.nwsource.com/archive/?date=199...](http://community.seattletimes.nwsource.com/archive/?date=19941022&slug=1937339)
~~~
oldgradstudent
Fatalities are quite rare. 2011-era midsize luxury SUVs have 13-15[1] driver
deaths per million registered vehicle years in the US (UK has half the death
rate per km, vehicle or inhabitants).
It is not _that_ improbable that a midsize luxury SUV that sold just 50,000 cars
in the UK since 2002 had no driver deaths.
AEB was first installed in the 2015 model year. If we assume[2] that the same
number of cars were sold each year, the XC90 had 20,000 registered vehicle
years with AEB. We should expect 0.03 deaths.
It would be more __surprising__ if there was a driver death in a Volvo XC90
equipped with AEB in the UK.
[1]
[http://www.iihs.org/iihs/sr/statusreport/article/50/1/1](http://www.iihs.org/iihs/sr/statusreport/article/50/1/1)
(first table)
[2] It's probably false, adjust with your favorite fudge factor.
------
matt_the_bass
I recently bought a VW Atlas with adaptive cruise control and front assist. It also has a variety of other sensors and assists. It is in no way an AV, but these features IMHO add a lot of value. If every car had them, I bet road safety
would increase significantly. I agree with the article. I think it is a big
deal.
~~~
akira2501
> If every car had them, I bet road safety would increase significantly.
If you look into the data on fatal accidents and examine them even for a few
minutes you'll easily see that this is a foolhardy bet. The causes of
accidents and fatalities are highly variable and not what you would expect.
There's also extreme variability between the individual states; for example,
Texas has more _total_ fatalities than California. There's extreme variability
between the sexes and for different age groups within those sexes. Finally,
there are motorcycles.
AI/Driverless, AV and all the attendant sensors and inputs will have an
impact, just not nearly as large of one as many people unfortunately expect.
~~~
edejong
> Finally, there are motorcycles.
Absolutely, I wish every prospective motorcyclist would study the stats (more
than 35 times increased mortality rate per driven km [1]) and discuss this
with their loved ones.
Those who insist on driving a motorcycle afterwards deserve their genes to be
removed from the gene pool.
[1]
[https://en.m.wikipedia.org/wiki/Motorcycle_safety](https://en.m.wikipedia.org/wiki/Motorcycle_safety)
~~~
ghaff
Perhaps you can also provide a reference to how risk-taking is inherently an
undesirable genetic component that should be weeded out.
~~~
edejong
I'd prefer to give a reference to how risk-taking that involves others is an undesirable trait. When I fatally hit a motorcyclist, even without any fault of my own, I might feel guilty the rest of my life. So indirectly, the irrational risk-taking of others involves those who are trying to act responsibly.
So, I suggest to take the risk-taking elsewhere: go climb a mountain, do base-
jumping or take the motorcycle to a racing track (bonus: real competition!).
But leave other motorists out of your game.
------
jbms
I'd like to see more variable brake lights to go with this:
i.e. a strip of light across the rear of the vehicle, that progressively
lights up according to how hard the vehicle is braking (or anticipates
braking, if it's autonomous).
Some cars have a flash-brake-lights-under-heavy-braking, but I think it would
help traffic flow if you can more easily distinguish a touch of the brakes
from a press of the brakes.
~~~
veritas3241
I've thought this would make sense too and I feel like I'm missing something
as to why it hasn't been implemented. Complexity perhaps?
Interesting to note, though, that we do have weak forms of braking that don't
light up the tail lights - heavy engine braking in the case of manual
transmissions, lighter engine-braking for automatics, and in the case of
electric vehicles (at least a Tesla in my experience) the lights don't kick on
from regen unless it's passed a certain deceleration level.
~~~
arbitrage
There's no tangible benefit big enough for the added complexity and
maintenance cost.
What exactly do you get from incremental brake lights?
~~~
jbms
You get clearer communication from the car in front of its actions and
intentions, in an intuitive way. This assists traffic flow and might enhance
safety.
------
dmitriid
Volvo's stated goal is:
"Vision 2020 is about reducing the number of people that die or are seriously
injured in road traffic accidents to zero. "[1]
As sceptical as I am about corporate statements, you can see that Volvo is
steadily working on this. They don't do splashy announcements or announce
revolutions in driving, and yet they bring more and more changes and
improvements to their cars. From assisted braking to lane assist to blind spot
information to city collision avoidance to many many other small and big
improvements.
[1]
[https://group.volvocars.com/company/vision](https://group.volvocars.com/company/vision)
------
vaughanb
Anti-lock brakes never yielded the accident reduction expected, primarily
because drivers used the improved braking performance to drive faster in
poorer conditions.
I guess the AEB works at reducing accidents because it IS autonomous and does
not "improve performance".
BTW the KPI is reduction in insurance cost.
~~~
clhodapp
> drivers used the improved braking performance to drive faster in poorer
> conditions
That's not the ideal outcome but it's still a net win, no?
~~~
gargravarr
A German taxicab company did a study, pitting half its fleet with ABS against
the other half without it. The accident rate stayed the same (in fact, was
insignificantly higher) in the half with ABS because the drivers felt
overconfident in the braking system:
[https://web.archive.org/web/20100921074926/http://psyc.queen...](https://web.archive.org/web/20100921074926/http://psyc.queensu.ca/target/chapter07.html)
~~~
clhodapp
Yep! But they got to drive more aggressively without a substantial increased
risk to their safety! Being able to go faster in the rain is useful in and of
itself!
------
ggg9990
Another reason the XC90 has a great safety record is that it’s a 4500 pound
car with a 4 cylinder engine. This isn’t safety enhancing in itself but does
ensure that it is only bought by people with the most sedate driving habits.
~~~
dingaling
> does ensure that it is only bought by people with the most sedate driving
> habits.
I would disagree with that statement!
As an urban cyclist the two most terrifying vehicles on the road for me are
the XC90 and the VAG PL71 ( Q7, Touareg, Cayenne ) not only because of their
size but also because they are predominently driven by distracted parents on
the school-run.
Two tonnes of SUV, poor outward visibility, stressed driver looking for a
parking spot on the kerb, kids bickering in the back == danger.
I can understand parents buy them to keep their little darlings safe from the
other nasty cars but I'd much rather jostle with twice as many normal-sized
sedans.
~~~
oldgradstudent
The article says that
> not a single person has been killed while driving it, or as a passenger.
It doesn't say anything about no one killed by the XC90, which is quite, ahem,
surprising for an article touting AEB.
------
frogcoder
I was wondering about the AEB when the Uber accident happened. It should’ve
been equipped with AEB but it still hit a pedestrian. Did they just pull out
the whole software and replace it with their own navigation logic?
~~~
facorreia
The company that makes the standard safety equipment said it had been
disabled[1].
[1] [https://www.bloomberg.com/news/articles/2018-03-26/uber-
disa...](https://www.bloomberg.com/news/articles/2018-03-26/uber-disabled-
volvo-suv-s-standard-safety-system-before-fatality)
------
tristanj
Volvo's automatic collision braking sure has improved since 2010
[https://www.youtube.com/watch?v=aNi17YLnZpg](https://www.youtube.com/watch?v=aNi17YLnZpg)
------
urban_winter
The BBC story, also reported in multiple other places, is a nice bit of Volvo marketing, but is nonsense.
Volvo introduced AEB in 2007 on the XC60. The XC90 only got it when they introduced the new generation a few years ago. Therefore claiming that the exceptional safety record of the XC90 is in any way related to AEB is just rubbish.
The reason why XC90s are associated with so few passenger injuries (note, no claims are made for injuries to other road users by XC90s) is that they are large, heavy and chosen by safer-than-average demographics.
------
mirimir
Sure, AEB is a great thing. But it's odd to see "since the safety belt". Air
bags have saved more lives than safety belts, haven't they?
Also, I can imagine additional advantages of AEB. If someone's tailgating,
just hit your brakes enough that their AEB will trigger.
~~~
vilhelm_s
The first google hit
([http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.506...](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.506.777&rep=rep1&type=pdf))
says
> in the period 1991-2001 [...] about 109,000 lives were saved by belts and
> 8,000 by airbags
so I guess seat belts were a bigger improvement.
~~~
raverbashing
I guess the question then is: why bother?
Also airbags depend on seatbelt use to be effective, and some of them have
killed people.
~~~
allannienhuis
? why bother saving 8,000 lives?
In what world is it better to NOT have airbags?
~~~
raverbashing
While 8k is a substantial value, other technologies might be more effective,
less costly and possibly safer.
Remember the takata airbag defect killed 15 people in the US.
[https://www.consumerreports.org/car-recalls-
defects/takata-a...](https://www.consumerreports.org/car-recalls-
defects/takata-airbag-recall-everything-you-need-to-know/)
~~~
andygates
8000 vs 15, and that's not a problem with airbags, it's a problem with
defective industrial components and corporate nonsense.
------
harel
This feature is also available in "cheaper" cars. My Kia Niro has it, and although I didn't get a chance to make "full" use of this (thankfully), I did incur the "too close" beep, which prompted me to brake one time I was not paying the road the attention it deserved. As a side note, with auto distance-keeping cruise control, lane assist that actually moves the wheel to keep me in lane, side radar that alerts if a car is coming as I try to switch lanes and the AEB, this is a great entry into autonomous driving (as far as some core systems that are actually in commercial use already).
------
Tade0
I had this engage in my car once. It was the first time I was driving with
glasses on - I must have misjudged the distance between me and the next car.
Scared me, but not as much as the guy following me a little too closely in his
E46.
It does beep randomly sometimes - usually in heavy rain. But that one time it
turned on pretty late, so it's a good last resort.
------
vigdals
Those are some really impressive stats.
I think this is a combination of great safety equipment, the safety of the car itself (crash tests and so on) and the people who buy it. It's not the most hardcore drivers who buy a Volvo, even though the 2017 and newer models are really good looking. Volvo has always been a pioneer in safety as well.
~~~
KozmoNau7
Define "hardcore driver". Is it someone who drives aggressively and takes
dangerous risks?
~~~
vigdals
Yep
------
jnsaff2
My relative owns one. The Adaptive cruise control only picks up moving cars in
front of you and tries to kill you when there’s a stopped car in front of you,
say at a red traffic light. I’ve never been brave (or stupid) enough to see
whether the AEB would counter that especially in marginal road conditions.
------
gmiller123456
Anybody know how well these handle water? I imagine a situation after/during
heavy rain with giant puddles of water. A car in the lane next to me hits a
puddle at high speed throwing a lot of water in the air in front of my car.
Does the car slam on its breaks?
~~~
dingo_bat
If the splash is large enough to cause visibility issues, I'd say it must slam
the brakes.
------
alphadevx
There has been a similar system on Mercedes cars (Collision Prevent Assist)
for years:
[https://www.youtube.com/watch?v=h5ia5e07BqU](https://www.youtube.com/watch?v=h5ia5e07BqU)
~~~
pmontra
Don't use that if you're leading a bicycle race. It happened in the Abu Dhabi Tour earlier this year: the organizers' car (a Mercedes) autobraked and the bunch crashed into it. They disabled the sensor after that. Obviously it's not a normal use case.
[http://www.cyclingnews.com/news/abu-dhabi-tour-organisers-
bl...](http://www.cyclingnews.com/news/abu-dhabi-tour-organisers-blame-
automatic-brake-sensor-for-cavendish-crash/)
------
alkonaut
Isn't autonomous braking standard in most new premium-ish cars? My not-so-
premium VW has it. Any car that has a distance-sensing cruise control should
have it.
------
squam
Perhaps Tesla should consider licensing this tech from Volvo.
/snark
~~~
taneq
Serious question: Would the Volvo system have picked up and appropriately
responded to the gore point involved in the recent fatal Tesla crash? Any time
anyone raised the question of Tesla's AEB's reliability, the responses were
along the lines of "no AEB is perfect", "AEB only works on cars", etc.
~~~
unityByFreedom
> Would the Volvo system have picked up and appropriately responded to the
> gore point involved in the recent fatal Tesla crash?
I'd rather ask, would the Volvo system steer towards the barrier like Tesla's
AP may have done [1] [2]?
"IIHS research shows that AEB systems meeting the commitment would reduce
rear-end crashes by 40 percent." [3]
AEB on its own may save lives. Whether autosteer systems do or not is an open
question.
[1] [https://youtu.be/VVJSjeHDvfY?t=37s](https://youtu.be/VVJSjeHDvfY?t=37s)
[2] [https://youtu.be/6QCF8tVqM3I?t=28s](https://youtu.be/6QCF8tVqM3I?t=28s)
[3] [http://www.iihs.org/iihs/news/desktopnews/u-s-dot-and-
iihs-a...](http://www.iihs.org/iihs/news/desktopnews/u-s-dot-and-iihs-
announce-historic-commitment-of-20-automakers-to-make-automatic-emergency-
braking-standard-on-new-vehicles)
~~~
taneq
Both are good questions.
1) Do other autosteering systems (such as Volvo's) share this failure mode? (I
don't know much about Pilot Assist but it seems to require a lead car to
follow, is that right? Apparently Pilot Assist 2 doesn't, though? Currently
reading [http://forums.swedespeed.com/showthread.php?348321-Auto-
Pilo...](http://forums.swedespeed.com/showthread.php?348321-Auto-Pilot-
Assist-2-thread-\(experience-problems\)) and it doesn't sound great.)
2) Should AEB stop the car if there's something solid in front of it,
regardless of what the autosteer system is doing? (I would have thought so,
and it's disappointing that in this case it didn't!)
------
Bromskloss
Still, I can't help but feel sad, as a human, to be taken out of the
equation and not be needed anymore.
~~~
contravariant
As a programmer becoming obsolete is the greatest possible achievement.
------
roflchoppa
oh man my Datsun is a coffin compared to these things ha. lap belts are
designed to cut you in half <:0
| {
"pile_set_name": "HackerNews"
} |
Goldman, JPMorgan Said to Fire 30 Analysts for Cheating on Tests - petethomas
http://www.bloomberg.com/news/articles/2015-10-16/goldman-sachs-said-to-dismiss-20-analysts-for-cheating-on-tests
======
tokenadult
This was an interesting read on financial industry practices for career
development. The regulatory environment has changed, and thus manager scrutiny
of junior employees has increased.
| {
"pile_set_name": "HackerNews"
} |
Why Every Analyst Should Learn To Code - sixtypoundhound
http://www.marginhound.com/why-every-analyst-should-learn-to-code/
======
dguilder
Better to have every programmer learn to be an analyst. Practically any
project could survive if the analyst was fired and the programmer had to pick
up the slack, but the reverse is not true.
~~~
faucet
In big corporations the opposit might be true. Programming is outsourced and
analysts are driving the decision-making (in the best case). The point with
the programming skills stays, though. To me it seems there exists a whole
corporate ecosystem with analysts who can't code and the managers around them
who do not ask tough questions. It is even hard to get scored there if you can
and want to code and generally to dig deeper, because it bothers them.
------
sixtypoundhound
Ah... but I'm trying to lure them _into_ the fold, so they will become...
programmers? :)
| {
"pile_set_name": "HackerNews"
} |
The F# development home on GitHub is now dotnet/fsharp - omiossec
https://devblogs.microsoft.com/dotnet/the-f-development-home-on-github-is-now-dotnet-fsharp/
======
kpremote
I'll just throw a random comment here:
F#, to me, has the most beautiful syntax. Reading F# code is such an eye
pleasing experience!
I actually don't know much about the language, but always dream about being an
expert in it and using it every day.
Edit: to give some context, the others I find especially beautiful syntax-wise
are Ruby, Lisp, Haskell, OCaml (very similar to F#). Still, I think F# is the
best.
------
JCoder58
For those interested in the current state of F#, Sergey Tihon's Blog tracks
the latest news.
[https://sergeytihon.com/](https://sergeytihon.com/)
------
spanxx
Hijacking the thread (bear with me).
Is F# a good language to work with on Linux servers? Is it possible? Would you recommend it?
~~~
dustinmoris
Absolutely. F# is just one of 3 languages which you can use to develop .NET
Core applications. .NET Core itself is cross platform and really well
supported on Linux and macOS. The other two languages are C# and VB.NET, but
personally I think that F# is just the nicest of the three.
It's also worth noting that .NET Core is not only cross platform, but an
extremely performant runtime which really hits the ceiling in various
benchmarks
[https://www.techempower.com/benchmarks/#section=data-r17&hw=...](https://www.techempower.com/benchmarks/#section=data-r17&hw=ph&test=fortune&l=hra0hp-1)
~~~
BorRagnarok
Not really, and the highest Core test spews errors in that test. [0]
[0]
[https://www.techempower.com/benchmarks/](https://www.techempower.com/benchmarks/)
~~~
oblio
I see 1 error for: aspcore-ado-pg
------
brianzelip
Changelog podcast with a focus on F#,
[http://changelog.com/podcast/62](http://changelog.com/podcast/62).
------
joshsyn
I wouldn't recommend F# on other platforms yet, especially because .NET Core isn't supported very well.
~~~
phillipcarter
Can you clarify what you mean by not being supported very well? F# has been
fully supported on .NET Core for a while: it's a part of the .NET SDK, FSI
support is in, it's fully tooled in Visual Studio, etc. Would love to know
what you feel is missing here.
~~~
joshsyn
I suppose language-wise the support is there, but there is a lack of libraries. A while ago I was considering F# for the backend. While looking for a database client I found that most third-party libraries like SQLProvider and Rezoom.SQL didn't support .NET Core very well.
~~~
akra
SQLProvider has been ported to .NET Core - I've used it with no problem when scripting/prototyping across a variety of different data sources at once. Admittedly I tend to just use straight ADO.NET anyway - it's simple enough to use, and with F# you're not saving all that much code, if any at all, by moving to Dapper, since the language tends to be more succinct.
| {
"pile_set_name": "HackerNews"
} |
Judge allows temporary ban on 3D-printed gun files to continue - LinuxBender
https://arstechnica.com/tech-policy/2018/08/judge-allows-temporary-ban-on-3d-printed-gun-files-to-continue/
======
M_Bakhtiari
What about CNC milled gun files? Seems like a much bigger threat since mills
capable of producing reliable firearms are probably orders of magnitude
cheaper and far more ubiquitous than the necessary 3D printers.
And of course thugs sell conventional mass-produced guns on the streets for
another few orders of magnitude cheaper and more ubiquitous than CNC mills.
| {
"pile_set_name": "HackerNews"
} |
Shen is a portable functional programming language - curtis
http://shenlanguage.org/
======
616c
Is anyone using this, despite the license?
I have seen this here on HN and elsewhere. The only reason I avoid is the
weird not-so-FOSS license and key-to-the-chest mentality.
I would love to hear semi-detailed experiences using it. And are there any
more open alternatives built on CL? I am very interested in this idea.
~~~
ZenoArrow
"And are there any more open alternatives built on CL?" Shen's predecessor Qi
runs on CL, and the first version was released under the GPL...
[http://en.wikipedia.org/wiki/Qi_%28programming_language%29](http://en.wikipedia.org/wiki/Qi_%28programming_language%29)
~~~
616c
I am aware of Qi but the fact that the second version went completely
commercial kind of sounded like a non-starter to me.
Have you used it?
------
erikb
A little bit off topic: Does anybody else dislike the word "portable [...]
language"? I never met a truly portable language. Examples: You can use Java
on all platforms, but if you are not on Windows it sucks. You can use Python
on all platforms, but if you are on Windows it sucks. It's always that the
language might be able to run on different platforms, but as a coder you need
more, and most of the standard tooling often is only taken care of well
enough on one or two platforms.
~~~
nightcracker
I don't have any experience with Python sucking on Windows? The only thing
that's a bit of a hassle is distributing the interpreter, but this is no
harder than a VC++ redistributable.
~~~
groovy2shoes
I've had good luck using cx_Freeze to "build" Python applications on Windows
(and on Linux, for that matter). It bundles the bytecode of your Python app
with a Python interpreter stub and any necessary shared libraries. On Windows,
it can even build an MSI package. Users can't tell the difference from other
applications.
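For anyone curious what that setup looks like, a minimal sketch of a cx_Freeze
build script might look like the following (the app name, "app.py", and the
package lists are placeholders, not something from the comment above):

    # setup.py -- minimal cx_Freeze build script; names are placeholders
    from cx_Freeze import setup, Executable

    build_options = {
        # include packages the app imports dynamically, drop what you don't need
        "packages": ["json"],
        "excludes": ["tkinter"],
    }

    setup(
        name="myapp",
        version="0.1",
        description="Example frozen application",
        options={"build_exe": build_options},
        executables=[Executable("app.py")],
    )

Running "python setup.py build" produces the bundled build directory, and on
Windows "python setup.py bdist_msi" should produce the MSI package mentioned
above.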
------
prodigal_erik
Prior discussion, mostly of the restrictive license:
[https://news.ycombinator.com/item?id=4730535](https://news.ycombinator.com/item?id=4730535)
------
moron4hire
What are the functional languages that are in common usage that are not
portable?
~~~
chc
In the sense that they run under CLisp, SBCL, Clojure, Scheme, Ruby, Python,
the JVM and Javascript? I can't think of any that are portable in that sense.
~~~
CMCDragonkai
Can shen be embedded in the host language?
~~~
tizoc
That depends on the port. The Clojure port allows this:
[https://github.com/hraberg/shen.clj#神-define-prolog-and-
defp...](https://github.com/hraberg/shen.clj#神-define-prolog-and-defprolog-
macros)
------
mp8
Looks interesting. However, does the following not mean that it's not
portable?
> Note that if Shen is not running on a Lisp platform, then function may be
> needed to disambiguate those symbol arguments that denote functions.
~~~
tizoc
It means that to be portable you have to wrap symbols that represent functions
in (function <the-symbol>) when passing them as arguments. Not doing so will
work on some ports but not on others, and should be considered "undefined
behaviour".
Portable:
(foldl (function +) 0 [1 2 3])
May work depending on the backend:
(foldl + 0 [1 2 3])
------
p4bl0
The "Shin in 15 minutes" tutorial [1] is really nice (I would even say it's
more 5 minutes than 15). Once you read the beginning of it you can appreciate
the example on the front page.
[http://www.shenlanguage.org/learn-
shen/tutorials/shen_in_15m...](http://www.shenlanguage.org/learn-
shen/tutorials/shen_in_15mins.html#shen-in-15mins)
------
igl
Interesting but I wish there were more functional languages that omit braces.
I only know of F# and the js preprocessor livescript.
Seems like embracing lisp style is a must do for many lang-creators. Or is it
just for the sake of parsing simplicity and dislike of whitespace
significance?
~~~
jtmoulia
You can add erlang/elixir to the list of paren-free functional languages.
~~~
fenollp
No you cannot.
~~~
jtmoulia
Mind expanding?
edit: wait, are we talking about no parentheses at all? If so, I'm wrong. I
meant lisp-like.
------
__Joker
What is the use case of being portable to other languages? I can vaguely
surmise that being portable to other languages might provide more traction for
using Shen in existing projects.
------
fithisux
But the question remains, has anyone used this language?
------
jopython
Does the language support concurrency primitives as part of the core?
| {
"pile_set_name": "HackerNews"
} |
PHP: So you'd like to migrate from MySQL to CouchDB? - Part I - barredo
http://till.klampaeckel.de/blog/archives/74-PHP-So-youd-like-to-migrate-from-MySQL-to-CouchDB-Part-I.html
======
dutchbrit
A very good introduction! I actually haven't looked much at CouchDB, how is it
scalability wise? I first had a look at Cassandra, didn't completely like what
I saw so I've been looking more into MongoDB.
~~~
emehrkay
I'm exactly where you are at, except I am also moving away from PHP to Python
(the lang makes SO much sense) :)
~~~
dutchbrit
Freaky - I've been thinking about doing that too, but I've completed so much
already in PHP - maybe when I release a second version of my application.
Maybe I should give Django a try.
------
tillk
There's also part II and part III.
Thanks for sharing on here, guys!
~~~
kennu
It's a nice series. What I find hardest in CouchDB is dealing with eventual
consistency and conflict resolution (instead of transactions). I wish there
were more articles and literature about how to handle that stuff, in various
kinds of application scenarios. (Not just documenting how _rev and _conflicts
work etc.)
~~~
tillk
First off, thanks! And sorry I didn't catch your comment earlier.
I'll make sure I focus on that in a later part of the series! :)
| {
"pile_set_name": "HackerNews"
} |
Data API for Amazon Aurora Serverless - vincentdm
https://aws.amazon.com/blogs/aws/new-data-api-for-amazon-aurora-serverless/
======
coderecipe
With this, VPC is no longer needed from lambda call to RDS, and this means
that cold start time will be lowered from seconds to milliseconds. I made a
ready to use recipe (source code+deployment script+demo included) here
[https://coderecipe.ai/architectures/77374273](https://coderecipe.ai/architectures/77374273)
hopefully this helps others to easily onboard to this new API.
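To make the "no VPC, no connection" point concrete, a rough sketch of a Data
API call from Python via boto3's rds-data client looks roughly like this (the
ARNs, database name, and SQL below are placeholders):

    import boto3

    # Plain HTTPS + IAM: no VPC config, security groups, or persistent DB connection.
    client = boto3.client("rds-data")

    response = client.execute_statement(
        resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-cluster",         # placeholder
        secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret",  # placeholder
        database="mydb",
        sql="SELECT id, name FROM users WHERE id = :id",
        parameters=[{"name": "id", "value": {"longValue": 42}}],
    )

    for record in response["records"]:
        print(record)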
~~~
scarface74
This only works for Aurora Serverless, not regular Aurora or any other managed
databases.
------
etaioinshrdlu
I told my AWS account manager today that this is what I wanted to see on
Aurora Serverless:
\- mysql 5.7 compatibility
\- acting as replication master or slave
\- faster upscaling, more likes 5s instead of 30s
\- publicly accessible over internet (the rest of RDS has this)
\- aurora parallel query built in
\- aurora multi master built in
Basically, I asked for one product to merge all their interesting features.
That sounds nice and like a one-size-fits all database. I would very much like
to use it in production. It would require very little maintenance.
------
hn_throwaway_99
I wonder what effect this may have for AWS Lambdas connecting to a DB for
synchronous calls (e.g. through API gateway). The biggest issue with Lambdas
IMO is the cold start time. If your Lambda is in a VPC the cold start time is
around 8-10 _seconds_ , and if you have decent security practices your
database will be in a VPC. I know AWS said they would be working on improving
Lambda VPC cold start times, but would like to know if using Aurora Serverless
with these kind of "connectionless connections" would also get rid of the need
to be in a VPC. I've used Aurora (and really, really liked it) but I haven't
used Aurora Serverless.
~~~
ftcHn
Would it "get rid of the need to be in a VPC"? I think yes.
It looks like by enabling Data API, you expose that endpoint to the entire
internet - which is secured like all the other AWS services with HTTPS, IAM,
etc.
------
cavisne
Another cool thing about this is it avoids the connection pool issue with
Lambda (where concurrent requests can't reuse connections).
Aurora is already pretty good at handling a lot of connections but this is
even better.
~~~
djhworld
You can create a connection pool in a static context that lives throughout the
lifetime of the JVM.
Although admittedly if Lambda scales to multiple JVMs as request rate
increases, you'll have multiple pools. Or if your request rate is low you'll
not get much benefit
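The reuse idea isn't JVM-specific either. A rough Python sketch of the same
pattern (pymysql; the host and credentials below are placeholders) keeps the
connection at module scope so warm invocations of the same container share it:

    import pymysql

    # Created once per container at import time; warm invocations reuse it.
    # Real code would pull credentials from env vars or Secrets Manager and
    # handle reconnecting if the connection has gone stale.
    connection = pymysql.connect(
        host="mydb.cluster-xyz.us-east-1.rds.amazonaws.com",  # placeholder
        user="app",
        password="change-me",
        database="mydb",
        autocommit=True,
    )

    def handler(event, context):
        # Only the cursor is per-invocation; the connection itself is shared.
        with connection.cursor() as cursor:
            cursor.execute("SELECT COUNT(*) FROM users")
            (count,) = cursor.fetchone()
        return {"userCount": count}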
~~~
cavisne
Lambda containers serve 1 request at a time, so the number of JVM's tends to
scale out a lot quicker than you would expect. This is more of a broader
problem with Java on lambda, as the classic Java way of creating a bunch of
singletons on startup and accessing them from multiple threads doesn't work,
you just get a really slow cold start time and some near empty connection
pools.
------
tienshiao
The beta version seemed like it had pretty poor performance:
[https://www.jeremydaly.com/aurora-serverless-data-api-a-
firs...](https://www.jeremydaly.com/aurora-serverless-data-api-a-first-look/)
Does anyone have performance feedback now that it is no longer beta?
~~~
reilly3000
I'm definitely excited about this, especially after paying $36/month for a NAT
that I barely used for a long, long time, and spending too many hours
configuring it for my Lambdas.
That said, I don't know how Jeremy Daly got away with making that post, per
AWS preview terms. They are pretty explicit about not posting benchmarks on
their preview products, and that makes sense as the API is not stable at all.
Still, I'm glad to see the data and hope that the performance has improved. I
wasn't accepted into the preview, and I've started work now to move most of
our infrastructure to GCP. It notably does not require any fancy footwork to
have a Cloud Function talk to a Cloud SQL instance
[https://cloud.google.com/functions/docs/sql#overview](https://cloud.google.com/functions/docs/sql#overview)
~~~
keysmasher
Wait, if I read that doc correctly, it seems to suggest that connections
will be closed when the function goes cold. So the locked-up connections left
when a lambda dies without disconnecting aren't a problem with Google functions?
Think of a spike in traffic: 100 functions connect, one connection per
function. Then there's a break and 80 of them go cold. Your max connections is
100, so if those 80 didn't disconnect and are waiting to time out, you are
stuck. Any more functions coming online won't get any connections.
The only workaround in AWS was to set up an external connection pool, which
kind of begins to kill the serverless savings and all.
------
blaisio
... Don't you have to establish an HTTPS connection to use this API? Is that
really easier than using the existing MySQL protocol? Or is it really so
horrible that HTTPS is faster?
Things establishing new connections will never be as fast as things reusing
existing connections. It seems wasteful to ignore this.
~~~
smt88
This appears to be targeted at Lambda functions, which can't reuse existing
connections between executions.
Also, establishing an HTTP connection is much faster than establishing a
typical database connection, in my experience. I don't know why that is.
| {
"pile_set_name": "HackerNews"
} |
Using ReSharper with MonoTouch Applications - dnesteruk
http://blogs.jetbrains.com/dotnet/2013/02/using-resharper-with-monotouch-applications/
======
jackfoxy
On a related note, Dave Thomas has got F# working with MonoTouch
<http://7sharpnine.com/posts/monotouch-and-fsharp-part-i> and
<http://7sharpnine.com/posts/monotouch-and-fsharp-part-ii/> opening a path to
functional and perhaps functional/reactive programming on iOS
| {
"pile_set_name": "HackerNews"
} |
Learn You a Haskell for Great Good - tosh
http://learnyouahaskell.com/
======
DonaldPShimoda
I tried LYAH before I had any real functional experience, and it went poorly.
I think I wasn't dedicated enough.
I took a programming languages class taught in Racket, and by the end of that
semester I felt like I had finally "gotten" what functional programming was
all about.
Tried LYAH again and... it still didn't quite stick. Not sure why. But I was
determined, because I'd heard about Haskell so much on the internet.
So I took a follow-up "class" where we just implemented projects in the
functional languages of our choice, and I chose to do everything in Haskell. I
asked my research advisor (I work in a PL research lab) for guidance, and
that's when Haskell really started to make sense for me. I'm not sure why LYAH
wasn't good for me, when it's touted as being a great starting place for so
many people, but now Haskell is one of my favorite languages by far.
~~~
BoiledCabbage
> I'm not sure why LYAH wasn't good for me, when it's touted as being a great
> starting place
It's not. It's a terrible starting resource. Majority of people I anecdotally
hear gave up on Haskell seem to mention LYAH. Yes the material is there, but
it's not presented in a way that generally sticks and fits pieces together. I
guarantee someone will reply to this thread and say it worked for them, but in
general people seem to be unsuccessful with it.
There are better options out there.
~~~
neon_electro
What would you recommend?
~~~
wmfiv
Another entry in the not Haskell category, Functional Programming in Scala is
outstanding.
~~~
sulam
Yes, this book is one I recommend to anyone who wants to really understand
functional programming. Admittedly it's not Haskell, but it's the book I
wished I'd had back when I first learn FP (and Scala).
~~~
ChristianGeek
It lost me about 2/3 in when it got to monads.
~~~
acjohnson55
That's OK. You've already learned more than enough. My life in Scala was just
fine once I understood that a monad was just a data type that admitted a
flatmap operation.
------
AtticHacker
LYAH was my first taste of functional programming. Before that I only had 2-3
years of experience using PHP and Ruby. I read this book and quickly fell in
love with FP and category theory, for the first time really started to enjoy
programming to the fullest. This book is also the reason I was able to land a
job as an Erlang developer, and teach a little bit of Haskell to others. Now,
5 years later, I continue my studies with languages such as Emacs Lisp, Guile,
Clojure, Elixir, Elm, Hy.
I know this book isn't really popular (reading these comments) but to me it
holds a lot of emotional value and I felt obligated to share my experience.
I'll always be grateful for what this book taught me, and thankful to Miran
for writing it.
~~~
dmix
What are you building with Erlang? If you don't mind sharing. I always love
hearing about it being used in production... Usually it's used in interesting
ways. Unfortunately not to many jobs available (not that I've tried very hard
to find one).
~~~
mc_
We use Erlang to build Kazoo
([https://github.com/2600hz/kazoo](https://github.com/2600hz/kazoo)). In
production since 2011, deployed around the world in various clusters.
------
oalessandr
Another good (imho better) option is the Haskell book
[https://haskellbook.com/](https://haskellbook.com/)
~~~
myegorov
Not to start a flame war, but imho this is my definition of a "terrible book".
Of the everything and the kitchen sink variety. By contrast, I very much
enjoyed LYAH, not least for its self-effacing humor. Hutton was too dry for
me. There's a less well-known "Thinking Functionally with Haskell" book by
Richard Bird that is concise and targets the mathematically inclined crowd.
~~~
sixstringbudha
It is something that is marketed very well. From the sample content, it didn't
look interesting to me at all.
LYAH may not be the greatest Haskell tutorial, but it makes an easy read. That
means you are not afraid to go back to it again and again, which is the
crucial aspect of learning anything sufficiently complex or new, or both.
~~~
sridca
I agree re: marketing -- something I had easily fell for. One of the authors
has moved onto writing another Haskell book (a "work of art" apparently), but
this time my interest, for an intermediate level book, is elsewhere:
[https://intermediatehaskell.com/](https://intermediatehaskell.com/) (which
would be more informative--thus pedagogically sound--than a work of art).
Lesson learned: don't fall for the marketing of art.
Generally the best way to learn is by actually doing projects:
[http://www.haskellforall.com/2017/10/advice-for-haskell-
begi...](http://www.haskellforall.com/2017/10/advice-for-haskell-
beginners.html)
------
dogweather
LYAH also didn't do much for me.
I got started with Haskell Data Analysis Cookbook. As an experienced
programmer, I LOVE the "cookbook" format. E.g., "Keeping and representing data
from a CSV file" or "Examining a JSON file with the aeson package". If you
need to actually USE the language, this is a great way to get started.
Complete programs are shown.
Unfortunately, most books dive into the language _implementation_ instead of
teaching its _interface_. (This probably points to a Haskell design weakness:
deep learning of the impl. is nec. for success.)
But my REAL wish: Because I'm like many people and have experience with dozens
of languages. I want books that leverage that knowledge and experience. E.g.,
to explain monads, just say it's an interface specification, and give the API.
It'll make sense to me.
~~~
js8
> But my REAL wish: Because I'm like many people and have experience with
> dozens of languages. I want books that leverage that knowledge and
> experience. E.g., to explain monads, just say it's an interface
> specification, and give the API. It'll make sense to me.
I understand the wish but I personally believe it is a bad idea. I think
Haskell is just too different due to purity and laziness and trying to adapt
your intuitions from imperative programming is going to mostly fail, badly. I
suggest it's better to approach Haskell with a completely blank mind and learn
the correct intuitions through just being very rigorous at the beginning.
I have written a bit of Haskell myself but I still get surprised by evaluation
order, for example. But it has paid off, I think, it's a lot more
understandable and modular and beautiful code that I write in Haskell than
elsewhere.
------
scottmsul
This is my all-time favorite Haskell resource:
[https://en.wikibooks.org/wiki/Haskell](https://en.wikibooks.org/wiki/Haskell)
Just jump to the bottom and go through as many pages as possible. Going
through Functors, Applicatives, and only then Monads puts everything in a
broader context and makes Monads easier to understand.
------
s_dev
Still recall the advice from a classmate when I asked if I should learn Haskell --
he said "Just learn it because it'll change how you think about programming".
I think people should learn it for this exact reason -- it really does change
how you program as well as thinking about programming even if on a daily basis
you're not writing pure Haskell or Clean or Scheme or whatnot.
Common programming concepts like "iteration", "state", "formal methods",
"parrallellism" are concepts that functional programming makes you reconsider
from a new perspective.
~~~
Barrin92
I always think that this attitude is a little bit off. Obviously learning a
new paradigm or a new language is great and expanding your horizon is very
rarely bad, but I feel a little bit sad about shoehorning Haskell, or
functional languages in general, still into this education category.
Haskell is a very solid production language and you can put industrial grade
software in Haskell out there, and it is in fact used by a fairly serious
amount of companies at this point.
Instead of just taking Haskell features and porting them into already adopted
languages, I'd really like to see more companies just go directly with
Haskell, or Clojure, or Ocaml and F#. It does work.
~~~
jmaa
I'm with you on Functional Languages being solid and useful, I've done quite a
lot of work in Standard ML, but the parent's point wasn't _only_ do it for the
education, it was _at least_ do it for the education.
------
khannate
Based on my experiences using this as a reference while taking a class on
functional programming, I think that while many of the explanations and
examples are helpful, the ordering is a bit weird. For example, pushing off
the explanation of higher-order functions for so long seems questionable,
since they're a fundamental feature of the language.
~~~
gh02t
I agree, it's really disjointed. It spends a long time on relatively easy
stuff and not much time on the hard things, IMO. I also have trouble with the
style and I think it goes too far with the sort of informal, conversational
tone. Ultimately Haskell is a language with a lot of formality at its core.
You don't have to present it as all abstract category theory theorems and
such, but that doesn't mean you should avoid formality entirely.
------
nanook
What do people think of Real Word Haskell
([http://book.realworldhaskell.org/](http://book.realworldhaskell.org/)) ?
~~~
egl2019
Excellent except that it is dated. GHC, libraries, and tooling have moved on
in ten+ years. I would buy a new edition instantly.
------
ionforce
Many years after my first exposure to this book and having learned more heavy-
FP in Scala, I would not recommend this as a resource for learning either FP
or Haskell.
But let that not be a pock on the overall great mission to make FP more
accessible.
------
Cieplak
Haskell is hard, mainly because it's normally not a first language, and the
first language people learn probably has Algol-like syntax (e.g. C, Java). I
think Erlang is a bit easier to learn due to having fewer features, and for me
was the gateway drug to Haskell. It let me learn to program with recursion
instead of loops, and pattern matching instead of `if`s. The type system is
the best part of Haskell, but unfortunately it makes it very easy to get stuck
when starting out ("IO String" vs "String", "ByteString" vs "Text"). It's well
worth the investment, though, to get a tool that lets you develop concise code
like Python/Ruby but gives you strong guarantees of correctness. Also worth
noting the incredible ecosystem of libraries and tooling, like _stack_ and
_intero_ :
\- [https://docs.haskellstack.org](https://docs.haskellstack.org)
\-
[https://commercialhaskell.github.io/intero/](https://commercialhaskell.github.io/intero/)
~~~
vapourismo
I wouldn't really consider stack to be a great tool. There is Stackage which
may be useful but the tool itself falls apart quickly when used to compile
more than a single package executable.
That being said, one should have a look at these:
- cabal Nix-style builds
- ghcid for continuous type-checking
- stylish-haskell for light code formatting
- brittany for slightly more invasive code formatting
- hlint for linting
------
Swizec
I read LYAH a few years ago and I enjoyed it. These days I'd probably find it
tries too hard to be funny, but that's me getting older, not the book getting
worse.
It is very much written by a (young) 20-something targeting other
20-somethings with a similar sense of humor. Came out in 2011, which puts the
author at 24 years old. So ... you know.
Went to high school with the dude for 4 years. He's cool.
------
jose_zap
I wish this was shared less, it does a really bad job at teaching practical
Haskell skills. For a long time this book left me with the feeling that I was
unable to learn the language.
~~~
Oreb
People have different styles of learning. I loved LYaHfGG, but hated Haskell
Programming from First Principles. As is evident from this thread, many others
feel the opposite way. Both books deserve to exist and be shared frequently.
------
wilsonfiifi
What about "Get Programming with Haskell"[0] from Manning. Is it any good?
[0]https://www.manning.com/books/get-programming-with-haskell
~~~
Koshkin
Yes, the book is true to its title.
------
mbroncano
It doesn’t seem to be the most popular opinion around, but I must say I got
back into FP (which I came to profoundly despise after completing my CS
education) thanks to this book.
It’s certainly not a book for the profane, despite its approachable aspect.
------
dasil003
It seems obvious to me that LYAH is an imitation of Why's Poignant Guide to
Ruby, and that neither is actually a good learning resource. I don't mean to
disparage these as works of art, but _why did it first, _why did it better,
and if you actually want to learn either ruby or haskell neither should be
your first stop.
~~~
thom
Yeah, I'm as much a fan of whimsy as the next guy, but every language wanting
one of these twee storybook intros is increasingly grating (Clojure has
Clojure for the Brave and True etc).
------
Insanity
I read LYAH and enjoyed it. But I did play around with Haskell a lot at the
same time.
In addition, the IRC channel was immensely helpful along the way.
Nowadays I just use haskell for a few things at work or for toy projects, and
the initial steps were taken with LYAH. (Beginner level though and I think I
learned more from irc than the book in the end)
------
lolive
I won't comment the book. But my personal feeling is that you need a working
environment first. And for that, you need to follow the advice at this
resource: [https://medium.com/@dogwith1eye/setting-up-haskell-in-vs-
cod...](https://medium.com/@dogwith1eye/setting-up-haskell-in-vs-code-on-
macos-d2cc1ce9f60a) with my comment (cf Olivier Rossel). If you are under
Ubuntu, forget about the brew paragraph. Just curl -sSL
[https://get.haskellstack.org/](https://get.haskellstack.org/) | sh
After all that, you will have an editor that can run/debug some Haskell.
~~~
lolive
I will quote (what I think is) an important part of the learning process:
"Much of the difficulty in learning a new language is simply setting up the
environment and getting comfortable with the tools to start programming."
Having a kind of IDE to play around in was immensely useful in my case. I then
could try to solve small problems I invented. And I used the various Haskell
resources in a non-linear way (how to do this? Cf SO, books, articles, etc.
And then some theory articles to tell you why it is done this way. And that
again for another topic required to solve my problem, and again, and again).
Problem solving was my way to go to discover the Haskell way. Not just reading
a book linearly.
------
miguelrochefort
First programming book I've ever read. I love it. I haven't done Haskell
since.
------
leshow
Wouldn't be my first recommendation for a good Haskell book. I really enjoyed
Haskell: first principles. I'm sure that will get mentioned a lot here.
Still, as a free resource it does cover some fun things. I just felt the book
wasn't practical.
------
allenleein
Honestly, this is not a good book for beginner.
Here are some _FREE_ Functional Programming (Haskell, PureScript) learning
resources from 101 to building product:
[https://github.com/functionalflow/brains/projects/9](https://github.com/functionalflow/brains/projects/9)
More, over 20 _FREE_ FP books:
[https://github.com/allenleein/brains/tree/master/Zen-of-
Func...](https://github.com/allenleein/brains/tree/master/Zen-of-Functional-
Programming)
------
bribri
I don't recommend Learn you a Haskell or Real World Haskell. Check out Haskell
Programming from First Principles
------
drngdds
I got about 2/3rds through this and I didn't like the lack of exercises or
real example programs.
------
ilovecaching
Terrible book. Please, please read Graham Hutton's Programming in Haskell.
LYAH is full of incorrect definitions and broken analogies. Hutton on the
other hand is up there with K&R for clear and concise definitions.
~~~
creichert
> Terrible book.
Definitely overstated and not good advice for beginners.
My advice to beginners would be:
\- Read all the Haskell books available at your disposal. (In addition to LYAH
and the Hutton book, I would say Learning Haskell From First Principles and
Get Programming with Haskell are great, [https://www.manning.com/books/get-
programming-with-haskell](https://www.manning.com/books/get-programming-with-
haskell), [http://haskellbook.com/](http://haskellbook.com/))
\- When you hit something that doesn't make sense in one source, try
referencing it in another source.
\- When you have some experience writing programs in Haskell, refer to some
older books like Real World Haskell. There may be a few issues compiling the
examples, but nearly all the techniques in the book are still widely used and
you learn how the language has progressed in the last few years. This gives
you a compass to read and maintain older Haskell source code).
\- Read as much Haskell code as you can from popular libraries (Pandoc,
XMonad, and smaller libs as well).
~~~
ilovecaching
Beginners want a clear place to start learning from. All those things you said
are overwhelming and unnecessary. Some of your recommendations would be fine
as next steps, but to start with, just read Hutton's book. It is small enough
to be read in just a few days.
~~~
derefr
I think there are two different definitions of "beginner" being used here.
1\. Someone who has never programmed before, and is perhaps young. Someone who
has to be guided. A novice.
2\. Someone who is new to this particular language, but is experienced at
programming generally; who is attempting to learn the language to use it "in
anger" (i.e. with a specific goal in mind and a timeline for that goal); and
who is willing to "do whatever it takes" to learn the skill. A journeyman
beginning a new path.
A novice needs a definitive textbook. A journeyman-beginner, on the other
hand, needs definitive primary sources, however scattered.
If you're a high-school student learning precalculus, what do you need? A math
textbook.
If you're a post-graduate student learning some specific arcane sub-discipline
of math to see whether it could be used to solve the novel problem you've
decided to do your thesis on, what do you need? Primary sources. All the
journal papers in that field you can get your hands on. There's no one
textbook that could possibly help you; the only truth you will find is the
truth "between the lines" of everything you read.
------
PhantomBKB
The Haskell Book was better for me at least.
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Please help me with what to do with my life? - arikp9396
I am a 22 year old guy. Just about to complete my degree in mechanical engineering. I find hardware startups quite interesting and I kinda want to work at one. But here is the problem: I have no technical knowledge nor passion left any more to learn. I am kinda depressed about what to do with my life, considering I have no idea how to get that cool awesome job in a hardware startup with my lower skill set. I am drifting away. I don't know how to start off again. I know I am surrounded by great-minded people like the HN community and great resources but don't know where to start off. I even wanted to start a side project but it isn't going well. I have lost all hope. I have no idea what to do from this point onwards. I believe I am getting old now.
======
DrScump
"Just about to complete my degree in mechanical engineering... I have no
technical knowledge"
Well, those two claims seem to be at odds with each other. Maybe you think you
lack a _particular_ realm of knowledge you desire?
When I was just about to finish my (C.S.) degree, I had little mental
bandwidth for anything else but that and work. Perhaps take a fresh look after
you complete your degree?
------
dang
This post got killed by a spam filter. Sorry. We marked your account legit so
if you repost it, it should go through. You're welcome to give that a try,
though it's hit and miss which of these posts get a community response.
Also: you're not old! Good grief!
| {
"pile_set_name": "HackerNews"
} |
A 'revisited' guide to GNU Screen - ypk
http://linuxgazette.net/168/silva.html
======
silentbicycle
Oddly, it doesn't mention tmux (<http://tmux.sourceforge.net/>), a newer
project that implements a similar terminal multiplexer. It was redesigned from
the ground up to more cleanly accommodate several features that have been
grafted onto the screen codebase. (It also has a BSD license, FWIW.)
After years, screen _still_ doesn't do vertical screen splits. It turned out
to be faster to just start from scratch.
~~~
tvon
Honestly, it would be odd for a tmux article to go without mentioning screen
but there is nothing odd about a screen article not mentioning tmux.
Speaking of which, I would like to see an article on tmux. Every now and then
a screen article pops up and a few people chime in with the vague advantages
of the BSD-licensed tmux, but I haven't seen any actual articles on tmux.
~~~
0wned
I use both. The only big difference? Besides the license? Screen uses 'Ctrl+A'
while tmux uses 'Ctrl+B' and screen -r (to reattach) is replaced by tmux
attach. That's it, for me at least. tmux seems to have a lot more features,
but I only use 10% of them.
~~~
silentbicycle
The biggest difference as far as I'm concerned is that it has vertical screen
splitting and xmonad / dwm-style automatic tiled layouts built in. I've also
looked into the codebases for both, and if I were working on new features, I
would far rather work on tmux's - it's much cleaner.
------
mark_l_watson
That was a good writeup. I use screen so often, that I don't think much about
it, so this article was a good refresher.
A little off topic, but: I should spend about 5% of the time I devote to
exploring new programming languages to revisiting command shortcuts, etc. for
tools like Emacs, Idea, Rubymine, Eclipse, etc. A few days ago, I set up Emacs
for Rails development (involved learning a bunch of new keyboard shortcuts) -
now depending on what aspects of a project I am working on, I use either
Rubymine or Emacs (or TextMate if I have my MacBook booted to OS X)
------
yan
I almost welcome the bimonthly screen article. I adore that program.
~~~
ivenkys
Same here , when i discovered it i couldn't understand how i had been working
without it for so long , suddenly i had 1 large terminal rather than "n"
different terminals.
The only thing i miss now is vertical splits , i think tmux solves that
problem as well.
~~~
rbanffy
"The only thing i miss now is vertical splits , i think tmux solves that
problem as well."
Ctrl-A+S or Ctrl-A+| ?
~~~
chewbranca
Just checked in ubuntu, those two commands will enable a horizontal split and
a vertical split, respectively. Thats great, screen has been amazingly useful
since the day I discovered it and have split windows working fully is just
icing on the cake.
------
Hoff
The screen handling of vt100 emulation is badly broken, based on my recent
experience with it. A valid vt100 sequence was completely borking the terminal
session.
I've unfortunately not had the time to chase the bug down, and the boxes that
are generating the vt100 sequences are not where I can make them available to
the screen developers, either.
For those that are looking to achieve (better) vt100 compliance, there is a
reasonable test suite available at the <http://vt100.net/> site.
------
psranga
Screen doesn't do vertical splits. That's a dealbreaker for me. Tmux does.
But tmux has another problem: very rapid scrolling in one window will make the
program completely unresponsive. In the same situation, screen doesn't have
this problem. I posted to the user list and the developer very quickly sent me
a patch, but it didn't fix the problem for me, although it worked for the
developer. Still working with the developer on this.
~~~
andrewscagnelli
There is a port of screen that includes vertical splits. It's nifty, but
requires the source to be patched:
<http://fungi.yuggoth.org/vsp4s/>
~~~
psranga
I read the README/blog post somewhere that vertical scrolling in one of the
subwindows is very slow. Hence I have not tried it out. Do you have any
experience with this?
------
tfh
gnu screen is the reason i find terminal tabs obsolete.
~~~
mkelly
Agreed. screen is one of the single most useful tools I use.
------
genieyclo
Ratpoison - <http://www.nongnu.org/ratpoison/>
~~~
silentbicycle
Could you explain why this is relevant, rather than just linking to it?
~~~
jacobolus
It's not really that relevant IMO, but maybe fans of screen want to extend the
screen approach to their whole computing experience? That's basically the goal
of ratpoison, which is a window manager based around the idea of tiling and
complete control via the keyboard, overall much simpler than KWin, etc.
~~~
silentbicycle
I knew why, but I've actually had much better luck with dwm
(<http://dwm.suckless.org/>) * . The automatically-tiled / multiple desktops
interface style works verrry well with a keyboard-centric usage style, but
also more gracefully accommodates programs that expect a more conventional UI
- ratpoison just seems to give up. (I used ratpoison exclusively for probably
three or four years.)
* Other people have also had good experience with XMonad (<http://xmonad.org/>) or awesome (<http://awesome.naquadah.org/>), though the former requires Haskell (I got burned by GHC's portability issues, and requiring GHC for a window manager strikes me as a bit silly), and awesome strikes me as a bit dodgy.
| {
"pile_set_name": "HackerNews"
} |
China Consumes Mind-Boggling Amounts of Raw Materials - rottyguy
http://www.visualcapitalist.com/china-consumes-mind-boggling-amounts-of-raw-materials-chart/
======
samspenc
The phrase "debt fuelled binge" come to mind.
The 2008 mortgage crisis was a shocker, but I think we may be under-prepared
for the next financial crisis that's going to be made in China.
| {
"pile_set_name": "HackerNews"
} |
Eben Moglen is no longer a friend of the free software community - JoshTriplett
https://mjg59.dreamwidth.org/49370.html
======
craigsmansion
It's a sad and bewildering affair.
My most innocent interpretation of the events is that Eben got seduced by "big
picture" thinking: the thought takes hold that there are actions that are
against one's principles, but will result in so much popularity and influence
that it will be easy to undo the wrongs and still enjoy the fruits of the
shortcut to success.
History has shown this hardly ever works out (Lindows, Red Hat, ESR, Ubuntu,
etc) as intended.
It's probably too hard to transfer a fairly complex philosophy by attempting
to temporarily raise its popularity.
~~~
jordigh
Stallman calls this ruinous compromise:
[https://www.gnu.org/philosophy/compromise.html](https://www.gnu.org/philosophy/compromise.html)
------
Operyl
I dislike articles that allude to wrongdoing, but when pushed for more
details on rather harsh allegations, refuse to state them. I have no way to
verify it happened, and I have to take someone's possible lie as the one-sided
truth.
EDIT: I'm referring to: "Around the same time, Eben made legal threats towards
another project with ties to FSF."
~~~
jordigh
Links are provided. For example, the SFLC trying to invalidate SFC's
trademark:
[https://sfconservancy.org/blog/2017/nov/03/sflc-legal-
action...](https://sfconservancy.org/blog/2017/nov/03/sflc-legal-action/)
This is bizarre to say the least.
~~~
carussell
I think parent poster is referring to this paragraph in particular:
> _Throughout this period, Eben disparaged FSF staff and other free software
> community members in various semi-public settings. In doing so he harmed the
> credibility of many people who have devoted significant portions of their
> lives to aiding the free software community. At Libreplanet earlier this
> year he made direct threats against an attendee - this was reported as a
> violation of the conference 's anti-harassment policy._
The reason I think this is because when I opened the article there were zero
comments here, and after reading the post, I clicked through back here to
leave a comment much the same as the one you're responding to.
A couple remarks:
I'm aware who Matthew Garrett is, and I respect him and his contributions and
his overall stance, in that way where when you see someone's name attached to
something, it automatically kicks off good feelings.
Having said that, we need a term for something like this. (One probably
exists.) Wikipedia popularized "weasel words", but this is something more
specialized. Something like "proxy words", where rather than tell someone the
things that happened, you give them this sort of non-specific, pre-digested
proxy for people to derive their judgment from. It operates on almost the same
principle as strawman arguments. It may be a good proxy, or it may not be, but
for good reason you should always favor reserving your judgment for the real
issue when presented with a proxy, rather than accepting the proxy itself.
That aside, Bruce Perens's comments in the following two LWN threads are
relevant to the overall discussion:
[https://lwn.net/Articles/738046/](https://lwn.net/Articles/738046/)
[https://lwn.net/Articles/738279/](https://lwn.net/Articles/738279/)
~~~
craigsmansion
> Having said that, we need a term for something like this.
I think it's a form if "appeal to authority," which is not always a logical
fallacy. I think it fits even if said authority will not or cannot explain
certain statements.
If say, a Bruce Schneier told me to avoid certain software or a certain
processor, but didn't give any details, I would still heed his advice and
defend that advice by appealing to his authority.
In this case, of course, it depends how much trust one would place in Matthew
Garret as an authority on moral judgements concerning Free Software matters.
~~~
carussell
People with no authority at all can (and do) do this, too. I don't think it's
an appeal to authority.
~~~
qbrass
The fact that they have no authority doesn't stop it from being an appeal to
authority, it just makes it less effective as an appeal.
------
snvzz
While there might be actually something to it, article lacks so much in detail
it's hard to do anything else with it than dismiss it whole.
When trying to denounce a person, it's not OK to be glossing over the details
to this extent.
Pretty much reads as "You can't trust Eben Moglen because I say so."
~~~
ibotty
It is Matthew J Garrett's personal blog. He was a member of the EFF and has
publicly been involved in the free software community.
~~~
cmiles74
Eben Moglen also has some very real credentials, I don't think that makes it
reasonable to take someone's claims at face value. I, too, would like some
more details on the issues. The interpretation of the GPL around ZFS is
something I believe reasonable people may disagree upon. The claim that he has
harassed and threatened people is much more serious, in my opinion, and
deserving of some real evidence.
[https://en.wikipedia.org/wiki/Eben_Moglen](https://en.wikipedia.org/wiki/Eben_Moglen)
------
zantana
Eben recently had his one day conference the videos of which are here:
[https://softwarefreedom.org/events/2017/conference/video/](https://softwarefreedom.org/events/2017/conference/video/)
If you look at his closing remarks (the last video), he mentions being less
combative as a strategy to reach more people. I suspect this is less about
being on different sides than about having different tactics.
| {
"pile_set_name": "HackerNews"
} |
Tardigrades may now be living on Moon - gglon
https://www.afp.com/en/news/15/hordes-earths-toughest-creatures-may-now-be-living-moon-doc-1jd4j52
======
Unknoob
Can this be flagged as clickbait? They are not living, they are in
cryptobiosis. Even though experiments have shown that they can come back to
life after a long time in this state, nothing can guarantee that they are
actually still alive.
~~~
dekhn
i agree the article isn't super informative, but is it correct to say that
something in cryptobiosis isn't alive? There is almost certainly some tiny
amount of remnant metabolism and sensor proteins.
------
newzombie
If we find one on the moon, can we know if it originates from earth?
~~~
en-us
We can sequence its DNA and see how similar it is to the ones on Earth. If
there are living organisms on the moon that share a common ancestor with Earth
tardigrades then they diverged a very long time ago and their DNA will reflect
that. But if their DNA is identical to those on Earth then we know they came
from Earth.
~~~
dwiel
Do you know how much tardigrades' DNA has changed since they first appeared on
Earth?
~~~
en-us
I do not know exactly how much but this is something that can be approximately
quantified.
------
macmac
They will be waiting for us when we come back.
------
dexen
Cue the "accidental panspermia hypothesis" \- the real (messy) world variant
of panspermia[1].
[1]
[https://en.wikipedia.org/wiki/Panspermia](https://en.wikipedia.org/wiki/Panspermia)
------
Reason077
> _”That distinction belongs to the DNA and microbes contained in the almost
> 100 bags of feces and urine left behind by American astronauts during the
> Apollo lunar landings from 1969-1972.”_
Gross! Talk about littering and polluting a pristine environment. This is at
least as bad as the climbers who leave poop on Everest, where it doesn’t
biodegrade.
Is it really so hard to bring poop back with you? Were payload restrictions
that tight on the return Apollo journeys?
~~~
itronitron
i wonder if they accounted for the reduction in mass when calculating the
return trajectory
~~~
bencollier49
Difficult to figure out how regular the astronauts would be.
~~~
lawlessone
they probably had them on a strict diet and knew pretty well.
| {
"pile_set_name": "HackerNews"
} |
Brain waves can be used to detect potentially harmful personal information - upen
http://sciencebulletin.org/archives/6145.html
======
woliveirajr
Technology can be used for good and for evil. To assure that EEG will only be
used for authenticating users and not for extracting personal conditions,
there's a long road down the valley.
------
meira
Occultists have known this for millennia.
~~~
chrisdbaldwin
Occultists hate her! This one weird trick to read brain waves they don't want
you to know!
~~~
turc1656
HAHAHA! That was great.
| {
"pile_set_name": "HackerNews"
} |
Re-writing the site of Norway's largest transport provider in Elm - Skinney
https://blogg.bekk.no/using-elm-at-vy-e028b11179eb
======
kfk
I work mostly in Python for data analytics but I like to play with front end
from time to time. So I tried elm. I loved learning about the elm architecture
and the concepts of a ml type of language. But the community and the
principles of it threw me off. I need a simple parser and found this
[https://package.elm-lang.org/packages/elm-
tools/parser/lates...](https://package.elm-lang.org/packages/elm-
tools/parser/latest/)
I guess if you have a CS degree you can understand how that parser works, I
couldn’t. The community tried to help me on the forums but you are supposed to
know a lot of key functional concepts to even understand their answers.
Then I learned about how they decided to throw away Javascript interops. I
mean I love benevolent dictators but this was too much. Just fyi their
benevolent dictator thinks if you need a library you should program it
yourself. I can see his point but in Python word there are so many amazing
libraries. That principle sounds good in theory but it’s theory, the rest or
the world thrives with libraries.
However I became a better programmer thanks to elm. I would love for something
like ocaml to pick up more steam in data analytics. I think though python won
that battle for good principles (easy to use) and not for being functional, or
controlling side effects or having less bugs. If you think about it that’s
exactly why Excel is still so popular, it’s a monster but it’s easy to use.
~~~
wwweston
> Then I learned about how they decided to throw away Javascript interops.
Is this true? I'm not an Elm user yet, but I've been eyeing it and my
understanding was that there was interop:
[https://guide.elm-lang.org/interop/](https://guide.elm-lang.org/interop/)
It'd be a big deal if that _weren 't_ there -- FP stuff I may or may not be
used to? Bring it on. Stretch my CS knowledge? Cool. No 3P libraries? That'd
make it wrong for a lot of use cases.
~~~
Skinney
Elm has interop, it's called ports.
What the poster is referring to is that Elm 0.18 had an unsupported,
undocumented way of calling javascript code directly. This was inherently
unsafe (Javascript can throw exceptions and Elm doesn't support exceptions,
let alone catching them) and people were abusing it, so it was removed from
the language in 0.19.
------
_greim_
> To test out the latter, in the summer of 2017 we tasked a team of summer
> interns with the renewal of our seat map application, a crucial component of
> the ticket booking process. They were to use Elm, a language they had no
> prior experience with. To our surprise they took to the language very
> easily, and their work turned out great.
I think programmers—myself included—tend to be surprised by these stories,
because we envision two learning curves: general programming concepts, plus
the additional weirdness of learning Elm.
But "general programming concepts" unpacks into: A) valuable stuff everyone
needs to learn anyway, B) a bunch of hard-bought mental discipline which Elm
makes obsolete, due to its lack of side effects and mutable state. Going cold-
turkey into Elm, newcomers fast-forward through a lot of things JS learners
for example have to wrestle through.
[edit for clarity]
------
vlangber
I wonder how involved the customer was in the choice of technology. I think
the long term costs of choosing Elm will be higher than any perceived gain
during the initial development period.
The number of developers with Elm experience in Norway is small, and I think
it will make it harder to attract good consultants that want to work on it.
~~~
as-j
I'm in the process of replacing an Erlang service. Erlang is incredibly well
suited for the task, and it's a terrible choice for us.
Initial development was done, system worked and ran for years. Team left,
turned over and then 5 years later no erlang developers were left on staff.
The service is business critical, and you don't need 1 developer, you need a
team. 3 would provide some basic backup, but you need 5 to fill out the 24/7
on-call rotation. (yes people need vacations, weekends off, etc)
Sadly it's not the entire stack, far from it, it's one mission critical
service that's part of a very large system. So the excitement they get from
growing, enhancing and scaling the system is already a bit restricted. Problem
is, trying to hire in SF is already hard, and now we just selected the pool of
engineers to be a small subset of those.
So now the cost of 3-5 engineers, the work to hire them, manager and deal with
turn over. Wow.
Sadly (not sadly) we replaced the service with an AWS offering for $1000/mo.
World changed in the 9 years since the Erlang product was first written.
It's turned me off niche languages.
~~~
Zanni
Sorry, am I reading this right? You replaced a service that required a team of
five Erlang developers with an AWS offering for $1000/mo? Why wouldn't you do
that regardless of the language involved?
~~~
as-j
Sorry for the late reply. It actually needed little development, 1 person
would be just fine. But it was also scaling, and bugs crop up. Unfortunately
bugs crop up some days at 9pm on Friday, or 2am on Sunday. Since it's business
critical this needs attention immediately; stop/restart isn't always good
enough. So this means you need someone who can supply emergency patches on
call all the time. (Trust me, turning it off and on again doesn't always
work, yay persistence, yay retries.)
This can't be 1 person anymore - what if that person takes a vacation? So
that's 2 people. Perhaps the 2nd person can be much less capable than the
first; they just need to hold the system together for however long it takes
the lead dev to come back from his 2-week hiking trip in the Amazon... yeah,
not good enough. So then you end up saying we actually need proper on call, so
now you're hiring a team.
What if it was another language? Let's assume it's a core language of the
organization. Then you don't need a team, but capable Sr/Staff Engineers who
can jump in during emergencies. Might not be the perfect fix, but then you
have a series of people who can duct-tape it together until the person
responsible is available.
Using Erlang tied our hands, and effectively forced us to throw away a project
business requirement.
------
mhd
So no non-Javascript fallback for a quite important site & service?
~~~
jimbo1qaz
[https://www.vy.no/en](https://www.vy.no/en) does not load on Firefox, with
Javascript disabled in uBlock Origin, or Developer Tools.
~~~
jimbo1qaz
It also takes nearly 2 seconds for page content to appear, on Firefox with JS
enabled.
------
bgorman
An alternative worth considering to Elm is Bucklescript with the bucklescript-
tea library. This project gives you a more powerful, but similar language
(Ocaml/Reason) with a more direct interop mechanism than Elm ports.
------
ggregoire
> A common misconception is that it is risky to use a non-mainstream language,
> since it will then be difficult to find developers with the right
> experience. We have found, however, that we don’t need people to know Elm
> beforehand.
Are there a lot of people who actually want to use Elm tho? Seems like the
real risk to me, not finding anyone interested in learning and using Elm.
~~~
_greim_
From the POV of someone hiring, there's a sort of geek-magnet effect that
kicks in if you start hiring for something like Elm. You get the kind of
people who like learning new things, who would otherwise completely ignore yet
another React or Angular dev job listing.
~~~
mnsc
> people who like learning new things
Problem is that also attracts those where new technologies is not a mean but
the end goal. So when they start to get productive in the "new technology"
(that's now "one year old, ugh") and if the shop ain't up to the task of
rewriting everything every year those people move on to the next hype baby
[1]. Being a geek-magnet should be a very minor and explicitly stated short-
term factor in deciding what tools and technologies to use.
[1] Yes, this is anecdotal and based on one former colleague!
------
imedadel
Something (weird) that I noticed in many Scandinavian websites is their short
domain names. I don't know the reason, but I like the fact that you can type 5
letters and access the service that you want.
~~~
elektronaut
For .no, I reckon it's a combination of small population and the fact that you
need to be a citizen to buy one. Until recently, they were only available to
organizations. There are also restrictions on the number of registered domains
you can have (100 for organizations, 5 for individuals), that limits squatting
somewhat.
~~~
kaivi
You made me wonder -- what happened to ulv.no and sau.no?
~~~
hdfbdtbcdg
Probably too politically divisive.
Nothing gets Norwegians arguing more than wolf politics.
~~~
kaivi
We should control greenhouse emissions by releasing more wolves into shopping
malls.
------
roschdal
In my opinion, this is a case where a supplier (IT consultancy: Bekk)
uses a non-standard, unpopular technology to implement a technical solution
for a customer who is state-owned and has almost a monopoly (Vy/Norwegian
Railways). I suspect the reason for this choice is to create vendor lock-in.
For the customers of Norwegian Railways, the result is a more expensive train
journey.
~~~
cstpdk
What's your evidence that train journeys have gotten more expensive in Norway
due to this?
FWIW i am Danish and almost all of our public IT projects are done in .NET,
almost always the reasoning is "more developers, more mainstream, less lock-
in". Our IT projects are always hilariously belated and more expensive than
budgeted. More often than not the same contractor (one of 5ish big
corporations) keeps getting the same contracts from the same departments
because they have pre-existing knowledge of the system they previously built
(hint: this is lock-in). Now, the last part is changing somewhat due to EU
tender rules, which I think Norway also abides by (they are not in EU, but are
committed to complying with most EU laws)
~~~
arcturus17
Isn't being locked to one of five big vendors that do .NET a bit better than
being locked to _the_ vendor that does Elm, though?
------
caspervonb
Hmm, personal preference, but I preferred the old NSB site. This takes forever to
load.
------
adreamingsoul
Is the mobile Vy application also using Elm?
I live in Oslo and use Vy to travel outside the city. I find the overall
experience to be excellent, and appreciate the friendly and simple interface.
~~~
Skinney
react-native. Although the seat selector is written in Elm and loaded via web
view.
------
winrid
Tried to use latest Elm with Websockets. It's just not ready yet but I look
forward to when it is.
~~~
IfOnlyYouKnew
I was frustrated by that as well. In the end, I actually found a solution that
was surprisingly easy and works well. I'm about 90% sure it was
[https://github.com/billstclair/elm-websocket-
client](https://github.com/billstclair/elm-websocket-client), although it's
been a few weeks and I can't check right now.
| {
"pile_set_name": "HackerNews"
} |
Boost sales with a simple 7 step pre-call plan - PeteMitchell
https://www.medtechy.com/the-ticker/articles/2016/boost-sales-with-a-simple-7-step-pre-call-plan
======
PeteMitchell
Many sales reps and even project managers do not use a pre-call plan before
important meetings. This article reviews theirs and lets you download a blank
one for your next meeting.
| {
"pile_set_name": "HackerNews"
} |
Sentient: a declarative language that lets you describe what your problem is - vmorgulis
http://sentient-lang.org/
======
yazr
How is it different from other declarative variants?!
I get the plug-in SAT solver. Great. (in practice SAT solvers need plenty of
specialist tuning for non-toy problems).
Is it supposed to be somehow more readable than Prolog? Easier for building more
complex rules?
_No snark intended. Genuinely interested._
------
vmorgulis
Related thread:
[https://news.ycombinator.com/item?id=12429393](https://news.ycombinator.com/item?id=12429393)
| {
"pile_set_name": "HackerNews"
} |
Ask HN: Facebook alternative? - startupflix
======
ddtaylor
I thought about this a while ago and came to the conclusion that if my goal
was to keep information private, that's not the function of a social network.
At best you'll find a different company to abuse you or a permanent
decentralized network where attackers can scrape everything if they are ever
let near your circle of friends.
If your goal is to simply have a better implemented social media network you
will likely face a network effect problem where nobody is on that platform - I
mean technically MySpace still exists.
~~~
startupflix
Myspace seems to be a ghost town :'(
------
mistermithras
Diaspora seems to be the choice for this.
~~~
startupflix
I tried it but none of my friends were ready to move on it. :(
~~~
cm2012
If you want the network most people use, it's fb. I wouldn't worry about it.
The worst thing that fb does with your data is to customize the ads you see.
| {
"pile_set_name": "HackerNews"
} |
Facebook disabled users' accounts for violating WhatsApp TOS 3 years ago - hubail
I don't use my FB account anymore, and it was disabled 3 weeks ago, which was strange since generally no use = no abuse.
Then my friend's account was also disabled. We both released the inner workings of the WA protocol along with an API PoC a while ago (https://github.com/venomous0x/WhatsAPI) and it got popular, so we suspected that as the only relevant common thing.
Today, Tarek (https://twitter.com/tgalal/status/583508825329819648), who also built unofficial WA clients of a sort, confirmed it's the WhatsApp thing.
This makes me question the rationale of:
1) Disabling someone's account for allegedly violating another service's terms.
2) Three years ago.
3) At a time when WhatsApp was an independent company.
4) And whether this practice will expand to Instagram? Parse? Future acquisitions?
======
graghav
WhatsApp hasn't even updated the TOS on their site since 2012, and moreover
Facebook can disable an account only if someone "infringe[s] other people's
intellectual property rights" inside FB. This makes me wonder what allowed FB
to disable accounts for violating WhatsApp's terms in the first place.
~~~
mcintyre1994
Are you saying Facebook don't have a term in their ToCs equivalent to
"Facebook reserves the right to stop service or remove data for any or no
reason at any time without any notice"?
Edit: they do.
[https://m.facebook.com/legal/terms](https://m.facebook.com/legal/terms)
section 4 (termination):
> If you violate the letter or spirit of this Statement, or otherwise create
> risk or possible legal exposure for us, we can stop providing all or part of
> Facebook to you.
| {
"pile_set_name": "HackerNews"
} |
Scientists Have Shown There's No 'Butterfly Effect' in the Quantum World - pseudolus
https://www.vice.com/en_us/article/889ejg/scientists-have-shown-theres-no-butterfly-effect-in-the-quantum-world
======
gus_massa
Quite a big discussion a few days ago of another source:
[https://news.ycombinator.com/item?id=24167691](https://news.ycombinator.com/item?id=24167691)
(80 points, 3 days ago, 54 comments), but I think this article is better and has
a good discussion about the chaos and the problem.
But as I said in a previous comment, they didn't show that all quantum systems
have no butterfly effect; they only showed that in one quantum system the
butterfly effect is small.
| {
"pile_set_name": "HackerNews"
} |
Why Dropbox Needs Composer to Succeed to Become a $100B Company - jason_shah
https://medium.com/@jasonyogeshshah/why-dropbox-needs-to-own-collaboration-to-become-a-100b-company-af3c5cc527af
======
jason_shah
Personally I'm really excited about Composer. So many implications...
\- What does this mean for Evernote? \- Can Dropbox pull off a new product
without prior traction? Mailbox seems to have generally worked out OK, but
what's happened to Carousel? \- Will messaging inside of Dropbox a valuable
angle on collaboration and if so, will anything happen between Dropbox and
Slack? \- What will this mean for Dropbox's relationship with Microsoft? \-
Who will Dropbox acquire next? They have files, mail, notes...seems like a
modern Exchange.
| {
"pile_set_name": "HackerNews"
} |
Google Job Page from 1998 - meterplech
http://replay.waybackmachine.org/19991013034717/http://google.com/jobs.html
======
meterplech
I think it's interesting to see their early focus on hiring the right people.
They are even hiring for a College Recruiting Program Manager already, even
when they only had about 50 people at the company. That's thinking big from
the beginning.
Also, given all the MBA hate on HN, I thought it was interesting that they
asked all their "business" positions to have MBAs.
~~~
jtbigwoo
> Also- given all the MBA hate on HN, I thought it was interesting that they
> asked all their "business" positions to have MBAs.
Companies founded by people who have graduate degrees tend to overvalue
candidates with graduate degrees.
------
abstractbill
It was funny to see them explicitly mention "casual dress atmosphere" - I
don't think many software startups would even bother to say that these days.
------
rudiger
_The only Chef job with stock options!_
~~~
Hovertruck
I wonder what that Chef is doing these days.
~~~
AdamTReineke
He left Google in 2005. Opened his own restaurant in 2009.
<http://en.wikipedia.org/wiki/Charlie_Ayers>
------
makmanalp
"Several years of industry or hobby-based experience." -> Wow. Nowadays it's
just "industry experience".
"Experience programming in Python a plus " -> Keep in mind, this was 1998, and
Python was young.
~~~
chollida1
I think a lot of that was that even back in 1998 they had Python code in their
code base, rather than using Python as a filter for finding hackers.
------
c2
Kind of ridiculous that they require the VP of engineering to have a PhD. I
haven't heard great things about Google's culture, and if these are the kinds
of requirements they had when putting the technical leadership in place, a lot
of what I heard is starting to make sense.
~~~
_delirium
I dunno, the guy they hired for that position by most accounts turned out to
be a pretty excellent choice: <http://en.wikipedia.org/wiki/Urs_H%C3%B6lzle>
------
jganetsk
I wonder who filled these positions at that time in particular.
~~~
kapitalx
They are really, really rich now.
~~~
AdamTReineke
Just the chef was worth $26 million. [http://searchengineland.com/google-
employee-53-charlie-ayers...](http://searchengineland.com/google-
employee-53-charlie-ayers-the-google-chef-profiled-on-msnbc-12505)
~~~
yeahsure
If he still owns those 40K shares, that would be $23,670,800 today. Not bad
for a chef, though!
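(If my arithmetic is right, that figure implies a share price of $23,670,800 / 40,000 ≈ $591.77, roughly where GOOG was trading at the time.)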
Thanks for the link :)
------
AlexMuir
Can anyone remember how they first heard about and started using Google?
~~~
mnml_
I was using Yahoo and a friend told me to try Google; it was not really good
looking, but the result ranking was better. It was in ~1998 as well. (And I was
using AOL!)
------
phlux
Heh. For fun - we should all apply for these jobs via the fax number they
list.
~~~
cosgroveb
And what if they are actually hiring for one of those positions? Go through
Google's recruiting process!? No thanks!! :)
~~~
phlux
haha.
I was interviewing for a network project manager position with google some
time ago. When I went in on the first day I was very candid with them by
saying "I am very qualified for this position, but I don't have a PMP
certification, so I hope that isn't an issue." They said "Oh, no problem. In
fact we want to bring people in who are very experienced, but are flexible to
adapt to the Google way of doing things - so not having a PMP cert is a plus
because we don't want people to try to impose some outside process on our way."
Cool I thought!
I interviewed over a 3-month period. I was told I did very well, then got a
call from the recruiter at Google I had been working with: "Hey! Good news - it
looks like we will be extending you an offer - so let me write that up and
send it over to you."
I was ecstatic.
I told friends about it - but did not tell my employer - though they knew I
was interviewing anyway.
I got a call the next day from the recruiter:
"I'm sorry - it looks like we will not be extending you an offer. Apparently,
you don't have a PMP cert, and that is needed for this position - but you did
very well on the interview, maybe you can find another position we have listed
that you qualify for!"
I was LIVID.
What a waste of my time - and it was really enraging. So, yeah - Fuck your
interview process google.
~~~
Silhouette
I never understand people who still put up with these absurdly long
recruitment processes today. I don't care what your job is, if you're hiring
via typical recruitment channels, you're not important enough for a good
candidate to put their life plans on hold for months.
Heck, if you're not someone on the scale of Google/Facebook/Microsoft in the
software industry, you're probably not important enough for me to justify
doing your pet interview quiz question for half a day before I show up, unless
you're going to pay me for my time to do it.
Public health warning: Zealous adherence to this bizarre mindset, where you
expect that if you are negotiating with someone then both parties will take it
seriously and that if you are working for someone then they will pay you, may
result in abandoning applying for jobs as an employee and going freelance or
founding your own business. This may lead to a much more enjoyable lifestyle
than working for the kind of business that only hires people who would allow
themselves to be hired that way.
~~~
phlux
This was in 2007, so it was a bit of a different market at that time as
well...
| {
"pile_set_name": "HackerNews"
} |
An Infinite Number of Mathematicians Enter a Bar - endorphone
https://dennisforbes.ca/index.php/2017/04/11/floating-point-numbers-an-infinite-number-of-mathematicians-enter-a-bar/
======
nathanaldensr
This was a great article about floating point numbers. I was already somewhat
aware of how they are represented in memory, but the website's cool tool that
allows playing with the bits while reading the article made it much clearer.
| {
"pile_set_name": "HackerNews"
} |
A hardware-accelerated machine intelligence library for the web - obulpathi
https://pair-code.github.io/deeplearnjs/
======
gradys
Here's the associated Google Research Blog post:
[https://research.googleblog.com/2017/08/harness-power-of-
mac...](https://research.googleblog.com/2017/08/harness-power-of-machine-
learning-in.html)
~~~
dgacmu
Tl;dr Tensorflow and numpy-like API. Differs from tensorfire in that it can
also do backprop (training)
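For a concrete feel of what "numpy-like API plus backprop" means here, a minimal sketch fitting a line with SGD. Note the names below follow the TensorFlow.js-style API that deeplearn.js later grew into, so the original 2017 calls may differ slightly:
    import * as tf from '@tensorflow/tfjs';
    // Learn y = a*x + b from four points; gradients and updates run on WebGL when available.
    const a = tf.variable(tf.scalar(Math.random()));
    const b = tf.variable(tf.scalar(Math.random()));
    const xs = tf.tensor1d([0, 1, 2, 3]);
    const ys = tf.tensor1d([1, 3, 5, 7]); // generated from y = 2x + 1
    const predict = (x: tf.Tensor) => a.mul(x).add(b);
    const loss = (p: tf.Tensor, y: tf.Tensor) => p.sub(y).square().mean() as tf.Scalar;
    const optimizer = tf.train.sgd(0.1);
    for (let i = 0; i < 200; i++) optimizer.minimize(() => loss(predict(xs), ys));
    a.data().then(v => console.log('a ≈', v[0])); // ~2
    b.data().then(v => console.log('b ≈', v[0])); // ~1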
------
mncharity
[http://playground.tensorflow.org/](http://playground.tensorflow.org/) is fun
and worth looking at.
As I recall, Google created it to give their engineers an introductory feel
for DL, and then open sourced it.
------
mamp
It looks like this is the easiest way to train neural networks with GPU
acceleration on my Radeon Pro (i.e. non CUDA)...
Thank you JavaScript!
------
xamlhacker
"Currently our demos do not support Mobile, Firefox, and Safari. Please view
them on desktop Chrome for now." That's a bummer.
~~~
nsthorat
This will be fixed in ~2 weeks, max.
------
dang
Url changed from [https://github.com/PAIR-
code/deeplearnjs](https://github.com/PAIR-code/deeplearnjs), which points to
this.
| {
"pile_set_name": "HackerNews"
} |
Big Oil Is About to Lose Control of the Auto Industry - T-A
http://www.bloomberg.com/news/articles/2015-04-16/big-oil-is-about-to-lose-control-of-the-auto-industry
======
tcbawo
I wonder how long until we start synthesizing hydrocarbon fuel and
sequestering it underground like a giant earth-battery.
| {
"pile_set_name": "HackerNews"
} |
The YouTube Music Disaster. Another Google Failure - ryan_j_naughton
https://medium.com/@myabstraction/the-youtube-music-disaster-d4fe0d0a09af
======
marvion
> As it seems, blind, to the customers feedback.
I wonder how they gather feedback internally. Because either they don't even
have 50 employees test some of their services, or they ignore any feedback from
coworkers too.
Even though the old app felt like it was built and tested by a team of 1, it
feels like not even that single person used it daily, or was allowed to make
changes after it was published.
It should take an elaborate company vision to build a service for people who
actually want to use the service.
------
zombiegator
The sad thing is they didn't have to do this. They could have literally just
waited a while and released this. But I agree, reading the article, it feels
like it was going to happen no matter what.
| {
"pile_set_name": "HackerNews"
} |
Oculus to Discontinue the Rift S, Quit PC-Only VR Headsets - T-A
https://www.tomshardware.com/news/oculus-to-discontinue-the-rift-s-quit-pc-only-vr-headsets
======
raxxorrax
I don't understand that business decision. One problem is certainly the low
spread of devices, the other is that there is a limited amount of software
available for VR.
I doubt too many devs would want to develop against Oculus if the market is
reduced in favor of store lock-in.
Maybe VR overall wasn't a success and they just want to fortify a niche.
I think VR could really be used in quite a few applications like modeling and
animation, but I don't see that happening outside a PC environment. Not that
the cable to the device isn't a huge pain.
| {
"pile_set_name": "HackerNews"
} |
11 Yr olds first stop motion vid - Cyndre
http://www.youtube.com/watch?v=QEHnNNJVxQQ&feature=youtu.be
======
Cyndre
My daughter showed me a bunch of pictures on her camera that she was working
on, so I taught her how to use Windows Movie Maker. This is her first stop
motion video and I know I will be seeing some incredible things from her.
P.S. Please upvote and show her our hacker spirit :)
| {
"pile_set_name": "HackerNews"
} |
Undetectable remote arbitrary code execution through JavaScript and HTTP headers - zeveb
https://bugzilla.mozilla.org/show_bug.cgi?id=1487081
======
rauhl
A Mozilla team member closed it as invalid, pointing to an online discussion
of the bug as a reason why a bug isn’t necessary[0].
Interestingly, this ‘just once’ attack is why Firefox Accounts are broken as
designed: it’s possible for Mozilla to target a user, just once, with
malicious JavaScript which steals his Firefox Account password. Mozilla could
do this of their own accord, could be suborned by a malicious employee but
even more likely could be ordered to do so by any government which has that
authority.
0:
[https://bugzilla.mozilla.org/show_bug.cgi?id=1487081#c3](https://bugzilla.mozilla.org/show_bug.cgi?id=1487081#c3)
| {
"pile_set_name": "HackerNews"
} |