chosen (int64) | rejected (int64) | chosen_rank (int64) | rejected_rank (int64) | top_level_parent (int64) | split (large_string) | chosen_prompt (large_string) | rejected_prompt (large_string) |
---|---|---|---|---|---|---|---|
37,052,782 | 37,051,018 | 1 | 3 | 37,046,128 | train | <story><title>Mexico seeks to solidify rank as top U.S. trade partner, push further past China</title><url>https://www.dallasfed.org/research/economics/2023/0711</url></story><parent_chain><item><author>alephnerd</author><text>Why do HN commentators assume Mexico is a 3rd world shithole?<p>It&#x27;s kind of surprising how a country that China has only just caught up to in GDP per Capita [3] and HDI [4] is assumed to be a shitshow of an economy.<p>Hell, the household income in China [0] (~$4,000) is still 30-50% of Mexico&#x27;s [1] (~$12,000)<p>Mexico has it&#x27;s problems, but most of that stems from the Peso Crisis in the 1990s, which was similar to the recession China is faced during COVID.<p>[0] - <a href="http:&#x2F;&#x2F;www.stats.gov.cn&#x2F;english&#x2F;PressRelease&#x2F;202201&#x2F;t20220118_1826649.html" rel="nofollow noreferrer">http:&#x2F;&#x2F;www.stats.gov.cn&#x2F;english&#x2F;PressRelease&#x2F;202201&#x2F;t2022011...</a><p>[1] - <a href="https:&#x2F;&#x2F;www.economia.gob.mx&#x2F;datamexico&#x2F;es&#x2F;profile&#x2F;geo&#x2F;mexico" rel="nofollow noreferrer">https:&#x2F;&#x2F;www.economia.gob.mx&#x2F;datamexico&#x2F;es&#x2F;profile&#x2F;geo&#x2F;mexico</a><p>[2] - <a href="https:&#x2F;&#x2F;data.worldbank.org&#x2F;indicator&#x2F;NY.GDP.PCAP.CD" rel="nofollow noreferrer">https:&#x2F;&#x2F;data.worldbank.org&#x2F;indicator&#x2F;NY.GDP.PCAP.CD</a><p>[3] - <a href="https:&#x2F;&#x2F;hdr.undp.org&#x2F;data-center&#x2F;country-insights#&#x2F;ranks" rel="nofollow noreferrer">https:&#x2F;&#x2F;hdr.undp.org&#x2F;data-center&#x2F;country-insights#&#x2F;ranks</a></text></item></parent_chain><comment><author>xtracto</author><text>Because it is in a lot of ways?<p>[Mexican living in Mexico here]<p>I&#x27;ve had the chance to live in other countries in Europe (Germany and the UK) for 8 years. I also have lived most of my life (and currently live) in Mexico.<p>As someone else said, &quot;rich&quot; people in Mexico can live a good-enough life. But the problem is that 43.9% [1] live in poverty in Mexico. That is, they don&#x27;t have a way to make ends meet. Worse, the next 20% are just barely living by.<p>On top of that, society is naturally corrupt. You have corruption ingrained in the Mexican culture. From the typical politician corruption, the &quot;plata o plomo&quot; narco corruption, the police officer &quot;get out of a fine&quot; layman corruption; to the &quot;inconsequential&quot; I&#x27;ll stop in the middle of the street for one minute, just-because. Like, normal layman people always have some reason why some law or rule wont apply to them. It&#x27;s freaking enraging.<p>Finally you have lack of accountability. In Mexico only 1% [2] of the crimes are resolved. There&#x27;s murders, assaults, robberies, violations, everything you can think of, and police, the judicial system and all that branch does absolutely nothing. They don&#x27;t do their work. Moreover, normal people are <i>scared</i> of going to the Prosecution Authority (Ministerio Publico) because they know most likely they will be abused and further defrauded there.<p>&gt; Mexico has it&#x27;s problems,<p>You have to have lived in Mexico to understand the extent of the brokeness that we have here. A lot of people who live in Mexico and haven&#x27;t had the chance to live somewhere else will tell you that &quot;it is not that bad&quot;, but it is just a lack of knowledge of what &quot;Normal&quot; should be, part a coping mechanism and part Stockholm syndrome. 
(What else can people do?)<p>[1] <a href="https:&#x2F;&#x2F;imco.org.mx&#x2F;las-cifras-mas-recientes-de-pobreza" rel="nofollow noreferrer">https:&#x2F;&#x2F;imco.org.mx&#x2F;las-cifras-mas-recientes-de-pobreza</a><p>[2] <a href="https:&#x2F;&#x2F;www.impunidadcero.org&#x2F;impunidad-en-mexico&#x2F;#&#x2F;" rel="nofollow noreferrer">https:&#x2F;&#x2F;www.impunidadcero.org&#x2F;impunidad-en-mexico&#x2F;#&#x2F;</a></text></comment> | <story><title>Mexico seeks to solidify rank as top U.S. trade partner, push further past China</title><url>https://www.dallasfed.org/research/economics/2023/0711</url></story><parent_chain><item><author>alephnerd</author><text>Why do HN commentators assume Mexico is a 3rd world shithole?<p>It&#x27;s kind of surprising how a country that China has only just caught up to in GDP per Capita [3] and HDI [4] is assumed to be a shitshow of an economy.<p>Hell, the household income in China [0] (~$4,000) is still 30-50% of Mexico&#x27;s [1] (~$12,000)<p>Mexico has it&#x27;s problems, but most of that stems from the Peso Crisis in the 1990s, which was similar to the recession China is faced during COVID.<p>[0] - <a href="http:&#x2F;&#x2F;www.stats.gov.cn&#x2F;english&#x2F;PressRelease&#x2F;202201&#x2F;t20220118_1826649.html" rel="nofollow noreferrer">http:&#x2F;&#x2F;www.stats.gov.cn&#x2F;english&#x2F;PressRelease&#x2F;202201&#x2F;t2022011...</a><p>[1] - <a href="https:&#x2F;&#x2F;www.economia.gob.mx&#x2F;datamexico&#x2F;es&#x2F;profile&#x2F;geo&#x2F;mexico" rel="nofollow noreferrer">https:&#x2F;&#x2F;www.economia.gob.mx&#x2F;datamexico&#x2F;es&#x2F;profile&#x2F;geo&#x2F;mexico</a><p>[2] - <a href="https:&#x2F;&#x2F;data.worldbank.org&#x2F;indicator&#x2F;NY.GDP.PCAP.CD" rel="nofollow noreferrer">https:&#x2F;&#x2F;data.worldbank.org&#x2F;indicator&#x2F;NY.GDP.PCAP.CD</a><p>[3] - <a href="https:&#x2F;&#x2F;hdr.undp.org&#x2F;data-center&#x2F;country-insights#&#x2F;ranks" rel="nofollow noreferrer">https:&#x2F;&#x2F;hdr.undp.org&#x2F;data-center&#x2F;country-insights#&#x2F;ranks</a></text></item></parent_chain><comment><author>goodbyesf</author><text>&gt; Why do HN commentators assume Mexico is a 3rd world shithole?<p>Because the economic disparity between the US and Mexico is so vast that in comparison, it is a 3rd world country. Also, the millions of mexicans immigrating to the US, the neverending news about cartels, etc adds to the perception.<p>Also, most HN commentators are americans, not chinese. Not sure why you are comparing mexico to china.<p>&gt; Mexico has it&#x27;s problems, but most of that stems from the Peso Crisis in the 1990s<p>No it doesn&#x27;t. The negative perception of mexico goes back to the first half of the 1800s when we conquered 51% of mexico and turned it into texas, california, etc. Remember the alamo, mass deportation of mexicans and so forth and that negativity has existed ever since. Also, the peso crisis affected mexico only, while the pandemic affected everybody, not just china. The world endured the recession while only mexico suffered immensely during the peso crisis.</text></comment> |
31,623,795 | 31,623,726 | 1 | 2 | 31,620,765 | train | <story><title>Coinbase is rescinding already-accepted job offers</title><url>https://www.sfgate.com/bayarea/article/Coinbase-rescinds-accepted-job-offers-17217989.php</url></story><parent_chain><item><author>codegeek</author><text>I honestly don&#x27;t get the point of Coinbase. Isn&#x27;t it a centralized system to manage what is supposed to be decentralized and that kinda defeats the purpose? I may be dumb.</text></item><item><author>carlivar</author><text>Coinbase tried to recruit me last summer. I politely told the recruiter that I felt it was all bubble speculation mania. Also the amount of equity sold by the CFO at the public listing indicated little confidence in the long term prospects.<p>I guess I was right? Not trying to brag or anything. Just a tip to everyone job hunting I guess. Watch what executives do (not what they say). Well, and human history of speculative bubbles.<p>I didn&#x27;t think it was a coincidence Coinbase did a direct listing rather than an IPO, since an IPO has a waiting period for insiders to sell. I think insiders knew they had a limited window of cheap money froth.</text></item></parent_chain><comment><author>ralston3</author><text>You are indeed not dumb. Coinbase is a bit of an oxymoron. If decentralized value exchange takes off, that will surely be the end of Coinbase. To a degree, Coinbase&#x27;s demise is baked into crypto&#x27;s success - which is kinda funny to think about. Coinbase has always seemed like a bit of a money grab, and crypto veterans sort&#x27;ve treat it this way.</text></comment> | <story><title>Coinbase is rescinding already-accepted job offers</title><url>https://www.sfgate.com/bayarea/article/Coinbase-rescinds-accepted-job-offers-17217989.php</url></story><parent_chain><item><author>codegeek</author><text>I honestly don&#x27;t get the point of Coinbase. Isn&#x27;t it a centralized system to manage what is supposed to be decentralized and that kinda defeats the purpose? I may be dumb.</text></item><item><author>carlivar</author><text>Coinbase tried to recruit me last summer. I politely told the recruiter that I felt it was all bubble speculation mania. Also the amount of equity sold by the CFO at the public listing indicated little confidence in the long term prospects.<p>I guess I was right? Not trying to brag or anything. Just a tip to everyone job hunting I guess. Watch what executives do (not what they say). Well, and human history of speculative bubbles.<p>I didn&#x27;t think it was a coincidence Coinbase did a direct listing rather than an IPO, since an IPO has a waiting period for insiders to sell. I think insiders knew they had a limited window of cheap money froth.</text></item></parent_chain><comment><author>solveit</author><text>Email is a decentralized protocol and the fact that you use a centralized provider such as gmail only somewhat defeats the purpose of email being a decentralized protocol instead of Google Messenger.</text></comment> |
33,373,327 | 33,371,018 | 1 | 2 | 33,370,882 | train | <story><title>Linux 6.1 on NanoPi R4S – On fixing SD-card support, Heisenbugs and Rabbit Holes</title><url>https://kohlschuetter.github.io/blog/posts/2022/10/28/linux-nanopi-r4s/</url></story><parent_chain></parent_chain><comment><author>jacob019</author><text>The R4S is a great little device. Love the solid metal case, tiny size, and low power usage. I used one to route my gigabit Ethernet for several months. Compiled OpenWRT from source with LibreELEC&#x27;s patches. No hardware problems for me--I did apply that voltage patch for the MicroSD. It handled the connection well, but couldn&#x27;t quite push full gigabit PPPoE with VLAN. Gets close if you enable acceleration, but then no AQM. CenturyLink requires you to tunnel IPv6, so that on top of the PPPoE was a bit much for it, I could only IPv6 at about 300mbps. No eMMC, you have to provide a MicroSD, but OpenWRT doesn&#x27;t write to the flash at all in normal operation, so that was fine.<p>Now I have upgraded to the R5S. This thing has HDMI out, dual 2.5Gbps + 1x Gigabit Ethernet, and eMMC. It was a bit difficult to acquire, but I&#x27;m very happy with it. I can max out the Gigabit WAN now, with AQM which really helps. IPv6 reaches over 750mbps, and the R5S has cycles to spare, so I think that&#x27;s the full speed. I can saturate the link over wireguard tunnels too.<p>The R4S is up on eBay now.</text></comment> | <story><title>Linux 6.1 on NanoPi R4S – On fixing SD-card support, Heisenbugs and Rabbit Holes</title><url>https://kohlschuetter.github.io/blog/posts/2022/10/28/linux-nanopi-r4s/</url></story><parent_chain></parent_chain><comment><author>rbanffy</author><text>One of the most amazing things in Free Software is its multiplying effect. One person had a problem, dug into it, figured it out and got it fixed. The effects will ripple through time and space and improve the support of a lot of other devices now and in the future.<p>This is FOSS at its best. Thank you, @kohlschuetter.</text></comment> |
14,048,740 | 14,048,420 | 1 | 2 | 14,047,627 | train | <story><title>High school journalists investigated a new principal’s credentials</title><url>https://www.washingtonpost.com/news/morning-mix/wp/2017/04/05/these-high-school-journalists-investigated-a-new-principals-credentials-days-later-she-resigned/</url></story><parent_chain></parent_chain><comment><author>nkurz</author><text>I don&#x27;t know how common falsified credentials are in school administration, but I&#x27;ll mention a parallel experience that was a formative part of my upbringing.<p>While I was in high school, my father taught at a small private college in Wisconsin, where he was on the hiring committee for the new college president. After being hired, the chosen candidate&#x27;s behavior was surprisingly erratic, prompting my my father to continue researching his background.<p>After a bit of digging, he found that the new college president had falsified almost his entire resume. Thinking he had solid proof of this, and not being politically savvy, my father presented his evidence to the other faculty who had been on the hiring committee.<p>To his surprise, he (rather than the fraudster) was promptly fired from his supposedly tenured position for &quot;gross insubordination&quot;. Shortly thereafter, the &quot;fake&quot; college president pocketed the proceeds from remortgaging the college dorms, drove his college provided little-red-sports-car out of town, and was never heard from again.<p>Embittered by the lack of backing from his purported colleagues, my father never returned to academia, and instead turned to odd jobs and house painting. Not long after, the college lost its accreditation, and went out of business: <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Mount_Senario_College" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Mount_Senario_College</a>.</text></comment> | <story><title>High school journalists investigated a new principal’s credentials</title><url>https://www.washingtonpost.com/news/morning-mix/wp/2017/04/05/these-high-school-journalists-investigated-a-new-principals-credentials-days-later-she-resigned/</url></story><parent_chain></parent_chain><comment><author>pitaa</author><text>And she would have gotten away with it too, if it weren&#x27;t for those meddling kids!</text></comment> |
23,118,302 | 23,114,809 | 1 | 2 | 23,113,661 | train | <story><title>Show HN: Micro-mitten – Research language with compile-time memory management</title><url>https://github.com/doctorn/micro-mitten</url><text>I&#x27;ve been working on implementing the compile-time approach to memory management described in this thesis (https:&#x2F;&#x2F;www.cl.cam.ac.uk&#x2F;techreports&#x2F;UCAM-CL-TR-908.pdf) for some time now - some of the performance results look promising! (Although some less so...) I think it would be great to see this taken further and built into a more complete functional language.</text></story><parent_chain><item><author>flohofwoe</author><text>&gt; This means that it maintains the ability to insert freeing code at appropriate program points, without putting restrictions on how you write your code.<p>How does the approach in mitten compare to Automatic Reference Counting in Objective-C (and I think Swift too)? From my experience, ARC can still add a surprising amount of memory management overhead to a program and needs a lot of hand-holding to keep that overhead down to an acceptable level (e.g. low single-digit percentage of overall execution time in programs that talk to Obj-C APIs a lot). I would be surprised if a &quot;traditional GC&quot; can do any worse in that regard (maybe reference counting smears the overhead over a wider area, e.g. no obvious spikes, but instead &quot;death by a thousand cuts&quot;).<p>One thing I&#x27;d like to see in modern languages is to encourage and simplify working with an (almost) entirely static memory layout, and make manipulations inside this static memory layout safe. This static memory layout doesn&#x27;t need to be magically derived by the compiler as long as the language offers features to easily describe this memory layout upfront.<p>A lot of data structures in applications don&#x27;t need to live in &quot;short-lived&quot; memory regions, but they often do because that&#x27;s what today&#x27;s languages either encourage (e.g. when built on the OOP philosophy), or what happens under the hood without much control from the code (e.g. in &quot;reference-heavy&quot; languages like Javascript, Java or C# - or even &quot;modern C++&quot; if you do memory management via smart pointers).<p>Minimizing data with dynamic lifetime, and maximing data with static lifetime could mean less complexity in the language and runtime (e.g. lifetime tracking by the compiler, or runtime memory management mechanisms like refcounting or GCs).</text></item></parent_chain><comment><author>mcguire</author><text>&quot;<i>How does the approach in mitten compare to Automatic Reference Counting in Objective-C (and I think Swift too)? From my experience, ARC can still add a surprising amount of memory management overhead to a program and needs a lot of hand-holding to keep that overhead down to an acceptable level (e.g. low single-digit percentage of overall execution time in programs that talk to Obj-C APIs a lot). I would be surprised if a &quot;traditional GC&quot; can do any worse in that regard (maybe reference counting smears the overhead over a wider area, e.g. no obvious spikes, but instead &quot;death by a thousand cuts&quot;).</i>&quot;<p>The reference counts have to be incremented when a new reference is made and decremented when one is deleted; freeing the memory when the count goes to zero. (This activity is cache- and atomicity-unfriendly (in the presence of threads).) 
A sufficiently smart compiler can omit many if not most of the count activity, but this kind of static analysis promises to remove all of it.<p>Further, reference counting has difficulty with circular references as the counts never go to zero. This should also be able to handle that.<p>Both this and reference counting are likely victims of the &quot;death by a thousand cuts&quot; you mention, as well as &quot;drop the last pointer to a large structure and wait for a long time while the pieces are deleted&quot;---the reference counting equivalent of a stop-the-world-and-trace garbage collection.</text></comment> | <story><title>Show HN: Micro-mitten – Research language with compile-time memory management</title><url>https://github.com/doctorn/micro-mitten</url><text>I&#x27;ve been working on implementing the compile-time approach to memory management described in this thesis (https:&#x2F;&#x2F;www.cl.cam.ac.uk&#x2F;techreports&#x2F;UCAM-CL-TR-908.pdf) for some time now - some of the performance results look promising! (Although some less so...) I think it would be great to see this taken further and built into a more complete functional language.</text></story><parent_chain><item><author>flohofwoe</author><text>&gt; This means that it maintains the ability to insert freeing code at appropriate program points, without putting restrictions on how you write your code.<p>How does the approach in mitten compare to Automatic Reference Counting in Objective-C (and I think Swift too)? From my experience, ARC can still add a surprising amount of memory management overhead to a program and needs a lot of hand-holding to keep that overhead down to an acceptable level (e.g. low single-digit percentage of overall execution time in programs that talk to Obj-C APIs a lot). I would be surprised if a &quot;traditional GC&quot; can do any worse in that regard (maybe reference counting smears the overhead over a wider area, e.g. no obvious spikes, but instead &quot;death by a thousand cuts&quot;).<p>One thing I&#x27;d like to see in modern languages is to encourage and simplify working with an (almost) entirely static memory layout, and make manipulations inside this static memory layout safe. This static memory layout doesn&#x27;t need to be magically derived by the compiler as long as the language offers features to easily describe this memory layout upfront.<p>A lot of data structures in applications don&#x27;t need to live in &quot;short-lived&quot; memory regions, but they often do because that&#x27;s what today&#x27;s languages either encourage (e.g. when built on the OOP philosophy), or what happens under the hood without much control from the code (e.g. in &quot;reference-heavy&quot; languages like Javascript, Java or C# - or even &quot;modern C++&quot; if you do memory management via smart pointers).<p>Minimizing data with dynamic lifetime, and maximing data with static lifetime could mean less complexity in the language and runtime (e.g. lifetime tracking by the compiler, or runtime memory management mechanisms like refcounting or GCs).</text></item></parent_chain><comment><author>eklavya</author><text>From what I understood, it’s not reference counting but trying to determine at compile time when to drop using data flow analysis to come up with an approximation of the liveness.<p>I had a thought sometimes back, can compilers do a profile run to get information about the liveness of objects it couldn’t determine statically by dumping gc info?</text></comment> |
20,501,761 | 20,501,228 | 1 | 2 | 20,500,119 | train | <story><title>Robinhood raises $323M at a $7.6B valuation</title><url>https://www.reuters.com/article/us-robinhood-funding/trading-startup-robinhood-raises-323-million-achieving-7-6-billion-valuation-idUSKCN1UH1VX</url></story><parent_chain><item><author>AznHisoka</author><text>I&#x27;ve been using Robinhood for the past year, and I hope they add basic features such as:<p>- The ability to short a stock, which I still can&#x27;t believe isn&#x27;t available.<p>- Price Alerts<p>Everything else is gravy, IMO, especially if they keep the same basic, sleek interface, which I actually like.</text></item></parent_chain><comment><author>brianchu</author><text>Shorting a stock is <i>not</i> a basic feature, it&#x27;s an advanced feature for power users.<p>You can buy put options on Robinhood already.</text></comment> | <story><title>Robinhood raises $323M at a $7.6B valuation</title><url>https://www.reuters.com/article/us-robinhood-funding/trading-startup-robinhood-raises-323-million-achieving-7-6-billion-valuation-idUSKCN1UH1VX</url></story><parent_chain><item><author>AznHisoka</author><text>I&#x27;ve been using Robinhood for the past year, and I hope they add basic features such as:<p>- The ability to short a stock, which I still can&#x27;t believe isn&#x27;t available.<p>- Price Alerts<p>Everything else is gravy, IMO, especially if they keep the same basic, sleek interface, which I actually like.</text></item></parent_chain><comment><author>throwaway5752</author><text>Is there a way to allow shorts without having to manage margin and having to carry risk on the balance sheet? They have to deal with two counterparties (you, and whomever is lending you the shares). I am not sure how they could economically offer free short trades.</text></comment> |
13,206,926 | 13,206,718 | 1 | 3 | 13,205,945 | train | <story><title>How My 10-Year-Old Learned JavaScript</title><url>https://hackernoon.com/how-my-10-year-old-learned-javascript-d8782b586db7#.gpjilzh62</url></story><parent_chain><item><author>uniclaude</author><text>This reads like the author really wants his kid to learn programming, and the article shows more interest from the parent than from the child.<p>That&#x27;s obviously anecdotical but I&#x27;m very glad nobody steered me into programming. It was my private garden, and I enjoyed every single second of the freedom it gave me. Writing my Delphi programs was funnier to me than programming books.<p>I think my point here is that I&#x27;m not sure it is in the best interest of children to push them into our passions. Then again I&#x27;m not a parent, so my opinion doesn&#x27;t hold much weight!</text></item></parent_chain><comment><author>neovive</author><text>Author here. Thanks for the feedback.<p>My son&#x27;s experience has generally been self-driven via Scratch. I introduced him to Scratch at around age 7 through CoderDojo and he really enjoyed it (I&#x27;m a big fan of Scratch for kids--it&#x27;s the perfect balance of art, creativity and logic with programming concepts mixed in.<p>I introduced Python, then Javascript, after he asked me about what I do (I was excited that he actually asked, so I ordered the book). I&#x27;m more of a web designer&#x2F;developer, not a software engineer, so I showed him to the tools I know (trying to keep things at his level) and set him on his way. The thought of being able to write his own video game keeps him motivated. He plays with the examples on phaser.io&#x2F;examples and sees it as something he might be able to do after working through the lessons.<p>My biggest concern is pushing him into something that he doesn&#x27;t want for himself. It should never feel like work. He likes the CodeSchool videos, jingles and exercises, so he does it. He got tired of the books, so he stopped. There are many long gaps, diversions and pivots, which is totally fine. If he just wants to play outside, build with Legos, read, watch a movie or do nothing, I don&#x27;t bother him. Programming is not his job, it&#x27;s just a skill that I think will be useful for him and is something I can actually help him with.<p>Side Note: I originally had a section in the post about how in the past it was more common to pass knowledge and skills down to your children in the form of trades (farming, sewing, blacksmithing, etc.) and is &quot;computer programming&quot; something worthy of passing down. I thought it digressed a bit too much, so I removed it.</text></comment> | <story><title>How My 10-Year-Old Learned JavaScript</title><url>https://hackernoon.com/how-my-10-year-old-learned-javascript-d8782b586db7#.gpjilzh62</url></story><parent_chain><item><author>uniclaude</author><text>This reads like the author really wants his kid to learn programming, and the article shows more interest from the parent than from the child.<p>That&#x27;s obviously anecdotical but I&#x27;m very glad nobody steered me into programming. It was my private garden, and I enjoyed every single second of the freedom it gave me. Writing my Delphi programs was funnier to me than programming books.<p>I think my point here is that I&#x27;m not sure it is in the best interest of children to push them into our passions. 
Then again I&#x27;m not a parent, so my opinion doesn&#x27;t hold much weight!</text></item></parent_chain><comment><author>strictnein</author><text> &gt; &quot;That&#x27;s obviously anecdotical but I&#x27;m very glad nobody steered me into programming.&quot;<p>Same. When I was growing up, we had a Commodore 64 and a small collection of programming books. (For the youngens in the crowd: Back in the day, software would sometimes be distributed in print form and you had to type it all in). As I learned to read, I also learned to type code (starting around age 5 or so). Likely one of the first chapter books I really spent time with was the C64 manual: <a href="http:&#x2F;&#x2F;www.commodore.ca&#x2F;commodore-manuals&#x2F;commodore-64-users-guide&#x2F;" rel="nofollow">http:&#x2F;&#x2F;www.commodore.ca&#x2F;commodore-manuals&#x2F;commodore-64-users...</a><p>I also didn&#x27;t know how to fix errors, so any time I made a mistake I would have to retype it all. By default, on a C64 you can just start typing a Basic program and then type RUN to run it. So you&#x27;d enter something like:<p><pre><code> 10 POKE 53280,2
20 POKE 53281,5
30 PRINT &quot;CHRISTMAS!&quot;
</code></pre>
Then type RUN and it would run the commands. In the case of the above code, it would set the border of the screen to red, the background to green, and print CHRISTMAS! on the screen.<p>&gt; &quot;I think my point here is that I&#x27;m not sure it is in the best interest of children to push them into our passions. Then again I&#x27;m not a parent, so my opinion doesn&#x27;t hold much weight!&quot;<p>I think this is correct. My son has spent a fair amount of type with the faux-programming stuff they have for kids these days (Tynker, code.org, etc) and has enjoyed it, but I&#x27;ve been struggling to find a way to transition him to something more real without it feeling forced.</text></comment> |
18,189,584 | 18,189,572 | 1 | 2 | 18,188,831 | train | <story><title>A Taco Truck on Every Corner, or Not?</title><url>https://a2civic.tech/blog/2018/09/30/a-taco-truck-on-every-corner-or-not.html</url></story><parent_chain><item><author>pochamago</author><text>What&#x27;s the motivation behind ordinances like this? I&#x27;ve never spoken to anyone who likes them, so I&#x27;m not really sure why cities are so aggressive about it. I assume there must be a large segment of the population who are really bothered by food trucks for some reason, but I genuinely struggle to imagine why</text></item></parent_chain><comment><author>mikeyouse</author><text>Some people don&#x27;t like other congregating in front of their stores and not buying anything but the most vocal opponent are restaurants. They feel like by paying substantial money for a restaurant lease, they shouldn&#x27;t be subject to competition that can operate with far lower costs &#x2F; standards. Think about if you owned a nice Mexican restaurant and then a taco truck parked outside during your rush hour. They have almost none of the fixed costs that you do and could undercut you pretty easily.<p>I&#x27;m not convinced it&#x27;s worth regulating their hours&#x2F;locations, but I do sympathize a bit with the brutal economics of restaurants.</text></comment> | <story><title>A Taco Truck on Every Corner, or Not?</title><url>https://a2civic.tech/blog/2018/09/30/a-taco-truck-on-every-corner-or-not.html</url></story><parent_chain><item><author>pochamago</author><text>What&#x27;s the motivation behind ordinances like this? I&#x27;ve never spoken to anyone who likes them, so I&#x27;m not really sure why cities are so aggressive about it. I assume there must be a large segment of the population who are really bothered by food trucks for some reason, but I genuinely struggle to imagine why</text></item></parent_chain><comment><author>mikeash</author><text>I suspect it’s more like, 99% of the population is indifferent, a small number of people care a lot (in this case, it’s probably restaurant owners), and elected officials do what the small number wants because they’re the only ones voicing any sort of opinion. (And they probably donate to campaigns.)</text></comment> |
6,350,682 | 6,350,440 | 1 | 3 | 6,350,351 | train | <story><title>Hacker News Onion</title><url>https://twitter.com/hackernewsonion/</url></story><parent_chain></parent_chain><comment><author>biot</author><text>It&#x27;s like a fake version of <a href="https://twitter.com/shit_hn_says" rel="nofollow">https:&#x2F;&#x2F;twitter.com&#x2F;shit_hn_says</a></text></comment> | <story><title>Hacker News Onion</title><url>https://twitter.com/hackernewsonion/</url></story><parent_chain></parent_chain><comment><author>vukmir</author><text>My favorites:<p><pre><code> 1. Self-proclaimed &quot;visionary&quot; seeks technical cofounder.
2. Blue Screen Of Death As A Service
3. &quot;Node.js is a Ghetto&quot; by Ryan Dahl and Zed Shaw</code></pre></text></comment> |
21,324,653 | 21,324,587 | 1 | 2 | 21,323,748 | train | <story><title>Elon Musk tweets using SpaceX’s Starlink satellite internet</title><url>https://techcrunch.com/2019/10/22/elon-musk-tweets-using-spacexs-starlink-satellite-internet/</url></story><parent_chain></parent_chain><comment><author>tpmx</author><text>I saw something interesting the other day that I had previously missed:<p>Starlink has the potential to provide lower latency around the globe than terrestrial fiber because light travels faster in vacuum than in optical silica fiber.<p>&quot;Delay is Not an Option: Low Latency Routing in Space&quot;<p><a href="https:&#x2F;&#x2F;discovery.ucl.ac.uk&#x2F;id&#x2F;eprint&#x2F;10062262&#x2F;" rel="nofollow">https:&#x2F;&#x2F;discovery.ucl.ac.uk&#x2F;id&#x2F;eprint&#x2F;10062262&#x2F;</a><p><a href="http:&#x2F;&#x2F;nrg.cs.ucl.ac.uk&#x2F;mjh&#x2F;starlink-draft.pdf" rel="nofollow">http:&#x2F;&#x2F;nrg.cs.ucl.ac.uk&#x2F;mjh&#x2F;starlink-draft.pdf</a><p>&quot;We conclude that a network built in this manner can provide lower latency communications than any possible terrestrial optical fiber network for communications over distances greater than about 3000 km.&quot;</text></comment> | <story><title>Elon Musk tweets using SpaceX’s Starlink satellite internet</title><url>https://techcrunch.com/2019/10/22/elon-musk-tweets-using-spacexs-starlink-satellite-internet/</url></story><parent_chain></parent_chain><comment><author>jniedrauer</author><text>I&#x27;m really excited about this technology for selfish reasons. I love disconnecting and spending time in remote wildernesses. But paradoxically, being disconnected from technology gives me a lot of anxiety about technology. Working on devops roles over the last decade, I&#x27;ve almost never had a time when I can just forget about it all. Even when I&#x27;m not on call, and no matter how well I try to document things, I&#x27;m often the one with the answers. When things break, I need to be able to step in. Being able to go off the grid, while still being able to step in when SHTF, would increase my quality of life significantly.</text></comment> |
13,687,698 | 13,687,617 | 1 | 3 | 13,686,863 | train | <story><title>Cmd/compile: Go 1.8 regression: sync/atomic loop elided</title><url>https://github.com/golang/go/issues/19182</url></story><parent_chain><item><author>vessenes</author><text>This is interesting, actually.<p>As the language is specified, it&#x27;s not wrong to just ignore that add completely -- the variable isn&#x27;t used inside the goroutine, and you&#x27;re not guaranteed a goroutine will run at any particular time.<p>So, code executed right after that routine is spawned will be hitting the &#x27;old&#x27; value anyway; hence Dvyukov&#x27;s comments that this is arguably to spec, or within spec.<p>Think if you actually wrote this code; you&#x27;d have some weird behavior if the compiler did what you &#x27;expect&#x27; - a couple outputs of low integers then some high ones as the goroutine gets scheduled in. So it&#x27;s almost certainly just &#x27;wrong&#x27; to write something like this, under most (all?) circumstances.<p>Because sync.Wait() isn&#x27;t used, there&#x27;s no guarantee the goroutine will execute at all before program halt.<p>That said, I bet there&#x27;s some related more reasonable scenario where you could argue the compiler should at least do the adding. I&#x27;m not sure why atomic.Add* should get special treatment though, and it looks like the fix is to remove some escape-analysis type code, so I&#x27;m curious how this will get fixed.</text></item></parent_chain><comment><author>alecrn</author><text>Yeah, this particular instance seems ok to me. This one makes the example feel weirder:<p><pre><code> func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
fmt.Println(runtime.NumCPU(), runtime.GOMAXPROCS(0))
started := make(chan bool)
go func() {
started &lt;- true
for {
atomic.AddUint64(&amp;a, uint64(1))
}
}()
&lt;-started
for {
fmt.Println(atomic.LoadUint64(&amp;a))
time.Sleep(time.Second)
}
}
</code></pre>
Here we explicitly wait until the goroutine is started, so we know it&#x27;s scheduled by the time our other loop runs. Here on my computer with go1.8 linux&#x2F;amd64 it still optimizes out the while loop, which makes sense as nothing changed that would convince the optimizer the loop should remain given the compiler&#x27;s current logic.<p>If you add time.Sleep(time.Millisecond) to the goroutine loop or any other synchronization it works fine. I&#x27;m having trouble thinking of a real world example where you&#x27;d want an atomic operation going ham in a loop without any sort of time or synchronization. At the very least a channel indicating when the loop is done would cause the loop to compile.</text></comment> | <story><title>Cmd/compile: Go 1.8 regression: sync/atomic loop elided</title><url>https://github.com/golang/go/issues/19182</url></story><parent_chain><item><author>vessenes</author><text>This is interesting, actually.<p>As the language is specified, it&#x27;s not wrong to just ignore that add completely -- the variable isn&#x27;t used inside the goroutine, and you&#x27;re not guaranteed a goroutine will run at any particular time.<p>So, code executed right after that routine is spawned will be hitting the &#x27;old&#x27; value anyway; hence Dvyukov&#x27;s comments that this is arguably to spec, or within spec.<p>Think if you actually wrote this code; you&#x27;d have some weird behavior if the compiler did what you &#x27;expect&#x27; - a couple outputs of low integers then some high ones as the goroutine gets scheduled in. So it&#x27;s almost certainly just &#x27;wrong&#x27; to write something like this, under most (all?) circumstances.<p>Because sync.Wait() isn&#x27;t used, there&#x27;s no guarantee the goroutine will execute at all before program halt.<p>That said, I bet there&#x27;s some related more reasonable scenario where you could argue the compiler should at least do the adding. I&#x27;m not sure why atomic.Add* should get special treatment though, and it looks like the fix is to remove some escape-analysis type code, so I&#x27;m curious how this will get fixed.</text></item></parent_chain><comment><author>fmstephe</author><text>Unfortunately this miscompilation probably is within spec, given that the atomic package has no documented ordering.<p>But it&#x27;s coming.<p><a href="https:&#x2F;&#x2F;github.com&#x2F;golang&#x2F;go&#x2F;issues&#x2F;5045" rel="nofollow">https:&#x2F;&#x2F;github.com&#x2F;golang&#x2F;go&#x2F;issues&#x2F;5045</a></text></comment> |
18,438,028 | 18,437,109 | 1 | 3 | 18,436,387 | train | <story><title>Google, Facebook, and Amazon benefit from an outdated definition of “monopoly”</title><url>https://qz.com/work/1460402/google-facebook-and-amazon-benefit-from-an-outdated-definition-of-monopoly/</url></story><parent_chain><item><author>Nullabillity</author><text>&gt; The classic example is: Coca-Cola has a 95% market share of the cola market in some countries. Does it mean it has a monopoly? No.<p>Effectively, yes. Why is the market so disfunctional that a single company has effectively swallowed all competition?<p>&gt; If it had 100% of the cola market, would it have a monopoly? No.<p>Err, yes.<p>&gt; Because the cola market doesn&#x27;t exist in isolation. Colas compete with all other sodas, with water, juices, etc. for a share of wallet and a share of stomach.<p>And AT&amp;T wasn&#x27;t a monopoly, since you could just walk to the person you want to talk to. Oh, wait..</text></item><item><author>ucaetano</author><text>It isn&#x27;t outdated, it is an economics definition of a monopoly, and isn&#x27;t even that: a monopoly is perfectly fine, AS LONG AS the economic power isn&#x27;t abused to prevent competition which harms consumers.<p>What the author is proposing is that regulation is no longer based on economics and economic power, but on a vague definition of monopoly, and people are absurdly trigger-happy when calling something a monopoly.<p>The reason why anti-trust is based on precise economic definitions is that it leaves as little room as possible for the government to favor friendly players. When you need to prove harm to consumers, the bar is high, as it should be.<p>Otherwise, any government in power will simply abuse their own monopoly on regulation to favor and transfer wealth from society to friends.<p>The classic example is: Coca-Cola has a 95% market share of the cola market in some countries. Does it mean it has a monopoly? No.<p>If it had 100% of the cola market, would it have a monopoly? No.<p>Because the cola market doesn&#x27;t exist in isolation. Colas compete with all other sodas, with water, juices, etc. for a share of wallet and a share of stomach.</text></item></parent_chain><comment><author>AnthonyMouse</author><text>&gt; Why is the market so disfunctional that a single company has effectively swallowed all competition?<p>This is actually pretty common in hypercompetitive commodity markets. The largest player has a slight cost advantage due to economies of scale, so they have the best price and everyone buys from them. But they still have no market power because their market share doesn&#x27;t come from barriers to entry.<p>&gt; Err, yes.<p>It&#x27;s not necessarily a monopoly even at 100% when there are competitors who <i>could</i> immediately enter the market if the incumbent were to be so audacious as to raise prices by 4%, or do anything else the customer even mildly dislikes -- because that fact keeps them from ever doing it.<p>Notice that this is <i>not</i> how it works for Comcast, because it&#x27;s not cheap or quick to wire a city with fiber, so they can get away with a great deal of abuse before anyone else would show up to compete -- even if they only had 50% market share, as long as the other 50% is another company doing all the same abusive stuff.<p>&gt; And AT&amp;T wasn&#x27;t a monopoly, since you could just walk to the person you want to talk to. 
Oh, wait..<p>To be a substitute it has to be a practical alternative that can be used for the same purpose at approximately the same cost. Having to spend an hour walking is not the same cost as picking up the phone.</text></comment> | <story><title>Google, Facebook, and Amazon benefit from an outdated definition of “monopoly”</title><url>https://qz.com/work/1460402/google-facebook-and-amazon-benefit-from-an-outdated-definition-of-monopoly/</url></story><parent_chain><item><author>Nullabillity</author><text>&gt; The classic example is: Coca-Cola has a 95% market share of the cola market in some countries. Does it mean it has a monopoly? No.<p>Effectively, yes. Why is the market so disfunctional that a single company has effectively swallowed all competition?<p>&gt; If it had 100% of the cola market, would it have a monopoly? No.<p>Err, yes.<p>&gt; Because the cola market doesn&#x27;t exist in isolation. Colas compete with all other sodas, with water, juices, etc. for a share of wallet and a share of stomach.<p>And AT&amp;T wasn&#x27;t a monopoly, since you could just walk to the person you want to talk to. Oh, wait..</text></item><item><author>ucaetano</author><text>It isn&#x27;t outdated, it is an economics definition of a monopoly, and isn&#x27;t even that: a monopoly is perfectly fine, AS LONG AS the economic power isn&#x27;t abused to prevent competition which harms consumers.<p>What the author is proposing is that regulation is no longer based on economics and economic power, but on a vague definition of monopoly, and people are absurdly trigger-happy when calling something a monopoly.<p>The reason why anti-trust is based on precise economic definitions is that it leaves as little room as possible for the government to favor friendly players. When you need to prove harm to consumers, the bar is high, as it should be.<p>Otherwise, any government in power will simply abuse their own monopoly on regulation to favor and transfer wealth from society to friends.<p>The classic example is: Coca-Cola has a 95% market share of the cola market in some countries. Does it mean it has a monopoly? No.<p>If it had 100% of the cola market, would it have a monopoly? No.<p>Because the cola market doesn&#x27;t exist in isolation. Colas compete with all other sodas, with water, juices, etc. for a share of wallet and a share of stomach.</text></item></parent_chain><comment><author>matthewmacleod</author><text>That’s not a particularly convincing argument - it’s like saying that Sony has a monopoly in the PlayStation market - possibly true, but not that useful!</text></comment> |
19,781,641 | 19,781,345 | 1 | 3 | 19,780,762 | train | <story><title>Burger King is rolling out meatless Impossible Whoppers nationwide</title><url>https://www.theverge.com/2019/4/29/18522637/burger-king-impossible-whopper-nationwide-rollout-meatless-vegetarian</url></story><parent_chain><item><author>chrisco255</author><text>As someone raised around cattle ranches, I really don&#x27;t understand the animal living conditions argument.</text></item><item><author>wccrawford</author><text>As far as I can tell, I&#x27;m the target demographic for this.<p>I eat beef (and other meat) burgers because I like how they taste. If this comes close, I&#x27;ll gladly switch, even though they cost a little more. I&#x27;m far from vegetarian, but I&#x27;ve been thinking more and more that I&#x27;d like to slim down on my actual meat eating for a few reasons, including the environment, the animal living conditions, and the cholesterol.<p>The one thing I&#x27;m not happy about is trading fat for carbs. I&#x27;d rather stick to the protein and fat. But no solution is going to be perfect.</text></item></parent_chain><comment><author>matthewtoast</author><text>My understanding is that the vast majority of cattle (80%?) in the United States is raised in factory farms; the vast majority the meat Americans available to purchase grocery store, served in restaurants, and processed for fast food, is sourced from those same farms.<p>And my understanding is that the conditions would be horrific by almost any standard: the animals are kept confined, in extremely cramped and nasty conditions, often unable to move or turn around, all day and every day, seldom or never permitted to roam. (I&#x27;m ignoring the whole antibiotics overuse question.)<p>When they&#x27;re ready for slaughter, they&#x27;re crammed into trucks and transported for hundreds of miles, in all weather, standing in their own feces. Exhausted, terrified, and freezing or overheated after transport, many are shocked with electric prods or beaten to force them back out of the trucks. Then they are hit with bolt guns to &quot;stun&quot; them, after which their throats are slit.<p>The lucky cows are unconscious before they are sawed apart, but you can bet that the types of workers who do these dirty, low-paying jobs, hundreds of times per day, as quickly as possible, without clogging up the factory line, aren&#x27;t always trained well enough nor concerned whether they cause suffering.</text></comment> | <story><title>Burger King is rolling out meatless Impossible Whoppers nationwide</title><url>https://www.theverge.com/2019/4/29/18522637/burger-king-impossible-whopper-nationwide-rollout-meatless-vegetarian</url></story><parent_chain><item><author>chrisco255</author><text>As someone raised around cattle ranches, I really don&#x27;t understand the animal living conditions argument.</text></item><item><author>wccrawford</author><text>As far as I can tell, I&#x27;m the target demographic for this.<p>I eat beef (and other meat) burgers because I like how they taste. If this comes close, I&#x27;ll gladly switch, even though they cost a little more. I&#x27;m far from vegetarian, but I&#x27;ve been thinking more and more that I&#x27;d like to slim down on my actual meat eating for a few reasons, including the environment, the animal living conditions, and the cholesterol.<p>The one thing I&#x27;m not happy about is trading fat for carbs. I&#x27;d rather stick to the protein and fat. 
But no solution is going to be perfect.</text></item></parent_chain><comment><author>mrfusion</author><text>Well most of them don’t live on “ranches” for one.</text></comment> |
19,240,343 | 19,238,778 | 1 | 3 | 19,238,638 | train | <story><title>System76 Thelio: A Review</title><url>https://nora.codes/post/system76-thelio-a-review/</url></story><parent_chain></parent_chain><comment><author>smacktoward</author><text>I don&#x27;t understand why these boutique Linux system vendors all end up gravitating towards rolling their own distributions -- Pop!_OS (ugh) for System76, PureOS (<a href="https:&#x2F;&#x2F;www.pureos.net&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.pureos.net&#x2F;</a>) for Purism, etc.<p>I&#x27;ve been running Ubuntu on my machines for something like a decade now, and have experience with Debian and Red Hat&#x2F;Fedora from before that. I know what to expect from these distributions. When you tell me you&#x27;ve rolled your own distribution, I have <i>no idea</i> what to expect, which turns me off from your product. And if the answer is &quot;you can expect exactly the same as what you&#x27;d get from Ubuntu&#x2F;Debian&#x2F;Fedora&#x2F;whatever,&quot; then why bother rolling a new distro in the first place? Why take what should be a simple sales pitch and make it more confusing?<p>They seem to think the idea of learning my way around Yet Another Linux Distribution is an enticement. It isn&#x27;t -- especially when the only community around the distribution is a single small company, which means that anything I do learn is unlikely to be useful beyond using that company&#x27;s products. As soon as I buy from someone else, or that one small company goes out of business or loses interest in maintaining the distro, I can take all that knowledge and toss it in the bin. No thanks.<p>(Weirdly, the only company in this space that seems to really get this is Dell. I feel like I&#x27;m living in the Mirror Universe.)</text></comment> | <story><title>System76 Thelio: A Review</title><url>https://nora.codes/post/system76-thelio-a-review/</url></story><parent_chain></parent_chain><comment><author>mark_l_watson</author><text>After the busy work of using Linux and making a hobby of configuration (I think I started in 1992 with Slackware), it was so nice buying a System76 laptop last fall and having everything ‘just work.’</text></comment> |
9,255,060 | 9,255,092 | 1 | 2 | 9,254,982 | train | <story><title>Purple – Heroku UI Kit</title><url>http://purple.herokuapp.com/</url><text></text></story><parent_chain></parent_chain><comment><author>yRetsyM</author><text>&quot;Purple should never be used outside of officially endorsed Heroku products or without explicit permission.&quot;<p>I wonder how others feel about this, I really enjoy breaking apart existing websites and having a guide like this is certainly something I can use for inspiration in any of my projects - but I also wonder about the above clause and any implications it may have on copycat behaviour.</text></comment> | <story><title>Purple – Heroku UI Kit</title><url>http://purple.herokuapp.com/</url><text></text></story><parent_chain></parent_chain><comment><author>colinmegill</author><text>Alex Lande did something similar for WalMart <a href="http://walmartlabs.github.io/web-style-guide/" rel="nofollow">http:&#x2F;&#x2F;walmartlabs.github.io&#x2F;web-style-guide&#x2F;</a> ... on the tail of that, he built Radium. We had many conversations in between about the shortcomings of CSS after working on such large projects. <a href="http://projects.formidablelabs.com/radium/" rel="nofollow">http:&#x2F;&#x2F;projects.formidablelabs.com&#x2F;radium&#x2F;</a></text></comment> |
30,101,515 | 30,100,078 | 1 | 3 | 30,099,606 | train | <story><title>Jonathan the 190-Year-Old Tortoise Was Photographed in 1886 and Today</title><url>https://petapixel.com/2022/01/26/jonathan-the-190-year-old-tortoise-was-photographed-in-1886-and-today/</url></story><parent_chain></parent_chain><comment><author>leokennis</author><text>Meanwhile, this shark has been swimming around the Atlantic since 1627<i>:<p><a href="https:&#x2F;&#x2F;www.captain-planet.net&#x2F;uploads&#x2F;2018&#x2F;10&#x2F;67241603_362186541125894_5812555219371819008_n-1.jpg" rel="nofollow">https:&#x2F;&#x2F;www.captain-planet.net&#x2F;uploads&#x2F;2018&#x2F;10&#x2F;67241603_3621...</a><p>~~Kids~~ Tortoises these days...<p></i> <a href="https:&#x2F;&#x2F;www.captain-planet.net&#x2F;400-year-old-shark-found-in-the-arctic-could-be-the-oldest-living-vertebrate&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.captain-planet.net&#x2F;400-year-old-shark-found-in-t...</a></text></comment> | <story><title>Jonathan the 190-Year-Old Tortoise Was Photographed in 1886 and Today</title><url>https://petapixel.com/2022/01/26/jonathan-the-190-year-old-tortoise-was-photographed-in-1886-and-today/</url></story><parent_chain></parent_chain><comment><author>JoeAltmaier</author><text>While Jonathan was alive Queen Victoria married Prince Albert, Samuel Morse received the patent for the telegraph, Beau Brummell died and Oliver Wendell Holmes was born. Jonathan is older than Canada and Hong Kong.</text></comment> |
37,016,574 | 37,015,352 | 1 | 3 | 37,012,185 | train | <story><title>Mortality patterns for patients hospitalized during cardiology meetings (2016)</title><url>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4314435/</url></story><parent_chain></parent_chain><comment><author>mhb</author><text>Comment from Marginal Revolution by someone who sounds knowledgeable:<p>&quot;To start with there is a significant problem with the article. Cardiac surgeons don&#x27;t go to cardiology meetings and don&#x27;t perform the type of interventions mentioned in the article. Interventional cardiologists are not cardiac surgeons. The recurrent mislabeling of the specialty involved is yet another example of the slipshod treatment and lack of understanding of science and medicine in the lay press which makes for an ill-informed public.&quot;<p><a href="https:&#x2F;&#x2F;marginalrevolution.com&#x2F;marginalrevolution&#x2F;2023&#x2F;08&#x2F;86443.html?commentID=160636865" rel="nofollow noreferrer">https:&#x2F;&#x2F;marginalrevolution.com&#x2F;marginalrevolution&#x2F;2023&#x2F;08&#x2F;86...</a></text></comment> | <story><title>Mortality patterns for patients hospitalized during cardiology meetings (2016)</title><url>https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4314435/</url></story><parent_chain></parent_chain><comment><author>ChrisMarshallNY</author><text>I had a friend tell me of an old doctor of his.<p>He said his doctor (an internist) was attending a medical conference for allergists, in the Bahamas. About 500 doctors attended.<p>This doctor was fearfully allergic to peanuts. Like, anaphylactic allergic.<p>He had an anaphylactic reaction to something he ate, during the main speaker banquet.<p>He died.<p>Surrounded by 500 allergists.</text></comment> |
12,631,808 | 12,631,696 | 1 | 3 | 12,630,095 | train | <story><title>New lower Azure pricing</title><url>https://azure.microsoft.com/en-us/blog/new-lower-azure-pricing/</url></story><parent_chain><item><author>BonoboIO</author><text>Cloud Pricing is ridiculous ... made the comparison between a dedicated server and cloud offerings of microsoft, google, amazon ... rackspace (SO EXPENSIVE) and you pay 5 times in the cloud for the same.</text></item></parent_chain><comment><author>paulddraper</author><text>Restaurant prices are ridiculous ... made the comparison between groceries and menu offerings of McDonalds, Taco Bell, Burger King ... Olive Garden (SO EXPENSIVE) and you pay 5 times at a restaurant for the same.<p>---<p>You&#x27;re not paying for hardware. You&#x27;re paying for hardware, expertise, services, and convenience.<p>On-prem or colocation may be a good choice. But limiting your comparison to raw computing power mischaracterizes the decision.</text></comment> | <story><title>New lower Azure pricing</title><url>https://azure.microsoft.com/en-us/blog/new-lower-azure-pricing/</url></story><parent_chain><item><author>BonoboIO</author><text>Cloud Pricing is ridiculous ... made the comparison between a dedicated server and cloud offerings of microsoft, google, amazon ... rackspace (SO EXPENSIVE) and you pay 5 times in the cloud for the same.</text></item></parent_chain><comment><author>Scirra_Tom</author><text>We&#x27;re moving to AWS for our next release, managed by Rackspace. Had the same philosophy as you for years, and have saved a lot of money with DIY (maybe in the tens of thousands).<p>Turns out server admin is not my skillset, and it&#x27;s caused problems, headaches and risks in the past.<p>I consider the additional cost going forwards in effect as our next hire and an important one. For small startups looking to save money DIY can be a good option, but moving onto the cloud at a later date can be a big burden.</text></comment> |
31,038,706 | 31,038,250 | 1 | 2 | 31,037,273 | train | <story><title>Rustaceans at the border</title><url>https://lwn.net/SubscriberLink/889924/2b330ed9ea4a9e23/</url></story><parent_chain><item><author>enriquto</author><text>Rust is alright as a language, but cargo is extremely scary. I hope a culture of coding in rust outside of the cargo &quot;ecosystem&quot; develops. The current situation is alienating to reproducibility-oriented developers.</text></item></parent_chain><comment><author>cillian64</author><text>If you particularly want to replicate the common C situation where your dependencies are in-tree or you just don’t have external dependencies and write everything yourself then you are totally welcome to do that, even with Cargo. Nobody is forcing you to use external dependencies. If your problem is other people writing software with external dependencies, then I guess you have different priorities to them.</text></comment> | <story><title>Rustaceans at the border</title><url>https://lwn.net/SubscriberLink/889924/2b330ed9ea4a9e23/</url></story><parent_chain><item><author>enriquto</author><text>Rust is alright as a language, but cargo is extremely scary. I hope a culture of coding in rust outside of the cargo &quot;ecosystem&quot; develops. The current situation is alienating to reproducibility-oriented developers.</text></item></parent_chain><comment><author>CraigJPerry</author><text>Is reproducability achiveable?<p>If you preserved an immutable tag of the source code and all its dependencies, a copy of the compiler version used and all build flags, then you’ve still got some big holes in your ability to reproduce a binary:<p><pre><code> 1. OS version &amp; patches installed
2. OS configuration
3. Hardware used (processors can have weird subtle bugs, microcode can affect execution behaviour, etc etc)
4. Transient issues - the golden copy to be reproduced for some post event investigation might have contained a bit flip leading to impossible to reproduce verification signatures
</code></pre>
Etc etc<p>Or is reproducibility just a spectrum and you try to get further along it with some careful attention to detail, rather than an absolute to be achieved?<p>If the latter, is it not cheaper to just archive binaries and tag them with the source + deps + compiler + arch used to build them? That&#x27;s a 5 minute job to set up in your CI process and costs comparatively little to maintain vs wasting expensive human brains chasing down a futile goal.</text></comment>
16,476,171 | 16,475,998 | 1 | 2 | 16,474,938 | train | <story><title>Facebook turned on face recognition silently</title><url>https://imgur.com/a/tK8eW</url></story><parent_chain><item><author>jacquesm</author><text>Which is why I do not want <i>anybody</i> to make any pictures of me, especially not if they are active on social media. There are exactly two pictures of me online, the one is 30 years old, the other about 10. The quickest way to get me pissed off is to point a camera at me.</text></item><item><author>bjt2n3904</author><text>So, here&#x27;s the weird catch-22. Or perhaps, the illusion of choice.<p>There are two people who are in a photo. One wishes Facebook to use face recognition, the other does not.<p>Facebook will run face recognition on the photo regardless, but how will they know who wants to be recognized, and who doesn&#x27;t?<p>Simply, they have a model of your face already trained. And they recognize you in photos--you have no say whether or not they will. All this option does is hide the notifications when they do recognize you.</text></item></parent_chain><comment><author>kelnos</author><text>I find that a little bit sad. Both of my parents have passed away (fairly young), and they were somewhat camera-shy when they were alive (not for the same reasons you are, but the result is the same). I have barely a handful of photos of them, and that&#x27;s often a source of disappointment and sadness for me.<p>I get that rampant data collection is unacceptable, and companies are terribly poor stewards of our personal data, but at the end of the day I have precious little in a visual sense with which to remember my parents.<p>I think it&#x27;s entirely reasonable to request that your friends not post pictures of you on social media, though, and get pissed at them when they ignore your wishes.</text></comment> | <story><title>Facebook turned on face recognition silently</title><url>https://imgur.com/a/tK8eW</url></story><parent_chain><item><author>jacquesm</author><text>Which is why I do not want <i>anybody</i> to make any pictures of me, especially not if they are active on social media. There are exactly two pictures of me online, the one is 30 years old, the other about 10. The quickest way to get me pissed off is to point a camera at me.</text></item><item><author>bjt2n3904</author><text>So, here&#x27;s the weird catch-22. Or perhaps, the illusion of choice.<p>There are two people who are in a photo. One wishes Facebook to use face recognition, the other does not.<p>Facebook will run face recognition on the photo regardless, but how will they know who wants to be recognized, and who doesn&#x27;t?<p>Simply, they have a model of your face already trained. And they recognize you in photos--you have no say whether or not they will. All this option does is hide the notifications when they do recognize you.</text></item></parent_chain><comment><author>tyfon</author><text>I&#x27;m getting severe anxiety from having a camera pointed at me since this all Facebook stuff started and people posted pictures online. Needless to say, I don&#x27;t have an account there.<p>I also take off the moment I see a camera unless the shooter in advance ask me if it&#x27;s ok and they agree to not put it online. Unfortunately I can&#x27;t do anything about CCTV but that&#x27;s not posted online at least.<p>It&#x27;s to the point where I avoid social settings with a lot of unknown people and generally stay with my wife and kids. 
Not that that is a bad thing, I&#x27;m quite happy to be with them :)</text></comment>
6,964,015 | 6,964,098 | 1 | 3 | 6,963,220 | train | <story><title>Ruby 2.1 Released</title><url>http://ftp.ruby-lang.org/pub/ruby/2.1/</url></story><parent_chain></parent_chain><comment><author>daGrevis</author><text>Please stop posting links to file locations. Announcements or changelogs would be more useful! If I would like to download it, I&#x27;ll just use package manager. Personally, I just want to read what&#x27;s new.</text></comment> | <story><title>Ruby 2.1 Released</title><url>http://ftp.ruby-lang.org/pub/ruby/2.1/</url></story><parent_chain></parent_chain><comment><author>thinkbohemian</author><text>Now supported on Heroku! <a href="https://devcenter.heroku.com/changelog-items/376" rel="nofollow">https:&#x2F;&#x2F;devcenter.heroku.com&#x2F;changelog-items&#x2F;376</a></text></comment> |
5,896,422 | 5,896,497 | 1 | 3 | 5,895,672 | train | <story><title>How FPGAs work, and why you'll buy one</title><url>http://www.yosefk.com/blog/how-fpgas-work-and-why-youll-buy-one.html</url></story><parent_chain><item><author>_yosefk</author><text>It&#x27;s genuinely complicated; if Xilinx could disrupt Altera by making easier-to-use tools, it would. (In fact it tries with AutoESL.)<p>I hope to explain why FPGA tooling is intrinsically hard in my next write-up.</text></item><item><author>alok-g</author><text>I seriously want someone to disrupt FPGA tooling.<p>In my understanding the tooling for FPGAs has been made purposefully complicated to achieve a lock-in. The issue is that adoption of new tooling practically requires compatibility with the existing tools from the two key market holders, and I do not think even they themselves could do that anymore.</text></item><item><author>schabernakk</author><text>&gt; I&#x27;m continually surprised by how few software engineers in industry spend the time to pick up HDL and FPGA programming in general.<p>unsufficient tooling? I did some programming with FPGAs once and it seemed the best option I had for programming was proprietary software by altera (quartus). I never got debugging to work or perhaps I did and I didnt understand the stuff it was showing me (I am no hardware guy)<p>My impression was that an eclipse-like ide, perhaps with a built in HW simulator would make things A LOT easier, especially for beginners like me. Of course this could be completely unrealistic and impractical for hardware design in which case I will show myself out.</text></item><item><author>bhb916</author><text>This is a solid article. I&#x27;m continually surprised by how few software engineers in industry spend the time to pick up HDL and FPGA programming in general. In my mind, it is an easy way to expand your breadth of knowledge and make you a touch more valuable to future employers. They say that when all you have is a hammer, everything looks like a nail and I&#x27;m certainly inflicted with that same disease as I see the utility of FPGAs everywhere I look. Prices have plummeted while densities have skyrocketed. A simple $25 part gets you quite a bit of fabric and some $90 eval hardware will give you a sweet little platform. [1]<p>With that said, since I began working with them there have been two &quot;Holy Grails&quot; of FPGA design: (1) Partial Reconfiguration and (2) High Level Synthesis.<p>The first, Partial Reconfiguration, has been more-or-less solved although the tools have a long way to go. One current design I&#x27;m working on loads it&#x27;s PCIe endpoint and DDR3 controller first, establishes communication with the application running on the host PC, then based on user input loads the rest of the FPGA.<p>The second, High Level Synthesis, isn&#x27;t here yet. The goal is to turn a company&#x27;s vast army of software engineers into FPGA programmers overnight. A worthy cause. Every foray into this field has failed (although the jury is still out on Xilinx&#x27;s purchase of autoESL) Honestly, I&#x27;m not sure it will ever get there. The point of optimized, custom hardware is to make use of it. 
Abstracting it all away seems counterproductive, not to mention very hard.<p>[1] <a href="http:&#x2F;&#x2F;www.xilinx.com&#x2F;products&#x2F;boards-and-kits&#x2F;AES-S6MB-LX9.htm" rel="nofollow">http:&#x2F;&#x2F;www.xilinx.com&#x2F;products&#x2F;boards-and-kits&#x2F;AES-S6MB-LX9....</a></text></item></parent_chain><comment><author>amirhirsch</author><text>here&#x27;s a few good reasons why it&#x27;s hard to make easy-to-use vendor-neutral FPGA tools:<p>- all the devices&#x2F;bitstream formats are proprietary with little or no documentation of the logic blocks or programmable interconnect structures. it is probably technically easier to build a new FPGA from scratch and design tools for that, than to reverse engineer existing chips [1]<p>- there is very little cross-compatibility between vendor products (a 4-lut here, a 6-lut there, some carry-chain logic here, a DSP block there)<p>- all the optimizations (synthesis, place-and-route) are NP-hard problems<p>- sequential imperative (C-like) thinking is not the correct way to make parallel systems<p>- the FPGA vendors compete on tools and offer their software for free to push hardware. hard for an independent vendor to compete.<p>[1] some reverse engineering efforts exist. see &quot;From the bitstream to the netlist&quot; <a href="http:&#x2F;&#x2F;citeseerx.ist.psu.edu&#x2F;viewdoc&#x2F;download?doi=10.1.1.117.6043&amp;rep=rep1&amp;type=pdf" rel="nofollow">http:&#x2F;&#x2F;citeseerx.ist.psu.edu&#x2F;viewdoc&#x2F;download?doi=10.1.1.117...</a> &#x2F; <a href="http:&#x2F;&#x2F;code.google.com&#x2F;p&#x2F;debit&#x2F;" rel="nofollow">http:&#x2F;&#x2F;code.google.com&#x2F;p&#x2F;debit&#x2F;</a></text></comment> | <story><title>How FPGAs work, and why you'll buy one</title><url>http://www.yosefk.com/blog/how-fpgas-work-and-why-youll-buy-one.html</url></story><parent_chain><item><author>_yosefk</author><text>It&#x27;s genuinely complicated; if Xilinx could disrupt Altera by making easier-to-use tools, it would. (In fact it tries with AutoESL.)<p>I hope to explain why FPGA tooling is intrinsically hard in my next write-up.</text></item><item><author>alok-g</author><text>I seriously want someone to disrupt FPGA tooling.<p>In my understanding the tooling for FPGAs has been made purposefully complicated to achieve a lock-in. The issue is that adoption of new tooling practically requires compatibility with the existing tools from the two key market holders, and I do not think even they themselves could do that anymore.</text></item><item><author>schabernakk</author><text>&gt; I&#x27;m continually surprised by how few software engineers in industry spend the time to pick up HDL and FPGA programming in general.<p>unsufficient tooling? I did some programming with FPGAs once and it seemed the best option I had for programming was proprietary software by altera (quartus). I never got debugging to work or perhaps I did and I didnt understand the stuff it was showing me (I am no hardware guy)<p>My impression was that an eclipse-like ide, perhaps with a built in HW simulator would make things A LOT easier, especially for beginners like me. Of course this could be completely unrealistic and impractical for hardware design in which case I will show myself out.</text></item><item><author>bhb916</author><text>This is a solid article. I&#x27;m continually surprised by how few software engineers in industry spend the time to pick up HDL and FPGA programming in general. 
In my mind, it is an easy way to expand your breadth of knowledge and make you a touch more valuable to future employers. They say that when all you have is a hammer, everything looks like a nail and I&#x27;m certainly inflicted with that same disease as I see the utility of FPGAs everywhere I look. Prices have plummeted while densities have skyrocketed. A simple $25 part gets you quite a bit of fabric and some $90 eval hardware will give you a sweet little platform. [1]<p>With that said, since I began working with them there have been two &quot;Holy Grails&quot; of FPGA design: (1) Partial Reconfiguration and (2) High Level Synthesis.<p>The first, Partial Reconfiguration, has been more-or-less solved although the tools have a long way to go. One current design I&#x27;m working on loads it&#x27;s PCIe endpoint and DDR3 controller first, establishes communication with the application running on the host PC, then based on user input loads the rest of the FPGA.<p>The second, High Level Synthesis, isn&#x27;t here yet. The goal is to turn a company&#x27;s vast army of software engineers into FPGA programmers overnight. A worthy cause. Every foray into this field has failed (although the jury is still out on Xilinx&#x27;s purchase of autoESL) Honestly, I&#x27;m not sure it will ever get there. The point of optimized, custom hardware is to make use of it. Abstracting it all away seems counterproductive, not to mention very hard.<p>[1] <a href="http:&#x2F;&#x2F;www.xilinx.com&#x2F;products&#x2F;boards-and-kits&#x2F;AES-S6MB-LX9.htm" rel="nofollow">http:&#x2F;&#x2F;www.xilinx.com&#x2F;products&#x2F;boards-and-kits&#x2F;AES-S6MB-LX9....</a></text></item></parent_chain><comment><author>mjn</author><text>I&#x27;m waiting for Chuck Moore to release 2kb of Forth that he claims replaces the only parts you need...</text></comment> |
2,541,238 | 2,541,297 | 1 | 2 | 2,540,909 | train | <story><title>Show HN: New platform for finding work - $2000 project minimum</title><url>http://www.codeyouridea.com/coders/</url><text></text></story><parent_chain><item><author>plusbryan</author><text>Design is really important to me when deciding where to take my business. I love the idea, but the web site could use some serious design love.<p>For instance: That block of text might be good for SEO, but no one will ever read it.</text></item></parent_chain><comment><author>plusbryan</author><text>Also - design for your target customer. If non-technical business owners are your customer, go with a design that makes them feel welcome. Terminal icon? Blocky font? That tells me this site is for developers.</text></comment> | <story><title>Show HN: New platform for finding work - $2000 project minimum</title><url>http://www.codeyouridea.com/coders/</url><text></text></story><parent_chain><item><author>plusbryan</author><text>Design is really important to me when deciding where to take my business. I love the idea, but the web site could use some serious design love.<p>For instance: That block of text might be good for SEO, but no one will ever read it.</text></item></parent_chain><comment><author>latortuga</author><text>Other small nitpicks:<p>- The footer floats in the middle of the terms of service page, covering up text<p>- In the pricing pop up, "Why we charge for CodeYourIdea.com?" is not a real sentence. Eliminate the question mark or add a "do".</text></comment> |
23,993,443 | 23,992,151 | 1 | 2 | 23,983,974 | train | <story><title>Why Do People Stay Poor? [pdf]</title><url>http://sticerd.lse.ac.uk/dps/eopp/eopp67.pdf</url></story><parent_chain><item><author>jfrunyon</author><text>Easy answer: because expenses are higher for poor people.<p>Instead of buying a home, the lower class rent their whole life. Instead of their daddy giving them a home, the middle class pay interest on a mortgage most of their life.<p>Instead of buying a 20-pack of $necessary_product that costs $20 and will last for months, the lower class buy a 2-pack that costs $5 and lasts a couple weeks because they need the $15 to pay rent. And then they have to spend more money on gas (or bus fare, or a taxi, or time and energy walking, or ...) to get to the store again sooner.<p>Instead of the employer providing any supplies that are needed, the lower class have to buy their own uniforms or tools.<p>Instead of a 5% interest rate on their loan, they have a 25% interest rate.<p>Because they don&#x27;t make enough money to set some aside as savings, they have to pay late fees or NSF fees or overdraft fees.</text></item></parent_chain><comment><author>Balgair</author><text>Slight aside: Poor people aren&#x27;t trying to maximize profits. They are trying to minimize risk.<p>It&#x27;s a small quibble, but it helps explain the different behavior of the wealthy and the poor. More importantly, it explains the <i>mindset</i> that changes with wealth.<p>Bret Devereaux of the <i>acoup</i> blog had a good piece on this in reference to medieval farming estates of lesser nobles and the abject poor [0]:<p>&quot;I led in with all of that risk and vulnerability because without it just about nothing these farmers do makes a lot of sense; once you understand that they are managing risk, everything falls into place.<p>Most modern folks think in terms of profit maximization; we take for granted that we will still be alive tomorrow and instead ask how we can maximize how much money we have then (this is, admittedly, a lot less true for the least fortunate among us). We thus tend to favor efficient systems, even if they are vulnerable. From this perspective, ancient farmers – as we’ll see – look very silly, but this is a trap, albeit one that even some very august ancient scholars have fallen into. These are not irrational, unthinking people; they are poor, not stupid – those are not the same things.<p>But because these households wobble on the edge of disaster continually, that changes the calculus. These small subsistence farmers generally seek to minimize risk, rather than maximize profits. After all, improving yields by 5% doesn’t mean much if everyone starves to death in the third year because of a tail-risk that wasn’t mitigated. Moreover, for most of these farmers, working harder and farming more generally doesn’t offer a route out of the small farming class – these societies typically lack that kind of mobility (and also generally lack the massive wealth-creation potential of industrial power which powers that kind of mobility). Consequently, there is little gain to taking risks and much to lose. 
So as we’ll see, these farmers generally sacrifice efficiency for greater margins of safety, every time.&quot;<p>[0] <a href="https:&#x2F;&#x2F;acoup.blog&#x2F;2020&#x2F;07&#x2F;24&#x2F;collections-bread-how-did-they-make-it-part-i-farmers&#x2F;" rel="nofollow">https:&#x2F;&#x2F;acoup.blog&#x2F;2020&#x2F;07&#x2F;24&#x2F;collections-bread-how-did-they...</a></text></comment> | <story><title>Why Do People Stay Poor? [pdf]</title><url>http://sticerd.lse.ac.uk/dps/eopp/eopp67.pdf</url></story><parent_chain><item><author>jfrunyon</author><text>Easy answer: because expenses are higher for poor people.<p>Instead of buying a home, the lower class rent their whole life. Instead of their daddy giving them a home, the middle class pay interest on a mortgage most of their life.<p>Instead of buying a 20-pack of $necessary_product that costs $20 and will last for months, the lower class buy a 2-pack that costs $5 and lasts a couple weeks because they need the $15 to pay rent. And then they have to spend more money on gas (or bus fare, or a taxi, or time and energy walking, or ...) to get to the store again sooner.<p>Instead of the employer providing any supplies that are needed, the lower class have to buy their own uniforms or tools.<p>Instead of a 5% interest rate on their loan, they have a 25% interest rate.<p>Because they don&#x27;t make enough money to set some aside as savings, they have to pay late fees or NSF fees or overdraft fees.</text></item></parent_chain><comment><author>redorb</author><text>This is a great definition of what I consider to be the &#x27;poor tax&#x27;..<p>Not to mention if they ever get un-poor they don&#x27;t have the knowledge to &#x27;build wealth&#x27; until they start slowly learning it or from a mentor.<p>We would better off teaching &#x27;home finances, interest and banking&#x27; classes instead of Algebra 3 or geometry 2 (for 99% of people)</text></comment> |
40,664,406 | 40,660,590 | 1 | 3 | 40,644,366 | train | <story><title>Medieval game pieces emerge from the ruins of a German castle</title><url>https://news.artnet.com/art-world/medieval-game-pieces-emerge-from-the-ruins-of-a-mysterious-german-castle-2497815</url></story><parent_chain><item><author>zwieback</author><text>I grew up not far from this site but have been living in Oregon for 30 years. One of the things that still strikes me when I go back to visit is just how much old stuff was around me in my childhood. Over there it&#x27;s like &quot;oh, let&#x27;s put a nail salon in this 400 year old building&quot;. Here it&#x27;s like: &quot;oh, there&#x27;s a 100 year old barn, we must turn it into a heritage site.&quot;</text></item></parent_chain><comment><author>justinlloyd</author><text>Grew up in a 15th century farm house that was built decades before Columbus set sail and re-discovered the Americas. A few years ago the wife and I were contemplating buying a property in England where the realtor enthusiastically explained that &quot;the plumbing was modernized in the 16th century.&quot;</text></comment> | <story><title>Medieval game pieces emerge from the ruins of a German castle</title><url>https://news.artnet.com/art-world/medieval-game-pieces-emerge-from-the-ruins-of-a-mysterious-german-castle-2497815</url></story><parent_chain><item><author>zwieback</author><text>I grew up not far from this site but have been living in Oregon for 30 years. One of the things that still strikes me when I go back to visit is just how much old stuff was around me in my childhood. Over there it&#x27;s like &quot;oh, let&#x27;s put a nail salon in this 400 year old building&quot;. Here it&#x27;s like: &quot;oh, there&#x27;s a 100 year old barn, we must turn it into a heritage site.&quot;</text></item></parent_chain><comment><author>philk10</author><text>Yeh, moved to Michigan from England and was taken to the oldest pub in the city, 120 years old. My local pub used to be a hunting lodge used by Henry VIII</text></comment> |
32,537,539 | 32,537,406 | 1 | 3 | 32,537,004 | train | <story><title>Ask HN: Anyone else feel trapped in FANG? How did you get out?</title><text>Hi HN,<p>I’ve been working at a FANG for 2 years now and I feel like my career goals and interests are not aligned with where I’m at. Outside of my FANG I was a go-getter who loves learning new things and taking on hard problems. Inside my company I have tried over the past 2 years to propose different solutions to hard problems and I just get blown off.<p>I’m good at my job I already got promoted in 1 year and am moving up. I am making more money than I expected, but at work I’m not able to express myself creatively or do things outside the box. I just get assigned straightforward work from my manager that the product managers and “leadership” assign to our team with barely any input on the overall project or ability to propose new projects.<p>I feel like if I stay here I will just get stuck in a cycle of never accomplishing my goals but I’m scared to move teams or companies or build a startup because I have a good manager, a good comp and my job isn’t that stressful.<p>I could become more involved at work in building paper reading groups or other kinds of side projects to creatively express myself at work but I don’t want to be the person whose life revolves around their FANG.<p>Anyone else feel like this? What did you do?</text></story><parent_chain><item><author>quacked</author><text>My advice would be to put your head down, live like a monk, and save&#x2F;invest money. Get promoted as far as you can, but don&#x27;t burn yourself out. If you see yourself having a family someday, spend your free time on finding a spouse.<p>Five years in, you&#x27;ll have a huge pile of cash and an insane chunk on your resume that will almost certainly guarantee you an interview wherever you want. Once you hit that point, take some time off and do literally anything. Start a startup, be a contractor, get a new degree.<p>You have stumbled onto the sort of prosperity (easy low-stress job) that 99.95% of the world literally dreams about. To throw it away because you&#x27;re not &quot;creatively expressing yourself&quot; would be foolish.<p>This advice doesn&#x27;t apply if you&#x27;re a relentlessly competitive founder type like Gates or Bezons, but if you were of that mentality, I doubt you&#x27;d be having the same troubles you are.</text></item></parent_chain><comment><author>enra</author><text>The downside of this advice is while you make money and your experience will look good on the resume, you might not have learned that much in the end and in some ways wasted your time and life.<p>Also if you want to go back to earlier stage companies or different places than FAANG, the FAANG experience is not some golden ticket, but can be also a potential flag to those who know how things work. Basically a flag to verify can this person still build things outside of bigco or are they just a professional coaster at this point. It can be surprising to people that even though you did work at Facebook or Google, you are not automatically a good fit for some other company.<p>My personal way of thinking has been always to join companies or pick jobs where I learn and that pushes me in some way, especially while I&#x27;m still young. I also did work at FAANG but the main thing I learned was how manage politics. 
Also within FAANG there are always teams or roles where you are actually challenged, but the majority of the roles are fairly easy.<p>Ideally at each role you learn something that improves and expands your skills. Each role prepares you for your next role or the step you want to take in your career. Taking home a paycheck and being a professional coaster doesn&#x27;t help you make progress or make your life interesting (not to say that work is all your life, but it is part of it).</text></comment>
29,802,209 | 29,802,213 | 1 | 2 | 29,800,730 | train | <story><title>Ask HN: How to continue to be gracious about the good fortune of rich friends?</title><text>I have no money. I do have rich friends who are getting richer and richer every year.<p>I try to be gracious and happy for their good fortune.<p>However it makes me depressed and angry and envious.<p>One friend told me a few days ago his house went up in value $1,000,000 in one year, at which point he sold it.<p>I visited my cousin who is a fabulous person and has a gorgeous house freshly renovated and extended and a new pool put it.<p>All around me my peers are becoming very wealthy.<p>And I’m at the bottom with nothing.<p>I try to be happy for them and gracious and to listen and enthuse whilst they tell me of their good fortune or show me around their stunning houses. And afterwards I feel smashed with depression as I go back to my shit rental house that I’m ashamed of.<p>Good people, great friends, and seeing them brings me down.<p>Rich people aren’t aware that their tales of success make people like me feel bad. They shouldn’t have to be aware of that or hold themselves back. As a good friend I should feel happy for them, and I pretend to, but inside it makes me feel terrible.<p>If you’re commenting on this thread and offering advice, I encourage you add the context of whether you are one of those who have money or not.</text></story><parent_chain><item><author>randycupertino</author><text>I worked in wealth management and LIVED the hedonistic treadmill firsthand.<p>Everyone was jealous of the next level up. I was making 300k and my high school hometown friends were like &quot;holy cow, you&#x27;re so lucky this is amazing, you have your own apartment&quot; meanwhile I was annoyed I couldn&#x27;t keep up financially with my trust fund boyfriend who had $3 million a year to piss away with random trips to Bermuda. My CFO was jealous of the Principal who could take netjets and didn&#x27;t have to fly first class everywhere. The NetJets guy was jealous of the billionaire principal who had his own jet. That billionaire was jealous of the main money dude who had family money inherited from the crusades. They all fought with their wives over private school tuition and horses. Everyone drank, did tons of drugs, had dramatic affairs and fought like cats and dogs with their families.<p>I left finance and went into healthcare and realized I&#x27;m pretty damn happy living a simple life. I kept a $1500 belt I bought from Henry Bendels that&#x27;s incredibly ugly as a reminder of dumb decisions and having too much money to piss away on stupid crap!<p>Read Blood Diamonds: Tracing the Deadly Path of the World&#x27;s Most Precious Stones
Book by Greg Campbell. Reading that made me realize how our planet has finite resources and I just I wanted to cleave the my own consumption habits so stopped needless shopping for &quot;fun&quot; and started being a stubborn bastard about driving my 12 year old Hyundai into the ground. It&#x27;s not much but it&#x27;s my own private rebellion against the gaping maw of endless consumerism.<p>Worship your family, friends, love ones, health, music, doing things that make you feel alive, shared experiences and nature over shiny toys and stuff that just sits around collecting dust and looking pretty.<p>At the end of the day, we&#x27;re all the same food for worms anyways no matter our net worth. Enjoy your friendships, realize they probably have their own internal struggles and problems they&#x27;re dealing with and try to be there for them in whatever way you can!</text></item></parent_chain><comment><author>rl3</author><text>&gt;<i>That billionaire was jealous of the main money dude who had family money inherited from the crusades.</i><p>Illuminati confirmed.<p>On a serious note, being born into wealth is a travesty of sorts. People who are tend to experience a reality that is far removed from that of the average person, and as such can&#x27;t identify nor relate. They are robbed of a certain type of life. Yet, enormous wealth confers power that can be exercised over common people—despite such an upbringing rendering one ill-suited to exercise said power. It&#x27;s a timeless problem, I suppose.</text></comment> | <story><title>Ask HN: How to continue to be gracious about the good fortune of rich friends?</title><text>I have no money. I do have rich friends who are getting richer and richer every year.<p>I try to be gracious and happy for their good fortune.<p>However it makes me depressed and angry and envious.<p>One friend told me a few days ago his house went up in value $1,000,000 in one year, at which point he sold it.<p>I visited my cousin who is a fabulous person and has a gorgeous house freshly renovated and extended and a new pool put it.<p>All around me my peers are becoming very wealthy.<p>And I’m at the bottom with nothing.<p>I try to be happy for them and gracious and to listen and enthuse whilst they tell me of their good fortune or show me around their stunning houses. And afterwards I feel smashed with depression as I go back to my shit rental house that I’m ashamed of.<p>Good people, great friends, and seeing them brings me down.<p>Rich people aren’t aware that their tales of success make people like me feel bad. They shouldn’t have to be aware of that or hold themselves back. As a good friend I should feel happy for them, and I pretend to, but inside it makes me feel terrible.<p>If you’re commenting on this thread and offering advice, I encourage you add the context of whether you are one of those who have money or not.</text></story><parent_chain><item><author>randycupertino</author><text>I worked in wealth management and LIVED the hedonistic treadmill firsthand.<p>Everyone was jealous of the next level up. I was making 300k and my high school hometown friends were like &quot;holy cow, you&#x27;re so lucky this is amazing, you have your own apartment&quot; meanwhile I was annoyed I couldn&#x27;t keep up financially with my trust fund boyfriend who had $3 million a year to piss away with random trips to Bermuda. My CFO was jealous of the Principal who could take netjets and didn&#x27;t have to fly first class everywhere. 
The NetJets guy was jealous of the billionaire principal who had his own jet. That billionaire was jealous of the main money dude who had family money inherited from the crusades. They all fought with their wives over private school tuition and horses. Everyone drank, did tons of drugs, had dramatic affairs and fought like cats and dogs with their families.<p>I left finance and went into healthcare and realized I&#x27;m pretty damn happy living a simple life. I kept a $1500 belt I bought from Henry Bendels that&#x27;s incredibly ugly as a reminder of dumb decisions and having too much money to piss away on stupid crap!<p>Read Blood Diamonds: Tracing the Deadly Path of the World&#x27;s Most Precious Stones
Book by Greg Campbell. Reading that made me realize how our planet has finite resources and I just I wanted to cleave the my own consumption habits so stopped needless shopping for &quot;fun&quot; and started being a stubborn bastard about driving my 12 year old Hyundai into the ground. It&#x27;s not much but it&#x27;s my own private rebellion against the gaping maw of endless consumerism.<p>Worship your family, friends, love ones, health, music, doing things that make you feel alive, shared experiences and nature over shiny toys and stuff that just sits around collecting dust and looking pretty.<p>At the end of the day, we&#x27;re all the same food for worms anyways no matter our net worth. Enjoy your friendships, realize they probably have their own internal struggles and problems they&#x27;re dealing with and try to be there for them in whatever way you can!</text></item></parent_chain><comment><author>deanmoriarty</author><text>&gt; Worship your family, friends, love ones, health, music, doing things that make you feel alive, shared experiences and nature over shiny toys and stuff that just sits around collecting dust and looking pretty.<p>Agree but let’s not forget that money can give you wonderful experiences even if you are not materialistic, including the ability to not work anymore. Those are the people I am jealous of.</text></comment> |
21,112,235 | 21,111,952 | 1 | 2 | 21,104,341 | train | <story><title>How to Succeed as a Poor Programmer</title><url>https://psgraphics.blogspot.com/2019/09/how-to-succeed-as-poor-programmer.html?m=1</url></story><parent_chain><item><author>overgard</author><text>The &quot;Avoid Learning Anything New&quot; advice is insane. (Well, practically all of it is, but that one really stands out).<p>I think the exact opposite advice is far better: never assume the way you know how to do something is best, and always be on the lookout for what others are doing that might be better.<p>Here&#x27;s the thing about learning: the more you learn, the easier learning the next thing becomes. You form links, insights on relationships, new concepts that apply to old things, etc. If this guy thinks learning is such a burden, it&#x27;s probably because he refuses to learn anything in the first place.<p>If he thinks he&#x27;s a poor programmer, it probably has less to do with innate ability and 100% to do with the attitudes he gives as &quot;advice&quot; in this blog.</text></item></parent_chain><comment><author>bjoli</author><text>Let me quote a friend that ranted about this very subject about a week ago:<p>I am of extremely average intelligence. On the absolute top of the bell curve, looking down at the rest of you. I am extremely tired of the whole attitude that _anything is possible if you just put your mind to it_. I put my mind to it. That is how I got through CS in university. I worked at least twice as hard as the people that said _just put your mind to it_ and then cruised through even the hardest course without ever studying more than an hour a day and then doing an all-nighter to get an essay in.<p>Working hard as hell to understand what some low-effort-high-intelligence &quot;just put your mind to it&quot; brat understood as soon as it left the professor&#x27;s mouth is hard. It just shows me again and again that I drew the short straw in the gene lottery. I have to work a lot harder to do the same things.<p>Being told to just try harder by someone who&#x27;s idea of intellectual work is &quot;wherever my curiosity takes me&quot; is really friggin taxing on my self esteem.<p>----------<p>Now, he was drunk and angry, but there is something to it. I think both him and I agree with you on a matter of principle, but it is more to it than that.</text></comment> | <story><title>How to Succeed as a Poor Programmer</title><url>https://psgraphics.blogspot.com/2019/09/how-to-succeed-as-poor-programmer.html?m=1</url></story><parent_chain><item><author>overgard</author><text>The &quot;Avoid Learning Anything New&quot; advice is insane. (Well, practically all of it is, but that one really stands out).<p>I think the exact opposite advice is far better: never assume the way you know how to do something is best, and always be on the lookout for what others are doing that might be better.<p>Here&#x27;s the thing about learning: the more you learn, the easier learning the next thing becomes. You form links, insights on relationships, new concepts that apply to old things, etc. 
If this guy thinks learning is such a burden, it&#x27;s probably because he refuses to learn anything in the first place.<p>If he thinks he&#x27;s a poor programmer, it probably has less to do with innate ability and 100% to do with the attitudes he gives as &quot;advice&quot; in this blog.</text></item></parent_chain><comment><author>geowwy</author><text>He is right though – new tech nearly always disappears.<p>Example: Learning CoffeeScript was a total waste of time. Learning JQuery helped me for a few years, but now JQuery is basically useless to me.<p>Based on past experience, I strongly suspect the same will happen with React, Rust and a bunch of other new exciting tech. There are countless examples besides the ones I mentioned.<p>But on the other hand, the time I put into mastering SQL or Unix will probably continue benefiting me for the rest of my career. Even C will continue to benefit me, even though it&#x27;ll only ever be a small part of my job.<p>So I would modify his rule: <i>Avoid Learning Anything New – Learn Something Old</i></text></comment> |
32,345,004 | 32,344,432 | 1 | 2 | 32,342,550 | train | <story><title>Is DALL-E 2 ‘gluing things together’ without understanding their relationships?</title><url>https://www.unite.ai/is-dall-e-2-just-gluing-things-together-without-understanding-their-relationships/</url></story><parent_chain><item><author>gojomo</author><text>There&#x27;s a lot of people who make this same argument – DALLE&#x2F;GPT&#x2F;etc is just a &#x27;mirror&#x27; or &#x27;parrot&#x27; – but they rarely make convincing supporting arguments.<p>They just assert it as axiomatic, whistling-past all the ways that they themselves – unless they believe in supernatural mechanisms – are also the product of a finite physical-world system (a biological mind) and a finite amount of prior training input (their life so far).<p>I&#x27;m beginning to wonder if the entities making this argument are conscious! It seems they don&#x27;t truly understand the issues in question, in a way they could articulate recognizably to others. They&#x27;re just repeating comforting articles-of-faith that others have programmed into them.</text></item><item><author>blooperdeoo</author><text>“ It actually knows what a giraffe is.”<p>No. You know what a giraffe is, Dall•E simply creates pixel groups which correlate to the text pattern you submitted.<p>Watching people discuss a logical mirror scares me that most people are not themselves conscious.</text></item><item><author>orlp</author><text>The most important thing I think DALL-E shows is that it has a model of our world and culture. It&#x27;s not intelligence, but it is knowledge.<p>Google can give you endless pictures of giraffes if you search for it. But it can only connect you to what exists. It doesn&#x27;t know things, it knows OF things.<p>DALL-E has knowledge of the concept of a giraffe, and can synthesize an endless amount of never-before seen giraffes for you. It actually knows what a giraffe is.</text></item><item><author>vannevar</author><text>I&#x27;m sure it is, but &quot;gluing things together&quot; coherently in response to a text prompt is a stupendous achievement. It&#x27;s not AGI, but it&#x27;s miles ahead of where we were even a few years ago and opens the door to automating a class of jobs I don&#x27;t think anyone back then believed could be automated, short of AGI.</text></item></parent_chain><comment><author>bglazer</author><text>I can give a random string of letters as a prompt and DALLE will generate coherent images based on that. To me, that is as clear a signal as any that there is no reasoning or even a consistent world model embodied in DALLE. It’s simply a high dimensional latent mapping between characters and pixels. Like OP said, that is a stupendous achievement, but it is just a very complex and impressive mirror. 
If it wasn’t just a mapping between characters and pixels, and instead DALLE had intelligence that “understood” the symbols it manipulated, then I would expect it to generate nothing, or white noise in response to random letters.</text></comment> | <story><title>Is DALL-E 2 ‘gluing things together’ without understanding their relationships?</title><url>https://www.unite.ai/is-dall-e-2-just-gluing-things-together-without-understanding-their-relationships/</url></story><parent_chain><item><author>gojomo</author><text>There&#x27;s a lot of people who make this same argument – DALLE&#x2F;GPT&#x2F;etc is just a &#x27;mirror&#x27; or &#x27;parrot&#x27; – but they rarely make convincing supporting arguments.<p>They just assert it as axiomatic, whistling-past all the ways that they themselves – unless they believe in supernatural mechanisms – are also the product of a finite physical-world system (a biological mind) and a finite amount of prior training input (their life so far).<p>I&#x27;m beginning to wonder if the entities making this argument are conscious! It seems they don&#x27;t truly understand the issues in question, in a way they could articulate recognizably to others. They&#x27;re just repeating comforting articles-of-faith that others have programmed into them.</text></item><item><author>blooperdeoo</author><text>“ It actually knows what a giraffe is.”<p>No. You know what a giraffe is, Dall•E simply creates pixel groups which correlate to the text pattern you submitted.<p>Watching people discuss a logical mirror scares me that most people are not themselves conscious.</text></item><item><author>orlp</author><text>The most important thing I think DALL-E shows is that it has a model of our world and culture. It&#x27;s not intelligence, but it is knowledge.<p>Google can give you endless pictures of giraffes if you search for it. But it can only connect you to what exists. It doesn&#x27;t know things, it knows OF things.<p>DALL-E has knowledge of the concept of a giraffe, and can synthesize an endless amount of never-before seen giraffes for you. It actually knows what a giraffe is.</text></item><item><author>vannevar</author><text>I&#x27;m sure it is, but &quot;gluing things together&quot; coherently in response to a text prompt is a stupendous achievement. It&#x27;s not AGI, but it&#x27;s miles ahead of where we were even a few years ago and opens the door to automating a class of jobs I don&#x27;t think anyone back then believed could be automated, short of AGI.</text></item></parent_chain><comment><author>meroes</author><text>And don’t blame others for not finding this satisfactory either. Many many mathematicians think abstract objects exist outside of physical reality. Chomsky says “physical” hasn’t been well defined for 200 years. And finite physical processes do not seem adequate to explain the infinite character of language and mathematical infinity. Or if they are, then go inform the mathematical realists infinity isn’t real because all proofs are finite yet Godel believed in actual infinity.</text></comment> |
30,918,613 | 30,911,308 | 1 | 3 | 30,909,076 | train | <story><title>Exchange electricity prices in France go through the roof</title><url>https://twitter.com/energy_charts/status/1510916995872702464</url></story><parent_chain><item><author>Archelaos</author><text>&gt; Now for some extra fun, overlay this with graphs over CO2 emissions.<p>Some other fun: Overlay this with a graph of nuclear waste production.</text></item><item><author>belorn</author><text>France (One year graph): <a href="https:&#x2F;&#x2F;www.epexspot.com&#x2F;en&#x2F;market-data?market_area=FR&amp;trading_date=2022-04-04&amp;delivery_date=2022-04-05&amp;underlying_year=&amp;modality=Auction&amp;sub_modality=DayAhead&amp;product=60&amp;data_mode=graph&amp;period=year" rel="nofollow">https:&#x2F;&#x2F;www.epexspot.com&#x2F;en&#x2F;market-data?market_area=FR&amp;tradi...</a><p>Germany (One year graph): <a href="https:&#x2F;&#x2F;www.epexspot.com&#x2F;en&#x2F;market-data?market_area=DE-LU&amp;trading_date=2022-04-04&amp;delivery_date=2022-04-05&amp;underlying_year=&amp;modality=Auction&amp;sub_modality=DayAhead&amp;product=60&amp;data_mode=graph&amp;period=year" rel="nofollow">https:&#x2F;&#x2F;www.epexspot.com&#x2F;en&#x2F;market-data?market_area=DE-LU&amp;tr...</a><p>Now for some extra fun, overlay this with graphs over CO2 emissions.</text></item></parent_chain><comment><author>JohnHaugeland</author><text>It&#x27;s stupid to fret about nuclear waste.<p>All plants, worldwide over all human history, for 20% of the power production of the entire planet, 100% green, have produced less than one half of one football field of barrels.<p>We can stop climate change on one field of barrels every 31 years<p>Thanks to mining byproducts, nuclear produces less and lower level radioactive waste than solar or wind<p>Zero humans in history have died from nuclear waste<p>It&#x27;s just something scared people say to sound like they know something important</text></comment> | <story><title>Exchange electricity prices in France go through the roof</title><url>https://twitter.com/energy_charts/status/1510916995872702464</url></story><parent_chain><item><author>Archelaos</author><text>&gt; Now for some extra fun, overlay this with graphs over CO2 emissions.<p>Some other fun: Overlay this with a graph of nuclear waste production.</text></item><item><author>belorn</author><text>France (One year graph): <a href="https:&#x2F;&#x2F;www.epexspot.com&#x2F;en&#x2F;market-data?market_area=FR&amp;trading_date=2022-04-04&amp;delivery_date=2022-04-05&amp;underlying_year=&amp;modality=Auction&amp;sub_modality=DayAhead&amp;product=60&amp;data_mode=graph&amp;period=year" rel="nofollow">https:&#x2F;&#x2F;www.epexspot.com&#x2F;en&#x2F;market-data?market_area=FR&amp;tradi...</a><p>Germany (One year graph): <a href="https:&#x2F;&#x2F;www.epexspot.com&#x2F;en&#x2F;market-data?market_area=DE-LU&amp;trading_date=2022-04-04&amp;delivery_date=2022-04-05&amp;underlying_year=&amp;modality=Auction&amp;sub_modality=DayAhead&amp;product=60&amp;data_mode=graph&amp;period=year" rel="nofollow">https:&#x2F;&#x2F;www.epexspot.com&#x2F;en&#x2F;market-data?market_area=DE-LU&amp;tr...</a><p>Now for some extra fun, overlay this with graphs over CO2 emissions.</text></item></parent_chain><comment><author>nicolaslem</author><text>Pick your poison, the former is one of the biggest threats to humanity right now, the latter is a footnote in our list of problems.</text></comment> |
8,826,371 | 8,826,000 | 1 | 2 | 8,825,375 | train | <story><title>A Dangerous and Costly Photo in Japan</title><url>https://news.vice.com/article/best-of-vice-news-2014-this-may-be-the-most-dangerous-and-most-costly-photo-in-japan</url></story><parent_chain></parent_chain><comment><author>jpatokal</author><text>So a dash of soy sauce for anybody reading this article without some familiarity with the background: Jake Adelstein is a reporter who shot to fame with his book &quot;Tokyo Vice&quot;, which is about being the first Western reporter to cover the crime beat at a Japanese newspaper, not a small achievement. However, in the five years since, Jake&#x27;s articles have been basically all yakuza all the time, and tend to imply that the yakuza are everywhere in Japanese society, pulling strings all the time.<p>However, in actuality, the yakuza are slowly fading away. The police started clamping down pretty hard in 1992 and have kept up the pressure ever since, with many traditional yakuza businesses like protection money, extortion becoming more and more difficult, and increasing societal pressure on yakuza members themselves. Consequently, yakuza revenues and membership have plummeted, and infighting between groups fighting for slices of the remaining cake (drugs, gambling, illegal prostitution, etc) isn&#x27;t helping. Here&#x27;s a summary, cowritten oddly enough by Jake himself, although he does somewhat unconvincingly claim that the yakuza are just going underground in response:<p><a href="http://www.thedailybeast.com/articles/2014/03/09/where-have-japan-s-yakuza-gone.html" rel="nofollow">http:&#x2F;&#x2F;www.thedailybeast.com&#x2F;articles&#x2F;2014&#x2F;03&#x2F;09&#x2F;where-have-...</a></text></comment> | <story><title>A Dangerous and Costly Photo in Japan</title><url>https://news.vice.com/article/best-of-vice-news-2014-this-may-be-the-most-dangerous-and-most-costly-photo-in-japan</url></story><parent_chain></parent_chain><comment><author>sdoering</author><text>Why is it, that whenever there is a FIFA WC or Olympics, that there seems to be ties to organized crime, poor working conditions, bribery and so on, but nothing ever happens, the games go through (every time) and everybody is watching, what is just a show of &quot;panem et circenses&quot;.<p>Why do &quot;modern&quot; societies accept these kinds of festivities, with all what is surrounding them?</text></comment> |
14,045,920 | 14,045,476 | 1 | 2 | 14,043,631 | train | <story><title>Growing Ubuntu for Cloud and IoT, Rather Than Phone and Convergence</title><url>https://insights.ubuntu.com/2017/04/05/growing-ubuntu-for-cloud-and-iot-rather-than-phone-and-convergence/</url></story><parent_chain><item><author>gshulegaard</author><text>I may be a minority, but I am very saddened by this. Not because I have any particular love for Unity, but rather I share Mark&#x27;s conviction that convergence is the future.<p>Love or hate it but Unity was IMO the best shot we had at getting an open source unified phone, tablet and desktop experience...and now this is effectively Canonical not only shutting down Unity, but refocusing efforts away from convergence and towards more traditional market segments. I mourn the death of this innovative path.<p>That said, hopefully this convergence with GNOME will eventually lead back to convergence...but for now that dream is dead it would seem.</text></item></parent_chain><comment><author>daveguy</author><text><i>data convergence</i> is the future, not <i>interface convergence</i>. I think they made a serious mistake conflating the two.<p>You should be able to shift your view of a document from a desktop to a laptop, but that doesn&#x27;t mean the fundamental interface from one should be shoe-horned into the other.<p>That could mean a phone that is docked, but when you have 10x the screen real estate, a keyboard and a mouse the interface should be different when docked vs when not docked.</text></comment> | <story><title>Growing Ubuntu for Cloud and IoT, Rather Than Phone and Convergence</title><url>https://insights.ubuntu.com/2017/04/05/growing-ubuntu-for-cloud-and-iot-rather-than-phone-and-convergence/</url></story><parent_chain><item><author>gshulegaard</author><text>I may be a minority, but I am very saddened by this. Not because I have any particular love for Unity, but rather I share Mark&#x27;s conviction that convergence is the future.<p>Love or hate it but Unity was IMO the best shot we had at getting an open source unified phone, tablet and desktop experience...and now this is effectively Canonical not only shutting down Unity, but refocusing efforts away from convergence and towards more traditional market segments. I mourn the death of this innovative path.<p>That said, hopefully this convergence with GNOME will eventually lead back to convergence...but for now that dream is dead it would seem.</text></item></parent_chain><comment><author>rlpb</author><text>I&#x27;m with you. I&#x27;d like a fully free software stack across my personal devices, and I also thought that Ubuntu had the best chance of achieving this. I think the issue is in a business model that would work for Canonical.<p>OTOH, most of the hard work is done. My Ubuntu Phone mostly works. The stack is complete. I&#x27;d love to see interested people taking that code and keeping it going. I&#x27;d spend spare time on contributing if this were to happen.<p>(I&#x27;m a Canonical employee and Ubuntu developer, but not on the convergence&#x2F;desktop&#x2F;phone side, and my opinions here are my own and not that of my employer)</text></comment> |
34,203,706 | 34,199,531 | 1 | 2 | 34,197,098 | train | <story><title>2022: A Retrospective</title><url>https://godotengine.org/article/2022-retrospective</url></story><parent_chain><item><author>Waterluvian</author><text>Something I learned in 2022 through Godot is why these engines often seem to write their own scripting language. It frustrated me for years: I don’t want to learn your custom language. Just use something that exists!<p>But after spending time making some games for fun and learning, I realised: 90% of making a game isn’t coding. The code is usually just glue, and you aren’t hiring software engineers at crazy tech salaries to do that. You’re empowering your game devs to do it.<p>(Of course there’s a lot of more complex coding in many games, and there’s bindings for other languages when you need that).</text></item></parent_chain><comment><author>still_grokking</author><text>&gt; The code is usually just glue, and you aren’t hiring software engineers at crazy tech salaries to do that.<p>And the result is that there is no other software category that ships so extremely buggy code.<p>The code quality of games is the lowest I&#x27;ve ever seen.<p>Dynamic scripting languages only add to this overall mess…<p>The other problem is: Building a proper language and runtime is a full time job already. Adding this as a &quot;side project&quot; to a game engine (which is a very complex piece of code by itself!) is a big error imho.<p>I hope we see soon some strongly typed static language with very good type inference and a nice, clean, and simple syntax in Godot.<p>Would be than nice if Godot would concentrate on the game engine (and the editor) instead of trying to invent a language on the side also, as Godot as such is really great! One of the nicest programs I&#x27;ve ever used. (Especially compared with the incredibly buggy and terribly &quot;documented&quot; shit show that Unity is).</text></comment> | <story><title>2022: A Retrospective</title><url>https://godotengine.org/article/2022-retrospective</url></story><parent_chain><item><author>Waterluvian</author><text>Something I learned in 2022 through Godot is why these engines often seem to write their own scripting language. It frustrated me for years: I don’t want to learn your custom language. Just use something that exists!<p>But after spending time making some games for fun and learning, I realised: 90% of making a game isn’t coding. The code is usually just glue, and you aren’t hiring software engineers at crazy tech salaries to do that. You’re empowering your game devs to do it.<p>(Of course there’s a lot of more complex coding in many games, and there’s bindings for other languages when you need that).</text></item></parent_chain><comment><author>johnfn</author><text>Agree with this. I&#x27;m a huge proponent of static typing systems and good languages, so seeing Godot create their own language was a huge sticking point for me - I even created a typescript to gdscript compiler! But after a while, I realized that GDScript was remarkably productive, even though it lacks a lot of the bells and whistles that I&#x27;m accustomed to.</text></comment> |
36,775,195 | 36,775,690 | 1 | 2 | 36,774,627 | train | <story><title>Llama 2</title><url>https://ai.meta.com/llama/</url></story><parent_chain><item><author>whimsicalism</author><text>Google&#x27;s model is not as capable as llama-derived models, so I think they would actually benefit from this.<p>&gt; I wouldn&#x27;t be surprised if Amazon does as well.<p>I would - they are not a very major player in this space.<p>TikTok also meets this definition and probably doesn&#x27;t have LLM.</text></item><item><author>stu2b50</author><text>I think more Apple. It&#x27;s not like Google or Microsoft would <i>want</i> to use LLaMA when they have fully capable models themselves. I wouldn&#x27;t be surprised if Amazon does as well.<p>Apple is the big laggard in terms of big tech and complex neural network models.</text></item><item><author>minimaxir</author><text>That&#x27;s an oddly high number for blocking competition. OpenAI&#x27;s ChatGPT hit 100 million MAUs in January, and has gone down since.<p>It&#x27;s essentially a &quot;Amazon and Google don&#x27;t use this k thx.&quot;</text></item><item><author>whimsicalism</author><text>Key detail from release:<p>&gt; If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.<p>Looks like they are trying to block out competitors, it&#x27;s the perfect commoditize your complement but don&#x27;t let your actual competitors try to eke out any benefit from it.</text></item></parent_chain><comment><author>chaxor</author><text>Google has far better models than llama based models. They just simply don&#x27;t put them facing the public.<p>It is pretty ridiculous that they essentially just set a marketing team with no programming experience to write Bard, but that shouldn&#x27;t fool anyone into believing they don&#x27;t have capable models in Google.<p>If Deepmind were to actually provide what they have in some usable form, it would likely be quite good. Despite being the first to publish on RLHF (just right before OpenAI) and bring the idea to the academic sphere, they mostly work in areas tangential to &#x27;just chatbots&#x27; (e.g. how to improve science with novel GNNs, etc). However, they&#x27;re mostly academics, so they aren&#x27;t set on making products, doing the janitorial work of fancy UIs and web marketing, and making things easy to use, like much of the rest of the field.</text></comment> | <story><title>Llama 2</title><url>https://ai.meta.com/llama/</url></story><parent_chain><item><author>whimsicalism</author><text>Google&#x27;s model is not as capable as llama-derived models, so I think they would actually benefit from this.<p>&gt; I wouldn&#x27;t be surprised if Amazon does as well.<p>I would - they are not a very major player in this space.<p>TikTok also meets this definition and probably doesn&#x27;t have LLM.</text></item><item><author>stu2b50</author><text>I think more Apple. It&#x27;s not like Google or Microsoft would <i>want</i> to use LLaMA when they have fully capable models themselves. 
I wouldn&#x27;t be surprised if Amazon does as well.<p>Apple is the big laggard in terms of big tech and complex neural network models.</text></item><item><author>minimaxir</author><text>That&#x27;s an oddly high number for blocking competition. OpenAI&#x27;s ChatGPT hit 100 million MAUs in January, and has gone down since.<p>It&#x27;s essentially a &quot;Amazon and Google don&#x27;t use this k thx.&quot;</text></item><item><author>whimsicalism</author><text>Key detail from release:<p>&gt; If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.<p>Looks like they are trying to block out competitors, it&#x27;s the perfect commoditize your complement but don&#x27;t let your actual competitors try to eke out any benefit from it.</text></item></parent_chain><comment><author>galaxyLogic</author><text>I just googled &quot;What is the order of object-fields in JavaScript&quot; and the bard-answer said nothing about the differences between ES5 and ES6 and ES2020 how by now the order of object-fields in fact is deterministic.<p>It seems it is not aware of the notion of historic development, perhaps its world-model is &quot;static&quot;?<p>Temporal reasoning is interesting , if you google for &quot;news&quot; do you get what was news last year because a website updated last year had a page claiming to contain &quot;Latest News&quot;.<p>REF: <a href="https:&#x2F;&#x2F;www.stefanjudis.com&#x2F;today-i-learned&#x2F;property-order-is-predictable-in-javascript-objects-since-es2015&#x2F;" rel="nofollow noreferrer">https:&#x2F;&#x2F;www.stefanjudis.com&#x2F;today-i-learned&#x2F;property-order-i...</a></text></comment> |
28,350,933 | 28,351,035 | 1 | 3 | 28,350,466 | train | <story><title>Helm is a personal server that lives where you do</title><url>https://thehelm.com/</url></story><parent_chain><item><author>satyanash</author><text>Dockerized Nextcloud + Postfix + Dovecot + Strongswan + OpenLDAP + SpamAssassin running on an ARM machine.<p>Sounds mostly alright, although it seems you cannot buy it without the $99&#x2F;yearly subscription, which makes me wary.<p>Sure, a static IP and domain registration is good, but it ought to be an optional addon.</text></item></parent_chain><comment><author>allset_</author><text>The required subscription also means it&#x27;s useless if they go out of business.</text></comment> | <story><title>Helm is a personal server that lives where you do</title><url>https://thehelm.com/</url></story><parent_chain><item><author>satyanash</author><text>Dockerized Nextcloud + Postfix + Dovecot + Strongswan + OpenLDAP + SpamAssassin running on an ARM machine.<p>Sounds mostly alright, although it seems you cannot buy it without the $99&#x2F;yearly subscription, which makes me wary.<p>Sure, a static IP and domain registration is good, but it ought to be an optional addon.</text></item></parent_chain><comment><author>gsreenivas</author><text>Hi there - co-founder&#x2F;CEO of Helm here.<p>We don&#x27;t make the subscription optional at this time because the overwhelming majority of people on the Internet do not have a static IP address with a corresponding PTR record, which is required if you want to have deliverable email. There are other ways to handle domain registration, DNS and backups on your own, but we believe the subscription is a pretty great value for the convenience it provides.</text></comment> |
37,706,764 | 37,706,626 | 1 | 2 | 37,705,368 | train | <story><title>Privacy washing: Google claims to support privacy while lobbying against it</title><url>https://proton.me/blog/google-lobbying</url></story><parent_chain><item><author>tristor</author><text>&gt; If you don&#x27;t think that all ads are bad, then I think you may agree that sometimes, ad targeting produces more useful ads.<p>I do. Ads are psychological manipulation at scale. This well predates the Internet, even. Advertisements and marketing are immoral. As long as we have them though, I will concede that better targeted ads are sometimes better than non-targeted ads. However, I believe targeting does not need to be personal to be effective, it can be based on the content the ad is placed next to, rather than the individual visiting and be equally as effective without necessitating spying.<p>As it stands, at best ads are malware, at worst they are concentrated form of social evil.</text></item><item><author>arciini</author><text>While I agree Google is being a bit sly in lobbying against privacy legislation, I think they have a legitimate point, but also think that it&#x27;ll lead to them concentrating more power in the ad market.<p>Google clearly believes that they can obfuscate identity by enough so that ads can be targeted while privacy can also be preserved.<p>If you don&#x27;t think that all ads are bad (and Google argues that ads help make many of the sites we rely on economically feasible), then I think you may agree that sometimes, ad targeting produces more useful ads. Many of my friends have actually found Instagram ads interesting enough to talk about them at dinners. It&#x27;s better to have those ads rather than random banners for things that you don&#x27;t care about. Google would definitely also argue that ad targeting improves the worth of the Internet and allows more sites to offer their services for free because they can make more from AdSense.<p>The following only makes sense if you buy this argument:<p>The data for ad targeting has been abused so often that for many (most?) consumers, it&#x27;s not worth it.<p>Google&#x27;s perspective is: &quot;we can be a responsible steward of this data for this new age of privacy-conscious ad targeting&quot;. The Chrome topics API and mathematical&#x2F;statistical obfuscation are things that a blunt tool like the law may forbid. As far as arguments go, I think this is actually a plausible one. I do think Google has somewhat OK privacy controls compared to other large tech companies, and way better ones compared to bad acting small sites and ad companies&#x2F;data brokers.<p>That being said, I don&#x27;t love the concentration of power (that&#x27;s why the DOJ is going after them) - I&#x27;d much rather there be some decentralized way to ensure privacy but still allow useful ads, but we get what we get.</text></item></parent_chain><comment><author>gambiting</author><text>&gt;&gt;Advertisements and marketing are immoral.<p>So if you are selling something it&#x27;s <i>literally</i> immoral to tell other people about it? How do you think people will ever find your store or product that you make? Just randomly stumble upon it? 
I&#x27;m genuinely curious.</text></comment> | <story><title>Privacy washing: Google claims to support privacy while lobbying against it</title><url>https://proton.me/blog/google-lobbying</url></story><parent_chain><item><author>tristor</author><text>&gt; If you don&#x27;t think that all ads are bad, then I think you may agree that sometimes, ad targeting produces more useful ads.<p>I do. Ads are psychological manipulation at scale. This well predates the Internet, even. Advertisements and marketing are immoral. As long as we have them though, I will concede that better targeted ads are sometimes better than non-targeted ads. However, I believe targeting does not need to be personal to be effective, it can be based on the content the ad is placed next to, rather than the individual visiting and be equally as effective without necessitating spying.<p>As it stands, at best ads are malware, at worst they are concentrated form of social evil.</text></item><item><author>arciini</author><text>While I agree Google is being a bit sly in lobbying against privacy legislation, I think they have a legitimate point, but also think that it&#x27;ll lead to them concentrating more power in the ad market.<p>Google clearly believes that they can obfuscate identity by enough so that ads can be targeted while privacy can also be preserved.<p>If you don&#x27;t think that all ads are bad (and Google argues that ads help make many of the sites we rely on economically feasible), then I think you may agree that sometimes, ad targeting produces more useful ads. Many of my friends have actually found Instagram ads interesting enough to talk about them at dinners. It&#x27;s better to have those ads rather than random banners for things that you don&#x27;t care about. Google would definitely also argue that ad targeting improves the worth of the Internet and allows more sites to offer their services for free because they can make more from AdSense.<p>The following only makes sense if you buy this argument:<p>The data for ad targeting has been abused so often that for many (most?) consumers, it&#x27;s not worth it.<p>Google&#x27;s perspective is: &quot;we can be a responsible steward of this data for this new age of privacy-conscious ad targeting&quot;. The Chrome topics API and mathematical&#x2F;statistical obfuscation are things that a blunt tool like the law may forbid. As far as arguments go, I think this is actually a plausible one. I do think Google has somewhat OK privacy controls compared to other large tech companies, and way better ones compared to bad acting small sites and ad companies&#x2F;data brokers.<p>That being said, I don&#x27;t love the concentration of power (that&#x27;s why the DOJ is going after them) - I&#x27;d much rather there be some decentralized way to ensure privacy but still allow useful ads, but we get what we get.</text></item></parent_chain><comment><author>endisneigh</author><text>Ads are no more psychological manipulation at scale than words in general. Your entire post is FUD. What form is communication that is disseminated at scale not &quot;psychological manipulation at scale?&quot;</text></comment> |
2,874,344 | 2,873,786 | 1 | 2 | 2,872,588 | train | <story><title>How to be a faster writer</title><url>http://www.slate.com/id/2301243/</url></story><parent_chain></parent_chain><comment><author>alanfalcon</author><text>This is the perfect place to mention one of my favorite websites: <a href="http://750words.com/" rel="nofollow">http://750words.com/</a><p>750 words is really fantastic if you want to practice writing every day, especially if you want to practice writing quickly. The site keeps a live word count and pushes you to write 750 words (roughly three pages) every day by tracking your streaks and awarding badges for hitting certain milestones. On top of that, the site tracks how many interruptions you have while writing your 750 words and how long it takes you each day to write.<p>I had a 149 day streak broken when I lost track of time and forgot to write until after midnight one day, but I'm back up to a new 40 day streak. I was shocked to learn that I'm usually able to write 750 words in under 15 minutes, and on some days I like to push myself to try to write as quickly as possible (I've gotten as low as 8 minutes one time). Of course, writing quickly sometimes means writing somewhat lower quality, so I don't always "race the clock".<p>If you have any interest in improving your writing skills, I can't recommend 750words.com enough. I started less than a year ago, and I've written a total of 185,022 words that I probably never would have written otherwise! Of course, by writing so quickly I've spent only 56 hours writing on the site, so I have a long way to go before I reach the 10,000 hour mark.</text></comment> | <story><title>How to be a faster writer</title><url>http://www.slate.com/id/2301243/</url></story><parent_chain></parent_chain><comment><author>jonnathanson</author><text>I hesitate to prescribe speed-writing to everyone. Similarly, I am leery of such a teleological view of the evolution of a writer's "10,000 hours," i.e., the claim that <i>all</i> writers mature into speed writers after a certain amount of practice. Some certainly do. Others seem not to, and I do not think less of them for it.<p>The thing is, different writers have different modes of writing. Some are wholly capable of producing, in spans of 15 to 20 minutes, perfectly serviceable content on any topic imaginable. Others pore over every word, agonizing for possibly days on end. Nabokov, for instance, fell into that latter category; he was notoriously slow and picky about his construction, and his daily bursts sometimes yielded a sentence or two at most. Yet I doubt there's a credible critic around who would assail the beauty of Nabokov's results, or claim that he wanted for practice.</text></comment> |
3,515,590 | 3,515,187 | 1 | 3 | 3,514,671 | train | <story><title>22 EU states sign the ACTA ‘Internet censorship’ treaty</title><url>http://thenextweb.com/eu/2012/01/26/the-eu-and-22-member-states-sign-the-controversial-acta-internet-censorship-treaty/</url><text></text></story><parent_chain></parent_chain><comment><author>narrator</author><text>The oligarchy in China is at least out in the open and nationalistic.<p>In the west the oligarchy does everything in secret, their motives are largely unclear, and they keep everyone focused on culture war issues that never get resolved and celebrity like gossiping about politicians. Meanwhile, the lobbyists and the rest of the oligarchy write absurdly complex bills and pass them in obscurity.<p>In Russia it's a bit of a hybrid of the western and eastern system.</text></comment> | <story><title>22 EU states sign the ACTA ‘Internet censorship’ treaty</title><url>http://thenextweb.com/eu/2012/01/26/the-eu-and-22-member-states-sign-the-controversial-acta-internet-censorship-treaty/</url><text></text></story><parent_chain></parent_chain><comment><author>ergo14</author><text><a href="http://news.ycombinator.com/item?id=3514232" rel="nofollow">http://news.ycombinator.com/item?id=3514232</a> - ACTA will NOT pass as of now since it's in validation of Basic Human Rights Bill, so it looks with exception of Poland and Czech EU is safe, but i have high hopes that our (polish) parliament will not ratify the document. So lets hope not everything is lost.<p><a href="http://lubbockonline.com/interact/blog-post/bert-knabe/2011-11-25/european-court-justice-internet-filtering-illegal#.TyGXlkDCQkA" rel="nofollow">http://lubbockonline.com/interact/blog-post/bert-knabe/2011-...</a><p>here are more sources on the matter</text></comment> |
12,327,933 | 12,327,934 | 1 | 3 | 12,327,803 | train | <story><title>Redcon – Fast Redis-compatible server framework for Go</title><url>https://github.com/tidwall/redcon</url></story><parent_chain><item><author>butabah</author><text>I don&#x27;t see how it&#x27;s useful then. Maybe someone can give me a good use case?<p>redis is already very bare-bones and dead simple to set up and maintain, what more does one want? Wouldn&#x27;t adding more logic into a redis-like environment be a little cumbersome from an architecture perspective?</text></item><item><author>eknkc</author><text>Note that it&#x27;s a redis serialization protocol and tcp server implementation. It&#x27;s up to you to define redis commands such as get, set etc.<p>This can be used coupled with a backend storage to provide custom redis servers.<p>It&#x27;s not a redis implementation by itself.</text></item></parent_chain><comment><author>eknkc</author><text>Could be used for proxying redis connections, be it for clustering or I don&#x27;t know, add sophisticated access control or something.<p>You could expose a memcached cluster as a redis server using this. (Not sure why would you want to do that.. Maybe if you have an existing memcached cluster and have redis client code?)<p>Or you could go crazy and implement the entire set of redis commands on top of sqlite and call it a &quot;strongly persistent redis&quot; or something like that.<p>It&#x27;s just a nice building block, I&#x27;m sure people would find ways to abuse it :)</text></comment> | <story><title>Redcon – Fast Redis-compatible server framework for Go</title><url>https://github.com/tidwall/redcon</url></story><parent_chain><item><author>butabah</author><text>I don&#x27;t see how it&#x27;s useful then. Maybe someone can give me a good use case?<p>redis is already very bare-bones and dead simple to set up and maintain, what more does one want? Wouldn&#x27;t adding more logic into a redis-like environment be a little cumbersome from an architecture perspective?</text></item><item><author>eknkc</author><text>Note that it&#x27;s a redis serialization protocol and tcp server implementation. It&#x27;s up to you to define redis commands such as get, set etc.<p>This can be used coupled with a backend storage to provide custom redis servers.<p>It&#x27;s not a redis implementation by itself.</text></item></parent_chain><comment><author>otterley</author><text>I&#x27;m not sure, either. By the time you&#x27;ve implemented all the actual Redis functions, there&#x27;s no telling whether the server will be more or less performant than Redis itself. (Most likely less, since Redis is written in optimized C.)<p>Of course the existing benchmark shows that the example Go implementation is faster. For one, the Go implementation provided avoids a lot of functionality that the Redis server implements, such as TTL checking.</text></comment> |
4,081,993 | 4,081,924 | 1 | 3 | 4,081,524 | train | <story><title>Show HN: Proxino -- Monitor and Debug your JavaScript</title><url>https://www.proxino.com/</url><text></text></story><parent_chain></parent_chain><comment><author>ricardobeat</author><text>For hackers:<p><pre><code> window.onerror = function(m, f, l){
var err = JSON.stringify({ message:m, file:f, line:l });
(new Image).src = '/errors?e='+encodeURIComponent(err)
}
</code></pre>
For Google Analytics users:<p><pre><code> window.onerror = function(m, f, l){
var err = [f, l, m].join(' : ')
_gaq.push(['_trackEvent', 'Errors', 'App', err, null, true])
}
</code></pre>
Analytics will allow you to filter by OS, browser, and all the other environment data it already captures. And nice graphs as a bonus :)</text></comment> | <story><title>Show HN: Proxino -- Monitor and Debug your JavaScript</title><url>https://www.proxino.com/</url><text></text></story><parent_chain></parent_chain><comment><author>cheeaun</author><text>Seriously, the beacon/tracker JS requires the whole jQuery (1.7.1)? <a href="https://www.proxino.com/p.js" rel="nofollow">https://www.proxino.com/p.js</a></text></comment> |
37,602,480 | 37,601,510 | 1 | 2 | 37,600,484 | train | <story><title>Bloomberg Is Throwing $500M at Efforts to Shut Down All U.S. Coal Plants</title><url>https://gizmodo.com/michael-bloomberg-500-million-shut-down-coal-plants-1850861082</url></story><parent_chain></parent_chain><comment><author>tomp</author><text>@dang the title is wrong. should be<p><i>Michael</i> Bloomberg Is Throwing $500M at Efforts to Shut Down All U.S. Coal Plants<p>(to avoid confusion with the company)</text></comment> | <story><title>Bloomberg Is Throwing $500M at Efforts to Shut Down All U.S. Coal Plants</title><url>https://gizmodo.com/michael-bloomberg-500-million-shut-down-coal-plants-1850861082</url></story><parent_chain></parent_chain><comment><author>thelastgallon</author><text>Concentrating Solar-Thermal Power (CSP) is rapidly getting cheaper: from 9.8¢&#x2F;kWh (2018) to 5.0¢&#x2F;kWh (2030) [1]. With 14 hours of storage, this can take care of baseload.<p>[1]<a href="https:&#x2F;&#x2F;www.energy.gov&#x2F;eere&#x2F;solar&#x2F;articles&#x2F;2030-solar-cost-targets" rel="nofollow noreferrer">https:&#x2F;&#x2F;www.energy.gov&#x2F;eere&#x2F;solar&#x2F;articles&#x2F;2030-solar-cost-t...</a>
From Endnotes: All costs quoted here exclude benefits from federal or state tax incentives, such as the Investment Tax Credit.</text></comment> |
14,174,586 | 14,174,730 | 1 | 2 | 14,164,847 | train | <story><title>The Internet Isn’t Making Us Dumber – It’s Making Us More ‘Meta-Ignorant’</title><url>http://nymag.com/scienceofus/2016/07/the-internet-isnt-making-us-dumber-its-making-us-more-meta-ignorant.html</url></story><parent_chain><item><author>lordnacho</author><text>The problem with ignorance in the internet age is you can now find sources that confirm your wrong headed ideas, entire ecosystems even.<p>If you haven&#x27;t learned how to think before becoming an adult, I fear that you won&#x27;t. Sorry for sounding condescending, but I&#x27;ve run into so many people who are misinformed, and let&#x27;s just say &quot;misinformed&quot; means they don&#x27;t know what the academic orthodoxy is in a given field, but they think they know the facts, and they even have theories about why other people are wrong.<p>I&#x27;ve tried to explain to a friend why vaccinations are a good idea, and I get a bunch of crap about how modern medicine doesn&#x27;t work for her, and how she took a homeopathic vaccine, went to India, and didn&#x27;t get ill. A half a dozen of her friends jump in with articles about how vaccines work, and she ends it with &quot;sure but I&#x27;m skeptical&quot;.<p>I&#x27;ve been invited to a friends house to look at his &quot;Time Waver&quot; machine, which supposedly connects to one of your auras, and has a nice animation of how it scans every single one of someone&#x27;s organs. Remotely. In fact he showed me a woman in Italy that he was helping out. First he asked me if I knew anything about quantum theory, which I don&#x27;t really beyond undergrad, and then he gets excited and and spouts something about how I&#x27;ll appreciate a cleansing. Good thing I can stay polite. But someone in his 40s who thinks this is how the world works is not going to have the veil of ignorance lifted.<p>These are just a couple of recent examples. Common to them is there&#x27;s a bunch of stuff you can readily access which supports it. If you have some opinion about just about anything, you can find support for it, in fact a web of support, which will really test your reasoning skills.</text></item></parent_chain><comment><author>canadian_voter</author><text><i>The problem with ignorance in the internet age is you can now find sources that confirm your wrong headed ideas, entire ecosystems even.</i><p>When I was young, lonely and isolated, the internet was a godsend: it connected me to like minded individuals and expanded my horizons. It exposed me to a world of ideas about science, technology, history, politics, music, etc. at a time when everyone around me was mentally stagnant.<p>That same power to connect can lead people down a dark or dangerous path. Approach the internet with an uncritical mind and you could emerge as a flat-earther, anti-vaccer, or worse.<p><i>I&#x27;ve tried to explain to a friend why vaccinations are a good idea, and I get a bunch of crap about how modern medicine doesn&#x27;t work for her, and how she took a homeopathic vaccine, went to India, and didn&#x27;t get ill.</i><p>I once knew someone who bought an expensive electronic detoxification device. The idea was you held on to the handles and it sent purifying waves through your body, killing any parasites. She insisted on showing off her stool after the process was complete. She said she could see the little &quot;bugs&quot; that had been in her system. 
Then we discovered that she had forgotten to put in the batteries.<p>I knew someone else who wanted to invest in a $50,000 machine that made similarly dubious claims. The idea was to make the money back by selling treatments. Thankfully her daughter talked her out of it.<p>Gullibility is a dangerous thing, especially at the intersection of personal health and second party financial incentive.</text></comment> | <story><title>The Internet Isn’t Making Us Dumber – It’s Making Us More ‘Meta-Ignorant’</title><url>http://nymag.com/scienceofus/2016/07/the-internet-isnt-making-us-dumber-its-making-us-more-meta-ignorant.html</url></story><parent_chain><item><author>lordnacho</author><text>The problem with ignorance in the internet age is you can now find sources that confirm your wrong headed ideas, entire ecosystems even.<p>If you haven&#x27;t learned how to think before becoming an adult, I fear that you won&#x27;t. Sorry for sounding condescending, but I&#x27;ve run into so many people who are misinformed, and let&#x27;s just say &quot;misinformed&quot; means they don&#x27;t know what the academic orthodoxy is in a given field, but they think they know the facts, and they even have theories about why other people are wrong.<p>I&#x27;ve tried to explain to a friend why vaccinations are a good idea, and I get a bunch of crap about how modern medicine doesn&#x27;t work for her, and how she took a homeopathic vaccine, went to India, and didn&#x27;t get ill. A half a dozen of her friends jump in with articles about how vaccines work, and she ends it with &quot;sure but I&#x27;m skeptical&quot;.<p>I&#x27;ve been invited to a friends house to look at his &quot;Time Waver&quot; machine, which supposedly connects to one of your auras, and has a nice animation of how it scans every single one of someone&#x27;s organs. Remotely. In fact he showed me a woman in Italy that he was helping out. First he asked me if I knew anything about quantum theory, which I don&#x27;t really beyond undergrad, and then he gets excited and and spouts something about how I&#x27;ll appreciate a cleansing. Good thing I can stay polite. But someone in his 40s who thinks this is how the world works is not going to have the veil of ignorance lifted.<p>These are just a couple of recent examples. Common to them is there&#x27;s a bunch of stuff you can readily access which supports it. If you have some opinion about just about anything, you can find support for it, in fact a web of support, which will really test your reasoning skills.</text></item></parent_chain><comment><author>pavlov</author><text>I personally know a man in his late thirties who nearly died of AIDS a few years ago because he had read &quot;credible sources&quot; on the Internet and convinced himself that HIV is harmless.<p>This happened already in the early 2000s. For over a decade, he ignored his HIV status (not knowing either way) thanks to the &quot;advice&quot; from contrarian experts on the Internet. One day he finally had a flu that didn&#x27;t seem to go away. A doctor took one look at the lesions on his face and told him the bad news.<p>He survived, and now he wants to warn everyone about the dangers of hopeful biases when combined with the immense amount of misinformation on the Internet. He&#x27;s even appeared on national media here in Finland to tell his story, and I admire his courage.</text></comment> |
27,357,712 | 27,356,769 | 1 | 3 | 27,353,672 | train | <story><title>Employees are quitting instead of giving up working from home</title><url>https://www.bloomberg.com/news/articles/2021-06-01/return-to-office-employees-are-quitting-instead-of-giving-up-work-from-home</url></story><parent_chain><item><author>dougmwne</author><text>I&#x27;ve been doing WFH for 5 years and feel like the rest of the world just caught on. In addition to what you mention:<p>- Control of food. No more bagels or carb dumping ground. No more limited food options. My own kitchen.<p>- Control of equipment. Need a 4k monitor? Need a trackball? No approvals needed.<p>- Control of ergonomics. Get exactly the chair you need. Get an electric height adjustable desk without going through facilities.<p>- Control temperature. Never be too hot or cold.<p>- So many great options for breaks. Walk down the street. Meditate in the garden. Play Beatsaber. Take a nap, naps are magic.<p>- Control your lighting. Good color temperature and comfortable brightness make the space more relaxing and can aid sleep and wakefulness.<p>- The ultimate corner office. Privacy and separate space that you can personalize to your heart&#x27;s content.<p>- Location flexibility. Work from a beach rental. Do a city-stay near a WeWork. Find a mountain cabin with high speed internet. Move to a new state without having to change jobs.<p>- Finances. Live in a low tax state. Have an older car. Spend less on clothes, lunches, parking, gas, tolls. Live in cheaper square footage without worrying about what it does to your commute.<p>- Stress. More emotional speration between you and your work. Relationships are through Zoom and require less emotional investment. Work forms less a part of your identity and changing it involves fewer changes to your daily routine.<p>- Caffeine. With more tools to manage your wakefulness, less need to lean on the crutch of caffeine. For me, less caffeine means less alcohol as well.<p>Other people are free to have their opinions that they don&#x27;t like WFH or can&#x27;t wait to get back into the office. For me, I really struggle to understand how you cannot love it. With total control of my environment, I can easily correct for minor downsides such as needing to maintain work-life balance and good social connections. After years of optimization, I have a better quality of life than our CEO. I&#x27;d be insane to give it up.</text></item><item><author>Jaygles</author><text>For me, the utility of being at home versus being at the office is massive. If I need a 15 minute break, I can start a load of laundry, or run the dishwasher, or do some other chore.<p>I can wake up later since I don&#x27;t have a commute. I don&#x27;t lose two hours a day due to the commute.<p>I can wear as comfortable clothing as I desire. I can delay my shower until after my daily workout (Which I can do since my workout equipment is at home).<p>I can control my environment so there are as many or as few distractions as i like. I can put on videos or audio that might not be considered work appropriate. I can use speakers since no one is around me to hear the sound.<p>The number and magnitude of inconveniences we subject ourselves to by heading to an office every day has been fully revealed. I will do all I can to work from home for the rest of my life.</text></item></parent_chain><comment><author>nonameiguess</author><text>The other thing I don&#x27;t see mentioned enough is the impact on disabled people. 
Thanks to spine issues, I can&#x27;t comfortably drive. That has limited me to either working where a train line goes or at the same place with my wife where I can carpool with her as a passenger. WFH is one of the most freeing things that has ever happened to me. It not only frees me to work anywhere, but it also means I don&#x27;t need to take as frequent breaks. I used to have to work at places that could accommodate my need to lay down to decompress several times a day, and that meant I couldn&#x27;t do any actual work during that time. At home, I can much more easily keep working from bed when sitting or standing gets too painful.</text></comment> | <story><title>Employees are quitting instead of giving up working from home</title><url>https://www.bloomberg.com/news/articles/2021-06-01/return-to-office-employees-are-quitting-instead-of-giving-up-work-from-home</url></story><parent_chain><item><author>dougmwne</author><text>I&#x27;ve been doing WFH for 5 years and feel like the rest of the world just caught on. In addition to what you mention:<p>- Control of food. No more bagels or carb dumping ground. No more limited food options. My own kitchen.<p>- Control of equipment. Need a 4k monitor? Need a trackball? No approvals needed.<p>- Control of ergonomics. Get exactly the chair you need. Get an electric height adjustable desk without going through facilities.<p>- Control temperature. Never be too hot or cold.<p>- So many great options for breaks. Walk down the street. Meditate in the garden. Play Beatsaber. Take a nap, naps are magic.<p>- Control your lighting. Good color temperature and comfortable brightness make the space more relaxing and can aid sleep and wakefulness.<p>- The ultimate corner office. Privacy and separate space that you can personalize to your heart&#x27;s content.<p>- Location flexibility. Work from a beach rental. Do a city-stay near a WeWork. Find a mountain cabin with high speed internet. Move to a new state without having to change jobs.<p>- Finances. Live in a low tax state. Have an older car. Spend less on clothes, lunches, parking, gas, tolls. Live in cheaper square footage without worrying about what it does to your commute.<p>- Stress. More emotional speration between you and your work. Relationships are through Zoom and require less emotional investment. Work forms less a part of your identity and changing it involves fewer changes to your daily routine.<p>- Caffeine. With more tools to manage your wakefulness, less need to lean on the crutch of caffeine. For me, less caffeine means less alcohol as well.<p>Other people are free to have their opinions that they don&#x27;t like WFH or can&#x27;t wait to get back into the office. For me, I really struggle to understand how you cannot love it. With total control of my environment, I can easily correct for minor downsides such as needing to maintain work-life balance and good social connections. After years of optimization, I have a better quality of life than our CEO. I&#x27;d be insane to give it up.</text></item><item><author>Jaygles</author><text>For me, the utility of being at home versus being at the office is massive. If I need a 15 minute break, I can start a load of laundry, or run the dishwasher, or do some other chore.<p>I can wake up later since I don&#x27;t have a commute. I don&#x27;t lose two hours a day due to the commute.<p>I can wear as comfortable clothing as I desire. 
I can delay my shower until after my daily workout (Which I can do since my workout equipment is at home).<p>I can control my environment so there are as many or as few distractions as i like. I can put on videos or audio that might not be considered work appropriate. I can use speakers since no one is around me to hear the sound.<p>The number and magnitude of inconveniences we subject ourselves to by heading to an office every day has been fully revealed. I will do all I can to work from home for the rest of my life.</text></item></parent_chain><comment><author>kat</author><text>I liked how to worded &quot;emotional separation&quot;
The reduction in my stress has been amazing, especially considering we&#x27;ve been in the middle of covid. My moods are way better regulated now. I&#x27;m no longer trying to ignore my angry office mate who&#x27;s muttering under his breath. I didn&#x27;t realize how much upset people influence my own mood. I thought I was good at ignoring angry people, but the action of ignoring took up a lot more energy than I previously thought. My biggest worry about back-to-office is how I&#x27;m going to managed my increased stress levels.</text></comment> |
20,602,173 | 20,601,159 | 1 | 2 | 20,600,178 | train | <story><title>Rustgo: Calling Rust from Go with near-zero overhead (2017)</title><url>https://blog.filippo.io/rustgo/</url></story><parent_chain><item><author>jaChEWAg</author><text>That line pretty much sums up the experience a lot of devs feel towards Rust</text></item><item><author>hu3</author><text>&gt; I&#x27;ll be upfront: I don&#x27;t know Rust, and don&#x27;t feel compelled to do my day-to-day programming in it.<p>I wonder if this changed. A quick glance over the blog hints me towards a no.</text></item></parent_chain><comment><author>hombre_fatal</author><text>This is how most devs feel about any language they don&#x27;t already use. There are even HNers who regularly brag about having never taken Javascript, one of the most ubiquitous languages, seriously enough to build a nontrivial project or to appreciate its upsides.<p>So it&#x27;s not as damning as you seem to suggest.</text></comment> | <story><title>Rustgo: Calling Rust from Go with near-zero overhead (2017)</title><url>https://blog.filippo.io/rustgo/</url></story><parent_chain><item><author>jaChEWAg</author><text>That line pretty much sums up the experience a lot of devs feel towards Rust</text></item><item><author>hu3</author><text>&gt; I&#x27;ll be upfront: I don&#x27;t know Rust, and don&#x27;t feel compelled to do my day-to-day programming in it.<p>I wonder if this changed. A quick glance over the blog hints me towards a no.</text></item></parent_chain><comment><author>disinclination</author><text>I’ve been thinking recently about how the symbol “C” in “C programming language” literally stands for “3”.<p>We out here like, “I’ve been getting really into this new language, 12,447. I think it might finally be the thing to replace 3 for me.”</text></comment> |
34,884,348 | 34,883,977 | 1 | 2 | 34,878,240 | train | <story><title>Replacing my MacBook Air M1 with a ThinkPad T480</title><url>https://maxrozen.com/replacing-my-macbook-m1-with-thinkpad-t480</url></story><parent_chain><item><author>yamtaddle</author><text>Every single laptop I owned before switching to Apple was a huge piece of shit, in hindsight—like, probably shouldn&#x27;t have been saleable, just a straight-up lemon by design.<p>... <i>except</i> an IBM (yes, it&#x27;s that long ago) Thinkpad that was super-underpowered even for the time. Though, it <i>did</i> have the ~2-3hr actual-in-practice battery life that was typical of laptops at the time, no matter what they promised, so like all the rest the battery was really just for hopping from one outlet to the next (I don&#x27;t know if Apple laptops were like that at the time, but I do know that no subsequent laptop let me feel like I could leave my power brick behind when getting up from my desk until I got a Macbook)<p>If I ever go back to PC-land, it&#x27;ll be to an underpowered device from a business-oriented line. Worked once, so, maybe it&#x27;ll work again...<p>&gt; Two years ago I bought a Lenovo gaming laptop for my son.<p>This especially is a big part of your problem. Discrete graphics card in a laptop = 2x the problems—and that holds even for Apple (in the Intel days, anyway). Gaming-marketed on top of that? Add another 1x for 3x the rate of trouble over baseline. Avoid, avoid, avoid.</text></item><item><author>browningstreet</author><text>Regarding Lenovo:<p>I bought an original Macbook Air. It always worked well. As it got on to about 10 years old, I took it in to an Apple Store a couple miles from my house and asked if they could replace the battery. The battery worked pretty well, but I figured.. the machine was going strong and if they still had batteries in stock, I should grab one now and the laptop will live even longer. They replaced the battery for $99 and when it came back (3 days later), they also replaced the motherboard for some reason. I don&#x27;t know why, but basically, aside from the screen and the keyboard, it&#x27;s a new machine. I still use it today -- the only thing I can&#x27;t really do on it is edit 4K video.<p>Two years ago I bought a Lenovo gaming laptop for my son. A couple of weeks ago the keyboard stopped working. We bought a replacement keyboard and swapped it out. Still didn&#x27;t work. So, something on the motherboard. When we called and asked about getting it fixed, they said it wasn&#x27;t in warranty anymore and the price to diagnose and replace the broken part couldn&#x27;t be estimated, nor could they estimate how long it would take. We&#x27;d have to ship it to a third-party repair center. So my son is walking around his college campus with a Lenovo laptop and an external keyboard, for a laptop we bought 2 years ago for almost $2,000.<p>So, no. No Lenovo for me. 
And given my previous stupendously negative experience with Dell (they couldn&#x27;t repair 2 brand new machines that were shipped to us with broken WiFi and took so long to declare it unrepairable the machines couldn&#x27;t be returned either -- two machines bought together had to be reported in separate calls to tech support and the third party tech support people who came to the house weren&#x27;t very knowledgable -- calls disputing the non-returnable declaration took so long to return and were bounced among so many unhelpful people at Dell Headquarters they basically convinced me to give up, buy third-party wifi cards and live without Bluetooth in these machines), no Dell XPS laptops for me either.<p>On those experiences, I&#x27;ll only spend my own money on Apple laptop hardware.</text></item></parent_chain><comment><author>IggleSniggle</author><text>You know what? I’ve only had my Steam Deck for slightly under a year, but I’m honestly astounded at the amount of abuse it has endured without showing any apparent problems. I’m very curious to see how long it lasts.<p>I doubt it will be competitive with a Switch for durability, which is the closest mirror to a Deck that also has some similarities to Apple-style hardware-quality + lock-in.<p>But that said, I’ve had problems with Apple laptop keyboards after only a couple years of use, and had a laptop monitor from Apple start to yellow bizarrely after less than a year. Nintendo joycons started having issues after under a year.<p>Sometimes you’re just at the bad-end of the bell-curve. It’s a shame that online reviews aren’t that helpful anymore; it can be hard to know what’s real with consumer tech, and hard to know how big the lemon segment is.</text></comment> | <story><title>Replacing my MacBook Air M1 with a ThinkPad T480</title><url>https://maxrozen.com/replacing-my-macbook-m1-with-thinkpad-t480</url></story><parent_chain><item><author>yamtaddle</author><text>Every single laptop I owned before switching to Apple was a huge piece of shit, in hindsight—like, probably shouldn&#x27;t have been saleable, just a straight-up lemon by design.<p>... <i>except</i> an IBM (yes, it&#x27;s that long ago) Thinkpad that was super-underpowered even for the time. Though, it <i>did</i> have the ~2-3hr actual-in-practice battery life that was typical of laptops at the time, no matter what they promised, so like all the rest the battery was really just for hopping from one outlet to the next (I don&#x27;t know if Apple laptops were like that at the time, but I do know that no subsequent laptop let me feel like I could leave my power brick behind when getting up from my desk until I got a Macbook)<p>If I ever go back to PC-land, it&#x27;ll be to an underpowered device from a business-oriented line. Worked once, so, maybe it&#x27;ll work again...<p>&gt; Two years ago I bought a Lenovo gaming laptop for my son.<p>This especially is a big part of your problem. Discrete graphics card in a laptop = 2x the problems—and that holds even for Apple (in the Intel days, anyway). Gaming-marketed on top of that? Add another 1x for 3x the rate of trouble over baseline. Avoid, avoid, avoid.</text></item><item><author>browningstreet</author><text>Regarding Lenovo:<p>I bought an original Macbook Air. It always worked well. As it got on to about 10 years old, I took it in to an Apple Store a couple miles from my house and asked if they could replace the battery. The battery worked pretty well, but I figured.. 
the machine was going strong and if they still had batteries in stock, I should grab one now and the laptop will live even longer. They replaced the battery for $99 and when it came back (3 days later), they also replaced the motherboard for some reason. I don&#x27;t know why, but basically, aside from the screen and the keyboard, it&#x27;s a new machine. I still use it today -- the only thing I can&#x27;t really do on it is edit 4K video.<p>Two years ago I bought a Lenovo gaming laptop for my son. A couple of weeks ago the keyboard stopped working. We bought a replacement keyboard and swapped it out. Still didn&#x27;t work. So, something on the motherboard. When we called and asked about getting it fixed, they said it wasn&#x27;t in warranty anymore and the price to diagnose and replace the broken part couldn&#x27;t be estimated, nor could they estimate how long it would take. We&#x27;d have to ship it to a third-party repair center. So my son is walking around his college campus with a Lenovo laptop and an external keyboard, for a laptop we bought 2 years ago for almost $2,000.<p>So, no. No Lenovo for me. And given my previous stupendously negative experience with Dell (they couldn&#x27;t repair 2 brand new machines that were shipped to us with broken WiFi and took so long to declare it unrepairable the machines couldn&#x27;t be returned either -- two machines bought together had to be reported in separate calls to tech support and the third party tech support people who came to the house weren&#x27;t very knowledgable -- calls disputing the non-returnable declaration took so long to return and were bounced among so many unhelpful people at Dell Headquarters they basically convinced me to give up, buy third-party wifi cards and live without Bluetooth in these machines), no Dell XPS laptops for me either.<p>On those experiences, I&#x27;ll only spend my own money on Apple laptop hardware.</text></item></parent_chain><comment><author>thrill</author><text>&quot;If I ever go back to PC-land, it&#x27;ll be to an underpowered device from a business-oriented line.&quot;<p>If I ever need a PC again, it&#x27;ll be by running Parallels on a Mac Air.</text></comment> |
27,067,294 | 27,062,271 | 1 | 3 | 27,061,700 | train | <story><title>Starting a crypto project</title><url>https://twitter.com/jonsyu/status/1389635626698297344</url></story><parent_chain><item><author>PragmaticPulp</author><text>Can confirm. I’m friends with a group of very smart distributed systems engineers who were excited to launch an actually useful crypto project that gets talked about here from time to time.<p>As much as they’re excited about the technology itself, very few people are actually interested in using it. Instead, their business seems to revolve around the value of the utility tokens for their project.<p>Few people are buying the utility tokens to pay for the service. They’re buying them to horde so they can resell to other speculators when the price goes up. They have a lot of tokens in circulation, but only a small number of them get used to pay for the service. Many of them are sitting in the wallets of exchanges where they get traded back and forth but never come near the blockchain.<p>Investors only care about getting more press releases out so they can pump up the price of their discounted tokens, which have a shortened lockup period relative to something like stocks. It’s almost like a pump-and-dump scheme for investors.<p>Maybe their underlying project will become popular in the future, but the volatility of the token price makes it increasingly unattractive for companies that want to actually use it.</text></item></parent_chain><comment><author>notJim</author><text>&gt; They’re buying them to horde so they can resell to other speculators when the price goes up.<p>It&#x27;s almost like all interest in cryptocurrency is driven by greed and speculation. I&#x27;ve been learning more about it, and I&#x27;m pretty convinced this is the case. Go into any crypto space, and all people are talking about is yield, price action, how much their tokens have grown etc. I think the only thing I&#x27;ve seen that isn&#x27;t purely about speculation is NFTs, oddly enough, which seem to be basically a fad.</text></comment> | <story><title>Starting a crypto project</title><url>https://twitter.com/jonsyu/status/1389635626698297344</url></story><parent_chain><item><author>PragmaticPulp</author><text>Can confirm. I’m friends with a group of very smart distributed systems engineers who were excited to launch an actually useful crypto project that gets talked about here from time to time.<p>As much as they’re excited about the technology itself, very few people are actually interested in using it. Instead, their business seems to revolve around the value of the utility tokens for their project.<p>Few people are buying the utility tokens to pay for the service. They’re buying them to horde so they can resell to other speculators when the price goes up. They have a lot of tokens in circulation, but only a small number of them get used to pay for the service. Many of them are sitting in the wallets of exchanges where they get traded back and forth but never come near the blockchain.<p>Investors only care about getting more press releases out so they can pump up the price of their discounted tokens, which have a shortened lockup period relative to something like stocks. 
It’s almost like a pump-and-dump scheme for investors.<p>Maybe their underlying project will become popular in the future, but the volatility of the token price makes it increasingly unattractive for companies that want to actually use it.</text></item></parent_chain><comment><author>monkeydust</author><text>Perhaps their initial approach was not the right one i.e. was the public token the right method to maximize the value of their actual offering? Maybe they need to rethink given what you have said.<p>I know of a very similar company (might even be the same one) the telegram forum they have setup is just full of people speculating what will &#x2F; might happen ...will it go to the moon... etc.. They have now moderated that out but still main reason people join the group to start with.</text></comment> |
18,446,809 | 18,446,430 | 1 | 2 | 18,445,714 | train | <story><title>Waymo CEO Says Alphabet Unit Plans to Launch Driverless Car Service</title><url>https://www.marketwatch.com/story/waymo-ceo-says-driverless-car-service-coming-soon-2018-11-13</url></story><parent_chain></parent_chain><comment><author>edoo</author><text>I&#x27;m excited. If you have been alive a substantial amount of time you are easily running a lifetime average of about 1-2% chance of dying in a car accident in your lifetime. Things have gotten much safer the past couple decades so that number is somewhere around 0.75% or lower now. I want to say being injured in a car accident is somewhere around 20-30% lifetime chance. Cars are incredibly dangerous and this will save a ton of lives.</text></comment> | <story><title>Waymo CEO Says Alphabet Unit Plans to Launch Driverless Car Service</title><url>https://www.marketwatch.com/story/waymo-ceo-says-driverless-car-service-coming-soon-2018-11-13</url></story><parent_chain></parent_chain><comment><author>ChuckMcM</author><text>This will be a huge milestone for the Waymo team. I had not thought they would get here until 2025 at least, so I guess I&#x27;m overly cynical these days :-).<p>I hope that a successful launch in Phoenix will expand out to other locales because this capability is a huge win for those groups that would otherwise not be able to make these trips.<p>It is also remarkable that it is possible to replace a regular vehicle operated by a human as a livery service, with an incredibly complex machine and still make a business case out of it. That says a lot about how far computers have come in the last couple of decades.</text></comment> |
19,562,206 | 19,561,021 | 1 | 2 | 19,560,515 | train | <story><title>A16Z is re-registering as a financial advisor, renouncing its status as a VC</title><url>https://www.forbes.com/sites/alexkonrad/2019/04/02/andreessen-horowitz-is-blowing-up-the-venture-capital-model-again/</url></story><parent_chain><item><author>jakequist</author><text><i></i>tl;dr<i></i> - A16Z is reclassifying themselves as an &quot;investment advisor&quot;, which will allow them to make riskier bets (crypto, real estate, etc). They&#x27;ll still invest in startups like any other VC firm.</text></item></parent_chain><comment><author>kenneth</author><text>I feel like there&#x27;s some misunderstanding of what these terms are. All VCs are investment advisors (either Exempt Reporting Advisors or Registered Investment Advisor depending on assets under management and investment types).<p>The article is vague in what it actually means, but it sounds to me like A16Z is going through the process of giving up its VC exemption to the Investment Advisers Act and registering as a RIA. It could also mean they&#x27;re registering as a broker-dealer and getting its relevant employees licensed as such (e.g. Series 65)<p>Source: I run a VC firm.<p>Here is some more info: <a href="https:&#x2F;&#x2F;www.strictlybusinesslawblog.com&#x2F;2018&#x2F;05&#x2F;31&#x2F;the-venture-capital-adviser-exemption-explained&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.strictlybusinesslawblog.com&#x2F;2018&#x2F;05&#x2F;31&#x2F;the-ventu...</a></text></comment> | <story><title>A16Z is re-registering as a financial advisor, renouncing its status as a VC</title><url>https://www.forbes.com/sites/alexkonrad/2019/04/02/andreessen-horowitz-is-blowing-up-the-venture-capital-model-again/</url></story><parent_chain><item><author>jakequist</author><text><i></i>tl;dr<i></i> - A16Z is reclassifying themselves as an &quot;investment advisor&quot;, which will allow them to make riskier bets (crypto, real estate, etc). They&#x27;ll still invest in startups like any other VC firm.</text></item></parent_chain><comment><author>dang</author><text>Yes, that&#x27;s the interesting thing here, so we&#x27;ve put it in the title above. Hopefully that will nudge the discussion to be more specific and less lame.</text></comment> |
22,035,676 | 22,035,220 | 1 | 2 | 22,019,594 | train | <story><title>ChrysaLisp</title><url>https://github.com/vygr/ChrysaLisp</url></story><parent_chain></parent_chain><comment><author>vygr</author><text>Thanks again folks for the interest. You catch me yet again when I’m at work but I’ll try answer question in an hour or so once I get home. Regards to all. Chris</text></comment> | <story><title>ChrysaLisp</title><url>https://github.com/vygr/ChrysaLisp</url></story><parent_chain></parent_chain><comment><author>tartoran</author><text>Taken from the documenation:<p>Lisp<p>It&#x27;s probably worth a few words specifically about the included Lisp and how it works, and how many rules it breaks ! The reason for doing the Lisp was to allow me to create an assembler to replace NASM, I was not concerned with sticking to &#x27;the lisp way&#x27;, or whatever your local Lisp guru says. No doubt I will have many demons in hell awaiting me...<p>First of all there is no garbage collector by choice. All objects used by the Lisp are reference counted objects from the class library. Early on the Lisp was never going to have a problem with cycles because it had no way to create them, but as I developed the assembler I decided to introduce two functions, push and elem-set, that could create cycles. However the efficiency advantage in coding the assembler made me look the other way. There are now other ways that a cycle can be created, by naming an environment within its own scope, but again this was too good an efficiency feature to miss out on. So you do have to be careful not to create cycles, so think about how your code works.<p>No tail recursion optimization ! There is a single looping function provided in native code, while, every other looping construct builds on this primitive. There are also two native primitives some! and each! that provide generic access to iterating over a slice of a sequence&#x2F;s, while calling a function on the grouped elements. Standard some and each are built on these but they also allow other constructs to be built and gain the advantage of machine coded iteration. I try to stick to a functional approach in my Lisp code, and manipulate collections of things in a functional way with operations like map, filter, reduce, each etc. I&#x27;ve not found the lack of tail recursion a problem.<p>All symbols live in the same environment, functions, macros, everything. The environment is a chain of hash maps. Each lambda gets a new hash map pushed onto the environment chain on invocation, and dereferenced on exit. The env function can be used to return the current hash map and optionally resize the number of buckets from the default of 1. This proves very effective for storing large numbers of symbols and objects for the assembler as well as creating caches. Make sure to setq the symbol you bind to the result of env to nil before returning from the function if you do this, else you will create a cycle that can&#x27;t be freed.<p>defq and bind always create entries in the top environment hash map. setq searches the environment chain to find an existing entry and sets that entry or fails with an error. This means setq can be used to write to symbols outside the scope of the current function. Some people don&#x27;t like this, but used wisely it can be very powerful. Coming from an assembler background I prefer to have all the guns and knives available, so try not to shoot your foot off.<p>There is no cons, cdr or car stuff. 
Lists are just vector objects and you use push, cat, slice etc to manipulate elements. Also an empty list does not evaluate to nil, it&#x27;s just an error.<p>Function and macro definitions are scoped and visible only within the scope of the declaring function. There is no global macro list. During macro expansion the environment chain is searched to see if a macro exists.</text></comment> |
36,760,233 | 36,757,329 | 1 | 3 | 36,752,215 | train | <story><title>Medicine containers used in the golden age of piracy</title><url>http://www.piratesurgeon.com/pages/surgeon_pages/medicine_containers1.html</url></story><parent_chain></parent_chain><comment><author>Luc</author><text>I was wondering about the domed base of some of the bottles.
Luckily there&#x27;s a &#x27;Bottle Bases Page&#x27;: <a href="https:&#x2F;&#x2F;sha.org&#x2F;bottle&#x2F;bases.htm" rel="nofollow noreferrer">https:&#x2F;&#x2F;sha.org&#x2F;bottle&#x2F;bases.htm</a><p>&gt; It may appear that the steep rise or pushed-up portion of the base was done to reduce the interior volume of the bottle. However, it was more likely done for some or all of the following reasons: for bottle strength enhancing, stability (i.e., the process helps form an even base and keeps the rough glass of some pontil scars out of the way so the bottle sits upright without wobbling), to provide a means of turning bottles in a stack using the fingers and thumb (a procedure still followed in traditional champagne manufacture), and&#x2F;or possibly to trap content sedimentation (Jones 1971a; Boow 1991).</text></comment> | <story><title>Medicine containers used in the golden age of piracy</title><url>http://www.piratesurgeon.com/pages/surgeon_pages/medicine_containers1.html</url></story><parent_chain></parent_chain><comment><author>mynameishere</author><text>So, did they have any drugs that actually worked besides laudanum and &quot;bark&quot;? My reading of Jack Aubrey books suggests that medicine then was mostly about trauma care and bedside manner.</text></comment> |
32,637,409 | 32,637,036 | 1 | 2 | 32,636,068 | train | <story><title>NASA Artemis I Launch</title><url>https://www.youtube.com/watch?v=k5KfrDAM2Bo</url></story><parent_chain></parent_chain><comment><author>addaon</author><text>Scrubbed due to engine bleed temperature issue. Sounds like this was a point of concern, but also something that was specifically tested during the dress rehearsal. More debugging being done on the pad (not defueling yet), but no going to space today.</text></comment> | <story><title>NASA Artemis I Launch</title><url>https://www.youtube.com/watch?v=k5KfrDAM2Bo</url></story><parent_chain></parent_chain><comment><author>jacquesm</author><text>Given the number of delays I would not be surprised at all if the launch is called off. Better to fix things properly than to risk a blow-up, which may well be the end of the program.<p>BTW: from a casual point of view it looks suspiciously like a shuttle launch stack without the shuttle.<p>edit: it looks like it has been scrubbed</text></comment> |
34,902,431 | 34,902,207 | 1 | 2 | 34,899,606 | train | <story><title>The dystopian underworld of South Africa’s illegal gold mines</title><url>https://www.newyorker.com/magazine/2023/02/27/the-dystopian-underworld-of-south-africas-illegal-gold-mines</url></story><parent_chain><item><author>phyphy</author><text>I sometimes wonder what the future of human rights is considering a scenario from 10 generations from now.<p>The survived miners will probably have children. The children who are not fit (neither physically nor mentally) will die working in mines. The mentally fit children will migrate to US. The physically fit children will have more children that are physically fit. What will happen then? Why are we ignoring the future of our species and human rights on a larger scale of time?</text></item></parent_chain><comment><author>knodi123</author><text>What, like Morlocks and Eloi? I suppose it&#x27;s an interesting idea to contemplate, but planning to avoid it is a textbook example of premature optimization. We have a million bigger problems that could wipe us out long before that becomes a risk, even with the wild presumption that the conditions in SA stay the same for 10 generations.<p><a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Morlock" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Morlock</a></text></comment> | <story><title>The dystopian underworld of South Africa’s illegal gold mines</title><url>https://www.newyorker.com/magazine/2023/02/27/the-dystopian-underworld-of-south-africas-illegal-gold-mines</url></story><parent_chain><item><author>phyphy</author><text>I sometimes wonder what the future of human rights is considering a scenario from 10 generations from now.<p>The survived miners will probably have children. The children who are not fit (neither physically nor mentally) will die working in mines. The mentally fit children will migrate to US. The physically fit children will have more children that are physically fit. What will happen then? Why are we ignoring the future of our species and human rights on a larger scale of time?</text></item></parent_chain><comment><author>georgeglue1</author><text>Human migration (under various situations &#x2F; stresses like famines) has been studied in economics, sociology, etc..<p>There are also studies about generational poverty, medical literature on helping people recover from traumatic conditions, etc..<p>Not to read any bad intent in your comment, but there also have been studies about social darwinism that have a long track record of problems. I think people do still do studies about genetic inheritance etc., but it&#x27;s probably impossible to draw society-level conclusions without data.</text></comment> |
12,304,836 | 12,304,126 | 1 | 3 | 12,303,075 | train | <story><title>“It's The Future”</title><url>https://circleci.com/blog/its-the-future</url></story><parent_chain><item><author>rajeshp1986</author><text>The article perfectly summarizes my frustration and sentiment. These days I hear these buzzwords all time. I work as a consultant for an enterprise product and most people whom I meet they somehow catch these buzzwords and blurt it out in front of everyone during meetings and discussions to either showoff that they know technology and things that are in the market these days(also latest iphone, apple news, tesla, space exploration and what not) or I feel they are somewhat trying to hide their insecurities.<p>Anyway long story short, most of these people do not really understand why they need all this rocket science to manage &lt; 500 internal users. One of the new buzzwords I am hearing these days is mostly related to bigdata and machine learning. One of my managers came to me and asked me why dont we integrate our product with hadoop it will solve the performance problems as it can handle lot of data.<p>I am frustrated by the industry as a whole. I feel industry is simply following marketing trends. Imagine the no. of man-hours are put into investigating technologies and projects dropped mid-way realizing the technology stack is still immature or not suitable for at all.</text></item></parent_chain><comment><author>mamcx</author><text>This is how we manage this problem at the times when Visual Basic was the king and we use <i>instead</i> Visual FoxPro.<p>People want theirs apps to be made with Visual Studio (BTW, FoxPro was part of the package).<p>So they ask: &quot;In what is the app made&quot;?<p>&quot;In Visual, Sir.&quot;<p>Done. End of story (like most of the time, obviously some times people are more dangerous and press it ;) ).<p>----<p>The point is not focus in the exact <i>word</i> but in what the people <i>know</i> the word will give to them.<p>So, for example &quot;Big Data&quot;. The meaning for us matter zero. The meaning to some customer is that it have a largeish excel file that with his current methods and tools take <i>too long</i> to get results.<p>So. Do you use &quot;Big Data Tools&quot;?<p>&quot;Yes Sir.&quot;<p>And what about use Hadoop?<p>&quot;We use the parts of big data tech necessary for solve this, and if we need to use hadoop or other <i>similar tools</i> that fit better with your industry and that use the same principles will depend in our evaluation. Not worry, we know this&quot;<p>Or something like that ;). Know that worry the people behind the words have help me a lot, even with people with WORSE tech skills (damm, I have build apps for almost iliterate people with big pockets but only witch cellphones as references of tech!)<p>And the anecdote about the largeish excel file that was too big and take too long? Yep, true. And was for one of the largest companies in my country ;)</text></comment> | <story><title>“It's The Future”</title><url>https://circleci.com/blog/its-the-future</url></story><parent_chain><item><author>rajeshp1986</author><text>The article perfectly summarizes my frustration and sentiment. These days I hear these buzzwords all time. 
I work as a consultant for an enterprise product and most people whom I meet they somehow catch these buzzwords and blurt it out in front of everyone during meetings and discussions to either showoff that they know technology and things that are in the market these days(also latest iphone, apple news, tesla, space exploration and what not) or I feel they are somewhat trying to hide their insecurities.<p>Anyway long story short, most of these people do not really understand why they need all this rocket science to manage &lt; 500 internal users. One of the new buzzwords I am hearing these days is mostly related to bigdata and machine learning. One of my managers came to me and asked me why dont we integrate our product with hadoop it will solve the performance problems as it can handle lot of data.<p>I am frustrated by the industry as a whole. I feel industry is simply following marketing trends. Imagine the no. of man-hours are put into investigating technologies and projects dropped mid-way realizing the technology stack is still immature or not suitable for at all.</text></item></parent_chain><comment><author>huuu</author><text><i>&quot;I am frustrated by the industry as a whole&quot;</i><p>Unfortunately I have to agree as a developer. My job is to make a fast, reliable, stable product but at the same time I&#x27;m questioned the tools I use by people who don&#x27;t have any knowledge but heard the latest trend.<p>But sometimes it&#x27;s also very easy to please people.
Big data: just insert 10M records in a database and suddenly everyone is happy because they now have big data :|</text></comment> |
3,732,549 | 3,732,382 | 1 | 2 | 3,731,600 | train | <story><title>Linus Torvalds: The King of Geeks (And Dad of 3)</title><url>http://www.wired.com/wiredenterprise/2012/03/mr-linux/</url></story><parent_chain><item><author>MatthewPhillips</author><text>Which of those will be remembered in 200 years. Wanna be the richest guy in the graveyard?</text></item><item><author>pessimist</author><text>RedHat's $1m stock is his only big payout, apparently. Rather sad, considering the number of mid-level facebook engineers who will become many times as rich in a few months.</text></item></parent_chain><comment><author>walexander</author><text>"My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!"
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.</text></comment> | <story><title>Linus Torvalds: The King of Geeks (And Dad of 3)</title><url>http://www.wired.com/wiredenterprise/2012/03/mr-linux/</url></story><parent_chain><item><author>MatthewPhillips</author><text>Which of those will be remembered in 200 years. Wanna be the richest guy in the graveyard?</text></item><item><author>pessimist</author><text>RedHat's $1m stock is his only big payout, apparently. Rather sad, considering the number of mid-level facebook engineers who will become many times as rich in a few months.</text></item></parent_chain><comment><author>icandoitbetter</author><text>Being the famous guy in the graveyard isn't particularly appealing either.</text></comment> |
11,401,396 | 11,400,658 | 1 | 3 | 11,400,179 | train | <story><title>Google Shutting Down Wallet Card</title><url>http://www.google.com/wallet/card/#faq</url></story><parent_chain><item><author>grahamburger</author><text>This is the first of Google&#x27;s product cancellations that will affect me personally. My wife and I have been using Google Wallet cards for budgeting - we transfer our spending money to the cards each week to help track our spending. Honestly there&#x27;s no reason Google Wallet was superior for this - I expect Simple will solve our problems better anyway. Should have seen this coming when they split it off of Android Pay.</text></item></parent_chain><comment><author>JumpCrisscross</author><text>It&#x27;s a UI layer on top of an old-fashioned bank. It offers worse rates, perks, benefits and service than Ally, Schwab or Fidelity. Good if you want something pretty on the side, I suppose.</text></comment> | <story><title>Google Shutting Down Wallet Card</title><url>http://www.google.com/wallet/card/#faq</url></story><parent_chain><item><author>grahamburger</author><text>This is the first of Google&#x27;s product cancellations that will affect me personally. My wife and I have been using Google Wallet cards for budgeting - we transfer our spending money to the cards each week to help track our spending. Honestly there&#x27;s no reason Google Wallet was superior for this - I expect Simple will solve our problems better anyway. Should have seen this coming when they split it off of Android Pay.</text></item></parent_chain><comment><author>noobiemcfoob</author><text>+1 for Simple!</text></comment> |
15,837,242 | 15,836,895 | 1 | 3 | 15,831,978 | train | <story><title>If your iOS 11 device unexpectedly restarts repeatedly on or after December 2</title><url>https://support.apple.com/en-us/HT208332</url></story><parent_chain><item><author>e1ghtSpace</author><text>This comment is written on iOS 6.1.3 on my iPad 2. It might not be able to run most new apps now, but at least it doesn&#x27;t lag. Plus, the OS actually looks nice.</text></item><item><author>wizardforhire</author><text>This comment is written on 8.4.1... If I need to use a hammer I don&#x27;t want to have to upgrade it... And when I do upgrade it I don&#x27;t want the head to fly off or the handle to break when I&#x27;m trying to drive a nail.</text></item><item><author>qq66</author><text>This comment is written on iOS 9. Still don&#x27;t feel like I&#x27;m missing anything.</text></item><item><author>labster</author><text>This comment is written on iOS 10. I just wish there was a way to turn off the upgrade nag screen. Nothing better than asking for your passkey in a modal while you&#x27;re trying to get shit done.</text></item><item><author>lostgame</author><text>You know what would help shit like this?<p>Letting us downgrade iOS.<p>I get it. It means potential security exploits.<p>But at least let us make the choice wether we’d prefer those to shiny new, and different issues that seem to be growing in number.<p>Please?
No?<p>Fine, I’ll wait for the jailbreak and collect the goddamn SHSH blobs or whatever, if that’s how far you’ll make me go.<p>My, how lucky you are that GarageBand iOS has no peers (namely its integration with Logic), or I’d downgrade to a dumb phone and start carrying a palm pilot again.</text></item></parent_chain><comment><author>bsaul</author><text>You’re kind of right. Trouble started with iOS 7. That’s when the focus went from usability ( which, btw, includes performance) to shiny and disturbing animations nobody cares about.<p>And now i can’t even modify text properly because the drag n drop interfeers with cursor positionning.</text></comment> | <story><title>If your iOS 11 device unexpectedly restarts repeatedly on or after December 2</title><url>https://support.apple.com/en-us/HT208332</url></story><parent_chain><item><author>e1ghtSpace</author><text>This comment is written on iOS 6.1.3 on my iPad 2. It might not be able to run most new apps now, but at least it doesn&#x27;t lag. Plus, the OS actually looks nice.</text></item><item><author>wizardforhire</author><text>This comment is written on 8.4.1... If I need to use a hammer I don&#x27;t want to have to upgrade it... And when I do upgrade it I don&#x27;t want the head to fly off or the handle to break when I&#x27;m trying to drive a nail.</text></item><item><author>qq66</author><text>This comment is written on iOS 9. Still don&#x27;t feel like I&#x27;m missing anything.</text></item><item><author>labster</author><text>This comment is written on iOS 10. I just wish there was a way to turn off the upgrade nag screen. Nothing better than asking for your passkey in a modal while you&#x27;re trying to get shit done.</text></item><item><author>lostgame</author><text>You know what would help shit like this?<p>Letting us downgrade iOS.<p>I get it. It means potential security exploits.<p>But at least let us make the choice wether we’d prefer those to shiny new, and different issues that seem to be growing in number.<p>Please?
No?<p>Fine, I’ll wait for the jailbreak and collect the goddamn SHSH blobs or whatever, if that’s how far you’ll make me go.<p>My, how lucky you are that GarageBand iOS has no peers (namely its integration with Logic), or I’d downgrade to a dumb phone and start carrying a palm pilot again.</text></item></parent_chain><comment><author>dotancohen</author><text>Just to be snark, this comment written in w3m&#x2F;0.5.3.</text></comment> |
25,905,602 | 25,905,009 | 1 | 3 | 25,903,873 | train | <story><title>Gabe Newell on brain-computer interface technology</title><url>https://www.tvnz.co.nz/one-news/new-zealand/gabe-newell-says-brain-computer-interface-tech-allow-video-games-far-beyond-human-meat-peripherals-can-comprehend</url></story><parent_chain></parent_chain><comment><author>johnnybaptist</author><text>I work at OpenBCI. It&#x27;s been great getting to work with Gabe and the Valve team. Can&#x27;t overstate how unique they are as partners on a project like this. Also cool to see OpenBCI (sortof) in the top 10 today :)<p>Happy to answer any questions people have about OpenBCI
<a href="https:&#x2F;&#x2F;openbci.com&#x2F;" rel="nofollow">https:&#x2F;&#x2F;openbci.com&#x2F;</a></text></comment> | <story><title>Gabe Newell on brain-computer interface technology</title><url>https://www.tvnz.co.nz/one-news/new-zealand/gabe-newell-says-brain-computer-interface-tech-allow-video-games-far-beyond-human-meat-peripherals-can-comprehend</url></story><parent_chain></parent_chain><comment><author>offtop5</author><text>This scares me a bit.<p>Let&#x27;s say you have certain nero indicators which indicate person X may do Y .<p>Do you we then minority report lock up people.<p>At the same time , it&#x27;s going to be amazing for people in a coma or unable to communicate. We could have a fantastic amazing world, where we effectively live forever via some type of matrix like interface.</text></comment> |
38,797,560 | 38,797,821 | 1 | 2 | 38,795,308 | train | <story><title>Mozilla 2023 annual report: CEO pay skyrockets, Firefox market share nosedives</title><url>https://lunduke.locals.com/post/5053290/mozilla-2023-annual-report-ceo-pay-skyrockets-while-firefox-marketshare-nosedives</url></story><parent_chain><item><author>mrazomor</author><text>&gt; a large part of the modern internet users are mobile-only, and the amount of people who use anything but what Google tells them to (or Apple allows them to) is vanishingly small.<p>I&#x27;m a Chrome user. Both on desktop and mobile because of the built in syncing.<p>If I were able to switch to Firefox mobile (Android), I would. But the rendering is often broken or awkwardly different on Firefox mobile. I thought this is a thing of the past...</text></item><item><author>Semaphor</author><text>I posted some stats from our website a few weeks ago [0], GA was heavily undercounting FF compared to our server-side stats based on UAs.<p>One other thing to remember, is to check the falling of desktop usage, because a large part of the modern internet users are mobile-only, and the amount of people who use anything but what Google tells them to (or Apple allows them to) is vanishingly small.<p>[0]: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=38533109">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=38533109</a></text></item><item><author>woodruffw</author><text>In a previous thread (these &quot;Mozilla is dead&quot; threads appear perennially) someone pointed out that Firefox&#x27;s apparent marketshare drop is potentially indistinguishable from their deployment of privacy-improving features, including stubbing out Google Analytics when &quot;Enhanced Tracking Protection&quot; is enabled.<p>I&#x27;m a Firefox user, so I have a vested interest in Mozilla&#x27;s long term health and financial viability. But &quot;marketshare nosedives&quot; appears to be primarily an editorialization to fit the post&#x27;s larger narrative.</text></item></parent_chain><comment><author>miah_</author><text>Firefox on android also has syncing, you can open up the &#x27;share&#x27; and send pages directly to your various firefox browsers. It will sync passwords etc. I haven&#x27;t had any rendering issues and have been using FFM-Nightly for years now.</text></comment> | <story><title>Mozilla 2023 annual report: CEO pay skyrockets, Firefox market share nosedives</title><url>https://lunduke.locals.com/post/5053290/mozilla-2023-annual-report-ceo-pay-skyrockets-while-firefox-marketshare-nosedives</url></story><parent_chain><item><author>mrazomor</author><text>&gt; a large part of the modern internet users are mobile-only, and the amount of people who use anything but what Google tells them to (or Apple allows them to) is vanishingly small.<p>I&#x27;m a Chrome user. Both on desktop and mobile because of the built in syncing.<p>If I were able to switch to Firefox mobile (Android), I would. But the rendering is often broken or awkwardly different on Firefox mobile. 
I thought this is a thing of the past...</text></item><item><author>Semaphor</author><text>I posted some stats from our website a few weeks ago [0], GA was heavily undercounting FF compared to our server-side stats based on UAs.<p>One other thing to remember, is to check the falling of desktop usage, because a large part of the modern internet users are mobile-only, and the amount of people who use anything but what Google tells them to (or Apple allows them to) is vanishingly small.<p>[0]: <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=38533109">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=38533109</a></text></item><item><author>woodruffw</author><text>In a previous thread (these &quot;Mozilla is dead&quot; threads appear perennially) someone pointed out that Firefox&#x27;s apparent marketshare drop is potentially indistinguishable from their deployment of privacy-improving features, including stubbing out Google Analytics when &quot;Enhanced Tracking Protection&quot; is enabled.<p>I&#x27;m a Firefox user, so I have a vested interest in Mozilla&#x27;s long term health and financial viability. But &quot;marketshare nosedives&quot; appears to be primarily an editorialization to fit the post&#x27;s larger narrative.</text></item></parent_chain><comment><author>Nexxxeh</author><text>Are you using uBlock Origin in Firefox on Android? If not, then try it.<p>You&#x27;d have to prise it from my cold dead hands. By far the best mobile browsing experience.<p>Not entirely perfect, there&#x27;s a bug where I&#x27;ll occasionally get a grey screen on a tab. Hasn&#x27;t happened enough for me to do anything about tho.<p>But still orders of magnitude better than my experience with Chrome on Android.</text></comment> |
26,366,528 | 26,366,299 | 1 | 2 | 26,364,285 | train | <story><title>Burned House Horizon</title><url>https://en.wikipedia.org/wiki/Burned_house_horizon</url></story><parent_chain><item><author>Ansil849</author><text>&gt; Tangential to this article, but can I just say that this is HN at its best. Random, curious, snippets of learning on topics that I can’t imagine hearing about anywhere else.<p>The submission, I agree is the best of HN; the comments, however, are the worst.<p>Tons of &#x27;bikeshedding&#x27; comments offering flippant uninformed analyses that belittle established and well-researched theories. By offering &#x27;back of a cocktail napkin&#x27; comments like &#x27;I think such and such theory doesn&#x27;t get enough attention because of my random opinion&#x27;, or &#x27;I think such and such likely happened, based on me just ruminating about this for a few minutes, versus doing years of research on&#x27;; the comments are tacitly disrespecting the work researchers whose specialty this is put into formulating theories.<p>We HN users need to learn to just say &#x27;we don&#x27;t know&#x27;, instead of playing internet experts in every single discipline a given submission is about.</text></item><item><author>Townley</author><text>Tangential to this article, but can I just say that this is HN at its best. Random, curious, snippets of learning on topics that I can’t imagine hearing about anywhere else. It’s a pure love of knowledge that leads thousands of people to spend Friday night reading a largely useless Wikipedia entry on house burning just because it’s mystery</text></item></parent_chain><comment><author>scribu</author><text>I don&#x27;t see any &quot;belittling&quot; comments here, except perhaps yours.<p>People enjoy talking about the stuff they read. Discussion is part of how one learns.<p>If you don&#x27;t find value in reading non-expert commentary, then don&#x27;t read it.</text></comment> | <story><title>Burned House Horizon</title><url>https://en.wikipedia.org/wiki/Burned_house_horizon</url></story><parent_chain><item><author>Ansil849</author><text>&gt; Tangential to this article, but can I just say that this is HN at its best. Random, curious, snippets of learning on topics that I can’t imagine hearing about anywhere else.<p>The submission, I agree is the best of HN; the comments, however, are the worst.<p>Tons of &#x27;bikeshedding&#x27; comments offering flippant uninformed analyses that belittle established and well-researched theories. By offering &#x27;back of a cocktail napkin&#x27; comments like &#x27;I think such and such theory doesn&#x27;t get enough attention because of my random opinion&#x27;, or &#x27;I think such and such likely happened, based on me just ruminating about this for a few minutes, versus doing years of research on&#x27;; the comments are tacitly disrespecting the work researchers whose specialty this is put into formulating theories.<p>We HN users need to learn to just say &#x27;we don&#x27;t know&#x27;, instead of playing internet experts in every single discipline a given submission is about.</text></item><item><author>Townley</author><text>Tangential to this article, but can I just say that this is HN at its best. Random, curious, snippets of learning on topics that I can’t imagine hearing about anywhere else. 
It’s a pure love of knowledge that leads thousands of people to spend Friday night reading a largely useless Wikipedia entry on house burning just because it’s mystery</text></item></parent_chain><comment><author>chucky</author><text>This seems like a more general problem with the Internet or even of people (and in particular men of a more technical and nerdy persuasion, in my own experience).<p>The majority have nothing of value to say, so will post nothing. The only ones who can realistically offer comments on this article are experts and those who will offer &quot;back of a cocktail napkin&quot; comments (as you put it), and the latter group is so much larger than the former.</text></comment> |
27,810,854 | 27,809,314 | 1 | 2 | 27,807,570 | train | <story><title>I wrote a 231-byte Brainfuck compiler by abusing everything</title><url>https://briancallahan.net/blog/20210710.html</url></story><parent_chain></parent_chain><comment><author>panic</author><text>Here&#x27;s an even smaller one for 64-bit Linux: <a href="https:&#x2F;&#x2F;gist.github.com&#x2F;ianh&#x2F;61e6219f0a9866b31a2b864033c1812e" rel="nofollow">https:&#x2F;&#x2F;gist.github.com&#x2F;ianh&#x2F;61e6219f0a9866b31a2b864033c1812...</a><p><pre><code> $ uname -a
Linux personal-1 4.19.0-6-cloud-amd64 #1 SMP Debian 4.19.67-2 (2019-08-28) x86_64 GNU&#x2F;Linux
$ as -g -o bf.o bf.S &amp;&amp; ld -o bf bf.o
$ size bf.o
text data bss dec hex filename
167 0 0 167 a7 bf.o
$ size bf
text data bss dec hex filename
199 0 0 199 c7 bf</code></pre></text></comment> | <story><title>I wrote a 231-byte Brainfuck compiler by abusing everything</title><url>https://briancallahan.net/blog/20210710.html</url></story><parent_chain></parent_chain><comment><author>xg15</author><text>a 231-byte Brainfuck-to-C compiler*<p>It&#x27;s still an impressive feat, and very informative on the assembly details - but doesn&#x27;t feel as incredible as the headline makes believe as the core logic seems to be a string search-and-replace of the strings in the &quot;reviewing brainfuck&quot; table.<p>Seems to me, you could write the next brainfuck compiler in sed.</text></comment> |
17,854,856 | 17,854,602 | 1 | 2 | 17,850,836 | train | <story><title>Solve Less General Problems</title><url>https://hacksoflife.blogspot.com/2018/08/solve-less-general-problems.html</url></story><parent_chain><item><author>elvinyung</author><text>The interesting thing is that the programmer&#x27;s tendency to abstract and future-proof their code mirrors the late 19th and early 20th centuries&#x27; high modernism movements, as James C. Scott analyzes in <i>Seeing Like A State</i> and Michel Foucault discusses in depth in <i>Discipline and Punish</i> and others.<p>&quot;Modernism&quot; is basically characterized by attempts to predict and control the future, to rationalize and systematize chaos. In the past this was done with things like top-down centrally planned economies and from-scratch redesigns of entire cities (e.g. Le Corbusier&#x27;s Radiant City), even rearchitectures of time itself (the French Revolution decimal time). The same kinds of &quot;grand plans&quot; are repeated in today&#x27;s ambitious software engineering abstractions, which almost never survive contact with reality.</text></item></parent_chain><comment><author>twic</author><text>&gt; The same kinds of &quot;grand plans&quot; are repeated in today&#x27;s ambitious software engineering abstractions, which almost never survive contact with reality.<p>Yonks ago, i was reading an introduction to metaphysics at the same time as an introduction to Smalltalk, and it struck me that the metaphysicians&#x27; pursuit of ontology was quite similar to the Smalltalker&#x27;s construction of their class hierarchy. The crucial difference, it seemed to me, was that the metaphysicians thought they were pursuing a truth that existed beyond them, whereas the Smalltalkers were aware that they were creating something from within them.<p>It&#x27;s very likely that my understanding of one or both sides was wrong. But ever since then, i&#x27;ve always seen the process of abstraction in software as <i>creating something useful</i> rather than <i>discovering something true</i>. A consequence of that is being much more willing to throw away abstractions that aren&#x27;t working, but also to accept that it&#x27;s okay for an abstraction to be imperfect, if it&#x27;s still useful.<p>I, probably arrogantly, speculate that metaphysicians would benefit from adopting the Smalltalkers&#x27; worldview.<p>I, perhaps incorrectly, think that the ontologists&#x27; delusion is endemic in the academic functional programming [1] community.<p>[1] Haskell</text></comment> | <story><title>Solve Less General Problems</title><url>https://hacksoflife.blogspot.com/2018/08/solve-less-general-problems.html</url></story><parent_chain><item><author>elvinyung</author><text>The interesting thing is that the programmer&#x27;s tendency to abstract and future-proof their code mirrors the late 19th and early 20th centuries&#x27; high modernism movements, as James C. Scott analyzes in <i>Seeing Like A State</i> and Michel Foucault discusses in depth in <i>Discipline and Punish</i> and others.<p>&quot;Modernism&quot; is basically characterized by attempts to predict and control the future, to rationalize and systematize chaos. In the past this was done with things like top-down centrally planned economies and from-scratch redesigns of entire cities (e.g. Le Corbusier&#x27;s Radiant City), even rearchitectures of time itself (the French Revolution decimal time). 
The same kinds of &quot;grand plans&quot; are repeated in today&#x27;s ambitious software engineering abstractions, which almost never survive contact with reality.</text></item></parent_chain><comment><author>panic</author><text>And if these abstractions do survive contact with reality, it&#x27;s often reality that has to change to adapt to the abstraction. You find yourself saying, &quot;the computer can&#x27;t do that&quot;, not for any fundamental reason, but because the abstraction wasn&#x27;t designed with that in mind. The unfortunate users have to change the way they work to match the abstraction.<p><i>Seeing Like a State</i> describes a similar situation, where people had to give themselves last names so they could be taxed reliably. I wonder how many people have had to enter their Facebook name differently for it to be considered a &quot;real name&quot;? Software that &quot;sees&quot; the world in a particular way is a lot like a state bureaucracy that &quot;sees&quot; the world in a particular way, especially when this software is given power.</text></comment> |
18,749,600 | 18,749,635 | 1 | 3 | 18,749,330 | train | <story><title>Ask HN: What happened to Pinboard?</title><text>Over the last year or so, pinboard.in&#x27;s paid archiving service has become increasingly sporadic. The process used to be nearly instant, but bookmarks are crawled maybe once a month now and frequently fail to archive correctly or are skipped completely.<p>From Pinboard&#x27;s Twitter mentions, I can see I&#x27;m not the only one who has been trying to contact Maciej for several months without success. A lot of former delicious users are also desperate to get an export of their data, which is still unavailable over a year and a half after Pinboard purchased and shut down the site.<p>Overall, it&#x27;s left a pretty bad taste in my mouth. I decided in September to leave the service, but emailing is the only way to request a full download of archived bookmarks. I&#x27;d also like to get a refund on the next four years of my archival subscription - I paid for five years in advance because I was so happy with the site before now.<p>I know Maciej reads HN, and there seem to be a number of Pinboard users here (it&#x27;s still mentioned just about anytime somebody posts about bookmarking). Has anybody else had any luck getting a refund and a copy of their data? What are the best Pinboard replacements, self-hosted or otherwise?</text></story><parent_chain></parent_chain><comment><author>heinrichf</author><text>From the FAQ of pinboard.in (<a href="http:&#x2F;&#x2F;pinboard.in&#x2F;faq&#x2F;#anxieties" rel="nofollow">http:&#x2F;&#x2F;pinboard.in&#x2F;faq&#x2F;#anxieties</a>):<p>&gt; What happens if the guy who runs Pinboard gets hit by a bus?<p>&gt; The bus is likely to be fine. They don&#x27;t go very fast and are designed with passenger safety in mind.</text></comment> | <story><title>Ask HN: What happened to Pinboard?</title><text>Over the last year or so, pinboard.in&#x27;s paid archiving service has become increasingly sporadic. The process used to be nearly instant, but bookmarks are crawled maybe once a month now and frequently fail to archive correctly or are skipped completely.<p>From Pinboard&#x27;s Twitter mentions, I can see I&#x27;m not the only one who has been trying to contact Maciej for several months without success. A lot of former delicious users are also desperate to get an export of their data, which is still unavailable over a year and a half after Pinboard purchased and shut down the site.<p>Overall, it&#x27;s left a pretty bad taste in my mouth. I decided in September to leave the service, but emailing is the only way to request a full download of archived bookmarks. I&#x27;d also like to get a refund on the next four years of my archival subscription - I paid for five years in advance because I was so happy with the site before now.<p>I know Maciej reads HN, and there seem to be a number of Pinboard users here (it&#x27;s still mentioned just about anytime somebody posts about bookmarking). Has anybody else had any luck getting a refund and a copy of their data? What are the best Pinboard replacements, self-hosted or otherwise?</text></story><parent_chain></parent_chain><comment><author>cogs</author><text>I cancelled my subscription last year after similar problems with full text search.<p>I did get a download of my data, albeit it took a while, after requesting it from my settings page.<p>I&#x27;ve never had a reply to any of my emails to him. 
Alhough I did eventually get a reply on twitter, which said &quot;Sorry for the terrible support&quot; But then I heard nothing more, and got no replies to further emails either.<p>I think he&#x27;s too busy being a celebrity. And it&#x27;s a shame because he was kind of a figure head for independents doing a good job, and I was a bit of an evangelist. Oh well.</text></comment> |
7,152,182 | 7,152,411 | 1 | 2 | 7,150,645 | train | <story><title>So Singletons are bad, then what?</title><url>http://programmers.stackexchange.com/questions/40373/so-singletons-are-bad-then-what</url></story><parent_chain><item><author>shitgoose</author><text>Hilarious! Doing x=C.getX() is bad, but if it is hidden by numerous layers of libraries and monstrous config files, somehow it becomes acceptable. Out of site, out of mind. The fact that global scope bean is essentially a singleton, doesn&#x27;t seem to bother architecturally inclined crowd - they are too busy admiring sound of their own voice pronouncing words &quot;dependency injection&quot;, &quot;mutability&quot; and &quot;coupling&quot;.<p>The top answer is a perfect example of what is wrong with IT today. It takes a working solution, declares it wrong and starts piling up classes and interfaces to solve a problem, that was never a problem in first place (OP never said that their singleton-based cache didn&#x27;t work, he merely asked if there are &quot;better&quot; ways of doing it). So in the end we have the same singleton cache, but hidden behind interfaces (&quot;It makes the code easier to read&quot; - yea, right, easier, my ass! Ctrl+click on interface method and try to read the code), thousand lines xml Spring configs, and other crap that is completely irrelevant, hard to follow and debug, but glamorous enough for SOA boys to spend endless hours talking about it.</text></item></parent_chain><comment><author>Sandman</author><text>Doing x=C.get(x) is bad because there&#x27;s no way of mocking C, thereby making unit testing a component that uses C impossible (the unit test will need to use the concrete implementation of C). Using dependency injection as described in the accepted answer, and particularly separating the concerns of C by using several different interfaces allows you to not only mock C, but actually create mock classes each of which is mocking one particular logical subset of C&#x27;s functionalities.<p>It&#x27;s easy to dismiss all this as &quot;bloated&quot; and &quot;enterprisey&quot; and people that use this as &quot;architecture astronauts&quot; but in reality, this pattern really does help. Well, at least if you want to be able to unit test your code properly. Otherwise, you might as well just use a singleton object.</text></comment> | <story><title>So Singletons are bad, then what?</title><url>http://programmers.stackexchange.com/questions/40373/so-singletons-are-bad-then-what</url></story><parent_chain><item><author>shitgoose</author><text>Hilarious! Doing x=C.getX() is bad, but if it is hidden by numerous layers of libraries and monstrous config files, somehow it becomes acceptable. Out of site, out of mind. The fact that global scope bean is essentially a singleton, doesn&#x27;t seem to bother architecturally inclined crowd - they are too busy admiring sound of their own voice pronouncing words &quot;dependency injection&quot;, &quot;mutability&quot; and &quot;coupling&quot;.<p>The top answer is a perfect example of what is wrong with IT today. It takes a working solution, declares it wrong and starts piling up classes and interfaces to solve a problem, that was never a problem in first place (OP never said that their singleton-based cache didn&#x27;t work, he merely asked if there are &quot;better&quot; ways of doing it). So in the end we have the same singleton cache, but hidden behind interfaces (&quot;It makes the code easier to read&quot; - yea, right, easier, my ass! 
Ctrl+click on interface method and try to read the code), thousand lines xml Spring configs, and other crap that is completely irrelevant, hard to follow and debug, but glamorous enough for SOA boys to spend endless hours talking about it.</text></item></parent_chain><comment><author>bd_at_rivenhill</author><text>I&#x27;ve never really understood the point of most of the Spring usage I&#x27;ve seen; seems like most people want to replace Java code that is type-checked and can throw exceptions at reasonable places when something goes wrong with XML &#x27;code&#x27; that has neither of these properties and thus turns debugging into a black art. Apparently, this is the right way to do things.<p>The only reasonable use case I&#x27;ve encountered in practice is the ability to replace some major components of your program with stubs for use in automated testing.</text></comment> |
14,784,937 | 14,784,267 | 1 | 2 | 14,783,516 | train | <story><title>San Francisco's VC Boom Is Over</title><url>https://www.bloomberg.com/view/articles/2017-07-16/san-francisco-s-vc-boom-is-over?utm_source=yahoo&utm_medium=bd&utm_campaign=headline&trackingId=yhoo.headline&yptr=yahoo</url></story><parent_chain><item><author>scythe</author><text>Bridge traffic + BART congestion means that it&#x27;s easier for me as a resident of San Francisco to visit Palo Alto than Oakland.<p>Maybe not how it should be, but certainly how it is; with a new Bay crossing slated for 2040 at the earliest it&#x27;ll be like that for a while.</text></item><item><author>JumpCrisscross</author><text>Same thing happens in New York with the Bronx and New Jersey. I think there&#x27;s a psychological stopping power to water that makes Palo Alto and San Francisco seem closer than Oakland (or the Upper West Side feel closer to Flatiron than New Jersey).</text></item><item><author>santaclaus</author><text>I have friends who have lived in a San Francisco (and Berkeley!) for years who are too sketched out to visit Oakland, weirdly.</text></item><item><author>austenallred</author><text>And Oakland is nonexistent entirely!</text></item><item><author>rdl</author><text>Hard not to LOL at SF and Silicon Valley being treated as separate markets for funding. For extremely local services, sure (like dry cleaning or a coffeeshop), they&#x27;re different markets. Maybe even for a daily-commute job (although that&#x27;s debatable). For fund raising, I don&#x27;t think there&#x27;s anyone unwilling to Uber&#x2F;drive&#x2F;BART&#x2F;caltrain an extra 20 miles for $1mm to $1b in funding. You meet investors monthly at most, and more likely quarterly or annually.</text></item></parent_chain><comment><author>sulam</author><text>Sorry, this is totally not true. It&#x27;s harder to DRIVE to the East Bay than to arbitrary points south (not Palo Alto), but public transit is generally much better to the East Bay. And most of the driving congestion is simply getting onto the bridge, which I admit is non-trivial (especially with Giants day games thrown into the mix), but once you&#x27;re on the bridge it&#x27;s markedly better than 101 and roughly the same as 280. 280 is still a prettier drive, though!</text></comment> | <story><title>San Francisco's VC Boom Is Over</title><url>https://www.bloomberg.com/view/articles/2017-07-16/san-francisco-s-vc-boom-is-over?utm_source=yahoo&utm_medium=bd&utm_campaign=headline&trackingId=yhoo.headline&yptr=yahoo</url></story><parent_chain><item><author>scythe</author><text>Bridge traffic + BART congestion means that it&#x27;s easier for me as a resident of San Francisco to visit Palo Alto than Oakland.<p>Maybe not how it should be, but certainly how it is; with a new Bay crossing slated for 2040 at the earliest it&#x27;ll be like that for a while.</text></item><item><author>JumpCrisscross</author><text>Same thing happens in New York with the Bronx and New Jersey. I think there&#x27;s a psychological stopping power to water that makes Palo Alto and San Francisco seem closer than Oakland (or the Upper West Side feel closer to Flatiron than New Jersey).</text></item><item><author>santaclaus</author><text>I have friends who have lived in a San Francisco (and Berkeley!) 
for years who are too sketched out to visit Oakland, weirdly.</text></item><item><author>austenallred</author><text>And Oakland is nonexistent entirely!</text></item><item><author>rdl</author><text>Hard not to LOL at SF and Silicon Valley being treated as separate markets for funding. For extremely local services, sure (like dry cleaning or a coffeeshop), they&#x27;re different markets. Maybe even for a daily-commute job (although that&#x27;s debatable). For fund raising, I don&#x27;t think there&#x27;s anyone unwilling to Uber&#x2F;drive&#x2F;BART&#x2F;caltrain an extra 20 miles for $1mm to $1b in funding. You meet investors monthly at most, and more likely quarterly or annually.</text></item></parent_chain><comment><author>russell_h</author><text>At what time of day? Currently Google maps shows a 20 minute drive from SOMA to downtown Oakland vs 40 minutes to PA. If you&#x27;re trying to use public transit obviously the difference will be even greater.</text></comment> |
24,296,117 | 24,296,103 | 1 | 2 | 24,295,195 | train | <story><title>Even Google engineers are confused about Google’s privacy settings</title><url>https://www.theverge.com/2020/8/26/21403202/google-engineers-privacy-settings-lawsuit-arizona-doubleclick</url></story><parent_chain><item><author>Spooky23</author><text>There&#x27;s a usability aspect to this.<p>If you disable location services for Yelp, it&#x27;s pretty unfriendly -- it chooses to make it as difficult as possible for you to use the app to get you to turn location back on. The Google approach for Maps is more pragmatic IMO -- lower resolution location data makes sure I don&#x27;t get a McDonald&#x27;s in Finland when I&#x27;m in Kentucky.<p>The hard thing is that I don&#x27;t want Yelp, Google, etc tracking my movements in a 10-meter radius forever, but I want location from a contextual perspective. I don&#x27;t think you can do that without meaningful policy controls outside of your local computer.</text></item></parent_chain><comment><author>ocdtrekkie</author><text>Apple is fixing this in iOS 14. It allows you to give an app access to only approximate location.<p>Note that if your location history (precise every five minute location stored to your account) is off, Google won&#x27;t even suggest a locally-stored list of previous locations you used with Maps. Which is an absolutist position that is usability-wise incredibly frustrating.</text></comment> | <story><title>Even Google engineers are confused about Google’s privacy settings</title><url>https://www.theverge.com/2020/8/26/21403202/google-engineers-privacy-settings-lawsuit-arizona-doubleclick</url></story><parent_chain><item><author>Spooky23</author><text>There&#x27;s a usability aspect to this.<p>If you disable location services for Yelp, it&#x27;s pretty unfriendly -- it chooses to make it as difficult as possible for you to use the app to get you to turn location back on. The Google approach for Maps is more pragmatic IMO -- lower resolution location data makes sure I don&#x27;t get a McDonald&#x27;s in Finland when I&#x27;m in Kentucky.<p>The hard thing is that I don&#x27;t want Yelp, Google, etc tracking my movements in a 10-meter radius forever, but I want location from a contextual perspective. I don&#x27;t think you can do that without meaningful policy controls outside of your local computer.</text></item></parent_chain><comment><author>gruez</author><text>&gt;The hard thing is that I don&#x27;t want Yelp, Google, etc tracking my movements in a 10-meter radius forever, but I want location from a contextual perspective. I don&#x27;t think you can do that without meaningful policy controls outside of your local computer.<p>Isn&#x27;t this just a separate location permission for coarse location (which both OS), and allowing the user to choose between the two (not on either OS, but in iOS 14 beta)?</text></comment> |
7,851,540 | 7,850,845 | 1 | 3 | 7,850,322 | train | <story><title>Tor Challenge</title><url>https://www.eff.org/torchallenge/</url></story><parent_chain><item><author>xwintermutex</author><text>I would like to run a Tor exit-relay, but I am too afraid to do this, as I live in a what used to be liberal western country called &quot;the Netherlands&quot;. Where the police sometimes raid Tor exit-relays on purpose, to discourage people from helping Tor [1].<p>[1]: <a href="https://blog.torproject.org/blog/trip-report-tor-trainings-dutch-and-belgian-police" rel="nofollow">https:&#x2F;&#x2F;blog.torproject.org&#x2F;blog&#x2F;trip-report-tor-trainings-d...</a></text></item></parent_chain><comment><author>Ihmahr</author><text>Dutch person here. I operate two 10MB exit relays from my home, where I live with my family. Have been doing this for a year now, never had any real problems. My ISP is SurfNet. Also xs4all said today that they would permit exit relays: <a href="https://twitter.com/xs4all/statuses/474514247222067200" rel="nofollow">https:&#x2F;&#x2F;twitter.com&#x2F;xs4all&#x2F;statuses&#x2F;474514247222067200</a><p>Also, Dutch police has never actually raided a private domicile where there was an exit relay.<p>One advantage to running an exit relay from your home is that there is a lot of garbage traffic coming from your address, which I really like because it hides me a little bit more.</text></comment> | <story><title>Tor Challenge</title><url>https://www.eff.org/torchallenge/</url></story><parent_chain><item><author>xwintermutex</author><text>I would like to run a Tor exit-relay, but I am too afraid to do this, as I live in a what used to be liberal western country called &quot;the Netherlands&quot;. Where the police sometimes raid Tor exit-relays on purpose, to discourage people from helping Tor [1].<p>[1]: <a href="https://blog.torproject.org/blog/trip-report-tor-trainings-dutch-and-belgian-police" rel="nofollow">https:&#x2F;&#x2F;blog.torproject.org&#x2F;blog&#x2F;trip-report-tor-trainings-d...</a></text></item></parent_chain><comment><author>luxpir</author><text>The comments there took a dark turn very quickly. Shame the thread&#x27;s closed, with no counterpoints offered before doing so.<p>See also these two independent reports, one written, one an audio recording of a presentation, by a Dutch activist I happened across:<p>- <a href="http://raided4tor.cryto.net/" rel="nofollow">http:&#x2F;&#x2F;raided4tor.cryto.net&#x2F;</a> [text]<p>- <a href="https://archive.org/details/OHM2013-Partyvan" rel="nofollow">https:&#x2F;&#x2F;archive.org&#x2F;details&#x2F;OHM2013-Partyvan</a> [audio]<p>EDIT: To sum up, the activist advises that activism is not something to be taken lightly, and there is a real cost to being targeted by law enforcement. I know relays are different to exits, but it&#x27;s not encouraging when you want to assist!</text></comment> |
28,614,264 | 28,614,350 | 1 | 2 | 28,610,086 | train | <story><title>Confessions of a Michelin Star Inspector</title><url>https://www.luxeat.com/blog/confessions-of-a-michelin-inspector/</url></story><parent_chain><item><author>pjc50</author><text>&gt; The difference would be completely lost on them.<p>I&#x27;ve definitely heard the life advice to avoid trying this kind of thing in case you <i>do</i> like it, then everything less is permanently tainted with the knowledge that it&#x27;s not as good.<p>However, western super-high-price dining tends to be one or both of &quot;demoscene for food&quot; or &quot;NFTs for food&quot;<p>- demoscene: a series of increasingly elaborate and labor-intensive techniques for showcasing skill in a constrained environment. The Heston Blumenthal&#x2F;molecular gastronomy approach.<p>- NFTs: the process entirely about conspicuous consumption; the value of the food is determined by how many resources were expended in the process, not what it actually tastes like. Wines in particular have a value determined by rarity and the secondary market.<p>The best compromise is probably found around the &quot;twice as much as a chain restaurant&quot; price point; enough to find quality ingredients and staff who aren&#x27;t too rushed off their feet, not enough that you start getting weird stunts.<p>(a fun read: Jay Rayner&#x27;s negative review of the Polo Lounge <a href="https:&#x2F;&#x2F;www.theguardian.com&#x2F;food&#x2F;2021&#x2F;aug&#x2F;15&#x2F;the-polo-lounge-at-the-dorchester-hotel-dismal-food-at-eye-popping-prices-restaurant-review" rel="nofollow">https:&#x2F;&#x2F;www.theguardian.com&#x2F;food&#x2F;2021&#x2F;aug&#x2F;15&#x2F;the-polo-lounge...</a> )</text></item><item><author>jiggawatts</author><text>I&#x27;ve been to two Michelin-starred Sushi places in Japan. I thought they were both amazing, but I realised that my parents would not have thought so.<p>They grew up relatively poor, and still have the habit of eating at home from a rotating menu of relatively straightforward recipes. Stews, soups, that kind of thing. They&#x27;ve gone to restaurants maybe a few dozen times in their <i>lives</i>, if that.<p>I&#x27;ve been to so many Sushi restaurants in so many cities that I have a nice smooth gradient of ratings along which I can place them. I can recognise and appreciate the small touches like real wasabi or expensive sake. The fancy places in Japan extended that experience just past the upper end of what I&#x27;ve experienced before. I could place them in <i>context</i>.<p>My parents tried a Sushi train... once. A $300 sushi set would be wasted on them, because they would be just as happy with a $50 sushi set. The difference would be completely lost on them.</text></item><item><author>tomcam</author><text>Went to Paris and dined at a number of Michelin-starred places, including Le Cinq. The food was fine, but... I didn&#x27;t get it! I think there is a culinary language that I understand as little as a typical Michelin-starred chef would be able to evaluate a tightly-written C program. On the other hand I was lucky enough to dine at Poppy Seattle every year of its existence and felt their completely original approach to thalis would have absolutely merited three stars. 
Poppy was well known to be one of the best restaurants in town, but I still wonder how I would have reacted if I understand the language better.<p>To calibrate, on a scale of 1-10 I would rate Cheesecake Factory and Starbucks food at a 3, Tim Horton and McDonald&#x27;s at 4, and Olive Garden at 6. I&#x27;d put LeCinq at 8 and Poppy at 10.<p>My wife has more sophisticated taste. She though 3 stars for LeCinq made sense. I have double-blind tested her a number of times and her taste is unerring, including one trial where she correctly guessed Osetra, Beluga, and Sevruga caviar back when they were all legal, and 4 different kinds of Wagyu beef at a steakhouse.</text></item></parent_chain><comment><author>stdbrouw</author><text>&gt; I&#x27;ve definitely heard the life advice to avoid trying this kind of thing in case you do like it, then everything less is permanently tainted with the knowledge that it&#x27;s not as good.<p>Does that actually happen though? I know a thing or two about wine and sometimes buy bottles in the 50-100 euro range, which I feel are totally worth it, but I&#x27;m still happy to drink cheap plonk with friends most of the time. This works because (1) you forget about what the high end experience was like quite quickly and&#x2F;or (2) you drink it in a very specific setting so you&#x27;re not likely to conclude &quot;I now always need&#x2F;want this&quot; and (3) a lot of what makes high end cuisine &#x2F; wine &#x2F; spirits so fun to try is not just that they&#x27;re better but that they&#x27;re different from what you&#x27;re used to, the scale is not just worst-best but also ordinary-extraordinary, and you&#x27;re not always in the mood for extraordinary.<p>My guess would be that the phenomenon of permanently hankering for the good stuff after having tried it once is probably more likely to happen with everyday luxuries: good bread, slightly-above-average wine, etc. And in that case, you get a little bit more enjoyment out of your food for a little bit more money, which seems like a fair deal. But even in that case, I don&#x27;t know anyone who, after trying craft beer, has vowed to never drink a cheap pilsner again.</text></comment> | <story><title>Confessions of a Michelin Star Inspector</title><url>https://www.luxeat.com/blog/confessions-of-a-michelin-inspector/</url></story><parent_chain><item><author>pjc50</author><text>&gt; The difference would be completely lost on them.<p>I&#x27;ve definitely heard the life advice to avoid trying this kind of thing in case you <i>do</i> like it, then everything less is permanently tainted with the knowledge that it&#x27;s not as good.<p>However, western super-high-price dining tends to be one or both of &quot;demoscene for food&quot; or &quot;NFTs for food&quot;<p>- demoscene: a series of increasingly elaborate and labor-intensive techniques for showcasing skill in a constrained environment. The Heston Blumenthal&#x2F;molecular gastronomy approach.<p>- NFTs: the process entirely about conspicuous consumption; the value of the food is determined by how many resources were expended in the process, not what it actually tastes like. 
Wines in particular have a value determined by rarity and the secondary market.<p>The best compromise is probably found around the &quot;twice as much as a chain restaurant&quot; price point; enough to find quality ingredients and staff who aren&#x27;t too rushed off their feet, not enough that you start getting weird stunts.<p>(a fun read: Jay Rayner&#x27;s negative review of the Polo Lounge <a href="https:&#x2F;&#x2F;www.theguardian.com&#x2F;food&#x2F;2021&#x2F;aug&#x2F;15&#x2F;the-polo-lounge-at-the-dorchester-hotel-dismal-food-at-eye-popping-prices-restaurant-review" rel="nofollow">https:&#x2F;&#x2F;www.theguardian.com&#x2F;food&#x2F;2021&#x2F;aug&#x2F;15&#x2F;the-polo-lounge...</a> )</text></item><item><author>jiggawatts</author><text>I&#x27;ve been to two Michelin-starred Sushi places in Japan. I thought they were both amazing, but I realised that my parents would not have thought so.<p>They grew up relatively poor, and still have the habit of eating at home from a rotating menu of relatively straightforward recipes. Stews, soups, that kind of thing. They&#x27;ve gone to restaurants maybe a few dozen times in their <i>lives</i>, if that.<p>I&#x27;ve been to so many Sushi restaurants in so many cities that I have a nice smooth gradient of ratings along which I can place them. I can recognise and appreciate the small touches like real wasabi or expensive sake. The fancy places in Japan extended that experience just past the upper end of what I&#x27;ve experienced before. I could place them in <i>context</i>.<p>My parents tried a Sushi train... once. A $300 sushi set would be wasted on them, because they would be just as happy with a $50 sushi set. The difference would be completely lost on them.</text></item><item><author>tomcam</author><text>Went to Paris and dined at a number of Michelin-starred places, including Le Cinq. The food was fine, but... I didn&#x27;t get it! I think there is a culinary language that I understand as little as a typical Michelin-starred chef would be able to evaluate a tightly-written C program. On the other hand I was lucky enough to dine at Poppy Seattle every year of its existence and felt their completely original approach to thalis would have absolutely merited three stars. Poppy was well known to be one of the best restaurants in town, but I still wonder how I would have reacted if I understand the language better.<p>To calibrate, on a scale of 1-10 I would rate Cheesecake Factory and Starbucks food at a 3, Tim Horton and McDonald&#x27;s at 4, and Olive Garden at 6. I&#x27;d put LeCinq at 8 and Poppy at 10.<p>My wife has more sophisticated taste. She though 3 stars for LeCinq made sense. I have double-blind tested her a number of times and her taste is unerring, including one trial where she correctly guessed Osetra, Beluga, and Sevruga caviar back when they were all legal, and 4 different kinds of Wagyu beef at a steakhouse.</text></item></parent_chain><comment><author>Scoundreller</author><text>&gt; demoscene: a series of increasingly elaborate and labor-intensive techniques for showcasing skill in a constrained environment. The Heston Blumenthal&#x2F;molecular gastronomy approach.<p>I’d add on: anything that’s difficult&#x2F;time-consuming&#x2F;etc. to make at home even if you could, but scale makes it a lot easier.<p>I know a Ukrainian guy that, without fail, orders borscht if it’s on the menu. 
He could make it himself, but his wife forbids him from stinking up the house with beets and cabbage for the afternoon it takes to prepare.</text></comment> |
27,048,462 | 27,048,361 | 1 | 2 | 27,044,371 | train | <story><title>Please fix the AWS free tier before somebody gets hurt</title><url>https://cloudirregular.substack.com/p/please-fix-the-aws-free-tier-before</url></story><parent_chain><item><author>swiftcoder</author><text>I&#x27;m also going to point out, as a former AWS engineer, that
&quot;too hard&quot; isn&#x27;t in the AWS engineering vocabulary. If an AWS engineer were to dismiss something as &quot;too hard&quot;, they&#x27;d be on the fast track to a performance-improvement-plan.<p>The problem isn&#x27;t that it&#x27;s too hard. It&#x27;s that it isn&#x27;t a business priority. As soon as it becomes prioritised, a legion of very smart AWS engineers will solve it.</text></item><item><author>jiggawatts</author><text>Everyone: PLEASE stop making the argument that it&#x27;s &quot;too hard&quot; or even &quot;impossible&quot; to implement spending limits.<p>As the article points out: Every other cloud does this! They all have non-production subscription types with hard spending limits.<p>There&#x27;s a difference between &quot;unable&quot; and &quot;unwilling&quot;.<p>The people that refuse to understand the difference are the same people that don&#x27;t understand that &quot;unsupported&quot; isn&#x27;t synonymous with &quot;cannot be made to function&quot;.<p>Don&#x27;t be that person.<p>If you have a large account and you&#x27;re in regular contact with an AWS sales representative, <i>pressure them</i> into making this happen. Even if you work for Megacorp with a $$$ budget, keep in mind that your <i>future hires</i> need to start somewhere, need to be able to learn on their own, and need to do so <i>safely</i>.<p>Don&#x27;t walk down the same path as IBM&#x27;s mainframes, where no student anywhere ever has been able to learn on their own, making it a dead-end for corporations who pay billions for this platform. You, or your company will eventually pay this price if AWS keeps this IBM-like behaviour up.<p>Think of the big picture, not just your own immediate personal situation.<p>Apply this pressure yourself, because the students you want to hire next year can&#x27;t.</text></item></parent_chain><comment><author>crispyambulance</author><text>As an antidote to the &quot;nothing is too hard&quot; flex...<p>Everything has trade-off&#x27;s. When a responsible person says something is &quot;too hard&quot; it&#x27;s because they&#x27;ve evaluated the consequences and have deemed them too costly in terms of time, resources, cost, maintenance, or strategy&#x2F;lost-opportunity. Some things really are &quot;too hard&quot;.<p>I agree it&#x27;s not too hard technically but am surprised at your contradictory suggestion AWS would deploy &quot;a legion of very smart engineers&quot; to solve it if they decided to implement that ordinary feature. How smart are they?<p>I suspect the truth here is a bit more ugly. AWS doesn&#x27;t want to do it because they see the pocket-change taken from accidental mishaps as a net positive. They don&#x27;t care if it inflicts suffering on students and novices. Sure, they&#x27;ll fix individual cases if they&#x27;re shamed publicly or if the individual contacts them in exactly the right way, but at the end of the day, it&#x27;s more money in their pocket. For every 200 dollar oopsie that gets charged back, how many go silently paid in humiliation and hardship?</text></comment> | <story><title>Please fix the AWS free tier before somebody gets hurt</title><url>https://cloudirregular.substack.com/p/please-fix-the-aws-free-tier-before</url></story><parent_chain><item><author>swiftcoder</author><text>I&#x27;m also going to point out, as a former AWS engineer, that
&quot;too hard&quot; isn&#x27;t in the AWS engineering vocabulary. If an AWS engineer were to dismiss something as &quot;too hard&quot;, they&#x27;d be on the fast track to a performance-improvement-plan.<p>The problem isn&#x27;t that it&#x27;s too hard. It&#x27;s that it isn&#x27;t a business priority. As soon as it becomes prioritised, a legion of very smart AWS engineers will solve it.</text></item><item><author>jiggawatts</author><text>Everyone: PLEASE stop making the argument that it&#x27;s &quot;too hard&quot; or even &quot;impossible&quot; to implement spending limits.<p>As the article points out: Every other cloud does this! They all have non-production subscription types with hard spending limits.<p>There&#x27;s a difference between &quot;unable&quot; and &quot;unwilling&quot;.<p>The people that refuse to understand the difference are the same people that don&#x27;t understand that &quot;unsupported&quot; isn&#x27;t synonymous with &quot;cannot be made to function&quot;.<p>Don&#x27;t be that person.<p>If you have a large account and you&#x27;re in regular contact with an AWS sales representative, <i>pressure them</i> into making this happen. Even if you work for Megacorp with a $$$ budget, keep in mind that your <i>future hires</i> need to start somewhere, need to be able to learn on their own, and need to do so <i>safely</i>.<p>Don&#x27;t walk down the same path as IBM&#x27;s mainframes, where no student anywhere ever has been able to learn on their own, making it a dead-end for corporations who pay billions for this platform. You, or your company will eventually pay this price if AWS keeps this IBM-like behaviour up.<p>Think of the big picture, not just your own immediate personal situation.<p>Apply this pressure yourself, because the students you want to hire next year can&#x27;t.</text></item></parent_chain><comment><author>AtomicOrbital</author><text>You are correct, which makes it even more pernicious that Amazon chooses to not fix this ... profits at any cost is not a long term benefit ... the longer AWS fails to fix this more folks will tend to lean toward the competition especially Azure ... I have been a unix&#x2F;linux server side developer since the beginning and always use a linux laptop so no fan of Microsoft however their Azure platform is lightyears ahead of AWS ... its obvious the AWS console web front end needs a wholescale rewrite from the bottom up ... again as you say its not a priority probably because all big consumers never use the AWS console as they have written their own automation layer atop the AWS SDK</text></comment> |
22,713,386 | 22,712,586 | 1 | 3 | 22,708,094 | train | <story><title>New Grad vs. Senior Dev</title><url>https://ericlippert.com/2020/03/27/new-grad-vs-senior-dev/</url></story><parent_chain><item><author>corysama</author><text>It’s because Big O is Computer Science. Cache effects are Software Engineering. Professors of CS do a fine job of teaching CS. They even briefly mention that there is a implicit constant factor k in O(k n log(n)) and then they never mention it again. They certainly don’t mention that k can easily vary by 128x between algos. AKA: 7 levels of a binary tree. Or that most of the data they will be dealing with in practice not only won’t be infinite, but will actually be less than 128 bytes. Or, that even with huge data and an proven-ideal O() algo, there is often 10x speed-up to be had with a hybrid algo like a b-tree instead of a binary tree. And, another 2-10x with SIMD vs scalar. 100x isn’t infinite, but it’s still counts.<p>So, grads listen to their CS professors and that’s what they know. It’s not until they get lectures from greybeard software engineers that they learn about the reality algos and not just the idealized algos.</text></item><item><author>dimtion</author><text>I find that the biggest misunderstanding happens because &quot;new grads&quot; (and I happen to be one) confuse _asymptotic complexity_ with actual complexity.<p>I&#x27;m not sure sure why, but CS courses and interview questions mostly focus on _asymptotic complexity_ and usually forget to take into consideration the complexity for &quot;little values of n&quot;. And funnily enough, in real life n never goes to infinity!<p>In a strict sense big O notation only cares about what happens when n goes to infinity. The algorithm could behave in any way up to numbers unimaginable (like TREE(3)) but still, its big O wouldn&#x27;t change.<p>Maybe what is missing to those &quot;new grad&quot; is a felling of real world data, and how a computer behave in the real world (with caches, latencies, optimised instructions etc...) not just having an ideal computer model in their mind when they design algorithms.</text></item></parent_chain><comment><author>MaxBarraclough</author><text>&gt; briefly mention that there is a implicit constant factor k in O(k n log(n)) and then they never mention it again<p>A fine concrete example of this is the Coppersmith–Winograd algorithm (and its derivatives), a matrix multiplication algorithm with impressive complexity properties, but which in practice <i>always</i> loses to the Strassen algorithm, despite Strassen&#x27;s inferior complexity. [0][1][2]<p>(Aside: the Strassen algorithm is pretty mind-bending, but also easily shown. If you&#x27;ve got 22 minutes spare, there&#x27;s a good explanation of it on YouTube. Perhaps there&#x27;s a more dense source elsewhere. 
[3])<p>&gt; It’s not until they get lectures from greybeard software engineers that they learn about the reality algos and not just the idealized algos.<p>To mirror what some others are saying here, students should also be taught the realities of cache behaviour, SIMD-friendliness, branch prediction, multi-threaded programming, real-time constraints, hardware acceleration, etc.<p>[0] <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Coppersmith%E2%80%93Winograd_algorithm" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Coppersmith%E2%80%93Winograd_a...</a><p>[1] <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Strassen_algorithm" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Strassen_algorithm</a><p>[2] <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Computational_complexity_of_mathematical_operations#Matrix_algebra" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Computational_complexity_of_ma...</a><p>[3] <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=ORrM-aSNZUs" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=ORrM-aSNZUs</a></text></comment> | <story><title>New Grad vs. Senior Dev</title><url>https://ericlippert.com/2020/03/27/new-grad-vs-senior-dev/</url></story><parent_chain><item><author>corysama</author><text>It’s because Big O is Computer Science. Cache effects are Software Engineering. Professors of CS do a fine job of teaching CS. They even briefly mention that there is a implicit constant factor k in O(k n log(n)) and then they never mention it again. They certainly don’t mention that k can easily vary by 128x between algos. AKA: 7 levels of a binary tree. Or that most of the data they will be dealing with in practice not only won’t be infinite, but will actually be less than 128 bytes. Or, that even with huge data and an proven-ideal O() algo, there is often 10x speed-up to be had with a hybrid algo like a b-tree instead of a binary tree. And, another 2-10x with SIMD vs scalar. 100x isn’t infinite, but it’s still counts.<p>So, grads listen to their CS professors and that’s what they know. It’s not until they get lectures from greybeard software engineers that they learn about the reality algos and not just the idealized algos.</text></item><item><author>dimtion</author><text>I find that the biggest misunderstanding happens because &quot;new grads&quot; (and I happen to be one) confuse _asymptotic complexity_ with actual complexity.<p>I&#x27;m not sure sure why, but CS courses and interview questions mostly focus on _asymptotic complexity_ and usually forget to take into consideration the complexity for &quot;little values of n&quot;. And funnily enough, in real life n never goes to infinity!<p>In a strict sense big O notation only cares about what happens when n goes to infinity. The algorithm could behave in any way up to numbers unimaginable (like TREE(3)) but still, its big O wouldn&#x27;t change.<p>Maybe what is missing to those &quot;new grad&quot; is a felling of real world data, and how a computer behave in the real world (with caches, latencies, optimised instructions etc...) not just having an ideal computer model in their mind when they design algorithms.</text></item></parent_chain><comment><author>lonelappde</author><text>That&#x27;s an arbitrarily restrictive view of computer science. It&#x27;s like saying teaching physics ignoring friction perfectly fine.</text></comment> |
36,869,112 | 36,868,121 | 1 | 2 | 36,864,115 | train | <story><title>Is technical analysis just stock market astrology?</title><url>https://alicegg.tech//2023/07/25/technical-analysis.html</url></story><parent_chain><item><author>raincole</author><text>Astrology would be a self-fulfilling prophecy if enough people took it seriously.<p>For example if everyone agreed on that Gemini means good at science and Leo is good at art, the whole education system would be designed around this belief and it would hugely impact how children think of their own potential.</text></item><item><author>amelius</author><text>The entire idea behind technical analysis is that it&#x27;s a self-fulfilling prophecy. The more people believe in a certain rule, the more that rule turns out to be true.<p>It might work that way with astrology too, but it&#x27;s not so clear.</text></item><item><author>nologic01</author><text>No. That comparison is an insult to astrology.<p>While the made-up character is similar, Astrology is far richer in concepts, orchestration and story telling.<p>Technical analysis is the ultimate dumbification of market structure and dynamics, which in itself is a major dumbification of the formal economy, which in itself is a major simplification of the stuff-that-matters(TM).<p>The objective of technical analysis is to get as many low-information actors as possible to transact with as low-cost as possible technology (I would totally not be surprised if current &quot;technical analysis&quot; verbiage is 100% automated).</text></item></parent_chain><comment><author>antognini</author><text>There is a good example of this in Ancient Rome. By around the time of Augustus Romans were fervent believers in astrology. One of the most notable cases of astrology becoming a self-fulfilling prophecy has to do with the death of the emperor Domitian.<p>An astrologer named Asclation had earlier predicted that Domitian would die on September 18, 96 AD, and Domitian&#x27;s enemies used this as a sort of nucleation point to organize his assassination. So although the date was made up, it had the effect of focusing the efforts of the assassins on a particular date, and they were successful in assassinating him on September 18.<p>The full story that survives probably has some embellishments, but is entertaining. As the day approached Domitian became increasingly nervous. On September 17, he called Asclation before him and asked him if he stood by his prediction. Asclation said he did. Domitian then asked Asclation how Ascaltion himself was to die. Asclation responded that the stars said that he would die by being torn apart by dogs. Domitian then had an idea, and condemned him to death by burning.<p>Asclation was immediately led to a public square, tied to a stake, and a bonfire was built underneath him. Not long after being lit, however, it suddenly began raining and the downpour quenched the flames. In the wet mess, Asclation&#x27;s stake tipped over and a pack of dogs found him and devoured him.<p>This development naturally did not put Domitian&#x27;s mind at ease. The next day he became a nervous wreck. Around noon, he asked an attendant what time it was. The attendant, who was part of the conspiracy, lied and said that it was an hour later. Seeing that the danger had passed, Domitian relaxed and decided to take a bath. As he was about to go out, an official rushed in and asked for his signature on some documents. 
The official appeared to have an injury to his arm, but this official was also in on the conspiracy, and his sling concealed a dagger. When he got close to the emperor he stabbed him to death.</text></comment> | <story><title>Is technical analysis just stock market astrology?</title><url>https://alicegg.tech//2023/07/25/technical-analysis.html</url></story><parent_chain><item><author>raincole</author><text>Astrology would be a self-fulfilling prophecy if enough people took it seriously.<p>For example if everyone agreed on that Gemini means good at science and Leo is good at art, the whole education system would be designed around this belief and it would hugely impact how children think of their own potential.</text></item><item><author>amelius</author><text>The entire idea behind technical analysis is that it&#x27;s a self-fulfilling prophecy. The more people believe in a certain rule, the more that rule turns out to be true.<p>It might work that way with astrology too, but it&#x27;s not so clear.</text></item><item><author>nologic01</author><text>No. That comparison is an insult to astrology.<p>While the made-up character is similar, Astrology is far richer in concepts, orchestration and story telling.<p>Technical analysis is the ultimate dumbification of market structure and dynamics, which in itself is a major dumbification of the formal economy, which in itself is a major simplification of the stuff-that-matters(TM).<p>The objective of technical analysis is to get as many low-information actors as possible to transact with as low-cost as possible technology (I would totally not be surprised if current &quot;technical analysis&quot; verbiage is 100% automated).</text></item></parent_chain><comment><author>azalemeth</author><text>There&#x27;s a very good episode of The Orville essentially based on this premise in disguise and it goes as horribly wrong as you think it would.<p>Humans would totally do this. If anything, I&#x27;m pleased by the fact that we don&#x27;t do it quite as much as we could...</text></comment> |
33,625,670 | 33,626,247 | 1 | 2 | 33,604,444 | train | <story><title>U.S. fines airlines more than $7M for not providing refunds</title><url>https://www.nytimes.com/2022/11/14/business/transportation-department-airline-fines.html</url></story><parent_chain><item><author>mattwad</author><text>They are the scum of the universe. If you don&#x27;t pay for your seats, they will separate you and your party even if there are available seats together, because they know you&#x27;ll shell out cash to fix it. (I think American does this now too, and possibly others but I haven&#x27;t confirmed they also do it when there&#x27;s available seats).</text></item><item><author>kaycebasques</author><text>I am relieved to see that regulators appear aware of the game that Frontier seems to be playing. I flew to Denver recently and didn&#x27;t need to travel heavy so I went with their cheap fare. The flight from SFO was fine. The flight from DEN to SFO was extremely frustrating. We arrived at DEN early because we got a text message warning us about major TSA delays. Literally the moment we were about to board Frontier changed the flight status to canceled. The plane was there, we saw it arrive and I didn&#x27;t get the impression there was mechanical issues. It was one of those situations where I intuited that it was useless to try to get my money back and I should just vow to never fly with them again and give everyone a heads up not to use them (like I am doing here).</text></item></parent_chain><comment><author>fossuser</author><text>Isn&#x27;t that their whole game? If you&#x27;re going to fly them you should know it.<p>They have the cheapest fares, but they do that by upselling everything and any mistake will be expensive. If that&#x27;s worth it to you (and sometimes the cost difference is big enough for it to be worth it imo) then it can be a reasonable tradeoff.<p>Flying an extreme lost cost airline that&#x27;s entire existence is predicated on this model and expecting them to act contrary to that is confusing to me.</text></comment> | <story><title>U.S. fines airlines more than $7M for not providing refunds</title><url>https://www.nytimes.com/2022/11/14/business/transportation-department-airline-fines.html</url></story><parent_chain><item><author>mattwad</author><text>They are the scum of the universe. If you don&#x27;t pay for your seats, they will separate you and your party even if there are available seats together, because they know you&#x27;ll shell out cash to fix it. (I think American does this now too, and possibly others but I haven&#x27;t confirmed they also do it when there&#x27;s available seats).</text></item><item><author>kaycebasques</author><text>I am relieved to see that regulators appear aware of the game that Frontier seems to be playing. I flew to Denver recently and didn&#x27;t need to travel heavy so I went with their cheap fare. The flight from SFO was fine. The flight from DEN to SFO was extremely frustrating. We arrived at DEN early because we got a text message warning us about major TSA delays. Literally the moment we were about to board Frontier changed the flight status to canceled. The plane was there, we saw it arrive and I didn&#x27;t get the impression there was mechanical issues. 
It was one of those situations where I intuited that it was useless to try to get my money back and I should just vow to never fly with them again and give everyone a heads up not to use them (like I am doing here).</text></item></parent_chain><comment><author>infotogivenm</author><text>Personally I’ve always seen them allow you to get up and relocate as soon as the cabin doors close. But yea, you didn’t pay for a seat assignment so you might get stuck wherever. They’re pretty clear about this…<p>Flown Frontier many dozens of times, since I usually travel alone with just a backpack. It’s probably saved me $5-10k in my adult life. Very often it will be $150+ less than the next cheapest option.</text></comment>
39,041,986 | 39,041,306 | 1 | 2 | 39,039,588 | train | <story><title>A million ways to die on the web</title><url>https://wiki.archiveteam.org/index.php/A_Million_Ways_to_Die_on_the_Web</url></story><parent_chain><item><author>inopinatus</author><text>There is a tale - perhaps apocryphal - handed down between generations of AWS staff, of a customer that was all-in on spot instances, until one day the price and availability of their preferred instances took an unfortunate turn, which is to say, all their stuff went away, including most dramatically the customer data that was on the instance storages, and including the replicas that had been mistakenly presumed a backstop against instance loss, and sadly - but not surprisingly - this was pretty much terminal for their startup.</text></item></parent_chain><comment><author>Thorrez</author><text>The third largest bitcoin exchange made a change to their RAM settings in EC2. This shut down the machines, wiping out the hard drive and RAM. Their wallet was stored there. They lost everything.<p><a href="https:&#x2F;&#x2F;siliconangle.com&#x2F;2011&#x2F;08&#x2F;01&#x2F;third-largest-bitcoin-exchange-bitomat-lost-their-wallet-over-17000-bitcoins-missing&#x2F;" rel="nofollow">https:&#x2F;&#x2F;siliconangle.com&#x2F;2011&#x2F;08&#x2F;01&#x2F;third-largest-bitcoin-ex...</a></text></comment> | <story><title>A million ways to die on the web</title><url>https://wiki.archiveteam.org/index.php/A_Million_Ways_to_Die_on_the_Web</url></story><parent_chain><item><author>inopinatus</author><text>There is a tale - perhaps apocryphal - handed down between generations of AWS staff, of a customer that was all-in on spot instances, until one day the price and availability of their preferred instances took an unfortunate turn, which is to say, all their stuff went away, including most dramatically the customer data that was on the instance storages, and including the replicas that had been mistakenly presumed a backstop against instance loss, and sadly - but not surprisingly - this was pretty much terminal for their startup.</text></item></parent_chain><comment><author>BossingAround</author><text>How..? Was there no local code on any of the dev machines? No git? I&#x27;m asking because for example if Github is vaporized today, my product would lose roughly a day or two&#x27;s worth of work, since we have like 30 computers having a repository copy.<p>Of course redeploying every single thing would not be seamless because of course, there might be some configuration stored in services, or something similar, but I&#x27;d say that ~90% of our automation is stored in Git.</text></comment> |
40,032,984 | 40,032,039 | 1 | 2 | 40,014,711 | train | <story><title>The One Billion Row Challenge in CUDA</title><url>https://tspeterkim.github.io/posts/cuda-1brc</url></story><parent_chain><item><author>_zoltan_</author><text>this is true for small datasets.<p>on large datasets, once loaded into GPU memory, cross GPU shuffling with NVLink is going to be much faster than CPU to RAM.<p>on the H100 boxes with 8x400Gbps, IO with GDS is also pretty fast.<p>for truly IObound tasks I think a lot of GPUs beats almost anything :-)</text></item><item><author>pama</author><text>There are some good ideas for this type of problem here: <a href="https:&#x2F;&#x2F;github.com&#x2F;dannyvankooten&#x2F;1brc">https:&#x2F;&#x2F;github.com&#x2F;dannyvankooten&#x2F;1brc</a><p>After you deal with parsing and hashes, basically you are IO limited, so mmap helps. The C code takes less than 1.4s without any CUDA access. Because there is no compute to speak of, other than parsing and a hashmap, a reasonable guess is that even for the optimal CUDA implementation, the starting of kernels and transfer of data to the GPU would likely add a noticeable bottleneck and make the optimal CUDA code slower than this pure C code.</text></item></parent_chain><comment><author>pama</author><text>Yes, GDS will accelerate the IO to the GPU. I’d love to see the above C code compared to hyperoptimized GPU code on the right hardware, but I don’t want to accidentally nerd snipe myself :-) The unfortunate part of this particular benchmark is that once you have the data in the right place in your hardware there is very little compute left. The GPU code would probably have constant performance with an additional couple thousand operations on each row whereas CPU would slow down.<p><a href="https:&#x2F;&#x2F;docs.nvidia.com&#x2F;gpudirect-storage&#x2F;overview-guide&#x2F;index.html" rel="nofollow">https:&#x2F;&#x2F;docs.nvidia.com&#x2F;gpudirect-storage&#x2F;overview-guide&#x2F;ind...</a></text></comment> | <story><title>The One Billion Row Challenge in CUDA</title><url>https://tspeterkim.github.io/posts/cuda-1brc</url></story><parent_chain><item><author>_zoltan_</author><text>this is true for small datasets.<p>on large datasets, once loaded into GPU memory, cross GPU shuffling with NVLink is going to be much faster than CPU to RAM.<p>on the H100 boxes with 8x400Gbps, IO with GDS is also pretty fast.<p>for truly IObound tasks I think a lot of GPUs beats almost anything :-)</text></item><item><author>pama</author><text>There are some good ideas for this type of problem here: <a href="https:&#x2F;&#x2F;github.com&#x2F;dannyvankooten&#x2F;1brc">https:&#x2F;&#x2F;github.com&#x2F;dannyvankooten&#x2F;1brc</a><p>After you deal with parsing and hashes, basically you are IO limited, so mmap helps. The C code takes less than 1.4s without any CUDA access. Because there is no compute to speak of, other than parsing and a hashmap, a reasonable guess is that even for the optimal CUDA implementation, the starting of kernels and transfer of data to the GPU would likely add a noticeable bottleneck and make the optimal CUDA code slower than this pure C code.</text></item></parent_chain><comment><author>candido_heavyai</author><text>I am testing a gh200 and the speed you can access the system memory is amazing.. 
Assuming you have already encoded the station into a smallint, the size of the dataset would be around 6gb, which on such a system takes just 20 ms to be transferred (I am sure about that because I&#x27;m observing a transfer of 9.5gb that took about 33ms right now).</text></comment>
10,979,613 | 10,979,244 | 1 | 3 | 10,978,069 | train | <story><title>VMware Confirms Layoffs as It Prepares for Dell Acquisition</title><url>http://techcrunch.com/2016/01/26/vmware-confirms-layoffs-in-earnings-statement-as-it-prepares-for-dell-acquisition/</url></story><parent_chain><item><author>chipx86</author><text>Workstation was heavily developed up until, well, yesterday really.<p>I personally spent 2 years of my life, starting in 2008, bringing Unity to Workstation on Linux and making it work with every combination of Linux and Windows I could throw at it. That work was continued by a teammate for several years.<p>I spent 3 years rewriting most of the foundation, UI, and server infrastructure for Workstation 8, bringing the ability to connect to remote VMware ESXi&#x2F;vSphere servers, along with the server component of Workstation 8. This work allowed VMs to be hosted on any server and accessed from any other server, and allowed VMs to be pushed between servers. 3 solid years on this feature alone, given just how much was needed to make that happen.<p>In the same release, we replaced the old Teams feature (a single feature that provided a multi-VM UI along with software-defined networking segments) with a series of more independent, more useful features. These were just a couple of the major features released in Workstation 8, and with all this came cleanup in the UI to keep the experience sane, not bloated.<p>That came out in 2011.<p>Workstation 9, released in 2012, came with a web-based UI for interacting with VMs called WSX (a feature I dedicated a bit over a year to). It also added UI refinement for the features that come out in Workstation 8, more remote VM support, hardware improvements (USB 3, Hyper-V, OpenGL for Linux VMs, nested&#x2F;Inception-like VMs), locked down virtual machines for IT, and probably more that I can&#x27;t remember.<p>Workstation 10 followed that a year later, and brought guest hardware support for tablets, enhancements for Windows 8 hosts, more remote VM improvements, better command line automation for remote-controlling&#x2F;creating VMs, and a bunch of other things. UI-wise, it was a smaller release, but it did a lot for the hardware support.<p>I left around this time to focus on Review Board (<a href="https:&#x2F;&#x2F;www.reviewboard.org&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.reviewboard.org&#x2F;</a>) full-time.<p>Since then, they released 2 major versions: Workstation 11 and 12. From what I can tell, these were largely about hardware improvements and performance improvements, less about major UI changes, but there&#x27;s a lot that has to happen for these improvements. Hardware improvements are crucial to keeping the VMs useful in many situations. Performance was also a focus. While building these releases, the team was also busy helping out the View team by helping them consume bits of the Workstation&#x2F;Fusion codebase. They also begun development of AppCatalyst and Flex.<p>There&#x27;s also work that happened on Player, Ace, and other things, all throughout.<p>So that&#x27;s a lot of killer features in my opinion :) I barely scratched the surface of 8, and didn&#x27;t go into all the stuff we did in 6 and 7.<p>We were all very proud of the product, and often spent our free time working on it. I should point out, this was <i>not</i> a large team by any means. It was an amazing team, though. A family. One that will survive these layoffs, one way or another.</text></item><item><author>venomsnake</author><text>That is sad. 
Desktop virtualization is incredibly useful. And vmware are the only one that can pull off a macos virtualization in windows host decently so far.<p>But workstation was for long left on the backburner - we haven&#x27;t had there a lot of killer features since &#x27;07 probably. So I guess it is not a new decision.<p>Clarification: I mean that workstation was deprioritized by corporate, not by the team that worked on it.</text></item><item><author>awalton</author><text>Well, for starters...<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=10976579" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=10976579</a></text></item><item><author>jeffjose</author><text>What departments&#x2F;roles are these targeted at? Are they more support roles - HR&#x2F;Training&#x2F;IT or are they more engineering&#x2F;products - engineers, PMs etc?</text></item><item><author>thewarrior</author><text>Are there more layoffs than usual now or am I just imagining things ?</text></item><item><author>mgarfias</author><text>A little late. I got the call yesterday</text></item></parent_chain><comment><author>virtualwhys</author><text>First of all, thank you, accessing ESXi servers via vSphere has been a god send over the years (FWIW, still running relatively recent version of vSphere on an ancient 2003 Server VM ;-)).<p>Workstation was my go to as well for desktop development needs for many years, but switched to VirtualBox after Workstation 10. Kernel updates on Linux often broke Workstation. You needed to wait for VMware to release an update, upgrade to the next version (that would also soon lag behind latest kernels), or search for a patch over on Arch[0].<p>VirtualBox does the trick but Workstation&#x27;s a better product.<p><a href="https:&#x2F;&#x2F;aur.archlinux.org&#x2F;packages&#x2F;vmware-patch" rel="nofollow">https:&#x2F;&#x2F;aur.archlinux.org&#x2F;packages&#x2F;vmware-patch</a></text></comment> | <story><title>VMware Confirms Layoffs as It Prepares for Dell Acquisition</title><url>http://techcrunch.com/2016/01/26/vmware-confirms-layoffs-in-earnings-statement-as-it-prepares-for-dell-acquisition/</url></story><parent_chain><item><author>chipx86</author><text>Workstation was heavily developed up until, well, yesterday really.<p>I personally spent 2 years of my life, starting in 2008, bringing Unity to Workstation on Linux and making it work with every combination of Linux and Windows I could throw at it. That work was continued by a teammate for several years.<p>I spent 3 years rewriting most of the foundation, UI, and server infrastructure for Workstation 8, bringing the ability to connect to remote VMware ESXi&#x2F;vSphere servers, along with the server component of Workstation 8. This work allowed VMs to be hosted on any server and accessed from any other server, and allowed VMs to be pushed between servers. 3 solid years on this feature alone, given just how much was needed to make that happen.<p>In the same release, we replaced the old Teams feature (a single feature that provided a multi-VM UI along with software-defined networking segments) with a series of more independent, more useful features. These were just a couple of the major features released in Workstation 8, and with all this came cleanup in the UI to keep the experience sane, not bloated.<p>That came out in 2011.<p>Workstation 9, released in 2012, came with a web-based UI for interacting with VMs called WSX (a feature I dedicated a bit over a year to). 
It also added UI refinement for the features that come out in Workstation 8, more remote VM support, hardware improvements (USB 3, Hyper-V, OpenGL for Linux VMs, nested&#x2F;Inception-like VMs), locked down virtual machines for IT, and probably more that I can&#x27;t remember.<p>Workstation 10 followed that a year later, and brought guest hardware support for tablets, enhancements for Windows 8 hosts, more remote VM improvements, better command line automation for remote-controlling&#x2F;creating VMs, and a bunch of other things. UI-wise, it was a smaller release, but it did a lot for the hardware support.<p>I left around this time to focus on Review Board (<a href="https:&#x2F;&#x2F;www.reviewboard.org&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.reviewboard.org&#x2F;</a>) full-time.<p>Since then, they released 2 major versions: Workstation 11 and 12. From what I can tell, these were largely about hardware improvements and performance improvements, less about major UI changes, but there&#x27;s a lot that has to happen for these improvements. Hardware improvements are crucial to keeping the VMs useful in many situations. Performance was also a focus. While building these releases, the team was also busy helping out the View team by helping them consume bits of the Workstation&#x2F;Fusion codebase. They also begun development of AppCatalyst and Flex.<p>There&#x27;s also work that happened on Player, Ace, and other things, all throughout.<p>So that&#x27;s a lot of killer features in my opinion :) I barely scratched the surface of 8, and didn&#x27;t go into all the stuff we did in 6 and 7.<p>We were all very proud of the product, and often spent our free time working on it. I should point out, this was <i>not</i> a large team by any means. It was an amazing team, though. A family. One that will survive these layoffs, one way or another.</text></item><item><author>venomsnake</author><text>That is sad. Desktop virtualization is incredibly useful. And vmware are the only one that can pull off a macos virtualization in windows host decently so far.<p>But workstation was for long left on the backburner - we haven&#x27;t had there a lot of killer features since &#x27;07 probably. So I guess it is not a new decision.<p>Clarification: I mean that workstation was deprioritized by corporate, not by the team that worked on it.</text></item><item><author>awalton</author><text>Well, for starters...<p><a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=10976579" rel="nofollow">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=10976579</a></text></item><item><author>jeffjose</author><text>What departments&#x2F;roles are these targeted at? Are they more support roles - HR&#x2F;Training&#x2F;IT or are they more engineering&#x2F;products - engineers, PMs etc?</text></item><item><author>thewarrior</author><text>Are there more layoffs than usual now or am I just imagining things ?</text></item><item><author>mgarfias</author><text>A little late. I got the call yesterday</text></item></parent_chain><comment><author>vardump</author><text>I&#x27;d love to get Replay Debugging [1] back... That of course was already gone in VMWare Workstation 8.<p>That was and still is a killer feature.<p>I know about rr and such, it&#x27;s just another level to be able to record whole system state.<p>[1]: VMWare Workstation 7 demo about Replay Debugging: <a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=YjZWn3iDPiM" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=YjZWn3iDPiM</a></text></comment> |
38,186,397 | 38,184,461 | 1 | 3 | 38,183,454 | train | <story><title>Gleam: a type safe language on the Erlang VM</title><url>https://gleam.run/</url></story><parent_chain></parent_chain><comment><author>xixixao</author><text>I&#x27;m really impressed by the syntax. I have yet to find a piece of syntax I wouldn&#x27;t like. Labelled arguments for example are delightful:<p><pre><code> pub fn replace(
in string: String,
each pattern: String,
with replacement: String,
) {
&#x2F;&#x2F; The variables `string`, `pattern`, and `replacement` are in scope here
}
replace(in: &quot;A,B,C&quot;, each: &quot;,&quot;, with: &quot; &quot;)</code></pre></text></comment> | <story><title>Gleam: a type safe language on the Erlang VM</title><url>https://gleam.run/</url></story><parent_chain></parent_chain><comment><author>christophilus</author><text>Looks decent. I somehow missed the previous thousand discussions.<p>I’d love to hear from anyone running it in production.<p>I have always been BEAM-curious, but never felt comfortable running it in production, as it feels like a big black box, and I’m not confident that I’d be able to diagnose problems as readily as I can in .NET, Go, or Node. I’m not sure why I feel that way about BEAM.</text></comment> |
38,172,893 | 38,171,704 | 1 | 3 | 38,151,299 | train | <story><title>How did people deal with punch cards?</title><url>https://blog.computationalcomplexity.org/2023/11/in-bad-old-days-we-had-punchcards-how.html</url></story><parent_chain><item><author>kens</author><text>People complaining about punch cards? I started programming with optical mark cards which were much, much worse. I&#x27;d fill in bubbles on each card to make the program, mail the program to the Waterloo computer center, and get my output back in a week or so. After fixing the errors (either the bubble wasn&#x27;t dark enough or a coding error), I&#x27;d send the program back. It took me <i>weeks</i> to get my first program to run. Compared to that, punch cards were like living in the future.<p>By the way, if you want to experience punch cards in person, come to the Computer History Museum (Mountain View, CA) on Wednesdays or Saturdays for demos of the IBM 1401 and the opportunity to punch your own cards. You can also see a high-speed card sorter in action.</text></item></parent_chain><comment><author>Waterluvian</author><text>Ah my dad has the same story about his time in the Math and Computer building at UW. Though he said it got a lot better in later years when he could just show up at 4am and run things pretty much right away because nobody wanted to be there at 4am. That part I find hard to believe based on my time at UW.</text></comment> | <story><title>How did people deal with punch cards?</title><url>https://blog.computationalcomplexity.org/2023/11/in-bad-old-days-we-had-punchcards-how.html</url></story><parent_chain><item><author>kens</author><text>People complaining about punch cards? I started programming with optical mark cards which were much, much worse. I&#x27;d fill in bubbles on each card to make the program, mail the program to the Waterloo computer center, and get my output back in a week or so. After fixing the errors (either the bubble wasn&#x27;t dark enough or a coding error), I&#x27;d send the program back. It took me <i>weeks</i> to get my first program to run. Compared to that, punch cards were like living in the future.<p>By the way, if you want to experience punch cards in person, come to the Computer History Museum (Mountain View, CA) on Wednesdays or Saturdays for demos of the IBM 1401 and the opportunity to punch your own cards. You can also see a high-speed card sorter in action.</text></item></parent_chain><comment><author>renegade-otter</author><text>I just love these ancient programming war stories!</text></comment> |
36,552,795 | 36,552,416 | 1 | 3 | 36,552,015 | train | <story><title>The first Oxide rack being prepared for customer shipment</title><url>https://hachyderm.io/@oxidecomputer/110635621269494973</url></story><parent_chain><item><author>anaisbetts</author><text>Congrats to the team, but after hearing about Oxide for literal years since the beginning of the company and repeatedly reading different iterations of their landing page, I still don&#x27;t know what their product actually <i>is</i>. It&#x27;s a hypervisor host? Maybe? So I can host VMs on it? And a network switch? So I can....switch stuff?</text></item></parent_chain><comment><author>electroly</author><text>It&#x27;s AWS Outposts Rack without the AWS connection. That is, you get a turnkey rack with the servers, networking, hypervisor, and support software preconfigured. You plug it into power and network and it&#x27;s ready to run your VMs.<p>Outposts, too, started with a full-rack configuration only, but they eventually introduced an individual server configuration as well. It&#x27;ll be interesting to see if Oxide eventually decides to serve the smaller market that doesn&#x27;t have the scale for whole-rack-at-a-time.</text></comment> | <story><title>The first Oxide rack being prepared for customer shipment</title><url>https://hachyderm.io/@oxidecomputer/110635621269494973</url></story><parent_chain><item><author>anaisbetts</author><text>Congrats to the team, but after hearing about Oxide for literal years since the beginning of the company and repeatedly reading different iterations of their landing page, I still don&#x27;t know what their product actually <i>is</i>. It&#x27;s a hypervisor host? Maybe? So I can host VMs on it? And a network switch? So I can....switch stuff?</text></item></parent_chain><comment><author>steveklabnik</author><text>I wrote this a while back, does that help? Happy to elaborate. <a href="https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=30678324">https:&#x2F;&#x2F;news.ycombinator.com&#x2F;item?id=30678324</a></text></comment> |
25,468,146 | 25,467,166 | 1 | 2 | 25,466,304 | train | <story><title>Jetbrains founders turn billionaires without VC help</title><url>https://www.bloomberg.com/news/articles/2020-12-18/czech-startup-founders-turn-billionaires-without-vc-help</url></story><parent_chain><item><author>Philip-J-Fry</author><text>Thank the gods for JetBrains. I love their software.<p>Excellent business model, I get to keep the software I bought when the license expires. Never forced to continually pay for something you don&#x27;t see value in updating. I think that&#x27;s one of the key factors which drives them to continue to improve their products. If future version of their products wasn&#x27;t vastly improved over the previous then people wouldn&#x27;t upgrade because their current version serves them perfectly fine. And if you don&#x27;t like this years update, maybe you&#x27;ll like next years. No downsides whatsoever.<p>Never used any of their TeamCity&#x2F;Spaces stuff but it looks cool. I can definitely say I&#x27;m a fan of their IDEs though. Use them every day.</text></item></parent_chain><comment><author>welearnednothng</author><text>I ran (self-hosted) TeamCity at a division of Disney for years with hundreds of projects across teams with lots of complex dependencies. To this day (though I haven&#x27;t kept up as much in recent years), it&#x27;s far and away the most powerful CI system I&#x27;ve ever used. Rock solid stability, to boot. And it&#x27;s only become better and better over the years.<p>Does that mean it&#x27;s right for every job? Not at all. Sometimes you just need simple and accessible. Things like CircleCI or Travis or Heroku&#x27;s CI. But not Jenkins. I put Jenkins into the same category as TeamCity, and it falls far short.<p>If you&#x27;ve outgrown some of the other options and need really powerful CI, I can&#x27;t recommend TeamCity enough.</text></comment> | <story><title>Jetbrains founders turn billionaires without VC help</title><url>https://www.bloomberg.com/news/articles/2020-12-18/czech-startup-founders-turn-billionaires-without-vc-help</url></story><parent_chain><item><author>Philip-J-Fry</author><text>Thank the gods for JetBrains. I love their software.<p>Excellent business model, I get to keep the software I bought when the license expires. Never forced to continually pay for something you don&#x27;t see value in updating. I think that&#x27;s one of the key factors which drives them to continue to improve their products. If future version of their products wasn&#x27;t vastly improved over the previous then people wouldn&#x27;t upgrade because their current version serves them perfectly fine. And if you don&#x27;t like this years update, maybe you&#x27;ll like next years. No downsides whatsoever.<p>Never used any of their TeamCity&#x2F;Spaces stuff but it looks cool. I can definitely say I&#x27;m a fan of their IDEs though. Use them every day.</text></item></parent_chain><comment><author>vbezhenar</author><text>It is interesting that they initially offered subscription model without unexpirable license. But because of community backlash they changed it and people seem to be happy with the outcome.<p>Listen to your users.</text></comment> |
33,368,659 | 33,368,559 | 1 | 2 | 33,362,957 | train | <story><title>Wild mammal biomass has declined by 85% since the rise of humans</title><url>https://ourworldindata.org/mammals</url></story><parent_chain><item><author>hayst4ck</author><text>Militant-ism isn&#x27;t going to win people over.<p>The great lie of capitalism is pushing a theory of personal responsibility rather than legislating these problems away.<p>There are so many elements of my life that cause harm to other beings that addressing the harm could be a full time job.<p>Killing rodents, mosquitos, and ants is something most people will do without a second thought. I&#x27;ve sprayed a wasp nest with wasp killer and one writhed on the ground for 5 minutes and I felt awful about it, but there is practically a whole aisle of it at the local hardware store.<p>How many insects do you think have been harmed by industrial pesticides used in growing grains? What type of animal harm do you think pet ownership brings? How many wild birds do you think are killed in how many different ways? How many animals do you think are hit with cars? What do you think the effect the shipping industry has on sea life? What about trash? What about the smog? What about the chemicals dumped into the environment that are used to produce the hardware we are communicating with?<p>How much animal testing has been done by scientists on all types of animals to have what we have today?<p>My desire to pay as little as possible for coffee means the barista can&#x27;t afford a home and likely will have their body harmed by the stress of not having enough money to operate in today&#x27;s society. We send manufacturing off to poor countries were pollutants more directly harm their citizens than ours.<p>The wikipedia article states that poultry sex can be determined before hatching and that European countries have legislated that culling must stop, so it sounds like we are able to make progress. &quot;Beyond&quot; shows there is at least research going into lab grown&#x2F;cruelty free meats.<p>Do I have a moral obligation to go live next to a pond feeding on what I can forage while living in a dwelling I built with my own hands or is it satisfying enough that I vote progressive and hope to make progress over time by regulating the more atrocious of our actions.<p>FWIW, I have tried vegetarianism and I did not enjoy it at all. I found it greatly limited my food choices. It felt ascetic and I felt miserable. So while it is easy for you, I did not find it easy.</text></item><item><author>fnimick</author><text></text></item><item><author>hayst4ck</author><text>I am an unrepentant meat eater, but your visualization was absolutely fantastic.<p>I would add that (my maybe incorrect napkin math) shows that 5 Americans die per minute to give it scale.<p>My understanding is also that we destroy an absolutely phenomenal amount of chicks in an entirely gruesome way. <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Chick_culling" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Chick_culling</a><p>I have trouble handling that.</text></item><item><author>SCUSKU</author><text>I made a website to help visualize American meat consumption in terms of animals slaughtered per second. 
It’s pretty insane and I think does something to help communicate just how large such industrial operations truly are.<p><a href="https:&#x2F;&#x2F;zach.ws&#x2F;meat&#x2F;" rel="nofollow">https:&#x2F;&#x2F;zach.ws&#x2F;meat&#x2F;</a></text></item><item><author>retrac</author><text>Going off on a tangent, but the scale of production is very hard to grasp. I&#x27;m probably not the first person to observe this, but I once did some napkin math about steel production, and what I realized kind of blew me away. About 1.9 billion tonnes in 2020. Like with billions of dollars, I have no intuition for such numbers. Context is needed.<p>Global steel production just before WW I was about 70 million tonnes. So production has increased about thirty-fold in one century. That wasn&#x27;t so shocking to me, at first. But 1910 was not the beginning of the industrial era; things had been under way for more than a century then. Railroads. Ocean liners. Factories. Knives and rivets for fabric owned by hundreds of millions of people. Dozens of skyscrapers in New York by then; the Brooklyn Bridge hung on thousands of tonnes of cable. All made out of steel.<p>Then it struck me. A few million tonnes a year in 1850. 70 million tonnes in 1910. All of it adds up to less than 1900 million tonnes. Every single tonne of steel manufactured by humans from prehistory until about a century ago -- the entire output of the industrial revolution -- amounts to less than one year at current production.</text></item></parent_chain><comment><author>flarg</author><text>We&#x27;re all guilty aren&#x27;t we, even speaking as a vegetarian, but for me it&#x27;s about how much cruelty I can avoid, not all or nothing. I don&#x27;t kill insects but I eat eggs. I try to use public transport but I do have a car. Don&#x27;t let perfection be the enemy here.</text></comment> | <story><title>Wild mammal biomass has declined by 85% since the rise of humans</title><url>https://ourworldindata.org/mammals</url></story><parent_chain><item><author>hayst4ck</author><text>Militant-ism isn&#x27;t going to win people over.<p>The great lie of capitalism is pushing a theory of personal responsibility rather than legislating these problems away.<p>There are so many elements of my life that cause harm to other beings that addressing the harm could be a full time job.<p>Killing rodents, mosquitos, and ants is something most people will do without a second thought. I&#x27;ve sprayed a wasp nest with wasp killer and one writhed on the ground for 5 minutes and I felt awful about it, but there is practically a whole aisle of it at the local hardware store.<p>How many insects do you think have been harmed by industrial pesticides used in growing grains? What type of animal harm do you think pet ownership brings? How many wild birds do you think are killed in how many different ways? How many animals do you think are hit with cars? What do you think the effect the shipping industry has on sea life? What about trash? What about the smog? What about the chemicals dumped into the environment that are used to produce the hardware we are communicating with?<p>How much animal testing has been done by scientists on all types of animals to have what we have today?<p>My desire to pay as little as possible for coffee means the barista can&#x27;t afford a home and likely will have their body harmed by the stress of not having enough money to operate in today&#x27;s society. 
We send manufacturing off to poor countries were pollutants more directly harm their citizens than ours.<p>The wikipedia article states that poultry sex can be determined before hatching and that European countries have legislated that culling must stop, so it sounds like we are able to make progress. &quot;Beyond&quot; shows there is at least research going into lab grown&#x2F;cruelty free meats.<p>Do I have a moral obligation to go live next to a pond feeding on what I can forage while living in a dwelling I built with my own hands or is it satisfying enough that I vote progressive and hope to make progress over time by regulating the more atrocious of our actions.<p>FWIW, I have tried vegetarianism and I did not enjoy it at all. I found it greatly limited my food choices. It felt ascetic and I felt miserable. So while it is easy for you, I did not find it easy.</text></item><item><author>fnimick</author><text></text></item><item><author>hayst4ck</author><text>I am an unrepentant meat eater, but your visualization was absolutely fantastic.<p>I would add that (my maybe incorrect napkin math) shows that 5 Americans die per minute to give it scale.<p>My understanding is also that we destroy an absolutely phenomenal amount of chicks in an entirely gruesome way. <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Chick_culling" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Chick_culling</a><p>I have trouble handling that.</text></item><item><author>SCUSKU</author><text>I made a website to help visualize American meat consumption in terms of animals slaughtered per second. It’s pretty insane and I think does something to help communicate just how large such industrial operations truly are.<p><a href="https:&#x2F;&#x2F;zach.ws&#x2F;meat&#x2F;" rel="nofollow">https:&#x2F;&#x2F;zach.ws&#x2F;meat&#x2F;</a></text></item><item><author>retrac</author><text>Going off on a tangent, but the scale of production is very hard to grasp. I&#x27;m probably not the first person to observe this, but I once did some napkin math about steel production, and what I realized kind of blew me away. About 1.9 billion tonnes in 2020. Like with billions of dollars, I have no intuition for such numbers. Context is needed.<p>Global steel production just before WW I was about 70 million tonnes. So production has increased about thirty-fold in one century. That wasn&#x27;t so shocking to me, at first. But 1910 was not the beginning of the industrial era; things had been under way for more than a century then. Railroads. Ocean liners. Factories. Knives and rivets for fabric owned by hundreds of millions of people. Dozens of skyscrapers in New York by then; the Brooklyn Bridge hung on thousands of tonnes of cable. All made out of steel.<p>Then it struck me. A few million tonnes a year in 1850. 70 million tonnes in 1910. All of it adds up to less than 1900 million tonnes. Every single tonne of steel manufactured by humans from prehistory until about a century ago -- the entire output of the industrial revolution -- amounts to less than one year at current production.</text></item></parent_chain><comment><author>jibbit</author><text>Reminder to self: don’t read the comments</text></comment> |
28,643,448 | 28,642,860 | 1 | 3 | 28,641,480 | train | <story><title>Asynchronous Programming in C#</title><url>https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md</url></story><parent_chain><item><author>bob1029</author><text>We use async&#x2F;await pretty much universally throughout our codebase today.<p>One thing to keep in mind is that this mode of programming is actually not the most performant way to handle many problems. It is simply the most expedient way to manage I&#x2F;O and spread trivial things across many cores in large, complex codebases. You can typically retrofit an existing code pile to be async-capable without a whole lot of suffering.<p>If you are trying to go as fast as possible, then async is not what you want at all. Consider that the minimum grain of a Task.Delay is 1 millisecond. A millisecond is quite a brutish unit when working with a CPU that understands nanoseconds. This isn&#x27;t even a <i>reliable</i> 1 millisecond delay either... There is a shitload of context switching and other barbarism that occurs when you employ async&#x2F;await.<p>If you are seeking millions of serialized items per second, you usually just want 1 core to do that for you. Any degree of context switching (which is what async&#x2F;await does for a living) is going to chop your serialized throughput numbers substantially. You want to batch things up and process them in chunks on a single thread that never gets a chance to yield to the OS. Only problem with this optimization is that it usually means you rewrite from zero, unless you planned for this kind of thing in advance.</text></item></parent_chain><comment><author>reubenbond</author><text>&gt; Consider that the minimum grain of a Task.Delay is 1 millisecond.<p>The minimum here is contingent on a few things. The API can accept a TimeSpan which can express durations as low as 100ns (10M ticks per second: <a href="https:&#x2F;&#x2F;docs.microsoft.com&#x2F;dotnet&#x2F;api&#x2F;system.timespan.tickspersecond" rel="nofollow">https:&#x2F;&#x2F;docs.microsoft.com&#x2F;dotnet&#x2F;api&#x2F;system.timespan.ticksp...</a>). The actual delay is subject to the timer frequency, which can be as high as 16ms and depends on the OS configuration (eg, see <a href="https:&#x2F;&#x2F;stackoverflow.com&#x2F;a&#x2F;22862989&#x2F;635314" rel="nofollow">https:&#x2F;&#x2F;stackoverflow.com&#x2F;a&#x2F;22862989&#x2F;635314</a>). However, I&#x27;m not sure how any of this relates to &quot;go[ing] as fast as possible&quot;, since surely you would simply not use a Task.Delay in that case.<p>&gt; There is a shitload of context switching and other barbarism that occurs when you employ async&#x2F;await.<p>Async&#x2F;await reduces context switching over the alternative of having one thread per request (i.e, many more OS threads than cores) and it (async&#x2F;await) exhibits the same amount of context switching as Goroutines in Go and other M:N schedulers. If there is work enqueued to be processed on the thread pool, then that work will be processed without yielding back to the OS. The .NET Thread Pool dynamically sizes itself depending on the workload in an attempt to maximize throughput. 
If your code is not blocking threads during IO, you would ideally end up with 1 thread per core (you can configure that if you want).<p>Async&#x2F;await can introduce overhead, though, so if you&#x27;re writing very high-performance systems, then you may want to consider when to use it versus when to use other approaches as well as the relevant optimizations which can be implemented. I&#x27;d recommend people take the simple approach of using async&#x2F;await at the application layer and only change that approach if profiling demonstrates that it&#x27;s becoming a performance bottleneck.</text></comment> | <story><title>Asynchronous Programming in C#</title><url>https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md</url></story><parent_chain><item><author>bob1029</author><text>We use async&#x2F;await pretty much universally throughout our codebase today.<p>One thing to keep in mind is that this mode of programming is actually not the most performant way to handle many problems. It is simply the most expedient way to manage I&#x2F;O and spread trivial things across many cores in large, complex codebases. You can typically retrofit an existing code pile to be async-capable without a whole lot of suffering.<p>If you are trying to go as fast as possible, then async is not what you want at all. Consider that the minimum grain of a Task.Delay is 1 millisecond. A millisecond is quite a brutish unit when working with a CPU that understands nanoseconds. This isn&#x27;t even a <i>reliable</i> 1 millisecond delay either... There is a shitload of context switching and other barbarism that occurs when you employ async&#x2F;await.<p>If you are seeking millions of serialized items per second, you usually just want 1 core to do that for you. Any degree of context switching (which is what async&#x2F;await does for a living) is going to chop your serialized throughput numbers substantially. You want to batch things up and process them in chunks on a single thread that never gets a chance to yield to the OS. Only problem with this optimization is that it usually means you rewrite from zero, unless you planned for this kind of thing in advance.</text></item></parent_chain><comment><author>torginus</author><text>async&#x2F;await is not for CPU-intensive parallelism. I think that&#x27;s pretty much stated in the .NET docs. That&#x27;s why Parallel Compute APIs like Parallel.ForEeach&#x2F;For are not async. Their purpose is to enable non-blocking waits for IO, as well as to do stuff like animation on UI where you might want to execute procedural code over a larger timeframe.</text></comment> |
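The trade-off described in the async/await exchange above, yielding to a scheduler per work item versus staying on one thread and processing a whole batch, is not specific to .NET. The sketch below transposes the idea to Python's asyncio purely for illustration; the item count, function names, and timings are arbitrary and will vary by machine.
<pre><code>
import asyncio
import time

ITEMS = 100_000

async def per_item():
    # Yield to the event loop after every item: many scheduler hand-offs.
    total = 0
    for i in range(ITEMS):
        total += i
        await asyncio.sleep(0)
    return total

async def batched():
    # Process the whole batch without yielding: one uninterrupted pass.
    total = 0
    for i in range(ITEMS):
        total += i
    return total

async def main():
    for name, fn in (("per-item yields", per_item), ("single batch", batched)):
        start = time.perf_counter()
        await fn()
        print(f"{name}: {time.perf_counter() - start:.3f}s")

asyncio.run(main())
</code></pre>
On most machines the batched version is dramatically faster, which is the point made above about serialized throughput; the per-item version buys responsiveness, not speed.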
21,007,207 | 21,007,201 | 1 | 2 | 21,005,704 | train | <story><title>The cost of parsing JSON</title><url>https://v8.dev/blog/cost-of-javascript-2019#json</url></story><parent_chain><item><author>nostrebored</author><text>Why not just use promises?<p>side note: legit question, I don&#x27;t do web&#x2F;app dev</text></item><item><author>Klathmon</author><text>I find this really interesting, because at some point the absolute performance benefits of `JSON.parse` is overshadowed by the fact that it blocks the main thread.<p>I worked on an app a while ago which would have to parse 50mb+ JSON objects on mobile devices. In some cases (especially on mid-range and low-end devices) it would hang the main thread for a couple seconds!<p>So I ended up using a library called oboe.js [1] to incrementally parse the massive JSON blobs putting liberal `setTimeout`&#x27;s between each step to avoid hanging the main thread for more than about 200ms at a time.<p>This meant that it would often take 5x longer to fully parse the JSON blob than just using `JSON.parse`, but it was a much nicer UX as the UI would never hang or freeze during that process (at least perceptively), and the user wasn&#x27;t waiting on that parsing to happen to use the app, there was still more user-input I needed from them at that time. So even though it would often take 15+ seconds to parse now, the user was often spending 30+ seconds inputting more information, and now the UI would be fluid the whole time.</text></item><item><author>Tade0</author><text>I think this fragment catches the spirit of this piece:<p><i>A good rule of thumb is to apply this technique for objects of 10 kB or larger — but as always with performance advice, measure the actual impact before making any changes.</i><p>Although it may still not be worth it. At work I have this hand-rolled utility for mocking the backend using a .har file(which is a JSON). I use it to reproduce bugs found by the testers, who are kind enough to supply me both with such a file and a screencast.<p>On a MacBook Pro a 2.6MB .har file takes about 140ms to parse and process.</text></item></parent_chain><comment><author>Klathmon</author><text>Because JSON.parse blocks the thread it&#x27;s in, and JS is single threaded [1].<p>So even if you put it behind a promise, when that promise actually runs, it will block the thread.<p>In essence, using promises (or callbacks or timeouts or anything else like that) allows you to delay the thread-blocking, but once the code hits `JSON.parse`, no other javascript will run until it completes. And since no other javascript will run, the UI is entirely unresponsive during that time as well.<p>[1] Technically there are web-workers, and I looked into them to try and solve this problem. Unfortunately any complex-objects that get sent to or from a worker need to be serialized (no pass-by-reference is allowed except for a very small subset of &quot;C style&quot; arrays called TypedArrays). So while you could technically send the string to a worker and have the worker call `JSON.parse` on it to get an object, when you go to pass that object back the javascript engine will need to do an &quot;implicit&quot; `JSON.stringify` in the worker, then a `JSON.parse` in the main thread. 
Making it entirely useless for my usecase.<p>But continuing with that same thought process, I very nearly went for an architecture that used a web-worker, did the `JSON.parse` in the worker, then exposed methods that could be called from the main thread to get small amounts of data out of the worker as needed. Something like `worker.getProperty(&#x27;foo.bar.baz&#x27;)` which would only take the parsing hit for very small subsets of the data at a time. But ultimately the oboe.js solution was simpler and faster at runtime.</text></comment> | <story><title>The cost of parsing JSON</title><url>https://v8.dev/blog/cost-of-javascript-2019#json</url></story><parent_chain><item><author>nostrebored</author><text>Why not just use promises?<p>side note: legit question, I don&#x27;t do web&#x2F;app dev</text></item><item><author>Klathmon</author><text>I find this really interesting, because at some point the absolute performance benefits of `JSON.parse` is overshadowed by the fact that it blocks the main thread.<p>I worked on an app a while ago which would have to parse 50mb+ JSON objects on mobile devices. In some cases (especially on mid-range and low-end devices) it would hang the main thread for a couple seconds!<p>So I ended up using a library called oboe.js [1] to incrementally parse the massive JSON blobs putting liberal `setTimeout`&#x27;s between each step to avoid hanging the main thread for more than about 200ms at a time.<p>This meant that it would often take 5x longer to fully parse the JSON blob than just using `JSON.parse`, but it was a much nicer UX as the UI would never hang or freeze during that process (at least perceptively), and the user wasn&#x27;t waiting on that parsing to happen to use the app, there was still more user-input I needed from them at that time. So even though it would often take 15+ seconds to parse now, the user was often spending 30+ seconds inputting more information, and now the UI would be fluid the whole time.</text></item><item><author>Tade0</author><text>I think this fragment catches the spirit of this piece:<p><i>A good rule of thumb is to apply this technique for objects of 10 kB or larger — but as always with performance advice, measure the actual impact before making any changes.</i><p>Although it may still not be worth it. At work I have this hand-rolled utility for mocking the backend using a .har file(which is a JSON). I use it to reproduce bugs found by the testers, who are kind enough to supply me both with such a file and a screencast.<p>On a MacBook Pro a 2.6MB .har file takes about 140ms to parse and process.</text></item></parent_chain><comment><author>RussianCow</author><text>Promises are a way to deal with async code. Parsing JSON is synchronous and CPU-bound, so promises offer no benefit. And since web pages are single-threaded[0], there isn&#x27;t really any way you can parse JSON in the background and wait on the result.<p>[0]: There is now the Web Workers API which does allow you to run code in the background. I&#x27;ve never used it, but I have heard that it has a pretty high overhead since you have to communicate with it through message passing, so it&#x27;s possible you wouldn&#x27;t actually gain anything by using it to parse a large JSON object.<p><a href="https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;API&#x2F;Web_Workers_API&#x2F;Using_web_workers" rel="nofollow">https:&#x2F;&#x2F;developer.mozilla.org&#x2F;en-US&#x2F;docs&#x2F;Web&#x2F;API&#x2F;Web_Workers...</a></text></comment> |
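The chunk-and-yield pattern Klathmon describes (oboe.js plus setTimeout between steps) is JavaScript-specific and is not reproduced verbatim here. The following is a rough Python asyncio analogue of the same idea; the chunk size and the stand-in per-item work are placeholders, not part of any real library.
<pre><code>
import asyncio

async def process_incrementally(items, chunk_size=1_000):
    """Process items in chunks, yielding control between chunks so other
    tasks (the analogue of UI events here) are not starved."""
    results = []
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        results.extend(str(x) for x in chunk)  # stand-in for real parsing work
        await asyncio.sleep(0)                 # hand control back to the loop
    return results

async def main():
    data = list(range(10_000))
    out = await process_incrementally(data)
    print(len(out))

asyncio.run(main())
</code></pre>
As in the original anecdote, the total work goes up slightly, but no single uninterrupted stretch monopolizes the loop.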
31,637,615 | 31,637,735 | 1 | 3 | 31,637,428 | train | <story><title>No-op statements syntactically valid only since Python X.Y</title><url>https://github.com/jwilk/python-syntax-errors</url></story><parent_chain></parent_chain><comment><author>usr1106</author><text>I learned Python probably over 15 years ago. Haven&#x27;t used it very much and certainly not followed all the news.<p>I remember back then it was said the design philosphy was something like there should be one obvious way to code something correctly. In opposite to Perl where the philosohpy was that human thoughts can take weird ways and you should be able to put your thoughts to code directly. (Both characterizations from memory.)<p>Today I hear little from Perl. And with new syntax added to Python in every version I start to wonder how far away Python is drifting away from that original characterization above.</text></comment> | <story><title>No-op statements syntactically valid only since Python X.Y</title><url>https://github.com/jwilk/python-syntax-errors</url></story><parent_chain></parent_chain><comment><author>dvt</author><text>If you <i>really</i> need to do this, it should be done like so in some sort of bootstrap file that won&#x27;t run into parsing&#x2F;lexing errors:<p><pre><code> import sys
if sys.version_info[:2] &gt;= (3, 1):
    from your_module import whatever
else:
    raise Exception(&quot;You need Python &gt;= 3.1&quot;)
</code></pre>
I really hope no one&#x27;s seriously using this repo to get any ideas.</text></comment> |
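For contrast with the explicit bootstrap check above, the trick the linked repository appears to catalogue (judging from its title) is a do-nothing statement whose syntax only parses on newer interpreters, so older ones fail immediately with a SyntaxError. A minimal sketch, assuming f-strings (added in Python 3.6) as the gating feature:
<pre><code>
# A no-op that is only syntactically valid on Python >= 3.6 (f-strings).
# On older interpreters the module fails to parse at all, which gives the
# fail-fast SyntaxError behaviour discussed in the thread above.
f""  # empty f-string literal, evaluated and discarded

print("running on an interpreter that understands f-strings")
</code></pre>
Whether that is clearer than an explicit version check is exactly the disagreement between the two comments above.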
21,815,704 | 21,815,090 | 1 | 2 | 21,813,636 | train | <story><title>Climbing the Wealth Ladder</title><url>https://ofdollarsanddata.com/climbing-the-wealth-ladder/</url></story><parent_chain><item><author>hultner</author><text>I wouldn&#x27;t fully agree with this, I&#x27;d say income that non-linear to amount time spent it the highest driving factor.<p>Let&#x27;s take an example.
I&#x27;m going to use Swedish living conditions and salaries as an example as that&#x27;s where I live but they are close to most European&#x2F;western countries.
We have a frugal grocery store cashier who makes $2000 USD&#x2F;month after taxes.
We have a high-level executive in a medium-sized company who makes $5000 USD&#x2F;month after taxes.<p>The grocery worker lives like a student (student loans&#x2F;benefits in Sweden are about $1000 USD&#x2F;mo) and saves the rest on the stock market (assuming an average yearly yield with reinvested dividends at 8.5%), efficiently saving $1000 USD&#x2F;month for future non-linear income.
The cashier plans to stop working early at 55, at a net worth of about $2 900 000 USD.
This allows for a safe withdrawal rate of about $10 000 USD&#x2F;month.
At this level traveling&#x2F;vacation expenses aren&#x27;t a problem for the cashier. Housing probably still is, but they can most likely live comfortably even if they decide to rent. They can also easily increase their monthly spending allowance by 10% if they decide to continue working half time.<p>At the same time our executive has been burning through every pay-check and has effectively no net worth at the same age; sure, there&#x27;s plenty enough to lease a nice car and not care about restaurant bills, but more expensive traveling will still be a setback and vacation time per year is certainly limited, as income is still linear to time spent.<p>So in this example we have a cashier ending up as a multi-millionaire set for life at 55 with no need to work a day more, and a high-level executive who&#x27;d be back at 0 without the job and a strictly linear income.<p>Sure, salary increases do matter, but in the long run underspending matters more.
With a longterm plan it&#x27;s possible to become a multi millionaire with an entry level job, for instance there&#x27;s a famous Swedish railroad worker who recently passed away who ended up with somewhere around $17 000 000 USD net worth at the time of his death, and achieved this by living under his expenses with a low salary and investing the rest, he initially turned to the stock market because he couldn&#x27;t afford a house in his youth. Not Jay-Z levels for sure but well beyond what most people would consider very wealthy.</text></item><item><author>disintegore</author><text>I understand this isn&#x27;t the point of the article, but it seems like a roundabout way of saying &quot;don&#x27;t overspend&quot;. This part in particular bothers me :<p>&gt; More importantly though, the best way to climb the wealth ladder is to spend money according to your level.<p>As far as I (a non-economist) can personally tell, any notion of climbing up some abstract wealth ladder is synonym with a salary increase for the vast majority of people. Other methods, whether they involve quantity of free time or already-available money, are intrinsically tied to the quality of your job or, failing that, the quality of your parents&#x27; or partner&#x27;s jobs.<p>Personal net worth, while definitely an important factor in this equation, is far less so than income in my opinion. A fiscally irresponsibly professional worker living from paycheck to paycheck has &quot;grocery freedom&quot; while a person with 10,000$ of accumulated wealth and no income whatsoever (let&#x27;s say they are between jobs) is far more likely to buy the store brand margarine. Similarly, the former will most likely not achieve &quot;travel freedom&quot; without decades of hard work, of careful spending, of saving, investing, etc.<p>Simply put, no amount of &quot;not carelessly booking flights&quot; will turn you into Jay-Z, let alone into that small business owner across the street with the McMansion and the gaudy Christmas decorations. The undisputed &quot;best way&quot; to climb the wealth ladder is to receive large amounts of cash from some external source.</text></item></parent_chain><comment><author>AlexanderNull</author><text>No family? Also if we were talking about the US (which the article was) then you have to factor in healthcare, average of $200 a month for the insurance IF it&#x27;s sponsored (mostly covered) by their employer. If the grocery worker is living in any major city then rent is bare minimum of $600 a month for the privilege of living with at least 2 other flatmates or around $1000 for a pretty small apartment. So this person&#x27;s already out half their monthly income. Let&#x27;s say transportation costs are low and that this person only takes the bus to get everywhere, that&#x27;s an extra $100 a month (assuming lack of adequate transportation doesn&#x27;t also ending up costing them their job due to frequent late starts). Now this person also has to pay at least some utilities depending on where they&#x27;re living so a rough estimate would be around $120 a month for that. Now we look at food, average grocery costs per person in the US is $220 a month. This person also needs clothing obviously, if they go on the super low side they could get by on ~$100 a month over all. If this is it for their spartan life style, meaning no cell phone, no internet, no eating out, no leisure activities that cost money, no hobbies, no children, no trips to the doctor (where medical deductibles kick in), no car, no netflix etc. 
This person has potentially $460 a month to save.<p>That&#x27;s 100% best case scenario living a lifestyle that basically consists of going to work, going home, eating food, sleeping on the floor of their apartment (we didn&#x27;t budget for furnishings but w&#x2F;e it&#x27;s better for the back) using the public library and walks around the park for sole avenues of entertainment. ALSO: THERE&#x27;S NO KIDS IN THIS SCENARIO. Some of those estimates I lowballed also (like food as that&#x27;s average cost of groceries for people who also get calories from eating out at restaurants) so this is really not a realistic estimate here.<p>In reality though there&#x27;s plenty of hidden costs that this person will incur and even assuming they do manage to stay kidsless they&#x27;ll be lucky to put away $100 a month giving them a best case scenario of $250k in savings by 55. Not terrible for them, but not enough to retire on in the US when you think of medical expenses incurred later on in life.</text></comment> | <story><title>Climbing the Wealth Ladder</title><url>https://ofdollarsanddata.com/climbing-the-wealth-ladder/</url></story><parent_chain><item><author>hultner</author><text>I wouldn&#x27;t fully agree with this, I&#x27;d say income that non-linear to amount time spent it the highest driving factor.<p>Let&#x27;s take an example.
I&#x27;m going to use Swedish living conditions and salaries as an example as that&#x27;s where I live but they are close to most European&#x2F;western countries.
We have a frugal grocery store cashier who makes $2000 USD&#x2F;month after taxes.
We have a high-level executive in a medium-sized company who makes $5000 USD&#x2F;month after taxes.<p>The grocery worker lives like a student (student loans&#x2F;benefits in Sweden are about $1000 USD&#x2F;mo) and saves the rest on the stock market (assuming an average yearly yield with reinvested dividends at 8.5%), efficiently saving $1000 USD&#x2F;month for future non-linear income.
The cashier plans to stop working early at 55, at a net worth of about $2 900 000 USD.
This allows for a safe withdrawal rate of about $10 000 USD&#x2F;month.
At this level traveling&#x2F;vacation expenses aren&#x27;t a problem for the cashier. Housing probably still is, but they can most likely live comfortably even if they decide to rent. They can also easily increase their monthly spending allowance by 10% if they decide to continue working half time.<p>At the same time our executive has been burning through every pay-check and has effectively no net worth at the same age; sure, there&#x27;s plenty enough to lease a nice car and not care about restaurant bills, but more expensive traveling will still be a setback and vacation time per year is certainly limited, as income is still linear to time spent.<p>So in this example we have a cashier ending up as a multi-millionaire set for life at 55 with no need to work a day more, and a high-level executive who&#x27;d be back at 0 without the job and a strictly linear income.<p>Sure, salary increases do matter, but in the long run underspending matters more.
With a longterm plan it&#x27;s possible to become a multi millionaire with an entry level job, for instance there&#x27;s a famous Swedish railroad worker who recently passed away who ended up with somewhere around $17 000 000 USD net worth at the time of his death, and achieved this by living under his expenses with a low salary and investing the rest, he initially turned to the stock market because he couldn&#x27;t afford a house in his youth. Not Jay-Z levels for sure but well beyond what most people would consider very wealthy.</text></item><item><author>disintegore</author><text>I understand this isn&#x27;t the point of the article, but it seems like a roundabout way of saying &quot;don&#x27;t overspend&quot;. This part in particular bothers me :<p>&gt; More importantly though, the best way to climb the wealth ladder is to spend money according to your level.<p>As far as I (a non-economist) can personally tell, any notion of climbing up some abstract wealth ladder is synonym with a salary increase for the vast majority of people. Other methods, whether they involve quantity of free time or already-available money, are intrinsically tied to the quality of your job or, failing that, the quality of your parents&#x27; or partner&#x27;s jobs.<p>Personal net worth, while definitely an important factor in this equation, is far less so than income in my opinion. A fiscally irresponsibly professional worker living from paycheck to paycheck has &quot;grocery freedom&quot; while a person with 10,000$ of accumulated wealth and no income whatsoever (let&#x27;s say they are between jobs) is far more likely to buy the store brand margarine. Similarly, the former will most likely not achieve &quot;travel freedom&quot; without decades of hard work, of careful spending, of saving, investing, etc.<p>Simply put, no amount of &quot;not carelessly booking flights&quot; will turn you into Jay-Z, let alone into that small business owner across the street with the McMansion and the gaudy Christmas decorations. The undisputed &quot;best way&quot; to climb the wealth ladder is to receive large amounts of cash from some external source.</text></item></parent_chain><comment><author>bronco21016</author><text>I think you’re both right. The combination of multiplying income while also saving more and more of that income is the key to jumping wealth levels.<p>It’s certainly possible to live a comfortable life on the cashiers plan but if the executive spent just half of their career living like the cashier then he would quickly pass up the cashier on the wealth ladder.</text></comment> |
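The cashier arithmetic in the thread above can be sanity-checked in a few lines. The comment does not state a starting age or a compounding convention, so the sketch below assumes saving from age 18 to 55 (37 years), monthly contributions, monthly compounding at a nominal 8.5% per year, and the classic 4% withdrawal rule; all of these are assumptions, not figures from the source.
<pre><code>
# Back-of-the-envelope check of the "frugal cashier" scenario.
monthly_saving = 1_000   # USD saved per month (from the comment)
annual_return = 0.085    # average yearly yield, reinvested (from the comment)
months = 37 * 12         # assumption: saving from age 18 to 55

balance = 0.0
for _ in range(months):
    balance = balance * (1 + annual_return / 12) + monthly_saving

safe_withdrawal = balance * 0.04 / 12  # assumption: 4% rule, taken monthly

print(f"balance at 55:   ${balance:,.0f}")              # ~ $3.1M with these inputs
print(f"safe withdrawal: ${safe_withdrawal:,.0f}/month")  # ~ $10K/month
</code></pre>
That lands in the same ballpark as the $2 900 000 and $10 000 USD/month figures quoted, which is all a sketch like this can show.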
39,263,469 | 39,263,802 | 1 | 2 | 39,263,106 | train | <story><title>Report on the role of standardized test scores in undergraduate admissions [pdf]</title><url>https://home.dartmouth.edu/sites/home/files/2024-02/sat-undergrad-admissions.pdf</url></story><parent_chain><item><author>ccleve</author><text>Here&#x27;s the money quote:<p>&gt;&gt; Importantly, these test scores better position Admissions to identify high-achieving less-advantaged applicants and high-achieving applicants who attend high schools for which Dartmouth has less information to interpret the transcripts.<p>Precisely. SATs aren&#x27;t about hurting the poor or disadvantaged. It&#x27;s about <i>giving them a chance.</i></text></item></parent_chain><comment><author>msravi</author><text>And this:<p>&gt;&gt; Third, under test-optional policies, some less-advantaged students withhold test scores even in
cases where providing the test score would be a significant positive signal to Admissions. Importantly, Dartmouth Admissions uses SAT scores within context; a score of 1400 for an applicant from a high school in a lower-income community with lower school-wide test scores is a more significant achievement than a score of 1400 for an applicant from a high school in a higher-income community with higher school-wide test scores. Admissions uses numerous detailed measures of outcomes at the high school and neighborhood levels to account for these known disadvantages. As one example, Admissions computes a measure of how each applicant performs on standardized tests relative to the aggregate score of all test-takers in their high
school, using data available from the College Board.</text></comment> | <story><title>Report on the role of standardized test scores in undergraduate admissions [pdf]</title><url>https://home.dartmouth.edu/sites/home/files/2024-02/sat-undergrad-admissions.pdf</url></story><parent_chain><item><author>ccleve</author><text>Here&#x27;s the money quote:<p>&gt;&gt; Importantly, these test scores better position Admissions to identify high-achieving less-advantaged applicants and high-achieving applicants who attend high schools for which Dartmouth has less information to interpret the transcripts.<p>Precisely. SATs aren&#x27;t about hurting the poor or disadvantaged. It&#x27;s about <i>giving them a chance.</i></text></item></parent_chain><comment><author>crazygringo</author><text>Which was, of course, a major justification for creating the SAT in the first place:<p>&gt; <i>First offered in 1926 by the College Board... Early proponents of the SAT argued that the admissions exam made higher education more meritocratic. Admissions officers at elite institutions like Harvard believed the test would identify talented applicants at less academically strong high schools and accelerate their journeys into higher education.</i> [1]<p>[1] <a href="https:&#x2F;&#x2F;www.bestcolleges.com&#x2F;blog&#x2F;history-of-sat&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.bestcolleges.com&#x2F;blog&#x2F;history-of-sat&#x2F;</a></text></comment> |
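The quoted report says Admissions compares an applicant's score with the aggregate score of test-takers at the same high school, but it does not publish a formula. One purely illustrative way to express such a relative measure is a z-score; the cohort numbers below are invented, and the choice of a z-score is an assumption, not the report's method.
<pre><code>
from statistics import mean, stdev

# Hypothetical school-wide SAT scores and one applicant's score.
school_sat_scores = [1050, 1120, 980, 1210, 1000, 1090]
applicant_score = 1400

mu = mean(school_sat_scores)
sigma = stdev(school_sat_scores)
relative_performance = (applicant_score - mu) / sigma  # one possible "within context" measure

print(f"school mean: {mu:.0f}, applicant z-score: {relative_performance:.2f}")
</code></pre>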
8,173,765 | 8,173,650 | 1 | 2 | 8,172,565 | train | <story><title>A Brazilian Wunderkind Who Calms Chaos</title><url>http://www.simonsfoundation.org/quanta/20140812-a-brazilian-wunderkind-who-calms-chaos/</url><text></text></story><parent_chain></parent_chain><comment><author>ealloc</author><text>He is doing an investigation into the underlying foundations of statistical mechanics.<p>Statistical Mechanics and thermodynamics are the basis for a huge amount of technology and scientific models of the world, yet they rely on a fundamental assumption which is in some sense unjustified, known as the &#x27;Ergodic hypothesis&#x27;: Even though (classically) we know that the current position of gas particles in a box can be determined from their positions in the past, in thermodynamics we make the (unjustified) assumption that their positions are actually random and independent of their previous positions. In other words, these models for the world are probabilistic, which contradicts our more fundamental models of the world which say it is deterministic (and even QM is deterministic, with the single exception of the born rule). What he&#x27;s doing here helps justify the probabilistic treatment, and understand when it does or does not apply.<p>I have always though this to be one of the great &#x27;foundations&#x27; questions in physics (The others being QM foundations&#x2F;origin of the born rule, and foundations of Field Theory). These are &#x27;hard&#x27; and borderline philosophical questions, which most scientists (with good reason) simply assume to be true, to the point they often find them uninteresting. Lately though there seems to be renewed interest in them.</text></comment> | <story><title>A Brazilian Wunderkind Who Calms Chaos</title><url>http://www.simonsfoundation.org/quanta/20140812-a-brazilian-wunderkind-who-calms-chaos/</url><text></text></story><parent_chain></parent_chain><comment><author>temuze</author><text>This is Brazil&#x27;s first Fields Medal - no Brazilian citizen has won a Nobel prize either. Obviously, this is a big deal for our country.</text></comment> |
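The ergodic hypothesis mentioned in the comment above is usually stated as: long-run time averages along a single trajectory equal ensemble (probability) averages. The toy below illustrates that equality with a small random walk that is known to be ergodic; it is only an illustration of the statement, not a model of a gas.
<pre><code>
import random
from collections import Counter

random.seed(0)
states, steps = 5, 200_000

x = 0
visits = Counter()
for _ in range(steps):
    x = (x + random.choice((-1, 1))) % states  # symmetric walk on a 5-state ring
    visits[x] += 1

time_average = {s: visits[s] / steps for s in range(states)}
ensemble_average = 1 / states  # uniform stationary distribution of this walk

print("time averages:   ", {s: round(p, 3) for s, p in sorted(time_average.items())})
print("ensemble average:", ensemble_average)
</code></pre>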
27,465,532 | 27,465,201 | 1 | 3 | 27,462,263 | train | <story><title>Software is eating the car</title><url>https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/software-eating-car</url></story><parent_chain><item><author>jacquesm</author><text>Nothing worse than automotive software. Buggy, slow, terrible user interfaces, outright dangerous and in many ways much worse than the systems they replace or augment.<p>The automotive industry has a long long way to come - assuming it will happen at all - before they can be said to be responsible software vendors.<p>Case in point: my - former - C class Mercedes that made two pretty good attempts to kill me by slamming on the brakes in a situation where that was totally unexpected and caused a perfectly safe situation to turn into a critical one. If not for playing ping pong for many years I highly doubt I would be writing this. After the first instance I had the whole car checked out to see if there was any fault in the system, the answer was that it was all working perfectly (that time the car had braked whilst on a very narrow bridge sending the car into a skid which I managed to correct before going over the side). Three weeks later it did it again, this time apparently because an advertising sign in a turn generated such a strong radar return that the car thought I was about to have a frontal collision. Again, out of nowhere an emergency stop.<p>I sold the car and got one where the most complex piece of software is the aftermarket radio, it has ABS and an ignition control computer but nothing in the way of &#x27;advanced safety features&#x27;.<p>My vehicle actively trying to kill me is something I can do without.<p>So: as far as I&#x27;m concerned <i>much</i> less software on board of cars, open source it all if possible and roll it out much slower so we can get the bugs out.</text></item></parent_chain><comment><author>liquidise</author><text>I’ll pile on with my own anecdote. Last winter I flew home to Maine to visit family. Rented a ‘21 Nissan Altima. It drove well until there was typical New England snow&#x2F;slush mix.<p>While driving on a flat straight stretch of road the car suddenly... yanked itself sideways. Thank god no oncoming traffic was present and I was able to course correct safely. I immediately drove home and paid careful attention to the wheel response. It kept feeling like it wanted to yank me off the road.<p>Once home i broke open the manual and found 4 different “driver assist” and “driver comfort” functions. After disabling them all the terrifying behavior ceased.<p>I’ve lived my whole life in Maine, Rochester NY and Colorado. I’ve never felt as unsafe in a car as I did with those software features enabled in about an inch of snow.<p>Bonus, it also has collision detection warnings on the side of the car. It was convinced every puddle I drove through that splashed slush beside the car was an object I was about to collide with.</text></comment> | <story><title>Software is eating the car</title><url>https://spectrum.ieee.org/cars-that-think/transportation/advanced-cars/software-eating-car</url></story><parent_chain><item><author>jacquesm</author><text>Nothing worse than automotive software. 
Buggy, slow, terrible user interfaces, outright dangerous and in many ways much worse than the systems they replace or augment.<p>The automotive industry has a long long way to come - assuming it will happen at all - before they can be said to be responsible software vendors.<p>Case in point: my - former - C class Mercedes that made two pretty good attempts to kill me by slamming on the brakes in a situation where that was totally unexpected and caused a perfectly safe situation to turn into a critical one. If not for playing ping pong for many years I highly doubt I would be writing this. After the first instance I had the whole car checked out to see if there was any fault in the system, the answer was that it was all working perfectly (that time the car had braked whilst on a very narrow bridge sending the car into a skid which I managed to correct before going over the side). Three weeks later it did it again, this time apparently because an advertising sign in a turn generated such a strong radar return that the car thought I was about to have a frontal collision. Again, out of nowhere an emergency stop.<p>I sold the car and got one where the most complex piece of software is the aftermarket radio, it has ABS and an ignition control computer but nothing in the way of &#x27;advanced safety features&#x27;.<p>My vehicle actively trying to kill me is something I can do without.<p>So: as far as I&#x27;m concerned <i>much</i> less software on board of cars, open source it all if possible and roll it out much slower so we can get the bugs out.</text></item></parent_chain><comment><author>6gvONxR4sf7o</author><text>&gt; The automotive industry has a long long way to come - assuming it will happen at all - before they can be said to be responsible software vendors.<p>It&#x27;s worse than that. The very best in the software industry has a reliability problem. And carmakers are certainly not among the best in the software industry.</text></comment> |
34,378,155 | 34,378,135 | 1 | 3 | 34,377,910 | train | <story><title>We’ve filed a lawsuit challenging Stable Diffusion</title><url>https://stablediffusionlitigation.com/</url></story><parent_chain><item><author>dr_dshiv</author><text>“Stable Diffusion contains unauthorized copies of millions—and possibly billions—of copyrighted images.”<p>That’s going to be hard to argue. Where are the copies?<p>“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.“<p>“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”<p>The examples of training diffusion (eg, reconstructing a picture out of noise) will be core to their argument in court. Certainly during training the goal is to reconstruct original images out of noise. But, do they exist in SD as copies? Idk</text></item></parent_chain><comment><author>yazaddaruvala</author><text>&gt; That’s going to be hard to argue. Where are the copies?<p>In fairness, Diffusion is arguably a very complex entropy coding similar to Arithmetic&#x2F;Huffman coding.<p>Given that copyright is protectable even on compressed&#x2F;encrypted files, it seems fair that the “container of compressed bytes” (in this case the Diffusion model) does “contain” the original images no differently than a compressed folder of images contains the original images.<p>A lawyer&#x2F;researcher would likely win this case if they re-create 90%ish of a single input image from the diffusion model with text input.</text></comment> | <story><title>We’ve filed a lawsuit challenging Stable Diffusion</title><url>https://stablediffusionlitigation.com/</url></story><parent_chain><item><author>dr_dshiv</author><text>“Stable Diffusion contains unauthorized copies of millions—and possibly billions—of copyrighted images.”<p>That’s going to be hard to argue. Where are the copies?<p>“Having copied the five billion images—without the consent of the original artists—Stable Diffusion relies on a mathematical process called diffusion to store compressed copies of these training images, which in turn are recombined to derive other images. It is, in short, a 21st-century collage tool.“<p>“Diffusion is a way for an AI program to figure out how to reconstruct a copy of the training data through denoising. Because this is so, in copyright terms it’s no different from an MP3 or JPEG—a way of storing a compressed copy of certain digital data.”<p>The examples of training diffusion (eg, reconstructing a picture out of noise) will be core to their argument in court. Certainly during training the goal is to reconstruct original images out of noise. But, do they exist in SD as copies? Idk</text></item></parent_chain><comment><author>akjetma</author><text>I don&#x27;t think you have to reproduce an entire original work to demonstrate copyright violation. Think about sampling in hip hop for example. A 2 second sample, distorted, re-pitched, etc. can be grounds for a copyright violation.</text></comment> |
14,045,924 | 14,045,945 | 1 | 3 | 14,045,603 | train | <story><title>It’s now illegal in Russia to share an image of Putin as a gay clown</title><url>https://www.washingtonpost.com/news/worldviews/wp/2017/04/05/its-now-illegal-in-russia-to-share-an-image-of-putin-as-a-gay-clown/?tid=sm_tw</url></story><parent_chain></parent_chain><comment><author>Mendenhall</author><text>That only took a couple years to go from internet extremism to gay putin clowns.<p><a href="https:&#x2F;&#x2F;themoscowtimes.com&#x2F;news&#x2F;internet-extremism-bill-passes-first-duma-reading-30613" rel="nofollow">https:&#x2F;&#x2F;themoscowtimes.com&#x2F;news&#x2F;internet-extremism-bill-pass...</a><p>Something, something, slippery slope?</text></comment> | <story><title>It’s now illegal in Russia to share an image of Putin as a gay clown</title><url>https://www.washingtonpost.com/news/worldviews/wp/2017/04/05/its-now-illegal-in-russia-to-share-an-image-of-putin-as-a-gay-clown/?tid=sm_tw</url></story><parent_chain></parent_chain><comment><author>sremani</author><text><a href="http:&#x2F;&#x2F;foreignpolicy.com&#x2F;2011&#x2F;06&#x2F;30&#x2F;countries-where-you-could-go-to-jail-for-calling-the-president-a-dick&#x2F;" rel="nofollow">http:&#x2F;&#x2F;foreignpolicy.com&#x2F;2011&#x2F;06&#x2F;30&#x2F;countries-where-you-coul...</a><p>Not condoning what&#x27;s happening in Russia, just pointing out how prevalent similar practices are around the world.</text></comment> |
18,367,261 | 18,367,296 | 1 | 2 | 18,366,011 | train | <story><title>Goldman Sachs Ensnarled in Vast 1MDB Fraud Scandal</title><url>https://www.nytimes.com/2018/11/01/business/goldman-sachs-malaysia-investment-fund.html</url></story><parent_chain><item><author>beloch</author><text>Well, at least they didn&#x27;t start another famine[1]... <i>this</i> time.<p>Question: How does Goldman Sachs <i>still</i> get away with this stuff? Due to their lengthy rap sheet, <i>multiple</i> governments and every law enforcement agency in the world <i>should</i> be watching their every action. It&#x27;s a safe bet they&#x27;re going to do something illegal again... and again... and again. Why aren&#x27;t they being caught in the act and stopped before they can waltz off with the profits? Why haven&#x27;t they been shut down or barred from doing business in the jurisdictions they&#x27;ve screwed over? Malaysia is going after Goldman Sachs for reparations, but have they kicked the company out of Malaysia?<p>[1]<a href="https:&#x2F;&#x2F;www.independent.co.uk&#x2F;voices&#x2F;commentators&#x2F;johann-hari&#x2F;johann-hari-how-goldman-gambled-on-starvation-2016088.html" rel="nofollow">https:&#x2F;&#x2F;www.independent.co.uk&#x2F;voices&#x2F;commentators&#x2F;johann-har...</a></text></item></parent_chain><comment><author>browsercoin</author><text>Well, once you realize that essentially, large American Fortune 500 companies are actively supported by its spy agency (hint: They Hate Snowden) through a veil of foreign aid and other friendly name sounding causes such as &quot;volunteers of Amazonian language preservation of America&quot; calling in military assets to put down a rebellion from the foreign country, that get in their way of the lootin of their country.<p>The book &#x27;Confessions of Economic Hitman&#x27; perfectly illustrates a sort of <i>&quot;active measures&quot;</i> that take place with increasing lethal consequences for those that oppose them.<p><pre><code> 1. identify elites in a country (ex. bankers, biz ppl)
2. lucrative contracts and monopoly handed out to elites via government.
3. profit???
4. If no profit, use bribe to corrupt key players.
5. If still no profit, send in the jackals (what EH calls professional assassins), put poison in cigars, drone strike.
6. If tradecraft fails, then they put boots on the ground.
</code></pre>
South America is a very good example. It&#x27;s been thoroughly pillaged by American corporations now for over a century, which has given rise to narco-economy.<p><a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=NKwJI9axblQ" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=NKwJI9axblQ</a></text></comment> | <story><title>Goldman Sachs Ensnarled in Vast 1MDB Fraud Scandal</title><url>https://www.nytimes.com/2018/11/01/business/goldman-sachs-malaysia-investment-fund.html</url></story><parent_chain><item><author>beloch</author><text>Well, at least they didn&#x27;t start another famine[1]... <i>this</i> time.<p>Question: How does Goldman Sachs <i>still</i> get away with this stuff? Due to their lengthy rap sheet, <i>multiple</i> governments and every law enforcement agency in the world <i>should</i> be watching their every action. It&#x27;s a safe bet they&#x27;re going to do something illegal again... and again... and again. Why aren&#x27;t they being caught in the act and stopped before they can waltz off with the profits? Why haven&#x27;t they been shut down or barred from doing business in the jurisdictions they&#x27;ve screwed over? Malaysia is going after Goldman Sachs for reparations, but have they kicked the company out of Malaysia?<p>[1]<a href="https:&#x2F;&#x2F;www.independent.co.uk&#x2F;voices&#x2F;commentators&#x2F;johann-hari&#x2F;johann-hari-how-goldman-gambled-on-starvation-2016088.html" rel="nofollow">https:&#x2F;&#x2F;www.independent.co.uk&#x2F;voices&#x2F;commentators&#x2F;johann-har...</a></text></item></parent_chain><comment><author>justboxing</author><text>&gt; Question: How does Goldman Sachs still get away with this stuff?<p>Answer: Because there&#x27;s a history of Goldman Sachs executives going from Wall Street to Washington DC (Government) taking up top positions - ex: Paulson as Treasury Secretary - so they can create and change Laws to suit their business.<p>In short, it&#x27;s like a Thief who gets himself appointed as the Judge for his own Trial.<p>&gt; As the man who presided over the biggest market meltdown in recent U.S. history, he has become a symbol of two worlds colliding—he&#x27;s the Goldman Sachs CEO who came to Washington and then had to bail out his old friends on Wall Street.<p>Source: <a href="https:&#x2F;&#x2F;www.theatlantic.com&#x2F;business&#x2F;archive&#x2F;2014&#x2F;02&#x2F;hank-paulson-mastered-wall-street-and-washington-and-now-he-trusts-neither&#x2F;283572&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.theatlantic.com&#x2F;business&#x2F;archive&#x2F;2014&#x2F;02&#x2F;hank-pa...</a><p>Related: [2009] Why Goldman Always Wins =&gt; <a href="https:&#x2F;&#x2F;www.theatlantic.com&#x2F;magazine&#x2F;archive&#x2F;2009&#x2F;10&#x2F;why-goldman-always-wins&#x2F;307653&#x2F;" rel="nofollow">https:&#x2F;&#x2F;www.theatlantic.com&#x2F;magazine&#x2F;archive&#x2F;2009&#x2F;10&#x2F;why-gol...</a></text></comment> |
27,828,363 | 27,828,235 | 1 | 2 | 27,827,203 | train | <story><title>Dividend Cripples Saudi Aramco</title><url>https://oilprice.com/Energy/Energy-General/Huge-Dividend-Cripples-Worlds-Largest-Oil-Company.html</url></story><parent_chain><item><author>moralestapia</author><text>I can&#x27;t feel bad for them.<p>I lived there for four years (at KAUST), I am deeply familiar with the country and its people. The Saudis are nice and well-intended, but a few bad apples spoil the bunch.<p>I left after my 4yo daughter was kidnapped(!) when I refused to sign some papers regarding my work situation. I am not making this up. The kidnapping was carried away by an Australian professor and a couple American guys working there, but when I tried to look for help I was horrified that this seemed to be business as usual and no one even batted an eye, no jurisdiction. Fortunately, we are safe now and doing better than ever, but, what a story.<p>Until they fix many of these things it will be very hard for them to establish a thriving economy.</text></item></parent_chain><comment><author>michaelmrose</author><text>From the leaked diplomatic cables.<p>“Donors in Saudi Arabia constitute the most significant source of funding to Sunni terrorist groups worldwide,”<p>“More needs to be done since Saudi Arabia remains a critical financial support base for al-Qaeda, the Taliban, LeT, and other terrorist groups, including Hamas, which probably raise millions of dollars annually from Saudi sources.”<p>It&#x27;s fertile ground for raising money to support terrorism not because a few bad apples support but because it enjoys broad support. Although data is somewhat scarce a survey in 2003 found that 53% supported Bin Laden for example.<p><a href="https:&#x2F;&#x2F;icct.nl&#x2F;app&#x2F;uploads&#x2F;2017&#x2F;02&#x2F;ICCT-Schmid-Muslim-Opinion-Polls-Jan2017-1.pdf" rel="nofollow">https:&#x2F;&#x2F;icct.nl&#x2F;app&#x2F;uploads&#x2F;2017&#x2F;02&#x2F;ICCT-Schmid-Muslim-Opini...</a> Page 18<p>Which is probably why for example US made weapons have found their way via Saudi Arabia to al-Qaeda and ISIS linked groups.<p><a href="https:&#x2F;&#x2F;www.aljazeera.com&#x2F;news&#x2F;2019&#x2F;2&#x2F;5&#x2F;saudi-arabia-uae-gave-us-arms-to-al-qaeda-linked-groups-report" rel="nofollow">https:&#x2F;&#x2F;www.aljazeera.com&#x2F;news&#x2F;2019&#x2F;2&#x2F;5&#x2F;saudi-arabia-uae-gav...</a><p>Basically the bad apples are in fact somewhere between a majority and a near majority and they control the country.</text></comment> | <story><title>Dividend Cripples Saudi Aramco</title><url>https://oilprice.com/Energy/Energy-General/Huge-Dividend-Cripples-Worlds-Largest-Oil-Company.html</url></story><parent_chain><item><author>moralestapia</author><text>I can&#x27;t feel bad for them.<p>I lived there for four years (at KAUST), I am deeply familiar with the country and its people. The Saudis are nice and well-intended, but a few bad apples spoil the bunch.<p>I left after my 4yo daughter was kidnapped(!) when I refused to sign some papers regarding my work situation. I am not making this up. The kidnapping was carried away by an Australian professor and a couple American guys working there, but when I tried to look for help I was horrified that this seemed to be business as usual and no one even batted an eye, no jurisdiction. 
Fortunately, we are safe now and doing better than ever, but what a story.<p>Until they fix many of these things it will be very hard for them to establish a thriving economy.</text></item></parent_chain><comment><author>nathanvanfleet</author><text>I don&#x27;t really get how, on the one hand, there are only a few bad apples, but on the other, kidnapping a child is business as usual within their society?</text></comment>
10,148,537 | 10,148,624 | 1 | 3 | 10,147,774 | train | <story><title>Sandstorm Oasis hosting open beta and App Market</title><url>https://blog.sandstorm.io/news/2015-08-31-oasis-beta-launch.html</url></story><parent_chain></parent_chain><comment><author>kentonv</author><text>One thing we&#x27;ll do before coming out of beta is make sure we can scale seamlessly to these impromptu HN load tests. :) Sorry it&#x27;s a bit slow right now! You can, of course, install your own and have all the resources to yourself:<p><a href="https:&#x2F;&#x2F;sandstorm.io&#x2F;install&#x2F;" rel="nofollow">https:&#x2F;&#x2F;sandstorm.io&#x2F;install&#x2F;</a><p>Edit: Things seem a bit better now. This is our first time getting a load spike on Blackrock (codename for the &quot;enterprise&quot; add-on to Sandstorm which can utilize a whole cluster, which we use to run Oasis); previously, it was always the single-machine demo.<p>The problem is, ironically, we haven&#x27;t parallelized our front-end yet. We are seamlessly distributing the apps themselves across way more machines than we actually need, but everyone is connecting to the same instance of the surrounding management UI. Honestly I didn&#x27;t prioritize this because I thought our reinvented-wheel back-end was going to be the bigger bottleneck -- whoops.<p>Luckily this shouldn&#x27;t be too hard to fix as the front-end is <i>mostly</i> stateless. Probably fixed in a week or two.<p>Nice to know what to focus on, in any case! Thanks for testing!</text></comment> | <story><title>Sandstorm Oasis hosting open beta and App Market</title><url>https://blog.sandstorm.io/news/2015-08-31-oasis-beta-launch.html</url></story><parent_chain></parent_chain><comment><author>yebyen</author><text>I have been happily using the Sandstorm closed alpha since it was available to Kickstarter users and I have been very happy with the progress. There are still places where the grain paradigm needs to be fleshed out a bit, like I had to explain to my old boss that in Sandstorm Gitlab there is no user, there are only allowed and deleted web keys...<p>When I explained that we might use this for a project or two, if it turned out to be too much trouble to put our GitLab instance with all its private repos on the public internet, he understood right away.<p>Up until that point he basically knew what Sandstorm was all about and he was saying &quot;I don&#x27;t think Sandstorm is for us.&quot; I&#x27;m still not sure he sees, but the free open beta and the app store will probably work to help convert him. He loves app stores. (Why not add, a shameless plug for Synology DSM OS with its Docker Hub support!)</text></comment> |
30,955,027 | 30,954,686 | 1 | 2 | 30,952,056 | train | <story><title>Firefox DNS-over-HTTPS</title><url>https://support.mozilla.org/en-US/kb/firefox-dns-over-https</url></story><parent_chain><item><author>Animats</author><text><i>Firefox by default directs DoH queries to DNS servers that are operated by a &quot;trusted partner&quot;.</i><p>That&#x27;s what I don&#x27;t want - Firefox offering services.<p>Once you have a centralized server, with a huge number of minor queries passing through it, the operators get uppity. They start thinking they have editorial authority. Someone will decide that the DNS server should censor something. Child porn is the usual excuse, and then, after a while, you can&#x27;t see sites that mention Tienanmen Square or Ukraine any more.<p>I&#x27;m quite happy with Sonic&#x27;s classic DNS server. It just answers DNS queries and forwards requests to the appropriate upstream DNS server as required.</text></item></parent_chain><comment><author>hiq</author><text>This is configurable though (it&#x27;s only a default), and is still better than the status quo in a lot of areas. Either you care enough to change it, or you stick to the default. Default (ISP) plain DNS is worse.</text></comment> | <story><title>Firefox DNS-over-HTTPS</title><url>https://support.mozilla.org/en-US/kb/firefox-dns-over-https</url></story><parent_chain><item><author>Animats</author><text><i>Firefox by default directs DoH queries to DNS servers that are operated by a &quot;trusted partner&quot;.</i><p>That&#x27;s what I don&#x27;t want - Firefox offering services.<p>Once you have a centralized server, with a huge number of minor queries passing through it, the operators get uppity. They start thinking they have editorial authority. Someone will decide that the DNS server should censor something. Child porn is the usual excuse, and then, after a while, you can&#x27;t see sites that mention Tienanmen Square or Ukraine any more.<p>I&#x27;m quite happy with Sonic&#x27;s classic DNS server. It just answers DNS queries and forwards requests to the appropriate upstream DNS server as required.</text></item></parent_chain><comment><author>contingencies</author><text>I live in China and (still) use Firefox in preference to Google. I agree they need to fire the management and focus on the code. The big pain with DNS is that it&#x27;s one avenue of censorship but also a proxy for many wrong-headed network geographic assumptions through the lens of geoDNS. <i>&quot;Oh, you resolved from Europe, so you really want a European server! Let me help you out there...&quot;</i> The internet has far too many of these half-baked hacks layered now, it&#x27;s getting to the point where to obtain a halfway trustworthy response you have to have a dynamic network of geographically distributed and temporally transient nodes seeking similar information and voting on the result. Geoscoping, echo-chamber personalization, household profiling, jurisdiction-second-guessing, ID verification as an outsourced service, political policy fandangling, globe splitting for artificial market segregation, walled-gardening, DRM...</text></comment> |
31,271,598 | 31,262,858 | 1 | 3 | 31,261,533 | train | <story><title>Mechanical Watch</title><url>https://ciechanow.ski/mechanical-watch/</url></story><parent_chain><item><author>ThePhysicist</author><text>I was curious how he did those visualizations so I looked at the source code. Turns out he codes everything <i>by hand</i> in WebGL [1]. Absolutely impressive stuff. Source code is non-minified so you can have a look and understand everything as well.<p>[1]: <a href="https:&#x2F;&#x2F;ciechanow.ski&#x2F;js&#x2F;watch.js" rel="nofollow">https:&#x2F;&#x2F;ciechanow.ski&#x2F;js&#x2F;watch.js</a></text></item></parent_chain><comment><author>rsp1984</author><text>I&#x27;m observing that developers these days are quite surprised to see anyone write code for OpenGL &#x2F; WebGL directly instead of using some layer of abstraction on top, such as Three.js or Unity etc. Few seem to know that OpenGL already is an abstraction of the computing model underneath.<p>A couple years ago I did some consulting for a company that needed a point cloud rendering engine. Luckily I had one ready to go. I showed them and they liked it and their young devs asked which library I was using. When I told them I used OpenGL they couldn&#x27;t believe it. To them OpenGL was the &quot;black magic box&quot; and using it akin to having secret conversations with the GPU in some arcane cryptic language.</text></comment> | <story><title>Mechanical Watch</title><url>https://ciechanow.ski/mechanical-watch/</url></story><parent_chain><item><author>ThePhysicist</author><text>I was curious how he did those visualizations so I looked at the source code. Turns out he codes everything <i>by hand</i> in WebGL [1]. Absolutely impressive stuff. Source code is non-minified so you can have a look and understand everything as well.<p>[1]: <a href="https:&#x2F;&#x2F;ciechanow.ski&#x2F;js&#x2F;watch.js" rel="nofollow">https:&#x2F;&#x2F;ciechanow.ski&#x2F;js&#x2F;watch.js</a></text></item></parent_chain><comment><author>panzerboiler</author><text>He does it &quot;the right way™&quot;. Use the platform. Don&#x27;t use any framework or generic library. Go straight to the point and code what you need, when you need it. Don&#x27;t minify or bundle anything, and let the people who are learning and courious a straightforward way to connect the dots, without forcing them into a github repository with 90% of the code unrelated to the thing and existing just to glue 1000 pieces written by 10000 people together. Every essay by Bartosz is so top-notch and a such breath of fresh air! He gives me hope in humanity and I am immensely grateful for what he does.</text></comment>
34,723,917 | 34,723,678 | 1 | 2 | 34,706,925 | train | <story><title>Something strange is happening on the sun, and we've never seen it before</title><url>https://www.vice.com/en/article/k7bm83/something-strange-is-happening-on-the-sun-and-weve-never-seen-it-before</url></story><parent_chain></parent_chain><comment><author>singularity2001</author><text>Saving two clicks:<p>A polar vortex formed in the north. video:<p><a href="https:&#x2F;&#x2F;twitter.com&#x2F;TamithaSkov&#x2F;status&#x2F;1621276153075109888?s=20&amp;t=OiMCMAdZsGPRDBpTyW717Q" rel="nofollow">https:&#x2F;&#x2F;twitter.com&#x2F;TamithaSkov&#x2F;status&#x2F;1621276153075109888?s...</a></text></comment> | <story><title>Something strange is happening on the sun, and we've never seen it before</title><url>https://www.vice.com/en/article/k7bm83/something-strange-is-happening-on-the-sun-and-weve-never-seen-it-before</url></story><parent_chain></parent_chain><comment><author>pfdietz</author><text>&quot;The find is just the latest in a series of interesting space observations thanks to the capabilities of the James Webb Space Telescope.&quot;<p>How has this anything to do with the JWST? That telescope would be ruined instantly if it were pointed at the Sun.</text></comment> |
12,996,956 | 12,996,970 | 1 | 2 | 12,996,019 | train | <story><title>12-Foot Traffic Lanes Are Bad for Safety and Should Be Replaced (2014)</title><url>http://www.citylab.com/design/2014/10/why-12-foot-traffic-lanes-are-disastrous-for-safety-and-must-be-replaced-now/381117/</url></story><parent_chain><item><author>LukaAl</author><text>I&#x27;m reading a lot of comment from people complaining that this article is wrong and we shouldn&#x27;t make roads less safe.<p>Question: what are you talking about? A 10-foot road is wide enough for a car or truck to drive safely at urban speed limits. The maximum vehicle width under Federal law is 102 inches, or 8 feet and a half. And this is for trucks. Cars are around 80 inches. This means over a foot per side for cars and more than half a foot for trucks. If it is not enough for you at city speeds, please give up your driver license, you are not good, and you shouldn&#x27;t drive.<p>Yes, I agree, I don&#x27;t want to drive 50 miles an hour next to a truck on a 10-foot wide lane, but that&#x27;s not a city or a suburban speed, that&#x27;s highway speed. On the opposite, no problem at driving at 15 or even 25 MPH on a 10 feet lane. Yes, probably I need to give up texting and driving, and I need to be focused on the road. Guess what: I need to do it anyway.<p>Honestly speaking, I have enough of our politicians of disregarding scientific studies and expert analysis, but this is somewhat accepted. But a community like Hacker News should have the smartest people, not the dumbest ones that fail to understand reality.</text></item></parent_chain><comment><author>slaman</author><text>I&#x27;ve found the older cities I&#x27;ve lived in with much tighter roads have much better drivers as a result. People know their distances and are hyper-aware of their surroundings because it is a necessity in order to avoid minor collisions.<p>In cities that are more &#x27;designed&#x27; by traffic engineers there is ample room so people don&#x27;t know the dimensions of their own vehicle and travel at greater speeds while letting themselves be distracted more frequently. The accidents that result are usually frame-bending as opposed to bumper replacements.</text></comment> | <story><title>12-Foot Traffic Lanes Are Bad for Safety and Should Be Replaced (2014)</title><url>http://www.citylab.com/design/2014/10/why-12-foot-traffic-lanes-are-disastrous-for-safety-and-must-be-replaced-now/381117/</url></story><parent_chain><item><author>LukaAl</author><text>I&#x27;m reading a lot of comment from people complaining that this article is wrong and we shouldn&#x27;t make roads less safe.<p>Question: what are you talking about? A 10-foot road is wide enough for a car or truck to drive safely at urban speed limits. The maximum vehicle width under Federal law is 102 inches, or 8 feet and a half. And this is for trucks. Cars are around 80 inches. This means over a foot per side for cars and more than half a foot for trucks. If it is not enough for you at city speeds, please give up your driver license, you are not good, and you shouldn&#x27;t drive.<p>Yes, I agree, I don&#x27;t want to drive 50 miles an hour next to a truck on a 10-foot wide lane, but that&#x27;s not a city or a suburban speed, that&#x27;s highway speed. On the opposite, no problem at driving at 15 or even 25 MPH on a 10 feet lane. Yes, probably I need to give up texting and driving, and I need to be focused on the road. Guess what: I need to do it anyway.<p>Honestly speaking, I have enough of our politicians of disregarding scientific studies and expert analysis, but this is somewhat accepted. But a community like Hacker News should have the smartest people, not the dumbest ones that fail to understand reality.</text></item></parent_chain><comment><author>tbihl</author><text>Absolutely correct, the way we improve safety is by making the most dangerous people feel less comfortable.<p>I&#x27;ve read about some places where a rural highway leading into a town will narrow to ten foot lanes, waking drivers out of their daze because the driving environment is about to become much more complex. I wish I could remember where this was.</text></comment>
29,102,385 | 29,101,077 | 1 | 2 | 29,100,400 | train | <story><title>Minimum Viable Secure Product</title><url>https://mvsp.dev/</url></story><parent_chain></parent_chain><comment><author>tptacek</author><text>If you broke the practice of securing a company into software security, network&#x2F;platform security, and corpsec, I think the proper prioritization from an engineering perspective would be corpsec &gt; software security &gt; network&#x2F;platform security.<p>Thankfully, this checklist doesn&#x27;t lead startups into a quagmire of stupid network security tools, scanners, and assessments. But it also leaves out corpsec almost completely (&quot;single signon&quot; is an application security control in the checklist, which <i>wildly</i> misses the point), so we&#x27;ll call that a wash.<p>What I&#x27;ll say is that if you&#x27;re concerned about closing deals and filling out checklists, the appsec controls here aren&#x27;t going to move the dials much for you, and the corpsec stuff that it&#x27;s missing is going to trip you up. I&#x27;m not in love with it.<p>Also: for most companies, you&#x27;re going to want to be well past product-market fit before you start engaging consultants to assess your code. Most startups are well past 30 engineers before they have their first serious assessment. Crappy assessments can hurt as much as they help, and they&#x27;re the kind you get if you&#x27;re shopping for $5k-10k pentests while delivering with 5 engineers.</text></comment> | <story><title>Minimum Viable Secure Product</title><url>https://mvsp.dev/</url></story><parent_chain></parent_chain><comment><author>lmeyerov</author><text>I dislike any compliance document that requires paid &amp; external vendors, so would love to see that factored out<p>SOC I vs SOC II helps get at these kinds of distinctions in practice. I&#x27;ve seen a lot of conversations enabled by that. &quot;We did the SOC I software checklist. At some point, we&#x27;ll pay vendors $50K-250K for SOC II, feel free to fast track that now as part of our contract.&quot;<p>I get why it&#x27;s there, but this kind of thing is also why, despite being designed to address a real need, initiatives like FedRAMP have been slow &amp; expensive disasters in practice. We should be pushing to self-serve &amp; automated accreditation, and all the way to 1 person projects. Anything that puts third parties, people, and $$$ in the critical path needs to be split out.</text></comment> |
28,858,889 | 28,858,953 | 1 | 2 | 28,858,427 | train | <story><title>Patrick Stewart on the teacher who spotted his talent</title><url>https://www.theguardian.com/lifeandstyle/2021/oct/13/a-moment-that-changed-me-patrick-stewart-on-the-teacher-who-spotted-his-talent-and-saved-him</url></story><parent_chain></parent_chain><comment><author>rohansingh</author><text>What I found interesting about this was that Patrick Stewart is 81, and his highly-influential teacher who just passed was 96.<p>It&#x27;s funny how, as a student, your teacher seems like a real adult person from a completely different generation. But I guess after you fast-forward a few decades, the difference becomes pretty trivial.</text></comment> | <story><title>Patrick Stewart on the teacher who spotted his talent</title><url>https://www.theguardian.com/lifeandstyle/2021/oct/13/a-moment-that-changed-me-patrick-stewart-on-the-teacher-who-spotted-his-talent-and-saved-him</url></story><parent_chain></parent_chain><comment><author>debacle</author><text>If you ever want to waste a few hours of your life, travel to YouTube and watch the many different renditions of MacBeth&#x27;s soliloquy &quot;Tomorrow, and tomorrow, and tomorrow.&quot;<p>Many of them are downright awful. Some of them have not stood up to the test of time. Stewart&#x27;s is especially contemplative, and sits close to the top in my opinion:<p><a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=HZnaXDRwu84" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=HZnaXDRwu84</a><p>Ian McKellen spends about 20 minutes discussing this specific soliloquy (it&#x27;s a bit pretentious, but only a bit):<p><a href="https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=zGbZCgHQ9m8" rel="nofollow">https:&#x2F;&#x2F;www.youtube.com&#x2F;watch?v=zGbZCgHQ9m8</a></text></comment> |
6,452,893 | 6,452,754 | 1 | 3 | 6,451,282 | train | <story><title>Troll-Killing Patent Reform One Step Closer</title><url>https://www.eff.org/deeplinks/2013/09/troll-killing-patent-reform-one-step-closer</url><text></text></story><parent_chain><item><author>rarw</author><text>As a lawyer I&#x27;m always concerned about changes designed to make it harder to sue someone. You almost always end up with unintended consequences. Take for example the heightened pleading standard. This standard has been used with other case types in which the allegations are considered dangerous to one&#x27;s reputation, like fraud or discrimination. In many situations, people with legitimate claims have been unable to overcome the heightened pleading standard, not because their claim is weak, but because the evidence required to plead the claim is difficult to come by.<p>Many of the changes suggested by the article can already be accomplished using existing procedural devices. The only provision I see as having any teeth is the one that deals with fee shifting. Fee shifting provisons are a big deal as recoving attorneys fees is very rare in the American legal system. Generally fee shifting provisions allow the winner to collect legal fees even if nominal damages (e.g. $1) are awarded. This greatly increases the risk of trolling since any loss, no matter how small, could equal hundreds or thousands (probably millions) of dollars in legal fees owed to the other side.</text></item></parent_chain><comment><author>gbhn</author><text>Being careful in reforms is good, but I think the heightened pleading concern is not that big an issue here. The plaintiff is looking at their own patent and saying &quot;this thing you did violated this part.&quot; If they don&#x27;t know enough to say that, they shouldn&#x27;t be in court -- either the patent is so vague they can&#x27;t figure out which piece of it you violated, or how you did so, or they know, but don&#x27;t want to say because uncertainty is a much stronger extortion position. Either way, getting rid of this is a good step, and an appropriate one to the subject matter.</text></comment> | <story><title>Troll-Killing Patent Reform One Step Closer</title><url>https://www.eff.org/deeplinks/2013/09/troll-killing-patent-reform-one-step-closer</url><text></text></story><parent_chain><item><author>rarw</author><text>As a lawyer I&#x27;m always concerned about changes designed to make it harder to sue someone. You almost always end up with unintended consequences. Take for example the heightened pleading standard. This standard has been used with other case types in which the allegations are considered dangerous to one&#x27;s reputation, like fraud or discrimination. In many situations, people with legitimate claims have been unable to overcome the heightened pleading standard, not because their claim is weak, but because the evidence required to plead the claim is difficult to come by.<p>Many of the changes suggested by the article can already be accomplished using existing procedural devices. The only provision I see as having any teeth is the one that deals with fee shifting. Fee shifting provisons are a big deal as recoving attorneys fees is very rare in the American legal system. Generally fee shifting provisions allow the winner to collect legal fees even if nominal damages (e.g. $1) are awarded. This greatly increases the risk of trolling since any loss, no matter how small, could equal hundreds or thousands (probably millions) of dollars in legal fees owed to the other side.</text></item></parent_chain><comment><author>lostinpoetics</author><text>&gt; In many situations, people with legitimate claims have been unable to overcome the heightened pleading standard, not because their claim is weak, but because the evidence required to plead the claim is difficult to come by.<p>the &quot;unless the information is not reasonable [sic] accessible&quot; really takes a lot of the teeth out of this though</text></comment>
10,376,814 | 10,376,863 | 1 | 3 | 10,375,426 | train | <story><title>Off the Grid, but Still Online</title><url>http://motherboard.vice.com/read/off-the-grid-but-still-online</url></story><parent_chain><item><author>Retric</author><text>FedEx works because people near you also need packages. The more packages they ship the more efficient they become.<p>Also, the US population density is ~100 people per square mile so chances are good someone within 1 mile of you also needs a package today. Sure, it&#x27;s cheaper for them to drop off 10 packages at the same apartment complex, but even fairly remote areas can be surprisingly profitable.</text></item><item><author>Zach_the_Lizard</author><text>&gt;I suspect you could scale the &#x27;off grid&#x27; lifestyle to around 1&#x2F;2 the US population without many issues.<p>As you spread out the population, the cost of FedEx deliveries is going to grow. Now you can drop off goods in a city via a highly efficient train, have that switched to a truck, and have it delivered to the end customer.<p>If everyone is off the grid in the wilderness, that train becomes a lot less useful to deliver goods. Now you&#x27;ve got to reach a larger area to serve the same number of people.<p>&gt; Web + Fedex means a lot of jobs can be done remotely<p>A lot of jobs <i>can</i> be done remotely, but that doesn&#x27;t mean they could be done remotely without cost. Many people are less efficient when working remotely. I find communication to be much harder when remote than when in person, especially as issues get more complex.</text></item><item><author>Retric</author><text>I suspect you could scale the &#x27;off grid&#x27; lifestyle to around 1&#x2F;2 the US population without many issues. Web + FedEx means a lot of jobs can be done remotely. Consider plenty of teachers work remotely even if most people think of it as a face to face job. Even some doctors have started to work remotely let alone the classic 9&#x2F;5 office worker.<p>Sure, city&#x27;s are a far more efficient use of land and energy, but at least in the US we still have a lot of open space.<p>PS: My only gripe is people think of this as a &#x27;green&#x27; lifestyle. Generally, living in a city high-rise and using public transit is far better for the environment.</text></item><item><author>OneOneOneOne</author><text>Right on. When considering lifestyle choices I ask myself how many people could live the same way. If it turns out to be a small minority before collapse, I reject it as a path for myself. To me off grid lifestyle seems a retreat from the goods of society without a commensurate benefit to self and others. As an art movement or experiment I say fine.</text></item><item><author>ecobiker</author><text>(Disclaimer: Slightly off-topic. Also, it&#x27;s not my intention to belittle their lifestyle. I hope I don&#x27;t come across that way.)<p>Some of the things they need even in this disconnected lifestyle - like the mobile home, laptops, books, solar panels, batteries and even the slippers have to be made by someone doing a 9-5 job somewhere. The idea of civilization to me is to take advantage of these specialists who are really good at doing or manufacturing some of the things I need and in turn I become a specialist in something (probably one thing) which I contribute back to the society - it&#x27;s a barter. That I don&#x27;t have to do all the things I need to do to survive, seems efficient and effective. Also, not all of the jobs are going to be able to afford this &quot;luxury&quot;.</text></item></parent_chain><comment><author>Zach_the_Lizard</author><text>&gt;Fedex works because people near you also need packages.<p>That&#x27;s exactly my point. The more people who need packages around you, the less it costs to deliver a package specifically to you. Going off the grid in remote corners sounds great, but part of the reason we moved to cities is because sharing infrastructure costs makes things more efficient.<p>&gt;Also, the US population density is ~100 people per square mile so chances are good someone within 1 mile of you also needs a package today.<p>The US&#x27;s population density is 100 people &#x2F; square mile, but that doesn&#x27;t tell you the full story. We don&#x27;t live all over America, we are clustered into cities and towns, mostly on the coasts. There&#x27;s a lot of desert and Alaskan wilderness with no one around bringing down the average. NYC alone is ~6% of the US population. The Northeast megalopolis is ~17% on ~2% of the land.</text></comment> | <story><title>Off the Grid, but Still Online</title><url>http://motherboard.vice.com/read/off-the-grid-but-still-online</url></story><parent_chain><item><author>Retric</author><text>FedEx works because people near you also need packages. The more packages they ship the more efficient they become.<p>Also, the US population density is ~100 people per square mile so chances are good someone within 1 mile of you also needs a package today. Sure, it&#x27;s cheaper for them to drop off 10 packages at the same apartment complex, but even fairly remote areas can be surprisingly profitable.</text></item><item><author>Zach_the_Lizard</author><text>&gt;I suspect you could scale the &#x27;off grid&#x27; lifestyle to around 1&#x2F;2 the US population without many issues.<p>As you spread out the population, the cost of FedEx deliveries is going to grow. Now you can drop off goods in a city via a highly efficient train, have that switched to a truck, and have it delivered to the end customer.<p>If everyone is off the grid in the wilderness, that train becomes a lot less useful to deliver goods. Now you&#x27;ve got to reach a larger area to serve the same number of people.<p>&gt; Web + Fedex means a lot of jobs can be done remotely<p>A lot of jobs <i>can</i> be done remotely, but that doesn&#x27;t mean they could be done remotely without cost. Many people are less efficient when working remotely. I find communication to be much harder when remote than when in person, especially as issues get more complex.</text></item><item><author>Retric</author><text>I suspect you could scale the &#x27;off grid&#x27; lifestyle to around 1&#x2F;2 the US population without many issues. Web + FedEx means a lot of jobs can be done remotely. Consider plenty of teachers work remotely even if most people think of it as a face to face job. Even some doctors have started to work remotely let alone the classic 9&#x2F;5 office worker.<p>Sure, city&#x27;s are a far more efficient use of land and energy, but at least in the US we still have a lot of open space.<p>PS: My only gripe is people think of this as a &#x27;green&#x27; lifestyle. Generally, living in a city high-rise and using public transit is far better for the environment.</text></item><item><author>OneOneOneOne</author><text>Right on. When considering lifestyle choices I ask myself how many people could live the same way. If it turns out to be a small minority before collapse, I reject it as a path for myself. To me off grid lifestyle seems a retreat from the goods of society without a commensurate benefit to self and others. As an art movement or experiment I say fine.</text></item><item><author>ecobiker</author><text>(Disclaimer: Slightly off-topic. Also, it&#x27;s not my intention to belittle their lifestyle. I hope I don&#x27;t come across that way.)<p>Some of the things they need even in this disconnected lifestyle - like the mobile home, laptops, books, solar panels, batteries and even the slippers have to be made by someone doing a 9-5 job somewhere. The idea of civilization to me is to take advantage of these specialists who are really good at doing or manufacturing some of the things I need and in turn I become a specialist in something (probably one thing) which I contribute back to the society - it&#x27;s a barter. That I don&#x27;t have to do all the things I need to do to survive, seems efficient and effective. Also, not all of the jobs are going to be able to afford this &quot;luxury&quot;.</text></item></parent_chain><comment><author>curun1r</author><text>&gt; but even fairly remote areas can be surprisingly profitable.<p>This is only true because they subcontract these rural deliveries to USPS [1]. Without USPS, who&#x27;s already required to go there anyways to deliver traditional mail, it isn&#x27;t profitable.<p>[1] <a href="http:&#x2F;&#x2F;www.wsj.com&#x2F;articles&#x2F;u-s-mail-does-the-trick-for-fedex-ups-1407182247" rel="nofollow">http:&#x2F;&#x2F;www.wsj.com&#x2F;articles&#x2F;u-s-mail-does-the-trick-for-fede...</a></text></comment>
34,114,258 | 34,113,798 | 1 | 2 | 34,112,874 | train | <story><title>TikTok banned on government devices under spending bill passed by Congress</title><url>https://www.cnbc.com/2022/12/23/congress-passes-spending-bill-with-tiktok-ban-on-government-devices.html</url></story><parent_chain><item><author>NikolaNovak</author><text>I get that this is a thing - isolating one specific app for very specific reasons. I am honestly surprised it&#x27;s a thing though - shouldn&#x27;t all unapproved apps be disallowed from government devices? Whether Instagram, TikTok, Facebook, Pornhub or whatever - unless you have specific reasons such as e.g. PR position, shouldn&#x27;t all these be relegated to your private device?</text></item></parent_chain><comment><author>jvanderbot</author><text>Phones are managed by an employer. Government phones are managed by the agency that issues them. What the law does is ensure that all agencies do not have tiktok. Pornhub will also likely appear on banned app lists, but they didn&#x27;t feel it necessary to make a law.<p>One interesting side effect, is that tiktok might now be bereft of government content, which sounds dry, but includes a lot of NASA&#x27;s work.</text></comment> | <story><title>TikTok banned on government devices under spending bill passed by Congress</title><url>https://www.cnbc.com/2022/12/23/congress-passes-spending-bill-with-tiktok-ban-on-government-devices.html</url></story><parent_chain><item><author>NikolaNovak</author><text>I get that this is a thing - isolating one specific app for very specific reasons. I am honestly surprised it&#x27;s a thing though - shouldn&#x27;t all unapproved apps be disallowed from government devices? Whether Instagram, TikTok, Facebook, Pornhub or whatever - unless you have specific reasons such as e.g. PR position, shouldn&#x27;t all these be relegated to your private device?</text></item></parent_chain><comment><author>IE6</author><text>Yeah agreed - why does a government issued phone or device need anything more than what is necessary to perform their job? They are compensated enough to have a separate non-work phone.</text></comment> |