q_id: stringlengths (6 to 6)
title: stringlengths (4 to 294)
selftext: stringlengths (0 to 2.48k)
category: stringclasses (1 value)
subreddit: stringclasses (1 value)
answers: dict
title_urls: sequencelengths (1 to 1)
selftext_urls: sequencelengths (1 to 1)
5xzfjd
What is AWS (Amazon Web Services) and what are the basic things one should know to understand how it functions?
Technology
explainlikeimfive
{ "a_id": [ "dem1xop", "dem2cmp" ], "text": [ "AWS stands for Amazon Web Services, as opposed to their shipping services. It is a collection of services that Amazon provides online. These are things like storage, backup, DNS, database, load balancing and, maybe the most popular, virtual machine hosting. The concept behind all this is economy of scale. Amazon will build huge data centers and engineer them to provide the kind of services that most businesses need. Since they do it on a huge scale they are able to do it cheaper than if every business were to build their own from scratch. There are, however, lots of disadvantages, like needing a lot more spare capacity and having to build something that fits everyone as opposed to something that fits a particular user. So it depends on the use case whether you save money by using their services as opposed to making your own from scratch. One of the most revolutionary benefits that AWS brought into the market is that purchasing additional capacity is much easier and faster. You might have taken weeks to buy hardware and install it at your location. Or if you were renting through traditional suppliers, it might take a few hours to let them manually reconfigure things. However, AWS made everything automatic, so you can get a new server within seconds. This has allowed businesses to build their applications to scale on demand. This means that they pay different amounts of money for the services depending on how much they use. This has the ability to reduce costs, but it again requires more time to develop and maintain the more complex applications.", "When navigating to \" URL_0 \" in your browser, the first thing you do is ask the DNS server \"Where is URL_0 ?\" Then it will redirect you to the IP address of the server, and you'll request \"Can you give me the homepage, please?\" On that server - think of it as a regular computer - there's a program that receives the request, looks up all the necessary data in the database - a hard drive - and sends you back the webpage. Since Amazon has thousands of those servers and huge databases, they get a nice deal on their infrastructure price. Using this discount, they decided to buy even more servers and database space, and make it available to other people who want to host a website. So Amazon gets all the resources (server, database, internet connection, DNS registration, ... ) at a nice bulk discount, much cheaper than if you had to do this all yourself. Using AWS means you're renting some of that server space. Now you're not getting an entire server for yourself; those are way too powerful for just one simple website. So they use virtual machines to share the hardware. These machines act like individual computers, each with their own operating system and hard drive space." ], "score": [ 5, 3 ], "text_urls": [ [], [ "amazon.com" ] ] }
[ "url" ]
[ "url" ]
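The second answer above walks through the DNS-then-HTTP flow of loading a page. The minimal sketch below shows the same two steps with the Python standard library; "example.com" is a stand-in host, not the URL_0 elided in the answer.

```python
# Sketch of "ask DNS where the server is, then ask the server for the page".
import socket
import urllib.request

host = "example.com"  # placeholder host for illustration

# Step 1: ask DNS "where is example.com?" and get back an IP address.
ip = socket.gethostbyname(host)
print(f"{host} resolves to {ip}")

# Step 2: ask the server at that address for the homepage.
with urllib.request.urlopen(f"http://{host}/") as response:
    page = response.read()
print(f"received {len(page)} bytes of homepage")
```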
5xzmsq
How are cell towers able to transmit different information to every cellphone, as opposed to broadcasting the same information to everyone, the way radio does?
Technology
explainlikeimfive
{ "a_id": [ "demb7no", "dem38nu", "demks3o", "demok3r" ], "text": [ "Excellent question. A bunch of people want to communicate at the same time. Since other people make a lot of noise, you have to speak louder. Everyone else speaks louder too and eventually you are shouting as hard as you can and nobody can hear you since everyone is shouting. To make this work, you need a system. First comes FDMA (Frequency division multiple access), which is basically giving everyone their own frequency. This is analogous to putting everyone in different rooms so they do not have to shout due to lack of interference. We quickly ran out of rooms to use once we got more users, so we abandoned the idea. Then comes TDMA (Time division multiple access), which is basically giving everyone a short time slot to transmit their message. This is analogous to giving people turns to speak so they won't interfere with each other. This is how the GSM network works. CDMA (Code division multiple access) is when you encode your data into the white noise and then run the decoding algorithm to get it back. This is analogous to everyone speaking at the same time but in different languages. This helps to separate different people speaking, but when there are a lot of users, the speech still becomes impossible to differentiate from the noise. This is how 3G works. OFDMA (orthogonal frequency division multiple access) is what LTE uses and is quite a bit more complicated, but it basically separates users by time AND frequency, thus increasing capacity by a lot.", "Cell towers do in fact transmit the same information to every cellphone in range. However, the data is encrypted, so that only the correct phone can understand the information meant for that particular phone. Each phone gets a time slot where the data sent is for that phone, and each phone gets a time slot where it can send data back. The data the phone sends back can also be heard by other phones in range, but again, the data is encrypted so that only the cell tower understands it. Wi-Fi routers work the same way, and this is one of the reasons the speeds decrease the more people are on a tower/router, even if the signal strength seems fine. Every device needs to take its turn to send and receive.", "There are good explanations here for people with high technical understanding. I am going to shoot for the 5. Antennas emit radiation that we can't see; light is radiation we can see. There are different patterns that exist in light and radiation. For example, the different colors you see are different patterns of radiation. To take this further, you can visualize these patterns as waves in the ocean. Some are longer, some are taller. Some have waves on top of waves. Multiplexing is like having multiple waves on the same wave. We have mastered the ability to make tiered waves and constantly get better at it. Understanding that, you can explore the other answers that give detail about the various forms of multiplexing.", "Basically all modern systems work by sending all the data out at the same frequency and getting the phone to only see the part of the data that is meant for it. Encryption means that you can't (maliciously or accidentally) overhear someone else's data. The fundamental way this works is by having the phone and tower agree on some scheme by which the data will be split up or mixed. In 2G technology, the phone and tower agree to only listen/transmit at certain times. In 3G, the phone listens all the time, but it uses a pseudo-random sequence to filter the signal it receives.
The tower is using the same sequence to perform the opposite filter on the data to turn it into the signal. You can think of this as similar to the 2G technology; the tower switches extremely fast between transmitting different phones' data - faster than it takes to transmit a single bit of data. It's a little more complicated than that, but it ends up with all the data being transmitted at once, and each phone extracts the data that is meant for it." ], "score": [ 351, 53, 4, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
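As a toy illustration of the CDMA idea described in the top answer (everyone transmits at once, but each phone recovers only the data spread with its own code), here is a minimal sketch using two orthogonal Walsh codes. The phone names, codes and bits are invented for the example; real 3G systems are far more involved.

```python
# Two phones share the air: each bit is spread with that phone's code, the
# signals are summed, and each phone correlates with its own code to recover
# only its own bit.
codes = {
    "phone_a": [+1, +1, +1, +1],   # orthogonal Walsh codes, one per phone
    "phone_b": [+1, -1, +1, -1],
}
bits = {"phone_a": +1, "phone_b": -1}   # the bit meant for each phone

# Tower: spread every bit with its phone's code and transmit the sum of signals.
chip_count = 4
on_air = [sum(bits[p] * codes[p][i] for p in codes) for i in range(chip_count)]

# Each phone: correlate the shared signal with its own code to pick out its bit.
for phone, code in codes.items():
    correlation = sum(s * c for s, c in zip(on_air, code))
    recovered = 1 if correlation > 0 else -1
    print(phone, "recovers bit", recovered)
```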
5y0uu9
Why do servers need to be locked?
My server has a lock on it that can only be unlocked with a key. It locks the case and drive caddy. It also has an intrusion switch that stops it from running if someone opens it. Why? I thought that if you work at a company with important servers, you'll have the common sense not to open one up while it is running.
Technology
explainlikeimfive
{ "a_id": [ "dembh4x", "dembdqb" ], "text": [ "Controlling physical access is the single most important aspect of computer security. Give me the most hardened Linux server on Earth, and so long as I've got physical access I'll own it in under 5 minutes. I can just boot it to single-user mode, change the root password, and then reboot into normal mode. Likewise I can run a Hiren's boot CD on your Windows server and reset any password I want. Edit: Also worth noting that the physical lock & key are more common on lower-end/consumer-grade cases. I work on actual Enterprise hardware all day from major vendors, and there's not a lock or key to be seen. I suspect this is based around \"if you can afford a $10,000+ server you can afford proper physical security measures\" versus \"if you bought this cheap case, you're probably running it in a janitor's closet\".", "> you'll have the common sense not to open one up while it is running. You may be surprised; a simple lock can solve what might otherwise be procedure errors. Also, this means that it is difficult for someone to \"steal yo' shit\"!" ], "score": [ 9, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5y189s
What's the point of making AI robots (i.e. like the ones we see in films) other than proving that we can do it? What are the real-world benefits?
Technology
explainlikeimfive
{ "a_id": [ "demju7t" ], "text": [ "Because AI is vastly useful. When computers became popular, they allowed a multitude of mundane tasks to be accomplished extremely quickly (think math, accounting, email, encryption, etc.), but computers are still \"dumb\" machines. That is to say, they follow a discrete set of instructions given to them by an intelligent human. This brings up a good side point: the definition of AI depends on your definition of intelligence. If intelligence is defined as simply a decision based on a set of parameters (i.e. I have decided to eat Taco Bell because their food is delicious and it is close to my house; therefore, I get the highest amount of satisfaction for the least amount of inconvenience), then AI has already been achieved. (Lots of systems use simple decision-making algorithms like this.) Based on your question it seems you mean \"true\" intelligence, which can only really be defined philosophically. Either way, such an advent would be extremely advantageous. Imagine a plane pilot with intelligence equal to a human's, but one that does not tire, does not need to use the bathroom, and isn't hungover from an overnight weekend layover at LAX. Imagine military leadership that not only made informed decisions, but intelligently analyzed data without bias, political motivation, or pursuit of personal gain, rather than just calculating \"what if\" numbers like current computers. The problem with humans is that we do a LOT of unintelligent stuff: crime, war, abuse, self-harm, irrational fears, irrational emotion, the list goes on and on. Imagine any system capable of making decisions without these shortcomings. If we ever achieve true AI, it will change the world in more ways than the computer did. It would open questions of legality, morality, philosophy and others. Would an intelligence be fit to hold office if it was publicly and legally elected? Sorry, I know I kind of over-answered. The bottom line, though, is that true AI would be an enormous benefit to almost everyone (if used benevolently), even though such an event would come with a monumental slew of outlying questions." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5y18we
How does a wifi router know that you have entered the correct password in order to use it?
Technology
explainlikeimfive
{ "a_id": [ "deme5z0" ], "text": [ "WiFi security (the kind where a password is required to connect, not the \"coffee shop wifi\" where you connect and then need to go to a webpage) is based on encryption. Everything is hopelessly scrambled beyond recognition, but your computer has the formula to descramble it! Except for one very important piece: the key. The key is a missing variable in the math problem. When you type in the wifi password, and it's correct, your computer can now descramble what the wifi router is transmitting, and transmit its own scrambled-up messages that the wifi router is expecting. With the wrong key, you just end up with a differently-hopelessly scrambled mess. So the way it knows is that they're able to communicate at all." ], "score": [ 9 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
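To make the "wrong key gives you a different scrambled mess" point above concrete, here is a deliberately simplified sketch. It derives a keystream from a passphrase and XORs it with the data; real Wi-Fi security (WPA2/WPA3) uses a proper handshake and AES, not this toy scheme, and the passphrases are placeholders.

```python
# Toy scrambling: only the matching passphrase descrambles the frame.
import hashlib

def keystream(passphrase: str, length: int) -> bytes:
    # Derive a repeatable pseudo-random byte stream from the passphrase.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(f"{passphrase}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def scramble(data: bytes, passphrase: str) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(passphrase, len(data))))

frame = b"GET /homepage HTTP/1.1"
over_the_air = scramble(frame, "correct-passphrase")

print(scramble(over_the_air, "correct-passphrase"))  # back to the original frame
print(scramble(over_the_air, "wrong-passphrase"))    # just different gibberish
```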
5y1h4l
How can Wikileaks be a trusted source without confirmed sources or verifiable documents?
With the recent events from Wikileaks, I can't help but ask: why should people trust them?
Technology
explainlikeimfive
{ "a_id": [ "demgq1q", "demiksx", "demkpxv", "demwuji", "den0rlp", "demx4uo", "demkuxs" ], "text": [ "They leak mostly emails. Emails contain a mechanism called DKIM which basically inserts code into the email when it is first sent. With this code you can 100 percent verify if they were tampered with and the source. Using said method you can easily see all emails released by wiki leaks are verified. The site has never released an email that was not. Unfortunately, wiki leaks is more accurate and trustworthy than MSM. This only goes for emails. Everything else would need Authentication using other methods.", "The documents Wikileaks releases are genuine, by any journalistic standard, nor have they been found by anyone to be forged. This is WHY the Wikileaks releases have caused such consternation for those in power over the past decade. If you are old enough to remember the Afghan War Logs and the Collateral Murder video leaked by Chelsea Manning, (then Bradley Manning,) you know the history of where those documents came from and how Wikileaks protects its sources as EVERY journalist in the business does. Documents posted by Wikileaks are extremely reliable. Don't let your partisan duplicities get in the way of cold hard facts, or you're no better than the Republicans defending war crimes when it was their turn to come under Wikileaks' looking glass.", "I believe you are asking the wrong question. The authenticity of these leaks is never really questioned. The emails are real, however, the releases always happen at politically convenient times, and seem to focus on leaks from certain groups and not others (coughRussiacough) The question is, since Wikileaks seems to have their agenda coordinated by the FSB (Russian Intelligence), why should we play into their hands by treating them as an authentic and unbiased news-source?", "i think that if anything WL released was not genuine then WL's detractors and enemies would be all over the airwaves and internet crying FAKE NEWS! at light speed. the collective lack of anybody speaking out against the veracity of the WL source documents is all the proof i need that the documents are valid, genuine, and accurate. good question tho", "The volume of coherent documents they release make it virtually impossible to forge. Plus they are not disputed. They even trigger various official responses, always in the direction of documents authenticity.", "They've got 100% reporting accuracy, they've never published anything that was fake, false, or inaccurate", "I think it's possible that some folks might be missing the point of WikiLeaks. They release information, we all know that. But it is not their burden to provide proof of accuracy or authenticity even if they could. As Julian Assange has said several times, what would constitute proof or a measurement of authority? Even if there was a broad spectrum definition of accuracy, that proof or accuracy mechanism is only valid if we all agree upon it. So, by saying that there needs to be a way to measure the validity, we are also saying that the particular measurement of validity is absolute and always correct. That just simply is not possible. Edit: Mr. Assange has been on record saying that their concern is not with people believing the accuracy of their leaks. Rather, he and his team are concerned with the information, period. They are not in the business of opinion and make no revenue other than donations, what would be their motive for omitting information or releasing false information? 
I'm certainly not an expert, but I feel like it just takes a few moments of critical thinking to understand that proof of authenticity is irrelevant. Though, this situation is unique to organizations like WikiLeaks. /2cents" ], "score": [ 553, 67, 32, 14, 12, 9, 4 ], "text_urls": [ [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5y1x7o
Why should I worry about a hacker getting passwords from a website, if the passwords are supposed to be hashed anyway?
I'm a web dev, and when I build an authentication system, passwords are hashed so they can't be read from the database. How would obtaining the passwords help a hacker, given this limitation?
Technology
explainlikeimfive
{ "a_id": [ "deml3xa", "demj4wg", "demivrw", "demra9d", "demje7z" ], "text": [ "Three reasons, really. First, as the end user, you don't actually know that the passwords were hashed at all. All too often we find out that a site was storing passwords as cleartext only *after* an intrusion gets revealed. Or they might be hashing passwords but *also* leaking a cleartext copy of it somewhere. It's happened countless times in our short history. Second: As others have pointed out, a web site may have hashed passwords but failed to \"salt\" them. For any given hashing algorithm, this would mean that any password maps to exactly one *hashed* password. They can then use a \"rainbow table\" to quickly look up each hash against this precomputed list of many billions of the most likely passwords: if your password is on the list, the hash will match and your password is revealed immediately. If neither of the above are true, if the passwords are both hashed and salted, stealing the password hash still enables an *offline attack*. The web site itself may limit how many attempts you might make to guess a password, or how quickly those guesses can be made: but if the hacker has your hashed password he can test likely passwords against the hash as fast as he or she can, with none of those software-imposed limits. The remaining defense against this, other than two-factor authentication, is an algorithm that makes hashing the password very slow and costly. PBKDF2, Password-Based Key Derivation Function 2, is among the most well-known of these.", "Hackers steal password databases because they tie users to their password hashes. Before they actually perform the theft, they'll build a rainbow table; this is a map of hashes to plaintext passwords. These tables are huge, in the multi-terabyte range, but they allow for nearly instant lookup of a hash to a password. Bingo bango, most of your user accounts are compromised until you disable all accounts and resalt/force a reset of all passwords.", "Because if you know the answer, then it's much simpler to form the question. The hash may not be the cleartext password and, depending on your hashing algo, it will still take time, but in the event you're not salting (or are storing the salt in the same compromised area), it's not going to take as long to brute-force the password -- OR -- attempt the known passwords that this particular user has used on other websites. We as humans are horrible at password re-use. There are also published and online reversed hash tables someone can look up, as long as they've figured out how you created them in the first place.", "When you hash a phrase with an algorithm, that same phrase will always result in the same hash. If a hacker obtains your hash and the exact algorithm used, they can rehash billions of phrases in seconds (depending on the complexity of the hashing algorithm) until they find the correct phrase used to produce your hash in a given password database. For example, the word \"password\" results in the MD5 hash \"5f4dcc3b5aa765d61d8327deb882cf99\" and it always will. If you create a program that hashes a dictionary of words (called a dictionary attack), once it hashes \"password\" it will check if the result (in our case \"5f4dcc3b5aa765d61d8327deb882cf99\") matches the stolen hash. A GPU can perform MD5 hashing extremely fast; my old GTX 760 can hash (MD5) 900,000,000 phrases per second. Though it should be noted that heavy algorithms can cripple that 900,000,000 per second down to less than 100 per second. In short, find a strong algorithm and stay away from older, easily brute-forced algorithms such as MD5.", "If you are a web developer you probably know more about the topic at hand than I do. But most actual password thefts occurred when customary precautions weren't adhered to. E.g. passwords weren't being hashed at all, but stored in plain text. Not quite as bad, but sometimes passwords were only being hashed, meaning two people using the same password would have the same hash ID. This can be avoided by adding an additional random sequence to the password hash, the \"salt\". The password is then hashed and salted. Here is a page you can enter your email address at to see if it is compromised and where: URL_0 Here is a nice writeup about one of the bigger password leaks: URL_2 The latter is definitely not ELI5 though. Edit: URL_1 A cartoon about the mentioned password theft at Adobe. Because the passwords weren't salted, identical passwords did have an identical hash. The hash wasn't adjusted for the pw-length either (usually you would store hash numbers of a set length, so as not to be able to infer the actual pw-length). In addition, with Adobe failing to encrypt the password hints, it was possible to have certain often-used passwords of known length with several hints. Hence the crossword puzzle dub." ], "score": [ 39, 24, 5, 4, 3 ], "text_urls": [ [], [], [], [], [ "https://haveibeenpwned.com/", "https://xkcd.com/1286/", "https://nakedsecurity.sophos.com/2013/11/04/anatomy-of-a-password-disaster-adobes-giant-sized-cryptographic-blunder/" ] ] }
[ "url" ]
[ "url" ]
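The contrast the answers draw between a fast unsalted hash and a salted, deliberately slow KDF can be shown directly with Python's standard library. This is only a sketch of the idea; parameter choices such as the iteration count are illustrative.

```python
# Unsalted MD5 vs. salted PBKDF2, as discussed in the answers above.
import hashlib
import os

password = b"password"

# Unsalted MD5: the same password always yields the same hash (rainbow-table
# bait), and it is extremely cheap to compute.
print(hashlib.md5(password).hexdigest())  # 5f4dcc3b5aa765d61d8327deb882cf99

# Salted PBKDF2-HMAC-SHA256: unique per user and deliberately slow to brute force.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
print(salt.hex(), stored.hex())

# Verifying a login just repeats the derivation with the stored salt.
assert hashlib.pbkdf2_hmac("sha256", password, salt, 200_000) == stored
```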
5y27k6
What is the Tesla Gigafactory and why is it so important?
Technology
explainlikeimfive
{ "a_id": [ "demlzqs" ], "text": [ "Tesla's goal is to make conventional vehicles extinct and replace them all with electric vehicles (part of their overall goal of getting rid of dependence on fossil fuels). To do that they need to dramatically reduce the price of all the components of their vehicles, the most expensive being the lithium-ion battery. Also, there aren't even enough li-ion batteries being produced in the world for them to reach their goal. The Gigafactory will (1) make a massive number of batteries; and (2) reduce the cost of batteries because of how many are being manufactured at one place (economies of scale). It's projected that by 2018 the Gigafactory will be producing as many li-ion batteries as the entire world's output in 2013. The Gigafactory is so important because even with its huge manufacturing potential it cannot keep up with demand. Gigafactory 1 has been built. Gigafactory 2 is scheduled to be opened this year in Buffalo (made by the Tesla subsidiary SolarCity). Another 2 or 3 locations will be announced this year." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5y3ee7
Why is there so much free art software, but no free music-making software?
1) I know music is art, I mean visual art. 2) I know there is LMMS, but it is pretty much unfinished software. 3) By music-making software I mean a full-fledged program with a piano roll, effects, drum sequencer, etc., not just software to handle several recordings.
Technology
explainlikeimfive
{ "a_id": [ "demuj90" ], "text": [ "It's a fair question, although Sunvox exists. I think part of the cost is most music creation software depends on samples. Unless you're digitally creating the effect totally from scratch, those drumrolls and piano notes were probably lifted from the real thing and cleaned up. That took someone's major time and effort and they probably wanted to get paid for it. So a FOSS music app would take a coder working for free AND musicians (likely multiple) working for free." ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5y4g3t
Why in this digital age do we still use fax machines that use landlines and take longer to transmit than downloading movies and music?
and why do we still use them at all?
Technology
explainlikeimfive
{ "a_id": [ "den4jje" ], "text": [ "Although it's certainly true that fax is technically inferior compared to e-mail or web transfers, say, it has advantages as well. For example, developing a standards-conforming website to which people can upload private medical information is pretty expensive and has weaknesses that are difficult to manage, like user passwords. With a fax machine, you can just put the machine on a line in a secure area and give out the number to the public, because you normally don't need to worry about the phone line being tapped. Especially if you *already* have the fax system set up, it's tempting to stick with something functional but imperfect rather than invest in something new." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5y54uc
Why are there so few games developed for current-generation consoles compared to the days of the PlayStation 2?
Back when I was a kid there were thousands of PlayStation games; now there are dozens or hundreds for newer consoles like the XB1/XB360 respectively. Is it to do with the shift to mobile/PC gaming? Higher development costs? Licensing?
Technology
explainlikeimfive
{ "a_id": [ "dena52k", "den9miz", "denb8u0" ], "text": [ "A large number of factors. * It costs a whole lot more to develop a \"major\" game on the PS4 than on the PS2. If you want to make a PS2-level game with those sorts of graphics and features it won't cost all that much, but the smaller developers who would make use of that usually go even lower and make things imitating NES games because it's much cheaper. * Shovelware developers, the sort that make crappy licensed games, have all moved on to the app store. Why buy your kid a handheld when you can hand it the family tablet to shut it up? The PS2 and Wii attracted these developers like flies to crap, and while it inflated the library count nobody actually enjoyed those games, so it's a non-issue. * Issues with the Japanese market. Consoles aren't selling like they used to in the Land of the Rising Sun. The people who would make quirky games like *Katamari Damacy* or *Stretch Panic* have turned to the ever-growing mobile market. The Japanese are infamously busy and don't have time to sit around at home. Handhelds such as the 3DS and Vita are still going strong over there. * Unable to grow. Microsoft has officially introduced the Xbox One in China, but even with its growing middle class, Chinese gamers aren't used to buying games and pass over it, instead hopping over to the local internet café with its many freemium games. Developing markets would rather play [suspiciously familiar games]( URL_0 ) on their shiny new smartphones or pirate PC games. In short, there's nowhere for consoles to go. * The indie flood. If you log onto Steam you will find there are more games for less money than you could've ever dreamed of as a kid, but how many of them could even a kid get fun out of? There is a tidal wave of new games made by people who have no business being in the industry. While there are diamonds in the rough, you'd better be prepared to dig and dig and *dig* and **dig** and ***dig*** and ***DIG*** to find something.", "Not an expert, but a gamer. I believe this shift could be caused by more people having access to PCs and phones, especially phones. You would be hard-pressed to find a phone without any games installed. Not only are games extremely common on phones, but they are also a lot cheaper and easier to make than a AAA title. And since more people look to their phones for entertainment, they will be more likely to download them and possibly spend money on them.", "There are a lot of factors that go into this change. The first and possibly biggest one is resolution. If you consider the early PlayStation 1 games, the standard resolution was 480i/480p. Now most consoles are at 1080p; that is a lot more rendering to do, a lot more texture maps to develop in greater detail, and a lot more time spent developing models and animations at much higher resolutions. This all adds a lot more time, and therefore money, to the cost of game development. This leads to another issue: the big studios now have to invest a lot more money and time in game development. Many PS1 and PS2 games were developed in 4-8 months with a moderate team of developers. There were always exceptions; big-budget games, even then, might have a team dedicated for years to develop them. The problem now is that even simple games become big-budget because of rising labor costs and higher technical requirements like graphics. Spending more money raises the stakes: if games cost more, they have to make more to be worth it. 
You won't just put out any IP if you need to make hundreds of millions of dollars to break even. Many of the big developers have been driven to take an all-or-nothing approach. They put their entire crew on one project at a time, trying to finish a hot property and push it out, to cash in, before the market can cool on what they think is the next big thing, before the market shifts direction. The market for smaller titles has been taken over by a new market segment. The advancement of tablets and smartphones has driven many smaller and simpler games away from the console market towards other devices. Online markets have created a large number of low-budget titles that now occupy a space that was once dominated by major developers. With a combination of crowdfunding, online services like Xbox Live and the PlayStation Store, and even PC services like Steam and Kickstarter, many games can be developed more cheaply, without the need for big-name brands. These developers can be as small as one-man operations, creating games without the need for a normal development structure, making titles much cheaper and faster than the larger outfits can accomplish. This leads to fewer titles on store shelves. The bigger outfits can't compete in that space, though some are now trying by creating their own side brands. For new IPs, it is getting harder to stand out, and developers don't want to take risks. With advertising being expensive, it is often reserved for the best-known IPs. When bigger developers create new games, they often just put them in online stores, just to see if they will be successful. In the past, this may have worked; now the online marketplace has so many titles, many of which are junk created by individuals in the hope of gaining cash by offering ongoing development later (often called \"pre-release games\"), that legitimately good games get lost in the noise. New IPs often don't get a chance to shine. There are many other factors, including the fact that the increasing cost of consoles versus the decreasing cost of PCs, tablets and phones means that developing games for other platforms has a bigger potential market, even if it is a market that targets a different type of player. Factors are always changing, with new ones appearing all the time. It is always possible that the current market will change or reverse. In a lot of ways, there are technically more games to play now than ever before; it is just that they are spread over a much wider array of media, making it more difficult to see the full array of options." ], "score": [ 24, 7, 5 ], "text_urls": [ [ "https://www.youtube.com/watch?v=L6JRtK5R93E" ], [], [] ] }
[ "url" ]
[ "url" ]
5y5k72
Do actors really eat/drink during a scene that involves food?
Technology
explainlikeimfive
{ "a_id": [ "dencvus", "dend508" ], "text": [ "Yes, though they often have people standing by to allow them to spit out the food in the event of multiple takes. Some actors do prefer to swallow, though, for effect. Chris Pratt, for example, eats everything.", "Depends on the situation. If a scene is taking place where alcohol is involved, the drinks are likely non-alcoholic. Since most scenes take multiple takes to get everything done correctly, the actors could easily become drunk, causing more mistakes to happen, which would lead to the scene being shot more times, which would lead to the actors consuming more alcohol. It's a vicious cycle. As far as food is concerned, the food will be present but the actors likely will not be actively eating it. By eating they risk dropping or spilling the food onto their clothes, which would lead to a wardrobe change. They also risk missing their cue to talk if they are still chewing, or chewing so fast to avoid missing their cue that they start to choke. Either way it would cause a cut and the scene to be reshot. At the end of the day it is just easier to simulate food and drink instead of risking the time wasted on reshoots." ], "score": [ 6, 6 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5y5meo
Why does music from the 60's-80's play at a noticeably lower volume than modern music?
My job has become a lot less social over the past year, so I've been listening to music a lot more than I had been, and I notice that whenever Queen, Led Zeppelin, Pink Floyd, or any other artist from that era comes on I have to turn up the volume on my phone two or three notches so the song is at the same volume as all my other songs. All of the music is from the same source (Apple Music) and is all downloaded to my phone, and a lot of those songs are even the remastered versions, so I would think that the volume would have been tweaked in the process.
Technology
explainlikeimfive
{ "a_id": [ "dendonc", "dendlcf" ], "text": [ "Tweaking the volume usually means adding dynamic compression (not to be confused with mathematical compression). This is useful for making music easier to hear on weaker, lower-quality amps, such as those found in phones and other small devices, but it distorts the original music, which is especially audible on better equipment, or just in a quieter environment. Compression has gotten a lot more common and \"stronger\" over the years, in order to make the music sound better on the types of devices that most people use to listen to music, at the expense of making it less optimal for high-end equipment. It is possible that in the remasters of these old songs by Pink Floyd, they wanted to keep the artistic integrity of the originals and decided not to cater to low-end playback equipment. Dynamic range refers to the maximum difference between the quietest and the loudest parts of the recording; compression reduces that range. You could think of dynamic range as the audio equivalent of contrast in a video or image.", "URL_0 In fact, listen to re-releases of Queen/Led Zeppelin/Pink Floyd material from the 90s and later and you'll probably find they are louder. The wiki article goes into a good explanation, but basically there is some kind of trend or perception that louder music = better music." ], "score": [ 6, 5 ], "text_urls": [ [], [ "https://en.wikipedia.org/wiki/Loudness_war" ] ] }
[ "url" ]
[ "url" ]
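A bare-bones sketch of the dynamic-range compression the first answer describes: samples above a threshold are scaled down, narrowing the gap between quiet and loud parts so the whole track can then be turned up. The threshold and ratio are arbitrary illustration values, not anything a real mastering chain would use as-is.

```python
# Toy dynamic-range compressor on a list of samples in the range -1.0..1.0.
def compress(samples, threshold=0.5, ratio=4.0):
    out = []
    for s in samples:
        level = abs(s)
        if level > threshold:
            # Anything over the threshold is reduced according to the ratio.
            level = threshold + (level - threshold) / ratio
        out.append(level if s >= 0 else -level)
    return out

quiet_and_loud = [0.05, 0.1, 0.9, 1.0, -0.8, 0.2]
print(compress(quiet_and_loud))  # loud peaks pulled down, quiet parts untouched
```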
5y5rjo
Why is 3D printing a product usually cheaper than making it the normal way?
At least that is my impression which is probably untrue in some cases.
Technology
explainlikeimfive
{ "a_id": [ "denf7ip", "denf9cp", "denezbs" ], "text": [ "For mass production, it's usually not. For a small, plastic object, mass production would be done using something like injection molding. They make a metal mold, then inject molten plastic into it. Depending on the size and complexity, you can put out ~200 parts per hour, while a 3D printer may be able to do 1 or 2. With injection molding, the biggest cost is making the mold and setting up the machine. But you only pay that once. So it's the same whether you're making 10 parts or 100,000. 3D printing will probably be cheaper if you only need to make 10, but injection molding will definitely be more cost effective for 100,000. 3D printing also allows for the production of parts in a single step that would require multiple steps with traditional methods. For example, making hollow parts with injection molding requires making multiple pieces and joining them together.", "The premise is false. 3D printing is not always, or even usually cheaper, it depends greatly on what you're making, and how much you're making of it. 3D printing can be cheaper on low-volume products, but most certainly is not for high volume products, where a production line could make thousands of items in the same time it took to 3D print one. 3D printing a single miniature for a game (or a spare part for a machine) might be cheaper for you, if you only want to make one single miniature. However, for a company that produces thousands of the same item, traditional manufacturing methods are way cheaper and faster. Some car manufacturers use 3D printing for certain parts in prototype and concept models, because those cars only exist in a small number. Once they ramp up the production of a car model, creating the parts though traditional methods quickly becomes cheaper. Of course, that doesn't mean it will stay this way forever. It's possible that some 3D printing technologies in the future will be so good that making the majority of goods that way will be cheaper.", "Volume. If you want to make money by producing something, you usually need to set up a production line. That's expensive. If you want to make your money back, you either need to sell a *lot* or charge a lot (usually both). With a 3D printer, you have relatively cheap setup (a few thousand dollars compared to tens or even hundreds of thousands) with which you can go ahead and produce a *single* item, because your production setup is not limited to one unique item." ], "score": [ 11, 7, 5 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
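The cost argument in the answers (a big one-off mould cost versus a higher per-part printing cost) boils down to simple break-even arithmetic. The sketch below uses invented prices purely to show the crossover.

```python
# Hypothetical costs: which method is cheaper at a given production volume?
MOULD_COST = 20_000.0   # one-off injection-moulding tooling (made-up figure)
MOULDED_PART = 0.50     # per part once the mould exists (made-up figure)
PRINTED_PART = 4.00     # per part on a 3D printer (made-up figure)

def cheaper_method(quantity: int) -> str:
    moulding = MOULD_COST + quantity * MOULDED_PART
    printing = quantity * PRINTED_PART
    return "injection moulding" if moulding < printing else "3D printing"

for qty in (10, 1_000, 10_000, 100_000):
    print(f"{qty:>7} parts -> {cheaper_method(qty)}")
```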
5y5ubb
What is a stack overflow?
When I google this, I get an answer that I cannot understand. In plain, simple English, what is a stack overflow?
Technology
explainlikeimfive
{ "a_id": [ "denhnqw", "denii44", "denfbck", "dengir2" ], "text": [ "A stack overflow is when a program runs out of memory in a stack. A stack is a way of organizing memory such that the program sets one piece of data above the previous one, the next above that, etc. It normally refers to a specific stack, though. Each program has multiple areas of memory it can draw from, each with its own purpose. One is just called the \"stack\". Usually that's what a stack overflow is referring to. The stack is used for local memory. When a program jumps to a new piece of code that it wants to run, it allocates a chunk of memory on top of the stack to use as local memory. This is called a function call. Calling another function will use a chunk of memory further up, so that local memory is never overwritten by another function call (there are exceptions, but that's not important here). When the program returns from a function, it releases the allocated local memory. One common problem is if a badly coded function always calls itself before returning. What you get is more and more memory allocated on the stack until there is no more. edit: clarity", "ELI5 version: Stack overflow. It's like writing on a single A4 page and running out of room. The paper is the stack. There is too much writing for the space; it has overflowed. That's it. Another analogy could be a dam and water. The dam is the stack, the water is the information. Sometimes there is too much water for the dam and it overflows. Sometimes there is too much information to be stored in some space and the information overflows.", "\"Stack overflow\" is a programming term relating to computer memory. When you're running a program, each time you do something you're sending a request to the system. The system takes these requests and puts them into a pile or a \"stack\" and then proceeds to address each request in a specific order. A computer only has so much memory, so if you make too many requests and the computer runs out of memory space, the stack can become too big, which results in an overflow error.", "It comes down to programming junk. OK - so computers are good at doing instructions one at a time - the ones you \"program\" them to do. But when you boil them right down, a lot of computer code comes down to something like: take number A, take number B, do some math with A and number B. If the result is < some condition > , then go do < something 1 > . If not, go do < something 2 > . Any time there's a comparison operator and a resulting branch in execution, the CPU has to take everything it's currently doing (in the above example, number A and number B, maybe the math result) and save it somewhere. It saves it on the stack. This is also called \"pushing the stack\". Then it goes off and does something 1 or 2. Think of a stack as literally a stack of dishes on a buffet. You can take dishes off the top (\"pop\") or you can add more dishes to the top (\"push\"). Except instead of dishes, a CPU stack is full of data and code it needs to remember to come back to. Now hopefully something 1 or 2 wraps up, that jump in execution ends - then the CPU reaches the end of that code. It goes \"huh. I'm done. I should go back to what I was doing before.\" So it pops the stack and retrieves variables, code references, etc. from what it was doing before something 1 or 2. Part of that info is a memory pointer to the _next_ instruction it needs to execute. And off it goes, executing instructions. Depending on < a whole lot of things > , stacks aren't infinite. 
Using the buffet dish stack analogy, the busboy can't keep adding clean dishes; they'd make a big tower, fall over, and someone would get fired. Same thing in code. The easy way to stack overflow in code is to make something deeply recursive. I'll give you an example: Do_Something_1: number A = 12 number B = 30 result = B-A if result > 10 then Do_Something_1 if result < = 10 then print \"Hey, your example is lame but illustrates the point\" End_Do_Something_1 So this code calls itself - it's recursive. And you can see that since A and B always use the same values, the math B-A always results in 18, which is always > 10, in which case the CPU does _exactly the same thing again and again._ So if some other code calls Do_Something_1 the first time, this code never completes; it calls itself, each time forcing a push to the stack. Eventually the stack runs out of space and boom, \"Stack overflow.\"" ], "score": [ 9, 5, 4, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
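The last answer's Do_Something_1 pseudocode can be reproduced as runnable Python: a function that keeps calling itself keeps pushing frames onto the call stack until the runtime gives up. Python reports this as a RecursionError rather than crashing outright, but the underlying condition is the same stack exhaustion the answers describe.

```python
# A deliberately endless recursion, mirroring the pseudocode above.
def do_something(a=12, b=30):
    if b - a > 10:            # always true here, so the call never stops recursing
        return do_something(a, b)
    return "result small enough"

try:
    do_something()
except RecursionError as exc:
    print("stack overflow (Python reports it as RecursionError):", exc)
```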
5y7zz3
Why do phones have a standard charging port but laptops don't?
Technology
explainlikeimfive
{ "a_id": [ "denu1m6" ], "text": [ "It is worth noting that until fairly recently, phones tended to have proprietary charging ports that were specific to the phone. The reason is that there wasn't a compelling reason to standardize. Until USB-C, there wasn't a universal standard that could provide the power that a laptop required to run, so each company would design a power port that worked for their product's design. Consumers didn't particularly care, as very few people evaluate buying a laptop based on what the power port looks like, so companies didn't have any reason to standardize." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5ya3je
What does the numbering system mean with PC application versions? Why is it likely to see a sequence like 1.0, 1.1, 1.2, 2.0, rather than 1, 2, 3, 4?
Technology
explainlikeimfive
{ "a_id": [ "deocow2", "deoccbv", "deodnpw" ], "text": [ "A little analogy: when you have your first place to live (house, apartment and so on), that would be version 1.0. If you were to make changes to it, like renewing the kitchen, then you get 1.1. So after a few years, after much work making changes, you could end up with, let's say, 1.12. But then you move and have to get a new place. That place would be 2.0. Does this answer your question?", "The first number is for major releases. The secondary numbers are for minor releases. 1.0 is going to be a lot different from 2.0, but 2.0 won't be too different from 2.1 or 2.2 or 2.3. Then when something major gets changed again it would become 3.0. It keeps things organized, but it is ultimately arbitrary.", "It varies from application to application. The first number is for major releases. Big changes, new training likely, possible loss of backwards compatibility. Upgrading is a big deal, and should not be taken lightly. The second is for minor releases. Some new functionality, but the application should pretty much work as before. When present, the third number is the patch level. Typically bug fixes that should correct errors, but not add new or change existing functionality. Sometimes there is a fourth number; this is the build number. It tracks the internal versions and is important when testing." ], "score": [ 17, 11, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
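A small sketch of the major.minor.patch convention described above: treating a version string as a tuple of integers gives the ordering people expect (1.10 comes after 1.2, which plain string comparison would get wrong).

```python
# Parse "major.minor.patch" strings and sort releases in version order.
def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

releases = ["1.0", "1.1", "1.2", "1.10", "2.0", "2.0.1"]
print(sorted(releases, key=parse))   # ['1.0', '1.1', '1.2', '1.10', '2.0', '2.0.1']

major, minor, patch = parse("2.0.1")
print(f"major={major} minor={minor} patch={patch}")
```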
5yaauu
What happens to toilet paper and cosmetic cotton pads after flushing it down the toilet? Where does their journey end and how does it end?
Recently I've been wondering what would be more ecological - after removing my makeup or just cleaning my face with cotton pads, do I just throw them into the trash or do I flush them? I know that trash is being burned nearby in a "power plant" to create some electricity, but it's still burning, which is pretty eco-unfriendly. If I flush, for example, toilet paper that I used to wipe up some water or a coffee spill (I wouldn't throw away toilet paper that I used while actually using the bathroom, 'cause I think that would smell bad), where does the paper end up? Does it end up in the water-cleaning facilities? And if so, is it filtered out, dried and then burned? That's my theory. So basically I would like to know which option is more ecological.
Technology
explainlikeimfive
{ "a_id": [ "deoo8ny" ], "text": [ "Toilet paper is designed to break apart pretty easily. Try taking a piece and getting it moist like you were wiping down a table- it'll start to crumble very easily. There are standards for how much agitation before they break apart. As far as \"flushable wipes\", feminine hygiene products, cotton pads etc- *don't* flush them. The system isn't really designed to handle them, and they aren't designed to break apart. Worst case scenario, they will clog, and the back up will come up out of your pipes (and yes, it's as disgusting as it sounds when it floods your place with sewage). Best case scenario, it will clog \"downstream\". It's currently a pretty big issue, because waste treatment places don't have a great way to filter that sort of stuff out, and it wastes a ton of money. (people do it anyway). And then later, yeah, it's burned/dumped whatever like trash. tldr: You can flush toilet paper. Try not to flush basically anything else. If you used TP to wipe up a spill, I'm not sure, probably just trash. You can flush and it will break down fine, but you're wasting water on the flush anyway. (although unless you have nothing else, i wouldn't use TP, because like i said above it'll kinda crumble if you're cleaning spills). edit: Also especially don't pour drugs/medicine down the toilet, either. Maybe obvious, but people do it all the time." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5ybraz
Why do storage devices have capacities that are multiples of 8?
Strictly speaking about gigabytes (8GB, 16GB, 32GB etc.)
Technology
explainlikeimfive
{ "a_id": [ "deot0ck", "deosl4x", "deoufcb" ], "text": [ "Storage is a collection of 1's and 0's (bits), and 8 bits form a byte. All the information about your files, programs and anything else is stored in this binary form. Let's just call this the data. However, just storing the data isn't enough. You have to be able to manage all this data and know how to find it all. That way, when you open a file, the computer knows where to go to find all the data that comes together to form the file. This is done by giving each location on the storage device an address. And since there are SO many addresses, we actually need a way to label them all - which we do, in binary. And because of this, a storage device has a certain number of locations we are able to reference before we run out. Without getting technical about binary (with 8 bits you can address 2^8, or 256 locations), multiples of 8 mean we are using every single available location to store data. So if we had a 16GB USB drive and wanted to make it bigger, expanding it to 20GB would be silly. Since we are already making more locations available to reference, we might as well use every single location and expand it to 32GB. *Thought I'd add this in in case you do want to get technical: so like I said, 8 bits mean you can address 256 locations. If we add one more bit, we get 2^9 different locations available to us, or 512. This continues... 1024, 2048, 4096... So you can see that with the addition of each bit we get twice as many locations to store the data. This is why you often see 512MB, 1GB, 2GB, 4GB, 8GB, etc. sizes for storage!", "In fact, this is related to the powers of 2. Because computers use a binary language (0 or 1), the data is translated as a combination of different patterns with only these two options. Think of it like this: if you had a question, you would have only two possible answers, true or false (or on/off), and you would need just one digit (bit) to answer it, which could be 0 or 1. That's 2^1. If you had two questions, and two possible answers for each one, that's 2 bits and four different representations (0/0, 0/1, 1/0, 1/1), 2^2. And the more information, the greater the power, and so it goes... EDIT: Spelling.", "They don't always. Digital language is in powers of 2. 8 is a multiple of 2. Anything that is a multiple of 8 is also a multiple of 2. If you want to specify a memory address in digital, you have to use the language which lends itself to powers of 2. Every digital storage device has a maximum addressable range that is a power of two; however, they need not consume that entire range." ], "score": [ 45, 25, 7 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
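The "one more address bit doubles the capacity" point from the answers can be shown in a couple of lines: n address bits can name 2^n distinct locations, which is why capacities land on powers of two rather than arbitrary round numbers.

```python
# Addressable locations as a function of the number of address bits.
for bits in range(1, 11):
    print(f"{bits:2d} address bits -> {2 ** bits:5d} addressable locations")

# For example, 34 bits over byte-sized cells would address:
print(2 ** 34, "bytes =", 2 ** 34 // 2 ** 30, "GiB")
```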
5ydnjj
Why, when it comes to YouTube videos and editing videos, are higher frame rates better (30, 60, 120, etc.), while movies are generally 24fps? Why is it beneficial? And how does it look so much better?
Technology
explainlikeimfive
{ "a_id": [ "dep71iz" ], "text": [ "It's like with LP records or black-and-white films. When movies came out, the film equipment operated at 24. So people got used to 24, and anything other than 24 doesn't look right. Today, 24 is literally called cinematic quality. In comparison, video rates (30/60/120) look too smooth and 'artificial'." ], "score": [ 10 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5ye7ta
Why are application installers called Wizards?
Technology
explainlikeimfive
{ "a_id": [ "depaxlg" ], "text": [ "Wizards came into existence way back, probably in the 1990s. There are many situations in software where you want to do a task, and there are lots of options. Before wizards, you'd be presented with a huge dialog box on which you could set all of the options. But as software became more complex, this became less and less suitable. Then wizards came along. A wizard is simply an interface which takes you through setting the options for something step by step. In the bottom right corner, there's a Next button, and often a Previous button. There's probably a Cancel button, and on the last screen the Next button probably gets replaced by an Ok button. As for the rest of the interface - well, that's where the options are presented to the user, but crucially, only a small, related group of options are shown at once. Wizards are not specific to installers. Installers are a specific use of wizards, but wizards are used in many more places. Installers are often implemented as wizards, though, because there are several options you need to choose, and those options are easily grouped together and shown to you step by step, which is exactly what wizards are for. Have a look at [this MSDN page]( URL_0 ), where Microsoft describe what a wizard is, and when to use them (or not to use them). You'll see a few other examples there which are not installers." ], "score": [ 6 ], "text_urls": [ [ "https://msdn.microsoft.com/en-gb/library/windows/desktop/dn742503.aspx" ] ] }
[ "url" ]
[ "url" ]
5ye7xj
How do loop stations with pedals produce music?
I've been toying with the idea of putting some electronic/experimental/ambient tracks together. I want to purchase some sort of set up, but I want to fundamentally understand how they are used and why they work. Can any instrument be input? Do they record? I only ever plugged my guitar into an amp, please ELI5.
Technology
explainlikeimfive
{ "a_id": [ "depau59" ], "text": [ "When a loop station is in \"record\" mode, it takes the input (guitar, bass, microphone, synthesiser, anything that has the right jack and volume level) and records it to its internal memory. Then when you stomp on it again, it'll go into playback mode, where it'll just play back its memory (sometimes synced to a tempo if your looper does that). Stomp on it again and you'll be back in record mode." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
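The record/playback behaviour the answer describes can be sketched as a tiny state machine: while recording, incoming blocks are appended to an internal buffer; after the next stomp, the buffer is replayed in a loop. Real loop pedals operate on audio streams and often sync to a tempo; the class and block names here are purely illustrative.

```python
# Minimal looper: stomp() toggles between record and playback modes.
class Looper:
    def __init__(self):
        self.buffer = []
        self.recording = False

    def stomp(self):
        self.recording = not self.recording

    def process(self, incoming_block):
        if self.recording:
            self.buffer.append(incoming_block)
            return incoming_block            # pass the live signal through
        block = self.buffer.pop(0)           # playback: cycle through the loop
        self.buffer.append(block)
        return block

looper = Looper()
looper.stomp()                               # start recording
for block in ["riff-1", "riff-2"]:
    looper.process(block)
looper.stomp()                               # switch to playback
print([looper.process(None) for _ in range(4)])  # riff-1, riff-2, riff-1, riff-2
```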
5yeha7
If silicon is neither a conductor nor an insulator, how do you get it to behave like one or the other?
Technology
explainlikeimfive
{ "a_id": [ "depdglw" ], "text": [ "Silicon is made into an insulator by oxidizing it. It becomes more conductive when doped with very small amounts of other elements. Silicon has four valence electrons in the outer shell. If you mix in a little boron or aluminum, which have three valence electrons, you create a P-type semiconductor. Conversely, an element like phosphorus or arsenic, which has five valence electrons, can make an N-type semiconductor." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yei92
Why do everyday electronics usually require more than one battery to be inserted to work, while others, such as cars and phones, only require one?
Technology
explainlikeimfive
{ "a_id": [ "depcilv", "depq5ly", "depo6nq" ], "text": [ "Everyday batteries are a standard size - 1.5V or occasionally 9V. I've never seen something need multiple 9V batteries, so we'll stick to 1.5V. Using multiple gives you whatever voltage you need. 2 gives 3V. 4 gives 6V. Bigger batteries give you more duration. So, having lots of small batteries means you can switch the same batteries between your TV remote and Game Boy or smoke alarm while still being flexible. Phones have one battery designed specifically for that phone. You can't transfer it. That makes that battery less flexible and more expensive, since only that phone can use it. Cars were all standardised on 12V, so we make one single 12V battery - again, because there is no need for changes in voltage, we just make bigger batteries as needed. Although some larger vehicles do need 24V and have 2 batteries, like smaller devices.", "You are confusing Battery and Cell. A battery is a collection of cells wired in parallel or series for a higher voltage or current output. A 1.5V AA, A, C, or D is a cell. A 9V is a battery: if you open it up, it will contain 6 1.5V cells. The 6V block battery in your flashlight is a collection of smaller cells. The 12V battery in your car is a collection of lead-acid cells.", "Most normal, round batteries (like C, D, AA, AAA) are one-cell batteries. Depending on what they're made out of, they can be anywhere from 1.2 volts to just over 2 volts. A lot of batteries, like 9V batteries and car batteries, are actually made out of a bunch of smaller cells hooked up in a row. If you take apart a 9V battery (I don't recommend this), you'll usually find 6 AAAA-size 1.5V cells (1.5 x 6 = 9). Car batteries also have 6 cells inside, and each one makes 2.1 volts (car batteries are actually 12.6 volts). Edit: So, some things that you think only have one battery actually have a bunch of individual battery cells." ], "score": [ 11, 5, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
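The series-cell arithmetic in the answers above is just multiplication: cell voltages add when wired in series, which is where the 9 V block and the 12.6 V car battery figures come from.

```python
# Voltage of a battery built from identical cells wired in series.
def battery_voltage(cell_voltage: float, cells_in_series: int) -> float:
    return cell_voltage * cells_in_series

print(battery_voltage(1.5, 2))   # two AA cells in a remote -> 3.0 V
print(battery_voltage(1.5, 6))   # inside a 9 V block       -> 9.0 V
print(battery_voltage(2.1, 6))   # lead-acid car battery    -> 12.6 V
```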
5yet5h
If smartphones are basically small computers, why can't we easily boot up Android on a Lumia?
Technology
explainlikeimfive
{ "a_id": [ "depev0v" ], "text": [ "The problem is the drivers for the hardware. A driver is a piece of software, that tells the OS how to talk to the components (e.g. camera, wifi-chip, modem, graphics chip, processor, etc.). On the computer you got standards for most stuff, so it is possible, to get basic functionality working with a default driver, but to use all the special features a component has, you need a driver for this component. Then you got people building their pc themselves and if hardware manufacturers want them to buy their stuff, they have to provide drivers for the most common OS's, which is Windows and Linux. Also because Microsoft and all the Linux distribution creators want their OS to run on as many combinations of components as possible, they come with many drivers for the most common components. Mobile phones however are built in factories and with only one operating system in mind. so as a hardware manufacturer you only have to provide a driver that works for that operating system and the operating system only has to come with the drivers for the hardware of that phone. if you look at custom roms for android phones, you'll notice, that there are individual releases for each phone out there. that's because the OS needs different drivers for each phone. now in theory, you could make an android rom with all the drivers needed to run on a lumia, but getting those drivers will be the hard part. you might even have to write some of those yourself." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yetef
How satellites on Direct TV and Dish Network maintain their signal as the Earth rotates without skipping/dropping your program?
There are only a few in the sky, and everyone is connected to them at once anyway. As the Earth rotates, and the satellite switches between access points, how does it *not* drop service?
Technology
explainlikeimfive
{ "a_id": [ "depece6", "depeugk", "depn1b2", "depg1jh" ], "text": [ "They are in a geostationary orbit. That means that the satellite is rotating with the earth, and remain directly overhead.", "The satellites are in geostationary orbit, a circular orbit above the equator at a height that gives an orbital period synchronised with the Earth's rotation. So from your point of view the satellite appears to hang in the sky not moving and your satellite dish can just be pointed at it. Additionally, the satellites are broadcasting their signals for anyone on the ground, within the area covered by the signal, to recieve. There's no need for the satellite to make individual connections with the viewers or recieve any signals back from them. The satellite will recieve the 'uplink' signals from the TV company but that's only one signal per TV program at a time. To do pay TV channels, the signals are encoded and people who want to watch them have to pay the TV company for the gadget to decode the signals. Terrestrial broadcast TV and radio works the same way, except instead of a satellite in the sky there's a tall transmission tower on the ground.", "They are positioned in the Clarke belt. This is the same Arthur C clarke that wrote science fiction. URL_0 Satellites have small positioning engines on them to ensure that they do not wander from their assigned position. When the fuel (hydrogen peroxide) runs out after a few years, the satellite dies as it drifts out of position", "Bit of additional info: * you need 3 satellites to provide global coverage (north and south poles will have limited coverage) * when you move from 1 satellite region to another you need to readjust your antenna. * antennas on moving objects (ships, planes) move constantly to make sure they are still pointing at a satellite. * satellite's position is constantly being adjusted in space using thrusters to keep satellite in a given position to make sure you don't need to move your antenna. Satellite's position is mainly affected by earth's gravity field." ], "score": [ 64, 12, 3, 3 ], "text_urls": [ [], [], [ "https://en.wikipedia.org/wiki/Geostationary_orbit" ], [] ] }
[ "url" ]
[ "url" ]
5yev0e
How does a hacker or internal employee download such large amounts of classified documents from CIA or NSA which are highly secured or are world's top intelligence agencies.
Technology
explainlikeimfive
{ "a_id": [ "depg932", "depj3lg", "depg64q", "depiq7e", "depkgko", "depkqj1", "depl4t5", "depjbqb", "depgk9p", "depkha4", "depky13" ], "text": [ "\"“Our whole system is based on personal trust,” an exasperated Clapper said, adding that there were no “mousetraps” in place to guarantee there wouldn’t be another Edward Snowden. The NSA has enacted tighter restrictions on when and how agents can access classified documents since Snowden’s heist, including a “two-man rule” requiring two administrators to work jointly when dealing with certain files.\"", "I work at a research site. A few years back an employee pulled all the drives out of the machines in his area and was in the wind for a year. Only got caught because he tried to sell them to an undercover FBI agent. The reality is that stealing data is actually fairly easy. It's the getting away with it that's hard.", "Short answers: These organizations hire incredibly smart people. These people who do this have the knowledge and access to the facilities needed to copy these large sets of files. And they know how to fly under the radar while doing it.", "The security team needs to get it right 100% of the time. An attacker only needs to succeed once.", "The simplest answer is that the information is not that secure to begin with. Many people have access to top secret data but don't leak it. As for the size of the data they just take a small bit at a time over many months if not years. These days you can easily get a multi gigabyte micro SD card in and out of even the most secure places without much trouble. Data security is very hard and the only reason we don't have even more leaks is because people don't want to put themselves at risk. Getting the data is pretty much the easy part. The hard part is finding someone willing to exfiltrate it and then leak it. I know it is set 30 years ago but the TV show The Americans does a great job of showing how people are manipulated to do such things.", "The Vault 7 leaks were from a piece of software called Confluence which is basically an internal wiki. Confluence is made by Atlassian and usually run on a local Atlassian server. Not only are the servers very tricky to secure (they are pretty notorious for this) but also Confluence itself isn't particularly secure. You can normally log in from anywhere rather than on local network, and commonly isn't linked to Windows' Active Directory where you would sign in with your company login details. It doesn't require mandatory password changes, and because it isn't linked to AD when someone leaves the business it requires someone to manually go and close the Confluence account separate from all AD accounts. It's not clear if that happened in the Vault 7 or Year Zero leaks but it could be.", "I consult at places with reasonably high security and the way I've seen these kinds of problems crop up is this way: * You have a massive list of contradictory and incomprehensible rules no one actually understands. * You have organizational goals you can't accomplish with these rules. Something has to give and in the end it almost always comes down to \"fuck the rules we need to get shit done.\" That happens because not being able to get something done gets you in trouble now for sure and breaking the rules only gets you in trouble if something bad happens because of that and you get caught.", "Your security is only as strong as your most idiotic employee.. it only takes one dumbass to compromise the whole thing. 
Something as simple as a cell phone with hotspot on and connecting to that while on their internal network... many reasons people can be stupid on the job.", "A hacker doesn't. An internal employee works with the data every day. I don't know about American security clearances (Er nor do I know about any others, my friend told me) but most employees working in these situations all require the highest level. Security in these instances relies on heavily vetting employees before hire. They work hard to narrow it down to people who won't drop leaks, even if they aren't the best and brightest candidate, if you don't get that clearance, too bad. Edit, like another person has posted here, there's a huge amount of personal trust involved, plus a bit of fear. Edit 2, hehe I can't believe I'm getting downvoted for this.", "Well, for internal employees, it's because they trust people who are given access to sensitive stuff. There are very, very thorough processes you must undergo to get a secret or top secret clearance. But that's not perfect, there's no device for seeing into someone's soul. A trusted person would have a very easy time stealing data because of that very trust. All it takes is for someone to get disillusioned, or blackmailed, or something, and that person can steal whatever they have access to. This is very simplified, but who would have a harder time stealing from your office, me or you? I'm external, nobody knows who I am, there are systems in place to keep me out. But you're supposed to be there, you indeed have to be there to do your job. Much easier for you to walk out with stuff.", "If you can't access this data easily, you can't do your job easily either. Restrictions are always away from something. I suspect that the data was simply downloaded from a central storage document by document. Probably anyone with a high authorization level could have done it fairly easily over a year or two." ], "score": [ 211, 119, 102, 53, 16, 15, 10, 8, 7, 5, 3 ], "text_urls": [ [], [], [], [], [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5yexin
US Cellphone Carriers (Sprint, Verizon, T-Mobile...). What difference, if any, is there between them all?
Technology
explainlikeimfive
{ "a_id": [ "depjbp9", "depjp3s", "depjptw" ], "text": [ "There are two radio transmission standards in the U.S: Sprint, Verizon and U.S. Cellular use CDMA, while AT & T and T-Mobile use GSM. GSM is also the standard used in most of the rest of the world, so an unlocked AT & T or T-Mobile phone can usually be used abroad by inserting a local SIM card. Some phones are only available with one of these two transmission systems. There are also MVNO's which are \"virtual\" carriers. Essentially, they lease access from the major providers and package their own phone plans. Cricket and Boost Mobile are in this category even though they're owned by AT & T and Sprint, respectively. Ting licenses both T-Mobile and Sprint networks, so practically any phone can be used with their service.", "Sprint and Verizon use CDMA to transmit voice data to their customers cell phones. T-Mobile and AT & T use GSM. CDMA does not require a SIM card. The info that links a phone to an account is stored on a cdma chip that is embedded in the phone and cannot be removed. If you need to switch phones, you must have the carrier transfer your account to the new phone. This can usually be done online, nowadays. ~~CDMA is also better at penetrating buildings with lots of walls, although metal roofs and walls will still give poor reception~~. GSM stores the account linking info on a removable SIM card. If you need to switch phones you can just move the SIM card to the new phone. ~~GSM signals are also not as good at penetrating buildings~~. Metal roofs and walls can basically cut off all signal. In general, CDMA phones cannot be used on GSM networks, and GSM phones cannot be used on CDMA networks. There are some exceptions, like google's new phones that have both CDMA and GSM or 4G LTE CDMA phones that have a sim slot(see below). 4GLTE Note: 4G LTE requires GSM to get full speed, so most new verizon/sprint phones will have SIM slots. They still use CDMA for standard communications, but use the SIM for fast 4G LTE. This also makes it possible to use Verizon 4G LTE phones on a gsm network, even if it isn't ideal. You might be able to do this with Sprint, but I don't know if Sprint locks their Sim Slot. Edit: Building penetration is determined by frequency, not the transmission scheme(GSM vs CDMA) as pointed out by u/PhotoJim99. In my area, Verizon penetrates better than AT & T, so I must have over generalized.", "The only important criteria for choosing a carrier is coverage where you live/work/visit. In major metropolitan areas the difference between telcos is minimal. But get out into the 'burbs or rural areas and the difference is huge. Choose wisely, and don't fall for marketing hype. Sprint's latest ad campaign talks about \"network reliability \". But they say nothing about network availability or network speed. And that's where the difference lies." ], "score": [ 13, 6, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5yfgkd
How do computer chips work on the smallest scale? What system allows it to store and access information quickly and run programs?
Technology
explainlikeimfive
{ "a_id": [ "depktda", "depjf0r" ], "text": [ "So we need to talk about what semiconductor devices are, what they're made of, and how they work. A semiconductor device is built from two metals in what is called a junction. This can be as simple as two metal foils compressed together, but in modern electronics, they use a form of lithography to bond layers of metals together. These metals tend to be silicon, germanium, and gallium. There is an N type, and a P type, where the N type has an excess of electrons the atoms are very willing to share, and the P type has a defect of electrons the atoms want to fill. What you get in this relationship, with regard to an electrical circuit is a *bias*, in that if the current was flowing from the N side to the P side, the circuit would conduct just fine, because the N side wants to get rid of electrons, and now it has a shit ton more coming behind it, and the P side desperately wants electrons, there's more demand for it down stream, and it has an ample supply right next to it. But if you were to reverse the circuit, the electrons would flow through the P side and reach the N side, which already has electrons and doesn't want any more. In this direction, the junction is acting like an insulator. And that describes a *diode*, a simple PN junction that allows electricity to flow in one direction, but not the other. You can get a wide variety of properties out of this component depending on the metals used, impurities introduced, shape of the circuit, etc. They use diodes in power supplies, in radios to prevent feedback, they make diodes that won't reverse until a threshold is hit, they make Light Emitting Diodes, and \"noisy\" diodes that act as hardware random number generators. A transistor is a PNP or NPN junction, with a emitter, base, and collector. The base is in the middle, and electricity is intended to flow from the emitter to the collector, but it can't. Instead, electricity can flow from the base to the collector, and when that happens, you get an interesting side effect - I forget all the details here, but basically you tip the bias, so that electricity can flow from emitter to collector because there are electrons flowing from base to collector. The first transistors were made for radio amplification, where a small current on the base, the unamplified signal, would allow a proportionately LARGE current to flow from the emitter to the collector, the amplified signal. Once again, mix up the metals and properties, and you get different types of behaviors. All transistors are amplifiers, but we design some specifically to switch sharply between on and off for use in digital computing. This post can get long, so I'm going to give cliff notes on my cliff notes. Transistors are arranged into groups called logic gates, of which the basics are the AND, OR, NOT, and XOR. The AND gate takes two input signals and makes one output signal, the output signal is 0 (no current) unless there is a 1 on both input circuits. The OR will output a 1 if either or both inputs are a 1. A NOT will invert the signal, so 0 becomes 1, 1 becomes 0. XOR outputs a 1 if only one input is a 1. These gates can be combined and chained so you can AND any number of inputs, for example, they also combine to form common groups such as NAND, NOR, or XNOR. And with these logic gates, you can express all of boolean logic, a branch of mathematics you might want to check out. Transistors can also be arranged to form flip-flops, a type of computer memory. 
This memory is fast, but it's not a dense way to store data, so they use it in CPU cache memory, not in RAM (which is made from banks of capacitors). Logic gates are arranged to perform bitwise operations. Look up how the half-adder circuit works: it adds two bits together, and with carry handling you can chain adders together to add two binary integers of any size. There are many such circuits in a CPU, and they all go into a computing unit called the ALU. Each circuit path is enumerated; those numbers are called opcodes or CPU instructions, and a sequence of those makes up a program. There's some circuit whose job is to act as a traffic cop, and it routes bits of data, typically in that flip-flop memory, down the right circuit path so the correct thing is done to your data. That's the high concept, I leave the rest to you.", "If you want to get right down to the basic level, you have to go all the way down to the basic components of CPUs, which are transistors. A transistor is, in essence, just a switch. But rather than physically hitting a button to change its state, you apply a voltage to a pin that controls it. These are the basic building blocks of computers. To go up a level, we can now look at logic gates. These are made of transistors and form the next basic level of building blocks for computers. Logic gates allow you to combine data signals to get a unique output." ], "score": [ 23, 6 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5yfj3d
How is it that iphone storage is full but it's still possible to stream songs over Spotify
My spotify tells me that my storage is full so I can't download any more songs, my iOS camera tells me the same. How is it that I can still stream songs from spotify, shouldn't the cache memory used be too much?
Technology
explainlikeimfive
{ "a_id": [ "depk2qc", "depjuuh" ], "text": [ "Your phone has a hunk of memory dedicated to long-term storage, and another hunk saved for short-term storage. When you download songs, you put them onto long-term storage. Streaming uses short-term storage, adding and deleting whatever it deems necessary. Presumably you could jailbreak your phone and fiddle with the ratios, but it will probably end up having some surprising side effects, as I wager many apps depend on all of that short-term memory space. If want more memory, I'd advise switching to an Android phone that accepts microSD cards. Get the phone with basic memory and load the 128gb Micro SD up with all the tentacle porn you like.", "Because the data that is used only gets saved temporarily and gets deleted or overwritten when the next song is played." ], "score": [ 10, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5yfmoh
Do supercomputers have similar parameters like ram or hdd size or something entirely different?
Technology
explainlikeimfive
{ "a_id": [ "depl2o5", "deplgpq", "depl4ye", "depxcmx", "deprknb", "deqc870", "deq0cte" ], "text": [ "They use the same general architecture as typical PCs, just in much greater quantities, and higher quality components. Their RAM almost certainly is of the ECC variety, which is basically the same type of RAM technology as used in normal PCs, except they have an additional chip on them that calculates checksums, to ensure that no memory corruption goes unnoticed. They usually use special motherboards that can hold 4 or more CPUs each, and often several hundred gigabytes of RAM. The CPUs they have often has a lot more cores than consumer models too. Intel's high-end server CPUs can have 18 cores each. However, if you really wanted to, you could still get one of these 18-core CPUs, put them on a motherboard with one single CPU socket, install windows and play counterstrike on it, if have more money than you have sense. Several motherboards with several CPUs each are then connected together with an extremely high-speed network link. In general, supercomputers operate in the same way as regular computers, but they're just hundreds, or thousands of semi-independent computers that are linked together into a cluster, and software written in a way that divides up the computing tasks into segments, and feeds a number of segments into each cluster node. When it comes to storage, supercomputers don't typically require an enormous amount of actual storage. The actual data sets aren't necessarily extremely big in size, they just require extremely complex calculations to be performed on them. If you have enormous computers that are mainly used to store enormous amounts of data, such as facebook storing every single picture and video anyone has uploaded, the term for these things are datacenters, not supercomputers. At URL_0 you can see the current toplist over the worlds biggest supercomputers, how many CPUs they have, and how high maximum performance they have. The current leader is in China, and has 10,649,600 cores in total.", "Super computers today consist of compute nodes, which are mostly like individual computers. Each node has a number of processors, with a number of cores, and their own RAM, and a couple network cards. There are nodes dedicated to the networking, essentially a switch or router, to ferry work and data to each compute node. The thing with the compute nodes is they're not stand alone, they're not each running their own operating system. The compute nodes have no need for video cards (though there may be some sort of primitive console access for diagnostics) or necessarily for hard drives (though there may be one as a cache). So you can take your notion of ram and processors, and just multiply it by the number of nodes. There are other characteristics, and that has to do with topology. This is an issue with your computer, too, about where CPU cache is, the memory controller, the north and south bridge, and other aspects. For a super computer, the complexity of connecting all these nodes becomes a significant factor. The technology may not exist to have a central dispatch to some 40,000 processors totaling some 10 million cores as exists in the current world's largest super computer. So there tends to be a number of hops and different routes so data can get from any processor to any processor as fast as possible.", "Supercomputers are generally built to the particular specifications of their application, which means they don't have any well defined specifications. 
You might, for example, have a render farm for a movie production team which is optimized for video rendering, or you might have a computer which simulates molecular interactions in order to do pharmaceutical research. While both of those applications require far more computing power than the average computer, the types of computation differ and the setup would differ. Some supercomputers can perform up to ~~96~~ 93 PFLOPs, or 93 quadrillion floating point operations per second (a quadrillion is 1,000,000,000,000,000).", "I've worked in this field and can provide some answers. RAM/HDD size is different depending on the architecture of the supercomputer. Most of them use a cluster of individual nodes (~2x CPUs up to 32, ~minimum of 256 GB RAM, fabric interconnect, management/login networking) or build them into a monolithic, singly addressable pool of CPUs and RAM. So you would log in to some systems (I think SGI?) and you'd be presented with a couple hundred cores and terabytes of RAM. The separated nodes use a message passing interface, or remotely and directly address the memory when the compiler and application running the processing are configured. This allows all the nodes to know every part of the workload and communicate as one. The multiple interconnects are usually InfiniBand, a custom interconnect (used by Cray and some others) or the faster Ethernet flavours (think 100 gigabit or higher), and require all of those connections to be bonded together in order for that one node to not slow down the nodes in the current workload. I hope I broke it down and kind of explained the reason for the RAM separation at least. For HDD, they're usually hosted on large storage area networks, which are an entire system dedicated to providing ones and zeros to the supercomputer cluster as fast as it can. These are usually whole server cabinets of drives or, increasingly, solid state storage. They then connect to the interconnect directly, or through filesystem servers (usually a dedicated supercomputer node) that manage the clustered storage, and have petabytes available in a high performance capacity directly through the nodes' interconnects. I've touched on a couple implementations, as there are other ways and locations for storage to be accessed or physically stored.", "Absolutely. Supercomputers are mostly made of consumer parts. A typical supercomputer is, in essence, a number of regular computers connected via a network. These are called nodes. Each node will have the usual processor speed, number of cores, and RAM. However, they usually use a top of the line setup (i.e. a 4x24-core setup, up to terabytes of RAM), but the specific setup depends on whether it's set up for data-heavy, memory-heavy or CPU-heavy tasks. The HDD is usually on a network and shared among these nodes, and is usually huge and set up for hugely parallel reads and writes, so files might take longer to open initially, but can have data firehosed into them. The important extra parameter is the network. The speed and latency of the network is very important, as is the shape of the setup. Nodes that are close together, in some sense, might be directly connected, so messages sent between them are sent quickly. Nodes that are far apart might be indirectly connected, so messages need to make several \"hops\" to go from one to the other, meaning they're sent more slowly.", "What has not been mentioned is interconnect. Packet switched Ethernet is too slow for thousands of commodity PCs communicating.
Supercomputers tend to use [switched fabric]( URL_0 ) such as [InfiniBand]( URL_1 ). Much lower latency.", "Many super computers nowadays are really just regular computers, just networked together in a way to where they're all working on the same thing. The day of the custom Crays and stuff like that is largely gone. For a while, the Airforce had a super computer made of a few hundred PS3s running Linux. Why? Highly modular, each component is fairly cheap, and easily replaced or expanded. Get more money? Instead of a new system, add on a few hundred more nodes." ], "score": [ 722, 28, 18, 9, 7, 3, 3 ], "text_urls": [ [ "https://www.top500.org/" ], [], [], [], [], [ "https://en.wikipedia.org/wiki/Switched_fabric", "https://en.wikipedia.org/wiki/InfiniBand" ], [] ] }
[ "url" ]
[ "url" ]
5yfzm3
If wikileaks is a threat to corruption/collusion in developed countries, how is it still up or not constantly attacked?
Technology
explainlikeimfive
{ "a_id": [ "depp5my", "depr090", "deprwys", "depudno", "depvmxu", "depw7uf" ], "text": [ "A DDOS attack works by bombarding a website with so many commands that it has to shut itself down. It's effective if you have enough computers to do it, but the bigger the site, the more computers it takes to shut it down. It's also illegal to do a DDOS. It would be difficult for a government to do that without getting caught(especially when fighting a website who's entire shtick is revealing government secrets), and if the people found out that they were attacking a site like that, it would be seen as a sign of tyranny, which is bad for any government that calls itself a free country. TL;DR, DDOS attacks are hard for governments to organize, and a more efficient solution would be to just find the guy leaking information and fire him.", "Wikileaks has faced a great deal of strident opposition in its past. I very much suggest reading on what happened to Assange and Wikileaks after the release of the Collateral Murder video and the Afghan and Iraq War Logs during the Bush administration. All the major credit card companies blacklisted them for donations, the US targeted Assange for prosecution even though he's not a US citizen, and Wikileaks' website came under attack for a time, resulting in hundreds of people setting up Wikileaks mirrors to keep the content they shared online. That's why Assange is hasn't been able to leave the Ecuadorian embassy for 5 years, because of a contrived and completely fake criminal complaint intended to get him ultimately extradited to the US Of course, this was back when the left liked Wikileaks because the evil being done by a Republican administration was being exposed, and rightly so. Political partisanship is one hell of a drug that causes people not to care about what was done, but by whom.", "It's highly probable that wikileaks is a part of what traditionally was called[\"COINTELPRO\"]( URL_0 ). Since the latest release showed us that the CIA could get into ANYTHING since 2003, then clearly this is being allowed to happen.", "It would be pointless, since the information is already out there. If WikiLeaks is somehow taken down, there are mirror websites that host the same info.", "Wiki is a decentralized Org. They use Encryption VPNs and Proxies making it difficult for authorities to track. A DDOS attack is not going to take a website down permanently and Wiki has many avenues to distribute information.", "Wikileaks also has a \"dead man's switch\" in the form of a large encrypted file that anyone can download. If something bad happens to Julian Assange, the dead man's switch is triggered and the password to decrypt the file is released allowing everyone with the file to read whatever damning secrets it holds." ], "score": [ 40, 25, 7, 3, 3, 3 ], "text_urls": [ [], [], [ "https://en.wikipedia.org/wiki/COINTELPRO" ], [], [], [] ] }
[ "url" ]
[ "url" ]
5yg2qz
How exactly are files transfered via Wi-Fi and what dictates the transferspeed.
Is it just sending information what Data the destination Drive needs to write? If so, is the bottleneck how fast the source can give out the information or is it the writing speed of the destination? Or does it work totally different?
Technology
explainlikeimfive
{ "a_id": [ "deppwm8" ], "text": [ "Yes. Essentially, the data transferred is just a stream of 1s and 0s for the destination drive to write (it's probably encrypted, and it has some extra stuff around it to make sure that the data reaches the destination computer intact). The bottleneck depends on a) the destination drive's write speed and b) the WiFi speed. The WiFi speed is not always as simple as it sounds, because it can be affected by interference. Normally WiFi networks operate somewhere between 20 and 50 Mbps, which is slower than hard disk drives (even HDDs should be capable of writing at around 80MBps - note that capital B there which means it's 8 times faster than a small b) unless they're damaged or worn out. VERY new WiFi networks (called ac) can transfer at up to 20MBps for short periods of time, which is still 4 times slower than an old-style HDD can write the data. The difference is even more apparent with SSDs, which can write data many times faster than even the most modern WiFi network can transfer it." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yg4la
Why do camera resolutions drop when framerates go up?
Like the Slow no guys always have super slow footage but with crappy resolution, why?
Technology
explainlikeimfive
{ "a_id": [ "depph3t", "depph96" ], "text": [ "It's usually due to the amount of data the camera can move from its sensor into its memory. Imagine a basketball hoop: you can only fit through a single basket ball at the time, but but you would be able to push through a lot of tennis balls, because they are smaller. With higher frame rate, the frames resolution needs to be smaller/lower too.", "At any frame rate the sensor has effectively scan for light every frame. With higher resolutions there are more pixels to scan in a that fraction of a second and it's not easy to do especially at the insane frame rates that Gav and Dan shoot at. Basically: Lower resolution can be scanned quickler equaling higher framerates. Hopefully that makes some kind of sense. B" ], "score": [ 10, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5yg8p3
How much storage does Disney/Pixar need for a typical movie?
Since we're always hearing about the absurd render times of Disney/Pixar films, that got me thinking - How much storage space does Disney/Pixar need to hold all of the assets (models, textures, audio, etc.) for an average movie?
Technology
explainlikeimfive
{ "a_id": [ "depqful" ], "text": [ "[citing a article from a Google search]( URL_0 ), the range is in the order of dozens of terabytes for a older Pixar movie, or hundreds or more for recent productions." ], "score": [ 3 ], "text_urls": [ [ "http://www.denofgeek.com/movies/computer-animation/40348/from-bytes-to-terabytes-computer-animation-s-constant-leaps-forward" ] ] }
[ "url" ]
[ "url" ]
5ygd0p
Why does the internet get slower when my laptop is further away from the router? Isn't the data still traveling at the speed of light?
Technology
explainlikeimfive
{ "a_id": [ "deprppf", "depvgw9", "deprngp" ], "text": [ "The waves travel at the same speed, but as you move further away from the router, the strength of the signal decreases, and becomes more prone to interference from other sources. When this happens, it becomes harder for your computer to determine what's part of the signal, and what's part of the environmental interference. When this happen, the laptop needs to ask the router to send the data once again, and hope it doesn't get too distorted by interference this time. However, your computer's request for re-transmission might also be hard for the router to interpret because of the same interference, and then you're really starting to feel the lag. Retransmission holds up data packets and lowers the average throughput of the link", "Have a conversation with someone sitting next to you. Chances are you won't have to repeat things often, because you can easily hear the person next to you. Now have a conversation with someone on the other side of a busy street. You have to yell, and when cars go by you might have to repeat yourself to be heard. The further you are from your router, the harder it is to communicate with it (interference and power needed)", "wireless signals are not perfectly transmitted and received due to interference. when a packet is not received, it takes time to acknowledge and resend the packet. that reduces your correct data throughput" ], "score": [ 9, 6, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5yh0p7
Why is it so difficult to make "perfect" software?
Operating systems and apps are generally far more reliable and stable than they were when I first started playing with computers decades ago...but why aren't then virtually bug free by now? What I'm saying is, they keep adding features to computer and mobile OS's that most people don't use. Why has no one thought to just suspend releasing updates until software is virtually flawless? With all the computing power we have, why can't supercomputers stress-test software to insure quality? I'm sure there is a much higher standard in embedded software and super critical applications, but why not everywhere? The Apple Maps debacle comes to mind as one example. Even iOS itself. Apple seems very conservative about making big changes, iOS is relatively simple and locked down. How is it that it has ANY bugs or performance issues? Or Roku and web streaming devices like that. WTF can an old fashioned TV remote on a non-smart TV respond instantly but a "smart" TV runs like shit with lags and delays? Why isn't it possible to make touch screens and smart TV UI's that respond with flawless, lightning quickness every time? Like I said, I definitely see that over the decades all this has improved but am I alone in thinking stuff should be flawlessly responsive and super fast first, and loaded with features no one asked for *second*?
Technology
explainlikeimfive
{ "a_id": [ "deq1o2x" ], "text": [ "Software developer here, > but why aren't then virtually bug free by now? We haven't been working on the same programs this whole time. Every time you change an existing code base, you no longer have the same program. Business people assume software is more malleable than it is, and they are loathe to throw away IP. So I write a piece of software. Let's say it's perfect. Now management comes to me and says they want +feature. The software wasn't built with +feature in mind, this changes the scope of everything. The proper thing to do would be to reconsider the whole project from square one and understand how +feature contributes, then throw away anything and everything in the program that does not conform to this new design. But that takes too much time, that *wastes* too much IP that is currently valuable and profitable. So what they want me to do is *shoe horn* +feature into the program. It's a hack. It's ugly, and it breaks design assumptions and invariants. If I can do it before, I can do it again, and again, and again, and the code gets more and more divergent from it's original design. This sort of tack on feature creep grows complexity exponentially. > Why has no one thought to just suspend releasing updates until software is virtually flawless? Business people are short sighted. We have a cancerous belief that our economy is a growth economy, and that market gains and profits now are more important than long term achievements for the company. I've never been with a company that has kept a CEO more than 4 years. These guys only care about looking good in front of shareholders, getting rich, and moving on before it blows up in their face. Then they can say look back at their history, their golden path of success. The failings of those companies after them are the faults of that next guy. There's also the principle of \"Worse is Better.\" Look it up. Perfect software isn't successful in the market. > With all the computing power we have, why can't supercomputers stress-test software to insure quality? You don't need a super computer. Most testing is useless, as they only prevent regression, they don't discover unknown bugs. QA departments are big wastes of money because they can't exercise every possible combination and permutation, only a sample set of possible inputs. And typically, they have no fucking clue what they're doing and only check for regression, which we have automation setup to do, and they'll only ever do the tests the developers have done themselves. If testing were productive, you wouldn't find bugs in production. > I'm sure there is a much higher standard in embedded software and super critical applications, but why not everywhere? It's unnecessarily costly and the market will tolerate a lot of shit. > WTF can an old fashioned TV remote on a non-smart TV respond instantly but a \"smart\" TV runs like shit with lags and delays? Old televisions were more like electro-mechanical machines than electronics. They were either on or off, they didn't need to boot. Hell, the old CRTs needed to literally warm up like you would a car engine. > am I alone in thinking stuff should be flawlessly responsive and super fast first, and loaded with features no one asked for second? The programmers lament..." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yhzk7
why can't we just colonise the moon instead of mars?
Technology
explainlikeimfive
{ "a_id": [ "deq6g3x" ], "text": [ "We could certainly put a base on the Moon, and it could even potentially have significant use, such as a place from which to launch further exploration without having to fight against earth's gravity. but it's a more hostile environment, and some questions we have, such as the search for extraterrestrial life, are much less likely to be answered there than on Mars." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yiubp
Why people use BSD
Windows = target: easy to use, amateur users Linux = target: programmers, servers MAC OS = target: artists, designers, musicians BSD = target: ??? Who uses BSD? Why people use it? What kind of people ( generally ) uses it? What are the main differences between its main distributions ( such as DragonflyBSD and FreeBSD )? Help me understand this BSD world.
Technology
explainlikeimfive
{ "a_id": [ "deqeix1" ], "text": [ "Usually a major motivator in BSD is licensing. It's a unix environment that performs comparably well to Linux (better in a few regards) but doesn't have some of the rules with the GNU license. Basically, you can tweak it and don't have to release the source code. Big companies like to not release the source code for things so they can keep the secret sauce that drives their products internal. Additionally, as it's not HUUUUUGELY different from Linux (I am going to be beaten by the BSD fans for that one I'm sure) and most Linux software (that's POSIX compatible) can be compiled to run on it just fine. It's most common use these days is embedded systems, where a rock-stable standardized environment that needs to work and stay working is needed, especially one with powerful & effective networking abilities. The Playstation 4 sports a BSD variant as it's internal system, and **maybe** the new Nintendo Switch does too (the legal documents say \"BSD Kernel\" but Ninty might have only plucked part of the network code, it's hard to tell at this point 'til someone cracks the Switch.) Other devices like routers and similar gizmos often sport it. If you're looking to get a \"whole enchilada\" package with lots of tools and toys (and the ability to update them) already installed and waiting, a Linux distro is probably your best best. BSD *can*, but it's usually more of a \"blank canvas\" setup. Edit: I forgot an important one! Mac OS X and iOS are based on Darwin, a BSD derivative. Most of what makes those their unique selves is Apple tools and code on top, but the deep down guts are the same." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yjbv2
How does your browser know what server to connect to when you type out a url? Where is this information stored?
Technology
explainlikeimfive
{ "a_id": [ "deqitt7" ], "text": [ "Your browser doesn't know! For the second question: Your browser sends the request to your proxy, which is maintained by your Internet Service Provider (they're the people you pay money to for your internet). Your proxy knows of the existence of a Domain Name System Server (DNS) that it sends the name to. If you want to know where that is: URL_0 This server has a list of names that it translates to an IP address, like 241.10.231.117. Your browser now replaces the request name with this IP. This is stored locally. Fun Fact: You can edit your computer's settings to always map a name to a certain IP. So for instance map URL_1 or URL_2 (those idiots who fill your pages with clickbait ads) to 127.0.0.1 (default name for your own computer) and whenever a webpage tries to connect to them, it will connect to your own computer, get nothing, and leave the space blank! Now the browser will attempt to find 241.10.231.117. It will send the request to the nearest router. The router has a bunch of streetsigns (not a technical term, just an analogy) saying in which direction that is. And eventually you find the right computer for your request." ], "score": [ 16 ], "text_urls": [ [ "http://www.root-servers.org/", "outbrain.com", "taboola.com" ] ] }
[ "url" ]
[ "url" ]
5ykxc1
Is there a reason why train horns have to be ridiculously loud?
I live a few blocks away from a train station and they wake me up most nights so it better be a good reason.
Technology
explainlikeimfive
{ "a_id": [ "deqvgkv", "deqxxw0", "deqvgz9" ], "text": [ "Trains are difficult to stop. The horns are really loud to bring awareness to their presence. It's easier for a car or a pedestrian to move out of the way than it would be for the train to make an emergency stop.", "Many years ago, I worked at a factory that used trains to move things around the plant. To get to my office building, I had to cross a set of tracks that ran along the side of the building and within about 8 feet of the entrance. You could hear the train's horn far away, but when it got closer to the offices, the people in the building complained about the noise. So the train stopped sounding it's horn so close to the building. There really wasn't a problem seeing or hearing the train when walking toward and into the building, but when leaving, one had to be VERY careful. It seems there is a strange phenomenon that when you stand within a certain range of the train, you can not hear it coming. I was surprised by this. So one day, when I heard the train's horn far off, I walked out the door to test this, and sure enough, as the train got closer I could not hear the train coming. I was shocked and I certainly have no scientific explanation for this, but it is something that has stayed with me for over 40 years. When my daughter was young and would complain of the noise, I told her to be grateful for that loud noise because that loud noise might keep her from getting hurt someday.", "They want to make sure people are well aware of their presence. It doesn't take much to derail a train. And a train derailment at high speed is a **terrible** thing!!! Edit: made terrible bold." ], "score": [ 24, 7, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5yl25f
Why do camera lenses have to focus if they're capturing in 2D?
As I understand it our eyes need to focus because they're pairing two flat images to make it one 3D image with depth. Camera lenses, however, need to focus even though they're capturing in 2D, so I imagine there must be some different mechanism in there. What exactly is going on?
Technology
explainlikeimfive
{ "a_id": [ "deqwjqu" ], "text": [ "Our eyes need to focus separately for the same reason a camera does, and not because we see in 3d. Here's a simple experiment: close one eye and hold a finger right in front of the other eye. See how you focus on the finger and the background becomes blurry? Alternatively, focus on the background and the finger becomes blurry. When we look at a certain point in space, the light coming from this point enters our eyes at various angles. This is because the pupil isn't a single point, it has a certain diameter - some rays of light go through the bottom of the pupil, others go through the top, others go through the middle, etc. Without a lens, all these rays of light will hit different points on our retina. In order to see clearly, we need all the rays of light coming from a certain point in space to reach the same point on our retina. That's what the lens does - it focuses the light to that point. The problem is that it can only do this for a specific distance - it can't focus on something close and something far at the same time. That's the reason we need to focus. Camera lenses do the same thing - focus the light coming from a certain distance. [Here's a diagram, I hope it helps.]( URL_0 )" ], "score": [ 8 ], "text_urls": [ [ "http://static.giantbomb.com/uploads/original/6/65562/2022820-accomodation.png" ] ] }
[ "url" ]
[ "url" ]
5ylcx3
Edward Snowden is fairly active on Twitter, participates in video conferences around the world and is most probably connected to the internet all day, why can't the CIA, NSA or the FBI trace his location? Last time I checked he was still living in an undisclosed location in Moscow.
Technology
explainlikeimfive
{ "a_id": [ "deqy70i", "deqy98p", "deqyb69", "der1ops" ], "text": [ "Who said they don't know where he is? They just can't touch him.", "What are they going to do, send a SEAL team into a government building in the middle of Moscow? The diplomatic repecussions would be enormous, and I wouldn't be surprised if Russia retaliated by assassinating a government official on US soil. The relationship between the US and Russia is strained enough as it is.", "Odds are, the NBA and CIA already know for sure where Snowden is currently living.the confidentiality of his place of residence is most likely to ensure that he's not being hounded by admirers, haters and the rest of the general public 24/7 And honestly, knowing where Snowden is doesn't really help the US, because any action they take against him would most likely lead to a diplomatic crisis, since Russia has granted asylum to Snowden", "If the US government wants to know where Snowden is, they do. They have tons of tools and money, and it's not really hard to find people when you know a general location. The US government doesn't care. They aren't going to arrest the guy on Russian soil and he isn't an active threat. Undisclosed in this case means it's not public knowledge." ], "score": [ 30, 24, 9, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
5ylrto
what the difference is between a video format and a video codec.
For example, if I export something from editing software as a Quicktime .mov, I can change the extension to .mp4, but what have I actually changed? Is there a difference between MPEG-4 and a h.264?
Technology
explainlikeimfive
{ "a_id": [ "der19ol" ], "text": [ "With video files there are 3 different format types to consider: * The video codec. This is how it transforms the video frames in the file into something you can actually see on screen. H264 is a video codec, its a method of compressing and decompressing video data. * The audio codec. Equivalent of the video codec, but for the sound. * The container format. This describes how the video and audio frames are packaged together, and how you can extract them to pass them into the video and audio decoders. They also usually have features for multiple audio and video streams (e.g. for different languages) as well as storing subtitles. MP4 and mov are container formats. Container formats typically have support for many different codecs. So it may be possible to convert a file from mov to mp4 without actually having to change the video data itself, it's just packaging it in a different way. However if you change the video codec, you will lose quality because it has to be decompressed and then re-compressed in the new format." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5ylyt2
The dark web and how it works
Technology
explainlikeimfive
{ "a_id": [ "der2myc", "dercegf", "derawjy", "derlfu4", "der64n3" ], "text": [ "We consider dark web like beeing a part of the internet who required an encrypted connection where users are anonymized. You can navigate on dark nets (dark web is the name for all the dark networks) by using some softwares or gate servers (proxy). Please, feel free to ask me your questions about dark web if you want more information.", "Basically, you have the deep web and you have the dark web. The deep web are those sites that aren't indexed by search engines. You can't get to them unless you know exactly what you're looking for. The vast majority of the deep web are old servers and such that are completely useless. There are some gems, however. The dark web is part of the deep web. It refers to servers that you can only get into with special software or special network configurations. Some of them are curated in large networks like TOR, but others are very small and private. These are the types of places where drug and human trafficking tend to go down.", "As far as I know, any website that isn't indexed by a search engine is considered part of the \"deep web.\" But in terms of .onion sites, that part of the deep web is only accessible through certain customized browsers like TOR. Not exactly a safe or friendly place, I'm sure you've heard horror stories, I don't advise going on there.", "Alright so you have the surface web, the deep web, and the dark web. The surface web is the internet you're most familiar with and is classified as anything that can be found with a typical search engine. The deep web is basically shit like medical records, legal records, government databases, and the like. Then you have the dark web which I'm guessing is what you're interested in. All the illicit shit you hear about is what the dark web consists of. Drug trafficking, weapons trafficking, hitmen, kiddie porn, snuff films, etc. - they're typically all .onion sites from what I've seen. And they're all encrypted sites so it's not just a matter of typing in buyblackmarketshit.onion. The site addresses look like gibberish and end in .onion. So you either have to know exactly what you're looking for or find a link in a forum, or just find a rough list of sites on some dark web wiki page of some sort. It's a bit complex but that's why they reference an onion with the dark web: because basically there are many layers to the dark web. So download the TOR browser from a search engine link, have fun, and be safe. Also, just for reference, it's said that only 4% of the Internet is the surface web", "See all the stuff you see on Google, Yahoo and other search engines...stuff you see day to day...that's not the dark web cause you can see it. Those search engines don't know everything though...there's millions of other little corners of the internet where unknown sites, servers, programs etc sit. These can't be seen by Google (and other search engines) so we call these the 'dark web' cause we can't see it. People who knows these exist can see them...they may be able to even access them...but they would have to know how to do it...the other dark web is the one deeper. People can go on here using a special program...it sort of lives separately from the non-dark web...people can put stuff on the dark web which can't be seen from the normal web...it's a pretty strange place where one can see all sorts of things that wouldn't normally be available on the normal internet. 
I wouldn't recommend it tho...that rabbit hole can be seriously damaging to your psyche. no. I'm not joking." ], "score": [ 44, 42, 13, 10, 4 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5ymigv
what do these gauges on pens mean ?
These things here URL_0
Technology
explainlikeimfive
{ "a_id": [ "der7ad7" ], "text": [ "Not gauges, baffles. [Capillary action]( URL_0 ) keeps the ink within the 'loading area' as I call it in the same way wax stays within a candle's wick. The majority of the ink in the pen is free-flowing in the tank, but the loading area keeps it more or less locked in there and feeds to the ballpoint at a regular rate. This prevents the pen from drying up or exploding, as the ink's feed rate is regulated by the baffles. Source: I'm a huge nerd for Pilot pens." ], "score": [ 207 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Capillary_action" ] ] }
[ "url" ]
[ "url" ]
5ymjj6
Why do phone chargers break much more frequently than those for other electronic devices?
In six years my laptop charger never once faltered on me despite not handling it with the best care, and I don't think I've ever had an HDMI cord or microUSB cable stop working. So why does my phone charger (even official ones) break so easily then?
Technology
explainlikeimfive
{ "a_id": [ "der6wxi", "der6ubo", "der7ddq" ], "text": [ "Well there's a few ideas.. Your moving your phone around more when it's being charged so the charger is having to bend much more often. Your phone chargers is also going to be less bulky because the charging port is smaller. No ones going to design a charging cable where the wire is bigger than the port.", "I go through phone chargers constantly, the part that plugs into the phone can bend easy and then your screwed.", "Phone chargers are: * smaller (cable and connector) -- > more fragile * moved around way more * maybe packed in a bag frequently * a good possibility to get your money if you need to buy a new one every year (I am convinced that apple (e.g.) EASILY could build better chargers, but they just like selling them over and over again)" ], "score": [ 9, 5, 5 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5ymwaj
How do some music recognition apps detect humming or singing?
I know how the technology behind matching songs for original tracks works. But some apps like Soundhound can detect humming and whistling too?
Technology
explainlikeimfive
{ "a_id": [ "derdny8" ], "text": [ "Essentially it works the same way. [Here's]( URL_2 ) a paper by the people behind Shazam where they detail a part of their method. By taking small snippets of waveforms and frequencies, they can compare them to every song in their database with similar waveforms and frequencies. For example, if you're humming the star wars theme it will see where your volume increases and decreases and map a basic waveform that it can search against most songs to eliminate them since 95% of music won't be similar to that pattern right off the bat. From there, it looks to see if your volume is increasing at the same time as the songs it believes you're humming. Disclaimer: This is a bit of speculation combined with knowledge of how the original music matching from Shazam works. The specific process is called [Query by Humming]( URL_1 ). [Here's]( URL_0 ) a neat paper from Cornell that goes over the process in way more depth. A lot of it is just pattern recognition based on pitch and a hundred other measurable variables." ], "score": [ 89 ], "text_urls": [ [ "http://www.cs.cornell.edu/zeno/Papers/humming/humming.html", "https://en.wikipedia.org/wiki/Query_by_humming", "http://www.ee.columbia.edu/~dpwe/papers/Wang03-shazam.pdf" ] ] }
[ "url" ]
[ "url" ]
5yoie4
there are more than 300 undersea cables covering more than 550,000 miles. Since the continents are moving, how do they not break apart?
Technology
explainlikeimfive
{ "a_id": [ "dero4zo", "derqbk4", "dero68l" ], "text": [ "Continents move ***INCREDIBLY*** slow. Like 2 to 5 centimeters a year. there's more than enough give and wiggle to the cables that this is a nonissue.", "If you move your desk an inch, would anything become unplugged or break? Probably not. Cables have slack", "The continents aren't moving that quickly and the cables have plenty of slack in them to cover such movement. Heck, temperature changes create more variance in the length of the lines." ], "score": [ 10, 8, 6 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5yorqs
Why isn't it easy for a game to utilize all the cores on a CPU?
Ryzen related
Technology
explainlikeimfive
{ "a_id": [ "derrbmd" ], "text": [ "This isn't easy to explain to people that don't know anything about programming, but the short version is this: Because a lot of things in a software program or game need to happen sequential, something you can't be sure to happen when using multiple cores. If you are only using 1 core, you can be sure that A will happen before B, so the result of A is ready when you need it for B, and B's result will be ready when you need it for C and D. If using multiple cores, you can't be sure that A will happen before B, but since B needs A to already be done, it will lock up the application until A is done and calculations on B begin. If one of the other cores is then already started on C, while the result of B still ain't done, the application will lock up again until B is done... and so forth. There are ways around a lot of this of course, but programming for true multi-core support is a lot more work then single-core support, like a massive amount of more work. A lot of multi-core games aren't even true multi-core. They are made like a single-core application would, where some parts that aren't time-sensitive are then off-loaded to the other cores. ie In an FPS game, the gfx/game engine and positition/aiming/view system could be running on one core, while the UI updates (health, ammo and the like) are then offloaded to another core, because a player won't notice if the UI is 0.03sec too late with updating your ammo count. Depending upon the game, more or less parts of the game can then be offloaded to other cores." ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5ypicb
the Ethernet 5-4-3 rule vs Ethernet Way
Okay I kind of understand the 5-4-3 rule, but not very well. And what is the ethernet way? I cannot find much information about it online.
Technology
explainlikeimfive
{ "a_id": [ "derx04d" ], "text": [ "It's a blast from the past! In the olden days, before switched Ethernet, all devices were connected to a single coax cable. The Ethernet specs say that the maximum length is X meters and the signal needs to arrive on all stations within N microseconds. Everything which the signal passes through adds delay: The signal needs to be picked up and converted, then forwarded to the right output. This adds serialization delay. The 5-4-3 rule says that you can have five Ethernet segments connected with four repeaters (which add the serialization delay) in distance. It's not a hard rule as you are not aware of the exact numbers, it's a design guide to make sure that it most likely will work. The three in it is the number of segments with hosts in it, the other two segments are unpopulated. The Ethernet way is a different design rule which says that you can have two segments with hosts connected through two repeaters via a segment without any hosts on it. These days, with switched Ethernet, where the signal doesn't travel beyond the switch and there is only one hosts on the cable, these guidelines aren't relevant anymore." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yqnee
Why do airplanes use two-pronged headphones?
Technology
explainlikeimfive
{ "a_id": [ "des7bo4" ], "text": [ "So you won't take the headphones with you when you leave. It's an anti-theft measure. Headphones are cheap, but some people would still steal them." ], "score": [ 12 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yqtpd
How do flashbangs work?
Technology
explainlikeimfive
{ "a_id": [ "des86mg" ], "text": [ "A flashbang is an explosive device used to blind and disorient those it is utilized against. Typically, a flashbang will be tossed into a room as it is breached, and the non lethal explosion will disorient those inside to allow the attackers the element of surprise. The flashbang is constructed of a casing (unlike a fragmentation grenade, the casing is designed to remain intact and contain the explosion, rather than blowing outwards with shrapnel) and the filler is magnesium, or aluminum, and ammonium perchlorate or potassium nitrate. When the metal oxidant (magnesium or aluminum) and the oxidizer (ammonium perchlorate or potassium nitrate) mix, it results in a bright flash of light, and a loud \"bang\"." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yqtur
What's the difference between 23.976 fps and 24 fps, and why is 23.976 used?
Technology
explainlikeimfive
{ "a_id": [ "des95ex", "des9q1l", "desa6r2", "desbxs3", "des99ji" ], "text": [ "There is a good explanation from a [previous ELI5 thread]( URL_0 ). > Film standard is actually 24.0 frames per second. The 23.976 is the framerate in digital cinema cameras. The reason it isn't pure 24 is pretty stupid. Basically, our video system in America is called NTSC. It set the standard for video signals back in the infancy of video. In black and white everything worked nicely at pure frame rates. However, when color was introduced to videoall of the broadcasters suddenly had to be transmitting color information in addition to greyscale information. Technology already in place couldn't handle this, so the engineers for NTSC devised a trick where video would now be at a slightly lower framerate, and the extra bandwidth in the signal freed up by this would be used for color information. Today, this is no longer necessary as technology has grown beyond the need for such a gimmick. However, thousands of studios across the NTSC world still have some if not all equipment that can only handle the old NTSC standard. It would be too costly to update every piece of equipment, so we're stuck with this standard for at least another decade or two.", "It is actually is determined by physics and it dates back to cathode ray TV's. The electron beam is scanned across the screen in two passes. It first scans the odd pixels and then the even pixels. In North America TV was broadcast with 525 horizontal rows. Which means each scan did 262.5 rows. In North America each TV channel was given 6 MHz to broadcast it's programs, of which about 4.5 MHz was usable. Picture was sent, and then a small gap and then audio was sent. When color programming came out, they had to somehow fit the color data in between the picture and audio. The color signal was interfering with the normal picture. Using some complicated physics, you can get rid of the interference if the gap between the picture and color and the gap between color and sound are both odd integer multiples of the horizontal frequency divided by 2. Simplifying the expressions, you can find that you need an integer multiple time the horizontal frequency to equal 4.5MHz. So, going back to our 525 rows. Your horizontal frame rate is the number of rows times the frame rate. So, we need a number that when multiplied by 525 becomes divisible into 4,500,000 (4.5 MHz). That number turns out to be 29.97. This is why TV is broadcast at 29.97 FPS. Okay, now with that out of the way. The 23.976 comes into play when converting from the 24FPS that film cameras use to broadcast rates. 24x29.97/30=23.976. This is called a three-two pull down. Basically they are manipulating the frames in a way that is undetectable to the human eye so that they can match the frame rate of your TV. ETA: This might seem like it doesn't add up so I'll add one last step. 23.976 happens to be exactly 4/5 of 29.97. These 4 frames can be stretched into 5 frames because of the way TV images are produced, and that is what allows the movie to be broadcast in 29.97.", "[Matt Parker actually covered this in one of his youtube videos!]( URL_0 ) Short version: its to encode the color in the video. EDIT: woops, i linked the wrong thing. Should be the right video now!", "I'll piggyback on this. 
When a movie is shot at 24.000 fps (and presumably edited at 24.000 too) and then broadcasted at 23.976 fps, does the TV station go through the trouble of re-encoding the video to 23.976 (like they definitely would do with 30 to 24, for example), or do they simply slow it down 1.001 times to avoid frame interpolation issues?", "23.976 is what's used on TVs, because TVs aren't actually 30 FPS - they're 29.97 (and we use that because adding color to 30FPS TV changed the time base). When playing 23.976 FPS on a TV, you map 4 film frames to 5 video frames." ], "score": [ 240, 80, 32, 4, 3 ], "text_urls": [ [ "https://www.reddit.com/r/explainlikeimfive/comments/37z7lq/eli5_why_is_23976_fps_the_standard_on_film/" ], [], [ "https://www.youtube.com/watch?v=3GJUM6pCpew" ], [], [] ] }
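A quick check of the arithmetic in the answers above; the exact NTSC rates are the ratios 30000/1001 and 24000/1001:

    ntsc_rate = 30000 / 1001   # ≈ 29.97 fps: 30 fps slowed by a factor of 1000/1001
    film_rate = 24000 / 1001   # ≈ 23.976 fps: 24 fps slowed by the same factor
    print(round(ntsc_rate, 3), round(film_rate, 3))   # 29.97 23.976
    print(film_rate * 5 / 4)   # ≈ 29.97: 4 film frames map onto 5 video frames

    # 3:2 pulldown: each film frame is held for alternately 2 and 3 interlaced fields,
    # so 4 film frames -> 2+3+2+3 = 10 fields = 5 interlaced video frames.
    fields_per_film_frame = [2, 3, 2, 3]
    print(sum(fields_per_film_frame) / 2)   # 5.0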
[ "url" ]
[ "url" ]
5yqwmc
The difference between overdrive and distortion
Technology
explainlikeimfive
{ "a_id": [ "des8v4h" ], "text": [ "Hard to exactly say without audio examples (which I do not have). Distortion can create a range of tones, depending on specific pedals, or effects used. Distortion is more \"crunchy\" and \"loud\" sounding, a sound common in punk rock, heavy metal, alternative...any kind of rock music, really. Overdrive is a \"warmer\" sound, less \"crunchy\" and more \"fuzzy\". Overdrive is typically considered a \"bluesy\" sound, and is found sometimes in the blues, and in some southern music. Very similar concepts, but the sounds are notably different." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yrtn1
How do noise-canceling headphones work?
Technology
explainlikeimfive
{ "a_id": [ "desfrvl" ], "text": [ "Hmm...since it's ELI5: There are two types of noise-cancelling headphones: (1) headphones with over-ear ear cups that are closed and therefore cancel both sound coming in and sound leaving. (2) headphones with technological noise canceling. These basically analyze the sound waves coming from your outside and produce a signal with inverted waves, matching those sounds from the outside. Imagine how when two people say exactly the same, it gets way louder. It's exactly the other way around: the \"positive\" waves from the outside and the generated \"negative\" waves from the inside of the headphones create waves close to 0, therefore \"cancelling\" the outside noise. Cheers!" ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5ystzc
How are touch screens made?
Technology
explainlikeimfive
{ "a_id": [ "dessylz" ], "text": [ "There's different kind of touch screens that work and are made, differently. Some are made with two layers that when pressed together, create a connection.. some work by needing a conductor (like your finger, or metallic things) to touch it and interfere with its electrostatic field... some use optic technology." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yt8fm
How do surveillance agencies like the NSA and CIA parse through so much information to narrow out terrorists?
Do they have computers looking for key word strings? How would they know the difference between a person saying: "I'd kill for that job" vs. an actual threat.
Technology
explainlikeimfive
{ "a_id": [ "desrfbw" ], "text": [ "That's not how it works. They have a target set that they're able to look at. They're not looking for random \"threats\" via your email or anything, and if you're American they can't look at your email unless a judge approves that you could have foreign intelligence/access to it.." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yveeo
how to learn a game controller
Technology
explainlikeimfive
{ "a_id": [ "det861f", "det76tv", "detjce6" ], "text": [ "It's all practice, practice, practice. There are tutorials that exist but they mostly train you what buttons perform what actions. To really know what you are doing you can't be thinking about the controller at all. With practice you'll get to the point where your brain thinks \"attack\" and your fingers perform that action. As long as you are thinking \"attack, that means press X, where is X, OK I'll press X\" you're going to fail. It's like driving a car. I don't actually think about pressing the gas or the break, I think about accelerating or stopping. My brain is so familiar with the actions that I don't need to instruct it on those middle steps. I think \"STOP\" and before I know what's going on my foot is on the break slamming it down. I've taught some older people some console games and the trick is to start simple. There are games that only use 1 stick and 1 button. Master that, then move onto a game that uses 3 buttons, then a new game that uses the sticks and buttons but has no reflexes factor. Jumping straight into a fast-paced shooting game is like driving on a racetrack before you know how to drive at all. You learn to drive in a parking lot for a reason, find a parking lot video game.", "It's different for every game, but there are a lot of common conventions. The left stick usually moves the character around, while the right stick moves the camera and aims. X (PS3) and A (Xbox) is usually jump. Firing a weapon is usually the right trigger. The \"start\" button pauses and/or brings up the menu. Not every game sticks to these conventions, and of course they don't make sense for every game. Most games have a tutorial which takes you through the basic controls. After that, it's just a case of getting used to it. People struggle at first with using two sticks to move a character around, but it feels natural once you're used to it. Although it's not as responsive and accurate as a mouse.", "My first suggestion is to play a game with limited controls, such as an old arcade game like Pac-Man, Galaga, Mario, etc. Once you feel comfortable, maybe play a game that uses more buttons but at a slower pace or less complicated pace. Personally, I recommend Portal (THIS GAME IS SO FLIPPING GREAT) or Skyrim. Not just because they're some of the best games created in modern times, but because they can be done at your own pace and have button hints for almost all of the actions." ], "score": [ 25, 11, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5yvh3n
How do download and upload speeds actually work,(I.e how do they limit the speed of download through your cables)
Technology
explainlikeimfive
{ "a_id": [ "det7kif", "detl6jv" ], "text": [ "They limit the speed of the download by limiting how many bits per second are allowed to transfer through the wire to you. Someone, somewhere tracks all the bits that go into your house, and counts those bits. Every second, that count \"refreshes,\" but if that count reaches the max rate, they stop sending traffic through until the next second. Basically, if your cable supports 10MBPS, they don't limit it by somehow making your cable support 5MBPS, they just transmit 5MB in half a second, then transmit 0MB for the next half a second.", "I always assumed it had more to do with the physical meaning of \"bandwidth\" rather than the way we measure it in the computer world (digital transfer rate - i.e. Bits per second). I could be terribly wrong, so input is welcome. I'm also no pro, so I may jack up terminology. \"Bandwidth\" refers to a range of frequencies. Data (signals) is transferred over a cable using a certain frequency. Think of a dump truck. You fill it up with dirt and drive it across town but your max speed is limited by the street's speed limit. If you need to quickly move 5 loads of dirt, the people on both ends have to wait on you while you drive back and forth. That's not very fast. The solution? Fill up 5 trucks *at the same time* and drive them *at the same time* and you move 5 loads in the same time you could have moved one. It's similar with sending signals - you can only send the signals so fast, and then you have to wait before you send more. Solution? Connect 5 wires and send data over them all at the same time. Luckily, instead of adding more wires, you can use the same wire as long as you can use more than one frequency. If you can use 5 frequencies then you can send 5 different signals, each with their own frequency, *at the same time.* We can mostly thank Jospeh Fourier for this. Almost 200 years ago some guy named Joseph Fourier realized that you can take multiple frequencies, mash them together to make one signal, then take them back apart, and end up with the exact same original frequencies. So now we can take 5 frequencies, upload/send data over each one, mash them together, shoot them through an Ethernet cable, and have a device on the other end that pulls them apart and receives the exact data you sent on each one. Alternatively, your computer can download/listen for the signal, split it apart into each frequency, and get that info from each frequency. (I think this is how cable worked - the cable company sends you all the channels down one wire, and when you turn the channel your TV just filters out the other frequencies and displays the one you wanted. Surely there's more to it, but I think that's the basics of cable tv, and explains why your neighbor could steal your cable!) **Wrapping up** (I promise) Your router/modem takes signals from all the computers connected to it, works that Fourier magic on those signals, and shoots the combined/composite signal to a magical cable in your wall that connects your house to the internet. That cable that brings internet to your house is connected (in my case anyway) to a big green box up the street. All of your neighbors' magic internet cables are also tied in there. I imagine that box kind of like a huge router (just like the one in your house). That big box takes signals from you and your neighbors, works it's Fourier magic on those signals, and sends it up the next wire. A group of those boxes all plug into an even bigger one, and so on until it connects back to the ISP. 
Go back to the dump truck example. If the dirt is the data, and each truck is another frequency, then the road is the wire. Even if you buy a million trucks, the road can only fit so many. Again, easy solution - upgrade the road by making it bigger. But that costs money! Instead, just make certain customers pay more money if they want their dirt faster. If you have 4 customers and one pays for quicker dirt delivery, then you can send 2 trucks together for his delivery and the other customers each get one truck for their delivery. Apply that analogy: each cable can only carry a certain range of frequencies (something in physics explains this), so those boxes do eventually max out data transfer. This can be fixed by using bigger, better boxes and cables all the way to the ISP, but it gets way too expensive. The solution? Make a customer pay more money in order to have more frequencies available to them. So now you give the ISP more money, and they push a button that tells the box up the street to allow you to use a bigger *range of frequencies* - and a range of frequencies is exactly what we call a *bandwidth.* Since downloading is the majority of internet traffic, they assign you many more frequencies for download than they do for upload. Once you run out of frequencies to use, you spend time waiting on your computer to finish using the current ones so that you can use them for something else. Boom. Done. I'm fairly positive that this is how it works, but it could all be monitored and regulated. Heck if I actually know haha
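A toy version of the counting idea in the first answer above: count what goes through in each one-second window and stop once the cap is hit. Real ISP traffic shaping is more sophisticated (token buckets, queueing), and the numbers here are made up:

    import time

    class SimpleShaper:
        def __init__(self, max_bytes_per_second):
            self.cap = max_bytes_per_second
            self.window_start = time.monotonic()
            self.sent_this_window = 0

        def try_send(self, nbytes):
            now = time.monotonic()
            if now - self.window_start >= 1.0:    # a new second: reset the counter
                self.window_start = now
                self.sent_this_window = 0
            if self.sent_this_window + nbytes > self.cap:
                return False                      # over the cap: hold the data back for now
            self.sent_this_window += nbytes
            return True

    shaper = SimpleShaper(max_bytes_per_second=5_000_000)   # a "5 MB per second" plan
    print(shaper.try_send(4_000_000))   # True
    print(shaper.try_send(2_000_000))   # False until the next one-second window starts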
[ "url" ]
[ "url" ]
5yvyc6
What process does someone go through to create a coding language like Python or Ruby on Rails?
Technology
explainlikeimfive
{ "a_id": [ "detcxsn" ], "text": [ "A quick nit pick. \"Ruby on Rails\" is not a programming language. Ruby is a programming language. Rails is a framework written in Ruby for writing web applications. First we need to establish what a \"program\" is. When you write a program, you are writing out a bunch of words and save them in a text file. Here's a simple example of a Ruby program. loop do puts \"Hello\" end OK, so now I have that text saved in a file, \"hello.rb\". To run the program, I would type the following command into a terminal > ruby hello.rb This will then endlessly print out the word \"Hello\". What's going on there, is I'm running a program called `ruby` and giving it `hello.rb` as input. What is the `ruby` program doing, exactly? `ruby` starts by reading in the text file. What it does next depends on what is in the text file. In this case, it will print out the word \"Hello\" over and over again. If you want to write your own programming language, you have to write a program that reads in a text file, and then does different things depending on what's in the text file. This is a very well understood process at this point. After reading in the text file, the first step is to break the file into **tokens**. In the case of the `hello.rb` file, it would be broken into the following tokens. loop do puts \"hello\" end Now that our text file is broken into a stream of tokens, the next step is to take those tokens and do stuff with them. `loop` says, \"Remember this spot, we will come back here later.\" `do` says, \"I'm the start of a group of code\". `puts` says, \"I'm going to print the next thing to out.\" `\"hello\"` says, \"I am just a piece of data\". `end` says, \"I'm the end of a group of code.\" To create a new programming language, all you have to do is write a program that can break up a text file like I did above, and then follow the list of instructions." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yw3pd
Why can programs such as Skype and Snapchat perform international communications for free, but calls and text require additional fees?
Technology
explainlikeimfive
{ "a_id": [ "detc6dh", "detirw5", "detr796" ], "text": [ "They're not free though are they. Both users regardless of location require an internet connection. Even if you both used free wifi the wifi is free to entice business and is still being paid for. The fact that it appears free is down to it using the internet which is borderless. For traditional voice and text it requires your provider paying the provider in the other country for their service to deliver the call or text. Although usually the fees are massively inflated. Bottom line you pay either way just in a different way.", "You're already a phone company customer (trying to earn more). Skype is still hoping to make you a customer.(earn anything).", "Phone systems were built a long time ago, when international communication was really expensive - that set a precedent. Generally phone companies were national monopolies, often with legal protection from competition. So they would charge high fees to other telecoms trying to hand calls into their country. They could do thid because there was no other option and international calling fees weren't worried about by most people (in effect these fees were like the cream that the teleco was allowed in payment for providing an affordable national calling infrastructure). Then the internet came along. Many ISPs started as small competitive startups offering service over phone lines (dial up). That meant they were keen to freely interconnect with content providers as it made them more competitive (by providing better service to their customers), even when they were far apart they could get cheap links as they'd have access to a \"telecom hotel\" with lots of companies competing to offer connections between cities/towns/etc. So internet access has become very cheap, phone providers haven't kept up. N/B: this should also provide you with a good background on net neutrality. As the US shifts to a monopolised infrastructure (you could dial into any dial up provider but most americans only have one or two high speed cable options) ISPs are trying to shift back to the old model where they had much more power over what they charged consumers. In countries with more open infrastructure (eg in NZ you can choose any fibre provider in your local exchange, normally a few dozen options) this isn't such an issue." ], "score": [ 5, 3, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5yxfe1
How are Ethernet cables rated for different speeds actually... different?
So say between Cat4 and Cat5 standards, Cat5 has faster speeds than Cat4, but the cables for both Cat4 and Cat5 appear to be the same, with all the same number of wires. So what is different between the two cables that prevents a Cat4 rated cable from reaching Cat5 speeds?
Technology
explainlikeimfive
{ "a_id": [ "detngk6" ], "text": [ "Different gages of wire, different metal compositions in the wire, different amounts of shielding on the wire, and lots of other small differences." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yxfnc
Why can we land a robot on an asteroid but can't design a vending machine to take a slightly crinkled dollar bill?
Technology
explainlikeimfive
{ "a_id": [ "detntop", "detqods", "detnglb", "detpfkf" ], "text": [ "If a vending machine was made to have some leeway with bill and coin shape/sizes, it would be easier to make them accept fake money.", "Technology in different fields does not improve at the same rate, for a variety of reasons. Some of these reasons include: * Money. Perhaps someone did the math, and it turned out that paying someone to design and build a better scanner would not be worth the few additional dollars they would receive in sales that were prevented by wrinkled bills. The space probe engineers, on the other hand, were paid by the government directly and designed the best systems they could with the budget they had. * The problems are very different. Image recognition is a notoriously hard problem in computer science; that's why we have captchas, etc. The human brain is still better at it than machines are. On the other hand, the things necessary to design and build a craft that can land on an asteroid have been fairly well-understood since the 1960s or so. Actually doing so has just been a question of the political will to muster the funding required. It gets easier as technology improves, but if it was a priority we could definitely have accomplished it back then. * Motivation. It's a lot easier to make inspiring speeches about spaceflight and get people to care about advancing human knowledge than it is to get people to care about slightly improving a machine that still works most of the time, in such a way that really only enriches the people who own the machine. * Applicability/side effects/historical accident. In this particular case, the research into rocketry and spaceflight required to land on an asteroid was motivated largely by military research done to try to gain an advantage in WWII and the Cold War. If you can put a person or probe in orbit, then you can put a nuclear warhead there too, and drop it on the other guy. Turns out you can use the same technology for exploration and research. Thus far, there has been no major war where the ability of a machine to recognize that a crumpled bill is still a legal dollar has been important in the slightest. TL; DR: \"Technology\" is not a single thing where all things improve at once; different things improve at different rates for different reasons.", "The budget for an asteroid-landing robot is hundreds of millions of dollars. By contrast, a company buying a vending machine is on a tight budget. They can't afford NASA-level engineering.", "I'm sure that they could create an amazing vending machine that could do all sorts of great things, but it wouldn't be worth the time, money, and effort. Also, NASA employs top qualified scientists and engineers to create technological marvels, while the food and beverage industry hires whoever can mass produce a simple, reliable, relatively foolproof machine on the cheap." ], "score": [ 24, 11, 5, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
5yyyim
What is the difference between TVs that are the same size and quality? For example: A 720p 32 inch led tv from Hisense manufactured in 2011 vs a 720p 32 inch led tv from Samsung manufactured in 2016.
Technology
explainlikeimfive
{ "a_id": [ "detyafw", "detz3kp", "deuqm3r", "deu3jjh", "deua0n5" ], "text": [ "Some parameters that might be different: * Brightness. A TV that's not bright enough will be difficult to watch in brightly lit environments, like a sunlit room. * Contrast. The difference between the brightest and darkest color the TV can display. Higher contrast means deeper blacks and brighter whites. * Color reproduction. How accurately the TV shows colors. * Transition time. The time it takes a pixel on the TV to switch colors. Too slow and fast movement will look blurry. * Input lag. How long it takes between a signal being received by the TV to actually being displayed. It's relevant if you play games on the TV, the lower the input lag the more responsive the game will feel. * Power use and reliability. * Features. A newer, fancier TV is probably going to have a lot extra features like automatic brightness control or connecting to the internet to watch YouTube or Netflix on its own without having to attach a computer. There's probably more but that's what I can think of right now.", "A major improvement is the elimination of burn-in. If you have an older model and leave the same image on the screen, you'll see the remnants of that image once you switch to a different picture. This is the reason for \"screen savers\" on a computer, to avoid burning your stationary desktop icons onto your screen for hours. This was fixed quickly-ish on screens intended for computer use, but it took much longer to address for TVs since it was less important. If you use an older TV for a computer screen or watch a program like news with a stationary display in one part of the screen, you'll have problems.", "Something no-one else has mentioned is materials/components, and quality standards. You can purchase cheaper materials and components, and make a tv that will work just as well as one built with much more expensive parts. But a cheap tv sacrifices reliability, lifetime, and predictability in quality standards. And two tv's built right next to each other can have unpredictable quality issues, like dead pixels. So they test the tv's, and the ones that are marginal are either rebranded and sold for a lower price, have features disabled and are sold as a lower model, or saved for events like black Friday.", "One I've noticed is the amount of inputs and outputs on the back. With more expensive tv's you have options to hook up many different devices including older devices, or pass audio or video through the tv to another device.", "Some people have mentioned the definite advantages but there can also be downsides to technology maturing. As prices come down on established technology some companies have to cut corners to remain profitable. This can lead to subtle changes in the quality of materials used in a product. As the machines that make the products age they also become less accurate and harder to maintain in top condition. The higher end manufacturers will change their machines every X years, these machines are then either moved to production facilities in lower wage countries, that often have less well trained/experienced staff, or sell them to lower tier manufactures, also often in lower wage countries. This can result in a price drop for consumers but also in products with a worse fit/finish and a drop in overall quality." ], "score": [ 151, 17, 5, 4, 3 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5yz345
If nano-SIMs work perfectly fine and the material surrounding it previously wasn't a functional part, why were mini-SIMs and micro-SIMs ever used to begin with?
Technology
explainlikeimfive
{ "a_id": [ "detzapd" ], "text": [ "The actual chips of older sim cards were slightly bigger than the chips on today's nano sims. However, the contact points of nano sims were positioned in such an arrangement that they would also touch the pins of older, larger readers. A millimeter or so of the plastic around today's nano sim chips were actually part of the chip on older sim cards. They've gradually (well, in one step) gotten smaller while keeping the important contact points in the same relative position Back in the day, phones weren't as compact on the inside as they are now. There was more headroom for components, so it didn't matter that the sim card was the size it was. As phones got more powerful while at the same time thinner and thinner, they shrunk the sim cards mostly because a smaller sim card would require a smaller reader, and the sim card readers in phones take a lot more space than the actual sim card does. Saving internal space is the main reason why manufacturers want to shrink connectivity interfaces, both when it comes to charging ports, headphone jacks, memory card readers and sim readers (also the main reason some manufacturers completely ditch some connectors). Changing from mini sim to nano sim would free up space that they could use to fit a bigger battery, for example. Fitting a bigger battery wasn't as important in 2001, when phones had batteries that would already last literally two weeks." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5yzjhz
If I keep my smartphone plugged in all the time, is that better for the battery life cycle than if I use it until it's 5% and then charge it back up? Will keeping it plugged in all the time extend the life of my smartphone?
Technology
explainlikeimfive
{ "a_id": [ "deu3ams", "deu5kej", "deu82lr", "deuaj0a", "deu4avu", "deu6s83" ], "text": [ "That's true, deep discharges are most damaging to a lithium-ion battery. Repeatedly running it low to almost zero and then charging to full is the quickest way to wear it out. If it's convenient to you, you can keep your phone plugged in all the time. When it's connected, the phone is running of the mains and the battery isn't being used at all, which will extend its life. (Storing it fully charged isn't ideal either, but it's less damaging than deep cycling it all the time.) edit: since there's some contention: URL_0 > Similar to a mechanical device that wears out faster with heavy use, the depth of discharge (DoD) determines the cycle count of the battery. The smaller the discharge (low DoD), the longer the battery will last. If at all possible, avoid full discharges and charge the battery more often between uses. Partial discharge on Li-ion is fine. There is no memory and the battery does not need periodic full discharge cycles to prolong life.", "Lithium polymer batteries, as found in most cell phones, are designed to be discharged no lower than about 3 volts, as this will destroy the battery. The phone's charging circuitry is designed to prevent overcharging, generally no higher than 4.2 volts, and to prevent over discharging to below 3 volts. If a LiPo battery is never discharged below 3 volts, which the phone's circuitry prevents, it will have a normal life span. Bottom line: it doesn't make any difference if it is plugged in or not.", "> Cycling between 85 and 25 percent provides a longer service life than charging to 100 percent and discharging to 50 percent. The smallest capacity loss is attained by charging Li-ion to 75 percent and discharging to 65 percent. URL_0", "I went to college for renewable energy engineering and electrical engineering. It has to do with the deformation and degradation of the cathode (I think, whichever between the two is made of the lithium between the anode and cathode). The more you discharge a Li battery, the more exposed the cathode becomes to deformation and degradation. The same is true if you overcharge the battery, as it all comes down to the electrons being stripped and passed through the electrolyte. The longer the cathode is stripped of electrons, the longer that portion of the cathode is exposed to deformation and corrosion, and eventually that portion can no longer regain a free electron to pass across to the anode. This is what causes the loss of life in your battery. Keeping this in mind, you don't want to always re-use the same portion of the cathode, so keeping it between 95-100% is not ideal. Hence the desire to keep the battery between 60% and 90% charge, to vary the portion of the cathode utilized, without degrading the cathode entirely. Of course eventually, all batteries succumb to the electrochemical degradation that is ultimately unavoidable.", "I've read somewhere that the best is if you discharge the battery to 60-70% before recharging, that way you maximize life cycles. Can't give you any sources because this was a while ago tho.", "This really depends on how you plan on using your phone. Leaving it plugged in all the time and playing games and watching videos causes the battery to heat up, and Lithium Ion batteries are terrible when it comes to managing heat. (In other words, heat can shorten your battery life too). What I would suggest you do is that you keep your phone battery in between 60-80 percent, if you can. 
It really won't make that much of a noticeable difference. It is true that a Lithium-ion battery has about 600 charges in it, but will most likely be increased because even though your phone says your battery is dead, there is still some charge in it." ], "score": [ 109, 30, 21, 5, 4, 4 ], "text_urls": [ [ "http://batteryuniversity.com/learn/article/how_to_prolong_lithium_based_batteries" ], [], [ "http://batteryuniversity.com/learn/article/how_to_prolong_lithium_based_batteries" ], [], [], [] ] }
[ "url" ]
[ "url" ]
5yzrpq
Why do commercials sometimes only appear for a split second before being replaced by another one?
Technology
explainlikeimfive
{ "a_id": [ "deu57if" ], "text": [ "You mean on TV, right? The times I've experienced this was because my TV service provider aired their own commercials instead of those the channel had been paid for/those the channel had made. There was a little delay, so some commercials were only shown for a split second." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5z00q2
How do they get recordings of all the words for electronic reading voices?
Does someone literally read the entire dictionary and then some, or how does it work?
Technology
explainlikeimfive
{ "a_id": [ "deu8zl7" ], "text": [ "Yes, somebody literally reads a dictionary of words. But the dictionary is cut down to include only the ___ most common words and is tailored to the usage they expect you to need. For words that aren't in the main dictionary the voice actor records syllable sounds (phonetic dictionary) so that uncommon words can be read (sometimes poorly). The TTS program may include a secondary text-only dictionary of less common words, in a notation like [IPA]( URL_2 ). Very rare words might fall outside of all dictionaries and the TTS program will try to make a best guess at how the word sounds. This almost always sounds terrible, but it's better than nothing. I've used the default Google voice built into Android, the Samsung one that comes with Galaxy S_, and Ivona (Amazon) Amy (UK). When using Google maps, the Android voice has the most complete and natural sound to most uncommon street names, getting things like \"Sepulveda Blvd\" correct, but Ivona Amy doesn't even come close. So you can tell that Google included many street names and non-English words into their voice dictionary and Ivona didn't. But [when reading an eBook]( URL_1 ), Ivona Amy sounds almost natural, while the Google voice sounds terribly flat and lifeless. Ivona Amy still stumbles pronouncing \"No.\" as \"number,\" but you can tell instantly that it's much better suited for natural language than the Android, Samsung, and even Kindle options. But even if every single word were recorded there'd still be a problem with word stress and phrase pronunciation. \"To\" and \"two\" might normally be pronounced the same, but when saying the phrase \"nine to five\" many people would pronounce it more like \"nine tuh five,\" which can't be easily reproduced by a voice reader. --- edit: What /u/PirateBaeDotSE describes is only a basic level of TTS (phonetic dictionary). Doing it that way either ignores intonation or does not properly capture natural sounding speech. Compare [Ivona Amy]( URL_0 ) to the acapela link and you can hear the huge difference." ], "score": [ 27 ], "text_urls": [ [ "http://www.ttsforaccessibility.com/", "http://teleread.com/setting-up-moon-reader-for-text-to-speech-using-ivona/", "https://www.wikiwand.com/en/International_Phonetic_Alphabet" ] ] }
[ "url" ]
[ "url" ]
5z0979
How does cracking a videogame work and why is Denuvo so difficult?
Technology
explainlikeimfive
{ "a_id": [ "deu9uxk" ], "text": [ "Cracking a game works like this: - There is a code in the game that checks if the copy is legitimate, for example it may send the CD key you have entered during installation to a server via the Internet for it to check for its legitimacy. - The cracker tries to find this code with certain tools that allow to execute the program step by step, look into its memory etc. - The cracker finds the code and gets rid of it, for example by inserting a skip instruction before it. Denuvo claimed that it gets rid of this problem by constantly encrypting and decrypting itself, so that it is hard for the cracker to find the piece of code, because it is changing constantly. I admit I don't know the exact details and I'm also very positive very few people do, because revealing the technology details would help its downfall. Encryption can be safe even if its technology is known as long as the encrypting keys are secret - this case is different though, as the data get decrypted on your computer and so they can be captured. So my best bet is that the security of Denuvo is actually based on obscurity - hiding the details of the system and making it unclear and chaotic looking. This however fails once the system gets understood by the crackers, which possibly already happened, as Resident Evil 7 has been cracked in only 5 days." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5z0gn5
Why do game developers often say that it is much harder to make computer games for Mac than for Windows?
Technology
explainlikeimfive
{ "a_id": [ "deuauy3", "deucms9", "deufgaa" ], "text": [ "When developing a game engine, which is the 'core' of the game, developers will use a 3D graphics API which means they don't have to worry about a lot of the details of exactly how computer graphics work, or about the difference between different computers. Microsoft have put a lot of work into making their API, DirectX, nice for game engine developers to work with but DirectX is only on Windows. OpenGL is another API available on Mac and Linux as well as on Windows but it's a bit harder to use and sometimes a bit slower, so developers tend to favour DirectX. Many game developers don't write their own engines though, they use an off-the-shelf one such as Unreal Engine or Unity. In that situation the game developer is somewhat reliant on the game engine developer to provide support for different operating systems. Even when the engine does, there's still a bit of extra work involved in making the game available on Mac or Linux and many game developers don't consider it worth the small extra market.", "I don't have a lot of experience in development for MacOS X, but I spent some time working on games that ran on iOS (among other platforms) and had to use Macs for this. One of the aspects is tools. Visual Studio is something almost every game developer knows, but it is only available for Windows. MacOS X has XCode instead. Which is very good in some aspects (better auto-completion than VS, very good profiling tools), but has some drawbacks (crashes or hangs all the goddamned time for no apparent reason, has worse debugger tool than VS). Also, to create a signed program for MacOS X (i.e. one you can run without toggling the switch for untrusted sources (or whatever it is called) in Settings), you need to register as an Apple developer. I don't know the current situation, but it cost money before, which is, of course, not a problem for a big company, but might make a small indie team reconsider working on that platform. And signing process itself is a pain in the ass, because XCode sometimes randomly stops accepting your signing certificate, and you have to find out why (which is not an easy task and may take several hours). The points about DirectX and game engines mentioned by /u/cantab314 are also very valid.", "OpenGL drivers are somewhat more buggy, slower, unreliable and/or offer fewer features in OS X. Source: I develop cross-platform 3D software" ], "score": [ 13, 7, 5 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5z1is3
Why is it that when we see a screen through a camera, we see a grid on it, but through normal eyes we see what's on the screen seamlessly?
Technology
explainlikeimfive
{ "a_id": [ "deuwndi" ], "text": [ "That sounds like a moire pattern. Both the screen and the camera sensor are made up of grids and it is next to impossible for them to line up perfectly and the resulting interference is the grid you see. Your eyes do not have a grid. URL_0" ], "score": [ 10 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Moir%C3%A9_pattern" ] ] }
[ "url" ]
[ "url" ]
5z1lzg
If WiFi is an electromagnetic wave travelling at the speed of light, and it's digital, why does it slow down at distances from the router?
Technology
explainlikeimfive
{ "a_id": [ "deuk1gn" ], "text": [ "It doesn't. The radio waves are getting to you at the same speed they always are. However, they might not *all* be reaching you because there are obstacles in the way. Furthermore, the *protocol* that is transmitted over those radio waves is limited by many factors, including computation speed of the devices dealing with it." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5z2vw4
How is it possible for the Internet Archive's Wayback Machine to store copies of so many websites?
I know they have several datacenters, but as of 2014 they had stored copies of over 400 billion web pages. How can one organization store such a mind-boggling amount of data?
Technology
explainlikeimfive
{ "a_id": [ "deuveex", "deuv2ph", "deuxqjw" ], "text": [ "The simple answer is, they don't store everything; When you go to a website a lot of things are happening and there is a lot of information and what not moving around behind the scenes, but what you see is frankly a relatively small amount of data. Archives work by taking basically pure text with some minor important formatting (HTML which is also not much text). This is the reason that you can't use these Archives to go back and download powerpoints, PDFs, etc.. because those files with their various format are very expensive when it comes to space on a disk. As an example I downloaded the source of this page (which includes ALOT of information an archive will strip out) and it came out to 72k...rounding for posterity's sake... a 1 Terabyte HD (what my computer has) could store this page 14M times... shrink data a bit, grow storage a bit and it becomes pretty possible. Then there is the ability to capture a \"diff\" between two pages which allows them to not have to duplicate all of the information on a page every time it changes. Instead they are able to just capture the changed text (a diff) and a timestamp that the change occurred.", "The files that make up most websites are tiny. I used to build sites and the actual programming just takes no real storage space to speak of. Pictures are another story but with compression their also pretty small.", "One thing to remember is that for years the internet used to consist of mostly text-based websites with some low res graphics, so these were easier to archive. With broadband, the size of crap on the internet grew." ], "score": [ 5, 3, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5z3ldm
Why do we sometimes feel our phone vibrate in our pocket, only to find out it never even vibrated?
Technology
explainlikeimfive
{ "a_id": [ "dev0v1g", "devhujb", "devmpah" ], "text": [ "[Phantom Phone Syndrome Wikipedia]( URL_0 ) is what you're looking for", "The best guess we have at the time is that there's a process called \"hypothesis guided search.\" The easiest way to explain it is that our brains are always making assumptions about things because they've happened so many times that it doesn't need to create the experience from scratch. Basically it takes a mental shortcut to something it already knows and understands. So where that comes in to play for phantom vibrations is that you feel something in your pocket. Could be something brushing up against your leg, could be the fabric settling a little bit, could be anything. Your brain has been conditioned over time to assume that sensation means a ringing phone, so it just takes a mental shortcut and that's what you feel. The idea has some support for it because generally you only get phantom vibrations when you are in a situation you would expect them. If you carry your phone in an unexpected pocket you don't get them, and if you're wearing something you don't normally carry your phone with you don't get them even if you have your phone with you. I never get phantom vibrations in a robe or my jacket pocket as I don't do it often enough for the mental shortcut to exist.", "My phone did this occasionally and it drove me crazy until I realized sometimes the wifi connection will hiccup and it vibrates briefly when it connects again, without any lasting notification" ], "score": [ 20, 15, 3 ], "text_urls": [ [ "https://en.m.wikipedia.org/wiki/Phantom_vibration_syndrome" ], [], [] ] }
[ "url" ]
[ "url" ]
5z40uc
How do search engines work? Where are they searching from?
Technology
explainlikeimfive
{ "a_id": [ "dev43ey" ], "text": [ "So first you need an index. Pick a website, and make a note of all the terms on that website. Then follow links from that site to other sites, and repeat. Eventually you have a mapping from potential search terms to pages that contain those terms. But you can't just pull randomly from this index when someone searches for 'dog' - you need some sort of ordering, because users only want the *best* results for 'dog'. One way of figuring this out is by looking at links. You assume that a really good site about dogs is probably going to be linked to by other good sites about dogs. So you first come with a basic scoring for every site that has 'dog' in it, maybe based just on the number of times the word appears, and then you propagate those scores to the sites that each site links to. So you have three sites initially scored as '5', say, but sites A and B both link to site C, so you update C's score to be 10 to reflect the fact that other dog related sites seem to think it's a good resource. You do this over and over and over again, pushing scores between connected sites, and eventually you've got a reasonable guess at what the best sites are. Of course actual search engines do a lot more, but in a very basic sense, that's the sort of thing that's happening." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5z5skh
How does Google Maps on my phone track my location even when set to airplane mode?
Noticed this today while walking about London, seeing the sights. My phone is set to airplane mode as I don't have cell service and my battery murders itself trying to find a signal that will never come. When I pulled out my phone to check for a WiFi signal, I noticed Maps was still open and it knew where we were, and as we walked, it tracked us. I was unable to use any features of the map, but it knew where we were. I was not connected to any WiFi, nor do I have any Bluetooth connections or cell service. So how?
Technology
explainlikeimfive
{ "a_id": [ "devgtj0", "devyarb", "deviuju" ], "text": [ "GPS navigation doesn't transmit any data at all. All it does is listen for signals from GPS satellites, and use that information internally to triangulate its own position. A data connection is only needed for downloading map data. Contrary to popular (mis) understanding, GPS satellites don't know where GPS receivers are, or that they even exist. GPS is a completely passive technology.", "How were you searching for a wifi signal if your phone was in airplane mode? Or did you turn off airplane mode? Google can get estimated location data without using GPS just by using wifi, even if you're not connected to anything. They use a large SSID database of public wifi locations and if your phone picks up one of those wifi locations, Google knows approximately where you are, without using the GPS chip at all.", "There's a lot of misconception in this thread on how GPS satellites work so I figured I'd make an ELI5 on it... In space, we have many satellites which beacon out two primarily crucial data points 1. their location 2. their time. The time between the satellites is synced so they. Your GPS enabled device receives these beacons then does a little trigonometry and determines its relative position based on the signals it has received from multiple GPS satellites. This is why GPS devices work without having any cellular service and they work well. Adding in the cellular service doesn't make it any more accurate, but can help speed up the initial location metrics." ], "score": [ 83, 4, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5z60cq
SCORM packages
Technology
explainlikeimfive
{ "a_id": [ "devm1rm" ], "text": [ "Oooo, SCORM! My daily job and life's torment is this. SCORM is a \"standard\" for defining the communication between an application running on a client computer (which is almost always something in a web browser that uses javascript) and a server. It's specific rules and calls are built around the idea of learning and on-demand courses and tests. It's designed to let you submit scores & pass/fail information back to the server, the server is able to provide information like learner name, etc. SCORM content is usually hosted on the server, and it gets there via a standard format: a zip file that contains an XML listing off which files are in the package, and which one to launch. In practice, SCORM is **really annoying** to work with and frankly, I personally recommend using authoring tools designed to pack things up automatically rather than write your own projects. Different learning management servers have a bad habit of implementing things a little differently, since the standard gives a bit too much wiggle room sometimes. Alternatively, the newer Tin Can API serves the same purpose but attempts to eliminate some of the weird stuff." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5z77f5
Why do some airplanes produce smoke trails and others don't?
Technology
explainlikeimfive
{ "a_id": [ "devt1a0" ], "text": [ "These are contrails and they have more to do with the weather conditions and the altitude than the type of airplane. Jet exhaust has water vapor, as well as some small impurities like sulfur. The impurities in the exhaust will form condensation sites for the water vapor to condence forming drops of water. If its cold enough or the aircraft is high enough (which also would result in cold temperatures), these water drops freeze producing the trails." ], "score": [ 8 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5z79ei
What causes 'bugs' in computer code over time?
I understand a little about computer programming, but this part has always made no sense to me. I get how if you introduce new programs or new code into an environment, you can get errors. But how come the same program, using the same code, can just 'go wrong' when it gets old? It's the same lines of code; how have things changed that drastically?
Technology
explainlikeimfive
{ "a_id": [ "devtezw", "devtz6j", "devzc79", "devtkc5" ], "text": [ "Code doesn't \"wear out\", so bugs don't \"appear\" after a while. What happens is one of two things: * The bug was always there, but you didn't have enough data to make it apparent. * The bug is a resource leak and the program finally ran long enough to exhaust some resource.", "Often it is because the problem only appeared with something new that's been added. The code might always have been buggy, but nobody knew it because the conditions for the bug to appear were never met. I'll give you an example from my work. We upgraded the drives in a database server from spinning disks to SSDs. Makes everything much faster. And one application kept crashing. WTF? Turns out that there was a bug in their code where it couldn't handle events well if they came in too fast. It never was a problem before because the database was a natural speed bump and it was impossible for the data to come in too fast. Once we upgraded the server, data started flowing faster and their program crashed. Only when that happened did they discover the bug and were able to take steps to correct it. This kind of thing happens all the time with complicated systems -- a piece of code that might have been running fine for years might stop working because of a related system.", "I can give you an example: People applying to universities, system requires last name to be longer than 2 characters... after 20 years school decides to accept international asian students, they have last names with 2 characters... keep in mind that if no asian student were to register, this would never be an issue. Most of the bugs are like this, usually appear when something new is added, or the system is used in a slightly different way that it was originally intended to. > But how come the same program, using the same code, can just 'go wrong' when it gets old? It's the same lines of code; how have things changed that drastically? in my example it worked fine for 20 years, since the regular users never had last names of less than 2 characters... however the restriction was always there, but no one noticed / was affected by it. If it aint broke dont fix it.", "Usually when this happens, it's that something changed in the operating environment. Could be that you logged in under a different user and that triggered it. Or maybe an OS update got installed. Or maybe Acrobat Reader updated itself. Or maybe your hard drive got some bad sectors. Or maybe you were running more programs and using more CPU, which exposed a latent race condition. Code runs the same way every time, assuming the *exact* same conditions. If one of those conditions changes, then bugs can appear." ], "score": [ 21, 14, 5, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
5z7p2m
Why does the picture of high frequency TVs look like a soap opera?
Technology
explainlikeimfive
{ "a_id": [ "dew610p", "dewj2ok" ], "text": [ "You could read [this article about the soap opera effect]( URL_0 ) but I'm going to eli5 this, too. Newer HD tvs have a pretty remarkable ability to transform shows that are shot in fewer frames per second (that's most shows) into more frames per second by predicting/inserting extra frames. A few important things to know: A) You can turn this effect off. See above link and lots of videos on YouTube. B) Some cinematographers & directors firmly believe that stuff shot in more frames per second is actually ideal/prettier. Avatar is a great example. It's rich and dense, but our brains are super not used to that effect. Yet. Might we someday have the bulk of our tv and cinema shot in more frames per second? Possibly. C) You're not crazy; lots of people hate the predictive technology. And now that you've seen it, you'll spot it everywhere in public tvs that have that setting turned on.", "The frame rate of the recorded or broadcast source media is almost always lower than that of the TV. 60 frames per second is as high as most source media goes, but a high frame rate TV wants to output 240 frames per second. The TV needs 180 more frames than the source media provided in order to output 240 frames in a second, so it electronically guesses and adds several made up output frames in between each frame from the source media. This electronic process tends to create a flattening effect that makes film look like video, or suggests fake-ness." ], "score": [ 4, 3 ], "text_urls": [ [ "https://www.cnet.com/news/what-is-the-soap-opera-effect/" ], [] ] }
[ "url" ]
[ "url" ]
5z7vyc
The internet of things
Technology
explainlikeimfive
{ "a_id": [ "devyuwm" ], "text": [ "In short: the internet of things is about putting a tiny computer and internet connection into everyday objects. This lets you do cool things like program your coffee maker to start brewing coffee before you wake up, or having your bed record how well you sleep and sending that information to the fitness app on your phone. There are many interesting, fun, and potentially revolutionary applications for this (imagine if food poisoning could be prevented by having packaging that detects contamination), but there are also some privacy concerns and issues with what happens if there's a power failure/internet outage/hacking and suddenly the smart locks on your door won't open or your carbon monoxide detector gets switched off without you knowing." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5z90uq
There are only 24 satellites for the GPS system. How do they handle signals from millions if not billions of devices communicating with them?
Technology
explainlikeimfive
{ "a_id": [ "dew88ze", "dewb39h", "dew87ps" ], "text": [ "GPS is only a one-way radio signal. The satellites broadcast radio signals out to GPS recievers but there's no communication back the other way. A GPS device can figure out where it is by passively listening to the signals from the satellites. If that device wants actual maps rather than coordinates, it needs to have them stored locally or have a network connection to download them.", "With GPS and other positioning systems (e.g. European Gallileo) there is no communication between the devices and the satelites. The saltelites just broadcast a very precise time signal down to earth. The GPS recievers recieve the time signals from multiple satelites. Depending on how far away a satelite is from you, its signal arrives a tiny little bit sooner or later than the other signals. Form this minute time differences the GPS reciever can than calculate your position on the globe.", "Devices don't communicate with the satellites. The satellites broadcast signals and the devices pick them up. The devices don't send anything to the satellites." ], "score": [ 12, 4, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5z9qo6
How does the new graphics card driver optimize the card for the specific game that I'm playing? How is a graphics card customizable?
Technology
explainlikeimfive
{ "a_id": [ "dewg8mx" ], "text": [ "Three things that I can think of. First, a graphics driver is very complicated and can do many things in several different ways. It tries to guess about the most efficient way to accomplish the things a game asks it for, and it's usually correct, but sometimes it may guess incorrectly and slow down the graphics card. If a game is popular, Nvidia and AMD engineers will analyze how it uses their graphics cards and if there are any inefficiencies to address. If there are, they may include special instructions in the driver on how to handle that specific game. Second, they often provide replacement shaders. A shader is a small program that is uploaded to the graphics card and runs there, doing some sort of processing task (there are many different types of shaders). Video card people will look at the shaders of popular games and see if there are any ways they can be improved to run faster, maybe because the game's developers didn't do a perfect job or maybe because insider knowledge about the video card can be used to squeeze out more performance. Then they'll write a new shader and include it in the driver. When the driver is given the original shader by the game, it'll recognize it and use its own replacement instead. Third, there may be actual bugs in the driver that cause it to run slowly under some specific circumstances, and a certain game is running into that bug. Fixing it isn't really a specific improvement to that game, but it could be described as that, because it was discovered through that game and if it's an obscure bug that game might be the only one affected." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zast8
Why can't the realistic and bright graphics of films such as "Avatar" be replicated in video games?
Technology
explainlikeimfive
{ "a_id": [ "dewnjdv", "dewnedj" ], "text": [ "Different needs, different means. When playing a video game, you have hardware ranging from a $300 console to a $2000+ computer rendering (usually) either 30 or 60 frames per second in real time. Games are designed around those limitations, so that the graphics don't overwhelm the hardware and that framerate can be maintained. When rendering movie CGI, you have no hardware constraints and no time constraints. You need 24 frames per second, but you don't need them in real time. You can spend hours rendering a single frame, because once you've done it once, you can just save it and turn all the frames into a video file later. On top of that, you can have much more expensive hardware working on it. We're talking machine costing upwards of $10,000, and several of them working on a single movie.", "Processing power. When rendering a movie, you can have a whole cluster of computers working on it, and spend hours to render each frame of the video. A video game has to run on the average customer's computer, and do it at 30-60 fps per second." ], "score": [ 9, 6 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5zatbl
Why do people get mad when their phones aren't getting the latest Android update? I assume most people just use their phone for web surfing and texting, and those functions feel pretty similar on every recent Android version.
Technology
explainlikeimfive
{ "a_id": [ "dewo7s6" ], "text": [ "Because an outdated OS version is a potential threat to security. If you're no longer receiving patches, you can be targeted by new exploits. People are essentially blackmailed to buy a new phone even when the old phone's hardware is fully functional. There's nothing except high end games that an old S4, or even S3 can't do today, but using one is crazy because they have so many security holes. Buy new phone, or risk getting hacked. At least that's the reason for security-minded people." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zay04
how big or strong does a battery need to be before it can harm an average person if dropped in a tub of water with them? Could a cellphone or iPad hurt them if not plugged in?
Technology
explainlikeimfive
{ "a_id": [ "dewog1x" ], "text": [ "The average phone battery is around 4 volts. You won't actually feel anything up to 40ish volts. That means you'll have to drop four or more car batteries in a bathtub with you to even begin to feel a shock. To put that into perspective. It's much easier to kill a person with a car battery by bashing their head in with the damn thing, then it is to kill them by connecting it to a person's body." ], "score": [ 9 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zbpoc
With a low education ranking, how does the U.S. have the largest tech and finance sector?
Most rankings I see put the U.S. very far below most developed countries in education. Therefore, I'm curious how our tech and finance industries fare so well. Is it because college level education isn't factored into all these rankings? Here is an example ranking: URL_0
Technology
explainlikeimfive
{ "a_id": [ "dewvc9h", "dewtibp", "dewy6sc" ], "text": [ "The United States is enormous, so even if the average is not as good as other countries' we are still talking about millions of people at the upper edges. Secondly, most educational rankings are for 18 and under education. The US has an extremely prestigious university system. Finally, the US is generally ok with educated immigrants coming on what are known as H1B visas. Its therefore possible to take talent from other nations to enrich the American industries.", "The first point to remember is that the majority of the US's tech and finance power is a result of a legacy of success. Plus, as the inventors of a great many major computer technologies, the US had a major advantage in development. That being said, the education statistics do not tell the whole story. *Obligatory: I am not a racist, I make no conclusions, I'm merely reporting the federal statistics on the matter* If you look at the federal statistics that break down the numbers according to race, you see that there's huge disparities. On average, American Whites and Asians perform as well, or better, than kids in other developed countries. Blacks and Hispanics however, perform significantly worse. There's a ton of theories for why this is so, but the one certain factor is the language barrier. A large percentage of Blacks and Hispanics come to the US with very poor English skills, yet the vast majority of school districts make no exceptions for this. It doesn't matter if you don't know a word of English, if you're the right age to be a Junior in highschool, you get put in the same classes as everyone else. In most cases, that means Chemistry. I don't know about you, but I had a hard enough time with chemistry as a native English speaker. It would be nigh impossible to learn in another language. Plus, even people who can speak English well might have poor reading comprehension, which puts them at a major disadvantage in standardized testing. Some places have tests in Spanish, but many of these kids have very poor educations in Spanish too, so it doesn't help that much. The real problem with the American education system is that these issues aren't properly dealt with, and that's killing our future. But if you're lucky enough to be a native English speaker, the education system isn't half bad in most places. Edit: Here's a [Source]( URL_0 ) for all those asking. This isn't my original source for this info, but it's the best I can find on my phone during my work break lol.", "One of the odd things about the US, in general, is that it tends to be a VERY wide spectrum. Most countries tend to be very focused but the US offers a wider range of choices. The easiest place to see that is with Health Care. I'm Canadian and have a healthcare issue. I can go to the doctor who sends me to a specialist in the hospital to get it checked out. I'm put in the queue and receive treatment based on my need. It happens this way regardless of me own income or financial resources. The poorest people see the same species and go to the same hospitals as everyone else. So a common thing in Canada, for the super wealthy, is to go to the US for care. See, in the US there are expensive hospitals that offer luxury services and then there are cheap hospitals who service mostly the poor. So if you're able to pay a premium for the best care, you can go to a US hospital where all the beds are king sized, you get a personal nurse and eat caviar with every meal. 
That kind of place does not exist in Canada, in Canada everyone gets basically the same care and no one eats caviar. Same thing with education. If you are rich you can send your child to some of the best private schools on the planet. If you live in a wealthy area the public schools are still amazingly good. But the people who are not well off, their education SUCKS. In most developed countries, the education you get in a public school is basically the same regardless of where you live. Where I live, each province (aka state) funds the public schools. Therefore regardless of where you live within the province, your public school receives the same funding per student as anywhere else. So the public school in the wealthy suburb gets the same funding as the public school in the inner city. This is NOT the case in the US. Each school district gets funding from its local property tax. So a poor neighborhood with low property values pays little property tax and therefore has a poor school. A wealthy neighborhood with million dollar homes pays LOTS of property tax and therefore has good schools. Money is not the only issue. In addition, poor neighborhoods tend to have higher instances of students that lack home support. It's hard to do well in school when your mom is a drug addict and your dad is in prison and you have the responsibility of taking care of your 5-year-old little brother. Even more so, if your parents work 2 minimum wage jobs, they're not around much to help with homework. Less home support means the school needs MORE funding, but those kinds of areas actually get less funding! So what happens in the US is that the bad schools are REALLY fucking bad. The good schools are fine and the best schools are fucking amazing. In most other countries, all the public schools provide a basic education that meets a minimum standard of fineness." ], "score": [ 4, 3, 3 ], "text_urls": [ [], [ "https://www.insidehighered.com/news/2015/09/03/sat-scores-drop-and-racial-gaps-remain-large" ], [] ] }
[ "url" ]
[ "url" ]
5zc25y
If sites like facebook can see my browsing history for targeted advertising, what stops them from seeing my banking data?
Technology
explainlikeimfive
{ "a_id": [ "dewwr77", "dewvia7" ], "text": [ "Remember, there are 2 sides to every web browsing session. There's your computer then there are the computers you are connecting to. Facebook IS tracking you, but they're not doing so from your computer they're doing it from the other side of the transaction. What's happening here is this. Facebook gives your browser a token called a cookie. At any point during your browsing, the website you are looking at can ask \"do you have a facebook token? May I see it\" and your browser will send it the cookie. The website then turns around and tells facebook \"yo, Miliean was here\". Effectively facebook is having the websites you visit inform on you. That's why when you shop online for an item, you all of a sudden start getting facebook advertisements for that item. It's not facebook seeing what you are browsing, it's the websites reporting to facebook that you were there. Now, not all websites share that information. But many web pages have agreements with facebook to request the facebook cookie. Any site with a facebook like button, any site that lets you log in with facebook and even more sights that use certain advertiser networks.", "Encryption. Sites like facebook, etc, share anonymous data with vendors, like Google Analytics, for targeted advertising. These are things like which pages you viewed. These vendors will follow page views and links to determine what it is you were actually viewing, like maybe you were checking out a backpack. The next time you load a site, like facebook, the site will say - \"I have /u/732 here, I need to display some ads as well - what has been viewed recently?\" The vendor will reply with a list of things it thinks is relevant to your recent viewings. So back to encryption. Encryption works by masking the data it is containing. When you sign into your bank, targeted advertising (if the site shares it) would see things like \"Bank of America's home page\" and \"My Accounts\", but it has no knowledge of what content is actually being displayed because it is all encrypted." ], "score": [ 5, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5zc4i9
Why is it that even when I'm close to my router, I never have wifi speeds close to what I'm paying for on my internet plan?
Technology
explainlikeimfive
{ "a_id": [ "dewwekf", "deww2eg", "dewwr5m", "dex1ajy", "dex1n6n" ], "text": [ "lets see 1) Megabit vs Megabyte, 1 megabyte is 8 megabit, so if you have 10Mbps from your telco, you probobly have 1.25MBps 2) Protocol overhead, the data dosnt go un a raw format, its usualy packaged in a protocol called TCP/IP which requires envalopes called \"packets\" with their own structure and inner detail, this usualy can take between 2 and 10% of your bandwidth. 3) Wifi is very dependent on protocol used (A. B. G, N, AC...), interferance from local sources, sometimes some brands hate other brands and things like this. so for a perfect connection you would have to be on a \"clean channel\" (meaning no interference or anyone else using it), your antenna and equipment be 100% operational and your reciving device as well, using the same protocol and with a wifi standard that is able to provide speeds which are superior to the speed you get from a cabled connection. 4) if you have a DSL connection, your distance from the nearest DSL exchange device where your line is plugged in (a DSLAM) will affect the connection, around 2/3 of a mile gives you 100% but from there you start to drop off, i use to live around 2 miles from the exchange and had around 45db of attunuation, my 20mbps ADSL2+ would end up being a 6mbps DSL link.", "A common point of confusion for people when talking about internet speeds is that they are usually quoted in \"bits\" while the computer measures in \"bytes\". A bit is a single unit of binary information which is either a zero or a one. A byte is a standard functional unit of information composed of eight bits. As a result the speed the computer will display will be 1/8th of that which your internet provider would be advertising. Obviously the provider prefers to advertise the bigger number and the only way you can tell the difference is by checking the units.", "You could be faced with a multitude of reasons why. Do you get the speed you pay for when your computer is plugged in directly to the router? If that's the case, you need a better router that can handle the speed. If you don't get the speed you pay for when directly plugged into the router, but you get the speed you pay for when directly plugged into the modem, you still have a bad router. If you still don't get the speed you pay for when you're plugged directly into the modem, it's likely the fault of your ISP. Maybe, if your laptop/phone/etc is older, your wireless card in the machine isn't up to par to match the speeds (ac standard should allow you to top out). There are a lot of factors going into this. I have gigabit and get the full tap through my router, but I also have a very nice router and my wireless devices can handle speeds that high. Maybe the situation for you has a broken link", "There can be a couple of reasons. 1) You're confusing mbps and MB/s. If you have 150mbps service, your maximum speed is going to be 18.75MB/s. 2) Your router isn't capable of transmitting the full speed. If you have a B band router, the max speed is 11mbps. If you have a G band router, your max speed is 53mbps. Wireless N is 600-900mbps max. Wireless AC is up to 5300mbps. 3) Your device that's on wireless isn't capable of receiving the full speed (see #2 above for speed limitations). 4) There is signal degradation between the router and your device, preventing the top speed. 
5) Add in switches, hubs, cable lengths, wall and floor composition, data reflection, and other nearby wifi transmitters using the same channel range, and you can see major variances.", "I think the one thing that many responses are missing is the key phrase \"I never have WIFI speeds...\" Wifi is sent over radio waves to your computer, which is the key to why you may be having problems. If many routers are nearby and on the same frequency, or you or a neighbor are using a microwave, data transmission can be very spotty when sent over radio. This is especially bad if you are in an apartment setting, where 8+ people could all have their own wifi connections and be using microwaves at various times. And that is only in one building, nearby buildings could also cause interference! Even when you are feet from your router, you could still be receiving major interference from the many sources nearby. If many devices are trying to receive data from the same signal, you will have similar issues. For best speed over wifi, a single device needs to be connected to the router. So that's one of the major things, but the next major thing is that your router may be using a older transmission format which has a lower capped speed. Wireless-G has a max throughput of only 54 mbps (mega bits per second). These speeds are only reached if the router is practically in isolation with no other signals interfering. Wireless-N can a throughput of 600 - 900 mbps. So in order to receive better speed from your router, using N will help tremendously. In addition, a new format was recently developed called Wireless-AC, which is even faster and more reliable. The takeaway from this is that wireless is a very variable (and sometimes unreliable) method of delivering internet. If you want max speed, you need a wired connection directly to your router. Only then can you really say that you are not receiving the speeds you are paying for. Speeds from internet companies are assuming you are using a wired direct connection to the modem. Its also important to note that most internet service providers do not guarantee the speeds you are paying for, but that is another topic entirely." ], "score": [ 117, 48, 16, 5, 4 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5ze33d
how do websites check if you have a valid email without you needing to press the submit button first
Technology
explainlikeimfive
{ "a_id": [ "dexcc8z" ], "text": [ "They check for an @ symbol and a domain extension such as .com or .edu It doesn't normally do any test to check if the address is reachable etc" ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zeeb8
Do scientists choose exactly where a rover will land or do they just land anywhere that was suitable?
For example, when we planned a moon landing, was a landing site identified before they left? Or did they land anywhere that was suitable/easiest?
Technology
explainlikeimfive
{ "a_id": [ "dexeztq" ], "text": [ "They plan the landing sites well in advanced. It is determined by terrain (will it be a safe landing), different features (what looks interesting to study), and by past landings (they want to spread out so they can study as much ground as possible)" ], "score": [ 11 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zeexb
Why do people move fast in the videos recorded on old, black and white cameras?
Is it something to do with the cameras?
Technology
explainlikeimfive
{ "a_id": [ "dexf9e6" ], "text": [ "Commercial movie cameras were literally hand-cranked affairs, and number of frames exposed per second depended on how steadily the camera operator could crank. Also, film was less sensitive to light than later on, so most cameras exposed fewer frames. Home cameras, when they came out, generally used spring mechanisms which were similarly inconsistent and slow. Since we're not accustomed to that number of frames-per-second, even when the film is adjusted to be speed-correct to life from moment-to-moment it seems jerky and fast to us. Interestingly, the same thing happens on the other side of the equation. If you've seen a newer film shot in 60 frames per second instead of the 24 frames per second we've had for the last 40+ years, it seems too fast and herky-jerky because we're not used to motion that smooth in a film. But hey, [it's more cinematic](/r/PCMasterRace)." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zemvl
Is there a difference between microwaves used for mobile networks and microwaves in microwave ovens?
Technology
explainlikeimfive
{ "a_id": [ "dexi55s" ], "text": [ "To make this a lil easier to understand, let's use another more familar form of electromagnetic energy: visible light. Same basic principles apply! Using your cell phone and its towers to communicate via microwaves is like getting a message from someone who's flashing a light on a hilltop far away. His light doesn't have to be exceptionally bright or intense, and the amount of light that *actually* reaches you is extremely low, most of it dissipates out into the environment. Which is fine, As long as you can see it flash on and off you can get the message. Cooking with microwaves is kinda like cooking with an EZ bake oven. (If you aren't a girl or didn't have a little sister growing up, EZ bake ovens use a light bulb right next to a little pan to cook stuff.) We're putting a high powered source of energy inside a closed chamber, right next to the food. Instead of dissipating out, all this energy bounces around and is absorbed by your snack, creating heat. So yeah, they're the same kind of thing, but being used in very different ways in very different amounts of energy." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zf3ri
If the human lifespan were somehow greatly extended, how would we deal with the ensuing increase in population?
I always hear futurists go on excitedly about "getting closer to immortality" but doesn't the world have too many people already?
Technology
explainlikeimfive
{ "a_id": [ "dexl6w6", "dexr9mg" ], "text": [ "The world is certainly overpopulated for the resources we consume. If the average person were to significantly reduce their footprint, then the globe could handle the current population and much more (potentially). That being said, if we got close to immortality i think you would find, in general, lower birth rates. Many people would wait much later in life to have children (something that has already been happening in relative correlation with life expectancy) and therefore the effective population wouldn't be much effected by the change in expectancy (though it would certainly still increase)", "So there are two different consequences of an aging population and immortality. First is environmental, more people being born and fewer people dying means more people are on the planet consuming resources (both finite like oil and renewable like food and agriculture). Just because something is renewable doesn't mean it's sustainable. Renewable resources can be drained faster than they are replenished, and will eventually be depleted. We're not living sustainably now, and it would obviously be harder to be sustainable with even more people on the planet. Malthus hypothesized that the world would reach and exceed its carrying capacity, and the human population would plummet down to a fraction of its current size. That never happened, what we've seen is that more humans on the planet leads to more innovation and develooment, and rather than running into the limit of carrying capacity, we've simply extended it through technological advancement and innovation. Think GMOs increasing crop yields and reducing food spoilage. So a futurist might hypothesize that we'll all be eating nutritious, energy dense labgrown algaes or whatever that will allow more humans to thrive using less space and resources. Then there's the economic aspect. People are born and are supported by their parents and society for the first 20+ years of their lives. They work for 40 to 50 years, directly and indirectly supporting the younger and older population until they retire. Once that happens, they start to become a drain on society, so as we extend life expectancy we're making it more expensive for the working population, as there are proportionally fewer people supporting a growing elderly population. The problem is that the elderly generally *can't* work and produce the same as a younger adult. Physical and cognitive functions start to degrade by about 60 and science and medicine haven't been able to resolve those issues yet. We can only cure and treat diseases to improve and extend life, but there are limits to what we can restore. If, say, we ever do reach a point where we can dramatically slow down, freeze, or even reverse aging, then we could extend the working/productive lives of people and reduce or eliminate social security, medicare, or other government sponsored entitlement programs for retirees." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5zf7yf
how can my smoke detector not be set off by my cigarette or fireplace smoke, yet go off the moment my toast starts burning?
Not sure this has been asked yet. If it has, link me to the post? Reddit's search function is weird...
Technology
explainlikeimfive
{ "a_id": [ "dexpa1b", "dexp6gp", "dexq6rk" ], "text": [ "Smoke detectors work by having a radioactive substance and a radiation detector inside. The detector keeps getting the emissions from the substance, and this keeps a circuit open. Whenever enough particles that can block the radiation get between the emitter and the sensor, the circuit closes and the alarm goes off. Fireplaces and cigarettes actually burn pretty cleanly, meaning that the smoke they create is made up of very fine particulate matter that the radiation can get through, unless you've got enough present to be difficult to see through. Burned toast smoke, on the other hand, creates larger airborne creosote-like objects that can easily block the radiation. This is due to the oxidation reaction being much less efficient, resulting in larger particulate matter being released into the air as smoke.", "There are two technologies currently being used in smoke alarms: ionizing and photoelectric detection. Almost all smoke alarms for sale use either one or both of these technologies. **Photoelectric:** A photoelectric fire detector detects if there is any visible smoke between a light emitter and a receiver. If some particles or a substance breaks the light beam the alarm will be triggered **Ionizing:** These smoke detectors use a little bit of a radioactive element that produces radiation. There is a sensor similar to the light sensor that senses this radiation. If any substance enters that absorbs the alpha particles the alarm will be triggered. These different kind of alarms respond differently to different kinds of fires and smoke.", "Thankyou both for your replies! Now it all makes sense..." ], "score": [ 21, 5, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5zik62
Why does Moore's Law exist?
I can get from a quick google search that it's the number of transistors on processing chips doubling every year because of how small we can make them (meaning greater computer power but not necessarily doubling speed). I want to know why such a seemingly arbitrary law holds so true. Is the co-founder of Intel who created it holding back and purposely only putting out chips that follow this rule?
Technology
explainlikeimfive
{ "a_id": [ "deyd2u4" ], "text": [ "Calling it a law is a bit of a disservice. It was actually an observation made by Moore. The distinction is important. As for your question, AFAIK it is mostly due to the progress of manufacturing and production of chips, rather than any willingness to prevent progress. It is expected that Moore's law will break down once we reach the 7nm scale, as beyond this point Quantum Tunnelling starts to become a problem. Quantum tunnelling is when the particle \"jumps\" the barrier (not my forte, so can't explain it properly). We are currently 4 generations away from that problem, i.e. we use 14nm commonly, 10nm is in development, 7nm are in development, and 5nm are predicted after. Without moving to quantum computing there is the likelihood that we will stall out in the improvement of processor speeds while reducing their size. Edit: I would expect this stall to happen in the late 2020s, given the rate of progression from development to manufacturing." ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zikp5
What does audio mastering exactly do to make music sound better on every sound system?
How do they know which frequencies they need to boost or cut in order to get a well balanced result?
Technology
explainlikeimfive
{ "a_id": [ "deyeozr" ], "text": [ "Audio is first recorded, processed and mixed. Once the audio mixer has done her/his job; they will give the mixed audio files to the mastering engineer. Well mastered audio should sound good on many different systems. So the one doing the mastering will listen on many different systems to see how it sounds. Most humans can tell if a song sounds bad, a mastering engineer needs to know what they can do to make a bad sounding song sound good. As an example: A band recorded a new song at a home studio. The studio does not absorb some audio frequencies well. These frequencies echo around the room a bit too long (this may be called room reverb). Each track (vocals, guitar etc) will have a bit too much of this those frequencies due to the room reverb. Listening to each track individually, there may be little or no problem. The problem may become more noticeable when all the tracks are mixed together. The engineer may decide that those particular frequencies should be reduced to increase the clarity of the overall mix. Another part of the mastering process (particularly pop songs) is to make the music sound loud. People like loud music" ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5ziruv
Why do certain TV shows use a different director for every episode?
Hi ELI5! I've noticed that in certain shows, they use a different director for every episode. Why would this be? Why not just use the same director for each episode? Examples: URL_1 URL_0
Technology
explainlikeimfive
{ "a_id": [ "deyermx" ], "text": [ "The director needs to plan an episode, get rewrites, shoot for 1-2 weeks, then supervise the edit and completion. It's a big job. It's too long for one person to do 13 episodes back to back, so shooting is split into episodes or two episode tranches." ], "score": [ 11 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5zj2s8
Why is the "claimed" battery life of a product always significantly more that the actual battery life of a product?
I feel as though this is counter to the rest of marketing/consumerism. For example, a company might say "water resistant" instead of "water proof" to cover their butts. Or when a car's official towing capacity is 500lbs when in reality it is 3000lbs because the company is playing it safe. Why do companies claim 16 hours of battery life but you can only reliably get 10 hours of life?
Technology
explainlikeimfive
{ "a_id": [ "deyg19b" ], "text": [ "The battery WILL last 16 hours in lab conditions i.e. with the product turned on and doing nothing. When you start to use the device, it requires more power to run the tasks you are doing so the battery is being used quicker than if it's sitting idle." ], "score": [ 8 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]