q_id (string, length 6) | title (string, 4-294 chars) | selftext (string, 0-2.48k chars) | category (string, 1 class) | subreddit (string, 1 class) | answers (dict) | title_urls (list, length 1) | selftext_urls (list, length 1) |
---|---|---|---|---|---|---|---|
ej72gj | why you’re told to wait 30 seconds when unplugging a modem or DVR. | Technology | explainlikeimfive | {
"a_id": [
"fcvvzzz",
"fcvw7uk",
"fcvvyzl"
],
"text": [
"Electric circuits have parts called capacitors which can hold on to electric charge for a bit. In order for the modem to get a fresh start, you are told to wait a “longer than necessary just to be safe” amount of time, so all the capacitors will discharge.",
"There are things in the electronics called capacitors that take wall power and trickle it in to parts that don't want full wall power like a funnel under a kitchen faucet. The capacitors will continue to discharge into the parts just like it takes a little bit for the funnel to empty after you turn off the faucet. During this time, the parts may still have enough power to remember their previous, possibly messed up settings. Leaving the device off for a longer period lets everything drain out and ensures the device starts with a clean slate.",
"When you unplug it, there is still electricity in the device. Think of it like a sink, and electricity is water. Pull the plug and let it drain."
],
"score": [
18,
9,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
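The answers for ej72gj above attribute the 30-second wait to capacitors discharging; the standard RC discharge law makes the timescale concrete. The component values below are illustrative assumptions, not measurements from any actual modem:

```latex
% Capacitor discharging through a load/bleed resistance R (illustrative values).
% With the assumed R = 10 k-ohm and C = 1000 uF, one time constant is 10 s,
% so after 30 s (about 3 time constants) only ~5% of the voltage remains.
\[
  V(t) = V_0 \, e^{-t/RC}, \qquad
  \tau = RC = (10\,\mathrm{k\Omega})(1000\,\mu\mathrm{F}) = 10\,\mathrm{s}, \qquad
  \frac{V(30\,\mathrm{s})}{V_0} = e^{-3} \approx 0.05 .
\]
```

Waiting "longer than necessary" simply pushes the remaining charge below whatever level still keeps volatile memory alive.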
ej8h3j | When an actor is in a scene twice, how do they do it? | For instance, Austin Powers and Doctor Evil or Data, Lore, and Noonien Soong. Do they just shoot it twice and overlay the images? How is it done now vs. the late 1980s or earlier? | Technology | explainlikeimfive | {
"a_id": [
"fcw5u5l",
"fcw6yf8"
],
"text": [
"Often they will do tricks to make you think the actor is in the scene twice, such as filming the scene twice, but with a stunt double taking the place of the other character, and then filming from angles that make it seem like the two characters are talking to each other, such as filming the stunt double from behind, so you can't tell it's not really them. This is the most common method, and it's a low tech solution, just requiring you to be careful when editing. More complicated would be using film splicing, where you'd say film the same scene twice, but now physically cut and paste the shots together so that it'd appear to be the same image. Like taking two photos of the same place, cutting them in half, and putting the left of one and the right of the one together. Later, digital technology made this much easier to do well.",
"There’s a lot of ways to do this that don’t involve special effects. If you look at most conversations that take place in movies, the director will film a close up of one actor, and a close up of another actor and just switch between them back and forth throughout the scene. If one actor is playing two characters, you can just film the close up of one of the characters and then the close up of the other character and switch between them. They also use stand-ins. If the two characters need to hug or something, you can just dress another actor up like the character and put a wig on them and as long as you just see their back, that’ll work. If you see the same actor in two different places in the same shot, now-a-days, this will be done with a green screen. The actor will do one of the parts on a green screen and then, in a computer, you can make the green screen invisible and move the actor into the other scene. In the early days of movies, you could do something similar by double exposing the film. If you have an camera that uses physical film, it’s possible to take a picture with only half the film. So you can cover up one side of the film, shoot your scene and then cover up the other side of the film to shoot the other part of the shot."
],
"score": [
19,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
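The first answer for ej8h3j describes the classic split-screen trick: shoot the scene twice with a locked-off camera and join the left half of one take to the right half of the other. A minimal sketch of that composite, using NumPy arrays as stand-in frames; the array sizes and the seam position are assumptions chosen only for illustration:

```python
import numpy as np

def split_screen(take_a: np.ndarray, take_b: np.ndarray, seam: int) -> np.ndarray:
    """Join the left part of take_a (up to `seam`) with the right part of take_b.

    Both takes must come from a locked-off camera so the background lines up.
    Frames are H x W x 3 arrays; `seam` is the column where the two takes meet.
    """
    if take_a.shape != take_b.shape:
        raise ValueError("takes must have identical dimensions")
    composite = take_b.copy()
    composite[:, :seam, :] = take_a[:, :seam, :]
    return composite

# Toy 4x8 "frames": take A is all red, take B is all blue.
take_a = np.zeros((4, 8, 3), dtype=np.uint8); take_a[..., 0] = 255
take_b = np.zeros((4, 8, 3), dtype=np.uint8); take_b[..., 2] = 255
frame = split_screen(take_a, take_b, seam=4)
print(frame[0, 3], frame[0, 4])  # left of the seam comes from A, right from B
```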
ejbas2 | (Photography) What is Noise, White Balance, ISO, Chromatic Aberration, etc.? And how do they help in Photography? | I don't understand what most of them are, how they happen, or what each does to help photography. | Technology | explainlikeimfive | {
"a_id": [
"fcwqnik"
],
"text": [
"Noise: graininess on the film or, in case of digital photography, the graininess on a shot. ISO: the sensitivity of the film/sensors to a certain amount of light. When dealing with low light scenery, the less ISO you have, the longer the exposure time. White balance: every source of light has a temperature, its \"colour\". Your eyes quite often don't perceive it, as them and brain are pretty good in stripping away the \"colour\" part of a light source. Cameras aren't that good, so white balance is something the camera uses to attempt stripping out the source of light colour pollution. Chromatic aberration: parts of a shot whose colour doesn't look like the colour IRL. The whole picture can be affected as well."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
eji58q | For what reason is the snooze timer set to 9 minutes instead of 10? | Technology | explainlikeimfive | {
"a_id": [
"fcxwh1k",
"fcyvfay"
],
"text": [
"Original mechanical alarm clocks had already been standardized by the time the snooze feature was invented, so gearmakers had to mesh the gears with the existing clock mechanism. It worked out that they could best fit a gear set up of about 9-10 minutes, but the opinion at the time was that 10+ minutes was long enough for people to properly fall back asleep, so the nine minute gear configuration was chosen. Then digital alarm clocks came around; while easily altered to whatever snooze length is desired, they had to compete with mechanical ones and most people already expected a nine minute snooze. That was what stuck.",
"> By the time the snooze feature was added in the 1950s, the innards of alarm clocks had long been standardized. This meant that the teeth on the snooze gear had to mesh with the existing gear configuration, leaving engineers with a single choice: They could set the snooze for either a little more than nine minutes, or a little more than 10 minutes. But because early reports indicated that 10 minutes was too long, allowing people to fall back into a \"deep\" sleep, clock makers decided on the nine-minute gear, believing people would wake up easier and happier after a shorter snooze. > URL_0"
],
"score": [
89,
3
],
"text_urls": [
[],
[
"https://www.mentalfloss.com/article/22761/why-does-snooze-button-give-you-only-9-more-minutes-sleep"
]
]
} | [
"url"
]
| [
"url"
]
|
|
ejikzh | How were the USA/Russia able to build nuclear weapons in the 1940s, but countries today aren't able to do so just as easily? | Technology | explainlikeimfive | {
"a_id": [
"fcxzmol",
"fcy0cl9",
"fcy02o5",
"fcy75cn"
],
"text": [
"The Haves preventing The Have Nots from building a nuclear arsenal. It's more about politics than technology.",
"Weapons-grade uranium isn't cheap or easy to produce, you need a vast array of equipment and a large amount of specific types of uranium to process. It's costly, labor intensive, and serves no civilian purpose. For that reason, most nations that aren't military superpowers aren't interested. The few that are are effectively rogue states, autocratic nations that want nuclear capability to back up their regional harassment and deter invasion. It's hard for them to do it because they have very few allies willing and able to suply the hardware and expertise, and a great many enemies interested in stopping the production. The US and Russia didn't have these limiting factors in the 1940s and 50s, they could produce and test nuclear weapons freely with support from their global allies.",
"Because we all realized that everyone having nukes is more dangerous than a few superpowers having nukes. There is now a treaty that basically says \"if you have nukes, you have nukes, but if you don't, you don't seek them out.\" And then the members of this treaty make sure that the materials needed to make a nuclear weapon don't become available, and if someone pursues nuclear arms, they will be dealt with economically.",
"There are international treaties and agreements in place that intentionally make it difficult. The Nuclear Non-Proliferation Treaty, nuclear-weapons-free zones, various export control regimes, and attentive monitoring by intelligence agencies and the International Atomic Energy Agency are all in place to make it so that if someone does try to develop nuclear weapons, they will be noticed, and then there are various kinds of ways (e.g. economic sanctions, political threats, actual acts of war) to punish them. Additionally, most countries do not _want_ to develop the weapons: they don't see a need. This is because they are either feel entirely non-threatened by them (e.g., they and their rivals are under a treaty that prohibits them), or they are under the \"nuclear umbrella\" of another nuclear state. This system is not perfect, to be sure. Since it was put into place, Israel, India, Pakistan, South Africa, and North Korea each developed nuclear weapons. But it does seem to have slowed nuclear acquisition — there are only 9 nuclear states, not 20. The number of states that _could_ have nuclear weapons very quickly if they decided to is much larger than the number that have them. It is not about lack of technical ability — it is much easier for a nation, technically, to build nuclear weapons today than it was in the 1940s, when there were many more \"unknowns\" and new things to learn. The hitch of all of the above is that political circumstances can change. If Japan or South Korea suddenly felt like they was no longer protected by the US \"nuclear umbrella\" against China or North Korea, would they seek their own programs? There would be costs and consequences (sanctions, etc.), but it's possible. Most of the trick to keeping nations from \"going nuclear\" has been in trying to make sure there were more advantages to them _not_ doing it than them doing it. Again, it hasn't been perfect, but it works."
],
"score": [
10,
9,
4,
4
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ejisju | What's the difference between a "front end developer", UI designer, UX designer, and what do they do? | What's the difference between a "software engineer" and "Front-end developer". For example, if someone is designing the code for Google, who is designing the look of the app? Like take a popular app like Outlook. I understand that software engineers design the code and stuff, but who decides how it looks, where the reading pane goes and stuff like that. I googled "Front end developer" and was getting a lot of info about WEB developers, but that isn't the same right? Like are the same people who design the look of how Microsoft Paint (where the paint brush goes, where the scroll wheel goes, etc), are those the same people that design the menus for a video game? Are all those people "front end developers"? Are they "designers"? | Technology | explainlikeimfive | {
"a_id": [
"fcy2bpm"
],
"text": [
"First off, none of these terms are an exact science. They are certainly not used consistently everywhere, so I'm just providing a decent rule-of-thumb from what I've seen in the industry. That being said: Generally the term \"front end developer\" is used to contrast with \"back end developer\", both of which write code for web applications. The key distinction being that the back end is run on a server somewhere, and the front end is the code run on an actual user's computer (which, yes, does involved the visuals but also generally more than just that). Of note, \"full stack developer\" is used for developers who do both. In the case of a program like MS paint, I would likely call that job a generic \"application developer\", and there wouldn't be any distinction between who does the visuals and who does the 'rest': they are usually coupled enough that there isn't a lot of benefit of having distinct jobs handling each part. As for UI/UX designer, these are the people who actually make the decision of what the application/web site looks like. They often aren't coders at all, but rather designers who (to put \\*far\\* too simply) bring pretty pictures to the developers who then implement it as part of their design. Generally, they are also involved in user research and those kinds of things, to determine what is a good look for the product. Edit: Oh, and \"software engineer\" is usually just a synonym for \"software developer\", which is just about the most generic title you can have. In a web-oriented business it might be what they call their back-end developers, but as I said before there is no real specification so it could mean different things to different people. (Of note, you \\*can\\* be an accredited software engineer, meaning you've gone through similar governmental proceedings as architectural engineers or the like, but in most of the industry its really a useless title. Maybe some government jobs like NASA, or safety-critical software like airplanes, requires it, but for most private businesses software engineer == developer)"
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
ejj8rl | How does Google know where you have been, even if you aren't connected to any network? (No mobile data or wi-fi) | I'm currently traveling in Europe and I don't have roaming on my cellphone, but somehow Google Maps manages to know where I have been as soon as I connect to the hotel wi-fi. | Technology | explainlikeimfive | {
"a_id": [
"fcy6nrl",
"fcy6m2e",
"fcy7ck6"
],
"text": [
"Two ways: 1. Your phone doesn't need to connect to a WiFi network to know that it is close by. Your phone always listens for networks and uses them to determine where it is. Your phone can do this even when WiFi is turned off. 2. Your phone probably has built in GPS which it uses to determine where it is.",
"GPS by itself doesn't use data, so it's possible to log that data then when you connect back to a data connection they can read that data and line it up with a map to see where you've been. Navigation though typically uses data (and can be avoided by downloading the area to use offline).",
"Every WiFi hot-spot has a location associated with it. You can get location data even with GPS turned off. That's why your phone sometimes asks to turn on WiFi to improve location accuracy"
],
"score": [
17,
16,
5
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
ejjp2j | When you click on "delete" what physically happens in the computer? How is space retrieved? | Technology | explainlikeimfive | {
"a_id": [
"fcyawce"
],
"text": [
"The space is marked as \"available\". Nothing is deleted, but when the computer is looking to write a new file, it finds available space for it."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
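The single answer for ejjp2j says deletion just marks space as available. A toy sketch of that idea, using a free-block bitmap like a very simplified filesystem allocator; the block count, block layout, and method names are invented purely for illustration:

```python
class ToyDisk:
    """A disk of fixed-size blocks plus a bitmap saying which blocks are free."""

    def __init__(self, n_blocks: int = 16):
        self.blocks = [b""] * n_blocks       # the actual data, never wiped on delete
        self.free = [True] * n_blocks        # True = block may be reused
        self.files = {}                      # filename -> list of block indices

    def write(self, name: str, chunks: list[bytes]) -> None:
        indices = [i for i, f in enumerate(self.free) if f][: len(chunks)]
        if len(indices) < len(chunks):
            raise OSError("disk full")
        for i, chunk in zip(indices, chunks):
            self.blocks[i] = chunk
            self.free[i] = False
        self.files[name] = indices

    def delete(self, name: str) -> None:
        # "Delete" only flips the bitmap; the old bytes stay until overwritten.
        for i in self.files.pop(name):
            self.free[i] = True

disk = ToyDisk()
disk.write("photo.jpg", [b"\xff\xd8", b"pixels"])
disk.delete("photo.jpg")
print(disk.free[:4])      # blocks are marked available again...
print(disk.blocks[:2])    # ...but the old data still sits there until reused
```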
ejkeu2 | Why are electric cars better at accelerating than most gas powered supercars? | Technology | explainlikeimfive | {
"a_id": [
"fcyhhy4",
"fcykytl",
"fcyi2c5"
],
"text": [
"All of the torque in the motor is available instantly rather than being dependent on the RPMs of the engine.",
"A gasoline engine doesn't work entirely on it's own. It has to work in conjuntction with a transmission. All a transmission is, essentially, is a set of gears that allows the user to select the best gearing ratio for the situation at hand. To understand this, consider the same vehicle in two separate situations. One where it is starting from rest, and one where it is speeding down the highway. When you want to move the vehicle from rest -- as in starting after stopping at a stoplight -- you want the engine to deliver a large amount of power because the vehicle is at rest and you have to overcome the inertia it has because it wasn't moving. However, on the freeway, the vehicle is already moving, and past a certain speed, you are mostly just (a) overcoming the friction of the axels, the engine and other components of vehicle and (b) the air resistance as the vehicle moves. This actually requires a fairly small amount of energy (compared to starting the car from a stop). As you can imagine, both of these situations benefit from very different gearing ratios to deliver power from the engine to the wheels. But cars can't have a infinite number of gears (unless it's a CVT, but that's different), so manufacturers will generally select a small number (usually 5) that give the best \"general use\" benefits to the driver. The trade off, however, is that each gear is only really efficient at a particular engine RPM, and only delivers max torque at particular points as the engine revs. Electric motors do away with the transmission nonsense because the motor has the ability to be controlled by how much electric potential is applied to it. So if you stomp on the \"gas\" in an electric car, the controller simply puts the maximum potential across the motor, which means the motor produces maximum torque as fast as the current can be applied. There's no gearing like in an internal combustion car to cycle through. The electric car can get away with a single gear because the output is controlled by the input to the engine. [Here]( URL_0 ) is another breakdown with some more depth.",
"When flooring the accelerator pedal, the electric motor instantly has max power input and therefore the max power output. In a regular internal combustion engine, it builds up to its max power output more slowly. A regular engine max power output will be at high rpm, while an electric motor is at max power output almost instantly."
],
"score": [
18,
10,
6
],
"text_urls": [
[],
[
"https://www.roadandtrack.com/new-cars/car-technology/a12019034/why-dont-electric-cars-have-multi-gear-transmissions/"
],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ejmk5w | How does an LED display produce a "black color"? | I understand that in an OLED display, those black pixels are just turned off, but since an LED display has a backlight, I don't understand how you can achieve a black color, even on full brightness. (I know that it's never true black) | Technology | explainlikeimfive | {
"a_id": [
"fcz1xaf"
],
"text": [
"The principle of an LCD is to block light selectively in order to create different shades of color, so if you block all of it, you’ll see black, or at least it looks like black, but it’s really still lit up because the filter isn’t perfect The newer LED LCDs use leds as backlights, so they can selectively turn them off in dark areas and up their brightness in light areas, this won’t work with really high contrast (white text on black background) but it can help with the problem of not true black"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
ejnk6m | When a video game crashes, why does the music continue to play even though everything else is frozen? | Technology | explainlikeimfive | {
"a_id": [
"fcze6pb",
"fcz4rvo",
"fd1cv3l"
],
"text": [
"Video games are extremely complex pieces of software. I could go into a lot of depth about this, but seeing as this is ELI5, here's a simple analogy. Just because you crash your car, doesn't mean the radio inside is now broken. Game engines usually know when something has gone wrong, and will close themselves down as a consequence. That said, it's hardly uncommon for things to break in weird, wonderful and often spectacular ways - that's just the nature of software development. When something like that happens, the game engine may not always recognize what's happened and you'll end up with, say, a black screen. But just because there's a visual problem, doesn't necessarily mean there's an audio problem, and thus, sounds and music may continue to play. It's worth nothing these sorts of occurrences are more common on platforms like Windows due to the vast amount of different hardware, software, peripherals etc that could interfere with the game in ways the developers had not anticipated.",
"Likely because the music process is still running, modern games are composed of multiple processes, as you can see if you open task manager, and if one gets stuck, not all the others will as well, of course this depends on the game",
"**ELI5 Version:** Fred and Bob are playing DnD, Fred is the DM and Bob is just a player. To make their sessions more \"immersive\", Fred wants some authentic live medieval flute music. He could play the flute himself, but playing the flute and doing all the DM stuff at the same time is difficult and risky: if he gets distracted rolling some dice, he might forget to play a note, ruining the experience for Bob. Instead, he asks his friend Jimmy to play the flute for him. All he needs to do is hand Jimmy the sheet music and Jimmy will play, freeing Fred up to do as much dice rolling as he needs and provide the rest of the experience for Bob. Now, let's say Bob is fighting a dragon, and Fred has given Jimmy the \"Dragon Fighting\" music, so Jimmy is playing. Fred is calculating the dragon's attack using the campaign instructions, but there's a mistake! The instructions were meant to say \"roll 1d8, if you roll a 4 or less, repeat the process and add to the damage total,\" but instead it was printed as \"roll 1d4\" etc. As far as Fred is concerned, the instructions are law, so he begins furiously rolling a 4-sided dice until he gets a 5 or more. Jimmy is still furiously playing the flute because Fred didn't tell him to stop. Bob gets bored and kills both of them. **Original:** Game engines use multiple \"threads\" of execution, running in parallel. The game app asks the Operating System to start the Threads, and the OS is responsible for running them. In this case, the game engine has an audio thread dedicated to running the audio, and it's been given the music file and just told to play it. If something else in the main thread hangs (enters an infinite loop or wait) the game appears frozen but the audio thread keeps playing the music, uninterrupted. Now, it's unlikely that the game engine has actually encountered an \"unrecoverable error\" (a crash), because when that happens it will typically have some kind of routine that cleans up or kills the audio thread, stopping the music, and then exits the program with an error message. It's also unlikely that the game engine has an infinite loop, widely used engines are well tested. Instead, it's far more likely that the game logic itself has an infinite loop. Now, why use a separate thread? Interestingly, the fact that the audio continues even though the game itself stalls is actually one of the primary benefits of using an audio thread. If some game logic were to \"spike\" (think some intensive event, like a large amount of TNT in Minecraft) and the audio was being handled on the same thread, then the music would stutter or stop. Sound, or any analog waveform, in computer is made up of a series \"samples\". The sample rate typically used is 48KHz, 48,000 samples per second (the Nyquist theorem tells us this is theoretically good enough to represent frequencies up to 24KHz). How does a computer play music then? Well, it needs to send the samples to a DAC, which is a chip that generates the analog signal. However, trying to accurately send a sample exactly once every 20 microseconds is a fools errand; instead, we send it in batches, say 1024 at a time. The received batch is put into a \"buffer\" which the DAC can read through in its own time. Now, if you send a batch late, the DAC won't know what to use. In fact, what it might do is just keep looping what's in the existing buffer, which makes a buzzing sound (you may have heard this in bad crashes where the audio driver has also crashed). 
Normally you aren't interacting with the DAC directly, you are using an OS audio driver, and that may choose to send zeroes (no sound) instead. So, the easiest way to ensure that the batches are going to be sent on time regardless of what other code is running is to handle this in a dedicated thread. The game engine takes away this complexity from the game developer and normally just gives them some \"play music\" function they can call, but in the background it's using some low-level code in a thread to send samples from a loaded music file."
],
"score": [
18,
10,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
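The longest answer for ejnk6m explains that a dedicated audio thread keeps delivering sample batches on schedule even when the game loop hangs. A minimal sketch of that separation using Python threads; the batch size, sample rate, and the simulated "hang" are illustrative stand-ins, and printing replaces actually handing samples to a DAC:

```python
import threading
import time

SAMPLE_RATE = 48_000                  # samples per second
BATCH = 1_024                         # samples handed to the "DAC" per batch
BATCH_SECONDS = BATCH / SAMPLE_RATE   # ~21.3 ms of audio per batch

def audio_thread(stop: threading.Event) -> None:
    """Keep delivering batches on schedule, no matter what the game loop does."""
    n = 0
    while not stop.is_set():
        # Stand-in for handing 1024 samples of the music to the sound driver.
        print(f"audio: delivered batch {n} ({BATCH_SECONDS * 1000:.1f} ms of music)")
        n += 1
        time.sleep(BATCH_SECONDS)

stop = threading.Event()
threading.Thread(target=audio_thread, args=(stop,), daemon=True).start()

print("game loop: entering a bugged, blocking wait (the 'freeze')...")
time.sleep(0.1)                       # the game loop is stuck, yet batches keep arriving
stop.set()
print("game loop: process finally torn down")
```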
ejqelk | Why are the download speeds advertised by wireless providers never the same as what is actually achieved? | Technology | explainlikeimfive | {
"a_id": [
"fczpuia"
],
"text": [
"Wireless speeds advertised by providers are theoretical maximums in a near perfect test environment. Distance, other wireless devices, and obstacles significantly degrade signal & speed."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ejqqks | Why would digital cameras ever have a shutter? | A shutter is necessary on a film camera--the film itself is photo-sensitive and any amount of time it's exposed beyond what the photographer/meter wants would lead to an overexposed image. But in my limited understanding, a digital camera uses a photo-sensitive sensor and translates the light hitting the sensor into a digital image (in much the way film does, as a function of amount of light \[aperture size\] and duration of exposure \[shutter speed\]). But why is a shutter needed for this? If the photographer wanted to set an exposure time of 1/100 of a second, couldn't the sensor just create an image based on the light that hit the sensor in a given time window? I.e., the digital image will be generated from the light that hit the sensor from time T to time T+0.01. What does a physical shutter contribute? | Technology | explainlikeimfive | {
"a_id": [
"fczta3g",
"fcztbuw"
],
"text": [
"You can do a digital shutter like you've suggested, but it makes the circuit more complicated per pixel, reducing the size of the sensor for each pixel, which then makes image quality worse. There is also delay in reading the signal from each pixel and you get rolling shutter distortion. This is easy to see on digital video of airplane propellers. Since space isn't much of a concern on SLR cameras, they still use a physical shutter. Pretty much everything else uses a digital shutter.",
"Many don't. Your cellphone camera doesn't, lots of point and shoot cameras don't. Cameras that do either do just for fun because people like them, or in most cases of professional cameras because the camera is using an extremely delicate very large full frame sensor that can detect very small amounts of light but is bad about building up too much charge if continuously exposed to bright light and works best if it's getting an instantaneous burst of small amounts of light it can take a perfect picture of."
],
"score": [
13,
7
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
ejslj2 | How could Voyager 1/2 transmit data such a long distance ? | Technology | explainlikeimfive | {
"a_id": [
"fd0flb9",
"fd0ft23"
],
"text": [
"Any electromagnetic emission in space will travel forever unless intercepted by something. In the case of Pioneer, it's a radio receiver, though really it's radio telescope. Pioneer's transmissions are rather low power, so it takes a lot of antenna area to capture enough photons to \"see\" the signal. Eventually, the probes will travel so far away we won't be able to capture enough of those photons to see them as more than background radio noise. Part and parcel to this is the Inverse Square Law.",
"Because light (remember, radio waves are a form of light) travels forever in a straight line unless it interacts with something. In space, there's basically nothing to interact with. As long as you have a large/powerful enough antenna on Earth to receive the signal, you're fine. This works at any arbitrarily large distance; you just have to keep making bigger and bigger antennas. The Voyager probes have barely left our solar system. We can communicate across the entire galaxy if we want to."
],
"score": [
8,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ejv5sf | How do some games like Monster Hunter, or even mobile games like Underlords, allow for players all over the globe to play together seamlessly but other games like Dota has horrible lag and ping when you play outside your region? | Technology | explainlikeimfive | {
"a_id": [
"fd1tfq9",
"fd21vgd",
"fd1tfwo",
"fd27dw0",
"fd2dgls",
"fd2emn5",
"fd2v6vs",
"fd1waix",
"fd2ayon",
"fd5r8jx"
],
"text": [
"Differences in game styles and different quality programming of the network code in the game (netcode). The concept of ping is the same for all games - how long it takes for data to go from you to the server and back - but how much that ping affects gameplay depends a lot on the game style and how it's been programmed. Some games let the game hide the lag more easily. For example, in a turn-based or strategy game, it doesn't really matter if there is a brief latency between actions from different players. However, in a FPS game, latency is critical because if you are playing a small fraction of a second \"in the past\", and you shoot at someone, they might have moved by the time your action makes it to the server. There are ways to counteract this. Your game might \"predict\" what your opponents are doing to hide lag, or it might add an equal amount of lag to everyone. The exact approach that a game uses will affect how it \"feels\" when you're playing with someone a huge distance away. [Here]( URL_0 ) is an in-depth article of how different ways of programming games affect the way they lag when you have a bad connection. It's really quite interesting but hard to condense down to an ELI5.",
"eli5: There are different ways to handle networked information going in and out of your game and it mostly comes down to how much they \"fudge\" or predict actions. The games likely have the same amount of lag, but your opponents in monster hunter won't be upset if the player cheats a little for the purpose of hiding it. eli15: These days, almost no games will wait for confirmation from the server to move your character on the client side; they will only correct it once in a while if it's off by some treshold. That's when you get rubber banding. Server-side consistency is less important for a pve game where \"it's probably fine\" to let the client have some authority in dictating what happened, but needs to be more strict for a multiplayer game and especially for one as competitive and precise as dota. It might be that dota will only move the characters on the client according to what the server says, meaning you have to wait for a full return trip to see your command get carried out. It gets trickier once you start applying this to any other actions. You can fire off the attack animation clientside without asking the server, but most games will wait for confirmation before showing the effect it had on the target. Then you get into situations where two players can meet eachother around a corner at the same time, and each will (on their end) fire before the other. There's no good solution for this to make everyone happy, and you are forced to either kill one player who shot first on his screen, or kill both players (I know destiny does this). The only way to have a 100% consistent, fair, and correct representation is to only show the game state according to how it actually is on the server. It doesn't make it laggier, it only exposes how laggy it actually is.",
"Dota underlords is turn-based game so there is no problem with even 1 second lag because game can handle it. I dont know Monster Hunter very good but as i read about it i see there is multiplayer but its PVE so lag is not noticable because you are fighting with computer-controlled enemies and they are client based so latency is not the most important thing. In Dota or league of legenends or cs you have to have low latency (ping) because every ms count. If you shoot someone server needs to know it instantly so it can \"tell\" other player that he died and for example cant kill you.",
"Lag and ping is way more noticeable if you are facing other players, on games where players beat monsters together or that you send small armies that go slowly and stuff like that it doenst matter if there is a couple hundreds of miliseconds of delay, because the game does not depend on your reaction time.",
"Having played all 3. I can explain For monster hunter world, the game is a lot easier for client side prediction. Monster moves in a very static way and generally doesn’t get interrupted by most of your action. Your 3 other friends can move in also a limited set of direction and attack. And it’s not as important for client side to be ‘off’ ever so slightly because it doesn’t effect your gameplay. Static events like gathering and rewards doesn’t require lots of back and forth. So in general net coding the game is a lot lighter and can allow client side to do a lot of correction. For Underlords it’s even easier, since the only thing you ever care for are whenever client rolls and buy and probably a random seed for the AI to making decision in fighting. If your opponent buys a chess piece, something as high as 1 second discrepancy will not be noticeable. If you roll, than the server can do all the math and tell you which 5 you can buy. There no need for any other info between client/server at this point. The random seed for how the fight is dictated can be given to you at any point for you to see the action. For Dota, your client side can’t predict much and you require a lot of information between server and client. For a lot of abilities/attack you can cancel at anytime and hitting or missing a spell require a precise x/y, even creeps behavior can’t be predicted since you can force aggro them. Player for server outside of your region would require longer relay of information which if it’s too old it’s useless and tossed out. To put into perspective, when you lag in monster hunter, you see everything still move but then all of sudden gets corrected/“synched”, for Underlords you don’t really notice for a long while other than button not responding to you. For dota, you “freeze”/“stutter”",
"Not ELI5 but this this is a big presentation about netcode for Overwatch that I thought was interesting and will answer some questions! You can start at 25:30 for a cool moving graph that shows the how the server tries to predict player inputs. URL_0",
"There's different ways to handle latency in a game, and in essentially every case, that solution comes at the expense of one of the parties involved. It used to be common that one player was the \"host\" and had zero latency, meaning the other player was the disadvantaged one, this is less common nowadays, but still happens occasionally, but is the least \"fair\" one. Nowadays most games have a server hosting games, where it can handle latency in two ways, but in each way both players share the disadvantage. it can either let you input commands in real time, showing your actions immediately and then double checking if your inputs \"succeed\", this is more common in shooters, as it is more responsive, but more sensitive to latency and the game might reverse your actions during intense lag. The other way to do it is that your inputs are delayed according to your latency and you see the game as the server interprets it, this is how DotA and most strategy games do it, as these games generally value correct information over responsiveness. The common thread between these two methods is that since the different parties are both players, the computer is strict on its interpretations, it won't bend the rules so it can keep a fair playing environment, if your data differs from what the server sees, it will correct your data. HOWEVER, what makes Monster Hunter work so well is that the two competing parties have a different dynamic, there is the players versus the game itself. Because the computer doesn't care about being treated fairly, it can bend the rules of what is true and what isn't. This means that even though you and I have wildly different latencies, our game clients send in data to the server as we see it and the game completely skips fact checking either of our data, and just accepts it and processes it and returns a result. This means the disadvantage is almost fully on the server, not the players. As for Underlords, I haven't played it, but assuming it's similar to Auto Chess and TFT, there's no actual real-time interaction with other players, so the game can process your results ahead of time and simply replay the results to you, so latency is largely irrelevant. TL;DR: Most multiplayer games have to be fair because they're in real-time and also against other players. If a game isn't in real time, or not agaisnt other players, you can \"cheat\" latency by either processing results ahead of time, or letting the players tell the server what happens.",
"I'm a fair way off the leading edge with game engines so I might be about to spout a load of crap, but ... There are two ways of making a multiplayer game. The easy way is for each player to report their 'next tick' location to a server which then forwards this information to every other player. This works reasonably well over a fast lan. The hard way is to build an internal model of what's going on inside each client. This way the game knows everything that's going on, runs with a bunch of assumptions (the bullet continues going in a straight line), and then you only need to send updates to the other players when something unpredictable happens (the bullet hit someone). But writing one of these you need to accept that the event arrives *after* it has actually happened (very shortly after the gun was fired) and that you need to both change the model and apply some kind of time delta so everything comes back into sync. This is F for F'n hard.",
"I play MH plenty and can tell you that they narrow down your online options substantially when you're looking for random hunts to join versus searching for a certain monster to hunt. If I had to guess, the servers don't put every player on the same online playing field unless they connect directly with other players or their squad. So while you can play with players from anywhere in the world, I believe they sanction off parts of it's online servers based on how many people are playing. I could be wrong tho.",
"Back in the Quake 3 Arena days there was something called de-lag, which in essence the clients register shots and compare notes with the server. Evened out the odds between high latency players and low latency players, though the low latency players still had an advantage. The hit box was noticibly smaller though, but my dad and I got used to it. It made for some funny effects, with people snapping back around a corner when the hit registers nearly 2 seconds later."
],
"score": [
8497,
366,
144,
47,
23,
15,
15,
14,
5,
3
],
"text_urls": [
[
"https://arstechnica.com/gaming/2019/10/explaining-how-fighting-games-use-delay-based-and-rollback-netcode/"
],
[],
[],
[],
[],
[
"https://youtu.be/W3aieHjyNvw"
],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
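Several answers in ejv5sf contrast games that wait for the server with games that predict on the client and correct later. A stripped-down sketch of client-side prediction with server reconciliation; the 1D movement model, fixed speed, and class/field names are assumptions chosen only to show the idea, not any particular game's netcode:

```python
from collections import deque

SPEED = 1.0  # units moved per input tick

class PredictingClient:
    """Apply inputs locally right away, then reconcile with the server's state."""

    def __init__(self):
        self.position = 0.0
        self.pending = deque()   # inputs the server has not acknowledged yet

    def press_forward(self, seq: int) -> None:
        self.position += SPEED           # show the move immediately (prediction)
        self.pending.append((seq, +1))

    def on_server_state(self, last_acked_seq: int, server_position: float) -> None:
        # Drop acknowledged inputs, rewind to the authoritative position,
        # then replay whatever the server has not seen yet.
        while self.pending and self.pending[0][0] <= last_acked_seq:
            self.pending.popleft()
        self.position = server_position
        for _, direction in self.pending:
            self.position += direction * SPEED

client = PredictingClient()
for seq in range(1, 4):
    client.press_forward(seq)            # player taps forward three times
print("predicted:", client.position)     # 3.0, shown with zero perceived lag

# A late server packet arrives: it has only processed input #1 so far.
client.on_server_state(last_acked_seq=1, server_position=1.0)
print("reconciled:", client.position)    # still 3.0 -- inputs 2 and 3 were replayed
```

When prediction and the authoritative state disagree by more than some threshold, the snap back to the server's value is the "rubber banding" mentioned above.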
ek0hxz | Why do objects in video games vibrate rapidly when they somehow get stuck inside other objects? | Like when a character somehow gets stuck in a car door or wall, it moves rapidly until its thrown away somehow. Why does this happen? | Technology | explainlikeimfive | {
"a_id": [
"fd42wy0",
"fd49oyv"
],
"text": [
"The game engine (i.e. the underlying software that the game runs on) has to constantly keep track of all the various objects' locations in the world. Two solid objects can't occupy the same space without something weird happening. In your example, the player character and the car door are essentially fighting to occupy the same space with no way for the engine to prioritise either of them.",
"The collision detection is having a fit. Models in games are made up of polygons. At each calculation of the physics, the game engine will do some clever math to determine if the polygons of one object have intersected with the polygons of another object, and of so, it'll adjust their velocities so that they bounce off of each other, say. (And of course I'm simplifying things here a bit, but the general idea should hold). Now ideally, the physics will calculate fast enough that there will only be a very small overlap between the objects, but if one of the objects is moving very fast, or if the CPU can't quite keep up, you might have a situation where, between physics steps, one object pretty much ends up inside of the other. The weapon you just threw, say, is not only intersecting with the surface of the wall on the inside of the room, but it's also intersecting with the wall on the outside of the room that you can't see. If the game engine tries to move the weapon back into the room, there will be more collisions on the far side. If the game engine tries to move the weapon out of the room, there'll be more collisions on the interior side. So the weapon starts to bounce back and forth until it gets enough velocity in some direction that it's able to completely clear the wall in a single physics step."
],
"score": [
15,
7
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
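Both answers for ek0hxz describe an object wedged between two surfaces, with the solver pushing it out of one only to shove it into the other. A tiny 1D illustration of that oscillation; the wall positions, object size, and the naive "push out of the deepest penetration" rule are assumptions made to reproduce the jitter, not any real engine's solver:

```python
# A 2-unit-wide box wedged into a 1.5-unit gap between walls at x=0 and x=1.5.
LEFT_WALL, RIGHT_WALL = 0.0, 1.5
HALF_WIDTH = 1.0

def resolve_once(center: float) -> float:
    """Naively push the box fully out of whichever wall it penetrates deepest."""
    left_pen = (LEFT_WALL + HALF_WIDTH) - center       # overlap with the left wall
    right_pen = (center + HALF_WIDTH) - RIGHT_WALL     # overlap with the right wall
    if left_pen <= 0 and right_pen <= 0:
        return center                                  # no contact, nothing to fix
    if left_pen >= right_pen:
        return LEFT_WALL + HALF_WIDTH                  # snap clear of the left wall
    return RIGHT_WALL - HALF_WIDTH                     # snap clear of the right wall

center = 0.7   # stuck: the box cannot fit in the gap at all
for step in range(6):
    center = resolve_once(center)
    print(f"physics step {step}: box center at {center:.2f}")
# The box teleports between 1.0 and 0.5 forever -- the on-screen "vibration".
```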
ek0izj | Why do you have to pay for .com domain names? Where did those websites get them from in the first place? | Technology | explainlikeimfive | {
"a_id": [
"fd425uv",
"fd42d0s"
],
"text": [
"There is some upkeep required in owning a domain name, so part of the fee goes to support that. But most importantly, the price needs to be high enough that people can't just buy millions of domain names to sit on them forever and prevent other people from buying them. -- This is a pretty common tactic in business, even for a \"low cost\" item, you price it higher so that a small amount of people can't use their resources to completely control it just by being first in line. Think about it this way: You go to the store to get the new hot iPhone. They are free this year. The guy in the front of the line just \"buys\" all of them. Well, that sucks. So what if we priced it at $200? Well, maybe he buys 2, then the next guy buys 2, and the third buys 1. Having a price on this makes sure that people can't just abuse the system and its a bit more fair. Not completely fair, but far more than free or cheap.",
"You have to pay for them as you ‘rent’ the name space. Otherwise someone could just snap up every domain quickly and annoy everyone else. It’s a way of ensure domains that are in use are actually still needed. .com and .uk are top level domains. nominet are in charge of .uk. When you register a domain i.e from 123 reg; if you pick a .uk domain they register it on behalf of you by letting nominet know. You can google ‘whois’ and type a domain name to see the owner and the expiry/renewal date."
],
"score": [
6,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ek3kvy | How LED lights are so much brighter AND more efficient than previous lights like fluorescent, etc | Technology | explainlikeimfive | {
"a_id": [
"fd5ljo5",
"fd5n644",
"fd5t04y"
],
"text": [
"TLDR: An ELI5 way to think about it is how much heat is generated vs how much light. Incandescent bulbs dump current through a filament (read: wire) and it glows to generate light, but also generates a lot of heat, and that’s wasted energy that doesn’t turn into light. Fluorescent bulbs are better at utilizing that power to make a coating on the inside of the glass glow (fluoresce) and doesn’t waste as much as heat output. Light Emitting Diodes have an even higher light-to-heat efficiency because of the way a diode operates (not ELI5, so I won’t go into it).",
"Older styles of lights produce light as a side effect - for example, regular ol' filament bulbs use energy to get *really really hot* to the point that they start glowing, and throwing out some of that heat in the form of visible light (like an iron bar glowing brightly when it comes out of a forge). This means they're pretty darn inefficient since most of the electricity that goes in to them, comes out as something other than light. LEDs take advantage of tiny semiconductors that produce light when electricity is passed through them. Since they produce light as their primary effect, a ton more of the electricity that goes in comes out as light, making them very efficient, much cooler, and drastically brighter watt-for-watt.",
"LEDs aren’t too far ahead of fluorescent lights in terms of luminous efficiency. Commercial fluorescent tubes can be well over 100 lumens per watt, whereas many LED bulbs are well under that figure."
],
"score": [
13,
4,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ek765r | how do they do the graphics for the first down and line of scrimmage markers on tv for NFL games? | So I am watching playoff games today and the lines for first downs and the line of scrimmage seem like a layer beneath the players and not as a simple overlay on what’s being filmed since you can see the players on top of the lines, breaking it up. How do they do that? | Technology | explainlikeimfive | {
"a_id": [
"fd6pv5x",
"fd74a4e"
],
"text": [
"They scan the whole field, with nobody on it, before the game starts. The line only replaces pixels that match the pregame image.",
"There are two parts to the question: how do they know where the line should go, and how do they know what the line should go on or under? For the first, they model the entire field before the game, because it isn't completely flat, so the lines won't be completely straight. The main cameras for the broadcast have position sensors so that a computer always knows exactly where they're pointing. They use the combination of the 3d model and the camera positions to know where the line should go. For the second question, the field modeling also gives them a color palette that they can use to set up something that's a lot like a green screen, so they know what colors to put the line over. Green field means that it should show the line, white or colored uniforms means that it shouldn't. This sometimes has to change on the fly if there's heavy snow and the field starts turning white, but they can adjust as they go. This also means that in snow, they can digitally put in sidelines and other yard markers."
],
"score": [
7,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
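The answers for ek765r explain that the first-down line is only drawn over pixels whose color matches the field, so players on top of the line are left alone. A minimal keying sketch over a toy frame; the reference green, threshold, and made-up pixel values are all assumptions for illustration, not the broadcast system's actual palette:

```python
import numpy as np

FIELD_GREEN = np.array([30, 120, 40])     # assumed reference field color
LINE_YELLOW = np.array([255, 220, 0])
THRESHOLD = 60                            # how close a pixel must be to "grass"

def draw_first_down_line(frame: np.ndarray, column: int) -> np.ndarray:
    """Paint `column` yellow, but only where the pixel still looks like grass."""
    out = frame.copy()
    strip = out[:, column, :].astype(int)
    distance = np.linalg.norm(strip - FIELD_GREEN, axis=-1)
    is_field = distance < THRESHOLD       # grass -> True, jersey/skin -> False
    out[is_field, column, :] = LINE_YELLOW
    return out

# Toy 4-pixel-tall frame: rows 0, 1, 3 are grass; row 2 is a white jersey on the line.
frame = np.tile(FIELD_GREEN, (4, 3, 1)).astype(np.uint8)
frame[2, :, :] = [250, 250, 250]
keyed = draw_first_down_line(frame, column=1)
print(keyed[:, 1, :])   # the line shows on grass rows; the jersey row is untouched
```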
ek8b9b | what is a cyberattack when it comes to warfare? | Technology | explainlikeimfive | {
"a_id": [
"fd77tmx",
"fd7a6d9"
],
"text": [
"An attack carried out on a digital system instead of one carried out physically. This is usually done by engineering a virus to attack the system. Some examples included Stuxnet, a virus which is speculated to have been created by the US and Isreal and targeted power plants in Iran, WannaCry, a crypto virus which is speculated to have been created by North Korea and with no known specific target, and Petya, a crypto virus which is speculated to have been created by the NSA and which was likely targeted at the Ukraine. Edit: The attack may be against a specific system, such as Stuxnex, or may generally attack computers in a specific country, or just computers in general. Edit: a commenter pointed out that Petya was likely created by Russia not the NSA.",
"The use of cyber resources to cause economic or physical harm to an opponent. For example, a state sponsored cell attacks the system for an enemy’s power distribution or stock exchange or flood gate controls at a dam."
],
"score": [
7,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ek9agy | How do fiber optic cables work? | I've heard that they transmit info at light speed, or near that speed or something, but how does that work? How does the cabling send information at all? | Technology | explainlikeimfive | {
"a_id": [
"fd7lbqo"
],
"text": [
"A digital signal is encoded using pulses of light and sent down the fiber optic. All digital devices (computers, etc) encode their data and instructions as a set of ones and zeros. Think of it like a light switch in your room. When the switch and light are on that’s a 1. When they’re both off it’s a 0. These ones and zeros can be arranged to represent any number. We use what’s called base 10 numbers in our daily lives. That means there are 10 “glyphs” (0 through 9) arranged to represent any number. When you do it with just 1 and 0 that’s called base 2 or binary. For instance the binary number 1111 equals 15 in base 10. To learn more google binary math. Anyway, these binary 1’s and 0’s, lights on and lights off are shined into the fiber. Because the fiber is made of glass it has a low refraction angle which means the light bounces off the walls of the fiber, without leaking out, as it travels down the line. At the end of the line a special integrated circuit called an optocoupler changes the energy from each light burst into an electrical signal which is then used by your digital device. No actually computing is done with the light. It’s just used to transmit data very fast. Someday we may have the capability to have light based computers which use optical transistors but we’re not there yet."
],
"score": [
12
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
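The answer for ek9agy describes data as 1s and 0s sent as pulses of light and converted back to an electrical signal at the far end. A toy encoder/decoder showing that round trip; representing pulses as a string of on/off symbols is purely illustrative and not how any real transceiver frames its bits:

```python
def to_pulses(message: bytes) -> str:
    """Encode each byte as 8 light pulses: '#' = light on (1), '.' = light off (0)."""
    bits = "".join(f"{byte:08b}" for byte in message)
    return bits.replace("1", "#").replace("0", ".")

def from_pulses(pulses: str) -> bytes:
    """The receiving end: turn detected pulses back into bytes."""
    bits = pulses.replace("#", "1").replace(".", "0")
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

pulses = to_pulses(b"Hi")
print(pulses)               # .#..#....##.#..# -- 'H' then 'i' as on/off flashes
print(from_pulses(pulses))  # b'Hi' -- the same data recovered at the far end
print(int("1111", 2))       # 15, the answer's base-2 example
```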
eke6s7 | Why is it not possible to get a source code of a program/game if it's original is lost? | Technology | explainlikeimfive | {
"a_id": [
"fd9u4k1",
"fdasohl",
"fd9tnga"
],
"text": [
"Much for the same reason if you lose a recipe having the resulting cake baked and ready to go generally isn't enough to reconstruct the method and ingredients. When the final program is built, the human readable source-code is compiled in to something the computer actually understands. Anything that isn't vital to that function, like comments, variable names, or superfluous structure that makes it comfortable for humans is discarded. As such even if your decompile the program, you're going to get a source code that is *technically* human readable, but has none of the elements that humans put in to make the code easy to work with. You'd get *a* source, but it's not going to be fun or easy to work with.",
"Great question! Other comments referring to machine code are entirely correct, but I thought I'd expand on the explanation further, just because it is fun. The are many programs that have different source code which produce the same machine code. Imagine that you're taking fruits and vegetables as an input, and writing down their color as the output. Tomato - > Red Apple - > Red Carrot - > Orange Orange - > Orange So the item on the left is your source code, and the item on the right is your machine code. If your machine code spells Red, you can't tell if the source code was the Apple or the Tomato, right? Since you're not literally 5 years old, here's a small example I put together for you. Take a look at this [screenshot]( URL_0 ) test1.cpp is the source code that uses a variable named i, while test2.cpp uses a variable named j. (Never mind what the code does, it is not important for this example.) test1.s is the disassembly and the machine code (that is, the output) generated from test1.cpp, and test2.s corresponds to test2.cpp. Note how the sources are different, but the outputs are the same. Given one of these .s files, test1.s or test2.s, without giving the name of the file to you, you wouldn't be able to tell if the source had an i or a j in it, so the source cannot be recovered from the machine code.",
"It's happening because program is compiled to machine code. During compilation process most useful information for recreating original source code (class, method, variables names) is getting lost, additionally some compiler optimizations can drop some parts of code. Moreover in some programming languages (JS, Java) is common to use obfuscators to prevent distributable binaries be readable. Even if some methods of decompilation exists, produced code still need a lot of work to become somewhat readable and useful. Sometimes it's easier to recreate it again from scratch."
],
"score": [
39,
8,
4
],
"text_urls": [
[],
[
"https://imgur.com/Vu0D443"
],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
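The second answer for eke6s7 shows two C++ files that differ only in a variable name yet compile to the same machine code. The same effect can be demonstrated without a C++ toolchain using CPython's compiled bytecode; this is a sketch under the assumption that you are running CPython, where names live in separate metadata rather than in the compiled instructions:

```python
import dis

def area_i(width, height):
    i = width * height      # local variable named i
    return i

def area_j(width, height):
    j = width * height      # identical logic, local variable named j
    return j

# The compiled instruction bytes are identical: the variable name was "lost"
# at compile time and only survives in separate metadata (co_varnames).
print(area_i.__code__.co_code == area_j.__code__.co_code)    # True
print(area_i.__code__.co_varnames, area_j.__code__.co_varnames)

# dis can still display the name 'i' only because CPython keeps that metadata
# around; a stripped machine-code binary has no such table, which is why
# decompilers are forced to invent placeholder names.
dis.dis(area_i)
```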
ekjvbb | Why do FM radio stations sound clear but AM radio stations seem fuzzy? | AM radio stations seem to always have a fuzzy, unclear reception but FM radio doesn’t seem to have that issue. | Technology | explainlikeimfive | {
"a_id": [
"fdbzlpq",
"fdbzk6g",
"fdc5p3t",
"fdcntqa",
"fdbza9m"
],
"text": [
"AM radio is like communicating by changing the brightness of a light. Lots of things can block the light making it seem dimmer than expected. FM radio is like communicating by changing the color of a light. Many things block the light making it dimmer, but few things can change its color easily.",
"Imagine a flute player playing in heavy winds. AM would be if the player sometimes played a given note, and sometimes would be silent. If you were far away it would be hard to properly distinguish when he was silent and when the wind blew too much. FM would be if he played continuously but switched notes instead. It would be much easier for you to \"lock on\" to the flute and distinguish one note from the other.",
"AM waves are wide, so they can travel further. FM waves are thin, so you have to be close to the source. So, the FM waves are clearer in most cases compared to than AM waves (unless you live close to an AM station).",
"The way that FM extracts the information from the signal means that it can only \"hear\" one source at a time. AM, on the other hand, hears everything on the frequency. Suppose you have two radio stations at the same frequency. With FM, you will only hear the stronger station (called the capture effect). With AM, you will hear the weaker station in the background. With AM, the noise you hear is other things that emit interference on that frequency. This might be general static, another weaker station, the characteristic dit-dit-de-dit of a GSM phone, the whine from a trolleybus or anything else. With FM, the only interference you will usually here is a sort of cutting in and out with weak stations. Note that, while the effect is all-or-none, it is not a digital signal.",
"Because interference can only change the amplitude of a signal, so amplitude modulated signals can get a lot of noise from interference, which will be converted to sound FM doesn’t suffer nearly as bad interference because the noise only comes from other noise around that is powerful enough to be heard over the sound from the station or when then signal is too weak to be picked up properly"
],
"score": [
1075,
53,
12,
12,
11
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
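The top answers for ekjvbb use brightness-versus-color analogies. The standard textbook signal models make the same point: additive noise lands directly in an AM signal's amplitude, which is exactly what the receiver decodes, while an FM receiver decodes the instantaneous frequency, which that noise perturbs far less as long as the signal is reasonably strong. Here $m(t)$ is the audio, $f_c$ the carrier, and $n(t)$ the noise:

```latex
% Amplitude modulation: the audio rides on the envelope, so added noise n(t)
% is heard directly after demodulation.
\[
  s_{\mathrm{AM}}(t) = A_c\,\bigl[1 + \mu\, m(t)\bigr]\cos(2\pi f_c t), \qquad
  r(t) = s_{\mathrm{AM}}(t) + n(t).
\]
% Frequency modulation: the audio is carried in the instantaneous frequency,
% which additive noise shifts only slightly when the signal is strong enough.
\[
  s_{\mathrm{FM}}(t) = A_c\cos\!\Bigl(2\pi f_c t + 2\pi k_f \!\int_0^{t} m(\tau)\,d\tau\Bigr),
  \qquad f_i(t) = f_c + k_f\, m(t).
\]
```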
ekliit | Why do scratches and smudges on a video game disc affect the video game itself? | And do/can scratches and smudges cause games to freeze? | Technology | explainlikeimfive | {
"a_id": [
"fdc9bwn",
"fdc9ex2"
],
"text": [
"Imagine you have to run round a full race track within a certain time, but there are obstacles (which you weren’t expecting) in your way to interrupt your flow and momentum. The track represents the data on the disc, the obstacles represent the smudges or scratches.",
"Well, if your glasses are scratched, you might not be able to read a book correctly. As for why it causes the game to freeze, computers do exactly what you tell them to do, if they can't read part of their instructions, they'll often just freeze when they get to that point. Even if the programmers tried to account for disc damage, if the unreadable data is required for the program to work, you have to have redundancy in how that data is stored, which reduces the usable space available. If the scratch is deep enough, it can destroy the actual data on the disk, like accidentally tearing the page out of a book."
],
"score": [
5,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
ekoyd2 | What is static and why does it always have that black and white fuzzy look no matter where you see it? | Technology | explainlikeimfive | {
"a_id": [
"fdcxyuu"
],
"text": [
"TV static? That's the cosmic microwave background. It's literally the birth-flash of our universe that has been shifted to longer and longer frequencies. Since the birth of the universe is the source, no matter where you point your antenna, you'd see it. As for WHY its black and white, the input signal swings WILDLY from a high power to a low power, and the TV cannot process it into colors as that requires a timing signal (which it isn't getting), so the signal it's being force-fed it applies to all 3 color bands at the same time. So, if it gets a strong signal, that section of the line turns white. If it gets a weak signal, it turns black."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ekpo1j | - Why combustion engine cars still use lead acid batteries when more advanced batteries like those used in smartphones are available? | Technology | explainlikeimfive | {
"a_id": [
"fdd0vcu",
"fdd0eyt",
"fdd1zjf",
"fdd61n9"
],
"text": [
"Cars don't need a lot of energy in their batteries - they need to get the energy out *really fast*. Lead acids are really good for this and can take a fair amount of abuse, and while a comparable lithium battery could be significantly smaller it would also require bulky and expensive regulator components while itself being more expensive. Easy to just stick with the bigger battery that goes for half the price.",
"Lead acid batteries are more suitable to cars because they have a high input/output compared to lithium ion, and lithium ion is crazy expensive",
"In addition to the \"cold cranking amps\" that /u/TheJeeronian mentions, lithium batteries do not do well in high temperatures (\"high\" being > 45 C / 113 F). At all.",
"A few of them do. My 2016 BMW M4 uses a lithium ion battery for example. The advantage is it's supposed to last about 10+ years and it weighs less than a standard lead acid battery. The downside is it costs close to $1000... That's a price that the vast majority of car buyers and owners are not willing to accept."
],
"score": [
14,
6,
5,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
ektbbr | How does call tracking work and how different is it from tracking calls in movies? | Technology | explainlikeimfive | {
"a_id": [
"fddiw8e"
],
"text": [
"Call tracking works by determining the distance between the phone and nearby cell towers. The tower sends a message to the phone, and the phone sends one back. By measuring the time between sending and receiving a circle can be drawn around the cell tower. So the phone can be anywhere on (or near) the circle. If the phone does this with multiple towers, the most likely position is where the different circles meet. If they do this with 2 towers, there are two points where they meet and two possible locations. If they do this with 3 towers then the location can be pinpointed and we know where the phone is. This interaction between phone and tower usually happens once someone calls 911 or if the phone is bugged. In the movies they show fancy live tracking, but this is only really possible if the connection is perfect and the person has done nothing to hide the phone from such connections. The secret service has ways to track phones a bit more movie-like but I keep in mind the fact that they don’t need to track the phone if they have eyes on, so they usually track the phone until they find the person or can follow the car, at which point they don’t need to know the location anymore."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
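As a follow-up to the circle-drawing explanation above, here is a minimal sketch of how three tower-to-phone distances pin down a position. The tower coordinates and distances are invented example numbers; real systems have to cope with measurement error, but the underlying algebra is this.

```python
# Toy trilateration: given the ranges a phone measured to three towers,
# solve for the point where the three circles meet.

def trilaterate(towers, distances):
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    # Subtracting the circle equations pairwise removes the squared terms and
    # leaves two ordinary linear equations: A1*x + B1*y = C1, A2*x + B2*y = C2.
    A1, B1 = 2 * (x2 - x1), 2 * (y2 - y1)
    C1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    A2, B2 = 2 * (x3 - x2), 2 * (y3 - y2)
    C2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = A1 * B2 - A2 * B1          # zero if the three towers sit on one line
    return (C1 * B2 - C2 * B1) / det, (A1 * C2 - A2 * C1) / det

towers = [(0.0, 0.0), (6.0, 0.0), (0.0, 8.0)]   # km, assumed positions
distances = [5.0, 5.0, 5.0]                     # km, measured ranges
print(trilaterate(towers, distances))           # -> (3.0, 4.0)
```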
eku4xd | Why do all diodes have a forward voltage? | Technology | explainlikeimfive | {
"a_id": [
"fddnbeb",
"fddswnx"
],
"text": [
"Sounds oddly like school homework. The electrical characteristics of the element silicon (and germanium) are such that a current flow will be allowed after a certain amount of pressure (voltage) is reached. The amount is different between the two materials. Those same properties prevent a flow in the opposite direction. Hopefully that gives you a simple enough answer while being not enough to satisfy your teacher.",
"Imagine a diode as a one way water valve with a ball and spring, a certain amount of pressure is needed to overcome the spring force keeping the valve closed, that pressure is the forward voltage."
],
"score": [
9,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
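To put numbers on the forward voltage discussed above, here is a small sketch of the standard Shockley diode equation. The saturation current and ideality factor are assumed typical values, not figures from the thread; the point is just that the current stays vanishingly small until the voltage approaches roughly 0.6-0.7 V for silicon.

```python
import math

# Shockley diode equation: I = Is * (exp(V / (n * Vt)) - 1)
I_S = 1e-12          # saturation current in amps (assumed typical value)
N = 1.0              # ideality factor (assumed)
V_T = 0.02585        # thermal voltage in volts at roughly room temperature

def diode_current(v):
    return I_S * (math.exp(v / (N * V_T)) - 1.0)

# Current grows exponentially, so it only becomes noticeable near ~0.6-0.7 V.
for v in (0.1, 0.3, 0.5, 0.6, 0.7):
    print(f"{v:.1f} V -> {diode_current(v):.2e} A")
```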
ekvw4f | Why can't I, a nearsighted person, use a VR headset without my glasses? Shouldn't everything still be clear since it's just a screen close to my eyes? | Technology | explainlikeimfive | {
"a_id": [
"fde1j5c",
"fddwzn0",
"fdfmcso",
"fdenjyz",
"fdfkkt9"
],
"text": [
"Glasses refocus light to correctly hit misshapen eyes. Each person, sometimes each eye, will have different corrections that need to be made. If a \"normal\" sighted person wears someones prescription glasses everything will look distorted. The lenses in a VR set also refocus light, but are designed to simulate distance rather than correct for eye shape. Wearing your glasses or contacts with the vr headset will add the corrections needed for your eyes.",
"The lenses in the headset simulate vision at a specific distance. If I recall that is 6 to 10 feet. Most nearsighted people have vision loss at this range.",
"I don't know if this is against the rules, please remove it if it is For yourself and anyone else who needs it - [VR Lens Lab]( URL_0 ) makes prescription lenses for VR. If your glasses are too large/fragile/ or it's just uncomfortable to wear them",
"VR headsets have unglasses (like glasses but the opposite) so that normal vision people feel like they're looking far away. So then you have to wear glasses to get it closer again which is what your eyes think is far away. You could maybe take out the unglasses (lenses) and be fine if you were really really nearsighted (like legally blind from it.)",
"I am nearsighted and it doesn't matter whether I do or do not wear my glasses with a VR headset. It looks fine either way. Blew my goddamn mind."
],
"score": [
244,
33,
18,
6,
3
],
"text_urls": [
[],
[],
[
"https://vr-lens-lab.com/"
],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
el2u9i | how solar energy is converted into electricity | ‘Tis for a project my brother is doing where he has to summarize solar energy using vocabulary that doesn’t make the kids he’s presenting to confused I don’t know the answer, nor do I know how to explain it in simple terms | Technology | explainlikeimfive | {
"a_id": [
"fdf98cl",
"fdf8yy8",
"fdfc7pl"
],
"text": [
"Sunlight comes as packets called photons. When a photon hits electrons in a metal, the electrons can escape and go to a another metal plate which will induce a difference in charge between the two connected points as the metal from which the electron left is positive and the one where it went to is negative, therefore a potential difference is created and a charge will move through the connections. Also known as the photoelectric effect.",
"Electricity can be thought of as tiny particles called electrons all running in a line. Electrons are found in every atom on earth, but most of them are very tired and doesn't have the energy to get up and go running. The sun can give those electrons some energy, and some very clever scientists and engineers have figured out a way to take energy from sunlight and give it to the electrons so the can go running.",
"It happens in two \"direct way\". On is solar cells where semiconductors directly convert light to electricity, A solar cell is very close to a LED that you shine a light on to get electricity instead of using electricity to get light out. The other way is to concentrate sunlight to boil water either directly or via a buffer of salt you heat up. The water become steam and power a generator. There is also indirect way. Hydroelectric power, wind power, the growing plants you burn is all powered by the sun. Even oil, natural gas or coal is energy from the sun but it was captured millions of a year ago and the biomass was transformed to what we can extract now. It is only nuclear power and power where you use the heat down in the earth that is not energy from our sun."
],
"score": [
12,
6,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
el3nmt | Why are drone strikes on moving targets so accurate, how does the targeting technology work? | Edit: Damn, I did not expect so many responses. Thank you, I've learned a fair amount about drone strikes in the last few hours. | Technology | explainlikeimfive | {
"a_id": [
"fdfet0f",
"fdg0dwd",
"fdffo40",
"fdfns19",
"fdfom55",
"fdg0qbd",
"fdfgf33",
"fdg5db5",
"fdfzct6",
"fdgqpeq"
],
"text": [
"All the US-operated ground-strike UAVs use the AGM-114 'Hellfire' air-to-ground missile, in addition to several types bomb. The hellfire missile, as well as the some types of guided bomb, are guided with laser beam riding. Basically, there is a fancy [dome camera]( URL_0 ) on the bottom of the drone with a powerful laser pointer with a very specific color that isnt visible to human eyes. In order to guide the missile to a target, the camera points the laser at the target, and a fancy camera on the front of the missile uses fins on the missile to steer it to point at the laser dot on the ground. If the target is moving, the camera just moves the laser to follow the target as it moves, and the missile will continually adjust to point at the laser dot.",
"So you know how your cat follows the laser as you point it on the wall and will jump on your aunt when she isn't looking and I point it at her back? Now imagine the cat was thrown out of an airplane and blows up when it catches the dot.",
"Drones are just remote control aircraft, and they can employ the same guided bombs and missiles that manned aircraft do. The drone operator \"paints\" the target with a laser on the drone, and the missile or bomb follows the laser to the target.",
"The ultimate reason why is because America spends massive amounts of money on accuracy, effectiveness, and reliability. NASA budgets for landing a man on the moon was pennies compared to defense budgets. I've worked in both fields for decades.",
"some older generation laser/ thermal imaging seekers could be defeated temporarily at least by close flare/ smoke ejectors and/ or chaffe bursts as well by ' dazzle ' multi faceted IR/ UV (?) reflectors, attempting to actively misguide various guided ground and air to air weapons. fog, bad weather, thick smoke and/ or industrial smog could significantly degrade their capabilities. then came GPS and suddenly far less missed, allowing for smaller but far more accurate weapons.",
"Missile follows a laser being fired by a targeting pod on the UAV. The targeting pod camera can follow the target it's shooting the laser at simply by tracking the difference in contrast between the target object and the ground.",
"You have a drone which is basically a big remote controlled plane. You shoot a missile that is either manually or automatically guided Manually guided means you control it with a camera Automatically guided means some other form of target identification is needed Targeting can be from any number of options. Heat seeking missiles target heat signatures. Laser guided follow a laser aimed at a target. And there are more options & #x200B; They are accurate because a lot of money was spent into making them accurate, mostly try something and adjust until it is good enough",
"So what makes a drone stay focused on a target then? Does the drone map an outline of what the shape of it is? And then the infra-red (or whatever) lasers just keep focused on it, as well as continuously scanning around it to make sure it stays locked on it? I was wondering like what if a similarly-shaped object came into close proximity. Would the drone be able to differ between the two if they were very similar? Say a basketball was being tracked as it bounced/rolled down a hill, and a soccerball either hit the basketball, or rolled/bounced alongside it. Or maybe even identical basketballs. Could the drone stay tracked on the one it was set for, even if they were both madly bouncing around in a small area?",
"Think of the drone as a remote control plane flown by one person with another operating a very powerful laser/camera. Now think of that laser like a flashlight. At the distance the plane operates the laser looks more like a large flashlight beam than a laser beam. The AGM-114 Hellfire missile has a seeker on the front, think of this as an eye. When the missile is shot the eye on it searches for the flashlight beam and attempts to guide itself to it. This flashlight beam is essentially \"flown\" onto the target by the camera operator who is well trained at moving the camera/laser. There is a lot more to it than that but that's the ELI5 version. Hope this helped!",
"If you understand how to adjust an angle in order to intercept a target travelling in a specific direction you've got it. That is the fine art of combining trigonometry with ballistics to get a science called Fire Control. The technology is ever changing but the underlying theory has been constant ever since Isaac Newton."
],
"score": [
4940,
4094,
333,
56,
49,
36,
31,
21,
13,
3
],
"text_urls": [
[
"https://www.bhphotovideo.com/images/images2000x2000/flir_n133ed_2_1mp_poe_day_night_1168089.jpg"
],
[],
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
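As a rough illustration of the "missile keeps re-aiming at the laser dot" idea described above, here is a toy pursuit loop. Every position and speed is a made-up number and real guidance laws are far more sophisticated, but it shows why a moving target still gets intercepted.

```python
import math

# Toy "follow the laser dot" loop: at every timestep the missile simply
# re-aims at wherever the dot is NOW, so the track keeps updating itself.
dot_x, dot_y = 1000.0, 0.0       # laser dot sitting on the target (metres)
dot_vx = 15.0                    # the target drives along at 15 m/s
mx, my = 0.0, 500.0              # missile's starting position
missile_speed = 150.0            # m/s
dt = 0.02                        # seconds per simulation step

for step in range(2000):
    dot_x += dot_vx * dt                            # target keeps moving
    heading = math.atan2(dot_y - my, dot_x - mx)    # re-aim at the dot
    mx += missile_speed * math.cos(heading) * dt
    my += missile_speed * math.sin(heading) * dt
    if math.hypot(dot_x - mx, dot_y - my) < 5.0:
        print(f"impact after {step * dt:.1f} s at ({dot_x:.0f}, {dot_y:.0f})")
        break
```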
el58co | Why do some radio stations seem to lose connection at stop lights? | Technology | explainlikeimfive | {
"a_id": [
"fdfrr8e",
"fdfuxhd",
"fdfw8hb"
],
"text": [
"I've never had this happen. FM? AM? Urban area?",
"AM will loose signal under power lines and you will just listen to 60 or 50 hz depending on where you live. But I have never lost signal under a stop light.",
"Theory - stoplights typically have loops cut into the pavement to sense cars arriving. The interference in the loop in the pavement generated by the big metal of a car is what signals the light to change. These emit an RF signal - not something that would ordinarily interfere with your car radio; but maybe YOUR radio is SPECIAL. Crap antenna - weird capacitive gap in the power line - some voodoo that makes it sensitive. If my theory holds; you have the problem ONLY if you are first at the stoplight."
],
"score": [
8,
6,
6
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
el7qqx | How do Machines Calculate Body Fat Percentage? | Technology | explainlikeimfive | {
"a_id": [
"fdg4cfy"
],
"text": [
"Usually scale use two electrod under your feet and use Tiny Ac current. By measuring the impedance and frequency responce of the human body they can deduce the ammount of fat and water."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
elbhrl | How does SpaceX ensure that there is no air-traffic during a rocket launch? | By air-traffic I mean commercial or military flights - how is the air space in general allocated to differentiate between commercial and military purposes? What extra measures are taken to clear the air space during the launch of space missions? The question is for any space organization including NASA. | Technology | explainlikeimfive | {
"a_id": [
"fdgoix5",
"fdgoojj",
"fdh521u"
],
"text": [
"They file paperwork with the FAA to launch so the FAA keeps the area clear of commercial flights. They also launch from military bases (the Cape is considered a military base), so airspace over the area is generally restricted anyways.",
"During launch the airspace around the launch site (usually Cape Canaveral) is a no-fly zone and all nearby aircraft are informed before the flight or by ATC. As well as this, the area is sometimes patrolled by the Air Force (this was during launches of the Space Shuttle or other manned missions but on unmanned flights there isn’t necessarily a defensive wing on hand).",
"Here's an example FAA \"NOTAM\" (Notice to Airmen) for space operations at Cape Canaveral. URL_1 All pilots are required to check for and obey NOTAMs along their route, and air traffic controllers will steer them clear of the area. If a pilot does wander into the airspace, the launch may be scrubbed and the pilot will be in big trouble. Here's the perspective of a private pilot watching a SpaceX launch from outside the restricted airspace, so you can get an idea of how tightly controlled this area is on launch day: URL_0"
],
"score": [
17,
8,
4
],
"text_urls": [
[],
[],
[
"https://www.youtube.com/watch?v=Y1GbjpYyzSA",
"https://tfr.faa.gov/save_pages/detail_9_7920.html"
]
]
} | [
"url"
]
| [
"url"
]
|
elboze | When a game requires "Microsoft Visual C++ (year)" or "DirectX (number)" to be installed along with the game itself, what are those things and how does the game use them to work? | Often when installing a game through a digital storefront, after the game has downloaded and installed, but before running for the first time, it will download various files titled "Visual c++ library 2008" etc, even if previous games have installed them. I suppose the main things I'm wondering are: what are these things, why do the games need them to work, and why aren't they included in the files Steam/Epic/Uplay downloads when you install the game? | Technology | explainlikeimfive | {
"a_id": [
"fdgpq76",
"fdh1zlk",
"fdhedo6"
],
"text": [
"Libraries are pieces of code that can be used by developers across different applications, so they don’t have to do things that have already been done (like directly control the mouse, interface with the operating system, read files, draw on the screen). Developers choose what libraries they want to use, but there has also been a lot of standardization by the game engines on using the ones use referenced. So the engines and games are designed to specifically use those libraries, and thus need them installed to run.",
"Back in the day before Windows, PC's could (mostly) only run one thing at a time. Each program took up the entire screen, you wanted to switch, you had to stop one program and start the next. This was a serious limitation, but it was great for games. They could take over your whole computer, use every last resource, and run really fast. Then Windows came along and let you run multiple programs at once. Instead of sending commands directly to the computer, programs would send them to Windows and it would sort things out. That was fine for a word processor, but for graphically intensive programs sending every pixel update to Windows instead of directly to the graphics card could be painfully slow and make games unplayable. Instead, you would exit Windows (you could do that back then) in order to play your game, then return when you were done. Kind of a pain in the ass. DirectX was Window's solution to this. It is a library to allows programs like games to have direct access to the graphics card once again. However, graphics cards are constantly changing and games are constantly finding better ways to coax a little extra performance out of them, so DirectX libraries were constantly being updated. To avoid incompatibility issues, games found it easier to just include a late enough version of DirectX with their install. C++ is a programming language many Windows programs are written in, and Visual C++ is Microsoft's implementation of it. They added a library of functions many programs use, particularly when it comes to user interface elements. Those programs need that library installed on any machine where they will be executed. Steam and similar services keep DirectX and other shared libraries up to date independently of the programs you download from them.",
"The pace of software development is so fast, and programmers are generally so productive, because they can share and reuse solutions to problems they've seen in the past. Any given problem only really needs to be solved once, and everybody after that can just reuse the same solution over and over again. A package of pre-solved problems is called a \"library\". (On windows, these are generally, but not always, \".dll\" files, which stands for \"dynamic link library\"). When you write a program, you don't have to start at the bottom and build everything yourself. You don't have to write code to put graphics on the screen, or do physics calculations or other stuff like that, those problems are already solved. Instead, what you do is bring together a bunch of libraries with those things already done. You'll start with a library of basic fundamental ideas (\"Microsoft Visual C++ Runtime\", for example), and you'll bring in a graphics/gaming library (\"DirectX\" for example) and a bunch of other things. When you download a game or install it off a CD, the game will generally come with a lot of the libraries the game requires. However, some libraries are either so common that every system uses them, or otherwise are part of the operating system itself. This is the case for Microsoft Visual C++ Runtime and DirectX: They are so common and so platform-specific, that it's expected that they already be installed or that they can be installed separately. Sometimes a game will want to use the most recent version of these libraries to get, for example, better graphics capabilities if possible. Other times you'll want to stick with a specific version because upgrading will cause a breakage. In either case, it's usually easier for every individual game to just go out and download the versions they want without having to search around to see if a suitable version already exists."
],
"score": [
11,
6,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
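The library idea described in the answers above can be shown in miniature with dynamic loading. This sketch assumes a Linux machine (hence the "libc.so.6" name, which is not mentioned in the thread); on Windows the analogous situation is a game asking for the Visual C++ runtime DLLs and failing if they are not installed.

```python
import ctypes

# Dynamic linking in miniature: this script doesn't contain printf itself, it
# asks the operating system to locate it inside a shared library at run time.
try:
    libc = ctypes.CDLL("libc.so.6")                      # Linux C runtime
    libc.printf(b"hello from code that lives in a shared library\n")
except OSError as err:
    # This is essentially the failure a game hits when a runtime library it
    # depends on isn't installed on the machine.
    print("required library not found:", err)
```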
eldp7g | How do they test projector and other LED's that claim to last 30,000 or more hours? Are they just taking educated guesses? | Technology | explainlikeimfive | {
"a_id": [
"fdh3qhi"
],
"text": [
"Very well educated guesses, yes. One way to do it is to take a large sample and run them, and measure how long they last. Of course, that would take years to accomplish, since there are 8760 hours in a year. So instead the process is accelerated. You still take a significant sample and run them. But you make some of the conditions more extreme than the normal operating conditions. Typically, voltage and/or temperature are used to accelerate failure rates. If you do this a few times at multiple voltages/temperatures, you can calculate how much acceleration you get by increasing them. Then you can apply these \"acceleration factors\" to your experiment, and run enough devices to failure in a few hundred or a few thousand hours to figure out the expected lifetime of a typical component. Of course, there are ways to screw this up. You can't use such extreme voltage/temperature that you introduce new failure modes that wouldn't normally occur. Your sample has to be reasonably large and reasonably random, so that it represents a \"typical\" part. You can't have manufacturing issues later that create new failure hazards. Etc. Etc. This technique has been used for years on various electronics devices, and it seems to work pretty well when applied correctly. *Source*: I'm a former semiconductor reliability engineer."
],
"score": [
20
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
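Following the accelerated-testing answer above, here is a small sketch of the Arrhenius acceleration-factor calculation it alludes to. The activation energy, temperatures and test duration are assumed example values, not figures from any real LED or projector datasheet.

```python
import math

# Arrhenius acceleration factor: how much faster a temperature-driven failure
# mechanism runs at the stress temperature than at the normal use temperature.
BOLTZMANN_EV = 8.617e-5                      # Boltzmann constant in eV/K

def acceleration_factor(ea_ev, t_use_c, t_stress_c):
    t_use = t_use_c + 273.15                 # convert Celsius to kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

af = acceleration_factor(ea_ev=0.7, t_use_c=40.0, t_stress_c=105.0)
hours_on_test = 1_000                        # how long the stressed units ran
print(f"acceleration factor: ~{af:.0f}x")
print(f"equivalent use-condition hours: ~{hours_on_test * af:,.0f}")
```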
elghm5 | How do computers check if a file is corrupt? | This is a generalization, but I was thinking about how, if something goes wrong with your save data, most console games/OSes will tell you that your save game is corrupt and that it has to be deleted. How do people check that out? | Technology | explainlikeimfive | {
"a_id": [
"fdhnf7b",
"fdhnmz3",
"fdho8wp"
],
"text": [
"In the programs there is defined how the save file should look, the shape, the colour, the measurements. When the program tries to read the file and for example a part is gone, it can find the shape, but not the colour and measurements. Without all 3 the save file cannot be loaded and will result in some kind of error to the user. Edit: formatting Edit 2; wow my English is crap, but I hope the idea is clear.",
"In regard to a save game, it's a matter of knowing the format the file *should* be in. If it tries to read that file and it's missing necessary information, the file is corrupt and can't be used. There's many tricks to knowing if a file is corrupt without actually loading it though, for example you can calculate a checksum (which is a string of characters) by running the data through some form of calculation. Running the same string through the same calculation results in the same checksum. When it saves the file it can generate a checksum and save that with the file. When you try to load it later you can rerun the same calculation on the data in the file (minus the checksum), to generate a fresh checksum and compare that to the saved checksum. If they match, no data has changed, otherwise something changed and is probably corrupt.",
"Several ways. 1) At the \"raw data\" level. Files are usually saved with a checksum. This is a method that doesn't \"care\" about the data on the file but simply compares the retrieved data with a \"checksum\". This checksum can be a simple count of the number of 0's or 1's in a particular number of bits. So if the checksum says it should be \"odd\" and but the retrieved data shows \"even\", it flags the data as corrupt. 2) All data in a file is usually structured. So the first few pieces might contain a name, the next few pieces of data might be the date, the third piece... So the program reading the data will try to extract this information. If it comes back with nonsense, then the program can respond with \"corrupted file\". 3) File systems themselves have to give an \"address\" and \"length\" - ie where the data is located on the disk drive or SDD. If the OS goes to the file system and it gets a nonsense address (eg if the disk drive has 10 addresses and the file system says go to address 11) then it will flag the file as missing/corrupted. This is a very simplified explanation because things are actually a bit more sophisticated but gives you an idea how this works."
],
"score": [
9,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
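Here is a minimal sketch of the "store a checksum with the save file" technique described above, using Python's standard hashlib and json modules. The file name and save format are invented for the example; real games use their own formats, but the verify-on-load idea is the same.

```python
import hashlib
import json

# Save/load with a stored checksum: if any byte of the payload changes, the
# recomputed hash no longer matches and the load is refused as "corrupt".
def save_game(path, data):
    payload = json.dumps(data).encode("utf-8")
    digest = hashlib.sha256(payload).hexdigest().encode("ascii")
    with open(path, "wb") as f:
        f.write(digest + b"\n" + payload)

def load_game(path):
    with open(path, "rb") as f:
        stored_digest, payload = f.read().split(b"\n", 1)
    if hashlib.sha256(payload).hexdigest().encode("ascii") != stored_digest:
        raise ValueError("save file is corrupt")
    return json.loads(payload)

save_game("save.dat", {"level": 3, "hp": 42})
print(load_game("save.dat"))                 # loads fine

with open("save.dat", "r+b") as f:           # simulate a scratch / bit flip
    f.seek(70)                               # somewhere inside the payload
    f.write(b"X")

try:
    load_game("save.dat")
except ValueError as err:
    print(err)                               # -> save file is corrupt
```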
elhgvx | Where/how is the internet “stored”? Does it have anything to do with servers? | Technology | explainlikeimfive | {
"a_id": [
"fdi00n9",
"fdhu4au",
"fdhua2b"
],
"text": [
"The \"Internet\" refers to the network of interconnected networks. This involves computers, their network interfaces, their signaling media (cables or radio waves), the signaling protocols over those media (electrical signals, optical signals, radio signals themselves), and the data protocols those signals carry. All the internet establishes is that computers can transfer information. Applications are built on top of that. That information can be a phone call, a file transfer, a video stream, a bank transaction, or an HTTP request from a web server. Most of the computers on the internet are transparent, they're the routers, switches, bridges, modems, firewalls, and other devices that exist purely to shape, transform, and direct traffic. Most internet traffic is \"dark\". That is to say, it doesn't correspond to websites. Often this is application and service data - phone calls, video streams, video games, and intermediate results - like an authentication handshake or an SQL query result, that may eventually service a web request. Much of it is encrypted, which I think also constitutes as \"dark\" since we don't know it's contents, just the routing information is open - where it came from and where it's going. Ultimately it does come down to servers. A server is any computer that provides a service. All it does is sit there with an open connection waiting for an authorized user to come and make an appropriate request. An authorized user can be literally anyone in the case of a public website, or perhaps an authenticated user of a database. Ultimately, most websites are a combination of \"static\" and \"dynamic\" components. The static data can be read off a disk, but most often come from a cache - the data has already been loaded off disk and is sitting in memory, for faster access. Dynamic components may be generated by a program in response to a request. In this case, the program is stored on disk, loaded into memory, and ran on demand. The results come from computation, not from file or cache, and this could be anything - dynamic sites can compute numbers, generate audio or video, or events like in a video game. If you own two computers, and each can see the other computer to transfer files, for example, both your home computers are acting as servers in that capacity. Your home computers, tablets, phones, etc. are often regarded as clients as that is their principle role, but in many other capacities, they can also provide many services that make your experience more fluid and convenient. It's how all of a sudden you can get a message that an upgrade is available or you have a new email. We don't traditionally regard your phone as \"a server\" because that's not it's principle role. I have an old desktop computer I've installed software on to make it my server. But servers can be purpose built. A data storage server may exist in a rack-mount case, which gets bolted to a frame, contains hundreds of hard drives, has multiple power supplies, fat power cords that are frankly intimidating and you don't have that kind of service in your home wall outlet, and all consideration is given toward operational efficiency ZERO toward ergonomics (?) ie. server rooms have dedicated air conditioning and are LOUD places because the fans they use in the computers are themselves very loud, because that refrigerated air has to be forced through the case. Your home desktop, by comparison, is very quiet and relies on more passive cooling and ventilation, which is just not afforded in dedicated hardware. 
The processors, too, are very different. A desktop processor like the Intel iCore family has features for media and video games that their Xeon family does not, but the Xeon family has more encryption and virtualization features you don't use at home. Some servers may be \"headless\" (no video output) and \"diskless\" (if it does have a hard drive, it's not for long term storage, but a type of temporary caching), but also have gobs (terabytes or bigger) of memory, because their principle role is to compute and serve from memory because disk is too slow. Where is the internet stored? In data centers. There are commercial data centers that provide the infrastructure just for hosting server hardware; they have redundant connections to peer networks, they have fire suppression, air conditioning, power, cabling, on-site staff, and billing and monitoring. You can rent this space yourself, these services are called co-locations. Some companies, like Google and Amazon, will have their own, privately held capabilities. When I was working in high speed trading, we had small data centers in the building, and owned the fiber optics that ran across the street to the exchange. This is a sort of topic we could ramble on about seemingly indefinitely.",
"It has everything to do with servers. Servers are literally where the information that makes up the internet is stored. They're all connected to a worldwide network that we call the internet, and there are pathways through which the data flows from the servers where it resides to your ISP, then to your device.",
"The internet is just a network, the data is stored on servers which are connected to the network. So when you type of URL, this is like the address of the server so your router will connect you to that server that will exchange data with you. If that server shut down, you won't be able to get to that date, unless the people managing the server made some redundancy, like a backup server or something like that. A server is just like a computer, but built specifically to be good at it's job. You can make your own computer into a server, it's just not gonna be particular good one."
],
"score": [
7,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
elj7wi | Do visually complex images take up more disk space than visually simple ones? | For instance, would a solid red image of 100 x 100 pixels take up the same amount of space as an image of the same dimensions which has a completely random color value on every pixel? Sorry if this doesn't make sense the question is bugging me. | Technology | explainlikeimfive | {
"a_id": [
"fdi842s",
"fdiap2v"
],
"text": [
"It depends on the filetype, but for image types with compression (most of them), yes, it will be smaller. Compression generally works by identifying patterns of similarity, and \"all the same colour\" is as simple a pattern as you can get! I don't understand the script part of the query though",
"We have two different cases: **Uncompressed Images:** A BMP (Bitmap) image created by MS Paint of size 500x500 with all pixels set to red has a size of 750kB. Changing something in the image to make it more random does not change the 750kB. With 500 by 500 pixels there are 250,000 pixels in total. We need 3 values (Red, Green, Blue) for each pixel and these values require 1 Byte each. With that we get 500 x 500 x 3 x 1 Byte = 750,000 Byte of space. So an uncompressed image just stores all pixels regardless of their content. **Compressed Images:** The same red image saved as a PNG image only needs 1.8kB while my modified (more random) version needs 61kB. I don't know how the compression algorithms work behind the scenes but you can think of it a bit like this. \"500x500 and all pixels set to rgb code 255,0,0\" is pretty much all you need to describe the solid red image and it is *a lot* shorter than 750kB. If I add a lot of random stuff, you have to describe a lot more. Note that PNG uses what is called lossless compression. That means that it describes the original in a more efficient way but no information is lost. \"500x500 and all pixels set to rgb code 255,0,0\" is a lot shorter but you can use it to reconstruct the original perfectly. JPG e.g. uses lossy compression. It also describes the original in a more efficient way but it is allowed to cut corners so to speak. For example I could reduce my description even further by saying \"500x500 and all pixels set to red\". I'm leaving out what particular shade of red so you can't reconstruct the original perfectly anymore but the description is still good enough and shorter. In reality the compression is a lot smarter than that and doesn't throw out *that much* information of course. (The solid red picture in my case is actually larger as a JPG than the PNG but in general JPG should be smaller) Edit: Additional note regarding PNG and JPG. JPG performs very will with real life photography. Basically, the complexity in the image helps with hiding the fact that JPG is cutting corners to save space. If you save something like text as JPG, you'll get very noticeable \"compression artifacts\". Bascially, you see the corner cutting and it makes the image worse. PNG doesn't have this issue for text but might result in larger files."
],
"score": [
10,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
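Following the compression answer above, this sketch reproduces the 500 x 500 x 3-byte arithmetic and then runs a solid-colour image and a random one through zlib (the same DEFLATE family of lossless compression that PNG builds on). The exact compressed sizes will vary from run to run; the contrast between the two is the point.

```python
import os
import zlib

# The uncompressed arithmetic from the answer above, then a lossless
# compressor's view of "simple" versus "complex" pixel data.
width, height, bytes_per_pixel = 500, 500, 3
raw_size = width * height * bytes_per_pixel
print("raw size:", raw_size)                         # 750000 bytes

solid_red = bytes([255, 0, 0]) * (width * height)    # every pixel identical
random_pixels = os.urandom(raw_size)                 # maximally "complex" pixels

print("solid red compressed:", len(zlib.compress(solid_red, 9)))     # tiny
print("random compressed:  ", len(zlib.compress(random_pixels, 9)))  # ~raw size
```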
elk0h1 | How do we know what the correct time is? | Technology | explainlikeimfive | {
"a_id": [
"fdig3vl"
],
"text": [
"It doesn't matter because time is arbitrary. It's whatever time we decide it is. Before clocks, people went by the sun. When the sun reached it's maximum height in the sky for the day, that was noon, and noon differed depending on where you were. It doesn't matter how many seconds or minutes have passed in 300 years or whatever because that's not meaningful or useful to know. It's doesn't matter in the slightest. It only matters what time we all agree it is now, and we have extremely accurate atomic clocks that measure time."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
elkul4 | What is the difference/relation between a node/server/cluster? (Preferably in the world of high performance computing) | Technology | explainlikeimfive | {
"a_id": [
"fdikdtl"
],
"text": [
"A server is basically any single computer. A cluster is compromised of many servers. A node is a server that is a part of a cluster. And a supercomputer is the entire cluster or multiple clusters connected by an interconnect. Though interconnects also connect nodes."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
elpidl | What is the purpose of the "turbo" button on vintage PCs? | Technology | explainlikeimfive | {
"a_id": [
"fdjfjpz"
],
"text": [
"Its for compatibility for applications that are CPU speed dependant. The turbo button actually slows down the CPU clock. Here's a [video]( URL_0 ) on the subject."
],
"score": [
7
],
"text_urls": [
[
"https://youtu.be/p2q02Bxtqds"
]
]
} | [
"url"
]
| [
"url"
]
|
|
elr8fd | what are the benefits of recycling glass over just using more sand? | Technology | explainlikeimfive | {
"a_id": [
"fdjou4c"
],
"text": [
"It takes way less energy to melt glass for reuse than to start with raw materials. I work in a plant that makes glass and when we need to speed things up we run waste glass in."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
elsosy | Fighter planes have technology to disrupt ballistics that are headed toward them. Why can military bases not have this same thing? | Technology | explainlikeimfive | {
"a_id": [
"fdjxxto",
"fdjxzej"
],
"text": [
"Because they can not move. To hit a fighter plane in flight, you have to track it. If the tracking is disrupted, the chances to hit are basically 0. You don't need to track a base, if you know where it is you can just shoot at it directly. And it's big enough that even if a guided weapon looses its tracking before impact it will hit it most of the time.",
"The key word here is \"ballistic\" - following a curved trajectory. Anti-aircraft missiles typically aren't considered ballistic missiles, they're guided weapons that will pursue a moving target. Advanced military aircraft employ a number of techniques to blind or baffle those guidance systems and escape. A military base isn't moving, you can lob artillery at it using only gravity and math and hit it with great accuracy. The missile does not require advanced guidance since the location of the target is known and static. Hitting a plane is dodgeball. Hittting a base is a free throw."
],
"score": [
11,
9
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
elwz8k | What was that short barrel at the end of old USB cables, and why are they gone now? | Technology | explainlikeimfive | {
"a_id": [
"fdkrow8",
"fdks4pu",
"fdks6bm",
"fdkrxlc",
"fdkro94"
],
"text": [
"it's a ferrite bead, which is used to mitigate the effects of the inductance of the wire running through it. We don't need them nowadays on USB cables because we use better manufacturing techniques that better match the 'differential pair' of twisted wires, and better controllers at either end that can cope with a little interference.",
"It’s a magnetic coil filter. It’s used as an anti-interference tool to keep out interference on your wires where electrical signals, data, passes trough. Interference on your wires can interfere with the polarity of your signals which in turn gives false readouts. Data are a bunch of 1 and 0 (like bits). 1 is a positive signal, 0 is a negative. With interference that 0 can turn to a 1 or the other way around.",
"Those are called ferrite beads or chokes; any long wire has the potential to act like an antenna, that can either receive or transmit radio frequencies; a moving electrical current will potentially transmit an RF signal off of the wire transmitting it, and this can then interfere with other things. The ferrite choke basically absorbs all of the RF signals that are coming to/from the device to prevent interference. This isn't typically a big deal over USB unless you're using it to transmit a lot of power, and they can cause issues in the power ranges that USB typically uses, so they're often avoided for that use-case, unless the cord is going to be used in an environment where electromagnetic interference is going to be a constant problem.",
"They are ferrous rings that reduce interference from electronics. They are still used on some USB cables.",
"That was an EMI (Electro-Magnetic Interference) filter. It was essentially designed to remove interference from other devices/power sources. These days, devices are no longer as prone to issues due to this EMI as improvements have been made to the USB controllers to account for this interference, so most cables no longer include them."
],
"score": [
26,
7,
7,
5,
4
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
elz5yh | How do long exposure shots work? | I saw a while ago this one pic where the photographer took an exposure shot of the Falcon 9 launch. I understand that it layers images (I think) but then how does the smoke from the engines not show? | Technology | explainlikeimfive | {
"a_id": [
"fdlhrbf",
"fdl9zxo"
],
"text": [
"Each time a photo is taken, the shutter (little door that normally blocks light from hitting the sensor or film) is opened for a specific amount of time. For normal photography, this can be measured in the hundredths or thousandths of seconds (really short). For long-exposure photography, the shutter is open for seconds, minutes or even hours. In normal lighting conditions, opening the shutter this long would mean an over-exposed (very bright) resulting image. Probably unusable... But with long-exposure photography, you are typically taking a photograph of a very dark scene such as the night sky. Related to this is the aperture. This is a circular (mechanical) opening that can be made bigger and smaller. When it is bigger (wider), more light is let into the camera (low F-stops if you want to read more). When it is smaller it lets in less light (higher F-stops). So the longer you keep the shutter open, the smaller the aperture 'may' need to be, depending on how much light is available. Now lets think about an extreme example. Lets say that you are taking a photo of a candle that is 1000 meters away from you (3280 feet), in a totally dark room or environment (no other light). The number of photons (particles of light) will be very small that come in contact with the image sensor (or film). Imagine a single photon hitting a single pixel (very small portion of the sensor) of the image sensor. If you only have the shutter open for 1/100th of a second, only a few (relatively) photons will hit the sensor. This would result in a totally blank image (probably). Now do the same thing, but keep the shutter open for 10 minutes. Many more photons will impact the sensor and you would have an image captured. Now imagine the same experiment with the candle, but this time the candle moves (like a rocket). The photons hitting the sensor would shift over time (10 minutes in the example). Therefore, a streak of light would be captured on the resulting image. To directly answer your question about the smoke: The smoke would be so dim in comparison to the light from the engine that it would not be visible in the image. On top of that, the smoke was probably moving from wind, so even less light would be emitted from the same position of the smoke.",
"Cameras work by limiting how much light is allowed to reach the film, which is light sensitive. Long exposure allows really dim things, like stars/planets, to be able to be seen by giving more time for light to hit the film. Edit: the streak of light you see in this [Falcon 9 image]( URL_0 ) is the exhaust backlit by the engines flames."
],
"score": [
5,
4
],
"text_urls": [
[],
[
"https://wereportspace.com/img/uploads/2019/06/64680799-447028106111514-4817939265389112784-n-190625151732-800x445.jpg"
]
]
} | [
"url"
]
| [
"url"
]
|
elzehs | What's physically happening inside a missile that allows it to adjust its flight path to hit a target even after it's been launched from another country? | Technology | explainlikeimfive | {
"a_id": [
"fdlac2o",
"fdli19j"
],
"text": [
"Inside a missle not much. The control surfaces and thruster, depending on the missile type, can adjust the trajectory.",
"No one is going to talk about the little dudes inside there who sacrifice their lives for the betterment of humanity? Steely eyed missile men"
],
"score": [
6,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
em02m7 | OpenMP vs MPI | I have tried to ask my PI and several computer science friends what the heck the difference is but it still has not made sense. Please help! | Technology | explainlikeimfive | {
"a_id": [
"fdlmwi7"
],
"text": [
"OpenMP is for running a program in parallel on one processor (one computer). MPI is for passing data between machines, in order to run a program in parallel on multiple computers, that have each their own processor and memory."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
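Since OpenMP itself is a C/C++/Fortran feature (compiler pragmas), the sketch below is only a Python-flavoured analogy of the distinction drawn above: shared-memory workers inside one process (the OpenMP model) versus separate processes that exchange messages via mpi4py (the MPI model). mpi4py is assumed to be installed; none of the names here come from the original answer.

```python
from threading import Thread

# --- shared-memory style (the OpenMP idea): workers in ONE process all see
# --- and update the same data directly.
numbers = list(range(1_000_000))
partials = [0] * 4

def worker(idx):
    partials[idx] = sum(numbers[idx::4])     # reads shared data, writes shared result

threads = [Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("shared-memory total:", sum(partials))

# --- message-passing style (the MPI idea): each rank has its own memory,
# --- possibly on another machine, and results travel as messages.
def mpi_total():
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    local = sum(range(rank, 1_000_000, size))        # each rank only sees its slice
    total = comm.reduce(local, op=MPI.SUM, root=0)   # partial sums sent to rank 0
    if rank == 0:
        print("message-passing total:", total)

mpi_total()   # works with one process too; use `mpiexec -n 4 python app.py` to spread it out
```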
em2drz | Why are videos sent from Android to iPhones and vice versa always blurry and corrupt? | Technology | explainlikeimfive | {
"a_id": [
"fdlwc4n"
],
"text": [
"Depends, what are you using? If your texting it using the built in app over SMS, that’s the reason why. SMS is a messaging service that is very old and can only send pictures by compressing them to an almost laughable state. I would recommend uploading them to google drive and then giving the recipient the shared link, or just share the file directly to them. Hope this helps. If you have any other phone tech questions, reach out to me."
],
"score": [
15
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
em2htt | How does wireless charging work? | Technology | explainlikeimfive | {
"a_id": [
"fdlw766"
],
"text": [
"A moving electrical field creates a moving magnetic field, and a moving magnetic field creates a moving electrical field. Transformers, that change the voltage of a source, work like this, and there is no actual wire connection in a transformer. The magnetic field induces a current in the nearby wire. This is how wireless charging works: it's a \"transformer\" that uses the magnetic field created by the source to induce an electric current in the receiving device. It's simple but not as efficient as a wired connection. The small amounts of actual energy involved make the efficiency trade-off small enough to live with."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
em4fyn | Why does the battery suck more than the previous version of Android after every major version update? | Technology | explainlikeimfive | {
"a_id": [
"fdma5e0"
],
"text": [
"Every new OS update has more features than the previous one. It is possible that all those new features are draining more of your battery compared to the older, simpler OS. The new OS might not be as optimized for older devices (=it is built with newer devices in mind, and yours is just made to run it as an afterthought). Let's assume that it's not OS. What else could it be? Batteries are physical items that chemically degrade over time. As you use your phone, your battery life will diminish. This happens slowly and gradually, so you don't notice it happening over short periods of time (for example, you can't really tell how much your battery degraded today over yesterday), but you can tell over long periods of time: \"last year my phone used to last me 2 days, now it is done by noon the second day\". As each major OS update usually comes out every year or so, you tend to correlate the two: \"The new OS update came out, and I am scrutinizing my phone's performance as I am getting used to it. While I was scrutinizing, I realized that my battery lasts a smaller amount of time\". And thus, you are eager to blame the new OS because you only now noticed the big difference. It could also be something completely benign. When the new OS comes out, you play with the new features more, you use your phone more, so now you spend more of its battery. Modern OSes also need time to \"index\" themselves, put everything in caches, etc, after they get installed, which might take a couple of days of slowly rebuilding. During that time, it might use more power, and it will eventually right itself in the next few days."
],
"score": [
10
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
em52gi | Why do record players still make sound with the speakers off? | Technology | explainlikeimfive | {
"a_id": [
"fdmefjf"
],
"text": [
"Back in the day, those old Victrolas didn't have speakers, just a huge horn protruding thing. Records used to be both recorded and played with no electricity. Sound comes from the needle in the groove and all the electronics do is amplify it- just like you can still hear an unplugged electric guitar"
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
em7k7i | - why does old tech slow down? | Technology | explainlikeimfive | {
"a_id": [
"fdmte9h",
"fdmu5wn"
],
"text": [
"It's normally not likely that the electronics of the device have degraded in any way. Typically any failure of the processor or ram will give you a catastrophic failure (like bricking) instead of slowing down. Apps evolve over time and adapt to new technology. So, an app that was basic in 2012 to work on slower phones and tablets now has many more features because those devices are much more powerful now. Those features make all apps more resource hungry, and your tech has to work harder to keep up. What's strange is that if you did a complete factory reset and didn't allow it to update any apps (or download any non-factory apps) - it should be as fast as it was out of the box when using the apps included on the tablet in 2012. **If it's connected to wifi, your tablet is probably using a lot of resources to update it's apps and OS fixes in the background.** It should be faster when it's done updating, but is likely never going to be as fast as it was because the apps are going to be demanding more resources now than in 2012. In the rare case that it is actually some form of hardware degradation, remember that it's the exception rather than the rule.",
"It's funny because I am actually typing this with my Nexus 7. And mine works just fine because I don't update and overwork it. As for your case there are numerous things that can factor in the unresponsiveness: - software: is it the original android version when it was shipped to you. Because any updates of os from when you were using it could make things slower - hardware: some components may deteriote over time and make certain functions less responsive. In this case it could be the touch sensor - perception: it may not be as fast as you REMEMBER it to be. I am using mine and it's noticably lower than modern devices but not downright unresponsive/ frustrating to use."
],
"score": [
10,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
em8yma | what's the difference between software and firmware? | Technology | explainlikeimfive | {
"a_id": [
"fdn1i3w",
"fdn019m"
],
"text": [
"Firmware is a special kind of software that is permanently stored in non volatile memory in the hardware, such that it is always available. These days the distinction is less than it used to be, in that previously software could only be loaded from some 'external' media (tape, diskette, hard disk etc) but these days with flash and emmc so large and cheap, the distinction is not what it once was. It used to be that Firmware was stored in ROM and software was loaded into RAM - which meant that on reboot the software in RAM disappeared, but with current technology that divide is not as clear as it once was.",
"The easiest way to remember: - software can usually be modified without rebooting the hardware. - firmware often cannot be modified without rebooting the hardware."
],
"score": [
10,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
emaamy | How do IR headsets in helicopters and planes work? | I need to take a helicopter to work (oil rig). We use headsets that have an IR receiver on them, we get music and information from the cockpit on them. I tried to cover up the IR sensor, but the music never stops. When phones used to have IR, the transfer would fail if you touched the phone, so how do the headsets work so well? | Technology | explainlikeimfive | {
"a_id": [
"fdn8lru",
"fdn8w0a"
],
"text": [
"IR? Are you sure they're not RF? I've never heard of a wireless headset that uses infrared communications, but then again I've never taken a helicopter to work on an oil rig.",
"almost certainly radio (rf) not infrared. they work the same as any other radio transmitter receiver, but aviation headsets usually have well designed noise canceling features built into the mic and earpieces"
],
"score": [
7,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
embdbv | Where do deleted files on phones and game consoles go? Does the data completely dissipate? | Technology | explainlikeimfive | {
"a_id": [
"fdng8cg"
],
"text": [
"There are programs that can completely scrub data from a hard drive (basically, it goes to the area on the disk where the file was stored and sets everything to 0), but in general, this is not how deleting files work (especially since truly deleting data takes time and effort that the computer could be spending doing the things you actually want it to do). Instead, most files have a header with some amount of metadata (such as file name, how big the file is, etc.). When split up among multiple places on the hard drive, it also contains links to the next part of the file. But, there's also a flag value that, if set to true, means the file's good and usable, while if set to false, marks the file as \"deleted.\" If, while saving a file, a computer comes across a chunk of memory marked as deleted, it will just start writing on top of whatever was there."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
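As a toy illustration of the "deleting just flips a flag" idea above, here is a pretend directory table over a small byte buffer. Everything about it (the allocator, the table layout, the file names) is invented for the example; real filesystems are far more elaborate, but the observable effect — deleted bytes survive until something overwrites them — is the same.

```python
# A toy directory table: "deleting" only flips a flag, and the bytes stay on
# the pretend disk until a later write reuses that space.
disk = bytearray(64)                       # stand-in for the storage medium
table = {}                                 # filename -> (offset, length, deleted)

def write(name, data):
    # naive allocator: reuse the space of a deleted file if one is big enough
    for victim, (off, length, deleted) in list(table.items()):
        if deleted and length >= len(data):
            disk[off:off + len(data)] = data
            del table[victim]
            table[name] = (off, len(data), False)
            return
    off = max((o + l for o, l, _ in table.values()), default=0)
    disk[off:off + len(data)] = data
    table[name] = (off, len(data), False)

def delete(name):
    off, length, _ = table[name]
    table[name] = (off, length, True)      # data is NOT wiped, just marked free

write("save1", b"HELLO WORLD")
delete("save1")
print(bytes(disk[:11]))                    # b'HELLO WORLD' is still there
write("save2", b"NEW DATA")
print(bytes(disk[:11]))                    # b'NEW DATARLD' - partly overwritten
```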
embg0p | What is a Content Delivery Network (CDN)? | Technology | explainlikeimfive | {
"a_id": [
"fdnka4y",
"fdngvhw"
],
"text": [
"All the stuff you see when you load a webpage is stored in a computer (technically, called a server) somewhere in the world. The text, the images, videos, etc. If you have a webpage only few people visit, there is no problem. If you have a webpage a LOT of people visit, like for example netflix, you have a LOT of people trying to access the images, videos and text of the webpage, from a single computer. It would just grind to a halt. So they have this CDNs that are, basically, a lot of computers (servers) all around the world storing what you want to see. When you access the webpage, the CDN delivers the videos, images, and so on, from one of the hundreds or thousands of servers they have, usually they choose the closest and/or the least used in the moment. So, thousands of people can be surfing the same webpage, but not collapsing a single server.",
"A CDN is an entity (sometimes a whole company, sometimes a branch of an existing company, generally they own a lot of hardware) whose job is to distribute popular files around the Internet on behalf of the people who actually make those files. Eg: nVidia makes drivers for their 3d video cards, but they're not in the business of delivering such huge files to millions of gamers around the world... but a CDN is. So nVidia pays another company to make sure those drivers download fast no matter who in the world asks for them even if a million users download them all at once. You can imagine other companies have similar problems. Netflix and Disney+ have videos to deliver to viewers. Microsoft has Windows updates, etc. Some do their own CDN work, some contract another to do it."
],
"score": [
5,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
emehft | If someone working at my ISP wanted to see what I'm doing online, what would they actually see? | Technology | explainlikeimfive | {
"a_id": [
"fdo3lvv",
"fdo447l"
],
"text": [
"Essentially anything that goes over the network with some caveats. Generally, they can see any content you visit over http (unencrypted), and they can see the domains you visit over https (encrypted) but not the actual content (so they would see URL_1 , but not URL_0 ). They can also see other non-web traffic (such as FTP, SSH, SMTP, any known video game traffic, etc) depending on whether or not that traffic is encrypted (and even if it is they can see you are utilizing it, but not details about it). Banks are a good example, they would see that you use Bank of America, but they wouldn't be able to see your username/password, or anything else that comes over the connection. That is unless they require you to install software on your computer, in which case that software might give them the ability to see what your browser sees (and effectively defeat encryption). EDIT: It also kind of depends on your ISP, they can technically act like a middle-man by using a proxy type of connection, creating a secure connection between you and their servers, and their servers and the place you are trying to visit. But that's a lot of overhead and almost certainly illegal in most instances, so ISPs don't bother attempting to do that.",
"They will see several things, - When you try to connect to Reddit, your computer performs a DNS lookup, where URL_0 is matched to the server name's possible IP addresses. That request is a no-brainer. They know you ask for Reddit. - When you connect to the web server, they see that you do so. and they see that it's a web server you are talking to. - if you use HTTPS (and there is a padlock at the server name), they are not supposed to see anything else. Because the communication is encrypted. There are ways around that, of course. But generally, they don't know what you are doing apart from that. - if you don't use https, but instead use the regular HTTP service, stye will be able to see everything that is sent back and forth between you and the web server. Which includes not just your username and password, but also information about the web browser you are using, what data you are requesting from the server and EXACTY what data you get sent back to you. If it's encrypted, they can totally see that you do something. And make some educated guesses on what it is that you are doing. But they can't know for sure. The same goes for mail services. VPN connections to your office. Wen you play online games. And so on. They can see what server you are talking to. And they can see what your computer says to the server and what it's response is. If the communication is in plain-text, they can also see exactly what you are doing. If it's encrypted, their knowledge stops at being able to tell whereto you are talking."
],
"score": [
12,
4
],
"text_urls": [
[
"www.reddit.com/r/explainlikeimfive",
"www.reddit.com"
],
[
"www.reddit.com"
]
]
} | [
"url"
]
| [
"url"
]
|
|
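A minimal Python sketch of the two steps described in the answers above: the DNS lookup (which normally travels in the clear, so the ISP can see which site you asked for) and the TLS-wrapped connection (whose contents they cannot read). The hostname www.reddit.com is only an illustrative example, and this is a sketch of the idea, not how any particular ISP actually monitors traffic.

```python
import socket
import ssl

HOSTNAME = "www.reddit.com"  # illustrative hostname

# Step 1: the DNS lookup. This query (hostname -> IP) normally travels
# unencrypted, which is one reason the ISP can see *which* site you visit.
ip_address = socket.gethostbyname(HOSTNAME)
print("DNS answer visible on the wire:", HOSTNAME, "->", ip_address)

# Step 2: the HTTPS connection. After the TLS handshake, the request path,
# headers and page content are encrypted; only the endpoint (and the SNI
# hostname sent during the handshake) is observable to someone in the middle.
context = ssl.create_default_context()
with socket.create_connection((ip_address, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOSTNAME) as tls_sock:
        print("Negotiated TLS version:", tls_sock.version())
        # Anything sent through tls_sock from here on is ciphertext on the wire.
```

Running it prints the resolved IP and the negotiated TLS version; everything written through the wrapped socket afterwards looks like ciphertext to an observer in the middle.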
emgsc6 | We’re taught in school that white is all light/color combined, and black is the absence of color/light. So how do black pixels work on my monitor? | Technology | explainlikeimfive | {
"a_id": [
"fdotf16",
"fdolk28"
],
"text": [
"Many comments here aren't quite right. All LCD screens are backlit - this is where you get your white colors, because the backlight it white. There are then LCD layers that act as color filters in front of the back light, one for each color channel. LCD's are off by default - making for a white element for that color channel, and have to be energized to deliver opacity. Full on provides the fullest color value for that channel, and all three color channels full on will effectively block the backlight almost entirely, making black. And that's why \"black levels\" are important when talking about LCD based screen technologies, because some light escapes. At night, turn your screen on, view a full-screen black image, and turn the lights off. You will likely still see some light escape from the backlight, a limitation of the technology. Most LCD technologies try to dynamically adjust the backlight level to achieve darker black levels depending on the darkness of the entire frame being presented. Shitty cheap screen technologies can make dark scenes look terrible, I'm sure you've seen it. It also makes it nearly impossible to shop for a good LCD screen, because the industry defined and then gamed the black level ratings on the box, they're all meaningless, and there's no way now to define a meaningful standard and hold the manufacturers accountable to them. And most retailers intentionally do not have the right environment to judge a monitor critically. OLED is different. Each element of each color channel (so, each sub-pixel) is a light emitter, it's an LED light (O stands for Organic - carbon. Organic chemistry is just carbon chemistry since you can make more molecules out of \\*just\\* carbon than you can with the rest of the periodic table \\*combined\\*). So OLED screens don't have a backlight, if the element is black, it's because it's off. OLED is increasing in popularity, but the technology is still playing catchup to the more mature and refined LCD technologies. OLED doesn't match LCD in HDR capabilities yet, for example, but it's getting there.",
"A black pixel turns off and blocks all the light from the back light. So by blocking all the light the pixel is black."
],
"score": [
34,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
emhktu | What is computer science? | Technology | explainlikeimfive | {
"a_id": [
"fdov4qi",
"fdosbmw"
],
"text": [
"So, this rundown is my own, and the idea here is to have a pseudo-historical list running in descending order of abstraction. Basically, I'll start with the most abstract and general ideas of the field, and work down towards nitty gritty practical bits that emerged. But anyway, the main idea of computer science is to deal with processes, and specifically, unlike mathematics, processes that have extra limitation that you need to be able to perform them in a finite amount of time and space. Because, you know, humans have only limited amount of time to wait for computation to finish, and there's only finite amount of the universe we have access to. So, in computer science specifically, what was a rather important point was that sometimes you have these processes be in the form of step-by-step lists of instructions(hereby called \"algorithms\") that even the stupidest could follow. So we built the stupidest thing, and we called it artifical computer(as opposed to computer of the old, who were humans, mostly women, performing calculations as required for some fields of science and engineering and such), and tried seeing what we can do with this concept. So now the question of study became, what can these artifical computers actually do. Some major results were achieved in 1940's, specifically Alan Turing was helpful, where he managed to prove some key things about things that can be computed, and perhaps more importantly, that there were some things that couldn't. And as computer technology advanced, computers itself started to become more complex, and the programs running on them started to require more and more sophisticated thinking, and computer science basically absorbed things like software engineering to itself, taking it further away from pure math world. Things like, what sort of tradeoffs you'd have when designing operating system fit neatly in this world of questions that more or less deal with what can and cannot be done with computers. But much of the discussion is still well within confines of pure math as well. Say, computational complexity is a measure of algorithms ability to use fewer steps to arrive at the right answer. You don't need to ever even have seen a computer to be able to answer questions about those kinds of things, and it's ultimately about processes and algorithms rather than this physical device, although limitations of this physical device did end up sparking interest in these types of questions. Likewise, \"formal language theory\" is basically mathematics, but that theory is the main way to understand programming languages, and the theoretical foundation for their existence. So the line gets blurred. I'm unsure but I believe linguistics also makes an appearance here in this multi-dispiclinary mess. Another field that I want to highlight for math'iness is artifical intelligence. Also, worth noting that encryption basically is just taking mathematical problems we can prove are hard in one way but easy in another. And then you also have fields that are more specifically about using computers, like user interface design, or user experience design, which start invoking psychology and such things. And obviously, physical design of computing devices with its electrical engineering, physics and chemistry connections has to be mentioned. Basically, it started out with a rather simple premise of \"what this box do?\" and then when the box turned out to be very powerful, the field just exploded to cover everything the box touched.",
"When computers were first built and people came to realize how powerful they were, they needed people to figure out how to make them work and how to make them better. The original designers tended to be mathematicians, physicists, engineers, etc. but no one field could really do it all. Computer science is sort of the catch all term for the people who ended up working on computers, both more theoretical and also applied."
],
"score": [
4,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
emhoef | How do programs like Audacity or Fruity Loops actually clean up recordings? | I don't get how the de-noising actually works. Can someone who's an engineer explain exactly how Audacity or FL does this? And how did the creators come up with this technology? | Technology | explainlikeimfive | {
"a_id": [
"fdovd45"
],
"text": [
"All of the digital manipulation that happens in Audacity is mathematical, there's really no special technology happening that is specific to this function. The sounds are digitized and recorded, presenting a mathematical wave of sounds. A Fourier analysis is done to isolate the frequencies of the background noise independent of the other sounds in the recording, and then any tones at those frequencies that are not above the background noise in volume are again reduced in volume to give a cleaner sounding recording. There are different threshold settings that can change what comes out of the algorithm, but that is generally what's going on."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
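The answer above describes Fourier-based noise reduction. Below is a heavily simplified NumPy sketch of the general "spectral gating" idea: estimate a noise floor from a noise-only sample, then attenuate frequency bins that do not rise clearly above it. This is an illustration of the concept only, not Audacity's or FL Studio's actual algorithm, and the threshold and attenuation factors are arbitrary.

```python
import numpy as np

def spectral_gate(signal, noise_sample, threshold=2.0):
    """Crude spectral noise gate: attenuate frequency bins whose magnitude
    is not clearly above the estimated noise floor. Illustrative only."""
    # Estimate the noise floor per frequency bin from a noise-only recording.
    noise_floor = np.abs(np.fft.rfft(noise_sample, n=len(signal)))

    # Transform the noisy signal into the frequency domain.
    spectrum = np.fft.rfft(signal)

    # Keep bins that rise well above the noise floor, strongly attenuate the rest.
    mask = np.abs(spectrum) > threshold * noise_floor
    cleaned_spectrum = np.where(mask, spectrum, spectrum * 0.1)

    # Back to the time domain.
    return np.fft.irfft(cleaned_spectrum, n=len(signal))

# Toy demo: a 440 Hz tone buried in white noise.
rate = 44100
t = np.arange(rate) / rate
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
noise = 0.2 * np.random.randn(rate)
cleaned = spectral_gate(tone + noise, noise_sample=noise)
```

Real noise-reduction tools estimate the noise profile more carefully, process the audio in short overlapping windows, and smooth the gating to avoid artifacts, but the underlying idea is the same.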
emi87e | Why does the Nintendo Switch experience controller connectivity problems (aka 'Joycon drift') when both the Wii and WiiU largely did not have this problem? | Technology | explainlikeimfive | {
"a_id": [
"fdov572",
"fdow55n"
],
"text": [
"It basically boils down to a hardware design issue. Some sensor that's supposed to stay centered when theres no stick push registers when there is no actual stick push or the mechanical reentering mechanism has worn out and no longer properly recenters. The previous consoles used different designs.",
"I heard that is an issue with dirt getting into the analog sticks. Most analog sticks have the issue to some degree, the switch is just notoriously bad for it, sometime happening just a few months after purchase and almost everyone gets it. You can buy a repair kit for 10$ or so off amazon and relatively easily replace the analog stick yourself. The hardest part is getting the screws out to take off the back. (WARNING: this would likely void warranties, so do that first. The screw strip easily, and the kit comes with a very specific screwdriver). I bought the kit, found a youtube video showing how to do the fix. The only hard part was trying to get the stripped screws out. Tried to replace them with some philips screws which worked way better, but are hard to find. No drift for the last few months since."
],
"score": [
3,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
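As a side note on how software sometimes works around a drifting stick: many games and drivers apply a "deadzone", treating small readings near the centre as zero so a slightly off-centre sensor does not register as movement. A minimal sketch follows (stick values assumed normalized to -1..1, deadzone size arbitrary); it masks mild drift but does not fix the worn or dirty hardware described above.

```python
def apply_deadzone(x, y, deadzone=0.15):
    """Ignore stick readings close to centre (values assumed in -1.0..1.0).

    A worn or dirty stick that reports, say, (0.08, -0.05) at rest would
    otherwise be read as a slow push; inside the deadzone we report (0, 0).
    """
    magnitude = (x * x + y * y) ** 0.5
    if magnitude < deadzone:
        return 0.0, 0.0
    # Rescale so movement ramps smoothly from the deadzone edge to full tilt.
    scale = (magnitude - deadzone) / (1.0 - deadzone)
    return x / magnitude * scale, y / magnitude * scale

print(apply_deadzone(0.08, -0.05))  # drifting-at-rest reading -> (0.0, 0.0)
print(apply_deadzone(0.9, 0.0))     # a real push still goes through
```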
emm4ug | how do the scales that measure your body composition work??? | Like how do I step on it and it just knows how much visceral fat, skeletal muscle, WATER IM CARRYING, and all that info? How accurate/reliable is it? Thank you for any answers this may get!! | Technology | explainlikeimfive | {
"a_id": [
"fdplz19"
],
"text": [
"They basically run a current of electricity through you. Muscle and fat conduct electricity differently so the scales measure how much of the current your body resists/absorbs and guesses your composition. As you can guess they're not very accurate as the skin on people's feet is typically highly variable plus people aren't completely dry when stepping on scales. The body comp scales that you use with your hands then to be more accurate."
],
"score": [
20
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
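A sketch of the measurement idea described above: the scale injects a small known current, measures the voltage, computes impedance via Ohm's law, and feeds it into a regression equation of the general form used in bioelectrical impedance analysis. All the coefficients and measurement values below are invented placeholders; real devices use proprietary, calibrated equations.

```python
def body_impedance(measured_voltage, injected_current):
    """Ohm's law: impedance (ohms) = voltage / current."""
    return measured_voltage / injected_current

def estimate_fat_free_mass(height_cm, weight_kg, impedance_ohm):
    """Placeholder regression of the typical BIA form:
    FFM ~ a * height^2 / impedance + b * weight + c.
    The coefficients here are invented for illustration, not from any real device."""
    a, b, c = 0.5, 0.2, 5.0  # hypothetical coefficients
    return a * (height_cm ** 2) / impedance_ohm + b * weight_kg + c

z = body_impedance(measured_voltage=0.4, injected_current=0.0008)  # 500 ohms
print(estimate_fat_free_mass(height_cm=175, weight_kg=70, impedance_ohm=z))
```

The wide error bars mentioned in the answer come from everything the regression cannot see: hydration, skin condition, foot contact, and so on.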
emowyy | What is an API (Application Programming Interface)? | Technology | explainlikeimfive | {
"a_id": [
"fdq27w5"
],
"text": [
"Hi! Someone has already asked this question and offered a really good explaination. You can find an explanation here: URL_0"
],
"score": [
4
],
"text_urls": [
[
"https://te.reddit.com/r/explainlikeimfive/comments/c6vq39/eli5_what_is_an_api/"
]
]
} | [
"url"
]
| [
"url"
]
|
|
emp1nu | What is so bad about big companies having access to all of our data? | Technology | explainlikeimfive | {
"a_id": [
"fdq2ld4",
"fdq2wxs"
],
"text": [
"You never know who could be buying your data and what are their intentions. Best case scenario : an ad network knows you are really into cats, you get targeted ads. Not-so-good scenario : your bank/insurance company buys data from Google and gets sensitive medical information, or suspects drinking issues, which may influence their mortgage decisions / rates.",
"The problem with having big companies having all your data is that you don't know what they're going to do with. Would you please share your phone number with me? I promise it's only to send you some links that answer your question. I don't want big companies to have my data because I don't trust them. 1. Big companies could potentially sell your searches to an insurance company which then they will use to analyse your lifestyle to decide your insurance premium. 2. Big companies could sell your data to scammers who will analyse if you could be a susceptible target for a scam. 3. In a country where let's say a religious group is under government funded persecution, the personal data might be used to locate and identify them. In countries, with anti-homosexuality laws, it could be used to track down the \"illegals\" 4. A government change in your country could make something that is legal today to be illegal tomorrow. And suddenly, you are an illegal and they could track you. 5. What's perfectly legal in your country could be totally illegal in another. You could be arrested on your arrival at an airport."
],
"score": [
10,
8
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
emp3yo | Why can't we put an entire city under WiFi as opposed to providing coverage with cellular data? | Technology | explainlikeimfive | {
"a_id": [
"fdq3ip9",
"fdq6em6",
"fdqaeph",
"fdq9rvw",
"fdqaxhn",
"fdqahjd",
"fdqd0ys",
"fdqatph"
],
"text": [
"Frequencies.... wifi is designed to operate in a set of frequencies that makes them ideal for medium-range distances only, we are talking about something in the range of an small house. Wifi radio waves are easily blocked by objects like walls, doors, trees, etc. So in order to make it available in a large area, you need to place a lot of repeaters around it. To properly cover an small city basically you will need to place a repeater on every crossroads and in the center of every block. And thats without taking into consideration large buildings. Cellular radios on the other hand works on a very different set of frequencies that allows them to bounce and go trough walls and any objects more easily, so with a single high power antenna (tower) you can cover an area the size of an small-to-medium city with good reception quality. Effectively lowering the costs, and the amount of work required to setup a network (and its maintenance), at the expense of losing a little amount of bandwidth per user. Also remember that cellular towers provides a lot of services to their users, not just internet. Providing the same amount of services on wifi would require a whole lot of infrastructure changes, and a lot of very expensive devices to support it. They are just different things designed to solve different problems.",
"You can, and it's been done. There are several cities/towns that have installed wifi in at least part of their city/town for general use. But wifi has limited range (edit: and limited structural penetration) so it requires a lot of routers which is expensive and kinda complex to set up. Plus there is already a longer range service available via cell phones that works better as a technology. If you are thinking of this as a public service from the government then you also get into the whole government/business conflict. If you wifi a whole city and make it free then you are conflicting with cell phone carriers who are trying to run a business off of mobile internet and so as a broad policy it would eventually lead to a larger conflict and possible lawsuits.",
"Free (nearly) citywide wifi in Washington DC. It is pretty great, works well, and is a fantastic city service. URL_0",
"In Regina, Saskatchewan used to have free municipal WiFi in the downtown areas. It was a lifesaver the first time I was there around 2010. Visited recently and it's gone. Don't know what happened.",
"Look at college campuses in America if you want to see an example of this. A lot of them are basically small cities",
"Many cities throughout Europe have free city wide WiFi. Unfortunately, a few of them need to be activated via SMS which heavily limits who can use them to mobile users with active service...",
"Imagine you had 40 channel walkie talkies and they had a range that extended across an entire city. Wanting to link the city together, you pass out walkie talkies to 1000 citizens. They try to use them but there are only 40 channels, and so people are constantly interfering with each other and nobody can talk. You could increase the number of channels, but there is only a limited spectrum of frequencies available to the public to use, and that wouldn't really solve the problem. The solution that cellular carriers came up with was instead of every radio being able to talk to the entire city, they shorten the distance at which the radios communicate and break them up into cells. hence \"cellular\". So people using radios in one part of town, won't interfere with others using those same channels in another part of town because the radio waves aren't powerful enough to travel that far. The base stations used to send and receive calls are spaced out regularly over the entire city, and the frequencies used by each base station are set up so that no 2 nearby stations will ever use the same frequencies. This solves the problem of interference. Cellular infrastructure works because the base station radios are all connected together using hardwired communication lines, which allows a user to connect to them anywhere in the city. As a user moves around they may move out of range of one radio but into the range of another, so this connected infrastructure also handles the handover of your call from one radio base station to another. Without this dedicated infrastructure that's able to communicate with each other, cellular technology would not work. Wifi base stations don't have any standardized way to communicate with each other over an entire city to allow call handoffs with each other. In fact you could consider each wifi radio to be it's own cellphone company, with no interconnection between them. If you simply had one really high power wifi signal that reached an entire city, you would again run into the problem of everybody using the same limited 2.4ghz and 5ghz traffic channels and interfering with each other. Even inside of your own home, if you have enough security cameras, smart home devices, tv's, cellphones, kindles, etc all transmitting on your home wifi, you may start to run into radio congestion. It would be much worse city wide. Future wifi standards may provide for a way for different radios to all talk to each other to handle handoffs. People would also have to set up each wifi radio to transmit only in a limited area, and to re-use the radio frequencies at appropriate distances. This is a logistical challenge, and would require many different manufacturers, or even people running the hotspots, to all work together. There are standards in place that will allow for this but they aren't widely adopted. Finally, cellular radio technology already has the infrastructure in place to allow city wide data and voice roaming, and so there is little incentive to re-invent the wheel. I was a telecommunications engineer at Nortel for 10 years, specializing in GSM.",
"City wide wifi does exist in some places and is very effective. Inverness in Scotland is an example."
],
"score": [
543,
49,
14,
10,
6,
4,
4,
3
],
"text_urls": [
[],
[],
[
"https://dc.gov/service/public-wifi"
],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
emp89m | Why all modern cellphones come with dual-camera and sometimes more? | Technology | explainlikeimfive | {
"a_id": [
"fdq3863"
],
"text": [
"As I understand it, there’s not enough space in phone to fit proper optical equipment like zoom and different lenses to one camera, so they make different cameras for different purposes."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
empbpc | Fuzzy Logic | Dear people of reddit I am in need of your help. I am simple law student and I don‘t understand the scientific papers on that matter. May you explain it to me, please? Thanks in advance | Technology | explainlikeimfive | {
"a_id": [
"fdq511j",
"fdqbcgb",
"fdqabdj",
"fdr4vfu"
],
"text": [
"Boolean logic can only take two values : True or False. When trying to perform text recognition with AI, you can't tell for sure if the single pixel you are analysing corresponds to the \"A\" letter with 100% certainty. In order to use information from the entire picture to deduce the letter, you need something in between \"True\" and \"False\". You will rather work with \"Likely\", \"Not Likely\". So rather than saying : 《 first pixel is Black, this letter is an \"A\"! 》, you would say : 《if this pixel is black, there is a 64% chance that this letter is an \"A\", but only 5% chance that it's \"Z\"》. When analysing the bigger picture, and compiling all of the probabilities with maths, you will just pick the most probable outcome and then state that the written letter is \"A\".",
"A popular use for fuzzy logic is in rice cookers - machines that are made to just cook rice (although they can cook some other things too). Without fuzzy logic, a rice cooker will keep asking the rice, \"are you done?\" while it cooks. This kind of rice cooker can only get two answers to that question, yes or no. As long as the answer is no, it stays on; as soon as the answer is yes, it turns off. This works okay, but if you put in the wrong amount of rice or water it might not come out done the way you want it. A rice cooker with fuzzy logic will keep asking the rice, \"are you done?\" while it cooks - but it can get more choices for an answer, like almost, just about, real close. Instead of being on-on-on-off, it can turn itself up or down if it needs to. Kind of like how when you pour juice in a glass or milk in a bowl, you can slow down pouring when it's close to full, a fuzzy logic rice cooker can slow down cooking when the rice is almost done. This works better than trying to quickly shut off as soon as the rice is done.",
"Fuzzy logic does not collapse with observation like probability does. Imagine a clear bottle with a clear liquid with 95% membership function of the classification \"pure water\". It could be water mixed with something else or polluted water. But it would not be pure water and it would not be pure acetone. But a bottle with a 95% probability of being \"pure water\" could still also be pure acetone. Now if you open the bottle and test the water you can determine whether it is pure water or not. When you know then the probability changes. Either it goes to 100% (it is pure water) or 0% (it is not pure water). With fuzzy logic, the stuff in the bottle will be something reasonably described as 95% water.",
"Some questions have a clear yes/no answer, e.g. \"Are you 25 years old?\". Others may be harder to answer clearly, e.g. \"Are you old?\". If you ask that question a centenarian, the answer will be \"yes\", if you ask it a baby, the answer will be \"no\", but if you ask it, say, someone in their 40's, you'll get a mixed bag of replies. Traditional logic doesn't handle cases like this, everything is true (truth value of 1), or false (0). Fuzzy logic allows the truth value to be anything in-between. Say, we poll people and find that 58% believe that 47-years-old is old. Then we can say that the statement \"this person is old\" has the truth value of 0.58."
],
"score": [
22,
7,
5,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
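A small Python illustration of the contrast drawn in the answers above between a yes/no predicate and a fuzzy membership function, using the "is this person old?" example. The breakpoints (65 for the boolean version, 30 and 80 for the fuzzy version) are arbitrary choices made only for illustration.

```python
def is_old_boolean(age):
    """Classical logic: the answer is only ever True or False."""
    return age >= 65

def is_old_fuzzy(age, young_until=30, fully_old_at=80):
    """Fuzzy logic: a degree of truth anywhere between 0.0 and 1.0.
    Breakpoints are arbitrary, chosen only to illustrate the idea."""
    if age <= young_until:
        return 0.0
    if age >= fully_old_at:
        return 1.0
    return (age - young_until) / (fully_old_at - young_until)

for age in (10, 45, 70, 95):
    print(age, is_old_boolean(age), round(is_old_fuzzy(age), 2))
```

A fuzzy controller (like the rice cooker example above) combines several such membership functions with rules and then converts the blended result back into a single action.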
emq7cw | what is OS-level virtualization? | All articles I read are too high level for me. I want to understand this term to be able to understand what Docker does. | Technology | explainlikeimfive | {
"a_id": [
"fdq9757"
],
"text": [
"In regular virtualization one physical server runs multiple instances of an operating system to keep users and their actions separate. In OS-level virtualization, one *operating system* has multiple self contained partitions to achieve separation. So rather than have one OS per user, you have one OS for all the users, and the virtualization splits that one OS."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
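One way to see OS-level virtualization in action, assuming Docker is installed on a Linux host and the alpine and ubuntu images are available locally: containers built from different images still report the host's kernel version, because they share the one running kernel instead of booting their own operating system the way a traditional virtual machine does. The snippet below just shells out to the standard docker CLI.

```python
import subprocess

# Two different container images, one host: both report the *host's* kernel,
# because OS-level virtualization partitions one running OS rather than
# starting a separate OS per guest.
for image in ("alpine", "ubuntu"):
    result = subprocess.run(
        ["docker", "run", "--rm", image, "uname", "-r"],
        capture_output=True, text=True, check=True,
    )
    print(f"{image}: kernel {result.stdout.strip()}")

host = subprocess.run(["uname", "-r"], capture_output=True, text=True)
print("host  : kernel", host.stdout.strip())
```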
emsmo7 | How do electric cars heat the cabin? | Technology | explainlikeimfive | {
"a_id": [
"fdr3t32",
"fdr9izz"
],
"text": [
"Electric heaters blow warm air. But they also tend to have heated seats and steering wheels since it's more efficient to heat you directly than to heat the air, which then heats you.",
"Typically they still use a heater core and coolant like a normal car, but use a heating element/resistive coil to heat the coolant rather than engine heat. Yes, using the heater can have a significant impact on ev range. To improve efficiency, the car will preheat this coolant when the car is plugged in/charging in low temperatures."
],
"score": [
55,
18
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
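Rough, illustrative arithmetic for the range impact mentioned above. The figures (a 60 kWh pack, 15 kWh/100 km driving consumption, a 3 kW average heater draw, 60 km/h average speed) are assumptions for the sake of the example, not the specs of any particular car.

```python
# Illustrative, made-up figures for a cold-weather drive.
battery_kwh = 60.0
driving_kwh_per_km = 15.0 / 100.0   # 15 kWh per 100 km
heater_kw = 3.0                     # average resistive heater draw
average_speed_kmh = 60.0

# Energy the heater consumes per km at that speed.
heater_kwh_per_km = heater_kw / average_speed_kmh

range_without_heat = battery_kwh / driving_kwh_per_km
range_with_heat = battery_kwh / (driving_kwh_per_km + heater_kwh_per_km)

print(f"Range without heating: {range_without_heat:.0f} km")  # ~400 km
print(f"Range with heating:    {range_with_heat:.0f} km")     # ~300 km
```

This is why preheating while still plugged in, and heating the person (seats, steering wheel) instead of all the cabin air, saves so much range.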
emt021 | How does a surface to air missile battery mistake a large airliner for a fighter jet? | Technology | explainlikeimfive | {
"a_id": [
"fdqqac1",
"fdqrj13",
"fdqrv6h"
],
"text": [
"They don't. They mistake it for a big bomber and that's a bigger threat than a fighter jet.",
"It happened before. During the last Iran-Iraq war (when the US was still on the side of Saddam Hussein) an US warship shot down an Iranian airliner under very similar circumstances. It probably has a lot more to do with seeing what you expect to see than with anything technical. Get people into the mindset that they are going to be attacked and they become too trigger happy for everyone's good.",
"I read a good explanation yesterday here on Reddit from someone claiming to have worked on SAM systems in their military career. The explanation made sense, and lined up with what I little I know about these systems, but take it with a grain of salt. Essentially, a SAM system has human operator that can control how automated the system is, and change that automation as the situation changes. At one end of the spectrum, the human operator has to manually check the data the system is receiving from radar and Friend or Foe identification systems, and decide whether or not to fire. At the other end of the spectrum, the automated system can be told to fire on threats it detects with almost no human operator in the loop. In theory, the system could be totally autonomous, but that's generally frowned upon in the military community. The speculation is that the Iranian military, expecting US retaliation, set their SAM systems to the higher levels of automation, so they could respond faster. Then a problem with the plane's transponder system caused it not to signal the SAM system that it was friendly. The system and its operator were both on edge, and either the system fired without even consulting the operator, or the operator agreed with the system's assessment, and fired on the plane, thinking it was an enemy. Remember that when it comes to war gaming these kinds of scenarios, using basic logic can screw you over. The US has some of the best stealth tech in existence. Our bombers are invisible to radar, and can likely fool most friend or doe ID systems. If a SAM system and its operator caught a large aircraft with a messed up transponder in their airspace during such a tense moment, it could be as simple as bad human judgement, fearing the potential enemy might disappear at any second."
],
"score": [
16,
5,
5
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
emvcp6 | . If a computer is made up of preprogrammed circuits, then how does it display anything on its screen, even if that particular thing hasn’t been preprogrammed in its circuits? | Technology | explainlikeimfive | {
"a_id": [
"fdrcu03",
"fdrcsdu",
"fdrfw0p",
"fdrmp3v"
],
"text": [
"The ELI5 answer is that the circuits are not designed to display an image. & #x200B; Instead there is a set of circuits designed to turn on or off a Red pixel, a green pixel or a blue pixel at a very specific location on the screen. (RGB) Different circuits in the computer are designed to tell that circuit which pixels need to be turned on.",
"A computer is made up of preprogrammed circuits that allow it to interpret instructions from a program and display the results on a screen (or do one of many, many other things with the output, whether it be music, instructions to a peripheral device, whatever). The computer is at its heart a giant set of millions of If / Then statements. It takes input, does something with it, and displays the results.",
"Think of it like an old school Lite Brite only way better. Instead of you reading an instruction manual to know which peg goes in which hole to get which color for the bigger, there is a computer doing it and the monitor is a super lite brite",
"Pretty much the same way you are just a bunch of cells. But, like each cell is actually doing a very different and specific thing, in a computer each preprogrammed circuit is doing a very specific and different thing. Now, some of those preprogrammed circuits are part of the screen of the computer. These circuits control 1 thing and only 1 thing: depending on what they receive, they will light up differently. The screen is connected (probably) to something else (unless it's an All in One). The many types of preprogrammed circuits inside the other part of the computer are in charge of several things including, remembering things at the moment (RAM), long term memory (Hard Drive) and coordinating all the different preprogrammed circuits that work with the computer (CPU). You, using input devices, change the input available in the RAM, Hard Drive, and sometimes even CPU. Stored applications in the Hard Drive, and running applications in the RAM, also change the input that goes to the screen. All of these interactions create a lot of different combinations of input, received, stored, and translated on your screen as different patterns of light. This means, of course, that everything you see in your screen is simply the preprogrammed circuits on it cycling in very specific ways between all available possibilities between an all-black screen and an all-white screen."
],
"score": [
29,
6,
3,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
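A tiny sketch of the idea in the answers above: the display circuitry only ever sees a grid of red/green/blue intensity values, and programs "draw" simply by writing numbers into that grid. The dimensions here are arbitrarily small for illustration.

```python
import numpy as np

# A tiny "framebuffer": height x width x 3 colour channels (R, G, B),
# each value 0-255. The display circuitry only ever sees numbers like these.
height, width = 4, 8
framebuffer = np.zeros((height, width, 3), dtype=np.uint8)

framebuffer[:, :] = (0, 0, 0)        # all sub-pixels off -> black screen
framebuffer[1, 3] = (255, 255, 255)  # one white pixel: R, G and B fully on
framebuffer[2, 5] = (255, 0, 0)      # one pure red pixel

# A program "draws" by writing numbers; the screen hardware just shows them.
print(framebuffer[1, 3], framebuffer[2, 5])
```

Nothing on screen has to be "preprogrammed": any image at all is just some combination of these numbers, and the fixed circuits only need to know how to turn each sub-pixel on to the level they are told.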
emwscs | What's the reason behind wireless being slower than wired? will it ever overtake it? | Technology | explainlikeimfive | {
"a_id": [
"fdruorg",
"fdrruuu",
"fds1m0u"
],
"text": [
"Wireless communication is like talking to a friend out loud. If you're in the same room, you can have a normal conversation. If your friend is outside the house, you can still talk to him - you'd just have to yell. If he's across the street, you'd have to scream at him and maybe at that range, you wouldn't be able to understand some things he's saying. Keep going further and not only is your communication slower, but more inaccurate as well. And because of all those misunderstandings, you have to slow down your speed of communication. Wired communication is like talking on a phone - landline, if you prefer. You pick up the phone and can have a normal conversation regardless where your friend is in the world. So, at current tech and the foreseeable future, I'd say there's no way that wireless ever beats wired. Physically, air is just a worse conductor of electromagnetic waves than copper wire and there's not much we can do to change that. Perhaps in the future, we can communicate through methods that ignore the transfer medium - gravity waves, perhaps? in which case wireless would be fine ... but for now, no.",
"signal integrity, frequency limitations and signal strength. You get no interference with wires, you can go as high frequency as you want, and signal strength is much higher.",
"You know how sometimes someone says something wirelessly (through sound) to you and you didn't quite hear it because some other sound interrupted them, so they have to repeat it and it takes longer? Like that, but with radio waves. If they could inject their words directly into your head you'd be able to talk a lot faster probably. There's also some things like wireless audio that purposely add a delay so that when that happens it doesn't stutter the music. It can resend the messed up signals again before the lag catches up so you don't notice that the signals got messed up for a bit."
],
"score": [
12,
10,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
emwwhd | How does 8D music work? | Technology | explainlikeimfive | {
"a_id": [
"fdsh5f2"
],
"text": [
"So, the way that it works is by panning the audio signal, pretty much, instead of having a balanced output (as you would on any song) the modify the way the audio comes out (balancing between more output between the left or right headphone) AND most importantly, if you create a balance like L 80% - R 20% then you move it so that it ends the other way around (L 70% - R 30%; L 60% - R 40% and so on) that's why you feel some kind of ambience, as not a lot of music use it, they usually just pan and it remains like that the entire song thus not creating an \"ambience\" notion. I hope that helps, also sorry about the shitty explanation I am at work and should not be writing this. If you have any additional doubt please let me know"
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
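A NumPy sketch of the panning automation described above: a mono signal is split into left and right channels whose balance sweeps back and forth over time, which is the core of the headphone "moving around you" effect (real "8D" edits usually add reverb and other processing too). The tone, sweep period, and pan law are illustrative choices.

```python
import numpy as np

rate = 44100
seconds = 8
t = np.arange(rate * seconds) / rate

# A mono source: a plain 220 Hz tone standing in for "the song".
mono = 0.4 * np.sin(2 * np.pi * 220 * t)

# Pan position sweeps left -> right -> left every 4 seconds (-1 = left, +1 = right).
pan = np.sin(2 * np.pi * t / 4.0)

# Simple equal-power panning: split the mono signal into two channels whose
# balance follows the moving pan position.
angle = (pan + 1) * np.pi / 4            # 0 .. pi/2
left = mono * np.cos(angle)
right = mono * np.sin(angle)
stereo = np.stack([left, right], axis=1)  # shape: (samples, 2)

# Writing `stereo` to a WAV file and listening on headphones gives the
# basic "sound circling around you" effect.
```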
emxndg | when streaming videos online, why do ads seem to load instantly, without buffering, and play in HD quality, but then the video itself... does not do these things? | Technology | explainlikeimfive | {
"a_id": [
"fds4jva",
"fds56yw"
],
"text": [
"Consider this question. Which of the two actually generates revenue?",
"They were going to show you ads anyway, so might as well load them onto local RAM behind the scenes. Just because you're already watching something doesn't mean they can't drip-feed the ads at the same time. And because the ads are shown more frequently than the actual content you want, they can be stored on every available server in a location that is quick to find. Your ISP might even have their own cached copies ready to go. Meanwhile, all of the desired content can't be stored as ea$ily since there is so much of it. Perhaps you chose to watch something only five other people want to view, so the lone copy is stored on some Alaskan server."
],
"score": [
5,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
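A minimal sketch of the "load the ads in the background" idea from the answers above: because the ads are known ahead of time and served from well-placed caches, a player can prefetch them into memory while the main video is still streaming, so they start instantly at the ad break. The URL and function names are invented for illustration; real players are far more elaborate.

```python
import threading
import urllib.request

ad_cache = {}  # url -> downloaded bytes, kept in memory

def prefetch_ad(url):
    """Download an ad in the background while the main video plays."""
    try:
        with urllib.request.urlopen(url) as response:
            ad_cache[url] = response.read()
    except OSError:
        pass  # a failed prefetch just means the ad buffers later

def start_prefetching(ad_urls):
    for url in ad_urls:
        threading.Thread(target=prefetch_ad, args=(url,), daemon=True).start()

def play_ad(url):
    clip = ad_cache.get(url)
    if clip is not None:
        return "plays instantly from cache"
    return "has to buffer like the main video"

# Hypothetical ad URL; in a real player this list comes from the ad server.
start_prefetching(["https://ads.example.com/clip1.mp4"])
```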
en3uy7 | One Shot Film Making (Just saw 1917) | So I just saw 1917 and thought it was great. I have heard/read about how it was a one shot movie, but I don't fully understand the concept. I watched the AVGN version of a one shot episode (Immortal), and the BTS - is 1917 sorta like that, but on a larger scale? I am fascinated on the concept. Does that mean there were little to no retakes? cuts? Is it like a play and the actors needed to memorize it all in one go? | Technology | explainlikeimfive | {
"a_id": [
"fdtwtm1",
"fdu573x"
],
"text": [
"It's not actually one shot, they just hide the cuts. But obviously when you have to hide every single cut there are more and complex requirements on where you can or cannot cut and must do one take for real.",
"They're not actually one shot, they're just creative about hiding the cuts. If you liked 1917, I'd definitely recommend Birdman. It's also filmed as one shot and is one of my favorite movies ever"
],
"score": [
8,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
en3zdt | Why can't news providers provide their broadcasted streams over the internet like a live broadcast works on youtube or twitch? | Where one can hit up a local news site on a phone or pc and stream whatever the tv currently receives over the air. | Technology | explainlikeimfive | {
"a_id": [
"fdtyx5l",
"fdtzbi7"
],
"text": [
"They absolutely can, in fact many news providers such as [ABC News]( URL_3 ), [NBC News]( URL_2 ), [CBS News]( URL_1 ), and [bloomberg]( URL_0 ) have free video news feeds online. There are plenty of others too However, they are not allowed to broadcast their actual cable tv channel. Your cable companies pays for each channel they carry, and they pay ALOT for these channels. One of the benefits they get for paying the absurd amounts for the channel, is that the channel agrees to not make their channel free. For the news networks, its wildly more profitable to be on cable networks. Local TV stations are varied, some offer a variety of free streams, some don't. But there's generally little restriction on them broadcasting their local newscasts online for free and there's a big effort to stream the local news casts now in order to get more overall viewership (which means more ad money)",
"Don't they? I set my grandpa up with live news all the time through YouTube."
],
"score": [
53,
5
],
"text_urls": [
[
"https://www.bloomberg.com/live/us",
"https://www.cbsnews.com/live/",
"https://www.nbcnews.com/now",
"https://abcnews.go.com/Live"
],
[]
]
} | [
"url"
]
| [
"url"
]
|
en5435 | How did file sharers get hold of tv episodes so quickly? | Just a few hours after an episode of a popular tv show aired, back in the days, the episode would be available on torrent sites. How was this possible? | Technology | explainlikeimfive | {
"a_id": [
"fduhfd3"
],
"text": [
"In the old days, they used to have a TV capture cards. TV capture cards are like graphic cards, that you put in your computer and you connect your computer to your tv cable so you can watch TV on your computer. You can also record whatever you watch. After you record something from TV, you can share it. Now days, most TV companies stream their programs on their websites and you don't have to have a TV card to watch their channels on the internet and sharers just rip the show from their website or their streaming service. After they're done capturing, they compress it and upload it to file sharing sites. If they use torrents, they don't even need to upload it, people slowly download it from their PC directly, and then the new people that want to have that file download the file from the people that already have the file."
],
"score": [
26
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
en77t9 | Why are phone chargers compact, while laptop chargers have a huge brick in them? | Technology | explainlikeimfive | {
"a_id": [
"fdvldrp",
"fdvmf4q",
"fdvpldo",
"fdvn7rd",
"fdy01fw"
],
"text": [
"Very basic answer: Laptop batteries are bigger and need to deliver more power at once than phone batteries. So they need more powerful chargers. More powerful chargers need bigger ~~transformers~~ heatsinks to prevent overheating, which means they need a bigger housing. See below for more detailed replies. Edited after correction from u/smorejuice",
"Laptop chargers have a much larger power handling capability. A phone charger might be in the 10-20W range at max. but a laptop charger might be > 100W. Greater power means any efficiency loss shows up as heat. Trying to put that in a small form factor might lead to overtemperature situations. Most manufacturers treat their chargers as \"not a priority\" (there are exceptions) so generally it is also sourced pretty cheaply. So it isn't likely they'll put a lot of money into something expensive - and it costs money to make things small. It makes sense - how many reviewers or buyers think about the power brick when discussing or contemplating the latest laptops?",
"In the middle of a charger is a thing called a transformer. This converts electricity into a magnetic field, then back again into electricity. While it is a magnetic field it needs to be moved from A to B in the same way wires move electricity ... except it's much more of a pain in the arse. The *best* thing we have for doing this is lots of thin layers of iron stuck together. \"Our\" ability to make these things hasn't improved significantly since the 1950's because, honestly, physics. So a phone charger has a small transformer because it's not pushing much power around, and laptop chargers are pushing lots more power and the 'brick' ness is because the transformer needs to be physically large and made of iron :(",
"Multiply the size of your phone by 30 and you'll likely need a battery 30 times bigger than a phone charger",
"The correct answer (more power, heat dissipation) has already been given. I just want to add that there is a new technology (GaN) that will make it possible to further shrink chargers. It's a bit pricey but you can (or will soon be able to) get a 100W charger the size of a pack of cigarettes."
],
"score": [
781,
77,
44,
8,
4
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
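Back-of-the-envelope numbers for the heat argument above: at a similar conversion efficiency, roughly ten times the output power means roughly ten times the waste heat to get rid of, hence the bigger housing. The 90% efficiency figure is an assumption used only for illustration.

```python
def waste_heat_watts(output_watts, efficiency=0.90):
    """Heat the charger must dissipate, assuming a 90% efficient converter."""
    input_watts = output_watts / efficiency
    return input_watts - output_watts

print(f"Phone charger (15 W out):  ~{waste_heat_watts(15):.1f} W of heat")   # ~1.7 W
print(f"Laptop charger (120 W out): ~{waste_heat_watts(120):.1f} W of heat")  # ~13.3 W
```

A couple of watts can leak out of a small plastic cube; a dozen or more watts needs surface area and internal spacing, which is most of the brick.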
en86aa | Native Americans were largely wiped out from European disease, why didn’t the opposite also occur? Did they have no immunities to harmful diseases of their own? | Technology | explainlikeimfive | {
"a_id": [
"fdw6aey",
"fdw7hx6",
"fdw0u4f",
"fdw9hh7",
"fdw19du",
"fdwav3g"
],
"text": [
"There were no plagues to spread. There are a couple of reasons, but the main one is a lack of domesticated livestock in large cities. Plagues are generally zoonotic diseases- that is, they originated in animals and mutated to spread to humans. Pigs and humans especially can basically swap diseases back and forth to keep the mutation cycle going. Zoonotic diseases make for the worst plagues because humans haven’t developed resistances to them. It works this way in the animal world too- canine parvovirus is thought to have originated from feline parvovirus. The New World had few large domesticated animals (because there were few animals that *could* be domesticated.) Basically, they had llamas and alpacas and that’s it. The Old World had pigs, goats, cattle, sheep, horses, etc, intensively farmed in close quarters in order to support massive cities. These animals were a melting pot of disease, so you can probably imagine that the cities were basically plague central. In contrast, the New World was not. Syphilis, mentioned below, is even contested as to whether it came from the New World or not. More details here: URL_0",
"The abundance of epidemic diseases among Europeans (and Africans and Asians) was due primarily to their high population density (that is, lots of people living close together) and their keeping of domesticated animals. High population density does not cause disease itself, but it can make it much easier for disease to spread. The second factor is important because diseases like Anthrax and tuberculosis can be contracted from animals. Native Americans, by comparison, had low population density and few domesticated animals, so they far fewer diseases than people of the \"Old World\" and thus less immunity. The same was also unfortunately true of the Aboriginal Australians.",
"It actually is reported that the European epidemic of syphillis is the 1500s originated in the New World.",
"The book ‘Guns, Germs and Steel’ explains this. The gist of the explanation is that domestication of animals changed the disease-resistance characteristics of cultures on the Europe side of the Atlantic. Fascinating book.",
"This is a good question, but just one clarification. Small pox may have been brought to the Americas by Europeans, but it wasn’t a European disease. It originated outside of Europe.",
"This guy explains the whole thing well. [ URL_0 ]( URL_0 ) I have to type more or this will get auto-deleted for not being long enough. But it's an 11 minute video that explains it very well. Not quite as if you were 5, I suppose, but plainly and simply with basic concepts that you don't require any prior knowledge to grasp. Okay hopefully this is now long enough not to get deleted. It is specifically directed towards your question."
],
"score": [
93,
18,
11,
8,
6,
4
],
"text_urls": [
[
"https://m.youtube.com/watch?v=JEYh5WACqEk&t=41s"
],
[],
[],
[],
[],
[
"https://www.youtube.com/watch?v=JEYh5WACqEk"
]
]
} | [
"url"
]
| [
"url"
]
|
|
en8qi8 | how stores can run out of digital content keys. | Technology | explainlikeimfive | {
"a_id": [
"fdwaz6y"
],
"text": [
"> Isn't it just a copy of digital files? No, it's a unique registration key that gives you access to a copy of those files. Like if you bought a voucher at the entrance to a mall for a sweater at Macy's, and then went to Macy's and turned your voucher in for the sweater. The publisher gives a limited list of keys to the retail store, and so the retail store has a limited quantity of keys to provide. The keys have to be unique for quality control (preventing resale) and to prevent abuse of any regional controls."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
en90zh | Why do telecommunication companies not implement a DNSSEC-like mechanism to thwart robocalls / ID spoofing countermeasures? | Technology | explainlikeimfive | {
"a_id": [
"fdwfh4f"
],
"text": [
"A well thought / well designed Kickstarter / crowd sourced project would seriously drive us a long way towards securing our peace at home, and get the attention + demand for this capability that most carriers otherwise leave up to the consumer to deal with? Why hasn't this happened? Why has there been no initiative to secure our lines and provide validated authenticity to the callers identity?"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
enbt9r | why do pictures' resolution drops once you upload them on the internet ? | My bad, i meant quality | Technology | explainlikeimfive | {
"a_id": [
"fdxmnm7"
],
"text": [
"This will depend on where you are uploading it. More than likely, there is some processing of the uploaded image happening at the other end that results in it being compressed, which will save space on the server. The compression also results in the quality dropping."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
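A sketch of the kind of re-encoding many upload pipelines apply to save storage and bandwidth, using the Pillow library: scale the image down and save it at a lower JPEG quality. The size limit, quality value, and file names are placeholders; each site uses its own settings and formats.

```python
from PIL import Image  # requires the Pillow package

MAX_SIDE = 1280  # hypothetical size limit imposed by the site

def recompress_for_upload(src_path, dst_path, quality=70):
    """Shrink and re-encode an image the way many upload pipelines do."""
    with Image.open(src_path) as img:
        img = img.convert("RGB")
        # Scale the longest side down to MAX_SIDE, keeping the aspect ratio.
        img.thumbnail((MAX_SIDE, MAX_SIDE))
        # Lower JPEG quality = smaller file = visible loss of fine detail.
        img.save(dst_path, format="JPEG", quality=quality)

# Example (paths are placeholders):
# recompress_for_upload("photo_original.jpg", "photo_uploaded.jpg", quality=60)
```

Running the original and the re-saved file side by side makes the quality drop easy to see, especially in areas with fine texture or sharp edges.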
enkabk | why is it difficult for cameras to capture such things like snow falling on film when we can easily see with the human eye? | Technology | explainlikeimfive | {
"a_id": [
"fe0iwfd",
"fe0nlgp"
],
"text": [
"Cameras take discrete frames, whereas [our eyes continuously capture light using a biological process]( URL_0 ). Let me elaborate. A camera always has a shutter over top the sensor when you're not using it. When you take a photo, the camera drags a small sliver of opening over top the sensor,[like this]( URL_1 ). That exposes the sensor to just enough light. To do video, that process is repeated 30 times per second. The problem is when you get motion. Cameras can shoot at really fast shutter speeds -- i.e., how long the sensor is exposed to light. For example, 1/4000th of a second. When you do this 30 times a second, the individual pieces of snow are frozen in place. Stutters result, because your eye can tell that the pieces of snow are jumping from spot to spot. Keeping the shutter open longer allows the pieces of snow to drag across the sensor, introducing blur. This blur smooths it out and makes it less jarring. So, if you set up your camera just right, snowfall can look normal. The other reason why it is hard to capture snow is because a DSLR can have an exceptionally shallow depth of field. This means that only one small teeny tiny plane can be in focus at once. Everything else is super blurry. This is a problem when capturing snow, since the likelhood of snow falling EXACTLY in your plane of focus is unlikely. Therefore, 95% of snow is all blurry, and only like 5% is sharp and visible on camera. This is different from what *you* see, because your pupil constricts when you go outside (because it's so dang bright). This smaller opening means that your eye -- when outdoors -- has incredible depth of field. It's able to get EVERYTHING in focus. So like 80% of the snow will look nice and sharp and visible to your eye.",
"Usually video captured by a camera is between 24 frames per second and 60 FPS. The human eyes/brain according to some studies can detect an image lasting 4 ms in a one second video, which means a trained eye can detect an object at 250 FPS. Furthermore a snowflake a small and a camera has a maximum resolution (for example full hd: 1920x1080pixels), so if you are not close enough, one snowflake could be smaller than a pixel, therefore difficult to capture and see on a video. And the human eye according to Dr. Roger Clark can detect around 15 megapixels in a snapshot (5000x3000 pixels for example). This comparison is limited because the human eye is not a digital organ (it does not use pixels), but it gives you a tool to understand the importance of resolution. Additionally a human eye can focus on a specific point in space, the brain can \"freeze\" a frame in the short memory. You need to take in consideration that a snowflake under the sun reflects light, shadow and falls in three dimensions (camera video captures in 2D) and the eyes are more equipped to detect that."
],
"score": [
9,
3
],
"text_urls": [
[
"https://www.youtube.com/watch?v=dvovtbLGaUw",
"https://youtu.be/CmjeCchGRQo?t=165"
],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
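A small calculation tying the shutter-speed point above to numbers: how far a snowflake moves while the shutter is open. The roughly 1 m/s fall speed is a rough typical value and the shutter speeds are common camera settings; the point is the contrast between a "frozen" fast exposure and a visibly streaked slow one.

```python
def blur_in_frame_mm(fall_speed_m_s, shutter_seconds):
    """How far a snowflake falls while the shutter is open, in millimetres."""
    return fall_speed_m_s * shutter_seconds * 1000

fall_speed = 1.0  # m/s, a rough typical value for a falling snowflake

for shutter in (1 / 4000, 1 / 250, 1 / 60, 1 / 30):
    moved = blur_in_frame_mm(fall_speed, shutter)
    print(f"1/{round(1 / shutter)} s shutter -> {moved:.2f} mm of movement")
```

At 1/4000 s each flake moves a fraction of a millimetre (frozen dots that jump between frames); at 1/30 s it moves over 30 mm (a soft streak), which is why shutter choice changes how snowfall looks on video.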
enmj3j | does smaller nm in cpu actually matter like they said? Performance and Efficiency (android snapdragon) | Okay, this should include PC CPU & other mobile CPU but I will take a smaller scope. basically with each snapdragon increment, it always have smaller nm fabrication (14nm, 10nm, 7nm) with 25%++ performance and 20%++ efficiency for each jump (number is just example it vary between newer series but it should be around that number). I've used 14nm 820Snapdragon chipset with 3000mAh battery. most newer phone with smaller nm and bigger battery last the same with mine (usually one day usage). mine has degraded a bit now (but I guess it's because aging battery?), but when I first activate it, I get the same usage overall. let's say newer have extra hours but their battery also slightly bigger than 3000mAh. I get it with performance getting better (although, it won't be the same performance increase like they said each jump), but efficiency seems non-existent. technically it should last 2 days in 3000ish phone right? (almost get to 80% efficiency if we sum all the jumps from 4 years ago). there are 2 days phones, but those are usually with 5000/6000mAh battery. and outside from gaming, we have similar workload (social media, messenger, media consumption). of course there are a lot of variables regarding battery life but I think you all get what I'm trying to compare | Technology | explainlikeimfive | {
"a_id": [
"fe1r4n1"
],
"text": [
"2 ways you can increase performance: - More cores/larger cpu - overclock and overvolt Overclock is not an efficient way to increase performance, but making the CPU bigger is. With smaller transistor size you can have a CPU that is still the same size and cost but virtually bigger. (More complex)"
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
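The answer above can be made concrete with the standard rough relation for switching (dynamic) power, P ≈ C·V²·f: raising the clock f usually requires raising the voltage V as well, so power grows much faster than performance, while a die shrink lowers the effective capacitance C (and the voltage needed), so the same work costs less energy. The specific percentages below are illustrative, not measurements of any real chip.

```python
def relative_dynamic_power(capacitance, voltage, frequency):
    """Classic approximation for switching power: P ~ C * V^2 * f (relative units)."""
    return capacitance * voltage ** 2 * frequency

baseline = relative_dynamic_power(1.00, 1.00, 1.00)

# Overclocking: +20% clock typically needs extra voltage too (numbers illustrative).
overclocked = relative_dynamic_power(1.00, 1.10, 1.20)

# Die shrink: same clock, but smaller transistors switch less charge at lower voltage.
shrunk = relative_dynamic_power(0.70, 0.90, 1.00)

print(f"Overclock:  {overclocked / baseline:.2f}x the power for ~1.2x the speed")
print(f"Die shrink: {shrunk / baseline:.2f}x the power for the same speed")
```

This is also why a newer chip's efficiency gain often shows up as more performance in the same power budget rather than as dramatically longer battery life: the extra headroom gets spent on bigger, faster silicon, while the screen and radios still dominate everyday phone battery drain.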
enms5l | Why do some bass guitars have a piece of metal covering the strings? | I saw an old post like this but the only answer was that it's purely for cosmetic reasons. Is this really the case? Won't the metal be in your way? | Technology | explainlikeimfive | {
"a_id": [
"fe21lo0",
"fe25m4h"
],
"text": [
"It's a [Pickup cover]( URL_0 ), a pickup being the thing with coiled wire and little magnets beneath the strings, which is what picks up (hence the name) your sound to be carried to the amplifier. They're only cosmetic nowadays, and most people take them off because they do get in the way. As for why they still have them, my best *guess* would be tradition.",
"It's said [ URL_1 ]( URL_1 ) there was some functionality when the electric bass was first being designed: *When Leo Fender designed his first* [*electric bass*]( URL_5 *he put a chrome cover over the* [*pickup*]( URL_0 ) *and the bridge for multiple reasons.* *A. He thought the cover over the PUP would provide some electrical shielding (it's too open to do any good there, but that was one reason- early Precision had a wire going to the cover even).* *B. The original* [*P bass*]( URL_4 *PUP was pretty open with the coils subject to damage, so the chrome cover would protect the PUP.* *C. He thought the instrument would be played with the thumb, like the way Wes Montgomery played guitar (his brother Monk was an early user of the Fender bass, and did play it that way too). The tug-bar was under the strings to help with this. You'd wrap your finger tips under the tug-bar, rest your palm on the PUP cover, and the thumb would lay on the* [*strings*]( URL_6 )*.* *D. Everything that was cool in the '50s had lottsa chrome. Aesthetics- the chrome covered up the kinda primitive/ugly parts of the bass.* *E. The* [*bridge cover*]( URL_2 *had a foam mute that knocked down the sustain, and helped the bass emulate the double bass.* *When the Precision was changed in 1957, they kept the covers for most of the same reasons, except they'd made the new split PUP covered in plastic to protect it. So, when they made the* [*Jazz bass*]( URL_3 *in 1960, they again kept the covers, mostly for aesthetic reasons. Fender shipped all the 34\" scale basses with the covers mounted until about 1982. Why? Because they'd always done it that way!*"
],
"score": [
9,
7
],
"text_urls": [
[
"https://www.dhresource.com/600x600/f2/albu/g6/M01/B4/95/rBVaSFpDSJCAfECLAASq-1EVq6o769.jpg"
],
[
"http://i.viglink.com/?key=c380339a639f7058585bf6b4c4e200cc&insertId=35ad6ea6b0a76247&type=CD&exp=60%3ACI1C55A%3A9&libId=k5b19bk1010004j1000DLbe5or550&loc=https%3A%2F%2Fwww.talkbass.com%2Fthreads%2Fpurpose-of-pickup-covers.700770%2F&v=1&iid=35ad6ea6b0a76247&out=https%3A%2F%2Fwww.walmart.com%2Fsearch%2F%3Fquery%3Dpickup&ref=https%3A%2F%2Fwww.google.com%2F&title=Purpose%20of%20Pickup%20Covers%20%7C%20TalkBass.com&txt=%3Cspan%3Epickup%3C%2Fspan%3E",
"https://www.talkbass.com/threads/purpose-of-pickup-covers.700770/",
"http://i.viglink.com/?key=c380339a639f7058585bf6b4c4e200cc&insertId=7afee7a1a90c9ad5&type=CD&exp=60%3ACI1C55A%3A9&libId=k5b19bk1010004j1000DLbe5or550&loc=https%3A%2F%2Fwww.talkbass.com%2Fthreads%2Fpurpose-of-pickup-covers.700770%2F&v=1&iid=7afee7a1a90c9ad5&out=https%3A%2F%2Fwww.walmart.com%2Fsearch%2F%3Fquery%3Dbridge%2Bcover&ref=https%3A%2F%2Fwww.google.com%2F&title=Purpose%20of%20Pickup%20Covers%20%7C%20TalkBass.com&txt=%3Cspan%3Ebridge%20%3C%2Fspan%3E%3",
"http://i.viglink.com/?key=c380339a639f7058585bf6b4c4e200cc&insertId=ada581caff4eac9b&type=KW&exp=60%3ACI1C55A%3A9&libId=k5b19bk1010004j1000DLbe5or550&loc=https%3A%2F%2Fwww.talkbass.com%2Fthreads%2Fpurpose-of-pickup-covers.700770%2F&v=1&iid=ada581caff4eac9b&out=https%3A%2F%2Fwww.walmart.com%2Fsearch%2F%3Fcat_id%3D0%26query%3Djazz%2Bbass&ref=https%3A%2F%2Fwww.google.com%2F&title=Purpose%20of%20Pickup%20Covers%20%7C%20TalkBass.com&txt=%3Cspan%3EJazz%20%3C%2Fsp",
"http://i.viglink.com/?key=c380339a639f7058585bf6b4c4e200cc&insertId=9676e72f8b62158e&type=KW&exp=60%3ACI1C55A%3A9&libId=k5b19bk1010004j1000DLbe5or550&loc=https%3A%2F%2Fwww.talkbass.com%2Fthreads%2Fpurpose-of-pickup-covers.700770%2F&v=1&iid=9676e72f8b62158e&out=https%3A%2F%2Fwww.walmart.com%2Fsearch%2F%3Fcat_id%3D0%26query%3Dp%2Bbass&ref=https%3A%2F%2Fwww.google.com%2F&title=Purpose%20of%20Pickup%20Covers%20%7C%20TalkBass.com&txt=%3Cspan%3EP%20%3C%2Fspan%3E%",
"http://i.viglink.com/?key=c380339a639f7058585bf6b4c4e200cc&insertId=e6ffe04310296ca1&type=H&exp=60%3ACI1C55A%3A9&libId=k5b19bk1010004j1000DLbe5or550&loc=https%3A%2F%2Fwww.talkbass.com%2Fthreads%2Fpurpose-of-pickup-covers.700770%2F&v=1&iid=e6ffe04310296ca1&out=https%3A%2F%2Frover.ebay.com%2Frover%2F1%2F711-53200-19255-0%2F1%3Ftoolid%3D10029%26campid%3DCAMPAIGNID%26customid%3DCUSTOMID%26catId%3D11232%26type%3D2%26ext%3D303434540759%26item%3D303434540759&ref=https%3A%",
"http://i.viglink.com/?key=c380339a639f7058585bf6b4c4e200cc&insertId=28631c2ec7db06fc&type=CD&exp=60%3ACI1C55A%3A9&libId=k5b19bk1010004j1000DLbe5or550&loc=https%3A%2F%2Fwww.talkbass.com%2Fthreads%2Fpurpose-of-pickup-covers.700770%2F&v=1&iid=28631c2ec7db06fc&out=https%3A%2F%2Fwww.walmart.com%2Fsearch%2F%3Fquery%3Dstrings&ref=https%3A%2F%2Fwww.google.com%2F&title=Purpose%20of%20Pickup%20Covers%20%7C%20TalkBass.com&txt=%3Cspan%3Estrings%3C%2Fspan%3E"
]
]
} | [
"url"
]
| [
"url"
]
|
env0t6 | How does a microphone pick up the tone of sounds it is recording? | I have, at the very least, a minimal understanding of music, but I don’t understand how microphones and sound work together. How does a microphone pick up on all the aspects of a sound it is recording: volume (loudness/amplitude), pitch (frequency), and specifically tone? | Technology | explainlikeimfive | {
"a_id": [
"fe5n3q4",
"fe5jlr4",
"fe6196o"
],
"text": [
"Think of a microphone as the opposite of a speaker. A speaker makes sound by vibrating, a microphone picks up sound by being moved by sounds vibrations. After that, your question is more about how sound waves work. Sound waves are areas of compressed or decompressed air but it's easier to visualize as a wave like the ocean. The frequency (how many waves move past you per second) determines how high or low the pitch is. The amplitude (how tall the wave is) determines the volume. The shape of the individual wave (smooth, rough, rippled on one side) determines what the wave sounds like (a piano and electric guitar wave have different shapes).",
"Sound can be thought of as a bunch of points in a line - at any given moment the pressure is at a certain value and over time these momentary pressures come together to form a wave. Microphones record these momentary pressures.",
"Sound is basically a varying air pressure. The microphone picks this up by having a very thin membrane that moves/vibrates with the varying air pressure. This movement is then converted to electrical energy. Pitch/frequency how fast that membrane is vibrating due to varying air pressure. Volume/loudness is how far did the membrane moved from its original position, which is related to the amount of air pressure. And lastly, tone is the air pressure pattern."
],
"score": [
13,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
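The answers above boil down to three properties of a single pressure wave: how fast it repeats (pitch), how far it swings (volume), and what shape each cycle has (tone). A minimal sketch of that idea, assuming Python with NumPy; the 440 Hz pitch, the sample rate, and the harmonic weights are arbitrary illustrative choices rather than measurements of any real instrument:

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples per second, a common audio rate

def pressure_wave(freq_hz, amplitude, harmonic_weights, seconds=1.0):
    """Build a pressure-over-time signal.

    freq_hz          -> pitch  (how fast the membrane vibrates)
    amplitude        -> volume (how far the membrane moves)
    harmonic_weights -> tone   (the shape of each individual wave)
    """
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    wave = np.zeros_like(t)
    # Stack harmonics: same pitch, but the mix changes the wave's shape.
    for n, weight in enumerate(harmonic_weights, start=1):
        wave += weight * np.sin(2 * np.pi * freq_hz * n * t)
    wave /= np.max(np.abs(wave))  # normalise the shape...
    return amplitude * wave       # ...then scale it to the loudness

# Same pitch and loudness, different tone, because the shapes differ.
flute_like = pressure_wave(440, 0.5, harmonic_weights=[1.0, 0.1])
reed_like  = pressure_wave(440, 0.5, harmonic_weights=[1.0, 0.7, 0.5, 0.3])
```

Played back, both signals would have the same pitch and loudness but sound like different instruments, which is the "tone" the question asks about.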
enxery | When you’re watching certain videos, what causes a mismatch between the voices and their mouths? | Technology | explainlikeimfive | {
"a_id": [
"fe6a0tm"
],
"text": [
"Audio and video are recorded separately, so when the editors are putting the sound files over the video they have to try to align it correctly. This is why before they start the actual content they will isolate a very clear audio and video segment to help synchronize it. A common method is someone counting to 3 both outloud and with their fingers. So, any mismatch is simply an editing error where they did not quite line up the audio and video files right."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
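The answer above describes lining the tracks up by hand using a clear reference such as a count-to-three. Automated sync tools commonly do the same job with cross-correlation: slide one recording against the other and keep the offset where the two waveforms match best. A rough sketch of that approach, assuming Python with NumPy; the sample rate, the track lengths, and the synthetic "clap" are made up purely for illustration:

```python
import numpy as np

def estimate_offset(camera_audio, external_audio, sample_rate):
    """Return the offset (in seconds) by which the external track's
    content trails the camera track. Positive means: shift the external
    track earlier by this much to line the two recordings up."""
    corr = np.correlate(external_audio, camera_audio, mode="full")
    shift = np.argmax(corr) - (len(camera_audio) - 1)
    return shift / sample_rate

# Toy example: the same clap shows up half a second later on the
# second device, and the correlation peak finds that offset.
rate = 1_000
clap = np.exp(-np.linspace(0, 10, rate))  # a short decaying burst
cam = np.zeros(5 * rate)
ext = np.zeros(5 * rate)
cam[rate:2 * rate] = clap
ext[int(1.5 * rate):int(2.5 * rate)] = clap
print(estimate_offset(cam, ext, rate))    # ~0.5
```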
eny8wp | How does Spotify pay artists? | Technology | explainlikeimfive | {
"a_id": [
"fe6dn3x",
"fe6fsy7",
"fe6dc52",
"fe6g81w"
],
"text": [
"I worked for a now-dead streaming service. We would keep track of what we played, and give money to the labels based on whatever payment schedule we had with them. It was then up to the labels to pay the artist. The same rate was paid whether it was a popular or an obscure artist. Some labels it was by plays, some it was by seconds streamed, and others had a combination.",
"From what I understand a band gets paid something like .08 for a song to stream. The band vulfpeck had a silent album that they encouraged listeners to stream to rack up plays. It eventually worked and they got put onto a Spotify stadium tour a few years back. Now they're a pretty popular funk band.one of my favorites give them a listen,",
"My understanding is it depends... Really in demand artists, or labels (whoever holds the rights) might work out a deal for $x a year for rights to all their works or specific albums. Less in demand artists might actually get paid per play or maybe per thousand plays or some structure that limits spotify's bill if no one listens to them. Probably subject to a cap etc. In between it might be step wise. First 10k plays get the artist $x, next 90k gets them $y. If you're really unknown you might have to pay Spotify to list you and hope you get back the plays. Really the sky is the limit. Every contract could be unique based on the demand for the artist, the max amount Spotify is willing to pay total, and so on.",
"Technically, they don't. Spotify pays the record labels who in turn are supposed to pay the artists. The rate paid depends on the number of streams, the artist itself and whether the Spotify customer is a premium (paid) customer or is a free (ad-based) subscriber (there are other factors, but those are the basics)."
],
"score": [
26,
11,
7,
6
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
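One of the answers above imagines a stepwise deal: the first block of plays pays one rate, the next block a different rate, and so on. A tiny calculator for that kind of schedule, sketched in Python; every rate and tier size below is hypothetical and not an actual Spotify figure:

```python
def tiered_payout(plays, tiers):
    """Pay out a stepwise per-play schedule.

    `tiers` is a list of (plays_in_tier, rate_per_play); a size of None
    means "all remaining plays". All numbers here are hypothetical.
    """
    total, remaining = 0.0, plays
    for size, rate in tiers:
        in_tier = remaining if size is None else min(remaining, size)
        total += in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return total

# Hypothetical schedule: first 10k plays at $0.004 each, next 90k at
# $0.003, everything beyond that at $0.002.
schedule = [(10_000, 0.004), (90_000, 0.003), (None, 0.002)]
print(tiered_payout(250_000, schedule))  # 40 + 270 + 300 = 610.0
```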
eo0qos | why silicon is used for all the sophisticated tasks like building computer chips and all the electronics. I read an article today about how scientists have successfully built a particle accelerator on a silicon chip and it made wonder what makes silicon so useful in this area? | Link: URL_0 | Technology | explainlikeimfive | {
"a_id": [
"fe6xc3o",
"fe6pw1c"
],
"text": [
"Silicon is a semiconductor, which means it is possible to make it into switches that can be turned on and off by electrical signals. This is how all computers and other silicon-based electronics work. There are MANY semiconductors in the world: crystals like silicon and germanium, metal oxides (used by soldiers in WWII to make DIY radio receivers), biological molecules, natural minerals, even special plastics. But silicon has some advantages: \\- It's cheap, since it's literally made from sand \\- it isn't too hard to make it into big, perfectly pure, defect-free single crystals \\- Those single crystals can easily be integrated with a wide variety of functional films and coatings. \\- It's reasonably durable and robust (though some special materials are better) \\- It can easily be oxidized to produce a protective coating of glass on the surface of it \\- It isn't toxic \\- Its exact semiconductor properties are pretty good (though there are special materials that are better for specific purposes). \\- We have an absolute TON of experience shaping it into whatever micro-scale shapes we want (as long as they are more or less flat) so we can make all kinds of stuff with it.",
"Silicon is a semiconductor, which means under certain conditions it will pass electric current and on certain conditions it won’t. This makes it useful in computing, which relies on turning electrical signals on/off, since having one material that can do that is less cumbersome than a multi-part transistor."
],
"score": [
8,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
eo2jra | How do concert lightings sync so precisely with musicians? Especially with instruments like drums, it will be super obvious even if it's off by a second? | Technology | explainlikeimfive | {
"a_id": [
"fe7c14h",
"fe7cfic",
"fe7j6vz",
"fe81a7z"
],
"text": [
"Computers. The lighting system is usually programmed in advance, and these are professional musicians. Unless something catastrophic happens, they'll stay synchronized, and if something catastrophic *does* happen, there's a tech that can pause the sequence.",
"Well it’s from two reasons. Either the musicians have practiced enough to gain that precision or they are using midi (music instrument digital interface). Which connects to their instruments and equipment and translates a drum sound into a light flash through computer for example. Also one I forgot to mention is backing tracks. Some bands or rappers use backing tracks because it would be arduous to recreate a sound or series of sounds per live show. The artists follow these backing tracks and In some cases the lights are synced to these backing tracks.",
"Most of the lighting scenes are preset. But most \"bumps\" are done manually by the LD who knows the songs.",
"Big budget pop shows are almost always played to a click track, which means the song always runs at the exact same speed. Then the entire lighting sequence can be programmed from a computer to run perfectly in sync. This can include lights, video, lasers, and anything else. Heck, chances are some of what you hear is pre-recorded too (modern pop music has too many layers of sound to reproduce with a normal band in a concert, so often they cheat). Smaller venues/shows will just have a lighting technician \"playing\" the lights. They won't have complicated synchronized patterns because it'll mostly consist of switching between preset patterns, but the changes go along with the song because a human is triggering them. They can also manually tap out flashes and bursts of light to the beat of the song. For even lower budget shows, often basic lights at least have a microphone that can pick up sound transients (like drums) and change colors or patterns when that happens. It'll be kind of dodgy, but still provides that feeling that the lights go with the music. I just did my first lighting gig just a week ago, with lasers and some software that I wrote myself, at a DJ event. My approach was to combine preconfigured patterns that I could tweak as I go with a \"tap\" feature that let me tap out the beat of the music and then the patterns would sync to that (though it doesn't stay in sync forever, so I had to periodically tap), and then also have beam patterns that I could literally \"play\" on a small piano keyboard, plus flashes and the like mapped to buttons. I think that worked out quite well for improvising out the show (mostly to songs I didn't know)."
],
"score": [
12,
9,
7,
4
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
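The last answer above mentions a "tap" feature: the operator taps along with the music and the software estimates the tempo so light patterns can follow the beat. A minimal sketch of how such a tap-tempo estimator could work, in Python; the 8-tap window and the 2-second reset are arbitrary choices, not how any particular lighting console behaves:

```python
import time

class TapTempo:
    """Estimate beats per minute from the operator's most recent taps."""

    def __init__(self, max_taps=8, reset_after=2.0):
        self.times = []
        self.max_taps = max_taps
        self.reset_after = reset_after

    def tap(self, now=None):
        now = time.monotonic() if now is None else now
        # Start over if the operator stopped tapping for a while.
        if self.times and now - self.times[-1] > self.reset_after:
            self.times = []
        self.times.append(now)
        self.times = self.times[-self.max_taps:]

    def bpm(self):
        if len(self.times) < 2:
            return None
        gaps = [b - a for a, b in zip(self.times, self.times[1:])]
        return 60.0 / (sum(gaps) / len(gaps))

taps = TapTempo()
for t in (0.0, 0.5, 1.0, 1.5):   # taps half a second apart...
    taps.tap(now=t)
print(taps.bpm())                # ...works out to 120 BPM
```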
eo2nmv | Why can phone cameras not take good photos of the moon? They always seem to make it 10x smaller than you can see with the naked eye. | Technology | explainlikeimfive | {
"a_id": [
"fe889dj",
"fe9wour",
"fe7wr2n",
"fe8jdss",
"febia8c",
"fe8ppu7",
"feaz8g6",
"feaqytm",
"fe8ftzi",
"fea1icw",
"fecja6o"
],
"text": [
"It has nothing to do with the brain or the ''moon illusion'', camera have wide angle lens to make sure u can fit a lot in the frame, it makes the foreground and background completely disproportionate. The longer and narrower the focal length, the more the foreground and background gets compressed. If you used a something near a 55mm lens, you would get the same impression as what you see, a wider lens would make the moon smaller and a longer lens would make the moon appear much much bigger. Take a look at this link, it shows the difference when using different focals. [ URL_0 ]( URL_0 ) & #x200B; You could get the a similar effect by placing two similar size objects on a flat surface 2 feet apart, get very very close to the first object then step back gradually, the object in the backgroud will look bigger and bigger as you step back. & #x200B; Edit : It is also how they create the vertigo effect in movies, basically they zoom in or out of something while moving closer or further away : [ URL_1 ]( URL_1 )",
"One problem unrelated to moon size is exposure. Phone cameras and most cameras on auto set their settings based on average brightness of whatevers in frame. The moon is insanely bright, and the night sky is pitch black. The average exposure of the picture is dark so the camera adjusts settings to make the whole picture brighter, turning the moon into a white circle.",
"Because your phone's camera has a *wide-angle lens* \\- that is, it's designed to see as wide as possible (\"zoomed out\" - you want more people to fit in that group pic, don't you?). More things in the frame means each thing is going to need to be smaller in order to fit. If you use a *telephoto lens* \\- lenses that are designed for looking at smaller areas but with better quality (that is, \"zoomed in\"), that small area is going to look *waaay* bigger because it occupies more space in the picture.",
"In addition to the wide-angle fact others have pointed out, I think it's also partly the psychological effect of the Moon being by far the brightest object in the sky. You can verify how small it really looks by covering it with a single finger, but it still seems to illuminate the whole night. I imagine most people, if asked to answer quickly without thinking about it, would guess the Sun appears bigger than the Moon, simply because of how much brighter it is. But the existence of total solar eclipses proves that sometimes the Sun appears even smaller than the Moon.",
"I took [these]( URL_0 ) on my cell phone the other night... Through my telescope...",
"1. Just looking at a picture of \"the moon,\" it doesn't make any sense to say that it's bigger or smaller than you can see with the naked eye. Most photos are significantly smaller than the things they are photos of. Take a picture of your cat (or dog, or hamster, or girlfriend, or Eiffel Tower, whatever...) with your phone and look at it on your computer monitor. It will be smaller than your actual cat/dog/hamster/girlfriend/Eiffel Tower. 2. Given #1, the way that it really makes sense to think of \"size\" in photos is relative - how big does thing X appear to be compared to thing Y, which is also in the picture. 3. The way to make the moon look \"big\" in a photo *as compared to something in the foreground* is to get far away from the thing in the foreground, and zoom in (this is called perspective compression). For instance, in [this]( URL_0 ) picture that I took, I was more than a mile away from the Washington Monument, and using a big-ass zoom lens. [This]( URL_1 ) was my setup. If I'd been closer to the Washington Monument, the moon would look smaller in comparison. 4. Phone cameras don't have big-ass zoom lenses. In theory, I could have snapped a pic with my phone at the same time as I took the one with my camera, and the *relative* size of the moon vs. the Monument would have been the same. They just both would have looked small and crappy.",
"Phone cameras are wide angle, so they have a wider view than your eyes. This means they fit more subject matter into the image and thus everything, including the moon, is a little smaller. In terms of lenses, your eyes are “normal”. Phone cameras are wide angle - they see a wider view. The opposite of those are the long lenses you see people shooting sports or wildlife with are telephoto - they see a narrower, more zoomed-in view. For taking good photos of the moon, you want more zoomed-in telephoto lenses.",
"The moon is about as big (in angle) as your thumbnail at arms length. It just so happens that your eye's \"high resolution\" area is about as big, so you can see all of the moon, as sharp as possible without moving your eyes around. A cell camera is optimized to take images much MUCH wider than that, so it would either need to have massive resolution so you could blow up any small area, or have zoom. (the fovea gives maximum resolution for somewhere around 1-2 degrees, the moon is about .5 degrees, and a cell camera captures around 50-90 degrees)",
"While I do believe that many of the reasons given here have some truth, I want to point out that what you see is not generated by taking one picture, but your brain creates your perception by integrating visual input over longer times and from different focus points. If you focus on a part of the scene that catches your interest, this part is then sampled even more detailed. I do believe that this is the cause why we perceive the moon as more domitating than it actually is. Works as well with all other things. Take for example a face of somebody sitting at the train station and focus for a while on that. You'll realize that it feels like your mind zooms in. PS: Not a specialist, just my best guess from combining what I learned during studies about perception with my personal experience.",
"People already explained about focal length... The other aspect that tends to eff up moon photos is light metering. When your camera is on auto, it changes exposure settings to try and make the photo somewhat reasonably exposed (like medium grey, ignoring color). It sees black skies and increases the exposure time to try and turn them grey. This overexposes the moon, so you tend to just get a white spot. This is an issue on any camera in auto, not just phone cameras. One generally has to manually set exposure settings to properly expose the moon and leave the sky black. The correct settings tend to be about what you'd use in the middle of the day.",
"You've got two different question here whether you know it or not. People are saying because of the wide angle, that's only part of the problem. The size of the moon means that it'll only cover a small part of the picture. Of course you could zoom in, but if you ever tried that you'll notice a horrible picture. Most modern phones have enough dots that you should be able to blow up the moon without any problems, so what gives? The bigger problem is that the moon isn't very bright. Allow me to explain. People think telescopes are an instrument designed to make something bigger, nothing is further from the truth. Look at the two ends of a telescope, the end you see through is much smaller than the end the telescope sees through. On a small telescope the opening facing the sky is around 20,000 square mm, while a low mag eyepiece is only 200 square mm, so around 100 times bigger. So the telescope is actually taking what it sees and making it smaller. What a telescope really does is collect light, and by putting the same amount of light in a smaller area it makes the image brighter. So the telescope is actually a light collecting instrument. Okay back to your phone. The actual aperture (the part that the phone sees through) is actually very tiny. around 20 square mm (don't know the exact specs). So very little light gets in there. A full moon is around 10000 times less bright than the sun. It's less obvious to you because the human eye is adaptable to a wide range of brightness ranges, your phone camera not so much. That also means 10000 times less light to generate the image. In order to compensate your phones camera has a few tricks it can use including taking the picture for a longer time and cranking up the sensitivity. Each one has problems, especially when taking a photo of the moon. Taking a picture for a long period can work, but the camera has to be still because if anything moves you get a blurry image. Increasing the sensitivity increases the amount of noise in the image. So you can do one of two things. Attach your camera to a telescope, which will take a better picture even if the moon is the same size. Or get a bigger camera. Those big pro cameras have much bigger apertures and can take much better pictures."
],
"score": [
4460,
179,
121,
40,
19,
11,
8,
8,
5,
4,
4
],
"text_urls": [
[
"https://s.studiobinder.com/wp-content/uploads/2019/02/understanding-focal-length-different-distance.jpg?resolution=2560,1",
"https://www.youtube.com/watch?v=tn6RBet9i5w"
],
[],
[],
[],
[
"http://imgur.com/gallery/lPT2cl0"
],
[
"https://imgur.com/zS0QShu",
"https://imgur.com/eOhAcP0"
],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
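The focal-length answers above can be put into rough numbers: the Moon is only about half a degree across, so through a wide lens it covers a tiny sliver of the frame. A back-of-the-envelope sketch in Python, assuming full-frame-equivalent focal lengths and treating the Moon's share of the frame as a simple ratio of angles, which is good enough to show the trend:

```python
import math

MOON_DIAMETER_DEG = 0.52  # the Moon's apparent size in the sky

def moon_fraction_of_frame(focal_length_mm, sensor_width_mm=36.0):
    """Roughly what fraction of the picture's width the Moon covers."""
    fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
    return MOON_DIAMETER_DEG / fov_deg

for f in (26, 55, 200, 600):  # phone main camera ... big telephoto
    print(f"{f:>3} mm lens: the Moon spans ~{moon_fraction_of_frame(f):.1%} of the frame width")
```

At a phone-like 26 mm equivalent the Moon is well under 1% of the frame width, which is why it shrinks to a bright dot; at 600 mm it fills a respectable chunk of the picture.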
eo34t9 | Why video game companies make such beautiful cinematic trailers about their video games, but they never lead to a full fledged movie on the big screen? | Technology | explainlikeimfive | {
"a_id": [
"fe8qp3t",
"fe7s48y",
"fe9gllf",
"fe7oead",
"fe98soz",
"fe931jo",
"fe9coj0",
"fe7n9kg",
"fe8w60j",
"fe7plru",
"fe9u6ls",
"fe9z50p",
"feanxeq",
"fe9pj83"
],
"text": [
"A lot of it comes down to budget as well. Blizzard puts out crazy sexy looking cinematics and it would look great on the big screen but Blizzard has also said that if they were to make a full length feature film that looked as good as the cinematics that it would be the most expensive movie ever made. At the end of the day the only reason companies don't make something is because it wouldn't make enough money to be worth it.",
"The storytelling and writing is different. The writing in games works in games, but the same writing sucks for a movie because games are meant to be played, not watched. That's the main reason why games that are turned into movies suck, and why movies turned into games suck. You wouldn't want to see a 90 minute movie of Master Chief shooting a bunch of flood creatures, but you would like to shoot flood creatures for 90 minutes. A cinematic should be used to transition from event to event and then go back to shooting.",
"Money. A beautiful cinematic trailer will get you to buy a game that is likely around $60 usd. A beautiful full-fledged movie will get you to buy a ticket for $15 and cost more than the entire production cycle for a video game most of the time. The goal of both products is to get your money not make you happy. When you already make video game money, it doesn't make financial sense to go and make movie money.",
"It has, on several occasions, and the movies were financial flops. See Final Fantasy the spirits within",
"There are certain lore deep games that could probably do it successfully, but most (good) games are made to be enjoyed as an active player, not a passive observer. The main issue with the lore deep games, however, is that your audience is going to be limited. For example: Kingdom Hearts. There’s enough deep lore and stuff going on that it could work, but there’s no way that you can fully include an audience outside of fans of the game.",
"Most effective video stories are effective because of the medium they are presented in. The last of us would not be interesting as a movie. World of Warcraft would not be interesting as a movie.",
"Anyone can look like an olympic runner for 2 seconds. You'll notice those cinematic trailers do not offer much in the way of dialogue or actual storytelling. Its much harder to tell a complete story in 2 hours, the trailer lets the actual gameplay do that so it doesnt have to",
"I doubt it would be cost effective. To make 90 minutes with those kind of visuals would take a huge amount of both time and money and it would require to make a huge box office to have a decent profit so it would be too risky to greenlit such project",
"Getting three minutes of content to put into a trailer campaign is a piece of cake. Seriously, you can do it even if you haven't actually got a fully fledged, carefully made product to sell. Remember Aliens: Colonial Marines? They had some glorious trailers for that garbage fire of a shoddy looking scam job, but barely a game to go with them. But you have to understand a few things. First is that these beautiful cinematic trailers take way longer to make than to watch. Hell, there are YouTubers making content out there, who shoot hours and hours of footage, just to get material together for a ten to fifteen minute video. That means watching the hours and hours of footage, curating out all the boring crap, selecting the good segments, refining them, adding graphics, animations, captions and the like. That takes flipping ages. So, making movies of any quality, takes an awful lot of time and patience and must either be done for the love of doing it, or with reasonable expectation of a return on that investment of time and patience, which brings us neatly round to.... Second, movies very particularly, do not get made unless someone with an awful lot of money and power, normally someone whose name is not household level, says they ought to be. They also do not get made unless the studio cranking the film out, believes it will make its money back and then some. Realistically, the track record of films based on computer games has been less than reassuring to investors, and when we are talking about making movies, we are really talking about securing investment. It is worth pointing out at this stage that movies that don't get funded, don't get made. It takes a lot of human and technical resources to make a movie, and without the reassurance of someone putting their money where their mouth is, those resources simply are not going to materialise. Third, once again, there is a stigma about computer games movies, a deep, abiding, and not in the slightest undeserved stigma, surrounding computer games movies. The stigma goes, basically they are terrible, and there have already been enough bad ones made, that no amount of success of computer games movies will EVER amount to a rebalancing of the scale in favour of the notion of a computer game movie. Basically, between the difficulty of making movies, the difficulty of making animated movies, the difficulty of selling computer games related movies to serious production houses, and the fact that computer game movies are often shit, its not surprising that these things don't happen often.",
"Prince of Persia, The Witcher, Assassins Creed, Warcraft, Resident Evil, Rampage, Silent Hill, Super Mario Bros.,Doom,Tomb Raider... I'm sure I forgot a few.",
"Square Enix tried valiantly with Final Fantasy : The Spirits Within... and hit the Uncanny Valley so hard it left mental scars.",
"Video game plots are usually equivalent to a b+ movie at best. What sells the game is how the player interacts with the mechanics and/or the environmental storytelling. Most game trailers, if they even have any narrative cohesion, would be like trying to sell you a short story that somehow lasts over 20+ hours. I think the difference between games and movies are great enough that it would be hard to translate between the two.",
"It’s surprisingly easy to make a 2 mn money shot that works well. I’ve produced/exec produced/co-directed/commissionned a few that made a splash back in the days : Haze - URL_3 I am Alive - URL_2 Ghost Recon - URL_0 Trust me - the sweat, the crunch, the tears, the endless approval loops, the actual physical fights over 2 mns costing 400k to 800k$. There’s no way any of those would make it into any longer form content without actual deaths. Or if it does get out with minimal casualties.. it’s not so great. A « good » example with Ghost Recon Alpha - URL_1",
"Money is the answer. Take Warcraft the movie as an example since it's the most recent I remember. Production costs of 160 million. Needed to make 380 million to break even.. This stuff consumes a lot of money and needs a lot of specialists in their specific workfield. Additionally: marketing purposes. You get people hyped and hooked on by epic Trailers, not gameplay. Last point: 2-3 minutes of storyboard are rather easy since in most cases Trailers either show a \"how did it come to the Situation we jump into as the player\" or an action scene that doesn't need story at all and rather cool Explosions; shoot outs and so on. In both cases you don't need to worry about story and character development, the boring middlepart of the movie, hybris, bridge. Everything that makes a movie a movie basically. Interesting enough though I wonder what the animated resident evil movies were supposed to be. Direct to DVD and definitely expensive judging by the production quality. maybe a love Project of capcom with no direct intend to make money."
],
"score": [
749,
463,
118,
45,
17,
16,
16,
12,
9,
6,
3,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[
"https://youtu.be/VhVx3jBXRSY",
"https://youtu.be/7-wAzlqzXH0",
"https://youtu.be/HZ6Aely9YrQ",
"https://youtu.be/kbrCZpaH9oM"
],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
eo48lu | Internet Cables on the Ocean Floor | All of the world's internet runs in cables on the ocean floor. How is that possible when we know so little about the oceans and when some parts of the ocean are too deep? | Technology | explainlikeimfive | {
"a_id": [
"fe8ph22",
"fe8f1k7",
"fe863z3",
"fe92psn",
"fe8e0a7"
],
"text": [
"The route for cables is checked before laying them down, in sort of the same way you \"call before you dig\" when installing a pool at home. The companies who own the cables want to make sure the path they're laying them on is relatively clear of issues (massive depth changes, volcanic activity, etc.) and mitigate cable damage with layers of protection. Like a coax cable, the data is sent along a proportionately small ammount of the cable, inside copper, alumninum, kevlar, and stranded steel. If there is damage you can tell where in the cable it is with a fiber optic cable tester, it times the response and strength of signals to give a distance estimate, then you send out a ship with a crane attachment to patch the cable. CNN has a nice report that goes into even more detail here: [ URL_0 ](https:// URL_0 )",
"It's all relative. The number one threat to Internet in your area = a construction worker with a backhoe. Cables buried underground in a city are much more likely to be damaged than undersea cables. There is much more digging going on. While the ocean is very deep in a few places, they don't run cables there. The ocean, below a couple of km in depth, is amazingly free of backhoes, life, and digging in general. A cable can just lie there undisturbed, sending along Internet.",
"The ocean floor has been mapped we know what the surface of the floor is shaped like, to lay the cable you get a large boat and slowly pay out the cable from the back of the boat and it sinks to the ocean floor.",
"May I recommend this magnificent article about undersea cables from Neal Stephenson. Mother Earth, Mother Board: URL_0 I read anything this guy writes.",
"The ocean isn't too deep. And we can make a lot of cable. There's 700,000 miles of cable along the ocean floor. That's more than going to the moon and back. And it's made durable enough so nothing is going to damage it."
],
"score": [
71,
36,
14,
5,
5
],
"text_urls": [
[
"cnn.com/2019/07/25/asia/internet-undersea-cables-intl-hnk/index.html",
"https://cnn.com/2019/07/25/asia/internet-undersea-cables-intl-hnk/index.html"
],
[],
[],
[
"https://www.wired.com/1996/12/ffglass/"
],
[]
]
} | [
"url"
]
| [
"url"
]
|
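The first answer above mentions locating a break with a fiber tester that times a signal's reflection. The arithmetic behind that is simple: light moves through glass at roughly c divided by the fibre's refractive index, and the test pulse has to travel out and back. A sketch in Python; the index of 1.468 is a typical figure for fibre glass, not the spec of any particular cable:

```python
C_VACUUM = 299_792_458  # speed of light in vacuum, metres per second
GROUP_INDEX = 1.468     # typical refractive index of optical fibre glass

def fault_distance_km(round_trip_seconds, group_index=GROUP_INDEX):
    """Distance to a fault, from how long a test pulse's echo takes.

    The pulse goes out and comes back, hence the division by 2.
    """
    speed_in_fibre = C_VACUUM / group_index
    return speed_in_fibre * round_trip_seconds / 2 / 1000

print(fault_distance_km(0.01))  # a 10 ms echo puts the break roughly 1000 km out
```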
eo9gkr | How does a public/private key encryption work? | If something can be encrypted with a public key, why can't someone just reverse engineer the encryption using the public key to get the original data? | Technology | explainlikeimfive | {
"a_id": [
"feafc6k",
"feag05j"
],
"text": [
"Certain mathematical operations are difficult to reverse. It's easy to take two prime numbers and multiply them. But it's hard taking the product of that multiplication and figuring out the two original numbers. The larger the numbers, the more difficult the problem becomes, so public key mechanisms such as RSA use numbers that are hundreds of digits long. Without going into detail, this problem (prime number factorization) along with other difficult problems (discrete root and discrete logarithm) are the basis for the RSA public key encryption. If you want to actually know how RSA works, you can search this sub, since this question was asked many times.",
"Public key encryption uses mathematical \"one way\" functions. You encrypt something with the public key, but need the private keys to decrypt it. There's a relationship between the private and public keys, but you can't easily figure it out. One example might be multiplying two prime numbers together. Can you tell me what two numbers you need to multiply together to get 6,474,338,447? > !69,109 x 93,683! < The difficulty of determining what the private keys are from the public key goes up much faster than the difficulty of using bigger keys. Thus you simply increase the size of the key until the amount of computing power needed to \"reverse engineer\" the public key becomes so absurdly large nobody will bother to try."
],
"score": [
10,
10
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
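To make the answers above concrete, here is the textbook RSA arithmetic with deliberately tiny primes, sketched in Python (3.8+ for the modular inverse via pow). Real keys use primes hundreds of digits long plus padding schemes; this is not production cryptography, it only shows why publishing n and e does not give away d:

```python
# Toy RSA with tiny numbers, purely to show the mechanics.
p, q = 61, 53                      # the two secret primes
n = p * q                          # 3233, published as part of the public key
phi = (p - 1) * (q - 1)            # 3120, easy to compute only if you know p and q
e = 17                             # public exponent, chosen coprime to phi
d = pow(e, -1, phi)                # private exponent: modular inverse of e (2753)

message = 65                       # a small number standing in for some data
ciphertext = pow(message, e, n)    # anyone can do this with the public (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can undo it

print((n, e), "public key")
print(d, "private key")
print(ciphertext, recovered)       # recovered == 65
```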
eobwog | I get what Deepfakes are used for, but Wikipedia went right over my head explaining how they are made - so how is this done, exactly? | Technology | explainlikeimfive | {
"a_id": [
"feb9xwd",
"feb97jm",
"fecod1x"
],
"text": [
"Basically it uses a few data points to infer the rest. Like if you had to guess the next numbers in this series. 1, 3, 5, 7, 9, 11, 13, 15... you use the data and you have to guess the rest. Deepfakes takes actual footage of those people and notes their facial expression, lip movements, voice, etc and then it basically fills in the gaps of whatever you want it to say or do. For speech, it's all sound waves and transitions. So you get a sample of their voice and you can basically fill in the gaps. Same thing for facial expressions and lip movements. It's not always perfect but with feedback it can fine tune itself.",
"There’s no real way to explain this, because the program was trained to be able to do this. It’s like asking, how did they make this great athlete? Like did you want his training schedule? For Deep fake the most I can say is it’s a three part system. There’s one AI that can tell the fundamental expression from videos, there’s a second AI that can morph a photo of somebody’s face to any specified expression, and there’s a third AI that can photoshop a face onto an existing photo with natural blending.",
"You have to understand what a neural network is. A neural network is basically a bunch of math that takes some input and spits out some output. How it does that is controlled by a huge number of \"dials\" that relate the input to its output, through a series of steps in between. It's a lot of multiplication and addition, basically. The structure of the neural network (the connections) are preset in some regular pattern (a huge mesh basically), but the strength of each connection is variable. The idea is that no human can figure out how to \"set up\" all those knobs manually, so instead you give a training program a bunch of examples of what the outputs and inputs should look like, and it automatically tweaks all the knobs to try to make the network get closer and closer to the desired outputs. This is all inspired by how brains work with many neurons connected together with different strength connections that change to \"learn\", hence why we call them neural networks. Because they're just a massive pile of numbers, nobody \"understands\" how they work, but we can have computers train them to do useful things. Now deepfakes. First you take some regular old boring face detection technology (the kind that's in your camera/phone) and use it on a bunch of videos of the original person (A) and the person you want to replace their face with (B). This gives you the positions of the faces. You then use normal image processing stuff to pull out just the faces. At this point it's a good idea to have a human check the frames and throw away the ones that the algorithm detected wrong. Then you feed that into an autoencoder. An autoencoder is a type of neural network that turns an image into a smaller output, basically a small set of numbers (something like 1000 or so), then turns that *back* into an image. You train the network so that it can reproduce the original face (so input = output). The idea is that the simpler set of numbers in between eventually captures the \"variation\" of the face - the parts that change, like expression, eye movement, lighting, angle, etc - and the neural networks on either side learn how to interpret Person A's face into that set of parameters (an \"encoder\"), and then turn them back into Person A's face (a \"decoder\"). So you feed the encoder an image of Person A's face that is smiling and looking to the left, and you get out some set of 1000 numbers that in some way or another represent \"smiling, looking to the left\", which you can turn back into (something close to) the original face with the decoder. Now you do the same thing with Person B and a separate network. This isn't directly useful as is, because if you use a totally separate network, both networks are going to come up with different ways of \"representing\" an expression in that small set of numbers, so you can't turn one face into the other, you'd just get garbage. Like, the \"language\" that one network uses to say \"smiling and to the left\" might mean \"angry and looking up\" to the other network. To fix that, the trick is that you train both networks at once, and you actually use the *same* network for the encoder side. So you simultaneously train for: - An encoder (E) that can turn Person A's face into some set of parameters - A decoder (D1) that can turn those parameters back into Person A's face - The SAME encoder (E) that can turn Person B's face into some set of parameters - A decoder (D2) that can turn those parameters back into Person B's face And that's the magic. 
Now you have a neural network that can \"read\" a face of *either* person, and two neural networks that can \"create\" each of the two faces based on those read parameters. When person A smiles and looks to the left, the encoder spits out parameters that represent that, and then the decoder for person B can turn those into an image of person B smiling and looking to the left. So once you're done with the training, you just run a video through face detection, then the encoder, and then the result through the decoder for the *other* person, and insert the faces back into the original video. Now you have a deepfake. [Here's]( URL_2 ) a nice Ars Technica article that goes into more details with diagrams, and [here]( URL_1 ) is an excellent video intro to neural networks from CGP Grey. This [footnote]( URL_0 ) is closer to the way the deepfake neural networks work. Edit: a word"
],
"score": [
9,
7,
5
],
"text_urls": [
[],
[],
[
"https://www.youtube.com/watch?v=wvWpdrfoEv0",
"https://www.youtube.com/watch?v=R9OHn5ZF4Uo",
"https://arstechnica.com/science/2019/12/how-i-created-a-deepfake-of-mark-zuckerberg-and-star-treks-data/"
]
]
} | [
"url"
]
| [
"url"
]
|
|
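The long answer above describes the core trick: one shared encoder that reads any face into a small set of parameters, plus one decoder per person that turns those parameters back into that person's face. A heavily stripped-down sketch of that structure, assuming Python with PyTorch; real deepfake tools use deep convolutional networks, face alignment, and masking, and the random tensors below merely stand in for real face crops:

```python
import torch
import torch.nn as nn

LATENT = 256       # the small "expression parameters" vector
IMG = 64 * 64 * 3  # tiny 64x64 RGB face crops, flattened

def make_decoder():
    return nn.Sequential(nn.Linear(LATENT, 1024), nn.ReLU(),
                         nn.Linear(1024, IMG), nn.Sigmoid())

encoder   = nn.Sequential(nn.Linear(IMG, 1024), nn.ReLU(),
                          nn.Linear(1024, LATENT))  # shared by both people
decoder_a = make_decoder()                          # redraws person A
decoder_b = make_decoder()                          # redraws person B

opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder_a.parameters()) +
                       list(decoder_b.parameters()), lr=1e-4)
loss_fn = nn.MSELoss()

faces_a = torch.rand(8, IMG)  # stand-ins for aligned face crops of A
faces_b = torch.rand(8, IMG)  # stand-ins for aligned face crops of B

for step in range(3):         # real training runs for many hours
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
            loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: read person A's expression, redraw it as person B.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```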
eoexqf | how did telephones work in the early 1900s? | Technology | explainlikeimfive | {
"a_id": [
"fecbdnf"
],
"text": [
"Do you mean the decade the 1900s or the century the 1900s? If you're talking about the old timey box on the wall with a crank telephones, The crank is a generator that rings a bell for the operator, you tell the operator who you want to talk to, and the operator then plugs your line into either the line you want to call, or a line to another telephone exchange, closer to the person you want to call. If you're talking about the rotary dial telephones, there were automatic relays triggered by the number of pulses generated by the dial as it rotates."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
eoiyss | What do they mean by “signal failure” on London Underground train network | Technology | explainlikeimfive | {
"a_id": [
"fed2zo3"
],
"text": [
"Train signals are a bit more complicated than traffic lights. Because trains take so long to stop, the safe following distance for a train is something like 90 seconds. (Compare this to a car where the safe distance is around 2 seconds). A train traveling at the minimum safe following distance is so far back that the driver normally cannot actually see the train he is following. So the main purpose of the signaling system is to tell a train driver how far ahead the next train is. London Underground uses a number of different systems to do this. All these systems have a couple of features in common. -Firstly, there's a system for detecting the current location of all the trains. This may be using sensors wired to the tracks, GPS or RFID tags mounted on the sleepers. - Secondly, there's a system for figuring out what the safe following distance is for each train. This may be computerized, but in many cases it's much more primitive than that. Often times, it relies on the various components being physically installed a safe distance apart. So a particular sensor in the track does nothing more than switch a light on and off, but the light is physically installed a safe distance behind where the sensor is installed. - Finally, there's a system for communicating to a driver where the train in front is. In modern systems this involves wireless computer networks with a signal computer telling a computer in each train exactly where the train in front is and so on. The more traditional thing is coloured lights along the tracks. A 'danger' light means the train in front is closer than 90 seconds. A 'caution' light means the train in front is pretty much exactly 90 seconds in front (perhaps a little bit further away than that), and a 'proceed' light means the train in front is quite a bit more than 90 seconds in front. All three components need to be working, otherwise train drivers lose their ability to 'see' the train in front of them. And since driving blind is dangerous, it brings everything to a standstill."
],
"score": [
16
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
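The coloured-light scheme in the answer above reduces to a small rule: a signal shows danger if the next block is occupied, caution if the nearest train is one block further on, and proceed otherwise. A toy version in Python; real interlockings apply many more rules, and the block numbering here is arbitrary:

```python
def signal_aspects(occupied, num_blocks):
    """Aspect shown at the entrance to each block, given occupied blocks."""
    aspects = []
    for block in range(num_blocks):
        if (block + 1) in occupied:
            aspects.append("DANGER")   # a train is in the very next block
        elif (block + 2) in occupied:
            aspects.append("CAUTION")  # a train is two blocks ahead
        else:
            aspects.append("PROCEED")  # plenty of room
    return aspects

# Trains currently sitting in blocks 3 and 7 of a ten-block line: the
# signals just behind them (at blocks 2 and 6) show DANGER, the ones
# behind those show CAUTION, and everything else shows PROCEED.
print(signal_aspects(occupied={3, 7}, num_blocks=10))
```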
eokmbg | how does my car know to stop playing my music and switch to FM radio when traffic updates come on? | Like I'm assuming theres some voodoo in the airwaves? | Technology | explainlikeimfive | {
"a_id": [
"fed9vqs",
"fedgl5t",
"fedfsvm"
],
"text": [
"Radio checks your last tuned frequency for a special signal that you can't hear, like a very high beep noise, every second. If the radio \"hears\" it, it switches to the radio.",
"I'm starting to think this isn't a thing in America?",
"Never heard of or experienced this feature. What car do you have?"
],
"score": [
18,
14,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
eon6aj | Why do downloads need to accelerate? | Technology | explainlikeimfive | {
"a_id": [
"fedqyde",
"feeggdr",
"feeodsc"
],
"text": [
"Your computer and the one you're downloading from *do not know* what the max speed is. The available capacity in the network in between them fluctuates. So they start slow and ramp up in the part of the Transmission Control Protocol called [slow-start]( URL_0 ). Doing it this way allows them to *find what the max speed is* without flooding the network with undeliverable packets.",
"Modern digital networks (including the Internet) divide data into *packets*. Each packet's size is usually between 40 and 1500 bytes. A typical Internet download goes through many different routers (10 or more). A router can \"drop\" packets it's unable to send. Dropping can be due to hardware limits. For example, a typical router might have 30 ports, each capable of sending 1000 MB of data per second. If 100 MB worth of data come in on each of the first 29 ports, and all of it is supposed to be sent out on the 30th port, you're trying to put 3000 MB per second of data into a pipe that can only handle 1000 MB per second. The router would drop 2000 MB per second worth of packets at random. Dropping can also be programmed for software-based reasons as well as hardware limits. For example, if a customer's only paid their ISP for a plan that offers 100 MB per second, the ISP router might be programmed to drop packets above that limit, even though the port hardware the customer's connected to is physically capable of handling 1000 MB. Downloads tune their speed based on dropped packets. When a download starts, a small number of packets (3 or so) are sent. If they all arrive, packets are sent faster and faster, until packets start being dropped. Then the packet send rate will be held steady, at just below the level that causes packets to be dropped, but every few seconds it will start to increase again (in case the maximum rate has changed over time). That's why connections start slowly. You may ask how the computer you're downloading *from* know which packets arrived? The answer is that your computer actually sends that information back \"upstream\".",
"This is because of something called \"Window size\" and how \"Packets\" (pieces of file) use it. ELI5: You *(Your PC)* and your friend timmy *(Other PC)* are trying to share blocks *(Packets)* because you want to build a castle *(Full file)* with timmy's blocks (and you have none). You and timmy are new friends so he isn't sure how strong you are or how many blocks you can carry in one load *(Window size)*, so he starts out small, just a block or two as a safe guess *(Starting Window size)* Every time you let timmy know you successfully carried the amount of blocks timmy handed you back to your house, he figures you are strong enough to carry 1 or 2 extra in the next load so you can get all the needed blocks in your house faster. *(Increasing window size)* But oops, you drop some blocks *(Dropped packets)* on a later load and let timmy know. So timmy reduces the block amount to what your last safe trip was *(Safe Window size).* This process continues until you have the full amount of blocks at your house. ELI15(?): This is how TCP (Transmission Control Protocol) Window size works, The source PC sends a burst of info (window) and waits to hear an \"Okay, got it\" back. Because this protocol values data integrity over speed, its starts with a small burst that is almost guaranteed to make it. With every \"Okay, got it\" the source pc hears, it increases the size of the next burst so data travels faster. If no \"Okay, got it\" is heard, it decreases burst size (and thus speed) to what was the last safe amount. UDP (User Datagram Protocol) is the reverse, it'd be like if timmy just chucked blocks at your houses window. He doesnt know if blocks make it and he doesnt care, timmy wants speed."
],
"score": [
256,
14,
7
],
"text_urls": [
[
"https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start"
],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
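The answers above describe TCP's slow start and back-off. A simplified simulation of that ramp-up, sketched in Python; real TCP stacks (Reno, CUBIC, BBR and friends) are far more involved, and the capacity and round counts here are arbitrary:

```python
def simulate(capacity_pkts, rounds):
    """Model a congestion window: double until loss, then back off and
    grow gently. One loop iteration stands for one round trip."""
    cwnd, ssthresh = 1.0, float("inf")
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        if cwnd > capacity_pkts:      # the network started dropping packets
            ssthresh = cwnd / 2       # remember half of that as the safe level
            cwnd = ssthresh           # back off
        elif cwnd < ssthresh:
            cwnd *= 2                 # slow start: double every round trip
        else:
            cwnd += 1                 # congestion avoidance: +1 per round trip
    return history

print([round(w, 1) for w in simulate(capacity_pkts=64, rounds=12)])
# doubles 1, 2, 4, ... up to the capacity, overshoots, backs off, then grows gently
```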