Fields per record: q_id, title, selftext, category, subreddit, answers (dict), title_urls, selftext_urls
7lz8g1
How do places like call centers have outgoing calls with the same identity (one phone #), but at home if someone is using the line we can't make calls?
Technology
explainlikeimfive
{ "a_id": [ "drq6sy3" ], "text": [ "It’s just a simple trick performed by call center software. The outgoing phones are all assigned a single identity that masks the actual number to display on the recipient’s caller ID. This is similar to how companies can use one mass email system to send millions of emails with specific or variable from names and from addresses, even though they are coming from the same IP address. Source: mass communications tech for a large organization" ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7m11m7
How can a smartphone camera, with such a small lens, detect different levels of depth to apply a portrait effect?
Technology
explainlikeimfive
{ "a_id": [ "drql3ge", "drqlgzl" ], "text": [ "1 - The device identifies what is in the first plane (using the parallax effect of dual lenses/sensors, similar effect with dual pixel sensors, or AI with single lens/single sensor) 2 - Whatever is not in the \"first plane mask\" is blurred by software after the actual image is taken - the background is captured with good focus (unless the main subject is so close that auto focus really creates the bokeh by itself), the blurr is \"photoshopped\" automatically afterwards.", "Ta da! A blog from Google about the technology. Long story short: dual cameras or convolutional neural networks. URL_0" ], "score": [ 6, 3 ], "text_urls": [ [], [ "https://research.googleblog.com/2017/10/portrait-mode-on-pixel-2-and-pixel-2-xl.html?m=1" ] ] }
[ "url" ]
[ "url" ]
7m1uru
How can Tesla's grid battery solution make sense when it can power 30k homes for only one hour? (South Australia)
Technology
explainlikeimfive
{ "a_id": [ "drqpd9c", "drqpn84" ], "text": [ "You’re definitely right, the battery system won’t stop a full blackout. Instead its purpose is to help buffer out periods of very high load until the power stations can respond.", "Batteries in a grid help to smooth out the difference between \"real\" power production and demand. Power production from coal plants, natural gas, wind, hydro, or solar. And demand from your house. Traditional power production plants take a VERY LONG TIME to turn on and off. And demand can go up and down in an instant. So what happens when everyone turns on their AC over summer? Well the grid says shit there is too much demand, let me turn parts of the grid off to reduce demand and we get blackouts. The battery comes in to play here, and becomes instantly another power source until more power generators can be turned on. Lets say the reverse happens and there isn't enough demand, in a normal grid we just burn that energy because we can't store electricity. If you are into bitcoins you just mine bitcoins. With a battery you dump electricity into the battery for use later. TLDR: You are not suppose to run on the battery all the time, it acts like a temporary solution." ], "score": [ 16, 11 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
7m2ebi
Why don’t airports and air travel companies update from the staticky ATC systems to something that can be heard and understood more clearly?
Technology
explainlikeimfive
{ "a_id": [ "drqtqim" ], "text": [ "Your cellphone also loses signal if it goes into a tunnel. Your cellphone is fm radio, pretty, nice, but weak signal. Plans are on lower frequencies, AM radio. Shittier quality but a much stronger signal strength." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7m4gg2
Do IoT devices or sensors have to have IP address functionality in their firmware? Or do they have to use the Internet Protocol Suite?
Technology
explainlikeimfive
{ "a_id": [ "drra89m", "drrakfm", "drrb2jc", "drrdqki", "drrerid", "drratnq", "drrdz0m", "drrhytd", "drrhlea" ], "text": [ "If they are using IP (e.g. TCP/IP or UDP) then yes they need their own IP address. They have the IP address handling and the network stack in their firmware. Many IoT devices even run little mini linux distros inside them as \"firmware\". Source: work with IoT stuff", "They don't *have* to, you could have a house full of sensors using simpler protocols and one or two relays that speak IP to do the \"cloud\" stuff, but it's common anyway, at least partly because the programming is easier for many developers.", "An IoT device needs to have an advanced enough processor embedded in it to handle *some sort* of networking. It doesn't necessarily have to be a full TCP/IP stack, it could use some simpler network locally and have a more powerful local controller/hub bridge the gap to the outside world.", "You can have 1 item that acts like a server that all the clients talk to on their own protocol (MODbus, CanBus, LonWorks, Honeywell SDS). That one server like item might be the only gate keeper with regular tcpip protocols", "Most do, but it is not required. Many devices from the IoT ( a term I don't really like for its vagueness ) are in fact based on some variation of a Linux kernel. You might not be aware of this, but developing a platform where you can just build applications to actually do stuff with your sensors is actually a difficult challenge. To ease that, a lot of manufacturers will start with some existing stuff and it just happens that Linux is pretty much ubiquitous in this domain. So most devices will have a full IP stack because they run a stripped down Linux, which they do because no one wants to start from scratch. EDIT: At some point, to be on the \"Internet of Things\", you'll need an IP address, but it is possible to have several devices connected to a single hub within a building without internet. Zigbee is a protocol that allows this : URL_0", "It is not necessary to have IP addresses for each IoT device. The data to and from the device can be directed using only the MAC address of the device. Every piece of networking hardware has a MAC address to uniquely identify it.", "Other have this mostly covered, but another point is that any new IoT device is likely to be IPv6 compatible where each network has the capacity of the entire IPv4 space (~4 billion addresses), ~~so it makes sense to just give them an IPv6 address.~~ edited for dumb", "Additional questions How does IOTA change all this, or is this a completely different thing than the question?", "They need to be assigned an IP address, yes. This IP wouldn't be defined in the firmware though, it would likely be assigned via DHCP. Instead, the device would have a set MAC address, which defines the \"name\" of the specific connection. Basically, a device says, \"Hey interwebz, I'm [mac address]. Give me an IP address!\" Router says, \"Sure, how about [open IP address]?\" Device says, \"Ok, I'll use that one!\"" ], "score": [ 108, 42, 27, 12, 9, 5, 4, 3, 3 ], "text_urls": [ [], [], [], [], [ "https://en.wikipedia.org/wiki/Zigbee" ], [], [], [], [] ] }
[ "url" ]
[ "url" ]
7m4s1a
What will happen if I put a new battery alongside an almost dead battery in a toy or remote?
Technology
explainlikeimfive
{ "a_id": [ "drrfsvs", "drrgoox", "drrht54", "drrgpzz" ], "text": [ "It depends on the toy. If they are in parallel (both plus sides connected to each other, both minus sides connected to each other) you rapidly charge (or try to charge) the old one while rapidly discharging the new one. In the best case you just break the batteries, in the worst case you start a fire with toxic materials. Don't do that. If they are in series, the voltages just add. The toy might work or not depending on the toy and the batteries, but even if it works, it won't work long as the old battery will stop supplying a sufficient voltage quickly.", "Here's the thing about an almost dead battery. Whenever you put a load on a battery there is a voltage drop. The closer to dead the battery is, the larger the drop. So if a new battery is 1.5V it may be 1.4 when under load. A dead battery may have 1V but drop to nearly 0 under load. So 2 new batteries in series would be 3v no load and 2.8 under load. 1 new 1 dead would be 2.5 no load, 1.4 under load. So most of the time a dead battery is not doing any good, and if anything is just doing harm be draining the good battery.", "I had a atari type game it was just a joystic and buttons with a few games and you plug the cables into the tv and it took 4 batteries but i found out you only had to put 2 and it would still work i thought it was pretty cool i dont know the answer to your question but i just wanted to share that", "Assuming they're in series, which they usually are. Imagine you have two 6-foot ladders, and you attach them together end-to-end to reach your 12-foot roof. After a certain amount of usage, the ladders break and become unusable. Since you are always using both these ladders at the same time to reach the roof, they break apart at the same time. Attaching a broken ladder to a new, 6-foot ladder won't help you reach the roof. You need two new ladders. Here, a 6-foot ladder is a working battery, a broken ladder is a dead one, 6-feet is the voltage of each battery, and 12 feet is the voltage the device requires." ], "score": [ 84, 82, 14, 13 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
7m55vt
What are undersea cables? What is their purpose, how were they built, and what would happen if they were cut?
Technology
explainlikeimfive
{ "a_id": [ "drrfzwm" ], "text": [ "> What are undersea cables? Thick cables that are under the ocean, exactly what their name suggests. > What is their purpose? We're using one right now to communicate over the internet assuming you aren't in the US. They provide the communication connections across the oceans, or at least they did until satellites became a thing. > How were they built? [Here you go]( URL_0 ). They basically just used modified ships and dropped it in. > What would happen if they were cut? Well there are several of them so cutting one would only reduce bandwidth. If you cut all of them you would break the internet and a lot of other communication techs since those cables are kinda important for that." ], "score": [ 8 ], "text_urls": [ [ "http://www.independent.co.uk/news/science/how-are-major-undersea-cables-laid-in-the-ocean-9993232.html" ] ] }
[ "url" ]
[ "url" ]
7m57ks
Difference between LED, AMOLED, LCD, and Retina Display?
Technology
explainlikeimfive
{ "a_id": [ "drrhfsd", "drrhaz5", "drrj1rb", "drrkjgu", "drrpfif", "drrhdyr", "drrlms1", "drrhvuv", "drrkz58", "drro0d9", "drrq7xd", "drs2od5", "drrqeej" ], "text": [ "So these are terms that refer to some fundamentally different things. I'll throw a few other terms in the mix that will hopefully clarify things: ###Display Technology * Cathode ray tube (CRT) where an electron beam is used to excite colored phosphors on the inside of a glass screen. You may have heard it referred to as a \"tube TV\". This is pretty old stuff, and is the earliest display technology for TVs. * Plasma displays, where a gas inside each pixel is made to glow. This is now pretty outdated, but still way newer than CRTs. It was especially common back when LCD TVs were new, and lower quality than they are today. * LCD (liquid crystal display). This is the most common type of display tech for televisions. There are three different colors of pixels (red, green, and blue) that can be made more or less opaque to let through light being created by a backlight behind the screen. The combinations of red, green, and blue can be used to form millions of different colors. * AMOLED (active matrix organic light emitting diode). Each pixel is made of of individual little lights that don't need a backlight. This is newer, and is being used in a lot of newer phones, but is still very expensive for large TVs. ###Backlight technology Note that backlights are only needed for LCD displays * Cold cathode. This uses a light similar to the overhead fluorescent lights used in stores and office buildings. * LED. This uses LEDs (light emitting diodes) to provide the backlight. Newer TVs will have hundreds of individual LEDs to provide even lighting and the ability to dim different sections of the screen to provide better contrast. ###Other stuff * Retina Display. This is just a fancy Apple buzzword for having lots of pixels that are really tiny, so you can't see the individual pixels on the screen even when you look pretty closely.", "Retina display refers to a display with pixels small enough that the human eye is physically incapable of distinguishing the difference between adjacent pixels, at a given distance. This is kind of funky because our eyes don't work with pixels but it's probably a decent approximation. LCD stands for liquid crystal display and basically works by having pixels made of liquid crystals and by applying a certain voltage they will let through different amounts of red green and blue light. The light comes from a backlight (typically one or many LEDs these days). OLED stands for organic light emitting diode and has tiny colored LEDs in each pixel. This is why a black pixel can emit zero light unlike an LCD which just attempts to block all light from the backlight. I'm not sure what AMOLED is and I just came here for karma, not to do work.", "Retina Display is not a technical designation, it's a marketing term. There are numerous display resolutions available in PCs (FHD, QHD UHD, etc) and Apple wanted to have a trademarked way to describe their display resolution that nobody else could legally use to make it sound like a unique offer. Depending on the device and screen size the term \"Retina Display\" can refer to significantly different resolutions and varying pixel density, though generally it means the pixel density is high enough that you cannot make out individual pixels at standard viewing distance. 
The Microsoft version of this is \"PixelSense\", which is again a marketing term rather than anything that has technical meaning.", "Something relevant that hasn't been explicitly mentioned is that AMOLED black = nothing. That's why blacks look so good. It's in Samsung products but also Google and Apple", "I have to deal with display technologies all the time in my line of work. Here's the major points: **LCD** is like an image being illuminated by a backlight. The backlight can mean that the viewing angles aren't necessarily fantastic and if the backlight is poorly done it can be viable around the edge. This also means a true black isn't achievable. However, more recent LCD display technologies like IPS and PLS, use larger RGB sub pixels and vastly improve uppon the technology. **LED** just means the backlight being used is LED lighting. **OLED and AMOLED** are essentially the same, the only difference being the way the transistors are handled. These screens don't need backlights- they make the light themselves. These are a hot newish technology because we can make them bright and we can make them thin. But they have some huge problems. The blue diodes we use in OLED decay at a very rapid rate. Have a Samsung Galaxy S5 and beyond? Pull up a full screen all grey image and you'll see the issues: burn in and a warmer (orange) color shift. Have an iPhone X? You'll see these problems more and more the longer you have your phone. It's a pretty bad technology in that regard. Far worse than we had with Plasma. It's important to note that OLED screens are not built to last. And though they're touted as high end, we have still not created a great version of OLED. OLED does have the advantage, like laser projection, of being able to display a true black. **Retina Display** doesn't mean anything. It's a silly Apple marketing term that just means more pixel density, but it's not even properly defined. Basically by Retina, Apple means any display technology (and they do mix them) in a device, but with pixels small enough to look smooth. It doesn't mean more resolution (because their phones actually have pretty poor resolution.) It just means decent resolution per inch. **Quantum dot** isn't one you asked about, but it's one to keep an eye out for. It can't, in its current consumer state, display a true black like OLED. But it has better accuracy at high brightness, it can get brighter, and most importantly, it doesn't suffer from burn in. With more development, it has the potential to be the OLED killer.", "LCD and LED are screens with white backlights, which have moving lens (pixels) that physically move to bend the light from the backlight and produce color AMOLED have no backlight. The pixels are organic and produce their own light. This allows the screen to be thin as well as produce true blacks. Retina is nothing more than a marketing term. Apple uses regular LCD/LED and slaps on Retina to make it sound more appealing", "There's a whole bunch of different display technologies out there today. LCD (liquid crystal display) being the most common. As the name suggests, you have some electrically sensitive crystals that can polarize light when you pass a current through them. Sort of like a high tech Venetian blind. LCDs don't produce light on their own though. (Think the original Gameboy.) So they need a backlight to make the screen visible. Originally they used bulky CCFLs (sort of a cross between a neon lamp and a florescent tube), but were eventually replaced with LEDs. 
These were marketed as LED TVs to differentiate them, and make an easier upsell. The main advantage with LEDs is you can make thinner, more energy efficient displays. Nearly all LCD displays use them now. The problem with LCDs is they can't display true black. The best ones can block most, but not all light from the backlight. So blacks will always look a bit washed out, resulting in reduced contrast ratio and colour accuracy. CRT and plasma displays can produce true black, but they have their own shortcomings in regards to size and power consumption. OLED is the next gen technology to replace them. OLED stands for organic light emitting diode. They're tiny LEDs made using an organic material that emits light in response to electrical current. An AMOLED display is a matrix of these, with each sub pixel (the red, green, or blue bits of a pixel) being its own individual OLED. They generate their own light, so a backlight isn't needed. And since you can turn them off completely, it can display true black. Hence better colour accuracy and contrast. Using organic materials also allows for thin and flexible displays. They do have some shortcomings though. They consume more power than LCD panels when showing a lot of white, like a text document. There's also lifespan issues with blue OLEDs. Lastly, they're quite a bit more expensive than LCD displays. Though prices have dropped significantly in the last 10 years. A Retina display is just a marketing term Apple used when the iPhone 4 first came out, to differentiate it from older devices. Basically anything with a 264ppi (pixels per inch) display or higher. Which is basically every phone now. At that point, the individual pixels are so small that that the average person would be unable to see the individual pixels at the closest comfortable viewing distance. A lot of low resolution LCD displays had a noticeable \"screen door\" effect, including early iPhones and the OG iPad, which is what the high PPI displays sought to address.", "LCD and LED are mostly the same, both being LCD panels. The difference is the backlight technology. Displays listed as LCD use fluorescent tubes to light them where as LED displays use LED lights to light them. They have their strengths and weaknesses, LED is generally considered better as they are often brighter, more energy efficient, and can sometimes utilize dimming zones to improve contrast. AMOLED is Samsung's OLED technology. OLED is unique where instead of the image being lit by a light behind the screen, each pixel produces its own light. The main advantage to this is near perfect blacks since pixels showing darker parts of an image can show less or even no light at all unlike LCD or LED. They can also be very energy efficient, only turning on parts of the screen that are needed. Retina display is Apple's term for a high PPI/DPI display, which means more than average pixels per inch. A retina display is usually LCD or in the case of the iPhone X, AMOLED. A higher DPI generally means more detail for the size of screen. It's not anything special, there are many displays out there that are higher DPI than Apple's retina displays, Apple just has a special word for it.", "As an Electrical Engineering student and avid graphics geek, I love this stuff! Great question :D **Displays** Computer screens are made up of a grid of \"pixels\", which are little tiny colored squares. Think back to [Super Mario 8-bit]( URL_0 ), see how you can see all of the squares in his body? 
That's because your GameBoy's pixels were pretty huge. The pixels in your computer now are really tiny, which is why we get smooth round shapes. In [old computer screens]( URL_3 ), pixels used to either be \"on\" or \"off\", either lit up or dark. This meant you could only have black and white images on the screen. **Nowadays, our pixels light up in any combination of colors.** How do they do this? One way to create any set of colors is to use different combinations of [Red, Green, and Blue light]( URL_4 ). You can make just about any color by combining these three colors. So a trick that you can use to create a colored light is by shining a red light, a green light, and a blue light right next to each other at different intensities. If they're small enough that your eye can't tell them apart, all your eye will see is the light combination. In fact, [that's what pixels look like up close]( URL_1 ). **The difference between types of displays is how they use those tiny colored lights to create colors.** * An **LCD** display has a backlight behind everything, and it controls the color by blocking different amounts of each color of light. This means that \"black\" on an LCD display still looks kind of bright. * An **OLED** display is dark by default, and turns on tiny little colored lights in different amounts to create different colors. * An **AMOLED** display is a type of OLED display. It allows you to access and control pixels faster than other types of OLEDs (PMOLEDs), which allows you to have bigger displays. * A **retina** display is a fancy Apple marketing term, meaning the [pixel density]( URL_2 ) is higher. Basically, this means everything is higher resolution, so your eyes are even less able to see the little tiny pixels. It's like going from 8-bit to 16-bit.", "\"retina\" is merely a term coined by Apple that is essentially 300ppi. Pixels per inch. Back when the iPhone 4 (I think) came out. It was highly praised for it's high resolution and super clear display. Basically most phones, even budget phones nowadays are \"retina\" displays because the screen has so many pixels that it's very clear and sharp. This is a term coined by Apple. So if you see shit like \"it doesn't have a retina display\" it's a load of BS.", "Here's an actual ELI5 without technical jargon: LCD = A panel of dots blocks light in certain areas to make a picture. Behind the panel, the screen is all white all the time. That's called the \"backlight\". (Side note: \"LED\" without the O refers to using a LED backlight on an LCD display, unlike previous displays that used a fluorescent light bulb for the same thing - more energy efficient. Fancy LED-lit LCDs can actually dim certain areas of the backlight so it's not \"all white all the time\", mostly to save even more energy.) OLED = instead of blocking the light, a panel of millions of tiny lights makes up the image directly. That's why they're so crisp and clear. But because they're individual lights, lights left on all the time will become dimmer over time, leaving a \"burn-in\". (Side note: you ever seen an electronic billboard on the road? Those are LED, using millions of full size LEDs (like those indicator lights on your TV and modem) to make a picture, but since you're viewing it so far away, it looks like a single big image. It's fun to get close to one some day! OLED is just a really, really tiny version of the same idea.) 
Retina = just a display (of any type) that has individual dots so small that your eye (retina) can't tell the difference between them.", "LED: Light Emitting Diode. These are typically used for backlighting in modern non-OLED displays. They're capable of emitting a very pure, white light, they're very power efficient, and they turn on instantly, unlike older cold cathode backlighting. AMOLED: Active Matrix Organic Light-Emitting Diode. This is a screen which is comprised of what are essentially many tiny green, red, and blue LEDs. Because these light up on their own, no backlight is required - and they can be turned off completely, giving them the deep blacks they're renowned for. LCD: Liquid Crystal Display, which is a more conventional type of screen technology. Liquid crystals get manipulated by electricity to change their color. They do not emit light by themselves, necessitating a backlight. Because of this, you cannot get perfect blacks with them. Retina's basically just a marketing term Apple uses for a certain amount of PPI (Pixels per Inch) on a panel/monitor. In most of their products, Apple uses IPS displays, which are a type of LCD panels. IPS panels are unmatched in terms of color reproduction, but since they require a backlight, you cannot get perfect blacks with them. IPS and (AM)OLED panels both have their advantages and disadvantages. As I previously mentioned, IPS panels have more accurate color reproduction, while OLED panels have deep blacks and more vibrant colors. However, unlike IPS panels, OLED panels are also more susceptible to burn-in and mura (uneven colors). OLED panels also tend to use a Pentile grid, which uses twice as many green subpixels as red and blue ones. This effectively lowers your resolution by one-third, and many people argue that you don't truly get the advertised resolution on, say, a 1440p Pentile AMOLED panel. [Here's a traditional RGB grid next to an AMOLED grid. RGB looks much better.]( URL_0 ) Thankfully, resolutions on phone screens are so high nowadays that this is practically a non-issue. I might've gotten some of this wrong, but it should be mostly correct!", "Retina != a display. It's marketing. Anyway. LCD == Liquid Crystal Display. A large lightsource, in the back of the display (a backlight) or around the edges provides white, multicolored light. Each pixel, or rather color sub pixel, is controlled by a pair of polarizers. Think of polarization as a direction, it goes through a polarizer, it has a polarized direction, call it up or left (it's actually quantum blah blah blah not important right now). Now that it's gone through one polarizer, it's chance of going through the second depends on the \"direction\" of the second polarizer. If the second polarizer's direction is parallel to the photon's (they're both \"up) the photon goes through. If it's perpendicular (the photon is up and the polarizer is left) the photon doesn't. If the direction is \"somewhere in between\" the photon has a chance of going through, dependent on how close its direction is to the polarizer's. If the polarizer's direction is halfway in between, it let's half of the \"up\" photons through and blocks the other half. A liquid crystal display has controllable polarizers. Each little pixel has 3, one for red, green, and blue, that change to let in however much red, green, or blue light you want going through to the users eyes. All this complexity means you get lightleak, or photons bouncing around and through pixels you don't want and blah blah blah. 
But it works. OLED, or AMOLED (Same thing really, for this purpose) uses something far simpler. Run a current through an organize compound (the O stands for organic) and it emits a specific color of light, easy. The brighter you want it the more current you run through it. You can turn it off completely by not running any, no lightleak. Just same as above, you run three colors (red, green, and blue) per pixel (dependent) and combine to get whatever color you want." ], "score": [ 7897, 7397, 654, 77, 55, 54, 45, 24, 16, 4, 4, 4, 3 ], "text_urls": [ [], [], [], [], [], [], [], [], [ "https://imgur.com/a/S2KOv", "https://imgur.com/a/uvy8v", "https://en.wikipedia.org/wiki/Pixel_density", "https://imgur.com/a/nbl54", "https://www.w3schools.com/colors/colors_rgb.asp" ], [], [], [ "http://us.v-cdn.net/6030075/uploads/userimages/gs4-vs-one-macro.jpg" ], [] ] }
[ "url" ]
[ "url" ]
7m8pbh
Can someone explain how and why analogue is different from digital in the context of computer processing power?
Technology
explainlikeimfive
{ "a_id": [ "drs5xxx", "drs61is", "drs4ra7" ], "text": [ "Analog-- I want to tell you the number 82. So I shine a light towards you at 82% brightness. You observe the brightness and write down what you see. Digital-- I want to tell you the number 82. So I blink a light on-and-off 8 times, wait a second, then blink it another 2 times. Advantages of analog-- what if I want to tell you 82.5? I just make it a tiny bit brighter. Easy. Disadvantages of analog-- accuracy. If there's fog in the way or I'm just not that good at judging brightness, I could interpret your 82% brightness as 80% or something. Advantages of digital-- accuracy and repeat-ability. I can tell the next person 82 using a similar code of blinks and as long as I don't **completely screw it up**, they'll know I mean 82 and pass it along to the next person who asks for that information. There's no worry about a minor imperfection adding a little inaccuracy here, then more inaccuracy at the next step, then more... as long as each step does it right, you can keep sending the signal and it will be exactly the same as it started out. Computers could not work with analog signals at the level of performance they do now. They're too fast and inaccuracy would be severe. There is also no such thing as 100% accurate analog signals. How can I know you meant 82% instead of 81.9999%? With digital, the signal is preserved as long as it's intact. We lose the ability to transfer information like 81.9999% unless we previously agree we're going to send over 4 decimal places of precision. That's a pain in the ass. But we can make that work. We can't make uncertainty or inaccuracy work at a speed of GHz (billions of operations per second).", "First: The basics. Digital computer can \"understand\" 2 values: 1 and 0 (ON and OFF). Analog computers can \"understand\" several values: Low thru High. Basically it operate on mathematical variables in the form of physical quantities that are continuously varying. For example, an analog computer may work with temperature values, voltages, pressure, etc. Regarding processing power, The digital ones can take measures with binary values (Combination of ones and zeros). For example: To process \"5\" (FIVE) on a digital computer the binary value is 101. On an analog computer the plain and simple value of 5 is entered in that state. So, as you can see, part of the processing power on a digital computer DEPENDS how many bits can candle the processor. Old computers had a 8-bit processor. New ones have a 64 bits processor, That means, the processor could process ANY value between zero and 1,844,674,407,3709,551,615 at the time. Analog computers are able to process any value (depending the precision) at the time. Back in the 60's, Analog computers were more \"powerful\" than any 8-bits computers but as technology advanced, digital computer became faster with more bits to process. Still to this date, we have analog computers, the simple one that we still using is [this ruler]( URL_0 ).", "Computers are digital, they can’t comprehend analogue signals. Any analogue signal must first be converted into a digital approximation for the computer to understand and store. To that end... there’s no such thing as a “difference in processing power”, as analogue signals don’t exist from the computer’s point of view. A higher resolution digital approximation of an analogue signal will have more samples, and contain more data, so you need more processing power and storage to have more accurate representations. (E.g. 
a 48 kHz .wav is larger than a 32 kHz .wav of the same song, as its digital approximation of the sound wave is higher resolution) The same applies to images." ], "score": [ 9, 4, 3 ], "text_urls": [ [], [ "https://qph.ec.quoracdn.net/main-qimg-71f6f902fbc118fdf6fc70eac5eb59f2-c" ], [] ] }
[ "url" ]
[ "url" ]
7maeok
Why do you hear a little bit of static in headphones whenever you plug them into a device or unplug them?
Technology
explainlikeimfive
{ "a_id": [ "drsierk" ], "text": [ "Headphone jacks have circular conductors the length of the jack. As you insert it they rub against parts of the socket meant for the circles higher up. They thus get a slight amount of power unintentionally that creates a bit of noise." ], "score": [ 9 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mat0f
How do the anti-theft systems at store exits/entrances work?
Technology
explainlikeimfive
{ "a_id": [ "drszln7" ], "text": [ "There are little tags inside the devices that emit a reply radio signal when they’re subjected to another, higher power radio signal on the correct frequency. If you shoplift an item, this tag is still active and the sensors you walk through can detect it. When you check out, the cashier runs your item over a device that detects the tag. When the tag is detected, the device emits a very high powered pulse that burns out the tag, deactivating it. This is the “bong” you hear sometimes. This pulse does a good job of demagnetizing credit cards, so that’s why there’s signs saying keep them away." ], "score": [ 10 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mbmu4
How does fast charging work?
Technology
explainlikeimfive
{ "a_id": [ "drsric1" ], "text": [ "Fast charging works by providing a different voltage or having a lot of current available. It requires a special charger that can work with the phone. The phone and the charger negotiate what type of power will be used Phones still support the standard 5V 500mA charging that's been standard for a long time so they can charge off of any charger. If you restrict it to just working with fast charge chargers then only your charger works with your phone and when you go on a road trip with a friend you won't be able to use their car charger The point of standardizing on USB charging was so any charger works with any phone, making phones only accept their own fast charge breaks that standardization and puts us back in 1998" ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mbutz
Why does shining a blue laser at an LED turn it on?
Technology
explainlikeimfive
{ "a_id": [ "drstke4" ], "text": [ "I'm assuming it was a white LED. They work by shining a blue LED at a phosphor which causes it to emit a mix of red and green light. Together with some of the blue, they mix to make white. Any blue light shining on the phosphor will make it glow to some extent. The blue laser is intense enough to make it apparent." ], "score": [ 13 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mciaw
If amber lensed computer glasses reflect the blue light, why aren't they blue?
The way I was taught how light works in school is that an object absorbs all color that it doesn't reflect. IE an apple absorbs all light except red, which it reflects, therefore the apple is red. If amber lensed computer glasses (like these ones [here] ( URL_0 )) are supposed to deflect the blue light why aren't they blue?
Technology
explainlikeimfive
{ "a_id": [ "drt04ar", "drszpa4" ], "text": [ "They're not reflecting the blue light, they're absorbing it. And all other colours except amber. So amber is what you see coming through them as well as bouncing off them. When light is absorbed it is used to vibrate the molecules that make up the glass. This makes it slightly warmer, but not so much you'd notice, normally.", "Amber lenses *absorb* the blue light, blocking it from coming through. They don't reflect it." ], "score": [ 7, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
7md2nc
What is the thing about hyperloops that keeps them from actually being a reality?
Technology
explainlikeimfive
{ "a_id": [ "drt2qsd", "drt2o2k" ], "text": [ "> What is the thing about hyper loops that keep them from actually being a reality The whole not existing thing is what it really comes down to. We don't have extensive tubes of near vacuum running across the country to desired destinations. We don't have a reliable way to propel a capsule down such tubes which would keep it from rubbing against the walls of the tube and destroying itself. We don't have a reliable way to keep the inside of said capsules pressurized and livable for passengers. We don't have a way to power such capsules through their journey. We don't have a way to deal with potential emergencies such as a stuck capsule, a tube breach, a capsule breach, or any of the other as yet unexpected problems which might occur. And of course we don't know what we don't know about technologies which don't yet exist!", "Testing, research, energy efficiency, mass adoption. Hyperloops are in the very early stages of development. They are being invested in by a small number of people, which means that limited resources are being put into doing the design and testing required to scale up any designs. Energy efficiency is more of a personal opinion of mine. I think that hyperloops are a good concept, but keeping a 1, 10, 100, or 1000 mile long tube vacuum is not going to be an easy task. Nevermind engineering such a tube, which could in theory be done. Imagine the maintenence. Mass adoption is the other issue. As of now, it's marketed as a potential solution for individual travel, not group travel. But really, for it to pay for itself every it's going to have to adapt to a larger market, and not just cater to a few very wealthy individuals who participate in the project because it's a novelty." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
7md4a8
Why do we still use towers to broadcast signals (radio, cell phone, etc.)? Why not small antennas that could be built anywhere, with wires buried in the ground?
Technology
explainlikeimfive
{ "a_id": [ "drt2txv", "drt2y6r" ], "text": [ "Cost. It's cheaper to build a smaller number of antennas in locations that allow sharing -- this usually means putting them high up so their signals can get past obstacles. One phone antenna tower may serve users across up to 100 square kilometres. (Typically fewer, but still a large area.)", "> Why we still use towers to broadcast signals (radio,cell phone, etc). Why not small antennas that could be built anywhere with wires buried in the ground. Tall towers are capable of transmitting signals long distances over a large area without having to actually go to those areas and build infrastructure directly. They are in one location which can be provided with power and maintenance with relative ease. Small antennas built all over the place would require going all over the place to build them and maintain them. Feeding them with wires buried underground also requires digging up huge amounts of the ground which, news flash, isn't easy. Dirt tends to be heavy and any time you want to start digging it becomes a huge problem. How do you figure you are going to dig a trench straight through a neighborhood, through people's yards and houses? Think you can run it down a road which obstructs traffic while you do it, requires approval from the local government, and all the while you are dodging other buried utilities such as water or gas lines? In contrast you can just build a big tower and blast radio waves through that neighborhood without any fuss at all." ], "score": [ 7, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
7mdkuh
How does a computer recover corrupted data?
Technology
explainlikeimfive
{ "a_id": [ "drt7lol" ], "text": [ "Tolerances, error detection and error correction. Tolerance: In TTL logic if the voltage on a data line is between 2V and 5V that's a logical 1, and between 0V and 0.8V is a logical 0. This means that the hardware can take a bit of variation and it makes no difference. This is by the way why expensive cables for digital data are nonsense -- unlike with analog, 4.5V and 5V are exactly just as good. Error detection: Take 8 bits, let's say 10011100. Then add an extra parity bit. When the number of 1s in the byte is even, parity is 0. When odd, parity is 1. If one bit flips now parity is wrong, and we know something went wrong somewhere. Error correction: With a slightly more complex system we can detect where something has gone wrong. Since we know the position, we know that the bit must have flipped, so we flip it back and problem solved. Easy example of error correction: We take a group of 8 bytes and write them down on one line each, then we calculate the parity for both rows and columns: 01100011 0 01101111 0 01101101 1 01110000 1 01110101 1 01110100 0 01100101 0 01110010 0 00000111 Now flip a bit somewhere: 01100011 0 01101111 0 01101101 1 01110010 1 < -- 01110101 1 01110100 0 01100101 0 01110010 0 00000111 ^ Now if you check the parity it'll be wrong on one row and one column, so we know where in the table the error can be found. Flip it around, and fixed! There are better methods of course, but this one is easy to explain. CDs and DVDs specifically have several levels of error correction, so that it can handle both small local mistakes, and large scratches on the surface. Now if that wasn't enough, then it really doesn't really deal with it, and you have corruption for good. Some programs can deal with an amount of it (eg, it can be tolerable for audio or video), some break badly." ], "score": [ 10 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mdyve
Why do airplanes have double-pronged audio jacks?
Technology
explainlikeimfive
{ "a_id": [ "drtbtwz" ], "text": [ "I suspect there's also the opportunity to make you use, and charge you for, their headset instead of your own." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mebn8
How do electronics that rotate stay connected?
For example, a PTZ (Pan Tilt Zoom) camera can keep rotating indefinitely, yet it's always connected for video and power. If the rotating component of the camera was connected to the static part with wires, it would just twist and break, so how does the camera transmit images and use electricity?
Technology
explainlikeimfive
{ "a_id": [ "drtb6zr", "drtj8p0" ], "text": [ "As a basic example, imagine a headphone jack plug in its socket. The jack plug is split into 2 or 3 segments which carry different signals, the socket has 3 contacts inside it which always touch these segments on the jack plug, and you can rotate it as much as you want. There is lots of ways to do it. [here]( URL_0 ) is an image to demonstrate. See how it has 3 stacked ‘PCB’s, and each PCB has a little prong which touches the jack plug in a specific area. I believe that image is of a TS (tip, sleeve) jack so it only has two contact points. One on the main shaft of the jack and one hooking down to touch the tip. There is also TRS jacks (Tip, ring, sleeve) these would have an extra bit similar to the hook which touches the ‘ring’. This is sounding very NSFW. Theoretically you could have a jack split into 100 segments with 100 contacts coming down to meet it.", "The PTZ cameras I use at work can't continuously rotate, but there are a few ways this can happen. 1. Use slip rings and brushes as described by u/jgpirie. The problem with slip rings and brushes is size, cost, and reliability. In a VCR, they would use precision slip rings and brushes with multiple wire contacts. 2. Use a rotary transformer. These were used in VCRs to get the signal to and from the rotating heads. They would not work well with analog video which needs frequency response from DC to a few MHz. The signal would need to be modulated into an RF signal. Slip rings would still be needed for power. 3. Use a camera which is stationary and uses a 360 degree mirror. [Like this.]( URL_0 ). The image has to be processed in software. The result is pretty poor, but has been used. One advantage is that it has no moving parts. 4. Use an optical transmit and receive system with slip rings for power." ], "score": [ 4, 3 ], "text_urls": [ [ "https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Jack-plug--socket-switch.jpg/220px-Jack-plug--socket-switch.jpg" ], [ "https://www.0-360.com/" ] ] }
[ "url" ]
[ "url" ]
7mecd4
How did Google acquire the 8.8.8.8 (Google's DNS server) IP address?
Technology
explainlikeimfive
{ "a_id": [ "drtaqbo" ], "text": [ "If you do a Whois lookup (Whois IP 8.8.8.8). Level 3 Communications, Inc. owns the Class A Subnet 8.0.0.0 And Google is listed for the Class C Subnet 8.8.8.0 My guess is Google leased that Class C from Level 3 Comm. and decided to use it for their DNS server to make it easy to remember." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mhsiy
Why does a good pair of headphones/earphones make it feel like the sound is coming from inside the middle of your head?
Edit: RIP Inbox... I knew thee well... Edit 2: Front Page! Wow!
Technology
explainlikeimfive
{ "a_id": [ "dru8pif", "drueovi", "dru6iq5", "dru60ox", "dru0v24", "druawnb", "dru9amt", "druqfzy", "dru8rk0", "drucnot", "drulimn" ], "text": [ "To put it simply, whenever a sound comes out of both the left and right channels at an equal volume, your brain will often trick you into believing that the sound is coming from the midpoint between the two channels, creating what's known as a Phantom Center. And since the left and right channels are on either sides of your head, your brain will have you believe that the sound is coming from the middle of your head. Here's a link if you're interested in a slightly more detailed explanation: URL_0", "A good pair of headphones *doesn't* make it feel like the sound is coming from the middle of your head. A good pair of headphones makes it feel like the sound originates from well outside of your head, with a good soundstage.", "They don't. GOOD headphones try to reproduce the neutral sound of flat speakers as best as possible, including making it sound like the audio is coming from around you rather than inside your head.", "It depends a lot on the recording technique, actually. Your brain uses a number of queues to determine where sound comes from. The most obvious is that sound is louder in the ear closest to the sound source. It also arrives sooner to that ear. Less obviously, the external part of the ear changes the frequency spectrum of the sound, emphasizing high (treble) or low (bass) content depending on the angle the sound approaches from. For sounds produced externally from the ear, the brain is **great** at figuring out where they're coming from. What this means is even if sound isn't recorded in a way that intentionally gives the impression of coming from one direction or another, you'll still be able to tell the location of the speaker that's playing it for example. Only headphones (especially in-ear headphones) can bypass all these systems and deprive your brain of the info it needs to understand the source of a sound. But for it to sound like it's coming from inside your head the audio engineers need to have NOT done any tricks to make it sound like it's coming from some other location... which can be as simple as an instrument being louder in one ear than the other. So lots of audio won't sound perfectly centered. The microphone or a number of other items/effects may create frequency content differences that may give the impression of direction (or at least an external source) as well, even if it wasn't intended by the audio engineer.", "Yo ho ho! Yer not alone in askin', and kind strangers have explained: 1. [ELI5: When wearing headphones, why do you hear the sound inside the head, but if you uncover one ear you hear it right in the covered one? ]( URL_1 ) ^(_2 comments_) 1. [ELI5: When I listen to some songs in headphones and can 'feel/hear' different parts of the music in different places inside my head, how does this happen? ]( URL_0 ) ^(_8 comments_)", "3d sound effects. Plays right sound in right speaker then softer in left with a delay. You know to trick your ears into thinking it came from in your head. Could make it sound like from anywhere, really, with this tech. Same way you can hear where a person is walking in counter strike relative to your position and facing. Ray kurzweil reverse engineered the hearing center of your brain and printed it on sound cards then was able to figure out how it worked. So today we have software surround sound. When steam was brand new you had to get an expensive sound card to get 3d sound. 
Back in the day you couldn't tell if someone was in front of you or behind you, above you or below you, unless you moved your head and got 2 sound points to figure it out. Pretty sweet eh? An actual application of artificial intelligence from reverse engineering a part of the human brain. And it made him rich.
The very best headphones do a better job of fooling your brain into thinking the sound they produce is coming from a three dimensional space, they have some tricks like using a larger driver for the sound to come from a larger area, and using drivers that are set at an angle relative to your ears (both of which should result in a sound that involves reflections of the outer ear more), but this doesn't guarantee a spatially accurate sound. FWIW I've also heard it said what audiophiles call \"soundstage,\" or the out-of-head presentation headphone users are looking for, I've heard acoustic engineers simply refer to as \"accurate treble response\" (the reasons I've discussed above also make this very difficult to achieve; i.e. your accurate treble response might not be the same as someone else's due to physiological differences.) Even the best headphones, to this day, fail to produce as realistic a soundstage as even 2 channel stereo speakers." ], "score": [ 9696, 3091, 522, 402, 95, 20, 17, 10, 10, 4, 3 ], "text_urls": [ [ "https://en.m.wikipedia.org/wiki/Phantom_center" ], [], [], [], [ "https://www.reddit.com/r/explainlikeimfive/comments/1p1cay/eli5_when_i_listen_to_some_songs_in_headphones/", "https://www.reddit.com/r/explainlikeimfive/comments/5iqs1s/eli5_when_wearing_headphones_why_do_you_hear_the/" ], [], [ "https://youtu.be/IUDTlvagjJA" ], [], [], [ "https://en.wikipedia.org/wiki/Precedence_effect" ], [] ] }
[ "url" ]
[ "url" ]
7mj1d0
What is the difference between the cheap batteries that come in toys and major brand batteries, like Duracell, and why do the latter last a lot longer?
Technology
explainlikeimfive
{ "a_id": [ "drubw3y", "druar6f" ], "text": [ "Usually toys come with an \"extra heavy duty\" battery which is a generic name for [zinc-chloride]( URL_2 ). These batteries are really bad. An AA might hold 1.5 watt-hours of power. Some really cheap toys come with \"heavy duty\" batteries or zinc-carbon. These are the worst and an AA might have 1 watt-hour of power. Duracell is an [alkaline]( URL_0 ) battery. An AA might hold 3 watt-hours of power. Twice or three times as much! But there is an even better battery out there. The [NiMH]( URL_1 ) battery doesn't hold any more power than an alkaline but it will never leak inside your stuff. And it is rechargeable so it can be used over and over. Saves a lot of money that way. To avoid a lot of fancy mumbo jumbo let's just say that NiMH can also \"push\" electricity a lot harder than alkalines can so they do better in toys with motors or flashlights or stuff like that.", "The cells in the nicer batteries are more energy dense, that is also why they tend to be heavier assuming you are using like for like alkaline vs alkaline, lithium vs lithium. This density is measured via how many milliamp hours a battery has." ], "score": [ 22, 3 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Alkaline_battery", "https://en.wikipedia.org/wiki/Nickel%E2%80%93metal_hydride_battery", "https://en.wikipedia.org/wiki/Zinc%E2%80%93carbon_battery" ], [] ] }
[ "url" ]
[ "url" ]
7mjaib
How come touch screens only work when they come in contact with skin and not other things?
Technology
explainlikeimfive
{ "a_id": [ "drudb04", "drupsvr" ], "text": [ "Most touch screen sense the increase in capacitance when something conductive gets very close to the screen. Metal or conductive rubber can also affect touch screens. Edit: Capacitance is a property of capacitors. Capacitors are made of two conductors separated by some insulating material. The screen has two independent transparent layers of horizontal and vertical lines. A finger adds to the capacitance on the lines. The position is determined by measuring the capacitance and finding the lines with the highest values.", "Touch screens these days mostly work on detecting changes to electrical properties. You are conductive and touching the screen changes electrical properties that your phone is measuring. Your phone then knows where you touched. Oh forgot to mention that the screen has tons of small invisible wires for you are making contact with. Things like gloves are insulators and don't change the electrical field because it doesn't interact with electricity. So your phone can't detect it." ], "score": [ 9, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
7mjyr9
why does Netflix (and other videos) get stuck at “99%” buffering? Shouldn’t the percentage accuracy display the time left?
To add on: it will steadily increase from 0-99 at a fairly constant rate, but stay stuck at 99 for like 25s. This is inaccurate, but why does this happen? Why is the algorithm not able to incorporate that wait time and distribute it across the buffer timer, so that once it hits 99 it doesn’t wait, but instead the video resumes?
Technology
explainlikeimfive
{ "a_id": [ "drukphq", "drujlly", "drv51m2" ], "text": [ "\"loading screens\" are notorious for their prime purpose is to placate the user that something is happening, and not for being accurate or honest. Unless the people who coded Netflix's loading % function come in here and tell us how it works, you can assume its either right or wrong but we have absolutely no way to actually tell what is happening behind the scenes. This is one of the dirty secrets of software loading... sometimes, its just bullshit, sometimes it isn't, its hard to tell.", "So it loads extra time. It's not honest, but the last percent is giving it extra time to load more of the video. Except the part of the code to load more says don't load too far, using more internet for netflix. This means it gets stuck between loading quickly whats right up next and slowly what's a few seconds away. It takes much longer to stop buffering than it does to refresh the page. The same thing happens on youtube for a different reason. You can be loading from where there's no loading space left (past the light gray bar) and it'll take much longer than dragging it back a bit and playing it.", "The way coding works, loading bars are sort of like you sending a text (responsibly, when stopped) on the way home from work to keep your partner updated on your ETA. You get to a red light, text your significant other that you're now at X point in your journey and that Google Maps is saying you're X minutes away--then you carry on driving. & nbsp; Continuing that analogy, you can only accurately say how far you've gone and when you'll arrive home *based off of the assumption* that the environment doesn't change . . . realistically, no one can accurately predict when they will get home 100% of the time--no matter where in their journey you ask them to give you a status update. & nbsp; Back to Netflix's loading bar: the program says \"98% of all the code needed to run is done, put it up on the loading bar!\" It's most likely giving you an accurate assessment of the % of loading that it's completed as well as % left to load. & nbsp; However, that last 2% could see network issues (like you could end up behind an accident) or something failing (like you could end up IN an accident). If something like that occurs, then there you are, just like a significant other who texts back, \"It's been almost half an hour since you said you were 2 miles away!\" & nbsp; And that's also why most programs, especially those reliant on network availability / access, decline to give the end user it's best guess for how long it will take to get from A to B. In the end, it just wouldn't be useful or accurate." ], "score": [ 8, 3, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
7mmqrm
When did WASD become the standard for movement control in games and how did it become so widespread across all games?
Technology
explainlikeimfive
{ "a_id": [ "drv1wqd", "drv1zmv", "drv2pww", "drv23fh" ], "text": [ "It was some dude that played Quake. I'll see if I can find an official source, but some people had been using WASD but this guy was the first famous player to do so. [Source]( URL_0 )", "it evolved from the very first first person shooters. back then in the days of wolfenstein and quake, lots of movement layouts were popular. wadx, esdf and szxc. wasd became popular at the top level gamers so everyone wanted to be like the pro's", "I've always used ESDF and only ever recall seeing one game about 20 years ago come with ESDF as the default. Don't remember the name though. I find it better because your fingers are on the home row keys for typing and there's more keys around your hand to bind stuff. Great for MMOs. For shooters my pinky rests on A for crouch. Try it", "It became common in the mid-late 90s when 3D shooters like *DOOM* started to take off on PC. Early games often used the arrow keys, but their placement on the keyboard makes it very difficult to use any other keys. As games got faster and more complicated, enthusiasts realized that mapping movement to wasd gave you access to many more keys that could be mapped to other actions and was a more natural sitting position. Within a few years it became the industry standard." ], "score": [ 9, 4, 3, 3 ], "text_urls": [ [ "http://www.pcgamer.com/how-wasd-became-the-standard-pc-control-scheme/" ], [], [], [] ] }
[ "url" ]
[ "url" ]
7mn403
Why do USB cables have a max length?
title
Technology
explainlikeimfive
{ "a_id": [ "drv5l36", "drv4arr", "drv6t5x", "drv5f06", "drxbljj" ], "text": [ "When designing a protocol like USB, there are some parameters that need to be set. Depending on how you want to handle collisions, acknowledgements, retransmissions, ... you have to choose different parameters, including cable length. The USB engineers figured their protocol was going to be used to connect PC devices in an office setting. Therefore they found it appropriate to limit cable length to 5m, in order to allow a cable delay (= time a signal needs to travel through the cable) of 26ns. [Source]( URL_0 )", "The longer a cable is, the more electrical resistance it has. USB ports only put out so much power, so if the cable gets too long, the signal gets too weak to be read.", "As cable length increases, so does its capacitance. This tends to 'round off' the square waves used to transmit digital signals. That makes it harder (less reliable) for the receiver to discriminate where the waves (bits) begin and end, leading to data errors.", "To ensure they work reliably The longer your cable, the more noise it will pick up and the more the signal will lose power. The cable length standards are set so that in a normal environment, a cable of that length will work reliably. If you are in a super noisy environment or have a bad cable then it may misbehave before then, if you're in a low noise environment you may be fine with a longer cable; but the point is to provide a max length guideline for a normal environment.", "(this turned out longer than I expected) Imagine a garden hose that's passing through some rough terrain (think: lots of thorns and other sharp things). Inevitably, some of these thorns will puncture small holes on the hose that leak - in fact, let's say you can be certain that, for every 10cm of the hose, there's going to be, on average, one small hole. Each of these holes means less and less water is getting to the end of the hose and there's inevitably going to be a point where you have too many holes and not enough water is reaching the end. Because we have one hole per 10cm on average, this means that the longer your hose, the less water gets to the end of it. Of course, you can just use a better material - if you make a higher-quality hose, there will be less holes and thus you can have it longer without losing as much water - perhaps a different (more expensive!) material will sustain more thorns, so the holes appear every 30cm rather than 10cm. Still, there's always going to be a limit in length, where too much water is leaked and the hose is useless. Now, back to the subject of USB: while imagining cables to work like garden hoses is an oversimplification, we can certainly draw an analogy in that the longer your cable is, the weaker the signal is (due to background noise, resistance and other factors). To make longer cables that actually work, you need to use more expensive materials. And, keep in mind that, when it comes to cables, a broken cable may still work under some circumstances - when I say broken, it usually means \"flaky\", i.e. a good cable always works - a bad cable sometimes, under some circumstances, doesn't. Finally, imagine you're the organisation that needs to come up with the USB standard. Specifically, you want to make sure bad cables (as defined above) are easy to spot. One option would be to only manufacture cables yourself. Nobody else is allowed to make USB cables and problem solved. 
Another option is to demand that any company that wants to manufacture cables needs to send you a sample to review. Neither of these scale well, of course. So here's a third option: find the shittiest, cheapest material that can be used to build cables and assume all cables will be like this. This works well both when the standard is in an early stage (you make pessimistic decisions when you come up with how it works) and to set limits, like the cable length, knowing that, even if built from the worst possible material, it would still work reasonably well, as long as it's less than 5m." ], "score": [ 24, 11, 6, 6, 3 ], "text_urls": [ [ "https://www.lammertbies.nl/comm/cable/USB-cable-length.html" ], [], [], [], [] ] }
[ "url" ]
[ "url" ]
7mpc12
How does GPU/CPU bottleneck work?
First of all, I’m truly sorry if this has been asked here before, I'm new here on reddit and I couldn’t find an answer. So, what exactly is that “phenomenon”? What makes it happen? I’ve heard a lot about it and I’ve seen explanations for it but I truly need an ELI5 for this one. Thank you all in advance and sorry for any misspelling. (I’m Portuguese)
Technology
explainlikeimfive
{ "a_id": [ "drvmyc6", "drvrqix", "drvmyp1" ], "text": [ "A \"bottleneck\" in a system occurs when a single part of the system limits the performance of the whole system. Every system has a bottleneck or critical step, the goal is to make sure its as well balanced as possible so you don't lose too much performance In terms of CPU/GPU bottlenecks, it is when one of the two is more heavily loaded than the other. Certain games load up the GPU heavily, and others load up the CPU heavily If you're playing a Shooter game with graphics cranked up to max then your CPU might be finishing the math it needs to do for each frame in just 2 ms, there isn't much work it needs to do. Then it hands it off to the GPU that needs to start drawing, but you have AA cranked up super high so the GPU has to do a lot more work and it takes 15 ms to get the frame ready. When its done the CPU already gave it the info it needed for the next frame so you get the next one 15 ms after. This gives you just 66 FPS despite the CPU being able to do 500 FPS. In this case, the GPU is the bottleneck If you're playing a strategy game, which is traditionally more CPU heavy, then the CPU may take 20 ms to complete its calculations for the next frame before handing them off to the GPU. The graphics are reasonable so the GPU does its math and spits the frame out 2 ms later. This restricts you to just 50 FPS despite the GPU having plenty of spare processing time. In this case, the CPU is the bottleneck which leaves you with an underutilized GPU. There's always a bottleneck, the goal is to identify it, make sure its where you want it, and minimize it.", "Imagine you have a big bowl of M & Ms in front of you, you take a handful, and put them into your mouth. You take a second handful, but your mouth is still full, so your hands have to wait until your mouth is finished with the M & Ms it's currently working on, before it can take any more. The same with the next handful, and the one after that. It's the same with a bottleneck in a PC, one part, either the CPU or GPU can't get through the work quick enough, so the other part has to slow down to do work at the same speed.", "A \"bottleneck\" is just a way of saying that one aspect of a system is the primary limiting factor. Imagine trying to dump out a bottle of water - the mouth of the bottle is the part that slows down everything. Some games do a lot of complex things on the CPU, like AI, physics, tracking hundreds of moving objects in the game world, etc. This work can't be unloaded onto the GPU so, if the CPU is working at full capacity, it's considered the bottleneck even if the GPU would be capable of running things faster. Some games do a lot of complex things on the GPU - having high resolution graphics with complex lighting and amazing textures. This work can't effectively be done by the CPU so, once you've hit 100% of the GPU's capacity, that's the bottleneck in your system." ], "score": [ 11, 4, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
7mwgpu
Do video games give you any real life skills?
Technology
explainlikeimfive
{ "a_id": [ "drx62tl", "drx68w1" ], "text": [ "It depends on the video game, but yes they have been shown to have benefits such as.... - Increased hand-eye coordination. - Increased reaction times. - Increased focus/concentration and memory ability. Some have educational value (such as typing games and games focused on topics such as history, language, etc.).", "Studies have shown that they can possibly help with hand-eye coordination and certain types of problem solving. URL_1 URL_2 They can also help people become more persistent by presenting players with challenging tasks over time: URL_0 So they're potentially somewhat generally beneficial in that they sharpen your abilities a bit. However, it bears mentioning that this is not evidence that games are the best or even a particularly good way to do these things, compared to other activities. Also, games generally don't provide you with actual real-life *skills* per se - Guitar Hero doesn't really help you learn guitar, most driving games don't make you a better driver, and most shooting games won't help you with real-life self-defense. So, there are benefits in playing games, but that doesn't prove that games are the best way to spend a lot of your time." ], "score": [ 4, 4 ], "text_urls": [ [], [ "https://www.psychologytoday.com/blog/media-spotlight/201402/are-there-benefits-in-playing-video-games", "http://www.apa.org/news/press/releases/2013/11/video-games.aspx", "https://www.parentingscience.com/beneficial-effects-of-video-games.html" ] ] }
[ "url" ]
[ "url" ]
7mwp6n
Why does removing batteries, and then reinstalling them make the device work for a small period of time?
Technology
explainlikeimfive
{ "a_id": [ "drx8f42" ], "text": [ "Circuits are designed to shut off with batteries below a certain voltage. When it detects an excessively low battery voltage it turns itself off. That circuit stays in the \"off\" state until voltage is completely removed from the circuit. That circuit also has a range... Say cut off when voltage is below 3V, but only come back on when above 3.2V so that there's no flip-flopping if the voltage is exactly at the cut-off When batteries are no longer delivering current, they do tend to revive themselves. It's possible, with the example above, it got down to 3.0V, triggering the cut-off, then raised to 3.1V, but not above the cut-on. When you remove them, the cut-off resets, then the 3.1V is enough to power it until it goes down to 3.0V again. Sometimes it's possible to repeat this several times." ], "score": [ 8 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mx2a8
How can companies come out with very similar products (Google Home, Amazon Alexa) without infringing on each other's patents?
Technology
explainlikeimfive
{ "a_id": [ "drxbznn" ], "text": [ "Patents must describe, in detail, a very specific technological improvement. They don't just cover the general idea of a product. A computer talking to you has been around for years. A computer understanding speech has been around for years. A computer following instructions has been around for years. Combining these things isn't a particularly novel invention, it's just an application of existing technology - that makes it unworthy of a patent. I'm sure there's some feature of the devices that is patented so the competition can't copy it but the basic idea is nothing new." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7mxbgp
What is the benefit of a strong password?
Technology
explainlikeimfive
{ "a_id": [ "drxdian" ], "text": [ "When the server gets breached and the database gets downloaded, the attackers now have unlimited attempts to brute force the passwords within that database. If your password is weak then it will be one of the first to fall, potentially before you're notified of the breach. If you use that email and password combination elsewhere then you'll get breached there as well Your strong password argument relies on them not having the ability to spend days brute forcing passwords, this is a bad security assumption. You should always assume the breach and focus on minimizing damages when it happens Every system will get breached, its not a question of *if* merely *when*" ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7n0v8i
How does a finger print sensor work?
Technology
explainlikeimfive
{ "a_id": [ "dry7l20" ], "text": [ "When you take a close look at your fingers, you’ll notice that there are very small ridges on them. Everyone has a slight difference between the heights of these ridges, the distances between them, and all the empty spaces or unique points of these ridges. When you use a biometric system to record and use a fingerprint scanner, you often have two systems: an Optical and a Capacitor scanner. Optical scanners work by shining a bright LED light over the finger placed for scanning, and takes what is essentially a digital photograph of the finger print. The ridges closest to the scanner will reflect the most light, and the depths and curves will usually reflect less. The scanner records these “images” and sends the data to a computer that calculates the depths and distances between ridges, that are again, unique to (almost) everyone. The second type of scanner, a capacitor scanner, is more commonly found in phones and various tablets, and is more commonly known. These work by using a capacitor that stores a current, and another computer that measures these currents. When you place your finger over a plate where the capacitor is working, the ridges closest to the plate will affect the conductivity of the capacitor, whereas the air in the ridges between will leave it relatively the same as before the finger was placed. A computer records these changes, and because each distance and curve between ridges are different, the recorded difference will be different. This is how your electrical “fingerprint” is recorded! Edit: I didn’t expect this to blow up so much! In regards to all the interesting questions, I will get to them as soon as I can, and read up on the topic to make sure I’m not spouting none sense. I’m currently slammed at work, but I will make time to answer you all. :)" ], "score": [ 154 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7n3800
Why do you have to eject a thumb drive before removing it from your computer? How does not ejecting damage your computer?
Technology
explainlikeimfive
{ "a_id": [ "dryoc2e", "dryo2bg", "drynzu3", "dryo3lq", "dryo01w" ], "text": [ "Imagine someone is writing on a piece of paper. By asking the computer to remove the drive you are letting that person finish their sentence and the writing makes sense to the next person reading it. By yanking out the drive without telling the computer, you are ripping the paper from that person whilst they’re trying to complete their sentence and you’ll end up with missing words.", "It can't damage your computer, but it can damage the file system on the thumb drive. When you eject, it closes any open files and finishes writing any data before saying it is safe to remove.", "To improve system speed and drive performance operating systems typically buffer at least some of the writes to a drive. For example if an application wrote 1 character at a time and that was replicated by the OS into changing 1 byte at a time on the harddrive it would be exceptionally slow and downright damaging to SSDs. Since there is a buffer of some data (both file data and the various meta data that says where files are and what their properties are) you need to be sure that it has all be written to the drive before it is ejected.", "It doesn't damage the computer, but it can damage the data. Thumb drives don't read/write instantly. If you've written a large chunk of data to the drive, and then a little later you want to remove it, the computer could still be in the process of actually writing the data to the drive. Ripping it out in the middle of that could leave your drive in a funny state, with parts of files written. Ejecting is just warning the computer, and giving it a chance to tell you \"hold up a sec, I'm not done with it.\" USB drives are much faster than they used to, and if you haven't written anything or used the drive in the last few minutes, chances are it's safe to just rip out. The computer may whine about it but most likely it wasn't actually writing anything still.", "It doesn't damage the computer, but it can damage files on the external/thumb drive. Basically, the operating system may delay writing to the drive because it's busy with higher priority tasks. By telling the OS you want to remove the drive, it will finish the write and then let you know it's now safe to remove the drive." ], "score": [ 102, 17, 8, 8, 3 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
7n4d9y
How can forensics tie a bullet to a specific firearm?
Can forensics tie a bullet to a specific gun? like a 9mm bullet from glock 19 #1234. Can forensic science actually do that or is it hollywood bs?
Technology
explainlikeimfive
{ "a_id": [ "drz5yj0", "dryxbsm", "drzwcgj", "drzd04t" ], "text": [ "It's Hollywood BS. A comparison to a ballistic forensics database has never yielded a positive match (Maryland has maintained one for years and gotten no tangible benefit from it). A US National Research Council Study concluded (from [wikipedia]( URL_0 ): > The United States National Research Council released a report in 2008 that endorsed the investigation of microstamping as an alternative to ballistic markings. **It had concluded that a national database of ballistic markings is unworkable and that there is not enough scientific evidence that, \"every gun leaves microscopic marks on bullets and cartridge cases that are unique to that weapon and remain the same over repeated firings\".** It described microstamping as a \"promising method\" that could \"attain the same basic goal as the proposed database\".[14] Microstamping is the laser etching of unique identifiers onto the firing pin, and it has independent problems of its own; namely that firing pins are a bit like tires, they're an easily changed consumable on most firearms like tire is a consumable on cars -- it's like requiring laser etching on tires so you can more easily identify what car was used in a crime.", "Actually can. Each firing pin has a distinct strike on a casing like a thumb print. Barrel rifling also varies so finding a bullet and casing basically tells you exactly which gun fired the bullet", "Real-world forensics cannot. Even if the chamber of a gun is rifled, the machine-produced rifling will be uniform across that make and model of firearm. This means the absolute closest you could actually get even in the best case is that you know **a**, say, Glock 19 shot the bullet. But it could be #1234 or #3456 or #Cookie. And you wouldn't be able to determine which of those it was. But most guns, especially non-rifled firearms, don't leave such marks. So it could be a Glock or a Remington or a Smith and Wesson.", "If you're saying that you can take a bullet, examine it and tell you which handgun in the nation fired that bullet, that's Hollywood BS. However, if you have a bullet and a suspected gun it was fired from, you can say with a certain amount of certainty that this was or was not a bullet fired from that gun. It's never, \"I'm 100% certain this bullet was fired from this gun.\" It's more like a partial fingerprint where you can use statistical analysis to determine the likelihood that a bullet was fired from a given gun. This certainty can be increased if the gun barrel has any unique flaws or marking patterns on the bullet that make a false match less likely." ], "score": [ 9, 3, 3, 3 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Microstamping#US_National_Research_Council_Study" ], [], [], [] ] }
[ "url" ]
[ "url" ]
7n4nn8
How is the Criterion Collection able to restore movies that are 50 plus years old to Blu Ray quality?
Technology
explainlikeimfive
{ "a_id": [ "drz07pu", "drz0l00" ], "text": [ "35mm film still has more detail contained on it than any digital camera can capture. It captures far more texture and nuance, this is why some directors (Tarantino being a popular example) still use \"real\" film. This is comparable to the old records vs. digital format debate in audio recordings. Celluloid film does have a tendency to break down and degrade over time, but it can be \"remastered\" and digitized, often to considerably better quality than it would have been originally (as seen through an analogue projector). For example, I was watching reruns of the original Star Trek run on the BBC yesterday. These episodes never looked better! Not just the remastered \"cgi,\" but every shot was crisper, cleaner, and better colored than I remember them being originally. Edit: 32-35mm (I was a projectionist for Pete's sake, I should know this!)", "Boils down to: How many megapixels does film have? Infinitely many! It’s because the image isn’t divided up into tiny squares (the pixels) and is perfectly smooth. Film is far more detailed than digital as it’s analogue so it’s a good source for making better and better versions." ], "score": [ 21, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
7n5nq2
How does radar work? And why can you only pick up objects on radar if they are above a certain altitude?
Technology
explainlikeimfive
{ "a_id": [ "drz87fz", "drzkwhr" ], "text": [ "Radar is just just radio waves that get sent out, bounce of an object, and return to the receiver. Radar can detect objects at any altitude as long as there's nothing in the way. So for example, radar can't detect something on the opposite side of a mountain, and can have trouble picking up planes at very low altitudes because of ground clutter (trees and hills and such that get in the way or create noise that hides the plane) but radar can be on the ground or the sea looking up or in the air or space looking down.", "As a former 14E (PATRIOT radar and fire control) soldier: * Yes, as long as there is nothing between the object in the air and the emitter, it can see the object. This includes rain clouds. * You can't see below your wavelenght - the distance over which the wave's shape repeats. Most air traffic control radars are 1–2 GHz 15–30 cm wavelength. * The return (the radiation sent back to the emitter) suffers from the inverse square law: energy sent out is inversely proportional to the square of the distance . When the energy bounces off the object/target in the sky it is received inversely proportional to the square of the distance. Air search/fire control radars require a lot of power. * The radar \"gun\" used by police has a narrow beam and since the distances are smaller it can be made less complex than an air radar but they use the same frequencies as many air search radars." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
7n6cld
How does an audio equalizer actually WORK? (Not how do I use one)
I already know how to use an equalizer either on a mixing console, or a parametric EQ in a DAW. But from an engineering perspective, how does an EQ actually work? If I wanted to build one, what's the general concepts I would have to know in order to build one? How does an equalizer actually differentiate the frequency ranges, and then boost the tones within those ranges? Since it is all one single audio stream, how does it differentiate/modify the ranges within the same stream?
Technology
explainlikeimfive
{ "a_id": [ "drzsuz3" ], "text": [ "It is all just [filtering]( URL_0 . By careful selection and placement of time-based elements in your circuit like capacities and inductors, you can filter out frequencies. In essence you can think that a capacitor takes a certain amount of time to charge. When fully charged, the capacitor acts as an open circuit and does not let any more current flow. Frequencies that are too fast will not let the capacitor charge/discharge between cycles, giving you what’s called a low-pass filter (it only lets low frequencies through). In general you can create most types of filters in circuits, but modern equalizers will do it with dedicated chips or even in software. The mathematics behind filtering is Fourier analysis, and most Fourier transforms have a circuit equivalent." ], "score": [ 4 ], "text_urls": [ [ "https://en.m.wikipedia.org/wiki/Filter_(signal_processing%29)" ] ] }
[ "url" ]
[ "url" ]
7n9glm
what's the difference between controlling the volume on the PC and controlling the volume on its speakers?
Is there a difference if I have the volume set at 1 on the audio source (PC) and at 100 on the audio output device (in this case, the speakers), and vice versa?
Technology
explainlikeimfive
{ "a_id": [ "ds02iw4" ], "text": [ "The difference is the signal to noise ratio you end up with. Keeping the PC volume low and then cranking up the speaker is like recording someone speaking softly outside and then amplifying it: all the background noise gets amplified too. Keeping the PC volume high and turning down the speaker is like recording someone speaking loudly, and playing it back quieter: their voice was loud compared to the background, so it sounds better. You can't straight up max out the PC output because it's designed to be able to power headphones and small speakers on its own, so it can be turned up \"too high\" and cause distortion. You want to turn it up as high as you can without getting distortion on loud sounds, and then adjust your speakers to taste. This sort of volume control is really important in areas like... oh, electric guitars, where the sound is overdriven, and distortion is often desired. Loud amp, quiet guitar vs loud guitar, quiet amp makes very different sounds." ], "score": [ 31 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7n9ogj
How is "realistic" water rendered so smoothly in realtime
I'm talking mostly about videogames, but the question applies beyond them. It seems like rendering water and fluid physics in realtime would be a very intensive task, especially when it must interact with other objects seamlessly. At the very least, there are an enormous amount of polygons in play. How is it rendered so smoothly and in such high fidelity in modern applications?
Technology
explainlikeimfive
{ "a_id": [ "ds07lvj" ], "text": [ "Currently, water is not modelled as a fully interactive fluid simulation. The fluid simulations you'd see if you searched for them on YouTube are small simulations rendered for a long time in something like blender. They're not rendered in real time. Water in a video game is pretty much just a mesh that may move and reflect stuff to show the surface, but there isn't really anything to it under the surface besides filters to simulate the lighting effects of being underwater. Objects don't typically interact with it as if it were a continuous liquid." ], "score": [ 9 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
7nai1x
How do email spam filters work?
Technology
explainlikeimfive
{ "a_id": [ "ds0a56o", "ds0ghbt" ], "text": [ "I’m by no means an expert and hope this will tide you over until someone more qualified arrives. I believe it’s a sort of artificial narrow intelligence, that is created with a set of known spam senders, a blacklist if you will. The AI then learns from your habits, like what you delete without reading, what you mark as spam etc. This means it will improve over time.", "It's a many-faceted approach and a constant battle. In the email servers I run, every email that comes in is scored and past a certain score, the email is flagged as spam and quitely tossed. The scoring goes through a few steps 1) Greylisting. Anytime something connects to my mail server, if it hasn't gotten a recent connection from that IP, it rejects email from it saying \"Try Again\". Proper email servers will wait a couple of minutes and retry, spam-generating bots will move on. 2) IP Address check - Is the source IP from overseas? Is the source IP part of a dynamic IP pool, like an ISP dialup or DSL pool? Is the source IP already part of a black list. All of those add to the scores. And then it uses things like SPF and DomainKeys to cross check the sending IP address with known mail server IPs for the domain in the sender address. 3) Heuristic algorithms. Analyzes the subject and body of the message. Does it contain phrases like \"B1GG3R P33N3R 4 U!!!!!!!!\" or \"I am Prince Ububu from Nigeria\"? And then finally, I maintain my own blacklist of sender addresses. THere's a few companies and people that I had dealt with in the past that would not honor unsubscribe requests from, so I just black hole their emails completely." ], "score": [ 5, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5rd8i0
How is YouTube a sustainable business model? If view count remains constant but video storage costs continue to increase, wouldn't this lead to a permanent loss?
Let's assume that in 2020, YouTube maintains a daily view count of 500 million per day. However, they also keep old videos on the site that aren't generating views. Don't these "dead videos" eventually accumulate and overcome the profit margins with cost of storage?
Technology
explainlikeimfive
{ "a_id": [ "dd6cep0", "dd6d2sp" ], "text": [ "Storage costs are going down exponentially. Every year the cost of storing 1 GB of data is half what it was the previous year. YouTube loses money every time someone uploads a long video that nobody watches, but it doesn't matter because they make insanely high profits off of the top 1% of the most popular videos. As long as YouTube is a good place for popular videos, the business model is sustainable. If it turned into an unpopular site where people just uploaded their personal videos but nothing popular or viral ever went there, it'd lose money.", "As of the last time Google released any financial info about YouTube (early 2015) was that it was not a profitable business unit to operate. However, since then, most analysts are thinking Youtube is now profitable due to them selling way, way more ads than in previous years. Google has really taken a strong effort to get advertisers on Youtube, and the latest analysis thinks they will likely be profitable during 2017. However, in completeness, even if it was a money loser, Google would continue to operate it, as it provides benefits to google's other services. They (like many businesses) are willing to take losses in their left pocket to reap bigger profits in their right pocket" ], "score": [ 17, 6 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5re0bo
Who, or what, finds and puts together the information on sites such as URL_1 and URL_0 ?
Technology
explainlikeimfive
{ "a_id": [ "dd6o3qv", "dd6px3y", "dd6k4kd", "dd6ru6j", "dd6qk1g", "dd6qpwh", "dd6sarl", "dd7467u", "dd6s01j", "dd6wfku", "dd6vtws" ], "text": [ "Short answer: Mormons. Why? Members of the Church of Jesus Christ of Latter Days Saints believe that they can be baptized on behalf of their deceased relatives. Part of that process involves meticulously documenting who has proxy baptisms done for them so that nobody gets doubled up on. Like most Christians, they believe that Jesus will be back soon and He's going to help them make sure they don't miss anybody. How? As part of that ambitious effort to essentially map out the entire human family tree, resources like the above mentioned serve as searchable archives of public records such as birth, death and marriage certificates as well as census records. Many of these records exist in hand-written form only. To deal with this huge barrier, all of the documents are scanned into a database and volunteers from around the world (mostly Mormon) painstakingly type out the image text so it can be digitally searchable. TLDR: Mostly Mormons do it, some are employed by those websites, many are just volunteer enthusiasts.", "I did! They have a volunteer base that reads documents and then keys in relevant information. I went through 50 or 100 WW2 era German documents from the Warsaw Ghetto I think it was. Sad, but I read German and I felt like I should help. I also did some union membership cards from Wisconsin. They have a lot of files to download and work on!", "The records was previusly in churches, hospital records and stuff. And a great deal of effort have been put into digitalizing it all. (University Genealogy studies mainly) Initiatives made by past relatives (such as mine - [ URL_2 ]( URL_1 )) that have been going since 1918 and uses tools by URL_0 TLDR: They make use of the public records made by the Genealogy societies and university studies", "My dad. He isn't mormon, but he is fascinated with where he comes from. His hobby has taken him all over the country. When he finds a lead he goes to the last known location of that possible ancestor, checks the cemeteries and libraries for documents. Once he is sure of the lineage he adds it to his tree. Some lines he has been able to take back a thousand years.", "There's a National Genealogical Society and one for pretty much every US state as well. Most accredited genealogists have accounts on sites like these to edit and create family trees. For instance, my mother has researched our family back to the 1700s in Europe and it's almost entirely trackable on Ancestry.", "I used to work at an archive in London that was digitising its collections of workhouse and asylum records. The job was both incredibly fascinating but also very repetitive as you turned page after page and took a high res photo of each. Some of the books are in bad condition and couldn't be looked at by the public. It's fabulous that they are accessible now and to people all over the world.", "I had a great friend in college who got his history degree. After college he worked for URL_0 at the National Archives scanning documents and archiving them for the website. Seemed like a pretty good gig for a while.", "Hi, I created a throwaway so as to not 'out' my main account, but I worked as product manager and business analyst doing this for almost a decade for URL_4 (by a very large margin, the largest.) Despite the current most popular response, Mormons make up a tiny fraction of the genealogical community (not even 1%). 
Most of the space is occupied by archives and libraries on one side, and private companies on the other - although FamilySearch, the Mormon genealogical wing, is fairly large. There are 4 main items which come together to actually make the process happen. 1) Acquisition of content 2) Digitization of content 3) Transcription/digitization and clean-up of data 4) Publishing of content and data The first step, in many ways, is the hardest. Teams of content aquisitionists spend years cultivating relationships around the globe looking for content sets which will fill in gaps in available records sets or add more information to existing ones. Early on in the internet-genealogy era, this was primarily from large federal archives in the US and the UK - thinks censuses, social security records, and so on. These are easy, well indexed, and well understood and usually came on microfilm, which is extremely cheap to digitize. As time has gone on, attention has shifted to state archives, which often have microfilm or microfiche, but increasingly collections were loose paper - which is very expensive to digitize, but collections are usually in good repair. As those have been exhausted, or local privacy laws prevent their digitization, it's shifted to church records, smaller archives, private collections, and some interesting ancillary collections like digitizing yearbooks. From the beginning of the negotiation of a contract to its close can easily be several years, even for a small collection of images. Usually the contracts will include a company (like Ancestry) digitizing and indexing the records free of charge to the archive, with some type of exclusivity contract preventing the archive from giving/selling the images to competitors for a handful of years, while the archive is still free to serve the images to their patrons all they want. Exceptions to this are items like the release of the 1940 census where the federal government managed digitizing the images (although it was outsourced to a private company to actually do the work) and companies had to pay to get access to the images all at the same time. The second step, digitization of the images, is one of the most fascinating, in my opinion. Very common types of formats for genealogical content are: Microfilm, Microfiche, Index Cards, Loose paper, news papers, maps, and books. For Microfilm and Microfiche there are excellent mostly automated machines for scanning this content. It's usually scanned as a 'ribbon', which means in a 1000 ft reel of microfilm, it gets scanned as a single 1000ft long image, and is sliced into individual images/frames later. For books, if the book isn't unique and super old, usually the binding of the book gets cut and it gets scanned in a high-end sheet-feed scanner (these can often do 200+ pages/minute). For books which are too old or unique/rare, they will get scanned in special cradles with two DSLR cameras looking at opposing pages. This is very expensive and very slow (think 2,000 images/day.) Images then usually go through a very rigorous audit/edit where extra space is cropped, color might be fixed, etc. Usually there is a zero tolerance policy for poorly scanned images - they get sent back to the archive for a rescan. For URL_4 , there are hundreds of people around the world involved in this process at any given time. Third, When the images return, they go to be transcribed/indexed. The content has to be analyzed as a set to understand what type of content is actually contained. 
It's not uncommon to scan a collection of Birth records, only to discover it contains Marriage and Death records as well, which requires the transcription project to be adjusted to account for this. Content is usually sent to overseas vendors for this process because it is less expensive, but most importantly - they tend to be significantly better/more accurate than westerners. Groups like FamilySearch, however, do this entirely via a volunteer force using a double-key/arbitrate process. So as each type of content in a collection is analyzed, each type of form must be accounted for, and a definition of what content is wanted/needed is written up, usually with rules as to accuracy levels required. When it returns from transcription vendors, a pretty extensive audit is performed to look for mis-transcriptions, incorrectly marked forms, missed data, etc. When transcription completes, the data still needs to be massaged. For example, \"Texas\" might have been keyed as \"Tex\", \"Texas\", \"Tx.\", \"Tx\", \"Tejas\", etc. All of these need to be normalized to \"Texas\" so that it is actually searchable as a state. Algorithms are also run to look for odd patterns such as names with many consonants in a row in non-Welsh collections, or a large portion of the names having mis-matched genders. Ie. you would expect most 'John's to be Male - but if most show up as a female, there was probably a major mistake in transcription. Families will be linked together as a 'group' at this point, instead of just a bunch of individuals. For some types of content, image-by-image form recognition is run so that the images can be overlaid with transcriptions and labels when a customer looks at them. Finally, fourth, the content gets published and made searchable. This is a fairly complicated process for large datasets. URL_1 , for example, has more than 19 billion records in almost 33,000 different collections. You can search them all in a few seconds. If you want to known more, there are a series of Ancestry blog posts about the scanning/digitization pipeline (not written by me) from a few years ago here: https://blogs. URL_4 /ancestry/2013/4/10/our-image-processing-pipeline-the-good-the-bad-and-the-ugly/ https://blogs. URL_4 /ancestry/2013/04/20/image-processing-at-ancestry-com-part-2-living-in-the-mesosphere/ URL_3 https://blogs. URL_4 /ancestry/2013/05/13/image-processing-at-ancestry-com-part-4-microfilm-scanning/ URL_0 URL_2", "Combo of volunteers and paid staff read original documents and transcribe them into digital format so they can be searchable. Can't speak to URL_2 but I have done this for URL_1 . URL_0 I have not transcribed any source information but have created family tree there and matched source docs to the tree individuals.", "A big source of the records they have are national and state-level archives. For example, they get census records, Civil War, and Revolutionary War records from National Archives. From states, they can get things like marriage licenses, birth/death certificates, and probate records (things like wills and estate papers). I work for a state archives, and Ancestry pays for us to make copies of the records for them to digitize and index. In my opinion, the indexing is one of the hardest and most time-consuming things they have to do to make the records usable. Just take a look at some of the millions of pages of records they have, many of which are hand-written. 
Sometimes the writing is poor or simply unreadable, so the transcriptionists have to make judgment calls about what they think certain words or names are. Then that information has to be paired with the digitized records in their database so that people can search for them.", "People like my mom. Which falls under the category of mormons. She spends all day once a week doing genealogy work at a Mormon library that has access to all sorts of databases. She also uses whatever free time she has. It's her main hobby. A lot of retired people ( especially but not exclusively Mormons) do it too." ], "score": [ 431, 60, 59, 32, 14, 10, 8, 8, 5, 4, 3 ], "text_urls": [ [], [], [ "http://www.tngsitebuilding.com/", "http://schroeder.dk/slaegt/pedigree.php?personID=I1410&amp;tree=schroeder", "schroeder.dk" ], [], [], [], [ "ancestry.com" ], [ "https://blogs.ancestry.com/ancestry/2013/05/28/image-processing-at-ancestry-com-part-5-auto-normalization/", "Ancestry.com", "https://blogs.ancestry.com/ancestry/2013/07/17/image-processing-at-ancestry-com-part-6-auto-sharpening/", "https://blogs.ancestry.com/ancestry/2013/05/03/image-processing-at-ancestry-com-part-3-where-do-images-come-from/", "ancestry.com", "https://blogs.ancestry.com/ancestry/2013/04/20/image-processing-at-ancestry-com-part-2-living-in-the-mesosphere/", "https://blogs.ancestry.com/ancestry/2013/05/13/image-processing-at-ancestry-com-part-4-microfilm-scanning/", "https://blogs.ancestry.com/ancestry/2013/4/10/our-image-processing-pipeline-the-good-the-bad-and-the-ugly/" ], [ "Familysearch.org", "ancestry.com", "familytree.com" ], [], [] ] }
[ "url" ]
[ "url" ]
5re3kb
What is the process behind counting TV votes? (As seen in talent shows, etc)
Technology
explainlikeimfive
{ "a_id": [ "dd6jj1u" ], "text": [ "Normally you phone into a number which will play an automated message before ending the call. From the production side of things they can see the number of calls to each number. This is a pretty standard part of telephony technology and is used in virtually every call centre for recording stats. If required they could report on unique calls vs total calls to get an idea of how many times people are voting multiple times etc." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5re890
Georeferenced Images
What does georeferencing really mean? How is it done? Recording latitude / longitude data when the images are captured? Is the latitude / longitude / altitude data always stored in EXIF? Or are there other possible formats?
Technology
explainlikeimfive
{ "a_id": [ "dd6lbcg" ], "text": [ "If you have pictures from your last vacation, it could be nice to know where they were taken, couldn't it? So, the camera - if it has an GPS - can record where the pictures was taken and store it together with the image. Such as in the EXIF header. This is sometimes called geotagging. If you on the other hand have a scanned map of a city for instance, and want to show your position on the map using GPS, you must tell the map software what position in the real world the pixels of the scanned map represents. This is usually done by selecting a few points on the map, enter the real world position for those points and let the software figure the rest out. This is georeferencing. Of course, if you buy aerial photos over a country, they are normally georeferenced already. There are tons of formats for this too" ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5rgnh8
If games can now render almost perfect textures, why is it still so difficult to have a nice-looking shadow?
Technology
explainlikeimfive
{ "a_id": [ "dd7242o" ], "text": [ "Shadows require redrawing the scene from the perspective of the light source to determine what is on top of what relative to that light source. When this redrawing happens the results get stored in a texture called a shadow map. This shadow map has a fixed resolution which is why shadows up close can look good but shadows far away can look patchy (just like if you stretch a texture over too much distance it looks good up close but terrible far away). If you have too many light sources this means you are redrawing the scene for each light source. This can be mitigated somewhat by only redrawing the parts of the scene that change but it's still an expensive operation." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5riov4
How can a internet web address be bought and sold? Who are you paying when you buy one?
For example, I can't just create a website that uses the domain address of URL_0 . Reddit owns that address and pays for it - so who are they paying? How did the person they're paying obtain it in the first place?
Technology
explainlikeimfive
{ "a_id": [ "dd7k3ql" ], "text": [ "There's a non-profit corporation, ICANN, that runs the global Domain Name Service. You can think of it like a phone book for the Internet- it maps names (like URL_0 ) to addresses (like 151.101.1.140). They give control of the top level domains (.com, .org, .uk, and some fancy new ones like .google and .xyz) to other companies, who in turn can sell them to you. When you buy a domain, it gives you control over that domain's entry in the Internet's global phone book- you can set that domain to point to any address you want. When a person wants to go to a domain, their browser goes to this phone book and looks up what address to go to, and sends a message to that address. You could, if you wanted to, run your own domain name server where you control all of the addresses rather than going through ICANN's system. But since everyone else is using ICANN's system, they wouldn't be able to see any of the things you set." ], "score": [ 9 ], "text_urls": [ [ "reddit.com" ] ] }
[ "url" ]
[ "url" ]
5rje32
Why aren't utility companies more supportive of developing alternative energy?
Technology
explainlikeimfive
{ "a_id": [ "dd7tmdd", "dd7tzox" ], "text": [ "I work for an electrical utility company as one of their engineers in the project management side. It is true that utility companies are first out there to make money, but it doesnt mean that they arent out to do the right thing either. We have a department solely to develop or gather new technologies or ideas on how to use more renewable and environmentally conscious methods, but cannot do so at the sacrifice of the current systems stability and current load. Also, a new project or a new site to even test doesnt come quickly, the engineering development to implement any plan be it a standard substation or something new takes years to develop from finding the right site within the network and obtaining the property, doing a full site investigation including any environmental impacts that can occur while at the same time determining the most efficient or cost effective way to build a site. Thats all just gathering the right information and that alone can take up to 2 years, I know of one site where its a standard substation and it has only gotten passed the environmental impacts after being in review for 10 years since the public fought it for so long (which didnt make sense considering we wanted to up the reliability in case of major storms for the same public fighting us). then the actual engineering of all the drawings and details, calculations, site design, and what not, which also needs to be reviewed. Now this doesnt mean that utility companies arent already doing this, what a majority of the public dont realize is that before even construction starts, putting together the whole package takes a VERY very very very VEEEERY long time as every detail has to be thought through, and a lot of things still get missed. So dont get discouraged, it takes time, it doesnt happen over night, it wont happen in 5-10 years, it will take a LONG time. but it will happen.", "Mostly because alternative energy sources are a lot like nuclear energy: cheap to run, but expensive to build. Conventional fossil-fuel based energy production has an ongoing cost: they have to buy fuel (coal, diesel, natural gas) to make electricity. But if they don't need the electricity, they can shut off a gas turbine or diesel generator and stop paying for fuel. If the price of electricity goes down, they can shut off a conventional generator and save money. Utilities can adjust the costs of conventional fossil fuel energy based upon demand. Natural gas turbines are especially good at this \"on demand\" production. But you don't have to buy fuel for solar panels or wind turbines or wave harvesters. They make electricity at the same price, whether you need it or not. But the interest on the loan that they took out to build them doesn't go down. The payments that the utility has to make on a wind farm or solar array stay the same, no matter whether the price of electricity goes up or down. The utility can't adjust their costs based on demand for their product (electricity). Also, nature is less than predictable. Clouds can block the sun, the wind can die down, and the ocean doesn't always have waves. They have to have spare generator capacity to fulfill the demand for power during those times. But if the utility has to have enough gas, diesel, and coal generators to fill the need for electricity without alternative sources, then why build the alternative sources at all? It is just another loan to pay interest on, to them." ], "score": [ 11, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5rjth1
Why credit/debit cards implemented chips
Technology
explainlikeimfive
{ "a_id": [ "dd7tcxt" ], "text": [ "A swiped card uses a magnetic strip and the information contained on it is static, it stays the same every time, so if that same information is skimmed, it can be used by thieves until the user catches the fraudulent purchases. A chip, however is part of a process called tokenization. It has the same information but transfers it in a different way every single time, and the card issuer knows which way it will transfer it each time and will always transfer it in a different time. In fact is has been used very effectively around the world which led to fraud in the United States being a disproportionate amount of the fraud worldwide. So while an inconvenience, it is much less inconvenient than a stolen identity and stolen funds. It would also help if the United States would shift straight to chip and pin (i.e. Every transaction requiring a pin not just debit) instead of chip and sign." ], "score": [ 21 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5rkleh
Why hasn't anyone made a gaming console the way 6-CD changers were made, having multiple discs in the console at the same time?
Technology
explainlikeimfive
{ "a_id": [ "dd7zx45" ], "text": [ "cost, reliability, size. But probably mostly just not a necessity. CD changers would shuffle songs that lasted 4 minutes. Its reasonable to expect gamers to swap the disk between games since they are likely to play much longer." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5rpup7
Why hasn't commercial supersonic flight been attempted on a wide-scale since concorde?
Surely in 14 years since the last commercial Concorde flight the technology would have advanced to the point we could think about this again?
Technology
explainlikeimfive
{ "a_id": [ "dd970ya", "dd96prx", "dd9g46b", "dd9f5k0", "dd9bcqu", "dd9wfw7", "dd9x8dq", "dd96ksc", "dd9uugm" ], "text": [ "The main issue is it just isn't worth it. It cost about as much to fly the Concorde across the Atlantic as it does 747, which has three times as many seats. Not many people are willing to pay that kind of money. 3.5 vs. 7 hours might sound good, but super sonic speed doesn't get you through the security line any faster. Door to door is more like 7 hours vs. 10.5, and that point you are blowing a whole day no matter what.", "Because it really was never cost-effective. The Concorde would have never even been built if not for government subsidies. The fact is that the increased fuel costs involved in moving people across the Atlantic Ocean in two hours instead of seven just wasn't worth it.", "We, as passengers, have told the airlines (by the way we purchase tickets) that price is our number one concern. We will switch airlines for $5, and take a flight at midnight to save $20. We'll happily double our trip time through another airport to save $100. So everything the airlines and plane manufacturers have done is to save cost. A plane that is expensive to run, even if it halves trip times? The customers have spoken: we don't want it.", "In addition to the factors people have already mentioned, the whole sonic-boom thing is a major problem. City and state authorities (not to mention populations) do NOT like it when a new source of window-breaking, ear-blasting, annoying noises starts up and wants to kinda continue blasting noise pollution every however many times they fly per day. This is why the Concorde was just a trans-oceanic thing. If it had gone over land, people would have thrown a fit until they stopped. For that matter, it would have been breaking existing regulations and laws, and would never have been allowed to start.", "> Surely in 14 years since the last commercial Concorde flight the technology would have advanced to the point we could think about this again? It has! Multiple companies are in late-stage development of commercial supersonic flight. [Boom]( URL_0 ) plans on flying later this year.", "Crash of Air France Concorde Flight 4590 with only 11,989 hours of service, and the Soviet Tupolev Tu-144S CCCP-77102 at the Paris Air Show, had a great impact on supersonic travel.", "Also British Airways holds the patent for the stretchy paint on the tail. When the plane breaks the sound barrier then paint stretches. Without the stretchy paint it just smears. You would have to reprint the plane each time it landed. Virgin tried to start Concorde back up a few years back but BA were pricks and wouldn't give them the paint. Watched a documentary on it a bit back", "my first guess was the fuel usage. did some lurking on the web \"I seem to recall that the average fuel burn on Concorde was 1 ton per passenger (so 100 tons of fuel) across the pond. Additionally, I can't remember where I heard it now (could have been on here, I joined just around the time the retirement was announced) but Concorde burned more fuel taxiing from Terminal 4 to the north runway than an A320 did on a flight to Paris.\"", "It's not profitable and nobody builds supersonic passenger aircraft anymore. Flying has a similar effect on your fuel economy as driving does. The faster you go, the more fuel it takes to get to your destination. And the more people you can pack into a plane, the lower the operational cost per passenger. 
To be profitable they need large, fully loaded planes travelling at regular intervals. Large aircraft are the aviation equivalent of a Greyhound bus. Great for carrying lots of passengers, but not exactly speed racer. The wear and tear on a supersonic aircraft and the maintenance costs, downtime, and operational costs are much higher than on a commercial airliner. They cost more to fly, they carry less people, and so nobody has stepped up to the plate since Concorde flights ended. If passenger space flight catches on, we may well be treated to not only a flight into space, but a journey of thousands of miles in dozens of minutes instead of hours." ], "score": [ 42, 19, 13, 13, 6, 3, 3, 3, 3 ], "text_urls": [ [], [], [], [], [ "http://boomsupersonic.com/" ], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5rqoie
Why is it when you watch a DVD or stream a movie on your tv, it isn't completely full screen?
If I watch a movie on Netflix the movies take up my entire TV. But if I watch a DVD or stream a movie there are black bars on the top and bottom of my tv. Why?
Technology
explainlikeimfive
{ "a_id": [ "dd9dd3f", "dd9dg1s" ], "text": [ "Because that movie was formatted for a resolution different from your TV. For example, if it was more widescreen than your TV then it would either have the sides cut off (which means you miss seeing anything happening at the sides of the screen) or they fit the width to your TV width and thus leave bars along the top and bottom.", "It's due to a difference in aspect ratio. That is, the ratio of the width of the display to the height of the display. If they don't match the source (such as using a television to watch something filmed for a theater screen), you'll have to do one of three things. 1) Scale the output to the display and put black bars on the difference. 2) Cut off parts of the output so you can match the ratio of the display. 3) Stretch one dimension more than the other, making everything look stretched out and horrible. 1 and 2 are the difference between DVDs labeled as widescreen and fullscreen. Nobody does 3 because it's horrible. However, if a television show or a movie designed for watching on a television and not a movie theater, they can film it to fit nicely on a television." ], "score": [ 19, 7 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5rrnta
HTTP vs. HTTPS
Technology
explainlikeimfive
{ "a_id": [ "dd9lrdh", "dd9qg70", "dd9uzw4" ], "text": [ "The \"s\" means secure. Think about talking to your BFF in a room full of people at a party. Everyone can hear you and hear what you're talking about. Now lets change that up, now you and your BFF are speaking a secret language that just the two of you guys know. Only you. Everyone else can still hear you talking, but no one can make any sense of that. Thats what moving from http to https does. It keeps your conversation with the website in your own secret language.", "HTTP (Hypertext Transfer Protocol) was something invented by Tim Berners Lee in 1989 at CERN when he created the WWW (he didn't do it from scratch and built on the ideas of others before him, but he created the version we now all use). It is a protocol (hence the **P** in the name) originally designed to transfer (the **T**) a certain sort of file called a HTML (Hypertext Markup Language) file (which is where we get the **HT** from). This means it was meant as a protocol (a set of rules) to transfer websites from one computer to another. There were other protocols with more generic applications before that like FTP (File Transfer Protocol) but HTTP was especially designed for websites. The was we use it today is to have a webbrowser (like the one you are using right now to view this website) communicate with a webserver and transfer files (like the website you are viewing right now) from the server to the browser so you can view it on your computer/phone/whatever. The HTTP is used to send the website from the server to your computer. It can also be used to transfer other types of files but websites is what it was built for and is mostly used for. The original HTTP had a big problem though. Everything that was transferred with it was transferred openly over the net in plain text. Think of it as writing something on the back of a post card. the mailman and everyone else who handles the postcard as it is send to you can see exactly who is sending what message to who. If you value your privacy you might prefer sending a letter in an envelope instead of a postcard. This is basically what HTTPS is to HTTP. Once the web got started people realized that it might not be a smart idea to send things like credit card numbers, passwords or sexual propositions openly around the net in ways everyone could see. So a variation of the HTTP was born called HTTPS (the S stands for secure) it basically does exactly the same thing as HTTP does, but the rules were amended to include a bit about encrypting the messages being send back and forth first. HTTPS has the advantage of giving you privacy on the web. One other advantage is authentication. Because of the way encryption works, websites using https have to have something called a certificate. This certificate that they get from trusted authorities can be used like a badge or a personal ID to prove that the website is exactly who they claim to be. So https does not just give you privacy it also makes sure that nobody is impersonating the website you are trying to view. The combination of privacy and making sure you got the right website is important for such things as online banking and basically everything else you do online where you don't want to give anything away.", "Lets say I'm not a very nice person and I want to take advantage of a person or business. Lets take a person, and say that I've parked outside their house a few nights in order to log wifi packets until I crack their weak encryption key. 
I can now join their network and use a technique called arp poisoning. I basically trick the persons computers into thinking I'm the fastest path to the internet so all of their traffic gets routed to me, then out to the web. I could use any number of tools like winpcap and wireshark to sniff the packets being sent back and forth between that user and any website they visit including their bank, their email, what username and passwords they typed in. I would basically know everything about what they are doing. How do you prevent that? You connect using an encrypted session which establishes 2 way communication with an internet server, but nobody but the server and the user can understand the conversation. So the bad guy sniffing their packets will get pages of garbage, instead of useful and illicit passwords and account numbers. HTTP is a non encrypted connection which is fine for googling things or visiting a public website that doesn't require you send any login information to it. HTTPS is an encrypted connection which not only encrypts the conversation, it also encrypts the username and password that are sent, and it prevents people from eavesdropping. It's not fool proof. If I was a savvy bad guy I might use my new found wifi connection to your house in order to serve you malware which infects your computer and logs all your keystrokes. Since the information is only encrypted on the wire, and once it reaches your web browser it's not encrypted anymore, if I can see whats on your screen and log your keystrokes, I can still do a lot of damage." ], "score": [ 28, 21, 5 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5rrznp
What is a Jewellers tool more like, a 'Magnifying glass' or a 'mini telescope'? Please Help.
To settle a petty argument. A stubborn friend just ignores any logical answer I give.
Technology
explainlikeimfive
{ "a_id": [ "dd9oopa", "dd9opab", "dd9owsl" ], "text": [ "It's called a [Jeweler's Loupe]( URL_0 ), and it does NOT telescope, so it is a magnifying glass.", "Microscopes (and by extension, magnifying glasses which are more primitive microscopes) are used to magnify small objects that are at a short distance from the viewer whereas telescopes are used to magnify large objects that are at a large distance from the viewer. A jeweller doesn't look at distant larger objects. So they aren't using telescopes. A jeweller looks at nearby small objects. Thus they are using microscopes or magnifying glasses.", "A Jewelry tool, mostly known as a loupe, is closer to a magnifying glass, as it is used to see things that are close. Telescopes are for seeing things far away." ], "score": [ 8, 4, 3 ], "text_urls": [ [ "http://www.thefreedictionary.com/jeweler's+loupe" ], [], [] ] }
[ "url" ]
[ "url" ]
5rs5g8
Why are people allowed to reupload YouTube videos onto Facebook without it being considered copyright infringement?
Many content creators make their livings on YouTube. But I see thousands of videos taken and reuploaded onto Facebook. Sometimes these are just small clips but sometimes they are the full videos, and almost never is the original creator mentioned. Is there a copyright strike system YouTube content creators can use on Facebook, and if so why is it rarely used? Secondly, why are videos stolen and put on facebook with crappy compression? Wouldn't it be easier to share a YouTube link via Facebook than go through all that work? I know this is what SoFloAntonio was accused of doing by H3H3Productions. Why even go through all this trouble, do they get ad revenue or is it just a popularity contest? Edit: If anything, I would have thought that at least Google would have put a stop to this by now. They are losing thousands of dollars daily/weekly from potential ad revenue.
Technology
explainlikeimfive
{ "a_id": [ "dd9szh7", "dd9vodq" ], "text": [ "If the content creators found their videos on Facebook, they could file a DMCA claim that would force Facebook to take the videos down. Google/YouTube legally can't do anything because they don't own the copyright to the videos, and it's near impossible to do anything technically because at some point, there has to be a video feed that gets displayed in your browser and there's no way to stop you from recording that. The YouTube creators don't have the money to hire an army of lawyers to scour Facebook for infringement, so they'll call it if they see it, but they won't find videos that don't go viral. Facebook doesn't have to pro-actively search for copyright infringement as long as they take down videos when they're notified of infringement because of the DMCA's safe harbor provision. As much as people hate on the DMCA, it is literally the only reason why you can have a video-sharing site with fewer resources than Google exist and not get sued to oblivion the first time someone uploaded a clip of a Hollywood movie. As for why people do it, I couldn't tell you for sure. It may be to get around region restrictions on YouTube, it may be to avoid YouTube's ads, it may be because people just don't like YouTube.", "> Is there a copyright strike system YouTube content creators can use on Facebook Facebook is subject to the Digital Millennium Copyright Act, which means that if the copyright owner files a complaint, Facebook must remove the video pending the outcome of the dispute or else be held liable for all copyright-infringing activities by its members. But first you have to find the copyright-infringing video, and as far as I know Facebook doesn't have an automated system that helps content creators find the infringing material. It's led some people to accuse Facebook of deliberately fixing things so that videos uploaded to Facebook are seen by more people, generating advertising revenue for Facebook, so they don't have much incentive to help out here. > why are videos stolen and put on facebook with crappy compression? In some cases, it may be a cynical ploy to generate interest in the page, attracting likes, shares and so forth. Since the Facebook algorithm appears to favour Facebook videos over embedded videos from other platforms, there's an incentive right there to re-upload them. But I also think that there are a lot of people out there who simply don't understand that there could be something wrong with what they're doing. They're not deliberately trying to deprive a content creator of revenue, and genuinely believe that if it's on the internet, they can do whatever they want with it. A lot of people really don't get the point that making videos costs money, and often assume that people who make videos do it in their spare time as a sort of hobby; or that video creators magically make obscene amounts of cash and won't miss a few bucks just because somebody re-uploaded it elsewhere; and very often people assume that they're \"promoting\" the video for the creator and giving them \"exposure\". It's very difficult to get people to understand that this simply isn't the case. As a content creator myself, this presents a bit of an ethical dilemma. I'm a living, breathing human being who needs to buy food to stay alive; on the other hand, it's unreasonable to expect 13-year-olds to understand the same copyright law that, say, network TV has entire legal teams on staff to help them with." ], "score": [ 14, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5rsl98
Why do photographers not just take video in HD or 4K then take screenshots afterwards from the footage?
Technology
explainlikeimfive
{ "a_id": [ "dd9ttxl", "dd9tqtx", "dd9u46m", "dd9ueou" ], "text": [ "So a couple things. First of all, most cameras already support a burst mode, which is basically what you want- it takes a whole bunch of pictures in a row and then you can choose from that. Second, full HD video is only 2 megapixels. It's not exactly a high resolution image, although you usually can't tell because it's changing fast enough and your eyes are only focused on the movement. 4k video is better, it's about 8.3 megapixels, but it's still not fantastic. Third, taking a video actually gives you less control over the images. Photographers choose how long to take an exposure for each image to ensure that the sensors get enough light for a clear image, but the image doesn't get blurred from movement. Photographs taken at night use a longer exposure time than photographs taken at a daytime sporting event, for example. By shooting a video, you lose that control since each frame is at most 1/24th of a second, and more likely less than that.", "4K is only equal to 9MP, far from the 20+ most DSLRs shoot at... Now, Megapixels aren't everything; videos will be much more compressed than normal photos, leading a much lower quality. Photos on my 13MP phone can take up to 3MB each. Now think of how big video files are; yes they're large, but each frame in a video isn't 3MB. The video would be huge. Theres a lot less data per frame in video than a single still shot. This doesn't factor in the ability to shoot in RAW files which generally makes them even larger and therefore higher quality. TL;DR Videos are lower resolution and the frames much more compressed than still shots.", "Video is a series of photographs taken X times per second, and always X times per second. This means the sensor or film is only exposed to whatever is in front of the camera a fixed amount of time (1/30 of a second for instance). Well in photography being able to control how long the sensor or film is exposed to the light via the shutter is one of the most important choices one can make in order to get whatever photograph they are after. Fast shutter speed freezes action like in sports. Slow shutter speed blurs moving parts of the image. Fast shutter allows less light to the film or sensor. Slow shutter allows more light to the film or sensor. It's a very important control for a photographer and the image they wish to convey. As well as a tool for dealing with the amount of light on hand or even being able to capture something moving quickly at all.", "Good question. The long and short of it is that movies encode visual information as a series of key frames, which are perfect, followed by a series of frames that only record changes in the key frame. This keeps movie sizes small but it results in image degradation. Because the images are updated 30 times a second it's okay if each individual frame isn't perfect, because our eyes will fill in the missing detail because it happens so fast. But taking a screenshot of the movie results in an image that is noisier, and contains less detail and visual information than even a .jpg. Consider the difference between a digital photograph, and a digital movie. They can both be the same resolution but they are not the same size. The movie is larger, but lets say you compared 30 pictures, to 1 second of video of the same resolution, you will find the video takes up far less space using 30 frames of motion, than 30 pictures would. 
The reason for this is that unless a video is recorded in RAW format, it discards much of the information it records in order to keep the file sizes small. Instead it uses perceptual techniques that divide the movie up into small sections, and simply looks for changes in each little block of the movie. Since not everything on screen is changing at the same time we don't tend to notice that only parts of the image are updating while others don't. It also happens very fast." ], "score": [ 44, 18, 5, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
5rtgat
What does 'cdn' in a url mean?
Technology
explainlikeimfive
{ "a_id": [ "dd9ynci" ], "text": [ "CDN stands for Content Delivery Network. If a web site have a lot of images and videos they might go to a provider that have better infrastructure to deliver the content to the client. As they often have some dynamic elements and some static elements they need a separate domain so the clients know where to get the different content from. It is common to name these domains something with cdn, the name of the cdn provider or just static." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5rtnp9
How do different resolutions work?
Like, say you have a 1920x1080 pixel monitor. You play a 1920x1080p YouTube video. All is well. Now you want to set it to a lower resolution. The way I can think of to do that is to take four pixels in a square, take the average colour and brightness of those pixels, and make that the same for all four. Now you've divided the number of pixels in each dimension by two, so you have 960x540, unless I'm making a mistake here. So ELI5: how do you get 720p resolutions and other ones, and how do they translate to physical pixels in your screen, a number that doesn't change?
Technology
explainlikeimfive
{ "a_id": [ "dda014n", "dda6sfy" ], "text": [ "Imagine a grid of ~2M cells, going 1920 across and 1080 down. Now, make a grid of ~1M cells, going 1280 across and 720 down. Then, stretch the second grid uniformly until it's the same size as the first grid. For whatever cells overlap, you take the average color.", "The key term you're looking for is \"Bilinear Interpolation\". Your 1080p video is broken into the pixels for x1...x1920 and y1...y1080. Your screen is broken into pixels coordinates a1...a1280 (though maybe other values) and b1...b720 [In this picture]( URL_1 ), each pixel (a,b) is coloured the sum of: * The colour value of the red dot (x2,y1) * (Area of red rectangle/area of the pixel) * The colour value of the green dot (x1,y1) * (Area of green rectangle/area of the pixel) * The colour value of the blue dot (x2,y2) * (Area of blue rectangle/area of the pixel) * The colour value of the yello dot (x1,y2) * (Area of yellow rectangle/area of the pixel) This way, the result colour is proportional to the closeness of the neighbours. You can do this process when scaling up, (low resolution to high resolution) and get blurry approximations of what we hope is in between based on our limited data. You can also scale down if you include the averages of entire pixels encompassed in the square. It may be noted that older CRT monitors not operating in their Native Resolution would just [not draw certain lines every once in a while]( URL_0 ) as a form of scaling." ], "score": [ 29, 4 ], "text_urls": [ [], [ "https://upload.wikimedia.org/wikipedia/en/f/f7/Native-resolution_800x600_on_1024x768.JPG", "https://en.wikipedia.org/wiki/Bilinear_interpolation#/media/File:Bilinear_interpolation_visualisation.svg" ] ] }
[ "url" ]
[ "url" ]
5rtwd3
Why does hosting a server require changing firewall & router settings, but connecting to a server doesn't?
Technology
explainlikeimfive
{ "a_id": [ "dda1us9", "dda7bxv" ], "text": [ "When you host a server you're allowing people to make requests to your computer and have access to your network, firewall and routers block some ports so malicious people can't access your network or connected devices, by hosting a server you must willingly give access that's why you have to change a lot of things. When connecting to a server you only will get incoming traffic based on the requests that you make, that's why it's considered safe and will (mostly) work right out of the box.", "Computers are like people, and a firewall is like the front door. The point of the lock on a front door is that the people inside are allowed out, and are allowed to come back in. Random people on the outside, however, are not allowed in. When you connect to a server, you \"leave your own house\" and \"go into\" the server. There you get your precious cat videos and leave and go back to your own house. When you want to set up a server, you have to allow people from the internet into your own house in order to give them their cat videos. This means setting the lock on the front door to let them in (aka, setting the firewall rules). In general, firewall rules are set up to allow all connections from inside a computer to reach the outside, but only allow a few connections to come back in." ], "score": [ 8, 7 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5rukfk
If I run an internet speed test and it tells me I have a download speed of ~60mb/s but when I actually download something, it's around 5mb/s
Technology
explainlikeimfive
{ "a_id": [ "dda75nn", "dda7his", "dda7bjj", "dda74wp" ], "text": [ "I think you're confusing different units. Your download speed is almost definitely 60Mbps (MegaBITS per second). This translates to 7.5MBps (MegaBYTES per second). This makes for much less of a difference. In addition, whatever you're downloading from doesn't have infinite upload bandwidth, and likely caps the maximum download speed per user as to guarantee service to all their users.", "The download speed given by the test is 60 Mega**bits** per second (60 Mbps). 1 byte = 8 bits. 60 Mega**bits** per second (60 Mbps) ≈ 7.5 Mega**bytes** per second (7.5 MBps). During a test the best server (closest to your place distance wise) is chosen, while the same is not necessarily true for actual downloads, which explains your download speed of \"around 5 Mbps\".", "In any download there are two servers (computers) involved. The download speed depends on the distance between the two servers. Speedtest and other internet speed testing apps have servers across the globe and connect you to the closest servers. So you get to see a theoretical max of your internet speed. Depending on website you are downloading from and the server where they have the file for you to download determines your actual download speed.", "You're being limited by the companies upload speed. You're downloading as fast as they're uploading, but they're not uploading as fast as you can download. For the test, the reverse is true. They're uploading faster than you can download. So you're showing your max speed, but not theirs." ], "score": [ 38, 18, 5, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
5rur4k
Why cant we take good pictures of lunar landing sites from earth or from satellites?
We can "see" really far in space with terrestrial telescopes, and with technologies like the Hubble space telescope we can see even further but a few good pictures of landing sites would put to rest a few conspiracy theories.
Technology
explainlikeimfive
{ "a_id": [ "dda8on5", "dda8ode" ], "text": [ "[You can]( URL_0 ). But of course if you don't believe that it happened, it's pretty easy to shut your brain off regarding proof. The best question I've found is - When the US went to the moon, we did so racing against the USSR for the \"first\". If there was ANY doubt that we didn't do it. If there was ANY proof that the US lied... why didn't the greatest opponent the US has ever faced **call us out on it**? They lost, they knew they lost and accepted it.", "The things you can see from terra firma are very bright and VERY VERY big. There have however been pictures taken of the landing site from satellites in orbit around the moon: URL_0" ], "score": [ 9, 5 ], "text_urls": [ [ "https://www.google.com/search?q=lunar+landing+sites&amp;espv=2&amp;biw=1600&amp;bih=810&amp;source=lnms&amp;tbm=isch&amp;sa=X&amp;ved=0ahUKEwiVmsyHqPTRAhWFs1QKHZAXArIQ_AUIBigB" ], [ "https://www.nasa.gov/mission_pages/LRO/news/apollo-sites.html" ] ] }
[ "url" ]
[ "url" ]
5rvjep
How come when the batteries in a remote control die, if you switch those two batteries around, the remote starts to work again?
So, say that you have two AA batteries in the remote and they're completely dead and the remote won't work at all. Then you take the back off and use those same batteries but just switch which one was + and which one was - the remote will start to work again. Why is this?
Technology
explainlikeimfive
{ "a_id": [ "ddag3ks" ], "text": [ "Because sometimes it's not that the batteries are completely dead, it's that dirt and corrosion have made the electrical contacts stop being as conductive as they should be. When you move the batteries, it removes the dirt and oxidation, exposing conductive metal." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5rvtv6
Why are computer CPUs so small? Wouldn't it be beneficial to make them bigger?
Technology
explainlikeimfive
{ "a_id": [ "ddaim4q", "ddal41n", "ddao4nn" ], "text": [ "Electricity takes time to travel through wires. If you want to make a signal go to a bunch of different points quickly then you need to put those points closer together. This is what CPU designers are doing; by making their components smaller they are able to cram more points into a given area and achieve more without slowing down the process with more distance.", "When you say CPUs are small do you mean the feature size or the size of the overall package? Neither CPU dies nor overall package size have changed size dramatically in recent years though the size of the individual features on the CPU have decreased rapidly as described by Moore's law. Shrinking feature size has a few benefits. You can fit more of them in a given area which usually means an increase in computing \"power.\" Also smaller features require less actual power to run them so the power consumption goes down with feature size. The reason we don't just make the die larger instead of making features smaller is because at the rate that modern CPUs run the speed of light becomes a very significant limiting factor. Light in a vacuum takes about 33 picoseconds to travel 1 cm (roughly the size of a CPU die). This is the absolute top speed that information can be transmitted. It doesn't sound like a lot but when you are doing billions of operations per second those picoseconds REALLY start to add up. If you have two circuits on opposites sides of the CPU that need to communicate then making them farther apart will actually slow the communication down.", "Assuming you're talking about the actual, physical size of a CPU whilst holding it in your hand, yes you *could* create a physically larger one to create a shit-ton more computing power into a single wafer. BUT: 1) There's not much purpose for something like that in an average consumer market. What would the average person **do** with it? 2) The price would increase dramatically as the size and power of the CPU increased, so it would *quickly* become prohibitively expensive for the average consumer. Not to mention, creating a physically larger CPU absolutely necessitates creating a physically larger mother board and case to put it into. There's even more expense for the consumer, placing this super computer *well* out of reach for the average consumer. 3) As the size and computing power increase, so do its power requirements. The average processor takes between 90 and 120 watts to function, and the average power supply on a store-bought computer puts out between 250-500 watts, depending on the system and what it's built for. You'd need a *massive* power supply unit just to run your processor. 4) Processors generate a *tremendous* amount of heat, which must be dealt with. The standard average processor, with a default heat-sink and case fan, will reach an average of 35-45 degrees celcius just sitting there. That heat gets pulled away from the processor and dissipated into your room. Ever leave your computer on overnight with the office door shut, and come back the next day and think \"Man, it's warm in here\"? Now imagine if your processor were four or five times its current size. You could heat your whole house (and probably fry a few of your PC's other components). All things considered, the pros of building a larger CPU simply don't outweigh the cons." ], "score": [ 20, 6, 4 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5rw7yt
How do electronic pH sensors work?
This is pretty much what I wonder about every kind of sensor - electronic scales, cameras, etc. - but the one I especially don't get is an electronic pH probe. What exactly is it that "senses" the H+ ions present in a solution, differentiates from any other kind of dissolved cation, and translates the intensity of the "H+ signal", whatever that is, into an electric signal picked up by a computer of some sort? And how does it detect the total volume of the solution that the pH probe is submerged in, in order to calculate the H+ concentration that is prerequisite to calculate the pH?
Technology
explainlikeimfive
{ "a_id": [ "ddanz1j" ], "text": [ "An electric pH meter is essentially just a voltmeter. An acid solution is essentially half of a battery, so the pH meter brings the rest of the battery along and measures the potential difference between the two leads on the probe. Once you've determined the voltage of your \"battery\" the Nernst equation gives you the relationship between voltage and hydrogen ion concentration. Volume is irrelevant to figuring the pH out in this way because volume does not affect voltage of a battery, just the life of the battery." ], "score": [ 8 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5rw99f
How does seeding for a download decrease my download time?
When the original piratebay was still up a couple of years ago, I remember some downloads for whole seasons of tv shows only taking around 30 min because of the ridiculous high amount of seeders. When someone seeds, what is it actually doing?
Technology
explainlikeimfive
{ "a_id": [ "ddalzs2", "ddam05g" ], "text": [ "A \"seed\" is an uploaded that has the entire file to provide. A leecher is a downloaded who has less than the entire file to provide. If there are more seeders it ensures that whatever piece you are looking for someone can provide it.", "Most computers can download (receive) information much faster than they can upload (transmit) it. While your computer may be able to download 5 megabytes/second, it can probably only upload, say, 500 kilobytes/second. That means it takes 10 seeders (with the same internet speed as you) to fully utilize your download bandwidth. As well, if multiple people are downloading at once, that seeder upload is split further; if 10 people are downloading, you now need 100 seeders to fill their bandwidth. This is of course all a gross oversimplification, but welcome to ELI5." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5ry07m
How does an Oreo package reseal itself without being sticky?
Edit: I can't believe how many up votes this got! I was just a bit stoned on my couch eating these fuckers. Next thing you know I'm tits deep in answers and up votes. Thank you!!
Technology
explainlikeimfive
{ "a_id": [ "ddba5e3", "ddbea2p", "ddbkt9r", "ddbjm27", "ddcavbb" ], "text": [ "In medical devices they coat the surface being stuck to in low adhesion material like a thin silicone. This allows for some stick without allowing for a complete adhesion. Like the Post-it note, they use a powerful adhesive but coat it very thin. The thinner the adhesive layer the less it will tend to \"gum off\" and leave residual. Thin coats of powerful adhesive plus low adhesion coating on the surface being such to.", "It's often used as a example of excellent flexible packaging. It uses a cohesive layer (sticks to itself, but nothing else) that is laminated between two layers of film. These layers of film are laser scored slightly offset to each other to create the lip that once exposed by opening the package, giving you a resealable cohesive bond. Long story short, once broken open there is a sticky part, but it's just sticky enough to stick to itself.", "From what I understand, my neighbor is the designer (or part of the design team) for those Oreo packages. They come from the packaging company Sonoco, which is headquartered in my hometown. I could ask him for a more detailed explanation, if you would like.", "Follow-up question, how do you easily access the Oreos on the sides? Asking for a friend.", "what? you don't eat them all at once? When I buy oreo's, like once a year, I buy 2 litres of milk and finish them." ], "score": [ 259, 251, 155, 18, 3 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5rz49v
How does my toaster lever only stay down when its plugged in to power?
Technology
explainlikeimfive
{ "a_id": [ "ddbeh89" ], "text": [ "There is a small electromagnet in there, which moves a catch which holds your toaster lever down. When it's not plugged in there is no electricity, so the electromagnet is not a magnet, so the catch doesn't move and your lever won't stay depressed." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5s15hi
Why are Captcha's getting more difficult?
Captchas used to be typing in a word or number; now whenever I sign up for something I have to click on a few pictures. For instance, it will say 'Click on all the houses'. So you click on them, and the pictures are replaced with other pictures, some of them trees, some of them houses and some of them buildings that look similar to houses. Or ones that say 'Click on the street sign', but they're American street signs, and apparently only the green signs are streets? I don't know, help me understand!
Technology
explainlikeimfive
{ "a_id": [ "ddbn05d", "ddbquss" ], "text": [ "Because robots are getting smarter and smarter so every generation of capatcha becomes useless. The sign thing is just because capatchas developers are so lazy they can't be bothered making it regional", "Usually you only have to actually do that \"select all the _____\" if the captcha thinks you're not human. The new ones look at your mouse movement on the page and then determine whether to give you the actual captcha or not. I've gotten into the habit of swirling my cursor around for a second or two before clicking the circle, and I barely get a captcha unless I type too fast or use the keyboard to navigate around." ], "score": [ 8, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5s1ety
How come when you download something it starts slow then gets faster over a few seconds?
What makes it so that the download can't be fast instantly?
Technology
explainlikeimfive
{ "a_id": [ "ddbqrif", "ddbty7b" ], "text": [ "Neither you nor the computer you're downloading from know how fast the network between you is ahead of time. If the sending computer just guesses it can potentially flood the network with messages it has no way to deliver and cause network congestion.[ So it starts off sending slowly and ramps up until it finds the \"pain point\" where either network congestion starts or one of the two computers can't talk any faster.]( URL_0 )", "The internet is built on IP (internet protocol). The internet is a packet-switched network, meaning data is sent as small packets (one packet contains up to 1500 bytes, including some control information called the *IP header*). IP is unreliable delivery, basically the network says it will make the best effort to deliver your packets, but makes no guarantee that the packets will arrive in the correct order, or at all. Another protocol called TCP (transmission control protocol), builds on IP by assigning a *sequence number* to a packet. TCP adds some additional control information to the packet in the *TCP header*. The TCP header includes the sequence number of the packet, and also information about which packets have been sent and received. To summarize, TCP includes enough information in the packets that each sender/receiver can figure out when packets have been dropped or out of order. Now you might be thinking that maybe a download should figure out how fast your network is, and operate at that speed. But the thing is, the speed of the network between two endpoints depends on the speed at all intermediate points, which generally depends on how much other traffic there is at those intermediate points. Moreso, IP doesn't even require different packets from the same data stream to travel the same path through the network! So TCP has a brilliant and simple solution, requiring no technology beyond its already built-in ability to detect dropped packets. All transmissions start slowly, with a small number of packets. Then, as soon as it is reported that all those packets were received, more packets are sent the next time. The sender keeps sending faster and faster, until packet loss occurs [1], then slows its sending speed until it's just under the threshold where packets are lost. But occasionally the sender will attempt to increase the speed again, and in this way continually use packet loss to discover if network conditions might have become favorable to using faster transmission. [1] Unless some other bottleneck occurs first. For example, TCP headers als include information on packets that have been received but not processed; the sender stops sending if the receiver has too many such packets. This can occur, for example, when saving a download to a disk which is slower than your network connection." ], "score": [ 6, 6 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start" ], [] ] }
[ "url" ]
[ "url" ]
5s31wc
Why can't any of my electronics use multiple channels/sources (e.g. wifi & 4g) to download data at the same time?
I get that if I connect my laptop to an ethernet cable, I will still have to deal with a bottleneck at my modem/router/subscription, but what limits (my) computer/phone to only one channel/source?
Technology
explainlikeimfive
{ "a_id": [ "ddc1m8j", "ddc2zvo" ], "text": [ "Nothing (well, other than your data plan). My Galaxy S5 has a \"download booster\" option which uses 4G and wifi at the same time.", "There is a way, but it's pretty recent (the specification was written in 2013- it's [here]( URL_0 ) if you want to take a look). The problem is that the Internet was built without taking multiple connections into account. It assumes that each device has one address, and that it should find the one best way to get to that address and keep sending more data that way. So it needed some changes in the way your device communicates with the server in order to know that your device has multiple addresses and it should split the data it sends between them." ], "score": [ 4, 3 ], "text_urls": [ [], [ "https://tools.ietf.org/html/rfc6824" ] ] }
[ "url" ]
[ "url" ]
5s4uvq
Why do cellphone manufacturers/communication companies recommend that you let a new phone die before recharging it?
Technology
explainlikeimfive
{ "a_id": [ "ddcg711" ], "text": [ "It should be noted that with Lithium-ion batteries, which is in most smartphones and laptops today, fully discharging them lowers the average lifespan. Small discharges and recharging more often between uses is encouraged in order to increase average lifespan. Fully discharging a lithium-ion battery could be if the electronic device was able to estimate how much time you had left (in time, not percentage) as a fully discharge would allow the device to recalibrate its estimates, but this should be done occasionally (e.g. on a monthly basis). **Sources:** [ URL_1 ]( URL_1 ) [ URL_0 ]( URL_0 )" ], "score": [ 5 ], "text_urls": [ [ "http://www.mpoweruk.com/life.htm", "http://batteryuniversity.com/learn/article/how_to_prolong_lithium_based_batteries" ] ] }
[ "url" ]
[ "url" ]
5s4wwo
Why are all tv's on sale all of a sudden
I have been researching televisions and can't help but notice that most TV brands such as Samsung and Sony have dropped the prices of 4K TVs. Huge price drops too, such as $400-500 off. Isn't 4K supposed to be the new standard for televisions? Is this not the right time to buy a TV? I have not seen anything about 5K or newer TVs. But like MacBooks and iPhones, I know there is new technology yearly, so what goes for TVs?
Technology
explainlikeimfive
{ "a_id": [ "ddcdk5n", "ddcdl87", "ddcdiki" ], "text": [ "Also, tomorrow is the Super Bowl. That makes it a time where people would want to upgrade what they have. 4K has been around for awhile but hasn't caught on for cable, it's mostly for internet/Netflix stuff. They are wanting to push the OLED technology now as the premium priced TV's, so LED TVs are going to dive in price.", "It's the day before the super bowl, a last chance to push TV sales before everyone loses interest in TVs", "We are coming up on spring. Spring is the time when the newer TVs come out. So, dropping prices to make room for the new. I used to work as a TV salesman." ], "score": [ 8, 3, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
5s5248
How does shooting lasers (i.e. LASIK) into someone's eyes dramatically improve their vision?
Technology
explainlikeimfive
{ "a_id": [ "ddcet4x", "ddcm1lt", "ddceue8", "ddcf3cz" ], "text": [ "Same way stabbing someone can save their life if the surgeon does it in just the right place. Those lasers make tiny cuts in your eye's lenses, changing their shape so they focus more sharply.", "I currently work at an opthalmologists office, there are many types of lasers that we use to help people see better, let's start with LASIK. LASIK fires tiny laser into the cornea to reshape the cornea and allow for a different reflection of light to hit the retina. Our eyes are like cameras so it all depends on the amount of light coming in and how focused it is when it hits the retina so a person with 20/20 vision has a great shaped cornea that allows for great light reflection and LASIK allows the cornea to change shape to better refract that light. Alright, let's move to iris lasers. I don't know much on this one but I know enough. People with closed angle glaucoma have no way for the fluid behind the cornea but in front of the lens, anterior chamber, to drain. What surgeons do is the fire lasers into the iris, the colored part, to allow for a new drain to open and the pressure to drop. When patients with glaucoma have high pressures in their eyes it pushes on the optic nerve and causes vision loss and thus the laser brings the vision back. Alright now onto the retina, there are many types of diseases that affect the retina. Retinal detachments and retinal tears both cause major blind spots in the vision and retina surgeons fire lasers to tack the retina back down much like spot welders do with metal. Now, people with uncontrolled diabetes have a lot of blood leaking in the retina and if untreated can cause permanent vision loss. Most retina surgeons start treatment with a series of injections into the eye to stop the inflammation but if that doesn't work the shoot lasers to cauterize the area affected by the blood leaks. Source: work for opthalmologists office in AZ and have been studying to become a retina surgeon.", "They aren't \"shooting lasers into people's eyes.\" Usually the cornea is misshapen, and the laser is used to reshape it so that they can see better. It's only surface level modification.", "Vision impairment correctable by LASIK is due to incoming light being focused either in front or behind the retina and not spot on. LASIK reshapes the cornea, which is the transparent front part of eye that covers the pupil and is a large contributor in focusing incoming light, so it gets the right form and thickness so incoming light now is focused directly on the retina resulting in improved vision." ], "score": [ 76, 37, 21, 7 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
5s5epw
why do online cable streaming services only show a select 4 or 5 commercials?
Technology
explainlikeimfive
{ "a_id": [ "ddcn5qk" ], "text": [ "Is the ad rate so different than broadcast? I watch a local San Francisco station on-line and they have 4 ads. If I pay for broadcast ad rates why don'tI also get to have my ad showing on-line at the same time?" ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5s5gfd
Why is automation suddenly a big deal? Hasn't it already been happening for a long time? What's different now?
Technology
explainlikeimfive
{ "a_id": [ "ddcnedg", "ddcofxu", "ddcoq2t", "ddcizyr", "ddcr1j8", "ddcmd7b", "ddcm4u4", "ddcqmvp", "ddcocdx", "ddclaff", "ddcr0fo", "ddclpet", "ddcpu0k", "ddcnhw2", "ddcmq21", "ddcofgk", "ddcm4ea", "ddcs1hm", "ddcnta9", "ddcmfiz", "ddcq40v", "ddcpx47", "ddcovvn", "ddcwmcl", "ddcsjd1", "ddcihb5", "ddcqmxo", "ddcnev9", "ddcqwxt", "ddcndwq", "ddcmpmv", "ddco9bk", "ddco382", "ddctn9s", "ddctb4b", "ddcmxul", "ddcy1ml", "ddcn56n", "ddd9itn", "ddcyhqw", "ddcr82g", "ddd98t1", "ddcpuqk", "ddcpxon", "ddd5rem", "ddctsar", "ddd0bsl", "ddcp37x", "ddcmhtn", "ddcmf30", "ddcqlly" ], "text": [ "What I think other explanations are neglecting to mention is the change in machine learning. Automation has been a thing for a very long time, but computers have not been powerful enough, and we haven't yet had the methods to *teach* computers to do tasks. *That* is what is so different about automation today. Before machine learning techniques were implemented, automation was very limited. You tell a robot arm to go to *this* location, execute *this* command (like close the grabbing arm), then go to *this* location and do *this* command. If the part the machine is supposed to be grabbing is out of position, the robot doesn't know or care. It executes its command regardless, which means it might fail to grab the part, or grab it the wrong way and crush it, or drop it in a position on the other end that jams the entire assembly line. For that reason, every step of the process had to be very precise. Human workers are usually employed to check the parts and align them correctly for the robots to function properly. It's not a very glamorous job, but it's still a job. Then, we started making robots smart enough to recognize simple changes in position with cameras and settings so that a robot could take the place of a human fixing the alignment of parts. That takes a lot of computing, because it takes recognizing that the part is out of place, then computing the changes made to put it into place, how to execute that, and then doing it. That's not easy for a simple robot, but our technology has advanced. Suddenly, even just standing at the assembly line realigning parts isn't a guaranteed job. Then we figured out true machine learning, where we can teach robots to do complex tasks. This requires more intricate, complicated computer networks because it isn't very efficient. The robot has to \"think\" about all of the inputs and all of the past inputs to make a decision, which requires a lot of computing power and a lot of memory to store the data of past actions. It also takes a lot of complicated coding to tell the computer how to do all that. Those things are all relatively new. Our computers haven't been sophisticated enough to handle that until recently, and computer memory is getting much faster and bigger to store the data required to do it. Think about automated driving. You can't just tell a car to turn left *here* and turn right *here*. It has to react to pedestrians and cyclists in the road, it has to react to drivers doing the wrong thing, changing traffic lights, changing traffic patters, detours, all the actions of all the cars around it. That's a *lot* of data to process, and there's no simple formula to tell the car what to do. Every situation is different, every situation needs a decision instead of a command. The ability to do that didn't exist a decade ago, or at the very least, it wasn't commercially available. Now it is. 
Now you really don't need *any* humans on the assembly line, except one technician to service the robot. Humans don't fix the mistakes made by the robots, they fix the robots so they don't make the same mistakes. Those jobs are gone. And with machine learning, we can teach computers not only to act in complex ways, but also *to create new things*. Computers are now composing music, and it's not even bad. It's not Mozart, but it's not bad. We have computers that can take big sets of data and output the next step in manufacturing. Need a new car design? Used to be you needed a guy to think up a concept car, some more guys to check the aerodynamic efficiency, some guys to consider the market and what people are interested in buying, some guys to custom build a concept model for review, then custom build a full scale concept... Now you can have one guy feed a computer an Excel spreadsheet with sales on every car for the last ten years, and you don't even need 3D models of those cars, you *don't even need to Google image search them*, you just tell the computer to do it and it does, and builds its own 3D models. Then the computer automatically compares trends and looks at numbers and sales and what was popular and spits out a design for you, which you can then feed into a 3D printer that builds the scale model, then feed some instructions into an assembly line that doesn't need to be told how to custom make this car, it just figures it out and makes it. *That* is what's different about automation these days. Tasks that were once considered far too complicated or delicate for computers to figure out are routinely done by computers these days, or getting very close to it. We once believed that humans with our infinite, *living* creativity would always be solely capable of coming up with new, innovative ideas, but computers can do that now, too. It certainly hasn't been a fast process, we've been automating for generations now, but there's always been other things to fall back on that computers and automation couldn't handle, like driving a truck or flying a plane for the military. Turns out, computers are taking over those jobs. The only jobs left are menial labor, and those jobs either don't really exist because mining coal is terrible for the environment, so we stopped, or no one is willing to take the job because it's too physically demanding. And service jobs (retail, waiting tables, etc.) but those don't pay enough. The only real jobs left are super technical jobs that require years of formal education and probably more years of experience in the field (like, ironically, designing better robots). The aging population doesn't have time, money, or inclination to get those skills, which has made unemployment so much of a contentious issue. Edit: \"But we've had this technology for decades!\" But it hasn't been commercially available. We haven't been using it to automate jobs like we are now. \"This is a hyperbolic vision of the future of automation!\" On the one hand, yes. I did that deliberately to emphasize how changing technology is affecting the economy. On the other hand, Trump was elected on promises of employment. The problem is real. And even a few years ago self driving cars were still thought to be pipe dreams and no one was considering their impact on the economy. Now they're not even widespread and we're already talking about how they'll devastate trucking jobs. Automation isn't evil, it's inevitable. I don't think it's bad, but you can't deny that it's happening. 
Edit^2 : I'm not saying machine learning advances are the sole reason. Other people have brought up the aging factories taking the opportunity to replace old equipment with better technology that has already existed, and I think that contributes as well. At time of writing I had not seen any answers adequately delving into machine learning, though.", "Controls engineer here (the guy programming this stuff). * It got more flexible, so now it's cheaper to change a machine from making product A to making product B. If you were making small batches, changing products annually, or just juggling orders for different parts and you spent most of your time converting your production line, you would automate less. Now your machine automatically adjusts to handle a range of products with inexpensive change parts and actuators. Also, touch screen controls can be reconfigured way easier than buttons and dials. Think fun size candy bars vs regular or king size and Snickers vs Payday; you don't want a dozen different machines that only handle one thing, you want an easily adjusted machine that does all of them. * It got faster. High end automation is working in single digit microsecond reaction times and tens of microseconds for full program/network cycles completely deterministically. Back to candy, if you need 10 lines to fill the world supply of Snickers, that's a bit pricy compared to two that can fill the world supply of Snickers and Payday and have time to cover 100 Grand too. * Motion is way better. CNC is fine and all for hard coding a path for a tool, but multiple levels of electronic camming with profiles calculated on the fly is normal now. Robotics is easy now and when you add machine vision, it replaces anyone that ever picked stuff up on a production line. The tools and program functions to do this stuff exist and are easy and flexible to use (on some automation platforms). * Machine vision exists now. Cameras are better and the software you use to program them is better and more versatile. We aren't checking if the print is good enough for OCR or finding where a pancake is on a conveyor anymore. We're making complex visual analysis. Self driving is whole other level with mixed vision and fucking L~~A~~*I*DAR. * It costs less now. To the point there are clear ROI for all but the most complex, low-skill work. I see people doing things they think are just too hard or expensive to automate (driving is a good analogy here) all the time... While I install and test their replacement that will pay for itself in 6 months. Basically, the limit to automating everything is time to figure out and write the program; mechanically, humans are pretty easy to replace and processor speeds are no object. EDIT: *LIDAR, I'm in machine/process automation, not self driving or defense. I just assumed they replaced \"radio\" with \"laser\", but apparently it's \"light\", resulting in LIDAR instead of LADAR.", "There are three major divisions of work: **Primary** (food production, raw resource extraction) **Secondary** (processing and manufacturing) **Tertiary** (services and transport) The primary sector went from 99% of people's jobs to being less than one percent now because of *mechanization*. A man on a tractor can feed hundreds. That was ok because people could move from farming to the secondary sector to process this new wealth of resources. The secondary sector then became almost completely done in by advances in *industrialization*. 
One plant conveyor belt with ten workers can produce more tools in an hour than a craft shop could make in a month. That was ok because when these manufacturing jobs dried up we moved up to the tertiary-focused economy: serving all these goods and delivering expertise to others. Now, *automation* will take away the last of the secondary sector, and many of these service and transport jobs. Machine intelligence will take away the jobs that require expertise but not a lot of creativity (already developed techniques like surgery, routine taxes, filing claims against parking tickets, etc). There's no real \"fourth level\" to go into. The only fourth level is research, development, and novel engineering. Obviously most people aren't suited for these jobs; even if we let everyone have free education to the PhD level, most would not pass.", "You are right that there has been a general push toward automation, particularly since the industrial revolution. In general this has improved quality of life. It is also true that since then there have been people worried about what automation was going to do to jobs, and jobs adapted. There does seem to be a difference in what is happening now in both quality and quantity. In terms of quantity, the speed with which individual technology is threatening to wipe out huge swaths of the labor market is basically unprecedented. The breadth of the kinds of automation that are quickly becoming available is as exciting as it is frightening (computers are already writing short journalism pieces, driving cars, trading stocks, assisting surgery, etc.). The idea that \"computers will help us do the lifting but humans are always going to need to do the thinking\" is looking more and more naive. Perhaps most importantly, the gap in skills necessary to take the \"next job\" is growing. It is easier to teach a former hand-sewer to operate a bobbin on a loom than it is to teach a former long-haul trucker how to be a coder programming that truck. This is causing fear of a thinning out between the highly skilled and the low-wage worker. Or 50 years from now we could be looking back on this moment as the turning point toward a golden age in humanity. Telling the future is hard. Edit: this got a lot bigger than expected. So to respond to a few comments. 1) This isn't prescriptive, just descriptive. I don't advocate \"holding back progress to save jobs.\" This is just a description of why people are concerned. It is easy to wave one's hands and say \"well, yeah, some people will be hurt and they'll just have to get with the program,\" but that ignores the social and political influence that that hurt and fear have. 2) I did not mean to minimize the huge disruption that was the industrial revolution. As a commenter pointed out, you had about 25% of the country change jobs (using the responder's figure) and living location in about 3 decades. It is hard to tell if automation will have that big an effect. But if you are looking at 2050, it is very possible that we will have automation solutions for every current job that involves driving, fast food, the supermarket floor, manufacturing, increasingly detailed physical manipulations, and cooking, as well as increasingly brain-intensive processes like banking, surgery, writing, accounting, etc. It is really hard to tell what will catch on and what won't. It will also be interesting to see if remote work will do anything to change the past century's flight to the city or not. 3) As pointed out, this tech also has an enabling effect. 
CafePress let everyone be a t-shirt manufacturer, Squarespace lets people get simple websites up. 3D printing could drastically lower the bar in small-scale prototyping and boutique manufacturing. 4) Basic Income is one solution people go to; greater socialistic tendencies are another (and related to the first). That's possible. It's also possible that the next wave of jobs we haven't dreamed about is coming. Schools in 10 years could be teaching Arduino programming in middle school and graduating students who have to take 1 foreign language and 1 computer language as a basic requirement. Girl Scouts of 2050 might be able to take a flavor profile and custom-build your personal cookie, delivered by drones (though if there is one thing we've learned, it's that the flying car is never gonna happen).", "I work for a medium-sized, family-owned company. The same family has owned it for over 50 years, the founder still comes into work almost every day, that sort of thing. Really good people who really know their industry. Our product has always been made with *machines*, and while the industry has been slowly moving towards automation as technology has advanced, it didn't change the fact that these machines are huge, heavy, and expensive (millions of dollars with little to no automation). Not the kind of thing you could just replace. You get good crews, you make improvements where you can, but you still have to have five or six guys running each machine, and since everyone in the industry is doing the same, you can compete just fine. Over the last 10-15 years or so, though, that has started to change. These big, expensive machines are getting older, and they are breaking. When they break, an arms race of sorts starts to emerge. One company will buy a new incredibly fast machine that is extremely automated. One or two guys run the whole thing that six used to, and they still output twice as much as the old machines. A couple companies do that, and their labor rates drop like a rock. They charge less. Now the next time something gets to be even slightly screwy, EVERYONE just wants to replace it with one of these fast, super efficient machines. It becomes a cycle. Your competitors are getting faster and more automated, so you have to as well. Now back to the company I work for. We are very much behind the times. Being the family-owned, good-natured people that the owners are, they have hesitated to modernize and put people out of work. Sure, we have a few new machines for additions and new product lines, but we are also still using machines they bought 50 years ago and along the way. However, it is getting too expensive for them to run those anymore. Our competitors, and even other product lines in our own company, can run the product for half the price! So while they have been squeezing every ounce of life out of those old machines, there just isn't anything that can be done. When we sell product made on those machines, we effectively break even. It keeps the crews employed, but we don't make any money. We make money off of the automated operations. As time continues, and people retire or quit, or the machines just break, we will shut them down for good. If that doesn't happen in the next five years, we will probably be losing money on those machines, so we will be forced to shut them down and find other work for those employees. The point of my little anecdote is that we are at a tipping point. 
It's not just big automotive companies or high-tech manufacturers that have begun automating, it's the companies that make commodity products, like boxes (which is what we do). There is probably a box plant in every city and town in this country. Thousands of them. They all will either be extremely automated, or they will close their doors in the coming years. The pressure from the larger companies has made the midsized companies automate (my company), and in turn, we will make the small companies go the same route to compete and keep up. Any that can't afford to simply won't make a profit and will go under or get bought out for pennies on the dollar of what they were worth in their prime.", "**ELI5 answer:** Think of careers as a ladder where you start at the bottom and climb up steadily. If a robot removes the steps at the bottom and steadily moves up the ladder removing steps, eventually only the tallest people can reach the lowest remaining step to start climbing. **Non-ELI5 answer:** As jobs are automated, the average minimum skill level for a decent job can go up. Consider self-driving cars for example...what career can a mid-40's truck driver switch to when he's automated away that pays as well as driving the truck did? Imagine that cashiers, bank tellers, etc. are replaced by AI. These are low-skill jobs that many people work, and there's not much for them to switch to when these go away. Even if humanity as a whole can adapt, individuals are often greatly impacted and left with no way to adapt. Making it worse...it's unclear if humanity as a whole can adapt. Middle class wages have been relatively stagnant while productivity has increased and other income levels have seen increases since the 1970's. Automation is likely partly to blame for this (there are a lot of other reasons also). Finally...jobs that previously seemed safe from automation no longer seem that way. Much of what many accountants do can be automated and is currently (my wife does this for a living). Automation is steadily chipping away at clerical and legal research jobs. I personally automated away entire positions at my previous job in photometry research. Source: read stuff and my wife and I both automate people out of jobs for a living", "I've commented on this before. I work for a large industrial forged steel products manufacturer located up in New England. I'm stationed in the O & G capital myself. Over the past few years I've been with the company the main factory has gone from three shifts of ~100 people to 2 shifts of maybe a dozen. The error rate has decreased, production has gone up, and their favorite part they wont admit? Robotics aren't part of the union. Although robotics may have had a lengthy transition into the market (mostly due to cost I imagine), the 'late adopters\" are now investing and it's killing jobs. Realize, economics has a trickle down effect; the factory workers get laid off from the small town factory that *was* the #1 employer in the city. Those factory workers aren't buying cars from the local dealerships, they aren't going out to eat supporting the food industries, and they are cutting back grocery bills struggling to survive. In response to that, the car salesman, waiters/waitresses, and grocery store owners are doing the same thing caught in the cycle. It's killing U.S jobs like shipping jobs to China did back in the 70s and 80s. The worst part of it all? 
The rich get richer, the poor get poorer, and there's nothing you can do about it.", "The number of things you can automate is drastically growing, since we are no longer limited to automate the \"muscle\", we are now starting to automate the \"brain\". In the past we had machines and robots already, but those were limited to precisely repeating tasks. All their movement was either pre-programmed or hardwired into the mechanics. If something unexpected happened, they couldn't react, they had no sensors or cameras that could tell them what is going on or ways to react to it. So you always needed humans around to monitor them and do the tasks that the machines couldn't do. If you look at those \"How stuff is made\" videos from the 80s you always see humans around the machine, as there are always steps in the production that the machine can't do. This is changing now. Better cameras, sensors and AI means that we can now automate those last steps that still required humans. The robots can now [react dynamically and are no longer required to follow a hardcoded track]( URL_2 ). That kind of use is however still limited to the factory and not quite that revolutionary. Where it is starting to become really interesting is outside the factories. Most obvious this is with self-driving cars. If you wanted to automate transportation in the past, you could, but you had to lay train tracks and have big control centers regulating it all. Today you can just take a car and have it drive itself. You do not need to change the environment to suit the machine, you can put the machine into a human environment and it can work it's way around. This open up a lot of areas to automation that historically didn't allow it, since there were no robots flexible and safe enough to handle those environments. It also doesn't just stop with robots, a lot of jobs are already digital in nature, humans entering data into the computer and such, all of those jobs are open for grabs by AI, which has made rapid progress in the last few years and started to match human level performance in a lot of tasks. AI might still be a few decades away from passing the Turing test, but for a lot of jobs you don't need something indistinguishable from a human, but something that is good enough for the specific tasks and AI starts to be getting there. All that said, it's worth keeping in mind that such revolutions in the job market don't happen overnight. A technology that has been demonstrated in the lab or on a trade show might still take decades until it is common place in every factory and often it might require to build a new factory from scratch to take full advantage of the automation that is already possible. When you look at newly build factory lines, such as [Steam Controller]( URL_0 ) or what [Tesla is doing]( URL_1 ) you will see a lot more automation than in factories that have been around for decades. **TL;DR:** We are approaching a point where the machine can do almost all tasks just as good or better than a human. Thus making the human unneeded on the job market.", "Automation Engineer here. Actually more of a robotic engineer as of late. From my experience, the companies that have a lot of robots and automation are doing really well. So well, that they are hiring more drivers, accountants, engineers, logistics, etc. I think it is creating more jobs here in the US. Instead of shipping the business overseas for cheap, we can manufacture it here for cheap with robots. There is a shortage of engineers in all disciplines in my field. 
We also need more electricians, plumbers, machinists, etc. I also think that it is political. The super rich do not want minimum wage raised. It is a fear tactic to keep the Walmart and McDonald's workers from asking for healthcare or raising the minimum wage. If they keep people thinking that a robot can do their job, then they won't complain about getting paid so poorly. I personally know of a major pizza chain that tried to use a kiosk system that made pizzas on the spot, without humans. It failed miserably. I believe companies are starting to realize how much they actually need the poorly paid humans.", "Worries are like fads: people obsess about them for a while and you see them everywhere, then people find something new to talk about. The worry people have about automation isn't anything new, as you suspected, and it's been going on in one form or another since the industrial revolution. You've probably heard the derogatory term \"Luddite\" before. Luddites were actually an association of tradesmen during the huge sweeping changes that occurred as factories and farming began mechanizing. They had worries that machines were going to replace people and take jobs away; sound familiar? Instead, mechanized farming and large scale manufacturing made many previously unobtainable items affordable for the average person and helped to usher in the middle class. The manual labor jobs went away but were replaced with better jobs that paid more and didn't require people to work themselves to death. Technology is scary because it has the ability to enable sweeping changes in every aspect of our society and change our daily lives, our jobs, and even our health. While automation will remove many jobs, the jobs were not the best ones to begin with. It also creates new jobs. Large businesses used to have mailrooms that handled all communication within the company and were staffed with hundreds of people all sorting and binning internal memos. They used to have phone operators connecting the calls from office to office and outside lines to desk phones. There used to be elevator operators taking people to different floors. Automated phone switching, email, and automatic elevators removed all of those jobs. But they created telephone engineer, line operator, exchange administrator, LAN and wiring tech, network engineer, elevator control repair, and tons of other jobs that pay better and are more fulfilling for the people doing them. Flipping burgers, sweeping floors, driving people around in a taxi: these are all jobs that suck, just like being an elevator operator sucked, and while those jobs go away, many new jobs will replace them. Automated car infrastructure engineer. Automated custodial maintenance techs that service and program the cleaning equipment. Etc., etc.", "Work produces value. Luxury time also has inherent value. As automation takes over more and more essential jobs, the marginal value of adding another hour of labor to the economy decreases (farmers produce more value than Walmart greeters). But the value of luxury time stays the same. For generations, that was fine - even as automation increased, there was still enough available labor that produced more value than an equivalent amount of free time for most people to be employed. But automation is now becoming so ubiquitous that the marginal value of adding another hour of labor to the economy is dipping below the value of adding another hour of luxury time. At this point, it becomes almost impossible to make it 'worth it' to hire more people. 
Theoretically this is great - if we're producing so many goods and services that it's not worth it for people to work more hours, then we should all be able to work fewer hours and enjoy the same standard of living. Unfortunately, our economy isn't set up like that - if you don't work 40 hours you don't get a living wage, and there's no mechanism for everyone to simultaneously lower their labor output so that everyone can stay employed. So basically, one of the foundational assumptions of our economic system - that there's infinite valuable work to be done - is going away, and the system we have now doesn't function correctly when that happens.", "I can say that I work in automation and it's been pretty crazy lately. I see shit and think \"wow, this is really ancient. How did they get by with this for so long?\" And then I realize that stuff was from 10 or 15 years ago. Automation from the mid to late 90s is scary to me. The amount of technology in the last few years is amazing. The efficiency that is possible now is mind blowing. No real legitimate business can afford to go without automation now, and those that do are missing a massive opportunity. Our work pays for itself, and we give empirical proof of that.", "I think the best way to get a grasp on the situation is to look at automated vehicles. Taxi drivers, semi drivers, delivery drivers, industrial truck operators, mailmen, to name a few. You also have to consider that car accidents will be a thing of the past, so there will be a huge drop in the need for traffic officers and first responders on the roads. Even the ambulances might be driverless, allowing for more space for medical equipment without a need for a cab. Then look at the average person's relationship with a car. The time of families needing two+ cars is a thing of the past. The car takes Mom to work then returns home by itself to take the kids to school. The car picks Mom up from work at a designated time. Now imagine even further in this scenario. Who's to say that a family will even own the car? They very well might opt to lease / rent it. Because there won't be a need for the car to sit at Mom's workplace parking area. There is no labor cost like there would be if you were paying a taxi to pick you up. Renting might very well be cheaper than owning. Then you get to avoid all of the maintenance aspects of the car. If it breaks down the company sends another to pick you up. You won't have to pay insurance either. You're not driving, so you're not liable for the actions of the car. The manufacturer will most likely be responsible for that. You don't own it, so you won't need to insure a 30k+ item sitting in your driveway just waiting for a tree to fall on it. Think about how much the normal person actually uses a car on a daily basis. If this is how things go, there won't be a need for auto parts stores / service stations, seeing that the company you have a contract with will be responsible for all of that jazz. No more gas stations. If my Roomba can auto-dock to a charger once its battery is low, there's no reason the cars of the future can't do it. No need to pay someone to plug a car in. Because let's face it, electric is the future. Heck, they are even making tiny robots that can tow cars out of parking garages. You won't even need a tow truck except for very unique situations. URL_0 Driving will be reduced to a recreational activity on designated roads / tracks. My 4-year-old son might never have a license in the traditional sense. 
Think of all the jobs that will just vanish completely.", "Automation was expensive and limited. It used to handle only very repetitive tasks that might require precision but could be done blindly. For example: \"put this piece at location XY\", \"move arm to XY and twist\". Then some computer vision started to appear: a simple laser locator that could find the edge of the thing being made, which let the machine calculate its exact orientation, allowing some level of misalignment to be corrected. Later on, they added cameras; now the machine could detect faults and allow for more precision while assembling (as the parts can be slightly mis-manufactured and still be machine-assembled). Until recently, it wasn't much of an issue: those machines were very expensive, and the labor rate was quite low. A single machine could easily be $300,000 or more, and last maybe 10 years before it was made obsolete by new technology. They were relatively slow compared to today's machines. Now, those machines are sometimes available for less than $50,000, and they are way faster than before. The more expensive ones now can have crazy computer vision and extreme precision, and are very fast. They may be $500,000, but they replace maybe 20 workers. Let's say $10/hour, 40 hours/week, 50 weeks... that is $20,000/employee, or $400,000 of labor replaced. Notice how close that is to the price of the machine? But hey! It also costs the employer lots of money just to have employees, so you get even closer to the price. Want to add more metrics? It doesn't get sick, doesn't make mistakes, doesn't screw up, takes no lunch break, and can work double and triple shifts. And guess what! More precise than a human! So, what changed? The cost of the machine vs. the human, and also the flexibility of those machines.", "It has never been as efficient as it is now. Before, job replacement ratios were very low. Technology replaced a few jobs, but a few new jobs were created. Roughly a balance. The problem now is that technology is so advanced, it is replacing jobs far faster than new ones can be created. It isn't a group of horse caretakers being replaced by a car mechanic, it is a software engineer replacing hundreds, sometimes thousands, of people in a given field. Our current economic model isn't designed to sustain changes like this. Changes like this, with our current economic model, will lead to massive unemployment and a huge wealth disparity. Major revolution needs to occur, but our current society is easily placated through a variety of methods which most are unable to recognize.", "As an engineer in that field, I might be able to shed some light on the subject. Automated machinery in the 80s got rid of a lot of \"dumb\" labor. While it's true that things like PLCs and robots have been around for decades, they had the limitation of only being able to do one thing day in and day out. A robot could only pick up one object from one place and set it in another, or weld a specific pattern. If any of that changed, it was a big pain in the ass and was really expensive. Automation was high production, but low adaptability. If you were a factory pumping out 5,000,000 of the same widget a year, you automated almost everything you could, because it was worth it. But if you put out limited production runs of a bunch of different products, automation offered less of an advantage to you. 
The modern trend of lean manufacturing pushes more towards limited production runs of a wide array of products (supply being determined by demand), which is in opposition to the mindset of the 1950s and 60s, which was \"push\" (supply not being determined by demand: produce as much as you can now, build up inventory, and worry about demand later). Humans are low production output, but high adaptability. So at first, it seemed like human labor was a better fit for the Lean Manufacturing philosophy that made companies like Toyota world leaders in manufacturing. There is one facet of automation that has seen huge advances in the last decade: machine vision. Machine vision is automated machinery using cameras to make decisions on-the-fly. With machine vision, automated machinery can be high output *and* high adaptability. Take, for example, a job where someone picks parts off a conveyor belt, inspects them, and then packs them in a box if they're good and puts them in a reject chute if they're bad. Without machine vision, you *have* to use a human being to do this. But with machine vision? You can completely automate this process. Mount a camera on the robot and now the robot can recognize the parts, count how many there are, see what orientation they're in, and adapt its positioning program *in real time* to compensate. It can now pick up those parts no matter where they are on the belt and no matter what orientation they're in, put each one in front of another camera, spin the part while the camera checks for defects, and then make the appropriate choice, all automatically and without the need for a human. Best yet, the robot can't get tired or bored and start making mistakes. It will do its job the same no matter what time of day or day of the week it is. It doesn't worry if its wife is sleeping around on him, or if its kids are being bullied at school, or if it's going to have to ship its oldest kid to military school, or think about how great the fishing trip this weekend is going to be. It doesn't get sick, doesn't need breaks, doesn't need to sleep, doesn't worry about life, doesn't have distractions, etc. Another example. Let's look at a factory where people assemble components manually. They have a station set up with fixtures and the components to assemble a part. Before cameras, you had to hire very good people who wouldn't make very many mistakes. Those people were very hard to find, and so you had to pay them a lot of money. Supply and demand. Now, add machine vision to those stations. Now, the vision system does all the inspection and makes *sure* the operator assembles the part correctly. If the operator misses a gasket or a screw, or forgets to plug in a component, the camera will check, see that those components are missing, and refuse to release the part until all the required pieces are present. Now, *anybody* could do that job after a few minutes of training. You don't have to get really good operators that don't make a lot of mistakes because now the machine catches the mistakes and doesn't *let* the operator assemble it incorrectly. The qualifications to do that job went down, and so you have more people to choose from, and thus the wages for that job can be lowered, because there are more people willing and able to do it. Supply and demand. Machine vision systems have been around for decades, but until very recently they were extremely expensive. The cheaper systems were pretty low resolution and were dodgy. 
You needed a really good programmer to get something usable out of a cheap camera in the past. But now, affordable, smaller vision systems are becoming more commonplace. Wherever they're not replacing operators, they're lowering the skill required from the operators and thus lowering the wages for that job. If you can put a $2,000 camera on a machine and pay someone $1,000 to program it, all you need to do is lower the annual salary for that operator job by $3,000 to have a one-year ROI. So, in short, to answer your question of why people are starting to freak out about automation now: smart vision systems getting cheaper and more accessible.", "Because up until now, robots have been mostly replacing factory workers, a poor and voiceless demographic. But self-driving cars are coming. In some US states, truck driver is in the top 5 most common jobs. Almost 3 percent of all working Americans are drivers of some sort — more than 2 percent are truck drivers, 0.4 percent are bus drivers and 0.3 percent are cabbies and other types of drivers, according to Census Bureau occupational data. Self-driving cars will cost 1.7 million people their jobs. Suddenly, the impending job shortage is very real and personal and coming very soon.", "For what it's worth, I have an answer that's mostly contrary to most of the other posters, even if it's super late (4:30EST atm). It isn't. It's the same as it's been for years. It's not different except in media coverage. I will give you examples as an automation engineer in the Midwest auto industry. For several decades, automation has been taking over production. For 15 years or so, it's had a major effect on the economy out here. For the past 5 years, it's been the *biggest contributing factor to new jobs* in the industry. Note the language there, though. As far as jobs go, automation is a bigger deal now than it ever has been, although it's increasing at about the same rate as it has been for 30 years. In the 80s, some QA guys lost their jobs to a camera and a computer. Then a team of eight welders was lost to a team of two machine operators. Then a materials department of 50 people was reduced to 30, thanks to improved logistics handling in warehouse software. This has been increasing at a pretty steady rate for many decades, but only a few jobs at a time... most often even as slow as manufacturing production, which means instead of hiring 150 people you hire 90, which is much more difficult to notice as opposed to firing 5000 people and taking in 200 robots. Please note this is just the experience of an automation EE in the auto industry of the Midwest area, and may not apply to your manufacturing area.", "AI is making the difference. We are one technology leap away from AGI, Artificial General Intelligence. When applied to robotics the ramifications are simply staggering. A robot that can make a jacket, from base materials placed on a table, can make anything. At that point 80% of the world is out of a job. Not over the course of 30 years, but 10. All non-creative labor will be performed by robots, and that includes the service industry. But it doesn't stop there. Full Automation will eventually lead to the point where you no longer need human labor to make more robots. And I don't mean making them better, I mean every step in just making them. Prospecting for minerals, mining, smelting, crafting, transportation, assembly, etc etc. That means that in terms of human labor there is 0 difference in making one factory and making 10 thousand factories. 
As long as you have the resources. Which in turn makes space industry inevitable, as we will quickly run out of resources on Earth, but those kinds of rare minerals are found a plenty in the asteroid belt. So that means one of two possible futures. 1. The corporations maintain strict control on robot technology, under the guise of copyright ownership. And the rest of us live in shanty towns while our corporate overlords live happily in district 1. 2. We maintain an open and free internet and either manage our own open source robotics that keeps up with modern technology. Or straight up steal the technology and put out for all to have on pirate sites. This allows all of humanity to benefit and it fundamentally changes human society to something like the Zeitgeist project. In short, keep the internet free, it really matters.", "You're right, automation isn't exactly new technology. *New* technology is expensive. *New* technology doesn't change our lives. What changes our lives is yesterday's cutting-edge technology becoming cheap and easily accessible. Cheap enough that it's cost-effective to start using it more often. This is where we're at now with many forms of automation. For example, the technology to produce a touch-screen kiosk where people enter in their orders at McDonald's has been around for easily two decades, it was just too expensive - paying a person $9/hour to do this was far cheaper. Today, such a device costs one-tenth the cost it did 15-20 years ago, people are looking hard at these devices to cut costs.", "Read title and was like... Automatons are a big deal?", "because its the first new jamiroquai album in 7 years, how is that not a big deal?", "Because innovation is never about the newest thing, it's about the thing that was new 10-20 years ago that is now cheap enough for real use. Automation in factories was just the beginning, it was clunky, and single purpose. Now we are getting to general purpose automation, where you can buy a robot that can learn to do more than one job, and teach itself to do it better. CGP Grey, a youtuber, has a great video about it called Humans Need Not Apply.", "Automation is suddenly a big deal because you frequent a site where the avg demographic is young, poor, idealistic and bends anti capitalist/Anti-wealth. [Source]( URL_1 ) Note that not one of the top comments cites anything other than anecdotal evidence and opinion. Automation has been going on since the dawn of time and it has been a net benefit for society. AI and Machine Learning are just the latest flavor/buzzwords used to instill fear and sow confusion. To understand the impact of automation, one only has to look to history. * Why did slavery increase after the cotton gin 'automated' their job? [Source]( URL_0 ) * Why did jobs and real wages grow in the wake of the Industrial Revolution? [Source]( URL_2 ) * Were there breadlines when Henry Ford rolled out the Model T and made buggy drivers and public transportation \"obsolete\"? * I am still waiting for the death of all retail jobs and economic collapse that the internet was supposed to bring. Automation, in the modern sense, has been happening for almost 200 years. While entire industries have been wiped out, our economy and real wages have increased significantly, as has our standard of living. In-fact, I would challenge all of the Nay-sayers to find me one historical example where a technology innovation led to a prolonged and widespread economic collapse. 
> But, but - this time is different, AI will automate everything - the machine will rule, we will become serfs to capitalist overlords. Every top comment has some flavor of this response; good in the sense that it is a red flag as to the bias of the 'analysis'. AI is just the latest step in a line of many. Jobs will never go away. In 1790 you had 3.9m people in the US. In roughly 200 years we added 300m people to our population and likely automated every single job that would have existed at that time, and did so on a scale unimaginable (see agriculture). Why then is the sky not falling with 30 or 50% unemployment rates?", "What has changed is the internet, which lets wrongheaded ideas like \"machines will take our jobs\" spread more quickly, and lets the people who believe in these ideas congregate and create echo chambers, where they become even more adamant in their beliefs, more easily. No evidence at all exists that technological advancement will reduce job opportunities. Remember, this is not a new theory... URL_1 > Predictions that automation will make humans redundant have been made before, however, going back to the Industrial Revolution, when textile workers, most famously the Luddites, protested that machines and steam engines would destroy their livelihoods. “Never until now did human invention devise such expedients for dispensing with the labour of the poor,” said a pamphlet at the time. Subsequent outbreaks of concern occurred in the 1920s (“March of the machine makes idle hands”, declared a New York Times headline in 1928), the 1930s (when John Maynard Keynes coined the term “technological unemployment”) and 1940s, when the New York Times referred to the revival of such worries as the renewal of an “old argument”. > As computers began to appear in offices and robots on factory floors, **President John F. Kennedy declared that the major domestic challenge of the 1960s was to “maintain full employment at a time when automation…is replacing men”.** In 1964 a group of Nobel prizewinners, known as the Ad Hoc Committee on the Triple Revolution, sent President Lyndon Johnson a memo alerting him to the danger of a revolution triggered by “the combination of the computer and the automated self-regulating machine”. **This, they said, was leading to a new era of production “which requires progressively less human labour”** and threatened to divide society into a skilled elite and an unskilled underclass. The advent of personal computers in the 1980s provoked further hand-wringing over potential job losses. And every time, the predictions ring hollow. We've had more automation - and particularly cognitive automation - over the last 20 years than in any other period in history, and this is the era that has seen the fastest wage growth in human history: URL_0 > **Progress in the global war on poverty** > *Almost unnoticed, the world has reduced poverty, increased incomes, and improved health more than at any time in history.* And the means of automation are not concentrating in the hands of a small elite. They are becoming increasingly widely distributed. For example, we've gone from 122 million people owning smartphones in 2007, to 2.5 billion people owning them at the start of 2017. All of this power and technology is now in the pockets of nearly half of the world's population. 
Contrary to the fear-mongering about automation being concentrated in the hands of a small minority and impoverishing the masses, it is becoming ubiquitous and raising the standard of living of the vast majority of the population.", "What is different now is that it is happening in more fields, and it is happening faster than it has ever before. Clerical and Service industry jobs are becoming automated.", "I am going to try not to have an overly long treatise on why. **There is a point where computers can not only do what a human can do but do it faster and better. We crossed this point for a lot of things and are approaching doing this for most things** In the past we were making automated **tools** to help at task", "Some of this may have to do with the fact that we are quickly approaching a point where we can automate away a few industries with self driving cars. Once self driving cars can be proven to be more reliable than humans you will see massive lay offs in ground shipping and transportation. This isn't like other fields that may slowly bleed jobs as machines become more efficient there will be less jobs for people. There seems to be a real line that once machines can be shown to be better drivers than people it is just a matter of automating away all the workers. Professional drivers who are laid off in mass will not be able to just get new jobs since their automated cars will already be made in increasingly automated factories and driving in many places is one of the highest paying jobs a person without an advanced degree can have, so the quality of life for those people will drop off. Not to get political but the changing economy is what fueled the rust belt to be the hot bed for a Trump. People who are pinched out by the changing economy felt like people didn't care about their plight and people who will lose their job to automation will be in a similar plight.", "It's the personal connection one has to an automaton, a cyborg or another machine that looks like us. Sure, machines contribute to just about every product we have these days and have helped move modern civilization more quickly to next level technology, but these machines are largely behind the scenes in factories. We only see the end product. If automatons began to be more used in society, there are two schools of thought. One, the human-nature of the beings (think of cyborgs or something like the T-800 from Terminator) would help us accept them working in very human-conditions and trust them in certain situations (like baby delivery, e.g.). On the other side, the uncanny valley says that having a machine look like us would cause an unsettled feeling and it would be better to have something look like Number 5 from Short Circuit in our daily lives. Either way, the use of these machines is going to be a big deal because automatons are more useful in service industry (like Taco Bell) versus the Amazon Warehouse, because one of their big benefits is the human-likeness. In the warehouse, you would just have Number 5 roll around, but at the McDonald's counter, you might have something that looked like the love dolls from Japan. So, no, automatons have actually not really been used to their full potential because their main potential is client-facing interaction like the service industry. If you are referring to automatons in factories, they'd be there largely as proofs of concept. 
Things are different now because automatons have finally become advanced enough to be useful where they are needed most.", "Humans Need Not Apply is probably the best video explaining this issue URL_0", "It's exponential. Think of it like an infection -- once technology opens up automation of a new task or type of work, it rapidly expands to fill that space, displacing human workers who used to do that task/work. It goes in \"waves,\" as new technology develops and reaches the point where it can take over a work space. It may seem like it's *suddenly* a big deal because computers are now getting advanced enough that tasks once thought too complex for anything but humans are being done more and more effectively by computers, and so we are likely nearing a next \"wave\" of whole categories of jobs being delegated to computers.", "Contrary to popular belief, machines have been creating jobs for a while now, but that trend is ending. Automation is suddenly a big deal, because we have passed the tipping point, and the machines are now consuming jobs. The only reason they haven't consumed more is because the corporations are using people in 3rd world countries like slaves, and it's cheaper to \"pay\" the slaves than automate the work they do. There is an article on the front page right now that shows factories in China are now replacing workers with machines, and getting huge increases in productivity and fewer defects. Self-driving cars alone look likely to threaten MILLIONS of jobs just in the USA. Watson, the computer that defeated the best Jeopardy players, is now being used in medical diagnosis. It will only be a few more decades before machines are doing many, if not most, of the jobs we do today. Then what? How does that economy function? If almost no one has a job, then they don't have money to buy things, and the economy collapses. Of course, in reality, this will be a slow trend of job destruction, and the economy would decline with it.", "It's all media hype. Countless innovations throughout human history have put people, and sometimes entire industries, out of work. The cotton gin. The sewing machine. The car. The computer. The internet. The smart phone. Advanced electronics. New software. And guess what? When jobs are destroyed because one industry successfully outcompetes or makes obsolete another industry, those new industries create more jobs. Lots of times those jobs pay better. The two big ones even the younger generation should be familiar with are the internet and the smart phone. Stores with physical locations sometimes have trouble competing against online retailers like Amazon. And how many people do you see buy GPS units nowadays? They just use their smart phones. Heck, there were lots of reasons the Civil War in the United States was started. The big one was that the southern states were very invested in slavery and they felt their way of life was being threatened. But that threat wasn't just legal. It was economic too. The northern states had much more automation than the southern states, which relied on manual slave labor to get work done. The plantation owners with slaves were afraid of automation too. People are better off when we innovate and create new cool shit for people to use.", "When the Luddites first appeared they were put down and put in their place by the upper classes. The folks in the middle didn't care and were happy to accept that all the poor were just stupid and ignorant, because their jobs were safe. 
All was good except for the poor at the bottom, who got told to deal with it. Now those people in the middle have started to have their jobs threatened and have come to realise automation threatens the majority of jobs. Folks at the top making the big money will still be fine, so they don't care. The middle class people who work in IT don't care; they think they will have jobs fixing and programming the new machines. In my opinion the IT folks are deluded, but they're making big money now so they are not going to stop. Even self-driving vehicles threaten huge numbers of people's jobs and lifestyles. Who needs bus drivers, taxi drivers, lorry drivers, ships' captains, delivery drivers, and all the jobs they support like insurance sales and car salesmen? But folks are happy to support self-driving cars because they can watch TV on their commute. Those commuters need to wake up and realise they won't be commuting soon when their jobs are gone. Downvotes are inevitable for knocking self-driving cars, but it does not change the logic.", "A few reasons, but here are the biggies: * Job loss has become a very real issue for many people. Even as various governments publish employment statistics that distort the picture, many young and vocal people can see that jobs and job security are in increasing peril almost everywhere. * You correctly point out that automation is at least 100 years old and has been widely embraced already. But some more dramatic concepts such as autonomous vehicles and IBM Watson-style problem solving have become better known, causing imaginations to run with ideas of what the future might hold. * Citizens used to be more concerned about jobs and their fellow citizens, but many societies and individuals have turned cold to that. Unions and working people have been demonized. Once-honest work is now mocked, and a false goal has been planted that everyone should be a CEO or startup millionaire. * Consumers are more accepting of low-quality automated goods and services. Today's consumers even believe and spread a narrative that actual human service and craftsmanship isn't economically viable. They expect their service needs to be handled by an error-prone bot, touchtone phone maze, or outdated web site FAQ. And because their expectations are so low, they don't demand better.", "There is a genuine possibility that human brain labour as well as human muscle labour will become obsolete. This will create a huge problem for the way capitalism works... for you to sell, you have to have someone to buy. What happens when a very large part of the world is unemployable through no fault of their own? General practitioners, legal assistants, lorry drivers, baristas, accountants, journalists, anyone who writes Word documents - all ripe for automation...", "I'm too late so no-one will see this, but it's like this: In the past, power had to be completely centralised and rigid because, pre-printing press, ideas were incredibly expensive to transmit to large numbers of people. With the printing press and greater literacy, ideas could have much more significance, to the point of actually being dangerous to those in power, leading to democratic systems to keep the peace, as you could not subjugate an entire populace. They could go on strike and then you would be fucked, because if the work stops and people start getting hungry then the government gets overthrown violently. If we reach a point where the most important work is automated, then why should those with power care if the workers strike? Will there even be any workers? 
If powerful people don't *need* workers to keep everyone fed or to make things/fight, then the existence of those people is just a risk of revolt without anything in return. So why not just have a mega-genocide? What's to stop you? Reduce the operating costs of feeding \"useless\" people. Why not? Why won't this happen?", "An automation engineer from another thread explained the economics of it nicely: [Automation engineer here. Automation is difficult and expensive but labor savings more than justify it. Last year I installed a few machines that automated part dimensional inspection. The inspection process became much more accurate and the plant manager thanked us and bought us dinner for saving him from hiring an additional 800+ people to do a worse job. The capital ROI was just a few weeks. I do this dozens of times a year. It's here to stay]( URL_0 ) (credit u/lostmessage256). So it's really just a matter of: whenever a technology advances to the point where it can do a category of jobs more efficiently than a human, it takes over all those jobs very quickly.", "A lot of folks have covered the reality that machines are getting smaller, cheaper, and smarter. I would also submit that they are becoming *lighter* and *heavier*. Companies are exploring how they can deliver items with drones. Other companies are exploring how self-driving cars and trucks can deliver people and products without the need to sleep and on a moment's notice. As we continue to improve electronics, it will be possible for these devices to safely recharge themselves. In the IT industry, software and hardware have become more robust, with self-deploying servers, consolidation of hardware resources, and the outsourcing of support jobs. Retail is going online because it's more convenient to shop at home, which consolidates workers into warehouses. Even cutting-edge content delivery companies, like Netflix, are destined to roll back their mail-order DVD rentals. I remember growing up in the 90s, reading books that discussed how technology would lead to shorter work days and a higher quality of life. That's all speculative fiction for another timeline; people are now asked to work harder, to do more, while wages never kept pace. We're witnessing the tipping point of a technological revolution. It's fair to ask that we prepare to embrace it in a way that helps realize the dreams that we've had for decades.", "Automation is infiltrating the workplace in ways no one even considers. Sure, we know a computer could take our fast-food order, and now automated vehicles are catching up to the mainstream expectations of robots taking over. But apps and chips have been taking jobs right before our eyes for a while now. I lost my job to automation...11 years ago. Movie theaters used to use film. Film used to require a small army of specially trained projectionists to keep a movie theater running, especially the 10 - 20 screen buildings. But in 2006 my theater replaced all the projectors with fancy new digital ones. No more building the film reel from the small reels they shipped to us. No more unpredictable, yet always identifiable and fixable issues with film. Now it's always the same handful of glitches. And when these glitches happen we call a single technician who may not even live in the area. Tech support. Basically all the control was taken from us via computer chip. Around the same time Fandango became a thing, and then the tills were all digital, so one didn't even need to count change and people could skip the box office. 
In the time span of one year the classic theater turned into one big computer, and suddenly there were about a dozen employees where there only needed to be one. The building ran itself. It's not like any of us were paid living wages to begin with, but yeah, most of us were gone within the year. No more hours. Too much idleness leading to gossip and general boredom. The boss had been around for a while and he never really adapted, but did what he could for us, while losing the only gig he'd ever really known. And at the time, did anyone see what was happening? No. We were ecstatic about the new technology, and so were the customers. We felt privileged to usher in the future. So that's how we'll lose our jobs. Not with debate and cataclysm, but with smiles and cheers.", "[Humans Need Not Apply - CGP Grey]( URL_0 ) Probably the best explanation there is.", "Before, we replaced muscles with machines. Now we are starting to replace minds with machines.", "Automation is now software based, not hardware based. Meaning 1 computer can now control thousands of automated functions.", "Automation has swept through various industries and eliminated whole classes of jobs over and over. These waves come and go with jumps in technological capability or cost efficiency. Right now, the imminent reality of self-driving vehicles is threatening truck drivers, taxi drivers, delivery drivers, many shipping or postal workers, transit operators... the list goes on. A fuckton of people drive for a living and won't pretty soon.", "The biggest difference is that before the IT revolution we were automating away unskilled repetitive labor. Now we are automating away skilled jobs. It doesn't seem like much, but sites like WebMD, Avvo, PayPal, and Zillow reduce the demand for what are historically some of the highest-paying professions: doctors, lawyers, bankers, and realtors. Anybody not in IT or creative industries should be nervous... it's only a matter of time.", "LOL, first I was thinking, wow, Jamiroquai is bigger (Automaton is their new album, announced a week ago, I'm a huge fan!), then we thought, on Reddit, damn, 900 comments, must be a record, and then I read it and it's not about them, but about industry. Got me! When you subscribe to a new community, it populates your front page sporadically. TIL the world isn't all Jamiroquai fans!", "What's different is that we came out of an 8 year recession AND we haven't raised wages as a whole in nearly that amount of time. So many, many people have seen economic stagnation. That, PLUS reduced demand for labor from outsourcing AND from automation, has led to a permanent systematic reduction in employment in the USA. And it was an election year, so every problem that could be talked about and blamed on a candidate was. I think these all contributed to why, all of a sudden, automation is seen as a bad thing.", "Simply put: the transistor. We had machines and electronics in the days before the transistor was invented, but the transistor revolutionized computing and it has been evolving since. Now we can put a computer in about anything, and it's not brand new technology, so robots that build things in a factory aren't too expensive an idea and are a lot more manageable for someone trying to purchase the technology. 
I watched a video about automated robots that share intelligence and could learn how to do different tasks and problem-solve, then share the solution with other machines.", "* machine learning has come a long way, and can now do more complex jobs * almost all technology has gotten cheaper; 20 years ago it would have been insanely expensive to have touch screens for every supermarket station, now it's dirt cheap, which puts a lot more jobs at risk. Automated farming meant that more people get food and allowed people to seek out less back-breaking forms of labor. Taking away a fast food job, a tax analyzer's job, a marketing job, or any other \"more complex, more cushy\" job and replacing it with an algorithm can save businesses a lot, but comes at the cost of putting more people out of the \"better\" jobs.", "because originally automation was portrayed as being used to do the jobs too boring or too dangerous for people to do. So people would still be employed to help do a thing, and robots would do the stuff people can't/shouldn't do. Accidents still happened, but public opinion was for its good intentions. Furthermore, when businesses wanted to cut down on labor costs, they wouldn't 'automate more' but would instead deploy vast social engineering schemes and lobby governments to import more third world refugee unskilled laborers to drive the value of labor down. Poor people would get better jobs, middle class people would feel good about helping the poor, and the displaced workers would simply go to another factory/country or even start their own businesses with the money they had earned thus far. But now there's a push to get robots to do ALL the things, which leaves a lot of humans unable to find viable work to support themselves. Unemployed people are unhappy people, and unhappy workers inevitably start to do bad things that make more people unhappy. These unhappy people no longer have the ability to start their own businesses because only the already established organizations can afford the BILLIONS OF DOLLARS it takes to fully mechanize, leaving everyone not already rich totally fucked. Furthermore, it's a lot harder to win public opinion when you're removing people and replacing them with a robot, especially for really simple shit. The company can no longer pretend they are helping others; they are blatantly helping just themselves, and people call them out on their bullshit.", "There are two thresholds automation needs to overcome in order to replace human labor. Cost: it must be cheaper to produce and implement than human employees or there is no economic incentive to replace them. Capability: it must perform required tasks equal to or better than human labor, or there is a quality incentive to retain human labor. Automation technology in the early part of the industrial revolution and throughout most of the twentieth century was only capable of replacing routine, manual labor tasks. Examples include: skilled artisans, most agricultural jobs, factory jobs, etc. Pretty much any kind of rote, manual task could be performed better and cheaper through automation. As a result, employment in the agricultural industry went from a pre-twentieth century majority of the labor market to less than two percent today. Research the Luddites for an example of early resistance to automation. Most household and personal items (like clothing) are no longer fabricated by individuals, but by largely automated factories. 
In other words, the labor market transformed dramatically; but, on balance, automation technology had created more jobs than it replaced. This is the key point that is being argued today: That the nature of automation has changed in such a way that it is no longer creating additional employment opportunities for every job it replaces. While the early implementation of automation was only capable of and cost effective at replacing routine, manual tasks, the automation that is being developed today will be capable of replacing non-routine, cognitive tasks. Examples include: driving, writing (look up machine written articles), research and, perhaps most important, learning. That is all a preface. To answer your question, why is automation such a big deal? There is a funny statistic that I've heard from multiple sources, but could never track down the origin: every major revolution in modern history was preceded by a surplus of unemployed lawyers. While, on its surface, this serves mostly to poke fun at the trouble making wrought by idle lawyers, there is a deeper conclusion that can be drawn here. Regardless of your personal opinion about them, lawyers tend to represent the right of the intellectual bell curve in every society. Taken more broadly, this statistic could be restated as: every major revolution in modern society was preceded by a surplus of unemployed smart people. That implication is one of the major reasons why the automation of today is so much different than the automation of the last century. The old automation was only capable of displacing unskilled and uneducated members of the labor force. The new automation has and will continue to displace skilled and educated workers from the labor force. Some, perhaps many, will find alternative employment. However, there is a reality that really needs to be discussed more openly, which is: automation has enabled an economic model in which many (and possibly most) people are simply economically unnecessary. This brings up a number of existentially important questions, like: what responsibility does an economy have towards its useless, if not parasitic, members? Particularly when this segment comprises the majority of the population? What about when automation progresses to the point that no human labor inputs are required to run the economy? To what extent do our lives have purpose outside of economic production? Unfortunately, I feel that most of these questions will be left to the politicians to interpret. In the early twentieth century, Keynes wrote a great article titled \"Economic Possibilities for Our Grandchildren\". It is very interesting to read this article from the perspective of one of those notional grandchildren, and think about how much our society has gotten wrong over the past 100 years. Anyway, I hope at least some of this was interesting to you. The main concern about the automation of the future is that it will be displacing from the labor force the kind of people who can seriously disrupt society when left without gainful employment. But maybe that wouldn't be the worst thing." 
], "score": [ 5124, 2393, 1617, 821, 815, 380, 173, 151, 44, 41, 33, 32, 28, 26, 25, 22, 19, 18, 18, 16, 12, 12, 11, 11, 11, 9, 9, 8, 8, 7, 7, 7, 7, 7, 6, 5, 5, 4, 4, 4, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3 ], "text_urls": [ [], [], [], [], [], [], [], [ "https://www.youtube.com/watch?v=uCgnWqoP4MM", "https://www.youtube.com/watch?v=8_lfxPI5ObM", "https://www.youtube.com/watch?v=aPTd8XDZOEk" ], [], [], [], [], [ "https://www.youtube.com/watch?v=QXSJ7dprzf4" ], [], [], [], [], [], [], [], [], [], [], [ "https://www.asme.org/engineering-topics/articles/history-of-mechanical-engineering/how-the-cotton-gin-started-the-civil-war", "https://en.wikipedia.org/wiki/Reddit?print=no#Demographics", "http://webs.bcp.org/sites/vcleary/ModernWorldHistoryTextbook/IndustrialRevolution/IREffects.html" ], [ "http://www.csmonitor.com/World/2016/0207/Progress-in-the-global-war-on-poverty", "http://www.economist.com/news/special-report/21700758-will-smarter-machines-cause-mass-unemployment-automation-and-anxiety" ], [], [], [], [], [ "https://youtu.be/7Pq-S557XQU" ], [], [], [], [], [], [], [], [ "https://www.reddit.com/r/worldpolitics/comments/5s435c/chinese_factory_replaces_90_of_human_workers_with/ddccmet/" ], [], [], [ "https://www.youtube.com/watch?v=7Pq-S557XQU&amp;t=397s" ], [], [], [], [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5s64tr
Why is seawater so hard to convert into drinkable water?
Technology
explainlikeimfive
{ "a_id": [ "ddcoljv" ], "text": [ "It's just hard to do it cheaply. A lot of people think that membranes are the way to go because membranes can let water pass but stop other stuff. But you got to push the water through the membrane and that takes pressure which takes energy. But who knows, maybe someone will think of a cheaper way of doing it." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5s7vcg
Functional programming vs other types of programming
Technology
explainlikeimfive
{ "a_id": [ "dddkbb4" ], "text": [ "\"Normal\" (imperative) programming is based on an idea that you tell the computer what to do as a sequence of steps like: read a number add 1 to that number print the number on screen ... Functional programming is based on the idea that a program is an evaluation of a mathematical function. Mathematical function is a \"box\" into which you put a value (for example a number) and it spits out another value. It does nothing else than that and it has to spit out the same value every time you give it the same input value. These are the restrictions a mathematical function has to have. Now in \"normal\" programming you have functions too, but they are not mathematical functions, they are more like subprograms that can do other things than just take and return values. In functional programming, everything is a strictly mathematical function that is only allowed to take a value and output a value. The program then looks something like this: output = convert_number_to_text(add_numbers(input,1)) So functional programming is basically restricted from \"normal\" programming. But why would we restrict programmers? Turns out it is very good to be restricted like this, because: - The programs are very bug-proof because every time a programmer uses a function, he knows it will only return some value and do NOTHING ELSE. In a normal programming language you never know if a function changes some data in the background or something, which causes many bugs, which may also be very hard to find and fix. Basically with restrictions come guarantees. - The computer knows that the functions aren't allowed to do certain things and so it can optimize the program to run faster. For example if you call a function with two parameters a(b,c), the evaluation of b and c can always be done in parallel on different CPU cores, because they are completely independent! - The program has interesting mathematical properties, it may be analysed in certain ways, mathematical proofs of its correctness can be made etc." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5s89w0
Why is it so hard to create emulators? Emulators like Citrus are coming along, but the ones for the Wii and PS consoles are still in heavy development.
Technology
explainlikeimfive
{ "a_id": [ "ddd8uws" ], "text": [ "Operating systems like Windows and Linux are designed to work on a wide variety of hardware, so moving from one CPU or ram set to another is not big deal. Operating systems for static hardware systems like gaming consoles are entirely different; the OS is designed from day one to work with exactly one set of hardware and often includes checks and calls that expect extremely specific bits and pieces to be present, and fail if they are not. Emulation is essentially tricking an operating system into thinking it has one set of hardware when it is actually using another, and getting that trick to hold up is hard, ESPECIALLY when the hardware the emulator will be running on is changing too. The Wii and PS consoles are tricky examples because they run on proprietary hardware for the most part, often involving intricate and unique designs, ie. the ram setup in the PS3. Nintendo consoles use older styles of CPU that have been overclocked into oblivion, with both of these presenting challenges to emulation. It's a tricky issue, and getting the OS to continue working on varying hardware is very hard." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5s8i03
How will autonomous cars handle not being able to see the road, ie snow
Technology
explainlikeimfive
{ "a_id": [ "ddd7gn7", "ddd7tmq", "ddd7sj9", "ddd8c1s", "ddd7ftb" ], "text": [ "It certainly will be harder for an autonomous car to do this than normal driving. Early autonomous cars will likely have a failsafe mode, where if they can't handle the road conditions, they will pull over and return control to the driver. Also remember that these cars will have a lot of advantages, like 360^o vision, radar, IR, and future enhancements might allow them to talk to the road and to each other. What might appear as a white out to us would be no different than a sunny day to them.", "Right now it's not advised to use autopilot in snow. With that said there are plans to where it will be ok in the future and there are different ways to do it. One is to create detailed surroundings which include data sent from the cars themselves when conditions were not snowy. This allows the car to look for multiple different markers (curbs, telephone poles, signs, and other landmarks) to determine its location and drive in the same way as previous cars did in good conditions. This of course has issues with rather rural areas with little historical data and in those instances the car will advise not using autopilot.", "All these cars have LIDAR and radar to sense other cars and things around them. They use cameras to see road signs and lane lines. So you are right that the lines on the road disappear as far as the cars sensors are concerned but like us the car can see things like curbs and sign posts. Basically as the tech becomes more prevalent cars will map out road sign posts and curbs using LIDAR and radar. When the camera cant see the road it will match terrain contours, road posts, curbs and anything else it can detect with radar and LIDAR to previous information or maps of the given GPS coordinates. The programming has been used in other areas for years so its not super difficult just tweaking it for autonomous vehicles and creating usable databases of all the roads is kind of a big task.", "The same way that you do when you can't see the road - you do your best to get by with other visual cues other than the road itself, and information you have access to. Vision is just one aspect, there are many cameras of many types as well as recieving other kinds of data i.e GPS, lidar, physical sensors. The software controlling these things are getting better also. You don't just track the road itself, you track all objects and their spacing and their motion and parallax, their shape, how they're lit etc etc. Just like in your brain there are areas that track edges in your vision, and within there there's bits that deal with angles of those edges and so on, in addition to areas that track colour, brightness, occlusion etc. all of this information is put together into a model that you use to make decisions. We're improving it's models of the world beyond simply tracking the floor.", "Unless road standards are followed perfectly and the roads mapped exactly, the extreme conditions will still defer control to the driver. The typical rules the car has to follow will not be applicable in extreme conditions. You could see it the same as a car driving across the desert. Where are the lanes? Snow could additionally interfere with sensor functionality depending on the type." ], "score": [ 29, 15, 6, 6, 3 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5s8t7h
Why do we still load fonts locally for websites, when bandwidth can support entire video ads?
I don't understand why i can visit a website in this day in age and still have un-displayable characters be a thing, especially considering the bandwidth madness that advertisements go through to track and load on your page.
Technology
explainlikeimfive
{ "a_id": [ "ddd9cm8" ], "text": [ "The HTML language is designed to save bandwidth as much as possible from the ages when it might be a scarce resources. Fonts are a local resource which are not designed to be updated and removed on the fly by web browsers. The designers of the website should use web-safe fonts or make the characters into images." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5s9l9p
how does my phone know how much percent of battery I have?
Technology
explainlikeimfive
{ "a_id": [ "dddpv07" ], "text": [ "Almost all smart phones use two different methods together. The most simple is voltage. Voltage is not a very good indicator of charge in li-ion batteries used in phones because the voltage does not fall at an even rate when you use them. Voltage measurement is used as a fail safe. It's very bad for a battery's voltage to get too low. This is why sometimes when you get your phone cold in the winter it drops dramatically in charge percentage instantly. What has happened is the battery's voltage dropped when the phone got cold. Now the phone has started reporting an estimate based on voltage. The second and much more accurate is coulomb counting. This is like counting the individual electrons as they go in and out of the battery. It counts them as they go in (charging) and as they go out ( using it not plugged in). This allows the phone to know exactly how much energy it put in and can accurately guess how much is left. Just like putting gas In your car. The electronics that do this are actually called 'gas gauge' chips. The phone will have programming in it that will use both methods together to track and report charge state to you. Source : electronics engineer that designs things w li-ion batteries." ], "score": [ 23 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5saodd
Why is self checkout technology so terrible?
Technology
explainlikeimfive
{ "a_id": [ "dddnj6f", "dddsyni", "dddt38g", "dddkk2f" ], "text": [ "The creator is still frustrated. Planet money made a podcast about it was informative and relatively funny. Here's a link out of pocket cast to the episode if you wanna check it out. URL_0 You can also go to the npr podcast website for planet money and look for the episode called self checkout", "Place the item in the bagging area. Unexpected item in bagging area. An attendant has been notified to assist you. Place the item in the bagging area. Unexpected item in bagging area.", "Checkout machines don't have 'POS' as their acronym for nothing. I wish corporate would pay attention to what its actually like to work with them. :( Source: I work in a pharmacy. Lol", "all of your issues hinge on the fact that they weigh the output. but that is the only way for them to track that its been scanned. Of course you could leave items in the cart and never scan or bag it, but thats far more easy to identify to loss prevention than to mimic the motion of scanning and drop it into a bag." ], "score": [ 15, 4, 3, 3 ], "text_urls": [ [ "http://pca.st/c6hJ" ], [], [], [] ] }
[ "url" ]
[ "url" ]
5sb0q8
Why is red eyes in pictures not as big of a problem as it used to be?
With more selfies being taken than 10 years ago why do I never see red eyes in pictures anymore? Every photo editor used to come with a red eye fixer, but it doesn't seem as common anymore.
Technology
explainlikeimfive
{ "a_id": [ "dddnsp1", "dddpsb9" ], "text": [ "A selfie generally doesn't use a flash. Flash photography, especially from a flash that is positioned directly next to the lens, is still very much a problem for red-eye.", "Now that most cameras are digital, most pictures aren't taken using a flash. (It is the flash that causes red-eye, by taking the picture at the same moment it exposes the subject to a bright light, before their pupil has had time to respond. The red you see is literally the blood vessels in the retina.) Smarter cameras do a little mini-flash before the main flash, which causes the pupil to dilate before taking the picture." ], "score": [ 7, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5sba32
How were the lights synchronized in Lady Gaga's halftime show?
Technology
explainlikeimfive
{ "a_id": [ "dddrxj9", "dddt4ug", "dddqeog", "dddqrdr", "dddtnhv", "ddds6os", "dddxw63" ], "text": [ "The drones use Intel's realsense technology combined with infrared LEDs, allowing the drones to know where each other are and form a sort of wireless mesh network. Once the drones know where they are and where they are supposed to be, you can program them. Example of drone formation from 5 years ago: URL_0", "I'm assuming you're talking about the people surrounding lady Gaga, and not the drones? There is a company that makes DMX controlled wrist bands. Using some crazy RFID technology they can control the lights based on the position of the wrist band. Then a light operator just controls different areas, and the wrist band location determines what commands it receives. Now the people at the halftime show weren't wearing wrist bands, but the technology works the same. Here is a link to the company who makes this technology URL_0 Not sure if they are the company that was responsible for the Super Bowl. But regardless it's the same technology.", "I was just thinking about this. My guess we're that they were drones. Hence the Intel drone commercial and logo after the show", "The sky was definitely drones. They even showed it at the end of the show \"drones powered by Intel \"", "It's an Intel drone program. It's a group of 500 drones synchronized via a single computer. [Here's a cool video on them]( URL_0 ) Edit: I now realize you where probably asking about all the lights in the crowd, not the lights in the sky. I was really excited to see the drones, so that's the first thing that popped in my head when reading your question lol", "I was just trying to figure out the same thing. I know the sky ones were Intel drones but I more curious about the lights everyone had on the field. It didn't seem their positions were pre determined.", "Since the lights at the beginning were clearly drones and can be individually addressed, I'm going to cover those handheld lights for the people dancing around on the field instead. There's a number of ways this could be done. Most likely, they set up an array of low power transmitters which send out a unique identifier or broadcasts a carrier on a unique frequency, something like that. The position of these transmitters are all known by each handheld light. The controller inside each handheld light knows, through radio triangulation, where it's own approximate location is, relative to the transmitters. Next, there's a second transmitter which sends out commands which look like this: \"If you are within 3 feet of [x,y], set rgb color to [#ff0000]. If not, disregard.\" I'd love to see a project like this on hackaday. It seems like something they could cobble together out of a bunch of Philips Hue bulbs, a Raspberry Pi, a 3d printer, and a bunch of Arduinos. tl;dr: It's done with radio and magic." ], "score": [ 232, 133, 68, 57, 35, 21, 5 ], "text_urls": [ [ "https://youtu.be/ShGl5rQK3ew" ], [ "http://crowdsynctechnology.com/led-wristbands/" ], [], [], [ "https://youtu.be/aOd4-T_p5fA" ], [], [] ] }
[ "url" ]
[ "url" ]
5sbouw
How does the hardware inside pregnancy tests works?
Technology
explainlikeimfive
{ "a_id": [ "dddzjzp" ], "text": [ "Pregnancy tests use antibodies, protein molecules that react to the presence of particular chemicals. Pregnant women produce a particular hormone called human chorionic gonadotropin (HCG), so antibodies that react to it are mixed with chemicals that change colour when the antibody reacts. [More info]( URL_0 )" ], "score": [ 3 ], "text_urls": [ [ "http://humantouchofchemistry.com/how-do-pregnancy-tests-work.htm" ] ] }
[ "url" ]
[ "url" ]
5sc1ua
How does a VPN protect your data on a public network from hackers, if the data still has to go through the modem?
Technology
explainlikeimfive
{ "a_id": [ "dde6lx6", "dddyo9x" ], "text": [ "Imagine you're sitting in a magical cafe and want to browse a website. You call him up, and the website sits down at your table. > **You:** Hey, Website! I want to see this image, could you describe it to me? > **Website:** Sure! It's a kitten lying on its back with all paws outstretched. Its fur is spotted ginger tabby, and it's lying on a green sofa. > **You:** Cool! Thanks, Website! When someone wants to snoop on your data, they are essentially putting a mic on your table. It's weak and doesn't have a lot of range, but they can hear everything the Website tells you. Now you're using a VPN connection. Instead of Website, it's the VPN Server sitting with you at the table over a nice cup of mint tea. Website is sitting at the next table near you. The VPN Server speaks both English and Klingon. You now also speak Klingon because it's a magical cafe. > **You [in Klingon]**: Hey, VPN Server! I want to see this image, could you describe it to me? > **VPN Server, turning away [in English]**: Hey, Website! This handsome fellow over there wants to see this picture, I need you to describe it. > **Website [in English]**: Sure, it looks like this: (...) > **VPN Server, turning back to you [in Klingon]**: It's a kitten lying on its back with all paws outstretched. Its fur is spotted ginger tabby, and it's lying on a green sofa. > **You [in Klingon]:** Cool! Thanks, VPN Server! Now, the mic that someone put on your table can't hear the conversation between VPN Server and Website, it's too weak. And what they hear from your table is useless because it's in a different language they first need to figure out. \"Tunneling\" is when you switch to a different language and speak only to VPN Server, and they ask everyone your questions on your behalf. The different language is the encryption that VPN Server uses to be discrete.", "Most popular sites you use, like Google, Facebook, Reddit, and Amazon, are already encrypted. If you're using a public network, like at a coffeeshop, others can snoop and realize you're using Reddit right now (because they can see you're connecting to URL_0 ), but they don't know what you're doing on Reddit (because the actual data is encrypted), and they can't modify the data in any way. However, some sites like CNN, eBay, IMDB, and Forbes don't offer HTTPS encryption at all. Someone hacking your coffeeshop network can not only see exactly what web pages you're looking at, they could even intercept the traffic and insert their own content, or their own ads. With a VPN, you're \"tunneling\" all of your connections, securely, to some other location, and making your Internet requests from there. Everyone on your public network only sees that you have a connection to that VPN host, and has no idea what you're doing otherwise. They can't see what sites you're visiting, and even if you visit insecure sites they can't see what you're doing there or modify the content." ], "score": [ 29, 5 ], "text_urls": [ [], [ "www.reddit.com" ] ] }
[ "url" ]
[ "url" ]
5scvoe
what can a hacker do with my IP address?
I'm not tech savvy. Someone got butthurt on warframe (ps4) and said they were going to 'grab my IP address'. Scare tactic or threat?
Technology
explainlikeimfive
{ "a_id": [ "dde4tcy", "dde8w20", "dde5n6h", "dde75o7" ], "text": [ "Generally they can start by scanning your ports to see if theres any vulnerabilities they can exploit. In some cases they can even find your location just from the IP. This can lead to doxxing. If they find a vulnerability then what happens next depends on their intentions and what the vulnerability is, it can be very open ended.", "Its a bit like asking \"What can a burglar do with my address?\" the answer is, \"That rather depends on your house\" If your local network is properly set up (And most ISP's set up your modem to be locked down by default in order to prevent you from shooting yourself in the foot and costing them time and money) they can't get in. If you do have some open ports and vulnerabilities they could theoretically execute code on your machine and gain control of it. (until you unplug it ofc) Alternatively, they can DDOS you, which is basically spamming your machine until its to busy to do anything else. Note that you generally have a dynamic IP, which depending on your ISP refreshes either at set times, or when you unplug your modem for X amount of time. (If you don't have a dynamic IP, you can often get it changed by calling the ISP helpdesk)", "Your IP is just your address. You live at \"gummybear street 210\", your computer live at \"111.222.333.444:25075\" Just like a burgler, they would have to break in. Knowing your address means nothing. All websites knows it (It has to, to send you the info) Everything online realy.", "If your system is secured in the general sense (no open ports or those ports that are open and listening are from a program that is up to date and generally considered secure), then the worst they should be able to do is a DDOS attack. DDOS attacks don't have to attack servers, and ontop of that can be much smaller if attacking a personal target while still affecting that target. Most routers are by default configured to detect and drop unwanted communication. To do that, your router has to do some checks, and your router is, like any other network enabled device, a small computer, and a computer has limits. It has limited bandwidth to work with, limited RAM to work with and limited CPU power to work with. While a few packets of unwanted traffic are easily filtered out, once an attacker starts with an attack of thousands if not tens of thousands of devices that bombard your router with that kind of traffic, there isn't much your router can do. At that point, multiple things can happen: * Your internet connection becomes unusable slow or is gone completely * Your router crashes and restarts * Your ISP temporarily terminates your connection If you don't have a static IP address (and for the normal user there is no reason to have one), then those things will cause your router to lose the connection, and with it, it's public IP address. Once reconnected your router should request a new IP address (depending on the ISP you might actually be assigned the old address again until a certain amount of time has passed). At that point the DDOS attacks no longer reach your router and you are safe unless the attacker can find out your new IP address." ], "score": [ 11, 10, 4, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
5sd7qs
What are Active Directory Sites and Services
Technology
explainlikeimfive
{ "a_id": [ "dde7epi" ], "text": [ "Microsoft Active Directory is a role you can install on Microsoft Windows Servers to help you create a unified rights and login infrastructure across several windows computers and applications. To explain the above a bit more in depth. You have probably a computer at home and may even be sitting in front of one as you read this. If you have a computer with windows at home you may even have it set up in such a way that it asks you for a user name and password on startup. (Many people who are the only users of their computers set it up so it skips that part and automatically logs you in). If you are a bigger organization you will have many different computers and on each one there may be different users who need to log onto the computer. As you can imagine it can be hard to keep track of all the different logins. Microsoft AD makes it so that a central computer (or several working together) have a central place for all the users and their passwords in the company. The same user logging in one computer can use the same name and password to access all computers in the company. It can also be used to login to all sorts of server applications. If the system is set up correctly a user will only need to have one username and password for everything they do in the company. This simplifies things a lot. This piece of software can be managed with a number of programs among them \"Users and computers\" which is the main one, but also the management console called \"Sites and Services\". \"Sites and services\" is not a tool anyone will need very often because you mostly use it configure different locations. Your organization may have offices in a different city with their own networks and in sites and services you configure which place has which network and how theses different sites are connected to together. It also is where you configure where the Microsoft AD servers are. Unless you are a sysadmin setting up or changing things for a branch office or similar you will never have to use this tool." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5semdw
Why does software search for new updates immediately after installing the latest update?
Technology
explainlikeimfive
{ "a_id": [ "ddefkfu" ], "text": [ "Some times it is not possible to go straight from version A to version C without upgrading to version B first. For example a new firmware might have a new way to do firmware upgrades and the most recent upgrade does not have the ability to be upgraded on hardware running the old version. It could also be that there are changes to the name of the software so the last upgrade available in the old name is an update that change the name to the new name. The application might also have changed the format of its storage and needs to run a script to move the data from the old format to the new format and this have been removed for the latest upgrade to save room. So if you are at version A and request the latest version you might get version B, but then as you have upgraded and request the latest upgrade again you get version C. But version C were not available when you were at version A. To help with this it is easier to add a check for new versions as soon as you have upgraded in case there are further upgrades that can be done." ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5semqz
How does the AI in video games convincingly miss?
Technology
explainlikeimfive
{ "a_id": [ "ddefqw8", "ddehx0p" ], "text": [ "There's no standardized way of doing it. Generally, the computer calculates the trajectory needed to hit you, then uses the random number generator and slides it's aim to one side that number of degrees. The computer can also intentionally skip over any code it might have for doing things like leading the shot, or recognizing it's not hitting with it's current weapon and try an grenade. This code can be flipped back on to make the game more challenging. (Pretty sure it was Halo which had a hidden \"whuppopotamus\" mode where extra evil enemy tricks like flanking and hearing you reload were enabled). In most modern games, enemy characters have shooting and attack animations, and the game waits until these animations complete before attacking again. This provides the delay human players would believably have.", "They just generate random numbers that assign a hit probability in terms of a percentage. Like, they might say, \"This guy shoots every five seconds and has a 50% chance to hit.\" You are correct in that the computer does not play at 100% of its ability. Every game's AI is dumbed down in order to give the player a chance. Many times they will also scale the difficulty in accordance with whether the player is doing well. For example, they might say, \"If the player got hit last time, reduce the odds of getting hit this time.\"" ], "score": [ 42, 9 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5ses7x
How do websites on mobile browsers automatically open their respective apps on your phone (and sometimes without requesting permission to do so)?
Technology
explainlikeimfive
{ "a_id": [ "ddegph8", "ddegunt" ], "text": [ "Mobile operating systems allow apps to register to URLs that fit certain patterns. For example the YouTube app in Android is registered to open URLs starting with URL_0 , so whenever you click such a URL, the OS will launch that app. In Android, if more than one app can open the URL then the OS lets you choose between them.", "I'm not sure how it's handled on iOS, but on Android there's the \"intents\" API, which is a tool specifically built for this purpse. By creating an intent, you can have your website (or app) ask the system to launch another app. It's more flexible than \"please launch Instragram,\" there's extra flexibility in the form of \"implicit intents.\" In the case of one of these, the intent makes a general request about what it's trying to do (get something, send something etc) and the phone's OS asks you to pick from a list of candidates it thinks can handle that request. This is what happens when the system prompts you. However, specific calls to open a specific app just happen. It's assumed safe to just immediately launch the application because you chose to install it, so ergo you must want it." ], "score": [ 8, 4 ], "text_urls": [ [ "www.youtube.com" ], [] ] }
[ "url" ]
[ "url" ]
5sf2r9
Why has "the man" been able to take down some torrent sites permanently, but TPB still exists?
Technology
explainlikeimfive
{ "a_id": [ "ddem1zf" ], "text": [ "Preparation and preservance. You can always start another website, find another way to avoid laws, move to a new country, etc. With kickass I think one of two things happened, or likely a mix of both; - They weren't prepared to have their website seized. Perhaps they didn't have backups of the necessary information. Kickass and such rely on a big database of torrent files and their metadata (names, discriptions, cover art, etc), rebuilding that would seem like an insurmountable task. - Perhaps it simply felt like too much trouble to re setup the sight somewhere else." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5sgfzh
Parallel was faster than serial, why isn't there an UPB - "Universal Parallel Bus"?
The maximum data transfer rate of RS-232 serial is about 115kb/s whereas parallel is about 1.1Mb/s when using ECP. When I discovered Laplink transferred files between computers via parallel much faster than serial, I always used the parallel port to transfer data. So I was wondering, why wasn't UPB invented, and why USB is faster/better than a theoretical UPB?
Technology
explainlikeimfive
{ "a_id": [ "ddevlm8", "ddf2ckv", "ddeuw9e", "ddevsn8", "ddevh28", "ddfscbx" ], "text": [ "It comes down to being able to group that parallel data at very high speeds. Say I'm tossing a ball to you and think of this as a serial bus. Now take seven of your friends and I will get seven of mine and we will all toss the ball back and forth at the same time and that is the parallel bus. Now start increasing the speed of the ball toss back and forth. It becomes difficult to keep all eight pairs transferring the ball at the same time. It is much easier to find one pair that kind transfer the ball very quickly than multiple pairs that can do it without one of the pair getting ahead or behind the others.", "At high clock speeds it becomes difficult to transmit and receive data in parallel because propagation delays cause the different data lines (wires) to be very slightly out-of-sync and at high frequencies you have to manage that synchronization. When you transmit the data in serial then you don't have that problem. Modern buses like Thunderbolt and PCIe are serial though much more sophisticated and faster than old RS-232 ports.", "Parallel = faster Serial = cheaper For most problems where USB is the solution: **cheaper > faster**. SCSI and ATA are still around, but SATA (Serial ATA) is displacing ATA because as the technology gets faster: **cheaper > faster**.", "There is. Thunderbolt use two lanes in each direction compared to the one shared lane in USB. USB-C is the same. USB 3.0 also have two lanes but shared between the directions. If you look at PCI-E they have up to 16 channels. When you are talking about multiple channels it is not exactly the same as a parallel bus since the signals is not synchronized but it turns out with modern electronics it is no problems synchronizing the data afterwards which improves the transfer rate and reliability. What Thunderbolt and USB 3.0 have in common is that they are more expensive. You suddenly need as much hardware for a single USB-C connector as you needed for an entire USB 2.0 hub with multiple connectors. And the cables are more expensive, thicker and more fragile. This is fine for some applications, for example when hooking up a TV. But it is not fine in cases where you do not need it which is where USB have found its market.", "Software developer here, The problem with parallel cables is that the signals each have to arrive at the destination *at the same time*. As signal frequency increases, this synchronization becomes impractical.", "To summarise others' contributions here and add a couple of minor points: Problems with parallel: 1. Synchronisation of data across multiple parallel lines at very high data rates is very difficult due to variable propagation speeds of those lines (due to variations in capacitance, inductance and resistance of those lines) 2. Increased cost due to increased complexity of the transmit-receive electronics and mechanical connectors and wiring. 3. Increased size of the connectors and cable, which is at odds with increased miniaturisation and available space on the connected devices. 4. Potentially increased noise due to cross-talk between the data lines, which can only be reduced by increasing (3) and hence (2). 5. Not all data transfers need to be at the maximum bandwidths possible with either serial or parallel connections, since both the data source and destination are likely to have other internal systems that have more limited bandwidth, such as a mechanical hard drive. 
So, why not stick with the smaller, cheaper serial solution? Back in my day as an electronic engineer, when RS-232 was standard, few could possibly have imagined the data rates that are achievable in serial connections today. There has been an evolution over the past couple of decades in our understanding of the signal propagation and EM effects in wires at very high frequencies, and a corresponding evolution in the mechanical, material and production technologies necessary to deliver such wired connections." ], "score": [ 38, 11, 9, 6, 3, 3 ], "text_urls": [ [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5sh8e5
Why is it that when watching a movie at 24 frames per second it seems perfectly normal, but when playing a video game it is almost unbearable?
Technology
explainlikeimfive
{ "a_id": [ "ddf0vm1", "ddf5par", "ddf7rwj", "ddfl48p", "ddf8xr9", "ddfuawc", "ddf1k3t", "ddf8fdf" ], "text": [ "Film frames can capture motion blur of fast moving objects, essentially the object is appearing in several places at once. Video games render a series of still images of objects in a single location at any point in time (for the most part). When viewed at full speed, the motion blur captured on film is much more pleasing to the eye than a bunch of sharply rendered video game frames.", "About half due to motion blur, the other half because we've been conditioned since birth to regard 24fps as the natural look for media. Higher framerates _are_ superior for media - Peter Jackson and James Cameron know this - but it's going to be a long time before general audiences embrace them. Two experiments to try: 1: Next time you're watching a movie at a genuine 24fps (in a theater or on a TV that is displaying a multiple of said), watch for scenes where the motion is faster than normal, such as fast pans or action. The limitations of such a low framerate start to stand out as the judder becomes hard to ignore. 2: Play Doom while artificially limiting the framerate to 24. Doom has the best motion blur ever achieved in a video game, so that is a non-issue. It should look decidedly cinematic. (Actually just check out [this video]( URL_0 ).) Just more evidence that 24fps sucks.", "A lot of good responses here, but there is one important factor I haven't seen addressed. Input lag is greater at lower framerates. When you press a button on your input method of choice, the time it takes for the display to reflect that input is determined almost entirely by how long the interval between frames it, because it's usually the slowest thing going on in the computer. At 60FPS, input lag is around 16ms, at 24FPS, input lag jumps to about 42ms, which is definitely noticeable. Your eyes can't detect things changing that fast, but your brain can easily measure the time between your finger moving and the screen updating down to the low tens of milliseconds. It makes you feel like you're moving through water, the lag feels like resistance, and that's annoying as fuck.", "cinematographers rarely pan at high speeds when shooting 24 fps. They generally are very conscientious about strobing.", "There's a big difference between passively watching something unfold on a screen and taking an active role in what is happening on the screen.", "A whole slew of reasons. 1) Movies rarely move the camera. The camera is a fixed point and the actors move. Computer games you are moving the camera. People go to great lengths to plan out how this is going to work. The bullet time for the matrix was conceptualised as a camera on a rocket sled, but in reality was a few hundred still cameras on precise timers. 2) Fixed frame rate. The frame rate on movies is low, but rock steady. Calculations in games are run based on how long the last frame took to render. When frame rate varies those calculations go astray very quickly, especially when physics is involved. 3) Perfect motion blur. Thanks to the fixed frame rate every frame of film is perfectly motion blurred and naturally.", "If you turn on motion blur in your video game it would be much better to watch, but then you should also be able to run the game at higher frame rates as motion blur takes a lot of rendering time. However playing would still be just as bad. This is because games are interactive so your actions gets delayed by a lot when the frame rate is low. 
When you watch a movie you do not care if there is a bit of lag; in fact there are several months of lag between when the action you see on the screen happened and when you see it. However, when you play a video game the lag is very important. Imagine an enemy popping up on the screen; the enemy might pop up right after a frame is drawn. So the enemy should be on the screen, but that information is still only stored in the game. When the next frame is rendered the game sends this information to the graphics card so it can render the next frame. But then the information is still only in the graphics card while the screen displays the last frame. When the graphics card is finally done rendering the frame it sends it to the screen and you can see the enemy. In your 24 FPS game almost 100 ms have already gone by since the enemy should have appeared on the screen. If you have a reaction time of say 600 ms then 100 ms is a long time. And if your reactions are good and you are able to aim your gun at the enemy and fire, then you are still firing at where the game thinks the enemy is and not where it is displayed on the screen, so you can end up missing your target unless you lead by an unnatural amount. So at low frame rates the game might not look too bad but will feel sluggish and laggy.", "Somewhat related. If I remember rightly, Intel did a frame rate study that showed viewers were most bothered by changes in frame rate, not low frame rates. There was an exception to this when the frame rate was too low. Can't find a link to this because I didn't try very hard." ], "score": [ 134, 108, 63, 11, 6, 6, 5, 3 ], "text_urls": [ [], [ "https://www.youtube.com/watch?v=5rOnVCtwouc" ], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
5shonw
Why do emojis show up differently on iOS vs Android devices?
Why aren't there universal emojis?
Technology
explainlikeimfive
{ "a_id": [ "ddf4kw3", "ddf6ceb" ], "text": [ "Well consider it like a different font type. It still conveys the same information, but it is a separate style.", "When your phone sends text back and forth, it's encoded as numbers. These are standard, so 33 is !, 65 is A, and 128512 is 😀. All your phone gets are these numbers, and to show them, your phone has a list of little pictures for all these characters that it puts on the screen. The pictures should all be similar, but there's no reason why each company has to use exactly the same pictures." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5si4sd
How do headphone splitters no longer "half" volume of headphones?
I just got a passive headphone (3.5mm) splitter to watch movies on an airplane with my girlfriend, so we can each have our own headphones. I was just testing it to make sure it works and noticed that with both headphones plugged in, and it plugged into my iPad, there was no reduction in volume. I tested it by plugging a pair of headphones directly into the headphone port of the iPad and set the volume to 50%. I then plugged in the headphone splitter and plugged two pairs of headphones into the splitter and listened to through the same pair of headphones at 50% volume. The volume was indistinguishable. Does this have to do with new hardware/software inside the iPad? The headphone splitter doesn't have an amp or anything, as it is unpowered.
Technology
explainlikeimfive
{ "a_id": [ "ddfc6vb" ], "text": [ "The movement of a speaker cone depends on the magnetic field in the voice coil, and this in turn depends on the current. So it seems as if dividing the available current equally between two loads should result in half the current and therefore half the volume ability in each. The \"gotcha\" here is that at moderate volume levels, most amps aren't running anywhere close to their ability to supply current. The volume control doesn't act as a current limiter per se: It sets the gain of the amp, and therefore the voltage swing at the output; current in the load depends on this voltage and the impedance of the load, up to the amp's ability to provide. ( If you doubt this, think about lights on a 120V circuit. When you plug in two lamps instead of one on the same circuit, do they each get half as bright? Of course not, because two lamps are well within the circuit's current capacity, and your load impedance is still many many times that of the source. ) So, if you're within the amp's current limits, there isn't a problem. But there will be a significant reduction in maximum available volume and if you push the setup to that point you'll get significant distortion, too (clipping)." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
5siojz
If electricity travels so fast, why does it take so long to charge a battery?
Technology
explainlikeimfive
{ "a_id": [ "ddff2p1", "ddfd2lc", "ddfedh8", "ddfq9tu" ], "text": [ "You are thinking of charging a *capacitor,* which simply stores electric charge. It's incredibly fast, but doesn't hold a lot of energy per gram of equipment. By contrast, a *battery* doesn't directly store electricity. Instead, it uses a chemical reaction to deliver electricity, and when recharging it runs this reaction in reverse (using energy to undo the chemical reaction). It's the movement of these chemicals that takes more time -- they don't move at the speed of electricity in a wire.", "Because charging a battery is not just packing electricity into a box as fast as it can travel through a wire. It is causing a reversible chemical reaction which takes time and releases heat that can damage the battery itself.", "The amount of energy going into the battery is limited as to not damage the cells. Imagine it like filling a balloon with water, you fill it slowly but consistently you'll be fine, however if you try to put too much water in too fast and you'll risk breaking the balloon", "Actually the electricity travels slow. It's the energy which travels fast. Pipe full of tennis balls, where the balls are the electricity? Push one ball into the pipe, and a different ball pops out of the far end. Electricity is like pedaling a bike with a very loooong chain, and your rear wheel is at the end of a mile-long bike frame. The chain is the electricity. The bike wheel wheel still turns almost instantly as soon as you pedal. In other words, all wires are already full of electricity, and \"charging\" a battery is only forcing electricity through it and back out again. A battery is a chemically-powered electricity pump. Charging a battery is converting some waste-products into the chemical fuel (it's usually a metal like zinc, lithium, lead.) Battery charge rate, that's a separate issue. Charging a battery is a bit like winding up an old-style watch or alarm clock. You can turn the little winding key quite fast, but it still takes a whole lot of turns in order to wind the spring up all the way. And if you spin it too fast, the gears will be damaged. Hey, why don't we just charge batteries using much higher current? After all, the more the amperes, the faster the battery drains out (and the faster it recharges.) Only trouble is, batteries have a sort of \"internal friction.\" If we run the electricity through them too quickly, they heat up inside. That might be OK, but you take a chance in ruining the battery. ALso, batteries are full of wet chemicals, and **don't let the water boil, or the battery will explode.** That's where insurance companies come in. In products for sale, the rate of charging is carefully controlled, and it's kept far from \"the edge.\" Unsafe battery recharge, we might call it... \"edging?\" To charge a battery at max rate, you'll want to have it right on the edge of a violent explosion. (Cold water might help.)" ], "score": [ 35, 9, 7, 4 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
5siso7
In the late 70's, how did they manage to only produce less than 200 horsepower out of a 455 cubic inch engine, whereas today's 350 cubic inch engines crank out 500+ HP?
Technology
explainlikeimfive
{ "a_id": [ "ddfeguv", "ddfgv38" ], "text": [ "Carburetion vs. computers. Zero electronic controls means that your fuel-to-air ratios are governed by things like floating bowl valves and venturi suction. And design was largely a process of \"build one and we'll try it.\" Today, millisecond accurate fuel injection, mass-air sensors and computer controls mean that you can get a precisely balanced mixture of fuel and air into the combustion chamber to optimize for either power or economy, dependant on nothing more than the driver's mood and a control setting. Put that on an engine that was simulated and gone through Finite Element Analysis on computers powerful enough to tweak thousands of different variables for better airflow and valve response and higher revs. The result is more power from less metal, because you're burning more fuel, more efficiently, and capturing more of the resulting power than you could with the less in depth understanding available to engineers in 1976.", "Engines made quite a lot of horsepower up until the early 70's. A few things came together to make this happen: Catalytic converters were added to cars. These devices stop functioning if they come into contact with lead, so it was removed from fuel. Unleaded fuel was lower octane, so engine had to run at lower compression ratios to operate. End result: less power. Measuring horsepower changed. Before, only the engine was tested. Literally. Coolant was poured through from a giant tank, a giant cone was placed over the area that would normally house the air filter, and power came from a battery. There were no accessories being driven. Testing methods were revised and reported horsepower dropped significantly. People suddenly became very concerned with fuel economy thanks to the OPEC embargo. Fuel prices skyrocketed and fuel was rationed. American automakers didn't have small, efficient engines ready, so they took big engines and set them up to run at low speeds since the slower the engine spins, the less air and fuel was being brought in, and the less fuel had to be burnt. Since horsepower is based on torque and speed, horsepower was very low despite massive torque. (Typically, rated torque was double hp, while modern car engines have numbers that are usually almost the same.) In the late 70s and early 80s, designs were coming out that started addressing these issues. Packaging improved, making cars smaller and a lot lighter. Improved engines were released. EFI was introduced, allowing good cruising fuel economy while providing power. By the mid-80s, you could buy a car every bit as fast as the muscle cars from the 60s. From there, engine design kept improving. Distributors were replaced by electronically controlled ignition, so timing could be adjusted on the fly for more power. Lower friction materials and better machining was introduced, reducing power loss during operation. Variable valve timing allowed engines to breath well through a wide range of engine speeds. Compression started going up, squeezing more power out of fuel, even on regular unleaded. Better machining, better fasteners and high strength plastics allow the use of far better intake and exhaust manifolds. Put it all together, and you have massive increases in efficiency leading to far more power from a given engine size." ], "score": [ 9, 8 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5sj2yb
With advances in camera technology, especially in the phone and handheld market why hasn't webcam quality improved also?
Even with "1080p quality" webcams, its seems the generally quality of webcams seem bad, why hasn't webcam camera tech advanced in the same way as Mobile cameras or Go-Pro's
Technology
explainlikeimfive
{ "a_id": [ "ddfhzle" ], "text": [ "First, they definitely have. Second, you're probably using either a shitty/cheap webcam, a shitty/cheap laptop, or an ultrathin laptop without space in the lid for a decent camera. Edit: This article from today is an example of innovation in the Industry: URL_0" ], "score": [ 4 ], "text_urls": [ [ "http://www.theverge.com/circuitbreaker/2017/2/7/14531452/logitech-brio-4k-pro-webcam" ] ] }
[ "url" ]
[ "url" ]
5slhog
What is the difference between a "drone" and a something like a remote controlled vehicle?
Technology
explainlikeimfive
{ "a_id": [ "ddfxxho", "ddg5u5q" ], "text": [ "\"drone\" is just a scary new buzzword for something that's existed for decades. it was first applied to military UAVs (unmanned aerial vehicles) and then applied to consumer toys because the media will do anything to drum up fear-views", "In short, the difference is the amount of control and assistance the vehicle receives. **Remote Control**: does what it says on the tin. Like a cheap RC car. Forward, backward, turns-- does absolutely nothing unless you've got your hands on the controller. **Assisted Control**: Same as above, but now you might have a chip in your vehicle that allows it to continue doing something in absence of continued controller input. e.g Rather than your fancy flying UAV falling into the ground, it might have a microchip or flight controller capable of maintaining the vehicle in a hover without your input and regulating the motor speeds. There was another term for this method of control but I can't remember it off the top of my head right now. Most common civilian quadcopters are of this type. **Semi-Autonomous**: You give your vehicle an order to move to a waypoint, it moves there without further input. Many space rovers such as curiosity are of this type-- due to the signal lag between earth and mars not to mention low bandwidth capacity we send it waypoints far in advance. The vehicle knows enough on its own to follow the waypoints and-- if it gets stuck, to stop and wait for human assistance rather than keep going and burn a motor out. Flying wings used to map areas with composite images are of this type and US military drones like the predator are also likely of this type. **Autonomous**: As above, but now if the path to the next waypoint is blocked, the vehicle should be able to back up and try a new path to get to the waypoint without human input. Maybe it can catalogue points of interest on the route and stop to inspect them, close the solar panels and hibernate during a dust storm etc." ], "score": [ 6, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
5smf9u
How come a CD can sometimes play perfectly, even though it's scratched like crazy?
Technology
explainlikeimfive
{ "a_id": [ "ddga4c7", "ddg7zav", "ddgc6k3" ], "text": [ "CDs contain what's called \"error correction\". Think of the music as being a string of numbers. These numbers are basically the values of a line graph, which is just the shape of the sound waveform that gets played back. So the first point is a 0, then the next is a 5, then an 8, then a 5, 0, -5, -8, -5, for instance. That's kind of a sine-wave shape that I just estimated. It's typical of a sound wave. CDs contain that data. But they also contain *extra* data. There may be some data that says, \"the first 3 pieces add up to 13\". This is error correction. Let's say there's a scratch that renders the first \"5\" unreadable. Well, we can see there's a 0 and an 8, and the extra info says they should add up to 13. So even though it's partly damaged, we can tell it should be a 5 in the middle. Additionally, those different pieces of information are intentionally scattered around the disc a bit, so a scratch is unlikely to ruin enough of them at once to make it impossible to recover the missing data. The actual amount of information and complexity of the error checking algorithm is more complex than this example, for better resistance to missing data, but it's the same basic idea. Extra information to recover damaged portions.", "EDIT: Please refer to the error checking and correction posts. Also, in pressed disks, it's not burned dye that holds the information, but indentations caused by pressing. What's still the same is that damage to the reflective layer is usually more disruptive than damage to the underside of the disk. That depends on how it is scratched, the sensitivity of the cd player, and the type of data on the CD. A CD is basically made of four layers, from top to bottom: - a label - reflective aluminium - the information layer (dye) - a protective transparent layer. When you want to put something on a CD it is translated to a long string of 1s and 0s. Next that information is burned in the information layer: A zero gets a small burn spot, a one doesn't get a burn spot. When a CD player wants to read the disk, it points a very small laser at the underside of the disk. When the laser hits a black burn spot (a \"zero\") the light is absorbed/stopped by that black burn and will never be reflected by the aluminium above. When the laser hits a clear spot (a \"one\"), the laser goes right through the information layer and gets reflected back by the aluminium. So, scratches to the upper side of the disk, damaging the reflective aluminium layer, are really bad: When the aluminium is damaged, the laser will never be reflected. Scratches to the underside only cause the disk to skip when it really changes the path of the laser or truly blocks it. So, (deep) scratches on the top side are usually really bad, scratches on the underside may not be as bad. Whether or not a scratched disk plays also depends on your CD-player. Some are really sensitive and can't handle a slight change in the strength of the light, some can handle it better. When your laser lens is dirty, so that the light level is already lower, the sensitivity to scratches may also increase(if you begin with less light, you don't have a lot of margin for scratches). It also depends on the type of data: You can skip a few places with music (missing a few 1s and 0s), but with data (CD-rom) your data is probably \"corrupted\" (you're missing information that is vital to the program or file; you need every one and zero).", "u/Vesiculus wrote a very comprehensive answer. 
I'd like to add that audio CDs were designed to handle dirt and scratches: they contain a lot of error-correction data (extra data that's used to reconstruct the original if it wasn't read properly). And many players use oversampling, which reads the same data multiple times (2, 8, or 16 times, possibly more) and uses the result it got the majority of the time. Also, the orientation of scratches makes a difference. Data is arranged in concentric rings on the disc. Scratches that cross perpendicular to these rings cause only a small amount to damage to each, which is easily correctable. Scratches which are oriented so they wipe out too much consecutive data cause unrecoverable errors. (This is why manufacturers always tell you to wipe discs from the center to the edge.)" ], "score": [ 161, 11, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]