q_id | title | selftext | category | subreddit | answers | title_urls | selftext_urls |
---|---|---|---|---|---|---|---|
d1soi2 | What is a mirror bot, and why does it make the first comment on so many posts? | Technology | explainlikeimfive | {
"a_id": [
"ezpp2r4"
],
"text": [
"Oftentimes people will post things via websites that aren't set up to handle a lot of traffic, or the content is somewhere that will likely be hit with a DMCA takedown notice. MirrorBots upload the content to another service that can handle the 'reddit hug of death' level traffic that little servers can't handle. If it happens that the original becomes unavailable, then the mirrored content is highly sought after in the comments, and the MirrorBot gets upvoted."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d1tf4v | Why aren’t commercial airplanes faster? Why don’t we use supersonic passenger planes like the Concorde? | It amazes me that we have the technology to produce and fly supersonic commercial planes (i.e. the Concorde, which has been around since the 1970s) but still don’t use them today. Please help me understand the reasoning behind this. | Technology | explainlikeimfive | {
"a_id": [
"ezpsnzq",
"ezpteuu",
"ezptdwx",
"ezpsra1",
"ezq9466",
"ezqdgs5"
],
"text": [
"Poor fuel efficiency. Drag increases dramatically when you cross the sound barrier and that means a ton more fuel. That in turn means much more expensive airline seats. If you could save 1 hour on a 3 hour flight at the cost of $400 bucks would you?",
"I flew in the concord a handful of times to London back in the 80s. It was pretty nice, very luxurious, and 2x as fast. It was _expensive_. Every seat was more than the cost of a business class at book rates, and you still had the same time on the two airport sides, same delays from logisitics and so on. It just wasn't THAT much better to warrant the cost. Basically flying transatlantic is an all day event with the concorde or without it. Underlying that cost is the increased cost of plane and maintenance - this might be solved with scale, we don't know. However, you use a lot more fuel for a few reasons: 1. drag at those speeds is a big deal. 2. you travel further, because you fly at a much higher altitude - about 3-4 miles higher (bigger circle around earth, more up, more down). more distance = more fuel. 3. Can't carry enough fuel for the distances where it would really make a difference - that weight just exacerbates the above issues. So..you're only using it on flights where the flying time is only a portion of the travel overhead.",
"People also don't like sonic booms. It's illegal to fly supersonic over land and at low (commercial traffic) altitudes, at least in the United States and most other countries in the world. Concorde could only go supersonic over the Atlantic ocean. & #x200B; Sonic booms aren't just annoying, they have a habit of shattering windows and being a general nuisance.",
"They can it just costs more. They currently go at the sweet spot of profitability for the airline companies. Slower = cheaper = more passengers = more profits",
"There's a story back in the late 90's where Boeing went to their customers and asked, \"Would you prefer fast, supersonic transport or a significant decrease in fuel usage?\" They picked the latter and that heavily influenced the design of the 787 Dreamliner and others. I'm sure you can find similar stories about Airbus.",
"The problem with SSTs (supersonic transport) is they have one upside but an awful a lot of downsides compared to your traditional tin sausage airliner. Fuel costs are the one big reason why Concorde was canned. Conventional airlines used what's called a turbofan engine. Which essentially is a smallish jet engine powering a large ducted fan. This fan produces most of the engine's thrust. These are very efficient at the speeds most conventional airliners travel at. However, they are less efficient at supersonic speeds and very high altitudes. So you use a turbojet engine instead. Which is a pure jet turbine with no fan attached. They produce all their thrust from the hot exhaust. Problem is these things suck a lot of fuel, especially at low speeds, and Concorde had four of them. On top of that, Concorde needed afterburners to help it takeoff and cross the sound barrier. Basically spraying raw fuel into the hot exhaust stream to provide a quick boost of thrust, like a rocket. This is extremely inefficient though. Even the military is trying to get away from them. On top of that, Concorde's engineering requirements (for efficiency at supersonic speeds) didn't allow for a large passenger cabin. It could only hold 92-100 passengers. What you'd expect to find on a typical small regional or commuter airliner like the Bombardier Dash 8 Q400. (If you'd ridden business class on one of these, like Porter Airlines, that's basically the Concorde experience.) So ticket prices were very high. Essentially limiting travel on Concorde to the wealthy. Now if you had oodles of money and needed to get somewhere fast, it was great. But slower conventional airliners offered much better first class accommodations if you weren't in that big a hurry. A 747 travels at less than half the speed, but crossing the ocean in 7 hours versus 3 isn't really that big a deal. Especially if the trip is more comfortable. The third problem was Concorde was extremely noisy. Turbojets are very loud, which limited the airports it could operate from. Sonic booms are even louder still. A lot of countries do not allow aircraft to go supersonic near populated areas for this reason. Which limited Concorde to transoceanic routes. As such, it never had a big pool of passengers to begin with. And due to its high operating costs, never really turned much of a profit. It was more a prestige thing for the UK and France. After 9/11, passenger numbers went down while fuel prices went up. The crash was just the final nail in the Concorde's coffin. Aeroflot ran into the same issues as well with the Tu-144 way back in the 80's. Operating supersonic aircraft made no financial sense. On top of that, SSTs are enormously complex machines, and the Soviets ran into numerous technical issues with their design. The Americans keep toying around with designs for a new SST, but I think most major airline manufacturers agree that they're not economically viable. A small handful of startups are working on small supersonic business jets though."
],
"score": [
11,
8,
7,
4,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
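The fuel-burn argument in the answers above comes down to the standard drag equation: drag grows with the square of speed (plus a jump in drag coefficient across the sound barrier), and the power needed to push through it grows with the cube. Below is a minimal Python sketch of that scaling; every number in it is an illustrative assumption, not real Concorde or airliner data.

```python
# Illustrative only: how the drag equation D = 0.5 * rho * v^2 * Cd * A
# makes high speed expensive. All numbers below are made-up placeholders,
# not real airliner or Concorde figures.

def drag_force(rho, v, cd, area):
    """Aerodynamic drag in newtons: 0.5 * air density * speed^2 * Cd * area."""
    return 0.5 * rho * v**2 * cd * area

rho = 0.4              # kg/m^3, thin air at cruise altitude (assumed)
cd_subsonic = 0.03     # assumed drag coefficient below Mach 1
cd_supersonic = 0.045  # wave drag pushes Cd up past Mach 1 (assumed jump)
area = 360             # m^2 reference area (assumed)

for label, v, cd in [("subsonic ~Mach 0.85", 250, cd_subsonic),
                     ("supersonic ~Mach 2", 590, cd_supersonic)]:
    d = drag_force(rho, v, cd, area)
    # Power needed to overcome drag grows with v^3, since P = D * v.
    print(f"{label}: drag {d/1000:.0f} kN, power {d*v/1e6:.0f} MW")
```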
d1tt2f | How do broken electronics flash a light warning they are not working? | Technology | explainlikeimfive | {
"a_id": [
"ezpvwgk",
"ezpwnnb"
],
"text": [
"If the traffic lights completely lose power they will be off (i.e. not flashing but completely dark) & #x200B; The flashing is a \"fail-safe\" function that is built into the light that activates when it loses touch with the control system - it's usually a box off to the side of the intersection but may be buried) or if there's an error in the system. The traffic light is still receiving power but isn't receiving a signal to tell it what color it is meant to be so it defaults to flashing red. Basically all traffic lights are designed to flash like this unless they get a signal telling them to do something else (i.e. be traffic lights) & #x200B; It's kind of like the blue screen of death you get when your computer crashes.",
"> The red light was flashing to indicate it was broken.. That isn't why it was flashing. The flashing red light indicates that it is now a stop sign, turning the intersection into a four way stop. The light itself was obviously working. What likely wasn't working was the connection to a central management office where different light patterns would be dispatched to intersections across the city to manage traffic. Without such a signal the traffic light was designed to default to flashing red so the intersection is in a safe, if slower configuration. So the light was operating exactly as designed, activating what is sometimes called a \"failsafe\". The idea is that if something stops working then it fails into a condition which is safe rather than catastrophic. For example you might have a big vehicle which operates by hydraulic control, but there is always the risk that a hydraulic line could break or the pump give out. To mitigate that risk you could design it so the brakes are held *open* by the hydraulics which means if pressure is lost by any of those malfunctions then the vehicle automatically slows itself."
],
"score": [
3,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
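Both answers describe what amounts to a watchdog: the signal head runs normal phases only while it keeps hearing from the control box, and falls back to flashing red otherwise. Here is a hypothetical sketch of that failsafe logic; the class name and timeout value are invented for illustration.

```python
import time

# Hypothetical sketch of the "failsafe" both answers describe: the signal
# head shows a commanded color only while the controller keeps talking to
# it, and drops to flashing red the moment it loses touch.

WATCHDOG_TIMEOUT = 2.0  # seconds of silence before failing safe (assumed)

class SignalHead:
    def __init__(self):
        self.last_command_time = None
        self.color = "flashing red"  # power-on default is already the safe state

    def receive_command(self, color):
        self.last_command_time = time.monotonic()
        self.color = color

    def tick(self):
        # No command ever received, or controller silent too long -> fail safe.
        if (self.last_command_time is None or
                time.monotonic() - self.last_command_time > WATCHDOG_TIMEOUT):
            self.color = "flashing red"
        return self.color

head = SignalHead()
print(head.tick())            # flashing red: no controller heard from yet
head.receive_command("green")
print(head.tick())            # green: controller is alive
time.sleep(2.1)               # simulate losing touch with the control box
print(head.tick())            # flashing red again
```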
d1uikj | What exactly makes newer games orders of magnitude more difficult to run than older ones? | Why can I play Quake on a laptop and get nearly a thousand frames/second, but if I try to play CS:GO I can barely reach 70. I don't quite understand why CS is at least 14 times more difficult for the CPU than Quake, when the fundamental idea is basically the same. | Technology | explainlikeimfive | {
"a_id": [
"ezq0tda",
"ezq188p"
],
"text": [
"New computer technology - both hardware and software - creates more complex games with coding to take advantage of the new features offered by such technology. That additional coding is part of what makes the games more resource-intensive. Also, the graphics in a game takes up a lot of resources - not so much full motion video, but the actual gameplay.",
"Efficiency and complexity are two major factors. Years ago developers didn't have nearly as many resources available for their games to run so they were required to figure out tricks in the code that allowed the game to utilize fewer resources than you might think. Modern games run on hardware so much faster that developers often don't need to do this anymore to reach acceptable performance. In addition, game engines are more common, there's tons to choose from and most of them do the job well enough that developers don't need to write their own. You can often find one that works for you, may not be the most efficient, but it works well enough on most common hardware that it doesn't matter if it isn't perfect. Complexity is another big factor. Older games were very simple in comparison to newer games. Take a look at Doom....it was what they call a 2.5D game. The game is effectively a 2D world, rendered from the viewport (the game camera) in a way to simulate 3D. This was a trick used to make the game feel 3D without actually requiring 3D rendering. This used significantly less resources than what modern 3D games require, but was mindblowing at the time. Now, games are commonly complete 3D which requires more resources. In addition they also tend to have full physics rendering, which wasn't a think in older games which used simple physics rendering at best (such as calculating which direction to push a player when a rocket explodes next to them, whereas modern games do that, and some of them even calculate bullet travel/drop, proper particle physics, etc). EDIT: A third factor is with all the software and hardware we have today, it's much easier for developers to create better animation, gameplay, etc without putting as much work into this process as what would've been required without all those software/hardware tools. This means they can more rapidly develop very complex effects, graphics, etc, without putting in nearly as much effort as would've been required years ago."
],
"score": [
3,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
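As one concrete example of the "full physics" cost the second answer mentions: modern games integrate projectile motion (bullet travel and drop) every frame. Below is a tiny sketch using Euler integration; the numbers are illustrative assumptions, not from any particular game. Doom-era 2.5D games skipped this kind of per-object simulation entirely, which is part of the resource gap.

```python
# A taste of the per-frame physics work modern games do for each projectile
# (bullet travel and drop), using simple Euler integration. All numbers are
# illustrative assumptions.

DT = 1 / 60          # one frame at 60 FPS
GRAVITY = -9.81      # m/s^2

def step(pos, vel):
    """Advance one projectile by one frame."""
    x, y = pos
    vx, vy = vel
    vy += GRAVITY * DT
    return (x + vx * DT, y + vy * DT), (vx, vy)

pos, vel = (0.0, 1.5), (400.0, 0.0)   # muzzle at 1.5 m height, 400 m/s, fired level
for frame in range(30):               # half a second of flight
    pos, vel = step(pos, vel)
print(f"after 0.5 s: travelled {pos[0]:.1f} m, dropped {1.5 - pos[1]:.2f} m")
```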
d1vj57 | - How does the IC 555 timer work? | Technology | explainlikeimfive | {
"a_id": [
"ezqcd9y"
],
"text": [
"There are two threshold voltages, the chip turns the output on until one of those voltages is reached, then it turns the output off until the other threshold is reached. [simulation]( URL_0"
],
"score": [
4
],
"text_urls": [
[
"http://www.falstad.com/circuit/circuitjs.html?cct=$+3+0.000005+5.023272298708815+64+7+50%0Aa+288+168+384+168+9+5+0+1000000+4.191924812178604+3.333333333333333+100000%0Aa+288+264+384+264+9+5+0+1000000+6.666666666666666+4.191924812178604+100000%0Ar+240+56+240+104+0+5000%0Ar+240+104+240+152+0+5000%0Aw+240+152+240+280+0%0Ar+240+280+240+328+0+5000%0Aw+240+152+288+152+0%0Aw+240+104+272+104+0%0Aw+272+104+272+280+0%0Aw+272+280+288+280+0%0Aw+464+176+464+192+0%0Aw+384+184+384+192+0%0Aw+384+192+464+240+0%"
]
]
} | [
"url"
]
| [
"url"
]
|
|
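The answer's two-threshold description can be simulated directly: on a real 555 the thresholds sit at 1/3 and 2/3 of the supply voltage, and the capacitor voltage bounces between them while the output toggles. A minimal sketch follows; the component values are arbitrary assumptions, and note that a real astable 555 circuit charges and discharges through different resistors, so the two half-periods differ.

```python
# Minimal sketch of the behavior the answer describes: the capacitor voltage
# swings between two thresholds (1/3 Vcc and 2/3 Vcc on a real 555) and the
# output flips each time a threshold is crossed. Component values assumed.

VCC = 5.0
LOW_T, HIGH_T = VCC / 3, 2 * VCC / 3
R, C = 10e3, 100e-9          # 10 kilohm, 100 nF (assumed)
TAU = R * C                  # RC time constant: 1 ms here

v, output = 0.0, True        # start discharged, output high (charging)
dt, t = TAU / 100, 0.0
for _ in range(5000):
    target = VCC if output else 0.0
    v += (target - v) * (dt / TAU)   # Euler step of RC exponential approach
    t += dt
    if output and v >= HIGH_T:
        output = False
        print(f"t={t*1e3:.3f} ms: hit 2/3 Vcc, output -> low (discharging)")
    elif not output and v <= LOW_T:
        output = True
        print(f"t={t*1e3:.3f} ms: hit 1/3 Vcc, output -> high (charging)")
```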
d201ou | Why do TV channels that are available for free with an over the air antenna (NBC, ABC, etc.) require a paid cable subscription to be watched on their app/website? | Technology | explainlikeimfive | {
"a_id": [
"ezroyuf"
],
"text": [
"Costs way more to store that stuff and deliver it over the Internet than it does to broadcast the content weekly. And you’re paying for the convenience of getting it on demand. Example - I use a music site called beatport a lot. They recently announced that they’d be deleting any records that sold 0 units in 2019 because the storage and egress bandwidth (the bandwidth used to send content to you from their servers) from cloud service providers is insanely expensive. Factor in a content delivery network for consistent speed in “any region” and you’ve got some serious overhead. Also the paid services are usually ad-free. On the flip side, a TV network is actually profiting from ad revenue on a well established show/program when it’s broadcasted."
],
"score": [
14
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d21lof | How are music videos lip synced when the video is shot in slow motion? | Technology | explainlikeimfive | {
"a_id": [
"ezs9n6x",
"ezs7pfn",
"ezscv59"
],
"text": [
"Watch URL_0 - behind the scenes of Mariah Carey’s ‘Heartbreaker’ video. Brett Ratner (the director) and Mariah explain (with visual demonstration!) how they make her have the slow-motion movements but the lipsync still be on tempo with the normal track.",
"The song is played back to the singer at a high speed so they sing/lip sync faster. It is shot in regular speed. When it is slowed down the speed of the singer seems normal.",
"While they're recording it, they play back the original track faster than normal, so the lip syncing lines up when it's slowed back down. Here's a nice video on the topic: URL_0"
],
"score": [
11,
8,
6
],
"text_urls": [
[
"https://youtu.be/-l-1A8qL93o"
],
[],
[
"https://www.youtube.com/watch?v=G025oxyWv0E"
]
]
} | [
"url"
]
| [
"url"
]
|
|
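The trick in all three answers is just a speed ratio: if the footage will be slowed by a factor s, the track is played at s times normal speed on set, so every beat lands back on its original timestamp after the slowdown. A sketch of that arithmetic, with the 2x factor as an assumed example:

```python
# The arithmetic behind all three answers: if the footage will be slowed
# down by `slowmo`, play the track `slowmo` times faster on set. A beat at
# time t in the song is then sung at t/slowmo seconds of real set time,
# and slowing the footage back down stretches it to t again - in sync.

slowmo = 2.0  # assumed: footage shot to be played at 2x slow motion

def set_time(song_time, slowmo_factor):
    """When the singer hits a beat on set (song played slowmo_factor x fast)."""
    return song_time / slowmo_factor

def final_video_time(set_t, slowmo_factor):
    """Where that moment lands once the footage is slowed back down."""
    return set_t * slowmo_factor

print(f"play the track at {slowmo:.0f}x speed on set")
for beat in (1.0, 2.5, 10.0):
    t_set = set_time(beat, slowmo)
    t_final = final_video_time(t_set, slowmo)
    print(f"song beat @ {beat:>4}s -> sung @ {t_set:>5}s on set "
          f"-> lands @ {t_final:>4}s in the slowed video (matches the song)")
```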
d21o22 | How are still images created to look like they're moving? | Technology | explainlikeimfive | {
"a_id": [
"ezsa6nf"
],
"text": [
"It’s because that the human eye isn’t perfect, and flashing something at ~24 still images a second tricks our brain into perceiving movement"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d23y7c | How does an HDMI, or any other cable work? How can they transfer visual data? | Technology | explainlikeimfive | {
"a_id": [
"ezsnchu"
],
"text": [
"I cable is just a bunch of wires, some for power some for data. Think of it this way, one letter is a byte of data. Each byte is 8 bits. A bit is either on or off. Now you have a cable with 8 wires in it, each wire can have a tiny bit of electricity for on, or none for off. That is translated into the bits and the byte. Now once every piece of time, you change the cables on/off state for each wire. You can now send more bytes. This is overly simplistic but gives an overview of how a cable works."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
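The answer's simplified eight-wires-per-byte model is easy to express in code: each clock tick puts one byte on eight parallel on/off wires, and the receiver reassembles it. (Real HDMI actually serializes bits over a few TMDS lanes, but the on/off idea is the same.) A sketch:

```python
# The answer's model in code: one byte per clock tick, carried as eight
# parallel on/off wires, reassembled on the far side. (Real HDMI serializes
# bits over TMDS lanes instead, but the principle is identical.)

def byte_to_wires(byte):
    """One clock tick: bit i of the byte drives wire i (True = voltage on)."""
    return [bool(byte >> i & 1) for i in range(8)]

def wires_to_byte(wires):
    """The receiver samples the eight wires and rebuilds the byte."""
    return sum(bit << i for i, bit in enumerate(wires))

message = b"Hi!"
ticks = [byte_to_wires(b) for b in message]        # what travels on the cable
received = bytes(wires_to_byte(t) for t in ticks)  # what the far end rebuilds
assert received == message
print(received)  # b'Hi!'
```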
d26dzh | Why do streams and downloads seem to speed up until they get to 99 percent done, then slow right down? | Technology | explainlikeimfive | {
"a_id": [
"ezszmhg",
"ezt0l19",
"ezt53gb",
"ezt6msh",
"ezt9eao",
"ezt1sfb",
"eztio46",
"eztcq0p",
"ezt8xiy"
],
"text": [
"When the download is finished, the computer has to copy the retrieved file, and often pass the file off to the anti-virus for scanning. All of this happens while the browser is displaying 99%.",
"Portions of the download will often be cached in RAM by the file system and only flushed to disk when the file is closed. And, as robbak said, virus scanners are another source of slowdown. On some systems, searchable files like PDFs will be indexed once downloaded too, although that usually happens in the background after the download gets to 100%, but there's some other bookkeeping going on too, like assigning an icon to the file, writing the download date to the file's metadata, etc.",
"In the case of torrents, the file(s) you're downloading are split up into pieces, and each piece can be downloaded individually from one of the available peers/seeds. Your computer then stitches together the file(s) form these pieces. Often there are peers that are only uploading at an extremely limited rate, so your computer ends up making a request for a piece which will take hours or even days to finish from that peer, even though the piece is only a megabyte or so. When everything else has finished, you're often stuck with the last few pieces being downloaded from those slow peers, which shows as a massive drop in download speed for the last 1% or so of the file(s). Most bittorrent clients nowadays have something called \"endgame mode\" specifically to counter this issue. When all pieces have either been downloaded or requested from some peer (meaning that we're most likely downloading the last few pieces from said slow peers), your client starts requesting all remaining pieces from multiple peers. This ensures the last few pieces actually finish in a few seconds or minutes instead of potentially hours, but this is still much less efficient than the previous download speed was.",
"In the context of progress bars, the graphics generally have algorithms that smooth out and estimate progress(a 100% accurate real-time graphic would be difficult produce). In these circumstances, especially if progress slows towards the end of a graphic, the time to complete can be misrepresented.",
"Progress bars are a little bit of a farce. They are just there to make you feel better about waiting. Check out this recent 99% invisible podcast on this very topic: URL_0",
"The reason it speeds up overtime instead of just immediately hitting some maximum download speed is that the internet is designed that way. Think of the internet connections as a funnel that you want to pour sugar (data) through as quickly as possible. If you pour too much then it can clog up the funnel but, if you can start to ease back once it gets unclogged and find the amount of sugar that let's it keep draining, you end up pouring at around its maximum speed. The internet connection actually moving the upload or download around ends up starting pretty slow because the computer doesn't assume the network is running perfectly. Once its able to use the slow speed, it eventually starts bumping the speed up more and more until data starts to get delayed or dropped too often. Once it notices that there is too much data being sent, it will scale back to a slower speed and try bumping it up occasionally to check if anything changed. This is what let's us do things on all different types of physical networks which may be clogged up or temporarily broken by over use or even bad weather.",
"The most recent [99 Percent Invisible podcast]( URL_0 ) is all about the history of the \"loading/busy\" icon on computers! Briefly, progress bars typically don't attempt to accurately portray progress anymore, but are there to make waiting less intolerable. An excerpt: \"Pretty soon, progress bars started popping up everywhere — but there was one problem. The progress bars gave the user an accurate depiction of how much of a task had been completed at any given time. So if the first ten percent loaded in ten seconds, then you would think — well this whole thing will take 100 seconds. Except it didn’t always take a hundred seconds. Sometimes the computer would slow down over some computational speed bump, and you’d end up feeling completely betrayed! This revealed something really key about the psychology of waiting and why things often feel slower than they really are. It’s all about our expectations. This is true on our computers and it’s true at lines at Disneyland. You look at it and it tells you how long it’s going to take and you set an expectation,” explains Jason Farman, “And when you get to the front of the line faster than you thought you were going to (or when that particular piece of software loads faster than you thought it was going to), you leave the encounter feeling positive.” And that realization about expectations led designers to new idea — a loading bar that had nothing to do with how much work the computer had done. Instead, it was designed just to make the wait feel better. It would always start off slow, to set your expectations for a fairly long wait, and then speed up at the end, so that you end up feeling pleasantly surprised. This “front-loaded loading” bar tricked you into feeling like you were waiting for less time than you actually were. In the early 2000s, that idea of trying to manipulate the users’ experience of time really took off, especially with big online retail companies whose profits depended on keeping customers on their website.\"",
"At 99%, the computer has to verify the integrity of what it has just downloaded, it checks every single file. Also needs to be passed through an Anti-virus at times like Windows Defender, or 3rd party one that force the files to redirect through their software to check the them.",
"The 99% is only what the copy tool can measure. I.e: How much actual data have been moved to your computer. The last 1% is for \"anything else\" that could take anywhere from a nanosecond (renaming the file) to several seconds (antivirus) to several minutes (Windows antivirus on Java files). The tool you use for copying the file simply has no way of knowing what happens to it afterwards, or how much time it takes. So it just stops at 99% and waits. It could stop at 95%, instead of 99%, like you mentioned earlier, but it wouldn't be any less of a guess than what it already does. The standard has been set to 99, so that's what everyone uses."
],
"score": [
3517,
931,
117,
34,
23,
9,
5,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[
"https://99percentinvisible.org/episode/wait-wait-tell-me/"
],
[],
[
"https://99percentinvisible.org/episode/wait-wait-tell-me/"
],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
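The "front-loaded loading" bar described in the podcast excerpt can be modeled as a deliberate nonlinear mapping from true progress to displayed progress: lag early, catch up late, so the finish feels fast. A hypothetical sketch, where the exponent is an invented tuning knob and not from any real UI toolkit:

```python
# Sketch of the "front-loaded loading" bar from the podcast excerpt: the
# displayed progress deliberately lags the true progress early on, then
# catches up near the end so the finish feels faster than expected.
# The exponent is an arbitrary assumption.

def displayed_progress(actual, exponent=1.8):
    """Map true progress (0..1) to what the bar shows (0..1)."""
    return actual ** exponent

for pct in (10, 25, 50, 75, 90, 100):
    shown = displayed_progress(pct / 100) * 100
    print(f"actually {pct:3d}% done -> bar shows {shown:5.1f}%")
```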
d26l0r | How do individually addressable led strips work? | Technology | explainlikeimfive | {
"a_id": [
"ezt1cl2"
],
"text": [
"These strips contain power traces, LEDs, control trace(s), and controllers. Most modern strips have the controller embedded inside the same case as the LEDs, DNS I'll refer to the combination of the LED emitters, and controller as the LED. Depending on which specific control chip is being used, a computer would put out a control signal on the control trace or traces that specified how each LED is supposed to light up its various emitters. The most common case is to use a single control trace, which goes to the DATA_IN pin on an LED, the controller grabs the data it needs (generally some control bits and the next 3 bytes) , and sends the remaining stream out the DATA_OUT pin, where the cycle continues until you run out of LEDs or the computer stops putting out a signal. Since only one signal is used, the timing is fixed to a certain frequency which requires fairly tight constraints to get the LED controllers to recognize a valid control signal."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
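The daisy-chain in the answer (WS2812-style strips behave roughly this way) can be modeled as each controller keeping the first 3 bytes off DATA_IN as its own color and forwarding everything else out DATA_OUT. A sketch; the byte order and values are illustrative (real WS2812 parts actually expect green-red-blue order):

```python
# Model of the daisy-chain the answer describes: each LED's controller keeps
# the first 3 bytes (its own color) from DATA_IN and passes everything after
# them out DATA_OUT to the next LED down the strip.

def led_controller(data_in):
    """Return (my_color, data_out): consume 3 bytes, forward the rest."""
    return tuple(data_in[:3]), data_in[3:]

stream = bytes([255, 0, 0,    # LED 0: red
                0, 255, 0,    # LED 1: green
                0, 0, 255])   # LED 2: blue

colors = []
while stream:
    color, stream = led_controller(stream)
    colors.append(color)
print(colors)  # [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
```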
d273fj | Even if no changes were made, why do Word/PowerPoint and similar software sometimes ask if I want to save changes? | Technology | explainlikeimfive | {
"a_id": [
"ezt5dm7"
],
"text": [
"It's hard to answer this question generally, as it's kind of akin to asking why my car doesn't turn on, there are a myriad of possible causes that are consistent with the behavior. While I'm a developer, some of this is speculation, and just based on my own hunches. Applications that work with files have very few options or behaviors that don't affect files, as such I don't think they code explicitly that this option needs a file save, it's sort of automatic where they look at the structure of the file in memory and maybe a flag is tripped when a mutation is performed. So what are some things that could mutate the structure in memory without you doing it, and would appear sometimes. 1) If you open a file from a different version of the Software. Word and Power point in particular may be able to parse a general format but save in a slightly different way that looks the same. For instance they could change compression algorithms that shrink the document and it would be the same document uncompressed, but it could require a change. Or it could be something like this version uses metric, and another uses imperial for internal document format. 2) You could have plugins etc that embed data in them that need updates. Word and Powerpoint in particular can embed other things in them, and those other plugins could potentially be registering as changes, or be changing content that you don't see. 3) Something automatic like Macros could be changed or updated in the document and when you open on this computer it updates the document. For instance (and I'm not saying that you have one, and would find it very surprising), but if a computer was infected with a virus that exploited a piece of software it might change the document every time and prompt you to save. Those are just three reasons I could think of."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
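The answer's hunch, a flag tripped by any in-memory mutation, including silent ones the application performs while opening a file from an older version, can be sketched as a "dirty flag". Everything below (the class, the version numbers) is hypothetical illustration, not how Office actually implements it:

```python
# Hypothetical sketch of the "dirty flag" mechanism the answer speculates
# about: any mutation trips the flag, including silent ones the app itself
# makes while opening a file saved by an older version.

CURRENT_FORMAT = 3  # assumed internal format version

class Document:
    def __init__(self, content, format_version):
        self.content = content
        self.format_version = format_version
        self.dirty = False
        if format_version != CURRENT_FORMAT:
            # Silent upgrade on open: the user typed nothing, but the
            # in-memory structure changed, so a save prompt will appear.
            self.format_version = CURRENT_FORMAT
            self.dirty = True

    def edit(self, new_content):
        self.content = new_content
        self.dirty = True

    def needs_save_prompt(self):
        return self.dirty

doc_same = Document("hello", format_version=3)
doc_old = Document("hello", format_version=2)
print(doc_same.needs_save_prompt())  # False: nothing changed
print(doc_old.needs_save_prompt())   # True, even though the user did nothing
```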
d289zj | How do video game developers fix a game's performance issues like glitches, stuttering, and long loading screens on consoles? | I know it's through updates, but what is the process of making the update to solve a performance issue? | Technology | explainlikeimfive | {
"a_id": [
"ezth26j"
],
"text": [
"Software Development is often done under deadlines and often leads to quick solutions. Which works for 90% of the code anyways, and they want it done fast. They run simulations and profilers during the last month, and iron out the game for release. After initial release various systems will get to use your new game. This is sometimes considered as live testing, because it's not truly polished yet in many cases. User feedback will help prioritize the developers time for the next patch, and sometimes even leads to updates or revamped mechanics in another game. The development team is able to rewrite or modify the source code and test its performance. This can mean several hours of stepping through every line of code, following diagrams, and/or using code analysis and profiler tools. Most code can be improved with more efficient Algorithms and improved memory use. In some cases, the bottlenecks are from weird limitations within the hardware or timing between components and aren't easily caught in simulations or replicated. The updated code is uploaded to a patching service/server. This can be handled in a few ways, but usually it'll check for what's changed and saves the difference for your installation. Afterwards, the software will check for updates and only needs to fix the difference. This is more efficient than a complete reinstall, and allows you to keep enjoying your games, and other software right after patching. Further reading: * [Wiki: Patching]( URL_3 ) * [Stackexchange: How do Patches and Updates work]( URL_0 ) * [Gamasutra: Indepth simple system to patch your game]( URL_2 ) * [Gamasutra: How to debug]( URL_1 )"
],
"score": [
3
],
"text_urls": [
[
"https://softwareengineering.stackexchange.com/questions/164709/how-do-software-patches-and-updates-work",
"https://www.gamasutra.com/blogs/HermanTulleken/20170227/292463/How_to_debug.php",
"https://www.gamasutra.com/view/news/181455/Indepth_A_simple_system_to_patch_your_game_content.php",
"https://en.wikipedia.org/wiki/Patch_(computing)"
]
]
} | [
"url"
]
| [
"url"
]
|
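The patching step the answer describes, "check for what's changed and save the difference", can be illustrated with a naive fixed-size block diff. Real patch tools (bsdiff, xdelta, and the like) are far more sophisticated; this sketch only shows the idea, and assumes old and new builds are the same length:

```python
# Naive illustration of "save the difference": compare old and new builds in
# fixed-size blocks, ship only the blocks that changed, and reapply them.

BLOCK = 4  # bytes per block; tiny only so the example is readable

def make_patch(old, new):
    """List of (block_index, new_bytes) for blocks that differ."""
    patch = []
    for i in range(0, len(new), BLOCK):
        if old[i:i+BLOCK] != new[i:i+BLOCK]:
            patch.append((i // BLOCK, new[i:i+BLOCK]))
    return patch

def apply_patch(old, patch):
    data = bytearray(old)
    for index, blob in patch:
        data[index*BLOCK:index*BLOCK+BLOCK] = blob
    return bytes(data)

old_build = b"AAAABBBBCCCCDDDD"
new_build = b"AAAABxBBCCCCDDDE"
patch = make_patch(old_build, new_build)
print(patch)  # only 2 of the 4 blocks need to be shipped
assert apply_patch(old_build, patch) == new_build
```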
d2ae4d | Why does a phone call's "hold" music sound like shitting in a microphone when the voice audio is clear? | Technology | explainlikeimfive | {
"a_id": [
"eztnn9v"
],
"text": [
"There are two factors at play. Firstly, telephone transmissions are optimized for voice, not music. The frequency range is limited to transmit the voice of humans clearly, but the high and low frequencies that are required for music to properly get trough are cut off. Secondly is compression. The music being played is a digital file, that is stored somewhere, probably compressed to save on space. That file is then sent to whatever system is answering the call, compressed further on the way. Finally it is sent over the phone line, probably compressed once more using compression optimize for the human voice, not music. The end result is a poor quality music file being forced down a system not designed to play music."
],
"score": [
15
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
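The band-limiting in the first factor can be demonstrated numerically: the classic analog telephone channel passes roughly 300-3400 Hz, so bass and cymbal content in music simply disappears. A sketch using NumPy's FFT, with pure tones standing in for a real recording:

```python
import numpy as np

# Sketch of the band-limiting the answer describes: keep only the classic
# ~300-3400 Hz telephone voice band and see which parts of "music" survive.

FS = 44100                      # sample rate, Hz
t = np.arange(FS) / FS          # exactly one second of audio
# "music": bass at 80 Hz, mid at 1 kHz, cymbal-ish content at 9 kHz
signal = (np.sin(2*np.pi*80*t) + np.sin(2*np.pi*1000*t)
          + 0.5*np.sin(2*np.pi*9000*t))

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1/FS)
spectrum[(freqs < 300) | (freqs > 3400)] = 0   # the phone-line bandpass
on_the_phone = np.fft.irfft(spectrum, n=len(signal))

filtered_mag = np.abs(np.fft.rfft(on_the_phone))
for f in (80, 1000, 9000):
    # With exactly 1 s of audio, FFT bin i corresponds to i Hz.
    print(f"{f:5d} Hz component after the 'phone line': {filtered_mag[f]:8.1f}")
```

Only the 1 kHz component survives; the 80 Hz bass and 9 kHz highs come out as zero, which is a big part of why hold music sounds so mangled.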
d2gvmu | How do international emergency calls work without a SIM card? | Technology | explainlikeimfive | {
"a_id": [
"ezuyc5u",
"ezurmdi"
],
"text": [
"A Sim card doesn't connect you to the network. It tells the network how to bill your calls. What identifies you on the network is actually your IMEI number that's hard encoded in your phone.",
"It’s part of the mobile phone standard that towers will allow phones to call that number without a SIM card. Think of a SIM card as a login token that’s used to log you in to your carrier’s network. Carriers are required to allow anyone to make emergency calls without being logged in."
],
"score": [
6,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d2le8i | How did people wake up in the old days, when there were no alarms and stuff? | Technology | explainlikeimfive | {
"a_id": [
"ezvjr7r",
"ezvkm4r",
"ezvjrsc",
"ezvn00w",
"ezvnfpw",
"ezvm15o"
],
"text": [
"The sun, first and foremost. Later, churches and Town Halls had bell towers exactly for this purpose - keeping collective time.",
"There were also people called [knocker-uppers]( URL_0 ) who would go around and wake people up for work.",
"There wasn't anything to stay awake for once the sun went down. The bright sunrise was enough to let us know the day started.",
"I imagine the same as happens to me now, where the dog gets up, whines and then if I don’t move i get a paw in the face. She’s got a very reliable body clock, somehow 😂",
"I hear some Native Americans would drink a lot of water before going to sleep so needing to pee would wake them early.",
"One way was to put nails in your candle that would fall into their metal saucer with a clatter at the designated hour. I imagine it wasn't too hard waking up as you'd go to sleep early anyway."
],
"score": [
18,
12,
7,
5,
4,
3
],
"text_urls": [
[],
[
"https://en.m.wikipedia.org/wiki/Knocker-up"
],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d2mzt5 | Why did primitive people start cooking food? | Technology | explainlikeimfive | {
"a_id": [
"ezvr45h",
"ezvrbci",
"ezvrt7k"
],
"text": [
"Curious idiots will always mess around with something they don’t understand. And with the discovery of fire, some idiot probably threw their meat onto the flames and out of sheer curiosity, took a bite, and said, and I’m paraphrasing here, “damn. Meat good, Ghroahg like brown meat!” And eventually, since it tasted better, all the cave people switched to eating fire meat, without knowing how much It was better for them in terms of energy, and some more curious idiots thought, paraphrasing again, “groundmeat (mushrooms)like regular meat. FIRE FIRE OHH HA HA”, and tossed those on the flames too. Badabing badaboom we have sautéed mushrooms.",
"I believe the current prevailing theory is that when a forest fire would go through an area, humans would walk through the ruins and pick up roasted nuts/fruits/animals and eat them. Cooked food smells better than raw food in a lot of cases, so they likely had an urge to taste what smelled very good. Once fire was tamed they already knew that putting food on it made it more delicious, so they did it right away. They may have even tamed fire partially because they knew they could cook with it.",
"Curiosity would have made early humans throw stuff on a fire and see how it tastes. The most important *reason* that cooking is good is because it changes the structure of some stuff to make it easier to digest, meaning that we can access more energy and nutrients when we eat it. We can't digest raw potatoes, wheat, meat etc well but once cooked they turn into excellent energy sources. Cooking can also kill bacteria which is a big plus."
],
"score": [
15,
10,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d2o5gq | What is the difference between NFC and RFID? | I'm interested in duplicating my student card to my phone, but apparently an NFC reader app from the Play Store couldn't read it. | Technology | explainlikeimfive | {
"a_id": [
"ezvvfat",
"ezvvxcy"
],
"text": [
"NFC is a specialized subset within the family of RFID technology. NFC is designed to be a secure form of data exchange, and an NFC device is capable of being both an NFC reader and an NFC tag. RFID is the method of uniquely identifying items using radio waves.",
"NFC is “near field communication”, this means the whatever is sending information needs to be really close to what is receiving it. Communication is also a key here, the chips and readers can send super simple messages both ways. NFC uses radio waves to communicate between the chip and reader and also to power the chip when data is being transferred RFID is “radio frequency identification” it is a way to identify something, there is no two way communication. The chip ( example the microchip in your cat or dog ) needs to be close to the reader to get the information from it"
],
"score": [
6,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
d2o6ba | why are smartphone companies (especially Apple) adding like 3 different cameras to their phones? Is this really necessary, and does this increase the price by a lot? Because if it does, I don't understand why it's not better to buy a camera than to add three cameras to a phone | Technology | explainlikeimfive | {
"a_id": [
"ezvviyo",
"ezvx31s"
],
"text": [
"There is probably a demand from people who use their phone as everything and do not have a separate camera. Demand could also come from people using services like iCloud to back up their photos, and companies like Apple can see how much their camera is being used). Of course cost will increase, but we won't know the exact price of sensors and lenses until people break down the new iPhones to find out the real cost. I very much prefer using my DSLR to take photos, for RAW photos of acceptable resolution, so phone cameras don't mean much to me. I pretty much only use phone cameras to scan documents.",
"Well. Believe it or not, cameras are an important feature on phones nowadays. Pretty much every single phone has a camera, and the quality of the pictures that camera produces kind of sets the tone for if the phone is still decent or not. Well, if you actually use the phone camera. Many people do. And most of us would replace a phone if the camera lens is broken so that we no longer can snap photos at all, even if the camera quality is not a deal breaker when you buy the phone, you quickly get used to actually being able to use the camera. But. Among the typical customer base of smartphones, cameras are pretty important. Most people don't even understand what the gigabyte thing is and can't remember how much flash storage they have on their current phone anyway. But it's easy to look at pictures taken with your phone and compare to your friends newest phone and realise that yours is now a few years behind. Even if there is nothing wrong with the phone. Or anything wrong with the camera, for that matter; people still replace phones because the camera features of newer phones are better than the camera features they have on their current phone. It's often not a dealbreaker in itself. But it adds up to the dealbreaker, if that makes sense. Anyway. One reason that Apple, among others, are adding more and more camera lenses in their phones is that a regular camera has the ability to zoom better. By physically alter the distance between the lenses, they can focus on objects at different distance away from the camera. But even though there are some experimentation going on with cell phone lenses that shift focus with the help of an electrical field, that stuff has still not caught on yet on the market. It's still cheaper to add a secondary lens that takes over when necessary than to have a lens that can focus properly. And it probably will be for a long time. That said, it's still better to buy a proper camera. But...that is beside the point. Cellphone cameras are getting better. That's why there are more lenses added. Because better can even for a short while mean that you are *best*, which everyone thinks is awesome and all."
],
"score": [
5,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d2p53z | How does a GPS work out an ETA? | I noticed this the other day: I turned my Google Maps GPS on and put in a destination. It told me the trip would take 9 hours. I drove ~8-10 over the speed limit, hit almost no traffic, and it took a bit under 9 hours. How does it calculate time? I would have thought going 8 over the limit for 9 hours would have meant I would have gotten back over an hour earlier, no? | Technology | explainlikeimfive | {
"a_id": [
"ezw0dy7",
"ezw0iiv",
"ezwe76u",
"ezw0ei3",
"ezw24ll"
],
"text": [
"It has all the data from all phones and cars using its service and knows exactly how long the average drive. Knows there are traffic jams based on peoples phone locatio and them getting tied up. It has been tracking car travel data and with google earth driving on every road taking pictures it has a pretty good database for everything road related",
"How many times did you stop for breaks and fuel? GPS arrival times don't include any stops. It calculates a straight though trip based on posted speeds.",
"The ETA is also constantly updating - it will not show that you \"arrived earlier than expected\".",
"If going over the speed limit is the norm for a stretch of road Google maps will know, they can calculate to traffic speed regardless of the speed limit. This is what happens when traffic is moving slower than expected as well.",
"Apps like Waze and Google Maps (who bought Waze a few years back) use crowd sourced traffic data. What this means is that it takes the average speed data of everyone that is travelling on that road and using their app. If the average user is speeding, then that is the data it is using to calculate the time it will take."
],
"score": [
6,
4,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
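The answers boil down to: the ETA is the sum of segment length divided by the *observed* average speed on each segment, not the posted limit. A sketch with invented segment data:

```python
# Sketch of the crowd-sourced ETA the answers describe: each road segment's
# travel time comes from the speed users were actually observed driving,
# not the posted limit. All segment data below is invented for illustration.

segments = [
    # (length in miles, posted limit mph, observed average mph)
    (120, 65, 72),   # rural interstate where most drivers speed
    ( 30, 55, 48),   # congested stretch slower than the limit
    ( 50, 65, 66),
]

def eta_hours(segs, use_observed=True):
    return sum(length / (observed if use_observed else limit)
               for length, limit, observed in segs)

print(f"ETA from posted limits:   {eta_hours(segments, False):.2f} h")
print(f"ETA from observed speeds: {eta_hours(segments, True):.2f} h")
```

This is also why driving 8-10 over barely beat the estimate in the question: the estimate already assumed typical (often speeding) traffic.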
d2qrld | why do some automatic cars have a manual mode? What are the advantages and why is it there? | Technology | explainlikeimfive | {
"a_id": [
"ezw94w6",
"ezwamtp",
"ezw9jdt"
],
"text": [
"It's useful for when you want to stay in a lower gear, like if driving through snow. It can also just be more fun, when I want to drive fast I tend to go into manual and make the gear changes at higher revs.",
"You can take the revs higher than an automatic would normally change gear at, so you can accelerate quicker, and you can use engine braking to help slow down a bit quicker too. And it just generally makes it feel like you've got more control. I'm British and nearly everyone here learns in a manual car to start with, and about half of cars are manual anyway.",
"Automatic cars involve using a system that (generally pretty accurately) guesses what gearing it should use, typically involving how hard you're pressing the accelerator. This is *probably* more computerized than when I learned about it, but my first automatic car had what was called a kick-down linkage, which was in essence a cable that connected to the throttle-body of my engine to my transmission, and if the pedal was pushed down all the way (or past a certain threshold, more accurately), it would pull that cable and tell my transmission to go down a gear so that I could get more acceleration. But the manual mode or sport mode is there for basically that function: if you know you are going to want more torque or get your engine to higher RPMs for a burst of acceleration (for passing on the highway, for example), you might want to pre-emptively downshift so that you can get that without just flooring the throttle immediately. That and it's fun enough that some people will pay extra for a sport mode."
],
"score": [
7,
7,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d2t4c8 | how exactly does a lie detector test work? | Technology | explainlikeimfive | {
"a_id": [
"ezwn5g9",
"ezwr863"
],
"text": [
"It doesn't work. Lie Detector Tests are pseudoscience. However, many people (erroneously) believe that LDTs work, so they refrain from lying even though the tests don't actually work the way people think they do. It's all a sham.",
"A lie detector or polygraph is basically a device that measures a bunch of vital signs such as: blood pressure, pulse, respiration, skin conductivity (e.g. sweat) and presents the status of these things to a view. The belief underpinning the use of polygraphs is that by tracking how these signs change in response to questioning, you can determine whether or not someone is being honest. What the polygraph measures is physiological arousal or, roughly: stress/anxiety. The idea is that a person who is lying will experience more stress than someone who is telling the truth. While this might sound good in theory, in practice this is not the case. People can experience stress for a variety of reasons, including disorders, substances, substance withdrawal, being in a heightened emotional state. Additionally, people might be inordinately calm and not suffer increased stress when behaving dishonestly."
],
"score": [
5,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d2uamq | Forgive me if this has been asked before, but I’m relatively new to the sub. When a discount wireless company (Straight Talk, etc.) says they run on the same networks as the big carriers, is that true? If that is the case, why are they not more widely used? What’s the catch? | Technology | explainlikeimfive | {
"a_id": [
"ezwva6d",
"ezxdi5n",
"ezwzb4d",
"ezxyuz3"
],
"text": [
"They do. They are referred to as an MVNO (Mobile Virtual Network Operator). They buy bandwidth on the host network at wholesale, and resell it at lower margins than the parent company. Think of it this way... Your local mini-mart buys soda from their distributor, who buys it from the bottling company. Each person adds a fee, as well as the mini-mart, and you end up spending $2 on a can of cola. Costco buys directly from the bottler. Since they do so much business, they buy large enough quantities that the bottler will run trucks directly to Costco, so the local distributor is cut out. Additionally, the rent per square foot, and the staffing per product for Costco is much lower than for the mini mart, so overhead is lower. Because of this, and their overall business model, Costco is willing to work on a thinner margian, and bank on volume to make their money, and you might pay 20c for the same can of cola. Buying straight from the wireless company costs more because they have to pay for customer service reps, stores, and advertising for each customer they bring in. The MVNO's buy in bulk, and spend less on all of these things. Many MVNO's have little or no phone-based customer service, no retail locations, minimal advertising (especially no primetime advertising), and overall low overhead, so they pass the bulk savings on to you.",
"One thing that others did not mention is that MVNOs typically don't have the same roaming rights as their host networks. I.e. when you sign up with one of the Big Three, they would typically have coverage outside of their own towers through peering contracts with other operators in the area. MVNOs don't get access to those contracts, so they might have less coverage even though their base network is the same.",
"It's possible to \"rent\" access to the cellphone network, and so for example, Ting uses the Sprint network, so for the minutes you buy from Ting, a percent go to Sprint to pay for the network. The *amount* of access you have varies depending on Ting's agreement with Sprint. You also may have limited options for roaming, or lower data speeds, that would be different from the regular Sprint customer because Sprint may not prioritize access for these leased-service customers versus their direct customers. But, for the most part, the people I know who go with the cheap plans don't have any complaints, because they don't use the service enough for it for the difference in quality to be a problem versus signing up with Verizon to get all the bells-and-whistles.",
"/u/spasticpoodle addressed how MVNOs rent time on other networks, but in regards to your question of why more people don't use them: Generally speaking, you get what you pay for. Verizon (or Sprint, etc.) customers get priority on congested towers and get more speed. Verizon works internationally without a lot of hassle. Verizon has higher end flagship phones, whereas most MVNOs offer previous generation phones. Most MVNOs are \"unlimited\" but throttle you dramatically when you exceed, say, 1.5 GB in usage. As with everything else, there are different products to meet different budgets. My personal experience with MVNOs has been that they're cheaper, but not as robust and reliable as top tier carriers."
],
"score": [
203,
13,
9,
4
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d2yzo5 | how do VLANs work? I always hear "just put everything on a separate VLAN for security", but at my job we have hundreds of VLANs with rules so they can all talk to each other. What's the point, besides organization, if they can all talk to each other? | So for example I'm on VLAN 5 and another computer is on VLAN 40. I can log into that machine remotely and access all the same shares, everything. Is there still security in being on separate VLANs, or is it moot because they can all talk to each other? | Technology | explainlikeimfive | {
"a_id": [
"ezxogym",
"ezxnmg0",
"ezy8y0a",
"ezxt3be"
],
"text": [
"VLANs are to logically separate traffic over a shared infrastructure. Before VLANs, if you had the same Ethernet segments in different floors, for example sales on floor five and seven, to have their Ethernet segment connected you needed to bring a separate cable between every floor for every Ethernet segment. So worst case you needed N! cables in your services riser, where N is the number of floors. Why do you need to be on the same Ethernet segment? Several reasons: - Because the protocols for the discovery of printers, scanners, file shares are all based on Ethernet broadcast. Do you want to want to print something on the printer for your colleague at floor five? Just click on \"Floor five printer Sales\" icons, much easier than having to maintain lists of IP addresses of printers and hope you got it right. - The security policy for the computers attached on that VLAN: Some computers you don't want, or just do want, to be accessible by everybody: Printers for example could be accessible on the VLAN which everybody can access, the PABX and security cameras could be accessible on the VLAN only for the security team. And the last part is the normalization of calling Ethernet broadcast domains (or Ethernet segments, or LANs) a VLAN. It's not for the worst, but unless you know that a VLAN originally was for the logically separation of traffic over a shared infrastructure, it can be confusing if you have VLANs everywhere but never the same VLAN on other devices.",
"Vlans separate traffic across your physical network. Your company doesnt have ACLs in place to prevent access, but the vlans are still useful for mitigating potential problems. If one faulty NIC goes crazy, it affects just the devices on that same vlan. Also, vlans are useful for controlling filtering of devices in some firewalls/content filters. For example, we have Student devices on their own vlan/subnet so we can filter them more strictly.",
"Each VLAN is its own broadcast domain where each computer in it can talk directly to each other computer in it, without being restricted by the network infrastructure. In order to communicate with a computer in another VLAN, your computer's data has to pass through a router, which has the capability of restricting the data in various ways. It is far easier to restrict access between VLANs than to restrict access between computers in the same VLAN. That's where the security benefits come from, but it's not required that any restrictions exist.",
"Any place you want to physically separate traffic as though it were on 2 (or more) different switches, you can use vlans to separate them in the same way. Two 4-port switches not connected to each other is functionally identical to a single 8-port switch with half the ports on vlan 1 and half on vlan 2. Multiple vlans \"talking to each other\" requires something to be on multiple vlans at once and act as a relay. Usually that's a router or firewall and those tend to have good firewall capabilities. PCs and servers can do it as well. Thing is, switches (and servers if you go through the effort) support vlan tagging meaning that when you move data between two switches the vlan number is preserved on each individual packet and the other switch honours it. So now you can have multiple vlans safely traverse multiple buildings, cities, etc while still honouring your separation rules using a single \"normal\" network connection, where \"normal\" here is probably fibre-optic for such distances. Hell, at home I have my router in one room because that's best for the Wi-Fi and is where my computer and NAS etc are, but my cable modem is in another room with the TV and game systems because that's where the cable comes into the house. So I have 2 vlan-enabled switches (yes I spent some cash on this) and one vlan is literally just cable modem - > router WAN port through 2 switches on a private vlan. One cable goes through the wall between switches and it all just works. And all the important stuff gets wired directly rather than WiFi when it can. So, yeah, lots of vlans is absolutely a thing."
],
"score": [
7,
5,
3,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
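The "vlan tagging" the last answer mentions is 802.1Q: on a trunk link between switches, a 4-byte tag carrying the VLAN ID is inserted right after the source MAC so the next switch keeps the frame in the same VLAN. A minimal sketch of that frame surgery (the toy frame contents are invented):

```python
import struct

# Sketch of 802.1Q tagging: when a frame hops between switches on a trunk,
# a 4-byte tag carrying the VLAN ID is inserted right after the 12 bytes of
# destination + source MAC, so the next switch honours the same VLAN.

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q-tagged

def add_vlan_tag(frame, vlan_id, priority=0):
    """Insert an 802.1Q tag after the dst+src MAC addresses."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # PCP(3) | DEI(1) | VID(12)
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def read_vlan_id(frame):
    tpid, tci = struct.unpack("!HH", frame[12:16])
    return tci & 0x0FFF if tpid == TPID else None

# A toy untagged frame: dst MAC, src MAC, EtherType, payload.
frame = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, vlan_id=40)
print(read_vlan_id(frame))   # None: untagged
print(read_vlan_id(tagged))  # 40
```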
d2yzow | how we detect water in the atmosphere of a planet 110 light years away. | Technology | explainlikeimfive | {
"a_id": [
"ezxog5f",
"ezxycsb"
],
"text": [
"We use super-powerful telescopes that can look out into space in a wide range of spectrums of light. So when we start looking for potentially habitable planets, what we often start with is finding planets that are orbiting around a star in the \"Goldilocks Zone\". This term is a play on the Goldilocks story, where the perfect bowl of porridge is \"not too hot, not too cold\". So when a planet is too close to its sun, it's too hot for life to exist (as far as we know) and so hot that any water it might have had would just evaporate away. And when it's too far from the sun, it's too cold for life (as far as we know) and any water it might have would be frozen solid. So the Goldilocks Zone is where a planet is just the right distance away from the sun to have 1. Liquid water, and 2. Potential life.. So once we find a planet at the right distance from the sun, we can look at it through these powerful telescopes. Now the specific planet we're talking about here is much too far away for us to see it in detail, so through a telescope it kind of just looks like a little dot. But what we CAN see is the sun going dim each time the planet passes in front of it as the planet orbits. It's almost like seeing a little mini solar eclipse. Every time the telescope sees the sun go dark for a second, we know the planet has just passed in front of it. Ever more cool is that the scientists can then look at the color of the sun's light as it passes through that planet's atmosphere. Like imagine if you had a pink balloon and you filled it with blue water. If you just hold the balloon in your hand all you see is the pink rubber. But if you hold the balloon up in front of a really powerful lightbulb, that light will pass through the balloon and shine through the blue water, and we can see the blue through the thin pink skin of the balloon. So when we watch the planet, and we see the light of the sun filtering through its atmosphere, we can tell what that atmosphere is made of, because different gasses and substances show up as different colors. And by analyzing the colors we can tell that there's a great deal of water vapor in that atmosphere. We can't tell exactly how much water, but we can see that it's there, and that's a big deal. Now, just because a planet is in the Goldilocks Zone and it has water, that doesn't automatically mean that we could just land on it and live normal lives. It's too far away for us to see the surface so it could be covered in evil man-eating slime monsters, or it could be a planet that has water but absolutely no life at all, or there could be weird gasses that would kill any life form we know of. But finding water is a good start, and as technology advances we'll be able to see more and more.",
"A star produces all colors of light (in different brightnesses depending on temperature). If you put this light through a prism you get a rainbow. But this rainbow has dips in how bright it is depending on what it is made of. Hydrogen has dips at red (656nm), aqua (486nm), blue (434nm), and violet (410nm) for instance. Every element or molecule has these kinds of spectral lines and they are unique. When it is hot and by itself these lines are bright (emmision spectra). When these molecules are blocking something brighter (like a star) they appear dark (absorption spectra). When the planet is not in front on the star you measure the brightness of each color to establish a baseline. You then do this when the planet is in front of the star. Any new dark lines tell you what gasses are present on the planet."
],
"score": [
64,
7
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
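Both answers describe transit spectroscopy: compare the star's spectrum alone with its spectrum while the planet crosses in front, and look for wavelengths that dim more than the planet's opaque disk alone can explain. A sketch with invented numbers (though 1.4 um really does sit in a strong water-vapor absorption band):

```python
# Sketch of the comparison the answers describe: measure the star's spectrum
# alone, then again while the planet is in transit, and flag wavelengths that
# dim *more* than the planet's opaque disk alone explains. All brightness
# values here are invented for illustration.

baseline = {          # relative brightness per wavelength band, star alone
    "1.1um": 1.000,
    "1.4um": 1.000,   # 1.4 um sits in a strong water-vapor absorption band
    "1.7um": 1.000,
}
in_transit = {
    "1.1um": 0.9870,  # planet's solid disk blocks ~1.3% everywhere
    "1.4um": 0.9852,  # extra dip: the atmosphere absorbs at water's wavelengths
    "1.7um": 0.9871,
}

disk_only_dip = 0.0130  # transit depth expected from the opaque disk alone

for band in baseline:
    dip = baseline[band] - in_transit[band]
    excess = dip - disk_only_dip
    flag = "  <- absorbing gas here" if excess > 0.0005 else ""
    print(f"{band}: transit depth {dip:.4f}, excess {excess:+.4f}{flag}")
```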
d30clg | What actually matters in computers and internet regarding speed and performance? | Technology | explainlikeimfive | {
"a_id": [
"ezxwz9q"
],
"text": [
"Okay, imagine your computer like an office cubical. The CPU is the employee, the RAM is the surface of the desk, and the hard drive like filling cabinets. Some programs are like paper work with lots of pages so they take up more desk space, other programs are like harder paper work so it takes more employee engagement. The trick with surfing the web is that you're not just connecting to the internet, you're running a program to interoperate the data and follow the protocols. So that too will take desk space and employee engagement. If you're anything like me, you run too many tabs for your own good and those tabs can add processes to the work load."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d32325 | How does hacking through WiFi really work? We know that data can be stolen, but what are the hackers who have access to our information actually seeing on their screen? | Technology | explainlikeimfive | {
"a_id": [
"ezyfcqo",
"ezy7dr4",
"ezy702p",
"ezz9m6b",
"ezzaa56",
"ezyf1j4",
"ezy6x05",
"ezzknys",
"ezz5tg4",
"ezzjrek",
"f00mfz2"
],
"text": [
"When you go to a website, you send a request to a server for information. For example, when you go to [ URL_5 ](https:// URL_5 ), you send Google's server a request asking for their homepage. Their server sends that information back to you, and your web browser formats it correctly. > [192.168.0.3]( URL_2 ) wants to access [ URL_1 ]( URL_0 ); please send the homepage code. When you log in to a website, like your bank, you have to send some extra information to the server so that it knows who you are. That's usually your username and password. When it's not encrypted, that information is sent in plain text, right alongside of the website that you're trying to get to. > [192.168.0.3]( URL_2 ) wants to access [ URL_7 ]( URL_3 ), their username is [[email protected]](mailto:[email protected]) and their password is hunter2; please send the transaction list. Once you've told the server who you are, they sometimes send back a session key; this is similar to a coat check. When you go to the website later, you don't have to give your username and password again - you just give your coat check, and they can identify you from that. That keeps you from having to send the password repeatedly, and saves the server from having to re-authenticate you every time. However, just like in real life, if that coat check gets stolen, anyone can pick up your coat (your data) with it. On a public wireless network, anyone else can scan the network for these requests, and they'll see [every \"packet\" of information being sent over the network]( URL_4 ). From there, they can search the stream of data for patterns, such as looking for e-mail addresses. They can then see your password in the same request, and voila - they have your information. If you're interested in how encryption works, I'd highly suggest [this video]( URL_6 ) which explains the protocol really well and in an easy to understand way. **Edit:** It looks like I removed the part about encryption while I was editing my comment last night. Encrypting these requests is incredibly easy nowadays, so most websites will encrypt the data that you send it and the data that it sends you. It’s explained better in that video I linked above, but you and the server basically agree on a shared secret phrase that people scanning the network can’t figure out. That way, only you can see the data that you’re being sent and only they can see the data that they’re being sent.",
"When I was in college, we learned about hacking WiFi but were told that with current encrypted WiFi networks and https, purely getting packet traffic would not be helpful. Our campus shared space with a university, so my study group went over there and realized that their WiFi was unencryted. What was worst was their site to manage your courses and enrollment was also not using https. We literally watched as a student logged in and could see his password and username (which happened to be his email). From that, we could unenroll him for his course in a matter of seconds. The worst thing is with more digging we found he was using the same password for his Facebook. It was scary as hell to realize it could literally be that easy to hack someone This was 2008 so hopefully it has changed.",
"First, all data on an unencrypted Wi-Fi network can be received by all parties, like people talking in a public room. Secondly, they don’t see your screen but what you are sending and receiving over the network. For example, if you use an unencrypted protocol (like HTTP) then they can capture the packets and see what your browser is asking for and what the server is sending back. If this includes logging into a site then they can get your hashed password and run decryption attacks against it. The real danger there is that people tend to reuse passwords, so they can try your credentials against multiple popular sites. If they crack it, they can log in as you. This is why HTTPS is important to use (and why all banks, etc use it.)",
"Many of the answers here answer the \"what they are seeing\" portion really well. However more or less all of these answers talk about public/unencrypted WiFi networks. There are two ways to \"hack WiFi\": - Eavesdropping the communication between your device and the WiFi network. - Pretending to be the WiFi network and making your device communicate directly with the hacker. Eavesdropping works with unencrypted communication. Encrypted WiFi or encrypted communication (HTTPS) both defeat this to large extent (there are still things a hacker might learn on unencrypted WiFi, but if you are using HTTPS to read your e-mail, the hacker should not be able to read those). However if the hacker manages to trick your device into connecting to their WiFi network, they can now start messing with the communication in other ways as well. Not only can they achieve everything they could do previously by eavesdropping the communication, they can now also change it. They might try to change an encrypted connection in ways that makes it easier for them to break the encryption, they might completely alter the pages that you are seeing over unencrypted connection or they might even try to instruct your installed applications to do something the applications normally wouldn't do such as \"send all local files to us\" or \"install this totally-not-a-virus on the device\". The scary thing is how easy it is to have your device connect to a hacker's WiFi network. If you have your phone set to connect automatically to your HomeWiFi, CoffeeShopWLAN and UniversityWireless, it will keep calling for those when you are walking down the street. Essentially it will keep yelling \"Is MyHomeWifi, CoffeeShopWLAN or UniversityWireless around?\" the whole time WiFi is on and it's not connected to a network. At this point the hacker can just listen for those calls and then start advertising their own WiFi network as \"MyHomeWifi\" for example. Your phone can't tell the difference and will happily connect to the hacker's network. (At least few years back the devices didn't even check if the original network had been encrypted and the new network is now unencrypted. Not sure if this has changed in the last few years.)",
"Top comment right now doesn’t really answer the question and I’m curious as well. What does the screen of a hackers computer actually look like when they are doing this?",
"It’s still scary how much is unencrypted today such as DNS requests. When you perform Air-Pcaps (sniffing packets in the air) near hotspots You can usually see all the domains people are resolving and can build a profile of their internet usage.",
"When you're on a public WiFi and sending data unencrypted they can read all your data. They're looking for usernames/passwords, usually with programs. So they're usually watching a visual stream of data packets (Google Wireshark and check the images for an example) and waiting for their search finds a hit.",
"I worked in Information Security company that demonstrates exactly this. Good question. it has been answered in some ways, I'll go a different take: The \"Wi\" in WiFi stands for Wireless, that is, Over-the-Air (OTA) communication via electromagnetic signals in the radio band, more specifically around 2.4GHz. To answer what a hacker might see, let's take a look at the several layers information goes through in the process of accessing the Internet: The OSI Model describes an abstract method of communication between two (or sometimes more) parties. Broadly speaking, a 5 layer model will look like: (merging layers of 7-layer model) 1. Physical layer - the actual signal 2. Link layer - \"neighbors communication\", i.e. between adjacent devices 3. Network layer - communication within a network of devices (e.g. The Internet) 4. Transport/Session layer - responsible for handling \"full conversations\" (opposed to single packets of data) 5. Application layer - basically anything software adds on top of communication. (e.g. custom server applications, protocols, etc.) Back to what a hacker would \"see\": it all depends on which layer he is able to tap to! Starting with layer 1 - Physical: These signals are not much different than light we see, other than, well, we can't see them. But light is a great analogy for this. Think of a flashing light bulb - using the intensity of the light, the color or the frequency of flashes, it is possible to encode messages. Just imagine your friend sending you morse code using a flash light! A person or device (not necessarily malicious) who would tap to that layer would be able to measure the physical difference in the magnetic field, which when plotted over time - produces a signal. This is a whole story within itself, so without going into too much details, just think of a line graph - sort of like heart monitor or lie detector. The transceiver (transmitter-receiver, e.g. WiFi chip) would know how to decode these messages and pass them to the next layer. Now let's skip ahead to the last layer - application. One thing I have yet to mention is encryption! While this can be done in any layer, let's focus on the Application layer. Assuming a hacker was able to tap to your wireless communication, a good encryption would still prevent him from eavsdropping or modifying the underlying data. Unfortunately, in practice, much of the data is poorly encrypted, suffers from flaws or completely absent at times. In such a case, whatever you see in your browser when you browse the web, may be replicated and mirrored to the hacker and even modified. Hacking is a whole topic within itself, so to summarize: TL;DR: a hacker might see anything from meaningless signals to those \"cat videos\" you thought was secure to download in Incognito mode through VPN within a Virtual Machine; all depending on his attack vector.",
"What they visually see is a list of network requests. Most of them are not interesting, because it's just establishing a connection and finding the correct device to go to etc. Like others said, it gets dangerous when they can see what you sent over an unencrypted connection like HTTP. There they can see the payload in unencrypted form, aka plain text (even files get converted to plain text representation so by decoding it hackers can also see what images you downloaded, for example). Also, even if everything is encrypted, packet sniffing leads to valuable information nonetheless: patterns. If some requests and responses always look the same or come from the same location, this information can help the attacker \"spoof\" a legitimate responder by spamming the network with responses that look similar to those they observed. If they get lucky, a client browser / device mistakes them for a legitimate response, possibly leading to the user sending sensitive information to the hacker instead. This is pretty unlikely though and most hackers won't go through that amount of work just to potentially get to sensitive data of one person. But given enough time and effort, it can happen.",
"Another method is to spoof the wifi network and make you connect to their device instead. This is usually done to capture your cookies, then they can spoof your device and log in on the sites that you use.",
"Your computer sends little envelopes of information to the other computer (out on the internet usually) that it wants to talk to. If you are on a wi-fi network with no password, that means the envelope's send-to address, and sent-from address are clearly visible. Because there is no password to lock it with. Now is where encryption should come into play. If you have an \"encrypted\" connection to the other computer (like HTTPS, where you see the lock icon in your browser), it means the two computers both agree to scramble the letter in every envelope with a secret code. Only someone who knows the secret code can un-scramble the message your computers send to each-other. (Hackers can \"guess\" the code, but it might take a while, depending on how good the code is.) If you use wi-fi with a password, the send-to and sent-from addresses are also scrambled, and the content is scrambled (potentially a second layer of scrambling if you are using HTTPS underneath), so it's harder for hackers to even know what envelope they are looking at and what its purpose is. (Again, hackers can guess this secret code, but it should take them a while if the secret code is good enough.) This is why it's good to use a password with wi-fi, so your messages are hard to identify. And why it's good to use secure networking underneath, like HTTPS (green lock in the browser), so that if someone CAN identify what the messages are for, they can't read what's inside the envelopes. Any computer security can be overcome with enough effort, or else even the intended user could never use their computer. That said, some security measures are both convenient to use, AND slow down hackers enough to be worthwhile most of the time. Answering your direct question: When hackers read your messages over wi-fi, they see who sent the message (your computer's IP address, but maybe also your computer's name like Jenny-PC, your username on a website, your login session ID, which can be copied and used to pretend to be you!!!), who the destination was (what website, what page of that website, what your search terms were on that website...), and what you were sending back and forth (photos you looked at or uploaded, text you read or entered in, including passwords, your chat messages, your searches, enough info to know what things you clicked on, etc...). What shows first on the screen, in packet analysis, is usually metadata that the computers use to identify every message in an organized way (this can be revealing in and of itself! Don't be reassured when the NSA says they just collect \"metadata\" -- that is often the most valuable info for data analysis anyway, since it is neat and orderly enough to be sifted through by machines, not having to listen to each message in its entirety). Then the actual \"payload\" is shown in the packet analyzer -- what main pieces of text, or image, or video, etc. is being sent from one computer to the other. If you submit a comment to reddit, the comment text is probably the payload, whereas there is metadata attached to that payload so the computer at reddit's data center knows who posted the comment, and what thread it was posted to, and so on."
],
"score": [
11899,
800,
171,
92,
41,
18,
14,
7,
6,
5,
3
],
"text_urls": [
[
"https://google.com",
"google.com",
"https://192.168.0.3",
"https://bank.example.com",
"https://jvns.ca/images/wireshark_screenshot.png",
"Google.com",
"https://www.youtube.com/watch?v=3QnD2c4Xovk",
"bank.example.com",
"https://Google.com"
],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
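Several answers in this entry describe watching unencrypted traffic with a tool like Wireshark. Below is a minimal sketch of the same idea in Python, using the third-party scapy library (assumed installed); it needs admin rights and should only be run on networks you are authorized to monitor. It prints the request line of plain-HTTP traffic, exactly the kind of data HTTPS hides:

```python
# Sketch: show why plain HTTP is readable to anyone on the network.
# Assumes scapy is installed (pip install scapy); needs root/admin.
# Use only on networks you are authorized to monitor.
from scapy.all import sniff, Raw, TCP

def show_http(pkt):
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # HTTP requests are plain text, so anything sent over port 80
        # (including submitted form fields) appears here verbatim.
        if payload.startswith((b"GET ", b"POST ")):
            first_line = payload.split(b"\r\n", 1)[0]
            print(first_line.decode(errors="replace"))

# BPF filter keeps only traffic to/from port 80 (unencrypted HTTP).
sniff(filter="tcp port 80", prn=show_http, store=False)
```

Pointed at HTTPS traffic (port 443), the same sniffer sees only ciphertext, which is the point the top answer makes about encryption and session keys.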
d32svb | How do companies like Apple and Qualcomm continue to produce faster and more powerful chips year after year? Are engineers still making new discoveries in the industry or have we had this technology all along and are controlling the rate at which our technology improves? | Technology | explainlikeimfive | {
"a_id": [
"ezycsk7",
"ezydbsm",
"ezyft8g",
"ezybhxi",
"ezyc9rw",
"ezyfiel",
"ezybeui",
"ezyf06w",
"ezyc2jb",
"ezyiliw"
],
"text": [
"Apple and Qualcomm depend on the chip foundries like Intel and TSMC. They have engineers and physicists trying their best to fit more and faster transistors on to chips ASAP, while keeping power usage manageable. Then you have CPU designers like Intel and AMD trying to make the best use of those transistors to achieve the best CPU performance per watt. It takes time for advancements in etching transistors on silicon to make their way through to consumer products. At times, if one manufacturer has had a lead over its rivals, there have been suspicions that they were sitting on advances until the others caught up but this doesn't seem to have caused drastic slowing of progress. One problem is that it's now so expensive to be in the market that there are fewer and fewer competitors.",
"The short answer is that engineers are still making new discoveries in the industry! Roughly speaking, you can sort of think of the processing power of a computer chip being proportional to how many individual elements (transistors) you are able to fit on one chip. The smaller you make each individual element, the more you can fit on a chip and the more powerful that chip will be. This transistor density has been roughly scaling with Moore's Law for the past \\~50 years and while Intel has at some points purposely slowed down their development to better align with Moore's Law, in general Intel, Samsung, and TSMC (the 3 large chip makers) are all releasing the most powerful chips as soon as they can reliably manufacture them. One of the main improvements that drives the increase in processing power is again related to how many transistors we can fit on the chip. The ELI5 is that we need to draw these transistors into the chip with light. The longer the wavelength of the light we use, the larger the features will end up. Originally, they used mercury lamps (wavelength \\~400 nm ) which limited the feature size to a couple hundred nanometers (still really small!) but they are now using what we call extreme ultraviolet (wavelength \\~10 nm) which enables us to shrink everything down to about 10 nm. EDIT: A lot of people have been mentioning limits to scaling (ie: the death of Moore's Law) and while there definitely are fundamental laws in physics that limit scaling, there are a lot of neat tricks engineers have come up with that will probably continue to drive improvements for years to come (both in terms of materials and architectures). One of the top examples that come to mind off the top of my head is introducing what we call \"high-k atomic layer deposition\" in the mid 2000's. ELI10 of high-k ALD: A lot of people in the thread have mentioned \"quantum tunneling\" which basically occurs when something becomes so thin, an electron can just appear on the other side (wave-particle duality sucks). If you break a transistor down to its most simple parts, it is just a capacitor and engineers want to make the dielectric as thin as possible (think of your basic parallel plate capacitor formula). At some point, all of your electrons just \"tunnel\" across which leads to a bunch of leakage/power loss. Instead of scaling the thickness, engineers switched the dielectric material to one with higher dielectric constant (\"k\") which allows you to get the same device performance with less leakage.",
"Before you make a 1mm drill bit, you probably have to make a 2mm drill bit, to build the tools you need to make a 1mm drill bit. You don't start with rocks and sticks and make a 1mm drill bit. It's an iterative process, where precision at one scale leads to precision at the next smaller scale. Chip technology has progressed in that fashion. It's very similar to how t-shirts are printed, via lithography. You make images of the circuits, and project them onto a silicon crystal. There are a few dozen other steps. At each generation, the scale gets smaller, making them faster, and more powerful (larger circuits). This is Moore's law. Eventually, there are hard physical limitations to how small you can scale this process. We're more or less hitting these limitations, now. So, we're turning to other strategies, like putting more chips in one package. It's unclear at the moment what direction the technology will go. tldr; for many years, the path forward has been obvious, and iterative: make it smaller. Now, the path forward is less obvious, and may require fundamentally new technologies.",
"We're making new advancements. Stuff might slow down a bit (there's some theoretical caps that we're trying to working around) but engineers are still developing new technologies and getting better at doing the ones we have.",
"I work at a company that produces components that go into the machines that produce chips. Atomic Layer Deposition (ALD) is one of the processes for chip manufacturing and LAM Research has a nifty video on YouTube about it. It's crazy how much goes into it the technology. Surface finish, cleanliness, and cycle time are among the most important that come to mind. There's a lot that can ruin the process so these factors are continuously monitored by our quality control group.",
"there's a lot of speculation here, but the boring real answer is that the semiconductor industry requires a lot of moving parts so to speak, from many companies doing different things, from fab equipment manufacturers, the foundries themselves, EDA tool companies, and many others. you can't just go from one generation of chip to the next without everybody moving in lockstep. so there are global semiconductor industry groups that will dictate this, and one of the most important documents is the International Technology Roadmap for Semiconductors (ITRS), now called the Intl Roadmap for Devices and Systems (IRDS). this document lays out what each generation should strive for and what each gen can likely support. this means that all these moving parts know what they'll want to support for the next few years. chip makers will have an idea of what every generation can likely support, what their costs will be, and target a certain tech for a certain chip years in advance. basically, all these companies have to advance together to make better chips. it's not that they are still making new discoveries per se (they are but it's not like every day ooh new discovery!), nor that they have all this tech already, but more that they know what they have to hit every generation, will work on that, and then refine it for the next generation. a lot of things that apple and qualcomm do is simply refine their chip designs for the next gen, and move down to a tighter process when foundaries figure out how to improve their yields. so thats how it works for the industry as a whole. for a chip designer though, a lot of improvements just come from being able to refine their existing stuff and make gradual improvements. remember, a company like apple or qualcomm has a major time deadline for their stuff, so they will decide for this year, they will make a chip that hits these features. then for next year, since we will have more time and a working chip, we can add these new features. and so on. so it's not like they didn't know at the time, just that they don't have the time to implement everything. most chip companies will have roadmaps several years in the future on what they plan to produce and what features those chips will have. new discoveries are still being made, but these aren't what is driving most year to year improvements. when new discoveries are made (usually at the research level), they get invariably placed on the roadmap so that all the companies can start working on hitting it years in the future.",
"The engineers aren't making breakthroughs, we do have the technology. However, they aren't exactly controlling the rate which our technology improves. Ultimately it comes down to costs. Apple could build a beast of a phone right now but it would cost 10,000 so no one would want to buy it. So, they build a mediocre performance phone and make it cost $1000. Then next year, slightly improve the performance and make it cost $1000.",
"From the perspective of a software engineer, one of the things that makes chips better and \"faster\" is new features, like better support for matrix operations (important for ai) or new security features (for example, to better prevent one program from seeing what another is doing). However, the software needs to be written to use these features after it's released in a chip. So we'll often see (for example) Intel release a simple version of a feature, see how it gets used, and then improve it in the next processor version, etc. This is necessarily slow, because programmers can't use a feature until it's available in a chip they can buy, and Intel can't improve a feature until they see how it's used.",
"Does apple make its own chips? I think they design some of them for their phones but outsource production to real semiconductor manufacturers. I know for certain Apple uses Intel semiconductors in their Mac products. There are always improvements in semiconductor design and engineering. There are plenty of companies that, that is their whole purpose. A notable one would be something like Intel. Most companies do incrementally release tech to keep it replaceable and not \"future proof\" on purpose so you keep buying more, but since the discovery of parallel processing CPUs kind of took a huge leap in processing power. There is not much need currently to build faster chips, because CPUs are not the bottleneck right now in terms of getting better performance.",
"A little of both, but not quite like you might be thinking. The discoveries are happening all the time. Someone, somewhere has a bright idea, tries it out, and it works. This is just the start of the journey (and we are ignoring all the times it does not even work at this early point). Now that the general idea is proven, you need to find a way to produce it in an affordable way. This can be pretty tricky. You might end up needing to develop a string of yet more technologies just to produce that first one that was your goal. You might need to adapt production techniques. You might need more basic science. Once your lab rats have figured out how to produce the original idea in an affordable way, the whole thing goes into a pilot phase where everything is scaled up. Many times, the ideas that worked well in small-scale production don't work well when you try to do it in any big numbers. This can be very dangerous. My father did this pilot-phase stuff for a living and has many near-miss stories where things turned exciting for a few moments. So now you've got the pilot-scale production down and it's time to go to full-scale production. This is where the final big investment is made. This is the point when not only the manufacturing costs skyrocket, but you start needing sales and marketing investment that can easily rival those manufacturing costs. Now, during all this time, your guys at all levels have been coming up with even better ideas and optimizations. If we move this bit here, we can save 5% on material. If we move that bit there, we can be 3% faster. And so on. This is where the \"control the rate\" comes into play. You have to make a decision: go with the design you have and commit, or wait a few months until the improvements can worm their way through all the development phases. When do you pull the trigger and go to market? Go too early, and someone else might bring out a product a month later that the market prefers. Go too late, and someone else might eat up all the market before you can even deliver the first product. This can be a billion dollar decision. If you time it right, congrats: you are in the next issue of Forbes. If you time it wrong: oops, you are also in the next issue of Forbes. So this is why things keep improving; the entire process has a momentum that is continually bringing out new things. This particular industry has a direct self-referential loop as well. Every time something new does make it to market, all those guys at the beginning of the process have even better tools to come up with even better ideas. By the time those new ideas work their way through the whole process, yet another generation of better tools has found their way into the researchers' hands, keeping the whole thing in motion. The main bottleneck is simply the length of time that it takes to move an idea all the way up to a mass-production concept. So far, only humans could really guide and control the process. The on-going AI revolution threatens/promises to change everything and compress time. If this does, in fact, happen, then we will see a new phase of automation kick in where the whole \"new idea-- > pilot-- > production-- > new tools-- > new idea\" circle starts looking more and more like a point. What happens after that is anyone's guess."
],
"score": [
458,
133,
66,
62,
20,
20,
17,
9,
7,
5
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
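The second answer above leans on Moore's Law. As a rough worked example of what "transistor counts doubling every two years" compounds to, here is the rule of thumb applied from the often-cited 1971 starting point (the Intel 4004's roughly 2,300 transistors); the doubling period and starting figure are rule-of-thumb assumptions, not measurements:

```python
# Sketch: compound growth implied by Moore's law (doubling every ~2 years).
start_year, transistors = 1971, 2_300  # Intel 4004, a common reference point

for year in range(1971, 2020, 8):
    doublings = (year - start_year) / 2
    print(f"{year}: ~{transistors * 2**doublings:,.0f} transistors")
```

Run to 2019 this lands in the tens of billions, the same order of magnitude as real flagship chips of that year, which is why the rule of thumb survived so long.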
d344it | Why the microphone I had lying around has a standard quarter inch jack instead of an XLR connector, and why it's recommended microphones use XLR as opposed to standard jacks. TY. | Technology | explainlikeimfive | {
"a_id": [
"ezys0p3"
],
"text": [
"this isn't 1/4” vs xlr, but 2-wire and 3-wire microphones. a 3 wire microphone is what is called ”balanced” and sends a differential signal. this is fancy talk for one wire being the normal mic signal, another being the normal mic ground, and the third being the inverse of the signal... if the signal is +, the inverse signal is -. drawn as a waveform, it's a mirror image. why? because interference affects *both* the signal and the inverse signal wires identically, and we can pretty much eliminate interference by simply taking the difference between the signal and it's inverse. this concept is used is a *lot* of electronics and is usually called ”differential signaling” as to get a high quality signal at the output, you need to take the difference between your two signal wires."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
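The answer above explains balanced (differential) signaling: interference hits both conductors equally, so subtracting the inverted copy cancels it. A toy numeric sketch of that cancellation; the signal and the noise burst here are invented for illustration:

```python
# Sketch: why balanced audio cancels interference.
import math

signal = [math.sin(2 * math.pi * t / 16) for t in range(16)]  # the mic signal
noise = [0.5 if 4 <= t < 8 else 0.0 for t in range(16)]       # an interference burst

hot = [s + n for s, n in zip(signal, noise)]    # wire 1: signal + noise
cold = [-s + n for s, n in zip(signal, noise)]  # wire 2: inverted signal + same noise

# The receiver takes the difference: the noise cancels, the signal survives.
recovered = [(h - c) / 2 for h, c in zip(hot, cold)]

print(max(abs(r - s) for r, s in zip(recovered, signal)))  # ~0.0
```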
d37d5q | Roman Legionnaires used to throw a spear (the pilum) into the enemy prior to engaging them in melee combat. Why was this technique not used in centuries after? | Technology | explainlikeimfive | {
"a_id": [
"ezzuuhq",
"f00vicw"
],
"text": [
"It was used, but it's expensive. Rome had a solid government that was able to pull decent amount of resources from their population, but in medieval time, they didn't had that structure anymore. Kings didn't have the power to keep control over all their territory, so they had lower nobles, to do that, but in exchange those lord were keeping a good amount of those resources for themselves. At the end, the central state was not able to pull a significant percentage of the resources from their land. For that reason Medieval times didn't really had any standing army. The only military professional were either mercenaries or nobles who usually were heavy cavalry. The infantry was most of the time poorly equipped and only raised during war. So throwing spear was an expense that most people couldn't afford, they barely had any armour and that's would be their priority. You have to wait until late medieval period to see the average soldier wear armour similar in protection to what the Roman used to wear. In addition, the doctrines were different. The Roman used disciplined heavy infantry with short sword and big shield. It work extremely well because of the discipline of the men and their heavy protection. With their armour or shield, the Roman could get in close and use the short sword effectively. On top of that they faced mostly unarmoured tribes so their sword were excellent. In Medieval times the big threat was heavy cavalry and even if most infantry were wearing less armour than the Roman, they were wearing more armour than the average tribes that used to face the Roman. So a sword wasn't really a good weapon of war in medieval times, a spear was cheaper, was better than a sword against chainmail and cavalry. Roman infantry could keep their pilum in one hand, and their shield in another, throw their pilum and take their sword for melee. But when your main weapon is a spear, you can't just put it away while you throw javelin or something like that and carrying in your hands some javelin, your spear and your shield is not really practical. So yah. For infantry throwing weapons were not in their budget and was impractical because of the spear/shield combination was the most efficient weapons at the time. While noble had the money, but they were mostly heavy cavalry.",
"The Roman army phased out the pilum around the time of Diocletian in favor of the lead dart called the *plumbata*. The plumbata was lighter, cheaper, had more range, and required less training. It continued to be used into medieval times. The late Imperial army was more mobile force, centrally located to respond to both threats from outside or usurpers inside the border. It seems to have contained a larger percentage of both cavalry (who often carried bows) and archers than previous incarnations (the Romans always made use of auxiliary and mercenary archers, however). So, the pilum fitted late Republic and early Empire well-trained heavy Infantry legion, but decreased in usefulness as the tactics changed."
],
"score": [
35,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d37dp6 | How do people learn hacking, even though it’s kinda illegal? Where do they find information? | Technology | explainlikeimfive | {
"a_id": [
"ezzu8zq"
],
"text": [
"The knowledge isn't illegal; using it to break into a system without authorization is. \"Without authorization\" is the key point here; penetration testing and ethical hacking are multi-million dollar industries, and proficient white-hat hackers can pull down staggering paychecks. Tools and knowledge are available anywhere you care to look; programs like Wireshark, nmap and Metasploit have as much use in the hands of the good guys as they do in the hands of the bad guys. There are training courses available (CEH is one; OSCP is another, and that's not counting just classes) that will teach you everything you need to know to break into systems."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d3cvg2 | How does calling 911 work from a cellphone? | Technology | explainlikeimfive | {
"a_id": [
"f01lrbw"
],
"text": [
"The phone signal used to connect the call bounces off the nearest cell signal tower and connects to the closest dispatch office."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d3flpf | How are new types of universal technology like USB/USB-C created and who makes them? | Technology | explainlikeimfive | {
"a_id": [
"f02da7c",
"f02goqi",
"f02e8pk",
"f034q4i",
"f03gf5c",
"f03fwbb",
"f03i2q7",
"f03xbo0",
"f04c4tz",
"f03keey",
"f03lwhw"
],
"text": [
"Since 1996, the USB Implementers Forum (USB-IF) has been developing the USB standards. It's an industry group, where companies work together to improve the standardization of their products. New inventions are considered and discussed for inclusion in future versions of the standards. WiFi is handled by the IEEE-802.11 working group, and many other groups standardize other technologies.",
"First, you'll start with a need. I manufacture computers, and hard disks. I need to get data from my computer to my external hard disk. I'll invent a way to do this, and call it \"FastCable\" because it uses cable, and moves data fast. My competitors have the same problem, and invented their own solution called \"SpeedyCable\". Unfortunately for you the consumer, SpeedyCable and FastCable are only similar in functionality. The cables are different, the connectors are different, the protocols are different... everything is different. Some of y'all consumers start getting annoyed by this, as they want to buy my computer, but they have a couple hundred SpeedyDisk external hard drives, so they can't afford to switch. I'm out customers! Talking to my competitors at a convention, I learn that they are losing business to me the same way. Folks want to change, but can't due to the investment. This sucks for everyone! So my competitors and I decide to form a \"team\" to come up with a way to move data from BOTH brands of laptops to BOTH types of hard disks. I'll send a few engineers, the competition sends a few engineers, and some smart people from the Internet join the team too. A year into the project, the team releases UniCable. I've committed to no longer building FastCable devices, switching to UniCable. My competition has done the same. When you dig deep into the code, you learn that UniCable is really just updated/rewritten SpeedyCable, because theirs made for a better \"starting point.\" You can take this same method and apply it to just about any industry standards.",
"USB connections were developed by several companies together. Check Wikipedia for the list. Looks like all the companies were related to PC platforms and had a common problem: Connecting peripheral devices. As for their design process I don't know what they did, but it works good enough. Right? Sometimes industrial standards just happen. Someone develops a great idea like the seatbelt and governments say everbody's got to have it. Some standards are created by organizations that literally create standards. Like ISO, ANSI, NEMA, UL, TUV, etc. If you pick up most electronics you will see a list of logos or acronyms of the standards that they meet on the label.",
"In general, there is usually a consortium of companies or a standard setting organization that sees a need to have a standardized technology in a certain area. For example, 3GPP is a standard setting body in the mobile communications space and sets standards for LTE and features like speech codecs. Sometimes, a standard is created by a bunch of companies submitting ideas and contributions for combination. Other times, a stand setting body might run a competition of sorts to find the best technology, with the winner's submission being adopted as the standard. The standard setting bodies protect against monopolies and unfair competition through FRAND obligations. I might win a competition for a new standardized music codec, but as part of that, I agree to license my technology to the market on fair, reasonable and non-discriminatory terms.",
"Standards bodies which are generally just self-organized forums run by what is intended to be a representative democracy for the industry. Companies want to work together and not duplicate effort, and cross-compatibility is seen as a benefit to consumers, so they make their R & D a community effort. Let's look at how it normally would play out. So let's say you have USB Micro B (the previous major port standard). Lots of companies need USB ports on their devices so they have thoughts about how it should behave and what can be done with it. Over time, they come up with ideas and improvements on the existing tech. For example, some company (or group of companies) sat down and tackled the directionality problem with micro b. They came up with a new design that made it so you could plug it in either way. They take this new design back to the standards body, and then other companies in the community weigh in. We want this small tweak for our own purposes, says one company. Another company says this feature you added here makes it tough for us to do what we're doing with it, so can we abandon it or find a compromise. So there's some back and forth, and then eventually they agree on a common approach. They each (or the representatives) go out and implement the concept, and if it works out for everybody, they publish the standard. There are other ways standards happen. A company might come up with a design that they couldn't get everybody else to agree on, so they just spin it off as their own proprietary variant. Then if others follow suit, it gains traction, eventually it may become a standard that way.",
"MP3 was made by the Fraunhofer institution. It's a research facility in germany funded by the gouvernement. Source: I work there :D I guess most countries have such research facilities/companies.",
"I know a patent attorney who works for Nokia, they have teams of researchers trying to invent new platform technologies like Bluetooth, in the hopes that it becomes industry-standard and everyone has to licence it from them.",
"Basically, this comic? :) [ URL_0 ]( URL_1 )",
"I both love and hate USB-C. They were too liberal with the standard so the entire thing is just a mess. - USB-C should never be used for USB2.0 only. They should have made 3.0 the cutoff. It can be backward compatible with 2.0 but every USC-C cable should be capable of USB3 minimum. USB 3 is over 10 years old now. USB 2 is like 18 years old now. There is no reason anything should still be using USB2, just like there was no reason to use USB1.1 for anything once USB2.0 existed. Yet USB 2.0 won't die because the industry won't let it. - USB-C is capable of 'full power delivery' at like 87W/20v/5amp. But not every cable supports full PD. And there are no standardized lower tiers, it's just w/e the manufacturer decides to support. - [Alternate modes...]( URL_0 ) what a confusing mess that is. Because it can't support all at once we get something like this. - There are no standardized and enforced labeling/symbols on the cables to differentiate. Right now I label them like USB3.1,FullPD,bidirectional (yup some USBC cables only work in one direction) just so I know what cable is which.",
"“Standards bodies” make these, and typically it’s a group of all the major manufacturers in an industry who recognize they gain more by doing some things the same way than by inventing their own standards for how to do things. They typically just send a few people from each company to meet at regular intervals, discuss the goals for the project, and discuss the topic until everyone agrees on a proposal. When it works it solves a lot of problems. Typically it’s just in everyone’s best interest to standardize. Not only does it bring down costs (third-parties can manufacture things in bulk), but it reduces problems for everyone. One example is the problems with some usb-c chargers breaking the Nintendo switch. If everyone builds their stuff to work exactly the same way, no one has to deal with repairing broken things. But if some people make their chargers work just a bit differently, people end up paying somehow (either in customer support or warranty repairs). ELI5: My classmates all agreed to turn out the lights, be quiet, and take a nap at noon, because if we didn’t all work together, no one would be able to take a nap.",
"Why do chargers for new phones with USB type C don't have 2 type C connectors? Instead of 1 old in the charger itself and 1 on the other end."
],
"score": [
4658,
1010,
62,
47,
17,
15,
6,
4,
4,
4,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[
"https://www.explainxkcd.com/wiki/index.php/927:\\_Standards",
"https://www.explainxkcd.com/wiki/index.php/927:_Standards"
],
[
"https://imgur.com/a/qrwrBXd"
],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d3fucs | How do stealth airplanes/bombers work? | Technology | explainlikeimfive | {
"a_id": [
"f02i341",
"f02fe7j"
],
"text": [
"The easiest way to understand this is to take a flashlight to a reflectorized stop sign at night (most places have these). Hold the flashlight by your head and shine your flashlight at the ground, you'll see a light spot in the darkness. Now shine it at the sign, and you'll see a much brighter spot on the sign. The sign is specially made to return as much of the light to you as possible. Stealth is the opposite. Stealth geometries return as little of the energy to the sender, by sending it in different directions. Any direction other than back to the sender means the sender can't detect the plane.",
"To understand this, you must first understand how normal aircraft are detected- most commonly by a radar station. The radar works by sending a pulse out in all directions. When this pulse hits something, some of it will be reflected back to the station. The time it takes to come back to the detector can be used to calculate the distance, and direction is recorded. This gives the exact position of an object, in this case, an aircraft. Stealth technology counters this in two ways major ways: Engineering the aircraft to deflect the radar pulse away from the station instead of back to it, and paint that absorbs rather than reflects radio waves. This reduces the reflected radar signal, making the aircraft harder to detect. The problem is that the paint is generally very heavy, and having funny angles and shapes on your plane usually makes it pretty hard to fly- it’s not nearly as sleek and smooth as other aircraft. In addition to your radar signature, the heat from the engine is also a good way to detect aircraft. This isn’t as effective as radar in most situations, but is much harder to conceal your heat signature. Because of this, in the Cold War, stealth tech basically got scrapped and speed was the key instead. It’s been making a comeback, but it seems to be in the same category of arms race as body armor: always trailing, never the cutting edge."
],
"score": [
10,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
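The second answer above notes that a radar's round-trip echo time gives the target's distance. The arithmetic is just distance = speed of light times time, divided by two; a quick sketch:

```python
# Sketch: range from a radar echo's round-trip time.
C = 299_792_458  # speed of light, m/s

def radar_range_km(round_trip_seconds: float) -> float:
    # Divide by 2 because the pulse travels out AND back.
    return C * round_trip_seconds / 2 / 1000

print(radar_range_km(0.001))  # a 1 ms echo => target ~150 km away
```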
d3gsxs | How are alternate colors in old sprite-based games made? | Technology | explainlikeimfive | {
"a_id": [
"f02ph2v",
"f02osjt"
],
"text": [
"Two-dimensional images in game data were often made up of two parts: part one is a grid-map which says color 1 goes in the top-left square and the square immediately to the right of that, color 2 goes in the next four squares to the right, etc. Part 2, then, is a \"palette\" that says color 1 is equal to this numeric value which the display system shows as black, color 2 is a number that shows as red, etc. If you modify the image by replacing the palette values with new numbers, you get an image with the same shape, but the colors are different.",
"Sprite colors are mapped onto palettes, so each pixel has one of say, 4 values. 0110 1001 2222 This might represent a shitty hut or something, where there's an arc in one color (1) over a line in another color (2), and then some background color (0), which might be transparent. The computer also had a palette loaded into memory, which would have those 4 colors as bytes in some sequential list. So when it goes to draw, it sees there's a 1 and pulls the [1] element from the palette list, and it sees the 2 and pulls the [2] element, etc. When it's time to draw something that's palette-swapped though, the game can just switch which palette is in memory, changing the list of 4 elements to a different list of 4 elements which represent different colors, and keep using the same sprite. So it would still make the arc out of whichever colors are in the [1] slot, and the ground out of the [2] slot, etc., but those colors would be different because it's using a different palette. This helps save memory because you can have two palettes of four bytes each, and that will double the number of potential sprites you have. So you could have, say, 31 sprites that each take up 4 bytes and 1 palette (32 bytes total) or 30 sprites with two palettes, for a total of 60 potential images for the same 32 bytes."
],
"score": [
5,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
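Both answers above describe a sprite as a grid of palette indices plus a separate color list. A minimal sketch of that indirection; the sprite grid and the two palettes are made-up values, and swapping the palette recolors the sprite without touching the grid:

```python
# Sketch: palette-indexed sprite rendering and a palette swap.
SPRITE = [
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [2, 2, 2, 2],
]  # each cell is an INDEX into a palette, not a color

PALETTE_A = ["transparent", "red", "brown"]    # one costume
PALETTE_B = ["transparent", "green", "white"]  # the swapped costume

def render(sprite, palette):
    # The same index grid produces different images per palette.
    for row in sprite:
        print(" ".join(palette[i] for i in row))

render(SPRITE, PALETTE_A)
print("--- palette swapped ---")
render(SPRITE, PALETTE_B)
```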
d3jv5j | How can satellites give us a steady weather picture of one place all the time if they orbit around the earth? | Technology | explainlikeimfive | {
"a_id": [
"f0397gu",
"f03a941",
"f03akgm"
],
"text": [
"I'm no expert, but a geocentric orbit is a thing, right? Satellite pretty much stays over one area.",
"As others have said, but another term is geosynchronous orbit. It moves slightly in a figure 8 pattern to maintain position. I've worked in Satellite Communications for 15 years.",
"Because not all orbits are the same. There are orbits that allow satellites to be stationary over one place on the surface of the Earth all the time. This is called a geosynchronous orbit. There are also orbits that allow satellites to pass over the same place on the Earth at the same local time of day. This is called a sun-synchronous orbit. Then there are other types of orbits that allow satellites to spend most of their time over one specific area of the planet. Between these types of orbits and having multiple satellites in use, it's easy to have 24/7 coverage of pretty much any place you want."
],
"score": [
7,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
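The answers above rely on geostationary orbits, where the orbital period matches one sidereal day so the satellite hovers over one longitude. Kepler's third law, r = (GM T^2 / (4 pi^2))^(1/3), gives the required radius; a quick check with standard constants:

```python
# Sketch: altitude of a geostationary orbit from Kepler's third law.
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
T = 86_164.1          # sidereal day in seconds
R_EARTH = 6_371_000   # mean Earth radius, m

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"orbit radius ~{r/1000:,.0f} km, altitude ~{(r - R_EARTH)/1000:,.0f} km")
# ~42,164 km radius, i.e. roughly 35,800 km above the surface
```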
d3jy6a | As algorithms and spambots are quickly catching up in terms of solving captchas. Why cant we simply create a whole new type of captcha instead of improving current one? | Technology | explainlikeimfive | {
"a_id": [
"f03bkdd"
],
"text": [
"While captcha was initially created to deter bots, it has been re-purposed to do something more useful as well. For instance, until a couple of years ago, reCAPTCHA was used to help digitize books while deterring bots. Now, Google has re-purposed it to help them do things\\* they cannot automate very well like identifying things on roads, cars, traffic lights, street signs, bridges, fire hydrants, storefronts, etc. If Google can't do it, chances are average spambot people don't have the resource to even try. And if they can, they can get rich easily and legally by simply selling their algorithms to Google. So, the short answer is: as long as Google still can't automate literally everything (that matters) in the world, they won't run out of things for reCAPTCHA, and they'll always be well ahead of spambots. \\*Technically you're creating data that will feed into their data mining models."
],
"score": [
15
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d3l97n | Why does rebooting a computer/phone fix so many problems? | Technology | explainlikeimfive | {
"a_id": [
"f03k4h8"
],
"text": [
"Well firstly you want to ask the question: What goes wrong that causes computers and phone to need to be rebooted? Memory errors - Computer memory is not perfect. All computer memory has potential errors. Now there's a lot of technology to detect and reduce those errors (eg parity bits) but errors can still occasionally creep through. Most single errors have such a negligible effect that you wouldn't even notice - so one pixel out of the 8 million that flashed in front of your eyes for a 30th of second in God of War was the incorrect shade of green. But the longer you have a machine running (without reboot) the more likely errors would have occurred and accumulated in the system processes. Variety of software, hardware & updates - go have a look at your number of running processes on your computer or phone. I have about a hundred different processes running on my laptop currently. Each is a separate piece of software that is either part of the operating system or programs that are loaded in active memory. The oldest piece of software (if I look at system32) is 10 years old. The selection of processing running on my laptop is almost certainly unique to my laptop. There is no way in only 10 years to completely debug all the potential interactions that every piece of hardware and software has with each other. Even the exact hardware configuration on each phone is not guaranteed to be identical between the same models. Memory Fragmentation - Memory is like your room. It starts out neat and tidy with everything in its place but until you tidy it again things just get messier and messier. Once you start running out of space, if you're not tidying on a regular basis, the messiness just multiplies with every task that you are doing. With computers as they load files into memory once they get a file so big that it doesn't fit into any completely unused part of memory it then gets split into multiple parts of which each can then fit into the empty spaces - this is called fragmentation. The more programs you load and the longer the machine runs, without housekeeping, the more your files fragment, the longer it takes to process anything, the hotter your processor runs to compensate, the more likely errors will start occurring."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
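The answer above compares memory fragmentation to a messy room: plenty of free space in total, but no single gap big enough. A toy allocator sketch of that effect; the block sizes and the alloc/free pattern are invented for illustration:

```python
# Sketch: fragmentation - lots of free memory, but no large contiguous gap.
memory = [None] * 16  # 16 slots; None means free

def allocate(size, tag):
    # First-fit: find `size` contiguous free slots.
    for i in range(len(memory) - size + 1):
        if all(s is None for s in memory[i:i + size]):
            memory[i:i + size] = [tag] * size
            return True
    return False

for i in range(8):                 # fill memory with 2-slot blocks A..H
    allocate(2, chr(ord("A") + i))
for tag in "ACEG":                 # free every other block
    for i, s in enumerate(memory):
        if s == tag:
            memory[i] = None

print(memory)                      # 8 free slots in total...
print(allocate(4, "Z"))            # ...but no 4-slot gap: prints False
```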
d3m86s | Why is there no screen that uses both RGB and actual CMY pixels? | Technology | explainlikeimfive | {
"a_id": [
"f03p2d0",
"f03pug7",
"f03wibm"
],
"text": [
"Because CMYK is a subtractive color model. It only works when we describe things that reflect light, like inks, pigments, or dyes. It's subtractive because we start with white light and then subtract the wavelengths absorbed by whatever the light is reflecting off of. When describing things that emit light, like screens or lightbulbs, we use additive color, which is RGB.",
"CMY doesn't work on things that emits light and RGB doesn't work on things that absorb light. They are mutually exclusive. That's why screens use RGB and printers CMY",
"This question is more interesting that it at first seems. The usual \"cmyk is a subtractive color model\" seems a bit hasty response, when you realize that LCD screens start with white light and literally subtract unwanted wavelengths. I thought, there's no reason why we coudn't make CMY filters instead of RGB and install them in series, rather than parallel. However, the problem is controlling the filters dynamically. After a short google, it seems that electrically controllable color filters are technically possible, but not even close to the range of the entire visible spectrum. & #x200B; So in short, the technology doesn't exist."
],
"score": [
15,
7,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
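The answers above contrast additive RGB (emitted light) with subtractive CMY (absorbed light). For channel values normalized to 0..1, the two models are simple complements, C = 1 - R and so on; a tiny sketch:

```python
# Sketch: RGB (additive) and CMY (subtractive) are complements of each other.
def rgb_to_cmy(r, g, b):
    # Cyan ink absorbs red light, magenta absorbs green, yellow absorbs blue.
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    return (1 - c, 1 - m, 1 - y)

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red light -> (0.0, 1.0, 1.0) of ink
print(cmy_to_rgb(0.0, 0.0, 0.0))  # no ink at all  -> (1.0, 1.0, 1.0): white paper
```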
d3pvp3 | What it means for a certificate to expire on a WiFi? | Technology | explainlikeimfive | {
"a_id": [
"f04d4xk"
],
"text": [
"Think of a WiFi certificate as a special key you are given. This key offers a level of encryption of credentials such as log-in information when accessing a public or shared WiFi network. It, at a basic level, helps prevent other people on the same network from viewing or ‘sniffing’ out what other devices on the network are doing. If this certificate expires you can either delete the old certificate and acquire a new one, or you can proceed to use the connection *without* the certificate. The latter provides other people on the network the ability to potentially view and record your nonsecured traffic; for example information you view or submit on http domains instead of http(*s*). The most common form of messing with other people’s information when not using a certificate is the notorious MITM, or man in the middle, tactic, where you route other people’s connection to the same network through your device before then allowing it to access the internet. This lets you capture virtually every bit of information they send and receive."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
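The answer above centers on certificate expiry. WiFi (802.1X) certificates are provisioned by the network operator rather than fetched this way, but expiry works the same as for any TLS certificate; a sketch of reading one's expiry date with Python's standard ssl module (the hostname is just an example):

```python
# Sketch: read a server certificate's expiry date (the same notion of
# expiry that a WiFi/802.1X certificate has). Hostname is illustrative.
import socket
import ssl

def cert_expiry(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()   # parsed certificate as a dict
            return cert["notAfter"]    # e.g. 'Jun  1 12:00:00 2025 GMT'

print(cert_expiry("example.com"))
```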
d3rvyx | Carriers adding bloatware | Technology | explainlikeimfive | {
"a_id": [
"f04qowj"
],
"text": [
"SIM cards cannot store apps so I highly doubt your carrier is sending you bloatware though the SIM card. It is most likely that the bloatware was installed by the manufacturer of the phone. This happens all the time when you buy laptops from a store. For example, Dell and HP laptops always come with Dell and HP software that a regular user would never use. Apple iPhones also have apps that I never use, like Stocks or Files apps. I delete as much as I can and for those that I can’t, I put them all in a Trash folder and put it out of sight."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d3ryc4 | How does a single USB port handle the multiple signals from a USB hub? | If you have a hub plugged into your usb port, and several peripherals plugged into the hub, how does the computer use so many devices on one port? | Technology | explainlikeimfive | {
"a_id": [
"f04swhi",
"f04uhdo"
],
"text": [
"The port itself has a chip build in that says: I'm an usb hub, to which device do you want to speak? Of course the pc can still only talk to one of these devices at a time, but most of them dont need its full attention anyways.",
"Effectively works the same as an Ethernet switch. Each transaction has a send and receive address and the hub/switch knows where to route the traffic."
],
"score": [
42,
11
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
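Both answers above say the hub forwards each transaction to the device it is addressed to, much like an Ethernet switch. A toy model of that address-based routing; real USB enumeration is far more involved, and the names here are invented:

```python
# Sketch: a hub as an address -> device routing table (toy model only).
class Hub:
    def __init__(self):
        self.devices = {}   # address -> device name
        self.next_addr = 1

    def attach(self, name):
        # The host assigns each device a unique address at enumeration.
        addr = self.next_addr
        self.devices[addr] = name
        self.next_addr += 1
        return addr

    def transaction(self, addr, data):
        # Only the addressed device acts on the transaction.
        print(f"addr {addr} ({self.devices[addr]}): {data}")

hub = Hub()
kbd = hub.attach("keyboard")
mouse = hub.attach("mouse")
hub.transaction(kbd, "any key pressed?")
hub.transaction(mouse, "position?")
```

This is why one physical port suffices: the devices share the wire in time, with the address on each transaction deciding who responds.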
d3rzcv | Why is the volume of my Bluetooth headphones independent of my phone’s volume? | Technology | explainlikeimfive | {
"a_id": [
"f04ux6j"
],
"text": [
"So that you don't blow your ears out going from car to headphones. Or at least that's how I use it."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d3sv5x | Why do computer screens appear to flicker when recorded by a phone camera? | Technology | explainlikeimfive | {
"a_id": [
"f04x49a"
],
"text": [
"Because the computer screen is not a constant image, but is actually flashing very quickly. This is known as the \"refresh rate\" and is measured in Hertz (which is a measure of cycles per second). A screen that has a refresh rate of 144 Hz means that it is flashing 144 times per second, which is so fast that the human eye perceives it as being a constant image. A camera recording video does the same thing -- it does not record contiguously but rather takes consecutive snapshots very quickly, so fast that the human eye will see a video also as a constant image. The flickering occurs when the cycle of the computer screen and the cycle of the camera doesn't line up perfectly, such that the camera takes a snapshot in between cycles of the screen, resulting in the screen being dark for that snapshot / moment in time."
],
"score": [
31
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
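The answer above attributes the flicker to the camera's snapshots landing at different points in the screen's refresh cycle. When the two rates are close but not equal, the mismatch drifts slowly and shows up as a beat; a quick sketch of where each frame lands (the rates are example values):

```python
# Sketch: where each camera frame falls within the screen's refresh cycle.
SCREEN_HZ = 60.0    # screen redraws per second (example)
CAMERA_FPS = 59.0   # camera snapshots per second (example)

for frame in range(6):
    t = frame / CAMERA_FPS
    phase = (t * SCREEN_HZ) % 1.0  # position within the current redraw
    print(f"frame {frame}: caught the screen {phase:.2f} through its redraw")

# The phase drifts by 1/59 of a cycle per frame, so the captured brightness
# pulses at about |60 - 59| = 1 Hz: the visible flicker or rolling bands.
```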
d3sw5e | What is aliasing, and what is anti-aliasing? | Technology | explainlikeimfive | {
"a_id": [
"f04wt1q",
"f05jjg5",
"f04x6rb",
"f04xasg",
"f064hl5",
"f05c7go",
"f05o91r",
"f058wsi",
"f05qjzm"
],
"text": [
"So your computer monitor is made up of square pixels. Trying to make round shapes out of square objects is going to give you jagged edges (thing about trying to build a circle in a game like minecraft). The jagged appearance of curved objects on a screen is called aliasing. Anti-aliasing works to do the opposite of aliasing and make curved images look less jagged and smoother. It does this by adding a little blurred coloring in the pixels just outside edge of the curve, which makes the image seem smoother.",
"From the signal-processing perspective: If you've watched a video of a car wheel or propellor spinning fast enough that it appears to spin backwards, due to it advancing more than a half-cycle per frame, you've seen something like aliasing - a high-frequency signal presenting itself as a low-frequency one. (The half-cycle part also hints at why your sampling frequency must be more than twice your highest desired signal frequency - hence ~~60Hz monitors and~~ 44kHz audio sampling - sampling frequency being how often you record data. For more information, look up the Nyquist Sampling Theorem.) In such a context, anti-aliasing is the practice of filtering out undesirable high-frequency signals before sampling, so that they don't appear as lower frequencies in the sampled data set (\"aliasing distortion\").",
"Think of a screen like bricks on a wall. When your computer transfers a image onto those bricks, it has to fill a whole brick with color. This leaves a staircase-like jagged edge instead of a smooth looking one, this is called aliasing. Anti-aliasing takes those bricks along the edge, and mixes a bit of color information from those on either side to make the edge look smoother. It's of course a bit more complex than that in practice, but that is the basic idea, to makes the edges of objects look smooth on a display.",
"\"Aliasing,\" or \"artifacting,\" is a general term for a sound or video element recreated from a digital source being different from the origial due to the way the element is converted from analog to digital format and back. As the level of compression in lossy encoding formats like MP3 or MP4 is increased to save bandwidth or space, the number of artifacts also increases as the decoding device has less of the original data to work with and must make estimations or approximations. The aliasing that's most often talked about (and specifically referred to as \"aliasing\") is the \"jagged line\" effect when re-creating a diagonal line on a digital monitor. As pixels (usually tiny squares) are arranged in horizontal and vertical lines, a thin diagonal line shows up on that type of display with a \"stair-step\" pattern. Anti-aliasing is a digital process that lights extra pixels at strategic places along the line so that the \"stair steps\" are less pronounced and appear to the casual observer as a true diagonal line. Modern video cards have the capability to perform anti-aliasing multiple times (2X, 4X, etc.) to make the jaggies even smoother. Of course, this requires a lot of digital processing power!",
"Imagine a sheet of grid paper. Draw a circle by only filling in individual squares, no lines or curves, just fill in the squares. Looks pretty jagged right? Anti aliasing takes those jagged boxes and smooths them out to make a regular looking circle. Different types of antialiasing use different techniques but they’re all trying to achieve a similar goal.",
"In general, aliasing when a signal or image is changing too rapidly for how often you are sampling or displaying it. So a tilted line on a computer screen might make you notice the “square pixels” forming a staircase. With audio it’s a weird effect where a pure high tone is muddled and generates lower frequencies that shouldn’t be there. Anti-aliasing is when you smooth the signal first before sampling / displaying it. In computer graphics that might create extra cost since you may need to generate points more densely than you output so you can get the best possible image after smoothing.",
"Imagine how this sounds: O A O A O A O A now if you sampled it so that you could only hear every other letter, it would sound totally different: O O O O Perhaps it was originally O O O A O O O A You don’t have enough information to tell. When you sample a pattern, it can “alias” to another pattern that would appear the same when sampled at the same points in time/space. Anti-aliasing is anything that can be attempted to preserve the distinction between possible patterns. Usually it involves applying a transformation to the original pattern before it’s sampled. To completely avoid aliasing, you would have to sample at least a little faster than the Nyquist Frequency (twice the highest frequency of any sub-pattern in the original pattern). It turns out that you can express any pattern as a sum of a bunch of sinusoids of different frequencies (called the Fourier Transform), and you can tell which components might alias based on what your sampling frequency is.",
"Aliasing is a mistake computers make when turning pictures and sounds into numbers or numbers into pictures and sounds. These mistakes make one thing look or sound like a different thing. Anti-aliasing is ways to fix these mistakes or make them less obvious. It's not about jagged lines. It's just that a lot of people only know about it from video games so they think it has something to do with video games. It has nothing specifically to do with video games.",
"Pixels are a grid. If you drew a diagonal line across a pixel grid, the smoothness would depend on how many pixels are present. With fewer pixels, the line looks more jagged--like a staircase. This is aliasing. Anti-aliasing is a technique that looks at those pixels and says, \"Hmm, this needs to be smoothed so it looks like a line again and not a staircase.\" So it goes along and makes neighboring pixels a softer version of the line and the end result is a more gradual transition and a straighter line. You might be asking, \"Why can't we just increase the number of pixels?\" We can. And that is actually a good way to reduce aliasing. Although it takes more computer power to display more pixels than it does to smooth lines, the benefit is that the entire screen looks sharper."
],
"score": [
1664,
87,
17,
5,
4,
4,
4,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
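The wheel/propeller example from the signal-processing answer above can be checked numerically: a 9 Hz sine sampled at only 10 samples per second produces exactly the same samples as a 1 Hz sine. A minimal sketch with made-up frequencies:

```python
import math

# Sample a 9 Hz sine at only 10 samples/second (Nyquist would demand > 18).
# The samples come out identical to a 1 Hz sine -- the 9 Hz signal
# "aliases" down to 1 Hz, just like a fast wheel appearing to spin slowly.

SAMPLE_RATE = 10.0
for n in range(10):
    t = n / SAMPLE_RATE
    fast = math.sin(2 * math.pi * 9 * t)
    alias = math.sin(2 * math.pi * -1 * t)  # 9 Hz sampled at 10 Hz aliases to -1 Hz
    print(f"t={t:.1f}s  9Hz={fast:+.3f}  1Hz-alias={alias:+.3f}")
```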
d3tr3x | How come when downloading something, the shown download speed never seems true compared to how long it takes to actually download the file? | For example, if downloading a 1 GB file with a speed of 50 Mb/s, shouldn't it only take 20 seconds to download- but it never does. I also feel like I should be able to count 4 seconds and see 200mb/1GB, why does the actual download take so much longer than this? | Technology | explainlikeimfive | {
"a_id": [
"f051tg0",
"f056lxm"
],
"text": [
"The size of the file is in Bytes (B), the transfer speed is in bits (b, which is 8 times smaller). So 50 Mb/s means you are downloading at 6.25 MB/s, so to download a 1GB file (1024 MB), it would take roughly 164 seconds if the speed stays at a constant 50Mb/s. In short, file sizes are usually given in Bytes, while data download speeds are usually given in Bits. It'd be like if distance was only ever measured in miles, but speed was only ever measured in kilometers/hour. You have to convert one of the values to the other to get a 1:1 comparison.",
"You're confusing bytes (B) with bits (b). 1 byte is 8 bits. Generally, internet speeds are measured in bits, and storage capacity (and sometimes transfer speed) is measured in bytes. Obviously, this is so that your ISP can make their number seem bigger. Also, most internet connections don't actually have a 100% stable speed. Cable (and some fiber) connections are usually \"pooled\", which means that group of customers is assigned a fix amount of bandwidth for all of them, on the assumption that average use will be less than the sum of all their maximum bandwidths. So, depending on network use, your download speed could go up or down while downloading a file. Also, many ISPs have \"turbo speed\", which essentially allow you to exceed your rated speed for very short bursts of time, and then scale you back to your rated bandwidth (or sometimes lower, depending on your plan). This increases their speedtest scores (most speedtests use very small files), and makes everyday browsing feel more responsive. So, you may not actually get the speed you initially measure."
],
"score": [
13,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
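The bits-versus-bytes arithmetic from both answers above, as a small helper. This is a sketch that assumes a perfectly constant link speed and the binary 1 GB = 1024 MB convention the first answer uses; it ignores protocol overhead.

```python
# Bits vs bytes: file sizes are in bytes (B), link speeds in bits (b).
# 8 bits = 1 byte, so a "50 Mb/s" line moves only 6.25 MB/s.
# Real downloads are slower still (overhead, shared bandwidth).

def download_seconds(file_gib: float, link_mbps: float) -> float:
    file_megabytes = file_gib * 1024          # GB -> MB (binary, as in the answer)
    link_mb_per_s = link_mbps / 8             # megabits/s -> megabytes/s
    return file_megabytes / link_mb_per_s

print(f"{download_seconds(1.0, 50.0):.0f} s")  # ~164 s, matching the answer above
```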
d3wcw6 | How was the first software created and integrated into computers when there wasn't any software to do it? | Technology | explainlikeimfive | {
"a_id": [
"f05mah3",
"f05v7zl",
"f05lbbe",
"f05zo69",
"f05qv82"
],
"text": [
"Physical punchcards (card with holes dotted on them) were fed into computers to set memory switches on or off which would then act as the program and data to be executed. When a basic computer is \"booted\" its instruction pointer points to a location in memory. The 1's and 0's at that location are retrieved and depending on what's there different actions are caused: that might be loading a memory location, adding two numbers together, or storing the result. Using a relatively small set of instructions, vastly complicated processes can be made. Computer data, being binary, has always been interpretable as numbers. Eventually punchcards were phased out in favour of analogue tape which stored the numbers as magnetic patterns on the strip. Analogue tape gave way to magnetic disks (floppy disks) and magnetic harddrives then solid state drives (SSDs) which is basically 'now'. Fundamentally the same thing is being done as with the punchcards, just the process is faster.",
"Imagine turning the light in your house off and on. Now imagine 256 switches and you need to set each switch in a particular state... That is 1st gen software development.",
"The first \"programs\" were mechanical in nature. You programmed computers with punch cards and by wiring it up properly. In modern terms, the software was \"hardcoded\" (literally)",
"One thing that kind of blew my mind when I was doing my CS major was the realization that ALL software can be theoretically implemented in hardware. That's where the term \"soft\"ware came from in the first place. It's sort of a virtualization of hardware. The hardware for modern programs would be extraordinary complex and expensive, but it would also be insanely fast compared to the software implementation. Think of a new programming language. The compiler for the new language (let's say Java) could never be written in Java to begin with. But after you create a Java compiler (let's say in C++), you could then possibly rewrite the Java compiler in Java and compile it with the C++ version. In the case of the original punchcards, you could say the hardware was the compiler, in a sense.",
"It depends on your definition of \"Software.\" & #x200B; Originally, the first \"software\" produced were cards with holes that programmed a pattern into a loom for an automated printing process. Each hole was designated to a specific function, and these cards would be read by the loom to produce the fabric. & #x200B; Following this, the first Computer (That broke the Enigma code) was programmed using physical connections. The inputs were put in via connections between points, and then \"software\" was tweaked to produce a result. & #x200B; In the first programs, software was very literally coded using binary - 1s and 0s - to produce a piece of software. Each byte of binary was an instruction to the hardware to carry out - using Random Access Memory. A person could, say, program a piece of software to do addition or multiplication by calling specific memory blocks and using instructions which would be in the code. This was the fetch-execute cycle which is in place in modern computers. & #x200B; Essentially, at the start people didn't \\*use\\* software to code and program, they wrote the code in a simple text document and hoped it ran properly because debugging was near impossible. Nowadays, of course, we have Notepad++ and other IDEs that allow us to more easily code and compile programs and software, however way back when before it all it was trial and error with LOTS of education in the field."
],
"score": [
62,
7,
6,
4,
3
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
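Several answers above describe the fetch-execute cycle. Here is a toy interpreter for an invented three-instruction machine (the opcodes are made up for illustration and don't match any real CPU), just to make "the instruction pointer walks through memory" concrete:

```python
# A toy fetch-decode-execute loop. Invented opcodes:
# 1 = LOAD addr, 2 = ADD addr, 3 = STORE addr, 0 = HALT.

memory = [1, 8, 2, 9, 3, 10, 0, 0,   # program: LOAD [8]; ADD [9]; STORE [10]; HALT
          5, 7, 0]                   # data: 5, 7, and a result slot

accumulator = 0
ip = 0  # instruction pointer starts at memory location 0

while True:
    opcode, operand = memory[ip], memory[ip + 1]   # fetch
    ip += 2
    if opcode == 0:                                # decode + execute
        break
    elif opcode == 1:
        accumulator = memory[operand]
    elif opcode == 2:
        accumulator += memory[operand]
    elif opcode == 3:
        memory[operand] = accumulator

print(memory[10])  # 12, i.e. 5 + 7
```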
d3wqho | how do delivery companies like UPS and FedEx deliver my packages so quickly when my item is on the other side of the country? | Technology | explainlikeimfive | {
"a_id": [
"f05o901",
"f05psb2",
"f05ubhj"
],
"text": [
"If desired, air cargo. It isn't hard to get packages across the country when you load then up in a plane and fly them most of the distance. This is also why guaranteed 2-day/next-day/overnight services are typically very expensive, especially for large/heavy items.",
"The key is ride sharing. Consider that every package has a source city and every one has a destination. If you put a single sorting warehouse somewhere central, then you can ride share in AND out. For example, one package comes from Los Angeles, the other from New York, and they’re both headed for San Francisco. Yes it would be cheaper to drive or fly LA to SF directly, but the cost of NY to SF is many times larger. What if you put a hub in Kansas? Both packages go there first, and get put on the same flight out to SF. Now each person pays about the same to ship and they can still both ship in about the same amount of time. Now multiple that across several million packages across thousands of cities...several large geographically central hubs...and that’s UPS and FedEx. As a bonus, suppliers can strategically place their warehouse IN that same city as a UPS or FedEx hub and save one of those flights, getting their goods to you even quicker.",
"It’s fairly easy to get between any two points in the world with a decent number of people in them in the span of a single day. Given that, if I wanted to take a package to someone almost anywhere in the world, I could probably hand deliver it to them within 24 hours if I knew where I was going and had the money to travel. So the travel portion isn’t really the hard part. It’s the logistics of doing that with a lot of different packages going to a lot of different places all at once. That requires a lot of tracking and organizational work with hubs that allow packages from relatively the same place that are going to relatively the same place to be grouped together, shipped at once and then dispersed to the correct places on arrival."
],
"score": [
9,
5,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
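The hub-and-spoke reasoning in the ride-sharing answer above comes down to route counting: n cities connected directly need n(n-1)/2 routes, while routing through a single central hub needs only n. A deliberately simplified count (real carriers use several hubs):

```python
# Why hubs help: connecting n cities directly needs n*(n-1)/2 routes,
# while routing everything through one central hub needs only n.

for n in (10, 100, 1000):
    direct = n * (n - 1) // 2
    via_hub = n
    print(f"{n:>5} cities: {direct:>7} direct routes vs {via_hub} hub routes")
```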
d3xjfz | how a headphones/earphones can still function after going thru both washer and dryer? | Technology | explainlikeimfive | {
"a_id": [
"f05s8hg",
"f05sb14"
],
"text": [
"Headphones are pretty simple objects: you’ve got two wires going in, a resistor or two inline, then a magnet, sitting behind a diaphragm. As long as the diaphragm doesn’t rip and the dryer gets the electrical contacts dry, everything should still work fine.",
"Depends on the exact product, but fundamentally a speaker isn't all that complicated. Its just voltages that push a magnet back and forth, there is little to no circuitry involved. The only reason water even damages electronics is because it shorts out things, and earbuds have no active power source. So long as they dry fully, new earbuds are no different from a pair that got submerged. (Obviously this is different for bluetooth earbuds or wireless headphones or the like)"
],
"score": [
11,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d3ybsa | How do VR headsets have depth in their images to the point where if I don't wear contacts while playing VR I have my usual nearsighted fuzziness trying to read things? | Technology | explainlikeimfive | {
"a_id": [
"f05wxz0",
"f06wcgn",
"f06wlvi",
"f06tg0r",
"f05x4e8",
"f06xlac",
"f07291j",
"f07ijp1",
"f06xw35"
],
"text": [
"Headsets have lenses inside that are adjusted so someone with normal sight can focus on the screen a few centimetres away from their eye. This would not be possible without the lens. If you are nearsighted, you need to correct your vision to normal with contacts so the lens in the headset does it's proper function.",
"Because VR headsets use special lenses (typically fresnel lenses) that distort light from upclose to make it look like its coming from further away. It used to be bent such that the light appeared parallel, and thus coming in from infinite distance, but this has its own set of perculiar issues. Nowadays, the lenses are designed to make the light appear as though its coming from about 1-2m away. Which for people with short sightedness is enough to make it so they'll need perscription lenses to correct for it. As an aside, the big insight that allowed for modern VR is that you could use cheap to manufacture lenses, create distortion and then fix it by using software to create the opposite distortion, allowing the virtual environment to be seen properly.",
"Wait is this why I cant read shit in VR? I just thought that the resolution was too low and I needed to wait for technology to catch up. Jesus christ it never dawned on me to put my contacts in im retarded. First thing I do when I come home is try the VR with contacts. No way its actually clear I honestly cant grasp that Edit: some people are asking for updates. Im sorry I got a fuckoff headache and just went to sleep. Will update as soon as possible Update: update after like a week. It definitely helps and I see things so much clearer, but now I actually see the bad resolution lol. At least I can read",
"None of these answers really answer the question. It's all a matter of something called the focal point. Focal points are why when you look at something close, everything behind it is blurry. In VR headsets, the focal point is set to infinity, so it's like looking at something infinitely far away. This is done using special lenses. Now because you say that things look blurry, I know you must be near sighted. Because everything in the VR headset has a focus infinitely far away, a far sighted person would be able to use a VR headset without glasses.",
"Because the VR headset is specifically calibrated to work with someone who has 20/20 vision. It is literally trying to recreate reality in a virtual setting, if your nearsightedness seems the same in the VR headset, it means they have done a good job.",
"Okay, I'm working on my Master's in animation and stereoscopic technology, and there are some misconceptions in this thread. VR works by displaying two images (one for left eye and one for right) about ~1 inch away from the viewer's eyes. The slight shift in perspective between the two images is what creates the perception of depth. If you'd like a simple demo of how that works: hold your thumb in front of you in a thumbs-up. Focus on it with both eyes, then close your right eye, looking through your left; then swap so you're looking out of just the right eye. You should see slightly more of your fingerprint side or thumbnail side depending on which eye you look out of - this small difference is used in the brain to calculate the \"depth\" of your thumb. * TL;DR Small differences in the left and right eye's images create the illusion of depth in the brain. Now in VR and 3D films, where you focus your eyes and where that depth is in physical space are not always the same. I'm going to try to break this down into words, but it's tricky, so please bear with me. When you look at a chalkboard in real life, you perceive the words on the chalkboard as the same depth as the chalkboard itself, because that's how vision works with human eyes. But in 3D, we intentionally mess with the positioning of images in the left and right view so that you perceive depth differently. If we re-imagine the chalkboard as a movie screen, sometimes you'll perceive a movie as having things \"fly off the screen\" or receding behind the walls of the screen's surroundings. The repositioning of the images tricks your brain into thinking the image you are seeing is actually several feet in front of/behind the physical location of the screen, even though the image is *still being projected at the original physical location*. So your eyes **converge** at the physical depth (aka the **convergence plane**) while your eyes **focus** on the fake depth. [Here's a picture]( URL_0 ) from 3D Storytelling by Bruce Block to help explain. * TL;DR Where your eyes converge in physical space is not the same as where they focus in 3D-perception space. So now we know that where your eyes converge in physical space is different from where your eyes actually focus. What does this mean for VR and vision irregularities? Let's review what causes near/far-sightedness. Depth perception problems are caused by the length of the eye as it relates to the position of the cornea and the retina/ocular nerve. Think of the cornea as a camera's lens and the retina as a camera's processing chip/film. If the distance between the lens and the retina is too long, the resulting picture will have blurriness at long distances - that's nearsightedness. A distance that's too short has blurriness up close - that's farsightedness. * TL;DR: If you have long eyeballs, you need glasses for distance, and short eyeballs need glasses for reading. Let's get back to VR. Though the screen may only be an inch away, that's where the eyes **converge** - the fake depth that the headset creates is where the eyes **focus**. As others have mentioned, VR is set for \"infinite distance\", so farsighted individuals don't have a problem focusing on faraway objects in VR. For nearsighted folks, it's a different story. Their focus points are still processed in the brain as if they were actual objects in real life, so if you can't read an eye chart in reality, you can't read one in VR. The only way to correct this is to shorten the distance light travels between the eye's lens and retina. 
For that, we use corrective lenses - glasses and contacts! In theory, you could also build those into a headset, but it makes more sense to just use your own pair. * TLLLDR: The closeness of the lenses in no way impacts the way the brain processes far-depth focus, so glasses are necessary in VR if you use them in reality.",
"I can confirm the Oculus Rift S is comfortable and fine with small frames (big round lenses). Wide frame arms might struggle a little though.",
"Similar to this, when I was younger, I always wondered why I'd still see things as blurry if I was looking at them in a mirror without my glasses. To me it seemed logical that the light sort of \"reset\" when it hit another surface. Obviously that's not the case, though :)",
"Headsets set lenses usually to about 2 meters (6 feet) of a focus distance. So no matter where you look in VR your eyes have to focus to exactly that distance. Depth comes from stereoscopic effect from slightly shifted virtual point of view to about the same distance as an average person interpupulary distance (distance between pupils). It can get kinda awkward if you're moving an object close to your face in VR it would appear focused if your eye focused 2 meters away instead. Or looking far away in VR also feels weird compared to reality."
],
"score": [
3145,
207,
95,
66,
52,
30,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[
"https://imgur.com/a/6wqwbD8"
],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
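The "lens makes a close screen appear far away" point in the answers above can be put in numbers with the thin-lens equation. A sketch assuming one idealized thin lens and made-up distances (real headset optics are more involved):

```python
# Thin-lens sketch (1/f = 1/d_object + 1/d_image, virtual image negative):
# what focal length places a screen 5 cm from the lens at an apparent 2 m?

d_object = 5.0      # screen-to-lens distance, cm (illustrative)
d_image = -200.0    # desired virtual image distance, cm (negative = virtual)

f = 1.0 / (1.0 / d_object + 1.0 / d_image)
print(f"focal length ~ {f:.2f} cm")   # ~5.13 cm
```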
d3zdmv | How did Pacific Islanders carry enough water with them for long sea voyages? | Their boats don't seem like they were very big. The Pacific is freaking enormous. At least for the discovery/ exploration phase, they didn't know where they were going. How did they manage to carry enough food and water (but especially water) to make those crossings? | Technology | explainlikeimfive | {
"a_id": [
"f06f64t",
"f06q8vg",
"f064vv2",
"f063kqo",
"f063r30",
"f070699"
],
"text": [
"You want to read The Kon Tiki Expedition by Thor Heyerdahl. Kon-Tiki carried 1,040 litres (275 US gal) of drinking water in 56 water cans, as well as a number of sealed bamboo rods. The purpose stated by Heyerdahl for carrying modern and ancient containers was to test the effectiveness of ancient water storage. For food Kon-Tiki carried 200 coconuts, sweet potatoes, bottle gourds and other assorted fruit and roots. And the answer in the book - the traditional containers and coconuts preserved liquid better than modern tins. The rafts themselves were plenty big enough to carry large quantities of food and water.",
"They were a bit bigger than you might think, often 50-60 feet and consisting of two hulls with a platform across the middle. URL_2 For reference, this is comparable or slightly longer than the Nina and Pinta. Polynesians stored water in gourds and bamboo segments, drank from coconuts, and captured rainwater. In a pinch, fish and sea turtle blood is also drinkable. URL_1 In fact, though, their voyages were actually more impressive because colonization voyages brought not only people and the food they needed, but also an entire agricultural suite, including a wide array of domesticated plants as well as pigs, dogs, and chickens. URL_0",
"It rains a lot over the ocean. It's easy to replenish your water barrels if you sail during the rainy months.",
"They had some versions of their boats that were enormous. These could carry a lot. They commonly kept chickens on-board for eggs, and kept water in jars and other containers. They also did a lot of their early colonizing when sea levels were lower and there were more islands to go to.",
"Bags of water, coconuts, and rain collection, mostly. Probably some hydration from food. Bring as much as you can when you don’t know where you’re going. I suspect if they ran out long enough, they perished. Wasn’t just Pacific Islanders with that problem, though. I’m sure early Europeans or Africans didn’t know how wide the Mediterranean or Atlantic are. Asians and Africans with the Indian Ocean, too. Surely the other seafarers from Asia into the Pacific, but not the islanders, ran into the same problems.",
"More importantly, they apparently navigated with their testicles [ URL_1 ]( URL_0 )"
],
"score": [
208,
40,
32,
22,
4,
3
],
"text_urls": [
[],
[
"https://en.wikipedia.org/wiki/Domesticated_plants_and_animals_of_Austronesia",
"http://archive.hokulea.com/ike/canoe_living/holmes_provisioning.html",
"http://archive.hokulea.com/ike/kalai_waa/kane_search_voyaging_canoe.html"
],
[],
[],
[],
[
"http://www.ifa.hawaii.edu/friends/Technology_of_Oceania.pdf",
"http://www.ifa.hawaii.edu/friends/Technology\\_of\\_Oceania.pdf"
]
]
} | [
"url"
]
| [
"url"
]
|
d3zyng | Why does plastics make so much noise when you wrap it even though it is so soft? | Technology | explainlikeimfive | {
"a_id": [
"f06cwdf",
"f06f0sk"
],
"text": [
"Plastic wrappers make noise when crumpled because of those creases and ridges you see on it. Unlike very elastic materials like rubber sheets, when you stretch a plastic sheet, the stress doesn't get distributed evenly. Treated plastics have metastable energy minima. When you apply more force after the first crumple, these facets release some elastic potential energy they were storing, (Ridges store 4/5 of the total energy in your plastic wrapper), they buckle and form a new facet with a new orientation. The energy is released in the form of sound clicks, heat and vibrations.",
"I assume you're thinking about things like mylar sheets or potato chip bags. & #x200B; These things are not actually particularly soft. They're in very thin layers, which makes them flexible, but the material itself isn't soft, which means it can bend but doesn't smoothly drape or wrinkle. Instead, it crinkles, and the breaking up of these crinkles can vibrate it. The stiffness of the plastic transmits sound across its width, so a crinkling makes noise."
],
"score": [
11,
7
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d40ity | How do mirrors in video games work? | I was playing GTA online and noticed that the mirror in game updates in “real time”. How does the game recognize that, and how does it update with no delay? | Technology | explainlikeimfive | {
"a_id": [
"f06d1po",
"f0767g9"
],
"text": [
"In simple terms your avatar or charicters is coppied into another room built the same way on the other side of the wall. URL_0 does a great explanation on how different effects are created by breaking popular games.",
"Former video game developer here. You are asking two different questions. I am going to answer the title of your thread. Visually there are many ways to create mirror effects. The different techniques have varying degrees of fidelity and computational efficiency. One of the older techniques is a simple image projection onto a given model which requires baking the environment into a texture. Though this one does not work in real time and therefore does not consider dynamic objects. Another technique is screen space reflection which considers anything that lies within the field of view but accordingly cannot show anything indirectly. And then there is plannar reflection which actually copies geometry required for the effect. But this also means you have to compute a lot more vertices. Nowadays you will find ray tracing becomes more and more accessible to consumers and a few games produce accurate reflections on glossy surfaces. This is because this technique actually follows the path of light through any given scene and considers bounced rays which can also hit indirect objects that do not lie within line of sight. This is not a complete list but gives a rough introduction to this topic."
],
"score": [
8,
3
],
"text_urls": [
[
"https://www.youtube.com/user/PencakeAndWuffle"
],
[]
]
} | [
"url"
]
| [
"url"
]
|
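The "duplicate room" trick in the first answer above amounts to reflecting geometry (or the camera) across the mirror's plane. A minimal sketch of that reflection formula, with an arbitrary example plane:

```python
# Reflecting a point p across a plane through `origin` with unit normal `n`:
# p' = p - 2 * dot(p - origin, n) * n

def reflect(p, origin, n):
    d = sum((pi - oi) * ni for pi, oi, ni in zip(p, origin, n))  # signed distance
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, n))

# Mirror on the plane x = 0 (normal pointing along +x):
print(reflect((3.0, 1.0, 2.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # (-3.0, 1.0, 2.0)
```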
d414du | What is “blue-light” from phones and other electronic screens? And how does it affect our eyesight? | Technology | explainlikeimfive | {
"a_id": [
"f06npf5",
"f06nxw9"
],
"text": [
"The \"white\" that a screen displays can lean towards a blue or yellow (or even orange) tint. A blue tint looks nicer to most people, so screens are usually adjusted that way. However, looking at that for too long can cause eye strain, and (at least in some studies) interferes with sleep.",
"Because our eyes only have three different colour receptors in them: Red, Blue and Green. It is the various combinations of these three that make our total vision in colour. So with screens they have three different colours in each pixel (red blue and green) that light up in various different combinations to show what ever it is we want to see on the screen. The thing with blue light is that our brain VERY STRONGLY associates bright blue light with day time. Being outside and seeing the bright blue sky. This is why you generally won’t fall asleep during a bright sunny day. Your brain sees the bright blue and goes “oh, it must be day time. That’s the time to be up and about doing stuff” The thing with our screens is that they are usually very bright. (If you had a dim screen in a dark room it’d be hard to see). The bright blue pixels shining into our eyes tricks the brain. You may have noticed that it’s a lot easier to stay up late at night if you are looking at your phone or a computer screen. This is why. This isn’t healthy for you, it strains the eyes. Making them stay active for longer without rest. Your eyes can get watery or irritable. You will get tired and probably irritable as well. This is why submarines have only red lights on during night time. It’s nothing like blue light and it allows the submariners the rest they desperately need."
],
"score": [
6,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d41ct5 | Why can't landers/rovers be landed or operated on the moon the same way drones are operated on Earth? | In light of the recent Chandrayaan landing failure on moon's surface, I am curious why the landers and rovers can't be operated remotely by Earth based pilots similar to how Predator drones are flown in middle East by pilots in the US? Can't communications to the lunar done be mediated through a moon orbiting satellite and since radio waves travel at light speed it should only take about 1s from Earth to moon? | Technology | explainlikeimfive | {
"a_id": [
"f06meg5",
"f06mksu",
"f06mei5"
],
"text": [
"A 1 second delay means any feedback you control is 2 seconds behind. Try driving a RC car around a track with 2 second delay. Its going to be really hard.",
"1s is definitely too long of a delay for someone to be controlling a vehicle. [Imagine driving a car with 1s delay from your input to what you see.]( URL_0 ) You're also counting on an orbiting satellite, which does not have direct view to the lander at all times. Geostationary satellites for the moon are non-existent.",
"Radio waves are limited to the speed of light. Due to the distance to the Moon this means the delay is 2.5s. So even if a human operator on Earth have perfect reaction time a command will only be received by the lander 2.5s after the event that the operator is reacting to. And this could be too late. Theoretically the communications delay between any two points on the Earth is under 200ms. However even then the Predator drone pilots is for the most part programming its autopilot just like mission control controls the autopilot on space probes."
],
"score": [
12,
6,
4
],
"text_urls": [
[],
[
"https://www.youtube.com/watch?v=kxuwPRY8kEo"
],
[]
]
} | [
"url"
]
| [
"url"
]
|
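The light-speed delay quoted in the answers above is straightforward to compute from the average Earth-Moon distance; relays and processing would add more on top. A quick check:

```python
# Round-trip control delay to the Moon at the speed of light.

MOON_KM = 384_400     # average Earth-Moon distance
C_KM_S = 299_792      # speed of light

one_way = MOON_KM / C_KM_S
print(f"one way: {one_way:.2f} s, round trip: {2 * one_way:.2f} s")
# one way: 1.28 s, round trip: 2.56 s -- too slow for hands-on piloting
```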
d41si8 | Most electronic wires are small and flexible, so why are electrical wires so thick and rigid? | Technology | explainlikeimfive | {
"a_id": [
"f06qmkl"
],
"text": [
"Because they need to pass a large amount of power. Small wires, like headphone wires, are passing small amounts of current. Power cables, like the cord for your fridge, move LARGE amounts. The amount of heat generated in the cord is a function of resistance and the square of the current (often called \"I squared R losses\", with \"I\" being current and \"R\" being resistance). So since you've moving a lot of current, and the heat goes up by the square of that, a lot more current means way, WAY more heat. So to reduce the heat, you reduce the \"R\"; one way to do that is to use a much thicker cable."
],
"score": [
12
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
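The "I squared R" relationship in the answer above, in numbers. The two resistances are illustrative guesses for a thin versus thick conductor, not values from real wire-gauge tables:

```python
# Heat in a cord is P = I^2 * R, and a fatter wire has lower R
# (roughly inversely proportional to cross-section).

def heat_watts(current_a: float, resistance_ohm: float) -> float:
    return current_a ** 2 * resistance_ohm

thin_r, thick_r = 0.5, 0.1   # ohms: thin headphone-style vs thick power cord (assumed)

for amps in (0.01, 10):      # headphone-level vs fridge-level current
    print(f"{amps:>6} A -> thin: {heat_watts(amps, thin_r):.5f} W, "
          f"thick: {heat_watts(amps, thick_r):.5f} W")
```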
d4229x | If you‘re in a VR Headset, do you still have poor eyesight? | Technology | explainlikeimfive | {
"a_id": [
"f06x3gp"
],
"text": [
"So, a VR headset uses a lens to focus the screen light as if it's coming from far away, rather than right in front of your face. The effects of this are split into two parts your brain cares about: **Convergence** - aka why you have two eyes This is the classic \"depth perception\" thing people have. The brain uses differences in each eye's image to determine how far away objects in the scene are. This is the effect VR headsets excel at replicating. If you have lazy eye or another condition that messes this up, then VR can't do anything to correct it. **Depth of Field** - aka why you have an iris and pupil This is why objects you're not focused on look blurry. Because any lens (like your pupil) can only have one focal point, (because irrelevant geometry math,) your eye needs to change focus based on what convergence told it about the scene's depth, or else things don't look sharp. VR headsets use a lense to make the screen far enough away to focus on. But, because a lens can only have one focal point, that means the screen is flat out there somewhere. If where the screen's focus is doesn't match the point of convergence, that can give people bad headaches as their eyes keep trying to sharpen the image, then returning to the convergence point to prevent double-images and crossed eyes. So even people with good eyesight can have a bad time in VR. It's the same with 3D movies in theaters. Nearsighted people can't make their pupil focus light from far away. But, if the VR headset has the screen focused close enough, that doesn't matter. If they're lucky enough to be able to focus and converge independently, then they'll have a good time in VR. Farsighted people can't make their pupil focus light from close by. But, if the VR headset has the screen focused far enough, that again doesn't matter. But this is... a little less likely to work out. People with astigmatisms might be nearsighted in one axis (say, up/down) and farsighted in the other. VR can't do anything to correct that, as the bad aspect ratio will mess with their focus no matter what. Finally, people who simply have reduced resolution (due to pupil damage, retina damage, or optic nerve damage) will have the same damage in VR. Luckily for all correctible forms of the above, most VR headsets are designed with enough room to wear some shapes of glasses. Because light emerging from the VR lenses is supposedly the same convergence as light emerging from a real scene, and a single fixed focus, the glasses lenses can do their correction exactly the same as they do in the real world. EDIT: Crikey! Forgot, back when Oculus DK1 had just come out, I played a demo that used VR to help cure lazy eye. By providing only half of a scene's objects to each eye, it ~~gave the subjects headaches~~ forced the subjects' brain to match up the two images, which in turn forced the brain to make the eyes move in sync. It takes a lot of gameplay to see effects, but it was showing effects. So yes, VR can actually improve eyesight in some careful lab setups. But don't count on it in general; it's a very bright, very close screen. That's not good for eyesight in large doses, no matter the condition."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d43jwt | how does autofocus detect when the object is in focus? | Technology | explainlikeimfive | {
"a_id": [
"f07eo37",
"f07f039"
],
"text": [
"There are a few techniques. The most basic is to use maths to judge the contrast. When it is out of focus, the image doesn't have any sharp edges, but there will be sharp edges when it is in focus. So you simply subtract the value of pixels from the value of pixels next to them, add all those differences together to get a single number, and keep adjusting focus so that value is largest. If the value drops when it adjusts focus one way, then try adjusting the other way. Indeed, almost all focus techniques depend on measuring contrast. Better systems provide ways for the system to know which way it should adjust, instead of by guessing and correcting. You know when it is guessing, with videos - you'll see the focus briefly get worse, before getting better. Of course, the old way to focus was to use a system to measure the distance to the subject, and then adjust the focus to what it calculated it to be for that distance. But this requires expansive extra hardware - reasonably complex and carefully built hardware - and so is rarely, if ever, used these days.",
"It compares the general similarity of neighbouring pixels: if all pixels are very similar to their neighbours, the picture is blurry. Then autofocus tries to correct this by changing the focus and at the same time measures this similarity. When is finds a point where shifting the focus in either direction makes the neighbouring pixels more similar, it knows it has found the focus with the highest clarity, with clear, well-defined, non-blurry shapes."
],
"score": [
8,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
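A miniature version of the contrast-measure-and-adjust loop both answers above describe. The "camera" here is a fake that blurs a step edge more the further the focus setting is from a hidden ideal value; everything about it is invented for illustration:

```python
# Contrast-detect autofocus in miniature: score an image by summed
# squared neighbour differences, then hill-climb the focus setting.

def capture(focus: float) -> list[float]:
    """Fake camera: a step edge, blurred more the further focus is from 7."""
    radius = int(abs(focus - 7))
    edge = [0.0] * 10 + [1.0] * 10
    return [sum(edge[max(0, i - radius):i + radius + 1]) /
            (min(len(edge), i + radius + 1) - max(0, i - radius))
            for i in range(len(edge))]

def contrast(img: list[float]) -> float:
    """Sum of squared differences between neighbouring pixels."""
    return sum((a - b) ** 2 for a, b in zip(img, img[1:]))

focus, step = 2.0, 1.0
best = contrast(capture(focus))
for _ in range(20):                      # hill-climb: keep moves that help
    trial = contrast(capture(focus + step))
    if trial > best:
        focus, best = focus + step, trial
    else:
        step = -step / 2                 # overshot: reverse and shrink the step
print(f"settled near focus {focus:.2f}")  # converges to 7.00
```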
d49ukk | Why is an open-source OS considered more secure than a closed source? | Technology | explainlikeimfive | {
"a_id": [
"f08z2ms",
"f092cq2"
],
"text": [
"For open source systems, the computer code that makes up that system is public, and anyone can read through it. So if someone has left some sort of backdoor or security flaw, it can be caught and fixed. Closed source systems have code that's only available to the people who make the software. It could have a dozen backdoors or security holes, and you'll never know unless it gets exploited and someone talks about it. The idea is that if you can see how the system is put together, you can build a more secure system than one that works with code you cant see.",
"Something being open source or being closed source has no inherent value on its security. What happens is that popular open source softwares are looked at by hundreds/thousands of users and so it's very likely that most major flaws get fixed quickly. Closed source software is only looked at by the people payed to do it who might not even care that much ( it's just a job after all ), so things take longer to get found and fixed."
],
"score": [
24,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d49xa3 | Trey Parker records his voice for the character Cartmen in South Park. After recording, they speed up the track so it gives the character the right pitch that you hear in the show. How do they record music like that? | Technology | explainlikeimfive | {
"a_id": [
"f094vae",
"f08yhq1",
"f09xs96"
],
"text": [
"They don't speed it up. They pitch it up. [Here you can see that the words are the same speed.]( URL_0 ) And here's the wiki on pitch scaling. URL_1 URL_2",
"If you will notice with his song \"Kyle's Mom is a Bitch\", the song itself is very high energy and fast paced. Clearly, in that case, he sung the song in a slightly slower time, then sped it up. Where that is not possible, because the song is already slow, they could use other software (a little more technical to use so they don't like using it) to raise the pitch without speeding it up.",
"[You might find this illustrative]( URL_0 ). It's The Chipmunks slowed down so you can hear how it was recorded. First, the record the instrumental backing track and Dave (the human's) voice together at regular speed. Then they slow that recording way down so that it's slow and low-pitch. Then they play that as a backing track so that the voice actors for the Chipmunks can sing (very slowly). Then they speed the whole thing up. Dave and the instrumental sound normal, but the Chipmunks sound like Chipmunks."
],
"score": [
5,
5,
3
],
"text_urls": [
[
"https://www.youtube.com/watch?v=EDqBBkXPCDc",
"https://en.wikipedia.org/wiki/Pitch_shift",
"https://en.wikipedia.org/wiki/Audio_time_stretching_and_pitch_scaling"
],
[],
[
"https://www.youtube.com/watch?v=7cTPoTmY1SA"
]
]
} | [
"url"
]
| [
"url"
]
|
|
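The speed-up trick described above can be sketched with naive resampling, which raises pitch and shortens duration together; that coupling is exactly why the vocals are recorded slower first. (Shifting pitch while keeping duration needs fancier processing, e.g. a phase vocoder.) A toy example on a generated tone:

```python
import math

# Naive speed-up: play samples back faster by index-skipping.
# This raises pitch AND shortens duration at the same time.

def speed_up(samples: list[float], factor: float) -> list[float]:
    """Resample by taking every `factor`-th sample (nearest neighbour)."""
    return [samples[int(i * factor)] for i in range(int(len(samples) / factor))]

sr = 8000                          # samples per second (illustrative)
tone = [math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]  # 1 s of 220 Hz
fast = speed_up(tone, 1.5)
print(len(tone) / sr, "s at 220 Hz ->", len(fast) / sr, "s at", 220 * 1.5, "Hz")
```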
d4aj1v | Why doesn’t someone merge all the open source AIs to create just one? | Technology | explainlikeimfive | {
"a_id": [
"f098438",
"f099c8s"
],
"text": [
"Much for the same reason nobody merges all car designs to create just one. \"AI\"'s are still complex software systems, and are designed to solve specific tasks. You can't just throw them all together and hope they somehow play nice with each other or are compatible. You need some way for the AI's to interact with each other, some new system do decide which AI system to use at which moment, and somehow to decide how to use each AI system. Each AI might be written in differing programming languages, utilize incomparable frameworks, and need to be significantly changed. most importantly; there isn't a demand for making a super-AI that uses all these open source AI's",
"That is sort of like asking why someone doesn't take all the free books in the world and merge them together to create just one. A lot of the books are in different languages and putting them together simply wouldn't work in creating an intelligible result, with grammar and sentence structure simply being incompatible. Beyond that the overarching story structures don't mesh; a fantasy horror story and a factual historical political commentary just don't work together. In this same way AI programs can't just be mushed together. They are complex chains of logic aimed at performing different, narrow tasks using different languages and techniques."
],
"score": [
6,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d4br0h | Dialysis | How is fluid removed from the body while your blood is being cleaned during dialysis? | Technology | explainlikeimfive | {
"a_id": [
"f09epim"
],
"text": [
"Your blood is pumped through a filter (dialyzer). The filter is made up of small, hollow fibers with microscopic pores in the wall. They run a special fluid through the filter that bathes the fibers from the outside, while the blood flows through the hollow fiber. Toxins, urea and other small particles can pass from the blood, through the membrane, and into the dialysis fluid."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
d4c6xu | How does google find out the frequency of words going back like 300 years, and how sure are they, that they have it correct? | Technology | explainlikeimfive | {
"a_id": [
"f09hkz6",
"f09tb6c"
],
"text": [
"They scan and analyze preserved texts (newspapers, bulletins, letters etc.). The results aren't extremely precise as a lot of material from that time has been destroyed, but it's enough to draw decent conclusions.",
"You know way back in the 2000's when you had to confirm that you weren't a robot by \"reading\" two words. One of those words came from a historical document, the other was computer generated. Eventually they had enough data to teach a computer to digitise the documents and suddenly they could put any book online. Libraries did this with all their historical texts to make them available to scholars and Google looked at all that data. So if it's in a library somewhere and it was correctly scanned, then Google has it. There might always be and older lost text with the real first usage, but as a ballpark there's nothing to suggest it's far off."
],
"score": [
8,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
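What the answers above describe reduces to counting a word's share of all words per year across scanned texts. A toy version with two made-up documents standing in for millions of books:

```python
from collections import Counter, defaultdict

# Count a word's share of all words, per year, across (year, text) documents.

docs = [(1850, "the whale the sea the whale"),
        (1950, "the atom the rocket")]

totals: dict[int, Counter] = defaultdict(Counter)
for year, text in docs:
    totals[year].update(text.lower().split())

for year in sorted(totals):
    share = totals[year]["whale"] / sum(totals[year].values())
    print(f"{year}: 'whale' frequency = {share:.2%}")
```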
d4dil0 | Are servers giant advanced hard drives or is there more to it? | Technology | explainlikeimfive | {
"a_id": [
"f09wi5s",
"f0a5q7q",
"f0avsq0"
],
"text": [
"Servers are at a basic level more or less just like any computer with better diagnostics and redundancy. Features like Two power supplies, multiple hard drives, multiple network interfaces and fault tolerant ram are all pretty common.",
"A server is a surprisingly ordinary computer by most standards. But when you think of servers you usually mean things like: * Fixed size so they can be mounted in a standard 4-post rack (or similar) * More reliable and redundant parts, like ECC memory (can detect corruption and even repair it if only 1 bit got flipped) * Generally better specs. CPUs with 20 cores, 2 CPUs on a motherboard, and 512 gigabytes of RAM are not unreasonable * \"Server\" grade operating systems, like Windows Server, Linux, or some BSD variant are most common. But they can still run the same programs you know like Chrome. But if Chrome was want you wanted to run on a server, I'd expect you to have 100 tabs open or it would be a serious waste of a $10,000 computer.",
"The word \"server\" can be confusing. It's a word that's used to mean multiple, slightly different, things in different contexts. - (1) A server is a *computer program that talks to other computer programs*. It's a program that does things when asked by other programs. (The \"asker\" program is called a *client*.) - (2) A server is a *computer program that listens for requests coming in from a network* [1]. It's a program that does things when asked by programs running on other network-connected computers. A server program (definition 2) is a server program (definition 1) that uses a specific means of communicating with its clients (a network). - (3) A server is *the main purpose a specific computer's being used for*. If a particular PC or laptop is being used to run server programs (definition 2), then that PC or laptop itself is called a *server* (definition 3). - (4) A server is *a computer specifically designed to be used as a server* (definition 3). If you're running server programs for a lot of users, or otherwise need a large amount of resources like CPU / RAM / disk, an ordinary PC might not be powerful enough to handle what you need. So you might decide to buy a more powerful computer. Instead of shopping for a laptop or a desktop PC, the kind of computer you're shopping for is a *server*. Computers that are significantly more powerful than ordinary PC's can be very expensive. So usually this comes up in the context of running some kind of Internet-based business. [2] - (5) A server is *a computer you can rent from an Internet-based company*. This could be a full computer (definition 4), but it could also be a \"virtual server\". [3]. Virtual server rentals are surprisingly affordable, for example, [this company]( URL_0 ) will rent you a server for $5 per month. [1] Usually \"the network\" is the Internet. But sometimes people / companies run their own networks for various reasons, so \"the network\" could be some other network. [2] A server (definition 4) is basically a PC that's way more powerful in some respects (more / faster disks, more RAM, many CPU cores), less powerful in others (no need for a powerful graphics system capable of playing 4K video). A server (definition 4) has different design features. Parts are designed for reliability / redundancy / minimizing downtime, e.g. power supplies and disks can be replaced without shutting down the system. A server (definition 4) often doesn't have a \"tower\" case like an ordinary desktop PC, instead it's designed to fit in standard-sized \"racks\". Costs are higher. [3] Today's computers have hardware features and supporting software that \"pretends\" one large computer is multiple smaller computers. One large computer that can pretend to be one hundred small computers is expensive, but it's much cheaper than one hundred small computers. Many large Internet companies buy a bunch of big computers, set them up to pretend to be a huge number of small computers, then rent out the small computers to other Internet companies. This lets the other Internet companies focus on their business without worrying about maintaining racks of computers. The rental systems are fully automated. Meaning rentals are billed by the hour, or even by the minute. You can rent a new computer system, or cancel an existing rental, in a matter of minutes. You can even set up the rental orders to be managed by a computer program, so it can e.g. 
automatically rent more computers when your website's starting to get slow and overwhelmed, or cancel rentals to save money when few people are visiting your site and you have more computers than you need at this precise moment. This has made life much easier for Internet companies, because you can rent exactly as many computers as your business needs right now, and rent more / less right away whenever your needs change. This new way of renting computers is called \"cloud computing.\""
],
"score": [
7,
5,
5
],
"text_urls": [
[],
[],
[
"https://www.digitalocean.com/pricing/"
]
]
} | [
"url"
]
| [
"url"
]
|
|
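For the "program that listens for requests" sense of server (definition 2 in the long answer above), here is a minimal TCP echo server using only Python's standard library. It handles a single client and then exits; real servers add concurrency, request parsing, and error handling. The address and port are arbitrary choices:

```python
import socket

# A minimal TCP echo server on localhost.

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("127.0.0.1", 9000))   # arbitrary local address and port
    srv.listen()
    conn, addr = srv.accept()       # block until a client connects
    with conn:
        data = conn.recv(1024)      # read the client's request
        conn.sendall(b"echo: " + data)
```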
d4e439 | what does the large hadron collider actually do | Technology | explainlikeimfive | {
"a_id": [
"f0a6ij2"
],
"text": [
"It takes particles and makes them travel in a big loop. As they travel around and around they get faster and faster and then they collide with each other. When they collide they break up into lots of smaller different particles which are then detected. These detections are the discoveries being made there."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d4gfd4 | The working of the Hubble space telescope and how its able to capture galaxies far away. | Technology | explainlikeimfive | {
"a_id": [
"f0bhh81"
],
"text": [
"There are a few things to remember. 1) the sky is an asshole. Makes everything hard to see and whatnot. Relative to the emptiness of space, the sky makes everything blurry, so putting a telescope right in space solves that issue 2) Hubble is a big boi. When it comes to telescopes the bigger the better. And being over 40 feet long and 10 feet wide, that goes pretty well for our good little Mr hubble 3) while it was built in 1990, it's been updated and repaired during various missions, and can send all the funky stuff it finds down to earth for our greedy little human eyes. We tell it what to do, and it is our cosmic vision slave"
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d4m6xp | ; How do we know atoms exist? | Like obviously they exist. We can measure them, split them and manipulate them. But like, how? | Technology | explainlikeimfive | {
"a_id": [
"f0e4vpo",
"f0eb32h"
],
"text": [
"In fact, technology is now advanced enough that there are photographs of atoms. You can google \"image of atoms\" to find the pictures. But early investigation of atoms involved things like bombarding them with subatomic particles and measuring the scattering and reflecting patterns.",
"In ancient Greece, Democritus thought that if you cut something in half again and again, you would at last have to stop. He said that this last piece of matter could not be cut any smaller. Democritus called these small pieces of matter atoms, which means \"indivisible.\" In 1803, Dalton defined the atom as the basic unit of an element that can take part in a chemical combination, and made up some rules about how they work. Around 1900, Thomson noticed that atoms were made up of positive charges and negative charges, but he thought they were all mixed together. In 1910, Rutherford figured out that all of the positive charges in an atom are in the middle, and all the negative charges are around the outside. (This is probably what you picture when you think of an atom. [Looks like this.]( URL_0 )) In modern physics, we believe that the electrons don't circle around in paths like that, but are instead mathematically probable to be in certain places based on how much energy they have and where the other electrons are."
],
"score": [
20,
12
],
"text_urls": [
[],
[
"http://www.whoinventedfirst.com/wp-content/uploads/2017/01/atom.jpg"
]
]
} | [
"url"
]
| [
"url"
]
|
d4mi6c | How did chemists in the past discover and collect gaseous elements such as helium? | Technology | explainlikeimfive | {
"a_id": [
"f0ebzjm"
],
"text": [
"Helium was discoed by observing light from the sun the name is from Helios the Greek word for the sun. Spectroscopy is the study of the interaction between matter and electromagnetic radiation. Different atoms and molecules releases and absorb light of different colors. Neon for example release a distinct red light when you excite it with electricity and that's why neon light have the colors they have. If you shine white light trough neon gas it will absorb the same red color so if you spit the light with a prism you will have dark line in the red part where neon have absorbed the light. There is multiple color that neon absorbent to a larger os smaller degree. The same it true for all elements. The colors of fire works is because of the emission colors of metal salts you add to the gunpowder. Yellow is produce by adding Sodium and old sodium lights that is common for streets light also release the distinct yellow color So if you test what color of light the atoms you know about absorb and write them down you can use that information to identify a gas. When it was done with sunlight We saw the line we identified the line Hydrogen has but we also found line that matched no element we had found on earth and called it Helium. The line of helium look like [Helium\\_spectrum]( URL_0 ) where the bright line is what color helium emit/absorb The observation of sunlight was a decade the gas was fires observer on earth from release during a volcanic eruption and almost 30 years before the element was isolated on earth. This was done by dissolving a type of uranium or in a acid. Helium is produce by radioactive decay of uranium."
],
"score": [
5
],
"text_urls": [
[
"https://en.wikipedia.org/wiki/Helium#/media/File:Helium_spectrum.jpg"
]
]
} | [
"url"
]
| [
"url"
]
|
|
d4o7j2 | How can prosthetic limbs be controlled with our brain? | Like prosthetic hands with fingers, how do they work? | Technology | explainlikeimfive | {
"a_id": [
"f0eprbx"
],
"text": [
"Same way your actual limbs function really. You're able to control your body through electrical signs sent from your brain throughout your nervous system, a functioning/responsive prosthetic arm will act according to those electrical currents. Needless to say, it wasn't exactly a simple/easy invention and took quite a lot of studying & breakthroughs on our knowledge of the nervous system. E: This is getting disliked? oO Guys I know this is \"explain like I'm five\" but... You're not actually five right? Not all things can be explain that simply lol."
],
"score": [
28
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
d4pzjh | how does a pc know the temperature of everything in the pc? Does it have thermometers in every component? I want to know the GPU and CPU the most. | Technology | explainlikeimfive | {
"a_id": [
"f0f7t45",
"f0fasav"
],
"text": [
"The motherboard measures the CPU temperature using a thermistor — an electronic component whole resistance changes with temperature. A GPU has its own thermistor on board.",
"The very short answer is yes, the vital components, such as CPU, GPU, sometimes also the motherboard and HDD have small thermometers that allow not only for monitoring the temperature, but also prevent damages from overheating."
],
"score": [
19,
17
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
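A common way firmware turns a thermistor's resistance into a temperature is the beta-parameter model, 1/T = 1/T0 + ln(R/R0)/B. The constants below are typical datasheet-style values chosen for illustration, not from any specific motherboard sensor:

```python
import math

# Beta-parameter thermistor model: 1/T = 1/T0 + ln(R / R0) / B  (T in kelvin).

R0, T0, B = 10_000.0, 298.15, 3950.0   # 10 kOhm at 25 degC, beta = 3950 K (assumed)

def thermistor_celsius(r_ohm: float) -> float:
    inv_t = 1.0 / T0 + math.log(r_ohm / R0) / B
    return 1.0 / inv_t - 273.15

print(f"{thermistor_celsius(10_000):.1f} C")  # 25.0 C by construction
print(f"{thermistor_celsius(3_000):.1f} C")   # hotter -> lower resistance (~54.8 C)
```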
d4s88b | How did WASD become the standard for movement control for video games on PC? | Technology | explainlikeimfive | {
"a_id": [
"f0fvfob",
"f0fvktp"
],
"text": [
"For a long time, gamers used nothing but the keyboard to move and look around in 3D third person and first person games. Many people didn't even really think to use a mouse to play, especially since mouse looking in very early first person shooters was limited to moving on the x-axis (moving left to right) - it didn't give any huge advantage over looking left and right with the keyboard. Over time, as mouse looking became more popular, people started experimenting with other sets of keys to use for moving around. WASD was one option. ESDF was another one, some people even experimented with random crap like ASZX or ZXCV. WASD started becoming the most popular during the days of Quake, since people kept asking one of the bigger name tournament winners of the time what his setup was and he explained it. So, he wasn't necessarily the first person to come up with this control scheme, but he definitely helped make it more popular among the gaming scene of the time, and with time it just became a natural standardized configuration for PC games.",
"WASD controls have existed since the 80's but it was hardly a standard. Popular FPS games like Doom and Duke Nukem 3D had controls all over the place until the late 90's. Quake player Dennis “Thresh” Fong is credited with popularizing the control scheme, as his recommended configuration was packaged along with versions of the game. After Quake's success the control scheme was used as the default in Half-life and it's most successful mods Counter-Strike and Team Fortress which made it the de-facto standard for FPS games. As for why WASD vs WADX or the arrows keys or number pad? WASD is naturally comfortable for the left hand on the keyboard, the thumb can is in position to hit space and the pinky shift+tab. This complements using the mouse in the right hand for aiming. Prior to mouse aiming in games the left and right arrows or A and D keys would turn left and right vs strafing side to side."
],
"score": [
31,
14
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d4syxi | I'm watching the MASH TV show on my wide screen TV. It takes up the whole space in perfect ratio. How did it look on 4:3 tube TVs? | Was it cropped? Was it "squeezed" and everyone looked more vertical? | Technology | explainlikeimfive | {
"a_id": [
"f0g9jsg",
"f0gvilo"
],
"text": [
"You are either watching a cropped version where the top and bottom of the picture are chopped off (meaning you can't see foreheads or shoulders in closeups), or they went back and remastered the original film into 16:9, which would mean that every character is going to be in the middle of the screen and nothing will be happening on the sides (perhaps even to the point where you can see filming equipment or the edges of sets). MASH was filmed in 4:3 without any intention of it ever being converted to widescreen.",
"Yes, it was remastered from film. Partly they cut content from the top and bottom edges. Partly they add content on the sides, because the film format was a little wider than TV. I've seen a few cases of mic-in-picture or mic's-shadow-in-picture on the sides. It's not as the director originally intended and personally I would have preferred them to have black bars on the side. They did make a good effort to do it well though; the episode that had a clock continuously superimposed in the corner has had the clock effect correctly re-done."
],
"score": [
5,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
d4tkf1 | How can Google store billions of hours of YouTube videos? The sheer magnitude of youtube is enormous and undoubtedly the total amount of storage needed is just as monumental. | Technology | explainlikeimfive | {
"a_id": [
"f0gapgv",
"f0gdn31",
"f0giqqb"
],
"text": [
"Storage is cheaper than sin. A 1TB external hard drive is like $50. And that is for consumer grade portable stuff. Google buys much more powerful devices.",
"Because storing huge amounts of data is far less expensive than other ways to draw eyeballs and extract monetizable user data.",
"Google has storage centers in fly over states across the country, and fly over countries around the world. The land is basically free, and the storage itself isn't much more expensive In fact, a bigger concern to them is not how much it costs to store the data, but how to store it physically close to its consumers. If you are in NYC, streaming a youtube video stored in Georgia is way easier / faster / cheaper than streaming from Nevada."
],
"score": [
8,
5,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
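Some back-of-the-envelope arithmetic shows why "storage is cheaper than sin" holds up at YouTube scale. Every number below is a rough illustrative assumption (the "500 hours uploaded per minute" figure is commonly cited but not verified here, and bitrate and drive costs are guesses):

```python
HOURS_UPLOADED_PER_MINUTE = 500   # assumed upload rate
AVG_BITRATE_MBPS = 5              # assumed average encoded bitrate (Mbit/s)
COST_PER_TB_USD = 20              # assumed bulk hardware cost per terabyte

# minutes/year * hours-per-minute * seconds-per-hour = seconds of video per year
video_seconds_per_year = 60 * 24 * 365 * HOURS_UPLOADED_PER_MINUTE * 3600
bytes_per_year = video_seconds_per_year * AVG_BITRATE_MBPS * 1e6 / 8
petabytes = bytes_per_year / 1e15

print(f"~{petabytes:,.0f} PB of new video per year")
print(f"~${petabytes * 1000 * COST_PER_TB_USD / 1e6:,.0f}M in raw drives")
```

Under these assumptions the raw disk cost lands in the low tens of millions of dollars per year, which is small next to the ad revenue the same videos generate.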
|
d4vidt | Why are clapboards necessary when filming movies? | I'm talking about the black and white board that someone has to clap and say the scene, take number, and "action!" while rolling the camera. | Technology | explainlikeimfive | {
"a_id": [
"f0gv5u8",
"f0gvbph",
"f0gvd61",
"f0gw7b6",
"f0hpcd8",
"f0hgtdb",
"f0h1ut7"
],
"text": [
"They have been used to synchronise sound and picture. I'm not sure, if they are still necessary.",
"Helps the editor know what scene they are dealing with afterwards. Especially important when they were recording onto film as rolls might get lost and or misplaced. Recording that information with the shot made things easier to work with. Oh and the ‘clapper’ bit is used to help sync the sound and the picture. It was such a critical job that I heard the director usually fired you on the spot if you got it wrong. Not sure how true this is though.",
"The final, released sound in a movie or television show is not captured at the time of filming - lot of sound work is done in post production and added to the video afterward. The clapboard gives an easy way to sync the sound to the video when it's ready.",
"They are easily identifiable \"markers\" for an editor. Traditional film ended up being a long reel, and finding out where to cut and splice together the clips that actually \"went to print\" was a process that clapboards made more efficient because they took up the whole frame, and also gave the actors a queue for when to begin. Now with digital film and editing, its still useful to find and label the takes one wants to use out of the raw footage, even if they may \"keep rolling\" in the case of a blooper.",
"one of my first jobs in film making was 2nd AC, who is the person who clacks the slate. they're used for a few reasons, one is because the clack is so sharp and causes such a spike in audio that's how you sync the audio with the video. notice the slate also has the scene and take number on it and other things as well, one of the reasons that stuff is written on it is so that the editor can see this and immediately know which take it is. and another reason is for the script supervisor, the scriptee, so that they can keep track of it and write all of the information about the scene and the take down, again to help the editor. :) to walk you through it step by step, the 2nd AC writes on the slate the date, director, title, take number, scene number, roll, filter, and MOS, (we always joked it stood for Mit Out Sound cause it means you're shooting with sound from a boom or LAVs instead of from the camera) and the scriptee writes down all of these things as well, they also take note of every detail of prop placement, wardrobe, director's notes, etc. so that the editing process is easier, you have all of the information right there, and you look at scriptee's notes and the first frame of the take and know immediately if that take is no good, or if just the last part is, or if one line is just slightly off, etc. honestly, although this isn't what your question is about, script supervisors are the real heroes of film making. I'm amazed by how organized they are and how much rides on their shoulders. one time we tried to skirt by without one and shot an entire day of close ups with part of the wardrobe missing on an actor and never even noticed.",
"It's an easy way to attach the metadata, and the clapper part itself makes it easy to synchronise the video and audio components.",
"Back in the day the picture and sound were recorded separately and this gave a visible/audible point to synchronize them afterwards."
],
"score": [
87,
57,
11,
9,
9,
5,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
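The "sharp spike in audio" trick the 2nd AC describes is easy to automate: the clap is usually the loudest transient near the head of the take, so software can find it in both the camera's scratch audio and the field recorder's track and line the two up. A minimal sketch with NumPy, assuming both tracks are already loaded as arrays at the same sample rate:

```python
import numpy as np

def find_clap(samples: np.ndarray) -> int:
    """Index of the loudest single sample - a crude stand-in for the slate clap."""
    return int(np.argmax(np.abs(samples)))

def sync_offset(camera_audio: np.ndarray, recorder_audio: np.ndarray,
                sample_rate: int = 48_000) -> float:
    """Seconds the recorder track must be shifted to line up with the camera."""
    return (find_clap(camera_audio) - find_clap(recorder_audio)) / sample_rate

# Toy demo: the same "clap" placed at different points in two quiet noise tracks.
rng = np.random.default_rng(0)
clap = np.concatenate([np.zeros(10), [1.0], np.zeros(10)])
cam = 0.01 * rng.standard_normal(48_000); cam[12_000:12_021] += clap
rec = 0.01 * rng.standard_normal(48_000); rec[30_000:30_021] += clap
print(f"shift recorder by {sync_offset(cam, rec):+.3f} s")  # -> -0.375 s
```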
d4xbtr | How can we discern the difference between 144Hz and 60hz monitors if we can’t discern the 60Hz oscillation of lights in our homes? | Edit: as a couple people pointed out, the flicker would typically be 120 Hz for lights connected to US AC mains power because of the way bridge rectifiers, etc work. The voltage crosses 0 V twice per cycle. Additionally, the type of light source will influence how much flicker, if any occurs. I mentioned in a thread below (and later someone used the phrase), but this wiki article is helpful, though not exactly ELI5: URL_0 | Technology | explainlikeimfive | {
"a_id": [
"f0hhco9",
"f0he0yg",
"f0hmr08",
"f0j7ubb"
],
"text": [
"Lights don't oscillate at 60 - the light goes through a full cycle at 60, which includes two pulses - one positive, and one negative. And if you do find a light source that reacts strongly to the mains, like old style fluorescent lights, the 100/120 Hz flicker can be seen by many people. It shows as a distinct flicker in your peripheral vision, and some people find it really disturbing.",
"Your lights are not flickering at 60 Hz. Either they are light sources that can't be turned on and off that fast, for example old-fashioned incandescent bulbs. The incandescent bulb filament can't cool down in in 1/120th of a second, so it is still lit during the zero crossing of the mains current. Or, if you have modern LED lightbulbs, they contain rectifiers and capacitors that convert the mains AC to a nice, smooth DC.",
"Motion is the main reason. When looking at a mostly still scene, anything above the flicker fusion rate (about 24 Hz) will mostly look the same. Your brain can ignore the flicker and still see a clear image. When things are moving fast, which happens more while gaming than in real life indoors, your brain is trying follow along with the moving object. Your brain is trying to track a smooth motion, but a sequence of frames leaves gaps in the motion. Your brain tries to understand what it's seeing, but the gaps cause motion blur, leading to reduced visual clarity. More fps results in smaller gaps between frames and increased clarity. ^(Side note about lights: Good LED lights have a big enough capacitor that you won't get a 60 Hz flicker. Many dimmable LED lights use a PWM flicker at a much higher frequency than 60 Hz, but low enough that a slow motion video will capture it. Cheap LED lights do have a flicker, which drives me insane. I don't know the exact reasoning, but it's either not having a big enough capacitor or not having a full bridge rectifier.)",
"/u/JoelMay is right - motion is the reason. Here is a demonstration: [ URL_0 ]( URL_0 ) Try experimenting with different speeds and framerates. Also check out other animations there, for example this one - [ URL_0 eyetracking]( URL_0 eyetracking) \\- first look at the stationary UFO, and then at the moving one. The effect is framerate dependent, and also depends on how the display works: if it's continously emitting light (LCD) or flickering (CRT and ultra-low motion blur monitors)."
],
"score": [
77,
21,
12,
3
],
"text_urls": [
[],
[],
[],
[
"https://www.testufo.com/",
"https://www.testufo.com/eyetracking"
]
]
} | [
"url"
]
| [
"url"
]
|
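The "gaps in the motion" point above is just arithmetic: an object crossing the screen jumps a fixed number of pixels between one frame and the next, and higher refresh rates shrink that jump. A quick illustration (screen width and motion speed are arbitrary assumptions):

```python
speed_px_per_s = 1920   # assume an object crossing a 1920-px screen in one second

for hz in (60, 144, 240):
    gap = speed_px_per_s / hz   # pixels skipped between consecutive frames
    print(f"{hz:>3} Hz: object jumps {gap:5.1f} px between frames")
```

At 60 Hz the object teleports 32 px per frame; at 144 Hz only about 13 px, which is why fast motion looks visibly smoother even though a static image looks identical.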
d4xesk | Why does the same website sometimes load lighting fast with just 1-2MB of download speed and other times struggle to load up all the way with 10MB+? | To clarify...where I live I have 1-4MB depending on the day but often it works just fine, with every website popping up immediately. So it surprises me when I am overseas at a hotel or cafe with 10-20MB of speed that the internet can often lag a bit. I can tell a big difference when streaming but not browsing. Perhaps it's related to the number of people logged into to the same WiFi network...actually, I am sure that's it now that I think about it but I'll ask anyhow. | Technology | explainlikeimfive | {
"a_id": [
"f0hge7o",
"f0j4xr2",
"f0helxv",
"f0j3obt",
"f0j5189"
],
"text": [
"Your browser caches pages and will save it to your device to speed loading. That cache will periodically get cleared and require the entire site to be loaded again. Ads add a lot of overhead as well with images, animations, video etc. If those change it will impact loading time. Plus there are a lot of libraries that may need to get loaded behind the scenes. Again, ads may call different libraries that also need to be loaded. Then there's the time of day you visit. There may be higher traffic at certain times of day. There might be issues with a server somewhere along the line. There are tons of issues that can come up that will slow things down but ads are mostly to blame.",
"ALOT of factors. You wouldn't believe how many Modern sites don't load from 1 location. It's not just NFL's servers. It's content deliver networks. It's possibly Amazon's or Microsoft's or Google's or some other cloud company. It's third party libraries hosted on yet another set of servers. Each of those contents from hundreds of different locations are being loaded. Each of those servers has to serve thousands to millions of requests each second. Your request is a drop in the bucket. Each request runs thru 10-30 hops on the internet to get to you. Each of those hops is a network device that serves thousands to millions of data requests a second. Your data is a drop in the bucket.",
"It could be a number of factors, but most likely it's due to the number of people using that website. For most parts of the internet, you can think of it like a hose. The more people trying to use it (a website, the internet in your house/location, the internet in your neighborhood), the slower it's gonna be because its trying to deal with more \"water\" for the same size hose, so what ends happening is it forcibly slows down each person's usage, so each person gets less \"water\" from the hose but everybody still gets some. In this case, because you're getting 10 Mb speed I'm the cafe, it's most likely due to the amount of people using the website. If it were due to the amount of people in the cafe, a speedtest would show you a lower download speed.",
"Let's ignore browser caching and wifi network congestion and focus on distance. Imagine your connection is like a water pipe and a server is a big holding tank with a valve at the other end. A 20MB connection is like a wide pipe, it can send a lot of water at once and fill your water bottle very quickly. However, when you're overseas it's like you have a very long pipe. So while it may still be wide, it takes a long time for the water to get all the way down the pipe to you. The 20MB is the capacity of the connection, how much data can be sent at once. The actual \"speed\" of the network is usually what people call the \"ping\", how quickly can a request be sent to a server and back again. Ping is greatly affected by physical proximity to the server because the further you go, the more servers you need in between to pass your request along. Also, to mitigate this issue, international sites will use Content Delivery Networks (CDN) to create duplicates of their pages and/or data so that a copy of that data is as close to you as possible. When you watch a YouTube video in the US, you're pulling it from a nearby US CDN, when you're watching in Europe, you're pulling from a nearby European CDN.",
"Imagine the internet like a pipe system. Some pipes are big and can carry lots of data, others are smaller and can carry less data. Some pipes are pretty much straight from the server you're accessing to your computer with little in the way, others will twist and turn, maybe even go to multiple other cities until they reach you. That impacts latency. Some pipes can also be leaky and when your data \"leaks\" the server might need to send the data again. This is known as packet loss. What the speed your internet provider tells you is just how big the pipe coming into your home is and the speed of the internet your computer reports is how big the pipe coming into it is. That has nothing to do with the distances the data has to travel (latency) or the leakage of the pipes (packet loss) and all 3 of these impact how fast a website is loaded. Also, as you said, on crowded WiFi networks things get a bit more complicated. Imagine the WiFi router as a boss that orders a waiter (the antenna) to fetch and send orders from/to clients (the devices). What you might notice is that if there's a single antenna there's only a single waiter that has to serve multiple devices. So if you start doing a speed test which measures how fast a big file can get to your device the waiter will do a few trips to your client with very big orders. But a meal(website) may have many many small plates instead. So in order to receive all of them you'll have to wait for the waiter to complete the other clients' orders multiple times before your entire meal (the website) is at your table. There are other things like caching (in the meal analogy, imagine you have a fancy 3D printer for meals so if you feel like eating the same thing you've already had you can make it again without having the waiter serve you), fancier routers with more antennae (i.e. more waiters), ad-blocking (either telling the waiter \"don't bring me anything containing gluten(an ad)\" or the waiter bringing you something containing gluten but you throw it in the trash immediately), etc."
],
"score": [
41,
7,
5,
5,
4
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
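A crude model ties these answers together: loading a page costs both round trips (latency, which grows with distance and hop count) and transfer time (bandwidth). The numbers below are illustrative assumptions, not measurements:

```python
def load_time_s(page_mb: float, round_trips: int, rtt_ms: float, mbps: float) -> float:
    """Very rough page-load model: latency cost + transfer cost."""
    return round_trips * rtt_ms / 1000 + page_mb * 8 / mbps

# A chatty page (many small requests) on a nearby, slow link...
print(f"home, 4 Mbps, 20 ms RTT:   {load_time_s(2, 30, 20, 4):.1f} s")
# ...versus a fat but distant link: bandwidth is ample, every round trip hurts.
print(f"hotel, 20 Mbps, 150 ms RTT: {load_time_s(2, 30, 150, 20):.1f} s")
```

With these made-up numbers the 20 Mbps hotel link is actually slower (about 5.3 s vs 4.6 s), because thirty 150 ms round trips swamp the faster transfer, which matches the experience described in the question.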
d4xhvc | stack pointer, frame pointer and link | Hi. I'm currently taking OS class and I haven't found a definitive answer to a.) what are stack pointer, frame pointer and link and b.) what is their purpose? My lecturer refers to these terms a lot, which is understandable, but doesn't write down that they are. I have a vague understanding that stack pointer points to the top most stack. But for the other two I'm lost. I somehow got an impressions that with frame pointer you know the return address from the top most stack, e.g. some subroutine, to the stack before it, but I really don't know. Thanks! | Technology | explainlikeimfive | {
"a_id": [
"f0hg6w4"
],
"text": [
"I'm kind of assuming this is in assembly, but either way that's how I'll be explaining it because that's my main source of knowledge. I don't think it should be too different in C or other languages. I'm not sure this is exactly explaining like you're 5, but the frame pointer points to (essentially tells the program) where the base of the stack frame is. On the other hand, the stack pointer points to the top of the stack. The purpose of the stack pointer is to let the program know the next available stack memory address, aka letting it know what memory it can use and access. Using this, in assembly at least, you can also access data in the stack by taking the stack pointer and incrementing the value, and the stack pointer memory address decreases every time you add to the stack The frame pointer will always have the same value (the memory location of the base of the stack frame) and is mostly passed in functions so the program can access it and that memory location is retained and still usable"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
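Since the answer above is abstract, it may help to simulate the idea. The sketch below models the stack as a Python list growing downward through memory, with `sp` marking the top, and shows what a function prologue does: push the return address (the "link"), push the caller's frame pointer, then set the new frame pointer to the base of this call's frame. This is a simplified model of common calling conventions, not any specific CPU:

```python
class Stack:
    def __init__(self, size=16):
        self.mem = [0] * size
        self.sp = size       # stack pointer: next free slot, grows downward
        self.fp = size       # frame pointer: base of the current frame

    def push(self, value):
        self.sp -= 1
        self.mem[self.sp] = value

    def call(self, return_addr):
        """Function prologue: save link and caller's frame, start a new frame."""
        self.push(return_addr)   # link: where to resume in the caller
        self.push(self.fp)       # saved frame pointer of the caller
        self.fp = self.sp        # this frame's locals sit below fp

    def ret(self):
        """Function epilogue: discard locals, restore caller's frame, return."""
        self.sp = self.fp            # drop this frame's locals
        self.fp = self.mem[self.sp]  # restore caller's frame pointer
        self.sp += 1
        return_addr = self.mem[self.sp]  # pop the link
        self.sp += 1
        return return_addr

s = Stack()
s.call(return_addr=0x40)  # enter a function
s.push(123)               # a local variable, addressed relative to fp
print(hex(s.ret()))       # -> 0x40: execution resumes at the saved link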
d4y3m8 | How did TV stations broadcast live TV back when cameras used film? | Like for big events or even daily things such as the news, did they have to quickly develop and scan the film? That seems impossible. Were early digital sensors used or something? | Technology | explainlikeimfive | {
"a_id": [
"f0hmj13",
"f0hp1yw"
],
"text": [
"Because TV cameras didn't use film, they used vidicon tubes, which are a type of cathode ray tube. Essentially, the tube turned light that entered the camera's lens into beams of electrons that could then be sent as a signal through wires and broadcast antennas in real time. Note this this was all analog - there was no digital technology in use.",
"Imagine you have a picture in an art gallery that you want to copy by hand. One way to do it would be to divide your drawing page into a grid of squares then copy the drawing systematically, copying each area of the original picture onto the corresponding grid on your piece of paper. You start from the top left most one then work your way through, left to right top to bottom. That's is basically how video camera's worked prior to the 80's. Light detectors in the camera scan across the image line by line just like your eyes would when copying the picture in the art gallery. This image was then converted to an electrical signal which could then by transmitted to peoples homes. Your TV would then take that signal and convert it back into an image."
],
"score": [
13,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
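The "copying square by square" analogy maps directly onto code: a raster scan just walks the image left to right, top to bottom, and emits one brightness value after another, which is exactly the kind of 1-D signal an analog camera tube produced. A toy sketch:

```python
def raster_scan(image):
    """Flatten a 2-D image into the 1-D brightness signal a tube camera sent."""
    signal = []
    for row in image:            # top to bottom
        for brightness in row:   # left to right
            signal.append(brightness)
    return signal

def rebuild(signal, width):
    """What the TV set does: redraw the signal line by line."""
    return [signal[i:i + width] for i in range(0, len(signal), width)]

picture = [[0, 5, 0],
           [5, 9, 5],
           [0, 5, 0]]
assert rebuild(raster_scan(picture), width=3) == picture
```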
d4yfub | How can a group of hobbyists or whatever make a deepfake that sounds exactly like Bill Gates, while digital assistants made by the world's biggest tech companies still sound like robots? | Technology | explainlikeimfive | {
"a_id": [
"f0hv6n9",
"f0hpt74",
"f0hqv2t",
"f0hzofa",
"f0iannr",
"f0ht75g",
"f0hy104"
],
"text": [
"[Here's an example from 2018 of Google Assistant sounding \"too real\"]( URL_0 ) that people got uncomfortable and questioned if/how to disclose the person was talking to a robot. As others said, the tech is there. But humans don't exactly want it.",
"It's a different ball game to make a specific voice say specific things, another give a computer a human voice to say whatever it needs to say. Also there is some research to suggest we prefer those roboty voices because that way you know it's a machine and it's less creepy. Edit: Link, so folks stop sending me replies with the words uncanny valley. URL_0",
"Its not that they can't, its that they choose not to. Making a digital assistant sound too human could be uncomfortable for some according to a lot of market research. There is still a lot of trust issues abound when it comes to interactive technology so they risk alienating their client base by making the voice too human.",
"Next to 'uncanny valley' and processing time, ethics also play a role. It can make people feel uncomfortable, betrayed or even unsafe about future interactions if they were convinced they have been talking to a real human but it turns out to have been a robot. Not in the same way as the uncanny valley, but in a way of \"what if this [real human in a callcenter] is a robot too?!\" combined with all the negative thoughts (non-tech) people have about robots and automatization. While not everyone would be bothered by it, big tech companies have to consider this.",
"Don't forget, that deepfake audio takes a lot more processing power than the robotty sounding voices, plus they don't convey information as clearly as robotty voices. There's a reason Steven Hawking had his robotty voice, even though there were already more natural sounding artificial voices, it's because his voice was clearer more consistently, and that's the most important function needed in those use cases.",
"Hey a good podcast on this making voices where you just type stuff in, was 'finding your voice' on 'hidden brain'.",
"It takes hours to days to render what hes saying on a computer made for rendering. Can you imagine asking your alexa a question, and getting an answer after a week?"
],
"score": [
4055,
3211,
456,
71,
17,
5,
5
],
"text_urls": [
[
"https://www.cnbc.com/video/2018/05/08/google-assistant-is-getting-so-smart-it-will-soon-be-able-to-make-you-dinner-reservations.html"
],
[
"https://en.m.wikipedia.org/wiki/Uncanny_valley"
],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d4ze0s | How games can continue playing without a disc | Technology | explainlikeimfive | {
"a_id": [
"f0j2piw",
"f0i08sq"
],
"text": [
"Let's say you want to read Lord of the Rings. So, you go to the library, pull the first book off the shelf and start reading. You can check out that book from the library and take it with you so you can continue to read while not at that library. But you can only keep reading until you get to the end of the first book, because the other two books are still back at the library. If you finish the first book, you're going to have to wait until you can get back to the library before you can continue reading. This is kind of what happened with your game. The disk contains the whole game. Your Wii grabbed enough info off the disk that you could run around Termina, and the Wii never needed to go back to the disk. This is just like how you could read the whole first book while not at the library. But as soon as you wanted to go to the swamp, the Wii needed to go back to the disk to get that information.",
"The game loads the entire files from the disc into the console's memory. When it reaches a part that wasn't loaded it prompts you to re-enter the disc."
],
"score": [
9,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
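The library analogy in code form: the console keeps whatever it has already read from the disc in memory, and only complains when you ask for something it never loaded. A toy sketch (the region names are made up):

```python
DISC = {"clock_town": "clock town data", "swamp": "swamp data"}  # made-up regions

class Console:
    def __init__(self):
        self.memory = {}          # RAM copy of whatever was streamed in
        self.disc_inserted = True

    def load(self, region):
        if region in self.memory:           # already in RAM: disc not needed
            return self.memory[region]
        if not self.disc_inserted:
            raise RuntimeError(f"Please insert the game disc to load '{region}'.")
        self.memory[region] = DISC[region]  # stream it in and keep a copy
        return self.memory[region]

wii = Console()
wii.load("clock_town")
wii.disc_inserted = False
wii.load("clock_town")        # fine: it's cached in memory
try:
    wii.load("swamp")         # never loaded while the disc was in...
except RuntimeError as err:
    print(err)                # -> Please insert the game disc to load 'swamp'.
```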
|
d53zix | Why does video rendering take quite long when it can be viewed back in real time? | Not even talking about having crazy effects in your video, just some cuts. | Technology | explainlikeimfive | {
"a_id": [
"f0jhl0z"
],
"text": [
"When you play it back what you're playing is a series of \"already rendered\" stills animated, much like flip book is. The rendering process is taking a set of \"ideas\" and turning them into the \"already rendered stills\". So..it's the difference between \"flipping through\" the flip book and \"drawing each page of the flipbook\"."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
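To make the flip-book point concrete: playback only has to display stills that already exist, while rendering has to compute every pixel of every still. A toy sketch where the per-pixel "effect" is an arbitrary formula chosen just to cost some work:

```python
import time

WIDTH, HEIGHT, FRAMES = 320, 180, 30

def render_frame(t):
    # Compute every pixel from scratch - this is the expensive "drawing" step.
    return [[(x * y + t) % 256 for x in range(WIDTH)] for y in range(HEIGHT)]

start = time.perf_counter()
rendered = [render_frame(t) for t in range(FRAMES)]   # the "export"
render_s = time.perf_counter() - start

start = time.perf_counter()
for frame in rendered:                                # the "playback"
    pass  # a real player just hands the ready-made frame to the screen
playback_s = time.perf_counter() - start

print(f"render: {render_s:.3f} s, playback: {playback_s:.6f} s")
```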
d53zra | When you’re playing chess with the computer and you select the lowest difficulty, how does the computer know what move is not a clever move? | Technology | explainlikeimfive | {
"a_id": [
"f0jg2h6",
"f0jhjqy",
"f0jhtaa",
"f0jj23k",
"f0jg1qn",
"f0jgnsz",
"f0jkaup",
"f0jis48",
"f0ji721",
"f0k4ihp",
"f0l3yzf"
],
"text": [
"The computer typically rates moves by looking ahead -- if I make this move, will I lose a piece or a good position in the future, or will my opponent. Setting lower difficulty tells the computer to look less far ahead, or to consider fewer possibilities before stopping.",
"Computers are ranking and scoring moves as it goes. When you lower the difficulty it will not look as far ahead, and purposly not choose the move it deems the best.",
"In short a computer is capable (with enough processing power) of looking at every possible move that could happen based on the current state of the board, and calculating the response to each move, and repeating this calculation until it hits the end of each possible list of moves. This builds what is called a decision tree. Once that tree is built, the computer can score it's potential moves based on how likely they are to lead to a win in the computers favor. Once all the moves are scored, it simply picks the highest scoring move and goes with that one. A difficulty setting may affect how moves are scored or it may require the computer to pick lower scoring moves so the game swings more in favor of the player. tl;dr - Computers can calculate the best moves possible, lower difficulty can force the computer to make weaker moves.",
"Generally, when you set a computer to play at a lower difficulty, three things are happening: * You're limiting the amount of time that the computer is allowed to \"think\" * You're limiting the number of moves ahead that the computer looks * You're denying the computer access to its opening book and its pre-selected \"good moves\" So if you take a lot of that stuff away, you really limit a computer's ability to select strong moves. It might not get so bad that it just throws its queen away and leaves its king open to an easy checkmate, but it might miss things like \"Oh, in two turns your knight can do some damage unless I move this pawn\" or \"if I don't move this rook now, I can be checkmated in 5 turns\" the way that a supercomputer would be able to calculate.",
"The computer typically rates moves by looking ahead -- if I make this move, will I lose a piece or a good position in the future, or will my opponent. Setting lower difficulty tells the computer to look less far ahead, or to consider fewer possibilities before stopping.",
"Well... if it knew what *was* clever, surely it would just pic one of the lower-ranking possible moves in its algorithm?",
"Making AI convincingly stupid is often a lot harder than making it cruelly difficult. The program has some method of determining the \"best\" moves by combining brute force (just calcuate all possible moves for the next few turns) and priorities. After running these programs over and over we start to know which priorities produce the best win rates and which produce the worst. To make the AI look dumb, you have it stick with the bad priorities more often and pick the \"best\" move less often. You don't want it to *never* pick the best move though, it should still respond believably to easy-to-see hazards and not just lose pieces any child would have repositioned.",
"A typical chess program analyzes a position by \"looking forward\" - it predicts the best moves to achieve a better result in subsequent moves. Setting it to low difficulty limits the number of moves it looks ahead. This allows the human player to more easily beat the program by employing better positional strategy (ie using human heuristics/experience to make \"better\" moves for the long term)",
"A bit simplified description is that a computer play chess by evaluating a position with some numerical score that is based on how the pieces is placed on the board. It it stat with the current position and test all possible move and evaluated by calculating the score. It reject the one that is bad for it and test all possible opponents move and take the one that is good for the opponent. Recent the alternative where the opponents have move that is very good for them and continue to test all alternative. So that for some time of for some number of moves and you can find what move that give you the best advantage even if the opponent do there best move. To change difficulty you primary limit the time or the numer of position the computer use to evaluate moves , you could also change the selection criteria so it select a move with a lower score. You could write in so that there is a 5% chance that is take a move that is very good for you.",
"Anyone remember the computer cheating in windows 95?",
"Since this question has already been answered in many very good ways I would just like to add a little bit of general AI knowledge on top. Most AIs are designed to the hardest difficulty first and then scaled downward. The designer creates a \"perfect\" AI that usually is too difficult to be fun and then scales that AI down by intentionally causing mistakes. This is particularly true for games with potentially perfect play or where the computer has a distinct advantage (such as reaction time). For first person shooters enemies usually deal less damage than players and have intentionally bad aim. For fighting games random pauses are often injected into the AI where it is not allowed to take an action or sometimes intentionally wrong actions are taken at intervals providing space for the player. The primary similarity is that AI is always designed from best to worst and it takes more skill, time and effort to make an AI bad at a game than good at a game. EDIT: Grammar"
],
"score": [
12759,
915,
293,
156,
38,
16,
13,
11,
8,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
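The depth-limiting and deliberate-blunder ideas from the answers above fit in a few lines of generic minimax. This is a sketch over a hypothetical game interface (`moves`, `play`, `over`, and `evaluate` are assumed methods, not from any library):

```python
import random

def minimax(state, depth, maximizing, game):
    """Score `state` by looking `depth` moves ahead."""
    if depth == 0 or game.over(state):
        return game.evaluate(state)          # static score of the position
    scores = [minimax(game.play(state, m), depth - 1, not maximizing, game)
              for m in game.moves(state)]
    return max(scores) if maximizing else min(scores)

def pick_move(state, game, depth=4, blunder_rate=0.0):
    """Lower difficulty = shallower `depth` and/or higher `blunder_rate`."""
    moves = game.moves(state)
    if random.random() < blunder_rate:
        return random.choice(moves)          # intentionally "dumb" move
    return max(moves, key=lambda m: minimax(game.play(state, m),
                                            depth - 1, False, game))
```

An "easy" opponent might call `pick_move(state, game, depth=1, blunder_rate=0.3)`: it only looks one move ahead and plays randomly a third of the time, which matches both mechanisms the top answers describe.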
|
d5660i | how does the battery indicator on electronic devices work? How can your phone know if it has like 15% battery left? | Technology | explainlikeimfive | {
"a_id": [
"f0jyyel",
"f0jz3b0"
],
"text": [
"Most simply detect the voltage and that can fairly accurately be used to calculate the charge state of the battery. Depending on the chemistry lithium cells will be about 4.2v fully charged and 3v fully discharged, with 3.7 being 50% charged. Then they'll have a bit of a cushion on either end because fully charging and fully discharging decreases the battery's life. Some devices use the current drawn from the battery and then subtract that from the capacity. So if it's a 2000mah cell and 1000mah has been drawn from it then it's 50% charged.",
"Batteries have a nominal voltage (Lithium is 3.7 usually, AAs are 1.5V, etc), but they are never exactly that. For Lithium it's more like 3.4-4.2V, and this voltage is pretty much proportional to the charge level. So your phone just measured the voltage and computes the charge level. In practice it's a little more difficult, the actual charge to voltage relationship is something [like this]( URL_0 ), and it depends a bit on temperature and load (the 2C, 1C, 0.5C are load) and every battery is a little different too. So your cell phone monitors the charge/discharge to figure out battery health and measure some of those numbers to figure out the curve for your battery, and then it uses that curve to lookup the charge of your battery."
],
"score": [
5,
3
],
"text_urls": [
[],
[
"https://electronics.stackexchange.com/questions/32321/lipoly-battery-when-to-stop-draining"
]
]
} | [
"url"
]
| [
"url"
]
|
|
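The voltage-to-percentage lookup both answers describe is just interpolation along a discharge curve. A minimal sketch; the curve points below are rough illustrative values for a lithium cell, not calibration data:

```python
# (voltage, percent) points along an assumed lithium discharge curve
CURVE = [(3.0, 0), (3.4, 10), (3.7, 50), (4.0, 90), (4.2, 100)]

def charge_percent(voltage: float) -> float:
    """Linearly interpolate the charge level from a measured cell voltage."""
    if voltage <= CURVE[0][0]:
        return 0.0
    if voltage >= CURVE[-1][0]:
        return 100.0
    for (v0, p0), (v1, p1) in zip(CURVE, CURVE[1:]):
        if v0 <= voltage <= v1:
            return p0 + (p1 - p0) * (voltage - v0) / (v1 - v0)

print(f"{charge_percent(3.55):.0f}%")  # -> 30% on this made-up curve
```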
d56t47 | this may sound naive, but why can we not just remove the carbon dioxide and other greenhouse gasses from the atmosphere with some sort of mechanical or chemical technology? Like why aren't we developing something like a big filter or vacuum that can remove it from the air? | Technology | explainlikeimfive | {
"a_id": [
"f0k30yk",
"f0k2o1f",
"f0k4q5h"
],
"text": [
"We *can* remove C02 from the atmosphere. The problem is it takes time and energy, and as long as we are still burning fossil fuels anything that takes energy hurts more than it helps. Trying to remove C02 now is like trying to rebuild your house while it's still on fire. Before we can really make things better we need to first stop it from getting worse. Trees are actually a really great way to sequester carbon, but they take time and space. Also the capacity is finite unless you chop down and store fully grown trees somewhere and replace them with new ones, but that leads back to the energy problem in my previous paragraph.",
"Actually there are startups doing this, but it costs a lof of energy to do this and once you removed the carbon dioxide you need to store it. You can't turn it back into coal and storing billions of tons of CO2 is also not a trivial task.",
"The eli5 version would go like this: Because it is expensive to do so. Expensive means lot of energy, most of which is electricity, and most of our electricity comes from power plants that produce lot of CO2. In a bit longer version, one has to think of the entire chain of an operation - it has to generate less CO2 then the amount you capture otherwise it's pointless. From mining through transportation to building and operating the machinery, and obviously the electricity itself which your machinery works off, you must calculate your entire carbon budget (basically how much CO2 each step releases) to see if you aren't actually making things worse."
],
"score": [
23,
6,
5
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d58mlr | How did scientists measure the distance from the Earth to the Moon in order to land a craft on there? | Technology | explainlikeimfive | {
"a_id": [
"f0kegjz"
],
"text": [
"Hold your finger up in front of this sentence and close one eye. Note which words your finger is covering. Now open that eye and close the other one. Your finger will now be covering different words. Without moving your finger, it seems to have \"jumped\" to a different position. This effect is called *parallax* and you can use it to figure out how far away your finger is from your face. All you have to do is measure the distance between your eyes, and then measure the angle that you saw your finger move. With those two measurements, you can use the same triangle math you learned in high school (\"sohcahtoa\") to get the distance to your finger. We do the same thing with objects in space. If I'm standing in Los Angeles and looking at the Moon, I might see it right next to a star in the background. If I then call you in New York and tell you to look at the Moon, you might see the Moon covering up that star. We're seeing it from different angles. So all we have to do is measure the angle of the difference between our perspectives, and measure the distance between where we're standing, and we can use math to get the distance to the Moon."
],
"score": [
26
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
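The finger experiment in the answer turns into a distance with one line of trigonometry: distance = baseline / (2 * tan(parallax / 2)), where the parallax is the total angular shift seen from the two ends of the baseline. A worked sketch; the LA-NY baseline and the shift angle below are illustrative numbers, not real survey data:

```python
import math

def distance(baseline_km: float, parallax_deg: float) -> float:
    """Distance to an object that shifts by `parallax_deg` across `baseline_km`."""
    half = math.radians(parallax_deg) / 2
    return baseline_km / (2 * math.tan(half))

# ~3900 km between observers, ~0.58 degrees of apparent shift against the stars
print(f"{distance(3900, 0.58):,.0f} km")  # ~385,000 km, close to the real ~384,400
```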
|
d5c5v0 | How are there so many blue ticked verified users on Instagram and Twitter that most people do not even know? | Technology | explainlikeimfive | {
"a_id": [
"f0l2ded"
],
"text": [
"Being verified is not an indication of how popular you are. It means they have gone through the formal verification process and met the requirements."
],
"score": [
13
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d5epn8 | What is a "biocomputer"? Do they actually exist or it's only a concept? | Technology | explainlikeimfive | {
"a_id": [
"f0ljlig"
],
"text": [
"Biocomputers use biological molecules such as proteins, DNA and RNA to store data and perform calculations. It is possible to engineer biological systems to behave in a certain manner based upon a certain input, storing and transmitting data as needed, which is basically what a normal computer does. They do exist. An advantage of biocomputers would be that they will eventually be able to grow them, making them very cheap to produce. A disadvantage is that they're limited in what they can do compared to normal computers today, but this is a topic of research."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d5hdhp | Why are there more transistors in a flagship phone processor like a snapdragon 855 or a13 bionic compared to AMD ryzen 7 2700 cpu? What am I missing? | The ryzen has about 5 billion transistors and the a13 bionic has about 9 billion transistors. What am I missing? | Technology | explainlikeimfive | {
"a_id": [
"f0lu1mt",
"f0lu1oj"
],
"text": [
"SD 855 is an SOC meaning it includes CPU, GPU, I-O and everything else on the same chip. So those 9 billion transistors are not only for CPU but for other hardware as well. But in case of Ryzen those 5 billion transistors are for CPU only and nothing else. Ryzen 7 2700 is based on 12 nm process and SD 855 is based on 7 nm process. This refers to the size of transistors, the smaller the transistors you have the more transistors you can put on your chip. Also the size of transistors have no effect of performance, smaller transistors perform same as larger transistors.",
"Keep in mind that a desktop or laptop CPU pretty much does only CPU things. So it will have cache memory, integer and floating point units - so it can do lots of math really really fast, but not much else. Less niche Intel CPUs also throw in a crappy Intel HD GPU chipset as well. But they don't have to have super feature rich GPU logic, hard drive controller, RAM controller, ethernet, wifi and sound chipsets. Nor do they need to have any fancy power management chips (for battery charging and power drain management) or radio (the WLAN or LTE or 5G radios I mean). Oh and bluetooth etc. etc. All of these subsidiary functions of your computer or laptop is in a separate chipset baked onto the main/motherboard of your computer. Thats why they're still so large. On the other hand, mobile devices are designed for compactness. So having ALL of these functions in one single chip is a serious design point. They call processors like the Snapdragon \"System-on-chip\" or SoC - they have not only traditional CPU circuits, but they have all of the other features as well, all baked into a single chip. Same thing with the processors on these Raspberry Pi or Beagle Bone or small enthusiast linux computers - the whole \"system\" is built into the CPU SoC, the \"board\" only serves as a mounting point for external input/output and interconnects to other things like the RAM, flash storage and ethernet headers."
],
"score": [
22,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
d5ifpz | How do animators make sure everything is timed correctly? How do they sync everything from the voice acting to other sounds or music and make sure the movements are natural looking? | Technology | explainlikeimfive | {
"a_id": [
"f0m3fad",
"f0m7pfz",
"f0m3wsb",
"f0mnnmd"
],
"text": [
"Usually, animation in larger studios is rigorously planned and very well coordinated. Usually a storyboard is created showing all the scenes as still images. When the storyboard is finished, voice acting is done, which is combined with the storyboard to create the \"animatic\", a kind of power point presentation with the still images with the voice acting in realtime running in the background. This is used to check the pacing of the movie\\episode, so there are no boring slow parts or scenes that are hard to follow because they move on too quickly. After that, animation work is done and the still images are replaced with the raw animated footage. While that is happening it's not uncommon that certain things are still subject to change: voicelines are getting re-recorded because they were too wordy or didnt quite fit, scenes changed or shots added\\removed. After that, everything is double checked and sound effects\\foley is done. Those are a lot easier to edit to fit the footage. Then the final movie is exported and the local Studios get a Version without voicelines for dubbing in their respective language. In that case, the voicelines are recorded within a margin so they just \"feel\" right when running to the video, and usually depends on resources the studio has left for post production and localization. Sometimes, idioms or sayings need to be changed because they don't exist in the country the movie is translated to. This is a Job for writers, which translate the text so it's meaning isnt \"lost in translation\". Back in the analog days there were timetables with marked frames which showed when certain key actions would start and end. Those were transferred to digital and that's where the Name \"keyframe\" comes from. Edit: thanks for the silver, kind stranger! This is propably the best i can hope for from my arts degree.",
"I used to do stick-figure animation. It's actually quite simple. You have a timeline, a bar that represents every frame of animation. You simply put the music in and you can literally match it per *frame*. You can add or remove frames, start music or sound effects at different times, whatever you want. Very simple.",
"I used to work at a studio where we sent our materials to Korea to be animated. For that project, and many like it, there are two jobs called timers and track readers. Track readers listen to the audio track and write down the phonetics of the track frame by frame at 24 frames per second (although it's usually 12 frames per second because that's the speed of classic animation). They then ascribe a mouth shape for the character. These are labelled A through H usually, and have no relation to the sound. A is closed mouth, B through D are open 'ahh' sounds. E is an oo sounds, f is actually ffff. And on. This way, animators who don't speak English know how to animate the mouths. A timer takes the storyboard, which is like a series of comic panels put together as a movie, and literally times out the movement. A character starts walking and their last walking pose is 48 frames later, well on something called an X-sheet, the timer indicates what frame each foot contacts the ground on, how fast the arms swing, whether and exactly when any variance in the wall cycle occur. For actions like a point, they will often indicate how the action is to occur and when. Is the hand down at frame one and hand pointing at frame 9 based on the boards? They might right large overshoot on frame 7 to give the gesture more force, or indicate an 'ease in\" (where the action gets progressively closer to the final pose) to indicate and slower move. These two jobs are getting a lot less common as digital animation develops, but are still used on major TV shows that contract animation overseas.",
"I work as an animator in a small french studio and storyboarded for many productions. There are pretty in depth and interesting answers already, so I will just speak from the perspective of how we do it in france. When you have an animatic, it almost already looks like the final film but roughly animated. That's the key to the whole thing. Usually the storyboard artist will record a draft track for the voice acting to find the timings and emotional beats of the film. By this stage, you will be able to know whether or not the movements will look natural. When this part is done, the actual voice acting is made, maybe there will be some changes, but by the time the animator receives the shot, they will already have all the acting cues to match perfectly with the sound with good movements and timing. The sound effects are usually done after the animation is out, and it's the sound designer who matches the sounds with the visuals. It's done afterwards so that it will be possible to be precise with the small details that make it sound natural; the rustle of cloth, contacts, etc. So in short, all the work is already done beforehand by the storyboard/animatic artist to ensure that it is easy for the animator to match the voice acting in their shot, the music and sfx are done separately to match the visuals, and then there's a final round of tweaking at the end to make sure everything lines up."
],
"score": [
923,
17,
14,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d5iwz0 | What actually happens inside a pc when it freezes? | Technology | explainlikeimfive | {
"a_id": [
"f0m7lys",
"f0m3e06"
],
"text": [
"Lord of the Flies. You need the conch to speak and Piggy has it but he's to flustered to say anything or hand it to someone else who can.",
"Imagine a bully taking all the pens from a desk and everyone. Unless someone bigger comes along, they all need to wait for the bully to give the pens back. And in the meanwhile no work can get done. A bit more detailed: A prcoess takes control in the cpu and stops other processes to do anything. Dragging the windows, minimizing, maximizing, closing, they all need a bit of cpu to work."
],
"score": [
6,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d5n0dj | - Why does American TV look more blurred and warmer than European TV? | Technology | explainlikeimfive | {
"a_id": [
"f0mvkit"
],
"text": [
"I'm not European, but Mexican but I always felt American TV looked more cinematographic and \"epic\", and later found out a lot of USA TV shows are recorded in 24fps (The standard fps for movies), while Mexican shows and novelas were recorded at 60fps (Which made them look a bit more realistic and cheesy). For European TV the format of PAL (The European standard) displayed at 625 lines (575 but only in the screen) while NTSC (The American standard) was displayed at 525 lines (But only 486 visible), so that's probably why European TV looks \"crispier\" or American shows looked \"blurry\"."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d5os2z | what is the difference between lumens and ANSI Lumens? I was looking at projectors and I will see some will say they have like 2000 or 3000 Lumens but others will have like 4000 ANSI Lumens. I feel like they are not the same and get confused as they only ever have one or the other and not both | Technology | explainlikeimfive | {
"a_id": [
"f0n65n1",
"f0nh5cs"
],
"text": [
"An ANSI Lumen is a standardized measurement and more accurate. Just saying lumen could mean anything. It could be a true measurement or an over estimate, or even an underestimate.",
"ANSI is the american national standards institute. ANSI provides a very specific set of tests to measure the light output of projectors and if a projector is saying ANSI lumen, it has been measured using these tests rather than other, lest accurate ones. It's basically saying that this projector has been measured to a certain standard rather than just measured haphazardly."
],
"score": [
7,
7
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d5sqxb | The lack of goal line replay technology in American football as compared to other sports | For example, tennis has hawk eye and soccer has goal line technology. How come football still doesn't have anything capable of accurately measuring where the ball is on the field for touchdowns, first downs etc.? | Technology | explainlikeimfive | {
"a_id": [
"f0nvte4",
"f0nvbk9"
],
"text": [
"In Football a lot of it is that the ball moves and stuff after the play anyway, and often it's not a matter of 'did it go this far' its' 'did it go this far before his knee touched here and did he have control and was it grounded etc... In Tennis the location is like 99% of the call. In Football, its not very common that the ball's specific location is the problem, its the timing of it being in that location relative to other events",
"The more interesting proposals in football actually doesn't involve cameras. Instead they would have chips embedded in the ends of the football that would send a signal if it breaks the goal line."
],
"score": [
8,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
d5v6nm | Why does wiggling the batteries in the TV remote (or any other battery powered thing) make dead batteries work for a little bit longer? | Technology | explainlikeimfive | {
"a_id": [
"f0o6xu4"
],
"text": [
"A lot of the problem with old batteries is not that the charge have run out but that the contacts have become corroded. This is a quite common problem when you have exposed electrical contacts as the electrical potential helps promote rust. It is often seen as a colored salt that seams to grow on the contacts. And this prevents the batteries making proper contact with the equipment. If you move the batteries you will cause the contact to scrub against each other and scrape away the oxidation or at least come into contact differently where there might be less oxidation. If this does not work you might also try to scrape away the oxidation using a nail file or sand paper."
],
"score": [
20
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
|
d60o6d | Why do cold callers ring and immediately hang up when you answer? | My guess is it's a (bad) bot that's trying to discern if there's a human on the other end? But then what? | Technology | explainlikeimfive | {
"a_id": [
"f0p35bm",
"f0p4or5"
],
"text": [
"You picked up the phone. Now your number is recorded as a 'live' number. Your number will now be sold to telemarketing companies to be a candidate. Rather than cold calling random numbers and getting dead lines half the time",
"Telemarketing companies use something called a predictive dialer. Rather than dialing one number after another waiting for someone to answer while their agent sits there twiddling their thumbs, they dial whole batches of numbers at a time predicting that a good portion of the calls will either be unanswered or be answered by voicemail/answering machine. When the system detects that a live person has answered the phone, the call gets sent to an agent. Sometimes the prediction gets it wrong and there's no agent available for your call. The call gets hung up and this is called an \"abandon\". The FTC has mandated that no more than 2-3% of calls originating from a call center may be abandoned, so you have to set your dialer to dial enough calls to keep your agents busy but not so many calls that you're getting lots of abandons."
],
"score": [
15,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
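The predictive-dialer logic in the second answer can be sketched as a simple rule: dial enough lines that, given the expected answer rate, roughly one live answer lands per free agent, but stop over-dialing before expected abandons exceed the ~3% cap. Everything below is a simplified deterministic illustration, not any vendor's actual algorithm (real dialers work with probability distributions):

```python
def lines_to_dial(free_agents: int, answer_rate: float,
                  abandon_limit: float = 0.03) -> int:
    """How many numbers to dial at once for the agents waiting on calls."""
    assert 0 < answer_rate <= 1
    n = free_agents  # one line per agent can never over-deliver answers
    while True:
        expected_answers = (n + 1) * answer_rate
        expected_abandons = max(0.0, expected_answers - free_agents)
        if expected_abandons / expected_answers > abandon_limit:
            return n            # dialing one more line would break the cap
        n += 1

print(lines_to_dial(free_agents=5, answer_rate=0.25))  # -> 20 lines for 5 agents
```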
d63gcd | . How do those russian house keys work that look like a battery? | [keys]( URL_0 ) | Technology | explainlikeimfive | {
"a_id": [
"f0pmkqe"
],
"text": [
"It's a RFID tag. The key has an identifying number in it that it broadcasts at a very short range every time a receiver (that is actually the transmitter of the radio signal, but let's not complicate this) gets near it. If the lock can find the identifying number in a list of allowed tags, it unlocks the door. Well. In reality, the lock sends the tag ID to a computer somewhere that verifies that the tag is allowed access. But that complicates the reasoning somewhat, so leave that bit out. Assume that the lock works standalone, even though very few of them actually are standalone."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
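On the lock's side, the whole job reduces to: read an ID, check it against a list. A minimal sketch of the standalone version the answer describes (the tag IDs are invented):

```python
ALLOWED_TAGS = {"04:A2:19:7C", "04:FF:03:11"}   # invented tag IDs

def on_tag_detected(tag_id: str) -> bool:
    """Called whenever the reader's field powers up a nearby tag."""
    if tag_id in ALLOWED_TAGS:
        print("click - door unlocked")
        return True
    print("access denied")
    return False

on_tag_detected("04:A2:19:7C")   # resident's key fob
on_tag_detected("99:00:00:00")   # unknown tag
```

The networked version differs only in that the `if tag_id in ALLOWED_TAGS` check becomes a query to a central access-control server.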
d68f7g | How do they program FPS video game bots to know how to attack you? | Technology | explainlikeimfive | {
"a_id": [
"f0qrwxe"
],
"text": [
"So, like, they are all like, “Marco,” and, if you a) fall into a certain “recognition range,” (comparable x/y/z coordinates to aforementioned “game bots”) or b) take some action to provoke / trigger their pre-programmed behavior (e.g., shooting at you, melee attack, etc.), you can’t *not* say, “polo.” Its as if the game makes you holler it out, just because you’re within visible range. If you do certain things, the game is set to send you “obstacles” like zombies trying to eat your flesh or terrorist trying to shoot you from a window to slow your progress & engage your interest. Once certain constraints are met, rules of engagement become defined by the level of proximity or involvement... an enemy may *only temporarily see you* if you’re 30’ away. Lying down behind a bush may allow the timer to count down & if no further actions are taken (you don’t get any closer / shoot at him / make noise - it all depends on the game, right?) his awareness drops off & he wanders away. Let that same scenario unfold, except this time, while the enemy or zombie or whatever is still “alert,” (normally as you’d be hiding it to avoid engagement) your character activates another trigger. In that case, the speed in which the enemy engages you is likely to increase, and exponentially so - again, all contingent on the game, itself, and the story, premise, mechanics and metrics."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
]
| [
"url"
]
|
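Stripped of the Marco Polo framing, the answer describes a distance check plus an alertness timer. A minimal sketch (ranges and timings are arbitrary):

```python
import math

class Bot:
    DETECT_RANGE = 30.0   # units before the bot "sees" you
    CALM_DOWN = 5.0       # seconds of no contact before it gives up

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.alert_timer = 0.0

    def update(self, player_x, player_y, dt):
        dist = math.hypot(player_x - self.x, player_y - self.y)
        if dist <= self.DETECT_RANGE:
            self.alert_timer = self.CALM_DOWN       # contact: stay aggressive
        else:
            self.alert_timer = max(0.0, self.alert_timer - dt)
        return "attack" if self.alert_timer > 0 else "patrol"

bot = Bot(0, 0)
print(bot.update(10, 0, dt=0.016))   # in range -> "attack"
print(bot.update(100, 0, dt=6.0))    # out of range long enough -> "patrol"
```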
|
d6c1kz | I never understood how they change the camera angle and the place where the other camera used to be is not there anymore. | Do they tell the actors to stop so they can change the camera and then the actors continue talking? Sorry if this has already been asked before. I don't know what to search to find it. | Technology | explainlikeimfive | {
"a_id": [
"f0ro364",
"f0rs74h",
"f0sb6ux"
],
"text": [
"They shoot the exact same scene multiple times. Sometimes with the same actors, sometimes with stand-ins. (So often the conversation you think you're watching never really happened - it's an actor talking to someone else and the whole thing is edited together). Sometimes you can spot this because of \"continuity\" errors - things that change between the two angles that shouldn't - a lamp moving, a door being extra wide open - and so on. You can even sometimes spot the stand-in if the actor is supposed to be partially in shot (like conversation is happening over their shoulder) - sometimes it's someone with totally different hair!",
"This is why Film Editors have their own category at the Oscars. Editors take the dozens of different \"takes\" that are shot of each scene and work with the director to assemble them into a single cohesive scene that makes sense to the viewer.",
"It depends... We can split it to two cases: single camera and multicamera. The good old sitcom is done with three camera setup (there can be more but three is the minimum). When it is a dialogue between two characters, we place two cameras to point to the actors faces and one showing both. They can take other kind of shots too but the main idea is to cut between the cameras either as live edit or in post-processing. Single camera is harder to do, you have to plan for it. Very common practice is to run thru the dialog fimlimng just one of the actors, change camera position and run thru the dialog again. They do this over and over again and actors absolutely have to remember their positions. Storyboarding and blocking are done in the pre-production phase. \"Blocking a scene is simply “*working out the details of an actor's moves in relation to the camera*”\" says google and that is very nice explanation. This is where \"you are blocking my light\" also comes from, it means either that one of the actors is in wrong position or there is a problem with the lighting design ;) It is a like a dance in the end, all movements are repeated 20 times. There are of course variations where things are improvised and ad libbed but then the scene and everything in the shoot has to be planned for that. Stand-ins are used to plan the shot and often they are also the shoulder that is seen from behind when our actor speaks their line. This allows for the actors to not even be present at the same time. Stand'ins also are used for blocking plan, this is why they have to be same height and build as the real actor (which is a luxury, smaller productions use any staff that is available, catering, security, who ever they can find..). It is tedious job where you stand for hours in one place while people around you are busy but i digress. Those are the basic types but of course, there are variations and breaking rules. Two cameras are often used in a \"single camera\" shoot. One rule that does help is the 180 degree rule. I won't explain that, look this short video about it. [ URL_0 ]( URL_0 ) It allows for switching that angle in the first place in a way that makes us understand that jump between cameras. It also gives us the space to put two cameras almost opposite of each other..."
],
"score": [
18,
4,
3
],
"text_urls": [
[],
[],
[
"https://www.youtube.com/watch?v=Bba7raSvvRo"
]
]
} | [
"url"
]
| [
"url"
]
|
d6cvq7 | How does a computer know to boot up again when you select to "restart" it? | When you select your computer to restart, how does it know to boot up again when it shuts down? | Technology | explainlikeimfive | {
"a_id": [
"f0rx1i7",
"f0rxxl5",
"f0sb40l"
],
"text": [
"There is a boot controller built into the motherboard that manages the boot signals from a variety of places, like the power and restart button. It also can receive digital signals from the OS. One of those signals is restart.",
"The motherboard is never shut dow. It's why computer still consume power when turned off. So it can trigger power ON or OFF for all other parts. And it's why sometimes you need to unplug for many seconds your computer to do a cold reboot to solve some problems, to force the motherboard to shut down.",
"The processor in your computer does one pattern: fetch memory, decode it into instructions, execute the instruction, and possibly write memory. It runs this like it is playing and endless runner game, though there may be one instruction called halt that tells it to just stop. When you start a computer, it starts executing from special memory in the \"BIOS,\" that stands for Basic Input Output System; it is a tiny operating system that knows how to look on disk to load your actual operating system. When you restart your computer, it is designed to go back to its starting state, and just like when you turned it on, it goes back to executing the code in that BIOS. The BIOS is basically like a USB flash drive without assuming the interface is USB. It is memory that works even if it loses power. So when you are updating your BIOS, the program is writing a new version of this tiny operating system indefinitely to that memory."
],
"score": [
5,
5,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
]
| [
"url"
]
|
d6cw6x | What is Exploit Development? | I want to know what exactly is Exploit development in a real job. | Technology | explainlikeimfive | {
"a_id": [
"f0s3fqq",
"f0sa4jw"
],
"text": [
"Exploit is literally just that. Using something in a way it wasn't designed for. Lighting a cigarette using a toaster is an exploit for example. In the computer world an exploit could be for example a search tool with a text field that for one reason or another lets you run program code or run queries to a database directly. Even deeper down the rabbit hole you might find out that you can crack passwords by listening to certain RF signals the processor makes and so on. So exploit development is practically just figuring out flaws in a certain system. There's probably a systematic way into it as well but that's where my knowledge ends",
"This feels like a let me google that for you. Here's a decent description: [ URL_0 ]( URL_0 ) Basically, identify a flaw in the system. The article uses fuzzing as an example, which basically means that you send random data to an api or listening port, and then see how far you get. For every byte you send, the listening protocol thinks you are: 1. Speaking its language 2. Definitely not speaking your language, and probably disconnects you 3. Is fooled into doing something unexpected (by the programmer) If you find option 3, you may have found something exploitable. You can reverse engineer the code, or just experiment and see if you can get it to do something specific and to your advantage. With an identified exploit (basically access to undefined behavior), and an effort to develop it into something that you can use to your advantage, you are doing Exploit Development. I'm going to write this unpleasant thing in an effort to help you: If you are majoring in Information Security, and you don't know this or how to find it on the Internet, you have much to learn. Since I know nothing of your intelligence, I will assume you are smart, so I would start with something like Grit. Check out this video: [ URL_1 ]( URL_2 )"
],
"score": [
4,
3
],
"text_urls": [
[],
[
"https://null-byte.wonderhowto.com/how-to/exploit-development-everything-you-need-know-0167801/",
"https://www.ted.com/talks/angela\\_lee\\_duckworth\\_grit\\_the\\_power\\_of\\_passion\\_and\\_perseverance?language=en",
"https://www.ted.com/talks/angela_lee_duckworth_grit_the_power_of_passion_and_perseverance?language=en"
]
]
} | [
"url"
]
| [
"url"
]
|
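The fuzzing step described above is easy to demonstrate against a toy target: throw random bytes at a parser and record every input that triggers a crash the author didn't handle; those crashing inputs are the raw material of exploit development. The vulnerable function below is invented for illustration:

```python
import random

def fragile_parser(data: bytes) -> int:
    """Invented target with a hidden flaw: it trusts a length field blindly."""
    if len(data) < 2:
        raise ValueError("too short")   # handled, expected rejection
    length = data[0]
    return data[1 + length]             # IndexError when the length field lies

random.seed(1)
crashes = []
for _ in range(1000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
    try:
        fragile_parser(blob)
    except ValueError:
        pass                            # graceful rejection: fine
    except Exception as exc:            # anything unhandled is a finding
        crashes.append((blob, repr(exc)))

print(f"{len(crashes)} crashing inputs found")
if crashes:
    print("first:", crashes[0])
```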