q_id (string, len 6) | title (string, len 4-294) | selftext (string, len 0-2.48k) | category (1 class) | subreddit (1 class) | answers (dict) | title_urls (sequence, len 1) | selftext_urls (sequence, len 1) |
---|---|---|---|---|---|---|---|
ehc5k4 | How did NYC come to depend on steam so much? | What are the origins of New York City and their usage of steam power? Are there other examples of cities that rely on steam as heavily as NYC? | Technology | explainlikeimfive | {
"a_id": [
"fci641n",
"fcig9fn"
],
"text": [
"Steam was thee power-that-be during the late 1800's. And so was when the beginnings of both heating of building space and for domestic hot water of a large areas like in N.Y. Later on, the heat from steam could also be used for absorption type systems for air conditioning & refrigeration.",
"Municipal steam systems were developed when domestic steam heated really started taking off and every building and house had their own steam boilers. In denser more built up areas some people figured it would be easier to have one big boiler plant and pipe the steam to the indavidual buildings, and many people with boilers prone to violently exploding in their homes agreed and got rid of their boilers in favor for the municipal systems. Denver currently has the older operational district steam heating system in north America, though the city doesn't rely on it anywhere near as much as NYC, as only the government buildings and large structures near the central business district are still tied into it with most of the single family homes or townhomes cutting themselves from the system in the 40's for more efficient heating systems. Many cities in Europe rely on steam and district heating as much as New York, but many of those systems are built off of waste heat from factories and power plants near the cities they serve, and are also nowhere near as large as New York's system."
],
"score": [
3,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
ehgi8x | Why can’t we digitally store our memories and see a visual representation of some sorts? | Technology | explainlikeimfive | {
"a_id": [
"fcj5cjy",
"fcjcnu6"
],
"text": [
"Science hasnt really caught up to that. We dont really know exactly where and how memories are stored. If we did we would be able to cure dementia. I've often wanted something similar but yeah until we understand the brain it wont happen.",
"The brain isn’t a computer. neurons aren’t switches. We don’t really understand how it all works."
],
"score": [
8,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ehh3xx | How does an MRI work? | Technology | explainlikeimfive | {
"a_id": [
"fcjcbc2"
],
"text": [
"The machine makes an energy field inside it. Within that field, the tiny building blocks that make up your body get \"excited\". These excited blocks realease the energy in a different form that is measured by detectors. The noises you hear are the machine changing the energy field and measuring how the blocks respond. It can take a while to take a picture cause it has to wait for the excited blocks to give off their energy and return to their normal state. MRIs are safer than xray or CT scans cause they don't put radiation in your body."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ehjt5s | why does cold weather drain a phone battery? | Technology | explainlikeimfive | {
"a_id": [
"fcjpc43"
],
"text": [
"Because you're relying on a chemical reaction to take place in order create a flow of electrons in a battery. Cold slows down chemical reactions in the same way it slows down food from rotting when you put it in the fridge. Your battery isn't draining faster in the cold, it just produces less current and your phone registers that as a low battery."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
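The answer above (fcjpc43) boils down to "cold slows the chemistry, so less current comes out." A rough sketch of that scaling using the Arrhenius rate law; the activation energy here is an assumed, purely illustrative number, not a measured figure for any real lithium-ion cell:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def relative_rate(temp_c, activation_ev=0.3):
    """Arrhenius factor exp(-Ea / kT): how fast the chemistry runs at temp_c."""
    return math.exp(-activation_ev / (K_B * (temp_c + 273.15)))

room = relative_rate(25.0)
cold = relative_rate(-10.0)
print(f"Relative reaction rate at -10 C vs 25 C: {cold / room:.2f}")
# With these assumed numbers the chemistry runs noticeably slower in the cold,
# so the cell delivers less current and the gauge reads it as a low battery.
```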
|
ehktyw | How are communication companies able to provide us internet? Where do they get internet from and how do the speeds and volume caps work? | Technology | explainlikeimfive | {
"a_id": [
"fcjtzk7",
"fck7kc4"
],
"text": [
"Internet is just a network. Your computer is a part of the internet. Every device is part of it. Your ISP just connects you to all of the other people on their network, and then they connect their network to the networks run by *other* ISPs. Now everyone is connected in one big spiderweb, and by sending a message with the right address you can contact any node in this network. To cap your speed and use, all they have to do is measure the number of bits (1's and 0's) that are transmitted through the connection that runs to your house or whatever.",
"To expand on what others have said, while the entire internet is quite simply a large and complicated network, most of the data is transferred over the internet \"backbone\" which is comprised of hugely expensive and fast network/data centers owned and managed by telecommunications companies and other corporations. In this sense they \"get the internet\" because they are \"making\" the internet along with all of the other ISPs and companies out there who manage the backbone. Plenty of data is certainly stored and served on individual computers around the world but the vast majority of what average people use is probably stored at one of these nodes."
],
"score": [
9,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
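The two answers for ehktyw describe the internet as one big interconnected graph of networks, plus simple byte counting for caps. A toy sketch of both ideas; the network layout, names, and cap numbers are all made up for illustration:

```python
from collections import deque

# A toy "internet": each node lists the nodes it is directly connected to.
links = {
    "your-pc":    ["your-isp"],
    "your-isp":   ["your-pc", "backbone"],
    "backbone":   ["your-isp", "other-isp"],
    "other-isp":  ["backbone", "web-server"],
    "web-server": ["other-isp"],
}

def find_path(start, goal):
    """Breadth-first search: how a message could hop between networks."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(" -> ".join(find_path("your-pc", "web-server")))

# A data cap is just a running total of the bytes passing the ISP's meter.
cap_bytes = 50 * 10**9          # hypothetical 50 GB monthly cap
used_bytes = 48_700_000_000     # hypothetical usage so far
print(f"{(used_bytes / cap_bytes) * 100:.1f}% of the cap used")
```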
|
eho7v0 | Why do computers have to restart to finish installing an update or software? | Technology | explainlikeimfive | {
"a_id": [
"fckgoco"
],
"text": [
"Imagine if someone tried to change your shoes at the same time while you're standing. You would probably fall down, because you need your feet to stand. For this to work, you have to sit down and wait for the person to change your shoes for you, then stand up. Likewise, in most computer operating systems, there are system files that are required for the operating system to run. If you try to patch them while the operating system is running, the system will crash. The workaround is to shut down the computer, and safely stop the old version of the file during shutdown. Then, start the computer, and start the new version of the file during boot. This ensures that the system file never stops while the system is running. Now, I know what you're thinking: \"But I can stand on one foot and change a shoe that way! Why can't computers do the same?\" The answer is that most operating systems are not built to do this, but there are a few that are. In particular, Linux-based systems can do what's called live patching, where important system files are updated without requiring a restart. This system is not 100% foolproof, though, and sometimes Linux systems still need to reboot to fully install updates, but the end result is that Linux operating systems require reboots for updates much less than Windows operating systems, for example."
],
"score": [
32
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
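The shoe analogy in fckgoco amounts to "swap the file in while nothing is running it." A rough sketch of that staging pattern with made-up file names; real operating systems do this with built-in mechanisms (for example, rename operations applied during boot), not with a script like this:

```python
import os

APP = "app_component.bin"        # hypothetical file the running program needs
STAGED = APP + ".new"            # update downloaded while the program runs

def stage_update(new_bytes: bytes) -> None:
    """While running, only write the update next to the real file."""
    with open(STAGED, "wb") as f:
        f.write(new_bytes)

def apply_staged_update_at_startup() -> None:
    """At startup, before the component is in use, swap the new file in."""
    if os.path.exists(STAGED):
        os.replace(STAGED, APP)   # rename the staged file over the old one

# Simulate one update cycle.
with open(APP, "wb") as f:
    f.write(b"old version")
stage_update(b"new version")      # safe while the program is "running"
apply_staged_update_at_startup()  # what the reboot step accomplishes
print(open(APP, "rb").read())     # b'new version'
```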
|
eht93g | How do the camera operators at darts tournaments know where to point to | I've been watching the Darts World Championship for the first time this year and I'm wondering how the camera operators always know where to pan to and zoom in even before the competitors are throwing their darts | Technology | explainlikeimfive | {
"a_id": [
"fcle4s3",
"fclwyog"
],
"text": [
"There's a bunch of cameras set up at different angles, and a guy sitting behind a bunch of screens watching it all and telling which camera to do what, based on his/her knowledge and experience. Or there's more than one person.",
"When trying to go 'out', the shooter must end on a double, which limits options. The number of 'outs' for a score under 170 (the maximum points a player can have to attempt to win a round) are fairly well established. For example, the only way to go 'out' with 170 points would be T20-T20-D25 (bulls-eye). In these cases, it's obviously very easy to know what the shooter will be aiming at. As the point total gets lower, the theoretical number of 'outs' goes up, but experienced darts players will usually go with known patterns based on the ease with which the darts can be thrown and therefore the director can guess with high certainty what to point the camera at. Let's say you had 117 left. You have a couple of choices: T20-17-D20 or T19-20-D20 Any veteran dart player will start with the T20 option and the director knows that. Not because T20 the easier shot, but because it leaves more options if they miss. The most common miss when going for the triple is to hit the single. If you miss T20 and get a single you still have T19-D20 left to win.If you miss T19, you have T20-D19 left which is far more difficult. Almost all 'outs' have a pattern with the highest chance of success. Directors know them and are able to direct the cameras accordingly."
],
"score": [
8,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
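The second answer for eht93g leans on the fixed set of three-dart finishes. A small brute-force sketch (standard dartboard values assumed) that enumerates them; it confirms, for example, that 170 has exactly one finish, T20 T20 Bull:

```python
from itertools import product

# Every scoring dart: singles, doubles, trebles, outer bull (25) and bull (50).
singles = [(f"S{n}", n) for n in range(1, 21)] + [("25", 25)]
doubles = [(f"D{n}", 2 * n) for n in range(1, 21)] + [("Bull", 50)]
trebles = [(f"T{n}", 3 * n) for n in range(1, 21)]
any_dart = singles + doubles + trebles

def three_dart_outs(score):
    """All ways to finish `score` in three darts, last dart a double."""
    outs = []
    for d1, d2 in product(any_dart, repeat=2):
        for d3 in doubles:  # must finish on a double (the bull counts)
            if d1[1] + d2[1] + d3[1] == score:
                outs.append((d1[0], d2[0], d3[0]))
    return outs

print(three_dart_outs(170))       # [('T20', 'T20', 'Bull')]
print(len(three_dart_outs(117)))  # many routes, e.g. T20, S17, D20
```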
ehv4pb | What causes television/computer screens to look so bad when recorded with a camera? Shouldn't it look closer to what we see? | Technology | explainlikeimfive | {
"a_id": [
"fclrezz"
],
"text": [
"It's called the moiré pattern. Basically, you might have known that a TV screen is a grid of dots (pixels), each pixel is made of 3 primary color dots. The digital camera also captures a grid of dots. Now when you try to map your TV's grid to your camera's grid, it almost certainly won't line up perfectly. That means, one pixel on your camera might be capturing two halves of two separate pixels on the screen."
],
"score": [
11
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
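The moiré answer for ehv4pb comes down to two pixel grids with slightly different spacing sampling each other. A tiny sketch with made-up pixel pitches; printing the samples shows the slow "beat" bands a camera sees when its photosites don't line up with the screen's pixels:

```python
SCREEN_PITCH = 1.00   # arbitrary spacing between screen pixel columns (assumed)
CAMERA_PITCH = 1.08   # camera photosites spaced slightly differently (assumed)

def screen_brightness(x):
    """Idealised screen: alternating bright/dark pixel columns."""
    return 1 if int(x / SCREEN_PITCH) % 2 == 0 else 0

# Sample the screen the way a camera sensor would, one photosite at a time.
samples = [screen_brightness(i * CAMERA_PITCH) for i in range(60)]
print("".join("#" if s else "." for s in samples))
# Instead of a clean #.#.#. pattern, bands of # and . drift in and out of phase:
# that slow variation is the moire (beat) pattern.
```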
|
ehw5zz | How do people program/teach random algorithms to actually be random? | Technology | explainlikeimfive | {
"a_id": [
"fclx78o",
"fclycd4",
"fcm43bd"
],
"text": [
"Algorithms are not at their roots random. They are predictable because an algorithm is just following a sequence of steps and following steps exactly doesn't result in random behaviour. That said, algorithms can start with unknowable information and use that to produce random-looking data that can't be reasonably predicted without knowing that information at the start. For example: an operating system might measure how fast a spinning hard drive responds to read/write operations - a process that involves waiting for the spinning disk to line up with the drive head - down the the nanosecond and use that time as an unknowable and not reproducible value. Now you have a little piece of chaotic information available and can produce something unpredictable, and \"unpredictable\" is generally a satisfactory definition for \"random\". So, software can't be truly \"random\", but it can be anywhere from difficult to nigh-impossible to predict and that's good enough.",
"Most random stuff on a computer is only pseudorandom. Pseudorandom is when when you take a seed as an input to an algorithm and run it one iteration that produces a new internal state and a random number. Repeat the algorithm for each new number. If you start with the same seed the generated number square is identical. That is sometimes a drawback but sometimes an advantage. If you use if for simulations or testing programs you can create the exact same sequence again. This is good enough for games and everything else except for cryptographic applications like setting up secure connection to a website A common seed is a current time and date when the program run. You could use encryption algorithms to create a random number. Use the seed as the key and encrypt 1 for the first look, 2 for the next and so on. When needing a real random number you can use dedicated physical hardware that is integrated into some CPUs that use thermal noise to generate a random number. Another way is to collect outside stimuli like when mouse and keys are pressed, a packet arrives on the network interface and use is to create a random number or a seed. There is or at least was specialized expansion cards for some system with a hardware random number generators.",
"They don’t. Random algorithms are basically an advanced version of “eenie meenie minie mo”. You do some sort of basic math operation and use the end number/s to generate a result that is hard to guess. For example, let’s say you want to pick a “random” number between 0 and 9. You might start by adding together all of the numbers in the current date and time together and then looking at just the smallest digit. 12+29+2019+10+41= 2111, the last digit being “1”. So, a pseudo random number between 0 and 9 might be 1."
],
"score": [
41,
7,
6
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
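The answers for ehw5zz describe seeded, repeatable pseudorandom generators. A minimal sketch of one classic construction, a linear congruential generator; the multiplier/increment constants are a well-known textbook pair used here purely as an example, and none of this is cryptographically secure:

```python
import time

class TinyLCG:
    """Linear congruential generator: next = (a * state + c) mod m."""
    def __init__(self, seed):
        self.state = seed % 2**32

    def next_int(self):
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

    def next_digit(self):
        return self.next_int() % 10   # a "random" digit 0-9

# Same seed -> identical sequence (useful for reproducible tests/simulations).
a = TinyLCG(seed=42)
b = TinyLCG(seed=42)
print([a.next_digit() for _ in range(5)] == [b.next_digit() for _ in range(5)])  # True

# A common trick: seed from something hard to predict, like the current time.
c = TinyLCG(seed=time.time_ns())
print([c.next_digit() for _ in range(5)])
```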
|
ehwx0t | How are videos of animation and video games able to be displayed at 60 FPS on YouTube when the original material was never 60 FPS to begin with? | Technology | explainlikeimfive | {
"a_id": [
"fcm2uzi"
],
"text": [
"You can change the framerate of a video by duplication framer or even buy create new frames as an interpolation of two frames. I interpolation is to attempt to calculate what the middle frame should have been by detection of how stuff moves etc. It is a bit like if you scale up an image, the result is a larger image but the data that would have been there if it was captured at that resolution is not there. So if it is meaningful to in most cases? I suspect the answer is no. That will happen all the time when you show a video or game content on your computer monitor that does not have the same framerate as the monitor. So if your computer monitor is at 60 HZ and you play a 30 FPS video it will show the same frame for two consecutive updaters. It is even worse if you display a film that is often recorded in 24 fps, then some frames are shown for a longer time then others. So an uploaded video to youtube will have done the same as your computer do when you playback low fps video or some interpolation."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
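The answer for ehwx0t names two ways to turn 30 fps into 60 fps: duplicate frames or interpolate new ones. A toy sketch where each "frame" is just one brightness number; real interpolation estimates motion between full images, which is far more involved:

```python
def duplicate_frames(frames):
    """30 -> 60 fps by showing every frame twice."""
    out = []
    for f in frames:
        out += [f, f]
    return out

def interpolate_frames(frames):
    """30 -> 60 fps by inserting the average of each neighbouring pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out += [a, (a + b) / 2]
    out.append(frames[-1])
    return out

clip_30fps = [0, 10, 20, 30]           # pretend each number is a frame
print(duplicate_frames(clip_30fps))    # [0, 0, 10, 10, 20, 20, 30, 30]
print(interpolate_frames(clip_30fps))  # [0, 5.0, 10, 15.0, 20, 25.0, 30]
```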
|
ehxi2b | How do the score updates from sports app get sent? | Is there someone in the stadium watching the game and updating it super fast? I'll often get alerts before it happens on TV. They have so much information on each play and I want to know how they get it all. | Technology | explainlikeimfive | {
"a_id": [
"fcm6of4"
],
"text": [
"It's pretty much just people being paid to watch the game and record everything that happens. Sports statistics are pretty big business. Companies that do this sell their data to TV networks, oddsmakers and even the teams themselves, each with their own uses for the data. Networks like having statistics and odd facts about particular matches, sportsbooks use them to set odds for upcoming matches and teams analyze the data of their players and teams performance and those of other teams to get whatever competitive advantage they can."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
ehy13m | Why is RAID 5 considered unsafe for larger arrays | Technology | explainlikeimfive | {
"a_id": [
"fcm9oka",
"fcmc2vr"
],
"text": [
"RAID 5 means you can have a drive failure without losing everything because you have an extra drive's worth of data across the rest of the drives. BUT if you lose two drives at the same time, you lose everything. Even if you replace a drive, if you lose a drive before the rest are done rebuilding, you aren't in a good spot either. The odds of losing 1 in 3 at the same time are pretty low, but the more drives you have, the higher chance you lose two at the same time. For larger arrays, RAID 6 lets you lose two drives at the same time, so you'd have to lose 3 to have a total failure, which is much less likely.",
"RAID 5 is unsafe for a bunch of reasons, not just for larger arrays. Basically you should never use RAID 5 period, it's effectively depreciated. ie it's been abandoned by the industry and you shouldn't use it anymore. **A.** RAID 5 can only tolerate a single drive failure. More drives means more can go wrong. The larger the array the greater the chance of having multiple simultaneous drive failures. **B.** Rebuild times are very long and with ever increasing drive size this problem is only getting worse. If you replace a drive it will take hours, sometimes days to rebuild the array. This puts a lot of stress on the array and can cause another drive to fail during the process. If that happens you lose all your data. This is the main practical reason not to use RAID5 anymore. Basically drives are now so large on average that the long rebuild times means there is too high a risk that you'll have another drive failure during the process. **C.** Parity based Arrays like RAID 5 are vulnerable to failures called UREs (Unrecoverable Read Error). Basically RAID5+6 have a fundamental flaw in that they assume that all of the data on the array is perfect at any time. Parity is only calculated during a write operation so if any of the blocks on the array become damaged during the Rebuild process the array rebuild will fail. Basically the array is hooped and you can't rebuild it without formatting it. The mathematical probability of a URE occurring on a RAID 5 array increases as the array gets larger. It reaches near 100% with SATA drives at around 12TB. For SAS drives its about 10 times larger than that. RAID 6 is vulnerable to this as well, but the drive sizes need to be much larger for the math to line up. So the double parity kinda just kinda kicks the problem down the road. Storage manufacturers like EMC + HPE work around this problem by daisy chaining multiple smaller RAIDs together in RAID 50 or 60. This reduces the risk of URE and allows for multiple failures in an array, because each indidvidual block of RAID 5/6 can lose drives and not affect the others. But speaking as a storage admin this is honestly a nasty workaround and has some serious drawbacks associated with it. Most notably the performance SUCKS."
],
"score": [
23,
13
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
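The second RAID answer (fcmc2vr) argues that the chance of hitting an unrecoverable read error during a rebuild grows with array size. A back-of-the-envelope sketch of that calculation, assuming the commonly quoted 1-per-10^14-bits URE rate for consumer SATA drives; the rate and the sizes below are illustrative, not vendor specs:

```python
URE_RATE = 1e-14  # assumed: one unrecoverable read error per 1e14 bits read

def rebuild_failure_probability(data_read_tb):
    """Chance of at least one URE while reading this much data for a rebuild."""
    bits = data_read_tb * 1e12 * 8
    p_all_clean = (1 - URE_RATE) ** bits   # every single bit must read back cleanly
    return 1 - p_all_clean

for tb in (2, 6, 12, 24):
    print(f"{tb:>3} TB read during rebuild -> "
          f"{rebuild_failure_probability(tb) * 100:5.1f}% chance of a URE")
```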
|
ei407y | why is cleaning my computer of junk files the proper way painfully slow but using a 3rd party "cleaner" app way faster? | Technology | explainlikeimfive | {
"a_id": [
"fcnc8q6",
"fcnb6kn",
"fcn7k56",
"fcn7ofg",
"fcnkkr1",
"fcnbkk9",
"fcnbw5u",
"fcnun6z"
],
"text": [
"Oh man, here we go with bad analogies. Ccleaner is like wiping down the surfaces in your house, taking out the trash, and sweeping up. It'll get the extra junk out, like temporary files that weren't erased properly, or Internet data you're not using, but it doesn't mess with programs, and it won't touch most folders on the drive, because it doesn't know what can be safely deleted. Disk cleaner acts differently, the most deletion it will do is cleaning the recycle bin and your Internet history. It has a second function though, which is very CPU hungry, it does file compression. That's like packing down a sleeping bag or a tent, it takes time and work, and you end up with something smaller than you had before, and every time you use that file afterward, you have to spend the time and energy to unpack it. This sounds good in practice, but disk clean up doesn't always know what to pack down, and it can cause errors and slowdowns in your computer. I don't recommend using it. Instead, get familiar with your C: drive and the apps you have stored there. Uninstall old apps, and keep your downloads and documents regularly clean. And if all else fails, backup your important stuff onto an external or flash drive, and format the whole hard drive to start again, you'll have a faster computer and more space.",
"> I'm stuck on Windows 7 End-of-life is in 15 days; don't listen to \"I'll never upgrade\" types, it happens every time a Windows release reaches EOL, and eventually they'll upgrade anyway. You need to find a new OS, which will make your current problems nicely irrelevant. 7 & 10 are exactly the same in that commercial software like CCleaner is ineffective and a security risk, but a new installation is a clean start and you can help keep it \"cleaner\" by not using CCleaner (or similar garbage).",
"Because Ccleaner is a primary attack vector: URL_0 It doesn’t matter if you uninstall it; if it’s ran at all you’re potentially at risk.",
"Why is cleaning your house so time consuming and tedious but hiring a maid is so easy? And for the same reason hiring a maid is so risky, they could steal your valuables and personal information, if you fire them after the job is done they still have your information.",
"As an IT person, your IT person probably told you not to use it because as a non-technical user, you're far more likely to do more harm dicking around with a 3rd party cleaner app than the good you'll do cleaning up stuff like old temp files and such. Basically he's saving either himself or his fellow IT professionals the future headache of undoing something you could very easily screw up.",
"answering your bonus question. A couple reasons. Programs like that often come bundled with bloatware which your IT person doesn't want to have to clean off your computer. Also there may be files that apps like that tend remove but are actually necessary. In general if it's a work computer and your IT person says to do or not do something, please listen to them for their sanity.",
"So is there any secure deletion apps that are useful? I’ve always had the impression CCleaner was great",
"Hey I just thought I'd mention on here for OP or anyone else - Windows 10 upgrade is still free. Your Windows 7 won't prompt you to do it anymore, but it is still something you can download and install and get a valid license upgrade for zero cost. Go to: [ URL_0 ]( URL_0 ) and download the tool. It will put the installation on a USB stick for you and you can use that to do an in-place upgrade of your existing Windows 7 instance, or use it to do a fresh install. Don't keep using Windows 7 - it won't be getting more security updates after January 14, 2020."
],
"score": [
92,
35,
24,
20,
18,
6,
5,
5
],
"text_urls": [
[],
[],
[
"https://www.google.com/amp/s/www.zdnet.com/google-amp/article/avast-no-plans-to-discontinue-ccleaner-following-second-hack-in-two-years/"
],
[],
[],
[],
[],
[
"https://www.microsoft.com/en-us/software-download/windows10"
]
]
} | [
"url"
] | [
"url"
] |
|
ei6vyy | what is the Kernel in an operating system? | Technology | explainlikeimfive | {
"a_id": [
"fcnt4fx",
"fcoy5cc",
"fco8jvv"
],
"text": [
"The kernel has full access to all physical system resources ( i.e cpu, memory, network card, disk ) Applications have to go through the kernel by making ‘system calls’ in order to request an allocation of resources, to access or to make changes. It’s basically the interface between your physical hardware and your applications. For example: The kernel knows how to write to all different types of media. The application doesn’t care on the media type, as it just wants to store a file. So the application only needs to know or be programmed with the knowledge on how to tell the kernel that it wants to save a file. It does more than this and is a bit more complex but this is an ELI5",
"Think of a restaurant - you don't get to just barge into the kitchen, rummage through the fridge, cook whatever you want with all the equipment, toss everything else aside, then go drag it out to wherever you decide to eat, stealing other people's food or leaving your bones on their plates. It would be fine if you were the only customer, but when a whole bunch of people want dinner that's just not going to fly. There are *rules*. You request a table from the front desk, you go to the one you're allocated, you wait for a waiter to give you a menu, then you order, then you wait for your food, then you eat it, then you pay, then you leave. The staff keep track of what tables are available, what food is available, how to use the oven, who's using the deep fryer, etc, and they don't let you just snack off other diners' plates either. What you do with your own food on your own plate is your business, but everything outside of that, the staff deals with, *not you*. And if you get roaring drunk, they can quietly bustle you out the door without disturbing anyone else. An application is the customer in this scenario: it wants resources (memory, CPU time, input/output), but it isn't allowed to just take whatever it wants, *especially* not resources that other applications are using. The kernel is the staff - it's a *privileged* program that knows what resources are available, it allocates them to the application, and it prevents that application from using anything it wasn't given. The kernel knows how to talk directly to the screen, disk, network, mouse and keyboard, speakers, etc - the application doesn't (and wouldn't be allowed to even if it did). The kernel keeps an eye on the application, and if it stops responding, the kernel just takes back all the resources and terminates it. There's a bit of overhead involved, of course, but this means that one broken bit of software can't crash your whole computer. It means that your music player can't spy on keystrokes from your browser. It means that broken or malicious software can't just scribble all over your hard drive. It means that one particularly intensive bit of software doesn't get to just take all the CPU and memory it wants, leaving your computer unusable. And really importantly, it means that applications don't need their own code built in to talk to specific graphics cards, don't have to know what CPU you have, don't have to reinvent the wheel just to use the mouse, don't have to invent their own window systems, etc. All that shit is just abstracted away into basic services, and the kernel can worry about exactly how to provide them. When your hardware changes or updated drivers come out, only the kernel has to be updated; all your applications can continue on in blissful ignorance. They only need to know how to do *their* stuff in their little ivory tower, and they leave all the real-world heavy lifting to the kernel.",
"The kernel is a program that: * 'Knows' how to access computer hardware and resources, namely CPU time, memory, and disk space. The parts of the kernel responsible for this are called 'drivers'. * Makes these resources available to user programs in such a way that they're easy to use *and* the OS can also gate keep access so it can enforce security. It does this through something called 'the system call interface'. * Manages these resources 'under the hood', as part of making them accessible in an easy-to-use form. The name for this management is called 'virtualization'. Thanks to the hard work done by the kernel and kernel devs, user programs can pretend they have the entire CPU to themselves, have all the computer's memory to themselves, and when it needs to use disk space, it can just 'write to a new file' instead of manually finding spare space on the disk by itself. This is also good for the sanity of the programmers so they can focus on more important things. It's not all rosy- You don't want programs somehow gaining access to the memory of some other programs. You don't want applications miraculously overwriting files of another user, or even deleting system files. Those are big reasons the kernel bothers to enforce security, by taking advantage of it's role as a resource manager."
],
"score": [
18,
8,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
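The first kernel answer (fcnt4fx) says applications ask the kernel for resources through system calls instead of touching hardware. A small sketch of what that looks like from a program's side, using Python's os module, which wraps the underlying kernel calls (POSIX-style here; the exact syscalls differ per OS):

```python
import os

# The program never touches the disk controller; it asks the kernel to
# open, write, and close a file, and the kernel deals with the hardware.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"stored via kernel system calls\n")
os.close(fd)

# Same idea for information the kernel tracks on the program's behalf.
print("process id assigned by the kernel:", os.getpid())
```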
|
alx64y | How did people from earlier times get clean drinking water without our modern methods? | Technology | explainlikeimfive | {
"a_id": [
"efhnsjh",
"efhnknr"
],
"text": [
"The simplest response: booze! Back then, there was a very poor to limited understanding of water quality leading to illness. There would not have been attempts to purify the water, since it was not understood that this needed to be done to have potable water. The concept simply didn't exist. However, it was noted that drinking alcoholic beverages or diluted beverages did noticeably reduce the rate of illness. As a result, people learned to harness fermentation to provide themselves with something to drink that wouldn't kill them. This didn't always work, as in the cholera outbreaks in England when drinking alcohol 24/7 was not an option and many still had to rely on plain water. However, much of our culture's association with alcohol is in part due to the discovery that it was safer to get a little buzzed than to crap your guts out by drinking plain water.",
"Usually, the water wasn't as polluted as today. A stream had plenty of fresh water that, had anyone/anything used as a bathroom, would be naturally be filtered. Wells, while mineral rich, normally didn't have bacteria unless the out house was built too close to it."
],
"score": [
24,
8
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
alya6v | Why are records read from the outside in, and CDs/DVDs read from the inside out? Why did it change? | Technology | explainlikeimfive | {
"a_id": [
"efhxl30"
],
"text": [
"Audio files stored on CDs sort of made it somewhat standard for cds to read inside out. Although seemingly rare, Cds did not come in standard sizes and data is sort of stamped in so you could use the same stamp for several sizes of CD. Circumference of a circle gets larger as the radius gets bigger so more of the disc passes by the read head at longer radius with the same amount of rotation. Things that needed speedy reading read from outside in like many game discs. To improve performance. This wasn't needed for music. Not really sure about records but I would guess it had to do with performance as speed of the read head would be faster and may improve audio quality at the edges."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
alz5f6 | How are televisions in a showroom in perfect sync? | Up to 30+ televisions in a showroom display the same video in perfect synchronisation. How do they do it? Do they use a special device which can give output to so many monitors in Full HD? Asking this because even my Surface Book with the best specs can't power more than 2-3 monitors. | Technology | explainlikeimfive | {
"a_id": [
"efi3sci"
],
"text": [
"There is a huge difference between having multiple monitors on a computer with different output on each like if you use a computer compared to duplicate a signal to multiple display. There are HDMI splitter boxes that take the input from one cable and send out the same signal to multiple monitors. A quick search on amazon resulted in 1 to 8 splitter for $50 and you would need 5 of those to drive 36 screens for a total of $25 . URL_0 The input to the TV will be in sync and as long as you disable any post processing often by using a game mode on the tv they will have as little delay as possible and all screens are in sync."
],
"score": [
3
],
"text_urls": [
[
"https://www.amazon.com/avedio-links-Certified-Splitter-Resolutions/dp/B01A6VELVQ/ref=sr_1_4?ie=UTF8&qid=1549003056&sr=8-4&keywords=1x8+HDMI+Splitter-Full+3D%2C+Ultra+HD%2C+4K"
]
]
} | [
"url"
] | [
"url"
] |
alzgnm | why smart phone producers are not able to make phone battery last longer than 3 days? | Technology | explainlikeimfive | {
"a_id": [
"efi4ri4",
"efi4x1s"
],
"text": [
"They can, it just would be a very large battery and few consumers want a very thick/large phone.",
"Current Lithium ion batteries that can fit in a extra large phone are usually 3,000-4,000 mAh, 1. We need to develop new technologies, we have one that can charge in 10 minutes and lasts hours on a 10m charge, but is unstable and needs testing. 2. size, the bigger the battery, the bigger the size, unless you want your smartphone to be as big as a tablet, current batteries will have to do 3. Larger charging times, even with super fast charging, larger batteries = more time charging 4. Explosion risk, when you don’t test things correctly, the phone can explode due to the battery, so it’s sometimes best to stick with what has worked and gradually increase the capacity as technology develops."
],
"score": [
4,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
am0a7c | In the Speed Racer movie, T180's are racecars with wheels that can spin independently, like a shopping cart. Would it be possible to build such a car in real life? | The wheels can spin 180 degrees, to allow for maximum control during a race. T180's can also go up to 400 MPH, however I am only focusing on the possibility of such a steering system. An excellent example of a T180's wheels at work: URL_0 As you can see, the wheels are used to steer the car in a certain direction, in order to minimize spinning out and allow for easy drifting. Would it be possible to build such a car? | Technology | explainlikeimfive | {
"a_id": [
"efia7eo",
"eficyng",
"efiaikb"
],
"text": [
"If each wheel has its own electric motor, sure, why not?",
"It's certainly possible - but not in a car going anywhere near 400 MPH. First, just consider the insane forces acting on the wheels when you're cornering at such speed. Take a look at the suspension of a [formula 1]( URL_0 ) car: It's got 4 struts reaching to the wheel hub, each made from carbon fiber composite. This is optimal for the forces from cornering and braking. A free spinning wheel would need have a huge bearing and suspension to take the same kinds of forces, not something you want in a race car. The second issue is that at high speed, aerodynamic forces can flip or even lift up a car and make it fly off the track. High speed race cars like F1 are designed to be exceptionally stable while going forwards, with big wings generating downforce to improve traction. This gives the cars better corner speed than drifting, and is a lot safer on top of that. Something like that might be useful for parking a car sideways or perhaps in a car designed for doing drifting tricks, but there's no point in a race car.",
"Yes, there are already commercially available cars with four wheel steering, and have been available since the late 80s Here's a 1937 mercedes benz with 4 wheel steering URL_0"
],
"score": [
4,
4,
3
],
"text_urls": [
[],
[
"http://www.formula1-dictionary.net/Images/engine_f1_stresed.gif"
],
[
"https://en.wikipedia.org/wiki/Steering#/media/File:Mercedes_K%C3%BCbelwagen_G5.jpg"
]
]
} | [
"url"
] | [
"url"
] |
am1p69 | How did medieval era people cut their toenails when there were no nail-cutters? | Technology | explainlikeimfive | {
"a_id": [
"efilppa",
"efion6v"
],
"text": [
"People used knives to \"peel\" their nails shorter. Or just let them break from physical labour.",
"Nail grooming as a practice goes back to at least the ancient Romans, although for most of history, this only applied to those of high socioeconomic status. Mostly everyone else was engaged in some form of manual labor which kept nails short from wear and tear. There were plenty of devices available for trimming nails prior to the invention of the modern nail cutter. Most prominently small knives and the many different types of scissors that have existed for thousands of years."
],
"score": [
3,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
am3p6c | How did my 1980's watch maintain more accurate time than my digital clock from today? | When I was younger, I would set my watch to the shortwave time broadcasting station. The drift over months seemed minuscule. Now, I have digital clocks that seem to shift minutes in months. Other smart watches I use seem to lose seconds in a week, if not synchronized. Why are the new clocks much less accurate? | Technology | explainlikeimfive | {
"a_id": [
"efj3wbo",
"efjcppx"
],
"text": [
"The watch was *meant* to be accurate and was made with higher-quality materials, especially by the standards of the time... not only “better” ones but ones more suitable for keeping accurate time. Many electronics have ring or relaxation oscillator circuits— things that create a rhythmic on/off signal. These are entirely electronic, using the same type of parts already used to supply power, light up different segments of the display, etc. They’re also used to tune in radio stations and regulate the speed of digital circuitry, so they’re cheap or *already included* inside a clock radio type device. But they’re not too accurate. A nice watch likely uses a quartz crystal. This is like a tiny “tuning fork”, it actually has vibrations going on in it. And it’s very good at keeping time accurately. Since this is the whole point of a good watch, and there isn’t other circuitry already there for other purposes, it seems like the obvious choice for timekeeping needs. But in the bedside clock where it has an alarm, maybe radio reception, powers a digital display, etc... the temptation to cheap out or repurpose existing circuits for timekeeping purposes would be stronger.",
"The alarm clock is probably drifting due to temperature. Your smart watch likely uses an entirely different time keeping mechanism. The crystal in a clock vibrates at a certain rate to keep time. Temperature affects this rate. As it happens, the temperature effect is smallest at approximately human body temperature, and this is where the frequency of the crystals is calibrated. A watch worn on your wrist sees a nearly constant and optimal temperature. A digital clock does not. Quartz watches are surprisingly accurate. If you never take a cheap quartz watch off, and if you know exactly how many seconds it gains or loses per month, it is actually stable enough to use for navigation -- you can figure out your longitude by sunrise times. Our time standard before atomic clocks was essentially an array of these circuits held in an oven at a very stable temperature. The smart watch is more interesting. It probably lacks a standard watch crystal to save precious circuit board space, uses the CPU oscillator as a poor substitute, and then relies on getting frequent updates from your phone. The quartz crystals used in watches and clocks vibrate 32,768 times per second. This value implies that they are a certain size. If you use 15 bits of memory to count those vibrations then you roll over once per second. If you go to twice that frequency you cut the crystal size in half, but need another bit in your counter. If you cut the frequency in half, you can drop a bit from the counter, but your crystal becomes twice as large. The 15-bit/32,768 value was selected because it was the best compromise for size, cost and power at the time digital watches were invented. It then became the standard, and as a result this is by far the most commonly produced and therefore cheapest quartz crystal size, so we kept using it. The problem for smart watches is that by today's standard, that crystal is a huge and slightly expensive component. To save cost and board space, it is often eliminated, and time gets calculated using whatever oscillator the CPU is making use of. This is accurate enough under the assumption that it will sync every few days, but not enough to hold the second per month or so that a dedicated quartz watch can. A cheap analog Timex will likely keep time just as well as you 80's watch, because it is still using the same tech."
],
"score": [
20,
10
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
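The second answer for am3p6c explains the 32,768 Hz crystal / 15-bit counter choice and the idea of a small, steady frequency error adding up. A quick sketch of both bits of arithmetic; the ppm figures are assumed, order-of-magnitude values, not specs for any particular watch:

```python
CRYSTAL_HZ = 32_768          # standard watch-crystal frequency

# A 15-bit counter overflows after 2**15 ticks, i.e. exactly once per second.
print(2**15 == CRYSTAL_HZ)   # True

def drift_seconds(ppm_error, days):
    """Accumulated timekeeping error for a crystal off by ppm_error."""
    return days * 24 * 3600 * ppm_error / 1_000_000

print(f"20 ppm off over 30 days  -> ~{drift_seconds(20, 30):.0f} s of drift")
print(f"0.4 ppm off over 30 days -> ~{drift_seconds(0.4, 30):.1f} s of drift")
```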
am4bvs | If we can produce phone screens that are 6" in diameter with a full HD resolution, whats stopping us from making 16K TVs? | Thinking about it, it's probably cost and graphical capabilities of hardware, but are there other factors? | Technology | explainlikeimfive | {
"a_id": [
"efj6s4u",
"efj6szk",
"efj7vpy",
"efj75rr",
"efj8aef",
"efja8df",
"efjck1f"
],
"text": [
"Nothing is stopping us. Theres just no market value. Even 4k is limited by content creators actually making their products 4k in resolution, most of the market for media consumption is for 2k displays.",
"Who honestly would want a 16k tv when a lot of media hardly comes through anywhere between 720 and 1080p. Having a 4K tv is a stretch not to mention those thinking about 8.",
"First off: Diagonal. Not diameter. Second: yeah, we have hi-res phone screens, but think about how close to your face you hold your phone, you scrutinize things much more closely on it. You might just stare at that insta pic for a minute or two. Generally not the case with TVs. You are generally watching something. Even if you do use it as a monitor, you don't get as close to your monitor as your phone, I'm generally a couple feet away at closest. Your eye being able to distinguish the difference between 4k and 16k at a certain distance is what makea higher resolution valuable in the consumer market. A 65\" 16k tv would be just as pointless as an 8k tv. Your tv isn't even a 4k tv even though it is advertised as such, it's more likely 2k.",
"No demand. They would be very expensive to produce, and there is virtually no content in such resolutions outside of a few niche industrial purposes.",
"Pretty sure there's a limit of just how much data a standard hdmi cable can transfer which means we technically could but refresh rates would be terrible and everything would look choppy. This will probably change with newer interfaces",
"It's different for tech geeks (visualphiles?), but the average uninformed consumer often can't see much difference between 720p and 1080p, let alone 4K or 16K. & #x200B; There comes a point at which the extra clarity just doesn't matter that much to most people (myself included).",
"One big factor is a problem that is known as \"Yield rate\". If on average your process for making a screen is going to have obviously missing pixels every ten inches, you can make six inch screens and only trash a small percentage of them as defective. But you'd be trashing every single screen you made if you tried to make 20\" screens. Or be selling screens with obvious dead spots. And neither of those is something you can base a business around. Other factors: * The amount of data you have to send for show a single image goes up fourfold every time you double the resolution. * Existing cables and Internet can only go so fast. * Your eye can only pick out details that are so large. If you're sitting on your couch looking at the TV, it's hard to see a single pixel with a 4k display. 16k would be way more detail than you'd ever see. I'm not going to do the calculations but I'm think that's getting into the territory where you still wouldn't see any pixels even if you put your nose on the screen. it is worth chasing higher resolutions in phones because we typically hold them a lot closer to our eyes than a TV; you can actually see the difference at a comfortable viewing distance."
],
"score": [
29,
12,
8,
7,
6,
4,
4
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
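Several answers for am4bvs point at the data problem: every doubling of resolution quadruples the pixel count. A quick sketch of the raw, uncompressed numbers, assuming 24 bits per pixel and 60 frames per second (both assumptions, chosen just to show the scaling):

```python
BITS_PER_PIXEL = 24   # assumed: 8 bits each for red, green, blue
FPS = 60              # assumed refresh rate

resolutions = {
    "1080p": (1920, 1080),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
    "16K":   (15360, 8640),
}

for name, (w, h) in resolutions.items():
    gbit_per_s = w * h * BITS_PER_PIXEL * FPS / 1e9
    print(f"{name:>5}: {w * h:>12,} pixels, ~{gbit_per_s:7.1f} Gbit/s uncompressed")
# Each step quadruples both the pixel count and the raw bandwidth, which is
# part of why cables, encoders, and content lag behind panel resolution.
```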
am6e3q | How do phone mics filter audio based on source (like YouTube vs me talking using the loudspeaker) | My girlfriend and I video call each other every day (long distance fml). When I have her on speaker and also have a YouTube video playing on my phone at the same time, she can hear everything I say but nothing from the YouTube video. While this is perfect for me, how does the phone do it? Is it software specific because I've only tried it with Google Duo and Hangouts? | Technology | explainlikeimfive | {
"a_id": [
"efjpw8u"
],
"text": [
"Assuming you're playing the YouTube video on the same phone you're using for the video call, it's relatively easy: the phone knows what sound it's outputting from the speakers, so it can simply subtract that sound from what it's receiving through the microphone before sending it. Any phone that can be used as a speakerphone needs to do this to some extent, or you'd get feedback."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
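The answer for am6e3q says the phone "subtracts" the sound it is playing from what the microphone hears. A heavily simplified sketch of that idea with toy numbers; real acoustic echo cancellation has to estimate delay, room echo, and volume with an adaptive filter, none of which is modelled here:

```python
# Toy signals: small lists of sample values.
playback = [0.5, -0.3, 0.8, 0.1, -0.6]   # what the phone sends to the speaker
voice    = [0.2,  0.0, -0.1, 0.4,  0.3]  # what you actually say

# The microphone hears your voice plus (some of) the speaker output.
SPEAKER_BLEED = 0.9                       # assumed fraction reaching the mic
mic = [v + SPEAKER_BLEED * p for v, p in zip(voice, playback)]

# Since the phone knows exactly what it played, it can subtract it back out.
cleaned = [m - SPEAKER_BLEED * p for m, p in zip(mic, playback)]

print([round(x, 3) for x in cleaned])     # ~= the original voice samples
```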
am80sh | Why is it so hard for game consoles to include backwards compatibility? | More specifically, I just saw an article on a new Sony patent for the PS4 processor to emulate older hardware - I originally thought this was just a software issue but I guess older software is sensitive to the actual hardware running it as well? | Technology | explainlikeimfive | {
"a_id": [
"efk2f64",
"efk2bju",
"efke6t6",
"efkdib2",
"efkcl98",
"efk1to1",
"efkel64",
"efkdgnt",
"efkd7qu",
"efkdoym",
"efkqa6b",
"efkeits",
"efkcn7j",
"efkexjg",
"efl4n8e",
"efkhrpu",
"efki3f8"
],
"text": [
"So, there are many different things that can cause these games to not work. & #x200B; & #x200B; First of all, when your game is trying to work on a newer system, it was equipped with a map of the older hardware, this allows the game to find all of the different pieces inside your console to run, whether it be memory, processors, video processors or even where the controllers are. When newer consoles come out, the map changes, and it was programmed on the older maps. Your program doesn't know where anything is. It gets lost and feels scared. & #x200B; Next, older consoles may have had only a single processor. Think of it like a little person doing things inside your console. Newer consoles have more little people working inside of them. Older games, if they sent out a person to go do jobs, the jobs always came back in the order they got sent out, because there was only the one guy. Newer consoles have more people, so sometimes, the jobs come back in a different order and the game doesn't know what to do with the completed job. It is used to only getting one thing back at a time. This can make your game have a panic attack and shut down. & #x200B; Also, there are other programs that are on your console that also do work and help your game work. Think of them less like little people and more like the school you go to. These programs are filled with rules, whether its handling how your game looks, or what buttons do when they are pressed, or what sounds are supposed to play. Just like your school, these programs have rules. Your game was written on the old rules, and doesn't know what the new ones are. It may try to do something it thinks is fine and a hall monitor pops up and yells, \"You can't do that here!\" Or, maybe your game is allowed to do something that would make it's day easier, but doesn't know that it's not against the rules. & #x200B; Edit: Thanks for all the appreciation and gildings!. A couple of things: I am not a programmer as a job. I had to drop out of college when I ran out of money. I wanted to work on video games, but it didn't pan out. My wife (who is much smarter than me) helped write the post. I am a locksmith. Also, there are apparently some caveats on weird wordings that didn't come across quite right, but others have responded below with far more technological information that I have. I encourage you to look through the responses, they are quite good. We tried our best for an ELI5, so the metaphors are imprecise. When I told my wife the prompt and what I was going with, she gave me an exasperated look and said, \"Oh my god, so much.\" Apparently it can be really hard, but some programs are written to not utilize the hardware in weird and quirky ways are much easier.",
"Because they were reinventing the wheel with every generation of console instead of using a standardized architecture. If we're talking about PlayStation, CPU and GPU in every console before PS4 had different architectures, they weren't using standard \"computer\" x86-64 architecture that can run any standard computer program, so this is why they can't just run games written for older console on a new one. They can emulate older CPU inside a more powerful CPU, but this a silly exercise to begin with. Now you know why standards are a good thing. The situation is better nowadays, PS5 will have backward compatibility with PS4 games (just as PS4 Pro does with PS4) because they will use a CPU and GPU based on the same standard architecture in it.",
"Each gen of a console may have very unique hardware tied together in a very specific way with unique capabilities that help it perform as fast as possible. The way the components work together may be very tightly coupled, and the code the games run may be so optimized that e.g. the time it takes for some routine on the graphics chip to complete might be highly interdependent on the CPU getting some other task done fast enough to even run a game stable. The new gen might be called e.g. the PS4, but inside the hardware is nothing like the PS3 at all. To make the older code run on the new hardware, the newer device has to intercept everything the old code thinks it's doing on the old hardware and make it work on the new hardware instead, with no discrepancies in behavior. This is called emulation. That's usually very hard to do and the emulator code itself takes time to process too. The result is that unless the new hardware actually contains a lot of same the chips that the older hardware also had, or the manufacturer abstracted away the functionality behind an easily re-implememted API (like DirectX), you need a much, much faster processor to have any chance of emulating the old system fast enough to make it perform well. Especially in older generation devices, before the likes of XBox with it's more standardized directx-like API, coders used to write code that talked to hardware directly. It means emulators have to interrupt almost everything that code does and pretend to do the same thing on the newer hardware. Imagine writing an emulator: Your program has to read hundreds of thousands of instructions per second, transform them into instructions that can be understood by the new hardware, translate instructions for the video and audio and I/O chips, keep all their work in sync like on the original system even though the performance of the new hardware is totally different, and do it all fast enough to keep everything running in real-time as far as the user can tell. Even today there are still devices from 25 years ago (like Amiga) that still don't have perfect emulation and where code still runs better on a 7Mhz original device than a 2Ghz PC running an emulator. The performance cost of emulation can be so great that some consoles that can run games from older generation systems go as far as including most of the older generation hardware to be able to do it. For example, older PS3's have chips in them that are compatible with ones from the PS2 and can play PS2 games. Newer model PS3's don't, and can't. tl;dr: Your PS3 makes beef burgers. Your PS4 is vegan. These are not compatible. It takes a lot more work for your PS4 to make a vegan burger passable as beef.",
"It's usually not cost effective. There's usually a tradeoff between having the highest possible performance and being compatible with your previous generation's hardware. You are often compromising your performance by sticking to your previous architecture, than switching to whatever is top of the line at the moment. The Wii was the same architecture as the Gamecube, at the cost of the console's performance. The PS3 used a different architecture than the PS2/PS1, so it couldn't natively support PS2 games. So Sony put the additional hardware of a PS2 inside the PS3 to enable reverse compatibility, which was expensive. That's why the first generation PS3 was $600 and they took out reverse compatibility for the price reduction model.",
"Think of a Console as set of building blocks like Lego. All the pieces are designed to fit and work together. Now a new set of pieces comes out, that has a different shape and size, so those don't fit anymore, even tho they both are building blocks. The set of blocks is the architecture of a console, so a new console is not just a \"better\" better console, but a \"different\" one, since it has a different architecture.",
"Software run on hardware. So to software you need hardware the either work the same way to emulation the behavior in other software. The PS3 for example had a unusual processor. It use used the IBM PowerPC architecture and not the x86 architecture that the PS4. Xbox one and PC and Mac today So you have to convert all instruction so the do the same thing on the x86 architecture. It also had eight Synergistic Processing Elements (SPEs) that was special processors that was extremly fast on some stuff but bad on other. So you need to emulate them and how information was transferred between them and the main CPU and memory. So you need to emulate everything so it work the same way as in the old console for the old software to run.",
"One way of looking at it is this - there is very little actual profit in selling the consoles, most margin is made from selling software / controllers. That was the big issue with the Wii U. People who purchased the console maybe also got a couple of Wii U games, but since there was plenty of good Wii games already in their collection, and backwards compatibility was a major selling point.... Very little profit for Nintendo. Remember when PS4 and Xbone were announced? Neither was marketed as being backwards compatible. It was only after getting smashed by PS4 in sales did Microsoft announce backwards compatibility for the Xbone. But not all at once, and it's been a steady trickle of 360 games becoming playable on the Xbone.",
"Consoles are not like PC's made of the same types of parts. BC on consoles is more like porting software from a calculator, to a nokia, to a iphone. It's so different every time that it is hard to translate. and additional work is needed. On the PS2, an \"old\" PS1 processor is actually included inside the machine! It's used to help out with processing, but also used to run old PS1 games. :D",
"For the same reasons that backwards compatibility is hard to include in any technology. Hardware and software standards change to make room for improvements, and it is not financially suitable to spend time or allocate resources in the architecture for BWC",
"Trying for the most simple explanation: Pretend every console is a board game. And then the board game continues to get updated with additional rules and additional pieces. Now if a new board is required to play the game, you won't be able to play with old pieces, but if the board can stay the same, you will. Not a perfect analogy, but probably an appropriate ELI5.",
"Unlike on the PC where Microsoft stuck to x86 chip architecture video console makers tend to jump from one chip architecture to another looking for the best deals. These change of hardware does not impact native games for that system but for it does impact older games that ran on different chip architecture. To get around that incompatibility they resort to emulation which can cause unexpected behavior. It also costs money to do. And seeming it only benefits games that have already been sold and do not return revenue back to the video console makers then it is not exactly a priority.",
"There are actually two separate answers to this, depending on which… let's call them eras. Which era of video game console you're talking about. Before, approximately, somewhere between the original XBox and the PS3, a video game console was a single, special-purpose system. Games programmed for it were often programmed in assembler, or at least in languages with very little separating them from assembler, and directly manipulate the hardware to achieve their results. Code for a SNES game, for example, contains carefully-designed loops designed to drive the sound hardware at just the right rate to play music and sound effects, and to drive the video hardware fast enough and in just the right ways to update the screen. Any change to the platform required not just some code changes, but often a complete rewrite of major sections of the game. The only way to be \"backwards compatible\" with these games is to provide some degree of emulation of their original platform. That can be total software emulation, which is how old NES games on the Switch and Wii consoles work, or partial emulation with hardware support, as was done in the PS2 to run PSX games, or any of a number of approaches. There isn't really a viable alternative to making something that runs exactly like the original platform, and since emulating hardware near-perfectly is surprisingly hard, it's not always commercially viable. _After_ that dividing line, game consoles start to look a lot more like general-purpose computers with a game-specific OS running on them. As part of that change, games stopped talking to the hardware directly and started talking to the OS. Instead of including a tight loop to drive sound hardware, for example, modern games have standard-ish audio files in them that you could just as easily play on your computer or iPhone, and will hand that audio data off to the operating system to play through the sound system. The OS handles converting a lot of data into hardware calls on behalf of the games running on it, just as your laptop's OS allows you to run the same software you'd run on a desktop or server. Games written this way are much, much more resilient to changes in hardware, so long as someone writes a compatible OS. The game itself contains a lot less hardware-dependent code. That, for example, is why the PS4 Pro doesn't run most games at a higher framerate than the PS4, despite having a CPU with a slightly higher clock rate. That also means that it's now much, much easier at a technical level for consoles to run previous generations' games, so long as the OS is compatible - and once a compatible OS exists, a _wide range_ of games become compatible, depending on how vital the remaining platform-specific code is. Of course, there are business issues, too. Nintendo didn't start selling Virtual Console titles for the fun of it: they do it because it's profitable to bring back their back catalog on a new platform. That will remain true as long as gaming is dominated by for-profit platforms: you can expect future XBox consoles to drop compatibility with any XBox version it's too expensive to support.",
"It's not that hard in some respects...the PS2 included specialized hardware to allow PS1 games to run on it. This was possible because advances in technology allowed fitting the same hardware capabilities of a PS1 onto a single chip in the PS2. Likewise, early versions of the PS3 contained a complete PS2 in hardware, which gave it great backward compatibility. This sort of feature is expensive though, and was removed to reduce cost, size, and complexity. It was omitted entirely from the PS4. This goes back quite a way. The Apple IIGS had a chip called [Mega II]( URL_0 ) that duplicated a large amount of functionality of the older Apple II hardware to provide backward compatibility. Without this level of hardware support, backwards compatible functionality can be quite difficult. All consoles before the current generation had vastly different hardware capabilities and instruction sets, requiring either binary conversion or real-time emulation. This is very slow, and can be completely impossible in some cases.",
"The Playstation 2 was a very specialised piece of hardware. It contains several different specialised cores that can do a single thing very quickly, so even if you just want to render a single triangle you have to pass it through all the relevant hardware. Each of the specialised cores have their own purpose, functionality, memory, etc and work together as finely tuned team with specific communication channels. This is completely unlike PCs, where you have a single general purpose CPU for game logic and a GPU for rendering the game. Due to this, emulating the PS2 on PC is still a challenge today due to the completely different hardware. The Playstation 3 was also very different from a PC, but in a completely different way from the PS2. Here we have a similar GPU to a PC, but the Cell processor works in a very peculiar way. Instead of having a number of functionally identical CPU cores, the PS3 had one main CPU that could do anything, and eight support co-processors that could only be used for certain specific tasks. Utilising these support cores effectively turned out to be very difficult for developers, and it took a long time before they learned to make full use of the PS3 hardware. To run PS2 software on a PS3, you'd essentially have to emulate all the specialised hardware in a PS2, which the PS3 doesn't have the speed to do. You can think of it as a general purpose human trying to replace an industrial robot. A human is capable of doing almost anything, but the robot can do a single specific thing much faster. Similarly, when the all the support cores of the PS3 are utilised to their full potential it's hard for a PC CPU to keep up. Emulating the PS3 on a PC is really hard, but valiant effort is being done. The Playstation 4 is a Linux PC with a close to off-the-shelf CPU and GPU. The only difference from a standard PC is that the CPU and GPU share the same memory. There is no reason to assume that the PS5 will be doing anything different from the PS4; it'll probably just have a faster, more modern CPU and GPU and more RAM. Running PS4 games on it should be a breeze. Note that there's apparently a fake PS4 emulator up (PCSX4). There is no functioning PS4 emulator available at this time.",
"It's been answered a ton already so this won't add anything new but, being in the emulation world myself, it may be among the most simple answers while still making the point: Not all hardware is created equally and all console games are made to run on VERY specific hardware. That's it. It boils down to that and we could end here. It's like asking your liver to also work as a kidney while also remaining your liver (lol). In some sci-fi world it may be possible but you're still asking one organ to do the work of an entirely different organ as well as still be itself. Sure, both are organs so they're made of cells and use blood and such but that's about as far as the similarities go. They're just not the same things and major tricks/understanding would be needed to make it happen. Most emulation is hardware knowledge and tricks. Horrible example but we're going simple. So, again one could stop there but if someone wants more detail, let's go a bit deeper with examples: A 300mhz CPU in one machine doesn't mean any other machine with at least a 300mhz CPU can run it. In operation, the CPU in a PS2 is nothing like the CPU used in a Dreamcast or Gamecube or XBOX (and all vice versas). That's not even going into the memory, GPU, etc. Hardware within the same generation is rarely similar, let alone different generations. So, if you want the PS4 to emulate the PS2 (it can) you first have to tell the PS4 hardware to process information like a PS2 did. You're asking hardware made to run PS4 software to literally interpret PS2 software like the PS2 did (emulation). Well, you're asking the PS4 to run itself (layer 1) AND run a faux PS2 within itself (layer 2). Finally, you're asking it to run PS2 software within that 2nd layer (layer 3). The only reason that's possible is the raw power difference between a PS2 and PS4 because, as you can imagine, that takes much more processing power than an actual PS2. Hence, it's not a 1:1 processing power situatuon BEFORE dealing with the different types of hardware architecture. PS2 and PS4 speak different languages but let's use a much more obvious example. For instance, PS3 is based on PowerPC arcitecture while PS4 is using something much closer to x86. They fundamentally don't process information the same exact way. Further, PS3 uses proprietary hardware made specifically for it. This is why a PS4 will never properly emulate a PS3 - it's too similar in power and too different in architecture. PS4 couldn't do all 3 layers of PS3 emulation needed. In that scenario you're emulating an entirely different language before you even try to run the software within the emulator. It's just too different and the power to run the 3 layers isn't there. On the flip side, the hardware from the Gamecube to the Wii U is the same architecture. Not even similar but the same. In fact, Nintendo purposefully made them the same so it's not even really emulation. Each system has to do very little to run the games from the system before it. Some things are being emulated but it's so little being done on such stronger hardware that it's basically 1:1 processing. They speak the same language. The bad analog here would be an organ transplant instead of asking another organ in your body to do a job it's not made for. It just knows what to do, no real tricks or special knowledge needed. Pop it in and it just knows what to do and can handle it.",
"In the case of Sony, the PS1/PS2 hardware was designed and built largely in-house. The PS3 was designed by NVIDIA. PS4 hardware comes from AMD. When you play a game, it can be made to look the same on all the vendors' hardware, but obviously, the performance, and specific details about how the hardware renders the game can vary wildly. On a Windows PC that isn't too much of a big deal because we have APIs like DirectX and OpenGL to try to abstract away the hardware. But on consoles, where developers are trying to eek out every last bit of performance, they pay no attention to portability and layers of software. As a result, the new hardware has to be made to look feature-for-feature, and bug-for-bug, compatible, with old hardware. Source: I worked at NVIDIA on the PS3.",
"Here is a good analogy: Game disks have instructions telling consoles how to create the game. Those instructions are written in a language which is specific to the console it is being written for. Consoles have different languages. For example think of having a cookbook (a game disk) in French when you speak English. In order to make that recipe, you need to have a dictionary to translate from French to English. The game equivalent of this dictionary is called an 'emulator' and not a lot of consoles have historically had emulators because someone has to write it, it needs to be included in the memory on each system and it isn't a big profit generator to make the old games work well. In addition to having this emulator to act as a dictionary to translate the code, the next hurdle is that the console doing the emulation needs to both translate and execute the code at the same time as fast as the old system just executed the code. Think of this like making a sauce on the stove where you need to translate from French to English and convert the units from liters to cups while the sauce is on the stove - take too long to translate and it will burn. In the game world, taking too long leads to freezes in the gameplay or crashes. Also, the console have different chips which do different things with the instructions. Think of this like the tools you have in your kitchen (stove, pots and pans, knives, etc.). Most consoles have the same general things (CPU, RAM, GPU) but some consoles also have rather unique things (vector units on PS2, SPE processors on PS3 cell chip, embedded DRAM in XBox 360 and XB One). Failure to have a key piece of hardware that is used by a certain game run can make it unable to be emulated just like in the kitchen you can't make the creme brulee recipe if you don't have a blow torch."
],
"score": [
9312,
554,
147,
112,
25,
16,
14,
12,
9,
8,
5,
5,
4,
4,
4,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[
"https://en.wikipedia.org/wiki/Mega_II"
],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
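To make the emulation idea running through the answers above a little more concrete, here is a minimal, purely illustrative sketch of the fetch-decode-execute loop at the heart of an interpreter-style emulator. The "guest" instruction set and opcode names are invented for illustration; a real console emulator must also reproduce the GPU, sound hardware and exact timing, which is where most of the cost discussed above comes from.

```python
# Minimal sketch of interpreting a made-up "guest" instruction set on the host CPU.
# Every guest instruction costs many host instructions, which is one reason emulating
# a console of similar power to the host is so hard.

GUEST_PROGRAM = [
    ("LOAD", 0, 5),    # r0 = 5
    ("LOAD", 1, 7),    # r1 = 7
    ("ADD",  0, 1),    # r0 = r0 + r1
    ("HALT",),
]

def run(program):
    regs = [0] * 4          # guest registers
    pc = 0                  # guest program counter
    while True:
        op = program[pc]    # fetch
        pc += 1
        name = op[0]        # decode
        if name == "LOAD":  # execute
            regs[op[1]] = op[2]
        elif name == "ADD":
            regs[op[1]] += regs[op[2]]
        elif name == "HALT":
            return regs

print(run(GUEST_PROGRAM))   # [12, 7, 0, 0]
```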
am8pwf | Why is it that we can’t superduper charge our batteries with a super «shock» like one of those heart rescue thingies and it’ll be full? | Technology | explainlikeimfive | {
"a_id": [
"efk6llm",
"efk6ex3",
"efk7hp3",
"efkc4f6"
],
"text": [
"Because you charge a battery by triggering a chemical reaction with an electrical current as the energy source. High voltage or amperage will just cause it to heat up too quickly and explode/catch fire. Kind of like if you fill up a container with water through a sieve from a tap, the container will slowly fill up. If you instead try to use a fire hose, the water will most likely bounce right off the sieve, possibly destroying it and the container in the process.",
"because batteries store energy in chemical bonds, and the discharge and charge rate is limited by the speed at which said chemical reactions take place.",
"Because recharging a battery isn't the same as recharging a capacitor, I'll explain. When we recharge a battery we use a sufficient current to move the electrons (the power) from one chemical in a battery back to another, this makes it so the electrons can move back from one chemical to the other and provide power for the electronic device again. If you try to move them all at once, it will either explode or burst into flames because a lot the energy used to move those electrons is released from the device as heat, and is why your cell phone heats us when you recharge it, and when you use an app that draws a lot of power to run. A defibrillator uses a capacitor, which builds up a charge quickly, and can release it *very* quickly. It works the same way our body does when we build up a static shock, we charge up off of something like carpet, and our body stores that small charge until we are able to release it into something conducive, usually a car door it seems. Edit: Incorrectly stated that capacitors hold more charge than batteries, supercapacitors can kinda reach the ballpark but general use capacitors generally cannot. My bad.",
"Imagine charging a battery to be like wetting a dry sponge. You can dunk a dry sponge in as much water as you like, but it is going to need time to soak up the water. There are chemical changes occurring in the battery that take time. Just like it takes time for the cells of a sponge to absorb water. Unlike a sponge in water, however, if you give too much charge to a battery, it will blow up!"
],
"score": [
80,
12,
12,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
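Some rough numbers help show why the "one big jolt" idea fails. The capacity and shock energy below are typical assumed values, not figures taken from the answers.

```python
# Back-of-envelope numbers (assumed, typical values) for why a single big jolt
# can't "fill" a phone battery: the energy is large and the chemistry is slow.

capacity_ah    = 3.0      # ~3000 mAh phone battery (assumed)
voltage        = 3.7      # nominal lithium-ion cell voltage
battery_joules = capacity_ah * voltage * 3600            # Wh -> J
defib_joules   = 200.0    # energy of one defibrillator shock (typical order)

print(f"battery stores ~{battery_joules/1000:.0f} kJ")              # ~40 kJ
print(f"that is ~{battery_joules/defib_joules:.0f} defib shocks")   # ~200 shocks

# Even ignoring heat, delivering that charge in one second would need:
amps_for_one_second = capacity_ah * 3600 / 1.0
print(f"~{amps_for_one_second:.0f} A for one second")               # ~10800 A
```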
|
ambl2m | How are those laser-pointer temperature guages able to take a temperature by hitting something with a laser? | Technology | explainlikeimfive | {
"a_id": [
"efkt49b"
],
"text": [
"They use infrared sensors, since heat (or thermal energy) falls within the infrared spectrum. The sensor can interpret differences in heat just like we can determine differences in the brightness of visible light with our eyes. The laser is just there for aiming, they have nothing to do with measuring temperatures."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
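A small sketch of the textbook relation an infrared thermometer leans on, the Stefan-Boltzmann law. Real sensors only measure over a limited infrared band and are calibrated against known sources, so treat the numbers here as illustrative assumptions.

```python
# Radiated power P = emissivity * sigma * area * T^4 (Stefan-Boltzmann law);
# the thermometer effectively inverts this to turn received IR power into a temperature.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def temperature_from_power(power_w, area_m2, emissivity=0.95):
    """Invert P = e * sigma * A * T^4 to get temperature in kelvin."""
    return (power_w / (emissivity * SIGMA * area_m2)) ** 0.25

# A 1 cm^2 patch of skin radiating roughly 0.047 W (assumed example):
print(temperature_from_power(0.0473, 1e-4))   # ~306 K, i.e. roughly 33 C
```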
|
amevz6 | LED lights | Why do LED lights flicker on slow-motion cameras - like around sports grounds? I sort of get that they turn on and off really quickly, faster than we can see, but why do they do that? | Technology | explainlikeimfive | {
"a_id": [
"eflghuj",
"eflida3"
],
"text": [
"LED is short for light emitting diode. A diode is an electronics component which allows electric current to pass through it in one direction but not the other. Stadium lights are generally hooked up to the alternating current electricity grid of the area they're in. As a result, an LED only has current moving through it about half the time unless equipped with relatively expensive rectifying and smoothing equipment. Since this happens too fast for human perception to notice it unassisted, stadiums tend to prefer not spending the extra money to eliminate it.",
"But this ‘flicker’ also occurs in cars that use a DC battery. Why is that....alternator?"
],
"score": [
19,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
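A toy model of the flicker described in the first answer. The mains frequency, frame rates and the assumption of simple half-wave rectification are all illustrative, but they show why the eye and a normal camera see a steady light while a slow-motion camera catches the dark gaps.

```python
# With a simple rectifier an LED on 60 Hz AC is dark for roughly half of every cycle.
# A camera frame "sees" the average brightness during its exposure: long frames
# (30 fps) average the gaps away, short slow-motion frames (240 fps) land inside
# individual bright and dark stretches.
import math

MAINS_HZ = 60.0          # assumed grid frequency
STEPS = 1000             # integration steps per frame

def brightness_during_frame(start, exposure):
    on = 0
    for i in range(STEPS):
        t = start + exposure * i / STEPS
        if math.sin(2 * math.pi * MAINS_HZ * t) > 0:   # conducting half-cycle
            on += 1
    return on / STEPS

def frame_brightnesses(fps, n=8):
    return [round(brightness_during_frame(i / fps, 1 / fps), 2) for i in range(n)]

print(" 30 fps:", frame_brightnesses(30))    # every frame ~0.5 -> looks steady
print("240 fps:", frame_brightnesses(240))   # frames swing between ~0 and ~1 -> flicker
```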
amhyqe | How do dollar store nightlights and small electronics convert 110V to usable energy for a small led? | Sorry if this is the wrong sub but im curious to add to my personal projects but how do they do it so cheaply? | Technology | explainlikeimfive | {
"a_id": [
"efm8gii"
],
"text": [
"Think of how small an iPhone charger cube is and then remember that while 120V AC is going in, it's only outputting 5 volts DC. Instead of a bulky transformer to step the voltage down, it uses various solid state discrete components and minimal ICs to get the desired voltage. Using similar technology, converting 120V AC to just a few volts DC to illuminate something as small as an LED is quite easy. Typically the first thing that happens is the 120VAC is routed through a mechanism called a [full bridge rectifier]( URL_0 ) which is created by arranging 4 diodes in a particular configuration which converts the electricity into DC. There are usually a few capacitors and inductors just after the rectifier to smooth out or filter the ripples to create a smooth constant 170VDC voltage supply (the reason it's 170V and not 120V is outside the scope of this ELI5, but it's because 120VAC is not really 120VAC, it's more of an average). Once there is a smooth DC voltage, an integrated circuit (IC, or \"smart\" chip) is used to control a MOSFET (an electrical switch) which chops up the 170VDC into very small chunks, VERY quickly. This high frequency DC is now fed through a small transformer with very thin windings (as opposed to the large wall-wart style) which outputs another AC supply, but at a smaller voltage, much closer to the desired final output voltage. This now-smaller AC supply is the rectified again, ran through more smoothing and filtering, and often other components to \"regulate\" the voltage to make sure you get a clean, reliable desired output voltage. To illuminate an LED, the LED has what is called a \"forward voltage\" which is how much power is \"used up\" between the input and output. This is typically between 1.8 and 3.3V, depending on size and color. If supply 5V DC to an LED with a forward voltage of 3V, then you need a resistor to \"use up\" the left over voltage which is converted to heat. LEDs also have an operating current, or how many amps is required to use it, typically around 25-30mA. Using Ohms Law (V=IR, or Voltage = Current * Resistance), you calculate the size of the resistor that you need in your circuit. You need to \"use up\" 2V at 25mA, so the formula is 2V = .025 * R where you solve for R. 2V/0.025 = 80 ohms. Admittedly there is a bit of hand-waving going on with the circuit explanation, but this should give you a better understanding of how it can be possible."
],
"score": [
3
],
"text_urls": [
[
"https://www.youtube.com/watch?v=wOW6gtxfk8U"
]
]
} | [
"url"
] | [
"url"
] |
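The Ohm's-law arithmetic at the end of the answer above, wrapped in a tiny helper. The 5 V supply, 3 V forward voltage and 25 mA figures are the ones used in that answer.

```python
# The series-resistor arithmetic from the answer above: the resistor must drop
# whatever voltage the LED doesn't, at the LED's operating current.

def led_series_resistor(supply_v, forward_v, current_a):
    """Ohm's law: R = V / I applied to the leftover voltage."""
    return (supply_v - forward_v) / current_a

# Example from the answer: 5 V supply, 3 V forward voltage, 25 mA
print(led_series_resistor(5.0, 3.0, 0.025), "ohms")   # 80.0 ohms
```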
amiw8a | Why are most one-time password codes five to six digits? | Technology | explainlikeimfive | {
"a_id": [
"efm99bu"
],
"text": [
"More than that is hard to remember and retype. Since it's one-time-only there isn't much brute force risk."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
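To put numbers on "there isn't much brute force risk": the attempt limit below is an assumption, since services differ, but the shape of the arithmetic is the point.

```python
# A one-time code is only valid for a short window and services lock out after a few
# wrong tries (the limit here is assumed), so the attacker's odds stay tiny even
# though six digits would be a weak permanent password.

digits = 6
attempts_allowed = 5                      # assumed lockout threshold
codes = 10 ** digits

print(f"{codes:,} possible codes")                            # 1,000,000
print(f"chance of guessing: {attempts_allowed / codes:.6%}")  # 0.000500%
```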
|
amjntq | How do Binoculars work | I recently wanted to buy some binoculars. How do they work and how much do they zoom in on an Image. In case i want to recognise somebody from 200m away | Technology | explainlikeimfive | {
"a_id": [
"efmfc80",
"efmhqdc"
],
"text": [
"Common binoculars are about 7X or 8X, so a person at 200m would look like they were 25m - 30m away. Except that binoculars can't compensate for atmospheric distortion. You know that heat-wave shimmering look you see on a summer highway? There will be that kind of distortion on a figure at 200m, that you can't magnify away. But you should be able to recognize a person at that range.",
"Binoculars contain at least two lenses. These lenses bend (refract) the light in such a way that two parallel rays of light that enter the binocular on the far side will be further apart (but still parallel) when they leave the binocular on the eye side. This means that objects appear larger when looking through the binocular. One of the simplest ways to achieve this is with a [Keplerian refracting telescope]( URL_0 ), which uses two lenses. In such a telescope, the magnification depends on the ratio of the focal lengths of the two lenses. More advanced binoculars might have more lenses to correct for optical aberrations (e.g. color fringes), but magnification will work the same. For a telescope/binocular with 2x magnification, a distant object will appear half as far away. For 3x magnification, the distance will appear to be a third, and so on. I don't know how far a person can be away to still recognize them, but this relationship should allow you to compare binoculars. Some binoculars might have a \"zoom\" feature, which means that their magnification can be changed."
],
"score": [
3,
3
],
"text_urls": [
[],
[
"https://en.wikipedia.org/wiki/Refracting_telescope"
]
]
} | [
"url"
] | [
"url"
] |
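The two relations from the telescope answer above, as a couple of one-line functions. The focal lengths are assumed purely to produce the 8x figure mentioned in the thread.

```python
# Magnification of a simple Keplerian telescope is the ratio of the focal lengths,
# and a magnified object looks as if it were 1/magnification times as far away.

def magnification(focal_objective_mm, focal_eyepiece_mm):
    return focal_objective_mm / focal_eyepiece_mm

def apparent_distance(real_distance_m, mag):
    return real_distance_m / mag

print(magnification(200, 25))        # 8.0x (focal lengths assumed for illustration)
print(apparent_distance(200, 8))     # a person 200 m away looks ~25 m away
```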
amjsuw | How come my computer has no problem rendering modern games at 60fps+ but it might take hours to render a single frame of a 3d model in Blender/Maya or other modeling software | Technology | explainlikeimfive | {
"a_id": [
"efmgbfp"
],
"text": [
"The models used in those programs are much more complex, for one. Second, the algorithms used to render those scenes are much more complex. They take into account things like reflecting light and such. The more reflections you take into account, the more processing has to be done... exponentially."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
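A back-of-envelope comparison of the two workloads. Every count below is an assumption chosen only to show the scale difference between a 16 ms game frame and an offline render that traces light paths.

```python
# Rough scale comparison (all counts assumed): a game has ~16 ms per frame; an offline
# renderer happily spends minutes or hours because it traces huge numbers of light
# paths per pixel, with several bounces each.

width, height = 1920, 1080
pixels = width * height

game_budget_s = 1 / 60                     # ~16.7 ms per frame for a 60 fps game

samples_per_pixel = 512                    # typical-ish offline setting (assumed)
bounces_per_sample = 8                     # light bounces followed per sample (assumed)
rays = pixels * samples_per_pixel * bounces_per_sample

rays_per_second_budget = 50e6              # what a modest machine might trace (assumed)
print(f"{rays:,} rays for one offline frame")
print(f"~{rays / rays_per_second_budget / 60:.0f} minutes at 50M rays/s")
```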
|
amk76m | Heavy gravity chambers | Hey guys, So I know that humans can operate in micro gravity and indeed zero gravity in space etc. But does the reverse exist? I was thinking of when Goku trains in that heavy gravity chamber, where he cranks up the gravity to build resistance. I know that pilots, astronauts, racers etc can temporarily experience moments of heavy gravity (I presume this is what is meant by G's in terms of force), but is there a way for us to operate in a stable heavy gravity environment? Thanks in advance | Technology | explainlikeimfive | {
"a_id": [
"efmiv4v"
],
"text": [
"You can use centrifugal force to simulate gravity, and get up to several times the normal force of gravity. The problem is that to generate enough force you have to get something spinning pretty fast."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
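To quantify "pretty fast": centripetal acceleration is a = ω²r, so the spin rate needed for a given artificial gravity follows directly. The 10 m radius is an assumed example.

```python
# How fast a spinning habitat would have to turn to fake a given gravity:
# centripetal acceleration a = omega^2 * r.
import math

G = 9.81  # m/s^2

def rpm_for_gravity(gs, radius_m):
    omega = math.sqrt(gs * G / radius_m)      # rad/s
    return omega * 60 / (2 * math.pi)

# e.g. a 10 m radius centrifuge (radius assumed for illustration):
print(f"{rpm_for_gravity(1, 10):.1f} rpm for 1 g")    # ~9.5 rpm
print(f"{rpm_for_gravity(3, 10):.1f} rpm for 3 g")    # ~16.4 rpm
```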
amkmm5 | The video effect that happens when you miss/lose a packet in online streaming... | Have you ever had that moment when you are streaming a video, the frame doesn't change but gets masked onto the movements in the next frames? Like once scene fails to update to the next, but the movements from the next scene get applies to it? What is the name of this phenomenon? And which encoding methods are susceptible to this? | Technology | explainlikeimfive | {
"a_id": [
"efmmygf"
],
"text": [
"This is due to the use of video compression and key frames, which is basically all kinds of encoding methods which I am aware of. The general idea is that there are complete frames which are stored periodically (called \"key frames\") and then subsequent frames are derived from that complete frame by only storing what has changed from the previous one. This reduces the amount of information that needs to be stored or sent, but if a key frame is lost the video will be wrong until the next key frame."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
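A toy version of the keyframe-plus-delta scheme described above. Frames here are tiny lists of numbers rather than images, but losing a keyframe produces exactly the "stale picture with new movements pasted on" effect the question describes.

```python
# Most frames only store what changed since the previous frame; if a keyframe is lost,
# every delta until the next keyframe gets applied to the wrong base picture - the
# smearing/ghosting you see in a glitched stream.

def encode(frames, keyframe_every=4):
    packets = []
    for i, frame in enumerate(frames):
        if i % keyframe_every == 0:
            packets.append(("key", list(frame)))
        else:
            prev = frames[i - 1]
            diff = {j: v for j, (p, v) in enumerate(zip(prev, frame)) if p != v}
            packets.append(("delta", diff))
    return packets

def decode(packets):
    current, out = None, []
    for kind, data in packets:
        if kind == "key":
            current = list(data)
        elif current is not None:            # apply changes to whatever base we have
            for j, v in data.items():
                current[j] = v
        out.append(list(current) if current else None)
    return out

frames = [[0, 0, 0], [1, 0, 0], [1, 2, 0], [1, 2, 3], [9, 9, 9], [9, 9, 8]]
packets = encode(frames)
print(decode(packets))                        # reproduces the original frames
packets[4] = ("delta", {})                    # pretend the keyframe at index 4 was lost
print(decode(packets))                        # frames 4-5 stay wrong until the next keyframe
```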
amkpj3 | Why do wireless printers have so many more problems compared to other, seamless technologies? | Technology | explainlikeimfive | {
"a_id": [
"efmq20r",
"efmppcr"
],
"text": [
"Printers have always been problematic as a technology because of how they are sold. Most printers are sold right at cost or slightly above with printer companies trying to make profit on the ink, which is why ink is so expensive. Why this matters is that the companies try to keep costs to develop the devices very low and to use the cheapest parts possible to make a printer, which includes the wireless connection parts. Also, to keep the cost low the software drivers that allow you to print are developed quickly and cheaply and normally have bugs that can cause the problems. For most companies there is minimum desire to update the drivers if they work “most of the time”.",
"Stability of a wireless connection with a router set to factory channels combined with the different protocols that all try to connect from the printer with varying levels of stability and availability. Turn off all the other protocols and set a reserved IP in your router for the printer's MAC. Then change the channel of your router to a channel that's not used as much in your area."
],
"score": [
15,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
amksnn | Why is it difficult to make destructible environments in games? | I heard that it is difficult when making destructible environments in games but what makes it so difficult? | Technology | explainlikeimfive | {
"a_id": [
"efmo24j"
],
"text": [
"Once an item or items break into pieces each of those pieces needs its physics calculated independently. It’s not hard to make destructible environments it’s just hard to make them look good and perform well on a variety of machines."
],
"score": [
13
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
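A rough sketch of why shattering is costly: each fragment becomes its own physics body. Real engines prune collision tests with broad-phase structures, so this naive pairwise count is an upper bound, but it shows how quickly the work grows.

```python
# Every fragment must be moved each frame and tested for collisions; in a naive engine
# the pairwise collision checks grow roughly with the square of the body count.

def physics_work(bodies):
    integration = bodies                           # one position/velocity update per body
    collision_pairs = bodies * (bodies - 1) // 2   # naive all-pairs check
    return integration + collision_pairs

for walls, fragments_per_wall in [(50, 1), (50, 20), (50, 200)]:
    bodies = walls * fragments_per_wall
    print(f"{bodies:6d} bodies -> ~{physics_work(bodies):,} updates+checks per frame")
```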
amkxro | How video game developers make the “computer” players and how they make them different skill levels. | Technology | explainlikeimfive | {
"a_id": [
"efmso4m",
"efmsglb",
"efn8r7q",
"efmrf9l"
],
"text": [
"A video game is a simulation. You have a list of objects and you update those objects at an interval. Each object has multiple scripts attached to it and each script runs when the game updates. They basically look at what the game is like now and determine where it should be next. A simple example is a moving object. The script in charge of moving the object applies the objects speed and calculates the next position based on the last position & nbsp; For an AI you write scripts that make decisions based on the current list of objects and update objects accordingly. This can be simple like \"get all targets in range, then shoot the closest one\". Or you can make it more advanced like \"get all targets in range, prioritize based on level, then shoot the closest high priority target\". & nbsp; You can also define states for your AI script to be in; like \"searching\", \"idle\", \"aggressive\" etc. You define what criteria activates one of these states, and then write code to determine the behavior based on the state. & nbsp; As for difficulty you usually start with the easiest behavior you can program. In a shooter that means enemies have a ton of health, 100% accuracy and always know where you are. In a game like rocket league you just program the car to always point at the ball and drive forward. Then you just work on variations like setting a range that enemies can see you, adding randomness to their accuracy, making them keep a set distance from the player etc.",
"I'll reference this question to a game of chess, as I'm sure the majority knows what chess is. The \"computer\" player is a system which has been familiarized with the rules, mechanics, and basics. For example, the computer is programmed with mechanics such as how having more chess pieces is better, and how a queen is of more value than a pawn, etc. Afterwards, the computer *most likely* uses raw processing power to evaluate the position it's in and plans the next move accordingly based on how likely it is to win. It does this by predicting future moves and simulating how the game will play out. In a certain scenario, perhaps the computer can benefit from winning a pawn in the short term. However it fails to do so because that trade may cause it to lose a more valuable piece or positional advantage in another 5 moves. A computer planning 5 moves in advance seems pretty complex though, doesnt it? That's because it is. Our brains can perhaps predict the next couple moves or so, but Modern chess engines can evaluate over 50 million moves, per SECOND. With all those computations available at will, it's not impossible to think that most modern \"computer\" players are now outperforming actual humans. The level of difficulty all varies on what initial commands the computer was given when being told the rules and such. On a lower difficulty, a computer engine may value stronger pieces less, and may thereby make weaker plays. Perhaps it decides to place its pieces in a less secure or strong area. Generally, we know controlling the middle area in chess is more beneficial than controlling a corner area. Perhaps a lower difficulty chess engine may not value defending, or being aggressive when it would be beneficial to do to. All these variables are critical in evaluating difficulty. Earlier, I said most modern chess engines most likely rely on raw computational power. This is not always the case, as newer technology has resulted in computer engines performing less calculations per second, but learning from its past mistakes. These engines are simply programming with the idea that winning = good, and builds off itself. It does this by playing itself millions of times, learning off every defeat and using that knowledge in future games. Newer chess engines using this programming have been able to outperform computational engines, despite performing far less calculations per second.",
"The basic idea behind a video game is a loop. A sequence of logic is executed repeatedly in short succession to analyze the state of the game and perform actions accordingly. A simple game loop might look something like this: Start: * check for end conditions (for example, player died or decided to quit), exit the loop if end condition is met * check for input from the player (keyboard/mouse/controller/etc), adjust the game state accordingly * carry out the next logical step according to game state (for example, if an object is moving in a certain direction within the game world, then move it by a specified amount) * for computer controlled entities, run through the logic that approximates intelligent behavior (for example, in a Pac-Man game this amounts to checking player position in respect to the ghost and moving the ghost in Pac-Man's direction), of course for more complex games like RTS/strategy, this logic can get very intricate * check if it's time to render the next frame (game loop usually runs much faster than the desired framerate so time control is important) Loop back to Start",
"I've done some AI previously for a Chess game, albeit at a very basic level. I can help out a bit until someone better comes along. For this chess game I had a player playing against an AI; there were 2 versions of this AI that could be loaded. 1. A random AI that chose any move out of the entire valid movepool (legal moves) but that's not interesting or what you want. Still, this is the most basic level of AI. 2. The other AI was an aggressive AI, it would seek to capture an enemy's piece when possible and also take the highest point piece available to it. If I wanted to improve the aggressive AI I could add on some things like checking if that piece would be available to be captured after capturing the enemy piece. there are many other situations that I could look for that might increase (or decrease) the effectiveness of the move. Thus for the highest level (difficulty) of AI I would employ all of these strategies but for the mid-tier difficulties I would employ only some of these strategies. Hope that helps."
],
"score": [
24,
6,
3,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
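A minimal sketch of the recipe in the first answer: pick the closest target in range, then let a difficulty setting control how far the enemy can see and how accurate it is. All names and tuning numbers here are invented for illustration.

```python
# Toy enemy AI: target selection plus difficulty-scaled sight range and accuracy.
import math
import random

DIFFICULTY = {            # hypothetical tuning table
    "easy":   {"sight": 10.0, "accuracy": 0.4},
    "normal": {"sight": 20.0, "accuracy": 0.7},
    "hard":   {"sight": 40.0, "accuracy": 0.95},
}

def closest_target_in_range(enemy_pos, targets, sight):
    in_range = [t for t in targets if math.dist(enemy_pos, t) <= sight]
    return min(in_range, key=lambda t: math.dist(enemy_pos, t), default=None)

def enemy_turn(enemy_pos, targets, level):
    cfg = DIFFICULTY[level]
    target = closest_target_in_range(enemy_pos, targets, cfg["sight"])
    if target is None:
        return "wander"                                    # nothing seen: idle state
    hit = random.random() < cfg["accuracy"]                # difficulty-scaled accuracy
    return f"shoot at {target}: {'hit' if hit else 'miss'}"

print(enemy_turn((0, 0), [(5, 5), (30, 1)], "easy"))   # only the near target is visible
print(enemy_turn((0, 0), [(5, 5), (30, 1)], "hard"))   # sees both, rarely misses
```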
|
amls6h | How did they resize photos before computers | Before computers, how did they resize photos to be bigger than they originally were? Like taking a small negative from a camera and enlarging it to be almost poster size. Or taking a drawing and resizing it to fit a book cover or album cover. | Technology | explainlikeimfive | {
"a_id": [
"efmwb1n",
"efn5ucs",
"efnehqj",
"efn5ox0",
"efnb2oq",
"efn6cf7",
"efn7qqf",
"efnaz25",
"efn7sgi",
"efn65nt",
"efn7xj8",
"efn8qzc"
],
"text": [
"A photo enlarger beamed a high powered light through a negative or slide onto photo paper. Raising or lowering the enlarger changed to size of the photo protection, just like a shadow is bigger when closer to the light source.",
"Have you ever seen an old school movie projector? That is pretty much the same setup used to make prints from negatives (with the paper replacing the screen). You would zoom, crop etc. just by adjusting the size and position of the image projected onto the paper.",
"Former 35mm darkroom user here. We used projectors in order to create prints. The negative was slid into what is basically a slide projector but instead of projecting against a wall, it projects down onto a table. To make an image bigger or smaller you would raise or lower the projector, and re-focus it at the new height so it was sharp. You would put the unexposed photographic paper underneath, turn the projector on and time how long it was on depending on the exposure needed, then turn it off and develop the paper. In developing studios this is an automated process but let me tell you how people developed their own film. You start with undeveloped negatives. You go into a dark room, put the roll of film into a special bag along with a developing wheel, which was a ratcheting plastic wheel that could hold the film in a loop, but spread apart so that the developer chemicals could easily reach every piece of it. The film is so sensitive to light that you can't just open it up inside even a dark room, hence the bag which is light proof. You opened the roll of film and fed it into the wheel by touch alone. Once you had it on the wheel, you could put it into a little thermos looking container which was light proof, and had an opening at the top for pouring in chemicals. You'd usually do 1 or 2 rolls at a time. You could then go out into the light as it's light proof. You use a developer like D76, if you shot black and white tmaxx film like I did. The developer and other chemicals came as a powder that you had to cook on the stove like making jello. Depending on the speed of the film, and the temperature, you would immerse it in developer for 5 to 10 minutes. Then you emptied it out, and rinsed with water, then you added stop bath, which stops the development process. At that point you had images already, but any light would still have ruined it. So you finished with a fixer chemical, which prevented further development of the image. At that point it was safe to take out of the container and squeegee dry (after more washing). You would then cut the film up into strips, usually 5 images long. And store them in a plastic sheet in a binder, like baseball cards. You would then take this sheet into the dark room, grab an 8x10 sheet of photographic paper, lay the contact sheet literally on top of it (not projecting) And expose it to the light of the overhead projector for about a minute. You would then develop this photo in a similar way to the negatives, except the paper is lain in a developer tray which has the chemical. It's possible to see it develop in real time and while you were \"supposed\" to time it, we would just eyeball it, and when it looked developed enough we would take it out and rinse then put it in the stop bath. If it wasn't exposed enough, you could actually put it back in the developer longer since it hadn't been \"fixed\" yet. Once satisfied, you'd put it in the fixer, and you could then turn the lights on. You'd use a magnifying loop made of clear plastic to look at the miniature negative images (now positive) on the print you just made, and select which photographs looked the best, and that you wanted to now develop. The development process and enlarging, cropping, exposing, etc is as follows. You'd shut the lights back off. Dark rooms have red light that is very dim, enough to see by, but not to expose the film. You'd keep the unexposed paper in a light proof locker. 
To set up the enlargement you'd take out a sheet of paper and turn it upside down so the light sensitive side wasn't exposed, and you'd feed the negative into the projector and move it back and forth until you had the image you wanted. To make the image larger or smaller, and to crop it so it looked nice and was in a nice place on the photo you raise or lower the projector. Just like moving an LCd projector closer or farther from a wall, it makes the image bigger or smaller. You'd then focus the projector sharply onto the paper by moving a knob similar to a microscope. Once it was sharply focused, you would finally be ready to create the print. The light also had an aperture to make it brighter or dimmer. This could be used to make a print \"fast\" or to slow it down so it took longer to expose it, giving you more time to do dodging and burning (explained below). You'd shut off the projector light, turn the paper over and re-center it using the grid/ruler on the projection surface so it was in the same place. You'd set a timer for how long the projector would stay on, anywhere from 20-60 seconds for most shots depending on the speed of the print and how over or under exposed the negative was. Then you could develop the print, and look at it under a magnifying loop to make sure it was fully in focus, wasn't too grainy, etc. and then you were done! You've heard of photoshop \"dodge and burn\" right? We used to literally dodge or burn the projected image in order to fix minor exposure problems on the negative. Say there was an area of the image that was too dark. You could hold a small piece of paper, unfocused close to the lens, so it cast a shadow over the image, and move it around over the 60 seconds it was projecting light, so not to leave a sharp shadow. You literally prevented some of the light from striking that area, making it darker/brighter (reversed remember). Burning was the opposite, you could make a part of the image that was too bright, darker. You'd cut a hole in a piece of paper and use it to project the light on just one small part of the paper, moving it around so it wasn't a sharp edge. You'd usually expose just that part for 10-15 seconds depending on how much burn you needed. You would then shut off the projector, and expose the entire paper for the normal amount of time. These helped you make a balanced image that wasn't too bright or dark. If you needed to adjust the contrast and make the whites whiter, and the darks darker, you would use a contrast filter in between the projector light, and the negative. They were basically red gel filters. The more \"red\" it was, the higher the contrast on the developed photo. You could use it to make very bland, greyish photos look more striking with whither whites and blacker blacks. After developing film or paper, you'd put it in a special dryer in the dark room that would gently dry it without making it curl. We usually didn't \"hang photos\" like you see in the movies as you don't want chemicals dripping everywhere. We had a professional dark room as we were doing photography for a small news paper. So everything was purpose built.",
"I remember this huge projector thingy in middle school where you could put a map or picture in the device and it would get projected on the wall much bigger. We mostly used it for making posters. And yes, I also remember mimeograph machines.",
"> Before computers, how did they resize photos to be bigger than they originally were? Like taking a small negative from a camera and enlarging it to be almost poster size. The key to quality was using high resolution source material. The earliest photographs were not enlarged, they were contact prints from very large glass negatives like 8x10 inches. They could be extremely sharp because since they weren't being enlarged, the film grain was basically invisible. To make photography more compact and pocketable, films had to be made smaller, and then optically enlarged for the print. This was OK as long as the film grains were fine enough that they weren't objectionably visible after enlargement. Back when a size like 4x5 was once the standard, professionals laughed at 35mm because it was so small (too grainy when enlarged), but as 35mm film was improved, it became acceptable for professional work. And now 35mm is the sensor size we call \"full frame,\" now professionals laugh at the sensor sizes below that (which are improving rapidly and are often good enough for many pro jobs now, more than 35mm film ever was). > Or taking a drawing and resizing it to fit a book cover or album cover. Again, while such art could be photographically enlarged, that was not considered best practice because you would lose quality and sharpness. If you were going to do a 9-inch book cover or 12-inch LP album cover, no way would you draw or paint it at that size. You would draw or paint at a bigger size, more like the size of a painting you would see in a gallery, and then it would be photographically reduced to the size of the book or album. That would enhance the sharpness of the artwork.",
"Processed film becomes a series of “negatives”, a tiny strip of translucent material that is not very useful for us to look at. To make a print, or a photograph you can actually hold in your hand, you basically have to project the negative image onto special paper that can capture what is projected on it. It is like a movie in a theater being projected on a screen when you go to the cinema. When you are projecting the image on the paper you can move the paper around so you can choose what part of the image is actually on the paper. If you move the paper closer, the image will look bigger, but there will also be part of the image that falls off the sides of the paper so they won’t be included in the final printed image. For instance, I could take a photograph of a man and a woman are standing next to each other. When I am making a print of that photo, I could move the paper closer so that only the woman’s face is on the paper, so I see a close up view of the woman’s face but I will no longer be able to see the man at all, because that part of the image is outside of the paper.",
"To make a photo, you shine light through a negative into photo papers. The further you move the light away, the bigger the image gets. Like when you shine a torch at a wall and take a step back, the circle gets bigger",
"Line drawings resizing uses a pantograph which you traced over the lines and enlarged or shrunk the drawing. The pantograph looks like two compasses laid ontop each other.",
"Since you're born in the 2014, you wouldn't know what an old-school overhead projector would be like, perhaps not even a modern digital projector. But you seem to know about negative film. Simplest explanation on how to enlarge that is to by holding that film under sunlight very close to the ground. You'll see the photo projected onto the ground. Hold it higher from the ground, it will project bigger. Now if you replace the ground with a photosensitive paper (or silver halide plate back in the 1800s and early 1900s), you will have the image burned/printed onto the paper. But the larger the image, the blurrier it looks because it's out of focus. You'll now need lenses to refocus the image on the paper as sharp as you can. Okay now replace the sun with a lamp so that you can do it day or night. Since other lights will also leave unwanted marks on the photosensitive paper, you will have to do it in a dark room with just a dimly lit red light just a little bright enough for you not to bump into things.",
"You know those projectors in school? When you move the projector farther away from the screen, the picture gets bigger. It works the same when resizing film photos.",
"Take the photo, make a negative out of it. Now, use it in a projector-like device and shine it to a new film. The distance and lenses will allow to make it of any size you want. Want it as big as in a theater? Put a big film instead of the projection screen and shine it there!",
"The very ELI5 version is kind of like this. Imagine you have a projector to watch TV. The farther you are from your wall, the bigger the image. & #x200B; That's how enlargers worked. The film would be projected onto a plate which could be moved further or closer to the original negative. On the plate, you would put something that could be exposed to the image."
],
"score": [
5491,
86,
47,
38,
20,
10,
7,
5,
5,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
ammeek | How can a password not containing numbers or special characters be considered weak? | If I can pick ANY combination of characters for my password, isn’t a combination containing only letters equally hard to guess as one that doesn’t? | Technology | explainlikeimfive | {
"a_id": [
"efn14zb",
"efn18j5",
"efn1wno"
],
"text": [
"Unless you're using a randomly generated chain of letters, most password cracking software have functions built in to use the most commonly used words in passwords. If you set a limit to the length of the password (say 20 characters), you go from very predictable, and easily brute-forcible using numbers, to somewhat predictable, and not impossible to brute force, using the 26 letters of the alphabet, to impossible (unless you have lots of time and a supercomputer) if you combined all 3.",
"In theory an attacker who doesn't know you didn't use the numbers and special characters would need to consider that they might have been used, slowing their attempts on your password. But consider that an attacker might simply only decide to go for the easy targets and just try normal characters, leaving out numbers and special characters. That wouldn't ever get into accounts of the people who included them but it would happen to break yours since you didn't. By expanding the possible characters to include the extra numbers and special symbols it presents a task too great for an attacker to solve, and to ensure it actually *is* too hard to solve they force you to include them in your password. Otherwise the attackers just solve the problems they can and the simple passwords get broken.",
"My point is: why is a password like ‘gsnnssijcbdbhduehvedbvpqqqqq’ considered weak by most services while something like ‘dopeHe4d1999’ is considered strong?"
],
"score": [
4,
4,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
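The brute-force search-space arithmetic behind this thread, using the two example passwords from the follow-up comment.

```python
# Guess counts assume the attacker knows the length and character set but nothing else,
# so real dictionary attacks against common words do far better than this worst case.
import math

def combinations(length, charset_size):
    return charset_size ** length

examples = [
    ("gsnnssijcbdbhduehvedbvpqqqqq", 26),   # lowercase only, from the follow-up comment
    ("dopeHe4d1999", 62),                   # upper + lower + digits
]

for pw, charset in examples:
    n = combinations(len(pw), charset)
    print(f"{pw:30s} ~10^{math.log10(n):.0f} guesses")
```

Against a pure brute-force search the long lowercase password wins by a huge margin; strength meters that flag it as weak are applying crude composition rules (and guarding against dictionary attacks on real words) rather than measuring the raw search space.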
amnvfy | How water damages electronics | Technology | explainlikeimfive | {
"a_id": [
"efnc274",
"efnc6id"
],
"text": [
"Most electronics are delicate circuits which rely on specific amounts of electricity running between certain components. Water allows electricity to bridge gaps and flow into areas it isn't designed to go, overloading and breaking components of the circuit. It can also cause corrosion over a longer period of time which can prevent connections from being made or even sever connections.",
"Let's say we're building bridges for toy cars out of foam board. I have 4 bridges built and each can only support 4 cars before falling down. Now, let's say I connect all of these bridges in the middle with another bridge and load it with cars. All of the bridges will fall. The 4 original bridges are your circuits that conduct electricity. The added bridge in the middle is the water that can also conduct electricity. This is why if the device in question is powered off when it gets wet and you put it in rice to dry it out(or soak up the moisture) it might be okay since no cars had a chance to get on that 5th bridge. Edit: a word"
],
"score": [
9,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
amoumi | Online Refunds | ELI5: How can online stores give refunds to debit/credit cards, when (in the EU anyway) they are not allowed to retain card details. As far as I knew, online stores must use a third party encrypted card handling service, that stores your card details, takes the payment and gives it to the online store you're paying. Like, in a physical store, you present your card at the time of a refund, so they have your card and can physically give the money back into your account. How does this work in an online situation, when there is no card? | Technology | explainlikeimfive | {
"a_id": [
"efnkgpo"
],
"text": [
"The same processing entity that handled the initial card transaction is the one that issues the refund. The merchant (online store) doesn't need the card details - they can initiate a refund through the payment processor with a transaction ID or other unique identifiers to the purchase you made."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
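A hypothetical sketch of the flow described above: the shop keeps only an opaque transaction id and asks the processor to refund against it. The class and method names are invented for illustration and do not correspond to any real payment provider's API.

```python
# The merchant never stores card details, only the transaction id the gateway returned;
# a refund is just a request referencing that id, and the gateway looks up the card itself.

class FakeGateway:
    """Stands in for the card-handling service; it alone knows the card details."""
    def __init__(self):
        self._transactions = {}
        self._next_id = 1

    def charge(self, card_number, amount):
        tx_id = f"tx_{self._next_id:06d}"
        self._next_id += 1
        self._transactions[tx_id] = {"card": card_number, "amount": amount}
        return tx_id                        # the shop only ever sees this id

    def refund(self, tx_id, amount):
        tx = self._transactions[tx_id]      # gateway resolves the card on its side
        assert amount <= tx["amount"]
        return f"refunded {amount} to card ending {tx['card'][-4:]}"

gateway = FakeGateway()
order_tx = gateway.charge("4111111111111111", 49.99)   # stored with the order, no card data
print(gateway.refund(order_tx, 49.99))
```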
amqbw9 | Why are plane windows so small? | Why are they all (roughly) the same size? Is there no way to make them larger, so the passengers would have a better view? I can imagine it being less efficient energy-wise but that doesn’t quite explain why I’ve never seen even one plane with slightly larger windows then all the others. | Technology | explainlikeimfive | {
"a_id": [
"efnsebm",
"efnuzzq",
"efohixb"
],
"text": [
"It's less a matter of energy and more a matter of structural integrity. Aircraft have to deal with very uneven pressures: as you go up it drops quite a bit outside, but for people to not-die you have to keep it high enough inside. Every one of those windows is a hole in the hull - sure it's sealed, but that's still a point where there's two different separate materials that are being forced all of the time - that means stress and fatigue around every single one. The seals and separations also affect the airflow - at low speeds like in a car this doesn't do much more than maybe affect how quick you do 0-60 compared to the sports model, but for aircraft that's potentially going to affect range and fuel efficiency. Or, at the very least, cost: It may be possible to make an entirely transparent hull, but with the appropriate strength and properties for a commercial jet? There's also the issue of what happens if one fails: A nice big window isn't just a bigger weak-spot, if it breaks, that affects how quickly cabin pressure drops - which makes the difference between \"we can compensate for this\", \"we have time to drop down a few thousand feet so folks won't be in danger\" and \"They're all dead, everybody's dead, everybody is dead, Dave\" There'd be no windows at all if designers were allowed to get away with it.",
"787 windows are larger than most 19\" high, compared to average of 11\" or 12\". Concorde, since it flew much higher, had tiny windows, only 6\". But there isn't much you can do for width. Aircraft is a thin skin (1mm - 2mm thick) Supported on a frame. The windows can't be wider than the spaces between the frames.",
"An airplane fuselage is a stressed-skin structure: i.e., the skin itself is a major load-bearing member. A window is a hole in that structure: the plastic can't contribute much to the strength. So, the designer has to surround the hole with a frame that will transfer the loads from top to bottom and from left to right -- and that frame is invariably heavier than the expanse of skin it's substituting for. In principle you could make the window bigger, but you'd have to make the frame *much* heavier -- and every kilogram of structure you add is a kilogram of payload you can't carry! Also, while the plastic doesn't carry structural loads, it does carry the load imposed by pressurization -- so the bigger the window, the thicker the plastic has to be. Weight again."
],
"score": [
70,
10,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
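To put a number on the pressurisation load mentioned in the last answer: the outward force on a window is just pressure difference times area, so it grows with the square of the window's linear size. The pressures and window sizes below are assumed, typical-order values.

```python
# Rough numbers (assumed) for the load a cabin window and its frame must carry.

delta_p = 55_000          # Pa, roughly cabin (~75 kPa) minus outside air at cruise (~20 kPa)

def window_force(width_m, height_m):
    return delta_p * width_m * height_m   # newtons

small = window_force(0.28, 0.40)          # roughly a typical cabin window (assumed)
big   = window_force(0.56, 0.80)          # same shape, twice the linear size
print(f"small window: ~{small/1000:.1f} kN (~{small/9.81:.0f} kg-force)")
print(f"double-size : ~{big/1000:.1f} kN (~{big/9.81:.0f} kg-force)")
```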
amsabq | What is Conway's Game of Life, and how is it linked to the Hacker community? | I don't really understand the point of this "game," or why the Glider shape is a popular tattoo design for hackers. & #x200B; Is this actually a game, or is it just a script that you're supposed to observe the results of? | Technology | explainlikeimfive | {
"a_id": [
"efoaary"
],
"text": [
"It is an example of a complex system that derived from very simple rules. You are right, it is more of a simulation rather than a game as such."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
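The whole "game" really is just two rules applied over and over, which a few lines of code can show; the starting pattern below is the glider from the question.

```python
# Conway's Game of Life: a live cell survives with 2 or 3 live neighbours, and a dead
# cell becomes alive with exactly 3. You set up a pattern and watch what emerges.
from collections import Counter

def step(live):
    counts = Counter((x + dx, y + dy) for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for generation in range(4):
    cells = step(cells)
print(sorted(cells))   # the same glider shape, shifted one cell diagonally
```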
amsca9 | If music CDs are burned with the opening tracks towards the center, how are open-world video game discs burned? | Like Fallout 4, for example. You can walk anywhere, do anything, talk to anyone, at any time. How is that all rendered onto the disc when it’s not exactly chronological? **EDIT**: thank you everybody! I appreciate you all taking the time and I learned a lot from all of your answers! I read through each and every one. | Technology | explainlikeimfive | {
"a_id": [
"efo8ykz",
"efpby96",
"efoandh",
"efogqfi",
"efo7k9u",
"efo96t4",
"efo85m4",
"efo92d5",
"efp85fc",
"efpgc81",
"efpds4w",
"efpb8tf",
"efpcglr",
"efpejpq",
"efos973",
"efplg03",
"efpikje",
"efokeqj"
],
"text": [
"Ignoring the hard drive element of modern games, for games that *did* run entirely from the disc, the non chronological nature of their storage wouldn't have been a problem. Unlike a music CD, it wasn't necessary to read the disc from start to finish in one go. (Indeed, even with a music CD, you can use track select to change the order in which it is read.) When a game needed to load more data from the disc, it would know where on the disc that is located, and start reading from that point. There would still be some care taken with data placement - the time taken to find the correct location on disc (seek time) is a factor in the overall loading speed, especially if the system is constantly jumping around for fragmented data.",
"Just a reminder that Short answers Jokes And anecdotes Are not allowed at top level Top level comments (i.e. comments that are direct replies to the main thread) are reserved for explanations to the OP or follow up on topic questions. While allowed elsewhere in the thread, Jokes anecdotes and short answers may not exist at the top level. Thanks ^also ^fuck ^maroon ^5 ^for ^not ^playing ^sweet ^victory",
"Let's ignore for a moment whether or not the game is actually stored on the disc these days, as several people have mentioned. A few years ago, the game would have been stored on the disc, so your question still stands. A game works rather differently from a movie. A movie would be recorded on a disc in the order you'd play it back. The images you see on the screen for a movie are actually stored on the disc. For a game, you can't do that (usually), because the player can make choices, and there would be too many images. The way it's done instead is that the disc contains a huge number of files... much like a computer's hard drive. There are several different types of files on there that do different things. As an example, let's look at a horse you might see in a game: * There might be a file that describes what a horse looks like (as if it were a statue). * Another file might describe how the limbs of a horse move as it gallops. * Another file might contain the sound a horse makes when it is running. * Yet another file might describe how a horse behaves when various things happen to it (for instance: \"if a player hits you on the rump, play the 'whinny' sound and gallop away from the player for 1 minute.\"). * Then there might be another file that describes an area the player can walk through (\"there is a brown horse at this spot and it wanders around at random in this area until a player does something to it.\"). Finally, there is a main program to pull all these different types of files together. It follows a set of rules the game designers have written into its files to show you, for example, a horse wandering around grazing in a field and which whinnies and runs away if you slap it.",
"No data is stored in \"order\" for games on disc. The open world's you are used to seeing are divided into separate sections of data. That's why you get a loading screen sometimes. The computer has a certain amount of memory that that section of open world gets loaded to. Once you go to a different section it now loads that new section. Those sections do not have to be in order, but they do have to have a file name that the computer knows to load for each section. Common things like certain objects \"you and your weapons\" are always loaded and in memory as well as the game \"program\" but not necessarily all places you can visit. All the data is written on the discs as bumps and pits, representing the 0's and 1's that make up the files. Playing a game that was downloaded instead of on disc is the same process, but much faster so things load quicker. This can be improved further by installing the game onto an ssd, making load times even shorter. This is also why adding ram to a computer can make things faster, the computer doesn't have to keep going back to the disc or disk to get the data to load the next section. You don't have to know where physically your picture you took on your cell phone is located, knowing where each file of the game is located physically is the same premise.",
"Well the game is basically downloaded to the console. When playing on PC you don't need a disk to play after it's installed and it often times uses the same storage ammount as a console does. I think on console the disc just gives you permission to play the game since it's downloaded. I don't know how they work but that's my best guess.",
"Game CDs are data discs. There’s a table that indexes the locations (track and “sector”). It’ Basically a hard disk that you can’t write to. When a new section of the game is entered, an index (that’s on the CD is consulted), the CDs reading head moves to the appropriate place and reads the target information. This is termed random-access, although the term is largely accurate, it’ a little bit of a misnomer. Music CDs have a different index structure that is more conducive to streaming. This would be termed sequential access. I am using the term random and sequential a little bit loosely here. The terms are being used as descriptive, not for strictly computer science definitions(this is ELI5 after all). The decision as to what kind of disk a CD is going to be happens at format time. Formatting a disk lays down the digital infrastructure to then be used during it’s expected purpose.",
"Games don’t run off of disks these days so much. They’re installed to a hard drive. The files are “addressed” so to speak and the operating system knows where to find them. A music disc is a much simpler thing. Apples and oranges really. Both are round. Both are fruit.",
"Imagine a music cd is like an old vinyl album. As the disk spins, a laser reads the grooves and converts it to sounds. Imagine “unwinding” the disc and now you have a long, linear string. It’s like sheet music that you read along in one long line. As the disc spins, you just keep progressing along the sheet music. For open world games (and really any software), the contents of the disc get “installed” to the hard drive and computers memory. The computer reads the long line of sheet music, but instead of keeping it in one long line, it sorts it into “pages”. When you want to access a certain area of the map, it looks up that “page” and plays the content. I hope this helps; it’s a very basic explanation and I’m sure I’m missing something, so hopefully a computer or software engineer can chime in.",
"The lens that reads data doesn't have to read from start to finish. It can jump around the disc.",
"The rendering happens on your computer, not the disc. It can access the data randomly to load and render images.",
"Playing a CD (or tape) is like when I read a bedtime story for you. I take the book and read it loud, so you know what is written in there. Playing a game is like when grandpa tells his stories. It all happened some time in the past, but it is all in his memory, and he can tell you what ever you like, whenever you ask him. /edit: word",
"I haven’t read every comment but none of the current top comments contain a crucial piece of information. Especially in the CD/DVD days, data that wasn’t copied to RAM but necessary to load dynamically were actually stored on the outer portion of the disc. The reason being that when you’re spinning a disc, the outer portions have a higher speed than the inner portions (look at a ceiling fan, it’s a lot easier to track the inside vs the outside). Because it’s spinning faster, the bits move by the laser faster and the computer can thus read the data faster. So any data that you wanted to be able to read quickly, you’ll want to have towards the outside of the disc.",
"This is an ELI5 challenge, because no simple answer is correct. Imagine the whole game using a thousand mail boxes for the content. If loading a single animal in the open world game required visiting 500 mailboxes, it would be too slow. If all of that information was in just one mailbox, everything needed would be available in one trip to it. Game developers spend a lot of time trying to put the stuff that needs to come off the disc and into the game in ways that load fastest. A disc reader has a laser on an arm that has to move around. That is the slow part. By putting all of the stuff needed in one place on the disc, the arm moves less and so reads faster. (source: i have been a game developer since the 1990s)",
"There are programmer tales of **ye olden days** when you would write your program so that the branches in logic would be written so that they would fall on the same linear track of the disk. This is old graybeard magic that meant that you not only needed to know the software side but how to work the physical hardware side of the job. I'll see if I can find it but one story in particular was a programmer who was tasked with making a blackjack game for demos at trade shows to show off how versatile the systems were. A salesman wanted to have a hidden switch that when thrown would guarantee a win (sales tactics get the endorphins flowing from a win). Programmer sees that as an offensive thing for him to have to write so instead forces the program to cheat the player to loose instead. I'll see if I can find the article and post it.",
"Musical tracks are laid out like a record. There is a table of contents track near the hub that tells a music or movie player how many songs there are on the disk and where they are physically. DVD and music players just read data, convert to audio/video and spit it out of the speakers and TV. No memory needed so no buffer and if there’s a problem with the disk, you get stuttering/pixelation/weirdness in real time as the laser gets interrupted. Not too different from data disks. Same table of contents but a lot of different physical points on that disk to find whatever the computer is looking for. And a console/computer has memory to store it and let the CPU/GPU work with it. Video game worlds are files. Maybe one or several files that describes what’s in the world with math, another bunch of files that are pictures to put ontop of those shapes in that world and others for sounds etc. Those files aren’t that big and can be loaded into memory to run. Move into another world/building/room/realm and your computer/console forgets the old and goes and loads a bunch of other files into its memory.",
"Other posters mention optimizing disc layout but I can speak firsthand to some of the lengths we went to in days of yore to minimize load times on optical media. In PS2 days I spent the better part of a month writing a layout optimization tool whose goals were the following, in order of importance: 1. Position level data on the region of the disc where linear velocity was greatest under the read head without being so high that re-reads of the data were common due to read errors. PS2 spun its media at constant rotational velocity so this was a thing. (Most DVD players varied rotational velocity to keep linear velocity under the head constant.) Testing showed that bandwidth was highest near but not too near the outer edge. 2. Position streaming data (anything we loaded from disc during gameplay, generally streaming audio in our case) close enough that it could be read with a small move/refocus of the laser as opposed to a full move of the read head. 3. Replicate the same music tracks/voiceovers in multiple locations on the disc if goal 2 was difficult to achieve without. 4. Bake absolute positions on disc into the executable code so that no filename lookups or system queries were necessary. Everything boiled down to \"Read N bytes from sector X, offset Y.\" And yeah, I get nostalgic for this shit.",
"As others have said, game data is read non-sequentially \\_for the most part!\\_. I do want to add that seeking makes a \\_huge\\_ difference in data throughput. Seeking is what happens when the disk reader needs to look futher down/up the disk for the next piece of data, similar to skipping to the next track on an audio CD: the little reading head has to move and/or has to wait for the right area of the disk to spin around so that the next piece of data travels under the reading head of the player. This is why a lot of care is put into \"designing\" the data layout of AAA games (esp. console games where everything can be tuned very tightly). Things that might be loaded together are put next to each other on disk, so that the data can be read in \"one swoop\" without the disk reader moving around or having to wait for the next rotation of the disk. This has become less and less doable with modern hardware and thicker abstractions, I'm not sure if anyone still bothers. Game data would also often duplicated, so that it might take a bit more disk space, it will be packaged with the data that needs it. This is also true for HDDs, which still work with spinning disks and reading heads. (It's even true for memory, due to cachlines etc - but that is a different story altogether) & #x200B; For extra credit: the outside of the disk spins faster than the inside; stuff on the outside of the disk is read faster given a constant spinning speed!",
"The piece of technology you are missing is a file system. In audio discs where music is linear, we indeed don't really need a file system, or at least only a rudimentary one that tells you where, on the disc, each music starts and ends. Just so that you can implement next/prev buttons. But a more interesting file system will have a hierarchy of descriptors, folders and files. The root of the hierarchy is always at the beginning of the disk, so that's where your computer starts from. That root is a descriptor of the total size of the file system, and of the start, end and names of everything right below in the hierarchy (typically, all the folders and file directly at the root). The idea is that each folder has a descriptor that tells you where to find the stuff it contains, one you hit a file, you get a descriptor of all the chunks of data that you have to read in order to read the file. Typically, we try to put those chunks one after an other so that we can read the disk as linearly as possible. That's what is improved by doing defragmentation. What that means is that a computer only have to read the descriptor tree to know where is what. It can keep that in memory (usually, it will remember all recently open folders and files) and when the game wants to access any file, it goes look for the chunks. Note that historically chunks where 512 Bytes, nowadays 4KB is classic for USB drives, and 16KB to 128KB for terabyte size disks and SSDs. Added to that, most modern filesystems can have extended size chunks of 2MB and even huge one of 1GB each. As there is no need to split huge files into so many pieces. Note that SSD allow for more or less random access anywhere on the disk, and defragmentation is no longer an issue. Typically, SSD like to fragment things to do wear leveling (but that's a story for an other time)."
],
"score": [
1246,
136,
93,
15,
8,
6,
4,
4,
3,
3,
3,
3,
3,
3,
3,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
amscb4 | how do video games know they are a pirated copy | Some games have anti-piracy measures, for example *Serious Sam* will summon an invincible pink monster on pirated versions or *Spyro* will delete savegames. How do games know that, assuming the pirated versions are bit-by-bit copies? | Technology | explainlikeimfive | {
"a_id": [
"efo72uu",
"efofkf5"
],
"text": [
"From /u/rsb_david \"The below items are three ways in which games/studios can and do track if a game copy is pirated or not. 1. A studio will release a pirated copy on common locations and have modified code to either send a packet to a remote server or just screw with the person who obtained the copy illegally. 2. The game will require online connectivity and have multiple layers to check for pirating. For example, a game may do the first check upon loading your computer and a parent application like Steam or UPlay. A second check might be when you start the actual game. A third check may occur every 3200 - 4450 game cycles while the game is running. A fourth check will be a static period in which the client has to make contact with the outside network. If at any point the game can not validate your authorization, it may notify the studio, corrupt your save files, or corrupt the game. Each time the game update, a list of remote servers changes as well so you can't just block the IP in your firewall. Some studios are talking with social media sites to route through their network so that you will lose access to something like Facebook if you try to block a remote verification server for your pirated game. 3. Installation unique identifier tracking. A company may generate a custom install key per user which is then embedded into the compiled application in a way which is not obvious. The studio might randomly download pirated copies and pull the key to trace the original source. __ Essentially, check the terms of service/EULA/privacy policies for how your data is tracked and how a company may track copyright offenders.\"",
"I don't know all the tricks, but a couple are... 1. Printed CDs that contain intentional errors. If the error is there, the disk is considered legit. If it is not (i.e., fixed when the disk is copied) then it is not legit. 2. Special instructions written into the empty parts of the 'table of contents' on the disk. You can kind of imagine it as having something on the jacket of a book. You can copy the whole contents of the book, but if you don't copy the book jacket as well it is assumed to be stolen."
],
"score": [
8,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
amspk6 | How were album covers printed in the 70s and 80s? | Technology | explainlikeimfive | {
"a_id": [
"efoalfw"
],
"text": [
"Art is painted. Art is then photographed. Enlarger is used to rephotograph the picture at the correct size. Printing plate(s) is made from the negative. Offset press is used to print the artwork onto cardboard. Cardboard is folded and glued to become the album cover."
],
"score": [
12
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
amtg9t | VGA vs. HDMI vs. DisplayPort | Like, why do people think that DisplayPort will become the standard for PC, instead of the "old" HDMI? | Technology | explainlikeimfive | {
"a_id": [
"efoftu5"
],
"text": [
"It’s mostly just gamers who say this. In reality both HDMI and DisplayPort are quite good, and the differences in trade offs can be small in many cases, not relevant in others, and niche for a special case. For gamers, DisplayPort can support multiple monitors from a single source output as well as supporting G-sync and Free-sync for monitors. This is it’s big pluses. The multi monitor support is also good for business users and for devices which are constrained on space (like a laptop) as only one output is needed... however as most consumer electronics use hdmi, you may still need a converter for many business uses, so in this case DisplayPort is just meh. Neither are going anywhere soon and both HDMI and DisplayPort are improving with new updates regularly and neither one seems to be winning, except that HDMI controls the consumer electronics market, and has some additional functions (long list) that may prove useful in the consumer world in the future over DisplayPort which is more aimed at some more specific use cases on computers."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
amv9ba | Why could retro cartridge-based games not save player data without a battery backup? Why do modern cartridge-based systems not let you do this as well? | I assume with retro systems there was some kinda tech limitation. But if the game data could be saved on the cartridge why could a small rewritable section of the cart not allow player data to be saved on it that wouldn't require a battery? I know batteries are used to hold data in volatile memory on things like cartridges but yeah why could you not save to non-volatile memory? Furthermore with the advancements in storage and us now having SD cards the size of one's fingernail why can't player data be stored on a Nintendo 3DS/Switch cart? (...I assume, I don't actually own either.) | Technology | explainlikeimfive | {
"a_id": [
"efoswzi",
"efot2r2"
],
"text": [
"Technology like the Non-volatile memory in SDcards didn't exist yet, or at least not in a commercially viable form. If the classic NES was made today the system would contain either a hard drive, an SDcard slot, or a portion of non-volatile memory on the motherboard. The Cartridges used on older systems like an Atari or Nintendo were ROMs. Read-Only-Memory The chips were created at the factory without the ability to be modified so there was no way to store dynamic information on them. To get around this cartridges would have a small amount of NVRAM (Non-Volatile Random Access Memory) that could store variables (like what level you are on, your life total, which weapons you unlocked, etc) even if the power was shut down. NVRAM can only keep its contents if it has a constantly power source, which in the case of Nintendo games was a small watch battery inside the cartridge. > Furthermore with the advancements in storage and us now having SD cards the size of ones fingernail why cant player data be stored on a a Nintendo 3DS/Switch cart? They total could, they just don't :D There is no reason that Nintendo couldn't use common SDCard technology to release video games for a portable system, other than they don't want to. Probably for anti-piracy reasons.",
"None-volatile storage of that size did not at all exist back then. The best that existed were floppy drives which were not exactly something you could add into a cartridge. They could do something like that today, but it would add a bit to the cost of the cartridge potentially. Most modern gaming devices have their own internal memory storage anyway, so it's not really an issue anymore. The Nintendo 3ds, for example, has a 1 gig flash drive inside of it. There's no reason to attach the memory to the catridge when it is so small you can just add it to the console itself."
],
"score": [
36,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
amvenn | Signal. How does WiFi and mobile data pass through walls? | Technology | explainlikeimfive | {
"a_id": [
"efotzkb",
"efovbmn"
],
"text": [
"Glass is solid, but transparent to light. Walls are solid but transparent to radio signals.",
"This has to do with how a given material reacts to electromagnetic radiation: In the case of normal walls, there aren't many free electrons, so the electromagnetic signals aren't absorbed or scattered that much, and pass trough with minimal loss. In the case of metal walls for example, the amount of free electrons is enormous and they absorb, reflect, or dissipate the electromagnetic signals, killing your wi-fi or cellphone signal. There's a lot more detail regarding how different frequencies and wavelengts interact with different materials (like glass letting visible light trough but not infrared/ultraviolet), but that would warrant a long ass explanation, certainly not fit for a 5 year old."
],
"score": [
44,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
amw6ei | What are the sounds you hear when connecting on a dial-up modem? | Technology | explainlikeimfive | {
"a_id": [
"efp1997",
"efp1uxt"
],
"text": [
"It is called the handshake. There are still used in technology but we seldom hear them any more. After a connection is made. The devices negotiate what protocol parameters, speed, error correct and compression will be used. URL_0",
"It is called a handshake. The modem starts out just playing a tone that lets other modems know that it is indeed a modem and not a person. After the modem knows that it is speaking to another modem they need to make sure that they are both speaking the same language and talking at the same speed. So they take turns sending out confirmation messages and wait for the reply. After some back and forth communication, they both know that the modem on the other end of the connection can hear it properly, that it can respond properly, what language they are going to speak in, and how fast they are going to speak. Then they start transferring packets back and forth. For a fun experiment,. If you call a fax machine you will hear the \"hey I'm a modem\" tone. It's a high pitched long beep, then it will pause and then beep again. If you whistle into the phone and can get the pitch just right, the fax machine will start sending the first part of the handshake negotiation. Back in the day you could tell if the connection was going to fail by listening to the handshake. That is why you can hear that part but it mutes the audio after it's finished. Hopefully that helps"
],
"score": [
3,
3
],
"text_urls": [
[
"https://en.m.wikipedia.org/wiki/Handshaking"
],
[]
]
} | [
"url"
] | [
"url"
] |
|
amwnmg | How does the Nikon P900 zoom so far - like all the way to the moon - whilst still not being bigger than a 200mm+ lens...? | Technology | explainlikeimfive | {
"a_id": [
"efp7c7s"
],
"text": [
"That 200mm lens you're referring to is for 35mm film (36mm wide image) format. The p900 is a 1/2.3 inch format (6.17mm wide image). The smaller format means lenses can be smaller."
],
"score": [
12
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
amx6xu | CPU cores and threads | What are CPU cores and threads and what do they do? | Technology | explainlikeimfive | {
"a_id": [
"efp8ucz",
"efp8z8v"
],
"text": [
"Traditionally CPUs were fundamentally limited to only doing one thing at a time, it could do those things very quickly, but only 1 at a time. This meant that every program on your PC, including the operating system, had to share time on the CPU. Often sitting there and waiting for the CPU to become available. By building a CPU with multiple cores you could now have a CPU perform multiple instructions at once, making things much more efficient. So even though a newer CPU might be slower than an older one, if it has more cores it can perform tasks much more efficiently and therefore there is a net benefit. A thread is a particular process or set of instructions that a program needs to get processed. Each thread is handled by a CPU core. By creating a program that can divide it's individual tasks in multiple threads (multi-threading) they can take advantage of a multi-core CPU and run more efficiency. On a side note many CPUs have a technology called Hyper-threading. This allows an individual core to process 2 threads at once. The catch being that Hyper-threading isn't as good as a dedicated core, so while doubling the number of cores in a chip effectively doubles the performance, adding Hyperthreading is worth about 30% in the real world.",
"CPU (as its name implies) refers to the actual processing unit or thing that does the various computation we want it to do. A CPU core is just referring to a distinct one of these units so when it says “quad core” there are 4 of those units inside. A thread is a theoretical programming concept of a “task” or some thing you want to do. A thread is a task that is being done within a computer program, that is, they share memory. Most of the time when talking about threads you’re talking about an application which runs multiple threads simultaneously: a good example of this is having one thread manage the interface, while another does work for the user. This is different from separate programs which use separate memory. TLDR; a cpu core is something that does things for you and a thread is a task that a program performs, generally separately from its “main execution path” Source: software engineering student Edit: added some clarity to the end"
],
"score": [
14,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
an05mg | is there any way to gather up the 'rubbish' floating around the orbit around the earth? | Technology | explainlikeimfive | {
"a_id": [
"efpst7z",
"efpxzjf"
],
"text": [
"This is an area of active research. The issue is that most of the rubbish in orbit is fragments of satellites, specifically three different satellites. And it is going too fast to be captured by something physical like a net. So you need some other way to collect it or at least slow it down so it burns up in the atmosphere. There are several proposal. For example electromagnetic nets that will slow the objects down as they pass through without impacting the net itself, lasers that can target the objects and heat them up and vaporize them, etc. None of these have been proven to work and is mostly theoretical. The other issue is the disused satellites which are bigger and have the potential of becoming debris field. There have been proposals to how you can build a spacecraft that can visit each of them and deorbit them in various ways. & #x200B; The big issue is that cleaning up the orbits will cost money. And nobody is interested in having to pay this money. If the US pay then Russia, China, Japan and India will get a free service, similar if any other is paying. Even if you manage to get together and all decide to pay you end up in discussions of who should pay the most.",
"Not easily. Mostly because the rubbish is often very small and very fast and in a region that is larger than Earth itself. It is not just floating around but shooting a several dozen times what would be the speed of sound down here. The ISS for example orbits at speed like 7.66 km/s while guns and rifles shoot bullet at 1.2 km/s to 1.7 km/s Also because space is huge. So you are essentially try to catch a bullet in a region much greater than earth itself. (while hopefully not producing more trash than you collect). This is not trivial. thankfully much of the space debris that are orbiting close to earth will eventually come down by themselves (and burn up in the atmosphere in the process). Stuff further out is more of a problem. There is also the worry that eventually some debris impacting some satellite or other object in orbit will cause that to break and create more debris and that will create a chain reaction where each impact cause more debris to cause even more impacts. This would be the sort of worst case scenario of a catastrophe movie, but it is not entirely far-fetched that this might happen."
],
"score": [
8,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
an23nz | Why do older PC games require to put Disc 2 on your PC while playing? | Technology | explainlikeimfive | {
"a_id": [
"efq4ag1",
"efq50s8"
],
"text": [
"Because older games would not fit on a single disc, and when you installed games you didn't install everything, some stuff remained on discs - both for performance's sake, and as an extra drm-like measure. When you are asked to put disc 2, its because you finished part of the game contained on disc 1, and it needs access to data from disc 2.",
"A lot of games also used it as a way to prove you were the owner. d2 comes to mind you could run the game without a cd if you moved a couple mpq files from the discs."
],
"score": [
11,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
an4vj4 | Why can't we use neuroimaging technologies like MRI for the diagnosis of mental disorders, like depression? | I worded this question mostly in reference to MRI; could other technologies like PET, CT, MRI with contrast be useful in diagnosing mental disorders? | Technology | explainlikeimfive | {
"a_id": [
"efqqruj"
],
"text": [
"A diagnostic test is only useful if it can reliably distinguish between people who have the condition and people who don't. For almost all mental conditions, there's no information that can be gained from neuroimaging that does this. There are some differences you can find between the brains of \"normal\" people and those with, say, depression, but these are *averages* and there isn't enough consistency to clearly put people into two camps based on their brain structure. It would be like trying to tell from a satellite photo whether or not a house is well-insulated. Sure, you might be able to make some broad generalizations about the construction of a house and whether or not it's likely to be warm, but it makes a lot more sense to just go inside and see if it's drafty."
],
"score": [
13
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
an4w3u | How do wireless charge pads charge your phone battery? | Technology | explainlikeimfive | {
"a_id": [
"efqq943"
],
"text": [
"The concept is called \"electrical induction\". A current in a conductor creates a magnetic field around it (you probably made an electromagnet in school with a coil around a nail) and in turn a changing magnetic field creates a current in nearby conductors. This is how electric generators turn physical movement into electricity and how electric motors turn electricity into movement. In the case of the phone charger it is using electricity to generate a changing magnetic field which bridges the gap and then creates a current in the conductors of the phone to charge it."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
an5quf | What process does a rape kit use to tell if someone has been raped or not? | Technology | explainlikeimfive | {
"a_id": [
"efqw9bo",
"efqxkod",
"efqxlm8"
],
"text": [
"No, the kit can not measure consent. The kit looks for evidence of trauma and genetic material. The victim is the one who alleges facts regarding consent and then it is up to the investigators to build a case against the accused and up to the jury to decide if the evidence shows guilt or innocence of the accused",
"No, it can't. Like most forensics, all it can do is give some evidence but it is still up to the police to tie that evidence to a narrative of a crime. A rape kit can document trauma (such a tearing or bruising) as well as DNA evidence that a sexual encounter occurred (such as hair and semen samples) but those could be the result of just rough sex (which isn't a crime). You need other evidence (such as victim statements, eye witnesses, etc.) to prove that the event was rape.",
"The rape kit itself swabs for DNA typically, which doesn't tell you anything about consent by itself. But the physical exam that accompanies the rape kit looks for bruises, cuts, tearing, and other defensive injuries to help determine if the victim was fighting the attacker. That can be evidence of consent or the lack thereof."
],
"score": [
17,
10,
6
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
an6u0n | what's the difference between resolution, aspect ratio, and megapixels? | Difference**** | Technology | explainlikeimfive | {
"a_id": [
"efr4pxv"
],
"text": [
"Resolution is the number of pixels of the width and height. EG, 1920x1080 is 1920 pixels across, 1080 pixels tall. The aspect ratio is the ratio of those pixels. So, a widescreen format of 16:9 has 16 pixels width for every 9 pixels height. Megapixel is just a way to have a more impressive sounding numbers by the pixel area of the resolution, So, 1920x1080 is 2 megapixels, roughly."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
an7pnv | Why do most cell-phones have a higher resolution screen than most above average monitors and TV's? | Technology | explainlikeimfive | {
"a_id": [
"efrcw0j"
],
"text": [
"Because you look at your phone from a lot closer. That's where resolution matters and why a TV you sit 4-8 feet from doesn't need a higher resolution."
],
"score": [
12
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
anany8 | How did people in the late 1800s calculate temperature with such precision? | Technology | explainlikeimfive | {
"a_id": [
"efryyhg",
"efrz421"
],
"text": [
"Mercury and alcohol thermometers haven't changed in centuries. Mind the freezing and boiling point of water and you can space your degrees from there whether you're using Fahrenheit or celsius",
"Thermometers. The mercury thermometer was invented in 1714 by Gabriel Fahrenheit (1686-1736)"
],
"score": [
10,
8
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
anc8yk | when does the data from your phone get consumed. Eg. I’m scrolling reddit, when is the data used. Is it used when I first load into the app or when I’m scrolling looking at the posts. Or does it use the data when it is buffering and loading? | Technology | explainlikeimfive | {
"a_id": [
"efsacl6",
"efsaeay",
"efsajvj"
],
"text": [
"(working in app development) It's a combination and differs from app to app. Normally the app will preloaded as much text as possible - just because it's very little data compared to images and video. Something like reddit, will ask the server for title and basic information on your top 10-20 feed storis, when you open the app, then start rendering that on your screen. As all at it see that is needs an image it will fetch it. Most apps will also try to pre-fetch images that you will probably scroll to next.",
"Your data is used when something new is brought to you phone. If you start reddit and turn off your data you can see that there is a fair amount of posts you can still read. You can however reach the bottom. If you the reactivate your data it will load more posts.",
"I’m no expert in the stuff, but I’d assume the data is used up as your phone is loading the content on the app. From my basic understanding of computers, the app downloads the data and temporarily stores it onto your phone’s ram which allows you to look through what’s already been loaded even if you were to suddenly lose your internet connection."
],
"score": [
6,
5,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ancy11 | Where did the cracking noise go when someone receives a text message? | I remember ~5-10 years ago when someone on Skype, or in my house, received a text message my headphones would be all weird and brr-brr-brrr-krrrrr. It was the classic "Someone's got a text message!" - Somehow that stopped and I started to think about it, and I can't really tell how it went away, and also not really *when*. | Technology | explainlikeimfive | {
"a_id": [
"efse177",
"efsfj9d",
"efsfjhl"
],
"text": [
"That noise was a phenomenon known as 'interference' which occurs when radio waves interact with eachother and end up distorting their respective signals. Advancements in technology have mitigated this effect but it can still be observed, personally I still get it with my wireless headset when someone calls my phone if it's nearby.",
"The problem with interference came around when the GSM phones became popular in the late 90's. It went away because it was a pretty rude awakening for all manufacturers of audio technology that they needed to improve their products. It still happens from time to time if you stumble upon some cheap products. And some of the really, really expensive products too, where the buyer expects it to be as raw as possible.",
"Old phones used to go in to a low power mode where they would just listen on a frequency and wait for the tower to broadcast a wake up message. When they got it they would try and connect with the tower. Initially it would use the highest power signal to do so, but once the connection was established the power would drop back down to the lowest level necessary. These days phones are always connected as they are constantly sending and receiving data, so you don't get those wake up power bursts any more."
],
"score": [
15,
10,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
ane81k | Why is it bad when people aren't concerned about the government / companies tracking them online? | Technology | explainlikeimfive | {
"a_id": [
"efsn4la",
"efsmlcr",
"efst8z4",
"efsm8ct",
"eft0e4a",
"efsmh1o",
"efsy4hy"
],
"text": [
"The “I have nothing to hide” argument implies only guilty people have something to hide, but privacy is not about being guilty or innocent, and privacy is not about hiding things, privacy is the right to choose what to share and who to share it with. I chose to share my naked body with my partner, though I want privacy while on the toilet. I chose to say what I bought my mother for her birthday with the rest of my family and expect them not to tell her so it is a surprise. The first example is my right to privacy. The second example is me hiding something and not being a criminal, but I don’t expect privacy, just that they keep the present a secret from my mother. Companies have data breaches all the time, government information gets leaked, I deserve the right to not share my personal details with a company if I do not trust them, they could share my details by selling them to a third party or they could just be hacked, and I don’t want the world to know who my ex is, my favourite coffee shop or why I had to visit the urologist last summer. Someone who thinks privacy is as simple as “I want to hide this thing” doesn’t understand what privacy is, and someone who thinks people hide stuff when they are guilty doesn’t understand human nature. Privacy is your right to chose what you share and who you share things with.",
"Here's a good ELI5 idea I've seen before. Let's say you live in a society where everyone walks. It's a peaceful town, people know that there can be crime, but for the most part, everyone feels safe. Maybe you are a little different and want to train to run. Maybe you did it once on accident and liked it. Now you're running, and the police see you and they stop you everytime. They think the only reason you'd be running is because you stole something or you were running away from something, since everyone always walks. So now, they're watching you. You can't run and they follow your movements to prevent it. You didn't do anything wrong, and the police aren't trying to be bad, but you're rights are being limited even though they were good for yourself and weren't hurting anybody else. Even though you have nothing to hide, your life has been worsened by having to be stopped everytime an officer notices you running to ensure you didn't break a rule. Another idea is, in the US especially, we believe in a right to privacy. This can be good, and less than good. For example, maybe you want to propose to your girlfriend. You want to do a great proposal so you look online at jewelers for a beautiful ring, and someone out there is tracking your computer. If someone takes this information (a company, or the government leaks information accidentally), and sends you emails about more wedding ring deals, they could ruin this fun thing you were planning on do. A second reason may be that as they track all this computer data, they realize the serial killers like reading books about serial killers. However, everyone that reads a book about serial killers, is not a serial killer. But the government decides its safer to track every one who reads a book about a serial killer. You purchase a book about a serial killer, and now you're on a watchlist. You travel to Washington DC and they won't let you in because you're on a watchlist and, even though you have nothing to hide, there afraid that maybe your mentally sick. The police call you in regularly to ask you questions, and even though you have nothing to hide, your family starts to wonder what's going on and people stop trusting you. This nothing to hide argument makes sense, but quickly, can fall apart as people can be inconvenienced, lose their right to privacy, or be followed when they don't deserve to be",
"Just because I don’t hide the fact that I poop doesn’t mean I’m going to start doing it with the door open. Privacy isn’t about hiding our wrongdoing, it’s about being able to choose what we share with the world.",
"You having nothing to hide isn’t the point. You should be worried about the larger fact that your rights are slowly being eroded away in front of your eyes until one day you live in a society that looks nothing like the one you used to. The constitution is important.",
"Why are posts asking what the big deal is about online privacy always asked by people like \"year-of-the-uribo\" instead of \"Janet Kilpatrick\" A person on social media will imply that one shouldn't be worried about being tracked - while hiding their identity",
"Being tracked online is just the starting point. In history, things like complete government control (?) don't occur all at once. It's a step-by-step process. And this is one of the first steps. Allow it to be taken and you have to ask yourself... \"What's next?\"",
"Everyone has something to hide and normally nobody cares. Can you truly say that you have obeyed the law 100% of the time, or have committed no actions that you would not want revealed to the wider world? By surveilling everyone, you catch benign breaches of the law and taboo. If the public are all guilty, the executive part of the government can selectively enforce laws, essentially giving them both judicial and legislative power, which defeats the whole point of separation of powers, and brings us closer to a police state."
],
"score": [
16,
16,
5,
5,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
anf5s9 | why did dial up modems need to make noise? Why couldn’t they silently send their signals through phone lines? | Technology | explainlikeimfive | {
"a_id": [
"efstkgr",
"efsu0ww",
"efstou9"
],
"text": [
"After the connection is established, they would go silent. And they could be configured to be completely silent. But the nice thing about having the noise at the beginning was that you could tell when the connection had been established. There was an easily distinguished change in the noise that said \"everything is working now.\" You listened to make sure the connection succeeded, and wasn't interrupted by something, like someone on another extension picking up the phone and yelling \"oh, sorry, I didn't realize you were using the phone for the computer.\"",
"They only played the sound out loud so you knew they were connecting to a computer. If you heard someone saying \"hello? who is this?\" you'd probably called the wrong number.",
"Sure, but until the connection was made the modem wasn't sure you'd dialed the right number. Alas, the first tone made by fax machines was the same, so only after the alternating tone was responded to was the modem sure it had another modem on the line. If there was talking, or a recording, that came out the speaker to tell you \"Hey, user, you dialed Grandma instead of AOL.\"."
],
"score": [
9,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
angtuo | Why do touchscreens and trackpads work with 'organic' materials (fingers, bananas, etc) but don't work with the majority 'inorganic' materials (metals, plastic, etc.)? | Technology | explainlikeimfive | {
"a_id": [
"eft7g0v"
],
"text": [
"Touchscreens work by detecting capacitance of whatever is near enough the sensors. They're calibrated to respond only to the values that are expected from human skin. The other organic stuff just happens to have similar enough capacitance to human skin. Metals and common plastics being aren't that close."
],
"score": [
13
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
anjpst | the magnetic North Pole is moving south at a faster pace than usual. What does this mean for our civilization and where is it going? | Technology | explainlikeimfive | {
"a_id": [
"eftukry"
],
"text": [
"The magnetic north and south poles aren't what you think. It's more like a bunch of small poles that line up to make one big magnet. The magnetic north amd south poles have actually swapped places numerous times in the earth's history. Other than accurate navigation, this will have little impact. If the megnetosphere collapses, that's a different story."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ankbbn | Why do todays cell phones still use horrible .3gp (potato cam) format to transmit video? | Technology | explainlikeimfive | {
"a_id": [
"eftyuqt"
],
"text": [
"because the link is very low bitrate. Higher quality video uses more bandwidth that is not available over mms. MMS is a different application than say 4g which is data only"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ankpf5 | Movies | So back in the day before DVDs, how was audio played during movies. Was it on the film? How did audio work in movie theaters? | Technology | explainlikeimfive | {
"a_id": [
"efu1u41",
"efu2b4y"
],
"text": [
"The sound was recorded along the edge of the film using light and dark areas. URL_0",
"In theatres, when the first talkies came out they had separate reels that had sync dots encoded on them so the projectionist could keep the two lined up. Eventually, Dolby and friends found a way to encode the audio waveform onto the edges of the film so that the read heads could extract the audio in synch with the projected image. This meant that if parts of the film stretched, the audio would stay in sync. Towards the end, the data was encoded digitally with error correction and surround sound so that not only would it stay in sync, it would keep its fidelity. If you look at the MPEG 2 codec used for DVD, it actually retains many of the same techniques to keep the decoded audio and video streams synced for playback."
],
"score": [
6,
6
],
"text_urls": [
[
"https://en.wikipedia.org/wiki/Optical_sound"
],
[]
]
} | [
"url"
] | [
"url"
] |
anl980 | How does a rice cooker know that the rice inside is cooked and automatically stops? | Technology | explainlikeimfive | {
"a_id": [
"efu6gwc",
"efvgoos"
],
"text": [
"Water boils at 212 degrees. The steam dissipates and takes the heat with it. It's difficult to get it hotter under normal conditions. There is a temperature sensor in the rice cooker that monitors the temp at the bottom of the cooker bowl and when it starts to rise that means the water is all/mostly boiled off and its just burning rice at that point so it turns off.",
"/u/MundaneSummer has it right, the cooker looks for the temperature increase after all the water has boiled away. But *how* it does that is pretty cool! More modern digital rice cookers use a little computer -- boring -- but you can still buy the oldschool ones that work with *no electronics at all*. It's all done with magnets! In the bottom of the rice cooker is a spring-loaded steel plate, and beneath it a magnet mounted to the end of a lever switch. When you push the button down to start the cooker, the lever switch rises, and the magnet is attracted to the steel plate, holding the switch on. As the temperature rises, the metal plate starts to [lose its ferromagnetic properties]( URL_0 ), so the magnet doesn't stick to it as much anymore: the magnet falls, deactivating the lever switch. URL_1 (Footnote for people who know about this phenomenon, and say \"But the Curie temperature of steel is like 800°C!\" The Curie temperature is just the point at which *all* ferromagnetism goes away, but magnetism weakens gradually with temperature. The rice cooker's switch has a spring just strong enough to pull the magnet away from the metal plate when the temperature is at the desired value.)"
],
"score": [
22,
4
],
"text_urls": [
[],
[
"https://en.wikipedia.org/wiki/Curie_temperature",
"http://i.imgur.com/qHc3pvW.png"
]
]
} | [
"url"
] | [
"url"
] |
|
ano37f | What exactly is happening when a computer starts getting file corruptions to the point where it's doomed towards total failure? | Technology | explainlikeimfive | {
"a_id": [
"efuua1h"
],
"text": [
"Generally, in a HDD, it's due to mechanical failure. The drive is structured by its smallest units of storage called sectors, when it experiences a mechanical error in one of those sectors, the sector is beyond repair and referred to as a bad sector. The bad sectors are marked to be skipped, and remapped into spare physical sectors. If your drive uses a S.M.A.R.T monitoring system, it keeps track of the bad sectors its had to remap in the reallocated sector count. This count is used as a metric of the life expectancy of the drive."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
anokpf | how does ISO, shutterspeed and if you shoot raw/large jpeg/etc affect your photos? | My mind doesn't wanna grasp the answers I've been trying to research and watch videos but I still cannot get it. | Technology | explainlikeimfive | {
"a_id": [
"efux28h"
],
"text": [
"I'm not a photographer but I do understand these concepts enough to explain them to a 5 year old :) ISO is the sensitivity of the light sensor. Lower number means it is less sensitive, higher is more sensitive. This also affects the quality of the image. Shutterspeed is the speed at which the shutter (barrier between the outside world and the light sensor) opens and closes for a photo. You didn't mention it, but the aperture is the weird shaped thingy around the camera lense. This acts in the same way as your pupil. If it's wider it lets more light in and vise versa. RAW is a file format, it is the highest quality you can (and should) take photos with if you need to edit them afterwards. It basically contains every bit of information about the image that the camera has. This is very needed for photo editing as you can edit the exposure settings in a program like Adobe Lightroom. JPG is a lossy image file, basically just an image - nothing special. The kind of photo your phone takes. You can edit it but it is not as detailed as RAW. Okay so not combining ISO, shutterspeed and aperture let's you take clear, well lit photos in most light conditions. There is always a trade off though. To make a scene lighter, you can decrease shutterspeed - this means you'll need a tripod so that you do not make the image blurry. You could widen the aperture - this also means you'll need a lower shutterspeed and you need an expensive camera to get really wide or small apertures. Or you can increase the ISO - this will make the image grainier the higher it is. Now, a combination of those camera settings is how you take clear images regardless of your surroundings and is very dependant on the specifics of your scene. I hope this helps :) Typed quicky on a mobile so it's not formatted or perfectly written."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
anr40s | What's the difference between CS (Computer Science), CIS (Computer Information Science), and IT (Information Technology)? | Technology | explainlikeimfive | {
"a_id": [
"efvj3lf",
"efvdzq5",
"efvv66k",
"efvf6xz",
"efvd1hf",
"efvtp3k",
"efvou1b",
"efvsj2d",
"efvn6r3",
"efvncm1",
"efvu4kd",
"efvx7ki",
"efvd4tt",
"efvof8a",
"efwfe8o",
"efvt1rd",
"efvtwi6",
"efw74gf",
"efw0loa",
"efvsz56",
"efvydxw",
"efw9qa9",
"efwp82z"
],
"text": [
"**Computer Science** : It’s the science (mathematics) of how computers inherently work. It would have an answer to this question: If I had a bunch of random numbers, what would be the fastest way to sort them, is it the fastest way? And why is it the fastest way. It often requires writing code but only to verify and quantify an idea. **CIS**: I’ve got this gigantic set of numbers and letters and words and other data. CIS will answer this question (amongst many other): How can I make sense of this data to find how they’re interrelated **IT**: I’ve got a business to run that requires selling lemonade. But because I’m a genius lemonade maker and the biggest one in town, I’ve set up many lemonade stands around town that are completely automated. IT answers this question: How can I effectively tie in all these lemonade machines to work seamlessly and serve customers without a moments delay? What computers do I need? How shall I set up my storage? What’s the ideal internet connection to use? Edit: well shit, good morning to me. Glad this is my most upvoted comment! And thank you for the gold and silver! Edit 2: Because some of y'all asked me to ELI5 some more, so here's my take: **Software Engineering**: The customers of Lemonade Inc. need an app to order their favorite kind of lemonade right to their door step. A software engineer would be able to: Make an app that's easy to use, and can be installed on the customer's phone. **Data Science**: Data science is (amongst other things) using lots of data to draw conclusions about a specific topic. If Bob opened the app made by the software engineer, given his previous purchases, which lemonade flavor can I suggest to him that he is most likely to buy? Also, can I perhaps make him buy another one by showing his wife's favorite lemonade right next to his so he would remember to buy her one as well? **Computer Engineering**: Computer Engineering deals with actually making the physical computer that will physically run the programs made by the computer scientist or software engineer. Example: Hey computer science guy! I hear you want to run that new number sorting method on a set of 1,873,347,234,123,872,193,228 numbers! Oh, are current processors too slow because they need to do 10x more work than required for this specific task? Ok let me see what your method is, and let me perhaps build a custom processor for you to efficiently do everything in as much time as you expect. (Warning: this is a gross oversimplification of computer engineering, and they dont go around making new custom processors for everyone. I've tried to keep it simple and in line with the examples above!)",
"Computer Science in essence is academic, research focused, scientific. It concerns studies of AI algorithms, network protocols, security research, ... Not many people who study CS continue in this theoretical field, since the demand for practical applications is enormous. CIS is the part of CS that deals with information gathering and processing. Again, there's a huge practical interest, given what Facebook, Google, etc. do. Smaller companies all try to implement their own versions. But there is also tons of research to improve their algorithms. IT is a bit different, in the sense that its core business is managing computer infrastructure. They make sure all employees have the correct and up-to-date software installed, the servers keep running, the network is secured, etc. This is almost purely practical.",
"My rough take; each answers a different fundamental question: * Computer Science: What is a computer? (What can a computer do?) * Computer Engineering: How can we build a computer? * Computer Information ~~Science~~ Systems: What can the computer tell us about this data? * Software Engineering: What problems can we solve with the computer? * IT: How can I ~~keep~~ make all these computers ~~working~~ efficient and secure? EDIT: I did not expect this comment to get so much attention! Please, do not base your academic or career decisions on these ELI5, one-sentence breakdowns. I think if you study in any of these fields you can learn enough to jump to any other in practice. Most of what you will actually use every day you will learn on the job or on your own time (if that scares you, you will have a harder time making a jump). The key is to learn *how to learn on your own*. Please consult with people actually working in the industry. I myself have an electrical engineering degree, work mostly as a software/controls engineer, and have a passion for computer science. On a daily basis, most of my time is spent working with teams to solve practical problems where software is simply one tool in the box. Feel feel to ask me anything about these areas.",
"Since we're here, where does Computer Engineering falls?",
"Oversimplified, but here we go. * **Computer Science** - the science of creating computer programs. Algorithms and data structures. Almost entirely focused on writing code. * **Computer Information Science** - How to use computers to organize and make use of data. A little higher level than CS. * **Information Technology** - How to use technology to solve business problems. This can involve CS and CIS but is more problem focused.",
"Absolutely no difference to 90% of the people out there. People: \"What do you do?\" Me: \"I develop software.\" People: \"So you are in IT?\" Me: \"No. I develop software. Which means I USE a computer and a network, but I do not spend my life maintaining a network of computers. If I have a computer problem I phone my IT department and go for coffee.\" Me: \"No I cannot help you with your computer, WIFI, printer, or networking problem.\"",
"Programmer, data scientist, and admin. One writes code. One manages and manipulates data. One keeps a computer system up and users happy.",
"CS: You write a program to see how often that guy picks his nose CIS: You use that program to gather the data and determine what is actually a true nose pick IT: You set up the computers and cameras and network them so that you never miss a nose pick again.",
"Also, how does Information Systems relate to these?",
"There seems to be plenty of answers, but I figured I would throw one more in there for you. I majored in MIS (management information systems) for a bit. It was a lot like the CIS but more focused on software used in businesses. In my short time studying it they really seemed to put emphasis on not only knowing technical side of how to make the software, but also knowing the business side of things so you could make the most effective software for the customers needs.",
"Imagine you have a lemonade stand with 3 people working it. One person understands how lemonade works. They research better ways to make the lemonade, better ratios of ingredients, different lemons, design better pouring methods, hell maybe there is a BRAND NEW stand we could be creating. This is like in computer science where people are researching more efficient patterns and algorithms, and more broad computing concepts. The second person is responsible for building the lemonade. They understand the best mugs to use, understand those better pouring techniques and how to use them (but may not understand WHY they are better or how), and everything involved in building your lemonade stand and delivering the lemonade to the customer. This is Computer Information Systems. They design the programs that are used, using concepts laid out by computer science (over simplifying for the ELI5). Then the final person is essentially the middleman between the lemonade stand and the customer. They may not sell the lemonade (that would be product or sales), but have interactions with the customers. If a customer isn't liking their lemonade, or can't even drink it for some reason they can contact this person to help fix the problem. They have some idea of how everything works, but don't typically design it themselves. This is IT.",
"CS: We make things. CIS: We manage things. IT: We fix things. This might be more ELI3...",
"This will vary on the program you are enrolled in: Computer Science = learn programming to eventually become a developer building apps, services, and automation. Computer Information Science = you learn a technical curriculum with the intent on becoming an IT manager or Program Manager. You basically manage projects and have some technical insights. IT = tech support with some PM skills, maybe dabble in programming.",
"I have never heard of the term CIS in my field. As for the two other, they are vastly different. & #x200B; Computer science relates to the science of how to translate a task in a way that a computer can do it. Example: you take a map and decide how you are going to drive your car from point A to point B while avoiding congested areas and accidents. How can a computer do that (like google map)? It involves modeling a mathematical formula, a logic per se, that will allow a computer to determine the best path. Or say you have a sheet of metal and you need to cut shapes into it, how do you make sure select which shapes to cut and in which angles to minimize the material loss? It's not dependent on programming language, even if selecting the rigth language for the right task is essential. & #x200B; Information Techbnologies is geared toward business applications. From designing interfaces, business applications and understanding business processes and how to automate them or support them with a software, to infrastructure and server installation and maintenance. Web design, maintaing a company's computer fleet, it's all IT. & #x200B; & #x200B;",
"*In academia:* CS/CIS/IT are largely dependent on schools. For example, there are some schools where CIS is more theory/math than another school's CS program. To keep things simple we're going to go by the largest national accrediting body for computing (abet)'s [criteria]( URL_0 ) \\- there are three specialties: Computer Science (CS), Information Systems (IS) and Information Technology (IT). They define CS as: > Apply computer science theory and software development fundamentals to produce computing-based solutions. and IS as: > Support the delivery, use, and management of information systems within an information systems environment. and IT as: > Identify and analyze user needs and to take them into account in the selection, creation, integration, evaluation, and administration of computing-based systems. Pretty vague, right? Academically it's not really strict like you would see in medical, engineering, law or business. There's essentially a handful of courses that a school's faculty puts together, then calls the degree whatever it most aligns to. There's a ton of overlap. Typically the curriculum with the most math and theory courses becomes Computer Science, then the one with the most business courses becomes (computer/management/nil) Information Systems, and then the remaining one becomes Information Technology. Another important distinction is in which section/school the program is in. The business school, liberal arts school, the math department, or the engineering school? Now, I did say typically. I have seen ivy league-tier schools that would offer a degree like \"Computer and Information Science: concentration Computer Science\" that is just a very rigorous CS degree with a long name. \\--------- *In industry:* CS is a degree that HR looks at for software engineering positions. To a lesser extent they look at related degrees like electrical engineering, math, information systems, and information technology. Sort of confusingly, the IT industry (not the degree) is mostly a customer-facing support kind of role. In summary: traditional engineers create computers and maybe some software, software engineers create software like algorithms, and IT people utilize those creations to benefit the business.",
"Went to school for CS. Wish I would have gone to school for CIS. I did not know the difference. I don't have the interest in or dedication to math that it took to make it into Calculus 4 and differential equations. Of course, the real secret is you don't need a degree to do what the pros do in this specific field. No other STEM field has such a lack of academic requirements for the pay we receive, and that's because there's an incredibly high demand for us. I didn't graduate and am making top tier salary as an SRE in silicon valley. What matters is what you can demonstrate. Certifications and code reviews weigh a lot more than a degree in this particular occupational field. And we tend to get lots of office perks too.",
"**Computer Science** - Math behind creating computer programs and systems. **Computer Information SYSTEMS** - This is what businesses called Information Technology in the '70s and '80s. It is a set of things working together to control information on computers. Databases, file servers, etc. **Information Technology** - Basically the same as computer information systems. The technology we use to process information from fax machines to smartphones.",
"CS: How computers work CIS: How this computer works IT: How these computers work together",
"Computer Science is a branch of mathematics that got rich enough to afford its own building. Everything else is about doing practical stuff with computers.",
"I see so many things I disagree with here. Terms aren't always used correctly or consistently so don't get too hung up on them. The way I see it: Computer Science: is the study of how computers work. It is typically a program offered at universities. IT: Is the career of working with computers. This is most often done by people with Computer Science or related degrees. Computer Information Science: Is a subbranch of computer science. Software Engineering: Is another subbranch of computer science, however with more focus on engineering aspects.",
"CS = Computer Science - The focus is on the theoretical basis of computing. What makes computers work the way they do CIS = Information Systems - The focus is on the systems (including humans) and how they are used to support business IT is the one everyone else (especially computer scientists) tend to get wrong. So I will refer to the formal definition according to the ACM / IEEE curriculum statements for these fields. URL_0 I quote: Information Technology is the study of systemic approaches to select, develop, apply, integrate, and administer secure computing technologies to enable users to accomplish their personal, organizational, and societal goals. My shorter version: IT = Information Technology - The focus is on technology and how to apply CS theory to help improve solutions within IS systems (kind of). If you think about it in terms of cars: CS is equivalent to Physics working out the \"rules\" of what makes a car work IS is equivalent to car manufacturers that analyse the needs of humans in various environments and design what our cars should look like and how we want to use them etc IT is equivalent to Engineering who works closely with IS to build cars, engines, etc, according to the rules the physics people discovered to meet the specifications of the designs the IS people came up with. The above is of course not nearly as clear cut since all of these overlaps in many aspects. The primary focus of the degree is different though. A CS graduate will always have advanced math, and IS graduate will always know a lot more social science theory, and IT graduate will always be somewhere in between CS and IS.",
"Many answers here incorrectly associate computer science with building software or writing code. Computer science is best understood as a field with significant overlap with pure and applied mathematics. Very broadly, computer scientists seek to understand claims about the nature of computation. For example, it is known that if you're only allowed to perform comparisons, sorting a list of n numbers cannot be done without performing at least C * n * log(n) comparison operations in the worst case. Here C is some constant number that depends on how you implement your algorithm (for example, whether you chose to execute the algorithm on pencil and paper, or whether you wrote a computer program to execute the algorithm for you). After specifying a model of computation (and some other details), this claim can be proved rigorously using mathematics. The result above is a classic example of a lower bound type result. It tells you that no matter how clever you are, you cannot avoid doing a certain amount of work if you want to compute a solution to some problem. More generally, lower bound type results tell you that given an input to a problem of size N, your algorithm must perform at least C*f(N) computations to find a solution to the problem in the worst case, where C is some constant that depends on the way you implemented the algorithm, and f is some function of the input size. (The question of how to measure the \"size\" of an input is very important, but I've chosen to ignore it here.) Although there is considerable diversity even in computer science, I believe the above example is more representative of what computer science involves and the things computer scientists think about.",
"Oooh, I'm way late to the party but I've always enjoyed my take on this very question, but people rarely ask this question. This is only based on my observations at the school I got my MIS degree at, so your mileage may vary. I don't know where IT falls in here. But for the rest, I think of it as a spectrum that looks like this: < CS---CIS---MIS---BA > When I use those acronyms I'm thinking Computer Science, Computer Information Systems (Maybe same as IT?), Management Information Systems (which is a terrible name for this degree, but it's an awesome degree, and Business Administration. That spectrum also loosely equates to the kinds of classes you take: < Computer--------Business > So computer classes will be like intro to comp sci, networking, programming, database design, etc. Business classes are like marketing, management, finance, etc. So when I think back to the original spectrum I gave you: < CS----CIS----MIS----BA > and < Computer------Business > In the CS degree, you get almost all computer classes and no business classes. With CIS you get some business, but still mostly computer. With MIS you get mostly business with some compsci classes, and BA you get all business classes. So why would someone do any of this? Wouldn't it be better to specialize either in CS or BA? Why have those two in the middle? First, I would say the two in the middle are largely interchangeable in the business world. If you have a job that wants a CIS degree, your MIS will work, and vice versa. But those two play an **important** role in a business setting because to be frank: a CS and a BA don't know how to talk to each other. The CIS/MIS person knows enough of both sides of the world to translate between the two. They know how to take the BA's business requirement and translate it into SQL code, or java, or whatever. They probably aren't doing the actual programming, but they can work closely with the CS person to ensure what they're doing matches what the BA wants. They can also help temper both sides' priorities. CS will want to do everything perfect. BA will want to do everything cheap. The CIS/MIS person will help the two negotiate. I'm an MIS major because I actually love doing this kind of work. I also lean more towards the business side, so that's why I took the MIS classes. When I graduated, I had to give an oral presentation on a subject in order to qualify for my Summa Cum Laude, and I gave it on this very topic (to which I passed). I've lived this role for 15-ish years in Corporate America and it's important, but not well understood or valued. But you'll get things done better when all sides are accounted for in a project."
],
"score": [
12085,
3071,
617,
96,
76,
22,
18,
13,
12,
12,
10,
6,
6,
6,
6,
5,
4,
3,
3,
3,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[],
[
"https://www.abet.org/wp-content/uploads/2018/02/C001-18-19-CAC-Criteria-Version-2.0-updated-02-12-18.pdf"
],
[],
[],
[],
[],
[],
[
"https://www.acm.org/education/curricula-recommendations"
],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
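The roughly C · n · log(n) comparison-sort lower bound cited in the computer-science answer above comes from a simple counting argument over decision trees. The derivation below is the standard textbook sketch and is independent of any particular language or machine.

```latex
% Any comparison sort must be able to distinguish all n! possible orderings
% of its input. Modelled as a binary decision tree (one comparison per node),
% a tree of height h has at most 2^h leaves, so it must satisfy 2^h >= n!.
% Taking logs and applying Stirling's approximation gives the bound:
\[
  2^{h} \ge n!
  \;\Longrightarrow\;
  h \ge \log_2(n!) = n \log_2 n - O(n) = \Omega(n \log n).
\]
```

In other words, no amount of cleverness gets a comparison-based sort below about n log n comparisons in the worst case, which is exactly the kind of claim the answer describes computer scientists proving.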
anry4s | Why is the "Federal Universal Service Charge" on my AT & T bill so different per line? One line its $1.09 and the next line its $19.20? | Technology | explainlikeimfive | {
"a_id": [
"efvkcue"
],
"text": [
"Looking at your bill, you should see that the line with the higher fee is the line your carrier bills your plan to, if your other lines are sharing. The fee is a percentage of the bill so different bills cause different fees."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
anshs3 | How is data encoded/transported as light in fiber optic wires? | To my understand this is only uses for internet and some surgical equipment. But I do get that fiber optic is fast because it uses light, but how do we encode 1s and 0s as light? Or am I understanding this wrong? | Technology | explainlikeimfive | {
"a_id": [
"efvn052"
],
"text": [
"Well that was quick! Thanks. I totally didn’t realize of an on/off system. I was thinking you use different colors of light."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
anth7n | Why aren't there more tidal power stations | Technology | explainlikeimfive | {
"a_id": [
"efvx8m1",
"efvz6j4"
],
"text": [
"> tidal energy has traditionally suffered from relatively high cost and limited availability of sites with sufficiently high tidal ranges or flow velocities, thus constricting its total availability. [ URL_1 ]( URL_0 )",
"Salt water is really damaging to mechanical equipment meaning short lives and lots of wear and tear. We haven’t done enough studies to show that tidal generation is not harmful to the ecosystem we put it in. In order to successfully capture enough tidal power to get the system to pay for itself the water must be focused through the generation point which usually means limiting what vessels can use that area. Most of the best locations are ends of rivers and sounds that are more densely populated with both human and animal activity."
],
"score": [
5,
3
],
"text_urls": [
[
"https://en.wikipedia.org/wiki/Tidal_power",
"https://en.wikipedia.org/wiki/Tidal\\_power"
],
[]
]
} | [
"url"
] | [
"url"
] |
|
anu7rz | If humans can only hear as low as 15 Hz, then why do some headphones go down to 5 Hz? | Technology | explainlikeimfive | {
"a_id": [
"efw6hxs"
],
"text": [
"First, even though the headphones might claim they reach down to 5 Hz, they won't be very effective at those frequencies. So, the useful frequency range will almost certainly be smaller than what's on the package. Second, you can feel very low frequencies even if you don't hear them."
],
"score": [
10
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
anv6b1 | How do thought controlled prosthetic limbs work? | This specific article/video has shocked me in particular: URL_0 This prosthetic arm is not even close to the woman, yet she controls it with her thoughts somehow, and it even sends her information back (feeling). Just how? | Technology | explainlikeimfive | {
"a_id": [
"efwa0f5",
"efwhbh8"
],
"text": [
"This isn't \"thought control\" this is nerve remapping. Essentially it's just useing the wiring of the flesh and blood arm to control the prosthetic. Essentially they relocated this ladies nerves and programmed the prosthetic arms to react to her nerve impulses. That band on her arm is reading and sending signals from her remapped nerves to the prosthetic. Engineers and neurologists have created an arm that can react to nerve impulses and translates them in to instructions that move actuators in a specific way. It's no more mind control than moving your connected flesh and blood arm. That said there is in fact research on actual mind reading prosthesis. And the mechanics are pretty similar in concept as this nerve control type. It boils down to making an interface that reads flesh and blood stimuli converting them in to a machine readable instruction to send to a machine programmed to do specific movements.",
"We have two nervous systems - the central nervous system and peripheral nervous system. The CNS is the brain, spinal cord, and some sensory apparatus like the eyes; It's floating in its own fluid which the body keeps quarantined from this filthy 'blood' stuff, and doesn't extend too far from the core - generally one axon connects inside and outside the CNS. The PNS by contrast is a branching network of long transmission lines (subunit: the axon) that run through the limbs and around the muscles and skeleton of the body, talking to the brain when it deems fit (\"Catch the ball!\"), but sometimes just doing its own thing (\"Left ventricle, pump now!\"). The largest PNS nerve bundles are tiny compared to the brain, but huge compared to anything extending outside of the CNS. The PNS has the practical mechanical design consideration of bundling most of the nerves for eg the fingers, tightly together alongside each other, and wrapping them in a protective sheath. Then as the nerve gets farther from its attachment to the brain/spine, it forks off into smaller and smaller bundles of axons, like a tree separating branches into smaller and smaller limbs - until the tiny nerve endings which produce *individual sensations* like 'something is touching my finger on the first segment on the bottom quarter of the left side'. The link describes taking one of those bundles that's been severed, using a scalpel to split it up into smaller groups of axons, mounting those groups on the upper arm as you mount controls on a control panel, and touching electrodes to those groups, and sending electrical signals through them in a similar way to the body's natural design."
],
"score": [
8,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
any5hb | How exactly do stethoscopes work? | Technology | explainlikeimfive | {
"a_id": [
"efwz9jg"
],
"text": [
"The end of the stephoscope captures the sound vibrations and bounces them through the tubes until they reach the other end, which you put to your ears. It's effectively the same as putting your ear on someone's body, but more comfortable for both of you and less intrusive. In fact, that's exactly what doctors used to do before it was invented."
],
"score": [
15
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ao3se6 | Why does film footage from the 60's often look much better than footage from the 80's given that technology should have advanced? | Technology | explainlikeimfive | {
"a_id": [
"efxya3l",
"efxyiut",
"efy1yes"
],
"text": [
"Home cameras didn't really exist in the 60s, so practically all the footage you see from before the late 70s was shot professionally or semi-professionally with fairly expensive equipment that used very high resolution film. By the 80s, home VHS recorders allowed any doofus to record low quality video to a low res tape relatively cheaply and quickly. There's a lot of sucky VHS footage still floating around, but that format didn't exist until the very late 70s.",
"Back before digital sensors became the norm, there were two ways to record movies: You could use a film camera, which takes a series of photos on film. This way, the image quality could be just as good as it is today with digital cameras, but it required huge and expensive cameras as well as very strong lighting, and more often than not manual post processing. Many old movies were recorded on very high quality cameras, so they can be remastered to full HD looking like it's brand new footage. TV-cameras however were much more limited. They had a resolution of just 480*320 pixels, but these were portable and could be instantly broadcast to the audience. Storing this signal on a video tape degraded the quality even further, so watching something like 80s news footage today looks even worse than it did on TVs back then.",
"[Technicolor]( URL_0 ) was a very high-quality way of recording color images onto film, but it was complex and expensive. In the mid 60's it began to be replaced by cheaper processes. A lot of those processes resulted in films that have degraded a lot over time, but the technicolor films have stood up much better. [Here's a video]( URL_1 ) that touches on some of this."
],
"score": [
54,
18,
4
],
"text_urls": [
[],
[],
[
"https://en.wikipedia.org/wiki/Technicolor#Three-strip_Technicolor",
"https://youtu.be/Mqaobr6w6_I"
]
]
} | [
"url"
] | [
"url"
] |
|
ao4c9g | What is neural network and how is it different from A.I | I'm truly sorry if this is a stupid question. Edit : Thank you very much guys. You all make it simpler for me to understand | Technology | explainlikeimfive | {
"a_id": [
"efy1ro1",
"efy1u2f",
"efy3x2a"
],
"text": [
"It's not a stupid question. A neural network is a form of artificial intelligence. To explain it simply a neural net is an attempt to replicate the structure of a brain with neurons and connections between them that get weaker or stronger as they are trained. On one end of the neural network you have the inputs. This could be a picture, data from a self driving car, or the state of a chess board. On the other end you have the outputs of what the AI wants to do. In between you have nodes and connections between those. The strength of the connections determines what output is called for based on the inputs.",
"AI is a very general term used to describe any software which does intelligence-like things: learning from data, making decisions, solving problems. A neural network is a specific way of making an AI. A neural network is kind of like the human brain - lots of individual decision-making units, similar to the neurons in your brain, working together. Typically a neural network first \"learns\" by observing data, then makes decisions to solve a problem based on what it learns.",
"AI is an overbroad term, like transportation. Neural network is a mode of AI, like \"cars\" to extend the analogy. Some people think cars are the most profitable kind of transportation, and some people think they are inefficient and on the way out. You need to get to something specific, like reinforcement learning neural networks (= Tesla Model 3 to flog a dead analogy) before actual facts make it into the technical descussion. Otherwise there is a high risk of over generalization or talking past each other."
],
"score": [
13,
6,
6
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
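The picture in the first answer above of inputs, weighted connections, and an output can be shown with a single artificial neuron. This is a minimal illustrative sketch; the inputs, weights, and bias are made-up numbers rather than values from any real trained network.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs squashed by an activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation: output between 0 and 1

# Hypothetical example: three input signals and the connection strengths for them.
inputs = [0.5, 0.8, 0.1]
weights = [0.9, -0.4, 0.3]
print(neuron(inputs, weights, bias=0.1))  # ~0.56
```

Training a network means nudging the weights and biases so the outputs move toward the desired answers, which is the "connections getting weaker or stronger" described in the answer.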
ao527j | Besides shape, what is the difference between different media cables such as Ethernet, HDMI, USB and others? | Technology | explainlikeimfive | {
"a_id": [
"efybvwp"
],
"text": [
"An interesting facet of these cables is how they deal with \"cross-talk\". Since you have a bunch of wires all carrying a current, and all conductors are antenna, and a current moving through a conductor produces an electromagnetic field, a signal on one wire might get picked up on another wire. Ethernet solves this problem by twisting the wires. Enter physics, here - but suffice it to say, it somehow helps to cancel out this effect. The difference between Category 5 and Category 6 ethernet cable is in the twists."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ao7kc7 | Why does some electronic equipment (smart tv’s, consoles, computers etc) respond so slowly when they are first turned on? | Technology | explainlikeimfive | {
"a_id": [
"efysiip",
"efywq3q",
"efyxom8"
],
"text": [
"They have a lot of things to load into memory and process (startup processes, kernal modules, etc) Think of it like eating breakfast. It takes a while to get to the eating part because you have stuff to do prior like cooking, getting the dishes, bringing them to the table, etc.",
"Not sure if you meant first turned on in the context of a hard shutdown or new out of the box experience, so I’m putting both in the ELI5: **First turned on**: You were just born. You have to set everything up from scratch. Install a language into your brain, install the movement system, fine tune the vision system etc. before you can become fully functional. **Turn on after a full shutdown:** You just woke up after a great night’s sleep. You’re a bit groggy so you have to take some time to wake up. Eat breakfast, drink your coffee, then go to work. **Turn on from standby (like most modern consoles)**: You just took a power nap. You’re up now but your faculties are still with you, you aren’t groggy and you’ve already eaten, so you are good to get back to work.",
"Because they're running a real operating system like Linux or BSD on the cheapest hardware available. They're a computer with a slow disk, crappy CPU and not enough memory, running a shitty, cut-down operating system that will just about run."
],
"score": [
20,
4,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ao8jl1 | On some game platforms, it's possible to start playing the game midway through downloading it. How is that possible? Shouldn't you need all the files for the game to work? | Besides, if you don't need the rest to start playing, why bother having the rest in the first place? P.S: The game platform I first witnessed this fact was Origin, a friend of mine said it happens in GOG as well but I'm not sure | Technology | explainlikeimfive | {
"a_id": [
"efyzrjc",
"efyzvpu",
"efz0nou"
],
"text": [
"The game will load the stuff it needs to run first like the game engine and the first few maps, leaving textures, models and maps that appear later in the game until last.",
"This is a common feature (see also: Steam, PSN, Blizzard launcher, etc). Most of the size of a game comes from art assets - textures, geometry, etc. A large game can flag certain assets as essential, and download those first, so you can go ahead and start playing. If you happen to progress extremely quickly, the game will typically stop you until the rest of the assets it needs are downloaded.",
"Say you have your whole day planned out: wake up > eat breakfast > go to the gym > go to work > go to the bar > go home. It will take you 2 hours to do the first 3 things. In a game you only need the data that immediately concerns those first 3. So the game downloads what it needs to the hard drive quickly and continues downloading the final 80% of your day in the background, knowing by the time you finish up at the gym the rest of the day will have had plenty of time to finish downloading."
],
"score": [
6,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
ao9ael | How do “Sleep tanks” work? | So recently, an article came out that the [New England Patriots have sleep tanks]( URL_0 ) at Gilette stadium which can simulate 4-5 hours of sleep in just 45 minutes. How does this work? Does our body truly act like it received 4-5 hours or is it all mental? | Technology | explainlikeimfive | {
"a_id": [
"efz5sbi",
"efz7o57",
"efzr4hl",
"efzmc4y",
"efzoq6u",
"eg01a1u"
],
"text": [
"Welcome to the 70s... again. Deprivation tanks are basically just soundproof, light proof bathtubs filled with body temperature water and a shit ton of Epsom salts. The water gets super dense, meaning even a bodybuilder would float like no body's business. With no light, no sound, and super heavy water at the same temp as you, you basically feel like you are floating in the middle of nothing. It's really conducive to meditation and relaxation. Some people hallucinate, some fall asleep, some feel like they are in an altered state. It doesn't REALLY replace a lot of sleep, that's a biological process that takes the same time no matter what. It is just very relaxing. Well.... for som e people. Some people really can't take it.",
"It's less that you get 5 hours of sleep in 45 minutes and more that the sleep quality is improved because the tank is meant to create the perfect environment for sleep and meditation. People normally don't get perfect sleep between the physical and mental ailments or bad sleep habits they may have. Sleep apnea can nullify literally like 8 hours of your sleep making you feel sleep deprived no matter what you do for example.",
"I don't know, these things worked wonders for goku on so many occasions, but they took at least 15-20 episodes so I think 45 minutes is a bit of an exaggeration",
"Most things athletes use to enhance their performance don’t have measurable benefits that would fall outside of the margin of error in a scientific study. A great deal of it is really placebo effect. Some of it actually works, but the benefit is astonishingly small. But remember, these are ELITE athletes. For someone who is a layperson, that tiny advantage won’t make us appreciably better. But with, say 100 elite athletes who are all at the top of their professions, the smallest advantage can have a marked effect on their performance. Even if it doesn’t have an actual benefit, the placebo effect can still make a major mental difference, and a mental edge can be every bit as effective as a physical one. The Freakonomics podcast has done a series on sports and their most recent was on this mental game. Athletes and teams will spend millions chasing a 1% edge. But often times they’ll ignore the data telling them what they should do, even if it gives them a bigger edge, because they don’t want athletes or coaches second guessing themselves ... confidence is such a big factor.",
"It’s a placebo effect. At best it might provide a psychological benefit in a similar way meditation works for some people. It can not speed up the biological processes that occur while sleeping especially not a full nights worth in 45 min.",
"I can help answer this. I worked at a major blue blood college football program athletic training department and have looked into the research behind it and have given a presentation on it. Basically it doesn’t really give you 5 hours of sleep in 45 minutes. But what it does do is make that 45 minutes of sleep a perfect 45 minutes of sleep. When you sleep on a bed, no matter how firm/soft/expensive it is. Your muscles are always working to maintain posture and fight gravity. With the sleep tank, you are completely suspended in the water, so all those small postural muscles can relax. Also, since there is almost no sensory input, there is nothing to pull you out of deep sleep like a car driving by your house or sunlight peaking through your blinds. Sleep is incredibly important to athletic performance, mental function and physical rest/recovery"
],
"score": [
398,
61,
52,
30,
8,
7
],
"text_urls": [
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
ao9n8o | How do smart phones not over heat? | I know that ARM architecture is different than your average x86 processor found in computers. But given that modern smartphones have comparable clock speeds to your low end x86 based laptops and desktops, and given that there's a lot less room for heat dispersion, I'm just curious as to why there is no need for an active cooling system on the device. & #x200B; Obviously there's a lot more down time for a smart phone, but even if I were to play say PUBG on my device from 100% to 0% battery, the phone would never seem to overheat | Technology | explainlikeimfive | {
"a_id": [
"efz8ijk",
"efzpee6",
"efzoz12"
],
"text": [
"It's the same method how anything else doesn't overheat. As you use your phone, it heats up, and when it hits a predetermined point, it artificially limits your performance until the temperature drops back below the point - rinse and repeat. Of course, PUBG for your phone is not the same game as PUBG on your PC, and so will be much less intensive and not cause your phone to heat up like your PC does.",
"They certainly can. I have a Pixel XL with a daydream VR headset, and if I'm playing particularly intensive games for more than 20 minutes, I'll get a warning that the phone is overheating and will shut down if it doesn't cool off.",
"Something I haven't seen as a response yet is also also important to keep in mind, and that's energy. Mobile processors are specificly designed to be as energy efficient as possible, with less importance given to performance. The applications available for phones are purpose built for these devices. Much more attention is given to optimization, making games and such much easier to run. When a desktop CPU is using 125w compared to a phones 2.5w, you can expect it heat up faster relative to it's size."
],
"score": [
39,
5,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
aobi39 | How do self driving cars recognize stop signs? | Technology | explainlikeimfive | {
"a_id": [
"efzscwj",
"efzn6xp",
"efzpa09"
],
"text": [
"Have you ever those reCaptcha tests that ask you to select the squares with street signs in them? They are used to help teach self driving cars how to identify what is and isn't a street sign. Edit: CPG Grey does a video about it - URL_0",
"They are pretty easy for computer vision to make out. Red octagons next to the road are a very rare sight; almost all are Stop signs. The computer can of course confirm by looking for the word STOP to be on it.",
"Okay, imagine building a million robots with wires in their heads attached randomly. Then, you give them all a test with pictures of roads, and ask them if they see a stop sign in the pictures. Some of the robots will do just a little better on the test than the others. The wiring in these robot's heads is mixed around a little to create new robots, and the ones that don't do well are discarded. If you have enough robots, a good enough test, and enough repetition, eventually you'll get robots that can recognize the stop signs- there's little room for confusion with stop signs, they're very similar to each other and different to everything else. Also, there's another way to do this, by instead of building millions of robots, building one robot and slowly turning up and down dials in it's head to get closer and closer to the right configuration. This is call deep learning it's much more efficient- one robot instead of millions- but can only find one solution, whereas the million-robot evolutionary method can find multiple configurations that solve the same problem, and might find a better one. [Here's a good explanation of how learning works.]( URL_0 )"
],
"score": [
16,
15,
3
],
"text_urls": [
[
"https://www.youtube.com/watch?v=R9OHn5ZF4Uo"
],
[],
[
"https://www.youtube.com/watch?v=aircAruvnKk&index=1&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi"
]
]
} | [
"url"
] | [
"url"
] |
|
aoc1yp | How does fast charging work? And why does it sometimes fast charge first try and other times it needs to be replugged in? | Technology | explainlikeimfive | {
"a_id": [
"efztgs1"
],
"text": [
"So, to start, we have to consider the usual standard: USB 2.0 is designed to provide a potential difference of five volts at a current of 500 milliamps. That's not an awful lot of power in the scheme of things, and that translates to taking a long time to charge. Enter fast charging (called Quick Charge by Qualcomm, and other names by other companies). Fast-charging is a technology where a compatible phone and charger \"talk\" to each other and agree on a higher voltage and/or amperage that both devices can tolerate without risking damage to either. If it needs to be plugged in again to recognize Quick Charge, that means that the data lines didn't make good enough contact for the two pieces to connect."
],
"score": [
24
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
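To put numbers on the negotiation described in the fast-charging answer above: the power delivered is just voltage times current, so agreeing on a higher voltage and/or current charges the battery faster. The sketch below uses the 5 V / 500 mA USB 2.0 baseline from the answer and a 9 V / 2 A profile as a typical fast-charge example; exact profiles vary by charger and phone, so treat the second figure as illustrative.

```python
def charge_power(volts, amps):
    """Power delivered to the phone in watts (P = V * I)."""
    return volts * amps

usb2_baseline = charge_power(5.0, 0.5)  # 2.5 W, the plain USB 2.0 spec
fast_charge = charge_power(9.0, 2.0)    # 18 W, an illustrative negotiated profile

print(f"{fast_charge / usb2_baseline:.1f}x the power")  # 7.2x
```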
aoclg9 | Why can my phone last so long on 1%? | Technology | explainlikeimfive | {
"a_id": [
"efzww17"
],
"text": [
"The phone doesn’t actually know the exact amount of battery life left in the charge, it just estimates the best it can. The phone reaches ‘0%’ when there is not enough power left in the battery to sustain the applications/background applications/operating system."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
aods2v | Why do countries even bother blocking websites if people can easily bypass and access them? | Technology | explainlikeimfive | {
"a_id": [
"eg04w2g",
"eg04vub",
"eg06m2m",
"eg0g0ez"
],
"text": [
"Because : 1. Most politicians don't have any idea about IT topics such as VPNs and proxies. 2. Blocking websites does prevent average users, children and people who aren't technically savvy from accessing them 3. It allows them to look like they are taking action and get political capital, votes from the conservative right etc.",
"Probably because not everyone knows what a VPN is. Not everyone is tech savvy I guess?",
"Because the vast majority of people in the world don't know how easy it is to bypass. Also, they block access to VPNs and other software that make it easy",
"If you know a Web site befor he gets blocked you might still brose it. But how many people will Know about this web site after that ? Yep a lot less. For the same reason there is a wall around your house. I sure can climb the wall but multiples people will be wallking into your garden if the wall was not here. Edit: If trump ever read this it work with a house but that about it. Do NOT try this at much bigger scale XD."
],
"score": [
28,
5,
5,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
aodxk1 | Why do services like Discord and Skype sound so much better than actually making a phone call? | If I'm using my mobile data on my phone to call on Discord, why is the sound quality so much clearer than a regular phone call? Surely we've come far enough to like, make it sound better? | Technology | explainlikeimfive | {
"a_id": [
"eg05wb0",
"eg05z27",
"eg0flf7"
],
"text": [
"Because voicecalls use smaller bitrate. Skype and discord calls also lags hellalot more than regular calls. Regular calls rarely ever lags as long as your reception is a ok.",
"When you make a standard cellphone call, it gets routed from the cell tower to the closest telephone exchange, where it essentially becomes a standard telephone call. Services such as Skype and Discord use Internet Protocol-based services, where there is a significant increase in available bandwidth in order to greatly increase the number of samples per second of the audio sent to the distant end.",
"Standard telephone calls over landlines use 64 kbps of data to send the audio with only a tiny amount of compression. Cell phones will sometimes use as little as 9.6 kbps of data depending on which vocoder is in use. These are laughably small bitrates in today's world. But, since this standard has been in place for decades it's hard to get everyone to agree to a better sounding system. There is a push to use HD Voice which sounds a lot better than plain old telephone service but it is limited by interoperability problems between carriers. Discord also uses 64 kbps by default but it uses compression that allows for 10:1 or 20:1 data savings. So, it sounds much better while using the same amount of data as the old standards."
],
"score": [
9,
4,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
aogbx1 | What determines when people can/can't use cartoon/movie footage on YouTube? | Like if I were to upload part of an episode of Spongebob on my account, it would probably get claimed by Viacom and the video would be gutted or something But what about channels where showing the movie clip is necessary? Channels like Film Theory or something (idk, first example that came to mind) need to be able to show what they're talking about, and their videos work out just fine I know legality and YouTube is always a hot mess but there has to be some rule for when it is/isn't accepted | Technology | explainlikeimfive | {
"a_id": [
"eg0ki10"
],
"text": [
"If you’re discussing a movie or a TV show and you sporadically show short clips as part of your commentary, it falls under “Free Use.” As long as the content and true subject of the video is your commentary and not the movie or TV show itself, you are legally allowed to show clips. It’s important to note that this does not mean you cannot be sued or a claim cannot be made against your video. You can be sued for anything by anyone at anytime. EDIT: As pointed out below, the term is “Fair Use” not “Free Use.”"
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
aognpa | Can astronauts use their cell phones to make calls or text? What about the internet? Isn’t all that information going from a cell tower, to a satellite? If so, wouldn’t they get better service? | Technology | explainlikeimfive | {
"a_id": [
"eg0nrb4",
"eg18uny",
"eg0ny0p",
"eg14l3n",
"eg0nxvk",
"eg167v5",
"eg1046t",
"eg1dgtv",
"eg1cbzt",
"eg0rxry",
"eg1edmq",
"eg0mxig",
"eg17uok",
"eg1ronk",
"eg1dyjk",
"eg19etu"
],
"text": [
"> Can astronauts use their cell phones to make calls or text? The International Space Station does have internet access. They can make calls or text over the internet/Wi-Fi (like Wi-Fi calling, Facetime, or Skype does), but they can't make a \"normal\" phone call because there are no cell towers. I don't know if they can just use it whenever they want since bandwidth would be limited and probably must be kept free most of the time for official business. > Isn’t all that information going from a cell tower, to a satellite? No, your phone connects to a cell tower, and then there are cables and wires connected to the cell tower that your data moves through. Neither you nor the cell towers connect to a satellite.",
"I wonder if right now, someone up in space is browsing Reddit instead of actually working.",
"The ISS is 250 miles from the nearest cell phone tower, so a conventional cell phone would not work. They do have internet service, via a variety of radio/etc links with NASA.",
"Given that ISS is moving at 17,500 mph wouldn't the doppler effect screw up the frequencies?",
"It depends on what you mean by \"cell phone\". If you mean the phone you have in your hands right now, it is highly unlikely that it would work in space. Our consumer cell phones look for nearby cell networks, which then connect to a nearby cell tower. Neither cell networks or cell towers exist in space, so these kinds of cell phones won't work. And the range for these types of cell phones is only 5-10 miles, because they use directional radio waves to travel horizontally. If you mean a satellite phone, which connects directly to satellites in orbit, then yeah. They could almost definitely make a phone call with them, depending on where they were in relation to those satellites. See, those satellites use directional signals that point straight down at the Earth, so if you were between the Earth and those satellites, it would work just fine. You could do it near the ISS, for example. And as you move further away from the Earth, they would stop working again, because they would quite quickly leave the maximum range for those types of communications. The ISS, for reference, uses VoIP from their internet connection. And, for goodness sakes, don't dial the wrong number. Imagine the long distance charge! /badjoke EDIT: Oh, and that's just kind of ignoring the physical problems that would occur due to temperature, pressure, radiation, or magnetism. A modern phone cannot operate in the temperature extremes of space, for example. In the areas around Earth, the temperatures can range from 240F to -150F, far too hot/cold for sensitive electronics.",
"Just FYI, cell phones don't even work in planes because cell tower radios aren't pointed up, they're pointed out along the ground because that's where people with cell phones are.",
"ELI5: You need to be within a few miles/kms of a cell phone tower to use it. ISS is about 250 miles up (500 km EDIT: 400 km).",
"> Isn’t all that information going from a cell tower, to a satellite? Nope. This is a common assumption but that's not how it works. Satellites are not used at all for cell phone networks. If they were, internet on your phone would be a **lot** slower. Satellite internet is really only used in places **way** out in the sticks where it's just too expensive to run cables for the few people that live there. It's slow and shitty. Even though radio waves travel at the speed of light, it's an incredibly long distance for a signal to get up to a satellite and back down. Cell towers have cables connecting them to wired networks, fairly similar to how a home internet connection works. A related fun fact, the vast vast majority of internet traffic going between continents does not use satellites either. For the same reason, it's really slow. The internet actually depends on big fancy expensive underwater cables that have been set up to cross oceans. We've been putting underwater cables across oceans for over 150 years now. The earliest ones were for telegraph and then telephone and at this point we have far more advanced ones for internet. URL_0",
"Fun fact: British space Station astronaut Tim Peake once mis-dialed while on the iss and got through to a random person in the UK and politely apologised and re-dialled.",
"It would work connected through the Wifi network, just like in an airline, but your regular network would not work, as you are above the cell towers (just like in an actual airplane).",
"Because the actual need for specialized hardware that had been mentioned, people are also forgetting that in space, radiation is a concern. Our atmosphere shields us from alot of that. Normal electronics have a high failure rate when put in space because chips and other components are not radiation hardened, and tend to have problems. Integrated circuits can, and routinely do fail due to radiation damage of the particular circuit because the transistors are so small. Radiation can easily induce a current in an improper manner, and destroy handfuls of transistors (which effectively make the entire integrated circuit dead). Just figured I'd mention this aspect.",
"Not a Space/Electronic buff so I cant fully do an ELI5, but I remember seeing a comment on a post saying that an astronaut called 911 in a space station for fun and the emergency responder picked up, so maybe?",
"You said it yourself. From the cell tower to the satellite, the satellites and the phones aren't equipped to talk to each other. And most cellphone calls never actually get routed through satellites. That said the ISS does have internet access from a satellite connection (similar but not the same as what planes use) so they can communicate well enough.",
"I remember when this service was introduced. Someone from the space community was being interviewed by a dumb reporter who legitimately thought that internet in space was so we could talk to aliens. And that’s not all, I saw another reporter asking an exec from Google about Streetview, she was concerned that burglars would be able to tell if you were out of the house. Er... no. It’s not LIVE data.",
"Satellite is usually not used for real-time communication unless there's no other route on the ground. The speed of light is very fast, but still takes over half a second for data to bounce off a geosynch orbit satellite and back. That's why satellite interviews and phone calls have such an obvious lag. The vast majority of internet, text, and phone traffic goes over cables in the ground and under the ocean, which can get data from point to point 10x as fast as satellites, at hundreds or thousands of times the bandwidth. Satellite is usually only used as a fallback plan for remote areas where the sender/receiver isn't in range of a normal network.",
"Geostationary satellites are too far away from earth for phone calls or internet. They have large ground stations that give them big bandwidths, which are suitable for batch delivery, usually TV. LEO communications satellites move quickly with respect to ground. Like GPS, they're only useful if there are several to choose from. A single phone call gets handed off from satellite to satellite. The cost of running a constellation of satellites makes satellite phones very expensive (source: knew a guy who did collections for a satphone company). The ISS, contrarily, switches from ground station to ground station. Their ground stations are large, and do nothing but follow the space station. Ferrying an internet connection, or phone over internet, is relatively low cost. Because the earth is round (or because NASA wants us to think it is) the ISS is [occasionally]( URL_0 ) out of sight of all of its ground stations."
],
"score": [
1894,
283,
201,
38,
33,
27,
19,
19,
17,
10,
4,
3,
3,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[
"https://en.wikipedia.org/wiki/Submarine_communications_cable"
],
[],
[],
[],
[],
[],
[],
[],
[
"https://space.stackexchange.com/questions/12296/when-does-the-iss-have-a-loss-of-signal"
]
]
} | [
"url"
] | [
"url"
] |
|
aoh6wn | How do those companies like Invisalign and Smile Direct Club fix teeth in 6 months when it took me 2 years with braces? | There has to be something severely negative about these companies or the results. Braces and moving teeth takes time from what I've been told so how can they promise a 6-month timeline for results? | Technology | explainlikeimfive | {
"a_id": [
"eg0s9gq",
"eg16dce"
],
"text": [
"Those times don’t apply to everybody with every tooth-related issue. Braces are often used for more severe misalignments which are obvious when the patient is just a kid. The teeth that still haven’t been corrected by that time usually have less-severe issues that did not obviously need correction, but that the patient might want fixed for cosmetic reasons when they’re an adult.",
"The easy answer is that they don’t. In my office we will often use Invisalign in place of braces when people want a more aesthetic option. Often we will take longer correcting someone’s teeth with Invisalign because it is difficult to make corrections on the fly and require a lot more time set up the orthodontic plan and also require greater compliance from the patient. As for smile direct club, please avoid that at all costs. It’s a product owned by aligntec (the company that also owns Invisalign) that is provided at a much lower cost, but with no oversight from a dental professional. I know you think we don’t know a lot or that moving teeth can’t be that hard, but trust me it is and should really be overseen by people who know what’s up."
],
"score": [
11,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
aojp6t | How do computers make color images black and white? | What is it doing to the rgb values to determine what shade to put there? | Technology | explainlikeimfive | {
"a_id": [
"eg1bjo8"
],
"text": [
"Let's say that on a scale from 0 to 255 that the RGB values of a pixel are: * Red = 128 * Green = 64 * Blue = 255 Average those three values together and you get 149. So, the black and white value of the pixel would be 149/255."
],
"score": [
10
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
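The averaging described in the answer just above can be written as a tiny function. This is a minimal sketch assuming 8-bit channels; real converters usually apply a weighted (luminance) average such as 0.299 R + 0.587 G + 0.114 B instead of a plain mean, because the eye is more sensitive to green than to red or blue.

```python
def to_grayscale(pixel, weighted=False):
    """Convert an (R, G, B) pixel with 0-255 channels to a single gray value."""
    r, g, b = pixel
    if weighted:
        # Luminance weights (BT.601), closer to what image editors actually do.
        gray = 0.299 * r + 0.587 * g + 0.114 * b
    else:
        # Plain average, exactly as in the answer above.
        gray = (r + g + b) / 3
    return round(gray)

print(to_grayscale((128, 64, 255)))                 # 149, the example from the answer
print(to_grayscale((128, 64, 255), weighted=True))  # ~105 with luminance weights
```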
aojzv0 | Discord | can someone explain discord to me and how it works? I want to find others to play xbox games with | Technology | explainlikeimfive | {
"a_id": [
"eg1ex4r",
"eg1lppx"
],
"text": [
"Discord is a semi-private hosting service for chatting. Each server is like a subreddit, where there are mods and rules and purposes. Anyone can made a server. If you're trying to find people to play games with you'll want to find a server with that purpose. It's basically a cloud version of teamspeak/ventrilo, which is why it's organized the way it is. With TS/vent you had to have a buddy physically host a server program on their PC and everyone else connected to it, or pay a company to host one for you. Discord kept the look/feel of the setup but it's just a chat platform like Skype/FB chat/ etc.",
"I play DnD on URL_0 using discord for voice. I also play a couple MMO’s and connect to different servers for those. I also have some of my XBL friends on another server(so they don’t have to opt in on DnD stuff or mmo stuff) I’m also in another discord for some audio work on I do on another reddit account(as are a bunch of Redditers from that corner of the web) Think of discord like a map of your town. All the cool places are on it. As are(potentially) your friends. Including those you don’t know about yet. It’s a good love child of like teamspeak or ventrilo, and an instant messager like aim, or icq. Tl;dr : it’s what Skype would be if it didn’t insist on REDACTED group calls. And arguably superior because of that."
],
"score": [
6,
3
],
"text_urls": [
[],
[
"roll20.net"
]
]
} | [
"url"
] | [
"url"
] |
aok7jb | How can some game services (Origin for my case) allow you to play games mid-way through download, before it's complete? | Technology | explainlikeimfive | {
"a_id": [
"eg1fwah",
"eg1g1oq"
],
"text": [
"Most of what makes a modern game so huge is the assets, especially graphical ones like character models and maps. So if they have it ordered in such a way where the base game code downloads, along with the assets needed for the first couple levels or so, you can play the game while it works to download the rest of the levels and stuff in the background.",
"What takes the longest to download is usually assets (models, images, video files etc), while the programming itself takes up a relatively small amount of space. Thus, if you only download the assets needed for the first stage of the game, you can allow gameplay while downloading the assets for the second stage of the game. Also, for many games that works like this (I know World of Warcraft uses this method), it will download the lowest quality textures and models first to get it to a playable state as fast as possible."
],
"score": [
4,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
aolo8m | How does a camera's shutter sync with plane propellars? | Technology | explainlikeimfive | {
"a_id": [
"eg1s6jg"
],
"text": [
"Usually entirely by chance... And it's less of a \"shutter\" and more of a frame rate. If the propeller spins exactly 30 times in a second, and your camera (and screen!) show things at 1 frames per second, it will appear as if the propeller (or wheel, or any other spinning object) didn't move at all after one second... In reality, it spun 30 times and ended up exactly where it started, one second later. Edit: tried to oversimplify, got math wrong"
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
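The frame-rate effect described in the propeller answer above is aliasing: each frame samples the blade's angle, and any whole turns completed between frames are invisible, so only the leftover fraction of a turn shows up on screen. A small sketch of that arithmetic, using made-up rotation and frame rates:

```python
def apparent_turn_per_frame(rotations_per_second, frames_per_second):
    """Fraction of a turn the blade *appears* to advance from one frame to the next."""
    turns_between_frames = rotations_per_second / frames_per_second
    return turns_between_frames % 1.0  # whole turns between frames are invisible

print(apparent_turn_per_frame(30, 30))  # 0.0   -> blade looks frozen
print(apparent_turn_per_frame(29, 30))  # ~0.97 -> reads as slow backwards rotation
print(apparent_turn_per_frame(31, 30))  # ~0.03 -> reads as slow forwards rotation
```

When the leftover fraction is just under a full turn, the blade seems to creep backwards, which is the familiar "wagon wheel" look in filmed propellers.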
aolpyo | Does playing music on an iPhone with the volume off drain the battery? | Assuming the music is already downloaded and the screen is off. | Technology | explainlikeimfive | {
"a_id": [
"eg1sgr5"
],
"text": [
"Well yes. This is because whatever app you are using to play music, wether is be Spotify or Apple Music or just the normal built in music player it is still running. Even if no audio is coming out, the app is still playing it."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
aoog9w | Why is the audio quality so bad on a lot of TV shows and movies from the 70's and 80's? | It seems like it's more recent that TV shows and movies have decent audio in the recordings, but music recordings seem to sound good no matter the era. A lot of people say that early Jazz has some of the best recording quality. But, going back to an 80's TV show, it sounds like things are being spoken into a tin can. Why is this? | Technology | explainlikeimfive | {
"a_id": [
"eg2g2ax"
],
"text": [
"Some of the early TV shows were only recorded on video tape. The quality was limited to the resolution and contrast of the era. Not good. Sound should have been reasonably good, but it would never be mistaken for high fidelity. Other shows with higher budget were shot on film. The resolution was far better than TV of the day required, so they still look good even on today's HDTV. Sound would be film quality as well. I have also noticed that some of the old shows in syndication are presented with quality far beneath the original. There appears to be some cut-rate engineering in the off-channel TV business."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
aopjbm | What are the clean-cut pros and cons of HBM vs GDDR Memory in graphics cards? | Technology | explainlikeimfive | {
"a_id": [
"eg2q1o8"
],
"text": [
"HBR has higher bandwidth, meaning it is faster to access. It's also more power efficient per for the same performance. However it's more expensive, for the time being."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
aoqpg0 | How does the physical film from cinema cameras get transferred to computers for editing? | Technology | explainlikeimfive | {
"a_id": [
"eg2v27l"
],
"text": [
"Very few movies these days are actually shot on film, they're mostly digital from the start. But for the few that are shot on traditional reel, the film is fed through a digital scanner which itself is a projector with a lens kit that focuses the image onto a CMOS or CCD sensor so that the film can be recorded digitally frame by frame."
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |