q_id | title | selftext | category | subreddit | answers | title_urls | selftext_urls |
---|---|---|---|---|---|---|---|
iewx5b | Why do smartphones have multiple cameras instead of using their best camera sensor with multiple lenses? | Top-of-the-line phones have a main camera sensor which is of far higher quality than the other cameras (tele, wide and sometimes macro). Why not use the main sensor, but have different lenses rotate into place like an old Bolex or some old two-focal-length point-and-shoots? | Technology | explainlikeimfive | {
"a_id": [
"g2jyev1",
"g2kcjs3"
],
"text": [
"moving parts makes it more prone to failure. and who's going to rotate the lens? the user or the software? users will forget and get annoyed and the software doing it will cause it to become slow, not to mention the more prone to failure part.",
"Marketing and practicality More lenses and a wider range of focal lengths are an obvious advantage to users, so they look great on the spec sheet of a new phone. Moving parts on phones however are bad - we want them to be as simple and reliable as possible, and every part that moves adds in one very obvious failure point. If you were to add in a rotating camera assembly, that means you have one additional part that can jam or break. Alongside that, cameras are high precision items, especially when miniaturised like on phones. Having a fixed camera module means that the lenses, sensor and other elements are all factory built and permanently fixed together, so it is impossible to knock the lens and misalign it from the camera (at least without the phone suffering some pretty significant damage). If you have a moving lens assembly, every time it moves there is the opportunity for it to misalign slightly, let in dust, or face other problems."
],
"score": [
9,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
iey063 | If I watch the same video in 720p, 1080p and 4K, what's the easiest aspect for telling the difference in video quality? | I can notice anything below 720p, and frame rate is something I can pick up on, but not video quality. I've been trying on multiple devices for ages and maybe I just don't know where to look? | Technology | explainlikeimfive | {
"a_id": [
"g2k4ocs",
"g2k9qku",
"g2k53mk"
],
"text": [
"Well for one, it requires your deviance to actually be able to display those ratios. Most can display 720 and 1080, but only relatively newer devices can display 4K, and some new devices still can’t. But since you said you’ve tried multiple devices I’ll assume you knew that. So the answer really is how “blurry” or pixelated does the video look? It is easier to notice these changes on larger screens because the higher resolutions are being stretched across larger surfaces. Making it easier to notice any tiny blues that would be impossible to see on a phone. Honestly one thing I might recommend is get yourself set up with a 4K computer monitor or TV, and then search of videos that can be displayed in 4K (the one that comes to mind to me is a slow mo guys video where they shoot paint through water). And then just watch a short segment of the video a couple times in 4k, and then a couple times in 720. By making the jump bigger any differences should be more obvious.",
"Well, resolution and video quality are kind of two different things. You could render a video at 4k at a really low bit rate and again at 720p but a higher bit rate and the 720p video would look better in many scenarios. In terms of pure resolution, you're going to want to look at areas with a lot of detail. Something like hair, distant scenery, or clothing texture. But if you're streaming these videos from YouTube or Netflix they often change the \"video quality\" along with resolution. There you'll also want to look at moving objects or camera pans. Most modern video stores the differences between frames instead of just storing them as a series of images. As such, still scenes on low quality videos can look pretty good. Although, this will break down when there are a lot of moving objects on screen.",
"Look at clothes people are wearing. The higher the resolution the more you can see details in the fabric. Looking at skin is also often a good tell, but that's more likely to be less apparent because of make-up. Interestingly, HD remasters of content often age poorly, especially effects heavy stuff because the if the effects aren't re-done, they look visibly worse than everything else. Buffy The Vampire Slayer and Babylon 5 both suffer from this."
],
"score": [
7,
6,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
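As a rough illustration of the resolution-versus-bitrate point in the answers above, here is a small Python sketch comparing how many bits an encoder gets to spend per pixel at each resolution. The bitrates are made-up example figures, not measurements from any real service.

```python
# Rough bits-per-pixel comparison: resolution vs. bitrate (illustrative numbers only).
RESOLUTIONS = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}

def bits_per_pixel(width, height, bitrate_bps, fps=30):
    """How many bits the encoder can spend on each pixel of each frame, on average."""
    return bitrate_bps / (width * height * fps)

# Hypothetical streaming bitrates (bit/s) chosen just to show the trade-off.
example_bitrates = {"720p": 3e6, "1080p": 5e6, "4K": 12e6}

for name, (w, h) in RESOLUTIONS.items():
    bpp = bits_per_pixel(w, h, example_bitrates[name])
    print(f"{name}: {w * h:,} pixels/frame, "
          f"~{bpp:.3f} bits per pixel at {example_bitrates[name] / 1e6:.0f} Mbit/s")
```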
if0m2b | What's the difference between Textures, Shaders, Materials, Meshes and Sprites in Unity? | Technology | explainlikeimfive | {
"a_id": [
"g2kj4uj"
],
"text": [
"Textures are the image that tells the renderer what colour a particular spot is, it is usually unique to an object, shaders are what tells the renderer what to do with the texture, dictates how light interacts with i,t, reflections and shadows and combigner multiple textures that control things like specularity (shininess) and normal maps (sub mesh detail that'd be too heavy to render with polygons) meshes are a 3d collection of polygons that make up a 3d object to which all of the above apply and sprites are a different item used in 2d games, basically images that can be moved or controlled and displayed like mario in ye olde mario games"
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
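The Unity answer above describes how the pieces relate rather than any particular code. As a purely conceptual sketch (Unity itself is scripted in C#, and none of these class names are Unity's real API), the relationships can be modelled like this:

```python
# Conceptual data model only -- not Unity's actual API (Unity uses C#).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Texture:          # an image: per-pixel colour (or normal/specular) data
    name: str

@dataclass
class Shader:           # a program telling the renderer how light interacts with the surface
    name: str

@dataclass
class Material:         # a shader plus the textures and settings it is fed
    shader: Shader
    textures: List[Texture] = field(default_factory=list)

@dataclass
class Mesh:             # 3D geometry: the polygons a material is drawn onto
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)

@dataclass
class Sprite:           # 2D case: just an image that is moved and drawn directly
    texture: Texture

# A renderable 3D object pairs a mesh with a material:
rock = (Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)]),
        Material(Shader("standard_lit"), [Texture("rock_albedo"), Texture("rock_normal")]))
print(rock)
```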
if0q9a | Can't I just vacuum my smartphone charging port? | My Samsung charging port won't take chargers due to what I assume is dirt buildup. I've tried using a toothpick but that hasn't got it out. Instead of paying $60 to a phone repair company, couldn't I literally just use a vacuum on it? The only part it could suck out would be the actual charger connector, but that's firmly secured, right? The other option would be compressed air; I just want to know if I can avoid buying stuff first. Thanks. | Technology | explainlikeimfive | {
"a_id": [
"g2kjpv5",
"g2kjirh"
],
"text": [
"You could try vacuuming, but since the port is basically sealed and not much will happen, compressed air should do a better job, but stay cautious because if you use too much pressure you can break the display from the inside from too much pressure. Also you could try a thin needle instead of a toothpick, but turn off the phone before sticking metal in the charging port Also you can try putting alcohol inside the port to help loosen the dirt, it doesn’t conduct electricity, but still turn the phone off and dry it before using it to be sure",
"Use a pin or sewing needle; the lint is usually pressed in there pretty hard, it needs to be scratched out. Be carful not to damage the connectors"
],
"score": [
16,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
if0ro4 | Why isn't there a (start-up) company that makes older technology printers with cheaper ink? | Technology | explainlikeimfive | {
"a_id": [
"g2kjhfs",
"g2kmgrp"
],
"text": [
"The ink is costly not because it’s some expensive exotic substance that is the only thing that works on modern printers, it’s actually pretty cheap, but the manufacturer prefer to sell cheap printers at a loss and sell the ink you need to use them at a huge markup. While you could make a printer startup, you cannot compete with the prices of the other brands since they are deliberately losing money on printer sales",
"This is actually a model of sales. Where you sell a product for relatively cheaply, but then the consumable for that product is very expensive. Another example is mens razors. Also, there is a HUGE cottage industry for refilling or aftermarket inkjet cartridges. The problem is that the OEMs do everything they can to make the cartridges proprietary to protect their profit margins."
],
"score": [
11,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
if1e77 | why is Internet speed measured in bits per second rather than bytes per second? | Bytes seem like the logical option given everything else in personal computing is given in bytes (ram, storage, file size etc.) | Technology | explainlikeimfive | {
"a_id": [
"g2knf39",
"g2koojb",
"g2kosfl",
"g2knfov"
],
"text": [
"Network speeds were measured in bits per second a long time before the internet was even a thing. In the 1970s back then it was beneficial to measure by a larger number - larger numbers mean more accuracies. Also, transmission is based on bits and packets, whereas storage is based on bytes (which is what makes the data based on a certain number of bits). The purpose of networking is to transmit data, not store it 🙂 Also, as explained by another Redditor: \"It probably had more to do with how in the past a byte was not always 8-bits. It could have been 4-bits, 6-bits, or whatever else a specific computer supported at the time. It would have been confusing to measure data transmission in bytes since it could have different meanings depending on the computer. That's probably also why in data transmissions 8-bits is still referred to as an octet rather than a byte.\"",
"Simple answer is data is transmitted in bits and so it's speed is measured in bits. Bytes are the standard in terms of computing. 8 bits make a byte. A byte is typically 1 character, (letter, or number). Computer memory uses bytes because the language the computer uses at this level is typically in hexadecimal. And each single character in hexadecimal is one byte. Internet speed though is measured in bits. This is because data is sent in its basic form of 1's and 0's. All data encrypted or otherwise is sent this way. Is referred to in IT as layer 1. (You can look at what is called the layer OSI model which show the different levels between what we see on the screen and how it gets there.)",
"Bytes are a higher level concept than bits. In fact during the history of computer development it wasn't even always 8 for all systems. Bits are also more closely related to the physical ability of the communication device to send signals, expressed in \"baud\" a.k.a. signal/second. Not every signal is a bit (for example the start and sync signals that mark the beginning of a transmission and carry no other data than telling the other end to prepare to receive), but for most of the transmission it's close enough. So it was already the de facto standard way to describe communications channels even pre-internet and it does give a more impressive number so ISPs saw no reason to break tradition.",
"First it’s a higher number so it sounds better, but also as far as the ISP is concerned, they are just transferring bits, one after the other and it’s just computers down the line that split them up in groups of 8, and also there is other stuff like parity and error correction in the middle, so it’s not exactly the same as pure file download speed"
],
"score": [
29,
8,
5,
4
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
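The practical upshot of the answers above is the divide-by-eight rule, plus some allowance for protocol overhead. A small sketch follows; the 5% overhead figure is an illustrative assumption, not a property of any particular connection.

```python
# Why "100 Mbit/s" downloads at roughly 12 MB/s: 8 bits per byte, minus some overhead.
def advertised_to_throughput(mbits_per_s, overhead_fraction=0.05):
    """Convert an advertised line rate to an approximate file-download rate in MB/s.
    overhead_fraction is an assumption standing in for packet headers, error
    correction and so on; real overhead varies by protocol."""
    usable_bits_per_s = mbits_per_s * 1e6 * (1 - overhead_fraction)
    return usable_bits_per_s / 8 / 1e6   # megabytes per second

for rate in (10, 100, 1000):
    print(f"{rate:>4} Mbit/s  ~ {advertised_to_throughput(rate):6.1f} MB/s of file data")
```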
if1njr | - Why do our phones sometimes sort of twitch (buzz/flash for a millisecond), before a call actually comes in? What is happening in that moment? | Technology | explainlikeimfive | {
"a_id": [
"g2l41xp"
],
"text": [
"I'm pretty confident I've managed to dig up the answer to this, in most (all) processors you have deep sleep states. Where the actual current is cut off from the entire processor cores is effectively offline. This is really advantageous for battery usage, the CPU core is using no power, makes battery last longer. Once you actually power up a CPU core there's a minimum power state, which in my phone is like 576 MHz, 826 MHz. If it really has shit to do it can clock all the way up to 2.65 GHz. Most processors should be able to switch between power states on the millisecond level, it's really not that big a deal switching states. The problem is if you have one random app that wants to check for new LIMITED OFFERS, you don't want to wake up everything and process it immediately now, it's not important. Boosting a core up from sleep to 576 MHz, to run this for a split second then powering off is also huge waste. It's really a constant battle between the OS (Google) and Apps (developers). The OS wants to shut everything down and conserve battery, while TikTok just wants your location. In Android there's two big features meant to handle this, first one is Doze which will schedule maintenance windows for apps, where the phone will CPU will wake up and process all scheduled tasks (like check new notifications). The longer the phone has been idle the longer Doze waits before triggering maintenance windows, capping out at only checking every couple hours. > **During Doze** > > Apps aren't allowed network access. > > App wakelocks ignored. > > Alarm clock apps are limited to 1 wakeup every 15 minutes > > No Wi-Fi scans are done > > No data syncs are done > > If an app is specifically receiving an SMS messages it's temporary whitelisted and can complete it's processing > > **Exit Doze** > > User interaction with device > > Device movement > > Device screen turns on > > Imminent AlarmClock alarm There's lighter states of Doze that will also allow apps to access their priority push notification and network, still keep Wi-Fi, GPS scans alive. There's some machine learning power management feature that will try to place apps in different priority buckets. Developers don't have any control over what bucket their app ends up in and it depends on user usage patterns and how much they interact with the app. **Active apps**, currently being displayed on screen, started activity. No restrictions. **Working set,** \"launched most days\". There's restrictions like the OS can delay jobs for up to 2 hours, scheduled alarms can be deferred for up to 6 minutes. The OS will deal with it when it's convenient and can delay your alarm for up to 6 minutes if it just doesn't feel like waking up, or if there's one thing scheduled for 7:35 and this app wants to wake up 7:30, then that app can just wait. **Frequent/Rare,** these apps are things used maybe one a week or once every couple days. Jobs can be deferred up to 8 hours, Alarms can be delayed 30 minutes, they're limited to 10/5 high priority messages per day, while working apps have no high priority restrictions. High priority messages is the only thing that will actually give an app connectivity and processing resources exactly when the apps wants. An \"Frequent\" app can only do 10 of these per day, so if an app is trying to send you another push notification, or TikTok is trying to get your location for the 11th time, the OS will just tell the app to fuck off, the user just doesn't care enough about you for you to keep waking up the phone. 
So it's really not the apps receiving the messages, or waking up the screen. All the apps have told the OS \"hey, if you get this kind of message, I really care about that, so please wake up and send it to me if you see something like this\". This is the Android transport layer (ATL) for android and Apple Push Notification service (APNs) for iOS. And this is why it probably can buzz before anything happens, [ URL_0 ]( URL_0 ) > Notification message > > FCM automatically displays the message to end-user devices on behalf of the client app. Notification messages have a predefined set of user-visible keys and an optional data payload of custom key-value pairs. > > Data message > > Client app is responsible for processing data messages. Data messages have only custom key-value pairs with no reserved key names (see below). Automatic messages is handled by baseline OS process. The call app and everything have to wake up, check it's cached memory, while the CPU is also working on finishing all the scheduled jobs. Then once the app has connected to everything it can actually read the message and decide what to do about it. I guess it takes a couple milliseconds."
],
"score": [
7
],
"text_urls": [
[
"https://firebase.google.com/docs/cloud-messaging/concept-options"
]
]
} | [
"url"
] | [
"url"
] |
|
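The answer above quotes Firebase Cloud Messaging's distinction between notification messages and data messages. Roughly, the two payload shapes look like the sketch below; the structure is paraphrased from the docs linked in the answer, so treat the exact field names as an assumption and check the URL for the authoritative schema.

```python
# Two FCM message shapes the quoted docs distinguish (field names paraphrased;
# see the URL in the answer above for the authoritative schema).
notification_message = {
    "message": {
        "token": "DEVICE_TOKEN_PLACEHOLDER",
        # Displayed automatically by the OS-level messaging service,
        # without waking the app itself:
        "notification": {"title": "Incoming call", "body": "Alice"},
    }
}

data_message = {
    "message": {
        "token": "DEVICE_TOKEN_PLACEHOLDER",
        # Only custom key/value pairs; the client app has to wake up,
        # read them and decide what to do -- which takes a moment:
        "data": {"type": "voip_call", "caller": "Alice"},
    }
}

print(notification_message)
print(data_message)
```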
if2l6d | Why are photos on my display always so bad at showing a smooth color gradient, when it comes to very dark -> black? | It's hard to explain, but I've realised that when there's a source of light surrounded by darkness in a picture, the shift from very bright to very dark is quite smooth, while further out the border to the blackness consists of rough lines and seems kind of pixelated, and the color difference to the lighter tone that comes before it is quite noticeable. Like, otherwise I wouldn't be able to see the border in the first place. | Technology | explainlikeimfive | {
"a_id": [
"g2l2krl"
],
"text": [
"Tom Scott did an interesting video on the topic so I’ll link it here if you’re interested URL_0 But a simpler answer is that with 256 shades of red green and blue, only so many of those shades can “represent” black, so those blotchy bands you see are where the color shifts from one shade (say 0 0 0) to another (say 0 1 1)."
],
"score": [
5
],
"text_urls": [
[
"https://youtu.be/h9j89L8eQQk"
]
]
} | [
"url"
] | [
"url"
] |
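The "256 shades" point in the answer above can be made concrete with a few lines of Python: a smooth, very dark brightness ramp collapses onto only a handful of distinct 8-bit codes, and those visible steps are the bands.

```python
# Why dark gradients band: a smooth 0.0-to-0.02 brightness ramp only has a handful
# of distinct 8-bit codes available to represent it.
def to_8bit(value):                      # value in [0.0, 1.0]
    return round(value * 255)

samples = [i / 200 * 0.02 for i in range(201)]            # very dark, smooth ramp
codes = sorted(set(to_8bit(v) for v in samples))
print("distinct 8-bit levels in the dark ramp:", codes)   # e.g. [0, 1, 2, 3, 4, 5]
# A ramp of the same width around mid-grey gets just as few codes, but our eyes
# are far less sensitive to the steps there than they are near black.
```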
if3m1w | What does “Turing-Complete” mean in computer science? | Technology | explainlikeimfive | {
"a_id": [
"g2l44ab"
],
"text": [
"Theory of Computation is a bit of a tall subject for ELI5, but the basics are: if you have a bunch of different ways to program a computer (either programming languages, actual hardware, or even mathematical abstractions), are any of them \"more powerful\" than the others? Alan Turing designed a hypothetical machine that can do some simple operations (it can read and write data from a long \"tape\" of storage, and it can make decisions based on the data it's currently reading and the state that it's in) - this is a Turing Machine. He then showed that a Turing Machine is powerful enough to do _any_ sort of computation you can do. Not that it'll be _fast_, mind you, but it's _possible_ Now that we know that a Turing Machine can do any computation that we'd want, we can answer questions about new computing systems: can this new system pretend to be a TM? If so, it can compute everything and we call it Turing-complete. TL,DR: if a system is Turing-complete, it can function as a computer"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
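To make the answer above concrete, here is a minimal sketch of a Turing machine simulator: a transition table, a read/write head, and a tape. It is not a formal treatment, just enough to show that "simple rules plus a tape" is all the model contains.

```python
# A minimal Turing machine simulator (a sketch, not a formal treatment).
# The transition table maps (state, symbol) -> (write, move, next_state).
def run_tm(rules, tape, state="start", blank=" ", max_steps=1000):
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example machine: walk right, flipping 0 <-> 1, and halt at the first blank cell.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run_tm(flip_bits, "10110"))   # -> "01001" plus a trailing blank
```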
if5nha | Why are airplanes loud and clear when we are thousands of feet away on the ground, but we can't hear them when we are 30 feet away inside the plane? | Technology | explainlikeimfive | {
"a_id": [
"g2lgrsm",
"g2lz5r7"
],
"text": [
"The plain has hella insulation. So when the cabin is sealed you cant hear the engines outside. Exactly like how you cant hear a (newer) car engine when inside the car with windows up.",
"What airline are you flying that you cant hear it inside? It is so loud that noise cancelling headphones were literally invented for Air Travel! Its a lot quieter than being directly outside with the engines going because it is insulated against sound and vibrations - certainly doesn’t reach ‘quiet’ levels inside tho!"
],
"score": [
12,
6
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
if5veo | If you support a company, does clicking their ads show support or does it charge them money? Same question for skipping them on youtube. | Technology | explainlikeimfive | {
"a_id": [
"g2ljoko",
"g2m1vtf"
],
"text": [
"Social media manager here: It depends on what platform you see them. On google, it will charge them each time you click the ad. On FB they get charged just for letting you see it and extra for clicking on it but it depends on how the ad is set up. On YT I'm not sure but I believe it's the same as google since google own youtube.",
"The company will get charged for every time you click an ad. Skipping a YouTube ad has no cost but clicking on it does have a cost."
],
"score": [
10,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
if91b4 | why IP addresses all look the same | I have been trying to learn how to add wifi functionality to my arduino project, and one of the steps involved was to find out the IP address of the wifi module. It was something along the lines of 192.168.0.1, but every IP address I've seen has those same numbers. If there are so many wifi devices everywhere, why do they all have this 192.168.0.1 style address? Shouldn't they all be unique? Sorry for being an idiot, but I am so confused. | Technology | explainlikeimfive | {
"a_id": [
"g2m41rt"
],
"text": [
"192.168.0.0 through 192.168.255.255 are used for what's called a private network (there are other blocks of IPs that are used, but 192.168.\\*.\\* is a popular one). They are not routable on the public internet as they are reserved for the private zone. What this means is on your own local home network (and that of many organizations) you will have many IPs in this range. They cannot connect to the internet except through a router. The router will take *one* public IP from the modem it connects to and use it on every device connected to it. However, it has to be able to tell those devices apart so it gives them a private network IP. Essentially this is done to reduce the number of IPs that need to be given out. If you have 500,000 private networks that all use the above range, and each network has 50 devices, you only have to give out 500,000 public IPs instead of 25,000,000 public IPs (one for every device on every private network - 500,000 * 50). Think of it like your household - you may live alone but many do not. If every person in every household had to have a *unique* physical address (rather than sending it to the same address) you would need a lot more physical addresses than we currently have. Instead everyone in your household just piggybacks on the same home address. It's *slightly* different in that we can just put a different name on the envelope, but the mail system doesn't care about that....once the mail gets to the right physical address it's up to whoever lives there to give it to the right person. This means you don't need to obtain a new physical address for everyone in your house"
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
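Python's standard library already knows which ranges are reserved for private networks, so the point in the answer above can be checked directly. This sketch uses only the stdlib ipaddress module.

```python
# Checking which addresses are private (reserved for local networks) vs. public.
import ipaddress

for addr in ["192.168.0.1", "10.0.0.7", "172.16.5.4", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    kind = ("private (reused in countless home/office LANs)" if ip.is_private
            else "public (globally unique on the internet)")
    print(f"{addr:>12}  {kind}")
```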
if9pol | Why can we upscale video to 4K and even beyond (Nvidia Shield) but can't upscale audio from low bitrate files? | Technology | explainlikeimfive | {
"a_id": [
"g2mbmi5",
"g2man8p",
"g2mn88b",
"g2nkv9z"
],
"text": [
"You definitely can. The issue is the same as with upscaled video - you're generally not actually accurately representing the missing data, just making a somewhat good approximation of what the computer thinks should be there. Few people really have 4K TVs large enough and watch their TVs from a visual distance where it is extremely obvious whether you're looking at 4k or not. Therefore, crappy \"4k\" video still looks decent. Doing an okay job of approximating the tone of an instrument or voice based off of a low quality sample is going to be easier to notice.",
"Actually we can! URL_0 But upsampling cannot add any new information, so if source is poor quality you cannot make it better.",
"One way that we regularly \"upsample\" audio is with surround channels. If you are listening to a stereo (2 channel) music recording from a CD or something on a surround (5.1 channel) system, your audio system tries to intelligently place certain elements as best it can in various speakers. There are often several preset modes, since the logic for music (which they assume you would want surrounding the listener somewhat equally) is different than watching a movie (where you would want dialogue in the center channel and music/fx in the surrounds). Like picture upscaling, these modes are not perfect but they are most of the time \"good enough\" for the average listener. NOTE: these best guess surround modes are different than features like Dolby Pro Logic. DPL recordings can actually compress 4 channels (LCRS) into 2 tracks which play like stereo on stereo systems, but for surrounds the Pro Logic decoder unpacks them according to that compression algorithm and it's not a guess, the tracks are discrete and specific.",
"A lot of this boils down to audio working very differently from video. Any given part of a video frame shows one thing, one object. There is a lot of space around the image for everything to be separated, identified, and for a neural network to try to guess what detail to add. Audio is like a one pixel image, or like a stack of dozens of images blended together. Everything is mixed. Different instruments have different frequencies, but they overlap and blend together and that is what we call music. Splitting things again is hard, and you'd have to perfectly do that in order to try to add detail like a neural network would. And our ears are very sensitive if anything goes wrong. That said, we can try. There are different scenarios to consider here. First, there is \"low bitrate\" as in a compressed file, like an MP3. MP3 and other audio compression formats work by throwing away parts of the sound that we can't hear (much). They literally punch holes in the audio and remove certain frequencies. This is similar to JPEG artifacts. In principle, you can smooth over JPEG artifacts and make an image look better (see e.g. waifu2x, which does that for anime-style art), but it isn't something that you can do for any image with quality, it only works well for certain kinds of images. For audio it's similar: you can try papering over the holes by filling them in with \"something\", and that does sound better (in fact, more modern compression algorithms like Opus effectively do this by default), but it will never be the original audio. In general, if audio is \"simple\" enough (like a voice only, or a single instrument) it probably won't have that many artifacts, and if it does then you've lost too much information already. Then, there is \"low bitrate\" as in a lower sampling rate than CD quality (44.1 kHz). In that case, you lose higher frequencies entirely (the audio is muffled). That information is gone. This is like upscaling a lower resolution image. The problem is that images and audio work very differently. With images, different things are in different parts of the image, and adding higher frequencies just adds more detail to them. However, with music, *different instruments are at different frequencies*, even though they all overlap to some extent. If there is an instrument that only exists at higher frequencies, like cymbals, and you chop off those frequencies (for example by putting the audio through a phone call, at 8 kHz), then those cymbals are gone and nothing will ever get them back. If it's an instrument that starts off at a lower frequency, like the human voice, and you just lost the higher frequencies (harmonics) which make the sound \"brighter\" (so it sounds muffled but you can hear it), then you can try to add them back by distorting the audio. This is called an \"exciter\" in audio production and it's a common trick to make parts or entire songs sound \"better\" or more interesting, but it's not something that will magically restore a low sample rate recording, though it might make it sound a bit better. There's also \"low bitrate\" as in lower bit depth than CD quality (16 bits). In that case, that's like adding noise, like film grain on an image - a (properly created) 8-bit audio file sounds exactly the same as 16-bit audio file, except with a bunch of constant noise added. This whole section also applies if the audio is high bit depth, but just has noise for some other reason. You can try blurring noise in an image away, but you'll lose some real information. 
You can do that with audio too (it's called a multiband gate, the Audacity noise removal filter works roughly like this), but it probably won't sound terribly good if you need to do it a lot. It works fine when the audio is silent (the noise gets removed) or when it's loud and distinct (you can't hear the noise much), but in the transition in between you get artifacts as quieter sounds mixed with noise suddenly cut to silence once they become quiet enough. If you do it too widely (to the song as a whole or to large frequency bands) then it sounds like cutting in and out. If you do it too finely (to very narrow frequency bands) then you're doing the same thing an MP3 does, punching small holes in the audio, and it literally starts sounding like MP3 artifacts. Finally, there is \"low bitrate\" as in CD quality, which you'll hear from the snake oil salesmen selling \"high bitrate\" audio at 96kHz/24bit and beyond. This is bullshit. CD quality audio can reproduce everything humans can hear, perfectly (we haven't fully gotten there with video yet, though think of it as a similar concept to a \"retina\" HDR screen; but technology has been able to do this perfectly for decades with audio). There are technical reasons to use higher sample rates and bit depths in music production, but for finished tracks, it makes no difference or can make things worse. You can downscale from any \"high bitrate\" audio like that to CD quality and it will sound exactly the same. What usually happens if a song sounds \"better\" in high definition like that is that it was mastered/produced differently, and they lie and tell you it's because it's high definition, when in fact it's literally a different song. I should also add that there's not a whole lot of research into this, and no reason to go off training neural networks like with video, because... If you want high quality audio, you can just get it. Buy the CD, or a lossless download version. The files aren't huge. If we come up with better ways of compressing audio to preserve quality, we can just recompress from the original lossless version. There just isn't much of a reason to attempt to \"restore\" or \"upscale\" things like MP3s. Video is different, and there is legit reason to upscale stuff that was originally produced in lower quality, while audio, well, has never been produced in low resolution (we went straight from analog to CDs). There are reasons to improve old analog recordings, and some of the above techniques apply, but this is something that usually an engineer/human will do partially manually once (using their ears and artistic judgement as to what to do to the recording) and then we all get to enjoy the finished product. No need for a universal \"makeitsoundgooder\" algorithm."
],
"score": [
184,
20,
5,
3
],
"text_urls": [
[],
[
"https://en.m.wikipedia.org/wiki/Upsampling"
],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
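A tiny sketch of the "upsampling cannot add information" point from the answers above: doubling the sample rate by interpolation only manufactures in-between values from the samples that already exist.

```python
# "Upsampling" a low-rate recording only invents in-between samples from the ones
# already there -- no new information appears. A toy sketch with linear interpolation.
def upsample_2x(samples):
    out = []
    for a, b in zip(samples, samples[1:]):
        out.append(a)
        out.append((a + b) / 2)   # the "new" sample is just the average of its neighbours
    out.append(samples[-1])
    return out

low_rate = [0.0, 0.8, 1.0, 0.3, -0.5, -1.0]
print(upsample_2x(low_rate))
# Every added value is derived from existing ones, so detail that was never captured
# (e.g. cymbals above the original Nyquist frequency) cannot come back.
```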
ifb0vb | Why isn’t renewable energy dominating the market yet? Wouldn’t a combination of solar/wind/hydro/tidal/geothermal energy cover at least a big chunk of a country’s consumption? | Technology | explainlikeimfive | {
"a_id": [
"g2mi91n",
"g2mhsp5",
"g2mieby",
"g2ng0lb",
"g2nkefj"
],
"text": [
"Fossil fuels in many cases are still cheaper than renewable energy. Importantly they are also very reliable and can adapt quickly to demand. We don't have very effective ways to store large amounts of energy so electricity needs to be generated when we need it, and that isn't always when the wind is blowing, the sun is shining, etc. If everyone comes home in the evening and turns on their appliances it isn't like we can expect the wind to blow harder to keep up. The sun isn't going to shine any more, in fact it is probably setting and solar energy output is decreasing dramatically. Also if we need energy in a specific place it is easy enough to ship coal or petroleum products to be burned there, but it isn't always the case that there is a convenient form of renewable energy to tap. Many of those challenges might be possible to overcome, but the alternative is inexpensive fossil fuels which feed into an infrastructure that already exists. If you have a billion dollar power plant that burns coal to power a massive city nearby, how do you get the funding to try replacing that with another billion dollar project which has challenges with only theoretical, unproven solutions? What is available now definitely works and the massive city isn't going to enjoy being your test subject if things go wrong.",
"There’s a huge amount of infrastructure already in place to use fossil fuels - think of all the power stations, refineries, tankers, gas stations etc. All that stuff would take a lot of time replacing. And generally countries don’t dismantle them unless they are old.",
"In addition to the other answers, which I mostly agree with, a lot of the green technologies are location-dependent. * Solar works less well the further you are from the equator, and it works less well when there's a lot of cloud. (It does still work! Just not as well.) * Wind only works in places where (amazingly enough) there's fairly strong, fairly consistent wind. * Hydro only works where there are rivers, and preferably rivers you can afford to dam. * Tidal only works by the ocean. * Geothermal only works where there's a good geoheat source close enough to the surface that you can dig to it. That's a lot of options, but some places are just SOL and don't check any of the above boxes. For them, green energy would take a long time to pay for itself and it'll be understandable if they're slow and not-excited about changing over.",
"Aside from hydro it’s less efficient and more expensive. Once it becomes more cost effective and you can store it effectively - it will make sense to use more of it.",
"From what I understand, a big reason why fossil fuel is still more efficient and cheaper than renewable resources is the conductors commonly used to transport energy across long distances are still too inefficient to make it worth while. You only reliably collect solar/wind/hydro energy from very specific places on the planet. Highly effective \"superconductors\" are EXPENSIVE AF and in short supply. Whereas you can simply move the coal or oil where you need it and burn it much closer to where the energy will actually be used."
],
"score": [
75,
9,
7,
4,
3
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ifgqga | Why do old videos look so blurry even when we remember them being clear? Is there a way to play them clearly? | Technology | explainlikeimfive | {
"a_id": [
"g2niv6m"
],
"text": [
"Your perception of whether something is clear or blurry is based on comparison. Back then, you didn't have HD to compare it to. Based on your frame of reference at the time it would have seemed clear. Also, VHS degrades slightly over time. Probably wouldn't be enough to affect how you see it, but VHS is also less clear than a TV signal (sometimes) so if you saw it on TV before, that might have made a minor difference."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ifjvwk | How can phones with powerful specs run 24/7 without overheating, while computers need giant fans to cool down? | Technology | explainlikeimfive | {
"a_id": [
"g2o1u6l",
"g2o3uho",
"g2o4903"
],
"text": [
"As powerful as phones are they are nowhere near as powerful as desktop cpus. Desktop cpus dont worry as much about thermals because there is room for a giant fan. Since theres room for it, let's not waste effort designing it to be super duper power efficient. Just somewhat efficient is good enough.",
"Desktop CPUs and GPUs are *significantly* more powerful than mobile ones. We're talking tractor engine compared to remote-control car. Try to do anything especially computationally intense on your phone, it'll get hot as fuck and drain your battery, while still taking 10 times longer. Mobile versions of software are significantly cut down, and even then they're still a lot slower.",
"Because the phone specs are theoretical maximums. Phone processors are designed to run really fast for less than a second, and then start to slow down to avoid getting too hot. The hotter they get the slower they run until they're running slow enough that the passive heat dissipation through the body of the phone is enough to keep them from getting any hotter. PC processors are hooked up to coolers that are strong enough to keep them cool at maximum load, so they just run at full speed all the time. So, even if the phone has a similar-ish \"top speed\", it doesn't matter for anything but tasks that are already so quick that you don't notice the difference."
],
"score": [
20,
9,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
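The throttling behaviour described in the last answer above can be illustrated with a toy simulation; every constant here is made up purely to show the shape of the effect (weak passive cooling forces the clock down, a big heatsink lets it stay up).

```python
# Toy model of thermal throttling -- all constants are invented for illustration.
def pick_clock(temp_c):
    if temp_c < 70:
        return 2.6   # GHz, boost clock
    if temp_c < 85:
        return 1.8
    return 0.6       # near the passive-cooling floor

def simulate(cooling_per_s, seconds=120):
    temp, total = 30.0, 0.0
    for _ in range(seconds):
        clock = pick_clock(temp)
        total += clock
        # heat produced scales with clock speed; cooling removes a fixed amount per second
        temp = max(30.0, temp + clock * 2.0 - cooling_per_s)
    return total / seconds

print("phone-like (weak passive cooling):", round(simulate(cooling_per_s=2.0), 2), "GHz avg")
print("desktop-like (big fan + heatsink):", round(simulate(cooling_per_s=6.0), 2), "GHz avg")
```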
ifnwcj | We are living in 2020, but movies and TV shows are still not being shot at 60FPS, why? | Technology | explainlikeimfive | {
"a_id": [
"g2onwgn",
"g2oo0tg"
],
"text": [
"60 fps makes live action film appear almost unreal. It dramatically reduces the \"cinema\" feel that people are used to.",
"Movies are just a succession of photographs with sound. But you have to know, when you take a picture : if the photo is taking more time to be shot, more light will go through the lense. And if the photo is taking less time, less light will go through. So when you shot a movie at 60fps, you'll eventually have half the possible luminosity that you can have in 30fps. And you will be forced to artificially boost it (on postproduction), and that lowers the video quality."
],
"score": [
8,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
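The exposure argument in the second answer above is simple arithmetic: each frame can only be exposed for at most its share of a second. A quick sketch follows; the 180-degree shutter figure is added context about common film practice, not something from the answer.

```python
# Doubling the frame rate halves the light available per frame
# (before any ISO or aperture tricks are used to compensate).
def max_exposure_ms(fps, shutter_angle_deg=360):
    """Longest possible exposure per frame; film/TV commonly uses a 180-degree shutter."""
    return 1000.0 / fps * (shutter_angle_deg / 360.0)

for fps in (24, 30, 60):
    print(f"{fps} fps: up to {max_exposure_ms(fps):5.1f} ms per frame "
          f"({max_exposure_ms(fps, 180):4.1f} ms with a 180-degree shutter)")
```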
ifo4m3 | why is it that whenever I need to find something online, there is always a forum from 2001? | Do such forums still exist? Will people in 2040 trying to figure out why their computer screen is flickering find old forums from 2020? | Technology | explainlikeimfive | {
"a_id": [
"g2osf3x",
"g2p29ie"
],
"text": [
"2001 was the time most websites started to grow, there are many websites, forums and chatrooms from back then, but they just got a different look these days. Some problems may exist way longer than modern websites which means every forum that discussed that problem way back then, has a higher click rate which means it has a higher priority in Google's algorithm.",
"I would like to point out, you can modify the search settings on all engines as well as YouTube to only show results from a specific date range. Sometimes the content is still valid mostly it's not. The search engines have a certain crawl rate that removes out dated stuff. Also the algorithms are not perfect. Interesting to note a Google search of your topic reveals a very popular thread back in 2009 where they had the same problems. Search results from 1995 and such. Everyone needs to take this seriously and do removal requests and archive old data and crawl It Off the search results"
],
"score": [
18,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
ifwh19 | If silencers don't suppress gunshots down to quiet little 'peeps' like in the movies, then why are they used at all? | Technology | explainlikeimfive | {
"a_id": [
"g2q5pxl",
"g2q53gi",
"g2q6vwx"
],
"text": [
"They reduce the noise about as much as a set of ear plugs will, so they provide hearing safety without everyone around having to stuff things in their ears. Nice to have when shooting in a group, you don't have to worry as much about who has hearing protection on.",
"Its the difference between a firecracker going off in your living room and somebody clapping outside.",
"Some rounds are pretty quiet when fired through a suppressor. They can be useful for someone like a soldier that needs it to be more difficult for them to be located. They are also useful for someone like a farmer that might need to shoot pests near livestock that would become spooked by gunfire to to kill an animal for slaughter. Most commonly they are used by recreational shooters who want to minimize the disturbance their shooting causes others that are near enough to hear the shots. Rounds that can be fired without hearing protection through a silencer have a rather short effective range though so they really aren't particularly useful. If you ask what suppressors are actually used for the answer is not much. The cases where they are useful there is almost always something that isn't a gun that works better."
],
"score": [
13,
8,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ifwx2q | Microsoft Monopoly | Technology | explainlikeimfive | {
"a_id": [
"g2q9b3d"
],
"text": [
"Windows was on about 95% of desktop computers. Which gave MS a monopoly on desktop OS. That was fine as long as MS did not use that monopoly to its advantage for other software. But windows was shipped with IE but not Netscape. So 95% of desktop computer users had IE automatically but had to seek out Netscape. This was using the monopoly on OS to get an unfair advantage in browsers."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ifzrb0 | Console games were developed on PC, but why is it difficult to port them to PC with a low rate of bugs? | GTA 4 and Dark Souls 1 are prime examples of the question I’m asking. They are considered by the gaming community to be among the worst PC ports ever due to low frame rates, new bugs and few settings to configure | Technology | explainlikeimfive | {
"a_id": [
"g2qrj3h",
"g2qs350",
"g2rhotb",
"g2qsiip"
],
"text": [
"There’s fewer variables when you develop for consoles. Let’s say you’re a big game developer. Your work computer is going to be very high end. Console generations only have a few different combinations of hardware specs. The possible screen resolutions are limited. The operating systems aren’t going to be running anything else in the background. So, you can test on basically every console scenario and be pretty confident about catching major bugs. For PC versions, you don’t know the hardware each user has so you set some minimum target and do your best. You can’t test every combination. There’s not enough time or money to do that. There’s almost inevitably going to be more bugs. A separate issue is that often the original developers don’t work on the PC port. They might but if a game developer is focused on consoles, their PC ports might be outsourced to a whole different company. If that company hires cheap, inexperienced developers, it probably won’t go so well. If they have great developers, the original code might have been written by inexperienced developers and it’ll be a mess. A lot can go wrong.",
"It takes a lot of time and money to iron out kinks in a product when shipping it to a new target platform. Games that were initially made for consoles are tested on those specific consoles are are developed and designed with this specific console hardware in mind from the ground up. When a game is later ported to the PC relatively little time is spent on ironing out the inconsistencies that weren't taken into account when first developing the game. Some companies take the time and effort to extensively test their PC ports. While others don't.",
"1. Consoles have little to no variation in hardware and operating system. It's a set target you can aim at to ensure your game runs well on that machine without too many issues. PCs have many combinations of hardware, and different operating systems, so without adequate testing and optimisation you're less likely to find and resolve performance or compatibility issues. That's exacerbated by the fact that PCs mostly having no central quality control authority, and the mindset of modern publishers to release on time and fix issues \"later\" through online updates. 2. Consoles have features that PCs don't or are significantly different. Whether that's unique hardware (like the PS3's Cell processor) or APIs for online services, there are application changes which must be made to support the new platform, which is an opportunity for issues to be introduced. Especially if the porting work is outsourced to another team, or you're under pressure to release on multiple platforms on the same date. 3. PC gamers have different expectations than console gamers, largely stemming from the historical culture of customisation the PC platform allows and the evolution of PC games. The ability to change resolution, graphics settings, button mappings, use different input devices, create mods etc. So the lack of these features is often disappointing for PC gamers who have come to expect it. If they're missing from a game that was previously released on a console, it's easy to blame that disappointment on a perceived bad or lazy porting process.",
"on OLDER consoles (ps3 era) some of them had specific hardware that only that console used. it required a specific setting and kit to even make. so, those ports will be more difficult bug wise because the instructions might not match up. kinda have to dig into comp sci a bit. the processor reads assembly language, which is mostly just hexadecimal or binary characters and some coding environments have words for a few things too. problem with assembly is that its made specific for each environment. pc has been essentially standardized with 32/64 bit of a particular set of instructions. the ps3 for example spoke a different assembly language entirely. if they ran it through a translator, theres likely some parts wont come through properly, like using a trick available to the hardware to make the game run a certain way. more modern consoles dont really have that issue; but what they all share (new and old) is usually the exact same equipment inside (of the same console) which means they are both limited in what they can do, BUT can optimise freely knowing that no matter what physical console of the targetted type gets the game, the workaround will work. these optimisations might work or not work on some pcs, especially since amd and intel do things a little differently. oh and theres another entire structure of processors outside amd and intel. ARM. and then you also have different OS like windows, the many linuxes, and macos. imagine you were making a game and your target was ONLY a specific iphone gen and ALL androids released during the \"high time\" of that iphone. all the iphones are nearly identical; write once and itll work the same. but on the android side... gl buddy lol thats more or less for bugs and crashes. the graphics or ingame interface is 100% redesignable but they wont because it would take actual work and \"everyone has a wireless xbox controller anyway\". some game devs put a little thought into keyboard and mouse, but a lot dont and it really shows. but thats kinda subjective and stuff so grain of salt it."
],
"score": [
33,
6,
3,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
ig1ysz | What exactly does an "EMP" do, and how long does it last? | Electromagnetic pulse; I assume it shorts everything out. Does it damage electronics so they will never work again? Thanks! | Technology | explainlikeimfive | {
"a_id": [
"g2r5kbn",
"g2r6a5m"
],
"text": [
"A moving magnetic field creates an electric field, and vice versa. An EMP is a near instantaneous and powerful burst of an electric and magnetic field that as it passes through objects will induce a current in anything made of metal. Think like a wireless charger or a transformer. Except in the case of an EMP, it's typically much larger and uncontrolled. Instead of a limited amount of current passing through coils put there specifically to harness the current being generated, it is a current passing through *all* of the wires more or less simultaneously, often in the wrong direction, and usually at a much higher voltage than the circuit is designed to handle. Which is essentially what a short does, too. Everything except superconductors have electrical resistance and your electrical device is not made of superconductors, so there is resistance. Resistance creates heat. Wires melt, semiconductors (like the transistors in computer chips) melt. Parts that do not melt are degraded by the heat. Solder melts off, destroying contacts that are supposed to be there and creating shorts that aren't supposed to be there. Resistors are overloaded and melt. Capacitors are overloaded and melt or explode, or shorts are created that cause the capacitor to empty where they shouldn't. Batteries may be shorted and catch fire. Whether or not the device survives depends on its complexity and the power of the EMP. Something delicate like a computer chip almost certainly won't survive and will be completely fried. But, say, a diesel engine vehicle that doesn't have any kind of computer in it will probably just pop most or all of the fuses and will work just fine once the fuses are replaced. A powerful enough EMP can even melt the wires so \"will it work\" becomes a subjective argument at that point - the engine will, after replacing all of the wiring but at that point it's not really the \"electronics\" that survived. If it's a mild EMP that can't generate a high enough current then stuff may or may not survive, it really depends on what the thing is. In general, though, it's a bit like chucking the thing in the bathtub and then chucking a toaster in along with it. You can shield electronics from EMPs using a Faraday Cage, which is a mesh of wires that absorbs the EMP and directs the energy around and away from whatever is inside. The downside is that the Faraday cage blocks *all* electromagnetic radiation so, for instance, a phone will not get a signal inside of it which kind of defeats the purpose of having a phone.",
"An EMP, like the name suggests, is just a pulse of EM energy that can be naturally ocurring (like a lightning strike) or man-made (like what accompanies a nuclear explosion). It can be as short as a few microseconds. The effects of an EMP therefore can vary widely depending on what the source was, how much energy was released, and even what type of electronics were affected. Generally, you'd expect affected equipment to sustain momentary fluxes of extreme current and voltage, which may or may not cause permanent damage. Overloads may cause arcing within circuitry which can often severely damage electrical equipment."
],
"score": [
6,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
ig7h91 | Why are CPU coolers for servers so small? | The consumer side of things is throwing out massive monstrosities like the [Noctua D15]( URL_1 ) and [Cryorig R1 Ultimate]( URL_2 ), while the enterprise side has these [cute little aluminum blocks]( URL_0 ). And server CPUs usually have way bigger TDPs than consumer CPUs, right? Then why do they have smaller heatsinks for hotter CPUs? | Technology | explainlikeimfive | {
"a_id": [
"g2s4thj",
"g2s2zoz",
"g2s2eay"
],
"text": [
"As others said, noise isn't really a concern in the server world. But also the CPU isn't the hottest thing in a server, these SAS-harddrives are. Plus rack servers already are a perfect wind tunnel, usually air goes in in the front, comes out in the back. An easy enough geometry to push huge amounts of air through. You have to cool your drives regardless so everything else gets away with just putting heatsinks in the path of the air.",
"Most rack server I have seen don't have active CPU coolers at all. They just have these big heatsinks that are made of aluminum and meant to provide as much surface area for the air flowing through it to touch. The cooling for the CPU and most other component in those servers is from the big fans at the front that blow air through the entire case. Often the whole server is designed to optimize airflow with strategic placed baffles and funnels to ensure that as much air as possible passes by the components that get really hot. In most desktop casing the components are arranged in a way that the the main fan for the case doesn't much to affect the individual components. It should perhaps also be pointed out that server parts don't necessarily grow hotter than desktop parts. Most servers lack such things as dedicated high performance graphics cards for example and the CPUs grow hot but not that hot. But it all boils down to design. [Here is a picture of the sort of server I am familiar with.]( URL_0 ) See that big row of fans at the front that blow air towards all the other stuff? This is the main source of cooling for the entire server. Normally there is plastic insert that channels the air towards the CPU and other parts, but it is removed in the picture to better show the layout. It is all designed to cool optimally, but that only works because the designers know where all the parts are. Desktops can be much more varied and in different shapes which would make this hard in such form-factors.",
"Servers are generally going to be running in an air-conditioned environment where nobody cares how much noise they're making, so small, noisy fans are acceptable. Also, a lot of the jobs servers do don't need much in the way of CPU power--it's the storage subsystems and memory that are the most important parts."
],
"score": [
8,
4,
3
],
"text_urls": [
[],
[
"https://i.imgur.com/1xYZNAS.png"
],
[]
]
} | [
"url"
] | [
"url"
] |
ig9253 | The difference between Rasterization, Ray casting, Ray tracing and Path tracing | Technology | explainlikeimfive | {
"a_id": [
"g2sbq03"
],
"text": [
"I’m not an expert (or qualified, really) but here’s the best breakdown I can do: Rasterization is the process of taking a vector image (points in space connected by straight or curved lines to make up a scalable, “infinite” resolution image) and converting it to a bitmap (a grid of pixels with a color value assigned to each) by checking what the geometry looks like at each given point. This is almost a 2D version of the processes used in ray casting, ray tracing and path tracing Ray casting is the simple idea of shooting a beam somewhere and seeing what it hits. It’s used for ray tracing and path tracing, but can also be used for things like finding a position in 3D space based on a 2D input (perhaps the position of a mouse pointer). By shooting this beam out and taking note of the point at which it first intercepts an object, we’ve found said point Ray tracing uses ray casting, but we do it from all sources of light in a given space. However, when a ray finds an object it can bounce off, change color, and more. This is a basic approximation of how photons work, bouncing around in space until they reach our eyes. However, while ray tracing makes for highly realistic renders, it’s far too costly for 60 frames a second gameplay (or real time rendering as it’s known). The solution is path tracing. Path tracing is ray tracing’s little brother. Similar to how we can solve math equations backwards and forwards, path tracing is the reverse of ray tracing. By only assuming we care about the light that reaches the camera, we can just shoot light out from the camera and see similar results. The upshot to this slightly impaired rendering is that we only have to think about one camera, instead of tens or hundreds of light sources Sorry if that was an incoherent mess, I’m on mobile rn"
],
"score": [
8
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
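Ray casting is the primitive the answer above builds everything else on: shoot a ray and ask what it hits first. Below is a minimal sketch of a single ray-sphere test; a renderer repeats this per pixel and, for ray or path tracing, again for every bounce.

```python
# Ray casting in its simplest form: one ray against one sphere, using the
# standard quadratic intersection test.
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the first hit, or None if it misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                        # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)   # nearer of the two intersection points
    return t if t > 0 else None

# A ray from the camera at the origin, pointing down -z, toward a sphere 5 units away.
print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))   # -> 4.0
```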
ig9pkf | What exactly does incognito browser do? | How incognito does it really make us? | Technology | explainlikeimfive | {
"a_id": [
"g2sdahm",
"g2smuet"
],
"text": [
"It means that the browser will start a new browsing session and will not record anything during it. This means two things: it will not use identifying information (such as cookies) from your normal browsing session, and any information collected while browsing incognito (cookies, browsing history) will be discarded and not stored on your computer. This means that anyone using your computer later won't know what sites you went to.",
"It's a bit of a misnomer - it doesn't anonymise you at all. What it does do is create an isolated browsing session from which things don't (shouldn't) escape - anything you do in that session should be lost forever once it's closed (edit: as far as things local to your own computer are concerned)."
],
"score": [
25,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
ig9te8 | - Why is Linux NOT considered an OS? What the hell is GNU? Among other things | I'm still tryna wrap my head around the concept of Linux, as I just transitioned to it on one of my computers. A lot of people say Linux is an OS like Windows and Mac, but then others refute and say stuff like "no it's a kernel" and then GNU gets mentioned and I'm automatically lost. So if Linux isn't an OS, does that mean that the distros are the ones who should be given the proper OS label? | Technology | explainlikeimfive | {
"a_id": [
"g2shf24",
"g2se0ni",
"g2se7dw",
"g2sehbt"
],
"text": [
"An operating system consists of two major parts: 1. The kernel, which is the lowest level piece of software in the system. Its role is to control memory and CPU access, and communicate with the various hardware devices in and connected to your computer. In a Linux system, this is performed by the Linux kernel. 2. A collection of software tools and services that allow you to do various basic things using that kernel. This includes things like reading and writing files, playing audio and displaying graphics, handling various kinds of network connections, and providing a set of standard APIs so that Application developers can code to a common standard. The second piece is a little more complicated. There is an organization that creates these tools and it has named them collectively GNU, short for “GNU’s Not UNIX”. They based their work heavily on tools created for the original UNIX operating systems, but they intentionally open sourced their derivatives, and allowed anyone to use them free of charge. Put together the Linux kernel and the GNU tools and you have a complete operating system foundation. We refer to this as “Linux.“ Some GNU advocates would prefer that you call it “GNU/Linux”. That’s fine. Just understand that what we call “Linux” includes the GNU tools.",
"The kernel is basically the part of an OS that is the foundation of all the other things added on top of it. It's always running in memory when the computer is on. GNU is basically a toolset of sorts that can be used in Linux, for example compilers, GUI libraries, etc. Linux is a kernel because it basically just provides very basic sets of features that a regular user wouldn't be able to directly interface with. Well to be clear, Linux is a monolithic kernel, so it has quite a complete feature set without needing much more to be fully operating. Linux distributions like Ubuntu then adds things over this kernel, like a graphical user interface and whatever else needed for usage.",
"Linux isn't a single operating system, it's a whole set of different operating systems which are all based on the Linux kernel, which is the part in charge of the most basic OS stuff - memory, hardware access, multiprocessing, etc. On top of the kernel there are different distributions, which just collections of (usually free) software on top of the kernel, such as a package manager, GUI, command line shell, and lots of others.",
"There's an Unix-like OS kernel called \"Linux\". (kernel is the foundation that knows how to do things like running programs and reading files, but it doesn't give you tools to actually use this capabilities). To make it a usable OS you need to add some user-facing stuff. Usually this stuff is the one created by organization called GNU. (GNU also has its own kernel, but it's not as popular as Linux kernel). When you combine kernel with this user-facing stuff you get a full OS, which confusingly we also call \"Linux\". You are confused because there's really a confusion: we use the word \"Linux\" for several different things. One of them is \"Linux kernel\", and others - OSes based an this kernel."
],
"score": [
7,
4,
3,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
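A small illustration of the kernel-versus-distribution split described above, assuming a Linux machine and Python 3.10 or newer (where `platform.freedesktop_os_release()` exists): the kernel reports its own name and version, while the distribution's identity lives in `/etc/os-release`.

```python
# Kernel vs. distribution: two different answers to "what am I running?"
# Assumes a Linux system; freedesktop_os_release() needs Python 3.10+.
import platform

uname = platform.uname()
print("Kernel:      ", uname.system, uname.release)   # e.g. "Linux 6.5.0-..."

try:
    os_release = platform.freedesktop_os_release()      # parses /etc/os-release
    print("Distribution:", os_release.get("PRETTY_NAME", "unknown"))
except OSError:
    print("Distribution: no /etc/os-release found on this system")
```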
igcnhc | What happens if you turn off your gaming pc without shutting down properly? | Technology | explainlikeimfive | {
"a_id": [
"g2ssuce"
],
"text": [
"Same thing that happens if you turn off a non-gaming PC. The hard drive cache and memory, which require power to store data, do not get a chance to finish writing data to disk. This can result in corruption of data in the hard drive."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
igdwwo | how laser eye surgery works. | How does slicing someone's eyeball with a laser fix someone's vision? | Technology | explainlikeimfive | {
"a_id": [
"g2t2by6",
"g2tmvj9"
],
"text": [
"Our vision is based on how the cornea and lens of your eye bends light to receptors in your retina. Instead of a lens replacement, you can cut the cornea and reshape it to a specific measurement and curvature to change the way the light refracts and bends. There are certain criteria for who is eligible for laser eye surgery.",
"when i was little i for real thought it meant you’ll be able to shoot lasers with your eyes"
],
"score": [
7,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
igfma4 | how servers in video games work, and what exactly is a server? | When the devs of a game say something like “we’re working on our server issues” what are they exactly working on? And what causes some issues like lag or “lack” of servers for players? | Technology | explainlikeimfive | {
"a_id": [
"g2teseo",
"g2teytj",
"g2tfjsk"
],
"text": [
"A physical server is a computer like any other, but designed and programmed specifically to connect to many 'clients' (the players' computers/games). They're normally quite large, noisy, powerful, and have a very good internet connection. A game server is the software side of this. It is the program that all of the clients talk to. It is what tells everyone's client what is going on, and sort of runs the game.",
"A server is a just a computer that's responsible for handling all of the traffic for a given application/system. In this case, it's a computer that handles all the traffic from the people playing the game. It's the \"single source of truth\" for where players are located, what actions they're performing, and what ultimately needs to get rendered to the \"clients\" (the people playing the game on their computer/playstation/xbox or whatever). So, the game is tracked on the server, and the clients (you) send requests to the server to perform an action, which the server is responsible for resolving with all the other requests it gets each \"tick\" of the game (games are not actually real time. They are interpolated and so there's always \"turns\" that happen, which is why ping/lag can affect gameplay). This, of course, assumes a multi-player online game. For single player games, servers are basically just checking to make sure the game is valid, handle updates/patches and some other administrative things.",
"What an IT guy calls a server is a physical box in a rack somewhere. What a player calls a server is just whatever they are connecting to that other players are. Let's call it an Endpoint for arguments sake. Sometimes those endpoints will reside on the same physical server as other endpoints, as different processes running on the same physical hardware (perhaps in Virtual Machines, but not necessarily). Sometimes a game may route you from one endpoint to another, such as when entering a dungeon in an MMO. A different physical server is setup to handle dungeon content, and your connection gets rerouted inside their network to the right endpoint."
],
"score": [
6,
4,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
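To make the "server resolves everyone's inputs each tick" idea above concrete, here is a deliberately over-simplified sketch of an authoritative server loop. Real servers read these inputs from network packets and send state back to clients; the player names, the fake input queue, and the tick rate below are all invented for illustration.

```python
# Toy authoritative game server: resolve queued inputs once per fixed tick.
import time

TICK_RATE = 10                                   # ticks per second
players = {"alice": 0, "bob": 0}                 # player -> position on a 1-D map
pending_inputs = [("alice", +1), ("bob", -1), ("alice", +1)]   # pretend packets

def run(num_ticks):
    for tick in range(num_ticks):
        start = time.monotonic()
        while pending_inputs:                    # resolve everything that arrived
            name, move = pending_inputs.pop(0)
            players[name] += move
        print(f"tick {tick}: {players}")         # this state would be sent to clients
        # Sleep whatever is left of this tick so the simulation runs at a fixed rate.
        time.sleep(max(0.0, 1 / TICK_RATE - (time.monotonic() - start)))

run(3)
```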
igi88b | Why does the WiFi seem to rarely go down at work, but go down at home all the time? | There are many many more devices connected at work than at home. | Technology | explainlikeimfive | {
"a_id": [
"g2tuulw",
"g2twril",
"g2tz97v"
],
"text": [
"They are also using much higher end equipment. At home, you probably have a consumer grade router, which is a router/switch/firewall/access point all rolled into a single device that cost maybe $100. At work, they are using _multiple_ discrete devices for each one of those functions, and each one of those devices probably costs between $500-$2500. Your home network cost you maybe $200 in equipment, while the work network could easily have cost five figures. That higher end equipment is much more reliable.",
"As well as all the other answers given so far, interference. At home you likely have many neighbours each with their own wifi and you will likely be stepping on each others' frequencies. If your work WiFi is physically separated by other homes or businesses who may be running their own by a large physical distance then there's going to be a lot less interference to ruin it.",
"Because WiFi at work is set up by professionals to work stably with many users and higher quality equipment in a carefully planned network of many access points that cooperate with each other across the whole building, and WiFi at home is a free-for-all battle royale with your neighbors with everyone using cheap consumer routers that fall over at the first sign of trouble and just step all over each other's signals. *If* you can solve the problem of fighting your neighbors, then it is perfectly possible to get good WiFi at home, if you spend some money on it. If you have a lot of neighbors, then that probably means switching to the 5GHz frequency band. In the old/default 2.4GHz frequency band, there are only 3 channels to choose from that don't interfere with each other, so as soon as you have more than 2 neighbors, you're going to be in trouble. In the 5GHz band there are dozens of channels to choose from, so you have a much better chance of being able to find one or more unused ones and set up a stable network. This takes some investigation into what your environment is like. Then you need to buy one or more good quality access points, depending on how big a house you need to cover. 5GHz doesn't go through walls very well, so you might need one every second room, and/or one for every floor, depending on layout. And then, if you have more than one, they should probably be set up with a special controller server to make them work with each other and coordinate so that your devices can smoothly switch between them and pick the best one. And you need to pick the channels so they don't overlap within your home. And then to make sure your router isn't a problem either, you might want to upgrade that. Now you might end up spending $1k on this and having to learn a lot about WiFi and networks, and this is why it takes a professional to pull it off for an office :-) That said, cheap consumer routers aren't *all* terrible, so plenty of people who haven't done anything special have pretty good WiFi, if their neighbors aren't causing trouble and their router is decent enough and doesn't have to cover a large area. (I do use a single pro grade WiFi access point in my small apartment, with a decent quality router, having worked out channels that are free and configured everything carefully, and it all works very reliably)"
],
"score": [
8,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
igjhia | What is the difference between POP3 SMTP and IMAP? | I've been having issues with my phone lately, as I am not able to connect my mail account. After googling the issue, it says to try switching servers between the three. What is the difference and what should I use for my mobile phone? | Technology | explainlikeimfive | {
"a_id": [
"g2u4zyn"
],
"text": [
"These are simply different protocols. Think of them as different languages your phone can speak to an email server in. Different servers may speak one or more of them. Since there's a different code base to send the communications back and forth in each (to an extent, the core data is the same, but they'll run through different translators) it's possible to introduce an error in one, but not in other languages. Like knowing the exact right word for something in english but not in spanish. Now we get into your specifics, the differences get a bit technical but roughly: SMTP is all about sending messages, you're sending a message to a server and saying hey please deliver this here. Short version it's like packaging up a letter and putting it in your mailbox. The post office (server) will then sort out how to get it to the local post office for your recipient. IMAP and POP3 are all about receiving messages, basically how do you check your mail. With IMAP you request copies of all the messages addressed to you. It's as if the post office didn't give you your actual letter, it scanned the letter and printed a copy for you. You could go into any branch and request that printout. This makes it easy to sync email across multiple devices and makes it easy to recover lost messages since mail servers usually live on heavily redundant hardware. POP3 doesn't get a copy, you get the original. Once you pick it up it's gone from the server, so that's more like how letters actually work. Once you pick it up the post office doesn't have it any more, you only have your local copy. IMAP is generally preferred since you can always load up your email on a new device and get everything from the server. You also get it on ALL devices connected to that account, instead of just the first one to check the box. It gets a bit more complicated than that, since the software making a POP3 request could then archive the message and make it available in multiple places instead of relying on the mail server to do it or take other actions. Some services might also push notifications out to the device to prompt it to make an IMAP or POP3 request instead of waiting for it to do it on its own. Mobile OSs are more friendly to those push requests, which is why you'll often see emails come in a minute or two faster on a phone than on a PC based client sitting right next to it. That last bit is why it gets hard to make a recommendation without knowing what platforms you're using but generally SMTP and IMAP are the defaults, but you could try switching to POP3 if you're having a problem because that problem might not exist on the POP 3 side of things."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
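Python's standard library happens to ship one module per protocol, which makes the "three different languages" point above easy to see. This is only a sketch: the host name, account, password, and the standard port numbers are placeholders, and real providers usually require app-specific passwords or OAuth rather than a plain login.

```python
# Same mailbox, three protocols. Host and credentials are placeholders.
import imaplib
import poplib
import smtplib
from email.message import EmailMessage

HOST, USER, PASSWORD = "mail.example.com", "user@example.com", "app-password"

# SMTP: *send* a message (drop a letter in the outgoing mailbox).
msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = USER, USER, "Protocol test"
msg.set_content("Hello from SMTP")
with smtplib.SMTP_SSL(HOST, 465) as smtp:
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)

# IMAP: *read* messages while the originals stay on the server.
imap = imaplib.IMAP4_SSL(HOST, 993)
imap.login(USER, PASSWORD)
imap.select("INBOX")
_, ids = imap.search(None, "ALL")
print("IMAP sees", len(ids[0].split()), "messages (still on the server)")
imap.logout()

# POP3: *download* messages; many clients then delete them from the server.
pop = poplib.POP3_SSL(HOST, 995)
pop.user(USER)
pop.pass_(PASSWORD)
print("POP3 sees", len(pop.list()[1]), "messages")
pop.quit()
```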
iglf3j | How does HTML5 differ from Flash? | Technology | explainlikeimfive | {
"a_id": [
"g2vfkk8",
"g2uhm4j"
],
"text": [
"The top answer here does a pretty good job at explaining how they are different from a technical perspective. I'd like to take a different approach and describe how we got to where we are, and why you hear the two in the same sentence so often these days. Back in the 2000s era (and perhaps earlier) the web was a pretty lawless place. HTML version 4 was fresh on the scene and it didn't really offer much in the way of dynamic web content. HTML was, and still is, largely just a way to structure static data. If you want things on the page to change, move, or even just look halfway decent, you're going to need some outside help. This is the role that JavaScript and CSS play -- JavaScript is a web designer's primary tool to manipulate the page dynamically, and CSS is their tool to make pages look snazzy. But even then, the offerings of these two were rather weak, especially compared to today. If you wanted to do anything *really* cool, you needed some really specialized tools. These would either come in the form of software you had installed on your machine that webpages would download small programs for (which is what Java, not to be confused with JavaScript, was originally designed for), or in the form of small plugins you installed into your browser to render special content. Flash came about as the latter. Flash was a plugin from Adobe that was able to do a shitload of things. Most notably to the common user would be full motion video, flash animation, and those crazy flash games. And it was crazy easy to write software for, too. Flash just simply made so many more things possible on the web that the current tools didn't offer, and it smoothed out a lot of things by offering some common, stable ground in an ecosystem where browsers didn't really follow feature standards. Lots of hacky workarounds for things that would seem so trivial today, like automatically copying text to the clipboard, were possible thanks to Flash. This hubris, though, would ultimately be its weakness -- it was hella exploitable. Even to today, Flash has been riddled with security vulnerabilities. It was a leaky ship that never stopped leaking despite constant patching. But people used it anyway, because the features it offered were unparalleled. HTML's next successor, HTML 5, was in planning for a long time (since 2004!), but it finally rolled out in 2014. Not long after, JavaScript's latest advancements were also being published, starting in 2015. And CSS was also chugging out some releases around this time too. All these new publications were a call to action for browsers, which had historically been pissing about doing their own things, to start putting on their adult pants and adopting some universal standards, so that developing for web wasn't so hellish and fractured. HTML 5 was the kingpin in a lot of these. Need a video player? We got a new ` < video > ` tag for that. Need to draw some pretty pictures? ` < canvas > `. Wanna play some sick tunes? ` < audio > `. I should take care to mention that it's not actually HTML *itself* that's allowing these things to work. HTML is still just a way to arrange static data. What it has done with these fancy new tags is it created a standardized way for webpages to say something like, \"put a video in the page right here\", which all complying browsers would simply know what to do with. What Flash did was it held the browser's hand, doing all of the heavy lifting of displaying multimedia content. 
Modern browsers today are advanced enough that they can do most of it themselves now. All they need is to be told is where to get the data and where to show it, which is what HTML 5 allows. Flash had already been on the downturn for some time. Adobe said they were abandoning it as early on as 2008. They also refused to support it for mobile devices. Adobe found it far more advantageous to instead try to get on the rolling ball that would become HTML 5. They claimed they would cease support for it altogether in 2020. And thus, that brings us to now in that very year, where most major browsers have all but eradicated Flash from themselves. Firefox plans to ban Flash for good by December of this year, and Chrome intends to do the same by early next year. By mid 2021, Flash will be all but gone except in the most niche applications. So, to briefly summarize, webdev in the 00's was a wild west hellscape, and Flash was there guns blazing performing black magic, which made it super popular. But HTML 5's development and eventual entry onto the scene, along with all of the other standards developed for CSS and JavaScript, made a lot of the selling points of Flash obsolete. There's a number of things Flash was always able to do that the combined trio of web tools still cannot do today, but there's no shortage of fancy tools to fill those niches, including some technologies being developed by Adobe themselves.",
"HTML is just a markup language -- your browser interprets tags and decides what to display and in what order to render them. It's like a paint by numbers. Tags new to HTML 5 simplify some functionality that modern web users want (like the < video > tag, which tells the browser to load a video file in the browser's own/preferred video player tool). Any web browser (and a handful of other programs) can load an HTML file and display it to you correctly. Flash is an animation tool. Things designed in Flash require Flash to run. So, an HTML file might have some Flash elements, but your browser must have a Flash plug-in to display them. It's more like a flip book than a paint by numbers: the animation is set to run in a specific pattern as long as you have the right tools to display it."
],
"score": [
8,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ign8wd | Why special characters like "ò" or "é" aren't allowed in passwords? Doesn't increasing the number of possible combinations make a password more secure? | Technology | explainlikeimfive | {
"a_id": [
"g2uu28s",
"g2ur7px"
],
"text": [
"This is my first ELI5, I hope I am doing it right. I will try and keep it basic, feel free to ask for more complexity. I have omitted some additional truths about storing passwords for the sake of specifically answering your question. Characters on a computer are encoded as binary. The letter a for example is actually `1100001`. Additionally there is a system called hexadecimal which is faster way of writing this. The letter a in hexadecimal is `61`. This system with the regular characters is called American Standard Code for Information (ACSII). Later on new systems were created for displaying other characters. One system was called Extended ASCII. When you do the à character in Extended ASCII the character is actually encoded as `11100000` in binary or `E0` in hexadecimal. Now the complication comes because there are other systems for displaying additional characters in a different ways. *Latin-1*, *UTF-8*, *Unicode*. However these other systems encode the à in different ways to each other. As all of these systems are widely used by different programmes and systems there can be some unexpected results. In many versions of *Extended ASCII*, \"à\" is encoded as `85`. In *ISO Latin-1*, \"à\" is encoded as `E0`. In *Unicode* \"à\" is codepoint `U+00E0`. When encoded with *UTF-8*, the result is `(hex)C3(hex)A0`. The end result meaning that there truly is no à. There are just several different ways of presenting à. Adding to this the additional complexity that when you save your password on a website, for security reasons it jumbles up all of the letters in a process called encryption. This is so that if a hacker gains access to the database they do not have the user's passwords. All in all its potentially problematic, costly and less robust to support the special characters when it doesn't actually make the password that much more secure. Although there is some added security against bruteforce attacks. An attack whereby a powerful computer will attempt every possible combination of characters. What truly makes a password strong is entropy (unpredictability and randomness). àààààà is no more strong of a password than aaaaaa. Sorry for my poor grammar.",
"1. Customer service issues. You'll likely end up with more people forgetting their passwords if they forget which characters have which passwords, which is harder to remember than just a word. 2. Ease in programming. There are a *lot* of ways people handle passwords on the back end, but for various reasons there are often certain characters you don't want to support, which usually means a whitelist of acceptable characters, and it's easier to cover just the standard ASCII characters."
],
"score": [
14,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
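The "there is no single à" point above is easy to demonstrate in Python. The snippet below shows the same visible character becoming different bytes under different encodings (the `cp437` code page is just one example of an "extended ASCII" table), plus Unicode's two ways of composing the character; the final hash lines echo the answer's point that sites store a scrambled form of the bytes, not your characters.

```python
# One visible "à", several different byte representations.
import hashlib
import unicodedata

ch = "\u00e0"                      # à as a single code point
print(ch.encode("latin-1"))        # b'\xe0'      -> the E0 mentioned above
print(ch.encode("utf-8"))          # b'\xc3\xa0'  -> the C3 A0 mentioned above
print(ch.encode("cp437"))          # b'\x85'      -> one "extended ASCII" code page

# Unicode can also build à as 'a' plus a combining accent: it looks identical
# but compares unequal until it is normalized.
decomposed = "a\u0300"
print(ch == decomposed)                                        # False
print(ch == unicodedata.normalize("NFC", decomposed))          # True

# Sites store a hash of the password *bytes*, so the encoding matters.
print(hashlib.sha256(ch.encode("utf-8")).hexdigest()[:16])
print(hashlib.sha256(decomposed.encode("utf-8")).hexdigest()[:16])   # different!
```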
igo75p | ; Why do electronics in general start messing up over time? | if there are no moving parts in electronics and computers can only do what they’re coded to do an nothing more than what makes them slow and buggy over time? | Technology | explainlikeimfive | {
"a_id": [
"g2uxa6f",
"g2uwvii",
"g2vg5b7",
"g2v4e3x",
"g2vusrd"
],
"text": [
"There is some physical degradation that happens in the electronics: corrosion and heat are two big factors. What you probably are experiencing though is added complexity. As programs and operating systems get more and more complex, they require more and more resources to do those complicated things. Without additional hardware, the experience seems slower. It’s kind of like if your one man lemonade stand expanded to include all the menu items of a chick fil a bit you didn’t build a proper restaurant with a large enough kitchen and drive through to serve all your new customers. You’d have a line around the block at your folding table, and the experience would be excruciatingly slow for customers. But it’s not because your folding table lost a leg - you’ve just got a grossly more complex operation. And with more and more complex things to do come more opportunities for “bugs” (aka screwups). At your lemonade stand there are only a few mistakes you can make - wrong amount of sugar/water/lemons, spilling drink, etc. But at chick fil a there are seemingly limitless things that can go wrong - cooking mistakes and cash register failures and leaks in the roof and clogged toilets and things you never thought of in your lemonade stand days.",
"While they physically stay the same the software doesn't. Websites get more and more features that take more power to run. Windows gets updates that adds features and patches security flaws, which all takes more horse power and makes things slower. Computers do also physically break down over time, but it's usually sudden and instantly kills things. This happens cause sometimes there are physical process that happen and wear down components (Hard drives and solid states) other times it's just heat cycles that make things move slightly as they heat up and cool down and eventually breaks things There are really old computer systems out there that just haven't been updated in years and the hardware has been taken good care of... They're just as fast as the day they were put it. (Think like bank systems)",
"Here is something I have noticed from my work. Solder cracks overtime like all metals do. With the expanding and contracting of the materials over time. The silicon lottery does not help as well you might have same chips from the same waffer that can handle different loads. Chip 1 might be able to handle more current, but if you push the same current through chip 2 if will fry that chip.",
"Electronic parts can still degrade. For one, electricity conducts heat. This means the parts warm up and expand when in use. Over time, this degrades materials from wires and chips to the housing, which can effect and even prevent performance Also, as time goes on electronics become obsolete because the hardware is less capable of running the programmed software as time goes on - simply because the software is designed for more powerful machines.",
"Embedded devices that do one thing and don't get upgraded do pretty much keep working over time, until they physically fail. An example would be a washing machine. The computer in there (there is one, if it's been made in the last decade or two) isn't going to slow down or stop working over time, until it is physically damaged from wear. A more complex example would be a WiFi router. Assuming firmware upgrades don't screw something up (or you don't upgrade the firmware), and no software bugs that cause it to screw up over time, it'll just keep working the same as the first day until something fails. However, you might *feel* as if it's getting worse if, say, you add more devices or upgrade your internet connection and it can't handle those higher demands very well. Computers are different because they're so complicated and we just keep throwing junk in them; the software changes over time. We either actually demand more of them, or have too much junk wasting resources, or both. If you took a hard disk snapshot of a computer, used it for 10 years as it got \"slower\", and then restored the snapshot, it would be exactly as fast as it was 10 years ago (and running 10 year old software). That said, even solid state electronics do break over time. Here are just some things that can cause solid state electronics to fail (and when that happens, they usually become blatantly unreliable or outright stop working entirely): * Electrolytic capacitors have actual liquid inside. This can dry out over time and cause them to fail. There was a whole decade or so where these issues were very common due to a company stealing the formula for the liquid and screwing up. * Solder joints can crack from the stress of warming up and cooling down. This is how Xbox 360s and some old MacBooks were dying all over the place. * Due to funky properties of metals, tin \"whiskers\" can literally grow between connections and short them out. * Flash memories (and related technologies like EEPROM) actually rely on trapping electric charge inside materials to store data. That charge leaks out over the course of a decade or two, and then the program gets corrupted and the device stops working. * Electric current inside microchips actually causes the metal connections inside them to spread out and break over time, like \"blurring\" the design (electromigration). * Dust and other external gunk can just collect on surfaces and, if it's conductive, short things out. * Moisture can also short things out or cause corrosion that eventually makes connections fail. And so on and so forth."
],
"score": [
19,
12,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
igs5kl | Why do CPUs get so hot, and what is preventing people to make CPUs run cooler? Will there ever be a groundbreaking advancement to allow classical CPUs to run at low temps without fans? | Technology | explainlikeimfive | {
"a_id": [
"g2vp4tq",
"g2vsf7m",
"g2w1wzo",
"g2w1ul4"
],
"text": [
"There is a lot going on in such a small chip. There is a ton of currents running all around and microcontrollers moving around, all that generates a lot of heat. The more stress you put on the chip the hotter it gets and if left ~~uncooked~~ **uncooled** can melt your whole system.",
"CPUs rely on electricity moving through wires and various other electrical components (transistors, capacitors, logic gates). All of these components have an inherent electrical resistance in them. When electricity passes through a resistive force, it loses energy in the form of heat. As other commenters have pointed out, we already run CPUs without fans. Your phone is a good example. The more powerful CPUs in desktops will probably not be able to be fanless for a long time. The reason is because metal will always have resistance to it (unless you super cool it with liquid nitrogen) due to the laws of physics. The semiconductors used in transistors are created with impurities in the metals, which is a fundamental property transistors rely on to actually work. These impurities also create resistance. Not only that, but there is an electrical component literally called a resistor, which is used to limit current flow and/or drop voltage in an electrical circuit. CPU dies are also becoming more and more dense as our lithographic process improve. This means more transistors, more metal, and more components packed into a smaller area.",
"They can run slower without fans. Do you want your computer to run slower? No! So they make them run faster, and add the fans. Really old computers didn't need fans because they could *only* run slow.",
"> Will there ever be a groundbreaking advancement to allow classical CPUs to run at low temps without fans? Moore's law, where transistor size gets exponentially smaller, has allowed continuous advances like this since soon after transistors were invented. So why are CPUs still hot? Because we use the advances to get more powerful computers instead of cooler computers. At the same time there are tiny CPUs now that use less than a watt which are still faster than the \"classical CPUs\" of decades ago; we just don't use those as the main CPU of computers but they power all kinds of tiny things that need embedded smarts."
],
"score": [
6,
6,
5,
4
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
igva6z | What is volt and Watt??? | Technology | explainlikeimfive | {
"a_id": [
"g2w64by"
],
"text": [
"Think of rolling a ball down a hill. Volts are how steep the slope is. Watts are how fast the ball is rolling."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
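To attach one concrete number to the analogy above: power in watts is just volts multiplied by amps. The figures below are arbitrary, though 5 V at 2 A happens to match a typical 10 W phone charger.

```python
# Power (watts) = voltage (volts) x current (amps). Numbers are arbitrary.
volts = 5.0     # how hard each bit of charge gets pushed (the "steepness")
amps = 2.0      # how much charge flows per second (the "number of balls")
print(f"{volts} V x {amps} A = {volts * amps} W")   # 10.0 W
```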
igvaex | Why was crysis so hard to run? | Im sure we all know of the joke thats been going on for ages, can it run crysis? Was crysis really that hard to run? Was it badly optimized? I have never understood. | Technology | explainlikeimfive | {
"a_id": [
"g2w7qga",
"g2w91yt"
],
"text": [
"Crysis is a Ferrari as opposed to something like Half Life 2, which is a Toyota Corolla. Crysis is meant to perform, display incredible feats to tease you of the future of gaming. It's isn't great at towing or carrying lots of passengers or groceries. It's about awesome speed and performance. This comes at a high cost, just like the Ferrari. Half Life 2 is just a fun-ass game that will run on most if not all PCs. It's a Corolla. It get's you from A to B in style and comfort but don't expect the thrills.",
"Crysis had cryengine 2.0 as basic. One of the first engine to use directx 10.0 with all it's new features. (like volumetric smoke). But since directx 10.0 was fairly new - the hardware renderer aka graphic cards weren't as fast as they would haver to be to render this beauty of game in full details at ultra details. With newer generations of graphics cards this has become better but cryengine 2.0 was a very resource hungry engine."
],
"score": [
13,
9
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
igz2ym | How do computers work? | Technology | explainlikeimfive | {
"a_id": [
"g2wr2g4"
],
"text": [
"\"How do computers work\" in the context of the body of your post is a really, really, *fantastically* broad question. Before I can start addressing it in a way that's actually responsive, it'd be great if you had one specific topic you had in mind first, and we can branch out from there."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ih2zlb | Where do deleted files go? | Technology | explainlikeimfive | {
"a_id": [
"g2xh6e1",
"g2xh6pi"
],
"text": [
"Effectively nowhere. When you delete a file, it simply writes over the \"Header\" which is a piece of information that tells the computer what the file is named, where it is located, how big it is, what format it is... etc. So once you delete that header, all the data still remains in the same place it was, it just has no indexing information so the computer sees it as free space. It remains there and available if you want to try to recover it, that is until you overwrite it with new data. This is why they say if you accidentally delete a file, to not use the computer until you can recover that file because your computer might come along and write a new file over that existing data, which effectively eliminates any trace of it.",
"They actually stay on your HD, but your OS marks those bits as safe to overwrite, meaning your OS looks at that space as free. If you use a secure delete, those bits are written over and deleted multiple times which makes it impossible to recover the data"
],
"score": [
9,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
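A toy model of the "deleting only removes the header" idea from the answers above: a bytearray stands in for the disk and a dict stands in for the index, both invented purely for illustration. The old bytes stay put until something new is written over them.

```python
# Toy filesystem: deleting a file removes only its index entry;
# the bytes stay on the fake "disk" until they happen to be overwritten.
disk = bytearray(32)          # pretend storage
index = {}                    # filename -> (offset, length)

def write_file(name, data, offset):
    disk[offset:offset + len(data)] = data
    index[name] = (offset, len(data))

def delete_file(name):
    del index[name]           # note: the disk bytes are NOT touched

write_file("secret.txt", b"hunter2", offset=0)
delete_file("secret.txt")
print(bytes(disk[:7]))        # b'hunter2'   -- still recoverable
write_file("notes.txt", b"groceries", offset=0)
print(bytes(disk[:9]))        # b'groceries' -- now the old data really is gone
```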
ih35sl | How do news websites have scrolling into a new article, with seemingly no refresh between the two? | I was reading about the new [Mafia: Definitive Edition]( URL_0 ) preview build on Forbes (not a preference, just the first link I saw) and as I finished reading about the article, I noticed it went straight into a new one when I scrolled down. The more you scroll, the more articles appear and you can just zoom through them without the page refreshing once. There is no refresh, but the link changes when I go back and forth. I can't figure out how it does it! I have a basic understanding of web design, but this goes beyond anything I know. | Technology | explainlikeimfive | {
"a_id": [
"g2xl63a",
"g2xkxue"
],
"text": [
"It is done using JavaScript. JavaScript provides a bevy of capabilities; among them it can load data from a server, modify the contents of the page, monitor your scroll position and change the contents of the address bar. What you're seeing is a clever application of those four capabilities. The page monitors your browser's scroll position. When you scroll 90% of the way to the bottom, it requests the next article from the server and modifies the content of the page to include the new article at the bottom. As you scroll into the new article, it changes the address bar to show the address of the new article.",
"One person already answered part of your question (lazy loading), but regarding changing the URL in the address bar, this is done with [pushState()]( URL_0 ). One you load the new article, you can use that method to change the URL to the new one."
],
"score": [
6,
4
],
"text_urls": [
[],
[
"https://developer.mozilla.org/en-US/docs/Web/API/History/pushState"
]
]
} | [
"url"
] | [
"url"
] |
ih39gc | Why does the effectiveness of rechargeable batteries that are in phones or laptops for example, deteriorate over time and seem to not be able to hold a charge for the same time? | So for example, if a brand new phone has 12 hours on screen battery life, but a year or so later could barely manage 8 or 9 hours, what causes this? | Technology | explainlikeimfive | {
"a_id": [
"g2xjdpj",
"g2xnpe4",
"g2yke35",
"g2yilrp",
"g2yiit6",
"g2yi4rp"
],
"text": [
"Rechargeable batteries use a reversible chemical reaction. In the normal discharge direction, they release electrons to one side of the battery and we flow those through the device to power it before we return them to the other side of the battery. When we recharge the battery we force electrons in the opposite direction, reversing the chemical reaction. But...the reaction isn't perfect. The battery will physically change over time; some parts will degrade, some of the chemicals won't fully revert to their original form, some side chemicals will form. As you do this over and over and over the amount of chemical available in the right form to do the power reaction fades away.",
"A modern phone battery (lithium-ion) has a crystal structure that allows it to hold a charge. Every time it is charged/discharged, it gets tiny cracks in the structure. After enough charge/discharge cycles the cracks become severe enough that it cannot hold as much charge. You can look up a technical manual for most batteries that can tell you how many charge/discharge cycles to expect out of the battery, which is more important than the battery's age alone.",
"Imagine a herd of hungry anamals on some dry desert plains. On the edge there's a river that can't be crossed. There is a lush utopia on the other side that every animal wants to get to, but can't cross the river. Your phone is powered by the animals gratefulness, so you fly some animals over to the lush utopia whenever you want to browse the Internet. Pretty soon though, you run out of animals to be grateful, so you need to fly those animals back. When they get off the plane, they run towards the river because they miss the lush, lush greenness, and when they get to the waters edge, they kick a bit of dirt into the water. Over time, more and more dirt gets kicked into the water, and closer and closer to the utopia they can get. One day the dirt reaches all the way over the river and forms a narrow path that slowly lets animals pass over to the utopia. So you end up having to send more and more animals back to the dessert (charging) than you send to the utopia (powering your phone) and eventually that dirt path becomes so wide it doesn't make sense to fly animals anymore, so you throw them away and get a new utopia, river and desert full of hungry animals. Edit: So the animals are electrons, the desert plain is the anode plate, the utopia is the cathode plate, the river is the electrolyte seperating the two plates, and the dirt kicked into the river represents the small amount of dissolved anode in the electrolyte as it recombines onto the anode plate as close to the cathode as possible, and that's what degrades a lithium battery. Plane rides to utopia are you consuming battery charge, while rides to the desert are recharging the battery.",
"Used to work in battery tech. Lithium ion batteries have a cathode and electrode plates in them. The ionic liquid that makes up the material between the plates is filled with positively charged and negatively charged molecules and free moving lithium ions. As the battery discharges the positive ions move towards the electrode which is offset by the electrons moving through the circuit. The ions get deposited onto the plates surface, so there is a physical change in the battery. Then when the battery is charged the opposite happens. But the process isn't 100% perfect and ions get deposited onto the surface and get stuck, so the overall capacity of the battery diminishes over time",
"Ok so a battery contains a rod and a pool of liquid. Over time the rod melts into the liquid and this releases electricity. If you put electricity into the liquid, it “freezes” it back into a rod. IE: Melting Rod - > electricity Plugging the phone in puts electricity in so builds Rod. However, the rod doesn’t build perfectly every time and over cycles it gets shorter or more deformed. By about 200 cycles the Rod is crap so the battery is crap. This also helps explain where the battery advice comes from: If you charge your phone little and often, you only have to re-build a bit of the rod so less of it becomes deformed over time. If you keep it on charge when the rod is fully built, you damage the liquid cuz it can’t freeze into more Rod. This ain’t really how it works, but it works for an ELI5",
"As others have said, it's just a form of wear on the chemical structure of the battery. However, every single battery type operates differently so exactly how this wear occurs will vary depending on which rechargeable battery you're talking about. For example, lead acid batteries in a car stop working over time because the lead plates that diffuse particles in the acid and form a bridge for the current inside the battery are supposed to reclaim the lead inside the acid after the battery is no longer active. Over time the lead stops sticking to the internal plates and allows electricity to flow regardless of whether the car is on. This allows the battery to drain slowly into ground and be dead when you attempt to next start the car. Additionally, the plates are no longer properly equipped to hold charge and don't function as well for storage. There's also other small defects over time such as acid leaking out the terminals due to overcharging from the alternator, but the above is the primary reason for this battery type."
],
"score": [
866,
255,
19,
9,
5,
5
],
"text_urls": [
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
ih44a8 | How does your phone know when it needs to rotate to the side? | Technology | explainlikeimfive | {
"a_id": [
"g2xoygu"
],
"text": [
"Each phone has a sensor inside called an accelerometer and a sensor called a gyroscope. These sensors detect tilt and orientation and your phone screen rotates depending on the reading from the sensors."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
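A hedged sketch of what software above that accelerometer might do: gravity shows up as roughly 9.8 m/s² along whichever axis currently points toward the ground, so comparing the x and y magnitudes gives a crude portrait/landscape decision. The sample readings are made up, axis and sign conventions differ between platforms, and real phones smooth the signal and combine it with the gyroscope.

```python
# Crude orientation guess from accelerometer x/y readings (in m/s^2).
def orientation(ax: float, ay: float) -> str:
    # Whichever axis carries most of gravity (~9.8 m/s^2) is the "down" axis.
    return "portrait" if abs(ay) >= abs(ax) else "landscape"

for ax, ay in [(0.3, 9.7), (9.6, 0.5), (4.0, 4.1)]:   # fake sensor samples
    print((ax, ay), "->", orientation(ax, ay))
```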
ih9xvt | What is the difference between a wifi router and a modem? What is LAN and WLAN? What are packets? What is ethernet? | Technology | explainlikeimfive | {
"a_id": [
"g2yvi4r",
"g2yvw20",
"g2ze74g"
],
"text": [
"- A modem transforms analog signal (the one in the cable that arrives to your house) into digital signal. Your wifi router is probably a modem as well. A wifi router is what lets you connect wirelessly to the network. - LAN is a network of computers that are physically close. WLAN is a LAN where those computers are connected wirelessly - A packet is a self-contained block of information that can be exchanged by devices - Ethernet is a (extremely common) protocol used to pass information through wired connection. Sometimes it refers to the ethernet cable, which is the one you probably use to connect to the router when you connect directly with a cable. Obviously this is an ELI5 so this is glancing over a lot of nuances",
"1a) a router will move a signal to it's intended destination. Think of it as a train the train passes through many stations and arrives at the one that is yours. 1b) a modem takes the information coming from outside your house or business and translates it into a type of data that routers and other devices can use. Its full name is a MOdulator/DEModulator for this reason. 2a) LAN is an acronym that stands for Local Area Network. It represents all the wires and devices they are connected to within a network. 2b) WLAN stands for Wireless Local Area Network and it represents all the devices and the invisible connections through WiFi and such that connect them all. 3) packets are bits of information sent over networks through routers through modems to reach their destinations. They have layers to them and each layer does something specific from telling the routers their destination to ensuring that their destination is the correct one. 4) ethernet is a standard of wire that is used within small home offices and offices in general.",
"A packet is like a piece of mail. It usually has a to and from address, and things read these addresses to get it to its destination. The internet is kind of like the postal service in that people can address a packet to someone far away, and the internet facilitates it getting there. A modem is like the mailman. He delivers all the mailbox at the address 123 Internet Street. He also checks the mailbox for any outgoing mail someone might want to send out. A router would be someone that sorts the mail at the house. They look at the mail coming into 123 Internet Street's mailbox, and then puts it into a cubbyhole based on who the mail was indented for. So mail for the 'mom' goes in the mom box, mail for the 'dad' goes in the dad box. Unsolicited junk mail hopefully goes right into the recycling. A LAN is the name for the group of cubbies. Ethernet brings the mail from the cubbies, to the whomever it was addressed to. A WLAN take the mail from the cubbies and tries to throw it to you from across the room. Sometimes its aim is pretty good and the packet makes it to you. Sometimes it misses, so it has to try a few times before the packet makes it to you. & #x200B; Think of it like this. Grandma wants to send you a birthday card. She addresses it to you and puts it in her mailbox (modem). With the help of the mailman its gets sent across the country to us (internet). The mailman puts it in our mailbox (modem). I get it from the mailbox, read who its addressed to (router) and then place it in the appropriate cubbyhole. Once its in the Cubbyholes (LAN) it has to get to you somehow. (Ethernet) Our house has tubes that suck up the mail and deliver it right to your bedroom. Its pretty reliable unless something breaks (WLAN) I do my best to hit your door from down the hall. Less reliable, but I'm lazy and dont want to install tubes. The internet is also not a truck."
],
"score": [
26,
5,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
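To make the "packet = piece of mail with a to/from address" analogy above concrete, here is a minimal sketch that sends a single UDP packet between two sockets on the same machine. The port number is arbitrary and nothing here models a real router; it just shows data traveling with a destination address and arriving with a source address attached.

```python
# Send one UDP packet from one local socket to another.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))        # the "to" address (port chosen arbitrarily)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, LAN", ("127.0.0.1", 50007))

data, source = receiver.recvfrom(1024)     # source is the "from" address
print(f"got {data!r} from {source}")

sender.close()
receiver.close()
```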
ihc0t1 | In the USA, why do emergency broadcast warnings sound like absolute garbage? It’s usually a robotic sounding voice that sounds like they are reporting from the middle of a static storm. Why is there so much extra noise in these recordings? | I’m referring to the actual message, not the warning tones at the beginning. :) | Technology | explainlikeimfive | {
"a_id": [
"g2zdvzg",
"g30k4nh",
"g2zrgs4",
"g30f0kl",
"g30w2ws",
"g30uu7r",
"g31o0h3",
"g31f0t9",
"g319dgt",
"g31clzt",
"g30txx8",
"g30uq9s",
"g31laoy",
"g326me2"
],
"text": [
"The Emergency Alert System (EAS) functions like ripples in a pond. The alert initiates from a warning point, then its message is issued by a broadcast station called an LP1. The voice is auto-generated from the text of the alert by various EAS encoding systems, which greatly saves time in an emergency. Other broadcaster stations monitor the LP1 station and automatically record/repeat the alert onto their on-air broadcasts and repeated at the next level of listening stations. The system works like a game of 'telephone.' Each station listens and passes along the alert, so each new generation of the message is a copy of the original. That is probably what you may perceive as static or distortion. Source: I managed an EAS program at the state level for emergency management for several years.",
"The garbled stuff at the beginning works kinda like modems did. Remember all the modem screeching? Those work like that. I am assuming the 'bleats' are designed to not be accidentally recreated or 'misheard' by audio equipment. But basically there are systems constantly listening for those. I'm amazed some idiot hasn't made it their ringtone yet.",
"Yeah it actually automatically triggers other radio stations to send out the message. So, there have even been instances of someone recording a home movie while the tone( a test) played on their TV. Then somehow that video got played back in a radio station, and the equipment heard the tone and sent out real emergency tones. Or something. I can't remember exactly but there's a lot of automatic shit that happens when that tone is played.",
"The sound design is specifically meant to be loud and disturbing to trick your brain into a fight or flight response. They use jarring noises in emergency broadcasts in every country in the world.",
"Footnote: In an EAS SAME, you hear three bursts of frequency modulated (FSK) information. ALL three bursts are identical. For a single bit, you have a 50% probability of decoding one of them correctly (probably higher, but this is a lower bound). For a given bit, if you decode all three of them to the same value, then that bit is known. If there is discrepency, then the one that was heard the most is recognized (i.e. if you hear 1, 1, 0, you recognize a 1, while for 0,1,0, recognize a 0). It's rudimentary error correction that works without fancy arrays and interaction.",
"Basically, it’s an old technology, and it’s still the most reliable, which is the most important factor in a legitimate scenario.",
"Low fidelity, high amplification. They need to get the message to as many people as possible. Systems with the ability to reproduce a broad frequency range and that have a wide footprint exist but the better these two qualities get, the more expensive the system. Emergency alert systems need to have a large audio footprint, and be able to be clearly understood by everyone within said footprint. That’s it. They don’t need to be able to reproduce a wide range of frequencies like you’d want high quality headphones or speakers at home to be able to do.",
"What is a emergency broadcast warning? (Ignorant European here). Warning about a traffic accident ahead? Bad weather?",
"There was a wonderful podcast dedicated to this subject. They can definitely elaborate on it better than I, but the sounds are unique because they stand out from typical background noise even at low volume, they do not blend end with other noises that may be occurring as well. I highly recommend listening to the podcast though. URL_0",
"I think it is a psychological thing. We have heard it that exact way with the screech at the beginning that I think if I was at a rock concert and that came out of someone’s phone the entire auditorium would go silent",
"One of my first jobs was \"DJ\"(actually I just sat in one of the booths with a computer and made sure the computer was playing the right programming) for an FM station back in the early 2000s. This was a cheap station and not a large broadcast signal, all I new about the EAS is that it was automatically setup to cutoff the programming and run its course, there was a button to switch it manual in case I wanted to play it at a certain time like after a song, but I was told it was fine just to leave it to automatic. I kid you not, the whole EAS was still broadcast through a cart machine, like this only more than 1 cartridge: URL_0",
"The sounds are unnatural and the sound wave they used is not naturally accruing it’s a sine wave and it grabs people’s attention because it’s so unnatural.",
"Does the alert system use the lowest possible technology so it's less likely to be disrupted?",
"What benefit is there to not having the emergency warning system not be “jarring”?"
],
"score": [
10121,
314,
301,
254,
37,
21,
15,
8,
7,
6,
6,
5,
4,
3
],
"text_urls": [
[],
[],
[],
[],
[],
[],
[],
[],
[
"https://www.20k.org/episodes/emergencyalert"
],
[],
[
"https://www.youtube.com/watch?v=-m9WfAJMMa8"
],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
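A tiny sketch of the per-bit majority vote described in the "three bursts" footnote above: decode the same header three times and, for each bit position, keep whichever value was heard at least twice. The example bit strings are invented, and real SAME headers are of course much longer than seven bits.

```python
# Majority-vote three noisy copies of the same bit string, bit by bit.
def majority_vote(burst_a: str, burst_b: str, burst_c: str) -> str:
    return "".join(
        "1" if (a + b + c).count("1") >= 2 else "0"
        for a, b, c in zip(burst_a, burst_b, burst_c)
    )

# Three copies of the same header, each with a different bit flipped by noise.
print(majority_vote("1011010", "1011110", "0011010"))   # -> "1011010"
```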
ihepnr | Website Tracking | Using an apple computer, with cross site traffic blocking and cookie blocking, and using duck duck go as a browser, how do websites still know so much information about you? Not only how many times you have been to their site, but also your zip code? How can you be more anonymous? | Technology | explainlikeimfive | {
"a_id": [
"g2zrah6"
],
"text": [
"Imagine you're sending a letter to someone requesting a package you want sent to your address. The mailman is like your ISP. The person you're requesting a package from is like the web server. The package is like the website, and your street address is like your IP address. How could you make such a request while keeping the person you're sending the request to in the dark about where you live or how many times you've made such request to them? With traditional web surfing, that's not really possible. How can the server send the package you requested to you if it doesn't know your address? The answer is to use a VPN. It's like having a second mailman. A middle man mailman if you will. The ISP hands your request to the VPN, and the VPN makes the request for the package to the server. The VPN doesn't tell the server what your address is. He just remembers it himself while asking that the package be sent to him. When he gets the package you requested, he hands it off to the mailman(ISP) without saying where it came from, only to deliver it back to you."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
ihgas4 | How do they grab old film and turn it into a high quality frame rate? | As the text implies, It baffles me how much quality theyre able to add to old films. | Technology | explainlikeimfive | {
"a_id": [
"g303iys"
],
"text": [
"Old film can be converted to HD pretty easily, but here’s the thing: not *all* old film. Back in the dawn of projection technology, video was captured on a transparent film of silver halide crystals. These crystals would darken significantly if exposed to any amount of light. Photographers would use a transparent canvass with a thin layer of these crystals to take photos by placing the sealed frame in a dark box, unsealing the frame in the box, then using flash powder to create a bright enough amount of light to “burn” the silver crystals on that frame. This is what is known as a photo negative. That negative would go get chemically treated, so that the crystals on the negative would stabilize, and no longer darken. This is why old school photo development rooms were lit in that dark red light; the red was too dark and weak to affect the crystals a second time. If you tried to develop a negative in a normally lit room, your negative would completely black out, as the crystals that haven’t been stabilized would darken when exposed to the light. Projector film used the same basic idea, but instead of a single transparent canvass in a box, they used a long series of very tiny canvasses on a tape, and exposed each box to whatever was to be filmed, one at a time, in sequence. That’s what video is, just a bunch of pictures. Now, to answer your question: old films all used variations of this basic design, even when color was added in. When someone wants to “upscale” old film, they put it on a powerful scanner to convert the physical data of the crystals into a digital image. Depending on how old the film is, and the quality of the film that was used, you are able to get enough visual detail to create a crisp HD image. See, the amount of silver halide crystal on your film, and the size of each individual “grain”, would determine how HD your recording was. Movie studios and TV crews that used film of high quality were able to create rolls of film with enough detail in the crystals to qualify as HD, while some lower quality films simply don’t have fine enough crystals to create that same level of detail. Bonus fact: this is why all direct to TV recordings are terrible. Instead of using film, these were recorded using cameras that were broadcasting directly in old school standard definition, so instead of being stored as a physical thing, it’s a TV signal being converted into an image. I learned all of this from Captain Dissillussion on YouTube. He’s great, easily one of the best channels on the site, that perfect balance of entertaining and educational. I highly recommend you go watch his videos if you want a better explanation, he has a video on this exact question of yours."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
ihgexe | Do megabytes, gigabytes, terabytes, so on and so forth carry on any weight? | Even if it’s minuscule, do they carry any weight to them? | Technology | explainlikeimfive | {
"a_id": [
"g300obj"
],
"text": [
"That depends on how they are implemented. They can be said to have e. g. the weight of the electrons trapped in memory cells for modern computer. However, it then depends on their value, since zeroes will not have these trapped electrons. In HDDs, data is not stored in added particles but rather in magnetic polarities, so no change in mass occurs. In mechanical storage (e. g. punch cards) , adding information typically actually subtracts mass."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
ihh665 | Why is it so hard to put the phone down? | Technology | explainlikeimfive | {
"a_id": [
"g305vmi",
"g307tax"
],
"text": [
"Dopamine, the reward chemical. Your brain anticipates release of dopamine following certain activities(eating and sex are the two biggies) and drives you to seek out those activities. Social media hijacks that mechanism, so do addictive drugs. URL_2 URL_1 URL_0",
"Because a lot of clever people have spent a lot of money designing it to take advantage of how peoples brains work to keep them there, scrolling and being advertised too."
],
"score": [
6,
3
],
"text_urls": [
[
"https://youtu.be/NMq_MyOFtW8",
"https://youtu.be/HffWFd_6bJ0",
"https://youtu.be/v0sWeLZ8PXg"
],
[]
]
} | [
"url"
] | [
"url"
] |
|
ihhpzn | Why does an external hard drive have varying space depending on the device it's used with? | ELI5: Today I cleared an external hard drive to use it as extra storage on my PS4. While setting everything up I noticed that on my windows laptop it showed about 900 GB of available space (while being completely empty). On the Playstation it now shows I have about 1,4 TB of available space. How is such a big discrepancy possible? | Technology | explainlikeimfive | {
"a_id": [
"g308avu",
"g30o1m2",
"g30gd0d"
],
"text": [
"The usual culprit in those cases is Microsoft's confusing way of reporting space - Windows may *display* \"GB\" and \"TB\", but what it actually reports is GiB and TiB. Your discrepancy however is too large to be solely caused by this. Are all the partitions available to your Windows machine also accessible to and readable by your console?",
"Are you sure the 1.4Tb on the PS4 is just the external drive? It might be the total available space on both the internal drive and the external one.",
"I'm guessing your playstation uses some partition type windows can't read, hook it up to your computer and open \"create and manage hard drive partitions\", there you should be able to view anything"
],
"score": [
9,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
ihlvdl | How do live lightning websites work? | Technology | explainlikeimfive | {
"a_id": [
"g31by4i"
],
"text": [
"Electrons moving around create electromagnetic waves, that's basically how an emitting antenna works. Lighting, being a large discharge of electrons, has the same property, and as such each strike creates a fairly large electromagnetic pulse, which travels hundreds of kilometres, and is easily measurable by dedicated antennas. The Blitzortung projects relies on volunteers buying/building such detectors to create a web of detectors. Since the electromagnetic pulse travels at the speed of light, you can measure when it reaches neighbouring detectors, and use triangulation to determine where and when it originated."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
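A toy illustration of the arrival-time idea in the lightning answer above, not the actual Blitzortung algorithm: the detector positions, the flat 2-D geometry and the brute-force grid search are all made up for the example, but they show why differences in arrival time pin down the strike location.

```python
# Toy illustration of locating a lightning strike from arrival times at
# several detectors (flat 2-D plane, coordinates and distances in km).
import math

C = 299_792.458  # speed of light in km/s

detectors = [(0, 0), (300, 0), (0, 400), (250, 350)]   # made-up positions
true_strike = (120, 90)

def arrival_time(det, strike):
    return math.dist(det, strike) / C

observed = [arrival_time(d, true_strike) for d in detectors]

# Brute-force search: try every point on a grid and keep the one whose
# predicted arrival-time *differences* best match the observed ones.
def mismatch(candidate):
    predicted = [arrival_time(d, candidate) for d in detectors]
    # compare differences relative to the first detector, so the unknown
    # absolute time of the strike cancels out
    return sum(((predicted[i] - predicted[0]) - (observed[i] - observed[0])) ** 2
               for i in range(1, len(detectors)))

best = min(((x, y) for x in range(0, 401, 5) for y in range(0, 401, 5)),
           key=mismatch)
print(best)   # -> (120, 90), the true strike location
```

Real networks use many more stations, account for the curvature of the Earth, and need very precise clock synchronisation, but the principle is the same.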
ihohpj | why is there a lag during communication between on-site news reporters and off-site reporters despite of all the technological advancements? | Technology | explainlikeimfive | {
"a_id": [
"g31fy4d",
"g31feap",
"g32co9f"
],
"text": [
"Short version: speed of light is finite. In order to provide a guaranteed high-bandwidth pipe (we'll come back to this) between the on-site reporter and the studio, you need a satellite link. That's what those vans with big dishes on a pole are. They're talking to geosynchronous communications satellites that are about 35,000 km above the van. It takes the signal about 0.1 seconds to get up to the satellite and 0.1 seconds to get back down to the studio, so there's at least 0.2 seconds of delay due to physics. That doesn't include any delays in processing the signals at each end or in the satellite. Aha! (you say), but I can stream HD video from my phone no problem! Yes, you can...but only when you're near a good cell or WiFi station. And even if you are, if you're trying to send the news from there so are a dozen other news crews and they're going to saturate the network in that area. And on-site reports have no idea where the story is going to be so they can't rely on WiFi or cell signals, that leave satellites. Aha! (you say again), but there's all these new cool satellites from SpaceX and Iridium and others that orbit much lower and aren't so laggy because they're closer. Also true, but they don't (currently) provide the capability to guarantee that you can have an HD video pipe on demand whenever you want, especially with several other people right next to you trying to do the same thing, and the news vans don't have the right hardware (yet). Eventually, that's probably where we'll end up, but it's not where we are today.",
"They're using a satellite phone/transmitter to communicate with people who are off-site because that's the only way to guarantee a reliable signal. Satellite communications have ~1 second of latency on them due to limitations imposed by the speed of light and how far the signal has to travel. This is true anytime there is a delay. Unless the person is in another studio that has been specifically set up to communicate with the home studio they will always use a satellite link because reliability is much more important to them than eliminating the slight delay. And a lot of the times when they're interviewing someone and it appears as though that person is in another studio they're really not. What's happened is that a newsvan has gone out to their office or home and the interviewee is sitting in front of a green screen that has been set up inside of the newvan or the interviewee's office/home.",
"This has been asked on this sub multiple times before, with hundreds of comments in response. Searching \"news lag\" was all it took. Please follow rule 7 of this sub. Reporting this post for breaking the rules."
],
"score": [
15,
5,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ihrh1c | Why do non-backlit screens (such as most Gameboy screens) need bright surroundings in order to be seen, while backlit screens (such as your phone) need more "backlight" if it is in the same conditions? | Technology | explainlikeimfive | {
"a_id": [
"g31zrik",
"g3258hi"
],
"text": [
"You kind of answered your own question there. Non-Backlit screens don't have a light coming from behind the screen to enable you to see what's there. Screens with a backlight, do have that light. What's important to remember though is that what's on the screen is not actually lit. It's more like they cast a shadow over the light, much like a shadow puppet does. So if there isn't much light, the \"shadow\" blends into the dark areas surrounding it and it becomes much harder to distinguish from the rest of the screen.",
"Non-backlit screens reflect the ambient light. So when there's more ambient light, like outside, the display gets brighter. Backlit screens have a tiny light built-in, so when you go outside, there's too much ambient light bouncing off the screen to see the tiny light passing through it."
],
"score": [
9,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ihs9wj | Why are movies always in 24fps? | I'm so curious as to why they are always in 24fps. Games under 60fps tend to get choppy, but movies tend to look fine at 24fps. How? | Technology | explainlikeimfive | {
"a_id": [
"g3257j5",
"g325i21"
],
"text": [
"Games render entire pristine frames with no motion blur. You need a lot of pristine images fast one after another to look smooth. Movies are recorded at 24 FPS and then played back at the same frame rate. This means that whatever motion blur they pick up in recording is displayed again in playback. The result is that it looks smooth because you can't pick out individual frames. As for why movies don't just record at higher FPS, a lot of it is inertia in the movie industry. Many theaters have projectors that play exclusively at 24 FPS. If you were to produce something higher FPS, many cinemas wouldn't be able to show it.",
"Tradition. That's really it. 24fps became the standard when films were filmed on actual film (hence the name film). People became accustomed to how that frame rate looks, particularly regarding fast motion. 24fps is still the cinematic standard for films in the digital age because audiences still expect to see it. One notable exception is the soap opera TV genre. That whole genre switched to a higher frame rate decades ago, and that's why soap operas have a particular \"look\". The higher frame rate means less blurred motion and more like real life. When people say soaps look different but they can't put their finger on why, it's the frame rate difference they are seeing. As for video games, this is a different beast because the images aren't captured optically but rather generated digitally. There is no cinematic history telling your brain what looks right or wrong, and you really want a game to look as much like real life motion as possible. It's very practical that way. Missing frames in a game means worse accuracy, reaction time, etc."
],
"score": [
4,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
ihu35t | What’s the difference between a fast shutter speed on a photography camera and just recording the event on a video camera? | Like why don’t sports photographers just use video cameras and pause at the best spot, instead of take multiple still shots? Isn’t video footage made up of many many still shots anyway? This way they can’t stop too early or start too late. | Technology | explainlikeimfive | {
"a_id": [
"g32hv2x",
"g32i5kp"
],
"text": [
"Great question... High speed movement requires VERY fast speeds to capture the movement and make it clear when shown as a single shot. Video at the movies is 24 frames per second. High definition 4k video are faster between 30 or 60 frames per second So of you are able to film 60 frames in a second each on is 1/60 of a second long. A typical camera can capture a single image as fast a 1/2000 of a second for sports. To remove the blur you really need much higher speeds. If you play computer games if the frame rate bogs down in a first person shooter.. you find that the shots go on either side of the target. That is why dedicated cameras are used. Even medium priced cameras take at speeds from 1/30 to 1/500 of a second.",
"Video frames are far lower resolution than images from a comparable still camera. Video is compressed in real-time for the sake of the camera’s limited processing power and for data storage/bandwidth considerations. Trying to capture video footage at the same resolution as DSLR stills would require ludicrous write speeds and storage space."
],
"score": [
10,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
ihv1jz | how does Netflix censor a video while screen recording by making it black only in the recording as you continue watching the video? | Technology | explainlikeimfive | {
"a_id": [
"g32rz0z",
"g32sjo3"
],
"text": [
"Pretty sure you can code specifc parts of your webpage(in this Case the videofeed) to ne not picked up by screen recording softwares. These parts then have some sort of attribute that tells the recording software to black out the information. Ob This of course doesn't mean that this will happen on every screen recording software. Some might ignore these Attributes.",
"Programs send a signal just before recording, so Netflix always shows a black screen when that's happening. As far as I know, the only way to have a screenshot would be running a virtual machine. Personally, I think taking screenshots should be fine, but Netflix (or Prime Video and others for that matter) doesn't have a way to tell it apart from recording video."
],
"score": [
11,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ii0lpc | How come the xray machines officers use in airports don't harm the workers? (Like how doctors stand in another room for their protection) | Technology | explainlikeimfive | {
"a_id": [
"g33q4ke",
"g33qy4h"
],
"text": [
"Notice the massive metal lined flaps on the conveyer to the massive metal box that houses the x ray. All of that is there to protect everyone, it shields against the x rays.",
"The effective dosage from a medical scan is much higher than for any airport scanner. I looked over a list of medical procedures and they went from roughly 2 to 20 millisieverts. Backscatter x-ray scanners used to dose about 0.00025 millisieverts, and when people worried about that they switched to a different system that uses microwave instead of x-ray. If you're scared of microwaves, you should be more scared of heat lamps and really, really scared of tanning beds."
],
"score": [
8,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ii3leh | How do chess engines calculate moves? | How do they figure out which move is winning and how can they make mistakes if they calculate everything? | Technology | explainlikeimfive | {
"a_id": [
"g348joe"
],
"text": [
"Game theory is a big part of chess. It's figuring out if I do move \"w\", my opponent will probably do \"x\", and I can respond with \"y\", and then they'll do \"z\", and so on. Expert chess players are always looking well beyond just their next move, and computers are the same way. However, it takes a lot of processing power to figure out all the possible outcomes of the next twenty moves (or more), so most computers are designed to just look ahead, say, three or four moves. That's how they make mistakes. (Also, some computer have \"easy\" difficulty settings, and make bad moves on purpose.) Against the best supercomputers, humans usually can't win; tying is considered a victory."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
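The chess answer above describes looking several moves ahead; here is a minimal sketch of that idea (minimax search). It is deliberately generic: `evaluate`, `legal_moves` and `apply_move` are hypothetical functions you would supply for whatever game you plug in, and a real chess engine adds alpha-beta pruning, a far better evaluation function and much deeper search.

```python
# Minimal minimax sketch: look a fixed number of moves ahead and pick the
# move whose worst-case outcome (assuming a perfect opponent) is best.

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    """Return (score, best_move) for `state`, searching `depth` plies ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None   # leaf: fall back to the position score

    best_move = None
    if maximizing:
        best_score = float("-inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, False,
                               evaluate, legal_moves, apply_move)
            if score > best_score:
                best_score, best_move = score, move
    else:
        best_score = float("inf")
        for move in moves:
            score, _ = minimax(apply_move(state, move), depth - 1, True,
                               evaluate, legal_moves, apply_move)
            if score < best_score:
                best_score, best_move = score, move
    return best_score, best_move
```

The mistakes come from the two approximations visible here: the fixed depth cutoff and the fact that `evaluate` is only an estimate of how good a position really is.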
ii3xa9 | How does a Turing machine work? | Bonus points for using the least amount of words possible. | Technology | explainlikeimfive | {
"a_id": [
"g34axdj"
],
"text": [
"A Turing machine is not a real concrete thing, but a theoretical computer. It has an infinitely long tape with discrete cells. It's given the input on these cells. A \"programmer\" gives it instructions to read and write data in these cells to solve some sort of mathematical problem (such as sorting a list)."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
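A tiny simulator of the idea described above, written as a sketch in Python. The "program" is just a transition table; the example machine flips every 0 and 1 on the tape and halts when it runs off the input.

```python
# Tiny Turing machine simulator. The "program" is a transition table:
# (state, symbol_read) -> (symbol_to_write, head_move, next_state).

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break                   # no rule for this situation: the machine halts
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example machine: flip every 0/1, moving right until a blank is reached.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}

print(run_turing_machine("10110", flip_bits))  # -> 01001
```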
ii62bg | How are lines on roads painted so consistently and accurately? | Technology | explainlikeimfive | {
"a_id": [
"g34m8jj",
"g34lllw",
"g34m61e",
"g34oqe4"
],
"text": [
"The trucks that paint the dotted white lines on the highways have an automated sprayer that's connected to the speedometer of the truck. That way it can calculate that for a given speed, the sprayer needs to run for X amount of time to produce a ten foot long line. The driver of the truck will maintain a straight driving line, often by picking a spot in their view that they can line up with where they need to be for the line to spray in place (the paving crew usually has spray painted lines to show where to place them. There's another way to put lines on a road that's more like an asphalt adhesive sticker. Those get put in by the paving crews. They're laid in place, then a large torch is passed over them to slightly melt the adhesive and the asphalt underneath. Those are very durable and can last longer than painted lines, but can be accidentally scraped up by overzealous snowplows. The ones used for the stop lines at an intersection often get slid back in parts due to the repeated number of cars accelerating over that spot over time.",
"It's a truck with a roller and paint supply. As the truck drives the roller rotates (with slightly elevated lines), gets fed paint, and if the speed is consistent the raised lines apply the paint to the road. I used to see them all the time, but not in 10 years or more, but I can't imagine the process changed.",
"They aren’t. There is enough that it looks like it. As to how they are painted they often use truck mounted systems, some are laser guided. The guns (paint and reflective beads) are fixed to the bottom of the truck or on a structure affixed to the truck. That keeps the fan pattern equal as the distance to the surface relatively unchanged. The guns have tips with a hole cut to make the paint spray out at a width (fan) and controls the amount of fluid put out.",
"Lots and lots of practice. We paint hundreds of thousands of miles every year. Thus, we get pretty good at it. Thanks for the compliment."
],
"score": [
28,
5,
5,
4
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ii7ng3 | If it's true that you can't "truly" erase anything on the internet, then how does one go about obtaining all that information back that was deleted? Where does it all get saved? | Technology | explainlikeimfive | {
"a_id": [
"g34w198",
"g34wrsq"
],
"text": [
"The idea is that if you ever publish anything on the internet, then anyone can grab a copy of it for themselves and possibly share it later themselves. So even if the data is technically erased, it's still potentially *out there*.",
"It's not that information can't disappear. That happens all the time. The problem is that you can't really *control* what pieces of information are lost or when it happens. You can take things off of your own servers, but that doesn't guarantee that the server has not been mirrored or backed up."
],
"score": [
12,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ii8ps7 | Why do chair massagers and other related equipment say to only use them for 15 minutes or less? | Technology | explainlikeimfive | {
"a_id": [
"g35558r"
],
"text": [
"Electric motors have something called a \"duty cycle\" that says how much of the time they can be on before they need to stop to cool down. A 100% duty cycle motor can run continuously, under load, but that usually requires a pretty robust motor and, crucially, a good cooling system because electric motors generate heat. With a duty cycle less than 100%, the motor is slowly heating up while it runs and it will eventually fail due to overheat. Very few consumer goods are designed to 100% duty cycle, whether labeled as such or not, because that's not usually not a realistic use and it would cost a lot of money & weight for a capability that's not required...who'd need a blender that could run 24/7? Massagers and related equipment are in the worst case situation for this because they're squashed between you and a chair back (poor ventilation/cooling), they're high load (lots of heating), and you're likely to want to use them for a long time (massages are nice). There are 100% duty cycle massage chairs...they cost several thousand dollars. The only way to build reasonably powered massage devices at normal consumer prices is to build them for lower duty cycle and then tell the consumers so that, if you do run it for two hours straight and fry the motor, at least they can say it wasn't their fault."
],
"score": [
24
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ii9m74 | Led Vs Lcd vs Oled vs Amoled | Technology | explainlikeimfive | {
"a_id": [
"g35bvi5"
],
"text": [
"LCD is a technology where the screen light comes from a few lights behind the pixels which are always on so even if the image is supposed to be black it still makes some light and it's not that black. LED refers to a kind of LCD technology where they use LEDs to produce the light. The main benefit is it decreases the power consumption significantly. OLED is a completely different technology where each pixel makes it's own light and it can be turned off individually as well. This means black is as black as it can be (resulting in much better contrast) and doesn't consume power. AMOLED is a specific kind of OLED that improves power consumption and response times. A problem with OLED technologies is they suffer from burn-in so they aren't very suitable for PCs yet because they typically display static images for long times."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ii9qm8 | Why do many audiophiles store lossless music even though it's supposedly the same as 320kbps MP3 to the human ear? | EDIT: What I've gathered from the comments so far is that the higher frequencies getting cut off doesn't matter but the lower audible frequencies also get distorted on MP3. Supposedly, that's where the extra quality in lossless lies. Also it matters when you EQ audio. Is that all? YMMV, but I cannot hear the difference between any MP3 256kbps and up and lossless on my Etymotic ER2XR. | Technology | explainlikeimfive | {
"a_id": [
"g35bbiz",
"g35kcth",
"g35bqd5",
"g35hm3y",
"g35mgan",
"g35gjkw",
"g35v3yh"
],
"text": [
"* Technically it's not the same because using the MP3 format removes information from the original recording. * Also many of them believe it's possible to hear the difference. * But the main reason is because we used sell music in a lossless format (compact discs) all the time. * Data-compressed audio only became a popular format when music started being distributed over the internet at a time when the kind of data throughput required for lossless audio wasn't available. * So going *back* to a lossless format isn't really a big deal.",
"Aside from the question of the direct lossy vs lossless comparison, which other comments have pretty well addressed, there's the issue of conversion. Storing a file in a lossless format preserves as much information as possible for future compression. Say you had mp3s and wanted to encode everything as AAC, your sound quality could be worse than if you had encoded in AAC using a lossless source. Keeping lossless files preserves your options. Say a better lossy codec comes along, or a device limitation requires you to have a certain format. Starting with lossless gives you the best possible result for the lossy file.",
"It's not. 320kbps MP3 cuts off abruptly at 20KHz where as 44.1KHz audio can have frequencies all the way up to 22.05KHz. This normally manifests itself in loss of high resolution information. [Percussion is usually what suffers]( URL_0 ). In fact, in some lowest bitrate signals, the percussion isn't even compressed. It's just approximated using the Perceptual Noise Substitution.",
"NOT an expert, but I have made a ton of vinyl rips and saved the lossless as well as recompressed to 320kbps MP3s and I definitely notice a difference. Primarily the bass can be troubling. In the MP3 version of something wicked bass driven (Run DMC, David Byrne's newer stuff, any hip hop) the bass sounds distorted and muffles/mingles with the rest, making it muddier. Whereas the lossless one's truly sound like the original. Again, not an expert but just an enthusiast with a lot of records and that's what I hear.",
"To me, the main reason is because if you rip to a lossy format, you get severe degradation if you change the bit rate. If you want 320 kbps for home and 128 kbps for your car, you either have to rerip the files, or live with much worse sound quality. It's easier to rip to flac and convert to whatever you need later, especially now that storage is super cheap. When a 20 GB drive was big, flac didn't make much sense for most people.",
"You can hear the difference. There are websites wear you can do a blind test and consistently tell which one is best.",
"For me, unless I am using headphones, the distinguishing feature is the harshness. Compressed music has an \"edge\" that is not present in well recorded tracks and sounds unnatural. No, I cannot tell the difference in a store blaring music, but that's usually because volume and poor source/amplification is going to make anything indistinguishable if there isn't something to compare it against. There is certainly a difference, it's a matter of who is listening and preferences, but I would argue that having more information to begin with is a better alternative to having less and the individual user can determine how they consume the data."
],
"score": [
83,
18,
14,
9,
8,
4,
3
],
"text_urls": [
[],
[],
[
"https://www.youtube.com/watch?v=UoBPNTAFZMo"
],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
iibrcg | How do animators make cartoons that can go up to 30 frames per second in the limited time they are allowed? | There must be some trick to it. I couldnt believe that animators actually slave there for days on end drawing each and every frame with body parts and mouths moved ever so slightly different. That they can do that thirty times for every second for a thirty minute to hours long video. It just seems impossible!! | Technology | explainlikeimfive | {
"a_id": [
"g35ni7e",
"g35p9ez",
"g35p229",
"g360zb7"
],
"text": [
"Modern animation is largely done with computers with significantly reduces the time and resources it would take to develop longer cartoons. Hand drawn animations are fairly rare nowadays.",
"Old-school animation, the types of stuff that Disney and Warner Brothers used to do, actually was drawn by hand. It was a massive undertaking that involved large teams of animators to produce short cartoons. Everyone called Disney crazy when he proposed using these techniques to make animated features, and he kind of was. Even then, however, they had some ways to save time. Backgrounds were drawn once. Anything in a shot that moved was drawn on a transparent 'cel'. Then you just place each cel on the background, take a picture, and go to the next. Modern animation is significantly assisted by computers. Computer generated animation is based on models. Instead of redrawing each frame, the animators are giving the computer a set of instructions for how various bits of the model move and change over time. Modern animation in a more traditional style is often based on a set of \"key\" poses and depends on a computer to fill in the gaps. This is how truly old-school animation used to work too, except it was an army of animators filling in the gaps between keys instead of a powerful computer.",
"They didn’t draw every frame, the background was the same and they had transparencies with different characters so they could change only those and not redraw everything always, also the animations rarely ran at a full 30 fps.",
"Others have mentioned a number of tricks, but one thing that's often used (though not always) is that they don't often draw all 30 frames. When I was studying animation, one of the most common cheats was to animate on 'twos' - basically, showing a frame twice. Often, people can't really tell, and it looks better than just 15 FPS (for reasons I can't adequately explain, the science escapes me there). That having been said, as others have pointed out, you will try to save as much work as possible, by breaking bodies up into parts. If the mouth is moving but the body isn't? Just re-use the body, and only animate the head/mouth over top of it. It's absolutely crazy work, and you're not wrong to sit there in disbelief that people would actually do this. But they do. If you think traditional animation is crazy, stop motion is that to the nth-degree."
],
"score": [
30,
29,
8,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
iichfc | How does bionic hand work? If they don’t have any fingers on hand, how they control the hand? like grip and move and all. I just saw a video with little girl that uses bionic hand for firs time and i wonder how it works. | Technology | explainlikeimfive | {
"a_id": [
"g35sxqq",
"g35t3pd"
],
"text": [
"The muscles the control your fingers opening and closing are actually all in your wrist! Only those super fine motor control movements are controlled by small muscles in your hand. So, the bionic is then tied into those existing muscles in your wrist that your brain would normally use to control your hand anyways. These muscles movements in your wrist, or the nerve impulses themselves that activate the muscles, are then tracked and sensed by a computer built into the bionic, that then translates that into the motors opening and closing",
"The hand is fitted over the stump with different electrical conducting pads and/or movement sensory pads making contact at different parts of the skin. The user learns to trigger muscles in the arm in such a way as to send discrete signals, which are picked up by the sensors and transmitted to the robotics, which then do what they do depending on the signals received. In many ways, this is exactly how a baby learns to use its hands and fingers, but in that instance it's all happening inside the body."
],
"score": [
9,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
iig4i1 | -How do big games get coded | Technology | explainlikeimfive | {
"a_id": [
"g36hfy6",
"g36hys6"
],
"text": [
"A ) Good planning and compartmentalization. A big project is more or less designed before anyone writes any code. People will work on the ground features first and then build from there, and have frequent meetings and open communication to make sure everyone is on board and that everything is working well together when the features are stitched together. You try to make things as modular as possible so that every \"department\" can in theory complete their entire (or large parts of) module regardless how far along the other departments are. B ) Using proper collabiration tools. There are plenty of things that help people split tasks between each other, keep each other updated on what is going on, and monitoring the progress of each thing being worked on. C ) Source control. Source control like Git is crucial when you have many people working on the same project. Git is a distributed version control system that allows many people to work on the same set of files without stepping on each others toes. Git will monitor the files, track any changes, and merge your changes with what other people have changed so that it all falls smoothly in to place with minimal fuss. That way if we both change the same file git will try its best to merge our versions in to a single file that includes both the things we changed. This allows large features and projects to be worked on by multiple people without having to send back and forth \"Enemy_ai_version_30_final_final_really.cpp\".",
"In bigger projects you typically split the whole project into relatively independent modules (e. g. engine, AI, physics... Teams working on them only need to agree on interfaces between these. These modules are further split, eventually into individual source files attributed to individual programmers. When files themselves are too big to manage by a single person, more people can do changes to copies of the same files and then these changes are \"merged\" by a version manager such as git or svn. A version manager is almost always used, and it is also usually responsible for each team member having access to the work of others."
],
"score": [
8,
3
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
iiiy5n | why haven't auto makers developed a warning system for infants left in hot cars? | Technology | explainlikeimfive | {
"a_id": [
"g36zhg6"
],
"text": [
"Seems entirely too specific for a default addition to a vehicle. Also, people shouldnt need to be reminded of their CHILD. Some people shouldnt have kids."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
iiku36 | how do reviews get the phones and other electronics and some tear it apart , do they really buy? | Technology | explainlikeimfive | {
"a_id": [
"g37cegv"
],
"text": [
"Known and popular reviewers are often sent the product for free (or with conditions of return) on the agreement that they do a review and publish it. Some retailers are honest and know a decent review will come because they made a good product. Some give away expensive product to reviewers on the condition that the review is flawless."
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
iip07n | Why are there no USB-C hubs with lots of USB-C ports for USB-C devices, in the style of old school USB hubs? | Technology | explainlikeimfive | {
"a_id": [
"g38c4sm",
"g38uok2"
],
"text": [
"There are: [ URL_0 ]( URL_1 ) But generally if you have USB-C you also want a couple of USB-A's. They're evolving but I've only just moved everything I have to USB-C myself and even then I bought a powerbank which has USB-A charging and delivery as well as USB-C charging and delivery because there are still devices that demand USB-A. Same for my in-car accessory socket chargers - one USB-C and one USB-A on each because my dashcam is still just USB-A. They'll come on the market eventually, because USB-A is slowly dying off, but while you can just buy a bunch of cheap convertors to go between the two, it'll take a while for that to happen. Brand new, very expensive laptop... has two A's and two C's. So I just bought a USB-hub thing (not the one above) that has EVERYTHING on it, and I carry that with my laptop. It'll come, but at the moment you're stuck with the things like above - niche, expensive and not available everywhere. But to be honest, unless you're talking about charging, USB-C is just the \"connector\", and it's \"USB3.1\" or whatever talked through it. So a USB3.1 hub with a bunch of cheap adaptors is essentially the same thing. Charging / power delivery is slightly different, I grant you, but then some USB-C ports are also Thunderbolt, so it's never going to be that simple.",
"They do exist - for example, Sitecom make one with [4 x 5 Gb/s USB-C]( URL_2 ), one with [3 x 10 Gb/s USB-C + 1 x 100W PD]( URL_1 ), and [some other combinations]( URL_0 ). But this is all quite new. Why? The only attempt at an explanation i've heard is that [\"the chips needed to produce one-to-many USB-C hubs essentially do not currently exist\"]( URL_3 ), but that is unsourced, and no detail is given. I don't know *why* those chips wouldn't exist - is there something about USB-C or USB 3 that makes them more difficult to make? I bought one of those Sitecom hubs, because i wanted to have both an external drive and a wired ethernet adapter on a laptop with one USB-C port. It works, in that i can plug in both devices at once and use them. But, interestingly, i can't boot off the external drive - the machine just says that it can't find anything to boot. I can boot fine when the drive is attached directly. So there's some oddity where an operating system can see a drive attached via a hub, but the BIOS or whatever low-level firmware is booting the machine can't. That suggests to me that the hub is not entirely transparent, the way i believe a classical USB hub is."
],
"score": [
22,
8
],
"text_urls": [
[
"https://www.sotel.de/en/Computer-Drucker/PC-Zubehoer-extern/Sitecom-CN-386-interface-hub-USB-3-2-Gen-1-3-1-Gen-1-Type-C-10000-Mbit-s-Aluminum-Black.html?gclid=EAIaIQobChMIrpSSn6TA6wIVqujtCh3gtwtIEAQYDSABEgIWifD\\_BwE&cur=1",
"https://www.sotel.de/en/Computer-Drucker/PC-Zubehoer-extern/Sitecom-CN-386-interface-hub-USB-3-2-Gen-1-3-1-Gen-1-Type-C-10000-Mbit-s-Aluminum-Black.html?gclid=EAIaIQobChMIrpSSn6TA6wIVqujtCh3gtwtIEAQYDSABEgIWifD_BwE&cur=1"
],
[
"https://www.sitecom.com/en/hubs",
"https://www.sitecom.com/en/usb-c-hub-4-port/cn-386/p/1881",
"https://www.sitecom.com/en/usb-c-hub-4-port/cn-385/p/1879",
"https://superuser.com/a/1414046/86299"
]
]
} | [
"url"
] | [
"url"
] |
|
iist3a | How exactly does data compression work? | Technology | explainlikeimfive | {
"a_id": [
"g38ur3m"
],
"text": [
"There are several techniques. The simplest technique is to replace repeats of the same character with a single one and the number. So aaaabbbbccc would be replaced with 4a4b3c. This is not useful for English text (where we don't really get repeats like that) but it's useful for certain kinds of data. This is called RLE (run-length encoding). The next trick is to remember what you have already stored, and if you find a repeated part then you instead store how far back it was and how long it was (called a distance-length pair). Thinking in terms of words (real compression works with bytes, but it's easier to explain this way): There is a red car and a blue car and a white car Becomes There is a red car and a blue 4,3 white 4,1 Notice how we didn't compress the first repeated \"a\" because that wouldn't have saved us any length. (The last 4,1 is the same number of characters as \"car\", but let's pretend the comma is \"free\" here and we saved something) Now one interesting trick is that this also lets you store repeats of any length. If you have this text: A car a car a car a car a car a car Then you can store it as: A car a 2,9 \"Go back 2 and copy 9 words\" works because by the time you've copied 2 words, now you have 2 new words to work with, so you can copy two more... etc. It's clever. This is called Lempel-Ziv coding, there are a million variants of it, and it's probably the most used, reinvented, modified, and abused compression technique. Now that you have gotten rid of outright repeats, wouldn't it be nice if you could compress the remaining words and numbers (bytes and distance-length pairs) better? Well, once you're done with the entire file, you could sort them by frequency, figure out what words or pairs happen most often, and assign them short numbers. This works like telephone numbers in countries where all the numbers aren't the same length. One number is never the prefix of another number (e.g. no number starts 911-xxxx-xxxx in the US). For example (using just words in the English language, but a real file would use a table including length-distance pairs and optimized for that file): * 1 the * 2 a * 3 of * 4 and * .... * 8 is * 91 then * 92 for * .... * 9999991 zygote * 9999992 asymptote * .... Now really common data has short numbers (actually in binary, not decimal). Then you store the table that relates numbers to real words at the top of the compressed tile. This is called Huffman coding. This is as far as zip goes (which uses a compression algorithm known as zlib/deflate that is based on LZ and Huffman). But there are better formats than zip, like 7z which uses something called LZMA. Formats like that take the simple phone number-like system and go full math nerd on it. They start making estimates of the probability of certain words or length-distance pairs showing up, keep updating those estimates, and then use exactly as many bits as they need to write the resulting number. Including \"fractional\" bits. That sounds impossible, but it's possible if you look at a whole set of bits as larger number, representing multiple chunks of information in one number. Basically, you look at the phone numbers as a *decimal* number. For example, if the first word has these probabilities: * 50% The * 40% A * 10% You Then you assign them ranges: * 0-0.5 The * 0.5-0.9 A * 0.9-1.0 Your If the word happens to be \"A\" then you pick that range: 0.5-0.9 Now the next word is: * 50% (0.5-0.7) car * 50% (0.7-0.9) house If it happens to be car, now your range is 0.5-0.7. 
But if the first word had been \"Your\", the current range would've ended up being *0.90-0.95*. That's the magic: *uncommon things require more precision, and so you end up with more decimals*. Once you're done with the entire file, you pick any point in the final range, and that number is the entire file. The \"less obvious\" the file was, the more digits you end up with. But every word didn't take an exact number of digits, so this is more efficient than the previous system using whole digits for every word. That's about as complicated as it gets for \"common\" compression algorithms. Hope it made sense."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
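A runnable sketch of the simplest technique mentioned in the compression answer above, run-length encoding. It works on text characters rather than raw bytes, and the decoder assumes the original text contains no digit characters; both are simplifications for the example.

```python
# Run-length encoding: store "how many times" + "which character".
# This only helps when the data really does contain long runs of the same value.

def rle_encode(text):
    if not text:
        return ""
    out, run_char, run_len = [], text[0], 1
    for ch in text[1:]:
        if ch == run_char:
            run_len += 1
        else:
            out.append(f"{run_len}{run_char}")
            run_char, run_len = ch, 1
    out.append(f"{run_len}{run_char}")
    return "".join(out)

def rle_decode(encoded):
    out, count = [], ""
    for ch in encoded:
        if ch.isdigit():
            count += ch          # run lengths may have several digits
        else:
            out.append(ch * int(count))
            count = ""
    return "".join(out)

print(rle_encode("aaaabbbbccc"))                               # -> 4a4b3c
print(rle_decode(rle_encode("aaaabbbbccc")) == "aaaabbbbccc")  # -> True
```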
iit3j7 | if Whatsapp don't run ads and don't require a subscription. How do they generate money? | Technology | explainlikeimfive | {
"a_id": [
"g38vidk",
"g38w4at",
"g39027j",
"g38tpha",
"g38ulk9",
"g392uzv"
],
"text": [
"It doesn't currently make money. It used to cost a small amount of money to buy the app itself and was intended to be a subscription service. Now Facebook have acquired it and are positioning it to get loads of people on there. Then they'll start getting businesses on there to communicate directly with all the users, apparently (as they'll have a huge user base to sell profiles to businesses).",
"Plenty of companies operate at a loss (ex. Uber, formerly Amazon) and get capital from investors because there is future potential to generate income based on the massive market share they come to dominate, or on the ability of future tech or streamlining advancements to turn a profit. That said, while WhatsApp supposedly end-to-end encrypts 2 party messages, it may be possible there's other data to mine, for example, the number of downloads of the app, where those downloads originate, other apps used by accounts that download it. Also the potential of looking at group messages may be there. Keep an eye on the company in the future, because it's security could always be compromised for profit once it has destroyed it's competitors. Or transition to a app like Signal with better corporate ethics.",
"The contact list in your phone is harvested to fill Facebook's social graph. Your phone is finger printed to allow for tracking This allows them to find what you have bought what you are shopping for. It also fills in who your friends and family are and what they have bought and are shopping for This data is used to target ads to you on Facebook and else where. Really WhatsApp is just a data funnel to help them advertise. They don't need ads in the app itself",
"Whatsapp uses End-to-end encryption so facebook is not able to read your messages. Anyone claiming otherwise should post a source.",
"The truth: if you receive a service or good for free nd no adds it's because you are the product. (They are selling you). This applies for any established business model, not for special offers or promotion. EDIT: Or if it's a free trial version for you to buy the full one.",
"If you are not paying for a product, you ARE the product. & #x200B; Ditch whatsapp, Switch to signal. Totally free and open source. Non-profit, donation driven."
],
"score": [
51,
33,
11,
8,
8,
4
],
"text_urls": [
[],
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
iitjis | Why can’t a camera sensor have variable sensitivity across for a given image? | Technology | explainlikeimfive | {
"a_id": [
"g38weh0",
"g38v2jw"
],
"text": [
"The ISO isn't the voltage the sensors are at. It's how far the amplifier at the end is turned up. High ISO = the \"volume\" knob on the amplifier is high. The thing is there is usually only one or a few amplifiers for the whole sensor. If you have multiple amplifiers, you can have variable ISO. Canon DSLR cameras have ~~two~~ four amplifiers, each handling one quarter the rows of pixels (one every 4 lines each). With modified firmware called [Magic Lantern]( URL_0 ), you can set them to two pairs at two different ISOs and basically get single-shot HDR images or HDR video, at the cost of some resolution. Edit: Four amplifiers, not two. Why four but you can only have two ISOs? Because the chip is wired that way, and because of the way the sensor is laid out: every second line only has red pixels or blue pixels (all lines have green pixels), so if you changed the ISO differently for those groups, you'd end up with the red pixels at one ISO and the blue pixels at another ISO, and that would be a mess.",
"Because that's not how the sensors are generally manufactured. The sensor grid shares a common Vdd (positive voltage) and Vss (ground). They are all shorted together. (All Vdd's shorted to one grid, all Vss' shorted to another grid.) There are also some other common signals that are likewise the same for every pixel. Of course, you could do what you want by manufacturing them differently, but that would likely compromise the sensor density or the manufacturing cost. If you are going to give each pixel a different Vdd line, then you're going to have to add a shitload of stuff that takes up space. It would make more sense to divide the sensor into multiple zones and give each of them a separate Vdd, but even that would take up some space. But it would be a lot more do-able. Of course, you could do what you want by using time instead of space. You could change the Vdd level as the image is being acquired. Now you have a time tradeoff instead of a space tradeoff, since it would take longer to acquire the image, which of course could lead to other issues such as blurring."
],
"score": [
10,
3
],
"text_urls": [
[
"https://www.magiclantern.fm/forum/index.php?topic=7139.0"
],
[]
]
} | [
"url"
] | [
"url"
] |
|
iiu3vf | Why are magazines in some weapons behind the trigger? | Now, I don't have any experience with weapons in real life or anything like that. Though, I have noticed that some weapons like the AUG have their magazines behind the trigger and other weapons like the M16 have their magazines in front of the trigger. To me (keep in mind, I don't have experience with weapons), the AUG design seems a bit unefficent and uncomfortable. If you have a hand on the trigger, and a hand on a grip or something like that, then isn't it easier to just move it slightly to the magazine next to it, than to move it all the way to the back of the gun to reload? Thanks in advance. | Technology | explainlikeimfive | {
"a_id": [
"g38x3sl"
],
"text": [
"That's what's called a bullpup design, where the whole action is behind the trigger. It allows you to have a full length barrel while reducing the weight and length of the weapon, thus improving handling. It is slightly more awkward to reload, but the advantages outweigh that for the most part."
],
"score": [
7
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
iiwsh1 | Why is it so easy to teach a bot to write entire articles that pass the Turing test, but even the best bots become obvious in just a few chat messages? | Technology | explainlikeimfive | {
"a_id": [
"g39k3r9"
],
"text": [
"When you write an article, whether it's yourself or a robot doing so, that article is ultimately placed in a context. It has a topic. It has a format. You know, and you can teach your robot, what things make sense in that context and what things don't. If your article is about zebras, you don't start talking about calculus in the middle of your article without a lot of content explaining why it's relevant to do so. When a chatbot is trying to respond to prompts from a human partner, it has essentially no context to work from. A conversation with somebody on the internet could be about literally anything, and even worse, it can change topics very rapidly. A person can recognize immediately, or close to immediately when the topic is changed and can respond appropriately, even if that appropriate response is \"what the hell are you talking about?\" Since chatbots don't actually understand anything, it can be very hard for them to come up with an appropriate response if the topic shift, especially if it's done deliberately in such a way that the language doesn't make it immediately obvious. If I started talking to you about that striped horse that lives on the plains, you would recognize that I meant a zebra. Even if our previous conversation was about socks. But a chatbot might not be able to notice that, because at least some of the words in my sentence, like striped, would also show up in a conversation about socks."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
iix4ty | What are WASAPI, ASIO, and Direct Sound ? | I'm exploring sound virtualization on Windows 10 and have noticed that a lot of the user guides for different programs mention "WASAPI" , "ASIO", "Direct Sound", "pins", "KS", etc. What do these all mean? Are they options that I can enable for better audio playback? | Technology | explainlikeimfive | {
"a_id": [
"g39mlks"
],
"text": [
"They are different audio API's for programmers to use, when they want to make windows play some audio. So no, they won't change your audio for the better."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
iizs8m | How can millions of people use the ~400 underwater cables for internet simultaneously? | I'm aware each cable is branched into many many smaller cables but surely there still can't be one cable per person so how is the issue of overlapping data signals solved? | Technology | explainlikeimfive | {
"a_id": [
"g3a52oq",
"g3b0sn0",
"g3a5e6a"
],
"text": [
"There isn’t one cable per person. Many (many many many) people are taking tiny slices of transmission time in turn. In addition, if it’s a fiber optic cable, you can run multiple light beams down one fiber and they won’t interfere with each other. So one physical cable will contain many individual fibers, each fiber can carry multiple signals, and each signal is time shared between a huge number of individual users.",
"ELI5 answer is that each cable is like a United States Postal Service semi-trailer loaded with hundreds of thousands of pieces of mail. While all the letters share the truck on its journey, each one has specific instructions (the address) that tells us where they are going. When the truck gets to its destination, the mail is unloaded and sorted, and sent on its way in the direction of its final destination.",
"Data over the internet is sent using something called packets. Basically, when you want to send something, that data is broken into very small chunks and addressed to your computer. Once your computer gets all the packets, it can reassemble whatever it was that was sent. This is what allows multiple people to use the internet. I believe that the undersea cables may use different frequencies or other tricks to send multiple packets at once, but let’s say that they couldn’t do that. When you have messages divided into packets, you can interleave them. So maybe a packet to you gets sent, and then one for me, and then some others for a few other people, and then your second packet gets sent. It’s slower than if you just had the entire wire to yourself, but these undersea cables have pretty high bandwidth, so you can still get your data in a reasonable amount of time this way."
],
"score": [
22,
7,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
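A toy sketch of the packet idea from the answers above: two messages share one "cable" by being cut into numbered packets, interleaved on the wire, and reassembled at the far end. The packet format and round-robin interleaving are invented for the example; real networks add headers, routing, checksums and retransmission on top.

```python
from itertools import chain, zip_longest

def to_packets(sender, message, size=4):
    """Cut a message into numbered packets of `size` characters each."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [(sender, seq, chunk) for seq, chunk in enumerate(chunks)]

def interleave(*streams):
    """Round-robin the packet streams onto one shared 'wire'."""
    return [p for p in chain.from_iterable(zip_longest(*streams)) if p is not None]

def reassemble(wire, sender):
    """Pick out one sender's packets and put them back in order."""
    parts = sorted((seq, chunk) for s, seq, chunk in wire if s == sender)
    return "".join(chunk for _, chunk in parts)

wire = interleave(to_packets("you", "pictures of a cat"),
                  to_packets("me", "a long email"))
print(wire[:4])                  # packets from both senders, mixed on the wire
print(reassemble(wire, "you"))   # -> pictures of a cat
print(reassemble(wire, "me"))    # -> a long email
```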
ij1vy9 | What goes the gigahertz (ghz) actually mean in terms of a computer processor, and how does that translate to actual performance? | Technology | explainlikeimfive | {
"a_id": [
"g3an9gm"
],
"text": [
"Your home clock probably makes a tick. That tick happens once per second. We also call this one hertz. If it were to be fast and tick twice per second, we could call that 2 hertz. A gigahertz is 1 billion of something per second. In this case, 1 billion cpu ticks per second. A CPU does various kinds of logic and computation, the issue is the way this works depends on proper timing. Each basic step has to be completed before the next begins. This requires some kind of circuitry to time all of these operations to make sure that they don’t happen too fast and your logic becomes mush as steps aren’t completed before they are needed. We call this timer a clock, and a gigahertz a clock speed. Modern processors run at 4 gigahertz approximately, 4 billion computation steps per second are achieved by them. It’s a feat of modern engineering that we can go this fast. If you can speed up the clock of the same design of CPU without having steps be incomplete, then the steps happen faster and thus you increase the speed at which the computer calculates."
],
"score": [
12
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
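A back-of-envelope illustration of the point above: the clock speed tells you how many ticks happen per second, but how much useful work gets done per tick also matters. All numbers here are made up for the example, not measurements of any real CPU.

```python
# Time to finish a billion simple steps at a given clock speed, and why
# "work per tick" matters as much as the GHz number on the box.

def seconds_for(steps, clock_hz, steps_per_tick=1.0):
    ticks_needed = steps / steps_per_tick
    return ticks_needed / clock_hz

billion = 1_000_000_000
print(seconds_for(billion, 4e9))        # 4 GHz, 1 step per tick  -> 0.25 s
print(seconds_for(billion, 3e9, 2.0))   # 3 GHz, 2 steps per tick -> ~0.167 s (faster!)
```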
ij47ic | - Why have blimps not made a comeback, especially in the way of public transportation? | I was reading up on blimps and they seem to have a ton of untapped potential. They are actually super safe, could be built for a reasonable price, and can carry a large number of people. So, why haven't they made a comeback? On a city public transport level, they could make a massive impact on traffic reduction etc. | Technology | explainlikeimfive | {
"a_id": [
"g3b45jl",
"g3b3hm3",
"g3b3y06"
],
"text": [
"The 2 lighter than air gases commonly used for blimps have major drawbacks. * Hydrogen - Highly combustible. * Helium - Has other uses for which there are no viable alternatives such as cooling the magnets in medical CAT scanners and superconductors, as a nitrogen substitute in SCUBA tanks, and pressurizing liquid fuel tanks for rockets to name a few.",
"Speed is one of the biggest issues. Plus they're (generally) far more susceptible to foul weather than a plane or helicopter pushing them off course. In addition they are massive and need a large landing site or a safe tall building to dock with, and flying one into a city with skyscrapers in bad weather would be a nightmare. The lifting gas can be either expensive or dangerous. And the list goes on.",
"Why would you use them? Landing and take off adds not insignificant time. Blimps are fairly slow moving compared to other options, especially over short distances where they can't accelerate. URL_0 says a practical limit of 100-130 Km/hour. Which, up here in Canada, is highway speeds. Cars would be accelerating far faster, and since roads are already basically everywhere, why invest in infrastructure for a novelty?"
],
"score": [
7,
4,
3
],
"text_urls": [
[],
[],
[
"https://en.m.wikipedia.org/wiki/Airship"
]
]
} | [
"url"
] | [
"url"
] |
ij6ooc | what is posix in Linux? Read multiple articles, not understanding. | Technology | explainlikeimfive | {
"a_id": [
"g3bn8nm"
],
"text": [
"Posix is a bunch of standards about how operating systems should work if they want to be compatible. An operating system is the basic computer program that \"runs\" a computer; it's what supports all your applications and lets other programs actually work with the computer's hardware. Windows, Mac OS, Android, Linux, etc. are operating systems. Posix isn't an operating system, it's a bunch of standards about how an operating system should behave, how you \"talk\" to it, etc. Any operating system that follows the posix standards is posix compliant. Linus is a posix operating system. So is Unix. So is (I think) MacOS X since it's sitting on a posix-compliant operating system called NEXT. This makes it relatively easier for programmers to work with multiple operating systems, since they can work with them in pretty similar ways. A rough analogy is the USB standard...any USB plug fits into any USB port and they can work together even though they might be made by different companies and support wildly different devices."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
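As a small illustration of what "behaving the same way" means in practice: POSIX standardises low-level calls like open/read/write/close and their flags. Python's os module exposes thin wrappers over those calls on Linux and macOS, so this sketch mirrors what the equivalent C would look like; the filename is just an example.

```python
import os

path = "hello.txt"   # example filename

# Open (creating/truncating), write some bytes, and close: the O_* flags
# and the file-descriptor model are exactly what POSIX specifies.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello, posix\n")
os.close(fd)

# Re-open read-only and read the bytes back.
fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 1024)
os.close(fd)

print(data.decode())   # -> hello, posix
```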
ij80d7 | Why do we need to buy new phones for increased mobile internet speeds like for 4/5G. | Our phones are powerful enough for advanced 3D graphics and WiFi has existed for awhile that is capable of vastly superior speeds yet our phones can't receive data faster without buying a new one? | Technology | explainlikeimfive | {
"a_id": [
"g3bt369",
"g3btupd",
"g3bwyyk"
],
"text": [
"Your phone lacks the radio bands that new technologies use. Look back at the first gen LTE bands till now and you’ll see additional frequencies in newer phones.",
"The modem is a physical chip- not a software payload like HTTP. Every generation of cell requires a new modem.",
"In short, the 4/5g antenna and processing chip is a physical antenna. Each new generation of data uses a different frequency of radio wave. If the antenna isn’t exactly the right physical shape or the chip isn’t able to process the new connection (impossible to do without knowing the next-gen specifics), then an old chip won’t be able to use the new generation of mobile data."
],
"score": [
9,
7,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
ijagjr | How are bitcoins created? | I know it takes mathematical knowledge, but i dont exaclty know how they work and how exactly a equation can be some type of currency, im five help | Technology | explainlikeimfive | {
"a_id": [
"g3c8ztk"
],
"text": [
"So, if we skip over all the complicated math bit all Bitcoin boils down to is a distributed ledger: a list everyone can have a copy of that goes \"Bill pays Sally five bitcoins. Sally pays Joe three bitcoins. Mary pays Bill eight bitcoins\". By looking over the list you can see exactly how many bitcoins someone has received and spent. When you want to send someone some Bitcoin all you do is send out a broadcast that roughly goes \"I am paying Pete a bitcoin\", and people start including it in their ledger. If people see that this would cause you to have spent more bitcoin than you own, they reject this new transaction. Now, that's all well and good, but how do new coins enter this economy? That happens due to mining. Mining is a fairly bad name because it's really verifying. The ledger is split in to blocks, and each block is a collection of transactions that happened while that block was being made. Blocks have a verification code at the bottom that relies on the block that came before and all the transactions in the block: this prevents people from being able to just add fake transactions to the list or change the items on it, because doing so will make the verification code wrong. This verification code is easy to check, but it's really hard to create it. Miners will spend a lot of work trying to create this verification code, and as a reward they are allowed to put a special transaction in to the block that says \"The miner gets < reward > bitcoins as a reward\". That's how new Bitcoins are created: the people who are creating these verification codes that come with valid blocks get them from the void as a reward for their work."
],
"score": [
26
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
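A toy version of the chained ledger described above, to show why the "verification code" (a hash) makes history tamper-evident. This leaves out everything that makes real Bitcoin work at scale: proof-of-work difficulty, digital signatures and the peer-to-peer network.

```python
# Toy "ledger of blocks": each block's hash depends on its transactions and
# on the previous block's hash, so editing history breaks every later hash.
import hashlib

def block_hash(prev_hash, transactions):
    payload = prev_hash + "|" + "|".join(transactions)
    return hashlib.sha256(payload.encode()).hexdigest()

chain = []
prev = "0" * 64                       # made-up starting point ("genesis")
for txs in (["Bill pays Sally 5"], ["Sally pays Joe 3", "Mary pays Bill 8"]):
    prev = block_hash(prev, txs)
    chain.append({"transactions": txs, "hash": prev})

def chain_is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block_hash(prev, block["transactions"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(chain_is_valid(chain))                          # -> True
chain[0]["transactions"][0] = "Bill pays Sally 500"   # tamper with history
print(chain_is_valid(chain))                          # -> False
```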
ijap6v | How did someone figure out a bunch of 0s and 1s (binary code) could be used to make the computers and phones we all use today? | I understand that you write code to run a program but how did someone figure all that out in the first place? | Technology | explainlikeimfive | {
"a_id": [
"g3caf0k",
"g3cbipt",
"g3cgh2w",
"g3ct4su",
"g3cg5uf",
"g3cqszp"
],
"text": [
"[The ones and zeros were invented in the early 1800s]( URL_0 ) as a part of [Jacquard loom]( URL_1 ), which was a way to weave patterns into cloth. A cloth has a warp (vertical threads) and a weft (horizontal threads). Every horizontal line had a punch card associated with it. Every vertical line corresponded to a position on that punch card. If that position had a hole in it, the horizontal weft would go over the vertical warp, if that position didn't have a hole in it, the weft would go underneath the warp. This was done automatically because the punch card was placed on top of the levers that pulled the warp threads up and down, and then a thing would push on top of the card to activate the levers.",
"Binary is just the simplest natural representation of memory we have. 1 is On, 0 is Off, with electricity either flowing or not, with your finger up or not. With that base assumption, a lot of different clever people spent a lot of boring time building it all up. The article on URL_0 's explains quite well how we managed to do lot's with ones and zeroes.",
"A lot of it was already figured out by George Boole. He created Boolean Algebra back in the 1840s and 1850s. He set forth all the laws, rules and operations to do basic math. URL_0 if you want to read more up on it. Crash course also has a serious explaining it.",
"Nobody really 'figured out' binary, it's a pretty fundamental concept and we've been using binary instructions to automate things for at least the last 1000 years. It's not particularly hard to understand 'on' and 'off', whats complicated is figuring out how to automate a complex task using those instructions alone. The earliest examples of this were things like [music boxes]( URL_0 ); holes or knobs on the reel represent '1' and their absence represents '0'. Things like player pianos are just an iteration on this idea. Next came more complex mechanisms that either performed logic to produce a result like a [calculator]( URL_2 ), or automated a physical process like a [punch card loom]( URL_1 ). From there all we really did was make the switches smaller by moving from mechanical one to electronic ones. Modern computers are fundamentally no different to the mechanical calculators from 200 years ago, only they're able to be much more compact and therefore you can cram more logic into a far smaller space. The fact you can build a computer that can run Windows inside Minecraft using Redstone switches alone is proof of this - with space and resources removed as a limiting factor, literally anything that can be switched easily between 2 states is sufficient to build a computer. TL;DR we didn't go from zero to MS Flight Simulator 2020. We've been using binary since antiquity and have just gotten better at cramming more switches into less space which enables us to do more complicated things with it.",
"In the beginning there were switches and lamps. Flip the switch up (1) and the lamp turns on, flip the switch down (0) and the lamp turns off, so binary is how it all started. We just didn’t have a name for it. We just called it on/off. Then programmable circuits came along where you could combine the states of 2 switches to get 4 possible outcomes. 3 switches for 9, 4 switches for 16 and so on. Then we needed a practical way to represent the state of all these switches so we started calling the on and off states 1 and 0",
"No one figured that out. There are no zeros and no ones. \"0 and 1\" just means \"no voltage, or some voltage\" which, as a paradigm, allows for a simplification of electronics/circuitry, which, when miniaturized, can do complex things in a small amount of space. The notion of logical operators that take binary operands: true (\"1\") and false (\"0\"), having existed since the dawn of formalized philosophy, were the start of this type of thinking. Binary values are no different than decimal values or even a theoretical base-36 system that would use numbers and letters. Using a binary representation is just more electrically convenient to store and transmit, and lends itself to discrete logical expressions that can be realized with simple gates (first, via transistors, and then eventually, as elements etched onto layered silicon wafers.) It is electronically much simpler to detect voltage transitions rather than test discrete voltage levels. This is why digital systems are preferable to analog systems. While many believe CDs have pits and lands that represent \"zeros and ones\", it is actually the change between a pit to a land, or a land to a pit that indicates a set bit (\"1\"), and no change at all (pit to pit, or land to land) that represents a clear bit (\"0\"). Most digital signals (Ethernet included) use rising and falling edges of voltage in a similar way."
],
"score": [
52,
26,
17,
16,
5,
3
],
"text_urls": [
[
"https://www.youtube.com/watch?v=MQzpLLhN0fY",
"https://www.youtube.com/watch?v=OlJns3fPItE"
],
[
"https://en.wikipedia.org/wiki/Binary_number"
],
[
"https://en.m.wikipedia.org/wiki/Boolean_algebra"
],
[
"https://en.m.wikipedia.org/wiki/Music_box",
"https://en.m.wikipedia.org/wiki/Jacquard_machine",
"https://en.m.wikipedia.org/wiki/Mechanical_calculator"
],
[],
[]
]
} | [
"url"
] | [
"url"
] |
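A tiny sketch of the ideas in the answers above: a few on/off switches combine into a number, and Boole's basic operations are enough to build arithmetic. The specific switch values are arbitrary examples.

```python
# Three switches give 2**3 = 8 possible combinations; each combination can name a number.
switches = (True, False, True)          # on, off, on
value = sum(bit << i for i, bit in enumerate(reversed(switches)))
print(value)                            # 0b101 == 5

def AND(a, b): return bool(a and b)
def XOR(a, b): return bool(a) != bool(b)

def half_adder(a, b):
    """Add two 1-bit values: XOR gives the sum bit, AND gives the carry bit."""
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))   # (False, True): sum 0, carry 1, i.e. binary 10 = decimal 2
```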
ijayuy | what is Cloudflare? Why is it so integral to games, and why do so many sites/services use it? | Technology | explainlikeimfive | {
"a_id": [
"g3cdhh4",
"g3cbgi5"
],
"text": [
"Think of it like going to the one fruit store to buy some food. You really want apples so you buy a small bag. Now let's say someone else wants to buy apples so they also go to the store. If too many people try to get apples the store will run out! What if someone on the other side of town wants apples? They have to travel across town to buy them. What a pain! Instead if a company buys up loads of apples and distributes them in stores all over the place then anyone can get apples and they're much less likely to run out and it's also more convenient for everyone who wants an apple. Internet traffic is kind of similar. You ask for a webpage from a server (a type of computer) and then it gets the info and sends it back to you. If the server is too busy you won't get the webpage you asked for so instead, if that data is stored all over the world on loads of different servers then everyone will be able to get it faster and quicker. Cloudflare is one of the biggest companies that provide this service. Other companies doing this are Akamai, AWS Cloudfront and many more.",
"It's a service that helps mitigate Distributed denial of service attacks and provides internet security. It ensures reliability of the companies resources."
],
"score": [
6,
4
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ijc779 | What happens when you don't unplug the charger from a device after it reached a 100%? Can it overcharge? Will something bad happen? | Technology | explainlikeimfive | {
"a_id": [
"g3ckz6w"
],
"text": [
"No, if the device is designed like it should, the charging automatically stops when the battery is full and the device runs from the charger directly so the battery doesn’t discharge"
],
"score": [
4
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ijejap | Why are there still "business days" for online transactions when just about everything is done automatically by programs? | Technology | explainlikeimfive | {
"a_id": [
"g3d9dgt",
"g3d3tct",
"g3djzoh",
"g3er8tg",
"g3ebx3t",
"g3eiey5",
"g3etimc"
],
"text": [
"Actual files (pretty much excel files) are transferred between banks and even though most parts of that process is automated there are still manual components to it. There are also other issues that need to be processed (flagged accounts receiving large sums for example.) And people don’t typically work during the weekends in finance. Beyond this you should know that legacy banks has legacy systems (think DOS) that they should replace but so much is built upon this ancient tech that a process like that would require boatload of investments from most banks at once and... don’t fix what ain’t broken.",
"Even though the order processing is done by computers, most of the 'picking' and packing, even in large Amazon warehouses, is done by hand. Plus, shipping companies still charge a premium for weekend pickup and delivery. So unless you pay for something above and beyond the free/basic shipping, your Friday night order won't go out until Monday anyway.",
"Even though most of everything is automated, there is still an amount of human touch that goes into every transaction. Packages have to be hand sorted, hand packed and picked up by a human, mailed through the system and delivered by a person. Banks make money on \"the float\" this is the time when money comes into the bank (say Friday at 5pm) and doesn't leave the bank until Monday at 5pm, because the \"bank wasn't open.\" Yes, the computer got the money and knows what to do with it, BUT...the bank, by holding th money for a weekend may have made .0000000001 on the money in that time. Now multiply that by billions of transactions and over the course of the life of the bank, you just made the bank some very serious coin. Also, there is a bit of human oversight that goes into the most automated of systems. computers crash. They miss stuff, they can botch stuff. there are human error corrections that can resolve something before it goes out into the wild. Those business days help with that.",
"This thread is full of justifications about why banking transfers can't be any other way, but [other countries have already figured this out]( URL_1 ), and more are working on it. People are also pushing for it in the US, perhaps most famously [Elizabeth Warren]( URL_0 ). This is how it always goes in the US: if you ask a question about why any system is the way it is, we will reflexively respond with uninformed yet vehement off-the-cuff arguments that change is impossible and the current system is the best we could ever realistically hope for. This happens with healthcare, education, criminal justice, immigration, race issues, and more. I hope we can find a way to move past that and learn from other countries once in a while.",
"It comes down to accountability. You can't make a computer accountable for any error, so if an error happens when nobody's working, that's a problem. No matter how automated any system is, the accountability problem will always force a human to be part of the chain of custody.",
"It's already been fixed in the UK and we now have instant payments and (nearly) instant BACS. Any (large) bank not processing paynments is merely collecting interest of your money by holding it up.",
"The biggest reason is that there is an electronic payment system between banks known as the \"Fedwire\" which is operated by the Federal Reserve and is only open on business days. All banks are required by law to be open on all business days (weekdays which are not federal bank holidays)."
],
"score": [
265,
40,
17,
14,
7,
4,
3
],
"text_urls": [
[],
[],
[],
[
"https://medium.com/@teamwarren/end-wall-streets-stranglehold-on-our-economy-70cf038bac76",
"https://www.swift.com/sites/default/files/documents/swift_payments_whitepaper_realtimepayments.pdf"
],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ijg05n | Why do most BIOS look so dated compared to your actual OS? | Technology | explainlikeimfive | {
"a_id": [
"g3dexka"
],
"text": [
"TLDR: Because hardly anyone goes into the BIOS, it's not a daily part of maintaining your PC, so there's not much of a point spending all that time and effort to make it look pretty. Older BIOS's had to fit on a single chip and were rarely used for anything other than vary basic configuration of your PC so a low-quality text based interface was ideal. Today the chips on motherboards can be made cheaply with orders of magnitude more memory so if they wanted to they could load an entire Linux based OS from the BIOS chip with a full GUI if they wanted, but there's not much point. The BIOS interface only has to serve the specific purpose of modifying key hardware settings and tbh most users leave it all on default anyway! Instead they just make a basic GUI for changing the relevant settings. The less complicated it is the less code the manufacturers need to build and maintain, and the safer it is ie less bios updates for security holes."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ijg8je | How do one-way mirrors work? | Technology | explainlikeimfive | {
"a_id": [
"g3dgx7w",
"g3dh1gq"
],
"text": [
"A little bit of physics and a lot of marketing. One-way mirrors are just partially reflective mirrors. They let some percentage of light through and they reflect most of the rest. They do this equally in both directions, which the laws of physics are actually quite adamant about. If a mirror could be set up to only reflect in one direction (while not consuming power) then it could be used to make a cold thing warm up a hot thing, which breaks thermodynamics. Once you have a partially reflective mirror you just make one side of the mirror very bright and the other side very dim. This way when you're standing on the bright side of the mirror you see a very bright reflection and have a very dim view of the other side. When you're on the dark side you see the bright room clearly while the reflection is too dark to matter. This effect is helped along by the fact that there's a *massive* range of brightness that our eyes can deal with. A brightly lit room can easily be hundreds of times brighter than a dim one. That really helps the view of the dim room get washed out by the bright room.",
"They aren't one-way. They are just half-silvered so that they do reflect some of the light but also let some through. When you look from a dark room into a bright room, there's not much light to be reflected, so you almost exclusively see the light passing through from the bright room. When you look from a bright room into a dark room, there's lots of light that ton be reflected, it completely drowns out the small amount of light passing through from the dark room, so you only see the reflection."
],
"score": [
36,
5
],
"text_urls": [
[],
[]
]
} | [
"url"
] | [
"url"
] |
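A rough back-of-the-envelope sketch of the brightness argument in the answers above; the 50/50 split and the room brightness figures are made-up illustrative values, not measurements.

```python
# Half-silvered mirror: assume it reflects 50% of light and transmits 50%, in both directions.
REFLECT, TRANSMIT = 0.5, 0.5

bright_room = 1000.0   # arbitrary units of light in the bright room
dark_room = 1.0        # arbitrary units in the dark observation room

# Observer in the bright room: their own reflection swamps the light leaking through.
print(bright_room * REFLECT, "reflected vs", dark_room * TRANSMIT, "transmitted")   # 500.0 vs 0.5

# Observer in the dark room sees the opposite mix, so the mirror looks like a window.
print(dark_room * REFLECT, "reflected vs", bright_room * TRANSMIT, "transmitted")   # 0.5 vs 500.0
```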
|
ijgvxc | Why does internet explorer have such a bad reputation with tech-savvy people? Average users seem to have no problem with it. | Technology | explainlikeimfive | {
"a_id": [
"g3dlxt4",
"g3dlsdx",
"g3dlx71",
"g3eql70",
"g3dmzul"
],
"text": [
"Here's just a few of the more common reasons. 1) It's had a history of security issues over the years resulting in people getting infected with viruses. Since microsoft typically only issues updates once a month unless something is considered seriously bad, Patch Tuesday might result in Exploit Wednesday when evil hackers know of a bug that wasn't fixed in a month's updates. 2) A lot of businesses invested in ActiveX applications for their internal stuff assuming Internet Explorer would be running them. This caused a number of problems where updates to IE would result in these applications breaking, plus Microsoft was pulling ActiveX support entirely over time. This is good from a security standpoint but would break so many applications. The options are to update the application (very expensive to completely uproot the user facing portion of it) or *not* update IE. Due to item #1 not updating IE is a major risk when you take it out into the public internet. 3) There's been a lot of tribalism on the internet about browsers. Web browsers making up their own standards and then web sites implementing them to the detriment of other browsers was (and honestly, still is) a problem and IE was pretty bad about it in the early days. While things have improved dramatically, IE has a stain on its reputation.",
"Several reasons. 1) Microsoft was trying to \"win\" the browser wars by making IE do certain things in a non-standard way. They hoped that people would be forced to write their webcode in an IE-only manner and that all other browsers would be left out. This partially worked. There were lots of corporate web-based systems that were IE only and often an ancient version of IE, like v8. 2) The browser was an integrated part of the OS. This mean that a security hole in the browser could completely compromise your whole system. This happened very, very often. 3) General Microsoft hate.",
"IE has a history of bad support for common web features as well as issues around browser security. \"Average users\" don't understand this, they see a webpage look bad and just assume the designer didn't know what they were doing. Or they get a virus from a site and just assume the site itself is bad (very much can be the case, but it may be caused by exploits that other browsers are immune to). They don't see the above and think \"Wow, IE is a crappy browser\", they see the above and think \"Wow, this website sucks\". In short, average users don't know enough about internet technologies to recognize that IE is a bad browser.",
"From a web developers perspective IE did not follow standards in a way that other browsers did. Microsoft was really slow to adapt to HTML5 as well. A web developer could build a website that worked correctly in Firefox, Chrome, Safari and Opera but have to write a lot of conditional code for IE to get the same behavior.",
"You have to really go back 20+ years. Back in the early days of the internet the only thing that existed was Netscape Navigator and it was quite primitive compared to a modern internet browser. All it could really do was display text and images, so a lot of the stuff that you take for granted on the internet - like being able to interact with a website in any meaningful way - didn't exist. Then came Internet Explorer. Internet Explorer had what was called ActiveX, which was essentially a primitive version of Flash, which itself is a primitive version of HTML5 (which is what gives websites all of the functionality that they currently have). ActiveX was the first thing that really allowed websites to be interactive and play sound or video. Essentially, when websites began running ActiveX was the first point at which they began to somewhat resemble modern websites. Internet Explorer with ActiveX was such an improvement over Netscape Navigator that basically everyone switched to it. But after a few years hackers realized that it was trivial to install malware on people's computers through ActiveX, and allowing ActiveX to run on a website was super risky. Most people didn't realize that, continued to allow ActiveX to run, and ended up with computers that were filled to the brim with malware and viruses. Microsoft tried to fix the security problems in ActiveX but that just ended up making Internet Explorer a slow, buggy mess. Then came Firefox, which did everything that Internet Explorer could do with ActiveX but without actually having ActiveX. Firefox was so much more secure, and so much less buggy, that Internet Explorer developed a reputation so bad that it lives on over a decade after all of the problems with it were fixed."
],
"score": [
37,
19,
11,
9,
9
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ijmamn | Why do games running on emulators still have the same performance issues as they had on their original console? | Today I decided to get a NES emulator on my laptop, which is already an old machine. I decided to play Mega Man, a game I actually own, and I could not help but notice that this game, launched in *1987* for the *Nintendo Entertainment System*, has the same visual glitches and performance issues you would expect from playing it on the original console, even on a machine that came 23 years after it. If there are too many enemies on the screen and you shoot, the game will get slower. Whenever Mega Man climbs to the top of the screen, the score will glitch out. All of these things still happen, even though the computer is powerful enough to run this game at a much higher resolution than what it was intended for. My computer is not much to brag about, but even it still has upwards of thousands, maybe even millions of times the processing power that the original NES had. So why do these things still happen? | Technology | explainlikeimfive | {
"a_id": [
"g3eqf7r",
"g3erodh",
"g3fmvfk",
"g3etavo"
],
"text": [
"Because it emulates the original hardware, including it's limitations and speed, and every bug, glitch, and whatever else. It actually takes quite a bit of processing power to emulate chips in software. The emulator actually emulates every chip in the original system, down to the single transistor, so it actually takes quite a bit of power to do that. Even the best computer will only emulate at the original limitations of the hardware on purpose. However a slower system can and will show a reduction in frame rates and slowness if it can't keep up with the emulation.",
"As u/ziksy9 said but also: Some emulators let you goose the speed a bit but the frame rates on those older machines are often fixed to the processor steps so... if you speed up the processor, you also make the game play faster overall. Enemies move faster, cursors move faster, etc. which can render the game unplayable. The original XCOM comes to mind. If you play it on an emulator without clocking it down to 8mhz, the globe moves too fast to stop it where you want on purpose. Placing your bases is either an exercise in luck. Similarly, mousing to the edge of the map makes it auto-scroll completely off the screen in an instant. Imagine BattleToads with enemies moving too fast for you to see.",
"because a Good emulator will mimic the hardware as is, this includes its quirks and bottlenecks. a lot of the work revolving around Emulators is finding a means to translate instructions of the specific hardware into something a x86/ARM based system can interpret and doing this requires a good understanding of the specific hardware and its firmware. Lots of emulators cheese this by requiring a File representing a BIOS dump of the system and they proceed to create a program that can read it. This allows the devs to reduce the work required, but the result will inherit all of the quirks of the simulated hardware and the occasional hiccup caused by \"alien\" instructions(be warned that any legit dev for these cannot provide you this dump file, since this file is the IP of the Console manufacturer). on the other end you have a few emulators that actually Reverse engineered the Code of the original hardware and made a x86 compliant emulator(ie: the original BLEEM for the PS1), this kind of emulator not only can perfectly mimic the system, but with careful coding they can actually improve on its features,since the code base is fully understood, however the workload moves to instead being compatibility since different games, might exploit different aspects of the system.",
"The nice thing about programming for a game console is that the platform is consistent: you always know exactly what hardware makes up the system, without having to worry about third-party upgrades (not counting everything that happened with the N64's memory, because that was *weird*) or systems otherwise mysteriously not doing exactly what the devs said they would do. We often hear about game developers pushing the hardware to its limits. But especially with older hardware, the opposite of this also often occurred: developers learned to *depend on* the hardware's limits to govern their games' behavior. This was most often done when setting timers, because that requires fewer system resources than other methods of timing things, but it sometimes spread into other areas. Not every game did this, and even when they did, sometimes things happened that could redefine those limits in later games (the NES was especially notorious for this, with all the different mapper chips that came out). It is possible, to a limited extent, to emulate a system at a high level without taking the system's limitations into account. That's the way it used to be done. UltraHLE, the first n64 emulator, even named itself after this technique (HLE = High Level Emulation). But there are limits to what you can do with this: games that depend on a system's limitations will break when those limitations aren't present, and the further you get into a system's lifetime, the more often games will do this. So especially with older systems, if you really want to achieve full compatibility with a system's entire library, you wind up having to emulate the system's limitations too."
],
"score": [
9,
5,
3,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
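A toy illustration of why emulated games keep their original slowdown: the emulator deliberately executes only as many CPU cycles per frame as the original hardware could, no matter how fast the host machine is. The cycle budget, instruction costs, and the ToyGame object are invented for illustration, not a real emulator API.

```python
import time, random

CYCLES_PER_FRAME = 29_780          # rough cycle budget for one 60 Hz frame (illustrative)

class ToyGame:
    """Stand-in for an emulated CPU/game; each 'instruction' costs a few cycles."""
    def step(self):
        return random.choice((2, 3, 4, 5))   # cycles consumed by one instruction

def run_frame(game):
    start = time.perf_counter()
    spent = 0
    while spent < CYCLES_PER_FRAME:
        spent += game.step()
    # A game that needs more cycles than the budget simply falls behind -- the
    # same slowdown the original console showed when too much was on screen.
    remaining = 1 / 60 - (time.perf_counter() - start)
    if remaining > 0:
        time.sleep(remaining)      # on a fast host, most of the frame is spent waiting

game = ToyGame()
for _ in range(60):                # emulate one second of game time
    run_frame(game)
```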
ijnpba | How does smoke damage electronics? Can you clean it? | My house was damaged in a fire, and I was told by the team that's overseeing the repairs that any electronics in the house will likely be documented by an insurance claims adjuster and added due to "smoke damage". If I'm understanding it correctly, I'll be permitted to keep everything regardless of the fact that it was added to the claim, so I figured it wouldn't hurt to see if I could clean them up and keep using them, especially if it saves me from having to buy everything again. So explain it like I'm five, Reddit: How does wood smoke damage electronics? How easy would it be to clean them? | Technology | explainlikeimfive | {
"a_id": [
"g3f3kg1"
],
"text": [
"Three things happen. There's a film the forms which insulates parts that are not meant to have insulation and can cause resistance to the flow of electricity. The second thing is smoke has a electrical charge which can cause electrical shorts. The third thing is soot which happens as a bipoduct of the fire and smoke. It is solid and can make electronics short by bridging connections. A short is when a circuit has no resistance (or is overloaded) and the parts receive too much charge. Electricity is basically like fire in this instance and it will fry/burn components that don't have enough protection. Resistance is insulation. It prevents some of the heat/charge from being transferred to the parts. Too much resistance stops the electricity from being able to flow. That's considered an open. An open can also be a break in the line. That means electricity cannot flow because there is a gap in the path. Think of how fire fighters who dig trenches as fire breaks make it so the fire cannot continue in a specific direction."
],
"score": [
5
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
ijphrn | How can I tell if a post on social media is from an real individual person or a spam account that is planning on manipulating me? | Technology | explainlikeimfive | {
"a_id": [
"g3fc437",
"g3fd0j0",
"g3fbp1g"
],
"text": [
"You can't. Just don't accept everything you are told as fact. It's nothing new, it used to be called yellow journalism.",
"Grammar is a big indicator along with spelling and punctuation. Also if they ask how you're doing, then immediately ask you a question about politics or loaning money or about a grant etc.",
"If I don't know them personally, I am usually cautious if anyone randomly contacts me. If its someone I know and I am suspicious they may have been hacked, I will text them to confirm. Be careful out there, people are ruthless, especially online. Listen to your gut, that feeling happens for a reason!"
],
"score": [
11,
4,
4
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ijqdud | why does your phone battery die so much faster when using the camera vs normal apps (texting, email, etc)? | Technology | explainlikeimfive | {
"a_id": [
"g3fhqfu"
],
"text": [
"The phone has a new piece of hardware to power and the camera module or modules tend to be a very power hungry component. This leads to the the battey dying faster than it would normally. Believe or not the camera image processing also takes alot of power from the other components than they normally would for instance browsing your email. Though these are two very different things, this is part of the reason that your phone may die faster on twitter than say your email. Rendering images may or may not be a big task for your phone, depending on the image quality. Even then when your phone is rendering many images at once, it can put a huge load on your phone, leading to more power draw and a shorter battery life as opposed to rendering small blocks of text on the screen. ~~This also requires the network chip to be grabbing the data from twitter~~ Probably not the most detailed explanation but it gets the point across"
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ijrjj1 | How does WhatsApp as a company make money even though there are no ads on whatsapp? | Technology | explainlikeimfive | {
"a_id": [
"g3fpl0o",
"g3fpo6s",
"g3fqcij",
"g3fv20i"
],
"text": [
"Message and call flow metadata is valuable. Who chats with who can help you understand social groups, purchasing patterns, etc. Facebook, who owns WhatsApp, benefits from this data and can sell more",
"What’s app used to be a subscription service, where it cost one dollar a year to use. This is how it originally made its money, since it was relatively cheap and good messenger, millions of people used it a year. When it was bought by Facebook, they eventually moved away from the subscription fee, and they’ve recently hinted that how WhatsApp is monetized is going to change in the future, but that right now “it isn’t monetized in any reasonable way”. So it’s quite possible that Facebook bought the massive messaging app to get at all its users, and to integrate it with Facebook, then got rid of the subscription fee to get even more users, and are still in the process of best figuring out how to make money off those users.",
"Good question. Initially there was a very small fee of 1$ to download it and a subsequent yearly subscription to use WhatsApp that cost 1$ as well. It has since been made free to download and use. Based on what I've come across, Facebook bought WhatsApp because of the treasure of user behavioral data, contacts and personal information they could analyze and use. However it's also been said that there plans for ads and to further monetize WhatsApp business use in the future. That being said, based on the public info available, their current earnings are negligible , they might actually even be losing money. But being the most popular messaging app in the world , with over a billion daily active users, their future earning potential is huge. TL;DR - Whatsapp isn't really making money right now but it's earning potential is huge",
"They know who you talk to, when, what world events cause you to chat more, what links you share, what sites you’ve also been to, what ads you’ve seen what ads you’ve clicked what ads your friends have clicked etc etc etc. they can then sell advertisers spots at better rates because they can assure them a more focused group of target users that will see their ads. So the ads (Facebook) sells are more valuable (and can be sold for more $) because the users who see them will be more likely to click them / react to them."
],
"score": [
16,
9,
4,
3
],
"text_urls": [
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
|
ijtm84 | What does refreshing a web-page do exactly? | Technology | explainlikeimfive | {
"a_id": [
"g3g0t9o"
],
"text": [
"It asks the server to send the page again. If the data has been updated since you first opened the page, then it will send you the updated data. Sometimes if there are network issues, parts of the page might not be received by the browser, so refreshing makes sure the server sends everything again."
],
"score": [
11
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
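A small sketch of what a refresh amounts to at the HTTP level, using only Python's standard library; the URL is a placeholder. A forced refresh typically sends a `Cache-Control: no-cache` request header, and browsers often follow up with a conditional request so the server can answer "304 Not Modified" when nothing changed.

```python
import urllib.error
import urllib.request

url = "https://example.com/"   # placeholder address

# A plain refresh just asks the server for the page again, bypassing caches.
req = urllib.request.Request(url, headers={"Cache-Control": "no-cache"})
with urllib.request.urlopen(req) as resp:
    body = resp.read()
    last_modified = resp.headers.get("Last-Modified")

# Conditional refresh: "only resend this if it changed since I last saw it".
if last_modified:
    cond = urllib.request.Request(url, headers={"If-Modified-Since": last_modified})
    try:
        with urllib.request.urlopen(cond) as resp:
            print("Page changed, received", len(resp.read()), "bytes")
    except urllib.error.HTTPError as err:
        if err.code == 304:
            print("Not modified - a browser would reuse its cached copy")
        else:
            raise
```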
|
ijvcxd | What is a WEP key? | is a wep key the same as my wifi password? the reason i’m asking is because i’m trying to play my 3DS. thanks in advance. | Technology | explainlikeimfive | {
"a_id": [
"g3ga9pw",
"g3ga106",
"g3ga8u6"
],
"text": [
"It stands for Wired Equivalent Privacy. It's a really old way of putting a password on your wifi. The key would scramble up stuff so only your router could figure it out. Thing is people figured out a way to unscramble it, so the guys who made WiFi told everyone to stop using it. These days we use Wi-Fi Protected Access (WPA), usually WPA2 or WPA3 which are similar but better.",
"WEP stands for Wireless Encryption Protocol, the key is indeed the passphrase. However, WEP is vulnerable to a number of attacks, and thus is considered weak and obsolete. You should switch to WPA2 encryption. If your wireless access point doesn't support that, it means that it is also quite old, and you might actually get a noticeable speed boost from replacing it as well.",
"WEP was an old standard for encrypting wifi connections. It used a key of 10 or 26 hexadecimal digits (numbers and letters between a and f). However a lot of clients allowed you to use a password which would be converted to hexadecimal. But nowdays nobody is supposed to use WEP as it is very insecure. Most people use WEP2 which use a normal password."
],
"score": [
7,
3,
3
],
"text_urls": [
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
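A quick sketch of the key sizes mentioned above, plus the simple hex conversion some clients used for passphrases; conversion schemes varied by vendor, so treat the passphrase mapping as an illustration rather than a guarantee that any particular router does it this way.

```python
import binascii

# Each hex digit is 4 bits: 10 hex digits -> 40-bit key, 26 hex digits -> 104-bit key.
for digits in (10, 26):
    print(digits, "hex digits =", digits * 4, "bits")

# Some clients simply hex-encoded an ASCII passphrase to build the key.
passphrase = "magic"                                 # 5 characters -> 10 hex digits
key = binascii.hexlify(passphrase.encode()).decode()
print(key)                                           # '6d61676963'
```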
ijvg8h | What do spoilers (on cars) actually do, and how? | Technology | explainlikeimfive | {
"a_id": [
"g3gb82d"
],
"text": [
"People often confuse spoilers and wings which both are aerodynamic devices on cars to improve performance. But they have very different ways of working. A spoiler is designed to direct air into the void that forms behind the car. By directing air into this void the pressure behind the car increases and the car is not getting sucked back as much. This makes sure the aerodynamic drag of the car is as small as possible. In general cars that is designed to be fuel efficient or be fast in a streight line would have a spoiler to reduce the air resistance. However cars designed to navigate corners at high speed will have a similar looking but very different wing instead. Unlike spoilers a wing will increase drag but will push the car down into the ground improving traction."
],
"score": [
28
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ik119p | Why do analog thermometres take 5 minutes to measure, while a digital one can do it in seconds? | Technology | explainlikeimfive | {
"a_id": [
"g3hhejc"
],
"text": [
"It takes time to transfer heat from your body, through glass, into mercury to the point it is about equal to your body temperature. Digital thermometers work faster because the bits you heat up with your mouth are smaller and change faster allowing for faster readings."
],
"score": [
3
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ik26b6 | How do we record (and play sound)? I don’t know what is done to allow us to record and re-play audio | Technology | explainlikeimfive | {
"a_id": [
"g3ho7vm"
],
"text": [
"It's actually a really interesting inversion of speakers! Speakers use electricity to move magnets, and that magnet moves a diaphragm, and shaking that diaphragm will shake the surrounding air, causing sound waves. Microphones have a diaphragm too. When sound waves hit the diaphragm, the diaphragm will shake, and that moves a magnet, and that creates electricity."
],
"score": [
6
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |
|
ik3mtc | Why do updates go from something like 9.04.32 to 10.58.7? | Technology | explainlikeimfive | {
"a_id": [
"g3i097h",
"g3i4qb3",
"g3i0ea4",
"g3i0vva",
"g3i49x6"
],
"text": [
"Software versioning follows a fairly uniform scheme that tracks revisions based on their significance to the program. The gap in the revision numbers you're seeing is the internal revisions and changes that software went through before it was released to you.",
"One common numbering scheme is Major.Minor.Patch The first digit indicates Major releases which would be a significant overhaul in the code. It might be something visible to the user like a completely redone UI or something under the hood that breaks compatibility with old versions or adds support for some new set of features The second digit would indicate minor builds within that major release. Maybe they just added support for custom avatars, that gets a minor rev! And the ability to switch to dark mode! That gets a minor rev! Sometimes multiple sets of changes will be rolled out at the same time so you could see the number go from 9.04 to 9.08 because before they got to 8 they had 4 minor features that were each ready to go. If you're seeing high numbers on the end its generally a patch or nightly build level. Lots of software projects are compiled every night with the latest code which is checked in the morning so every night this number can increment and they select last tuesday's nightly build as what they're going to ship so you get 58 there because it was 58 days from the last build, or sometimes you get 1248 because they have kept that counter going through all releases Other times a company will keep their build numbers hidden and instead show you a revision so 1.2.1 would be version 1 after the second patch with one hotfix applied, and they'll only update these numbers for versions that users end up getting not nightly builds that no one off team sees.",
"Sometimes I think its because of some being \"bug fix\" updates and some being \"large\" updates. Forexample games often updates 1.x and when a arge update appears they go from 1.x to 2.x :)",
"there is no reason why they wouldn't. the numeration is entirely up tothe creators. they could even generate random numbers every time or just name their versions after extinct species. though traditionally & for convenience, each number usually denotes amount of change. the left is the most significant, the right most is \"just small improvements\". one of the ways a company can 'jump' over versions because versions between were never released; they existed simply in the middle of development, I. e. one programmer changed something and the version became 9.04.33, and 5 minutes after another programmer changed something and the version became 9.04.34",
"In general a software release like 10.5.03 would break down like this: 10=Those awesome feature sets that took two years to develop and now you can install it on a virtual machine instead of just an x86 platform .5=The 5th iteration about one year after release 10 came out. It incorporates some major bug fixes and one a couple of new minor features. .03= We fixed that one little issue that only one company mentioned to us and they're the only company that will ever even notice it. You could install it if you have 10.5.02 installed but you'll never notice the difference."
],
"score": [
12,
5,
3,
3,
3
],
"text_urls": [
[],
[],
[],
[],
[]
]
} | [
"url"
] | [
"url"
] |
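A minimal sketch of the Major.Minor.Patch comparison described in the answers above, assuming a plain dotted-number scheme (real-world schemes sometimes add suffixes like "-beta" that this ignores).

```python
def parse(version: str) -> tuple:
    """Turn '9.04.32' into (9, 4, 32) so versions compare numerically, not alphabetically."""
    return tuple(int(part) for part in version.split("."))

releases = ["9.04.32", "10.58.7", "9.4.33", "10.6.0"]
print(sorted(releases, key=parse))      # ['9.04.32', '9.4.33', '10.6.0', '10.58.7']

# Plain string comparison would get this wrong: '10.58.7' sorts before '9.04.32' as text.
print(max(releases))                    # '9.4.33' -- alphabetical, not what users expect
```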
|
ik4dak | what makes RAM faster if there are no moving parts? | Technology | explainlikeimfive | {
"a_id": [
"g3i6xyc"
],
"text": [
"RAM is faster *because* there are no moving parts. Compared to a hard drive RAM operates at much high speeds mostly because it doesn't have to wait for seek time, which is the amount of time need for the drive to rotate and the hammer to move to the correct position. Different RAM modules have different speeds due to continual improvements made in the architecture. Newer chips are more efficient in design and heat resistant and can operate at much higher clock speeds."
],
"score": [
9
],
"text_urls": [
[]
]
} | [
"url"
] | [
"url"
] |