Schema (column: type, with min/max lengths where reported):
  q_id:          string, length 6
  title:         string, length 4 to 294
  selftext:      string, length 0 to 2.48k
  category:      string, 1 class
  subreddit:     string, 1 class
  answers:       dict
  title_urls:    sequence, length 1
  selftext_urls: sequence, length 1
88i9sf
How does a wax cylinder phonograph produce sound?
Technology
explainlikeimfive
{ "a_id": [ "dwksdpk" ], "text": [ "It super cool. What's neat is that it works the same way a vinyl record does (and to some degree the way older CDs used to). But it's more impressive because it's bigger and more obvious how crazy it is that these work at all. When a wax cylinder phonograph is recorded, what they do is use sound waves to make impressions on the soft wax. Sound is caused by high frequency vibrations of air molecules. A rapid increase and decrease air pressure sends a wave of pressure through the air. What can air pressure do? Ever open a Snapple and hear that \"pop\" of the lid? That metal button was held shut by air pressure and when it equalized, it made a noise and was forceful enough to snap that lid. Press it with your hands to see just how much force that is. Our eardrums do this too. They vibrate back and forth like a more sensitive Snapple lid and all the information we get from sound comes from that vibrating. Recording a phonograph reverses this. A large cone concentrates sound pressure waves down to a point. At the tip of this cone is a membrane (like the Snapple lid) that vibrates as the pressure waves increase and decrease the pressure. Attached to that vibrating membrane is a needle point. Picture a can of cranberry sauce. Pour out the cylinder of cranberry inside. A wax phonograph look like this. It's a cylinder of wax. Warm up that wax to make it soft but still solid. Now press the phonograph needle to the wax and start the wax cylinder rotating like it is on a pottery wheel. The needle will leave a neat little groove in the cylinder and the depth of that groove will depend on the sound pressure behind the needle membrane at the time of the recording. As the recording continues, all the variations in sound pressure are captured. Now stop the recording and play it back. Cool the wax to room temperature and it will harden. Hard like a candle. It’s now hard enough that dragging the needle over it will vibrate the membrane. Amplify this little vibration and you’ll get a speaker making sounds that were recorded. Microphones are speakers in reverse. And modern ones do exactly the same thing but use magnetic fields produced by magnets moving through coils instead of a needle making marks on wax." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
88k1e9
How are Chinese fonts made, other than by going through and making a new version of tens of thousands of individual characters?
edit: I should have said that I know Chinese and how characters are structured with radicals, like how the 言 radical appears in many characters. But it's not usually the same size (sometimes narrower, fatter, taller, or shorter), so I don't know how it could be coded to stay regular across all the different characters it appears in (if indeed there is some kind of automated way to apply a font style to radicals individually).
Technology
explainlikeimfive
{ "a_id": [ "dwl56wj", "dwl7dph" ], "text": [ "Basically by making a few thousand individual characters. There are some shortcuts however. Asian characters are put together out of radicals, about 200 different characters that can be used to create the regular characters. So you can reuse a lot of work from character to character that use the same radicals. Unfortunately you can't just make the radicals and then have a computer create all the combinations out of that. It is not quite that regular.", "This article explains how Chinese fonts are made: * URL_0 Just one quote from the article: * *\"An experienced designer, working alone, can in under six months create a new font that covers dozens of Western languages. For a single Chinese font it takes a team of several designers at least two years.*" ], "score": [ 8, 7 ], "text_urls": [ [], [ "https://qz.com/522079/the-long-incredibly-tortuous-and-fascinating-process-of-creating-a-chinese-font/" ] ] }
[ "url" ]
[ "url" ]
88k8zu
Why is the rate at which technology improves rising?
Technology
explainlikeimfive
{ "a_id": [ "dwl6q8r", "dwl6r1u", "dwlvzes" ], "text": [ "Because once the knowledge is acquired to create new technologies, those technologies can be used for more in-depth research and thus speeding up the researching process.", "Better technology allows more people to be supported without devoting their efforts to simple survival. This means more people can be working on advancing technology which in general means faster progress.", "As technology advanced, a smaller portion of the population had to be in occupations required to sustain the rest of the population. This allows more of the population to focus on other improvements. Before the agricultural revolution, nearly everybody was involved in the food gathering process. The agricultural revolution allowed specialization and people to focus on other things since agriculture could feed more people. This continues today in other areas. We can have more scientists and software engineers rather than factory workers. The other part of the equation is the more technology you have, the easier it is to make new discoveries and create new technology. With all this being said, its difficult to quantify what the \"rate\" of technological improvement is. Perhaps there are some metrics where its linear and others where it isn't. It may intuitively feel like a non-linear process but it might be. Perhaps there is a sophisticated model out there that can describe this process mathematically." ], "score": [ 6, 3, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
88kelf
Why are phone batteries fine for years but portable battery packs can deform or 'bulge' within months?
Technology
explainlikeimfive
{ "a_id": [ "dwla62l", "dwl7zcy" ], "text": [ "This is a sign that you have a dangerously defective battery. You are just buying defective battery packs.", "Can you provide an example of where a portable battery pack has deformed “within months”? There’s many kinds of portable battery packs, and if you are asking about what I think you are asking, I’ve had mine for years with no problems. Maybe you only buy low quality packs?" ], "score": [ 10, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
88ki5m
What is GPG Encryption and how does it work?
I tried using it once but got lost in the multiple sets of directions given, considering I'm not a super computer 'geek' (no offense intended). I have something coming up where I will need to learn what it's about and how to encrypt a message or letter.
Technology
explainlikeimfive
{ "a_id": [ "dwlbi8w" ], "text": [ "The Gnu Privacy Guard (GPG) is a an _implementation_ of PGP, (initially) a method for encrypting emails. Searching for PGP might yield more information. PGP is built around public key (assymetric) cryptography, which essentially means that everyone using it has a key pair; one of the keys can encrypt but not decrypt (the public key) and one can decrypt but not encrypt (the private key). This opposed to symmetric cryptography where one key can both encrypt and decrypt. The idea is that you give the public key away to anyone, they can then encrypt a message and send it to you, and only the one with the private key (hopefully only you) can decrypt the message. PGP can also sign messages, which is where it creates a cryptographic hash of the message contents that enables anyone with the public key to check if the message really was from you, and if it was tampered with along the way (while not necessarily encrypting the message). It's possible to both sign and encrypt messages, the signature is then from the sender, telling the recipient that this could only come from a specific person. But how do you find someone's key and how do you know that that key really is their key? Most PGP implementations are able to connect to keyservers around the world so that you can search for someone's public key (say by name or email address). But since you didn't get that key from the person themselves, there's no telling who uploaded it. This is where something called the web of trust comes in, and key signing. PGP public keys can be signed, that is to say, a private key can add a signature (like when signing a message) to a key, this is a cryptographic way of saying \"I, the holder of this key verify that the owner of the key is who they say it is\" so you may not be sure about a given public key, but if it's signed by someone whose public key you do know is trustworthy, it gives you a clue that this key probably is good. Keys can be signed any number of times (potentially strenghtening your trust level), and the chains of trust can be any length. Back in the day there used to be PGP key signing parties, where people would meet up and sign eachothers' keys and upload them to keyservers so that they were sure that the person with the key were who it said they were and helped others be more sure (hardcore security people would say that you couldn't trust a key unless you were given it on a floppy or something by the owner, signed in triplicate or whatever). PGP never had any real mainstream pull, most of the programs for it were difficult to use and the whole web of trust thing never really got enough signatures to be useful. These days, PGP signatures are mostly used for verifying that software hasn't been tampered with, I think debian linux uses pgp with their apt system." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
88l3o5
why does live tv look “cleaner” / different than a pre-recorded show?
Hard to explain, but you can tell when a show is live, so was curious as to why/how this is.
Technology
explainlikeimfive
{ "a_id": [ "dwlfeoc" ], "text": [ "People have become accustomed to how certain shows \"look\". Live TV has a certain \"look\" to it, 3-camera sitcoms have a certain look, soap operas have a certain look, movies have a [certain look]( URL_0 ). The audience expects a show to look a certain way... so the programmers deliver." ], "score": [ 3 ], "text_urls": [ [ "https://www.youtube.com/watch?v=R26_F7pecqo" ] ] }
[ "url" ]
[ "url" ]
88l6mu
Why is every social media platform moving away from chronological order for its feeds?
Technology
explainlikeimfive
{ "a_id": [ "dwlx11n", "dwle3ed", "dwlha6j", "dwlzbnb", "dwlspru", "dwm4eu5", "dwlfzsl", "dwlz86y", "dwlkyez", "dwlgxqu", "dwm1o98", "dwleo0f", "dwm3zcz", "dwm0kcc", "dwlyxah", "dwm76yi", "dwly4sn" ], "text": [ "If your feed is in chronological order, it’s easier to feel like you’ve seen everything and can therefore leave the site without missing anything. If they jumble it all up, you don’t get the feeling of “completing” an action, so you continue scrolling and scrolling, seeing more and more ads.", "It gives the social media companies more control over what content their users are exposed to. More control over the content means more control over the experience, which they tailor to a user so that their social media platform gets used more regularly.", "If something benefits the owner of a platform the platform owner will do that thing. Without looking at the data, I'd guess that non-linear feeds result in more positive metrics for the platform owners. Probably more engagement, longer time in app etc. Also keep in mind that user preferences are only considered when they lead to increased conversion, leads, etc. Tldr: apps do whatever makes you click more. Even if users *say* a thing makes them *like* the app more, if that thing makes them engage less, the app won't do it. Tldrtldr: what makes people convert is considered over what people say they like. Source: I'm in marketing and data from user surveys often doesn't correspond with analytics data about what actually results in higher conversion.", "I noticed that content providers stopped dating their articles a while ago. Not what you're referring to, but annoying, as you never know how current the information is...", "Every time I jump on the ol FB at my computer I have to change it to chronological, and it always changes back, so infuriating. I switched it because that’s how I like it!!!", "There's something called [operant conditioning]( URL_2 ). This is a psychological behaviour mechanism in humans (as well as many animals) that causes us to do certain behaviours. The algorithms used in social media websites with scrollable newsfeeds are built upon this knowledge. A researcher named B. F. Skinner examined behaviour motivation in animals. For a long time, people believed that animals (and humans) would only learn to reliable perform an action for a reward (usually food), and this is known as [classical conditioning]( URL_3 ). [He had this \"box\" where he would test pigeons and rats]( URL_1 ), with a switch inside that the animal would press to get food. Now, when the food morsel would always come every single time the animal pressed the switch, or at regular intervals, the animal would learn this and press it enough times to eat their fill, and then leave the switch alone. The most interesting thing that Skinner found, however, was that **if the food only came at *random* intervals, the animal would press the thing forever, even past the point that they were full.** It was then learned that animals (and humans) would more reliably perform an action in response to a stimulus if the reward was **unreliable**, not always good. Operant conditioning has been used in things like slot machines to keep players playing even when there's nothing left to gamble. **Now how does this relate to social media?** Every time you open the site to scroll, you may see something that really interests you, so you say, \"Ok, let's keep scrolling.\" You might scroll past two boring posts before seeing another interesting one. 
Your brain then wants to keep scrolling, waiting for the next interesting one. You will constantly think \"just one more, maybe this next one will be good,\" even if you're way past the point of being entertained, and are now just bored. [Social media newsfeed algorithms are specifically designed to manipulate human psychology to get you to keep scrolling]( URL_0 ), because the longer you're on the site, the more likely you'll run into ads.", "If they go purely chronological, then you see what happened most recently, not what is most interesting to you. The social media platform has significant data on what you find interesting, and by serving up what you find interesting, spaced appropriately to keep you on the page, they get you on the page longer and more frequently. This integrates the social media platform into your life more deeply, and ensures you continue to use the platform, and hopefully contribute to it (and by using and contributing, provide more data for them). Someone who has followed a bunch of pages because of promotional deals or games or other semi-forced pulls, but is only interested in finding out what their family members and actual friends are up to, their feed is SLAMMED with game/advertisement material. Pure chronological means they will not see the people they want to see, and so they will just ignore the feed. So, if their data indicates you want to see information from family, they make sure you see that. Just like Pandora, there will be some random elements tossed in there to see if you engage with them as well, thus feeding more information on what you like to ensure the feed becomes even more addictive for you.", "I may be late to the party but what the hell -- sites like Facebook and Twitter have algorithms that sort posts based on the interactions they get from users both in your network and outside of it. If you notice, some people's posts don't appear in your timeline even if you are friends. That is because you and those people do not interact much online. Now say your friend -- let's call her Susan -- and you tag each other in posts or whatnot; you would then see that Susan's posts have more traction in your feed, because, as FB/Twitter determines it, you are likely to be interested to find out what's up with Susan, and therefore be likely to want to see what she posts. Multiply this effect by a couple of times and the chronological order of posts quickly falls into disarray. Your feed is now a compilation of the posts you are most likely to be interested in, hence the Top Stories feature of Facebook and ICYMI of Twitter.", "Because chronological feeds are worthless as shit as soon as you have any commercial participants at all, since those are incentivized to post more to increase their visibility. That results in an arms race of posting as often as they can get away with, which drowns out all the non-commercial content, which in turn makes the platform unattractive to end users.", "They want you to stay on the service for as long as possible for each visit, so they can get more advertising dollars through exposure. By putting all of the \"very best\" content at the top of your page, they think you'll have a more enjoyable experience during your browsing session, and you're more likely to keep scrolling. They also use this as a way to present a more varied experience if you log on multiple times in a short amount of time. If it's chronological, then when you log in a 2nd time after 10 minutes, you see the same stuff, and leave again. 
But if they don't have to be chronological, when you log back in, they can re-sort and show you the stuff you didn't see last time, and you then stay longer on your return visit as well.", "Most people follow too many sources for them to be able to see all the content. Ranking attempts to show you the most important stuff without scrolling through everything. If you have 1200 pieces of eligible content, you are likely to only see a couple hundred. There is a very high likelihood that in that remaining 1000 there is stuff you would want to see.", "Because time is a static relationship. When a company pays the social media company for more exposure, they need to get more exposure - without regard for things the social media company can't control, like time.", "Instagram has mentioned they will bring back a chronological option; this could be seen as a way to become the more popular option for viewing content.", "Because screw you, that's why. Or at least, that's the impression I get. \"What? You wanted to see this yesterday? PFFFFFT, NERD!\" - Zucc", "Go to any subreddit and sort by new. Then sort by best/top. Clearly non-chronological is better if you're trying to sort by quality, and that's what social media companies are trying to do to increase engagement the same way that a link aggregator attempts to here.", "My Facebook notifications being in whatever crazy order they're giving me now sure isn't improving my engagement. I was already getting annoyed with how the app is super unreliable in showing what I've checked. I'll tap it, check the thread, notification gets unhighlighted, refresh my notifications, same notification is highlighted again. For a while, Facebook was successful in these notification shenanigans driving up my engagement as I tapped pointlessly at shit I've already checked. But this new weird notification ordering is legit making me close the app faster than I used to, as I check my notifications and think, \"Oh, fuck this mess.\" I'm not ready to give up Facebook because I do have a pretty big gaming group I've been in for years and a lot of the people there I consider good friends, but it's damn frustrating that my interactions with them recently have been hampered by these bogus notifications.", "Two reasons. 1) It’s actually a hard thing to implement now. As large social media sites got to be as ludicrously large as they are now, serving *billions* of users, they made astounding leaps in terms of the technologies they used to chase efficiency and performance at all cost. Some of these leaps involved moving to technologies that distribute your data and the processing necessary to display useful pages for you across large numbers of systems across the world. Some of these technologies make it hard to do operations that used to be very easy, and some of them get harder and harder the more the system grows and gets distributed across more and more locations. Think about the volumes of users, every single one having a unique collection of connected friends, each requiring a unique ordered listing of posts, with each request having a very short time frame that needs to be repeated very quickly, and with the data necessary for it being distributed over a huge number of systems. It’s really hard to do all that quickly. Hell, with the sorts of architectures necessary for such large systems to work, it’s still pretty hard to do that even for a small set of users. 2) Engagement. 
A chronological timeline lists things based solely upon the order they were posted rather than on any other metric the companies behind these sites want to encourage. Maybe they want to encourage popular posts that might go viral, or they want to encourage posts that are likely to push advertising or sales, or maybe they actually want to do a public service and encourage posts covering news events or important health information, or maybe they just want you to engage more with your closer friends, or people more like you, or with posts that align to your interests, to keep you engaged in the community. Whatever behaviours they want to encourage on their platforms, good or bad, it’s hard to do so with a chronological timeline." ], "score": [ 8768, 1458, 222, 130, 69, 37, 35, 31, 24, 22, 7, 6, 5, 4, 4, 4, 4 ], "text_urls": [ [], [], [], [], [], [ "https://www.theguardian.com/technology/2017/nov/09/facebook-sean-parker-vulnerability-brain-psychology", "https://en.wikipedia.org/wiki/Operant_conditioning_chamber", "https://en.wikipedia.org/wiki/Operant_conditioning", "https://en.wikipedia.org/wiki/Classical_conditioning" ], [], [], [], [], [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
88mkvo
How does Google "know" how busy a place is?
Technology
explainlikeimfive
{ "a_id": [ "dwlp20m", "dwltk56" ], "text": [ "everybody using an android phone with their location on in one area sends data to google. Google uses this to populate areas. It also uses the same way to tell us where traffic is etc", "If you have a Google account goto: * URL_0 * Top left click on the 3 vertical bars (aka. Menu) * Click \"Your timeline\" Now you should understand :) _If You're Not Paying for It; You're the Product_" ], "score": [ 21, 7 ], "text_urls": [ [], [ "maps.google.com" ] ] }
[ "url" ]
[ "url" ]
88ne0y
How can Google Maps get a 3D model of an entire city?
Technology
explainlikeimfive
{ "a_id": [ "dwlw1t2" ], "text": [ "They don't use satellites for that. In fact, most of what you think of as \"satellite maps\" aren't taken from satellites at all, because the resolution (for non-military satellites, at least) is way too low. Instead, images are taken from airplanes. For large cities, they fly the airplanes over the city from multiple directions. They can then combine the imagery of the same building from multiple angles to calculate its geometry." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
88ovre
How does mineral oil not destroy PC parts?
Technology
explainlikeimfive
{ "a_id": [ "dwm7px1" ], "text": [ "Mineral oil is non-conductive, so there is no damage to the components. Water, on the other hand, is conducive, so it would totally wreck the system by creating shorts everywhere." ], "score": [ 13 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
88pg3p
How does social media figure out how to advertise something to me even if it’s something I’ve never looked up before?
Technology
explainlikeimfive
{ "a_id": [ "dwmbbla", "dwmbexl" ], "text": [ "There's much speculation that these companies are listening to your conversations but no hard evidence yet that I am aware of. The glasses store could be explained by you going there and your wife's phones location tracking not being disabled, then marketing to you by association. The scuba gear scenario is a conundrum and is why more and more people suspect these apps of listening in.", "That and your electronics can spy on you. Samsung had a disclaimer in the small print for a smart tv that said something like “The data your tv collects may be used by third parties” the TVs have a microphone and camera built in and records everything, this is then sold on to companies." ], "score": [ 6, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
88r62s
If there is lead in solder, are solder joints dangerous to touch?
In instances like computer parts, how dangerous is touching a solder joint?
Technology
explainlikeimfive
{ "a_id": [ "dwmmcns", "dwmmfw3", "dwmmkbt" ], "text": [ "No, it is not toxic to touch solid lead. Lead poisoning results from **ingestion**, **inhalation** or **dermal contact** (e.g. if you were working with organic lead compounds that were easily absorbed through the skin).", "**TL;DR**: *No, but wash your hands afterward anyway.* You wouldn't want to eat it, but generally the solder locks most of it away, and even if is some \"loose\" lead, it has to be ingested for it to affect you. It's recommended that you wash your hands with soap after handling much of though, but if you're going to be handling a lot of it, such as if you're removing old piping or other soldered metalworks, one would hope you'd do that anyway. Even if you're actively soldering and handling a spool of the stuff, you're not going to get lead out of it - soldering irons don't get hot enough to vaporize any lead. That being said, you shouldn't breathe the fumes anyway as they can cause other problems.", "Solder joints on pretty much every electronic product are lead free for about 10 years ([source]( URL_0 )). If you work on older solder joints remember to wash your hands afterwards and you should be good." ], "score": [ 20, 12, 7 ], "text_urls": [ [], [], [ "https://en.wikipedia.org/wiki/Solder#Lead-free_solder" ] ] }
[ "url" ]
[ "url" ]
88rd2a
Why do mobile companies insist on fully charging the phone on first use?
Technology
explainlikeimfive
{ "a_id": [ "dwmpoah", "dwmppru", "dwmpp3l" ], "text": [ "There’s no easy way to measure the charge level of a lithium ion battery, especially at higher charge levels. Battery controllers won’t pick up self discharge that happens over time so charging the device that’s been sitting for a while to full and leaving it on charge for a bit is necessary to recalibrate the device’s measurement of the state of charge.", "Because of regulations, the battery is shipped at around 30% state of charge. Fully-charged batteries can cause much larger fires when damaged, compared to discharged batteries. Lithium-ion battery chemistries don't suffer from the memory effect like NiCd did, so it's not that. In fact, Li-ion cells age slowest when never left in a fully charged or discharged state, and their wear seems dependent on total energy in/out, rather than depth of discharge. The real reason is a human one: when most people get a new device, they (understandably) want to play with it. If the first impression people get of their new phone is the battery going flat, people being people, they're likely to suspect the phone's battery is at fault (insert customer-is-always-right sentiment). To avoid spurious warranty claims and PR battles on social media, manufacturers always write instructions for the dumbest people. Edit: also see Mr_Engineering's explanation about reading state of charge from a partially-charged lithium battery (it's hard, yo).", "They may use non Li-ion batteries, not trust their manufacturers to properly prep the battery, not have updated their generic battery FAQ over the years or may just want you to have a great first impression with how long their device lasts before you load all the charge nibbling apps. URL_0" ], "score": [ 19, 11, 3 ], "text_urls": [ [], [], [ "http://batteryuniversity.com/learn/article/how_to_charge_when_to_charge_table" ] ] }
[ "url" ]
[ "url" ]
88rlta
Why do echo feedback loops increase in pitch and turn into one horrible screeching sound?
I hope this hasn’t been asked before and apologise if it has, but as a gamer I’ve always wondered why repeated echo loops slowly increase in pitch and turn into an awful high pitched sound. To give an example, if two people phoned each other on speaker in the same room, the voices/sounds would echo through each other and slowly turn into an unrecognisable sound.
Technology
explainlikeimfive
{ "a_id": [ "dwmra9r", "dwmwjqy" ], "text": [ "Feedback is when a system captures its output as input, amplifies it, and outputs it again. Most microphones that are designed to capture the human voice tend to favor higher frequencies, also higher frequency sounds tend to be reproduced better in computer sound systems. These differences in a normal scenario are subtle but when you create a feedback loop, any slight advantage one set of frequencies has over another will quickly become apparent as the loop cycles very very quickly. So it's not that the pitch increases per se, but rather the signal gets more and more refined until only the high frequency parts of it remain. Source: I'm a professional live sound engineer.", "So the pitch is caused by a standing wave who's form is defined by the distance between the mic and the speaker. It gets louder because the amplification is getting compounded onto itself. So the pitch is the stable frequency between the mic and the speaker. if you move the mic closer it gets higher, you move it away it gets lower. Interestingly this setup is essentially a phase locked loop since the system self corrects the signal phase as the mic moves." ], "score": [ 113, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
88rsl4
Why do OLED phone displays get damaged by green lasers?
Technology
explainlikeimfive
{ "a_id": [ "dwmszfx", "dwmyaen" ], "text": [ "Come back with that one more time now?", "Could you please give an example or context to this? After speaking with my mind, I was still confused about the nature of this question, so I asked my old friend Google who seems to only know of a few instances where screens where damaged intentionally with high powerd lasers, and the rest were unintentional, however also with lasers far more powerful then a standard pointer(IIIb++)" ], "score": [ 7, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
88s8dm
The Soviet DUGA system
What is it? Can it be used again even though it's abandoned now?
Technology
explainlikeimfive
{ "a_id": [ "dwmzpzn" ], "text": [ "Hello, Chernobyl-visitor here. From what was explained to us on the Chernobyl Tour, DUGA radars send signals bouncing through a specific layer of Earth’s atmosphere (Ionosphere) which are received by a similar DUGA device on the other end. For the Chernobyl one, I believe that its counterpart was in Eastern Ukraine, although I’m not quite so sure about that. Whenever the signal sent by the transmitter hits something, it creates a “black image” on the receiver and generates a warning. During the test phase of Chernobyl-2 DUGA system, it had one false alarm that almost triggered a nuclear war against USA when it detected a plane and mistakingly thought it was a missile (or so we were told). Anyway, although the technology could still be used (not the Chernobyl one because it is mostly derelict), we have far more advanced tech nowadays, so I believe that although they could (theoretically) be rebuilt, they would be largely inefficient." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
88to5v
What's the difference between a "Game Designer" and a "Game Programmer"
I know there are a lot of people who aspire to make video games (myself included), however I did look up jobs and two popped out at me: Senior Game Designer and Senior Game Programmer. I was always under the impression that the person who did the coding also decided mechanics. I would assume that Game Designer is for those who want to be in the creative process, likely being in the board room discussing ideas and workflow, while the game programmer is the one getting it done. Both have their merits.
Technology
explainlikeimfive
{ "a_id": [ "dwn5vsu", "dwn8g1y" ], "text": [ "A designer decides on game mechanics, creates levels, balances gameplay related values and many other things depending on the type of game. A programmer is the one that actually makes the game *do* that stuff, or at least creates the tools and systems that allow designers to do it themselves. There can be a bit of overlap though. A gameplay programmer will probably make some decisions about some of the fine details of the gameplay, but in general will program it to the designer's specification. And sometimes game designers might do a bit of basic programming, which would typically be referred to as scripting. This might involve using a simple programming language like Lua, or a visual scripting system where you create game logic by connecting boxes together in a graph. This would be done within a sandbox created by programmers, so it's really a way of piecing together functionality that they have provided. In a small indie studio you might get people who both design and program, but bigger studios would usually keep them as separate disciplines. A game developer is anyone involved in the development of a game. So that includes designers, programmers, artists and possibly other roles. Although I think some companies define a \"developer\" as just programmers.", "game designer: we should make the game in a ring world where you can see the other side of the world just by looking up. game programmer: if (player.viewport.azimuth > 60) player.viewport.blend(world.mirror())" ], "score": [ 7, 6 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
88v2em
How is the NSA able to "hack" into someone's phone or computer device at any given time, but a hacker cannot? Does the NSA have special tools, or do companies allow them to get into people's devices?
Technology
explainlikeimfive
{ "a_id": [ "dwnfj87", "dwnhd0k" ], "text": [ "> Does the NSA have special tools, or do companies allow them to get into peoples devices? It's a combination of both. The NSA develops its own tools and does intense research into applications to find their weak spots and take advantage of them. They also either ask or demand companies give them information about how to circumvent the security precautions on products like hardware and software. It's also likely inaccurate to assume the NSA can hack any specific item at *any time* and also that hackers *can't*.", "Wha makes you assume that a hacker cannot? The NSA does develop their own tools and they can be very sophisticated - but any vulnerability that the NSA is exploiting could in principle be exploited by anyone else. There was a ransomware outbreak last year that was spread using an exploit originally developed by the NSA." ], "score": [ 8, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
88v55a
How does the computer start to run the operating system in Von Neumann's architecture?
In Von Neumann's architecture, memory space is divided between instructions and data, between programs and whatever will run those programs. I understand that something has to kick off the initial program. How does that work? How does the computer start running the operating system? Please consider Von Neumann's architecture in answering the question.
Technology
explainlikeimfive
{ "a_id": [ "dwnj202", "dwnoecj", "dwnlmu7" ], "text": [ "You press the power switch. The crystal oscillator starts and when it is stable the program counter starts iterating from location 0. The CPU loads and executes the instruction there. The rest is history.", "* The CPU gets powered on. * The CPU has a hard-wired memory address for the start of the code. Say 0x000FFFF0 in the case of a PC (but 0 is a common choice for other types of processors). * The CPU reads the instruction stored at that address. * If it's a PC, the BIOS chip responds with the instruction. * The CPU executes the instruction.", "So, while the responses so far are accurate to the question, you have it wrong. Von Nuemann architecture does *not* separate instructions and data. That's Harvard architecture." ], "score": [ 8, 5, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
88ve3y
How exactly do asthma inhalers work?
Technology
explainlikeimfive
{ "a_id": [ "dwniwx5", "dwnlq7i", "dwnlgv2", "dwnm8mj", "dwniyl3", "dwnmgb3", "dwnmriv", "dwnwhtr" ], "text": [ "During an asthma crisis, the airways in the lungs get tighter and almost closes completely, therefore air cannot pass and the person cannot breathe. Asthma inhalers contains molecules (such as salbutamol) that will force the airways to release the tightness and therefore open again so the person can breathe.", "There are different types of medications in the inhalers, which are known as metered dose inhalers, or MDIs. First, you should understand what asthma is. Asthma is a reactive obstructive lung disease which causes air trapping, which means the patient can breath in, but has trouble getting rid of the air in their lungs due to narrowed pathways in the lungs caused by inflammation and tightening of the smooth muscles that makeup these pathways. The most common medication used that most people are probably familiar with is Albuterol, and this is what is most commonly used in rescue inhalers. Albuterol is a bronchodilator, and specifically a beta agonist. A beta agonist is a medication that specifically targets beta receptors (receptors are sites on cells which cause specific reactions when triggered by specific matching molecules), which are found on smooth muscle, like those in the airways in the lungs. The reaction between the beta agonist and the beta receptor on the smooth muscle triggers the smooth muscle to relax, which expands the airways, allowing the patient to expell that trapped air.", "Most inhalers for asthmatics work because they are bronchodilators. This means that they work to relax the muscles in your airways (bronchi and bronchioles) in order to decrease the resistance and increase airflow into the lungs. They also can help to reduce inflammation, which has the same result of widening those airways. The medicine mimics the signals that your body normally makes to carry out this action. (Pharmacy student who has kind of learned about this so far)", "Is the type of breathing difficulty in an asthma attack different from breathing difficulties in anxiety attacks or allergic reactions?", "Asthma is an inflammation of the *bronchi* and *bronchioles* in the lungs (the smallest tubes where gas goes in and out), which means they swell up and let less air in and out. The medicines in inhalers work to decrease inflammation, so the gases in your lungs can get in and out effectively.", "Rescue inhalers have medications called bronchodilators that force the muscles wrapped around your airway to relax when they are inappropriately contracting (bronchospasm). The drugs are sprayed at high pressure through a small nozzle to turn them into a mist that directly coats the airway because you inhale as you spray the inhaler. (Note: an inhaler does not work on an unconscious person or a person unable to inhale at all) A pill or even an injection would not get the drug to the airway muscles fast enough in an emergency. Since we are talking about inhalers, I'd like to also add that there are some slow-action inhalers that asthmatic people use. These are usually steroids and are usually taken on a schedule once or twice a day and help to *prevent* asthma attacks. They have a long but gentle action that is *not* suitable for saving a person who is having an asthma attack. 
If you ever find yourself helping someone and can't find a knowledgeable person to tell you which inhaler to use (sometimes young kids don't understand their inhalers), it's safe to do a couple puffs from each before you follow up by calling an ambulance.", "For starters, asthma inhalers are broken down into two big general categories - rescue inhalers and chronic management therapy. Rescue inhalers, typically albuterol (which is a short-acting beta agonist), act as bronchodilators and allow more air to travel through the big passages in your lungs to the small ones, which contain the alveoli (which are where gas exchange takes place between your blood and the air). Rescue inhalers (examples include ProAir and Ventolin in the US, both albuterol) are intended to be used 'as needed' during an attack. They can also be used, under MD / pharmacist direction, to prevent exercise exacerbation. In general, rescue inhalers are to control symptoms during an asthma attack. Chronic management involves the use of a once or twice daily asthma inhaler that is meant to reduce the likelihood of an attack. There are several varieties and escalations of therapy that are possible, and medications include the use of long-acting beta agonists (LABAs), long-acting muscarinic antagonists (LAMAs), and inhaled corticosteroids (ICS), with ICS being the preferred first-line agent according to most guidelines currently. These medications work differently and may be used in specific combinations, daily, to prevent asthma attacks or exacerbations. LABAs work the same as SABAs, but over a longer duration. ICS are primarily anti-inflammatory agents. The goal for all of these is to prevent airway inflammation and/or bronchoconstriction from occurring. In the US, all of these medications require prescriptions and should only be used under the direction of a medical professional. Incorrect use can be extremely dangerous, so be sure to follow the instructions provided by your physician or pharmacist. Specifically, the types of inhalers used for chronic management should never be used for 'rescue' or acute events. That is basically how they work. If you have further questions about the individual pharmacology I can try to simplify that a bit and provide it. Additionally, there are different methods of drug delivery - propellant-based inhalers (like those labeled HFA) and dry powder inhalers (DPI). As you might imagine, propellant-based inhalers use a propellant under pressure to deliver the drug as deeply as possible. Dry powder inhalers instead rely on the user to breathe the powder in. Both have benefits and drawbacks. Further, there are devices called 'spacers' which can assist in drug delivery quite a bit.", "ELI5: A rescue inhaler gives you a puff of special medicine straight to the back of your throat. You have to use this medicine because it can send a *very* important message that your brain can't: \"Hey, relax!\" Then your lungs realize you are right and that they need to loosen up, so they relax and you can breathe better again!" ], "score": [ 594, 113, 23, 12, 11, 6, 6, 3 ], "text_urls": [ [], [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
88y0ci
Why do cables - like laptop charging cables and phone USB cables - stop working without any obvious damage?
Technology
explainlikeimfive
{ "a_id": [ "dwo0l5f" ], "text": [ "The wire inside the cable might break over time, that's why sometimes your headphones work in only certain positions, and then stop working altogether. It might also be that the electronics inside the adapter got messed up due to prolonged use, since using it makes it heat up. I hope that answer was detailed enough." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
88y0w1
Why does a video stop and show the buffering icon when the grey bar that runs ahead of the main progress bar is still well ahead?
Technology
explainlikeimfive
{ "a_id": [ "dwo1rmv" ], "text": [ "This depends a lot on what player you're using, but could be because the player is supposed to have a certain amount of data buffered (say, 5 - 10 seconds or so) and has dropped below that threshold. Or there could have been an error in the data received, so the player is attempting to re-download the corrupt data." ], "score": [ 8 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
88ypyd
What’s the difference between firmware and software?
Technology
explainlikeimfive
{ "a_id": [ "dwo4hjm", "dwo4iye" ], "text": [ "Firmware is a set of programs and routines that are built into the device running them. They are typically essential for the basic operation of the device and are not usually accessible by the user. Software is a program that the device runs such as an application or a printer driver. Software can usually be installed and uninstalled with ease by the user and is not essential to the operation of the device. If your software is faulty, you can usually uninstall and reinstall it to resolve the problem. If you have faulty firmware your device probably won’t function properly or in serious cases at all. To make a simple comparison; Firmware is the stuff that you do without thinking like breathing and blinking, software is stuff you learn, like the ability to drive a car or do calculus.", "The firmness. It's a matter of degree, not black and white; firmware traditionally refers to software that is encoded in (mostly...) read-only memory with the intent that it operate as part of the hardware for a given electronic device. For example, the boot code for a computer, the programming that runs a microwave oven, or the control system for your car would be considered firmware. While firmware isn't usually changed very often, it is sometimes possible to update it without physically changing the roms if some sort of rewritable permanent storage (e.g. flash) is used." ], "score": [ 38, 8 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
88z9eg
Why is 16:9 the standard ratio for visual media?
Why was 16:9 chosen as the standard ratio for media, television, computer monitors, phones, etc.?
Technology
explainlikeimfive
{ "a_id": [ "dwo8w85", "dwobbdo", "dwojq4s" ], "text": [ "Well, initially, TV's and monitors had 4:3, because it was slightly easier to make a square cathode ray tube. But cinema had 16:9 or even wider, and when computer screens and TV's went to high-definition HD resolution, they wanted to mimic the high quality experience of the cinema, so they made them 16:9 (wide screen). For a while, around 2010, there were options, you could pick either 4:3 or 16:9, your choice, but eventually all the manufacturers stopped making the 4:3 screens. As an additional reason, if you focus on your field of view, you'll see that it's pretty wide (humans have almost 180 degree peripheral vision), but not very high vertically (approx 90 degrees, forehead blocks upward vision). So a wide field of view feels a bit more natural than a narrow one. TV's are designed so you sit on a sofa, relatively far from the TV, but for computer screens, where you may be sitting very close to the screen, so it fills your field of view, they now have [ultra-wide versions]( URL_0 ), for an even more \"realistic\" feel.", "TV was 4:3, movies were a variety of aspect ratios, but 2.35:1 was the widest in common use. 16:9 is the average of these two ratios, with both a 4:3 image and a 2.35:1 image taking up the same area on a 16:9 display. Then TV switched to filming in 16:9 to make use of the new screens. Then computers switched because everyone was doing it.", "I've searched tha seven seas fer an answer. Yer not alone in askin', and kind strangers have explained: 1. [ELI5 : Why is the 16:9 aspect ratio is more desired than the 4:3 aspect ratio on TV/Monitors? ]( URL_3 ) ^(_16 comments_) 1. [[ELI5] How did 16x9 become the new standard aspect ratio? ]( URL_5 ) ^(_17 comments_) 1. [ELI5: Why is defualt HD resolution 1920x1080 in 16:9? ]( URL_2 ) ^(_7 comments_) 1. [ELI5: why are tv shows shot in 16:9, and not a cinematic aspect ratio like 1:85.1, 2:39.1 etc? ]( URL_7 ) ^(_2 comments_) 1. [ELI5: How come the \"standard\" monitor ratio ended up being 16:9 and not 16:10 ? ]( URL_1 ) ^(_4 comments_) 1. [ELI5: Why did we go through this huge change from 4:3 TVs to 16:9 TVs, only to have all the movies now come out in letter-boxed 2.40:1? ]( URL_4 ) ^(_37 comments_) 1. [ELI5:Why did we switched to wide(16:9) resolution, instead of just making the 4:3 resolutions \"bigger\"? ]( URL_6 ) ^(_14 comments_) 1. [ELI5:Why we used to have 4:3 screens but we now use 16:9 ]( URL_0 ) ^(_26 comments_)" ], "score": [ 6, 3, 3 ], "text_urls": [ [ "https://www.newegg.com/Product/Product.aspx?Item=N82E16824260490" ], [], [ "https://www.reddit.com/r/explainlikeimfive/comments/3pdh63/eli5why_we_used_to_have_43_screens_but_we_now_use/", "https://www.reddit.com/r/explainlikeimfive/comments/40aims/eli5_how_come_the_standard_monitor_ratio_ended_up/", "https://www.reddit.com/r/explainlikeimfive/comments/5q2adj/eli5_why_is_defualt_hd_resolution_1920x1080_in_169/", "https://www.reddit.com/r/explainlikeimfive/comments/1ww3op/eli5_why_is_the_169_aspect_ratio_is_more_desired/", "https://www.reddit.com/r/explainlikeimfive/comments/21kkvi/eli5_why_did_we_go_through_this_huge_change_from/", "https://www.reddit.com/r/explainlikeimfive/comments/1fp9cn/eli5_how_did_16x9_become_the_new_standard_aspect/", "https://www.reddit.com/r/explainlikeimfive/comments/4nsffx/eli5why_did_we_switched_to_wide169_resolution/", "https://www.reddit.com/r/explainlikeimfive/comments/6i0ljc/eli5_why_are_tv_shows_shot_in_169_and_not_a/" ] ] }
[ "url" ]
[ "url" ]
88zkot
When it comes to computer parts, how does a CPU/RAM/SSD have "speed"?
Technology
explainlikeimfive
{ "a_id": [ "dwoah76" ], "text": [ "Speed comes down to things; how quickly the data can be read or written, and how quickly an instruction can me executed. RAM, ROM, SSD and hard drives all play on getting at sata while the cpu plays on speed of execution. Before you upgrade for speed you need to identify what is slowing your computer down. You can use a bunch of benchmarks to give you an idea of where the bottleneck lives. For example your CPU is being strangled because you can read data from the hard drive quick enough. Therefore it would be pointless to put in a faster CPU. You would first need to upgrade to SSD." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
890fdf
If only a small number of nukes can put Earth in serious danger, how can countries continuously test them?
Technology
explainlikeimfive
{ "a_id": [ "dwommlh", "dwohn2o", "dwojlu3", "dwohelg", "dwohuyq", "dwoj3zp", "dwox2h7" ], "text": [ "> Wouldn’t countries testing nuclear weapons already have hit one of these numbers, and why haven’t we seen those projected issues? The evolution of nuclear testing is pretty interesting. Some other people in this thread have great and accurate answers, but I want to provide a little context. When the USA tested the first nuclear weapon, scientists weren't exactly sure that it would work as expected. It was such a major leap in technology that there wasn't consensus on whether or not it would even work, and what the outcome would be. While it seems silly now, there were well respected, professional physicists who actually briefly discussed the possibility that a nuclear explosion could ignite the atmosphere and end life as we know it. This uncertainty was compounded by the fact that the USA (and everyone else) couldn't produce material fast enough to have a lot of bombs to test. We basically had one shot to test it before it was weaponized. During that test, the actual explosion was nearly twice as large as the highest expectations. Even with a huge team of top scientists working on calculations, the bomb ended up being more impactful than estimated. So picture yourself as a scientist or war planner during this period. You would be on the cutting edge of science, and you have to figure out theoretical predictions for a very real and untested technology. Meanwhile your country is engaged in a bloody war against brutal enemies. Sure you pay attention to some of the negative consequences of testing the bomb, but your primary focus would be on learning as much as you can... that means doing testing. Without real data points, it's hard to figure out all of the negative consequences. So the US did tests. For the first few years, it was just fission bombs. Though powerful, they were nothing compared to the thermonuclear bombs that succeeded them. Compared to thermonuclear bombs, they had much less explosive power, less radiation, and a far smaller fireball radius. You could explode them in a desert, and things would be fine. In the 1950s, much progress was made on thermonuclear (fusion) weapons. These were orders of magnitude more destructive than what was previously known. With the advent of thermonuclear bombs, nations quickly realized that their standard test practices might not adequately contain these bombs. That shift in understanding was best exemplified in the [Castle Bravo]( URL_0 ) test in 1954. During that test, the explosion was a full 2.5 times larger than expected. Though the US military had blocked off an area around the test site to prevent harm to people, the larger-than-expected event ended up harming and killing members of the US Navy and civilian fishermen near the blast site. While above ground testing continued for a few years, the Castle Bravo event started to focus attention on how truly dangerous nuclear weapons in general and weapons testing in particular could be. Because the blast hurt non-American citizens, it opened the eyes to other world leaders that US and USSR weapons testing actually threatened people who had nothing to do with nuclear weapons. There was global outrage, and a growing push to reign in on weapons testing. So in 1963, the US, USSR, and UK (the only other nuclear power at the time) agreed to a ban on testing anywhere other than below ground (the partial test ban treaty). 
By moving the testing below ground, the countries doing the testing could contain radioactive fallout. It's important to understand that the PTBT couldn't have come at a better time. Prior to the test ban treaty, Nikita Khrushchev seemed determined to project strength by testing larger and larger-yield weapons. Six of the top 10 largest bombs ever designed were tested in the year prior to the PTBT. It's reasonable to conclude that had that trend continued, there would have been more consequences as a result of above ground testing (it's also worth noting that the US had been testing dozens of bombs per month). All told, only about 600 of the 2,400 nuclear weapons tests happened before testing was moved underground. ___________ As others have mentioned, the testing was both spread out and happened in areas where there wouldn't be massive fires. For context on the fires, it's been speculated that fires would kill more people than an actual blast if a thermonuclear bomb was detonated in a city. A modern thermonuclear bomb would instantly ignite every flammable object within a several-mile radius, blotting out the skies with smoke. ___________ Another interesting thing to know is that the concept of nuclear winter wasn't really widely accepted until the 1980s. So as people were testing weapons in the 1940s, 1950s and 1960s, they didn't realize (at least with any scientific accuracy) how bad nuclear winter could be. Prior to the 1980s, people were afraid of the blast and the fallout. After a series of studies came out in the 1980s, people learned that even if you survived the blast and weren't subject to fallout, you'd still probably die because the skies would be darkened and the food supply would dry up. Had they known about nuclear winter in the 1940s/50s/60s, it's reasonable to conclude that tests may have been moved underground sooner.", "The Limited Nuclear Test Ban Treaty of 1963: > • prohibited nuclear weapons tests or other nuclear explosions under water, in the atmosphere, or in outer space > > • allowed underground nuclear tests as long as no radioactive debris falls outside the boundaries of the nation conducting the test > > • pledged signatories to work towards complete disarmament, an end to the armaments race, and an end to the contamination of the environment by radioactive substances. While nuclear weapons are very dangerous, the US and other nuclear powers haven't tested any weapons in 20 years. Civilized countries are working to get a comprehensive nuclear test ban, but there are a few rogue countries that are still testing.", "The radiation from above ground nuclear tests is everywhere; you can't use carbon dating on anything more recent than 1945 because of nuclear contamination, but it hasn't ruined life on Earth. [However, a regional nuclear war between Pakistan and India could cause a nuclear winter, ruining global food production for a year.]( URL_0 ) This is because the nuclear tests in the 1950s were out in the desert, the ocean, or the treeless tundra of northern Siberia. A \"small\" nuclear war would *burn cities*, generating huge amounts of smoke. If 100 nuclear bombs go off in the desert, there would be a little extra radiation and life would go on. If 100 cities get vaporized, the smoke would block the sun.", "Depends greatly on the magnitude of the nuke, but most tests are done underground or underwater, precisely to avoid that fallout issue.", "A small number of nukes does not put the Earth in serious danger. 
They are also tested in remote places, often underground, which limits the exposure that people and crops have to them. The threat of destruction from nuclear war is the threat of nukes taking out dozens if not hundreds of cities at one time, not a few bombs being detonated in protected and remote test areas.", "The danger is multiple weapons setting multiple cities on fire, which could potentially put enough smoke and particulates in the air to alter the climate, a phenomenon known as nuclear winter. Isolated tests in isolated areas over decades pose no danger on a global scale.", "Nuclear weapons are a bad idea, sure, but their danger is heavily mythologized; most people have no numbers and no clear picture of the actual consequences of those bombs. Of course it's as bad as it gets: lots of people die. But then everything gets rebuilt, and the long-term effects are not nearly as bad as they are usually presented. Radiation from atomic bombs is really not that big of a problem; the main danger is the destructive effect of the blast wave and the heat." ], "score": [ 726, 60, 21, 11, 11, 6, 4 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Castle_Bravo" ], [], [ "http://fpif.org/global-nuclear-winter-avoiding-unthinkable-india-pakistan/" ], [], [], [], [] ] }
[ "url" ]
[ "url" ]
890o0z
How do range finders work?
Technology
explainlikeimfive
{ "a_id": [ "dwp0ldq" ], "text": [ "Old school rangefinders in cameras and for military artillery worked by having two viewfinders. A combination of mirrors, including half-silvered mirrors, would project them onto a single viewfinder screen, but the user would start out by seeing a double image. By adjusting the range or focus, which changed the angles of the two viewfinders, the user would cause the two images to merge and become a single image. At this point, the angles of the two viewfinders would effectively compute the distance, changing the focus of the lens or adjusting the targeting range of the artillery. These systems were essentially a form of analogue of analogue computer, with the inputs the angles of the two viewfinders." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
89528n
What are cookies and why do websites ask me about them?
Technology
explainlikeimfive
{ "a_id": [ "dwp0j30" ], "text": [ "At its most basic, a cookie is a small text file stored on your computer, which is associated with a specific website and can be read by that website. When they were first introduced, they were designed as a way to \"save state\": for example, if a website allowed you to change the font size (for example), that setting could be saved in a cookie so you wouldn't have to change it every single time you visited the site: it would \"remember\" your setting. However, since then, cookies have been used in more sophisticated ways; they can, for example, track which pages you visit, and this information can be read by an ad agency so that they can run ads based on what websites you visit frequently. Although this is usually benign, some people find it worrisome. This is particularly the case in, for example, eastern European countries, which spent nearly half the 20th century under governments that regularly spied on them -- they're not comfortable with the idea of a central database holding information about their browsing habits. So the European Union passed a law requiring all sites that use cookies to inform visitors of that fact. That's why you're constantly being asked to confirm that you're happy to accept cookies. If you're not in the EU, of course, this law doesn't apply to you; but many sites might still show the dialogue to everyone, simply because it's easier." ], "score": [ 11 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
895uol
How is it that humans can see a new object once and then be able to identify it in images with extreme precision, but the best computer vision algorithms need thousands upon thousands of examples to be able to pick specific objects out of images?
Technology
explainlikeimfive
{ "a_id": [ "dwp70jb", "dwplzpk", "dwp8uxw", "dwp8984", "dwpbvya", "dwpgwca", "dwpcdqh", "dwpt7m9", "dwpu7iq", "dwpa0la" ], "text": [ "If you ignore the metaphysical (spirit, soul, etc.) and assume that the brain is a biological machine, then there are basically two arguments: 1. The human brain is (in some ways, at least) much more powerful than the computers we can build and/or it runs much better image-recognition software than we can write. 2. When you learn to recognize a new object, you aren't really starting from scratch - you're building on your past experience of learning to recognize many different things and then actually recognizing them many times.", "Show someone a pawpaw leaf once and then see if they can distinguish it from a chestnut, a buckeye, a catalpa, etc. If this person has spent a lot of prior time looking at trees and leaves (without knowing the names) they might be able to learn it in one shot, but in my experience most people need lots of examples over a long period of time to fully assimilate new tree identification knowledge.", "If you show a 1 month old infant an image, that child will not, necessarily, be able to identify what it is or pick out similar objects. That child has received thousands of images. It takes millions and millions of images and really good software in the brain to do this. If you are a 20 year old person, and you've been awake 75% of the time and using your vision constantly while awake, and you are capturing 30 fps for that entire time, you've processed over 470 billion images. Oh and it's taken at least half a million years to write the software that processes those images.", "Because a human isn't really seeing a new object once: they've been doing the relevant training since birth. The theory of neural networks essentially involves training the computer like a baby is \"trained\" by its environment.", "We don't yet have a strong fundamental theory for how intelligence works; nothing as universally agreed upon as evolution is to biology. If we had such a theory, AI development would advance immensely.", "Something that I'm not seeing in the other responses is that we, as humans, have context. We already have a general understanding of everything else we see, so if I show you a picture with a fancy new gizmo on a table in a room with a couch and some pictures, I already have context for what each of those things are and can rationalize that the 'pixels' that make up the new object are what makes up the new object. When training a computer, we start from basically scratch, so with only that singular image the computer would have no way of knowing what pixels are relevant and what ones aren't. Building on this, human brains are great at *understanding*. We can see a 2D picture and imagine what the 3D object would look like, and thus guess what it would look like at any angle. Computers, on the other hand, are just taught to look for patterns. Take, for example, a coffee cup. If you put it in front of a human so that the handle is sideways, and then put it in front so that the handle points up, that human has enough context and worldly knowledge to understand that you have rotated it. A computer would just see a particular pattern of pixels, and then a different pattern of pixels (bear in mind we might not teach it directly about rotation or anything because those are hard concepts to quantify mathematically when talking about pixels). 
Over enough images it might understand that, say, a block of white pixels with a half-ring next to it is the pattern it is looking for, but it essentially has to guess that with no prior knowledge of anything.", "Just want to add on to everyone's comments that your brain is never seeing a new object only once. You have two eyes and even if you're focused on a single point, both eyes are constantly jittering back and forth to generate images from slightly different perspectives. Overall, the brain does a lot of processing so that you can get the idea of what an object is, and you're really seeing the object thousands of times, not just once", "The human brain is extremely good at fuzzy correlation. We identify and remember things into many different categories at once and make loose, often illogical associations between things to remember them. This makes our identification prowess extremely strong, but it's an extremely hard thing to replicate in a machine, although we are very rapidly improving that lately.", "Something a lot of people aren't even touching on is the fact that we deconstruct everything, to some degree, when we view it. If you're shown a picture of an object you've never seen before, your brain *immediately* gets to work trying to imagine what the texture is like, how heavy it is, how large/dense/light/etc it might be, and on and on, even what it's made of and how it would move. This is all built on our instincts to seek out shelter, food, etc, so when a human sees something for the first time, our brains do *hella* analysis on the object. Computers, however, are purely looking at patterns in the pixels, and are therefore missing out on a huge amount of potentially useful, relevant data.", "The brain takes in data at a way higher bandwidth than these algorithms. I don't have exact numbers, but every photon is tracked and our representation of reality is smooth compared to what the brain is actually interpreting. That coupled with faster processing and better memory, plus instincts tuned for our universe by evolution means the brain is really smart. A camera and GPU can't process the billions of photons moving around like a human eye and brain can." ], "score": [ 566, 227, 110, 72, 12, 8, 6, 5, 5, 3 ], "text_urls": [ [], [], [], [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
8981oc
When there's a cut or a break in a power line or cable, how do technicians find out where it happened without visually inspecting every mile?
[Related search here.]( URL_0 ) Say an underground cable carrying service over a long distance gets chewed through by a burrowing critter. Everyone will know, based on the interruption of service, that something's been cut. And based on who loses service, they'll be able to deduce approximately where along the line the break occurred. But that still leaves a huge distance in a lot of cases - how do the technicians find the break to repair it? Same question goes for power lines, although visual inspections are easier if those are above ground.
Technology
explainlikeimfive
{ "a_id": [ "dwpi137", "dwpwcnc", "dwpk86d", "dwq89jp" ], "text": [ "For undersea cables (both electrical and optical fiber) they can send a signal and measure the time it takes to reflect back. That will tell them where the defect is.", "I have been working with power cables, mainly fixing these types of faults for about a decade. There are a handful of ways you can identify a broken power cable. - visual feedback (someone calls and says \"there is a strange blue light in the woods. I saw it when eating breakfast\", and if that time frame sits well with when the fault occurs, you go out there and have a look first.) - someone calls in and admits that they damaged the cable. While this is technically something that they need to provide insurance details for, it causes a lot more trouble if they don't admit their mistake and just fill in their newly dug hole with the broken cable still in it than if we already know where to look, so unless they have been more ignorant than usual we usually just thank them for calling and get at it. - fuses randomly burn out in a feeder cabinet somewhere. Fuses burn out by natural causes too, like thunderstorms. And age. So the first time it happens, it's not really something that you put much thought into. You just replace the fuses. But...if you have to return there again the day after, then you know something is going on. In this case, you have to identify which cable is actually broken before you can identify the actual fault. This is the true scope of your question, but in my experience it's only half the faults that fall in this category. Power grid, and lamp post grids alike, have a lot of connection points. The cabinet outside your house that your service line is connected to is getting it's feed from another cabinet. And often those cabinets share the same fuses in the sub station. I.e, when the fuses burn out, you don't know better than to look on the cables that run in a specific direction because feeder cable don't technically have to have their own fuses every time they leave a cabinet if the cable type and size is maintained in the entire grid and the fuses in the substation are calculated to suit the grid characteristics. Lamp posts have a connection junction in each post. In every post, a cable can run off in another direction. You'll have to start with establishing which cable it is that has the fault by disconnecting them one by one and measure the conductors impedance. In a healthy cable, there is no connection between the conductors - the impedance is indefinite or at least very, very high. Once you find your damaged cable, you have three options. - look for obvious signs of excavations along the cable. Could be a newly erected fence. A new driveway. a patch of the road that has new pavement despite that the rest of the road was last paved in the 80's. A patch of wood that has been timbered recently right next to the road. A lawn that has recently been resowed in a corner. Something that tells a tale of something been going on. - connect a *pulse echometer* to the cable. The result is a graph where joints, potential future faults and whatnot will be shown with a distance estimation. The short circuit is often the marking on the graph where the markings hit the roof worst. It will tell you, within a yard or so, where the potential short circuit is in the cable. Start digging. - connect a surge generator to the cable. 
The surge generator knocks a surge into the cable, ranging from 2000 volts (with equipment you can carry short distances if you are reasonably fit) up to near 60000 volts (with equipment that comes with it's own bus and has its own power source) depending on cable type and cable length. The idea is that the surge will create a spark in the cable. The spark can be heard^1 . If you up the voltage, it creates a distinct pounding in the ground that you can go looking for. If it's difficult to hear, you bring out a geo microphone and look for the place where you hear it loudest in the headset. Start digging.", "There are sensors along the power line which let the power company know everything is working alright. If the power line breaks they stop receiving signals from the sensor and will know which section is broken", "Not exactly what you asked, but for communications cable there are a few ways. My work involves cables up to 1000ft, not miles long cables, but some techniques will work for either. 1. A time domain reflectometer will give you a display of the distance to the fault. You have to know the velocity factor of the cable. It is usually 66% to 90% of the speed of light. 2. Measure the capacitance and use the data sheet for the cable to find the capacitance per foot, or measure capacitance from both ends and do a ratio calculation. 3. Inject a tone into the cable and use an inductive probe to find where the tone disappears. 4. Frequency domain reflectometry uses an RF signal generator, a TEE connector, and an RF meter like a spectrum analyzer, oscilloscope, or RF power meter. The signal will be reflected at the open end of the cable and return. At certain frequencies the signal returning will cancel out the signal being sent. With some simple math the distance can be calculated." ], "score": [ 12, 6, 3, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
89cfen
How does WhatsApp gain money if it is totally free?
Technology
explainlikeimfive
{ "a_id": [ "dwq2jfy", "dwq1tj9", "dwq20xx" ], "text": [ "Welcome to the tech era. If you're not paying to use the product, ***you*** are the product.", "Sells your data to ad companies. This is more relevant since Facebook purchased it and you need to opt out of Facebook reading your messages. Although most people don't realise this and leave it on.", "Whatsapp doesn't always totally free, back then it costs $1 to download, then it used a subscription model, free to download and use for the first year but to continue using, you have to subscribe. The model is dropped in 2016 and the app is totally free after that. But whatsapp doesn't have in-app purchase, and there's no ad... this really rises the question: how do they generate money? and if they don't generate money, why facebook put big value on them? The simplest answer is: **Data Mining** whatsapp messages go through the server first before reaching user device(s). The server analyze data based on words usage. Big companies would paid big bucks for this kind of data to optimize their marketing and advertising campaigns. Though the currently end-to-end encryption used in whatsapp makes data mining tough. > Your messages are yours, and we can’t read them. We’ve built privacy, end-to-end encryption, and other security features into WhatsApp. We don’t store your messages once they’ve been delivered. When they are end-to-end encrypted, we and third parties can’t read them. Above quote is taken from [WhatsApp Legal Info]( URL_0 ), who knows if they're telling the truth, half the truth, or just an outright lie. Please correct me if I'm wrong or add for any lacking information." ], "score": [ 9, 8, 3 ], "text_urls": [ [], [], [ "https://www.whatsapp.com/legal/#key-updates" ] ] }
[ "url" ]
[ "url" ]
89fvp5
How is machine learning different from a statistical model?
From what I understand you can take a set of data and either feed it to a computer via machine learning, or generate a statistical model (linear regression, etc) from it. How is ML different from traditional statistical models? Is ML always superior?
Technology
explainlikeimfive
{ "a_id": [ "dwr0ur6" ], "text": [ "Machine learning is a way to build a statistical model. It is all well and good to say \"let's use a statistical model!\", but you still have to decide which one to use and how to implement it. Is your data distribution linear? Geometric? Logarithmic? Sinusoidal? You have to start making guesses and see which one you think fits the best. Machine learning automates that process, building a statistical model based on the data itself." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
89kvnn
Why do gamers use headsets?
Is that the main audio source? Don't they use speakers to hear the game sound? I'm not a hardcore gamer, and when I play I use the speakers for sound, so I never understood the use of headsets on YouTube, Twitch and esports. Edit: thank you for your answers, I didn't know.
Technology
explainlikeimfive
{ "a_id": [ "dwrn6jj", "dwrn9r9", "dwrn7sr", "dwrnhqh" ], "text": [ "So you can have voice and game audio at the same time. Also headsets can have 7.1 setups that actually help in games by allowing for truly accurate locational sounds.", "They use headsets because in some games it gives players advantages. Like hearing where an opponent is coming from, or when a certain sound queues a unique action in game. Some players also just enjoy hearing the ambient music or fx within the game. It's also just courteous to the other people around the house who may be bothered by the sounds.", "I use them when I'm also using the microphone to talk with other players. Otherwise I just use the speakers.", "Speakers and microphones do not mix, it causes alot of echoes and feed back. Its cheaper and more convenient to use headphones." ], "score": [ 9, 8, 4, 4 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
89n6j1
What are the advantages/disadvantages to a V6 engine vs. an In-line 6?
Technology
explainlikeimfive
{ "a_id": [ "dws5guj" ], "text": [ "Straight 6 advantages are simplicity and ease of manufacture so cost less. They are, contrary to what one of the other comments said, actually better balanced than V6. In a V6 to get it precisely balanced you have to add specially placed weights or have balance shafts. V6 advantages are weight as they are more compact they weigh less. And also they can be easily used in a front engine front wheel drive car. The straight 6 is usually so long it cannot be whats called transverse mounted so it usually means no front wheel drive. Also due to length the Straight 6 has issues with stiffness and thing like the crank shaft can flex." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
89p10w
Playing online worldwide
ELI5: Why can I play an online game of Mario Kart with people from all around the world with no lag, but selecting a different server when playing Rocket League makes the game unplayable due to the high ping?
Technology
explainlikeimfive
{ "a_id": [ "dwt1hh0", "dwsfqx2" ], "text": [ "There is lag, but the game uses a mix of prediction methods, slow acceleration values, and smooth updates to hide it! Nintendo's studios have very good netcode compared to many others.", "The game usually connects to the nearest server which sends game data back to you almost instantly. If you’re from Europe and connect to a server in NA, data needs a bit longer to reach your computer." ], "score": [ 6, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
89qlh8
How does the FBI find anonymous users on 4Chan?
Technology
explainlikeimfive
{ "a_id": [ "dwsrjsn", "dwsrp8w", "dwsypji" ], "text": [ "An anonymous user simply means they don’t login. There is no anonymity on the Internet, everything leaves a trail. Now some people will chime in that you can obfuscate that trail but that’s outside the scope of the question. A user creates an http request to 4chan. That request comes from a modem to an isp. The isp routes that request to another isp, and on and on until it reaches that destination. Each packet that gets forwarded on contains information about where it’s from and where it’s going. The FBI simply back tracks the request from the destination to the original sender location. Basically your ISP rats you out.", "There is no such thing as anonymous on the internet, your computer always leaves a traceable footprint everywhere you go. Most users don't know how to spoof this footprint to something harder to trace, so even if they post anonymously their ip address, their subnet, ISP, etc. is all tracked and they can pinpoint where they are, usually right down to the address that the post was made from. Even with out help from the web site the FBI has access to federal resources for tracking digital footprints, and even with spoofing they can still usually track you down, so there is really no such thing as anonymous on the internet when it comes to what the government can track.", "When you access a website, tons of things have to happen: * You have to figure out where on the internet the URL is. This goes through a system called DNS, which can leave a trail: your DNS provider can almost surely know what you're trying to access. * You have to access that data by asking your ISP (internet service provider) how to get there. They'll almost certainly have to ask someone else (think of the many different post offices/centers that might route one parcel), and all of them will have some record of it. * The actual data itself has to be sent, which usually involves many different computers making a record of that: their servers, your computer, any intermediaries. * If you try and obfuscate this by essentially getting someone else to do it (a VPN), and then send it to you so no one can see that source, you just shift the trust from your DNS/ISP to their VPN, and your computer or their server might still have some record of it. Any one of these can be used to catch you. While it might be possible to obfuscate this to the point that an agency can't prove you did something, as a working assumption it's reasonable to just believe that nothing you do is anonymous." ], "score": [ 44, 3, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
89qnp3
Cell phone tower radiation. Is it dangerous to live near a cell tower?
Technology
explainlikeimfive
{ "a_id": [ "dwsrvvt", "dwsw46u", "dwssyqr", "dwsvcj5", "dwt3faw" ], "text": [ "Cell phone towers emit non-ionizing radiation. This type of radiation is also emitted by televisions, cell phones, microwaves, fish, your mother, and dirt. If one receives a significant enough dosage, the health risk is \"getting a burn\" as this form of radiation, when absorbed, generates heat. Recommend to your mother that she not climb the tower in order to get close enough to be burned and she should be fine.", "Tell her that cel towers put out about 500 Watts max. If there's a radio or TV tower nearby, they can put out 50,000 to 500,000 Watts. People have been living near these TV/radio towers for *decades* without any noticeable effects of the \"radiation\". Tell her to focus on the REAL risks: too much sugar, too much cholesterol, not enough exercise, smoking, etc. Those kill hundreds of thousands each year.", "> Do cell towers actually pose any health risks? No they do not, provided you aren't touching them. They can use high voltages. > How would one compare the radiation from cell towers vs mobile devices and other electronics? The cell tower will be way more powerful but also won't be pointed at your mother's house. It is on a tower for a reason after all, it is to get line of sight to more distant targets. Most important to this discussion is to understand that not all \"radiation\" is the same. Sunlight is radiation, the heat from your fireplace is radiation, and your wifi signal is radiation. Any sort of electromagnetic emission is radiation, the relevant difference here is if the emissions are \"ionizing\" or \"non-ionizing\". Some high energy radiation can knock an electron off of an atom which changes the total charge of the atom. This changes how it behaves chemically and can lead to other reactions. Humans are in danger both from the general tissue damage and the potential damage to DNA. Enough random changes to DNA and a cell might develop cancer, which is where the idea of \"Radiation causes cancer,\" comes from. But ionizing radiation isn't what all radiation is. Your cell phone and wifi are not ionizing. Ultraviolet radiation from the sun which gives you sunburn is ionizing, which is why you can get skin cancer, but the rest of the sunlight isn't ionizing. In summary your mother isn't going to be exposed to ionizing radiation so she will be fine.", "Your mother gets more of the same type of radiation daily from the sun that she would get from a lifetime living next to one of these towers. Just don't let her go mess with the pylons and she should be fine", "These towers emit non ionizing radiation which decays exponentially. This radiation can only affect you as heat. Literally ELI5: close to fire is too hot but a few feet away its cold. On the other side of your house you wouldnt even know its there." ], "score": [ 94, 15, 14, 6, 3 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
89suhc
What is the difference between an Ethernet and a serial connection?
I want to know what the main differences are between an Ethernet and a serial connection with regard to flow computers. I'm a woman working in oil and gas, and learning SCADA. Being that it's a male-dominated field, and that I'm new in this position, I feel that I need to educate myself in my own time, to keep up with everyone else. Any help would be greatly appreciated. I did a Google search and read a lot, but I need a true ELI5 explanation. The articles online expect the readers to have a deeper understanding of this technology than I do. A lot of this is new to me, and I need more basic terminology, please. (I searched this sub and found that this question was asked once 4 years ago, but there was no answer.) Edit: Thank you all for your responses. Even though some of it is still over my head, I do have a slightly better understanding, and at least now I can do a search on more specific terms or concepts.
Technology
explainlikeimfive
{ "a_id": [ "dwtl40w" ], "text": [ "Computer engineer here, Ethernet is a family of related networking technologies defined by the IEEE as a group of standard known as IEEE 802.3 You may recall that IEEE also defines a set of wireless (WiFi) technologies as IEEE 802.11 Ethernet is primarily concerned with physical standards and data transmission standards. There are Ethernet standards for transmission over coaxial cable, fibre optic cable, and twisted pairs of cable. Regardless of the physical medium used, the [nearly] universal use for Ethernet is to establish wired networks that communicate using the Internet Protocol with a transmission protocol on top of that. A serial connection on the other hand doesn't mean much on its own. All that it means is that the underlying data link protocol transmits only a single bit of data at a time, just very quickly. It describes nothing about the form of the physical transceiver; RS-232, RS-485, USB, Serial ATA, PCI Express, SAS, I2C, SPI, etc... are all mutually incompatible standards that employ serial data transmission in various configurations. Some serial connections are directly between a host and a device (such as RS-232) while others are multi-drop networked (such as USB), yet others require a ring configuration (such as SPI). Each one has its place. Despite the above, in contemporary usage the term \"serial\" most commonly refers to the use of UART transmission over RS-232. However, that's far from universally true. If I were to ask someone what interface something used and they told me \"serial\" I'd just ask them to send me the manual instead." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
89swrf
What's the diference between a car alternator, and a car generator?
Technology
explainlikeimfive
{ "a_id": [ "dwtidyv" ], "text": [ "Older cars used generators. They developed DC to charge the battery direct. Unfortunately they were very inefficient at idle speeds. An alternator generates AC, then has a rectifier in it to convert to DC, these work more effectively at idle speeds, hence the now common use for them." ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
89t3to
What is front and backend development?
Technology
explainlikeimfive
{ "a_id": [ "dwtb4ee", "dwtcza4" ], "text": [ "Frontend is the code that gets delivered to the client to be executed on their device. Backend is the code that runs on a private machine. Typically the frontend will be the view logic and the backend will be buisness logic and databases.", "Frontend is code that a user sees and interacts with - for example any interface, whether it be a webpage, Windows/MacOS program, terminal program, or Android/iOS app. Backend is basically anything not frontend, typically business logic. Frontend development deals with user interaction and defers most computations to a backend." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
89tkj6
Why are smartphone batteries advertised by how much current they use and not voltage or power?
Technology
explainlikeimfive
{ "a_id": [ "dwteyhu", "dwtexpj", "dwtffrf" ], "text": [ "Batteries are advertised not with current, which would be mA, but net charge, mAh. Basically current is a unit of rate of change of charge with time, specifically coulombs per second. Think of velocity, which is meters per second or miles per hour or whatever you want. What if you multiply velocity by time? What is that? Well, it's just meters. In the same way, mAh is just a roundabout way to describe the amount of charge in the battery. We use mAh and not just coulombs because it's more convenient to use.", "Voltage is largely irrelevant for single cells, as all cells using the same chemistry will have similar voltage characteristics through the charge/discharge. (However, it is important for designing around multiple-cell packs) Maximum current output (and by extension, power) is not commonly advertised in the context of smartphones as it is rarely an issue or limiting factor (ie the battery can safely output more power than would reasonably be demanded in the usage case). However capacity (the total ability to do work) is pretty important, and it is usually stated as a product of current (at nominal voltage) and time", "It's a far more important statement. (Though not perfect by any stretch) The voltage tells you almost nothing. A 9v battery and a 9v battery are the same on paper. So almost all smartphones would have \"the same\" battery. Power is meaningless in a device as usage changes person to person and app to app. They could say \"500 hours of on time*\" Then small print *on low brightness with all apps and features turned off. But signal strength, updates, what's currently open, how graphically intense an app is will throw that number into the bin. Running halo whilst having the Xbox app open, Facebook chat and background downloads is very different than just running halo. It's the same reason a breakfast cereal has grams on the box instead of how many bowls it contains. They don't know how big my bowl is." ], "score": [ 11, 5, 4 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
89tr9o
Why do street lights work during a powercut?
I live in a close, every single light in every house went out but the street lights are working, why?
Technology
explainlikeimfive
{ "a_id": [ "dwth3hs" ], "text": [ "Not sure where you’re from, OP, but in the UK there are usually 3 cables going down the street. These are called phases. Houses are usually connected to one of these, larger buildings are connected to all 3. When a power cut happens on one phase/cable, you should see 1/3 of houses going off. In your situation, it sounds like 2 are being lost. Maybe your side of the road and the other side. The street lights might be connected to the 3rd phase which is unaffected. It all depends on how the street is wired up when it was built. Sometimes it’s like every other house down a street goes off. Source: work in power distribution. I answer this question several times a day." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
89tz7j
how data or electric signals travel through the wires in an Ethernet cable?
I want to understand how the twisted pairs work with each other. Like, does a signal going to solid blue mean '1' and the blue/white mean '0'? Thanks in advance.
Technology
explainlikeimfive
{ "a_id": [ "dwtqdpb" ], "text": [ "Computer engineer here, Ethernet is supported over several mediums. The following is relevant to 10BASE-TX, 100BASE-T, 1000BASE-TX, and 10GBASE-T also known as Ethernet over Twisted Pair. This is not applicable to Ethernet over Coaxial cable or Ethernet over fibre optic cable. 10BASE-TX and 100BASE-T require two pairs of twisted wires. Cables that have more than 2 pairs may use some of those pairs for DC power delivery. In 10BASE-TX and 100BASE-T each pair is unidirectional. One pair transmits, and one pair receives. Historically, when a computer was connected directly to another computer the pairs needed to be crossed such that the transmit pair on computer A became the receive pair on computer B. This is known as a crossover cable; modern hardware automatically determines this and adjusts accordingly so crossover cables are no longer necessary for new installations. Data transmission on Ethernet cables is not clocked or timed. Rather, it uses *Manchester encoding*. Manchester encoding encodes the clock signal used to sample data into the data stream itself through the use of edge transitions at predictable times. Once a clock has been established, it's simply a matter of measuring the direction of a voltage change at the right time. In 10BASE-TX a transition from +2.5V to -2.5V is a 0 and a transition from -2.5V to +2.5V is a 1.These transitions mirror each other on each pair so that interference affects both pairs equally. Because unshielded twisted pair cables act as low-pass filters, it's not possible to increase the transmission rate of 10BASE-TX (10mbps) by 10 in order to reach 100mbps. Thus, the encoding scheme of 100BASE-T is different. Rather than transitioning on every bit ala 10BASE-TX, 100BASE-TX transitions only on a 1, and transitions between 3 voltage levels rather than 2. Transitions always occur in the same pattern, -1, 0, + 1, 0, -1... and so on. Extracting data is a simple matter of determining if a transition occurred or not. 1000BASE-T uses 5 levels +1V, +0.5V, 0V, -0.5V, and -1V as well as all four pairs in both directions. Unlike 10BASE-T and 100BASE-TX, 1000BASE-T does not measure transitions but rather voltage levels. Each symbol (one of five voltage levels) decodes to two bits, and four pairs of two bits decodes to one byte. 1000BASE-T also uses a very complicated scrambling system to reduce spectral interference caused by Manchester encoding." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
89uh1s
Why does a laptop/PC need a fan but a tablet does not?
Technology
explainlikeimfive
{ "a_id": [ "dwtlk6c", "dwtlme6", "dwtn4ea" ], "text": [ "Tablets have smaller and more efficient processors, so they don't need as much cooling. The heat sinks are also designed differently. Many will integrate the case into the cooling system. The heat sinks in your computer are designed to work with a fan, so they'll be less efficient without one, and that's in addition to the computer having more heat to dissipate.", "Tablets have ultra low wattage CPUs, while laptops typically have with higher wattage CPUs. Note that modern, thin notebooks like the Macbook Air or any equivalent windows laptop just like tablets do not have any fans and rely only on passive cooling. Remember that you may be dealing with older laptops, with more inefficient chips, which is why they may seem to have the same performance as a tablet while generating much more heat.", "CPUs draw power. **All of that power** must be radiated as heat. Hopefully, you're not *really* 5 years old and still remember incandescent light bulbs so you have some idea how hot a 100 watt or 60W bulb is. A typical desktop CPU draws between 60W and 150W of power. A typical high-powered laptop CPU is around 50W and a low-powered CPU is 15-30W. A typical phone/tablet CPU is going to be 5W or less. The phone/tablet CPU is running low enough power that it can passively radiate heat away through the case. The more power-hungry CPUs in desktops and laptops need fans to move cool air across the heatsink to do this." ], "score": [ 3, 3, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
89vfaf
How do Bluetooth earbuds produce sound without being connected to a phone?
I know it’s Bluetooth but where is the DAC and AMP at?
Technology
explainlikeimfive
{ "a_id": [ "dwtuy3k" ], "text": [ "They're in it! I know it's insane to think that they made a microprocessor and amplifier THAT TINY but they did. Your bluetooth headphones, on their own, are capable of taking a digital audio source, turning it into an analog signal and amplifying it for the speaker. And there's some [like these]( URL_0 ) that support loading music directly onto the device, which means your tiny earbuds are doing 100% of the work indpendent of the phone." ], "score": [ 6 ], "text_urls": [ [ "https://www.bragi.com/" ] ] }
[ "url" ]
[ "url" ]
89yw80
Why do google captchas make you click street signs?
Technology
explainlikeimfive
{ "a_id": [ "dwuhz8w", "dwun9d3", "dwui1tm", "dwume8t", "dwur402", "dwumqmc", "dwurju6", "dwurycn", "dwuoslz", "dwup0pf", "dwuqc98", "dwuu9ni", "dwutedu" ], "text": [ "They're used to train their image recognition software so that they can read street signs more accurately.", "...and does the pole count as part of the sign?", "Google is working on self driving cars and they need their algorithms trained. So they crowd source it. And they know computers cant tell because that is why they are having you train their cars.", "What about the one that just says “I am not a robot” and you just click it once and the box turns into a check mark and lets you go", "The irony of all of this too, is that once machines are 100% successful at completing CAPTCHAs, they will be obsolete as a method for checking human vs. machine.", "As pointed out in other posts, they are using it to train their image recognition software. The machine learning algorithm they are using requires training. They will start out with a massive database of known data that is used for the initial training. Then they will continually refine the algorithm over time. Both these steps take a lot of work because manually validating that a dataset is correct takes a lot of manpower. Google captcha essentially is a huge mechanical turk. By asking real people to validate what is in a picture they can now validate and refine the algorithm. [this training piece is a part of the algorithm, it's not a manual thing]. All the while the algorithm is also \"guessing\" and comparing it's answers with the manicured data set of known answers provided by humans. You lower risk of getting bad data by asking say 30 people the same question. You can make some assumptions that if 80% of the people said box \"X\" is a road sign, car face.... that yep, it's that thing. Essentially Image Captcha is saving google millions of dollars in real human time to help improve their machine learning! And if you've ever gotten a Captcha wrong (and you were right); it's most likely because the image didn't have enough answers to achieve a consensus to say you are right, but it does hold on to your answer to help build that consensus for the next person!", "[A lot of people here aren't reading from Google's info on reCAPTCHA]( URL_0 ) ELI5: Google cuts up images into a grid 4x4 for small images, 5x5 for bigger ones, and so on and so forth. Google gives these images to their bots and teaches (trains) them to find which grid has what object. Google has the answers for this small set of images, and for the bots that guess correctly, they get to have clones made of them. The reason Google breaks up these images into smaller grids because it's easier to find Waldo in a small box, but harder to find Waldo in a big, crowded image. Google's bots, after some time, are really good at spotting objects. A reCAPTCHA on a website asks a user to figure out which squares have a stop sign. Google and their bots know the answer to this. But Google is tired and doesn't want to go through new sets of images and identify more and more traffic light, for instance. Google then asks several bots to guess where the traffic light is. To make sure the bot's guess is right, the user, from earlier, that answered the first reCAPTCHA right is asked to find the traffic light. Many more of the user's friends are also asked the same question to make sure it's right. 
If the bot and the users think the traffic light is in a particular square, then the bot is given a pat on the back, and has many clones made of it. A benefit of Google asking a lot of people what's a traffic light, what's a stop sign, what's a store front, is that when the bots get older and can drive cars, they can recognize when to stop, and where. Another benefit is that Google's bots can help other people find things in images, like a person lost in a flood from an image taken from high above. More detailed info from the link above: > reCAPTCHA offers more than just spam protection. Every time our CAPTCHAs are solved, that human effort helps digitize text, annotate images, and build machine learning datasets. This in turn helps preserve books, improve maps, and solve hard AI problems. -- Side note: a lot of people still hate reCAPTCHA, but, it's quicker, now, is better at protecting sites, and contributes to image recognition.", "What are you supposed to do with one like [this?]( URL_0 )", "Everything Google has you do is to feed/train a new technology it is working on. The new image captchas are most likely helping train some sort of image recognition AI.", "They run out of books to digitize for google books, so now they are digitizing street signs for google maps.", "Current bots have hard times with pictures, so it shows that you are human. You are also helping google develop their self-driving cars. That is why cars and street signs are what you are always clicking. You are essentially training their car bots while proving you aren't an internet bot.", "2 purposes: - Giving a task that is relatively hard for AI to perform reliably (making sure you're human and not a bot) - Creating training data (where is the sign & where it's not on a photo) to improve their image recognition, most likely in preparation for driverless vehicles. Old text-based captchas are being solved pretty easily by open-source deep learning setups these days, so they are not a good candidate to filter out bots anymore.", "Do you know what this picture is? (It's a banana.) And this one? (It's a dog). People have very good eyes, and are very good at telling what is in a picture. Computers don't have eyes! They aren't very good at telling what is in a picture. They need some help from people. Some smart scientists want to teach computers how to tell what is in a picture, but they need help from lots and lots of people. When you click on the pictures of street signs, you are helping the computers learn how to see what is in a picture!" ], "score": [ 5371, 1402, 992, 305, 70, 34, 30, 29, 16, 4, 4, 4, 3 ], "text_urls": [ [], [], [], [], [], [], [ "https://www.google.com/recaptcha/intro/" ], [ "https://i.imgur.com/2qQtV4w.jpg" ], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
89zkps
Why do all of my electronics have an FCC interference statement?
You can find the following statement on nearly all electronics. Why does the FCC require this? This device complies with part 15 of the FCC rules. Operation is subject to the following two conditions: (1) This device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation.
Technology
explainlikeimfive
{ "a_id": [ "dwumxye", "dwumuyx" ], "text": [ "It proves that the device has passed a certain set of tests, designed to ensure it will not affect nearby devices. All electronic devices emit radiation to some degree, and its important when designing them to ensure the radiation isn't going to affect the operation of other devices. An example is that when you switch on a microwave oven, the WiFi signal to a PC will be interfered with. Without the FCC limits, companies could sell cheap, terrible microwaves which could wipe out WiFi networks for a block around, which would cause chaos. Edit: and it also ensures your WiFi card won't be destroyed by a microwave oven nearby", "Say there is a radio station you are listening to, or a tv show you're watching. Now imagine if someone with *insert device here* drives or walks down your neighborhood and said device doesnt comply with the FCC interference statement. Then that device that person is using can unintentionally block or interrupt your radio station or tv show. It also would prevent emergency lines from being used or blocked if I remember right" ], "score": [ 22, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8a43ua
Why does skipping five seconds of a video often take longer due to buffering than just sitting through the five seconds?
Technology
explainlikeimfive
{ "a_id": [ "dwvtmtz" ], "text": [ "The way online video is compressed, you're receiving data that represents incremental updates from the last frame. If you start playing the video at the beginning, it all works as intended. However, if you start playing in the middle, you need to process incremental updates for some time period prior to the frame you want to see before you can display it. It takes time to download both the incremental data and process it." ], "score": [ 20 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8a69pk
Why Netflix and Amazon aren't buying the rights to stream sports
Technology
explainlikeimfive
{ "a_id": [ "dww7l7t" ], "text": [ "Amazon is buying sports rights. Actually last NFL season, the package they bought for Thursday night football was bought at a hilariously bad rate of at least 2x-3x the next bidder. That’s horrifyingly bad. They got had. Netflix has explicitly said they are not going to buy sports rights and are not a sports service. They have their reasons but won’t say why, as they are notorious for secrecy. We can speculate. But that’s not for this sub." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8a6nz1
Anti-aliasing, what it does and how it works
Technology
explainlikeimfive
{ "a_id": [ "dwwb8gs", "dwwek95", "dwwkraq", "dwwh2pe" ], "text": [ "It makes ugly stair-step like lines on the screen appear smooth by adding low contrast pixels where the stair-stepping is most obvious. Reference: [Close up]( URL_1 ) [Result]( URL_0 )", "Take a piece of graph paper. Place a pencil on it at an angle, and mark every square it touches with another pencil. As a computer, you can't partially fill a square, but you can choose how dark to fill it in. So, if you fill in every marked square as dark as possible, you get an ugly stair-step. If you instead fill in some darker, some lighter, and even lightly color a few cells the pencil didn't touch, then when you step back, you get a much smoother looking line. Basically, in computer graphics, it's trivial to determine if a pixel is covered by a given triangle being drawn (in 3d graphics, basically everything's made of triangles). However, it's often the case, that on the edge of any shape, the pixels are actually only partially covered. So then the question is what would be *behind* the triangle? Ideally you'd figure out how to compute the %coverage on every pixel for every triangle, and get an exact color. This is *very* expensive to figure out exactly. Instead it's often cheated by doing things like rendering the scene 4x as large so that you can just blend together every 4 pixels (like having finer grid graph paper, you can trace out your pencil more exactly, then get the right 0:4, 1:3, 2:2, 3:1, 4:1 color blend).", "Pixels are squares. Squares only have right angles, so if you try to make a diagonal line, close up it will look like a staircase. We add some pixels onto the stairs that are midway between colours to soften the line, so it doesn’t look so jagged", "It makes the rough edges smoother. Very common in gaming. But it is just not in graphics. Aliasing occurs in signal processing too. It happens in analogue to digital and digital to analogue convertors. When you want to convert a signal from continuous to discrete by sampling it you will have some distortions and aliases. Why the data is sampled is because discrete data is much more efficient to work with, it is easier to store, transfer etc. Imagine you have your continous sinusoidial signal converted to discrete points. If you use a good sample rate you should be able to reconstruct a sin graph by discrete points instead of a continual data. During the conversion, depending on factors like sample rates, you can result in noisy data, distortions and innacurracies. Because nothing is perfect. When you produce the discrete signal from the continuous source with a insuffienct sample rate, aliasing and frequency ambuigity occurs. To prevent this we use anti aliasing filters. It helps reduce the noise and distortions that occur during sampling. It basically attenuates the higher frequencies so that the alias ones are cut off. It is a way to make noisy data reduced." ], "score": [ 188, 17, 8, 3 ], "text_urls": [ [ "https://helpx.adobe.com/content/dam/help/en/photoshop/ps-key-concepts/aliasing.png", "https://i.stack.imgur.com/pA7uy.png" ], [], [], [] ] }
[ "url" ]
[ "url" ]
8a7z72
Open Banking
While updating my apps on my phone today I noticed my mobile banking app had this in their update notes: "This update is preparing our app for Open Banking, which gives you the choice and freedom to securely share your account data with registered third-party providers. If you don't want to use Open Banking, there is no need to opt-out, you can simply continue to use our services as you do today." What is Open Banking? And what would be the benefits/drawbacks of opting into this? (Edit: in the UK if that makes any difference!)
Technology
explainlikeimfive
{ "a_id": [ "dwwjy8l" ], "text": [ "Open banking is the concept that your banking information such as balances and spending habits is not just the banks data, but is also yours, so you should get to use it how you see fit. So with open banking you can share your data and access to 3rd parties. Who in turn will help you. Such as going online, looking to makes a purchase, at the checkout it will display your remaining balance before paying so you don't go overdrawn. Or you could get an app that transfers the pennies after each purchase into a saving account." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8a94ab
how a helicopter tilts forward/backwards
So when a helicopter wants to gain speed, it tilts forward to angle the rotors and achieve forward momentum. But how does it tilt? When it rotates it can adjust the speed of the tail rotor.
Technology
explainlikeimfive
{ "a_id": [ "dwwrmm0", "dwwrvoh", "dwwrrgu" ], "text": [ "Visualize a helicopter with two rotor blades, just to make it easier. And let's just ignore the tail rotor and the engine for now. Focusing only on the rotor and how it controls the movement of the helicopter, there are two controls the pilot uses. One is called the collective, and one is called the cyclic. The rotor blades on a helicopter are wings. Another term for helicopter is \"rotary wing aircraft\", as opposed to \"fixed wing aircraft\". The blades spin through the air, and their *angle of attack* can be adjusted as they spin. You can think of \"angle of attack\" as simply tilting the blade. If you increase the angle of attack, or tilt the blades down, they push more air down, generating lift. Back to \"collective\" and \"cyclic\". \"Collective\" means increasing the angle of attack of both blades simultaneously, and throughout the entire rotation. This can be thought of as simply pushing air down to make the helicopter go up. \"Cyclic\" refers to making an adjustment to the angle of attack at a certain point in the blade's cycle. That is, the pilot is able to change the angle of the blade at a certain point, such as the front of the aircraft, or to either side. If you increase the angle of attack of the blade while it is on the right side of the helicopter, the right side will get more lift and the helicopter will tilt toward the left and move that direction. In reality, all the mechanics and forces involved are quite a bit more complicated, but that's a very basic breakdown of how the rotors themselves operate. The blades can be adjusted collectively, which is how helicopters go up and down, and they can also be adjusted individually at any point in the cycle, which is how helicopters move forward/backward and left/right. If you want more information, the FAA's helicopter manual is available for free online and goes very in-depth in terms of how helicopters fly. URL_0", "If you are really interested in getting deep into the physics of how helicopters work in a fun an easily-understandable way, I would highly recommend a video series from SmarterEveryDay. Here's a link to the first video of the series. URL_0 Also, shoutout to /u/mrpennywhistle himself for creating this awesome series. To answer your question, helicopter rotors have something called a [swashplate]( URL_1 ) that adjusts the pitch of each blade as it rotates about it's axis to control vertical position. It can also adjust the pitch of the entire rotor assembly to control horizontal movement in all 4 directions. The tail rotor's job is to counteract torque spin to keep the helicopter flying straight but also to allow it to spin about its axis when needed.", "One method is to change the angle of attack of the individual rotor blades at different points during a single rotation. It's called cyclic pitch. Increasing the lift during the rear half of the rotation causes a nose-down tilt of the whole helicopter so the lift direction is pointed forwards, causing motion as well mas lift. Different cyclic pitch adjustments can cause the aircraft to move sideways or backwards." ], "score": [ 82, 21, 4 ], "text_urls": [ [ "https://www.faa.gov/regulations_policies/handbooks_manuals/aviation/helicopter_flying_handbook/" ], [ "https://www.youtube.com/watch?v=WdEWzqsfeHM", "https://thumbs.gfycat.com/PlasticSpiffyBushsqueaker-size_restricted.gif" ], [] ] }
[ "url" ]
[ "url" ]
8a9zn2
Why does the light on a charger remain lit for a couple of seconds after it has been unplugged?
Technology
explainlikeimfive
{ "a_id": [ "dwwykxr" ], "text": [ "Capacitors. The charger contains capacitors, which you can think of as being a bit like batteries (they're different in important ways, but the point is they store electrical energy). It uses them for things like smoothing over any irregularities in the mains power supply. When you disconnect mains power, the capacitors are still charged, and the status light runs on that charge. Eventually it uses it up, and the light fades out. Try turning off the charger while a device is still connected, as though to charge: you'll see that the light goes out much faster because the stored energy has somewhere to go beyond keeping the little light on." ], "score": [ 83 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8abl0b
How was super bouncing in Halo 2 possible?
Technology
explainlikeimfive
{ "a_id": [ "dwxbwto" ], "text": [ "It is a software defect, caused by their imperfect attempt to take old software and teach it new tricks without rewriting it. URL_0" ], "score": [ 3 ], "text_urls": [ [ "https://www.giantbomb.com/super-bouncing/3015-401/" ] ] }
[ "url" ]
[ "url" ]
8acq3x
How does tap-to-pay work with credit cards?
I just tap my credit card on the machine and it works, but how? There's no computer in the card, so I honestly don't understand how the machine gets the information.
Technology
explainlikeimfive
{ "a_id": [ "dwxlaqz", "dwxlwuu" ], "text": [ "There is a little RFID chip in the card. When you tap it on the scanner, the scanner sends a signal that the antenna in the card picks up and uses to temporarily power the chip. The chip then broadcasts the card number, which the scanner receives.", "A few things here: 1. When you tap a credit card, you're using the Near Field Communications chip in the card to process a payment instead of the contact chip or magnetic stripe. 2. Using the NFC chip absolves the merchant of all liability. So if something goes wrong with a transaction you've tapped for, YOU are responsible and out of pocket, not the merchant or their bank (or the processor or your bank). 3. Modern credit cards contain two computer chips: one is in the contact chip and is powered by some of the shiny tabs exposed to the surface. The other tabs are for serial communication with the card reader. 4. The actual chip is located just beyond the contact chip you can see on the surface. If you run your fingers top and bottom over the card above the embossed numbers, you can feel a slight bulge where the embedded chip sits. 5. NFC works the same way as wireless charging: the chip is connected to a wire antenna which runs around the outside of your card, sandwiched between layers of plastic. When the right radio waves hit the antenna, it induces a current through the antenna which powers the microchip. The chip is then able to read the data being transmitted to the antenna, process it, and using the power from the carrier wave, emit a response signal on a second channel. 6. Since radio waves follow the inverse square law, the distance the chip communicates is strongly limited by the strength of the signal emitted from the card reader. Get too far away, and you might be able to power up the chip, but it won't have enough power to send a response back to the reader. Most EM-compliant cards and readers are limited to a distance of around 2 inches, although a skimmer with a hyperbolic antenna or other signal boosting can read the transmissions from a larger distance. [edit] RFID, such as used in RFID tags on store products, driver's licenses and passports, uses the exact same technology, but usually without the embedded microchip. RFID tags generally are designed to emit their serial number when powered up and so can do this using less energy, which means they have a larger possible range between the reader and the chip." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8aeot7
What makes a password stronger than another?
How is a password that has uppercase and lowercase letters, numbers, and special characters more secure if every character is always a possibility?
Technology
explainlikeimfive
{ "a_id": [ "dwy2a96", "dwy4906", "dwy2iml", "dwygoof" ], "text": [ "Because some types of attacks will do what's called a 'dictionary attack' and try dictionary words and thing based on them (apple, Apple, APPLE, etc). If you have to brute force a password (meaning try every possible combination) then as you add numbers, case, and specials the number of possible combinations goes through the roof (keep in mind, on average you'd have to try 50% of the possible combinations to get it right). Having those all be possible characters in a password increases the amount of possible passwords and hence makes it hard to guess.", "This is actually a common fallacy. \"Stronger\" in this case means more entropy, or to put it another way, harder for a computer to guess or brute force. What people aren't telling you enough is that longer is better than shorter. Suppose a password had only 8 upper/ lower case letters in it. That would give 52^8 possibilities, or 53,459,728,531,456. Adding in 10 special characters (@#$_ & -+*!) for example, gives you 62^8 or 218,340,105,584,896. Better, but not remarkably so (4x ish the size), considering how many guesses can be made per second. If you had simply added another letter, you would then have 52^9 or 2,779,905,883,635,712 which is literally exponentially better. For a fun way to visualize this, I recommend xkcd. URL_0 **EDIT I feel like I should add here that you should make sure to use a DIFFERENT, LONG password for each service that requires one. As people are pointing out, and I failed to mention, the easiest ways to \"hack\" an account are 1) find the same or similar username in a password dump from another breach and check if they re-use their password and 2) just call customer service and tell them you forgot it and no longer have access to your email. If you know who you are trying to \"hack\" you can probably find the answers to most of their \"security questions\" by being their facebook friend, checking their instagram, or just googling them. Hell, half the time its like \"what is your favorite football team\" and if joe blow lives in Philly, you probably have your answer.", "Depends on what you mean by more secure. Random uppercase and lowercase letters will make it more difficult for humans to guess. (\"PassWord\" is different from \"password\") Computers don't care. You can arrange 4 numbers in 24 unique ways. You can arrange 8 numbers in 40320 ways. When cracking a password with a computer the number of variables adds complexity. But for your usual password that does not matter bc usually they are usually broken by leaked databases. EDIT// They force number and uppercase to force complexity. The majority of people will still use Password123 unfortunately. EDIT2// URL_0", "Many folks have provided good answers. I would add that the \"complexity\" requirement is no longer viewed as best practice. The more serious risk is using an easy-to-guess password. \"Easy\" here meaning a password that, based on breach data, other humans have chosen before, and worse one that many humans have chosen. 
NIST (National Institute of Standards and Technology) recently published some [new guidelines]( URL_0 ) that basically say: * don't require complexity (the subject of OP's question) * do check proposed passwords against breach lists and other sources of commonly chosen passwords * do require long passwords * do not set a maximum password length (they suggest allowing at least 64 characters) * do not use password hints * do not set up security questions for password reset Given that the previous argument for complexity was (at least in part) based upon 2003 NIST guidelines, this revision is noteworthy." ], "score": [ 16, 6, 3, 3 ], "text_urls": [ [], [ "https://xkcd.com/936/" ], [ "https://xkcd.com/936/" ], [ "https://pages.nist.gov/800-63-3/sp800-63-3.html" ] ] }
[ "url" ]
[ "url" ]
8afrs4
How can/does the government seize a website?
Technology
explainlikeimfive
{ "a_id": [ "dwyd3gl", "dwybx0m" ], "text": [ "The way website names work is basically a phone book. When you type in \" URL_0 \", your computer goes to a Domain Name Server and asks it for the website's IP address (the equivalent of a phone number). Your computer then calls up that IP address, where the website's computers pick up and transmit the website data. There's a non-profit corporation called ICANN that runs the whole name thing, and they designate control of each section (.com, .org, etc.) to a different company. The directory of .com sites is run by an American company called Verisign, for example, while the .uk directory is run by a UK-based non-profit called Nominet UK. If the government wants to seize a website, they can get a court order to take control of that website's directory entry and change it to point to government computers instead of the website's computers, assuming they have jurisdiction for the company that runs that directory or they can cooperate with the country that does have jurisdiction. This is why The Pirate Bay was jumping from domain to domain- every time the record and movie industries convinced a country to take over their entry, they just registered a domain with a company in a different country. They can also seize the physical computers that run the website, but if they don't also take control of the directory entry, the website operators can just get new computers and change the directory to point to those computers instead.", "The find whoever is hosting it. Get a subpoena, and issue the subpoena to the hosting agency. The hosting agency then shuts it down and turns it over to the government." ], "score": [ 9, 3 ], "text_urls": [ [ "reddit.com" ], [] ] }
[ "url" ]
[ "url" ]
8agcp9
What exactly is turbo lag?
Technology
explainlikeimfive
{ "a_id": [ "dwygl0c", "dwygfwu", "dwyg9kd", "dwyi3w7" ], "text": [ "It’s easiest if you think about a turbo in two halves: the turbine side and the impeller side. A turbo works by forcing the exhaust coming out of the engine to spin a turbine that sits after the exhaust manifold which is connected to an impeller that drives air into the engine before the intake manifold, increasing the air pressure within the cylinders and allowing for more fuel to burned (and consequently more energy to be created through combustion). That turbine has a mass, and anything with a mass has inertia (meaning it takes energy to make it accelerate). When an engine is running at relatively low power, the amount of exhaust being dumped out of the exhaust manifold is relatively low, and as a result the turbine isn’t spinning very quickly. When you step on the gas, the engine picks up speed and as a result more exhaust gases are blown through the exhaust manifold and across the blades of the turbine, accelerating it. While the turbine is accelerating, the impeller is also accelerating because the two are tied together, which in turn forces more air into the engine on each intake stroke. It takes time for the turbo to spin fast enough to increase the pressure sufficiently to have an effect on the power generated by the engine. This time spent spooling up the turbo is what’s referred to as turbo lag.", "An engine turbo-charger has 2 sections. The turbine and compressor. Exhaust gases enter the turbine and cause it to spin. A common shaft drives the compressor. The compressor draws in more air for the engines intake. More air = > more fuel = > more power Turbo lag is the time it takes for the compressor/turbine spool to spin up to operating speed when the engine starts. It can be minimized by using 2 stage turbos which is 2 turbo where 1 has a low operation speed and the other has a higher designed operation speed. Or using geared or clutched turbos which can shift gears like the transmission to make starting easier.", "I think it's the delay on how fast or how slow will the turbo spin up to the point of making boost", "Turbos make a car go faster by shoving more air into the engine. More air in there makes bigger explosions inside the engine, and explosions in the engine make the car go. Turbos shove air into the engine by spinning fan blades. It's hooked up so the exhaust from the engine makes the fan blades spin. When the engine is going slowly, there isn't much exhaust, the fan spins slowly, and doesn't force in much extra air, so it doesn't make the car go much faster. When the engine start going faster, there's more exhaust, and it makes the fans spin faster, which forces more air in. The time before much air is getting forced in is called turbo lag. There's another thing called a supercharger that's like a turbo but instead of the exhaust making the fan blades move, a belt on the engine does. There's no lag because you don't have to wait for the engine to be going fast enough to make lots of exhaust. But since the engine has to turn the belt now too it has a little less energy to use on making the wheels turn." ], "score": [ 12, 4, 3, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
8agjwv
Solar panel losing its efficiency
I read that solar panels lose efficiency over time. But I cannot wrap my head around why, considering that there are no moving parts.
Technology
explainlikeimfive
{ "a_id": [ "dwyo05h", "dwyjjyt" ], "text": [ "Hey, Ph.D here working on light induced degradation (LID) in solar cells. Essentially, you need to think of the efficiency of a solar cell as its ability to generate but also collect electrons as useful energy. Now in the body of the solar cell, which we call the bulk, you can get many things that will steal that electron and stop it from being collected. These include metallic impurities (Iron, chromium, copper...etc.), crystallographic defects and usually anything that has a charge that can capture an electron or a hole (which we call charge carriers). The more of these defects you have the lower your effective efficiency. So what happens out in the field over time? There are many things that can happen that can cause a module to degrade. On the macro level, standard deterioration from corrosion, UV exposure, humidity, etc as mentioned by others in this thread. We also have a few such as PID (potential induced degradation) and LID (Light Induced Degradation). LID for example in monocrystalline solar cells is largely known to be due to the Boron-Oxygen defects. Under sunlight, boron and oxygen can combine together to form a B-O complex, which likes to steal charge carriers and usually results in a drop in efficiency within the first 48 hours. Now this problem has largely been solved. There are a few other known elements that can cause degradation under sunlight, i.e copper (which can form precipitates) and Hydrogen, etc etc..", "But it is exposed to elements, i.e. sun, humidity, wind, temperature changes, etc. Is not that there's a single point of failure, according to a [2012 study]( URL_0 ) by the NREL (National Renewable Energy Laboratory) in the U.S. some of the causes are: - Constant temperature changes (night & day thermal cycling): makes some soldering \"weak\" so cables and circuitry may get disconnected or with a higher electrical resistance. - Damp heat: may cause corrosion on cells. - Sun rays (UV light): causes the color of the cell to fade, which is important because of the way panels work to ~~generate~~ transform energy. And to this add: - Cheaper materials - Manifacturers cutting corners (using less material to make them) - \"bad\" Transportation and installation: with less material, they're more fragile and handle less stress. As you can see there are multiple reasons for degradation even without moving parts. Still they have a lifetime around 25-30 years but every year they degrade around 0.5% to 3%. [Source]( URL_1 )" ], "score": [ 38, 32 ], "text_urls": [ [], [ "http://www.nrel.gov/docs/fy12osti/51664.pdf", "https://www.solarpowerworldonline.com/2017/06/causes-solar-panel-degradation/" ] ] }
[ "url" ]
[ "url" ]
8ahsx7
Are there any consequences to having both 2.4 GHz and 5 GHz running at the same time?
Technology
explainlikeimfive
{ "a_id": [ "dwypqo3" ], "text": [ "No your modem will allocate bandwidth as it is needed, same as if it had one band. 5ghz means capable devices can have a better connection. Do it." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8aiqak
Why did it take so long for console games to run at 60 fps while PC has had this option for decades?
Technology
explainlikeimfive
{ "a_id": [ "dwyxfl6", "dwyxw0o", "dwyzfdw" ], "text": [ "PC performance is limited by your budget. You could go build an insane $10000 rig right now to run games at 4k, 120fps if you wanted to. Consoles need to be cost effective, and historically their hardware is already somewhat middling and out of date when they're released. The console manufacturers have no interest in pushing out a thousand dollar system nobody will buy.", "The simple answer is price. Generally speaking, people don't want to spend as much on a console as they would on an above-average PC. In order to keep costs low, they use cheaper/less powerful parts and try to make up some of the performance difference through \"optimization\" - game devs know every single machine has the exact same parts, so they can design/code the game to get the most out of that part rather than designing for multiple generations of product from multiple manufacturers. Yes, PC proponents will always try to convince you that you can build a PC as good as a console for the same price or cheaper, but this usually isn't the case and - even when it is - requires a significant amount of time/effort devoted to hunting sales. & nbsp; The big thing about frame rate is that - in most instances and for the average player - stability matters more than the raw number. Dipping below 26-30 feels clunky bc 26-30 is what we are used to from tv, movies, and previous console generations. Fluctuating between 45 and 60 feels choppy bc you are shifting back and forth, if it just locked at 45 and stayed there it wouldn't feel nearly as bad. & nbsp; Consoles, to keep prices low, shoot for good enough for the majority of people. While professional First person Shooter or Fighting Game competitors might really need higher FPS and refresh rates for reactions, the vast majority of people buying a console don't. & nbsp; **EDIT** by the time i finished this, /u/mmmmmmBacon12345 had a better TL;DR > Consoles need to be cheap to move volume to make up for their lower margin because they make a good chunk off of games. If they increase the price then they'll lower sales and make less money in the long run.", "Consoles are inherently built on compromises. Price of course being the primary limiting factor. History has shown that $399 USD is the \"sweet spot\". The maximum the average consumer are willing to pay. As such, you're limited by the kind of hardware you can use while still turning a profit on each sale. The guy playing Madden isn't going to fork out $599 on a system, which Sony learned the hard way a decade ago. And there's a lot more of them than the enthusiast crowd. By the same token, consumers expect a significant increase in visual fidelity with each generation. So console makers and game developers have instead focused on prioritizing graphical detail first, with the trade off being lower resolution and frame rate. Standard television broadcasts run at 30fps, and most games are playable at this frame rate. While higher temporal resolution does look better, and it does benefit many fast paced games especially in the competitive sphere, it requires much more horsepower. The other drawback with consoles is that consumers expect them to be a certain size, so they fit neatly in their home A/V centre. And they want them to be reasonably quiet. This doesn't leave a heck of a lot of room for cooling. Higher end hardware can put out a lot of heat, and it uses more power. Take the Xbox 360 for example. 
It was equivalent to a higher end gaming PC when it first launched, but they literally cooked themselves to death because the coolers were inadequate. The recent PS4 Pro and Xbox One X sell as luxury versions. AMD has also focused on increasing performance-per-watt of their GPU architecture. They run cooler. So consoles can feature mid-range hardware that's now capable of pushing out a solid 60fps at 1080p. PC on the other hand has far fewer compromises to deal with. Budget is really the only limiting factor. Since it tends to attract a hobbyist crowd, people are willing to spend more for higher end components. With larger cases and bigger fans, cooling is less a concern. Plus, while consoles tend to focus on a one-size-fits-all experience, PC allows for greater customization. So you're free to turn down graphics detail in order to get higher frame rates. It's worth noting though that older 2D consoles like the NES, SNES, and Genesis did in fact run at 60fps. The transition to 30fps happened with the introduction of 3D. It requires a lot more processing power and memory to push out 3D polygons than it does to push out 2D sprites. For a time even PC struggled until dedicated 3D accelerators like the Voodoo card became a thing." ], "score": [ 10, 5, 4 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
8aj82v
How do digital compasses work? (e.g. smartphone apps)
I assume there's no physical magnetic compass in them with how obsessed manufacturers are with reducing size. And GPS accuracy is within meters at best. So how do they sense which direction the device is facing?
Technology
explainlikeimfive
{ "a_id": [ "dwz14j3", "dwz14wq" ], "text": [ "smartphones have a small magnetometer built in, which can measure the Earth’s magnetic field. This information is combined with an accelerator that acquires information regarding the phone’s position in space. It is able to pinpoint the phone’s position from solid-state sensors within the phone that can measure their tilt and movement. The information provided by these devices means that your compass app can display cardinal directions no matter which orientation the phone is in.", "There is a physical magnetometer in most smartphones, they’re often integrated into the GPS chips anyway so phone manufacturers don’t have extra cost by using them." ], "score": [ 7, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8alsjt
Is watching a video game being played on YouTube graphically the same as playing it on your computer? Is there a graphics difference?
Even if I am watching a 1080p HD gameplay walkthrough video on YouTube, am I looking at the exact same graphics I would be if directly playing it on my computer (from the exe file)? Are there subtle differences my eye can detect but not consciously? If so, please do go into detail.
Technology
explainlikeimfive
{ "a_id": [ "dwzmk0o", "dwzmbam", "dwzmsfl", "dwzrqjs" ], "text": [ "YouTube is going to be using some form of lossy video compression to greatly reduce the amount of data being sent to you. In addition, if you are viewing at a lower resolution or color depth, there is going to be some kind of downscaling going on. Finally, if your connection is slow, some streaming services will reduce the quality of the video to save bandwidth. > Are the subtle differences my eye can detect but not consciously? Correct. A lot of video and sound correction come down to physiology and psychology, removing details you wouldn't notice, and preserving the ones that you do. For example, changes in brightness are more noticeable than changes in color, to brightness is more highly preserved. If you looked at both side by side, frame by frame, there would be easily noticed differences. But if you just see one at full speed, you don't miss much.", "Yes, your recording program will compress the video to store it, then the video is again compressed by YouTube when it is sent to your computer. Each compression results in some loss of data, making the video slightly different which is very hard to detect.", "> am I looking at the exact same graphics I would be if directly playing it on my computer (from the exe file). It depends. The computer the walkthrough producer is using might be capable of setting the graphical fidelity higher than your computer can handle, but if your computer is capable of their settings then it should be similar. Also YouTube encodes their videos with a compression algorithm which is pretty good but still introduces noise into the final image. These are most visible around subtle gradients and when parts of the image change dramatically between frames. All else being equal a YouTube video is going to look worse than the raw output of the video card, but that is a necessity as that raw output is *immense* if stored without compression.", "Video compression is being applied *twice*. Once from the initial recording, and once to either get it to YT format or YT itself doing that. Toss in any downscaling youtube does when they encode for resolutions lower than what your video is at(Which BTW YT forces lower bitrates for lower resolutions than most people would use if they wanted a good looking vid*. So, a 720p YT video will look worse than a 1080p vid even without the resolution differences) and things can get... wacky. Things can still look decent, mind you, especially if it's a simple video visually. Not much in the way of color changes or highly, highly detailed scenes. Animation, and especially anime or anime styled animation especially tends to benefit from this as the coloring is often very flat or simple. *Yes, I know that lower resolutions don't need as much in the bitrate department as higher resolutions. YT reduces it beyond what normal people would do." ], "score": [ 9, 3, 3, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
8amp6r
Why do so many password requirements specifically forbid spaces?
Technology
explainlikeimfive
{ "a_id": [ "dwzvvzt", "dwzwtp1", "dwzym5w", "dwzvedp" ], "text": [ "Because they have always had that requirement. Many years ago, passwords were stored as plain text in a database, or sometimes in a text file. When the computer read the password back from storage to compare it against the one it just got from a logging in user, a space in the password could be interpreted as the end of the password. This caused issues when comparing the passed password to the stored one. Issues could also be caused by certain special characters ( like ‘ and $) being used, that also meant something to the computer program, causing confusion and errors. So many times some special characters were not allowed either. Also, many programs had a tight limit on their storage space, so password length was also restricted. Since then, standards have changed, passwords normally aren’t stored directly anymore (T-Mobile????) Instead, a password is fed into a complex math problem and the result is stored. This means that most of the limitations that were imposed by the previous issues no longer apply. Unfortunately, the past rules are still used, even though they’re not needed, because they worked before and people are used to them.", "[Relevant XKCD ]( URL_0 ) Passwords would be far more memorable if they allowed spaces, and unlimited characters. “1@Mth3Hax0R” is memorable enough, but really not that secure. It’s going to be included in a basic library for password cracking so it would only take seconds to crack. Where “Peter hungry 3 lightpoles on LEFT” would take months if not years to brute force. LPT: Anytime you’re limited to 8 or 12 characters, You can be relatively certain that the password is being stored in plain text. Edit: this was actually meant to be a reply to OP on my other comment.", "Back in the day, certain special characters like space were used as delimiters. Between filenames, programming languages, and command line interfaces, it has hard to remember when it was allowed, when it wasn't, and when it was allowed with special handling. What's more, using a special character where you weren't supposed to could cause an error, result in wrong behavior, or even create a security vulnerability. Even when it was allowed, it could be a pain in the ass, your program crashing because someone put an unseen space at the end of a filename. Rather than try to keep track of it all, it was just easier to not use the space and use _ instead, and this became policy, whether it was necessary or not. Fast forward about 20 years, computers and computer interfaces are more sophisticated, and data sanitation is a thing. There are few places where a space cannot be used like any other character, but a lot of those old policies remain. There are still some decent arguments against using whitespace in general, if you want your password to be < tab > < space > < linefeed > < tab > < carriage return > < vertical tab > , you deserve whatever happens to you.", "There's no *technical* reason for this, and quite honestly I've not seen many password requirement that actually *forbid* spaces. Most password requirements will tell you want they *require*, and that will be letters and numbers and occasionally a *selection* of special characters. In the Olden Days, when password lists were often pre-printed, password generators would avoid letters or symbols that could cause confusion. For example, depending on the font used, the number 1, lowercase L and uppercase I can easily be confused. 
A space would also be one of those characters that can easily be missed in those cases." ], "score": [ 58, 35, 6, 5 ], "text_urls": [ [], [ "https://xkcd.com/936/" ], [], [] ] }
[ "url" ]
[ "url" ]
8andne
What was so revolutionary about James Cameron's Avatar?
Technology
explainlikeimfive
{ "a_id": [ "dx00bff", "dx00dlo", "dx00uq5", "dx01gbu" ], "text": [ "It was one of the first movies made with the facial mocap technologies and updated mocap suits. It allowed for far better and more realistic CGI movements and faces. To many it was also one of the few 3D movies with good 3D effects.", "It looked awesome. Pandora was beautifully rendered, the scenes with the flora reacting to touch were amazing. It was probably some of the best CGI around at the time.", "The movie wasn't great (I loved it, but I'm aware of what it too straight from other movies) but the 3D was amazing. I don't see many movies in 3D because it often feels like it's just tagged on to make a buck. It felt necessary to Avatar though, especially the parts where it was used to create depth.", "It was one of the first 3D movies shot with 3D cameras. As what seems to be Cameron's fucking MO at this point, he worked with people to invent camera tech for this/told them to. I'm not sure of his involvement on the tech side." ], "score": [ 20, 13, 5, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
8angrj
Up until about ten years ago, people in photos taken with flash almost always came out with red eyes, but now you never see people with red eyes in pictures - what happened?
Technology
explainlikeimfive
{ "a_id": [ "dx01org", "dx01p41" ], "text": [ "Two things really happened. First, the way flashes happen is engineered to give the eyes time to close the pupils, reducing the reflection. This is both the color and quality of the light, and the flickering that occur before the photo is taken. Second, the software now involved in nearly every photo-taking device recognizes red-eye and can mask it out. You can still get red-eye, especially on older cameras, with unsophisticated flash, and if you use film.", "One way of reducing red-eye is with a pre-flash before the actual picture. The flash before the picture is taken makes the pupils get smaller, since it's a bright light. Since the eye opening is smaller, less light from the real flash gets reflected back to the camera, so there's less red-eye. The downside is that this often makes people's eyes close for the actual picture." ], "score": [ 14, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8aory4
Why are ARM processors less energy-consuming than x86, and why is it not easy to make x86 consume less?
Technology
explainlikeimfive
{ "a_id": [ "dx0ckdz" ], "text": [ "A lot of the techniques that make x86 processors as fast as they are come with a severe power cost. The architecture has 30+ years of backwards compatibility and quirky behavior to maintain. ARM is a relatively simple architecture and was made with energy efficiency in mind, not the \"performance at all costs\" mantra that drove x86. ARM has also been able to issue multiple versions that break backwards compatibility to drop legacy features and add new ones." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8aoutq
These 2 questions: How does/did dial-up internet work? And why and how did making a phone call affect the internet connection?
Technology
explainlikeimfive
{ "a_id": [ "dx0cnsw", "dx0ck5t", "dx0crmr", "dx0csmx" ], "text": [ "It was literally like making a phone call. One computer to another. The sender would convert 1’s and 0’s into ‘analogue noise’ as if talking and the computer at the other end would covert them back into 1’s and 0’s again. You can only make one phone call at a time. If you picked up the phone while an internet call was in place. The background noise from the phone mic added to conversation. Therefore this would confuse a computer when listening as it failed to understand the extra noise. So it would freak out and disconnect. History: the first modems; you had to place a phone handset onto a ‘mic and speaker’ and manually dial a phone number on the phone for the computer.", "In the days of dial up there was no pre existing data infrastructure. That means that the only lines run to a house that were capable of transmitting and receiving data were the telephone lines. The dial up modem (modulator/demodulator) would place a specially created call to a service provider and create a session to the web. The session would require the service provider and the local modem to constantly \"talk\" to one another... If the session/call was interrupted for too long, the modem would drop the connection. If you had a second phone line only for the dial up this could be avoided. Any other questions? Happy to answer.", "> How does/did dial up internet work? You would use a device called a *modem* to connect your computer to the phone line. The computer would then use the modem to dial a number the ISP (internet service provider) gave you; at the ISP another modem would answer that call and your computer would then be able to talk to your ISP (i.e. you could access the internet). To give more details: your computer would talk to your modem which would translate the data into sounds (it would *modulate* them), the modem at the ISP end then would translate the sound back into data (this is called *demodulation*). This two tasks (**mod**ulation and **dem**odulation) give modems their name! Since phone lines don't have good sound quality the speed would be really slow. > Why and how did making a phone call affect the internet connection? A phone line can only be used for one call at once, so this means that while anybody talks on the phone nobody can go on the internet and vice versa. To be able to talk on the phone and be online you would have to have two phone lines.", "The data is ones and zeros. Simplifying things let’s say, dunno, Middle C note is a 0, and an A above is a 1. Cool, but slow. Now let’s say the C is 00, E is 01, A is 10, and D is 11. Cool now we’re twice as fast. We can split even more. But now any noise will mess with recognizing those notes. That’s why your phone call would mess things. You’ve got a chip “listening” to notes and “writing” ones and zeros. Any noise gets in tr way of the listening chip." ], "score": [ 19, 7, 6, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
8aowos
How does a country/company decide the perfect location for a space launch?
Technology
explainlikeimfive
{ "a_id": [ "dx0d94q" ], "text": [ "First, whether it is by a private company (SpaceX) or a government agency (NASA), they need to find sites like the Kennedy Space Center in Florida. They are \"allowed\" to launch rockets there by contract. Second, it's a space launch. That means you're sending stuff into orbit around the Earth. That means you need to be travelling as fast as you can in order to take advantage of the Earth's rotation. So that's why NASA launches in the South part of the US not somewhere like Washington. Third, another way to take advantage of the spin is literally to take advantage of the direction of rotation, which is why 99% (inaccurate but pretty accurate at the same time statistic) of rockets launch in the Easterly direction. BECAUSE you have to launch East, and because you have to ditch rocket stages as it flies and jettisons fuel tanks, the rocket flies over a body of water in the East." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8ap4bo
How does the BIOS know if there's an OS on a drive?
Technology
explainlikeimfive
{ "a_id": [ "dx0eg2r", "dx0hkar", "dx0nn9i" ], "text": [ "It knows to look at the [boot sector]( URL_0 ) at the start of the drive to find the code that starts loading the OS. If there's nothing in the boot sector, it assumes there's no OS on the drive. If you have a boot sector but no OS, the BIOS doesn't care, it's done its job.", "It doesn't. All it knows is to start executing the program on the boot sector of the first drive. That program is usually an OS or bootloader, but it could be anything. It could be a tiny program that just prints out \"No operating system found on boot disk.\" and exits.", "Newer computers do it a bit different, they use EFI, so the firmware on the motherboard loads (it's just a flash chip connected to the CPU which is configured to just start with that firmware). The EFI firmware scans all GPT partitions for EFI boot partitions, then looks for executable efi programs in those partitions and shows you the list of bootable executables. When you select the boot EFI the firmware loads it into memory and executes it, and that will load the operating system into memory and start it's execution." ], "score": [ 15, 6, 5 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Boot_sector" ], [], [] ] }
[ "url" ]
[ "url" ]
8at78n
Why does rendering take so long?
Edit: I mean image synthesis, or computer graphics, for example turning a 3D model into a realistic animation.
Technology
explainlikeimfive
{ "a_id": [ "dx1m2h2", "dx1lowl" ], "text": [ "The mathematical equations that describe the way light bounces through raindrops and off of oddly-shaped objects, the way smoke disperses in the air, the way the wind blows through blades of grass and so on are all incredibly complex. The more precisely you want to model these interactions, the more time it'll take for a computer to finish the computations.", "TL;DR It's just lots of calculations that get easier over time as hardware gets more advanced and cheaper so you can do more calculations in a shorter time, and better algorithms/engines are made that limit the number of calculations needed to get a nice-looking render. Rotating models: For something very simple, take a model of a person. The model is made up of points, each of those is made up of 3 numbers (x,y,z). Depending on how realistic you want the person to look, the model could be a thousand points so that you get very fine details. That's already 3,000 numbers just for that person. Now to do something as simple as rotating the person a bit to the left, you have to multiply all 3,000 of the numbers by something called a \"rotation matrix\" made up of the cosine and sine values of the angle you want to rotate them by. So you have to get the cosine and sine values of the angle you're rotating by, then go through all 3,000 of the numbers and multiply them. Another minor problem: the rotation matrix rotates the points around (0,0,0). So if your model is sitting at (1,0,2), you have to subtract (1,0,2) from all the points, multiply all the points by the rotation matrix, then move it back to its original position by adding (1,0,2). Some lighting: Then if you want light reflection to make your person look better, you need to look at polygons. Each polygon is made of 3 of the aforementioned points (some points are shared by different polygons, so it isn't just 1,000/3 polygons). The infamous low-poly Lara Croft with pyramid boobs was made of just a few polygons. Each of these polygons has a \"normal\", i.e. the way it faces. Each of those normals is 3 numbers. If your model is moving around and going through animations, you have to calculate the polygon normals any time you want to use them, which is done with a few subtractions and multiplications of the polygon's points' components. Once you have all those normals, you can go through and get rid of the polygons that aren't facing the camera, which itself takes calculations but will save you calculations later. Once you have the polygons that matter and their normals, you take into account the lighting position (sun, a lamp, etc.) and camera position, and brighten up the color of certain polygons based on their normals. Coloring: Speaking of color, now you have to apply color to all those polygons, so you take the texture of the person model and wrap it around all the polygons. The problem is, the artist drawing and coloring the texture was working with and coloring textels in their artist software. All you have are the pixels on the screen. If you have a really high res texture slanted to the side on a low res screen, you have way fewer screen pixels than the textels that the artist crammed into the texture. So essentially you just end up going along the texture, sampling it as often as you can and coloring the pixels on the screen. You can try to fix that by sampling multiple textels and coloring each pixel based on that, but that's more calculations. 
Then, because you're working with a grid of pixels on a screen, edges of polygons look jagged and overly defined (go into paint and use the line tool, it will look sort of \"terraced\", breaking up the smoothness of the line). This is called aliasing. There are anti-aliasing techniques that smooth out the jaggedness of polygon edges, but again, that creates more calculations. Now if you want something like a mirror, you essentially just render a new viewpoint looking outward from the mirror, crop it down to the shape of the mirror, and paste it over the scene. More lighting, more coloring, fewer polygons you can cull, more calculations. Now rotate every model in the scene, move it to its appropriate position, calculate normals, wrap all the textures, make them brighter based on the polygon they're wrapped around, and do it all 30-60 times for every second of video that you want." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8atd7f
How do radios "listen" to a specific frequency?
Technology
explainlikeimfive
{ "a_id": [ "dx1ex3x" ], "text": [ "The simplest radios simply used a resonating circuit. Like a tuning fork that only vibrates at a certain frequency, these circuits could be excited by a radio signal at their resonant frequency. These are called \"crystal\" radios. Usually these had a separate circuit for each frequency of interest, and you could select between them, although by adding an adjustable capacitor, you could make them tunable. The next type of radio used a clever way to variably tune the frequency of the receiver. They used the fact that two signals overlaid will have a \"beat\" to them, where they constructively and destructively interact. This creates a new frequency equal to the difference between the two input frequencies. In one of the best names for things I know, these receivers are called \"superheterodyne\" receivers. They have a variable local frequency source, that is a circuit that is adjustable and just puts out a tone at a selected frequency. This is added to the incoming radio signal, creating a new signal at a frequency that is the difference between the incoming signal and the local signal, due to that beat effect I mentioned. By selecting the right local signal to add to it, you can make the output signal whatever you want. So, you just create a filter circuit for some fixed frequency, and then adjust your tuner to shift the signal until the desired frequency fits through that filter, discarding all the other noise. I'm sure it's all done with microprocessors or something now, but the old analog solutions are quite clever, I think." ], "score": [ 9 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8ax7n1
How do software companies keep their code secure when there are dozens/hundreds/thousands of people working on it?
Technology
explainlikeimfive
{ "a_id": [ "dx29xjo", "dx2a4uh", "dx29bl8", "dx2d3xu", "dx2dwwg", "dx2cjl4", "dx2krq1", "dx2mkq8", "dx2liwz", "dx2io23", "dx2gugc" ], "text": [ "When you say \"secure\" do you mean safe from internal tampering? like someone adding a back-door without people knowing? or people selling this information to other companies / people? 1. This is sometimes solved with a code review process, someone needs to review this code and sign off that it is okay to be added. In more sensitive parts, the same code will be reviewed by multiple people, usually not from the same team. This increases the chances of finding anything bad (malicious or not). If all of them are in on it, then sure, they can probably place it in but usually many different people will work on the same code, so you will need a lot of people to be a part of it. 2. Regarding exposing the code, this is obviously difficult to prevent. Depending on what the code does, many companies use other methods to try and learn if their code was leaked (Checking competitors new \"features\", scanning certain websites that may show such code and so on). Keep in mind that many people wouldn't want to risk their job or risk going to prison that easily, and in many cases it is easy to learn who was involved in a leak or a malicious addition to your code. That being said, it still happens a lot.", "Secure from what? This word means many different things according to the threat model, that is, the kind of attacks or accidents you want to protect against: * Accidental deletion/corruption: backups, no need for more explanation. * Accidental or harmful leaks: big companies split the resources so employees don't have access to everything. For instance when the windows 2000 source code was leaked, it wasn't complete. Small companies do little or nothing, and trust their employees. * Inclusion of malicious code: code review, when they do it (not often enough!) * Leak by hacking: try to protect it like any other asset from the company. Source code is hard to protect, because all developers must have access to part of it. Development and testing infrastructure is notoriously insecure (deployed with default passwords or none at all, debugging mode, etc.)", "Most places have the code in chunks. Your team works on a chunk. That team works on a chunk. Programmers don’t have access to the whole enchilada. That said there are backups and your source control guy can see everything. You keep them happy.", "Stealing a company's source code wouldn't really be that big of a deal in most cases. Even with the code, nobody can turn around and legally sell the software without it being immediately obvious they're doing it. Even if you try to, you don't have the advertising, marketing, support staff & industry reputation it takes to make something of it. If the code is for an online service of some sort (eg - Google), it's worthless without the millions/billions of dollars of infrastructure it takes to run the code & make use of it. If somebody just wants to steal a program, it's easier to just pirate a copy & not deal with the complications of building it themselves.", "Some people here are saying code reviews and preventing all developers from having access to all the code. I've worked for several large software development companies and not a single one of them did this due to the cost. All developers had access to all code and there was very little code review due to tight timelines to release new features. 
The only people who didn't have complete access were consultants and once they'd worked for us for a few weeks they were also given full access. All the developers had access to all the data in the database, too. The companies felt it was too time consuming and expensive to restrict access, even to sensitive data. What we did do was log who accessed and checked code into the repository. We didn't log who accessed which data in the database.", "The solution is surprisingly non-technical: Lawyers. Code can leak, but if you have some really nasty attack lawyers, it will not be profitable or safe to benefit from code you've gained covertly. Also, in most cases, it doesn't matter. Say, for example, that I got the source for, say, Microsoft Windows and Office, and that Microsoft's lawyers all got hit by meteorites. Now, I can make my own Windows and Office solution. Would the user base care? No, they don't want some new product of questionable legality, they want the support that a huge organization can provide. Heck, there are legal alternatives that are superior to Windows and Office today, yet, almost no one cares.", "A few people have touched on this, but here’s my experience. The places I have worked don’t have any protection in place to keep code secure from developers. One final company I worked for had a “no flash drives” policy, but there was no way for them to detect a flash drive, and I could have just copied everything to a cloud drive without anyone noticing. In most cases, I had access to the entire projects codebase and I some cases, all the code to other projects I didn’t need. I think the risk of a developer leaking code is lower than the hassle of people not being able to access something when they need it. I’ve always worked at places where “dozens” of people were working on a project, so I admittedly don’t have experience at larger companies.", "Because the last thing any software developer wants to do is more work. Source: am software developer", "Ex Microsoft guy, worked on SharePoint. You only get access to source code you absolutely need. There are parts you need from say MS Word but you get those in an already compiled form (.dlls etc) usually. Only the build engineers get the keys to the kingdom. Also, there are background checks. Also, there are ways to examine the binary (apps) to see if application X has the same code paths as application Y if someone directly steals your code. Similar to how virus scanners work.", "Source control and build environments keep mistakes by individual developers from harming the end product; you can pick and choose which version of the code goes into software that ships. But typically you can see most if not all of the code. But what’s valuable isn’t the code but algorithms and techniques for handling certain problems. In a large software base there are thousands of well paid developers with very domain specific knowledge of a particular problem space. People occasionally steal code, but mostly companies go after people who know how it works instead. Companies don’t want stolen code - it’s a huge liability.", "If you mean keeping it private then they sign non-disclosure agreements. If you mean tampering with it then they have a higher up review every addition before added to the main code." ], "score": [ 767, 90, 60, 57, 34, 13, 9, 5, 5, 5, 3 ], "text_urls": [ [], [], [], [], [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
8axlab
Why do smartphone companies upgrade cameras, processors, memory, etc. yearly, but not battery technology?
Technology
explainlikeimfive
{ "a_id": [ "dx2b45j", "dx2bjoz", "dx2b18m" ], "text": [ "I will start by saying I am not a scientist or battery specialist. Just a techy and rc enthusiast. Most phones do receive an upgraded battery along with other components. however the components require more power and therefore battery life remains similar. As for pioneering entirely new battery types or using things similar to the RC world, volatility becomes an issue if they’re not handled extremely carefully which is not in the cards for the average cell phone user.", "It's because battery technology has hit a wall, unlike processors, cameras, and radios. The lithium-polymer battery in your new phone this year is essentially the same as the one in your phone five years ago. At this point all they can do is 1) make the battery physically bigger or 2) make the hardware and software more efficient. They're doing both of those things. Big leaps in battery tech are coming, but we're getting down to some really hard physical limits now. I wouldn't expect anything major in less than five years, probably closer to ten.", "Most recent smartphones last a day on charge. Usually people charge their phones once a day so it's sufficient for most people, not very much required." ], "score": [ 12, 6, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
8azcgh
What is the purpose of a ReLU in convolutional neural networks? What does it do?
Technology
explainlikeimfive
{ "a_id": [ "dx2ri09" ], "text": [ "A neuron in a neural network has an 'activation function' which converts the sum of its inputs into an output. ReLU stands for Rectified Linear Unit. If the input is positive then it will simply output the input. If the input is negative then it will output 0. ReLU is popular because it doesn't suffer from the \"vanishing gradient problem\" (where small gradients (slopes of functions) across several layers of neurons result in tiny changes when trying to improve the network (small number multiplied by small number is a tiny number) and so takes a lot of data to train) because its gradient is 1 for positive inputs." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8azdi9
Why does it seem to me that razors marketed to women are way duller than those for men?
Technology
explainlikeimfive
{ "a_id": [ "dx2p80v" ], "text": [ "URL_0 According to this article, the shave angle is different between male and female marketed razors, so maybe that has an influence on it (They said the metallurgy should be the same)." ], "score": [ 4 ], "text_urls": [ [ "https://www.rd.com/health/beauty/mens-and-womens-razors-whats-the-difference/" ] ] }
[ "url" ]
[ "url" ]
8b05gz
How does a microphone work? How does it take sound and move it to another device or whatever?
Technology
explainlikeimfive
{ "a_id": [ "dx2vffa" ], "text": [ "A microphone works by allowing sound waves to move a flexible membrane. The motion of the membrane is detected, usually magnetically, and turned into an electrical signal. The signal can be run through an amplifier into a speaker, or digitized and turned into anything." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8b0x24
What is so special about petroleum oils that we can't just simply use most other kinds of oil (vegetable, animal) as a fuel substitute?
I am assuming similar viscosity and consistency and whatever because obviously you can't run a combustion engine on a solid block of fat.
Technology
explainlikeimfive
{ "a_id": [ "dx32ay6", "dx3e5vr" ], "text": [ "Petroleum oils are hydrocarbon chains, links of carbon atoms with hydrogen atoms hooked to them. Vegetable oils have links of carbon atoms with some hydrogen atoms, but also some oxygen-hydrogen units attached to them. You can absolutely use these as a fuel supplement, in the US almost all cars run on 10% Ethanol (mostly derived from corn oil). Some cars can run on 85% Ethanol, and others run on pure oil (which is usually called bio-diesel when used for fuel). When it comes to why petroleum, the answer is very simple. It's cheaper. People buy fuel from the lowest cost source.", "You can burn nearly anything as fuel if you want to; there are engines that run on (processed) vegetable/cooking oil, and obviously ethanol supplements are well-known. They just aren't as efficient as petroleum gasoline. the reason petroleum is prized is that it is basically the highest potential-energy-per-weight/volume substance we can find that both occurs naturally and isn't hugely volatile." ], "score": [ 12, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8b1v1n
If the "M" stands for "model", how is the M4 more advanced than the M16?
For clarification, Irish here, not American. I do know a little bit about guns, but not all *that* much. We do have private gun ownership here, but nothing military grade. It's just something I've wondered about because I've seen it in movies and games.
Technology
explainlikeimfive
{ "a_id": [ "dx3a4qa", "dx3a51j", "dx3a25u" ], "text": [ "The american military nomenclature system is screwy, and requires some contextual knowledge. The main point of distinction is that the M-16 is a \"rifle\" while the M-4 is a \"carbine\". Rifles, Carbines, Handguns, machine-guns, grenade launchers, ect. all have their own, independent and parallel model (M) designations. The M-16 is the 16th \"Rifle\" adopted since the military moved to this naming conventions, while the M-4 is only the 4th \"carbine\" adopted. Also, it's pretty much impossible to legally buy any full auto in the US, so the civilian market doesn't particularly care about M designations.", "They're different types of weapons, so the models do not match. The M4 is a carbine rifle, the M16 is a rifle. The M4 is a lighter, shorter version of the M16A2 assault rifle, which itself is a version of the M16 rifle. Note that \"assault\" in this instance means \"capable of selective fire\". If it helps to think of it differently, \"M4\" also applies to a specific combat shotgun. It follows the same model naming system, but it's an entirely different weapon.", "I tried reading up on it but im still not sure. I believe the M4 is a carbine model and the M16 is a rifle model. Perhaps thats why there is a difference. As in, the model 4 carbine is newer than the model 16 rifle. [Found some stuff here]( URL_0 ) > If we look at the common M4A1 Carbine, the weapon's designation is literally Carbine, Model 4, Alteration 1. Then if we look at the M16A1 Rifle, we can see the full designation should be Rifle, Model 16, Alteration 1." ], "score": [ 20, 3, 3 ], "text_urls": [ [], [], [ "https://www.quora.com/How-does-the-US-Military-name-their-weapons-like-the-M1-Garand-M1-Carbine-and-M14-What-do-the-letters-stand-for-and-why-are-the-numbers-often-so-far-apart" ] ] }
[ "url" ]
[ "url" ]
8b6g90
How does wireless phone charging work? How does the electricity get through?
Technology
explainlikeimfive
{ "a_id": [ "dx4b5te", "dx4b75p" ], "text": [ "Wireless chargers or induction chargers work by using an electromagnetic field, a magnet, to transfer current into the device. The charging station generates a magnetic field wich is used by the device to charge the battery. Thus there is no \"shock\".", "Induction. if you move electrons down a wire you create a magnetic field and if you move a piece of wire through a magnetic field you create a current in it. Inductive charge plates have coils in it which create rapidly changing magnetic fields which in turn create a current in the parts of the phone that use them to charge up their battery. So electricity is turned into magnetism which travels freely through the air and then turned back into electricity on the other end." ], "score": [ 5, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8b8qwq
How are songs from concerts recorded, considering the sheer amount of background noise from the crowd?
Take for example this video: URL_0 (AC/DC concerts are a riot!). How is the audio so clear considering the sheer number of people in the background?
Technology
explainlikeimfive
{ "a_id": [ "dx4s5vf", "dx4vy45" ], "text": [ "The audio is recorded straight from the source. So the mic's, guitars etc all go straight to the sound board and are recorded from there.", "The microphones are much much much closer to the sources on stage than they are to the crowd. Also most stage mics, especially the mics the singers use, are designed to reject sound coming from the back of them. Crowd noise plays almost zero factor in recording sound from the stage, in fact, the bigger problem is avoiding capturing the sound of the band's monitor system. The band has their own sound system with speakers on stage pointed directly at the performers. This is crucial as they need to be able to hear what they are playing and singing. This needs to be loud enough for them to hear but not too loud that it gets into their microphones too much, otherwise you'll get feed back. This is why they usually have a totally separate mixer on the side of the stage just to adjust the levels in that monitor system. Source: I'm a professional live audio engineer." ], "score": [ 13, 8 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8b8xxs
The devices government and public officials wear at conferences where multiple languages are spoken.
Technology
explainlikeimfive
{ "a_id": [ "dx4tq41" ], "text": [ "It is basically a radio that connects to a translator who is translating what is said into a language they speak." ], "score": [ 8 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8bb3h1
What's the difference between FDMA and TDMA?
I'm having a hard time grasping the major differences and uses between the two. When and why would one be more practical than the other? What applications is each one most useful for? FDMA - Frequency Division Multiple Access TDMA - Time Division Multiple Access
Technology
explainlikeimfive
{ "a_id": [ "dx5f0d5" ], "text": [ "FDMA: send signals on different radio channels TDMA: share a channel by taking turns Both are very handy." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8bbjrl
Why does older music often play at a lower volume?
Back when I used CDs, I noticed I often needed to turn the volume up to hear older CDs at the same level as recent CDs. The same seems to be true for streaming. I may have to turn up a track from the '80s, but current tracks always sound loud as hell.
Technology
explainlikeimfive
{ "a_id": [ "dx5fc6q", "dx5eslw" ], "text": [ "TL;DR because music company executives are dicks who care more about scrounging money than sound reproduction. Early CDs and mixing engineers were propely doing their job in order to maintain the maximum amount of dynamic range (the difference between the quietest and the loudest sound in a recording). However as time went on, market forces caused a sort of arms race in loudness in the recorded media so that songs played would make more of an impression on the listener. This is referred to as [the loudness war]( URL_0 ).", "The encoding formula allows the producer to decide what percentage of \"maximum loudness\" each passage should be. It used to be that they'd aim for medium, with max reserved for the loudest moments. But the style has changed, since this changing-loudness format is tricky to listen to in noisy environments." ], "score": [ 52, 3 ], "text_urls": [ [ "https://en.m.wikipedia.org/wiki/Loudness_war" ], [] ] }
[ "url" ]
[ "url" ]
8beryt
Why doesn’t stereo volume stop at a point before distortion occurs in stock speakers? Conversely, why don’t stock speakers hold up to the capability of their stereo counterparts?
I was fiddling with my stereo today and increased the volume from 66.6% (30/45) to 80% (40/45) and the music started to distort. Now, 30/45 is already overkill in my car, but theoretically, shouldn’t you be able to turn the volume to 100% without any negative effects? These speakers were designed for this stereo by default. Why wouldn’t they max out the volume before losing audio quality?
Technology
explainlikeimfive
{ "a_id": [ "dx66j20" ], "text": [ "Manufacturers often make more variety of speaker than head unit, so low quality speakers are often paired with more powerful head units. They want to keep the overall system inexpensive, but they're not able to get a cheaper head unit." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8bfn7u
Over what time span is facial recognition software accurate - and does it allow for changes, for example from piercings?
Technology
explainlikeimfive
{ "a_id": [ "dx6fly6" ], "text": [ "The way that many facial recognition algorithms work, is that they take many different pictures of your face (as many as possible), from different angles, and preferably with slight changes like different haircuts. All of those images are then combined into an [eigenface]( URL_0 ) image that is essentially the average of all the images. When the images are averaged together, certain features that are unique to your face will stick out more when compared to an eigenface image from someone else's face. Whether or not a facial algorithm will be thrown off by your face from a different age, or with piercings would depend on what data the facial recognition was fed to begin with. If you only supplied pictures of your face from when you were young, then it is possible that the facial recognition algorithm might not recognize your face when it is older, because your face might have changed enough to not be recognizable enough. However, if you had fed the algorithm pictures of your face from different ages, then the age differences would be averaged into the reference eigenface image, so the algorithm would be more likely to recognize your face. Same goes for piercings, but I bet the algorithm would have less of an issue recognizing your face with and without piercings because it wouldn't change your overall face too much in the averaged eigenface image." ], "score": [ 3 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Eigenface" ] ] }
[ "url" ]
[ "url" ]
8bg5fw
Why is the reversible, and compact design of USB-C only made recently? Why couldn't they have used this design decades ago when they were designing USB-A?
Technology
explainlikeimfive
{ "a_id": [ "dx6l0qs", "dx6qai9", "dx6mk0j", "dx6hetr", "dx6rs29", "dx6ppwc", "dx6vmq2", "dx6uokm", "dx6o5l1", "dx6mfiq", "dx6oyl0", "dx6q2h3", "dx6ra26", "dx6m5sa", "dx6rlbz", "dx70a7l", "dx6tmu1", "dx6q1hd", "dx6uzgj", "dx6xz5i", "dx6v7pc", "dx7cogs", "dx740oz" ], "text": [ "One of the other deals with this: USB was meant to be a replacement for SERIAL interfaces (Eg, RS232). It was a way to quickly transmit data that happened to also provide a little bit of power. Today, it's primarily used as a power source that happens to have a data exchange. In terms of transmitting information like RS232, USB was meant to be a semi-permanent, hot-swappable interface for things like mice, printers, keyboards, and gamepads. No one was thinking about charging phones. The USB-A and B ports were designed to be fairly strong on their own. USB C is a significantly \"weaker\" physical connection.", "Some more food for thought: USB connectors were designed in the '90s. At the time, there weren't really any devices small enough to *require* super compact connectors. They did provision Mini USB at the time, and it took a while before devices got small enough to necessitate the introduction of Micro USB. Additionally, larger connectors are more physically robust, so this would have factored into the decision. Regarding reversibility: either you make the connector keyed, so it can only be inserted one way, or you make it reversible/symmetrical. If there's no perceived advantage to one solution or the other, then both seem equally viable. It's possible that a keyed design was chosen at the time to minimize the complexity of the connector in manufacturing. Also keep in mind that at the time of USB-A's design, virtually every data connector standard used a keyed design. Reversible connectors were pretty distant on the industry's radar - consider again that connectors hadn't gotten super small yet, and reversible connectors are most useful at small sizes where it becomes a pain to fit the keyed ones. Some better-informed responses already in the thread, but I wanted to add these points for consideration! The TL;DR of the whole thing would be that it wasn't a matter of technical limitations, but simply that there was no perceived need or benefit at the time to make tiny reversible connectors. Those just plain weren't necessary for the intended purpose of USB-A in the timeframe during which it was designed.", "What is obvious in retrospect was not necessarily obvious at the time. Engineering always involves compromises, and reversibility would have been low on the list. At the time USB was relatively compact, robust, and convenient, but was pushing the mass manufacturing of the time. Then you saw the mini USB, and later micro USB. Reversibility wasn’t seen as a necessary feature until Apple created the lightning port. It was only then that the market began demanding reversibility in the plugs. So really it is a combination of vision based on where they were coming from, engineering compromises, manufacturing capabilities, and market forces.", "Reversible plugs require better microcontrollers to determine powe distribution. Basically directional plugs are dumb and easy, you always know what pin is going to be where and sending what. If its directional then you need a microchip to check which pin each slot got and how to act accordingly Think like water taps that have 2 handles (one hot, one cold) or just one lever. 
The one lever is more complicated to design and requires more balancing and math on the engineering side; the 2-tap one is cheaper and leaves it up to the user to know what each does.", "USB was abysmally easier to connect than previous serial or parallel interfaces, which were 4-5 times USB's size, had dozens of pins and had to be actually screwed in. Complaining about why it wasn't reversible from the beginning is like giving a nice car to someone traveling by mule and him refusing because the seats are not leather. Edit: Yup, I'm abysmally wrong and I should've used a different word. Sorry, English is not my first language.", "In mass electronics the answer is almost always cost. Today on Mouser a USB-C socket is over 3x more expensive than a USB-A. Multiply this out across a few million units and you are talking real money. The USB forum, which represents device manufacturers, wanted to minimise these costs. There were also design compromises which were made which were later shown to be less important than first thought. Notably the wear and directionality of the USB cables. The USB forum decided on a deliberate policy of wearing the plug, not the device. So a USB Micro-B plug has small spring elements on the bottom that lock it into the socket. Being mechanical bendy devices these wear with time; a deliberate choice was made to take this wear on the plug. Time and the Lightning connector have shown that the wear is less of an issue and consumers don't care. With USB-C the bendy fatiguing spring is in the socket of the device, allowing for a cleaner appearing plug. Initially USB was designed with a strong Master-Slave relationship. One issue they wanted to avoid was pairing a Master-Master or Slave-Slave, a common issue with serial that led to atrocities like null modem adapters. Part of avoiding this was to have distinct Master and Slave plugs so building the wrong relationship was physically impossible and obviously not going to work. Mobile phones buggered this all up with USB On-The-Go, which allows a device to be a Master one minute and a Slave the next. With this compromise made, the initial rationale fell through, and now we have identical USB-C plugs on both ends of a cable.", "All these answers are great but realistically what happened is this: - no one thought you could make a reversible cable - no one asked for a reversible cable - there was so much stuff to do on this project, even if someone did think of a reversible cable it was thrown down to the lowest priority - Apple came out with one and suddenly the masses were like “oooooh I want that” This goes with the theme of Henry Ford’s “if you asked people what they wanted they’d say a faster horse”", "There are a lot of really good answers in here, but I think there's an angle that isn't getting much attention. The direct answer to your question is that there was no technical barrier to engineering and mass-producing a reversible connector that would have had the same technical capabilities as a USB 1.0 connection. It would have been simple to create a connector that was mirrored on each side and thus could be inserted with either side facing up. The problem is that engineers don't always consider the user experience. And to be fair, that's not their job. Their job is to make things work well. Sure enough, USB-A connectors work well *as long as you follow the instructions and insert them properly*. 
But as anyone who has used a USB-A connector knows, the experience is often mildly frustrating: you try to plug it in, but it won't fit, so you flip it over, and it still won't fit, so you flip it over again, and surprise, it fits. The companies that got together and came up with USB -- including Microsoft, Intel, IBM, and several others -- should have done some user acceptance testing. That is, they should have asked a group of everyday, random, run-of-the-mill people to come in and try using the thing with no instruction other than \"plug this in\". The reason they didn't do this is, ironically, a huge part of why USB was invented in the first place. When USB was being developed in the early 1990s, most people had only a basic working knowledge of how to use a computer. They could turn it on and launch WordPerfect, but couldn't hook up a new printer. USB was meant to help by greatly simplifying (and standardizing) the connector, and greatly simplifying how drivers were installed (prior to USB, installing a peripheral typically involved a nightmare of IRQ, DMA, and address settings, as well as installing software that was not at all user friendly). Since this was all beyond the grasp of the average computer user who wanted stuff to \"just work\", the coalition behind USB didn't really consider their experience. Since USB was such a massive improvement over the current standards, it was assumed that everyone would love it. And we did, mostly. TL;DR: There were no physical, technical, or manufacturing limitations that prevented this. It was just never anticipated that it would be an issue.", "They simply didn't think about it at the time; no technical reason why they couldn't have implemented reversible plugs decades ago. *edit* also URL_0", "USB in and of itself is an attempt to condense parallel communications into a serial interface. When USB was designed, they condensed 8 pins to 4. I am not trying to say the original RS232 port was parallel, it was serial; but the logic to achieve the same ability with fewer pins was one of the main priorities when USB was designed. Fewer pins + more data throughput = much beefier controller chip needed. It should be noted that only in the last few years has technology become cheap enough to push the limits of the USB standard. Which also means that the designers of USB had already attempted to use the most powerful controllers they could. This entire conversation is more or less a display of how technology 'know-how' doesn't always keep pace with the literal technology available. The technology exists to push the standard much farther, but the price per controller is holding it back. Remember, BILLIONS of devices use this standard.", "If you look at Apple, they were always trying to get here. They were the first to release a computer with no dedicated keyboard or mouse ports, USB all the way. They were pushing FireWire, which was better than USB for a decade; they used DisplayPort and miniDP before almost anyone, and released Lightning in 2012, giving a small reversible connector. Since USB is a standard that has a consortium behind it, it moves at a snail's pace in regards to upgrades.", "Because we humans can be slow to build the obvious. For example mankind has had the wheel for thousands of years. Mankind has had luggage of various types, also for thousands of years. Wasn't until about 1970 that someone thought to put wheels ON the luggage.", "I think the real question is why USB Mini-B and especially (!) 
Micro-B didn't have reversibility as a feature", "As with most improvements, things take time. We learn over time what works and what doesn't - and we gain additional knowledge in the process that brings to mind new ways of doing things.", "There are no technical limitations. No one thought of it till Apple did the Lightning cable. Then that influenced the USB-C cable to be reversible.", "That's like saying why didn't we have SUVs until they were invented; we already had cars and trucks. The USB drive was a breakthrough itself when first invented and it could only handle about 12 Mb/sec, but as they were used the limitations and issues were discovered and addressed in future models. In fact, although it is called USB-C, it is not the same as USB 3.0, which is a standard for speed and other features, whereas USB-C refers to the connector shape, so cables from different companies can have much different speeds. URL_0 Regarding why it didn't switch designs earlier, the USB connector was one of the smallest ports on a regular computer until recently. Even laptops would have a VGA out and various other ports which kept the design of the cases deep enough to accommodate multiple USBs stacked on top of each other. Add in the fact that a USB 3.0 port is backwards compatible, so a first generation USB device will work without needing any adapters. This leads most peripheral manufacturers to use the USB standard, and if most peripherals use that standard then PC manufacturers have little incentive to switch to a whole new port that few people need or want. Now, with the desire to make smaller and smaller devices, even 1 USB port can be too large to fit in the laptop case, but the real benefit for computers is the added speed. But the real driver pushing adoption of USB-C is cell phones, where even the headphone jack is too large for some high-end devices as they try to get thinner and thinner. And Apple computers, but they just really want to sell you all the adapters you need to connect to the devices they sold you with your last computer.", "USB-A was introduced in 1996. At that time, most households didn't have PCs, laptops were rare, and Windows 95 was a revolutionary operating system. Google was launched in 1997. The \"revolutionary\" Nokia 8110 featured in The Matrix had not been launched yet. URL_0", "The needs have changed as time went along. They had different things which had different uses. Now USB-C can transfer energy and also specific information. I have USB-C for my laptop and phone. Phones didn't charge over a standard cable back then, and nobody even considered charging a laptop that way.", "The design is not good for immediate identification. Generally ports are made so that they are not reversible. That was one of the considerations when making them originally: a unique shape that only fits one way. USB-C is not very unique and is not a very good design when speaking in terms of safety and longevity. URL_0", "Imagine a power plug with two \"pluggy-inny\" ends. (Old USB plugs are just conductors from the pin on the computer to the pin on the device. USB-C has circuitry to figure out which end is plugged into what, and which direction the zappy stuff goes. Even the first USB-C cables were a mixed bag, and a lot of times a high-draw device would melt a low-capacity cable.)", "I haven't seen it mentioned here, but one of the bigger \"Electrical\" (read not electromechanical) complications of implementing USB-C is the forced inclusion of a signal mux for the data lines. 
This wasn't always easy (and certainly at high speeds, can be quite hard). A good bit of technology had to catch up to do this affordably. E.g., in simple terms, if you happen to have wire A mated to wire A, B to B, C to C, by flipping the connector you mismatch the letter pairings in the cable. From a hardware point of view, there's an integrated circuit that then \"fixes\" the marriages of the different lettered wires regardless of the orientation of the physical connector. There's more to it of course; there's a good bit of science involved to negotiate the different power delivery requirements (and increase the voltage) to mitigate problems with increasingly small wire gauge. But I won't dive too deeply there. Source: have designed / laid out USB 3 connectors And the less ELI5 version: URL_0", "The pins have to be made much smaller to fit them all in a compact connector. This means higher cost. In the old days of USB 1 and 2, you only had 4 or 5 pins. Now with USB-C you have 24. (You have 8 for superspeed pairs, 4 for normal high speed data, 4 vbus, 4 gnd, 2 sidebands, and 2 CC pins.) Almost 1/4 of those are only used because the USB-C is flippable, so it isn't the most effective use of real estate from a connector point of view. You also have a lot of neat features for USB-C that frankly weren't thought of way back then. Those sidebands can be used for \"alt-modes\" for DisplayPort, for example. There's also support for Power Delivery too. These needs didn't exist until recently. TL;DR: 1) Manufacturing small enough pins to fit 24 pins in such a small connector 2) USB A/B type connectors were good enough for what they were needed for at the time, with interest in alt-modes and power delivery coming later 3) Cost", "I'm not completely familiar with how USB-C works, but if it's anything like Lightning it works by probing the cable to figure out which way it is oriented and then choosing which pins to use, which requires extra computing power and circuitry. That's nothing today, when even an IO chip has considerable computing power and high-density circuit boards with nearly microscopic components are universal, but back in the mid-1990s it would have been seen as a pointless waste of resources. At that time I don't think there were *any* multi-pin computer interfaces that could be plugged in more than one orientation. It simply wasn't something that anyone had thought of or needed. USB *was* unique in that it was the only (non-round) computer connector whose orientation was determined only by the internal pin layout, not by the external shape of the connector, but it is still pretty easy to tell the correct orientation just by looking at the port. Also, in the 1990s the concept of \"hot-swapping\" was still brand new and rarely implemented; for the most part, plugging and unplugging devices had to happen when the computer was turned off. While a big complaint about USB today is that you can't see which way the port is oriented when you're reaching behind your monitor, that simply wasn't an issue back then. Even though USB *could* be hot-swapped there were no USB data storage devices yet (apart from semi-permanent things like Zip drives and CD burners), so most things would be set up once when you could see the ports and then left plugged in forever. And for the rare times you *would* need to hot-swap a USB cable the one or two ports on the front of a computer were enough. 
It wasn't until years later, with the proliferation of USB storage devices, digital cameras, media devices, and especially USB 2.0 that the need to constantly access and use the USB ports became a big deal. And, finally, USB-C for the most part solves a problem that doesn't really exist. While it is definitely more convenient to not have to worry about orientation it's not like that is a \"must-have\" feature, and while it is definitely smaller which is a bonus for smartphones it's not any smaller than existing smartphone ports were and the need for small connectors is a very recent trend. Back in the mid-1990s when USB was being developed even a full-size USB port was far smaller than any of the legacy ports it was replacing, and even mobile devices were large enough to fit a full-size port so there was no need to make it smaller." ], "score": [ 4108, 1864, 530, 307, 222, 132, 44, 41, 35, 32, 21, 8, 7, 6, 5, 5, 3, 3, 3, 3, 3, 3, 3 ], "text_urls": [ [], [], [], [], [], [], [], [], [ "https://images-na.ssl-images-amazon.com/images/I/41htM9RiX%2BL._SY450_.jpg" ], [], [], [], [], [], [], [ "http://www.qacqoc.com/usb-type-c-vs-usb-3-0-whats-difference/" ], [ "https://en.wikipedia.org/wiki/Nokia_8110" ], [], [ "http://www.belkin.com/us/Resource-Center/USB-C/USB-C-counterfeits/" ], [], [ "https://www.reclaimerlabs.com/blog/2017/1/12/usb-c-for-engineers-part-2" ], [], [] ] }
[ "url" ]
[ "url" ]
8bk0m9
Why do different cables cause phones to be charged drastically faster or slower, even when they're plugged into the same socket?
Technology
explainlikeimfive
{ "a_id": [ "dx7ba9k" ], "text": [ "Further to add to other post, the amperes (amps) tell you how many electrons in coulomb packets are being drawn per given time period. The socket is like a water tower. You can have different sized funnels or piping/tubes connected to your output. The size of the tower and amount of water (potential energy/voltage) remains the same. The size of the tubing (amps) determine how many packets can go at once. This is of course also determined by the material, the quantity, quality and software design." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8bkde8
How is it that my phone can record in HEVC (h.265) in real-time but my desktop takes hours to convert videos to HEVC?
Is there some qualitative difference between videos exported on my desktop vs those recorded by my phone? Is this because of some kind of hardware acceleration? iPhone 7 vs i5-7300HQ, if it matters.
Technology
explainlikeimfive
{ "a_id": [ "dx7iclx" ], "text": [ "Your phone has a specialized processor just for encoding video. While this is possible on the computer, it requires a combination of hardware and software that you may not have, so most programs just default to using your normal CPU which is slower." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8blk8b
Friction stir welding
So anyways, I saw a gif about friction stir welding and the comments weren't clear about what was happening. A lot of answers like MAGNETS! and MAGIC! And... that doesn't do it for me.
Technology
explainlikeimfive
{ "a_id": [ "dx7y12y" ], "text": [ "There are no magnets involved. A special tool with 2 main components, a pin that inserts into the material and a shoulder that rides on top of the material, is in a spindle. This spindle rotates the tool at an appropriate speed and applies sufficient force down onto the material. This generates heat due to the friction between the shoulder of the tool and the material based on the force pushing down and the speed of the tool. This heat is enough to make the material soft enough to be “stirred” by the pin portion of the tool, but not hot enough to actually melt the material (which is why it’s a solid state welding process). The design of the pin portion of the tool is typically very important, as it helps to move the softened material from in front of the tool to behind the tool where the 2 separate pieces are now intermixed and also almost forged together due to the high forces applied downward from the shoulder of the tool. The gif shows a double sided tool so there are 2 shoulders, but it’s the same concept." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8blrjo
Why does "game mode" on TV and monitors look worse and other settings?
Technology
explainlikeimfive
{ "a_id": [ "dx7rc21", "dx7q91b", "dx7vt6k", "dx7zc6z", "dx7vkk1" ], "text": [ "Processing video to make it look better takes time. This increases the delay between when the signal comes into the TV and when it gets displayed. If you're watching a TV station or a movie, all the TV needs to do is delay the sound by the same amount. You're not going to care about a fraction of a second delay. But if you're playing a game, you make some input via a controller and expect the see the results of that input immediately. A fraction of a second delay can be very annoying. So, game mode skips the processing, reducing the delay but making the quality worse.", "The biggest thing game mode does is turn of the chip that processes the video to make it smoother and have less noise on normal TV viewing or movies. This can have a huge response on the TV response time at the expense of, in some cases like your describing, the *smoothness* of the images.", "I work in CG/production, and personally I *only* run my TVs in game mode. Game mode puts the image right through to the screen. Exactly as it was intended when it was painstakingly made. Boom. There it is. Then come along the people who make TVs. \"Oh, you know what would make video better? Cranking the saturation up to 11 because nuance is stupid, adding tons of extra fake frames to make the motion weirdly smooth because obviously more frames is always better, and lets also make the brightness levels wobble around like a drunken sorrority girl dancing on a slick bartop, because that's *dynamic.*\" I hate all that shit. So much. It makes even the most carefully crafted work look like shitty soap operas directed by Michael Bay. But yeah. All that shitty stuff takes time to process, which means shitty lag for your games.", "I purchased a Sony Bravia 4K TV to go with my new ps4 pro, and I have been looking for the answer for this, I feel like it looks worse when I switch it to game mode, and I question if I'm looking at a sharper image than a regular ps4 image all the time. I haven't gotten one of those moments where people go \"look how much better this looks on the ps4 pro!\".", "If youve played many computer games, you probably have seen the anti-aliasing option in the graphics menu. Higher anti-aliasing looks nicer, but requires a faster processor. If you have a slower one, you can turn it down or off. The game won't look as nice, but it'll run better. The \"game mode\" is the same idea. Turning it on reduces the quality of the display, but improves performance. Note that it doesn't do the same thing as anti-aliasing, i was just using that to demonstrate the idea with something most people would be familiar with." ], "score": [ 162, 41, 23, 4, 4 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
8bnaxk
How does light transfer data in optical fibers?
Technology
explainlikeimfive
{ "a_id": [ "dx851hd" ], "text": [ "When data is transmitted over a wire, the wire is turned on and off very quickly. Thousands of times per second, it checks if the wire is currently on. If the signal is off, that is interpreted as a 0. If it's on, that's a 1. Light travels through fiber optic cables. It's basically done the same way, but instead of electricity it uses flashes of light." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8bo843
Why do some websites not work without www.?
Technology
explainlikeimfive
{ "a_id": [ "dx8clp7" ], "text": [ "www. Is just a sub domain to your main domain. So while Google has URL_0 , they can also have www. URL_0 , maps. URL_0 or anything. URL_0 . The problem is in the earlier days, people would stick www in front of their domain to represent their public World Wide Web presence. It because standard practice even though it was technically unnecessary. These days, most websites skip the www and go for the cleaner, shorter, non-www website url and just redirect www requests to their non-www website. Some websites do the redirect wrong or simply forget. And that is how you end up with websites that work with only www or non-www. Hope that explains it enough. Still working on my coffee here :-)" ], "score": [ 10 ], "text_urls": [ [ "google.com", "maps.google.com", "www.google.com", "anything.google.com" ] ] }
[ "url" ]
[ "url" ]
8bogsm
Why does a fiber optic internet connection require its own power supply?
My new fiber optic box needs to be plugged into the wall, as well as a modem. Just wondering if this is inherent to fiber optics in some way, or just the box the internet company uses.
Technology
explainlikeimfive
{ "a_id": [ "dx8dpmq" ], "text": [ "The modem needs power to convert the fiber carrier signal. Just like the old modems of the past. There is no power in the fiber signal just light" ], "score": [ 12 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8bp68i
Why do different countries have different electricity plugs? Why not have a universal one like a USB?
Technology
explainlikeimfive
{ "a_id": [ "dx8ifot", "dx8iudr" ], "text": [ "Not all countries use the same voltage. Different shaped outlets can prevent you from frying whatever you’re plugging in", "Because when the electrical socket standards were adapted, international travel was a lot less common than it is now, and carrying anything that needed electricity with you was even less common. So, there was no need for any sort of standardization. Now that it is a lot more common (both travel and traveling with electronics) it's too late to expect entire countries to re-do their established infrastructures" ], "score": [ 9, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8brwnu
If you grew crops inside with lights on 24/7 could they grow much faster than outside?
Put another way, is the crop growth rate limited by the number of hours of sunlight? Or are there other limiting factors that make that a moot point? I've been having this notion recently that you could totally automate farming certain crops by doing it in a large building. You could have each individual crop in its own container, with temperature, gas levels, soil nutrient levels, moisture, etc. all monitored and adjusted as needed. Also cameras could be used to know when to harvest the plant. One of the biggest advantages of having it all done this way could be faster growth by having 24/7 lighting, but I dunno if that is true or not. Some more potential advantages: -You could grow any crop anywhere because of tightly controlled temperature/gas levels -total automation and the controlled environment could theoretically simplify certain aspects. Farming would be less guesswork, and more about optimization. Then you would just have to worry about supplying water, power, nutrient-dense soil, insecticide maybe, and occasional maintenance, and you would get crops as output. -storms/wind/etc. would theoretically be less of a problem because the crops would be inside -in theory bugs could be more easily addressed because each crop would be contained in its own container I assume that there are probably economic and technical reasons why this is not already done on a large scale, but bonus points if you can also explain specific reasons why farming is not done this way.
Technology
explainlikeimfive
{ "a_id": [ "dx94io1", "dx94jin" ], "text": [ "lots of marijuana growers already take advantage of this, keeping their plants in a grow cycle until theyre ready to flip the switch into a flowering stage. they use different variations of light to simulate a summer growing season, later changing the light to simulate a flowering season. im sure there is more to it, but hopefully this is informative and still accurate.", "It really depends on the crop, but yes is the short answer. By carefully controlling the elements that facilitate growth, you can maximize yields for any given plant species. The reason we don't do this is that, most often, it isn't economical to do so. As an aside, I remember reading in _The Botany of Desire_ that marijuana plants actually thrive with excessive sunlight like can be provided in growhouses. They actually greatly benefit from the way they are (were) forced to be grown prior to legalization." ], "score": [ 7, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8bs61q
How did people get to work and wake up on time for work in eras when time wasn’t kept by most individuals?
Technology
explainlikeimfive
{ "a_id": [ "dx96gdi" ], "text": [ "Most jobs didn't really care about time - you got up when the sun came up and went to bed when the sun went down. There wasn't much of a concept of \"meet me at 11:30\" or \"be at work by 9\" back then - people worked when they worked. There was a brief period when clocks were not as common but people did need to be up at specific times. In that period, there were \"knocker-uppers\" that would come by and tap on your window at a specific time of day to wake you up." ], "score": [ 10 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8bsiy6
What happens when a PC bluescreens?
Technology
explainlikeimfive
{ "a_id": [ "dx99ipm", "dx9grlk" ], "text": [ "A piece of important software or hardware failed in a way that the operating system couldn't compensate for and continue normal operations, so to protect the integrity of the data as much as it can, it fails into a state that gives an error message, captures a log, backs up as much data as possible, and restarts the system if it can. Your operating system encounters thousands of errors running normally that it can step over or otherwise handle without you knowing; bluescreens are the ones so bad that it can't handle it at all.", "A BSOD (Blue Screen Of Death) means a critical error happened in the Windows OS. An error so critical that the OS cannot function anymore. So critical that the very core (kernel) of the OS cannot function anymore. In such cases the only thing that the OS can do is show you an error message and force you to reboot the machine. Such critical errors aren’t specific to Windows: for example, Linux can literally `panic` because of some critical error; this behavior is called “kernel panic”. As the name suggests, the panic is caused by a fundamental error, like errors in drivers or failure to access a memory address. BTW, a kernel panic doesn’t necessarily result in a blue screen; instead, it most commonly is a black screen with the error message written in a white font, like in the terminal. Operating systems are actually dealing with a lot of errors in a daily basis. They even keep huge logs of all of them. For example, macOS let’s you read all the available logs with a built-in application, and there you can see that errors have various degrees of danger to your data or the security of your Mac, etc." ], "score": [ 17, 8 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
8bssz7
How can two items with the same name be in the Recycle Bin at the same time?
Technology
explainlikeimfive
{ "a_id": [ "dx9byoo" ], "text": [ "The recycling bin isn't a regular folder. It also contains a reference to the original location (because you can restore it) and presumably uses a unique identifier other than the filename, so you can even have two items from the same location with the same name in the recycling bin. The same way your school/work's student/employee database won't get confused if you have two people called John Doe, it creates a unique identifier for every new entry so it can internally keep track of things even if no human ever sees that unique identifier." ], "score": [ 21 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
8bsyfg
Is the Fourier transform used in Wi-Fi (wireless transmission) in any way?
Technology
explainlikeimfive
{ "a_id": [ "dx9evzv" ], "text": [ "The CSIRO fast [Fourier transformation]( URL_0 ) processor was developed for radioastronomy but proved to be the key to making fast Wi-Fi possible." ], "score": [ 4 ], "text_urls": [ [ "https://imgur.com/fKzfVjk" ] ] }
[ "url" ]
[ "url" ]