Dataset columns:
q_id: stringlengths 6-6
title: stringlengths 4-294
selftext: stringlengths 0-2.48k
category: stringclasses (1 value)
subreddit: stringclasses (1 value)
answers: dict
title_urls: sequencelengths 1-1
selftext_urls: sequencelengths 1-1
assdeq
How does the video game world work, and how are games made?
Technology
explainlikeimfive
{ "a_id": [ "egwdohe" ], "text": [ "Think of a game like building a house. There isn’t one person, or one group of people, who do the work. There’s the person who is paying the money to have it built. They’ll hire an architect to design it, surveyors to work out the material needs, builders to lay the bricks, electricians to wire the house, carpenters to do the woodwork, glaziers to do the windows, plumbers to do the pipes etc. None of these people generally work for the same company - you may get some crossover when it comes to decoration etc, as you may have painters and plasterers from the same company - but they all get hired, or more accurately contracted, to do their specialist area. In the same way, a company like Activision will give a project to one of their sub companies based on their areas of expertise. Those then get broken into sub projects, and these may be contracted out to other companies to work on. There’s a lot goes into a game. To keep it all running smoothly, companies have project managers on their staff. Their role is to coordinate and brief the individuals parts of the overall project" ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
astauz
If we block robo-caller and spam callers that show spoofed phone numbers, are we blocking people's real phone numbers that we might actually want to talk to?
Technology
explainlikeimfive
{ "a_id": [ "egwix44", "egwirlx", "egwjmyj", "egwjq8n", "egwkqck" ], "text": [ "Yes, which is why the robocallers are using local area and prefix codes for the spoofed numbers they're \"calling\" from. Sadly, the best course of action is to mute the call and let it go to voicemail and pray that some day the telecoms can get their collective heads out their asses and put an end to this.", "Yes. And this question is going to get deleted because it is a yes/no question. This is the harsh reality of spoofing: we the consumers can do nothing about this. Only the telcos can stop this at the source by scouring or improving the caller ID tags, and they aren’t doing it because they don’t have an incentive to reduce call volume. Similar thing could be said about junk mail and the postal service.", "There are valid reasons for this substitution of \"Outbound Caller ID.\" For example, replacing it with the company's toll-free (e.g. 800) number or the Direct Inward Dial number of the Sales Department rather than the particular number of the outbound call center phone. That said, one thing which has been proposed is that a number substitution can only be with a number the caller actually owns. Back when I was installing phone systems, I tested this by putting our company's phone number as the outbound ID for the phone in the phone / data room. When I called our office, they showed a call coming in from themselves. (Confused!) Later on I tried it again and that number was rejected, since our customer didn't \"own\" our number. But this depends on the dial tone provider, who, as has been noted, has little incentive to enforce this. I'd be interested in any action, which I, as a private phone subscriber, could reasonable take.", "If you’re interested in hearing more about these spam calls, there was just a great podcast about it. The podcast is Reply All and the episode is Robocall. It goes into how they get your information, and it’s scary. Link: URL_0", "I had a pain in the ass ex-client who used to call my cell by spoofing it to look like my office number. I finally told him that I was going to report him to the police for harassment and he stopped." ], "score": [ 21, 10, 7, 3, 3 ], "text_urls": [ [], [], [], [ "https://itunes.apple.com/us/podcast/reply-all/id941907967?mt=2&i=1000428854333" ], [] ] }
[ "url" ]
[ "url" ]
asxdxy
How do motor boat engines not get water logged?
Any time that there's moving parts, especially in a circular motion, wouldn't the water find a way in? Or are the moving parts so disconnected from the engine that it just doesn't reach it?
Technology
explainlikeimfive
{ "a_id": [ "egxby7j", "egxcmt2", "egysn66" ], "text": [ "Same reason your car motor doesn't get water logged when you drive in rain or puddles. The motor is not in the water. It's spins a shaft which is also attached to a propeller, that goes into the water.", "The actual motor is well above the waterline, usually at least 1-2 feet. Like on this outboard boat for example URL_0 The actual motor part is in that black plastic housing, there's possibly a gear box and shaft that goes down to the actual propeller. In theory water could get into the housing for the shaft, proper maintenance of the shaft seals, and a lot of marine rated grease will protect them from water damage. The grease lubricates any moving parts, but also fills any gaps with grease where water could enter.", "On large sailboats with internal motors there is a thing called a stuffing box. Between the water and the engine is a box like compartment that contains a grease infused material that keeps water out yet is slippery enough to allow the shaft to spin. The compartment on my boat is six or eight inches deep, and it is stuffed tightly so there is basically no room for water to enter. One interesting aspect of this is that if there isn't a tiny bit of water coming through, like a drip a minute, the grease can dry out. So the box is made so it can be tightened or loosened. Eventually you run out of room to tighten and you have to pull the boat out of the water and replace the material." ], "score": [ 7, 4, 4 ], "text_urls": [ [], [ "https://www.powerandmotoryacht.com/.image/t_share/MTUwMTAzMDIyMTA1OTk1MDM0/brackets-on-a-stamas-center-console-.jpg" ], [] ] }
[ "url" ]
[ "url" ]
at0yjd
How can phones have 8 GB of RAM in such a small form factor while a PC needs these huge 4 GB RAM sticks?
Technology
explainlikeimfive
{ "a_id": [ "egxyjzs", "egxyssl", "egy9j3f", "egxzcvo", "egxymll", "egy9pmv", "egy6iwg", "egydeg7", "egyd4so", "egymptf", "egzjavj" ], "text": [ "This has to do with industry standards. The RAM slots on PC's are almost always the same size. A 64GB RAM stick is the same size as early 256MB RAM sticks. The size does not really represent the power/storage anymore. It's just a matter of standardization. Addition: RAM chips in phones are often integrated and can't be swapped. Cell phone RAM is also a lot slower. It consumes less power, which means it doesn't reach the same temperatures as the far more powerful equivalents in desktop computers. In larger chips, it is easier to distribute heat than in smaller chips, so a larger stick is beneficial for desktop RAM. And well... Desktops simply have the room for it.", "Space isn't a premium in desktop PC's, so the DRAM on PC memory can be made larger and therefore cheaper. Space absolutely is a premium on mobile phones, so the DRAM has to be made as small as possible - which is expensive.", "In addition to what others have said about existing standards: * The RAM stick standard size is based off RAM sticks from well over a decade ago. Desktop RAM sticks have larger contacts than laptop RAM for more stable contacts (see ref pic halfway down on [this StackOverflow article]( URL_0 )). Part of the reason is the NUMBER of contacts. The DDR4 standard, for example, defines a certain number of pins, the data path 'width' per cycle. Fewer pin contacts would require a different standard but ALSO be much harder to reach the same speeds with. * There is the SO-DIMM standard. However, it's not MUCH smaller (about half the length) and requires far more compact traces, which are more failure-prone. It's used heavily in laptops with upgradable RAM, but even this is too big for a lot of devices. In laptops, that stick is laying down - which takes a LOT of valuable motherboard space - while also still adding a good .5\" thickness. Apple is a good example of this - their laptops haven't had upgradable RAM in years, and part of the reason is that they try to minimize thickness (to an unhealthy degree). * We don't use SO-DIMM in desktops because the motherboard space isn't as much of a premium, and because we tend to want to maximize speed instead. RAM can have a number of different compromises, but it can't be fast, cheap AND space efficient. Pick 2. :) * Many laptops (and even some Small-Form Factor desktops) often have RAM chips directly soldiered-on the motherboard to deal with this very issue. However, this has substantial downsides - more custom engineering, higher manufacturing cost, higher repair/replacement costs. * Many phones use higher density modules (RAM chips) to help save space, and package-on-package is being used to save even more. For example, the Samsung Galaxy S9+ has 6GB, and [in this teardown on iFixit]( URL_3 ), it's embedded on the CPU in a second layer. This makes sense - the CPU and GPU are in the single SOC, and they are really the only things that need direct RAM access, so all of those connections skip the 'motherboard' entirely and are just in the SOC package. Meanwhile, if you [look at the part number for that RAM chip on Samsung's site]( URL_1 ), it's listed as 48Gb (RAM chips, unlike sticks, are listed in Gigabits, not Gigabytes). That's VERY high density. That high density is also VERY expensive. 
Compare that with [this cheap stick of 4GB DDR4 desktop RAM]( URL_2 ); it's not worried about space savings (in fact, only half of one side of the PCB is used!), and is instead using 4x 8Gb chips to reach 4GB. Those low density chips are much less expensive, and PCB traces map them to the full DDR4 standard (with the help of a very tiny chip in the middle of the PCB). & #x200B; So TL;DR: The industry never came up with a smaller RAM board standard than DIMM/SO-DIMM because you need a large number of contacts for RAM to be fast. Very high density RAM is MUCH more expensive, and since space is not at a premium and there is the option to put it directly on the motherboard or in a package-on-package solution, there isn't enough demand to justify the work on one. Update: Added SO-DIMM to the explanation.", "For mobile phones, they use very lower powered memory chips, running at a lower clock speed, lower voltage and often lower bus width as well, that is how much data can be transmitted at the same time. This allows them to put the chips into smaller packages without risk of overheating, possibly even stack multiple chips on top of each other into a single package like they do with memory cards. But the 4 GB modules aren't anywhere near their maximum capacity. You can buy 16, 32, 64 and in the near future even 128 GB modules, which all fit into the same form factor.", "RAM chips on a standard PCB with the DIMM standard make PC RAM sticks. RAM chips crammed as close as possible on a custom PCB inside a phone make phone RAM.", "the ram in your phone costs waaaaaaaaaaaaaaaaaaaay more than the ram that pc takes, the smaller the device the more it costs to minatuarize the hardware. ie parts for a laptop are more $$$ and phones even more so.", "I've seen 1MB DIMMs that are recognizable as such, I think desktop parts have more to do with a standard form factor than the limits of technology. Whenever you see RAM soldered down, it's smaller because it's designed with that specific device in mind. So I don't think the desktop \"needs\" the extra space, it was just never worth building a new shape for a RAM interface. A good analogy would be comparing 2.5 inch SATA SSDs to an M.2 SATA SSD. The technology is similar, just a different shape. In some cases, bigger may be better just because it fits where you want it to fit. Space isn't at as much of a premium in a desktop, if it ain't broke don't fix it", "As I've seen, many people have said that it's because of the standard of desktop RAM. This is entirely correct, but I'd just like to expand on that. I've seen the same size RAM sticks carry anywhere between 512mb (and probably smaller) up to 64gb. All of these were the same size module, but the chips on the module are pretty spread out on something like an 8gb. Those chips are basically put very close together in phones to make the same amount of RAM for into such a small form factor. Phone manufacturers almost always opt to buy more condensed chips that have less speed as well, since it's not necessarily as important on a smartphone with a less powerful CPU. All of this combined helps to pack in more and more RAM into a smaller and smaller package.", "4 gigs of ram in a phone will underperform 4 gigs of ram in a pc", "Memory chips for mobile usually have internally stack of chips. This makes it harder from heat, power and signal integrity points of view. This is why the modules in phones run at much lower frequency. 
PC's can cool them easier and run wider buses (64 bit/72 bit), but in mobile they often only use 16 bit for data bus. This is limitation due to soc design and board space for tracks. This makes them significantly slower.", "Everyone else is super right, but I just wanted to add because PCs are also around for a while longer, as you upgrade them and bring new life back. If those ram sticks didnt have to come out ever, they could have made them smaller" ], "score": [ 8626, 1099, 150, 60, 19, 17, 8, 8, 6, 5, 3 ], "text_urls": [ [], [], [ "https://superuser.com/questions/18995/how-can-i-tell-what-ram-will-fit-my-computer", "https://www.samsung.com/semiconductor/dram/lpddr4x/", "https://www.amazon.com/Patriot-Signature-Module-PC4-19200-PSD44G240081/dp/B016A3B09U/", "https://www.ifixit.com/Teardown/Samsung+Galaxy+S9++Teardown/104308" ], [], [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
at1eit
Why can't games run nicely on MacOS?
Every time I use MacOS, I am way more productive than when I am on Windows. However, when I try to play any game more demanding than Minecraft, MacOS slows to a crawl. Why does MacOS seem so much better for everything considered productivity but sucks at gaming?
Technology
explainlikeimfive
{ "a_id": [ "egy1byc", "egy0en8" ], "text": [ "Most of what people say here is wrong. MacOS uses Metal(Apple specific OpenGL another, open sourced Api) as a graphic API and Windows DirectX. Most games use DirectX and wont even be able to run on MacOS. Even if the game would be able to use OpenGL or Metal, the rest of the software isn‘t optimized for games. Optimization make a very very big part of how well a game runs. It depends mostly on developers. They don’t optimize it well because it is still very costly and not rewarding as very few people game on MacOS. Hardware is often weak, but it would perform better on Windows in games at least. MacOS cannot use DirectX because it is patenten by Microsoft. We have to hope for developers giving Mac users more attention, sadly...", "I have this same problem. I think that game developers prioritise windows when it comes to optimising their code (Because it's the most popular and profitable way to go about it) and they just port it over to Mac." ], "score": [ 15, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
at1xki
What is HTTPS, and what data can our ISP theoretically and realistically see?
Technology
explainlikeimfive
{ "a_id": [ "egyd6k8", "egy40sl" ], "text": [ "The ISP will see you are visiting URL_0 , but HTTPS will hide the fact that you are searching for hardcore midget porn. Edit: On the other hand, if you were to use HTTP (with no S) your ISP will be able to map your dirty mind.", "It is a secure protocol that encrypts data before sending or receiving it. This means that the ISP can see which requests you make, so which sites you visit for example, but not the content of the site (unless they access the website themselves as well). If you are accessing a private website, they can see that you are on that website, but they can't see what you are doing on there, as long as the data is sent via HTTPS." ], "score": [ 8, 3 ], "text_urls": [ [ "xvideos.com" ], [] ] }
[ "url" ]
[ "url" ]
at5nir
How does my washing machine turn my clothes inside out from simple agitation?
Technology
explainlikeimfive
{ "a_id": [ "egyva7y", "egz2o5f" ], "text": [ "Washing machines rotate really quickly. Chances are it will cling onto something or get stuck to something causing it to inverg", "There’s a lot of friction going on during a wash, specially if there are a lot of clothes inside, the water also helps by making them heavier and more flexible" ], "score": [ 4, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
at6nqq
How does Captcha work? How do ‘robots’ not recognize the letters/pictures?
Technology
explainlikeimfive
{ "a_id": [ "egyznoe", "egyzoe1" ], "text": [ "The human visual system is remarkably good at understanding what something is even when really distorted, computers are not so much (at least now). Most recognition of letters/objects by a computer are done via neural networks, and requires a large training set of examples of objects it is trying to classify. Captchas try to distort the characters randomly each time and in various ways, or use a huge set of real images, which means a neural network trying to classify them would not have likely seen enough examples very close to it to successfully classify the characters/images.", "Captcha cares about how you answer more than what you answer. If you don't move your mouse in a completely straight line, or accidentally double click, or make other errors, you're probably a human. And for now, most AI doesn't have the capability to suss out which pictures have a truck or read non-standard writing. Most bots trying to attack a site don't have machine learning, so the captcha is pretty good at protecting against them." ], "score": [ 8, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
at7taq
In terms of internet speeds, why are download and upload speeds so different?
Technology
explainlikeimfive
{ "a_id": [ "egz83he", "egzcl73", "egz8qos" ], "text": [ "The reason for the difference is that most people download more data than upload. When you visit a page, the page sends you a lot more data then you send it. On the flip side, the web server needs a stronger upload speed as it is sending files places instead of receiving them mostly. The biggest upload speed need for the average person is cloud storage of files, though video games require some upload bandwidth as well. There are plans that are equal, but it's a matter of saving money.", "Because, on a physical link with limited bandwidth, you have to give up one to get the other. You are getting 5up/60down because, as a home user, you will consume much more data (down) than you will generate (up), and that split of the total capacity gives you the highest usability. A server operating on the same capacity link might use an even split (32/32 up/down) or even the opposite (60up/5down) because that would better fit the usage profile of an automated service providing lots of data based upon relatively simple user requests.", "Because they don't want you running a business ($$$$$) from a residential account ($-$$$). Download (many sites to home) is more important to home users (Streaming video / audio, web, tablet / mobile devices). Upload matters to business (home to potentially millions of customers). If you want the ability to serve the customers you need more upload and a more expensive business account." ], "score": [ 18, 6, 5 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
at9vcv
When charging anything, how does it know when the battery is full and it's time to stop charging? Also, what happens when you keep something plugged in at full charge?
Technology
explainlikeimfive
{ "a_id": [ "egzw85y", "egzuo1u" ], "text": [ "The voltage of a battery falls as it drains. So, a 1.5 volt alkaline batter really means a 1.7 volt battery when full, 1.5 volts when somewhere in the middle, and 1.2v when it's almost empty. The full vs empty voltage is different for every different type of battery chemistry. So, you can determine how full a battery is by simply measuring its voltage and looking it up on a battery voltage curve for that type of battery. This is trivial for most electronics to do. Likewise, a battery will break (sometimes violently) if it is over-charged. So, basically, either the battery charger stops charging the battery when it detects the \"battery is full\" voltage, or it simply supplies the \"battery is full\" voltage blindly to the battery, which is ok, since.... & #x200B; (separate explaination here) voltage is like air pressure, and the battery is like a container that holds pressurized air. If you hook up the container to a line of air that gives exactly 5 pressure units, the container will fill up until it reaches exactly 5 pressure units, and stop there, because... well, it's all evened out. no difference in pressure, no flow of air. Same thing with electricity.", "It depends on the battery type. For lithium cells, the charger usually turns off when the charge current drops below some threshold, typically around 3 % of rated capacity or if the current drops to a steady value. For longest lifetime, it is advantageous to terminate charge on lithium cells before they reach their full capacity. They can be destroyed by over charging. Nickel based cells use either temperature sensing or voltage dip sensing to detect 100% charge. Lead acid batteries are similar to lithium-ion, but can tolerate brief over charge. What happens if you keep something on the charger for too long depends on the type of charger. Phones/laptops/etc are designed to safely terminate charge. A car charger will overcharge unless it has a timer or a \"maintainer mode\". Cordless drill fast chargers for NiCd or Nimh will not over charge." ], "score": [ 12, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
ata0xg
What is a Turing machine?
And what does it mean that a process is compliant with it?
Technology
explainlikeimfive
{ "a_id": [ "egzpq44", "egzwud4" ], "text": [ "A Turing machine is an abstract idea (based on a real machine made by Alan Turing). The machine would read a symbol that had been printed out, and then make another symbol on another piece of paper, following a specific set of rules. The original Turing machine would begin in \"state 0\", and if it saw a 0, it would write a 1. If it saw a 1, it would change to \"state 17\". If it saw a 0 in \"state 17\", it would write a 1 and change to \"state 6\". By doing this, the machine could continue to run through an entire set of characters and find patterns in them, without the need for human interaction. A human would then go over the Turing data to determine if anything useful had been found. A \"Turing complete\" or \"Turing compliant\" program is a program that can be used to express every possible task that a computer can perform, because even if it doesn't understand a fancy new language, it can always translate it back to an older, known language that it can then perform (so long as you input the instructions on how to do so).", "A turing machine is an abstract idea of an utterly minimal general-purpose computer - the simplest possible device that can be programmed for arbitrary tasks. Saying that something that is *Turing-complete* means that it has the full set of operations available to replicate that functionality. A little oversimplified, but if it can read symbols from a list, conditionally write symbols to that list and jump to a specific point in the list to read/write, then you can translate *any possible program* to run on it (assuming you have a big enough list to work with). That translation might not be fast or efficient (and usually *extremely* inefficient), but having that minimal set of operations means you can *provably* guarantee that you can get there eventually. People are often finding surprisingly simple programs or very specific-purpose languages to be turing-complete; nobody expected or intended them to be useful outside of one specific thing, but it turns out that technically, you could use them for literally any computing task." ], "score": [ 5, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
atb0t7
Why are updates to video games so large, sometimes larger than the game itself?
Technology
explainlikeimfive
{ "a_id": [ "egzwegj" ], "text": [ "Code is small, because it is instructions for your computer to make something (such as a graphic). However, It's much faster to pre-make and store the graphic first, so your computer doesn't need to create it on-the-fly. But since we don't what your computer will need.. Well, We'll just pre-make EVERYTHING, so it'll be there when you need it. Therefore, much of the game files are pre-computed stuff that your computer just looks up in a table or refers to, instead of generating from scratch when its needed." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
atcpz1
What is a server room like this used for? Any examples you could give?
[What are server rooms like this used for?]( URL_0 ) What is the computing power of this room? Or are these not computers? Are they interconnected? I have always seen rooms like this but never knew exactly what I was looking at or what they are used for.
Technology
explainlikeimfive
{ "a_id": [ "eh0agml", "eh0am3n", "eh0bfuq", "eh0at0r", "eh0c7jj" ], "text": [ "Mainly those are used for data storage. Any website is hosted in a room like that, and any online game has a server room like that. They are all networked together, yet can act independently. They don't look that sci fi usually, but they're kept dark to help with cooling unless needed. & #x200B; They usually host a ton of databases and other back end information that doesn't, shouldn't or can't be stored on a client's PC.", "These are servers. This is a data center. The servers you see are most likely connected, often referred to as “clustered” so that they can complete processes and computations together. They can also store data, serve up web content, and just about anything else you can think of.", "This is coming from a university student, so I don't quite have all the answers, but here's my best. Desktop computers have a limited amount of space available to store things. Also, it's relatively easy to take that one desktop computer offline. So businesses invest in larger computers called servers. Each individual shelf in rooms like that can be a dedicated server, but that's not always the case. So what's the difference between a server and a desktop? Inherently, computers have some jobs they can be built to perform better at. Some processing units are built to do some small things very fast, they can be built to do a lot of things somewhat more slowly. Server are generally powered by the later processing units. This is really convenient for small data processing or running a large database website for instance. In addition to servers having different processors, they also have more room for hard drives, or the component that stores data. Remember, they have an entire shelf just for one server, that's a lot of space to hold drives. I've seen some servers with room for 48 drives, each drive could be 10tb with a reasonably sized company. So now let's sort of delve into the idea of the whole room. Okay, so say you're a business owner. You own Google, or Facebook, or Twitter, or maybe all three if you're really rich. Everyone of your users has an account, and sometimes you even offer to store their data for them in \"the cloud\". Not only do you need space for this, but you also need to be capable of handling their request to see the data. Not just their request, but everyone's. That's one or several portions of a processor dedicated to accessing that information for just that one user. That's not even accounting for redundancy. For every so many hard drives there exists a redundant hard drive, or maybe multiple redundant hard drives, depending on the configuration. All of this is so that if one hard drive with Johnny Appleseed's family photos inexplicably malfunctions, Google still has his files available for him. But, now you need servers to handle processing these redundancies, and once the operation gets big enough, you need servers to handle communicating which drives need to be replaced due to failure. All of this is pretty lengthy and tiring to delve too far into, but this is the basics of how server rooms work.", "Facebook, Google, Amazon, cloud computing. Any computing suppliers who sell capacity to clients.", "Servers are a type of computer that does exactly what it sounds like, serves. Things like YouTube videos, web pages, remote access video surveillance, client information, or anything else you access online is being hosted on a server. 
Because it's only purpose is to serve, servers are usually left unattended, they don't need all the peripherals that a regular computer does, like monitors, computer mouses, keyboards etc. It could be a single laptop in someone's house hosting a personal website, or a massive server farm in india hosting Netflix videos, once its configured, all it needs is access to the internet to operate. A room like that is a data center full of networked servers, connected to each other and to the internet. could be hosting virtually anything, NSA surveillance feeds, pornhub videos, wordpress sites, archives from the library of Congress, bank account information, the matrix, who knows." ], "score": [ 20, 7, 4, 3, 3 ], "text_urls": [ [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
atd1bw
What waves do GPS signals use and do they have to have direct line of sight as opposed to bouncing off walls and can they penetrate thin plastics?
Technology
explainlikeimfive
{ "a_id": [ "eh0bw0v" ], "text": [ "Gps signals are sent by radio wave at 1575.42mhz and 1227.60mhz It's literally your car radio tuned up higher" ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
ateb15
How are images transmitted in tv?
I'm having a hard time understanding how a frequency is transformed into images
Technology
explainlikeimfive
{ "a_id": [ "eh0k7r6", "eh0lldz" ], "text": [ "It's a series of tubes. Cable is produced through coaxial cables that use electrical signals to send a message that is the translated by your television to make a picture and sound that is acceptable to human perception. Antenna use radio waves to send a similar yet more limited signal through radio waves that does the same thing. HDMI uses encoding to send the exact same signal as coaxial cables but in a much finer packet that is received in a better form by your television.", "Let's look at an LED TV. There are three LED colors, red, green, and blue. For each pixle there are these three LEDs. A camera is looking at a bowl of fruit that has a red apple to the left. A green pear in the middle. And a blueberry in the right. The camera then sends the data as a pixle for pixle representation of the different colors. The left LEDs will have a 1 for the red and 0 for green and blue Right will have 1 for blue and 0 for red and green Middle will have 1 for green and 0 for red and blue There are other factors that high definition video compression uses to make the data packets smaller If you're more curious about the data packets being sent from camera to tv, Basically the binary 1s and 0s are sent much in the same way that internet data packets are sent. There are different protocols for different types of connections" ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
atglzu
How does Google remain so dominant in the search engine market?
The process of building a search engine isn't a secret, nor are the tools one could use to do it. So how does Google stay on top? At this point, is it mostly due to brand recognition? As an American, it's very common to hear Google used as a verb for searching things online ("Try Googling it", "I Googled it", etc.). But given how many resources and how much experience Alphabet has with search engines, is it also possible that they still have some edge on the competition via secret algorithms or the like?
Technology
explainlikeimfive
{ "a_id": [ "eh0vxhz", "eh1jd9v" ], "text": [ "At this time Google is the best at the task. The majority like me have tried other search engines and been disappointed. Maybe you should be asking why consumers choose Google.", "Sure, the basic mechanics of building a search engine are well understood - but most of the nuance and real tech is in the optimization. Thus while it’s easy enough to build a passable search engine, it’s hard to build a truly great one. In a nutshell, a ton of the intellectual property lies in the natural language processing (understanding slang, etc) and the ranking of results. Those are major area of research, and Google hires an absurd number of PHD’s to stay ahead there. Plus, the more data & users one has, the better the results get - machine learning algorithms incorporate user behavior to fine tune. The minimum amount of hardware required to crawl, index, and quickly return results on enough of the web is also prohibitive. Nevermind monetizing it... That creates a pretty big barrier to compete directly against Google. Only Microsoft really has the resources to try to go toe to toe at this point, and we’re all aware of how Bing stacks up. Thus most small startups instead opt to become search experts on a particular *domain*. Building a high quality site that searches a specific type of thing (be it movies, research papers, etc) is achievable - and they can expand out. But Google / MSFT / FB / etc end up buying or copying successful startups doing that stuff. Engines like DuckDuckGo try to compete not by superior search results, but by promises of the best privacy controls. It’s unclear if that’s understandable or valuable enough to the average Joe to switch." ], "score": [ 14, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
atijr3
Before digital cameras, did pictures have a resolution? If not, how was image quality measured?
I see some old images that are super blurry but others are very clear. What exactly is causing these differences?
Technology
explainlikeimfive
{ "a_id": [ "eh179d1", "eh1aas9", "eh19n5t" ], "text": [ "Blurry-ness is mostly a factor of focus, shutter speed + stability and depth of field. Photographic film did have a resolution in the sense that there were small crystals that detected and recorded the light, but I would imagine most of the blurry pictures were a result of the former, not the resolution, unless they were from the very beginnings of photography.", "The crystals were referred to as 'grains' . If you saide it was grainy it meant the film was of low quality. High grain film was more expensive. Over a certain level the grains were not noticeable so it was clear or sharp.", "The nearest equivalent to \"mega-pixels\" was ISO, which was effectively a measurement of how small the film grain was. Higher ISO meant the film responded to light faster but it was also had more grain. Blurry old photos were mostly caused by cheap optics or slow shutter speed and camera movement. Some older cameras used small format film which also made the quality poorer." ], "score": [ 9, 3, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
atik8b
What is the difference between Firmware and Software?
Technology
explainlikeimfive
{ "a_id": [ "eh17w5o" ], "text": [ "Firmware is a type of software. It's generally written to a memory chip on a piece of hardware, and controls how the hardware works. It's different from other types of software in that it usually isn't changed, it's integrated into that piece of hardware. Many devices have the capability to update the firmware, but it is done a lot less often than updates to other types of software, and if something goes wrong during a firmware update, it can render the device unusable." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
atixdi
How did they make the „holograms” performing on stage during the LoL World Championship opening ceremony (during the POP/STARS song)?
I’ve read they were made in AR but I don’t really understand. Could the people watching the performance live actually see these avatars? Were they perhaps made using motion capture or something similar?
Technology
explainlikeimfive
{ "a_id": [ "eh1bnrm" ], "text": [ "AR means the holograms were there for the recording, but not live. Basically they were added in video editing. AR means that instead of having a person sit there and track the shot etc, the software automatically placed them correctly. Imagine taking a picture with your phone, and then using the AR emojis to add bunny ears etc to the picture as you're using the camera. That is AR at work. The software automatically detects where to put the graphics, but its not there in real life, its just there \"in the video\"." ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
atm9sn
How do credit cards physically work? How is data stored and taken from it via chip/strip?
Technology
explainlikeimfive
{ "a_id": [ "eh1zlm9" ], "text": [ "The strip is simply a band of magnetic material which has sections with the magnetic pole flipped one way or another. It is only a way to easily read the number off the front of the card into a machine rather than typing it in manually. The chip is connected to by electrical contacts touching the large metal pads you see on the front of the card. The tiny breaks between the pads isolate the different connections. Communication with the chip allows an input (in part based on the specific time of the transaction) to be run through an algorithm on the chip and output a unique value which identifies the transaction as valid. Someone without the chip wouldn't be able to calculate the proper output to authorize a transaction, and since it is different for every input and time then simply copying a value from a previous transaction won't work either. It cuts down on fraud since you physically need the chip. Note that information about transactions or credit allowance isn't stored on the card at all. That stuff is managed by the credit company which is why credit processing requires communication via telephone or internet to clear credit transactions." ], "score": [ 12 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
atn07y
How does a space telescope judge distance?
Technology
explainlikeimfive
{ "a_id": [ "eh24pah" ], "text": [ "Parallax is the most common type for near-ish objects. Put something in front of you. Look at it with only one eye open, then the other. You notice how you can use those two slightly different images to judge the distance? That's ***basically*** how we do it for nearby objects too. We can look at a star from one point of earth orbit, then look at it again when the earth has moved. There are tons of different ways to measure cosmic distances however: URL_0 Redshifting is easily one of the coolest though." ], "score": [ 5 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Cosmic_distance_ladder" ] ] }
[ "url" ]
[ "url" ]
atn6d4
What's the difference between FAT32, NTFS and exFAT?
I know that FAT32 only supported 4GB files but I am not too sure how NTFS and exFAT differ.
Technology
explainlikeimfive
{ "a_id": [ "eh27u3s" ], "text": [ "Each one is a way of organizing data on disk. FAT, or File Allocation Table, has seen many variations over the years. FAT12, FAT16, FAT32, each one has been able to hold larger files and work with bigger disks. The exFAT format is another evolution of the system allowing bigger files and larger drives. As you mentioned, FAT32 had a limit of 4 GB file size and 2 TB disk size. The exFAT format has a limit of 16 EB files and 128 PB disk size. GB = gigabyte, TB = terabyte, PB = petabytes, EB = exabyte, each one is about 1000x the one before it. (Yes, it can support a large file size than disk size.) NTFS, a file system originally designed for Windows NT, but still used in newer systems. It has support for 16 EB files and 8 PB disk size. It also has many more security features than FAT. Probably the most critical of those security differences is called journaling, which helps protect against losing data if writing is interrupted. When there is a problem like power failure or disk getting yanked, an NTFS disk guarantees you will have either ALL the old file or ALL the new file. In contrast, FAT32 or exFAT can give a mixture of old data and new data, like writing half of your document or half of your image. NTFS also allows security rules based on the user, disk quotas per user, safe backup support, disk-wide compression and disk-wide encryption. You can still compress and encrypt individual files on exFAT, but it doesn't support disk-wide operations." ], "score": [ 14 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
atngmm
How is it that you don't drop your call when you're moving around and get switched to a different tower?
Technology
explainlikeimfive
{ "a_id": [ "eh27hgv", "eh27ia7" ], "text": [ "Most of the time your phone is within range of multiple towers. When it is getting close to the edge of the range of one tower the system will hand off the call to the next closest tower. Ideally this handoff is done before you lose connection to the old tower. This is called \"make before break\". The handoff has to be done in under 100 milliseconds. Any slower than that and it's fairly noticeable to the person on the phone.", "Because this is exactly what cellular technology was *specifically invented for.* Your phone detects that the signal is getting weak, so it looks for another stronger signal. Finding one, it sends a request to switch over to the other tower, and the carrier's computer agrees." ], "score": [ 11, 7 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
atnlu0
How does a breathalyzer detect blood alcohol content by blowing into them?
Technology
explainlikeimfive
{ "a_id": [ "eh28k5h", "eh28jh0", "eh28p21" ], "text": [ "When you have alcohol in your blood there will be some in your exhaled breath as well. The breathalyzer converts the ethanol in your breath into acetic acid and water. The byproduct of this reaction is a small amount of electricity. The breathalyzer measures how much electricity is produced and uses that to calculate how much alcohol was present in your breath.", "Every time you breathe, you'll have a small amount of alcohol molecules being turned to gas and transferred to your lungs, and it's those molecules that a breathalyser checks. The more alcohol in your blood, the more alcohol molecules get turned to gas and breathed out.", "Your blood leaks alcohol into your lungs, which evaporates as you breathe. The rate at which it does this depends on how drunk you are. The breathalyzer can measure the alcohol vapor concentration in air and determine a rough estimate for your BAC. Interestingly, you can also get drunk by inhaling ethanol vapors while sober. It's exactly the reverse process. That said, don't ever do this; you can get so drunk so fast that it can seriously hurt or kill you." ], "score": [ 73, 6, 5 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
atpl1p
How do they store and transport highly radioactive materials, like those with reactor-core levels of radioactivity or higher?
Technology
explainlikeimfive
{ "a_id": [ "eh2qqy5" ], "text": [ "Active fuel from a reactor core is stored under water. Water is one of the best radiation shielding materials. It blocks pretty much every kind of atomic radiation, it's cheap, safe and easy to maintain. It's kept this way for at least 5 years after coming out of the reactor to cool it down. Fuel is not only radioactive, it's thermally very hot. Placing it in any container at that temperature would damage the container. When ready for transport to long term storage a container is lowered into the water. The fuel is then moved into the container under water. All fuel moves always occurs under water or inside/behind some kind of shielding barrier. Once full the container is lifted out of the water. The water is drained out and the container is placed in a big concrete and steel container called a cask. If it's transported it's by train or by giant trucks. But in the US spent fuel is primarily stored on site at the power plant due to there being no approved place to store it." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
atppz3
Why do some Bluetooth devices (receivers) seem to maintain connection at longer distances?
For instance, I have a Bluetooth headset (Jaybird) that seems to lose connection about 10-15 feet away, but a Bluetooth speaker (JBL) that I'm currently streaming music to about 30 feet away and through a closed door. Is the difference on the receiving end of the devices? Both would be connected to my mobile device, so the signal going out is always the same. Edit: Forgot flair.
Technology
explainlikeimfive
{ "a_id": [ "eh2pwin" ], "text": [ "There are 3 different classes for bluetooth devices - lower power = less range. Class 3 is designed for short range, < 10 meters. Class 2 is designed for around 10 meters. Class 1 devices are designed for around 100 meters. Your Jaybird is probably a class 3 or 2 device and your JBL speaker is a class 1." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
atpw5w
How do cars have cameras that broadcast the top of the car and the surrounding area?
Technology
explainlikeimfive
{ "a_id": [ "eh2pqzn", "eh2pwco" ], "text": [ "It's not the top of the car. It's most commonly 4 cameras. One in the front, one on each side and one on the back. Through editing you attach them all together showing the different sides and put the car where there's a missing spot (I. E. In the middle) that's how you get a faux 360 camera - you should still actually look around as you back up though", "The top of the car you see is just an image. There are cameras in front, back, and both sides that have wide angle views. The system is adjusted so that where one camera view stops, the other begins. This creates a view all around the car with a “car picture” placed in the middle so make it look like a top down view." ], "score": [ 6, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
ats5a9
How does thermal imaging work?
Technology
explainlikeimfive
{ "a_id": [ "eh365m6" ], "text": [ "A very basic explanation is that all heat generates thermal radiation...and thermal radiation is just another form of light. Our eyes can't detect that light (just like we can't see ultraviolet rays or x-rays), but we can make equipment that can detect that light and convert it to a screen image or whatever display like goggles, etc." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
ats7a2
what is the point of wi-fi calling?
Technology
explainlikeimfive
{ "a_id": [ "eh35928", "eh3564b", "eh35x5a" ], "text": [ "Cost, if you’re international and don’t have a solid or free international plan you will be calling for free", "Call quality. Theoretically you’ll get better voice quality over your ISPs data network than your cell providers voice network.", "In addition to improved quality because of bandwidth, reduced cost because of per minute billing, there’s also better coverage as WiFi often does better in buildings than cellular might." ], "score": [ 6, 5, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
atum4v
Why does phone call voice quality suck while we have pretty nice mics and speakers on our phones?
Technology
explainlikeimfive
{ "a_id": [ "eh3ilej", "eh3wdle" ], "text": [ "Long story short, it’s to make it cheaper to have many phone calls happening simultaneously. On a copper wire, or fiber optic cable, or in the wireless frequency, there’s only a certain amount of bandwidth available. To maximize the number of simultaneous phone calls, you want to make each phone call as “small” as possible. You can do this by cutting unnecessary frequencies out of the human voice, what’s called filtering. The human voice is made of a whole slew of frequencies, but it turns out that we can filter out most of the frequencies and still understand speech. So, you apply filters to the voices to allow more simultaneous calls on the system. You can look up voice intelligibility to find out more.", "POTS is an acronym in the telecom field that means \"Plain Old Telephone System\" and it is the standard for voice calls. This standard comes from the analog telephone service that existed prior to cellular and voice over IP (VOIP). Because there is a huge infrastructure for POTS, even cutting edge VOIP and cellular phones still interface with the existing system and so they need to be compatible with it. In other words the need for backwards compatibility places limits on voice quality. The original telephone standards limited the range of sound frequencies which could be transmitted. This was because of a combination of headset and microphone quality, line transmission quality, and the need to bundle or trunk calls together and concentrate them for distribution over a wide geographic area. The smaller the frequency range used, the more calls a given line can support at once. Because the human voice and sensitivity of our hearing is centered around the 500hz to 3000hz range, this is the bandwidth of most voice calls. For comparison a CD typically covers 20hz to 20,000hz. But that narrow range is all that is needed for intelligible speech. In the digital realm we no longer need to compress audio by limiting the frequencies that are in use, instead we convert the audio to pulse code modulated (PCM) sound, and there are compression algorithms we can use to digitally reduce the amount of bandwidth for each call. When the POTS systems began going digital, they used digital lines like the T1, which could handle 24 simultaneous calls (23 really with 1 signal channel) with each call alloted 64kbps of bandwidth. A small mp3 using modern compression might only need 128kbps for CD quality audio, but those early digital calls didn't have much PCM compression. For cellular and other wireless carriers, this very small footprint that voice calls take up, is a good thing because it allows them to handle many more phone calls, with less equipment and cost than it would if the voice quality was higher. It also makes routing calls through the POTS a no brainer as quality is not impacted very much. Cellular radios don't have T1 lines snaking out to everybody's phones, they use radio waves but logically they are set up in a similar way. A radio might have 24 time slots (like channels on a TV) similar to the T1 (coincidence? Nah it made it easy to support a radio with a T1 connecting it to the rest of the network) and support 23 calls with 1 channel reserved for signalling. Many cellular carriers can also route calls internally though. For instance a sprint user calling another sprint user, doesn't need to ever go through the old POTS system. 
And in these situations it's actually possible for a carrier to have higher voice quality for these \"mobile to mobile\" calls. The Iphone for instance uses a higher bandwidth voice channel for mobile to mobile calls and many android phones can do the same thing; if the carrier enables it. Instead of sending calls through \"switched\" infrastructure, they use IP communication networks which aren't limited to timeslots like a T1 or cellular radio, and so they are capable of using the latest PCM compression to pack a lot of audio into a small size, hence the higher quality. Voice over IP systems as well have different audio quality standards and they are capable of a wide range of audio quality, depending on the equipment used, whether calls are routed over POTS systems or direct network calls, whether the connection is local or over a wide area network like the internet, and many other factors. Source: did a 10 year stint in GSM mobile communications at a now defunct Canadian telecom company. RIP BNR." ], "score": [ 63, 10 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
atvfcb
How does the illegal streaming/torrenting system work? Are there groups who record, subtitle and upload new episodes/movies as and when they are released? How do they sustain these activities?
Technology
explainlikeimfive
{ "a_id": [ "eh3njeq" ], "text": [ "**Stream site:** I have a movie, and I invite the whole neighborhood to come to my garage and watch it whenever they want, even make a copy for themselves if they want. **Torrent:** Most of the neighborhood has the movie. When somebody new wants it, they all contribute a tiny portion of that movie to make a new whole copy for that person. That way, no single person is responsible for keeping that movie available. **How they get away with it:** The sites are operated in countries where the government doesn't give a shit about European/ US copyright laws, usually. **How they sustain it?** They probably have regular jobs like you and I do, and volunteer that time and money towards their hobby. They might make a little money off advertising revenue as well." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
atwrte
How does windows know if a program is not responding?
Technology
explainlikeimfive
{ "a_id": [ "eh3wxvx", "eh3xeyi" ], "text": [ "If the application doesn’t “ping” the event queue for 5 seconds, then Windows makes an assumption that the program isn’t responding, and alerts the user.", "Answered here: URL_0 Whenever Windows wants to tell the application something it pushes it as an event on a queue. If the process stops asking for the next event on this queue for a while then Windows marks the process as non-responsive. Let's say the process has a mailbox that it uses to be aware of what the user is doing. Windows puts letters in this mailbox quite often for things like \"User has clicked the minimize button\" or \"User has pressed this key\". The process normally pulls these letters out to read them and respond to them, if Windows notices that the process has stopped pulling letters out of the mailbox then it assumes it has frozen." ], "score": [ 8, 3 ], "text_urls": [ [], [ "https://superuser.com/questions/961843/how-does-windows-know-if-a-program-is-not-responding" ] ] }
[ "url" ]
[ "url" ]
atym36
Why don't internet service providers invest in something similar to cellular towers so they can provide service at a broader range?
Technology
explainlikeimfive
{ "a_id": [ "eh4a7l7", "eh4b2qu" ], "text": [ "I mean, some do, and provide wireless internet at long range. That is how this post is coming to you for instance. It's just expensive, and for most ISPs, not worth the expense because it would require a lot of work for not as much benefit.", "Why would they? Each ISP has a monopoly in it’s region thanks to the FCC. Without any competition, there is literally no incentive to provide even decent services, let alone invest in expensive cell towers. They can make more than enough money bleeding people dry for the shitty service they have now." ], "score": [ 7, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
atyvf5
What is physically happening, on the smallest possible scale, inside a computer processor?
Technology
explainlikeimfive
{ "a_id": [ "eh4d8vd" ], "text": [ "The most basic component of a computer processor is the transistor. A transistor acts like a switch. When it's on the switch allows electricity to flow from the input to the output. When the switch is off the flow is blocked. The way that transistors are constructed is throughs semiconductors. We take silicon and dope certain parts with chemicals. This doping process leaves the silicon with either an abundance or a deficit of electrons. Electricity has trouble flowing through the parts that are low in electrons. But, if you apply a current to the section that is lacking electrons then it will allow that part to act as a conductor. That's what we mean when we say it is a semiconductor. As I said, the transistor has an input, an output and a gate that controls the flow. The real power of the transistor comes from when the output of one transistor is connected to the gate of another. Now turning on the first transistor will also turn on the second transistor. If you connect enough of these switches together you can perform simple logic functions. For example, an AND gate will only turn on when all of its inputs are turned on. An OR gate will turn on when *any* of its inputs are turned on. From these simple logic circuits you can build larger circuits that will do math. Ultimately, that's what a processor is. Just a pile of silicon that can do math." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
au16o1
How are machines used for medical imaging calibrated?
Technology
explainlikeimfive
{ "a_id": [ "eh4tngb" ], "text": [ "Probably with some reference data. Here's a thing we send out with the machine its as exactly 1.0 as we can get if ur machine isnt measuring this at 1.0 its offcenter" ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
au1bkx
. Why are they flipped?
Technology
explainlikeimfive
{ "a_id": [ "eh4vwx6" ], "text": [ "Because Bell (the telephone company) started doing it so everyone started doing it, there was a video made fairly recently on it, can't seem to find it right now." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
au1c3m
What is differential privacy? How does it work and Why is it important?
I hear the term differential privacy being used more and more among techies. What is it? Why is it important, and how does it work?
Technology
explainlikeimfive
{ "a_id": [ "eh5boln" ], "text": [ "Differential privacy has to do with how confidential information can be gleaned from seemingly obfuscated (disguised) information. Organizations like the US Census Bureau and the Department of Labor collect a lot of data about people under the condition of anonymity. This data is then published in aggregate to inform the public as to the state of the country. Theoretically, it is possible to go through these tables and, repeatedly cross-referencing one data set to another, to find confidential information that was not explicitly published. For instance, if you look up your local chamber of commerce information, it might give you last year's annual sales of goods by sector (food & beverage, hardware, gas, clothing, etc.). But if you also know that there is only 1 clothing store in town (or if this information is posted as part of the town directory), then even though the board did not explicitly tell you how much that clothing store sold last year, you can determine it from the information they gave. Differential privacy is concerned with how to carefully curate how data is published to prevent this. For instance, the chamber of commerce may simply omit the clothing section from their economic report, or roll it into a miscellaneous goods section. Given the enormous amount of data being collected and published every day on all different facets of people's lives, differential privacy is important to allow this data to be usable without compromising individual identities." ], "score": [ 6 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
au2jts
how does the auto setting on my windshield wipers work? How does the car know there is enough rain on the windshield that warrants a wipe??
Technology
explainlikeimfive
{ "a_id": [ "eh55fxh" ], "text": [ "The sensor beams an infrared light onto/out of the windshield. Raindrops will affect how this beam of light is bounced back into the sensor, so with that information it can determine approximately how much rain there is. & #x200B; [ URL_0 ]( URL_1 )" ], "score": [ 3 ], "text_urls": [ [ "https://en.wikipedia.org/wiki/Rain\\_sensor#Automotive\\_sensors", "https://en.wikipedia.org/wiki/Rain_sensor#Automotive_sensors" ] ] }
[ "url" ]
[ "url" ]
au45xo
How do smartphones/tablets still track time after battery dies and has no internet?
Technology
explainlikeimfive
{ "a_id": [ "eh5gv5n" ], "text": [ "There's a smaller battery that powers the clock and flash memory. On a computer it'll be a little button battery. They provide just a trickle of electricity. They last a long time. In fact you can find YouTube videos of people unboxing computers from the 80s where those batteries are still good." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
au4u85
OLED vs QLED
Technology
explainlikeimfive
{ "a_id": [ "eh5m5p5", "eh657mf" ], "text": [ "OLED (Organic Light-Emitting Diode) - It's an organic LED that is capable of emitting it's own light (unlike LCD screens which require back-light) therefore they are capable of displaying the \"true\" black colour, as they simply turn off and no light is being leaked out. & #x200B; QLED - It's basically an LCD screen with Quantum-Dot technology. The main difference is that in comparison to regular LCD screens, it uses blue back-light, instead of white. The quantum dots (they only have a couple of nano-meters) turn this blue back-light into colours that we actually want to see on the screen (red/green/blue). TL;DR OLED are LEDs that can \"display\" black colour (they only turn on the needed part of the screen). QLED is basically an LCD with blue back-light and quantum dots.", "Speaking as a consumer who's recently purchased a QLED, if you like real blacks go for OLED. If you prefer vibrant colours, pick QLED. Needless to say, if you're planning a purchase you can't beat seeing them in the flesh." ], "score": [ 22, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
au4whi
How do transistors work?
Technology
explainlikeimfive
{ "a_id": [ "eh5mw6q", "eh6iu62" ], "text": [ "A transistor is an electrical component that works as one of two things: an amplifier or a switch. An amplifier takes a small electrical current and turns it into a big electrical current. A switch takes a small electrical current and uses it as a signal to let another electrical current pass through it. The second one is probably more pertinent to this question, because switches form the basic function of all electronic systems. You probably have an idea of how the binary system works, and when it comes to electronics, transistors are the physical component of that system. In computer systems, transistors have two states: on and off. When a little electric current is applied to them, they turn on. When it isn't, they turn off. Certain patterns of transistors activate other patterns of transistors in a big network to perform calculations depending on which ones are on and off. When transistors are turned off and on in a certain order that can be 'read', they're storing information as memory.", "The answers given are all well explained and correct. But ill have a go at a more ELIF version. At the most fundemental level, a transistor is a gate. You provide a little push, and the gate opens of closes. Think of a river, with a thick wood beam on one bank. When the beam is on the bank, the water (electricity) flows as normal, and when the beam is in the river. It acts like a dam and stops the water from flowing. Now. There are a lot of tricks you can do with this beam and river. For example, imagine you stick the beam partially into the river and push an pull it slightly. The little movement you do moving the beam is amplified in the river as it rushes and wanes with the beam. This is a simple transistor amplifier. Another thing you could do would be have the downriver control beams of other rivers. Or have the river control its own beam. This is the fundamental idea behind digital gates." ], "score": [ 3, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
au4zvh
How come some websites have massive password leaks, but large ones like Google do not?
Technology
explainlikeimfive
{ "a_id": [ "eh5n03r", "eh5pnz2", "eh5rdcz" ], "text": [ "Just to be clear for some reality into the situation. The only times you hear about a password leak is when several things occur: 1) Passwords get stolen 2) The company actually discovers it (way harder than you'd think 3) The company thinks its in their best interest to reveal that publicly. In all likelihood, tons of sites and passwords are hacked and stolen, including \"large\" sites. But they are not discovered to be hacked or the company for whatever reason does not make it public. Be very cautious to say a site has never been hacked or stolen, they may just not know or not have said they were.", "Yahoo, twitter, linkedin, tumblr, ebay, myspace, adobe, dropbox and several other large ones _have_ had massive breaches. Being large is not security, in fact it makes them a more valuable target. Google is not immune, although I'm not aware of a password breach yet, one of the reasons they're shutting Google+ down is due to personal data breaches^1. Of course, we've all heard of the data breaches at Facebook as well. Some of the big breaches: URL_0 ^1. URL_1", "No company wants this to happen, but while a small company might be able to assign only 1 or 2 people to actively search for vulnerabilities, a company like Google have their own department, automated tools before a change goes live etc. Also, in a company like Google, no single person have access to your password, and there are only few people who even have access to the code that checks your password. These people are probably some of the best in the world at knowing how to protect it, and they not only are updated on advancements in security and hacking, they also probably research it themselves." ], "score": [ 24, 13, 4 ], "text_urls": [ [], [ "https://informationisbeautiful.net/visualizations/worlds-biggest-data-breaches-hacks/", "https://www.theverge.com/2018/12/10/18134541/google-plus-privacy-api-data-leak-developers" ], [] ] }
[ "url" ]
[ "url" ]
au6xg6
How do OLED displays differentiate between a dim white and gray?
As the title says; how do those make dim white compared to a gray?
Technology
explainlikeimfive
{ "a_id": [ "eh60q6t" ], "text": [ "Dim white is grey. You see it as grey if it is surrounded by white, and white if it is surrounded by darker grey (or 'black'.)" ], "score": [ 8 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
au9dzk
For years NSA has raised concerns about the US power grid being hacked. What can the US do to protect the power grid? Going Green is critical for all, if nations are unable to protect their power grids, will that slow going green?
Technology
explainlikeimfive
{ "a_id": [ "eh6iylh", "eh6j1qm", "eh6jm06" ], "text": [ "First, I think you're connecting two things that aren't connected. Going green, that is using more renewable energy sources, is not connected to the security of the electrical grid. Second, hardening the electrical grid against a cyberattack is an ongoing process. Most electric companies are private organizations but they work with various government agencies to coordinate a defense and restoration plan in case of a cyberattack. The exact details of what they're doing are understandably secret but they involve things like keeping control systems isolated from publicly accessible networks and providing manual overrides in case the automated systems are compromised. You can read more in this government report if you're interested. URL_0", "We will never get to a point of 100% safety from foreign interference or attack in many aspects of our daily lives. The best suggestion I can give you if you’re really concerned is to install your own solar panels, battery system, etc. I don’t think that the security of our infrastructure is that related to the speed of green energy adoption. That’s basically going to come down to the advancement of technology and public policy that incentivizes people to go green when it’s cheaper than the alternatives. While a lot of people on Reddit will agree that going green is critical to all, many more people will only take action when it’s in their shortsighted best interest— not because of an ideology of conservation and planetary health.", "Going green generally involves more diffuse power generation (many many more sources of power, each producing far less). This is a double edged sword for security since the attack surface area is much larger. However, an attacker who gains control of a single small array of solar panels, or a single battery substation, isn’t going to accomplish very much. This is in contrast to today, where gaining control of a single power plant could disrupt tens of millions easily. There are some complex ancillary effects though. A modernized grid (practically a requirement for any serious decarbonization) would have a significantly higher number of “smart” controllers than the current system. This is dangerous for two reasons. One, it’s again more surface area for attack. But two, it’s also more surface area for leapfrogging into other attacks. Smart home devices are often compromised, not for the ability to control Joe Smith’s hallway light bulb, but for the ability to use that tiny computer as a slave nose to launch other attacks on unrelated things. This is probably the more serious danger. Securing the grid is very achievable, and while logistically complicated, all ultimately something we understand. Distributing power generation, storage, and control more broadly isolated any potential impact an attacker could have directly on the grid. But the rest of the connected world might feel more pressure due to larger DDoS attacks and more complicated attribution. Ultimately, the problem of figuring out how to fully decarbonize (especially if people are dead set against fission) is a much more challenging one than figuring out the security of the grid." ], "score": [ 75, 10, 3 ], "text_urls": [ [ "https://www.ferc.gov/legal/staff-reports/2017/06-09-17-FERC-NERC-Report.pdf" ], [], [] ] }
[ "url" ]
[ "url" ]
au9w5x
Why does the plug of vacuum cleaners always end where the cord wraps so you can never attach it properly
Technology
explainlikeimfive
{ "a_id": [ "eh6mlgd", "eh6mp7t" ], "text": [ "It has to do with wrapping the cord too tight. If it ends where the hoop is, the cord is too tight, which means youre not taking care of the vacuum properly because it grinds down the plastic when it's too tight. Source: every southern mother ever", "My thought is that the cord can only be wrapped around one direction for it to fit perfectly. You just have to guess which side." ], "score": [ 12, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
aubeaw
What is long exposure photography?
I don't really understand what it is and how it works. Is it a lot of photos combined in one or one photo that is made over a long period of time? (Or even something different!)
Technology
explainlikeimfive
{ "a_id": [ "eh6yoq5" ], "text": [ "> one photo that is made over long period of time That's exactly what it is. It's called \"long exposure\" because the aperture of the camera is held open for a very long time, leaving the film / sensor \"exposed\" to the light. Normal pictures snap the aperture open and back closed again in around 1/60th of a second (more or less), while long exposure pictures can hold it open for anywhere from several seconds, to minutes, to hours. This causes everything the camera \"sees\" to smear together into one image that shows the streaks of moving objects." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
aubfz9
Why can’t robots click “I’m not a robot” on websites, and what robots are being stopped by this?
Technology
explainlikeimfive
{ "a_id": [ "eh6yqjo", "eh6y9n9", "eh71b7n" ], "text": [ "Just the way your cursor moves from its initial position to the check box also tells the system you’re a human. Bots would go straight to the box while finger drag would be shaky.", "That \"I am not a robot\" typically checks your activity with Google and how you are interacting with the page. If it doesn't trust you it brings up a challenge.", "It stops spam bots or brute force scripts. Preventing them from creating a lot of different fake account or guessing your password." ], "score": [ 33, 13, 5 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
aubma4
How do multiple exposures work to create such amazing photographs?
Technology
explainlikeimfive
{ "a_id": [ "eh71fr7", "eh71xu0" ], "text": [ "This is a technique that is used when photographing in low light conditions. Normally this would require very long exposure times in order to collect enough light to get all the details. However you run the risk of overexposing parts of your image and underexposing other parts and you also risk motion blur as objects move (such as stars in the sky). So instead of one long continuous exposure you take multiple shorter exposures. This gives you a lot of options when you merge them together. It does not on its own make the images better but it helps you out when you are editing the photograph as there is more information for you to manipulate.", "With film, you can take a photo (an exposure), and then take expose it again over to of the first. The resulting image is a combination of both exposures; they often appear to have transparent or ghostly layers. That same effect can be done digitally with Photoshop. Otherwise, digital cameras can do lots of neat things by combining multiple exposures of the same subject. One is HDR, or High Dynamic Range. HDR is a combination of at least two exposures. One exposure will show detail in the deepest shadows, but lose all detail in bright areas. Another exposure will show the bright areas while losing the shadow detail. There may be other exposures in between. These are combined into a final image that has detail in the deep shadows and bright highlights at the same time. Another helpful tool is focus stacking. If you want to show sharp detail right up close to the camera and very far away at the same time, you can take multiple images where the only thing you change is the focus point. Starting focused up close, and ending focused far away. Combining these makes an image where everything is in sharp focus from front to back. A very close-up macro image that is sharp everywhere could be a stack of dozens of exposures. There are a handful of other camera functions that can use multiple exposures, but those are the common ones." ], "score": [ 4, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
aucav2
How do the security gates at the exits of shops (to find if someone is stealing) work? Why do they activate sometimes when nothing has been stolen and security tags haven't been left on?
I was caught by one of these the other day -- hadn't stolen anything, didn't have clothing/alcohol/etc with "not removed" security tags. Just a normal law-abiding supermarket trip. Security searched me and didn't find anything either so he let me through, but it's left me wondering how these get activated and why they would go off with no "trigger" sometimes? ETA: I've been caught a few times by these before, and a couple of times the employee remarked that "oh it's been doing that all day!" Why would that be?
Technology
explainlikeimfive
{ "a_id": [ "eh750wv" ], "text": [ "An EAS system or electronic article surveillance system works with detection antennas that are installed at the exit of the store and the hard tags or labels attached to the articles sold in the store. The detection antennas constantly emit electromagnetic energy. When a customer passes these antennas with an article, a hard tag or label - if it is not deactivated - causes a short electromagnetic signal that is received by the antennas and results in an alarm. The hard tags or labels themselves do not contain transmitters but an electric current is generated by the electromagnetic field of the antennas. Electric current is only generated during the short moment that the article is carried past the detection antennas and an electromagnetic interaction occurs between the antenna and the hard tag. Sometimes, you might be carrying something on you that accidentally trips these sensors. False alarms happen on occasion because the gates aren't looking for a precise object, just something that is close enough to what's in the security tags." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
aud3z1
How can we measure battery percentage from 0 to 100 one by one? Is there 100 different sensors or some sensors and a predicting software between them?
Technology
explainlikeimfive
{ "a_id": [ "eh7bjq4", "eh7bwuy", "eh7brcb" ], "text": [ "The voltage of a battery drops a predictable amount as the battery depletes. So, it might be 3.7 volts at 100% charge and only 2.5 volts when it's almost dead. The circuitry inside your phone measures the current voltage and compares it to the known voltage curve and tells you how much charge is remaining.", "We measure the voltage. When at at max capacity we expect a certain voltage. As the battery depletes the voltage drops ever so slightly. When it reaches a specified voltage we consider that “0%”. To make clear, there’s still charge in the battery that could be used to power your phone and insatiable reddit appetite, but we know that going below a certain amount of charge on lithium ion batteries degrades their life span. The phone has, in the software, instructions not to turn on or use power if the battery is below that voltage threshold. This is my understanding off the top of my head, I made no effort to research or confirm my assumptions.", "The computer reads the active voltage in the battery of the device up to the millivolt if necessary to get an accurate battery usage" ], "score": [ 20, 6, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
aujoh1
how do printers and printer ink and printing work? I don't understand how the liquid ink doesn't just like soak the page, how does it form letters and pictures?
Technology
explainlikeimfive
{ "a_id": [ "eh8j7cd", "eh8j4rk", "eh8kltn" ], "text": [ "For liquid-based inks, pretty much the same way pens work - they release a very specific amount of ink in a certain pattern and that amount soaks into the paper and dries quickly enough to maintain the pattern (letters, numbers, symbols, lines, shading, etc.). It's just that the pattern making happens very quickly because machines like these can do things very fast. For powder type inks (toners) - typically laser printers, it's a bit more complex, although it usually happens even faster. Essentially, the \"drum\" (a rotating cylinder) is given a negative charge, then the image to be printed is imprinted on the drum by exposing particular areas to a laser beam (hence laser printing), which changes the electric charge in those areas by a specified amount. Toner is then deposited on the drum, and the areas of different (or no) charge pickup different amounts/colors of toner, and the toner becomes negatively charged. While this is happening, the paper receives a positive electric charge, and as the drum with negatively charged toner on the rotating drum meets the positively charged paper, the toner is transferred to the paper. Heat is then applied to the toner/paper, fusing them together. Again, this happens very quickly.", "Think tiny spray cans putting a tiny amount on the page that soaks it up. You can still smear it when it's wet.", "The page does get soaked, in a way. But the ink is sprayed in such small quantities and at so precise locations such that you get the image you want. This precision implies the need for very fine nozzles and the hassle that comes with them getting dried up or clogged. If you print a heavy image (e.g. a full black page) on an inkjet you might notice the sheet being rather wet." ], "score": [ 7, 4, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
aum3yy
how astronomers can observe for a prolonged period of time a tiny part of the sky, with the earth rotating around its axis and the sun
Technology
explainlikeimfive
{ "a_id": [ "eh8xchr", "eh97ci9", "eh9jpco" ], "text": [ "They mostly use a device called a heliostat. This is a device which you add to your telescope mount and configure it so that it aligns with the plane of the rotation of the planet. There is a motor attached to it, or you can turn it manually, that will rotate your telescope once every 24 hours. So the end result is that your telescope will be pointed towards the same point in the sky even if the Earth is rotating underneath it since the heliostat will counter the movement of the Earth. More fancy setups will have a fully motorized mount for the telescope which includes a heliostat function.", "The simplest way is by moving the mount as the Earth rotates, the rotation of the Earth is close enough to constant in speed and axis of rotation so you only need to move in one axis (the right ascension). This is possible by hand simply turning a certain screw along with a clock (see barn door tracker). Past a few minutes of exposure or with longer focal lengths you'll need motors on both axis to correct for errors in the drive which can be done by guiding where you use a small and fast camera to record a star and adjust the tracking of the scope to keep that star in the same place in the guide scope, this will also keep the image of the main scope in the same place ideally. Practically modern telescopes no longer track in right ascension and declination but just use a 'normal' alt-azimuth mount which can track left/right and up/down.", "The most straightforward way is to put the telescope on a mount that can rotate about an axis parallel to the Earth's rotation axis -- the axis points toward Polaris, the North Star. This is called an \"equatorial mount\". So as the Earth rotates once every 24 hours, a motor turns the telescope the other direction once every 24 hours to compensate. In the old days, a system of weights and pulleys was used instead of a motor, but same idea. The less straightforward way is to put the telescope on a mount that rotates about one axis perpendicular to the ground, and another parallel to the ground. This is an \"alt-azimuth\" mount. It can track the stars as they move through the sky, but the stars will still appear to rotate in its field of view. You can fix that problem with a special arrangement of prisms called a \"derotator\", or you can just take a bunch of short exposures and de-rotate them before adding them together to form a final image." ], "score": [ 14, 5, 4 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
auooh8
Why does the cursor go backwards when you move it to the left side of the word document?
Technology
explainlikeimfive
{ "a_id": [ "eh9gvjr" ], "text": [ "It indicates that the cursor will be selecting entire lines of text. There are a variety of feedback methods used to indicate different behaviors of the user interface such as changing the cursor into a caret." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
aupioy
If we can't get under 7nm with silicon why are intel/AMD not building bigger dies with more transistors instead of always shrinking?
If a regular CPU consumes 95W today at 14nm, why aren't they building HUGE CPUs that consume 300-400 watts? All we would have to do is have a bigger PSU and more efficient cooling.
Technology
explainlikeimfive
{ "a_id": [ "eh9n6s2", "eh9nb9p", "eh9n59h" ], "text": [ "> why aren't they building HUGE cpu that consume 3-400 watts? all we would have to do is having a bigger PSU and more efficient cooling. Because it will generate a metric ton of heat. Like, a lot. Which means a big ol' heatsink and a noisy, high RPM fan. Your power user might use liquid cooling to make this more efficient (which would also mean using larger fans which mean lower RPMs for the same airflow which means more quiet) but this isn't practical for Joe Consumer or Joe Business User. Furthermore - did I mention it will generate a ton of heat? Using a processor that was 125W stock and close to 200W overclocked in a ~10-14' bedroom that would normally be 68 degrees in winter, in a few hours it would heat the room up to about 78 degrees and I'd have to open my window in the dead of winter. Now try generating 2-3 times as much heat. Oof. Also, energy isn't free. An extra 305 watts for 8 hours a day is 890.6kWh in a year. The average price americans pay per kWh is 12 cents, which means you're spending an extra $106.87 a year on energy. Finally.... Why NOT continue to reduce die size until we've hit practical limits? More efficient means you can get the same performance for less heat/power... Which means you can get even MORE performance if you overclock it a bit.", "We're about at the limit for electricity traveling between two sides of the die in one clock cycle already. You can get around that by adding more independent cores, and localizing resources so electricity never has to go too far, but I think a few things would need redesigned for that. Additionally, the current chip fabrication process has effectively randomly distributed flaws throughout the die. The larger the die, the higher chance of a ruining flaw to be present making the chip useless. This, however, is changing with something called \"chiplets\" where chips are made of individual parts \"glued\" together.", "Because heat is a never ending enemy, and you'd have to find a way to dissipate all that heat. Plus you only need a certain number of transistors to do each instruction. More transistors doesn't make your CPU faster. Smaller transistors do - because they generate less heat, meaning you can run them faster while keeping them cool enough. Your solution is only feasible in the sense of adding more cores - which doesn't make each one faster, but does allow more parallel computing - if the computations are paralellizeable." ], "score": [ 26, 17, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
auqbz3
how do we keep people alive during heart transplants?
In other words, when there's no heart attached to the body to pump blood, how do we keep the brain and other organs functioning? Also, bonus if someone explains how a working heart stays working when moved to another body.
Technology
explainlikeimfive
{ "a_id": [ "eh9u16p", "eh9tobk" ], "text": [ "During an operation the circulatory system will be put onto a bypass machine which circulates and oxygenates the blood. Your heart has its own internal electrical conduction including little natural pacemakers called nodes. If you restart this current the heart and give oxygen to the cardiac tissue the heart will beat again at a constant beat.", "They have machines that continue to pump the blood while the new heart is installed so to speak." ], "score": [ 17, 14 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
auqcou
How does punch card computing work?
Technology
explainlikeimfive
{ "a_id": [ "eh9u2gs" ], "text": [ "A computer program is stored in the computer's memory (RAM). If the program is not loaded into RAM, then the computer can't do anything. Punch cards were simply the original method of loading a program into memory. The computer had a circuit which could read punch cards and load the program the punch cards had into memory, then could execute it. Fundamentally a computer takes an input and spits out an output, be that every single pixel on your screen or just a number printed on a paper. These days we have better ways of accessing a computer's memory, be it downloading a program from the internet, using a USB drive, then using an operating system, which itself is a program that loads a program into memory then executes it. Keyboards and other methods of input require operating systems to really manage them and those require more powerful computers than back then. Punch cards were just a method that was very simple to implement into a computer for cheap." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
aurzaj
how do rechargeable batteries work?
Simply that, how do they work? What allows them to store their charge? I’m having a hard time wrapping my brain around electricity and energy in general.
Technology
explainlikeimfive
{ "a_id": [ "eha8obb" ], "text": [ "Technically batteries don't store electricity. They create electricity through a chemical reaction. Rechargeable batteries work because some of those chemical reactions can be reversed by running electricity back through the battery in the reverse direction. A fully charged battery will be made up of chemical A. Through a chemical reaction it is converted to chemical B and electricity is created as a side effect. Run an electrical current back through the battery and it's converted back to chemical A." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
auxfr8
Why is it that mobile phone signal decreases significantly in the restrooms of workplaces?
Technology
explainlikeimfive
{ "a_id": [ "ehb6jad", "ehb6mzb" ], "text": [ "Cellphones use radios to connect to the internet. Your signals gets weaker when there are more things in the way. Bathrooms tend to be in the center of buildings, which means more things are in the way. As a result your signal is weak and you may lose connection.", "Also instead of signal passing just through drywall, most bathrooms are made of tile which not only allows somewhat easier cleanup, it is an additional, denser material for the signal to attenuate." ], "score": [ 10, 6 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
auxkg3
why did old games need batteries in the cartridge to save rather than using the consoles power?
Technology
explainlikeimfive
{ "a_id": [ "ehb6vyf", "ehb76hc" ], "text": [ "The console's power would be unavailable when the console was off or the cartridge removed from the system. Because the technology used to maintain save files required a power source, a battery was the only option.", "The cartridges could have two kinds of memory: They had ROM, that is read-only memory, which stores all the game data. It can hold data without power, but it can't be written to by the console. That's where all the game data was stored - the graphics, sounds, levels and of course the program itself. But some games like Pokemon or Zelda had save files, so you could continue playing the game where you left off. This data was stored in RAM inside the cartridge, that is random-access memory, which can be both read and written by the console. The problem with using regular RAM for this purpose is that it is so called \"volatile\" memory, that is memory where the charge storing the data will slowly trickle out if you cut power. The battery keeps the memory powered so the save game doesn't get deleted, and removing the battery for more than a few seconds would wipe the save game. That is also the reason a lot of electronic devices can be reset by cutting the power for a few seconds. For example, microwaves with a clock sometimes reset to midnight if you unplug them." ], "score": [ 10, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
auyfv0
What's with the sudden increase in number of cameras lenses on mobile phones?
Technology
explainlikeimfive
{ "a_id": [ "ehbk0sm", "ehbbehf", "ehc3eoh", "ehc0ie3", "ehc3k0s", "ehbkd8a", "ehcgcb6", "ehcdbke", "ehc6png", "ehca2xs", "ehc4c23", "ehcd0xu" ], "text": [ "In two words: [Computational photography]( URL_1 ) Phones are too small to capture high quality images by classic photographic principles. So what they do instead is use multiple lenses as a more general \"information capture\", the final photograph is a computer render based on the information and not the direct result from any one lens or even of a single exposure. The exact degree on how much computational photography is used can vary from phone to phone, but the general direction is doing more of it. For an extreme example, see the [Light L16 camera]( URL_0 ) or the [Lytro cameras]( URL_2 ).", "They've hit the limit in terms of what they can do with sensors, so to differentiate they've added lenses. These yield better results than digital zoom, and you can also do stuff like faking shallow depth of field by comparing the views from two different lenses.", "Mainly two possible applications of multiple lenses: 1. Already stated in a lot of comments: wide angle- normal- telephoto , multiple \"zoom\" options without loss of quality. 2. Better picture quality. You put one colour camera and one black and white camera: the black and white doesn't have a colour filter, so it captures more light ,hence more detail. In software you then combine the two pictures and the result is a higher quality colour image", "A large part of it is the shift from hardware to software. On a regular SLR camera, you have a large lens with high quality glass that can collect a lot of light to expose great detail on the film. The large lenses attached to these cameras provide a ton of adjustability and customization. Cameras have incorporated more technology moving into DSLRs and mirrorless cameras, using a lot of software in conjunction with the large premium lenses. Then as the cameras started getting compressed smaller and smaller to fit into cell phones, they were limited as to how large of a lens they could fit. So there are two different primary methods of dealing with this. The first method is to use a lot of in camera(phone) processing to make some assumptions and essentially \"edit\" the picture in real time. Google is currently the best at doing this with it's Pixel phones. It uses only one camera module but does a *ton*of processing to stabilize and enhance it by overlaying a bunch of \"frames\" of a picture, then making some assumptions about what should be lit, what should be dark, what should be in focus, etc. They are actually really good at it. The second method is instead of overlaying several sequential \"frames\" of a scene, it uses multiple versions of the same picture by taking pictures with multiple cameras at once. That is mostly for low light shots and enhanced detail. The current leader in this method is probably Samsung But the other thing that it accomplishes is zoom. Because it doesn't have the room to physically move the lenses, it uses different lenses. So in the Samsung, it has 3 (forward facing) cameras. A short focal length, a long focal length, and a wide angle (similar to a Gopro). The phone uses software to switch between, or combine the cameras to get different effects. A company called Oppo, just released a 10x optical zoom phone that has never been done before. It does it by using a periscope type of setup so a long lens can be housed in the phone 90 degrees from the camera.", "3 reasons. 1. 
Multiple lenses and sensors allows them to create a single better image without it being necessary to have a much larger and expensive single lens or a much more expensive (and/or large) single sensor. 2. Allows the ability to take an optical zoom photo when one of the lenses is a telephoto (or wide angle, as the case may be). 3. Marketing.", "With more cameras you can do more. You got a 3 or more focal length, so ultra wide to capture buildings e.g., a normal one and a telephoto one to capture things in distance better without the quality lose of digital zoom. And as mentioned with more equal lenses like the new Nokia 9 you can get better pictures in darker conditions or general photos with more details and better HDR.", "Some of these explanations are 100% above a 5 year old talking about Apartures and photo principles and stuff. So i am going to try to dumb it down a bit more. & #x200B; Normal cameras have more space between the lens and the what captures the picture and have more space to work with to change the way the lens can view things because of the space . Phones are thinner and dont have that option . So more lens give the option to figure out depth . & #x200B; & #x200B; First grade science lesson - put your index finger two inch from your nose and look at something nearby. Close one eye , the switch eyes , keeping one closed . See how the finger jumps around , but with both eyes opened its more in the center ? The multiple lens can work kind of like that. & #x200B; Repeat above at 12-15 inches out - The finger moves less because your brain can figure out depth a bit better at distance from being clearer distance to both lens ( your eyes) & #x200B; Bonus lesson: At 12 inches out out what ever eye you have open that the finger moves less in (assuming its perfectly centered is your dominate eye , your brain always likes to use one eye a bit more then the other , but as you get older the balance actually gets better & #x200B;", "More cameras = More Light, more light = More data to process, and faster processors mean it's feasible. There re a few approaches: Some have 2 or 3 cameras at different zooms. This means you can zoom in with no loss in quality or zoom out really wide without a panorama. Some use a monochrome sensor for making sharper images. Some have many cameras but use all at the same time, and do processing to measure it's depth, 3d picture that's sharp with no noise, highlight/reflections can be detected by the reflections shift, and filtered, etc. Doubling the camera MORE than doubles the amount of information the phone gets from it.", "All of these good answers + the push towards 3D stuff. Just like our eyes, you need two cameras looking forward and slightly offset for depth perception.", "For two reasons: 1. To give the camera a better zoom capability. Because of how small phone cameras are, the lense can't physically zoom in and out, so phone makers add a second lense that's magnified 2x or 3x the regular lense. 2. Just like having two eyes, having two lenses gives the camera depth perception, which is useful for adding camera affects like background blur for portraits. The Galaxy S10 has 3 lenses. One is a zoom, one is a \"regular\" focal length, and one is an ultra wide lense. It gives you the ability to take photos you just can't take with one lense.", "camera sensors and lenses on the phone are as good as it'll get while keeping it cost effective. 
So to increase image quality to cost ratio they've added more cameras and use AI to combine multiple pictures into 1 picture", "To prevent moving parts. A regular point-and-shoot camera can zoom in and out thereby letting you taking wide angle shots and close up shots with the same camera without having to switch lenses. The zoom (not the digital zoom, that is just literally the same as zooming in on a digital picture until you see the giant pixels, but the physical actual optical zoom) requires mechanical parts to change the distance of lens elements in the camera. To prevent these moving parts on a cellphone, manufacturers decided to just use multiple lenses and they let you switch between the 2 depending on which type of photo you want to take at the moment." ], "score": [ 2119, 625, 111, 48, 16, 14, 11, 6, 5, 5, 3, 3 ], "text_urls": [ [ "https://light.co/camera", "https://en.wikipedia.org/wiki/Computational_photography", "https://en.wikipedia.org/wiki/Lytro" ], [], [], [], [], [], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
auzrlb
What is electro-static discharge (ESD), how does it work and why is it harmful to some electronic devices but generally tolerable for humans?
So recently, I've been reading up stuff about building/upgrading PCs (looking forward to building my first pc lol). While surfing through various guides and tips, I encountered the term ESD on multiple occasions. Everyone seems to agree that it is harmful to some PC components but not as much as for us humans. I'm curious as to why that is the case and as to what ESD really is. I guess most of my confusion stems from the thought that motherboards regularly have electricity flow in them so it sounds a bit weird to me. A lot of the information I've read also mentioned grounding/bonding yourself to a grounded object to avoid it without really elaborating anything about why or what a grounded object is.
Technology
explainlikeimfive
{ "a_id": [ "ehbjc6s" ], "text": [ "ESD is just the shock you get from static electricity. You know when you shuffle your feet on the carpet while wearing socks and then touch something metal? That shock is an electrostatic discharge. ESD can have very high voltage but usually incredibly tiny current. So, it might sting a little for humans but it can cause damage to computer components that aren't expecting such high voltage. Most computer chips work on 12 volts or less but an ESD can be thousands of volts. The easiest way to avoid ESD is to simply touch a metal part of your case while the power supply is plugged in. This will ground you and remove any built up static charge. If you want to be really careful you can buy an anti-static wrist strap and connect the wire from it to something else that is grounded." ], "score": [ 12 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
av0bf3
How are cracks in concrete detected
Technology
explainlikeimfive
{ "a_id": [ "ehbo2x6" ], "text": [ "Usually visually. Cracks in the ground or in a foundation wall underground can be easily spotted. If the area behind or under the concrete is wet, the crack is lined with a dark color, as concrete is porous, it absorbs water making the crack and the surrounding concrete dark. Due to the brittle quality of concrete, and the most common causes of cracking, frost heaves and poor soil conditions, concrete almost always cracks all the way through from one side to the other." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
av0sl3
-target locking missiles/guns?
Specifically on fighter jets. How do they lock on to other planes? Magnets? Magic?
Technology
explainlikeimfive
{ "a_id": [ "ehbqp0y" ], "text": [ "There are 3 main ways to \"lock\" on to a target this way for air-to-air targets Radar: Either the missile, or plane, or both, shoot radar out at the other plane, and the missile tracks its target based on radar Infrared (heat): The missile has a camera in it that seeks out heat from the exhaust/engine of the opposing jet, and tracks and flys towards it. Active Track (laser): A newer, more complicated method - the missile (or plane, or both) shoots a tracking laser at its target and the missile goes towards the laser. Oh and don't forget, these aren't exclusive -- you can certainly have a missile that does 1, 2, or all 3 of these! The goal is to hit the target and avoid any countermeasures, having multiple ways to track a target helps" ], "score": [ 24 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
av1dwd
- Why did Windows just need an hour and a half to update? What takes so long?
Technology
explainlikeimfive
{ "a_id": [ "ehby6r4" ], "text": [ "Actual five year old explanation: Windows is broken up into lots and lots of small play blocks that are sometimes stacked on top of eachother. Moving a bottom block, which is usually the most important because of its support to the tower, would cause it to fall over. Instead, a new block is pushed in to its place while the old block is taken out, fixed, then put back in. & #x200B; ELI5 typical manlet understanding of computers level: All of these comments have been hooplah, and stand no actual basis to how the patching process in Windows ACTUALLY works. To begin, Windows first has to move sections of System32 to a backup partition on disk called \"System Reserved.\" It does this to then apply a delta change between the old file and the new file. Several hash calculations are then done to cross-check the patch was successfully applied and that no malware is in the way manipulating the patch process. It then has to replace these files after taking down a lot of the system components, like the Capcom and WinAPI. Once all of these are unloaded, it can then replace the files, then put then stand them back up, then the actual reboot process is put into place. Sometimes critical sections of the kernel have to replaced, and this requires a bootscript. That's why sometimes the machine has to reboot several times. The reason yours is taking so long, is usually due to your NTFS journal being corrupted/useless and Windows has to reconstruct it." ], "score": [ 8 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
av2mrz
how close do you have to get to a wifi network before your phone starts connecting?
Technology
explainlikeimfive
{ "a_id": [ "ehc7t8p" ], "text": [ "It has to do with wavelength and power. Your WiFi operates at either 2.5 GHz or 5.0 GHZ The wavelengths are less than 5 inches big. Your router is also only putting out about 1/2 a watt of power. The smaller the wavelength the more buildings and other obstacles affect it. So your range will be about 300 feet in a building or about 1km outside in flat grassy area." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
av8o2w
How do animated shows like South Park and Futurama sync up audio and the animations? What’s the process here?
Technology
explainlikeimfive
{ "a_id": [ "ehdek26", "ehdijmu" ], "text": [ "There are animation tools that do this automatically. You provide the tool with a library of mouth positions that correspond to different sounds in a language. Then run the voice actor's audio through the program, and the program detects what sound is playing at each moment, and selects the correct mouth position accordingly.", "The western process is that you record the voice actor(s) reading the script first. Once all the dialog is recorded the animation is then drawn (by hand or assisted) to match the audio. Recording the audio first also allows the director/producer to identify if the show is going to be too long and start making cuts before they even start drawing. This limits the amount of extra animation ($$$) that gets left on the editing room floor after fitting it to 22 minutes or whatever the episode length is supposed to be." ], "score": [ 13, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
av9j88
EMPs
How do they work? And are the electronics that get affected by them permanently broken?
Technology
explainlikeimfive
{ "a_id": [ "ehdl27e", "ehdna1r" ], "text": [ "EMP stands for electromagnetic pulse. It is a sudden massive burst of energy, which can be created through multiple natural and artificial processes. The main way EMPs cause damage is a huge current flowing through electronics, which will destroy capacitors, resistors, diodes, wiring and circuit boards. The magnetic field created might also break inductors or other magnetically sensitive components. A lightning strike creates multiple EMP pulses when the current enters the ground. The electromagnetic energy spreads through the ground and is luckily stopped by circuit breakers before it enters your house. The rapid release of high energy photons and charged particles during a nuclear explosion can also generate EMP pulses. There are even plans to use the earth’s magnetic field to focus and direct the EMP pulse, which allows an army to disable electronics on the other side of the earth.", "Electricity and magnetism is related, so magnetic fields going across a conductor will cause electricity to be induce in conductor, and electricity running through a wire will cause magnetic fields to be generated. This is how and why things like electric motors and loudspeakers can exist. Pulses of electromagnetism can be caused by solar flares and other solar activity. They can also be caused by setting off nuclear explosions at the right altitudes. It's also possible to make smaller, portable devices that emit short range pulses of a few dozen feet. The exact effect of an EMP can vary. If the EMP is weak, and the wires subjected to it are short, the surge of electricity might be stopped by protective components at for example a transformer station, and if the wires from that transformer station into a household are much shorter, the electricity induced over this distance might not be enough to knock out electronics in that house. If the EMP is stronger, the transformer station might be get damaged, but leave the things behind it safe. If it's even stronger, it might be able to induce a destructive amount of electricity into even the shorter power lines going into houses, possibly causing lots of material damage. Equipment that isn't protected against power surges may very well be permanently destroyed." ], "score": [ 10, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
avdzky
Why do big companies/schools etc. use outside mass email services? Why not save money and send them themselves?
Technology
explainlikeimfive
{ "a_id": [ "eheby0n" ], "text": [ "There are a lot of logistical issues with sending out an email to millions of people, and it’s generally cheaper and more effective to have a service do it. One of the big components is ensuring you get past ISP spam filters. If you send out a million emails all at once from one address, ISPs mark you as a known spammer and your emails will never even make it to their recipients’ inboxes (or spam folders). Mail services have automated ways to send out batches of email from known-clean addresses to make sure the ISPs don’t kill them. Analytics is another big component. Mail service providers typically have robust systems built up that allow marketers to see how many people received, opened, read, and followed up on their emails so they can figure out what works, what doesn’t, and whether the campaign was worth the money spent on it. They can also feed those data back into list management and do clever things like notice when you’re most likely to open an email, so they can make sure to include you in the batches that go out at that time. Speaking of list management: they tend to have good tools for subscribing and unsubscribing (which can be a regulatory headache otherwise), and for pruning bad and non-deliverable email addresses. All of that could be developed in-house, of course, but it would take a *lot* of IT effort and resources and probably be less effective than what you can buy." ], "score": [ 9 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
avfchh
considering that we have had waterproof phones for quite some time now, whats stopping us from making a touch screen that doesnt spazz out when water is on it?
Technology
explainlikeimfive
{ "a_id": [ "ehelbu2", "eheojwh" ], "text": [ "Nothing. Such screens exist. However, consumers prefer extremely sensitive screens, so they react to small inputs like some water.", "The touch screen on for example a DS doesn't care about water, because its pressure based. Phone screens are current-based though (they electrocute your finger), and water on the screen basically makes the screen think your finger is huge and pressing lots of parts of the screen simultaneously, because (impure) water conducts electricity. That's just the way these kinds of screens work. No way of fixing it." ], "score": [ 19, 8 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
avfjcy
How do instant money transfer services like Venmo & Paypal work?
Technology
explainlikeimfive
{ "a_id": [ "eheoi5w" ], "text": [ "Venmo and Paypal are the intermediaries. You Venmo your friend $50 and they instantly withdraw the cash from their Venmo account before it even hits your credit card (or whatever funds your Venmo). What really happened is that you promised to give Venmo $50, and Venmo has decided that you're good for it so they gave your friend $50. Since they're acting as a pseudo-bank, they don't mind waiting for the check from your Credit Card. If you backcharge the account, they're likely out the $50, but that's a cost of doing business. That's also why they usually charge an \"Instant Withdrawal\" fee to cover the risk they're assuming. If your friend left the $50 in his venmo account they'd have a right to pull it back out when you reverse the transfer or backcharge. They're not unfamiliar to folks pulling a low-key scam using Backcharge and Instant Withdrawals. First offense it usually means they eat the $50, and ban everything about you (email, billing address, SSN, CC#, etc). For sums less than $1k they're not worth the legal expense to sue you. Do it regularly and they're compliance/security team will be after you. Every transaction is geotagged, so they're going to start putting together a profile of people doing more than a polite amount of scamming or running illegal businesses via their app. That gets tied up into a nice bow and dropped off at the relevant DOJ/Prosecutor for a fraud case." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
avi3gy
How will the James Webb Space Telescope look for alien life? Where will astronomers look first and what type of life might it detect?
Technology
explainlikeimfive
{ "a_id": [ "ehfavmr" ], "text": [ "So there are 2 ways we can observe planets: 1) When a planet passes between us and a distant star, the light dims. It also changes the color of the star's light. The difference in color and the amount of dimness tell us about the size of the planet and the chemical composition of it's light reflecting atmosphere (if it has on) and its surface. 2) Put a light shield between the telescope and the star. The point is to block out the star's light so it doesn't saturate the camera, even if it's a very distant star, and a mere point of light. This will allow the camera to resolve the light reflecting off the planets in its orbit. We can see these planets this way without them being directly between us and the star, and we should be able to see more light from them. There's also a way to detect planets without seeing them, and that's to measure the wobble in a star's orbit, but this only really works with planets large enough to make a detectable wobble, so, gas giants. If I were an astrophysicist looking for life - I'd look for atmospheric chemicals that are signs of life. There are organic compounds built from amino acids, methane, that comes from organic processes, but these might not be totally reliable. We have one example of life, and that's here on Earth. Life here is aerobic, it relies on oxygen, and plants produce it as a byproduct (life began here before we had an oxygen atmosphere as well). But the dead ringer about an oxygen atmosphere is that oxygen is chemically very reactive, so it would not exist in abundance in an atmosphere on its own, and would inevitably all react with its environment to make other compounds. If we find an oxygen rich atmosphere, the only way we know how that could happen is from life. And that's it, our search for life is searching for life as we understand it. There may be other forms of life, like on our early Earth, but not likely. Life here is made of hydrogen, oxygen, nitrogen, and carbon. These also happen to be the 4 most abundant elements in the universe, aside from helium, which is chemically inert. They're also the 4 most chemically reactive - you can make more molecules out of carbon alone than you can make out of all the other elements combined. If life is going to form, it's likely going to do so using the most abundant, reactive materials around. Exotic forms of life formed out of rarer and more exotic elements aren't necessarily impossible, but unlikely. So we're looking for what we know. And if we spot something really weird, and we can't explain it any other way, maybe we just might consider the possibility of exotic life." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
avkynj
Did TV stations back then play the national anthem and then cut to static? If so, why?
Technology
explainlikeimfive
{ "a_id": [ "ehfwhu5", "ehg46gs", "ehfwuq8", "ehfwaab", "ehfwlau", "ehg9ejo" ], "text": [ "In the past, TV networks didn’t have enough cone to to broadcast 24 hours. So at a certain point it would just cut off broadcasting. Different stations had different ways to signal the end of the broadcast. I’m not sure if any ending with national anthem, most started that way though. Following the ending of their end of transition message, the broadcast would end, causing the TV to cut to static.", "An example of a sign-off chain: [ URL_0 ]( URL_0 ) The technology that brings television was not as robust as you might imagine. Until the late 1980s very nearly every television station signed off at night due to transmitter maintenance and simply not having to provide programming to an audience that didn't exist. The FCC mandated that every broadcast station, including the then experimental television stations, play the national anthem at sign-on and sign-off following Pearl Harbor. In the revisions that established the television spectrum in 1945 this was lifted at the end of the war. Radio fell out of the practice for the most part by the 1970s. It endured on television throughout the NTSC or analog television era. Some stations substituted America the Beautiful during the middle and late 1960s. It was mostly a tradition. However, sign off provided an opportunity to provide community information, issue contracted intellectual property warnings and information on contacting the station. Between the sign-off chain and actually turning off the transmitter most stations presented a tuning signal or test pattern which allowed the technicians at the transmitter to adjust that device and insomniacs to adjust their televisions at home. Public television and independent television stations tended to sign off as early as 10 PM into the 1970s. CBS and ABC affiliates tended to play movies or other cheaply acquired content after the 11/10 PM news and sign off around 1 AM. Of course, from 1955 NBC provided a late-night schedule and their affiliates would sign off after this was presented. As late as 2 AM during the 1970s.", "Transmitting costs money for content, the electricity to power the transmitter, and even the engineer who minds the equipment. Stations earn money to pay these costs by selling ads. Advertisers are only willing to buy ads when there are going to be people seeing them. At some point, there aren't enough people to make enough money to make a profit transmitting. After the last profitable program, the station owner got to play something \"just because they feel like it\", and a variety of poetry, prayers, and patriotic content was often part of this \"good night\" message.", "Saw this in Toy Story 2 and got curious as to if and why this happened : [ URL_0 ]( URL_0 )", "Yes. And because it was bed time. Everyone, even tv stations, had a bed time.", "I remember those days fondly -- 11pm local news would come online and right before the intro music would start for the news broadcast, many channels would ask, \"It's 11pm, do you know where your children are?\" After local news ended at 11:30, it would cut straight to Johnny Carson (Jack Paar for the pre-Carson folks). If you knew any better, you watched Carson and nothing else but he did have other competition on the other networks which came and went. After Carson was over it was anthem, test pattern, and static until 6am when local networks would have children shows and morning cartoons. 
Eventually we got late night mainstream news after local news ended at 11:30. Late Night with Ted Koppel was the popular choice leading up to Carson starting at midnight and then switching to local networks that would often run repeats of old 50s and 60s shows like the Twilight Zone, F-Troop, My Three Sons, Bonanza... et al. This was at a time when late night TV could make money running the first of its kind infomercials in between those old classic shows..." ], "score": [ 20, 13, 11, 8, 3, 3 ], "text_urls": [ [], [ "https://youtu.be/Im9fe4a6bcg" ], [], [ "https://www.youtube.com/watch?v=xdvZim2-mNk" ], [], [] ] }
[ "url" ]
[ "url" ]
avl2qh
On older game consoles (NES, etc.), how is movement not bound to the shown pixels?
It's difficult to put into words, but on older consoles, how is it possible that the player is able to stand, say, in between pixels?
Technology
explainlikeimfive
{ "a_id": [ "ehfzrw7", "ehfz6d6" ], "text": [ "I think I get what you're asking, on a console like the NES the horizontal resolution is only 256 pixels. Since the framerate is 59.94 fps for most NES games this would imply that almost a quarter of the screen would be covered in 1 second if a character moved at what would seem to be the slowest speed, 1 pixel per frame. Graphics wise the sprites can't stand between pixels, so if you take any freeze frame the sprite will always be in an exact pixel align position with the background. Game code wise they usually did it by using a coordinate system smaller than the screen pixels. Now on the NES you could only work with 8 bit values (0-255). This meant that expressing a character's position already required combining several bytes together unless you wanted to limit the level to a single screen. In Super Mario Bros for example the coordinate data used to move around characters could be up to 3 bytes. The first byte contained the block (levels up to 256 blocks wide) while the next byte contained both the pixel position (blocks were 16 pixels wide) and then a subpixel position, expressed as 1/16 of a pixel. Finally you could have a third byte representing a sub-subpixel value. This meant that for the most precise calculations (such as Mario's speed and acceleration) the game actually measured things down to an accuracy 1/4096 of a pixel. Of course when it came time to display a sprite those extra bits where all just cut off. Calculations were made that corrected for the horizontal scroll position and then cut off both the lower and upper bits so that only the 8 bits relevant to placing Mario on the screen remained.", "Because the player position is stored as a number in memory. The precision of that number is greater than what the pixels can render. It calculates the nearest pixel to draw the image to given the player position." ], "score": [ 23, 9 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
avnkea
When using slow internet, why do some web pages load bit by bit, while others show nothing until the full result appears?
Technology
explainlikeimfive
{ "a_id": [ "ehghfus", "ehha3o8" ], "text": [ "When the internet was always slow we designed it so that the first part loaded quickly and other stuff showed up when it could. Now pages are built expecting everything to load quickly so they don't optimize. Plus advertisements take time to pull from other servers and JavaScript code sometimes needs everything to load first. This pulls in code libraries and until everything is there it won't go.", "To clarify, as a web developer, it's entirely up to how the developer to choose the behavior of how a page will appear while loading, and what loads for that matter. By default, content will be loaded as it comes in, the classic example being a plain, Word-like document with pictures that suddenly pop in. However, it's possible for us to specify that nothing should display until everything has finished loading. These days, we try to find a happy middle ground where we try to give you the bare essentials as quick as possible and then load in less important but convenient things in the background. That said, it's still up to the skill of the developer for how well they manage this. Performance optimization isn't something typically emphasized to people learning web dev, it's something they have to learn on their own." ], "score": [ 4, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
avowvb
How come phones have octa-core processors when most laptop CPUs (even high end) are usually dual/quad core?
Technology
explainlikeimfive
{ "a_id": [ "ehgord6" ], "text": [ "If you look at the specs on these phones, they're actually *two joined quad core* processors. There's one set of fast cores for when you're actively using the phone and a set of slow processors that don't use nearly as much power for times it's not \"active\" (eg - sitting in your pocket). This is done to help them preserve battery power because the slower cores use far less power. ...and the standard for desktops has moved to quad core with all but the absolute cheapest new systems coming with at least 4 cores & high-end desktops can have 12 or more." ], "score": [ 14 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
avt65q
Why does text on a PDF not appear blurry no matter how much you zoom in?
Technology
explainlikeimfive
{ "a_id": [ "ehhglac", "ehhiu3t", "ehhosiw" ], "text": [ "Because a PDF is not an image that goes under compression. Pdf can contain other stuff, so when zooming in on fonts, its the same as doing it on word (in theory)", "The text is in vector format - you can convert a PDF to an svg (scalable vector graphic) and you'll sometimes see that each letter is its own shape, especially if it's an unusual font. This means the text was converted to shapes during the conversion process. If you also want vector **graphics** in your pdf (if you make it from an MS office document), then you need to save the vector graphic as an enhanced metafile (.emf) and import that into the word. Useful tip: making a pdf of Excel charts creates a group of vectors, which you can edit in something like inkscape. Makes creating composite figures a breeze.", "From URL_0 What is the difference between vector and raster graphics? Answer: The difference between vector and raster graphics is that raster graphics are composed of pixels, while vector graphics are composed of paths. A raster graphic, such as a gif or jpeg, is an array of pixels of various colors, which together form an image. A vector graphic, such as an .eps file or Adobe Illustrator? file, is composed of paths, or lines, that are either straight or curved. The data file for a vector image contains the points where the paths start and end, how much the paths curve, and the colors that either border or fill the paths. Because vector graphics are not made of pixels, the images can be scaled to be very large without losing quality. Raster graphics, on the other hand, become \"blocky,\" since each pixel increases in size as the image is made larger. This is why logos and other designs are typically created in vector format -- the quality will look the same on a business card as it will on a billboard." ], "score": [ 92, 15, 7 ], "text_urls": [ [], [], [ "PC.net" ] ] }
[ "url" ]
[ "url" ]
avtuqf
Why aren't old films that can't be remastered from the old reels just remastered digitally with algorithms or upscaling software?
Technology
explainlikeimfive
{ "a_id": [ "ehhm83y", "ehi4a3a" ], "text": [ "They can be, but the technology is pretty new and still requires a lot of manual tweaking, which adds to the cost. So far it usually hasn't been worth it economically.", "Old films don't have a resolution, theres nothing to upscale. They also suffer from degradation if they were not stored correctly. Then theres the cost of the film itself, it wasn't cheap leading to a lot of knock offs or other inferior quality products that *really* didn't last long even if stored well. So you have to take a potentially fragile product and send it through a scanner, without damaging it and have each frame be relatively clean. Its not cheap to even get started making these things a digital file. And demand for older titles is low." ], "score": [ 11, 4 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
avvkry
What does 64/32-bit architecture mean?
Technology
explainlikeimfive
{ "a_id": [ "ehi118o" ], "text": [ "A bit is a binary digit. A decimal number like 8376 has four digits. A 32-bit number is comprised of 32 1's and 0's. A 64 bit number has 64 1's and 0's. A 32 bit computer will read and operate on, at most, 32 bits at a time. If your calculation fits within a 32 bit number (about 4 billion in decimal) then that's fine. But, if you're dealing with bigger numbers it has to do two or more calculations to give you the final answer. 64 bit computers can process more data in each cycle. Also, 64 bit computers can work with much more memory because it has more memory addresses that it can work with." ], "score": [ 13 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
avvuxm
what is mouse DPI?
I know that DPI is dots per inch; I just don't understand how that translates into mouse sensitivity, or how the DPI is figured out to begin with.
Technology
explainlikeimfive
{ "a_id": [ "ehi3cu7" ], "text": [ "DPI indicates sensitivity: how far the cursor moves in relation to how far you move the mouse. Think of it like pixel density in monitors - the more “dots” in an inch translates to more stuff that fits in that inch, in this case motion. Sorry if this doesn’t help, an ELI5 answer is hard with the math/sciency bits involved here." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
avxopy
Why are artists ever more capable of creating hyperrealistic art than those before them?
The same challenges have faced all artists throughout history yet only within the last 40 years have we seen photograph-like paintings. Like this for instance: URL_0 Why were people not able to achieve this in the past when the tools have not changed significantly, and why is there a progression throughout history?
Technology
explainlikeimfive
{ "a_id": [ "ehij8vz", "ehija05", "ehil6hb", "ehij1ts" ], "text": [ "Basically Technology. The Technology and Science around painting and drawing increased significantly. Math plays a big role, but also tools do have advanced, finer inks, more consistent, machine-manufactored utensiles. An ever growing supply of tutorials, cheap materials for practice and peer reviews seperating working techniques from bogus. Just cause it does not involve computerization or heavy machinery doesn't mean there isn't a significant technological difference.", "The tools might not have changed but the techniques and the understanding of _how_ to actually paint have changed significantly. For example, I used to be a really good doodler. But I never explored techniques such as perspective, anatomy, proportions, etc. Therefore, my ability to sketch is severely limited to just bad doodles. Art builds on itself. The understanding of how to capture a certain effect is compounded, refined, and ultimately mastered. This mastery is something which was unavailable to previous generations. Newer generations of artists have access to the full gamut of artistic techniques that have been discovered up until this point in time.", "In the past, before photography, your reference was often a moving thing, unless it was still life. Early photography provided the ability to capture a moment in time, allowing for a stable reference, but at a relatively low definition. Recent advances in HD photography allow for hyperealistic capture of living subjects at a given instant.", "You work on improving the thing people criticize. And you teach your students the techniques you worked to develop. So, those hyper-realistic images today will be analyzed to figure out what isn’t perfect, and now people will notice that in the current images and wonder “how did we ever think this was realistic?” While the new paintings will be able to avoid that specific issue." ], "score": [ 11, 5, 3, 3 ], "text_urls": [ [], [], [], [] ] }
[ "url" ]
[ "url" ]
avxvsx
layers and masks in photoshop.
Technology
explainlikeimfive
{ "a_id": [ "ehilbbb", "ehilgei", "ehilitg" ], "text": [ "It's like having a lot of transparent sheets of plastic all over each other. Together they make one image, but if you want to you can take one out, change something about it, or it back in, or change the order.", "Get some tracing paper. Make vertical line. Put another piece on there and put a horizontal line making an \"L\" shape. You have created layers. Now draw a picture on another piece of tracing paper, erase a little bit where your \"L\" is and place it on top of the \"L\". You created a mask.", "Think of layers as drawing on transparent sheets. You can lay them on top of each other to create an image. You can move one sheet around while the others stay put. You can even remove a layer entirely. They help immensely when you want to edit just a part of the image without affecting anything else. Masks are a bit different as they interpret each point in the image on a scale from 0 to 100%, typically represented as greyscale. Then when you perform actions using the mask, the action you take will affect each point according to the percentage dictated by the mask." ], "score": [ 5, 5, 3 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
avzu35
Why aren't charger/data ports in phones shaped more like a headphone jack?
Technology
explainlikeimfive
{ "a_id": [ "ehj3n36" ], "text": [ "Headphone jacks aren't really a great design, but they usually work OK if you only have a small number of signals. If you want to use a bunch of signals, making sure that each of them has a solid connection can get tricky. It's just not a great system from a mechanical point of view, and that can result in cruddy electrical connections. Particularly as things wear out. It could also create more problems with crosstalk (interference). With a typical data port, you can keep signal wires from crossing. That gets a little difficult at the end of a round jack if there are a lot of signals. It also gets a little more tricky to keep the signal wires the same length, which is valuable with high speed signals." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
avzxph
Why do most television DVD sets have analog artifacts on some episodes?
So sometimes when you watch TV shows on DVD, the episode will be a clear, good-quality image. But in other episodes there are tons of analog artifacts such as dot crawl and rainbows, and the image will look “soft”. Why is this? Can't all the episodes be mastered in digital instead of analog? [ URL_0 ]( URL_0 ) (It's all soft and grainy, signs of it being from an analog source)
Technology
explainlikeimfive
{ "a_id": [ "ehj14s7" ], "text": [ "It most likely has to do with what it was filmed with and the quality of the original that is used to master the digital copy." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
aw0aij
How does an activity tracker know I'm asleep and differentiate between sleep phases?
Technology
explainlikeimfive
{ "a_id": [ "ehj6bho" ], "text": [ "People who aren't asleep usually move around. People who are trying to sleep but not having success usually shift around a bit. People who are asleep but not sleeping well also tend to roll around. People who are sleeping well and deeply are usually motionless for long stretches. & #x200B; Trackers have accelerometers or even gyroscopes in them. An accelerometer is a machine that can tell if you're shaking it up like a can of soda. A gyroscope is a machine that can tell if it's being rotated like a globe. So, if you put an accelerator or a gyroscope on somebody, it can record how much they move around. But that's not the whole story. & #x200B; How can the tracker tell what's actually going on? First, humans had to just record the data and then compare notes with human observers. They began to notice that certain movement patterns tended to correspond with certain states of sleep or wakefulness. Over time, they got pretty good at interpreting the movements. Then, somebody wrote a computer program that tries to apply the same kind of analysis to the measurements, without needing a human to do anything. & #x200B; It's not exact. The tracker doesn't observe your mind. But it's close enough to give reasonably accurate results. & #x200B; \\--- & #x200B; Incidentally, I used to work at a company that was funded in part by someone who made their money developing one of the algorithms for interpreting tracker motion." ], "score": [ 12 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
aw3zg6
Can someone please explain to me what exactly is IOT? I read about it a lot but I still couldn't really get it.
Technology
explainlikeimfive
{ "a_id": [ "ehjrtkp", "ehjqmyp", "ehk7q6e" ], "text": [ "Internet of Things. Basically, the idea that as tech has advanced, we can now cheaply stick a CPU with wifi and networking ability into pretty much anything. Some ideas for making use of this include: * Thermostats. Forgot to turn heating off? You can remotely turn it off from work. * Program your window blinds to open and close as needed. * Use your phone to tell your coffee maker to start brewing, so that you have hot coffee by the time you are home. * Be able to see when the Amazon delivery guy rings your bell, and remotely unlock a storage box for them to place your package in. So, at least in theory, potentially cool and useful stuff, at least for some people. The downside though is that we're often talking about poorly designed devices, with little attention paid to security, and no ability on the part of the owner to deal with those issues. Being able to unlock your door from work because a family member forgot the keys? That's pretty cool. A thief being able to do the same because the security is complete crap? Not so cool. The lock being forgotten by the manufacturer and being forever insecure? Awful. **Edit:** IOT devices are also a potential entry point into your home network. Even if you don't have a smart door lock that allows thieves with the right app to get in, you might have a thermostat or coffee maker that allows an interested person to figure out when you're home. If you have such things, you could be watched through your own cameras, or listened to through your own device's microphone. It could also be a starting point to attack your personal computers in case they're interested in your data or credit card number. **Edit 2:** Now some people might wonder, what's the big deal? After all, somebody getting into your laptop has always been a possibility. True. However, most people's laptops don't intentionally offer remote access over the internet. Such things today are usually a result of either OS bugs (which get patched), or personal carelessness (installing the wrong thing from the internet). The IOT is different in that such devices are made to be accessed remotely. Also they're consumer products, and I don't think any company is going to maintain their coffee maker firmware for a decade, even though having a 10 year old coffee machine or toaster is a perfectly normal thing. Things like Windows get far more attention and updates than such devices can be expected to get.", "It stands for Internet of Things. basically everything is connected to the internet,even your hoo-haa rattler.", "Most of the top examples here are consumer goods. Those arent the game changers. Turning on your lights from your phone isn't that big a deal. The game changers are industrial and commercial IoT. Self driving cars will talk to each other and to the road because they will have internet connections that are always communicating. There are already two competing standards for this automotive communication. Imagine if roads had traffic sensors in each lane, and cars could determine how to best organize by lane to keep traffic flowing. Or how to best avoid each other and accidents, by knowing there is a slowdown ahead or an emergency situation. Cars could calibrate how closely they follow each other based on the conditions of the road + brakes + tires of all the cars around them, assuming the road and brakes and tires all had remote, internet connected sensors. And on and on. 
Factories and industrial equipment will have tons of monitoring sensors that are constantly feeding information about their health and production. Theoretically they could take themselves offline when they need maintenance, or self-adjust, etc. A number of things that were not possible before could now be possible if every component or device is connected to the internet, and can talk to central computers or to each other. Much of this infrastructure is already being built, and this is why 5g is such a big deal. All the stuff in the news about Huawei is related to IoT and who owns the communications infrastructure underpinning it. The fight isn't about how to stream Netflix to phones faster. It's about commercial applications and IoT." ], "score": [ 142, 10, 7 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
aw4dm0
what causes servers to crash?
The image in my head of a server is basically a big computer, not many moving parts and lots of cables and codes running the thing 24/7. Now, we have a "broken server" at work at the moment, which causes almost nothing work-related to work, but that popped a question into my head. What exactly breaks? Is a part broken and do they have to find it to replace it? has it overheated and do they need to cool the place before restarting it? Did someone trip over some cables and now has to figure out where the cables came from? If it is something like a broken part, why would that cause nothing to work?
Technology
explainlikeimfive
{ "a_id": [ "ehjtx93", "ehjuogt", "ehjvgb5" ], "text": [ "A server has a lot of physical components, and these can fail and cause system problems. Any medium-large company will have a server infrastructure with lots of redundancy, so that one failed component will not cause an outage, or only a very brief one. Often problems are caused by changes. Someone installs an upgrade, an update, new software, that fails or is incompatible with the existing environment. Again, larger companies have the resources to have multiple copies of their environment, so that changes can be tested in a parallel system, and only migrated into production when they have been fully tested. It's kind of a thankless job, and I'm glad I don't work in that area right now. It's like working for the electric company. You provide a vital service, but everybody complains about it, and if you fuck up, everybody hates you. And if you actually do your job properly, nobody notices or cares.", "A lot of things. * Software bugs * Mistakes made during administration * Disks failing * Power supplies failing * Defective components (eg, bad RAM) * Problems with the infrastructure (cabling, routers, internet connection, etc) Also, you're often talking about specific, custom-made parts. You don't have the normal CPU fan, you have one that's [custom-made to fit into the thin server case]( URL_0 ), and it just clicks into place, so it has a non-standard connection. The power supplies are special. The disks are inserted into special enclosures to make them easier to swap. If you run a tight operation, all those things can be very convenient. If not, you may find out you have to order some oddball component for a server that hasn't been sold in 5 years.", "Like you’re 5: When your computer server stops working, it could be for lots of reasons. The first thing to figure out if it’s a software problem or a hardware problem. You can think of the software like a list of tasks we ask the machine (server) to do, and the hardware like the machine itself that’s actually carrying out those tasks. With these two parts in mind, when the machine fails, it could be because of either of these two things. A software failure happens when the computers instructions get messed up. This can look like lots of things, but mostly has to do with the computer being asked to do funny things by the people who program its instructions. A hardware failure happens when the actual machine has a problem: the cooling fans may fail which causes overheating, the hard drives where the computer’s instructions are stored may stop being able to access that information, or something as simple as too much electricity may just break the machine altogether. It’s hard to say what causes any particular server to fail, but it’s usually one of these two problems: hardware or software." ], "score": [ 8, 6, 3 ], "text_urls": [ [], [ "https://images.esellerpro.com/3459/I/145/7/DC471%20001.JPG" ], [] ] }
[ "url" ]
[ "url" ]
awbydf
How did banks manage their databases and (inter)national transactions when there was no internet? (~1850-1950)
Technology
explainlikeimfive
{ "a_id": [ "ehlgid8", "ehlhnps" ], "text": [ "Buildings and buildings of physical documentation. Underwriters were a lot more prevalent back then, and were required to do more.", "Paper. It's a great material for records. Sure, it's a lot slower to access, but getting your foriegn exchange transfer in 5 days is almost as good as getting it in 5 seconds for most businesses." ], "score": [ 8, 5 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
awd2d6
how solar panels actually generate power from the sun
Technology
explainlikeimfive
{ "a_id": [ "ehm512h" ], "text": [ "Solar Cells consist of two layers of material. In-between is a thin interface that cannot be simply passed by electrons. Sunlight however provides enough energy to push electrons over this barrier. These electrons want to go back to their original side, but can't because of the barrier. So their only option is to move all the way through the circuit connected to the solar cell." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
awgxvz
How do radios (or other communications) send different messages on very similar frequencies?
For example, a radio channel from one to the next may be 156.575 MHz to 156.625 MHz (0.05 MHz). These signals should become superimposed, and so how does a radio decouple the multiple different channels? How does it really know that the 1 it reads is really a 1 from the channel you selected, and not a 1 from a different channel?
Technology
explainlikeimfive
{ "a_id": [ "ehmepq3" ], "text": [ "Imagine all the frequencies in the air are like a bucket of mixed coins. You throw the coins through one of those coin sorters and then just look at the nickels for one channel, and the quarters for another channel. That coin sorter is called a band-pass filter." ], "score": [ 4 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
awhgej
Aspect ratios, or why certain films don’t fill my screen?
Technology
explainlikeimfive
{ "a_id": [ "ehmij21" ], "text": [ "Because widescreen movies were specifically created not to fill your TV screen. Well, the TV screen standard from the 50’s through the early 2000’s... Theatres needed a way to keep audiences coming after the advent of Television. The studios decide that spectacle would combat comfort. The original tube tv’s used a 1:33 to 1 aspect ratio, commonly known as 4x3. The studios decided they would produce films that did not fit directly onto a TV screen as a way to create interest. The other thing to note is this is before the age of digital transfer, so in order for a movie to be shown on TV, the used a process called Telecine, where they literally recorded the film with a camera at standard TV 4x3 format usually from the centre of the screen. This meant you lost tonnes of details from the left and right of centre. This was the industry standard all the way until DVD’s gained popularity. Here is the atrocity that is the Blade Runner transfer: URL_0 Your Modern HDTV is a 1.78 to 1 aspect ratio, which is a pretty basic widescreen standard. Filmmakers will choose different aspect ratios for artistic and cinematic reasons. Lawrence of Arabia for example has a 2:20 to 1 Aspect ratio. Wider over all, skinnier too to bottom. Long story short, you are now getting the full image the director and cinematographer worked on. Enjoy those black bars as tribute to the film buffs who had to do without for over 50 years." ], "score": [ 6 ], "text_urls": [ [ "https://youtu.be/ETGfeSim1K4" ] ] }
[ "url" ]
[ "url" ]
awhgi4
how did GAME GENIES work?
Technology
explainlikeimfive
{ "a_id": [ "ehmiuep" ], "text": [ "Repeating my answer from the last time this was asked: Game Genie's worked a little different depending on each system because each system treated cartridges a little different. In general a Game Genie was designed to sit between the console and the cartridge and when the console asked the cartridge for data the game genie could secretly change it before passing it back to the console. So for example you have a game where you start with 3 lives. That number 3 exists somewhere in the data or code on the cartridge. Let's assume the number 3 is stored at the 5000th byte of the cartridge's data bank. On the Game Genie you'd enter a code like \"50 00 99\". This would tell the genie that every time the console tried to load the number from address 5000 to send back a value of 99 instead of what was really there. Now when a new game starts you have 99 lives because that's the number the console recieved. While that code is obvious in its meaning the genie usually used scrambled codes in a known way, so for example you might actually enter the code \"90 05 09\" and it would get unscrambled into the more meanful code. Different consoles had different ways of working. On the NES the cartridge was linked directly into the CPU bus in such a way that it could control all memory, not just the cartridges (this allowed NES cartridges to enhance the original hardware, not just provide game data) by routing any memory access through the cartridge pins first. This means that the cartridge could even override the data in the consoles built in ram. So Game Genie codes for the NES might do things like \"hold\" a byte. What this means is that it essentially kept a value in RAM locked - attempts to change it wouldn't work. So you have a place in ram where health was stored and when health reached 0 you are supposed to die. The game genie could just hold that value at 99 and now you are invincible." ], "score": [ 29 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
awhxzc
Why are most PC components compatible, yet CPU architecture changes so massively that you need a new motherboard for the CPU?
New GPU models come out every year and you can use your current motherboard to put them in. With CPUs, though, this is different. You need a motherboard with a compatible socket, and every time a new CPU comes out, you will most likely get a new motherboard with it. RAM probably has 4 different types, DDR1 -> DDR4, and their structure doesn't change as often. Why are CPUs not as compatible as other components, and why does their structure change so often? Can't they keep making the same 'style' of CPUs that can work with all motherboards from now on?
Technology
explainlikeimfive
{ "a_id": [ "ehmn8p8", "ehmldyu" ], "text": [ "New CPU generations generally add capabilities (additional registers, different memory access modes, etc etc), and require supporting hardware on the motherboard. When you say GPU you are really referring to the video card, not the GPU chip itself. GPU chips also change their supporting architecture dramatically from generation to generation, but all of that change is isolated to the video card. The video card then has a standardized interface to the motherboard, and those don't change often because motherboard manufacturers have incentive to maximize compatibility.", "AMD rarely changes sockets. Even some AM2 CPUs could be used on AM3. Some time ago they introduced AM4 and I think it has a few years to live. I guess that Intel makes CPUs not so compatible to increase MB sales." ], "score": [ 5, 3 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]
awl5n4
how do Google Maps and other maps services get updates as new roads and highways are built?
And how do they know when things like a new highway (or modifications to an existing one, like more lanes or new exits) have been added in an area that is already well-mapped?
Technology
explainlikeimfive
{ "a_id": [ "ehnaxib" ], "text": [ "You can tell them online, my new build road and house was added in a couple of days. Makes getting deliveries much more reliable." ], "score": [ 3 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
awl7k2
Why do modern CPUs have 8 cores and 16 threads but not 8 cores and 24 threads?
Technology
explainlikeimfive
{ "a_id": [ "ehnbfmj" ], "text": [ "Basically because it's not nearly as efficient as just adding more cores. Each core can really only execute one thread at a time, but because of technology such as Intel's hyper-threading, it can make each core appear as two cores to the operating system, and then do some scheduling shenanigans to make it run faster as if it had more cores than it actually does. This works because each core has different \"paths\" that perform different computations. For example, one path might do an integer addition while another path does a floating point multiplication. If a single thread doesn't need to do both of those kinds of operations then normally one of those paths would be unused. Hyper-threading is able to find another thread that may be able to make use of that unused path. But it's not always the case that different threads can share the core that way (what if all threads only needed to do integer additions?), so 8 cores with hyper-threading isn't as fast as 16 actual cores. And if you tried to do 3 \"virtual\" cores with hyper-threading, there would be even fewer times when three different threads could share a core, so it would be even further from the performance that one would get from 24 actual cores." ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
awn4nv
Why do phone calls and texts still get charged for being international but internet traffic does not?
Technology
explainlikeimfive
{ "a_id": [ "ehodbmt", "ehofqv1", "ehnp43r" ], "text": [ "All these pseudo technical explanations are great but the answer is money. They know they can charge you for it, so they do. It's no more skin off anyone's back to connect your cellphone to India than it is to Indiana in today's networked world. When it used to cost a phone company money to have large networks of operators and limited infrastructure to interconnect those far away local networks to you, long distance calling charges made sense, and it's just stayed that way even though technology has advanced. If you want to see some of the shittiest money-gouging asshole business practices in the developed world you need look no further than the telecom industry", "The phone system dates back to the times when a circuit - a single wire - was used to make a single call. When you made a long-distance call, you would be connected directly to the other phone by a long wire. There weren't many wires, and they were expensive, so you had to pay for the time you used the connection. We eventually learned to use a single wire for many phone calls, but the way we were charged for phone calls remained the same. It was what we expected. Prices dropped, but we accepted that we would be charged for the time we made the call. The internet came about after we worked out how to use one wire for many connections. So from the start, you paid for a connection to 'the network', and the network would get your information where you wanted it. This was expected. Note that as the cost of making telephone calls has dropped, more companies are not charging for time used. I am on one of the cheapest mobile plans available in this country, and last year they changed to unlimited calling across the country on all their plans. It's no longer worth counting the time used. More and more companies are doing this, only charging for the connection and allowing you to make what calls you will - just like the internet.", "An internet call basically terminates at your local server, how the server then relays the information internationally doesn't matter to the phone company." ], "score": [ 38, 8, 4 ], "text_urls": [ [], [], [] ] }
[ "url" ]
[ "url" ]
awqkgw
How did ROM files originally get extracted from cartridges like n64 games? How did emulator developers even begin to understand how to make sense of the raw data from those cartridges?
I don't understand the very birth of video game emulation. Cartridges can't be plugged into a typical computer in any way. There are no such devices that can read them. The cartridges are proprietary hardware, so only the manufacturers know how to make sense of the data that's scrambled on them... so how did we get to today where almost every cartridge-based video game is a ROM/ISO file online and a corresponding program can run it? **Where you would even begin if it was the year 2000 and you had Super Mario 64 in your hands, and wanted to start playing it on your computer?**
Technology
explainlikeimfive
{ "a_id": [ "ehojre4", "eholn2p", "ehoneik", "ehojgns", "ehojtub", "ehoib3s", "ehokvyl", "eholxmw", "ehok7nx", "ehotg61", "ehp1o4b", "ehoz9v0", "ehpwbg6", "eholdor", "ehpaybr", "ehp2sej" ], "text": [ "This is going to be a lot so hang on. Computers work by reading a set of instructions and doing exactly that. This is true of anything, from your PC to your phone to your game consoles. Game cartridges are usually just small chips that contains this list of instructions, like an SD card. Especially on older ones, the connector directly or nearly directly exposes that chip. In many cases these chips aren't super unique so public data sheets exist that say how to read and write to them so companies and engineers can use these parts in their own stuff. Nintendo uses the part, likely for cost and performance reasons, and then when hobbyists disassemble the cartridge they find out what is in there and how to talk to it. Sometimes though, like for the game boy, the cartridge is more than just a few chips with the stored data, and the method of communication is logged while the game boy is running and then the cart dumped once they figure out how to read it. Most of the time the actual reading is done with a microcontroller, Arduinos have proven particularly popular due to low cost and ease of coding. The person dumping it simulates a request for the data across the whole contents of the cartridge, and what's on there is read and then written to a file. Emulators are a bit more difficult. Typically, emulators start by investigating as much as they can about the system they're emulating. What cpu is used, what instruction set it runs on, what is connected where to the CPU and how it's accessed. A lot of this is borderline trial and error using some of the games dumped from game carts to figure out what commands do what based off of what they might expect them to be doing. Think of it as reading a cake cooking instructions in a foreign language. You have an idea of how to cook a cake, so you can guess what ach instruction is actually telling you to do. Compare your guess with guesses from a dozen more instructions and you can start to figure out what each word means. Use that knowledge and you can write a program that will read the instructions and do what they say to do on your own computer.", "I was around during the early days of emulation and was affiliated with a group called Vertigo2099. Basically, back then rom dumping evolved out of console piracy. We used to have devices called cart readers that plugged directly into the cart slots of consoles. You attached them to your console and inserted your cart. These devices also had a standard floppy drive built into them. From there, the devices could literally just pull the raw ROM data from the carts and write them to a 1.44 MB floppy disc. Once you had the ROM data on a floppy, it was trivial to just copy the file to a PC. Before emulation, if you wanted to download and play a game, you could run it directly from the cart readers. As the data in these ROM files were indistinguishable from the actual hardware ROM chip data, emulators were \"fooled\" into seeing ROM files as actual carts. Later devices utilized a serial port connection to PC, as games began to become too large to copy and/or play from a standard sized floppy disk. They worked in the same way, but these devices either read ROM data from PC storage, or some of the more sophisticated ones had IDE support for an actual hard drive. 
The ELI5 version is that once the ROMs were dumped, emulator authors just \"hooked\" in the ROM files when they were loaded, and the emulators saw them bit for bit as the physical carts. Arcade ROMS were the tricky stuff, as boards started getting encrypted, and dumpers pretty much had to reverse engineer the hardware in order to get a full, usable ROM. But once the unencrypted ROM was dumped, the principle is more or less the same. I can't really say much past that, as my involvement was pretty much limited to NES and Genesis stuff, but I saw a lot of pretty cool stuff happen in real time. Regarding the NES era, the biggest accomplishment was successfully dumping a playable ROM of Dragon Warrior 4. The ROM data was split across four separate chips, and the cart readers just didn't know how to handle that.", "So I’m not sure the tone frame we’re talking about here, but I did this in the 80’s & early 90’s. I hacked games, cartridges, etc. To figure out a cartridge, you first took it apart. Chips were way bigger then than they are today and the brand and type was printed on it. Some would have the brand sanded off and we’d have to use trial and error to figure out what chip it was, but in most of those cases we already knew from disassembling similar ones. Once you know the chip, you get the specs from the manufacturer and hook up your own circuit and read the chip. I actually don’t know the hardware - my buddy did that. But I would program the drivers to read the data off the cartridge using his hardware. It was often serial or parallel interfaces back then. Crazy simple stuff. My buddy and I bought games just to pirate them. We didn’t even play them. The pirating was the fun part. My proudest moment was when a colleague at my work gave me a pirated copy of a game that I had pirated. 😁 Many, many, years later, I was involved in a project with the author of that same game and the subject of piracy came up and he told me about how shocked he was that this game got pirated. I never admitted it to either of them that it was me. Just reveled in my relevance. But removing copy protection from that game (basically, removing the checks that it uses to ensure it’s on proper hardware) was as simple as changing five bytes of machine code to NOP (no-op... basically commenting out five bytes of code.). Jesus, was the machine code for NOP 0x90?! Those were simpler times....", "The data on the ROMs is not scrambled (as far as I am aware). There is a lock chip that is used to validate the game is approved by Nintendo, but the data is freely readable. Otherwise they are standard ROM chips, you can download the datasheet here: URL_0 If you are an electric engineer, you can build a circuit pretty easy to read the data off. The processor is based on a standard MIPS processor, so you could get the datasheets for that as well. The hard parts would be the GPU which I don't think was a standard part. So probably some game company would have had to leak the specs. And reverse engineering the lock chip for the cartridges. Earlier consoles like NES and SNES would have been a lot easier since their hardware was quite simple.", "Cartridges are merely a bunch of chips in plastic. As I understand it, It is much like a flash drive, but it just requires a different port. There are a few ways to connect this to a computer. You say there is none, but there are plenty of things you can download to connect, say, a hacked gameboy cartridge or ds cartridge to allow you to upload roms. 
The cartridge is just shaped to fit the console, but the insides are all the typical green, gold, and black of data chips and motherboards. So, it just takes finding a way to connect that data to a system the computer can read. I used to mess with my old Wii when I was done with it. You would just connect a USB to USB to it, or a hard drive set up to plug into the Wii to read it, then you can extract the information that way. you might not be able to find that hardware easily at, like, walmart but they do exist. Likely from someone who is immensely curious. So, essentially, either make a port that plugs into the cartridge, goes to the USB, and then can be translated into a coding language the computer understand OR teach it to read the language, make a poirt in the N64 that allows you to plug directly into its storage and the reader for the cartridge and connect that way. This is more based on my understanding of computers, so it could easily be wrong since Consoles are mildly different much like phones. Similar, but some differences make everything a hint different. so long as the computer can translate the code and there is a way to plug it in, there should be no problem allowing a curious coder to go through the data. EDIT: I didn't realize what subreddit this was from, so let me try again. The cartridge is like a book written in German while the computer reads French. similarly, they have different methods and specifications on how you communicate. Instead of grammar, word placement, and vocabulary it is Syntax and transferring the energy (the scribbles/font/handwriting that further makes it legible.) may just lead to confusion on both sides. Teach the computer to speak German and communicate as the cartridge desires and it'll give the computer access to the folders, files, and data in the cartridge's storage.", "From what I understand, the console creators (starting with Nintendo?) began producing machines to make games for their consoles so that third party (American) companies and developers could produce for their stuff. I would imagine that these devices were deconstructed first... Then the cartridges.", "Devices like Retrode allow for cartridges to be read via USB. Also dev kits would provide some way of transferring game code from the PCs used to develop on to the console to test on. For ISOs, some discs could be read by any CD drive, others (like the Dreamcast and Gamecube) required specific CD drives that were capable of reading the discs. As for the emulation question, it involves a lot of reading documentation, inference and trial-and-error. Console manufacturers provide game developers with a lot of documentation on how the consoles work so they can program games that effectively showcase that consoles capabilities. This same documentation can be used to reverse engineer how the console works. This is kind of where you get into two types of emulators. One type seeks accuracy to the original hardware, even at the expense of performance. The other type is about performance and experience, with accuracy being only important in so far as the game works like people expect. Most emulators will be in between the two types on a sort of spectrum. For the accurate emulators, there is a greater emphasis on researching how the hardware works. CPU instructions, delays due to physical layouts, quirks in firmware, all of it. 
This can often include recreating bugs in the console because some developers coded with those bugs in mind, either to do things that weren't officially provided or to work around them. These emulators are generally made for archival purposes. There will never be more SNES consoles, so creating emulators that play exactly like the hardware will only become more important as the remaining consoles fail. For the performance emulators, there is a greater emphasis on doing things in a way that makes the most sense on the hardware the emulator is running on. So they will look at how the console performed a given task and then find a new way to approximate that task more efficiently. In either case, emulators are made by very talented programmers. Most consoles ran very specialized hardware and it takes a lot of clever coding to replicate that hardware on a PC. For instance, you have the Sega Saturn, which not only had two CPUs, they were very specialized Hitachi CPUs that were mainly used in industrial or automotive systems. The Saturn also didn't even handle basic geometry like any system does today. Instead of triangles, it used quads and didn't technically do 3D but simulated it by stretching square sprites into the correct shapes. Basically, clever people are needed to both extract the data for the games and create the emulators to play them. I am always astounded at how well many emulators run these days given all of the challenges in even knowing how a system even worked.", "I can't ELI5 this very well. That said... First thing's first: **We don't need to know the exact details on every \"file\" stored in the ROM. An emulator's only job is to recreate the hardware in the original console so that the program running still runs the same way it did on the original hardware. All a computer really does is execute instructions. The game will tell your fake audio chip where to start, as long as your fake audio chip responds appropriately. What does an emulator do? It also just runs code. It only needs to know** ***how to execute*** **the instructions stored in the ROM. It doesn't care or need to know where the pictures are. If given a random block of the code from a ROM, the emulator better know what instruction that is and what to do with it.** I don't know a whole lot about the ripping process but from what I understand, some of the original extractors may have been custom/original hardware, or literally hijacking the read process from the original console and putting that raw data through a cable to a PC. For example, there's a discontinued device out there called Kazzo and creates a raw dump of the cart as one big ass file on a PC through USB (or other connectors available at the time). There are a lot of ways to do this if you're an electrical engineer. Once you have the raw data, the next part is somewhat similar to working with a raw program from an unknown computer on a PC. One thing you do need, though: the CPU and hardware inside the ROM so you know how that CPU might try to start communicating with it and how to read the raw data. The other common thing you see with emulators is the BIOS: the chip inside the console (and PC) that helps start everything up. Once your PC powers on, it starts running a program stored on the BIOS chip. This does everything needed to get to the part where your computer finally starts reading Windows off your hard drive (and there's a lot of shit that it does). You bet your ass the process is similar on a console. The CPU in the N64 is the NEC 4300i. 
The low-level information on that CPU was very well known at the time. I have little idea how the first N64 emulator was written, but that person would have already had knowledge about, for example, where that CPU starts reading from to run the very first program on boot. We know the incredibly low-level language the 4300i uses so once you know where execution in the BIOS starts, someone can start trying to trace that program to see where it then starts trying to read the cartridge. That means you know what part of the ROM is just a program. From there, you start to figure out what the program is reading on the ROM, since at this point the BIOS has probably finished and is just reading code on the ROM. The first part of a cartridge may be to show a picture and start playing a song. Translated from the ugly raw data, since we know the machine language of the NEC 4300i chip, it may look something like this: `LOAD FROM ROM ADDRESS 1000, STORE INTO RAM ADDRESS 1000` `LOAD FROM ROM ADDRESS 2000, STORE INTO ROM ADDRESS 2000` `LOAD FROM ROM ADDRESS 3000, STORE INTO RAM ADDRESS 3000` `EXECUTE CODE AT RAM ADDRESS 3000` Congratulations, you now need to start writing an emulated CPU that performs these actions. Oops, looks like it loaded 3 things off the ROM then jumped to start running code somewhere else. Better go see what it's doing at address 3000. If what's contained at address 1000 is a song -- maybe you also discovered the audio chip, and data from RAM address 1000 is being pushed to it -- you now know where that particular game stores at least one song. You've also already figured out where some of the game's program code is stored. If data from RAM address 2000 is pushed to the video chip, you also know where the game is probably storing images. Or at least can start deciphering how the game draws stuff. You may not even care where the game stores data, unless you're interested in writing an extractor. The game certainly isn't going to contain a nice directory listing. It just doesn't need to. That ROM is not necessarily a hard drive with a complete listing of all files on it. (Of course, consoles now use CD, DVD or blu-ray discs that conveniently do have a directory listing.) But you can use this information to start poking at the hardware to see what it does if you don't know what the original CPU is. And don't forget, most consoles have more than one processor. Want even more complicated? The NES and SNES didn't use simple music formats like AAC or MP3: they were [programs that manipulated the sound processor to make sound]( URL_0 ). If this sounds complicated, yep. It's certainly more difficult when the processors are custom. As long as you can write a program that executes the original CPU's instructions, you can make an emulator. But, the PS2 and PS3 used custom chips. No one outside Sony's hardware people knew much about them until someone tried to pick them apart, painstakingly trying to figure out where execution starts once the power button is pressed. How's that work? Good luck. Time to start throwing random data at the original hardware and see what it does. If you can get a white dot to move around the screen, that's a major accomplishment, or even a beep out of the speakers when you push a button on the controller. Bonus points if you can make it play two different sounds from two specific buttons, or make that white dot turn colors. In reality, a block of code on a ROM is literally just a bunch of 0s and 1s. 
Figuring out what those do for a CPU you have no documentation for means literally trying to monitor the machine's memory and see how it changes. You can use some of the ROMs you've dumped to help out, by throwing those instructions at the console. This is why most emulators for the newer consoles couldn't do shit until they could dump a game. It's akin to writing out random letters in a language you've never used and then asking a native speaker if it means *anything* but not being able to ask *what* it means. Knowing what CPU you're working with is like being given a dictionary. If I remember right, the PS3 encrypted data on the discs. That had to be broken before emulation could start, since the hardware was custom. And also if I remember right, the NES and SNES used some sort of lock on some of the carts so the cart had to pass some sort of secret key to the console before the console would start running it or decrypting it. These also complicate emulation. In later PS3 games and later NES/SNES games, these encryption keys could change, preventing the old ones from working with the newer games, which prevented them from being dumped using some hardware. The hardcore method, though: put that bitch under an expensive fucking microscope and examine the circuits. I believe this was the method used for the BSNES emulator. This shows you the gate logic used to create the chip. Emulate that and you can emulate the original chip without needing to know *anything* about the chip. Remember, all you need to make an emulator is to emulate the behavior of the various chips. You don't need to know anything about the data. **ELI5: It's a guessing game, starting from what the CPU does once the device powers on, and trying to trace program execution in the original hardware and trying to figure out what the various processors (sound processor, if there is one; video chip; etc.) are doing with data given to them. It's easier if the console uses off-the-shelf components and not custom-built ones, since with custom chips you have no fucking idea what it's doing.**", "While the formats and systems are custom and proprietary, the consoles still use a fair amount of \"common\" chips. If you know what sort of a CPU is used in a system, you can make some guesses about what sort of data this CPU needs to receive in order to process information. This information can be used to make educated guesses about when and where important data would flow through the systems, and you could then hook up wires to the pins inside the console, and connect them to an external device that detects and records all the raw electrical signals flowing through a certain location. After the architecture of a system has become well understood by hobbyists and the like, they could create custom hardware that allows you to connect a cartridge to a PC, and forcefully read every memory location on the cartridge's storage chip. This would be a ROM image of the cart's storage.", "You inspired me to dig up my \"explanation\" from [a website]( URL_0 ) I made in middle school circa 2001. It's hilarious, and I have no idea which orifice I pulled it from. Oh wait, it's my butt. Enjoy. So, you like Nintendo games, but have never heard of emulation before? You can read a pretty basic short history of roms and emulators, or scroll down to find out how to play Nintendo games on your PC. When a company manufactures a game (say that company is Nintendo), they will have an original backup copy of that game in pure data form, a ROM. 
That is, the game in the form of a file on a computer. It is important to understand that this file is merely data, and won't do anything without the program that runs them, which would be the game deck, The Nintendo Entertainment System, for example. Well, obviously you can't play a computer file on your Nintendo. Which was what the programmers intended. Roms were never actually intended to be played, but merely as a backup for the programmers. But some resourceful programmers (completely independent of Nintendo) decided to write a program that mimmicked (or emulated) the operations performed by the game console. The resulting program is an emulator, that closely mimicks the operations of the original gaming deck. Once you have this executable program, you can then run the back up files made by the original manufacturer through the emulator's GUI, and, in essence, play Nintendo games right on your computer. Such emulation programs have been mande for a lot of systems, from the old extinct systems such as Atari and Intellivision, to the newest ones out there (Nintendo 64!!). On my page I focus primarily on the Nintendo systems: Gameboy, Gameboy Color, the original NES, Virtual Boy, the Super Nintendo, and Nintendo 64. If you are looking for emulation for other consoles, email me and I'll give you a site. As emulators were made without the manufacturers consent, and as the backup files were never intended for gameplay, there are very strict guidelines you must follow if you decide to delve into the world of emulation. See my disclaimer for complete details. But, all in all, emulators are a great thing, don't you think?", "Old technology was comparatively easy to read the data off of. All it required was a clock source and a basic understanding of the serial protocol, which could be reverse engineered by probing the lines of communication with a rudimentary logic analyser or an oscilloscope, which turns the digital bits into a visual waveform for you. Reading most ROMs, even now, goes like this: you provide a clock, write a command (such as a read instruction), then an architecture-dependent address (16 bits, or perhaps 32 or more these days), then you keep toggling the clock and the raw data comes out of the output pin. Then you store it. The chips in a device use those bits in chunks called bytes, as instructions and offsets or as literal memory. They also all have things called program counters, which are internal registers that track where in the program the next instruction gets read from. Instructions themselves can manipulate the program counter readily. Then you'll find general-purpose registers, and registers that track things like whether two bytes being multiplied overflowed, meaning the result couldn't be stored correctly in the resulting memory location. There are also special registers that control what functions the chip performs on its pins, such as what CPU clock divider to use to generate a clock signal for serial communications. When a chip first turns on it has a default function that it performs. Sometimes that behaviour can be changed by the digital states of some of its pins, or by a sequence of events on those lines. For example, a chip might, without any external influence, execute a bootloader located in its boot memory region on power-up. 
But to get that bootloader there in the first place, the chip must have a mechanism to program it, so there is usually an option to, for example, pull a programming pin low while it powers on, after which you can program the bootloader segment directly through another pin, one bit at a time. Then you reboot and, voila, it runs the bootloader. The reality of modern chips is a lot more complicated than that. A bootloader generally has the job of providing very low level, robust functions to a system, such as reading a ROM by configuring a chip's pins in the right way (the bootloader contains a program that writes the desired bits into the correct registers of the chip to configure it), then copying its contents into its program memory, and finally setting the program counter to the desired entry point in that program memory and continuing its operation (i.e. executing the program). Making sense of an unknown ROM image or binary is a tough job without access to a device and the ability to change things and observe the outcome. Often you find raw strings of data represented in common encodings such as ASCII, and then the instructions surrounding these might be obvious. You might quite easily be able to determine the set of instructions responsible for a program call (i.e. executing a function), storing a value in a general purpose register, comparing values, jumping if a comparison has certain expectations met, etc. Sometimes it is even possible to figure out if the original software that was compiled into the ROM was written in a particular language, and even what time period it came from, due to certain optimisation techniques that have been automatically introduced by the compilation software. Generally, compilers that are reliable are popular and are contributed to by programmers from all around the world, so they contain bleeding edge algorithms and optimisations for certain common tasks. It's a very fidgety process but since most stuff works in similar ways it is usually reverse engineerable. When it comes to emulating a system, it means providing, in a software environment on a different system (say a 32-bit PC), a course of action for instructions that would have performed tasks on the target system. If, for example, two expected instructions first loaded a number into a register, then treated that number as an address and loaded the value at that address back into the same register, that could be done in software because the goal is understood. But then comes timing. Different chips take different numbers of clock cycles to perform different functions, and sometimes programmers take into account how many cycles things take to specifically time things against the real world, or with respect to each other. For example, writing a byte into flash memory might take one instruction, but if the byte is read back out immediately after, it will almost always be invalid (unless the clock is extremely slow), because the process of writing to flash takes a substantial amount of time compared to high speed signals. Code which in the original architecture had quirky side effects would have to be emulated correctly. So an emulator attempts to recreate the behaviours of the original chip properly. Chips always have things called interrupts which allow a chip to change what it's doing, if the program it is executing has set the chip up to do so in the right way. Take for example a button press. 
If a chip is set up so that a transition on a pin from high to low causes an interrupt to fire, and a particular value to be written into a peripheral register to indicate to the program what has happened, then ideally an emulator should recreate that. Say your emulator maps the W key to forward; then when you press it, the ideal emulator has a virtual set of registers that the emulated program can read from using its real instructions. The interrupt is fired and presumably the emulated chip then stores the values of the general purpose registers on the stack, stores the program counter and then calls the interrupt service routine. That code then gets emulated as expected, with the byte set to a value which tells it the forward button was pressed on a real controller, and it has no way of telling that it is really being emulated on a PC. The timing thing is really important by the way. A modern CPU that virtualises a console CPU rather than emulating it might run at say 3.5GHz when the original might have run at say 24MHz. If the original code expects an interrupt to fire when a counter based off the CPU clock prescaled by 1/32 hits 727 for example (24MHz / 32 = 750kHz, so 727 ticks is roughly 0.97ms), then it expects this interrupt to fire approximately once every millisecond. There are more accurate clocks than that for timing but often CPU clocks are used for rudimentary real world timing. Well now that it's virtualised and executes at 3.5GHz, the milliseconds count up roughly 146 times faster (3.5GHz / 24MHz), and so anything that makes time based judgments based off of a software implemented millisecond counter like that would now be wildly inaccurate. It could make a physics engine act bizarrely or, most likely, everything that is not tied to real world time just occurs much too fast for the user. Virtual machines generally use virtualisation over emulation because they are running operating systems that were programmed with the expectation of variable hardware installations and so they most often are not dependent on the speed of the computer to perform tasks. They instead schedule tasks based off of both clock counting and real world timing with priority queues. Modern emulation is afforded the luxury of being executed on much, much faster processors than those being emulated, and so the emulator can do things like manage virtual memory for the emulated processor and still have time to let a thread perform some tasks without affecting the emulated software's operation. The host is so much faster that the emulated processor is regularly starved of computer resources deliberately, to make it execute at the desired speed. I.e. a software based scheduler yields time slots to the emulated processor, all the while running under an operating system such as Windows that is itself doing the same scheduling for the thread that is executing the emulator on the modern processor. One last thing worth mentioning is that modern bootloaders often lock the processor from being interacted with through common hardware or technologies such as JTAG, but provide a proprietary way to unlock themselves for in-house debugging. Similarly, a ROM could quite easily contain an encrypted program that the bootloader can decrypt and execute, but the same bootloader locks the chip off unless provided with a specific set of external inputs such as pin latches and passwords. It has become pretty difficult to reverse engineer modern, properly secured hardware if the manufacturer doesn't want you to be able to. 
The same logic can extend to games which contain software that needs to be executed. The bootloader can contain a decryption key for a program, but why bother when it can also perform a key verification on the game to make sure it was built with a licensed compiler etc? There are ways to visually see memory under electron microscopes but some chips utilise memory types which specifically inhibit this attack vector.", "A ROM is just like an SD card. A Nintendo is nothing more than a computer, just built on a different processor (MIPS). You could theoretically start Nintendo games on Linux with the QEMU emulator. Longer story: the way the games access graphics and other peripherals has always been more messed up than on your PC, so there's that; it's not easy. The PS3 was infamously convoluted, but the atmosphere is changing. Interestingly, the Nintendo (S)witch is a relatively standard computer built on Nvidia's Tegra X1 platform, designed for things like early self-driving cars. If I were 30 years younger, I'd spend my afternoons hacking it, as I think it looks *very* promising to open up", "Most of these explanations are getting a little heavy for eli5 imho. For the N64 specifically, Nintendo had included an expansion port for an accessory called the 64DD that let the N64 read data off its own magnetic disks. The 64DD didn't pan out, but the port still provided access for interested tinkerers. A GameShark-like cheat device called the Doctor V64 was released that plugged into the 64DD port. It had the ability to alter game code in real time, which means it had the ability to **see** all the game code. The device had a parallel port* and its firmware was not very secure. Through the efforts of dedicated hobbyists the firmware was modified, and then anyone who had this device could dump any game they had to a computer. *(It might have been a different port but my old brain seems to remember it being parallel)", "> The cartridges are proprietary hardware, so only the manufacturers know how to make sense of the data that's scrambled on them. That part isn't true. They're just regular old parallel ROM chips that are addressable exactly the same way any other ROM chips are. You can buy them for next to nothing and they're readily available. Any kid in university doing an electrical engineering degree has used them lots and can dump them without much difficulty. You have a bunch of address pins. Wikipedia says Nintendo ROM chips can be up to 512 megabits, so with a 2 byte word (16 bits, or 16 data wires) that means you'll have 25 address pins. So if you put a series of 1s and 0s on the 25 address pins, you'll get data out on the data pins, and it'll always be the exact same data for the same address. So all you have to do is put in every address combination 0000000000000000000000000 to 1111111111111111111111111 and capture what the data pins do for each address. This is how pretty much every parallel ROM works and Nintendo carts were no different. So reading the data was easy. Emulating the Nintendo hardware in a program that runs on a completely different platform (i.e. a computer) was very much not easy.", "I'll have to dig out my old Mr. Backup Z64 for a photo shoot. This was a $300 unit I bought mail order from Japan within a few months of the N64 release. FWIW they made it for the SNES too, but I don't have that one any more. But that's how I knew to keep my eyes peeled. For the N64 it was a large plastic box that sat on top of the N64 with a cartridge-shaped extrusion that inserted into the N64. 
It required one legit cartridge inserted into a slot in the Z64 for possibly two reasons: 1) to get authentication from the N64, and 2) to copy the game data to the Z64. Once authenticated, the data from the ROM was allowed to be read “into the N64 console”, but in reality it was captured by the Z64 and written to a Zip disk inserted in the Z64. Once written to a file on a disk, anything could be done with it. You could remove the Zip disk and copy the file to your computer. You could load the file from Zip disk to the N64 console. You could download files from the internet, load them onto a Zip disk and then load those to the N64 console. Back in those days we were still largely on dialup or ISDN. Typically I'd queue up 5-10 7MB downloads and go to bed. In the morning there was a good chance I'd have 4-5 new games. In the end I was more of a collector than a gamer. For me the quest was in collecting all the games rather than playing them. I'd load them once and play a level or until I died just to verify the ROM worked, then move on. I still have all of the games on burned CDs and 10-20 Zip disks (and a functional Zip drive) to shuttle around the games; at least it was all working when I packed it away in 2001 or so.", "Just one side of the story I know a lot about: There used to be developer units you could get relatively cheap from China. The most popular was the Doctor V64 by Bung (yes, Bung). Nintendo also sold developer units, but they were very expensive, so China came to the rescue and developed their own versions. This unit connected to the expansion slot under the N64. Anyway, this unit had a parallel port on the back so you could easily connect it to a PC and send ROM files directly to the N64 to do real live bug testing on your games. This also worked in reverse, so you could rip games directly from a real Nintendo cartridge and send them to your PC. You had to use a special cartridge converter though (that you got with the V64) that would bypass the lockout chip on the N64 cartridge. The process was really easy from what I've heard, and the Doctor V64 became very popular for that, not because of developers who wanted to create games, but because of pirates who wanted to rip their games. If you have ever downloaded any N64 ROM, you might have noticed that the extension used on it was .V64, and that's because that exact ROM was most likely ripped with a Doctor V64. Pirates quickly modified the BIOS of the Doctor V64 so that it allowed you to play ROMs from CD, because the V64 also had a CD drive. So now people could download ripped ROMs from the Internet, burn them to a CD, and play all N64 games ever released for free. And keep in mind that all this happened just 2-3 years after the release of the N64, and people were already pirating ROMs. Nintendo obviously got very mad, and they sued Bung. This meant Bung could no longer sell the Doctor V64 in North America. I own a Doctor V64 myself, modified and ready to play pirated ROMs. It still works great, but it's nothing you'd really get any use out of today. It's a pretty rare console from what I've seen; mine even has the upgraded RAM, which is even more rare. Too bad nobody wants to buy it from me, it's a kinda useless console in 2019." 
], "score": [ 7221, 476, 438, 101, 22, 13, 13, 11, 6, 4, 4, 3, 3, 3, 3, 3 ], "text_urls": [ [], [], [], [ "https://www.alldatasheet.com/view_datasheet.jsp?Searchword=MX23L9602" ], [], [], [], [ "https://wiki.nesdev.com/w/index.php/NSF" ], [], [ "https://web.archive.org/web/20010625104730fw_/http://www.angelfire.com/emo/linkman/" ], [], [], [], [], [], [] ] }
[ "url" ]
[ "url" ]
awqrbt
What are these website cookies that I keep consenting to?
What other tools do websites use to track me? I have heard of tracking tags and pixel tags but don't know what they are.
Technology
explainlikeimfive
{ "a_id": [ "ehok6cs", "ehossq7" ], "text": [ "A cookie is just a small piece of data that a website sends to your browser and your browser saves on your hard drive in a place it can quickly find it again. It's used so that the next time you visit the website it can ask the browser for that cookie and use the information stored to remember things about your previous visit(s). Shopping cart items that you haven't checked out yet, display preferences, anything a site might use to keep your experience as consistent as possible across visits. & #x200B;", "It is a little text-file which typically contains some sort of unique identifier code that allows a web-server to recognize that you are the same visitor as before. If these cookies are only used to facilitate things like login on a webpage, remembering language or color scheme sertings or for providing some sort of functionality on just that one webpage until you close your browser - they don't have to ask for your consent. It's then considered part of the website, which you have \"consentent\" to receive by visiting it. EU law requires asking for consent only for third party cookies which can track you accross many different websites, and/or are long-lasting. This sort of cookies is typically used by advertisers, to create a profile of you. This profile not only allows for targeted ads - but the collected data will also be provided to the company paying for the ads. They will then know what sorts of websites you visit - which often allows to guess what your hobbies, interests, political leanings, etc. are. These sort of extra cookies have to be declared and asked for consent. The whole thing is a bit of a mess though, since the vast majority of webpages will actually set the cookies BEFORE you click okay. The whole thing is more like informing you about it after the fact. Also it would be wayyyyyy easier to just have the browser ask you \"accept cookie from URL_0 \" - which is something browsers actually did in the past. Browsers switched to auto-accept, because the asking annoyed people." ], "score": [ 10, 3 ], "text_urls": [ [], [ "site.com" ] ] }
[ "url" ]
[ "url" ]
awrsw8
What is overclocking, why does it matter, and how is it done?
Technology
explainlikeimfive
{ "a_id": [ "ehor5nu" ], "text": [ "All CPUs have a specified clock speed, effectively for each tick of the clock the CPU performs one instruction (e.g. add, subtract, copy, move), faster clock = more instructions = the computer does more. The clock speed is set by the manufacturer based on performance tests but they lump them together into just a few official clock speeds 1Ghz, 1,2Ghz etc. Overclocking is where you set the clock speed higher than the manufacturer said you should, this means you get a faster processor for the price of a slow one, this works because, for example a 1Ghz processor might actually be a 1.19Ghz processor that failed the 1.2Ghz test." ], "score": [ 5 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
awv3vw
What improves/changes between versions of Ethernet wires? (i.e. cat6, cat5e)
Technology
explainlikeimfive
{ "a_id": [ "ehpdrf3" ], "text": [ "These standards are based on various characteristics. Such as impedance tolerances, resistance and cross talk between pairs. The higher the category the stricter the specification. So a damaged category 6 cable might only test as a category 5e cable. The way you get these better cables is both stricter manufacturing tolerances so the cables are uniform and also by adding insulation and shielding in the cables to minimize cross talk. So if you open a category 6 or 7 cable you will find a lot more shielding wires and foil between the pairs of wires then in category 5e cables and in category 3 cables there is almost none." ], "score": [ 7 ], "text_urls": [ [] ] }
[ "url" ]
[ "url" ]
awx6dh
How do cars and computers know what time it is even after being completely without electricity or internet?
Technology
explainlikeimfive
{ "a_id": [ "ehptj4m", "ehprd68" ], "text": [ "The short answer is that they are never without electricity. In a computer, there is a tiny battery, often a CR2032 button cell, that keeps a chip running called the RTC, or Real Time Clock chip. Inside this chip there are several different components, a crystal and a clock chip. The quartz crystal is what actually keeps the time. Basically, when you apply a signal to it, it vibrates at a predetermined frequency, normally in the MHz range. This vibration is extremely consistent, and generally stays the same regardless of temperature or other factors. This vibration produces pulses at that frequency that are then read by the clock chip. The clock chip then uses that frequency as a reference to count the seconds. For example, if you had a 20 MHz crystal, that means it vibrates at 20,000,000 times a second. The clock chip would count the pulses from 1 to 20,000,000, and then advance the second counter. This entire circuit uses a minuscule amount of power, so oftentimes that tiny little battery will power it for years. Your car works exactly the same way, except that instead of a tiny little button cell battery, it is powered off of your car battery. That is why when your car battery goes completely dead, you have to reset your clock. Also, when you see a digital watch or clock that says \"Quartz Movement\" or something of the sort, it is referring to the quartz crystal that was mentioned above. & #x200B;", "They have little tiny batteries, normally like those you’d find in a watch (so long life but low draw) that power a very low power clock. If the device has an internet connection, it can query a NTP (an internet time server) that provides the current time. Also, GPS satellites can provide the current time very precisely to any GPS receivers." ], "score": [ 10, 7 ], "text_urls": [ [], [] ] }
[ "url" ]
[ "url" ]