2018-08-15 at 7:14 AM UTC
Wtf? Bullshit. That's the only evidence that makes the moon landing seem fake to me.
2018-08-15 at 8:52 AM UTC
aldra
JIDF Controlled Opposition
wernher von braun was a wizard
2018-08-15 at 7:40 PM UTC
I don't think they needed graphics power or even much CPU for a non-cyber space mission.
2018-08-15 at 7:43 PM UTC
Cootehill
African Astronaut
Something I never got is if you don't need a lot of computing power just to manipulate text, then why are things like Python and NodeJS so slow?
2018-08-15 at 8:45 PM UTC
Originally posted by Cootehill
Something I never got is if you don't need a lot of computing power just to manipulate text, then why are things like Python and NodeJS so slow?
"so slow" is relative, but the reason they're significantly slower than something like C is their dynamism. The model for thinking about python/ruby/node is that every object you make requires instantiating a hashmap. Every time you access a member or call a method you have to do a lookup in that map. This is what lets you stick arbitrary properties or methods onto instances at runtime. Sometimes it involves several hashmap lookups because you have to walk up something like a prototype chain. Versus compiled, statically typed languages, where you can burn in addresses (sorta, there's OS magic in there but it doesn't count) during compilation. When you access a member you calculate an offset (usually free on x86) and off you go, instead of having to go through a hashmap that may or may not be in a cache.
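You can actually watch the per-instance hashmap thing happen in CPython. A minimal sketch (class and attribute names are made up for illustration):

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
# the per-instance "hashmap" is literally exposed as a dict
print(p.__dict__)            # {'x': 1, 'y': 2}

# sticking a member on at runtime is just a dict insert
p.label = "whatever"
print("label" in p.__dict__)

# a failed instance lookup falls back to walking the class chain,
# which is the Python version of the prototype-chain walk
print(Point.__mro__)
```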
Of course it's more complex than that; things like GC, loop unrolling, blood magicks or worse, UB, all play a part too. And interpreters/VMs can do their own sorts of sorcery to make the situation better (the v8 blog has a pretty interesting post on JS arrays and how it basically lies to you: it gives you real arrays, then swaps them out for a hashmap once you start doing something stupid like defining non-numeric indices), but the dynamic character of these languages is generally cited as the central thing that makes them difficult or impossible to optimize.
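Python has a rough analogue of that fixed-layout-vs-hashmap trade, except you opt in manually with `__slots__`: slotted instances store attributes at fixed offsets and drop the per-instance dict, and in exchange you lose the ability to bolt on arbitrary members. A quick sketch (class names are made up):

```python
class Dynamic:
    def __init__(self, x):
        self.x = x

class Fixed:
    __slots__ = ("x",)     # fixed layout: no per-instance dict gets allocated
    def __init__(self, x):
        self.x = x

d, f = Dynamic(1), Fixed(1)
print(hasattr(d, "__dict__"))   # True  -- backed by a hashmap
print(hasattr(f, "__dict__"))   # False -- nothing to rummage through

d.y = 2                         # fine, it's just a dict insert
try:
    f.y = 2                     # not fine: the layout is fixed
except AttributeError:
    print("no arbitrary members on a slotted instance")
```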
2018-08-15 at 9:03 PM UTC
Also I got curious about the relative "power" of the N64 and Apollo computers so I looked it up. I was expecting the factoid was going to be doing something dumb like taking the sum of the N64's CPU and coprocessor clock speeds and calling that the N64's "computing power". But actually the N64's CPU clock speed alone is about 50 times higher than the Apollo Guidance Computer's.
The takeaway I got was that the AGC was fucking weird. It has some weird features I've never seen before, like a 4-phase clock, a parity bit on every 16-bit word, and "core rope memory", which I didn't know was a thing. Comparatively it's a lot slower to do arithmetic on the AGC than on the N64, but on the other hand things like the parity bits and the all-code-in-ROM design are clearly reliability measures made at the cost of speed, so it still ends up being a bit of an apples-and-oranges comparison.
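Quick sanity check on the "about 50 times" figure, assuming the commonly quoted clocks (93.75 MHz for the N64's VR4300, 2.048 MHz for the AGC's master clock):

```python
n64_mhz = 93.75   # NEC VR4300 clock, the commonly quoted figure
agc_mhz = 2.048   # AGC master clock, also the commonly quoted figure
print(n64_mhz / agc_mhz)   # ~45.8, so "about 50x" holds up
```

Keeping in mind clock speed alone ignores instruction timing, which is exactly the apples-and-oranges part.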