|
Roosevelt posted:i think he’s probably just a stupid rear end in a top hat Hmm
|
# ? May 19, 2024 23:30 |
|
FAUXTON posted:train data is important
|
# ? May 19, 2024 23:36 |
|
"airbags deployed" refers to musk stans in your replies and it's never an accident
|
# ? May 19, 2024 23:38 |
|
tk posted:Must’ve thought those flashing lights were a fire truck. I honestly wonder if flashing lights screw up something in their driving software. Has anyone tried shining a strobing flashlight at a auto driving Tesla?
|
# ? May 19, 2024 23:39 |
|
Viscous Soda posted:I honestly wonder if flashing lights screw up something in their driving software. Has anyone tried shining a strobing flashlight at a auto driving Tesla?
|
# ? May 19, 2024 23:44 |
|
Arms_Akimbo posted:Still not as good as this i remember this lol definitely better than the cybertruck
|
# ? May 19, 2024 23:48 |
|
Viscous Soda posted:I honestly wonder if flashing lights screw up something in their driving software. Has anyone tried shining a strobing flashlight at a auto driving Tesla?

it's not terribly hard to guess: they're pretty lovely quality SDR sensors with fairly small, frequently dirty lenses, and emergency vehicle lights are extremely bright.

so, let's say someone is driving at night. exposure-wise, those tiny cameras will have their aperture open as wide as it can go without compromising on shutter speed or focal plane (i hope), while probably bumping up light sensitivity (ISO) as far as it can go without being unusably grainy.

now imagine extremely bright lights cycling while hitting them all at different angles: they're all going to have to rapidly shift exposure settings to compensate, and (assuming they're not independent of each other) since they're all at different angles, none of them can use the exact same settings, i.e. they will basically freak the gently caress out. if they don't clamp down (i.e. up, lol) on the aperture, or down on the shutter speed or ISO, then they're just going to get a completely overexposed, bright frame, and if they do (but the light is off at that time), then they'll get an underexposure where (maybe almost) nothing will be visible.

of course i have not tried this myself with a tesla, but having operated many, many digital cameras i'm pretty sure something like this is unavoidable for at least a few ms, and while that might not sound like a lot, it's probably critical when you're going like 30+ mph and those lights cycle continuously.

Beeftweeter fucked around with this message at 02:39 on May 20, 2024 |
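the feedback loop described there is easy to sketch. toy model below with made-up constants (nobody outside tesla knows their actual AE pipeline): a naive metering loop always adjusts for the frame it just saw, and by the time the adjustment lands, the strobe has flipped.

```python
# toy auto-exposure loop chasing a strobing light. all numbers invented;
# this is the generic feedback problem, not anyone's real camera firmware.

def auto_exposure_sim(n_frames, strobe_period=2, target=0.5, gain_step=2.0):
    """Return captured brightness per frame: 0.0 = black, 1.0 = clipped."""
    exposure = 1.0          # arbitrary linear exposure setting
    captured = []
    for f in range(n_frames):
        strobe_on = (f // strobe_period) % 2 == 0
        scene = 100.0 if strobe_on else 0.05   # strobe is ~2000x ambient
        frame = min(1.0, scene * exposure)     # sensor clips at 1.0
        captured.append(frame)
        # meter the frame we just captured, adjust one step for the next one
        if frame > target:
            exposure /= gain_step
        else:
            exposure *= gain_step
    return captured
```

with these constants, every strobe-on frame clips to full white and every strobe-off frame comes back nearly black; the loop never converges as long as the light keeps cycling.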
# ? May 19, 2024 23:54 |
|
and when you have a "memory" of about 60 frames, dollars to donuts the pathing just goes apeshit because it hasn't identified the road in seconds
|
# ? May 20, 2024 00:00 |
|
infernal machines posted:and when you have a "memory" of about 60 frames

yeah, i have no idea how long it keeps the current and previous frames in memory for analysis. i also don't know what framerate they use, but i wouldn't be surprised in the least if it were less than 60; with multiple cameras that'd be a lot of frames to process for kinda-lovely hardware, even if it is specialized.

now i'm kinda wondering what a screengrab from the combined footage really looks like in such a scenario. i was just thinking i remembered seeing some when the problem was first identified, and they were all blown out, but tbh i'm not sure and that might be a false memory lol. but either way i imagine they'd probably be completely blown out, though i suppose it's possible they have some HDR-ish kind of technique to try to combine a few different exposures into a usable frame (assuming the cameras can be set independently, this time).

even so, a fairly low-res stack of exposures (and chroma res, in particular, is probably something they compromise on; again, with multiple cameras, that's a lot of frames almost regardless of the framerate, red being the biggest casualty of such a choice, lol) probably isn't going to yield anything usable without some further processing, which they likely wouldn't be able to do on the fly that fast, and even then you're still not going to get a lot of usable data.

so even assuming they do actually try something like that (which, afaict, they don't), it's not likely to help much anyway. those lights are extremely bright, you'd probably need a huge sensor on each camera to compensate.

e: lol i randomly missed a bunch of words in this and the previous post because my dog was sitting on me. just fyi, i didn't change any content in either, just added them back to where they were intended and a bit of context while i was at it

Beeftweeter fucked around with this message at 02:35 on May 20, 2024 |
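for the HDR-ish idea, here's a minimal sketch of what exposure bracketing buys you and what it doesn't. the radiances and bracket factors are invented and this isn't anyone's real pipeline; the mechanical point is that a light bright enough to clip in every bracket can't be recovered by stacking.

```python
import numpy as np

# toy bracketed-exposure merge: keep mid-tone pixels from each bracket,
# map them back to linear radiance, average. all numbers invented.

scene = np.array([0.001, 0.01, 0.1, 1.0, 10.0, 100.0])   # "true" radiance
brackets = [8.0, 1.0, 0.125]                              # exposure multipliers

stack = np.zeros_like(scene)
weights = np.zeros_like(scene)
for t in brackets:
    frame = np.clip(scene * t, 0.0, 1.0)       # sensor clips at 1.0
    usable = (frame > 0.05) & (frame < 0.95)   # trust mid-tones only
    stack += np.where(usable, frame / t, 0.0)  # undo the exposure factor
    weights += usable

merged = np.where(weights > 0, stack / np.maximum(weights, 1.0), np.nan)
```

the mid-range radiances come back fine, but the two brightest "pixels" (the emergency lights, in this analogy) clip in every single bracket and come out as nothing.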
# ? May 20, 2024 00:18 |
|
just occurred to me that "Tesla robotaxis" would require them to underwrite their own insurance, without the ability to just blame the driver for not supervising them. lol, or I guess they just have a driver in there, in which case lmao
|
# ? May 20, 2024 00:21 |
|
...! posted:no such thing as being mean/cruel to a billionaire so the format usually goes "maybe it was a bit mean, maybe even a little bit cruel, however <justification>" the sentence abruptly ends because no justification was required
|
# ? May 20, 2024 00:32 |
|
that’s one of the things that bothers me about “cameras are enough!”: the human eye is actually pretty good at high dynamic range, so it’s not like you have an equivalent sensor.
|
# ? May 20, 2024 00:43 |
|
as someone who does a lot of photography the cameras are definitely not enough. Your eyes are amazing at compensating for different lighting conditions, so much so that we’re notoriously bad estimators of available light, but more importantly they’re also connected to your brain.
|
# ? May 20, 2024 00:54 |
|
Eeyo posted:that’s one of the things that bothers me about “cameras are enough!”, the human eye is actually pretty good at high dynamic range so it’s not like you have an equivalent sensor.

in elon's view it's more like "cameraphone cameras are enough!"

they're not. not even close

Megabound posted:as someone who does a lot of photography the cameras are definitely not enough. Your eyes are amazing at compensating for different lighting conditions so much so that we’re notoriously bad estimators of available light, but more importantly they’re also connected to your brain.

megabound might be the only person itt who does more photography than i do, and while he's definitely more on the analog side, he's right
|
# ? May 20, 2024 00:56 |
|
reminder https://x.com/greentheonly/status/1412597377228226562
|
# ? May 20, 2024 01:04 |
|
poo poo looks like a magic eye
|
# ? May 20, 2024 01:07 |
|
seems to be something floating just in front of and above the camera all the time, weird
|
# ? May 20, 2024 01:08 |
|
that was a heater element on the window of the car it was originally trained on
|
# ? May 20, 2024 01:09 |
|
paint the back of your car in vantablack to get endlessly rear ended by teslas
|
# ? May 20, 2024 01:11 |
|
Beeftweeter posted:in elon's view it's more like "cameraphone cameras are enough!" only the purity of silver chemistry can truly capture the face of god
|
# ? May 20, 2024 01:12 |
|
Megabound posted:as someone who does a lot of photography the cameras are definitely not enough. Your eyes are amazing at compensating for different lighting conditions so much so that we’re notoriously bad estimators of available light, but more importantly they’re also connected to your brain.

Having taken a lot of low-light photos this year starting with the eclipse, it’s wild that something as sophisticated as an iPhone 11 still ruins contrast at night compared to what your own eyes will perceive. Maybe a dedicated camera and lens system on manual mode wouldn’t ruin the shot, but that just further underscores that image processing can only do so much.

Please feel free to correct me if any of that is wrong; I’d be curious to know just so I can shoot a good night / low light still with less than five pounds of kit.
|
# ? May 20, 2024 01:24 |
|
Tripod and manual control over the camera at a minimum. Your iPhone is prioritising handholding more than correct exposure at low light. A longer exposure at a lower ISO will reduce sensor noise and increase dynamic range.
|
# ? May 20, 2024 01:29 |
|
kitten emergency posted:paint the back of your car in vantablack to get endlessly rear ended by teslas the video is an artifact of the way he's displaying the calculated depth, presumably the depth isn't actually based on the luminance, that's just how he represented the 3d point data but... it's losing all ability to estimate depth every time the sensors get overwhelmed by light, which might have something to do with why teslas like to slam into stationary emergency vehicles infernal machines fucked around with this message at 01:40 on May 20, 2024 |
# ? May 20, 2024 01:34 |
|
Elder Postsman posted:seems to be something floating just in front of and above the camera all the time, weird

infernal machines posted:that was a heater element on the window of the car it was originally trained on

yeah, and because they use the same training data for every vehicle, they have to just eliminate that area from the footage even on cars that don't have it now lol

incidentally that's similar to why the cybertruck can't do fsd. the design necessitates a completely different camera config
|
# ? May 20, 2024 01:34 |
|
Shumagorath posted:Having taken a lot of low-light photos this year starting with the eclipse, it’s wild that something as sophisticated as an iPhone 11 still ruins contrast at night compared to what your own eyes will perceive. Maybe a dedicated camera and lens system on manual mode wouldn’t ruin the shot, but that just further underscores that image processing can only do so much.

an iphone 11 has a tiny sensor compared to even a midrange point and shoot camera, it's not going to be great at low-light photography no matter what you do.

so yes: image processing can only do so much, there's no getting around physical limitations (though you can cheat a bit). if the sensor isn't capable of capturing the scene, then it's not capable of capturing the scene; you can't just magic more data out of nowhere.

however, you can (and iphones do, if enabled) stack several burst exposures at once and then average them out, similar to the HDR method i was talking about above. that allows you to stretch the sensor's capabilities, but it takes a lot of processing power and still won't be as good as a "real" camera.

e: shot-for-shot, anyway. most people have a terrible eye for detail and legitimately can't tell the difference individually, but put them side by side and phone sensor deficiencies become more obvious

Beeftweeter fucked around with this message at 01:41 on May 20, 2024 |
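the stack-and-average trick is easy to demonstrate. sketch below with numpy standing in for the phone's ISP; real phone pipelines also align frames and reject motion, this is just the averaging part: gaussian read noise drops by roughly sqrt(N) while the constant signal survives.

```python
import numpy as np

# simulate a burst of 16 frames of a static scene with gaussian read noise,
# then average them. purely illustrative; not apple's actual pipeline.

rng = np.random.default_rng(0)
signal = 0.3                      # true (static) value of every pixel
n_frames = 16
frames = signal + rng.normal(0.0, 0.05, size=(n_frames, 10_000))

single_noise = frames[0].std()             # one frame: sigma around 0.05
stacked_noise = frames.mean(axis=0).std()  # 16 averaged: roughly sigma / 4
```

the averaged result keeps the signal at 0.3 but cuts the noise by about a factor of four, which is exactly the sqrt(16) you'd expect.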
# ? May 20, 2024 01:38 |
|
lmao how much more training data do these nerds think is needed before it stops driving into oncoming traffic, trains, or through gates?
|
# ? May 20, 2024 01:42 |
|
Megabound posted:Tripod and manual control over the camera at a minimum. Your iPhone is prioritising handholding more than correct exposure at low light. A longer exposure at a lower iso will reduce sensor noise and increase dynamic range.

Thanks! I had exposure and ISO backwards.
|
# ? May 20, 2024 01:49 |
|
Shumagorath posted:Thanks! I had exposure and ISO backwards.

they're pretty different concepts, but both will allow for better or worse light sensitivity.

ISO is the sensor's (or film's) light sensitivity; exposure is a combination of settings: aperture and shutter speed. a higher aperture (somewhat confusingly to noobs, the numbers are inverted: a "high aperture" like f/22 means the lens is stopped almost all the way down) will allow less light to hit the sensor, while a lower one may be completely open, allowing in more light (both boundaries are defined by the lens; some expensive ones can do e.g. f/0.x, while some, e.g. iphones, will have a maximum of f/2.2, which is about average). but with lower aperture, you also get less of a focal plane, which is your trade-off there. shutter speed is, well, the shutter speed: the longer the shutter stays open, the more light is allowed to hit the sensor. combining the three, you get overall exposure, but usually in photography that refers to settings without regard to ISO.

ISO used to refer to film speed, so if you ever messed with film photography, you'll get the idea. basically, the higher you go, the more sensitive it is; the trade-off is image quality. modern digital cameras can vary wildly in ISO capabilities too. i have a couple that will happily do ISO 40000 with little noise; i also have a few where ISO 1600 is unacceptably noisy. it highly depends on the sensor, but huge strides in improving light sensitivity have been made in the past ~15 years regardless.

and like i said, there's a lot of postprocessing tricks available today to make a sensor punch above its weight, and the iphones do more or less all of them, even if you don't tell it to.

e: stacking low-light exposures as described above will also allow the phone to cut down on sensor noise (since it will average them and, because we're talking about fractions of a second, be able to detect constants vs. random noise for removal)

Beeftweeter fucked around with this message at 03:38 on May 20, 2024 |
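those trade-offs are literally just logarithms. here's the standard exposure-value arithmetic (nothing tesla- or iphone-specific; the `ev100` helper is mine, and "sunny 16" is the old daylight rule of thumb):

```python
import math

# EV at ISO 100:  EV100 = log2(N^2 / t) - log2(ISO / 100)
# where N is the f-number and t the shutter time in seconds. equal EV100
# means equal image brightness, so stops trade off one-for-one.

def ev100(f_number, shutter_s, iso):
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

daylight = ev100(16, 1 / 125, 100)   # "sunny 16" rule: about EV 15

# two equivalent low-light exposures: halve the shutter, double the ISO
a = ev100(2.0, 1 / 15, 800)
b = ev100(2.0, 1 / 30, 1600)
```

`a` and `b` come out identical, which is the whole triangle in one line: you can buy back a stop of shutter speed with a stop of ISO (at the cost of noise), or with a stop of aperture (at the cost of depth of field).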
# ? May 20, 2024 02:07 |
|
somehow i'd never heard a tesla in reverse before now, and it makes some kind of freeware jetsons noise and i fuckin hate it. kiss my entire rear end elon
|
# ? May 20, 2024 02:08 |
|
Megabound posted:as someone who does a lot of photography the cameras are definitely not enough. Your eyes are amazing at compensating for different lighting conditions so much so that we’re notoriously bad estimators of available light, but more importantly they’re also connected to your brain. Under ideal conditions the human eye can spot individual photons
|
# ? May 20, 2024 02:17 |
|
well-read undead posted:fsd tried to drive this mf into a train once and he kept using it Fool me once, shame, shame on you Fool me twice, can't get fooled again (because you've been smashed by a train)
|
# ? May 20, 2024 02:18 |
|
Tunicate posted:Under ideal conditions the human eye can spot individual photons

How would you even set that up? 100% dark underground room and photoelectric effect?
|
# ? May 20, 2024 02:20 |
|
Shumagorath posted:How would you even set that up? 100% dark underground room and photoelectric effect?

It's in Nature if you wanna read the methods https://www.nature.com/articles/ncomms12172

That doesn't mean it'll make you understand it any better tho

quote:To probe the absolute limit of light perception, we built a single-photon quantum light source with sub-Poissonian photon number statistics based on spontaneous parametric down-conversion (SPDC). SPDC is a quantum optical technique in which correlated pairs of photons (called signal and idler) are produced probabilistically from a higher energetic pump photon in a non-linear crystal following energy and momentum conservation16,17 (Fig. 1a). By detecting one of the photons (idler) and sending the other (signal) to the observer’s eye, our SPDC source allowed us to create an effective single-photon light source with a sub-Poissonian photon number distribution (Fig. 1a, Supplementary Fig. 1 and see Methods section for details).
|
# ? May 20, 2024 02:21 |
|
Shumagorath posted:"airbags deployed" refers to musk stans in your replies and it's never an accident airbags in bio
|
# ? May 20, 2024 02:31 |
|
I’m pretty myopic, and this video feels like it’s worse than my ability to make things out without glasses, but it’s illegal for me to drive without vision correction.
|
# ? May 20, 2024 02:41 |
|
That's lower resolution than a Game Boy.
|
# ? May 20, 2024 02:49 |
|
Beeftweeter posted:but with lower aperture, you also get less of a focal plane, which is your trade-off there. after decades of knowing this but not really getting why, this video explained it in a way that really made it click for me. timestamped for the relevant part: https://www.youtube.com/watch?v=8bXhsUs-ohw&t=386s
|
# ? May 20, 2024 02:51 |
|
DominoKitten posted:I’m pretty myopic, this video feels like it’s worse than my ability to make things out without glasses, but it’s illegal for me to drive without vision correction.

unless i'm misunderstanding, that's just spatial data, so it'd be like trying to navigate with your glasses off while also having a fairly low resolution (yet still sharp, to the resolution's limit, anyway), fairly low-color (4:2:0, lol nice) image overlaying it. like, i imagine the object identification is done using, you know, the actual video frame
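for anyone wondering what 4:2:0 actually costs, here's a toy single-plane version of the subsampling step: chroma gets averaged over 2x2 blocks, so a one-pixel color detail (a small red light, say) smears out to a quarter-strength blob. real codecs apply this to the Cb and Cr planes after an RGB-to-YCbCr convert, and luma keeps full resolution; this is just the downsample/upsample round trip.

```python
import numpy as np

# toy 4:2:0 chroma subsampling on one plane: average every 2x2 block,
# then blow it back up to full size.

def subsample_420(chroma):
    h, w = chroma.shape                                  # assumes even dims
    small = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return small.repeat(2, axis=0).repeat(2, axis=1)

plane = np.zeros((4, 4))
plane[1, 1] = 1.0                 # one strongly colored pixel
out = subsample_420(plane)        # now a washed-out 2x2 blob
```

the lone full-strength pixel comes back as a 2x2 patch at a quarter of its value, which is why small saturated details (brake lights, strobes) are the first thing chroma subsampling eats.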
|
# ? May 20, 2024 02:51 |
|
I did better than that using a Galaxy S3 controlling a soccer playing robot for a uni assignment
|
# ? May 20, 2024 02:52 |
|
Twerk from Home posted:That's lower resolution than a Game Boy. i'm not sure (e: legitimately, i have no idea) how high res it really needs to be, but afaik it does at least mean it can't detect objects under a certain size. no idea what that would be though (also again, only if i'm right about this "just" being the spatial data. if it's not, lol, lmao) Beeftweeter fucked around with this message at 02:57 on May 20, 2024 |
# ? May 20, 2024 02:55 |