|
If valuation at last round was >= $1B US, you get to wear the horn.
|
# ? Dec 13, 2016 18:11 |
|
Specifically choose to move to a company that produces a tangible product as a last-ditch effort to rekindle my excitement for my job. Only worrisome thing is that it is hard to identify a specific demographic for the product, as long-term customers share little in terms of specific overlap. The tech projects are definitely worthwhile and applicable, although the operations projects might be a little bit of a pipe dream.
|
# ? Dec 13, 2016 18:44 |
|
Gail Wynand posted:Google's self driving car program was always 95% pure PR, just like most of their other "moonshot" projects.
|
# ? Dec 13, 2016 20:13 |
|
archangelwar posted:Specifically choose to move to a company that produces a tangible product as a last-ditch effort... Man, I knew the tech industry is kinda borked, but if this is considered a solid thing then god drat. The rabbit hole goes deep indeed.
|
# ? Dec 13, 2016 21:18 |
|
Arsenic Lupin posted:The self-driving car is known to be Sergey Brin's baby; he promised in 2012 that self-driving cars would be on the market in five years. So a vanity project, but not a PR throwaway. They poured way too much money into Google Car for it to have been a PR ruse. They are working on machine vision. It's core to their business. It is a major part of everything they do. Basically everyone who has ever worked on machine vision has a branch that says "hey, let's use this to navigate around"; most universities don't have as much money as Google, so they use a toy car instead of a real one. But self-driving cars are not some wacky side thing disconnected from everything else they do. The same software that can recognize and read a stop sign at an angle in weird lighting conditions at 60mph is the same software they use in boring database stuff on their image search and translation and youtube whatever.
|
# ? Dec 13, 2016 21:19 |
|
Gail Wynand posted:These projects are also great places to stick top engineers who are sick of their jobs, sure you lose them on whatever they're actually good at but at least they're not at your competition. I've considered Google to have a secret strategy with respect to hiring which is pretty much just sucking up as much talent as possible to keep them out of competition, adding in all the PR moonshot stuff to attract and retain them. Like I see PhDs get gobbled up and it's easy to see why: they pay you a ton of money, you can still research stuff you like and publish, they treat you well, and you don't have to deal with other academic duties like teaching or tenure or whatever. Arcteryx Anarchist fucked around with this message at 21:28 on Dec 13, 2016 |
# ? Dec 13, 2016 21:26 |
|
lancemantis posted:I've considered Google to have a secret strategy with respect to hiring which is pretty much just sucking up as much talent as possible to keep them out of competition, adding in all the PR moonshot stuff to attract and retain them Google has fantastic working conditions and a great reputation and they really don't need anything else. Unlike Microsoft they are seen as the future despite having about the same grasp of where their future growth is heading.
|
# ? Dec 13, 2016 21:28 |
|
Owlofcreamcheese posted:The same software that can recognize and read a stop sign at an angle in weird lighting conditions at 60mph is the same software they use in boring database stuff on their image search and translation and youtube whatever. Could you elaborate on the elements of the software that are the same? My understanding is that the vehicular control systems rely heavily on sensors that are irrelevant to, f.e., text detection for translation. It would surprise me to discover that more than a trivial amount of code was shared between the autonomous vehicle system and image search.
|
# ? Dec 13, 2016 21:30 |
|
Identifying and categorizing/labeling objects is an area of interest to both image search and other machine vision systems
|
# ? Dec 13, 2016 21:36 |
|
Yes, I'm aware of that, but there are many different approaches, which manifest in different software and research directions.
|
# ? Dec 13, 2016 21:37 |
|
Subjunctive posted:Could you elaborate on the elements of the software that are the same? My understanding is that the vehicular control systems rely heavily on sensors that are irrelevant to, f.e., text detection for translation. It would surprise me to discover that more than a trivial amount of code was shared between the autonomous vehicle system and image search. Google cars do of course do a ton of real-time sensing, but the whole gimmick that separates Google cars from other self-driving cars is that Google cars have a comprehensive model of the entire area they drive in. Pre-compiled. Which includes a bunch of image processing that they already got from Google Street View cars and Google Maps and so on. And yeah, you're right, I doubt there is a ton of shared code (although I bet there are some shared libraries); it's more that when you hire top experts on machine vision you get successful machine vision projects. Not that they copy and paste their pre-written code.
|
# ? Dec 13, 2016 21:47 |
|
Owlofcreamcheese posted:it's more that when you hire top experts on machine vision you get successful machine vision projects. Not that they copy and paste their pre-written code. Then how is it the "same software"? That's like saying that Chrome and Doom are the same software because they're both renderers. Machine vision is an enormous field, and the techniques that build you a huge online-trained model of static image contents aren't necessarily the ones that let you make realtime decisions based on frame sequences and a much smaller model, plus sensor fusion. Do you see a lot of cross-citation between the autonomous vehicle researchers and people working on, f.e., environmental classification of video?
|
# ? Dec 13, 2016 22:39 |
|
Like, I am an autonomous vehicle keener, and my history working with Facebook's CV people was "oh, so a car could use that to..."/"no, not really, because...".
|
# ? Dec 13, 2016 22:49 |
|
Subjunctive posted:f.e., text detection for translation. Subjunctive posted:f.e., environmental classification of video? e. g. already exists to do what you want to do here.
|
# ? Dec 13, 2016 23:43 |
|
Subjunctive posted:Do you see a lot of cross-citation between the autonomous vehicle researchers and people working on, f.e., environmental classification of video? Yes? Clearly? When google claims they have a car that can see the hand signals bikers make, do you actually think that is totally unrelated to the million other places google is doing stuff that involves figuring out human arm positions? Like they told the car guy to just work that whole idea out from scratch? And he wasn't allowed to look at the places where they had already done that stuff?
|
# ? Dec 14, 2016 00:02 |
|
MickeyFinn posted:e. g. already exists to do what you want to do here. yes, of course it does? though in the latter case not very well unless something amazing is coming out of this batch of CVPR papers. I haven't checked arxiv lately though, and the field moves quickly. Owlofcreamcheese posted:do you actually think that is totally unrelated to the million other places google is doing stuff that involves figuring out human arm positions No, I think some of the essential techniques are portable. I don't think, as was your assertion that triggered this whole line of discussion, that it's the "same software" being used in "boring database" stuff like "text translation", or even substantially the same software. Google will have to develop many vision techniques in the pursuit of autonomous vehicles that have no application to image search or YouTube captioning.
|
# ? Dec 14, 2016 00:28 |
|
Subjunctive posted:No, I think some of the essential techniques are portable. I don't think, as was your assertion that triggered this whole line of discussion, that it's the "same software" being used in "boring database" stuff like "text translation", or even substantially the same software. Google will have to develop many vision techniques in the pursuit of autonomous vehicles that have no application to image search or YouTube captioning. I have a hard time thinking of a machine vision technique that wouldn't have applications in images or videos. What techniques are you suggesting?
|
# ? Dec 14, 2016 01:02 |
|
Owlofcreamcheese posted:I have a hard time thinking of a machine vision technique that wouldn't have applications in images or videos. What techniques are you suggesting? Anything that synthesizes a 3D scene from stereo images, as an example. All the sensor fusion stuff. Is that enough for it to not be "the same software" that powers image search or photo auto-clustering?
|
# ? Dec 14, 2016 01:05 |
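Since the thread keeps circling this, a minimal sketch of what "synthesizing a 3D scene from stereo images" reduces to at its core: rectified stereo triangulation, where depth falls out of the pixel disparity between the two views. The focal length and baseline below are made-up calibration numbers, not anything from Google's cars.

```python
# Depth from stereo disparity for a rectified camera pair:
# depth = focal_length_px * baseline_m / disparity_px.
# focal_px and baseline_m are hypothetical calibration values.

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.3):
    """Depth in meters of a point whose image shifts disparity_px
    pixels between the left and right views."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity has no finite depth")
    return focal_px * baseline_m / disparity_px

# Nearby objects shift more between the two views than distant ones:
near = depth_from_disparity(64)  # large shift -> close point
far = depth_from_disparity(8)    # small shift -> distant point
```

Real pipelines, whether on a car or over archived video, spend nearly all their effort on the matching step that produces the disparity map; the triangulation itself is this one line.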
|
Subjunctive posted:Anything that synthesizes a 3D scene from stereo images, as an example. All the sensor fusion stuff. Hmm yes, I can think of no reason that YouTube, the largest collection of stereo image video in the history of earth, could ever find any sort of use for transforming that data into 3D scenes for a company that also sells VR headsets.
|
# ? Dec 14, 2016 01:24 |
|
Owlofcreamcheese posted:Hmm yes, I can think of no reason that YouTube, the largest collection of stereo image video in the history of earth, could ever find any sort of use for transforming that data into 3D scenes for a company that also sells VR headsets. Fair enough, though I think pretty far from your original claim. Inference of speed and direction? Prediction of subsequent frames? Doing any of this in realtime with a storage-limited model? Are you really still saying it's the same software? I just find that such an unbelievable position to take given the different contexts in which they operate and the different success criteria for each system.
|
# ? Dec 14, 2016 01:33 |
|
Like remember when Google made a big deal about the image stabilization they used in Google Glass, then made a big deal of new image stabilization tools being rolled out on YouTube? Then all the other apps Google had suddenly started doing image stabilization stuff too, like live view translate and Google Goggles didn't need to be still images anymore. They will also use that stabilization in the Google car's cameras. And if they don't, because it's not good enough, and they make a better way to stabilize an image, that will be in the next version of YouTube's tools and in live view and the next version of Google phone cameras and whatever else they can find to use it in. Like seriously, even if they aren't doing the exact same code the stuff they do in one field is useful in another. Like you don't think they will use the sensor fusion for something else? You don't think if they can map image data onto lidar data that won't show up in the 3D models they use on Street View? It's all connected. They don't write everything from scratch every time they do something new. quote:Fair enough, though I think pretty far from your original claim. Inference of speed and direction? Prediction of subsequent frames? Doing any of this in realtime with a storage-limited model? That sounds extremely useful for AR stuff, like Google live view translation. Or Pokemon Go-style AR that was "real" instead of just using the gyro. Owlofcreamcheese fucked around with this message at 01:41 on Dec 14, 2016 |
# ? Dec 14, 2016 01:37 |
|
But that's not the only place where there are fundamental engineering challenges. It's true that you're dealing with very different sensors and data types on a car, but you're still having to build algorithms for feature classification and decisioning, which requires a robust framework that, it turns out, can be used for a lot of different applications. There's also more overlap than you might think between the engineering skillset it takes to create a self-driving car and a classification system for the web. For example, there's this company: Diffbot. The founder got his start working on autonomous rovers for the X Prize, and the techniques that their first-generation classification system used to determine, for example, what part of a website was the headline were all things he used for that. I mean, it's undeniable that the self-driving car thing was mostly a Sergey Brin pet project, but I don't think it's fair to say it was completely useless to Google either.
|
# ? Dec 14, 2016 01:42 |
|
Owlofcreamcheese posted:Like seriously, even if they aren't doing the exact same code the stuff they do in one field is useful in another. That was my whole point: the techniques might be portable, but the software -- your original assertion -- is unlikely to be. I wouldn't be surprised at all to discover that YouTube image stabilization or photogrammetry was 100% different image-related code from that found in Glass or cars. I'm trying to think of who in my network I can ask that would a) know, and b) be willing to answer. (I'd actually be impressed if Google was able to share major, active-development code like that across YouTube/Glass/Daydream/cars. It would be quite the feat of dependency management!) E: I don't think it's useless to Google at all, I just don't think it's cookie cutter with their production services like image search. Someday my car's ability to park itself in SF will depend on how a million people answered captchas, I'm sure. Subjunctive fucked around with this message at 01:49 on Dec 14, 2016 |
# ? Dec 14, 2016 01:46 |
|
a foolish pianist posted:I work at a network analytics startup (deepfield), and the market is kinda weird. The potential customer base is pretty much just large ISPs and transit providers who do a lot of peering and need to figure out how much traffic is coming from network X and going to network Y. How is this different from tier 1 scale CDN stuff like Akamai or Cloudfront? Hasn't this field been saturated since the end of the first bubble? What's left to solve?
|
# ? Dec 14, 2016 01:52 |
|
MiddleOne posted:Google has fantastic working conditions and a great reputation and they really don't need anything else. Unlike Microsoft they are seen as the future despite having about the same grasp of where their future growth is heading. It depends on which part of Google you're talking about. Verily (their health sciences company) has a worse reputation than Amazon.
|
# ? Dec 14, 2016 02:02 |
|
Subjunctive posted:yes, of course it does? though in the latter case not very well unless something amazing is coming out of this batch of CVPR papers. I haven't checked arxiv lately though, and the field moves quickly. You misunderstand me: f.e. is not a real abbreviation, e.g. is the correct abbreviation. It is precisely the same length and means precisely the same thing. You don't need to invent a new abbreviation.
|
# ? Dec 14, 2016 03:20 |
|
Subjunctive posted:That was my whole point: the techniques might be portable, but the software -- your original assertion -- is unlikely to be. I wouldn't be surprised at all to discover that YouTube image stabilization or photogrammetry was 100% different image-related code from that found in Glass or cars. I'm trying to think of who in my network I can ask that would a) know, and b) be willing to answer. I mean I guess you can win this one on the premise that they probably don't use the same executable file and just share research instead, because I have no actual idea whether, when the "same" image stabilization ends up in YouTube as in Google Glass, that means they rewrote the code based on the same concept or wrote a library that both use. Or if they have a library that recognizes hand motions of bikers/sign language speakers/gmail users or if they just retyped the same ideas three times.
|
# ? Dec 14, 2016 03:37 |
|
|
Owlofcreamcheese posted:Or if they have a library that recognizes hand motions of bikers/sign language speakers/gmail users or if they just retyped the same ideas three times. These are just such vastly different problems that any sharing of ideas is bound to be high-level techniques, and one common library couldn't begin to be appropriate for both. Sign language especially has to deal with disambiguation of fingers. The gist is that you want to have a model skeleton and figure out the most likely 'pose' that a hand could be in; here's a recent paper on a technique like that. For a car trying to determine a cyclist's arm position, any skeletal model would be ridiculous overkill and a waste of computation for no discernible benefit to determine if a single joint is pointing "UP" or "DOWN". "Re-typing" is such a slander to the actual efforts that it's getting in the way of your techno-fetish.
|
# ? Dec 14, 2016 04:06 |
|
JawnV6 posted:For a car trying to determine a cyclist's arm position, any skeletal model would be ridiculous overkill and a waste of computation for no discernible benefit to determine if a single joint is pointing "UP" or "DOWN". http://patft.uspto.gov/netacgi/nph-...5&RS=PN/9014905 the computing device may then be configured to determine one or more subsets of data points that are indicative of at least a body region of a cyclist within the vicinity of the autonomous vehicle. The body region of the cyclist may include at least an upper-body region of the cyclist, such as the head, hands, and arms of the cyclist. The one or more subsets of data points may vary depending on the posture of the cyclist and the distance and angle at which the cyclist is detected by the autonomous vehicle's sensors. --- To determine whether a cyclist is turning instead of stopping, Google computers could compare the distance between the cyclist’s hand and head. A shorter distance between indicates a turn, whereas a longer distance probably indicates slowing or stopping. (This is a product of the hand signals most cyclists use.) The patent says Google may also consider the angle at which the cyclist’s elbow is bending, and the size and shape of the cyclist’s hands, arms and head.
|
# ? Dec 14, 2016 04:13 |
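The heuristic the patent describes is simple enough to sketch directly: compare the distance between the cyclist's hand and head keypoints, reading a short distance as a turn signal and a long one as slowing or stopping. The keypoint format and threshold below are illustrative assumptions, not values from the patent.

```python
import math

# Toy version of the patent's described hand-signal heuristic:
# classify a cyclist's signal from the distance between two detected
# keypoints. Keypoints are hypothetical (x, y) positions in meters;
# the 0.5 m threshold is invented for illustration.

def classify_signal(head_xy, hand_xy, threshold_m=0.5):
    dist = math.dist(head_xy, hand_xy)
    return "turn" if dist < threshold_m else "slow_or_stop"

# Hand raised near the head vs. arm extended down and out:
raised = classify_signal(head_xy=(0.0, 1.6), hand_xy=(0.2, 1.4))
extended = classify_signal(head_xy=(0.0, 1.6), hand_xy=(0.5, 0.9))
```

Notably, nothing in this heuristic requires a full skeletal hand model; the hard part is the upstream detector that finds the head and hand points at all.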
|
Papercut posted:It depends on which part of Google you're talking about. Verily (their health sciences company) has a worse reputation than Amazon. Problem is while Google is the most famous place with a great working environment for engineers, it's not the only one, and Google is not the hottest game in town anymore. Facebook has stolen a ton of their thunder as any poster here in web development can attest. FB keeps dropping amazing open source projects on the community on a practically weekly basis and Google is just not keeping up. The prestige of working on open source is important to a ton of engineers. Credit where credit is due though, when it comes to cloud computing platforms Google is giving AWS a run for their money these days, and they have a lot of people's attention.
|
# ? Dec 14, 2016 05:31 |
Dr. Fishopolis posted:How is this different from tier 1 scale CDN stuff like Akamai or Cloudfront? Hasn't this field been saturated since the end of the first bubble? What's left to solve? The problem is sort of orthogonal to that - CDNs usually sit caches on other peoples' networks to minimize latency, so measuring traffic across network boundaries is still difficult. And the field definitely isn't saturated - ISPs are desperate to get good measurements for things like video streams so they can push their way up the Netflix rankings, and the networks want accurate numbers to work out peering agreements. It's basically unrelated to CDNs.
|
|
# ? Dec 14, 2016 05:44 |
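The accounting problem described above largely reduces to aggregating per-flow byte counts by the networks on each end. A minimal sketch, using hypothetical flow tuples rather than any real flow-export format:

```python
from collections import Counter

# Sum traffic volume by (source ASN, destination ASN) pair -- the
# number an ISP needs to answer "how much traffic is coming from
# network X and going to network Y?". Flow records here are invented
# (src_asn, dst_asn, byte_count) tuples, not a real export format.

def bytes_by_asn_pair(flows):
    totals = Counter()
    for src_asn, dst_asn, nbytes in flows:
        totals[(src_asn, dst_asn)] += nbytes
    return totals

flows = [
    (15169, 7018, 1200),  # e.g. Google -> AT&T
    (15169, 7018, 800),
    (2906, 7018, 5000),   # e.g. Netflix -> AT&T
]
totals = bytes_by_asn_pair(flows)
```

The hard part in practice isn't this aggregation but attribution: mapping addresses to ASNs correctly and sampling flows without losing the traffic that matters.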
Subjunctive posted:That was my whole point: the techniques might be portable, but the software -- your original assertion -- is unlikely to be. I wouldn't be surprised at all to discover that YouTube image stabilization or photogrammetry was 100% different image-related code from that found in Glass or cars. I'm trying to think of who in my network I can ask that would a) know, and b) be willing to answer. The particulars of the software will be different from use case to use case, but the algorithms, and particularly the training sets (which is the real hard bit about this sort of ML, building good training corpora), will be useful in lots of different scenarios. The code that uses the models will be different, of course, but the models will probably be very similar, if not the same.
|
|
# ? Dec 14, 2016 05:46 |
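To illustrate the distinction being drawn here, a sketch of one stand-in "model" consumed by two different application wrappers: the surrounding code differs per use case, while the trained artifact could be shared. The model, labels, and thresholds are all invented for illustration.

```python
# One shared "model" (a stand-in dict of class scores in place of real
# trained weights), wrapped by two different application pipelines.

def load_shared_model():
    # Stand-in for loading one set of trained weights.
    return {"stop_sign": 0.9, "cat": 0.1}

def labels_for_search_index(model):
    # Batch indexing: keep every label above a low, recall-oriented bar.
    return sorted(label for label, score in model.items() if score > 0.05)

def label_for_vehicle(model):
    # Realtime decision: one high-precision answer per frame, or nothing.
    best = max(model, key=model.get)
    return best if model[best] > 0.8 else None

model = load_shared_model()
```

Same scores, different consumers: the indexer wants everything plausible, the vehicle wants one answer it can act on or none at all.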
|
a foolish pianist posted:The particulars of the software will be different from use case to use case, but the algorithms, and particularly the training sets (which is the real hard bit about this sort of ML, building good training corpora), will be useful in lots of different scenarios. I agree, though I'm not sure you're quite right about the models. I was responding to a comment that was specifically asserting that it was the same software, like there's some librecognizeobjects.so that they run in the data center and the car. I apologize for my contribution to the derail. It stopped being productive quite some time ago.
|
# ? Dec 14, 2016 14:04 |
Subjunctive posted:I agree, though I'm not sure you're quite right about the models. I was responding to a comment that was specifically asserting that it was the same software, like there's some librecognizeobjects.so that they run in the data center and the car. Oh, yeah, there's no way the same little bit of code exists both places.
|
|
# ? Dec 14, 2016 17:25 |
|
a foolish pianist posted:The problem is sort of orthogonal to that - CDNs usually sit caches on other peoples' networks to minimize latency, so measuring traffic across network boundaries is still difficult. And the field definitely isn't saturated - ISPs are desperate to get good measurements for things like video streams so they can push their way up the Netflix rankings, and the networks want accurate numbers to work out peering agreements. It's basically unrelated to CDNs. Oh, I get it. I'm sure this is inevitable, but are you concerned that easy access to cross-boundary monitoring paves the way for easy detection and blocking of things like VPNs, Tor hidden service nodes and SSH tunnels? What with the UK's new insanity, it's hard not to imagine the stuff you're working on being very valuable to a police state.
|
# ? Dec 14, 2016 17:48 |
|
Owlofcreamcheese posted:The patent says Google may also consider the angle at which the cyclist’s elbow is bending, and the size and shape of the cyclist’s hands, arms and head. There's certainly nothing there that shows some "common library" between Microsoft Research and a Google patent. That sure spends a lot of time talking about GPS and IMU integration, which again seem wholly inappropriate for a sign language interpreter. I get that you're just abandoning the original point and myopically focusing on a turn of phrase, but you didn't even do that right.
|
# ? Dec 14, 2016 17:54 |
|
There's an interesting New York Times story today about how Google revamped the AI behind Google Translate and greatly improved its translations. Back in the world of disruption, Uber has put self-driving taxis on the road in SF... without complying with state regulations. quote:Levandowski says that at present Uber is not working toward a vehicle that has no steering wheel, pedals or need for a human driver. Rather, it is pursuing technology that provides a significant level of driver assistance while demanding driver oversight. It all depends on what the meaning of "autonomous" is.
|
# ? Dec 14, 2016 17:54 |
|
I really hope regulators crack down on this poo poo HARD. If they don't, we'll have cars that can do 99% of driving tasks autonomously, but can't handle a few unusual situations. That is a very dangerous zone, as the driver has liability and is required to be in control, but human nature means they won't be paying attention, since they have nothing to do the vast majority of the time. This is a well known issue with aircraft autopilots, and they are engineered to keep the pilots involved, even though an airline pilot is much better trained and is much more safety conscious than the average driver.
|
# ? Dec 14, 2016 18:11 |
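The aviation-style mitigation being argued for can be sketched as a driver-engagement watchdog: escalate when the driver has given no input for too long. The tick counts and state names below are made-up parameters, not anything from a real autopilot or driver-assistance system.

```python
# Toy hands-on-wheel watchdog: track consecutive ticks with no driver
# input and escalate from "ok" to "warn" to "disengage". warn_after
# and disengage_after are invented thresholds in ticks.

def supervise(driver_inputs, warn_after=3, disengage_after=6):
    """driver_inputs: iterable of bools (True = driver touched controls).
    Returns the watchdog state after each tick."""
    idle, states = 0, []
    for touched in driver_inputs:
        idle = 0 if touched else idle + 1
        if idle >= disengage_after:
            states.append("disengage")
        elif idle >= warn_after:
            states.append("warn")
        else:
            states.append("ok")
    return states
```

The design point is the same one the post makes about autopilots: the system must force re-engagement well before the moment it actually needs the human, because an unprompted human won't be paying attention.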
|
Dr. Fishopolis posted:Oh, I get it. I'm sure this is inevitable, but are you concerned that easy access to cross-boundary monitoring paves the way for easy detection and blocking of things like VPNs, Tor hidden service nodes and SSH tunnels? What with the UK's new insanity, it's hard not to imagine the stuff you're working on being very valuable to a police state. That is worrying, but I don't think it's possible to have good visibility into network use without that danger. The exact data you need for network capacity planning and bottleneck relief is also usable for villainous purposes. Detecting a DDoS and blackholing the source ASNs is the same thing you'd do to keep anyone on your network from accessing Twitter, for instance.
|
|
# ? Dec 14, 2016 18:11 |