|
It's not that the robots are coming for your jobs. It's that your boss wants to replace you with a robot.
|
# ¿ Sep 8, 2019 20:18 |
|
https://twitter.com/dalykyle/status/1174360934237855749 No way this isn't an NLP error.
|
# ¿ Sep 18, 2019 20:40 |
|
https://twitter.com/_julesh_/status/1177533459843207168
|
# ¿ Sep 28, 2019 03:43 |
|
https://twitter.com/WIRED/status/1181437300414275584
|
# ¿ Oct 10, 2019 00:38 |
|
https://twitter.com/y0b1byte/status/1182363296177016834
|
# ¿ Oct 12, 2019 00:26 |
|
https://twitter.com/BullenRoss/status/1187039545965064193
|
# ¿ Oct 24, 2019 05:35 |
|
Sleng Teng posted:Very Cool and legal
|
# ¿ Nov 8, 2019 00:52 |
|
https://twitter.com/ben_golub/status/1192499354021572609

Sagebrush posted:my grandma is a wonderful person but god drat for the last few years before she gave up driving it was absolutely terrifying to be in the car with her. The most dangerous drivers by far are the 80+ crowd, and we're about to get a whole lot of them. poo poo's going to hit the fan (and pedestrians).
|
# ¿ Nov 9, 2019 01:26 |
|
https://twitter.com/lfschiavo/status/1195796736507179008
|
# ¿ Nov 17, 2019 06:09 |
|
https://twitter.com/RDBinns/status/1198716726709571584
|
# ¿ Nov 25, 2019 16:08 |
|
If it's stupid but it works, it's not stupid.
|
# ¿ Nov 28, 2019 18:40 |
|
redleader posted:is fortran still faster than c for some types of numeric calculations for deeply technical yet tedious reasons?

That's not changing any time soon. tl;dr: Fortran doesn't let programmers do arbitrary things with memory, so the compiler can make more assumptions about how code behaves, and that makes optimization easier.
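A hypothetical sketch of the aliasing point in C (not from the post): by default the compiler must assume pointer arguments might overlap, while `restrict` (C99) lets you promise what Fortran assumes for free on array arguments.

```c
#include <stddef.h>

/* Without restrict, the compiler must assume a, b, and out might
 * overlap, so a store to out[i] could change a later a[j]; that
 * blocks reordering and vectorization. */
void add_may_alias(const double *a, const double *b, double *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* restrict promises the arrays don't overlap, recovering the
 * no-aliasing assumption Fortran compilers get by default. */
void add_no_alias(const double *restrict a, const double *restrict b,
                  double *restrict out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}
```

Both functions compute the same thing on non-overlapping arrays; the second just licenses the optimizer to assume that's always the case.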
|
# ¿ Dec 5, 2019 02:30 |
|
CUDA's great if what you're trying to do exhibits SIMD parallelism. If not, it's a non-starter.
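A hypothetical illustration of the distinction, in plain C standing in for a kernel: the first loop is data-parallel (one GPU thread per element works fine), the second has a loop-carried dependence, so thousands of threads have nothing independent to do.

```c
#include <stddef.h>

/* SIMD-friendly: the same operation on every element, and no
 * iteration depends on another. In CUDA this maps to one thread
 * per element. */
void saxpy(float a, const float *x, float *y, size_t n) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* SIMD-hostile: each step needs the previous result (a loop-carried
 * dependence), so the loop is inherently sequential as written. */
float ema(const float *x, size_t n, float alpha) {
    float acc = x[0];
    for (size_t i = 1; i < n; i++)
        acc = alpha * x[i] + (1.0f - alpha) * acc;
    return acc;
}
```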
|
# ¿ Dec 5, 2019 03:49 |
|
https://twitter.com/JanelleCShane/status/1202968242286784512
|
# ¿ Dec 7, 2019 01:51 |
|
https://twitter.com/nxthompson/status/1203840778608529408
|
# ¿ Dec 10, 2019 01:09 |
|
https://twitter.com/dril_gpt2/status/1208788034407292930 https://twitter.com/dril_gpt2/status/1208854462984577025 https://twitter.com/dril_gpt2/status/1208454587054735361 https://twitter.com/dril_gpt2/status/1208419178442641408
|
# ¿ Dec 23, 2019 04:00 |
|
https://twitter.com/lindsey/status/1211698750759944193 lol (Also read the thread the quoted tweet is in. Dude knows what's up.)
|
# ¿ Dec 31, 2019 01:22 |
|
https://twitter.com/Bschulz5/status/1206577850234658817 That's really deep, man.
|
# ¿ Jan 3, 2020 03:52 |
|
https://twitter.com/joose_rajamaeki/status/1096397000520749056
|
# ¿ Jan 3, 2020 21:59 |
|
https://twitter.com/VPrasadMDMPH/status/1212840987363442689
|
# ¿ Jan 4, 2020 16:36 |
|
Yes, that's the major issue. And the only way you can get the data that you'd need to train a classifier is to identify potentially harmful cancers and not treat them, which flies in the face of how medicine is practiced.
|
# ¿ Jan 5, 2020 20:23 |
|
"Artificial Intelligence Makes Bad Medicine Even Worse" is a better article on the issues with Google's breast cancer screening technology. tl;dr: medicine is hard and AI is attacking the easiest part, but it's not clear that there's any value in that.
|
# ¿ Jan 11, 2020 16:51 |
|
https://twitter.com/StatsPapers/status/1223073098066350080 I'm genuinely surprised that this wasn't all known a long time ago.
|
# ¿ Jan 31, 2020 04:36 |
|
https://twitter.com/rchrdbyd/status/1227431038831468546
|
# ¿ Feb 14, 2020 01:05 |
|
I really hope they post lecture notes for that. Applying ML/AI to diseases is hard and I'd love to have pointers on how to do it right.
|
# ¿ Mar 26, 2020 22:20 |
|
A 4xx course number at Stanford indicates an experimental graduate-level class.
|
# ¿ Mar 26, 2020 23:08 |
|
https://twitter.com/Strife212/status/1255789106522656773 https://twitter.com/Indoorsness/status/1256177588705169408
|
# ¿ May 3, 2020 15:57 |
|
https://twitter.com/ryxcommar/status/1259289338854154240
|
# ¿ May 10, 2020 02:17 |
|
Exploratory data analysis is a thing, but it has to be guided by a question or at least a good sense of what value can come out of looking at the data.
|
# ¿ Jun 5, 2020 15:59 |
|
Hamming distance.
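For anyone unfamiliar, a minimal sketch: the Hamming distance between two equal-length bit strings is the number of positions where they differ, which for machine words is just the popcount of their XOR.

```c
#include <stdint.h>

/* Hamming distance between two 64-bit words: XOR marks the differing
 * bit positions, then count the set bits (Kernighan's trick clears
 * the lowest set bit on each pass). */
int hamming64(uint64_t x, uint64_t y) {
    uint64_t diff = x ^ y;
    int count = 0;
    while (diff) {
        diff &= diff - 1;  /* clear the lowest set bit */
        count++;
    }
    return count;
}
```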
|
# ¿ Jun 6, 2020 16:21 |
|
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
|
# ¿ Jun 9, 2020 04:04 |
|
There's some work showing that optimizing for accuracy leads to vulnerability to adversarial attacks (Robustness May Be at Odds with Accuracy, Theoretically Principled Trade-off between Robustness and Accuracy). A lot of this seems deep-learning specific, but maybe that's just because deep learning is where people are looking right now. Something else that may be interesting: Politics of Adversarial Machine Learning
|
# ¿ Jun 10, 2020 17:05 |
|
https://twitter.com/MaartenvSmeden/status/1272613128304578561
|
# ¿ Jun 15, 2020 20:40 |
|
a neurotic ai posted:Statistics can get real weird man. We get it beaten into us in formal logic that correlation does not imply causation, but then stats basically turns around and says ‘yeah but if this R is high enough then we gonna build a model that assumes it is anyway’.

No one in statistics is saying that.
|
# ¿ Jun 16, 2020 23:34 |
|
https://twitter.com/jonathanfly/status/1274277258002300928
|
# ¿ Jun 20, 2020 18:10 |
|
https://twitter.com/tkasasagi/status/1274575977465540608
|
# ¿ Jun 21, 2020 18:24 |
|
We're all in the training sets nowadays.
|
# ¿ Jun 21, 2020 18:44 |
|
https://twitter.com/databoydg/status/1275236482190434304
|
# ¿ Jun 23, 2020 16:07 |
|
https://twitter.com/matt_zucker/status/1275574621522341888
|
# ¿ Jun 25, 2020 05:41 |
|
Carthag Tuek posted:why are they all making the weird orgasm meme face

That reflects what was found in the training set.
|
# ¿ Jun 25, 2020 19:52 |