|
Methanar posted: I'm not actually working with strings. It's a list. If you have a long list of strings, like code:
# ¿ Apr 15, 2017 00:18 |
|
|
Eela6 posted: I've now been working strictly in golang for about three months.

QuarkJets posted: The Coding Horrors thread killed any mild interest I had in golang

Thermopyle posted: same

I'm in a similar boat to Eela6. Most of my career has been Python stuff, but the past few months have been very Go-dominated, because I've been working on various things related to Kubernetes. I dislike Go less now than when I started, but there's nothing specific to Go that I wish I could take back to Python. The best feature of Go isn't a feature of the language at all: it's that your program compiles to a static binary. If Python had enforcement of type hints and a static binary generator that wrapped up cx_freeze/py2exe/py2app into pythonc, would Go ever have left the launchpad?
|
# ¿ Aug 28, 2017 21:44 |
|
Does anyone have a pointer to some reasonably thorough benchmarks for evaluating performance when running typical data science/ML tasks? I'm currently looking at https://github.com/numpy/numpy/tree/master/benchmarks, but maybe one of you ML expert types has something even better. The source of the request is that I'm working on some functional tests for our ML people so that we can establish a known baseline for the performance of their Jupyter notebooks, then start switching things out and quantifying the performance difference (OpenBLAS vs MKL, that sort of stuff). It turns out none of them has an existing set of comprehensive benchmarks for this purpose, and I'm doing a little research/questioning before we start writing our own or packaging up those NumPy benchmarks for our testing.
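For the OpenBLAS-vs-MKL comparison, a minimal timing harness gets you a rough baseline before committing to a full suite. This is a sketch, not the NumPy benchmark suite or anything the poster described: it times a few BLAS-dominated operations so that running the same script in two environments gives a per-operation comparison.

```python
import time
import numpy as np

def bench(label, fn, repeats=5):
    """Call fn() `repeats` times and report the best wall-clock time.

    Best-of-N is less noisy than the mean for short, CPU-bound kernels.
    """
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    best = min(times)
    print(f"{label}: {best * 1000:.1f} ms (best of {repeats})")
    return best

rng = np.random.default_rng(0)
a = rng.standard_normal((1000, 1000))
b = rng.standard_normal((1000, 1000))
# A symmetric positive-definite matrix for the Cholesky benchmark.
spd = a @ a.T + 1000 * np.eye(1000)

# All three calls dispatch to the linked BLAS/LAPACK, so swapping
# OpenBLAS for MKL changes these numbers without touching the script.
bench("matmul 1000x1000", lambda: a @ b)
bench("svd (values only)", lambda: np.linalg.svd(a, compute_uv=False))
bench("cholesky 1000x1000", lambda: np.linalg.cholesky(spd))
```

Running it under each BLAS build (e.g. one conda env with `nomkl`, one with MKL) and diffing the printed times is usually enough to see whether the switch is worth a deeper benchmarking effort.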
|
# ¿ Jul 27, 2019 15:41 |