|
Thermopyle posted:My HP 8600 and 8720 both do this. Are you serious? Yeah they technically do some form of OCR I guess. I suppose I thought there might be some magic solution, but there isn't one. Just rolling my own at this point.
|
# ? Dec 24, 2017 07:58 |
|
|
FAGGY CLAUSE posted:Are you serious? Yeah they technically do some form of OCR I guess. I'm not sure what you mean here. They use the ABBYY FineReader OCR engine.
|
# ? Dec 24, 2017 16:04 |
Why does this happen? (Python 3.6.3) Python code:
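The snippet itself didn't survive the archive, but from the replies it was evidently a dict comprehension whose lambdas all divide by the loop variable. A minimal reconstruction (the names and values here are invented):

```python
# Each lambda closes over the *variable* val, not its value at
# definition time, so once the comprehension finishes they all
# see the final val (3.0).
scale_funcs = {name: (lambda x: x / val)
               for name, val in [("a", 1.0), ("b", 2.0), ("c", 3.0)]}

print(scale_funcs["a"](6.0))  # expected 6.0, but prints 2.0
print(scale_funcs["b"](6.0))  # 2.0
print(scale_funcs["c"](6.0))  # 2.0
```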
|
|
# ? Dec 26, 2017 23:22 |
|
Does anyone have a clear and concise resource that talks about using native C code in Python? I've got some C functions I've written that I want to interface with a Python script, but everything points me to Cython, which looks like a hybrid of the two languages.
|
# ? Dec 26, 2017 23:34 |
VikingofRock posted:Why does this happen? (Python 3.6.3) Okay, so I asked a friend and he said that it's because the lambda is storing a reference to val, and so when val is updated throughout the dictionary comprehension it changes the val "pointed to" by all the lambdas. And this works: Python code:
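The working version he refers to is presumably the default-argument trick, since a default is evaluated once when each lambda is defined. A sketch with invented names:

```python
# val=val freezes the current loop value into each lambda's own
# default argument at definition time.
scale_funcs = {name: (lambda x, val=val: x / val)
               for name, val in [("a", 1.0), ("b", 2.0), ("c", 3.0)]}

print(scale_funcs["a"](6.0))  # 6.0
print(scale_funcs["b"](6.0))  # 3.0
print(scale_funcs["c"](6.0))  # 2.0
```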
|
|
# ? Dec 27, 2017 00:02 |
|
VikingofRock posted:Why does this happen? (Python 3.6.3) http://quickinsights.io/python/python-closures-and-late-binding/ Yr lambdas are getting defined, with val as a reference to the local variable in your loop (not inlined as a constant), and when you run the lambdas they all look up the value of val. The loop has already finished running when you call them, so in each case val is set to the last value it was assigned during the loop, i.e. 3.0. baka kaba fucked around with this message at 00:14 on Dec 27, 2017 |
# ? Dec 27, 2017 00:10 |
|
VikingofRock posted:Why does this happen? (Python 3.6.3) You can do the following: Python code:
You could probably use functools.partial to do this too, but I'm not super familiar with it, and in the 30 seconds I tried I couldn't get it to work right. E: f; b
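For the record, functools.partial does handle this case; here's one hedged sketch of how it could look (function and dict names are mine):

```python
from functools import partial

def divide(val, x):
    return x / val

# partial binds val immediately, so every entry keeps its own value
scale_funcs = {name: partial(divide, val)
               for name, val in [("a", 1.0), ("b", 2.0), ("c", 3.0)]}

print(scale_funcs["a"](6.0))  # 6.0
print(scale_funcs["c"](6.0))  # 2.0
```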
|
# ? Dec 27, 2017 00:20 |
|
creatine posted:Does anyone have a clear and concise resource that talks about using native C code in Python? If you have actual C-style functions dealing with regular old primitive types (float, int, etc) you can just compile a shared library and load it with ctypes. I would not recommend using Cython... for anything, really. There are better alternatives no matter what you want to do.
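A minimal ctypes sketch of that workflow. The commented lines show the hypothetical steps for your own library; the runnable part loads the system C math library instead, so there's nothing to compile:

```python
import ctypes
import ctypes.util

# Hypothetical workflow for your own code:
#   mylib.c defines:  double square(double x) { return x * x; }
#   compile with:     gcc -shared -fPIC -o libmylib.so mylib.c
#   then load it:     lib = ctypes.CDLL("./libmylib.so")

# Runnable demo using the system math library:
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
libm.sqrt.restype = ctypes.c_double     # declare the return type...
libm.sqrt.argtypes = [ctypes.c_double]  # ...and the argument types
print(libm.sqrt(9.0))  # 3.0
```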
|
# ? Dec 27, 2017 01:33 |
|
VikingofRock posted:Okay, so I asked a friend and he said that it's because the lambda is storing a reference to val, and so when val is updated throughout the dictionary comprehension it changes the val "pointed to" by all the lambdas. And this works: Is there a reason that you're returning a lambda instead of just x/val?
|
# ? Dec 27, 2017 01:37 |
QuarkJets posted:Is there a reason that you're returning a lambda instead of just x/val? This question actually came from me helping someone else with some python (and getting stumped), but presumably the answer is "because you don't know x at the time when this dictionary is created".
|
|
# ? Dec 27, 2017 02:17 |
|
Python code:
|
# ? Dec 27, 2017 03:40 |
|
QuarkJets posted:If you have actual C-style functions dealing with regular old primitive types (float, int, etc) you can just compile a shared library and load it with ctypes. I'll give ctypes a look, thanks
|
# ? Dec 27, 2017 10:06 |
|
QuarkJets posted:If you have actual C-style functions dealing with regular old primitive types (float, int, etc) you can just compile a shared library and load it with ctypes. What's bad about Cython? Don't a bunch of scientific packages use it a lot?
|
# ? Dec 27, 2017 14:08 |
|
Cingulate posted:What's bad about Cython? Don't a bunch of scientific packages use it a lot? Most scientific packages are written in compiled C or Fortran. For instance many of the Numpy libraries are compiled FORTRAN77. It's true that there are a number of packages that use Cython (including some parts of Numpy), but that's more due to Cython having been around forever than it being the optimal choice for building extension packages today.

For compiling Python, Numba is faster and way easier to use than Cython. No need to write separate .pyx files that get precompiled into a huge amount of boilerplate; simply write Python functions without vectorization and then attach decorators to them; boom, C performance with much less mess.

For compiling small snippets of C code to be called from Python, ctypes is the fastest and easiest-to-use option.

For compiling large amounts of basically anything to be used with Python, SWIG is the go-to option and is faster and easier to use than Cython's C-interfacing features.
|
# ? Dec 29, 2017 02:23 |
|
I've never written a test for any of my projects, and I'd like to change that. I know very little about the subject in general. As a novice, what testing package/methodology should I commit to learning? I'll be using Pycharm, if it makes any difference.
|
# ? Dec 30, 2017 03:58 |
|
I have a bit of a conundrum: how do I lint my code when my application's Python environment is contained in a Docker container? I just spun up a new Django project and stumbled a bit when I realized that I don't know what environment path to point the linter to. This is preventing Flake8 from understanding that I have Django installed, and it's outputting typical linting issues like the E0401 one below: I feel as though my only option is to maintain a virtual environment on my machine that I'll have to keep in sync with the dependencies running in the Docker image that's serving the actual application. Unless there's a better way?
|
# ? Dec 30, 2017 04:16 |
|
I have a comma-separated spreadsheet with a bunch of information about condos in my city; most importantly it has the latitude and longitude of these places. I want to be able to output these coordinates onto google maps but I'm not sure how to go about doing this. I looked at this link but none of the APIs seem to quite provide what I'm looking for (at least, not with python). Any suggestions?
|
# ? Dec 30, 2017 22:26 |
|
Seventh Arrow posted:I have a comma-separated spreadsheet with a bunch of information about condos in my city, most importantly it has the latitude and longitude of these places. I want to be able to output these coordinates onto google maps but I'm not sure how to go about doing this. I looked at this link but none of the API's seem to quite provide what I'm looking for (at least, not with python). Any suggestions? Maybe something like this? https://github.com/vgm64/gmplot
|
# ? Dec 30, 2017 22:31 |
|
Hughmoris posted:Maybe something like this? https://github.com/vgm64/gmplot That looks good, thanks! I will look into it.
|
# ? Dec 30, 2017 22:34 |
|
Hughmoris posted:Maybe something like this? https://github.com/vgm64/gmplot So this seems to work pretty good, it seems like the basis of this is using "gmap = gmplot.GoogleMapPlotter(43.66548, -79.3875, 16)" to store the lat & long and then "gmap.draw("filename.html")" to put it onto an actual map. So I have two challenges:
Any hints on how I can do this?
|
# ? Jan 2, 2018 03:13 |
|
I’m setting up my Windows 10 PC from a clean reinstall and I was just wondering what’s the Best(™) way to set up Python now in Windows. My choices are between a native Windows Anaconda install, or installing Anaconda (or just making use of the default Python and Pip) inside the Windows Subsystem for Linux. I use PyCharm and this guide says it’s possible to use my WSL as a “remote” interpreter. Just wondering if there are any drawbacks to this. (I’d prefer to use Python inside WSL because I’m more familiar with the Linux like system than I am with Windows.)
|
# ? Jan 2, 2018 10:05 |
|
Native Windows install. Bonus: Try pipenv. Use powershell as your terminal, vice cmd. Dominoes fucked around with this message at 15:12 on Jan 2, 2018 |
# ? Jan 2, 2018 10:12 |
|
Seventh Arrow posted:I have a comma-separated spreadsheet with a bunch of information about condos in my city, most importantly it has the latitude and longitude of these places. I want to be able to output these coordinates onto google maps but I'm not sure how to go about doing this. I looked at this link but none of the API's seem to quite provide what I'm looking for (at least, not with python). Any suggestions? Are you trying to create a map? Or load custom points on a Google Map? If you need to use Google, then I would create your own map (https://www.google.com/maps/d/), and then upload the CSV.
|
# ? Jan 2, 2018 15:00 |
|
accipter posted:Are you trying to create a map? Or load custom points on a Google Map? If you need to use Google, then I would create your own map (https://www.google.com/maps/d/), and then upload the CSV. I need to put markers on a map, but since this is for a data science python course I need to find a ~*pythonic*~ way of doing it. The module that was linked to earlier was good, but I still need to figure out some of the details.
|
# ? Jan 2, 2018 15:08 |
|
Updated OP package and virtualenv sections to emphasize builtin venv and pipenv; vice Anaconda, and legacy tools like virtualenv and virtualenvwrapper.
|
# ? Jan 2, 2018 15:26 |
|
Seventh Arrow posted:I need to put markers on a map, but since this is for a data science python course I need to find a ~*pythonic*~ way of doing it. The module that was linked to earlier was good, but I need to find around some of the details. Okay. For nearly all of my mapping needs I use basemap or cartopy, but these create static maps. If you want a slippy interactive map you could also consider Bokeh (https://bokeh.pydata.org/en/latest/docs/user_guide/geo.html).
|
# ? Jan 2, 2018 19:24 |
|
You might also want to look into Folium. It's a quick, easy way to build a slippy LeafletJS based map using OpenStreetMap tiles from Python. https://folium.readthedocs.io/en/latest/quickstart.html#getting-started
|
# ? Jan 2, 2018 19:46 |
|
I'd like to package a scripted tool for members of my team without requiring them to source virtualenv/bin/activate every time to satisfy the script's dependencies, or to install the dependencies into their global environment. Is there a good way to accomplish this?
|
# ? Jan 3, 2018 14:08 |
|
If you still want to use virtualenv, you should take a look at virtualenvwrapper, which is a set of convenience scripts over virtualenv (meaning source virtualenv/bin/activate becomes workon <name_of_env> and so on). I think the new hotness is pipenv, though, but I've never used it.
|
# ? Jan 3, 2018 14:17 |
|
There's packaging as a python module and then there's packaging as an executable. The executable option can encapsulate all dependencies and allow your coworkers to double click on something to run the tool. Try PyInstaller. If you're wanting to send out a python script then you can still package it and list the dependencies, and they will be installed automatically, but that still requires the environment juggling. porksmash fucked around with this message at 16:52 on Jan 3, 2018 |
# ? Jan 3, 2018 16:48 |
|
Somehow I missed that this was part of 3.5, but you can now merge two dicts into a new dict in a single expression: Python code:
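The expression in question is presumably dict unpacking (PEP 448, which landed in 3.5):

```python
d1 = {"a": 1, "b": 2}
d2 = {"b": 3, "c": 4}

merged = {**d1, **d2}  # the later unpacking wins on duplicate keys
print(merged)  # {'a': 1, 'b': 3, 'c': 4}
```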
Thermopyle fucked around with this message at 04:12 on Jan 6, 2018 |
# ? Jan 6, 2018 04:09 |
Thermopyle posted:Somehow I missed that this was part of 3.5, but you can now merge two dicts into a new dict in a single expression: Yeah, that's a good one. Cleaner and terser than Python code:
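The older idiom being compared against is presumably the two-step copy-then-update:

```python
d1 = {"a": 1, "b": 2}
d2 = {"b": 3, "c": 4}

# pre-3.5: copy one dict, then mutate the copy
merged = d1.copy()
merged.update(d2)
print(merged)  # {'a': 1, 'b': 3, 'c': 4}
```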
|
|
# ? Jan 6, 2018 04:13 |
|
Why don't dictionaries have an extend(dict) method?
|
# ? Jan 6, 2018 04:17 |
They do, it's called .update(), which updates the dict in place. Creating a new dictionary from the contents of two or more others is what we're doing, which is somewhat different. I'd like to call it a union, but unions imply no loss of information, which isn't quite right if some of the dictionaries have overlapping keys with different values.
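To make the in-place vs. new-dict distinction concrete:

```python
d1 = {"a": 1, "b": 2}
d2 = {"b": 3, "c": 4}

result = d1.update(d2)  # mutates d1 and returns None
print(result)  # None
print(d1)      # {'a': 1, 'b': 3, 'c': 4} -- d1 itself changed
```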
|
|
# ? Jan 6, 2018 04:46 |
|
Not sure if this belongs in the Web Development thread, but it's mostly Python-specific. I have a Django app that's part of a "microservice" network. The Django app is responsible for the frontend of the whole website through a Django REST Framework API that feeds into an Angular SPA. Django applies some business logic rules to data it retrieves from a Service Layer.

Django communicates with a service layer run by a different team that I do not have much control over. This service layer has a few responsibilities, but the primary one is providing an API to send and retrieve data. Django itself has a small database instance, but it's mostly for metadata about an object; the actual data lives in the service layer (e.g. the service layer holds the first name/last name, address, etc. submitted by a user). For example, Angular hitting a Django endpoint /api/user/ would cause Django to make a call via the Requests library to the service layer to retrieve the user's information. So a request is typically: Angular -> GET Django -> GET ServiceLayer -> return to Django -> apply business logic -> return to Angular. The state of the information can change based on other calls or data submitted to the service layer (like a user's order request changing from "pending" to "complete").

The problem I've run into is how to best test the Service Layer interactions in Django. Right now we use an old version of the Responses library to fake the responses from the service layer when running pytest. We register custom callbacks for each service layer API endpoint and use memcached during tests to simulate the Service Layer's database. We built an API client class for interacting with the service layer, and that class has a switch that intercepts requests when in test mode and does mock.patch for that request.
Based on documentation for other python request-mocking libraries, it seems like I have to manually apply some decorator around a test function, even though a single test function may be trying to hit 4 or 5 service layer API endpoints, and it seems really redundant to apply a bunch of decorators around each test method. Is there some other python library I should be exploring that supports the workflow of django -> multiple API calls to an external service layer?
|
# ? Jan 6, 2018 23:40 |
|
You could write a decorator that wraps all the decorators into one. You could even make it an argument-taking decorator that you can configure at each call site with what you want. Like: Python code:
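A hedged sketch of what such a combined decorator could look like. The compose helper is generic; the tag decorators are toy stand-ins for the real per-endpoint mocks:

```python
import functools

def compose(*decorators):
    """Apply several decorators as one, outermost first."""
    def wrapper(func):
        for dec in reversed(decorators):
            func = dec(func)
        return func
    return wrapper

# Toy decorator factory standing in for per-endpoint mock setup
def tag(label):
    def dec(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            return [label] + func(*args, **kwargs)
        return inner
    return dec

mock_all = compose(tag("users"), tag("orders"))  # one decorator for both

@mock_all
def call_services():
    return ["result"]

print(call_services())  # ['users', 'orders', 'result']
```

An argument-taking variant would just have compose (or a wrapper around it) pick which endpoint mocks to include based on what each call site passes in.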
|
# ? Jan 6, 2018 23:44 |
|
Thermopyle posted:You could write a decorator that wraps all the decorators into one. Hadn't thought about doing a "god" decorator. I'll look into that approach as well.
|
# ? Jan 7, 2018 07:45 |
|
We use vcrpy for that and it works pretty well.
|
# ? Jan 7, 2018 08:01 |
|
I have very limited programming experience generally and even less experience with Python, so I'll apologize if this is a really stupid question, but I wasn't able to find much from googling:

I've got a csv file with a little over 90,000 rows that each have a key in the first column and a value in the second. I also have a list of keys that I want to retrieve the values for. Currently, I'm using csv.reader to read the file into a dictionary and then looping through my list of keys to retrieve the value for each from the dictionary. This works, but I have a feeling that this is a really stupid/inefficient way of going about things.

The other approach that comes to mind is creating a duplicate of the list of keys that I want to retrieve values for, iterating through the rows of the file checking if that row matches any of the keys I'm after, storing the value and removing the key from my duplicate list if it does match, and continuing on until the duplicate list is empty.

Am I an idiot? Is either of these approaches appropriate? Is there a better solution? Wallet fucked around with this message at 17:43 on Jan 8, 2018 |
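The dictionary approach is sound: one pass over the rows to build the dict, then each lookup is a constant-time hash-table hit. A tiny self-contained sketch (file contents inlined here; real code would use open() on the actual CSV):

```python
import csv
import io

# Stand-in for the real ~90,000-row key,value file
csv_text = "k1,apple\nk2,banana\nk3,cherry\n"

# One pass over the rows builds the lookup table
lookup = dict(csv.reader(io.StringIO(csv_text)))

wanted = ["k1", "k3"]
values = {k: lookup[k] for k in wanted if k in lookup}
print(values)  # {'k1': 'apple', 'k3': 'cherry'}
```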
# ? Jan 8, 2018 17:40 |
|
|
What you've done sounds like how I would do it frankly. I'd rather process each row once and then have a nice fast hash-table lookup from a dictionary (for each of n keys) than process each row n times looking for the keys. The benefit that your proposed solution has is that n keeps decreasing as keys are found, and once they're all found you can discard the whole rest of the CSV; but whether that's overall more performant than the first approach depends on the shape of the data. The proposed solution is also trickier and sounds like it would take more testing and tinkering, and for that reason alone I'd probably stick with the first approach.
|
|
# ? Jan 8, 2018 17:45 |