mbt
Aug 13, 2012

It's really tough to gauge how well you're organizing without someone else looking at it.

I thought I had a great module structure until someone else attempted to ford the spaghetti river I created.

That being said, if you're the only one looking at it just do what comes naturally.


Hollow Talk
Feb 2, 2014

cinci zoo sniper posted:

Same, my rule of thumb for a file is to be able to coherently explain what’s in it through its name.

Furism
Feb 21, 2006

Live long and headbang
Thanks! I think I'll just make it a package then (properly this time) just in case I need to expand it in the future and add more classes (I tend to split classes into different files).

mbt
Aug 13, 2012

Wow, I didn't realize how difficult it would be to implement semi-accurate timing in my code. And the time between function calls is adjustable (it's basically a metronome). sleep(60/bpm) doesn't work since execution time can vary. The time function that asks Windows what the time is kinda works, then doing a

while current_time() < next_beat_time:
    QThread.msleep(1)

almost works, but it occasionally skips a beat or speeds up or slows down because Windows isn't a real-time OS, and what's a few milliseconds between friends?

Yes, there are Python 'metronomes' out there, but they either suffer from the same issues or do nothing but print 'tick' to the console.

I even looked at C and it has the same issues, funnily enough. I guess the way some people get around it is to either not care, pare the code down to the absolute minimum, or (my favorite that I've heard someone do) pre-record a long series of beats and only worry about playback speed and the point at which it loops.

The function calls an arduino anyway, maybe I'll have better luck timing off that.

baka kaba
Jul 19, 2003

PLEASE ASK ME, THE SELF-PROFESSED NO #1 PAUL CATTERMOLE FAN IN THE SOMETHING AWFUL S-CLUB 7 MEGATHREAD, TO NAME A SINGLE SONG BY HIS EXCELLENT NU-METAL SIDE PROJECT, SKUA, AND IF I CAN'T PLEASE TELL ME TO
EAT SHIT

I think you're meant to use time.monotonic for basic scheduling, that might be the easiest way of calculating elapsed real time

If I were doing this I'd use your short-sleep polling method, but with a little more logic around it. If the next beat is extremely close (like your expected sleep period might cross it) it might be better to play it early. Or maybe calculate a more accurate sleep period to aim for the exact time you need to hit. If you overshoot, make sure you still play the beat - it should never actually skip one; even 600 bpm is a beat every 100 ms, and you shouldn't be locked out of your process for that long, surely?

Also make sure your scheduling is completely separate from the actual time your beats get played. If you start at time 0, and you have beats at 0.5s, 1s, 1.5s etc, and your second beat comes out late at 0.51s, make sure your next beat time is calculated at 1s, not 0.51 + 0.5, y'know? So you have a constant pulse scheduled, and if a beat is inaccurate it doesn't affect the others that follow. You're probably already doing this but it's worth mentioning, you don't want timing errors compounding

ALSO I see you're using Qt I think? Have you tried using QTimer which they recommend over the sleep methods? Or using Qt's Priority values to create a high-priority worker thread (or just bump the priority of the main thread, might be a bad idea though, you'd have to test it I don't know Qt)
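
Something like this is what I mean by the absolute schedule, a rough sketch (play_beat is just a stand-in for whatever actually fires your arduino call):

Python code:
import time

def run_metronome(bpm, play_beat, num_beats=None):
    beat_period = 60.0 / bpm
    start = time.monotonic()
    beat = 0
    while num_beats is None or beat < num_beats:
        target = start + beat * beat_period   # absolute schedule, no drift build-up
        now = time.monotonic()
        while now < target:
            time.sleep(min(0.001, target - now))  # nap in small steps near the target
            now = time.monotonic()
        play_beat()                               # maybe a touch late, but never skipped
        beat += 1

# e.g. run_metronome(120, lambda: print('tick'), num_beats=8)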

Foxfire_
Nov 8, 2010

You won't do better than the C Windows API way of doing it, since that's underneath everything Python does [and Python's underlying thread scheduling is terrible]. That will still give ~20 ms of jitter if other things want the CPU and don't yield sooner, so if that is too much, you'll need to do something else.

There you would do it by using

CreateWaitableTimer to create a timer object
SetWaitableTimer to configure it as periodic with some beat
WaitForSingleObject to sleep until the timer expires

You get bonus priority from the scheduler because you're waiting on IO.

You could ctypes it from Python. I don't think there's a standard way to sleep for anything but a relative duration, so otherwise the best you could do is calculate an approximate wake time and do a relative sleep (then either spin until it's time or accept whatever inaccuracy is left).
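
Rough, untested ctypes sketch of that (play_beat is a placeholder, and you'd want to check the return values for errors):

Python code:
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
kernel32.CreateWaitableTimerW.restype = wintypes.HANDLE
kernel32.SetWaitableTimer.argtypes = [
    wintypes.HANDLE, ctypes.POINTER(ctypes.c_longlong), wintypes.LONG,
    ctypes.c_void_p, ctypes.c_void_p, wintypes.BOOL]
kernel32.WaitForSingleObject.argtypes = [wintypes.HANDLE, wintypes.DWORD]

bpm = 120
period_ms = int(60_000 / bpm)
due = ctypes.c_longlong(-period_ms * 10_000)  # 100 ns units, negative = relative to now

timer = kernel32.CreateWaitableTimerW(None, False, None)  # auto-reset timer
kernel32.SetWaitableTimer(timer, ctypes.byref(due), period_ms, None, None, False)

INFINITE = 0xFFFFFFFF
while True:
    kernel32.WaitForSingleObject(timer, INFINITE)  # sleep until the timer fires
    play_beat()  # placeholder for whatever hits the arduino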

But if you have a microcontroller, just use that. It is much better at doing things with consistent timing than a big computer.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
I think the correct way to do this (at least on Linux) would be to use alsaaudio; that way it is buffered on the hardware and the hardware deals with the timing. You would pad the PCM buffer with DC to fill out the rest of each beat after your sample. Given a sample rate of 44.1 kHz and 140 bpm, you'd need to write (60*44100/140) - (sample length) DC samples into the PCM buffer to achieve your timing, then your tick sample, then repeat.
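
Back-of-the-envelope version of that math (the click length is a made-up number):

Python code:
sample_rate = 44100          # Hz
bpm = 140
click_samples = 2000         # length of your tick sample, made up

samples_per_beat = int(60 * sample_rate / bpm)      # 18900 at 140 bpm
silence_samples = samples_per_beat - click_samples  # 16900 samples of padding
silence = b'\x00\x00' * silence_samples             # 16-bit mono DC (zeros)
# each beat: write your tick sample followed by the silence, then repeat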

https://larsimmisch.github.io/pyalsaaudio/ will get you where you want to go in linux.

https://docs.microsoft.com/en-us/windows/desktop/Multimedia/waveform-audio would be the windows way I believe, but I don't know of a python wrapper that exists for it, sorry.

E: Ahh I just realized that you want to call an arbitrary function. Yeah I would implement it on the arduino and go the other way around if your user program needs to know. Crazy option: wire line out to the arduino.

dougdrums fucked around with this message at 16:42 on Mar 30, 2019

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
I have a CLI tool installed into a virtual environment and I just noticed/realized that I could directly run it by calling the full path (e.g. ~/.virtualenvs/whatever/bin/tool) without activating the virtualenv first. Can I just make an alias/symlink to that path and use the tool and stop bothering with activating and deactivating the virtualenv?

Nippashish
Nov 2, 2005

Let me see you dance!

Boris Galerkin posted:

I have a CLI tool installed into a virtual environment and I just noticed/realized that I could directly run it by calling the full path (e.g. ~/.virtualenvs/whatever/bin/tool) without activating the virtualenv first. Can I just make an alias/symlink to that path and use the tool and stop bothering with activating and deactivating the virtualenv?

Running it this way is equivalent to not using a virtual environment at all, which means that dependencies installed into the virtualenv won't be found when you do this. The fact that the tool works this way means that you just so happen to have all those dependencies installed at the system level. You can install it outside a virtualenv and run it that way, or you can install it inside a virtualenv and run it inside the environment, but this half-and-half version where you install it into the virtualenv and run it outside doesn't make much sense imo.

NinpoEspiritoSanto
Oct 22, 2013




Nippashish posted:

Running it this way is equivalent to not using a virtual environment at all, which means that dependencies installed into the virtualenv won't be found when you do this. The fact that the tool works this way means that you just so happen to have all those dependencies installed at the system level. You can install it outside a virtualenv and run it that way, or you can install it inside a virtualenv and run it inside the environment, but this half-and-half version where you install it into the virtualenv and run it outside doesn't make much sense imo.

This is not true at all. So long as the python being executed is the one in the venv path, it doesn't need to be activated. Activation is merely a shell convenience for the user.
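
Easy to check for yourself, using the path from the earlier post:

Python code:
# run as: ~/.virtualenvs/whatever/bin/python check_env.py
import sys
print(sys.executable)  # the interpreter that is actually running
print(sys.prefix)      # points inside the venv, so its site-packages gets used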

Nippashish
Nov 2, 2005

Let me see you dance!

Bundy posted:

This is not true at all. So long as the python being executed is the one in the venv path, it doesn't need to be activated. Activation is merely a shell convenience for the user.

I thought it was responsible for setting up site-packages as well. It looks like you're right though. Learn something new every day.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Specifying the full path to the interpreter is a good way of running stuff from a virtualenv from cron.

punished milkman
Dec 5, 2018

would have won
I've had instances where I needed to make a subprocess Popen call to a separate Python environment (e.g. a 2.7 script) and the only way I could get it to behave correctly was by hardcoding the path to the interpreter when making the call. There's probably a better way to do that, and it felt really gross, but it worked.
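
For reference, the ugly-but-fine version of that looks something like this (interpreter path and script name made up):

Python code:
import subprocess

legacy_python = '/usr/bin/python2.7'  # or some_venv/bin/python
result = subprocess.run(
    [legacy_python, 'legacy_script.py', '--some-flag'],
    capture_output=True, text=True, check=True,
)
print(result.stdout)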

Data Graham
Dec 28, 2009

📈📊🍪😋



Today is #2to3 party day wooooo

Migrating about 7 legacy django apps that all have to be updated at the same time because they're all sharing an Apache mod_wsgi space uuughghh

cinci zoo sniper
Mar 15, 2013




Data Graham posted:

Today is #2to3 party day wooooo

Migrating about 7 legacy django apps that all have to be updated at the same time because they're all sharing an Apache mod_wsgi space uuughghh

Fake your death and move to Belize.

QuarkJets
Sep 8, 2008

punished milkman posted:

I've had instances where I needed to make a subprocess Popen call to a separate Python environment (e.g. a 2.7 script) and the only way I could get it to behave correctly was by hardcoding the path to the interpreter when making the call. There's probably a better way to do that, and it felt really gross, but it worked.

You could instead set the shebang at the top of the script to point explicitly to the other Python binary, but you'd still need to use a Popen call
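
i.e. something like this at the top of the 2.7 script (path made up), plus a chmod +x, and then the Popen call doesn't have to name the interpreter at all:

Python code:
#!/usr/bin/python2.7
# ...rest of the legacy script; the shebang pins which interpreter runs it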

Furism
Feb 21, 2006

Live long and headbang
Welp, trying to get my package to work but it still won't :(

I made the modifications you guys suggested: moved my cyberfloodClient.py file into a cyberfloodClient/ directory (so it's now cyberfloodClient/cyberfloodClient.py). Put an __init__.py in there (but it's empty, except for a variable to print). Ran the setup.py file. Checked that the tar.gz under dist/ does contain my file. Pushed it to PyPI. Installed from there. But if I want to reference it in code ("from cyberfloodClient import CfClient", which is my class), pylint tells me it cannot import it.

I've pretty much followed the online documentation so my mistake must be so dumb it's not even covered in it. :smith:

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
Try putting this in your __init__.py

code:
from .cyberfloodClient import CfClient
Not sure but I think that would fix it?
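
For reference, this is the layout that line assumes (names taken from your post, so double-check against what you actually have):

code:
cyberfloodClient/              <- project root, setup.py lives here
    setup.py
    cyberfloodClient/          <- the actual package
        __init__.py            <- contains: from .cyberfloodClient import CfClient
        cyberfloodClient.py    <- defines CfClient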

Furism
Feb 21, 2006

Live long and headbang
Jesus loving Christ. It works in PyCharm, just not in VS Code.

Hollow Talk
Feb 2, 2014

Furism posted:

Jesus loving Christ. It works in PyCharm, just not in VS Code.

To be fair, the whole PYTHONPATH thing is not exactly well-designed. Neither is namespacing. But if you use PyCharm, be aware that it does a bunch of things under the hood that might not work from the command line. I prefer my tests as a directory below the root, alongside the library directory, migrations etc., and that usually means pytest needs a not-so-subtle hint via PYTHONPATH=. pytest.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Are there any little tricks for parsing irregular space-formatted columns of text? Here's a paraphrased variant of what I am dealing with:

code:
                    Third   Fourth   Fifth
      A       B     Column  Column   Column       C
      1       2          3       4        5       6
And I want
A=1
B=2
Third Column=3
Fourth Column=4
Fifth Column=5
C=6

I can brute force this, but I was hoping for some kind of helper. I thought I could do something cute with csv, but... meh?

mbt
Aug 13, 2012

dougdrums posted:

Crazy option: wire line out to the arduino.
I'm already connecting to it via pyfirmata! My solution, given the traditional 24 hours notice before implementation, was to download an existing metronome app that already did the legwork, then do my classic combo of reading user inputs / an mss window pointed at the app to determine when things occur, and send any info to the arduino.

basically I'm supposed to feed info into a national instruments box and i'm not allowed to use labview or touch their computer :)

QTimer had the same issue but was a bit better than sleep; ctypes is probably a better option given more time.

the 'correct' option is to get audio out from an arduino, ram a potentiometer on top to adjust tempo, and never touch Python for precise timing, ever. Thank you for the advice, everyone!

Rocko Bonaparte posted:

Are there any little tricks for parsing irregular space-formatted columns of text? Here's a paraphrased variant of what I am dealing with:

turn the first two rows into numpy arrays, then do c = np.core.defchararray.add(a, b)? Then you can do whatever with the first row and replace the second with c, etc.

or you could use pandas but that's effort

mbt fucked around with this message at 01:31 on Apr 2, 2019

wolrah
May 8, 2006
what?

Rocko Bonaparte posted:

Are there any little tricks for parsing irregular space-formatted columns of text? Here's a paraphrased variant of what I am dealing with:

code:
                    Third   Fourth   Fifth
      A       B     Column  Column   Column       C
      1       2          3       4        5       6
And I want
A=1
B=2
Third Column=3
Fourth Column=4
Fifth Column=5
C=6

I can brute force this, but I was hoping for some kind of helper. I thought I could do something cute with csv, but... meh?

You could probably half-rear end that by doing a find/replace to replace two spaces with a comma, then treating it as a CSV, assuming all columns have at least two spaces between them and none of the text in the fields has more than one. At that point you'd just need to trim the whitespace on the results.
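
i.e. something like this per line (only safe if every field really is separated by two or more spaces):

Python code:
import re

line = '      1       2          3       4        5       6'
fields = [f.strip() for f in re.sub(r' {2,}', ',', line.strip()).split(',')]
# ['1', '2', '3', '4', '5', '6']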

QuarkJets
Sep 8, 2008

wolrah posted:

You could probably half-rear end that by doing a find/replace to replace two spaces with a comma, then treating it as a CSV, assuming all columns have at least two spaces between them and none of the text in the fields has more than one. At that point you'd just need to trim the whitespace on the results.

Using the text in the OP, that won't work; "Third" and "Column" and "3" all wind up in different columns, because they all have different amounts of preceding whitespace. The fact that there's an unpredictable amount of space before each "column" makes line-replacement solutions frustrating

But usually the person producing fixed-width columns is at least being consistent about it; usually it's the result of hard-coded column widths and reading the data back is meant to use those same widths. Rocko, are you sure that you're unable to just specify the column widths and extract text that way? That's really the right approach here, assuming consistency
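
If the widths do turn out to be consistent, pandas.read_fwf is built for exactly this; a sketch with column boundaries eyeballed from your paraphrased sample and a made-up filename:

Python code:
import pandas as pd

colspecs = [(0, 7), (7, 15), (15, 26), (26, 34), (34, 43), (43, 51)]  # eyeballed, adjust to the real file
names = ['A', 'B', 'Third Column', 'Fourth Column', 'Fifth Column', 'C']
df = pd.read_fwf('iozone_output.txt', colspecs=colspecs, names=names,
                 header=None, skiprows=2)
# or colspecs='infer' to let pandas guess the boundaries from the first rows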

SurgicalOntologist
Jun 17, 2004

Pandas isn't effort, it's the easiest way. You can set the separator to be the regex for "arbitrary whitespace" (\s+ I think) and that will do it.

Edit: it might choke on the column names if they have spaces and aren't quoted though. Might have to parse those in a separate pass. Or a more creative regex. If the columns are separated by two or more spaces you could do \s\s+ I think.

SurgicalOntologist fucked around with this message at 11:51 on Apr 2, 2019

the yeti
Mar 29, 2008

memento disco



This may be a matter of philosophy but: using the logging module, would y’all consider it unneeded effort to set up separate streamhandlers for stdout and stderr and configure log levels appropriately (e.g., >= WARNING to stderr)?

What about if it’s all logging to a file anyway?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

the yeti posted:

This may be a matter of philosophy but: using the logging module, would y’all consider it unneeded effort to set up separate streamhandlers for stdout and stderr and configure log levels appropriately (e.g., >= WARNING to stderr)?

What about if it’s all logging to a file anyway?

It's really a matter of what you're going to do with the logs. Is it useful to you to have those warnings on stderr? If so, do it. If not, maybe do it if you might ever care. Not really any cost to doing it that way...
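
If you do go that route it's only a few lines anyway, something like (logger name made up):

Python code:
import logging
import sys

logger = logging.getLogger('myapp')
logger.setLevel(logging.DEBUG)

out = logging.StreamHandler(sys.stdout)
out.setLevel(logging.DEBUG)
out.addFilter(lambda record: record.levelno < logging.WARNING)  # keep warnings off stdout

err = logging.StreamHandler(sys.stderr)
err.setLevel(logging.WARNING)

logger.addHandler(out)
logger.addHandler(err)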

ironypolice
Oct 22, 2002

Rocko Bonaparte posted:

Are there any little tricks for parsing irregular space-formatted columns of text? Here's a paraphrased variant of what I am dealing with:

code:
                    Third   Fourth   Fifth
      A       B     Column  Column   Column       C
      1       2          3       4        5       6
And I want
A=1
B=2
Third Column=3
Fourth Column=4
Fifth Column=5
C=6

I can brute force this, but I was hoping for some kind of helper. I thought I could do something cute with csv, but... meh?

Maybe you could preprocess the input with something like awk? Idk if awk will handle this case out of the box, but it seems like the kind of thing it was built for.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

QuarkJets posted:

But usually the person producing fixed-width columns is at least being consistent about it; usually it's the result of hard-coded column widths and reading the data back is meant to use those same widths. Rocko, are you sure that you're unable to just specify the column widths and extract text that way? That's really the right approach here, assuming consistency

I don't want to count on it. One of the killers here is that the number of columns isn't even consistent. As it stands, I'm looking at a little dilly that finds the numbers and tries to infer columns from that.

SurgicalOntologist
Jun 17, 2004

Rocko Bonaparte posted:

I don't want to count on it. One of the killers here is that the number of columns isn't even consistent. As it stands, I'm looking at a little dilly that finds the numbers and tries to infer columns from that.

SurgicalOntologist posted:

Pandas isn't effort, it's the easiest way. You can set the separator to be the regex for "arbitrary whitespace" (\s+ I think) and that will do it.

Edit: it might choke on the column names if they have spaces and aren't quoted though. Might have to parse those in a separate pass. Or a more creative regex. If the columns are separated by two or more spaces you could do \s\s+ I think.

Python code:
data = pd.read_csv(path, sep=r'\s\s+')

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Looking at this more, I can share some more detail. The application this is wrapping is iozone. There's an "Excel output" mode that, well, dumps it to stdout. Because that's what Excel output is, apparently. However, that output is much more predictable.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
Pandas has trouble with the two offset headers if you pass header=[0,1], I'm guessing because of the duplicate names and maybe something else. You can do it by using header=1 and mangle_dupe_cols=True, but you lose half of the column name.

I wrote this because I had a similar challenge for a programming competition and didn't get it in time, and it's been haunting me since. I assume the winner knew some numpy/pandas trick that I did not, idk how the gently caress he got it because this is still pretty gnarly:
Python code:

def unfuck_table(filename, header_size=2):
    import re
    from itertools import chain, tee, groupby
    from operator import itemgetter
    regex = re.compile(r'\b\w+\b')
    with open(filename) as file:
        # yield every word token in a line along with its character span
        def scan(line):
            match = regex.search(line)
            while match:
                yield match.group(0), match.span()
                match = regex.search(line, pos=match.end())
        # one copy of the token stream for finding columns, one for building output
        parts, other_parts = tee(map(list, map(scan, file)))
        spans = map(itemgetter(1), chain(*other_parts))
        def overlaps(a, b):
            return max(a[0], b[0]) <= min(a[1], b[1])
        def union(a, b):
            return min(a[0], b[0]), max(a[1], b[1])
        # merge overlapping spans from every line into column "guides"
        guides = []
        for span in spans:
            def search():
                for i, guide in enumerate(guides):
                    if overlaps(span, guide):
                        guides[i] = union(span, guide)
                        return
                guides.append(span)
            search()
        guides = sorted(guides, key=itemgetter(0))
        def within(a, b):
            return a[0] >= b[0] and a[1] <= b[1]
        # map a token's span to the index of the guide that contains it
        def range_to_column(r):
            return next( i
                for i, guide in enumerate(guides)
                if within(r, guide))
        columnized = [
            [ (data, range_to_column(r)) for data, r in part ]
            for part in parts ]
        data = columnized[header_size:]
        # stitch header tokens that share a column into one multi-word name
        header = sorted(chain(*columnized[:header_size]), key=itemgetter(1))
        header = ([ ' '.join(map(itemgetter(0), groups))
            for _, groups in groupby(header, key=itemgetter(1)) ])
        assert len(header) == len(guides)
        data = [ list(map(itemgetter(0), row)) for row in data ]
        return [ header ] + data

E: gently caress I just figured it out. He used str.parse() and specified the width.

dougdrums fucked around with this message at 18:21 on Apr 2, 2019

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

dougdrums posted:

Pandas has trouble with the two offset headers if you pass header[0,1], I'm guessing because of the duplicate names and maybe something else. You can do it by using header=1 and mangle_dupe_cols=True, but you lose half of the column name.

I wrote this because I had a similar challenge for a programming competition and didn't get it in time, and it's been haunting me since. I assume the winner knew some numpy/pandas trick that I did not, idk how the gently caress he got it because this is still pretty gnarly:

E: gently caress I just figured it out. He used str.parse() and specified the width.

Woooooooow. I think I'm supposed to say, "I'm sorry for your loss." That is some self-torture.

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
Nah it was a good break from the regular torture.

SurgicalOntologist
Jun 17, 2004

Oh man, my brain just didn't parse that there was a multi-line header there. In that case I don't have a one line solution. Another output format is likely to be easier. Can you capture stdout?

dougdrums
Feb 25, 2005
CLIENT REQUESTED ELECTRONIC FUNDING RECEIPT (FUNDS NOW)
It's gonna be something like
Python code:

# pip install parse
from parse import parse
parse('{:7}{:8}{:11}{:8}{:9}{:8}', line)
for the example. I got the string formatter parse function confused with this package.
Since iozone uses fprintf to produce the output, you can just take the widths from the format string it's using. Or use the Excel output ...

Manually entering the format widths is cheating though imho :colbert:

Gangsta Lean
Dec 3, 2001

Calm, relaxed...what could be more fulfilling?
I’ve used this library before, does it do what you want? http://cxc.harvard.edu/contrib/asciitable/

Boris Galerkin
Dec 17, 2011

I don't understand why I can't harass people online. Seriously, somebody please explain why I shouldn't be allowed to stalk others on social media!
Speaking of data, I'm looking to store several GB of CSV data in a single compressed HDF5 file and I was wondering what package I should use for that. Right now I've found h5py, pytables, and there's also xarray I guess. Are there any pros/cons for any of these?

cinci zoo sniper
Mar 15, 2013




I can vouch for h5py being legit; I have used it for radio telescope data.


CarForumPoster
Jun 26, 2013

⚡POWER⚡

Boris Galerkin posted:

Speaking of data, I'm looking to store several GB of CSV data in a single compressed HDF5 file and I was wondering what package I should use for that. Right now I've found h5py, pytables, and there's also xarray I guess. Are there any pros/cons for any of these?

Why not just use pandas?
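
It writes HDF5 directly (PyTables underneath, so pip install tables); a sketch with made-up paths, assuming the CSVs share a schema:

Python code:
import pandas as pd

for i, path in enumerate(['part1.csv', 'part2.csv']):
    df = pd.read_csv(path)
    df.to_hdf('combined.h5', key=f'chunk_{i}', mode='a',
              complevel=9, complib='blosc')

chunk = pd.read_hdf('combined.h5', key='chunk_0')  # read one piece back later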
