Cyril Sneer
Aug 8, 2004

Life would be simple in the forest except for Cyril Sneer. And his life would be simple except for The Raccoons.
I'm trying to figure out how to implement a sort of callback functionality:
code:

def somefunc():
    print('do something here')

class Test():
    def __init__(self, Handler=None):
        self.handler = Handler

    def perform(self):
        print('run the function')
        self.handler()
 


And running this as...

code:
test = Test(Handler=somefunc)
test.perform()
run the function
do something here
It works as advertised, yay. The problem is I want to be able to pass an object into somefunc so it can actually do something useful. So something like,

code:
def somefunc(someobj):
    someobj.do_something()
But I'm really struggling with how to set this up. I can't just call test = Test(Handler = somefunc(MyObj)), because that executes the function immediately and supplies its return value. I also don't really know what to do with the perform method in the Test class. If I include arguments in the call - self.handler(stuff_in_here) - then it expects stuff_in_here to exist, which it doesn't in the function's scope.


Just so this doesn't sound like an XY problem let me add some detail. I'm writing a class that performs some streaming activity and provides connect, start, and stop methods. I want the user to be able to specify a function to be called once the connection is established.
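(For anyone with the same question, one common pattern is a sketch like the following: bind the argument ahead of time with functools.partial - a lambda works too - so the stored handler can still be called with no arguments. MyObj and the method names here are made up for illustration.)

```python
from functools import partial

class Test:
    def __init__(self, handler=None):
        self.handler = handler

    def perform(self):
        print('run the function')
        if self.handler is not None:
            self.handler()

class MyObj:
    def do_something(self):
        print('doing something')

def somefunc(someobj):
    someobj.do_something()

obj = MyObj()
# partial binds the argument now; perform() can still call handler() with no args
test = Test(handler=partial(somefunc, obj))
test.perform()  # prints 'run the function' then 'doing something'
```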


Cyril Sneer
Aug 8, 2004

Hmmm, no help on my callback question? (It's okay, I figured it out)

Oh well, I have another question. I'm struggling with laying out the proper design pattern to implement something. Basically, I want the user to be able to queue up a series of tasks (in any particular order) but have them execute sequentially. However, I want said sequential execution to take place in a thread that won't block my main program.

So it's kind of like hiring an assistant - I give him or her my to-do list (the chores can only be done one at a time) and meanwhile, I'm free to do whatever.

Now to add a twist - the tasks are actually all async coroutines from an external module that I'm using.
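(One sketch that fits the coroutine twist: run an asyncio event loop in a background thread and hand it the whole to-do list. chore here is a made-up stand-in for the external module's coroutines.)

```python
import asyncio
import threading

# Dedicated event loop in a background thread; coroutines submitted to it
# are awaited one at a time, so the "chores" run sequentially while the
# main thread stays free.
loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()

async def chore(name):              # stand-in for an external coroutine
    await asyncio.sleep(0.01)
    return f'{name} done'

async def run_all(coros):
    return [await c for c in coros]  # strictly sequential execution

future = asyncio.run_coroutine_threadsafe(
    run_all([chore('dishes'), chore('laundry')]), loop)
# ... the main thread is free to do whatever here ...
print(future.result())  # blocks only when you finally want the answers
```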

Thanks goons!

Cyril Sneer
Aug 8, 2004

Trying to get a question in here before the forums implode :v:

I'm working on a Windows application where I have a GUI that plays a movie. I capture every (say) 5th frame, then pass this image to a machine learning module that identifies whether it contains a cat or not. If it does contain a cat, it goes on for further image manipulation (dresses the cat up in a goofy costume, for example). The ML identification task takes a "long" time, and the image manipulation task takes a "long" time - long in the sense that it takes much longer than my frame update rate - which is fine, I don't need my dressed-up cat images in real time. However, I do not want these processing steps blocking my movie playback.

So I've been thinking I need some type of FIFO queue approach, where each new image gets pushed as a task into a queue. But that's about where my knowledge runs out. Searching for queues brings up all kinds of things: plain old queues, queues with multiple threads, queues with multiple processes, just multiprocessing, just multithreading, asyncio...blarrgh, it all kind of goes over my head.

As a further complication, I'm using PyQt5 and pyqtgraph, so, whatever my design pattern is, it has to be able to drive/connect/trigger GUI updates safely (so it probably has to play nicely with signals and slots).
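(A minimal sketch of the FIFO idea using just the stdlib - detect_cat and dress_up are hypothetical stand-ins for the ML and costume steps, and the Qt wiring is left out: in the real app the worker would emit a signal instead of appending to a list.)

```python
import queue
import threading

frame_queue = queue.Queue()
results = []

def detect_cat(frame):            # hypothetical stand-in for the slow ML step
    return frame % 2 == 0

def dress_up(frame):              # hypothetical stand-in for the slow manipulation
    return f'frame {frame} in costume'

def worker():
    while True:
        frame = frame_queue.get()
        if frame is None:         # sentinel: time to shut down
            break
        if detect_cat(frame):
            results.append(dress_up(frame))

t = threading.Thread(target=worker, daemon=True)
t.start()

# The "playback" side just pushes every 5th frame and never blocks
for frame in range(0, 20, 5):
    frame_queue.put(frame)
frame_queue.put(None)
t.join()
print(results)  # ['frame 0 in costume', 'frame 10 in costume']
```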

Thanks goons!

Cyril Sneer
Aug 8, 2004


KICK BAMA KICK posted:

So yeah speaking of is there a Discord/exit strategy for CoC? Learned a ton here, would hate to lose it.

Is a task queue what you're looking for -- main program enqueues your long-running functions, which are executed in a separate worker process? Sounds similar to a question I asked a while ago, and that was the answer. python-rq is probably the simplest one to try first (you'll need a Redis server, but that's trivial with Docker and a great excuse to learn it if you haven't). I don't think it pings back on job completion out of the box, so your main loop would probably query the queue periodically to see what's finished and then act accordingly.

Thanks, I think this might be a bit overkill. I actually went ahead and just used python's own threading and queue libraries ( https://docs.python.org/3/library/queue.html ), and this seems to work, though I have yet to wire it up to my GUI.

Cyril Sneer
Aug 8, 2004


QuarkJets posted:

Nice! I was going to suggest this route, just a single queue and a single thread running the combination of "analyze image" and "process image". Qt is designed to operate in this manner anyway, wherein the main thread runs the GUI and spawned threads perform heavier processing loads. With GPUs commonly being used for machine learning (and hopefully you use one for image processing) it's almost like you're escaping from the performance restrictions imposed by the GIL, it's like a performance vs effort sweet spot.

The training was done on a GPU but the model itself is small, so predictions run fine on standard CPU hardware. As does the image processing, though not quite in real time.


QuarkJets posted:

You have the option of using a QThread or a standard python thread. QThreads are QObjects, and will emit signals when a thread is started or finished, which is a cool feature that is sometimes useful; it sounds like you don't have any use for that in this case, but it's good to be aware of these features. Your main thread should have no issue receiving signals emitted by your worker thread regardless of which kind it is, so long as it's all in the same process

When a given image processing task finishes, I do want to trigger some GUI updates and as you know, you're not supposed to call a GUI update directly. So I will have to find a way to emit Q-compatible signals. Presumably if I use QThreads from the get-go, this should be relatively straightforward.

Cyril Sneer
Aug 8, 2004

So I'm at the point where I'm finally putting together a real python project and I'm trying to understand package/module structure better. I'm working on an ML project where I want to test different pipelines by stringing together different combinations of data feeders, models, and trackers. Don't need to get into too much detail about this, but, the way I've gone about structuring my package is as follows:

code:
MLP (top-level package folder)
---> models
---> trackers
     ---> __init__.py
     ---> trackerA.py
     ---> trackerB.py 
Assume that within trackerA.py we have BobsTracker and in trackerB.py we have JoesTracker. Then, when coding, I'd have to use, e.g.,

code:
from MLP.trackers.trackerA import BobsTracker
from MLP.trackers.trackerB import JoesTracker
However it seems kind of unnecessary/redundant to have to include the two "parts" of the namespace path. Like, ideally, I'd just want to do this:

code:
from MLP.trackers import BobsTracker 
from MLP.trackers import JoesTracker
But I'm not sure how to do that without requiring that somehow everyone put their trackers into the same file. Maybe some __init__.py trickery can be helpful here?
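(For reference, the usual answer is exactly that __init__.py trickery: re-export the classes from `MLP/trackers/__init__.py` so each tracker keeps its own file but callers import from the package. Sketch, assuming the layout above:)

```python
# MLP/trackers/__init__.py
from .trackerA import BobsTracker
from .trackerB import JoesTracker

__all__ = ['BobsTracker', 'JoesTracker']
```

After that, `from MLP.trackers import BobsTracker` works, and contributors never have to share one file.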

Cyril Sneer
Aug 8, 2004

Great, thanks, that was helpful!

Cyril Sneer
Aug 8, 2004

As a follow up, is there a way to allow dynamic assignment of classes? In order to assemble my pipeline, I was thinking of using a configuration dictionary, with the keys specifying the particular model to use, amongst other things.

I.e., something like:

code:
config_dict = { 'predictor': modelA }
I suppose one way would be to use a string value...

code:
config_dict = { 'predictor': 'modelA' }
then in code, I do something like

code:
if config_dict['predictor'] == 'modelA':
    # hard coded loading of model A
But the need to do a kind of string lookup seems cheap somehow, and I'm trying to think how to maintain a linkage between a class and its lookup name (I might not be wording this very clearly). Anyway, this seems like a solved problem, so some guidance would be helpful!
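(One common answer, as a sketch - ModelA and ModelB are hypothetical: keep an explicit registry dict so the string-to-class linkage lives in exactly one place, and the if/elif chain disappears.)

```python
class ModelA:
    def predict(self, x):
        return x + 1          # hypothetical model behaviour

class ModelB:
    def predict(self, x):
        return x * 2

# One dict maintains the linkage between lookup name and class
MODEL_REGISTRY = {
    'modelA': ModelA,
    'modelB': ModelB,
}

config_dict = {'predictor': 'modelA'}
predictor = MODEL_REGISTRY[config_dict['predictor']]()  # look up, then instantiate
print(predictor.predict(1))  # 2
```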

Cyril Sneer
Aug 8, 2004


Phobeste posted:

You can literally do this, classes are objects too and defining them with the class keyword creates an entry for that class object. You can call them to call their constructors, too


Thanks, yes I know this and probably shouldn't have used that example. I don't want to have the class object itself in the dictionary, but rather, use a descriptive entry for ease of use. So the user would just specify something like "vgg" or "resnet" and that would map to the appropriate class.
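(A registration decorator is one way to keep that mapping right next to each class definition - a sketch, with made-up model names and empty classes standing in for the real ones:)

```python
MODEL_REGISTRY = {}

def register(name):
    """Class decorator: record the class under a user-friendly lookup name."""
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register('vgg')
class VGGModel:          # hypothetical model class
    pass

@register('resnet')
class ResNetModel:       # hypothetical model class
    pass

print(sorted(MODEL_REGISTRY))  # ['resnet', 'vgg']
```

The user only ever types "vgg" or "resnet"; the decorator maintains the linkage automatically.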

Cyril Sneer
Aug 8, 2004

Interesting, thanks. I'm a bit confused by your example code though. Is MODEL_A an actual class definition (in which case how can that assignment work?), or something else?


Cyril Sneer
Aug 8, 2004

I've been playing around with some ideas from my questions a few pages back, including a way of dynamically determining available model prediction classes. See the following code:

code:
from abc import ABC, abstractmethod


class Animal(ABC):
    def __init__(self):
        super().__init__()

    @abstractmethod
    def speak(self):
        pass


class Cat(Animal):
    def __init__(self, y=0):
        super().__init__()
        self.y = y
        self.fixed = 'A cat'

    def speak(self):
        print('meow!')


class Dog(Animal):
    def __init__(self, x=0):
        super().__init__()
        self.x = x
        self.fixed = 'A Dog'

    def speak(self):
        print('bark!')


class Puppy(Dog):
    def __init__(self, x=0):
        super().__init__()
        self.x = x
        self.fixed = 'a puppy'


for c in Animal.__subclasses__():
    print(c.__name__)
whose output is:
code:
Cat
Dog
So, I'm able to enumerate my Cat and Dog classes, but not the derived Puppy class, and I'm not quite sure why. Is there something I can do to find it as well?

(Also, how are y'all able to do those Python code blocks?)

Cyril Sneer
Aug 8, 2004


necrotic posted:

It's not a direct subclass of Animal. You would need to recurse through each subclass and look at their subclasses, too.

yaaaa I get this, but it seems kind of odd that the registration trick only works with direct subclasses. Like, so long as all the classes in the inheritance tree support the original ABC interface, wouldn't it be useful to crawl through them?
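(The recursion necrotic describes is only a few lines - a sketch reusing the Animal example, with the class bodies trimmed down:)

```python
from abc import ABC, abstractmethod

class Animal(ABC):
    @abstractmethod
    def speak(self): ...

class Cat(Animal):
    def speak(self): print('meow!')

class Dog(Animal):
    def speak(self): print('bark!')

class Puppy(Dog):
    pass

def all_subclasses(cls):
    """Recursively collect every descendant of cls, not just direct children."""
    found = set()
    for sub in cls.__subclasses__():
        found.add(sub)
        found |= all_subclasses(sub)   # descend into each subclass's subclasses
    return found

print(sorted(c.__name__ for c in all_subclasses(Animal)))
# ['Cat', 'Dog', 'Puppy']
```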

Cyril Sneer
Aug 8, 2004

I have a situation where I'm extracting two sets of byte arrays from a larger byte array buffer as follows:

code:
msb = np.frombuffer( data[0::2] , dtype=np.int8, count=64)
lsb = np.frombuffer( data[1::2] , dtype=np.int8, count=64)
At this point, msb and lsb are two length-64 arrays. What I want to do is somehow "match" the two bytes together to form a 16-bit integer. That is, my first 16-bit integer would be formed by merging msb[0] and lsb[0], the second by merging msb[1] and lsb[1], etc. Any clever ways to do this?

Edit:

Even better, can I directly interpret my original 128-byte byte array as 64 16-bit integers, rather than my split-and-combine approach? Looking through the struct package, there are some unpack functions that seem to be what I'm looking for?
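(Both routes can be sketched as follows. The byte order is an assumption here - big-endian, i.e. the MSB at the even indices as in the slicing above - and the sample data is synthetic:)

```python
import struct
import numpy as np

# 128 bytes holding 64 big-endian signed 16-bit integers (assumed layout)
data = struct.pack('>64h', *range(64))

# numpy: interpret the buffer directly, no split-and-combine needed
vals_np = np.frombuffer(data, dtype='>i2')

# struct: the equivalent one-shot unpack
vals_struct = struct.unpack('>64h', data)

print(vals_np[:3], vals_struct[:3])
```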

Cyril Sneer fucked around with this message at 17:56 on Apr 30, 2021

Cyril Sneer
Aug 8, 2004


OnceIWasAnOstrich posted:

I am missing something. Why can't you just frombuffer() with a np.int16 dtype? Is the original data two 8-bit signed ints sequentially or something? If it's something like that, just cast to 16-bit, multiply your more-significant part by the appropriate factor (or bit-shift), then add them together.

Thanks, yep, I figured out how to do it with both frombuffer() or struct.unpack().

Cyril Sneer
Aug 8, 2004

I'm working with very large lists-of-dictionaries, where each dictionary includes a large 2D numpy array of floats (amongst other keys). I have a few processing steps where I want to sort this list into two lists based on a certain condition on the numpy array - basically a keep list and a reject list. The logic here isn't challenging, but in terms of speed and/or memory use, is it better to modify in place (iterating over the list backward and popping the unwanted elements) or to create a new list via comprehension?

I realize there may not be a universal answer here.
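(A sketch of the comprehension route - the zero-check condition here is just illustrative. Note that comprehensions copy only references to the dicts, not the big arrays, so the memory cost is modest, and they avoid the quadratic behaviour of repeatedly popping from a list:)

```python
import numpy as np

records = [{'img': np.zeros((4, 4))}, {'img': np.ones((4, 4))}]

# Two passes, two new lists; the underlying arrays are shared, not copied
keep = [r for r in records if r['img'].any()]
reject = [r for r in records if not r['img'].any()]
print(len(keep), len(reject))  # 1 1
```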

Cyril Sneer
Aug 8, 2004


ExcessBLarg! posted:

Roughly how big are your lists? How often do you anticipate having to actually modify them (e.g., "rarely" vs. "frequently")?

You might have to benchmark it.

~200,000 entries each containing a 256x256 image.

There are 3 steps I perform. The first loop is a zero-check (np.all applied to each image); the second loop is more complicated, as I'm checking certain pixel ranges for certain features; then finally I resize everything to a new size (~80 x 80).

What I've been finding is that my loops start fast but slow to a nearly-unusable crawl toward the end and I'm not really sure why.

Cyril Sneer
Aug 8, 2004

Hey guys. To what extent can these various web frameworks be used to control local hardware?

Let me set up the problem I'm trying to solve. I work in a facility where Technician Bob might want to interface with hardware X, and Technician Sam might want to interface with hardware Y. The way it works right now is someone like me physically accesses Bob's laptop and installs whatever Python Stuff (environment, scripts) is needed for interfacing with hardware X. Then, someone like me gets ahold of Sam's computer and installs whatever Python Stuff is needed for him to interface with hardware Y.

I was thinking it would be really cool if instead Bob, Sam, and whoever else could simply access an internal webapp that provided the necessary functionality. I know that you can control hardware via a web interface, but (and I'm going to bungle the phrasing here) where I've seen this, it's external hardware connected to a server, and the server provides a remote user access to that hardware. What I'm thinking is a bit of an inversion of this -- the user connects their laptop to the hardware, loads the appropriate site, and then that "remote" site enables control of the local hardware (all of this would be fully internal).

My coding background is primarily in DSP/algorithm development/embedded processing so this webapp stuff is all a bit foreign to me.

Cyril Sneer
Aug 8, 2004


cum jabbar posted:

Replace that laptop with something permanently attached to the machine that runs a web server.


CarForumPoster posted:

Computer hardware is cheap compared to technician time so I’m assuming you’re putting a dedicated laptop on the hardware.

No, there aren't dedicated laptops attached to each machine. Yes, it would be easy if that was the case - hence the question.

To elaborate, these "machines" are really just things like scopes and VNAs. We have scripts for operating them in particular ways (along with collecting/recording data in certain ways). Our engineers have their own laptops and want to be able to use these tools freely.

Cyril Sneer
Aug 8, 2004


CarForumPoster posted:

Could you draw a use diagram that has these components and some idea of what scripts are where?

It sounds like the current state of affairs is this:


And what you want is...??this??


IDGI, provide the specifics. What problem you tryin to solve, what does success look like?

The top is what we currently have. I'd elaborate though and say the scripts are really apps, featuring GUIs, that allow the user to interact with the hardware in various pre-defined ways.

The bottom is sort-of what I want (sorry, can't edit the diagram right now): instead of the functionality being provided via the local Python app, it's provided "in browser" via a web app.


I know you can do remote users connecting to a remote server to control remote-connected hardware (think of all those cam sites where users can go in and control the camera).
I want a remote user connecting to a remote server to control locally-connected hardware.

StumblyWumbly posted:


If you definitely need a web based answer, I think the computer would need something special installed so the web app can interface with the serial or USB or w/e

I think this is what it all hinges on. Seems like there's no way for an app-in-browser to interact with the hardware on the machine it's running on.

Cyril Sneer
Aug 8, 2004


LightRailTycoon posted:

There is webUSB and webserial, but they are new, not fully supported, and require JavaScript drivers.

OooooOOh, this looks promising!

Cyril Sneer
Aug 8, 2004

Looking for advice on the following example code I've been playing around with:

code:
class aName:
    vals = []

    @classmethod
    def do_something(cls, x, y):
        output = x + y
        cls.vals.append(output)
        print(f'appended {output} in {cls}')

    @classmethod
    def get_vals(cls):
        return cls.vals


class Bob(aName):
    vals = []


class Sam(aName):
    vals = []

    @classmethod
    def do_something(cls, x, y):
        output = x * y
        cls.vals.append(output)
        print(f'appended {output} in {cls}')


Bob.do_something(1, 3)
Bob.do_something(2, 3)
Bob.do_something(3, 3)

Sam.do_something(1, 3)
Sam.do_something(2, 3)
Sam.do_something(3, 3)

Bob.get_vals()  # returns [4, 5, 6]
Sam.get_vals()  # returns [3, 6, 9]
It's easiest to understand if you start from the bottom, where you'll see my desired calling pattern. I want to create different "static" classes that can accumulate different results depending on which is called. You'll see the Bob class does not override anything and so preserves the addition operation. In my Sam class, I've overridden the do_something function to perform multiplication instead of addition.

The above code does actually work the way I want it to, I just don't know if it's the best way to do it (or might in fact be considered a bad way!).

My main gripe is the need to define that Bob class, which doesn't override any behaviour, and only serves to re-scope the class variable.
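(For anyone curious, `__init_subclass__` can remove that boilerplate: the base class hands every subclass its own vals list automatically. A sketch of the same example, with the base class renamed:)

```python
class Accumulator:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.vals = []          # each subclass gets its own list automatically

    @classmethod
    def do_something(cls, x, y):
        cls.vals.append(x + y)

    @classmethod
    def get_vals(cls):
        return cls.vals


class Bob(Accumulator):
    pass                       # no re-scoping boilerplate needed


class Sam(Accumulator):
    @classmethod
    def do_something(cls, x, y):
        cls.vals.append(x * y)


Bob.do_something(1, 3)
Sam.do_something(2, 3)
print(Bob.get_vals(), Sam.get_vals())  # [4] [6]
```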

Cyril Sneer
Aug 8, 2004


eXXon posted:

You can make do_something an abstractmethod and move the implementation to Bob.

Unless you really desperately need these things to be quasi-singletons, I would not make do_something a classmethod. For one, it makes it difficult to have more than one around, whereas I don't see why users should be forbidden from doing so. If you want a default set of Bob/Sam/whatever instances you can define that in a module.

It's the second part - I want to provide a default set and those should be the only ones available to the user. They're really a set of pre-defined ETL operations.

To elaborate, I have a bunch of files, containing tabular data, but structured differently. So I want the user to be able to invoke calls like:

code:
RecordsA.load(filepath1, a_config_dict)
RecordsA.load(filepath2, a_config_dict)
RecordsA.load(filepath3, a_config_dict)

RecordsB.load(filepath4, b_config_dict)
RecordsB.load(filepath5, b_config_dict)
RecordsB.load(filepath6, b_config_dict)
where the load method loads the file and, along with the config_dict, extracts the relevant info from a table and appends it to a pandas dataframe. The exact processing steps may vary, so a single fixed function won't work.

I only want the user to access the provided Records interfaces.

Cyril Sneer
Aug 8, 2004


eXXon posted:

You're kind of missing the part where Cyril wanted them to behave like singletons, which is why vals is a class attribute rather than an instance attribute. The best advice I've read in that regard is to only try to make something a singleton if your program cannot possibly work otherwise and I don't think that's the case here. To that end,

... why can't there be more than one RecordsA? If these are chunks of a larger dataset, couldn't a user conceivably want filename1+2 separately from filename3?

And if you're ultimately creating a dataframe from this, are you planning to protect it from mutation somehow while the Records classes provide some subset of dataframe functionality? Otherwise I'm not sure what you mean by only allowing the user to access the Records interfaces.

I should clarify on the files a bit. I have a giant pool of files named as -

code:
recordsfile_TypeA_00001.xlsx
recordsfile_TypeA_00002.xlsx
recordsfile_TypeA_00003.xlsx
recordsfile_TypeB_00001.xlsx
recordsfile_TypeB_00002.xlsx
recordsfile_TypeC_00001.xlsx
recordsfile_TypeC_00002.xlsx
recordsfile_TypeC_00003.xlsx
So there are multiple file record types (A, B, C here), and multiple cases of each type. I want all Type A records accumulated into one dataframe, all Type B records into another, etc. My plan was to iterate through each file name, extract the record type (easy), then via a dictionary look-up call the appropriate (class) function, e.g., RecordsA.load(recordsfile_TypeA_xxx, a_config_dict). The actual processing steps could be the same between, say, TypeA and TypeB, aside from some custom details fed in through the two different config dicts. So this is equivalent to my empty Bob class that overrides nothing but re-sets the class variable. In TypeC, the processing steps differ, so I override the load method with the different steps.


Does that make sense?

Cyril Sneer
Aug 8, 2004


Falcon2001 posted:

Even if it was supposed to be a Singleton, I wouldn't just implement one class level attribute. You should be enforcing the Singleton pattern via some init fuckery or via a factory.

Can you elaborate on this? Even if I don't ultimately go in this direction I'd still like to learn/try it.
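(One common version of the "init fuckery" Falcon2001 mentions is overriding `__new__` so construction always hands back the same instance. A sketch - Registry and its vals attribute are made-up example names:)

```python
class Singleton:
    _instance = None

    def __new__(cls, *args, **kwargs):
        # Always return the one shared instance, creating it on first use
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance


class Registry(Singleton):
    def __init__(self):
        # Guard so repeated "construction" doesn't wipe accumulated state
        if not hasattr(self, 'vals'):
            self.vals = []


a = Registry()
b = Registry()
a.vals.append(1)
print(a is b, b.vals)  # True [1]
```

A factory function that caches and returns one instance accomplishes the same thing with less magic.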

Cyril Sneer
Aug 8, 2004

I skipped over them, but I wanted to comment on the "normal" class-based solutions from QuarkJets and Falcon2001. I understand both of these, and I agree they work, but I'm somewhat dissatisfied with this approach: I don't really want the user to have to instantiate the classes, and since they're liable to be accessed in various other functions, the instances would have to live as globals (or be passed around, which would be annoying). Whereas with static class methods in a module, I can call them from wherever without instantiating anything.

Cyril Sneer
Aug 8, 2004


Falcon2001 posted:

For example, I'm using one in a current program I'm working on, because the specific use case is essentially a logging system of sorts. It does not take any additional side effect actions of any kind, it just stores data in a globally available place for later retrieval,

I would argue that this is actually similar to what I'm doing, except replacing the "logging" with "tables" (into which data has been extracted from the various files).

The vals list in my example code would be the equivalent to your logging accumulation.

Cyril Sneer
Aug 8, 2004

Thanks for all the responses. So this doesn't turn into an XY problem, maybe I'll just start from the top and explain what I'm trying to do. See the following diagram:

https://ibb.co/Fgy2f26

I'm working on a project to extract data from a bunch of production-related excel files. Individual files consist of two sheets - a cover sheet and a report sheet. The cover sheet has certain fields whose values I extract and the report sheet contains tabular data records. This tabular data gets extracted, possibly cleaned, then merged with the cover fields.

The blocks in the black circles can be considered stable/fixed, meaning the same code works for all file types. The red circles represent places where the code may vary. For example, for some file types, the clean block has to have a few lines of code to deal with merged cells.

We can think of there being 3 file types. FileTypeA and FileTypeB require the same processing steps, with only certain options in a configuration dictionary that need changing (column names, desired fields, that sort of thing). However, they are different datasets and should be separately aggregated. A 3rd file type, FileTypeC, requires some different processing in the Clean module.

Normal classes at first pass seem like an obvious solution. I can define standard behaviors for those 5 blocks and aggregate the results to each class instance. Then, I can subclass the blocks when/if needed (i.e., to handle FileTypeC). The thing that doesn't sit well with me here is that none of these blocks actually requires any state information. They can all be standalone functions. This was partially why I explored the singleton approach.
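(The stateless-functions alternative can be sketched like this - all names and the cleaning logic are hypothetical stand-ins for the real steps: plain module-level functions, plus a per-type config registry that picks which implementation each file type uses. No classes or instances required.)

```python
# Stateless pipeline steps as plain functions; per-type behaviour is
# just a config entry pointing at the implementation to use.
def clean_default(rows, config):
    return [r for r in rows if r is not None]

def clean_merged_cells(rows, config):
    # hypothetical stand-in for FileTypeC's merged-cell handling:
    # forward-fill missing values instead of dropping them
    filled, last = [], None
    for r in rows:
        last = r if r is not None else last
        filled.append(last)
    return filled

PIPELINES = {
    'TypeA': {'clean': clean_default},
    'TypeB': {'clean': clean_default},      # same steps, different config
    'TypeC': {'clean': clean_merged_cells}, # different clean step
}

AGGREGATED = {t: [] for t in PIPELINES}     # one accumulator per type

def load(file_type, rows):
    cfg = PIPELINES[file_type]
    AGGREGATED[file_type].extend(cfg['clean'](rows, cfg))

load('TypeA', [1, None, 2])
load('TypeC', [1, None, 2])
print(AGGREGATED['TypeA'], AGGREGATED['TypeC'])  # [1, 2] [1, 1, 2]
```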

Cyril Sneer
Aug 8, 2004

Fun little learning project I want to do, but I need some direction. I want to extract all the video transcripts from a particular youtube channel and make them both keyword- and semantically-searchable, returning the relevant video timestamps.

I've got the scraping/extraction part working. Each video transcript is returned as a list of dictionaries, where each dictionary contains the timestamp and a (roughly) sentence-worth of text:

code:
    {
    'text': 'replace the whole thing anyways right so',
     'start': 1331.08,
     'duration': 4.28
    }

I don't really know how YT breaks up the text, but I don't think it really matters. Anyway, I obviously don't want to re-extract the transcripts every time, so I need to store everything in some kind of database -- and in a manner amenable to reasonably speedy keyword searching. If we call this checkpoint 1, I don't have a good sense of what this solution would look like.

Next, I want to make the corpus of text (is that the right term?) semantically searchable. This part is even foggier. Do I train my own LLM from scratch? Do some kind of transfer learning thing (i.e., take an existing model and provide my text as additional training data)? Can I just point chatGPT at it (lol)?

I want to eventually wrap it in a web UI, but I can handle that part. Thanks goons! This will be a neat project.
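(For checkpoint 1, one low-friction sketch is SQLite's built-in FTS5 full-text index: no server to run, and each hit carries its timestamp. The table layout and sample rows are made up, and FTS5 availability depends on how your sqlite3 was built.)

```python
import sqlite3

con = sqlite3.connect(':memory:')
# FTS5 virtual table: all columns are indexed for full-text search
con.execute("CREATE VIRTUAL TABLE segments USING fts5(video_id, start, text)")
con.executemany(
    "INSERT INTO segments VALUES (?, ?, ?)",
    [
        ('vid1', '1331.08', 'replace the whole thing anyways right so'),
        ('vid1', '1335.36', 'and then solder the new capacitor in'),
    ],
)
hits = con.execute(
    "SELECT video_id, start FROM segments WHERE segments MATCH ?",
    ('capacitor',),
).fetchall()
print(hits)  # [('vid1', '1335.36')]
```

For the semantic half, training from scratch shouldn't be necessary: the usual route is a pretrained sentence-embedding model plus a vector index over the same segments.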

Cyril Sneer fucked around with this message at 03:46 on Apr 17, 2024

Cyril Sneer
Aug 8, 2004

I have a case where I create two instances of an object via a big configuration dictionary. The difference between the two objects is a single, different value for one key. So, this works:

code:
big_config_dict = { .... }

B = dict(big_config_dict)
B['color'] = 'blue' #this one key is the only difference

thingA = Thing(big_config_dict) #default values
thingB = Thing(B) #single modified value
...but it feels clunky. Am I missing some simpler way to do this?

Cyril Sneer
Aug 8, 2004

boofhead posted:
If I want to take a base dict and change value in one line, I'll usually use a spread operator if the structure and changes are simple

Python code:
config_1 = {'val1': 100, 'val2': 200}
# {'val1': 100, 'val2': 200}

config_2 = {**config_1, 'val2': 0}
# {'val1': 100, 'val2': 0}
oooh, okay yeah this is perfect.


Cyril Sneer
Aug 8, 2004


nullfunction posted:

Since you're offering it up for a roast, here are some things to consider:
  • Your error handling choices only look good on the surface. Yes, you've made an error message slightly more fancy by adding some text to it, yes, the functions will always raise errors of a consistent type. They also don't react in any meaningful way to handle the errors that might be raised or enrich any of the error messages with context that would be useful to an end user (or logging system). You could argue that they make the software worse because they swallow the stack trace that might contain something meaningful (because they don't raise from the base exception).


Do you have any good resources that discuss the proper way to deal with error handling?
