TURTLE SLUT
Dec 12, 2005

Jo posted:

Are you not storing the individual sessions as Django objects? Could you do GameSession.objects.get(username="hurfdurf"), or something like it to grab a session?
Session is an object composed of several other objects, including the entire game. Django doesn't support saving arbitrary objects to the database by default; you have to use the provided Fields, which are the usual database data types. I installed a PickleField plugin to enable Fields that hold serialized objects, but to read from that Field I assume Django has to deserialize, and reserialize when saving.

This would end up happening constantly, every time the user sends a command, and I'm aiming for this app to be usable by possibly thousands of people at once - not that I really will have that many users, but it's a possibility I'd like to account for. What I'm wondering is whether serialization and deserialization in Python is usually resource-intensive enough that it would not really make sense in a context like this.
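
For reference, the setup is roughly this - assuming the django-picklefield plugin; the names here are placeholders, not my actual code:
Python code:
from django.db import models
from picklefield.fields import PickledObjectField  # the PickleField plugin

class GameSession(models.Model):
    username = models.CharField(max_length=64)
    # the whole game state gets pickled on save and unpickled on load
    state = PickledObjectField()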


Jo
Jan 24, 2005

:allears:
Soiled Meat

Cukel posted:

Session is an object composed of several other objects, including the entire game. Django doesn't support saving arbitrary objects to the database by default; you have to use the provided Fields, which are the usual database data types. I installed a PickleField plugin to enable Fields that hold serialized objects, but to read from that Field I assume Django has to deserialize, and reserialize when saving.

This would end up happening constantly, every time the user sends a command, and I'm aiming for this app to be usable by possibly thousands of people at once - not that I really will have that many users, but it's a possibility I'd like to account for. What I'm wondering is whether serialization and deserialization is usually resource-intensive enough that it would not really make sense in a context like this.

I suppose I don't understand adequately what your session is. I thought originally you were talking about Django's built in session support, which is why I figured you'd have problems accessing it from a process outside the original, but I see that it's actually your internal game state on a per-user basis. That's interesting.

It's hard to say how much it will cost to serialize and deserialize without looking at the code. It's probably best to simply test, in this case. Servers are beefy, and these operations need not be in real time. This is a text adventure, right? It's not an FPS or RTS. Users won't notice 100ms latency.

Could you serialize a session to a string and do len(ses) really quickly, just to see how big it is? If it's only 1k, you're fine. If it's 4k or 5k, you're still fine. If it's 100k, you're probably still okay - even at 1000 users, that's 100 megs of memory.
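
Something like this is enough for the size check (a quick sketch - "session" stands in for whatever object you're actually storing):
Python code:
import pickle

# rough size of one serialized session
data = pickle.dumps(session)  # 'session' is your existing game-state object
print(len(data), "bytes")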

TURTLE SLUT
Dec 12, 2005

Ah, sorry for the confusion. Looks like the pickled file is about 19k, probably bigger as the game develops. Maybe you're right and I'm doing a little bit of premature optimization - but it still seems a little icky to be constantly pickling and unpickling objects. Thanks for all the advice, I'll contemplate on this and maybe do some stress testing in the future to see what kind of bottlenecks I have.

Jo
Jan 24, 2005

:allears:
Soiled Meat

Cukel posted:

Ah, sorry for the confusion. Looks like the pickled file is about 19k, probably bigger as the game develops. Maybe you're right and I'm doing a little bit of premature optimization - but it still seems a little icky to be constantly pickling and unpickling objects. Thanks for all the advice, I'll contemplate on this and maybe do some stress testing in the future to see what kind of bottlenecks I have.

I agree that the pickle/unpickle process seems icky. Ideally, I'd like to save a game state as little more than a player location, an inventory, and whatever stats the person has. The world can be a separate db object or even a file. If each person is directly accessible in the DB without pickling, that lets you do things a bit easier on the db side. You can't perform DB ops on pickled data (unless there's some serious madness I don't know about).

I can understand why you wouldn't want to do that, though, especially if the session is composed of complicated data types. Perhaps the individual sub-session objects can be decomposed into db-ready strings? I can see an inventory stored as a delimited text list, e.g. "leather_armor|golden_dildo|emoticon|butts", which can be stored easily as a TextField() or CharField() in Django. The location could perhaps be a hierarchy of locations: "universe:supercluster:milky_way:solar_system:earf:usausausa:kahleefonia:mums_house:basement".
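
A flattened model along those lines might look roughly like this (the field names are just examples):
Python code:
from django.db import models

# sketch of the flattened approach: plain DB-friendly fields instead of a pickle
class PlayerState(models.Model):
    username = models.CharField(max_length=64)
    location = models.CharField(max_length=255)  # "universe:...:mums_house:basement"
    inventory = models.TextField()               # "leather_armor|golden_dildo|..."
On load, the inventory is just player.inventory.split("|").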

TURTLE SLUT
Dec 12, 2005

Yeah, I suppose that's a possibility, but the game I have so far wasn't built for Django or databases from the ground up, so I have all these dang complex objects. I could convert the whole game to a database-oriented design but it would take a whole lot of work.

Haystack
Jan 23, 2005





Well, you could always just go with the path of least resistance and use an Object Database.

Modern Pragmatist
Aug 20, 2008
I have a feeling I know the answer to this, but I'll ask anyhow.

I'm working on a project that ~100 people are currently using. In trying to be more stringent on naming conventions, we need to change the name of the module. Ideally, we would like to have a deprecation warning if the user attempts to import the old module. Is this even possible? I would prefer something graceful like this so I don't get a billion "OMG WTF where'd it go?!" emails. Any ideas?

xPanda
Feb 6, 2003

Was that me or the door?

Modern Pragmatist posted:

I have a feeling I know the answer to this, but I'll ask anyhow.

I'm working on a project that ~100 people are currently using. In trying to be more stringent on naming conventions, we need to change the name of the module. Ideally, we would like to have a deprecation warning if the user attempts to import the old module. Is this even possible? I would prefer something graceful like this so I don't get a billion "OMG WTF where'd it go?!" emails. Any ideas?

What is your expected answer?

If you want this module to only work when importing with the new name, just have the old module's file raise a DeprecationWarning which gives the name of the new module. That way when they try and import the old module name the warning message will tell them exactly how to fix it.

If you want them to be able to use the old module name and the new module name at the same time, things will get ugly. It's probably not the best way to go.
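
The stub for the old name can be tiny; a sketch (oldname/newname are placeholders):
Python code:
# oldname.py -- the entire old module, if the old import should fail loudly
raise DeprecationWarning(
    "the 'oldname' module was renamed; use 'import newname' instead"
)
If the old name should keep working for a while instead, swap the raise for warnings.warn(..., DeprecationWarning) followed by from newname import *.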

Modern Pragmatist
Aug 20, 2008

xPanda posted:

If you want this module to only work when importing with the new name, just have the old module's file raise a DeprecationWarning which gives the name of the new module. That way when they try and import the old module name the warning message will tell them exactly how to fix it.

This is the behavior I am looking for. I actually misspoke: I meant package, not module. So this is being installed into site-packages. There is no file for the old package name on a new install (unless the only way to go about this is to install a pseudo-package or something), so if someone tries to run a script that uses the old name they will get an ImportError with no more information.

Emacs Headroom
Aug 2, 2003

Modern Pragmatist posted:

I have a feeling I know the answer to this, but I'll ask anyhow.

I'm working on a project that ~100 people are currently using. In trying to be more stringent on naming conventions, we need to change the name of the module. Ideally, we would like to have a deprecation warning if the user attempts to import the old module. Is this even possible? I would prefer something graceful like this so I don't get a billion "OMG WTF where'd it go?!" emails. Any ideas?

What's wrong with just raising a warning for awhile until you switch things over?

I've noticed that scipy does this when it changes function default arguments etc.

xPanda
Feb 6, 2003

Was that me or the door?

Modern Pragmatist posted:

This is the behavior I am looking for. I actually misspoke: I meant package, not module. So this is being installed into site-packages. There is no file for the old package name on a new install (unless the only way to go about this is to install a pseudo-package or something), so if someone tries to run a script that uses the old name they will get an ImportError with no more information.

Hmm, that's a tough one. If there exists no package with the old name, I don't see how you might run a script informing the user of the new name. As you said, you'd have to have a pseudo-package with the old name, but it doesn't seem clean to install such a package to site-packages merely to inform the user. It might be your only option though, should you need that behaviour.

Since this is a package installed to site-packages, how is this package distributed to the users? Unless you have some sort of automated system, or you are all using this package on the same system, they're going to have to notice the package's name change when installing the new version.

Ridgely_Fan posted:

What's wrong with just raising a warning for awhile until you switch things over?

I've noticed that scipy does this when it changes function default arguments etc.

I wondered about that too, and I made the assumption that if the users wouldn't listen to an announcement about the change, they would probably ignore non-fatal warnings. It depends on the method of distribution, too. Otherwise, yeah, it's a courteous way to go.

xPanda fucked around with this message at 23:53 on Jun 19, 2012

Modern Pragmatist
Aug 20, 2008

Ridgely_Fan posted:

What's wrong with just raising a warning for awhile until you switch things over?

I've noticed that scipy does this when it changes function default arguments etc.

Hmm I'll look into the way scipy does it. That doesn't seem so bad, with the exception that I think we will make this change in the next major release so this warning would only exist for people using the development version.

quote:

Since this is a package installed to site-packages, how is this package distributed to the users? Unless you have some sort of automated system, or you are all using this package on the same system, they're going to have to notice the package's name change when installing the new version.

We supply the source code as well as windows installers. The problem is that the rename is pretty subtle. It's going from cardiac to pycardiac so it could possibly be overlooked by the user.

I guess I was hoping there was a way to supply an alias and then use package.__name__ to see which alias was used and throw the appropriate warning.

This is why I always stress out when I have to come up with a name for something.

Comrade Gritty
Sep 19, 2011

This Machine Kills Fascists

Cukel posted:

Ok, neat, thanks.

I did just realize a problem with the two processes communicating with the Sessions through the Django database ORM.

For context: A Session is basically the object that's created every time someone starts or loads a new game in my web app. It contains all the game logic, interfaces, states, and such. It's updated every time the player sends an in-game command, so basically there's a lot of these Sessions created and they are updated very frequently, like probably one command every few seconds for just one user.

Right now the Sessions are just kept in and updated from a dictionary and saved to the Django database only when the player saves the game.

To save a Session in the Django database, I think I would have to use PickleField or other serialization methods. Would serializing and deserializing the Session every time it's updated be very costly performance-wise?

Don't use Pickle if your session can contain any user-entered data whatsoever.

Comrade Gritty
Sep 19, 2011

This Machine Kills Fascists

Modern Pragmatist posted:

Hmm I'll look into the way scipy does it. That doesn't seem so bad, with the exception that I think we will make this change in the next major release so this warning would only exist for people using the development version.


We supply the source code as well as windows installers. The problem is that the rename is pretty subtle. It's going from cardiac to pycardiac so it could possibly be overlooked by the user.

I guess I was hoping there was a way to supply an alias and then use package.__name__ to see which alias was used and throw the appropriate warning.

This is why I always stress out when I have to come up with a name for something.

Release a new version of cardiac that has install_requires for pycardiac. Inside of cardiac, simply do from pycardiac import * (you'll need one of these for each file). Include a (Pending)DeprecationWarning in those files. Eventually remove the from pycardiac import *'s and release another new version of cardiac where the setup.py raises an exception and says to install pycardiac instead.
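
The transitional cardiac/__init__.py is basically three lines; a sketch:
Python code:
# cardiac/__init__.py in the shim release (repeat for each submodule)
import warnings
warnings.warn(
    "the 'cardiac' package has been renamed to 'pycardiac'; update your imports",
    PendingDeprecationWarning,
)
from pycardiac import *  # noqa
with install_requires=["pycardiac"] in the shim's setup.py, so installing cardiac pulls in pycardiac automatically.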

Hed
Mar 31, 2004

Fun Shoe

Cukel posted:

Ok, neat, thanks.

I did just realize a problem with the two processes communicating with the Sessions through the Django database ORM.

For context: A Session is basically the object that's created every time someone starts or loads a new game in my web app. It contains all the game logic, interfaces, states, and such. It's updated every time the player sends an in-game command, so basically there's a lot of these Sessions created and they are updated very frequently, like probably one command every few seconds for just one user.

Right now the Sessions are just kept in and updated from a dictionary and saved to the Django database only when the player saves the game.

To save a Session in the Django database, I think I would have to use PickleField or other serialization methods. Would serializing and deserializing the Session every time it's updated be very costly performance-wise?

You might try using the ORM for blobs, something like this: http://djangosnippets.org/snippets/1597/

TURTLE SLUT
Dec 12, 2005

Steampunk Hitler posted:

Don't use Pickle if your session can contain any user-entered data whatsoever.
Funnily enough, due to the nature of the app I'm writing, the Session is composed of little other than user-entered data. BUT I'm not at any point sending Pickles through a network, so it shouldn't be a problem if I understand anything about this. All I do is pickle/unpickle Sessions between server memory and hard drive when the user sends a "Save" command. User-entered data shouldn't be run or eval'd at any point.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Cukel posted:

Funnily enough, due to the nature of the app I'm writing, the Session is composed of little other than user-entered data. BUT I'm not at any point sending Pickles through a network, so it shouldn't be a problem if I understand anything about this.

Nope.

JetsGuy
Sep 17, 2003

science + hockey
=
LASER SKATES
This is a dumb question, which I have a feeling the answer will just be "know your data, stupid". However, it also comes down to a general computer science question that I don't know the details of.

I am currently writing a script/program to essentially make giant lightcurve (i.e. time series) data for me from existing FITS files. I need to be able to extract info from the native FITS files these things live in (and do some translations in the process).

The question I have is as follows:
The data is generally about 10 minutes long, with *on average* half a million events spread through it. For coarse resolution, there really would be no problems in creating numpy arrays to handle this. In fact, even for very fine resolution, each *value* could be well described by a double (which python does natively).

However, the *real* question is more about the *lengths* of arrays. At some point, I figure I may run into overflow on the array itself?? Like what would happen if I made a numpy array that had length over 2 billion? Would it start loving up the data within the arrays? Would it just refuse to make an array that big? Is the solution to somehow tell numpy to make a "long" array?

I don't think I *will* run into this, because ultimately the created array will be just a time series (so an array of length 10*60*(sub_second_res) ) with just how many of those half a million events are in each time bin. Still, it got me thinking about what would happen if I had a big enough array.

Emacs Headroom
Aug 2, 2003

I really doubt that numpy has a built-in restriction on array length. More likely you'll just run out of memory. 64-bit doubles in an array of length 2 billion is 16 gigs of RAM. Even if you're on a machine that can support that, doing any operations at all will probably involve doubling that.

I think what you'll want to do is break your time series down into manageable chunks, either in the frequency domain or time domain or both. You can make it 'coarser' to look at low-frequency things that have effects across lots of time, or you can break off short-time chunks at the sampling resolution to look at high-frequency stuff that will only affect the local time.

Really what you're talking about is sampling and time series analysis questions, not Python questions per se, as any language you use is going to be limited by the memory and processing capability of the machine.
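
For what it's worth, the per-bin counting you describe never needs an array anywhere near the raw sampling length; a sketch (the bin width and the synthetic event times are just assumed examples):
Python code:
import numpy as np

# bin ~500k event arrival times (seconds) into a fixed-resolution light curve
res = 0.01                                    # assumed 10 ms bins
duration = 10 * 60                            # 10-minute observation
event_times = np.sort(np.random.uniform(0, duration, 500000))  # stand-in data
edges = np.arange(0.0, duration + res, res)
counts, _ = np.histogram(event_times, bins=edges)
The result is only duration/res bins long (60,000 here), no matter how many events there are.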

Jo
Jan 24, 2005

:allears:
Soiled Meat

Cukel posted:

Funnily enough, due to the nature of the app I'm writing the Session is composed of little else than user entered data. BUT I'm not at any point sending Pickles through a network, so it shouldn't be a problem if I understand anything about this. All I do is pickle/unpickle Sessions between server memory and harddrive when the user sends a "Save"-command. User entered data shouldn't be run or eval'd at any point.

Haystack made a really good point with object databases. Check out MongoDB or CouchDB. Look at Mongo first. That should suit your existing code really nicely, should be more than adequately fast, will avoid the need to pickle and unpickle, and doesn't present the same security concerns. It's really the optimal solution.
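
The Mongo round trip is only a few lines; a sketch assuming the pymongo driver and a local mongod (the database name and fields are made up):
Python code:
from pymongo import MongoClient  # pymongo driver

db = MongoClient().game  # database name is made up

# store the session as a plain document instead of a pickled blob
db.sessions.replace_one(
    {"username": "hurfdurf"},
    {"username": "hurfdurf", "location": "mums_house", "inventory": ["emoticon"]},
    upsert=True,
)
doc = db.sessions.find_one({"username": "hurfdurf"})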

JetsGuy
Sep 17, 2003

science + hockey
=
LASER SKATES

Ridgely_Fan posted:

I really doubt that numpy has a built-in restriction on array length. More likely you'll just run out of memory. 64-bit doubles in an array of length 2 billion is 16 gigs of RAM. Even if you're on a machine that can support that, doing any operations at all will probably involve doubling that.

I think what you'll want to do is break your time series down into manageable chunks, either in the frequency domain or time domain or both. You can make it 'coarser' to look at low-frequency things that have effects across lots of time, or you can break off short-time chunks at the sampling resolution to look at high-frequency stuff that will only affect the local time.

Really what you're talking about is sampling and time series analysis questions, not Python questions per se, as any language you use is going to be limited by the memory and processing capability of the machine.

I suppose that's true, it's much more of a general programming issue. The reason why I was asking in a Python-specific sense is that I have run into this in IDL, particularly with index identification. For example, I wouldn't want to say light_cv[37000] and end up getting an error, or worse, light_cv[-35xxx].

Emacs Headroom
Aug 2, 2003

JetsGuy posted:

I suppose that's true, it's much more of a general programming issue. The reason why I was asking in a Python-specific sense is that I have run into this in IDL, particularly with index identification. For example, I wouldn't want to say light_cv[37000] and end up getting an error, or worse, light_cv[-35xxx].

Python ints are by default arbitrary precision, so this shouldn't happen. It might happen if you have say an array of indices which are int8 or int16, but probably not if they're int32 (max ~2 billion) or int64 (max ~9e18).
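
To make that concrete (the 37000 is just borrowed from your example):
Python code:
import numpy as np

print(2 ** 40 + 1)                        # plain Python ints never overflow

idx = np.array([37000]).astype(np.int16)  # fixed-width index array, too small
print(idx[0])                             # -28536, the IDL-style surprise
idx = np.array([37000]).astype(np.int64)
print(idx[0])                             # 37000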

Comrade Gritty
Sep 19, 2011

This Machine Kills Fascists

Cukel posted:

Funnily enough, due to the nature of the app I'm writing, the Session is composed of little other than user-entered data. BUT I'm not at any point sending Pickles through a network, so it shouldn't be a problem if I understand anything about this. All I do is pickle/unpickle Sessions between server memory and hard drive when the user sends a "Save" command. User-entered data shouldn't be run or eval'd at any point.

http://nadiana.com/python-pickle-insecure

Bazanga
Oct 10, 2006
chinchilla farmer
I've googled around on this one and found a few answers, and it looks like I've found the Best (TM) one but its ugly and I hate it. I'm converting a script from Perl to Python because the Perl script is an old, nasty, hackjob and I'm using every chance I can get to work in Python. It was going well then I ran into this in Perl:
code:
my $PWD = `pwd`;

Looks simple enough, I'm sure Python has a nice elegant way of issuing a system command then reading in stdout. I'll just google it.
code:
pwd_process = subprocess.Popen( 'pwd', shell=False, stdout=subprocess.PIPE)
working_directory = pwd_process.communicate()[0]
:stare:

Is this really the best way to issue a system command and get the response? This is on a Linux system, in case that wasn't obvious. The above answer works, but it is ugly as piss and I hate it.

geonetix
Mar 6, 2011


Try this instead:

code:
import os
os.getcwd()
Where Perl is usually used with shell commands, Python has its own implementations that work on most OSes and are, in my opinion, much cleaner.

Bazanga
Oct 10, 2006
chinchilla farmer
Nice. That works wonderfully for my purposes, but is the subprocess module generally the best thing to use when interacting with system commands? For instance, the point of the script is to parse configuration files and generate arguments for openssl commands. Basically, it's a menu-driven OpenSSL frontend that I'm going to use to generate self-signed certificates in large quantities for testing systems. Am I pretty much stuck with using the subprocess module for more custom system commands? Hope that makes sense.

geonetix
Mar 6, 2011


The subprocess module is basically built for these kinds of things, especially when you start injecting arguments into the command string. In all cases, though, try to see if you can get a Python built-in or library to do the heavy lifting for you.

It might feel a bit verbose coming from Perl, but I personally love the amount of control I get with subprocess.
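
Passing the arguments as a list keeps you out of shell-quoting trouble entirely; something along these lines (the openssl arguments are purely illustrative):
Python code:
import subprocess

# generate a throwaway self-signed cert; args go in as a list, no shell involved
subprocess.check_call([
    "openssl", "req", "-x509", "-newkey", "rsa:2048",
    "-keyout", "test.key", "-out", "test.crt",
    "-days", "365", "-nodes", "-subj", "/CN=test.example",
])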

Civil Twilight
Apr 2, 2011

If you just want to run a shell command and get back the text it outputs, like Perl backticks, then subprocess.check_output will do that for you, and raise CalledProcessError if the command's exit code is nonzero. The Popen interface gives you much more flexibility.
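
So the backticks one-liner becomes roughly:
Python code:
import subprocess

# Perl:  my $PWD = `pwd`;
pwd = subprocess.check_output(["pwd"]).decode().strip()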

TURTLE SLUT
Dec 12, 2005


I don't get it. :( Either you're misunderstanding what I'm doing or I'm not reading that carefully enough. That also just talks about receiving pickles from untrusted sources - which I am not doing. I only unpickle entries from the database, which I have created in my server code in the first place. The only unknown parts of those are some of the attributes, like the player name.

Is unpickling ANY object that includes an unknown string ANYWHERE in it really dangerous? Not just unpickling a file that might not contain the object you think?

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Let me ask. Why do you want to store a Pickle in a relational DB? It sounds like you want a key-value store. Consider using a key-value store, not a blob in a database.

memcached and redis are popular options.

EDIT: http://en.wikipedia.org/wiki/NoSQL#Key-value_store
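
The round trip with a key-value store is tiny; a sketch assuming the redis-py client and a local redis server (the key name is made up, and it's still pickle here purely for illustration):
Python code:
import pickle
import redis  # redis-py client

r = redis.Redis()  # local server, default port

# save the session under its username
r.set("session:hurfdurf", pickle.dumps(session))  # 'session' is the existing object

# ...and load it back on the next command
session = pickle.loads(r.get("session:hurfdurf"))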

Lysidas
Jul 26, 2002

John Diefenbaker is a madman who thinks he's John Diefenbaker.
Pillbug

Ridgely_Fan posted:

I really doubt that numpy has a built-in restriction on array length. More likely you'll just run out of memory. 64-bit doubles in an array of length 2 billion is 16 gigs of RAM. Even if you're on a machine that can support that, doing any operations at all will probably involve doubling that.

16GB RAM is nothing when your server is built for SCIENCE :science:

code:
Python 3.2.3 (default, May  3 2012, 15:51:42) 
Type "copyright", "credits" or "license" for more information.

IPython 0.12.1 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import numpy as np

In [2]: x = np.zeros(2 ** 33)
code:
$ ps u | grep python
me 20194 10.4 67.8 67314988 67145196 pts/2 Sl+ 19:27   0:32 /usr/bin/python3 /home/me/bin/ipython
Yep, that's one Python process using 64GiB RAM. Looks like NumPy arrays scale very well.

Scaevolus
Apr 16, 2007

Cukel posted:

I don't get it. :( Either you're misunderstanding what I'm doing or I'm not reading that carefully enough. That also just talks about receiving pickles from untrusted sources - which I am not doing. I only unpickle entries from the database, which I have created in my server code in the first place. The only unknown parts of those are some of the attributes, like the player name.

Is unpickling ANY object that includes an unknown string ANYWHERE in it really dangerous? Not just unpickling a file that might not contain the object you think?
The danger is deserializing arbitrary attacker-controlled pickles.

You're fine.

Cryolite
Oct 2, 2006
sodium aluminum fluoride
Does anyone else here use IPython from within PyDev? Are you able to use the '?' command (e.g. %timeit?) on things with really long docstrings without the console freaking out?

Whenever it needs to page out the results like with %timeit? above, saying ---Return to continue, q to quit---, pressing return, q, or any key at all doesn't do anything. It'll print the character in grey (or a newline if return), but it doesn't print out the rest of the result or return me to the interpreter. The indeterminate progress bar for PyDev Console Communication shows in the lower right, as if it's waiting on the interpreter and just not getting my return or q. Canceling that doesn't help - I have to terminate the current console, or worse, sometimes Eclipse crashes.

Does anybody else get this problem? I'm running IPython 0.12.1, PyDev 2.5, Eclipse 3.7.0, and Python 2.7.1. PyDev and IPython own but this is kind of annoying since ? is really useful.

edit: Realized it's all Python anyway, so I just went in and changed the source myself! I changed the default paging behavior so that it'll only page output if it's more than 10,000 lines long, which hopefully won't happen.

Cryolite fucked around with this message at 03:10 on Jun 21, 2012

Comrade Gritty
Sep 19, 2011

This Machine Kills Fascists

Cukel posted:

I don't get it. :( Either you're misunderstanding what I'm doing or I'm not reading that carefully enough. That also just talks about receiving pickles from untrusted sources - which I am not doing. I only unpickle entries from the database, which I have created in my server code in the first place. The only unknown parts of those are some of the attributes, like the player name.

Is unpickling ANY object that includes an unknown string ANYWHERE in it really dangerous? Not just unpickling a file that might not contain the object you think?

You're one SQL injection / stolen password / permissions bug / any other way of injecting an arbitrary value into your database away from an attacker controlling what gets unpickled. You're using Django, right? Django is relatively secure, but is your app code? What does your admin password look like? Exploits rarely come from a single vulnerability; they tend to come from several being used in conjunction.

For example, in this case a simple permissions bug or SQL injection can escalate into full-blown remote code execution.
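
The reason it escalates that far is that unpickling can execute code all by itself; the classic demonstration (harmless here, it just echoes) is:
Python code:
import os
import pickle

# unpickling calls whatever callable the payload names; no eval() required
class Evil(object):
    def __reduce__(self):
        return (os.system, ("echo owned",))

payload = pickle.dumps(Evil())
pickle.loads(payload)  # runs `echo owned`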

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
I understand that there are various statements in Python that implicitly carry out assignment. So for example a def statement is just a special kind of assignment statement used for assigning a function, and the header statement of a for loop carries out assignment. But I noticed while experimenting that you can index a dictionary or list in one but not the other. Is there a particular reason for this difference in behaviour?

code:
Python 3.2.3 (default, Apr 11 2012, 07:15:24) [MSC v.1500 32 bit (Intel)] on win
32
Type "help", "copyright", "credits" or "license" for more information.
>>> d = {}
>>> for d['x'] in [1, 2, 3]:
...     print('butt')
...
butt
butt
butt
>>> d['x']
3
>>> def d['f'] ():
  File "<stdin>", line 1
    def d['f'] ():
         ^
SyntaxError: invalid syntax
>>>

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
The def statement (and the class statement, too) takes an identifier in the grammar. The for statement takes an expression.

The for loop takes an expression because it needs to do:

Python code:
for k, v in D.iteritems():
    print k, v
And while you're at it, why not allow all kinds of expressions?

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe

Suspicious Dish posted:

The def statement (and the class statement, too) takes an identifier in the grammar. The for statement takes an expression.

The for loop takes an expression because it needs to do:

Python code:
for k, v in D.iteritems():
    print k, v

I thought that dictionary and list offsets were indeed identifiers, since you can assign to them, but I guess not.

Actually, something just clicked for me, because I had been wondering what exactly the thing on the left is in a statement like

[a, b] = 'xy'

I was reluctant to think of it as a "list" because I was thinking of lists as something that contains objects, not unresolved names. But I guess really there is no reason why it can't contain unresolved names up until the point it's evaluated, or in this case, purposely contain unresolved names so as to be the target of assignment. So that makes more sense to me now.

So, the notion of the assignment target in a for loop header being in fact an expression does explain it for me. Thanks.

quote:

And while you're at it, why not allow all kinds of expressions?

I don't know. Why not indeed. That's why I was asking - you react as though you think I'm trying to argue for one behaviour or the other.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
Maybe if you play around with printing the AST for a statement like [x, y] = 'xy', you'll get a better understanding of the internals of the grammar - that there's a Load context and a Store context. You can even do things like [d['x'][2].foo, y] = 'xy' if you really want to.
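
For example (output abridged; the exact dump varies between Python versions):
Python code:
import ast

print(ast.dump(ast.parse("[x, y] = 'xy'")))
# Assign(targets=[List(elts=[Name(id='x', ctx=Store()),
#                            Name(id='y', ctx=Store())], ctx=Store())], value=...)
# the assignment targets carry Store; ordinary reads carry Load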


Hammerite posted:

I don't know. Why not indeed. That's why I was asking - you react as though you think I'm trying to argue for one behaviour or other.

Sorry if it read that way to you. I just have a weird writing style where the "you" there refers to the designers and implementers of the feature.

TURTLE SLUT
Dec 12, 2005

Steampunk Hitler posted:

You're one SQL injection / stolen password / permissions bug / any other way of injecting an arbitrary value into your database away from an attacker controlling what gets unpickled. You're using Django, right? Django is relatively secure, but is your app code? What does your admin password look like? Exploits rarely come from a single vulnerability; they tend to come from several being used in conjunction.

For example, in this case a simple permissions bug or SQL injection can escalate into full-blown remote code execution.
Ah, well that's true. I guess it will make more sense to refactor my savegames with XML or something later on just to make sure.

About non-SQL databases: Yeah, I've used object-databases before, but this GameSession thing isn't the only thing I'm doing with the app, just a small part of it. A regular SQL type database with pickles in it makes more sense with all the other, more conventional Django model data. Unless there's a way to run the object database next to an SQL database in Django... and even that seems like a lot of effort.

Thanks for all the input everyone, this is hugely educational :)


duck monster
Dec 15, 2004

Suspicious Dish posted:

Let me ask. Why do you want to store a Pickle in a relational DB? It sounds like you want a key-value store. Consider using a key-value store, not a blob in a database.

memcached and redis are popular options.

EDIT: http://en.wikipedia.org/wiki/NoSQL#Key-value_store

You can actually do some pretty funky things with XML blobs in some of the more recent mysql/pgsql/etc. databases, like including XPath terms in search queries, letting you do low-rent schemaless stuff - although I still think NoSQL/schemaless stuff is the devil.
