|
Yes, that's EAV. If you need dynamic modification of your model at run time, you essentially have two options:

(1) modify your literal schema dynamically using the data definition language (CREATE/ALTER TABLE, basically) called from the application code
(2) use an EAV approach and basically implement a database on top of your database, as you are doing with your example

They are both lovely approaches. Which one is worse is going to depend on the needs of your application, the platform you are on, and probably your own biases.

Actually, the third option that is advocated by Oracle guru Tom Kyte is to create as many columns as a table will allow, and name them something like FIELD_001 through FIELD_999 or what have you. Then you just deal with your relationships that way, somehow. I'm a huge fan of his, but I have to admit this one is pretty ...

npe fucked around with this message at 18:59 on Apr 10, 2008
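For reference, a minimal EAV layout is a sketch like this (table and column names are hypothetical, not from the post above):

```sql
-- One row per entity, one row per attribute value: the "database on
-- top of your database" pattern. All names here are illustrative.
CREATE TABLE entities (
  entity_id   INTEGER PRIMARY KEY,
  entity_type VARCHAR(30) NOT NULL
);

CREATE TABLE attributes (
  entity_id  INTEGER NOT NULL REFERENCES entities(entity_id),
  attr_name  VARCHAR(30) NOT NULL,
  attr_value VARCHAR(4000),          -- everything becomes a string now
  PRIMARY KEY (entity_id, attr_name)
);

-- Reassembling one logical "record" means pivoting rows back into columns:
SELECT e.entity_id,
       MAX(CASE WHEN a.attr_name = 'name'  THEN a.attr_value END) AS name,
       MAX(CASE WHEN a.attr_name = 'color' THEN a.attr_value END) AS color
FROM entities e
JOIN attributes a ON a.entity_id = e.entity_id
GROUP BY e.entity_id;
```

That pivot query is the tax you pay on every read, which is most of why the approach gets called lovely.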
# ¿ Apr 10, 2008 18:54 |
|
|
|
Oracle is hint crazy. There are 124 different documented hints, and 57 undocumented.
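For the unfamiliar, hints are optimizer directives embedded in a specially formatted comment. A sketch using the documented FULL hint (table and columns are the classic demo schema, used illustratively):

```sql
-- Ask the optimizer for a full table scan on emp. A malformed hint
-- comment is silently ignored rather than raising an error.
SELECT /*+ FULL(e) */ e.empno, e.ename
FROM emp e
WHERE e.sal > 1000;
```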
|
# ¿ Apr 15, 2008 16:10 |
|
You want to just test if something exists? You can just use EXISTS inline.
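A sketch of the idiom, with hypothetical table and column names:

```sql
-- Check existence as part of the query itself, instead of a separate
-- count-then-act procedure. Table/column names are illustrative.
SELECT 1
FROM dual
WHERE EXISTS (SELECT NULL FROM orders WHERE order_id = :id);

-- Or inline, filtering another query by existence:
SELECT c.customer_id, c.name
FROM customers c
WHERE EXISTS (SELECT NULL
              FROM orders o
              WHERE o.customer_id = c.customer_id);
```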
|
# ¿ Apr 17, 2008 16:29 |
|
Well, ok, I was just trying to point out that you don't need it in a procedure, as it seems like it's going to be easier for other queries to simply check using EXISTS when they go to do their queries. But if you're refactoring and need a drop-in replacement for the procedure, use the count(*) method you had but drop the rownum=1; it's not doing anything. A relevant Tom Kyte answer on this topic that may be useful: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3069487275935

Edited to add: I didn't mean to sound so much like a jerk, but here in Oracle-land there is a frequent clash of idioms from other databases. A procedure to explicitly test if a value exists in a table and then do some action strikes me as a smell of something designed inappropriately for a PL/SQL application. Normally you would either use EXISTS() inline in whatever query you were doing, or, if you wanted to select the data out, just handle the exception if it doesn't exist when you're not expecting that. It's similar to using explicit cursors in PL/SQL: you can do it, but it's discouraged, and frequently a sign of some poor design decisions.

Some other things that drive me crazy, that I see from people who "grew up" in other databases or who just never learned the correct way somehow:

* explicit cursors when a simple join, inline view, or FOR loop would have sufficed
* wrapping up individual "simple" queries in functions/procedures, only to call them in a procedural way using a cursor in a loop
* WHEN OTHERS THEN [whatever];
* COMMIT or ROLLBACK in pl/sql
* application logic in triggers
* dynamic sql in procedures using EXECUTE IMMEDIATE

npe fucked around with this message at 17:43 on Apr 17, 2008
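To illustrate the first bullet, a sketch (hypothetical emp table) of the explicit-cursor style being complained about, next to the preferred form:

```sql
-- The cumbersome way: declare, open, fetch, and close by hand.
DECLARE
  CURSOR c IS SELECT empno, ename FROM emp;
  v_empno emp.empno%TYPE;
  v_ename emp.ename%TYPE;
BEGIN
  OPEN c;
  LOOP
    FETCH c INTO v_empno, v_ename;
    EXIT WHEN c%NOTFOUND;
    dbms_output.put_line(v_empno || ' ' || v_ename);
  END LOOP;
  CLOSE c;  -- easy to forget
END;
/

-- The implicit-cursor FOR loop: shorter, and nothing to forget to close.
BEGIN
  FOR r IN (SELECT empno, ename FROM emp) LOOP
    dbms_output.put_line(r.empno || ' ' || r.ename);
  END LOOP;
END;
/
```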
# ¿ Apr 17, 2008 17:22 |
|
Barrackas posted:
Cursors are not evil, they are just vastly overused by people who aren't comfortable with implicit cursors or complex queries. Implicit cursors perform better and result in much cleaner code, and it's pretty rare to actually NEED an explicit cursor.

EXECUTE IMMEDIATE is pretty heinous. That's not to say it's never appropriate, but on top of the substantial performance problems Xae covered, it also makes your code unsafe. When PL/SQL is compiled, dependencies on objects (tables, functions, procedures, etc.) are taken into account. If you refer to a table and column in a procedure, and that table is changed, it will invalidate the state of the procedure and force a recompile, which will fail if the column you're referring to has changed type or name. Same for calls to other procedures and functions: changing an argument in a procedure spec will cause anything that depends on it to be invalidated. All of this type safety and dependency checking goes away with dynamic SQL via EXECUTE IMMEDIATE. You're winging code on the fly and hoping it doesn't break, and it's incredibly more risky. Use it only when necessary, when nothing else will do.

Tom Kyte's mantra is handy when getting started doing PL/SQL:

o if you can do it in a single sql statement - do so.
o if you cannot, do it in as little PLSQL as you can.
o if you cannot, because plsql cannot do it, try some java (eg: sending an email with an attachment)
o if you cannot do it in java (too slow, or you have some existing 3gl code already) do it in C as an extproc.
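A sketch of the compile-time difference (hypothetical table t with columns id and val): the static version is dependency-checked, the dynamic one is not.

```sql
-- Static SQL: the compiler records a dependency on t. If t.val is
-- renamed or retyped, this procedure is invalidated and the error
-- surfaces at recompile time.
CREATE OR REPLACE PROCEDURE get_val(p_id IN NUMBER, p_out OUT VARCHAR2) AS
BEGIN
  SELECT val INTO p_out FROM t WHERE id = p_id;
END;
/

-- Dynamic SQL: compiles fine no matter what t looks like; a mismatch
-- only blows up at run time, on whichever call happens to hit it.
CREATE OR REPLACE PROCEDURE get_val_dyn(p_id IN NUMBER, p_out OUT VARCHAR2) AS
BEGIN
  EXECUTE IMMEDIATE 'SELECT val FROM t WHERE id = :1' INTO p_out USING p_id;
END;
/
```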
|
# ¿ Apr 18, 2008 15:29 |
|
Zombywuf posted:
You can't do it in SQL Server, unless you do EXEC('...'). There are problems with this approach, and generally it's better to rethink your design than attempt it. One problem is speed: every time you call your query, the bit in the EXEC will need to be re-compiled; the DB can't use a cached plan for this. Another is safety: SQL Server's static checking is pretty crap, but it's better than nothing. By using dynamic SQL like this, you get nothing.

It's funny, because while my earlier post about the problems with Oracle's EXECUTE IMMEDIATE was Oracle specific, the argument is basically identical for the MS SQL equivalent. Bad ideas seem to be consistent across platforms...
|
# ¿ Apr 18, 2008 17:36 |
|
Xae posted:
I can't say I have ever seen much of a difference between explicit and implicit cursors. I don't doubt that they are slightly faster, but I think any gains are pretty small when you get over a couple rows.

It's certainly not a massive difference, however it completes the trifecta: more cumbersome to write, more prone to errors (forgetting to close, for example), and slower. We do use ref cursors sometimes when we need to pass them to/from procedures, but otherwise implicit cursors are the way to go.

npe fucked around with this message at 01:36 on Apr 19, 2008
# ¿ Apr 19, 2008 01:23 |
|
noonches is right, these are usually called "xref" tables. Don't do the comma delimited thing.
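A sketch of the xref (junction) table approach, with hypothetical names:

```sql
-- Many-to-many between users and groups via an xref table, instead of
-- a comma-delimited list column. All names are illustrative.
CREATE TABLE user_group_xref (
  user_id  INTEGER NOT NULL REFERENCES users(user_id),
  group_id INTEGER NOT NULL REFERENCES groups(group_id),
  PRIMARY KEY (user_id, group_id)
);

-- All groups for a user is then a plain join:
SELECT g.group_name
FROM groups g
JOIN user_group_xref x ON x.group_id = g.group_id
WHERE x.user_id = :user_id;
```

With the comma-delimited version you'd be string-parsing in every query and the database could enforce none of this integrity.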
|
# ¿ Apr 21, 2008 11:11 |
|
Not sure what you mean by that, could you give an example of what you want to do?
|
# ¿ Apr 25, 2008 13:04 |
|
Do not ever, under any circumstances, commit or rollback in your procedure. Doing so ruins the whole transaction model of Oracle, which is based on the idea that an entire set of operations can be transparently undone at the client level. If individual procedures decide to commit or rollback on their own, it literally breaks the integrity of the client's transaction. Remember that when you commit or rollback, you are deciding to commit or rollback the entire transaction.
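For example, consider a sketch like this (procedures and tables are hypothetical):

```sql
-- A helper that "helpfully" commits. Names are illustrative.
CREATE OR REPLACE PROCEDURE log_transfer(p_msg IN VARCHAR2) AS
BEGIN
  INSERT INTO transfer_log(msg) VALUES (p_msg);
  COMMIT;  -- commits the CALLER's entire transaction, not just this insert
END;
/

-- Client-side flow for a funds transfer:
--   UPDATE accounts SET bal = bal - 100 WHERE id = 1;
--   log_transfer('debited account 1');  -- oops: the debit is now committed
--   UPDATE accounts SET bal = bal + 100 WHERE id = 2;  -- suppose this fails
--   ROLLBACK;  -- too late; the debit survives and the books don't balance
```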
In short, a procedure does not have enough information, ever, to decide if it's ok to either commit or rollback. That is up to the client program. The only reason I've ever put commit in pl/sql is when I'm writing a utility script for maintenance, where it's never intended to be used as a component in anything (such as doing a mass data migration as a one-off script). It's still a bad practice, but it's a one-off.

If you're doing a test case for something (which it appears you might be) then do what you want; it's not going to incur the wrath of the Oracle gods or anything. Just be advised that it doesn't belong in actual production code anywhere.

Edit: Personally, I think the keywords COMMIT and ROLLBACK should not even be allowed in pl/sql code. They just don't belong. If you really have the need (like with a one-off script, as is sometimes needed), you should have to run it through EXECUTE IMMEDIATE, just to drive home how gross it is to be doing it.

npe fucked around with this message at 14:44 on Apr 25, 2008
# ¿ Apr 25, 2008 14:39 |
|
It applies to anything that uses Oracle; it's always the job of the client program to manage the transaction. I can't speak for other platforms. If you're using MS SQL, I don't know if it's considered more acceptable or not (although it seems sort of suspect to me if it is, but I don't know).

I seem to remember reading somewhere that older versions of SQL Server had issues where certain types of things (like code in triggers) didn't necessarily operate as part of the actual transaction, so you couldn't count on a full rollback being transparent. It wouldn't surprise me if this has created a culture of people who are just used to managing the transaction all over the place even if it's no longer necessary. Victor probably knows more about that.

npe fucked around with this message at 15:06 on Apr 25, 2008
# ¿ Apr 25, 2008 15:04 |
|
Yes, you are talking about an autonomous transaction. The problem is that 95% of the time, autonomous transactions are still a very bad idea. They're less of a problem than outright committing or rolling back, but it's very possible to create situations where you've lost data integrity. The most accepted use of an autonomous transaction is for auditing.
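For example, an audit logger would be a sketch like this (names hypothetical):

```sql
-- The pragma gives this procedure its own transaction, so the audit
-- row is committed independently and survives even if the caller
-- later rolls back.
CREATE OR REPLACE PROCEDURE audit_log(p_msg IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO audit_trail(logged_at, msg) VALUES (SYSDATE, p_msg);
  COMMIT;  -- commits only this autonomous transaction, not the caller's
END;
/
```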
But again, you need to be very careful. If you find yourself writing business logic inside of autonomous transactions, you've probably done something wrong to get there, because you are still stealing control away from the calling program. If you think you have a situation where it's necessary, it might be worth posting some example code and discussing the more correct form it should probably take.
npe fucked around with this message at 15:42 on Apr 25, 2008 |
# ¿ Apr 25, 2008 15:21 |
|
Autonomous transactions are probably the closest thing here.

Barrackas posted:
I appreciate that it is probably not a great approach, but this is such a huge undertaking as it is, that moving transaction handling completely to the calling client is probably infeasible unfortunately.

If you change "probably not a great approach" to "a terrible thing that should never be done and will get you yelled at by any Oracle developer worth his pay", I'll agree. I mean, I understand sometimes people have to do things due to the reality of their job situation and everything, so I'm not unsympathetic. But on the other hand, this is sort of a glass bottle or old shoe kind of question.
|
# ¿ Apr 25, 2008 16:03 |
|
One thing to consider is maybe looking into using savepoints instead of autonomous transactions. They allow you to roll back small chunks of the transaction without breaking the overall transaction.
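A quick demo would be a sketch like this (hypothetical table t):

```sql
-- Roll back just the work done since the savepoint, leaving earlier
-- work in the still-open transaction intact.
BEGIN
  INSERT INTO t(x) VALUES (1);            -- keep this
  SAVEPOINT before_risky;
  INSERT INTO t(x) VALUES (2);            -- this will be undone
  ROLLBACK TO SAVEPOINT before_risky;
  INSERT INTO t(x) VALUES (3);            -- keep this too
END;
/
-- The caller still decides the fate of the whole transaction:
-- COMMIT here persists rows 1 and 3; ROLLBACK discards both.
```

Unlike an autonomous transaction, nothing is committed behind the caller's back; the client keeps full control.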
npe fucked around with this message at 02:27 on Apr 26, 2008 |
# ¿ Apr 26, 2008 02:23 |
|
That's pretty much it. One additional thing to note with Oracle is that unhandled exceptions that propagate out of the top level of pl/sql and are returned to the client are automatically rolled back to the start of the block being called. This is referred to as an implicit savepoint. An easier way to describe it is to imagine that your client automatically has every call wrapped in a savepoint-plus-rollback block.
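That implicit wrapper behaves as if it were a sketch like this (the savepoint name and procedure are made up):

```sql
-- Conceptually what the client-side call amounts to:
BEGIN
  SAVEPOINT before_call;   -- implicit; you never write this yourself
  my_procedure;            -- hypothetical top-level call
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK TO SAVEPOINT before_call;  -- undo everything the call did
    RAISE;                              -- then hand the error to the client
END;
/
```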
An interior call that raises an exception that is handled won't implicitly do this. See: http://www.psoug.org/reference/exception_handling.html
|
# ¿ Apr 28, 2008 12:19 |
|
I wasn't really clear; all I meant was that there is no implicit savepoint set before blocks inside of the "outermost" one.
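For example, a sketch (hypothetical table t) where an inner block's failure does not undo the work before it:

```sql
BEGIN
  INSERT INTO t(x) VALUES (1);    -- done in the outer block
  BEGIN
    INSERT INTO t(x) VALUES (2);  -- done in the inner block
    RAISE program_error;          -- inner block blows up...
  EXCEPTION
    WHEN OTHERS THEN NULL;        -- ...but is handled, so rows 1 and 2
  END;                            -- are still sitting in the transaction
  INSERT INTO t(x) VALUES (3);
END;
/
-- Only an exception escaping the OUTERMOST block back to the client
-- triggers the implicit rollback of the whole call.
```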
|
# ¿ Apr 28, 2008 18:09 |
|
What platform?
|
# ¿ May 1, 2008 02:58 |
|
MySQL has last_insert_id(); additionally, if you're using PHP, I believe there's a wrapper for it (mysql_insert_id(), I think) as well.
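Usage is a sketch like this (hypothetical table with an AUTO_INCREMENT key):

```sql
-- LAST_INSERT_ID() is per-connection, so it's safe under concurrency:
-- you get the id generated by YOUR last insert, not anyone else's.
INSERT INTO posts (title) VALUES ('hello');
SELECT LAST_INSERT_ID();  -- id of the row just inserted on this connection
```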
|
# ¿ May 1, 2008 03:26 |
|
Code Jockey posted:
Okay gentlemen, I need some clarification.

I can't speak for MS SQL, but on the Oracle side this would sort of work as he expects (although it's still a bad way to do it). The reason is that you have to explicitly commit, and if you don't, it's still the same transaction. If you set autocommit on your db driver, then it will automatically call commit on each separate call, so stacking them into one statement would effectively treat them as one transaction. A far better way to accomplish this same thing is to turn off autocommit, call each one in sequence, and then commit when you are ready.

Now, maybe MS SQL is different in how it handles transactions; I'm not sure if it's the same in this regard. Hopefully someone can clarify.

npe fucked around with this message at 18:40 on May 2, 2008
# ¿ May 2, 2008 18:27 |
|
Well, in your defense, it is pretty and you may as well crosspost to the Coding WTF's thread. It works only because you're dodging the autocommit by shoving it all on one line. That's just a stupid way to do it.
|
# ¿ May 2, 2008 19:02 |
|
Sartak posted:
I'm about to go back to in-memory data structures which are much slower and fiddly. Any advice would be greatly appreciated!

Are you using Oracle by any chance?
|
# ¿ May 2, 2008 20:59 |
|
Was worth a shot. This is the sort of thing that analytic functions (sometimes called windowing functions) are meant to do, but I don't know if postgres does much in that area.
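The flavor of thing analytics buy you, as a sketch in Oracle syntax (hypothetical emp table):

```sql
-- Rank employees by salary within each department, and carry the
-- department total on every row, in one pass with no self-join.
SELECT deptno, ename, sal,
       ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY sal DESC) AS rn,
       SUM(sal)     OVER (PARTITION BY deptno)                   AS dept_total
FROM emp;
```

The OVER clause is the giveaway: the aggregate is computed per window without collapsing the rows the way GROUP BY would.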
|
# ¿ May 2, 2008 21:05 |
|
TiMBuS posted:
So yeah, for a situation like this, would I just dump everything in one table, or would I make (potentially) thousands of tables? Or, maybe there is some kind of alternative data structure that I haven't thought of? You tell me.

The correct way is to put all of your posts in one table. Creating individual tables would just be the wrong way to do it. Filtering is simply joining or adding a WHERE clause, and is the normal way of doing things. Generally, creating tables on the fly is to be avoided in pretty much any database system.
|
# ¿ May 6, 2008 12:57 |
|
Your table is going to be terrible for performance. Can you add a column, something like "ack_of" that is a foreign key to the ID, referencing the sender's id? This would make querying much simpler and faster.
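A sketch of the suggestion (table and column names hypothetical):

```sql
-- A self-referencing foreign key: an ack row points directly at the
-- message it acknowledges.
ALTER TABLE messages ADD (ack_of NUMBER REFERENCES messages(id));

-- Unacknowledged messages then fall out of a simple anti-join:
SELECT m.id
FROM messages m
WHERE m.ack_of IS NULL
  AND NOT EXISTS (SELECT NULL FROM messages a WHERE a.ack_of = m.id);
```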
|
# ¿ May 21, 2008 19:07 |
|
Just make it an inline view...
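Along the lines of this sketch (hypothetical tables):

```sql
-- Treat the subquery as if it were a table (an "inline view") and
-- filter or join against its computed columns.
SELECT v.customer_id, v.order_count
FROM (
  SELECT customer_id, COUNT(*) AS order_count
  FROM orders
  GROUP BY customer_id
) v
WHERE v.order_count > 10;
```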
|
# ¿ Jun 5, 2008 17:24 |
|
No Safe Word posted:
As long as you kids are aware of the data quality/sync issues for when you use NOLOCK.

This is why I take issue with Oracle's recent addition of asynchronous commits. Sure, it's faster, and sometimes you legitimately want to trade the reliability for the performance, but I have this vision of consultants everywhere showing up to "improve performance", turning on global deferred commits, collecting a fat bonus for speeding up the system in short order, and then blowing out before any problems appear.
|
# ¿ Jun 5, 2008 19:05 |
|
Maybe MySQL doesn't allow inline views inside your literal views? Maybe you can create that subquery as another view and reference it that way... Seems silly though.
|
# ¿ Jun 10, 2008 13:00 |
|
Aredna posted:
Ahh, my understanding was that if you just grouped by one field, then the other 2 fields would return data for every row in the table. Is this specific to MySQL or did I just completely misunderstand GROUP BY?

This is something that MySQL handles wrong. In most database systems, if you try to access columns that are not explicitly included in your GROUP BY, it will complain.
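For example, in Oracle, a sketch like this fails outright (hypothetical table t with columns a and b):

```sql
SELECT a, b, COUNT(*)
FROM t
GROUP BY a;
-- ORA-00979: not a GROUP BY expression
```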
In MySQL, you can refer to other columns not included in the GROUP BY criteria plainly. Of course the question is, which row are you getting back, and I think the answer is "whichever", but I haven't read any actual MySQL documentation on this subject in years so it may have changed.
|
# ¿ Jun 13, 2008 21:44 |
|
|
|
Stephen posted:
I found that MySQL just returns the last row in whatever order it groups it by, usually the primary key. So in my case I had to sort it by the orderid, and then group it in order to return one row with the result I wanted.

The concern here is that it's undefined behavior, and it may appear to work in many circumstances but not in others. A good example from Oracle is where people order by ROWID, hoping to preserve the "natural" ordering of the table. I see this a lot, and it's bad because it usually looks like it's working, but it will occasionally not -- Oracle uses ROWID internally and makes no guarantee about table data being in any particular order. This is the same sort of situation. It would be important to know what the explicit behavior should be, but a quick check of the MySQL docs about GROUP BY doesn't show any mention of this behavior that I can find.
|
# ¿ Jun 13, 2008 21:57 |