|
yaoi prophet posted:Oracle is hint crazy. There are 124 different documented hints, and 57 undocumented. Oracle always amazes me at how impressive it can be, but also how stubborn it can be.
|
# ¿ Apr 16, 2008 02:40 |
|
Barrackas posted:Well, as you say, this smacks of something that wasn't designed with PL/SQL in mind because it's true. It's a massive MSSQL database (over a thousand stored procedures, hundreds of tables etc.) that is being ported to Oracle. Hard parsing is when Oracle creates an execution plan for the statement. For many SQL statements the act of parsing consumes more resources than executing the statement. Once Oracle has determined the execution plan for a statement it stores it for a while, so that if similar statements come through it can reuse that plan instead of generating a new one. Generally speaking you want the commit or rollback to be handled by a smarter language than PL/SQL. There may be some batch processes that you want PL/SQL to do the transaction handling for, but most of the time your Stored Procedures shouldn't be handling the transaction.
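To make the hard-vs-soft parse point concrete, here is a rough sketch; the table and column names are made up, not from the system being ported. code:
DECLARE
  l_name customers.cust_name%TYPE;  -- hypothetical table/column
  p_id   NUMBER := 42;
BEGIN
  -- Concatenating the literal makes every distinct id a brand-new statement,
  -- so Oracle hard parses (syntax check, optimization, fresh plan) each time.
  EXECUTE IMMEDIATE
    'SELECT cust_name FROM customers WHERE cust_id = ' || p_id
    INTO l_name;

  -- With a bind variable the statement text never changes, so after the first
  -- execution Oracle finds the cached plan in the shared pool and soft parses.
  EXECUTE IMMEDIATE
    'SELECT cust_name FROM customers WHERE cust_id = :id'
    INTO l_name USING p_id;
END;
/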
|
# ¿ Apr 18, 2008 12:42 |
|
yaoi prophet posted:Cursors are not evil, they are just vastly overused by people who aren't comfortable with implicit cursors or complex queries. Implicit cursors perform better and result in much cleaner code, and it's pretty rare to actually NEED an explicit cursor.
|
# ¿ Apr 18, 2008 23:27 |
|
hey wiz posted:Would this work? The SQL%FOUND is part of Oracle not just PL/SQL right? I don't think it would. If there is no row Oracle should raise a NO_DATA_FOUND exception. The most common way to check for data before you select it is to do a SELECT COUNT(*) FROM <<stuff>>, as COUNT(*) will always return data, even if it is 0.
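For illustration, both patterns look something like this; the table and key are made up. code:
DECLARE
  l_cnt  NUMBER;
  l_name employees.last_name%TYPE;
BEGIN
  -- Check first with COUNT(*): it always returns exactly one row, even when the count is 0
  SELECT COUNT(*) INTO l_cnt FROM employees WHERE employee_id = 999;

  IF l_cnt > 0 THEN
    SELECT last_name INTO l_name FROM employees WHERE employee_id = 999;
  END IF;

  -- Or just select and trap the exception a missing row raises
  BEGIN
    SELECT last_name INTO l_name FROM employees WHERE employee_id = 999;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN
      l_name := NULL;
  END;
END;
/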
|
# ¿ Apr 19, 2008 22:52 |
|
Anyone know a way I can make a procedure/function in Oracle that will allow comparisons of any two objects of the same type?
|
# ¿ Apr 25, 2008 04:20 |
|
Another Oracle question here. I'm still working on and off on a way to compare two generic objects in PL/SQL. gently caress, this is a goddamn runaround. I devised a way of doing it by creating a massive overload clusterfuck (240 declarations, generated by another PL/SQL script) and letting Oracle determine the type through the overloading. This method was vetoed by the client, with decent reasoning: there are issues popping up with the dynamic blind sorting and with the overloading. They have instead decided to go with a Java method. While Java is able to handle generic objects, PL/SQL is not. I have been searching for a way to force some sort of serialization, or to somehow get into Oracle's kernel functions where they allow generics in PL/SQL (see the sys.ANYDATA and sys.ANYTYPE types for examples), however it appears that Oracle blocks access to those. Even for the Java method they want, there still has to be a PL/SQL procedure "mapped" to the Java method. So, anyway: Does anyone know of a way to have a single function in PL/SQL that will accept any type of object? Or somehow get raw binary or a pointer or SOMETHING that will allow a generic type?
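For reference, sys.ANYDATA will at least wrap a value along with its type for the built-in types; this is only an illustration with made-up values, and it still doesn't get me the generic comparison function I need. code:
DECLARE
  a SYS.ANYDATA := SYS.ANYDATA.ConvertNumber(42);
  b SYS.ANYDATA := SYS.ANYDATA.ConvertVarchar2('forty-two');
BEGIN
  -- Each ANYDATA instance knows what type it is carrying
  DBMS_OUTPUT.PUT_LINE(a.GetTypeName());  -- SYS.NUMBER
  DBMS_OUTPUT.PUT_LINE(b.GetTypeName());  -- SYS.VARCHAR2
END;
/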
|
# ¿ May 13, 2008 02:11 |
|
uh zip zoom posted:in MS SQL, I'm trying to alter a nullable integer column so that it's not null with a default value of zero. Create a Before Insert Row trigger, then add the constraint.
|
# ¿ May 27, 2008 23:19 |
|
Alex007 posted:Triggers ? For Default values ? God dammit this is a stupid idea. What the gently caress do you think a default value is?
|
# ¿ May 28, 2008 00:58 |
|
Alex007 posted:Default values are better implemented using, well, default values, not triggers. It's a chunk of code that goes "If blah is null, then set it to x", which is oddly similar to what a trigger would be. And for almost any database the performance hit would be negligible. Regardless of the terminology, the code running is going to be shockingly similar.
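To show what I mean, here is a rough T-SQL sketch with made-up table and column names; one version uses a default constraint, the other a trigger doing the same null-to-zero work (SQL Server has no BEFORE triggers, so it fires AFTER and patches the row). code:
-- Option 1: declarative -- backfill, add a default, make the column NOT NULL
UPDATE Orders SET Qty = 0 WHERE Qty IS NULL;
ALTER TABLE Orders ADD CONSTRAINT DF_Orders_Qty DEFAULT (0) FOR Qty;
ALTER TABLE Orders ALTER COLUMN Qty INT NOT NULL;
GO

-- Option 2: leave the column nullable and let a trigger apply the "if null then 0" rule
CREATE TRIGGER TR_Orders_QtyDefault ON Orders
AFTER INSERT
AS
  UPDATE o
     SET Qty = 0
    FROM Orders o
    JOIN inserted i ON i.OrderID = o.OrderID
   WHERE o.Qty IS NULL;
GO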
|
# ¿ May 28, 2008 01:21 |
|
Start with the Postgres Docs Link. Even if you don't use Postgres their docs are some of the best out there for most core DB things like Indexes, constraints, etc.
|
# ¿ May 28, 2008 02:46 |
|
Anyone have experience with Teradata's Fastload or Multiload? I'm pretty familiar with Oracle's SQL*Loader, but I need to load a bit of data into a Teradata setup (1.5 million rows). Unfortunately the turnaround from the DBA/ETL team is measured in weeks, and I need something very fast. Both of Teradata's programs are pissing me off to no end; they are seemingly needlessly complex and the documentation is borderline non-existent for them. All I want to do is load a "|" delimited file with 6 columns into a database, it shouldn't be so loving difficult.
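For reference, the sort of FastLoad script I'm fighting with looks roughly like this, from memory; the database, table, file, and column names are all made up, and the target table has to be empty for FastLoad to touch it. code:
/* Rough sketch; everything named here is hypothetical */
SESSIONS 4;
ERRLIMIT 25;
LOGON tdpid/loaduser,password;

SET RECORD VARTEXT "|";

DEFINE col1 (VARCHAR(50)),
       col2 (VARCHAR(50)),
       col3 (VARCHAR(50)),
       col4 (VARCHAR(50)),
       col5 (VARCHAR(50)),
       col6 (VARCHAR(50))
FILE = extract.dat;

BEGIN LOADING sandbox_db.target_table
      ERRORFILES sandbox_db.target_err1, sandbox_db.target_err2;

INSERT INTO sandbox_db.target_table
VALUES (:col1, :col2, :col3, :col4, :col5, :col6);

END LOADING;
LOGOFF;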
|
# ¿ Aug 21, 2008 03:05 |
|
Triple Tech posted:I don't know much about database design, but I'm not convinced the standard stored procedure benefit still comes through with dynamically generated SQL. It's like eval'ed code in a programming language, it's not semantically relevant up until the point that it's run, because it can always change before then. This isn't something predictable like bind parameters. There is tremendous overhead in executing a SQL statement. Most of the work goes into figuring out what the statement is and what it wants, not actually getting the data (unless you are pulling tons of rows). A DBMS will pre-parse and pre-plan the queries in a stored procedure. This means that when they execute, half the work is already done. Think about all the rules for SQL: when you feed the database dynamic statements it has to check each one to make sure it is valid, then plan out the joins and filters, figure out which indexes to use and which join method, and only after all of that is done can it actually read the data. My first job was rewriting Pro*C into PL/SQL. Simply sitting on the database and not having everything fed through as dynamic statements resulted in orders of magnitude better performance. It took processes from days to hours. In addition, many stored procedure languages have custom keywords and operations that are designed to be stupid fast, like Oracle's "BULK COLLECT". Xae fucked around with this message at 16:02 on Aug 23, 2008 |
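As an illustration of the BULK COLLECT style, here is a small sketch with made-up table and column names. code:
DECLARE
  TYPE t_id_tab IS TABLE OF orders.order_id%TYPE;
  l_ids t_id_tab;
BEGIN
  -- One round trip pulls every matching id into a PL/SQL collection
  SELECT order_id BULK COLLECT INTO l_ids
    FROM orders
   WHERE status = 'OPEN';

  -- FORALL sends the whole batch of updates in one shot instead of row by row
  FORALL i IN 1 .. l_ids.COUNT
    UPDATE orders
       SET status = 'QUEUED'
     WHERE order_id = l_ids(i);
END;
/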
# ¿ Aug 23, 2008 15:58 |
|
Have you thought of using a Surrogate key? One that has no "real meaning" but is just a unique number to link the tables?
|
# ¿ Aug 29, 2008 04:13 |
|
No Safe Word posted:The short answer is: you shouldn't care what the value of an auto_increment field is. You are completely correct. Though I should add that you shouldn't care that there are gaps in the sequence either. If it really bothers you switch to a logical delete.
|
# ¿ Aug 29, 2008 21:59 |
|
I need a way to export huge (20+ GB) data sets from Oracle as fast as possible. However there are some problems:
1) It needs to be by query, not by table.
2) It must be to my local machine. The network admins have refused to give me a couple dozen gigs on the network drives to dump these.
3) Variable formatting. I have to pull it in the format the requester wants.
4) Preferably some way to break on file size or records retrieved. I need to be able to fit these on a DVD or CD.
5) I can't actually run code on the system, so no PL/SQL.
This is probably impossible, but it is the situation I find myself in. I'll charitably describe the situation as a "clusterfuck", but it is the assignment I have. Right now I just run queries through SQL*Plus and spool them, but this is stupidly slow and I generally dislike spending 20-30 hours on a weekend babysitting my laptop at work. I have looked at SQL Developer, but it only seems to support export by table, not by query. Any ideas?
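For the curious, the spool scripts are basically this shape; the settings, columns, and file name here are illustrative, not the real extract. code:
SET TERMOUT OFF
SET ECHO OFF
SET FEEDBACK OFF
SET HEADING OFF
SET PAGESIZE 0
SET LINESIZE 2000
SET TRIMSPOOL ON
SET ARRAYSIZE 5000

SPOOL extract_001.txt

SELECT col_a || '|' || col_b || '|' || col_c
  FROM some_big_table
 WHERE load_dt >= DATE '2008-01-01';

SPOOL OFF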
|
# ¿ Oct 6, 2008 16:57 |
|
Jethro posted:It seems to me that if you are retrieving specific queries with specific formatting, you aren't "dumping data" you are generating reports. So maybe look into some sort of reporting tool? Most of the formatting is simply switching between column delimiters. Reports are generally human readable; these I consider "dumping data" because they are thousand-character-long '|' or ',' delimited files. The files are to be loaded into external (to the company) databases, not to be read by humans.
|
# ¿ Oct 6, 2008 18:09 |
|
MoNsTeR posted:http://www.quest.com/toad-for-mysql/ PL/SQL Developer > TOAD (Oracle only). It only costs a fifth as much too! It is nice to see some decent tools for MySQL, hopefully companies start to follow suit and offer them for PostgreSQL as well.
|
# ¿ Nov 30, 2008 18:37 |
|
tayl0r posted:That is a good point, but you could easily use write stored procedures for everything the application developers want to do to the database (inserts, updates, deletes), and then only give them access to select + run the stored procedures. Logical deletes make a lot of sense, particularly if you are dealing with anything involving money. You really don't want to ever lose a record involving cash. It has less to do with worrying that you will screw up and more to do with keeping a history.
|
# ¿ Dec 10, 2008 17:13 |
|
MoNsTeR posted:I could not work without TOAD, period. It is worth 10 times whatever ridiculous price they charge for it. Unless you are a DBA, TOAD is a waste. PL/SQL Developer is far superior for development work, and cheaper to boot. Xae fucked around with this message at 20:12 on Dec 20, 2008 |
# ¿ Dec 20, 2008 20:06 |
|
Markoff Chaney posted:I'm going to go out on a limb here and ask what is superior about PL/SQL Developer specifically other than the price tag. I've used TOAD my whole career and love it, other than the random crashes (which are pretty infrequent in TOAD 9). I'm almost pure development, other than having some DBA privs to do things like kill sessions and add datafiles when a tablespace fills up on a development box. What can PL/SQL developer do for me that TOAD cannot? Not crash, multi-threading so one query won't lock the whole thing, not crash, a superior interface, a vastly better debugger, built-in testing, and it doesn't crash. The interface as a whole is miles better. It is the only development tool I have ever purchased out of my own pocket. I ended up having a client buy an unlimited license to support the people who wanted to ditch TOAD after seeing me use PL/SQL Developer. The unlimited license was less than the cost of 6 TOAD licenses. Xae fucked around with this message at 05:04 on Dec 23, 2008 |
# ¿ Dec 23, 2008 05:00 |
|
MoNsTeR posted:The earliest TOAD I've used is 7.5 and that's multi-threaded, so I'm not sure what ancient version you've tried. Since upgrading to 9.5 I've had maybe 10 crashes total, that's nearly a solid year of using it 8-10 hours a day.
1) Yes, you can.
2) I think you can control-click a table name, or any object name, to get the description.
3) Because you don't always need to see the body. It comes in handy when you don't want to grab a 20k SLOC package just to see the note in the spec.
4) You can alter the explain plan window quite a bit.
You can set up custom object filters that let you peek at just the objects you want. It is also fairly easy to develop your own plug-ins to add any TOAD-like features you just can't do without. Almost the entire program can be customized.
|
# ¿ Dec 25, 2008 00:10 |
|
Victor posted:My shop has run into problems with tons of business logic being embedded in triggers, so it is perhaps an esoteric issue. The danger of triggers is that people forget about them, or the business case changes and you have to delve through layer after layer of triggers to change the code. If you can count on good documentation they are perfectly fine to use, but the reality is that the documentation will inevitably suck. Cascading triggers (triggers inserting/updating other tables, whose triggers update other tables, etc.) have been responsible for some of my biggest headaches. I spent 4 weeks mapping out triggers for a small claims system. Most people had forgotten which triggers did what, and even that some existed at all. The system ended up doing an asinine amount of duplicated work. I dislike the whole "automagic" system that triggers create. People should know what the system is doing without diving through a couple dozen triggers. Xae fucked around with this message at 01:47 on Jan 5, 2009 |
# ¿ Jan 5, 2009 01:45 |
|
Victor posted:Curious. I'm not aware of any such issues with our triggers, but I have not personally worked with the more complicated scenarios. What I will say is that it is nice to be able to write normal insert/update statements to do things instead of executing sprocs that necessarily have to operate one-at-a-time, at least until we get table-valued parameters in SQL2008. I don't know what the TVP support situation is for other databases. Having to operate through a web service seems even more gross. Kyte from Oracle is able to explain my aversion to triggers far better than I can. I ran into a similar situation where I used them to maintain denormalized data, similar to Kenneth Downs' first example. Triggers have their place, such as setting watermark fields or doing simple checks, but any 'business logic' (I loving hate that phrase, but it works) should be explicit. Aside from simple architecture preference, there can be some wonky things that happen with triggers and transactions. I can't remember the code for the ORA error, but my previous employer got them by the bucketful.
|
# ¿ Jan 6, 2009 00:17 |
|
The issue is less about when the original team is working on the project and more about 5-10 years later. Databases tend to long outlive the applications built on them. I do a bit of digging in old databases, and time after time some 'clever' trigger solution is the cause of problems later on. Oracle's mutating table error causes even more headaches. The intent is to tell people they are bumping into ACID problems; the reality is that they come up with convoluted schemes to circumvent it.
|
# ¿ Jan 6, 2009 06:25 |
|
Anyone have ideas on how to pull out data in a denormalized fashion? I have two tables related as many-to-one. I want it to come out as "Parent.Column1 | Parent.Column2 | Child.Column1-A,Child.Column1-B...Child.Column1-N | Parent.Column3"
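To illustrate the shape I'm after, here it is written with MySQL's GROUP_CONCAT; the table and column names are made up, and this is only to show the target output, not the system I'm actually on. code:
SELECT p.col1,
       p.col2,
       GROUP_CONCAT(c.col1 ORDER BY c.col1 SEPARATOR ',') AS child_col1_list,
       p.col3
  FROM parent p
  JOIN child  c ON c.parent_id = p.parent_id
 GROUP BY p.col1, p.col2, p.col3;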
|
# ¿ Jan 16, 2009 00:19 |
|
Victor posted:Some RDBMSes allow you to do this easily. For example, SQL2005 has the pivot operator. Thanks for the tip, I'll have to check out the docs and see if Teradata supports it. Oh god, another night digging through Teradata docs.
|
# ¿ Jan 16, 2009 04:46 |
|
Victor posted:That would probably actually be a good way to explain the pivot operator... The problem is that there are '...N' possible values. We kicked some ideas around, but we couldn't figure out a way to flatten a child table into one column in the script. We'll have to mess with the ETL side of the system, or just say 'gently caress it'.
|
# ¿ Jan 17, 2009 00:16 |
|
Victor posted:Yeah, so you code-gen the TSQL... Yea, if I could use a procedural language it would be easy to do with a stored Proc. But the Support Team has a pretty steep charge to put in a stored procedure, and no one wants to pay it. I ended up just telling them it was impossible unless they wanted to pay. The joys of internal billing.
|
# ¿ Jan 19, 2009 00:26 |
|
Victor posted:sp_executesql That is sort of nifty, I'll have to keep that in mind for the SQL Server systems, I don't see a Teradata equivalent though.
|
# ¿ Jan 19, 2009 03:38 |
|
camels posted:On the off chance that this might be feasible - if the result set is not too large, would it be possible to do the pivoting on the client side, with a reporting tool? The result set is in the tens of millions of rows and dumps to flat files. It is definitely possible to do, and in normal circumstances that is probably how I would do it. But right now budgets are getting slashed, so things like this that are nice to have but expensive aren't being funded.
|
# ¿ Jan 19, 2009 06:58 |
|
It wasn't until I used Teradata that I realized how badly all other explain plans sucked. code:
quote:We do an all-AMPs RETRIEVE step from PROD.PARTY_CUSTOMER Retrieve 180,000,000 rows, hash and sort them in 90 seconds? Compare with Oracle's explain plan (different query): code:
Xae fucked around with this message at 17:58 on Jan 23, 2009 |
# ¿ Jan 23, 2009 17:53 |
|
Triple Tech - What is the projected rate of change? If only a small number of entries change you can just set up the table with Record Begin/End Dates and a REC_CURRENT_FLG.
|
# ¿ Feb 6, 2009 20:15 |
|
Triple Tech posted:What is the flag in this model used for? It's not enough to just select the one with the max date? I'm sure the flag helps trivialize this process. It just seems redundant, data-wise. That or make a trigger-powered table of latest entries and just a historical dump table of previous incarnations. Some people hate using BETWEEN and think it kills performance; I add it out of habit now so I don't have to argue with people. You can keep a complete history and still have an easy select by using a view. code:
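Roughly like this, with made-up table and column names. code:
-- History table keeps every version of the row
CREATE TABLE customer_hist (
    customer_id     INTEGER      NOT NULL,
    status          VARCHAR(20)  NOT NULL,
    rec_begin_dt    DATE         NOT NULL,
    rec_end_dt      DATE         NOT NULL,
    rec_current_flg CHAR(1)      NOT NULL
);

-- The view hides the history; everyone else just selects from "customer"
CREATE VIEW customer AS
SELECT customer_id, status, rec_begin_dt, rec_end_dt
  FROM customer_hist
 WHERE rec_current_flg = 'Y'
   AND CURRENT_DATE BETWEEN rec_begin_dt AND rec_end_dt;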
|
# ¿ Feb 7, 2009 04:56 |
|
Hammerite posted:Suppose that I have a table called User and a table called UserLogins. The table UserLogins has the columns UserID, LoginTime (together making the table's Primary Key), and EnteredNumber. EnteredNumber is an integer between 0 and 9 that the user is required to enter when he logs in. code:
|
# ¿ Feb 7, 2009 17:46 |
|
Hammerite posted:Thanks for the suggestion. I ought to have mentioned that I am constrained to using MySQL. Looking on the internet, I see that ROWNUM is a feature that other RDBMSs have but that MySQL does not have. Is there a way of doing it that will work in MySQL? You will need to use a work table to store the results then. MySQL supports the LIMIT clause to limit the number of rows. Pseudo Code code:
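Something along these lines; I'm guessing at your exact columns and the row cap, so treat the names as placeholders. code:
-- Stage the rows you care about into a work table, using LIMIT to cap them
CREATE TABLE work_logins AS
    SELECT UserID, LoginTime, EnteredNumber
      FROM UserLogins
     ORDER BY LoginTime DESC
     LIMIT 100;

-- Then query or join the work table as many times as you need
SELECT u.UserID, w.LoginTime, w.EnteredNumber
  FROM work_logins w
  JOIN User u ON u.UserID = w.UserID;

DROP TABLE work_logins;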
|
# ¿ Feb 8, 2009 17:46 |
|
Sergeant Hobo posted:So index the important stuff then? Like primary/alternate keys, fields you know are going to involve queries, etc.? PKs almost always create an index automatically. I normally throw indexes on the columns that show up in WHERE and GROUP BY clauses the most.
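For example, something like this (hypothetical table and columns). code:
-- Composite index matching the most common filter and grouping columns
CREATE INDEX idx_orders_status_dt ON orders (status, order_date);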
|
# ¿ Feb 24, 2009 04:11 |
|
If you want to mess around a bit you can try using RANK. It is handy to know, and this is a good excuse to learn it.
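A quick sketch of the kind of thing RANK lets you do; the table and columns are made up since I don't know your exact schema. code:
-- Newest status row per account: rank rows within each account by date, keep the #1s
SELECT *
  FROM (SELECT s.account_id,
               s.status,
               s.status_dt,
               RANK() OVER (PARTITION BY s.account_id ORDER BY s.status_dt DESC) AS rnk
          FROM account_status s) ranked
 WHERE rnk = 1;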
|
# ¿ Feb 27, 2009 05:45 |
|
In PL/SQL, how do I use a collection in an "IN" clause? I was certain I had done this before, but I can't remember or Google the syntax.
|
# ¿ Jul 8, 2009 00:03 |
|
MoNsTeR posted:where something in (select * from table(t_foo)) I plan on doing a BULK COLLECT though. I'm rewriting it to only use one select instead of the N it currently uses.
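Putting that trick together with the BULK COLLECT, the shape is roughly this; names are made up, and note the collection type has to be a schema-level SQL type for TABLE() to work. code:
CREATE TYPE t_id_tab AS TABLE OF NUMBER;
/

DECLARE
  l_ids t_id_tab;
BEGIN
  -- Grab all the ids in one select instead of the N selects it does now
  SELECT order_id BULK COLLECT INTO l_ids
    FROM orders
   WHERE status = 'OPEN';

  -- Use the collection in the IN clause via TABLE()
  UPDATE order_lines
     SET processed_flg = 'Y'
   WHERE order_id IN (SELECT column_value FROM TABLE(l_ids));
END;
/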
|
# ¿ Jul 10, 2009 13:16 |
|
dwayne_dibbley posted:I have a table that stores the history of status changes for accounts but have some bad rows in there, here is a simplified example: code:
|
# ¿ Sep 6, 2009 05:02 |